Division-M Community

obsidience

Members · Content Count: 10

  1. The only issue I'm having with the LZ at the moment is lingering *.TEMP.$DRIVEBENDER files left behind after the LZ is cleared. My LZ is a small 120GB SSD, so any leftover files quickly become a problem (see the cleanup sketch after this list).
  2. When killing the executables I also accidentally killed explorer.exe and then reopened it via File > Run > "explorer.exe". After doing this I ran into an issue that I've seen before and that may be related to McBluna's issue: explorer.exe (I think) goes into a crash/restart loop where the Start menu is completely unresponsive. After about 5 minutes I was able to get into Explorer, copy the files over and perform a reboot. I'm running Win 10 1709 and have only seen this issue maybe three times in the past year or so with various versions of Drive Bender, but I'm not entirely sure whether it's a Drive Bender issue or some other hardware issue with my setup. After the reboot everything appears to be back to normal; I'll report any issues if I encounter them. Thanks for the patch!
  3. I have 3 arrays, one primary and two backups that I usually keep in "offline" mode with the enclosures powered off to save power. I noticed yesterday that balancing was not working correctly and my LZ SSD was nearly full; looking at the real-time monitor I could see that the jobs were not running. When I turned the two backups on and set them back to "online" mode, the jobs ran, the LZ was cleared and the pool balanced correctly. So it seems there's a problem where, if some of the pools are offline, the tasks don't run correctly for the online pools. Switching between online and offline also appears to be problematic in general; it would be nice if there were functionality similar to the USB drive disconnect in Windows, where you can easily disconnect a pool and then reconnect it when necessary. I believe I've seen many errors when I switch pools offline, which I wouldn't expect to see. For the most part Drive Bender is working well for me. Keep up the good work.
  4. Hi w3wilkes, I switched balancing over to "most space" and copied some files over to the pool, and the files are now being moved to the empty drive correctly. The drive has the entire folder structure but only a few files so far. It appears that my problem is only with scheduled balancing when the configuration is set to "Even".
  5. I've been using 2.5.0.0a for a few weeks now with an array of 10x8TB drives. I originally formatted each drive with the default allocation unit size and wanted to increase each one to 64KB, so I've been removing one drive at a time, reformatting it and then adding it back into the pool (see the verification sketch after this list). Everything went well except that the last drive is now empty while the rest are partially full, and file balancing is not moving files to the last drive that was added. I just upgraded to this beta and file balancing is still not working. You can see in the attached screenshot that the last drive has 100% free space, yet from the log message Drive Bender thinks that all drives have been balanced. The drives are brand new; SMART messages aren't all getting through my Adaptec 51245 in JBOD mode, and when removing each drive I did have to click through a SMART warning even though I know the drives are fine.
  6. See attached.
  7. It looks like the parity data grows as needed, so 1 parity disk would probably be fine until you start filling up all the disks. I'm storing content files on the parity drives because there will be spare space there: two parities account for enough room for up to 14 drives and I only have 10 managed by SnapRAID. Is it normal? Likely not, but SnapRAID deals in just files so it should be fine. I'm currently planning to replace the parity drive letters with mounts in the same location as the other mounts, since I don't need to see them in my drive letter list, to keep things clean. I don't know yet. I'm nearly done configuring this replacement box for my home server, but one of my last steps will be to do some benchmarking as well as plug in a Kill A Watt meter to gauge electricity usage in comparison with my old server, which was an AMD Richmond box with a HighPoint RocketRAID (hardware) RAID5 card connected to an 8-bay enclosure. I've noticed that reads and writes over gigabit to my old server are both ~99MB/s, while to the new server they are around ~130MB/s (SSD cache disabled). Possibly the network card is better; I'm unsure until I can do local benchmarking on both. I would suspect that writes to the SSD cache (Samsung 840) will be at the speed of that drive, which is much faster. One issue I ran into is that I ran out of disk space on the SSD while doing my initial transfer from the old to the new server, so I had to disable the SSD cache until that was complete. Obs
  8. Hi KaySee, I'm new with it as well; I just started setting up the SnapRAID configuration last week. Based on the recommendations in the documentation I ended up with this config:

         parity F:\SnapRAID\parity\snapraid.parity
         2-parity G:\SnapRAID\parity\snapraid.2-parity
         content C:\SnapRAID\content\snapraid.content
         content D:\SnapRAID\content\snapraid.content
         content E:\SnapRAID\content\snapraid.content
         content F:\SnapRAID\content\snapraid.content
         content G:\SnapRAID\content\snapraid.content
         data d1 C:\SnapRAID\mounts\slot1
         data d2 C:\SnapRAID\mounts\slot2
         data d3 C:\SnapRAID\mounts\slot3
         data d4 C:\SnapRAID\mounts\slot4
         data d5 C:\SnapRAID\mounts\slot5
         data d6 C:\SnapRAID\mounts\slot6
         data d7 C:\SnapRAID\mounts\slot7
         data d8 C:\SnapRAID\mounts\slot8
         data d9 C:\SnapRAID\mounts\slot9
         data d10 C:\SnapRAID\mounts\slot10
         exclude \$RECYCLE.BIN
         exclude \System Volume Information

     I have 10 pooled data drives (+1 additional for SSD cache). The mount points were created inside the SnapRAID folder since I don't care to see them, and I named them by slot number so I'd know where each drive is physically located. Drives D: through G: are non-pooled drives; F:\ and G:\ are dedicated parity drives, the same model as the 10 data drives. So Drive Bender uses the pooled drive letter while SnapRAID uses the mount points for each drive in the SnapRAID\mounts folder; you can use mount points (configured via the Disk Management tool) rather than drive letters for the data drives. The content files, I believe, are just listings of all files with their checksums (not the parity), so think of them as the directory listing of all the files on your drives. They can be stored anywhere, and the more copies the merrier (IMHO); I believe SnapRAID is fine with a content file being on a data drive and likely just skips it. I believe with 5 drives you will need 2 parity drives (assuming they are all the same size), so check the FAQs. That's the only feedback I can give. I've been running syncs and scrubs the last few days (see the sketch after this list) and noticed data errors on my last scrub; I believe it was because I hadn't enabled "SnapRAID Mode" in Drive Bender, so I just did that and am now doing a full sync. Good luck, let me know how it goes! Obs
  9. I recently purchased it and am using it on a system with 16 drives. It's working out well so far, but here's one suggestion: I envision a dropdown at the top with the first value being "All Drives", followed by listings of the created pools. When the user selects "All Drives" they would see essentially what's on the main screen today; if the user selects one of the pools in the dropdown, they would only see the drives, configuration and statistics for that pool. If we wanted to get really cool, the listing of drives could be arranged so that the user can choose the number of rows/columns and position the drives to match how they're physically laid out in the box. I realize the history of the product and the limited time you all have; I'm a developer as well, so I know how it goes. I've read that some users say they've switched away because the UI was hard to understand, aka busy; perhaps this would help. Just a suggestion, keep up the good work! PS: I noticed that authentication to this forum isn't using SSL, which means passwords are going out unencrypted.
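
A note on post 1 above: the sketch below is a minimal Python script for spotting lingering *.TEMP.$DRIVEBENDER files on the landing zone before they fill a small SSD. The "L:/" root is only an assumption for illustration; point LZ_ROOT at your own LZ volume. It only reports the files, it does not delete anything.

    # Sketch: report leftover Drive Bender temp files on the landing zone.
    # LZ_ROOT is a hypothetical path -- adjust it to your own LZ drive.
    from pathlib import Path

    LZ_ROOT = Path("L:/")

    def find_leftover_temp_files(root: Path):
        """Return all *.TEMP.$DRIVEBENDER files left behind after the LZ is cleared."""
        return sorted(root.rglob("*.TEMP.$DRIVEBENDER"))

    if __name__ == "__main__":
        leftovers = find_leftover_temp_files(LZ_ROOT)
        total_bytes = sum(f.stat().st_size for f in leftovers)
        for f in leftovers:
            print(f)
        print(f"{len(leftovers)} leftover file(s), {total_bytes / 1024**2:.1f} MiB reclaimable")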
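
For the reformatting described in post 5: one way to confirm each re-added drive really got the 64KB allocation unit is to read the cluster size back from Windows. This is only a sketch using ctypes and the GetDiskFreeSpaceW API; the drive roots listed are placeholders for your own pool members or mount folders.

    # Sketch: verify the NTFS allocation unit (cluster) size of each volume.
    # MOUNTS contains hypothetical roots -- replace with your own drives/mounts.
    import ctypes

    MOUNTS = ["D:\\", "E:\\", "F:\\"]

    def cluster_size(root: str) -> int:
        """Return the allocation unit size in bytes for the volume at `root`."""
        sectors_per_cluster = ctypes.c_ulong()
        bytes_per_sector = ctypes.c_ulong()
        free_clusters = ctypes.c_ulong()
        total_clusters = ctypes.c_ulong()
        ok = ctypes.windll.kernel32.GetDiskFreeSpaceW(
            root,
            ctypes.byref(sectors_per_cluster),
            ctypes.byref(bytes_per_sector),
            ctypes.byref(free_clusters),
            ctypes.byref(total_clusters),
        )
        if not ok:
            raise ctypes.WinError()
        return sectors_per_cluster.value * bytes_per_sector.value

    if __name__ == "__main__":
        for root in MOUNTS:
            kb = cluster_size(root) // 1024
            print(f"{root}  {kb} KB  ({'OK' if kb == 64 else 'not 64 KB'})")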
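
And for the SnapRAID config in post 8: the sync-then-scrub cycle mentioned there can be driven from a small wrapper. This is only a sketch assuming snapraid.exe is on PATH and picks up the config shown above; "sync" and "scrub" are standard SnapRAID commands, the rest is illustrative.

    # Sketch: run SnapRAID sync followed by scrub, stopping on any failure.
    # SNAPRAID is a hypothetical location -- point it at your snapraid.exe.
    import subprocess
    import sys

    SNAPRAID = "snapraid"

    def run(*args: str) -> None:
        """Run one SnapRAID command and abort the script if it fails."""
        result = subprocess.run([SNAPRAID, *args])
        if result.returncode != 0:
            sys.exit(f"snapraid {' '.join(args)} failed with code {result.returncode}")

    if __name__ == "__main__":
        run("sync")   # update parity and content files to match the data drives
        run("scrub")  # verify part of the array against the stored checksums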