Division-M Community



  1. Right now you have 7 drives, most of them more than three-quarters full, and a few with only a few percent free. Pooled, you'd have almost 2.5TB free and a balanced percentage of each drive empty. That's why I went with Drive Bender when I needed a replacement for WHS v1 drive pooling.
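Just to illustrate the arithmetic, here's a quick sketch with made-up per-drive figures (the numbers below are hypothetical, not your actual layout):

```python
# Hypothetical free space (GB) on each of the 7 drives. Individually,
# several drives are nearly full, but the pool presents one combined
# figure -- which is the whole point of pooling them.
free_gb = [150, 200, 420, 380, 510, 640, 200]
pool_free_gb = sum(free_gb)
print(pool_free_gb)  # 2500 GB, i.e. almost 2.5TB of pooled free space
```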
  2. Chkdsk all your drives, then reboot. I'd bet on an issue there first. If still no joy, try an elevated command line and possibly the folder/file's short name (dir *.* /x).
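For reference, the 8.3 short name that `dir /x` shows can also be fetched programmatically. This is just a sketch using the Win32 `GetShortPathNameW` call; on anything other than Windows (or if the call fails) it falls back to returning the path unchanged:

```python
import ctypes
import sys

def short_name(path: str) -> str:
    """Return the 8.3 short name that `dir /x` would show on Windows.

    Outside Windows, or when the Win32 call fails (e.g. the path
    doesn't exist), the path is returned unchanged.
    """
    if sys.platform != "win32":
        return path
    buf = ctypes.create_unicode_buffer(260)  # MAX_PATH
    if ctypes.windll.kernel32.GetShortPathNameW(path, buf, 260):
        return buf.value
    return path
```

Handy because commands that choke on a long or oddly named entry will sometimes accept the short name instead.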
  3. Cleaned up the failed attempt, rebooted, tried again for giggles, same issue. Cleaned up again, and did a successful removal of the 1.5TB drive, the new drive is in and aggressive rebalancing is working on redistributing the free space. I've done about a half dozen drive replacements so far, just thought I'd report the failure. If I hadn't been in a hurry, I probably should have worked harder to discover the bug. Thanks for a great product regardless.
  4. Replacing a 1.5TB with a 2.0TB drive, I thought I'd try the new "swap in a new drive" functionality. After a couple of hours at 0%, I started checking into what was going on. It's been copying the same ~5GB file (Data.4096.51.dat) from the client backups folder over and over again. I'll probably end up rebooting to end this process and try again, but thought I'd post my experience so far anyway.
  5. I have all the default ServerFolders in my DB pool as well; the only caveat is, when doing anything with the pool such as adding or removing drives, reboot before trying to fix any error that WHS kicks out. Most of the time a reboot solves it without any manual intervention.
  6. Maybe I'm missing something, but there doesn't seem to be a good way to ensure duplication is turned off for all files without walking my entire folder structure. I believe I've turned off duplication for everything, but a week later I still have ~700 files listed in the pool's duplicated file count. I'm considering turning on the pool-to-mount-folder option and walking the drives, but thought I'd post here first.
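If I do end up mounting the pool members as folders and walking them, something like this would list the stragglers. It's a sketch that assumes a duplicated file simply exists under the same relative path on more than one drive (the mount paths you'd pass in are up to you):

```python
import os
from collections import defaultdict

def find_duplicated(mount_points):
    """Walk each pool member's mount folder and map every relative
    file path to the drives it appears on. Paths seen on 2+ drives
    are presumably the files still counted as duplicated."""
    seen = defaultdict(list)
    for mount in mount_points:
        for root, _dirs, files in os.walk(mount):
            for name in files:
                rel = os.path.relpath(os.path.join(root, name), mount)
                seen[rel].append(mount)
    return {rel: drives for rel, drives in seen.items() if len(drives) > 1}
```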
  7. I'm not surprised, and I do apologize if I failed to search for the request before posting, I honestly don't remember. ;-) For my use case at least, there doesn't need to be an interface, just some additional logic in the balancer.
  8. Yup, the latest patch fixes this issue for me as well. Thanks folks. For reference though, is it relatively safe to downgrade a version or two for testing purposes?
  9. Sounds like we have similar requirements, chickeneye. I have less than 1TB that needs to be fault tolerant; originally I was duplicating this data, but I've now switched over to just including those folders in my server backup. The remaining 8TB is just video that's relatively easy to replace. The issue arises when you lose a drive and have a basically random chance of any one subfolder being complete. I have 115 complete TV shows; when I lost a drive and about 300GB of data, I was missing 4+ files from each show. I'm not even really looking for anything hard and fast, more of just a best effort not to be completely random would be nice.
  10. Averaging 12% at the moment, spiking to 20-25% regularly. Does anyone know if it's safe to downgrade? I'm hesitant to test my theory in case I really screw something up.
  11. Noticed that I had the process noted in the subject eating up 10-15% of my CPU(s) for hours on end, and eventually tracked it down to the Drive Bender service. If I uninstall the service, CPU usage returns to normal; reinstalling the service causes it to go back to eating an average of 11%. I'm pretty anal and feel I would have noticed this before, but I haven't tried downgrading to 2.1.6 yet to test my theory that it's new to 2.2.0 and 2.2.1. Edit: I guess it could also be something else that's accessing the pool, since when the service is uninstalled there's no pool. Having said that, it "feels" like it's new to the latest version.
  12. I've now replaced four 1.5TB drives with five brand new 2TB ones. Yup, one managed a full 9 hours of uptime and a full surface scan before dying; thankfully the other four have all been good. Lost around 250GB of unduplicated data, but Sick Beard made short work of replacing that, undoubtedly to the dismay of my ISP. Heheh. :-) Anyway, to finish off this thread: all four removals finished with the error in the subject line, and I've yet to see a "successful" message. I can't find anything missing in my duplicated data, and with just what appears to be a file count to go on, I've given up on manual reconciliation. Nothing but StableBit Scanner and AIDA64 is left running outside the standard WHS 2011 install.
  13. In the "remove drive" running transactions and tasks, it's probably better to say "may take several hours" rather than "some time". IMHO, "some time" suggests multiples of 10 minutes, something like 20 or 30 maybe, but doesn't convey that it could be 6+ hours.
  14. Same error when removing the second drive, although the counts differ by only a single digit this time: -62087 before, -62086 after. I made sure there was nothing else running this time. I'll manually reconcile again, but it would help if there were a little more verbiage in the error, some idea of what went wrong or something.
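For the manual reconciliation, all I'm really doing is comparing a recursive file count of the pool before and after the removal, along these lines (the mount path you'd point it at is whatever your pool letter is):

```python
import os

def count_files(root):
    """Recursive file count under root -- the same kind of raw figure
    the error seems to report, where -62087 vs -62086 would be a
    one-file discrepancy."""
    return sum(len(files) for _root, _dirs, files in os.walk(root))
```

Run it against the pool before the removal, again after, and diff the two numbers.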