Division-M Community



Everything posted by silkshadow

  1. Just replying to keep an eye on this thread. Thinking of taking 8 old SSDs gathering dust and making a DB pool out of them as I am planning on a large LAN upgrade soon.
  2. Sorry for the late reply, I should've come back to check but I got really busy and forgot to. Just want to say thanks Anthony for not abandoning us.
  3. Thank you Anthony! I was planning a full 2016 LAN upgrade this week, but I can hold off for a little bit. I know this is always tough, but any chance of a very rough ETA? Can you tell me if it will be before the end of the month?
  4. You should buy a license for DB from Anthony if you can. That said, if there is ever a day that it is no longer possible to buy licenses (happened with SageTV and it happened really fast and suddenly), I have a bunch of licenses I will never use. I periodically bought new licenses as a form of donation hoping to help Anthony keep DB alive. Give it some time, but if you or anyone gets really stuck and Anthony hasn't come up with a buyer or any kind of forward plan, send me a PM. I believe it will email me.
  5. When I saw the announcement, I was filled with both deep sadness and dread. With one of my storage pools at almost 160TB now, I dread what the future holds. As someone who was anticipating DB from a post on the We Got Served blog, even before the beta test, I am very sad to see things go this way. I expect good things for you in the future, Anthony. Except for the one incident with the poor decision to use an overly aggressive DRM scheme in the old version, DB has been more stable for me than even Windows Server, so your work is well proven and I am sure opportunities will abound. I only ask: do not let DB die or sell to someone who will kill it, please. Just in my home media life, I've already been through that with Yahoo/Meedio and Google/SageTV, and I still haven't fully recovered from the death of SageTV. If I can no longer use DB, I am not sure what I will do; how I would migrate all of this data is beyond me. The future of my data is now in your hands, please treat it well. Good luck and thanks for DB!
  6. I have a pool on an HTPC/gaming PC that has less than 10GB free. Physically, the computer has no SATA ports left. I have a new 1TB SSD to replace a 250GB SSD currently in the pool, plus a USB 3.0 port and a USB 3.0 to SATA bridge/dock. What is the best practice here? Should I use DB's drive swap function, and will that even work with almost no free space in the pool? Should I hook up the 1TB SSD via USB 3.0, dump the contents of the 250GB drive onto it, and then add it to the pool? Will DB freak out during this process, since I have duplication on for some folders? Or should I plug in the new SSD via USB 3.0, add it to the pool, wait for that to finish, then remove the 250GB SSD from the pool, power down the PC, pull the 250GB out and put the 1TB in its place? Would that even work over USB? I've never added a USB-based drive to DB before. Or is there a mix-and-match or better way to do this that I haven't thought of? Thanks!
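For the "dump the contents of the 250GB onto it" option, a plain offline tree copy is one way to stage the data before the swap. A minimal Python sketch, purely illustrative: the function name and drive letters are hypothetical, and this is a generic copy that keeps going past unreadable files, not a DB feature.

```python
import shutil

def clone_drive(src: str, dst: str) -> list[str]:
    """Copy the whole directory tree from the old drive (src) to the
    new one (dst), collecting per-file errors instead of aborting the
    entire copy when one file is unreadable."""
    errors: list[str] = []

    def copy_one(s: str, d: str) -> None:
        try:
            shutil.copy2(s, d)  # preserves timestamps and attributes
        except OSError as exc:
            errors.append(f"{s}: {exc}")

    shutil.copytree(src, dst, copy_function=copy_one, dirs_exist_ok=True)
    return errors

# Hypothetical mount points: old 250GB SSD at E:, new 1TB SSD on the
# USB dock at F:
# bad_files = clone_drive("E:\\", "F:\\")
```

On Windows, robocopy would do the same job; the point is simply that the copy should survive individual bad files and report them at the end.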
  7. Just to answer my own question here: I experienced no problems with existing data, duplicated or not, that wasn't resident on the failed drive. I opted to pull the drive and do a pool repair, as running in Fault Tolerant/read-only mode was making me way too nervous. However, after the failed drive's removal, my remaining space was not sufficient to restore all the duplicated data, so some of it was unavailable unless I mounted the drive the duplicated copy lived on and searched for it in the duplicate folder. I also saw high CPU usage from DB while it tried to restore data there was no room for; at least, that is my assumption from watching the log activity, though I am not sure it was the cause. CPU usage spiked to over 50% for extended periods. A typhoon hit us and we had to chopper back to civilization before the storm. I was unable to find a 4TB replacement drive; my plan had been to find one and restore the sector-by-sector image I had taken of the failed drive, but I was only able to pick up two 2TB drives. Anyway, I brought the system back to our island today once it was safe, and I can see that the data that could not be restored before is now being restored. Unless something else happens, it looks like, even after extended uptime with a removed drive and limited free space, I suffered no loss of duplicated data.
  8. Thanks for addressing this, thesmith. Knowing the license system is an off-the-shelf component actually changes my view of the issue somewhat. However, the problem is not just the bug; it is fundamental. To spell it out: you were willing to treat a paying customer like a criminal, with a day and a half of completely unnecessary downtime. So no, I am not convinced. I would look at this system and say I would never implement it. You risk 24+ hour downtime for your customers (which a business cannot afford) in return for a failed attempt to stop piracy? That would be unacceptable in my business, and my business is logistics, where the pirate problems are real pirates with guns, not kids behind a keyboard cracking software, which studies suggest can actually help sell more copies. With StableBit, FlexRAID and Microsoft's own solution improving and putting pressure on you, IMO you do not have room to treat your customers the way I was treated. However you view your customers, we can impact your bottom line, and risking our downtime seems silly. Getting down to brass tacks: you will hopefully fix the random license revocation I experienced. That's a step in the right direction, but this is still a very serious problem. Say you fix it, only to have something else pop up that causes random license revocation? It's an endless cycle. The grace period has(!) to be longer than the longest vacation your support staff takes: at a minimum 3 days to cover weekends, ideally 14 days to be sure.
  9. Thanks guys! To be clear, I do not blame DB for taking the system down. I think the bad drive is breaking DB, and the several services and other apps on this box that depend on the DB pool are in turn breaking Windows. Either way, the solution is to pull the drive. My concern is running the DB pool in R/O mode for an extended period. I've never had to do that before, so I was just checking whether anyone knew of any unintended/unforeseen issues beyond the normal R/O limitations, especially regarding the 10% of data that is duplicated and unique to this box (all pictures and home videos). Thanks!
  10. Sorry I am late to this, but I use a SAS expander with an LSI card to extend my pools to 48 physical drive bays (18 bays still free, though). DB handles this fine. So depending on your scenario, that may be a better way to go.
  11. Ran the early versions on SBS 2008 and am now running on SBS 2011 Essentials. On SBS 2008, though, both my SQL and Exchange stores were on a RAID array, not DB. Lack of VSS support is a significant issue, so keep that in mind, but it looks to be addressed in a future v2 release.
  12. Here are 2 of the 5 things I warn people about when discussing DB that I do not see in the proposed changelog. To be clear, at this point I am forced to give DB a neutral-to-negative recommendation (depending on the situation) in discussions and social channels because of these issues. As an original beta tester and long-time user, man, I like this program, but I find these issues recommendation-breaking. Thankfully you are addressing 3 of them in v2.0, but these 2 are not: 1) Drive removal is terrible. There is no error handling. If a file sits on a bad sector, DB will sit there for weeks trying to move it; it needs a timeout. The removal process also chokes on folder.jpg all the time for me. I don't know why, but removal sometimes creates a duplicate folder.jpg in the same directory and the drive removal process just stops. After waiting 10+ hours, I then have to manually delete the extra folder.jpg and start over. It needs a skip prompt and automatic error handling; see Directory Opus (a fellow Aussie dev) and how it handles non-prompting file moves. Bottom line: the drive removal process has no error handling, and this needs to be addressed. 2) The license system. Here is my scenario: nothing had changed in my system for months. I rebooted my server only to find a warning that my license was in trouble and that I had one hour to correct it or my pool would go into R/O mode. One freaking hour!! Yet you take a day-plus to reply to the support ticket? This is why piracy exists, and I sometimes get what the pirates claim about cracking making software better. The pirates can go on using their DB, but mine was in R/O for more than 24 hours and I lost an entire day. Bottom line: the license system needs to be scrapped or drastically fixed. Piracy-paranoid devs don't get that the worse your system is for actual customers, the more it invites piracy and angers existing customers, who then spread that anger through their networks.
For a product like yours, whose customers are the tech leaders of their social and professional circles (i.e. the go-to people for tech product recommendations), you get the HTC effect x10. At a minimum, the grace period should be as long as your longest staff vacation. Right now I have told, and will continue to tell, anyone looking to use this product for a business to expect their business to be down for 24+ hours because of DB. This is a "do not buy" for businesses and a deal-breaking decision for consumers on your part.
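The skip-and-continue behavior asked for in point 1 is easy to picture. A hypothetical Python sketch (the function name and return shape are my own, and this is generic file-move logic, not DB's actual code): rather than stalling forever on one bad file, each failure is recorded and the batch always finishes.

```python
import shutil
from pathlib import Path

def evacuate(files: list[str], dest_dir: str) -> tuple[list[str], list[str]]:
    """Move files into dest_dir, but never stall: name collisions
    (like the duplicate folder.jpg) and I/O errors from bad sectors
    are recorded and skipped so the batch always completes."""
    moved: list[str] = []
    skipped: list[str] = []
    dest = Path(dest_dir)
    dest.mkdir(parents=True, exist_ok=True)
    for f in files:
        target = dest / Path(f).name
        try:
            if target.exists():        # duplicate name: skip, don't stop
                skipped.append(f)
                continue
            shutil.move(f, str(target))
            moved.append(f)
        except OSError:                # unreadable file: skip, don't hang
            skipped.append(f)
    return moved, skipped
```

Reporting the skipped list at the end (instead of halting mid-run) is exactly the prompt-free behavior the post credits Directory Opus with.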
  13. Funny, I just had the opposite: a drive died with no SMART warnings from Sentinel/DB. One of my big gripes with DB as it stands. Error handling on drive removal is just dumb, as there is none.
  14. Mechanical drives below 10k RPM get practically no benefit from SATA3 over SATA2. The bottleneck is not the interface bandwidth but the performance of the spinning platters. So you could certainly make the 2nd pool, but performance-wise it won't make much of a difference (if any).
  15. I am having a problem on a Win 7 x86 desktop. I built a small pool of four 4TB drives; I am not at home but at my beach house. One of the drives was suddenly marked dirty by Windows. It ran a chkdsk on boot and a bunch of sectors were unreadable. Yikes. I let it run and then booted into Windows. There were no SMART errors, but when I tried to pull the drive, DB crashed. On reboot, Windows wouldn't load. After safe mode, SFC, etc., I was able to boot Windows, but the pool mount point "F:" was not there. I tried to launch the DB manager and it locked up. I tried to open Task Manager to kill it, and then Windows locked up with no way to break the freeze. Force power off. Crap. So I am pretty sure DB is taking my whole Windows system down, and it's due to the dying drive. The data is mostly duplicated and only about 10% of it is actually unique; the rest is all copied from my home server. So I am not too worried about the data, but it took a month to download it all (the only net here is spotty LTE and solid-but-slow HSDPA) and I would hate to have to do that again. The thing is, I have no spare hard drives here and no means to buy one (I am on an island with maybe a few hundred people, no computer stores anywhere, and no post office). Right now the system is powered off, but I kind of need it back on. So what I am wondering is: what happens if I force remove the drive (unplug it) and leave it that way for a week or so? Is any of my data in trouble? Will the pool still be usable? Also, a quick unrelated follow-up I have been meaning to ask for a few months: I bought 2 DB licenses in the buy-one-get-one-free promo and was only emailed 1 key. Is that one key a dual-use license? Thanks!
  16. Very good, thank you. Especially if the process takes a long time, it's not practical to have the pool offline. Yeah, that is a problem. The enumerating step should really not be taking 44 hours under any condition, which it did in my case. I have no idea why that would happen, but I suggest it's something to look into. As for the actual moving of files, it was at 3% after 7 hours and 49 minutes. It took over 2 hours to move one text file, and that file, I discovered when doing the removal manually, didn't even need to be moved: it was duplicated, and the repair process (which I ran to force removal of the drive) had already restored it to the pool. In the forced/manual process, by the way, I moved the files myself in under 2 hours with no issues. A lot of the files were duplicated, and so already restored to the pool by the repair process, which made things go faster. Is it possible that the removal process is not taking duplicated files into account, and is moving files that do not need to be moved? That would be a factor in the time discrepancy, but it can't be the only thing. At 7+ hours for 3%, the total time would be over 200 hours, roughly 100x slower than doing it by hand (all estimates, of course; maybe it would have sped up). So I have no idea what accounts for the rest, but at least that could be a place to start. Lastly, I would strongly suggest adding a simple error popup: if the process takes longer than it should, alert the user that this is not normal. Since I had no idea how long the process should take (it is documented nowhere), I wasted 3 days. A little popup would have sent me to the forums on day 1, and I could have given up and done it manually, wasting only 1 day. Thanks, great news and thanks for the reply here!
  17. Right, the drive was going bad; my LSI card emailed me, which is where I started the first thread on this topic. Funnily enough, after I moved all the data back into the pool, I did a zero-fill write and the drive came up as healthy in HD Tune. When I get a chance tonight I'm going to run some more tests on it, though this is not the first time I've seen a drive report as bad and then look fine after a full wipe: the bad sectors get remapped, and as long as the count is under the threshold, it reports healthy. The thing is, isn't a failing drive the most obvious reason to remove a disk from a pool? The bigger point, though, is that I didn't have much trouble doing it all manually. Edit: As an aside, I wouldn't mind a feature that lets me add a drive to a pool but designate it as a duplicates-only drive, so nothing but redundant data is stored on it. That would be a good use for this drive, if it does come up healthy in more exhaustive tests.
  18. It was not good; my 2nd-worst experience with DB, though overall I still thoroughly recommend the app. It does have rough edges that need work. High CPU use, yes, though not pegging any of my cores, if that is what you meant. Whether there were a lot of small files is hard to say, as I do not know exactly what was on that drive. The pool has a lot of small files: client machine backups, a document library, database files, my work server mirrors, virtual machine files, pictures and media metadata files. I would say that's not unusual for DB users, but with my pool as large as it is, the sheer number is probably considered a ton. My pleasure; if it helps someone else avoid what I just went through, I'm happy. No, in step 5 I deleted all the duplicate folders. "FOLDER.DUPLICATE.$DRIVEBENDER" is where DB keeps duplicates; every directory that is duplicated seems to have one of those folders, even when empty. I believe it is essential to delete all of these before moving the files back to the pool. I learned this during my worst experience with DB, when I had to rebuild my pool from scratch: if you copy the "FOLDER.DUPLICATE.$DRIVEBENDER" folders back into the pool, DB runs into issues with duplication. I had to rebuild from scratch a second time after I first thought I could move them into the new pool and they would be picked up as duplicates. That was a while ago, though, on a much older version of DB. So after deleting all of those "FOLDER.DUPLICATE.$DRIVEBENDER" folders, in step 6 I just moved the entire contents of /<pool guid>/ back to the pool, skipping files that already existed (the repair process restores duplicated files that were on the removed drive). Most of the files were already there, so the move was fast: all of 13-14 minutes.
Besides the repair process, searching out and deleting all the "FOLDER.DUPLICATE.$DRIVEBENDER" folders is the other big time sink. I advise deleting them in batches; otherwise you might end up with over a million folders to delete at once (depending on the size of your pool; mine is big) and it will bring your PC to its knees (that happened when I rebuilt my pool from scratch before). I hope Anthony can fix this, but I don't think I will ever trust the drive removal feature again unless I see in the changelog that it has been fixed and is guaranteed to work. Edit: I should note something I neglected to say in my frustration (though I did note it in my prior thread on this): my situation may not be typical. I have a large pool of 24 drives, probably close to the 100TB mark. It's possible DB just wasn't designed for the sheer number of files involved.
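The batched deletion advised above could be scripted roughly like this. A hedged sketch: the function name and batch size are mine, and only the folder name "FOLDER.DUPLICATE.$DRIVEBENDER" comes from the post.

```python
import os
import shutil

DUP_NAME = "FOLDER.DUPLICATE.$DRIVEBENDER"

def delete_duplicate_folders(root: str, batch_size: int = 1000) -> int:
    """Walk root and remove every FOLDER.DUPLICATE.$DRIVEBENDER
    directory, flushing in batches so a pool with a million of them
    doesn't swamp the machine in one giant operation."""
    deleted = 0
    batch: list[str] = []
    for dirpath, dirnames, _ in os.walk(root):
        if DUP_NAME in dirnames:
            batch.append(os.path.join(dirpath, DUP_NAME))
            dirnames.remove(DUP_NAME)  # don't descend into it
        if len(batch) >= batch_size:
            for d in batch:
                shutil.rmtree(d, ignore_errors=True)
            deleted += len(batch)
            batch.clear()
    for d in batch:                    # final partial batch
        shutil.rmtree(d, ignore_errors=True)
    deleted += len(batch)
    return deleted
```

Pruning the folder out of `dirnames` before descending keeps the walk from enumerating the duplicate trees themselves, which is most of the savings on a huge pool.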
  19. Used Google cache to get the forced removal info. Note: this really should be documented somewhere(!!!), unless I just missed it amid the division-M website/DNS issues that were going on. I will document what I did for anyone else: 1) Remove the disk physically, then click repair pool under the pools tab. 2) The pool repair took about 6 hours. Reboot when complete; you have now force-removed the drive. 3) I stuck the removed disk into another computer, to make sure DB could not try to add it back into the pool. 4) Search the disk for folders named "FOLDER.DUPLICATE.$DRIVEBENDER". I used Directory Opus' search, but whatever you use should be fine. 5) Delete them all; there were a ton of them for me. Optionally, move them to a backup location instead. 6) Move the entire directory tree back to the pool. I did this over gigabit Ethernet; it would be faster with the drive installed in the same box as the DB pool, but I just couldn't trust DB at this point after how inept this process was. 7) Done. It took me all of 8 hours including the repair process, after I had wasted 3 days on the built-in remove drive feature! So, my advice to you or anyone reading this: do NOT use the built-in remove drive feature; it's badly broken. Edit: For your upcoming drive swap, CBers, I suggest you do it manually too.
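Step 6 (move everything back, skipping files the repair already restored) is also scriptable. A sketch under my own assumptions: `move_back` is a hypothetical helper, and "already exists at the destination" is used as the skip test, matching the post's description.

```python
import os
import shutil

def move_back(drive_root: str, pool_root: str) -> tuple[int, int]:
    """Move every file under drive_root to the same relative path
    under pool_root, skipping files the pool repair already
    restored. Returns (moved, skipped) counts."""
    moved = skipped = 0
    for dirpath, _, filenames in os.walk(drive_root):
        rel = os.path.relpath(dirpath, drive_root)
        dest_dir = os.path.join(pool_root, rel)
        os.makedirs(dest_dir, exist_ok=True)
        for name in filenames:
            dest = os.path.join(dest_dir, name)
            if os.path.exists(dest):   # repair already restored this one
                skipped += 1
                continue
            shutil.move(os.path.join(dirpath, name), dest)
            moved += 1
    return moved, skipped
```

This mirrors the "skip files that exist" behavior of a GUI file manager's no-overwrite move, which is what the post did over gigabit Ethernet.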
  20. Yes, I have about 78% of this pool duplicated. However, my conclusion is that the removal process is so badly bugged it should be labeled an alpha-state feature. I just timed it: DB took 2 hours and 14 minutes to move a 180kB config (i.e. text) file. Yes, 2 hours. I'm going to cancel the process, as this has been a colossal waste of 3 days. Do you have a good link to the forced removal process? I can't seem to find one; all I can find is my bookmark to the old forum at drivebender.com. Thanks!
  21. All right, it moved off "enumerating" and started moving files at hour 44. This is crazy; there has to be a better way than this. At the pace it's going, this will take another 60+ hours. I won't get my pool back till next week, and I really can't go without the pool starting Monday; I have work that requires it. Argh.
  22. Thanks CBers and w3wilkes! I saw activity lights on some of the other drives at about the 40-hour mark, which might be a positive sign, as I hadn't seen that previously. This is taking forever. I'll give it a couple more hours (currently at 42 hours) and then submit a ticket. I am not religious in any sense of the word, but Amen! Having my pool offline this long is seriously a huge issue.
  23. I am removing just 1 drive from a pool. It's a 4TB Coolspin (5900 RPM) Hitachi on a machine with a Q6600 CPU and 16GB of RAM. It's been a while, I estimate about 18 hours, and it's still on "enumerating folders on drive being removed", step 3 of 5. Is this normal? How long should it take? Do I need to open a support ticket?
  24. Thanks, SteveCliff, for the answer. Ouch, I am with you; this is most unfortunate. I have so many services that depend on this pool. It's been on my to-do list to move some of them over to my 2nd pool, but I've been lazy, and lazy always bites you in the butt, doesn't it? That being the case, can anyone give me a rough estimate of how long it will take to remove a 4TB Coolspin (5900 RPM) Hitachi from a pool on a machine with a Q6600 CPU and 8GB of RAM? If I add 8GB more RAM, will it make a big difference? Thanks!