Division-M Community

FlyingShawn

Members
  • Content count: 17
  • Joined
  • Last visited
  • Days Won: 2

FlyingShawn last won the day on March 3, 2014

FlyingShawn had the most liked content!

About FlyingShawn

  • Rank: Member

Profile Information

  • Gender: Not Telling
  1. I'm considering CX as a solution for cloud backup of my files once support for Amazon Cloud Drive rolls out (not a true "backup," of course, but a remote copy in case of catastrophic failure wiping out my local files and local backups). From what I'm reading here, it appears that it's not possible to use space on a DB Pool for CX's local cache; so if I were to get a NAS device like a Synology instead of a simple external hard drive, would CX be able to use it as the local copy for encrypted bidirectional sync (the benefits would be more local space and the ability to add/replace drives)? I'd prefer to use a simple external HDD with the remote-only sync option, but from what I'm reading it sounds like too much of a hassle with encryption enabled, so I need to look for an option with enough space to store all my cloud files locally and sync bidirectionally. Thanks!
  2. For future reference if anyone else sees this, I was experiencing the password reset issue with Chrome 45 on Windows 10. The password reset worked perfectly under Firefox on the same machine. Thanks again for all the help!
  3. Thanks Anthony! I have no idea how they got unmounted; the whole thing really is quite bizarre. The good news is that I've still got enough free space in the pool to pursue the first option. When copying files from each mount point folder (what I understand to be the GUID folders representing each mount point) back into the associated mount point in the pool, how should I handle the "FOLDER.DUPLICATE.$DRIVEBENDER" folders? If I just copy them along with everything else, will DB recognize them as non-primary folders and fix any duplicate+primary files on the same drive during the next balancing run? As a side note: I tried logging into the support site and I must have used a different password than I thought, but the "forgot password" link isn't working for me. The window pops up and I can type in my email address, but clicking "submit" does nothing (the window stays open and no email is sent).
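    In case it helps anyone following the same recovery, here's a minimal sketch (Python) of how I'd script that copy if skipping the duplicate folders turns out to be the right call; the GUID folder and pool mount paths below are just placeholders for my own setup, not anything DB-specific:

      import shutil
      from pathlib import Path

      # Placeholder paths: substitute your GUID mount-point folder and your pool mount point.
      SOURCE = Path(r"D:\MountPointFolder\{GUID}")
      DEST = Path(r"C:\Mounts\Pool")
      SKIP_DIR = "FOLDER.DUPLICATE.$DRIVEBENDER"   # assumption: leave duplicates for DB to rebuild

      for src in SOURCE.rglob("*"):
          if SKIP_DIR in src.parts or not src.is_file():
              continue
          target = DEST / src.relative_to(SOURCE)
          target.parent.mkdir(parents=True, exist_ok=True)
          if not target.exists():                  # never overwrite a file already in the pool
              shutil.copy2(src, target)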
  4. If it helps, I noticed that the two "missing" drives (which both passed extended self checks) are still listed in grey in the Drive Bender console. I have not mounted them yet (I'm using folder mounts instead of drive letters), so I'm not sure if the lack of a "convert this drive" button (middle button on the other grey drives) is because they don't have a mount point or if it's because DB remembers them as being a part of the pool and will automatically bring them back online once they're mounted. I'm still afraid of the possibility of dual-primaries when they come back online, so I'm waiting to mount them until I learn the best way to go about it.
  5. I received an email notification that I had close to 18,000 orphaned files and that I needed to do a Pool Repair. What Drive Bender didn't tell me was that two of my drives had mysteriously become unmounted (every indication within the Console was that the pool was "healthy"; I had to open the DB System Notifications to even see that there were orphaned files! Shouldn't that trigger an "unhealthy" flag on the main screen of the Console?), so I went ahead and did the Repair as instructed. After the Repair was complete (it un-orphaned most of the files, but not all of them, even on subsequent Repair runs), I discovered the missing drives and immediately began testing their health with the full version of Hard Disk Sentinel. Since I already did the Repair and un-orphaned those files, how do I re-integrate the missing drives into my Pool without messing everything up? I'm worried that the "orphaned" files were former duplicates that the Repair promoted to primaries, and that Drive Bender might start trying to create duplicates for each of the now dual-primary files instead of recognizing that they are the same and demoting half of them back to duplicate status.
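    Once those drives are visible again (outside the pool), I'm thinking of running something like this rough sketch (Python, placeholder paths) against each of them first, just to see how many potential dual-primaries I'd actually be dealing with; it only reports files that exist in both places under the same relative path:

      import hashlib
      from pathlib import Path

      # Placeholder paths: a still-unmounted drive's folder tree and the live pool mount.
      OFFLINE_DRIVE = Path(r"E:\OfflineDrive")
      POOL = Path(r"C:\Mounts\Pool")

      def digest(path, chunk=1 << 20):
          h = hashlib.sha256()
          with open(path, "rb") as f:
              while block := f.read(chunk):
                  h.update(block)
          return h.hexdigest()

      for src in OFFLINE_DRIVE.rglob("*"):
          if not src.is_file():
              continue
          twin = POOL / src.relative_to(OFFLINE_DRIVE)
          if twin.is_file():
              same = digest(src) == digest(twin)
              print(f"{src.relative_to(OFFLINE_DRIVE)} -> {'identical' if same else 'DIFFERS'}")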
  6. Drive Bender Nightmare loop..... so Angry

    If it helps any, I've found WinMerge (http://winmerge.org) to be absolutely invaluable when attempting to merge large sets that have any significant overlap of identical files. Still a nightmare situation, but it could potentially save you a lot of time in sorting this out.
  7. For those curious, after discussing the situation with Anthony, here is what I did and learned about DB. We ended up following a simplified version of Approach 2:
    1) Stopped the DB service.
    2) Used FlexRAID's parity to restore the Toshiba's data onto the empty WD.
    3) Made sure the Drive Bender-specific files on the root of the drive (the *.DRIVEBENDER files) were restored (in my case, FlexRAID hadn't been protecting them for some unknown reason, but I was able to pull valid copies of them off the Toshiba).
    4) Pulled the Toshiba from the system (it would also work to simply delete those DB-specific files from the drive root).
    5) Restarted the DB service (which automatically went into "Fault Tolerant" mode since the Toshiba was missing).
    6) Ran a "Restore Pool" operation from the Pool tab in the DB Dashboard (http://support.drivebender.com/entries/22962601-How-do-I-restore-a-pool-using-the-connected-drives-) to have DB restore the Pool based on the drives that were connected.
    And that's it: my Pool was back up and running! I did have about a half dozen "duplicate primary files" that DB automatically renamed for me, which I'll need to clean up manually, but the DB email notifications were kind enough to tell me the location of each one, so that'll be an easy task. The key differences between this procedure and my earlier idea for "Approach 2" are that I did NOT need to reproduce the partition structure of the Toshiba or spoof the partition label: DB apparently doesn't care whether the new drive has the same name or size as the old one as long as those root files are present. In other words, it seems DB would be perfectly fine if you pulled a drive from your system, copied its contents to a new drive, installed that new drive, and ran a Pool Restore. (PLEASE NOTE: this is not an approved procedure or DB's recommendation for how to upgrade to a newer or larger drive; that's what the "Swap in a new drive" function is for. I'm just describing it here to simplify these concepts as much as possible for clarity.) Overall, I'm pretty happy with the results and am pretty confident that I didn't lose any of my files. Going forward, I'm going to take steps to protect those DB root files in case they get corrupted in a drive failure (if I can't get FlexRAID to protect them, the next step will be to have Acronis take backups of those specific files on each drive at a regular and frequent interval). Thanks again to Anthony and w3wilkes for the help!
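    For anyone wanting to do something similar, this is roughly the kind of scheduled copy I have in mind for protecting those root files, sketched in Python with placeholder mount paths (I may still end up letting Acronis handle it instead):

      import shutil
      import time
      from pathlib import Path

      # Placeholder mount points for the pooled drives, plus a backup folder off the pool.
      POOL_DRIVES = [Path(r"C:\Mounts\POOL1"), Path(r"C:\Mounts\POOL2"), Path(r"C:\Mounts\POOL3")]
      BACKUP_ROOT = Path(r"D:\DB-RootFile-Backups")

      stamp = time.strftime("%Y%m%d-%H%M%S")
      for drive in POOL_DRIVES:
          dest = BACKUP_ROOT / stamp / drive.name
          dest.mkdir(parents=True, exist_ok=True)
          for meta in drive.glob("*.DRIVEBENDER"):   # the Drive Bender files on each drive root
              shutil.copy2(meta, dest / meta.name)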
  8. One of the drives in my pool is failing and I didn't catch it early enough (TeraCopy chokes when trying to copy files off of it, which I'm assuming means a significant bad sector count). Only about a third of my pool is duplicated due to space constraints, so I would lose a lot if I just pulled the drive and ran a "Repair" operation (most of the non-duped files are non-critical, such as TV recordings, but I'd obviously prefer not to lose them if I can avoid it). I have a new drive that I can use as a replacement (for the sake of simplicity: the failing drive is the "Toshiba" and the new drive is the "WD"), but I'm trying to figure out the best way to handle the transfer considering the poor condition of the Toshiba. I have three ideas for how to do it:
    Approach 1 (the most "official" DB procedure, but I have some concerns about it): If I add the (empty) WD to the pool while the Toshiba is still connected and run a "Remove" or "Swap" operation, how intelligent is DB about handling the sectors it can't read? Will it simply skip non-duped files that are corrupt? Will it recognize they are corrupt without a duplicate to compare against? Will it know to use the duplicates (where available) to replace the corrupted primaries? How can I identify which known-corrupt/non-duped files weren't copied?
    Approach 2 (involves "tricking" DB, if possible): How does DB recognize a drive? Label? Serial number? I'm using FlexRAID to provide snapshot parity info for my pool (it's a more space-effective way to protect files that aren't worth duplicating). Assuming the parity data is good (I'm not sure exactly when the Toshiba began to fail), what would happen if I followed this process: 1) Shut down the DB service. 2) Use FlexRAID's parity to restore the Toshiba's data onto the (empty) WD. 3) Unmount the failing Toshiba and re-mount the WD with the Toshiba's original label ("POOL3"; my drives are mounted within a folder on the C: drive and I'm not letting DB rename them automatically). 4) Restart the DB service. Would it trick DB into thinking that nothing had changed and I had a healthy pool? (Adding the parity-restored WD to the pool with or without the Toshiba attached would just create rampant duplicate primary files, so that's not an option.) IF DB can be fooled this way, I think this approach stands the best chance for a complete or near-complete recovery of my non-duped files, but that's a big "IF."
    Approach 3 (a longshot, to say the least, but might be worth considering if Approach 2 is a non-starter): Premise: the Toshiba is failing, but might still have one last gasp of life. Here's the procedure: 1) Restore the parity data onto the WD as a backup. 2) Use Parted Magic (on the Ultimate Boot CD) to wipe the Toshiba clean. 3) Clone the parity-restored WD onto the now-empty Toshiba (the Toshiba wasn't originally full, so the hope is that the cloning process would be able to write most or all of the data onto the remaining good sectors). 4) Give the Toshiba the same label it had originally. 5) Boot back into WHS 2011. 6) Run a "Remove" operation in DB to cleanly pull the Toshiba from my pool. 7) Wipe the WD and add it to DB as an empty drive.
    What do you guys think?
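    On the Approach 1 question of identifying which non-duped files didn't survive: whatever route I take, I'll probably first run a read test over the Toshiba along these lines (Python sketch, placeholder paths), so I at least end up with a list of the files that can't be read cleanly:

      from pathlib import Path

      # Placeholder: the failing Toshiba's mount-point folder.
      FAILING_DRIVE = Path(r"C:\Mounts\POOL3")
      LOG = Path(r"C:\unreadable-files.txt")

      bad = []
      for f in FAILING_DRIVE.rglob("*"):
          if not f.is_file():
              continue
          try:
              with open(f, "rb") as fh:
                  while fh.read(1 << 20):      # read the whole file in 1 MB chunks
                      pass
          except OSError as exc:               # bad sectors generally surface as I/O errors
              bad.append(f"{f}  ({exc})")

      LOG.write_text("\n".join(bad), encoding="utf-8")
      print(f"{len(bad)} unreadable file(s); list written to {LOG}")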
  9. Caching Hard Drive

    Theoretically, a RAMDisk would offer a performance increase, but I don't think the practical difference would be that great, and the downsides would, at least in my mind, far outweigh the benefits. Assuming we're using the model I proposed earlier (new data goes on the cache drive and is migrated later; changed data is written directly to the Pool), here's how I imagine that working:
    On the plus side:
    - A RAMDisk would eliminate the bottleneck of writing data to the cache drive, so we'd only be limited by how fast we can give it data to cache (either generating new files, transferring files from non-DB Pool drives, or transferring over the network from other machines). In any of those cases, I'm struggling to imagine a scenario in which we're able to send data to the cache drive fast enough to take any meaningful advantage of the RAMDisk over a decent mid-range SSD.
    - That being said, I think a RAMDisk would offer a more meaningful performance difference when writing many small files to the Pool, which is an area DB currently struggles with. But again, you're still limited by how fast you can give it those small files to write.
    On the negative side:
    - RAMDisk capacity limits. Even if you're running 32GB+ of RAM in your DB server, it's going to be tough to set aside enough memory for the RAMDisk without it filling up right away. Especially for those of us with only 4-8GB of RAM, the cache disk would fill up so quickly when transferring larger files that we'd have to start writing directly to the array again and lose all the benefits. When you consider transferring large files like backup images, Blu-ray rips, or HDTV recordings, a cache drive only starts becoming useful if you can fit multiple 10+GB files on it before it fills up.
    - Power outages (on systems with no UPS or an undersized one). If there's a blackout while files are sitting on an SSD cache drive waiting to be migrated, everything that's already on the SSD is OK. If the same thing happens with a RAMDisk cache, the entire cache is lost and unrecoverable. Obviously, this only applies to files that have already been written to the cache drive: anything still being transferred is going to be lost either way.
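    To put rough numbers on the capacity point (assuming about 110 MB/s of sustained throughput, roughly the practical ceiling of gigabit Ethernet):

      # Back-of-the-envelope: how quickly would a RAM-sized cache fill during a big transfer?
      THROUGHPUT_MB_S = 110                     # assumed sustained gigabit transfer rate
      for cache_gb in (4, 8, 32):
          minutes = cache_gb * 1024 / THROUGHPUT_MB_S / 60
          print(f"{cache_gb:>3} GB cache fills in ~{minutes:.1f} min of sustained writes")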
  10. RC2 v1.4.7.9

    What is the recommended way to upgrade from RC1 v1.4.7.8 to this version? (Note: I am a new user and have not done a DB upgrade before) Looking at the thread for RC1, it seems that there is an "upgrade" option on the install, but several users seemed to have problems with it. Are they the exceptions or is uninstall/reinstall more reliable at this stage in DB's development? If it is, is there a way to export/import my settings or would I have to go through and set up everything again? Would I need to "release" my license prior to starting the uninstall/reinstall process? Thanks!
  11. IBM M1015 and S.M.A.R.T. data

    Any updates on when we can expect this? If it'll be sometime within the next couple of months, I'll just wait for it. Otherwise, I'll probably need to consider purchasing something like HDD Sentinel to use in the interim.
  12. Configurable Duplication

    I'll third this: would LOVE to have Amahi/Greyhole-style duplication options. Maybe it could be something like Duplication Level = Original + 1 duplicate, Original + 2, Original + 3, and Original + ALL (duplicates on all drives in the Pool).
  13. vm portability for pool?

    Any updates? Were you ever able to get DB running in an ESXi VM?
  14. Caching Hard Drive

    +1 to this idea, especially jamesbc's explanation above regarding slow external drives! Also, add an option to manually set the schedule for migrating data off the cache drive (maybe using the Windows Task Scheduler for plenty of flexibility?).
    One example of how a user-controllable schedule would be useful: I'm planning on setting up my new server with a combination of Drive Bender and FlexRAID's Snapshot mode. The main idea is that it'll allow me to have duplicate+parity protection on my important files and parity-only protection on less important ones like TV recordings (as opposed to Drive Bender alone, which would give me dups for important files and no protection for non-important ones if I don't want to waste space by dup-ing everything). The problem with this system is that as my data grows, FlexRAID will start taking a long time to do parity updates, and it would be much happier if my data wasn't being changed along the way. If I could use a DB cache drive, I could have it migrate the data on a schedule every few days and schedule parity updates a few hours later, so that the data would remain much more static during the update.
    If my understanding of how DB works is correct (creating virtual "mounts" to intercept all data writes and redirect them to the appropriate disk), it probably wouldn't be that hard to implement this idea. All new writes would first be sent to the cache drive and put in a folder structure that mirrors where they belong (Pool1/Mount2/Folder/subfolder/). Theoretically, a single disk could even serve as the cache for multiple Pools! Then, using a schedule or triggers like disk activity level or cache drive % full, the data could be migrated from the cache to the appropriate Pool disk and duped if necessary (I'm guessing schedules would be easiest to implement, but the other triggers would be ideal for some users).
    On second thought, this idea only works for new files; I'm not sure how a cache drive would work for modifying existing files in the Pool. I suppose the simplest option would be that any modifications/changes are written straight to the Pool as they are now and the cache drive is left out of the process. I'm guessing most file modifications aren't that large, so the main speed benefits would be for new files anyway.
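    To make the migration step concrete, here's a very rough sketch (Python, purely illustrative paths and folder layout, not how DB actually stores anything) of the kind of job a schedule or Task Scheduler trigger could kick off:

      import shutil
      from pathlib import Path

      # Illustrative layout only: the cache mirrors the pool structure under a per-mount folder,
      # e.g. <cache>\Mount2\Folder\subfolder\file.ext
      CACHE_ROOT = Path(r"S:\DBCache\Pool1")
      POOL_MOUNTS = {
          "Mount1": Path(r"C:\Mounts\POOL1"),
          "Mount2": Path(r"C:\Mounts\POOL2"),
      }

      for mount_name, pool_path in POOL_MOUNTS.items():
          cache_mount = CACHE_ROOT / mount_name
          if not cache_mount.exists():
              continue
          for cached in list(cache_mount.rglob("*")):
              if not cached.is_file():
                  continue
              target = pool_path / cached.relative_to(cache_mount)
              target.parent.mkdir(parents=True, exist_ok=True)
              shutil.move(str(cached), str(target))   # migrate new files from the cache into the pool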