Division-M Community

tosis

Members
  • Content Count: 30
  • Joined
  • Last visited

  1. A few things:
     1. Check that your controller drivers are up to date and not corrupted; preferably use signed drivers where available.
     2. Check your Windows event logs, particularly the "System" log, and particularly entries from source "disk" and "Disk" - you might find bad blocks have previously been flagged and mapped to the bad cluster file.
     3. Do you have a high quality power supply? And is it large enough to handle the peak load from all of your drives running at the same time? This can be a particular issue with gfx cards as well.
     As the signature suggests, I've put off upgrading to version 2 and will continue to wait until usable VSS support is available. A landing zone isn't quite enough reason for me to upgrade at this time, and everything else appears to be running happily.
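     For point 2, a quick way to pull those entries from the System log is something like the following - a minimal sketch from PowerShell; adjust the lookback window and sources to suit:

         # Recent Error / Warning entries from the "disk" source in the System log
         # (source matching here is case-insensitive, so it should catch "disk" and "Disk").
         Get-EventLog -LogName System -Source disk -After (Get-Date).AddDays(-30) |
             Where-Object { $_.EntryType -eq 'Error' -or $_.EntryType -eq 'Warning' } |
             Select-Object TimeGenerated, Source, EventID, Message |
             Format-Table -AutoSize -Wrap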
  2. Emil3, Generally speaking, I've run into this issue in the past and was able to resolve it by checking service dependencies (a down-level service can act as a guardian / agent and start the master service if it believes it has stopped / failed) and also by checking on scheduled tasks. It's been a while since I've had to delve into my DB install (a stable legacy version) but I'd bet there's a guardian service at work there. Sorry I can't be of more specific help.
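     If you want to check the dependency chain yourself, this sort of thing works from PowerShell - the service name below is just a placeholder, so substitute the real one:

         # Show what a given service depends on, and what depends on it
         # ('SomeService' is a placeholder - use the actual service name).
         $svc = Get-Service -Name 'SomeService'
         $svc.ServicesDependedOn | Select-Object Name, Status
         $svc.DependentServices  | Select-Object Name, Status

         # And a crude way to spot scheduled tasks that reference it.
         schtasks /query /fo LIST /v | findstr /i "SomeService"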
  3. Agree completely with Davey126 - I use the console (almost) exclusively as I still find it the fastest way to reach various functions. Until it's deprecated (hopefully never) I'll probably keep doing that.
  4. A thought - I've run into similar issues with mapped home drives in a domain environment, specifically when the My Documents directory is located on a mapped network drive and UAC is invoked for an application update / install. The update fails because the user credentials normally used to access the network drive path aren't available when the path is referenced by the installer in the elevated UAC session. Try this - it's a dirty hack - map a second drive letter to an existing drive in your computer, or use the old SUBST command line tool to temporarily map an H drive. You can get rid of it once you've installed whatever; it won't get you any closer to the source of the problem, though.
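     The SUBST route looks like this from a PowerShell prompt (the folder is just an example - point it at whatever makes sense):

         # Temporarily map H: to a local folder so the elevated installer can resolve the path.
         subst H: C:\Temp\FakeHome
         # ... run the installer / update, then remove the mapping ...
         subst H: /D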
  5. Thanks w3wilkes - just found his post in another thread...
  6. Simple question - as someone who didn't make it into the early betas (hey, I was happy with my WHS v1 build) I've been a Drive Bender licensee from pretty much the start; so, with the arrival of version 2 not far away, could someone confirm what the situation will be with regard to version upgrade fees, if applicable?
  7. Regarding checking it yourself - why not enable debug logging and update a duplicated file, then check the date / timestamps in the logs for the file updates (and possibly the open / lock / close actions, if logged).
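     If you'd rather compare the on-disk copies directly, something along these lines will do it - a sketch that assumes the pool member drives have letters and the file sits at the same relative path on each (both are placeholders):

         # Compare the timestamps of each physical instance of a duplicated file.
         $drives = 'D:\', 'E:\', 'F:\'                       # placeholder drive letters
         $relativePath = 'Shares\Documents\test.txt'         # placeholder path
         foreach ($d in $drives) {
             $f = Join-Path $d $relativePath
             if (Test-Path $f) {
                 Get-Item $f | Select-Object FullName, LastWriteTime, Length
             }
         }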
  8. As DriveBender simply redirects the I/O to the drives making up the pool, you might have a few alternate options as I see it:
     1) Open the DB console (or your choice of tool) and check that all drives making up the pool are attached and reported as healthy, then restart the DriveBender service. This will cause the drives to be rescanned and the virtual drive's directory tree to be rebuilt from the visible content on each drive in the pool.
     2) Assuming the directory tree was deleted from the desktop of the DB host (and not via the network), you could check the recycle bin. Then again, you might have done that already, or it might not be applicable.
     3) Stop the DriveBender service and then run a file recovery tool against each drive in the pool. Once you've recovered the missing directory from each drive, restart DB (and any other auto-start services dependent on it) and you should see the assembled directory and recovered files.
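     For option 1, the service restart from PowerShell is roughly this - the service name is an assumption on my part, so confirm it against the services list first:

         # Find the pool service and restart it so the drives get rescanned.
         Get-Service | Where-Object { $_.DisplayName -like '*Drive Bender*' }
         Restart-Service -Name 'DriveBender' -Force    # assumed name - use the one reported above
         Get-Service -Name 'DriveBender'               # confirm it came back up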
  9. Without too much info to go on here, I'd also suggest that you could be facing driver issues. When was the last time you updated the motherboard and storage drivers? I've recently had to update the JMicron driver and rerun the Intel INF Updater (i.e. Intel storage driver) on my server to get around a problem...
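     If you want a quick inventory of what's installed before hunting for updates, something like this dumps controller and disk driver versions via WMI (it can take a little while to run):

         # List storage-related driver versions currently installed.
         Get-WmiObject Win32_PnPSignedDriver |
             Where-Object { 'SCSIADAPTER', 'HDC', 'DISKDRIVE' -contains $_.DeviceClass } |
             Select-Object DeviceName, DriverVersion, DriverDate, DriverProviderName |
             Sort-Object DeviceName | Format-Table -AutoSize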
  10. Hey there, This is probably due to the "Server" service not running on the WHS box. The Server service is what shares resources from your server to the network. The WHS installer package for DriveBender (and probably the regular build as well) adds DB as a dependency for Server, so if the DB service / volume isn't running, or takes too long to find (add and parse) all disks and become available before Server can start, a timeout condition can occur. I initially had this issue from time to time as I use a series of iSCSI disks.
     You might consider setting Server to delayed start (list box on the General tab of the Server service properties), so DB (and its dependencies) will have some extra time to start up before Server starts.
     To manually start the affected services, probably the easiest thing to do is open the services list from Computer Management, sort by startup type, then individually start the ones with a startup type of "Automatic" that aren't currently running (a few won't necessarily start; Performance Monitor used to be a good example).
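     If you'd rather do both of those from a prompt, something along these lines should work - LanmanServer is the actual name of the "Server" service; the rest is a sketch:

         # Set the Server service to delayed automatic start
         # (the space after "start=" is required by sc.exe).
         sc.exe config LanmanServer start= delayed-auto

         # Start any Automatic services that aren't currently running.
         Get-WmiObject Win32_Service |
             Where-Object { $_.StartMode -eq 'Auto' -and $_.State -ne 'Running' } |
             ForEach-Object { Start-Service -Name $_.Name -ErrorAction SilentlyContinue }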
  11. I'd recommend eSATA over USB 2 on performance grounds. That said, my own recent run-in with a rubbish SATA card coupled with failing SATA and eSATA drives has resulted in a switch to iSCSI targets on top of RAID. Probably a bit over the top for most people
  12. Are you looking at converting to a new pool, or (as my reading suggests) merging your second drive with the first, resulting in a single pool spanning both disks? In the latter case you'll end up with the content of directories merged in the pool - if one disk contains \foo\bar and the other contains the same directory, you'll see one directory in the pool containing the content of both. An interesting question is what happens when both folders contain a file named \foo\bar\file.ext - which one "wins"?
  13. I've suffered a similar issue in the past, which I solved with a PowerShell script: I searched the pool directory tree for instances of *(1).* and put the results into an array. I then removed the (1) from the end of each fully qualified filename string and searched for a corresponding file - where one was found, the duplicate got deleted. Half a million files (total) on a fairly slow system took about 15-20 minutes to clear up; the script, however, took about an hour to write and debug... And before anyone asks, I didn't keep it - I'd have to rewrite it from scratch.
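     Since I didn't keep the original, here's a rough reconstruction along the same lines - treat it as a sketch: the pool root is a placeholder, and -WhatIf is left on so nothing is actually deleted until you remove it:

         # Find files named like "something(1).ext" and, where the matching
         # "something.ext" exists in the same directory, delete the "(1)" copy.
         $root = 'P:\'                                       # placeholder pool root
         Get-ChildItem -Path $root -Recurse -Filter '*(1).*' |
             Where-Object { -not $_.PSIsContainer } |
             ForEach-Object {
                 $originalName = ($_.BaseName -replace '\(1\)$', '') + $_.Extension
                 $original = Join-Path $_.DirectoryName $originalName
                 if (Test-Path -LiteralPath $original) {
                     Remove-Item -LiteralPath $_.FullName -WhatIf   # drop -WhatIf to actually delete
                 }
             }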
  14. Yes - your 4 step plan should work well, although if you've not enabled duplication you will lose a large part of your data set. If it were me and I had an additional disk to swap in to replace the failed one, I'd try this in an attempt to save my duplication settings and files:
     1. Shut down the server
     2. Remove the failed drive, replace it with the new drive
     3. Start the server, ensure DriveBender has started (the pool should be read-only)
     4. Add the new disk to the pool
     5. Remove the old disk (it will be marked as missing) from the pool
     6. Repair / recover the pool to rebuild data duplication
  15. Hi Both, No, this was a multi-volume pool:
     2x 1GB SATA disks (1 volume each)
     2x 2TB SATA disks (1 volume each)
     2 iSCSI targets presenting 5 volumes (to be rationalised down to 2 volumes as part of the work I'm performing)
     I was trying to gracefully remove one of the 2TB disks - the process succeeded in a single attempt, followed by the folder validations detailed; just over 3900 files were affected by that. Following the folder validation I then reran a pool rebuild, which didn't uncover any issues (per the system log), and also a read-only chkdsk of each volume on the server (not just the pool), again with no issues uncovered. Am logging a call as soon as I post this.
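     For reference, the read-only pass was just chkdsk without /f against each volume, along these lines (it only reports, it doesn't change anything):

         # Run a read-only chkdsk against every local fixed volume.
         Get-WmiObject Win32_LogicalDisk -Filter "DriveType = 3" |
             ForEach-Object { chkdsk $_.DeviceID }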