Division-M Community

Davey126

Members
  • Content Count: 39
  • Joined
  • Last visited
  • Days Won: 3

Reputation Activity

  1. Like
    Davey126 got a reaction from oj88 in Second Impressions - v1.1.0.0   
    So after an initial bad experience (see my other post) I decided to take a second look with a different dataset. In this test I copied a 750 MB folder of photos (multiple formats) to a CX drive connected to S3. I used the default file placement rule except that I changed the sync method from 'mirror' to 'remote'. Individual file sizes ranged from a few dozen KB to 95 MB.
     
    The upload proceeded as expected and fully saturated my connection (5/30 Mbps service). During that time I could access all files on the CX drive with excellent performance. That came as no surprise, as CX cached the entire folder during the upload (with associated local storage implications). However, if I attempted to access (view only) the file CX was currently uploading, the file became locked and essentially unusable. CX never recovered from this error and never finished uploading the file despite many attempts as recorded in the console. Apparently it was locked out too. There is no way to tell which file CX is working on without monitoring the file/folder management tab in the CX console. As before, I had to take extraordinary steps to delete the now 'permissionless' file. 
     
    Despite the above problem I allowed CX to complete the upload, then rebooted. All files (except those that were damaged) were shown on the CX drive with appropriate attributes. At some point the local cache had been cleared, leaving only pointers to the cloud equivalents. Opening smaller files was nearly instantaneous, as expected given my fairly robust Internet connection. Larger files (anything over 5 MB) were another story. A 7 MB image took nearly 30 seconds to open. A 93 MB image never made it. CX download speeds averaged 1.5-1.7 Mbps vs the 25-30 Mbps I would expect. CX cached the file in ridiculously small 64 KB chunks, which created many hundreds of tiny files that would have needed to be stitched together had the download actually completed (I killed it after 5 min). Downloading the same file via S3 Browser instantly saturated my link and completed in under 20 seconds. I would expect some overhead with CX but obviously this is unacceptable.
     
    So...for multiple reasons my second look at CX comes to the same conclusion as the first. Lots of promise but this version is not ready for prime time. Feels more like a late Alpha or early Beta release. 
     
    Edit: After removing the CX drive via the management console I discovered a handful of additional locked/permissionless files in the CX cache. None had been accessed during the above test. This should never happen in Windows unless junctions/hardlinks are improperly severed. Clearly some additional work is needed to stabilize this product. 
  2. Like
    Davey126 got a reaction from trpltongue in Converting existing disk   
    You can easily migrate existing files into a DB pool. Take a look at this KB article; post back if the procedure is not clear (hint: you may need to create a temporary mount point using Windows Disk Management or the tool of your choice).
     
    Faced with your situation I would create a DB pool with just the 3TB disk and then move the files as described in the article (a scripted sketch of the move step is at the end of this post). Because it is a move (vs a copy) within the same volume, it will complete very quickly. Then add the empty 5TB drive to the pool. You will end up with a two-disk, 8TB pool containing all of your old files.
     
    DB may have a procedure to automate existing file migration at the time of pool creation. It's been a while since I created a new pool on a populated drive. Regardless, I generally use the above approach as I understand and can control the process. 
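     
    If anyone wants to script the move step, here is a rough sketch in Python. The source path and the GUID-style pool folder name are placeholders/assumptions on my part; check the KB article for the exact folder Drive Bender expects on your disk.
     
        import shutil
        from pathlib import Path

        SOURCE = Path(r"D:\ExistingFiles")         # where the files live today (example)
        POOL_DIR = Path(r"D:\{pool-guid-folder}")  # placeholder for the folder DB created on the same disk

        for src in SOURCE.rglob("*"):
            if src.is_file():
                dest = POOL_DIR / src.relative_to(SOURCE)
                dest.parent.mkdir(parents=True, exist_ok=True)
                # Same-volume moves are just directory-entry updates, which is
                # why the migration completes quickly compared to a copy.
                shutil.move(str(src), str(dest))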
  3. Like
    Davey126 got a reaction from piroblecy in Memory Cache   
    Curious if anyone knows if DriveBender uses its own memory cache or leverages the Windows file cache. I was benchmarking drives and saw some crazy numbers posted for my DB pool ... faster than any mechanical drive and all but the most advanced SSDs. This could well be an artifact of the Windows file cache although I did not observe it on any other drive. The only time I have seen benchmark numbers in that range is when a dedicated memory cache sits in front of the drive (eg: Primocache).
     
    More of a curiosity question. Of course, if DB does create its own cache I'd be curious as to its size and whether it can be tweaked via registry settings.
  4. Like
    Davey126 got a reaction from JohnesRab in Memory Cache   
    Curious if anyone knows if DriveBender uses its own memory cache or leverages the Windows file cache. I was benchmarking drives and saw some crazy numbers posted for my DB pool ... faster than any mechanical drive and all but the most advanced SSDs. This could well be an artifact of the Windows file cache although I did not observe it on any other drive. The only time I have seen benchmark numbers in that range is when a dedicated memory cache sits in front of the drive (eg: Primocache).
     
    More of a curiosity question. Of course, if DB does create its own cache I'd be curious as to its size and whether it can be tweaked via registry settings.
  5. Like
    Davey126 got a reaction from DoctorTim in Total DBender newb Question - I don't see my pooled drive   
    Yeah - I also considered multiple pools but the high (and increasing) file counts aligned with the inaccessible pool and nearly identical drive flyouts suggest a single pool that is still being built.
     
    What I think may have happened is the "I:" drive links (loose term) got farkled and are pointing to an unpopulated pool. But DB doesn't know that and won't allow another MP to be created to the populated pool because it thinks I: is serving that purpose. To my knowledge there can only be one MP per pool. 
     
    The remediation may be as simple as a pool repair, or a somewhat more involved process of carefully deleting (or renaming) some of the unconventionally named folders DB generates for each pool. It's pretty easy to determine which folder contains pool data; just look at the size (a quick sizing script is at the end of this post).
     
    As others have said - open a ticket. I don't think the fix will be difficult...just need to know the best procedure. 
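     
    For reference, here is a quick-and-dirty way to size up the top-level folders so the one holding the pool data stands out. The drive letter below is just an example.
     
        import os
        from pathlib import Path

        root = Path("E:/")   # drive that holds the DB-generated folders -- example only
        for folder in root.iterdir():
            if not folder.is_dir():
                continue
            total = 0
            # os.walk with an onerror handler quietly skips folders we can't read
            # (e.g. System Volume Information).
            for dirpath, _dirs, files in os.walk(folder, onerror=lambda e: None):
                for name in files:
                    try:
                        total += os.path.getsize(os.path.join(dirpath, name))
                    except OSError:
                        pass
            print(f"{folder.name}: {total / 1024**3:.1f} GB")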
  6. Like
    Davey126 got a reaction from DoctorTim in Total DBender newb Question - I don't see my pooled drive   
    @w3wilkes - Agreed. It was likely my sloppy terminology that proved misleading. Your test demonstrated you can have multiple 'containers' in the same pool. These containers exist independently but reside on (share space within) the same underlying pool of drives. However, I don't believe you can have more than one drive letter pointing to the same container, although you can have a drive letter and multiple folder mount points reference a single container. Thinking about it, this is probably more of a Windows restriction than a DB limitation.
     
    So what does this have to do with the OP's problem? I believe something happened during pool creation that is causing DB to think a drive letter has been mapped to the container where his files reside. Due to the above restrictions he can't map another drive letter to the same container. I also believe his 'I:' drive is mapped to an empty container that happens to reside within the same pool.
     
    Of course all of the above is speculative and could be complete rubbish. Just my 2 cents ...
  7. Like
    Davey126 got a reaction from DoctorTim in Total DBender newb Question - I don't see my pooled drive   
    Best open a support ticket. There are slightly differing opinions on what transpired, current status and best path forward. There is a distinct difference between pools and 'containers' (my terminology) with the former containing one or more of the latter. w3wilkes is correct there can be multiple mount points per pool but they don't all point to the same place.
     
    I also believe there may be complexities due to the way the pool was created (conversion on each drive; retaining the original mount points) that should be sorted by the experts. With some experimentation the community could probably figure this out, but that's what product support is for.
     
    @DoctorTim - tough introduction to Drive Bender. If you stick around long enough to experience steady-state operations I think you will be pleased with the product. Pretty much a no-maintenance solution with a nice feature set. That said, there are other options which I'm sure you have encountered.
     
    Please post back the final resolution so we can all learn from this experience. 
  8. Like
    Davey126 got a reaction from DoctorTim in Total DBender newb Question - I don't see my pooled drive   
    Yup - @w3wilkes and I are on the same page with pools, containers (probably should retire that term!) and mount points. MP = container. I have speculated that two mount points can't refer to the same place, but DB somehow thinks that one of the MPs in the OP's config points to the location of his aggregated files...when it is actually pointing somewhere else. That's why he can't create a new mount point to his aggregated files. It will be interesting to see if any part of that theory pans out.
  9. Like
    Davey126 got a reaction from lupine in Beta v2.0.4.0a   
    Had the same problem locating a copy of v2.0.3.8 after v2.0.4.0 proved faulty. I have both now; PM me if you need a copy.
     
    v2.0.4.0 installer SHA-1 which others can confirm: 9FEA8511BB3D0344131F20B3CF65AF01CED855FD
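     
    For anyone who wants to verify their copy, here is a small Python snippet that computes the hash and compares it to the value above (the installer file name is just an example; use whatever your download is called).
     
        import hashlib

        expected = "9FEA8511BB3D0344131F20B3CF65AF01CED855FD"
        # Point this at your downloaded installer.
        with open("DriveBender_v2.0.4.0_Installer.exe", "rb") as f:
            digest = hashlib.sha1(f.read()).hexdigest().upper()
        print("match" if digest == expected else f"no match: {digest}")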
  10. Like
    Davey126 got a reaction from JohnesRab in Beta v2.0.2.9   
    So far the v2.0.2.9 beta has remained stable, although a few UI inconsistencies persist which I assume will be sorted in time. However, I cannot seem to trigger a manual balancing run and automatic balancing has yet to kick in (I tweaked the schedule to 'aggressive'...normally set to once/day). When I look in the real-time monitor I see the balancing job is scheduled but it never runs. Same thing with health checks. However, file validations run perfectly. Go figure.
     
    There is a 100GB difference in free space across the two drives in my primary pool with lots of small files and directories on both sides. Balancing on v1.9.5 seems to work just fine. I verified the v5 drivers are installed and did a full reinstall/rebuild/repair just to make sure. No sign of the v4 drivers. Any ideas?