Davey126

Second Impressions - v1.1.0.0


So after an initial bad experience (see my other post) I decided to take a second look with a different dataset. In this test I copied a 750 MB folder of photos (multiple formats) to a CX drive connected to S3. I used the default file placement rule except that I changed the sync method from 'mirror' to 'remote'. Individual file sizes ranged from a few dozen KB to 95 MB.

 

The upload proceeded as expected and fully saturated my connection (5/30 Mbps service). During that time I could access all files on the CX drive with excellent performance. That came as no surprise, as CX cached the entire folder during the upload (with the associated local storage implications). However, when I attempted to access (view only) the file CX was currently uploading, the file became locked and essentially unusable. CX never recovered from this error and never finished uploading the file despite many attempts, as recorded in the console. Apparently it was locked out too. There is no way to tell which file CX is working on short of monitoring the file/folder management tab in the CX console. As before, I had to take extraordinary steps to delete the now 'permissionless' file.
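
For anyone who hits the same wall, the sequence below is roughly what it takes to reclaim and delete such a file (a sketch using the stock Windows takeown/icacls tools from an elevated command prompt; the path is made up for illustration):

    :: Take ownership of the stuck file
    takeown /f "X:\CXDrive\Photos\stuck-image.jpg"

    :: Grant the Administrators group full control
    icacls "X:\CXDrive\Photos\stuck-image.jpg" /grant Administrators:F

    :: The delete should now succeed
    del "X:\CXDrive\Photos\stuck-image.jpg"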

 

Despite the above problem I allowed CX to complete the upload, then rebooted. All files (except those that were damaged) were shown on the CX drive with the appropriate attributes. At some point the local cache had been cleared, leaving only pointers to the cloud equivalents. Opening smaller files was nearly instantaneous, as expected given my fairly robust Internet connection. Larger files (anything over 5 MB) were another story. A 7 MB image took nearly 30 seconds to open. A 93 MB image never made it. CX download speeds averaged 1.5-1.7 Mbps vs the 25-30 Mbps I would expect. CX cached the file in ridiculously small 64 KB chunks, which created many hundreds of tiny files that would have needed to be stitched together had the download actually completed (I killed it after 5 min). Downloading the same file via S3 browser instantly saturated my link and completed in under 20 seconds. I would expect some overhead with CX, but this is obviously unacceptable.
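
To put numbers on why tiny chunks kill throughput: each sequential 64 KB range request pays a full round trip, so the transfer rate tops out near chunk size divided by per-request latency, no matter how fast the pipe is. As a back-of-envelope check, at ~300 ms per HTTPS request, 64 KB per round trip works out to roughly 1.7 Mbps, suspiciously close to what I measured. Here is a rough Python/boto3 sketch of the two approaches (not how CX is actually implemented; the bucket and key names are invented):

    import boto3

    s3 = boto3.client("s3")
    BUCKET = "example-cx-bucket"      # hypothetical, for illustration only
    KEY = "photos/large-image.tif"

    def fetch_in_small_chunks(chunk=64 * 1024):
        # Sequential 64 KB ranged GETs - one full round trip per chunk.
        size = s3.head_object(Bucket=BUCKET, Key=KEY)["ContentLength"]
        data = bytearray()
        for start in range(0, size, chunk):
            end = min(start + chunk, size) - 1
            resp = s3.get_object(Bucket=BUCKET, Key=KEY,
                                 Range=f"bytes={start}-{end}")
            data += resp["Body"].read()
        return bytes(data)

    def fetch_in_one_request():
        # A single streaming GET lets TCP window scaling saturate the link,
        # which is presumably why S3 browser finished in under 20 seconds.
        return s3.get_object(Bucket=BUCKET, Key=KEY)["Body"].read()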

 

So...for multiple reasons my second look at CX comes to the same conclusion as the first. Lots of promise but this version is not ready for prime time. Feels more like a late Alpha or early Beta release. 

 

Edit: After removing the CX drive via the management console I discovered a handful of additional locked/permissionless files in the CX cache. None had been accessed during the above test. This should never happen in Windows unless junctions/hardlinks are improperly severed. Clearly some additional work is needed to stabilize this product. 
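
If you suspect a severed hard link is behind one of these orphans, fsutil can confirm it by listing every path that still references the file (the path below is hypothetical):

    fsutil hardlink list "C:\ProgramData\CX\Cache\orphan.dat"

And "dir /aL" in the cache folder will flag any junctions or symlinks left behind.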



Thanks for the feedback. We are aware of the file locking issue, and it is being addressed in the next update, which is scheduled for release later this week. I'll also look at the caching... the cache block size you mentioned is not right, so we'll take a closer look at what is happening here.


Quick footnote: I received a direct communication from Division-M a few days later acknowledging my concerns, which was comforting. I recently tested v1.2 and found that some (albeit not all) of the issues had been addressed. I'm not ready to endorse Cloud Xtender (CX) quite yet, especially for important content. It remains an interesting product with a unique feature set that will only get better with time.

 

For those wondering if this product is worth their time I would say 'yes'. I watched Drive Bender (DB) evolve and stabilize; it is now a trusted component on several production systems. CX appears to leverage many of the same technologies as DB so I would expect the stabilization curve to be both steeper and shorter. Whether it can eventually compete with similar offerings (of which there are few) has yet to be determined.
