Division-M Community




Posts posted by Davey126

  1. Despite the best intentions of the primary software developer (Anthony), it would appear Division-M products have become abandonware. Do others concur?


    DB continues to perform flawlessly on several machines, so that remains an essential part of my portfolio.


    I have CX installed on one machine but do not use it, as the product does not appear to enjoy widespread use. I fear an undiscovered bug will ultimately lead to data corruption/loss. I'll probably walk away from that one despite promising developments late last year.


    Kinda sad, as I do believe DB is best in breed - at least it was the last time I shopped around.

  2. As you are probably aware, Division-M is a shell of the company it once was, with development and support all but halted on consumer products. I would not expect a formal response to this inquiry.


    I personally have experienced no issues with v2.4 on several machines and continue to use it daily. Rather than return to Win 8.1 and DB v1.x, you may want to consider manually recovering your files from the respective drives (potentially time consuming depending on drive count and mirroring options) and starting fresh with v2.4.x. That said, you may want to consider another product such as DrivePool or Microsoft's own Storage Spaces if ongoing support is important.

  3. After penning a couple of 'impressions' posts I pretty much stayed away from CX - occasionally testing new versions before broad release but rarely using the product for day-to-day work. I completely uninstalled mid-4Q15 due to zombie CPU use and other idiosyncrasies that would show up at inopportune times.


    Today I went to snag the latest DB client (v2.4.x) and saw CX had advanced several versions. So I reinstalled and found it to be both stable and a bit more refined than previous builds. So far CPU use has been quite modest - on par with other products with similar functionality. 


    I'm looking forward to playing with CX a bit more in the coming days. Kudos to Anthony for putting on a few layers of polish during a difficult time. Hopefully some bigger players take notice and throw coin at the project. DB and CX are fine products that deserve far more than the paltry sum collected for each license.

  4. There is a service patch (v1600b) that corrects this issue. Anthony sent me a pre-release shortly before the disappointing Nov 19th blog post. I was hoping this would show up in the downloadable product after the Dec 2nd post but nothing yet. I have since disabled CX awaiting time to thoroughly test and determine whether it is stable enough to integrate into our production environment. It has come a long way in the last several point releases but the support model is obviously concerning. DB has a longer track record so I'm less concerned about that product.


    Sans patch, you can get v1600a working by uninstalling CX and DB (with reboots between each) and then reinstalling both products, DB first (plus two more reboots). Rather painful. I hope v1600b arrives soon!

  5. I also did not vote as the option I prefer (prompting the user to uninstall but not mandating it) is not one of the choices. Like others I have always performed an 'upgrade in place' and only experienced a problem once. Continuing an automated install after a reboot can be problematic on some workstations and will likely generate support tickets for things that are unrelated to DB. A strongly worded warning seems a better path IMO.

  6. Looks like a minor update to address issues introduced with the previous release. Change logs below:


    Release v2.3.6.0 release (2015-06-03)
    Users running multiple pools could experience errors during a file system health check.
    Some components did not install/upgrade correctly and may cause issues.
    Release v2.3.5.0 release (2015-06-01)
    The file balancing intervals could be set to a value that was not expected.
    SMART settings for individual drives were not taking effect.
    The SMART service and Windows tray application can use excessive CPU when the host machine comes out of sleep mode.
    Scheduled tasks can fail to load on start-up.
    The core driver is spamming the Windows event log.
    If the pool switches to fault tolerant mode, the duplicate file may not be displayed.
    On some occasions it is not possible to create a network mount point.
    Improved the efficiency of the pool health check.
    Fixed a font issue with the duplication manager.
    Windows 10 support.

  7. Just had a notification of a new release - v2.3.5.0.


    Can't see any release notes as yet.


    I have seen this happen before; link doesn't get updated in a timely manner. Click here for v2350 download. Alternatively, right click the 'official' download link->copy->paste and modify the version number in the resulting URL.


    Edit: After install management console still reports v2.3.0.0. However, rtm (real time monitor) reveals v2.3.5.0 running under the covers.

  8. Updated for v1.4


    SOFTWARE: Cloud Xtender
    PLATFORM: Windows XP, Windows Vista, Windows 7, Windows 8, Windows 8.1, Windows Server 2003, Windows Server 2008, Windows Server 2008 R2, Windows Server 2012 and Windows Server 2012 R2 (x86 and x64 support for all the specified platforms).
    Release information
    [*] General information.
    [b] Bug fix.
    [u] Updated feature.
    [n] New feature.
    [d] Driver change.
    [k] Known issue or limitations.
    [e] Experimental (feature is implemented but under review)
    Release v1.4.0.0 release (2015-04-20)
    [n] Windows Explorer integration.
    Improved handling of multiple files of the same name under Google Drive.
    The enable/disable explorer icon option does not work.
    [n] Added real time monitor to client (double click the Cloud Xtender graphic in the upper left corner).
    Improved handling of client/server disconnections.
    Installer not removing previous versions from the installed programs list.
    Release v1.3.0.0 release (2015-04-15)
    When restoring a One Touch Config, an error dialog can be displayed even though the restore was successful.
    [n] Added new remote mode streaming engine.
    [n] When the configuration is backed up or restored (via One Click Config), folders containing placement rules are also backed up and restored.
    When the default rule is created, the cloud connection is no longer set to "Use multiple connections", but is set to the specific cloud connection just created.
    Fixed a possible deadlock when managing placement rules under the File/Folders tab.
    Release v1.2.0.0 release (2015-04-29)
    The available free space on a cloud drive can be incorrectly reported.
    When a file is syncing, it can become locked and non-accessible.
    A file in remote mode can appear corrupt due to the tracker not being rendered correctly.
    When a remote is being cached, the cache size is not dynamically adjusting.
    If a file's mode is changed from mirror to remote, the tracker is not retaining the file size or the file's modified time stamp.
    Release v1.1.5.0 release (2015-04-25)
    When there are multiple cloud drives, the individual placement rules can be applied to the wrong cloud drive.
    You can delete the default placement rule.
    [n] When scanning a remote cloud service, an audit file is now created.
    [n] Automatic version update notification.
    [n] Added tray notification application.
    Release v1.1.0.0 release (2015-04-17)
    The One Touch Configuration feature was not updating correctly in the Cloud Xtender Manager.
    The help text was not updating correctly in the Cloud Xtender Manager.
    There was an issue when displaying the placement rule tab in the Cloud Xtender Manager.
    When editing an Azure cloud connection, the values being shown were incorrect.
    Release v1.0.6.0 rc1 (2015-04-16)
    Some files miss being initially sync'd (although they are sync'd at a later date).
    Changed the way the client displays queued files.
    The summary displayed when creating a cloud connection was not displaying all the information correctly.
    When the client cannot connect to the service, it now displays an obvious message and handles the broken connection much better.
    At times named stream files were being sync'd; these are now ignored.
    When adding a cloud connection, the interface was not updating correctly.
    Added the ability to "re-authenticate" a cloud connection.
    Fixed an issue that was preventing the file usage chart from updating correctly.
    When uninstalling Cloud Xtender, if it detects Drive Bender, it will not uninstall the driver.
    Many client/service bug fixes.
    Release v1.0.4.2 beta (2015-04-14)
    When adding files from a remote cloud service, the queue can stall and repeatedly process the same file.
    The processing result of a queued file can be ambiguous.
    Scanning remote provider and sub folders not picking up sub folders.
    Files using the Box provider can fail to sync to an error "Item with the same name already exists".
    Release v1.0.4.0 beta (2015-04-13)
    Updated Cloud Xtender client UI.
    [n] File placement rules now support file encryption.
    Under some very rare circumstances, internal worker threads could hang.
    The internal processing queue that handles files syncing can stall if a sync process continually fails.
    The internal processing queue has been split into an "in memory queue" and a "file system queue". Only critical actions are placed in the "in memory queue"; this is to lower the memory usage on systems with many files.
    Handling of failed queued actions has been improved; after a period the retry rate is reduced and a notification message is sent.
    Handling of many email notifications has been improved, now after a certain send limit, the messages are bundled into a single email.
    [n] Added "One Touch Configuration" feature (disabled in this beta).
    Many client/service bug fixes.
    Release v1.0.3.0 beta (2015-03-20)
    [n] Added Azure support.
    [n] Added Box support.
    When starting if a provider has an invalid registry entry, the initialization fails with an exception. This then causes the client to crash at start up.
    When defining a rule, if the selected cloud drive does not have one or more cloud connections, an error occurs.
    Release v1.0.2.0 beta (2015-03-12)
    [n] Added OneDrive support.
    [n] Added Dropbox support.
    Deleting many files and / or folders is causing a locking issue.
    Client transfer progress is not updating correctly.
    Renaming a cloud connection is breaking other cloud connections.
    Improved upload and download performance.
    Improved remote file caching performance.
    When reading from cache, the read process can get caught in a deadlock.
    When items are added to the syncing queue, they can be removed before they have been executed.
    The client's File/Folder view can get corrupt and not display the correct file status.
    Many client/service bug fixes.
    Release v1.0.1.0 beta (2015-02-06)
    When scanning the cloud drive for files, if the sync mode was set to "Remote", the physical file was being pulled down. Now we are only pulling the details required (not the whole file).
    The File/Folder view now works as intended.
    When switching cloud drives in the client, if switched fast, an error was occurring.
    When adding a cloud provider and have selected "Scan for existing files", an error is returned.
    When creating the first cloud connection, you can now specify the default rule's sync mode (i.e. Mirror or Remote).
    Fixed a number of client interface issues.
    [k] When a file is set to Remote sync mode, real time playing of the file (i.e. caching) is not optimal.
    [k] The upload and download speeds have not been optimized.
    Release v1.0.0.7 beta (2015-01-21)
    The client was auto detecting the local language and causing a strange mix of different languages. Languages now fixed to English for the moment.
    The sync modes have been changed to be more flexible (you should reset any existing rules). There are now 3 options:
    · Mirror: In this mode, a file copied to the Cloud Xtender drive will be COPIED to the cloud provider and the local and remote versions of the file are maintained.
    · Remote: In this mode, a file copied to the Cloud Xtender drive will be MOVED to the cloud provider and the local file is then removed (leaving a local tracker file).
    · Mirror -> Remote (mode currently disabled): In this mode, a file starts in mirrored mode, then after a set period is automatically switched to remote mode.
    Now in either the Remote or Mirror modes, you can select the type of file monitoring you want.
    · Local Monitoring Only: Local file changes are registered and files are sync'd accordingly. With this type of monitoring, any changes made on the cloud storage provider are not captured. You might use this monitoring type if you are simply backing up files, etc.
    · Bi-directional Monitoring: Both local and remote changes are registered and files are sync'd accordingly. With this type of monitoring, any file changes or new files are captured both locally and remotely. The frequency of the remote monitoring is determined when creating the rule. Although on the surface this would be the preferred type, it does mean more requests to the cloud providers and hence more network traffic.
    When creating a cloud connection, there is an option to scan the cloud storage and sync any existing files.
    You can scan the remote cloud folders / drives via the File/Folder view by right-clicking a folder (to scan the entire remote drive, right-click the root).
    [k] The files/folders view of a Cloud Xtender drive can sometimes cause a client exception, and also display multiples of the same file.
    Release v1.0.0.6 beta (2015-01-17)
    An issue with the language setting can cause an exception on start up.
    Release v1.0.0.5 beta (2015-01-07)
    [*] Initial v1 beta release.
    [k] Create a drive from an existing storage provider not yet implemented.
    [k] The Windows Explorer integration not included in this beta.
    [k] When streaming a remote file, the caching can sometimes cause a stutter.
    [k] Only the English language is available in this beta.
    [k] Local help not implemented in this beta.

  9. Quick footnote: I received a direct communication from Division-M a few days later acknowledging my concerns which was comforting. I recently tested v1.2 and found some (albeit not all) issues had been addressed. I'm not ready to endorse Cloud Xtender (CX) quite yet, especially for important content. It remains an interesting product with a unique feature set that will only get better with time.


    For those wondering if this product is worth their time I would say 'yes'. I watched Drive Bender (DB) evolve and stabilize; it is now a trusted component on several production systems. CX appears to leverage many of the same technologies as DB so I would expect the stabilization curve to be both steeper and shorter. Whether it can eventually compete with similar offerings (of which there are few) has yet to be determined.

  10. Captured during product installation (last three releases):


    Release v1.2.0.0 release (2015-04-29)
    The available free space on a cloud drive can be incorrectly reported.
    When a file is syncing, it can become locked and non-accessible.
    A file in remote mode can appear corrupt due to the tracker not being rendered correctly.
    When a remote is being cached, the cache size is not dynamically adjusting.
    If a file's mode is changed from mirror to remote, the tracker is not retaining the file size or the file's modified time stamp.
    Release v1.1.5.0 release (2015-04-25)
    When there are multiple cloud drives, the individual placement rules can be applied to the wrong cloud drive.
    You can delete the default placement rule.
    [n] When scanning a remote cloud service, an audit file is now created.
    [n] Automatic version update notification.
    [n] Added tray notification application.
    Release v1.1.0.0 release (2015-04-17)
    The One Touch Configuration feature was not updating correctly in the Cloud Xtender Manager.
    The help text was not updating correctly in the Cloud Xtender Manager.
    There was an issue when displaying the placement rule tab in the Cloud Xtender Manager.
    When editing an Azure cloud connection, the values being shown were incorrect.
    [*] General information.
    [b] Bug fix.
    [u] Updated feature.
    [n] New feature.
    [d] Driver change.
    [k] Known issue or limitations.
    [e] Experimental (feature is implemented but under review)

  11. So after an initial bad experience (see my other post) I decided to take a second look with a different dataset. In this test I copied a 750 MB folder of photos (multiple formats) to a CX drive connected to S3. I used the default file placement rule except changed the sync method from 'mirror' to 'remote'. Individual file sizes ranged from a few dozen KB to 95 MB.


    The upload proceeded as expected and fully saturated my connection (5/30 Mbps service). During that time I could access all files on the CX drive with excellent performance. That came as no surprise as CX cached the entire folder during the upload (with associated local storage implications). However, if I attempted to access (view only) the file that CX was currently uploading, the file became locked and essentially unusable. CX never recovered from this error and never finished uploading the file despite many attempts as recorded in the console. Apparently it was locked out too. There is no way to tell which file CX is working on without monitoring the file/folder management tab in the CX console. As before, I had to take extraordinary steps to delete the now 'permissionless' file.


    Despite the above problem I allowed CX to complete the upload, then rebooted. All files (except those that were damaged) were shown on the CX drive with appropriate attributes. At some point the local cache had been cleared, leaving only pointers to the files' cloud equivalents. Opening smaller files was nearly instantaneous, as expected given my fairly robust Internet connection. Larger files (anything over 5 MB) were another story. A 7 MB image took nearly 30 seconds to open. A 93 MB image never made it. CX download speeds averaged 1.5-1.7 Mbps vs the 25-30 Mbps I would expect. CX cached the file in ridiculously small 64KB chunks, which created many hundreds of tiny files that would have needed to be stitched together had the download actually completed (I killed it after 5 min). Downloading the same file via S3 Browser instantly saturated my link and completed in under 20 seconds. I would expect some overhead with CX but obviously this is unacceptable.
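    The chunk count and transfer times above are easy to sanity-check with a bit of arithmetic (a quick sketch; the sizes and speeds are the approximate figures observed in the test):

```python
# Back-of-the-envelope check on the CX download figures above.
file_mb = 93                    # the large image that never finished
chunk_kb = 64                   # CX cache chunk size observed
chunks = (file_mb * 1024) // chunk_kb
print(chunks)                   # 1488 tiny cache files to stitch together

file_mbits = file_mb * 8        # MB -> megabits
cx_secs = file_mbits / 1.6      # at the ~1.5-1.7 Mbps CX achieved
link_secs = file_mbits / 28     # at the 25-30 Mbps the link supports
print(round(cx_secs), round(link_secs))   # roughly 465 s via CX vs 27 s at link speed
```

    Nearly 1,500 chunk files and an eight-minute projected download for a single image line up with the "killed it after 5 min" experience.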


    So...for multiple reasons my second look at CX comes to the same conclusion as the first. Lots of promise but this version is not ready for prime time. Feels more like a late Alpha or early Beta release. 


    Edit: After removing the CX drive via the management console I discovered a handful of additional locked/permissionless files in the CX cache. None had been accessed during the above test. This should never happen in Windows unless junctions/hardlinks are improperly severed. Clearly some additional work is needed to stabilize this product. 

  12. Hi there,  long time user and fan of Drive Bender!


    Just wondering if anyone has experienced this before, and if so, has any advice:


    I currently have a media back-end with 5 pooled drives. I have several hundred media video files of various formats. I connect the shared drive via SMB to my media front-end, KODI (formerly XBMC). I also have a wired computer and several wifi laptops around the house that have access to the shared folders. My media files are marked for duplication.


    What has been happening is that on about 1 in 20 media files KODI won't run it. It shows as 0KB through the SMB share. The connected laptops and desktops will not run it either. However, when I log into my back-end, it's clear that the media files are still "there" and I can in fact access them through the back end. I have gone into individual DB drives and deleted the files that aren't working through SMB and then repaired the pool and voila! Once the duplicated file is recovered it now plays through SMB.

    Has anyone had anything like this, and can anyone suggest any fixes? I have close to 800 media files and going through each one to determine if it needs to be deleted and then restored from duplication is a daunting, tedious task.



    Generally when I encounter a 0KB file on some type of share it boils down to a problem with permissions. I'm sure they are fine on the back-end but somehow get farkled from KODI's perspective. Obviously that doesn't provide a solution but perhaps might lead you in a productive direction.


    Interestingly, I have recently encountered separate instances of 0KB files with DB and CX. In both cases I had to work around the presenter (DB or CX) to 'fix' the problem which in my case was deleting the file at its source. I'm somewhat forgiving of CX as it is a recently released product that is bound to have some bugs. In the DB instance I created an NTFS junction to a non-pooled drive which went south when the machine rebooted. In retrospect that probably wasn't a good idea (and likely not supported) so I can't blame DB for that one. But it does suggest some fragility with both products given they are software based manipulations of the underlying file system. 
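    For the 0KB hunt specifically, a short script could at least automate finding the affected files rather than checking all ~800 by hand. This is only a sketch; the share path and extension list are placeholder assumptions to adjust for your setup:

```python
# Sketch: walk the SMB share and report media files that show as 0 KB,
# so they can be deleted and restored from duplication in one pass.
# The share path and extension set below are placeholders.
from pathlib import Path

MEDIA_EXTS = {".mkv", ".mp4", ".avi", ".m4v"}  # adjust to your library

def find_zero_byte(root):
    """Return media files under root whose reported size is 0 bytes."""
    return sorted(
        p for p in Path(root).rglob("*")
        if p.is_file()
        and p.suffix.lower() in MEDIA_EXTS
        and p.stat().st_size == 0
    )

if __name__ == "__main__":
    for f in find_zero_byte(r"\\backend\media"):  # placeholder share path
        print(f)
```

    Run it from one of the client machines (so it sees the same 0KB view the front-end does) and you get a worklist instead of a manual audit.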

  13. Even with balancing disabled DB sets up identical directory structures on both drives. That may be what you were hearing although it should not have lasted very long. In addition there may have been some initial integrity/health checks taking place as it was a new pool. You can use the DB client or console (separate executables) to monitor background operations. 

  14. Hello all,


    I am trying out DB for the first time as I have a 3TB and 5TB external hard drive.  The 3TB is full, the 5TB is empty.  I would like to make them appear as 1 drive.  I have used the DB Manager and selected to convert the 3TB to a pool.  Unfortunately none of my 3TB worth of data show up in the new pool.  I've rebooted multiple times but nothing is there.  When I manually remove the drive, all my data shows again.


    I thought this was the biggest selling point of DB, being able to merge disks with existing data into 1, which is why I wanted to try it out, but so far I've had no success.


    Am I doing something wrong?




    You can easily migrate existing files into a DB pool. Take a look at this KB article; post back if the procedure is not clear (hint: you may need to create a temporary mount point using Windows Disk Management or the tool of your choice).


    Faced with your situation I would create a DB pool with just the 3TB disk and then move the files as described in the article. Because it is a move (vs copy) the actions will take place very quickly. Then add the empty 5TB drive to the pool. In the end you will end up with a two disk, 8TB pool containing all of your old files.


    DB may have a procedure to automate existing file migration at the time of pool creation. It's been a while since I created a new pool on a populated drive. Regardless, I generally use the above approach as I understand and can control the process. 
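    If the populated drive holds a lot of top-level folders, the manual move can be scripted. A hedged sketch only: the mount-point paths are placeholders, and the real destination layout comes from the KB article:

```python
# Sketch of the manual migration: move everything from the old drive's
# temporary mount point (src) into the pool's folder structure (dst).
# Both paths are placeholders. Because source and destination sit on the
# same physical volume, each move is effectively a rename and is fast.
import shutil
from pathlib import Path

def migrate(src, dst):
    """Move every top-level item under src into dst, skipping name clashes."""
    src, dst = Path(src), Path(dst)
    dst.mkdir(parents=True, exist_ok=True)
    for item in src.iterdir():
        target = dst / item.name
        if target.exists():
            print(f"skipped (already exists): {item.name}")
            continue
        shutil.move(str(item), str(target))

# Example (placeholder mount points for the 3TB drive and the pool):
# migrate("T:/", "P:/")
```

    Skipping name clashes keeps the script from silently merging folders; resolve any skipped items by hand.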

  15. Edit: After working with the product a bit more I have come to the conclusion that it is unusable on my system (Win 8.1 x64). Files copied to the Cloud mount point become permanently locked with NTFS permissions removed. They cannot be viewed or manipulated from the host machine. Had to use other tools to clear the local cache. Will wait for the next release.


    I purchased a couple licenses (to support further development) recognizing there would likely be problems with this early production release. After some struggles I was able to hook into my S3 account and begin experimenting with basic functionality. As promised the cloud mount point behaves much like a local drive. Very cool! However, I have already run into several problems:


    - S3 bucket must exist; no option to create within CX. Had to use S3 Browser to create a bucket (this needs to get fixed).


    - Can't set the host drive (where local files are cached) to a DB pool even though that option was available. Weird things started happening, ultimately resulting in locked files that required a fair bit of brute force to delete. The file cache appeared to remain on my system drive even though the CX client indicated it was on the DB mount point. This persisted through a reboot. If DB pools are not valid they should not be offered in the selection dialog.


    - Could not establish a connection to Google Drive. Tried two accounts with different security characteristics. Both failed with an 'invalid client' error along with a message stating "OAuth client was not found". Checked the client IDs passed, both were valid. 


    - Could not establish a connection to OneDrive. Tried two different accounts. Came back with a generic error stating OneDrive is experiencing technical difficulties. I was able to log in fine from IE...but not from Chrome or Opera. Opera is the system default.


    Lots of promise in CX but plenty of bugs to squash too.  

  16. For what it's worth ...


    Been running 2190 for ~6 hrs and made heavy use of my pool during that time. Also rebooted several times for reasons unrelated to Drive Bender. System has been rock solid with none of the unexplained CPU spikes that were sometimes traceable to DB activity. Obviously need more time to incubate but early indications are favorable.

  17. Good thought - both drives have 16MB caches. But the files written for benchmarking are far larger.


    I too have noticed Drive Bender struggles rapidly accessing large numbers of small files on pooled drives. Some of this is due to OS overhead but it's clearly slower than accessing the same files off native drives. In my case I typically work with only a few files at a time - both small and large (sometimes dozens of GB each). Most involve simple moves/copies with occasional edits. For these types of operations DB shines. 

  18. Yeah - there is clearly some caching going on whether it be Windows or Drive Bender (see below). Not bad for a pair of 500GB spinners; check out those access times - lol. The cache must be relatively small as evidenced by the trail off at the end of the ATTO run with larger data sets. Still pretty cool to see those numbers, especially on my modest core 2 era rig.


    I could dig to figure out where the caching is taking place but in the end it really does not matter. I'm getting great performance out of the pool for my usage pattern. Kinda fun to back perception with numbers.


    BTW - the Drive Bender Manager concurred with the read/write numbers during benchmarking. The read value actually overflowed the allocated space at the bottom of the UI. The developers probably didn't anticipate throughput numbers exceeding 999 MB/s. Also suggests the UI isn't measuring raw IO at the physical disk level.



  19. Curious if anyone knows if DriveBender uses its own memory cache or leverages the Windows file cache. I was benchmarking drives and saw some crazy numbers posted for my DB pool ... faster than any mechanical drive and all but the most advanced SSDs. This could well be an artifact of the Windows file cache although I did not observe it on any other drive. The only time I have seen benchmark numbers in that range is when a dedicated memory cache sits in front of the drive (eg: Primocache).


    More of a curiosity question. Of course, if DB does create its own cache I'd be curious as to its size and whether it can be tweaked via registry settings  ^-^ .
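    For anyone wanting to see how much a RAM cache skews such numbers, a generic illustration (not DB-specific): time a cold read against a repeat read, which the OS normally serves from its file cache.

```python
# Illustration of why benchmark reads can beat the physical disk: the
# second read of a file is typically served from the OS file cache in RAM.
# (The first read may already be partially cached too, since we just wrote
# the file, so treat the numbers as indicative rather than exact.)
import os
import tempfile
import time

path = os.path.join(tempfile.mkdtemp(), "bench.bin")
with open(path, "wb") as f:
    f.write(os.urandom(16 * 1024 * 1024))  # 16 MB of random data

def timed_read(p):
    """Read the whole file, returning (elapsed seconds, bytes read)."""
    start = time.perf_counter()
    with open(p, "rb") as f:
        data = f.read()
    return time.perf_counter() - start, len(data)

first, size = timed_read(path)
second, _ = timed_read(path)
print(f"{size} bytes: first read {first:.4f}s, repeat read {second:.4f}s")
```

    A benchmark that reads data it just wrote can post numbers no spinner could achieve, which is consistent with the ATTO results trailing off once the data set outgrows the cache.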

  20. I have two physical drives in my pool; both sleep after the designated timeout period (60 min) if there has been no activity. What's really interesting is only one drive will wake if I access a folder or file that resides only on that drive. Sometimes the second drive will spin up a few minutes later if DB needs to do maintenance after manipulating a few large files.


    Win 8.1 Pro x64. Pool consists of two SATA drives connected directly to motherboard. Pretty simple; YMMV.

  21. Yup - @w3wilkes and I are on the same page with pools, containers (probably should retire that term!) and mount points. MP = container. I have speculated that two mount points can't refer to the same place, but DB somehow thinks that one of the MPs in the OP's config points to the location of his aggregated files...when it is actually pointing somewhere else. That's why he can't create a new mount point to his aggregated files. It will be interesting to see if any part of that theory pans out.

  22. Best open a support ticket. There are slightly differing opinions on what transpired, current status and best path forward. There is a distinct difference between pools and 'containers' (my terminology) with the former containing one or more of the latter. w3wilkes is correct there can be multiple mount points per pool but they don't all point to the same place.


    I also believe there may be complexities due to the way the pool was created (conversion on each drive; retaining the original mount points) that should be sorted by the experts. With some experimentation the community could probably figure this out but that's what product support is for :)


    @DoctorTIm - tough introduction to Drive Bender. If you stick around long enough to experience steady state operations I think you will be pleased with the product. Pretty much a no maintenance solution with a nice feature set. That said, there are other options which I'm sure you have encountered.


    Please post back the final resolution so we can all learn from this experience. 

  23. @w3wilkes - Agreed. It was likely my sloppy terminology that proved misleading. Your test demonstrated you can have multiple 'containers' in the same pool. These containers exist independently but reside on (share space within) the same underlying pool of drives. However, I don't believe you can have more than one drive letter pointing to the same container, although you can have a drive letter and multiple folder mount points reference a single container. Thinking about it, this is probably more of a Windows restriction than a DB limitation.


    So what does this have to do with the OP's problem? I believe something happened during pool creation that is causing DB to think a drive letter has been mapped to the container where his files reside. Due to the above restrictions he can't map another drive letter to the same container. I also believe his 'I:' drive is mapped to an empty container that happens to reside within the same pool.


    Of course all of the above is speculative and could be complete rubbish. Just my 2 cents ...
