Division-M Community
KaySee

SnapRaid configuration with Drive Bender


Hi,

Is anyone using SnapRaid with Drive Bender?

What is the recommended way to configure a drive pool to use SnapRaid?

I have never used SnapRaid, but it looks like it needs direct access to the files on each disk. Is this correct?

If the above is true, should all the pooled drives be assigned a drive letter? If so, what are the issues relating to Drive Bender?

Would it be better to assign a mount point on the C: drive for each pooled drive? e.g. C:\Mount\DB1, C:\Mount\DB2, etc.

What about the 'snapraid.content' files? Can they be put in the root of each pooled drive, e.g. C:\Mount\DB1\snapraid.content, C:\Mount\DB2\snapraid.content, etc., or should one go on the pool drive itself, e.g. D:\snapraid.content if your pool is mounted as D:? Would having a content file on the pool itself cause some kind of loop condition?

This is the configuration I am thinking of using. My parity volume is assigned the drive letter Z:, my drive pool is assigned the drive letter D:, and each drive in the pool is mounted to an empty folder in C:\Mount.

parity Z:\snapraid.parity
content C:\SnapRaid\snapraid.content
content D:\SnapRaid\snapraid.content
disk d1 C:\Mount\DB1\{3C023BA8-FEB5-49A9-9795-EF8A62107750}
disk d2 C:\Mount\DB2\{3C023BA8-FEB5-49A9-9795-EF8A62107750}
disk d3 C:\Mount\DB3\{3C023BA8-FEB5-49A9-9795-EF8A62107750}
disk d4 C:\Mount\DB4\{3C023BA8-FEB5-49A9-9795-EF8A62107750}
disk d5 C:\Mount\DB5\{3C023BA8-FEB5-49A9-9795-EF8A62107750}
exclude Thumbs.db
exclude \$RECYCLE.BIN
exclude \System Volume Information
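
For reference, this is roughly how I plan to create the mount points from an elevated command prompt. The volume GUID below is only a placeholder; running mountvol with no arguments lists the real volume GUIDs and their current mount points:

rem list volume GUIDs and existing mount points
mountvol
rem create the empty folder, then mount the volume onto it
md C:\Mount\DB1
mountvol C:\Mount\DB1 \\?\Volume{xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx}\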

Examples of other setups would be nice.


Hi KaySee,

I'm new with it as well; I just started setting up my SnapRAID configuration last week. Based on all the recommendations from the documentation, I ended up with this config:

parity F:\SnapRAID\parity\snapraid.parity
2-parity G:\SnapRAID\parity\snapraid.2-parity

content C:\SnapRAID\content\snapraid.content
content D:\SnapRAID\content\snapraid.content
content E:\SnapRAID\content\snapraid.content
content F:\SnapRAID\content\snapraid.content
content G:\SnapRAID\content\snapraid.content

data d1 C:\SnapRAID\mounts\slot1
data d2 C:\SnapRAID\mounts\slot2
data d3 C:\SnapRAID\mounts\slot3
data d4 C:\SnapRAID\mounts\slot4
data d5 C:\SnapRAID\mounts\slot5
data d6 C:\SnapRAID\mounts\slot6
data d7 C:\SnapRAID\mounts\slot7
data d8 C:\SnapRAID\mounts\slot8
data d9 C:\SnapRAID\mounts\slot9
data d10 C:\SnapRAID\mounts\slot10

exclude \$RECYCLE.BIN
exclude \System Volume Information

I have 10 pooled data drives (plus 1 additional drive for the SSD cache). The mount points were created inside the C:\SnapRAID folder since I don't care to see them, and I named them by slot number so I'd know where each drive is physically located. Drives D: through G: are non-pooled; F:\ and G:\ are dedicated parity drives, the same model as the 10 data drives. So day to day I use the pooled drive letter, while SnapRAID accesses each drive through its mount point in the SnapRAID\mounts folder. You can assign mount points (configured via the Disk Management tool) rather than drive letters for the data drives.
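
If you'd rather not click through the Disk Management GUI, diskpart can do the same thing from the command line. The volume number below is only an example from my box; check the 'list volume' output for yours, and use 'remove letter=X' first if the volume currently has a drive letter:

diskpart
DISKPART> list volume
DISKPART> select volume 3
DISKPART> assign mount=C:\SnapRAID\mounts\slot1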

The content files, I believe, are just listings of all your files with their checksums (not the parity data), so think of each one as a directory listing of the files on your drives. They can be stored anywhere, and the more copies the merrier (IMHO). I believe SnapRAID is fine with a content file sitting on a data drive and likely just skips it.

I believe that with 5 data drives you will need 2 parity drives (assuming they are all the same size); check the FAQs. That's the only feedback I can give.

I've been running syncs and scrubs for the last few days and noticed data errors on my last scrub. I believe it was because I hadn't enabled "SnapRAID Mode" in Drive Bender, so I just did that and am now doing a full sync.
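
In case it helps, the cycle I've been running is just the standard SnapRAID commands; the scrub numbers below are simply what I happened to pick, not a recommendation:

snapraid sync
snapraid scrub -p 12 -o 10
snapraid status

For scrub, -p is the percentage of the array to check and -o restricts the check to blocks older than that many days.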

Good luck, let me know how it goes!

Obs


Hi Obsidience,

Thanks for the reply. I missed the part about needing 2 parity disks for 5 data disks. I can't run to another disk right now; I suppose I could remove a disk from the pool for now, but the pool is quite full, or chance single parity until I can run to an additional drive. The FAQs do say it's only a rule of thumb.

I note that you put content files on the parity drives; is this normal?

Does the SSD cache make a lot of difference to performance?

KaySee.

Quote

Thanks for the reply. I missed the part about needing 2 parity disks for 5 data disks. I can't run to another disk right now; I suppose I could remove a disk from the pool for now, but the pool is quite full, or chance single parity until I can run to an additional drive. The FAQs do say it's only a rule of thumb.

I note that you put content files on the parity drives; is this normal?

It looks like the parity data grows as needed, so 1 parity disk would probably be fine until you start filling up all the disks. I'm storing content files on the parity drives because there will be spare room there: two parities account for enough capacity for up to 14 drives, and I only have 10 managed by SnapRAID. Is it normal? Likely not, but SnapRAID deals in plain files, so it should be fine. I'm currently planning to replace the parity drive letters with mounts in the same location as the other mounts, as I don't need to see them in my drive letter list; it keeps things clean.

Quote

Does the SSD cache make a lot of difference to performance?

I don't know yet. I'm nearly done configuring this replacement box for my home server, but one of my last steps will be to do some benchmarking, as well as plug in a Kill A Watt meter to gauge electricity usage in comparison with my old server, which was an AMD Richmond box with a HighPoint RocketRAID (hardware) card running RAID5, connected to an 8-bay enclosure. I've noticed that reads and writes over gigabit to my old server are both ~99 MB/s, while to the new server they are around ~130 MB/s (SSD cache disabled). Possibly the network card is better; I'm unsure until I can do local benchmarking on both.

I would suspect that writes to the SSD cache (a Samsung 840) will run at the speed of that drive, which is much faster. One issue I ran into: the SSD ran out of disk space while I was doing my initial transfer from the old server to the new one, so I had to disable the SSD cache until that was complete.

Obs

