KingofRome

Increase pool speed via RAM index and file caching


Hey TheSmith,

 

I was wondering if it's possible to use extra RAM in a system to hold the pool's Windows index files via RAM caching. What I've noticed is that as soon as a NAS HDD is added to the pool, the whole pool slows down. I suspect this is because it polls the NAS drive for files, so sometimes I'm left waiting a while for folders to load while it finishes scanning all the drives. Caching the drives' index files in RAM, or storing the indexes somewhere faster to read (the system drive, especially if it's an SSD), could definitely speed this process up. I'm not even sure it's possible with DB, but I do know there is software for RAM caching that does a similar job (PrimoCache, etc.).

 

I'm also wondering if there's a way to use RAM caching more extensively than just for reading files via Smooth Stream, i.e. for writing as well (as PrimoCache does). I have an HDD as a landing zone now, and I could use an SSD instead. However, transcoding and remuxing large movie and TV show files would wear an SSD out so fast that, with a large enough pool of RAM, it might be better to go that route.

 

A lot of my issues come from the fact that my DB pool is pretty massive (22TB, 12 HDDs) and I'm seeing problems as it scales, so I know not everyone is going to see them. But the lag when opening folders, especially folders with lots of pictures, can be brutal.


I support two systems using Drive Bender; both are very large, and neither of them uses SSDs or landing zones. As I scale them I have not seen any speed issues. The only speed issue I see is when the drives have spun down; it takes a few moments (read: 10-20 sec) for all drives to fully spin up.

 

Speed-wise I generally have no issues. Just remember that, unlike RAID, you are only going to see the speed of the individual drive. Even using iSCSI (2x 4Gb fiber in a team between the two machines, so 8Gb of bandwidth) I see no speed issues with the large DB system, and that is primarily large uncompressed 1080p and 4K files. The speed issues I do see generally relate to the drive itself and not the DB pool.

 

Primary System --

1st PC (DB host): i5-4430, 16GB RAM, 1TB primary OS drive, Windows Server 2012 R2 Standard

2nd PC (iSCSI): i5-750, 8GB RAM, 500GB primary OS drive, Windows Server 2012 R2 Standard

48.6TB DB Pool:
1st PC - 7x 2TB, 6x 3TB, 1x 4TB
2nd PC - 5x 2TB, 1x 2.5TB, 1x 3TB

Secondary System --

Xeon E5620, 12GB RAM, 500GB primary drive, Windows Server 2012 R2 Standard
28.1TB DB Pool (14x 2TB, 1x 3TB)


Wow, nice post, and your setup is massive. I've noticed that folders with lots of files, like pictures, simply take forever to load. Do you have similar folders, or only large video files? That could be a difference. The speed of each drive is more than enough, and they are all connected either via SATA 3Gb/s (through a port-multiplied Mediasonic 4-bay enclosure) or SATA 6Gb/s direct to the mobo, so there should be enough bandwidth for them. I am going to double-check that the drives aren't spinning down, even though I thought I set them to never spin down via Windows settings. And now that you've said that, I honestly think having the NAS drive attached to the pool is more trouble than it's worth, because with it everything is much slower, as the NAS drive itself isn't exactly top tier. Will follow up soon!
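For anyone wanting to double-check the same spin-down setting from the command line rather than the GUI, something like this should work from an elevated PowerShell prompt (stock Windows powercfg, nothing DB-specific; a DISKIDLE value of 0x00000000 means "never spin down"):

# Show the current "turn off hard disk after" timeout for the active power scheme
powercfg /query SCHEME_CURRENT SUB_DISK DISKIDLE

# Set it to never (0 minutes) on AC power
powercfg /change disk-timeout-ac 0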


First off I have to say this --> Mediasonic 4-bay enclosure = garbage. I had two of them and threw both in the trash. The USB 3.0 mode is absolute trash and constantly disconnects under load. The eSATA mode works, but remember that with port multiplying you are sharing a single SATA3 bus across all four drives. Here is a good way to test: start a very large transfer to one drive; it should hit around 130MB/s +/-. Now start another transfer to a different drive in the same enclosure at the same time (I use an SSD or multiple source drives so you are not bottlenecking the read side). You will have two ~50MB/s transfers; with three simultaneous transfers each will be around 30MB/s, and so on.
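If you'd rather script that test than drag-and-drop, a rough PowerShell sketch (the source and destination paths are just placeholders for your source drive and two bays in the same enclosure; robocopy prints a Speed line at the end of each job):

# Start two large copies to different bays at the same time, then compare
# the throughput robocopy reports for each. /J = unbuffered I/O, closer to
# raw disk throughput.
$jobs = @(
    (Start-Job { robocopy D:\Source E:\Bay1 big1.mkv /J }),
    (Start-Job { robocopy D:\Source F:\Bay2 big2.mkv /J })
)
Wait-Job $jobs | Receive-Job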

 

Best rule of thumb I have seen for performance in large home drive arrays: stay away from eSATA port multipliers and USB3. Sadly, that leaves PCI-E bus-driven options for the best speed and reliability.

 

The above statement is my opinion; your mileage may vary.

 

Now with that out of the way: you are correct about my structures. My folder layouts tend to break things down so there aren't hundreds of thousands of files in a single directory. If that is what is causing the speed issues, it could be related to how the folders are trying to preview/enumerate the files. An easy test for whether the Windows preview functions are causing the delay (I have seen weirdness with that on an NFS share): pop open a command window, change to the directory that tends to have issues, and see whether you still get a delay on a DIR /D or something similar.
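If you want an actual number rather than eyeballing it, PowerShell's Measure-Command can time the bare listing (the path below is just a placeholder for whatever folder misbehaves):

# Time raw enumeration, with no thumbnails or previews involved
Measure-Command { Get-ChildItem P:\Pool\Photos\Problem -Force | Out-Null }

# Same thing through cmd's dir in bare format, for comparison
Measure-Command { cmd /c dir /b P:\Pool\Photos\Problem | Out-Null }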

 

Do you have Smooth Stream reading for all files enabled? Maybe try flipping that and see if your IO requests/sec show any changes.
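One way to watch that while you flip the setting (standard Windows performance counters, nothing DB-specific):

# Sample disk transfers/sec across all physical disks every 2 seconds;
# run before and after toggling Smooth Stream and compare.
Get-Counter '\PhysicalDisk(_Total)\Disk Transfers/sec' -SampleInterval 2 -MaxSamples 15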

 

I will not say I am a pro with DB, but I know my way around for the most part and have not seen any performance issues; it could be related to how I use my system. If you can provide a real-world scenario, I can try to replicate it. E.g.: you have massive performance issues on a specific folder that is nested 3 deep and has 200,000 files; it takes X time to load on DB, while on a standard drive it took Y time to load. With information like that, I can try to replicate the issue and possibly ferret out a bug or file a feature request.

 

If you need more drives internally, for a controller card that will give you more than enough speed without the useless frills and without internal port multiplying (yes, some 4- and 8-port SATA cards do that), I suggest the SUPERMICRO AOC-SAS2LP-MV8 PCI-Express 2.0 x8 SATA/SAS 8-port controller card.


... I suggest the SUPERMICRO AOC-SAS2LP-MV8 PCI-Express 2.0 x8 SATA/SAS 8-port controller card ...

 

I've gone for an LSI card in mine, but this one is definitely cheaper and seems to have some good reviews. Cheers for the heads-up! :)


I know what you mean. I've had one of my Mediasonic enclosures fail and have already had a warranty swap. And I've definitely experienced the stupid USB 3.0 dying-under-load thing, where DB shows the drives as disconnected and the only way to recover is a full system restart.

 

So far, though, I've personally found eSATA to be extremely stable and more than fast enough for storage and file access, at least for my use, as the computer serves solely as a gigabit NAS PC, primarily streaming media through Plex. On my setup, through the enclosure's port multiplier, I get about 250MB/s throughput, and there's no way I can saturate that over a gigabit network. (Extra info: I have both a SiI3132 and an on-board Marvell eSATA controller; the Marvell performs as above, but the SiI3132 maxes out around 160MB/s, so I think there are controller differences. The eSATA world is a total mess, so the card you mention doesn't sound bad at all. If I need to do any upgrades I might consider it, though it would require a separate case and PSU.)

 

I do have Smooth Stream on for all files. I have tried DB with it off in the past, but I find that with it on, transfer rates are slightly slower but more consistent and faster overall. I haven't tried polling the drive via commands; the majority of my testing for polling and opening folders happens over the network.

 

E.g.: over the network and locally, load times are nearly identical. The test is a folder 5 levels deep from the main DB drive: 1,434 items, 5.46GB, all .jpg files. It takes 23 sec just to load the entire folder, before previews.
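If it helps you replicate on your end, here's a rough sketch that builds a comparable folder and times the bare listing (the pool path is a placeholder; 1434 files at 4MB each lands near the 5.46GB above; fsutil needs an elevated prompt):

# Hypothetical repro of the test folder: ~1434 dummy .jpg files, 4MB each
New-Item -ItemType Directory -Path P:\Pool\Test -Force | Out-Null
1..1434 | ForEach-Object {
    fsutil file createnew ('P:\Pool\Test\img{0:d4}.jpg' -f $_) 4194304 | Out-Null
}

# Time the bare enumeration, no previews involved
Measure-Command { Get-ChildItem P:\Pool\Test | Out-Null }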

 

To me this seems too long for the folder size, especially since the I/O is spread across 12 different drives, but let me know what you think. I totally appreciate your input.

