Making ZFS faster…

Currently I am playing around a bit with my ZFS setup. I want to make it faster, but I do not want to spend a lot of money.

The disks are connected to an ICH5 controller, so an obvious improvement would be to either buy a controller for the PCI slot which is able to do NCQ with the SATA disks (a siis(4) based one is not cheap), or to buy a new system which comes with a chipset which knows how to do NCQ (this would mean new RAM, new CPU, new mainboard and maybe even a new PSU). A new controller is a little bit expensive for the old system which I want to tune. A new system would be nice, and reading about the specs of new systems makes me want to get a Core i5 system. The problem is that I think the current offers of mainboards for this are far from good. The system should be a little bit future proof, as I would like to use it for about 5 years or more (the current system is somewhere between 5 and 6 years old). This means it should have SATA-3 and USB 3, but when I look at what is offered currently, it looks like there are only beta versions of hardware with SATA-3 and USB 3 support available on the market (according to tests there is a lot of variance in the maximum speed the controllers are able to achieve, there are bugs in the BIOS, or the controllers are attached to a slow bus which prevents using the full bandwidth). So there will not be a new system soon.

As I had a 1 GB USB stick lying around, I decided to attach it to one of the EHCI USB ports and use it as a cache device for ZFS. If someone wants to try this too, be careful which USB ports you use. My mainboard has only 2 USB ports connected to an EHCI, the rest are UHCI ones. This means that only 2 USB ports are fast (sort of… 40 MBit/s), the rest is only usable for slow things like a mouse, a keyboard or a serial line.
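
For reference, adding such a stick as a cache device is a one-liner. In the small sketch below the pool name "tank" and the device name da0 are only placeholders; on your system the stick will most likely show up under a different da(4) device:

    # add the USB stick as a level 2 ARC (read cache) device
    zpool add tank cache da0
    # check that it shows up in the pool layout
    zpool status tank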

Be warned, this will not give you a lot of bandwidth (if you have a fast USB stick, the 40 MBit/s of the EHCI are the limit which prevents a big streaming bandwidth), but the latency of the cache device is great when doing small random IO. When I run gstat and have a look at how long a read operation takes on each involved device, I see something between 3 msec and 20 msec for the harddisks (depending on whether they are reading something at the current head position, or the harddisk needs to seek around a lot). For the cache device (the USB stick) I see something between about 1 msec and 5 msec. That is roughly a third to a fourth of the latency of the harddisks.
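
If you want to watch these numbers yourself, gstat shows the average time per read operation in the "ms/r" column. Something like the following is enough; the -f argument is a regular expression which limits the output to the devices you are interested in, and the device names here are only examples which have to be adapted to your system:

    # refresh every second, show only the pool disks and the USB stick
    gstat -I 1s -f 'ad[468]|da0'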

With a “zfs send” I see about 300 IOPS per harddisk (3 disks in a RAIDZ). Obviously this is an optimal streaming case where the disks do not need to seek around a lot. You can see this in the low latency, it is about 2 msec in this case. In the random-read case, for example when you run a find, the disks cannot sustain this number of IOPS, as they need to seek around. And here the USB stick shines. I have seen up to 1600 IOPS on it while running a find (if the corresponding data is in the cache, of course). This was with something between 0.5 and 0.8 msec of latency.
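
The same can be observed on the ZFS level with zpool iostat; the cache device is listed as a separate line there, so it is easy to see how many of the read operations are answered by the USB stick instead of the disks ("tank" is again just a placeholder for the pool name):

    # per-vdev statistics, refreshed every second
    zpool iostat -v tank 1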

This is the machine at home which takes care of my mail (incoming and outgoing SMTP, IMAP and webmail), runs a Squid proxy and acts as a file server. There are not many users (just me and my wife) and there is no regular usage pattern for all those services. Because of this I did not do any benchmark to see how much time I can gain with various workloads (and I am not interested in some artificial performance numbers for my webmail session, as the browsing experience is highly subjective in this case). For this system a 1 GB USB stick (which was just collecting dust before) seems to be a cheap way to improve the response time for often used small data. When I use the webmail interface now, my subjective impression is that it is faster. I am talking about listing emails (subject, date, sender, size) and displaying the content of some emails. FYI, my maildir storage holds 849 MB in 35000 files in 91 folders.

Bottom line: do not expect a big bandwidth increase with this, but if you have a workload which generates random read requests and you want to decrease the read latency, adding a (big) USB stick as a cache device could be a cheap solution.


6 thoughts on “Making ZFS faster…”

  1. You may want to set “secondarycache=metadata” on the filesystems where you keep mostly large files. This way the content of the files will not pollute your small L2ARC and the data will be served directly from disk, which will be much faster than from USB (see the example sketched after the comments).

  2. I’ve done this at home as well, using a 4 GB USB stick. The box only has 2 GB of RAM, running i386/32-bit FreeBSD, so there’s only 1 GB available to the ARC.

    Since adding the cache device, things have been a lot smoother. Haven’t done any benchmarks to see what the actual improvement is, though.

  3. How should the USB stick be formatted and/or partitioned to be used as a cache device for a ZFS storage pool?

    1. You can even use the entire stick, no need to format or partition it. If you want to use less than the entire stick, you need to partition it, of course (see the example below).
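
For those who want to try the two suggestions from the comments above, here is a small sketch of the commands involved; the pool name "tank", the dataset name "tank/media" and the device name da0 are again only placeholders for whatever your setup uses:

    # keep only the metadata of a dataset with mostly large files in the L2ARC
    zfs set secondarycache=metadata tank/media

    # use the entire stick as a cache device ...
    zpool add tank cache da0

    # ... or only a part of it, via a GPT partition
    gpart create -s gpt da0
    gpart add -t freebsd-zfs -s 512M da0
    zpool add tank cache da0p1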
