Alexander Leidinger

Just another weblog


Making ZFS faster…

Currently I am playing around a little with my ZFS setup. I want to make it faster, but I do not want to spend a lot of money.

The disks are connected to an ICH5 controller, so an obvious improvement would be to either buy a controller for the PCI slot which is able to do NCQ with the SATA disks (a siis(4) based one is not cheap), or to buy a new system which comes with a chipset which knows how to do NCQ (this would mean new RAM, a new CPU, a new mainboard and maybe even a new PSU). A new controller is a little bit expensive for the old system I want to tune.

A new system would be nice, and reading the specs of new systems makes me want a Core i5 system. The problem is that I think the current mainboard offers for this are far from good. The system should be a little bit future-proof, as I would like to use it for about 5 years or more (the current system is somewhere between 5 and 6 years old). This means it should have SATA-3 and USB 3, but when I look at what is offered currently, it looks like only beta-versions of hardware with SATA-3 and USB 3 support are available on the market (according to tests there is a lot of variance in the maximum speed the controllers are able to achieve, there are bugs in the BIOS, or the controllers are attached to a slow bus which prevents using the full bandwidth). So it will not be a new system soon.

As I had a 1 GB USB stick lying around, I decided to attach it to one of the EHCI USB ports and use it as a cache device for ZFS. If someone wants to try this too, be careful with the USB ports: my mainboard has only 2 USB ports connected to an EHCI, the rest are UHCI ones. This means that only 2 USB ports are fast (sort of… 40 MBit/s), the rest are only usable for slow things like a mouse, a keyboard or a serial line.
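Adding and removing a cache device is a single command each. A minimal sketch, assuming the pool is called "tank" and the stick shows up as /dev/da0 (both names are placeholders, adjust them for your system):

```shell
# Add the USB stick as an L2ARC cache device to the pool "tank".
# "tank" and /dev/da0 are assumptions; check dmesg for the real device node.
zpool add tank cache /dev/da0

# Verify that the cache device shows up under the "cache" heading.
zpool status tank

# A cache device can be removed again at any time without harming the pool.
zpool remove tank da0
```

Unlike log devices, cache devices hold no data that cannot be re-read from the pool, so losing or removing the stick is safe.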

Be warned, this will not give you a lot of bandwidth (if you have a fast USB stick, the 40 MBit/s of the EHCI are the limit, which prevents a big streaming bandwidth), but the latency of the cache device is great when doing small random I/O. When I do a gstat and look at how long a read operation takes for each involved device, I see something between 3 msec and 20 msec for the harddisks (depending on whether they are reading at the current head position, or whether the harddisk needs to seek around a lot). For the cache device (the USB stick) I see something between about 1 msec and 5 msec. That is a third to a quarter of the latency of the harddisks.
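For reference, the per-device read latency can be watched live with gstat(8); the device names below are assumptions:

```shell
# Refresh the GEOM statistics once per second; the ms/r column is the
# average time a read request takes on each device, which lets you
# compare the harddisks against the USB cache device directly.
gstat -I 1s

# Optionally restrict the output to the devices of interest,
# e.g. ATA disks (ad*) and the USB stick (da0) -- adjust the regex.
gstat -I 1s -f '^(ad[0-9]+|da0)$'
```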

With a “zfs send” I see about 300 IOPS per harddisk (3 disks in a RAIDZ). Obviously this is an optimal streaming case where the disks do not need to seek around a lot. You can see this in the low latency; it is about 2 msec in this case. In the random-read case, for example when you run a find, the disks cannot sustain this number of IOPS, as they need to seek around. And here the USB stick shines: I have seen up to 1600 IOPS on it while running a find (if the corresponding data is in the cache, of course). This was with something between 0.5 and 0.8 msec of latency.

This is the machine at home which takes care of my mail (incoming and outgoing SMTP, IMAP and webmail), runs a squid proxy and acts as a file server. There are not many users (just me and my wife) and there is no regular usage pattern for all those services. Because of this I did not do any benchmarks to see how much time I can gain with various workloads (and I am not interested in some artificial performance numbers for my webmail session, as the browsing experience is highly subjective in this case). For this system a 1 GB USB stick (which was just collecting dust before) seems to be a cheap way to improve the response time for often-used small data. When I use the webmail interface now, my subjective impression is that it is faster. I am talking about listing emails (subject, date, sender, size) and displaying the content of some emails. FYI, my maildir storage holds 849 MB in 35000 files across 91 folders.

The bottom line is: do not expect a big bandwidth increase from this, but if you have a workload which generates random read requests and you want to decrease the read latency, adding a (big) USB stick as a cache device can be a cheap solution.



6 Responses to “Making ZFS faster…”

  1. abrown Says:

    Nice idea/tip! Do you use the zpool add POOL cache DEVICE syntax to achieve your result?

  2. netchild Says:

    Yes. And removing it works too.

  3. AB Says:

    You may want to set “secondarycache=metadata” on the filesystems where you mostly keep large files. This way the contents of the files will not pollute your small L2ARC, and the data will be served directly from disk, which will be much faster than from USB.
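    The suggestion above is a per-dataset property. A short sketch, assuming a made-up dataset name "tank/media" for the large-file filesystem:

    ```shell
    # Only cache metadata (directory and attribute information) in the
    # L2ARC for this dataset; file contents then bypass the cache device.
    zfs set secondarycache=metadata tank/media

    # Confirm the setting (valid values are all, metadata and none).
    zfs get secondarycache tank/media
    ```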

  4. Freddie Says:

    I’ve done this at home as well, using a 4 GB USB stick. The box only has 2 GB of RAM, running i386/32-bit FreeBSD, so there’s only 1 GB available to the ARC.

    Since adding the cache device, things have been a lot smoother. I haven’t done any benchmarks to see what the actual improvement is, though.

  5. CJ Says:

    How should the USB stick be formatted and/or partitioned, to be used as a cache device for a ZFS storage pool?

  6. netchild Says:

    You can even use the entire stick, no need to format or partition it. If you want to use less than the entire stick, you need to partition it, of course.
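    Both options sketched out, assuming the stick is /dev/da0 and the pool is called "tank" (placeholder names):

    ```shell
    # Option 1: hand the whole stick to ZFS, no partitioning needed.
    zpool add tank cache /dev/da0

    # Option 2: use only part of the stick by creating a partition
    # with gpart(8) first, here a 512 MB freebsd-zfs partition.
    gpart create -s gpt da0
    gpart add -t freebsd-zfs -s 512M da0
    zpool add tank cache /dev/da0p1
    ```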
