Last week my ZFS cache device – a USB memory stick – showed xxxM write errors. I got this stick for free as a promo, so I did not expect it to be of high quality (or to have wear-leveling or similar life-prolonging features). The stick survived about 9 months, during which it provided a nice speed-up for accesses to the corresponding ZFS storage pool. I replaced it with another stick, which I also got for free as a promo. This new stick survived… one long weekend. It now shows 8xxM write errors, and the USB subsystem is no longer able to talk to it. 30 minutes ago I issued a “usbconfig reset” for this device, and it still has not finished. This leads me to the question: are such sticks really that bad, or has some problem crept into the USB subsystem?
If the problem is with the memory stick itself, I should be able to reproduce it on a different machine with a different OS. I could test this with FreeBSD 8.1, Solaris 10u9, or Windows XP. What I need is an automated test. That rules out the Windows XP machine for me: I do not want to spend time searching for a suitable test that is available for free and can be run in an automated way. For FreeBSD and Solaris it probably comes down to using some disk-I/O benchmark (I think there are enough to choose from in the FreeBSD Ports Collection) and running it in a shell loop.
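Such a shell loop could be sketched roughly like this – a minimal write/read exerciser rather than one of the Ports benchmarks. The mount point, pass count, and file size below are placeholder assumptions to adjust for a real run:

```shell
#!/bin/sh
# Minimal write/read stress test for a memory stick.
# Assumption: the stick is mounted and TESTDIR points at a directory on
# it (the default below is only a placeholder); PASSES bounds the loop.
TESTDIR="${TESTDIR:-/tmp/stick-test}"
PASSES="${PASSES:-3}"
SIZE_MB="${SIZE_MB:-16}"

mkdir -p "$TESTDIR" || exit 1
i=1
while [ "$i" -le "$PASSES" ]; do
    # write pseudo-random data in 1 MiB blocks, flush it out, read it back
    dd if=/dev/urandom of="$TESTDIR/testfile" bs=1048576 count="$SIZE_MB" 2>/dev/null \
        || { echo "write failed on pass $i"; exit 1; }
    sync
    dd if="$TESTDIR/testfile" of=/dev/null bs=1048576 2>/dev/null \
        || { echo "read failed on pass $i"; exit 1; }
    echo "pass $i OK"
    i=$((i + 1))
done
echo "all $PASSES passes completed"
```

For a real test against the stick one would point TESTDIR at the stick's filesystem, set PASSES high (or loop forever), and ideally also compare checksums of the written and read-back data, so that silent corruption is caught in addition to outright I/O errors.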