Are USB memory sticks really that bad?

Last week my ZFS cache device – a USB memory stick – showed xxxM write errors. I got this stick for free as a promo, so I did not expect it to be of high quality (or to have wear-leveling or similar life-prolonging features). The stick survived about 9 months, during which it provided a nice speed-up for access to the corresponding ZFS storage pool. I replaced it with another stick, which I also got for free as a promo. This new stick survived… one long weekend. It now has 8xxM write errors, and the USB subsystem is not able to talk to it anymore. 30 minutes ago I issued a “usbconfig reset” for this device, and it has still not finished. This leads me to the question of whether such sticks are really that bad, or whether some problem has crept into the USB subsystem.
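For the record, the reset on FreeBSD looks roughly like this; the ugen0.2 address is only an example here, the real address of the stick shows up in the device list:

    # show attached USB devices to find the ugen address of the stick
    usbconfig list

    # reset the device at that address (ugen0.2 is an example address)
    usbconfig -d ugen0.2 reset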

If this is a problem with the memory stick itself, I should be able to reproduce it on a different machine with a different OS. I could test this with FreeBSD 8.1, Solaris 10u9, or Windows XP. What I need is an automated test. This rules out the Windows XP machine for me; I do not want to spend time searching for a suitable test which is available for free and can be run in an automated way. For FreeBSD and Solaris it probably comes down to using some disk-I/O benchmark (there are enough to choose from in the FreeBSD Ports Collection) and running it in a shell loop, as sketched below.
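A minimal sketch of such a loop for FreeBSD, with plain dd standing in for a real benchmark and /dev/da0 assumed to be the device node of the stick (careful: this overwrites the stick, so the device name has to be double-checked):

    #!/bin/sh
    # Hammer the stick with sequential writes until a write error occurs.
    # WARNING: this destroys all data on the device.
    dev=/dev/da0    # assumed device node of the stick, double-check it
    pass=0
    # 256 MB per pass; adjust count to stay below the capacity of the stick
    while dd if=/dev/zero of="$dev" bs=1m count=256; do
        pass=$((pass + 1))
        echo "pass $pass OK: $(date)"
    done
    echo "write failed after $pass successful passes"

On Solaris the same idea should work, except that dd there wants bs=1024k instead of bs=1m, and the stick typically shows up as a /dev/rdsk/ device instead of /dev/da0.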