Are USB memory sticks really that bad?

Last week my ZFS cache device – a USB memory stick – showed xxxM write errors. I got this stick for free as a promo, so I do not expect it to be of high quality (or to have wear-leveling or similar life-saving features). The stick survived about 9 months, during which it provided a nice speed-up for access to the corresponding ZFS storage pool. I replaced it with another stick which I also got for free as a promo. This new stick survived… one long weekend. It now has 8xxM write errors, and the USB subsystem is no longer able to talk to it. 30 minutes ago I issued a “usbconfig reset” for this device, which is still not finished. This leads me to the question of whether such sticks are really that bad, or whether some problem has crept into the USB subsystem.

If this is a problem with the memory stick itself, I should be able to reproduce it on a different machine with a different OS. I could test this with FreeBSD 8.1, Solaris 10u9, or Windows XP. What I need is an automated test. This rules out the Windows XP machine for me; I do not want to spend time searching for a suitable test that is available for free and can be run in an automated way. For FreeBSD and Solaris it probably comes down to using some disk-I/O benchmark (I think there are enough to choose from in the FreeBSD Ports Collection) and running it in a shell loop.
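As a starting point, even before reaching for a real benchmark from the Ports Collection, a plain `dd` in a shell loop would already exercise the flash cells and surface write errors. Here is a minimal sketch of that idea; the function name and the target path are my own inventions, and the target should point at a file on the mounted stick under test:

```shell
#!/bin/sh
# stress_stick: repeatedly write a file and stop on the first I/O error.
# Usage: stress_stick <target-file> <passes> [blocks-per-pass]
# (Names and paths here are illustrative, not from any particular tool.)
stress_stick() {
    target=$1
    passes=$2
    blocks=${3:-64}            # 1 MB blocks per pass by default (~64 MB)

    i=1
    while [ "$i" -le "$passes" ]; do
        # dd exits non-zero on a write error; that is our failure signal.
        if ! dd if=/dev/zero of="$target" bs=1048576 count="$blocks" 2>/dev/null; then
            echo "write error on pass $i" >&2
            return 1
        fi
        sync                   # flush so the data really hits the stick
        i=$((i + 1))
    done
    echo "completed $passes passes without write errors"
}
```

Something like `stress_stick /mnt/usbstick/testfile 1000` left running overnight should tell fairly quickly whether the stick dies on a different OS as well.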