In light of the recent benchmark discussion, a volunteer has imported the tuning man page into the wiki. Comments suggesting possible improvements have already been added in some places. Please go over there, have a look, and participate (testing/verification/discussion/improvements/…).
As always, feel free to register with FirstnameLastname and ask a FreeBSD committer to add you to the contributors group for write access (you also get the benefit of being able to sign up for email notifications for specific pages).
Today I read an interesting investigation and problem analysis by Jim Gettys.
It is a set of articles he has written over several months and has not finished as of this writing (if you are deeply interested, go and read them; the most interesting ones are from December and January, and the comments on the articles also contribute to the big picture). Basically, he argues that a lot of the network problems users experience at home (with ADSL/cable or WLAN) occur because buffers in the network hardware or in operating systems are too big. He also proposes workarounds until this problem is attacked by OS vendors and equipment manufacturers.
In short, he is saying that the congestion-control algorithms cannot do their job properly because the oversized network buffers get in their way: packet loss is not reported in a timely fashion, or packets are not dropped in situations where dropping them would be better, because a drop would trigger the congestion-control algorithms into action.
He investigated the behavior of Linux, OS X, and Windows (the systems he had available). I wanted to have a quick look at the situation in FreeBSD, but at least with my network card I was not able to find the corresponding buffer sizes in the driver within 30 seconds.
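For anyone who wants to start poking at this, some of the generic buffer and queue limits are exposed via sysctl; the per-driver ring sizes are usually loader tunables and vary by NIC, so treat the names below as examples rather than a complete survey:

```shell
# A few of the buffer/queue limits FreeBSD exposes via sysctl:
sysctl kern.ipc.maxsockbuf       # upper bound for socket buffer size
sysctl net.inet.tcp.sendspace    # default TCP send buffer size
sysctl net.inet.tcp.recvspace    # default TCP receive buffer size
sysctl net.link.ifqmaxlen        # default interface send-queue length

# Driver-specific ring sizes are typically loader tunables set in
# /boot/loader.conf, e.g. hw.em.txd / hw.em.rxd for em(4) NICs.
```

None of this tells the whole bufferbloat story (the hardware's own FIFOs are invisible from here), but it is where the OS-side knobs live.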
I think it would be very good if this issue was investigated in FreeBSD and, apart from maybe taking some action in the source, a section was written for the Handbook which explains the issue and how to benchmark and tune it (one problem here is that there are situations where you want/need such big buffers, so we cannot simply downsize them).
Unfortunately I have too much on my plate to look into this any further. 🙁 I hope one of the FreeBSD network people picks up the ball and starts playing.
Last week my ZFS cache device – a USB memory stick – showed xxxM write errors. I got this stick for free as a promo, so I do not expect it to be of high quality (or to have wear-leveling or similar life-saving features). The stick survived about 9 months, during which it provided a nice speed-up for access to the corresponding ZFS storage pool. I replaced it with another stick which I also got for free as a promo. This new stick survived… one long weekend. It now shows 8xxM write errors, and the USB subsystem is no longer able to talk to it. 30 minutes ago I issued a “usbconfig reset” for this device, and it still has not finished. This leads me to the question: are such sticks really that bad, or has some problem crept into the USB subsystem?
If this is a problem with the memory stick itself, I should be able to reproduce it on a different machine with a different OS. I could test this with FreeBSD 8.1, Solaris 10u9, or Windows XP. What I need is an automated test. That rules out the Windows XP machine for me; I do not want to spend time searching for a suitable test that is available for free and can be run in an automated way. For FreeBSD and Solaris, it probably comes down to using some disk-I/O benchmark (I think there are enough to choose from in the FreeBSD Ports Collection) and running it in a shell loop.
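Failing a proper benchmark, even a trivial write/verify loop in sh would do as an automated test. A minimal sketch (the target path and pass count are placeholders you would point at a file on the mounted stick) writes pseudo-random data, reads it back, and compares:

```shell
#!/bin/sh
# stress_loop: repeatedly write pseudo-random data to a file on the
# device under test, read it back, and compare.  On a dying stick an
# I/O error or a data mismatch should abort the loop fairly quickly.
stress_loop() {
    target=$1            # file on the mounted stick (placeholder path)
    passes=${2:-10}      # number of write/verify iterations
    pattern=$(mktemp)    # reference copy kept on a healthy disk
    i=0
    while [ "$i" -lt "$passes" ]; do
        # 1 MB of random data per pass; bump bs/count for a real soak test.
        dd if=/dev/urandom of="$pattern" bs=1048576 count=1 2>/dev/null || return 1
        cp "$pattern" "$target" || return 1
        sync                                  # force the write out to the device
        cmp -s "$pattern" "$target" || { echo "mismatch on pass $i" >&2; return 1; }
        i=$((i + 1))
    done
    rm -f "$pattern" "$target"
    echo "all $passes passes OK"
}
```

Invoked as e.g. `stress_loop /mnt/stick/testfile 1000`, it either runs to completion or stops at the first bad pass, which is all an unattended reproduction attempt needs.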