Today I read an interesting investigation and problem analysis by Jim Gettys.
It is a series of articles he wrote over several months and has not finished as of this writing (if you are deeply interested, go and read them; the most interesting ones are from December and January, and the comments also contribute to the big picture). In short, he argues that many of the network problems users experience at home (with ADSL/cable or WLAN) are caused by buffers in network hardware and in operating systems that are far too big. He also proposes workarounds until the problem is addressed by OS vendors and equipment manufacturers.
His core point is that the network congestion-control algorithms cannot do their job properly because oversized buffers get in their way: packet loss is not reported in a timely fashion, and packets are not dropped in situations where a drop would actually be better, because the drop would trigger the congestion algorithms to back off.
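To put a rough number on the effect he describes, the worst-case extra latency a full buffer adds is simply its size divided by the link rate. The figures below (a 256 KiB buffer on a 1 Mbit/s ADSL uplink) are my own assumed, typical-looking values, not measurements from his articles:

```python
# Back-of-the-envelope: worst-case extra latency added by a full buffer.
# Buffer size and uplink rate are assumed illustrative values.

def queueing_delay(buffer_bytes: int, link_bits_per_s: float) -> float:
    """Seconds a new packet waits behind a completely full buffer."""
    return buffer_bytes * 8 / link_bits_per_s

delay = queueing_delay(256 * 1024, 1_000_000)  # 256 KiB at 1 Mbit/s
print(f"{delay:.1f} s")  # roughly 2.1 seconds of added latency
```

Seconds of added round-trip delay like this is exactly what makes interactive traffic unusable while a bulk upload is running, and it also delays the loss signal the congestion algorithms depend on.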
He investigated the behavior of Linux, OS X and Windows (the systems he had available). I wanted to take a quick look at the situation in FreeBSD, but it seems that, at least for my network card, I cannot find the corresponding buffer sizes in the driver within 30 seconds.
I think it would be very good if this issue were investigated in FreeBSD and, apart from possibly taking some action in the source, a section were written for the Handbook explaining the issue and how to benchmark and tune it (one difficulty is that there are situations where you want or need such big buffers, so we cannot simply shrink them across the board).
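For anyone who wants to start poking at this, one easy layer to inspect is the default socket-buffer size the OS hands out. This is only the socket layer, not the driver rings or interface queues that are the main concern here, but it is a portable first data point; a minimal sketch using nothing beyond the standard socket API:

```python
# Query the OS-default send/receive socket buffer sizes -- just one of
# several layers of buffering (socket, interface queue, driver ring)
# that contribute to the overall problem.
import socket

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sndbuf = s.getsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF)
rcvbuf = s.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF)
s.close()

print(f"default SO_SNDBUF: {sndbuf} bytes")
print(f"default SO_RCVBUF: {rcvbuf} bytes")
```

The driver- and interface-level buffers are what actually need auditing per NIC, but comparing these defaults across FreeBSD, Linux and OS X is at least a quick way to see how differently the systems are tuned out of the box.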
Unfortunately I have too much on my plate to look into this any further. I hope one of the FreeBSD network people picks up the ball and starts playing.
Tags: algorithms, benchmark, big picture, issue one, network buffers, network congestion, network hardware, os vendors, packet loss, wlan