Alexander Leidinger

Just another weblog

Jan 07

How big are the buffers in FreeBSD drivers?

Today I read an interesting investigation and problem analysis by Jim Gettys.

It is a series of articles he has written over several months, and he is not finished writing as of this posting (if you are deeply interested, go read them; the most interesting ones are from December and January, and the comments on the articles also contribute to the big picture). Basically he argues that a lot of the network problems home users (with ADSL/cable or WLAN) experience occur because the buffers in the network hardware or in the operating system are too big. He also proposes workarounds until this problem is attacked by OS vendors and equipment manufacturers.

Basically he argues that the network congestion algorithms cannot do their work well because the oversized network buffers get in their way: they do not report packet loss in a timely fashion, and they try not to lose packets in situations where packet loss would actually be better, because a drop would trigger action in the congestion algorithms.

He investigated the behavior of Linux, OS X and Windows (the systems he had available). I wanted to take a quick look at the situation in FreeBSD in this regard, but it seems that, at least with my network card, I am not able to find the corresponding buffer sizes in the driver within 30 seconds.

I think it would be very good if this issue were investigated in FreeBSD. Apart from maybe taking some action in the source, someone should also write a section for the Handbook which explains the issue (one problem here is that there are situations where you want or need such big buffers, so we cannot simply downsize them) and how to benchmark and tune this.

Unfortunately I have too much on my plate to look into this any further. :( I hope one of the network people in FreeBSD picks up the ball and starts playing.



4 Responses to “How big are the buffers in FreeBSD drivers?”

  1. Bruce Cran Says:

From if_rl it looks like it's hardcoded for each driver. if_rlreg.h says the 8169 supports up to 1024, but we set RL_8169_TX_DESC_CNT to 256.

  2. netchild Says:

    Based upon the experiments Jim Gettys did in the LAN and WLAN, it looks like it would be better to make this configurable.

  3. Dominic Says:

    At least on the TCP level the buffers are self-tuning. You can configure a maximum buffer size and an increment step (net.inet.tcp.sendbuf* and net.inet.tcp.recvbuf*); I presume the increment is also the minimal size (at least that's the way I'd have implemented something like this).
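    For reference, these knobs can be set in /etc/sysctl.conf. The sysctl names are real FreeBSD OIDs; the values below are only examples, not recommendations:

    ```
    # /etc/sysctl.conf -- TCP buffer auto-tuning (example values)
    net.inet.tcp.sendbuf_auto=1        # enable send buffer auto-sizing
    net.inet.tcp.sendbuf_inc=16384     # growth step
    net.inet.tcp.sendbuf_max=4194304   # upper bound
    net.inet.tcp.recvbuf_auto=1        # enable receive buffer auto-sizing
    net.inet.tcp.recvbuf_inc=65536     # growth step
    net.inet.tcp.recvbuf_max=4194304   # upper bound
    ```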

    I increased the max buffer size to bridge disconnected periods on a 3G connection while travelling on a train.

    I think driver buffers are only relevant if they are not much smaller than the TCP buffer. I guess UDP packets (which I'd use for realtime streaming) are only subject to driver buffering, so I see why this discussion is relevant for realtime applications.

    I expect fixed buffers are chosen to allow full saturation. How much work would it be to make each buffer self-tuning? How much if there were a common framework for this task in the kernel?

    Back to the TCP level: one problem I see is that there is one TCP stack per machine/jail. A now-fixed bug in the wpi driver caused packet loss on other interfaces, like lo0, effectively making all X applications die. That such a thing is possible is ridiculous.

    Another implication is that my enormous 3G-motivated buffers also affect other interfaces like the LAN or the local interface, which is neither necessary nor desired.

    I understand it was a lot of work to give jails their own stack, but I wonder: wouldn't it be better to have one stack per interface/alias? Of course that would necessitate an abstraction layer that distributes TCP requests to the right TCP stacks.

  4. netchild Says:

    To get a separate network stack per jail you need to wait for the VIMAGE work to be production-ready (a lot of the code is already in 9-CURRENT).

    I know about send-/recvspace and that it is auto-tuning (in Linux this seems to be a fixed, interface-specific setting, while in FreeBSD it is a global option but auto-adapting).

    The size of the driver buffers is what this posting is about, and I have confirmation that it is not configurable. I was told that making it configurable at run-time is a major task (some Intel drivers are already prepared, as they share code with the Linux driver). Making it a boot-time tunable could be feasible, but our "NIC guru" does not know how much free time he can invest into this. He wants to take care of it at least in new drivers he develops.
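    As an illustration of what such a boot-time tunable looks like, the Intel em(4) driver already accepts its descriptor ring sizes from /boot/loader.conf. The tunable names are real (see em(4)); the values are just examples:

    ```
    # /boot/loader.conf -- em(4) descriptor ring sizes (example values)
    hw.em.rxd="256"    # receive descriptors per ring
    hw.em.txd="256"    # transmit descriptors per ring
    ```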

