How big are the buffers in FreeBSD drivers?

Today I read an interesting investigation and problem analysis by Jim Gettys.

It is a set of articles he wrote over several months, and he is not finished writing them as of this writing (if you are deeply interested, go and read them; the most interesting ones are from December and January, and the comments on the articles also contribute to the big picture). Basically he says that a lot of the network problems users experience at home (with ADSL/cable or WLAN) occur because the buffers in the network hardware or in operating systems are too big. He also proposes workarounds until this problem is attacked by OS vendors and equipment manufacturers.

Basically he argues that the network congestion-control algorithms cannot do their work properly, because the oversized network buffers get in their way: packet loss is not reported in a timely fashion, or packets are kept instead of dropped in situations where a drop would be better, because the drop would trigger action in the congestion-control algorithms.

He investigated the behavior of Linux, OS X and Windows (the systems he had available). I wanted to have a quick look at the situation in FreeBSD in this regard, but it seems that at least with my network card I am not able to see/find the corresponding buffer sizes in the driver within 30 seconds.

I think it would be very good if this issue were investigated in FreeBSD and, apart from maybe taking some action in the source, a section should also be written for the Handbook which explains the issue (one problem here is that there are situations where you want/need such big buffers, so we cannot just downsize them) and how to benchmark and tune this.

Unfortunately I have too much on my plate to look into this further. :( I hope one of the network people in FreeBSD picks up the ball and starts playing.


4 thoughts on “How big are the buffers in FreeBSD drivers?”

  1. From if_rl it looks like it’s hardcoded for each driver. if_rlreg.h says the 8169 supports up to 1024, but we set RL_8169_TX_DESC_CNT to 256.

    1. Based upon the experiments Jim Gettys did on LANs and WLANs, it looks like it would be better to make this configurable.

  2. At least on the TCP level the buffers are self-tuning. You can configure a maximum buffer size and an increment step (net.inet.tcp.sendbuf* and net.inet.tcp.recvbuf*; a small example of querying these knobs follows after the comments), which I presume is also the minimal size (at least that’s the way I’d have implemented something like this).

    I increased the maximum buffer size to bridge disconnected periods on a 3G connection while travelling on a train.

    I think driver buffers are only relevant if they are not much smaller than the TCP buffer. I guess UDP packets (which I’d use for real-time streaming) are only subject to driver buffering, so I see why this discussion is relevant for real-time applications.

    I expect the fixed buffer sizes were chosen to allow full saturation. How much work would it be to make each buffer self-tuning? How much if there were a common framework for this task in the kernel?

    Back to the TCP level: one problem I see is that there is one TCP stack per machine/jail. A now-fixed bug in the wpi driver caused packet loss on other interfaces, like lo0, effectively making all X applications die. That such a thing is possible is ridiculous.

    Another implication is that my enormous 3G-motivated buffers also affect other interfaces, like the LAN or the local interface, which is neither necessary nor desired.

    I understand it was a lot of work to give jails their own stack, but I wonder, wouldn’t it be better to have one stack per interface/alias? Of course that would necessitate an abstraction layer that distributes TCP requests to the right TCP stack.

    1. To get a separate network stack per jail you need to wait for the VIMAGE work to be production-ready (a lot of the code is already in 9-current).

      I know about send-/recvspace, and that it is auto-tuning (in Linux this seems to be a fixed, interface-specific setting, while in FreeBSD it is a global but auto-adapting option).

      The size of the driver buffers is what this posting is about, and I have confirmation that it is not configurable. I was told that making it configurable at run-time is a major task (some Intel drivers are already prepared, as they share code with the Linux driver). Making it a boot-time tunable could be feasible (a rough sketch of what such a tunable could look like follows after the comments), but our “NIC guru” does not know how much free time he can invest into this. He wants to take care of this at least in new drivers he develops.
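
To make the TCP-level knobs mentioned in the second comment a bit more concrete, here is a minimal userland sketch that just reads the auto-tuning limits via sysctlbyname(3). It only queries sysctls I expect to exist on a reasonably recent FreeBSD (net.inet.tcp.sendbuf_max/_inc, net.inet.tcp.recvbuf_max/_inc, kern.ipc.maxsockbuf); changing them would be done with sysctl(8) as root and is not shown here.

```c
/*
 * Minimal sketch: print the TCP socket-buffer auto-tuning limits and
 * the global socket-buffer maximum via sysctlbyname(3).
 */
#include <sys/types.h>
#include <sys/sysctl.h>
#include <stdio.h>

static void
show(const char *name)
{
	union { int i; long l; } val;	/* some of these are int, some long */
	size_t len = sizeof(val);

	if (sysctlbyname(name, &val, &len, NULL, 0) == -1) {
		perror(name);
		return;
	}
	if (len == sizeof(int))
		printf("%-28s %d\n", name, val.i);
	else
		printf("%-28s %ld\n", name, val.l);
}

int
main(void)
{
	/* Auto-tuning limits and increments for TCP socket buffers. */
	show("net.inet.tcp.sendbuf_max");
	show("net.inet.tcp.sendbuf_inc");
	show("net.inet.tcp.recvbuf_max");
	show("net.inet.tcp.recvbuf_inc");
	/* Hard upper bound for any socket buffer. */
	show("kern.ipc.maxsockbuf");
	return (0);
}
```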
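And to illustrate the “boot-time tunable” idea from the last reply: the fragment below is a hypothetical sketch of how a hard-coded descriptor count such as RL_8169_TX_DESC_CNT could be replaced by a loader tunable. The name hw.re.tx_desc_cnt, the variable names and the clamping are assumptions made up for illustration, not existing driver code; the real work of sizing the descriptor ring at attach time is of course where the major effort mentioned above lies.

```c
/*
 * Hypothetical fragment (not actual re(4)/rl(4) code): replace a
 * hard-coded TX descriptor count with a boot-time loader tunable.
 */
#include <sys/param.h>
#include <sys/kernel.h>
#include <sys/sysctl.h>

#define RE_TX_DESC_CNT_DEFAULT	256	/* today's hard-coded value */
#define RE_TX_DESC_CNT_MAX	1024	/* what if_rlreg.h says the 8169 supports */

static int re_tx_desc_cnt = RE_TX_DESC_CNT_DEFAULT;

/* Picked up from /boot/loader.conf at boot, e.g. hw.re.tx_desc_cnt="1024". */
TUNABLE_INT("hw.re.tx_desc_cnt", &re_tx_desc_cnt);

/* Expose the value read-only so it can be inspected with sysctl(8). */
SYSCTL_NODE(_hw, OID_AUTO, re, CTLFLAG_RD, 0, "re(4) parameters (illustrative)");
SYSCTL_INT(_hw_re, OID_AUTO, tx_desc_cnt, CTLFLAG_RD,
    &re_tx_desc_cnt, 0, "Number of TX descriptors");

/*
 * The attach routine would sanity-check the value and use it when
 * allocating the descriptor ring instead of RL_8169_TX_DESC_CNT.
 */
static int
re_get_tx_desc_cnt(void)
{
	if (re_tx_desc_cnt < 1 || re_tx_desc_cnt > RE_TX_DESC_CNT_MAX)
		re_tx_desc_cnt = RE_TX_DESC_CNT_DEFAULT;
	return (re_tx_desc_cnt);
}
```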
