At least at the TCP level, buffers are self-tuning. You can configure a maximum buffer size and an increment step (net.inet.tcp.sendbuf* and net.inet.tcp.recvbuf*), and I presume the increment is also the minimal size (at least that’s the way I’d have implemented something like this).
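On FreeBSD the knobs look roughly like this (send side shown, the receive side is analogous; the values are illustrative, check `sysctl -d` for the exact semantics on your release):

```shell
# turn on send-buffer autotuning and set its limits
sysctl net.inet.tcp.sendbuf_auto=1        # let the stack grow the buffer on demand
sysctl net.inet.tcp.sendbuf_inc=16384     # growth step (and, presumably, starting size)
sysctl net.inet.tcp.sendbuf_max=2097152   # 2 MiB upper bound per connection
```

To make the settings survive a reboot, the same assignments go into /etc/sysctl.conf.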

I increased the maximum buffer size to bridge disconnected periods on a 3G connection while travelling by train.
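As a back-of-the-envelope sizing (the throughput and stall figures below are assumptions, not measurements): to keep the sender from blocking through a stall of g seconds at r bytes per second, the send buffer must hold at least r·g bytes:

```shell
rate=1048576           # assumed 3G throughput: 1 MiB/s
gap=10                 # assumed worst-case disconnect: 10 s
bytes=$((rate * gap))  # data queued while the link is down
echo "$bytes"          # → 10485760, i.e. set sendbuf_max to at least 10 MiB
```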

I think driver buffers only matter if they are not much smaller than the TCP buffer. I guess UDP packets (which I’d use for real-time streaming) are subject only to driver buffering, so I see why this discussion is relevant for real-time applications.

I expect fixed buffers are chosen to allow full saturation. How much work would it be to make each buffer self-tuning? And how much if there were a common framework for this task in the kernel?

Back at the TCP level, one problem I see is that there is one TCP stack per machine/jail. A now-fixed bug in the wpi driver caused packet loss on other interfaces, such as lo0, effectively making all X applications die. That such a thing is possible is ridiculous.

Another implication is that my enormous 3G-induced buffers also affect other interfaces, such as the LAN or the loopback interface, which is neither necessary nor desired.

I understand it was a lot of work to give jails their own stack, but I wonder: wouldn’t it be better to have one stack per interface/alias? Of course, that would necessitate an abstraction layer that distributes TCP requests to the right TCP stack.
