At least on the TCP level, buffers are already self-tuning. You can configure a maximum buffer size and an increment step (net.inet.tcp.sendbuf* and net.inet.tcp.recvbuf*); I presume the increment also serves as the minimal size (at least that's how I would have implemented something like this).
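For the curious, here's a minimal C sketch that reads those knobs via sysctlbyname(3). The names below (sendbuf_auto/_inc/_max, recvbuf_auto/_inc/_max) are what I'd expect on a typical FreeBSD system, but the exact set has changed between releases, so check `sysctl net.inet.tcp` on yours.

    /* Read the TCP buffer autotuning knobs. */
    #include <sys/types.h>
    #include <sys/sysctl.h>
    #include <stdio.h>

    int
    main(void)
    {
        const char *knobs[] = {
            "net.inet.tcp.sendbuf_auto", "net.inet.tcp.sendbuf_inc",
            "net.inet.tcp.sendbuf_max",  "net.inet.tcp.recvbuf_auto",
            "net.inet.tcp.recvbuf_inc",  "net.inet.tcp.recvbuf_max",
        };
        for (size_t i = 0; i < sizeof(knobs) / sizeof(knobs[0]); i++) {
            int val;
            size_t len = sizeof(val);
            if (sysctlbyname(knobs[i], &val, &len, NULL, 0) == 0)
                printf("%s = %d\n", knobs[i], val);
            else
                perror(knobs[i]);   /* knob absent on this release */
        }
        return (0);
    }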
I increased the maximum buffer size to bridge the disconnected periods of a 3G connection while travelling on a train.
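Concretely, raising the ceiling amounts to something like the following (the 4 MiB value is just what I picked for illustration; writing the sysctl needs root):

    /* Raise the receive-buffer ceiling for a high-latency link. */
    #include <sys/types.h>
    #include <sys/sysctl.h>
    #include <stdio.h>

    int
    main(void)
    {
        int newmax = 4 * 1024 * 1024;   /* illustrative 4 MiB ceiling */
        if (sysctlbyname("net.inet.tcp.recvbuf_max", NULL, NULL,
            &newmax, sizeof(newmax)) != 0) {
            perror("net.inet.tcp.recvbuf_max");
            return (1);
        }
        return (0);
    }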
I think driver buffers only matter when they are not much smaller than the TCP buffer. UDP packets (which I'd use for realtime streaming), on the other hand, are presumably subject only to driver buffering, which is why this discussion matters for realtime applications.
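UDP sockets do get socket buffers, but nothing autotunes them: they stay fixed at whatever SO_SNDBUF/SO_RCVBUF say. A sketch of sizing them by hand, as a streaming application would have to (the 256 KiB value is illustrative):

    /* Manually size the fixed buffers of a UDP socket. */
    #include <sys/types.h>
    #include <sys/socket.h>
    #include <stdio.h>

    int
    main(void)
    {
        int s = socket(AF_INET, SOCK_DGRAM, 0);
        int sz = 256 * 1024;            /* illustrative fixed size */
        if (s < 0 ||
            setsockopt(s, SOL_SOCKET, SO_RCVBUF, &sz, sizeof(sz)) != 0 ||
            setsockopt(s, SOL_SOCKET, SO_SNDBUF, &sz, sizeof(sz)) != 0)
            perror("udp buffer setup");
        return (0);
    }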
I expect fixed buffers are chosen to allow full saturation of the link. How much work would it be to make each buffer self-tuning? And how much if there were a common framework for this task in the kernel?
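To make the question concrete: such a framework might be as small as a descriptor plus one tuning hook that every subsystem embeds. Everything below is invented for illustration; none of these names exist in the kernel.

    /* Hypothetical sketch of a common self-tuning buffer framework. */
    #include <stddef.h>

    struct autobuf {
        size_t cur;     /* current allocation */
        size_t inc;     /* growth step (and minimum size) */
        size_t max;     /* hard ceiling set by the administrator */
    };

    /* Called by the owning subsystem when `used` bytes are occupied;
       returns the size the buffer should be (re)allocated to. */
    static size_t
    autobuf_tune(struct autobuf *ab, size_t used)
    {
        /* Nearly full and still below the ceiling: grow one step. */
        if (used >= ab->cur - ab->inc / 2 && ab->cur + ab->inc <= ab->max)
            ab->cur += ab->inc;
        return (ab->cur);
    }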
Back to the TCP level: one problem I see is that there is one TCP stack per machine/jail. A now-fixed bug in the wpi driver caused packet loss on other interfaces, including lo0, effectively making all X applications die. That such a thing is even possible is ridiculous.
Another implication is that the enormous buffers I configured for 3G also affect other interfaces, such as the LAN or the loopback interface, which is neither necessary nor desirable.
I understand it was a lot of work to give jails their own stack, but I wonder whether it wouldn't be better to have one stack per interface/alias. Of course, that would require an abstraction layer that distributes TCP requests to the right stack.
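Purely as a thought experiment, that abstraction layer could be little more than a lookup from local address to stack instance. All names below are invented; FreeBSD's actual per-jail stack virtualization (VIMAGE) is structured quite differently.

    /* Hypothetical dispatch layer: one TCP stack per interface/alias. */
    #include <sys/types.h>
    #include <netinet/in.h>
    #include <stddef.h>

    struct tcp_stack;                   /* opaque per-interface instance */

    struct stack_entry {
        struct in_addr    local;        /* address owned by this stack */
        struct tcp_stack *stack;
    };

    /* Route a request to the stack whose interface owns its local address. */
    static struct tcp_stack *
    tcp_dispatch(struct stack_entry *tbl, size_t n, struct in_addr local)
    {
        for (size_t i = 0; i < n; i++)
            if (tbl[i].local.s_addr == local.s_addr)
                return (tbl[i].stack);
        return (NULL);                  /* no stack owns this address */
    }

A buggy driver could then only poison the stack bound to its own interface, and oversized 3G buffers would stop leaking onto the LAN and loopback.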