I was recently working with a colleague on connecting two data centers via an IPsec tunnel. He was using iperf (coming soon to OmniOS bloody along with netperf) to test the bandwidth, and was disappointed in his results.
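If you want to run the same kind of test, a minimal iperf session looks something like the sketch below. The host names are hypothetical, and the -w flag, which requests a specific socket buffer size, is the knob most relevant to this post (the OS may clamp the request to its configured maximum):

# On a host in the far data center (hypothetical name dc2), start a server:
dc2$ iperf -s

# From the near side, run a 30-second test and ask for a 1 MB window:
dc1$ iperf -c dc2 -t 30 -w 1M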
The amount of memory you need to hold a TCP connection's unacknowledged data is the Bandwidth-Delay product. The defaults shipped in illumos are small on the receive side:
bloody(~)[0]% ndd -get /dev/tcp tcp_recv_hiwat
128000
bloody(~)[0]%

and even smaller on the transmit side:
bloody(~)[0]% ndd -get /dev/tcp tcp_xmit_hiwat
49152
bloody(~)[0]%
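To see why those numbers hurt on a long-haul link, work through the Bandwidth-Delay product for a made-up but plausible inter-data-center path: 1 Gbit/s of bandwidth and a 40 ms round-trip time (both figures are assumptions for illustration):

BDP = bandwidth x RTT
    = 125,000,000 bytes/s x 0.040 s
    = 5,000,000 bytes (5 MB)

That's roughly 40 times the 128,000-byte receive default and over 100 times the 49,152-byte transmit default, so the sender stalls waiting for acknowledgments long before the pipe is full.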
Even on platforms with automatic tuning, the maximums are often not set high enough.
Introducing IPsec into the picture adds additional latency (if not so much for encryption, thanks to AES-NI & friends, then for the encapsulation and checks). This is often enough to take what are normally good-enough maximums and render them too small. To change these on illumos, you can use the ndd(1M) command shown above, OR you can use the modern, persists-across-reboots ipadm(1M) command:
bloody(~)[1]% sudo ipadm set-prop -p recv_buf=1048576 tcp
bloody(~)[0]% sudo ipadm set-prop -p send_buf=1048576 tcp
bloody(~)[0]% ipadm show-prop -p send_buf tcp
PROTO PROPERTY              PERM CURRENT      PERSISTENT   DEFAULT      POSSIBLE
tcp   send_buf              rw   1048576      1048576      49152        4096-1048576
bloody(~)[0]% ipadm show-prop -p recv_buf tcp
PROTO PROPERTY              PERM CURRENT      PERSISTENT   DEFAULT      POSSIBLE
tcp   recv_buf              rw   1048576      1048576      128000       2048-1048576
bloody(~)[0]%
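If you only want a quick, non-persistent change to experiment with before committing, the older ndd(1M) route works as well. A sketch using the same 1 MB value:

# Takes effect immediately, but does not survive a reboot:
sudo ndd -set /dev/tcp tcp_recv_hiwat 1048576
sudo ndd -set /dev/tcp tcp_xmit_hiwat 1048576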
There's future work here: not only increasing the upper bound (easy), but also adopting automatic tuning, so a connection doesn't simply grab the maximum right off the bat.