Ah ok. Be careful when selecting the net.ipv4.tcp_rmem / net.ipv4.tcp_wmem values, because if you get them wrong, you can end up with dropped packets due to buffer overflows.
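A quick way to check and (purely as an example) set them with sysctl; the min/default/max numbers below are just placeholders to show the format, not a recommendation for your box:
Code:
# show the current values (min default max, in bytes)
sysctl net.ipv4.tcp_rmem net.ipv4.tcp_wmem

# example values only - pick yours based on your bandwidth-delay product
sysctl -w net.ipv4.tcp_rmem="4096 87380 16777216"
sysctl -w net.ipv4.tcp_wmem="4096 65536 16777216"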
See if there are functions you can offload to the NIC (e.g., checksums). See
ethtool -k eth0
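If something shows up as off there, you can try toggling it with a capital -K. Which features actually exist depends on the NIC and driver, so treat eth0 and the feature names below as examples only:
Code:
# enable TCP segmentation offload and generic receive offload (if supported)
ethtool -K eth0 tso on gro on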
You can also do a quick self-test (on localhost) with netperf, to find out what the maximum performance of the system itself really is, with all the TCP overhead and what have you:
Code:
netperf -T0,0 -C -c
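One thing this assumes: netperf needs a netserver process to talk to, so if your distro's package doesn't already start it as a daemon, launch it first:
Code:
netserver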
For example, on my puny Kimsufi 2G (Atom 230):
Code:
# netperf -T0,0 -C -c
TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to localhost.localdomain (127.0.0.1) port 0 AF_INET : demo : cpu bind
Recv   Send    Send                          Utilization       Service Demand
Socket Socket  Message  Elapsed              Send     Recv     Send    Recv
Size   Size    Size     Time     Throughput  local    remote   local   remote
bytes  bytes   bytes    secs.    10^6bits/s  % S      % S      us/KB   us/KB

 87380  16384  16384    10.01      1531.14   93.31    93.31    9.984   9.984
So 1531 Mbit/sec max (1.5 Gbps - add option "-f g" if you want to see it in Gbit/sec directly)
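That is, the same test with only the output unit changed:
Code:
netperf -T0,0 -C -c -f g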
On one of my other servers, an Intel Xeon E3-1240 V2, it's:
Code:
# netperf -T0,0 -C -c
TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to 127.0.0.1 (127.0.0.1) port 0 AF_INET : demo : cpu bind
Recv   Send    Send                          Utilization       Service Demand
Socket Socket  Message  Elapsed              Send     Recv     Send    Recv
Size   Size    Size     Time     Throughput  local    remote   local   remote
bytes  bytes   bytes    secs.    10^6bits/s  % S      % S      us/KB   us/KB

 87380  16384  16384    10.00     10824.71   96.30    96.30    0.729   0.729
So 10824 Mbit/sec (10 Gbps, almost to a tee). Neither of these two machines was tuned, by the way.