
OVH UK Network Upgrade


raxxeh
10-02-2013, 10:00
Quote Originally Posted by Myatu
Does your HG come with an OVH<->OVH 10Gbps bandwidth guarantee? New ones seem to.
That's a good question, and I can't see anything in the OVH Manager that says whether it does or doesn't. But given that it's just "10Gbps" with a data limit and 10Mbit thereafter, it should be safe to assume it's 10Gbit everywhere until I blow that cap.

Maybe one of the OVH reps could shed some light. The server was ordered mid-August 2012, so it was right before the bandwidth regime change, IIRC.

Then again, I've not seen anyone else attempt to actually achieve stable 10gbit in OVH's datacentre...

Myatu
10-02-2013, 09:04
Quote Originally Posted by raxxeh
Servers are currently under heavy load from various services; however, I have seen this test previously get close to 20Gbit. I'll rerun in about 12hrs when activity dies down.
Well, even at the 13 Gbps it can manage now, it would indicate that your current network settings and the server capability itself shouldn't be what stops you from achieving a sustained 10 Gbps (bar reading from a storage device, of course). That's quite certainly down to the pipeline, then. Does your HG come with an OVH<->OVH 10Gbps bandwidth guarantee? New ones seem to.

raxxeh
10-02-2013, 02:41
Quote Originally Posted by Myatu
Ah ok. Be careful with selecting the net.ipv4.tcp_(r|w)mem values, because if you get them wrong, you end up with dropped packets due to buffer overflows.

See if there are functions you can offload to the NIC (e.g., checksums). See ethtool -k eth0

You can also do a quick self-test (on localhost) with netperf, to find out what the maximum performance of the system itself really is, with all the TCP overhead and what have you:

Code:
netperf -T0,0 -C -c
For example, on my puny Kimsufi 2G (Atom 230):

Code:
# netperf -T0,0 -C -c
TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to localhost.localdomain (127.0.0.1) port 0 AF_INET : demo : cpu bind
Recv   Send    Send                          Utilization       Service Demand
Socket Socket  Message  Elapsed              Send     Recv     Send    Recv
Size   Size    Size     Time     Throughput  local    remote   local   remote
bytes  bytes   bytes    secs.    10^6bits/s  % S      % S      us/KB   us/KB

 87380  16384  16384    10.01      1531.14   93.31    93.31    9.984   9.984
So 1531 Mbit/sec max (1.5 Gbps - add option "-f g" if you want to see it in Gbit/sec directly)

On one of my other servers, an Intel Xeon E3-1240 V2, it's:

Code:
# netperf -T0,0 -C -c
TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to 127.0.0.1 (127.0.0.1) port 0 AF_INET : demo : cpu bind
Recv   Send    Send                          Utilization       Service Demand
Socket Socket  Message  Elapsed              Send     Recv     Send    Recv
Size   Size    Size     Time     Throughput  local    remote   local   remote
bytes  bytes   bytes    secs.    10^6bits/s  % S      % S      us/KB   us/KB

 87380  16384  16384    10.00      10824.71   96.30    96.30    0.729   0.729
So 10824 Mbit/sec (10 Gbps, almost to a tee). Neither of these two machines was tuned, by the way.
Code:
TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to localhost.localdomain (127.0.0.1) port 0 AF_INET : demo : cpu bind
Recv   Send    Send                          Utilization       Service Demand
Socket Socket  Message  Elapsed              Send     Recv     Send    Recv
Size   Size    Size     Time     Throughput  local    remote   local   remote
bytes  bytes   bytes    secs.    10^6bits/s  % S      % S      us/KB   us/KB

10000000 10000000 10000000    10.00      13857.71   24.72    24.68    2.622   2.618
Servers are currently under heavy load from various services; however, I have seen this test previously get close to 20Gbit. I'll rerun in about 12hrs when activity dies down.

Yeah, I've noticed some dropped packets, but it's not consistent. I'll probably drop back the rmem stuff later today.

Myatu
09-02-2013, 21:04
Ah ok. Be careful with selecting the net.ipv4.tcp_(r|w)mem values, because if you get them wrong, you end up with dropped packets due to buffer overflows.

See if there are functions you can offload to the NIC (e.g., checksums). See ethtool -k eth0
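Something along these lines should list and toggle them (a rough sketch; eth0 is just a placeholder and the exact feature names depend on the driver):

Code:
# list current offload settings (checksums, TSO, GSO, etc.)
ethtool -k eth0

# enable the common ones if the driver supports them
ethtool -K eth0 rx on tx on
ethtool -K eth0 tso on gso on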

You can also do a quick self-test (on localhost) with netperf, to find out what the maximum performance of the system itself really is, with all the TCP overhead and what have you:

Code:
netperf -T0,0 -C -c
For example, on my puny Kimsufi 2G (Atom 230):

Code:
# netperf -T0,0 -C -c
TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to localhost.localdomain (127.0.0.1) port 0 AF_INET : demo : cpu bind
Recv   Send    Send                          Utilization       Service Demand
Socket Socket  Message  Elapsed              Send     Recv     Send    Recv
Size   Size    Size     Time     Throughput  local    remote   local   remote
bytes  bytes   bytes    secs.    10^6bits/s  % S      % S      us/KB   us/KB

 87380  16384  16384    10.01      1531.14   93.31    93.31    9.984   9.984
So 1531 Mbit/sec max (1.5 Gbps - add option "-f g" if you want to see it in Gbit/sec directly)
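i.e. the same run, reporting directly in Gbit/sec:

Code:
netperf -T0,0 -C -c -f g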

On one of my other servers, an Intel Xeon E3-1240 V2, it's:

Code:
# netperf -T0,0 -C -c
TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to 127.0.0.1 (127.0.0.1) port 0 AF_INET : demo : cpu bind
Recv   Send    Send                          Utilization       Service Demand
Socket Socket  Message  Elapsed              Send     Recv     Send    Recv
Size   Size    Size     Time     Throughput  local    remote   local   remote
bytes  bytes   bytes    secs.    10^6bits/s  % S      % S      us/KB   us/KB

 87380  16384  16384    10.00      10824.71   96.30    96.30    0.729   0.729
So 10824 Mbit/sec (10 Gbps, almost to a tee). Neither of these two machines was tuned, by the way.

raxxeh
09-02-2013, 19:17
Quote Originally Posted by Myatu
Raxxeh, what's the MTU set for on your NIC? (ifconfig)
1500 & 9000, results are the same ;p
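For anyone wanting to flip between the two themselves, it's roughly this (eth0 being whatever your interface is called):

Code:
# check the current MTU
ip link show eth0

# jumbo frames - only helps if every hop in the path supports it
ip link set dev eth0 mtu 9000

# back to the default
ip link set dev eth0 mtu 1500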

These are some other changes I've made that have improved performance along the way:

net.ipv4.tcp_sack = 0
net.ipv4.tcp_timestamps = 0

net.ipv4.tcp_rmem = 10000000 10000000 10000000
net.ipv4.tcp_wmem = 10000000 10000000 10000000
net.ipv4.tcp_mem = 10000000 10000000 10000000

net.core.rmem_max = 524287
net.core.wmem_max = 524287
net.core.rmem_default = 524287
net.core.wmem_default = 524287
net.core.optmem_max = 524287
net.core.netdev_max_backlog = 300000
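If anyone wants to replicate them, a rough sketch of applying and checking (assuming they're kept in /etc/sysctl.conf):

Code:
# reload /etc/sysctl.conf without a reboot
sysctl -p

# or set a single value on the fly
sysctl -w net.core.netdev_max_backlog=300000

# check what the kernel actually took
sysctl net.ipv4.tcp_rmem net.core.rmem_max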

Myatu
09-02-2013, 17:16
Raxxeh, what's the MTU set for on your NIC? (ifconfig)

raxxeh
09-02-2013, 09:51
Not sure that test is doing much, elvis. I didn't see activity break 130MB/s on jnettop, and half of the sources appear dead; it also doesn't help that they're so small.

Fact remains, 3 known 10Gbit servers still don't breach 650MByte/s, even when using several hundred connections to each. I could understand if they were capping out, but there is no variation; they will sit at 645-650MB/s for hours.

elvis1
09-02-2013, 02:25
Out of interest: what do you use the server for (if you'd like to share)?

Here is an ugly list I've made:
http://www.imagepedia.net/list-testb.../578#comment-5

Debian/Ubuntu:
Put the URLs in a file (install aria2 first: sudo apt-get install aria2). If that does not work, add the proper repos, then sudo apt-get update and sudo apt-get install aria2:

Repos here:
http://repogen.simplylinux.ch/

Run this command with the URLs inside:

aria2c -i uris.txt
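As a rough sketch, uris.txt is just one URL per line; the URLs and connection counts below are only placeholders:

Code:
# uris.txt - one URL per line, for example:
#   http://mirror.example.com/test1GB.bin
#   http://mirror.example.com/test10GB.bin

# several connections per server, several downloads at once
aria2c -i uris.txt -x 8 -s 8 -j 4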

Best of luck

raxxeh
07-02-2013, 06:48
Quote Originally Posted by MarcA
Taken from their Site
Incoming (Internet to OVH) 10 Gbps
Outgoing (OVH to Internet) 300 Mbps <<
Well aware of that, thank you for the notification of something very obvious while also being unable to read.

Not relevant to me for two reasons.

One: the tests I am running are internal
Two:

Bandwidth / Traffic
Network connection: 10 Gbps
Monthly traffic: 40.00 TiB per month
Resetting: 2013-02-22
Consumed this month: 12.71 TiB
Remaining this month: 27.29 TiB
Additional traffic: 0 TB

I have 10Gbit external. I do not have a new HG server; it is still under the old bandwidth regime.

And to go one step further, you'll notice that I also specified 650Mbyte/s.

That is about 5.2Gbit/s (650MByte/s × 8).

MarcA
06-02-2013, 20:41
Taken from their Site
Incoming (Internet to OVH) 10 Gbps
Outgoing (OVH to Internet) 300 Mbps <<


Network
Connection 10 Gbps Lossless
Equipment Cisco Nexus
Bandwidth
Type Guaranteed
OVH to OVH 10 Gbps
Incoming (Internet to OVH) 10 Gbps
Outgoing (OVH to Internet) 300 Mbps <<
Additional outgoing +100 Mbps = + 85.00 /month
Maximum outgoing per server 3 Gbps
Maximum outgoing per customer Not limited

raxxeh
06-02-2013, 20:33
Are they HG servers, turb? Or legacy MG or EG (forgot which)?

I'm not able to maintain anything over 650MB/s beyond a 2-3 second burst, and I'm still not entirely sold on that being an accurate reading, even when hammering multiple known 10Gbit servers (LeaseWeb mirror, proof.ovh.net, iperf.ovh.net dual mode test).

Always 650MB/s. I've tried about 20 different kernel configs and every networking change under the sun, and it still won't break 650MB/s.

A little annoying, given it should easily burst into the 900MB/s region at times (HG-XL).

Would be nice if I could get hold of someone else who has a HG server that has time to experiment and see if this is an OVH limitation.
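For reference, the iperf runs against those endpoints look roughly like this (iperf 2 syntax; the stream count and duration here are arbitrary):

Code:
# 8 parallel streams for 30 seconds, reported in MBytes/sec
iperf -c iperf.ovh.net -P 8 -t 30 -f M

# the "dual" test, pushing both directions at once
iperf -c iperf.ovh.net -d -t 30 -f M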

wii89
01-02-2013, 13:30
Good news about the Infinera announcement

Happy about London getting the upgrade so quickly!

turbanator
01-02-2013, 10:13
Will this make any difference to the internal 10Gbit-to-10Gbit speeds in RBX-4, lol? :P If not, then it doesn't really affect me, but good news for the rest of you.

Myatu
30-01-2013, 15:16
That's fast! (No pun intended..)

Abdurrahman
30-01-2013, 12:44
Hi everyone,

Following our announcement yesterday with Infinera (here), we'll be beginning the work this evening in London. You can follow the task here:

http://status.ovh.net/?do=details&id=4041

We have planned the works for two days just to be sure that everything is okay.