OVH Community, your new community space.

Routing 2011
17-11-2010, 00:52
Good evening,

Some news about routing on the vRack:

- If you recall, for the exit from our data centers we
initially planned to use the Cisco Nexus 7016. After
production testing, we found that it is not stable
enough for us to do the routing with it. Very good at
switching, but not yet mature for routing. So we had a
very bad moment, because our plans fell through. We
therefore quickly ordered routers ... from Cisco,
Juniper and Brocade.
We had very little time, and the first router we
received was the Cisco ASR 9010. We tested it and it
works (very) well. We decided not to attempt a 2 p.m.
launch, and put it into production late last week. You
can see it on our backbone weathermap: asr-g1-a9.
Yes, you have to hang on to follow all this... The
advantage of this router is that it is designed to
deliver a high-availability service: 320 Gbps of
capacity, currently passing 150 Gbps. Cool.

Without waiting, we ordered a second one, which we hope
will arrive in late December. This will allow us to run
a second backbone in parallel with the current one. The
second backbone routers are already in place: in
Amsterdam we set up ams-5-6k, and in Frankfurt we added
fra-1-6k. London is a little behind, because Global
Switch has only one rack left... We are looking for a
solution, but it too will end up with a new router.
These 3 routers will allow us to double our AMSIX
capacity to 80 Gbps, and likewise 80 Gbps for DECIX. We
will look at LINX shortly. And to connect all this to
Roubaix, we are increasing the transmission capacity
between Roubaix and Amsterdam to 120 Gbps, to 140 Gbps
towards Frankfurt, and London will rise to 100 Gbps.
Then in February we will add 40 Gbps more. In short:
redundancy and high availability.

All this to say that by late December / early January
we should finish this phase of the upgrades.

- Following the issues with the OVH VSS in 2009/2010,
we decided to change policy and specialize the routers.
We have thus set up 8 new egress routers
(rbx-s1/2/3/.../8), and 4 more are planned. This will
let us better manage network quality, through the
infrastructure we already use on Roubaix 1: 2 routers
working together. High availability.

- We are also working to upgrade the HG network to a
high-availability network, with 10G links down to the
servers. There are still about three months of work.
We will then officially change the offer to 100%
availability on HG. A guarantee that will truly be
assured thanks to very new technology, which we had the
joy of debugging throughout early 2010 and which we set
up on the shared hosting to connect 6000 servers. This
is what allowed us to announce unlimited traffic on the
shared hostings... So we know these setups by heart,
and we will be able to offer 100% availability... This
is our 2011 target: 100% on the offers.

- In parallel, we have launched a large investment in
the private cloud. We have built new rooms in Roubaix 2,
where by late December we will host 4000 servers for
this business, and 8000 servers by the end of February.
We use the same routing and switching technologies,
which will allow us to offer 100% availability on the
PCC (Private Cloud Computing) offers. The offer is in
internal alpha testing. For the beta, we preferred to
delay the launch of the PCC to mid-December, the time
to receive and mount the 4000 servers, then connect
each of these servers with at least 2 cables to doubled
switches, themselves double-connected, again in high
availability, to aggregation switches, ending on
several routers working in parallel in an
active/active/active/active configuration. In short, it
made no sense to do a beta with 100 or 200 servers,
because the offer seems extremely interesting to us and
we want you to discover it without us being stingy:
"the PCC free for 5 days." So that requires "a few"
servers. 4000 machines with 4, 8, 16 or 48 cores and
16, 32, 64 or 128 GB of RAM + NAS-HA... etc. That
should do the trick... because if you click "add a new
server" in the interface and delivery takes more than
1 minute, it is not the PCC...
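The active/active/active/active design above comes down to
multiplying independent paths at every layer. As a rough
sketch (the 99.9% figure and the independence assumption
are mine, not OVH's), the availability of n parallel
devices, each available with probability p, is
1 - (1 - p)^n:

```python
def parallel_availability(p: float, n: int) -> float:
    """Availability of n independent devices running
    active/active: the service is down only when all
    n fail at the same time (illustrative model)."""
    return 1.0 - (1.0 - p) ** n

# Hypothetical figures, not OVH's real numbers:
# one router at 99.9% vs. four in active/active.
print(f"1 router : {parallel_availability(0.999, 1):.6%}")
print(f"4 routers: {parallel_availability(0.999, 4):.10%}")
```

Under this toy model, going from one router to four shrinks
the expected downtime from hours per year to effectively
nothing, which is the point of doubling every layer from
server cables up to the routers.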

5 paragraphs... I'll stop there.