
News from Roubaix Valley


Thelen
07-03-2013, 04:52
Yep, agreed there. If only software supported v6-only; most VM software doesn't yet :/

Myatu
06-03-2013, 18:28
Quote Originally Posted by Thelen
964,608 apparently, but same difference. No I meant, free IPs not total. Though I suppose they will reclaim a lot of IPs by raising the prices.
I would imagine they haven't exhausted their entire pool just yet, but raising prices is certainly one way to manage the amount they have available.

I'd still like to see more IPv6-only services (VMs in particular), because in some setups IPv4 isn't really needed (as in my case), so the IPv4 addresses are wasted for no good reason...

Thelen
06-03-2013, 06:41
Quote Originally Posted by Myatu
OVH's IPv4 address space is 1,003,008 IPs at the moment.
964,608 apparently, but same difference. No I meant, free IPs not total. Though I suppose they will reclaim a lot of IPs by raising the prices.

Myatu
05-03-2013, 13:26
Quote Originally Posted by Thelen
How are you dealing with the IPv4 shortages? I can't imagine how you aim to fit 300k servers at BHS when you don't even have 10% of that available, apparently.
OVH's IPv4 address space is 1,003,008 IPs at the moment.

alex
05-03-2013, 12:08
Quote Originally Posted by Thelen
How are you dealing with the IPv4 shortages? I can't imagine how you aim to fit 300k servers at BHS when you don't even have 10% of that available, apparently.
They deal with it the following way: increase the prices on IPv4 (it used to be only a setup fee); this way the majority of VPS providers are gone from the OVH network.

Shimon
05-03-2013, 08:20
Cloudflare provides an IPv4 <-> IPv6 tunnel. That means you just need to give them an IPv6 server address and both IPv4 and IPv6 users can connect. Does OVH do the same with its CDN?

(Slightly off-topic: is OVH considering providing something like Railgun for its CDN, or even becoming a Cloudflare partner and providing Railgun?)

If Proxmox is pushed to add IPv6 to its UI, then potentially a lot more IPv4 addresses can be freed up.
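The tunnel idea above boils down to representing IPv4 clients inside the IPv6 address space, in the IPv4-mapped range ::ffff:0:0/96. A minimal sketch using Python's standard ipaddress module (illustrative only, not Cloudflare's or OVH's actual mechanism):

```python
# Illustrative only: how an IPv4 address is embedded in the IPv6 address
# space (the ::ffff:0:0/96 "IPv4-mapped" range), the representation a
# dual-stack front end uses for IPv4 clients behind one IPv6 socket.
import ipaddress
from typing import Optional

def to_mapped(v4: str) -> ipaddress.IPv6Address:
    """Embed an IPv4 address in the IPv4-mapped IPv6 range ::ffff:0:0/96."""
    return ipaddress.IPv6Address("::ffff:" + v4)

def from_mapped(v6: str) -> Optional[str]:
    """Recover the original IPv4 address, or None for a native IPv6 address."""
    mapped = ipaddress.IPv6Address(v6).ipv4_mapped
    return str(mapped) if mapped is not None else None
```

A dual-stack listener typically sees an IPv4 client on an IPv6 socket as exactly such a ::ffff:x.x.x.x address, which is why an IPv6-first service can still be reached by IPv4 users.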

Thelen
05-03-2013, 04:32
How are you dealing with the IPv4 shortages? I can't imagine how you aim to fit 300k servers at BHS when you don't even have 10% of that available, apparently.

Tz-OVH
04-03-2013, 17:34
Hello,
What's New at OVH?

Lots of projects are being finished and going into production. To give a clearer picture, here is a small recap of what has been occupying our days.


Datacentres:
------------
The RBX5 DC, with a capacity of 14,000 servers, will soon be full, probably sometime in March. RBX6 is planned for the end of April, and the next ones in 2015. We have indeed bought a lot of new buildings in Roubaix Valley and will be able to build RBX7, RBX8, RBX9, RBX10.. but in two years. That is the time it takes to bring two 20kV EDF power feeds to a new address.

To avoid completely saturating RBX, we started the 2nd SBG DC: SBG4. It was put into production 3 weeks ago and is already relieving RBX5 for DE, PL and IT customers. In parallel, the SBG2 DC is under construction and is expected for Q3 2013. In the meantime, we are finishing the site security and the optical fibre, and increasing the electrical power through the civil engineering work in the Strasbourg port. SBG2 will really ramp up by the end of 2013 thanks to the shortest network path to IT, which will be built on our dark fibre.

To take over from RBX, we are planning GRA in Gravelines. The GRA1 DC will start during March. We are currently doing the civil engineering work to move the optical fibre and connect it to two networks: Paris/London (bypassing Roubaix) and Gravelines/Brussels. The DC will have 3 outputs in 3 directions: London, Brussels and Paris, and will be connected to BHS and RBX directly through secured loops.

As you will understand, March and April will be busy months. We are doing everything to continue delivering in 1 hour, but between RBX5 filling up, SBG4 which has just started, and GRA1 and RBX6 which are about to start, we may have a few days, max 2 weeks, of uncertainty due to a lack of space in our DCs. There is a risk, even if we are managing it. So if you have plans for March/April, please order a little in advance.

On the American continent, we keep selling more servers from our BHS DC. Because the success is exponential, we have started the construction of the 2nd building, BHS2, whose capacity is 10,000 servers. The following BHS3/BHS4 will be built according to the RBX5 model, which allows a room to start with 2,000 servers in just 2 weeks. BHS3/BHS4 will also be powered at 400V, as in Europe, instead of the 480V used in BHS1/BHS2. We have indeed managed to standardise and validate the European voltages in North America.

We have started looking into building 3 DCs in the USA: you cannot be the world No. 1 without being connected to the No. 1 market, where growth is the strongest. It is a project that should take us less than 2 years.


NETWORK:
-------
We have completed the capacity expansion of our network in the USA. Currently we push 20% of our traffic from Europe through this new network, which is already very well interconnected with American, South American and Asian operators. Regarding our BHS DC: we have slightly less than 500Gbps of output capacity, and actual traffic already exceeds 30Gbps and keeps rising.

In Europe, we have completed the upgrades of our long-distance network between RBX/LDN/AMS. We changed the technology (still Infinera) and we use 100G coherent for our transport circuits between the DCs and the main points of presence in Europe. Next week, we resume the work on AMS/BRU/RBX to complete the first loop and optically secure all the circuits. This week we will update the Cisco ASR9s in RBX, Paris and SBG. This will allow us to add capacity to our network in Paris, but also to fix the netflow bugs that prevent us from properly handling attacks.

Speaking of attacks, we are testing solutions from Arbor Networks, which can detect an attack, divert it to mitigation boxes and neutralise it. For some months we have also been testing Tilera for real-time attack filtering. Both give good results. We are trying to decide which solution we will retain to implement a comprehensive solution for each DC. The goal is to handle attacks of 200Gbps and more.

To return to the network: after the RBX/LDN/AMS/BRU/RBX loop, we will work on the RBX/FRA/SBG/PAR/RBX loop with the aim of securing all circuits. Why secure the circuits? When an optical fibre fails (an excavator cutting the fibre), the 10G circuit between 2 routers is cut, unless it is protected at the optical level by another path. In that case, when the fibre fails, the circuit stays UP because the equipment automatically uses another optical path. There is therefore no break in service. This type of network is easier and cheaper to build with 100G coherent.

We should sign for the optical fibres between SBG and Milan (IT) in a few weeks and then put this network in place in September. In parallel, we are looking for a DC location in Germany ..


CDN:
----
The BETA is going pretty well. We have fixed bugs and our CDN already handles more than 1.5Gbps of traffic. Thank you to the BETA testers for the feedback that allows us to adjust the settings.

We expect to launch the first commercial offer, CDN Dedicated, in a few days: €9.99/month for an unlimited number of domains with an Anycast IP.

In parallel, we plan to start the DNS service based on the same infrastructure: Low Latency DNS, aka LL DNS. It will cost €1/year per domain.


VPS:
----
The offer is off to a great start. We are very happy, because we think it is on track for 100K VMs/year.

On the other hand, we had some internal concerns with the first deliveries. We lacked resources on the filers, and we noticed a bug in vCloud quite late .. in short, we made some mistakes. The new VPS Cloud are now delivered correctly and we are catching up on the backlog. By the end of the week, the first VPS Cloud will all be set.

The manager should be out in a few days. At that point we will move the offer from GAMMA to FINAL, so with no bugs.


pCI 2013
--------
The pCI 2013 is the private version of the VPS, with LB, VPN, FW, guaranteed HD IO, multi-site and hourly billing. We expect to release the BETA at the end of March.


Hubic:
------
The infrastructure is stable, and that's nice. We are still testing the different clients on iPhone, iPad, Android, Windows and Web App. There are still a few days of work before making them public. As we said, we want to offer you something that works. In short.

Customers who trusted us and paid for a subscription will get a 2nd year free. All this will happen at the same time.


Mutu & Geocache
---------------
Work on shared hosting is now moving forward well. The main problem is due to the fact that the tools we developed to monitor the activity of some sites did not take into account the actual CPU/RAM use of each site. We thus found a lot of big sites that were taking up 40% of the underlying CPU/RAM capacity. As we are approaching 7,000 HG servers, 40% is still 3,000 HG servers. The latest work consists of replacing the policing scripts (the famous okillerd) to share resources fairly among all customers. We use BigData and MapReduce to collect resource-usage statistics on the clusters. This allows us to see problems more quickly. We think the problems are still felt on approximately 5% of sites. Work continues to get to 0%, then to update the Mutu 2013 offers ..

To accelerate the mutu, we will integrate a CDN by default, internally called GeoCache, which has taken up 75% of our investments in this period. The rest is self-financed, thanks to shareholders who do not require dividends at the end of each year. This will allow us to support our growth in France and in Europe thanks to your orders.


I'll stop there. It's very long to read and
heavy to digest.

In short, it works hard every day

Regards
Octave
Thanks google translate.

oles@ovh.net
04-03-2013, 12:32
Hello,

So what's New at OVH?

Lots of projects are being finished and put into production. Here's a recap of what we have been doing with our days.


Datacentres:
------------
The RBX5 DC, with a capacity of 14,000 servers, will soon be full, probably sometime in March. We're preparing RBX6 for the end of April, then the next ones in 2015. In fact, we have bought a lot of new buildings in Roubaix Valley, so we will be able to build RBX7, RBX8, RBX9, RBX10 .. but that's in two years. This is the time it takes to bring two 20kV EDF (electricity company) power feeds to a new address.

To avoid completely saturating RBX, we started the 2nd SBG DC: SBG4. It was put into production 3 weeks ago and is already relieving RBX5 for PL, IT and DE clients. In parallel, the SBG2 DC is under construction and is expected in the 3rd quarter of 2013. In the meantime, we're finishing the site security and the fibre optics, and increasing the electrical power through the civil engineering work in the Strasbourg port. SBG2 will really ramp up by the end of 2013 thanks to the shortest network path to IT, which will be built on our dark fibre.

Following on from RBX, we're planning GRA in Gravelines. The GRA1 DC will start in March. We are currently doing the civil engineering work to move the optical fibre and connect it to two new networks: Paris/London (bypassing Roubaix) and Gravelines/Brussels. The DC will have 3 outputs in 3 directions: London, Brussels and Paris, and will be connected to BHS and RBX directly through secured loops.

As you'll understand, March and April will be very busy months. We're doing everything to continue to deliver in 1hr, but between RBX5 which is filling up, SBG4 which has just started, and GRA1 and RBX6 which will start soon, there may be a few days, max 2 weeks, of uncertainty, as we are running out of space in our DCs. There is a risk, even if we're managing it. So if you have projects for March/April, you will need to order a little in advance to ensure they're not affected.

Regarding America, we're selling more and more servers from our BHS DC. Because the success is exponential, we have started the construction of the 2nd building, BHS2, whose capacity is 10,000 servers. Following that, BHS3/BHS4 will be built on the RBX5 model, which allows us to deploy 2,000 servers in a room in just 2 weeks. BHS3/BHS4 will also be powered at 400V, as in Europe, instead of the 480V used in BHS1/BHS2. In fact, we have managed to standardise and validate the European voltages in North America.

We've started researching the construction of 3 DCs in the US: you cannot be the world No. 1 without being connected to the No. 1 market, where growth is the strongest. This is a project that should take us less than 2 years.


NETWORK:
-------
We have completed the capacity expansion of our network in the USA. Currently we carry 20% of our traffic from Europe through this new network, which is already very well interconnected with American, South American and Asian operators. Concerning our DC in BHS: we have a little less than 500Gbps of output capacity, and actual traffic already exceeds 30Gbps and is rising.

In Europe, we have completed the updates to our long-distance network between RBX/LDN/AMS. We have changed the technology (still Infinera) and we're using 100G coherent for our transport circuits between the DCs and the major points of presence in Europe. Next week, we resume work on AMS/BRU/RBX to complete the first loop and optically secure all circuits. This week, we'll update the Cisco ASR9s in RBX, Paris and SBG.
This will allow us to add capacity to our network in Paris, but also fix the netflow bugs that prevent us from properly handling attacks.

Speaking of attacks, we're testing solutions from Arbor Networks, which can detect an attack, divert it to mitigation boxes and neutralise it. We've also been testing Tilera for some months, for real-time attack filtering. Both have given good results. We are in the process of choosing which solution we will retain to implement a comprehensive solution for each DC. The goal is to handle attacks of 200Gbps and more.
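As a rough illustration of the detection side described above (mitigation itself happens on dedicated boxes), here is a toy threshold check over sampled flow records. The threshold, names and logic are made up for illustration; this is not Arbor's or OVH's actual algorithm:

```python
# Toy flow-based attack detection: aggregate sampled flow records per
# destination and flag destinations whose packet rate exceeds a threshold.
# PPS_THRESHOLD is a hypothetical alarm level, not a real operational value.
from collections import defaultdict

PPS_THRESHOLD = 1_000_000  # hypothetical packets-per-second alarm level

def detect_targets(flow_samples, window_seconds):
    """Sum sampled flow records per destination IP and return, sorted, the
    destinations whose average packet rate over the window exceeds the
    threshold. Each sample is a (dst_ip, packet_count) pair."""
    totals = defaultdict(int)
    for dst_ip, packets in flow_samples:
        totals[dst_ip] += packets
    return sorted(ip for ip, pkts in totals.items()
                  if pkts / window_seconds > PPS_THRESHOLD)
```

A flagged destination would then have its traffic diverted through the mitigation path instead of the normal one.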

Coming back to the network: after the RBX/LDN/AMS/BRU/RBX loop, we will work on the RBX/FRA/SBG/PAR/RBX loop with the goal of securing all circuits. Why secure the circuits?
When an optical fibre fails (plant machinery tearing the fibre), the 10G circuit between 2 routers is cut, unless it is protected at the optical level by another path. In that case, when the fibre is cut, the circuit stays UP because the equipment automatically uses another optical path, so there is no break in service. This type of network is simpler and cheaper to build with 100G coherent.

We should sign for the optical fibres between SBG and Milan in a few weeks and then put this network in place in September. In parallel, we're looking for a DC location in Germany ..


CDN:
----
The BETA is going pretty well. We've fixed bugs and we can already see more than 1.5Gbps of traffic on our CDN. Thank you to the BETA testers for their feedback, which allows us to adjust the settings.

We're planning to launch the first commercial offer, CDN Dedicated, in the next few days at €9.99 per month for an unlimited number of domains with an Anycast IP.

In parallel, we plan to start a DNS service based on the same infrastructure: Low Latency DNS, aka LL DNS, which will cost €1 per year per domain.


VPS:
----
This offer is off to a great start. We are very happy, because we think it is on track for 100k VMs a year.

However, we had some internal concerns with the first deliveries. We lacked resources on the filers, and we found a bug in vCloud quite late .. so to summarise, we made some mistakes.
The new VPS Cloud are now properly delivered and we are catching up on the backlog. By the end of the week, the first VPS Cloud will all be set.

The manager should be out in a few days. At that time, we will release the GAMMA offer, which will finally be bug-free.


pCI 2013
--------
pCI 2013 is the private version of the VPS with LB, VPN, FW and guaranteed HD IO, multi-site and hourly billing. We're planning the BETA for the end of March.


hubic:
------
The infrastructure is stable and it's good. We are still testing the different clients on iPhone, iPad, Android, Windows and Web App. There are still a few days of work before making them public.
In short, as we have said all along, we want to give you an offer that works.

Customers who have trusted us and paid for a subscription will get a 2nd year free. All this happens at the same time.


Shared hosting & GeoCache
---------------
Work on shared hosting is now well advanced. The main problem is due to the fact that the tools we developed to monitor the activity of certain sites were not taking into account the actual CPU/RAM use of each site. We therefore found a lot of big sites that were taking up 40% of the underlying CPU/RAM infrastructure capacity. As we approach 7,000 HG servers, 40% is still 3,000 HG servers. The latest work consists of replacing the policing scripts (the famous okillerd) to share resources fairly among all customers. We use BigData and MapReduce to collect resource-usage statistics on the clusters. This allows us to see problems more quickly. We believe these problems are still felt by about 5% of sites. Work continues to reach 0%, then we will update to the Hosting 2013 offers ..
To accelerate hosting, we will integrate a CDN by default, internally called GeoCache, which has made up 75% of our investments in this period. The rest is self-financed, thanks to shareholders who do not require dividends at the end of each year. This will allow us to sustain our growth in France and Europe with your orders.
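The resource-accounting scheme described above (collect usage statistics with MapReduce, then police sites that exceed their fair share) can be sketched as a tiny map/reduce over monitoring samples. The function names and the 5% fair-share limit here are hypothetical, purely to illustrate the approach, not OVH's okillerd replacement:

```python
# Minimal map/reduce-style sketch of per-site CPU accounting on a shared
# hosting cluster. CPU_SHARE_LIMIT is a made-up fair-share ceiling.
from collections import defaultdict

CPU_SHARE_LIMIT = 0.05  # hypothetical ceiling: 5% of a cluster per site

def map_samples(samples):
    """Map step: emit (site, cpu_seconds) pairs from raw monitoring samples."""
    for site, cpu_seconds in samples:
        yield site, cpu_seconds

def reduce_usage(pairs):
    """Reduce step: total CPU seconds per site."""
    totals = defaultdict(float)
    for site, cpu in pairs:
        totals[site] += cpu
    return dict(totals)

def over_quota(samples, cluster_cpu_seconds):
    """Flag sites consuming more than the fair-share ceiling of the cluster."""
    totals = reduce_usage(map_samples(samples))
    return sorted(site for site, cpu in totals.items()
                  if cpu / cluster_cpu_seconds > CPU_SHARE_LIMIT)
```

In a real deployment the map and reduce steps would run distributed across the clusters; the point is only that aggregating actual per-site usage is what makes fair policing possible.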


I'll stop there. It's a very long and heavy read to digest.

In short, we're working hard every day

Regards

Octave