
What's new in September 2011


elvis1
11-09-2011, 08:08
Oregon and Denver

lewis4490
08-09-2011, 16:10
Oh, OVH really do make me laugh sometimes:

Worst case scenario we'll build RBX5.

I had to laugh at this, but it would be cool.

Myatu
07-09-2011, 18:37
Quote Originally Posted by oles@ovh.net
But if you look at the level of natural disasters it's a good thing to be around the border as there are no earthquakes, no tornadoes, no volcanoes.
...
One can then imagine why we would settle down between ... Seattle (Washington) and/or Portland (Oregon)
Err... I take it geography isn't your strong point, Oles. Seattle is in a "very high" risk zone for earthquakes, and Portland in a "moderate-high" risk zone. You also have the Cascade Range of active volcanoes there, the best known of which is probably Mt. St. Helens.

Chicago isn't a bad choice, as it is well connected and has a large pool of IT people to draw from, but land is very costly in that area. In Detroit you'll be able to snap up warehouses a dime a dozen, and it borders directly on Canada - drive past Toronto and you're in Quebec *shudder*.

oles@ovh.net
07-09-2011, 11:02
Hello,

So what's new at OVH? We've been hard at work. It will take a few days to complete certain phases of development of several (new) services, and we will take the opportunity to make several announcements and gather feedback.

Otherwise:
- OVH Mag No. 2 is being edited in-house. It has been capped at 200 pages, like No. 1. We should have it ready for you by the end of the month.

- Our latest datacentre, RBX4, is filling up faster than expected. So we're looking at how to cope with this growth without running out of space toward the end of the year. Several options are being validated, relying on innovation and technological breakthroughs. The goal is to take RBX4 from 35,000 servers to 45,000, 60,000 and then 75,000 servers. We'll see the results of the internal alphas. Worst case scenario we'll build RBX5.

- To set up the backbone in the United States and Asia, we ordered a lot of new routers. This equipment took (forever!) 3-4 weeks to arrive, and yesterday we received the last of the orders. We can now finalise the configurations, and next week we'll start the deployments. This will allow us to establish new peering points in the United States and Asia and two new transit providers (Level3 and NTT), and thus increase the capacity of our network by approximately 250 Gbps. As expected, by the end of 2012 we will have > 1Tbps of interconnection to the Internet.

- Tomorrow night, we will complete the backbone for the NRAs in Lille. There will be a service interruption of a few minutes, needed to unplug the old hardware and light up the new.

- Currently in Lille, each DSLAM is connected at 2x1Gbps to a switch located in the NRA itself. That switch is connected at 10Gbps to another switch in another NRA, and so on, forming loops. This type of backbone is fine for ADSL, but with the advent of VDSL2, and thus 100Mbps per subscriber, we looked for something better. And we found something better and cheaper: the DSLAMs are (still) connected at Xx1Gbps (X = 2, 4, 8), but directly to two routers in the backbone. No more switches in the NRA.
Each 1Gbps link is carried over a DWDM network laid out in loops, using 4Gbps and 10Gbps wavelengths. Knowing that there are 5 to 10 NRAs per loop, that each loop can scale to 800Gbps, and that a big NRA peaks at 10Gbps (the sum of all the ISPs together), we estimate that we are building a backbone that will withstand innovation for 5-7 years. As each NRA is connected to the first router via one route and to the second router via another, the design survives fibre cuts, router failures, etc. All our experience in hosting ISPs went into it. The result: a guaranteed 100Mbps per subscriber, for a lower cost than what our competitors pay today to guarantee 150kbps per user.
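The headroom claim above can be sketched with back-of-the-envelope arithmetic. The post only gives the 800Gbps loop total, the 4/10Gbps wavelengths and the 5-10 NRAs per loop; the per-wavelength breakdown below is an assumption, not something OVH stated:

```python
# Rough headroom estimate for one DWDM loop, using the figures in the post.
WAVELENGTH_GBPS = 10        # 10Gbps wavelengths (the post also mentions 4Gbps)
WAVELENGTHS_PER_LOOP = 80   # assumed: 80 x 10Gbps would give the 800Gbps quoted
NRAS_PER_LOOP = 10          # upper bound given in the post (5 to 10)
NRA_PEAK_GBPS = 10          # a big NRA: the sum of all ISPs together

loop_capacity = WAVELENGTH_GBPS * WAVELENGTHS_PER_LOOP   # 800 Gbps ceiling
peak_demand = NRAS_PER_LOOP * NRA_PEAK_GBPS              # 100 Gbps worst case
headroom = loop_capacity / peak_demand                   # 8x spare capacity

print(loop_capacity, peak_demand, headroom)
```

With 8x headroom over today's worst-case demand, the 5-7 year lifetime claim is plausible even if per-subscriber traffic keeps growing.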

It means we will offer you 100Mbps VDSL2 for less than €25 incl. VAT, with bandwidth guaranteed up to your sync rate, 24/7. This is what we have already been doing in hosting for the last few years. The backbone also lets us deliver value-added services such as private VPN/MPLS networks, and of course it supports video/VOD/TV in unicast. Apple TV, Google TV or Netflix do not scare us. On the contrary, they are just waiting for someone to deliver them to you, unlimited.

- We will start the major work on the backbone in Paris. The goal is to shut down the POP in TH1 and move all the equipment from TH1 to TH2. Within a few days we will therefore migrate the Infinera that provides one of the two links Paris/Roubaix and Paris/Frankfurt. We are taking the opportunity to introduce two new Cisco ASR 9010 routers: one in TH2 and the other at GSW. They will centralise all the current (hosting) and future (NRA) traffic, not only for Paris but also for Lyon, Bordeaux and Marseille. In all, we are setting up 1.2Tbps of network capacity towards Paris, and we will then extend this capacity down to Lyon, Bordeaux and Marseille (on Paris/Lille we already have over 250Gbps).

- The Strasbourg site is being started, and we will cut into our Paris/Frankfurt connection to add Infinera equipment in Strasbourg. This will give us 400Gbps to Paris and 400Gbps to Frankfurt, and then allow us to connect Zurich and Milan directly to Strasbourg (instead of to Frankfurt). Thus, the servers and cloud computing we will offer in Strasbourg will be very close to Paris, Frankfurt, Prague, Warsaw, Zurich, Milan and Vienna. We will divide the latencies to these cities, and thus to these countries, by 2 or 3. We will then offer new DRP (disaster recovery plan) services based on two sites separated by 300 km.
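For a sense of what a 300 km separation costs in latency, a quick sketch. It assumes light propagates at roughly 200 km/ms in fibre (c divided by a refractive index of ~1.5); real paths add routing detours and equipment delay, so this is a lower bound:

```python
# One-way propagation delay over fibre between two DRP sites.
# Assumes ~200 km/ms propagation speed in fibre; actual latency
# will be somewhat higher due to path detours and equipment.
FIBRE_KM_PER_MS = 200.0

def fibre_latency_ms(distance_km: float) -> float:
    """One-way propagation delay in milliseconds."""
    return distance_km / FIBRE_KM_PER_MS

one_way = fibre_latency_ms(300)   # the 300 km separation mentioned above
rtt = 2 * one_way
print(one_way, rtt)               # 1.5 ms one-way, 3.0 ms round trip
```

A few milliseconds of round trip is low enough for synchronous replication between the two sites, which is what makes a two-site disaster recovery plan at that distance practical.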

- A few weeks ago, we celebrated 100,000 servers and removed the installation fee on all services at OVH. Since then we have seen a very sharp increase in abuse of all kinds (spam, attacks, scans). That's why we changed our internal processes: for the past month, a dedicated server has been terminated upon its 2nd attack within 30 days. We saw the number of attacks increase from 3-4 per week (with installation fees) to 5-8 per day (with free installation); the situation has since stabilised, over the last 2 weeks, at 5-6 serious attacks per week. Basically, we react quickly and ruthlessly to clean our network of any customers who turn out to be hackers, who run strange or borderline activities, or who cannot maintain adequate security on their infrastructure. This less friendly approach lets us keep (for now) installation costs at €0.
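The "2nd attack within 30 days" rule can be sketched as a small check over a server's abuse history. The function and data shapes here are illustrative, not OVH's actual tooling:

```python
from datetime import date, timedelta

# Sketch of the abuse policy described above: a dedicated server is
# terminated upon its 2nd attack within a 30-day window.
# Names and structure are assumptions made for illustration.
WINDOW = timedelta(days=30)

def should_terminate(attack_dates: list[date]) -> bool:
    """True if any two attacks fall within the 30-day window."""
    dates = sorted(attack_dates)
    return any(b - a < WINDOW for a, b in zip(dates, dates[1:]))

print(should_terminate([date(2011, 8, 1), date(2011, 8, 15)]))  # True
print(should_terminate([date(2011, 7, 1), date(2011, 8, 15)]))  # False
```

Sorting first means only consecutive attacks need comparing: if any pair is less than 30 days apart, the closest pair is.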

- In recent months, we have been working on the deployment of our services in North America. As it's a large area, we know that we will need several datacentres there, and we are studying several sites for launching the first one. But in contrast to our competitors, the goal is not to go fast regardless of cost, but to go cheap, with large network capacity and a service managed end to end by OVH. This approach allows us to be profitable from day one, even if it meant building one of the largest backbones in Europe and building datacentres like RBX4. That said, this is not necessarily what we had in mind back when we did apt-get install apache .. or rather rpm -Uvh apache-1.3.4-1.i386.rpm. In short, we want to replicate our business model in its entirety, because we think there will always be room for a serious player who has mastered their craft, who is 2x to 5x cheaper than the market, and who does not charge for bandwidth. All this thanks to the fact that we reinvest all profits into the future.

So where do we deploy the datacentres? We think we should stay in the "North" of "North America", i.e. not far from the US/Canada border. Why? Because energy is cheaper there. It comes from dams on the major rivers of Canada and the United States: it's green and inexpensive. Also, the cost of cooling servers is lower: it depends on the outside temperature, and since the north is cool, it costs less. Looking at the fibre-optic networks, distances and latencies, we will need to create about 4-5 zones 15-20ms from each other. So yes, there will be several datacentres. But if you look at the level of natural disasters it's a good thing to be around the border, as there are no earthquakes, no tornadoes, no volcanoes. It necessarily costs less to build and operate there than in a hazardous area. One can then imagine why we would settle down between Albany (New York), Montreal (Canada), Detroit (Michigan), Chicago (Illinois), Seattle (Washington) and/or Portland (Oregon). The notable exception is Texas, which has cheap energy thanks to local oil, but it is warm there, there are tornadoes, etc. We are continuing our research, but it's starting to take shape. If you have ideas, please share them. Our first servers there should manage to ping each other before the end of the year.
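To see how far apart zones "15-20ms from each other" would sit, the latency can be inverted into a distance. Both assumptions below are mine, not the post's: that the figure is a round-trip time, and that fibre propagation is roughly 200 km/ms:

```python
# Convert an inter-zone round-trip time into a rough fibre distance.
# Assumes the 15-20ms figure is a round trip and ~200 km/ms in fibre;
# real fibre routes are longer than straight lines, so actual
# site-to-site distances would be somewhat shorter.
FIBRE_KM_PER_MS = 200.0

def max_distance_km(rtt_ms: float) -> float:
    """Fibre path length corresponding to a given round-trip time."""
    return (rtt_ms / 2) * FIBRE_KM_PER_MS

print(max_distance_km(15))  # 1500.0 km
print(max_distance_km(20))  # 2000.0 km
```

Zones roughly 1,500-2,000 km apart are consistent with 4-5 of them spanning the northern US/Canada border from Albany to Portland.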

- 2 years ago, we knew that we would need to go to the United States. We launched internal reflections on how to organise ourselves so as to maintain a high speed of innovation while growing, and while talking with ever more customers whose motivations differ and sometimes conflict.
This led us to the creation of an "Interteam", a transversal team that puts pressure on the internal teams. Through two-week sprints, this team tries to push through the sysadmin and developer factory the needs of our customers, as discussed with them on the mailing lists, the forums, Twitter and through marketing. The goal is to enrich all our offers every 2 weeks: one or more new features, new services, bug fixes. The aim is to offer services that come as close as possible to the real needs of our customers. Alphas, betas and prod have been our DNA since the start, and that's what sets the pace at OVH. Nothing has changed, except that in 1999-2001 we just felt it had to be done that way, while in 2009-2011 we were finally able to put words to these methods. The Interteam is a snapshot of this very pragmatic organisation, and it's been tested and validated for 9-12 months. It works not so badly, even if we (often) have some delays in getting to prod. But the services are moving forward and we can feel OVH moving. It is reassuring enough for me to devote myself to developing OVH on the other continent and to enriching OVH (Europe) with other points of view. For those who doubt, there are expressions I often use: "the graveyards are full of indispensable people" and "it will not be worse or better, but different." And then, it will not even be that different.

So, to work!

All the best

Octave