
New distribution: Proxmox 2.0 Alpha


DigitalDaz
15-08-2012, 19:17
Getting there slowly but surely: I now have a Proxmox KVM virtual machine running on a non-OVH public IP address. There is no NAT or anything involved; the VM is just configured like a normal machine on the LAN would be.
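
For the curious, "like a normal machine on the LAN" really is all there is to it. A sketch of the sort of static config the VM gets (addresses are made up; the bridge/VPN on the host does the rest):

Code:
# sketch: /etc/network/interfaces inside the VM -- plain static config
auto eth0
iface eth0 inet static
    address 198.51.100.10
    netmask 255.255.255.0
    gateway 198.51.100.1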

ebony
14-08-2012, 14:53
Quote Originally Posted by K.Kode
The vKS don't have tun/tap enabled.
Well, that sucks.

Anyway, I got the IPs to fail over more or less automatically on their own:

192.50.50.50:80 > nodeA > 10.13.32.10:80
192.50.50.60:80 > nodeB > 10.13.32.10:80

Round-robin DNS on testme.com: 192.50.50.50, 192.50.50.60


A user going to testme.com gets one of the two failover IPs, and the firewalls on both nodeA and nodeB forward port 80 to 10.13.32.10.

10.13.32.10 is an OpenVZ container running HTTP. As long as the VPN stays up, the HTTP server can jump between servers and testme.com still serves HTTP.
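
The forwarding on each node boils down to NAT rules roughly like these (a sketch in plain iptables using the addresses above; the real thing sits in the node's firewall config):

Code:
# sketch: on nodeA, forward web traffic hitting the failover IP
# to the container's VPN address
sysctl -w net.ipv4.ip_forward=1
iptables -t nat -A PREROUTING -d 192.50.50.50 -p tcp --dport 80 -j DNAT --to-destination 10.13.32.10:80
# masquerade so replies go back out through the node that forwarded them
iptables -t nat -A POSTROUTING -d 10.13.32.10 -p tcp --dport 80 -j MASQUERADE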

Now i just need OpenVPN backup. there is 2 more nodes thats not got ip's i would not want to use public.

But I still think this is OK for what I need: both my KS2Gs are in different rooms/datacenters, so the chance of them both being down at the same time is very, very slim.

K.Kode
14-08-2012, 12:41
Quote Originally Posted by ebony
I was thinking about moving the OpenVPN server to a vKS
The vKS don't have tun/tap enabled.

DigitalDaz
14-08-2012, 10:20
I've done similar: it's an HA node that I can add more public IPs to. I just put Ubuntu and OpenVPN on it and made that the OpenVPN 'master'. I can still push 750GB of traffic through it if I wish, which is way, way more than I ever use anyway.

ebony
14-08-2012, 10:09
Quote Originally Posted by DigitalDaz
I may have a solution coming. I've been working on it all night and it's looking good so far. The speed wasn't that clever, so the first thing I did was switch the VPN to UDP and get rid of encryption; security isn't my interest here. It's very fast now.
This sounds cool, can't wait for it. I was thinking about moving the OpenVPN server to a vKS (only about £4); I've had one for two months now and they're very good. This would fix the problem if I needed to take the OpenVPN server down for fixes etc.

DigitalDaz
14-08-2012, 09:52
Quote Originally Posted by ebony
On my setup my email server is 10.13.32.5. If the server dies on nodeA, it jumps to another node and still uses 10.13.32.5.

Using UFW I reroute FailoverIP > 10.13.32.5.

Now, if the main OpenVPN falls down a hole then I'm stuck like you, and I have to change the failover to the other nodes. This works great if a server happens to get overloaded, but not really if the servers go down, unless it happens to be the other two that don't have the failover IPs and the OpenVPN on them. Though I can change a failover IP from my phone, so it's not the biggest deal in the world.

I would like to run OpenVPN somewhere else though, somewhere that's not going to go down, like a cloud. Trying to run it all under one node is not working as well as I was hoping.
I may have a solution coming. I've been working on it all night and it's looking good so far. The speed wasn't that clever, so the first thing I did was switch the VPN to UDP and get rid of encryption; security isn't my interest here. It's very fast now.
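
For the curious, the relevant server-side lines are roughly these (a sketch; "cipher none"/"auth none" really do disable all crypto, so only do this if, like me, security isn't the point):

Code:
# sketch: OpenVPN server config -- UDP transport, encryption and HMAC off
proto udp
cipher none
auth none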

ebony
14-08-2012, 05:49
On my setup my email server is 10.13.32.5. If the server dies on nodeA, it jumps to another node and still uses 10.13.32.5.

Using UFW I reroute FailoverIP > 10.13.32.5.
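
If anyone wants to copy this: UFW has no redirect command of its own, so the rule goes in a nat section at the top of /etc/ufw/before.rules. A sketch, with FAILOVER.IP standing in for the real address and port 25 as an example service:

Code:
# sketch: /etc/ufw/before.rules -- also set DEFAULT_FORWARD_POLICY="ACCEPT"
# in /etc/default/ufw and enable net/ipv4/ip_forward in /etc/ufw/sysctl.conf
*nat
:PREROUTING ACCEPT [0:0]
-A PREROUTING -d FAILOVER.IP -p tcp --dport 25 -j DNAT --to-destination 10.13.32.5
COMMIT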

Now, if the main OpenVPN falls down a hole then I'm stuck like you, and I have to change the failover to the other nodes. This works great if a server happens to get overloaded, but not really if the servers go down, unless it happens to be the other two that don't have the failover IPs and the OpenVPN on them. Though I can change a failover IP from my phone, so it's not the biggest deal in the world.

I would like to run OpenVPN somewhere else though, somewhere that's not going to go down, like a cloud. Trying to run it all under one node is not working as well as I was hoping.

DigitalDaz
13-08-2012, 22:58
I think I have it. I follow all of what you're saying, and I think I now have the missing piece of the jigsaw that I needed.

Thanks for getting the cogs turning; I'll write it up when my solution is working.

ned14
13-08-2012, 22:02
Quote Originally Posted by DigitalDaz
Niall,

I did read your excellent guide a couple of weeks ago. In fact, I'm going to revisit it now to double-check everything.

I'm fine with using OpenVPN for whatever it needs to be used for, but I still do not see a way to automatically fail over the public IPs.

I know rerouting them via the Manager is trivial, but without the facility to do this automatically it's not quite HA, is it? Or am I missing something really obvious here?

As I mention above, if you are hitting an IP routed to a server that then goes down, then using the current IP failover system as I know it, you have to go into the Manager and reroute the IP manually to another server.
I'm not sure I understand. In Proxmox you keep your VMs/CTs stored on a SAN of some sort, or on DRBD. Proxmox only ever has one node running any given VM/CT at a time, but checkpoints regularly. Each node gets its own unique internal IP within the VPN. Should a node suddenly die, Proxmox HA will transport the VM/CT instantly to another node. As a result, the VM/CT's internal VPN IP never changes, so routing to it is trivial.
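
To make that concrete: in Proxmox 2.0 a VM/CT is marked as HA-managed with a one-line entry in /etc/pve/cluster.conf. A sketch, with 101 standing in for whatever your VM's ID is:

Code:
<!-- sketch: ask the resource manager to keep VM 101 running somewhere -->
<rm>
  <pvevm autostart="1" vmid="101"/>
</rm>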

Now, as to how to get the public to reach an HA VM: either you can dedicate a server solely as a router between the public internet and the VPN (including an OVH cloud instance), or you can configure a traditional failover solution, e.g. get a DNS server to round-robin IPs.
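
The round-robin part is nothing exotic, just repeated A records in the zone file. A sketch in BIND syntax with example IPs:

Code:
; sketch: two A records for one name; resolvers rotate between them
www    IN    A    192.0.2.10
www    IN    A    192.0.2.20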

Try reading the comments at the bottom of the guide; there's some useful stuff in there on HA with all sorts of weird custom configs.

HTH,
Niall

DigitalDaz
13-08-2012, 20:21
As a quick afterthought to the last post, a scenario now tumbling around my head is: how about using one of the smaller OVH public cloud instances as the public-facing entrance to Niall's VPN setup?

I'm certainly no heavy bandwidth user, so that wouldn't cause me an issue. Now there's definitely an idea to chew over: the public cloud instance gives the missing HA.

DigitalDaz
13-08-2012, 20:00
Niall,

I did read your excellent guide a couple of weeks ago. In fact, I'm going to revisit it now to double-check everything.

I'm fine with using OpenVPN for whatever it needs to be used for, but I still do not see a way to automatically fail over the public IPs.

I know rerouting them via the Manager is trivial, but without the facility to do this automatically it's not quite HA, is it? Or am I missing something really obvious here?

As I mention above, if you are hitting an IP routed to a server that then goes down, then using the current IP failover system as I know it, you have to go into the Manager and reroute the IP manually to another server.

ned14
13-08-2012, 19:10
Quote Originally Posted by ebony
Hi, there is a way, using OpenVPN:

http://www.nedproductions.biz/wiki/c...ntranet-part-1

Have fun. It works great for me: I have one server at home, two at OVH, and a USA server, all on the same cluster.
For reference, I wrote the above guide. Today I just finished wiring a VM running FreeNAS to provide a redundant ZFS pool via NFS into a VM running Ubuntu 12.04 LTS, which in turn provides various HA services over OpenVPN to a cloud of OpenVZ containers.
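
The NFS wiring itself is just a line in the Ubuntu VM's fstab. A sketch with hypothetical names and paths:

Code:
# sketch: /etc/fstab on the Ubuntu VM -- mount the FreeNAS pool over NFS
freenas.vpn:/mnt/tank  /srv/tank  nfs  rw,hard,intr  0  0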

As much as I don't like to blow my own trumpet, this stuff is seriously cool. And surprisingly easy.

Anyway, this is really to say that since writing my guide, my experience with the HA-over-OpenVPN setup proposed in it has been absolutely stellar. Occasionally pve-cluster gets confused, but that's because I moved a pfSense router into one of the VMs, and that router is doing the ADSL out of the house. That creates a very obvious chicken-and-egg problem, but nothing some delayed service restarts in /etc/rc.local can't fix.
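
For anyone hitting the same thing, the workaround amounts to something like this in /etc/rc.local (a sketch; the 60-second delay is an arbitrary choice):

Code:
#!/bin/sh -e
# sketch: give the pfSense VM and the VPN time to come up, then kick
# the cluster filesystem so it can find its peers
( sleep 60 && /etc/init.d/pve-cluster restart ) &
exit 0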

Quote Originally Posted by DigitalDaz
OK then, a simple question:

Server A, with a public IP of 1.1.1.1, goes down with a VM on it that has a public IP of 2.2.2.2.

How can Server B possibly automatically have packets routed to it for 2.2.2.2??

Unless I have gone completely stupid, you don't have an answer.
You can configure OpenVPN to fail over. While not covered in the guide, with a bit of work each of your public servers can take over should the primary OpenVPN routing node go down.
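
In stock OpenVPN that is just a matter of listing more than one remote in the client config. A sketch with hypothetical hostnames:

Code:
# sketch: the client tries each remote in turn and moves to the
# next one if the current server dies
client
remote vpn-a.example.com 1194
remote vpn-b.example.com 1194
resolv-retry infinite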

If you don't want to use OpenVPN to mimic a local subnet, I'm sure OVH can arrange a local subnet for a hefty fee on many of their premium services.

Niall

DigitalDaz
13-08-2012, 17:11
Quote Originally Posted by ebony
Hi, there is a way, using OpenVPN:

http://www.nedproductions.biz/wiki/c...ntranet-part-1

Have fun. It works great for me: I have one server at home, two at OVH, and a USA server, all on the same cluster.
OK then, a simple question:

Server A, with a public IP of 1.1.1.1, goes down with a VM on it that has a public IP of 2.2.2.2.

How can Server B possibly automatically have packets routed to it for 2.2.2.2??

Unless I have gone completely stupid, you don't have an answer.

ebony
13-08-2012, 16:39
Quote Originally Posted by DigitalDaz
Reviving an old thread for clarification.

Could someone please confirm that, using current OVH offerings, either with or without vRack, there is no way to cluster Proxmox machines for High Availability?

TIA
Hi, there is a way, using OpenVPN:

http://www.nedproductions.biz/wiki/c...ntranet-part-1

Have fun. It works great for me: I have one server at home, two at OVH, and a USA server, all on the same cluster.

DigitalDaz
13-08-2012, 14:08
Reviving an old thread for clarification.

Could someone please confirm that, using current OVH offerings, either with or without vRack, there is no way to cluster Proxmox machines for High Availability?

TIA

Felix
18-05-2012, 18:50
Antennipasi wrote:
> "corosync [MAIN ] parse error in config: mcastaddr is not a correct
> multicast address."


Did you do the config changes as described here?
http://pve.proxmox.com/wiki/Multicas...d_of_multicast

> Multicast seems not to work either, with vRack or with RIPE & vRack.


That I can confirm: Multicast won't work.

Felix

Antennipasi
28-04-2012, 20:06
Quote Originally Posted by mike_
Has anybody had success doing an in-place upgrade to 2.0?
Yes, I made one successful upgrade.
It was not a member of a cluster, and it had both OpenVZ and KVM machines on it.
The upgrade was not by any means smooth; the update script stopped many times, but after fixing the things that caused the stoppages, the script finished, and after importing the configs everything was fine.

I do not recommend this for anyone without sufficient knowledge of Debian itself, OpenVZ, KVM and Proxmox.
It is definitely easier to do a fresh install and import the virtual machines into that.

Antennipasi
23-04-2012, 20:30
Quote Originally Posted by Felix
bago wrote:
> So, the question is: how does OVH support Proxmox 2? Does it support
> the clustering of Proxmox 2, or simply a single-machine Proxmox 2
> environment?


In a vRack, you can set corosync/cman to use your local broadcast address; see
"man cman" and /etc/pve/cluster.conf.

This should work; however, since Proxmox 2 is in Alpha status at OVH, we can't
yet guarantee any result. We hope to be able to test everything thoroughly and
provide a guide by the official release of Proxmox 2.
Seems not:
"corosync [MAIN ] parse error in config: mcastaddr is not a correct multicast address."
And the cman service fails to start after reboot.
Multicast seems not to work either, with vRack or with RIPE & vRack.

Felix
02-04-2012, 17:44
wii89 wrote:
> Proxmox VE 2.0 was released on 30.03.2012. Can someone confirm that
> the latest image at OVH is the Proxmox VE 2.0 stable build?


Yes, I confirm: it's been up to date since the minute Proxmox 2.0 was announced (even
a bit before).

wii89
31-03-2012, 17:37
Quote Originally Posted by mike_
Has anybody had success doing an in-place upgrade to 2.0?
I'm doing a fresh install: backing up my data, then doing it.

mike_
31-03-2012, 17:08
Has anybody had success doing an in-place upgrade to 2.0?

wii89
31-03-2012, 14:42
Proxmox VE 2.0 was released on 30.03.2012. Can someone confirm that the latest image at OVH is the Proxmox VE 2.0 stable build?

Nettus
29-02-2012, 13:32
I had a look at the new version; I must say it looks a lot better and has a nice interface.

ned14
24-02-2012, 16:31
Quote Originally Posted by Felix
ned14 wrote:
> It's weird. I can touch foo into /etc/pve just fine and it replicates
> and deletes in both directions. But touch foo into /etc/pve/local
> doesn't replicate. More google searching is obviously required!


"local" should be a symlink to the folder named like ${HOSTNAME}. These symlink
then point to different folders on different hosts - don't know if you took
that into account?
Yeah, I hadn't realised that, so thanks. Anyway, I simply regenerated the Apache SSL certs in the standard way, and it all appears to be working perfectly. It's quite cool, actually, how I can just push VMs around between the nodes.

Thanks for your help.
Niall

Felix
24-02-2012, 13:22
ned14 wrote:
> It's weird. I can touch foo into /etc/pve just fine and it replicates
> and deletes in both directions. But touch foo into /etc/pve/local
> doesn't replicate. More google searching is obviously required!


"local" should be a symlink to the folder named like ${HOSTNAME}. These symlink
then point to different folders on different hosts - don't know if you took
that into account?
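
A quick way to see it (a sketch, with "europe3" standing in for a node's name):

Code:
# sketch: the symlink resolves to a per-node folder, so it differs per host
readlink /etc/pve/local     # on node "europe3" prints: nodes/europe3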

ned14
23-02-2012, 18:40
Quote Originally Posted by Felix
ned14 wrote:
> Found a bug in the OVH Alpha image as soon as you join the OVH machine
> into a cluster with a stock Proxmox 2.0 machine:


It's not "stock" anymore once you rename it

My guess is that you didn't restart proxmox cluster service after renaming the
host (and before restarting apache).
You're wrong that I didn't restart both machines; however, you're right that /etc/pve/local isn't replicating as it's supposed to, and therefore the Apache SSL keys aren't being transferred to the OVH node.

It's weird. I can touch foo into /etc/pve just fine and it replicates and deletes in both directions. But touch foo into /etc/pve/local doesn't replicate. More google searching is obviously required!

Thanks for the help,
Niall

Felix
23-02-2012, 15:44
ned14 wrote:
> Found a bug in the OVH Alpha image as soon as you join the OVH machine
> into a cluster with a stock Proxmox 2.0 machine:


It's not "stock" anymore once you rename it

My guess is that you didn't restart proxmox cluster service after renaming the
host (and before restarting apache).

ned14
23-02-2012, 14:43
Found a bug in the OVH Alpha image as soon as you join the OVH machine into a cluster with a stock Proxmox 2.0 machine:

Code:
root@europe3:/home/ned# /etc/init.d/apache2 restart
Syntax error on line 13 of /etc/apache2/sites-enabled/pve-redirect.conf:
SSLCertificateFile: file '/etc/pve/local/pve-ssl.pem' does not exist or is empty
Action 'configtest' failed.
The Apache error log may have more information.
 failed!
It looks like OVH doesn't use the stock Apache config - as a result, when the machine joins a cluster and gains the cluster's /etc/pve directory, its custom config can no longer be found and Apache fails to launch.

Otherwise the two-machine cluster appears to work well, even though one of the machines is on the other side of a home NATed ADSL connection. I've written up how I did it at http://www.nedproductions.biz/wiki/c...envpn-intranet for those interested.

Niall

Felix
22-02-2012, 11:26
ned14 wrote:
> or 2.0 is simply more
> sensitive to these things.


Yes, it is indeed...

See "man 1 hostname", it contains:

FILES
/etc/hostname This file should only contain the hostname and not the
full FQDN.

Usually a FQDN still works, but proxmox (more precisely: the proxmox cluster
configuration filesystem) is very picky about this.

ned14
22-02-2012, 02:44
Quote Originally Posted by ned14
I'm glad to report that a reinstall has fixed the problem and it appears to be working properly now.
I figured out what was going wrong as my new reinstall suddenly broke itself the same way.

It turns out that the OVH image of Proxmox 2.0 Alpha is a bit finickety. What I was doing was changing /etc/hostname to its proper FQDN, and that was causing /etc/pve to not get mounted.

Setting /etc/hostname to a FQDN directly worked fine on 1.9. It also works on a local Proxmox 2.0 server we have here, though that one is DHCP-configured, which probably works around the FQDN problem. Something is obviously slightly wrong in the 2.0 Alpha image, or 2.0 is simply more sensitive to these things.

Anyway, the fix is simply to be less lazy. Set /etc/hostname to the server's leaf name. Open /etc/hosts and edit the entry with the server's IP to refer to both your FQDN and the leaf name. Reboot and all should be well: your server has its new name, and moreover it isn't broken.
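
A sketch of the end result, for a hypothetical server ns12345.ovh.net at 203.0.113.7:

Code:
# /etc/hostname -- leaf name only
ns12345

# /etc/hosts -- the server's IP maps to the FQDN first, then the leaf name
127.0.0.1    localhost
203.0.113.7  ns12345.ovh.net ns12345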

HTH,
Niall

ned14
22-02-2012, 01:29
Quote Originally Posted by ned14
Maybe it's since been fixed in the 2.0 ALPHA image used by OVH?

I'll try a wipe and reinstall and see what happens.

Niall
I'm glad to report that a reinstall has fixed the problem and it appears to be working properly now.

Thanks,
Niall

ned14
22-02-2012, 00:18
Quote Originally Posted by Felix
/etc/pve has to be totally empty on the / filesystems; the contents are
provided by the proxmox Cluster FUSE filesystem (pmxfs).

Can you tell me on which install you have this problem? On a clean install I
just tested, the pmxfs is properly initialized.
Maybe it's since been fixed in the 2.0 ALPHA image used by OVH?

I'll try a wipe and reinstall and see what happens.

Niall

Felix
21-02-2012, 18:06
ned14 wrote:
> The OVH proxmox v2.0 template appears to not place anything inside
> /etc/pve. This causes the web gui to fail to load due to a missing
> /etc/pve/pve-ssl.pem. Manually creating the keys doesn't help, because
> the rest of /etc/pve is empty and therefore nothing works including
> login.


/etc/pve has to be totally empty on the / filesystems; the contents are
provided by the proxmox Cluster FUSE filesystem (pmxfs).
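
A quick sanity check, roughly what a healthy node shows:

Code:
# sketch: /etc/pve should be a FUSE mount, not a plain directory
mount | grep /etc/pve
# expect something like: /dev/fuse on /etc/pve type fuse (rw,...)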

Can you tell me on which install you have this problem? On a clean install I
just tested, the pmxfs is properly initialized.

Best regards,
Felix

ned14
17-02-2012, 05:39
Hi,

The OVH proxmox v2.0 template appears to not place anything inside /etc/pve. This causes the web GUI to fail to load due to a missing /etc/pve/pve-ssl.pem. Manually creating the keys doesn't help, because the rest of /etc/pve is empty and therefore nothing works, including login.

I'd say this makes the image pretty useless.

Niall

Felix
01-02-2012, 11:42
bago wrote:
> So, the question is: how does OVH support Proxmox 2? Does it support
> the clustering of Proxmox 2, or simply a single-machine Proxmox 2
> environment?


In a vRack, you can set corosync/cman to use your local broadcast address; see
"man cman" and /etc/pve/cluster.conf.

This should work; however, since Proxmox 2 is in Alpha status at OVH, we can't
yet guarantee any result. We hope to be able to test everything thoroughly and
provide a guide by the official release of Proxmox 2.

bago
01-02-2012, 10:19
I was wrong. Proxmox 2 requires multicast (not broadcast) for the cluster operations:
http://pve.proxmox.com/wiki/Multicast_notes

So, the question is: how does OVH support Proxmox 2? Does it support the clustering of Proxmox 2, or simply a single-machine Proxmox 2 environment?

Felix
31-01-2012, 17:42
bago wrote:
> AFAIK Proxmox 2 uses UDP broadcast for the clustering: how can we use it
> in OVH? Does it only work in a virtual rack?


Broadcast should work in a vRack, yes (multicast does not).

bago
31-01-2012, 17:10
AFAIK Proxmox 2 uses UDP broadcast for the clustering: how can we use it in OVH? Does it only work in a virtual rack?

S0phie
26-01-2012, 19:20
Hello,

In May 2008, we invited you to test a new virtualization distribution called Proxmox VE (version 0.8). Since then it has been very successful, becoming the number 1 distribution at OVH for virtualization (in terms of number of servers).

Almost four years and many changes and improvements later, the Proxmox team is about to launch version 2.0. We would like to offer you access to the beta by making it available in the Manager for reinstallation.

Some new features:
- Based on Debian 6.0 Squeeze
- Kernel 2.6.32 (based on the RHEL 6 kernel)
- Completely redesigned web interface:
  * Improved scalability for large infrastructures (many hosts/VMs)
  * Access to VMs directly via VNC (no Java!)
  * Multi-user and permission management (planned)
- High Availability Cluster (OpenVZ and KVM)
- Proxmox Cluster file system (pmxcfs)
- A RESTful API
- Backup/restore features via the web interface as well as the CLI

For the extensive list with all the details, see http://pve.proxmox.com/wiki/Roadmap

http://demo.ovh.com/view/06fb8769877...beca85f4fd95/0

At OVH this distribution is marked as "ALPHA" as it is a "technology preview". Please feel free to post your comments in this thread.

Regards,

OVH Team