Help - Proxmox with public IPs for both containers and KVM (bridged/virtual MAC)


DigitalDaz
02-01-2010, 12:23
Tracing route to 188.165.165.230 over a maximum of 30 hops

1 <1 ms <1 ms <1 ms 192.168.1.254
2 * * * Request timed out.
3 * * * Request timed out.
4 174 ms 100 ms 11 ms sw1.tc.lon.ovh.net [195.66.224.220]
5 17 ms 19 ms 16 ms 20g.vss-2-6k.routers.chtix.eu [94.23.122.109]
6 * * * Request timed out.
7 * * * Request timed out.
8 ^C

Hop 6 should definitely be your server's main IP if you are not using virtual MACs. Rule out this problem first.

I have just traced one of my RIPE IPs that is not allocated to any VM, and that's exactly what I got and expected. The trace times out AFTER it hits my main server IP.
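
For comparison, you can run the same check yourself. A sketch (using the RIPE IP from your post, from a Linux machine; on Windows the equivalent is tracert):

traceroute -n 188.165.165.230

If routing is correct without virtual MACs, your server's main IP (94.23.246.167) should show up as a hop before the trace reaches the VM.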

If you have MSN add me at darren@kickingaddiction.com

yatesco
01-01-2010, 22:38
Happy new year all!

I have tried the following:

- using venet and assigning an IP directly in the interface (with no virtual mac)
- using bridged and assigning an IP in the VM (as described in previous posts) without a virtual mac
- using bridged and assigning an IP in the VM (as described in previous posts) with a virtual mac

and none of them work.

I have tried using venet and assigning a fail-over IP (i.e. not one of the RIPE ones) and it works fine (i.e. I can ping it from home and the container can access the net).

I think it must be a routing issue at their end - for the two containers using venet (one with the RIPE IP and one with the fail-over IP), their auto-generated /etc/network/interfaces files are identical, and 'ip route show' on the host lists them the same (x.y.z.a dev venet0 scope link).

Bah humbug. Any other suggestions?

For info (and I appreciate all your help!) the fail-over IP is 94.23.154.30 (which works a treat) and the RIPE IP (which doesn't work) is 188.165.165.230. Both containers were created the same way, using the venet device and entering the IP directly.
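
To illustrate, this is roughly what 'ip route show' reports on the host for those two containers (a sketch based on the addresses above; the two entries look identical even though only the fail-over one answers pings):

Code:
94.23.154.30 dev venet0  scope link
188.165.165.230 dev venet0  scope link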

DigitalDaz
31-12-2009, 19:43
Remove the virtual MAC from the OVH Manager. It is not needed.

At the moment it is trying to route directly from 20g.vss-2-6k.routers.chtix.eu to your IP; you want it to go through your server IP.

There should be another hop of 94.23.246.167.

yatesco
31-12-2009, 19:08
Yeah - that is what I have. I also assigned the virtual MAC from the OVH Manager.

To be clear, my host network/interface is:

Code:
auto lo
iface lo inet loopback

auto eth0
iface eth0 inet static
        address 94.23.246.167
        netmask 255.255.255.0
        network 94.23.246.0
        broadcast 94.23.246.255
        gateway 94.23.246.254

auto vmbr0
iface vmbr0 inet manual
        address 94.23.246.167
        netmask 255.255.255.0
        post-up /etc/pve/kvm-networking.sh
        bridge_ports dummy0
        bridge_stp off
        bridge_fd 0

I created a virtual MAC for one of the RIPE IP addresses (not a fail-over one). I then created a container, selected vmbr0 as the network, and assigned it that MAC address. Inside the guest (Debian 5) I have:

Code:
auto lo
iface lo inet loopback

auto eth0
iface eth0 inet static
        address 188.165.165.234
        netmask 255.255.255.255
        gateway 188.165.165.234

For diagnostics, on the host 'ip route show':

Code:
188.165.165.234 dev vmbr0  scope link
94.23.246.0/24 dev eth0  proto kernel  scope link  src 94.23.246.167
default via 94.23.246.254 dev eth0

and on the guest:

Code:
default via 188.165.165.234 dev eth0 scope link

I am at a loss. Any ideas?

1st edit: changing the gateway in the guest to x.x.x.254 allows the guest to talk to the net, but still doesn't allow the world to talk to the guest.
2nd edit: I just created a fail-over IP and created a container the 'old' way, by selecting venet, and it works fine. I created another container with the RIPE IP the same way; 'ip route show' shows them both routed, but only the fail-over one responds to external pings...
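
For reference, a sketch of the usual OVH-style bridged guest config with a virtual MAC (an assumption on my part rather than something confirmed here: the gateway 94.23.246.254 is the host's gateway from the config above, and the routes are added explicitly because that gateway sits outside the guest's /32 netmask):

Code:
auto eth0
iface eth0 inet static
        address 188.165.165.234
        netmask 255.255.255.255
        broadcast 188.165.165.234
        post-up route add 94.23.246.254 dev eth0
        post-up route add default gw 94.23.246.254
        post-down route del default gw 94.23.246.254
        post-down route del 94.23.246.254 dev eth0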

DigitalDaz
31-12-2009, 18:49
The container was a custom Ubuntu Karmic one someone had created a template for.
I chose bridged networking.
The /etc/network/interfaces was populated with something completely different to what I wanted, so I just replaced it with:

auto lo
iface lo inet loopback
auto eth0
iface eth0 inet static
address 188.165.x.x
netmask 255.255.255.255
gateway 188.165.x.y

On the host machine I then added

ip route add 188.165.x.y dev vmbr0

Everything now worked.
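
Note that a route added with 'ip route add' does not survive a reboot. One way to make it persistent (a sketch; adjust the IP to each VM's address) is to hang it off vmbr0 in /etc/network/interfaces:

post-up ip route add 188.165.x.y dev vmbr0
post-down ip route del 188.165.x.y dev vmbr0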

yatesco
31-12-2009, 18:28
Hi DigitalDaz,

Many thanks for your response.

I have updated my interfaces so it is the same as yours, but I still cannot ping the container.

Can you describe how you created the container, i.e. which network you chose and what you had to do on the host and guest?

Many many thanks!

Col

DigitalDaz
31-12-2009, 16:58
I was unable to get mine working properly with the new virtual MACs, though I think this was more due to the virtual MACs not routing properly (some still tried to route through my primary server IP).

I now have both KVM and OpenVZ containers working with public IPs using proxy ARP.

My /etc/network/interfaces:

# This file describes the network interfaces available on your system
# and how to activate them. For more information, see interfaces(5).

# The loopback network interface
auto lo
iface lo inet loopback

auto eth0
#iface eth0 inet manual
iface eth0 inet static
address 94.x.y.z
netmask 255.255.255.0
network 94.x.y.0
broadcast 94.x.y.255
gateway 94.x.y.254

auto vmbr0
#iface vmbr0 inet static
iface vmbr0 inet manual
address 94.x.y.z
netmask 255.255.255.0
# gateway 94.x.y.254
post-up /etc/pve/kvm-networking.sh
bridge_ports dummy0
bridge_stp off
bridge_fd 0

94.x.y.z is my server's primary IP.

The "bridge_ports dummy0" was the key for me.

Also, I had to add:

ip route add a.b.c.d dev vmbr0

for each of my VMs.
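
For reference, proxy ARP itself is enabled through sysctl rather than in the interfaces file; a sketch of turning it on for eth0 along with IP forwarding (assuming it is not already handled elsewhere, e.g. by /etc/pve/kvm-networking.sh):

echo 1 > /proc/sys/net/ipv4/conf/eth0/proxy_arp
echo 1 > /proc/sys/net/ipv4/ip_forward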

This should get it working for you, hope this helps.

yatesco
31-12-2009, 15:42
Hi,

I am pulling my hair out trying to get Proxmox 1.4 working with the new virtual MACs. I have both containers and KVM machines, each with their own static IP (from the reserved RIPE block) assigned to them. No matter what I have tried, I just cannot get it to work - either the KVM machines (with their virtual MACs) work but not the containers, or the other way around.

I have read just about every forum post that I can find, but I just cannot get both working(!).

Rather than list every combination of thing I have tried, it boils down to the following:

- if I configure vmbr0 with the server's IP address then KVM works, but the containers (using veth) no longer respond to pings (http://forums.ovh.co.uk/showpost.php...6&postcount=29)
- if I leave /etc/network/interfaces as it is out of the box then containers work, but not KVM machines
- if I leave /etc/network/interfaces as it is, *except* to change the vmbr0 bridge_ports to eth0, then the server never responds after a reboot (and vKVM takes hours to boot the server!)
- if I use the Routed Configuration option in http://pve.proxmox.com/wiki/Network_Model then the server never responds after a reboot

Has anybody got Proxmox 1.4 working with both containers and KVM machines running with public IP addresses? If so, would you mind letting me know how?

Previously, I had the containers configured with private IPs and followed Myatus's excellent guide (http://www.myatus.co.uk/2009/08/31/g...-with-proxmox/) but there were a couple of downsides with that:

- containers couldn't talk to KVM machines
- proxmox clustering broke

I am sure both of the above issues could also be fixed, but I really want each virtual machine to have its own public IP.

Any help would be gratefully received! I have to get this sorted today/tomorrow before I lose the original server!

Thanks,

Col