
IPv6 Routing Issues and arrogant technical support


athena007
18-02-2015, 19:22
Thanks for the configuration tips.

Well, I have a vSphere environment. I have installed a distributed switch via the vCenter Server virtual appliance.

On this distributed switch I have created a port group with VLAN set to none (the uplink is set to VLAN trunking 0-4094).

This port group has vmk0 connected for management. It also has the virtual NIC of the outside interface of my virtual firewall, as well as any directly internet-facing VMs.

At first only the host (via vmk0) was able to pass traffic. Then I noticed that, by default, the first management vmk (i.e. the hypervisor management NIC) has the same MAC address as the connected physical NIC of the dedicated server.

So I decided to investigate by swapping the MAC address of the hypervisor's vmk0 with the MAC address of my virtual firewall's outside-facing virtual NIC. I then noticed traffic was reaching the virtual firewall but no longer the vmk0 with the changed MAC.

I then concluded that my configuration is OK, except that OVH is blocking MAC addresses it does not know about, which affects IPv6.

I have another server which is subscribed to the So you Start service and, interestingly, I don't have that issue there. Is this specific to the OVH service?

alvaroag
18-02-2015, 00:44
The following is the configuration I made for Proxmox VE. It may work on other Linux platforms, with KVM or other hypervisors, as well as OpenVZ (in bridged mode) or LXC. I don't know if there is a way to do something similar in VMware.

The server is connected to the OVH network on eth0. Proxmox, by default, comes with eth0 bridged into vmbr0; this configuration requires eth0 to be configured without any bridge.

All the VMs and CTs (if used) can be put on a bridge, for example vmbr0. For the bridge to always be active, you can add a dummy interface to it.
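
Purely as an illustration (the dummy-interface stanza and option names are my assumptions, not something taken from the Proxmox or OVH docs), the vmbr0 part of /etc/network/interfaces could look roughly like this:

# sketch: vmbr0 with a dummy port, so the bridge stays up even with no VM running
auto vmbr0
iface vmbr0 inet static
    address A.B.C.D          # the host's main IP, as explained in the IPv4 part below
    netmask 255.255.255.255
    pre-up modprobe dummy    # provides the dummy0 interface
    bridge_ports dummy0
    bridge_stp off
    bridge_fd 0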

You may need Shorewall & Shorewall6, or another firewall solution which supports both IPv4 & IPv6 with ProxyARP & ProxyNDP.
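
On Debian-based systems (including Proxmox), both are available as packages; assuming the standard Debian package names, the install is simply:

apt-get install shorewall shorewall6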

First, let's see the IPv4 part.

The server has IP A.B.C.D on eth0, with subnet mask /32, and the default route via A.B.C.254. So vmbr0 should be configured with exactly the same address, but without a default route. The Shorewall configuration files should look like this.

#zones
fw firewall
net ipv4
int ipv4

#interfaces
net eth0
int vmbr0 routeback,bridge

#policy
fw all ACCEPT
net fw DROP
net int ACCEPT
int net ACCEPT
int fw ACCEPT
all all DROP

#rules
ACCEPT net fw icmp
ACCEPT net fw tcp 22
ACCEPT net fw tcp 8006 # Proxmox VE Management

#proxyarp
E.F.G.H vmbr0 eth0 no yes
I.J.K.L vmbr0 eth0 no yes
# ... and so on ...
That may be enough. Note that it's a really basic configuration, and it can be tuned in many ways. For example, you can remove "net int ACCEPT" from policy so you can manage the firewall rules on the host instead of managing them on each VM/CT.

In proxyarp, you add a line for each address you want to use on a VM/CT. Just change the IP address; the other fields are the same for all. Note that the IP addresses here must not have a virtual MAC assigned in the Manager, or this will not work.

Now, run "shorewall try /etc/shorewall" and the firewall will become active. Now, all the VMs & CTs that are bridged to vmbr0 can use a failover IPv4. Just configure your VM with IP E.F.G.H, mask 32 /or other, if you acquired a block), and gateway A.B.C.254, the same as the host. With that, your VM may get access to internet.

Now the IPv6 part. It's almost the same.

You have been assigned a /64 IPv6 block; let's say 2607:5300:60:NNNN::/64. So let's give the host, on eth0, the address 2607:5300:60:NNNN::1/64, with the default route via 2607:5300:60:NNFF:FF:FF:FF:FF. On vmbr0, the host gets the same address, but with prefix 128: 2607:5300:60:NNNN::1/128, and no default route.
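
Written as interfaces stanzas on the host, that could be (again just a sketch assuming ifupdown; the explicit on-link route to the gateway is my assumption, since the gateway lies outside the /64):

# host IPv6 (sketch)
iface eth0 inet6 static
    address 2607:5300:60:NNNN::1
    netmask 64
    post-up ip -6 route add 2607:5300:60:NNFF:FF:FF:FF:FF dev eth0
    post-up ip -6 route add default via 2607:5300:60:NNFF:FF:FF:FF:FF dev eth0

iface vmbr0 inet6 static
    address 2607:5300:60:NNNN::1
    netmask 128
    # no default route on vmbr0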

The proxyndp function of Shorewall6 does not work as well as the proxyarp of the IPv4 version, so let's configure it directly using the "ip" command.

First, you should enable proxyndp. Run:

sysctl net.ipv6.conf.vmbr0.proxy_ndp=1
sysctl net.ipv6.conf.eth0.proxy_ndp=1
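
Depending on your setup, you will most likely also need IPv6 forwarding enabled on the host so that it can actually route packets towards vmbr0. Shorewall6 can take care of this through its IP_FORWARDING setting; if nothing else enables it, do it manually:

sysctl net.ipv6.conf.all.forwarding=1
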
Then, you have to expose the gateway to vmbr0:

ip -6 neigh add proxy 2607:5300:60:NNFF:FF:FF:FF:FF dev vmbr0
Finally, for each IPv6 address you want to use on a VM/CT, you may run this:

ip -6 neigh add proxy 2607:5300:60:NNNN::X dev eth0
ip -6 route add 2607:5300:60:NNNN::X/128 dev vmbr0
Now, you can configure each VM/CT with its own IPv6 address, and the same gateway as the host. You should then have IPv6 access on your VM/CT.
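
For a Debian-style guest this could be, as a sketch (X stands for whatever host part you picked for this VM/CT):

# guest IPv6 (sketch)
iface eth0 inet6 static
    address 2607:5300:60:NNNN::X
    netmask 64
    # same gateway as the host; it is outside the /64, so add a route to it first
    post-up ip -6 route add 2607:5300:60:NNFF:FF:FF:FF:FF dev eth0
    post-up ip -6 route add default via 2607:5300:60:NNFF:FF:FF:FF:FF dev eth0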

If you want to implement a firewall on the IPv6 side, you can start from the same configuration as IPv4, but omit the "proxyarp" file and change the "ipv4" values in "zones" to "ipv6".

Warnings:
- Shorewall may not be enabled for automatic startup, so make sure it is.
- Even when you have put Shorewall on automatic startup, Shorewall itself may require additional configuration to really start. Normally this is done by setting "STARTUP_ENABLED" to "Yes" in /etc/shorewall/shorewall.conf. On Debian-based distros (like Proxmox), you may instead have to set "startup" to "1" in /etc/default/shorewall. The same goes for Shorewall6.
- The sysctl changes are non-persistent, so they will get lost after a reboot. Adding them to sysctl.conf may not be enough (the per-interface keys only exist once the interface is up); instead, add them as "post-up" lines in "interfaces", or the equivalent in your distro.
- The "ip -6 neigh" and "ip -6 route" commands are also non-persistent. You can put them one by one as "post-up" lines in "interfaces", or the equivalent in your distro. If there are too many, the easy way is to write a shell script that executes the commands based on a config file; see the sketch after this list.

athena007
17-02-2015, 23:07
Quote Originally Posted by heise
They advertise a /64 block, so you should be able to use it. Maybe you can add IPv6 to the main server and set up your own routing with iptables to forward it to another (private) IP of the VM.
You mean NATing IPv6? What's the use of having a /64 only for it to be network-translated?

On the So you Start service everything works out of the box. I just configure one of my IPv6 addresses (2001:41d0:x:xxx::hhh) with a prefix of /56 and a gateway of 2001:41d0:x:xxff:ff:ff:ff:ff on a virtual machine whose NIC is on the same port group as the hypervisor management network.

But here the issue is that OVH is blocking traffic from all MAC addresses other than the hardware NIC MAC addresses of my server. And there is no option to create a virtual MAC address for IPv6.

heise
17-02-2015, 19:14
They advertise a /64 block, so you should be able to use it. Maybe you can add IPv6 to the main server and set up your own routing with iptables to forward it to another (private) IP of the VM.

athena007
17-02-2015, 16:29
So I have finally confirmed that OVH technical support are lazy idiots.

Regarding the IPv6 configuration, technical support initially replied as follows:
"There is no need for virtual mac addresses on IPv6."

Then, after about 5 exchanges of emails trying to get them to understand the issue, they replied as follows:

"You have found the documentation which explains the requirements for setting up IPv6 on a Virtual Machine. We are not in a position to make exceptions with regards to requiring a ipv4 vmac."

These lazy support staff wait until it is almost 24 hours and the ticket is about to escalate out of their queue, then reply with rubbish, wasting the client's time.

Stupid idiots.

Ticket reference ID: 81553 - Service: OVH dedicated server SP64

athena007
17-02-2015, 07:06
Hello

From the OVH documentation site: http://docs.ovh.ca/en/guides-network-ipv6.html

It states the following: "If you want to use more than one IPv6 configured on your server (or want to use it on a VM) you will need to have a failover IP configured with a vMAC. Otherwise, the IPv6 won’t be routed by our routers/switches."

This means the virtual machines on which I want to configure an IPv6 address would need to have a failover IPv4 with a virtual MAC. This is not reasonable, because there are situations where a client would want to configure ONLY IPv6 for virtual machines, and there is no option to create virtual MACs for IPv6.

In the first place, why is each server assigned a /64 of IPv6 if OVH expects a one-to-one mapping between IPv6 addresses and failover IPv4s before IPv6 is correctly routed from virtual machines on a dedicated server?

I have tried to explain this to technical support, but their arrogant and lazy attitude is shameful. They only respond every 24 hours, without paying attention to the details of the client's complaint. I wonder if there is any system in place to check and review how technical support responds to client complaints.

I have configured IPv6 addresses on many virtual machines on my dedicated server, but for some reason traffic is blocked and only allowed for the MAC address of the dedicated server. My service is located in RBX5.

Please resolve the issue ASAP; it's very irritating.