OVH Community, your new community space.

2 of my 3 failover IPs are not routing


Ktyak
10-01-2013, 20:07
Ok, thanks. If there's no update from me, then you can assume it worked, hehe

Neil
10-01-2013, 16:43
Quote Originally Posted by Ktyak
So, to be sure, the solution for having TWO failover IPs on one VM is to do this:
  • Use the BridgeClient instructions for the first one
  • Use the IPAlias instructions for the second one, BUT assign the 2nd IP to the same vMAC as the first one and use that in the VM when aliasing the eth port


Can you confirm?
Hi

That is correct.
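
For reference, on a CentOS DomU the alias from the IPAlias instructions might look roughly like the sketch below. This is an illustration only: the file name, the eth0:0 alias, and BB.BB.BB.92 standing in for the second failover IP are placeholder assumptions; OVH's IPAlias guide has the authoritative values.

Code:
# /etc/sysconfig/network-scripts/ifcfg-eth0:0  (hypothetical alias config)
# The alias rides on eth0, so it shares eth0's vMAC automatically.
DEVICE=eth0:0
BOOTPROTO=static
IPADDR=BB.BB.BB.92
NETMASK=255.255.255.255
BROADCAST=BB.BB.BB.92
ONBOOT=yes
After "ifup eth0:0", both failover IPs should answer on the same vMAC.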

Ktyak
10-01-2013, 16:42
So, to be sure, the solution for having TWO failover IPs on one VM is to do this:
  • Use the BridgeClient instructions for the first one
  • Use the IPAlias instructions for the second one, BUT assign the 2nd IP to the same vMAC as the first one and use that in the VM when aliasing the eth port


Can you confirm?

Neil
10-01-2013, 11:27
Hi

In the OVH Manager you can choose to generate a new vMAC or select an existing one, so choose the same vMAC as the IP used for the bridged networking. If the second IP already has a vMAC of its own, you will need to delete it first before you can assign the duplicate vMAC.

Ktyak
10-01-2013, 10:33
I thought the vMAC was always generated for me and that I didn't get to choose one.
I can't tell right now since I have 3 separate vMACs for the 3 failover IPs.

Neil
10-01-2013, 09:51
Quote Originally Posted by Ktyak
Also, should the IPAlias instructions be used for the 2nd failover IP on the Xen VM/VPS when the BridgeClient instructions have already been used for the 1st failover IP on the same VM?
Hi

Yes, but you need to set up a vMAC, and it must be the same vMAC as the main IP of the virtual machine.

Ktyak
10-01-2013, 09:48
Hi Mark,

I'm sorry, but I can't afford to have my VMs offline any longer, so I decided to do without the 2nd failover IP for now and do what I was doing with it in another location.

However, if you are able to come up with a recommended way (see my previous post) to have two failover IPs, then I'll look at re-adding the 2nd failover IP later today.

Ktyak
10-01-2013, 03:09
Also, should the IPAlias instructions be used for the 2nd failover IP on the Xen VM/VPS when the BridgeClient instructions have already been used for the 1st failover IP on the same VM?

Ktyak
09-01-2013, 17:20
Mark,

I did set up the basic configuration you asked for the other day.

You saw the diagnostics showing that I had; that's what caused you to raise the incident with the engineers in the first place.

I've since put it back to what I really need, since I'm not going to keep running the test configuration.

However, I am not able to give the engineers access to my real VMs, so what I have done is clone one of them and set the clone up with the problematic IP, which, as a result of their work yesterday, now appears improved (i.e. it's routing now).

I'll explain below.

While waiting for the engineers to come online in the morning, I've spent the evening trying to further investigate and resolve this issue myself on the real setup.

Current situation:

I have made some progress in getting the 2nd failover IP to work on one VM (i.e. .91 and .92 on the SAME VM, with the third failover IP, .100, on a 2nd VM).

If I use tcpdump on eth2, I can now see external ping requests coming in for the .92 failover IP, but no replies are being sent back out (and there is NO iptables firewalling defined while I do the testing).

However, on the same VM, if I send ping requests to the .91 IP, then tcpdump shows both the requests and the replies (again with no iptables rules defined).

Perhaps you or someone else has a suggestion as to why this might be?

Code:
 
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
5.135.179.254   0.0.0.0         255.255.255.255 UH    0      0        0 eth0
169.254.0.0     0.0.0.0         255.255.0.0     U     1002   0        0 eth0 (.91 NIC)
169.254.0.0     0.0.0.0         255.255.0.0     U     1004   0        0 eth2 (.92 NIC)
0.0.0.0         5.135.179.254   0.0.0.0         UG    0      0        0 eth0

eth0      Link encap:Ethernet  HWaddr 02:00:00:87:14:04  
          inet addr:AA.AA.AA.91  Bcast:AA.AA.AA.91  Mask:255.255.255.255
          inet6 addr: fe80::ff:fe87:1404/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:47367 errors:0 dropped:0 overruns:0 frame:0
          TX packets:18215 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:6970486 (6.6 MiB)  TX bytes:5123788 (4.8 MiB)
          Interrupt:246 

eth2    Link encap:Ethernet  HWaddr 02:00:00:C8:C7:42  
          inet addr:BB.BB.BB.92  Bcast:BB.BB.BB.92  Mask:255.255.255.255
          inet6 addr: fe80::ff:fec8:c742/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:29049 errors:0 dropped:0 overruns:0 frame:0
          TX packets:10 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:2074057 (1.9 MiB)  TX bytes:580 (580.0 b)
          Interrupt:244 

lo        Link encap:Local Loopback  
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:16436  Metric:1
          RX packets:10432 errors:0 dropped:0 overruns:0 frame:0
          TX packets:10432 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:2009972 (1.9 MiB)  TX bytes:2009972 (1.9 MiB)
No firewalling during the testing:
Code:
[root@clientA X]# iptables --list
Chain INPUT (policy ACCEPT)
target     prot opt source               destination         

Chain FORWARD (policy ACCEPT)
target     prot opt source               destination         

Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination         
[root@sosweb jason]#
I'll close down this VM before the morning and leave the engineers with the cloned one so they can have a play, but I'm not convinced it's going to be of any use to them.

I suspect that there is an issue with having two failover IPs from the same subnet multihomed on 2 NICs on the same VM.
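
One possibility worth checking (an assumption, not a confirmed diagnosis): with a single default route out of eth0, replies to traffic that arrives on eth2 try to leave via eth0, and reverse-path filtering can silently drop the flow. Source-based policy routing gives the second IP its own route out of eth2; a sketch, reusing the 5.135.179.254 gateway from the routing table above (the table name "fo2" is arbitrary):

Code:
# Give the .92 address its own routing table so replies leave via eth2
echo "200 fo2" >> /etc/iproute2/rt_tables
ip route add 5.135.179.254 dev eth2 table fo2
ip route add default via 5.135.179.254 dev eth2 table fo2
ip rule add from BB.BB.BB.92 table fo2
# Strict reverse-path filtering can also drop this asymmetric traffic
sysctl -w net.ipv4.conf.eth2.rp_filter=0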

marks
09-01-2013, 16:14
Yes, that's probably related to the problem. But my point is that, for our engineers to check your issue, they need you to set it up in the basic configuration and provide the output of the commands. Otherwise, they cannot do anything.

Thanks

Ktyak
09-01-2013, 14:18
Mark,

I reported both problem IPs in the original report.

You already asked me to switch to just one IP, and that now works. Now I need them to check the vMAC for the .92 IP, since traffic for it is not coming in and it seems to be the same type of issue.

They already have the relevant routing info in the incident report.

They need to go check the vMAC for the 2nd IP.

For the record, that .92 IP has been tried both on a VM with 2 failover IPs and on a VM by itself, with the same result: no routing.

No ARPs seem to be coming in to the Xen host for the .92 IP, which is exactly the same symptom as for the IP they have just investigated, found a problem with, and fixed.

marks
09-01-2013, 09:45
Yeah, I can see that the engineers fixed the switch port for the IP you provided logs for.

I can see that you've reported another IP that doesn't work. Remember that you have to provide the logs of that IP's configuration for the engineers to see that everything has been set up correctly on your side. So, could you add:

# ifconfig
# route -n

Also, set up the VPS for this IP without any other IPs: just a simple temporary configuration for the engineers to check the vMAC.

Ktyak
09-01-2013, 01:18
Turns out it was the vMAC. The OVH engineer has managed to fix one of the failing IPs; I'm currently awaiting their attention for the other one, which hopefully will be just as swift and result in all 3 working.

Ktyak
07-01-2013, 21:56
Quote Originally Posted by marks
regarding OVH, we can check whether the vMACs for each failover IP are created and working properly. For that, we will need to check each IP/vMAC separately, on a temporary VPS, just for troubleshooting.

Once we can make sure that all the vMACs are working properly individually, then we can move on and test them in this more complex setup (as explained in the email I sent to you from customer support).
Thanks Mark, I replied by email as well.

Ktyak
07-01-2013, 21:45
Thanks very much for the tip.

It came in very handy

DigitalDaz
07-01-2013, 20:10
Hi,

Over the years I have seen a lot of routing problems with virtual MACs; a fair share of them have been OVH's problem, but equally there are many misconfigurations.

One fairly easy way I've found to help determine which is to use tcpdump on the main host. If it's an OVH problem, it will come to light very quickly.

Do a:

tcpdump -ni vmbr0 arp (substituting eth0 or whatever your interface might be).

Then, from outside the OVH network, try pinging one of your failover IPs.

You should see ARP requests from the OVH router to your failover IP, and your failover should reply.

If that works, then do the same from your failover VM: try to ping something outside the OVH network, like 8.8.4.4.

This time you will see your VM sending ARP requests to the OVH router, and you should see a reply.

This of course assumes no firewalling.

Though there are many other things that can go wrong, these simple little tests alone can rule a lot out.

If the OVH router either does not send the ARP or does not reply to your ARPs, then you can safely get your ticket in.
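
Put together, the two checks boil down to something like this (a sketch; vmbr0 and the addresses are placeholders for your own setup):

Code:
# On the host: watch ARP traffic on the bridge
tcpdump -ni vmbr0 arp

# Test 1, from OUTSIDE the OVH network: ping your failover IP.
#   Expect a "who-has <failover IP>" from the OVH router and an
#   "is-at <vMAC>" reply from your VM.

# Test 2, from INSIDE the failover VM: ping 8.8.4.4.
#   Expect a "who-has <gateway>" from your VM and an "is-at"
#   reply from the OVH router.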

marks
07-01-2013, 12:34
regarding OVH, we can check whether the vMACs for each failover IP are created and working properly. For that, we will need to check each IP/vMAC separately, on a temporary VPS, just for troubleshooting.

Once we can make sure that all the vMACs are working properly individually, then we can move on and test them in this more complex setup (as explained in the email I sent to you from customer support).

Ktyak
07-01-2013, 11:26
Ok, I have a

Xen Dom0 host:
(hostname: X.kimsufi.com, xenbr1 IPv4: A.A.A.A)

On that Xen host I have 2 Xen DomU CentOS clients with 3 failover IPs and 2 private IPs. Each failover IP has a virtual MAC assigned in the OVH Manager.

DomU Client A:
  • 2 failover IPs (X.X.X.91 eth0 & X.X.X.92 eth2)
  • 1 private IP (10.0.100.2 eth1)


DomU Client B:
  • 1 failover IP (Y.Y.Y.100 eth0)
  • 1 private IP (10.0.100.1 eth1)


Problems

Client A
  • I can get into Client A via A.A.A.91
  • I can get out from Client A to the rest of the world
  • I can get from Client A to Client B via the private IPs
  • I CANNOT get into Client A via A.A.A.92

Client B
  • I can get into Client B from Client A via the private IP
  • I can get into Client A from Client B via the private IP
  • I CANNOT get into Client B via Y.Y.Y.100
  • I CANNOT get out of Client B to the rest of the world


I've largely followed the BridgeClient instructions.

I had a similar setup working in the past on a different server, but it seems to me that the 2 new failover IPs are not routing properly. I've been looking at it for a couple of days, but I can't see the problem.

I've posted a support request in the OVH Manager but have had no response yet.

So I thought I'd see if any forum users could advise.