OVH Community, your new community space.

Virtual MAC not working, sick of going round in circles with support....


iMx
16-02-2010, 14:26
Just as a general FYI for the rest of you, derchris clearly doesn't understand layer 2/3 routing and did not read my updates, where I clearly stated that I'd tried both methods of setting this up:

a) Adding an extra network that encompasses the default GW of your box
b) Setting a /32 network with the default GW being the same as the failover IP.

Neither worked... hence the issue is, clearly, with the scripting of the VMAC addition/modification. A couple of searches on this forum will show that this is very temperamental; you don't have to take my word for it.

As for derchris's claim that proxy ARP is NAT, this is also clearly incorrect. Proxy ARP is NOT NAT...

I strongly encourage you to use proxy ARP if you run VMware Server hosted on Linux... this won't work for ESXi.
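For reference, here is a minimal host-side sketch of that proxy-ARP setup (the interface names eth0 and vmnet1, and the use of a host-only network for the VMs, are assumptions about a typical VMware Server on Linux box, not details from this thread):

# enable forwarding and proxy ARP on the host (root shell)
echo 1 > /proc/sys/net/ipv4/ip_forward
echo 1 > /proc/sys/net/ipv4/conf/eth0/proxy_arp
echo 1 > /proc/sys/net/ipv4/conf/vmnet1/proxy_arp
# route the failover IP to the host-only network so the host answers ARP for it
ip route add <failover-ip>/32 dev vmnet1

Inside the VM you would then configure the failover IP on its interface and use the host's vmnet1 address as the default gateway.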

derchris
16-02-2010, 00:54
Not even then.
If you see your server's IP in the traceroute, then you are using some other network setup.
The only time I actually did see this was with Xen.
The server's main IP was one hop before the failover/RIPE IP.

DigitalDaz
15-02-2010, 21:20
That is what I thought; the only time I should see my server's IP is if the vMACs are not working.

derchris
15-02-2010, 20:55
Don't know your config, but in a traceroute you should never see your server's main IP, only the failover/RIPE IP.
The last entry before that would be an OVH router/gateway, most likely a vss-*.

DigitalDaz
15-02-2010, 20:00
Unless I'm doing something wrong, which is quite possible, my RIPE addresses are not working with a vMAC. Failovers are fine.

The first thing I'm testing is that, if the vMAC is set up right, the last hop I reach before the *** entries is NOT my server's main IP. Is this the right way to test?

This is, of course, when I'm running a traceroute in to the server.
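For what it's worth, a quick way to run that check from a machine outside OVH (the target is a placeholder for your failover/RIPE IP):

traceroute <failover-ip>
# with the vMAC/bridge working, the hop just before the target (or before the
# * * * entries) should be an OVH gateway such as a vss-*, never the host
# server's main IP

That matches what derchris describes below: seeing the host's main IP as the previous hop suggests traffic is still being routed through the host rather than bridged.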

derchris
15-02-2010, 10:10
No, I think you don't get it.
NAT was never a problem, and it always worked. That is what you are using now.
What we (customers) wanted was a solution to use Bridge mode, which we now have thanks to VMAC.
There is a how-to (by OVH) on how to set it up, and it is working.

This can't be the fault of OVH if you are now trying to set it up differently.
That is how the network works.
If it was working for you on some of your VMs, then I would call that lucky.
Maybe the switch was not configured correctly, but not in the way you were looking for.

And as a side note, I just installed my 9th VM with a VMAC in bridge mode, following the guide, and it's all working.

iMx
14-02-2010, 21:46
derchris, please kindly read the other evidence before posting snotty updates. It's clearly an issue with their virtual MACs at layer 2 - the configuration I posted previously was, and still is, working on other VMs, just not on this other one.

The last solution in that link does away with the need for virtual MACs entirely, meaning they can't cause me outages going forward.

derchris
14-02-2010, 17:20
Well, I gave you a solution for VMware. It is up to you whether you want to use it or not.

iMx
14-02-2010, 17:03
http://blog.itsmine.co.uk/2009/09/07...-2-networking/

Should have just done this days ago - it works perfectly and means I don't have to rely on their virtual MACs. Recommended for anyone else using VMware Server as well.

iMx
14-02-2010, 15:06
I've been trying for 3 days. On Friday afternoon I was assured I wouldn't have to wait until Monday - I phone them now, only to be told 'I would expect a reply by tomorrow afternoon'.

Myatu
14-02-2010, 14:51
Not bad, 58% uptime /sarcasm

That graph doesn't make any sense either, though. I mean, if it's something caused by a local process or similar, you could at least expect some consistency (i.e., down every hour for 10 mins). This is all over the place... I would be inclined to tell OVH to fix the switch, indeed.

iMx
14-02-2010, 14:09


[Monitoring graph] Availability for my VM over the last 24 hours...

iMx
13-02-2010, 15:21
Nope, you don't - in most cases yes, but in this one, no. The second load of data I posted was for mambo; try pinging mambo.streamvia.com.

It started working about 12 hours ago - and it had been working before that too... there is a more fundamental problem here, in OVH's network...

Using the latter method of a /32 with itself as the gateway works just fine, or should do, on VMware - it won't on KVM/Proxmox.

derchris
13-02-2010, 15:13
You need to set the Gateways, otherwise it will not work.

iMx
13-02-2010, 14:28
And it's just fallen off the face of the earth again. I have called support, only to be told it will be looked into - even though there's been no update for 24 hours. I can't speak to anyone else, including a manager, because they don't speak English :/

iMx
13-02-2010, 10:31
# The primary network interface
auto eth0
iface eth0 inet static
address failover-ip
netmask 255.255.255.255
network failover-ip
broadcast failover-ip
gateway failover-ip
# dns-* options are implemented by the resolvconf package, if installed
dns-nameservers 213.186.33.99
dns-search domain.com
For Debian and KiddieOS (Ubuntu)

iMx
13-02-2010, 10:24
Hey derchris,

The method you specify below is certainly required for KVM and Proxmox; however, with VMware it should (and did/does) work by simply setting your gateway to the same as your failover IP - no need for the post-up routes.

However, both methods should work, I believe. All 3 VMs of mine were working for 5-6 days, then 2 suddenly stopped one morning... since then, those 2 have come back and the other one has gone down. It's not an unstable or incorrect configuration on our side, but a core problem with the OVH infrastructure.

Let's hope they fix this soon... I'm getting sick of it.

iMx

Quote Originally Posted by derchris
Don't know what is what in your posts, but here is the network config for a Debian VM:

PHP Code:
iface eth0 inet static
        address <Virtual IP>
        netmask 255.255.255.255
        broadcast <Virtual IP>
        dns-nameservers 213.186.33.99
        dns-search <domain>
        post-up route add <Gateway of Host> dev eth0
        post-up route add default gw <Gateway of Host>
        post-down route del <Gateway of Host> dev eth0
        post-down route del default gw <Gateway of Host>
I installed more than 3 systems with this setup, and could add more and more, until I'm out of IPs.

brgroup
13-02-2010, 02:24
I'm 22 days in waiting for a fix with these virtual MACs... waiting, waiting... pretty useless. At one point Max asked me for access to the server:
Thanks to provide me access needed (VM and VPS) and your
agree to check the vm configuration, start it and modify it
if that's needed
...which I got to him within an hour and 10 minutes of receiving the ticket <#346688>, and then he disappears for 7 bloody days!! Pretty frustrating... what's up with that?

zzzzz..

derchris
12-02-2010, 23:23
Don't know what is what in your posts, but here is the network config for a Debian VM:

PHP Code:
iface eth0 inet static
        address <Virtual IP>
        netmask 255.255.255.255
        broadcast <Virtual IP>
        dns-nameservers 213.186.33.99
        dns-search <domain>
        post-up route add <Gateway of Host> dev eth0
        post-up route add default gw <Gateway of Host>
        post-down route del <Gateway of Host> dev eth0
        post-down route del default gw <Gateway of Host>
I installed more than 3 systems with this setup, and could add more and more, until I'm out of IPs.
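As a rough illustration of what those post-up lines do (a sketch in iproute2 terms; <Gateway of Host> is the physical server's default gateway, which in the outputs elsewhere in this thread is 91.121.223.254):

# host route to the gateway first, so it is reachable on-link through eth0
ip route add <Gateway of Host> dev eth0
# then use it as the VM's default gateway
ip route add default via <Gateway of Host>

Without the first route the default route cannot be installed, because with a /32 netmask the gateway is not on any directly connected network.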

iMx
12-02-2010, 20:55
OK, the 2 posted here randomly started working, and the 1 that was working is now down... I did nothing!

This is crazy and really isn't ready for anyone that wants any uptime... at least my mail server is up now! Just need the other web server back now...

iMx
12-02-2010, 19:33
Incidentally, all of the VMs worked with the /32 netmask and the gateway being set to the same as the L3 failover address.

I can see the VM ARPing like mad, but never getting a reply from the OVH gateway:

vmws01:~# tcpdump -en -i eth0 host 94.23.154.152 and arp
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on eth0, link-type EN10MB (Ethernet), capture size 96 bytes
18:38:49.256960 00:50:56:00:07:ac > ff:ff:ff:ff:ff:ff, ethertype ARP (0x0806), length 42: arp who-has 188.220.61.41 tell 94.23.154.152
18:38:49.257027 00:50:56:00:07:ac > ff:ff:ff:ff:ff:ff, ethertype ARP (0x0806), length 42: arp who-has 188.220.61.41 tell 94.23.154.152
The ARP table on the 1 VM that is working with this method is full, all with the address of the OVH router - and it is set up exactly the same...

This HAS to be infrastructure-based...
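A couple of checks that can help narrow down where the ARP is dying (the host interface name eth0 is an assumption; the MAC is the one from the capture above):

# inside the VM: confirm eth0 really carries the OVH-assigned virtual MAC
ifconfig eth0 | grep HWaddr
# on the physical host: confirm the ARP requests actually make it onto the wire,
# and whether anything ever answers them
tcpdump -en -i eth0 arp and ether host 00:50:56:00:07:ac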

iMx
12-02-2010, 19:20
OK... so I have a box set up with Debian and VMware Server 2, running 3 VMs - 1 works (on and off), 2 don't...

billabong:~# netstat -rn
Kernel IP routing table
Destination     Gateway         Genmask          Flags  MSS Window irtt Iface
91.121.223.254  0.0.0.0         255.255.255.255  UH     0   0      0    eth0
94.23.154.0     0.0.0.0         255.255.255.0    U      0   0      0    eth0
0.0.0.0         91.121.223.254  0.0.0.0          UG     0   0      0    eth0
billabong:~# ifconfig eth0
eth0      Link encap:Ethernet  HWaddr 00:50:56:00:0c:d7
          inet addr:94.23.154.157  Bcast:94.23.154.255  Mask:255.255.255.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:4719 errors:0 dropped:0 overruns:0 frame:0
          TX packets:2165 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:511221 (499.2 KiB)  TX bytes:422903 (412.9 KiB)
Can ping gateway:

billabong:~# ping 91.121.223.254
PING 91.121.223.254 (91.121.223.254) 56(84) bytes of data.
64 bytes from 91.121.223.254: icmp_seq=1 ttl=255 time=3.50 ms
64 bytes from 91.121.223.254: icmp_seq=2 ttl=255 time=0.769 ms
64 bytes from 91.121.223.254: icmp_seq=3 ttl=255 time=0.698 ms
Traceroute fails:

billabong:~# traceroute 91.197.32.1
traceroute to 91.197.32.1 (91.197.32.1), 30 hops max, 40 byte packets
 1  rbx-63-m2.routers.ovh.net (91.121.223.252)  11.795 ms  5.420 ms  4.133 ms
 2  * * *
 3  * * *
 4  * * *
 5  * * *
 6  * * *
 7  *
Can't traceroute out, even though I can hit the gateway. The above is the config that OVH told me to apply, despite my previous config below - still active on a different VM - which previously worked and now doesn't...

I'm really sick of going round in circles explaining that the problem is with the infrastructure and no one fixing it!

root@mambo:~# ifconfig eth0
eth0 Link encap:Ethernet HWaddr 00:50:56:00:07:ac
inet addr:94.23.154.152 Bcast:94.23.154.152 Mask:255.255.255.255
inet6 addr: fe80::250:56ff:fe00:7ac/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:2638 errors:0 dropped:0 overruns:0 frame:0
TX packets:4167 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:228802 (223.4 KB) TX bytes:180438 (176.2 KB)
Base address:0x2000 Memory:d8920000-d8940000
root@mambo:~# netstat -rn
Kernel IP routing table
Destination Gateway Genmask Flags MSS Window irtt Iface
0.0.0.0 94.23.154.152 0.0.0.0 UG 0 0 0 eth0
root@mambo:~# ping 91.197.32.1
PING 91.197.32.1 (91.197.32.1) 56(84) bytes of data.
From 94.23.154.152 icmp_seq=1 Destination Host Unreachable
From 94.23.154.152 icmp_seq=2 Destination Host Unreachable
From 94.23.154.152 icmp_seq=3 Destination Host Unreachable

... Seems like quite a few of you are having the same problems, and are sick of the responses! I'm a great fan of OVH, for the prices, but when something is clearly wrong with the infrastructure I'm sick of being fobbed off...
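Worth noting: since the only route on mambo points at its own address, that 'Destination Host Unreachable' reported from 94.23.154.152 itself simply means the VM's ARP requests for the destination are never being answered - consistent with the tcpdump capture shown earlier in the thread. A quick way to see it (illustrative output, not taken from this server):

root@mambo:~# arp -an
? (91.197.32.1) at <incomplete> on eth0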

iMx