OVH Community, your new community space.

possible to break the RAID config WHILE deployed?


Ashley
18-06-2009, 19:11
Quote Originally Posted by ruperthair
Could someone please post the default OVH 'lilo.conf'? I hate lilo, but I need to document this part of the procedure too.
Code:
prompt
timeout=50
default=linux
boot=/dev/md1
raid-extra-boot=mbr-only
map=/boot/map
install=/boot/boot.b
lba32
append=""
#serial=0,9600n8

image=/boot/bzImage-2.6.27.10-xxxx-grs-ipv4-64
        label=linux
        read-only
        root=/dev/md1
This is the default for my Debian etch.
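One thing worth remembering with a config like this (a generic lilo reminder, not specific to OVH's image): editing 'lilo.conf' does nothing by itself; the boot map has to be rebuilt afterwards. A minimal sketch, assuming the stock '/etc/lilo.conf' above:

```shell
# Re-run lilo after any change to /etc/lilo.conf, or the edit
# has no effect on the next boot.
# -v makes it print which devices receive a boot record.
lilo -v

# With 'raid-extra-boot=mbr-only', lilo writes a boot record to the
# MBR of each disk in the /dev/md1 array, so the box should still
# boot if the first drive dies.
```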

By the way, include IPv6 support if you can!

ruperthair
18-06-2009, 18:55
Quote Originally Posted by ruperthair
I don't like using OSes installed by other people, so I re-installed it myself using the Debian installer's ssh server. I can write that bit up too if anyone's interested.
Could someone please post the default OVH 'lilo.conf'? I hate lilo, but I need to document this part of the procedure too.

Ashley
18-06-2009, 13:17
Quote Originally Posted by zimsters
worked like a charm

thanks! wikis do rule, i plan to scour the entire wiki this evening.
There are only a few articles on it at the moment, but I'm sure you'll have something nice to contribute.

zimsters
18-06-2009, 13:09
worked like a charm

thanks! wikis do rule, i plan to scour the entire wiki this evening.

Ashley
18-06-2009, 13:06
Quote Originally Posted by ruperthair
Wow! That's quite an omission!

I don't like using OSes installed by other people, so I re-installed it myself using the Debian installer's ssh server. I can write that bit up too if anyone's interested.
If you can, that would be great. It would be nice to follow a step-by-step guide on removing OVH's Debian 5.0 and installing my own.

ruperthair
18-06-2009, 12:46
Quote Originally Posted by zimsters
haha yeah i managed to get around that, hadn't read ahead in your wiki which does explain how to add mdadm
That bit was added by Ashley. That's the great thing about wikis: everyone can edit and improve on what's there!

ruperthair
18-06-2009, 12:40
Quote Originally Posted by zimsters
ok ran into an issue.

i don't have md0/md1, i have md1/md2. so i figured md1=md0 and md2=md1 from your examples. they use the same sda1/sdb1 set up as what you showed.
That's right.

Quote Originally Posted by zimsters
which worked fine. formatted, and mounted. but capacity is only 40mb or something???

i undid my changes and remounted it as a raid array which has already completed rebuilding. don't want to screw it up, what am i doing wrong?
That'll be the '/boot' partition. They are usually small but 40MB is taking the piss a bit. You're not going to fit many kernels in there!

If you do the same with 'md2'/'sdb2' then you should get more space.
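For anyone following along, the full sequence for breaking the big 'md2' half out of the mirror might look like the sketch below (device names assumed to match the mdstat output earlier in the thread; the filesystem and mount point are just examples). Bear in mind this leaves the array degraded on purpose, with no redundancy for that data:

```shell
# Mark the second half of the md2 mirror as failed, then remove it.
mdadm /dev/md2 --fail /dev/sdb2
mdadm /dev/md2 --remove /dev/sdb2

# Wipe the md superblock so nothing tries to re-assemble
# sdb2 into the array on the next reboot.
mdadm --zero-superblock /dev/sdb2

# Format the freed partition and mount it as a plain secondary disk.
mkfs.ext3 /dev/sdb2
mkdir -p /mnt/data
mount /dev/sdb2 /mnt/data
```

Add a matching line to '/etc/fstab' if you want it mounted at boot.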

zimsters
18-06-2009, 12:35
Quote Originally Posted by ruperthair
Wow! That's quite an omission!

I don't like using OSes installed by other people, so I re-installed it myself using the Debian installer's ssh server. I can write that bit up too if anyone's interested.

haha yeah i managed to get around that, hadn't read ahead in your wiki which does explain how to add mdadm

ruperthair
18-06-2009, 12:35
Quote Originally Posted by zimsters
problem: my server doesn't have the mdadm command... this is a superplan bestof for vmware, probably debian 32 bit. any ideas?
Wow! That's quite an omission!

I don't like using OSes installed by other people, so I re-installed it myself using the Debian installer's ssh server. I can write that bit up too if anyone's interested.

zimsters
18-06-2009, 12:34
ok ran into an issue.

i don't have md0/md1, i have md1/md2. so i figured md1=md0 and md2=md1 from your examples. they use the same sda1/sdb1 set up as what you showed. here's my mdstat:

root@:/# cat /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4] [multipath] [faulty]
md1 : active raid1 sdb1[1] sda1[0]
      5245120 blocks [2/2] [UU]

md2 : active raid1 sdb2[1] sda2[0]
      726804608 blocks [2/2] [UU]

unused devices: <none>
so i did:
mdadm -f /dev/md1 /dev/sdb1
mdadm -r /dev/md1 /dev/sdb1

which worked fine. formatted, and mounted. but capacity is only 40mb or something???

i undid my changes and remounted it as a raid array which has already completed rebuilding. don't want to screw it up, what am i doing wrong?
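Undoing the change, as described here, amounts to putting the partition back into the mirror and letting md resync it. A minimal sketch, assuming the same md1/sdb1 devices as above:

```shell
# Re-add the removed partition; md rebuilds it from the surviving half.
mdadm /dev/md1 --add /dev/sdb1

# Watch the resync progress until it shows [2/2] [UU] again.
cat /proc/mdstat
```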

zimsters
18-06-2009, 12:19
resolved!

apt-get install initramfs-tools mdadm
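After installing the packages, a quick sanity check (generic mdadm usage, not specific to this thread) confirms the tool is present and can see the existing arrays:

```shell
mdadm --version

# Should list md1 and md2 as active raid1.
cat /proc/mdstat

# Per-array detail: state, members, sync status.
mdadm --detail /dev/md1
```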

zimsters
18-06-2009, 12:16
problem: my server doesn't have the mdadm command... this is a superplan bestof for vmware, probably debian 32 bit. any ideas?

zimsters
18-06-2009, 12:03
Quote Originally Posted by ruperthair
This is possible. I've knocked up a quick wiki page here: http://ovhwiki.com/index.php?title=RAID

love the website, thanks

ruperthair
18-06-2009, 08:17
Quote Originally Posted by zimsters
i have one of the 750gbx2 raid set ups with ovh. i was wondering whether i could "break" the raid array and set up the 2nd drive as a secondary drive, without requiring any reboots / reformats etc?
This is possible. I've knocked up a quick wiki page here: http://ovhwiki.com/index.php?title=RAID

speedler
17-06-2009, 20:22
Quote Originally Posted by zimsters
yep it is software raid. i've seen the thread regarding doing it on rebuild, but isn't there a way i can do it while it's still live, without requiring a rebuild?
I don't think so. Seeing as OVH don't do this themselves, I am pretty sure this is something you need to do yourself; it's very easy to do!

zimsters
17-06-2009, 20:06
yep it is software raid. i've seen the thread regarding doing it on rebuild, but isn't there a way i can do it while it's still live, without requiring a rebuild?

speedler
17-06-2009, 20:00
Quote Originally Posted by zimsters
hey guys,

i have one of the 750gbx2 raid set ups with ovh. i was wondering whether i could "break" the raid array and set up the 2nd drive as a secondary drive, without requiring any reboots / reformats etc?

thanks!
I am pretty sure this server is just software RAID, so if you do a reinstall as soon as it's deployed and choose the third option, you can then change it to RAID 0, meaning you can use the complete drive!

zimsters
17-06-2009, 19:59
hey guys,

i have one of the 750gbx2 raid set ups with ovh. i was wondering whether i could "break" the raid array and set up the 2nd drive as a secondary drive, without requiring any reboots / reformats etc?

thanks!