
Resize RAID 1 Partitions (Software RAID) on CentOS


heise
19-11-2014, 08:39
Impact on performance? I would say no.

Mounting via /etc/fstab instead of a cronjob: yes, see http://serverfault.com/questions/613...d-in-etc-fstab
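Roughly like this, as a sketch using the same example paths I use further down (adjust them to your own directories):

Code:
# /etc/fstab -- make the bind mount persistent instead of using a cronjob
# <source directory>  <mount point>    <type>  <options>  <dump> <pass>
/home/subdir          /var/lib/subdir  none    bind       0      0
Then "mount -a" applies it without a reboot.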

Stop mysql and apache? Well yes, if their directories are involved, otherwise no. But you can run rsync first, then stop both services and run rsync again. That minimizes the downtime.
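As a rough sketch, assuming you are moving e.g. /var/lib/mysql to /home/mysql (the service names are just examples and may differ on a cPanel box):

Code:
# first pass while everything is still running -- this takes the longest
rsync -aAHX /var/lib/mysql/ /home/mysql/

# short downtime window: stop the writers, copy only what changed, bind mount, restart
service mysql stop
service httpd stop
rsync -aAHX --delete /var/lib/mysql/ /home/mysql/
mount -B /home/mysql /var/lib/mysql
service httpd start
service mysql start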

IPXVIII
19-11-2014, 00:16
Hello heise,

Thank you for this tip. I'm just trying this on another server and it seems to work great.
Does this method have any impact on performance?
If I'm going to do this on a production server, should I first stop mysql and apache, because they are constantly writing to the disk?

Last question: is there a way to mount via /etc/fstab instead of using a cronjob?

heise
18-11-2014, 23:18
I guess you are approaching this all wrong. The normal configuration keeps the main data in the home directories; that's why root is so small. For some reason your configuration is not doing that. Well, that is not a problem. You can just create a symlink from the place under the root partition that is consuming your space to a subfolder under /home. That works in most cases, but there are exceptions. In that case you can mount a subfolder from /home onto the subfolder under the root partition that is consuming all the space. The trick is the option "--bind", or "-B" for short.

Example:

mount -B /home/subdir /var/lib/subdir

Don't forget to first copy the data from e.g. /var/lib/mysql to the subdirectory under /home. I like "rsync -aAHX /var/lib/subdir/ /home/subdir/" to copy the data. Set up a crontab entry to mount on reboot and you are all set. Contact me on Skype if you need more help.
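
For the cron part, a single @reboot entry is enough, for example (same example paths as above):

Code:
# root crontab (crontab -e): re-create the bind mount after every reboot
@reboot /bin/mount -B /home/subdir /var/lib/subdir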

IPXVIII
18-11-2014, 17:54
Hey Guys,

I have made a huge mistake with my server configuration and I'm now really getting into trouble. I'm running a WHM/cPanel server on CentOS; it's a production server with some websites on it.
My root partition "/" is running out of space and I need to do something immediately.

Resizing RAID partitions is a task above my abilities, so I started to read everything about it. Unfortunately this is very complicated, so maybe you guys can help.

At this point in my research and learning process I have figured out what I need to do, but the how is very complicated.

My server has two partitions:

/dev/md2: 21,0GB
/dev/md3: 1979GB

Unfortunately md2 is the root partition, and to resize it the server needs to go into "rescue mode", which in the end means a longer downtime.

So I decided to shrink md3 and create two new partitions for /var and /usr.

At this point, some output from the server:

Code:
root@ns506372 [~]# df -h
Filesystem            Size  Used Avail Use% Mounted on
rootfs                 20G   17G  1,9G  91% /
/dev/root              20G   17G  1,9G  91% /
devtmpfs               32G  276K   32G   1% /dev
/dev/md3              1,8T   76G  1,7T   5% /home
tmpfs                  32G     0   32G   0% /dev/shm
/dev/root              20G   17G  1,9G  91% /var/tmp
/dev/root              20G   17G  1,9G  91% /var/named/chroot/etc/named
/dev/root              20G   17G  1,9G  91% /var/named/chroot/etc/named.rfc1912.zones
/dev/root              20G   17G  1,9G  91% /var/named/chroot/etc/rndc.key
/dev/root              20G   17G  1,9G  91% /var/named/chroot/usr/lib64/bind
/dev/root              20G   17G  1,9G  91% /var/named/chroot/etc/named.iscdlv.key
/dev/root              20G   17G  1,9G  91% /var/named/chroot/etc/named.root.key
ftpback-XXXXXX.net:/export/ftpbackup/XXXXXXXX.net              500G  196G  305G  40% /backup
Code:
cat /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4] [multipath] [faulty] 
md2 : active raid1 sdb2[1] sda2[0]
      20478912 blocks [2/2] [UU]
      
md3 : active raid1 sdb3[1] sda3[0]
      1932506048 blocks [2/2] [UU]
Code:
root@ns506372 [~]# mdadm --misc --detail /dev/md2
/dev/md2:
        Version : 0.90
  Creation Time : Tue Sep 16 13:52:02 2014
     Raid Level : raid1
     Array Size : 20478912 (19.53 GiB 20.97 GB)
  Used Dev Size : 20478912 (19.53 GiB 20.97 GB)
   Raid Devices : 2
  Total Devices : 2
Preferred Minor : 2
    Persistence : Superblock is persistent

    Update Time : Tue Nov 18 13:58:11 2014
          State : clean 
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0

           UUID : 94c54da7:d37eee63:a4d2adc2:26fd5302
         Events : 0.152

    Number   Major   Minor   RaidDevice State
       0       8        2        0      active sync   /dev/sda2
       1       8       18        1      active sync   /dev/sdb2

Code:
root@ns506372 [~]# mdadm --misc --detail /dev/md3
/dev/md3:
        Version : 0.90
  Creation Time : Tue Sep 16 13:52:03 2014
     Raid Level : raid1
     Array Size : 1932506048 (1842.98 GiB 1978.89 GB)
  Used Dev Size : 1932506048 (1842.98 GiB 1978.89 GB)
   Raid Devices : 2
  Total Devices : 2
Preferred Minor : 3
    Persistence : Superblock is persistent

    Update Time : Tue Nov 18 13:58:19 2014
          State : clean 
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0

           UUID : 52bfd18b:4f959e4d:a4d2adc2:26fd5302
         Events : 0.830

    Number   Major   Minor   RaidDevice State
       0       8        3        0      active sync   /dev/sda3
       1       8       19        1      active sync   /dev/sdb3

Code:
root@ns506372 [~]# parted -l
Model: ATA HGST HUS724020AL (scsi)
Disk /dev/sda: 2000GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt

Number  Start   End     Size    File system     Name     Flags
 1      20,5kB  1049kB  1029kB                  primary  bios_grub
 2      2097kB  21,0GB  21,0GB  ext3            primary  raid
 3      21,0GB  2000GB  1979GB  ext3            primary  raid
 4      2000GB  2000GB  536MB   linux-swap(v1)  primary


Model: ATA HGST HUS724020AL (scsi)
Disk /dev/sdb: 2000GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt

Number  Start   End     Size    File system     Name     Flags
 1      20,5kB  1049kB  1029kB                  primary  bios_grub
 2      2097kB  21,0GB  21,0GB  ext3            primary  raid
 3      21,0GB  2000GB  1979GB  ext3            primary  raid
 4      2000GB  2000GB  536MB   linux-swap(v1)  primary


Model: Unknown (unknown)
Disk /dev/md2: 21,0GB
Sector size (logical/physical): 512B/512B
Partition Table: loop

Number  Start  End     Size    File system  Flags
 1      0,00B  21,0GB  21,0GB  ext3


Model: Unknown (unknown)
Disk /dev/md3: 1979GB
Sector size (logical/physical): 512B/512B
Partition Table: loop

Number  Start  End     Size    File system  Flags
 1      0,00B  1979GB  1979GB  ext3
It's a GPT (GUID Partition Table) system.

I have found this tutorial on how to shrink and grow a software RAID, but I'm not sure if this is going to work for me.
http://www.howtoforge.com/how-to-res...-software-raid
Besides, that tutorial is from 2008.

Falko describes the use of mdadm and resize2fs, but I don't quite understand how mdadm and resize2fs work together.
The other question is: if I use "mdadm --grow /dev/md3 --size=XXX", does it affect both disks?
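
As far as I understand the tutorial, for shrinking md3 the order would be roughly like this, with placeholder sizes (please correct me if I got this wrong):

Code:
umount /home                      # filesystem must be unmounted for shrinking
e2fsck -f /dev/md3                # forced check before resizing
resize2fs /dev/md3 900G           # 1. shrink the ext3 filesystem below the new array size
mdadm --grow /dev/md3 --size=XXX  # 2. shrink the md device (--size is in KiB)
resize2fs /dev/md3                # 3. grow the filesystem again to fill the shrunk array
mount /home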

I'm trying to get this result:
md02 = 20GB = mount on /
md03 = 1TB = mount on /home
md04 = 500GB = mount on /var
md05 = 250GB = mount on /usr

I'm happy about any advice.