westerndavid,
Here is the information we need. Both sda and sdb have identical partition tables, so I'll only quote one.
Code:
Number  Start   End     Size    File system     Name     Flags
 1      20.5kB  1049kB  1029kB                  primary  bios_grub
 2      2097kB  21.0GB  21.0GB  ext3            primary  raid
 3      21.0GB  21.5GB  536MB   linux-swap(v1)  primary
 4      21.5GB  2000GB  1979GB  ext3            primary  raid
The errors are informational and can safely be ignored.
What your partition table tells us is that you have no unpartitioned space, so making, say, /var its own partition is not an option.
To grow a partition, you need free space physically at the end of the partition you want to grow.
The RAID1 adds to the complexity too.
The simple response is to back up /home elsewhere, then reinstall.
There is another, more complex response. You should still back up /home first, because if things go wrong you will need the backup.
Having RAID1 means you have two copies of everything: one on sda, the other on sdb, the idea being that your system can operate on one drive if the other fails. You can take advantage of this to repartition 'on the fly'.
The steps are as follows. Look in /proc/mdstat to see the names of your raid sets and their members. I get
Code:
$ cat /proc/mdstat
Personalities : [raid1] [raid6] [raid5] [raid4]
md127 : active raid5 sda6[0] sdd6[3] sdc6[2] sdb6[1]
      2912833152 blocks level 5, 64k chunk, algorithm 2 [4/4] [UUUU]
md126 : active raid5 sda5[0] sdd5[3] sdc5[2] sdb5[1]
      15759360 blocks level 5, 64k chunk, algorithm 2 [4/4] [UUUU]
md125 : active raid1 sda1[0] sdd1[3] sdc1[2] sdb1[1]
      40064 blocks [4/4] [UUUU]
Choose one drive and use mdadm to fail all of the raid members on, say, sdb.
Use mdadm to remove the failed raid members from the raid sets.
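Using the raid set and member names from my mdstat above purely as an example (substitute your own md numbers and partitions), the fail and remove steps look like this:
Code:
# mark each sdb member as failed, then pull it out of its raid set
mdadm /dev/md125 --fail /dev/sdb1
mdadm /dev/md125 --remove /dev/sdb1
mdadm /dev/md126 --fail /dev/sdb5
mdadm /dev/md126 --remove /dev/sdb5
mdadm /dev/md127 --fail /dev/sdb6
mdadm /dev/md127 --remove /dev/sdb6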
At this point, your box is running on one drive and your raid sets are in degraded mode. Look in /proc/mdstat to make sure you have done it properly.
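On a two-disk raid1, a degraded set shows one member and [2/1] [U_]. The device name and block count below are only illustrative:
Code:
$ cat /proc/mdstat
md0 : active raid1 sda2[0]
      20507648 blocks [2/1] [U_]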
The failed drive hasn't really failed, but it's no longer in use.
Repartition the failed drive to your liking.
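For example, with parted you could lay out something like the following. The sizes and the extra /var partition are purely hypothetical; adjust to taste. This wipes sdb, which is fine because it is out of the raid sets now.
Code:
# hypothetical new layout on the drive that was failed out
parted -s /dev/sdb mklabel gpt
parted -s /dev/sdb mkpart grub 1MiB 3MiB
parted -s /dev/sdb set 1 bios_grub on
parted -s /dev/sdb mkpart root 3MiB 20GiB
parted -s /dev/sdb set 2 raid on
parted -s /dev/sdb mkpart swap linux-swap 20GiB 21GiB
parted -s /dev/sdb mkpart var 21GiB 60GiB
parted -s /dev/sdb set 4 raid on
parted -s /dev/sdb mkpart home 60GiB 100%
parted -s /dev/sdb set 5 raid on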
Use mdadm to make some new raid sets in degraded mode. You cannot reuse the old /dev/mdX numbers as they are in use.
Make filesystems on your new degraded raid sets.
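A sketch, assuming the hypothetical layout above and that md0 and md1 are the numbers already in use. The word 'missing' in place of the second member is what creates the sets in degraded mode:
Code:
mdadm --create /dev/md2 --level=1 --raid-devices=2 /dev/sdb2 missing
mdadm --create /dev/md3 --level=1 --raid-devices=2 /dev/sdb4 missing
mdadm --create /dev/md4 --level=1 --raid-devices=2 /dev/sdb5 missing
mkfs.ext3 /dev/md2
mkfs.ext3 /dev/md3
mkfs.ext3 /dev/md4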
Copy your data from the old to the new raid sets. This step is more difficult than it sounds if you are still running the box from its own install.
You must not copy /proc, /dev, or /sys.
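One way to do the copy is rsync, after mounting the new sets somewhere out of the way. The mount points and device names here are just examples; the excludes keep the pseudo-filesystems (and the new raid's own mount point) out of the copy, while the empty directories themselves are still created on the target:
Code:
mount /dev/md2 /mnt/newroot
mkdir -p /mnt/newroot/var /mnt/newroot/home
mount /dev/md3 /mnt/newroot/var
mount /dev/md4 /mnt/newroot/home
rsync -aAXH --exclude='/proc/*' --exclude='/sys/*' --exclude='/dev/*' --exclude='/mnt/*' / /mnt/newroot/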
Now you have the old degraded raid and the new degraded raid, both populated with data.
Fix the /etc/fstab in the new raid to refer to the new degraded raid.
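The entries end up looking something like this, following the hypothetical layout above. CentOS often uses UUID= here instead; plain device names are shown only for brevity:
Code:
# /mnt/newroot/etc/fstab
/dev/md2    /       ext3    defaults    1 1
/dev/md3    /var    ext3    defaults    1 2
/dev/md4    /home   ext3    defaults    1 2
/dev/sdb3   swap    swap    defaults    0 0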
Your CentOS uses grub2, which I have managed to avoid so far, but grub needs to be updated to be able to boot the new install, and if you use an initramfs, it may need to be updated to assemble the new raid set. I'm a Gentoo guy, so I don't know the internals of CentOS.
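For what it's worth, on grub2 distros the usual incantation is roughly the following, run from a chroot into the new root. Treat it as a sketch to check against the CentOS documentation, not gospel:
Code:
grub2-install /dev/sdb
grub2-mkconfig -o /boot/grub2/grub.cfg
# rebuild the initramfs so it can assemble the new raid sets
dracut -f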
Once that's fixed, reboot into your new degraded raid and look around. If all is well, repartition the drive with the old raid to be identical to the new one.
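sgdisk (from the gdisk package, if you have it) can clone a GPT from one drive to the other. Mind the direction; the last argument is the source:
Code:
sgdisk -R /dev/sda /dev/sdb   # copy sdb's partition table onto sda
sgdisk -G /dev/sda            # give sda its own random GUIDs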
Use mdadm to add the partitions to the new degraded raids. The kernel will sync the data.
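Continuing with the hypothetical names from above:
Code:
mdadm /dev/md2 --add /dev/sda2
mdadm /dev/md3 --add /dev/sda4
mdadm /dev/md4 --add /dev/sda5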
You may need to fix grub and the initramfs so they know the old system has gone for good.
Wait for the sync to finish and reboot to test.
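You can watch the resync progress with:
Code:
watch cat /proc/mdstat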
This is almost the same process as you would use to replace a failed drive.