Okay, I am going by the assumption that you do NOT have a hardware RAID card (LSI/3ware) installed (which is optional on the EG-Hybrid). If you do have one, then the following is not for you. Otherwise, keep reading.
RAID choice
First determine what kind of RAID you wish to use. In layman's terms, RAID-0 gives you lots of storage (roughly 1.4TB) but no safeguard against data loss. RAID-1, on the other hand, gives you only half the storage space (750GB), but it gives you the benefit of a complete mirror of the stored data.
Which Disks?
Now you need to determine which of your disks are the two 750GB SATA ones (and not the SSDs). This you can do with:
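(That would be the standard fdisk listing, which is also where the output described below comes from:)
Code:
fdisk -l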
It should give you a list of disks, including their partitions. Likely the SATA disks will be listed last (as "/dev/sdc" and "/dev/sdd") and they hold no valid partitions ("Disk /dev/sdX doesn't contain a valid partition table").
It is very important to get this right, as obviously you don't want to erase your existing data on the SSDs. If you're unsure, post the output from the above command here and someone will undoubtedly point out to you which disks to use.
If possible, backup your existing data before starting any of the following steps!
Create partition
In the following steps, I will use "/dev/sdX" in place of the actual disks.
Type:
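(That is, run fdisk against the first SATA disk - substitute the actual device you identified above for "/dev/sdX":)
Code:
fdisk /dev/sdX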
which brings you to an interactive menu as shown below:
Code:
Device contains neither a valid DOS partition table, nor Sun, SGI or OSF disklabel
Building a new DOS disklabel with disk identifier 0x1a9dea6d.
Changes will remain in memory only, until you decide to write them.
After that, of course, the previous content won't be recoverable.
Warning: invalid flag 0x0000 of partition table 4 will be corrected by w(rite)
WARNING: DOS-compatible mode is deprecated. It's strongly recommended to
switch off the mode (command 'c') and change display units to
sectors (command 'u').
Command (m for help):
Obviously, typing "m" followed by the RETURN key will give you a short overview of the commands available.
The first thing to do is to create a partition, by typing the "n" command.
It will then ask for a "Command action", and you have a choice of extended or primary. You want a primary partition, so type "p". It will also ask for a partition number; choose 1, as in the sample below. Now it will ask you for a "First cylinder" and after that a "Last cylinder". For the first cylinder, choose 1 (the default). For the last cylinder, you can choose whatever size or end cylinder you wish; by default it's set to the very last cylinder, making the partition use the entire disk. If you're unsure, choose the default.
You're back at the main command menu now. A sample output:
Code:
Command (m for help): n
Command action
e extended
p primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-16383, default 1): 1
Last cylinder, +cylinders or +size{K,M,G} (1-16383, default 16383): 16383
Command (m for help):
Up next, you need to change the partition type. This can be done by typing the "t" command. Linux softraid uses the "fd" hex code, which can be verified by pulling up a list of codes by typing "L":
Code:
Command (m for help): t
Selected partition 1
Hex code (type L to list codes): l
0 Empty 24 NEC DOS 81 Minix / old Lin bf Solaris
1 FAT12 39 Plan 9 82 Linux swap / So c1 DRDOS/sec (FAT-
2 XENIX root 3c PartitionMagic 83 Linux c4 DRDOS/sec (FAT-
3 XENIX usr 40 Venix 80286 84 OS/2 hidden C: c6 DRDOS/sec (FAT-
...
16 Hidden FAT16 64 Novell Netware af HFS / HFS+ fb VMware VMFS
17 Hidden HPFS/NTF 65 Novell Netware b7 BSDI fs fc VMware VMKCORE
18 AST SmartSleep 70 DiskSecure Mult b8 BSDI swap fd Linux raid auto
1b Hidden W95 FAT3 75 PC/IX bb Boot Wizard hid fe LANstep
1c Hidden W95 FAT3 80 Old Minix be Solaris boot ff BBT
1e Hidden W95 FAT1
Hex code (type L to list codes):
So type "fd". It will return you to the main menu again after stating "Changed system type of partition 1 to fd (Linux raid autodetect)".
Finally, verify that all of this was done correctly by typing "p" to view the partitions.
If this matches what you have done above, then finalize everything by typing "w" for Write. As the command name suggests, this will write the partition data to the disk - THIS IS PERMANENT (in other words: DATA LOSS if you're using the wrong HDs - so triple check!).
Repeat this process for the 2nd SATA HD.
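(Optional shortcut, not part of the original steps: instead of walking through fdisk again, you can copy the partition table from the first SATA disk to the second with sfdisk, assuming /dev/sdc and /dev/sdd as in the examples above:)
Code:
sfdisk -d /dev/sdc | sfdisk /dev/sdd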
Create the RAID array
As said in the beginning, the assumption is that you're using Linux softraid. This would mean there may already be existing RAID arrays present. To find out, type:
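(That's the kernel's software RAID status file:)
Code:
cat /proc/mdstat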
This could show something similar to:
Code:
Personalities : [raid1]
read_ahead 1024 sectors
md1 : active raid1 hda3[0] hdc3[1]
522048 blocks [2/2] [UU]
md0 : active raid1 hda2[0] hdc2[1]
4192896 blocks [2/2] [UU]
md2 : active raid1 hda1[0] hdc1[1]
128384 blocks [2/2] [UU]
unused devices: <none>
The above is simply an example that's unlikely to be the same in your case, but the main thing to look for in YOUR output is "md0", "md1", etc. The point is that you want to use a name other than what's already listed, so in the above example the next possible array would be "md3". Also note that the output does not necessarily list everything in ascending/descending order.
So having determined the next one we can use, we issue the following command:
Code:
mdadm --create --verbose /dev/md3 --level=0 --raid-devices=2 /dev/sdc1 /dev/sdd1
where "/dev/md3" is the new RAID array as determined earlier, "--level=0" is for RAID-0, and "/dev/sdc1" and "/dev/sdd1" match the partitions you created in the fdisk step above. If you wish to use RAID-1 instead, use "--level=1" in the above command.
After you have issued the above command, you can verify the creation by issuing (again):
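(That is, the same /proc/mdstat check as before:)
Code:
cat /proc/mdstat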
This time it should include the new RAID array just created.
Format the new RAID array
This is a simple step, which is done by issuing:
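Assuming you want an ext3 filesystem (which is what the fstab example further down uses), that would be something like:
Code:
mkfs.ext3 /dev/mdX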
where "/dev/mdX" is the actual RAID array. The important thing here is to "forget"/ignore the partitions you've created with fdisk ("/dev/sdX") and use the RAID array instead.
Re-assemble the RAID on reboot
To help your system remember the RAID on reboot, you need to create a new mdadm.conf file.
Save your current configuration first, so you have a way to revert this in case things don't go as planned, with:
Code:
cp /etc/mdadm.conf /etc/mdadm.oldconf
(And should you need to restore it, it's "cp /etc/mdadm.oldconf /etc/mdadm.conf".)
Now type:
Code:
mdadm --detail --scan --verbose
It should list ALL your RAID arrays (the ones that existed before, plus the new one you've created in the above steps). So this would include something like "ARRAY /dev/md3 ...".
If it does indeed show all the RAID arrays, save it to the mdadm.conf file as follows:
Code:
mdadm --detail --scan --verbose > /etc/mdadm.conf
Mounting the RAID
We first need a mount point for the RAID. Often, mount points are located within the /mnt/ directory, so let's make this /mnt/raid (or whatever you'd like):
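(A simple mkdir will do:)
Code:
mkdir /mnt/raid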
Now you need to edit the "/etc/fstab" file, by adding the following line:
Code:
/dev/mdX /mnt/raid ext3 defaults 0 0
where /dev/mdX is the actual RAID array you've created and /mnt/raid the mount point created earlier.
Issuing the following command should complete everything:
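(Most likely that's simply mounting everything listed in fstab:)
Code:
mount -a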
You should now have the RAID array mounted, and ready to store data on. Check with the following command:
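(That would be the standard df listing:)
Code:
df -h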
which will list all mounted partitions and capacities. Depending on your RAID setup, you should now have 750 or 1400 GB (roughly) extra space.
Before you save a lot of things on it, reboot the server first to verify everything remains operational after a reboot. If not, verify that you have a correct /etc/mdadm.conf file and /etc/fstab file, and check the system log messages (i.e., by typing "dmesg", "dmesg | less", "cat /var/log/syslog", etc).
As a reminder:
- You need to be comfortable with using the Linux shell
- Ask here if you're unsure about a certain step, before you attempt to resolve things yourself
- Backup your data before you start, as I cannot be held responsible if you lose any existing data due to errors in this how-to or otherwise
But I'm sure you'll do fine