
How to combine 3 SSD disks? // Mount


alvaroag
31-05-2015, 18:58
You may need a full reinstall, making sure to install with a RAID5 configuration.
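Before committing to a full reinstall, it may be worth confirming how the current arrays are actually built. A read-only sketch, assuming Linux software RAID (the /dev/md1 and /dev/md2 devices in the df output further down suggest md); nothing here modifies the arrays:

```shell
# Read-only diagnostics -- safe to run before deciding on a reinstall.
if [ -r /proc/mdstat ]; then
    cat /proc/mdstat                  # lists each md array and its RAID level
fi
if command -v mdadm >/dev/null 2>&1; then
    mdadm --detail /dev/md2 2>/dev/null || true   # per-array detail; needs root
fi

# Small helper: pull "array level" pairs out of mdstat-formatted text.
raid_levels() {
    awk '/^md/ { for (i = 1; i <= NF; i++) if ($i ~ /^raid[0-9]+$/) print $1, $i }' "$@"
}
```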

shnslmz
31-05-2015, 16:07
Quote Originally Posted by alvaroag
Looks like your VMs are using all the space they have been assigned, completely filling your /vz partition. Also, your /vz partition is quite small: only 90G. Considering you have 3 x 120G disks, you may have put them in RAID1, when the best option for more space here is RAID5 (240G).
Okay, but how can I do that? I sent them a ticket, but they didn't answer.

alvaroag
30-05-2015, 18:47
Looks like your VMs are using all the space they have been assigned, completely filling your /vz partition. Also, your /vz partition is quite small: only 90G. Considering you have 3 x 120G disks, you may have put them in RAID1, when the best option for more space here is RAID5 (240G).
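For reference, the arithmetic behind those figures, as a small sketch (the 90G /vz seen here is what remains of a single 120G disk after the / and swap partitions are carved out):

```shell
# Usable capacity for n equal disks, ignoring partitioning overhead:
#   RAID1 (mirror across all n): capacity of one disk
#   RAID5:                       (n - 1) * capacity of one disk
n=3
disk_gb=120
raid1_gb=$disk_gb
raid5_gb=$(( (n - 1) * disk_gb ))
echo "RAID1 usable: ${raid1_gb}G   RAID5 usable: ${raid5_gb}G"
# -> RAID1 usable: 120G   RAID5 usable: 240G
```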

shnslmz
30-05-2015, 09:47
Quote Originally Posted by alvaroag
That's an option. Another alternative is that one of your VMs is using too much space. Check this:

cd /vz
du -h --max-depth=2 -x

That should give you a clue of where your space is going. Also, you can post the output of "df -h".

Help me please!

Code:
# df -h
Filesystem       Size  Used Avail Use% Mounted on
devtmpfs          16G  220K   16G   1% /dev
tmpfs             16G     0   16G   0% /dev/shm
/dev/md1          20G  1.4G   17G   8% /
/dev/md2          91G   90G     0 100% /vz
/dev/md1          20G  1.4G   17G   8% /var/named/chroot/etc/named
/dev/md1          20G  1.4G   17G   8% /var/named/chroot/var/named
/dev/md1          20G  1.4G   17G   8% /var/named/chroot/etc/named.conf
/dev/md1          20G  1.4G   17G   8% /var/named/chroot/etc/named.rfc1912.zones
/dev/md1          20G  1.4G   17G   8% /var/named/chroot/etc/rndc.key
/dev/md1          20G  1.4G   17G   8% /var/named/chroot/usr/lib64/bind
/dev/md1          20G  1.4G   17G   8% /var/named/chroot/etc/named.iscdlv.key
/dev/md1          20G  1.4G   17G   8% /var/named/chroot/etc/named.root.key
/vz/private/106   12G   11G     0 100% /vz/root/106
none             1.0G   12K  1.0G   1% /vz/root/106/dev
/vz/private/106   12G   11G     0 100% /vz/root/106/var/named/chroot/etc/named
/vz/private/106   12G   11G     0 100% /vz/root/106/var/named/chroot/var/named
/vz/private/106   12G   11G     0 100% /vz/root/106/var/named/chroot/etc/named.rfc1912.zones
/vz/private/106   12G   11G     0 100% /vz/root/106/var/named/chroot/usr/lib64/bind
/vz/private/106   12G   11G     0 100% /vz/root/106/var/named/chroot/etc/named.iscdlv.key
/vz/private/106   12G   11G     0 100% /vz/root/106/var/named/chroot/etc/named.root.key
/vz/private/102   40G   39G     0 100% /vz/root/102
none             2.5G  4.0K  2.5G   1% /vz/root/102/dev
none             2.5G  8.0K  2.5G   1% /vz/root/102/dev/shm
/vz/private/102   40G   39G     0 100% /vz/root/102/var/named/chroot/etc/named
/vz/private/102   40G   39G     0 100% /vz/root/102/var/named/chroot/var/named
/vz/private/102   40G   39G     0 100% /vz/root/102/var/named/chroot/etc/named.rfc1912.zones
/vz/private/102   40G   39G     0 100% /vz/root/102/var/named/chroot/usr/lib64/bind
/vz/private/102   40G   39G     0 100% /vz/root/102/var/named/chroot/etc/named.iscdlv.key
/vz/private/102   40G   39G     0 100% /vz/root/102/var/named/chroot/etc/named.root.key
[root@ns393588 tatilgidiyor.com]#

alvaroag
29-05-2015, 20:23
That's an option. Another alternative is that one of your VMs is using too much space. Check this:

cd /vz
du -h --max-depth=2 -x

That should give you a clue of where your space is going. Also, you can post the output of "df -h".
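If the plain du listing is too noisy, a sorted variant surfaces the biggest directories first (assumes GNU du and sort, which the OVH CentOS images ship with):

```shell
# Sizes in KB, largest first, limited to this filesystem (-x) and two levels deep.
du -x --max-depth=2 /vz 2>/dev/null | sort -rn | head -n 15
```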

HostRange
29-05-2015, 17:32
Quote Originally Posted by shnslmz
Hello,
Emergency! /vz 100%

SolusVM has said:

--
What can I do?

Code:
# fdisk -l

Disk /dev/sdc: 120.0 GB, 120034123776 bytes
255 heads, 63 sectors/track, 14593 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00081ead

   Device Boot      Start         End      Blocks   Id  System
/dev/sdc1   *           1        2550    20478976   fd  Linux raid autodetect
/dev/sdc2            2550       14528    96211968   fd  Linux raid autodetect
/dev/sdc3           14528       14593      523264   82  Linux swap / Solaris

Disk /dev/sdb: 120.0 GB, 120034123776 bytes
255 heads, 63 sectors/track, 14593 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x000755a7

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1   *           1        2550    20478976   8e  Linux LVM
/dev/sdb2            2550       14528    96211968   fd  Linux raid autodetect
/dev/sdb3           14528       14593      523264   82  Linux swap / Solaris
/dev/sdb4           14593       14593        2016+  83  Linux

Disk /dev/sda: 120.0 GB, 120034123776 bytes
255 heads, 63 sectors/track, 14593 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00068df1

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *           1        2550    20478976   fd  Linux raid autodetect
/dev/sda2            2550       14528    96211968   fd  Linux raid autodetect
/dev/sda3           14528       14593      523264   82  Linux swap / Solaris

Disk /dev/md2: 98.5 GB, 98520989696 bytes
2 heads, 4 sectors/track, 24052976 cylinders
Units = cylinders of 8 * 512 = 4096 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000


Disk /dev/md1: 21.0 GB, 20970405888 bytes
2 heads, 4 sectors/track, 5119728 cylinders
Units = cylinders of 8 * 512 = 4096 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000
Did you set the disk partitions up correctly? https://documentation.solusvm.com/di...S/Partitioning

/: 10 GB
swap: 512 MB
/vz: the rest of the disk space
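Put together, that layout maps to something like the following /etc/fstab sketch. The md device names are taken from the df output earlier in the thread; the ext4 type and mount options are assumptions, and a real install would reference the filesystems by UUID rather than device name:

```
# /etc/fstab sketch of the recommended layout -- illustrative only
/dev/md1    /       ext4    defaults    1 1   # system (20G on this box; ~10G suggested)
/dev/md2    /vz     ext4    defaults    1 2   # container storage: the rest of the space
/dev/sda3   swap    swap    defaults    0 0   # one ~512M swap partition per disk
```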

shnslmz
29-05-2015, 14:30
Quote Originally Posted by marks
By default, servers with 3 disks are delivered in RAID5, which makes 2 disks' worth of space available (240 GB), while the 3rd disk gives you redundancy in case of a disk failure.

So you should already have that mounted.

Could you show us your "fdisk -l", please? Also, your server name or order number, so we can see if yours came with hardware RAID or software RAID.

If it's software RAID, you will have some flexibility in the partitioning, though the installation manager in the OVH Control Panel will force you to have at least the system drive in some sort of redundancy.

Thanks.
Hello,
Emergency! /vz 100%

SolusVM has said:

--
What can I do?

Code:
# fdisk -l
(output identical to the fdisk listing quoted above)

marks
12-05-2015, 12:31
By default, servers with 3 disks are delivered in RAID5, which makes 2 disks' worth of space available (240 GB), while the 3rd disk gives you redundancy in case of a disk failure.

So you should already have that mounted.

Could you show us your "fdisk -l", please? Also, your server name or order number, so we can see if yours came with hardware RAID or software RAID.

If it's software RAID, you will have some flexibility in the partitioning, though the installation manager in the OVH Control Panel will force you to have at least the system drive in some sort of redundancy.

Thanks.

shnslmz
11-05-2015, 13:25
Hello All,
I have bought a new server. It has 3 SSD disks (3 x 120 GB), and I have installed the OS. How can I mount the other two disks?

Best Regards.
Thank you!
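For completeness, the question above can also be answered without RAID by formatting and mounting each extra disk on its own. A sketch of that path; the device name /dev/sdb is an assumption, so verify with "lsblk" or "fdisk -l" first, and note the commented-out steps erase the disk:

```shell
# Mounting one extra disk without RAID (no redundancy for that disk!).
disk=/dev/sdb
mountpoint=/mnt/disk2
fstab_line="$disk $mountpoint ext4 defaults 0 2"

command -v lsblk >/dev/null 2>&1 && lsblk || true   # identify the unused disks first
echo "fstab entry would be: $fstab_line"

# The destructive steps, left commented out on purpose:
# mkfs.ext4 "$disk"                  # formats the whole disk -- erases everything on it
# mkdir -p "$mountpoint"
# mount "$disk" "$mountpoint"
# echo "$fstab_line" >> /etc/fstab   # make the mount survive reboots
```

Note this only adds separate mount points; it does not grow /vz itself, which is why the RAID5 reinstall suggested earlier in the thread is the cleaner fix for a full /vz.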