
SAS speed


vaarsn
02-10-2014, 22:56
I'll test it tomorrow on ESXi. Anyway, thanks for your help. I need to wait for the OVH techs to check everything for me and to answer my questions regarding migration from one datacenter to another.

mike_
02-10-2014, 22:51
I don't run CentOS on the bare metal, but on one of the VMs it boots in about 30 seconds. I haven't had a need to reboot the physical machine, so I have no idea how long that takes.

vaarsn
02-10-2014, 22:20
Thanks for the additional info. This is what I got for both disks:

1ST device:

[root@ns506634 ~]# smartctl -ia -d megaraid,4 /dev/sda
smartctl 5.43 2012-06-30 r3573 [x86_64-linux-3.10.23-xxxx-std-ipv6-64] (local build)
Copyright (C) 2002-12 by Bruce Allen, http://smartmontools.sourceforge.net

Vendor: SEAGATE
Product: ST9600205SS
Revision: 0002
User Capacity: 600,127,266,816 bytes [600 GB]
Logical block size: 512 bytes
Logical Unit id: 0x5000c5005956cbb3
Serial number: 6XR3A8Y90000B242E19T
Device type: disk
Transport protocol: SAS
Local Time is: Thu Oct 2 17:17:39 2014 EDT
Device supports SMART and is Enabled
Temperature Warning Enabled
SMART Health Status: OK

Current Drive Temperature: 32 C
Drive Trip Temperature: 68 C
Manufactured in week 32 of year 2012
Specified cycle count over device lifetime: 10000
Accumulated start-stop cycles: 140
Specified load-unload count over device lifetime: 300000
Accumulated load-unload cycles: 140
Elements in grown defect list: 1
Vendor (Seagate) cache information
Blocks sent to initiator = 3600400027
Blocks received from initiator = 574271389
Blocks read from cache and sent to initiator = 84978495
Number of read and write commands whose size <= segment size = 86819576
Number of read and write commands whose size > segment size = 68522
Vendor (Seagate/Hitachi) factory information
number of hours powered up = 7365.25
number of minutes until next internal SMART test = 21

Error counter log:
           Errors Corrected by           Total   Correction     Gigabytes    Total
               ECC          rereads/     errors   algorithm     processed    uncorrected
           fast | delayed   rewrites   corrected  invocations  [10^9 bytes]  errors
read:   3530685976        0         0  3530685976          0     14778.809        0
write:           0        0         0           0          0      6924.051        0
verify: 1470409571        0         0  1470409571          0      6280.279        0

Non-medium error count: 24

SMART Self-test log
Num  Test                Status           segment  LifeTime  LBA_first_err  [SK ASC ASQ]
     Description                          number   (hours)
# 1 Background short Completed - 7125 - [- - -]
# 2 Background short Completed - 7125 - [- - -]
# 3 Background short Completed - 7119 - [- - -]
# 4 Background short Completed - 7119 - [- - -]
# 5 Background short Completed - 7051 - [- - -]
# 6 Background short Completed - 7029 - [- - -]
# 7 Background short Completed - 7028 - [- - -]
# 8 Background short Completed - 7026 - [- - -]
# 9 Background short Completed - 7023 - [- - -]
#10 Background short Completed - 5292 - [- - -]
#11 Background short Completed - 5292 - [- - -]
#12 Background short Completed - 5284 - [- - -]
#13 Background short Completed - 5284 - [- - -]
#14 Background short Completed - 157 - [- - -]
#15 Background short Completed - 148 - [- - -]
#16 Background short Completed - 148 - [- - -]
#17 Background short Completed - 145 - [- - -]
#18 Background short Completed - 145 - [- - -]

Long (extended) Self Test duration: 5040 seconds [84.0 minutes]
2ND device:
[root@ns506634 ~]# smartctl -ia -d megaraid,5 /dev/sda
smartctl 5.43 2012-06-30 r3573 [x86_64-linux-3.10.23-xxxx-std-ipv6-64] (local build)
Copyright (C) 2002-12 by Bruce Allen, http://smartmontools.sourceforge.net

Vendor: SEAGATE
Product: ST9600205SS
Revision: 0002
User Capacity: 600,127,266,816 bytes [600 GB]
Logical block size: 512 bytes
Logical Unit id: 0x5000c5005956ad47
Serial number: 6XR3A8PM0000B304DAX9
Device type: disk
Transport protocol: SAS
Local Time is: Thu Oct 2 17:17:57 2014 EDT
Device supports SMART and is Enabled
Temperature Warning Enabled
SMART Health Status: OK

Current Drive Temperature: 34 C
Drive Trip Temperature: 68 C
Manufactured in week 32 of year 2012
Specified cycle count over device lifetime: 10000
Accumulated start-stop cycles: 140
Specified load-unload count over device lifetime: 300000
Accumulated load-unload cycles: 140
Elements in grown defect list: 0
Vendor (Seagate) cache information
Blocks sent to initiator = 542422431
Blocks received from initiator = 575053642
Blocks read from cache and sent to initiator = 116336400
Number of read and write commands whose size <= segment size = 88720614
Number of read and write commands whose size > segment size = 66025
Vendor (Seagate/Hitachi) factory information
number of hours powered up = 7365.33
number of minutes until next internal SMART test = 21

Error counter log:
           Errors Corrected by           Total   Correction     Gigabytes    Total
               ECC          rereads/     errors   algorithm     processed    uncorrected
           fast | delayed   rewrites   corrected  invocations  [10^9 bytes]  errors
read:     44106018        0         0    44106018          0     14624.240        0
write:           0        0         0           0          0      6925.025        0
verify:  337813594        0         0   337813594          0      6282.208        0

Non-medium error count: 12

SMART Self-test log
Num  Test                Status           segment  LifeTime  LBA_first_err  [SK ASC ASQ]
     Description                          number   (hours)
# 1 Background short Completed - 7125 - [- - -]
# 2 Background short Completed - 7125 - [- - -]
# 3 Background short Completed - 7119 - [- - -]
# 4 Background short Completed - 7119 - [- - -]
# 5 Background short Completed - 7051 - [- - -]
# 6 Background short Completed - 7029 - [- - -]
# 7 Background short Completed - 7028 - [- - -]
# 8 Background short Completed - 7026 - [- - -]
# 9 Background short Completed - 7023 - [- - -]
#10 Background short Completed - 5292 - [- - -]
#11 Background short Completed - 5292 - [- - -]
#12 Background short Completed - 5284 - [- - -]
#13 Background short Completed - 5284 - [- - -]
#14 Background short Completed - 157 - [- - -]
#15 Background short Completed - 148 - [- - -]
#16 Background short Completed - 148 - [- - -]
#17 Background short Completed - 145 - [- - -]
#18 Background short Completed - 145 - [- - -]

Long (extended) Self Test duration: 5040 seconds [84.0 minutes]
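(For completeness: both logs only contain background short self-tests. A long self-test per physical disk could be started with something like the following; just a sketch reusing the megaraid device IDs from above, with results appearing in the same self-test log afterwards.)

Code:
smartctl -t long -d megaraid,4 /dev/sda
smartctl -t long -d megaraid,5 /dev/sda
smartctl -l selftest -d megaraid,4 /dev/sda   # check results once the test finishes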
Also, what is your CentOS boot speed? If you can, please install it and let me know how long it takes to boot. On my local VM a reboot takes about 50 seconds, not 8 minutes.
Thanks

mike_
02-10-2014, 21:56
You'll probably need megacli. Then you can do:

Code:
megacli -PDlist -a0 | grep '^Device Id:'

Device Id: 6
Device Id: 7
You can then put those IDs into smartctl like so:

Code:
smartctl -ia -d megaraid,6 /dev/sda
smartctl -ia -d megaraid,7 /dev/sda
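If you want to check every drive behind the controller in one go, a small loop like this should work (a sketch, assuming megacli prints "Device Id:" lines as shown above):

Code:
for id in $(megacli -PDlist -a0 | awk '/^Device Id:/ {print $3}'); do
    echo "=== megaraid,$id ==="
    smartctl -ia -d megaraid,$id /dev/sda
done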
OVH staff do visit the forums regularly, but only during working hours on weekdays.

vaarsn
02-10-2014, 21:45
Quote Originally Posted by mike_
I'm a forum member just like you, I don't work for OVH so can't really comment on your second question.

Are you having any performance issues other than boot times? Have you checked the SMART status of your drives?
Are there any OVH workers here?
Regarding the SMART stats: it seems like my MegaRAID controller doesn't support SMART, which seems totally wrong by itself. How can a RAID controller not support SMART?

[root@ns506634 ~]# smartctl -a -d scsi /dev/sda
smartctl 5.43 2012-06-30 r3573 [x86_64-linux-3.10.23-xxxx-std-ipv6-64] (local build)
Copyright (C) 2002-12 by Bruce Allen, http://smartmontools.sourceforge.net
Vendor: SMC
Product: SMC2108
Revision: 2.13
User Capacity: 598,999,040,000 bytes [598 GB]
Logical block size: 512 bytes
Logical Unit id: 0x6003048003e962001bb3375fad678e7f
Serial number: 007f8e67ad5f37b31b0062e903800403
Device type: disk
Local Time is: Mon Sep 29 04:54:08 2014 EDT
Device does not support SMART
Error Counter logging not supported
Device does not support Self Test logging
Do you have the same message for your RAID?

mike_
02-10-2014, 21:38
I'm a forum member just like you, I don't work for OVH so can't really comment on your second question.

Are you having any performance issues other than boot times? Have you checked the SMART status of your drives?

vaarsn
02-10-2014, 21:29
Let me try that. Also, what about my other questions?
1. OS reinstallation speed (even rebooting a totally clean CentOS install takes about 8 minutes, which is very long)
2. Migration from the Canadian datacenter to the EU? As far as I know it's possible with your services.

mike_
02-10-2014, 21:13
Again, this mostly tests cached (buffered) write speed:

Code:
dd if=/dev/zero of=/tmp/output.img bs=8k count=256k
262144+0 records in
262144+0 records out
2147483648 bytes (2.1 GB) copied, 3.54013 s, 607 MB/s
This will give you a better indication of actual disk speed:

Code:
dd if=/dev/zero of=zero.bin bs=1M count=1024 conv=fdatasync
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB) copied, 6.75623 s, 159 MB/s
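To take the page cache out of the picture entirely, a direct-I/O variant of the same write test can be used (a sketch; oflag=direct bypasses the cache and needs a block-aligned block size such as 1M):

Code:
dd if=/dev/zero of=zero.bin bs=1M count=1024 oflag=direct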
Code:
wget http://mirror.anl.gov/pub/centos/6.5/isos/x86_64/CentOS-6.5-x86_64-LiveCD.iso -O /dev/null
--2014-10-02 21:08:06--  http://mirror.anl.gov/pub/centos/6.5/isos/x86_64/CentOS-6.5-x86_64-LiveCD.iso
Resolving mirror.anl.gov (mirror.anl.gov)... 2620:0:dc0:1800:214:4fff:fe7d:1b9, 146.137.96.7
Connecting to mirror.anl.gov (mirror.anl.gov)|2620:0:dc0:1800:214:4fff:fe7d:1b9|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 680525824 (649M) [application/octet-stream]
Saving to: `/dev/null'

100%[===============================================================================================================================================================================================================================================>] 680,525,824 10.8M/s   in 79s     

2014-10-02 21:09:25 (8.25 MB/s) - `/dev/null' saved [680525824/680525824]
10.8MB/sec seems fine to me as this is between a US mirror and an EU server.

Code:
 wget http://cdimage.debian.org/debian-cd/7.6.0/amd64/iso-cd/debian-7.6.0-amd64-CD-1.iso -O /dev/null
--2014-10-02 21:11:09--  http://cdimage.debian.org/debian-cd/7.6.0/amd64/iso-cd/debian-7.6.0-amd64-CD-1.iso
Resolving cdimage.debian.org (cdimage.debian.org)... 2001:6b0:e:2018::173, 2001:6b0:e:2018::163, 2001:6b0:e:2018::165, ...
Connecting to cdimage.debian.org (cdimage.debian.org)|2001:6b0:e:2018::173|:80... connected.
HTTP request sent, awaiting response... 302 Found
Location: http://saimei.acc.umu.se/debian-cd/7.6.0/amd64/iso-cd/debian-7.6.0-amd64-CD-1.iso [following]
--2014-10-02 21:11:09--  http://saimei.acc.umu.se/debian-cd/7.6.0/amd64/iso-cd/debian-7.6.0-amd64-CD-1.iso
Resolving saimei.acc.umu.se (saimei.acc.umu.se)... 2001:6b0:e:2018::138, 130.239.18.138
Connecting to saimei.acc.umu.se (saimei.acc.umu.se)|2001:6b0:e:2018::138|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 665845760 (635M) [application/x-iso9660-image]
Saving to: `/dev/null'

100%[===============================================================================================================================================================================================================================================>] 665,845,760 30.8M/s   in 19s     

2014-10-02 21:11:28 (34.1 MB/s) - `/dev/null' saved [665845760/665845760]
This is in line with the 250Mb connection the server is supposed to have.

Code:
wget http://rbx.proof.ovh.net/files/1Gb.dat -O /dev/null
--2014-10-02 21:12:15--  http://rbx.proof.ovh.net/files/1Gb.dat
Resolving rbx.proof.ovh.net (rbx.proof.ovh.net)... 2001:41d0:2:876a::1, 188.165.12.106
Connecting to rbx.proof.ovh.net (rbx.proof.ovh.net)|2001:41d0:2:876a::1|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 125000000 (119M) [application/octet-stream]
Saving to: `/dev/null'

100%[===============================================================================================================================================================================================================================================>] 125,000,000  106M/s   in 1.1s    

2014-10-02 21:12:16 (106 MB/s) - `/dev/null' saved [125000000/125000000]
Internal OVH transfer shows the NIC running at near enough full speed.

Code:
wget http://iso.esd.microsoft.com/W9TPI/FD54DF81A4CCF4511BA1445C606DDBA2/WindowsTechnicalPreview-x64-EN-US.iso -O /dev/null
--2014-10-02 21:15:08--  http://iso.esd.microsoft.com/W9TPI/FD54DF81A4CCF4511BA1445C606DDBA2/WindowsTechnicalPreview-x64-EN-US.iso
Resolving iso.esd.microsoft.com (iso.esd.microsoft.com)... 95.101.0.89, 95.101.0.90, 95.101.0.96, ...
Connecting to iso.esd.microsoft.com (iso.esd.microsoft.com)|95.101.0.89|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 4100497408 (3.8G) [application/octet-stream]
Saving to: `/dev/null'

100%[=============================================================================================================================================================================================================================================>] 4,100,497,408 45.9M/s   in 98s     

2014-10-02 21:16:47 (39.8 MB/s) - `/dev/null' saved [4100497408/4100497408]
An ISO from Microsoft. Decent speeds. Can you try some of the above? It may be that particular CentOS mirror is slow.
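If it helps, the same comparison can be scripted as a small loop over a few of the URLs above (a sketch; swap in whichever mirrors you want to test):

Code:
for url in \
  http://rbx.proof.ovh.net/files/1Gb.dat \
  http://mirror.anl.gov/pub/centos/6.5/isos/x86_64/CentOS-6.5-x86_64-LiveCD.iso; do
    echo "=== $url ==="
    # data is discarded; the summary line shows the average speed
    wget -O /dev/null "$url" 2>&1 | tail -n 2
done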

vaarsn
02-10-2014, 20:16
Thank you for the info, but that was only the first problem. Can you provide me with the output of:

dd if=/dev/zero of=/tmp/output.img bs=8k count=256k
Regarding the Internet connection issues, can you provide me with some download results? For example, on one of my Kimsufi servers I get the following:
root@newsr [~]# wget -O /dev/null http://mirror.anl.gov/pub/centos/6.5..._64-LiveCD.iso
--2014-10-02 21:06:23-- http://mirror.anl.gov/pub/centos/6.5..._64-LiveCD.iso
Resolving mirror.anl.gov... 146.137.96.7, 2620:0:dc0:1800:214:4fff:fe7d:1b9
Connecting to mirror.anl.gov|146.137.96.7|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 680525824 (649M) [application/octet-stream]
Saving to: “/dev/null”

34% [============> ] 235,303,552 26.3M/s eta 25s
On my second SYS server I get the following:

[root@ns506634 ~]# wget -O /dev/null http://mirror.anl.gov/pub/centos/6.5..._64-LiveCD.iso
--2014-10-02 15:06:51-- http://mirror.anl.gov/pub/centos/6.5..._64-LiveCD.iso
Resolving mirror.anl.gov... 2620:0:dc0:1800:214:4fff:fe7d:1b9, 146.137.96.7
Connecting to mirror.anl.gov|2620:0:dc0:1800:214:4fff:fe7d:1b9|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 680525824 (649M) [application/octet-stream]
Saving to: “/dev/null”

100%[==================================================================================================================================================================================================>] 680,525,824 8.39M/s in 82s

I checked my problem server with the
mtr --report 8.8.8.8
tool, and I see a lot of intermediate hops showing 100% packet loss. As I said before, I'm downloading different images from different locations, and it takes me about 3 hours to upload an ISO image from Microsoft MSDN or from my local 1 Gb network to the ESXi host with you. It's incredibly slow. Also, a simple OS reinstallation (e.g. CentOS) via the SYS panel used to take about 4 minutes; now it takes about 12 minutes for the same procedure.
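(On the mtr result: a longer run, and ideally TCP probes toward the actual mirror, make it easier to tell real loss from intermediate routers that simply rate-limit ICMP. A sketch, assuming an mtr build with TCP support:)

Code:
mtr --report --report-cycles 100 8.8.8.8
mtr --report --report-cycles 100 --tcp --port 80 mirror.anl.gov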
My other question: is it possible to close my hosting in the Canadian datacenter and continue in the European one? I'm ready to wait until a new server becomes available in an EU datacenter.

Criot
02-10-2014, 20:11
Quote Originally Posted by vaarsn
Do you feel the difference between a 4-year-old SATA at 11507.74 MB/sec and a new SAS at 6780.54 MB/sec? And the same SAS elsewhere at 14097.01 MB/sec??
Where do you see similar output?
11507.74 - 6780.54 = 4727.2, and that's only the difference against the OLD HDD.
The difference is almost twofold. What are you talking about? Can somebody just switch my server to the EU region? I don't want this no-name, unstable server anymore!
As already said, you're looking at cached reads when you should be looking at buffered reads. Cached reads generally come straight from your RAM, since Linux caches data in unused RAM to improve performance, so the speed of the RAM itself is a factor in how fast or slow those numbers look.

What you need to pay attention to is the buffered reads, and those look fine.
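A quick way to see the difference yourself (a sketch; run as root, it drops the page cache and then reads straight from the disk with direct I/O):

Code:
sync && echo 3 > /proc/sys/vm/drop_caches
dd if=/dev/sda of=/dev/null bs=1M count=1024 iflag=direct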

mike_
02-10-2014, 19:08
You're paying too much attention to the cached reads, which aren't really an indication of disk performance. The buffered read test, which is a better indicator, is actually faster on the SYS SAS server than on the one from the other provider.

This is from the hdparm manual:

Quote Originally Posted by hdparm -T
Perform timings of cache reads for benchmark and comparison purposes. For meaningful results, this operation should be repeated 2-3 times on an otherwise inactive system (no other active processes) with at least a couple of megabytes of free memory. This displays the speed of reading directly from the Linux buffer cache without disk access. This measurement is essentially an indication of the throughput of the processor, cache, and memory of the system under test.
And assuming you got the SYS-IP-4S, which has the same CPU and memory as my SYS-IP4, your cached read speeds seem completely normal:

Code:
hdparm -tT /dev/sda

/dev/sda:
 Timing cached reads:   12220 MB in  2.00 seconds = 6115.03 MB/sec
 Timing buffered disk reads: 498 MB in  3.01 seconds = 165.41 MB/sec
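As the manual says, the numbers are more meaningful if you repeat the run a few times on an otherwise idle box, e.g.:

Code:
for i in 1 2 3; do hdparm -tT /dev/sda; done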

vaarsn
02-10-2014, 18:27
Do you feel the difference between a 4-year-old SATA at 11507.74 MB/sec and a new SAS at 6780.54 MB/sec? And the same SAS elsewhere at 14097.01 MB/sec??
Where do you see similar output?
11507.74 - 6780.54 = 4727.2, and that's only the difference against the OLD HDD.
The difference is almost twofold. What are you talking about? Can somebody just switch my server to the EU region? I don't want this no-name, unstable server anymore!

Criot
02-10-2014, 18:20
Wait, your results show that your SAS drives with the other provider give similar output to the ones with OVH. So what's the issue?

If they're SAS 10000rpm drives, then they won't perform that much better than standard HDDs.

vaarsn
02-10-2014, 18:18
Can anyone help me or not?!

vaarsn
01-10-2014, 10:07
I can't reach your international phone number anymore! My issue is still without any resolution. Nobody has investigated it! What is going on?! For two weeks I haven't been able to work because my server has poor performance. Every day you promise to check it and resolve my issue. I need a refund! Give me a refund!

vaarsn
30-09-2014, 15:52
Still waiting, all day. No result from your tech support. What is going on?

vaarsn
30-09-2014, 12:16
Hello,

Can you send me a private message here? I'll provide my SYS customer email in reply; I don't want to post my email here. After 3 days there is still no progress. I don't want to use such a sick server. I want a new server with healthy hardware.

TomOVH
30-09-2014, 11:54
Hello,

I am currently looking into this. So that I can update you, what address did you send the email to? I can't seem to find it in our SYS support panel.

vaarsn
29-09-2014, 20:46
Guys, I wrote a support request and called your support. Still no result. Your techs are just checking something, but nothing has happened in my open SSH session. When will it be fixed? Otherwise I just want a refund.

vaarsn
27-09-2014, 13:45
Hello,


1)

My server is: BHS1 - Rack: T02C42 - Server ID: 229792

Recently I bought a new server with SAS disks and hardware RAID, and it seems like its speed is very slow. I compared it with 3 of my other servers: one is also hosted on SoYouStart (ds151218-sys) and uses soft RAID with SATA disks, another is a Kimsufi server with RAID1 on SATA disks, and the third is a server with another provider that also has SAS disks.
So this is what I got on a clean CentOS install:

1 SYS server based on SAS:
# hdparm -tT /dev/sda
/dev/sda:
Timing cached reads: 13548 MB in 2.00 seconds = 6780.54 MB/sec
Timing buffered disk reads: 480 MB in 3.01 seconds = 159.63 MB/sec
2 SYS server based on SATA:
# hdparm -tT /dev/sda
/dev/sda:
Timing cached reads: 22980 MB in 2.00 seconds = 11507.74 MB/sec
Timing buffered disk reads: 418 MB in 3.00 seconds = 139.30 MB/sec
3 Kimsufi server based on SATA:
# hdparm -tT /dev/sda
/dev/sda:
Timing cached reads: 22768 MB in 2.00 seconds = 11407.97 MB/sec
Timing buffered disk reads: 344 MB in 3.01 seconds = 114.14 MB/sec
4 Server based on SAS with another provider:
root@ets [~]# hdparm -tT /dev/sda
/dev/sda:
Timing cached reads: 28146 MB in 2.00 seconds = 14097.01 MB/sec
Timing buffered disk reads: 460 MB in 3.00 seconds = 153.15 MB/sec
As you can see, for some reason my first server with its SAS disks is very slow, even compared with the old Kimsufi server. Also, I experienced some problems installing new OSs with SoYouStart yesterday: for 2-3 hours I was unable to install any new system and I couldn't access my new ESXi control panel. My current CLEAN CentOS is very slow; if I reboot it, I need to wait about 5 minutes before I can log in again.
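For a read benchmark that is less sensitive to RAM speed than hdparm -T, something like fio could be run on each box (just a sketch, assuming fio is available, e.g. from EPEL; the test file name is arbitrary):

Code:
fio --name=seqread --filename=/root/fio.test --rw=read --bs=1M --size=2G --direct=1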

2)

Also, I have a problem with ESXi and the internet connection. I tried to download a Windows ISO (about 3 GB) from MSDN and it took about 5 hours, then I tried to download a CentOS image (about 250 MB) directly from several mirrors and it took 12 minutes. I reinstalled OSs about 10 times; I tried CentOS 6.5, CentOS 7, VMware 5.1 and 5.5, and Citrix. The speed is the same everywhere. I didn't get more than 629 KB/s of download from about 7 different mirrors, even from Canada where my server is located (meanwhile I'm getting about 18 Mb/s on my other servers, even the old ones). I thought OVH was a respectable host. So what did you palm off on me? Guys, fix both of these issues or I'll ask for a refund.

Also I can't find my server on http://status.ovh.co.uk/vms/index_bhs1.html