
HDDs failing & OVH's terrible support


Neil
26-03-2014, 10:47
Hi

You would need to speak to your local support about this; I cannot comment on the server and contracts in place, as it was not through OVH.co.uk.

raxxeh
25-03-2014, 19:33
Quote Originally Posted by Neil
Would need to check, but your server never had unlimited traffic; it had 10Gbps with a 40TB limit. You can check this with archive.org, as it was listed on the website. These limits were removed, and when you renewed the server you would have had to accept the new unlimited bandwidth with a lower average speed. I assume you must be using far more than 40TB a month then.
I am now, because I could never get it reverted.

I accepted 1.5E as it ended up being better in the long run. Being taken down to 200Mbit, however, is not better in any situation, as it makes "10Gbps" pointless.

This new change however, was never presented to me at any point. It just happened, and now I'm being told that this is intended and acceptable.

I wonder if it would be acceptable for you if I only paid for 5% of the server? 2% of the speed, and 3% for the hardware?

Neil
25-03-2014, 16:59
Quote Originally Posted by raxxeh
I am contacting my local support.

I am ensuring potential customers know about the workings of OVH, specifically how they like to change servers and contracts with no notification to clients who have spent €8,000 on a single server.
Would need to check, but your server never had unlimited traffic; it had 10Gbps with a 40TB limit. You can check this with archive.org, as it was listed on the website. These limits were removed, and when you renewed the server you would have had to accept the new unlimited bandwidth with a lower average speed. I assume you must be using far more than 40TB a month then.
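
For context on the 40TB figure, a quick back-of-the-envelope conversion from a sustained average speed to monthly transfer — a minimal sketch using numbers from this thread; `tb_per_month` is an invented helper name, not any provider's tooling:

```python
def tb_per_month(avg_mbps, days=30):
    """Convert a sustained average speed in megabits/s into decimal
    terabytes transferred over a billing month."""
    bytes_per_sec = avg_mbps * 1e6 / 8            # Mbps -> bytes/s
    return bytes_per_sec * days * 86400 / 1e12    # bytes per month -> TB

# The ~800 Mbps average mentioned elsewhere in the thread:
print(round(tb_per_month(800), 1))  # -> 259.2
```

So a sustained 800 Mbps average works out to roughly six times the old 40TB cap, which is consistent with Neil's assumption.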

raxxeh
25-03-2014, 15:29
I am contacting my local support.

I am ensuring potential customers know about the workings of OVH, specifically how they like to change servers and contracts with no notification to clients who have spent €8,000 on a single server.

Neil
25-03-2014, 11:41
Quote Originally Posted by raxxeh
Good lord.

The saga continues. Limited a 10Gbps server to 200Mbps without warning or explanation.

(usage was not unrealistic either: a couple of 2Gbps bursts over the last couple of days, about 800Mbps average, which is below the 1.5Gbps limit...)

I just don't even know what to say.

I hope anyone who is running a serious business thinks twice before using OVH. Jeez, I used to love them, but their support has shown me that they shouldn't be claiming to offer anything in the state the company is in.
Contact your local support about your issue, most likely you will need to upgrade your bandwidth package.

raxxeh
24-03-2014, 22:54
Good lord.

The saga continues. Limited a 10Gbps server to 200Mbps without warning or explanation.

(usage was not unrealistic either: a couple of 2Gbps bursts over the last couple of days, about 800Mbps average, which is below the 1.5Gbps limit...)

I just don't even know what to say.

I hope anyone who is running a serious business thinks twice before using OVH. Jeez, I used to love them, but their support has shown me that they shouldn't be claiming to offer anything in the state the company is in.

raxxeh
22-03-2014, 08:45
Quote Originally Posted by Trapper
raxxeh,

I did not spend long on this, (and I do not know your exact requirements,) but at LeaseWeb I found I could have:

2x Xeon E5645 (Hex)
32GB DDR3
12x3TB SATA2

Setup 50E
Monthly 344E

I did not look at the network connection(s), or any other specs really; as I said, you know what you need better than I do...

... Have a look at their custom series, if not for now, at least you know what you could do later...

So far I have not had a HDD fail over at LW |touches wood| but on the two occasions I have needed support it has been fast and "accurate". No need to email / phone / poke / tweet / generally be a pain in the a*se, to get something done.

The other option could be colo - you then get to choose when to replace your disks. - I have never used colo, but it sounds like a good option for a max-spec machine.

~Trap
Cheers; I'm going to send a few emails to see what non-public deals I can get.


Colo isn't an option for me, as I'm in Australia and don't have any tech-savvy friends in Europe I could buy hardware and have it delivered to for them to configure, but I still need the traffic no more than 60ms from the bulk of Europe.

I have had a LW box before and it was pretty good, albeit a pre-configured public offering that wasn't that quick, and OVH's prices for what they give (at least for this specific server) are extremely hard to beat, although what you quoted gets damn close.

Beginning to realize that the extra money for support is worth more than saving a buck.

Stupid me never considered custom builds from various providers... fishing through that now.

Thanks

Trapper
20-03-2014, 18:28
raxxeh,

I did not spend long on this, (and I do not know your exact requirements,) but at LeaseWeb I found I could have:

2x Xeon E5645 (Hex)
32GB DDR3
12x3TB SATA2

Setup 50E
Monthly 344E

I did not look at the network connection(s), or any other specs really; as I said, you know what you need better than I do...

... Have a look at their custom series, if not for now, at least you know what you could do later...

So far I have not had a HDD fail over at LW |touches wood| but on the two occasions I have needed support it has been fast and "accurate". No need to email / phone / poke / tweet / generally be a pain in the a*se, to get something done.

The other option could be colo - you then get to choose when to replace your disks. - I have never used colo, but it sounds like a good option for a max-spec machine.

~Trap

raxxeh
20-03-2014, 02:24
Quote Originally Posted by Trapper
Hear, hear...

I have been spouting this for quite a while and getting nowhere.

I could not get disks looked at properly either; the answer seems to be to return the server and take another. This has three problems, of course:

1. Turnover. No need to say more on that.
2. Hassle of transfer every time you need to change server because of something they won't fix.
3. Eventually, after doing this loop a few times, you realise you are better off elsewhere.

For me number three has worked. I get a server with reasonable support for the same money. YES - I get less RAM, but when 60 to 75% of the RAM I have at OVH stands idle, does that really matter?

~Trap
Any recommendations for 12x3TB, 32GB RAM, 2x quad-core 2.5GHz+, 2x1Gbit connections?

I haven't been able to find anything similar in the €300 range elsewhere, and I'm reluctant to move because of the quantity of disks... but they did replace the 2 worst drives. I'm still getting IO pauses on one of them, but the rest seem to be doing 'okay' now; they won't last though...

Trapper
19-03-2014, 12:56
Quote Originally Posted by Andy
...Personally I think OVH need to stop focusing on the specs of servers (insane RAM for example, unless you're paying real money for it) and focus more on support instead. How many people really need servers with 16GB+ RAM? Those that do are probably more than willing to pay for it.
Hear, hear...

I have been spouting this for quite a while and getting nowhere.

I could not get disks looked at properly either; the answer seems to be to return the server and take another. This has three problems, of course:

1. Turnover. No need to say more on that.
2. Hassle of transfer every time you need to change server because of something they won't fix.
3. Eventually, after doing this loop a few times, you realise you are better off elsewhere.

For me number three has worked. I get a server with reasonable support for the same money. YES - I get less RAM, but when 60 to 75% of the RAM I have at OVH stands idle, does that really matter?

~Trap

Andy
19-03-2014, 11:47
Probably, but again it comes down to being a budget company. It'll be a "If it works, who cares about performance?" attitude I expect.

Personally I think OVH need to stop focusing on the specs of servers (insane RAM for example, unless you're paying real money for it) and focus more on support instead. How many people really need servers with 16GB+ RAM? Those that do are probably more than willing to pay for it.

raxxeh
19-03-2014, 10:19
Surely they realize mismatched disks are terrible for RAID, right?

I just don't even know. Do the people working in the datacentre not know what they should?

Andy
19-03-2014, 10:04
I saw your original reply via e-mail, sigh indeed =/

raxxeh
19-03-2014, 09:41
.... sigh.

Andy
19-03-2014, 00:28
In my case I cancelled the server but they still added it on instead of refunding me... Not on, IMO, since it does specifically say "refund", which I even mentioned to them and got ignored...

raxxeh
19-03-2014, 00:00
Quote Originally Posted by Andy
I'd still ask for your SLA even if you do get the disks replaced. You've pretty much wasted a week of rental on this server due to the issue so I think you're entitled to something. Don't expect the money back either, they'll just give it to you in additional pro-rata rental time on the server if mine was anything to go by...
That's what I meant; I've claimed SLA twice in 4 years, and both times they just added a full 30 days to the server.

As far as I'm concerned, that's just pocketing €300 for me, as I get to skip a renewal date. :P

Andy
18-03-2014, 23:57
I'd still ask for your SLA even if you do get the disks replaced. You've pretty much wasted a week of rental on this server due to the issue so I think you're entitled to something. Don't expect the money back either, they'll just give it to you in additional pro-rata rental time on the server if mine was anything to go by...

raxxeh
18-03-2014, 23:47
Quote Originally Posted by Andy
Now you're seeing why I left. If you need priority support and guaranteed uptime, OVH isn't the place. OVH is a budget company.
You're right of course.

Still, going to claim SLA; if I can get €300 back because of the lack of support, it'll make me slightly less jaded.

Then again, if I get what I want and all disks replaced (doesn't look like it's going to happen; they ran tests on all disks and then extra tests on a couple of disks, ignoring the rest), I'll just leave it be.

Andy
18-03-2014, 23:22
Now you're seeing why I left. If you need priority support and guaranteed uptime, OVH isn't the place. OVH is a budget company.

raxxeh
18-03-2014, 22:35
Ugh, I knew I should have just eaten the higher cost of a new server somewhere else.

18 hours ago:
Dear customer,

Your issue is under verification, you will have more
details later on this ticket.

Best regards,

No other information, no timeframe, no update as to what the verification is, nothing.

The customer support presented to customers who spend over €1,000 a month with a company is shocking.

Andy
14-03-2014, 17:18
OVH don't have enough support staff to give the required support, that's how they keep prices so cheap, but it's also their major downfall. They don't even have 24/7 customer support for what we would consider mission critical servers. If you buy cheap, you get what you pay for in terms of support and hardware.

NeddySeagoon
14-03-2014, 17:09
Myatu,

That's the first read error that the drive knows about. Sure, you can fix it by writing and forcing a sector reallocation ... for that sector, but your data is lost.
There may well be (lots of) other unreadable sectors you don't know about yet too.

I would stop using a drive that can no longer read its own writing as soon as the RMA replacement arrived.
I've not had any problems RMAing such drives, and I suspect that OVH doesn't either. They are just being parsimonious with their support man-hours.

raxxeh
12-03-2014, 13:36
Quote Originally Posted by Neil
We would only replace disks that are failing or have failed. If you have logs showing that they have failed or are going to fail, then just open a ticket in the OVH Manager with these details; make sure you provide the serial numbers and confirm you have a backup.
OK, can you clarify OVH's stance on "failed or going to fail"?

With my data storage servers that are in my house, once a disk reports a failed sector I replace it in the zfs pool with a brand-new disk and RMA the old one. That is my threshold for a failed or failing disk, as I cannot afford to have data loss.

This means most disks are replaced (for me, on my local physical servers) by the time they tick over to 3 years, although I still have some that are older chugging along without a hiccup at all.

All symptoms I am dealing with right now would result in immediate power down and disk replacement, as the performance degradation is extreme.


Normally I wouldn't have to ask this, but as everyone else here has experienced, it has been easier to buy a NEW server, or just leave OVH altogether. I don't want to do either.

Thanks Neil.

Neil
12-03-2014, 13:28
Quote Originally Posted by raxxeh
5 disks have completed SMART checks; 2 of them have bad sectors and reallocated sectors, but SMART still says the disk is fine.

Both disks with reallocated sectors do less than 60MB/s in dd read/write tests, with all processes turned off after a reboot (clean system).

3 disks report 0 sector reallocations and 0 bad sectors, and passed SMART checks; 1 disk does 160MB/s with dd (read+write), 2 disks 80MB/s. Again, clean system.

Of the remaining 7 disks, 4 of them are still at 60% remaining on the SMART test (not under high load either), 2 are at 40% and 30% respectively, and one is sitting at 10% and hasn't moved in over a day.

I still don't think that SMART long tests should take two days...


...Marks, what are the chances of getting you guys to replace all the disks (all at once) once all these checks have finished?

I have backups of the required data.
We would only replace disks that are failing or have failed. If you have logs showing that they have failed or are going to fail, then just open a ticket in the OVH Manager with these details; make sure you provide the serial numbers and confirm you have a backup.

raxxeh
12-03-2014, 13:15
5 disks have completed SMART checks; 2 of them have bad sectors and reallocated sectors, but SMART still says the disk is fine.

Both disks with reallocated sectors do less than 60MB/s in dd read/write tests, with all processes turned off after a reboot (clean system).

3 disks report 0 sector reallocations and 0 bad sectors, and passed SMART checks; 1 disk does 160MB/s with dd (read+write), 2 disks 80MB/s. Again, clean system.

Of the remaining 7 disks, 4 of them are still at 60% remaining on the SMART test (not under high load either), 2 are at 40% and 30% respectively, and one is sitting at 10% and hasn't moved in over a day.

I still don't think that SMART long tests should take two days...
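
The dd-style read tests mentioned above can be approximated with a short script. This is a rough sketch, not OVH or smartmontools tooling, and `read_throughput_mb_s` is an invented name; run it against a raw device (e.g. /dev/sda) as root on an otherwise idle system, and note the page cache will inflate repeat runs against a file:

```python
import time

def read_throughput_mb_s(path, chunk_mb=1, limit_mb=256):
    """Sequential-read benchmark in the spirit of
    `dd if=<path> of=/dev/null bs=1M`: read fixed-size chunks,
    time the reads, and report MB/s."""
    chunk = chunk_mb * 1024 * 1024
    total = 0
    start = time.perf_counter()
    with open(path, 'rb') as f:
        while total < limit_mb * 1024 * 1024:
            data = f.read(chunk)
            if not data:          # end of file/device
                break
            total += len(data)
    elapsed = time.perf_counter() - start
    return (total / (1024 * 1024)) / max(elapsed, 1e-9)
```

A drive with remapped sectors will often show up here as a much lower sustained figure than its siblings, which is exactly the 60MB/s-vs-160MB/s spread described above.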


...Marks, what are the chances of getting you guys to replace all the disks (all at once) once all these checks have finished?

I have backups of the required data.

Myatu
12-03-2014, 01:19
Quote Originally Posted by NeddySeagoon
Code:
Error: UNC at LBA = 0x02d88e3b = 47746619
Is a dead drive. The drive can no longer read the data at that location.
Only if you cannot write to it, and then it's only one sector out of a few million. People should stop going into panic mode about their HDD failing when it isn't...

raxxeh
12-03-2014, 01:09
Quote Originally Posted by NeddySeagoon
raxxeh,

Code:
Error: UNC at LBA = 0x02d88e3b = 47746619
Is a dead drive. The drive can no longer read the data at that location.

The SMART long test is like
Code:
dd if=/dev/sdX of=/dev/null
How long it takes depends on what else the drive is doing. It bails out at the first error. If that error was caused by the system accessing the drive, you may have a dmesg entry too. In a RAID > 0 setup, system operation will continue.

Remapped sectors can do horrible things to your drive throughput, as logically contiguous blocks are no longer physically contiguous on the drive surface. Unfortunately, drives are supposed to remap failing sectors before they fail. In your case that didn't happen for the sector above.
Cheers for the second voice confirming. I think I'll wait for these SMART tests to complete before making a ticket, as they're probably going to make me run them anyway.

Now the challenge will be getting them to do all disks in one hit.

NeddySeagoon
11-03-2014, 19:49
raxxeh,

Code:
Error: UNC at LBA = 0x02d88e3b = 47746619
Is a dead drive. The drive can no longer read the data at that location.

The SMART long test is like
Code:
dd if=/dev/sdX of=/dev/null
How long it takes depends on what else the drive is doing. It bails out at the first error. If that error was caused by the system accessing the drive, you may have a dmesg entry too. In a RAID > 0 setup, system operation will continue.

Remapped sectors can do horrible things to your drive throughput, as logically contiguous blocks are no longer physically contiguous on the drive surface. Unfortunately, drives are supposed to remap failing sectors before they fail. In your case that didn't happen for the sector above.

Andy
11-03-2014, 16:23
That alone makes me want to look elsewhere. It's as if they just put all the bad disks they found into the server... Unfortunately, where I've gone doesn't offer servers with that many disks, at least not yet.

raxxeh
11-03-2014, 16:13
Quote Originally Posted by Andy
Don't expect to get them replaced without a fight. See the link in my sig for my experience... It took me around 3 days with a downed server for them to replace it.
I'm well aware of the stress I'm heading towards.

I would just leave if it wasn't for the fact that this server can't be beaten anywhere, not with OVH's new plans, and not at any other datacentre.

Just hoping for extra opinions before I decide to dance with the devil.

As for progress, we're passing 24hours and 7 disks are still between 50 and 80% remaining on smart long tests.

I'm fairly sure they are all cooked.

Andy
11-03-2014, 14:06
Yeah. After 3 failed HDDs in a couple of years, each with the same experience, I chose to move providers. I had a disk failure there and it was replaced in 1hr 30mins; 8 mins total downtime thanks to h/w RAID1.

ctype_alnum
11-03-2014, 13:38
Quote Originally Posted by Andy
Don't expect to get them replaced without a fight. See the link in my sig for my experience... It took me around 3 days with a downed server for them to replace it.
That is terrible.

Andy
11-03-2014, 13:11
Don't expect to get them replaced without a fight. See the link in my sig for my experience... It took me around 3 days with a downed server for them to replace it.

raxxeh
10-03-2014, 23:51
Server's in the process of running long SMART tests, about to hit 12 hours, and most of them are reporting that there is still 90% of the test remaining to complete. When they complete I'll post the rest up, as this will probably be relevant.


Code:
SMART Self-test log structure revision number 1
Num  Test_Description    Status                  Remaining  LifeTime(hours)  LBA_of_first_error
# 1  Extended offline    Self-test routine in progress 90%     13579         -
# 2  Short offline       Interrupted (host reset)      00%         8         -
# 3  Short offline       Aborted by host               10%         8         -
# 4  Short offline       Completed without error       00%         5         -
# 5  Short offline       Completed without error       00%         0         -

I do have an awful lot of these on a few of the disks though; only 3 of them don't have it, but those 3 disks only read/write at about 60MB/s vs others doing 130+ (using dd, with ext4 as the fs).


Do note the actual disk age is somewhere around 530 days, hence the last time I complained about this and was ignored. It also seems that OVH's rescue system doesn't work on these servers: you have to use special commands for SMART checks to work (smartctl -a -i /dev/sda -d megaraid,5), which the rescue platform doesn't execute.

Code:
Error 36 occurred at disk power-on lifetime: 2256 hours (94 days + 0 hours)
  When the command that caused the error occurred, the device was active or idle.

  After command completion occurred, registers were:
  ER ST SC SN CL CH DH
  -- -- -- -- -- -- --
  40 51 00 3b 8e d8 02  Error: UNC at LBA = 0x02d88e3b = 47746619

  Commands leading to the command that caused the error were:
  CR FR SC SN CL CH DH DC   Powered_Up_Time  Command/Feature_Name
  -- -- -- -- -- -- -- --  ----------------  --------------------
  42 00 00 3b 8e d8 42 00   7d+04:21:49.441  READ VERIFY SECTOR(S) EXT
  61 00 08 b9 67 1b 49 00   7d+04:21:49.398  WRITE FPDMA QUEUED
  61 00 08 91 67 1b 49 00   7d+04:21:49.398  WRITE FPDMA QUEUED
  61 00 08 b9 66 1b 49 00   7d+04:21:49.398  WRITE FPDMA QUEUED
  61 00 09 80 66 1b 49 00   7d+04:21:49.398  WRITE FPDMA QUEUED

NeddySeagoon
10-03-2014, 20:29
raxxeh,

Beware the raw data. There may be several raw values packed into a single raw field in the SMART data, so mind-bogglingly big raw values can be quite OK. Look at the drive vendor's website to see what the numbers mean.
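
As an example of such packing, one decoding that is often reported for some Seagate drives — an assumption here, not confirmed for the drives in this thread, so check the vendor's documentation — treats attributes like Seek_Error_Rate as a 48-bit raw value with an error count in the upper 16 bits and an operation count in the lower 32 bits:

```python
def split_seagate_raw(raw):
    """Split a SMART raw value into (upper bits above 32, lower 32 bits).
    ASSUMPTION: the Seagate-style 48-bit layout (errors in the top 16
    bits, operations in the bottom 32) is vendor-specific and may not
    apply to the drives discussed in this thread."""
    return raw >> 32, raw & 0xFFFFFFFF

# A Seek_Error_Rate raw value from one of the disks in this thread:
print(split_seagate_raw(56319246763))  # -> (13, 484671915)
```

Under that reading, a raw value of 56 billion would not mean 56 billion seek errors; the normalized VALUE/WORST/THRESH columns remain the figures SMART actually judges the drive by.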

dmesg is a good indicator. Things like
Code:
[415790.257074] ata1.00: exception Emask 0x0 SAct 0xfff SErr 0x0 action 0x0
[415790.257080] ata1.00: irq_stat 0x40000008
[415790.257093] ata1.00: cmd 60/08:58:08:d4:f4/00:00:bd:00:00/40 tag 11 ncq 4096 in
[415790.257095]          res 41/40:00:08:d4:f4/00:00:bd:00:00/40 Emask 0x409 (media error) 
[415790.266899] ata1.00: configured for UDMA/133
[415790.266933] ata1: EH complete
are a very bad sign, although this can also be caused by a failing data cable.

The pending sector count being non-zero is also a very bad sign. That's a count of the sectors the drive knows it can't read; there may be many more.

Don't skimp on the bandwidth. Post the whole lot. Use wgetpaste if your distro has it.
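
A sketch of the triage being described: scan `smartctl -A` output for attributes whose raw value should stay at zero on a healthy drive. The function name and the attribute shortlist are my own choices, not an OVH or smartmontools tool:

```python
# Attributes whose raw value should be zero on a healthy drive.
CRITICAL = {'Reallocated_Sector_Ct', 'Current_Pending_Sector',
            'Offline_Uncorrectable'}

def worrying_attributes(smartctl_output):
    """Return {attribute_name: raw_value} for critical attributes in
    `smartctl -A`-style text whose raw value is non-zero."""
    found = {}
    for line in smartctl_output.splitlines():
        f = line.split()
        # smartctl -A rows have 10 columns; RAW_VALUE is the last one.
        if len(f) >= 10 and f[1] in CRITICAL:
            try:
                raw = int(f[9])
            except ValueError:
                continue  # some vendors append text to the raw field
            if raw:
                found[f[1]] = raw
    return found

# A row from one of the tables in this thread (16 reallocated sectors):
row = "  5 Reallocated_Sector_Ct   0x0033   100   100   036    Pre-fail  Always       -       16"
print(worrying_attributes(row))  # -> {'Reallocated_Sector_Ct': 16}
```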

marks
10-03-2014, 19:10
So I guess that you don't have any other errors apart from this one? No error on the overall check result.

You're welcome to open a ticket with the logs.

raxxeh
10-03-2014, 06:35
Hey guys. I'm after a second opinion before I go and attempt to jump off a bridge with OVH's support, since we all know just how bad it actually is.

The server I've had is coming up on 2 years old now, and I've been having issues with IO; sometimes it will take ages for a request to complete, causing everything to hang/slow down.

Now, only half of these disks are in RAID together (the others are single disks), but the entire server is beginning to slow to a crawl while it's under load, and the massive raw read error and seek error rates have me concerned, especially when none of my other disks (that are older) have anything like it.

So what I'm asking for is a second voice, should I open a ticket and prepare to run in circles for 3 weeks trying to get the disks replaced?

I have 12 disks, and these are the worrying stats for each (SMART data):

Code:
ID# ATTRIBUTE_NAME          FLAG     VALUE WORST THRESH TYPE      UPDATED  WHEN_FAILED RAW_VALUE
  1 Raw_Read_Error_Rate     0x000f   116   099   006    Pre-fail  Always       -       106723120
  3 Spin_Up_Time            0x0003   092   092   000    Pre-fail  Always       -       0
  4 Start_Stop_Count        0x0032   100   100   020    Old_age   Always       -       20
  5 Reallocated_Sector_Ct   0x0033   100   100   036    Pre-fail  Always       -       0
  7 Seek_Error_Rate         0x000f   076   051   030    Pre-fail  Always       -       56319246763
  9 Power_On_Hours          0x0032   085   085   000    Old_age   Always       -       13561
Code:
ID# ATTRIBUTE_NAME          FLAG     VALUE WORST THRESH TYPE      UPDATED  WHEN_FAILED RAW_VALUE
  1 Raw_Read_Error_Rate     0x000f   109   099   006    Pre-fail  Always       -       21820768
  3 Spin_Up_Time            0x0003   092   092   000    Pre-fail  Always       -       0
  4 Start_Stop_Count        0x0032   100   100   020    Old_age   Always       -       20
  5 Reallocated_Sector_Ct   0x0033   100   100   036    Pre-fail  Always       -       0
  7 Seek_Error_Rate         0x000f   079   055   030    Pre-fail  Always       -       30750969426
  9 Power_On_Hours          0x0032   085   085   000    Old_age   Always       -       13562
Code:
ID# ATTRIBUTE_NAME          FLAG     VALUE WORST THRESH TYPE      UPDATED  WHEN_FAILED RAW_VALUE
  1 Raw_Read_Error_Rate     0x000f   117   099   006    Pre-fail  Always       -       141350352
  3 Spin_Up_Time            0x0003   092   092   000    Pre-fail  Always       -       0
  4 Start_Stop_Count        0x0032   100   100   020    Old_age   Always       -       20
  5 Reallocated_Sector_Ct   0x0033   100   100   036    Pre-fail  Always       -       0
  7 Seek_Error_Rate         0x000f   079   051   030    Pre-fail  Always       -       39355138952
  9 Power_On_Hours          0x0032   085   085   000    Old_age   Always       -       13561
Code:
ID# ATTRIBUTE_NAME          FLAG     VALUE WORST THRESH TYPE      UPDATED  WHEN_FAILED RAW_VALUE
  1 Raw_Read_Error_Rate     0x000f   118   099   006    Pre-fail  Always       -       180406344
  3 Spin_Up_Time            0x0003   093   093   000    Pre-fail  Always       -       0
  4 Start_Stop_Count        0x0032   100   100   020    Old_age   Always       -       21
  5 Reallocated_Sector_Ct   0x0033   100   100   036    Pre-fail  Always       -       0
  7 Seek_Error_Rate         0x000f   088   060   030    Pre-fail  Always       -       695441418
Code:
ID# ATTRIBUTE_NAME          FLAG     VALUE WORST THRESH TYPE      UPDATED  WHEN_FAILED RAW_VALUE
  1 Raw_Read_Error_Rate     0x000f   116   099   006    Pre-fail  Always       -       113530648
  3 Spin_Up_Time            0x0003   092   092   000    Pre-fail  Always       -       0
  4 Start_Stop_Count        0x0032   100   100   020    Old_age   Always       -       20
  5 Reallocated_Sector_Ct   0x0033   100   100   036    Pre-fail  Always       -       0
  7 Seek_Error_Rate         0x000f   078   051   030    Pre-fail  Always       -       39249715963
  9 Power_On_Hours          0x0032   085   085   000    Old_age   Always       -       13562
Code:
ID# ATTRIBUTE_NAME          FLAG     VALUE WORST THRESH TYPE      UPDATED  WHEN_FAILED RAW_VALUE
  1 Raw_Read_Error_Rate     0x000f   118   099   006    Pre-fail  Always       -       181646688
  3 Spin_Up_Time            0x0003   092   092   000    Pre-fail  Always       -       0
  4 Start_Stop_Count        0x0032   100   100   020    Old_age   Always       -       20
  5 Reallocated_Sector_Ct   0x0033   100   100   036    Pre-fail  Always       -       0
  7 Seek_Error_Rate         0x000f   087   060   030    Pre-fail  Always       -       4926482496
  9 Power_On_Hours          0x0032   085   085   000    Old_age   Always       -       13562
Code:
ID# ATTRIBUTE_NAME          FLAG     VALUE WORST THRESH TYPE      UPDATED  WHEN_FAILED RAW_VALUE
  1 Raw_Read_Error_Rate     0x000f   115   099   006    Pre-fail  Always       -       87626952
  3 Spin_Up_Time            0x0003   092   092   000    Pre-fail  Always       -       0
  4 Start_Stop_Count        0x0032   100   100   020    Old_age   Always       -       20
  5 Reallocated_Sector_Ct   0x0033   100   100   036    Pre-fail  Always       -       0
  7 Seek_Error_Rate         0x000f   087   060   030    Pre-fail  Always       -       4947226922
  9 Power_On_Hours          0x0032   085   085   000    Old_age   Always       -       13562
Code:
ID# ATTRIBUTE_NAME          FLAG     VALUE WORST THRESH TYPE      UPDATED  WHEN_FAILED RAW_VALUE
  1 Raw_Read_Error_Rate     0x000f   119   099   006    Pre-fail  Always       -       202910552
  3 Spin_Up_Time            0x0003   092   092   000    Pre-fail  Always       -       0
  4 Start_Stop_Count        0x0032   100   100   020    Old_age   Always       -       20
  5 Reallocated_Sector_Ct   0x0033   100   100   036    Pre-fail  Always       -       0
  7 Seek_Error_Rate         0x000f   077   051   030    Pre-fail  Always       -       43556995988
  9 Power_On_Hours          0x0032   085   085   000    Old_age   Always       -       13561
Code:
ID# ATTRIBUTE_NAME          FLAG     VALUE WORST THRESH TYPE      UPDATED  WHEN_FAILED RAW_VALUE
  1 Raw_Read_Error_Rate     0x000f   119   099   006    Pre-fail  Always       -       203400840
  3 Spin_Up_Time            0x0003   092   092   000    Pre-fail  Always       -       0
  4 Start_Stop_Count        0x0032   100   100   020    Old_age   Always       -       20
  5 Reallocated_Sector_Ct   0x0033   100   100   036    Pre-fail  Always       -       0
  7 Seek_Error_Rate         0x000f   080   052   030    Pre-fail  Always       -       26554404750
  9 Power_On_Hours          0x0032   085   085   000    Old_age   Always       -       13562
Code:
ID# ATTRIBUTE_NAME          FLAG     VALUE WORST THRESH TYPE      UPDATED  WHEN_FAILED RAW_VALUE
  1 Raw_Read_Error_Rate     0x000f   119   099   006    Pre-fail  Always       -       205700736
  3 Spin_Up_Time            0x0003   092   092   000    Pre-fail  Always       -       0
  4 Start_Stop_Count        0x0032   100   100   020    Old_age   Always       -       20
  5 Reallocated_Sector_Ct   0x0033   100   100   036    Pre-fail  Always       -       0
  7 Seek_Error_Rate         0x000f   076   055   030    Pre-fail  Always       -       52074226873
  9 Power_On_Hours          0x0032   085   085   000    Old_age   Always       -       13562
Code:
ID# ATTRIBUTE_NAME          FLAG     VALUE WORST THRESH TYPE      UPDATED  WHEN_FAILED RAW_VALUE
  1 Raw_Read_Error_Rate     0x000f   114   099   006    Pre-fail  Always       -       64856208
  3 Spin_Up_Time            0x0003   092   092   000    Pre-fail  Always       -       0
  4 Start_Stop_Count        0x0032   100   100   020    Old_age   Always       -       20
  5 Reallocated_Sector_Ct   0x0033   100   100   036    Pre-fail  Always       -       0
  7 Seek_Error_Rate         0x000f   087   060   030    Pre-fail  Always       -       585409711
  9 Power_On_Hours          0x0032   085   085   000    Old_age   Always       -       13563
Code:
ID# ATTRIBUTE_NAME          FLAG     VALUE WORST THRESH TYPE      UPDATED  WHEN_FAILED RAW_VALUE
  1 Raw_Read_Error_Rate     0x000f   118   089   006    Pre-fail  Always       -       200378936
  3 Spin_Up_Time            0x0003   092   092   000    Pre-fail  Always       -       0
  4 Start_Stop_Count        0x0032   100   100   020    Old_age   Always       -       20
  5 Reallocated_Sector_Ct   0x0033   100   100   036    Pre-fail  Always       -       16
  7 Seek_Error_Rate         0x000f   082   060   030    Pre-fail  Always       -       13543538686
  9 Power_On_Hours          0x0032   085   085   000    Old_age   Always       -       13562
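
As a rough triage aid for tables like the twelve above, a sketch that parses pasted `smartctl -A` rows and reports how much headroom each normalized WORST value has left before its THRESH failure point. Normalized values are vendor-scaled, so treat this only as a ranking aid, and the helper names (`parse_rows`, `headroom`) are invented:

```python
def parse_rows(table_text):
    """Parse pasted `smartctl -A` rows into (name, value, worst, thresh)."""
    rows = []
    for line in table_text.splitlines():
        f = line.split()
        if len(f) >= 10 and f[0].isdigit():
            rows.append((f[1], int(f[3]), int(f[4]), int(f[5])))
    return rows

def headroom(rows):
    """WORST minus THRESH per attribute: 0 means the attribute has
    already dipped to its failure threshold at some point."""
    return {name: worst - thresh for name, _value, worst, thresh in rows}

# Two rows from the first table above:
table = """\
  7 Seek_Error_Rate         0x000f   076   051   030    Pre-fail  Always       -       56319246763
  9 Power_On_Hours          0x0032   085   085   000    Old_age   Always       -       13561
"""
print(headroom(parse_rows(table)))  # -> {'Seek_Error_Rate': 21, 'Power_On_Hours': 85}
```

On that reading, the disks above with Seek_Error_Rate WORST values of 051 against a threshold of 030 have the least headroom, which matches which drives the thread treats as suspect.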