There's An Issue With the Western Digital Green Drives!!

broadway

Disciple
If you look at the way the Western Digital Green HDDs are made to work, one would assume, technically, that they'd last a year or two longer than the average hard drive. WRONG! I thought that too until I went through this page. The drives are made in such a way that they look for every small reason to slack off.

The head of a Green drive is programmed in the firmware to "park" itself after every 8 seconds of inactivity. So assume you're away from your desk with the screensaver up and doing its thing. In the background you have software that touches the hard drive every 10 seconds, like a messenger or an extra-alert anti-virus snooping around while your PC is idle. That means the head will park itself about six times every single minute, which is around 360 times an hour. The typical load/unload cycle rating of a hard drive is 300,000 cycles, which means your Green drive would reach that rating in roughly 800 idle hours.
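To put rough numbers on that, here's a quick back-of-the-envelope sketch in Python. The 8-second park timer and the 300,000-cycle rating are the figures discussed above; the 10-second background access interval is just an illustrative assumption:

# Rough estimate of how fast an idle WD Green burns through its
# load/unload cycle rating. The access interval is a made-up example.
PARK_TIMEOUT_S = 8        # IntelliPark idle timer on the Green drives
ACCESS_INTERVAL_S = 10    # hypothetical background access every 10 s
RATED_CYCLES = 300_000    # typical load/unload cycle rating

# The head only gets to park if the gap between accesses exceeds the timer.
assert ACCESS_INTERVAL_S > PARK_TIMEOUT_S

# One park (and one load cycle) per access interval while the PC is idle.
cycles_per_hour = 3600 / ACCESS_INTERVAL_S        # ~360 per hour
hours_to_rating = RATED_CYCLES / cycles_per_hour  # ~833 idle hours

print(f"{cycles_per_hour:.0f} cycles/hour, rating reached in "
      f"{hours_to_rating:.0f} idle hours (~{hours_to_rating / 24:.0f} days)")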

This drive here, for example, has over a million load cycles and still counting.
ID# ATTRIBUTE_NAME FLAG VALUE WORST THRESH TYPE UPDATED WHEN_FAILED RAW_VALUE
1 Raw_Read_Error_Rate 0x000f 200 200 051 Pre-fail Always - 0
3 Spin_Up_Time 0x0003 197 181 021 Pre-fail Always - 7116
4 Start_Stop_Count 0x0032 100 100 000 Old_age Always - 450
5 Reallocated_Sector_Ct 0x0033 200 200 140 Pre-fail Always - 0
7 Seek_Error_Rate 0x000e 200 200 051 Old_age Always - 0
9 Power_On_Hours 0x0032 083 083 000 Old_age Always - 12589
10 Spin_Retry_Count 0x0012 100 100 051 Old_age Always - 0
11 Calibration_Retry_Count 0x0012 100 253 051 Old_age Always - 0
12 Power_Cycle_Count 0x0032 100 100 000 Old_age Always - 46
192 Power-Off_Retract_Count 0x0032 200 200 000 Old_age Always - 310
193 Load_Cycle_Count 0x0032 001 001 000 Old_age Always - 1739851
194 Temperature_Celsius 0x0022 116 099 000 Old_age Always - 36
196 Reallocated_Event_Count 0x0032 200 200 000 Old_age Always - 0
197 Current_Pending_Sector 0x0012 200 200 000 Old_age Always - 0
198 Offline_Uncorrectable 0x0010 200 200 000 Old_age Offline - 0
199 UDMA_CRC_Error_Count 0x003e 200 200 000 Old_age Always - 1
200 Multi_Zone_Error_Rate 0x0008 200 200 051 Old_age Offline - 1

Yes, that's 1.74 million.
More can be found here.

If you own a Green drive, the load/unload cycle count can be found with HD Tune.

Example:
hdtunehealthwdcwd3200aa.png
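On Linux, the same attribute can be read with smartmontools instead of HD Tune. A minimal sketch, assuming smartctl is installed, you have the needed privileges, and the Green drive is /dev/sdb (adjust the device to suit):

# Read Load_Cycle_Count (SMART attribute 193) from a drive via smartctl.
import subprocess

def load_cycle_count(device="/dev/sdb"):
    out = subprocess.run(["smartctl", "-A", device],
                         capture_output=True, text=True, check=True).stdout
    for line in out.splitlines():
        fields = line.split()
        # Attribute rows start with the numeric ID; 193 is Load_Cycle_Count.
        if fields and fields[0] == "193":
            return int(fields[-1])    # RAW_VALUE is the last column
    return None

if __name__ == "__main__":
    print("Load_Cycle_Count:", load_cycle_count())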
 
@udit
So the 32MB Green ones are not programmed to park their heads? I read somewhere that a WD representative said the "parking" feature was unique to Green drives to reduce power usage.
 
Thank god my torrent rig runs a 500AAKS.

Looking at the title, I thought Amarbir sir came up with something! :p
 
broadway said:
@udit

So the 32MB Green ones are not programmed to park their heads? I read somewhere that a WD representative said the "parking" feature was unique to Green drives to reduce power usage.

point

I'm so dead now

I have 11 WD Green 1TB drives.

I can't spend 44k upgrading to Seagate

:'(
 
Here's the HD Tune health status of my 1TB WD Green drive (10EADS-00M2B0), a dual-platter drive.

hdtunehealthwdcwd10eads.png


Load cycle count is 9440. Is that normal?

The health status shows the drive to be okay.

On the other hand, something seems to be wrong with my 500GB Seagate 7200.11 drive. :(

Can anyone explain why some of the attributes are highlighted in yellow?

Reallocated sector count is 413. Is it related to bad sectors?

Spin retry count is also 5 (on all other drives it is 0).

And there's one unknown attribute as well :S

hdtunehealthst3500320as.png
 
I have read that 300k is an average rating for a drive. I'm not saying your drive will die, just that the Green drives will reach the 300k figure very quickly. What happens after a drive crosses that figure is not certain.
 
Which HDD comes with the 3.5" MyBook Basic (USB)? I've just got a 1TB WD Green, and I was planning to sell it off and get the 1TB MyBook just because I couldn't get any 3.5" external casing of my choice to put it in.
 
Well, I don't think the load_cycle_count on a WD Green can be directly related to HDD lifespan.

How can the process of parking, which just moves the head from point A to point B without any data seek, result in the drive reaching end of life sooner?

Here is the count on my WD AAJS after about a year of use.

hdtunewdblue.png


Will post the load count of my one-month-old drive when I reach home.
 
The considerably higher load cycle count of these Green drives is noticeable when compared with my other drives.

My WD Green (3-4 months old), with a power-on time of 1,235 hours, has a load cycle count of 9,446, while my Hitachi drive, almost a year old with a power-on time of 5,509 hours, has a load cycle count of only 1,199.
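Putting those two side by side, the parking rate per power-on hour works out like this (a quick sketch using the numbers above):

# Load cycles per power-on hour for the two drives mentioned above.
drives = {
    "WD Green (3-4 months)": (9446, 1235),  # (load cycle count, power-on hours)
    "Hitachi (~1 year)": (1199, 5509),
}

for name, (lcc, poh) in drives.items():
    print(f"{name}: {lcc / poh:.2f} load cycles per power-on hour")

# Prints ~7.65/hour for the Green vs ~0.22/hour for the Hitachi,
# i.e. roughly 35 times more head parking.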

This is scary :( Need to know if this really affects the health/life of the drives.
 
HOLY MOTHER OF GOD.... :O My precious 3TBs... I'll be killed on the spot if I utter one word about replacing those three with Seagate 1TBs. :(

Somebody get in touch with the WD guys or some HDD connoisseur and confirm whether this is indeed going to be an issue; all my media is on my 3TBs and my work is backed up on them.
 
What ??????????

I just got a 1TB green a month back.

So this means the so-called "Green" power-saving, lower-energy-consumption thing is actually bad for the drive.

But I suppose a firmware flash (if WD comes up with one) could resolve this issue, right?
 
Copy-paste from Synology Inc. • View topic - Summary: WD Green Power Disks & High Load Cycle Problem

This problem was only seen on Unix/Linux systems, and a fix was already provided by WD (see the second post in the link)... so why panic now?

--------------------------------------------------------------

Summary of the facts:

* At least some versions of the Green Power disks from Western Digital seem to keep switching between the energy-saving state (which starts after 8 seconds without access) and normal load operation when installed in a Unix/Linux-based system.

* Affected disks show a fast-growing load cycle count (LCC), which is a metric for how many times the drive has unloaded its heads.

* When there is no access to the NAS (idle), the LCC increases between +50 and +120 per hour.

* As WD specifies these disks for 300,000 LCC, the threshold will be reached within a few months. The maximum lifespan of affected disks might be shortened.

* If the NAS is under permanent load, the LCC does not increase, so it is a problem of the idle state.

* It may be that disks mounted in an external case and attached via eSATA are not affected. But if these disks are used as internal disks with a logical volume on them, then they will show the high LCC as well.

* Not only Synology NAS units are affected; other Unix-based NAS devices and Linux distributions seem to have the same problem.

* When the disks are installed in a Windows system, there seems to be no problem.

* Not all combinations of disk version and NAS version seem to be affected by the problem. Up to now it is not known which combinations do have the problem and which do not.

* It seems that disks with version WD10EACS-00D6B0 (Nov 08) or later no longer show a high LCC. Instead they show the START_STOP_COUNT value as the LCC. The question remains whether WD really fixed the problem or just masked it by no longer showing us the high LCC values.

* Western Digital's position is that they do not support operation of their disks with operating systems other than Windows and Mac OS. Maybe you could apply some pressure or retract the recommendation of their disks.

* The problem is discussed in many threads throughout the web. Just google for "high load cycle count" or similar terms.

* According to WD, setting the IntelliPark timeout to 300 seconds should reduce the problem (on a Linux box that syncs every 30 seconds, this amounts to disabling the main feature of the GreenPower technology). This workaround does not seem to work for everybody (it does not for me).

Only 4-platter versions of the disk are affected:

It seems that only the 4-platter versions of the WD10EACS are affected; the newer 3-platter versions of the WD10EACS are not. I guess that all WD10EADS drives (larger disk cache) are 3-platter versions and should not suffer from high LCC.

3-platter: WD10EACS-65D6B0, WD10EACS-00D6B0, WD10EACS-00D6B1 (no high LCC, but LCC equals SSC)

4-platter: WD10EACS-00ZJB0, WD10EACS-32ZJB0, WD10EACS-00C7B0 (show high LCC)

Reproduction of the problem:

* 1. Install the WD disk with green power technology into the NAS as disk1 or disk2 (internal!)

* 2. Write down the following SMART values: Power On Hours (POHstart) and Load Cycle Count (LCCstart)

* 3. Stop all activity on the NAS so the disks reach idle states often

* 4. After several hours write down again the following SMART values: Power On Hours (POHstop) and Load Cycle Count (LCCstop)

* 5. Calculate the LCC increase: LCCdeltaPerHour = ( LCCstop - LCCstart ) / ( POHstop - POHstart ) (a sketch of this calculation follows the list)

* 6. If your disk is affected, you will see LCCdeltaPerHour between +50 and +120 (--> you should post your values in the Experiment Thread by Franklin)

* 7. If your disk is not affected, you will see LCCdeltaPerHour between +0 and +10
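A minimal sketch of the step-5 calculation in Python, again reading the SMART raw values with smartctl (attribute 9 is Power_On_Hours, 193 is Load_Cycle_Count; the device name and the idle interval are assumptions you would adjust):

# Measure LCCdeltaPerHour: read Power_On_Hours (9) and Load_Cycle_Count (193),
# let the disk idle for a few hours, read again, and divide the deltas.
import subprocess
import time

def read_poh_lcc(device):
    out = subprocess.run(["smartctl", "-A", device],
                         capture_output=True, text=True, check=True).stdout
    raw = {}
    for line in out.splitlines():
        fields = line.split()
        if fields and fields[0].isdigit():
            raw[int(fields[0])] = int(fields[-1])  # attribute ID -> RAW_VALUE
    return raw[9], raw[193]

def lcc_delta_per_hour(device="/dev/sda", idle_hours=6):
    poh_start, lcc_start = read_poh_lcc(device)
    time.sleep(idle_hours * 3600)                  # keep the NAS idle meanwhile
    poh_stop, lcc_stop = read_poh_lcc(device)
    return (lcc_stop - lcc_start) / max(poh_stop - poh_start, 1)

if __name__ == "__main__":
    print("LCCdeltaPerHour:", lcc_delta_per_hour())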

Resources:

* Good explanation of the head unloading/loading technique by Hitachi. I am not sure if WD uses exactly the same "ramp" technology.

* Potential explanation in another forum of what could be the reason for the high load cycle counts.

* Specification for WD Caviar GP disks

* Explanation of the WD Green Power functions; the problem is probably due to the "IntelliPark" feature

Many thanks for all the valuable help!

Thomas
 
So which drive should one go for now? Samsung? Is there any brand whose issues haven't cropped up yet? Instead of setting new benchmarks, these manufacturers are creating new bugs.
 