CPU/Mobo nForce 4 Data Corruption

zhopudey said:
Saw this link Eazy had posted in another thread - http://forums.nvidia.com/lofiversion/index.php?t8171.html.

Has anyone on TE faced similar problems? Seems like a serious issue. :(

I want to know about this, as next week I may be buying an AMD system. The problem mentioned in the thread Zhopey linked applies to me in a BIG way: I use Maxtor SATA drives and do a daily transfer of a single large 2.5GB file between HDDs - these are my backup image files.
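Since the worry here is silent corruption during those daily copies, one simple sanity check is to hash the file on both drives after the transfer and compare. A minimal Python sketch (the paths below are placeholders, not anything from this thread):

```python
import hashlib

def sha256_of(path, chunk=1 << 20):
    """Hash a large file in 1MB chunks so it never has to fit in RAM."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for block in iter(lambda: f.read(chunk), b""):
            h.update(block)
    return h.hexdigest()

src = r"D:\backups\image.img"   # placeholder: file on the source drive
dst = r"E:\backups\image.img"   # placeholder: the freshly written copy
if sha256_of(src) == sha256_of(dst):
    print("Copy verified: checksums match.")
else:
    print("WARNING: checksums differ - the transfer may have been corrupted.")
```

If the checksums ever disagree, you have caught the corruption before the bad copy becomes your only backup.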
 
I already have an nF4 board, and am planning to get 2 x 80GB SATA drives and put them in RAID 0 for the OS, apps and games. This news sure has me worried :S. The thread in the link offers no definite solution to the problem. One guy suggested turning off TCQ/NCQ, but it didn't completely solve the problem. :(
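On that TCQ/NCQ workaround: on Windows the toggle, where it exists, lives in the NVIDIA storage driver settings, while on Linux with libata the usual knob is to drop the drive's queue depth to 1 through sysfs. A rough sketch, assuming the disk shows up as sda and it is run as root (and remember, the linked thread says this did not completely cure the corruption):

```python
from pathlib import Path

def disable_ncq(device="sda"):
    """Set the libata queue depth to 1, which effectively switches NCQ off."""
    qd = Path(f"/sys/block/{device}/device/queue_depth")
    print(f"{device}: queue_depth was", qd.read_text().strip())
    qd.write_text("1")
    print(f"{device}: queue_depth now", qd.read_text().strip())

if __name__ == "__main__":
    disable_ncq("sda")   # assumption: the affected SATA drive is sda
```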
 
zhopudey said:
I already have an nF4 board, and am planning to get 2 x 80GB SATA drives and put them in RAID 0 for the OS, apps and games. This news sure has me worried :S. The thread in the link offers no definite solution to the problem. One guy suggested turning off TCQ/NCQ, but it didn't completely solve the problem. :(

RAID 0 :O :O .... that word scares me now... I had 2 x Maxtor 120GB SATA drives running in RAID 0 and saw hardly any improvement in anything but benchies - there was really not much of a difference in data transfers or in opening and closing programs and files, and I had tried all the optimizing for RAID. So I brought the drives back to normal, and a couple of days later one HDD was dead. That was not caused by having the drive running in RAID - it just died - and I was sweating thinking about all the problems I would have had if it had died while it was in RAID. :O

Here is what AnandTech said in a RAID article - I AGREE :hap2:

Final Words
If you haven't gotten the hint by now, we'll spell it out for you: there is no place, and no need for a RAID-0 array on a desktop computer. The real world performance increases are negligible at best and the reduction in reliability, thanks to a halving of the mean time between failure, makes RAID-0 far from worth it on the desktop.

There are some exceptions, especially if you are running a particular application that itself benefits considerably from a striped array, and obviously, our comments do not apply to server-class IO of any sort. But for the vast majority of desktop users and gamers alike, save your money and stay away from RAID-0.

If you do insist on getting two drives, you are much better off putting them into a RAID-1 array to have a live backup of your data. The performance hit of RAID-1 is just as negligible as the performance gains of RAID-0, but the improvement in reliability is worthwhile...unless you're extremely unlucky and both of your drives die at the exact same time.

When Intel introduced ICH5, and now with ICH6, they effectively brought RAID to the mainstream, pushing many users finally to bite the bullet and buy two hard drives for "added performance". While we applaud Intel for bringing the technology to the mainstream, we'd caution users out there to think twice before buying two expensive Raptors or any other drive for performance reasons. Your system will most likely run just as fast with only one drive, but if you have the spare cash, a bit more reliability and peace of mind may be worth setting up a RAID-1 array.

Bottom line: RAID-0 arrays will win you just about any benchmark, but they'll deliver virtually nothing more than that for real world desktop performance. That's just the cold hard truth.

Read the full article here... http://www.anandtech.com/storage/showdoc.aspx?i=2101
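To put a rough number on that "halving of the mean time between failure" point: with two drives, a stripe dies if either drive dies, while a mirror loses data only if both do. The 5% annual per-drive failure rate below is an assumed figure for illustration, not anything from the article:

```python
p = 0.05                              # assumed annual failure probability per drive

raid0_fail = 1 - (1 - p) ** 2         # stripe is lost if EITHER drive fails
raid1_fail = p ** 2                   # mirror loses data only if BOTH drives fail

print(f"Single drive     : {p:.4f}")           # 0.0500
print(f"RAID 0, 2 drives : {raid0_fail:.4f}")  # 0.0975 - roughly doubled risk
print(f"RAID 1, 2 drives : {raid1_fail:.4f}")  # 0.0025 - far safer, half the capacity
```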
 
^^ That's what I always say... RAID 0 is just for benchies.
And zhopudey, can you buy Seagate SATAs? Seagate does not seem to have that problem. Personally my favourite HDD brand is Hitachi, but all my HDDs are Seagate - Seagate's warranty is amazing!! Just the other day I got an 80GB replaced in one day!! Performance-wise Hitachi is better, but reliability-wise I have had no problems with Seagate. Get the 7200.9 if available, otherwise get the 7200.7; avoid the 7200.8.
 
apollyon said:
Get the 7200.9 if available, otherwise get the 7200.7; avoid the 7200.8.

Correct me if I'm wrong, but isn't that 7200 the RPM of the HDD?

Hey Appo, this might be a stupid question, but what is this .9 / .8 / .7 thing after the 7200, and is it printed on the HDD?
 
zhopudey said:
Well, it must be a version number or something similar.

YES, you are right - it is the generation of Seagate's Barracuda 7200 family (the 7200 is the spindle speed in RPM, the digit after the dot is the series), and the family name usually appears on the drive label along with the model number. Like Appo said, the .8s were known to be problematic and the .7s were known to be solid. The .9s are new, and I would wait a while and check user feedback before buying any of those. I will check the storage forums and let you people know about the .9s.

I am a total Hitachi convert now. I will buy only Hitachis unless they start to make Deathstars like the IBMs.
 
Seagate SATA is actually not up to spec... many of their drives are blacklisted. They especially cause problems with the 2.6.11 kernel (2.6.10 and 2.6.12 give me no errors).
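If you want to check what your own box is doing - which kernel you are on and whether a given drive is running with a command queue active - something like this works on a standard libata/sysfs setup (the paths are the usual ones, but may differ on older distros):

```python
import platform
from pathlib import Path

print("Kernel:", platform.release())

# List each SATA/SCSI disk with its model string and current queue depth
# (a queue_depth of 1 means queued commands are effectively off).
for disk in sorted(Path("/sys/block").glob("sd?")):
    model = (disk / "device" / "model").read_text().strip()
    depth = (disk / "device" / "queue_depth").read_text().strip()
    print(f"{disk.name}: {model}, queue_depth={depth}")
```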
 
Eazy said:
RAID 0 :O :O .... that word scares me now... I had 2 x Maxtor 120GB SATA drives running in RAID 0 and saw hardly any improvement in anything but benchies - there was really not much of a difference in data transfers or in opening and closing programs and files, and I had tried all the optimizing for RAID. So I brought the drives back to normal, and a couple of days later one HDD was dead. That was not caused by having the drive running in RAID - it just died - and I was sweating thinking about all the problems I would have had if it had died while it was in RAID. :O

OK, what about a RAID 0+1 array - a striped + mirrored set? Wouldn't that take care of both problems... speed... data security?

I also plan a RAID array. Since I may need to use 4 HDDs, I will possibly need RAID 0 (maybe in another combo, say with RAID 1), so this bothers me, as I am also considering the A8N-SLI P... which is nF4 based. What about the SiI 3114 controller - has anyone found problems using that?
I think NVIDIA needs to iron this out before the x16 * 2 models hit the mainstream...
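On the 0+1 idea: with four drives and independent failures, mirroring the two stripes does claw back most of the reliability that a plain four-drive stripe gives up, at the cost of half the capacity. A rough illustration, again using an assumed 5% annual per-drive failure rate rather than any real figure:

```python
p = 0.05                                 # assumed annual failure probability per drive

stripe2_fail  = 1 - (1 - p) ** 2         # a 2-drive stripe dies if either drive dies
raid0_4_fail  = 1 - (1 - p) ** 4         # plain 4-drive stripe: any failure loses it all
raid01_4_fail = stripe2_fail ** 2        # 0+1: data lost only when BOTH stripes have died

print(f"RAID 0,   4 drives: {raid0_4_fail:.4f}")    # ~0.1855
print(f"RAID 0+1, 4 drives: {raid01_4_fail:.4f}")   # ~0.0095
```

That is the reliability side only - none of it works around the nForce4 corruption issue itself if the controller is what is mangling the data.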
 