GFX Card power draw: What is real?


hatter

Galvanizer
An amazing article that compares how honest Nvidia and ATI are with their TDPs. They use FurMark to test the power draw of each card and come away surprised. Almost all cards are losers... except the 4670 (god bless ATI for this card) and the 9600 GT (the ninja fighter) :hap2:

Exceeding their own TDP ratings and seriously violating the PCI Express specification demonstrates that neither AMD nor NVIDIA anticipated FurMark. With their dual-GPU graphics adapters, both put their worst instincts on display: the AMD Radeon HD 4870 X2 draws more than 200W from the 8-pin connector, which is specified for 150W, and NVIDIA's GTX 295 behaves just as ingloriously, drawing 104W from the 6-pin connector, which is designed for 75W. Our measurements show that when it comes to demonstrating performance in the high-end market segment, both manufacturers run at their limits or exceed them massively. In contrast, with mainstream adapters (where most of the money is made), AMD and NVIDIA do not violate their specifications; mostly they even stay below them.

Generally, both companies appear to have no upper limit on power consumption for their high-end cards. Even in idle mode (where efficiency matters for the electricity bill), only a few models can satisfy the customer. Merely the GeForce GTX 285 scores some points within the high-end group, with a consumption of 28W at idle. In the middle class, the AMD Radeon HD 4670 demonstrates with 8W at idle that lower consumption is possible while still delivering enough rendering power for the casual gamer. To sum things up: we will see Green IT developments reach the high-end graphics adapter market segment too, but there are not many signs indicating that this will happen in the near future.

Power consumption of current graphics cards - Artikel & Testberichte bei HardTecs4U
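
For reference, the PCI Express budgets at play here are 75W from the slot itself, 75W per 6-pin plug and 150W per 8-pin plug. Here's a minimal sketch of the check the article is doing; the measured figures are the article's (">200W" taken as 200), the layout is mine:

Code:
# PCI Express power budgets per source (watts).
PCIE_LIMITS_W = {"slot": 75.0, "6-pin": 75.0, "8-pin": 150.0}

# Per-rail draw under FurMark as reported in the article
# (">200 W" is treated as 200 here).
measured_w = {
    "Radeon HD 4870 X2": {"8-pin": 200.0},
    "GeForce GTX 295":   {"6-pin": 104.0},
}

for card, rails in measured_w.items():
    for rail, watts in rails.items():
        limit = PCIE_LIMITS_W[rail]
        verdict = "VIOLATES spec" if watts > limit else "within spec"
        print(f"{card}: {watts:.0f} W on the {rail} (limit {limit:.0f} W) -> {verdict}")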
 
ty for the article... never knew the 4850 fakes its wattage at 110W when in reality it's 148.5W!

The GeForce 9800 GX2 and Radeon 4870 X2 are also dirty fakers :@

And the 4670 is one heck of a green card :) and the notorious 9600 GT seems sweet too.

repped
 
The article is BS; running software that can blow out your card's circuitry just to judge TDP is ridiculous.

Nvidia, ATI, or for that matter Intel and AMD too, don't base their TDP values on software that isn't meant to be run on their hardware in the first place.

Also, FurMark loads a 4870 about 40W more than the maximum draw you would see with a game.
 
gamervivek said:
The article is BS; running software that can blow out your card's circuitry just to judge TDP is ridiculous.
Nvidia, ATI, or for that matter Intel and AMD too, don't base their TDP values on software that isn't meant to be run on their hardware in the first place.
Also, FurMark loads a 4870 about 40W more than the maximum draw you would see with a game.

BS. Read the article before commenting.
 
Nice article mate, keep it up.
OK, seems this is the chart of our hot-list cards when compared performance-wise:

At idle:
-------------------------------------------------------------------------------
9600 GT          25.9 W  |  4670      8.3 W   (not a like-for-like performance match)
8800 GT          35.1 W  |  4830     26.0 W
9800 GTX+        28.1 W  |  4850     42.6 W
GTX 260 (65nm)   32.0 W  |  4870     55.4 W   (a 55nm GTX 260 might show more efficiency)
GTX 280          41.6 W  |  no counterpart
GTX 285          29.0 W  |  no counterpart
GTX 295          62.3 W  |  4870 X2  75.0 W
-------------------------------------------------------------------------------

At full load:
-------------------------------------------------------------------------------
9600 GT          68.5 W  |  4670      64.2 W  (not a like-for-like performance match)
8800 GT         111.7 W  |  4830      93.2 W
9800 GTX+       128.4 W  |  4850     148.2 W
GTX 260 (65nm)  166.2 W  |  4870     187.2 W  (a 55nm GTX 260 might show more efficiency)
GTX 280         226.0 W  |  no counterpart
GTX 285         214.1 W  |  no counterpart
GTX 295         316.5 W  |  4870 X2  373.1 W
-------------------------------------------------------------------------------

All the best in choosing your cards if anyone is planning a purchase. :hap2: :)
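
If anyone wants to play with these numbers, here's a throwaway Python snippet that recomputes the load-minus-idle delta from the chart above (values copied straight from it; bundling them into one dict is mine):

Code:
# Idle/load draw in watts, copied from the chart above.
cards = {
    "9600 GT":        (25.9,  68.5),
    "4670":           ( 8.3,  64.2),
    "8800 GT":        (35.1, 111.7),
    "4830":           (26.0,  93.2),
    "9800 GTX+":      (28.1, 128.4),
    "4850":           (42.6, 148.2),
    "GTX 260 (65nm)": (32.0, 166.2),
    "4870":           (55.4, 187.2),
    "GTX 280":        (41.6, 226.0),
    "GTX 285":        (29.0, 214.1),
    "GTX 295":        (62.3, 316.5),
    "4870 X2":        (75.0, 373.1),
}

# Delta = how much extra the card pulls when pushed from idle to full load.
for name, (idle, load) in cards.items():
    print(f"{name:15s} idle {idle:6.1f} W   load {load:6.1f} W   delta {load - idle:6.1f} W")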
 
sato1986 said:
BS. Read the article before commenting.

The article is really old, and the 4870s popping with FurMark is older still. What's the point of using FurMark to compare the real-world power consumption of a graphics card, unless you want to run FurMark on it rather than play games?
EVGA GeForce GTX 260 Core 216 Superclocked (55nm GPU) Graphics Accelerator Review (page 5) - X-bit labs

X-bit's review shows that a 4870 draws 130W in 3DMark06 at 1600x1200 with 4x FSAA and 16x AF, while the FurMark testing shows it pulls 188W. I hope my first post makes more sense to you now.
 
^^ It's not about how much my card draws... it's about what is theoretically possible. And something that is theoretically possible has every chance of becoming real. It's simply about knowledge, which in turn is necessary to make the right choices.

You may choose to ignore what the article finds, but don't call it bullsh*t, because it is in no way wrong. The authors clearly say that the power draw during a FurMark run will probably never be reached in real-life usage. It's simply a tool to push the cards to the max and compare apples to apples.
 
i hope my post makes more sense to you now...

nevermind.

@Morgoth, don't the latest games push mid-range cards like the 8800 GT / 4850 / 9600 GT to full load? And doesn't full load mean near-full power consumption? Please elaborate.

Also, doesn't the Sapphire ATI Radeon use two six-pin connectors?
 
^^ Full load is theoretical. For example, theoretically the HD 4870 has:

1.2 TFLOPs of shading power

115.2 GB/s of memory bandwidth


How much of it gets used depends on how games or programs are coded. Benchmarks like FurMark are best for testing a card's stability because they are specifically coded to stress the card.
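
Those two peaks follow straight from the reference HD 4870 specs, if you want to check the math yourself (800 ALUs at 750 MHz, each doing a 2-FLOP MAD per clock, and a 256-bit GDDR5 bus at an effective 3.6 Gbps per pin; these are the stock-board figures, not numbers from the article):

Code:
# HD 4870 theoretical shading power: 800 ALUs x 2 FLOPs (MAD) x 750 MHz.
alus, flops_per_clock, core_hz = 800, 2, 750e6
print(f"{alus * flops_per_clock * core_hz / 1e12:.1f} TFLOPs")  # -> 1.2 TFLOPs

# Theoretical memory bandwidth: 256-bit bus x 3.6 GT/s effective GDDR5.
bus_bytes, transfers_per_s = 256 // 8, 3.6e9
print(f"{bus_bytes * transfers_per_s / 1e9:.1f} GB/s")          # -> 115.2 GB/s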

Games, on the other hand, may or may not stress the card fully, because they don't always exercise all of its components. Maybe in some scenes they stress the ROPs and in others the ALUs. Games like Crysis, which render more complex and heavy scenes, will stress the card more than, for example, COD4, which relies more on texture-unit performance. With games, the emphasis is on performance/playability/best results; with stress tools, the focus is on putting the card through its paces.

Importantly, if a game stutters even at a low resolution, it doesn't necessarily mean the card is fully stressed and can't cope with the requirements. The low performance could be because the card is limited by something other than its computational power (memory bandwidth, or a lower texture fill rate). This is one reason why, even though the 4870 has more computational power (shaders) than the GTX 285, it is beaten by the GeForce in almost all benches: the GeForce is stronger in texture performance. Similarly, while the 9800 GTX has more texture power than the 4850, the Radeon beats it in almost all benches because it has more shader power. The key is balance, but it's not easy to achieve; the G80 and R300 are the closest that have come to it.
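
To make the shader-vs-texture point concrete, here is a toy bottleneck model. The peak figures are the published theoretical numbers for each card; the per-frame demands are invented purely for illustration:

Code:
# Published theoretical peaks: GFLOPs of shader math, GTexels/s of filtering.
specs = {
    "HD 4870": {"gflops": 1200.0, "gtexels": 30.0},   # 40 TMUs x 750 MHz
    "GTX 285": {"gflops": 1063.0, "gtexels": 51.8},   # 80 TMUs x 648 MHz
}

def frame_time_ms(card, shader_gflop, texel_gsample):
    """Frame time if each unit ran at its peak; the slower unit is the bottleneck."""
    t_alu = shader_gflop / specs[card]["gflops"] * 1000.0
    t_tex = texel_gsample / specs[card]["gtexels"] * 1000.0
    return max(t_alu, t_tex), ("ALU-bound" if t_alu >= t_tex else "texture-bound")

# A hypothetical texture-heavy frame: modest math, lots of sampling.
for card in specs:
    ms, limit = frame_time_ms(card, shader_gflop=8.0, texel_gsample=0.9)
    print(f"{card}: {ms:.1f} ms/frame, {limit}")

Run it and the GTX 285 finishes the texture-heavy frame faster despite fewer FLOPs; flip the demands (lots of math, little sampling) and the ranking reverses, which is the 9800 GTX vs 4850 case.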

BTW, this picture will clear a few points:

[chart: PCGH pixel-shader shootout, RV770 vs. GT200]

Source: PCGH - Pixelshader-Shootout: RV770 vs. GT200 - Nvidia, AMD, Ati, Radeon, Geforce, HD 4850, HD 4870, GTX280, GTX260, 8800 Ultra

Ideally, at full load, power draw will represent the maximum the card can draw. :)
 
morgoth said:
^^ It's not about how much my card draws... it's about what is theoretically possible. And something that is theoretically possible has every chance of becoming real. It's simply about knowledge, which in turn is necessary to make the right choices.

You may choose to ignore what the article finds, but don't call it bullsh*t, because it is in no way wrong. The authors clearly say that the power draw during a FurMark run will probably never be reached in real-life usage. It's simply a tool to push the cards to the max and compare apples to apples.
:rofl:
I don't say the article is wrong; I feel the article is BS. Using FurMark to compare the power draws of various cards is flawed simply because it will never represent the cards' power draw in daily usage; it's just a representation of how far the power circuitry can be loaded at stock clocks.
And do you really think it shows the theoretical limit of power draw? Overclocking and volt-modding will push it far beyond that. They use a program that unnaturally stresses a card and then draw conclusions about TDP and power draw at stock, which is pointless.

morgoth said:
^^ Full load is theoretical. For example, theoretically the HD 4870 has:

1.2 TFLOPs of shading power
115.2 GB/s of memory bandwidth


How much of it gets used depends on how games or programs are coded. Benchmarks like FurMark are best for testing a card's stability because they are specifically coded to stress the card.

Games, on the other hand, may or may not stress the card fully, because they don't always exercise all of its components. Maybe in some scenes they stress the ROPs and in others the ALUs. Games like Crysis, which render more complex and heavy scenes, will stress the card more than, for example, COD4, which relies more on texture-unit performance. With games, the emphasis is on performance/playability/best results; with stress tools, the focus is on putting the card through its paces.

Importantly, if a game stutters even at a low resolution, it doesn't necessarily mean the card is fully stressed and can't cope with the requirements. The low performance could be because the card is limited by something other than its computational power (memory bandwidth, or a lower texture fill rate). This is one reason why, even though the 4870 has more computational power (shaders) than the GTX 285, it is beaten by the GeForce in almost all benches: the GeForce is stronger in texture performance. Similarly, while the 9800 GTX has more texture power than the 4850, the Radeon beats it in almost all benches because it has more shader power. The key is balance, but it's not easy to achieve; the G80 and R300 are the closest that have come to it.
BTW, this picture will clear a few points:

[chart: PCGH pixel-shader shootout, RV770 vs. GT200]

Source: PCGH - Pixelshader-Shootout: RV770 vs. GT200 - Nvidia, AMD, Ati, Radeon, Geforce, HD 4850, HD 4870, GTX280, GTX260, 8800 Ultra
Ideally, at full load, power draw will represent the maximum the card can draw. :)

The PCGH review you posted is good, but it only shows how the cards' fillrates compare to the 7900 GT; it isn't really a great indicator of what the limiting part is. Theoretical limits of graphics cards do not hold true most of the time: the theoretical memory bandwidth can only be sustained in short bursts, and shading power is quite overrated. Even though ATI may far surpass Nvidia in theoretical limits, when it comes to gaming performance it's very likely they are about the same.

Stress tools are good, but if you have no problem gaming with your card, what's the point in trying to kill it? Stability isn't simply a matter of your card's clocks and temperatures, but also of the application using it. If the applications I use run stably on my card, I'm fine.
Again, I would reiterate that these are my personal opinions; some euphemisms would have kept everyone happy, but then they are forum killers. :P
 
morgoth said:
How much of it gets used depends on how games or programs are coded. Benchmarks like FurMark are best for testing a card's stability because they are specifically coded to stress the card.

Games, on the other hand, may or may not stress the card fully, because they don't always exercise all of its components. Maybe in some scenes they stress the ROPs and in others the ALUs. Games like Crysis, which render more complex and heavy scenes, will stress the card more than, for example, COD4, which relies more on texture-unit performance. With games, the emphasis is on performance/playability/best results; with stress tools, the focus is on putting the card through its paces.

Importantly, if a game stutters even at a low resolution, it doesn't necessarily mean the card is fully stressed and can't cope with the requirements. The low performance could be because the card is limited by something other than its computational power (memory bandwidth, or a lower texture fill rate). This is one reason why, even though the 4870 has more computational power (shaders) than the GTX 285, it is beaten by the GeForce in almost all benches: the GeForce is stronger in texture performance. Similarly, while the 9800 GTX has more texture power than the 4850, the Radeon beats it in almost all benches because it has more shader power. The key is balance, but it's not easy to achieve; the G80 and R300 are the closest that have come to it.

Oh... now I understand the war between the two brands much better. :hap2:

ty man.
 