Graphics Cards: GTX 200 vs RV770 - Architecture Review

muzux2

ex-Mod
At first glance, the new Nvidia GPU, currently the world's fastest monolithic graphics processor, makes a rather ambiguous impression. Unlike ATI's RV770, it cannot boast many innovations compared with the previous generation of graphics processors, so we can't call it revolutionary. G200 is better regarded as a further extensive development of ideas first introduced in G80. Moreover, we sometimes get the impression that the chip was designed in a real hurry. In fact, there was no need to rush, because ATI chose a completely different strategy and gave up the high-performance monolithic GPU concept altogether.

Actually, G200 could be regarded as "G92 on steroids". Just look at the increased counts of all the functional units: ALUs, TMUs, RBEs, plus the wider 512-bit memory bus. The only significant architectural change is the addition of a third shader processor to the computational clusters, which used to have only two.

The results of preliminary theoretical benchmarks are not very optimistic. Nvidia's new solution lost to the simpler and cheaper ATI RV770 in most synthetic benchmarks, except for the fillrate test and pure texture-sampling performance. Theoretically, G200-based solutions should feel at home in games that have a lot of high-resolution complex textures and shaders working mostly with textures, and should lose to ATI only in games that require high mathematical performance. Moreover, the 512-bit memory bus and 32 RBEs may be extremely useful at high resolutions with anti-aliasing enabled, which will definitely attract a number of hardcore gamers (see the back-of-the-envelope numbers below).
X-bit labs - Nvidia GeForce GTX 200 Graphics Architecture Review: Born to Win? (page 15)
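
For quick reference, here's a back-of-the-envelope sketch of the theoretical peaks those conclusions rest on. The unit counts and clocks are the commonly quoted launch specs, so treat the exact figures as assumptions; real code never hits these peaks.

```python
# Rough theoretical peaks for GTX 280 vs HD 4870 (commonly quoted
# launch specs, not measured numbers).

def peaks(name, sps, shader_mhz, core_mhz, tmus, rbes,
          bus_bits, mem_mt_s, flops_per_sp):
    gflops = sps * shader_mhz * flops_per_sp / 1000      # programmable ALU peak
    texel_rate = tmus * core_mhz / 1000                  # GTexels/s
    pixel_rate = rbes * core_mhz / 1000                  # GPixels/s
    bandwidth = bus_bits / 8 * mem_mt_s / 1000           # GB/s
    print(f"{name}: {gflops:.0f} GFLOPS, {texel_rate:.1f} GT/s, "
          f"{pixel_rate:.1f} GP/s, {bandwidth:.1f} GB/s")

# GTX 280: 240 SPs @ 1296 MHz (counting MAD+MUL = 3 flops/clock),
# 80 TMUs / 32 RBEs @ 602 MHz, 512-bit GDDR3 @ 2214 MT/s effective
peaks("GTX 280", 240, 1296, 602, 80, 32, 512, 2214, 3)

# HD 4870: 800 SPs @ 750 MHz (MAD = 2 flops/clock),
# 40 TMUs / 16 RBEs @ 750 MHz, 256-bit GDDR5 @ 3600 MT/s effective
peaks("HD 4870", 800, 750, 750, 40, 16, 256, 3600, 2)
```

These numbers line up with the quote: ATI wins on raw ALU throughput, Nvidia wins on texturing, fillrate, and bandwidth.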
The RV770 design team should be pretty happy. The GT200 architecture is in no way bad, but in comparison to RV770 and its own predecessor (G80) it fizzles out. The RV770 architecture looks more scalable (both up and down), unlike GT200, which can only be scaled down on the current process node. AMD could probably have gone for 2000 SPs, 100 TMUs and 40 ROPs (RBEs) in a chip the size of GT200, but they are waiting for the right time.. :ohyeah:

Besides the architectural benefit AMD brings to the table, they also have a higher transistor density than Nvidia. Since RV770 and GT200 are manufactured on different processes, normalizing GT200 to 55nm (a ~19% shrink) gives a die size of about 460mm². That puts Nvidia's transistor density at 3 million transistors per mm², against 3.77 million transistors per mm² for AMD. All possible because AMD got a head start on 55nm over Nvidia (a quick sanity check of this math follows below).
:hap2:


GPU Café RV770 vs GT200: Architecture Comparison
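
To sanity-check the density math above, here's a quick sketch. The transistor counts and GT200's 576mm² die are the commonly quoted figures; RV770's exact die size is an assumption (the 3.77 figure implies one slightly under 256mm²).

```python
# Rough check of the GT200-vs-RV770 density arithmetic above.
gt200_transistors_m = 1400   # millions, commonly quoted
rv770_transistors_m = 956    # millions, commonly quoted

gt200_die_55nm = 576 * (1 - 0.19)   # ~466 mm^2; the post rounds to 460
rv770_die = 256                     # mm^2 (assumed; commonly quoted figure)

print(f"GT200 normalized to 55 nm: ~{gt200_die_55nm:.0f} mm^2, "
      f"{gt200_transistors_m / gt200_die_55nm:.2f} M transistors/mm^2")
print(f"RV770: {rv770_transistors_m / rv770_die:.2f} M transistors/mm^2")
```

Either way you round it, the gap comes out close to the post's 3 vs 3.77 million transistors per mm².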
I wasn't expecting the GT200 architecture to be dusted by RV770 like this.. wondering what RV870 will be like on 40nm
:hap2:
 
LOL, does AMD pay you money? They should, with the amount of PR you are doing.

On the serious side, if you want to read about GPU architecture, read it on Beyond3D; for discussions of the flaws and innovations of the architecture, head over to their forums, where some of the best minds in the GPU field post.
 
Aces170 said:
LOL, does AMD pay you money? They should, with the amount of PR you are doing.

On the serious side, if you want to read about GPU architecture, read it on Beyond3D; for discussions of the flaws and innovations of the architecture, head over to their forums, where some of the best minds in the GPU field post.

So according to you, that review done by X-bit Labs is all bogus?

EDITED
 
^^ Phew... you don't ever like to stop!

He was talking about the Beyond3D forums... where there are two 100+ page threads discussing the GT200 and RV770 architectures... will dig up the links in a while :p

Btw, why do you act like a headless fanboi? :rofl:
 
Supra said:
^^ Phew... you don't ever like to stop!

He was talking about the Beyond3D forums... where there are two 100+ page threads discussing the GT200 and RV770 architectures... will dig up the links in a while :p

Btw, why do you act like a headless fanboi? :rofl:

You guys have wrongly taken this as a fanboi post; it's nothing of the sort..

Yeah, hope he was referring to this:

PCGH - Pixelshader-Shootout: RV770 vs. GT200 - Beyond3D Forum

PCGH.COM has done just a pixel-shader shootout, which is only one part of what X-bit Labs covered.. :hap5:

or maybe this one:

RV770 vs GT200 : hidden potential? - Page 2 - Beyond3D Forum

What do you mean by headless fanboi?? :mad:
 
I just read that X-bit Labs stuff. If what they have found is true, we could see the end of GTX2XX, as we know it, very soon. :p

GTX2XX is no match for 48XX in shader power. In newer games, which use shaders a lot, GTX2XX will find it tough to match 48XX. In games where texture and fillrate performance matter more, GTX2XX will win.

And newer games rely more on shaders. Also see the shader performance of GTX2XX under SM 3.0; it's poor.
 
^^ Right. I was most surprised by the geometry and physics benchmarks; I didn't expect GT200's performance to be so poor. If ATI gets Havok drivers
ready, NV will be in serious trouble.. :ohyeah:
 
Beyond3D has more serious discussions and many more knowledgeable people.

you won't be able to grasp most of the stuff :)

my friend who works on game engine design can't understand shit there either :p
 
What are you trying to say? You mean I'm too dumb to grasp anything on B3D? Your friend? Huh.
I hope he isn't another pappu. Give him my mail; I'll make him understand all this stuff you called shit, if he's not getting it.. lol
 
muzux2 said:
What are you trying to say? You mean I'm too dumb to grasp anything on B3D? Your friend? Huh.
I hope he isn't another pappu. Give him my mail; I'll make him understand all this stuff you called shit, if he's not getting it.. lol

He doesn't imply that you are dumb, but if a guy pursuing a B.Tech in CSE from an IIT finds it difficult to grasp something in his own field, then it's kinda hard to expect that someone trolling around on forums will be the one to make him understand it. :tongue:
As for architectural efficiency, it means squat to me and to everybody else except the guys who design these things. If the thing can game well, then it doesn't concern me in the least that my card doesn't output as many fps per mm², per Gbps, or per transistor as the other card.
Perf/power and perf/price do make sense, but ATI isn't that far ahead in those departments; the low idle power of the GTX 2xx series and the price cuts have almost nullified the edge ATI had there.
 
That article was posted on 24th June :p. The prices of GTX 200 cards are much better now, so Nvidia is the better buy. The power consumption of the GTX 260/280 is much better too; the GTX 260 consumes just 25 W at idle and the 280 a little more. And who cares about the architecture? The GTX 280 is still the most powerful single GPU and ATI can't beat that. The 4870X2 is slightly faster than the GTX 280, but it costs $100 more, so it's not worth buying, considering the micro-stutter and that the 4870X2's idle power consumption equals the GTX 280's under load :rofl:
 
gamervivek said:
He doesn't imply that you are dumb, but if a guy pursuing a B.Tech in CSE from an IIT finds it difficult to grasp something in his own field, then it's kinda hard to expect that someone trolling around on forums will be the one to make him understand it. :tongue:
Obi-Wan Kenobi (showing a dart) to Dexter Jettster: Can you tell me what this is?

DJ: Whoa! ... this baby belongs to them cloners. What you got here is a Kamino saberdart.

Kenobi: I wonder why it did not show up in our analysis archives.

DJ: ... Those analysis droids focus only on symbols. Huh! I should think you Jedi would have more respect for the difference between knowledge and wisdom.

- From Attack of the Clones
RoBoGhOsT said:
That article was posted on 24th June :p. The prices of GTX 200 cards are much better now, so Nvidia is the better buy. The power consumption of the GTX 260/280 is much better too; the GTX 260 consumes just 25 W at idle and the 280 a little more. And who cares about the architecture? The GTX 280 is still the most powerful single GPU and ATI can't beat that. The 4870X2 is slightly faster than the GTX 280, but it costs $100 more, so it's not worth buying, considering the micro-stutter and that the 4870X2's idle power consumption equals the GTX 280's under load :rofl:

While discussing an architecture, does it really matter that the article was published two months back? The G200 architecture hasn't changed since then; its weaknesses and strengths are still there.

While it's true that ultimately it's the gaming performance that matters, putting a card through its paces in a suite of synthetic tests can reveal a lot about its potential.
And here, with shaders becoming more and more important, the 48XX series looks quite balanced and strong.
 
^^ But if you look at it from the actual architecture POV, the Nvidia design is superior, because each 5-way shader unit from ATI is bound together and performs its computations on a single vector, while Nvidia's shaders are all independent.

And from the consumer POV, the battle goes on and the two companies exchange the lead every 6 months... let's see what Nvidia comes out with on 55nm next :)
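
A toy sketch of why that 5-way packing matters (my own simplification with made-up numbers, not how either compiler actually works): a fully dependent chain of scalar ops can only fill one of RV770's five slots per cycle, while each of G200's scalar SPs issues one op per clock regardless.

```python
# Toy model of issue-slot utilization; illustrative only, not vendor code.
import math

def vliw5_cycles(total_ops, independent_per_step):
    """Cycles for a VLIW5 unit that can co-issue up to 5 independent ops."""
    return math.ceil(total_ops / min(5, independent_per_step))

def scalar_cycles(total_ops):
    """A scalar SP issues one op per clock; no packing needed."""
    return total_ops

chain = 100  # e.g. x = f(x) applied 100 times: every op depends on the last
print("VLIW5, fully dependent chain:", vliw5_cycles(chain, 1), "cycles (1/5 slots used)")
print("VLIW5, fully vectorizable   :", vliw5_cycles(chain, 5), "cycles (5/5 slots used)")
print("Scalar SP, either case      :", scalar_cycles(chain), "cycles")
# RV770 ships 160 such VLIW5 units (800 lanes) vs G200's 240 scalar SPs,
# so how well real shader code packs decides which chip comes out ahead.
```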
 
_pappu_ said:
^^ But if you look at it from the actual architecture POV, the Nvidia design is superior, because each 5-way shader unit from ATI is bound together and performs its computations on a single vector, while Nvidia's shaders are all independent.

But you won't always see peak performance from real code.. RV770 can do 3.33 times (800/240) the work of GTX 200 per clock.. Both architectures make use of threads that are independent of each other across their multiple SIMD units..
Well, X-bit Labs has shown whose architecture is superior. :bleh:
 
^^ Actually, if you take the application of CUDA to physics simulation, NV has a huge advantage.

Since AMD relies on more, simpler shader units, it requires more complex programming to utilize them to the fullest. ATM in games NV has the fastest card, but that's a brute-force approach. ATI has done the right thing by trying to minimize cost first and look at the performance metric later.

In the end it will boil down to the DX10.1 performance advantage or PhysX support.
 
Aces170 said:
^^ Actually, if you take the application of CUDA to physics simulation, NV has a huge advantage.

Since AMD relies on more, simpler shader units, it requires more complex programming to utilize them to the fullest. ATM in games NV has the fastest card, but that's a brute-force approach. ATI has done the right thing by trying to minimize cost first and look at the performance metric later.

In the end it will boil down to the DX10.1 performance advantage or PhysX support.

Agree with you.. :ohyeah:
 
And in this matter, Intel's Larrabee is at the other extreme from Nvidia:

it has 16 shader lanes tied together, the way ATI has 5, while Nvidia's are all independent.

me is learning a bit from my friend :p
 
muzux2 said:
But you won't always see peak performance from real code.. RV770 can do 3.33 times (800/240) the work of GTX 200 per clock.. Both architectures make use of threads that are independent of each other across their multiple SIMD units..
Well, X-bit Labs has shown whose architecture is superior. :bleh:

This must be the most hilarious post from you, lol!! So if RV770 is 3.33 times faster :rofl:, why is it equal or slower in most games against the GTX 200? I would think it would be much faster with those calculations! :S

Those 800 shaders work in groups of 5 and are dependent likewise, so if you have to compare bare numbers, that's 160 shader units against the 240 shaders of a 280.

The bottom line is actual gaming performance, where both companies are equally poised: Nvidia has the fastest single card and ATI has the best value for money. Nothing as of now suggests whose cards are going to be faster in future games. Synthetic benchmarks made by one particular website, proving one architecture way ahead of the other while real-world gaming shows totally different results, is something I'm not buying. If this had come from AnandTech/Guru3D, and the actual gaming performance had looked like 2900 XT vs 8800 GTX, then I might have believed X-bit Labs, but as of now, nothing is backing their claims!
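
To put numbers on both framings (using the commonly quoted launch clocks; treat them as assumptions):

```python
# The "how much faster" answer depends entirely on what you count.
rv770_lanes, rv770_mhz = 800, 750    # 160 VLIW5 units x 5 lanes
g200_sps,    g200_mhz  = 240, 1296   # scalar SPs run at the shader clock

print(f"lanes per clock      : {rv770_lanes / g200_sps:.2f}x")   # 3.33x, muzux2's number
print(f"lanes per second     : "
      f"{rv770_lanes * rv770_mhz / (g200_sps * g200_mhz):.2f}x")  # ~1.93x with clocks factored in
print(f"VLIW units per clock : {(rv770_lanes / 5) / g200_sps:.2f}x")  # 0.67x, the 160-vs-240 view
# Real shader throughput lands somewhere in between, depending on how
# well the code fills RV770's 5 slots -- which is why game results and
# synthetic ALU tests can disagree so sharply.
```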
 
Intel has 16 Pentium cores (not Core 2 Duo); it's more like 16 lower-end processors designed for specific tasks. The beauty is that it will be x86-based and will have a software API, so an upgrade from Shader Model 3 to SM 4 will just require a driver update :)

Dunno how they will get it to work with current applications that are optimized for certain architectures, but then again, Intel has a USD 5 billion R&D budget...
 