Graphics Cards: GTX 200 vs RV770 - Architecture Review

^^ And add to that the fact that Intel has one of the best driver-building teams out there... way ahead of NVIDIA and ATI anyway.

Man, it's going to get interesting :)
 
i_max2k2 said:
This must be the most hilarious post from you lol!! So if RV770 is 3.33 times faster :rofl: , why is it equal to or slower than the GT200 in most games? I would think it would be much faster with those calculations! :S

Those 800 shaders work in groups of 5 and are dependent within each group, so if you have to compare bare numbers it's 160 shaders against the 240 shaders of a GTX 280.

The bottom line is actual gaming performance, where both companies are equally poised: NVIDIA has the fastest single card and ATI has the best value for money. Nothing as of now suggests whose card is going to be faster in future games. Synthetic benchmarks made by a particular website that prove one architecture way ahead of the other while real-world gaming shows totally different results are something I'm not buying. If this had come from AnandTech/Guru3D and the actual gaming gap had been like 2900 XT vs 8800 GTX, then I might have believed Xbit Labs, but as of now nothing is backing their claims!

Well, RV770's 3.3x work-per-clock advantage can't be seen in games because of the much higher shader clocks (~2.5x) on GT200. This gets balanced out, and hence the output from both architectures is similar.
Furthermore, you can't expect gaming performance from shaders alone; there are various other factors like efficient ROPs/RBEs, fast and efficient Z/stencil, latency hiding, availability of textures, faster caches, etc. All of these make a major contribution to gaming performance and are completely different on the two architectures.
I don't believe that 160 shaders (800/5) are comparable to 240. That is a theoretical assumption; you can't compare shaders across the two architectures. Hilbert says...
Now please understand that ATI uses a different shader processor architecture as opposed to NVIDIA, so do not compare the numbers that way, or in that manner.

Guru3D

RV770's processing power wouldn't then be equal to 1.2 TFLOPS if you took only 160 shaders into consideration.

750 MHz x 800 SPs x 2 FLOPs per clock (one MAD each) = 1.2 TFLOPS

It's not advisable to say 160 shaders are equal to 240.. :bleh:
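To put actual numbers on this, here's a quick sketch of how those theoretical peaks are usually worked out. The clocks and issue rates below are the commonly published figures for the HD 4870 (RV770) and GTX 280 (GT200); treat them as assumptions, not measurements.

```python
# A hedged sketch: theoretical peak FP32 throughput as it is usually quoted.
# Clocks and issue rates are the commonly published figures, used as assumptions.

def peak_gflops(alu_count, clock_mhz, flops_per_alu_per_clock):
    """Peak = number of ALUs x clock (MHz) x FLOPs each ALU can issue per clock."""
    return alu_count * clock_mhz * flops_per_alu_per_clock / 1000.0

# RV770: 800 stream processors at 750 MHz, each issuing one MAD (2 FLOPs) per clock
rv770_peak = peak_gflops(800, 750, 2)      # ~1200 GFLOPS, the "1.2 TFLOPS" figure

# GT200 (GTX 280): 240 SPs at the 1296 MHz shader clock; counting the
# co-issued MUL alongside the MAD gives 3 FLOPs per clock
gt200_peak = peak_gflops(240, 1296, 3)     # ~933 GFLOPS

print(f"RV770 peak: {rv770_peak:.0f} GFLOPS, GT200 peak: {gt200_peak:.0f} GFLOPS")
```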
 
You kidding??? :rofl:

Intel's driver team now is hardly its strongpoint. On the integrated graphics side we continue to have tons of issues, even as we're testing the new G45 platform we're still bumping into many driver related issues and are hearing, even from within Intel, that the IGP driver team leaves much to be desired. Remember that NVIDIA as a company is made up of mostly software engineers - drivers are paramount to making a GPU successful, and Intel hasn't proved itself.

I asked Intel who was working on the Larrabee drivers, thankfully the current driver team is hard at work on the current IGP platforms and not on Larrabee. Intel has a number of its own software engineers working on Larrabee's drivers, as well as a large team that came over from 3DLabs. It's too early to say whether or not this is a good thing, nor do we have any idea of what Intel's capabilities are from a regression testing standpoint, but architecture or not, drivers can easily decide the winner in the GPU race.

Developer relations are also very important. Remember the NVIDIA/Assassin's Creed/DirectX 10.1 fiasco? NVIDIA's co-marketing campaign with nearly all of the top developers is an incredibly strong force. While Intel has the clout to be able to talk to game developers, we're bound to see the clash of two impossibly strong forces here.

NVIDIA, the company that walked into 3dfx's house and walked away with its IP; the company that could be out-engineered and outperformed by ATI for an entire year and still emerge as dominant.

_pappu_ said:
^^ And add to that the fact that Intel has one of the best driver-building teams out there... way ahead of NVIDIA and ATI anyway.

Man, it's going to get interesting :)
 
muzux2 said:
Well, RV770's 3.3x work-per-clock advantage can't be seen in games because of the much higher shader clocks (~2.5x) on GT200. This gets balanced out, and hence the output from both architectures is similar.

Furthermore, you can't expect gaming performance from shaders alone; there are various other factors like efficient ROPs/RBEs, fast and efficient Z/stencil, latency hiding, availability of textures, faster caches, etc. All of these make a major contribution to gaming performance and are completely different on the two architectures.

I don't believe that 160 shaders (800/5) are comparable to 240. That is a theoretical assumption; you can't compare shaders across the two architectures. Hilbert says...

Guru3D

RV770's processing power wouldn't then be equal to 1.2 TFLOPS if you took only 160 shaders into consideration.

750 MHz x 800 SPs x 2 FLOPs per clock (one MAD each) = 1.2 TFLOPS

It's not advisable to say 160 shaders are equal to 240.. :bleh:

Bull... stuff doesn't get measured by a direct and simple equation like that.

FLOPS are not something you get by a simple multiplication, the way you might calculate CPU speed (FSB x multiplier).

FLOPS should measure real-world performance, and to do that you need to run benchmarks.
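For instance, here's a rough sketch of what "measuring" means: time an actual workload and compare the achieved rate against whatever peak the vendor quotes. The NumPy matrix multiply and the 1200 GFLOPS placeholder are just illustrations, not claims about any particular card.

```python
# A rough sketch of "FLOPS should be measured, not multiplied out":
# time a real workload, count its floating point operations, and compare
# the achieved rate against the vendor's quoted peak (placeholder value below).
import time
import numpy as np

N = 2048
a = np.random.rand(N, N).astype(np.float32)
b = np.random.rand(N, N).astype(np.float32)

start = time.perf_counter()
c = a @ b                                   # a matrix multiply does ~2*N^3 FLOPs
elapsed = time.perf_counter() - start

achieved_gflops = 2 * N**3 / elapsed / 1e9
quoted_peak_gflops = 1200.0                 # assumed vendor figure, for illustration

print(f"achieved: {achieved_gflops:.1f} GFLOPS, "
      f"{100 * achieved_gflops / quoted_peak_gflops:.1f}% of the quoted peak")
```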
 
sTALKEr said:
Bull... stuff doesn't get measured by a direct and simple equation like that.

FLOPS are not something you get by a simple multiplication, the way you might calculate CPU speed (FSB x multiplier).
FLOPS should measure real-world performance, and to do that you need to run benchmarks.

Not bull, check here:

Fudzilla

:bleh:
 
SidhuPunjab said:
You kidding??? :rofl:

Agree with you on that :)

Intel's integrated chipset drivers have always been a joke.

I know that :p

Remember all the hype when the GMA 900 was about to launch? There was talk that it would be the only IGP capable of running Doom 3.

We all know what came of those stupid claims. The GMA 950 was just another continuation of the same story, and so has been the G45 chipset.

Intel's IGPs have a history of having not only broken hardware but also broken drivers.
 
8800GT beats 4870 in folding :eek:wned:

ATI -

chart_rev1.gif


Nvidia -

nvidia_by_Card_then_PPD.gif


Source - legoman666 (XS)

Not all people buy a GPU only for gaming; it's sad that only a few people in India contribute to WCG.
 
SidhuPunjab said:
8800GT beats 4870 in folding :eek:wned:

ATI -

Nvidia -

Source - legoman666 (XS)

Not all people buy a GPU only for gaming; it's sad that only a few people in India contribute to WCG.

Exactly... the FLOPS figures companies market their products as attaining are going to remain marketing FUD as long as real-world benchmarks don't measure them.

Dude... there is more to life than quoting websites all the time to back yourself up.

Theoretical peak performance just counts the number of full-precision FP adds and muls the hardware can issue. It is a number that is NEVER achieved in the real world because there are too many other dependencies.

And most importantly, there are a million different definitions of what one FLOP constitutes.
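For example, here's a quick sketch of how the same hardware gets two different "FLOPS" numbers depending on whether a multiply-add counts as one op or two. The unit count and clock below are assumptions for illustration only.

```python
# A sketch of why FLOP counts are slippery: a multiply-add (MAD) unit can be
# counted as 1 operation per clock (one instruction) or 2 (a mul plus an add).
# The unit count and clock below are illustrative assumptions only.

alu_count = 800       # assumed number of MAD-capable ALUs
clock_ghz = 0.75      # assumed clock in GHz

mad_counted_as_one = alu_count * clock_ghz * 1   # 600 "GFLOPS"
mad_counted_as_two = alu_count * clock_ghz * 2   # 1200 "GFLOPS" -- the headline figure

print(f"same chip, two numbers: {mad_counted_as_one:.0f} vs {mad_counted_as_two:.0f} GFLOPS")
```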
 
It isn't mentioned whether the PPD figure for NVIDIA is an average score or not. Further, you can't compare the HD 4870 or any ATI card without knowing what processor the card was paired with! :bleh:

And why have you hijacked this thread? :mad:
 
No one is hijacking any thread. All we are saying is that your posts are spreading AMD-quoted marketing FUD about as much as Fudzilla does :lol:
 
sTALKEr said:
Agree with you on that :)

Intel's integrated chipset drivers have always been a joke.
I know that :p

Remember all the hype when the GMA 900 was about to launch? There was talk that it would be the only IGP capable of running Doom 3.

We all know what came of those stupid claims. The GMA 950 was just another continuation of the same story, and so has been the G45 chipset.

Intel's IGPs have a history of having not only broken hardware but also broken drivers.

How many times have you had a problem with Intel drivers?? They always work.
Their main problem was that a five-year-old designed their onboard graphics on his tea break.

And muzux2, no offense, but... FANBOI :)
 
sTALKEr said:
No one is hijacking any thread. All we are saying is that your posts are spreading AMD-quoted marketing FUD about as much as Fudzilla does :lol:

If you think that isn't the way to calculate 1.2 TFLOPS, could you explain here how to calculate it? Then I'll agree you are someone I should be scared of.. :rofl:

Remember, 1.2 TFLOPS is an official figure from AMD, and everyone knows that well..
 
What's the point of fighting over architecture? Everyone knows the GTX 280 pwned all ATI cards; they have no single GPU that can beat it. Architecture doesn't matter much, gaming performance matters. The 4850 is a good card compared to the NVIDIA 9800, but the GTX 280 is better than RV770 in gaming performance.
 
I don't know much of the technical stuff...

But 800 shaders in groups of 5 will only work efficiently with better DRIVERS.

Of course, ATI and NVIDIA shaders cannot be compared directly.

Once, even an 8600 GT beat the 2900 XT in Lost Planet (when the 2900 XT launched) due to the lack of up-to-date drivers; in technical terms, most of the shaders were not being used. But the same happened to NVIDIA in Company of Heroes, where ATI was outperforming NVIDIA.

But this time ATI did launch an amazing product with good drivers.

All I am saying is that for both NVIDIA and ATI it's up to their DRIVER teams to optimize for their GPUs and get the best out of each architecture.

Technically, 800 shaders at 100% utilization can outperform the current NVIDIA architecture in gaming, but in real-world conditions it differs :)
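For what it's worth, here's a minimal sketch of the "groups of 5" idea: how many of RV770's 800 ALUs actually do work depends on how many of the 5 lanes the compiler/driver can pack per instruction. The packing levels shown are made-up illustrations, not measurements.

```python
# A minimal sketch of the VLIW5 packing argument: RV770's 800 ALUs are arranged
# as 160 five-wide units, so the number of lanes the compiler/driver can fill
# per issued instruction determines how many of the 800 ALUs are actually busy.
# The packing levels are illustrative, not measured.

VLIW_UNITS = 160          # 800 ALUs / 5 lanes per unit
LANES_PER_UNIT = 5

for lanes_filled in range(1, LANES_PER_UNIT + 1):
    busy_alus = VLIW_UNITS * lanes_filled
    share = 100 * busy_alus / (VLIW_UNITS * LANES_PER_UNIT)
    print(f"{lanes_filled}/5 lanes packed -> {busy_alus} of 800 ALUs busy ({share:.0f}%)")
```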
 
A sane person wanting good gaming performance at decent cost will buy ATI today.

No amount of theoretical numbers and arguments can change this situation.

F@H numbers are a whole different game.

The scores there have nothing to do with a card's real arithmetic performance; they have more to do with the clients.

I have been running the F@H GPU client for a while now and discussing things on the F@H forums, and it's no secret that the current ATI clients and drivers are not optimised for it.

The current client core does not even stress the GPU by 10%; temps hardly move 2-4°C above idle on ATI cards. Still, it's much faster than pure CPU folding.

But on NVIDIA, if you run the GPU client, the GPU immediately kicks in and your performance shoots up.

If you are buying a card for folding, buy yourself multiple cheap 8600 GTs and be happy.

If you are buying a card for gaming, get an ATI 4800-series card and be happy.
 
Shripad said:
A sane person wanting good gaming performance at decent cost will buy ATI today.

If you are buying a card for gaming, get an ATI 4800-series card and be happy.

Take a guy with a 2560x1600 display, and trust me, for him the best bet is GTX 280 SLI or tri-SLI. No combination of 4800-series cards would be good for him. If you're still buying ATI there, you're not sane.

It's just one scenario where a lot of power is needed and the 4870 can't make the cut; at the lower end, exactly the opposite is true for the 4850/4870. Even at 1920x1200, in most situations the 4870 is not a good buy against the 280.

But most people don't fall into the extreme categories; most usually have 22" monitors, so 4850 CF / 1x 4870 / 1x 260 / 1x 280 all come into the picture.

ATI is definitely a great VFM choice here. Even then, a significant portion of buyers would rather go with a single fast GPU than a CF of two cheaper cards, if money permits. With the cheapest 280s at $385 after MIR, people would prefer getting one, considering power consumption, CF hiccups, etc.

So I would disagree with the statement that for pure gaming only the 4800 series is the way to go. The 4870 does look better than the 260, but 260s are very evenly priced now, $10-$20 cheaper than the 4870, so it's still very competitive.

The 4850's temperature issue is another thing to look at; it's not a major bug, but a big chunk of people would consider it a deal-breaking factor.

All that said, in my opinion this is how the hierarchy stands:

The 4850 is one of the best VFM cards ATM;
the 260 and the 4870 are in a good tussle with the latest pricing;
the 280 is king of the hill at the moment, and it is in no way expensive for its performance anymore!
 