More rumors on nVidia G71, G80 & ATi R600 :)

RiO

Explorer
Nvidia's G80 will make ATI stumble in April. Anonymous sources mention that, since the introduction of NV40, which was a very big leap over the flawed NV30 with its spectacular features such as Shader Model 3.0 and SLI, in the couple of months after G71 you are promised to witness another spectacular new technology that Nvidia will bring to the 3D gaming community. For almost a year Nvidia has not really altered its architecture much, as we see now in the G70: an increase in pipelines, the introduction of MIMD, and more vertex units. This is because Nvidia knows ATI is currently having a hard time fixing and improving its bugged R5xx architecture, so it isn't necessary for Nvidia to introduce a dramatically improved architecture. This is what you will see in G71: 32 pipes, an increase in ROPs, and a small bump in core clock, making it more like an ultra-pumped NV40-based architecture. In G80, however, Quad SLI itself can be implemented on a single card with a two-chip solution, because it will carry the first ever dual core GPU, with support for DirectX 10 and Shader Model 4.0. Development of G80 is also said to have been running very intensively since Nvidia's acquisition of ULi.

Read more at: http://www.vr-zone.com/?i=3177
 
Lately VR-Zone has been giving the Inq stiff competition for posting BS :P

Edit: The xpentor site that VR-Zone is linking to has no info about the G80 or R600. Tried their search as well, nothing turns up :lol:
 
Eh... I don't buy the idea of a dual core GPU. Existing GPUs are already massively parallel (16/24-pipe) monsters. What would a dual core GPU add? More pipes :P. That can be done by simply increasing the die size of the existing GPU core. Dual core made sense in CPUs because they are not parallel number-crunching monsters like GPUs, so getting two threads to run at the same time helps there; it makes absolutely zero sense to me in GPUs :P.
 
goldenfrag said:
^^ May have been removed :bleh:

Again, pathetic news for me :@. Why doesn't everything just come in May :@ :@ :@ :@ :@ :@.
Well, flowers come in May... :P ... you can get those..
Anyways, the dual GPU idea is quirky at best....
And I feel these will again trigger a price drop, however minuscule..
 
Chaos said:
Eh... I don't buy the idea of a dual core GPU. Existing GPUs are already massively parallel (16/24-pipe) monsters. What would a dual core GPU add? More pipes :P. That can be done by simply increasing the die size of the existing GPU core. Dual core made sense in CPUs because they are not parallel number-crunching monsters like GPUs, so getting two threads to run at the same time helps there; it makes absolutely zero sense to me in GPUs :P.

Dude, increasing die size is not that simple... why hasn't AMD launched dual core on 110nm? :) ... As the technology shifts towards smaller manufacturing processes, they will try to integrate two GPUs in a single processor, and maybe multithreading will also come into the gaming scene...
 
A dual core GPU will not be bandwidth starved... I am not talking about playing games at 1024x768 resolution :) ... just think about 2048 x xxxx resolutions... Anyways, they will be high-end stuff for some time... it will take time for dual GPU to become standard... even today, dual core processors are not within reach of budget PCs...
 
Rahul said:
Dude, increasing die size is not that simple... why hasn't AMD launched dual core on 110nm? :) ... As the technology shifts towards smaller manufacturing processes, they will try to integrate two GPUs in a single processor, and maybe multithreading will also come into the gaming scene...
You'll need 512-bit memory and maybe some 4 GHz RAM to go along with a dual GPU solution.... bandwidth restrictions don't even begin to describe the problems you'll face here....

Try OCing the memory on a 7800GT with the core clock at its 400 MHz default, and you'll see performance improving... so it's memory bandwidth that has become the issue.... Remember when I mentioned somewhere that DDR4 and XDR should be brought into the game, and fast... well, that was what I meant... Of course, with XDR costing, well... the earth... that's not gonna happen very soon.
btw, one of the new consoles uses XDR... which is it?? Not sure :P
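The bus-width argument above is really just bandwidth arithmetic. A quick sketch (the clock figures are illustrative round numbers, not exact card specs):

```python
# Rough memory-bandwidth arithmetic behind the "512-bit bus" argument.
# All numbers are illustrative, not exact card specs.

def bandwidth_gbps(bus_width_bits, effective_clock_mhz):
    """Peak memory bandwidth in GB/s: bus width in bytes x effective data rate."""
    return (bus_width_bits / 8) * effective_clock_mhz * 1e6 / 1e9

# A 7800GT-class card: 256-bit bus, ~1 GHz effective GDDR3
single = bandwidth_gbps(256, 1000)   # 32.0 GB/s

# A hypothetical dual core part on the same 256-bit bus would have
# to split that ~32 GB/s between two cores.
per_core = single / 2                # 16.0 GB/s each

# The suggested fix: 512-bit bus with 4 GHz effective RAM
beefy = bandwidth_gbps(512, 4000)    # 256.0 GB/s

print(single, per_core, beefy)
```

So doubling the cores without touching the bus halves the bandwidth per core, while the 512-bit / 4 GHz combination would deliver 8x the total.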
 
vandal said:
You'll need 512-bit memory and maybe some 4 GHz RAM to go along with a dual GPU solution.... bandwidth restrictions don't even begin to describe the problems you'll face here....

Try OCing the memory on a 7800GT with the core clock at its 400 MHz default, and you'll see performance improving... so it's memory bandwidth that has become the issue.... Remember when I mentioned somewhere that DDR4 and XDR should be brought into the game, and fast... well, that was what I meant... Of course, with XDR costing, well... the earth... that's not gonna happen very soon.
btw, one of the new consoles uses XDR... which is it?? Not sure :P

Dude, you are totally wrong about the memory OCing... I tried it on my GTX anyway and it didn't improve performance... sometimes it also depends on the game... but if you are playing with full AA and AF, there are very few games where the graphics RAM is the bottleneck... XDR is not that far away, as GDDR3 is reaching its max in the recent GPU releases, so they will come up with something new... and it will be expensive, as always...
 
Rahul said:
A dual core GPU will not be bandwidth starved... I am not talking about playing games at 1024x768 resolution :) ... just think about 2048 x xxxx resolutions... Anyways, they will be high-end stuff for some time... it will take time for dual GPU to become standard... even today, dual core processors are not within reach of budget PCs...

lol Rahul, I ain't talking about a bottleneck, I am talking about the tiny 256-bit memory bus that will have to feed both the cores. As it is, today's single core GPUs are bandwidth starved. ;) Even cards such as the dual 7800GT and the 3D1 from Gigabyte were facing those problems, and those weren't even dual core, just dual GPU. I hope you are getting my point?

And as for AA and AF not being a problem, play FEAR at 1600x1200 with everything high and full AA and AF. ;)
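A back-of-the-envelope sketch of why high resolution plus AA hurts: the multisampled framebuffer alone gets big fast (this assumes 32-bit colour + 32-bit Z per sample and ignores overdraw, texture fetches and compression, so real numbers differ):

```python
# Rough size of a multisampled framebuffer at FEAR-style settings.
# Assumes 4 bytes colour + 4 bytes Z per sample; ignores overdraw,
# texture traffic and framebuffer compression.

def framebuffer_mb(width, height, aa_samples, bytes_per_sample=8):
    """Size of one multisampled colour+Z buffer in MB."""
    return width * height * aa_samples * bytes_per_sample / 1e6

no_aa = framebuffer_mb(1600, 1200, 1)   # ~15.4 MB
aa_4x = framebuffer_mb(1600, 1200, 4)   # ~61.4 MB

# At 60 fps, just writing the 4x AA buffer once per frame already
# costs a few GB/s before any texture or overdraw traffic is counted.
print(no_aa, aa_4x, aa_4x * 60 / 1000)  # last value in GB/s
```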
 
Rahul said:
Dude, you are totally wrong about the memory OCing... I tried it on my GTX anyway and it didn't improve performance... sometimes it also depends on the game... but if you are playing with full AA and AF, there are very few games where the graphics RAM is the bottleneck... XDR is not that far away, as GDDR3 is reaching its max in the recent GPU releases, so they will come up with something new... and it will be expensive, as always...
Buddy, it didn't improve because memory is sensitive to heat...
Theoretically, suppose one 7800GT had 1 GHz mem and the other had 1.6 GHz mem....
Suppose card 1 was run at 400 MHz core and the other at, say, 390 MHz... OK

I'd be willing to bet that the card running at a (slightly) slower core clock but with the faster memory would outperform the faster one with slower memory...

You won't be able to OC mem successfully on any card... because GDDR3 is reaching its limits and both guys are stretching it now....
And a slight OC means the RAM will run much hotter... so data integrity suffers.. which eats back into performance, so the gains are negligible...
This is what I was trying to say in the first place... ;)
 
Blade_Runner said:
lol Rahul, I ain't talking about a bottleneck, I am talking about the tiny 256-bit memory bus that will have to feed both the cores. As it is, today's single core GPUs are bandwidth starved. ;) Even cards such as the dual 7800GT and the 3D1 from Gigabyte were facing those problems, and those weren't even dual core, just dual GPU. I hope you are getting my point?

And as for AA and AF not being a problem, play FEAR at 1600x1200 with everything high and full AA and AF. ;)

Now I got your point :) ... but I think the GPU manufacturers will come up with a proper solution for this, maybe each core will get its own individual 256-bit bus... Actually, I am talking about dual core because after a point there is no way to keep increasing the speed of the chip, as you can see with the Pentium 4 and AMD, since beyond that heat dissipation becomes a major issue... so the companies came up with a better solution and launched dual core, and they will launch quad core soon...

Actually, before the launch of dual core.... single core processors had reached the maximum efficiency and productivity they could provide..... increasing the FSB was not helping, nor was the cache... so the only way out was dual core... :D
 
vandal said:
Buddy, it didn't improve because memory is sensitive to heat...

Theoretically, suppose one 7800GT had 1 GHz mem and the other had 1.6 GHz mem....

Suppose card 1 was run at 400 MHz core and the other at, say, 390 MHz... OK

I'd be willing to bet that the card running at a (slightly) slower core clock but with the faster memory would outperform the faster one with slower memory...

You won't be able to OC mem successfully on any card... because GDDR3 is reaching its limits and both guys are stretching it now....

And a slight OC means the RAM will run much hotter... so data integrity suffers.. which eats back into performance, so the gains are negligible...

This is what I was trying to say in the first place... ;)

Now you are talking about one running at 1.6 GHz and another at 1 GHz; that's a 600 MHz difference, so the performance of the 1.6 GHz card will obviously be higher... but that doesn't prove the GPU is limited by the RAM...
 
Rahul said:
Now I got your point :) ... but I think the GPU manufacturers will come up with a proper solution for this, maybe each core will get its own individual 256-bit bus... Actually, I am talking about dual core because after a point there is no way to keep increasing the speed of the chip, as you can see with the Pentium 4 and AMD, since beyond that heat dissipation becomes a major issue... so the companies came up with a better solution and launched dual core, and they will launch quad core soon...

Actually, before the launch of dual core.... single core processors had reached the maximum efficiency and productivity they could provide..... increasing the FSB was not helping, nor was the cache... so the only way out was dual core... :D
Well, parallelism was obviously the way out for CPUs once all the other tricks to wring out performance were exhausted. But like Chaos said, GPUs are already parallel processors, so adding more parallelism might only add to the woes. Besides, imagine the headache of programming drivers for both single and dual core GPUs ;). SLI seems much better in that instance, although the gains from SLI/Crossfire aren't enough to justify the high price of the platform. Just my 2 cents :)
 
I was getting a 7800GTX but decided to hold off until nVidia releases a DX10 card, which should be before the end of the year based on the rumors.

And the thing about rumors... they're never entirely true but there is always *some truth* in them :)
 
I just hope the G71 & G80 come out in June so that prices fall before next year for me :D :ohyeah: :hap2: :cool2: :clap:
 