Graphic Cards R600 News Thread

Aditya said:
Vista is coming in January 2007.

ATi would be dumb to delay it further :p..

I hope nVidia has a good answer to the 1000 MHz clock speed :/.

Lol it's the Inquirer, remember, the same people behind "Fudo has 32 pipes" :rofl:
 
R600 is ATI's first cable-less Crossfire

WE HEAR that the upcoming DirectX 10 compatible R600 won't need a master card to work in Crossfire.

ATI will implement this bridge in its upcoming RV570 and RV560 chips and will enable more than one chip on a PCB configuration. But it won't be until R600 that this cable-less Crossfire comes into action.

R580+ will be a slight redesign of the existing R580, so don’t expect miracles here. You will still need a master card and a cable to plug them together.

ATI can already put two X1800 GTO cards in Crossfire and it doesn’t need a master card, but the bandwidth between those two cards is not that big. Two slave cards will therefore work and do the job; we tried it here in the lab and it works.

The master card is the last problem that ATI has to solve to be competitive with Nvidia's SLI. ATI's entry-level Crossfire works without a master, and its mainstream card works without a master. The firm just needs a high-end offering and it's sorted. Getting there slowly but surely.
 
Chaos said:
I'm guessing it'll stay at 16/16 but they might bump it up to 24/24. The reason I like ATI's cards are cos they never put any half assed "on paper" features on their cards. When they put in a feature, it works. It was true of pixel shader 2.0, same with shader 3.0, FP blending, h.264 acceleration and now dx10. I for one appreciated the fact that they didn't waste transistors on vertex texture support on their SM3.0 cards cos its dog slow on nvidia which does support them. Now that unified architectures are here, I'm sure there'll be vertex texture support.
Exactly... they 'underspec' their stuff; if there ever was a suitable situation to use that word, this is it!

What I want to know is will the AVIVO thingy make a reappearance on the R600?
Also, a unified shader architecture should have independent shader units, unlike the X1900 where the shader units were in clusters of 4, right again? Correct me if necessary, please.

We know they are better, sure, despite the fact they're MS's new b1tches :tongue: (forgive me oh great 1!!!)... anyone comparing the 6800 series with an X800 series card could tell you... my 6800GT's gone, the ATI I've still got... testament enough
 
vandal said:
Exactly... they 'underspec' their stuff; if there ever was a suitable situation to use that word, this is it!

What I want to know is will the AVIVO thingy make a reappearance on the R600?
Also, a unified shader architecture should have independent shader units, unlike the X1900 where the shader units were in clusters of 4, right again? Correct me if necessary, please.

We know they are better, sure, despite the fact they're MS's new b1tches :tongue: (forgive me oh great 1!!!)... anyone comparing the 6800 series with an X800 series card could tell you... my 6800GT's gone, the ATI I've still got... testament enough

Yea like DUH, of course Avivo will make a reappearance in the R600.

Also, a unified shader architecture should have independent shader units, unlike the X1900 where the shader units were in clusters of 4, right again? Correct me if necessary, please.

Explain exactly what you mean in layman's terms please :S
 
The two major graphics competitors are gradually setting the stage for their next showdown, as nVidia prepares its G80 chip and ATI arms its R600 boards. This next batch of chips will be important for both companies as it will usher in the DirectX 10 era and will offer Shader Model 4.0 support.

nVidia has now confirmed that its G80 chip has taped out and the first chips are being produced, suggesting that the U.S. giant will be first to market with its DX10 product this coming September. ATI's R600 chip will attempt to serve as the nemesis for the G80 but is expected later, possibly in November 2006.

What makes this battle even more intriguing is that this time nVidia and ATI will have some major design differences in their chips, and those differences could help tip the graphics war balance one way or the other. The main difference lies in the way the chips will process pixel shaders, geometry instancing and vertex information. ATI's design is expected to adopt the unified shader model, which allows for flexibility by utilising 64 unified shaders. This way the chip has 64 shader lines available per clock and can use them in any ratio it needs, meaning that, for example, 30 can be pixel, 20 vertex and 14 geometry lines per clock cycle. nVidia's design, however, challenges the unified shader model and will use a more rigid design of 32 pixel shaders and 16 vertex and geometry lines per clock cycle.
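The flexible split described above can be sketched in a few lines of Python. This is purely illustrative (the function names are invented, and the caps follow the unit counts quoted in the article, not any real driver behaviour):

```python
# Hypothetical sketch: a unified shader pool vs a fixed-function split,
# using the unit counts quoted in the article (not real driver code).

def unified_allocation(pixel, vertex, geometry, total=64):
    """A unified pool can split its 64 units in any per-clock ratio."""
    assert pixel + vertex + geometry <= total
    return {"pixel": pixel, "vertex": vertex, "geometry": geometry}

def fixed_allocation(pixel, vertex_geometry, max_pixel=32, max_vg=16):
    """A fixed design caps each workload type separately."""
    return {"pixel": min(pixel, max_pixel),
            "vertex+geometry": min(vertex_geometry, max_vg)}

# The article's example split: 30 pixel, 20 vertex, 14 geometry per clock.
print(unified_allocation(30, 20, 14))
# A vertex-heavy frame saturates the fixed design's shared vertex/geometry cap:
print(fixed_allocation(10, 40))
```

The point of the comparison is that the unified pool never strands units: whatever the workload mix, all 64 can be busy, while the fixed design leaves pixel units idle in a vertex-heavy frame and vice versa.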

These details are not expected to have an immediate impact on DX10 itself, as it is higher level, but they may lead to a speed advantage for either competitor, an advantage which will only be apparent in the most demanding applications and at the highest possible settings, so don't wear your eyes out trying to figure it out.

Information:
http://www.megagames.com/news/html/hardware/r600soon-g80sooner.shtml
 
Shocker

WHEN we first heard that R600 was going to be a big chip, we figured that ATI wanted to completely redesign the chip and fill it with lots of pipes.

We previously wrote that the chip will have sixty-four shader units, but we never realised at the time that the design is actually built around a full sixty-four physical pipes. That is what various high-ranking sources are telling us.

In this scenario, unless Nvidia also manages to triple its pipeline count in the upcoming G80, its chip could lose out big time when put against a 64-pipeline ATI offering.

The outcome is still uncertain, as we don’t know enough about the G80 to draw a final conclusion.

The R600 is scheduled for very late 2006, if all goes smoothly. And, if not, you might see this one in January - probably released by AMD's ATI.

Remember this is a DirectX 10 chip that is an advanced version of the R500 chip the Vole uses in its Xbox 360.

ATI's R600 has 64 real pipes
 
Exactly what do those idiots mean by physical pipes :S. It makes no sense any more with everything decoupled. The chip will definitely not have 64 TMUs or raster ops, for sure... It's a sheer waste of die space cos they'll never be utilized, as apps will more or less always be shader limited. 64 shader pipes are on the cards and I won't be surprised if it does have that many.
 
Chaos said:
Exactly what do those idiots mean by physical pipes :S. It makes no sense any more with everything decoupled. The chip will definitely not have 64 TMUs or raster ops, for sure... It's a sheer waste of die space cos they'll never be utilized, as apps will more or less always be shader limited. 64 shader pipes are on the cards and I won't be surprised if it does have that many.

Were we people supposed to understand any of that :S :S :S :S :p ??
 
lol, let's not get caught up with rumors... when the GPUs are out, we'll see what it can do :) And yeah, even I understood nothing of the first 2 posts :D
 
Chaos said:
Exactly what do those idiots mean by physical pipes :S. It makes no sense any more with everything decoupled. The chip will definitely not have 64 TMUs or raster ops, for sure... It's a sheer waste of die space cos they'll never be utilized, as apps will more or less always be shader limited. 64 shader pipes are on the cards and I won't be surprised if it does have that many.
rofl i am sure fuad doesn't know it himself :lol:
 
Chaos said:
Exactly what do those idiots mean by physical pipes :S. It makes no sense any more with everything decoupled. The chip will definitely not have 64 TMUs or raster ops, for sure... It's a sheer waste of die space cos they'll never be utilized, as apps will more or less always be shader limited. 64 shader pipes are on the cards and I won't be surprised if it does have that many.

bhai.. theek se, saaf angreji mei bolo naa... spill those pearls billy boy... and quick :p
 
64 shaders is a definite minimum; it might even be 96. But I don't expect them to go over 24 TMUs/ROPs. They may just stick to 16 for all you know, but I'm expecting 24.

Edit: from what I understand (or at least I think I understand), the effects that you see in games are mostly due to pixel and vertex shaders, actually mostly pixel shaders. Games nowadays don't need more pixels, they need more pixel operations. So more TMUs/ROPs is just going to give you more pixels that you have no use for, while more pixel shaders are what you really need.
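The trade-off above can be made concrete with a rough back-of-envelope sketch. The clock speed and unit counts here are assumptions picked for illustration, not confirmed specs:

```python
# Rough sketch of the claim above: adding ROPs raises raw pixel fill
# rate, while adding shader units raises math throughput, and a
# shader-limited game only cares about the latter.

CLOCK_MHZ = 650  # assumed core clock, purely illustrative

def fill_rate_mpixels(rops, clock_mhz=CLOCK_MHZ):
    # Assume each ROP writes one pixel per clock.
    return rops * clock_mhz

def shader_ops_millions(shader_units, ops_per_clock=1, clock_mhz=CLOCK_MHZ):
    # Assume each shader unit retires one shader op per clock.
    return shader_units * ops_per_clock * clock_mhz

# Going from 16 to 24 ROPs raises fill rate 50%, but a shader-limited
# game is bounded by the second pair of numbers instead:
print(fill_rate_mpixels(16), fill_rate_mpixels(24))
print(shader_ops_millions(48), shader_ops_millions(64))
```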
 
Wouldn't ROPs matter a lot for performance with supersample AA?

While Shaders have definitely taken the front seat, fill rate should remain quite important for a while..

Besides, it's the Inquirer, don't take it too seriously. :)
 
Also keep in mind that the ALU setup is a bit different from the X1800/X1900 cards, where each unit can issue Vector4 + scalar MADD instructions. Since it's a unified shader architecture, and considering it will be based on the Xbox C1 chip, I presume it will have 4 shader arrays (16 shader processors per array), which equals 64 shader processors in total.

Also, I don't think there will be decoupled eDRAM on R600 like on the R500 C1, so all ROP functions will be performed in the chip itself. As far as ROPs and TMUs are concerned, in my opinion the number will remain at 16.
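The unit-count arithmetic behind that guess can be sketched quickly. The array layout (4 arrays of 16 shader processors) follows the post; the clock and the per-unit MADD width are pure assumptions for scale:

```python
# Back-of-envelope arithmetic for the guessed R600 shader layout.

ARRAYS = 4
UNITS_PER_ARRAY = 16
units = ARRAYS * UNITS_PER_ARRAY      # 64 shader processors in total

# For scale only: the X1800/X1900-style Vector4 + scalar co-issue gives
# 5 MADD components per unit per clock. R600's actual issue width is
# unknown; we borrow the old width just to get a ballpark figure.
X1900_STYLE_WIDTH = 4 + 1
FLOPS_PER_MADD = 2                    # one multiply + one add
CLOCK_HZ = 650e6                      # assumed clock, not confirmed

gflops = units * X1900_STYLE_WIDTH * FLOPS_PER_MADD * CLOCK_HZ / 1e9
print(units, gflops)
```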
 
Darth_Infernus said:
Wouldn't ROPs matter a lot for performance with supersample AA?

That is true; however, it also depends on the ROP architecture. The current algorithm for supersampling on both ATI and NVIDIA is somewhat different from what was used earlier: it only detects the alpha textures on screen and applies supersampling to them, rather than to the whole screen, hence the smaller performance hit.

While Shaders have definitely taken the front seat, fill rate should remain quite important for a while..

Texturing and multi-texturing are also growing, but very slowly, and will continue to grow. The current generation of ATI cards has 16 decoupled texture units, while on NVIDIA they are coupled with the shader units, which is sufficient for the current generation of games. However, the demand for mixed shaders (especially MADD shaders) is growing very fast, hence the need for more shader processors, but don't think texturing is finished. Also, it's more sensible to make shader-intensive games rather than texture-intensive ones, as texturing requires a lot of memory bandwidth.
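The bandwidth point can be made concrete with a rough sketch. All figures are assumptions for illustration: 16 TMUs, a 650 MHz clock, 32-bit texels, and worst-case uncached fetches:

```python
# Why texture-heavy shading is bandwidth-hungry, as the post argues:
# a rough estimate of memory traffic from texture fetches alone.

def texture_bandwidth_gb(tmus, clock_mhz, bytes_per_texel=4):
    """Worst-case uncached texel traffic: one fetch per TMU per clock."""
    return tmus * clock_mhz * 1e6 * bytes_per_texel / 1e9

# 16 TMUs at an assumed 650 MHz fetching 32-bit texels:
print(round(texture_bandwidth_gb(16, 650), 1), "GB/s")
# An ALU MADD, by contrast, consumes no external memory bandwidth,
# which is why shader-heavy workloads scale better with unit count.
```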
 