Graphic Cards R600 News Thread

Even if the R600 beats the G80, I still think Nvidia did the right thing by launching the G80 so early. Now if only they could manage to get the mainstream DX10 cards (8600 and so on) out along with the G80 refresh at the same time as the R600 launch, they would still have a head start over ATI in the mainstream segment, even if they lose the performance crown.
 
VR-Zone : Technology Beats - ATi R600 Card Re-Design In Progress

A re-design of the R600 card and its cooling is currently underway to make it shorter and better cooled. The original R600 card design is 12 inches long, and ATi is probably trying to shorten it to at least match the 8800GTX's length. The Inquirer has recently reported that ATI has already produced some first R600 cards, clocked lower, to send out to game developers for debugging and optimising their games for the R600. The R600 card we have seen will conform to the new PCI-SIG graphics spec of delivering 225/300W of power for high-end graphics cards. Therefore it will have a new 2x4-pin connector for additional power on top of the current 6-pin connector.
 
ATI R600 can only manage 16 pixels per clock

If the Inquirer is right about 64 vector-based shader processors, then ATI is right on the money. I never thought scalar processors lead to better efficiency; in fact, vector processing is the way to go (a pixel shader operates on 4 data elements, i.e. RGBA).

G80 style = Scalar+Scalar+Scalar+Scalar MADD/MUL dual-issue

R600/C1 style = Vec4+Scalar MADD/ADD dual-issue (speculating about the ADD shader processing, based on ATI's traditional approach)

4 scalars = 1 Vec4, so ATI has the benefit that more scalar operations can be executed in a single clock (since the Xbox 360 could also do Vector4+Scalar), hence more raw power. The G80's scalar design equals a 32-pipeline design of Vector4 MADD+Scalar, since 4 scalar SP ALUs equal 1 Vector+Scalar ALU, so we can divide 128 by 4, hence 32.
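To put rough numbers on that, here is a quick back-of-the-envelope sketch using the rumoured unit counts; it treats one Vec4 unit as four scalar lanes and ignores co-issue and scheduling restrictions, so it is peak maths only, not real-world efficiency:

```python
# Rough shader-layout comparison using the rumoured unit counts above.
# Treats one Vec4 unit as four scalar lanes; ignores co-issue restrictions.

G80_SCALAR_UNITS = 128   # G80: 128 scalar stream processors
R600_VEC4_UNITS = 64     # rumoured R600: 64 Vec4(+Scalar) units

g80_as_vec4_pipes = G80_SCALAR_UNITS // 4   # 128 / 4 = 32 equivalent Vec4 pipes
r600_as_scalar_lanes = R600_VEC4_UNITS * 4  # 64 * 4 = 256 equivalent scalar lanes

print(g80_as_vec4_pipes, r600_as_scalar_lanes)  # -> 32 256
```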

Not sure about the texturing units, but it's going to be somewhere between 16 and 32 full address+sampler units, unlike the G80's 32 address units + 64 texture filtering units.
 
MANY THINGS ABOUT ATI's upcoming R600 are surprising, to say the least.

First of all, the GPU is a logical development of the design that started with the R500/Xenos, or Xbox 360 GPU, but without the 10MB eDRAM part. Unlike the Xbox GPU, the R600 has to be able to support a large number of resolutions and, if we take a look at today's massive 5Mpix resolutions, it is quite obvious that the R600 would need at least five times more eDRAM than the Xbox 360 has.

DAAMIT kept the RingBus configuration for the R600 as well, but the width has now doubled. The external memory controller is a true 512-bit variant, while internally you will be treated to a bi-directional bus of double that width. The 1024-bit RingBus is approaching.

Since the company believes this is the best way to keep all of the shading units well fed, the target is to have 16 pixels out in every clock, regardless of how complex the pixel might be. But don't think for a second that the R600 is weaker than the G80 on account of ROP units alone.

We also learned the reason why the product was delayed for so long. It seems that ATI encountered yet another weird bug with the A0 silicon, but this one did not lock the chips at 500MHz; rather, it disabled Multi-sampling Anti-aliasing (MSAA). At press time, we were unable to find out whether the A1 revision still contains the bug. Retail boards will probably run A2 silicon.

The R600 isn't running at final clocks yet, but the company is gunning for a 700 to 800MHz GPU clock, which yields a pixel fill rate in the range of the G80's or even a bit more. In terms of shading power, things are getting really interesting.

Twenty-four ROPs at 575MHz equals 13.8 billion pixels per second, while 16 ROPs at 750MHz end up at 12.0 billion pixels per second. At the same time, expect ATI to fare far better in more complex, shader-intensive applications.
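For reference, the fill-rate arithmetic is simple enough to check yourself (ROP count times core clock; the R600 clock is still speculative at this point):

```python
# Peak pixel fill rate = ROP count x core clock.
def fill_rate_gpix(rops: int, clock_mhz: float) -> float:
    """Peak fill rate in billions of pixels per second."""
    return rops * clock_mhz / 1000.0

print(fill_rate_gpix(24, 575))  # G80/8800GTX: 13.8 Gpixels/s
print(fill_rate_gpix(16, 750))  # rumoured R600: 12.0 Gpixels/s
```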

Regarding the number of shaders, expect only marketing wars here. Nvidia has 128 shader units, while the R600 on paper features "only" 64. However, don't expect ATI's 64 shaders to offer just half the performance. In fact, you might end up wildly surprised.

ATI's R600 features 64 Shader 4-way SIMD units. This is a very different and complex approach compared to Nvidia's relatively simple scalar Shader units.

Since one R600 SIMD shader can calculate the result of four scalar units, it yields the scalar performance of 256 units, while Nvidia comes with 128 "real" scalar units. We are heading for very interesting results in DX10 performance, since game developers expect that NV's stuff will be faster with simple instructions while the R600 will excel in the complex-shader arena. In a way, you could compare the R600 and G80 to the Athlon XP versus the Pentium 4: one was doing more work in a single clock, while the other used a higher clock speed to achieve equal performance.

ATI R600 can only manage 16 pixels per clock
 
IT SEEMS the board which DAAMIT will use as a host for its R600 GPU and corresponding components features a number of innovations and improvements that are interesting, to say the least.

First of all, you need to know that this PCB (Printed Circuit Board) is the most expensive one that DAAMIT has ever ordered. It's a complex 12-layer monster with certain manufacturing novelties used in order to support the requirements of the R600 chip, most notably the 512-bit memory controller and the distribution of power to the components.

The memory chips are arranged in a similar manner to those on the G80, but each memory chip has its own 32-bit wide physical connection to the chip's RingBus memory interface. Memory bandwidth will therefore range anywhere between 115.2GB/s (GDDR3 at 8800GTX-style 900MHz in DDR mode, i.e. 1.8GHz effective) and 140.8GB/s (GDDR4 at 1.1GHz DDR, or 2.2GHz in marketing-speak).
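If you want to check the maths yourself, it is a one-liner (bus width times effective data rate; the clocks are the rumoured figures above):

```python
# Memory bandwidth in GB/s = (bus width in bits / 8) x effective data rate in GT/s.
def bandwidth_gb_s(bus_bits: int, data_rate_gt_s: float) -> float:
    return bus_bits / 8 * data_rate_gt_s

print(bandwidth_gb_s(512, 1.8))  # GDDR3 at 900MHz DDR -> 115.2 GB/s
print(bandwidth_gb_s(512, 2.2))  # GDDR4 at 1.1GHz DDR -> 140.8 GB/s
```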

This will pretty much leave the Geforce 8800 series in the dust, at least as far as marketing is concerned. Of course, 86GB/s sounds like pretty much nothing when compared to 140GB/s - expect to see that writ large on the retail boxes.

The R600 board is HUGE but, funnily enough, not in length. Even though the very first revision of the board was as long as the 7900GX2, back in late August/early September the engineers pulled off a miracle and significantly reduced the size of the board. Right now they are working on even further optimisation of components but, from what we saw, this is the most densely packed product in the history of 3D graphics.

The PCB will be shorter than the 8800GTX's in every variant, and you can compare it to the X1950XT and 7900GTX. The huge thing is the cooler. It is a monstrous, longer-than-the-PCB, quad-heat-pipe, Arctic Cooling-style-fan-on-steroids-looking beast, built from a lot of copper. Did we say that it also weighs half a ton?

This is the heaviest board that will hit the market and you will want to install the board while holding it with both hands. The cooler actually enhances the structural integrity of the PCB, so you should be aware that R600 will bring some interesting things to the table.

If you ask yourself why in the world AMD would design such a thing, the answer is actually right in front of you. Why does the cooler need to be so big? Well, it needs to dissipate heat from practically every element of the board: the GPU chip, the memory chips and the power regulation unit.

There will be two versions of the board: Pele comes with GDDR4 memory, and UFO has GDDR3 memory, as Charlie already wrote here. DAAMIT is currently contemplating one and two gigabyte variants, offering a major marketing advantage over Graphzilla's "uncomputerly" 640 and 768MB.

Did we mention two gigabytes of video memory? Yup, insane - though perhaps not in the professional world, where this 2GB board will compete against the upcoming G80GL and its 0.77/1.5GB of video memory. We do not expect the 2GB R600 to exist in any form other than the FireGL series, but the final call hasn't been made yet.

The original Rage Theatre chip is gone for good. After relying on that chip for Vivo functions for almost a decade, the company has decided to replace it with the newer, digital Rage Theatre 200. It is not yet decided what marketing name will be used, but bear in mind that the R600 will feature video-in and video-out functions from day one. The death of the All-in-Wonder series made a big impact on many people inside the company, and now there is a push to offer the broadest possible support for HD in and out connectors.

When we turn to power, it seems the sites on-line are reporting values that are dead wrong, especially when mentioning the special power connectors which were present on the A0 engineering sample. Our sources are claiming they are complying with industry standards and that the spec for the R600 is different from those rumoured. Some claim half of the rumours out there began life as FUD from Nvidia.

For starters, the rumour about this 80nm chip eating around 300W is far from the truth. The thermal budget is around 200-220 Watts and the board should not consume more power than a Geforce 8800GTX. Our own Fudo was right on one detail - the R600 cooler is designed to dissipate 250 Watts. This was necessary to have cooling headroom of at least 15 per cent. You can expect the R680 to use the same cooler as well and still be able to work at over 1GHz. This PCB is also the base for the R700 but, from what we are hearing, the R700 will be a monster of a different kind.
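The headroom claim is easy to sanity-check against the quoted thermal budget (a rough sketch; 250W is the cooler capacity mentioned above):

```python
# Cooling headroom = (cooler capacity - thermal budget) / thermal budget.
COOLER_CAPACITY_W = 250

for budget_w in (200, 220):
    headroom_pct = (COOLER_CAPACITY_W - budget_w) / budget_w * 100
    print(f"{budget_w}W budget -> {headroom_pct:.0f}% headroom")
# -> 25% headroom at 200W and ~14% at 220W, bracketing the quoted 15 per cent
```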

As far as the Crossfire edition of the board goes, we can only say: goodbye and good riddance.

Just like the RV570, the X1900GT board, the R600 features a new dual-bridge connector for Crossfire capability. This also ends the nightmares of reviewers and partners, because reviewing Crossfire used to be such a pain, caused by the rarity of the Crossfire edition cards.

Expect this baby to be in stores during Q1'07, or around January 30th.

AMD's R600 board is a monster

@Radeon: Sorry about the previous post, hadn't refreshed the page.
 
Radeon said:
G80 style = Scalar+Scalar+Scalar+Scalar MADD/MUL dual-issue

R600/C1 style = Vec4+Scalar MADD/ADD dual-issue (speculating about the ADD shader processing, based on ATI's traditional approach)

Just a correction there, guys: for the R600 I believe it's going to be Vector4+Scalar MADD/MUL/ADD co-issue (same as the C1/Xbox 360). And the 64 shader processors would be arranged as 4 clusters with 16 shader processors per cluster. I am slightly inclined to think ATI might also go with the same texture:ROP ratio as the R580, i.e. 16:16, but it's still not clear as yet.
 
Pics of the R600 GPU surface.



Via the Beyond3D forums comes a post with what are claimed to be photos of ATI's upcoming R600 GPU. Whether the photos are legitimate is up for debate, but on initial inspection nothing jumps out.

Going by the coin in the second photo, rough estimates put the actual die size at about 420mm², which is quite enormous for an 80nm chip. For reference, the 80nm RV570 is 230mm², the 90nm R580 is 352mm², and NVIDIA's 90nm G80 is 484mm².
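For the curious, here is a minimal sketch of how such a coin-for-scale estimate works; the coin type and pixel measurements below are made-up illustrative values, not actual measurements from the photos:

```python
# Estimate die area from a photo using a coin of known size for scale.
# Coin type and pixel counts are hypothetical, for illustration only.

COIN_DIAMETER_MM = 24.26  # assuming a US quarter for scale

def die_area_mm2(die_side_px: float, coin_diameter_px: float) -> float:
    """Estimate die area from its side length in pixels, scaled by the coin."""
    mm_per_px = COIN_DIAMETER_MM / coin_diameter_px
    return (die_side_px * mm_per_px) ** 2

print(round(die_area_mm2(204, 242)))  # ~418 mm^2, in line with the ~420mm² estimate
```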
 
ATI R600 to go GDDR-4 only

Samsung was telling everybody that GDDR-4 was going to rock. However, companies were worried that yields were not going to be good, and rumours about Samsung screwing up forced the affected companies to spend thousands of dollars investing in the development of alternatives.

In the end, Nvidia went with GDDR-3 memory for its G80 line-up, but DAAMIT continued to develop both GDDR-4 (Pele) and GDDR-3 (UFO) variants. Being late to the market enabled ATI to optimise. However, the bean counters at AMD recently gave the "go ahead" signal for a full-GDDR-4 line-up. So forget about the dual SKUs: for the 1Gig and 2Gig versions, GDDR-4 is the only way DAAMIT will go.

Also, the product formerly known as FireSTREAM, conveniently renamed the AMD Stream Processor (a GPGPU board), is in the works. We are unsure of the name for the R600-based one, but one of our sources was caught saying: "With their level of imagination, I don't see why not to go AMD Stream Processor 2 - The New Degeneration". If AMD is going to be gunning on price, the introduction of a 2GB UFO board sounds perfectly reasonable, but our sources are telling us that the GPGPU board is a GDDR-4 baby as well.

The memory layout remains the same, as does the GPU. The chip itself is rotated at a 45-degree angle - not the 60 degrees this tired hack wrote a week ago. Also, the chip whose pictures were leaked was not a functional chip, but rather a mechanical "dummy".

The traditional layout of ATI boards calls for a DVI-DIN-DVI arrangement. However, this might change, and the R600 could end up with a G80-style DIN-DVI-DVI layout, with the HD connector located at the top of the PCB, not in the middle. Just a bit over two months to go.

ATI R600 to go GDDR-4 only
 
Speaking at yesterday's AMD Financial Analysts Day, Executive Vice President of Visual and Media Businesses Dave Orton appeared to throw down the performance gauntlet in favor of AMD's upcoming R600 GPU. Having had over a month to study NVIDIA's G80, Orton did not seem the least bit intimidated. In a slide entitled "R600: Why we lead in graphics", Orton promised that even if the name of the company had changed, the commitment to GPU performance leadership had not. He promised a "take no prisoners" approach to performance leadership for AMD's new GPU.

More interestingly, in his verbal remarks Orton reported (at roughly the 1:22:30 mark of the webcast) that one of the R600's key advantages would be "new levels of memory bandwidth to the graphics subsystem, and bandwidth is critical to graphics performance." As all graphics geeks know, AMD pioneered the move to GDDR4 memory with the Radeon X1950 XTX, which gave them a temporary advantage in bandwidth. In the period since, however, NVIDIA has released the 384-bit GeForce 8800 GTX, whose memory bandwidth crushes the X1950 XTX's by 86.4GB/s to 64.0GB/s. It is impossible that AMD could regain a significant enough advantage in bandwidth to be cited by Orton as a major competitive advantage without following NVIDIA north of the 256-bit bus that has been a mainstay of ATI/AMD high-end products since 2002's Radeon 9700 Pro.

R600 will therefore feature a 512-bit external memory bus, likely using 1.2GHz GDDR4 for ~153GB/sec from memory pool to chip, to back up the smack AMD's Executive Vice President laid down.
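Those figures check out with simple bus-width arithmetic:

```python
# Quick check of the claimed figures: GB/s = bus bits / 8 x effective GT/s.
print(512 / 8 * 2.4)  # rumoured R600: 512-bit at 1.2GHz GDDR4 (2.4GT/s) -> 153.6
print(384 / 8 * 1.8)  # 8800GTX: 384-bit at 900MHz GDDR3 (1.8GT/s) -> 86.4
```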

In other tidbits, Orton also vowed to be first to the 65nm process technology, but did not disclose which product he had in mind for the honor, nor even the product type, GPU or chipset. Our graphics-oriented notes (and a few selected slides) on the rest of the conference are included inside, if you dare to take the red pill.

Beyond3D - Which was nice.
 
The expected happens... but the performance is astounding... we knew AMD/ATI would beat the G80, but this easily... wow...

8900GTX better get ready lol...
 
^^ Hmm, the R600 is on average 10% faster on beta drivers. I am not too sure if that's good enough given NV's six-month lead, but then again, let's see the results with final drivers and actual DX10 games...

But yeah, at least it will make NV think...
 