ATI's R600 will consume over 250W

We can now confirm that the new ATI card will consume around 250W. The currently fastest ATI card, the R580+ or Radeon X1950 XTX, consumes up to 125W in a worst-case, heavy-3D scenario, while you can suck up to 145W out of Nvidia's dual-chip GeForce 7950 GX2 card.
This means the R600 will consume twice as much power, and will probably end up close to twice as hot, but we also hear it will be much faster than the current cards, with sixty-four pipelines. It could easily end up twice as fast as the current ATI offerings.
It is now a January/February chip, so it will be a while until we have this baby on our desk, but after a long time we are getting mildly titillated at the prospect.
False R600 rumours do the rounds

Graphics cards are going to get three-slot and even four-slot cooling, the rumours say, and the cards will not use PCI Express slots, but rather sockets on the motherboard.
What the R600 is going to look like physically can be seen by taking a Radeon X1950 card and adding a lot of power regulation components. The chip is also spec'd at the 180W mark, so an external power supply looks more realistic than an 8-pin connector on the board.
This would put an end to DAAMIT's marketeers mocking Nvidia over its own external power supply. Mind you, both Nvidia and ATI mocked 3dfx over its Voodoo5 6000 and claimed they'd never be forced to rely on an external power supply. Ah, how time flies.
RiO said: ^^^ Hmmm... February is now in the picture, which means it could slip to March too... First Q4, then Jan, now Jan/Feb... hope it's out in time for Crysis.
ATI's next generation DirectX 10 core will end up with more than 500 million transistors.
We already told you that this will be one big, hot chip, but we don't think many people expected 500+ million transistors.
This is just a single piece of the puzzle of unveiling this big crazy chip. Don't forget that we heard it will dissipate 250+ Watts of power and will also be really fast in both DirectX 9 and 10 games.
We hope there won't be any additional delays of this hot chip, which will be the first big and powerful graphics chip unveiled by AMD.
THE UP AND COMING R600 will have a real 512-bit memory controller. Unlike its predecessors, which had only an internal 512-bit ring memory bus, the R600 will have a 512-bit bus externally as well.
This means that the packaging of the chip will be extremely expensive: the wider the memory bus, the more pins you need in the chip package.
If the 512-bit memory bus turns out to be the real thing, we are talking about 128 GB/s of memory bandwidth with GDDR4 clocked at 2000MHz. We also learned that the R600 may use memory faster than 2000MHz, as such chips will be available by Q1. If ATI keeps pushing the chip back, we might get even faster GDDR4 chips by production time.
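The 128 GB/s figure falls straight out of the bus-width arithmetic; a minimal sketch, assuming the quoted 2000MHz is GDDR4's effective data rate (and, for comparison, assuming 1800MHz effective GDDR3 on the G80's 384-bit bus):

```python
# Peak memory bandwidth = bus width (in bytes) * effective data rate.
# Assumes the quoted memory clock is the effective transfer rate.

def memory_bandwidth_gbps(bus_width_bits: int, data_rate_mhz: int) -> float:
    """Peak memory bandwidth in GB/s for a given bus width and data rate."""
    bytes_per_transfer = bus_width_bits / 8          # 512 bits -> 64 bytes
    transfers_per_sec = data_rate_mhz * 1_000_000
    return bytes_per_transfer * transfers_per_sec / 1_000_000_000

# R600 rumour: 512-bit bus with 2000MHz GDDR4
print(memory_bandwidth_gbps(512, 2000))  # 128.0 GB/s
# G80 comparison (assumed 1800MHz GDDR3 on its 384-bit bus)
print(memory_bandwidth_gbps(384, 1800))  # 86.4 GB/s
```

The same formula explains the pin-count pain mentioned above: doubling the bus width doubles the data pins before a single extra megahertz is gained.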
Even the PCB of the R600 will be super complicated, as you need a lot of traces to make a 512-bit memory bus work. Overall it has the potential to beat Nvidia's G80, but then again it will come at least three months after Nvidia. The G80's memory bus is 384 bits wide, as Nvidia pretty much dis-unified everything in the G80, from shaders to memory controllers. Nvidia likes to make its own rules, but probably could not fit a controller wider than 384 bits into the chip, as the G80 is still a 90 nanometre part.
It’s a shame that we will need to wait at least until February to see it in action.
DON'T be too excited about this news, but ATI has already produced some R600 silicon. The R600 is still scheduled for Q1 2007, however, which puts ATI almost five months behind its original plans.
The first R600 silicon is clocked slower than the final card and is out there just to fiddle with; it shows the projected performance and helps game developers debug their games and get them ready for the new marchitecture.
This takes us back to September, when Nvidia already had lower-clocked G80 chips for its special VIP chaps.
ATI/AMD can just sit and cry on the graphics front, as they don't have anything to fight the super-cool G80 with for at least the next three months. Interesting timing for a beta GPU, isn't it?
ATI is most likely going to have the better hardware and better feature set, since they helped write the driver models and all that rot in Vista. They also came up with the unified architecture stuff and have done a successful test run in the Xbox 360. They are also planning on using a mix of 65 nm and 80 nm technology, so it can and will run cooler.

They were also the first to demo physics on the GPU, and I have a sneaky feeling that the physics API MS is rumoured to be writing will have ATI's help in it as well. And then there is the ring bus memory architecture, which is completely programmable and has been shown to provide decent boosts, and that is without much tweaking at all, if their press releases are to be believed. I have a feeling that is going to kick some major -BLEEP!- when the new DX10 parts start showing up.

Also, ATI has better IQ than Nvidia at the moment, and they are only using 16 pipelines whereas Nvidia is using 24. Mind you, Nvidia is using 100 million or so fewer transistors right now, so it is running a lot cooler, but ATI claim those extra transistors belong to the programmable memory that they have yet to utilize, etc. etc.