NVIDIA "G80" Retail Details Unveiled
- Fully compatible with Microsoft’s upcoming DirectX 10 API, with support for Shader Model 4.0.
- NVIDIA has named G80-based products the GeForce 8800 series. NVIDIA has revived the GTS suffix.
- GeForce 8800GTX will be the flagship product -
- The core clock will be factory clocked at 575 MHz.
- Equipped with 768MB of GDDR3 memory, to be clocked at 900 MHz.
- A 384-bit memory interface delivering 86GB/second of memory bandwidth (see the back-of-envelope check after this list).
- 128 unified shaders clocked at 1350 MHz.
- The theoretical texture fill-rate is around 38.4 billion texels per second.
- Slotted right below the GeForce 8800GTX is the slightly cut-down GeForce 8800GTS -
- GPU clocked at a slower 500 MHz.
- GeForce 8800GTS cards will be equipped with 640MB of GDDR3 graphics memory clocked at 800 MHz.
- Memory interface is reduced to 320-bit and overall memory bandwidth is 64GB/second.
- 96 unified shaders clocked at 1200 MHz.
- GeForce 8800GTX and 8800GTS products are HDCP compliant.
- Support for dual dual-link DVI, VIVO and HDTV outputs.
- Dual-slot coolers.
- Expect GeForce 8800GTX and 8800GTS products to launch the second week of November 2006.
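For the curious, the quoted bandwidth figures fall straight out of bus width times effective data rate. Here is a minimal back-of-envelope check, a sketch assuming GDDR3's double data rate (the effective rate is twice the quoted memory clock):

```python
# Back-of-envelope check of the rumoured GeForce 8800 bandwidth figures.
# Assumes GDDR3 transfers on both clock edges (DDR), so the effective
# data rate is twice the quoted memory clock.

def memory_bandwidth_gb(bus_width_bits, mem_clock_mhz, pumping=2):
    """Peak bandwidth in GB/s: bus width in bytes x effective data rate."""
    return (bus_width_bits / 8) * (mem_clock_mhz * 1e6 * pumping) / 1e9

print(memory_bandwidth_gb(384, 900))  # 86.4 -> the GTX's "86GB/second" claim
print(memory_bandwidth_gb(320, 800))  # 64.0 -> the GTS's "64GB/second" claim
```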
- Both cards are dual-slot PCIe cards measuring a little less than nine inches in length.
- The GeForce 8800GTX is the full-blown G80 experience; the GeForce 8800GTS is a cut-down version of it.
Power Requirements:
- A single GeForce 8800GTX requires at least a 450W power supply; the 8800GTS requires at least 400W.
- GeForce 8800 cards in SLI mode will likely carry a power supply "recommendation" of 800W.
"G80" To Feature 128-bit high dynamic-range and antialiasing with 16X sampling.
The high dynamic-range (HDR) engine found in GeForce 7950 and Radeon series graphics cards is technically 64-bit rendering. G80's new HDR approach comes from OpenEXR, a file format developed by Industrial Light and Magic (the LucasFilm guys). In a nutshell, we will have 128-bit floating-point HDR as soon as applications adopt code to use it. OpenEXR's features include:
- Higher dynamic range and color precision than existing 8- and 10-bit image file formats.
- Support for 16-bit floating-point, 32-bit floating-point, and 32-bit integer pixels. The 16-bit floating-point format, called "half", is compatible with the half data type in NVIDIA's Cg graphics language and is supported natively on their new GeForce FX and Quadro FX 3D graphics solutions (see the short numeric illustration after this list).
- Multiple lossless image compression algorithms. Some of the included codecs can achieve 2:1 lossless compression ratios on images with film grain.
- Extensibility. New compression codecs and image types can easily be added by extending the C++ classes included in the OpenEXR software distribution. New image attributes (strings, vectors, integers, etc.) can be added to OpenEXR image headers without affecting backward compatibility with existing OpenEXR applications.
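As a quick numeric illustration of why "half" floats buy so much more dynamic range than 8-bit integer pixels, here is a small sketch using numpy, whose float16 type follows the same IEEE-754 half layout as OpenEXR's and Cg's half:

```python
import numpy as np

# IEEE-754 half precision: 1 sign bit, 5 exponent bits, 10 mantissa bits.
half = np.finfo(np.float16)
print(half.max)   # 65504.0  -> values far above 1.0 survive, enabling HDR
print(half.tiny)  # ~6.1e-05 -> deep shadows keep usable detail
print(half.eps)   # ~0.00098 -> relative precision is roughly constant

# 8-bit integer pixels clip everything brighter than their maximum:
scene = np.array([0.5, 2.0, 100.0])            # linear scene luminances
ldr = np.clip(scene * 255, 0, 255).astype(np.uint8)
hdr = scene.astype(np.float16)                 # no clipping needed
print(ldr)  # [127 255 255]   -> highlight detail lost
print(hdr)  # [0.5 2.0 100.0] -> preserved for later tone mapping
```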
NVIDIA already has 16X AA available for SLI applications. The GeForce 8800 will be the first card to feature 16X AA on a single GPU. Previous generations of GeForce cards have only been able to support 8X antialiasing in single-card configurations. This new 16X AA and 128-bit HDR will be part of another new engine, similar in spirit to PureVideo and the Quantum Effects engines also featured on G80.
G80 Tentative Specs
- 65nm
- 64 shader pipelines (Vec4+Scalar)
- 32 TMUs
- 32 ROPs
- 128 shader operations per cycle
- 800MHz core
- 102.4 billion shader ops/sec (the derived figures in this list are sanity-checked in the sketch below)
- 512 GFLOPs for the shaders
- 2 billion triangles/sec
- 25.6 Gpixels/Gtexels/sec
- 256-bit 512MB 1.8GHz GDDR4 memory
- 57.6 GB/sec bandwidth (at 1.8GHz)
- WGF2.0 unified shader
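Most of the headline numbers in that list are simple products of the base specs. A quick consistency check (a sketch; the 5-component MAD assumption behind the GFLOPs figure is ours, not part of the rumour):

```python
# Cross-checking the tentative G80 figures against each other.
core_hz   = 800e6   # rumoured core clock
pipes     = 64      # Vec4+Scalar shader pipelines
rops      = 32
bus_bits  = 256
data_rate = 1.8e9   # GDDR4; assuming 1.8GHz is already the effective rate

print(128 * core_hz / 1e9)             # 102.4 billion shader ops/sec
# 512 GFLOPs works out if each pipe retires a Vec4+scalar MAD per clock
# (5 components x 2 FLOPs) -- an assumption, not a quoted fact:
print(pipes * 5 * 2 * core_hz / 1e9)   # 512.0 GFLOPs
print(rops * core_hz / 1e9)            # 25.6 Gpixels/sec
print(bus_bits / 8 * data_rate / 1e9)  # 57.6 GB/sec
```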
Updated Specs
- Unified Shader Architecture
- Support FP16 HDR+MSAA
- Support GDDR4 memories
- Close to 700M transistors (G71 - 278M / G70 - 302M)
- New AA mode : VCAA
- Core clock scalable up to 1.5GHz
- Shader Performance : 2x Pixel / 12x Vertex over G71
- 8 TCPs & 128 stream processors (see the decomposition sketch after this list)
- Much more efficient than traditional architecture
- 384-bit memory interface (256-bit+128-bit)
- 768MB memory size (512MB+256MB)
- Two models at launch : GeForce 8800GTX and GeForce 8800GT
- GeForce 8800GTX : 7-TCP chip, 384-bit memory interface, hybrid water/fan cooler (water cooling for overclocking). US$649
- GeForce 8800GT : 6-TCP chip, 320-bit memory interface, fan cooler. US$449-499
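The per-model numbers decompose neatly if each 64-bit memory channel is paired with 128MB and each TCP carries 16 stream processors; that pairing is our inference from the figures above, not something the rumour states. A minimal sketch:

```python
# Decomposing the rumoured memory configurations: a 64-bit channel
# paired with 128MB of memory, with the cut-down part dropping one
# channel. (The pairing is inferred from the numbers, not stated.)
CHANNEL_BITS, CHANNEL_MB = 64, 128

def memory_config(channels):
    return channels * CHANNEL_BITS, channels * CHANNEL_MB

print(memory_config(6))  # (384, 768) -> 384-bit, 768MB (8800GTX)
print(memory_config(5))  # (320, 640) -> 320-bit, 640MB (8800GTS/GT)

# Stream processors per TCP, from "8 TCPs & 128 stream processors":
print(128 // 8)  # 16 SPs/TCP: a 6-TCP part gives the 96 shaders quoted
                 # for the GTS earlier; the 7-TCP GTX here would give 112
```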
Leaked Pictures (image attachments not preserved)
_______________
NVIDIA G80 Takes Unified Shader Approach
NVIDIA's Chief Architect David Kirk said in a recent interview that they will do a unified architecture in hardware when it makes sense and when it is possible to make the hardware work faster unified. It will be easier to build in the future, but for the meantime, there's plenty of mileage left in the G70 architecture.
VR-Zone has come to know that the future is actually pretty near: the NVIDIA G80 design is based on a unified shader architecture and is slated to appear in 2006. ATI has already taken the unified approach with the R500 Xenos inside the Xbox 360, which has 48 unified pipelines. As we know, in a unified shader architecture there are no dedicated vertex and pixel shader engines; instead, a unified shader engine is capable of executing both types of instructions.
G70 already supports the Longhorn WGF 1.0 API, so G80 will most likely support WGF 2.0 with improved virtualization techniques and new pipeline stages. The likely process technology choice for G80 is 90nm, and the architecture is designed for high core speed (~1GHz). The G80 design is complete, and NVIDIA is waiting to counter ATI's R580 when the time comes.
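To make the unified-shader idea concrete, here is a toy Python sketch (purely illustrative; the unit counts are invented and this is not a model of the actual G80 scheduler) contrasting dedicated vertex/pixel engines with a unified pool that load-balances both instruction types:

```python
# Toy illustration: with dedicated engines, whichever unit type the
# workload doesn't need sits idle; a unified pool spends every cycle
# on whatever work exists. Unit counts here are invented.

def cycles_dedicated(vertex_work, pixel_work, vertex_units, pixel_units):
    v = -(-vertex_work // vertex_units)  # ceiling division
    p = -(-pixel_work // pixel_units)
    return max(v, p)                     # done when the slower side finishes

def cycles_unified(vertex_work, pixel_work, units):
    return -(-(vertex_work + pixel_work) // units)  # any unit runs any work

# A pixel-heavy frame on 8 vertex + 24 pixel units vs 32 unified units:
print(cycles_dedicated(10, 960, 8, 24))  # 40 cycles, vertex units mostly idle
print(cycles_unified(10, 960, 32))       # 31 cycles

# A vertex-heavy pass (e.g. shadow-map rendering) flips the bottleneck:
print(cycles_dedicated(960, 10, 8, 24))  # 120 cycles
print(cycles_unified(960, 10, 32))       # 31 cycles
```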
Details on nVidia G80 GPU
NVIDIA G80 & ATi R600 Info
We heard that G80 will be in time for a June launch during Computex, and the process technology is likely to be 80nm at TSMC. In a recent statement, NVIDIA said that it will be backing TSMC's 80nm "half-node" process, which allows a 19% reduction in die size. We have previously mentioned that G80 is likely to take the unified shader approach and support Shader Model 4.0. G80 is likely to be paired with Samsung GDDR4 memory reaching a speed of 2.5Gbps. As for ATi, the next-generation R600 is slated for launch at the end of this year according to the roadmap we have seen, and the process technology is 65nm. It seems that the leaked R600 specs that surfaced in June last year are pretty plausible.
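Per-pin data rates convert to aggregate bandwidth through the bus width. A small sketch (the bus widths are carried over from other rumours in this thread; this article only quotes the 2.5Gbps per-pin figure):

```python
# Converting GDDR4's per-pin data rate into aggregate bandwidth.
def aggregate_bandwidth_gb(per_pin_gbps, bus_width_bits):
    return per_pin_gbps * bus_width_bits / 8

print(aggregate_bandwidth_gb(2.5, 256))  # 80.0 GB/s on a 256-bit bus
print(aggregate_bandwidth_gb(2.5, 384))  # 120.0 GB/s on a 384-bit bus
```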
According to Xpentor, NVIDIA's G80 will make ATI stumble in April. Quad SLI could be implemented on a single card with a two-chip solution, because G80 will reportedly be the first dual-core GPU ever, with support for DirectX 10 and Shader Model 4.0. Development of G80 is also said to have been running at a very intensive pace since NVIDIA's acquisition of ULi. As for the upcoming G71, there will be 32 pipes, an increase in ROPs, and a small bump in core clock.
Nvidia’s First DirectX 10 Chip to Be “Hybrid” Design
Nvidia’s G80 to Have Dedicated Pipelines – Rumours
Despite the fact that Microsoft’s next-generation graphics application programming interface (API) will be able to take advantage of unified shader processors, at least Nvidia Corp.’s first DirectX 10-capable chip will utilize more traditional dedicated pixel and vertex processors, according to some rumours.
Nvidia’s code-named G80 graphics processing unit (GPU) will incorporate 48 pixel shader processors and an unknown number of vertex shader processors, some unofficial sources said. The chip is still expected to support the feature set of DirectX 10 along with Shader Model 4.0, even though it will not take advantage of unified shader processors that can compute both pixel and vertex shaders.
Microsoft Corp. pushes a unified shader language for pixel and vertex shaders in its Xbox 360 game console as well as in the graphics API of Windows Longhorn – Windows Graphics Foundation 2.0, which is sometimes referred to as DirectX 10. As a result, graphics hardware designers should deliver chips with unified shader engines at some point in the future in order to support the new API more efficiently. However, Nvidia Corp. has previously expressed the opinion that it would release an architecture with unified shader processors “when it makes sense”.
“We will do a unified architecture in hardware when it makes sense. When it’s possible to make the hardware work faster unified, then of course we will. It will be easier to build in the future, but for the meantime, there’s plenty of mileage left in this architecture,” David Kirk, who is Nvidia’s chief architect, said in an interview with the Bit-tech.net web-site.
ATI Technologies has already developed a unified shader architecture, used in the Xenos graphics processor of the Xbox 360 game console. Nvidia believes that it is much harder to design a processor with unified pixel and vertex shader processors, as it is not a trivial task to create appropriate load-balancing logic to arbitrate between the unified arithmetic logic units.
The new DirectX 10 API is expected to be released later this year along with Microsoft Windows Vista operating system.
Nvidia core G80 emerges
CeBIT 2005: After G70 comes G80
NVIDIA'S NEXT-generation graphics part remains a well-protected secret. We still managed to get some information about it, despite that.
We confirmed that G70 is the real thing, and we learned that Nvidia has one more chip down the road, codenamed G80. We expect it to be based on 90 nanometre and to have more pipes than NV40. We don't think that it will be dual graphics core stuff, but you never know.
We suspect this card should be ready sometime around September, even though some people suggested April as a possibility. But we believe that April is just a poisson d'Avril, or mayhaps a red herring.
Nvidia and its partners are quite confident that they have the right thing, but ATI has a strong horse in this race with the R520. Let's see what the future brings.
NVIDIA talks G80
An NVIDIA representative has given some interesting tidbits on the company's next-generation G80 graphics processing unit at the Morgan Stanley Semiconductor & Systems Conference.
The spokesperson stated that they're "increasing the flexibility of the programmability, enabling the artists to express themselves in a free way." That doesn't tell us a great deal about their hardware configuration, but they have stated that programmers will be able to express themselves by programming in a unified shader pipeline.
Obviously, DirectX 10 is going to change things somewhat, as it completely unifies the graphics pipeline. However, we get the impression that NVIDIA's G80 architecture will be a "hybrid" design, based on the discussions we had with NVIDIA's Chief Scientist, David Kirk.
Unifying the pipeline has some distinct benefits, but it can also have some drawbacks as Kirk mentioned in our interview last year. "Another word for 'unified' is 'shared', and another word for 'shared' is 'competing'." Based on this, we feel that NVIDIA will handle the unified API at driver level, rather than in hardware. However, we understand that ATI's architecture will be completely unified at the hardware level.
They stated that NVIDIA has been working on its next-generation architecture since 2002, and has invested around $250M into it already. By the time it launches later this year, the company will have invested close to $500M into research and development for the upcoming architecture.
Nvidia G80 misses tape out
A few weeks late
WORD HAS IT that the Nvidia G80 chip missed its tape-out date and is now running a few weeks late.
It was slated to beat the competing ATI part to market by a good margin, but if this slip is real, it should hit the market at roughly the same time.
It should make for a very interesting fall release schedule.
_________________
Please add any G80-related news to this thread, thanks.