Intel GMA950 Integrated Graphics Core

Quad Master
Apr 17, 2005
Intel recently released a new core logic family, the 945/955 series. Part of the 945 chipset is a new, integrated graphics core. We'd heard from some sources that Intel's new GMA950 offered substantially improved 3D performance over the GMA900 built into the 915G core logic. So we had to check it out for ourselves. According to the data sheet for the GMA950, also known as the GMCH, some of the specs have been beefed up over the earlier 915G core:

GMA900 (915G)
* Core frequency: 333MHz
* Pixel rate: 1.3 gigapixels per second
* Memory bandwidth: 8.5GB/sec
* Pixel shader support: up to 2.0

GMA950 (945G)
* Core frequency: 400MHz
* Pixel rate: 1.6 gigapixels per second
* Memory bandwidth: 10.9GB/sec
* Pixel shader support: up to 2.0
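As a sanity check on the spec table, the quoted figures line up with simple back-of-the-envelope arithmetic. The pipe count and memory configurations below are our assumptions (4 pixel pipelines, dual-channel 64-bit memory), not numbers from Intel's data sheet:

```python
# Rough arithmetic behind the spec table above.
# ASSUMPTIONS: 4 pixel pipelines; dual-channel, 8-byte-wide memory bus.

def pixel_rate_gpix(core_mhz, pipes=4):
    """Theoretical fill rate: core clock x pixel pipelines, in Gpix/s."""
    return core_mhz * pipes / 1000.0

def bandwidth_gb(transfers_mt, bus_bytes=8, channels=2):
    """Peak memory bandwidth: transfer rate x bus width x channels, in GB/s."""
    return transfers_mt * bus_bytes * channels / 1000.0

print(pixel_rate_gpix(333))  # ~1.33 Gpix/s (GMA900)
print(pixel_rate_gpix(400))  # 1.6 Gpix/s (GMA950)
print(bandwidth_gb(533))     # ~8.5 GB/s (dual-channel DDR2/533)
print(bandwidth_gb(667))     # ~10.7 GB/s (dual-channel DDR2/667)
```

Note that dual-channel DDR2/667 works out to roughly 10.7GB/s by this arithmetic, close to the 10.9GB/sec figure in the data sheet.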

The rendering engine supports all the texture modes you'd expect from a modern 3D engine, including cube map support, various texture blending modes, and S3TC texture compression. New this time around is support for anisotropic filtering. Note that vertex shaders are handled by the host processor, so the faster the CPU, the faster the vertex processing.

The new core logic's support for DDR2/667 should mean more available free bandwidth for the graphics processor. Still, integrated graphics is a balancing act between memory fetches for graphics and memory accesses for the CPU. How that arbitration is handled is the key to balanced performance.
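To get a feel for what's at stake in that arbitration, here's a rough estimate of the frame-buffer traffic the graphics core alone can pull off the shared bus. The bytes-per-pixel and overdraw figures are our assumptions (a 32-bit color write plus a Z read and Z write per pixel, 2x overdraw), and the estimate ignores texture fetches entirely:

```python
# Rough estimate of graphics-side memory traffic on the shared bus.
# ASSUMPTIONS: 12 bytes/pixel (color write + Z read + Z write), 2x overdraw,
# no texture fetches. These are illustrative figures, not Intel's.

def gfx_traffic_gb(width, height, fps, bytes_per_pixel=12, overdraw=2.0):
    """Approximate frame-buffer traffic in GB/s."""
    pixels_per_frame = width * height * overdraw
    return pixels_per_frame * bytes_per_pixel * fps / 1e9

# At 1024x768 and 60fps this is already over 1 GB/s before a single
# texture read, all taken from the same bus the CPU is using.
print(round(gfx_traffic_gb(1024, 768, 60), 2))
```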

Another interesting feature of the new GMCH is the add-on digital video output card. Dubbed "ADD2+", the card can use 4 or 8 lanes of the x16 PCI Express slot and supports up to two displays in multimonitor mode; alternatively, the two outputs can be combined to drive a single very high resolution display. The GMCH can also output S-Video. In addition to the 3D capabilities of the integrated core, Intel has built in a video engine with full hardware motion compensation, MPEG-2 hardware decode, subpicture support (e.g., for closed captions), and dynamic de-interlacing.

We certainly recognize that an integrated graphics solution is incapable of running at 1600x1200 with antialiasing. So we made a few compromises in our tests. Let's first take a look at the testbed:

* Intel Pentium 4 model 660 at 3.6GHz
* Intel G945GTP MicroATX motherboard
* 1GB DDR2/667 memory
* Seagate 7200.7 160GB SATA Drive
* ATAPI DVD-ROM drive

We compared the GMA950 with an XFX GeForce 6200TC supporting 128MB of memory. The XFX card actually has only 32MB of local memory and a 64-bit memory interface; the rest is borrowed from system RAM over PCI Express. You can find this card for around $60 on the Internet.

We used the following benchmarks and games for testing 3D performance:

* 3DMark05 (standard benchmark, fill rate tests, pixel shader test and vertex shader tests)
* 3DMark 2001SE, just to gauge how the system would perform with older DirectX games.
* Doom 3
* Painkiller
* Half-Life 2
* Splinter Cell: Chaos Theory (using Anthony Tan's benchmark scripts)
* Unreal Tournament 2004 Botmatch
* Flight Simulator 2004

All games were run at two settings: 640x480 at low detail and 1024x768 at "medium" detail. In games that include a medium detail preset, we selected it and then set the resolution to 1024x768. The exceptions were Half-Life 2, which we ran at its default 1024x768 settings, and Flight Simulator 2004, whose lowest supported resolution is 800x600.

We first ran 3DMark 2001SE. This benchmark came out in the early days of DirectX 8.0 and is mostly a DirectX 7 test: there's very little shader work going on, although modern hardware emulates the old fixed-function transform and lighting using vertex shader code built into the drivers. We then ran the current 3DMark05 test, capturing the standard test numbers plus the results of the fill rate, pixel shader, and vertex shader tests.

It's no big surprise that the GeForce 6200 TurboCache outperforms the GMA950. What's surprising is how well the Intel part does perform, particularly in the fill rate test, where it actually outperforms the Nvidia card. But as we'll see shortly, fill rate isn't everything.

Synthetic benchmarks are entertaining for benchmark geeks like us, but in the real world, people play these applications known as games.

To put it mildly, it's no contest. To put it more bluntly, it's a complete and total rout for the GMA950, with the possible exception of the CPU-intensive Flight Simulator 2004. Even there, the GMA950 delivers playable frame rates, but the GeForce 6200TC still crushes it. The verdict gets even worse when you realize the GMA950 wouldn't even run Splinter Cell: Chaos Theory or Painkiller, even though the specs of the GPU suggest it should. Intel has acknowledged a driver bug with regard to Chaos Theory. But if this is any indication of the compatibility issues users may encounter, running games on the new core is a dubious prospect.

We can state flatly that if you buy a system using Intel's GMA950 integrated graphics and want to play 3D games, invest at least $60 in an add-on card. If what you want is simply a system that can run standard office software, plus maybe play some DVD movies, then Intel's new graphics core is probably suitable.

You might wonder what the point is of putting all this engineering effort into the 3D core if it sucks so badly at games. The answer is pretty simple:

Longhorn.

Intel's new GMCH will probably run Longhorn's upper-tier Aero Glass interface pretty well. And Intel certainly wants that, because its OEMs sell truckloads of systems with integrated graphics into businesses. Businesses whose users want the Aero Glass interface will have a solution that works without severely impacting the IT budget. If you're buying a system for the home with the intent of running the occasional 3D game, drop an extra $60 and get an add-in board.

Source