CPU/Mobo At last nVidia and Intel agree on SLI

HAHA
Now what more will we see as breaking news? Three PCIe slots on mobos?

Nvidia dropped out of the Intel chipset business with the release of Nehalem (a lawsuit with Intel is still ongoing). Only third-party board manufacturers added SLI capability to Intel X58/P55 chipsets via the NF200 chip made by Nvidia. So Intel mobos like the DX58SO only support CFX. I don't think this makes much of a difference to end users, most of whom don't go for Intel mobos anyway.
 
Marcus Fenix said:
HAHA

Now what more will we see as breaking news? Three PCIe slots on mobos?

Nvidia dropped out of the Intel chipset business with the release of Nehalem (a lawsuit with Intel is still ongoing). Only third-party board manufacturers added SLI capability to Intel X58/P55 chipsets via the NF200 chip made by Nvidia. So Intel mobos like the DX58SO only support CFX. I don't think this makes much of a difference to end users, most of whom don't go for Intel mobos anyway.

We even have 4 slots nowadays... Why are you surprised by 3?
 
mjumrani said:
We even have 4 slots nowadays... Why are you surprised by 3?
I meant it sarcastically: even 2-way SLI on the upcoming Sandy Bridge mobos is being spread all over the web as if it were new, uber-cool tech like Light Peak :P
 
The difference is that the drivers recognise the P67 as a valid SLI bridge.

This reduces cost to the motherboard manufacturers and enables native support just like Crossfire already is, hopefully making SLI as easy to access as CFX is today. Additionally the power and thermal footprint of the boards is reduced because of the omission of the chip.

Mr. Fenix,

Only third-party board manufacturers added SLI capability to Intel X58/P55 chipsets via the NF200 chip made by Nvidia. So Intel mobos like the DX58SO only support CFX.

I'm lost, could you please explain this post?

full support for ATI CrossfireX* and NVIDIA SLI* technology.

From

Intel® Desktop Board DX58SO - Overview

Maybe you need to get your facts in place before posting? This is the second time I've seen you posting misleading information. Please use Google.
 
Marcus Fenix said:
HAHA

Now what more will we see as breaking news? Three PCIe slots on mobos?

Nvidia dropped out of the Intel chipset business with the release of Nehalem (a lawsuit with Intel is still ongoing). Only third-party board manufacturers added SLI capability to Intel X58/P55 chipsets via the NF200 chip made by Nvidia. So Intel mobos like the DX58SO only support CFX. I don't think this makes much of a difference to end users, most of whom don't go for Intel mobos anyway.
You obviously do not seem to understand why this is even news.

It is like 'Aishwarya deciding to marry Abhishek and then patching up with Salman Khan at the last moment', in Bollywood terms.
 
cranky said:
The difference is that the drivers recognise the P67 as a valid SLI bridge.

This reduces cost to the motherboard manufacturers and enables native support just like Crossfire already is, hopefully making SLI as easy to access as CFX is today. Additionally the power and thermal footprint of the boards is reduced because of the omission of the chip.

Mr. Fenix,

I'm lost, could you please explain this post?

From

Intel® Desktop Board DX58SO - Overview

Maybe you need to get your facts in place before posting? This is the second time I've seen you posting misleading information. Please use Google.
:(:(
Sorry, screwed up big time.
But I have a question. The NF200 chip was implemented as an add-on on X58/P55 mobos, okay?
Now explain this line to me from the techPowerUp link:
"NVIDIA today announced that NVIDIA SLI technology has been licensed by the world's leading motherboard manufacturers -- including Intel, ASUS, Gigabyte, MSI and EVGA -- for use on their Intel P67 Express Chipset-based motherboards designed for the upcoming Intel Sandy Bridge processors."
NVIDIA SLI Technology and Intel Sandy Bridge Form the Ultimate Gaming PC | techPowerUp

Also see this
ECS P67 and H67 Motherboard Preview | bit-tech.net

"The P67H2-A will become the premium ECS Sandy Bridge board, with a Lucid Hydra 200 chip at its core for a variety of multi-graphics options. Right now this early sample board is built using an Nvidia NF200 chip, but ECS intends to change it once the design is finalised"

It says that the SLI tech was licensed by board partners. Does that mean the P67 will support SLI natively, without any add-on chip?
That would mean the P67 was designed from the beginning to include SLI capability, just like its native support for CFX. Surely that would have been known to people long ago, not just a month away from the Sandy Bridge launch.
Now check this out
eTeknix.com - Gigabyte Sandy Bridge Motherboards Preview--Boards Based on The P67 Chipset:
The P67A-UD3P has only CFX support. If the P67 had native support for both SLI and CFX, why would Gigabyte omit SLI capability? Going by the theory of native SLI support in the P67, don't you think the board would have cost the same with both CFX and SLI support?
Sorry again for the wrong info about the DX58SO.
If I'm wrong, please do tell me and I will accept my mistakes. No need to give a crappy Bollywood example, though.
 
First you have to understand the origins of multi-GPU, and go back about six years in time.

Electrically, there is zero difference between the SLI and CF at an interface level. This is necessary or dual display will not work, which is the ability of the computer to output video to two displays on two different adapters simultaneously. Don't confuse this with SLI or CF, that is a multiplexing technology that enables GPUs to *process* images together, not display them. In fact both SLI and CF disable the second display (when used) when multi-GPU is active.

It's the drivers that recognise what is known as a bridge chip, which enables the GPUs to connect to each other over the PCIe interface and starts the multiplexing operation.

This is why it was possible for early hackers to run SLI on CF motherboards and vice versa, because early (even modern) chipsets are capable of handling dual GPUs in both display and multiplex mode. Not very effectively, but yes the capability does exist.

What nVidia effectively did was change their driver to disable SLI on motherboards that did not have a nVidia chipset, and later, they expanded the permitted hardware to include the NF200 chip (which board makers had to use as an addon if they didn't use nVidia chipset). Effectively, any board with an Intel chipset would *have* to have the NF200 if SLI was required in the feature set of that board.
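
To picture what that driver-side gating amounts to, here is a minimal sketch. The bridge names, the whitelist, and the function are purely illustrative assumptions, not nVidia's actual driver code:

```python
# Hypothetical sketch of the driver's SLI gatekeeping logic.
# The chipset names below are illustrative; the real driver
# checks PCI device IDs and certification data, not strings.

APPROVED_BRIDGES = {"nForce 680i", "nForce 790i", "NF200", "X58", "P67"}

def sli_allowed(detected_bridge, gpu_count):
    """Enable SLI only if a whitelisted bridge chip is present
    and at least two GPUs are installed."""
    return gpu_count >= 2 and detected_bridge in APPROVED_BRIDGES

print(sli_allowed("NF200", 2))  # True: whitelisted add-on bridge
print(sli_allowed("P45", 2))    # False: not a recognised bridge
```

The point of the sketch is that the hardware may be perfectly capable; the check is a policy decision made in software.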

A (very) smart hacker can still figure out a way to get SLI support to work (somewhat) on boards that do not natively support it.

The Lucid chip, if you read about it, is basically a hardware multiplexer. It has its own interface and effectively gets two GPUs to talk to each other regardless of their manufacturer. You do have to read a little bit yourself, you know:

Lucid Hydra 200: Vendor Agnostic Multi-GPU, Available in 30 Days - AnandTech :: Your Source for Hardware Analysis and News

No need to give a crappy Bollywood example, though.

I don't follow you, once again I'm lost. The most obvious thing for you to do is actually read. If you haven't heard of the Hydra, you're supposed to research it. And the content of this post (for example) was not created by magic and it's not my imagination or theory. It is simply acquired by reading what is out there already. A lot of reading. Which, I can see, not many people do. Knowledge only comes to those who are thirsty for it. Not to those who have to be spoon fed.

Good Luck.
 
cranky said:
First you have to understand the origins of multi-GPU, and go back about six years in time.

Electrically, there is zero difference between the SLI and CF at an interface level. This is necessary or dual display will not work, which is the ability of the computer to output video to two displays on two different adapters simultaneously. Don't confuse this with SLI or CF, that is a multiplexing technology that enables GPUs to *process* images together, not display them. In fact both SLI and CF disable the second display (when used) when multi-GPU is active.

It's the drivers that recognise what is known as a bridge chip, which enables the GPUs to connect to each other over the PCIe interface and starts the multiplexing operation.

This is why it was possible for early hackers to run SLI on CF motherboards and vice versa, because early (even modern) chipsets are capable of handling dual GPUs in both display and multiplex mode. Not very effectively, but yes the capability does exist.

What nVidia effectively did was change their driver to disable SLI on motherboards that did not have a nVidia chipset, and later, they expanded the permitted hardware to include the NF200 chip (which board makers had to use as an addon if they didn't use nVidia chipset). Effectively, any board with an Intel chipset would *have* to have the NF200 if SLI was required in the feature set of that board.

A (very) smart hacker can still figure out a way to get SLI support to work (somewhat) on boards that do not natively support it.

The Lucid chip, if you read about it, is basically a hardware multiplexer. It has its own interface and effectively gets two GPUs to talk to each other regardless of their manufacturer. You do have to read a little bit yourself, you know:

Lucid Hydra 200: Vendor Agnostic Multi-GPU, Available in 30 Days - AnandTech :: Your Source for Hardware Analysis and News

I don't follow you, once again I'm lost. The most obvious thing for you to do is actually read. If you haven't heard of the Hydra, you're supposed to research it. And the content of this post (for example) was not created by magic and it's not my imagination or theory. It is simply acquired by reading what is out there already. A lot of reading. Which, I can see, not many people do. Knowledge only comes to those who are thirsty for it. Not to those who have to be spoon fed.

Good Luck.

Wow, thanks a lot man for clearing that up in such detail. That Bolly comment was not meant to inflame anyone, just what I felt on seeing madnav's comment :P
 
cranky said:
...... It is simply acquired by reading what is out there already. A lot of reading. Which, I can see, not many people do. Knowledge only comes to those who are thirsty for it. Not to those who have to be spoon fed.

Good Luck.

Good one. Me, I'm still spoon-fed but want to get rid of it :D Seeing big walls of text makes me too lazy even to scroll the page, and by the time I decide to read it one day, it has become obsolete news :P

Just finished COD Black Ops and my rig had no issues running it at 60 fps (vsync on) at 1440x900, so no thoughts of looking into Sandy Bridge. I'll wait for the next bridge, flyover or highway from Intel :cool2:
 
Well, this is good news for us customers.
/thread

Anyway, at least now people with AMD processors can use SLI without having to go for the more expensive boards.
Win for the consumer.
 
I don't get what is new here. Wasn't SLI available on most of the X58s anyway? So what changes now? They have refreshed the licensing scheme for the new chipset.

Or did I get it wrong? The articles state that too. During the LGA775 era there were issues, since one had to get nForce chipsets to enable SLI, but that changed once X58 arrived.
 
@Mr. Fenix

What you failed to understand is that the news is about nVidia licensing Intel for SLI, not about having SLI support on Intel's platform.

It is more of a political move by nVidia.

High-end Fermi is not likely to sell paired with AMD's slower CPUs. Having licensed Intel for SLI support can only mean one thing, and that is getting back in bed with Intel for QPI.
 
asingh said:
I don't get what is new here. Wasn't SLI available on most of the X58s anyway? So what changes now? They have refreshed the licensing scheme for the new chipset.

Or did I get it wrong? The articles state that too. During the LGA775 era there were issues, since one had to get nForce chipsets to enable SLI, but that changed once X58 arrived.
Native SLI support would mean that board partners wouldn't have to spend more on NF200 chips, so cost to us end users would be slightly lower. Back with the X58, some boards featured multiple NF200 chips to run all the PCIe slots at x16, but all that came at a hefty premium.

Also, entry-level boards would have SLI support. Not too sure this is actually an advantage, but an extra feature doesn't hurt nonetheless.
 
^^

Seems logical. But initially, for the X58, I thought the NF200 was not required for native x16/x16 SLI. It was just a license that the board partner bought (after nVidia's evaluation), and some change was made in the board firmware. The NF200 was only needed when the X58's PCIe lanes had to be expanded beyond x16/x16. Again, I could be wrong here.

What do the others think about this?
 
asingh said:
I don't get what is new here. Wasn't SLI available on most of the X58s anyway? So what changes now? They have refreshed the licensing scheme for the new chipset.

Or did I get it wrong? The articles state that too. During the LGA775 era there were issues, since one had to get nForce chipsets to enable SLI, but that changed once X58 arrived.

Nahh, there were unofficial ways around that 775 limitation as well: running 8800GTs in SLI on a P45 chipset motherboard :)
 
asingh said:
I don't get what is new here. Wasn't SLI available on most of the X58s anyway? So what changes now? They have refreshed the licensing scheme for the new chipset.

Or did I get it wrong? The articles state that too. During the LGA775 era there were issues, since one had to get nForce chipsets to enable SLI, but that changed once X58 arrived.
Only x58 boards with a bridge chip were officially supported for SLI by the nVidia drivers.

However, as you can see from TH's post, there are ways to get SLI working on any board with the requisite number of PCIe slots, officially or not :P We don't discuss those means here, though. The point is that multi-GPU is mainly a way for the card manufacturer to sell two cards instead of one, a licensing game rather than a hardware compatibility issue. nVidia tried to enforce exclusivity by mandating their own chipset, and later a bridge chip.

If you look at the nVidia marketshare and share prices over the last three years, the results of that strategy are clear to see.

The other side to this is that the processing overhead of two GPUs is greater than the performance gain at lower resolutions. Practically, no benefit is realised unless the resolution and processing power required is significantly more than what a single GPU can provide for. Multi-GPU is pointless for resolutions below 1920x, and probably the only solution for 2560x with all eye-candy.

So if you've not burned 80K on a 30" monitor, multi-GPU is probably not for you. 2560x is 4 megapixels, FWIW, as opposed to the 2MP resolution of a 1920x display. So it's a safe bet to assume that a card that is maxed out at 1920x will gain from the addition of a partner for 2560x. This is a pretty decent way to judge whether you need two GPUs or not.
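
For what it's worth, the pixel arithmetic behind that rule of thumb is easy to check (using the common 2560x1600 and 1920x1080 panel resolutions):

```python
# Quick check of the megapixel figures quoted above.
def megapixels(width, height):
    return width * height / 1e6

mp_30in = megapixels(2560, 1600)   # ~4.1 MP
mp_fullhd = megapixels(1920, 1080) # ~2.1 MP

print(f"2560x1600: {mp_30in:.1f} MP")
print(f"1920x1080: {mp_fullhd:.1f} MP")
print(f"ratio: {mp_30in / mp_fullhd:.2f}x")  # ~1.98x the pixel load
```

So a 30" panel pushes roughly twice the pixels of a 1080p display, which is why a card maxed out at 1920x tends to want a partner at 2560x.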
 
^^
Cranky, thanks for a detailed write up, but I am still confused, so will bother you more. :)

Questions I have around this:

1. Do all the X58s which have SLI certification actually have an nForce 200 chip embedded? I doubt this, because this article states that it was just a certification program and a 'cookie' was embedded in the board which the driver would pick up. The article also states that nVidia would not limit its SLI program to boards with the nForce 200 chip.

2. So are there actually two enabling methods for SLI: either as per (1) above, or of course via an nVidia chipset? This slide gives a good summary.

3. How will it work now for the new Sandy Bridge chipsets? Or can we only speculate until the boards are actually released?

4. By 'bridge chip' in your post above, did you mean the NF200 or something else? Also, wasn't the NF200 just an expansion beyond the native x16/x16 lanes of the X58?

What you said is true, though: only at resolutions >=1920x are the real benefits of multi-GPU realized. I remember using it on my 19" and one card used to sit there doing jack. Now at 1920x1080 I see both cards being taxed quite heavily while gaming, provided the drivers and the game support it. :) Which of course is another story.

Thanks anyway!
 
To your questions -

1. Yes. In theory the X58 on its own would be SLI-capable if the motherboard manufacturer simply got the board validated, but IINM the only board that was SLI-certified without an NF200 chip was the DX58SO. I wasn't into Intel designs at the time, but IIRC all other board manufacturers simply bought the chip and soldered it to the board; this also enabled them to get three x16 PCIe slots on a board (the X58 only had 40 lanes). I could be wrong, though.

2. and 3. Yes. Enabling SLI is only a driver 'tweak', so to speak. My assumption is that P67 will be recognised as a bridge and SLI will be enabled from the get-go. How it is actually implemented is anybody's guess. The problem is that SLI is a specification, whereas motherboard manufacturers change a lot of things when the rubber meets the road. You may have multiple implementations from different vendors.

4. This is complicated.

See there are two separate issues, hardware and software.

For two GPUs to operate in a single machine, there are two possibilities, multi-display or multi-GPU. Multi-display is a hardware implementation, and multi-GPU is a software implementation.

Multi-display is enabled by default. In spite of all reports to the contrary, it is quite possible to use one nVidia and one AMD card in a system in multi-display mode, as long as Windows knows which adapter (and therefore which display) is the primary one, and the adapters share no IRQs at all. For this you only need two (or more) slots on the motherboard to accept the requisite adapters.

This has been the done thing since the Win98 days, where you could have one AGP and one PCI (not PCIE) card and operate displays off each. I've personally owned a system with the GeForce 8300 chipset, and even after placing a 4870x2 in the board the onboard nVidia display device would not (could not) be disabled. And it worked fine, no conflicts or complaints except for the fact it occupied 16MB of my system memory at all times. Connecting a monitor to the nVidia outputs resulted in display in both monitors - no fancy trickery was required. And this was Windows XP, not really sure how Windows 7 handles it but it can't be much different :)

Multi-GPU is a different ball game. At a purely technical level, video rendering is a digital to analog conversion at the end of a decoding process. Multi-GPU is a method to split the (decoding) processing load among all present display adapters. Think Folding@home, how is it possible to split a huge processing load into segments that random and multiple entities can bear and process? It's the same thing here.
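
As a crude sketch of that splitting, take alternate-frame rendering (AFR), the scheme SLI and CFX most commonly use: the driver hands successive frames to the GPUs round-robin, so each GPU carries roughly 1/N of the load. The function and names below are illustrative only, not any vendor's actual scheduler:

```python
# Toy illustration of alternate-frame rendering (AFR):
# successive frames are assigned to GPUs round-robin,
# so each of N GPUs renders roughly every Nth frame.

def assign_frames(frame_ids, gpu_count):
    """Map each frame number to a GPU index, round-robin."""
    return {f: f % gpu_count for f in frame_ids}

schedule = assign_frames(range(6), 2)
print(schedule)  # {0: 0, 1: 1, 2: 0, 3: 1, 4: 0, 5: 1}
```

With two GPUs, each renders every other frame; the hard part in practice is keeping frame pacing even and sharing state between the cards, which is exactly what the driver and the bridge handle.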

The low-level driver receives the code, and communicates with the adapters, telling each what to 'do'. The important distinctions here are that it is a piece of software making the decisions, and one of the decisions the nVidia driver makes is whether to communicate with the adapters at all, given the system environment. One of the variables in that environment is the presence of a bridge chip, which is a device that can communicate with both the adapters. For example in a dual-GPU card, the bridge chip is present on the display adapter itself. To my understanding, the P67 will also be a recognised bridge chip, as were the x58 and the NF200.

The addition of 16 lanes by the NF200 chip is an added benefit, not necessarily the core function.

Hope that is a little clearer (it is to me now, too :) ).
 