quanta - Thursday, August 11, 2005 - link
Why does the AMD version of the nForce4 north bridge only have 18 PCI Express lanes instead of 20, especially when the AMD north bridge doesn't have to include a DDR2 memory controller? It sounds like yet another crippleware move to 'justify' the purchase of some upcoming nForce4 Pro chips.
Tanclearas - Tuesday, August 9, 2005 - link
As others have stated, you gain nothing from x16 vs x8 PCIe. I would honestly be surprised if anything is even gained for GPGPU applications, but it is possible.
Any performance differences would be easy enough to examine, even right now. An SLI motherboard can be configured for x8/x8 even if you're only using a single card. I know that all of the benchmarks I ran came out virtually identical. I'll be trying again when my 7800GTX shows up, but I'm willing to bet I'll get the same result. So you don't like switching the "paddle"? Just set it for dual cards and be done with it.
The new chipset is however a great move by Nvidia. Marketing works. Either Dell fell for the marketing, or Dell understands that their customers will fall for the marketing (probably the more likely).
I know that I have no plans to upgrade my A8N-SLI Deluxe just to get a board with the new chipset. I'm perfectly content sitting on what I have until I need to upgrade to M2. Who knows what chipset I'll look at then. Hopefully I'll have a lot of choices when I'm ready (ATI, Nvidia, ULi, maybe even Via [lol]).
ElJefe - Tuesday, August 9, 2005 - link
16x is freakin retarded marketing crap.
so is pci-e. thats also marketing crap.
SLI is ALSO marketing crap. It was designed for you to spend 2x the money. "oh no its not, i can buy a cheap upgrade to my current card etc etc" No you cant. You want a better graphics card. *buzzer sound!* No you cannot get that card, because it isnt the same one as before. "Oh, well i have 6800 Ultras" *buzzer sound again* No point in spending for a 2nd card even here, the 7800 will outperform it anyway, and wouldnt you want 1 card for less heat and wattage consumption vs 2 that are now outdated?
SLI does give you a few things: one, it makes you spend 20-60 dollars more for a mobo that has it, PLUS it gives you a 30-40 watt higher power draw even if you use only 1 card! AND! I'll throw in a bonus: a super hot northbridge that, if passively cooled, can exceed 70 degrees celsius. BUY ONE NOW!
What crap. I just called the ASRock US sales office and they said they are about to release the M1695 board. At least one company doesn't force someone to get crap that is useless.
agp 4x is hardly maxed out by all but the 7800, we have 4x slots on PII boards.....
8x agp hasn't been tapped, BUT you SHOULD buy 2 PCI-E x16 lane cards NOW!!!!!
Calin - Wednesday, August 10, 2005 - link
I want to disagree - PCI-E is not marketing crap, it is a way to put all the external devices back on a single bus again.
While PCI will choke under many kinds of load that current PCI cards are able to generate (think PCI video cards, think PCI RAID cards, think PCI gigabit cards), PCI-E is easily able to accept it. While the "top" of the line PCI-E (16x) offers no usable bandwidth advantage over the 8x cards, it still offers more power to the card than the 8x slots, and again more power than AGP slots and maybe even than AGP-Pro slots. Just think of all the last-generation ultra-super-extra video cards and their TWO 4-pin connectors to get extra juice. How about an AGP card that needs not two 4-pin connectors for extra power, but three? PCI-E 16x might have solved that problem.
Also, have you seen PCI cards (network cards mainly) that are long and thin to reach to the end of the PCI slot? A PCI-E 1x card can have a third of the PCB, allowing better airflow, costing less, and so on. PCI-E is surely better than PCI.
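To put rough numbers on the slot-power point (these are the commonly cited spec limits, so treat them as ballpark figures):

```python
# Approximate maximum power deliverable through the slot itself,
# before any auxiliary 4-pin/6-pin connectors are involved.
slot_power_watts = {
    "AGP 3.0": 40,        # ~25 W typical, ~40 W max
    "AGP Pro110": 110,
    "PCIe x1": 25,        # low-profile x1 cards are limited to less
    "PCIe x16": 75,
}

for slot, watts in slot_power_watts.items():
    print(f"{slot:>10}: up to ~{watts} W from the slot")
```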
nserra - Wednesday, August 10, 2005 - link
I think the other guy was talking about AGP not PCI.... but your points are still valid.xsilver - Tuesday, August 9, 2005 - link
I think nvidia will be staggering these releases to force people to upgrade again and again
SLI x16
and then ddr2
and then M2 socket -- all brought out over the space of 1 year?
or is ddr2 being released when and only when M2 sockets are released?
PrinceGaz - Tuesday, August 9, 2005 - link
All socket M2 boards will support DDR2 only, just as all S939 boards support DDR only. The memory controller is on the CPU, remember, so the socket change is to allow the switch to DDR2.
lsman - Tuesday, August 9, 2005 - link
no. there is the (soon to be out) ASRock 939Dual-SATA2 that has an M2 jumper built in... I guess it will have an adaptor for M2..
nserra - Tuesday, August 9, 2005 - link
Wrong!
The jumper is to enable an add-in card/board with the socket M2 and memory banks.
So while it is possible to reuse the same socket, you always need separate memory banks for DDR and DDR2. Of course the best is to provide 2 sockets like the combo-Z.
nserra - Tuesday, August 9, 2005 - link
?
PrinceGaz - Tuesday, August 9, 2005 - link
You can easily test to see if there is any performance difference between x8 and x16 PCIe with a standard nF4 SLI board. Just drop one card (ideally a 7800GTX) in the first graphics-card slot, and run tests with the paddle set to single-card mode. That gives you the PCIe x16 results. Now set the paddle to SLI mode and re-run the tests with the same single card. It will now be running at PCIe x8 and you can see if there is any drop in performance. Voila! :)
Fluppeteer - Tuesday, August 9, 2005 - link
The thing about graphics slot bandwidth is that it's *always* much less than native on-card bandwidth. Any game which is optimized to run quickly will, therefore, do absolutely as much as possible out of on-card RAM. You'd be unlikely to see much difference in a game between a 7800GTX on an 8 or 16-lane slot (or even a 4-lane slot). If you want to see much difference, put in a 6200TC card which spends all its time using the bus.
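To illustrate the gap with back-of-envelope figures (PCIe 1.x is 250 MB/s per lane per direction; the 7800GTX on-card number is the commonly quoted one):

```python
# Rough bandwidth comparison: PCIe slot vs. on-card memory.
# The 7800GTX's 256-bit GDDR3 at an effective 1.2 GHz gives the
# widely quoted ~38.4 GB/s on-card figure.
def pcie_bw_gbs(lanes):
    return lanes * 0.25  # GB/s, one direction, PCIe 1.x

oncard_gbs = 256 / 8 * 1.2  # bus width in bytes * effective GHz

for lanes in (4, 8, 16):
    print(f"PCIe x{lanes:<2}: {pcie_bw_gbs(lanes):.1f} GB/s each way")
print(f"7800GTX on-card: {oncard_gbs:.1f} GB/s")
```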
There *is* a difference if you're sending lots of data backwards and forwards. This tends to be true of Viewperf (and you've got a workstation card which is trying to do some optimization, which is why the nForce4 Pro workstation chipset supports this configuration), or - as mentioned - in GPGPU work. It might also help for cards without an SLi connector, where the image (or some of it) gets transferred across the PCI-e bus.
This chipset sounds like they've just taken an nForce4 Pro (2200+2050 combo) and pulled one of the CPUs out. It does make my Tyan K8WE (NF4Pro-based dual 16-lane slots, dual Opteron 248s) look a bit of an expensive path to have taken, even though I've got a few bandwidth advantages. Guess I'll have to save up for some 275s so I don't look so silly. :-)
PrinceGaz - Tuesday, August 9, 2005 - link
I wasn't suggesting measuring the difference between x8 and x16 with a TC card, it was for people who are worried that there is some performance hit with current SLI setups running at x8 which this new chipset will solve. I'm well aware that performance suffers terribly if the card runs out of onboard memory, and was not suggesting that. Besides anyone with a TC card won't be running in SLI mode anyway so the x8 vs x16 issue is irrelevant there.
I agree there is unlikely to be much difference between x8 and x16 in games but it would be nice to test it just to be sure. Any difference there is could be maximised by running tests at low resolutions (such as 640x480) as that will simulate what the effect would be of the x8 bus limitation on a faster graphics-card at higher resolutions. It's all about how many frames it can send over the bus to the card.
Actually my new box has a 6800GT in it and an X2 4400+ running at 2.6GHz, so I'll do some tests this evening then flick all the little switches (it's a DFI board) and re-run them, then report back with the results. I doubt there'll be much difference.
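For anyone repeating the experiment, the comparison boils down to something like this (the FPS numbers below are placeholders, not measurements - do several runs per mode so you can see the noise):

```python
# Minimal sketch for comparing x16 vs x8 runs: take several benchmark
# results per paddle mode and report mean +/- stdev.
from statistics import mean, stdev

runs_x16 = [101.2, 100.8, 101.5]  # FPS, paddle in single-card mode
runs_x8  = [100.9, 101.1, 100.6]  # FPS, same card, paddle in SLI mode

for label, runs in (("x16", runs_x16), ("x8", runs_x8)):
    print(f"{label}: {mean(runs):.1f} +/- {stdev(runs):.1f} FPS")

# If the means differ by less than the run-to-run noise, the bus
# width isn't the bottleneck for this card and workload.
```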
Fluppeteer - Tuesday, August 9, 2005 - link
Sorry, should've been clearer - I didn't mean to suggest a bandwidth comparison test either, just to say that where you don't see a difference with the 7800 you might with the 6200TC. Not that I'd expect all that many owners of this chipset to be buying 6200s.
I'd be interested in the results of your experiment, but you might also be interested in: http://graphics.tomshardware.com/graphic/20041122/... (which is the source of my assertions) - although not as many games are tested as I'd thought I remembered. Still, the full lane count makes a (minor) difference to Viewperf, but not to (at least) Unreal Tournament.
Of course, this assumes that my statement about how much data goes over the bus is correct. The same may not apply to other applications - responsiveness in Photoshop, or video playback (especially without GPU acceleration) at high resolutions. Anyone who's made the mistake of running a 2048x1536 display off a PCI card and then waited for Windows to try to fade to grey around the "shutdown" box (it locks the screen - chug... chug...) will have seen the problem. But you need to be going some for 8 lanes not to be enough.
It's true that you're more likely to see an effect at 640x480 - simulating the fill rate of a couple of generations of graphics cards to come, at decent resolution. The TH results really show when pre-7800 cards become fill limited.
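The reasoning, roughly: per-frame bus traffic (commands, geometry) is fairly constant regardless of resolution, so bus load scales with frame rate - a toy calculation, with an invented per-frame figure purely for illustration:

```python
# Toy model: if each frame pushes a roughly fixed amount of command
# and geometry data over the bus, bus load scales with FPS.
per_frame_mb = 2.0  # MB of commands/geometry per frame (made up)

for res, fps in (("1600x1200 (fill-limited)", 60),
                 ("640x480 (CPU/bus-limited)", 400)):
    print(f"{res}: {fps} fps -> ~{per_frame_mb * fps:.0f} MB/s over the bus")
```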
My understanding was that, in non-SLi mode, the second slot works but in single-lane config. Is that right? I'd like to see *that* benchmarked...
Ah, wonderful toys, even if we don't really need them. :-)
PrinceGaz - Tuesday, August 9, 2005 - link
Yes, when an nF4 SLI mobo is set to single-card mode, the second slot does run at x1 so it is still very useful assuming companies start making PCIe TV-tuner cards, soundcards, etc in the next year or two. Apparently Creative's new X-Fi will be PCI only at first which is lame beyond belief. The 250MB/s bi-directional bandwidth that a x1 PCIe link would give a graphics-card would have quite an impact I'm sure.
Fluppeteer - Wednesday, August 10, 2005 - link
Re. the X-Fi, I don't see the bandwidth requirements needing more than PCI (not that I know anything about sound); I'm sure they can make a version with a PCI-e bridge chip once people start having motherboards without PCI slots (which, given how long ISA stuck around, will probably be in a while). If even the Ageia cards are starting out as PCI, I'd not complain too much yet.
Apparently the X-Fi *is* 3.3V compatible, which at least means I can stick it in a PCI-X slot. (For all the above claims about PCI sticking around, my K8WE has all of *one* 5V 32-bit PCI slot, and that's between the two PCI-Es. I hope Ageia works with 3.3V too...)
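A quick sanity check on the sound-card bandwidth point (the stream parameters below are just an example of a worst-ish case, not the X-Fi's actual requirements):

```python
# Even an aggressive multichannel audio stream is tiny next to PCI's
# ~133 MB/s shared bandwidth, let alone PCIe x1's 250 MB/s per direction.
channels = 8           # 7.1 output
sample_rate = 192_000  # Hz
bits = 24

stream_mbs = channels * sample_rate * bits / 8 / 1e6
print(f"8ch/24-bit/192kHz stream: ~{stream_mbs:.1f} MB/s")
print("PCI (32-bit/33MHz, shared): ~133 MB/s")
print("PCIe x1 (dedicated):        ~250 MB/s per direction")
```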
nserra - Tuesday, August 9, 2005 - link
"Obviously, Intel is our key processor technology partner and we are extremely familiar with their products. But we continue to look at the technology from AMD and if there is a unique advantage that we believe will benefit the customer, sure, we will look at it."jamori - Monday, August 8, 2005 - link
I'm curious as to whether or not they fixed the problem with displaying on two monitors without rebooting into non-SLI mode. I'm planning to buy a new motherboard this week, and am going with the ultra version instead of SLI for this reason alone.
I figure I'll spend less on a motherboard and more on a videocard that will actually do what I want it to.
Doormat - Monday, August 8, 2005 - link
Anandtech, please make sure to test out the CPU Utilization when using the onboard Ethernet w/ ActiveArmor on production boards. I'd like to see if they revised that at all, since the CPU Utilization was so high on the current revision of the boards. In fact, most nVidia nForce Pro motherboards for Opterons don't use the included nVidia Ethernet, they use the Broadcom or some other chip because performance is so bad.
Anemone - Monday, August 8, 2005 - link
Just cuz if you own the mobo a few years there will be things to stick in x1 and x4 slots I'm sure.
Nice going Nvidia :)
akugami - Monday, August 8, 2005 - link
This will boost video graphics performance as much as when 8x AGP came out over the then cutting edge 4x AGP. Which is to say slim to none. As others have stated, there is no known graphics card capable of fully utilizing the 8x AGP bus much less 16x PCI-Express bus. The Geforce 7800 doesn't come in AGP flavors so we don't know if it has a significant performance difference between 4x and 8x AGP.
JarredWalton - Monday, August 8, 2005 - link
A few thoughts about the bandwidth increases now offered with the new chipset. First, for transfers from system RAM to the GPUs, this is completely useless. I also have to wonder what the link between the MCP and SPP is - it would have to have 8 GB/s of bandwidth to make the second X16 slot the same speed as the primary SPP slot. Hmmmm.... most I've heard of for a NB to SB interconnect is about 1 GB/s. Two HyperTransport channels running at 1000 MHz would provide enough bandwidth, but I seriously doubt that's present.
Now, even if the NB to SB connection were fast enough, dual-channel PC3200 DDR only offers 6.4 GB/s of bandwidth - less than that of a single X16 slot. So SATA controllers sitting on X4 connections combined with two GPUs on X16 connections will now be possible, but the actual performance probably wouldn't be any different than SATA controllers on an X2 connection with two GPUs on X8 connections. Maybe we'll get quad-channel DDR2-667 RAM with socket M2 to make this a realizable performance boost? (/sarcasm)
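Putting the numbers from that argument side by side (PCIe 1.x at 250 MB/s per lane per direction, PC3200 at 3.2 GB/s per channel):

```python
# Back-of-envelope check of the interconnect argument above.
pcie_x16_per_dir = 16 * 0.25             # 4.0 GB/s each direction
pcie_x16_bidir = 2 * pcie_x16_per_dir    # 8.0 GB/s aggregate

pc3200_dual = 2 * 3.2                    # 6.4 GB/s dual-channel DDR400

# A 16-bit HyperTransport link at 1000 MHz (double data rate) moves
# 2 bytes * 2 transfers/cycle * 1 GHz = 4 GB/s per direction.
ht_16bit_1ghz_per_dir = 2 * 2 * 1.0

print(f"PCIe x16: {pcie_x16_per_dir} GB/s each way ({pcie_x16_bidir} total)")
print(f"Dual-channel PC3200: {pc3200_dual} GB/s")
print(f"16-bit 1GHz HT link: {ht_16bit_1ghz_per_dir} GB/s each way")
```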
There is a use case for it, though: GPGPU for one, and potentially SLI without the extra connector. Board to board SLI transfers over the internal X16 should be at least as fast as the proprietary connector I'd think. That last one is especially interesting, I think. If current SLI takes an X16 channel and breaks it into two X8 channels, how about a board with four X8 connections and four physical X16 slots for quad-GPU SLI? It wouldn't surprise me at all to find that NVIDIA has a team working on that exact project.
As the article states, the biggest deal about this launch for gamers is that prices will drop on SLI boards. Maybe then I'll be able to stomach recommending SLI for a mid-range system. The even bigger deal for NVIDIA is that they now have an "in" with Dell. THAT is freaking huge! I don't think Dell actually sells that many XPS systems, but then I don't think that many Intel SLI setups have been purchased as a whole. Dell has marketing power, and they WILL find ways to convince people to buy Intel SLI PCs.
ceefka - Tuesday, August 9, 2005 - link
4 graphics cards? Looks like the PC is turning into a gaming console and losing its general purpose, unless you're a stock broker maybe ;-)
Having this abundance of PCI-E lanes looks like a step to abandon PCI(-X). The nF4 boards have issues with professional soundcards on the PCI-bus. It is a pity all these gadgets and extra performance have downgraded the PCI-bus instead of enhancing it.
I believe it is time for card manufacturers to develop more PCI-E based cards. It seems like chipset manufacturers aren't willing to spend the time to preserve good bandwidth for the old PCI-bus.
ChiefNutz - Monday, August 8, 2005 - link
ChiefNutz - Monday, August 8, 2005 - link
Nvidia Graphic cards communicate through their "crossbar" on the top of the cards, so, even having just 1 HT link between SB & NB wouldn't be that big of a deal. I don't think that it would saturate the HT link either, due to the crossbar. But this setup would be nice in a system if they got rid of the 16x crap and just gave you the straight channels like the 2200pro and 2250, like a 8x or 2 4x slots like that.
What I really want to know is will they now support raid 5 on a non NForce-Pro AMD system?? Intel edition has it and so does the 2200pro? where is the NON-ECC love!
JarredWalton - Monday, August 8, 2005 - link
Yes, they have their crossbar controller, but they still get information from the CPU and main memory. If you ignore the GPU to CPU/RAM via PCIe bus communications, then there is no difference whatsoever between SLI X8 and SLI X16. (Which is likely the case anyway.)
RAID 5 appears to be coming with the nForce4 SLI X16 chipsets to both platforms. We just neglected to mention it:
quote: World-Class features for both AMD and Intel platforms
* ActiveArmor secure networking engine with NVIDIA Firewall
* NVIDIA nTune
* MediaShield with 4 SATA 3Gb/s ports and RAID 5
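Since RAID 5 keeps coming up: the parity any RAID 5 implementation computes is just a per-stripe XOR, which is what lets one lost disk be rebuilt from the rest - a toy sketch:

```python
# Toy RAID 5 parity demo: parity is the XOR of the data blocks,
# so any single missing block can be reconstructed from the survivors.
from functools import reduce

data_blocks = [b"\x01\x02", b"\x10\x20", b"\xff\x00"]  # 3 "disks"

def xor_blocks(blocks):
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*blocks))

parity = xor_blocks(data_blocks)

# "Lose" disk 1 and rebuild it from the other disks plus parity.
rebuilt = xor_blocks([data_blocks[0], data_blocks[2], parity])
assert rebuilt == data_blocks[1]
print("rebuilt block:", rebuilt.hex())
```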
afrost - Monday, August 8, 2005 - link
So now nforce boards will have two really hot chips that need loud little fans or elaborate heat pipes? I hope that with this new generation of nforce chips they figure out a way to cut down some of the heat output. The nForce 3 was perfectly fine but the 4 gets toasty.
This is one of the main reasons that I am looking forward to the ATI boards.... I like passive chipset coolers.
Anton74 - Monday, August 8, 2005 - link
Perhaps now that it is a 2 chip solution rather than 1, each of the chips will run a bit cooler than if they were combined, hopefully allowing for simple passive cooling with good (aftermarket) heatsinks like that blue Zalman one, the ZM-NB47J. As long as they don't put those chips in un-strategic places...
Gerbil333 - Tuesday, August 9, 2005 - link
That was my first thought when I read this. The current nF4 chips run way too hot. I really hope the new two chip design runs cooler.
virtualrain - Monday, August 8, 2005 - link
Doesn't this solution completely do away with the need to either open the case and flip the switch to enable SLI or select it electronically (i.e. ASUS A8N-SLI Premium) and reboot?
If so, that's a positive move even if there is no performance gain.
One of the appeals of ATI's crossfire solution is the expanded flexibility and ease-of-use. I think this evens that part of the playing field somewhat.
Calin - Tuesday, August 9, 2005 - link
Calin - Tuesday, August 9, 2005 - link
As I remember, old NVIDIA SLI had that switch to distribute the PCI-E lines in 1x16 (a single usable slot) or 2x8 (two usable slots). There might have been some kind of extra connections to have a single x16 slot and one 4x slot (20 PCI-E lines used) or two x8 slots (16 PCI-E lines used, and 4 unused).
It would be great to be able to change the SLI/non-SLI configuration from drivers.
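The paddle configurations being described, written out (the single-card x16/x1 split is per the comments above; the exact splits are board-dependent):

```python
# The two lane allocations an nF4 SLI paddle switches between,
# as described in this thread (exact splits vary by board).
paddle_modes = {
    "single-card": {"slot1": 16, "slot2": 1},  # second slot still live at x1
    "SLI":         {"slot1": 8,  "slot2": 8},
}

for mode, slots in paddle_modes.items():
    total = slots["slot1"] + slots["slot2"]
    print(f"{mode:>11}: slot1 x{slots['slot1']}, slot2 x{slots['slot2']}"
          f" ({total} lanes on the graphics slots)")
```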
Amplifier - Monday, August 8, 2005 - link
First Post
Houdani - Monday, August 8, 2005 - link
quote: ...may mean that Dell is capable of making good decisions in the processor department as well. While it is unlikely that we will see AMD based Dell systems anytime soon, it's nice to know the thin line between volume discounts and unfair business practices is clear enough to allow Dell to make the right choice for performance once in a while
Wow, a double snipe in the same paragraph ... Dell [bang!] Intel [bang!].
And we now turn your attention towards a nitpick:
What's a GPGPU again?
[Last page, next to last paragraph, two occurrences.]
rrsurfer1 - Monday, August 8, 2005 - link
From the Wiki...
"General-Purpose Computing on Graphics Processing Units (also referred to as GPGP and to a lesser extent GP^2) is a recent trend in computer science that uses the graphics processing unit to perform the computations rather than the CPU."
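In other words, the GPU's pixel pipeline gets used as a wide parallel ALU: data goes in as a texture, a shader runs per element, and the "rendered" output is the result. A plain-Python caricature of the idea (no real GPU API involved):

```python
# Caricature of GPGPU: a "texture" is just an array, a "shader" is a
# function applied independently to every element, and "rendering"
# produces the output array. On a real GPU all elements run in parallel.
input_texture = [0.5, 2.0, 3.5, 8.0]

def shader(texel):
    # Some per-element math the GPU would do in a fragment program.
    return texel * texel + 1.0

output_texture = [shader(t) for t in input_texture]
print(output_texture)  # [1.25, 5.0, 13.25, 65.0]
```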
Houdani - Monday, August 8, 2005 - link
I can honestly say this is the first time I have ever heard of GPGPU. And here I was thinking it was a misspelling of GPU.
I learned something new today!
JarredWalton - Monday, August 8, 2005 - link
Yeah, I saw this too and had to do a quick check on Google. The text is now tweaked to explain the acronym for those that haven't encountered it before.
Phantronius - Monday, August 8, 2005 - link
I really hate this hobby sometimes....
BoBOh - Monday, August 8, 2005 - link
There are some SATA RAID5 cards that run on x8 PCIe. That's why I want one of these new boards!
system404 - Monday, August 8, 2005 - link
...I wonder if this may have influenced ATI's decision to do one last revision on their new line of mobos. The Inquirer had a story (http://www.theinquirer.net/?article=25198) a few days back about ATI going back to the drawing board to make undisclosed changes on what seemed to be an already-working chipset, pushing availability back another month into September. One can always hope the wait is worth the hype. I plan on building my rig in the next month or two & would like to have some options available when it comes time to choose a mobo & GPU combination...
Marlowe - Monday, August 8, 2005 - link
I read that same story and I agree with you. I think ATI have probably heard about this x16 chipset before we have. I wonder what they are changing.. Who knows, maybe we will see 2x16 pci-e crossfire boards? Time will tell..
R3MF - Monday, August 8, 2005 - link
now give me a performance mainstream mATX SLI board.
bring it on Abit/Asus/MSI/DFI............
DerekWilson - Monday, August 8, 2005 - link
I'm sure they'll be coming along at some point. We'll try to look around and see if we hear anything on that front.
R3MF - Monday, August 8, 2005 - link
cheers. :)
i'm sure my next PC will be a Silverstone SG01, i just need a decent M/B!
KenRico - Monday, August 8, 2005 - link
Think we will see an ASUS A8N32-SLI sooner rather than later!
Some cards like MATROX QID series (supporting 8 Displays with a pair of cards) REQUIRE individual x16 slots... they are not SLI, but you can only load one card without going to Opteron nFORCE Pro currently.
Kudos to nVIDIA for bringing the advantages of a professional chipset to the desktop... at the current price point, and driving down current SLI mainboards' prices accordingly!
Last note... having two PCI-e x16 slots will also allow a single-card user to pick which slot to use without having to drop down to a x2 or x4 slot like currently shipping solutions... hopefully allowing the use of an aftermarket chipset cooler like Zalman or water cooling without having to go to a shorter video card.
Fluppeteer - Tuesday, August 9, 2005 - link
Re. the QID boards - really? How rude. I thought the PCI-e spec required cards to be able to negotiate down to a lower number of lanes.
It *would* be nice if more manufacturers would put longer slots on their motherboards, even if the lanes aren't hooked up, though. (Otherwise it's a case of taking a hacksaw to the bottom of the card and hoping it doesn't pull too much power for the socket.)
Calin - Tuesday, August 9, 2005 - link
Now this is a great reason to buy such boards. However, the people that will buy dual PCI-E x16 mainboards for this reason will be far fewer than the number of people that buy the mainboards for their "speed" advantage.
The reason to buy such mainboards? Better/more slots/connectors (PCI-E, SATA, USB, Ethernet), having the flagship board, bragging, better support for the flagship boards than for the mainstream ones. There are reasons enough for people to buy (even if very few of the reasons are justified from an economic standpoint).
bob661 - Monday, August 8, 2005 - link
If this offers more performance over current SLI, I may just upgrade to this or wait till next year and do DDR2.
shoRunner - Monday, August 8, 2005 - link
this will offer absolutely no performance increase, since cards aren't anywhere near using 8x bandwidth, yet another worthless upgrade.
bob661 - Monday, August 8, 2005 - link
How do you know? Do you have one of these sitting in front of you?
MrSmurf - Monday, August 8, 2005 - link
Did you read the article?
Rock Hydra - Monday, August 8, 2005 - link
Yeah. I don't think this will have a very big impact. The increase in performance from 4x AGP to 8x was minimal. So, I think this one won't be that great.
yacoub - Monday, August 8, 2005 - link
"Now that all of you early adopters have dropped the cash for an SLI board, we're releasing SLI X16 to encourage you to re-upgrade and REALLY waste some money.":)
MrSmurf - Monday, August 8, 2005 - link
Did you even read the article?
Affectionate-Bed-980 - Monday, August 8, 2005 - link
Now can someone tell me why I spent $129 on a DFI LP NF4 Ultra-D just a week ago?
Futureproofing is just a sad concept. It makes me want to cry.
OvErHeAtInG - Monday, August 8, 2005 - link
Because now you have a wonderfully great stupendous motherboard.
If it wasn't good, you wouldn't have bought it.
Besides, this new chipset won't really affect you, if you really got the ultra-d. That's not even the SLI board. Those running only 1 card already use all 16 lanes.
Turin39789 - Monday, August 8, 2005 - link
YAPR: Yet Another Press Release
DerekWilson - Monday, August 8, 2005 - link
It's not just a press release -- this will have a real impact on pricing of current boards that will benefit everyone.
yacoub - Monday, August 8, 2005 - link
The few people left in the market to buy an SLI board right now, that is. Not the majority of people who like to stay current with technology for their gaming rigs who have already early-adopted an SLI board setup and are now like "wtf".
rrsurfer1 - Monday, August 8, 2005 - link
Umm. Hate to tell you but your supposed "majority of people who like to stay current with technology" is not the majority of people in any way. It's the *MINORITY* of people. Bringing prices of SLI boards down to mainstream, combined with Dell's power of distribution will definitely help raise the number of boards sold. This is a nice move by Nvidia.
bob661 - Monday, August 8, 2005 - link
Yepper yep.
Also, I am an early adopter of SLI and am not mad at this piece of news. As a matter of fact, I'll more than likely upgrade to this board if it shows to be quicker than current tech. This is the tech market, FFS, things NEVER stay stagnant.
AnnihilatorX - Monday, August 8, 2005 - link
Yay
Time to waste some money
Well, though, does a graphics card which utilizes 16X over 8X bandwidth even exist?
Don't think so
Schadenfroh - Monday, August 8, 2005 - link
#2, the question is whether there is a graphics card which utilizes the features/speed advantage that PCI-E 8x offers over AGP 8X.
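For reference, the peak theoretical numbers behind that question:

```python
# Peak theoretical bandwidth of the buses being compared.
# AGP 8x: 32-bit bus at 66 MHz with 8 transfers/clock, half-duplex.
agp8x_gbs = 32 / 8 * 66e6 * 8 / 1e9   # ~2.1 GB/s total

# PCIe 1.x: 250 MB/s per lane per direction, full-duplex.
pcie_x8_per_dir = 8 * 0.25            # 2.0 GB/s each way
pcie_x16_per_dir = 16 * 0.25          # 4.0 GB/s each way

print(f"AGP 8x:   ~{agp8x_gbs:.1f} GB/s (shared, both directions)")
print(f"PCIe x8:  {pcie_x8_per_dir:.1f} GB/s each direction")
print(f"PCIe x16: {pcie_x16_per_dir:.1f} GB/s each direction")
```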