Wow, sounds too good to be true, and sadly, that's because it is. This is a first effort and the problems are not insurmountable, but when you look at the details, some cracks become apparent quite quickly.
First up, the good. Nvidia did a good thing here, and pretty much validated the entire Physics Processing Unit (PPU) space with a press release. As I said before, with quad-SLI, it was painfully CPU bound, and GPU performance was ramping vastly faster than CPU performance.
Nvidia had to do something with that power, and do it quickly. If it didn't, when a budget card can drive your monitor at its max rez, why again would you need a $600 card, much less two? SLI physics is a good way to go, but the APIs (application program interfaces) have a steep uphill climb on the developer side.
The biggest limitation of this approach is that you have to devote a whole GPU to physics; you can't load balance or split time between physics and rendering. This kind of makes sense if you think about the data the GPU needs: do you swap physics data out for textures every third of a frame? Maybe a 512MB card would have the room, but the majority of cards out there are not quite that memory rich.
On the up side, it plays the marketing game to perfection: Nvidia has a coherent roadmap, and every time ATI catches up, it raises the bar. Nvidia may not have the fastest cards right now, but it has the only whole package. When the next-generation ATI chipsets come out in the near future, this may change. Expect an announcement from ATI soon, and by the time it makes a difference to the end user, both will have products out.
The reason I personally am not jumping up and down with joy at this announcement like I did with Ageia's is simple: SLI physics does not come close to doing what Ageia does. SLI physics is eye candy at the moment, and not much more, while Ageia does the real thing. What am I talking about? Look at what Nvidia spins as a strength on slides six and seven ("Traditional Physics" and "Integration of SLI Physics") as shown on Rojakpot. Here lie footnotes, and as the saying goes: lies, damn lies and statistics.
The problem is that physics can be two things: eye candy like particle effects, ripples and visual stunts, or hardcore collision detection, proximity effects like gravity, and other play-affecting simulation. One is frosting, the other is a V8 engine. The Nvidia slides seem to indicate that, in its methodology, the physics data and results stay on the cards and only go out as pretty pictures. This interpretation seems to be backed up by The Tech Report.
So, the SLI physics engine can make water fall, ripples move, and rocks from an exploding cliff wall bounce off each other in a way that would be damn near impossible on a CPU. What it can't do is make those things interact with your player: the ripples may look like they are washing over your legs, and the rocks may appear to bounce off your shins, but none of it causes in-game collisions.
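The distinction can be sketched in a few lines of toy Python. Everything here is made up for illustration; neither vendor's API looks like this. The point is only the data flow: an effects pass produces positions the renderer draws and the game never reads back, while a gameplay pass feeds collision results into game state.

```python
# Toy sketch of effects-only physics vs gameplay physics.
# All names and numbers are hypothetical illustrations.

def effects_physics(particles, dt):
    """Eye-candy pass: integrate particle positions for rendering only.
    In the SLI-physics model, results like these stay on the card."""
    return [(x + vx * dt, y + vy * dt) for (x, y), (vx, vy) in particles]

def gameplay_physics(player, rocks, dt):
    """Gameplay pass: collision results are read back and alter game state."""
    hits = 0
    for rx, ry in rocks:
        # Crude proximity test standing in for real collision detection.
        if abs(rx - player["x"]) < 1.0 and abs(ry - player["y"]) < 1.0:
            hits += 1
    player["health"] -= hits * 10  # the simulation feeds back into play
    return player

# The renderer draws these positions; game logic never sees them.
particles = [((0.0, 0.0), (1.0, 0.0)), ((2.0, 2.0), (0.0, -1.0))]
rendered = effects_physics(particles, 0.5)

# Here, one rock is close enough to collide, so the player takes damage.
player = gameplay_physics({"x": 0.0, "y": 0.0, "health": 100},
                          rocks=[(0.5, 0.5), (5.0, 5.0)], dt=0.5)
```

The rocks in the effects pass can bounce beautifully, but only the second kind of pass can make them matter to the game.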
The Ageia way of doing things is the 'real deal': it can make things bounce and cars crash, and the game logic works with the results. The gravity of an asteroid can pull your ship off course at a critical moment just before the warp jump, in ways that SLI physics, in V1.0 at least, can't.
So, the Nvidia approach is smoke and mirrors: pretty, shiny, and ineffectual. The kind of thing you want to do in demos, but not in games. Ageia claims to do it 'right' out of the box, and I think this will pay off in a much greater way for the average gamer. SLI physics is not by any means useless, but I would categorise this release as a special effect, not a simulator. Perhaps V2.0 will fix all of this, most likely by the time of the Game Developers Conference 2007.
Another problem is the price. Imagine you just bought two high-end $400 graphics cards for your SLI rig, and all is happy. Birds are singing, particle effects are particling, and the credit card bills have not hit your mailbox yet. If you enable SLI physics, you are essentially spending $400 on a PPU, though you can always turn it off and run the cards in 'old' SLI fashion. I hear the Ageia card will retail for $250 or so, and it looks to be much better suited to the task, but it can't speed up graphics when not needed.
So, do you buy two GPUs for $800, a GPU and a PPU for $650, or all three for $1,050? That depends on the games you play, and how much is left in your trust fund. If you play physics-heavy games, you are probably better off with the PPU, but the occasional FPS and a lot of RTS games lean towards the old SLI setup. There are way too many 'ifs' here to say for sure. Aw heck, buy all three and auction the kids off on eBay to pay for it.
The other stumbling block I see for SLI physics is the API. There are two commercial physics APIs of note, Havok and NovodeX. Both were independent commercial products that you bought and slapped into your game engine, giving you a more robust solution in less time than you could likely code yourself. Both cost a lot of money; at E3 the numbers floating around were in the $250K+ per title range, not a trivial amount even in these days of multi-million dollar games.
Developers would pick their poison based on a lot of deep method and some madness, then write a cheque. Along comes Ageia, which buys NovodeX and, from the looks of things, renames it PhysX. Almost a year later, Nvidia hooks up with Havok. Since the installed base of Ageia cards is about zero and the Nvidia base is probably in the millions, why would any sane developer pick PhysX?
The answer is money. Havok costs more than a good Ferrari for every game; Ageia now gives PhysX away for free. In a time when game companies are popping like zits, they need every dollar they can get. If you ship eight titles in a year, that is potentially $2 million+ in savings, no strings attached. Do you fork over the cash and have your game potentially run better on a small slice of Nvidia cards, or pocket the money and hope Ageia cards catch on?
If you are running games in old-school software-only physics mode, there is no difference in performance, so for 98% or more of the people out there, the decision the developers make is academic. Said developers know this, and their corporate masters will most likely look at the dollars not going out the door to Havok. Because of this, I think SLI physics has an uphill climb.
If it sounds like I am down on SLI physics, I kind of am for now, but only for now. What is being billed as a huge advance in physics looks to be more of an advance in physics based eye candy, not the real deal. It costs a GPU, and no one seems to be talking about the performance hit that will cause.
On the up side, the might of Nvidia stepped in and slapped the collective consciousness of the couch-class gamer with a new buzzword: physics. Soon everyone will be wanting it, needing it, and bragging about it. The outlook for Ageia went from "they do what?" to "Oh, that is cool" faster than you can say PowerPoint.
Until the next major revision of SLI physics, I find it hard to get excited about the prospect. It is neat, but that is about as far as I would go. I would go the dual GPU route and buy an Ageia card if you care about performance. That said, Nvidia never sits still for long, and when the next version debuts, pay very close attention. µ