The Inquirer

Nvidia GT200M and GT100M slides described in minute detail

Spin unspun: Nauseating presentation of the day, Part 1
Mon Mar 02 2009, 15:42

SINCE WE HAD to sit through the stomach-churning presentation of the latest Nvidia renamed product, we feel it is only right to make you do the same. With that in mind, we bring you the latest from Nvidia, the G92, this time renamed to the Geforce GTX 200M.

Yes, another two-year old card, this time renamed to GT200 class, but once again, there is nothing new here. It is the same old same old. Nvidia can't make a GT200 based laptop part, so it is pretend time, and let's hope no one will point this out in a public way.

Oops, sorry guys.

That said, they claim it is all new. Don't believe it: it is just a 9800 series with a coat of paint, which is itself an 8800GT with a coat of paint.

The press slide deck was 21 pages, and since we can't publish it, we will just share the notes we got on our unofficial briefing. Prepare to not be awed, it is hard to make recycled material interesting, especially on the fourth time around. That said, there are some hilarious moments in this deck, being vain and desperate does make a company do strange things, but we never thought they would stoop this low. Or make this many maths errors.

Slide 1 is a splash screen showing that the GT200M and GT100M parts are going to be discussed. Slide 2 is market share numbers showing that Nvidia has about two thirds of the market in notebooks with discrete GPUs, with ATI holding the other third. What they don't want you to know is that discrete is only about one third of the market; the two thirds without is utterly ruled by Intel and ATI. Nvidia has a minuscule share of that market, and you can usually tell those notebooks by the vertical lines on the screen shortly before it turns black.

Slide 3 is where the funny numbers show up: they claim that 55nm is the 'perfect process' for G92. Funny, we thought those were bad too. Nvidia claims 50 per cent higher performance and 80 per cent higher performance per sq mm. Given how semiconductors are made, and the roughly 30 per cent area change between a 65nm G92 and a 55nm G92, these are effectively the same number stated twice; Nvidia just hopes you are dumb enough not to realise that. Sadly, this is a good bet.
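For anyone who wants to check the sums: given the claimed 50 per cent performance gain, the 'per sq mm' figure is fully determined by the die shrink, so it is not an independent data point. A quick sketch, using only the slide's two claimed figures:

```python
# Sanity check on slide 3: 50 per cent more performance and 80 per cent
# more performance per sq mm are one claim, not two. The second number
# just encodes the die shrink. Figures are from the slide; the implied
# area ratio is our derivation.
perf_gain = 1.50          # claimed: 50 per cent higher performance
perf_per_mm2_gain = 1.80  # claimed: 80 per cent higher performance per sq mm

implied_area_ratio = perf_gain / perf_per_mm2_gain
print(f"implied 55nm die area vs 65nm: {implied_area_ratio:.2f}")  # ~0.83
```

In other words, once you know the performance gain and the die sizes, the second marketing bullet writes itself.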

They claim higher clocks, full shader (128) count, and the same power budget as before. This is the long way of saying they shrunk the die, and are using the power savings to increase performance. What a shock! This is followed by the claim that the G92 is a 'superior architecture for notebooks'. I guess that means that the newer GT200 architecture must blow for said notebooks, otherwise wouldn't they be touting that?

Slide 4 is a list of notebook segments: enthusiast, high performance, performance and mainstream. Enthusiast used to be the 9800M GTX and 9800M GTS, but those have been replaced by the GTX 280M and GTX 260M respectively. Same die though, just a new name and a much higher price tag. High performance sees the 9800M GS replaced by the GTS 160M, and in the performance segment the 9600M GT becomes the GT 130M.

In a sleazy PR move, they are trying to claim the 9300 integrated is a discrete part now, but thankfully they are not using the moronic name 'motherboard GPU' any more. In any case the integrated 9300M G for the mainstream segment is now called the G110M. It is still integrated, and still blows for gaming.

We then go on to Slide 5 where they claim that the GTX 280M is now the 'unambiguous performance leader' because it is up to 50 per cent faster than the 9800M GT in some tests that they refuse to specify. It also has SLI and Cuda, but since those are trademarked terms, what can you say besides 'duh'.

The next slide, 6 if you are paying attention, is a bunch of graphs comparing it to the ATI Radeon Mobility 4870, and claiming 'up to 30 per cent faster'. There are nine tests shown, one is 30 per cent faster, two are a hair over 20 per cent faster, and the rest cluster around 10 per cent faster. As usual, the graphs start out at 90 per cent so they look really impressive if your mathematical education ended at age nine.
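To see how much work that 90 per cent axis floor is doing, here is a sketch with hypothetical scores (ATI normalised to 100, Nvidia 10 per cent ahead, matching the bulk of the slide's tests):

```python
# How a bar chart whose y-axis starts at 90 per cent exaggerates a lead.
# Hypothetical scores: ATI part normalised to 100, Nvidia part 10 per cent ahead.
ati, nvidia = 100.0, 110.0
axis_floor = 90.0

real_lead = nvidia / ati - 1.0                               # the honest number
apparent_ratio = (nvidia - axis_floor) / (ati - axis_floor)  # visible bar heights
print(f"real lead: {real_lead:.0%}, bar looks {apparent_ratio:.0f}x taller")
```

A 10 per cent lead is drawn as a bar twice the height of its rival. That is the whole trick.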

In any case, they do not specify the notebooks, don't even attempt to say that they are the same hardware, and this is very likely a cooked benchmark set with very dissimilar hardware and hand-picked games.

Slide 7 shows more or less the same set of vague and cooked benchmarks for a 4870X2 vs 2 280Ms. This one is much worse for averages though, one 30 per cent, one 20 per cent, the rest in the teens or below. We would be interested in seeing the ATI version of this slide, it should utterly kill Nvidia. This is a weak showing.

Slide 8 does the same for the 260M vs the 4850M. This time they proudly proclaim 20 per cent better, yet the average looks to dip below 10 per cent, with many tests showing no lead over the ATI parts at all. It amazes me that this is the best they could do with a tame audience that won't question them, hand-picked benchmarks, and no specs given. Sad.

We then move back to hard specs, intermixed with marketing bullpoop. The hard specs on the 280M are 128 'cores', with 585MHz, 1463MHz and 950MHz for the graphics, processor and memory clocks. The memory is 1GB of GDDR3 on a 256-bit bus. Save some power, gain a bottleneck. The 260M has 112 'cores' at 550MHz, 1375MHz and 950MHz; the rest is the same.

Slide 10 moves off into the realm of 'I can't believe they are this dumb', but look out for websites that tout this one. They actually have the temerity to compare 'efficiency' per core, saying that ATI with 800 cores is somehow less efficient than Nvidia with 128. The actual comparison they use is 800 air rifles to 128 Uzis, and they then laughably claim this means they are 8x more efficient.

I keep saying that Nvidia PR couldn't find its way out of a paper bag with a flashlight, map, guide dog and GPS unit, but up until this point, I did credit them with basic maths skills. 800 divided by 128 is 6.25; it doesn't even round to 7. Come on guys, there is a calculator included in Windows, it is under Accessories somewhere. Since I am typing this on Ubuntu, it is under Applications -> Accessories, top program. Try it, you just might learn something.
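If a calculator is too much effort, three lines of Python settle it:

```python
# Nvidia's 'efficiency' arithmetic, checked. 800 ATI shader cores
# against 128 Nvidia ones is a factor of 6.25, not 8.
ati_cores, nvidia_cores = 800, 128
ratio = ati_cores / nvidia_cores
print(ratio)         # 6.25
print(round(ratio))  # 6 -- it does not even round to 7, let alone 8
```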

Then again, with the 800 air rifles to 128 Uzis crack, one has to wonder whether Intel has four Howitzers with the Core i7. That must kick Nvidia's butt: 32 times more efficient, and it doesn't even die in the field for reasons they won't tell you about. If you check these numbers, you will see that I am not joking; the i7 ties the much faster GT260, and uses far less power.
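Applying Nvidia's own cores-as-efficiency logic to a quad-core Core i7, with no numbers beyond the core counts:

```python
# By the slide's metric, fewer 'cores' doing comparable work means more
# 'efficiency' -- so a 4-core CPU thrashes a 128-core GPU.
nvidia_cores, i7_cores = 128, 4
print(nvidia_cores / i7_cores)  # 32.0 -- Intel '32x more efficient' by the same logic
```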

Nvidia proudly manages to score an own goal in a public way. Brilliant. Talk about battles of wits with unarmed opponents. *SIGH*. This one left me speechless. At least this time they claim it is on the same hardware which more or less proves in their own words that they cooked the previous three or four graphs.

Slide 11 is rather incomprehensible, but seems to be an attack on GDDR5. It is entitled 'Lessons on Memory Bandwidth' with a subtitle of 'Performance requires balance between Memory interface and GPU engine', and they show an I Love Lucy clip to prove the point, but it makes no sense. At least they used a good clip, so they are capable of humour, but it could be a coincidence. I will take this as progress.

Slide 12 is again a bit of a head-scratcher. It is supposed to show that 'Physx and Cuda' somehow make things better, and there are a lot more programs in 1H/2009 than there were in all of 2008. In 2008, they show four programs, with 14 added in 1H/2009, and yet unspecified 'more' in 2H/2009, quite the upward trend if you don't look too closely. The problem? We looked closely.

Of the 14 'Physx and Cuda' programs in 1H/2009, at least seven were out in 2008. No really, Unreal Tournament III is not a brand new program, and Cyberlink, Arcsoft, MotionDSP, Seti and TMPGEnc were actively touted by Nvidia last year. By my count, the list is heavily weighted toward 2008, and shows a downward trend for 2009, mirroring what I see in the market. Second own goal in as many slides.

The next few slides move on to the 1xx series, and the first claims that it is the fastest GPU for sleek notebooks. All I can say is that, if this is true, why didn't Apple use it? Maybe, like the next one down, the 9600M, it melts and dies prematurely in such close confines. This one is going to be much better, just ask Nvidia; the last three or four weren't, but this time is different. Someone has to believe it one of these days.

Slide 14 shows that the 160M is 'up to 50 per cent' faster than the ATI 4670M. Again, there is one test at 50 per cent, and the rest are much closer to 10 per cent, with many at zero. Once again, they don't disclose the hardware, almost like they only do that when it is to their advantage. If that were true, it would be really dishonest, right?

From there, we go back to the magical land where people don't know maths. There are three graphs that show a 9800M GTX vs a GTX 280M, a 9800M GTS vs a GTX 260M and a 9700M GT vs a GTS 160M. Let's put aside the little problem that on slide 4 they were comparing the 160M to the 9800M GS; presumably those two would not make a favourable comparison, so they switched things up. I wonder why?

In any case, this one is hilarious for a completely different reason. The first comparison shows that the 280M is 20 per cent more efficient than its predecessor, the 260M is 30 per cent better, and the 160M is 40 per cent better than the 9700M, not the 9800M GS. The hilarity? There is a big label pointing to the graph showing 40 per cent gains, and it reads "~30 per cent Better!". The arrow does not point to the graph showing a 30 per cent gain; it clearly points to the 40 per cent one. Words fail me, they can't count. When people publish these slides in a few days, check this out... it goes well past typo.

Slide 16 is actually dead on, it quotes Anandtech on the sorry state of notebook drivers. Nvidia did a good thing with quarterly releases of drivers, and they deserve credit for it. Hopefully ATI will follow suit shortly.

From there they claim that the 280M switches '10x faster' from integrated to discrete graphics with hybrid power, from seven seconds to less than one in the new chips. Shall I be the first to point out that this likely means they fixed the broken-ass state of the 9300 in the B2 steppings, like we told you about earlier, because this functionality is quite distinctly missing in the Apple Macbooks? It could be that Apple didn't need the awesome power of hybridnameoftheweek, or it could be that the chips were as buggy as Apple claimed they were. You decide.

Slide 18 touts Blu-ray playback. YAWN, if you don't do this, it is a problem. It is 2009 now guys, everyone can do this, most better than you. Let it die, no one cares any more, discs are dead.

The next one claims they will be the first to have DX11 drivers, supposedly released in November, which Nvidia says means they are Windows 7 ready. A little honesty creeps into the presentation here: it says that they have DX11 drivers on DX10 hardware. How useful, especially considering how well DX11 maps to the 4870 versus the GT200 cores.

Then again, this presentation is about the G92 which is far worse than the GT200 cores in that regard. I am not sure which claim is dumber, but there is no Nvidia DX11 hardware, and given their track record lately, it may not happen this year. Or next. If they survive that long.

Moving on to slide 20, the claim here is that Nvidia has 'graphics +' while ATI only has 'graphics'. All of the Nvidia boxes are higher than the corresponding ATI boxes, with downward arrows. Buy it, kiddies, Nvidia has a +, which we assume means aggravation when the vendor denies your bad bumps warranty claim. In any case, it is well worth the price premium for that + symbol.

The closing slide is a wrap-up where Nvidia claims performance leadership, drivers, and proprietary languages that have the industry uptake of three-day-old fish.

Luckily they don't claim basic counting skills... that might clue in even the most tame reviewer. µ
