The first thing is I was a little off on the dates, the tech day now seems more likely in late October with a launch in early November. What's a few weeks between friends? If they have not gotten back the latest silicon though, it is going to be really tight to make that schedule, chips take time to fab.
What they will talk about is the odd part. First is the physical arrangement of the chip: we are hearing that it is 2cm * 2cm, or about a 400mm² die. Ouch. One of the reasons it is so big is the dual core rumor that has been floating around. G80 is not going to be a unified shader design like the ATI R500/XBox360 or R600, it will do things the 'old' way.
Some people are saying that it will have 96 pipes, split as 48 DX9 pipes and 48 DX10 pipes. While this may sound like a huge number, we are told that if this is the case, it is unlikely you can use both sets at the same time. Think of it as a 48 or 48 architecture, not a true 96. That said, I kind of doubt this rumor, it makes little sense.
In any case, NV is heavily downplaying the DX10 performance, instead shouting about DX9 to anyone who will listen. If the G80 is more or less two 7900s like we hear, it should do pretty well at DX9, but how it stacks up to R600 in DX9 is not known. R600 should annihilate it in DX10 benches though. We hear G80 has about 1/3 of its die dedicated to DX10 functionality.
One of the other interesting points that surfaced is much more plausible but still odd, and that is memory width. As far as I know, GDDR3 is only available in 32-bit wide chips, same with the upcoming GDDR4. Early G80 boards are said to have 12 memory chips on them, a number that is not computer friendly.
Doing a little math, 12 * 32 gives you a 384-bit wide bus. On the odder side, if you take either of the common card memory capacities, 256 or 512MB, and divide by 12, you end up with roughly 21 or 43MB chips, sizes not found in commodity DRAMs. If 12 chips are destined for production, you end up with a marketer's nightmare in DRAM capacities: commodity 32 or 64MB chips would make for 384 or 768MB cards, and 384/768 is not a consumer friendly number.
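The 12-chip math above is easy enough to sanity-check. A quick sketch, assuming 32-bit-wide GDDR3 chips (the only width mentioned) and the rumored 12 chips per board:

```python
# Memory math for a rumored 12-chip G80 board.
# Assumption: GDDR3 chips are 32 bits wide, as stated above.
CHIP_WIDTH_BITS = 32
NUM_CHIPS = 12

# Total bus width is just chips times per-chip width.
bus_width = NUM_CHIPS * CHIP_WIDTH_BITS
print(bus_width)  # 384 (bits)

# 'Round' card capacities divided across 12 chips give odd chip sizes.
for card_mb in (256, 512):
    print(card_mb / NUM_CHIPS)  # ~21.3 and ~42.7 MB per chip

# Commodity chip sizes times 12 give odd card capacities instead.
for chip_mb in (32, 64):
    print(chip_mb * NUM_CHIPS)  # 384 and 768 MB per card
```

Either way you slice it, 12 chips means something non-standard: either the chip density or the card capacity ends up off the usual powers of two.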
While it could just be something supported by the GPU and not destined for production, you have to ask why they may need this. The thing that makes the most sense to me is that with all the added pipes, they needed more memory bandwidth. To get that, you can either go faster or go wider.
If you look at the cost of high end GDDR parts, faster is ugly. If you look at the yields of high end GDDR parts, faster is ugly. Drawing more traces on a PCB to go wider is far less ugly. If NVidia needed the bandwidth that you can only get from a 384-bit wide bus, then the odd memory size may simply be a side effect of that.
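To put numbers on the wider-vs-faster tradeoff: peak bandwidth is just bus width times effective data rate. The clocks below are illustrative assumptions for the sake of the comparison, not leaked G80 specs:

```python
# Rough bandwidth comparison: go wider vs go faster.
# The 1600MHz effective GDDR3 clock is an assumed, illustrative figure.

def bandwidth_gbs(bus_bits, effective_mhz):
    """Peak memory bandwidth in GB/s: bytes per transfer * transfers per second."""
    return bus_bits / 8 * effective_mhz * 1e6 / 1e9

wide = bandwidth_gbs(384, 1600)    # the rumored 384-bit bus
narrow = bandwidth_gbs(256, 1600)  # a 7900-class 256-bit bus, same clock

print(wide, narrow)  # 76.8 vs 51.2 GB/s

# To match the 384-bit bus on 256 bits, the memory has to clock 1.5x higher,
# which is exactly the fast-and-expensive option called ugly above.
print(384 / 256 * 1600)  # 2400 MHz effective
```

At the same memory clock, the extra 128 bits buy 50% more bandwidth for the cost of PCB traces rather than exotic DRAM bins.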
In any case, the G80 is shaping up to be a patchy part. In some things, it may absolutely scream, but it may fall flat in others. The architecture is looking to be quite different from anything else out there, but again, that may not be a compliment. Either way, before you spend $1000 on two of these beasts, it may be prudent to wait for R600 numbers. µ