CHIPZILLA'S Shmuel "Mooly" Eden of Intel's Israel team is one of the best-known microprocessor architects of recent years, having led the original Centrino push and evolved it into the efficient, fast "Core" CPUs that not only replaced the troubled Netburst architecture but also abruptly ended AMD's performance dominance in 2007. His team created the Sandy Bridge generation, expected to launch in the mobile, mainstream and high-end markets early next year.
A lot is already known about the initial mainstream desktop and mobile Sandy Bridge part, including its four CPU cores paired with a GPU core, all sharing 8MB of L3 cache which, like the other on-chip caches, is expected to have lower latency than in the current Nehalem or Westmere parts.
Besides the core execution efficiency improvements, there are also the AVX 256-bit vector floating-point instruction extensions with their new three-operand format, like on the good old RISC machines, and matching doubled vector floating-point units for a substantial performance boost once code is recompiled. Official dual-channel DDR3-1600 support and improved overall interconnects and power management round out the story.
The speed of that integrated GPU in Sandy Bridge has been the target of a lot of speculation. Intel's supposedly confidential slides circulating on the web mentioned around twice the speed of the current Core i5 6XX series CPU graphics, which would be decent, but hardly enough to compete with the low-end Radeon HD 55XX series derivative expected to be integrated into the top-end AMD Fusion parts in the same timeframe. And the AMD people I spoke to on condition of anonymity were confident of having better GPU performance in those parts than Sandy Bridge, supposedly resulting in a "better overall user experience" despite likely slower CPU cores.
When confronted with that opinion, Mooly told us that he is confident of the Intel part's top performance, its process technology advantage and its much better CPU to GPU integration. The latter two make sense, since the tuned 32nm CMOS process is now mature in Intel's fabs and, as far as we know, the Sandy Bridge CPU and GPU cores communicate tightly at the shared L3 cache level, while AMD's Fusion components supposedly talk one level further down, at the integrated memory controller and crossbar.
On the other hand, to really beat the Radeon HD 55XX series GPU cores in AMD's Fusion parts, and in the absence of Larrabee, Intel will have to work hard on that GPU core anyway, and I'm sure it knows that. Well, in about six months we will know too, and see just how far both Intel and AMD have got. µ