OVER THE PAST few years, you will have noticed the emphasis on defying gravity in the overclocking world. The "enthusiasts" were sometimes driving DDR2 memory up to 2.5 volts from its 1.8 volt default or, more recently, DDR3 memory beyond 2.1 volts from its 1.5 volt default - an utterly dangerous voltage jump of around 40 per cent in either case.
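A quick back-of-the-envelope sketch of those jumps, using only the figures quoted above:

```python
# Illustrative arithmetic only - the voltages come from the article text,
# not from any datasheet lookup.
def overvolt_pct(stock_v: float, pushed_v: float) -> float:
    """Percentage increase of a pushed voltage over the stock voltage."""
    return (pushed_v - stock_v) / stock_v * 100

ddr2_jump = overvolt_pct(1.8, 2.5)  # DDR2 pushed from 1.8 V to 2.5 V
ddr3_jump = overvolt_pct(1.5, 2.1)  # DDR3 pushed from 1.5 V to 2.1 V
print(f"DDR2: +{ddr2_jump:.1f} per cent, DDR3: +{ddr3_jump:.1f} per cent")
```

Roughly 39 and 40 per cent respectively - either way, well outside anything the DRAM makers would warrant.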
Not to mention the related North Bridge and FSB overclocking and overvolting seen in such cases as well. Of course, the memory voltage would invariably end up much higher than the FSB or NB voltage either way.
However, with desktop Core i7 "Bloomfield" Nehalems around the corner - less than a week away - and AMD's Deneb following a little later, full-fledged North Bridge and FSB tuning will disappear from the high-end enthusiast checklist. Everything related to memory will now share the same die with the CPU itself - so, to cut a long story short, are the priorities changing?
As the new CPU voltages drop below 1.2 volts, even if the voltages for the integrated memory controller and QuickPath Interconnect or HyperTransport links are decoupled, they still shouldn't differ drastically. Hence, on the first Nehalems, the 1.65 volt limit for DDR3 - not much more than that DRAM type's 1.5 volt stock voltage.
Anyway, the first "Nehalem-optimised" memory kits don't seem to suffer much from these restrictions - the Qimonda Xtune does DDR3-1900 CL9 at a stock 1.5 volts, as we saw, and Kingston will offer DDR3-2000 CL9 tri-channel kits at the 'prescribed maximum' 1.65 volt setting.
Yes, this is not as good as the DDR3-2000 CL8 setups you could achieve by pushing some of the current DIMMs to near 2 volts, but it saves a bundle on power and especially cooling requirements - and it's more than compensated for by the lower latency of the integrated memory controller anyway.
A tri-channel DDR3-2000 setup will give you in excess of 48 GB/s of raw theoretical memory bandwidth, and easily around 30 GB/s in tests like Sandra or Everest. That's good enough even once you (hopefully in exactly a year's time) upgrade your Core i7 Extreme setup to its 32 nm "Westmere" based six-core, 12 MB L3 cache shrink, running somewhere above 3.6 GHz in that same LGA1366 socket.
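Where that 48 GB/s figure comes from, sketched out under the usual assumption of a 64-bit (8-byte) DDR3 channel; real-world tools like Sandra report lower numbers because of controller and protocol overheads:

```python
# Peak theoretical DDR3 bandwidth - a sketch, assuming the standard
# 64-bit (8-byte) channel width per DIMM channel.
def ddr3_peak_gbs(mt_per_s: int, channels: int, bus_bytes: int = 8) -> float:
    """Theoretical peak bandwidth in GB/s: transfers/s x bytes x channels."""
    return mt_per_s * bus_bytes * channels / 1000

# DDR3-2000 means 2000 mega-transfers per second per channel.
print(ddr3_peak_gbs(2000, channels=3))  # 48.0 GB/s for tri-channel
```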
Talking about upgrades in that socket, do expect to see a few more Bloomfield steppings in the next few quarters. This is a brand new microarchitecture after all, and Intel will be tuning the stuff along the way.
A side benefit of the new platform is that, with the memory controller on the CPU, overclocking the North Bridge becomes far less important - in fact you can avoid it completely, unless you're trying to speed up the QPI from its (more than sufficient, we'd say) 25.6 GBytes/s bi-directional speed.
The X58 North Bridge doesn't have a heat spreader any more either - a decent aluminium heat sink with a medium-rpm fan would suffice here. However, you do see the vendors overbuilding the heat sinks there. As if Asus' Rampage Extreme large sinks and heat pipes weren't enough, Gigabyte goes a step further by offering water-cooling connectors for a chipset that, again, might not need to be overclocked at all.
Even the I/O requires no further tune-ups. The PCI-E v2 dual x16 plus one x4 arrangement should take care of even a combined twin dual-GPU plus hardware SAS RAID card setup if you wish - the base bandwidth provided is sufficient without any need to push things up, including for the upcoming GT300 and HD5870 card families next year.
The only problem is that, when they need to reach system memory, those GPUs will now have to go through one more 'hop': PCI-E first, then QPI to the CPU to get to the RAM. The same 'issue' has been seen on AMD systems for a while, hopping over HT and PCI-E as well.
On the other hand, you'd better provide the very best cooling you can for that CPU, as pretty much everything else, including your memory paths, now depends on that single hot spot. For an enthusiast wanting to go above 4 GHz reliably for everyday operation, I'd strongly recommend a compressor-based fridge like the Thermaltake Xpressar or a true freezer, such as the Asetek Vapochill LS. The side benefit - supercooled CPUs can usually reach a given GHz at somewhat lower voltage, and hence higher reliability, than air- or plain water-cooled processors - helps as well.
Expect to see far more complicated BIOS tuning options as the Nehalem generation matures and the myriad interdependent voltage, clock, latency and bandwidth parameters for the CPU cores, caches, memory controllers and QPI links emerge - not all of these seem to be in the initial BIOSes yet.
Luckily, the usual Trd chipset latency limit, AnandTech's favourite topic some months ago, will be gone this time. I do expect, though, that the minimum latency on Nehalem (or Deneb, for that matter) will be seen when the CPU and memory controller clocks are in sync, even if that means a slightly lower memory controller frequency.
In summary - before we go into details next week - it seems an overclocked enthusiast Nehalem system will focus more on the CPU itself, its cooling and the power delivery on the mobo, while memory and chipset cooling can rightly be simplified. There's simply not that much need for it on the DIMMs, as they'll run pretty close to stock voltage.
As for the chipset, it's a non-requirement unless the unlucky mobo designer had to fit in an NF200 PCI-E bridge for a Tri-SLI offering.
Now that's something that should be obsolete by the time this Christmas' bells start ringing. µ