Product Intel MFSYS25 "Clearbay" server platform
PERFORMANCE density, power efficiency, easy manageability - all the boring buzzwords we often hear from server vendors promoting their new dense servers and blade servers. Not "dense" as in dumb, nor razor-sharp like knife blades, just in case you've wandered in from the dark.
If you've ever tried to put a bunch of servers together, whether in an enterprise farm or a scientific compute cluster, you'll know that physically plumbing it all up - all the racking, fitting, cabling across multiple connections, linking to a common console, and then powering it up all at once successfully - is not a layman's job. In fact, in a large cluster, you may not even want to power everything up at once, as suddenly pulling several hundred kilowatts of power out of the grid could be, umm, risky.
Then comes managing it all while it runs - checking the status of all the compute, storage, networking and power components, and making sure any faults are promptly traced to the right source and diagnosed, at least at the management console level.
We had a quick look at an Intel box that tries to solve most of these problems, at least if you aim for a small cluster or a farm - the Multi-Flex Server System MFSYS25, a.k.a. "Clearbay".
The black, rack-mounted unit isn't small by any means: the 6U (10.5 inch) high, 71 cm deep steel behemoth weighs over 85 kg when fully equipped. Nearly a 200-pounder. Intel specifies at least three people to lift this monster.
The interesting stuff is densely packed inside: six dual-CPU Xeon compute modules, each a full server with up to two quad-core CPUs (the test box had two modules fitted, each with dual 3GHz Clovertowns; you should be able to get dual Harpertowns too), up to 32GB of FBD-667 RAM, 16MB ATI graphics, and twin Intel Gigabit Ethernet and LSI SAS controllers - per module.
Then there are two 7-disk SAS RAID drive modules in the front and, at the back, up to four PSUs (the whole shebang may draw some 3 kilowatts at full blast), plus up to two GbE switch modules and two SAS storage controller modules. You need at least one switch and one storage controller module for the system to work; a second of each gives you redundancy.
In the middle of the rear modules sits another one: the mandatory management module, whose extra Ethernet connection goes to the desktop or laptop you use as a web management console. Through the GUI web page in Firefox or IE, you can interactively manage the whole mini-cluster, including hardware diagnostics, system status and so on.
All of these modules slot into a common backplane containing Ethernet, SAS and I2C bus links between them - no complicated PCIe or InfiniBand links here.
Aside from the expected fan noise - even this test setup with two compute modules is still a kilowatt-plus guzzler - the system was tolerable to sit next to, and we managed to run it fine off a standard 220-240V, 13A socket (which, at roughly 3kW, is just about enough for a fully loaded chassis anyway).
Take a look at the CPU modules here - the simple air-duct cooling is quite minimalistic, and that's for the "old" 65nm CPUs inside the test configuration. How would it hold up under a Linpack run? Unfortunately, the nodes only had 2GB of RAM with them, making them unsuitable for this purpose. Yours truly is hunting right now for more of that "ubiquitous" FB-DIMM memory to try a run...
A very interesting superdense server farm - everything you need in a single box, with no cable mess and such. Up to 48 cores and 192GB of RAM in there: if these were X5482 3.2GHz Xeons, we'd be talking about 614GFLOPS of peak FP power, and something like half a teraflop maximum realisable under the infamous Linpack benchmark. And if, for some reason, there were a successor to the Seaburg chipset with a 32MB snoop filter, and twin six-core Dunningtons inside? We could get close to a teraflop peak - provided there's stronger cooling, of course.
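For the curious, here's the back-of-envelope sum behind those numbers - a minimal sketch, assuming the usual four double-precision FLOPs per core per clock for SSE-capable Xeons of this vintage and a Linpack efficiency of around 80 per cent; both are our assumptions, not measured figures.

```python
# Back-of-envelope peak FLOPS for a fully loaded Clearbay chassis.
# Assumptions (ours, not Intel's): 4 double-precision FLOPs per core per
# clock (SSE: a 2-wide add plus a 2-wide multiply each cycle), and a
# Linpack efficiency of roughly 0.8, typical for FB-DIMM Xeon nodes.

MODULES = 6          # compute modules per chassis
SOCKETS = 2          # CPUs per module
FLOPS_PER_CLOCK = 4  # double-precision FLOPs per core per cycle

def chassis_peak_gflops(cores_per_cpu: int, ghz: float) -> float:
    """Theoretical peak of the whole chassis, in GFLOPS."""
    return MODULES * SOCKETS * cores_per_cpu * ghz * FLOPS_PER_CLOCK

# Quad-core X5482 Harpertowns at 3.2GHz:
peak = chassis_peak_gflops(4, 3.2)
print(peak, peak * 0.8)   # 614.4 GFLOPS peak, ~492 GFLOPS Linpack estimate

# Hypothetical six-core Dunningtons, assumed at the same 3.2GHz clock:
print(chassis_peak_gflops(6, 3.2))  # 921.6 GFLOPS - knocking on a teraflop
```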
We'd like to see an updated Harpertown/Seaburg version of the compute module, with one (or both) of the PCIe x16 ports going to an MXM or similar mezzanine module slot. That would give us dual-GPU graphics capability per module - or some much faster I/O. Then you could talk about a compact, superspeed 3-D OpenGL visualisation cluster or, with the extra I/O, a high-bandwidth storage server farm. We'll be working on this platform further, with extra memory and then some actual benchmarks to follow.
Good Immense density without compromising performance, capacity or features; easy cableless installation and one-button start
Bad It needs a rackmount setup right now; a standalone departmental/SOHO version would be welcome
Ugly You won't run this off a home power plug, especially in the 110-volt US or Japan