
Is hyperthreading hyperthreatening homestyle PC preconceptions?

TechInsider Hypertension extension
Wed Jun 25 2003, 10:35
HARDLY ANY topic is currently as disputed, confused and misunderstood as HyperThreading and what lies behind the new technology. Evangelism is taking the place of educated discussion, and often enough it is frightening to listen to the arguments, regardless of which side they come from. In addition, position and rank do not appear to protect against ignorance.

In general, the question is not so much about what HyperThreading really is. There is enough material available to explain the technology, which, in the simplest of all terms, can be described as splitting one physical processor into two logical units with quasi-independent operability. Admittedly, the internal administration of the different tasks, including the assignment of resources and the scheduling of output, carries some overhead with it, similar to that of running any multiprocessor system. This overhead causes a more or less pronounced performance hit in applications that are not SMP-enabled, meaning applications that are not capable of taking advantage of multiple processors.
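To the operating system, an HT-enabled chip simply announces itself as two processors. As a rough illustration (in modern C++ for brevity; nothing here is specific to any particular OS or to Intel's own tooling), the number of logical processors the OS exposes can be queried like this:

```cpp
#include <iostream>
#include <thread>

int main() {
    // Reports the number of logical processors the OS exposes; on a
    // single HT-enabled CPU this is 2, even though there is only one
    // physical core behind them.
    unsigned n = std::thread::hardware_concurrency();
    std::cout << "Logical processors visible to the OS: " << n << '\n';
    return 0;
}
```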

SMP enabling can be done on several levels. In the simplest case, one application draws upon the resources of several CPUs; examples are ray-tracing programs that do not even distinguish between two physically separate processors and two logical processors. The outcome and the screen output will be the same in both cases, that is, two render lines, each processing its own portion of the scene. In this particular scenario there is no overhead associated with HyperThreading either, and regardless of whether HT is enabled or disabled, both approaches yield identical run times. Only the way of getting there differs: with HT enabled, two independent scan lines move at half speed; with HT turned off, a single scan line moves at twice the speed, and the final result is a wash.
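The ray-tracing case is plain data parallelism: identical work split across whatever processors the OS offers, physical or logical. A minimal sketch of the pattern, with made-up function names rather than code from any actual renderer:

```cpp
#include <cmath>
#include <thread>
#include <vector>

// Toy stand-in for a ray tracer's per-scan-line work.
static void render_scanline(int y, int width, std::vector<float>& image) {
    for (int x = 0; x < width; ++x)
        image[y * width + x] = std::sin(0.01f * x) * std::cos(0.01f * y);
}

// Each worker takes every other scan line: two threads, whether backed
// by two physical CPUs or two HT logical units, each march through the
// image at half speed and finish together, as described above.
static void render_half(int first, int width, int height,
                        std::vector<float>& image) {
    for (int y = first; y < height; y += 2)
        render_scanline(y, width, image);
}

int main() {
    const int width = 640, height = 480;
    std::vector<float> image(width * height);
    std::thread a(render_half, 0, width, height, std::ref(image));
    std::thread b(render_half, 1, width, height, std::ref(image));
    a.join();
    b.join();
    return 0;
}
```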

It gets more interesting in applications that are multithreaded. Audio-visual applications like MainConcept are excellent examples of dramatic shortening of run times by distributing the load of audio and video processing over two logical processors; the net savings are on the order of 25% of the run time. And then there are those scenarios conjured up by Intel, like running a virus check while playing a 3D game. It is not entirely clear whether those examples were created out of ignorance or whether the strategy behind them was to play dumb in order to create a false sense of irrelevance for HT.
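In an encoder of this kind the parallelism is by task rather than by data: audio and video compression are dissimilar workloads, so the two logical processors are less likely to fight over the same execution units. Schematically, with stub functions standing in for the real work:

```cpp
#include <thread>

// Illustrative stubs; a real encoder would compress actual streams here.
static void encode_audio() { /* crunch the audio samples */ }
static void encode_video() { /* crunch the video frames  */ }

int main() {
    // Two different jobs run side by side; on an HT processor one thread
    // can use execution resources the other leaves idle, which is where
    // the roughly 25% saving in run time comes from.
    std::thread audio(encode_audio);
    std::thread video(encode_video);
    audio.join();
    video.join();
    return 0;
}
```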

Either way, both the camouflage and the concept appear to have worked out quite well. With respect to the latter, another example of a very successful implementation and optimization of software for HT is Cinebench, which shows roughly a 20% performance increase with HT over non-HT. That is the situation now, with Windows XP or Windows 2000 as operating systems that are poorly optimized for HT, and with a hardware infrastructure that is not capable of making the most of multiple physical or logical processors. That "now" will end about two to three months from now.

One of the major technical evolutions of this year has been the transition from Parallel ATA to Serial ATA. The first toddler steps have been successfully accomplished, and general acceptance of the new format is rising at an astonishing rate. However, SATA in its present form is still fettered by the lack of intelligent device-internal data management, also known as Command Queuing. Command Queuing has been the major advantage of SCSI over conventional implementations of ATA, and it is also the reason why IBM ATA drives were always faster than any other comparable hard disk drives, despite often lagging in raw media performance.

What is the connection between Command Queuing and HyperThreading? It is very simple: SCSI drives can play out the full performance potential of CQ only in a multiple-host/multiple-target configuration. Likewise, Native Command Queuing will only show its full gamut of acceleration in situations where multiple processors compete for HDD service.

The best analogy for this is still the elevator. If there is only a single user, it does not matter whether the floor addresses are managed intelligently; the elevator will simply go to whichever floor's button is pushed. However, even with only two passengers, the elevator will work much better if the stops are serviced not in the order in which the buttons were pushed but in the order of the most efficient and economical service. This is where NCQ and HT complement each other and will push system performance way beyond today's standards.
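The gain is easy to put a number on, even with a toy cost model. The sketch below is illustrative only: the block addresses are invented, and seek cost is simplified to the distance travelled, ignoring the rotational latency that real NCQ firmware also weighs.

```cpp
#include <algorithm>
#include <cstdio>
#include <cstdlib>
#include <vector>

// Toy cost model: seek time proportional to the distance travelled
// between logical block addresses.
static long travel(const std::vector<long>& order, long start) {
    long pos = start, total = 0;
    for (long lba : order) {
        total += std::labs(lba - pos);
        pos = lba;
    }
    return total;
}

int main() {
    // Outstanding requests from two competing "passengers", listed in
    // the order the buttons were pushed.
    std::vector<long> fifo = {900, 50, 870, 80, 910, 60};
    const long head = 100;  // current head position

    // Elevator-style service: visit the addresses by position rather
    // than by arrival order, in a single sweep.
    std::vector<long> swept = fifo;
    std::sort(swept.begin(), swept.end());

    std::printf("Arrival order:  %ld blocks of travel\n", travel(fifo, head));
    std::printf("Elevator order: %ld blocks of travel\n", travel(swept, head));
    return 0;
}
```

With this particular queue, position-ordered service cuts the head travel to less than a fifth of first-come-first-served.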

What goes around comes around: HT will help NCQ, NCQ will help HT, and at the end of the day the subtle individual performance increments will potentiate each other. HDD performance is the main bottleneck in today's systems, and if it can be accelerated, the entire system will become faster. As a result, the CPU, too, will work more effectively, but the road there will have to include better management of data on all system levels, regardless of whether it is called HyperThreading or anything else. µ

Michael Schuette is editor-in-chief of LostCircuits.

 
