The Inquirer

Fire your IT director now

And if you don't have one, fire your IT manager
Tue Sep 09 2003, 16:59
This article, written by Jon Honeyball, first appeared in PC Pro earlier this year. As it's still pretty sound advice, he has kindly allowed us to reprint it. The copyright remains his.

HERE IS SOME harsh, cruel but worthwhile advice. If your company was hit by the recent SQL Server virus, and you are a director-level member of your company, then I advise you to fire your IT Director now. If you don't have an IT Director, but you are nominally in charge of IT, then fire your IT Manager now. Also, please identify who is responsible for the company firewalls, and fire him or her as well.

You might think this is a little harsh, maybe a little hasty. It is not. Those companies which got infected did so entirely because of their own carelessness. There was no reason for them to get infected, and no need either.

It might be noticeable that I haven't suggested firing the SQL Server administrators. There is a good reason for this - it wasn't their fault. And before I go any further, I should add that I am spitting mad at the appallingly low quality of reporting, analysis and so-called expert opinion which has been expressed on the web, in print and even on TV over this affair. Almost everyone should be hanging their heads in shame.

Yes, Microsoft has some part to play in the finger pointing. Yes, it issued a patch last summer which cleared up the problem, but only the most generous of persons could claim that Microsoft is blameless. The original patch was a bitch to install, and worse still, some subsequent patches appear to have undone the original one. This is unacceptable. That SQL Server is not covered by Windows Update is another point that Microsoft should ponder, because it ought to be easier than this to install patches to core back-office technologies.

Having said that, it is hard to see that Microsoft was in the wrong. The fault was found, a patch was made available months ago, and the database sysadmins decided not to apply the patch.

Therefore the finger logically points at the DB sysadmins, whom I have already said are innocent. And here is why. If you are running a line-of-business database server in a large company, the last thing you do is fiddle with it. Applying patches and changes to such a box is something that is done only in the most desperate of times, because the business risk is too great. If you pop a service pack or patch file onto a desktop, and it doesn't work well, then you have to recover one desktop. If you do the same to one of the many servers in the organisation, then the impact is greater, but it's still one server. If you apply a patch to a database server, then it is usually a single point of failure and everyone notices its absence. So the increased caution goes hand in hand with the increased risk exposure.

Now let's look at the hack itself. It talked to SQL Server, and to the Microsoft SQL Server Desktop Engine (MSDE) found in a number of desktop applications, on UDP port 1434. It then propagated from each infected machine to other random machines. The rate of infection was staggering - within a few minutes, major internet backbones were saturated with traffic. And analysis has shown that most of the infected hosts were not line-of-business SQL Server boxes, but MSDE machines, mostly desktops. Most users were probably not even aware that they had MSDE installed.

But remember - it had to get through the front door in order to reach a computer. The vehicle of infection was not a dodgy attachment on an email, or some unpleasant hack code downloaded from a porn website. This virus walked up to the network's front door, opened it, and walked straight in. The only reason it could do this was because port 1434 was open on the firewall to externally sourced traffic. Read that again carefully - port 1434 was open on the firewall to externally sourced traffic.

Unless there was an exceptionally good reason for this being the case, and this reason was internally discussed, planned, documented and approved, then the person running the firewall has, in my opinion, just signed their letter of resignation.

There appears to be a commonly held view that ports above 1024 are somehow not important. There is an internet meme, or rumour, or accepted wisdom, that you should leave everything above 1024 wide open for incoming traffic. Similarly, there is a view that every port should be left open for internally sourced connections, i.e. internal to external.

This is appalling. This is nothing short of culpable negligence. And it is dangerous beyond belief.

So here you get my view on firewalls and on how to harden machines against attack, both internal and external. None of this is difficult or complicated, and none of it is magic or witchcraft. By following these rules, you will be safe and secure. If you want to break them, then go ahead - but please have good, well-understood and documented reasons for doing so. Breaking these rules because they are inconvenient should be a dismissible offence.

Rule number 1: every port, from 1 to 65535, shall be blocked between all internal IP addresses and every external IP address, for both internally and externally sourced traffic. That means, in the default starting configuration, there is no traffic from inside to outside, nor from outside to inside. It doesn't matter whether it is externally sourced or internally sourced; the firewall acts like a 2-inch air gap in the network.
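
The article doesn't name any firewall product, but on a Linux box the default-deny posture of rule 1 might be sketched with iptables - a hypothetical illustration, not a complete production configuration:

```shell
# Hypothetical iptables sketch of rule 1: deny everything by default.
# Flush any existing rules so we start from a clean slate.
iptables -F
iptables -X

# Set the default policy on every chain to DROP. Nothing crosses the
# firewall, in either direction, unless a later rule explicitly allows it.
iptables -P INPUT   DROP   # externally sourced traffic to the firewall: blocked
iptables -P OUTPUT  DROP   # traffic originating on the firewall itself: blocked
iptables -P FORWARD DROP   # traffic routed between inside and outside: blocked
```

With this in place, every subsequent ACCEPT rule is an explicit, auditable exception - exactly the documented-reasons-only discipline the rules below demand.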

Rule number 2: you only open a port for a specified, clearly understood and properly documented reason. Therefore you open port 80 for internally sourced connections coming from probably all machines, and you allow them to connect to all outside sites. Ditto for port 443 for HTTPS. If you route all internal web traffic through a proxy server, then you do not allow every desktop to make a port 80 connection to the outside world - only the proxy server can have that.
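
Continuing the same hypothetical iptables sketch, rule 2 with a mandatory proxy might look like this (the proxy address 10.0.0.5 is an assumption for illustration):

```shell
# Hypothetical sketch of rule 2: open ports only for documented reasons.
# Only the web proxy (assumed here to be 10.0.0.5) may make outbound
# HTTP/HTTPS connections; ordinary desktops get no direct path outside.
iptables -A FORWARD -s 10.0.0.5 -p tcp --dport 80  -j ACCEPT   # HTTP out
iptables -A FORWARD -s 10.0.0.5 -p tcp --dport 443 -j ACCEPT   # HTTPS out

# Allow the reply packets for those proxy connections back in.
iptables -A FORWARD -d 10.0.0.5 -p tcp -m state --state ESTABLISHED -j ACCEPT
```

Any desktop trying to reach port 80 directly matches no ACCEPT rule and falls through to the default DROP policy from rule 1.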

Rule number 3: you do not confuse client access with server access. For example, a client will need access to port 80 to the outside world, assuming no mandatory proxy server. The email server will require SMTP inbound and outbound. I cannot think of a good reason why a client would need SMTP access through the firewall. Ditto for DNS, SNTP, POP3, FTP and so forth.

Therefore, in a well-designed network, clients talk to servers, and servers talk to the outside world. This applies to the web, via an HTTP proxy. It applies to email, via the mail server. It applies to SNTP time services, via the main time server. In fact, when you think about it, there are almost no circumstances in which a client should have any direct access to the outside world for normal line-of-business operations. It might seem obvious, but there should be no access whatsoever, under any circumstances, for externally originating traffic coming to a client machine.
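
In the same hypothetical iptables sketch, rule 3 means only the designated servers get firewall rules at all (the addresses 10.0.0.10 and 10.0.0.20 are assumptions for illustration):

```shell
# Hypothetical sketch of rule 3: only designated servers cross the firewall.
# Assumed addresses: 10.0.0.10 = mail server, 10.0.0.20 = DNS/time server.
iptables -A FORWARD -s 10.0.0.10 -p tcp --dport 25  -j ACCEPT  # SMTP outbound
iptables -A FORWARD -d 10.0.0.10 -p tcp --dport 25  -j ACCEPT  # SMTP inbound
iptables -A FORWARD -s 10.0.0.20 -p udp --dport 53  -j ACCEPT  # DNS lookups
iptables -A FORWARD -s 10.0.0.20 -p udp --dport 123 -j ACCEPT  # (S)NTP time

# No rule matches a client machine, so clients fall through to the
# default DROP policy - no direct access to the outside world at all.
```

Note that there is no rule resembling "allow clients SMTP out": clients hand mail to the mail server internally, and only the server speaks to the world.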

Rule number 4: VPN tunnels should not be wide open. This is a classic screw-up, and people keep getting it wrong. If you allow a VPN tunnel from a remote client machine, perhaps one kept at a domestic premises, then by default it is wide open - just as if the machine were on the local network. But do you have any control over the state of that machine? What if it is virus-infected? What if it is running some nasty spyware application? Do you really want to let this machine have wide-open access to everything inside the network? Of course not, yet most people set up VPN tunnels from home machines to be wide open, and are then surprised when nasty things walk straight into the network.

There is an even more worrying version of this, which I know for a fact hit companies in the City of London. Let's say you have a business relationship with another company, and you want your database servers to talk to their servers and vice versa. You are working in partnership with them. The classic solution is to set up a VPN tunnel between the two networks - you trust them, they trust you. Let's now assume that their firewall design is somewhat shaky compared to yours, and they get infected with something nasty. Some unpleasantness gets into their building. Guess what? It's now on your network too. The solution is stupidly simple - do not have wide-open connections to unverifiable sources. If this partner company needs access to your mail server, then open up port 25 to their network and leave everything else blocked. Apply a layered security model, so that only the right machines get the appropriate access.
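
The partner-network example above might be expressed in the same hypothetical iptables sketch like so (the partner range 192.168.50.0/24 and mail server 10.0.0.10 are assumptions for illustration):

```shell
# Hypothetical sketch of layered access for a partner VPN: the partner
# network (assumed 192.168.50.0/24) may reach our mail server (assumed
# 10.0.0.10) on port 25, and absolutely nothing else.
iptables -A FORWARD -s 192.168.50.0/24 -d 10.0.0.10 -p tcp --dport 25 -j ACCEPT

# Belt and braces: explicitly drop all other partner traffic, even though
# the default DROP policy from rule 1 would catch it anyway.
iptables -A FORWARD -s 192.168.50.0/24 -j DROP
```

If the partner's network is then overrun by a worm, the only thing it can reach on your side is one port on one machine - not your entire internal network.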

Now, there are some applications which place unreasonable demands on the firewall design. NetMeeting is one: it demands a range of dynamically assigned ports in order to work. If you really need to have someone run NetMeeting to an external party, then set up a VPN tunnel and shut it down when it is not required. Booking firewall access should be as planned as booking a meeting room.

These rules might sound paranoid. They are not - those people who followed the basics did not have a single problem with this virus. Advice like “well, maybe we should block port 1434 now” is staggeringly naïve. Last week it was 1434; next week it might be 2096. Who knows? You don't, and the only workable security policy is one which keeps every single external TCP/IP packet out unless it has a genuine, verifiable and accountable reason for being there, and which ensures that every internal TCP/IP packet is kept within the network unless it too has a gold-plated reason to be allowed outside.

Yes, it is a harsh set of rules. And yes, they will break all sorts of half-baked and half-arsed IT business practices. This is a good thing. Some things in life have to be mandatory, and good security is one of them.

If your firewall and network people disagree, fire them on the spot. Have the courage of your convictions, and demand that security policy is something which is written down, fully documented and completely justified - and which can be explained to, and independently verified by, a non-technical person too. The rules do not change on a complex multisite network either; in fact, the need for them becomes even greater. A corporate client of mine runs the rules as I have described them, and discovered that a satellite office in another country had installed an ADSL line into its local network, which it refused to disconnect. There was only one workable response from head office - it disconnected the remote office from the rest of the company network until the remote office abided by the security policy.

This is not a game, it is not complicated and there are no excuses. Follow these rules or go dig potatoes instead.

