Security Exploits

When software contains a bug, the bug may be exploitable by someone (an attacker) trying to compromise the security of the computer system running the software. In such a case, the technique - or the code embodying it - by which the attacker takes advantage of the bug is known as an exploit.

When a security analyst discovers a bug with potential bearing on security, notifying the publisher of the software is a natural first step (assuming the security analyst is decent and law-abiding, of course). The publisher should then fix the software and distribute the fix to users - though this is not as easy as one might naïvely imagine.

Lamentably, software publishers have a bad habit of doing nothing about bugs. In the case of a security-relevant bug, they'll claim that the problem is purely theoretical and do nothing about it. To persuade them, security analysts have taken to developing exploits and sending those as part of the bug report, to show that the problem is real. Even then, some software publishers (most notably, the big bully everyone's heard of) neglect the problem until an exploit is observed in the wild - i.e. until their customers have suffered real harm as a result of it - and then treat the problem as if it shouldn't exist, complaining about the security analysts who published exploits.

Such complaints are silly. If the software publisher doesn't deal with the problem, its discoverer has a duty, eventually, to warn the public of the danger in which the publisher's negligence leaves them. The software publishers themselves don't treat a security bug report seriously unless it includes a working exploit: how then can they expect the public to be any less skeptical, unless the real and present danger is made explicit by inclusion, in the bug report, of the exploit?

Micro$oft has recently (2001/Autumn) had one of its bouts of telling the security community off for releasing security exploits. Micro$oft says that folk shouldn't publish exploits, since they help attackers to develop malicious code. It is time to take Micro$oft to task over this.

Micro$oft's hypocrisy

First off, of course, the charge that it helps attackers to develop malicious code can equally be laid at Micro$oft's own door. They fail to eliminate bugs from their operating system and deliver it with an insecure default configuration. The scripting facilities they build into all manner of products are designed without adequate thought for security: it should be simply impossible for a scripting language embedded in a word-processor document format to enable outsiders to reformat the hard disk of a computer on which such a document is merely read, but Micro$oft lacks the technical competence to design a secure scripting language, let alone to implement it securely in one of its products. All these things help attackers to develop malicious code.
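The design principle being violated here is simple enough to fit in a toy program. The following sketch (in C; the command names are invented purely for illustration) shows the whitelist approach: an embedded script interpreter exposes only document-level operations, so a hostile script cannot even express a request to touch the filesystem, let alone have one granted.

    #include <stdio.h>
    #include <string.h>

    /* A document-scripting engine is only as dangerous as the
     * operations it exposes.  This toy dispatcher knows two
     * document-level commands and nothing else: the language has
     * no way to name the filesystem, so no script can touch it. */

    static void cmd_bold(const char *arg)   { printf("**%s**\n", arg); }
    static void cmd_insert(const char *arg) { printf("%s\n", arg); }

    struct command { const char *name; void (*run)(const char *); };

    static const struct command whitelist[] = {
        { "bold",   cmd_bold   },
        { "insert", cmd_insert },
    };

    static void run_script_line(const char *verb, const char *arg)
    {
        size_t i;
        for (i = 0; i < sizeof(whitelist) / sizeof(whitelist[0]); i++) {
            if (strcmp(verb, whitelist[i].name) == 0) {
                whitelist[i].run(arg);
                return;
            }
        }
        fprintf(stderr, "script used unknown command '%s'; ignored\n", verb);
    }

    int main(void)
    {
        run_script_line("insert", "hello, world");
        run_script_line("format_disk", "C:");  /* not in the whitelist: refused */
        return 0;
    }

Nothing about this is clever; it is merely the decision to grant an embedded language no more authority than its job requires. But they have to innovate - speaking of which ...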

Then there's that new-release treadmill of theirs. Their revenue model is based on getting everyone to pay for a new version of their software every three years. To justify that, they have to release a new version every three years, and each release has to include lots of fancy new features for the sake of which folk will desperately need to upgrade. This treadmill imposes various constraints on how Micro$oft develops software. For one thing, stability is explicitly a non-priority - everything the programmer does will be replaced in three years' time anyway. More importantly, the systematic reworking (and subsequent testing) of a large body of code to make it do exactly what it used to do (but more robustly) falls right to the back of every project's priority queue - routine maintenance is neglected. Instead, developers perpetually add complexity (and, consequently, bugs) to existing code in order to make it interact with applications of kinds its original authors never contemplated. This is a recipe for disaster and, sure enough, disasters happen.

Still, back to the problem in hand: allowing that Micro$oft's code is full of bugs and creaking at the seams with code senescence, surely publishing exploits just exacerbates the situation? Well, no. Micro$oft doesn't fix a bug which leads to a security vulnerability until it has caused alleged gigadollars of harm to businesses around the world: they don't fix bugs until it's too late. When a security vulnerability has been reported to them and they have done nothing about it, they are leaving the door open for malicious attackers (who will spot the same defect, eventually) to cause widespread trouble. This culpable negligence is a far worse crime against their customers than the security analyst's release of exploit code; and the latter may fairly be justified as the only practical means of warning the public of the danger in which Micro$oft leaves them.

Why One Should Publish

Suppose a security analyst has discovered a bug with security implications for all Micro$oft's users and reported it; eight months later, Micro$oft is still aggressively selling the product in question, as a fundamental component of security-critical systems, and hasn't fixed the problem. That is criminally negligent, and self-evidently so: I dare say, dear reader, that you consequently assume I'm exaggerating. So, before I go further, here's a (lightly edited) quote from a mail sent by Eric J. Glover to a public discussion list:

... I reported a potentially serious flaw with their Passport authentication system about 8 months ago -- I even went so far as to tell a high-up manager of theirs, in person, at a conference -- and they still have not fixed it.

The other bug is that apparently Microsoft does not utilize a user's password in any way as part of the authentication process when you choose (on hotmail) "Keep me signed in to this and all other Passport sites unless I sign out." The consequences of this are significant. Although I have not actually attempted to steal another user's identity, I have (in the past) demonstrated that if you do save your password and then, from a different machine, change your password, your first session is still valid (and can be recovered after quitting the browser). There is no user-controlled way to deactivate previously stored sessions (on a different host) -- hence your password (or any function of your password) is not part of the authentication process, or there is a more severe security hole that I do not yet fully understand.

A high up manager at a conference last May told me that he thought they had fixed the problem, and would pass on my issue to be sure -- well we all know the result of that discussion. This relates to the previous message to the IP list where Microsoft is trying to strong-arm companies into NOT reporting security flaws till after they have fixed them -- so basically in the past 8 months any employee who had a hotmail account and left their job has given their bosses (or anyone with physical access to their machine) full, unrestricted access -- all because Microsoft has not fixed the problem.

Passport is the personal-information package Micro$oft is trying to get everyone to use for electronic commerce (and much more) - it'll hold your credit card information, among other things, so any security failing in it can mean all your money suddenly ending up in the hands of a cracker. Micro$oft can see that there's a huge amount of money coming its way if it can control the processes of trade, so it's doing its level best to acquire a monopoly in the infrastructure of electronic commerce - for which it is using its usual tactics of introducing proprietary protocols and data formats and bundling its solution with every piece of software it can get away with including it in - but the rush to gain control of the market hasn't involved pausing to consider whether, by so doing, it might be setting us all up for a massive economic disaster.
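The defect Glover describes is worth dwelling on, because the fix is so cheap. A session token merely needs to be bound to something that changes when the password changes; then changing your password revokes every previously saved session. The sketch below is in C; the account structure, the generation counter and the toy hash are my inventions for illustration, not Passport's actual design, and a real system would use a keyed MAC rather than a toy hash.

    #include <stdio.h>
    #include <stdint.h>

    /* Toy hash (FNV-1a), standing in for a proper keyed MAC. */
    static uint64_t toy_hash(const char *s)
    {
        uint64_t h = 1469598103934665603ULL;
        while (*s) { h ^= (uint8_t)*s++; h *= 1099511628211ULL; }
        return h;
    }

    struct account {
        const char *name;
        unsigned pwd_generation;  /* bumped on every password change */
    };

    /* Issue a token bound to the *current* password generation. */
    static uint64_t issue_token(const struct account *a)
    {
        char buf[64];
        snprintf(buf, sizeof(buf), "%s:%u", a->name, a->pwd_generation);
        return toy_hash(buf);
    }

    /* A saved token is valid only if it was issued under the password
     * generation the account now has: change the password and every
     * previously saved session dies with it. */
    static int token_valid(const struct account *a, uint64_t token)
    {
        return token == issue_token(a);
    }

    int main(void)
    {
        struct account user = { "example", 1 };
        uint64_t saved = issue_token(&user);  /* "keep me signed in" */

        printf("before password change: %s\n",
               token_valid(&user, saved) ? "valid" : "invalid");
        user.pwd_generation++;                /* password changed elsewhere */
        printf("after password change:  %s\n",
               token_valid(&user, saved) ? "valid" : "invalid");
        return 0;
    }

That Glover's saved session survived a password change shows that no such binding exists: the token must be independent of the password, just as he concluded.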

It is the analyst's duty to the folk who might rashly buy this product, in such a situation, to make the problem - including any known exploits - public. That's an extreme case, involving a product which affects millions of users and, potentially, the global economy. I've met histories of similar severity distressingly often in the past; but now let's consider a smaller-scale situation - one which seems to arise most weeks - in which the discoverer's duty is equally clear.

Suppose some business is using Micro$oft's Internet Information Server, usually known as IIS: it's a web server. About 20% of the web runs on it, as compared to about 70% on Apache, yet very nearly all the security alerts about web servers concern defects in IIS. IIS has a clear history of frequent catastrophic security failings - by catastrophic I mean that some outsider can acquire total control of the machine running IIS; by frequent I mean several per month.

Businesses routinely use Micro$oft's products, including IIS, for critical parts of their infrastructure. If the IT team of some business discovers a problem in IIS's security which could expose confidential information they hold about their clients, they have a proper - if distressing - duty to shut down the service which exposes this security flaw (and, one would hope, to provide as good a substitute as possible using software without the flaw, e.g. Apache). This has clear adverse effects on their business, so they have good cause to want the problem fixed.

So, naturally, they'll report the problem to Micro$oft; and nothing will happen about it, because the flaw hasn't yet been widely exploited. Now, they can't resume the service until the flaw has been fixed - if they did, then when the flaw does get exercised by some cracker or other, they'd be liable for culpable negligence - and they can't afford to wait until someone does exploit it. What course of action is open to them?

They could try suing Micro$oft for failing to provide a reasonable service. However, they can't afford as good lawyers as Micro$oft can, and Micro$oft's End User License Agreement (EULA) disclaims all responsibility, so they'd have a hard time with this. They might hope, given such conditions, to be entitled to reverse engineer the product and devise a patch by which to mend the defect: but the EULA forbids them to reverse engineer the product. [If they'd had the good sense to go Open Source, they'd have the source code to work from (no need to reverse engineer the binary) and they'd be at liberty to fix the problem, so they'd have less cause to complain at a total disclaimer of responsibility.]

They could stop using IIS, but Micro$oft has taken some care to ensure that its customers, once they're using its systems, are trapped. In order to use IIS, they'll have needed to use various other products of Micro$oft's, because IIS is (by design) not reliably interoperable with anyone else's products, and they'll have committed some of their infrastructure to the use of these products - which are (by design) not interoperable with IIS's competitors. So the cost to them of switching away from Micro$oft is as high as Micro$oft could make it - albeit the Zeus Web Server goes a long way towards easing migration away from IIS by supporting applications written for Micro$oft's proprietary extensions of the web standards.

Alternatively, they could publish a benign exploit which illustrates just how much damage a malicious one could cause. Sure, this has its down-side: it is then easy for someone to modify their exploit into a malicious one; but they aren't using the vulnerable product any more, so they don't suffer by it. The rest of Micro$oft's customers do, but at least they'll get warning from security advisories; and, if someone actually deploys the benign exploit, it may well alert them to the problem. Furthermore, releasing the exploit puts pressure on Micro$oft to actually do something about the problem. Responsible businesses can't afford the alternatives, so it ends up being their duty to force Micro$oft to fix its execrable software.

Micro$oft doesn't like being forced to fix its bugs - after all, that costs money - and it doesn't like the adverse publicity it gets when Yet Another Bug in its software leads to a widespread problem for its customers. So it tries to stop folk publishing exploits. Still, if it will bury its head in the sand until an exploit is seen in the wild, it's getting what it deserves; and folk releasing exploits are doing the only thing left to them to force Micro$oft to face up to the responsibilities naturally attendant on its self-appointed rôle of software provider to the world. One might regret the harm suffered by its customers at these junctures, but then those customers may fairly be accused of culpable negligence in using Micro$oft's products for business-critical purposes: they're not suitable for anything but playing games.

Using software of doubtful quality is irresponsible -- Pascal Meunier

A call for litigation

It's widely known that

No-one ever got fired for buying the market leader.

That makes perfect sense when the market leader is a product which deserves (on its own merits) to lead the market. In the world of software, however, the market leader is in many cases severely defective and survives only by the judicious manipulation of previously acquired monopolies.

Micro$oft's products have a well-documented history of security failings and Micro$oft has a well-documented history of failing to address them. Consequently, it should be possible for shareholders to bring a due-diligence case against the boards of businesses, or perhaps their IT staff, for using Micro$oft's products. Then someone would get fired for buying the market leader and, quite suddenly, a corner-stone of Micro$oft's grip on the market would be gone.

Next time Micro$oft products are being splattered by a virus or a worm, check with any businesses in which you own shares. Ask how much money the business lost through down-time attendant on the exploit; ask the business, but also ask the folk whom journalists consult for their routine estimates that the exploit has cost the global economy gigadollars. Then talk to a lawyer.

The EULA precludes recovering damages from Micro$oft for their wilful negligence: but it doesn't preclude recovering compensation from the idiots who chose those defective products despite the widely reported and unambiguous prior evidence of avoidable risk. It is hard to see how the executives of any business could justify accepting a total disclaimer of responsibility from the provider of infrastructure critical to the business. When the said provider has a widely publicised history of releasing defective products - and relying on the same EULA to spare it any responsibility for fixing them - such justification becomes hopelessly implausible.

To purchase a product from such a provider when the product is so new as to be functionally untried is clearly foolhardy: especially when the supplier has a long-standing and well-documented history of using early adopters as (full-price) paying beta testers, who discover the bugs and then pay for the bug-fixes as an upgrade. Furthermore, the EULA forbids publication of reviews of the product - save ones authorised by (read: favourable to) the supplier - so one may safely assert that the absence of adverse reviews is no evidence of the product being any better than usual, even when the product has been on the market long enough that one might hope it would have garnered the reviews it deserves. Reliance on the supplier's claims about how good the product is would, in such a situation, be rash - all the more so as their own EULA explicitly repudiates all such claims.

Of course, if the business were in a position to review the source code of the product before deciding to use it, or to trust in a review by disinterested third parties (bound by no restraint but the law's general objection to unjustified defamation), it would be harder to complain against the directors in the event of some defect - missed by the reviewer - being exploited. They might plausibly attempt a similar justification if such a review were conducted on the executable - but this would require reverse-engineering, forbidden by the EULA. Furthermore, Micro$oft wires all its products into one another - deep integration - making their source code so complex (and bloated) that access to it would not suffice to enable a review to assess its fitness for any particular purpose (which, doubtless, contributes to Micro$oft's unwillingness to warrant its products' fitness for any particular purpose).

The only possible remaining defence a Micro$oft customer might then offer against a charge of culpable negligence is that there was no other product which could provide the service they needed. Yet

There is no excuse.

See also

A lot of exploits are buffer over-runs, which depend on the programmer making a mistake for which there is no excuse.
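For the avoidance of doubt about how inexcusable the mistake is, here is a minimal sketch in C (the function names are mine, for illustration): the bug is a single unchecked copy, and the fix is a single length check.

    #include <stdio.h>
    #include <string.h>

    /* The classic mistake: copying input of unchecked length into a
     * fixed-size buffer.  Over-long input overwrites whatever lies
     * beyond the buffer - typically the function's return address,
     * which is exactly what an exploit arranges to control. */
    void copy_carelessly(const char *input)
    {
        char buffer[16];
        strcpy(buffer, input);   /* no bounds check: the bug */
        printf("got: %s\n", buffer);
    }

    /* The fix costs one test: refuse anything that does not fit. */
    void copy_carefully(const char *input)
    {
        char buffer[16];
        if (strlen(input) >= sizeof(buffer)) {
            fprintf(stderr, "input too long: rejected\n");
            return;
        }
        strcpy(buffer, input);   /* now known to fit */
        printf("got: %s\n", buffer);
    }

    int main(void)
    {
        copy_carelessly("hi");   /* safe only because this input happens to fit */
        copy_carefully("hello"); /* fine */
        copy_carefully("AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA");  /* rejected */
        return 0;
    }

The over-run itself is only the entry point; what turns it into an exploit is the attacker's choice of what gets written past the end of the buffer.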

End
Written by Eddy.