Matt Blaze's
EXHAUSTIVE SEARCH
Science, Security, Curiosity
When Should the Government Disclose "Stockpiled" Vulnerabilities?
Somewhere between immediately and never.

Encryption, it seems, at long last is winning. End-to-end encrypted communication systems are protecting more of our private communication than ever, making interception of sensitive content as it travels over (insecure) networks like the Internet less of a threat than it once was. All this is good news, unless you're in the business of intercepting sensitive content over networks. Denied access to network traffic, criminals and spies (whether on our side or theirs) will resort to other approaches to get access to data they seek. In practice, that often means exploiting security vulnerabilities in their targets' phones and computers to install surreptitious "spyware" that records conversations and text messages before they can be encrypted. In other words, wiretapping today increasingly involves hacking.

This, as you might imagine, is not without controversy.

From a privacy standpoint, official hacking feels problematic at best. No one wants government-sponsored intruders spying on their devices, to say nothing of the risks of abuse should their hacking tools fall into the wrong hands. But exploiting pre-existing flaws at least has the virtue of being, by its nature, relatively targeted. In the last few years, my colleagues Steve Bellovin, Sandy Clark, Susan Landau, and I have written fairly extensively about "lawful hacking". We concluded that while there are definitely risks to the approach, controlled and regulated targeted hacking is preferable to law enforcement proposals that restrict or weaken encryption. Exploiting the (regrettably vast) sea of existing flaws in modern software, at least, doesn't introduce new vulnerabilities the way proposed mandates for "wiretap-friendly" systems would.

In any case, whether we like it or not, government agencies -- both law enforcement and intelligence -- are definitely hacking like never before. Earlier this week, for example, WikiLeaks released documents describing an extensive toolkit for compromising phones and other devices, purportedly (and apparently credibly) belonging to the CIA.

The interesting question (and one for which we desperately need sensible policy guidance) isn't so much whether the government should exploit vulnerabilities (it will), but what it should do with the vulnerabilities it finds.

Modern software systems are, above all else, dazzlingly complex. While computers can accomplish amazing things, the sheer size and complexity of modern software makes it inevitable that there are hidden defects -- bugs -- in almost any non-trivial system. And some of these bugs, inevitably, have security implications that can allow an attacker to bypass authentication or otherwise take unauthorized control of the system. In practice, real systems have so many bugs that the question is not whether there's an exploitable vulnerability, but simply how long it will be until the next one is found.

Exploiting flawed software thus carries with it a fundamental -- and fundamentally difficult -- conflict for the government. The same vulnerable phones, computers and software platforms used by law enforcement and intelligence targets (the "bad guys") are often also used by the rest of us (the "good guys") to manage everything from private chitchat to our personal finances to the national power grid to critical defense systems. And if we find a flaw in one of these systems, it seems reasonable to worry that someone else, with less pure intentions, might find and exploit it too.

So when the government finds exploitable flaws in software, it's torn between two competing -- and compelling -- "equities". On the one hand, it has bad guys to catch and intelligence to gather. That suggests that the government should keep these vulnerabilities to itself, quietly exploiting them for as long as it can. On the other hand, the same vulnerabilities also expose innocent people and government institutions to the potential for attack by criminals and spying by rival nations' intelligence agencies. That suggests that the government should promptly report discovered flaws to software vendors so they can be fixed quickly, before someone else finds them and uses them against us. There are reasonable arguments to be made on both sides, and the stakes in our increasingly online and software-controlled world are higher now than ever.

So how do we resolve such a seemingly unresolvable conflict? It involves balancing risks and rewards, a difficult task even when all the facts and probabilities are known. Unfortunately, there isn't a lot of definitive research to tell us when, or whether, a vulnerability in a complex software system is likely to be rediscovered and used for nefarious purposes.

Let's first define the problem a bit more precisely. Suppose the government discovers some vulnerability. What's the maximum amount of time it can wait before the same flaw is likely to be rediscovered and exploited by an adversary? In other words, when, exactly, should the government report flaws and have them fixed?

There are a couple of easy cases at the edges. One involves flaws discovered in a system used exclusively by good guys, say, control software for hospital life support systems. Since there's no legitimate reason for the government to compromise such systems, and every reason to want to prevent bad guys from messing with them, clearly the right strategy is for the government to report the flaws immediately, so they can be fixed as quickly as possible. The other easy case involves flaws in software used exclusively by bad guys (say, "Mujahedeen Secrets 2"). There, no good guys depend on the system, so there's no benefit (and much to lose) in helping to strengthen it. Here, the government clearly should never report the flaws, so that it can keep exploiting them for as long as possible.

But real systems are rarely at either of these two simple extremes. In practice, software is almost always "dual use", protecting both good guys and bad. So the conflict is between solving crime (by exploiting flaws) on the one hand and preventing crime (by fixing them) on the other. Finding the right time to report requires estimating (guessing?) how long it's likely to take before someone else finds and uses the same flaws against us. In other words, in most cases, the right time to report will be somewhere between immediately and never. But how long is that? And how do we calculate it?
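
To make the tradeoff concrete, here is a deliberately oversimplified sketch of how such a calculation might go -- my own illustration, not a description of any actual government process. Assume the flaw yields a fixed amount of intelligence value for each year it stays secret, that an adversary independently rediscovers it with some constant probability each year, and that rediscovery imposes a one-time expected harm on the rest of us. Every number in the snippet is a made-up assumption, chosen only to show the shape of the calculation.

    # Toy model (illustrative only): expected net value of holding a
    # vulnerability for a given number of years before reporting it.
    # All parameters are hypothetical assumptions, not real estimates.

    def expected_net_value(hold_years, value_per_year, rediscovery_prob, harm_if_rediscovered):
        total = 0.0
        p_still_secret = 1.0  # probability nobody else has found it yet
        for _ in range(hold_years):
            # Intelligence value accrues only while the flaw remains secret.
            total += p_still_secret * value_per_year
            # Expected harm if an adversary finds it during this year.
            total -= p_still_secret * rediscovery_prob * harm_if_rediscovered
            p_still_secret *= 1.0 - rediscovery_prob
        return total

    # Hypothetical inputs: one "unit" of value per year held, a 6% annual
    # rediscovery rate, and harm worth 20 years of value if rediscovered.
    for years in (0, 1, 2, 5, 10):
        print(years, round(expected_net_value(years, 1.0, 0.06, 20.0), 2))

Notice that with a flat annual rediscovery rate, this toy model collapses to one of the two extremes: holding pays off either every year or never, depending on whether the yearly value exceeds the yearly expected harm. Real rediscovery rates are almost certainly neither flat nor uniform across systems, which is part of why the empirical question of how often and how quickly rediscovery actually happens matters so much.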

Which brings us to two very interesting -- and phenomenally timely -- papers published this week that each aim to shed some light on the ecosystem of vulnerability rediscovery.

One, by Trey Herr and Bruce Schneier, looks at more than 4,000 reported vulnerabilities in browsers, mobile operating systems, and other software. The other, by RAND's Lillian Ablon and Timothy Bogart, takes a deeper look at a smaller set of 200 exploitable vulnerabilities. Both papers offer important new insights, and each repays a careful read.

So what have we learned? Unfortunately, the data so far is unsatisfying and somewhat contradictory. In Herr and Schneier's data, vulnerabilities were rediscovered relatively frequently and quickly: between 15% and 22% of the vulnerabilities they studied were independently found by at least one other person or group. But in Ablon and Bogart's data, fewer than 6% of zero-day vulnerabilities were rediscovered in any given year.
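
A quick back-of-the-envelope calculation (mine, not from either paper) helps show why a per-year figure and an overall figure aren't directly comparable: an annual rediscovery rate compounds over however long a flaw is held, so even a modest yearly rate implies a substantial chance of rediscovery over a multi-year stockpile. The snippet below makes the (unrealistic) simplifying assumption that the annual rate is constant.

    # Back-of-the-envelope (illustrative): chance a flaw is rediscovered
    # within `years` years, assuming a constant annual rediscovery probability.
    def prob_rediscovered_within(p_annual, years):
        return 1.0 - (1.0 - p_annual) ** years

    for years in (1, 3, 5, 10):
        print(years, round(prob_rediscovered_within(0.06, years), 2))
    # At 6% per year: about 0.06 after 1 year, 0.17 after 3 years,
    # 0.27 after 5 years, and 0.46 after 10 years.

Note, too, that the two studies measure somewhat different things over different time windows -- one an overall share, the other an annual rate -- which is part of why their figures are hard to compare head to head.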

This suggests (and intuition would probably agree) that no single simple factor predicts whether a vulnerability will be rediscovered. It's clearly a heavily non-uniform space, and we need to study it a lot more before we can make reliable predictions. And even then, no available data tells us how likely it is that a rediscovered zero-day will actually be fielded against us by an adversary. Unhappily for everyone (except perhaps for researchers like me), what we've learned is mostly that we need more research.

So, other than funding more research (always a good idea, if I do say so myself), what do we do in the meantime? The federal government has a White House-level Vulnerabilities Equities Process (VEP) charged with evaluating zero-day vulnerabilities discovered by intelligence and law enforcement agencies and deciding when and whether to disclose them to vendors. The process is shrouded in secrecy, and there's some evidence that it isn't working very well, with many vulnerabilities evidently not going through the process at all. But the principle of an independent body weighing these decisions is a good one. By virtue of their jobs, the intelligence and law enforcement agencies that find vulnerabilities are disinclined to "spoil" them by reporting them. A working VEP body would have to actively and aggressively counterbalance that natural pressure from agencies not to report. With sufficient political and bureaucratic will, that could, at least in principle, be an achievable goal, though hardly an easy one.

But how can the VEP make sensible decisions in the absence of good predictive models for vulnerability rediscovery? It's worth observing that while there's much we don't know about the vulnerability ecosystem, there's one thing we know for sure: there are a lot of vulnerabilities out there, and finding them is largely a matter of resources. So a prudent approach would be for the VEP to report newly discovered vulnerabilities in most systems relatively quickly, but also to ensure that agencies that found them have sufficient resources to maintain and replenish their "supply". That is, vulnerability discovery becomes essentially a large-scale, pipelined process rather than just a collection of discrete tools.

A side effect, as my co-authors and I have noted in our papers, is that under a policy biased toward reporting, the more active agencies are in finding weaknesses to exploit in software, the more often vulnerabilities would ultimately get reported and fixed in the systems we rely on. But for that to happen, we need a more transparent, more engaged VEP process than we appear to have.