Matt Blaze's
EXHAUSTIVE SEARCH
Science, Security, Curiosity
Archives: June 2009 - March 2010

A decade ago, I observed that commercial certificate authorities protect you from anyone from whom they are unwilling to take money. That turns out to be wrong; they don't even do that much.

SSL certificates are the primary mechanism for ensuring that secure web sites -- those displaying that reassuring "padlock" icon in the address bar -- really are who they purport to be. In order for your browser to display the padlock icon, a web site must first present a "certificate", digitally signed by a trusted "root" authority, that attests to its identity and encryption keys.
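
For the curious, here's a minimal sketch of what sits behind the padlock, using Python's standard ssl module: connect to a site, let the local store of trusted root authorities validate the certificate chain, and look at who vouched for it. This illustrates the general mechanism, not how any particular browser implements it, and the host name is just a placeholder.

```python
# A minimal sketch of the server-certificate check behind the "padlock".
# Illustration of the general mechanism, not any browser's code; the host
# name below is just a placeholder.
import socket
import ssl

def inspect_certificate(host, port=443):
    context = ssl.create_default_context()    # loads the system's trusted root CAs
    with socket.create_connection((host, port)) as sock:
        with context.wrap_socket(sock, server_hostname=host) as tls:
            cert = tls.getpeercert()           # parsed certificate, after validation
    subject = dict(item[0] for item in cert["subject"])
    issuer = dict(item[0] for item in cert["issuer"])
    print("site:       ", subject.get("commonName"))
    print("issued by:  ", issuer.get("commonName"))
    print("valid until:", cert["notAfter"])

if __name__ == "__main__":
    inspect_certificate("www.example.com")
```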

Unfortunately, through a confluence of sloppy design, naked commercial maneuvering, and bad user interfaces, today's web browsers have evolved to accept certificates issued by a surprisingly large number of root authorities, from tiny, obscure businesses to various national governments. And a certificate from any one of them is usually sufficient to bless any web connection as being "secure".

What this means is that an eavesdropper who can obtain fake certificates from any certificate authority can successfully impersonate every encrypted web site someone might visit. Most browsers will happily (and silently) accept new certificates from any valid authority, even for web sites for which certificates had already been obtained. An eavesdropper with fake certificates and access to a target's internet connection can thus quietly interpose itself as a "man-in-the-middle", observing and recording all encrypted web traffic, with the user none the wiser.

But how much of a threat is this in practice? Are there really eavesdroppers out there -- be they criminals, spies, or law enforcement agencies -- using bogus certificates to intercept encrypted web traffic? Or is this merely idle speculation, of only theoretical concern?

A paper published today by Chris Soghoian and Sid Stamm [pdf] suggests that the threat may be far more practical than previously thought. They found turnkey surveillance products, marketed and sold to law enforcement and intelligence agencies in the US and abroad, designed to collect encrypted SSL traffic based on forged "look-alike" certificates obtained from cooperative certificate authorities. The products (apparently available only to government agencies) appear sophisticated, mature, and mass-produced, suggesting that "certified man-in-the-middle" web surveillance is at least widespread enough to support an active vendor community. Wired's Ryan Singel reports in depth here.

It's worth pointing out that, from the perspective of a law enforcement or intelligence agency, this sort of surveillance is far from ideal. A central requirement for most government wiretapping (mandated, for example, in the CALEA standards for telephone interception) is that surveillance be undetectable. But issuing a bogus web certificate carries with it the risk of detection by the target, either in real-time or after the fact, especially if it's for a web site already visited. Although current browsers don't ordinarily detect unusual or suspiciously changed certificates, there's no fundamental reason they couldn't (and the Soghoian/Stamm paper proposes a Firefox plugin to do just that). In any case, there's no reliable way for the wiretapper to know in advance whether the target will be alerted by a browser that scrutinizes new certificates.
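
To make that concrete, here's a rough sketch, in Python rather than as a browser extension, of the "remember what you saw last time" idea. This is my own illustration of the general approach, not the Soghoian/Stamm plugin, and the pin-file name is just a placeholder.

```python
# Sketch of certificate-change detection: record the fingerprint of the
# certificate first seen for each site, and warn when it later changes,
# even if the new certificate chains to a trusted root.
# My own illustration of the idea, not the Soghoian/Stamm plugin.
import hashlib
import json
import ssl

PIN_FILE = "seen_certs.json"   # placeholder: local store of first-seen fingerprints

def fingerprint(host, port=443):
    pem = ssl.get_server_certificate((host, port))
    der = ssl.PEM_cert_to_DER_cert(pem)
    return hashlib.sha256(der).hexdigest()

def check(host):
    try:
        with open(PIN_FILE) as f:
            pins = json.load(f)
    except FileNotFoundError:
        pins = {}
    fp = fingerprint(host)
    if host not in pins:
        pins[host] = fp                      # "trust on first use"
        verdict = "first visit: fingerprint recorded"
    elif pins[host] == fp:
        verdict = "certificate unchanged since last visit"
    else:
        verdict = "WARNING: certificate has changed since last visit"
    with open(PIN_FILE, "w") as f:
        json.dump(pins, f)
    return verdict

if __name__ == "__main__":
    print(check("www.example.com"))
```

A real tool would, of course, have to cope gracefully with legitimate certificate rollover, which is part of what makes this harder than it looks.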

Also, it's not clear how web interception would be particularly useful for many of the most common law enforcement investigative scenarios. If a suspect is buying books or making hotel reservations online, it's usually a simple (and legally relatively uncomplicated) matter to just ask the vendor about the transaction, no wiretapping required. This suggests that these products may be aimed less at law enforcement than at national intelligence agencies, who might be reluctant (or unable) to obtain overt cooperation from web site operators (who may be located abroad).

Whether this kind of surveillance is currently widespread or not, Soghoian and Stamm's paper underscores the deeply flawed mess that the web certificate model has become. It's time to design something better.

It's been a frighteningly confusing week for frequent flyers (and confirmed cowards) like me. First we had the Underpants Bomber, his Christmas-day attempt to take down a Detroit-bound flight thwarted by slow-acting chemistry and quick-thinking passengers. Next -- within a day -- came inexplicable new regulations that seemed designed more to punish the rest of us than to discourage future acts of terrorism. The new rules were unsettling not just because they seemed as laughably ineffective as they were inconvenient, but because they suggested that the authorities had no idea what to do, no real process for analyzing and reacting to potential new threats. As the Economist was moved to write, "the people who run America's airport security apparatus appear to have gone insane".

A few days later the TSA, to its credit, rolled back some of the more arbitrarily punitive restrictions -- in-flight entertainment systems can now be turned back on, and passengers are, at the airline's discretion, again permitted to use the toilets during the last hour of flight.

But while a degree of sanity may have returned to some of the rules, the TSA's new security philosophy appears to yield significant advantage to attackers. The current approach may actually make us more vulnerable to disruption and terror now than we were before.

If you can climb a fifteen-foot ladder and fit through a two-foot-diameter hole, you can, with a bit of advance planning, take an extensive "top-to-bottom" tour of a Titan II ICBM launch complex, complete with missile silo and missile. Best of all, you no longer have to trespass or join the Air Force to do it.

And so I just returned from Sahuarita, AZ and the Titan Missile Museum, a place known during most of the cold war as SMS Launch Site 571-7. I spent the better part of the day beneath the surface of the earth, part of a group of six hardy nuclear tourists under the direction of Lt. Col. Chuck Smith (USAF, retired, a former "missileer" at the site), exploring the nuts, bolts and welds of Armageddon.

At the peak of the cold war, there were over 1,000 nuclear missiles in buried silos located throughout sparsely populated areas of the continental United States, all fueled and ready to be launched toward the Soviet Union on a few minutes' notice. From 1963 through 1984, this included 54 Titan II missiles at sites in Arizona, Arkansas and Kansas, each equipped with a W-53 warhead capable of delivering a nine megaton thermonuclear yield. Nine megatons is horrifically destructive even by the outsized standards of atomic bombs, capable of leveling a good-sized city in a single blast. And the Soviets had at least as many similar weapons aimed right back at us.

How did we keep from blowing ourselves up for all those years?

This week in Chicago, Micah Sherr, Gaurav Shah, Eric Cronin, Sandy Clark, and I have a paper at the ACM Computer and Communications Security Conference (CCS) that's getting a bit more attention than I expected. The paper, Can They Hear Me Now? A Security Analysis of Law Enforcement Wiretaps [pdf] examines the standard "lawful access" protocols used to deliver intercepted telephone (and some Internet) traffic to US law enforcement agencies. Picking up where our 2004 analysis of wireline loop extender wiretaps [pdf] left off, this paper looks at the security and reliability of the latest communications surveillance standards, which were mandated by the 1994 Communications Assistance for Law Enforcement Act (CALEA). The standards, it turns out, can leave wiretaps vulnerable to manipulation and denial of service by surveillance targets who employ relatively simple technical countermeasures.

Of particular concern to law enforcement and others who rely on wiretap evidence is the fact that the protocols can be prevented not just from collecting accurate call content (which can already be obscured by a target using encryption), but also from collecting the metadata record -- who called whom and when. Metadata-only taps (called "pen registers" for historical reasons) make up more than 90% of legal wiretaps in the US. Call metadata, which over time reveals a suspect's "community of interest" and behavior patterns, can be more revealing to an investigator than the content. Many agencies use software that automatically aggregates and analyzes call metadata to discover the members (and structure) of suspected criminal networks.
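
As a toy illustration, and only that (I have no idea what the agencies' analysis software actually looks like), here is how little code it takes to turn raw pen-register records into a who-talks-to-whom map; the records below are invented.

```python
# Toy illustration of "community of interest" analysis from call metadata:
# aggregate who-called-whom records into a contact graph. The records are
# invented, and this is not any agency's actual software.
from collections import defaultdict

# hypothetical pen-register records: (caller, callee, timestamp)
records = [
    ("555-0101", "555-0202", "2009-11-01T09:14"),
    ("555-0101", "555-0303", "2009-11-01T09:40"),
    ("555-0202", "555-0303", "2009-11-02T18:05"),
    ("555-0404", "555-0101", "2009-11-03T07:55"),
]

def community_of_interest(records):
    graph = defaultdict(set)
    for caller, callee, _timestamp in records:
        graph[caller].add(callee)
        graph[callee].add(caller)
    return graph

if __name__ == "__main__":
    for number, contacts in sorted(community_of_interest(records).items()):
        print(number, "is in contact with", sorted(contacts))
```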

The wiretap standard, called ANSI J-STD-025, was originally designed to cover only the low-bandwidth voice telephone services that existed in the early 1990's. But as the communications services that law enforcement agencies might want to tap have expanded -- think SMS, 3G internet, VoIP, and so on -- the standard has been "patched" to allow the delivery of more and more different kinds of traffic. Unfortunately, many of these new services are a poor fit for the tapping architecture, especially in the way status messages are encoded for delivery to law enforcement and in the way backhaul bandwidth is provisioned. In particular, many modern services make it possible for a wiretap target to generate messages that saturate the relatively low bandwidth "call data channels" between the telephone company and the government, without affecting his or her own services. Worse, these channels are shared among all the taps in a particular central office, so a single person employing countermeasures can suppress wiretaps of other targets as well.
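
To get a feel for the scale of the problem, here's a back-of-the-envelope sketch. The specific numbers are illustrative assumptions on my part, not figures from the paper; the point is only that a single target generating a modest stream of signaling events can fill a shared, low-bandwidth delivery channel.

```python
# Back-of-the-envelope sketch of call data channel saturation.
# All numbers are illustrative assumptions, not figures from the paper.

CDC_BANDWIDTH_BPS = 64_000    # assumed capacity of a shared call data channel
BYTES_PER_RECORD = 200        # assumed size of one delivered signaling record
RECORDS_PER_EVENT = 1         # e.g., one record per SMS or call attempt

def saturating_event_rate(bandwidth_bps=CDC_BANDWIDTH_BPS,
                          record_bytes=BYTES_PER_RECORD,
                          records_per_event=RECORDS_PER_EVENT):
    """Signaling events per second needed to fill the channel completely."""
    bits_per_event = record_bytes * 8 * records_per_event
    return bandwidth_bps / bits_per_event

if __name__ == "__main__":
    rate = saturating_event_rate()
    # With the assumptions above, roughly 40 events per second would crowd
    # out delivery for every tap sharing the same channel.
    print(f"~{rate:.0f} signaling events/second saturate the channel")
```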

Just to be clear, we don't suggest that anyone actually do these things in the hopes of thwarting a government wiretap. Aside from being somewhat technically difficult, there would be no way to be sure that the countermeasures were actually working. And I know of no evidence that criminals are actually using these techniques today. (Of course, a determined and technically savvy criminal could prove me wrong about this.)

The real problem is that these protocols -- used in the most serious criminal investigations -- were apparently designed and deployed (and mandated in virtually every communications switch in the US) without first subjecting them to a meaningful security analysis. They were engineered to work well in the average case, but ignored the worst case of an adversary trying to create conditions unfavorable to the eavesdropper. And as the services for which these protocols are used have expanded, they've created a wider range of edge conditions, with more opportunities for manipulation and mischief.

That's a familiar theme for security engineers, and the CALEA wiretap standard is hardly the first example of a serious protocol being deployed without considering what an adversary might do. Unfortunately, it probably won't be the last, either.

I wrote the "Secure Systems" column in the September/October 2009 IEEE Security and Privacy, which is hitting the streets just about now. You can read the full column, "Taking Surveillance out of the Shadows", at www.mattblaze.org/papers/mab-sec-200909.pdf [pdf]. The subject, familiar to readers here, is wiretapping gone wrong.

Wiretapping has been controversial lately, often framed, rightly or wrongly, as a zero-sum battle between our right to privacy on the one hand and the needs of law enforcement and intelligence agencies on the other. But in practice, surveillance is about more than policy and civil liberties -- it's also about technology. Reliably intercepting traffic in modern networks can be harder than it sounds. And, time and again, the secret systems relied upon by governments to collect wiretap intelligence and evidence have turned out to have serious problems. Whatever our policy might be, interception systems can -- and do -- fail, even if we don't always hear about it in the public debate.

Indeed, the recent history of electronic surveillance is a veritable catalog of cautionary tales of technological errors, risks and unintended consequences. Sometimes mishaps lead to well-publicized violations of the privacy of innocent people. There was, for example, the NSA's disclosure earlier this year that it had been accidentally "over-collecting" the communications of innocent Americans. And there was the discovery, in 2005, that the standard interfaces intended to let law enforcement tap cellular telephone traffic had been hijacked by criminals who were using them to tap the mobile phones of hundreds of people in Athens, Greece.

Bugs in tapping technology don't always let in the bad guys; sometimes they keep out the good guys instead. Cryptographers may recall the protocol failures in the NSA's "Clipper" key escrow system [pdf], in which wiretaps could be defeated easily by exploiting a weak authentication scheme during the key setup. More recently, there was the discovery in 2004, by Micah Sherr, Sandy Clark, Eric Cronin and me, that law enforcement intercepts of analog phones can be disabled just by sending a tone down the tapped line [pdf].
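
In the Clipper case, the arithmetic behind the failure is easy to sketch: the LEAF's integrity check was only a 16-bit checksum, so a forged key-setup field that passes the check can be found after roughly 2^16 random trials. The toy calculation below is my illustration of that scale, not the original analysis code.

```python
# Toy illustration of why a 16-bit checksum is weak authentication: a forged
# field that passes the check is found after roughly 2**16 random trials.
# My own sketch of the scale involved, not the original 1994 analysis code.
import random

CHECKSUM_BITS = 16

def expected_trials(bits=CHECKSUM_BITS):
    # A random candidate matches a b-bit checksum with probability 2**-b,
    # so the expected number of random trials is 2**b.
    return 2 ** bits

def simulate(bits=CHECKSUM_BITS, seed=1):
    # Count random guesses until one matches a fixed b-bit "checksum".
    rng = random.Random(seed)
    target = rng.getrandbits(bits)
    trials = 1
    while rng.getrandbits(bits) != target:
        trials += 1
    return trials

if __name__ == "__main__":
    print("expected trials:", expected_trials())   # 65536
    print("one simulated run:", simulate())
```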

What's going on here? It's hard to think of another law enforcement technology beset by as many, or as frequent, flaws as are modern wiretapping systems. No one would tolerate a police force regularly hobbled by exploding guns, self-opening handcuffs, stalled radio cars, or contaminated evidence bags. Yet many of the systems we depend on to track suspected criminals and spies have failed just as spectacularly, if more quietly.

A common factor in these failed systems is that they were designed and deployed largely in secret, away from the kind of engineering scrutiny that, as security engineers know well, is essential for making systems robust. It's a natural enough reflex for law enforcement and intelligence agencies to want to keep their surveillance technology under wraps. But while it may make sense to keep secret who is under surveillance, there's no need to keep secret how. And the track record of current systems suggests a process that is seriously, even dangerously, broken.

Fortunately, unlike some aspects of the debate about wiretapping, reliability isn't a political issue. The flaws in these systems cut across ideology. No one is served by defective technology that spies on the wrong people or that lets criminals, spies, or terrorists evade legal wiretaps. If surveillance is to protect us, it has to come out of the shadows.

Photo: Law enforcement "loop extender" tap for analog phone lines.

Stewart Baker, former director of policy at the Department of Homeland Security, the parent agency of the TSA, took me to task for my recent posting about the new TSA boarding pass scanners being installed at airport security checkpoints.

My observation was that the ID checkpoint is insufficient and in the wrong place; fixing the Schneier/Soghoian attack requires that a strong ID check be performed at the boarding gate, which the new system still doesn't do. Stewart says that the TSA security process doesn't care what flight someone is on as long as they are screened properly and compared against the "no fly" list.

Maybe it doesn't; the precise security goals to be achieved by identifying travelers have never been clearly articulated, which is an underlying cause of this and other problems with our aviation security system. But the TSA has repeatedly asserted that passenger flight routing is very much a component of their name screening process. For example, the regulations governing the Secure Flight program published last October in the Federal Register [pdf] say that "... TSA may learn that flights on a particular route may be subject to increased security risk" and so might do different screening for passengers on those routes. I don't know whether that's true or not, but those are the TSA's words, not mine.

Anyway, Stewart's confusion about the security properties of the protocol (and about my reasons for discussing them) notwithstanding, the larger point is that aviation security is a complex (and interesting) problem in the discipline that I've come to understand as "human-scale security protocols".

I first wrote about human-scale security as a computer science problem back in 2004 in my paper Toward a broader view of security protocols [pdf]. Such protocols share much in common with the cryptographic authentication and identification schemes used in computing: they're hard to design well and they can fail in subtle and surprising ways. Perhaps cryptographers and security protocol designers have something to contribute toward analyzing and designing better systems here. We can certainly learn something from studying them.

In 2003, Bruce Schneier published a simple and effective attack against the TSA's protocol for verifying a flyer's identity on domestic flights in the US. Nothing was done until 2006, when Chris Soghoian, then a grad student at Indiana University, created an online fake boarding pass generator that made it a bit easier to carry out the attack. That got the TSA's attention: Soghoian's home was raided by the FBI and he was ordered to shut down or face the music. But the TSA still didn't change its flawed protocol, and so for more than five years, despite the inconvenience of long security lines and rigorous checks of our government-issue photo IDs at the airport, it has remained possible for bad guys to exploit the loophole and fly under a fake name. I blogged about the protocol failure, and the TSA's predictably defensive, shoot-the-messenger reaction to it, in this space a couple of years ago.

Well, imagine my surprise yesterday when I noticed something new at Philadelphia International Airport: the TSA ID checker was equipped not just with the usual magnifying glass and UV light, but with a brand new boarding pass reader. The device, which did not yet seem to be in use, apparently reads and validates the information encoded on a boarding pass and displays the passenger name as recorded in the airline's reservation record. According to the TSA blog, boarding pass readers are being rolled out at selected security checkpoints, partly to enable "paperless" boarding passes on mobile phones, but also to close the fake boarding pass loophole.

Unfortunately, the new system doesn't actually fix the problem. The new protocol is still broken. Even when the security checkpoint boarding pass readers are fully operational, it will still be straightforward to get on a flight under a false name.
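
Here's why, in the form of a toy model. This is my own simplification, not the TSA's actual logic, and the names and flight number are invented: the new checkpoint binds the photo ID to a boarding pass, but the gate, which controls who actually boards, still accepts any valid pass without looking at an ID at all.

```python
# Toy model of the checkpoint/gate gap. My own simplification, not the TSA's
# actual system; names and flight numbers are invented.

def checkpoint_check(id_name, boarding_pass, airline_records):
    """New-style checkpoint: the pass must be genuine and match the photo ID."""
    genuine = boarding_pass in airline_records
    return genuine and boarding_pass["name"] == id_name

def gate_check(boarding_pass, airline_records, flight):
    """Boarding gate: the pass must be genuine and for this flight; no ID check."""
    genuine = boarding_pass in airline_records
    return genuine and boarding_pass["flight"] == flight

if __name__ == "__main__":
    # Two genuine reservations, one in the traveler's real name and one in a
    # false name (both hypothetical).
    pass_real_name = {"name": "REAL NAME", "flight": "XY123"}
    pass_false_name = {"name": "FALSE NAME", "flight": "XY123"}
    records = [pass_real_name, pass_false_name]

    # The real-name pass satisfies the checkpoint (it matches the photo ID),
    # while the false-name pass is the one scanned at the gate, where nothing
    # ties the traveler back to the identity verified earlier.
    print(checkpoint_check("REAL NAME", pass_real_name, records))   # True
    print(gate_check(pass_false_name, records, "XY123"))            # True
```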

I'm not sure how often I'll actually update it, but I've got a Twitter feed now: twitter.com/mattblaze

Even if you can't say it in an SMS message, it still might be worth saying. I suspect I'll still mainly be using this blog.

It's the holiday weekend, so today's (overdue) entry has perhaps less to do with computer security and crypto than usual.

I'm interested in techniques for capturing the ambient sounds of places and environments. This kind of recording is something of a neglected stepchild in the commercial audio world, which is overwhelmingly focused on music, film soundtracks, and similarly "professional" applications. But field recording -- documenting the sounds of the world around us -- has a long and interesting history of its own, from the late Tony Schwartz's magnetic wire recordings of New York city street life in the 40's and 50's to the stunning natural biophonies hunted down across the world by Bernie Krause. And the Internet has brought together small but often quite vibrant communities of wonderfully fanatic nature recordists and sound hunters. But we're definitely on the margins of the audio world here, specialized nerds even by the already very geeky standards of the AV club.

We tend to take the unique sounds of places for granted, and we may not even notice when familiar soundscapes radically change or disappear out from under (or around) us. And in spite of the fact that high quality digital audio equipment is cheaper and more versatile than ever, hardly anyone thinks to use a recorder the way they might a digital camera. Ambient sound is, for most purposes, as ephemeral as it ever was. I commented last year in this space on the strange dearth of available recordings of David Byrne's Playing the Building audio installation; Flickr is loaded with photos of the space, but hardly any visitors thought to capture what it actually sounded like. And now, like so many other sounds, it's gone.

Anyway, I've found that making good quality stereo field recordings carries its own challenges besides the obvious ones of finding interesting sounds and getting the right equipment to where they are. In particular, most research on, and commercial equipment for, stereo recording is focused (naturally enough) on serving the needs of the music industry. There the aim is to get a pleasing reproduction of a particular subject -- a musician, an orchestra, whatever -- that's located in a relatively small or at least identifiable space, usually indoors. "Ambience" in music recording has to do mainly with capturing the effect of the subject against the space. Any sounds originating from the local environment are usually considered nothing more than unwelcome noise, blemishes to be eliminated or masked from the finished product.

But the kind of field recording I'm interested in takes the opposite approach -- the environment is the subject. Most of the standard, well-studied stereo microphone configurations aren't optimized for capturing this. Instead, they're usually aiming to limit the "recording angle" to the slice where the music is coming from and to reduce the effects of everything else. There are some standard microphone arrangements that can work well for widely dispersed subjects, but most of the literature discusses them in the context of indoor music recording. It's hard to predict, without actually trying it, how a given technique can be expected to perform in a particular outdoor environment. If experience is the best teacher, it's pretty much the only one available here.

Compounding the difficulty of learning how different microphone configurations perform outdoors is the surprising paucity of controlled examples of different techniques. There are plenty of terrific nature recordings available online, but people tend to distribute only their best results, and keep to themselves the duds recorded along the way. For the listener, that's surely for the best, of course, but it means that there are lamentably few examples of the same sources recorded simultaneously with different (and documented) techniques from which to learn and compare.

And so I've slowly been experimenting with different stereo techniques and making my own simultaneous recordings in different outdoor environments. In doing this, I can see why similar examples aren't more common; making them involves hauling around more equipment, taking more notes, and spending more time in post-production than if the goal were simply to get a single best final cut. But the effort is paying off well for me, and perhaps others can benefit from my failures (and occasional successes). So I'm collecting and posting a few examples on this web page [link], which I will try to update with new recordings from time to time. But mostly, I'd like to encourage others to do the same; my individual effort is really quite pale in the grand scheme of things, limited as it is by my talent, equipment, and carrying capacity.

My sample clips, for what they're worth, can be found at www.mattblaze.org/audio/soundscapes/.

Once again I was lucky enough to be invited to this year's Interdisciplinary Workshop on Security and Human Behavior at MIT this week. Organized by Alessandro Acquisti, Ross Anderson, and Bruce Schneier, the workshop aims to bring together an aggressively diverse group of researchers from computing, psychology, economics, sociology, and philosophy. (I blogged about last year's workshop here.)

This is a small and informal event, with no published proceedings or other tangible record. But Bruce Schneier, Adam Shostack and Ross Anderson are liveblogging the sessions.

As with last year, I ended up making quick-and-dirty sound recordings of the sessions, which I'll put up here as I process them. (I apologize for the uneven audio quality; the recording conditions were hugely suboptimal. And I didn't know I was supposed to be doing this until five minutes before the first session, using a recorder and microphone I luckily happened to have in my backpack.)

Update 6/12/09 1745: All session audio is now online after the fold below.