Matt Blaze's
EXHAUSTIVE SEARCH
Science, Security, Curiosity
Archives: 7 January - 2 August, 2007

Readers of this blog may recall that for the last two months I've been part of a security review of the electronic voting systems used in California. Researchers from around the country (42 of us in all) worked in teams that examined source code and documents and performed "red team" penetration tests of election systems made by Diebold Election Systems, Hart InterCivic and Sequoia Voting Systems.

The red team reports were released by the California Secretary of State last week, and have been the subject of much attention in the nationwide press (and much criticism from the voting machine vendors in whose systems vulnerabilities were found). But there was more to the study than the red team exercises.

Today the three reports from the source code analysis teams were released. Because I was participating in that part of the study, I'd been unable to comment on the review before today. (Actually, there's still more to come. The documentation reviews haven't been released yet, for some reason.) Our reports can now be downloaded from http://www.sos.ca.gov/elections/elections_vsr.htm .

I led the group that reviewed the Sequoia system's code (that report is here [pdf link]).

The California study was, as far as I know, the most comprehensive independent security evaluation of electronic voting technologies ever conducted, covering products from three major vendors and investigating not only the voting machines themselves, but also the back-end systems that create ballots and tally votes. I believe our reports now constitute the most detailed published information available about how these systems work and the specific risks entailed by their use in elections.

My hat's off to principal investigators Matt Bishop (of UC Davis) and David Wagner (of UC Berkeley) for their tireless skill in putting together and managing this complex, difficult -- and I think terribly important -- project.

By law, California Secretary of State Debra Bowen must decide by tomorrow (August 3rd, 2007) whether the reviewed systems will continue to be certified for use throughout the state in next year's elections, and, if so, whether to require special security procedures where they are deployed.

We found significant, deeply rooted security weaknesses in all three vendors' software. Our newly-released source code analyses address many of the supposed shortcomings of the red team studies, which have been (quite unfairly, I think) criticized as being "unrealistic". It should now be clear that the red teams were successful not because they somehow "cheated," but rather because the built-in security mechanisms they were up against simply don't work properly. Reliably protecting these systems under operational conditions will likely be very hard.

The problems we found in the code were far more pervasive, and much more easily exploitable, than I had ever imagined they would be.

Eric Cronin found this cute little junior phone bugging kit on sale at Toys 'R' Us. Recommended for ages 10-14 (presumably because children any older than that are more likely to be prosecuted under 18 USC 2511 and 18 USC 2512), the kit is basically a tunable low-power FM radio transmitter designed to connect to an analog telephone line. I especially like the way the instruction sheet [pdf] prominently warns of the dangers of eating solder, but only casually mentions the illegality of listening to other people's phone calls once you've got the thing built. (A non-trivial concern, especially considering the trouble that Ramsey Electronics got into with the US Customs Service a few years back for selling similar kits.)

As strongly as I feel about the evils of illegal wiretapping, I must admit to having decidedly mixed feelings here. No, kids, don't tap your neighbor's phone. But unraveling the once-forbidden mysteries of telephone electronics has a way of pulling a young geek into a lifetime of technological exploration. It certainly did for me.

I was at a conference recently where everyone was asked to recall their first moment of thinking "I rule!" over some technology. It's a surprisingly revealing question; experience the exhilaration of hacker empowerment at a sufficiently impressionable age and you're hooked forever. A disproportionately large fraction of the answers seemed to involve telephony. (Mine was when I discovered you could dial a phone by flashing the hookswitch. I think I was too young to have anyone to call, though.)
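For the curious, the hookswitch trick works because old rotary exchanges listened for dial pulses: digit N is just N quick interruptions of the line (ten for "0"), at roughly ten pulses per second, with a pause between digits. Flashing the hookswitch by hand generates the same interruptions. Here's a little sketch of the encoding in Python; the timing constants are nominal North American values I've used for illustration, not taken from any particular switch specification:

```python
# Pulse dialing encodes each digit as a burst of hook interruptions:
# digit N is N pulses (0 is 10), nominally 10 pulses per second,
# separated by an interdigit pause.

PULSE_MS = 100        # one make/break cycle (~60 ms break + ~40 ms make)
INTERDIGIT_MS = 700   # nominal pause separating digits

def pulse_train(number: str) -> list[tuple[int, int]]:
    """Return (pulse_count, gap_ms) pairs for each digit in `number`."""
    train = []
    for ch in number:
        if not ch.isdigit():
            continue  # skip dashes, spaces, etc.
        pulses = 10 if ch == "0" else int(ch)
        train.append((pulses, INTERDIGIT_MS))
    return train

def dial_time_ms(number: str) -> int:
    """Total time to pulse-dial `number`, in milliseconds."""
    train = pulse_train(number)
    if not train:
        return 0
    pulses = sum(p for p, _ in train)
    # no interdigit pause needed after the final digit
    return pulses * PULSE_MS + (len(train) - 1) * INTERDIGIT_MS

print(pulse_train("411"))   # [(4, 700), (1, 700), (1, 700)]
print(dial_time_ms("411"))  # 4+1+1 pulses = 600 ms, plus 2 pauses = 2000 ms
```

This is also why a patient kid with a steady rhythm could "dial" from a payphone with a locked rotary dial: the switch can't tell deliberate flashes from the dial's own pulses.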

So I suppose if the nerdy kid next door figures out how to hook one of these kits up to my phone, I won't be too upset. Just make sure not to eat the solder.

Vassilis Prevelakis and Diomidis Spinellis just published (in the July '07 IEEE Spectrum) a terrific technical analysis [link] of the recent Greek cellular eavesdropping scandal. In 2005, it was discovered that over a hundred Athens cellphones, mostly belonging to politicians (ranging from the mayor to the prime minister), were being illegally wiretapped. The culprit hasn't been found, but there's plenty of fodder for speculation, including mysteriously missing records, a suspicious suicide, and, as Prevelakis and Spinellis point out, an intriguing technological mystery.

This would all be interesting enough for its stranger-than-spy-fiction elements alone, but what makes the story essential reading here is how definitively it illustrates something that many of us in the security and privacy community have been warning about for years: so-called "lawful interception" interfaces built into network infrastructure become inviting targets for abuse. (See, for example, this point made in 1998 [pdf] and in 2006 [pdf]). And, as this case shows, those targets can be rich indeed.

For some reason, wiretapping interfaces don't seem to get much technical scrutiny, and we're starting to see how easy it can be to exploit them to nefarious ends. Vulnerabilities here can cut both ways, too, sometimes making it easier for real criminals to evade legal surveillance. A couple of years ago, Micah Sherr, Eric Cronin, Sandy Clark and I discovered basic weaknesses in the interception technologies used for decades to tap wireline telephones. Many of the vulnerabilities have found their way, in the name of "backward compatibility", into the latest eavesdropping standards, now implemented just about everywhere. Maybe even in Greek cellular networks.

Addendum: I just noticed that Steve Bellovin has a blog post on the same subject here, with some interesting comments and links.

Several people asked me for a list of references from my talk on "Safecracking, Secrecy and Science" Sunday morning in Sebastopol, and I promised a blog entry with pointers. (If you were there, thanks for coming; it was fun. For everyone else, I gave a talk on the relationship between progress and secrecy in security, as illustrated by the evolution of locks and safes over the last 200 years.)

Unfortunately, few of the historical references I cited are on the web (or even in print), but a bit of library work is repaid with striking parallels between the security arms races of the physical and virtual worlds.

The California Secretary of State recently announced plans for a "top-to-bottom" review of the various electronic voting systems certified for use in that state. David Wagner of U.C. Berkeley and Matt Bishop of U.C. Davis will be organizing source code and "red team" analysis efforts for the project, and they've recruited a large group of researchers to work with them, including me. This has the potential to be one of the most comprehensive independent evaluations of election technologies ever performed, and is especially significant given California's large size and the variety of systems used there. Trustworthy voting is perhaps the most elemental of democratic functions, but, as security specialists know all too well, complex systems on the scale required to conduct a modern election are virtually impossible to secure reliably without broad and deep scrutiny. California's review is a welcome and vitally important, if small, step forward.

I'll be leading one of the source code review teams, and we'll be getting to work by the time you read this. We have a lot to do in a very short time, with the final report due to be published by late summer or early fall. Until then, I won't be able to discuss the project or the details of how we're progressing, so please don't take it personally if I don't.

For some more details, the project FAQ is available here (PDF format).

UPDATE Aug 2, 2007: Our code review reports are now available. See this blog entry for details.

As interested as I am in the human-scale side of security, I suppose I should have strong opinions about last week's unscheduled evacuation drill in Boston. There's plenty to react to, after all: misguided marketing, hair-trigger over-reaction, shameless media pandering, oddball artists, and of course, disingenuous self-justification from all concerned. Yet for all the negligence and ineptitude on display, there doesn't seem to be very much to learn from these mistakes that we didn't already know. More troubling to me is the manipulative con game that triggered the whole spectacle in the first place. And, for a change, this has nothing to do with homeland security or fear mongering. But it strikes at the heart of commerce, culture and trust.

We often say that researchers break poor security systems and that feats of cryptanalysis involve cracking codes. As natural and dramatic as this shorthand may be, it propagates a subtle and insidious fallacy that confuses discovery with causation. Unsound security systems are "broken" from the start, whether we happen to know about it yet or not. But we talk (and write) as if the people who investigate and warn us of flaws are responsible for having put them there in the first place.

Words matter, and I think this sloppy language has had a small, but very real, corrosive effect on progress in the field. It implicitly taints even the most mainstream security research with a vaguely disreputable, suspect tinge. How to best disclose newly found vulnerabilities raises enough difficult questions by itself; let's try to avoid phrasing that inadvertently blames the messenger before we even learn the message.

I've long been an admirer of the James Randi Educational Foundation (JREF), tireless advocates for critical thinking, skepticism, and the scientific method. They offer a one million dollar prize to the first person who can provide convincing, testable proof of supernatural powers. The foundation recently set up a "remote viewing" challenge in which the purported psychic is asked to describe the contents of a special sealed box held at the JREF office in Fort Lauderdale, Florida.

Those who know me may be surprised to read this, but I'm pleased to announce that Jutta Degener and I have successfully visualized the contents of Randi's challenge box. We accomplished this from over a thousand miles away and entirely through mental concentration and the application of our unique talents (or, I should say, gifts), and without any physical access or inside information. We can now reveal to the world the item in the box: a small mirrored flat circular wheel or disk, such as a DVD or CD. Randi, if you're reading this, a money order or certified check will be fine.

This Spring at Penn, Jonathan Smith and I are co-teaching CSE125: Technology and Policy, a new seminar-format interdisciplinary course on, well, technology and policy. Whether you're a techno-geek, a policy wonk, or just hope to understand why the music industry is behaving so strangely, check out the (still very preliminary) course web page and syllabus here and consider attending. We meet in Towne 321, Tuesdays and Thursdays, 1200-1330 (that's noon to 1:30pm, for you non-nerds).

I saw an interesting story (thanks to Dave Farber's Interesting-People list) on how the TSA is considering selling advertising space at airport security checkpoints. My distaste at the prospect of being subjected to ads during these already humiliating and irritating screenings aside, I found the most fascinating part of this article to be its glimpse at the officious technical jargon that has emerged for airport security paraphernalia. Those grey tubs that you put your laptop in (after removing it from its case, of course) are apparently properly called "divestiture bins"; after X-ray, we retrieve our items at the "composure tables". I don't know about you, but I don't usually feel especially composed after making it through a long security line.

I'd say you can't make this stuff up, but apparently someone does.

Newly armed with the official terminology, I did a bit of googling this morning and found the TSA's Airport Security Design guidelines. This 333-page manual (PDF format) specifies, in all the detail one could ever hope for, everything there is to know about designing the security infrastructure for an airport, right down to the layout of the divest tables for the X-ray ingress points at sterile concourse station SSCPs. It's all very meticulous and complete, even warning of the "potential for added delay while the passenger divests or composes" (page 99). For some geeky reason, I find all this mind-numbing detail about the physical architecture of security to make strangely compelling reading, and I can't help but look for loopholes and vulnerabilities as I skim through it.

Somehow, for all the attention to minutiae in the guidelines, everything ends up just slightly wrong by the time it gets put together at an airport. Even if we accept some form of passenger screening as a necessary evil these days, today's checkpoints seem like case studies in basic usability failure designed to inflict maximum frustration on everyone involved. The tables aren't quite at the right height to smoothly enter the X-ray machines, bins slide off the edges of tables, there's never enough space or seating for putting shoes back on as you leave the screening area, basic instructions have to be yelled across crowded hallways. According to the TSA's manual, there are four models of standard approved X-ray machines, from two different manufacturers. All four have slightly different heights, and all are different from the heights of the standard approved tables. Do the people setting this stuff up ever actually fly? And if they can't even get something as simple as the furniture right, how confident should we be in the less visible but more critical parts of the system that we don't see every time we fly?