It's been a frighteningly confusing week for frequent flyers (and confirmed cowards) like me. First we had the Underpants Bomber, his Christmas-day attempt to take down a Detroit-bound flight thwarted by slow-acting chemistry and quick-thinking passengers. Next -- within a day -- came inexplicable new regulations that seemed designed more to punish the rest of us than to discourage future acts of terrorism. The new rules were unsettling not just because they seemed as laughably ineffective as they were inconvenient, but because they suggested that the authorities had no idea what to do, no real process for analyzing and reacting to potential new threats. As the Economist was moved to write, "the people who run America's airport security apparatus appear to have gone insane".
A few days later the TSA, to its credit, rolled back some of the more arbitrarily punitive restrictions -- in-flight entertainment systems can now be turned back on, and passengers are, at the airline's discretion, again permitted to use the toilets during the last hour of flight.
But while a degree of sanity may have returned to some of the rules, the TSA's new security philosophy appears to yield a significant advantage to attackers. The current approach may actually make us more vulnerable to disruption and terror now than we were before.
According to the New York Times, the TSA's strategy now relies heavily on "unpredictable" procedures that randomly subject passengers to different kinds of screenings at different times:
"The advantage we have of random unpredictable procedures is it will prevent somebody from figuring out how to game the system," [Acting TSA Director Gale] Rossides said. "The security strategy in its core has built in randomness and unpredictability. That is a strength of the system."
Unfortunately, this "strength" may instead result in a significant weakness, making it easier, rather than harder, for adversaries to game the aviation security system in their favor.
Spot checks and random (less than 100%) screening are time-honored techniques in law enforcement and security; properly used, they can be quite effective at discouraging a wide range of misconduct. The IRS audits only a small fraction of tax returns, many transit systems only occasionally spot-check riders' tickets, customs agents inspect only a small sample of travelers for contraband, and so on.
The strategy works in these cases because when a random spot check detects a violation, the authorities can react in a way that influences the behavior even of those who aren't checked. If the cost of being caught is high enough (fines and prison sentences), mechanisms that detect only a small fraction of potential violators can be sufficient to deter most cheaters. And spot checks have the advantage not only of being much cheaper to implement than the alternative of 100% compliance checking, but also of reducing inconvenience to the honest public.
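The deterrence arithmetic behind spot checks can be sketched in a few lines. The numbers below are invented purely for illustration (a hypothetical fare-evasion scenario), but they show the key point: even a low check probability deters a rational cheater once the penalty is large enough.

```python
# Expected-value sketch of spot-check deterrence. All numbers are
# illustrative, not drawn from any real enforcement system.

def expected_net_gain(gain, penalty, p_caught):
    """Expected net gain from cheating under random spot checks.

    A rational cheater weighs the gain from getting away with it
    against the penalty, scaled by the probability of a check.
    """
    return (1 - p_caught) * gain - p_caught * penalty

# Save $2 per ride by fare-beating, $100 fine, 5% chance of a check:
# the expected value of cheating is negative, so cheating is deterred.
print(expected_net_gain(gain=2.0, penalty=100.0, p_caught=0.05))

# Same 5% check rate, but a trivial $5 fine: cheating now pays on average.
print(expected_net_gain(gain=2.0, penalty=5.0, p_caught=0.05))
```

Note that deterrence here comes from the penalty dominating the payoff, not from the check rate being high; that asymmetry is exactly what breaks down for suicide attackers, as discussed below.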
"Unpredictable" security as applied to air passenger screening means that sometimes (perhaps most of the time), certain checks that might detect terrorist activity are not applied to some or all passengers on any given flight. Passengers can't predict or influence when or whether they are to be subjected to any particular screening mechanism. And so, the strategy assumes, the would-be terrorist will be forced to prepare for every possible mechanism in the TSA's arsenal, effectively narrowing his or her range of options enough to make any serious mischief infeasible.
But terrorist organizations -- especially those employing suicide bombers -- have very different goals and incentives from those of smugglers, fare beaters and tax cheats. Groups like Al Qaeda aim to cause widespread disruption and terror by whatever means they can, even at great cost to individual members. In particular, they are willing and able to sacrifice -- martyr -- the very lives of their soldiers in the service of that goal. The fate of any individual terrorist is irrelevant as long as the loss contributes to terror and disruption.
Paradoxically, the best terrorist strategy (as long as they have enough volunteers) under unpredictable screening may be to prepare a cadre of suicide bombers for the least rigorous screening to which they might be subjected, and not, as the strategy assumes, for the most rigorous. Sent on their way, each will either succeed at destroying a plane or be caught, but either outcome serves the terrorists' objective.
The problem is that catching someone under a randomized strategy creates a terrible dilemma for the authorities. What do we do when we detect a bomb-wielding terrorist whose device was discovered through the enhanced, randomly applied screening procedure?
We could simply arrest the unlucky suspect and do nothing else. But then what if there are others trying the same or similar things who aren't screened in the same way? Is some other flight about to fall from the sky? And what if a second bomb-wielding terrorist is caught, perhaps with a slightly different kind of bomb? The only viable alternative at that point may be to shut down all commercial aviation until the most rigorous screening possible can henceforth be applied universally, effectively creating the same kind of havoc that occurs after a successful attack.
Indeed, there is no response to detection in this strategy that does not serve the terrorists' interests. In game-theoretic terms, the "unpredictable" scheme has a high positive payoff for the attacker (and a high negative payoff for the rest of us), even when it succeeds at catching someone.
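The asymmetry can be made concrete with a toy payoff model. The payoff values below are invented for illustration, but the structure is the point: when both outcomes of an attempt (an undetected device, or a detection that itself triggers mass disruption) score positively for the attacker, no mixing probability between weak and rigorous screening drives the attacker's expected payoff below the payoff of simply not attacking.

```python
# Toy payoff model of "unpredictable" screening. All payoff values are
# hypothetical; only their signs and ordering matter for the argument.

PAYOFF_SUCCESS  = 100   # device gets through: a successful attack
PAYOFF_DETECTED = 40    # device is caught, but detection causes mass disruption
PAYOFF_NO_ATTEMPT = 0   # baseline: the attacker stays home

def attacker_expected_payoff(p_rigorous):
    """Expected attacker payoff when rigorous screening is applied
    with probability p_rigorous and weak screening otherwise.

    The attacker prepares only for the weak screening level, so he
    succeeds under weak screening and is detected under rigorous.
    """
    return (1 - p_rigorous) * PAYOFF_SUCCESS + p_rigorous * PAYOFF_DETECTED

# At every mixing probability, attempting beats not attempting:
for p in (0.1, 0.5, 0.9, 1.0):
    assert attacker_expected_payoff(p) > PAYOFF_NO_ATTEMPT
```

Contrast this with the fare-beating case: there, detection carries a large negative payoff for the cheater, so randomization deters. Here, detection is itself a partial win for the attacker, and randomizing the screening level cannot change that sign.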
If random screening favors the terrorists, what should we do instead?
A critical security engineering principle appears to have been ignored in the design of the TSA's "unpredictable" strategy. There does not seem to have been an analysis of the threats under which the system must work or of the security properties that the various components of the system must have. It is all but impossible to design a sound security system without understanding precisely the threats it is intended to address or what the different mechanisms do and how they work together.
We might reflexively assume that any passenger screening system needs to be 100% effective at detecting all possible weapons and dangerous objects, an obviously difficult task. But, fortunately, that's not the requirement. Instead, the mechanisms need only be highly effective at detecting objects that can create actual terror under the conditions they will be subjected to in an actual flight. That is, in order to have meaningful security screening, we first must understand what it realistically takes to bring down an airplane. The security system can then be designed specifically to eliminate the preconditions for successful terrorism.
The TSA's much maligned "three ounce" liquid rule is, in fact, a nice example of good security engineering of this kind. There are flammable and volatile liquids, to be sure, but rather than simply banning all liquids, an attempt was made to determine the kinds and minimum quantities of (easily transported) liquids required to do serious damage to an airplane in flight. It's possible that the analysis was wrong, of course, and perhaps there's a yet unknown improvised three ounce liquid bomb design that can do enough damage to take down a flight. But so far, at least, no one, not even the Christmas Underpants Bomber, has managed to build and ignite one on an airliner, despite obvious incentives for terrorists to try.
Again, it's important to recognize that the goal here is not to prevent all damage or injury, only damage that rises to the level of actual terror in the context of the other conditions on an airplane. A "terrorist" who sets fire to three ounces of weapons-grade hand sanitizer might cause burns to his seatmates. But as long as we're confident that that's the extent of the potential danger, this sort of behavior would best be understood as a garden-variety assault, not as terrorism.
The fact that pilots, flight attendants and other passengers are now instructed not to cooperate with hijackers further reduces the opportunity for small acts of violence to be leveraged into terrorist control of the airplane itself. We might reasonably rely on magnetometers, pat-downs, and x-ray machines to detect only the most dangerous objects, such as guns and bombs, and on other mechanisms, such as readily available fire extinguishers and well-trained flight crews, to manage the remaining threats. The passenger and carryon screening system must still work very reliably, but is not required to supernaturally detect every object that might do any harm, however small.
This kind of threat-based engineering has long been applied successfully outside aviation. In well designed security systems, individual mechanisms need not perform perfectly, but rather must work well enough to guarantee that the other components can do their jobs within the specified limits. Commercial safes, for example, are only designed to resist forced opening for a limited time, typically just 10-30 minutes. But even a safe with a short time rating can be used to engineer a very robust anti-burglary system. A 15 minute safe is completely sufficient when it is accompanied by an alarm that summons the police within 14 minutes.
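The safe-plus-alarm example is a composition of guaranteed bounds, and the designer's check is simple arithmetic. A minimal sketch of that check, using the illustrative numbers from the paragraph above (the function name and structure are mine, for exposition):

```python
# Composing guaranteed bounds: a safe that is rated to resist forced entry
# for a known minimum time is sound when paired with a response that is
# guaranteed to arrive sooner. Numbers match the article's example.

def burglary_system_is_sound(safe_resistance_minutes, response_minutes):
    """The composed system holds if the safe's guaranteed resistance
    strictly outlasts the worst-case response time."""
    return safe_resistance_minutes > response_minutes

# A 15-minute safe with a 14-minute guaranteed police response: sound.
assert burglary_system_is_sound(15, 14)

# The same safe with a 20-minute response: the composition fails,
# no matter how good the safe's rating looks in isolation.
assert not burglary_system_is_sound(15, 20)
```

The point is that each component's guarantee is stated precisely enough for the other components to rely on it. A screening regime that guarantees nothing about what it checks offers no such bound to compose against.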
The TSA's new random and unpredictable screening strategy, on the other hand, makes no guarantees about what will and won't be scrutinized, and so can't be used as the basis for other parts of the security system. It cannot, as a computer scientist might say, be composed with other mechanisms to give us strong assurances about what can and can't happen aboard an airplane. It only leaves us wondering what we might have missed.