Matt Blaze's
EXHAUSTIVE SEARCH
Science, Security, Curiosity
Archives: 2013 - 2022

A 1935 Radio Orphan Annie's Secret Society decoder badge resting on a souvenir mug from the CRYPTO '93 conference.

Between 1935 and 1949, many North American children (and adults) got their introduction to cryptography through encrypted messages broadcast at the ends of episodes of two popular radio adventure serial programs: Little Orphan Annie and Captain Midnight. Dedicated listeners could join Radio Orphan Annie's Secret Society or (later) Captain Midnight's Secret Squadron, whereupon they would be sent a decoder that would allow them to decrypt each week's messages (generally a clue about what would happen in the next episode).

Orphan Annie (and her Secret Society members) fought crime, battled pirates, solved mysteries, and had other typical American pre-adolescent adventures. Captain Midnight (with his Secret Squadron) used his aviation prowess to perform daring rescues and emergency transports, and, with the outbreak of WWII, was commissioned by the government to lead secret missions behind enemy lines.

The main qualification for membership in (and issuance of a decoder for) Radio Orphan Annie's Secret Society and Captain Midnight's Secret Squadron involved drinking Ovaltine, a malted milk flavoring containing the vitamins and nutrients then understood to be needed by growing secret operatives, or at least to be profitable for its manufacturer (which sponsored the broadcasts). Proof of sufficient Ovaltine consumption was established by mailing in labels from Ovaltine packages. New pins and badges were issued annually, requiring additional labels to be sent in each year. (The devices are sometimes remembered as decoder rings, but in fact they took the form of pins, badges, and the occasional whistle or signal mirror.)

Orphan Annie's Secret Society produced decoders (variously called "Super Decoder pins", "Telematic Decoder Pins" and other names from year to year) from 1935 through 1940. From 1941 through 1949, the decoders were rebranded as "Code-O-Graphs" and distributed by Captain Midnight's Secret Squadron. These years corresponded to Ovaltine's sponsorship of the respective programs. Although the decorative elements and mechanical designs varied, the underlying cryptographic principles were the same for all the decoders.

Encrypted messages were included in the broadcasts roughly once per week, usually at the end of Thursday's show (which typically ended with a cliffhanger). Unfortunately, there does not appear to be an easily available full online archive of the broadcasts. However, you can listen to (and, with the information below, decode) airchecks of several original messages here (note the year to ensure you use the correct decoder badge parameters):

1936: Orphan Annie 1936 Pin (1)
1936: Orphan Annie 1936 Pin (2)
1938: Orphan Annie 1938 Pin (1)
1938: Orphan Annie 1938 Pin (2)
1938: Orphan Annie 1938 Pin (3)
1942: Captain Midnight 1942 Badge (1)
1942: Captain Midnight 1942 Badge (2)
1942: Captain Midnight 1942 Badge (3)
1947: Captain Midnight 1947 Badge (1)

These decoders have endured as iconic examples of simple, "toy" cryptography, even among those (like me) born well after the golden age of radio. And while they are indeed vulnerable to weaknesses that make them unsuitable for most "serious" use, that doesn't mean we shouldn't take them seriously. In fact, the underlying cryptographic and security principles they embody are important and subtle, part of the foundations for much of "modern" cryptography, and the badges combine multiple techniques in interesting ways that repay a bit of careful study. Indeed, they were almost certainly the most cryptologically sophisticated breakfast premiums ever produced. And, by understanding them sufficiently well, we can cryptanalyze and decode messages without needing to buy Ovaltine or scour eBay. The rest of this post explains how.
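To make the scheme concrete, here's a minimal sketch (in Python) of the ring-cipher idea the badges embody. For illustration I'm assuming the simplest arrangement: an alphabetical letter ring against a number ring running 1 through 26, with the weekly key given as a single letter-number alignment (the announcer's hypothetical "set your badge to B-5"). Badges in some years used scrambled letter orderings, so treat the alphabetical ring here as a simplifying assumption rather than a faithful reproduction of any particular year's dial.

```python
# Minimal sketch of a ring-style "decoder badge" cipher.
# Assumption (not from any specific badge): the letter ring is in
# alphabetical order and the number ring runs 1..26; some years'
# badges used scrambled letter orderings instead.

import string

ALPHABET = string.ascii_uppercase  # assumed alphabetical letter ring

def make_tables(key_letter: str, key_number: int):
    """Align key_letter with key_number, then read off the full mapping."""
    offset = ALPHABET.index(key_letter.upper())
    encode = {}
    for i, letter in enumerate(ALPHABET):
        # Numbers wrap around the dial, staying in the range 1..26.
        number = (key_number - 1 + (i - offset)) % 26 + 1
        encode[letter] = number
    decode = {v: k for k, v in encode.items()}
    return encode, decode

def decrypt(numbers, key_letter, key_number):
    _, decode = make_tables(key_letter, key_number)
    return "".join(decode[n] for n in numbers)

# Example: with the dial set to B-5, encode and decode a short message.
enc, _ = make_tables("B", 5)
ciphertext = [enc[c] for c in "DRINKMOREOVALTINE"]
print(ciphertext)
print(decrypt(ciphertext, "B", 5))  # -> DRINKMOREOVALTINE
```

Note what the key setting buys (and doesn't): it selects one of only 26 possible alignments, and the mapping is a simple monoalphabetic substitution, so frequency analysis, or trying all 26 settings, breaks it quickly. That's precisely the kind of weakness the rest of this post exploits.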



Back in the not-so-distant past, if you were patient and knowledgeable enough, you could reverse engineer the behavior of almost any electronic device simply by inspecting it carefully and understanding the circuitry. But those days are rapidly ending. Today, virtually every aspect of complex electronic hardware is controlled by microprocessors and software, and while that's generally good news for functionality, it's also bad news for security (and for having any chance of being sure what, exactly, your gadgets are doing, for that matter). For devices like smartphones, software runs almost every aspect of the user interface, including how and when it's powered on and off, and, for that matter, what being "off" actually means.

Complex software is, to put it mildly, hard to get right (for details, see almost any other posting on this or any other security blog). Especially for gadgets that are rich with microphones, cameras, location and environmental sensors, and communication links (such as, you know, smartphones), errors and security vulnerabilities in the software that controls them can have serious privacy implications.

The difficulty of reliably turning software-based devices completely off is no longer merely a hypothetical issue. Some vendors have even recognized it as a marketable feature. For example, certain Apple iPhones will continue to transmit "Find My" tracking beacons even after they've ostensibly been powered off. Misbehaving or malicious software could enable similar behavior even on devices that don't "officially" support it, creating the potential for malware that turns your phone into a permanently-on surreptitious tracking device, regardless of whether you think you've turned it off. Compounding these risks are the non-removable batteries used in many of the latest smartphones.

Sometimes, you might really want to make sure something is genuinely isolated from the world around it, even if the software running on it has other ideas. For the radios in phones (which can transmit and receive cellular, wifi, bluetooth, and near field communication signals and receive GPS location signals), we can accomplish this by encasing the device inside a small Faraday cage.

A Faraday cage severely attenuates radio signals going in or out of it. It can be used to assure that an untrustworthy device (like a cellphone) isn't transmitting or receiving signals when it shouldn't be. A Faraday cage is simple in principle: it's just a solid conductive container that completely encloses the signal source, such that the RF voltage differential between any two points on the cage is always zero. But actually constructing one that works well in practice can be challenging. Any opening can create a junction that acts as an RF feed and dramatically reduces the effective attenuation.
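For intuition about why openings matter so much, consider the wavelengths involved. A common engineering rule of thumb (my addition here, not a claim from the post or any particular standard) is to keep any gap or seam well below the wavelength of the highest frequency you need to block, often cited as one twentieth of it. A quick sketch:

```python
# Back-of-the-envelope sketch: how small must a gap in the shield be?
# The lambda/20 aperture rule used below is a common engineering rule
# of thumb, assumed here for illustration.

C = 299_792_458  # speed of light, m/s

bands = {
    "GPS L1": 1.575e9,
    "Wi-Fi/Bluetooth (2.4 GHz)": 2.4e9,
    "Cellular (LTE, ~2.6 GHz)": 2.6e9,
    "Wi-Fi (5.8 GHz)": 5.8e9,
}

for name, freq_hz in bands.items():
    wavelength_mm = C / freq_hz * 1000
    print(f"{name}: wavelength {wavelength_mm:.0f} mm, "
          f"keep openings under ~{wavelength_mm / 20:.1f} mm")
```

At 5.8 GHz the wavelength is only about 52 mm, so by this rule even a few-millimeter gap along a pouch's closure can seriously degrade its shielding.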

There are somewhat pricey ($40 to $80) commercial Faraday pouches made specifically for cell phones, and there are a variety of improvised shielding methods that make the rounds as Internet folklore. The question, then, is how well do they actually work? It can be hard to reliably tell without access to a fairly specialized RF test lab. But fortunately, I sort of have one of those. While I can't compete with a full-scale commercial EMC test lab, my modest setup can make moderately accurate measurements of the signal attenuation provided by various commercial shielding pouches and home-brewed designs at most of the frequencies we care about.
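The arithmetic behind such measurements is simple (this is my framing of the general method, not a description of the exact lab procedure): measure received power with and without the shield in place, and report the drop in decibels.

```python
# Sketch of the attenuation arithmetic behind a shielding test
# (general method, not the specific lab setup described above).

import math

def attenuation_db(p_ref_dbm: float, p_shielded_dbm: float) -> float:
    """Powers expressed in dBm subtract directly to give dB of attenuation."""
    return p_ref_dbm - p_shielded_dbm

def attenuation_from_watts(p_ref_w: float, p_shielded_w: float) -> float:
    """The same quantity computed from linear power measurements."""
    return 10 * math.log10(p_ref_w / p_shielded_w)

# Example: an unshielded reference of -40 dBm that drops to -95 dBm
# inside a pouch represents 55 dB of attenuation.
print(attenuation_db(-40, -95))                       # 55
print(round(attenuation_from_watts(1e-7, 3.16e-13)))  # 55 (same, in watts)
```

Because the scale is logarithmic, every 10 dB is another factor of ten in power: a 55 dB pouch passes less than a hundred-thousandth of the incident signal.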

I tested three commercial pouches as well as three commonly-recommended makeshift shielding methods. Read on for the results. (Note that I have no connection with any vendor mentioned here, and I do not endorse any of the products discussed for any particular purpose. Caveat emptor.)



I picked up the new book Compromised last week and was intrigued to discover that it may have shed some light on a small (and rather esoteric) cryptologic and espionage mystery that I've been puzzling over for about 15 years. Compromised is primarily a memoir of former FBI counterintelligence agent Peter Strzok's investigation into Russian operations in the lead up to the 2016 presidential election, but this post is not a review of the book or concerned with that aspect of it.

Early in the book, as an almost throwaway bit of background color, Strzok discusses his work in Boston investigating the famous Russian "illegals" espionage network from 2000 until their arrest (and subsequent exchange with Russia) in 2010. "Illegals" are foreign agents operating abroad under false identities and without official or diplomatic cover. In this case, ten Russian illegals were living and working in the US under false Canadian and American identities. (The case inspired the recent TV series The Americans.)

Strzok was the case agent responsible for two of the suspects, Andrey Bezrukov and Elena Vavilova (posing as a Canadian couple under the aliases Donald Heathfield and Tracey Lee Ann Foley). The author recounts watching from the street on Thursday evenings as Vavilova received encrypted shortwave "numbers" transmissions in their Cambridge, MA apartment.

Given that Bezrukov and Vavilova were indeed, as the FBI suspected, Russian spies, it's not surprising that they were sent messages from headquarters using this method; numbers stations are part of time-honored espionage tradecraft for communicating with covert agents. But their capture may have illustrated how subtle errors can cause these systems to fail badly in practice, even when the cryptography itself is sound.
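For background, numbers-station traffic is conventionally protected with a one-time pad: the message is first encoded as digit groups, then combined digit by digit with pad digits using non-carrying (mod 10) addition. The full details of the SVR's system aren't public, so the sketch below is just the textbook arithmetic; it shows why the cipher itself is unbreakable when each pad page is used once, and why everything hinges on the operational discipline surrounding it.

```python
# Minimal sketch of one-time-pad arithmetic as used in classic numbers
# tradecraft: digit-wise, non-carrying (mod 10) addition of pad digits.
# This is the textbook scheme, not a reconstruction of the SVR's system.

import secrets

def make_pad(n: int) -> str:
    """A fresh page of random pad digits. Reusing a page destroys all security."""
    return "".join(str(secrets.randbelow(10)) for _ in range(n))

def encrypt(plain_digits: str, pad: str) -> str:
    return "".join(str((int(p) + int(k)) % 10)
                   for p, k in zip(plain_digits, pad))

def decrypt(cipher_digits: str, pad: str) -> str:
    return "".join(str((int(c) - int(k)) % 10)
                   for c, k in zip(cipher_digits, pad))

message = "07341"             # message already encoded as digit groups
pad = make_pad(len(message))
ct = encrypt(message, pad)
assert decrypt(ct, pad) == message
print(ct)                      # the digit group read over the air
```

With truly random, never-reused pad digits, the ciphertext reveals nothing about the message. The failures come from everything around the math: reused pads, retained pad material, and predictable operational patterns, like receiving transmissions at the same time every Thursday evening while the FBI watches from the street.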

You may have noticed that this blog, and my domain, are now at www.mattblaze.org. Twenty-five years ago, back in 1993, I registered the name crypto.com, which I've used as my personal domain as well as to host a variety of cryptography technology and policy resources.

During that quarter century the "dotcom" era came and went, but for whatever reason, I held on to the domain as basically a personal home, a kind of Internet version of the little house increasingly enveloped by skyscrapers in Pixar's Up. (You kids can get off my lawn now, please.)

Cryptography has long been intertwined with difficult public policy issues, especially the balance between security of data on the one hand and law enforcement access for surveillance on the other. I've spent a good part of my career grappling with these issues, and remember "crypto" being misguidedly derided as some kind of criminal tool during the very time when we needed to be integrating strong security into the Internet's infrastructure. (That "debate", in the '90s, set back Internet security by at least a decade, and we're still paying the price in the form of regular data breaches, many of which could have been prevented had better security been built in across the stack in the first place.)

Somehow, the word "crypto" has recently acquired an alternative new meaning, as a somewhat unfortunate shorthand for digital currencies such as Bitcoin. I've been involved around the edges of digital currency since early on -- old timers in this space will remember that I once chaired the Financial Cryptography conference, where much of the foundational work toward practical digital money began.

I don't think conflating cryptography and digital currency will serve either field well in the long run, particularly as to how they're perceived by the public and policymakers. Surprisingly few of the important aspects of digital currency are directly related to its cryptographic components. Cryptography itself already attracts disproportionate attention for its potential as a tool for criminals and evildoers. Digital currency adds a completely different (but equally fraught) regulatory and policy morass into the equation. Still, there's no doubt that, at this moment in time, the two have become hopelessly intermixed, at least in the minds of the digital money people. That doesn't mean this won't end badly, but it's unarguably where we are right now.

Over the last few years, I've gotten a growing barrage of offers for the crypto.com domain, many of them obviously non-serious, but a few, frankly, attention-getting. I shrugged most of them off, but it became increasingly clear that holding on to the domain made less and less sense for me. I quietly entered discussions with a few serious potential buyers earlier this year.

Last month, I reached an agreement to sell the domain. I have no idea what the new owner plans to use it for beyond what I read in the trade press, and I have no financial stake in their business. The details will have to stay confidential, but I will say that I'm satisfied with the outcome and that it involved neither tulips nor international postal reply coupons.

It's been, I think, a pretty good run, as these things go. See you on the Internets.

This Monday, The Intercept broke the story of a leaked classified NSA report [pdf link] on email-based attacks on various US election systems just before the 2016 US general election.

The NSA report, dated May 5, 2017, details what I would assume is only a small part of a more comprehensive investigation into Russian intelligence services' "cyber operations" to influence the US presidential race. The report analyzes several relatively small-scale targeted email operations that occurred in August and October of last year. One campaign used "spearphishing" techniques against employees of third-party election support vendors (which manage voter registration databases for county election offices). Another -- our focus here -- targeted 112 unidentified county election officials with "trojan horse" malware disguised inside plausibly innocuous-looking Microsoft Word attachments. The NSA report does not say whether these attacks were successful in compromising any county voting offices, or even what the malware actually tried to do.

Targeted phishing attacks and malware hidden in email attachments might not seem like the kind of high-tech spy tools we associate with sophisticated intelligence agencies like Russia's GRU. They're familiar annoyances to almost anyone with an email account. And yet they can serve as devastatingly effective entry points into even very sensitive systems and networks.

So what might an attacker -- particularly a state actor looking to disrupt an election -- accomplish with such low-tech attacks, should they have succeeded? Unfortunately, the possibilities are not comforting.

Encryption, it seems, at long last is winning. End-to-end encrypted communication systems are protecting more of our private communication than ever, making interception of sensitive content as it travels over (insecure) networks like the Internet less of a threat than it once was. All this is good news, unless you're in the business of intercepting sensitive content over networks. Denied access to network traffic, criminals and spies (whether on our side or theirs) will resort to other approaches to get access to data they seek. In practice, that often means exploiting security vulnerabilities in their targets' phones and computers to install surreptitious "spyware" that records conversations and text messages before they can be encrypted. In other words, wiretapping today increasingly involves hacking.

This, as you might imagine, is not without controversy.

Recent news stories, notably this story in USA Today and this story in the Washington Post, have brought to light extensive use of "Stingray" devices and "tower dumps" by federal -- and local -- law enforcement agencies to track cellular telephones.

Just how does all this tracking and interception technology work? There are actually a surprising number of different ways law enforcement agencies can track and get information about phones, each of which exposes different information in different ways. And it's all steeped in arcane surveillance jargon that's evolved over decades of changes in the law and the technology. So now seems like a good time to summarize what the various phone tapping methods actually are, how they work, and how they differ from one another.

Note that this post is concerned specifically with phone tracking as done by US domestic law enforcement agencies. Intelligence agencies engaged in bulk surveillance, such as the NSA, have different requirements, constraints, and resources, and generally use different techniques. For example, it was recently revealed that NSA has access to international phone "roaming" databases used by phone companies to route calls. The NSA apparently collects vast amounts of telephone "metadata" to discover hidden communications patterns, relationships, and behaviors across the world. There's also evidence of some data sharing to law enforcement from the intelligence side (see, for example, the DEA's "Hemisphere" program). But, as interesting and important as that is, it has little to do with the "retail" phone tracking techniques used by local law enforcement, and it's not our focus here.

Phone tracking by law enforcement agencies, in contrast to intelligence agencies, is intended to support investigations of specific crimes and to gather evidence for use in prosecutions. And so their interception technology -- and the underlying law -- is supposed to be focused on obtaining information about the communications of particular targets rather than of the population at large.

In all, there are six major distinct phone tracking and tapping methods used by investigators in the US: "call detail records requests", "pen register/trap and trace", "content wiretaps", "E911 pings", "tower dumps", and "Stingray/IMSI Catchers". Each reveals somewhat different information at different times, and each has its own legal implications. An agency might use any or all of them over the course of a given investigation. Let's take them one by one.
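As a compact preview (my own gloss, ahead of the one-by-one discussion), the six methods can be summarized by what each reveals and whether it looks backward at stored records or forward in real time:

```python
# A compact summary of the six methods (my gloss, for orientation;
# the detailed discussion below is authoritative).

PHONE_SURVEILLANCE_METHODS = {
    "call detail records request":
        ("historical",  "past call/text metadata from carrier billing records"),
    "pen register / trap and trace":
        ("prospective", "numbers dialed and received in real time, no content"),
    "content wiretap":
        ("prospective", "actual voice and message content (highest legal bar)"),
    "E911 ping":
        ("prospective", "current handset location, obtained via the carrier"),
    "tower dump":
        ("historical",  "all phones registered to a given tower in a time window"),
    "Stingray / IMSI catcher":
        ("prospective", "identities and locations of nearby phones, captured over the air"),
}

for method, (timing, reveals) in PHONE_SURVEILLANCE_METHODS.items():
    print(f"{method}: [{timing}] {reveals}")
```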