Does Apple’s Cloud Key Vault Answer the Key Escrow Question?

In a recent talk at Black Hat, Apple’s head of security engineering (Ivan Krstić) described many security mechanisms in iOS. One in particular stood out: Apple’s Cloud Key Vault, the way that Apple protects cryptographic keys stored in iCloud. A number of people have criticized Apple for this design, saying that they have effectively conceded the “Going Dark” encryption debate to the FBI. They didn’t, and what they did was done for very valid business reasons—but they’re taking a serious risk, one that could answer the Going Dark question in the other way: back-up copies of cryptographic keys are far too dangerous to leave lying around.

Going Dark, briefly, is the notion that law enforcement will be cut off from vital sources of evidence because of strong encryption. FBI Director James Comey, among others, has called for vendors to provide some alternate way for law enforcement—with a valid warrant—to bypass the encryption. On the other hand, most non-government cryptographers feel that any possible “exceptional access” mechanism is unreasonably dangerous.

The problem Apple wanted to solve is this. Suppose that you have some sort of iToy—an iPhone, an iPad, etc.—or Mac. These systems allow you to back up your keychain to Apple’s iCloud service, where it’s protected by your AppleID (an email address) and password. If you buy a new device from Apple, your keychain can be downloaded to it once you log on to iCloud. (Note: uploading keys to iCloud is optional and disabled by default, though you are prompted about enabling it during device setup.)

That’s a fine notion, and very consumer-friendly: people want to be able to back up their devices securely (remember that iToys themselves are strongly encrypted), and to recover their information if a device is lost or stolen. The trick is doing this securely, and in particular guarding against brute-force attacks on the PIN or password. To do this, iOS uses a “Secure Enclave”—a special co-processor that rate-limits guesses and (by default) erases the phone after too many incorrect guesses. (The details are complex; see Krstić’s talk.) The problem is this: how do you ensure that level of protection for keys that are stored remotely, when the attacker can just hack into or subpoena an iCloud server and guess away? Apple’s solution is even more complex (again, see Krstić’s talk), but fundamentally, Apple relies on a Hardware Security Module (HSM) to protect these keys against guessing attacks. It’s supposed to be impossible to hack HSMs, and while they do have master keys that are written to smartcards, Apple solved this problem very simply: they ran the smartcards through a blender…
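
To make the guess-limiting idea concrete, here is a toy model of an enclave-style guess counter in Python. Everything here is my simplification, not Apple’s design: the class, MAX_ATTEMPTS, and the erase-after-ten policy are illustrative, and the real Secure Enclave also imposes escalating delays between attempts, which I omit.

```python
import hmac
from typing import Optional

MAX_ATTEMPTS = 10  # illustrative budget; the real enclave policy is more nuanced

class EnclaveModel:
    """Toy model of an enclave-style guess limiter: the protected secret is
    released only for the correct passcode and destroyed after too many
    consecutive failures. Escalating inter-attempt delays are omitted."""

    def __init__(self, passcode: str, protected_secret: bytes):
        self._passcode = passcode
        self._secret: Optional[bytes] = protected_secret
        self._failures = 0

    def try_unlock(self, guess: str) -> Optional[bytes]:
        if self._secret is None:
            raise RuntimeError("erased: guess budget exhausted")
        if hmac.compare_digest(guess, self._passcode):  # constant-time compare
            self._failures = 0
            return self._secret
        self._failures += 1
        if self._failures >= MAX_ATTEMPTS:
            self._secret = None  # key material gone; the data is unrecoverable
        return None
```

The point of the question above is that nothing like this object can protect a key sitting in an ordinary server-side database: an attacker who copies the database bypasses the counter entirely. That is why Apple pushes the counting into an HSM.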

So: it would seem that this solves the Going Dark problem. Instead of destroying these smartcards, suppose that Apple stored one copy in Tim Cook’s safe and another in James Comey’s. Problem solved, right? Not so fast.

Unfortunately, solving Going Dark can’t be done with a simple piece of code in one place. It’s many different problems, each of which needs its own secure solution; furthermore, the entire system—the set of all of these solutions, and the processes they rely on—has to be secure, as well as the code and processes for combining them.

The first part is the cryptographic protocols and code to implement the key upload functions. As I mentioned, these mechanisms are exceedingly complex. Although I do not know of any flaws in either the protocols or the code, I won’t be even slightly surprised by bugs in either or both. This stuff is really hard to get right.
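For intuition about why guess-limiting hardware is needed at all, here is a deliberately naive upload scheme: wrap the keychain under a key derived from the user’s password, then store the blob. This is not Apple’s protocol; the names and parameters below are mine, and the sketch assumes the third-party cryptography package. It exists only to show the flaw the HSM fixes.

```python
import os
import hashlib
from cryptography.hazmat.primitives.ciphers.aead import AESGCM  # pip install cryptography

def wrap_keychain(password: str, keychain_blob: bytes) -> dict:
    """Naive scheme: derive a wrapping key from the password, encrypt, upload."""
    salt = os.urandom(16)
    kek = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000, dklen=32)
    nonce = os.urandom(12)
    return {
        "salt": salt,
        "nonce": nonce,
        "ciphertext": AESGCM(kek).encrypt(nonce, keychain_blob, None),
    }
```

The weakness: anyone who obtains the stored record can run the same key derivation offline. A four-digit PIN has only 10,000 candidates, so even an expensive KDF falls quickly; no amount of client-side cryptography fixes that without something, like the HSM, that counts and refuses guesses.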

The next step is protecting the master keys. Apple’s solution—a blender—is simple and elegant, and probably reliable. If we want some sort of exceptional access, though, we can’t do that: these smartcards have to exist. Not only must they be protected when not in use, they must be protected when in use: who can get them, what can be decrypted, how the requests are authenticated, what to do about requests from other countries, and more. This isn’t easy, either; it only seems that way from 30,000 feet. Apple got out of that game by destroying the cards, but if you want exceptional access, that doesn’t work.
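For a sense of what protecting the cards would involve if they had to exist, a standard building block is to split the master key so that no single custodian, and no single stolen card, suffices; Shamir’s secret sharing is the classic construction. Apple’s talk doesn’t describe their admin cards in these terms, and the code below is my illustration of the technique, not their mechanism. (It needs Python 3.8+ for the modular inverse via pow.)

```python
import secrets

# Field prime for the arithmetic; 2**127 - 1 is a Mersenne prime. A real
# deployment would pick a prime larger than any key it needs to share.
PRIME = 2**127 - 1

def _eval_poly(coeffs, x):
    """Evaluate a polynomial (constant term first) at x, mod PRIME (Horner)."""
    acc = 0
    for c in reversed(coeffs):
        acc = (acc * x + c) % PRIME
    return acc

def split_secret(secret, threshold, n_shares):
    """Split `secret` into n_shares points on a random polynomial whose
    constant term is the secret; any `threshold` points recover it."""
    coeffs = [secret] + [secrets.randbelow(PRIME) for _ in range(threshold - 1)]
    return [(x, _eval_poly(coeffs, x)) for x in range(1, n_shares + 1)]

def recover_secret(shares):
    """Lagrange interpolation at x = 0 yields the constant term (the secret)."""
    total = 0
    for xi, yi in shares:
        num, den = 1, 1
        for xj, _ in shares:
            if xj != xi:
                num = (num * -xj) % PRIME
                den = (den * (xi - xj)) % PRIME
        total = (total + yi * num * pow(den, -1, PRIME)) % PRIME
    return total

# Demo: a 3-of-5 split, as one might do across five custodians' smartcards.
master = secrets.randbelow(PRIME)
shares = split_secret(master, threshold=3, n_shares=5)
assert recover_secret(shares[:3]) == master   # any three shares suffice
assert recover_secret(shares[2:]) == master
```

Note that this only addresses at-rest custody; the hard parts listed above, authenticating requests and deciding whose requests to honor, are procedural, and no amount of mathematics makes them go away.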

There’s another risk, though, one that Apple still runs: are the HSMs really secure? The entire scheme rests on the assumption that they are, but is that true? We don’t know, but research suggests that they may not be. HSMs are, after all, computers, and their code and the APIs to them are very hard to get right. If there is a flaw, Apple may never know, but vital secrets will be stolen.

Was Apple wrong, then, to build such an elaborate system? Clearly, no lesser design would have met their requirements: being able to recover old backups with a password as the only authentication mechanism, while keeping a strict limit on password-guessing. But are those requirements correct? Here’s where life gets really tricky. Apple is a consumer device company; their prime goal is to make the customers happy—and customers get really unhappy when they lose their data. There are more secure designs possible, if you give up this remote recovery requirement, but those are more appropriate for defense gear, for situations where it’s better to lose the data than to let it be compromised by the enemy. Apple’s real problem is that they’re trying to satisfy consumer needs while still defending against nation-state adversaries. I hope they’ve gotten it right—but I won’t be even slightly surprised if they haven’t.

By Steven Bellovin, Professor of Computer Science at Columbia University

Bellovin is the co-author of Firewalls and Internet Security: Repelling the Wily Hacker, and holds several patents on cryptographic and network protocols. He has served on many National Research Council study committees, including those on information systems trustworthiness, the privacy implications of authentication technologies, and cybersecurity research needs.
