Legal Controls on Extreme End-to-End Encryption (ee2ee)

One of the most profoundly disruptive developments in the cyber security arena today is the headlong rush by a set of parties to implement extreme End-to-End (e2e) encryption ubiquitously across communication networks using essentially unbreakable encryption technology. A notable example is a new version of Transport Layer Security (TLS) known as version 1.3. The activity proceeds largely in a single venue—the informal entity known as the Internet Engineering Task Force (IETF), where the proponents cite stolen highly classified documents as the basis for their efforts.

The generally understood objective of its zealous leaders is to cause everyone except the end parties of the communications services to “go dark”—impeding even the ability of network operators to manage their infrastructures and meet legal compliance obligations. The parties and organizations pursuing this activity generally share common interests born of cryptology competition, anti-government paranoia, and libertarianism—marketed as their own extreme notion of privacy. On the commercial side, the parties involved seek competitive commercial advantages for unimpeded Over-The-Top (OTT) services or e2e encryption products.

The assumption among these parties and organizations is largely that the potentially enormous adverse consequences are not their problem and that there are no legal consequences to their actions. This article, which is drawn from a larger treatise, examines some of the diverse legal mechanisms for controlling this activity, especially judicial “causes of action” potentially arising from existing and emerging case law that suggest the legal risk exposure of extreme e2e encryption zealots could be significant.

The technology

End-to-end encryption of communications is hardly new. The basic technology has existed since antiquity and has been adapted to every new communication technology over the past few millennia. What is new is the ubiquitous availability of extremely high-performance computational capacity at communication network end points such as contemporary smartphones and laptops, coupled with its exploitation by parties who don’t bear the disastrous consequences of its widespread implementation. The known adverse effects form a lengthy list: the inability of network operators to manage their infrastructure, diminished resilience and performance of networks, the uncontrollable proliferation of malware and other threat vectors, the inability to meet critical compliance obligations including the detection of insider threats, and global exploitation for criminal, cyberwar, and terrorist purposes.

Responsible commercial and intergovernmental industry technical venues have for decades adopted appropriate forms of Transport, Network, and Application Layer Security—rejecting extreme e2e encryption capabilities—and instituted alternative techniques that mitigate the adverse consequences and provide a balance among the competing design requirements. However, this balance seems unsatisfactory to encryption zealots who are hellbent on leading an extremist vanguard toward some nirvana of ultimate e2e encryption. Indeed, the push continues at the Singapore IETF meeting in November, notwithstanding that the implementations would likely be unlawful in that country under its Computer Misuse Act.

Non-judicial controls

The legal controls in this category include enforcement of treaty provisions, national organic law and regulations, and contractual requirements among providers and with enterprise or governmental customers.

The key provision that is dispositive in public international law is known as Article 34. It asserts that Nation-States have a sovereign “right to cut off, in accordance with their national law, any ... private telecommunications which may appear dangerous to the security of the State or contrary to its laws, to public order or to decency.” The provision has existed for 167 years and has been reaffirmed without reservations by every nation in the world, continually, in the face of every new technology.

There is flatly no “right” to unfettered personal encrypted communication on publicly available infrastructures and services. On the contrary, the Art. 34 treaty provision and its precursors are the basis for broad proscriptions against e2e encryption in many if not most nations, including active blocking mechanisms that detect the signature of the traffic. Although further treaty-based requirements concerning e2e encryption have not been formulated, they could see amplification under the aegis of enabling the Art. 34 provision within any of the several International Telecommunication Union (ITU) bodies. As the original home, more than two decades ago, of the Transport Layer Security Protocol and an array of other encryption specifications designed to meet treaty provision requirements, the ITU has a well-established basis for action today if necessary.
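
Such blocking typically keys on the on-the-wire signature of the protocol rather than on its payload. As a rough illustration only, the following is a minimal Python sketch of how a gateway might recognize the start of a TLS session and apply a policy decision to it; the function names and the policy hook are assumptions for exposition, not drawn from any deployed national or carrier system.

```python
# Minimal sketch: recognizing the on-the-wire signature of a TLS session so a
# policy (log, throttle, block) can be applied. A TLS connection begins with a
# handshake record (first byte 0x16), a 3.x protocol version, and a ClientHello
# handshake message (type 0x01) immediately after the 5-byte record header.

def looks_like_tls_client_hello(payload: bytes) -> bool:
    """Return True if the start of a TCP payload matches a TLS ClientHello."""
    if len(payload) < 6:
        return False
    content_type = payload[0]
    version_major, version_minor = payload[1], payload[2]
    handshake_type = payload[5]
    return (
        content_type == 0x16                 # TLS record type: handshake
        and version_major == 0x03            # SSL 3.0 / TLS 1.x family
        and version_minor in (0, 1, 2, 3, 4)
        and handshake_type == 0x01           # ClientHello
    )

def apply_policy(payload: bytes) -> str:
    """Hypothetical policy hook: block sessions whose signature matches TLS."""
    return "block" if looks_like_tls_client_hello(payload) else "allow"
```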

Similar global technical specification requirements among law enforcement agencies have long existed and continue to be updated yearly. Today, communication service providers must assist in making decrypted communications available when lawfully compelled by government authorities, and the requirements for wireline and mobile services are implemented within almost all industry technical standards bodies.

The concerns and avenues of legal redress have also been amplified recently by the U.S. Deputy Attorney General, who noted: “Technology companies almost certainly will not develop responsible encryption if left to their own devices. Competition will fuel a mindset that leads them to produce products that are more and more impregnable. That will give criminals and terrorists more opportunities to cause harm with impunity. Sounding the alarm about the dark side of technology is not popular. Everyone who speaks candidly about ‘going dark’ faces attacks by advocates of absolute privacy.”

Other important non-judicial controls on e2e encryption are implemented through contractual requirements—especially for cloud data centres. Contract provisions can either require standardized capabilities to enable trusted exposure of e2e encrypted communications or block such communications entirely. For example, in essentially all enterprise network implementations—especially for governmental use—private individual e2e uses are broadly proscribed, and indeed, any such use is a prima facie indicator of a security threat.

A significant new global initiative to responsibly manage e2e encryption, known as the Middlebox Security Protocol (MSP), consists of a set of new Technical Specifications plus a report developed in the European Telecommunications Standards Institute (ETSI) in collaboration with an array of other industry and scholarly bodies.
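
Conceptually, the middlebox pattern this work addresses places a trusted network element between the end points: the element terminates the client's encrypted session under an enterprise-controlled certificate, applies compliance and security policy to the plaintext, and re-originates encryption toward the far end. The following is only a generic sketch of that pattern in Python, not the ETSI MSP specification itself; the file names, the inspect() hook, and the one-directional relay are illustrative assumptions.

```python
# Generic sketch of an enterprise TLS middlebox (not the ETSI MSP specification):
# terminate the client's TLS session, inspect the plaintext for compliance, and
# re-originate TLS toward the intended server. Relays one direction for brevity.

import socket
import ssl

ENTERPRISE_CERT = "gateway-cert.pem"   # assumed: a chain trusted by managed clients
ENTERPRISE_KEY = "gateway-key.pem"

def inspect(data: bytes) -> bytes:
    """Placeholder compliance hook: log, scan for malware/DLP, or drop."""
    print(f"inspected {len(data)} plaintext bytes")
    return data

def relay(client_sock: socket.socket, upstream_host: str) -> None:
    # Terminate the client's TLS session at the gateway.
    inbound = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    inbound.load_cert_chain(ENTERPRISE_CERT, ENTERPRISE_KEY)
    client_tls = inbound.wrap_socket(client_sock, server_side=True)

    # Re-originate a separate TLS session toward the real server.
    outbound = ssl.create_default_context()
    upstream_tls = outbound.wrap_socket(
        socket.create_connection((upstream_host, 443)),
        server_hostname=upstream_host,
    )

    # Relay client -> server, exposing the plaintext to the compliance hook.
    while True:
        chunk = client_tls.recv(4096)
        if not chunk:
            break
        upstream_tls.sendall(inspect(chunk))
```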

Judicial controls

Judicial legal controls on e2e encryption exist in several forms—both criminal and civil. It is not apparent that criminal measures, which could include criminal conspiracy or being an accessory to a crime, have yet been pursued. Both criminal causes of action are potentially available. It is civil causes of action, however, that have become prominent in recent litigation. These include both tort and violation of the antitrust provisions of the Sherman Act. Coupled with the tort liability is the increasing likelihood that insurers will raise premiums or outright deny coverage for those engaging in irresponsible e2e encryption, treating it as an activity with increased financial risk exposure.

The question of civil liability for end-to-end encryption became the subject of active discussion in a seminal two-part Lawfare article in 2015. As the article notes, “thinking through liability can be a useful way of thinking through how society wants to allocate risk. And one way of thinking about the regulation (or lack thereof) of end-to-end encryption is to ask who, if anyone, should pay when things go horribly wrong.” The article also notes that Judge Posner’s advancement of notions of proximate causation furthers the potential for culpability.

This civil liability control continues to be pursued in several recent cases growing out of terrorist incidents. The cases typically argue that the defendants (variously Facebook, Twitter, and Google) have liability for: 1) aiding and abetting acts of international terrorism, 2) conspiring in those acts, 3) providing material support and resources, and 4) negligent infliction of harm, including wrongful death. Some of the litigation has been dismissed, albeit not without concern being expressed by both the judges involved and legal scholars.

Some of the cases remain ongoing. It seems only a matter of time before one of these cases proceeds to jury trial and results in significant damage awards. In the meantime, the litigation costs are significant. Providers, organizations, and individuals advancing extreme forms of e2e encryption that are almost certain to aid and abet multiple forms of terrorism, criminal activities, and infrastructure harm seem likely to face civil complaints for resulting damages in the U.S. and other jurisdictions worldwide.

Another recent relevant legal development acting as a control is the case of Trueposition v. Ericsson and other companies in the context of standards-setting activities. Here the complaint, brought under the antitrust provisions of the Sherman Act, alleged that some of the participants in the standards process, including those in leadership positions, engaged in a conspiracy to harm Trueposition’s ability to compete in the marketplace. The case overcame various challenges and eventually resulted in a settlement in which no wrongful actions were admitted.

The case did, however, shake the standards community into a realization that there are potential consequences to its activities. In addition to the far-reaching implications concerning the court’s extraterritorial jurisdiction over a standards-making body, discussed below, the case advanced an additional viable control on pursuing irresponsible e2e encryption that potentially causes significant adverse harm to telecommunication transport service providers and vendors. The likelihood of an antitrust complaint here is enhanced because the e2e encryption developments arguably benefit OTT providers significantly, to the detriment of the underlying carriers.

Jurisdictional issues and venue liability

Until recently, the organizations which supported the discussions of network technology standards and the participants considered themselves largely immune from civil liability. That changed in 2012 with the Trueposition litigation initiated in U.S. Federal Court against several companies, including the standards venues the European Telecommunications Standards Institute (ETSI) and the Third Generation Partnership Project (3GPP). The complaint involved alleged anticompetitive conduct that ensued within the standards-making processes. After several years of litigation costing the parties many millions of dollars, the court held that there was a basis for jurisdiction even though ETSI was based in France. The parties entered into a settlement agreement recognizing that those acting in a standards-making setting can be held liable for wrongful actions occurring in that setting.

Those participating in the IETF—which only exists as a kind of virtual umbrella of individuals—face even greater exposure. Unlike a normal standards body like ETSI or 3GPP, the IETF does not exist as a legal entity. It is asserted that participants act as individuals, and several non-profit corporations provide supporting services.

Thus, there appears to be no actual antitrust policy or rules—only a kind of guide for conduct. There is also no legal entity to reduce the exposure of individuals for technical specifications that subsequently result in significant harm. To the extent that civil tort liability exists for initiatives led in the IETF and adopted among the participants, including those in the pseudo-leadership positions, it is those individuals (and possibly their employers) who would appear to bear the culpability. The IETF Trust purchases liability insurance for the Trust and its Trustees for the purpose of holding the IPR. For those playing IETF roles, the Internet Society provides liability insurance and a promise of legal support for their activities. Individuals, however, would appear to participate at their own risk for the potential consequences of their proffered specifications.

Potential actions

There is a kind of simplistic, self-referential zeal among some in venues like the IETF who bandy about terms like privacy to justify technical platform actions that have extreme adverse consequences—believing they are the ultimate authorities in determining the righteousness of their actions and that this imparts legal immunity. This activity, however, exists within a larger ecosystem of legal controls which are rapidly evolving. It is the legal systems in our societies that balance consequences and determine responsibilities, not self-appointed technical groups.

There are three potential sets of legal controls that are emerging with respect to those who are developing, promoting, and implementing extreme end-to-end encryption (ee2ee) capabilities:

• Intergovernmental, Nation-State, and service provider proscription of these actions
• Litigation by adversely affected parties against those entities and individuals in pursuit of compensation for resulting damages
• Adjustment of the insurance coverage provided by insurers to deny protection to those entities and individuals

By Anthony Rutkowski, Principal, Netmagic Associates LLC

The author has for many years been a leader in international cybersecurity bodies developing global standards and legal norms.

Comments

Extremist and proud – Jeremy Malcolm – Oct 24, 2017 4:29 PM

Extremist and proud

Pride is good – Anthony Rutkowski – Oct 24, 2017 5:16 PM

The issues here, however, are who gets to decide what is allowed on public infrastructures or as offerings, and how the risks are shared.  The answer to the first is Nation-States.  The answer to the second is our legal systems. If someone offers a public product or service and significant harm occurs, that entity is potentially liable in sharing the risk. C’est simple.

Sorry Anthony, things, techniques and technologies, can – Charles Christopher – Oct 24, 2017 5:42 PM

Sorry Anthony, things, techniques and technologies, can never be good or bad, they just are. It’s intent that is the difference. To deprive good lawful people of creative expression and innovation is staggering, but it does seem to be the way the world is going. And that is truly sad, for it makes clear the “terrorists” have in fact won their war; they are in control of our minds, as fear is the primary feeling of too many GOOD PEOPLE today.

The idea that some encryption is “just encryption” and some is “extreme encryption” boggles my mind. Foundationally, the encryption I use is either secure, or it’s not. It’s either encryption that is secure, or it’s just a fictional wrapper to sell me an illusion of security. To create something that has known exploits, and expect people to use it (without pointing a gun at them) because it has a label that reads "secure", is also mind boggling. For an algorithm to be called “secure” there can be no grey on this point. Control exists only at the user endpoints, or there is no security.

>The provision has existed for 167 years

Thought experiment: At the beginning such cooperation was needed to work together to build a global network. Now take away all treaties, all provisions, all pretty pieces of paper with pretty writing on them. When you do, is anybody TODAY going to turn their network off? No …. Not a chance of it. Unless they are suicidal regarding their economy and basic infrastructure. At some point the desire / demand for centralization became obsolete. Last one out, please remember to turn off the lights …..

>If someone offers a public product or service and significant harm
>occurs, that entity is POTENTIALLY liable in sharing the risk.

There are good people, there are bad people. Good people should NEVER live in fear of bad people. Bad people should ALWAYS live in fear of good people; it’s a great motivator to change one’s ways and be good. So for good people and companies, until they rise to the level of GOD, no one can predict all future use of their product or service. To be expected to be a G-rand O-mnipotent D-esigner does only one thing: it causes the DEATH of innovation. Everyone lives in fear of being different. Terrorists win as we all live in fear of each other, and seek "The Central Scrutinizer" to save us: https://www.youtube.com/watch?v=w-0VFbJamSY

Forty years later perhaps the media has changed, but the illusion has not.

This is not about bad or good – Anthony Rutkowski – Oct 24, 2017 6:59 PM

In my ee2ee article, we are treating the completely cloaked transport of potentially harmful content (e.g., malware or terrorist instructions) using publicly available infrastructure and services between endpoints on that infrastructure. The law has generally held that, given a lawful order, those offering those services or operating the infrastructure must have the ability to “expose” the content in some fashion.

It is not apparent how that requirement abrogates anything in the U.S. Constitution or any rational enumeration of rights. We’re not dealing here with anything static in your home or on your phone, but rather with something that is delivered to you via a transportation infrastructure, and with the ability to observe, in some fashion, what it is while it is moving on the network.

The article treats the legal systems that decide who bears the risk when someone provides an impenetrable cloaking mechanism and significant harm occurs.  If someone wants to creatively express themselves with such a cloaking mechanism in this context, they should be willing to assume the consequences when harm occurs.

This is a very narrow context.

No it isn't a narrow context. As – Todd Knarr – Oct 25, 2017 6:18 PM

No it isn't a narrow context. As was said, encryption is either secure (cannot be broken by a third party with less effort than a brute-force iteration of all possible key values) or it isn't. If any third party (i.e. anyone not either the sender or the recipient) can get at the clear-text content of an encrypted transmission, then the encryption isn't secure and any third party can get the clear-text content. This is where the problem lies, at the heart of cryptography itself: there is no distinction between specific third parties. That's why in textbooks on it (e.g. "On Cryptography") you see nonce names used for the various parties, to emphasize that we don't care about the exact identity of the parties but only their positions in the communication.

And while I know of plenty of laws that say that a carrier has to divulge the content of communications if law enforcement asks, I know of no laws that say that I as a private citizen must give the carrier content which law enforcement can understand if it's divulged. The channel doesn't matter. Whether it's bits over a network connection or a letter sent through the US Postal Service, no law (at least in the US) says I must send clear text or the equivalent and can't write in code or even just write random garbage (an ideal encryption algorithm produces results indistinguishable from random garbage, so any ban on secure encryption would by necessity ban random garbage in the process, as there's no feasible way to distinguish between them).

au contraire. – Anthony Rutkowski – Oct 25, 2017 7:00 PM

Encryption is not binary (no pun intended). There are innumerable variables, which is in fact demonstrated by the current controversy over the push from the IETF's TLS 1.2 to 1.3. The network operator has a right to manage the traffic transported on its infrastructure and to protect it. When presented with a lawful order to assist in providing unencrypted content, it must comply to the extent possible; and there are many ways to do that. There are many laws in the U.S. that restrict encrypted messages on enterprise networks. Restrictions on the use of encryption are determined by the access network context. It is quite feasible to distinguish between the signature of random garbage and any standard secure encryption. The desired end-state here is not to ban ee2ee, but to instantiate a standard means whereby some level of exposure of the content can occur when lawfully compelled. It is a narrow context.
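
On the signature point, a minimal illustration only (a sketch, not any deployed classifier): the payload of a modern cipher looks random, but a standard protocol such as TLS wraps that payload in structured framing (record type, version, length) that a network element can recognize, whereas a stream of raw random bytes will almost never parse as a valid record sequence.

```python
# Sketch only: a classifier that accepts byte streams structured as TLS records
# and rejects unstructured random bytes. The payload inside each record is
# random-looking either way; it is the protocol framing that gives a signature.

import os

TLS_RECORD_TYPES = {0x14, 0x15, 0x16, 0x17}   # CCS, alert, handshake, application data

def parses_as_tls_records(stream: bytes) -> bool:
    """Walk the stream as a chain of TLS records, rejecting malformed framing."""
    offset = 0
    while offset + 5 <= len(stream):
        record_type = stream[offset]
        major, minor = stream[offset + 1], stream[offset + 2]
        length = int.from_bytes(stream[offset + 3:offset + 5], "big")
        if record_type not in TLS_RECORD_TYPES or major != 0x03 or minor > 0x04:
            return False
        offset += 5 + length
    return offset > 0 and offset == len(stream)

print(parses_as_tls_records(os.urandom(2048)))   # random garbage: almost certainly False
```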

For the difference between a secure encrypted – Todd Knarr – Oct 25, 2017 7:43 PM

For the difference between a secure encrypted message and random garbage, I suggest familiarizing yourself with the standard reference on cryptography for software engineers: https://www.schneier.com/books/applied_cryptography/

And the push for TLS 1.3 is primarily to remove many of the older insecure algorithms. That’s precisely because encryption security is binary. Those algorithms are all vulnerable to decryption by third parties, which makes them unsuitable for any use in TLS. Either encryption is secure (cannot be decrypted by anyone other than the sender or receiver) or it isn’t and it can be decrypted by any third party, there isn’t any middle ground between those two states. Again, Schneier covers why this is and the mathematical theory behind it quite well.

Yes, stick with Bruce – Anthony Rutkowski – Oct 25, 2017 8:15 PM

:-)

If so, then you have a slight – Todd Knarr – Oct 27, 2017 10:57 PM

If so, then you have a slight problem in that he discusses exactly why an ideal encryption algorithm is equivalent to a random oracle. And the characteristic of a random oracle is that it produces a unique random output from its output space for any given input (note that salts are considered part of the input, not part of the algorithm). And modern encryption algorithms are very close approximations of an ideal algorithm (ones that aren't tend to be broken more easily and get replaced by better ones).

On a side note, the random oracle characteristic takes us into the distinction between a cipher and a code. Technically the output of a random oracle would be a code rather than a cipher, but one not vulnerable to a known-plaintext attack since each complete plaintext corresponds to a different code symbol. That leads directly into the proof that a one-time pad cipher can't be broken more efficiently than by the enumeration of all possible keys.
