Experts Urge Congress to Reject DNS Filtering from PROTECT IP Act, Serious Technical Concerns Raised

By CircleID Reporter

A group of leading DNS experts has released a paper, "Security and Other Technical Concerns Raised by the DNS Filtering Requirements in the PROTECT IP Bill," detailing serious concerns over the proposed DNS filtering requirements included in the bill recently introduced in the U.S. Senate, the Preventing Real Online Threats to Economic Creativity and Theft of Intellectual Property Act of 2011 ("PROTECT IP Act").

The group, which is urging lawmakers to reconsider enacting such a mandate into law, includes leading domain name system (DNS) designers, operators, and researchers responsible for numerous RFCs (technical design documents) for DNS, for the publication of many peer-reviewed academic studies on the architecture and security of the DNS, and for the operation of important DNS infrastructure on the Internet.

The paper details a number of serious technical and security concerns that would arise should mandated DNS filtering be enacted into law.

The group insists that "the goals of PROTECT IP can be accomplished without reducing DNS security and stability, through strategies such as better international cooperation on prosecutions and the other remedies contained in PROTECT IP other than DNS-related provisions."

To download the paper, titled "Security and Other Technical Concerns Raised by the DNS Filtering Requirements in the PROTECT IP Bill," click here [PDF].

Authors of the paper:

Steve Crocker (CircleID) is CEO of Shinkuro, Inc. a security-oriented consulting and development company, and has been leading Shinkuro's work on deployment of DNSSEC, the security extension to DNS. He currently serves as vice chair of the board of ICANN and served as chair of ICANN's Security and Stability Advisory Committee from its inception in 2002 until 2010. He has been active in the Internet community since 1968 when he helped define the original set of protocols for the Arpanet, founded the RFC series of publications and organized the Network Working Group, the forerunner of today's Internet Engineering Task Force (IETF). He later served as the first Area Director for Security in the IETF. Over his forty plus years in network research, development and management, he has been an R&D Program Manager at DARPA, senior researcher at University of Southern California's Information Sciences Institute, Director of Aerospace Corp's Computer Science Laboratory, vice president of Trusted Information Systems, co-founder, senior vice president and CTO of CyberCash, Inc. and co-founder and CEO of Longitude Systems, Inc.

David Dagon is a post-doctoral researcher at Georgia Institute of Technology studying DNS security and the malicious use of the domain resolution system. He is a co-founder of Damballa, an Internet security company providing DNS-based defense technologies. He has authored numerous peer-reviewed studies of DNS security, created patent-pending DNS security technologies, and proposed anti-poisoning protocol changes to DNS.

Dan Kaminsky (CircleID) has been a noted security researcher for over a decade, and has spent his career advising Fortune 500 companies such as Cisco, Avaya, and Microsoft. Dan spent three years working with Microsoft on their Vista, Server 2008, and Windows 7 releases. Dan is best known for his work finding a critical flaw in the Internet's Domain Name System (DNS) and for leading what became the largest synchronized fix to the Internet's infrastructure of all time. Of the seven Recovery Key Shareholders who possess the ability to restore the DNS root keys, Dan is the American representative. Dan is presently developing systems to reduce the cost and complexity of securing critical infrastructure.

Danny McPherson (CircleID) is Chief Security Officer for VeriSign where he is responsible for strategic direction, research and innovation in infrastructure, and information security. He currently serves on the Internet Architecture Board (IAB), ICANN's Security and Stability Advisory Committee, the FCC's Network Reliability and Interoperability Council (NRIC), and several other industry forums. He has been active within the Internet operations, security, research, and standards communities for nearly 20 years, and has authored a number of books and other publications related to these topics. Previously, he was CSO of Arbor Networks, and prior to that held technical leadership positions with Amber Networks, Qwest Communications, Genuity, MCI Communications, and the U.S. Army Signal Corps.

Paul Vixie (CircleID) founded Internet Systems Consortium in 1996 and served as ISC's President from 1996 to 2011 when he was named Chairman and Chief Scientist. Vixie was the principal author of BIND versions 4.9 to 8.2, which is the leading DNS server software in use today. He was also a principal author of RFC 1996 (DNS NOTIFY), RFC 2136 (DNS UPDATE), and RFC 2671 (EDNS), coauthor of RFC 1876 (DNS LOC), RFC 2317 (DNS for CIDR), and RFC 2845 (DNS TSIG). Vixie's other interests are Internet governance and policy, and distributed system security.

Additional reading:
On Mandated Content Blocking in the Domain Name System, Paul Vixie

Updates (May 26, 2011 1:42 PM PDT):
Ron Wyden: Puts Hold On PROTECT IP, Temporarily Withdraws Amendment On The PATRIOT Act Techdirt, May.26.2011
Wyden Places Hold on Protect IP Act Press Releases, May.26.2011
Wyden Vows To Again Block Leahy Anti-Online Piracy Bill National Journal, May.26.2011
Senate panel OKs controversial antipiracy bill CNet, May.26.2011

Related topics: Cybercrime, Cybersecurity, DNS, DNS Security, Internet Governance, Law, Policy & Regulation


Expert views differ Phillip Hallam-Baker  –  May 27, 2011 6:48 PM PDT

The idea that the DNS cannot possibly survive unless every Internet client receives the same data is quite a common one but completely mistaken.

Similar objections were raised decades ago when enterprises connecting to the Internet deployed firewalls and split DNS. They were proved wrong. Split DNS is in fact a necessary security measure without which most enterprises would not connect to the Internet at all. The same is true of firewalls: the fact that firewalls do not provide perfect security does not mean they have no value at all.

DNSSEC as currently proposed is built around the assumption that every Internet client should be connected to the ICANN DNS. That is not a position that some of us have ever accepted. I have no interest in allowing any of my machines to connect to hosts operated by the Russian mafia. Hence I have no desire to allow them to resolve DNS names that have been identified as belonging to the Russian mafia.

The risk of allowing such connections is a known quantity and is significant. ICANN does not and should not curate the DNS. Thus connecting to the ICANN DNS represents a risk that some of us wish to mitigate.

DNS spoofing is also a risk of concern to me, but in fact a smaller one than the legitimately registered DNS domains that are used for fraud. There are existing controls available to prevent DNS spoofing, SSL certificates being one such control.

DNSSEC as envisaged by Crocker, Vixie et al. forces us to choose between controlling one risk or the other. That does not mean that we must or should accept DNSSEC in that particular form.

End-to-end DNSSEC has always been a misguided concept. The problem is that the ends of the protocol interaction do not match the ends of the trust relationship. Alice is not a Turing machine, and it is Alice, not the computer, who is the ultimate decider of policy for her computer systems. It is for Alice, not ICANN, to decide which DNS names may or may not be resolved by the hosts she owns.

Recognizing that the true end of the trust relationship is Alice, not the client host, leads to a different DNSSEC architecture, one in which Alice decides which DNS resolver to trust, and processing of DNSSEC information is performed in the resolver rather than in the endpoint client. Realizing such an architecture securely requires a practical means of securing the communication between the client and the resolver. Many people have suggested this step over the past ten years, during which time the 'experts' have invariably replied that they are too far down the current course to consider changing it.
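In this resolver-trust model, the stub client does not validate DNSSEC signatures itself; it only checks whether the trusted, validating resolver set the AD (Authenticated Data) bit in the response header, and relies on a secured channel to that resolver. A minimal sketch of that check, using a hand-built response header rather than live traffic (the flag layout follows RFC 1035 and RFC 4035; the function name is illustrative):

```python
import struct

# DNS header flag masks (RFC 1035 / RFC 4035)
QR = 0x8000  # message is a response
RD = 0x0100  # recursion desired
RA = 0x0080  # recursion available
AD = 0x0020  # Authenticated Data: validating resolver vouches for the answer

def resolver_vouches(packet: bytes) -> bool:
    """Return True if this DNS response carries the AD bit, i.e. the
    trusted validating resolver performed DNSSEC validation.
    Only meaningful when the channel to the resolver is itself secured."""
    if len(packet) < 12:
        raise ValueError("truncated DNS header")
    _ident, flags = struct.unpack("!HH", packet[:4])
    return bool(flags & QR) and bool(flags & AD)

# A fabricated response header: QR=1, RD=1, RA=1, AD=1, empty sections
hdr = struct.pack("!HHHHHH", 0x1234, QR | RD | RA | AD, 1, 1, 0, 0)
print(resolver_vouches(hdr))  # → True
```

The point of the sketch is architectural: all cryptographic work lives in the resolver Alice chose, and the client's trust decision collapses to one bit plus the integrity of the client-resolver channel.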

Fortunately the Congressional proposal will bring this issue to a head but not for the reason or with the effect that these 'experts' imagine.

The reason to object to the proposals is not that they are technically infeasible. That claim is nonsense and, in any case, a weak argument.

The reason to object is that government censorship is an odious practice and the commercial interests of copyright holders are not a sufficient reason to give government the power to license the press.

And the correct way to defeat such odious proposals is not to smugly declare their technical deficiencies; it is to develop technologies that make the Internet resistant to this form of control.

That is why we need to move from the current model, in which Alice takes the nearest DNS service and implicitly trusts it to deliver bits correctly, to a model in which Alice chooses a DNS resolution service she trusts to implement her view of security and establishes a secure connection to it, one capable of bypassing attempted government controls by whatever means are necessary.

Re: expert views differ Paul Vixie  –  May 29, 2011 7:13 AM PDT

Hallam-Baker's response demonstrates several misconceptions.

1. On Split DNS.  When I consulted with the authors of RFC 1597 (so, in 1994 or so) on the two DNS related paragraphs of their "Operational Considerations" section I knew full well that split DNS was the only way we were going to make private addressing work.  As the main author of BIND during those years, the need for "split DNS" strongly influenced my thinking about the "views" feature that we included in BIND9 (so, in 1999 or so.) It would be a mistake therefore to interpret this whitepaper as being ignorant of or in opposition to deliberate differences in DNS responses for different DNS clients.
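The "views" feature mentioned here lets one BIND9 server give different answers to different clients, which is exactly what split DNS requires. A minimal named.conf sketch, with file names and network ranges purely illustrative:

```
// Internal clients see the full zone, including RFC 1918 addresses.
view "internal" {
    match-clients { 10.0.0.0/8; localhost; };
    zone "example.com" {
        type master;
        file "zones/example.com.internal";
    };
};

// Everyone else sees only the public hosts.
view "external" {
    match-clients { any; };
    zone "example.com" {
        type master;
        file "zones/example.com.external";
    };
};
```

Views are matched in order, so the more specific internal view must precede the catch-all external one.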

2. On Mandated Blocking.  When I created the DNS RPZ technology I knew full well that anyone on the Internet could and should control their content experience up to and including filtering out traffic from some places or erasing DNS names they considered dangerous.  It would be a mistake therefore to interpret this whitepaper as being ignorant of or opposed to DNS blocking by private right of action.  What I'm worried about is mandated blocking.
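DNS RPZ expresses such private-right-of-action policy as an ordinary zone that a recursive server subscribes to. A sketch of an RPZ zone file, with the blocked name and zone apex invented for illustration (per the RPZ convention, CNAME "." rewrites the answer to NXDOMAIN):

```
$TTL 300
@   SOA  rpz.example.net. admin.example.net. 1 3600 600 86400 60
    NS   ns.example.net.

; Force NXDOMAIN for a name the operator considers dangerous,
; and for everything beneath it.
badsite.example.com      CNAME .
*.badsite.example.com    CNAME .
```

The operator of the recursive server chooses whether to load this zone at all; that choice is precisely the difference between blocking by private action and blocking by mandate.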

3. On DNSSEC.  Having worked on Secure DNS in all of its forms since 1996 or so both as a participant in IETF and as a programmer and later as executive supporting other programmers, I am keenly aware of its limitations.  DNSSEC is not capable of ramming "ICANN DNS" content down somebody's throat, DNS blocking by private right of action will always be possible.  It would be a mistake therefore to interpret this whitepaper as saying otherwise.

4. On End-to-End.  The Internet architecture contemplates a "network of networks", some tightly coupled and some loosely.  A network of networks which is disconnected from "The Internet" can operate without interference, including operating its own DNS and DNSSEC infrastructure.  It's only when such a network of networks wants to connect to some other network of networks that the operational considerations of universal naming will apply.  For example there can only be one holder of a particular DNS name within each sphere of internet connectivity.  It would be a mistake to interpret this whitepaper as originating these ideas, and I refer interested readers to RFC 2826.

While I wish Hallam-Baker the best possible success with his company's products and services offered as alternatives to the DNSSEC design, I'm also keenly aware that the authors of this whitepaper have no commercial interests in our stated position.

Oh I forgot Phillip Hallam-Baker  –  May 29, 2011 2:41 PM PDT

Oh I forgot that everyone's motives are to be considered suspect except for those of Mr ultra-pure, never made a dollar from the DNS Vixie.

Projection is a really ugly thing to see. No, Paul, you are not without commercial motives, and neither are your co-authors. Plus I will point out that I was making exactly the same points while I was retired from the business entirely.

My point was that there is absolutely no technical obstacle to the type of blocking being proposed. The backers of the proposal in Congress are well aware of that fact and can point to abundant examples (those of Mr Vixie's activities included) of the type of blocking they are proposing.

The issue here is not the technical feasibility, it is political, as Mr Vixie admits.

Thus it is immensely counterproductive for his group to be polluting a solid case on first amendment and freedom of speech grounds with a clearly spurious technical claim that even the authors can't sustain in public.

>The idea that the DNS cannot possibly Charles Christopher  –  May 29, 2011 9:39 AM PDT

>The idea that the DNS cannot possibly survive unless every Internet
>client receives the same data is quite a common one but completely mistaken.

That statement is well worth repeating many times.

As with most internet technologies, when I wanted to learn I consulted the RFCs etc., and as always I found that the organic evolution of the internet is NOT articulated in any RFC or document anywhere.

So my solution to learning how DNS worked was, as usual, write my own DNS server FROM SCRATCH. In other words throw myself into the chaos that is the otherwise undocumented de-facto operation of DNS.
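Writing even a toy server makes the wire format concrete fast. A minimal sketch (not Mr. Christopher's code) of decoding the question section of a raw DNS query per RFC 1035: a 12-byte header, then length-prefixed labels, then QTYPE and QCLASS:

```python
import struct

def parse_question(packet: bytes):
    """Decode the first question from a raw DNS query packet."""
    _ident, _flags, qdcount, *_ = struct.unpack("!HHHHHH", packet[:12])
    if qdcount < 1:
        raise ValueError("no question section")
    labels, pos = [], 12
    while True:
        length = packet[pos]
        pos += 1
        if length == 0:          # zero-length label terminates the name
            break
        labels.append(packet[pos:pos + length].decode("ascii"))
        pos += length
    qtype, qclass = struct.unpack("!HH", packet[pos:pos + 4])
    return ".".join(labels), qtype, qclass

# A hand-built query for "www.example.com", type A (1), class IN (1)
query = (struct.pack("!HHHHHH", 0xBEEF, 0x0100, 1, 0, 0, 0)
         + b"\x03www\x07example\x03com\x00"
         + struct.pack("!HH", 1, 1))
print(parse_question(query))  # → ('www.example.com', 1, 1)
```

Note this sketch skips name compression (pointer labels), which any real server must handle; hitting exactly those undocumented corners is the "chaos" the comment describes.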

There ain't no standards now, so there ain't no "standard response" now either.

In fact, I switched to my current ISP because they were the only one not performing filtering over the last few years. Since I manage a number of registrars, I need a clear view of the network. That has changed over the last year or so. Now even the ISP I use is "manipulating" DNS behavior, and not admitting to it.

And without going into too much detail, one set of domains being blocked by one of the largest US ISPs is registered to a lawyer. I told this lawyer to call the ISP up and complain. This was done, and the lawyer was told it was a DNS problem on the registrant's side ... A day later I watched that ISP UNBLOCK the domains, though there was NO CHANGE to ANY DNS settings ..... More than enough proof that there is no "standard DNS response" or behavior right now.

The very idea of "standard behavior" of DNS can't be entertained by anybody with real-world DNS battle scars .... Look at the proud quotes of "my network my rules" in the "Taking Back the DNS" thread.