DNS Resolution, Browsers & Hope For The Future

Andrew Sullivan

DNS is not something that most people think about when using the Internet. Neither should they have to: the DNS is just part of the infrastructure in the same way that IP addresses are. The only time a user ought to notice the DNS is when it breaks (and it should never break).

If that's true, then we ought to expect any Internet client — including web browsers — to use the very same infrastructure as everything else and for the DNS resolution mechanisms to be the ones offered by the operating system. What makes browsers different?

If we think about the way browsers work, we can see what the difference is.

First, browsers usually have a real human in front of them and people are very sensitive to any delay in getting what they want. In order to reduce such delays, many web browsers want control over the way the DNS is used. A shared facility from the operating system doesn't provide that control.

Second, native resolver libraries offer programmers a minimal interface called getaddrinfo(), a part of the POSIX standard. This interface blocks: if the application wants to resolve more than one name at a time, it needs to make several separate calls to getaddrinfo(). That technique can cause overloading on upstream resolvers, which might regard the many parallel requests as an attack (and stop responding).

When that happens, it looks (to the user) like the Internet connection is slow or has failed. Moreover, getaddrinfo() does not provide a way to know how long the DNS record is allowed to be cached (the time to live or TTL); so the application can't tell how long it is safe to use the result.
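The limitations above are easy to see in the interface itself. The following minimal sketch resolves a name with getaddrinfo(): the call blocks until the lookup finishes, and the struct addrinfo results it returns carry addresses but no TTL and no DNSSEC status, so the application has no idea how long it may safely cache them.

```c
#include <stdio.h>
#include <string.h>
#include <netdb.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <sys/socket.h>

int main(void) {
    struct addrinfo hints, *res, *p;
    memset(&hints, 0, sizeof hints);
    hints.ai_family = AF_UNSPEC;      /* IPv4 or IPv6 */
    hints.ai_socktype = SOCK_STREAM;

    /* This call blocks the caller until resolution completes or fails. */
    int err = getaddrinfo("localhost", NULL, &hints, &res);
    if (err != 0) {
        fprintf(stderr, "getaddrinfo: %s\n", gai_strerror(err));
        return 1;
    }
    for (p = res; p != NULL; p = p->ai_next) {
        char buf[INET6_ADDRSTRLEN];
        void *addr = (p->ai_family == AF_INET)
            ? (void *)&((struct sockaddr_in *)p->ai_addr)->sin_addr
            : (void *)&((struct sockaddr_in6 *)p->ai_addr)->sin6_addr;
        inet_ntop(p->ai_family, addr, buf, sizeof buf);
        printf("%s\n", buf);
        /* struct addrinfo has no field for the record's TTL,
           and no indication of DNSSEC validation status. */
    }
    freeaddrinfo(res);
    return 0;
}
```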

Third, browsers have a security problem that most other systems don't have. Many things that "run in the browser" are really separate clients; examples include Adobe's Flash and Java applets. To mitigate some of the inherent problems with this, browsers have adopted a strategy called "pinning" that is very similar to a DNS cache.

As attacks have become more clever, the pinning policies have become more sophisticated. The inevitable effect of pinning is to treat results from the DNS differently than the authoritative DNS server's TTL (even if the TTL were available to the application).

The pressing need

Chromium (the basis for Chrome) now has its own experimental stub resolver inside the browser. The Mozilla project, which produces Firefox, has not gone quite that far yet, but the topic comes up from time to time. Yet putting a resolver in every browser, tempting though it might be, is not a good idea. Instead of having one or two well-known but problematic ways to do end-point DNS resolution, stub resolvers inside applications give us even more. That's more redundant code with subtle incompatibilities and bugs, surprising and inconsistent behavior, and features that don't overlap.

DNS resolution is part of the infrastructure and making it part of an application undermines the layered approach that has made networked applications so strong. The need to deliver a robust, cross-platform, reliable system that solves applications' needs is urgent. To their credit, the browser manufacturers know all this. But they have a real problem to solve and quickly.

What is to be done?

A large part of the problem as things stand is that applications — in this case, browsers — can't get the information they need from the operating system facilities and those facilities are inadequate anyway because they block. This doesn't even consider the services and features that could be available if applications could take advantage of knowledge about DNSSEC validation.

Application designers need a stable, high-performance, non-blocking API so that the full richness of DNS data is available to them, even under difficult performance constraints. The DNS operating environment has changed over the years and the operating system environment is also diverse. But applications are still stuck with the minimal interface offered by POSIX. That needs to change.

The way forward is for people who know about the DNS, but who are not end-user application developers, to collaborate with those end-user application developers. Together we need to develop a new, widely available, cross-platform API that solves the problems applications have. Performance needs to be tunable by applications so that lookups fail over quickly instead of waiting out long timeouts (the "happy eyeballs" requirement).

The API needs not to block, so that an application does not have to sit forever waiting for an answer that may never come. All of the data available from deep in the DNS guts needs to be available for the application to use. But this API must not require application designers to become experts in the DNS, how it works, or its arcane formats.
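The non-blocking requirement, at least, can be met even on top of today's primitives. The following sketch (the names `resolve_async` and `resolve_cb` are invented for illustration, not part of any real API) hides the blocking getaddrinfo() call behind a worker thread and a callback, so the caller returns immediately; a real replacement API would additionally surface TTLs and DNSSEC status, which getaddrinfo() cannot.

```c
#include <pthread.h>
#include <netdb.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/socket.h>

/* Hypothetical callback type: invoked when the lookup finishes. */
typedef void (*resolve_cb)(const char *name, struct addrinfo *res, int err);

struct job { char name[256]; resolve_cb cb; };

static void *worker(void *arg) {
    struct job *j = arg;
    struct addrinfo hints, *res = NULL;
    memset(&hints, 0, sizeof hints);
    hints.ai_socktype = SOCK_STREAM;
    /* The blocking call happens here, on the worker thread, not in the caller. */
    int err = getaddrinfo(j->name, NULL, &hints, &res);
    j->cb(j->name, err ? NULL : res, err);
    if (!err) freeaddrinfo(res);
    free(j);
    return NULL;
}

/* Returns immediately; cb fires on the worker thread when the lookup is done. */
static int resolve_async(const char *name, resolve_cb cb, pthread_t *tid) {
    struct job *j = malloc(sizeof *j);
    if (!j) return -1;
    snprintf(j->name, sizeof j->name, "%s", name);
    j->cb = cb;
    return pthread_create(tid, NULL, worker, j);
}

static void done(const char *name, struct addrinfo *res, int err) {
    (void)res;
    printf("%s: %s\n", name, err ? gai_strerror(err) : "resolved");
}

int main(void) {
    pthread_t t;
    resolve_async("localhost", done, &t);
    puts("caller keeps working");  /* runs while the lookup is in flight */
    pthread_join(t, NULL);
    return 0;
}
```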

This is a tall order, but it isn't impossible. Most of the Internet today has been built up with many layers, so that applications can do complicated things without understanding all the details. We need to do this again for DNS.

Now is the ideal time to do it. DNSSEC is still in the relatively early days of widespread adoption and standards for building services on top of DNSSEC are just being finalized. Browser makers are starting to run into the limitations of their traditional approaches to many of these issues and so they may be amenable to thinking about better ways of doing things.

Ok, so just do it already.

If the issues are so obvious, what's the problem? There are two.

  • The people who work on DNS and the people who work on applications often have very different interests and problems. It is hard to get such people to work together on a problem, not because they disagree but because they talk past one another.
  • In order to get something useful, a large amount of work needs to be done. Until all (or at least most) of the important work is complete, the new API will not be useful. That means that early testing and deployment will not happen.

Despite these barriers, we as an industry must find a way to get this work done. The current direction is unsustainable and as the name space expands, it will get worse. We cannot afford to wait or to keep tinkering at the edges.

Skeptics say that it can never happen. The DNS environment is too polluted. Hotels and web cafés and home gateways and a million other devices will continue to interfere in DNS transactions and make deployment impossible. But applications need origin policies actually linked to the domains they are talking to — not to lists of special domains and not to cache times that seem like a good guess, but to the real, verifiable data from the DNS.

Customers demanded Internet access in hotels in the past. If it's important enough, customers will demand reliable Internet access in hotels in the future. We need to make reliability a core value. With the root expanding and attention to web security more acute than ever, now is the time to strike. We can create a more usable, more sustainable and friendlier system to sit under every other Internet transaction.

By Andrew Sullivan, Tech Evangelist, Dyn+Oracle

Related topics: DNS, DNS Security, Web



man res_search  –  Carl Byington  –  Apr 16, 2012 9:35 PM PDT

man res_search

Yes, it may block, but a pool of helper processes can fix that: do the DNS resolution in the helper process.

No, res_search is a poor idea  –  Stephane Bortzmeyer  –  Apr 23, 2012 7:21 AM PDT

No, res_search is a poor idea for applications, since name resolution is not always done purely with the DNS (hosts file, LDAP, maybe a DHT in the future). That's why getaddrinfo() exists.

The ideal replacement should keep this protocol-agnostic property. For instance, it should return a "secure" flag that works for all protocols (hosts file => always secure, DNS => secure if the AD bit is set, LDAP => secure if over TLS, etc.).

Awesome  –  David A. Ulevitch  –  Apr 18, 2012 8:43 PM PDT

This is awesomely written.  It will take time to replace system calls, but POSIX has introduced new system calls over the years that have gained traction.  It's easy to be backwards compatible and still move the ball forward.

Andrew — I'm all for helping and count us in.

adns?  –  Stephane Bortzmeyer  –  Apr 23, 2012 7:23 AM PDT

For the asynchronicity, there is already adns. Maybe start from its API?

