
The Design of the Domain Name System, Part IV - Global Consistency

John Levine

In the previous installments, we've been looking at aspects of the design of the DNS (see parts I, II, and III).

Many databases go to great lengths to present a globally consistent view of the data they control, since the alternative is to lose credit card charges and double-book airline seats.

The DNS has never tried to do that. The data is roughly consistent, but not perfectly so.

Multiple servers and caches

Most zones have multiple DNS servers. The servers usually do not update their copies of the zone data at the same time, so one server may have slightly newer data than another. RFC 1034 describes the method that many zones use to keep in sync: one server acts as the master, and the rest, as slaves, use AXFR to copy over updated versions of the zone as needed. The zone's SOA record contains a set of time values that tell the slaves how often to check for changes. Although in theory one could crank the refresh interval down to one second, in practice refresh intervals are an hour or more, so it is quite common for the authoritative servers to be slightly out of sync.
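
To see those timers for yourself, here's a quick sketch in Python using the third-party dnspython library (an assumption on my part, and example.com is just a placeholder zone):

    # Sketch: read a zone's SOA timers with dnspython (pip install dnspython).
    # "example.com" is a placeholder, not a zone discussed in this post.
    import dns.resolver

    answer = dns.resolver.resolve("example.com", "SOA")
    soa = answer[0]

    print("serial: ", soa.serial)   # version number the slaves compare
    print("refresh:", soa.refresh)  # seconds between slave checks
    print("retry:  ", soa.retry)    # seconds before retrying a failed check
    print("expire: ", soa.expire)   # seconds until a slave discards stale data
    print("minimum:", soa.minimum)  # negative-caching TTL (per RFC 2308)

With a typical refresh of 3600, a slave can serve data up to an hour old even when everything is working correctly.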

Furthermore, DNS caches remember the answers to previous queries for up to the TTL (time to live) provided by the server when the query was answered. If the data changes, a cache will not re-query until the TTL expires. Although it is possible for a server to set the TTL to zero, meaning the data should not be cached at all, typical TTLs range from minutes to days. Adding to the excitement, caches do not always honor the requested TTL, sometimes applying their own minimum or maximum retention times.
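
Here's a toy version of that cache logic in Python. The query_upstream function and the clamp values are made up for illustration, but real resolver caches behave much the same way:

    import time

    # Toy resolver cache illustrating TTL handling. The clamp values are
    # invented for illustration; real resolvers have similar knobs.
    MIN_TTL = 60          # don't re-query more often than once a minute
    MAX_TTL = 86_400      # don't trust any answer for more than a day

    _cache = {}  # (name, rtype) -> (answer, absolute expiry time)

    def lookup(name, rtype, query_upstream):
        """Return a cached answer, or fetch one and keep it until its TTL expires."""
        now = time.monotonic()
        hit = _cache.get((name, rtype))
        if hit and hit[1] > now:
            return hit[0]                      # still fresh: no upstream query
        answer, ttl = query_upstream(name, rtype)
        ttl = max(MIN_TTL, min(ttl, MAX_TTL))  # cache overrides the server's TTL
        _cache[(name, rtype)] = (answer, now + ttl)
        return answer

Note that with a minimum of 60 seconds, even an answer the server marked with a TTL of zero gets served from the cache for a minute.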

As a result, "flash cuts", where the DNS changes and all clients immediately see the new data, don't work. Instead, DNS changes need to be made in ways that allow the old and new data to coexist for a while. For example, when changing the IP address of a server, rather than trying to crank the TTL down to zero and forcing all the zone's servers to update at once, it is a lot easier to run the service in parallel on the old and new IP addresses long enough for all of the servers to update on their normal schedule and for cached entries to expire.
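
To put rough numbers on it (the refresh and TTL values below are made up): in the worst case, a slave checked just before the change, and a cache fetched the old answer from that slave just before the slave updated, so the overlap window is one refresh interval plus one TTL.

    # Back-of-the-envelope: how long to run the old and new addresses in
    # parallel. Worst case is one full refresh interval (a slave that
    # checked just before the change) plus one full TTL (a cache that
    # fetched the old answer from that slave just before it updated).
    def parallel_window(refresh, ttl):
        return refresh + ttl

    # e.g. hourly refresh and a one-day TTL: keep both addresses live ~25 hours
    print(parallel_window(refresh=3600, ttl=86_400) / 3600, "hours")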

Deliberately inconsistent DNS

Some DNS servers deliberately return different answers to different clients. The two primary reasons are "split horizon" DNS and load sharing.

Split horizon means that clients on different networks get different answers. Most often, clients within the organization's own network see a larger set of names than the rest of the world does, or names resolve to addresses on the internal network while external clients get addresses on a firewall system. I've also used split horizon to deal with broken clients or caches that send high-volume streams of bogus queries, handing them delegations to name servers on non-existent networks.
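
The decision itself is simple; here's a sketch in Python, with placeholder networks and addresses (none of them from a real configuration):

    import ipaddress

    # Sketch of a split-horizon decision: internal clients get the internal
    # address, everyone else gets the firewall's. All values are placeholders.
    INTERNAL_NETS = [ipaddress.ip_network("10.0.0.0/8"),
                     ipaddress.ip_network("192.168.0.0/16")]

    ANSWERS = {
        "internal": "10.1.2.3",     # address on the internal network
        "external": "203.0.113.5",  # address on the firewall system
    }

    def answer_for(client_ip):
        ip = ipaddress.ip_address(client_ip)
        if any(ip in net for net in INTERNAL_NETS):
            return ANSWERS["internal"]
        return ANSWERS["external"]

    print(answer_for("10.4.5.6"))      # -> 10.1.2.3
    print(answer_for("198.51.100.7"))  # -> 203.0.113.5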

Load sharing in its simplest form involves rotating through a set of records in responses, so that clients are spread across a set of mirror web or mail servers. In more sophisticated forms, the DNS server guesses where the client is and tries to return the address of a server topologically or geographically close to it.
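
The simple rotation is easy to sketch (the addresses below are placeholders from a documentation range):

    from collections import deque

    # Simplest load sharing: rotate the record set on every response so
    # clients spread across the mirrors. Addresses are placeholders.
    mirrors = deque(["192.0.2.10", "192.0.2.11", "192.0.2.12"])

    def next_answer():
        """Return the full record set, rotated one position per query."""
        records = list(mirrors)
        mirrors.rotate(-1)   # the next query sees a different first record
        return records

    print(next_answer())  # ['192.0.2.10', '192.0.2.11', '192.0.2.12']
    print(next_answer())  # ['192.0.2.11', '192.0.2.12', '192.0.2.10']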

Split horizon DNS is somewhat defensible as a legitimate DNS configuration, since the responses to each client are consistent. For the usual inside/outside split, an organization should know what addresses are on its own network, with appropriate router configurations to keep out forged outside traffic purporting to come from that network.

Load sharing shouldn't hurt anything (much) so long as the server is prepared for its guesses about the client's location to be completely wrong. As an obvious example, people all over the world use Google's public DNS cache, so the authoritative server sees the cache's address rather than the user's, and a geographic guess based on it can be badly off. At this point, DNS-based load sharing is probably a necessary evil, but given the ever more convoluted topology of the Internet, it is a poor idea to design new applications that depend on it.

In our next installment, we'll look at just how much data you can ask the DNS to give you.

By John Levine, Author, Consultant & Speaker.
