Everyone is probably well aware of the Kashpureff-style DNS cache-poisoning exploit (I'll call this "classic cache poisoning"). For reference, see the original US-CERT advisory prompted by this exploit.
Vendors patched their code to appropriately scrub (validate) responses so that caches could not be poisoned. For the next 7-8 years, we didn't hear much about cache poisoning. However, there was still a vulnerability lurking in the code, directly related to cache poisoning.
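The "scrubbing" the vendors added is essentially a bailiwick check: a response may only add records for names that fall within the zone the queried server is authoritative for. A minimal sketch of that logic (the function name and examples are mine, not from any BIND source):

```python
def in_bailiwick(record_name: str, zone: str) -> bool:
    """Return True if record_name falls within zone (the server's 'bailiwick')."""
    record = record_name.lower().rstrip(".").split(".")
    parent = zone.lower().rstrip(".").split(".")
    # A name is in-bailiwick if its trailing labels match the zone's labels.
    return record[-len(parent):] == parent

# Records for names under the queried zone survive scrubbing...
assert in_bailiwick("mail.example.com", "example.com")
# ...but an out-of-bailiwick record (the classic poisoning vector) is dropped.
assert not in_bailiwick("www.victim.test", "example.com")
```

Comparing label lists (rather than raw string suffixes) matters: "badexample.com" is not within "example.com", even though one string ends with the other.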
An Additional (not new) Poisoning Problem
On March 26, 2005, a thread titled "DNS cache poisoning attacks - are they real?" started on the NANOG mailing list. Operators began to notice more systems being poisoned again, and discussion ensued as to how this could be happening.
On April 7, 2005, the SANS ISC (not to be confused with Internet Systems Consortium) posted an update detailing how Microsoft Windows DNS servers were still being poisoned, even though the "Secure cache against pollution" option was set. The SANS ISC found that Windows DNS servers using BIND4 and BIND8 servers as forwarders were being poisoned. But how could this be?
I think a slight clarification of the News.com article is necessary, with respect to this paragraph:
"The vulnerable servers run the popular Berkeley Internet Name Domain software in an insecure way and should be upgraded, Kaminsky said. The systems run BIND 4 or BIND 8 and are configured to use forwarders for DNS requests--something the distributor of the software specifically warns against."
In reality, the vulnerable systems are using BIND4 or BIND8 DNS servers *as* forwarders. So what is forwarding, anyway?
What is a Forwarder?
A forwarder is a recursive/caching name server used by downstream systems that have no root.cache file, or that cannot access the global DNS for policy reasons. The downstream systems then "forward" all DNS requests to the forwarder for resolution. On a cache miss only, the forwarder will recursively find the answer, scrub any poisoned responses, insert the clean response into the cache, and pass the answer back to the requesting system. However, the answer that is passed back to the requesting system is NOT scrubbed. If that response was poisoned, the downstream system receives poisoned data. Since it has no root.cache file to perform anti-poisoning logic… ouch!
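In named.conf terms, a downstream system that forwards everything looks roughly like this (the addresses are placeholders, not a recommendation):

```
options {
    // Hypothetical address of the upstream forwarder
    forwarders { 192.0.2.53; };
    forward only;   // send every query upstream; never iterate from the roots
};
```

With "forward only" set, the downstream server is entirely dependent on whatever the upstream forwarder hands back.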
This gives an attacker a chance to poison any system using BIND4 or BIND8 as a forwarder on *any* cache miss on the forwarder. Given the number of systems Dan Kaminsky found that are using a BIND8 system as a forwarder, and the number of possible domains to poison, you can see the potential scale of this problem. Of course, once the forwarder has clean data in the cache, any answers sent to downstream systems from the cache are clean.
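The asymmetry described above can be captured in a toy model (all names, addresses, and function names here are hypothetical, not actual BIND code): the answer inserted into the cache is scrubbed, but the answer relayed downstream on a cache miss is the raw response.

```python
# Toy model of the BIND4/BIND8 forwarder flaw: scrub before caching,
# but relay the raw, unscrubbed response to the downstream system.

def scrub(response, zone):
    """Keep only records whose names fall within the queried zone."""
    return [(name, ip) for name, ip in response
            if name == zone or name.endswith("." + zone)]

cache = {}

def handle_query(qname, zone, upstream_response):
    if qname in cache:
        return cache[qname]                        # cache hit: clean data
    cache[qname] = scrub(upstream_response, zone)  # scrubbed before caching...
    return upstream_response                       # ...raw answer goes downstream!

# A malicious authority answers a query for www.example.com but smuggles
# in a bogus, out-of-bailiwick record for www.bank.test:
poisoned = [("www.example.com", "192.0.2.10"),
            ("www.bank.test", "203.0.113.66")]

first = handle_query("www.example.com", "example.com", poisoned)
assert ("www.bank.test", "203.0.113.66") in first       # cache miss: poisoned

second = handle_query("www.example.com", "example.com", poisoned)
assert ("www.bank.test", "203.0.113.66") not in second  # cache hit: clean
```

This also shows why the problem is per-cache-miss: once the clean record is cached, subsequent downstream answers come from the scrubbed cache.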
Who is Most at Risk?
Anyone using a large-scale "forwarding" configuration for their name servers is at risk. Imagine tens of thousands of cable modem or DSL customers using an ISP's DNS server as a forwarder. Even more dangerous is "forward chaining," where three or more DNS servers are configured to forward queries upstream. Forwarding in and of itself is not inherently dangerous, but operators should treat it as dangerous simply because the systems using a forwarder have no way to validate the responses they get from it.
Why Don't Organizations Update DNS Software?
Lack of money, lack of expertise, confusion, politics? Yes! Enterprise IT staff and ISP engineering staff are plagued by one or more of these issues, which may prevent or hinder them from upgrading DNS software. If you use the "canonical" BIND source and don't have the resources to deal with upgrading/configuring BIND, consider:
What's the Future of DNS Management?
I don't believe this is so much an issue of "managing" DNS. This is a never-ending battle of managing software development, and a never-ending battle of bad guys exploiting vulnerabilities while good guys race to find and fix them first.
What we really need is more large-scale measurement and testing projects for the DNS. DNS is a large, complex, distributed database. We must better understand how it works *when it's working*, as well as what happens when it breaks, or even ways the DNS might fail that we haven't thought of yet! The bad guys are continually probing software and systems for weaknesses. We need better intelligence, and we need it yesterday.
How can I help with this, you ask? Support and/or participate in organizations and projects such as the DNS Operations, Analysis, and Research Center (OARC), the Cooperative Association for Internet Data Analysis (CAIDA), or Doxpara Research (Dan Kaminsky's Infrastructure Validation Project).
The only way we can *avoid* these problems, and reduce the fear, uncertainty, and doubt that circulates, is to measure, test, understand, and educate everyone. I have hopefully furthered those goals in this article.
By Brett Watson, Senior Manager, Professional Services at Neustar. Brett's experience spans large-scale IP networking, optical networking, network/system administration and design, and security architecture including high level security policy and architecture, as well as vulnerability assessments and penetration testing.