
Not a Guessing Game

Paul Vixie

On Tuesday, July 8, CERT/CC published advisory #800113, referring to a DNS cache poisoning vulnerability discovered by Dan Kaminsky that will be fully disclosed on August 7 at the Black Hat conference. While the long-term fix for this attack, and all attacks like it, is Secure DNS, we know we can't get the root zone signed, or the .COM zone signed, or the registrar/registry system to carry zone keys, soon enough. So, as a temporary workaround, the affected vendors are recommending that Dan Bernstein's UDP port randomization technique be universally deployed.
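A rough back-of-the-envelope sketch (mine, not from the advisory) shows why port randomization was chosen as the stopgap: a blind spoofer must match the 16-bit query ID, and, once source ports are randomized, the port as well. Kaminsky's actual technique lets the attacker retry many forgeries quickly, so the real arithmetic is more involved, but the per-packet odds illustrate the point.

```python
QUERY_ID_SPACE = 2 ** 16            # 16-bit DNS transaction ID
PORT_SPACE = 65536 - 1024           # unprivileged ports 1024..65535

def spoof_success_probability(port_space: int) -> float:
    """Chance one blindly forged response matches both query ID and port."""
    return 1.0 / (QUERY_ID_SPACE * port_space)

p_fixed = spoof_success_probability(1)            # fixed source port
p_random = spoof_success_probability(PORT_SPACE)  # randomized source port

print(f"fixed port:      1 in {QUERY_ID_SPACE:,}")
print(f"randomized port: 1 in {QUERY_ID_SPACE * PORT_SPACE:,}")
```

Randomization multiplies the search space by roughly 64,000; it doesn't close the hole, which is why the post calls it a band-aid rather than a fix.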

Reactions have been mixed, but overall, negative. As the coordinator of the combined vendor response, I've heard plenty of complaints, and I've watched as Dan Kaminsky has been called an idiot for how he managed the disclosure. Let me try to respond a little here, without verging into taking any of this personally.

Q: "This is the same attack as <X> described way back in <Y>."
A: No, it's not.

Q: "You're just fear-mongering, we already knew DNS was terribly insecure."
A: Everything we thought we knew was wrong.

Q: "I think Dan's new attack is <Z>."
A: If you guess right, you can control the schedule, is that what you want?

Q: "I think Dan should have just come right out and described the attack."
A: Do you mind if we patch the important parts of the infrastructure first?

Q: "Why wasn't I brought into the loop?"
A: Management of trusted communications is hard. No offense was intended.

Now for a news bulletin: Tom Cross of ISS-XForce correctly pointed out that if your recursive nameserver is behind most forms of NAT/PAT device, the patch won't do you any good since your port numbers will be rewritten on the way out, often using some pretty nonrandom looking substitute port numbers. Dan and I are working with CERT/CC on a derivative vulnerability announcement since it appears that most of the NAT/PAT industry does indeed have this problem. The obvious workaround is, move your recursive DNS to be outside your NAT/PAT perimeter, or enable your NAT/PAT device to be an ALG, or use TSIG-secured DNS forwarding when passing through your perimeter.

Please do the following. First, take the advisory seriously — we're not just a bunch of n00b alarmists, if we tell you your DNS house is on fire, and we hand you a fire hose, take it. Second, take Secure DNS seriously, even though there are intractable problems in its business and governance model — deploy it locally and push on your vendors for the tools and services you need. Third, stop complaining, we've all got a lot of work to do by August 7 and it's a little silly to spend any time arguing when we need to be patching.

By Paul Vixie, CEO, Farsight Security.

Related topics: DNS, Security


Comments

Some questions George Kirikos  –  Jul 15, 2008 7:49 AM PDT

Just to be clear, would regular clients (Macs, Windows, Linux) behind a NAT firewall (e.g. a Netgear, Linksys or similar routers used for sharing a network connection) be vulnerable if they have DNS set to a non-vulnerable external server in their local client settings? (i.e. does the NAT make them now vulnerable?)

Would most routers used for sharing a network connection in a LAN need firmware updates due to this issue, if the routers themselves are DNS servers?

I think folks do take the advisory seriously, but want a "no brainer" guide "for dummies" to make sure they've done all they can, especially given that many vendors haven't made an official statement yet.

Re: Some questions Paul Vixie  –  Jul 15, 2008 8:30 AM PDT

Just to be clear, would regular clients (Macs, Windows, Linux) behind a NAT firewall (e.g. a Netgear, Linksys or similar routers used for sharing a network connection) be vulnerable if they have DNS set to a non-vulnerable external server in their local client settings? (i.e. does the NAT make them now vulnerable?)

yes and no. if you have a de-randomizing NAT/PAT device in front of your randomizing DNS questioner, then you will be less safe as a result of the de-randomization. however, non-caching (end-user; client only) DNS questioners are inherently less vulnerable to spoofing since they won't save or re-use any bad data that someone might be able to fool them with. so while there is a danger, it's not quite as dramatic. most non-caching DNS questioners will never be upgraded to randomize their UDP ports, so most of the time a de-randomizing NAT/PAT device will not make them even less safe.

Would most routers used for sharing a network connection in a LAN need firmware updates due to this issue, if the routers themselves are DNS servers?

almost certainly, but you'll have to contact your router vendor to be sure. note, if you can do a "dig porttest.dns-oarc.net in txt" from a client of that router or any other DNS server, you can learn a lot about how random its UDP ports are. a rating of FAIR or GOOD means you have no worries. any other rating means you need to call your vendor and set their hair on fire.

I think folks do take the advisory seriously, but want a "no brainer" guide "for dummies" to make sure they've done all they can, especially given that many vendors haven't made an official statement yet.

for early notification we mostly picked on the large enterprise DNS vendors since a small number of patches and upgrades would protect a large number of endpoints. also we knew all of them personally :-). for the router vendors who embed a DNS server in their product, we were mostly told "it's hopeless, no one ever installs our firmware updates, but we will try." so, call your vendor, large or small, and tell them you need a fix to CERT #800113.

note, if you can do Carl Byington  –  Jul 15, 2008 7:56 PM PDT

note, if you can do a "dig porttest.dns-oarc.net in txt" from a client of that router or any other DNS server, you can learn a lot about how random its UDP ports are.

That port test seems to claim it computes the standard deviation of the sample of 26 port numbers. See http://en.wikipedia.org/wiki/Standard_deviation for a not completely terrible overview. However, that is implicitly assuming that those port numbers are drawn from a population with a normal distribution.

We should hope that our outbound port numbers are drawn from a uniform distribution, perhaps over [1024, 65535].  In particular, if a vulnerable BIND uses only two ports, say port 1050 and port 60050, the computed standard deviation will be very large, but won't tell us anything useful.  That should be replaced with a more appropriate test, trying to determine whether the sample is drawn from a uniform distribution over a suitably large range.
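Carl's two-port example is easy to check numerically. This small sketch (my illustration, using his hypothetical ports 1050 and 60050) shows that 26 samples drawn from just two ports still produce an enormous standard deviation:

```python
import statistics

# A resolver alternating between only two source ports, per the
# hypothetical above: 26 observed ports, but only 2 distinct values.
ports = [1050, 60050] * 13

sd = statistics.pstdev(ports)     # population std dev, as a simple measure
print(f"std dev = {sd:.2f} across {len(set(ports))} distinct ports")  # 29500.00 across 2
```

Note, though, that the porttest output quoted elsewhere in this thread also reports the count of distinct ports ("from 1 ports", "from 26 ports"), which would expose exactly this degenerate case even when the std dev looks healthy.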

Standard deviation does not imply normal distribution Alan U. Kennington  –  Jul 15, 2008 10:34 PM PDT

Carl,

Standard deviation is a measure of the variation of a set of numbers; it has nothing at all to do with the normal distribution. All distributions have a standard deviation. The normal distribution is just one among an infinite number of possible distributions.

The use of the standard deviation in statistics does not necessarily imply that the numbers are random. Even a normally distributed sample might not be random at all.

However, the point is certainly correct that the std dev does not tell you whether the sample of 26 UDP source ports is random. Randomness tests involve rather deep analysis and can get quite philosophical.

As I understand it, the purpose of the UDP source port std dev measure is to just distinguish between the extremes of the expected kinds of UDP source port behaviours. E.g. between constant port number (std dev = 0), simple linear sequence (std dev = about 7.5) and a uniform distribution over 64,000 values (std dev = 32,000/sqrt(3) = about 18,000). So if the distribution is uniform, you can infer the range of the sample space (the range of possible ports) from the std dev. But like all statistical estimation, you have to start with some sort of model. Testing whether a sample does or does not seem to be consistent with a particular statistical model is probably too deep to put in a DNS server.
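Alan's three reference figures can be reproduced directly. The sketch below (my illustration; the port values are made up, not captured from a real resolver) computes the population std dev for a constant port, a simple linear sequence of 26 ports, and the theoretical value for a uniform distribution over a 64,000-port range:

```python
import math
import statistics

constant = [4000] * 26                      # fixed source port
linear   = list(range(1024, 1024 + 26))     # simple incrementing port

print(statistics.pstdev(constant))          # 0.0
print(statistics.pstdev(linear))            # 7.5 for 26 consecutive integers

# For a uniform distribution over a range of width w, the theoretical
# std dev is w / sqrt(12), equivalently (w/2)/sqrt(3) as given above.
w = 64000
print(w / math.sqrt(12))                    # ~18475
```

The three regimes are separated by orders of magnitude, which is why a crude std dev works well as a triage measure here even though it is not a real randomness test.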

Perhaps another idea would be to return a TXT field containing the complete list of 26 ports and let the client do the stats. But then this might assist a bad guy to hack some DNS resource using the explicit port numbers. The std dev just anonymizes the mean, minimum and maximum of the UDP port distribution. It certainly is enormously useful for the present purposes. Not that many routers map UDP source ports to a bi-modal distribution in my experience.

standard deviation and randomness Duane Wessels  –  Jul 16, 2008 9:02 AM PDT

Carl and Alan,

I'm the author of the porttest tool.  Thanks for your comments about how standard deviation doesn't necessarily relate to randomness.  While writing the tool I spent some time researching other ways to report randomness, but became overwhelmed by it all.  As Alan said (better than I), a simple standard deviation calculation seems to be a very strong indicator of port randomness in this case.

I am working on a new web-based tool that will also report the actual ports (and query IDs) seen, so stay tuned for that.

Duane W.

Failing the test George Kirikos  –  Jul 15, 2008 9:02 AM PDT

Thanks for posting the test, Paul. I appreciate that it's nice and straightforward, as many other people's descriptions have been very technical and less than clear. That's something that everyone can easily run from the command line, to see if they're vulnerable:

dig porttest.dns-oarc.net in txt

I just ran it, and got the result:

"(IP address) is POOR: 26 queries in 1.5 seconds from 1 ports with std dev 0.00"

which is obviously a failure of the test (you might want to make the language of the failure even more obvious, e.g. DANGER!! YOU ARE LEAST SAFE!!). I suspect I'm not alone! (Probably millions of people with consumer-grade cable/DSL routers face the same issue.) I already put in a call to my cable/DSL router manufacturer (even before I posted the initial question) and await a reply.

Once again, for the "for dummies" crowd, if the vendor doesn't do anything, are there any countermeasures we can easily employ today? I noticed OpenDNS.com suggested using their nameservers, and then firewalling off any other DNS responses. Is that a possible solution that consumers can deploy? (I assume we'd need to put their nameservers into the router's DNS configuration, as well as the DNS settings of all the clients? Or should we point all clients to the DNS of the router, and then point the router's DNS to OpenDNS? And then figure out what access rules to employ on the router's firewall to ensure that poisoning can't occur?)

Re: Failing the test Paul Vixie  –  Jul 15, 2008 11:13 AM PDT

Once again, for the "for dummies" crowd, if the vendor doesn't do anything, are there any countermeasures we can easily employ today?

run the test ("dig porttest.dns-oarc.net in txt") and if it doesn't report either GOOD or FAIR, call your ISP or your phone company or the store where you bought your DSL box or cable box.

I noticed OpenDNS.com suggested using their nameservers, and then firewalling off any other DNS responses. Is that a possible solution that consumers can deploy?

i don't imagine most consumers can "firewall off" anything. note that opendns is a commercial venture, they may have commercial reasons for offering you a free service. many users set their DNS servers to be 4.2.2.1 and 4.2.2.2 and i've never heard of any complaints about those.

(I assume we'd need to put their nameservers into the router's DNS configuration, as well as the DNS settings of all the clients? Or should we point all clients to the DNS of the router, and then point the router's DNS to OpenDNS? And then figure out what access rules to employ on the router's firewall to ensure that poisoning can't occur?)

it's that "figuring out" process that i question. if you're able to understand these issues at all, then you're not a "dummy", and you should be using your ISP's recursive nameservers, and turning off DNS in your DSL router. or you should use 4.2.2.1 and 4.2.2.2, or opendns, or neustar's dnsadvantage. the universal thing you should do is, not use a DSL router as a DNS server. and make sure no one else can, either. turn that thing off.

4.2.2.1 and 4.2.2.2 nameservers George Kirikos  –  Jul 15, 2008 11:38 AM PDT

Oddly, it was the 4.2.2.1 and 4.2.2.2 nameservers that I had been using previously (as they have a great reputation for speed), and which gave me the "poor" result, before I switched to OpenDNS.

I just switched back to them on my Mac (and left the OpenDNS servers on the router), and got back the following:

dig porttest.dns-oarc.net in txt

; <<>> DiG 9.4.1-P1 <<>> porttest.dns-oarc.net in txt
;; global options:  printcmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 5016
;; flags: qr rd ra; QUERY: 1, ANSWER: 2, AUTHORITY: 0, ADDITIONAL: 0

;; QUESTION SECTION:
;porttest.dns-oarc.net.  IN TXT

;; ANSWER SECTION:
porttest.dns-oarc.net.  5 IN CNAME z.y.x.w.v.u.t.s.r.q.p.o.n.m.l.k.j.i.h.g.f.e.d.c.b.a.pt.dns-oarc.net.
z.y.x.w.v.u.t.s.r.q.p.o.n.m.l.k.j.i.h.g.f.e.d.c.b.a.pt.dns-oarc.net. 60 IN TXT "209.244.4.18 is POOR: 26 queries in 1.5 seconds from 1 ports with std dev 0.00"

;; Query time: 1796 msec
;; SERVER: 4.2.2.1#53(4.2.2.1)
;; WHEN: Tue Jul 15 14:20:38 2008
;; MSG SIZE rcvd: 199

If you do a WHOIS of 209.244.4.18 it appears to be the same owner as 4.2.2.1. The Doxpara.com test also says I'm "vulnerable" with those nameservers on my Mac client behind the NAT firewall/router, although it reports the IP address as 209.244.4.24.

Repeating the test again after switching the router's DNS to 4.2.2.1 and .2 (and keeping the Mac at 4.2.2.1 and .2 too) yields the similar failure:

"209.244.4.24 is POOR: 26 queries in 1.5 seconds from 1 ports with std dev 0.00"

Switching everything back to OpenDNS again, and I get the "GOOD" result. Must be something happening in the router, if indeed 4.2.2.1 and 4.2.2.2 are secure for everyone else. Has everyone else reported that 4.2.2.1 and .2 are secure? (maybe it depends on one's geographical location?)

RE: 4.2.2.1 and 4.2.2.2 nameservers Paul Vixie  –  Jul 16, 2008 6:44 AM PDT

I've learned from Level(3), operators of 4.2.2.1 and 4.2.2.2, that they will be rolling out UDP port randomization well in advance of Dan's August 6 Black Hat talk, but it's not an overnight process owing to the size of their anycast cloud. Something about doing the odd-numbered servers first.

Note: they also said they would eventually restrict 4.2.2.1 and 4.2.2.2 to customer access only, so if you're not a Level(3) customer, you probably need to find another solution. Almost every ISP has recursive name servers, and if yours is honest — sends you an error rather than advertising if you type in a nonexistent domain name — you should be using it. If your ISP is dishonest, then you should consider opendns or neustar's dnsadvantage, or do what I do, run your own RDNS. I use BIND, but I've also heard good things about PowerDNS and Unbound. There are also many non-free RDNS servers.

4.2.2.1 isn't secure David A. Ulevitch  –  Jul 15, 2008 12:10 PM PDT

It appears anycasted, but the one I tested against failed.

You're right George Kirikos  –  Jul 15, 2008 1:21 PM PDT

You're right, I just tested 4.2.2.1 and .2 from one of my dedicated servers that had been using those in its configuration, and only got a "FAIR" result, thus it appears that not all of their (anycasted) systems have been updated accordingly, or something.

The dig porttest.dns-oarc.net test is sometimes deceptive. Alan U. Kennington  –  Jul 15, 2008 1:40 PM PDT

I am not an expert, but....
I tried the dig test a few times and sometimes got the opposite of the truth.

Case 1.
I ran dig @patched.dns.server porttest.dns-oarc.net in txt and got POOR. It turned out that my patched DNS server was connected on a 192.168.1.0/24 subnet to the ADSL modem, which then duly de-randomized the UDP source ports on the way out. So the dig test gave me a POOR. Then I reconfigured the ADSL modem to only S-NAT the source IP address, and got a GOOD. This is a case where running a packet trace does not reveal the truth because the harm is done further up the path by the ADSL modem. So the dig test is very valuable here to expose this problem.

Case 2.
This might be more serious.
I pointed dig at the ADSL router (on different premises in a different city) as in dig @adsl.router porttest.dns-oarc.net in txt and got a GOOD. That surprised me because I was sure this modem was using fixed UDP source ports for DNS requests from the built-in resolver. It turned out that the ADSL router's DNS server was getting DNS translations from the two DNS resolvers of the ISP, and the ISP's DNS requests were using randomized ports already. So the dig test gave GOOD, although the DNS server was definitely not good at all.

Now, if I have vaguely understood the attack mode at all, it seems to me that a DNS server inside an ADSL modem using fixed UDP source ports to request translations from the ISP is going to be vulnerable if the source IP address of the ADSL modem is the same as that used for DNS requests. In other words, this very, very typical DNS set-up is going to be as vulnerable as the worst case, because the host at the application's destination IP address will know the IP address of the DNS server which is using fixed UDP source ports. Conclusion: you get GOOD from the dig test, but the DNS server is bad, apparently.

Question:
Has anyone clarified this anywhere?
Case 2 gives a false sense of security apparently, and it is a very common scenario. And the ADSL modem needs a firmware upgrade to fix it. And most people don't want to buy an extra box to do DNS cached resolving within their LAN, even if they knew how to do it.

TSIG Niles Mills  –  Jul 15, 2008 9:13 AM PDT

Does securing zone transfers with TSIG eliminate the need for the BIND upgrade?  Or does TSIG just mitigate the NAT issue after upgrading?

Re: TSIG Paul Vixie  –  Jul 15, 2008 11:15 AM PDT

Does securing zone transfers with TSIG eliminate the need for the BIND upgrade? Or does TSIG just mitigate the NAT issue after upgrading?

TSIG is used for updates and zone transfers, and is absolutely secure against spoofing, as long as you keep your keys secret. however, spoofing is far more common for queries, where TSIG is almost never available. so, you need to upgrade for queries even if you're using TSIG for updates and transfers, as you certainly should be.
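For readers wondering what TSIG-protected transfers look like in practice, here is a hypothetical BIND 9 named.conf fragment. The key name, zone name, and secret are placeholders invented for illustration; generate a real secret with dnssec-keygen and keep it private on both ends.

```
// Placeholder TSIG key -- do not use this secret.
key "transfer-key" {
    algorithm hmac-md5;         // the common TSIG algorithm in BIND 9 at the time
    secret "cGxhY2Vob2xkZXIgb25seSAtIGRvIG5vdCB1c2U=";
};

zone "example.com" {
    type master;
    file "example.com.zone";
    // only transfer the zone to peers that sign their request with the key
    allow-transfer { key "transfer-key"; };
};
```

The same key statement must appear, verbatim, in the configuration of every secondary allowed to transfer the zone; as the reply above notes, none of this protects ordinary recursive queries.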

Quick Update George Kirikos  –  Jul 15, 2008 9:51 AM PDT

Just a quick followup, I changed the DNS settings on both my cable/DSL router and on my LAN clients behind its NAT firewall to OpenDNS, and now I get:

(IP Address) is GOOD: 26 queries in 2.3 seconds from 26 ports with std dev 19822.26

which appears to be a lot more secure (although it would be nice if DNSSEC or an alternative were available for even greater security). I did not need to alter my firewall rules on the cable/DSL router.

Great news David A. Ulevitch  –  Jul 15, 2008 10:36 AM PDT

That's great George.  I'm glad we were able to help!

dnssec for opendns ? johnjones  –  Jul 15, 2008 3:06 PM PDT

why does opendns not provide DNSSec ?

a multitude of reasons though David A. Ulevitch  –  Jul 15, 2008 4:29 PM PDT

a multitude of reasons though I do think we can do something to encourage adoption…

Not a new or unknown problem Jeffrey A. Williams  –  Jul 15, 2008 11:59 PM PDT

Comment removed by CircleID Admin as per Codes of Conduct.

I can say with absolute David A. Ulevitch  –  Jul 16, 2008 12:30 AM PDT

I can say with absolute certainty that Mr. Williams does not know about the vulnerability that will be disclosed.

I can say with /near/ certainty that Paul didn't know about this issue before Dan raised it.

But your entire post looks like a troll… so I guess I bit.

Missive/abusive responses to: Not a new or unknown problem Jeffrey A. Williams  –  Jul 16, 2008 1:20 AM PDT

Comment removed by CircleID Admin as per Codes of Conduct.

Re: Not a new or unknown problem Paul Vixie  –  Jul 16, 2008 6:30 AM PDT

He has known of this Hole for years and also knew what needed to be done to fix it.

That grossly mischaracterizes both the current situation and my position.

While I've been writing about holes like this since 1995, I did not know about *this* hole until Dan Kaminsky called me in February and said "guess what?" And, while I've known what needed to be done to pave over this whole class of problems, I also know that ICANN and USG lack the political will to sign the root zone with Secure DNS, so here we are with UDP port randomization, which is just another band-aid for DNS's sucking chest wound.

I also know that ICANN David Conrad  –  Jul 16, 2008 3:45 PM PDT

I also know that ICANN and USG lack the political will to sign the root zone with Secure DNS,

While I obviously cannot speak for the USG (and I don't speak for ICANN), stating that ICANN lacks the political will to sign the root is obviously wrong; see the IANA signed root zone demo.  But Paul knows this, having been asked long ago if ISC would be interested in discussing providing secondary service for said signed demo root (which, for the record, he indicated he would, but the demo effort got derailed by other politics).

While ISC is totally ready Paul Vixie  –  Jul 16, 2008 7:10 PM PDT

While ISC is totally ready and willing to cooperate with ICANN on the root zone demo David mentioned, that demo is of technology, not political will, and is therefore somewhat off-topic. I want the real root zone signed with a real key so that TLDs who sign themselves will have a secure place to publish their TLD keys. I have searched the world and I've busted into every smoke filled room on it, and I have still not found the person who can say "yes" to that, nor have I found the people who are currently saying "no". What I do know is that if ICANN and USG had the political will to make this happen, it would happen.

Red Herring David Conrad  –  Jul 16, 2008 8:08 PM PDT

that demo is of technology, not political will, and is therefore somewhat off-topic.

One of the points of the demo was to indicate IANA was undertaking to be in a position to sign the root, even at non-trivial cost (you think multiple FIPS 140-3 hardware security modules come free?).  I'm not sure how that cannot be a demonstration of political will on the part of ICANN to see that the root gets signed, but it is actually irrelevant.  Even if the root were signed today, it would be essentially meaningless to address this particular vulnerability in the foreseeable future since:

a) last I checked, a total of 4 TLDs are currently signed (SE, PR, BR, and BG);
b) infinitesimally few caching servers are configured to validate responses and a goodly portion of the caching servers that people use either do not now support DNSSEC (e.g., Microsoft's DNS server, PowerDNS, OpenDNS, etc.) or will never (according to the author) support DNSSEC (e.g., djbdns);
c) even if every zone on the planet were signed and trust anchors were appropriately configured and maintained, the mechanism by which validation failure is returned to the end user is indistinguishable from a variety of network problems for the vast majority of applications.  As a result, an ISP turning DNSSEC on will likely be subject to a flood of expensive support calls, greatly encouraging that ISP to turn DNSSEC off.

That is not to say that I wish to discourage you from tilting at that particular windmill (after all, any journey starts with a single step and all of the above can be fixed with sufficient effort), but there is a lot more to seeing DNSSEC usefully deployed than "signing the root".  Further, as you well know, the shorthand "sign the root" means quite a bit more than running dnssec-signzone over the root zone data and it is simply silly to assume ICANN is or even should be in a position to undertake the steps to "sign the root" unilaterally.

What I do know is that if ICANN and USG had the political will to make this happen, it would happen.

While I know in some circles it is considered a fun sport to bash ICANN, asserting ICANN doesn't have the political will to see the root signed is both wrong as well as somewhat insulting to the folks at IANA and ICANN who have spent considerable amount of time, resources, and energy to see forward motion.

Re: Red Herring Paul Vixie  –  Jul 17, 2008 7:23 AM PDT

While I know in some circles it is considered a fun sport to bash ICANN, asserting ICANN doesn't have the political will to see the root signed is both wrong as well as somewhat insulting to the folks at IANA and ICANN who have spent considerable amount of time, resources, and energy to see forward motion.

i have not yet met an ICANN staffer, in or out of IANA, who didn't want to sign the root zone, and who wouldn't get it done in a matter of days or at most weeks if it was up to them. if this is bashing, which it's not, then it'd be bashing of the shape of the conference table, not the folks sitting at it. i'm proud to call the ICANN staffers i know personally "friends", proud to work in the same industry as they do, proud to have them as fellow travellers. let's not treat any of this thread as being ad hominem.

Strange test result with MS caching DNS server Labrador Tea  –  Jul 17, 2008 7:04 AM PDT

Our windows admin claims they are recently updated.  I've tested the AD server providing caching DNS service and zone transfer from our bind, and it responds to the test like so:

dig +short porttest.dns-oarc.net in txt @adc1
z.y.x.w.v.u.t.s.r.q.p.o.n.m.l.k.j.i.h.g.f.e.d.c.b.a.pt.dns-oarc.net.

There is no GOOD or POOR.  What does this mean?

re: Strange test result with MS caching DNS server Duane Wessels  –  Jul 18, 2008 10:27 AM PDT

Hard to say exactly what it means.  Perhaps the reply was blocked for some reason.  Or maybe the test timed out.  You can try 'dig' without the '+short' option.  Also you can try our new web-based test:

https://www.dns-oarc.net/oarc/services/dnsentropy

Is this typical DNS topology for ADSL routers vulnerable? Alan U. Kennington  –  Jul 17, 2008 7:56 AM PDT

Since July 9, I've been trying to get clarity from many people on this question, and have tried to infer the facts from the publicly available information. But after more than 8 days, I just really have to ask this question here directly. Either no one is answering it because the answer is too obvious, or maybe because it would cause mass panic. (If it would cause mass panic, please delete this e-mail!!)

My question:
Could you please tell me whether Dan's DNS protocol vulnerability applies to the following scenario?

Suppose we have these nodes in a DNS request path.

1. A desktop computer (on a LAN) with a typical internet application, like a browser.
2. An ADSL (or other kind of) router, which contains a DNS server which fetches DNS translations for the computers on the LAN.
3. Two IP addresses supplied by the ISP for domain name resolution. The customer premises router gets its translations by sending requests to these two nodes.
4. An IP host belonging to the ISP which runs a nicely patched DNS server which uses random UDP source ports to request DNS translations from the world-wide DNS.
5. The world-wide domain name system.

In this set-up, typically the ISP's node 4 is fine. (This is the DNS server whose IP address shows up in the dig porttest.dns-oarc.net test when dig is run against the ADSL router, node 2, on the LAN-facing interface.)

It seems to me that since the application software in node 1 will use the S-NAT-translated source IP address of node 2 (the router) for its application requests (like on TCP port 80), and node 2 contains a DNS server which is not using randomized UDP source ports, then the router must surely be fully vulnerable to the worst of whatever Dan has in store for us. Even though the DNS requests from this router normally only go to the ISP's two hosts (nodes 3), surely the attacker can mess around with that router to the maximum extent permitted by the vulnerability. The attacker can send DNS traffic to the router's internet-facing interface.

In other words, even though node 1 is using safe node 4 for the last hop of the DNS request chain before hitting the wide open DNS (nodes 5), the intermediate node 2 can be poisoned, and the ISP's virtuous patching will have been to no avail.

Question 2:
If the above scenario is correct, does this imply that hundreds of millions of routers of users at home or in small offices will be toast on 7 August?

Just curious…

Re: Is this typical DNS topology for ADSL routers vulnerable? Paul Vixie  –  Jul 17, 2008 8:34 AM PDT

Could you please tell me whether Dan's DNS protocol vulnerability applies to the following scenario?
...
... then the router must surely be fully vulnerable to the worst of whatever Dan has in store for us.

yes and no. the worst case is when you not only believe something that's not true but also remember it and pass it along for a while until you eventually forget it. in the scenario you're outlining, the ADSL router isn't a full DNS server, it just SNATs to the ISP's fully-randomizing DNS server. so, yes, your ADSL router is vulnerable. but since it has no cache, it's not a very fat target.

Question 2:
If the above scenario is correct, does this imply that hundreds of millions of routers of users at home or in small offices will be toast on 7 August?

"toast" isn't exactly the right word for most of them. try "garlic bread." however, some of them are running RDNS (patched or not) inside their SNAT, which will de-randomize their UDP port numbers, and they are toast. others are running RDNS (with caching) in unpatchable SOHO routers, and they are also toast. but i'd say hundreds of thousands, not hundreds of millions, since those configurations aren't all that common.

More Exhaustive Test Needed? George Kirikos  –  Jul 17, 2008 8:45 AM PDT

So, is it possible that one might pass the above "dig" port test (i.e. receive a "GOOD" or "FAIR" result), yet have a false sense of security because one is still vulnerable to the DNS cache poisoning attack?

Or is passing that test sufficient to sleep reasonably well at night? If it's not a sufficient test (i.e. it catches only a subset of all vulnerable people), perhaps a more exhaustive test is required?

George,yes, unfortunately, the false sense Duane Wessels  –  Jul 18, 2008 10:20 AM PDT

George,

yes, unfortunately, the false sense of security is a possibility, for three reasons:

1) your DNS queries may go through more than one resolver.  All resolvers in the path should be patched to be safe(er) from this vulnerability.

2) your DNS queries may go through a NAT device that does not preserve source port randomness.

3) the porttest server calculates standard deviation, which does not necessarily equate to randomness.  We think it is a good indicator, but it is not perfect.

For 1) and 2) the porttest response will tell you what IP address it received queries from.  So if that address doesn't match where you sent queries to, you know that they are either going through NAT or another resolver.

For 3) you can also check out https://www.dns-oarc.net/oarc/services/dnsentropy.  This will present the results graphically so that you can see the actual port distribution.
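Duane's advice about comparing the reported IP address against the resolver you actually queried can be automated. This small parsing sketch (mine, not an official OARC tool) pulls the reporting address, the rating, and the std dev out of a porttest TXT answer:

```python
import re

def parse_porttest(txt: str):
    """Extract (ip, rating, std_dev) from a porttest TXT string, else None."""
    m = re.match(
        r'(?P<ip>[\d.]+) is (?P<rating>\w+): .* std dev (?P<sd>[\d.]+)', txt)
    if not m:
        return None
    return m.group('ip'), m.group('rating'), float(m.group('sd'))

# A sample answer quoted earlier in this thread:
sample = ("209.244.4.18 is POOR: 26 queries in 1.5 seconds "
          "from 1 ports with std dev 0.00")
print(parse_porttest(sample))   # ('209.244.4.18', 'POOR', 0.0)
```

If the extracted IP is not the server you pointed dig at, your queries are passing through NAT or a forwarding resolver, which is exactly cases 1) and 2) above.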

My nameserver looks good, Verizon's not so much Henry Hartley  –  Jul 17, 2008 9:09 AM PDT

My question is basically the flip of some that have been asked (if I understand correctly).  If I run the test on my own nameserver (CentOS 5, bind-9.3.4-6.0.2.P1.el5_2, 12-Jul-2008 12:46), I get a GOOD, which is good:

dig @72.83.159.115 porttest.dns-oarc.net in txt
porttest.dns-oarc.net.  60 IN CNAME z.y.x.w.v.u.t.s.r.q.p.o.n.m.l.k.j.i.h.g.f.e.d.c.b.a.pt.dns-oarc.net.
z.y.x.w.v.u.t.s.r.q.p.o.n.m.l.k.j.i.h.g.f.e.d.c.b.a.pt.dns-oarc.net. 60 IN TXT "72.83.159.115 is GOOD: 26 queries in 2.3 seconds from 26 ports with std dev 14019.46"

But if I use my provider (Verizon) nameserver I get POOR, which isn't good:

dig @71.252.0.12 porttest.dns-oarc.net in txt
porttest.dns-oarc.net.  60 IN CNAME z.y.x.w.v.u.t.s.r.q.p.o.n.m.l.k.j.i.h.g.f.e.d.c.b.a.pt.dns-oarc.net.
z.y.x.w.v.u.t.s.r.q.p.o.n.m.l.k.j.i.h.g.f.e.d.c.b.a.pt.dns-oarc.net. 60 IN TXT "71.252.0.38 is POOR: 26 queries in 2.2 seconds from 21 ports with std dev 17.55"

Does that mean I've done all I can to my machine but I need to bug Verizon?  I'd think this is the sort of thing Verizon would care about and would deal with pretty quickly.

re: My nameserver looks good, Verizon's not so much Duane Wessels  –  Jul 18, 2008 10:22 AM PDT

Henry,

your analysis is correct.  The nameserver at 71.252.0.12 needs to be upgraded/patched.  Hopefully they will get it done soon.  If not you may be able to configure your systems to use your local nameserver and bypass the ISP nameserver entirely.

My Name Servers are questionable... Steven Bulls  –  Jul 25, 2008 11:33 AM PDT

So, I have 2 name servers, both running bind 9.5.0-p1, yet they both are rated "POOR: 26 queries in 3.0 seconds from 1 ports with std dev 0.00"… What now?  I ran the test from the machines themselves....  Thanks.

Re: My Name Servers are questionable... Paul Vixie  –  Jul 27, 2008 8:28 AM PDT

If you upgraded your BIND9 to one of the -P1's and you're still seeing "POOR" from "dig porttest.dns-oarc.net in txt" then it's either because you still have "query-source" set to a single port in your named.conf (in which case your syslog should be warning you about this) or because you are behind a de-randomizing NAT of some kind.
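For readers checking their own named.conf, the pattern Paul describes looks roughly like this (an illustrative fragment; the surrounding configuration is assumed):

```
// Problematic: pins all outgoing queries to one source port,
// which defeats the port-randomization patch entirely.
options {
    query-source address * port 53;
};

// After patching: omit the "port" clause so named can pick a
// random source port for every query.
options {
    query-source address *;
};
```

A patched BIND9 will log a warning at startup if a fixed query-source port is still configured, so the syslog check Paul mentions is the quickest way to rule this out.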

Re: My Name Servers are Questionable... Steven Bulls  –  Jul 28, 2008 8:01 AM PDT

Thanks, Paul.  The 'query-source' is not set (I understand that the default is random) and I don't *think* I'm behind a NATting device, but I'll check…

Thanks again…
Steve

Re: My Name Servers are Questionable... Duane Wessels  –  Jul 28, 2008 8:27 AM PDT

Steve,

If you use http://entropy.dns-oarc.net/test/ it will show you the actual port numbers received by the server.

DW

Re: My Name Servers are Questionable... Steven Bulls  –  Jul 28, 2008 8:55 AM PDT

Thanks, Duane.  BTW… what do you mean by 'use' the link?  Sorry, sometimes things have to be spelled out in order to make sense…

Thanks…

Re: My Name Servers are Questionable... Duane Wessels  –  Jul 28, 2008 9:49 AM PDT

I mean enter the URL into your web browser's Location bar.

Re: My Name Servers are Questionable... Steven Bulls  –  Jul 28, 2008 10:22 AM PDT

That's what I thought, but wanted to be sure…

I've tried that numerous times but I'm guessing it's extremely busy… I can't get to it…

Thanks again, Duane..  I appreciate your time…

Steve

