Home / News

Attack Seriously Slows Two Internet Root Servers

Online attackers have briefly disrupted service on at least two of the 13 "root" servers that are used to direct traffic on the Internet.

The attack, which began Tuesday at about 5:30 a.m. Eastern time, was the most significant attack against the root servers since an October 2002 distributed denial of service (DDOS) attack, said Ben Petro, senior vice president of services with Internet service provider Neustar Inc.

Read full story: Yahoo! News

Related topics: Cyberattack, Cybercrime, DDoS, DNS, Security


Comments

Re: Attack Seriously Slows Two Internet Root Servers The Famous Brett Watson  –  Feb 07, 2007 6:12 PM PDT

Are there any compelling reasons not to recommend caching of the whole root zone locally as a best current practice? I mean for networks that are already providing recursive DNS service, which is most of them. Under those conditions, DDoSing the root servers merely prevents updates from being distributed, rather than crippling the entire domain name system at the root.

Re: Attack Seriously Slows Two Internet Root Servers Stephane Bortzmeyer  –  Feb 09, 2007 1:58 AM PDT

I do not think it would be a good idea. First, each time you have a cache (and a local copy of the root zone is a cache), you have to manage it, invalidate stale entries, and so on. Given that the vast majority of people who use blacklists or bogon lists do not manage them and do not keep them up to date, what can you hope for here? One day, the cron job will suddenly fail and nobody will notice. As a TLD manager, I really do not want to deal with such issues ("I cannot resolve names in .fr anymore" because people have old NS records for ".fr").

Second, with the current system, the cache is managed automatically by your resolver (so it cannot fail to invalidate old entries). A few minutes after it starts, the resolver already knows most TLDs, especially big ones like ".com" or ".fr" (check on your own resolver). So, even if the root suddenly collapsed, these TLDs would still be reachable for many hours.

Third, this week's attack was the largest since 2002, and it only took down two name servers (five last time, which clearly shows that the crackers are not getting better), and only the non-anycasted ones. So, promoting a scheme that will certainly create problems (unmanaged caches at many sites) in order to solve a hypothetical problem does not seem right to me.

[Apparently, there is no formal text, such as an RFC, explaining this, maybe because it is too obvious?]

Re: Attack Seriously Slows Two Internet Root Servers The Famous Brett Watson  –  Feb 09, 2007 4:15 AM PDT

Every DNS service already maintains a cache of the root servers (usually called the "hints" file), although I grant that the size and rate of change of the root zone are greater than those of the root server hints. On the other hand, the root zone cache doesn't have to be maintained manually: configuring your server to act as a slave to the root zone is sufficient to mirror the root zone locally. This is not an advanced piece of BIND-fu.
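The configuration Watson alludes to is indeed short. A minimal sketch for BIND (the master addresses below are documentation-range placeholders, not real transfer sources; in practice you would list servers known to permit AXFR of the root zone):

```
// named.conf fragment: mirror the root zone locally (sketch).
// The masters listed are illustrative placeholders only; substitute
// servers that actually permit zone transfers of the root zone.
zone "." {
    type slave;
    file "root.zone";
    masters {
        192.0.2.1;   // placeholder (RFC 5737 documentation address)
        192.0.2.2;   // placeholder (RFC 5737 documentation address)
    };
    notify no;
};
```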

The part that seems open to question is the impact it would have on overall traffic to the root servers. My expectation is that zone transfers are larger, but sufficiently less frequent that the load on the root servers would be reduced by this practice. Perhaps someone has already conducted this research. If not, it seems like a research paper waiting to be written. Or perhaps some sites have already adopted the practice? A survey could be undertaken.

Re: Attack Seriously Slows Two Internet Root Servers Stephane Bortzmeyer  –  Feb 11, 2007 2:26 PM PDT

There is a big difference between the hints file and the root zone file. Hints are checked at startup (at least by BIND); a local root zone file is not.

There has been a long thread in the IETF DNS Operations Working Group on this issue and, with the notable exception of Mark Andrews, there seemed to be a broad consensus against the idea of a local copy.

Re: Attack Seriously Slows Two Internet Root Servers Martin Hannigan  –  Feb 11, 2007 10:09 PM PDT

There is a big difference between the hints file and the root zone file. Hints are checked at startup (at least by BIND); a local root zone file is not.

There has been a long thread in the IETF DNS Operations Working Group on this issue and, with the notable exception of Mark Andrews, there seemed to be a broad consensus against the idea of a local copy.

Which doesn't seem to make too much sense.

There are few, if any, reasons not to run local copies of a-m. Improved reliability, consistent performance, increased security, and finer control over on-net traffic loads all seem beneficial. What exactly is the problem besides control?

As long as the DoC-sanctioned file is used, there should be little problem with redirecting traffic inside a private network. IP addresses, as you know, are not property. The precedents for such activity have been in place for many years.

The two root servers that were impacted are not widely anycasted like many of the others, which can also sustain much higher traffic rates before being significantly impacted. This is much less of a story than it seems, and for networks that run local roots (there are some), this was a non-event.

Re: Attack Seriously Slows Two Internet Root Servers Stephane Bortzmeyer  –  Feb 12, 2007 1:26 AM PDT

There are few, if any, reasons not to run local copies of a-m. Improved reliability, consistent performance, increased security, and finer control over on-net traffic loads all seem beneficial. What exactly is the problem besides control?

The main reason (but I advise everyone to read the original thread; I provided a link to the archive) is the risk of staleness. Many operators will not check whether the copy is properly refreshed, so it will become one more opportunity to screw up.

The experience with DNS blacklists and IP bogon lists is quite frightening: many are not kept up to date. When the cron job stops working, for one reason or another, nobody notices.
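The silent failure mode described here is easy to guard against with even a crude freshness check. A hedged sketch in Python (the seven-day threshold is an assumption for illustration, not a recommendation):

```python
from datetime import datetime, timedelta

def is_stale(last_refresh: datetime, now: datetime,
             max_age: timedelta = timedelta(days=7)) -> bool:
    """Return True if the local zone copy is older than max_age.

    In practice last_refresh might come from the zone file's mtime
    (os.path.getmtime) or from a timestamp logged by the transfer job.
    """
    return now - last_refresh > max_age

# Example: a copy last refreshed ten days ago is flagged as stale.
last = datetime(2007, 2, 1)
print(is_stale(last, datetime(2007, 2, 11)))  # True
```

A check like this, wired to an alert, would catch the dead cron job that nobody notices.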

Re: Attack Seriously Slows Two Internet Root Servers Martin Hannigan  –  Feb 12, 2007 7:30 AM PDT

The main reason (but I advise everyone to read the original thread; I provided a link to the archive) is the risk of staleness. Many operators will not check whether the copy is properly refreshed, so it will become one more opportunity to screw up.

IMHO, the arguments to not support operating local copies of root zones are weak at best.

Operating a local copy of a root server is not difficult, and the benefits far outweigh the risks. Ensuring that a zone is refreshed is an operational function, and if the operator of a local zone "breaks" his zone, that should be an issue between the customer and the provider.

Re: Attack Seriously Slows Two Internet Root Servers Stephane Bortzmeyer  –  Feb 20, 2007 1:21 PM PDT

Of course, it is difficult to operate a local copy. You have to be sure it is up-to-date, you have to monitor it, etc.

Case in point: ask ns.eu.org about ".eu" or ".mobi". It does not know them. Now ask it for the SOA serial number and see how long it has gone unmaintained…
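The SOA-serial check suggested above works because the root zone's serials follow the YYYYMMDDnn convention, so the serial alone reveals the copy's age. A sketch (assuming that convention holds, as it does for the root zone but not for every zone):

```python
from datetime import date

def serial_date(serial: int) -> date:
    """Extract the date encoded in a YYYYMMDDnn-style SOA serial."""
    s = str(serial)
    return date(int(s[0:4]), int(s[4:6]), int(s[6:8]))

def age_in_days(serial: int, today: date) -> int:
    """Days since the zone copy with this serial was generated."""
    return (today - serial_date(serial)).days

# A serial from February 1st, inspected on February 20th:
print(age_in_days(2007020100, date(2007, 2, 20)))  # 19
```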

Re: Attack Seriously Slows Two Internet Root Servers Martin Hannigan  –  Feb 20, 2007 1:40 PM PDT

Of course, it is difficult to operate a local copy. You have to be sure it is up-to-date, you have to monitor it, etc.

Surely you aren't saying that only the current root server operators are qualified to operate reliable root servers?

While I understand your fear, I don't necessarily agree, and I'm sorry, but day-to-day operational arguments continue to be the weakest.

Policy is the key regardless of who the operator is.
