
How to Abolish the DNS Hierarchy… But It's a Bad Idea

Steven Bellovin

There's been a fair amount of controversy of late about ICANN's decision to dramatically increase the number of top-level domains. With a bit of effort, though — and with little disruption to the infrastructure — we could abolish the issue entirely. Any string whatsoever could be used, and it would all Just Work. That is, it would Just Work in a narrow technical sense; it would hurt innovation and it would likely have serious economic failure modes.

The trick is to use a cryptographic hash function to convert a string of bytes into a sequence of hexadecimal digits. For example, if one applies the SHA-1 function to the string,

Amazon.com

the result is a46af6931d9dace2200617548fab3274549e308f. Add a dot after every pair of hex digits, tacking on a suffix like .arb (for "arbitrary", since .hash might be seen as having other connotations), and you get,

a4.6a.f6.93.1d.9d.ac.e2.20.06.17.54.8f.ab.32.74.54.9e.30.8f.arb

which looks like a domain name, albeit a weird one. It not only looks like one, it is; that string could be added to the DNS today with no changes in code. We could even distribute the servers; at every level, there are 256 easy-to-split subtrees. So what's wrong?
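The construction is simple enough to sketch in a few lines. A minimal Python version (the .arb suffix is, as above, just a placeholder):

```python
import hashlib

def arb_name(s: str) -> str:
    """Map an arbitrary string to a DNS-compatible name under .arb:
    SHA-1 the string, insert a dot after every pair of hex digits,
    and append the .arb suffix, as described above."""
    digest = hashlib.sha1(s.encode("utf-8")).hexdigest()
    labels = [digest[i:i + 2] for i in range(0, len(digest), 2)]
    return ".".join(labels) + ".arb"

print(arb_name("Amazon.com"))
```

Every name comes out as twenty two-character hex labels plus the suffix, which is well within the DNS limits of 63 octets per label and 255 octets per name.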

The technical limitation is that every endpoint would have to be upgraded to do the hashing. Yes, that's a problem, but we've been through it before; supporting internationalized domain names required the same thing. And it works.

But — how do endpoints know to do the hashing in this scheme? Something in the URL bar of a web browser? There are lots of things on the net that aren't web browsers; how will they know what to do? You can't necessarily tell from a string whether it should be used literally or via this hashing scheme; "Amazon.com" appears to be the legal name of the corporation.

There's another problem: canonicalization. Similar strings will produce very different hash values. Here's an example:

New York Times          7e145e463809ea5e7c28f2ddf103499f942c9ea3
The New York Times      1950c50c10f288dd6e9190361c968e1b8c4a3775
N.Y. Times              e69011929d6d30347ddca11c7955a07df8390984
NY Times                48b6b7d57f0ed2885816f1df96da1ffa86f09dda

We could no doubt define some set of rules that would handle many common cases. Equally certain, we'd miss many more. Companies could think of their own rules, but if they missed some we'd be back to cybersquatting and typosquatting. This would be worse, though, because the names are so spread out.
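One could imagine a canonicalization pass applied before hashing. The rules below are purely illustrative assumptions — lowercase, strip punctuation, drop a leading article — and they demonstrate the point: some common variants collapse to the same name, while others still escape.

```python
import hashlib
import re

def canonicalize(s: str) -> str:
    """A hypothetical canonicalization pass; any such rule set
    handles some variants and misses others."""
    s = s.lower()
    s = re.sub(r"[^\w\s]", "", s)          # drop punctuation ("N.Y." -> "ny")
    s = re.sub(r"^(the|a|an)\s+", "", s)   # drop a leading article
    return " ".join(s.split())             # collapse whitespace

def arb_digest(s: str) -> str:
    """Hash the canonical form, as the scheme would."""
    return hashlib.sha1(canonicalize(s).encode("utf-8")).hexdigest()

# These variants now hash to the same name...
assert arb_digest("New York Times") == arb_digest("The New York Times")
assert arb_digest("N.Y. Times") == arb_digest("NY Times")
# ...but the abbreviated and spelled-out forms still diverge:
assert arb_digest("NY Times") != arb_digest("New York Times")
```

Expanding abbreviations like "NY" would need a dictionary of rules that grows without bound — which is exactly the squatting opening described above.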

The real issue, though, is economic: who would run the different pieces of .arb? There are currently about 100M names in .com. Let's allow for growth and assume 1,000,000,000 names. To handle canonicalization, assume another factor of 10, for about 10B names. Does that work? To a first approximation, sure; we can delegate at each period in the name, and there are 256 values at each level. That means that going down just two levels, we could have 65,536 different registries, each handling about 150K names. That's easy to do, but a given registry could handle more than one zone. Let's assume that 1.5M names is a good size (which is somewhat challenging, though it's clearly possible since it works today). That means we'd need about 6,600 registries. But they have no way to do marketing; there's no way to target any particular business segment, since names are mapped to more or less random parts of the name tree. If a registry failed, an unpredictable portion of the net would suddenly be unreachable.
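The arithmetic above is easy to check:

```python
# Back-of-the-envelope numbers from the paragraph above.
names = 10_000_000_000             # ~1B names, times 10 for canonical variants
registries_two_levels = 256 ** 2   # delegating at the first two labels

print(registries_two_levels)            # 65536
print(names // registries_two_levels)   # 152587 names per registry
print(names // 1_500_000)               # 6666 registries at 1.5M names each
```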

Most of us never see registries; when we want to create a new domain, we do business with a registrar. But every registrar would need to do business with every registry! The number of relationships would get ungainly, and again, there's no way to do targeted marketing. The registrars for, say, .museum can target museums, while ignoring, say, banks. With this scheme, everyone is doing business with everyone. It's great to have a global market; it's also very expensive.
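To put a rough number on "ungainly": the registrar count below is an assumption (a few thousand accredited registrars is the right order of magnitude), while the registry count comes from the estimate above.

```python
registrars = 2_000   # assumed number of accredited registrars (illustrative)
registries = 6_600   # from the back-of-the-envelope estimate above

# Every registrar doing business with every registry:
print(registrars * registries)   # 13200000 pairwise business relationships
```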

By Steven Bellovin, Professor of Computer Science at Columbia University


Comments

.onion already uses a similar system jeroen  –  Jul 13, 2011 1:05 AM PDT

.onion already uses a similar system where they put the hash of the Tor node in DNS and voila, you can also use that to cryptographically know you are talking to at least somebody who was able to come up with that same hash.

From a DNS point of view though, this is not an issue, as the biggest problem with DNS is: people are making boatloads of money with virtual bits.

As for abolishing the DNS hierarchy, that hierarchy is already gone:
- ICANN is now allowing new TLDs (hopefully not too many will pop up)
- Mostly everything is in .com, and various companies have their company name as a TLD.

As such, to come back to your question (how to abolish DNS hierarchy), and looking at your examples (The Times etc), you can already solve that problem very easily and it is already being done like that by most people (excluding marketing folks) with a simple solution: Google.

You just type in what you want, and Google or Bing, or Yahoo or various others will tell you where that is. The good thing is that those companies LOVE to tell you where you want to go, they LOVE to record your search queries and sell that information, as that information is money to them, especially with a few more bits of advertisement if you don't run AdBlockPlus ;)

The DNS does not really need a change; it is the marketing people who need a change and realize that not everything needs its own domain. www.moviecompany.us/movies/awesomemovie/ has a much higher search rating on the day of launch than www.awesomemovie.com where nobody links to. Of course, if you give extra money to the search engines they will rank you at the top anyway, but was that not the point to avoid giving money away? :)

~~ It is all about the money.....

Good idea... unless one were collecting a 'tax' or 'fee' per sld Jothan Frakes  –  Jul 13, 2011 1:06 AM PDT

Hypothetically if there were an entity charging $6,250 per quarter or $.25 per SLD name, the thought of not getting more than $25k/yr from such 'registrations' might be a non-starter… as this idea is capped at 256 SLD registrations (00.arb ... FF.arb)

Wouldn't you need to involve a wildcard DNS record for this in some form as well?

Good idea, bad implementation. Phillip Hallam-Baker  –  Jul 14, 2011 2:48 PM PDT

I think it's a good idea, just a lousy implementation.

Or rather, this is how we would approach implementing the DNS if we were starting from scratch today. It is only a lousy proposal insofar as you can't get there from where we are now. We are 30 years into a different approach.

From a technical perspective there are two parts to a registry, there is the production of the zone and there is the publication of the zone.

The first is a reasonably manageable technical task that can be performed by pretty much any one of twenty potential suppliers. So put that out to tender every five years on a rolling basis. Managing the interaction with the registrars is not cost free, but it is pretty well constrained. And even if the system does go down for some period of time, it is not a disaster; the consequences can be limited to preventing updates to the registry.

The second part is the hard part. How to publish a large zone. When the DNS was first proposed the only technology available was to have redundant servers and hope they could cope with the load. One result of this limitation was that we ended up with a situation where the only party with the $500 million in infrastructure necessary to support .com was the incumbent. If the root is genuinely opened up for arbitrary registrations the same problem will re-occur at the root.

Today we have multicast. One consequence of multicast is that it is not at all necessary for every publication server to be run by the same entity. We could have ten, twenty, a hundred companies all performing a part of the publication task.

Now imagine that we have a competitive tendering system that allows companies to bid to support some percentage of the publication task. ICANN would be required to ensure that it maintained purchasing power over suppliers by never allowing any one supplier to provide more than 20% of the publication needs.

The final part of the scheme would be a measurement contractor that would be continually checking to see which suppliers were within their SLA requirements and which were not. Suppliers who were out of their SLA conditions would lose revenue and eventually lose their contract.

