
The Long Gestation and Afterlife of New gTLDs

ICANN continues to flail, pointlessly. The latest in a series of missteps that could easily have been avoided is its set of recommendations on a report about the potential for confusion and misaddressing when an organization's internal network names match an applied-for new gTLD, and its routers and/or DNS are misconfigured badly enough that someone typing in a new gTLD name might end up in the middle of someone else's network.
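
To see what that collision risk looks like from the operator's side, here is a minimal sketch (assuming the dnspython package is installed; the suffix list is purely illustrative) that checks whether the suffixes used for internal names are delegated as TLDs in the public DNS, which is exactly the situation in which leaked queries can start resolving outside the local network:

    # Sketch only: flag internal name suffixes that are also delegated public TLDs.
    # Assumes the dnspython package (version 2.x); the suffix list is illustrative.
    import dns.resolver

    INTERNAL_SUFFIXES = ["corp", "home", "mail", "internal"]  # hypothetical examples

    def is_delegated_tld(suffix: str) -> bool:
        """True if an NS query for the suffix gets an answer in the public DNS."""
        try:
            dns.resolver.resolve(suffix + ".", "NS")
            return True
        except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer, dns.resolver.NoNameservers):
            return False

    for suffix in INTERNAL_SUFFIXES:
        verdict = "collision risk" if is_delegated_tld(suffix) else "not delegated"
        print(f".{suffix}: {verdict}")

An organization that finds a match here can usually fix the problem at its own end, for example by serving its internal zone authoritatively on its own resolvers, rather than waiting for the whole program to be re-engineered around it.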

You’d think that if someone misconfigured their machinery it might be a good idea to get them to fix it, instead of reconfiguring the entire new gTLD program, particularly when it’s not clear that anyone would notice. But that’s what ICANN has done. New gTLD applicants have now been consigned to three categories:

  • Hell (.corp and .home), which means everlasting torment. These names are unlikely ever to be delegated, but so that the torture can continue, there is always hope.
  • Limbo (20% of all new gTLD apps), which means waiting around for six months while ICANN slowly decides that there is no danger.
  • Purgatory (the remaining 80%), which means that no names may be made live (although they may be allocated/sold) for at least 120 days after their contract has been signed.
  • In case you were wondering, there is no Paradise.

This is all based on a 197-page study by Interisle, which purports to show which new gTLDs are most at risk for “causing” confusion. I will leave it to others more technically gifted than me to talk about the various ways in which the ICANN staff recommendations based on this study do not reflect reality. Suffice it to give just one example: “HSBC” (in Limbo) is probably a name found almost exclusively in HSBC’s own network, and since HSBC won’t be selling names anyway, and is presumably happy to fix any errors in its network in double-quick time, it hardly warrants a long delay in delegation. There are other similar common-sense snafus in the staff recommendations.

ICANN could have avoided what promises to be another embarrassment by publishing the study and asking the wider community what to do about it, instead of making ad-hoc recommendations. There are actually some smart people who could have helped turn this report into something actionable and reasonable—the multi-stakeholder model has many advantages, if ICANN would only use them.

Here are some of the flaws in the recommendations that will be raising eyebrows and tempers across the globe:

  • The 20% cutoff (the dividing line between Limbo and Purgatory) is entirely arbitrary. There is no scientific basis for such a division.
  • The Curve of Confusion (the distribution of names that, according to the report, may cause confusion) follows a power curve. There are many orders of magnitude between the top few and the rest of the 20%. Basically, there are 5 to 10 names with any real potential for trouble, and a long way between those and any of the others in the 20% (see the sketch after this list).
  • There is no good reason given (nor any that I can imagine) for taking six months to come up with further recommendations.
  • There is no consideration given to the likelihood of confusion at the end-user level, and none to the damage it would cause if it did occur. Some have noted that network misconfiguration of this kind already occurs with some frequency in the .com and .net zones, and yet the Internet continues to function.
  • There is no consideration given to how trivially easy it is (in many cases) to fix any errors that do occur.
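
To make the power-curve point concrete, here is a minimal sketch of the kind of calculation involved. The CSV file name and column names are placeholders for per-string query counts distilled from DITL-style data, not anything published in the report:

    # Sketch only: how concentrated are root-server queries across applied-for strings?
    # Assumes a CSV with "string,queries" columns; the file name and column names
    # are placeholders, not taken from the Interisle report.
    import csv

    def top_share(path: str, top_n: int = 10) -> float:
        """Fraction of all observed queries accounted for by the top_n strings."""
        with open(path, newline="") as f:
            counts = sorted((int(row["queries"]) for row in csv.DictReader(f)), reverse=True)
        total = sum(counts)
        return sum(counts[:top_n]) / total if total else 0.0

    if __name__ == "__main__":
        print(f"Top 10 strings: {top_share('ditl_counts.csv'):.1%} of all queries")

If the distribution really does follow a power curve, the top handful of strings will account for the overwhelming bulk of the traffic, which is exactly why a flat 20% cut-off lumps very unlike things into the same bucket.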

The New TLD Applicant Group (NTAG), which comprises many of the new gTLD applicants, will be writing to ICANN on these (and other) obvious points, and will also address the methodology and reliability of the study itself.

The new gTLD program itself is a recognition that the ICANN Board and staff shouldn’t be in the business of choosing which new gTLDs get delegated. ICANN will be receiving a number of suggestions over the coming weeks, from NTAG and from individual companies, on how to undo this debacle-in-the-making. There are ways to mitigate potential problems caused by network configuration errors short of semi-randomly condemning new gTLDs to needless and costly delays.

Comments

Honestly I think when it comes to Todd Knarr  –  Aug 7, 2013 8:16 AM

Honestly I think when it comes to the local use of TLDs, just divide things up into two categories:

1. A couple of obvious choices for local use, e.g. .home, .corp or .lan. Those get reserved, guaranteed never to be delegated in the global DNS, guaranteed safe for anybody to use for their own local namespace.

2. Everything else. No guarantees they won’t get used, no restrictions on their use as TLDs. Anyone using them either fixes their network to use something from category 1, or nobody’s going to have any sympathy for them.

All it takes is something bureaucracies seem to lack: the ability to make a decision and move on. I frankly don’t think the whole massive load of new TLDs is particularly useful, and IMO the movement back towards a flat namespace reminiscent of the ARPAnet hosts file is a Bad Thing for all the same reasons we moved away from the hosts file in the first place, but if we’re going to do it then do it and be done with it.
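
Category 1 is, in effect, a short list of special-use suffixes that a resolver should never forward to the public DNS. A minimal sketch of that behaviour, with the suffix list and refusal policy as illustrative assumptions rather than anything ICANN or the IETF has specified:

    # Sketch only: a resolver-side guard that keeps reserved local-use suffixes
    # from ever being forwarded to the public DNS. The suffix list is illustrative.
    RESERVED_LOCAL_SUFFIXES = (".home", ".corp", ".lan")

    def should_forward(qname: str) -> bool:
        """False for names under a reserved local-use suffix; they stay local."""
        name = qname.rstrip(".").lower()
        return not name.endswith(RESERVED_LOCAL_SUFFIXES)

    assert should_forward("www.example.com")
    assert not should_forward("fileserver.corp.")

Anything that fails the check would be answered locally (or refused), so a category-1 name could never leak a query to the root.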

20% Kevin Murphy  –  Aug 7, 2013 3:00 PM

The 20% cut-off may not be wholly scientific, but I don’t think it was completely arbitrary either. It’s based on the fact that 20% of the applied-for strings get more root server queries than .sj, which is the existing TLD with the fewest daily queries, according to the DITL data. Page 80 of the Interisle report may explain it better.
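
For what it’s worth, the threshold Kevin describes is a one-line comparison once you have the per-string counts; the CSV layout and the .sj figure below are placeholders, not numbers from the report or the DITL data:

    # Sketch only: which applied-for strings see more root queries than .sj?
    # The CSV layout and the .sj baseline are placeholders, not published figures.
    import csv

    def strings_above_baseline(path: str, sj_daily_queries: int) -> list[str]:
        """Applied-for strings whose daily query count exceeds the .sj baseline."""
        with open(path, newline="") as f:
            return [row["string"] for row in csv.DictReader(f)
                    if int(row["queries"]) > sj_daily_queries]

    # Usage (with real numbers substituted):
    # flagged = strings_above_baseline("ditl_counts.csv", sj_daily_queries=...)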

.sj isn't even active Antony Van Couvering  –  Aug 7, 2013 4:19 PM

Kevin - your note might have some weight were it not for the fact that .sj doesn't have a single domain delegated in it, and never has, and if the Norwegians have their way, it never will. It's completely dormant. Its closest analogy is a not-yet-delegated gTLD. See http://www.norid.no/omnorid/bv-sj.en.html. Antony

No weight intended Kevin Murphy  –  Aug 7, 2013 4:40 PM

I was merely pointing out that the 20% number did have a rationale behind it; it wasn't just plucked out of thin air.

Quite right Jay Daley  –  Aug 7, 2013 9:43 PM

I agree that it is pretty short-sighted of ICANN to have published their recommendations at the same time as the report without harnessing the community. This is a very complex area and the more eyes and ingenuity the better.  In comparison to the complexity of the problem the recommendations are remarkably lightweight, bordering on embarrassing.  They were obviously rushed through by an exec who wanted ICANN to be seen to be on top of this rather than someone who actually understands the depth of this issue.
