
Is the Risk Real With the New gTLD Program? (An Interview with Verisign)

It’s late in the new gTLD day, and the program looks to be inching ever closer to the finish line. Yet last-minute hiccups seem to be a recurring theme for this ambitious project to expand the Internet namespace far beyond the 300-odd active TLDs in existence today (counting generics and country codes). The drive for growth is already underway, with 63 gTLD contracts signed as of mid-September. The list includes incumbents like .COM, of course, but also a spate of the first new strings that are set to be commonplace for tomorrow’s Internet users.

But will those users find themselves at greater risk because of this namespace expansion? That’s what several parties have been asking in recent months. Not that increasing the number of gTLDs is inherently dangerous. Rather, the risk appears to lie in the way it is currently being done, if you listen to ICANN’s Security and Stability Advisory Committee (SSAC), which has put out several reports, such as SAC045 and SAC046, recommending that action be taken to mitigate the risk. One of the most recent prods from SSAC was SAC059, published on April 18, 2013, which underscored the need for additional interdisciplinary study.

Others have chimed in. Another ICANN body, the At-Large Advisory Committee, has also called for a more determined risk mitigation strategy. Most recently, in response to ICANN’s proposed name collision risk mitigation strategy, several companies and organizations have voiced their concerns as well, including Verizon, Microsoft and Yahoo, the United States Telecom Association and the Online Trust Alliance.

But most noticeable of all, if only because it is the incumbent of all incumbents in the domain ecosystem, were Verisign’s appeals for ICANN to act. In March of this year, the .COM registry wrote to ICANN CEO Fadi Chehadé with a study on “new gTLD security and stability considerations” calling for risk mitigation actions. More recently, Verisign has followed this up with various comments, such as one on the ICANN proposal to mitigate the risk of name collisions created by the delegation of new gTLDs. Verisign also submitted analysis contradicting statements by .CBA applicant Commonwealth Bank of Australia that its TLD is safe. As if that weren’t enough, the company followed up with analysis of other applied-for TLDs in its drive to “illustrate the need to undertake qualitative impact assessments for applied-for strings.”

In short, Verisign is saying that to launch new gTLDs now would be tantamount to jumping off a cliff and then seeing if, by some stroke of luck, one might sprout wings and fly away.

Others are saying this is, at best, needlessly alarmist and, at worst, a protectionist play from the company that has the most to lose from new strings coming to market and taking aim at .COM’s existing dominance.

Rather than condemning Verisign outright, I wanted to try to understand what their problem with the current state of the new gTLD program really is. So I spoke with their Chief Security Officer, Danny McPherson, making it clear that I would use his answers in this article. My questions were really about trying to understand what’s behind the alarm bells, and whether they should be heeded, or simply silenced as we stand aside and let innovation stride forth. Here are some excerpts from the 30-minute telephone conversation I had with Danny this week.

* * *

SVG: Isn’t all this really about Verisign protecting its own interests?

DMP: Well, we’ve certainly heard that a lot. Verisign has many roles in relation to the new gTLD program. Those include serving applicants representing approximately 200 applied-for new gTLDs who have contracts with Verisign to supply back-end registry services. As you can imagine, this is not an easy line for Verisign to walk with them.

But the substance of what we’ve talked about on the technical side hasn’t been dismissed by anyone since our March report, which highlighted the need to address the SSAC recommendations. For the most part, the recommendations we’ve made are simply reiterations of things that SSAC and other ICANN-commissioned experts have recommended. Yet we seem to be the only ones who want to hold anyone accountable for delivering on those. However, others are now realising the problem with ICANN’s approach and have filed comments with ICANN.

Verisign is not only the operator of the A and J root servers. We also have a unique role as zone publisher, and in that role, we’re actually the ones that provision these new gTLDs in the root zone file and publish that to all the root operators. As part of our cooperative agreement for that, we have security and stability obligations. Those extend not only to the root system itself, but also to looking at what the consequences of doing something bad would be. So we have an obligation to look at this, and to look at what other experts have said and what our own experts think.

We have done that and we see a lot of outstanding issues. We have to be concerned about security and stability obligations. It isn’t just about protecting ICANN or SSAC or the root server system itself, it’s also about what the consequences to users are. Are there going to be new exploits or vulnerabilities because some string is delegated? Is it potentially going to cause disruption to some piece of infrastructure? Or is it going to make some element of the network less stable or predictable?

It seems as though everyone’s lost sight of that and isn’t worried about the consumers or the long-term effects of this. This is part of the reason we did the CBA analysis. It’s about highlighting that there’s a whole array of attributes, something we call the “risk matrix,” that need to be considered because we know each one of these represents some level of risk and may result in an actual threat if we have a motivated, capable adversary.
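
To make that risk concrete for readers, here is a minimal sketch of my own (an illustration, not Verisign’s methodology) that assumes the third-party dnspython library. An internal name under an applied-for string such as .CORP only fails harmlessly today because the root answers NXDOMAIN for it; once the string is delegated, the same lookups can begin to succeed and leak internal traffic to whoever controls the colliding name.

```python
# Illustration only, not Verisign's methodology: check whether an applied-for
# string is currently delegated in the root. Names under an undelegated string
# fail with NXDOMAIN today; after delegation, the same lookups start succeeding,
# which is the collision scenario described above.
import dns.exception
import dns.resolver  # third-party: pip install dnspython

CANDIDATES = ["corp", "home", "mail"]  # the "known high risk" strings cited later in the interview

for tld in CANDIDATES:
    try:
        ns = dns.resolver.resolve(tld + ".", "NS")
        print(f".{tld}: delegated ({len(ns)} NS records); internal names can now collide")
    except dns.resolver.NXDOMAIN:
        print(f".{tld}: NXDOMAIN; internal uses of .{tld} still fail harmlessly")
    except dns.exception.DNSException as exc:
        print(f".{tld}: lookup failed ({exc})")
```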

SVG: Verisign isn’t the only capable registry and infrastructure manager around. Neustar also has a proven track record, yet it seems to be in strong disagreement with your assessments. Have you looked at its analysis?

DMP: Definitely. The reality is that the analysis anyone has done to date has mostly been based on the DITL (Day in the Life of the Internet) data, which is a 2-day snapshot taken earlier this year. If you’re going to use occurrence or incidence of something as a measure, then do it over a reasonable time frame and data set.

I believe we should only be using 2 classifications for strings right now: known high risk, like .MAIL, .CORP and .HOME… and uncertain. I think anything we base upon the DITL data and a 2-day snapshot is going to be inaccurate. You may have some level of precision within that data, but it’s by no means going to be accurate. Don’t draw an arbitrary line at 20% based upon what’s secure and not secure in a 2-day snapshot of data, instead of using measurement apparatus across the system that allows you to do it intelligently and in a sustainable way. There need to be objective criteria. Query count alone over a 2-day period is not objective. It’s across a subset of the root system and doesn’t consider other elements in the DNS ecosystem. So I don’t think anyone with the DITL data set is qualified to make an objective decision about what constitutes risk and what doesn’t.

I do see a lot of people saying strings are not risky when, quite frankly, they don’t have the information to be able to make that judgement in a qualitative way. We have the Interisle report and the DITL data, but I do not believe these constitute a set of data that affords enough visibility for people to be able to draw those lines. That’s the reason we published our CBA analysis and did that analysis over a 7-week period. We took data across a reasonable data set, only 15% or so of the root server system, but it was a more objective data set with a larger base. What we saw is illustrative of the types of systems that could be impacted by this. Those are the types of consumers that could be impacted by those delegations. These aren’t things that should be subjective. You can measure this, but you have to take a step back and make it a point to get the right data and to define that objective matrix. Right now, this has not been done.
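
For readers who want to see what a query-count measurement actually involves, here is a rough sketch of my own. It is not the Interisle or Verisign method, and the log file name and its one-query-name-per-line format are invented for illustration. It also shows the limit Danny describes: a raw count over a short window tells you how often a string is queried, not what would break if it were delegated.

```python
# Illustration only: tally how often each applied-for string appears as the
# rightmost label in a sample of root server query names. The file name and
# format are hypothetical; the real studies worked from DITL packet captures.
from collections import Counter

APPLIED_FOR = {"cba", "club", "coffee", "website"}  # subset discussed in this interview

def rightmost_label(qname: str) -> str:
    """Return the top-level label of a query name, e.g. 'host.example.cba.' -> 'cba'."""
    return qname.rstrip(".").rsplit(".", 1)[-1].lower()

counts = Counter()
with open("root_query_sample.log") as log:  # hypothetical log: one query name per line
    for line in log:
        qname = line.strip()
        if not qname:
            continue
        label = rightmost_label(qname)
        if label in APPLIED_FOR:
            counts[label] += 1

# A raw count (or a percentile cut such as the 20% line mentioned above) measures
# how often a string is queried, not which systems would break once it resolves.
for label, n in counts.most_common():
    print(f".{label}: {n} queries in sample")
```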

SVG: I see you have also studied other TLDs apart from .CBA?

DMP: Yes, we submitted analysis of .CLUB, .COFFEE and .WEBSITE. One was already classified as “uncalculated risk” by ICANN, but the others are in the 80% low-risk category. Yet we showed a number of namespaces and regional affinities that query each of these strings. That’s an example of precisely why we don’t believe you can currently draw a line between uncalculated risk and low risk: there’s no objective matrix, and you can’t do it based on query volume.

SVG: But more analysis means more delay in the new gTLD program, doesn’t it?

DMP: If something’s worth doing, it’s worth doing right. You can either take this approach of death by a thousand cuts, or you can step back and do this correctly. We realise ours is not a popular position. But we also know that it’s a responsible one. Everything we’ve said stands on technical merit. No one’s claimed it is technically inaccurate.

There are definitely some recommendations in SAC045 and SAC046 that need to be implemented. ICANN should have had a plan in place so that, on Reveal Day in 2012, it could have begun forewarning potentially impacted parties of the impending delegation of new strings, so those parties could mitigate the potential impact that the delegation of a particular string might have on their operating environment.

Other recommendations pertain to protecting the root server system itself: making sure all the root operators are performing to par, and, if any negative consequences are experienced as a result of the delegation of a new gTLD, making sure the root zone partners have a way to quickly back that out, or at least assess the problem. If there are impacts, we ought to have some visibility into that, some early warning capability. These are all prudent steps any engineer would want to take. These things still haven’t been done. We still haven’t taken an intellectually honest, sound engineering approach to solving these issues.

While ours is not a popular position to be in, we believe it’s the right position. There seems to be this romantic notion that these things happen magically and there’s no need to worry about the impact to consumers. The reality is there’s a lot of risk today. We have 3 billion Internet users and hundreds of billions in commerce in the U.S. alone that are based on the Internet. So the consequences of not doing this could be much worse for ICANN and the community, and the Internet, than stepping back and doing it properly.

The sooner ICANN takes responsibility and recognises that these outstanding issues, outlined by its own advisory committees in good faith and with good reason, still need to be resolved, the better. The sooner these steps are taken, the sooner new gTLDs can be delegated responsibly. If ICANN and the community had heeded our call back in March, six months ago, we’d probably be done with this by now and much closer to seeing new gTLDs in the marketplace.

By Stéphane Van Gelder, Consultant
