
SECSAC Special Meeting on Site Finder: A Technical Analysis

After attending the afternoon ICANN Security & Stability Committee meeting, I realized that the issues involved fall into several related but independent dimensions. Shy person that I am *Cough*, I have opinions on all of them, but I think it’s worthwhile simply to be able to explain the Big Picture to media and other folks who aren’t immersed in our field.

In these notes, I’m trying to maintain neutrality about the issues. I do have strong opinions about most of them, but I’ll post those separately, often dealing with one issue at a time. For those of you new to dealing with the media, it’s often best to put things into small, related chunks.

1. Governance Issues

Did Verisign have the right, regardless of technical merit, to do what it did without prior warning? I’m simply asking “did they do anything contractually or otherwise legally forbidden,” not “was it strongly counter to the assumptions of the Internet” or “were they mean and nasty.”

The news/political interest here is whether any other group could or should have been able to influence this, or whether we need new governance mechanisms.

Has this revealed any conflict of interest issues? To what extent should a registry be able to act unilaterally?

These points are meant to be examined here in the context of law, regulation and governance, as opposed to the less formal points in #2.

2. Process Issues
Slightly different from governance

Moving away from the letter of their contracts, what should they have done (if anything) about open comment and consensus-building? This is vaguely making me wonder whether they had evidence of WMDs… oops, wrong controversy.

Assume they had no requirement for prior discussion. What requirements, if any, did they have for testing and validating their approach, given that a top-level registry occupies a unique connectivity position with special privileges?

3. Internet Architectural Impact
Slightly different from effects on innovation and/or effects on existing software

I think it’s reasonable to state that Sitefinder, and changes of “internal” behavior generally, violate at least the traditional end-to-end and robustness principles. This should be considered in the spirit of the core vs. end state discussion in RFC 3439, and the architectural work going into middleboxes.

A general question here: to what extent is it important that the Internet remain consistent with its relatively informal architectural assumptions? When teaching, even among the newer technical folks, I rarely hear anyone show awareness of the architecture work; they think “7 layers” is the ultimate answer [1].

[1] I spent over five years of my life in OSI research, development and promotion. We may have had the answer, but, unfortunately, we never could articulate the question. That is a lesson here.

4. Is the Internet the Web? Are All Internet Users People?

I don’t think it’s unfair to say Sitefinder is web-centric. The current responses may be useful to people who can interact with the service. Apparently, there are patches that will help with mail response, and even with anti-spam tools.
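
To illustrate the anti-spam side of this, here is a minimal sketch, assuming a filter that uses “does the sender’s domain resolve at all?” as one rejection signal. The domain name is invented for illustration, and only the Python standard library is used; a registry wildcard that answers for every name means this kind of check can no longer fire on non-resolution alone.

```python
# Hypothetical sketch: a spam filter that rejects mail when the sender's
# domain does not resolve. A registry wildcard makes every syntactically
# valid name in the TLD resolve, so this check quietly stops working.
import socket

def sender_domain_resolves(domain):
    """Return True if an address lookup succeeds (or a wildcard answers for it)."""
    try:
        socket.gethostbyname(domain)
        return True
    except socket.gaierror:
        return False

# Pre-wildcard: a forged domain such as "no-such-sender-xyz123.com" fails
# to resolve, and the message can be rejected outright.
# Post-wildcard: the same lookup returns the registry's response address,
# so the filter would need a registry-specific workaround (comparing the
# answer against the known wildcard address) to behave as before.
if sender_domain_resolves("no-such-sender-xyz123.com"):
    print("cannot reject on non-resolution")
else:
    print("reject: sender domain does not exist")
```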

But what of other protocols, especially those intended to run without human intervention? What about failover schemes that employ DNS non-resolution as an indication that it’s time to pick an alternate destination?
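
To make the failover point concrete, here is a minimal sketch, assuming a client that walks an ordered list of candidate hostnames and treats a failed lookup as “move on to the next one.” The hostnames are invented for illustration, and only the Python standard library is used.

```python
# Hypothetical sketch: failover driven by DNS non-resolution. A name that
# does not resolve is taken as "this destination is gone; try the next."
import socket

CANDIDATES = ["primary.example-service.com", "backup.example-service.net"]

def pick_destination(candidates):
    """Return the address of the first candidate that resolves, else None."""
    for host in candidates:
        try:
            # Raises socket.gaierror when the name does not resolve (e.g., NXDOMAIN).
            return socket.gethostbyname(host)
        except socket.gaierror:
            continue  # non-resolution is the failover signal
    return None

# With a wildcard answering for every name in the TLD, a retired or mistyped
# primary still "resolves" (to the registry's server), so this logic never
# advances to the backup.
print(pick_destination(CANDIDATES))
```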

Is the apparent trend to move from “everything over IP” to “everything over HTTP” a good one? Could it be a good one in well-defined subsets of the Internet?

5. Effects on Innovation

Innovation, and the stifling of innovation, have come up quite a bit. If one looks at the End-to-End Assumption, the historical perspective is that the “killer apps” appear at the edges and depend on a consistent center (e.g., the web and VoIP, the latter with a QoS-consistent center [2]). Development in the core tends to be more evolutionary and subject to discussion (e.g., CIDR). Other development in the core tends to come in the implementations (e.g., faster routers and lines).

[2] Remember that the access links to an ISP usually aren’t the QoS problem. Once you get to the POP, voice and other delay-critical services can go onto VPNs or other QoS-engineered alternatives to the public Internet.

Verisign says Sitefinder is innovative, and let’s assume that it is. But, if so, it’s an innovation in the core, which is not the “time-proven way”. When I speak of time-proven, I certainly don’t mean that there isn’t innovation—this message did NOT reach you over a 56 Kbps line between IMPs.

Internet Explorer, for example, has a means of dealing with domain typos, but it is contrary to the way Sitefinder does things, and IE does it at the edge. How do we deal with potential commercial wars between the edge and the core over competition for innovation?

6. Stability

Assume that Sitefinder and the associated mechanisms are ideal. In that case, users would come to expect the service. Unless a large number of users learn to spel and tipe gud, its servers will be points of heavy traffic.

What are the availability requirements to make the service dependable? These include clustered servers at individual nodes as well as distributed nodes. There has to be sufficient bandwidth to reach the nodes, and even when a node has adequate connectivity bandwidth, there are subtle congestion issues. It was pointed out that wireless implementers, accustomed to a small error message in their bandwidth-limited edge environments, are less than thrilled about getting a 17 KB HTML response.

Remember, if these concepts prove themselves in .com and .net, users will expect them in all TLDs, or we get the generally undesirable situation of different behavior in different domains. Let’s assume Verisign has an adequate track record of running reliable servers; but what would be required of a new operator of .com and .net whose users expect the Sitefinder functionality? And in a new TLD, what support has to be in place on Day 1?

A very different question is whether the business models associated with this service are robust enough to ensure it stays available once users have come to expect it.

By Howard C. Berkowitz, Chief Technology Officer & Author
