
Are We Slowly Losing Control of the Internet?

I have long been intrigued by the question of how we can turn the internet into a lifeline-grade infrastructure. (See, for example, my presentation From Barnstorming to Boeing - Transforming the Internet Into a Lifeline Utility [PowerPoint], with speaker's notes [PDF].)

My hope that this will occur soon or even within decades is diminishing.

Most of us observe, almost daily, how even well-established infrastructures tend to crumble when stressed, even slightly. For example, something as small and foreseeable as a typo in someone's name or Social Security number during a medical visit can generate months of grief when dealing with insurance companies.

I was at the O'Reilly ETel conference last week. The content was impressive, and the people there were frequently the primary actors in the creation and deployment of VOIP. However, not once during the three days did I hear a serious discussion, from a speaker or in the hallways, about how this evolving system would be managed, monitored, diagnosed, or repaired.

My mailbox is being filled with IETF announcements for the upcoming meeting in Prague. I see internet draft after internet draft making proposals that are going to cause implementation errors, security holes, and ultimately service outages.

Take, for example, the prime candidate protocol for VOIP: SIP.

I've spoken to many people who have implemented SIP components. There is a common theme: SIP is far too complex. Even the basic encoding method is a mess - apparently the SIP working group could not agree among alternatives, so, like most committees, they compromised by allowing all of them. The result is that the SIP implementer has to write code to handle many different representations of exactly the same information. That means that there will probably be code paths that are insufficiently, or never, tested. It also means that SIP systems will probably be susceptible to failure or misbehavior when introduced, perhaps years after initial installation, to new SIP devices based on different SIP engines.
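To make that concrete, here is a minimal Python sketch (an illustration only, not anything taken from a SIP implementation) of the normalization every parser has to perform just to cope with the equivalent header spellings RFC 3261 permits: long names, single-letter compact forms, arbitrary case, and folded continuation lines, all carrying identical information.

```python
# A minimal sketch (illustrative only) of normalizing a few of the equivalent
# SIP header spellings that RFC 3261 allows: long names, single-letter compact
# forms, arbitrary case, and folded continuation lines. An implementation must
# handle all of these code paths even though they carry the same data.

CANONICAL = {
    "v": "Via", "via": "Via",
    "f": "From", "from": "From",
    "t": "To", "to": "To",
    "i": "Call-ID", "call-id": "Call-ID",
    "m": "Contact", "contact": "Contact",
    "l": "Content-Length", "content-length": "Content-Length",
}

def normalize_headers(raw: str) -> dict:
    """Collapse header-name variants into one canonical representation."""
    unfolded = []
    for line in raw.split("\r\n"):
        if line[:1] in (" ", "\t") and unfolded:     # folded continuation line
            unfolded[-1] += " " + line.strip()
        else:
            unfolded.append(line)
    headers = {}
    for line in unfolded:
        if ":" not in line:
            continue
        name, value = line.split(":", 1)
        canonical = CANONICAL.get(name.strip().lower(), name.strip())
        headers.setdefault(canonical, []).append(value.strip())
    return headers

# All of these spell the same header; a parser must accept every form.
msg = "v: SIP/2.0/UDP host1\r\nVIA: SIP/2.0/UDP host2\r\nVia: SIP/2.0/UDP\r\n host3"
print(normalize_headers(msg))
```

Multiply that by every optional variant and every extension's separate encoding, and the number of rarely exercised code paths grows very quickly.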

And to top that off, many of the new proposals for SIP use completely different encoding methods (the darling of the moment is XML) from the textual ASCII/UTF-8 form used in the core parts of SIP. Implementers are going to go gray from the stress of trying to make this mishmash work. And people who have to maintain and troubleshoot VOIP will go bleary-eyed and take hours longer to resolve outages than they would have had there been a consistent and uniform design.

There is a lot of talk about the benefits of network effects, but few people talk about how those same network effects lock in the work of the past and make it difficult, perhaps impossible, to evolve to new and improved mechanisms.

History often survives and reaches out through very long periods of time. It has been said that the size of modern-day airplanes derives from the width of the Roman horse: the width of the horse dictated the spacing of wheels on Roman carts. Those carts created standardized ruts that coerced other carts to conform through the ages. Early railroads, adapting those carts, spaced their rails one rut-pair apart. That width dictated cargo load size, and the need to carry those cargos has affected airplane design.

Consider how long it has taken to deploy IPv6 - a technology that celebrated its 10th anniversary a few years ago. And IPv6 has the luxury of being an alternative to IPv4 rather than a transparently compatible upgrade. Consider how much longer it will take to deploy VOIP protocol redesigns once the old protocol is embedded in telephones around the world.

We have to admire old Ma Bell for building a reliable and maintainable system. Yes, it took a hundred years of work - and modern telco phones, particularly on the local loop, still use a lot of technology created in the late 1800s.

You would have thought that in this internet age we might have learned that clarity of internet protocol design is a great virtue and that management, diagnostics, and security are not afterthoughts but primary design goals.

There is a lot of noise out there about internet stability. And a lot of people and businesses are staking their actual and economic well-being on the net, and the applications layered on it, really being stable and reliable.

But I have great concern that our approach to the internet resembles a high pillar of round stones piled on top of other round stones - we should not be surprised when it begins to wobble and then falls to the ground.

I am beginning to foresee a future internet in which people involved in management, troubleshooting, and repair are engaged in a Sisyphean effort to provide service in the face of increasingly non-unified design of internet protocols. And in that future, users will have to learn to expect outages and become accustomed to dealing with service provider customer service “associates” whose main job is to buy time to keep customers from rioting while the technical repair team tries to figure out what happened, where it happened, and what to do about it.

By Karl Auerbach, Chief Technical Officer at InterWorking Labs


Comments

The Famous Brett Watson  –  Mar 12, 2007 8:06 AM

Karl, I’d like to separate your discussion into two distinct but related issues.

The bulk of your article talks about protocol design, albeit slightly obliquely. I will be the first to agree that the art of protocol design isn't anywhere near the engineering discipline I'd like it to be. The IETF has BCPs on the administrative processes of the IETF, but the nearest it comes to a BCP on the actual subject of protocol design is currently RFC 3205, "On the use of HTTP as a Substrate". With all due respect to the work of the IETF, the "Engineering" in its name is something of a misnomer: it's a guild of craftsmen, not engineers, because the field itself is more "craft" than "discipline".

I don’t know how inflammatory active members of the various IETF groups will find that remark. It’s not intended to be inflammatory: some of the craftsmanship is high grade stuff in my opinion, but without a solid engineering discipline behind the process, an opinion like mine is just an aesthetic judgement. And, in broad terms, that’s the trouble with protocol design today: there’s too much “taste” and not enough cold, hard, quantifiable paradigm.

In fact, if your description of SIP is correct, aesthetic compromises (the inclusion of everyone’s favourite pet encoding) are undermining one of the few rough engineering maxims we have: specifically, “if there are several ways of doing the same thing, choose one.” [RFC 1958, 3.2] Engineering discipline demands that aesthetic preferences be subordinate to sound design, and we’re obviously not there yet. Of course, it doesn’t help that the design principles themselves read like a list of aesthetic judgements.

That brings me to my second issue: the question of turning the Internet into a lifeline grade infrastructure. You ask how it can be done, but I would first ask whether the project is a sensible goal at all from an engineering perspective. I grant you that a protocol like SIP suffers from poor engineering (or rather, that it suffers from the lack of a surrounding engineering discipline), but even if it were as good as it could be, would it still be sensible to talk about making it "lifeline grade"?

It strikes me that there is a fundamental disconnect between a loose global collection of networks with a basic agreement to make a "best effort" at delivering individual packets and any kind of "lifeline grade" service. While it's true that you can build a "reliable" service on top of an unreliable one, the classic example being TCP over IP, there are fundamental limits to that reliability. TCP presents a stream-oriented connection that is free (with high probability) of duplication and errors over an IP transport that promises far less, but that guarantee is reached by detecting errors and recovering from them, not by magically making the path between endpoints any better than it was. If it takes all day—or all week—to deliver one kilobyte of data reliably, TCP will do it. That is not a step towards "lifeline grade infrastructure".
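A toy simulation makes the point (a sketch only, not TCP or any real stack): with reliability-by-retransmission the transfer always completes eventually, but as the path gets lossier the completion time grows without bound.

```python
import random

# Toy model of reliability-by-retransmission (a sketch only - not TCP and not
# any real stack): keep resending until an acknowledgement gets through, with
# exponential backoff on the retransmit timer. Delivery always succeeds, but
# the time it takes is unbounded as the path gets lossier.

def time_to_deliver(loss_rate: float, rtt: float = 0.1) -> float:
    """Simulated seconds to deliver one segment over a lossy path."""
    elapsed, timeout = 0.0, rtt
    while True:
        elapsed += rtt                       # one attempt: send segment, await ack
        if random.random() > loss_rate:      # the segment and its ack got through
            return elapsed
        elapsed += timeout                   # wait out the retransmission timer
        timeout = min(timeout * 2, 64.0)     # back off before trying again

random.seed(1)
for loss in (0.01, 0.50, 0.95):
    worst = max(time_to_deliver(loss) for _ in range(1000))
    print(f"loss={loss:.2f}  worst of 1000 deliveries: {worst:8.1f} s")
```

The guarantee is delivery, not timely delivery, and that gap is exactly what separates "reliable" from "lifeline grade".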

The beauty of the Internet is that it delivers the vast majority of packets placed on it to the desired destination at a cost approaching zero. It seems to me that this is fundamentally incompatible with any kind of “lifeline grade infrastructure” which must provide guarantee upon guarantee, each of which invariably increases the cost of delivery. It also seems to me that we are far better off with two or three “consumer grade” lifelines than a single “lifeline grade” one. The ideal emergency phone would communicate over the first available working medium of VOIP, POTS, and the emergency CB channel, rather than resting wholly on one super-reliable medium.
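As a sketch of that emergency-phone idea (every medium name, probe, and dialing stub below is invented purely for illustration): place the call over the first medium that currently works, rather than over one supposedly failure-proof network.

```python
from typing import Callable, Iterable, Tuple

# Sketch of the "two or three consumer-grade lifelines" idea. All of the
# media, liveness probes, and dialing stubs here are hypothetical placeholders;
# the point is only the shape of the fallback logic.

Medium = Tuple[str, Callable[[], bool], Callable[[str], None]]

def place_emergency_call(media: Iterable[Medium], number: str = "911") -> str:
    """Dial over the first medium whose liveness probe succeeds."""
    for name, is_up, dial in media:
        if is_up():              # cheap check: is this path working right now?
            dial(number)         # commit the call to the first working path
            return name
    raise RuntimeError("no working medium available")

media = [
    ("VOIP", lambda: False, lambda n: print("sending SIP INVITE for", n)),
    ("POTS", lambda: True,  lambda n: print("dialing", n, "on the local loop")),
    ("CB",   lambda: True,  lambda n: print("keying the emergency CB channel")),
]
print("call placed over", place_emergency_call(media))   # -> POTS
```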

But that’s just my intuition on the matter. I don’t have an engineering paradigm in which I can demonstrate my assertions formally, particularly since I don’t have a formal model of a “lifeline” and other project goals.

Karl Auerbach  –  Mar 12, 2007 9:27 AM

Brett, I really appreciate how well you isolated the essential question of whether we want the internet to be a lifeline grade utility.

And I do like your suggestion that lifeline grade *applications* ought to utilize multiple kinds of communications mechanisms to find one that works.

No matter what we say or think, people are beginning to treat the internet as something on which they feel safe building their businesses, even if they are not yet ready to entrust it with their personal safety.  (But I have heard of people doing surgery via the net - I still shudder at the thought.)

So whether or not we think the net ought to be a lifeline grade utility, we could help avoid a lot of future unhappy users if we tried to at least narrow the gap between the net and a true lifeline utility.

The approach I suggested in the paper I referenced contained several suggestions.  One was legal liability for flaws, using some sort of negligence standard (not strict liability).  I know that people do not like this, but I feel that some sort of compulsion is needed to incite people to move to the more boring, slower, and less fun approaches that the elder engineering disciplines use; techniques such as design rules and testing from the get-go.

I picked on SIP because it is such an easy target.  On the other hand, clearly there are engineering wonders accomplished on the net - the network time protocol being one that I consider akin to magic.

Much of my perspective is colored by my family experience - my grandfather repaired radios, my father repaired TVs, and I have spent far too many 3 a.m.'s lying on a concrete floor in a wiring closet trying to figure out why a network is malfunctioning.  In the early 1990s I tried, and succeeded, in constructing an internet "butt set" that, useful as it was (and still is) on its own, was meant to be part of a more elaborate system to help monitor and diagnose the net.

From that perspective I am perhaps more sensitive than most to the need to engineer the net so that it can be maintained, diagnosed, and repaired.

There has been a lot of resistance to incorporating mechanisms to monitor, diagnose, and troubleshoot the net except on a piecemeal basis.

We need to stop thinking of the net as a collection of individual machines but, rather, as a great distributed process.  (I was part of a DARPA project to work on this, except that, unfortunately, my time was devoured by my position on the ICANN board.)

Ultimately I believe that much of the net needs to become homeostatic - self-healing (it already is in many regards, such as routing and soft tables such as ARP caches) - even if the control loops will, for the foreseeable future, require the permission of people.
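As a rough sketch of what such a permission-gated control loop might look like (every name and stub below is invented for illustration, not taken from any real management system):

```python
from dataclasses import dataclass

# Hypothetical sketch of a homeostatic control loop with a human in the loop:
# the system observes symptoms and proposes a repair, but nothing is applied
# until an operator approves it. All names and stubs are invented.

@dataclass
class Remediation:
    target: str
    action: str

def control_loop(observe, diagnose, ask_operator, apply):
    """One pass of a self-healing cycle gated by operator permission."""
    symptoms = observe()                 # e.g. reachability probes, error counters
    for fault in diagnose(symptoms):
        fix = Remediation(target=fault, action="restart and reload configuration")
        if ask_operator(fix):            # the human-permission step
            apply(fix)

# Stub wiring for a single pass:
control_loop(
    observe=lambda: ["link-3 flapping"],
    diagnose=lambda symptoms: symptoms,
    ask_operator=lambda fix: True,       # stand-in for a real approval workflow
    apply=lambda fix: print("applying:", fix.action, "on", fix.target),
)
```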

Unfortunately I see the design moving, and ossifying, in ways that make control, and time-to-recover, more difficult rather than less difficult.  And many implementations are weak and waiting to be pushed into failure by the arrival of a new peer implementation that does the protocol in a slightly different, but still legitimate, way.
