The Fragile Network

One of the more persistent founding myths around the internet is that it was designed to be able to withstand a nuclear war, built by the US military to ensure that even after the bombs had fallen there would still be communications between surviving military bases.

It isn’t true, of course. The early days of the ARPANET, the research network that predated today’s internet, were dominated by the desire of computer scientists to find ways to share time on expensive mainframe computers rather than visions of Armageddon.

Yet the story survives, and lies behind a generally accepted belief that the network is able to survive extensive damage and still carry on working.

This belief extends to content as well as connectivity. In 1993 John Gilmore, cyberactivist and founder of the campaigning group the Electronic Frontier Foundation, famously said that ‘the net interprets censorship as damage and routes around it’, implying that it can find a way around any damaged area.

This may be true, but if the area that gets routed around includes large chunks of mainland China then it is slightly less useful than it first appears.

Sadly, this is what happened at the end of last year after a magnitude 7.1 earthquake centred on the seabed south of Taiwan damaged seven undersea fibre-optic cables.

The loss of so many cables at once had a catastrophic effect on internet access in the region, significantly curtailing connectivity between Asia and the rest of the global Internet and limiting access to websites, instant messaging and email as well as ordinary telephone service.

Full service may not be restored until the end of January since repairs involve locating the cables on the ocean floor and then using grappling hooks to bring them to the surface so they can be worked on.

The damage has highlighted just how vulnerable the network is to the loss of key high-speed connections, and should worry anyone who thought that the internet could just keep on working whatever happens.

This large-scale loss of network access is a clear example of how bottlenecks can cause widespread problems, but there are smaller examples that should also make us worry.

At the start of the year the editors of the popular DeviceForge news website started getting complaints from readers that their RSS feed had stopped working.

RSS, or ‘really simple syndication’, is a way for websites to send new or changed content directly to users’ browsers or special news readers, and more and more people rely on it as a way to manage their online reading.
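
For readers unfamiliar with the mechanics, a news reader simply polls the feed’s URL on a schedule and lists whatever new items it finds. A minimal sketch of that loop, using the widely used Python feedparser library and a purely hypothetical feed address, might look like this:

```python
# A minimal sketch of how a news reader consumes an RSS feed, using the
# third-party 'feedparser' library. The feed URL is hypothetical, not
# DeviceForge's actual feed.
import feedparser

FEED_URL = "https://example.com/news/rss.xml"  # placeholder feed address

def show_new_items(url: str) -> None:
    feed = feedparser.parse(url)        # fetch and parse the feed
    for entry in feed.entries[:10]:     # most feeds list newest items first
        print(entry.get("title", "(no title)"), "->", entry.get("link", ""))

if __name__ == "__main__":
    show_new_items(FEED_URL)
```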

The editors at DeviceForge found that the reason their feed was broken was that the particular version of RSS they were using, RSS 0.91, depended on the contents of a particular file hosted on the server at www.netscape.com.

It looks as if someone, probably a systems administrator doing some clearing up, deleted what seemed to be an unneeded old file called rss-0.91.dtd, and as a result a lot of news readers stopped working.
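
The mechanics of the failure are worth spelling out. An RSS 0.91 document begins with a DOCTYPE declaration pointing at the DTD hosted on Netscape’s server, and any reader that validates feeds against that DTD has to fetch the file over the network. The sketch below is a hedged illustration using Python’s lxml library (treat the DOCTYPE URL as illustrative); it shows how removing that one remote file turns every validating parse into an error even though the feed itself has not changed:

```python
# Sketch of the hidden dependency: an RSS 0.91 feed whose DOCTYPE points at
# a DTD on someone else's server. A parser configured to validate against
# the DTD must fetch it over the network, so deleting the remote file breaks
# parsing even though the feed content is unchanged.
from lxml import etree

FEED = b"""<?xml version="1.0"?>
<!DOCTYPE rss SYSTEM "http://my.netscape.com/publish/formats/rss-0.91.dtd">
<rss version="0.91">
  <channel>
    <title>Example feed</title>
    <link>http://example.com/</link>
    <description>Demo of the RSS 0.91 DTD dependency</description>
    <language>en</language>
  </channel>
</rss>"""

# load_dtd + dtd_validation tell lxml to fetch and apply the external DTD.
parser = etree.XMLParser(load_dtd=True, dtd_validation=True, no_network=False)

try:
    etree.fromstring(FEED, parser)
    print("feed parsed and validated against the remote DTD")
except etree.XMLSyntaxError as err:
    # This is roughly what readers saw once the remote .dtd file was deleted:
    # the fetch fails, so an otherwise unchanged feed is rejected.
    print("validation failed:", err)
```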

Having what is supposed to be a network-wide standard dependent on a single file hosted on a specific server may be an extreme case, but it is just one example of a deeply-buried dependency within the network architecture, and it is surely not alone.

This is going to get worse. The architecture of the Internet used to resemble a richly-connected graph, with lots of interconnections between the many different levels of network that work together to give us global coverage, but this is no longer the case.

The major service providers run networks which have few interconnections with each other, and as a result there are more points at which a single failure can seriously affect network services.
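
A toy example makes the point. In the sketch below (a made-up topology, using the Python networkx library), two well-meshed networks are joined through a single interconnection point; that point, and the routers on either side of it, become ‘articulation points’ whose loss splits the network in two:

```python
# A toy illustration (not real topology data) of why sparse interconnection
# creates single points of failure: with few links between provider networks,
# some nodes become "articulation points" whose loss disconnects the graph.
import networkx as nx

G = nx.Graph()
# Two richly meshed provider networks...
G.add_edges_from([("a1", "a2"), ("a2", "a3"), ("a3", "a1")])
G.add_edges_from([("b1", "b2"), ("b2", "b3"), ("b3", "b1")])
# ...joined through a single interconnection point.
G.add_edges_from([("a1", "ix"), ("ix", "b1")])

print(sorted(nx.articulation_points(G)))
# -> ['a1', 'b1', 'ix']: losing any one of these cuts one side off from the other

G.remove_node("ix")
print(nx.is_connected(G))  # False: the two provider networks can no longer reach each other
```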

There may even be other places where deleting a single file could adversely affect network services.

If we are to avoid these sorts of problems then we need good engineers and good engineering practice. We have been fortunate over the years because those designing, building and managing the network have cared more for its effective operation than they have for their personal interests, and by and large they have built the network around standards which are robust, scalable and well-tested.

But we need to carry on doing this and make things even better if we are going to offer network access to the next five billion users, and this is getting harder and harder to do.

In the early days the politics was small-scale, and neither legislators nor businesses really took much notice, but this is no longer the case as we see in the ongoing battles over internet governance, net neutrality, content regulation, online censorship and technical standards.

Bodies like the Internet Society, the International Electrotechnical Commission and the Internet Engineering Task Force still do a great job setting the standards, but they, like the US-government-appointed ICANN, are subject to many different pressures from groups with their own agendas.

And setting technical standards is not enough to guard against network bottlenecks like the cables running in the sea off Taiwan, since decisions on where to route cables or how the large backbone networks are connected to each other are largely made by the market.

The only body that could reasonably exert some influence is the International Telecommunication Union (ITU), part of the UN. Unfortunately its new Secretary-General, Hamadoun Touré, says that he does not want the ITU to have direct control of the internet.

Speaking recently at a press conference he said ‘it is not my intention to take over the governance of Internet. I don’t think it is in the mandate of ITU’. Instead he will focus on reducing the digital divide and on cyber-security.

These are worthy goals, but they leave the network at the mercy of market forces and subject to the machinations of one particular government, the United States. If we are going to build on the successes of today’s internet and make the network more robust for tomorrow we may need a broader vision.

By Bill Thompson, Journalist, Commentator and Technology Critic

Comments

Simon Waters  –  Jan 19, 2007 11:28 PM

I doubt governments would produce a more robust network, or even a network as robust. I know the UK government failed pitifully, several times, to build such a network for its own military, even though that network covered areas under a single military and legal jurisdiction.

It is also naive to assume that government isn’t involved in some of the big Internet connectivity decisions. If Taiwan thinks it needs better connectivity for reasons of national security, it can readily arrange it; that it hasn’t suggests the government didn’t consider it a priority.

The robustness issue is governed by cost. More robust networks cost more. The great thing about the Internet is that one can make one’s own connectivity more robust by spending money, when it is appropriate. This underlies the concept of scalability: anyone who perceives a need for more robustness can achieve it without a change of global governance (assuming the big networks carry on talking to each other, which is an issue, but one that would arise if it were governments instead of companies; at least with companies you can usually wave some money around to resolve an issue if it hurts enough, and try doing that with, say, the relationship between the US and North Korea).

Managing dependencies on remote resources is distinct from network robustness, and wouldn’t go away even if one made the network magically perfect.

The first I heard of the loss of fibre came via reports of a significant drop in spam. None of my company’s systems, or my personal systems, depended on poorly connected areas in the Far East.

I did see an issue when Katrina struck. One of the UK government websites I visited used a search service provided by someone who bought their DNS services from a well-known DNS vendor that had located all of its DNS servers in New Orleans. This was a dependency a properly informed purchaser could have spotted before buying the DNS services (and also before the search system was bought from that vendor). Then again, perhaps the website thought that its search service purchase didn’t require that level of diligence because it is a commodity service that is easily replaced.
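
(For what it’s worth, that sort of due diligence can be partly automated. A rough sketch using the Python dnspython package and a placeholder domain: list the nameservers and their addresses, then ask whether they all sit in the same prefix, data centre or city.)

```python
# Rough due-diligence sketch: list a domain's nameservers and their addresses
# so a purchaser can see whether they are all parked in one place.
# Uses the third-party 'dnspython' package; the domain is a placeholder.
import dns.exception
import dns.resolver

DOMAIN = "example.com"  # hypothetical domain being evaluated

for ns in dns.resolver.resolve(DOMAIN, "NS"):
    name = str(ns.target).rstrip(".")
    try:
        addrs = [str(a) for a in dns.resolver.resolve(name, "A")]
    except dns.exception.DNSException:
        addrs = ["(lookup failed)"]
    print(f"{name}: {', '.join(addrs)}")

# If every address sits in the same prefix, building or city, there is a
# single point of failure no matter how many NS records are published.
```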

I think the US monopoly on the root DNS is an issue worth discussion, but again I’m not sure how moving it to the UN would make it work better.

By and large the control is illusory. There are competing root operators, and it is only by agreement that people use the IANA root servers. If tomorrow the US decided to take the top-level domain of a major country away, I expect a large number of big ISPs would switch root servers within hours. Indeed, serious folk in the DNS root server business are involved in running parallel infrastructure which could easily take over if the US government attempted to abuse its position.

That no major ISPs (AFAIK) jumped ship over the .xxx issue suggests that they don’t perceive it as significant enough to sacrifice the consistency of an IANA root.

Thomas Kuehne  –  Jan 20, 2007 12:48 PM

"One of the more persistent founding myths around the internet is that it was designed to be able to withstand a nuclear war, [...] It isn’t true, of course."

The internet is a sandwich of different layers. The basic protocol level (IP) and the protocols of the management level (BGP et al.) were designed to work even if large parts of the network were to be disconnected due to failures at the physical level.
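
(A toy way to see that property, using the Python networkx library and a made-up topology: as long as some physical path survives, the routing layer can recompute a route; only when every path is gone does connectivity actually fail.)

```python
# Toy sketch of "routing around damage": paths are recomputed after failures,
# so traffic survives as long as *some* physical path remains.
# Illustrative topology only, not real routing data.
import networkx as nx

net = nx.Graph([("A", "B"), ("B", "D"), ("A", "C"), ("C", "D")])
print(nx.shortest_path(net, "A", "D"))  # one of the two available paths

net.remove_edge("B", "D")               # one link fails
print(nx.shortest_path(net, "A", "D"))  # ['A', 'C', 'D']: rerouted

net.remove_edge("C", "D")               # the last path to D is gone
print(nx.has_path(net, "A", "D"))       # False: no protocol can help now
```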

"If we are to avoid these sorts of problems then we need good engineers and good engineering practice."

The problems are usually not caused by bad engineers but by decisions of the management and insufficient communication between different network providers. Case in point: the town centre of my hometown is on an island connected by four bridges. Now have a guess where the three local telecommunication providers put their cables. That’s right, they all used the same bridge.

"The architecture of the Internet used to resemble a richly-connected graph, with lots of interconnections between the many different levels of network that work together to give us global coverage, but this is no longer the case."

I’d be interested to see some hard data on this issue. The ratio of available bandwidth per link between backbone providers and “normal” networks has surely changed over the years; however, I’m not sure whether there are fewer interconnections per network than in the past.

Simon Waters  –  Jan 20, 2007 1:46 PM

“I’d be interested to see some hard data on this issue.”

I’m sure the folks at NANOG would oblige.

I’m not sure what halcyon days are being referred to; probably the ones where everyone in Australia paid per bit, one cable left the continent, and people worried that virtually the entire Far East routed phone and data along a single cable down the Malay peninsula and through lots of politically unstable countries in the Middle East and Eastern Europe.

We definitely have more cables these days, whether people are prepared to pay to use them is another matter entirely.

Tom Vest  –  Jan 20, 2007 3:48 PM

Describing the actual (or at least observable) state of affairs in straightforward terms is difficult. Overall, the Internet’s logical structure (the patterns formed by the 25,000 or so institutional “nodes”, the Autonomous Systems that manage Internet routing, plus the many millions of “edges” or links that interconnect them) forms a steep hierarchy, or what systems theorists call a “scale-free graph”. That is to say, the biggest Internet networks are *very* big (meaning they directly broker Internet connectivity for a very large number of users and devices) and *very* well-connected compared to the next largest group of networks, which in turn is very big relative to the next tier, and so on. Bigness and connectedness generally covary across observable nodes.

Some academic researchers have observed that this steep hierarchy is consistent over time, and even across different portions of the Internet, e.g., portions associated with different individual countries. Some people (me, for example) might find this observation surprising, if not downright troubling, given the perception that markets change over time, and the sense that different countries have very different market structures and communications network environments at any point in time, ranging from complete monopoly to radical “liberalization”, decentralization, and network diversity.

Although the degree of overall hierarchy is hard to dispute at this level of analysis, the apparent consistency or homogeneity observed by researchers is in fact a product of measurement error. The source of the error is the treatment of Autonomous Systems as the fundamental ontology or building blocks of the Internet, in effect as the sovereign entities of Internet traffic exchange. This approach is rendered questionable by the existence of even broader “Autonomous Routing Domains” (ARDs), which are composed of many (in some cases hundreds of) individual ASes that are in fact directly controlled by a single decision maker (PSTN, commercial carrier, etc.). Such ARDs are not evenly distributed globally, and some apparently diversified national network economies are composed almost entirely of one, or perhaps 2-3, ARDs. Fail to recognize these structures (which are not as transparently self-evident in the empirical data as ASes are) and you might easily assume that it’s all equally steep hierarchy, everywhere, all the time… but you would be wrong.
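
(A made-up, minimal sketch of that measurement point, in Python with networkx: contract ASes that share an owner into a single ARD-level node and the apparent diversity of the market shrinks accordingly.)

```python
# Toy sketch: counting ASes as independent nodes overstates decentralisation
# when several of them belong to one decision maker (an ARD).
# All data here is invented purely for illustration.
import networkx as nx

# AS-level adjacency graph: looks like four interconnected players...
as_graph = nx.Graph([
    ("AS1", "AS2"), ("AS2", "AS3"), ("AS1", "AS3"),
    ("AS3", "AS4"),
])
# ...but suppose AS1-AS3 are all operated by the same carrier.
owner = {"AS1": "CarrierX", "AS2": "CarrierX", "AS3": "CarrierX", "AS4": "ISP-Y"}

# Contract the AS graph into an ARD-level graph.
ard_graph = nx.Graph()
for a, b in as_graph.edges():
    if owner[a] != owner[b]:
        ard_graph.add_edge(owner[a], owner[b])

print(as_graph.number_of_nodes())   # 4 apparent actors
print(ard_graph.number_of_nodes())  # 2 actual decision makers
print(list(ard_graph.edges()))      # [('CarrierX', 'ISP-Y')]
```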

Some of the countries that were most severely impacted by the recent earthquake-related cable outages also play host to some of the largest and most locally dominant ARDs. Whether or not this in fact correlates with differences in degree of impact or duration of recovery remains to be explored…

Karl Auerbach  –  Jan 31, 2007 9:58 PM

In 1972 I was at System Development Corporation (SDC).  I was part of the R&D Group.  We worked on what is now called the internet.

It was expressly understood by us and our customer that we were building a network that would be (not “could be”; in those days we were certain that it would happen) subjected to nuclear attack, that nodes and links would be vaporized. We designed machines and protocols with this in mind.

Survival of the network during nuclear war was not an ancillary aspect of our work, it was a core element.

So I would say that I disagree with those who say that the story that the early internet was designed to be robust in the face of nuclear war is merely an urban legend.

As for current-day internet engineering:

You’ve heard the old joke about a Microsoft engineer and a Boeing engineer getting onto a flight across the Pacific. About an hour into the flight the Microsoft guy opens up his laptop and it immediately goes blue-screen. The Boeing guy looks at the laptop, looks at the airplane, and says “Aren’t you glad we don’t build airplanes the way you build software?”

Well, the internet is constructed of software that is not much better.  I’m in the business of testing internet protocol applications.  And from where I sit, it’s my sense that there is a lot of “it runs in the lab, ship it to customers” mentality out there in internet software land.

And I don’t mean this casually; just note, for example, today’s report that Cisco IOS can crash if it receives an unsolicited SIP packet.

Are we doing good engineering? Take a look at SIP. I’ve been to several SIP interoperability events. The common complaint is that SIP is a mess: it has too many equivalent but different encoding methods, and it is more like a bulletin board with everybody’s favorite idea nailed on.
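
(One concrete instance of those equivalent-but-different encodings: RFC 3261 lets common headers be written in either a full or a compact form, so the two simplified messages below mean exactly the same thing, and a naive parser that forgets the compact spellings silently loses fields. The Python sketch is illustrative only.)

```python
# SIP allows both full and compact header names (RFC 3261): these two
# simplified messages are equivalent, yet a parser that only knows the full
# spellings silently drops fields from the second one.
FULL = (
    "INVITE sip:bob@example.com SIP/2.0\r\n"
    "From: <sip:alice@example.com>\r\n"
    "To: <sip:bob@example.com>\r\n"
    "Call-ID: 1234@host.example.com\r\n"
    "Content-Length: 0\r\n\r\n"
)
COMPACT = (
    "INVITE sip:bob@example.com SIP/2.0\r\n"
    "f: <sip:alice@example.com>\r\n"   # f == From
    "t: <sip:bob@example.com>\r\n"     # t == To
    "i: 1234@host.example.com\r\n"     # i == Call-ID
    "l: 0\r\n\r\n"                     # l == Content-Length
)

def naive_headers(msg: str) -> dict:
    """A parser that forgets about compact forms: a common interop bug."""
    headers = {}
    for line in msg.split("\r\n")[1:]:
        if ":" in line:
            name, value = line.split(":", 1)
            headers[name.strip()] = value.strip()
    return headers

print(naive_headers(FULL).get("From"))     # <sip:alice@example.com>
print(naive_headers(COMPACT).get("From"))  # None: the field is there, but spelled 'f'
```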

The internet is becoming a lifeline utility - people are starting to depend on it for health and safety.

But in order to achieve the network robustness needed to meet that expectation we need to reinvent our engineering practices and also, I believe, to adopt legal liability for engineering.

A couple of years ago I gave a presentation on this topic:

From Barnstorming to Boeing - Transforming the Internet Into a Lifeline Utility (Powerpoint)
Speaker’s notes (Acrobat)

The Famous Brett Watson  –  Feb 12, 2007 1:58 PM

Interesting remarks, Karl. The question of “are we doing good engineering” (in protocol design) raises another question: do we have any kind of consensus on what constitutes good engineering in protocol design? There seems to be a dearth of literature on the subject. Where is the protocol design bible? We have a couple of classic papers, like the “end-to-end principle” paper, but where can one read about good protocol design in general?
