One of the more persistent founding myths around the internet is that it was designed to be able to withstand a nuclear war, built by the US military to ensure that even after the bombs had fallen there would still be communications between surviving military bases.
It isn't true, of course. The early days of the ARPANET, the research network that predated today's internet, were dominated by the desire of computer scientists to find ways to share time on expensive mainframe computers rather than visions of Armageddon.
Yet the story survives, and lies behind a generally accepted belief that the network is able to survive extensive damage and still carry on working.
This belief extends to content as well as connectivity. In 1993 John Gilmore, cyberactivist and founder of the campaigning group the Electronic Frontier Foundation, famously said that 'the net interprets censorship as damage and routes around it', implying that it can find a way around any damaged area.
This may be true, but if the area that gets routed around includes large chunks of mainland China then it is slightly less useful than it first appears.
Sadly, this is what happened at the end of last year after a magnitude 7.1 earthquake centred on the seabed south of Taiwan damaged seven undersea fibre-optic cables.
The loss of so many cables at once had a catastrophic effect on internet access in the region, significantly curtailing connectivity between Asia and the rest of the global Internet and limiting access to websites, instant messaging and email as well as ordinary telephone service.
Full service may not be restored until the end of January since repairs involve locating the cables on the ocean floor and then using grappling hooks to bring them to the surface so they can be worked on.
The damage has highlighted just how vulnerable the network is to the loss of key high-speed connections, and should worry anyone who thought that the internet could just keep on working whatever happens.
This large-scale loss of network access is a clear example of how bottlenecks can cause widespread problems, but there are smaller examples that should also make us worry.
At the start of the year the editors of the popular DeviceForge news website started getting complaints from readers that their RSS feed had stopped working.
RSS, or 'really simple syndication', is a way for websites to send new or changed content directly to users' browsers or special news readers, and more and more people rely on it as a way to manage their online reading.
The editors at DeviceForge found that the reason their feed was broken was that the particular version of RSS they were using, RSS 0.91, depended on the contents of a particular file hosted on the server at www.netscape.com.
It looks as if someone, probably a systems administrator doing some clearing up, deleted what seemed to be an unneeded old file called rss-0.91.dtd, and as a result a lot of news readers stopped working.
Having what is supposed to be a network-wide standard dependent on a single file hosted on a specific server may be an extreme case, but it is just one example of a deeply buried dependency within the network architecture, and it is surely not alone.
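The failure mode is easy to see if you look at the top of an RSS 0.91 feed: its DOCTYPE declaration names an external DTD by URL, and a validating parser will try to fetch that file before accepting the document. The sketch below (the exact path on the Netscape server is illustrative, not taken from the original feed) simply extracts those external identifiers, which are exactly the hidden single points of failure:

```python
import re

# An illustrative RSS 0.91 prologue. The DOCTYPE points at a DTD file on
# a Netscape server; the full path here is a hypothetical reconstruction.
FEED_PROLOGUE = '''<?xml version="1.0"?>
<!DOCTYPE rss SYSTEM "http://www.netscape.com/publish/formats/rss-0.91.dtd">
<rss version="0.91"><channel><title>Example</title></channel></rss>'''

def external_dependencies(xml_text):
    """Return the SYSTEM identifiers (external URLs) that a validating
    XML parser would try to resolve before processing the document."""
    return re.findall(r'<!DOCTYPE\s+\S+\s+SYSTEM\s+"([^"]+)"', xml_text)

print(external_dependencies(FEED_PROLOGUE))
# One remote file: delete it and every strictly validating reader breaks.
```

Non-validating parsers ignore the reference, which is why only some news readers failed; the ones that dutifully fetched the DTD stopped working the moment the file disappeared.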
This is going to get worse. The architecture of the Internet used to resemble a richly-connected graph, with lots of interconnections between the many different levels of network that work together to give us global coverage, but this is no longer the case.
The major service providers run networks which have few interconnections with each other, and as a result there are more points at which a single failure can seriously affect network services.
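In graph terms, these weak spots are articulation points: nodes whose removal disconnects the network. A richly connected graph has few or none; a set of provider networks joined by sparse peering links has many. The toy topology below (entirely hypothetical, using a standard depth-first-search algorithm) makes the point with two well-meshed providers joined by a single peering link:

```python
from collections import defaultdict

def articulation_points(edges):
    """Find nodes whose removal disconnects the graph, using the
    classic DFS low-link method."""
    graph = defaultdict(set)
    for u, v in edges:
        graph[u].add(v)
        graph[v].add(u)
    disc, low, cut, timer = {}, {}, set(), [0]

    def dfs(u, parent):
        disc[u] = low[u] = timer[0]
        timer[0] += 1
        children = 0
        for v in graph[u]:
            if v == parent:
                continue
            if v in disc:                      # back edge
                low[u] = min(low[u], disc[v])
            else:
                children += 1
                dfs(v, u)
                low[u] = min(low[u], low[v])
                # Non-root u is a cut vertex if subtree v can't bypass it.
                if parent is not None and low[v] >= disc[u]:
                    cut.add(u)
        if parent is None and children > 1:    # root with 2+ DFS subtrees
            cut.add(u)

    for node in list(graph):
        if node not in disc:
            dfs(node, None)
    return cut

# Hypothetical topology: each provider is a resilient ring of three
# routers, but the two providers peer over exactly one link (a1-b1).
edges = [("a1", "a2"), ("a2", "a3"), ("a3", "a1"),
         ("b1", "b2"), ("b2", "b3"), ("b3", "b1"),
         ("a1", "b1")]
print(sorted(articulation_points(edges)))  # ['a1', 'b1']
```

Inside each ring no single failure matters, but lose either peering router and the two networks can no longer reach each other, which is the sparse-interconnection problem in miniature.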
There may even be other places where deleting a single file could adversely affect network services.
If we are to avoid these sorts of problems then we need good engineers and good engineering practice. We have been fortunate over the years because those designing, building and managing the network have cared more for its effective operation than they have for their personal interests, and by and large they have built the network around standards which are robust, scalable and well-tested.
But we need to carry on doing this and make things even better if we are going to offer network access to the next five billion users, and this is getting harder and harder to do.
In the early days the politics was small-scale, and neither legislators nor businesses really took much notice, but this is no longer the case as we see in the ongoing battles over internet governance, net neutrality, content regulation, online censorship and technical standards.
Bodies like the Internet Society, the International Electrotechnical Commission and the Internet Engineering Task Force still do a great job setting the standards, but they, like the US-government appointed ICANN, are subject to many different pressures from groups with their own agendas.
And setting technical standards is not enough to guard against network bottlenecks like the cables running in the sea off Taiwan, since decisions on where to route cables or how the large backbone networks are connected to each other are largely made by the market.
The only body that could reasonably exert some influence is the International Telecommunication Union, part of the UN. Unfortunately its new Secretary-General, Hamadoun Touré, says that he does not want the ITU to have direct control of the internet.
Speaking recently at a press conference he said 'it is not my intention to take over the governance of Internet. I don't think it is in the mandate of ITU'. Instead he will focus on reducing the digital divide and on cyber-security.
These are worthy goals, but they leave the network at the mercy of market forces and subject to the machinations of one particular government, the United States. If we are going to build on the successes of today's internet and make the network more robust for tomorrow we may need a broader vision.