
Thoughts on the Open Internet - Part 1: What Is “Open Internet”

I’m sure we’ve all heard about “the Open Internet.” The expression builds upon a rich pedigree of the term “open” in various contexts. For example, “open government” is the governing doctrine which holds that citizens have the right to access the documents and proceedings of the government to allow for effective public oversight, a concept whose antecedents can be traced back to the Age of Enlightenment in 17th-century Europe. There is the concept of “open society,” a theme developed in the mid 20th century by the Austrian philosopher Karl Popper. And of course in the area of technology there was the Open Systems Interconnection model of communications protocols that was prominent in the 1980s. And let’s not forget “Open Source,” which today is an extremely powerful force in technology innovation. So we seem to have this connotation that “open” is some positive attribute, and when we use the expression “Open Internet” it seems that we are lauding it in some way. But in what way?

So let’s ask the question: What does the “Open Internet” mean?

The Federal Communications Commission of the United States has published its views on this question:

‘The “Open Internet” is the Internet as we know it. It’s open because it uses free, publicly available standards that anyone can access and build to, and it treats all traffic that flows across the network in roughly the same way. The principle of the Open Internet is sometimes referred to as “net neutrality.” Under this principle, consumers can make their own choices about what applications and services to use and are free to decide what lawful content they want to access, create, or share with others. This openness promotes competition and enables investment and innovation.

‘The Open Internet also makes it possible for anyone, anywhere to easily launch innovative applications and services, revolutionizing the way people communicate, participate, create, and do business—think of email, blogs, voice and video conferencing, streaming video, and online shopping. Once you’re online, you don’t have to ask permission or pay tolls to broadband providers to reach others on the network. If you develop an innovative new website, you don’t have to get permission to share it with the world.’

The FCC’s view of an “Open Internet” appears to be closely bound to the concept of “Net Neutrality,” a concept that attempts to preclude a carriage service provider from explicitly favouring (or disrupting) particular services over and above any other.

Wikipedia offers a slightly broader interpretation of this term, one that reaches beyond carriage neutrality and touches upon the exercise of technological control and power.

“The idea of an open internet is the idea that the full resources of the internet and means to operate on it are easily accessible to all individuals and companies. This often includes ideas such as net neutrality, open standards, transparency, lack of internet censorship, and low barriers to entry. The concept of the open internet is sometimes expressed as an expectation of decentralized technological power, and is seen by some as closely related to open-source software.”

In this six-part essay I’d like to expand upon this theme of openness and extend it to include considerations of coherence within the Internet and also consider fragmentary pressures. I’d like to see if we can provide a considered response to the question: Is today’s Internet truly “Open?”

What does the “Open Internet” mean?

Let’s examine the attributes of an “Open Internet” through the lens of its component technologies. The questions being addressed here for each of these major technology activities that support the Internet are: What would be the expectations of an “Open Internet”? What would a truly open and coherent Internet look like?

The technology model used here is an adaptation of the earlier Open Systems Interconnection reference model (ISO/IEC 7498-1), where each layer of this reference model uses services provided by the layer immediately below it and provides services to the layer immediately above it. It is a conventional taxonomy for networking technologies. An Internet-specific protocol reference model is shown in Figure 1, based on the technology model used in RFC 1122 (the “Host Requirements” RFC).

Figure 1 – A Protocol Reference Model for the Internet (after RFC 1122)

The concept of “Openness” as applied to networks carries an obvious connotation of accessibility. This is not the same as a free service, but it is a service where access to the service is not limited or restricted in arbitrary ways. An open network is an accessible network. This concept of general accessibility encompasses more than accessibility as a potential consumer of the network’s service. It also implies that there are no inherent restrictions or arbitrary inhibitions for anyone to provide services, whether as a provider of transmission capacity, switching, last-mile access, mobility, names, applications, or any of the other individual components that make up the Internet. This concept of openness also extends to the consequent marketplace of services that exists within this networked environment. Consumers can make their own choices about the applications and services that they choose to use in such an open network. The environment promotes competition in the supply of goods and services, and stimulates investment in innovation and development that provides evolutionary pressure to expand and diversify the ways in which we make use of this common network.

Such outcomes are the result of the application of the same set of overall principles into each of the areas of technology that form the essential components of the Internet.

An Open Switched Network

The theoretical model of an open and coherent network is a restatement of an interpretation of the end-to-end principle in packet-switched networks, where the network’s intended role is strictly limited to the carriage of individual packets from source to destination, and all users, and all of the functions and services that populate the network, are located in devices that sit outside of the network itself. These devices communicate between themselves in a manner that is largely opaque to the packet-switched network. Furthermore, edge devices are not expected to communicate with packet switching devices within the network, and equally, packet switching devices within the network do not directly communicate with edge devices (with the one exception of the generation of packet control messages, such as ICMP messages in the context of the Internet Protocol). Network consistency implies that all active (packet switching) elements within the network that perform switching functions on IP packets use a consistent single interpretation of the contents of an IP packet, supporting precisely the same IP protocol specification.

The seminal work on the end-to-end principle is the 1981 paper “End-to-End Arguments in System Design” by J. H. Saltzer, D. P. Reed, and D. D. Clark, published in the Proceedings of the Second International Conference on Distributed Computing Systems, Paris, France, April 8-10, 1981, IEEE Computer Society, pp. 509-512.

A simple restatement of the end-to-end principle is that the network should not replicate the functions that can be performed by communicating end systems.

A further paper on the topic is “Tussle in Cyberspace: Defining Tomorrow’s Internet” by D. D. Clark, K. R. Sollins, J. Wroclawski and R. Braden, published in SIGCOMM ’02, August 19-23, 2002.

A restatement of this paper’s thesis is that in an unbundled environment each actor attempts to maximize their own role and value, and when this is applied to networks and applications it may create conflicting situations that undermine a pure end-to-end network design.

The Internet Protocol uses a particular form of packet switching: a “stateless” switching function. Within each active switching element each IP packet is forwarded towards its intended destination without reference to any preceding or following packets, and without reference to any pre-configured state within the switching element. This implies that every IP packet contains a destination address that is not relative to any assumed network state or topology, and that this address has an identical unique interpretation across the entire network domain. In this way IP addresses are not relative to any locality, network or scope, and each IP address value is required in this architecture to be unique across the entire Internet.
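As a rough illustration of what this stateless, destination-based forwarding looks like, the sketch below performs a longest-prefix-match lookup against a small forwarding table. The prefixes and next-hop names are invented for illustration; a real router does this in hardware against a table built by routing protocols.

```python
# A minimal sketch of stateless, destination-based forwarding: each packet is
# matched against a forwarding table independently, with no per-flow state.
# The prefixes and next-hop names below are purely illustrative.
import ipaddress

# Hypothetical forwarding table: prefix -> next hop
FORWARDING_TABLE = {
    ipaddress.ip_network("192.0.2.0/24"): "if0",
    ipaddress.ip_network("198.51.100.0/25"): "if1",
    ipaddress.ip_network("0.0.0.0/0"): "upstream",   # default route
}

def forward(destination):
    """Return the next hop for a destination address using longest-prefix match.
    Only the destination address is consulted; there is no flow or session state."""
    dest = ipaddress.ip_address(destination)
    matches = [net for net in FORWARDING_TABLE if dest in net]
    best = max(matches, key=lambda net: net.prefixlen)
    return FORWARDING_TABLE[best]

# Every packet is an independent lookup: two packets of the same conversation
# are treated no differently from packets of unrelated conversations.
for dst in ["192.0.2.7", "198.51.100.42", "203.0.113.9"]:
    print(dst, "->", forward(dst))
```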

This is a principle that is honored these days more in the breach than in the observance. Network operators have eschewed passing all responsibility for packet transmission to end points and have responded by constructing internally segmented networks that rely on various forms of virtual state within the network. This extends from the extensive use of VLANs in Ethernet-switched data services to the almost ubiquitous use of MPLS in wide area networks. The current enthusiasm for SDN is no exception to this bias towards the use of virtual circuits within networks.

An Open Consistent Address Space

In an open and consistent Internet, every destination on the Internet is reachable from any location on the Internet. This is achieved through the universal ability to send a packet to any destination, which implies that every such destination requires an IP address that everyone else may use. These IP addresses must be allocated and administered such that each address is uniquely associated with a single attached network and with a single attached device within that network. The network itself cannot resolve the inconsistency of address clashes where two or more devices are using the same address, so the responsibility for ensuring that all addresses are used in a unique manner falls to the bodies who administer address allocation and registration.
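To make the uniqueness constraint concrete, here is a toy sketch of the invariant an address registry must maintain: no two allocations may overlap, so that every address maps to at most one holder. The prefixes and holder names are invented for illustration, and real registry operations involve far more than this single check.

```python
# A toy sketch of the uniqueness constraint an address registry must enforce:
# no two allocations may overlap, so every address maps to at most one holder.
# Prefixes and holder names are invented for illustration.
import ipaddress

class ToyRegistry:
    def __init__(self):
        self.allocations = {}   # network -> holder

    def allocate(self, prefix, holder):
        net = ipaddress.ip_network(prefix)
        for existing in self.allocations:
            if net.overlaps(existing):
                raise ValueError(f"{prefix} overlaps existing allocation {existing}")
        self.allocations[net] = holder

registry = ToyRegistry()
registry.allocate("203.0.113.0/24", "ExampleNet A")
registry.allocate("198.51.100.0/24", "ExampleNet B")
# registry.allocate("203.0.113.128/25", "ExampleNet C")  # would raise: overlapping space
```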

This has been an evolutionary process. The original address administration and registry function was managed through the US research agencies, and the evolution of this model has led to the creation of five “Regional Internet Registries,” each of which serves the address allocation and registry function needs of its regional community. The administration of the central pool of unallocated addresses is part of the IANA function. The policies that govern the administration of the distribution and registration functions within each of these regional registries are determined by the regional communities themselves, in a so-called “bottom-up” self-regulatory manner.

The practices relating to access to address space through allocation and assignment are based on policies developed by the respective address communities in each region. The general theme of these address distribution policies is one of “demonstrated need” where addresses are available to applicants on the proviso that the applicant can demonstrate their need for these addresses within their intended service infrastructure.

Open End-to-End Transport

The service model of a stateless packet switched network is one of unreliable datagram delivery. This service model is inadequate for most useful network services. The Internet has commonly adopted a single end-to-end stream protocol, the Transmission Control Protocol (TCP), that is conventionally used by communicating end systems to transform the network’s unreliable datagram delivery service into a reliable lossless byte stream delivery service.

This is not the only end-to-end transport protocol in common use. Another protocol, the User Datagram Protocol (UDP), is a minimal abstraction of the underlying IP datagram behavior, commonly used by simple query/response applications, such as the DNS resolution protocol.

While many other transport protocols have been defined, common convention in the Internet has settled on TCP and UDP as the two “universal” end-to-end transport protocols, and all connected systems in an open coherent network would be expected to be able to communicate using these protocols. The uniform adoption of end-to-end transport protocol behaviors is a feature of such an open network, in that any two endpoints that both support the same transport protocol should be able to communicate using that protocol. In this open network model, the operation of these end-to-end protocols is completely opaque to the packet-switched network, as it concerns only the communication signaling between the two end systems.
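The sketch below shows these two universal transports side by side using Python’s standard socket API; the host and port values are placeholders. TCP presents a reliable, ordered byte stream to the application, while UDP exposes the underlying datagram service more or less directly.

```python
# A minimal sketch of the two "universal" transports using Python's standard
# socket API. The host and port values are illustrative placeholders.
import socket

# TCP: a connection-oriented, reliable byte stream built over unreliable datagrams.
def tcp_exchange(host, port, payload):
    with socket.create_connection((host, port), timeout=5) as s:
        s.sendall(payload)            # bytes arrive in order, or the connection fails
        return s.recv(4096)

# UDP: a thin wrapper over IP datagrams; no connection, no delivery guarantee.
def udp_exchange(host, port, payload):
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.settimeout(5)
        s.sendto(payload, (host, port))   # a single datagram, which may be lost
        data, _addr = s.recvfrom(4096)
        return data
```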

This perspective of the end-to-end protocols in use in the Internet also makes a critical assumption about the nature of the flow control processes. This model assumes that TCP is the predominant protocol used by end hosts and, most critically, that the flow control algorithm used by all TCP implementations behaves in very similar ways. This model assumes that there is no central method of allocation or governance of network resources to individual end-to-end conversation flows; instead, the model relies on the aggregate outcome of the TCP flow control protocols to provide a fair-share allocation of common network resources, where an approximately equal proportion of network resources is utilized by each active conversation flow.

The conventional flow control process is one of additive increase in flow rates (slow) and multiplicative decrease (fast), or “AIMD”. TCP sessions have no arbitrary speed settings; each TCP session will both impose pressure on other concurrent sessions and respond to pressure from other concurrent sessions, trying to reach a meta-stable equilibrium point where the network’s bandwidth is, to some level of approximation, equally shared across the concurrent active flows.

Packet loss is the signal of over-pressure, so a flow will gradually increase its sending rate to the point of onset of packet loss, and at that point it will immediately halve its sending rate and once more gradually probe increased rates until the next packet loss event.
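A toy simulation can illustrate why this probing behaviour tends towards a fair share. The sketch below runs two AIMD flows against a link of fixed capacity; the capacity, starting rates and step sizes are arbitrary values chosen for illustration, and real TCP operates on congestion windows and round-trip times rather than abstract rates, but the convergence behaviour is the same in spirit.

```python
# A toy simulation of two AIMD flows sharing a link of fixed capacity. All
# parameters are arbitrary values chosen for illustration only.
CAPACITY = 100.0   # link capacity, in arbitrary rate units

def simulate_aimd(rounds=200, increase=1.0, decrease=0.5):
    rates = [5.0, 60.0]   # two flows starting from very different rates
    for _ in range(rounds):
        if sum(rates) > CAPACITY:
            # Over-pressure: packet loss, so every flow halves its rate
            # (multiplicative decrease).
            rates = [r * decrease for r in rates]
        else:
            # No loss: every flow probes gently upward (additive increase).
            rates = [r + increase for r in rates]
    return rates

print([round(r, 1) for r in simulate_aimd()])
# Despite very different starting rates, the two flows converge towards a
# near-equal share of the link capacity.
```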

TCP implementations that use a different flow control algorithm normally fare worse, as their efforts to place greater flow pressure on concurrent flows often result in higher packet loss rates in their own flows. However, there has been a significant body of research into flow control algorithms, and there are TCP flow control algorithms that can secure a greater relative share of the network than a conventional AIMD flow control algorithm without this element of self-damage. These algorithms are capable of exerting “unfair” pressure on other concurrent TCP flows, and can consume a greater proportion of network resources as a result.

One aspect of the “network neutrality” debates is the assumption of a relatively passive network where the network’s resources will be equitably allocated due to the general fair-shared outcome that is achieved by the uniform use of a particular TCP flow control behaviour. The TCP ecosystem is changing, with entrants such as Akamai’s use of FAST, Google’s use of QUIC with Chrome, and some Linux distributions using CUBIC, and these assumptions about the general equity of outcome of competing end-to-end streaming sessions are now increasingly approximate.

Source: “TCP Protocol Wars”

An Open Consistent Name Space

This open and coherent model of the Internet is not limited to the network packet switching and end-to-end transport functions. A critical component of the Internet is implemented as a distributed application that sits alongside clients and servers at the “edge” of the network rather than within the network’s direct purview. This is the Internet’s symbolic name space, the Domain Name System (DNS).

This name space is the combination of a name structure and a name resolution function that allows a user level discourse using familiar symbols and terms to refer to service points connected to the Internet that are identified by IP addresses and transport protocol port numbers.

While it is conceivable to think about many diverse name spaces and even many diverse name resolution protocols, and the Internet as such would not necessarily prevent such an outcome, a coherent view of the Internet requires that the mapping of symbols to IP addresses follows a uniform and consistent convention across the entire network. Irrespective of where and how a DNS query is generated, the response should reflect the current state of the authentic information published in the DNS. The implication here is that an open and consistent DNS uses the hierarchical name space derived from a single and unique root zone, and that all name resolvers perform the resolution of a name query using a search within this same uniquely rooted name space. This is the essential element of a consistent name space for all of the Internet.
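At the endpoint, this consistency expectation is simple to state: the same query should yield the same answer regardless of where it is asked. The sketch below resolves a name through the local system resolver using the standard library; the name used is the IANA-reserved example domain, chosen purely as a placeholder.

```python
# A minimal sketch of name resolution through the system's resolver. In a
# coherent, single-rooted DNS the mapping from name to addresses should not
# depend on where, or through which resolver, the query is made.
import socket

def resolve(name):
    """Return the set of IP addresses the local resolver reports for a name."""
    infos = socket.getaddrinfo(name, None)
    return {info[4][0] for info in infos}

# "example.com" is the IANA-reserved example name, used here as a placeholder.
print(resolve("example.com"))
```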

Open Applications

The context and content of individual conversations in this open coherent network model is also the subject of a number of common conventions, where certain commonly defined application-level protocols are used for common services. For example, applications wishing to pass email messages are expected to use the SMTP protocol, the retrieval of web pages is expected to use the HTTP protocol, and so on.

This implies that the protocols used to support network-wide functions, including for example data transfer, electronic mail, instant messaging, and presence notification, all require the adoption of openly available protocol specifications to support that application, and that these specifications are openly implementable and not encumbered by restrictive claims of control or ownership.
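As a small illustration of an openly specified application protocol in use, the sketch below issues a plain HTTP/1.1 GET request with the standard library. The host is again the reserved example domain, used as a placeholder; any conforming client can talk to any conforming server precisely because the protocol specification is openly published.

```python
# A sketch of an open application protocol in action: a plain HTTP/1.1 GET
# issued with the standard library. The host below is the IANA-reserved
# example domain, used as a placeholder.
from http.client import HTTPConnection

conn = HTTPConnection("example.com", 80, timeout=10)
conn.request("GET", "/", headers={"Host": "example.com", "User-Agent": "open-internet-demo"})
response = conn.getresponse()
print(response.status, response.reason)      # e.g. 200 OK
print(response.getheader("Content-Type"))    # content type as declared by the server
conn.close()
```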

Much of today’s environment also relies heavily on the concept of “open source” technologies. The Unix operating system, originally developed at AT&T Bell Labs in the 1970s and distributed as open source, is now a mainstay of much of today’s environment. The implementation of the TCP/IP protocol suite by the Computer Systems Research Group at the University of California, Berkeley in the 1980s was made available as open source, and the ready availability of this software package was part of the reason behind the rapid adoption of this protocol as the common computer networking protocol in the 1990s. Subsequent “open” implementations of popular applications, such as sendmail for mail, BIND for the DNS, and Apache for web servers, added further momentum to this use of open source, and these days the concept of open source is fundamental to much of the technology base of not only the Internet but the entire information technology world.

Open Security

Security functions include both the open and unrestricted ability for communicating end users to invoke protection from third-party eavesdropping and the ability for these end users to verify the identity of the remote party with whom they are communicating, and to verify that the communication as received is an authentic and precise copy of the communication as sent. This is useful in many contexts, such as, for example, open communications environments using the radio spectrum, or environments that trade goods and services, where authentication and non-repudiation are vitally important.

To allow such functions to be openly available to all users requires the use of unencumbered cryptographic algorithms that are generally considered to be adequately robust and uncompromised, and the associated availability of implementations of these algorithms on similarly open and unencumbered terms.
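The sketch below shows what this looks like from an endpoint’s perspective: an encrypted channel is established and the identity of the remote party is verified against the platform’s trust anchors, using the openly specified TLS protocol. The host name is the reserved example domain, used as a placeholder.

```python
# A sketch of "open security" at the endpoint: any client can establish an
# encrypted channel and verify the identity of the remote party using the
# openly specified TLS protocol and an unencumbered implementation.
# The host name is the IANA-reserved example domain, used as a placeholder.
import socket
import ssl

context = ssl.create_default_context()   # verifies certificates against system trust anchors

with socket.create_connection(("example.com", 443), timeout=10) as raw:
    with context.wrap_socket(raw, server_hostname="example.com") as tls:
        # The handshake has already verified the server's certificate chain
        # and checked that it matches the requested host name.
        print("negotiated protocol:", tls.version())
        print("peer certificate subject:", tls.getpeercert().get("subject"))
```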

An Open Internet

One view of an open network is a consistent network, in that the same actions by a user will produce the same response from the networked environment, irrespective of the user’s location and their choice of service provider. In other words, the interactions between the application on the user’s device and the application that serves the referenced content should not be altered by the network in any way, and users should see identical outcomes for identical inputs across the entire network.

These considerations of the prerequisites of an open coherent Internet do not imply the requirement for an Internet that is operated by a single operator, or one where services are provided through a single service delivery channel or technology. While the Internet is an amalgam of tens of thousands of component networks, populated by millions of services, and serviced by thousands of suppliers of services and technologies, it is still feasible that this collection of service providers is individually motivated to follow common conventions, and to operate their component services in a fashion that is consistent with all other providers. The property of coherence in an open Internet is an outcome of individual interests to maximize their effectiveness and opportunities by conforming to the common norms of the environment in which they operate.

An Open Internet is not one where open access equates to costless access. The considerations of openness in such a model of an open network relate to the absence of arbitrary barriers and impositions being placed on activities.

What these considerations imply is the ability to evolve the Internet through incremental construction. A novel application need not require the construction of a new operating system platform, or a new network. It should not require the invention and adoption of a new network protocol or a new transport protocol. Novel applications can be constructed upon the foundation of existing tools, services, standards and protocols. This model creates obvious efficiencies in the process of evolution of the Internet.

The second part of the evolutionary process is that if a novel application uses existing specifications and services then all users can access the application and avail themselves of its benefits if they so choose. Such an open unified environment supports highly efficient processes of incremental evolution that leverage the existing technology base to support further innovation. The process of evolution is continual, so it is no surprise that the Internet of the early 1990s is unrecognizable from today’s perspective. But at the same time today’s Internet still uses the same technology components from that time, including the IP protocol, the TCP and UDP end-to-end transport protocols, the same DNS system, and even many of the same application protocols. Each innovation in service delivery in the Internet has not had to reinvent the entire networked environment in order to be deployed and adopted.

Much of the Internet today operates in a way that is consistent with common convention and is consistent with this model of an open, unified and accessible public resource. But that does not mean that all of the Internet environment operates in this manner all of the time, and there are many fragmentary pressures.

Such pressures appear to have increased as the Internet itself has expanded. These fragmentary pressures exist across the entire spectrum of technologies and functions that together make up the Internet.

Some of these fragmentary pressures are based in technology considerations, such as the use of the Internet in mobile environments, or the desire to make efficient use of high capacity transmission systems. Other pressures are an outcome of inexorable growth, such as the pressures to transition the Internet Protocol itself to IPv6 to accommodate the future requirements of the Internet of Things. There are pressures to increase the robustness of the Internet and improve its ability to defend itself against various forms of abuse and attack.

How these pressures are addressed will be critical to the future of the concept of a coherent open Internet. Our ability to transform responses to such pressures into commonly accepted conventions that are accessible to all will preserve the essential attributes of a common open Internet. If instead we deploy responses that differentiate between users and uses, and construct barriers and impediments to the open use of the essential technologies of the Internet, then not only will the open Internet be threatened, but the value of the digital economy and the open flow of digital goods and services will be similarly impaired.

By Geoff Huston, Author & Chief Scientist at APNIC

(The above views do not necessarily represent the views of the Asia Pacific Network Information Centre.)



Comments

Why settlements are important – Michael Elling, Jan 19, 2016

Geoff,

Absent is a discussion about the role of settlements between horizontal layers and across vertical boundary points.

Settlements provide important price signals and (dis)incentives to clear rapidly depreciating supply across growing and fragmenting demand.

The bill and keep or settlement free model of the end-to-end principle (which grew out of Carterphone, equal access and Computers 2/3, and therefore scaled due to a competitive WAN in the US) has resulted in non-generativity, and coalescence into vertical monopolies at the edge and core.

How ironic the debate over free basics.

There is another way.

Best,
Michael
