
How Did IPv6 Come About, Anyway?

Geoff Huston

This is a special two-part series article providing a distinct and critical perspective on Internet Protocol Version 6 (IPv6) and the underlying realities of its deployment. The first part gives a closer look at how IPv6 came about and the second part exposes the myths.

In January 1983, the Advanced Research Projects Agency Network (ARPANET) experienced a "flag day": the Network Control Protocol (NCP) was turned off, and TCP/IP was turned on. Although there are, no doubt, some who would like to see a similar flag day on which the world turns off IPv4 and switches over to IPv6, such a scenario is a wild-eyed fantasy. The Internet is now far too big for coordinated flag days. The transition of IPv6 into a mainstream deployed technology for the global Internet will take some years, and for many there is still a lingering doubt that it will happen at all.

Let's look more closely at how IPv6 came about, and then look at IPv6 itself in some detail to try to separate the myth from the underlying reality about the timeline for the deployment of IPv6. Maybe then we can suggest some answers to these questions.

How IPv6 Came About

The effort that led to the specification of IPv6 is by no means a recently started initiative. A workshop hosted by the then Internet Activities Board (IAB) in January 1991 identified the two major scaling issues for the Internet: a sharply increasing rate of consumption of address space, and similar, unconstrained growth of the interdomain routing table. (Interdomain routing protocols such as BGP are the glue that ties the various networks together, ensuring that a user of one network can reach a resource no matter where it connects to the network.) The conclusion reached at the time was that "if we assume that the Internet architecture will continue in use indefinitely, then we need additional [address] flexibility."

These issues were considered later that year by the Internet Engineering Task Force (IETF) with the establishment of the ROAD (ROuting and ADdressing) effort. This effort was intended to examine the issues associated with the scaling of IP routing and addressing, looking at the rate of consumption of addresses and the rate of growth of the interdomain routing table. The ultimate objective was to propose some measures to mitigate the worst of the effects of these growth trends. Given the exponential consumption rates then at play, the prospect of exhaustion of the IPv4 Class B space within two or three years was a very real one at the time. The major outcome of the IETF ROAD effort was the recommendation to deprecate the implicit network/host boundaries that were associated with the Class A, B, and C address blocks. In their place the IETF proposed the adoption of an address and routing architecture where the network/host boundary was explicitly configured for each network, and proposed that this boundary could be altered such that two or more network address blocks may be aggregated into a common, single block.

This approach was termed Classless Interdomain Routing, or CIDR. This was a short-term measure that was intended to buy some time, and it was acknowledged that it did not address the major issue of defining a longer-term, scalable network architecture. But as a short-term measure it has been amazingly successful, given that almost ten years and one Internet boom later, the CIDR address and routing architecture for IPv4 is still holding out.
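The aggregation that CIDR makes possible can be sketched with Python's standard `ipaddress` module; the prefixes below are illustrative examples from documentation address space, not figures from the article:

```python
import ipaddress

# Two adjacent blocks that would each have been a separate "Class C"
# network under the old classful scheme (each a /24 in CIDR notation).
nets = [ipaddress.ip_network("198.51.100.0/24"),
        ipaddress.ip_network("198.51.101.0/24")]

# Because the network/host boundary is now explicitly configured, a
# router can advertise both blocks as a single aggregate /23 prefix,
# halving the routing-table entries needed for this address space.
aggregate = list(ipaddress.collapse_addresses(nets))
print(aggregate)  # [IPv4Network('198.51.100.0/23')]
```

The same mechanism scales up: any set of contiguous, properly aligned prefixes can be announced as one route, which is precisely how CIDR slowed the growth of the interdomain routing table.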

The IAB, by then renamed the Internet Architecture Board, considered the ROAD progress in June 1992, still with its eye on the longer-term strategy for Internet growth. The board's proposal was that the starting point for the development of the next version of IP would be the Connectionless Network Protocol (CLNP). This protocol was an element of the Open Systems Interconnection (OSI) protocol suite, defined by the ISO 8473 standard. It used a variable-length address architecture, where network-level addresses could be up to 160 bits long. RFC 1347 contained an initial description of how CLNP could be used for this purpose within the IPv4 TCP/IP architecture and with the existing Internet applications. For the IAB this was a bold step, and considering that the IETF community at the time regarded the OSI protocol suite as a very inferior competitor to its own efforts with IP, it could even be termed a highly courageous step. Predictably, the proposal was not well received at the IETF meeting one month later, in July 1992.

The IETF outcome was not just a restatement of architectural direction for IP, but a sweeping redefinition of the respective roles and membership of the various IETF bodies, including that of the IAB.

Of course such a structural change in the composition, roles, and responsibilities of the bodies that collectively make up the IETF could be regarded as upheaval without definite progress. But perhaps this is an unkind view, because the IAB position also pushed the IETF into a strenuous burst of technical activity. The IETF immediately embarked on an effort to undertake a fundamental revision of the Internet Protocol that was intended to result in a protocol that had highly efficient scaling properties in both addressing and routing. There was no shortage of protocols offered to the IETF during 1992 and 1993, including the fancifully named TUBA, as well as PIP, SIPP and NAT.

This effort was part of a process intended to understand the necessary attributes of such a next-generation protocol.

The IETF formed an Internet Protocol Next Generation (IPng) Directorate in 1994, and canvassed various industry sectors to understand the broad dimensions of the requirements for such a protocol. This group selected the IPv6 protocol from a set of proposals, largely basing its selection on the "Simple Internet Protocol Plus," or SIPP, proposal. The essential characteristic of the protocol was that of an evolutionary refinement of the Version 4 protocol, rather than a revolutionary departure from Version 4 to an entirely different architectural approach.

The final word from the Internet Assigned Numbers Authority (IANA) was that version number 6 was unused (version 5 had already been assigned to the experimental Internet Stream Protocol), and the final specification was named Version 6 of the Internet Protocol.

The major strength of IPv6 is its use of fixed-length, 128-bit address fields. Other packet header changes include dropping the fragmentation-control fields, the header checksum, and the header length field, altering the structure of packet options within the header, and adding a flow label. But it is the extended address length that is the critical change in IPv6. A 128-bit address field allows an addressable range of 2 to the 128th power, an exceptionally large number. On the other hand, in a world that is currently capable of manufacturing more than a billion silicon chips every year, and recognizing that even a one-in-one-thousand address utilization rate would be a real achievement, maybe it is not all that large a number after all. There is no doubt that such a protocol has the ability to encompass a network that spans billions of devices, a network attribute that is looking more and more necessary in the coming years.
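A quick back-of-the-envelope calculation puts those magnitudes into perspective, using the figures from the paragraph above (a billion chips per year, a one-in-one-thousand utilization rate); the numbers are purely illustrative:

```python
# IPv6 offers 2**128 addresses versus 2**32 for IPv4.
total_v6 = 2 ** 128
total_v4 = 2 ** 32
print(f"IPv6 addresses: {total_v6:.3e}")  # ≈ 3.403e+38

# Illustrative consumption rate: a billion addressable chips per year,
# with a 1-in-1000 utilization rate (each used address effectively
# consumes a block of 1000 addresses).
chips_per_year = 10 ** 9
addresses_consumed_per_year = chips_per_year * 1000

# Even at that rate, the address pool lasts on the order of 10**26 years.
years_to_exhaust = total_v6 // addresses_consumed_per_year
print(f"Years to exhaustion: {years_to_exhaust:.1e}")
```

Even allowing for utilization rates orders of magnitude worse than one in one thousand, the pool comfortably accommodates a network of billions of devices.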

It's not just the larger address fields per se, but also the ability of IPv6 to offer an answer to the address-scarcity workarounds being used in IPv4 that is of value here. A side effect of these larger address fields is that there is no longer any forced need to use NAT as a means of stretching the address space. NAT has always presented operational issues to both the network and the application. NAT distorts the implicit binding of IP address and IP identity and allows only certain types of application interaction to occur across the NAT boundary. Because the "interior" to "exterior" address binding is dynamic, the only forms of applications that can traverse a NAT are those that are initiated on the "inside" of the NAT boundary. The exterior cannot initiate a transaction with an interior endpoint simply because it has no way of addressing this remote device. IPv6 allows all devices to be uniquely addressed from a single address pool, allowing for coherent end-to-end packet delivery by the network. This in turn allows for the deployment of end-to-end security tools for authentication and encryption, and also allows for true peer-to-peer applications.

IPv6, as a protocol architecture, is not a radical departure from the architecture of IPv4. The same datagram delivery model is used, with the same minimal set of assumptions about the underlying network capabilities, and the same decoupling of the routing and forwarding capabilities. The use of an address field in the IP header to carry the semantics of both location and identity was not altered in any fundamental way. The changes made by IPv6 could be seen as a conservative set of decisions, based on falling back to the IPv4 protocol model for guidance, on the principle that IPv4 is an operating proof of concept for this architectural approach.

In such a light, IPv6 can be seen as an attempt to regain the advantage of the original IP network architecture: that of a simple and uniform network service that allows maximal flexibility for the operation of the end-to-end application.

It is often the case that complex architectures scale very poorly, and from this perspective the core of IPv6 appears to be a readily scalable architecture.

Good as all this is, these attributes alone have not been enough so far to propel IPv6 into broad-scale deployment, and consequently there has been considerable enthusiasm to discover additional reasons to deploy IPv6.

Exposing the Myths… Read Part Two

---
Reprinted with permission from The Internet Protocol Journal (IPJ), Volume 6, No. 2, June, 2003. IPJ is a quarterly technical journal published by Cisco Systems.

By Geoff Huston, Author & Chief Scientist at APNIC. (The above views do not necessarily represent the views of the Asia Pacific Network Information Centre.)

