IPv6: Extinction, Evolution or Revolution?

For some years now the general uptake of IPv6 has appeared to be “just around the corner”. Yet the Internet industry has so far failed to pick up and run with this message, and it remains strongly reluctant to make any substantial, widespread commitment to deploy IPv6. Some carriers are now making initial moves towards migrating their Internet infrastructure to a dual-protocol network, but many others are still watching and waiting for what they judge to be the optimum time to make a move.

So when should we be deploying IPv6 services? At what point will the business case for IPv6 have a positive bottom line? It’s a tough question to answer, and while the advice of “sometime, probably sooner rather than later” is certainly not wrong, it’s also entirely unhelpful!

I’m not sure that anyone can provide a clearer date in response to that question, but what may be useful is to explore why IPv6 will be useful to have sometime in the near term future and how IPv6 and IPv4 are likely to interact. And then the “when” of IPv6 may be a little clearer—or maybe not!

To start this exploration it may be useful to compare where the Internet began with where it is today, and then see how this relates to the IPv6 story.

The Evolution of the Internet Architecture

The original architectural model for IP was in many respects a very simple model, but also one that was very powerful. Perhaps, in the spirit of William of Occam, the true strength of IP lay in what had been deliberately omitted from the specification, leaving in the form of the Internet a relatively simple and straightforward packet switching architecture.

William of Occam, (1285-1349), English philosopher and scholastic theologian. Occam was born in Surrey, England. He entered the Franciscan order and studied and taught at the University of Oxford from 1309 to 1319. Denounced by Pope John XXII for dangerous teachings, he was held in house detention for four years (1324-28) at the papal palace in Avignon, France, while the orthodoxy of his writings was examined. Siding with the Franciscan general against the pope in a dispute over Franciscan poverty, Occam fled to Munich in 1328 to seek the protection of Louis IV, Holy Roman emperor, who had rejected papal authority over political matters. Excommunicated by the pope, Occam wrote against the papacy and defended the emperor until the latter’s death in 1347. The philosopher died in Munich, apparently of the plague, while seeking reconciliation with Pope Clement VI.

Occam’s Razor, “Pluralitas non est ponenda sine necessitate”, has become a basic principle in science and philosophy, stating that entities should not be multiplied needlessly. This principle underlies all scientific modeling and theory building. In any given model Occam’s Razor helps to cut away those concepts, variables or constructs that are not really needed to explain the phenomenon. Through such a process there is less chance of introducing inconsistencies, ambiguities and redundancies.

The network implemented an unreliable datagram delivery service. Each datagram (or packet) carried information describing its source and intended destination. Each network switch (or router) either moved the packet closer to where it believed the destination was located, or it simply dropped the packet. In the latter case the switch might send a control notification packet back to the sender, depending on the reason for the drop. All the functionality that created the various transport services, the functionality that mapped application-level endpoint names to network addresses, and the functionality that distributed available network resources across competing applications resided within the end systems rather than in the network. For a network it really doesn’t get much simpler than this.
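To make that division of labour concrete, here is a minimal sketch (in Python, purely as an illustration for this write-up rather than anything from the original text) of what the datagram service offers an end host: a packet goes out, and the network promises nothing about what happens next. The destination address is a placeholder from the RFC 5737 documentation range.

import socket

# A minimal sketch of the best-effort datagram model: the sender gets no
# delivery guarantee from the network, and any retransmission, ordering or
# congestion logic has to live in the end hosts, not in the switches.
# 192.0.2.1 is an RFC 5737 documentation address, used only as a placeholder.
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.settimeout(2.0)
sock.sendto(b"hello, best-effort world", ("192.0.2.1", 9999))

try:
    # The network never tells the sender whether the datagram arrived,
    # was dropped, or was delayed; silence is a perfectly legal outcome.
    reply, peer = sock.recvfrom(2048)
    print("reply from", peer, reply)
except socket.timeout:
    print("no reply, which is all the datagram service ever promised")
finally:
    sock.close()

Everything beyond this single send, from retransmission to naming to congestion control, is the end system’s problem, which is precisely the simplicity the original architecture traded on.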

But if you were to look for a faithful implementation of this simple architecture in today’s Internet networks you’ll be somewhat disappointed. The concept of a single packet forwarding plane, with a single addressing model spanning the entire network and a uniform end-to-end transport-level congestion control model, has largely disappeared from most production networks, and the basic concept of ‘end-to-end’ is now perhaps more an item of historic interest than a current pillar of networking architecture. These days carrier Internet networks come replete with multiple forwarding layers, thanks to MPLS, and numerous active network elements, including firewalls, NATs, application level gateways and application level switches, various forms of NAT traversal agents, load balancers, dynamic application switches and various forms of context-sensitive dynamic environments. We also have various forms of resiliency mechanisms, including path diversity elements, resource management systems, and QoS response systems. We have active Distributed Denial of Service (DDOS) detection elements embedded in the network, and even network-level session and application tracking systems as one more level of network defense against the ever-escalating security problem. This is no longer anything remotely similar to the concept of a simple unreliable datagram delivery service, and if you are looking for a simple dumb network with smart edges then you won’t find it in production Internet networks.

What happened to the original Internet model? What was so wrong with a model of data communications that placed most of the functionality of the network into the devices themselves, and cast the network into a role of best effort packet switching? One sneaking suspicion is that the data communications industry itself, or at least the carrier part of the industry, is resisting this path to network simplicity, and in their continual quest to wring every drop of value out of their networks the carrier ISP sector continues to be seduced by feature-packed network services that are intended to offer their customers higher value network solutions. Another way of looking at this role is that the carrier industry is hooked on the complexity business, and has embarked on a business model of creating networking systems that are sufficiently complex that customers are supposed to baulk at doing it themselves. After all, any construction enterprise can hang wire on poles, bury wire in the ground, or drop wire to the bottom of the sea. The highly complex operation of the resultant network is supposedly the unique value-adding role of the carrier enterprise. Of course this complexity escalation works only as long as the solutions are not so complex that the carriers themselves start to baulk as well! As a carrier industry we may have already crossed this particular complexity line, and we may have already managed to create a technology environment that is sufficiently complex that no player, not even the carrier, is able to manage the resultant interwoven mesh of disparate systems that make up a carrier Internet platform.

The question in my mind when looking at this rapid progression from architectural simplicity into often mind-boggling, and doubtless eye-wateringly expensive complexity for Internet networks is whether this is the outcome of a disordered process of entropy or one of a more ordered and directed process of evolution of the Internet?

The case for entropy is certainly very strong. What is evident is that the internet is besieged by various forms of local optimizations that intentionally alter the behaviour of parts of the network to suit the desired characteristics of certain classes of application. Such incremental local actions tend to impose a cost on the entire system. Whether the issue is one of adding network level support for mobility, support for various forms of address compression, support for differentiated service outcomes, resilience against various forms of hostile attack, or various forms of enhanced service availability, the typical outcome is one of increased network complexity and increased network cost with increasingly marginal returns in terms of overall service capability. This is a drive to disorder and decay in that local changes are not uniformly adopted, and the network itself starts to alter its overall state from uniform simple order into visible chaotic disorder.

Of course it is also possible to view this change process as one of evolution, where an active system is under constant pressure to adapt in order to survive and thrive in a changing environment. There’s no obviously intelligent design here, and the overall evolutionary process follows no particular planned path. The outcomes are often chaotic and invariably unpredictable, but within the process is a driving discipline of a competitive environment where service providers are constantly challenged to adapt their service offering to meet the demands of customers. Here it is the competitive market that imposes the evolutionary pressure to adapt and survive or wither away into commercial bankruptcy.

Herbert Spencer, 1820 – 1903, British philosopher and sociologist, was a major figure in the intellectual life of the Victorian era. He was one of the principal proponents of evolutionary theory in the mid nineteenth century. It was Spencer who invented the phrase “survival of the fittest”, and originally applied it to the process of elimination of firms in the rather vicious cut and thrust of Victorian capitalism. Upon the publication of Charles Darwin’s “On the Origin of Species” in 1859 Spencer quickly saw the parallels to natural selection and applied the phrase to the process of natural evolution. As a result he became one of a group of philosophers known as “social Darwinists”, applying Darwin’s principles to human society. It has often been considered a relatively harsh philosophy, espousing in its most extreme form that the fittest members of society naturally survived and prospered, while the weaker members of a society were doomed to perish.

Many of the incremental measures we see in today’s networks have been brought about by this reactive response to market pressures rather than through a distinct planned process of technology development. One could characterize firewalls, Network Address Translators (NATs), Quality of Service (QoS), Application Level Gateways (ALGs), network caches, and a myriad of similar mechanisms as examples of this form of ad hoc response to market pressures for network services. Whether they represent entropy or evolutionary change in the Internet model is perhaps left as a personal perspective.

One area of technology continues to sit outside this process of current technology churn in the Internet, and that’s IPv6. IPv6 is not an outcome of a reactive model of technology development, but is instead an example of a centrally planned development that was designed in anticipation of a market situation. Curiously, the very conditions that IPv6 was intended to avoid, namely that of a chronic address shortage in the deployed network, have already manifested themselves in many ways and in many places, and yet the market demand for IPv6 services remains relatively insignificant, and certainly below a threshold for viable commercial services for many operators.

So what’s the problem? How will IPv6 services appear in the market? Is this an evolutionary process of orderly migration of IPv4-based services into an IPv6 networking realm? Or is IPv6 going down a path of premature extinction, never to appear as part of the mainstream communications portfolio? Or will IPv6 play for high stakes here and take on IPv4 as its major competitor and win market share through a revolutionary process of defining price and performance points that are simply not sustainable with any other technology, including IPv4?

Let’s now look at the potential futures for IPv6, and in particular at the options of extinction, evolution and revolution in the context of IPv6 and its struggle for market take-up in the coming years.

Extinction

Is IPv6 another case of OSIfication, or another example of a network technology that simply will never attain mainstream adoption?

The Open Systems Interconnection (usually abbreviated to OSI) was a new effort in networking started in 1982 by the International Organization for Standardization (ISO), along with the ITU-T.

Prior to OSI, networking was completely vendor-developed and proprietary, with protocol standards such as SNA and DECnet. OSI was a new industry effort, attempting to get everyone to agree to common network standards to provide multi-vendor interoperability. It was common for large networks to support multiple network protocol suites, with many devices unable to talk to other devices because of a lack of common protocols between them.

However, the actual OSI protocol stack that was specified as part of the project was considered by many to be too complicated and to a large extent unimplementable. Taking the “forklift upgrade” approach to networking, it specified eliminating all existing protocols and replacing them with new ones at all layers of the stack. This made implementation difficult, and was resisted by many vendors and users with significant investments in other network technologies. In addition, the OSI protocols were specified by committees filled with differing and sometimes conflicting feature requests, leading to numerous optional features. Because so much was optional, many vendors’ implementations simply could not interoperate, negating the whole effort.

The collapse of the OSI project severely damaged the reputation and legitimacy of the organizations involved, especially ISO. The worst part was that OSI’s backers took too long to recognize and accommodate the dominance of the TCP/IP protocol suite.

http://en.wikipedia.org/wiki/Open_Systems_Interconnection

Will IPv6 act as a catalyst for a step in some completely different technology direction, one that may be as radical in its nature as previous big leaps of technology in the communications sector? In the same fashion as the industry has already lurched through multiplexing solutions based on Frequency Division Multiplexing, Time Division Multiplexing and then Packet Switching, are we awaiting something far more radical than a realignment of some of the IP packet’s header fields? Is IPv6 a rather eloquent demonstration that packet switching has reached some basic set of limitations, and that a successor technology to IPv4 needs to take a completely new approach to a shared communications environment?

The original IP architecture, as a very simple adaptation layer between a broad collection of packet switching technologies and a similarly broad collection of services and applications, is certainly dying at the moment, if not already dead. The model of coherent and transparent end-to-end packet transmission is disappearing from today’s network, and is being replaced with a collection of packet header rewriters, a set of content-sensitive packet forwarding systems and even entities that perform session interception and regeneration. Any application that assumes a simple end-to-end model of packet delivery has no role in today’s Internet, and any popular Internet application has to be able to invent its own identity space, and be able to allow its data streams to pass through NATs, ALGs and other middleware elements with impunity. This may require multi-party interactions to complete a transaction where previously only two parties were necessary. For peer-to-peer environments we are now looking at application mediators and agents to assist in setting up the necessary rendezvous points, as well as assisting in the identification of what forms of middleware behaviour exist in the network path (STUN, ICE and TURN are good examples of this approach of application-level middleware discovery). Efforts to impose overlay topologies, tunnels, virtual circuits, traffic engineering, fast reroutes, protection switches, selective QoS and policy-based switching on IP networks appear to have simply added to the cost and detracted from the end user utility.
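As an illustration of that application-level middleware discovery, here is a minimal sketch (in Python, an assumption of this write-up rather than anything the article prescribes) of a single STUN Binding request in the style of RFC 5389: the application asks an outside server what source address and port the NAT presented on its behalf. The server name is simply one commonly available public STUN service; any reachable STUN server would do.

import os
import socket
import struct

STUN_SERVER = ("stun.l.google.com", 19302)   # assumed public STUN server
MAGIC_COOKIE = 0x2112A442

def stun_mapped_address():
    # Binding Request: type 0x0001, zero-length attribute section,
    # the magic cookie, and a random 96-bit transaction ID.
    txn_id = os.urandom(12)
    request = struct.pack("!HHI", 0x0001, 0, MAGIC_COOKIE) + txn_id

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(3.0)
    try:
        sock.sendto(request, STUN_SERVER)
        data, _ = sock.recvfrom(2048)
    finally:
        sock.close()

    # Walk the response attributes looking for XOR-MAPPED-ADDRESS (0x0020).
    pos = 20
    while pos + 4 <= len(data):
        attr_type, attr_len = struct.unpack_from("!HH", data, pos)
        if attr_type == 0x0020:
            _reserved, family, xport = struct.unpack_from("!BBH", data, pos + 4)
            if family == 0x01:                       # IPv4 only, for brevity
                port = xport ^ (MAGIC_COOKIE >> 16)
                raw = struct.unpack_from("!I", data, pos + 8)[0] ^ MAGIC_COOKIE
                return socket.inet_ntoa(struct.pack("!I", raw)), port
        pos += 4 + attr_len + (-attr_len % 4)        # attributes are 32-bit aligned

    return None

if __name__ == "__main__":
    print("address and port as seen from outside the NAT:", stun_mapped_address())

The point is not the particular protocol but the extra moving part: an application that once simply opened a socket now has to interrogate a third party just to learn what identity the network has given it.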

So, today, we are engineering applications and services in an environment where NATs, firewalls and ALGs are assumed to be part of the IP plumbing. We now have constrained models of interaction that divide the work into clients and servers, and mandate that all transactions are initiated by clients and directed to servers. We have forced applications to invent their own per-application identity realms, and required them to rely on the deployment of active middleware, in the form of agents, to orchestrate multi-party rendezvous and referral. By implication NAT states and other middleware states are now multi-party shared states, and what were considered to be local, autonomously functioning entities are now faced with the complexities of supporting a signalling environment that is associated with distributed shared state.

All this complexity is not just a problem in the abstract sense; it is a form of architecture that results in more fragile applications and higher operational costs. The Internet, far from becoming simpler and cheaper, is under increasing pressure to take on increasing complexity and operate with escalating costs.

Can IPv6 reverse this trend? We’ve all heard the observation that IPv6 was a typical outcome of standardization conservatism. IPv6 represents an outcome of engineering compromise between making marginal changes and taking an entirely new approach to packet switching architecture, and the standards process is invariably one that tends to avoid making radical decisions. IPv6 represents a very marginal change in terms of design decisions from IPv4. IPv6 did not manage to tackle the larger issues of overloaded address semantics. IPv6 did nothing to address routing scaling issues. IPv6 has done little in terms of altering the semantics of packet switching, and what we are left with in IPv6 is a slightly larger address field.
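For a sense of scale, the one unambiguous change is the jump from 32-bit to 128-bit addresses, which a few lines of Python (used here purely as an illustration) can put into numbers:

import ipaddress

# The address field is the one unambiguous change: 32 bits become 128 bits,
# while the packet switching model itself stays essentially the same.
v4 = ipaddress.ip_network("0.0.0.0/0")
v6 = ipaddress.ip_network("::/0")

print(f"IPv4 addresses: 2^32  = {v4.num_addresses:,}")
print(f"IPv6 addresses: 2^128 = {v6.num_addresses:,}")
print(f"ratio:          2^96  = {v6.num_addresses // v4.num_addresses:,}")

That factor of 2 to the power 96 reappears later in this article in the context of a device-dense world.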

One could be excused for thinking that the marginal changes in IPv6 over IPv4 represent such a small difference that no one would be interested in paying their share of the rather high price of worldwide transition. Alex Lightman, chairman of the IPv6 Summit, was reported to have raised the question of who will actually pay for the transition to IPv6. As reported by internetnews.com, “There is an unreleased report by the Dept. of Commerce estimating it will take $25-$75 billion to pay for the transition, according to one of our speakers,” Lightman said. “So what part of that will the U.S. government pay for?”

December 12, 2005,
http://www.internetnews.com/infra/article.php/3570211

But if IPv6 is indeed too small a change over IPv4 and its fate is really to be that of extinction, then what other approaches can we take to a successor to IPv4? Is there anything else around today that takes a radically different view of how to multiplex individual transactions within a common communications system? The answer to this question appears to be “no”, or at least there appears to be nothing that has been developed beyond the initial conceptual stage, and certainly nothing that has been extensively evaluated for such a role. So, for the near term, there does not appear to be any alternative technology waiting in the wings. If we don’t want to adopt IPv6, and are happy to let it lapse into extinction, then we need to design and develop another protocol. In that case how long would such a new design effort take? And if we embarked along such a path, what is the likelihood that the effort would encounter precisely the same set of constraints as the IPv4 and IPv6 design efforts, and end up in much the same place as IPv6—taking a slightly different view of a common set of design trade-offs between a common set of basic constraints that were already encountered in IPv4? Of course there is also the option of heading well beyond the current concepts of packet switching and looking at entirely different communications architectures, but here the design and development timelines become a significant inhibitory factor.

So if we think that IPv6 is not the answer, and we believe that we should look elsewhere for a successor technology to IPv4, then it is likely that any such effort would take at least a decade, or more likely longer, to generate a workable outcome. And the other nagging consideration here is the question of whether such a design effort would end up as a marginal outcome in any case. Would we be looking at no more than a slightly different set of design trade-offs within a common set of constraints?

So in the near term, and possibly in a longer term of some decades to come, “extinction” is not a very likely outcome for IPv6—there is simply no other option on our horizon, so if we are to move away from IPv4 sometime soon then IPv6 is what we will be using instead.

Evolution

So if the premature extinction of IPv6 is highly unlikely, then can we make do with IPv4 indefinitely, or should we be looking for some evolutionary path into IPv6?

Can we continue to use IPv4 indefinitely? There’s little doubt that the IPv4 network model is under relatively severe stress in terms of its address and routing scalability, and there is no confidence that IPv4 can be made to scale indefinitely to encompass larger and larger populations of users. As we’ve already noted, the Internet is no longer a simple network, and as it continues to grow it’s likely that at some point the cost of scaling the various components and their forms of interaction reaches a point where it’s just no longer a viable proposition to continue to grow. While increased volume usually implies lower unit cost, at some point the cost of complexity starts to become a significant factor in unit cost escalation, and the network reaches a scaling failure point. The possible pressure points include the capability to scale NAT deployment indefinitely, the capability to scale routing systems, the capability to scale network middleware indefinitely, the capability to effectively ward off various forms of hostile attack on the network, and the capability for an ever larger, ever more complex network to operate in a stable and useful fashion. Whether this is a failure point of the capability of the technology, where the network itself reaches a size where it just cannot operate in a stable mode, or a failure point of the underlying economics of the network, where the unit costs of the service escalate beyond the point of viability, is an open question, but the common factor is that IPv4 is a technology platform with finite scaling bounds, and it cannot fuel an open-ended networking future.

Hopefully we should have evolved the network beyond these limitations well before reaching such a critical failure point, and the major lever here appears to be to head towards a simpler network that performs fewer functions within the network. Simpler networks, simpler applications, simpler operation, better scaling properties. This is certainly the core promise of IPv6.

So if the question is “should we evolve the network to IPv6?”, then the general answer appears to be a resounding “yes” for most values of “we”.

However the precise motivations vary for each player. IPv6 can allow for the resumption of a network model that uses unique global addresses for each connected endpoint, for endpoint populations that can scale into the hundreds of billions. IPv6 is capable of embracing a device-dense world. The per-address cost can be reduced dramatically through the elimination of various forms of dynamic address translation technologies, as well as the elimination of the scarcity premium factor in IPv4 address mechanisms. Application complexity can also be reduced, and the diversity of application models can be broadened. This model of universal addressing allows for many forms of peer-to-peer networking models as well as supporting communication transaction security models that rely on end-to-end coherence. All these factors point to a networking model that supports simple and ubiquitous communications services, which in turn supports utility device deployments. So the desired outcomes appear to point to simpler networks, simpler applications, larger populations of connected devices, more efficient services, and a broader diversity of service models. The set of potentials presented by ubiquitous adoption of IPv6 thus paints a very compelling picture of benefits for a diversity of players in the industry.

However none of these potentials has managed to persuade the industry to take the plunge and undertake the transition to IPv6 so far. The potential benefits of IPv6 appear to offer insufficient drive to the industry to get this transition underway. Why is this? Perhaps it’s because the pressure points of the current IPv4 deployment don’t cause uniformly high levels of pain. ISPs are neither application authors nor device manufacturers, so they do not directly incur the additional cost of complexity in the application, or the cost of additional memory, additional software and additional configuration complexity in the device. The ISP therefore feels insufficient direct pressure to roll out a new network protocol.

What else would drive an ISP to deploy a new networking protocol? In crude terms there are two very basic business drivers—fear and greed. Greed is the desire to enter new markets in a way that maximizes beneficial outcomes, while fear is a defensive response that emulates the business opposition in order to defend an existing market position. So, in these terms, is there an “early adopter reward” for deployment of IPv6? What is the fear or greed driver here that would propel the ISP industry into undertaking this transition? Unfortunately there appear to be no clear “early adopter” rewards for IPv6. Existing players currently have strong motivations to defer expenditure decisions because of strong shareholder pressure to improve the earnings-per-share position within the carrier industry. This is not the time to support a business case that leaps too far ahead of the existing business model and takes a somewhat riskier, longer term position in the market. There is still considerable uncertainty over the future of the voice industry as the competition with VOIP becomes more intense, there is still a basic push by the industry to enter into value-added service markets that entail more complex network architectures, and IPv6 is seen as a longer term direction that has little relevance to the current ISP industry position. The return on investment in the IPv6 business case is simply not evident in today’s ISP industry. New players have no compelling motivation to leap too far ahead of their seed capital. All players see no incremental benefit in early adoption. And many players’ short-term interests lie in deferral of additional expenditure. So the short term industry response appears to be to defer expenditure on IPv6-based deployments and await further developments.

So if the question is “when will this transition to IPv6 happen”, the general industry response appears to be “later”. So the real question here is what is the nature of the trigger for change, or, at what point, and under what conditions, does a common position of “later” become a common position of “now”?

So far we have no clear answer from industry on this question.

This is not a case where regulatory initiative would be all that helpful. Our previous experience with OSI and various national and regional GOSIP programs has provided a convincing lesson that technology adoption through regulatory measures or administrative fiat is an abject failure. So we are forced to look back at the market interaction between service providers and consumers of the services to see where the leverage may lie. Unfortunately there are few network differentials in the current consumer world that provide any great leverage—after all it’s still email and it’s still the web, and the choice of protocol over which these applications operate should be a matter of supreme indifference to the end consumer. Expecting the consumer to pay more for a supposedly seamless, invisible network attribute is indeed a bad case of wishful thinking. Indeed it is perhaps worse than this. In recent years we have managed to create a secondary supply industry based on network complexity, address scarcity, and insecurity. The prospect of further revenue erosion from simpler, cheaper network models based on IPv6 deployment is one that this industry views with some suspicion and fear. The business obstacles don’t stop here. The concept of simpler networks leads to the concept of revenue erosion for the provision of network services. In an industry that has already undergone significant turmoil over the past decade, and where the current incumbents are looking at weak financial figures for their businesses, the entire concept of outlaying more capital investment to deploy an IPv6 network is not exactly a glowing proposition. Indeed the industry has already invested large sums in packet-based data communications over the past decade, and there is little investor interest in still further infrastructure investment at present. When you add to this the consideration that IPv6 is a step back to a simpler, cheaper network, then this translates to an incremental investment that will reduce the revenue yield per customer. This is not exactly a business-friendly proposition. So it’s little wonder that the industry has been far more fascinated by MPLS, QoS and VPNs in an effort to increase the returns on its network investment through the quest for “value added services”, while at the same time paying lip service to IPv6 without any major level of investment to match.

Oops!

So evolution, or an ordered migration from IPv4 to IPv6, does not appear to be happening. IPv6 is not seen in a highly positive light. IPv6 promotion may have been too much too early, and these days IPv6 may be seen as tired rather than wired.

“Everything over HTTP” and the client-server model of networking has proved far more viable than perhaps it should have, and these days any decent application that gains popular attention can traverse NATs, ALGs and a myriad of other middleware barriers with consummate ease. If it couldn’t be so agile then it simply would not gain popular attention. So we now have an Internet where the service portfolio appears to be collapsing into a small set of applications that are based on an even more limited set of HTTP transactions between servers and clients.

Maybe it’s just deregulation of the industry, where short term business pressures simply support the case for further deferral of IPv6 infrastructure investment. In this economic view of the Internet industry there is insufficient linkage between the added cost, complexity and fragility of deploying network middleware and associated traversal applications at the edge of the network and the costs of infrastructure deployment of IPv6 in the middle. This leads to the observation that deregulated markets are often not perfect information markets, and the points of pain, or cost, become isolated from potential remedies, or savings.

It would appear that evolution is really not an option for IPv6 either.

Revolution

The transformation of IPv4 from a research experiment to a mainstream public communications environment is an interesting case of technology revolution. IPv4 presented a portfolio of cheaper switching technologies, more efficient network usage, simpler networks with lower operational costs, and a structural cost transfer from operational costs within the network to capital costs at the edge. IPv4 represented a compelling and revolutionary business case of stunningly cheaper and more effective services to end customers. This was the silicon revolution at its most effective. The transformation has not been ordered and well planned. Some of the giants of the older telephone world have lost vast amounts of money, some have gone bankrupt, while others have been sold off as mere shadows of their former market presence. Workforces are being realigned, investors have had to adjust their expectations and regulators have been confronted with an entirely new set of market behaviours and associated services.

Perhaps the most compelling view of IPv6 is in the same vein of being a revolutionary force with large scale disruptive implications to the industry. The leverage here lies in the observation that IPv6 represents an opportunity to embrace the communications requirements of a device-dense world—an opportunity that is simply lacking in the IPv4 realm. This device dense world is a world that is far larger than that of human-use devices, and encompasses a potential population that is at least some 2 - 3 orders of magnitude larger than today’s Internet. This encompasses a world of embedded communications, smart tags and applications that can encompass many forms of active and passive monitoring.

In and of itself this sounds benign, if not innocuous, for the Internet. But how much money would you let your washing machine spend on communications services? Or your luggage tag? Or any one of thousands of chattering devices? The economics of a device-based communications world are vastly different from those of human-mediated communication. In the voice world the value proposition shifted away from cost-based service tariffs towards value-based tariffs. It wasn’t the cost of allowing two people to speak to each other, but the value people placed in being able to talk to each other. Even the Internet so far has an inherent value in human-based communication. The value of today’s Internet lies in people-to-people messaging, in web browsing, in downloading entertainment, and in other predominately human pastimes. In a device world the value proposition is at a much lower level, and one way to look at the resolution of a device-based Internet is to think of a service environment that reduces the end consumer costs by a further 2 to 3 orders of magnitude. Yes, that implies that the threshold for a device-rich communications world is an industry price benchmark of megabit-per-second access tariffs of between 3 and 30 cents a month, or being able to purchase gigabit-per-second Internet access for the same $30 price benchmark we use today.
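The arithmetic behind those benchmarks is simple enough to sketch (in Python, purely as an illustration), assuming today’s benchmark of roughly $30 a month for megabit-class access:

# Back-of-the-envelope version of the price shift described above,
# assuming an illustrative benchmark of $30/month for ~1 Mbit/s access today.
todays_tariff = 30.0   # dollars per month per Mbit/s (assumed benchmark)

for orders in (2, 3):
    factor = 10 ** orders
    per_mbps = todays_tariff / factor
    print(f"{orders} orders of magnitude cheaper: "
          f"${per_mbps:.2f}/month per Mbit/s, "
          f"i.e. {factor} Mbit/s for today's ${todays_tariff:.0f}")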

How to achieve these revised price benchmarks for Internet services is the critical question. We’ve already extracted massive improvements in transmission cost efficiency through the move to wavelength division multiplexing on fibre cable. We’ve already extracted massive improvements in the efficiency of switching through the move from time-division switches to packet switches, and from state-based circuit switches to stateless packet-based switches. We’ve already extracted further cost efficiency in the network by pushing many of the services and functionality out to the edge and attempting to follow a direction of simpler, cheaper networks.

So what’s left? I suspect that the truly revolutionary message in IPv6 is a message about extracting efficiencies in the business model of communications. We appear to be looking at a transition from value to volume with IPv6. IPv6’s true leverage is the ability to encompass a world of tens of billions of chattering devices. The service industry that provides the networking services to these tens of billions of devices will not be a bloated, inefficient relic of a bygone era of monopoly service enterprises. Indeed it’s likely that it will have nothing in common with the enterprises that operate in this industry today. IPv6 appears to carry an implication of a quite dramatic shift in the service enterprise to an industry based on a commodity utility. We are looking at an industry that will operate at single digit operating margins, with investment returns similarly phrased. If we want IP to operate from anonymous sockets in the wall, or seamlessly over wireless, then we will be looking at service delivery systems that provide a simple, lowest common denominator networking service. The search for value-added services and value-added networks has no logical role in such a commodity utility world. This all sounds quite conventional, and the path to commoditization of many artifacts and services is a well trodden one in many industries and service sectors. So why is this such a revolutionary message for the communications industry? I suppose the observation here is that this is one industry which is continuing to live the myth that there is a pot of gold out there in value-added networking-land, and that the windfall profits made in successive waves of innovation in the telephone industry over the decades will continue to repeat themselves, and there is a pervasive air of denial over a message that says that the value is going to be destroyed by volume. In this industry the words “commodity” and “utility” remain taboo!

The IPv6 Condition

In taking an objective look at IPv6, there are no compelling technical features or revenue levers in IPv6 that are driving new investments in existing IP service platforms. It does not appear that an industry-wide shift to IPv6 is going to be driven by the current value-added network service model and the associated current set of consumers of today’s services. There is just insufficient marginal benefit to the end consumer to create a value proposition that will justify paying an increased tariff for having access to IPv6 as well as IPv4—after all it’s still email and it’s still the web!

The current user base has managed to become wedged in a situation where there is not enough impetus to move away from the networking model of IPv4, and we appear to be stuck within a client-server model of network-mediated relationships. The network operators continue to push the network into undertaking a higher valued role in mediating communications, and usage of the network continues with a largely human-directed set of services. One could characterize this as an environment that places extracting maximal value from the network as the prime objective, over serving maximal volume.

Interestingly, the underlying engine for digital communications, the silicon chip industry, also started in a vein of attempting to place silicon chips in highly-valued devices, but this industry made the switch to a volume industry decades ago. This is an industry that has significant cost differentials between design and fabrication, so it’s probably little surprise that it quickly appreciated the longer term value of a general approach of recouping the design cost in very high volume production runs.

It’s likely that IPv6 sits in this same situation, and will only gain widespread industry acceptance within a broader shift in the communications industry from value to volume. If we are truly looking at an Internet of gadgets, of billions of chattering devices, then what will drive IPv6 deployment in a device-rich world is a radical and revolutionary value-to-volume shift in the IP packet carriage industry. In IPv6 we appear to be looking at a shift in the industry to that of an undistinguished commodity utility service provision industry. Such an industry will inevitably take on once more a very conservative profile, and will no longer be able to afford further extensive and rapid innovation. So if we take this step into such a world then we need to be pretty confident that we are comfortable with this step being a very long term one.

The IPv6 Revolutionary Manifesto

It is unlikely that IPv6 will be an evolutionary step for the Internet; it is more likely to be yet another revolutionary step for the communications industry. It is likely that IPv6 will need to compete for market share with IPv4, and the basic terms of the competition for the consumer will be price-based rather than feature- or service-based. IPv6’s basic potential is that of extraordinary volume, but to achieve this we will need to push down the unit cost of packet delivery by orders of magnitude. It appears that the major means of getting there is through commodity volume economics that will direct the industry towards even “thicker” transmission systems, simpler, faster switching systems, lightweight application transaction models, and an industry profile of a commodity utility sector.

This is definitely going to be a painful revolution, as it will be the industry itself that will offer the highest levels of resistance to such a radical agenda.

* * *

In June 2003 the following announcement was made by the US Department of Defense:

US Department of Defense adopts IPv6

Implementation of the next-generation Internet protocol that will bring the Department of Defense closer to its goal of net-centric warfare and operations was announced on June 13, 2003 by John P. Stenbit, Assistant Secretary of Defense for networks and information integration and DoD Chief Information Officer.

The new Internet protocol, known as IPv6, will facilitate integration of the essential elements of DoD’s Global Information Grid—its sensors, weapons, platforms, information and people. Secretary Stenbit is directing the DoD-wide transition.

The current version of the Internet’s operating system, IPv4, has been in use by DoD for almost 30 years. Its fundamental limitations, along with the world-wide explosion of Internet use, inhibit net-centric operations. IPv6 is designed to overcome those limitations by expanding available IP address space, improving end-to-end security, facilitating mobile communications, enhancing quality of service and easing system management burdens.

“Enterprise-wide deployment of IPv6 will keep the warfighter secure and connected in a fast-moving battlespace,” Secretary Stenbit said. “Achievement of net-centric operations and warfare depends on effectively implementing the transition.”

Secretary Stenbit signed a policy memorandum on June 9 that outlines a strategy to ensure an integrated, timely and effective transition. A key element of the transition minimizes future transition costs by requiring that, starting in October 2003, all network capabilities purchased by DoD be both IPv6-capable and interoperable with the department’s extensive IPv4 installed base.

I was asked to provide a comment on this announcement, and at the time I made the following response:

The enduring value of IPv6 lies in the massive amount of coherent address space that allows literally billions of devices to be uniquely addressed. Address uniqueness is a strong value proposition when you want an identifier space to cover a very large deployment space. As an example of this, one of the two properties of the original Digital-Intel-Xerox Ethernet II specification that remains in today’s 10 Gigabit Ethernet specification is unique 48 bit MAC addresses. All of that highly innovative CSMA/CD thinking that at the time we thought was the fundamental property of Ethernet has been dispensed with, and it’s the address space that still defines “Ethernet” today.

The general observation is that any communications system requires any party to be able to uniquely identify any other party in order to initiate a private communication session. If you cannot perform that most basic of communications functions, then you simply do not have a functional peer-to-peer communications network.

But doesn’t that mean that the stories of IPv4 address exhaustion have some substance? With the large number of addressable devices hidden behind NATs, and the associated move to using domain names as the underlying identifier space for many communications applications, the pressure on consumption of IPv4 address space has been reduced considerably, but at the cost of increased network complexity. This implies that in a world of human-driven screens and keyboards we see some considerable lifetime left in the admittedly comfortable world of IPv4 as we know it. To support this model we’ve actually moved away from the IP address as the unique identifier token for many applications, and substituted an application model that is largely driven from domain names. As a trivial example, look at the virtual hosting mechanism as implemented in web servers to see this shift in server identifiers from IP address to domain name. So in the context of the current IP market, both as consumers of the technology and as an industry, we can live with this identity split for some time yet, because we appear to concentrate our use of IP addresses on a routing and forwarding framework identity, and increasingly use the DNS as the identifier realm for applications.
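That virtual hosting shift is easy to demonstrate with a short sketch (Python again, as an illustrative assumption): two named web servers can share one IP address, and it is the application-level Host header, not the address, that selects the service. The address and names below are placeholders for any pair of co-hosted sites.

import http.client

SHARED_IP = "192.0.2.80"   # placeholder: one address hosting many named sites

for name in ("www.example.com", "www.example.org"):
    conn = http.client.HTTPConnection(SHARED_IP, 80, timeout=5)
    # Same TCP endpoint each time; only the Host header, an identifier drawn
    # from the DNS rather than from the address plan, selects the site.
    conn.request("GET", "/", headers={"Host": name})
    response = conn.getresponse()
    print(name, response.status, response.reason)
    conn.close()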

Our world is a world where the device is subservient to the user, and the applications we associate with the Internet of today are applications that are essentially human pastimes, such as e-mail, web browsing, or high-value automated transactions, such as those commonly bracketed into the e-commerce area. And we’ve now established a highly valuable global industry upon these foundations.

In so doing we should recognize the emergence of a second set of communications realms populated by uniquely identified devices that number in their billions, where the inter-device traffic is not human mediated, and where the value of the device transactions is, at an individual transaction level, far lower than the value of the human-driven realm of IPv4. In other words, in a device-rich communications realm, it’s likely that the human value we’d ascribe on average to each packet is far lower than in our current IPv4 Internet world of human-mediated communications. And it’s towards this extravagantly device-equipped world that we see the U.S. Department of Defense heading. If your stock in trade is one of quite astounding feats of logistical deployment of large numbers of people and large numbers of items of equipment, then the communications requirement is of a different order of scale to that of the retail Internet markets, and, yes, I’m sure that there are entirely effective arguments behind that decision to look forward to a communications realm with a uniform base protocol identifier domain at a scale that is 2 to the power 96 times larger than the entire IP address identifier domain of IPv4.

I would be cautious about high levels of expectation that this immediately translates into an impetus in the market where you and I converse. My host here where I’m typing this message is already IPv6 capable, and if you are running a recent version of host software, then it’s a reasonable assumption that yours is too. But I’ll send this message over IPv4 and you’ll receive it over IPv4, and between my mail sender and your mail receiver the transport channel will also be IPv4. Should we use IPv6 instead? Would I pay my provider additional money to compensate it for part of its additional expenditure to support a simultaneous IPv6 capable network between you and me? To send precisely the same message? In precisely the same time? Along the same path? Using the same transport TCP session? Obviously, to me, as a (hopefully) economically rational consumer of such services, and no doubt to you, in a similar role, there is no value in spending more money to achieve outcomes in IPv6 that are identical to what we can already do today in IPv4. And in the retail Internet world that remains the basic IPv6 conundrum. Why should any provider spend additional resources to service the same market with identical services, and in so doing be unable to raise additional revenue to offset their additional service costs? One interpretation is that there is no natural motivation for such activities in today’s market, otherwise it would already be very widespread indeed.
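As a small illustration of that dual-capability observation, the short sketch below (Python, as an assumed illustration; the host name is a placeholder for any dual-stacked service) asks the resolver for the addresses behind a single name. On a dual-stack host it will typically return both IPv6 and IPv4 addresses, even when the traffic then ends up flowing over IPv4 exactly as described above.

import socket

# Ask the resolver for every address family it can offer for one name.
# "www.example.com" is a placeholder; substitute any dual-stacked host name.
for family, _type, _proto, _canon, sockaddr in socket.getaddrinfo(
        "www.example.com", 80, proto=socket.IPPROTO_TCP):
    label = "IPv6" if family == socket.AF_INET6 else "IPv4"
    print(label, sockaddr[0])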

What we’ve seen in the mainstream Internet world is an emerging mythology about IPv6 that somehow this additional expenditure, ultimately on the part of the consumer, provides some additional benefit for the consumer, motivating them to switch from IPv4-only services to some hybrid of mixed v4 and v6 and ultimately to a v6 world, and thereby funding the additional provider expenditure associated with such a massive transition.

The reality is more sobering in that in the retail Internet world there is so far nothing obvious in the “additional benefit” category. I’m using Network Address Translation (NAT) right now, using an ssh session back to my mail server that drives through NAT boxes to make a secure SMTP session, across a first step of 802.11 wireless in order to pass this message into a mailing list. I’ve auto-configured my laptop in the wireless world, and for me I’m living in a plug-and-play world that supports my level of roaming access. Would IPv6 make this session any more secure? Any different in terms of Quality of Service (QoS)? In plug-and-play models of roaming? Would there be any visible difference in terms of my ability to communicate with you? To all of these questions the basic answer is still “no.”

So, for you and me, when we look inside the IPv6 technology box we find nothing new there to motivate us to spend more money for our existing Internet-based communications services, and for some time to come it would appear that this limitation will still hold.

On the other hand there are circumstances where there is a need to operate in a much larger base protocol address space. These include situations where one wants to take advantage of Internet applications that operate across a world of literally billions of devices, large and small. The application space may want to gather constant reports on the characteristics of the “thing” it is attached to, from a ration pack to a component of a large naval vessel. You may want to use supply channels for such devices such that the deployment is a plug-and-play world without a massive variety of detailed configuration processes. You may be looking for an architecture that would be stable for many years. In such circumstances you really want to take advantage of a uniform set of Internet application technologies that potentially span massive numbers of addressable devices. Here a large base address space is a definite asset. And for industry sectors voicing such requirements, where there is also a somewhat different ultimate value proposition for the supported communications activity, it’s quite understandable that immediate adoption of IPv6 can offer an attractive proposition.

But back in the communications realm where you and I currently exchange our messages, such requirements remain in a future framework that is still waiting for relevant value propositions that allow it to gain traction with you and me.

Maybe we just need to be patient. Steam ships did not halt operation the first day a diesel powered vessel appeared. It was a much slower process that eventually transformed the maritime fleet. The next generation of mechanization of naval vessels offered cheaper services, and, as often happens, market price won in that commodity market.

Market price often wins in competitive commodity markets. And the Internet retail market is, in many parts of the world and in many sectors, a strongly competitive space with all the characteristics of a commodity offering. And there is no doubt that if you and I could communicate in precisely the same fashion as we do today, with precisely the same applications and service environment, using precisely the same host devices and operating systems as we do today, but at some attractive fraction of today’s price, then I’m sure that neither of us would care in the slightest that our data was encapsulated using a packet framing format and address tokens that used the IPv6 protocol specifications.

By Geoff Huston, Author & Chief Scientist at APNIC

(The above views do not necessarily represent the views of the Asia Pacific Network Information Centre.)


Comments

Ian Peter  –  Jan 11, 2006 12:16 AM

Thoughtful piece, Geoff, a great read, thanks for making it available.

If I can paraphrase, IPV6 is offering a featureless upgrade that solves no apparent immediate problems for most (if not all) of us. Why bother?

You argued down the cases for extinction, evolution, or revolution of IP protocols. The other alternative I see is a feature laden upgrade with market appeal - maybe an IPV7 that solves some compelling problems? That might appeal to the market, but not to the purveyors of simplicity in base protocols. So are we stuck where we are?

Or do we (or someone else) just pave over IP and start again in the same way that IP paved over legacy telecommunications protocols?

Geoff Huston  –  Jan 11, 2006 1:35 AM

I am of the view that a ‘feature laden’ IPv7 (or whatever number we would be up to) would in fact be a design problem of attempting to make a different set of design tradeoffs within the same set of fundamental engineering constraints that lie behind all contextless packet switching environments. The problem that such an exercise faces is twofold: first, it would take some time to develop, as there is certainly nothing ‘hovering in the wings’ at the moment, and, secondly, there is this horrible suspicion that the design effort would once more end up with nothing much different, as the set of trade-offs is in fact much the same as it has always been.

After all the basic constructs of contextless packet switching are to expose to the network switching elements information relating to the location of the intended destination, the location of the purported source, some information relating to the progress so far of the packet within the network, and some rudimentary information relating to the reason for the packet. There’s not a whole lot of room to wriggle around within this set of constraints, so I suspect that we are stuck with this IP architecture.

That led me to thinking along the lines of “well, what IS the true driver for IPv6 (or indeed for moving to any packet switching protocol from the current IPv4 base)?”, and this line of thinking led me to the “volume over value transition” that is the thesis of the article.

I suspect that the next step we are looking at as an industry is true commoditization of the essential “raw” service of simple packet switching, i.e. removing from the industry all pretence of “value-adding” the network service offering. Of course this is a message that neither the current incumbents nor the service industries that support these incumbents are too keen on hearing right now.

Edward Lewis  –  Jan 11, 2006 4:20 PM

I’ve spent some time trying to deploy IPv6 in different environments over the past couple of years.  I used to have the license plate “ip6arpa” but have turned that in for a generic one, a reflection of my optimism.

As a consumer of Internet services, I would adopt IPv6 for one of two reasons.  If there was some service I wanted that was only available over IPv6, I’d adopt.  If adoption of IPv6 was significantly cheaper than IPv4, I would switch.  The fact that IPv6 has more addresses is not important to me, as a consumer.  It’s “what does the network do for me” that matters, not “how does the network do it for me.”

As an engineer, there’s no feature of IPv6 that beats IPv4 hands down.  Sure there are the extra addresses available, no question, but that is of little use given other limitations plus other stop-gap measures added to IPv4.  IPv4 has NAT, has DHCP.  Some believe NAT is evil, I can understand why, but it does extend the address space.  (In real life, society can deal with drive-on-the-right and drive-on-the-left.)

I don’t mean to say that NAT and DHCP make up for the shortfall in addresses in IPv4.  One limiting factor is still routing.  IPv6 hasn’t solved the IPv4 routing problems.  There are discussions towards “provider independent” address space policies in IPv6 - which disrupts aggregation of routes.  In IPv4, there is a lot of provider independent space - a lot of it the “legacy” space (that is, mostly pre-RIRs) that is assigned without regard to topology.  If this happens in IPv6 we will wind up with a network in which everything is addressed, but you won’t know how to get there.  IPv6 needs to make sure this won’t happen.

(I have always been concerned about lack of advances in IPv6 routing.  The 6Bone existed for IPv6 operations research - and still for <5 months more - but used tunnels over IPv4.  I couldn’t see - when I had a 6Bone network - how it provided a way to experiment with routing.)

The need for provider independent space is driven by two factors.  One, consumers of IP services have been burned by ISPs going out of business or suffering network-wide failures.  Two, it is not easy to re-address a consumer network, which is needed when your ISP goes out of business and you are still profitable.  It’s not just a host address issue, it is a firewall and DNS issue (to name two applications that “burn in” IP addresses into configurations).

It is said that IPv6 provides security. With IPsec it did provide a leap forward, but IPv4 was able to add this as well. Security on the wire isn’t all that needs to be provided, though. “Con games” like fake websites and outright attacks on credit card databases are still in play. So, saying IPv6 “improves security” is a hollow claim, even if it does mandate stepped-up security on the wire.

Is IPv6 an exercise in futility?  No, I don’t think so.  But it is hard to justify adopting it because it offers so little new.  It attacks issues some engineers see as critical, but it doesn’t necessarily solve the problems at the services level.  This is why I have a hard time convincing people we need to go to v6.

In the article, IPv6 is said to not be a reaction but rather an example of “centrally planned development.” This strikes me as an odd compliment to IPv6 because I recall the same sentiment being an insult to OSI. OSI was committee driven, a planned and studied approach to replacing the pre-IPv4 status quo of communications. IPv4 zoomed by because it “reacted” to needs, not anticipated trends. (Having been raised in an IPv4 “household,” the phrase “centrally planned” makes me think of the “evil, wrong way of thinking.”)

“If not IPv6, then what?”  That’s the most powerful question the IPv6 proponents can put forward.  But that’s arguing from weakness.  IPv6 will come when it is needed, not because it’s the last one left at the dance.

Suresh Ramasubramanian  –  Jan 13, 2006 4:15 AM

As Ed Lewis pointed out, there’s no “killer app” that’s driving v6 adoption... nothing that can’t be done over existing v4 networks.

High amounts of hype from IPv6 advocates make an interesting contrast to the less than visible performance and deployment of v6.

And the concept of a device-rich internet with every single toaster and microwave having a v6 stack is far, far away in the future (and to tell you the truth, I don’t want my toasters and microwaves on the internet, or even on my LAN). That adds all kinds of security complexities - bolting v6 or any other kind of IP stack onto devices that were meant to toast bread, or cook food, or whatever.

Entertainment - gaming / TV over IP - might help. Large-scale deployment to network otherwise intelligent devices (radar / scanners or whatever else the DoD is networking together) might help. But unless it is large-scale and more or less universally in use, it is not going to drive v6 deployment at all.

Successors to v6? Jim Fleming maybe, or that Chinese company that was trumpeting an “ipv9” :)

Or maybe when Vint Cerf’s vision of IP networks across outer space comes to pass, THAT might drive v6, or v12 or whatever.

We’re seeing the same old mistakes in IP allocation for v6, mistakes that characterized the early v4 networks, handing out class A, B and C space without any thought to the future.

Soon (!) we’ll have IPv12. And then we’ll have the IGSIS (Intergalactic WSIS) with little green men and tentacled aliens fulminating about how those *$^#$^#$ earthlings hogged all the usable v12 space so that they have nothing at all available for their own use.

Kim Liu  –  Jan 31, 2006 3:31 AM

You know, every time I read another IPv6 or anti-NAT related article, I really have to wonder if people are making some assumptions I am not getting.

Why does end-to-end matter at the transport (protocol) level, honestly? 

I mean, I don’t care if my data is going over copper, fiber, microwave, 802.11, or what.  It gets converted five ways from last Sunday during transmission.  Why do I care if it remains IPv4 end-to-end or not?  Why should my applications care if they are running on IPv6 or what?  (If the applications do care, that’s a badly designed application—they’ve tied their application into network layer dependencies.  Of course many applications *have* done this, using IPv4 specific implementations and—tada!—locked themselves out of IPv6.)
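To put that “badly designed application” point in concrete terms, here is a minimal sketch (the hostname and port are placeholders): an application that resolves names through getaddrinfo and connects to whichever address family the resolver returns never needs to know whether IPv4 or IPv6 is underneath, whereas one that hard-codes AF_INET and dotted-quad literals has locked itself to IPv4.

    import socket

    def connect(host, port):
        """Connect over whichever address family the resolver offers,
        trying each candidate (IPv6 or IPv4) in turn."""
        last_error = None
        for family, socktype, proto, _canonname, sockaddr in socket.getaddrinfo(
                host, port, socket.AF_UNSPEC, socket.SOCK_STREAM):
            try:
                sock = socket.socket(family, socktype, proto)
                sock.connect(sockaddr)
                return sock
            except OSError as err:
                last_error = err
        raise last_error or OSError("no usable address for " + host)

    # The caller never mentions IPv4 or IPv6:
    # sock = connect("www.example.com", 80)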

Heck, my data might get stored on optical media, magnetic media, solid state memory, whatever.  A hundred different media formats and systems, but as long as my applications access them through a common *interface* and can get out what I put in, why should I care about the specifics of what happens internally? 

NAT is not an aberration. NAT is the network fumbling, evolving (badly) towards a way of creating a layer of abstraction. Encapsulation, abstraction, common interfaces that can be reused without worrying about a particular implementation—some basic software engineering principles, ya know?

Arguing for an end-to-end model is like arguing that everyone on the planet should only use one company’s word processing file format. Or one manufacturer’s image format. Keep it simple, no competing, confusing formats. Don’t think about conversion or improvements. Conform. Conform. Conform. One of the key benefits of the digital world has been the ability to convert things between formats, not be locked into formats. We convert images/videos to bits and back again, sounds, text, etc. to electronic representations—copy, search, transform, convert. Trying to argue for a universal, one-size-fits-all network protocol is against the very nature of what makes the digital medium so revolutionary!

The vision of all these devices running IPv6 stacks is cute, but the monocultural deployment it represents is an evolutionary dead-end. I doubt you can convince me that IPv6 is the end-all-be-all, protocol perfection that will never be replaced. Yet, when a replacement shows up, how will you deal with all of this device-to-device communication between IPv6 devices you refer to? How will you upgrade them, this order of magnitude more devices? Communicate with them? Will you require all the world to upgrade each device they have that now has IPv6 built in? I doubt it.

*Conversion between protocols is a requirement!*  It is ultimately the only way forward, not dead-end-to-dead-end, monolithic/mono-culture systems, because we will always, always run into the problem of converting between protocols/media as we move forward.  Where would the network be if we mandated that data originating on copper could not be converted for transmission onto fiber?  Or wireless?

Now that we’ve outgrown our IPv4 box, people are looking at IPv6 and saying “This is a bigger box and therefore better!”, but the issue is *it’s still a box* no matter how you paint it. A dead-end in every direction one might want to progress in—if not today, then a decade from today. Why should I be interested in paying the cost of upgrading from one prison to another prison with only incremental value, when I may have to pay to upgrade to yet another larger prison later? *That’s* the question that really holds things back.

Get application/service makers to stop tying things to a specific box/protocol. Once users (and your devices) can communicate freely *between* different protocols through standard interfaces, they can have the choice, the freedom to live in the protocol/box most suited to their needs without giving up anything—and if they don’t like any of the ones available, they can make their own while still being able to interoperate. That’s evolution, by definition, and also your revolution, breaking out of the prisons of dead-end-to-dead-end. Copper, fiber, or wireless—I can make that choice for my network and still have my applications work. Give me that choice for my *transport* protocols, and take IP down off its pedestal. If we had the choice between IPv4 and IPv6 the way we can choose between copper and wireless, then IPv6 adoption would be well on its way. IPv4 and IPv6 are anti-choice in their end-to-end models, and thus mutually exclude each other, which hampers adoption.

People may be forced into the IPv6 box, simply because they’re squeezed out of the IPv4 box due to lack of room, but they won’t like it. All the value right now is in the IPv4 box, and the dead-end-to-dead-end model makes the two boxes try to exclude one another. The revolution, the value and incentive, will come when someone figures out a way to successfully build a door between the boxes—one that allows people to build doors into other boxes, whenever and wherever they want. That’s the difference between a prison and a room—a room has a door you can open yourself. The end-to-end architecture is a prison, because it has no doors to other protocols.

As it stands, IPv6 is neither evolutionary nor revolutionary compared to IPv4. It’s just a bigger prison. When you realize this, all the value in the proposition goes away.

Anssi Porttikivi  –  Jan 31, 2006 11:05 AM

NAT works reasonably well, and its problems can be adapted to at the application level. Global public addressability can also be solved at the application level, as Skype proves. SIP addresses could be another standard set of handles for negotiating connections to private addresses through Teredo-like proxies.

IPv6 transition requires a lot of work, most notably because of a need to upgrade all software from the bottom IP routing level up to the applications.

It also requires re-building and re-testing of lots of tricky, untried and often not-yet-available IPv6 packet filtering and firewalling, which your security guys will not do until IPv6 is mainstream. This is a very difficult chicken-and-egg problem. Due to security problems, two organizations starting IPv6 experiments typically can not communicate, and the promised new global addressing is useless.

Or actually it works right now, because the firewalls don’t yet block Teredo, because the security guys don’t know about this horrible Teredo technology which allows global incoming connections to any private address inside!
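For anyone wondering what those firewalls are missing: Teredo traffic is at least recognizable, since per RFC 4380 every Teredo address sits under 2001::/32 and embeds the Teredo server’s IPv4 address plus the client’s (obfuscated) public IPv4 address and UDP port. A small sketch of decoding one follows; the sample address is the commonly cited documentation example.

    import ipaddress

    TEREDO_PREFIX = ipaddress.ip_network("2001::/32")   # RFC 4380

    def describe_teredo(addr):
        """Return (server_ipv4, client_public_ipv4, client_udp_port)
        for a Teredo address, or None for anything else."""
        ip = ipaddress.IPv6Address(addr)
        if ip not in TEREDO_PREFIX:
            return None
        raw = int(ip)
        server = ipaddress.IPv4Address((raw >> 64) & 0xFFFFFFFF)
        port = ((raw >> 32) & 0xFFFF) ^ 0xFFFF            # port is stored inverted
        client = ipaddress.IPv4Address((raw & 0xFFFFFFFF) ^ 0xFFFFFFFF)
        return server, client, port

    # A filter or log-analysis script could use this to spot Teredo flows
    # that IPv4-only firewall rules never see.
    print(describe_teredo("2001:0:4136:e378:8000:63bf:3fff:fdd2"))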

Then what to do in the future? I strongly suggest that, if not as a solution, at least as inspiration you should look at the 9P protocol, the factotum “security agent” and Plan 9 OS networking. Currently 9P (also included in the latest Linux kernels) provides a simplistic byte stream over TCP, but any transport/network level could be used, allowing innovation transparently to higher levels, to the security architecture, and certainly to applications and users. The lower levels could use IPv4, IPv6, or any other future technologies, now considered to be “link” level. Note that the new here can co-exist with the old, as long as global IPv4 connectivity is available as a legacy. In the future, when the lower levels bifurcate and fragment, the 9P level would allow global connectivity.

For hate or for love, note that in a way 9P is UUCP for the next millennium. It has the same historical origins in the Bell Labs Unix world, uses the same node-name-based station-to-station source addressing, and is strongly file oriented, leaving the lower levels up to the implementation.

Edward Lewis  –  Jan 31, 2006 5:19 PM

Why is end-to-end important?

That is a good question.  In short, end-to-end is important in TCP/IP - technically important even if it seems invisible to policy and other layers.

The OSI definition of the Network Layer and the Transport Layer assumes that “everything is reachable.” These are layers 3 and 4, counting up the stack. Layers 1 and 2 (Physical and Link) took care of moving a “frame” of data from one electronic thing to another. Layers 3 and 4 are defined to be the glue that holds all the little pieces in place, creating the network.

I know that when I trot out the argument that “X” was an original assumption, so it is important, this seems like I am saying that “well, son, that’s the way things are.”  But what I mean to impress is that a lot of the succeeding development has occurred assuming “X” and therefore detailing why “X” is important could take a lot of time.

For example, one crucial feature of the IP network is congestion control.  Unlike telephony, datagrams (or packets) in the Network (3) Layer are sent out without prior reservation of resources.  When a computer originates a packet, the packet is placed into a frame (layer 1/2) and shipped to an interconnection device, such as a router.

The router has to read in the frame with the packet, and “in real time” put the packet into a newly made frame and ship it out another interface.  Inside a router, packets without frames (waiting for forwarding decisions and/or a frame buffer to be made available) use up memory.  Memory is one such resource that is needed, yet is unreserved.

Congestion control is the process of limiting the number of packets being originated so that the resources in the network are not overwhelmed. If they are, packets are dropped, resulting in more and more retries (the way IP networks overcome losses).

TCP/IP performs congestion control at the two endpoints of the data transfer - the originator of the packet and the destination. The originator increases the rate of packet generation until no acknowledgement from the destination is received - testing the limits of the path. Once the limits are known, the originator settles into a rate just below what can be borne. Whenever the destination fails to acknowledge the data, the rate is lowered. Periodically, higher rates are tried again to see if the network has freed up.

This process is designed to be end-to-end.  The intermediate elements don’t do anything but pass the packets, or drop them if there’s a problem.  The benefit of the end-to-end here is that the two communicating parties are getting the best estimate of what’s available.
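A toy model of that probing behaviour may help (a sketch of the additive-increase / multiplicative-decrease idea only; real TCP adds slow start, timeouts, fast retransmit and so on): the sender widens its window while acknowledgements keep arriving and cuts it back when one goes missing, so the rate hovers just under what the path can carry, with no help from the routers in between.

    def aimd_window(ack_history, window=1.0, increase=1.0, decrease=0.5):
        """Additive-increase / multiplicative-decrease: only the sender's
        view of acknowledgements drives the rate; the network core just
        forwards packets or drops them."""
        trace = []
        for acked in ack_history:
            if acked:
                window += increase                     # probe for more capacity
            else:
                window = max(1.0, window * decrease)   # back off on loss
            trace.append(window)
        return trace

    # Each True is an acknowledged round trip, each False a loss.
    print(aimd_window([True] * 6 + [False] + [True] * 4))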

What if we don’t do this end-to-end? That means we are approaching communications with either a reservations policy (as in the PSTN) or are doing store-and-forward. Reserving resources means committing a resource for more time than it is being used, one of the perceived “wastes” in the PSTN. (Arguably, hearing someone hum into the phone isn’t a waste.)

Store-and-forward is the more intuitive alternative. It requires large buffers to hold the communications going across the network and it also makes streaming of real-time data almost impossible (to manage). Having to wait to hear a 512 byte datagram at 100Mbps before retransmitting is nothing; having to listen to MBs of a message before retransmitting (at the same rate) takes considerable time. (For a sense of scale, 512 bytes at 100Mbps occupy the wire for roughly 40 microseconds, while a 10MB message at the same rate ties up the relay for about 0.8 seconds.)

More significantly, operating a reservation or store-and-forward network means putting more intelligence into the core of the network. A more intelligent core has always been a criticism of PSTN networks in the eyes of the TCP/IP community. The less intelligence in the core, the more adaptable the network is - i.e., the less is assumed about the traffic flowing, the easier it is to innovate at the edges.

NAT is a band-aid to the problem of managing address space. It does its job, and application protocols can be hardened against the way it breaks end-to-end. However, by its nature it disrupts the flow, slowing it down (more significantly for some flows and less noticeably for others). NAT boxes are one piece of a more “intelligent core”, along with firewalls, all of which results in higher operations costs for the core as well as increasing its rigidity.

I am sure there are features other than congestion control that depend on end-to-end, but it is the one that leaps to mind. Climbing up the logical ladder, store-and-forward in the Network and Transport layers means putting into the network what we really need to keep out, namely “intelligence.” In a nutshell, this is (one reason) why technicians clamor for end-to-end in the (TCP/IP) Internet. “End-to-end” is a basic assumption of the design of the TCP/IP protocols.

Kim Liu  –  Jan 31, 2006 6:12 PM

Quote: I know that when I trot out the argument that “X” was an original assumption, so it is important, this seems like I am saying that “well, son, that’s the way things are.” But what I mean to impress is that a lot of the succeeding development has occurred assuming “X” and therefore detailing why “X” is important could take a lot of time.

Irrelevant.

You need to show *why the assumption still holds true*. We have accomplished a lot within the frame of Newtonian physics, but it has since been proved to be insufficient at larger and smaller scales. The end-to-end assumption may have been true *at the time and scale of the original Internet*—it does not follow that that assumption, and others, scale to the size and needs of the Internet as it is today. If the assumption is no longer true, all the successes of yesteryear are worth remembering, but simultaneously insufficient to carry us forward. I am not as concerned with faithfully emulating past successes as I am with ensuring the possibility of future break-throughs.

I am asserting, in part, that the end-to-end assumption, combined with the scale of the Internet as it is today, the magnitude of the devices and systems out there, and the variety, leads ultimately to dead-end design because the end-to-end protocol architecture not only excludes other protocols, it excludes its own successors. This hampers further development and progress of the network—such as the adoption of IPv6. That, I can hold out as a concrete case to support my assertion that the end-to-end model itself hampers the development of the network—the only way IPv6 is going forward is by breaking the end-to-end model and having IPv4/IPv6 cross-connectivity. The end-to-end assumption only survives by being broken, not by being adhered to.

You have not convinced me that end-to-end is the *only* way of doing congestion control.  Nor have you convinced me that it is of such worth as to over-ride the issues of being locked in another dead-end-to-dead-end protocol.  The fact that my data is able to be transported over ethernet frames, sonet frames, PPP framing, etc., and still survive does not lend any support to this proposition. 

You have not addressed why my applications should care, at the application level, why the transport protocol is end-to-end or not, insofar as the application’s needs for network connectivity are met.  It is by no means clear that the end-to-end box is the only solution, nor the best solution.

The congestion ‘control’ mechanism, as you describe it, makes it sound even more wasteful than the alternatives. An application grabs as much bandwidth from the network as it can, regardless of how much it actually needs, until it reaches the point where it cannot grab any more, and backs off. Then it constantly probes to see if it can get any more. This does not sound like actual congestion control any more than removing all the traffic signals from a city sounds like congestion control—everyone should drive forward as much and as fast as they can, until they bump into something, then they should constantly push and shove to try to go faster. You may need a better way of explaining it to actually make it sound efficient.

Intelligence is coming to the network. You can fight it, or you can try to figure out how to best use it. This is not a mark of failure, but a mark of the success of the network. We do not have cars with impact sensors, crumple zones, air bags, anti-lock brakes, automatic transmissions, puncture-resistant tires, air conditioning, and computer-controlled fuel injection because the concept of the automobile was a failure. Rather, we have these things because the automobile was a success, and such a success that we in turn require the complexity and the intelligence within the vehicle itself to manage the additional performance efficiently. There are no doubt people who bemoan the complexity of today’s automobile, waxing nostalgic for when automobiles were simple and you could cobble one together in the junkyard. (There are probably individual cells that bemoan the inefficiencies of having multi-cellular organisms with brains and digestive systems and all that overhead.)

The intelligent network should be looked at as an opportunity, not feared.  I fail to see the ultimate benefit in keeping intelligence out of the network.  “Keep things as simple as possible - but no simpler.” is not the same as “Keep things brain dead.” 

Success is complex and takes work.  Failure is simple and doesn’t.

Edward Lewis  –  Jan 31, 2006 6:59 PM

Perhaps my comments were not understood.

If you read my first two paragraphs, you’ll see I am addressing why end-to-end is important to TCP/IP.  The Internet today is TCP/IP, or largely TCP/IP.  An unwritten assumption of mine is that the question about end-to-end is about today’s Internet.

Sure, there are ways to build communications in other ways, ways in which end-to-end is not a given. TCP/IP was the result of design decisions made way back when. Perhaps it is time to review the requirements; the interplanetary network study is one such effort. End-to-end is probably not viable there.

For a better description of the TCP/IP congestion control there are textbooks. For a generic understanding of the problem, I have always favored a book by Andrew Tanenbaum. I forget the title; it was written in the 1980s. That text presents the problems of communications in a theoretical manner, and the concepts transcend implementations. In it, for instance, you’ll see why the ALOHA protocol probably isn’t best for an X window client 1 AU away from the server.

Concerning intelligence in the network, I think a lot has been learned from the building of the PSTN. Trying to make a network “smart” means teaching it one way of doing business, which is one rationale for the Voice over IP work. Many PSTN concerns are now looking to TCP/IP protocols to improve on their smart network cores. (Make the service smart, not the underlying pipes.)

I have been told that the Internet serves about 1 billion people. With estimates that the population of the earth is about an order of magnitude greater than that, the Internet has a lot of room for growth. For all the progress to date, I’d be foolish to believe that it can scale one more order of magnitude. If there is a better way to build the Internet, I’m all for it. I just haven’t seen a better alternative yet.

The problem with analogies is that you have to apply them appropriately. Yes, cars are becoming smarter, but not in the same way as we talk about “smart” or dedicated networks. My car is “smart” for my commute to work - small, lightweight, agile. Even though it has an air bag, it isn’t so good for hauling trash to the dump. I need my truck for that; it too has an air bag, but more importantly a large, open, washable bed. The air bag intelligence isn’t helpful when the vehicle is doing its job, it’s helpful when there is a failure.

Kim Liu  –  Jan 31, 2006 7:40 PM

Quote: “If you read my first two paragraphs, you’ll see I am addressing why end-to-end is important to TCP/IP.  The Internet today is TCP/IP, or largely TCP/IP.  An unwritten assumption of mine is that the question about end-to-end is about today’s Internet.”

To clarify, my concern is not about today’s Internet.  My concern is how we will get to tomorrow’s Internet, and the Internet after that, and the one after that. 

Quote: “I have been told that the Internet serves about 1 billion people. With estimates that the population of the earth is about an order of magnitude greater than that, the Internet has a lot of room for growth. For all the progress to date, I’d be foolish to believe that it can scale one more order of magnitude. If there is a better way to build the Internet, I’m all for it. I just haven’t seen a better alternative yet.”

I agree. If it cannot scale one more order of magnitude, then something else will be needed. My concern is, when that something else is developed, how will we get there? The end-to-end design is a prison, even in IPv6. Given how bad the transition from IPv4 to IPv6 is, when the time comes that something better than IPv6 is needed, what will *that* transition be like? Have we learned anything yet?

With those concerns, I see no reason to move to IPv6, but rather to explore better architectures, ones that allow for transitions between protocols and for growth, now, rather than repeating the mistakes we are seeing in the IPv4 to IPv6 transition. So, insofar as the original article title goes, I’m leaning towards IPv6 extinction. We need to be working on the alternatives that will allow us to reach that order of magnitude more growth—not necessarily an alternative to IPv6 per se, admittedly, but to remove the end-to-end design and acknowledge that there will be *something* after IPv6 that IPv6 devices will have to communicate with. If we build devices and applications now with the understanding that the end-to-end assumption does not, and will not, hold true any longer, then future transitions will be much smoother and our ability to advance networking technologies and bring them to deployment faster will be improved.

Again, I assert: The end-to-end design assumption does not hold true for the scale and scope that the Internet has grown to and is growing beyond.  I hold the troubled transition of IPv4 to IPv6 as an example of why the end-to-end design assumption now harms the development of the network going forward.  Regardless of past successes, there is no indication that the benefits of continuing this design mindset into tomorrow are greater than the penalties.

So with regard to the original article, I feel there is no clear value gained from the IPv6 transition, architecture-wise, because we’ll just have to go through a whole new round of transition in the future, which will eat up any benefits gained from IPv6 for the short time we have it.

Quote: “The problem with analogies is that you have to apply them appropriately. Yes, cars are becoming smarter, but not in the same way as we talk about “smart” or dedicated networks. My car is “smart” for my commute to work - small, lightweight, agile. Even though it has an air bag, it isn’t so good for hauling trash to the dump. I need my truck for that; it too has an air bag, but more importantly a large, open, washable bed. The air bag intelligence isn’t helpful when the vehicle is doing its job, it’s helpful when there is a failure.”

Oh, I agree with that entire statement, in all parts, and likewise twist it: I’d like to be able to use different transport protocols/vehicles as appropriate to the job, commute vs. hauling trash. Except, again, the end-to-end design principle excludes the possibility of having multiple protocols and having them work together. I certainly feel that people should be able to choose the vehicles appropriate to their transportation needs, and I feel likewise about network transport protocols.

Victor Grishchenko  –  Aug 16, 2006 11:15 AM

Hi Kim, Edward. The correct analogy is roads, not cars. An ideal road has to be flat and straight, end-to-end. Any intelligence or complexity in the road is nothing but a problem. Vehicles may evolve and car vendors may innovate without rebuilding the road network.
Regarding the revolution scenario: I’ve experimented with an infinitely-scalable, billions-of-devices, self-configured, potentially-extremely-cheap, unmanned routing scheme (see TAMARA). Exactly the case of “volume, not value”. It is not too complicated a task; after dropping some legacy concepts the desired result might be achieved for IPv6 (IMO). But I found no interest and no funding.

Kim Liu  –  Aug 16, 2006 2:09 PM

But, I found no interest and no funding.

Welcome to the monoculture of dead-end to dead-end.  If you’re not conforming, you’re not performing, so there’s no interest in developing alternatives or variations.

Edward Lewis  –  Aug 16, 2006 2:38 PM

Kim,

I wouldn’t be so harsh to Victor. You could be right: not being able to get funding can be a sign that an idea is not meeting a need, and sometimes it is just the sales job. But a lack of funding may also be shortsightedness on the part of the mainstream. Truthfully, it is rarely the latter case.

The important thing for a researcher is to avoid becoming quixotic (http://en.wiktionary.org/wiki/quixotic) or stuck on solving irrelevant problems.

I’ve often asked myself if IPv6 has become a quixotic quest.  Does it meet the need or has it become a cause unto itself?  No doubt IPv4 addresses are nearing their capacity, but is IPv6 the answer?  Is it the answer because it’s the “last one left at the dance?”

Kim Liu  –  Aug 16, 2006 3:11 PM

Is it the answer because it’s the “last one left at the dance?”

Well, the mindset appears to be that there should be only one at the dance (i.e. the dead end to dead end conformity). Given that IPv6 is already out in real-world deployments and dancing, proposing an alternative would have to either interoperate with IPv6 perfectly—i.e. just be IPv6 all over again—or require some form of adaptation/conversion (NAT) between not only IPv6 but IPv4, too. The dead end to dead end mindset rules out conversion, IPv6 is already out there, and thus there cannot be alternatives to choose among. So, Victor finds neither interest nor funding.

I suspect that in part IPv6 has become a cause unto itself for some folks—some parties have a vested self-interest in IPv6. Equipment makers, software vendors, etc. who want to keep selling IPv6 upgrades to current gear. And of course there are people who probably have their reputations and/or egos tied up in the matter. (Heh. Look at me!)

To some extent, IPv6 is vastly important to the IETF, to keep the IETF relevant—if there were alternative network protocols, if one could convert/communicate freely between protocols as one can between copper/fiber/wireless, then the IETF/IP might have competition. It’s a vested interest of the IETF as an organization (not necessarily that of its members) to lock out other protocols. That’s not saying that the IETF is consciously acting like that, just that it does benefit from promoting a dead end to dead end mindset that requires only the protocols of the IETF be used and that ‘breaks’ if some competitor tries to interoperate with it with a different solution.

(Slightly off topic, but has anyone ever considered how the ‘end-to-end’ mindset facilitates the development of ‘digital rights management’?  I would prefer to view the network as ‘gateway to gateway’, that there might always be NAT, a conversion from one physical media to another, from one protocol to another, from one format to another.  The data flow does not ‘end’ at my computer—maybe it gets sent over Bluetooth to a PDA, copied onto an MP3 player, printed out and faxed, etc.  Having a model that insists there are ‘end points’ and that traffic should only be between these (dead) ends, would seem to facilitate the DRM mindset because it gives something, the end point, to lock files and data in to.  If instead the IP address was never viewed as the final, absolute end point, that there might always be NAT or another conversion of data afterwards, and another after that, and another after that, and that that conversion was required for the network to work, that the conversion of data between network protocols and formats was an integral part of communication, I suspect DRM proponents would have a tougher time of developing solutions.  It would be harder to identify an ‘end’ machine to lock data to.)

Victor Grishchenko  –  Aug 17, 2006 10:33 AM

No doubt IPv4 addresses are nearing their capacity, but is IPv6 the answer?

I don’t think we will literally run out of IPv4; it is more like Achilles never catching up with the tortoise. The problem is that we aren’t living in the IPv4 world, but we did not get into IPv6 either. The current state of things may be described as “post-IPv4”, “ex-IPv4” or even “undead IPv4”.
Problems of this state are self-evident. For example, the current bleeding-edge, very-promising technologies of STUNT/traversal/hole-punching/libjingle/whatever are just a poor reinvention. Those technologies are obviously
1) more complex
2) more expensive
3) less effective
4) less reliable
...than our old friends UDP and TCP. People don’t innovate but fight consequences instead. Of course, it is bad from an engineering/architectural viewpoint, but it is good for business. In a world with no NATs, Skype Inc isn’t worth much. I think this road of shame may last for several decades and be “profitable”, yes. Why not charge sites for “premium access”? Why not charge for IPv4 addresses, finally?

So, my opinion is that IPv6 is not that quixotic, but it is not commercial either, until there are tons of small devices crying for connectivity and convergence (one more chicken-and-egg).

BTW, I just figured out that TAMARA could be transparently compatible with IPv6. What a surprise! So, I still believe in the hourglass model and the end-to-end principle :)

Regarding Bluetooth and the PDA, I think the better way is to assign IPs to those devices. Use any medium you want and any technology you want; just identify every destination with an IP address. Under some assumptions, including no address scarcity, this works, so why not? (Scarcity of numbers is a truly strange problem, isn’t it?)

Victor Grishchenko  –  Sep 3, 2006 7:20 AM

BTW, Kim, I think your remarks actually correspond not to the end-to-end principle itself, but to its simplest understanding (which prevails today).
There are other variants. E.g. in TAMARA, hosts are identified by prefixes, not addresses. So there are actually no “dead ends”: each host may freely host its own network. Still, it is end-to-end in the sense of a “dumb network” and “complexity at the edges”.

The Famous Brett Watson  –  Sep 3, 2006 1:20 PM

What is this “TAMARA” of which you speak? I can find no network protocols by that name with a Google search.

Victor Grishchenko  –  Sep 3, 2006 6:08 PM

TAMARA is more of a concept than a protocol. It was discussed above.
