
Addressing the Future Internet

Geoff Huston

The National Science Foundation of the United States and the Organisation for Economic Co-operation and Development held a joint workshop on January 31, 2007 to consider the social and economic factors shaping the future of the Internet. The presentations and position papers from the Workshop are available online.

Is Internet incrementalism a sufficient approach to answer tomorrow's needs for communications? Can we get to useful outcomes by just allowing research and industry to make progressive marginal piecemeal changes to the Internet's operational model? That's a tough question to answer without understanding the alternatives to incrementalism. It's probably an equally hard question to attempt to phrase future needs outside the scope of the known Internet's capabilities. It's hard to quantify a need for something that simply has no clear counterpart in today's Internet. But maybe we can phrase the question in a way that does allow some form of insight on the overall question. One approach is to ask: What economic and social factors are shaping our future needs and expectations for communications systems?

This question was the theme of a joint National Science Foundation (NSF) and Organisation for Economic Co-operation and Development (OECD) workshop, held on 31 January of this year. The approach taken for this workshop was to assemble a group of technologists, economists, industry, regulatory and political actors and ask each of them to consider a small set of specific questions related to a future Internet.

Thankfully, this exercise was not just another search for the next "Killer App", nor a design exercise for IP version 7. It was a valuable opportunity to pause and reflect on some of the sins of omission in today's Internet and ask why, and to reflect on some of the unintended consequences of the Internet and ask if they were truly unavoidable consequences. Was spam a necessary outcome of the Internet's model of mail delivery? Why has multi-lingualism been so hard? Is network manageability truly a rather poor afterthought? Why has Quality of Service proved to be a commercial failure? Can real time applications sit comfortably on a packet switched network that is dominated by rate adaptive transport applications? Why are trust and identity such difficult concepts in this particular model of networking? How did we achieve this particular set of outcomes with this particular Internet framework? Can we conceive of a different Internet model where different outcomes would have emerged as naturally?

These are more than technical questions. When we consider innovation and investment models, the regulatory framework, consumer choices, innovation in services and the health of the value chain in the communications industry we are considering issues that reach well beyond technical topics and are essentially economic and social in nature.

It was against this background that workshop participants were requested to consider particular questions and submit papers on these topics.

I took away from the workshop a number of observations that struck me as interesting.

One of these was the view from the regulatory perspective that it takes some level of trust in the industry, and confidence in the underlying dynamics of private equity in the public communications enterprise, to continue a course of deregulation in the face of industry uncertainty. Direct regulatory involvement, in the form of phrasing objectives and enforcing certain forms of behaviour in the market, would be a more conventional form of expressing regulatory interest. From a public policy perspective the question relates to the challenge of structuring an environment where market forces and competitive pressures lead towards outcomes that are desirable, or even essential, in terms of public policies and national and regional aspirations. Of course even considering outcomes at a national level is challenging, given that the network, and the economic activity it facilitates, resist the creation of distinct outcomes at a national level. The degrees of freedom in setting public policies at a national level that relate to the communications enterprise, be they economic or social in nature, are highly constrained. Is this a desirable outcome? Are there other models of communications systems that admit greater levels of control? Is the Internet necessarily not only a child of deregulation in the industry, but an outcome that requires a deregulated environment, and one that relies on strong substitutability in terms of competitive supply in order to function efficiently as an open market? Can other regulatory frameworks support a similar outcome in terms of functionality and service portfolio? Is the Internet a unique outcome of a unique form of deregulation?

From an economic perspective I was exposed to the view that the Internet represents, for many economists, the Big Bang of cosmology. The Internet's development over the past few decades, leading to the boom and bust of the early years of this decade, appears to have followed classic forms of economic and social theory. It has assumed the form of a disruptive wave of technology, with textbook phases of early adopters and high-risk ventures led by the research world, followed by initial broader impetus through the definition of new carriage economics, and then massive disruption as this technology and its dazzling range of associated services attained broad visibility to the entire market. Economically, the technology evolution process can be seen almost as a bipolar process, with a tendency to flip between intervals of incrementalism and piecemeal adoption and intervals of acute disruption, as the industry is confronted with change that is backward compatible with neither existing deployment nor existing infrastructure technologies. The disruptive waves allow for large leaps forward in technology, while imposing a considerable cost in terms of stability and surety of investment; incrementalism sends out more reassuring signals in terms of overall stability, while at the same time confining innovation to relatively tightly constrained areas. The market approach also admits considerable efficiencies, and the Internet's peering and settlement framework, based as it is on a market approach to interconnection, is often touted as a poster child of the benefits of a transition from regulation to markets. The difference in overall cost and efficiency between the call-minute regulated accounting and settlement regime and the flat-rate market-based approach of the Internet is one that market-based economists can look upon with some smugness!

Of course not every topic is one where market-based approaches yield superior outcomes. The consideration of longer term interests, the issue of routing, which is akin to the tragedy of the commons, and equitable service provision prices across domains where the incremental costs of service provision differ markedly, are all topics of considerable interest from an economic perspective. Economic failures do exist, and the euphoria and ease of access to capital at the height of a boom can be rapidly replaced by scepticism, panic and a high cost of capital in the subsequent bust; this transition can occur in a matter of days.

Are there "property rights" in this environment? Is "network neutrality" an expression of the network operator asserting some form of property right over the use of the network? Or are networks best operated in an open fashion, with open end devices and open services, where substitutability and competition feature at each and every level in the value chain? Constructing an "open network", where the fundamental network interactions are based on an open and freely available specification, is one thing, but we commonly see "openness" as more than this, and assume that devices are general purpose programmable devices that are open to the user to load service applications upon. We also assume that these service applications are based on an open specification and that there are multiple sources of supply. We tend to view closed devices and closed applications with some scepticism. Are "closed" applications, where the internal workings of the program as well as the communication components are explicitly encrypted and hidden from any form of third-party view, a positive step for the Internet? Whose interests are we protecting when we load blocking features into software? Should third-party software enforce some form of digital rights management when manipulating certain forms of content?

Should networks allow for complete anonymity of use, or should networks be configured as active systems that play an explicit role in identity and trust?

What is the relationship between a network and the devices that attach to it and the services that are operated over it? Are these disconnected activities? Can they be bound together in various ways where a choice of network implies a restricted choice of device and content?

In looking at the range of scenarios that relate to a future Internet we are presented with an array of choices. Some of these represent what appear to be clear choices with clear technical aspects, but others ultimately represent choices between various social values and various forms of control and freedom. How should we look at this array of economic and social outcomes and then consider what forms of technical design decision make 'sense' for a future Internet?

Whatever way we choose to undertake this examination, it appears that a sensible approach is to undertake such a study in a deliberate fashion, to understand that a choice in technology often represents a choice in related economic and social realms, and to consider the economic and social perspectives of a future Internet at the same time as we ponder the set of technology choices open to us.

* * *

As part of the preparation for the workshop each participant was requested to prepare a paper addressing a given set of questions. I was requested to respond to three questions concerning addressing and the Internet:

Question 1: Addressing (as reflected in routing protocols, etc) is a fundamental part of the Internet. How can addressing be improved so as to improve efficiency of inter-networking and so as to face the challenges of addressing new devices and objects?

Addresses in the context of the Internet's architecture fulfill a number of roles. Addresses uniquely identify network "endpoints", providing a means of identifying the parties to a communication transaction ("who" you are, in a sense). As well as this end-to-end identification role, addresses are also used by the network itself to undertake the transfer of data, where the address is used as a means of identifying the location of the identified endpoint relative to the topology of the network ("where" you are). Addresses are also used within the switching elements of the network as a lookup key to perform a switching decision ("how" a packet is directed to you through the network). In other words, addresses in the IP architecture simultaneously undertake the combination of "who", "where" and "how" roles in the network's architecture.
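The "how" role described above, the address as a switching lookup key, can be illustrated with a minimal longest-prefix-match lookup. This is a sketch only: the prefixes and interface names are invented for illustration, and real routers use far more efficient data structures (tries, TCAMs) than a linear scan.

```python
import ipaddress

# A toy forwarding table: each entry maps a prefix to a next hop.
# The destination address of a packet is the lookup key (the "how"
# role). Prefixes and next-hop names here are illustrative only.
FORWARDING_TABLE = [
    (ipaddress.ip_network("203.0.113.0/24"), "if-eth0"),
    (ipaddress.ip_network("203.0.0.0/16"), "if-eth1"),
    (ipaddress.ip_network("0.0.0.0/0"), "if-default"),
]

def lookup(dest: str) -> str:
    """Longest-prefix match: the most specific matching prefix wins."""
    addr = ipaddress.ip_address(dest)
    matches = [(net, hop) for net, hop in FORWARDING_TABLE if addr in net]
    net, hop = max(matches, key=lambda m: m[0].prefixlen)
    return hop

print(lookup("203.0.113.7"))   # matches the /24
print(lookup("203.0.42.1"))    # falls back to the /16
print(lookup("198.51.100.1"))  # default route
```

Note that the table says nothing about "who" the destination is; the same key is doing identity duty elsewhere in the architecture, which is precisely the semantic overload discussed below.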

Addresses have a number of properties that are all essential in the context of the integrity of the network:

  • Uniqueness: Addresses should be uniquely deployed (considerations of anycast deployments notwithstanding). Uniqueness is not an intrinsic property of an addressing scheme per se, but is a derived property of the associated distribution and deployment framework. An addressing scheme in this context should preserve this attribute.
  • Consistency: Addresses should be drawn from a consistent identifier space. Using multiple identifier spaces creates the potential for misinterpretation of an address.
  • Persistence: The address value should remain constant, and gratuitous changes in the mapping from the identifier to the referenced object should be avoided. Constantly changing address-derived identities are, at the very least, very difficult to track. For how long addresses should remain persistent is something that has changed over the lifetime of the Internet. While the initial concept was that addresses should be highly persistent, current semantics indicate that addresses should remain persistent for at least the duration of a communication session.
  • Trust: Can an address object withstand a challenge as to the validity of the address? Other parties who would like to use this address in the context of an identified endpoint would like to be reassured that they are not being deceived. 'Use' in this context is a generic term that includes actions such as resolution to the object identified by the address value, storage of the address for subsequent 'use', and referral, where the address token is passed to another party for their 'use'.
  • Robustness: The deployed address infrastructure should be capable of withstanding deliberate or unintentional attempts to corrupt it in various ways. A robust address system should be able to withstand third party efforts to subvert the integrity of the address framework as a means of undertaking identity theft or fraud.

The issues, or perhaps shortfalls, with the Internet's addressing architecture start with the collection of basic roles that are undertaken by a single IP address. This combination of "who", "where" and "how" makes for highly efficient network functions that are essential in any high speed connectionless packetized data communications system, but at the same time this semantic overload of the address in assuming the roles of "who", "where" and "how" is also the cause of considerable added complexity in today's Internet:

  • Mobility remains a significant challenge in this environment, where the major attribute of any form of mobility is to preserve the notion of endpoint identity of the mobile endpoint, while allowing the network location (and related network switching decisions) to change, reflecting the changing relative location of the mobile endpoint within the network. If the endpoint location changes, then so do the "where" and "how" components of its "address". But how do you keep active sessions open, or establish new sessions with this endpoint? How can you keep the "who" component of an address constant, while at the same time changing the "where" and the "how" components?
  • The granularity of the addressing system represents an uncomfortable compromise. An IP address is intended to identify a device's network interface, as distinct from the device itself or the device's user. A device with multiple active interfaces has multiple IP addresses, and while it's obvious to the device itself that it has multiple identities, no one else can tell that the multiple identities are in fact pseudonyms, and that the multiple addresses simply reflect the potential for multiple paths to reach the same endpoint. In terms of the identity ("who") role of an address, the protocol stack within the endpoint should remain constant, while allowing for multiple "where" locations that reflect the connectivity of each of the device's connected interfaces.
  • Also, the address does not identify a particular path, or set of paths through a network, or possibly even a sequence of forwarding landmarks, but simply the desired endpoint for the packet's delivery. This has implications in terms of application performance and robustness, and also has quite fundamental implications in terms of the design of the associated routing system.

The Internet's address architecture represents a collection of design decisions, or trade-offs, between various forms of apparently conflicting requirements. For example, with respect to the routing system, the desire for extremely high speed and low cost switching implementations has been expressed as a strong preference for fixed-size and relatively compact address formats. With respect to the role of addresses as identity tokens, the desire for low cost deployment and a high level of address permanence implies a strong desire for long-term stable address deployments in production networks, which, in turn, implies tolerating relatively low levels of address utilization efficiency in deployed systems, which for large systems implies extended address formats, potentially of variable length.

With respect to the IP architecture, these trade-offs in addressing design are now relatively long-standing aspects of the address, representing decisions that were made some time ago in an entirely different context to that of today's Internet. Are these design decisions still relevant today, or are there other potential ways of undertaking these design tradeoffs that would represent a more effective outcome? Indeed if we look at future forms of network evolution are these aspects of an address invariant, or should we contemplate other address structures that represent different trade-offs in design?

The changing nature of an "address"

A significant issue with addressing is the address "span". While the 32 bits of the IPv4 address space represent a considerable span, encompassing some 4.3 billion unique addresses, there is an inevitable level of wastage in deployment, and a completely exhausted 32 bit address space may encompass at best some 200 to 400 million uniquely addressed IP devices. Given that the population of deployed IP devices already exceeds this number by a considerable margin, and looking forward to a world of potentially billions of embedded IP devices in all kinds of industrial and consumer applications, this 32 bit address space is simply inadequate.
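As a rough sanity check on these figures, the arithmetic can be sketched as follows; the utilization densities used here are illustrative assumptions, not measured values.

```python
# Back-of-the-envelope arithmetic for the figures above; the
# utilization densities are illustrative assumptions only.
total_ipv4 = 2 ** 32           # 4,294,967,296 unique 32-bit values
print(f"{total_ipv4:,} total addresses")

# Subnetting, reserved ranges and allocation rounding mean only a
# fraction of the span ends up numbering real hosts.
for density in (0.05, 0.10):   # 5% to 10% effective utilization
    print(f"{int(total_ipv4 * density):,} addressable devices")
```

At 5% to 10% effective utilization the span yields a figure in the 200 to 400 million device range quoted above.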

In response, we've seen the deployment of a number of technologies that deliberately set out to break any strong binding of IP address with persistent endpoint identity, and treat the IP address purely as a convenient routing and forwarding token without any of the other attributes of identity, including long-term persistence. The Dynamic Host Configuration Protocol (DHCP) is a commonly used method of extending a fixed pool of IP addresses over a domain where not every device is connected to the network at any one time, or where devices enter and leave a local network over time and need addresses only for the period when they are within the local network's domain. This has been used in LANs, ADSL and WiFi service networks, and in a wide variety of applications. In this form of identity, the association of the device with a particular IP address is temporary, and hence there is some weakening of the identity concept; the dynamically assigned IP address is being used primarily for routing and forwarding.
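The pool-sharing idea behind this can be sketched in a few lines. This is not the DHCP protocol itself (no leases, timers or DISCOVER/OFFER exchanges); it simply illustrates how a fixed address pool can serve a transient device population, with all names and addresses invented for illustration.

```python
# A toy DHCP-style address pool. Addresses are drawn from a
# documentation range; device identifiers are invented.
class AddressPool:
    def __init__(self, addresses):
        self.free = list(addresses)
        self.leases = {}            # device id -> assigned address

    def acquire(self, device_id: str) -> str:
        # Assign an address from the pool on first request.
        if device_id not in self.leases:
            self.leases[device_id] = self.free.pop()
        return self.leases[device_id]

    def release(self, device_id: str):
        # A departing device returns its address to the pool.
        self.free.append(self.leases.pop(device_id))

pool = AddressPool([f"192.0.2.{n}" for n in range(10, 13)])
a = pool.acquire("laptop")
pool.release("laptop")           # address returns to the pool...
b = pool.acquire("phone")        # ...and may be reused by another device
print(a, b)                      # same address, different device
```

The last line is the crux: the same address identifies different devices at different times, so the address can no longer serve as a persistent identity.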

This approach of dynamic addressing was taken a step further with the use of Network Address Translation (NAT), where an "edge" network gateway device has a pool of public addresses to use, and maps a privately addressed device that is on the "inside" of the gateway to one of its public addresses when the private device initiates a session with a remote public device. The private-side device has no idea of the address that the NAT edge will use for a session, nor does the corresponding public-side device know that it is using a temporary identity association to address the private device. This approach has been further refined with NAT Port Address Translators, which also use the port field in the TCP and UDP packet headers to achieve an even higher level of effective address compression.

NATs, particularly port-translating NATs, are very effective in a client-server network environment, where clients lie on the "internal" side of a NAT and all the well-known servers lie on the "external" side. But in an environment of peer-to-peer applications, including VoIP, this way of using addresses raises a number of challenging questions. Each unique session is mapped to a unique port and IP address, and sessions from multiple private sources may share a common "public" IP address, but differentiate themselves by having the NAT-PT unit assign port addresses such that the extended IP + port address is unique. How do you know if you are talking directly to a remote device, or talking through a NAT filter, or multiple NAT filters, or NAT-PT filters? And if you are talking through a NAT, how do you know if you are on the 'outside' or the 'inside'? What's your "address" if you want others to be able to initiate a session with you when you are on the "inside" of a NAT? What if you have cascading NATs?
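A minimal sketch of the port-translating mechanism described above may make the mapping concrete. The class name, port range and addresses are all invented, and a real NAT also handles timeouts, protocol state and checksum rewriting, none of which appears here.

```python
import itertools

class Napt:
    """Toy port-translating NAT: maps (private ip, private port)
    pairs onto a single public address with distinct public ports,
    so many internal hosts share one external IP address.
    Addresses are from documentation ranges; this sketches the
    mechanism only."""

    def __init__(self, public_ip: str):
        self.public_ip = public_ip
        self.next_port = itertools.count(40000)  # arbitrary start
        self.out_map = {}   # (priv_ip, priv_port) -> public port
        self.in_map = {}    # public port -> (priv_ip, priv_port)

    def outbound(self, priv_ip: str, priv_port: int):
        key = (priv_ip, priv_port)
        if key not in self.out_map:              # allocate on first use
            port = next(self.next_port)
            self.out_map[key] = port
            self.in_map[port] = key
        return (self.public_ip, self.out_map[key])

    def inbound(self, public_port: int):
        # An unsolicited inbound packet with no existing mapping is
        # dropped: the reason hosts "inside" a NAT cannot be reached
        # by a session initiated from the outside.
        return self.in_map.get(public_port)

nat = Napt("192.0.2.1")
print(nat.outbound("10.0.0.5", 1025))   # two private hosts...
print(nat.outbound("10.0.0.9", 1025))   # ...share one public IP
print(nat.inbound(55555))               # no mapping: dropped (None)
```

The `inbound` case is exactly the "what's your address if you are on the inside" question: until the private host has initiated a session, there is simply no public address at which it can be reached.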

These forms of changes to the original semantics of an IP address represent uncomfortable changes to the concept of identity in IP, particularly in the area of NAT deployment. The widespread adoption of NATs continues to underline that, as an identity token, an address lacks persistence, and the various forms of aliasing and dynamic translation weaken its utility as an identity system. Increasingly an IP address, in the world of IPv4, is being seen as a locality token with a very weak association with some form of identity.

Of course that doesn't stop undue assumptions being made about the uniform equivalence of identity and IP address, however specious that may be in particular situations; various forms of IP filter lists, whether they be abuse black lists or security permission lists, are all evidence of this contradictory behaviour of assuming that persistent identity and IP address are uniformly equivalent.

Version 6 of IP is an attempt to restructure the address field using a larger span, and the 128 bits of address space represent a very large space in which to attempt to place structure. However, in and of itself IPv6 has still not been able to make any significant change to the address role within the Internet architecture. IPv6 addresses still carry the same overloaded semantics of "who", "where" and "how", and IPv6 also admits the same capability of dynamic address assignment. We are even witnessing the use of IPv6 NATs, so whatever benefits IPv6 may represent, in and of itself it still does not represent any major shift in the role of an address in the IP architecture.

How could we change "addresses"?

If we want to consider changes to the address semantics in a future Internet's architecture, then it appears that simply increasing the span of the address value range presents a weak value proposition in terms of remedies for the shortfalls of the overloaded semantics of an IP address. None of the deeper and more persistent issues relating to the overloaded address semantics are reduced by this measure, and the issues relating to the scalability of routing, mobility, application-level complexity, and robustness persist.

An area of investigation that presents greater levels of potential may lie in cleaving the concept of an address into distinct realms, where minimally that structural separation should reflect a distinction between endpoint identity and network location. Such an approach could embrace a relatively unstructured identity space, whose major attribute would be persistent uniqueness, and where the identity value of an object, or part thereof, could be embedded at the time of manufacture. It would also allow the deployment of a structured location space with the capability to describe the topology of the network in a manner able to guide efficient local switching decisions. The challenge here is not necessarily in devising the characteristics of these identity spaces, but is more likely to be in the definition of mapping capabilities between the two distinct identification realms: in other words, how to map, in a highly efficient and robust manner, from an identity value to a current or useable location value, and, potentially, how to perform the reverse mapping from a location to the identity of the object that is located at that position in the network.
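The identity/locator split described above can be made concrete with a toy mapping service. The identity names and locator values here are invented for illustration, and a real system would also need to address the security, scale and performance of the mapping itself.

```python
from dataclasses import dataclass, field

@dataclass
class MappingService:
    """Toy mapping from a flat, persistent endpoint identity to a
    set of topological locators (the "where" values). Identities
    and locators here are invented for illustration."""
    table: dict = field(default_factory=dict)

    def register(self, identity: str, locator: str):
        self.table.setdefault(identity, set()).add(locator)

    def move(self, identity: str, old: str, new: str):
        # Mobility: the locator changes, the identity does not.
        locs = self.table[identity]
        locs.discard(old)
        locs.add(new)

    def resolve(self, identity: str) -> set:
        return self.table.get(identity, set())

svc = MappingService()
svc.register("urn:host:device-42", "198.51.100.7")   # initial attachment
svc.move("urn:host:device-42", "198.51.100.7", "203.0.113.9")  # host moves
print(svc.resolve("urn:host:device-42"))  # identity still resolves
```

The `move` operation is the point of the exercise: sessions bound to the identity can survive a change of locator, which is precisely what the overloaded IP address cannot offer.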

There is a considerable range of design choices that are exposed when the address-based binding of identity with location is removed. The most salient observation here is that if we want to consider some form of "improvement" to the current role of addresses in the Internet's architecture, then there is little, if any, practical leverage to be obtained by simply increasing the size of the address field within the protocol's data structures, altering the internal structure of the address, or even altering the address distribution framework. Such measures are essentially meaningless in terms of making any significant impact on the semantics of the address, or on its flexibility of use within the IP architecture.

If we want to create additional degrees of flexibility within the architecture of the network, then it would appear that we need to decouple aspects of current address semantics, and in so doing we need to revisit the fundamental concepts of the Internet's architecture. If we want identity, location and network path determination to be expressed in a manner that is not fate-shared, then we also need to bring into play additional concepts of dynamic mapping, binding security and integrity, and various forms of rendezvous mechanisms.

As the original question asserts, addressing is a fundamental part of the Internet. If we want to contemplate substantive changes to the address model we are in effect contemplating substantive changes to the architecture of the resultant network, as compared to today's Internet. Perhaps this is indeed a potentially more productive area of activity than the approach taken by IPv6, where the changes have been relatively minor and the impetus for adoption by industry has, to date, proved insufficient to offset the incremental costs against the perceived incremental benefits.

Question 2: In designing new protocols, what lessons can be learned from the slow deployment of IPv6 to date?

There are significant differences between devising an experiment that investigates various models of communications paradigms and undertaking a major revision to a mainstream communications protocol. The major reasons for the slow deployment of IPv6 today lie in economic and public policy considerations as much as in considerations of the underlying technology.

The Internet's major positive attribute was not derived from any particular aspect of its underlying architecture or any characteristic of its protocols. Indeed, the Internet was in many ways essentially dormant through the 1980s, and, in terms of its architecture and protocol technology, the Internet has not changed in any fundamental sense for some decades. It remains a connectionless, hop-by-hop forwarding, destination-addressed, unreliable datagram delivery system with end-to-end control loop overlays to provide additional services related to resiliency, session management and performance.

The major economic and social factors of the late 1980s and early 1990s, when the Internet was expanding rapidly, included the shift away from a highly regulated data market to a regulatory framework that allowed, and in some cases even encouraged, the proliferation of private data networks that went well beyond closed user groups based on tightly constrained bounds of common use. The prevailing regulatory regime allowed all forms of resale and service provision in a highly competitive market for data services, and the economic environment was one of considerable interest in technology and communications. This was coupled with the shift in the computing market from large scale mainframe systems to the computer as an item of consumer electronics, and a change in the nature of the information industry workforce into one that relied on intense use of IT solutions and associated networks.

The attributes that the Internet brought to this emerging market were an unprecedented level of flexibility and efficiency that allowed almost any combination of underlying communications media and all forms of end devices to be amalgamated into a single cohesive IP network. The technical factors that led to the rapid deployment of IPv4 included its flexibility and its ability to bridge across multiple underlying network media in a flexible and cost-efficient way.

The economic and public policy factors included IPv4's considerably lower unit cost, due to its high carriage efficiency and its foundation in open standards with open reference implementations. The policy framework of deregulating the data services market and allowing various forms of resale and competition encouraged new investors who were inclined to use innovative products and services as part of their market positioning.

None of these factors are driving IPv6 deployment. IPv6 is no different to IPv4 in terms of its deployment capabilities, carriage efficiencies, security properties, or service capabilities. There is no change in the public policy regime with respect to IPv6, and no significant innovative difference in IPv6 that would provide a competitive edge to innovators in the market. An additional consideration is that IP services are now marketed in a highly contested price-sensitive market, and the revenue margins for most forms of mass-market IP services are very low. The incremental costs associated with a dual-stack deployment of IPv6, without an associated incremental revenue stream, have evidently proved, to date, beyond the capacity of the service industry to absorb.

The basis of this observation is that the significant impediment to IPv6 deployment is not the availability of network equipment, nor the capability of end systems to support IPv6, nor the capability to roll out IPv6 support in most service providers' IP networks. The impediment to IPv6 deployment appears to be the lack of a well-grounded business case to actually do so.

The expectation with IPv6 was that the increasing scarcity of IPv4 addresses would drive service providers and their customer base to IPv6 deployment. What does not appear to have been factored into this expectation is that Network Address Translators (NATs) produce a similar outcome in terms of virtually extending the IPv4 address space, and, additionally, are a cost externalized from the service provider. Service providers do not have to fund NAT deployment, and for the consumer the marginal cost of NAT functionality embedded in the edge device is effectively zero. So, in general, neither the consumer nor the service provider sees a higher incremental cost in the use of NATs. Even at the application level the incremental cost of NATs is not uniformly visible. For traditional client-server applications there is no incremental cost, and even various forms of peer-to-peer applications operate through NATs. It appears that the only applications with significant NAT issues are VoIP applications, where the major issue is not the presence of NATs per se, but the fact that NAT behaviour has never been standardized and different NATs can behave differently. Currently the path of least resistance for the industry appears to be that of standardizing NATs, over the option of a near-term migration of the entire Internet to IPv6.

It is not enough to consult with industry players as to their current perceptions of future technology needs, as was the case in the design of IPv6. It is also necessary to understand how needs are actually expressed within the economics of the industry. If a technology is to be taken up by an industry, then the factors that lead to take-up are variable, and are not wholly concentrated on aspects of superior performance or lower cost of deployment and operation as incremental improvements over the current situation. The factors also include the capability for incremental deployment, the alteration in the models of externalities, and the nature of the deployment cost, as well as the revenue model.

Question 3: What will a new Internet with different architecture and protocols mean to IPv6 deployment?

This is a topic that lies well into the area of speculation.

The one salient observation is that infrastructure investment is a long-term investment, and such investments accrete strong resistance to further change. It is unlikely that the development of further communications technologies, whether called a "new internet" or otherwise, would have any particular impact on the prospects of IPv6 deployment, positive or negative, assuming that the incremental benefits of this "new" technology were relatively marginal in nature.

Any viable "new" communications technology, in the context of changes to the existing Internet architectural model, would once again have to demonstrate gains in efficiency of at least one order of magnitude (and potentially two or three) over those achieved by existing Internet networks, and make substantive gains in support for mobility, configurability, security and performance, in order to represent an increment in the value proposition serious enough to induce industry deployment. Of course, if a new technology were capable of offering such significant improvements in benefits, then there would be little sense in further deployment of either IPv4 or IPv6 technology.

So the most appropriate response to the question is "it depends". If such a "new" Internet, with a different architecture and different protocols, were in a position to offer clearly demonstrable and significant improvements in the cost and benefit proposition, then the impetus for deployment would be relatively assured. If, on the other hand, the case for improvement were more marginal in nature, then the case for deployment would have to be regarded as highly dubious.

By Geoff Huston, Author & Chief Scientist at APNIC. (The above views do not necessarily represent the views of the Asia Pacific Network Information Centre.)



Comments

Re: Addressing the Future Internet Colin Sutton  –  Feb 10, 2007 2:33 PM PDT

Regarding the decoupling of 'address' and identity - it's already happening; where
'address' is the location of a device and identity is made up of the services a user has subscribed to.
A person's on-line identity is not a single attribute, but is made up of the mailing lists and groups they are subscribed to, their Skype id, their avatar, the applications in their dock, toolbars and menus, their favourite web pages, etc.
All that's missing is the portability of the identity - just a small thing :-}

Re: Addressing the Future Internet Tom Vest  –  Feb 12, 2007 10:04 AM PDT

Hi Geoff,

Great article! However, I'm curious about your decision to highlight the "200 to 400 million uniquely addressed IP devices" that are currently supported by the IPv4 addressing regime, without mentioning the many hundreds of millions of dynamic user/access processes that are also currently supported by IPv4. My concern is for readers who might take your original number as a comprehensive reckoning of the total carrying capacity (or perhaps even the total required carrying capacity) for an internetworking addressing system, and in so doing completely miss the importance of "uniform equivalence" across both core or mid-path devices and edge or access devices and processes. As you rightly note, the material fact of this equivalence has been eroding for over a decade as a result of macro-level protocol changes (DHCP, NAT, etc.) as well as micro-level policy innovations (IP filter lists, selective port/service filter lists, etc.). Even so, I would argue that "uniform equivalence" remains an important normative architectural property, deserving (at least) of recognition along side your other essential properties. After all, it's this property that invests IP addressing systems with the essential qualities of end-to-end systems design; it's this property that permits (where it exists) end users to choose their own devices and applications — or even invent new devices and processes — independent of the commercial designs of their network service providers. Arguably, it was this property that accounts for the Internet's phenomenal growth rate over the past two decades, and its increasing centrality in so many aspects of life today — not to mention the general enthusiasm with which the Internet has been sought and embraced by (almost all of) the billion-and-counting users and would-be users around the world.

"Uniform equivalence" (UE) may be an endangered feature in current IP addressing, but I would argue that it is still embraced as the norm, and not merely by naive idealists like me ;-) Without some visceral attachment to UE, there would be no reason for anyone — e.g., any knowledgeable end user — to think that DHCP with public IP addresses is preferable to DHCP with NAT, or that the latter is preferable to DHCP with NAPT. Just because (even) one of these features "breaks" some of the bindings that can make public IPv4 addressing uniquely flexible and adaptable, doesn't mean that things can't become "more broken" when more than one is applied concurrently.

Finally, I think UE deserves independent consideration because it is not (or perhaps no longer) reducible to your other essential features (uniqueness, consistency, persistence, etc.). For example, it's quite conceivable that the next-generation of (IPv6) addressing will permit each and every user, interface, edge and core device everywhere to be supported with one (or possibly many!) unique, consistent, and persistent public IP addresses. It's equally conceivable that this achievement will be accompanied by the deployment of next-generation (IMS) hardware that will effectively break all of the old "given" bindings, and subject each and every packet and flow to close inspection and potential interdiction by any operator along the network service path.  Given the fact that almost all Internet service paths must still traverse at least one critical bottleneck (the "last mile" facilities platform), competition alone cannot be expected to sustain the transparency and flexibility of the "old Internet" unless and until it reaches down to that level. The proximate cost and benefit proposition for such a move might be very appealing (especially to any commercial entity that commands one of those bottlenecks), but what would be lost — flexibility and adaptability, decentralized freedom of innovation — would literally be priceless.

Of course, the cynical realist in me recognizes that embracing UE as a design principle for IP addressing could lead to many hypothetical contradictions and paradoxes — routing is never guaranteed, every operator is autonomous, preferences and policies are defined locally, notions of "equivalence" can be taken to absurd extremes (if only for rhetorical purposes), etc., etc. Even so, the naive idealist in me is quite happy to engage in those debates; an Internet that does not even aspire to UE would likely be a sadly impoverished place…

On Layer Three, no one knows you're a dog (or a host, or a router).

