Home / Blogs

Net Neutrality: A Net-Head View

Jay Daley

Net neutrality is a complex issue with some strongly opposed views that at times sound more like religion than sensible argument, so this article is an attempt to provide some sense for those still not completely sure what it is all about. Be warned, though, that this article is not an unbiased appraisal of the arguments; it is written from the perspective of a confirmed net-head.

If you are wondering why this matters to a domain name registry, well, one factor is that DNS is subject to a great deal of 'non-neutral' behaviour from ISPs, ranging from blocking of servers to actively rewriting DNS messages sent from a provider to a customer. This is an area of intense debate within the DNS world, and it is only because DNS is generally regarded as underlying infrastructure that this is not more widely known. Another factor is that our goal of providing an optimum DNS service to Internet users relies on local peering, and if ISPs take action to fragment that, for the same reasoning as other non-neutral decisions, then that hinders us from achieving that goal.

Traffic management

To start with, we need to tackle the growing push to equate network neutrality with traffic management, when the two are quite different. Traffic management by definition is about protocols and pipes, about balancing services at the protocol level within the resource constraint of the transmission media. So a goal of traffic management might be to ensure that a real time service like VoIP is delivered well, or it might be to ensure that other real time services like IPTV do not saturate a link.

On the face of it this might seem entirely reasonable. It is apparently non-discriminatory as it works at the protocol level and it seems to be geared towards providing a better service for customers.

However, there are strong arguments against this form of traffic management based on the end-to-end principle, namely that true innovation has demonstrably come from end points managing the traffic themselves and the moment that someone starts to manage the traffic in the middle the protocols get 'frozen' and innovation stops or diverts. What if ISPs had managed traffic to strongly support HTTP just a few years ago, would YouTube or Skype ever have got off the ground? Unfortunately these arguments take a long time to recognise and internalise, as evidenced by the age of their proponents (Vint Cerf et al), and new generations are unaware of their impact.

The place where most active traffic management occurs is on the border of the enterprise, at the firewall, where some protocols are allowed, some blocked and some shaped. The impact of this is generally seen to be good for the Internet because it reduces criminal behaviour but there are more subtle problems that it creates that the Internet is struggling with:

  • Some firewalls do deep packet inspection and check protocol conformance. For example, one well-known make of firewall checks DNS packets to see what resource record types are being used and blocks those that it does not recognise. This seemingly simple practice has come close to crippling the development of DNS, as the concern about whether a new resource record will be usable by significant numbers of users causes considerable uncertainty in the minds of protocol developers.
  • There are increasing efforts to tunnel one protocol inside another to bypass the blocking of particular protocols. This might seem like it is criminally motivated, but generally it is a response from manufacturers to their customers, who are unable to use their product due to corporate policies that may indeed be mere corporate bureaucracy. Tunneling in turn leads to more deep packet inspection, which in turn leads to more protocol freezing, and so on.
  • Some of the basic diagnostic tools available to network operators, such as ICMP, are routinely blocked on the broad assumption that the less you expose about your network the safer you are.

This is not to say that traffic management by firewalls is bad per se, just that the seemingly sensible use of them has unintended consequences that are distorting the Internet and it is not hard to imagine how this will scale upwards if traffic management by ISPs becomes even more sophisticated.
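To make the resource-record example concrete, here is a minimal sketch of the kind of QTYPE filtering described above. The allow-list and the packets are invented for illustration; real firewalls maintain their own (often outdated) lists, which is precisely the problem for new record types.

```python
import struct

# RR types a hypothetical firewall recognises; anything else is dropped.
# (Illustrative only -- not the list of any real product.)
KNOWN_QTYPES = {1, 2, 5, 6, 12, 15, 16, 28}  # A, NS, CNAME, SOA, PTR, MX, TXT, AAAA

def qtype_of(dns_query: bytes) -> int:
    """Extract the QTYPE of the first question in a raw DNS query."""
    # Skip the 12-byte header, then the QNAME (length-prefixed labels).
    i = 12
    while dns_query[i] != 0:
        i += dns_query[i] + 1
    i += 1  # past the terminating zero label
    (qtype,) = struct.unpack_from("!H", dns_query, i)
    return qtype

def firewall_allows(dns_query: bytes) -> bool:
    return qtype_of(dns_query) in KNOWN_QTYPES

# Minimal queries for "example.com": one for an A record (QTYPE 1),
# one for HTTPS (QTYPE 65), a newer record type the firewall has never seen.
header = b"\x12\x34\x01\x00\x00\x01\x00\x00\x00\x00\x00\x00"
qname = b"\x07example\x03com\x00"
query_a = header + qname + struct.pack("!HH", 1, 1)
query_https = header + qname + struct.pack("!HH", 65, 1)

print(firewall_allows(query_a))      # True  -- A records pass
print(firewall_allows(query_https))  # False -- the unrecognised type is dropped
```

The newer record type is silently dropped even though it is perfectly valid DNS, which is how middleboxes freeze a protocol.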

The basic claim against network neutrality

Net neutrality is quite different from traffic management: it is entirely about the economics of Internet connectivity and the belief of some ISPs that this is a two-sided market they are being denied access to. At the recent NZ Telecommunications and ICT Summit the often-repeated argument was put quite clearly that some ISPs believe they incur all the costs while content providers reap all the profits.

To quote that article: "The dilemma of over the top providers such as TradeMe, eBay and Google making the money while the telecommunications industry incurs the cost is still unresolved."

The economics of Internet connectivity

As an Internet person, when I look at the economics of the network they seem quite simple. First there is the connectivity. Content providers have contracts with locally connected ISPs to carry their content; those ISPs in turn have contracts with other ISPs, and we follow the contract chain right down to the home user, who has a contract that their ISP will connect them to the services they request.

Some people think that when you pay for Internet access you pay to join a cloud and that's it. For the consumer it should look like that, but for ISPs it is very different and always has been. ISPs, as well as paying specifically for speed and data volumes (traffic) as consumers do, also pay specifically for routes. If they want access to international routes then they pay extra for that compared to paying for national routes. That's the nature of the transit market, to provide access to those routes that it would be too expensive for an ISP to patch a cable to.

And that ultimately is the only way the Internet can and does work, with contracts to exchange traffic and routes (sometimes symmetric and sometimes not). The scale of the Internet and the physical topology of the planet mean that every ISP cannot connect directly to every other ISP, there have to be intermediaries, sometimes several layers, who carry traffic and routes between ISPs.

Nowhere in this model is there such a thing as a free Internet connection port where the content providers have secretly connected their kit and so avoided paying for connectivity. Everybody pays, everyone is connected, that's the Internet.

But the view from some ISPs, particularly those that were once just telcos, is that they are only getting fair payment if everyone whose data is carried along their pipes pays them directly for that service, never mind what intermediate contracts are in place. Yet the ISPs making these claims already have a full contractual framework around them. Their consumer customers pay them to deliver the traffic those customers want, from whatever content provider, and from the money the ISP receives from the customer they pay their transit providers the cost of delivering non-local content. So when an ISP wants the content provider to pay it to send data to its customers (never mind that the content provider has already paid someone else to send it) while also charging the consumer the full cost of receiving it, that is simply double-dipping.

At a strategic level, what they are effectively attempting is to disintermediate the global Internet connectivity market, the transit providers, and force content providers to deal only with them, the last-mile ISPs.

The economics of Internet investment

The second part of the economics is the question of what constitutes the network. As national fibre networks are being implemented across the world, the last-mile ISPs with existing infrastructure have been making the case that they will be putting in all the investment while the content providers get all the reward.

To quote the article above again: "The telcos are being forced to invest in an infrastructure that is unlikely to provide the same revenues as the copper network. And to compound matters it appears that the riches that are to be gained in a fibre network may be taken by those companies that haven't paid a cent towards it — Google and Apple are the global examples most often cited."

This view assumes that the Internet is just pipes, when it is obviously much more than that: it is the pipes, the end devices (servers, printers, desktops, phones, etc.), the software and the content, all of which cost money and all of which make the network. Google have over 1 million servers, a huge capital investment however you measure it; then there is the software on top of that, and paying ISPs to deliver their data is not cheap either. A case could probably be made that it is the ISPs who are the laggards as far as investment goes, relying on a copper network that was put in place decades ago.

Implications of the non-neutral view of the world

If we assume for a minute that net neutrality was abandoned and try to envisage what that would mean for Internet users, then we end up with a very different Internet with some new characteristics:

  • Discriminatory pricing policies, where individual content providers can suddenly be blocked or rate-limited unless they pay an ISP for that ISP's customers to reach them (even though that ISP's customers are already paying for that service). I'm sure ISPs in favour of the two-sided market would claim that this will be equitable and above board, but how exactly will they measure traffic to ensure that? If it is by IP address then shared hosting will suffer and big sites with multiple IPs will escape; if it is by domain name, then understanding what is a sub-domain and what is a delegation is critical to making this work. An ISP will need to keep track of the various endpoints that it defines as a single content provider, and it cannot do that for everyone, so the scheme will inevitably be discriminatory.
  • Complexity at a huge scale. Suppose the traffic from .nz nameservers hits a particular level (what with DNSSEC and all) and an ISP decides to charge us for delivering it to their customers, do we then pay them or wait until more demand it, and since presumably they can't charge the same amount as that would be price fixing, do we now need to deal with 1000 ISPs globally?
  • Confusion, with the consumer not knowing what service they are getting from where. Imagine having to look up a directory to see which web sites you can access at what speed, depending on what they have paid your ISP. Mind you, there are other reasons this last one might happen anyway, as covered below.
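The measurement problem in the first point can be sketched with some hypothetical flow records, showing how per-IP and per-domain aggregation give different answers. The domains, addresses and byte counts here are invented for illustration.

```python
from collections import defaultdict

# Hypothetical flow records: (source domain, server IP, bytes delivered).
# Two small sites share one hosting IP; one big site spreads across two IPs.
flows = [
    ("smallblog.example", "192.0.2.10", 400),
    ("othershop.example", "192.0.2.10", 600),   # same shared-hosting IP
    ("bigvideo.example",  "203.0.113.1", 900),
    ("bigvideo.example",  "203.0.113.2", 900),  # same provider, second IP
]

def total_by(key_index: int) -> dict:
    """Aggregate bytes by domain (key_index 0) or by IP (key_index 1)."""
    totals = defaultdict(int)
    for domain, ip, nbytes in flows:
        totals[(domain, ip)[key_index]] += nbytes
    return dict(totals)

by_ip = total_by(1)
by_domain = total_by(0)

# Per-IP billing lumps the two small sites together and splits the big one:
print(by_ip)      # {'192.0.2.10': 1000, '203.0.113.1': 900, '203.0.113.2': 900}
# Per-domain billing separates them, but requires reliable domain attribution:
print(by_domain)  # {'smallblog.example': 400, 'othershop.example': 600, 'bigvideo.example': 1800}
```

Neither keying scheme matches the ISP's real target, "a single content provider", without a constantly maintained mapping of endpoints to providers.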

Those are all-round disadvantages, whatever side of the debate you are on. There is also the change in behaviour of content providers to consider. As all net-heads know, the Internet routes around failure; that is the way it was designed. Content providers will take whatever action is economic for them to avoid being caught in the aggregation-based charging of ISPs: rotating IP addresses, using multiple domains, peer-to-peer caching, content obfuscation to prevent deep packet inspection and so on. If the bell-heads think that spotting the big content providers will continue to be easy, they are deluded.

Above all, none of that is going to improve the Internet.

Peering

In case it is not obvious, this issue is at the heart of the peering disconnect in NZ because some believe that the more you peer the less leverage you have for disintermediation. Unfortunately this is fallacious reasoning.

To illustrate the fallacy, take an example where an ISP's customers draw down a noticeable amount of content from one provider and look at three scenarios:

  1. Where the content is non-local.

    The ISP pays $L for the cost of local distribution and $T for the cost of transit to get that content, so the total cost is $L+$T

  2. Where the content is local and the content provider pays the ISP to have it delivered.

    The ISP pays $L for the cost of local distribution and receives $l as payment, so the total cost is $L-$l which may possibly, at a pinch, be close to 0.

  3. Where the content is local and freely exchanged (settlement-free peering).

    The ISP pays $L for the cost of local distribution in total.

We end up with ISPs at scenario 1, who want to move to 2 and therefore don't want to move to 3 as it stops them getting to 2. Which means they would rather pay $L+$T than just $L because of some vain strategy that they could reduce it to $L-$l.
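With invented figures standing in for the article's symbolic $L, $T and $l, the arithmetic of the three scenarios looks like this:

```python
# Illustrative figures only -- $L, $T and $l in the article are symbolic.
L = 100.0   # cost of local distribution
T = 250.0   # cost of transit for non-local content
l = 80.0    # hypothetical payment extracted from a local content provider

scenario_1 = L + T   # content fetched over transit
scenario_2 = L - l   # local content, provider pays the ISP
scenario_3 = L       # local content, settlement-free peering

print(scenario_1, scenario_2, scenario_3)  # 350.0 20.0 100.0

# Refusing to peer (staying at scenario 1) in the hope of reaching 2 means
# paying L + T today for a chance at L - l later; peering now costs only L.
assert scenario_3 < scenario_1
```

Whatever values you substitute, scenario 3 always beats scenario 1 as long as transit costs anything at all, which is the fallacy in holding out for scenario 2.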

But we know that no ISP has even a snowball's chance in hell of achieving 2. The Internet community will resist it, the big content providers will fight it, the consumers will vote with their feet and the regulatory authorities will intervene.

It is commercial madness but that is the way too many ISPs operate.

A better business model

The real financial issue here is one of margins. The margin on content that has to be bought in through transit contracts is much smaller than the margin on local content, and that in turn appears much smaller than the margin the big content providers earn. But rather than trying to snatch margin from the transit providers and content providers, the non-neutrals need to understand what it is about the content providers' business model that is successful and emulate that. And no, it isn't freeloading off someone else's bandwidth.

The content providers are selling a service and that's what people are willing to spend their money on, not access, because the benefits of a service are direct and the benefits of access are only indirect.

To give you an example, I have a phone that cost several hundred dollars and the only recognition of that by my mobile access provider is a little configuration script to set the voicemail number and the Internet access point. That's it, nothing more at all. If I want to check my balance I have to dial a number, if I want to see my call log then the phone has a record and if I want to see how much my various calls have cost then I have to wait for the bill. And those are just the most basic interactions I have with the phone company, let alone anything sophisticated like home automation.

Confusion ahead?

If national fibre networks provide the benefits that many hope they will then they might create some behaviours that lead to some of the confusion identified above.

I found out entirely by accident the other day that there is a local Internet TV streaming service that my ISP allows access to independent of any data caps. The reason for this is probably twofold — they peer locally with them and so the cost of distribution is low and they want to try to break the Sky stranglehold on TV content. But for me it could be a nightmare because I don't have any technological support to help me identify and remember what sites are zero-rated in this way. What happens if I start using it and one day the two companies fall out and I don't see the notification until I get my next bill?

Admittedly this is only one site, so I'm not really going to have many problems, but if national fibre networks lead to much higher cost differentials on local vs remote content then this problem will rapidly expand and a technological solution will become necessary. Hopefully ISPs will realise that's their job as part of the service they sell and not leave it to Google to do for them.
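The kind of technological support wished for above could be sketched as a client-side tally that knows which hosts the ISP currently zero-rates. The host names and the cap below are invented for illustration; a real tool would need the ISP to publish and maintain the list.

```python
# Hypothetical zero-rated host list, ideally published by the ISP.
ZERO_RATED = {"tv.localstream.example"}
DATA_CAP_BYTES = 40 * 10**9   # hypothetical 40 GB monthly cap

def usage_against_cap(sessions):
    """Sum only the traffic that counts against the data cap."""
    return sum(nbytes for host, nbytes in sessions if host not in ZERO_RATED)

sessions = [
    ("tv.localstream.example", 12 * 10**9),  # zero-rated streaming
    ("video.faraway.example",   3 * 10**9),  # ordinary metered traffic
]

used = usage_against_cap(sessions)
print(used)                    # 3000000000 -- the 12 GB of streaming is free
print(used <= DATA_CAP_BYTES)  # True

# If the two companies 'fall out', removing the host from ZERO_RATED
# immediately surfaces the billing consequence, before the bill arrives.
```

The point is that the accounting is trivial; what is missing is any channel by which the ISP tells its customers, in machine-readable form, what the current deals are.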

Final word from history

The whole debate about net neutrality and traffic management is actually a battle in the proxy war between opposing ideologies about how to build a network: the bell-heads vs the net-heads. We net-heads have been right so far every single step of the way (packet switching, end-to-end, open protocols, open institutions, freedom of access, freedom of content, global in nature, and so on) and we should not give in now if we want the Internet to continue being the force for change that it has been.

This post originally appeared on the InternetNZ blog.

By Jay Daley, Chief Executive of the .nz registry

Share your comments

Factual problems with your article George Ou  –  Aug 06, 2010 2:54 AM PDT

"Google have over 1 million servers, which is a huge capital investment however you measure it, and then there is the software on top of it and paying ISPs to deliver their data is not cheap either. A case could probably be made that it is the ISPs who are the laggards as far as investment goes, relying on a copper network that was in place decades ago"

Interesting theory, but completely ignores the facts.
AT&T spends over $20 billion on capex a year.  Google spends under $1 billion a year on capex.  Google and all the other dotcoms in the U.S. combined spent less than AT&T or Verizon.  Serving the "eyeballs" and the end user is a lot more expensive than servicing servers.

Your description of non-neutral networks is completely off base and you're describing a needlessly complex business model where billing is infeasible.

The fact of the matter is that Google or any other website have paid for their transit service to some transit provider.  That transit provider must have a peering agreement indirectly or directly with all the broadband providers so those broadband providers are obligated to carry the web traffic at best effort whether they like it or not.  They are contractually forbidden from blocking traffic or doing anything else sinister.

So the only thing left for the ISP to do is to offer a better than best effort service.  Perhaps it would offer a geographic advantage with direct peering and/or caching or perhaps it could honor better than best-effort DiffServ labels or maybe support things like multicasting.  By having these additional voluntary offerings available to the content/application providers (not under duress from the ISP), and with individual consumer consent and the consent of the ISP, the Internet benefits.  I discuss the benefit of this here

Yet the FCC wants to ban these voluntary agreements based on flawed economic theory: the idea that there must not be any winners or losers on the Internet.  In short, they actually want an equal-outcome Internet and not an equal-opportunity Internet.

Customer, customers, customers! Jay Daley  –  Aug 06, 2010 3:49 PM PDT

You are describing an impossible world without customers where you say:

That transit provider must have a peering agreement indirectly or directly with all the broadband providers so those broadband providers are obligated to carry the web traffic at best effort whether they like it or not.

Broadband providers have customers, without them they would not have a business.  Broadband providers contract with customers to deliver them the traffic the customers choose.  If the traffic the customer chooses is non-local then the broadband provider must buy it in from transit providers.

The only traffic that broadband providers get from the transit providers they are connected to is the traffic that the customers of those broadband providers have chosen.

Very poor article Richard Bennett  –  Aug 06, 2010 1:55 PM PDT

The author doesn't understand the way the Open Internet debate is taking place in the USA, although he may be correct vis a vis similar issues in New Zealand. What generally happens when people enter into this from an ideological point of view rather than a technical one - saying "I'm a member of the Net-head tribe and these are the ways of my people" - is that they get the facts garbled, as George Ou has pointed out. Here's one example from the long-winded post:

"However, there are strong arguments against this form of traffic management based on the end-to-end principle, namely that true innovation has demonstrably come from end points managing the traffic themselves and the moment that someone starts to manage the traffic in the middle the protocols get 'frozen' and innovation stops or diverts."

Mr. Daley should familiarize himself with RFC 2475, "An Architecture for Differentiated Services." The RFC says:

"This document defines an architecture for implementing scalable service differentiation in the Internet.  A "Service" defines some significant characteristics of packet transmission in one direction across a set of one or more paths within a network.  These characteristics may be specified in quantitative or statistical terms of throughput, delay, jitter, and/or loss, or may otherwise be specified in terms of some relative priority of access to network resources.  Service differentiation is desired to accommodate heterogeneous application requirements and user expectations, and to permit differentiated pricing of Internet service."

DiffServ doesn't violate any principles of Internet architecture, or it would not have been approved by the IETF and implemented in all IP routers of any note. Since DiffServ is in conflict with Mr. Daley's understanding of the Internet, I'll have to take the IETF's side rather than his. The IETF represents the true "net-heads" of the world. The correct and productive way to manage traffic is for the endpoints and the network to do it cooperatively. There is no black-and-white dichotomy here.

One of the means by which we can achieve meaningful Internet openness - full support for all applications - is to actively manage traffic as RFC 2475 describes. This behavior is "non-neutral" but it's "pro-openness." This is a place where the common understanding between net-heads and other tribes is not only possible, but necessary.

Traffic management and net neutrality are quite distinct Jay Daley  –  Aug 06, 2010 3:43 PM PDT

You are equating non net neutrality with traffic management, which the article explains is a false equation to make.  Traffic management undoubtedly has high utility, is very popular and is apparently reasonable, but still has its dangers.  Diffserv is traffic management.

Real non-net-neutrality, on the other hand, is entirely different: it does not have high utility, it is not very popular and it is not apparently reasonable.  It is an attempt to defend an unjustifiable economic argument and create a severely distorted market. Unfortunately it appears that many advocates of it are hiding behind the apparent reasonableness of traffic management to disguise this.

In the NN debate, they're one and the same Richard Bennett  –  Aug 06, 2010 3:58 PM PDT

If you were familiar with the nature of this debate in the United States, you would be aware that advocates of net neutrality - such as Free Press - are arguing for principles of non-discrimination that would forbid network operators from charging for DiffServ prioritization. The problem is with their decision to apply non-discrimination rules at the traffic management level. So traffic management is very much a part of the debate as it's carried out in this country.

See Free Press's remarks on DiffServ:

...it is nonsensical to portray DiffServ as something that a third-party content provider could pay an ISP to use for paid-prioritization. Either an ISP respects DiffServ flags as outlined by IETF and chosen by the application or they do not — and if they do not, then it isn’t DiffServ. By way of analogy, an individual customer cannot pay a restaurant to obey the health code — they either do or they don’t. If an ISP is using DiffServ, but not respecting application flags, then that is not the standard as outlined by the IETF. Similar to how Comcast was improperly using RST packets to block BitTorrent, such a nonstandard use of DiffServ would be entirely new, improper, and not at all in line with that outlined by the IETF.

As I noted, the DiffServ RFC says: "Service differentiation is desired to accommodate heterogeneous application requirements and user expectations, and to permit differentiated pricing of Internet service."

Now I would agree with you that this is stupid and that we should be able to craft Open Internet principles that don't prevent traffic management, but that's not what our net neutrality advocates want.

I've also commented on this at HighTechForum.

Conflating the two is obfuscation Jay Daley  –  Aug 06, 2010 11:31 PM PDT

Any traffic management technology can be misused, whether it be diffserv, firewalls, traffic shaping or whatever.  There can be no defence of that misuse by claiming it is simply the application of technology.

When people start to talk about applying traffic management technology in a way that attempts to distort the free market, then it ceases to be traffic management.  The Free Press remarks above express that quite succinctly.

If ISPs were considering charging their customers for adding traffic management to the service they provide, in a way that manages solely the content types not the content providers those customer choose to access, then that might stand as neutral behaviour.  But as the article thoroughly explains, the real issue here is the desire of some ISPs to target high profile content providers to force them into a direct contractual relationship.  The particular technology they choose to do that with is irrelevant and focusing on that is disguising the real motivation.

No defense for false accusations either George Ou  –  Aug 07, 2010 1:15 AM PDT

"Any traffic management technology can be misused, whether it be diffserv, firewalls, traffic shaping or whatever.  There can be no defence of that misuse by claiming it is simply the application of technology."

True, but there's no defense for false accusations either and you're engaging in a lot of that.  Even the Comcast example wasn't really an example of abuse and more of an annoyance/mistake that was quickly corrected.

"When people start to talk about applying traffic management technology in a way that attempts to distort the free market, then is ceases to be traffic management.  The Free Press remarks above express that quite succinctly."

Who is talking about a technology that "distorts" the market?  The Internet is a level playing field with uneven players.  The only people that are trying to distort the market are the people trying to eliminate any differences in the outcome by hijacking private property.

"But as the article thoroughly explains, the real issue here is the desire of some ISPs to target high profile content providers to force them into a direct contractual relationship."

Your article explains nothing because you start off with an ideological "net head" bent and a gross misunderstanding of the debate in the United States.

Nobody is "targeting" anybody here.  Everyone agrees that the ISP shouldn't be able to put any content provider under undue duress to get them to buy premium services.  The problem is that you people are trying to define the inability of some content providers to buy premium as undue duress.  Having differentiation certainly qualifies as duress for those who can't afford to differentiate, but that isn't a "distorted" market.  The free market allows for winners and losers.  It's only undue duress if an ISP threatens to block or degrade traffic below best effort.

The 'debate' being framed in terms of wolfkeeper  –  Aug 10, 2010 8:52 PM PDT

The 'debate' being framed in terms of 'free market' 'content providers' 'ISP's 'winners' and 'losers' is itself a distortion of the problem. The general lack of network neutrality hurts *everyone* on the internet, whether they're in business or not. Even the ISPs are currently having to maintain deep packet inspection tools that are pretty dubious on the whole and need constant tweaking.

Network neutrality, real network neutrality happens when the internet just works for all protocols, not at the whim of any provider of anything.

This is technically possible. The real question is whether people want to implement it.

Your comment shows the typical ignorance of the issues George Ou  –  Aug 10, 2010 9:19 PM PDT

First, you grossly misunderstand the DPI issue.  Here's how DPI actually works.

Second, you're conflating DPI with the issue of Net Neutrality.  Net Neutrality is the call to prohibit voluntary and legal business transactions between ISPs and content providers (not between end users).  It is based on the fantasy that the Internet is supposed to be equal outcome rather than equal opportunity.

Third, "real net neutrality" where all bits are treated equal is a world where applications don't "just work".  In fact it's ensuring that some applications like VoIP and gaming don't work when jitter-inducing applications like BitTorrent or YouTube are being used.  Net Neutrality is the opposite of true neutrality in the Network where all applications "just work".

I read your paper and that showed wolfkeeper  –  Aug 10, 2010 10:41 PM PDT

I read your paper and that showed that I already understood DPI perfectly well thanks. DPI is a somewhat brittle technology, as it is not capable of correctly classifying all network traffic.

I note that you seem to be describing Network Neutrality differently to (for example) Tim Wu, who pretty much coined the term, and you seem to be casting it exclusively in terms of end-users. That makes no logical sense; at the other end of a user's connection is either another user's equipment or a business's equipment. Businesses therefore clearly need net neutrality as well, both to end users and to other businesses. So it's an Internet-wide issue.

Only one of about 4 different definitions of network neutrality supports bit-level identical treatment of all classes of packets, but that definition doesn't make any sense to me, for the reasons you outline, and it's not the one that Tim Wu championed either. In fact Tim Wu, IIRC, notes that IP is non-neutral in that best effort handles files better than other forms of traffic. I note that DPI is a bit of a sticking plaster that can be used to approximate NN, or deliberately break it.

You're citing Tim Wu? George Ou  –  Aug 11, 2010 1:59 AM PDT

You keep talking about DPI in relation to NN so it's clear you're confused about it.  It's a totally separate and exaggerated issue.  People try to make DPI sound evil when it is a technology that can be used for many things, good and bad.

"I note that you seem to be describing Network Neutrality differently to (for example) Tim Wu, who pretty much coined the term"

I debated Tim Wu face to face in San Francisco (there's a video somewhere).  I tried to get him to debate the actual policy matter of outlawing content-side differentiation facilitated by the ISP (note that Net Neutrality policy has nothing to do with ISP-user relationships).  Tim Wu repeated over and over again that he didn't care about the regulatory/legislative issues and that he didn't want to debate them.

Then I pointed out his interview where he said that it was important and that the Democrats got elected and had better do something about it.  Then he said OK, I do care about the policies, but it's not important.  What's important is how we feel about Net Neutrality and the spirit of Net Neutrality (LOL).  The man has absolutely no principled argument in favor of Net Neutrality (like all the other NN supporters) other than personal ideology and they don't even want to debate the actual policy matters.  I've yet to find a person who wants to debate the actual policy; they espouse the usual fantasy about the Internet being this place where all end points and users and sites are equal, or that it should be.

Free Press advocates the most extreme form of Net Neutrality as I describe here.  They don't even want Cable and DSL companies to be able to carve out bandwidth for their own video services on their own private property and this was the same view held in Rep. Markey's third Net Neutrality proposal in Congress.  They don't want ISPs to be able to offer enhanced services to content providers which is the biggest battle in Net Neutrality policy and the FCC majority has signed on to this view which arbitrarily penalizes ISPs in favor of CDN providers.

So all the dozens of definitions aside, the de facto battle in Net Neutrality is whether ISPs can offer enhanced/prioritized services to content/application/service providers.  Free Press claims CDN based differentiation is harmless but router based differentiation is harmful but they have their facts backwards.  It turns out that CDN based services are far more harmful to other applications on the networks.

Well, what you write here wolfkeeper  –  Aug 12, 2010 9:15 PM PDT

Well, what you write here:

http://www.digitalsociety.org/2009/11/what-is-true-neutrality-in-the-network/

is more or less what I consider Network neutrality to be, and that aligns with Tim Berners-Lee's definition as well.

So I consider you to be a network neutrality supporter.

The real problem is the extreme wings of the debate.

The problem is that there are almost no reasonable net neutrality proposals George Ou  –  Aug 13, 2010 12:09 AM PDT

There are only three Net Neutrality proposals for regulation, each more extreme than the last.

Least extreme but still very bad - Ban voluntary business agreements and prevent ISPs from selling prioritized or enhanced services to content and application providers.  This is what the FCC majority is proposing in their NPRM.

Then there's the Markey proposal which wants the government to limit the percentage of capacity that broadband providers (cable and telcos) can allocate to their non-Internet services.  This is in addition to the ban on enhanced and prioritized services.

Then there's the lunatic fringe of Net Neutrality that claims that the Internet must be a "dumb pipe" and that it must treat all bits equal.

As for Tim Berners-Lee, he actually does support the right to sell premium priority.  The problem with TBL is that he supports (in fact demands people call Congress and act in favor of) the very regulations that ban the sale of prioritized services.  Many of us have tried to point this inconsistency out to him but he has always dodged it.

I would be very surprised if Tim's Phillip Hallam-Baker  –  Aug 13, 2010 3:23 PM PDT

I would be very surprised if Tim's position was very different from Danny Weitzner's. And Danny certainly knows every detail of what is going on.

