Home / Blogs

Dr. Peering Commits Malpractice on Net Neutrality

Daniel Golding

At Tier1 Research, we hate to call out individuals for wrongdoing, but once in a while it's absolutely necessary. At the moment, the Federal Communications Commission (FCC) is in the middle of the rulemaking process for network neutrality, a complex endeavor. Tier1 is against interference from regulators as a concept, but the FCC's proposed rulemaking document, while vague, is not completely unreasonable: it bans carriers from prioritizing traffic for pay, yet it leaves the basic underlying assumptions the Internet relies on untouched and does not negatively affect any real, revenue-producing products.

Of course, that's if you read and understand the proposed rulemaking document. Tier1 encourages everyone to do so; it's not that long, nor is it complex. One of the most contentious parts of the proposed rulemaking is section 106, reprinted below. The idea is that carriers won't be allowed to prioritize traffic for money. That means that Google can't pay Verizon to give preference to Google's traffic. This only really comes into play on the last mile, which is sometimes congested: lower-priority bits might be dropped, while higher-priority bits might be assured of delivery. Some have opined that this is a good idea, because content like video would be assured of delivery, but the content providers have, not surprisingly, come out against this, as have consumer groups. Tier1 is unsure when big profitable content providers like Google became the good guys and big profitable carriers became the bad guys, but that appears to be the situation, at least in terms of public perception.
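As a rough sketch of the last-mile behavior described above, here is a toy strict-priority scheduler; the capacity and packet counts are invented for illustration and are not drawn from any real carrier's configuration:

```python
# Toy model of a congested last-mile link with two priority classes.
# Capacity and packet counts are invented for illustration.

CAPACITY = 10  # packets the link can forward per scheduling interval

def run_tick(high_queue, low_queue, capacity=CAPACITY):
    """Strict-priority scheduling: high-priority packets are forwarded
    first; whatever capacity remains goes to low priority, and the rest
    of the low-priority packets are dropped."""
    sent_high = min(len(high_queue), capacity)
    remaining = capacity - sent_high
    sent_low = min(len(low_queue), remaining)
    dropped_low = len(low_queue) - sent_low
    return sent_high, sent_low, dropped_low

# Under congestion (8 high + 7 low offered against capacity 10), every
# high-priority packet is delivered and low priority absorbs all the loss.
print(run_tick(["h"] * 8, ["l"] * 7))  # (8, 2, 5)
```

The point of the sketch is that prioritization only matters when the link is congested; with spare capacity, both classes get through and nothing is dropped.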

In the midst of this great debate comes a significant amount of disinformation, mostly from the pro-network neutrality (i.e., no prioritization) side. One often-quoted idea is that consumers will be charged for the prioritization. Of course, the carriers want the content providers to pay, so any upcharge to consumers would be indirect at best. A more serious, if much more technical, bit of disinformation has materialized in the past few days, originating from 'Internet Interconnection expert' Bill Norton. On his 'Dr. Peering' website, Norton has twisted section 106 of the proposed rulemaking into a pretzel, claiming that it would ban the practice of 'paid peering', the selling of a content provider's or carrier's 'on-net' routes. In effect, this is when a broadband provider sells partial Internet transit, allowing access only to its subscribers, typically to a content provider. So, Comcast might sell paid peering to a CDN, which would enable the CDN to dump its outbound traffic to Comcast for a fraction of the price of transit: a win-win. Paid peering is considered a valuable arrow in the Internet interconnection quiver, along with settlement-free peering and regular paid Internet transit.
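The distinction between full transit and paid peering can be sketched as a route-filtering rule: a paid-peering customer receives only the provider's on-net (subscriber) routes, not the whole Internet. This toy model uses invented prefixes and labels, not any real provider's routing data:

```python
# Toy routing table: which prefixes a broadband provider advertises to a
# customer under each interconnection product. Prefixes are documentation
# ranges; the origin labels are invented for illustration.

FULL_TABLE = {
    "203.0.113.0/24": "on-net",     # the provider's own subscribers
    "198.51.100.0/24": "external",  # routes learned from the rest of the Internet
}

def routes_offered(product):
    """Full transit advertises everything the provider can reach;
    paid peering advertises only the provider's on-net routes."""
    if product == "transit":
        return set(FULL_TABLE)
    if product == "paid-peering":
        return {prefix for prefix, origin in FULL_TABLE.items()
                if origin == "on-net"}
    raise ValueError(product)

print(routes_offered("paid-peering"))  # {'203.0.113.0/24'}
```

A CDN buying paid peering can therefore deliver traffic only to that provider's subscribers, which is exactly why it can be priced below full transit.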

Paid peering traffic is no more prioritized than regular peering or transit traffic. There is no way that any reasonable person would feel that section 106 might apply. Now, however, Norton has raised the possibility, and seems intent on convincing the FCC that he's right. Recently, the prominent (if technically challenged) blog GigaOm published a piece by Richard Bennett that agrees with Norton, using Norton as the only source. Numerous experienced and prominent Internet architects have spoken out about this matter, but their objections have seemingly fallen on deaf ears; neither Bennett nor Norton seems willing to retract his comments.

This has also turned into a smear on Google, which Norton has claimed wants to ban paid peering. This is in spite of protestations to the contrary by Google staff and the total lack of any sort of advantage accruing to Google. Tier1 does not agree wholeheartedly with Google's network-neutrality stance, but lying about its stated position is no way to advance the debate in a positive way.

Why do you care? Paid peering is a valuable interconnection tactic and is widely utilized by hosting providers and content delivery networks. Those paid peering interconnections (the fibers) are purchased from carrier-neutral colocation facilities. The practice improves Internet performance for content from providers that is delivered to many millions of users. It also saves a significant amount of money. The arguments against? None, as far as Tier1 can tell — Norton and Bennett seemed to need something to write about. But the threat extends past a few self-promoting bloggers — Norton was recently invited to share his insights with the FCC.

One criticism of these network-neutrality regulations is that they can be distorted or misinterpreted. That is certainly a danger, and it's one on display here, for no reason other than sheer self-promotion. Tier1 urges everyone involved to listen to the real experts and leave paid peering alone: stop dragging it into the network-neutrality debate, or we may all lose a valuable tool while gaining nothing.

Tier1 also urges caution when dealing with self-appointed experts. Most of the true experts are senior technical staff at major service providers, CDNs, or content providers, and are not in a position to comment in the way that Bennett or Norton are. Tier1 can only point to the legion of actual Internet architecture experts who have uniformly condemned Bennett and Norton's position. A short list of those who commented in opposition on the GigaOm blog entry includes widely regarded Internet architects Patrick Gilmore, Vijay Gill, Dave Temkin, Steve Meuse, Joe Provo, Adam Rothschild, and Richard Steenbergen.

Section 106 of the Notice of Proposed Rulemaking

106. We understand the term "nondiscriminatory" to mean that a broadband Internet access service provider may not charge a content, application, or service provider for enhanced or prioritized access to the subscribers of the broadband Internet access service provider, as illustrated in the diagram below. We propose that this rule would not prevent a broadband Internet access service provider from charging subscribers different prices for different services. We seek comment on each of these proposals. We also seek comment on whether the specific language of this draft rule best serves the public interest.

By Daniel Golding, VP and Research Director at Tier 1 Research. To learn more about Tier1 Research, visit http://www.t1r.com


Share your comments

Past Bias Dan Campbell  –  Nov 24, 2009 9:01 AM PDT

A major (if not the only) reason that "...Google became the good guys and big profitable carriers became the bad guys..." is that many people have long hated the telcos and cable companies from past experience with them, since before the commercialization and proliferation of the Internet, let alone since those carriers began providing broadband services and diversifying into other areas. Some of that comes from views about the unfairness of the regulatory environment and the edge it has given those providers. Others, mostly the masses and laypeople, hate such companies because of their experience with their service, customer service, and pricing, which many perceive to be, well, bad, for lack of a better word. It's not unlike the Postal Service or the DMV: even if those organizations improved their service and customer service dramatically (and in fact they have), it's next to impossible for them to shed the deeply entrenched negative stigma and mockery they continue to receive. People love to jump on the telcos and cable companies for just about anything. Even when those opinions were justified by past experience, they are often erroneously applied to anything and everything the telcos/cablecos do, and that just clouds the whole conversation.

Google has been escaping it because, well, they are still kind of the cute dotcom champion that most laypeople view (right or wrong) as nothing more than a search engine, even if that understates what they really are and how similarly aggressive their strategies are when compared to the telcos/cablecos.

Dan, I agree 100% - that was just Daniel Golding  –  Nov 24, 2009 9:05 AM PDT

Dan,

I agree 100% - that was just a little snark.

Comment from GigaOm Richard Bennett  –  Nov 24, 2009 3:48 PM PDT

Is this what CircleID has come to, a place where people discuss GigaOm discussions? Here's a comment I posted at GigaOm on the discussion around my article:

* Nobody has offered any information to show that the article is flawed in any serious way. It is true, of course, that paid peering is not new; AOL offered a paid peering product back in the days when Vijay Gill and Daniel Golding worked there, but it wasn't terribly popular because it was priced higher than transit. The reason they priced it that way was that it provided better performance to/from AOL than transit did. That was the kind of service the FCC regulations are meant to ban. The Comcast-style paid peering, which is cheaper than transit, raises interesting issues for the FCC because of the pricing. The intent of the regulations - and this is something you won't get from going to NANOG meetings - is to prevent ISPs from extracting "monopoly rents" from content and service providers seeking access to the ISP's customers, but offering higher performance at a lower price to people who have built networks that can take advantage of it is outside the set of expectations that the proponents of an anti-discrimination rule have in mind. It is for this reason, among others, that I'm opposed to the anti-discrimination rule that the FCC is considering. The consequences of an overly broad ban on differentiated access services are likely to fall just as strongly on legitimate as on illegitimate services. My article didn't mention anything about "on-net routes", BTW.

* Google is and has been the primary moneybags behind the drive to regulate the Internet according to net neutrality principles, and has always claimed it was promoting the program in the interests of smaller players, not for itself. The likely effects of a paid peering ban on smaller players show that regardless of Google's intentions, the expansive ban on differentiated access to ISP networks is not good for smaller players. And the nastiest comments in this section come from people who work for Google and claim to be protecting the Internet from ISP shenanigans. Draw your own conclusions.

* Disclosure is a tricky thing. I don't have a financial interest in any ISP, carrier, content network, or network services provider, I'm simply a former network engineer working for a Washington, DC think tank. The most critical comments in this section are in fact from people who work for Google, Akamai, YouTube, and Netflix and similar companies and who have, for the most part, chosen not to disclose their affiliations. You folks know who you are, so this would be a good time to 'fess up if we're going to play full disclosure. You might also say how you happened to hear about this article. It's a bit unusual to see 7 NANOG members commenting on an article at GigaOm; I've certainly never seen it before.

* Who has the skill to interpret FCC regulations? Given that the net neutrality NPRM is still in the discussion stage, it's entirely appropriate to raise questions about what it might cover, and completely inappropriate to insist that the rules are already black-and-white as Vijay and Daniel have. We're in a process of discussion, and as Jeff Turner points out the goal at this stage is to ensure that the FCC does not enact a ban on paid peering or make any similarly destructive move. I think most of us would agree that such a ban would not be beneficial for anyone except the operators of the largest CDNs (the ones who've been complaining the loudest about my article.) So don't lull yourself into a false sense of security by following Daniel's guess that the FCC would never do such a thing; they're a government agency and they screw up and/or protect certain companies and business models over competitors all the time. The only way to prevent this is vigilance, and it's not 100%.

* We don't know the value of better-than-best-efforts or cheaper-than-best-efforts transit on the public Internet because we haven't conducted a large-scale experiment with it, but there is reason to believe that such services have value based on small-scale experiments so far. This relates to network theory and emerging business models, and isn't in the NANOG sphere of operational expertise, so don't worry about it. I want to see the space for experimentation with SLAs, business models, and differentiated transit and peering preserved. Obviously, some others would like to see it stopped and a best-efforts model imposed on the Internet of the Future by law. That's why we're having a fight over net neutrality; companies that are successful on today's Internet want to lock in the design and operation of the system and prevent it from changing. The FCC is in the middle of the fight, and these issues are international and ongoing.

* Arbor points to the Rise of the Hyper Giants, the 30 large firms who control 30% of Internet traffic. As more traffic concentrates into fewer hands, the ability of the Hyper Giants to affect everyone's Internet experience increases. Historically, the FCC has paid more attention to ISPs than to CDNs and other streamers, but given the end-to-end nature of the Internet, this focus is inadequate to protect user experience. Going forward, if the FCC is going to regulate Internet services at all - and I don't want them to do that - they will need to regulate all the firms who affect user experience, not just the ISPs. Once again, I would prefer they don't impose any new regulations, but if they are going to regulate, they should apply the rules to everyone, not just to ISPs. And there is nothing in the NPRM that limits it to last-mile networks; that's junk. It applies to ISPs period, both on the customer-facing side and on the Internet-facing side.
(more in next comment...)

Final part of comment Richard Bennett  –  Nov 24, 2009 3:49 PM PDT

So this is where we are. The FCC is floating some rules, some people are constructing scenarios and discussing implications, others are trying to stifle discussion by various means. Do what you want, but don't kid yourself: there's a lot of money at stake, and the established players aren't going to roll over and play dead without a fight.

Richard, you are incorrect on just about Daniel Golding  –  Nov 24, 2009 7:41 PM PDT

Richard, you are incorrect on just about every fact you enumerate. I don't think this is because you're trying to be deceptive. It's because you don't know anything about the subject and listen to Bill Norton, whose knowledge is spotty at best.

You see conspiracies where none exist: I have spoken to several senior architects for major carriers. They agree with my points, in spite of their disagreements with Google and other major content providers, but are prohibited from coming forward.

In my case, please point out my conflict of interest, since you are sure there is one here. In your case: don't you work for an anti-neutrality lobbying group?

Personally, I am quite anti-regulation and pro-free market. However, I think an honest discussion is vital. These sorts of scorched earth arguments will hurt the Internet and impair the debate.

Are you serious? Richard Bennett  –  Nov 24, 2009 8:03 PM PDT

I don't see any conspiracies, and don't know (or care) whether you personally have any conflicts of interest; you're not relevant to my argument at all, and have no significance in the net neutrality debate.

My issue is that the broad-based ban on so-called "discrimination" on any network system is certain to have unintended side effects because "discrimination" is primarily a productive activity on packet-switched networks carrying diverse types of traffic. It's ironic that Google is now in a position of promoting a rule that will harm small upstart competitors after it has posed as their champion for so long; I don't think this is intentional on Google's part as much as it's a testament to their political and technical naivete.

The problem I have with what you've written is that you're acting as if the FCC's Rule 5 has already been adopted. The fact is that it's currently under discussion, so it's premature to declare in affirmative terms what it will or won't do. You don't actually know, and neither does anyone else. Hence, your attempt to stifle discussion is not productive.

What does prioritized and ENHANCED mean? George Ou  –  Dec 09, 2009 6:21 AM PDT

Daniel, you conveniently left out the fact that paragraph 106 bans prioritized and *ENHANCED* services.  What does it mean to enhance or prioritize delivery?  Putting someone's packets ahead of the line BY DEFINITION is prioritized and enhanced, and it's probably the most effective form of prioritization.  Another effective form of prioritization is to avoid contention by paying for a more exclusive connection.  If 1000 other companies are vying for access over a shared 100 Gbps transit connection and you have a 10 Gbps connection going to the ISP all to yourself, that's by definition a prioritized and enhanced connection.  Nobody uses the type of over-simplified assured-delivery priority queues you're suggesting, not because it's prohibited by the government, but because it has almost no functional value in the context of content delivery.
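The contention arithmetic in this comment can be made concrete with a back-of-the-envelope calculation; the port sizes and tenant count below are the comment's hypothetical numbers, not measurements from any real network:

```python
# Back-of-the-envelope version of the contention argument above; all
# figures come from the comment's hypothetical, not from real networks.

shared_link_mbps = 100_000   # 100 Gbps transit port shared by many customers
tenants = 1000               # companies contending for the shared port
dedicated_mbps = 10_000      # 10 Gbps port sold to a single customer

# Under full contention, each tenant's fair share of the shared port:
fair_share_mbps = shared_link_mbps // tenants
print(fair_share_mbps)                    # 100 Mbps per tenant

# The dedicated port gives one customer many times that fair share:
print(dedicated_mbps // fair_share_mbps)  # 100
```

On these assumptions, the dedicated 10 Gbps port delivers 100 times the fully contended fair share, which is the sense in which the comment calls it "enhanced" even without any priority queuing.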

Second, you say that congestion is mainly a last-mile problem.  That's not true in the context of the large-scale unicast video streaming revolution on the Internet.  Last-mile congestion problems are self-induced, e.g., the user tries to download too many things at once, and the user can determine how much or how little last-mile congestion there is by easing off the number of applications in use.  The main problem that needs to be overcome for unicast over the Internet is that the core of the Internet cannot handle all of the unicast traffic load.  The only way around this is to simply not send the bits through the core of the Internet, by using caching or paid peering, and that's precisely what's going on.

The fact that the FCC admits it doesn't fully understand how this stuff works, that it doesn't know what its rules mean, and that it wants to hear from the public how to move forward, combined with the fact that all the Net Neutrality proponents are saying the rules aren't restrictive enough against the ISPs, means that William's concerns are justified.  Richard Bennett and I share Mr. Norton's concerns.

Ridiculous assertions Daniel George Ou  –  Dec 09, 2009 6:28 AM PDT

"Paid peering traffic is no more prioritized than regular peering or transit traffic".

That's a ridiculous statement.  The fact that paid peering traffic jumps ahead of transit traffic, and the fact that it's a far more exclusive and less contended connection, means it's prioritized and enhanced by definition.  Nowhere in the NPRM does it claim that the word "prioritized" is solely limited to an "assured forwarding" state on a transit or last-mile network.  That's not even consistent with what the extreme Net Neutrality proponents want, which is a Utopian state where all packets are treated equally on the Internet and all content providers have the same distribution capability.

Another source on Paid Peering Richard Bennett  –  Dec 09, 2009 8:01 PM PDT

Another source on the paid peering question is a paper written by David Clark, a number of his colleagues at MIT, and Patrick Gilmore of Akamai in 2007 for TPRC. The paper discusses the "emerging phenomenon" of paid peering in some depth and cites Bill Norton as an authority on peering agreements. See: http://people.csail.mit.edu/wlehr/Lehr-Papers_files/Clark Lehr Faratin Complexity Interconnection TPRC 2007.pdf

So you have a choice, gentle reader: go with Golding's claim that Bill Norton and I don't know what we're talking about, or go with the former Chief Architect of the Internet, his colleagues at MIT, and Patrick Gilmore of Akamai.

That's not a hard choice, is it?

You don't give up, do you? First, Daniel Golding  –  Dec 10, 2009 9:25 AM PDT

You don't give up, do you? First, there is no such thing as a "Chief Architect of the Internet". Second, Patrick Gilmore has spoken out against your ideas, and Bill's, in several public forums, including the NANOG mailing list.

You are a paid lobbyist who employs misdirection and political smears. Recently, you accused everyone on the NANOG mailing list who disagreed with you of being a socialist. Now, you assert that your views are somehow backed up by Patrick Gilmore, who you know does not agree with you. How much lower can you go? I guess it depends how much your backers will pay you....

Let's get beyond the silly arguments George Ou  –  Dec 10, 2009 9:41 AM PDT

Bottom line, would it be such a horrible thing to ask for more clarification regarding paragraph 106?  Why stifle this dialog when the FCC is asking for this very type of clarification?

NPRM paragraph 106 never says "Quality of Service" in regards to the FCC prohibiting ISPs from selling enhanced or prioritized access to content/app/service providers.  Yet the FCC was very clear to use the full term "Quality of Service" in several other places in the document where it wanted to explicitly refer to any kind of Priority Queuing technology.  It was notably vague here in paragraph 106 and the use of the term "enhanced or prioritized" is far broader than any other section in the NPRM.

Sad comment, Daniel Richard Bennett  –  Dec 10, 2009 12:40 PM PDT

See Clark's bio: "Since the mid 70s, Dr. Clark has been leading the development of the Internet; from 1981-1989 he acted as Chief Protocol Architect in this development, and chaired the Internet Activities Board. Recent activities include extensions to the Internet to support real-time traffic, explicit allocation of service, pricing and related economic issues, and policy issues surrounding local loop employment." http://www.csail.mit.edu/user/1526

Here’s a nice summary of what I’ve been trying to say here: “We also have a cautionary conclusion: if one should be motivated (for whatever reason) to contemplate some regulatory rule to manage interconnection (which the debate over Net Neutrality is, in part, about), the design of such a rule will be both complex and informationally demanding. Any simplistic rules that try to define network neutrality as the elimination of discrimination will fail even to match today’s reality by a wide margin. There is a substantial level of economic discrimination today just in the variation in willingness to peer, and the emergence of paid peering and partial transit only increases this space. Partial transit and paid peering may be seen as efficiency-enhancing responses to changing market conditions. While there may be opportunities for abuse by providers with excessive bargaining power, the complexity of what is in place today, and what seems to be working today, would argue that the best way to address any potential concern would be to focus on the sources of bargaining power and identify anti-competitive opportunism, rather than to impose ex ante restrictions on the range of bilateral contracts.” – Complexity of Internet Interconnections: Technology, Incentives and Implications for Policy, P. Faratin, D. Clark, P. Gilmore, S. Bauer, A. Berger and W. Lehr. http://people.csail.mit.edu/wlehr/Lehr-Papers_files/Clark Lehr Faratin Complexity Interconnection TPRC 2007.pdf

Does that help?

