
Comcast is Right, the FCC is Wrong

A fellow named Paul Korzeniowski has written a very good, concise piece for Forbes on the Comcast action at the FCC, “Feds And Internet Service Providers Don’t Mix.” He manages to describe the controversy in clear and unemotional language, which contrasts sharply with the neutralists, who constantly reach for emotionally charged terms such as “blocking,” “Deep Packet Inspection,” “forgery,” and “monopoly” to describe their discomfort.

What Comcast actually did, and still does today, is simply limit the amount of free upstream bandwidth P2P servers can use to 50% of capacity. This isn’t “blocking” or “censorship,” it’s rational network management:

Cable giant Comcast is at the center of a very important controversy for small businesses. In the summer of 2007, it became clear that the carrier was putting restrictions on how much information selected customers could transmit. BitTorrent, a P2P application-sharing company, had been using lots of bandwidth, so the ISP throttled back some of its transmissions.

“Throttled back some of its transmissions” is correct. Comcast doesn’t throttle back P2P downloads, which you can prove to yourself if you happen to have a Comcast account: download a large file using P2P and notice that it moves faster than it possibly can on any flavor of DSL. My recent tests with Linux have files downloading at 16 Mb/s, the advertised maximum for my account.

Korzeniowski then explains the facts of life:

The reality is that all ISPs are overbooked—they have sold more bandwidth than they can support.

This overbooking has been an issue since the old Public Switched Telephone Network (PSTN) days. In that situation, individuals would receive a busy signal when the network was overloaded. Because the Internet has an antithetical design, ISPs don’t have a busy signal option.

ISPs actually do have a “busy signal option”: it’s the Reset packet that Comcast uses to limit active upstream sessions. But neutrality regulationists call it “forgery” and abhor it.

“Overbooking” bandwidth isn’t a bad thing; in fact, it’s central to the economics of packet switching. The PSTN forces each caller into a bandwidth ghetto where he is allocated a small chunk of bandwidth, 4 kHz, regardless of how much he currently requires. If you’re on the phone and have to set it down to check on your chili, you have 4 kHz. If you’re blasting files over a modem connection, you have 4 kHz. It doesn’t matter how many other callers are online and what they’re doing: you each get 4 kHz. That’s the law.

But packet switching, of which the Internet is an example, allows your bandwidth allocation to float depending on what you need to do and what other people are doing. You share network facilities with your neighbors (this is true whether you use DSL or cable; you just share at different points in the network), so you can get a larger chunk of bandwidth when they’re idle than when they’re banging the net hard.

Overbooking allows you to use very large amounts of bandwidth for short periods of time, which is ideal for web surfing: you click on a link, you get a ton of graphics sent to your computer. While you’re reading, your neighbors get to use the bandwidth that would be wasted if you had PSTN connections. It works for everybody, most of the time. It works so well, in fact, that ISPs haven’t bothered to meter actual bandwidth use: the resource is so abundant, and the demands so few (especially in the upstream direction, where your clicks move) that there’s never been a need to control or meter it.
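To see why overbooking works so well for bursty traffic, consider a back-of-the-envelope simulation. This is only a sketch; the subscriber count, peak rate, and duty cycle below are illustrative assumptions, not anybody’s actual network numbers:

    import random

    # Illustrative assumptions: 100 subscribers share a 100 Mb/s segment, each
    # bursting at 8 Mb/s but actually transmitting only ~5% of the time (web clicks).
    SUBSCRIBERS = 100
    LINK_CAPACITY_MBPS = 100
    PEAK_RATE_MBPS = 8
    DUTY_CYCLE = 0.05
    SAMPLES = 10_000

    random.seed(1)
    congested = 0
    for _ in range(SAMPLES):
        active = sum(random.random() < DUTY_CYCLE for _ in range(SUBSCRIBERS))
        if active * PEAK_RATE_MBPS > LINK_CAPACITY_MBPS:
            congested += 1

    overbooking = SUBSCRIBERS * PEAK_RATE_MBPS / LINK_CAPACITY_MBPS
    print(f"Overbooked {overbooking:.0f}x, congested {100 * congested / SAMPLES:.2f}% of the time")

Change DUTY_CYCLE to something like 0.8 to model a neighborhood of always-on uploaders and watch the congestion figure jump; that is the problem described next.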

Enter P2P, a method of moving large files across networks that relies on free upstream bandwidth. Now the abundant broadband upstream is constantly occupied, not by an interactive application that sends a click now, another click five seconds from now, and another a minute from now, but by applications that constantly stream traffic up the wire, to the detriment of the others in the neighborhood. Something has to give.

One approach is to cap upstream traffic:

However, the “all you can eat” model may no longer be viable—a change the government seems to be ignoring. ISPs could use the open salad bar model when users were mainly transmitting small textual data. But with video becoming more common, users increasingly transmit very large high-definition files.

In response, Comcast plans to cap customer usage at 250 GB of data each month. That translates to about 50 million e-mails, 62,500 songs, 125 standard-definition movies, or 25,000 high-resolution digital photos. That amount would seem to meet the needs of most customers, including small and midsize businesses. The only folks affected would be companies such as BitTorrent, that have based their business on the “all you can eat” model, and hackers, who routinely spew out tons of unwanted solicitations and malware.
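The arithmetic behind those equivalences is easy to check; the per-item sizes below are simply the ones implied by the quoted figures (roughly 5 KB per e-mail, 4 MB per song, 2 GB per standard-definition movie, 10 MB per photo):

    CAP_BYTES = 250 * 10**9          # 250 GB, using decimal gigabytes

    items = {
        "e-mails (5 KB each)":        5 * 10**3,
        "songs (4 MB each)":          4 * 10**6,
        "SD movies (2 GB each)":      2 * 10**9,
        "hi-res photos (10 MB each)": 10 * 10**6,
    }

    for name, size in items.items():
        print(f"{CAP_BYTES // size:>12,} {name}")

Running it reproduces the 50 million / 62,500 / 125 / 25,000 figures, which gives a sense of how far 250 GB goes for ordinary use.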

Capping has its critics, mostly the same people who object to traffic management as well:

For whatever reason, some believe ISPs should not be able to put any restrictions on the volume of information that any user transmits. That’s absurd. Per-bit and per-byte pricing models have long been used for data transmissions. In trying to build and sustain their businesses, carriers constantly balance their attractiveness and viability versus unlimited usage pricing models. By government decree, they no longer have that option. In effect, the FCC has decided to tell ISPs how to run their networks.

Capping frees up bandwidth for sharing by taking free bandwidth off the table for P2P. But it’s not a technically elegant approach. Humans respond to caps month-by-month, but networks experience congestion and overload millisecond-by-millisecond. So the sensible engineering approach is to manage traffic in pretty much the way that Comcast does it today: identify the bandwidth requirements of applications and allocate bandwidth to those that need it the most, as we would with any scarce resource, granting transmission opportunities (that’s a technical term we use in network architecture) to highly interactive applications such as VoIP ahead of non-interactive applications such as HDTV file transfers. This is sound practice, but the FCC has now said it’s illegal. The FCC is anti-consumer.
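As a sketch of what granting transmission opportunities by application class means in practice, here is a toy strict-priority scheduler. The class names and priorities are illustrative, not a description of any carrier’s actual equipment:

    import heapq
    from itertools import count

    # Lower number = higher priority: interactive traffic goes out first.
    PRIORITY = {"voip": 0, "web": 1, "bulk": 2}   # illustrative classes

    class Scheduler:
        def __init__(self):
            self._queue = []
            self._seq = count()      # FIFO tie-breaker within a class

        def enqueue(self, traffic_class, packet):
            heapq.heappush(self._queue, (PRIORITY[traffic_class], next(self._seq), packet))

        def next_transmission_opportunity(self):
            """Return the highest-priority packet waiting, or None if idle."""
            if self._queue:
                return heapq.heappop(self._queue)[2]
            return None

    sched = Scheduler()
    sched.enqueue("bulk", "P2P chunk")
    sched.enqueue("voip", "20 ms voice frame")
    sched.enqueue("web", "HTTP GET")
    print(sched.next_transmission_opportunity())   # the voice frame goes first

The bulk transfer still completes; it simply waits the few milliseconds it takes for the interactive traffic to clear.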

Net neutrality supporters have pressured the FCC because they believe cable companies are unfairly monopolizing the Internet access marketplace. This conveniently ignores a couple of factors. First, there is no Internet access monopoly. A small or midsize business can get access from cable companies, telcos or wireless suppliers. True, there are not 50 choices, as you might have when buying a new pair of pants, but there is a reason why so few companies compete in the Internet access arena—it’s not a great business.

In fact, net neutrality advocates have turned a blind eye to the history of the dot-com bubble. Internet access start-ups burned through more cash with fewer positive results than any market sector in memory—and perhaps ever. Providing Internet access requires a lot of capital for the network and support infrastructure, and there’s not a lot of money to be made when customers pay about $20 a month for unlimited access.

The alternative to application-sensitive traffic management is a crude user-based system that treats all of each user’s traffic the same. This means, for example, that your VoIP streams get the same service from your ISP as your web clicks and your file transfers. This is insane.

Each Internet user should be able to multitask. We should be allowed to share files with P2P or any other non-standard protocol of our choice at the same time that we’re video-chatting or surfing the web. The heavy-handed FCC ruling that all packets must be treated the same undermines the economics of packet switching and delays the day when the Internet will make the PSTN and the cable TV systems obsolete.

Comcast was right to take the ruling to the courts to get it overturned. ISPs should be allowed to deploy a traffic-management system that combines elements of the protocol-aware system currently in use at Comcast with the new “protocol-agnostic” system that’s under test, such that each customer has a quota for each class of traffic. This is sound network engineering, but the current state of law makes it illegal.

This is not good.
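For what it’s worth, “a quota for each class of traffic” needn’t be exotic. One plausible shape for it is a token bucket per class per customer; the rates below are placeholders chosen only to show the idea:

    import time

    class TokenBucket:
        """Allow a class of traffic a sustained rate plus a burst allowance."""
        def __init__(self, rate_bytes_per_sec, burst_bytes):
            self.rate = rate_bytes_per_sec
            self.capacity = burst_bytes
            self.tokens = burst_bytes
            self.last = time.monotonic()

        def allow(self, nbytes):
            now = time.monotonic()
            self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
            self.last = now
            if self.tokens >= nbytes:
                self.tokens -= nbytes
                return True
            return False     # over quota: delay or deprioritize, not terminate

    # Placeholder per-customer quotas: generous for interactive traffic, tighter for bulk upload.
    quotas = {"voip": TokenBucket(100_000, 20_000),
              "web":  TokenBucket(1_000_000, 500_000),
              "bulk": TokenBucket(250_000, 250_000)}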

 

By Richard Bennett, Consultant

Richard is co-creator of the Ethernet and Wi-Fi standards.


Comments

TCP Resets are Not a Bandwidth Management Tool The Famous Brett Watson  –  Sep 29, 2008 2:59 AM

I’ve said this many times before, but I’m afraid I’ll have to say it again, just to ensure that a lie oft-repeated isn’t accepted as truth. Time and time again, Richard Bennett has pushed the idea that TCP Resets are a perfectly valid bandwidth management technique. This notion is not given any currency by any IETF standard or recognised best current practice. In fact, it flatly conflicts with the base TCP standard, which states the following.

As a general rule, reset (RST) must be sent whenever a segment arrives which apparently is not intended for the current connection. A reset must not be sent if it is not clear that this is the case. [RFC 793, p.36]

It is thus perfectly accurate and rational to call the resets generated mid-stream by Comcast “forged packets”. They purport to originate at the connection end-points, but do not, and they convey the message that inappropriately numbered segments have arrived at the endpoints, which is untrue. That Comcast “throttled back transmissions” with this method is not true, since TCP throttling is achieved by dropping packets, not forging resets. This is not simply my opinion: it is a re-statement of the relevant standards, and the kind of thing that a university student is likely to find in his final exam in a second or third year networking subject. There is no official term for the “TCP reset” method of traffic management, since it’s not supposed to happen at all, but it would be more accurate to say that Comcast interrupted transmissions using this technique.
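To make the “forged” terminology concrete: such a mid-stream reset is an ordinary TCP segment whose source address and sequence number claim to come from one of the endpoints. With a packet-crafting library such as Scapy it looks roughly like this; the addresses, ports, and sequence number are placeholders, and sending it requires raw-socket privileges:

    from scapy.all import IP, TCP, send

    # Placeholder values for the two endpoints of an established connection.
    SEEDER_IP, SEEDER_PORT = "198.51.100.10", 6881
    PEER_IP, PEER_PORT = "203.0.113.25", 51413
    SEQ_IN_WINDOW = 123456789        # must fall inside the receiver's window to be accepted

    # A reset that claims to come from the seeder, although the seeder never sent it.
    forged_rst = IP(src=SEEDER_IP, dst=PEER_IP) / TCP(
        sport=SEEDER_PORT, dport=PEER_PORT, flags="R", seq=SEQ_IN_WINDOW)

    send(forged_rst, verbose=False)  # the peer aborts the connection per RFC 793

Neither endpoint asked for this; a middlebox simply spoke in their name.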

Traffic management is well and good when done in harmony with the design of the protocols being so managed. TCP resets are the antithesis of such harmony, however, and it’s impossible to debate the finer issues on rational grounds while this particular elephant remains standing in the room.

In short, Richard Bennett has an unconventional theory of network management which stands in direct contradiction to many widely accepted practices and architectural principles of the Internet. I believe in giving unconventional theories (such as Richard’s) a fair hearing, but I urge the reader not to be swayed by his rhetoric, which portrays the conservative mainstream (“neutralists”) as though it were a whiny, self-interested, and technically incompetent minority.

The incorrectness of ISP-generated resets is the issue Mike O'Donnell  –  Sep 30, 2008 6:58 PM

Brett Watson explained the reason why the TCP reset packets generated by Comcast are properly viewed as "forged" much better than I would have. There is so much distracting noise in the ensuing discussion that I think it worth adding my less well written support.

I pay an ISP to forward my packets toward those I address them to, and to present to me those packets that are addressed to me, according to the acknowledged best practices that I read of in Internet standards and RFCs. I expect to be able to use those packets in any way agreeable to myself and my correspondents. I understand that the ISP may drop packets to or from my address, and may send control packets on its own authority, according to some of those best practices. But my ISP has failed in its commitment to me when it presents to me a packet that it created, with a return address that is not its own, and when it sends to any other host a packet purporting to come from me, but actually created by the ISP. The ISP has no legitimate authority to determine for itself that this mislabeling is benign; only I and my correspondents have the authority to decide what conclusions we will draw from the others' packets.

Mike O'Donnell
http://people.cs.uchicago.edu/~odonnell

All fine and good Richard Bennett  –  Sep 30, 2008 7:33 PM

The fact remains that none of the applications Comcast targets for throttling are harmed by the Reset packets. They typically open thousands of TCP connections, leaving them open until they're closed by network management. This is actually one scenario where Resets are quite reasonable.

Pedantic objection Richard Bennett  –  Sep 29, 2008 9:49 AM

The more significant point is whether it’s reasonable to prioritize applications according to their unique requirements at all. That point so dwarfs the pedantic discussion of one technique over another as to render it moot.

Neutralists argue, for the most part, that limiting application appetites for bandwidth is inherently anti-competitive, and that’s too large an elephant for me to swallow.

Pedantic claim of pedanticness Mike O'Donnell  –  Sep 30, 2008 7:03 PM

The more significant point is whether it's reasonable to prioritize applications according to their unique requirements at all. That point so dwarfs the pedantic discussion of one technique over another as to render it moot.
Stunningly wrong. The more significant point is whether it is OK to label a packet with an incorrect address. Protocols work in the long run because we agree what various signals mean, and leave the decision how to react to that meaning to the recipient. Providing false information to provoke a reaction that the falsifier believes to be desirable is completely destructive of the long-term confidence in the protocol. Mike O'Donnell http://people.cs.uchicago.edu/~odonnell

Once again Richard Bennett  –  Sep 30, 2008 7:37 PM

P2P works perfectly fine on the Comcast network, and always has. Any harm to the protocols is strictly theoretical.

Consistency in Opinions Dan Campbell  –  Sep 30, 2008 8:09 PM

Mike, so what are your viewpoints on NAT, PAT, transparent caching, WAN optimization / TCP acceleration, etc.? These and other technologies are found in abundance on the Internet. They too alter IP addresses, packets, data transactions and protocol interaction. They are neither designed nor applied with some sinister motive to be destructive or deceptive. They are designed to enhance the Internet experience for the end user. And they do. For example, the Internet wouldn’t be where it is today without NAT/PAT. (That, or we’d be using IPv6 now.) Yet NAT/PAT is the ultimate in altering the subscriber’s source address. It screws up some applications, and to a degree it continues to restrict us. But most application designers have found ways around its issues, so few Joe-average consumers know the difference or complain about it. But it is there, and it is creating a new packet that you didn’t originally send. So, it is ok if your viewpoint is that your traffic should be untouched, your IP address should remain intact, all messages should be between you and the web server only, everything should follow RFC standards, etc. But you can’t say that it is wrong for a service provider to alter your transaction when performing network management when you feel it works against you (e.g., if your BitTorrent sessions are disrupted), then say it's ok or not address similar actions that work in your favor, e.g., NAT, which has helped us all. Consistency in arguments or objections is needed.
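For readers unfamiliar with how PAT rewrites addresses, a toy translation table captures the essence. This is only a sketch with made-up addresses; real NAT devices also rewrite checksums, track TCP state, time out mappings, and so on:

    # Outbound: replace the private source (ip, port) with the router's public
    # address and a freshly allocated port; remember the mapping for replies.
    PUBLIC_IP = "203.0.113.1"        # placeholder public address

    nat_table = {}                   # (private_ip, private_port) -> public_port
    next_port = iter(range(40000, 65536))

    def translate_outbound(src_ip, src_port):
        key = (src_ip, src_port)
        if key not in nat_table:
            nat_table[key] = next(next_port)
        return PUBLIC_IP, nat_table[key]

    def translate_inbound(dst_port):
        for (priv_ip, priv_port), pub_port in nat_table.items():
            if pub_port == dst_port:
                return priv_ip, priv_port
        return None                  # no mapping: drop the unsolicited packet

    print(translate_outbound("192.168.1.10", 51413))   # ('203.0.113.1', 40000)

Every packet that leaves the router carries a source address the subscriber’s host never put there, which is the point of the comparison.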

Accuracy in attributing opinions Mike O'Donnell  –  Sep 30, 2008 8:40 PM

I run NAT on my house network because, presumably due to the delay in deploying IPv6, it is not cost-effective for me to pay my ISP to provide me with more than one IPv4 address. When I run NAT, I am merely making my entire home network into a single distributed IP host, and all traffic appears to the outside world as coming from and going to that single distributed host. I never create packets purporting to come from someone else's host. I would not choose an ISP who multiplexed the same IP address between me and other users. I have no objection to other people making such a choice. I would object to an ISP that told a customer she had sole use of a particular IP address, and also delivered some packets for that address to other customers, or delivered to the outside world packets with that customer's address as the source IP address, which were not actually authorized by that customer.

I am a bit too ill today to untangle all of your interesting speculations about opinions that I might have about various other things, and whether or not they are consistent. I am concerned here with the accuracy of the source IP address in the header of an IP packet. I acknowledge all sorts of virtualization, but only when it is transparent or has the consent of the virtualized. I object to an ISP assuming implicit consent to change the visible behavior of whatever I choose to associate with my IP address.

all messages should be between you and the web server only
I don't think I said anything about a Web server. I reserve the right to communicate with any sort of correspondent.
you can’t say that it is wrong for a service provider to alter your transaction when performing network management when you feel it works against you (e.g., if your BitTorrent sessions are disrupted), then say it's ok or not address similar actions that work in your favor
I don't see the relevance of this sort of ethical-consistency discussion to the question of accuracy of the source IP address field in a header. But, since you mentioned it, I certainly do reserve the right to complain about behavior that harms me, while not complaining about similar behavior that helps me. I acknowledge that it's not inherently OK for an ISP to help me by harming others, and I acknowledge the need for a certain simplicity and understandability in the rules. But this proposed ethical rule of yours is quite bizarre and unacceptable in general.

Diffserv The Famous Brett Watson  –  Sep 29, 2008 11:02 AM

Prioritisation and other nuances of packet handling are not bad ideas when implemented appropriately. We have a ten-year-old architecture for this, known as diffserv, and if you want to promote it, go right ahead. It is, however, a bad idea to make guesses at the desired class of service based on attempts to identify the application. Such an approach is built on too many assumptions, and ties itself to every possible application (an endlessly updating list), rather than a few well understood service parameters like “delay” and “jitter”.
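Marking a desired class of service is already within reach of any application; on Linux, for instance, a program can set the DSCP bits on its own sockets. The snippet below assumes a Linux host and uses the standard Expedited Forwarding code point (DSCP 46); the destination address is a placeholder:

    import socket

    DSCP_EF = 46                     # Expedited Forwarding: low delay, low jitter
    TOS_EF = DSCP_EF << 2            # DSCP occupies the upper six bits of the TOS byte

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, TOS_EF)

    # Every datagram this socket sends now carries the EF marking, which a
    # diffserv-aware network can map to a low-latency queue.
    sock.sendto(b"20 ms voice frame", ("192.0.2.50", 5004))

The carrier then only has to honour the marking, rather than guess the application from the payload.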

Silly Richard Bennett  –  Sep 29, 2008 6:28 PM

Diffserv has never been widely adopted, and until applications mark their packets consistently, it doesn't address the problem at hand. Header inspection works just fine.

Ulterior motives The Famous Brett Watson  –  Sep 29, 2008 11:11 PM

Push for the adoption of DiffServ by the carriers, and application support will follow. In any case, the fact that a ten-year-old solution hasn't gained much traction in the real world should give you some idea as to how important this issue really is: i.e. trivial enough to ignore. So why do you make it sound like it's the end of the world as we know it? All packets being treated equal? "Insane," you say! Yet that's been the status quo for a long time, and it's not like Comcast are using their "application-sensitive traffic management" for any such noble purpose as VoIP enhancement (except in the crude sense that interrupting P2P uploads offers relatively better service to everything else).

QoS can be addressed here and now with DiffServ. Cisco was issuing press releases on their DiffServ support seven years ago, so you can be pretty sure that every carrier has the necessary functionality lying dormant in their existing router infrastructure right now. All that's lacking is the will of the carriers to actually use it, which is pretty conclusive evidence that no such will exists. Obviously QoS is nothing but a smokescreen in this discussion, since carriers have been neglecting the QoS tools at their disposal for years.

And that's the tip-off, isn't it? The real problem with DiffServ is that it only solves the QoS problem, which is actually a matter of no importance to the carriers. DiffServ empowers the carriers only to obey the whim of the end-points, which is no empowerment at all. Deep packet inspection, on the other hand, gets the carriers in on the conversation. It allows them to discriminate not only in the politically acceptable sense of making QoS adjustments, but also allows them to degrade applications which compete with their paid offerings, insert paid advertising into web pages, and so on. QoS is clearly just a pretext to admit the nose of the camel into the tent without protest, but the rest of the camel will surely follow.

Black helicopters circle the sky above the Famous castle Richard Bennett  –  Sep 29, 2008 11:33 PM

And inside the secure bunker, Sarah Palin lectures about QoS. Comcast doesn't retain any customer information obtained by the Sandvine deep header inspection system, and in fact touts the fact that it's completely anonymous. But don't let me confuse you with facts, Brett.

Mea culpa The Famous Brett Watson  –  Sep 30, 2008 12:54 AM

I'm sorry, Richard, but I thought we digressed from "what Comcast is doing with DPI" into "what Comcast (and others) could do with DPI" when you mentioned it as a possible QoS tool. My additional apologies for pointing out the DiffServ thing, and how it undermines your QoS argument: I can tell I've annoyed you by your silence on the matter. It seems I have an awkward habit of noticing new elephants standing in the room even as you draw attention away from the old ones.

Please Richard Bennett  –  Sep 30, 2008 1:32 AM

The virtue of header inspection is that it works on legacy applications that don't happen to be on-board with whatever future agreement the Industry wants to reach about packet classification. It's a practical approach, in other words, and not simply a thesis topic. Anyone who's genuinely concerned about digital privacy would do well to address the problem of businesses whose business plan calls for the retention of personal data, such as Google. There are many ways to come by personal data, and none of them is a cause for concern in its own right. What counts is what you do with it when you get it.

The Korzeniowski article contains misstatements of the Michael Roberts  –  Sep 29, 2008 1:10 PM

The Korzeniowski article contains misstatements of the actual broadband situation. Over 90% of residential customers in the US have only two choices, cable or DSL. These are provided by former government-sponsored monopolies which are trying to adjust to deregulation and to the fact that they are unable to offer real broadband because of the obsolescence of their engineering plant. Cable is facing the major upheaval of the digital TV conversion in 2009, and only Verizon is installing genuine fiber to the home, with a minuscule percentage of homes wired so far. In the meantime, the monopolies are fighting each other to control video, where they think great riches of content profits lie. Is it any wonder the consumer comes out badly? The FCC is trying to promote a level playing field of open access with genuine competition instead of snake oil claims. Is there any other service business around where all you get is “best efforts” for your consumer dollar?

Actually, no Richard Bennett  –  Sep 29, 2008 10:53 PM

Most American broadband consumers choose DSL or cable, but that's not because they don't have other choices. There are at least two national wireless broadband systems, Verizon and AT&T, and their user base is growing. In fact, most of the broadband growth in the US is wireless. Wireless offers lower speeds than wireline, but it has the significant advantage of mobility. And in addition, there are some 3,000 to 4,000 independent ISPs like LARIAT who offer regional service. The point of my critique is that the FCC's efforts to promote broadband competition are wrong-headed, even if they're virtuous in intention. It takes more than political rants to make broadband grow.

Satellite services too Dan Campbell  –  Sep 30, 2008 1:37 PM

There are also several satellite-based Internet access services available, at least Hughes, Starband and Wildblue. Their satellites cover the whole US of course, a larger area than Comcast or any other cable or DSL provider, although some satellite providers focus on the rural areas rather than compete head to head in areas where there are multiple terrestrial providers. But for those who are complaining that they have limited choices because of where they live, this may be an option. There are tradeoffs and limitations of course, but it is just more disinformation spread in this whole debate when people claim that they don't have any choice but their cable company for Internet service. Those who truly have no alternative are a very small minority of people and, frankly, that is one of the tradeoffs you make when you decide where to live. Broadband Internet access options were actually on my list of requirements/features when I recently moved to a new house (I only wish I had done more research to learn about Verizon's dispute with my local city regarding FiOS deployment).

Actually quite the opposite Dan Campbell  –  Sep 29, 2008 1:42 PM

Whether you realize it or not, most businesses are unfortunately best effort no matter how much they disguise it. (Have you flown lately?) Pretty much everything we do operates on an oversubscription scheme of some sort, whether it is tables or servers in a restaurant, lanes on a highway, aisles in a supermarket, channels for traditional or mobile phone calls and, yes, broadband bandwidth. And sometimes it gets congested and service quality goes down. But oversubscription and the means to manage it are part of delivering service within a business model that can (maybe!) deliver profits while keeping consumer costs low. Because we live with those kinds of things daily, it’s hard for me to understand why the general public cannot see how it applies to broadband, and that’s what clutters the whole Comcast / Net Neutrality argument. I guess we don’t often get fast busies on the telco network anymore since it’s had 100 years to work out the Erlang tables, but do we not remember how just a few years ago (and sometimes even now) we’d get fast busies on mobile phones? Doesn’t anyone remember the fast-busy fiasco when AOL went to unlimited dial-up access around 1997?
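The “Erlang tables” are just tabulations of the Erlang B formula, which gives the probability that a call attempt gets a busy signal for a given offered load and trunk count. A small worked example (the load and trunk figures are illustrative, not real network data):

    def erlang_b(offered_load_erlangs, trunks):
        """Blocking probability via the standard recursive form of Erlang B."""
        b = 1.0
        for k in range(1, trunks + 1):
            b = (offered_load_erlangs * b) / (k + offered_load_erlangs * b)
        return b

    # Illustrative: 100 subscribers, each off-hook about 10% of the busy hour,
    # sharing 16 trunks. Heavily oversubscribed, yet blocking stays low.
    load = 100 * 0.10                # 10 erlangs of offered traffic
    print(f"{erlang_b(load, 16):.2%} of call attempts get a fast busy")

The same kind of arithmetic, applied to data instead of voice, is what sits behind broadband oversubscription ratios.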

It’s a worthy cause to discuss what is the best way to manage broadband traffic to ensure service quality, and there are many opinions.  It will continue to evolve.

It’s a worthy cause to discuss the history of legacy cable and telco monopolies that still limit us to this day, and suggest what can be done about it.

It’s a worthy cause to determine if the FCC or the Gov’t in general should step in, if they are currently allowed to step in, and if they do how far they can go before they overstep their bounds.

That’s what makes this whole thing a complex issue. It’s like a Rubik’s Cube: you have to solve all sides in tandem. Focusing on just one will not solve the puzzle and may clutter the other sides.

In the meantime, we still need to execute fair access policies to ensure that the majority of the broadband consumer base gets decent service for a low price, whatever the technical method is.
