
FCC and Comcast: Reasonably Vague

So, the FCC will recommend that Comcast be “punished” or receive “sanctions” for its peer-to-peer throttling practice. And the network neutrality debate goes on, as do its ambiguity and vagueness.

Even if you hate Comcast and agree with the net neutrality argument and the FCC’s decision, Comcast is correct about one thing: the “reasonable network management” standard the FCC set in its 2005 network neutrality policy is vague. Actually, the term “network management” is broad by itself, before you even try to interpret what is meant by “reasonable”, and it is not exactly correct in its application here.

The terms that more accurately describe what Comcast and other broadband service providers are doing include “throttling”, “bandwidth management”, “traffic shaping”, “fair access policy (FAP) enforcement”, “quality of service (QoS)”, “rate limiting” and “prioritization”, to name just a few. Take your pick or add any that I missed, but in my experience “network management” would not make the short list. It is way too broad.
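
To make those terms concrete, here is a minimal sketch of one of them, rate limiting via a token bucket, in Python. Everything here is illustrative; the class name and the 1 Mbps / 64 KB figures are my own assumptions, not anything Comcast or a vendor actually deploys.

```python
import time

class TokenBucket:
    """Minimal token-bucket rate limiter: packets are forwarded only
    while tokens remain; sustained traffic above rate_bps gets
    throttled (dropped or queued). Parameters are illustrative."""

    def __init__(self, rate_bps, burst_bytes):
        self.rate = rate_bps / 8.0      # refill rate in bytes/second
        self.capacity = burst_bytes     # maximum burst size in bytes
        self.tokens = burst_bytes
        self.last = time.monotonic()

    def allow(self, packet_bytes):
        now = time.monotonic()
        # Refill tokens for elapsed time, capped at the bucket capacity
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= packet_bytes:
            self.tokens -= packet_bytes
            return True   # forward the packet
        return False      # throttle: drop or queue the packet

# Illustrative: limit a flow to 1 Mbps with a 64 KB burst allowance
bucket = TokenBucket(rate_bps=1_000_000, burst_bytes=64_000)
print(bucket.allow(1500))  # a typical Ethernet-sized packet passes
```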

If you asked any network engineer what “network management” means, you would likely get a definition that revolves around monitoring a network for faults and performance, typically according to the FCAPS model and through the use of SNMP-based management tools. Maybe you would hear about configuration management and security. But few if any engineers would give you a definition geared towards traffic shaping or peer-to-peer (P2P) throttling.

Sure, traffic shaping technically falls into the realm of performance management (the “P” in “FCAPS”). But when engineers speak of performance management, they typically mean monitoring the utilization of network circuits and devices, producing utilization graphs, and looking for trends and issues. That monitoring might eventually lead to a traffic shaping or P2P throttling policy and solution, which is why I say “network management” is not exactly incorrect; there are simply better, more specific terms for what we are talking about here.
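
For contrast, this is roughly what that day-to-day performance management looks like: computing a link’s utilization from two successive samples of its SNMP byte counter (ifInOctets). A hedged sketch; the counter values and the 100 Mbps link are made-up numbers for illustration.

```python
def link_utilization(octets_t1, octets_t2, interval_secs, link_bps,
                     counter_max=2**32):
    """Percent utilization between two samples of an SNMP ifInOctets
    counter, handling a possible 32-bit counter wrap."""
    delta = (octets_t2 - octets_t1) % counter_max  # wrap-safe byte delta
    bits = delta * 8
    return 100.0 * bits / (interval_secs * link_bps)

# Illustrative: two samples taken 300 seconds apart on a 100 Mbps link
print(f"{link_utilization(1_250_000_000, 2_000_000_000, 300, 100_000_000):.1f}%")
# -> 20.0%
```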

I realize that I am splitting some technical hairs; that is exactly the point. If this issue is so important, warranting this level of FCC attention, intense media coverage and blog posts that border on lunacy and threats (yes, I actually saw one net neutralist threaten someone defending Comcast’s practice), then can’t we at least get the simple terminology correct?

Perhaps the FCC chose to be deliberately vague so that it retains the final say in interpreting what the terms actually mean, something it can do case by case and perhaps without consistency. The alternative would be to give a team of high-priced corporate lawyers representing a cable company or telco the flexibility to pick through specific language for loopholes that could not only get their client off the hook but actually set legal precedent in the process.

Or (and maybe I’m about to give the FCC too much credit), they understood that there is a myriad of interchangeable terms out there and chose instead to summarize at a higher level. That’s fine. But if so, how can you blame an ISP for making its own interpretation and then deploying commercial off-the-shelf products that are manufactured exactly for the purpose of P2P throttling, to enforce a policy whose main intent is to protect the vast majority of the subscriber base?

As an aside, why is there virtually no discussion of the off-the-shelf products themselves, or any real voice from the vendors? The biggest network vendor of all, Cisco, whose SCE product performs peer-to-peer throttling, has been incredibly quiet on the topic, probably because Comcast does not use its product. I saw one article where Sandvine commented on the situation, but there hasn’t been much else from the product vendors.

Presumably the hardware manufacturers are exonerated from any wrongdoing because they do not instruct their customers to violate laws, evade government policies, breach some canon of network ethics or act deceptively. Perhaps it is similar to the makers of, say, photocopiers or guns (or, uh, BitTorrent): as long as some legitimate use of the product can be demonstrated (and it ALWAYS can), the culpability for any questionable or illegal use (again, see BitTorrent) rests with the person who uses the product.
Still, so little commentary from Cisco, Sandvine and the like? Why?

Similarly, if much of this issue revolves around network neutrality and privacy, where subscriber traffic is to be left untouched and just routed as-is without any preferential treatment, why is there virtually no parallel conversation about the many other technologies used on the Internet that similarly inspect and manipulate traffic?

The Internet is composed of more than just routers and switches. The Internet includes firewalls, content switches, local or global load balancers, WAN acceleration appliances, cache engines, content filters, bandwidth management devices, routers configured with QoS features, spam filters, etc.

The aforementioned devices view, inspect, throttle, discard, delay, rearrange, queue, block, prioritize, alter, reset, redirect, proxy, spoof or otherwise manipulate data packets and end user sessions. Pick your technology, pick your term. It’s out there. These things were developed with the good intention of improving the performance of the Internet for the end user, while also allowing service providers and network operators some flexibility in how they deliver services and operate their networks.

Just because the policies that govern the implementation are developed by commercial companies, which also aim to protect the investment underlying their business model, does not automatically make it wrong, even if the ISP doing it is a cable company with a near-monopoly stranglehold on the available service offerings in a given area. Each of these technologies comes with a potential downside that can hurt the user community’s experience, whether that occurs inadvertently or deliberately. We have all been on the receiving end of such downsides at one time or another, whether we realize it or not. But mostly, the user community benefits from them.

If peer-to-peer traffic shaping (or network management or whatever you want to call it) is wrong, then is it the net neutralist’s contention that all of these technologies are also wrong, and that service provider networks must be dumbed down to routers just routing at layer 3, based on destination IP address and a corresponding routing table entry, and nothing more? In light of the development of all these technologies and the reasons for them, does that viewpoint promote progress or regression?

(Note: If we really wanted to get picky, we could easily include in this discussion the fact that most routers are configured (usually by default) to perform flow-based (session-based) load balancing, a technique that also goes a step beyond simple routing by destination IP address. Furthermore, we could bring up BGP and how ISPs derive policy such that routers make decisions on how to route packets outbound or influence their return path. The routers’ decisions are ultimately made at layer 3 without application-level preference, but the routing protocols are making decisions that go beyond just best path or IP address. Such policies may or may not be in the best interest of the end user, whose traffic could get routed through a sub-optimal path with a longer delay (however small it may be). The service provider’s intent may instead be to reduce their backbone traffic, a la “hot potato routing”. Where does this fit into the network neutrality debate? Where do you draw the line on what is considered traffic “blocking”, “delaying” or “preferential treatment”?)
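
To illustrate the flow-based load balancing mentioned in the note above, here is a conceptual sketch of how a router might hash a session’s 5-tuple onto one of several equal-cost paths. Real routers use their own (often proprietary) hash functions; the MD5-based hash and path names below are assumptions made purely for demonstration.

```python
import hashlib

def pick_path(src_ip, dst_ip, src_port, dst_port, proto, paths):
    """Flow-based (per-session) load balancing: hash the 5-tuple so
    every packet of a given flow takes the same path, while different
    flows spread across the available equal-cost paths."""
    key = f"{src_ip}|{dst_ip}|{src_port}|{dst_port}|{proto}".encode()
    digest = hashlib.md5(key).digest()
    return paths[int.from_bytes(digest[:4], "big") % len(paths)]

paths = ["path-A", "path-B", "path-C"]
# Every packet of this flow deterministically hashes to the same path:
print(pick_path("10.0.0.1", "192.0.2.9", 51515, 80, "tcp", paths))
```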

So why is such public outrage, press coverage and government scrutiny directed at Comcast for its traffic shaping policy, while there is close to nothing regarding these other, very common technologies?

Furthermore, given that all of these technologies exist on the Internet, and given that other broadband ISPs do what Comcast has been doing, why has Comcast taken the brunt of all this? Is it more likely because people hate the cable company and love to bash it whenever they can? There is an undercurrent of this throughout the blogs.

The cable companies have long been the butt of jokes and sarcasm, since long before they started offering data services. Many people place them in the same category as, say, the Department of Motor Vehicles or the Postal Service, for their notoriously bad service, poor customer support and (what consumers perceive as) high prices. Whether this is justified with respect to their relatively recent Internet access service, or whether it is a byproduct of their legacy cable TV service, is hard to say. Like the DMV or USPS, the cable company has dug such a hole for itself in the consumer’s eye that it may never be able to recover. Even if any of those entities suddenly delivered incredible service, a brand new offering or dramatically lower prices, it would be very hard for them to shake the stigma they have “earned” over the years. The legacy is there. This is part of the explanation for the outrage directed towards Comcast.

It is fair to say that Comcast and the other broadband ISPs are to blame for not explicitly disclosing their policies, leading the average consumer to believe they will truly get the maximum bandwidth dedicated to them 24x7. I’ve seen this issue (and the customer complaints) since day one of broadband. The service providers could always do a better job here. But you have to consider: if the service providers really attempted to disclose it, if they really tried to explain the theory behind oversubscription and how it keeps broadband prices low, could it be done in a way that the general layperson would understand (or would even read in the service agreement, for that matter)?
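
For what it’s worth, the oversubscription arithmetic an ISP would have to explain is simple; the subscriber counts and speeds below are hypothetical, chosen only to show why an “up to” speed and a shared uplink can legitimately coexist.

```python
# Hypothetical cable node: the figures are illustrative, not Comcast's.
subscribers = 400            # homes sharing one upstream link
advertised_mbps = 6          # the "up to" speed each subscriber is sold
uplink_mbps = 1_000          # actual shared upstream capacity

sold = subscribers * advertised_mbps     # 2,400 Mbps of "up to" bandwidth sold
ratio = sold / uplink_mbps               # 2.4:1 oversubscription
fair_share = uplink_mbps / subscribers   # 2.5 Mbps if everyone transmits at once

print(f"Oversubscription ratio: {ratio:.1f}:1")
print(f"Worst-case simultaneous share: {fair_share:.1f} Mbps "
      f"vs. {advertised_mbps} Mbps advertised")
```

The model works because subscribers rarely transmit simultaneously; a handful of always-on P2P users is exactly the case that breaks the assumption.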

The other thing that keeps being sidestepped is the root of the whole issue: illegal file sharing of copyrighted material. Let’s not kid ourselves. Regardless of the legitimate uses of P2P and its long-term potential, the vast majority of P2P traffic on broadband networks has been the downloading and trading of copyright-protected material (which, by the way, is a federal crime). The statistics don’t lie. The bandwidth usage becomes so intense, even from just a few users, that it ruins the experience for the rest of the user community, who are not engaging in the illegal practice. This cannot be rectified by simply allowing routers to drop packets based on application-agnostic first-in-first-out queuing; more intelligent traffic shaping techniques are necessary. Maybe eventually the majority of P2P traffic will be legitimate, but right here, right now, the problem is still rooted in illegal file sharing, however much that seems to be brushed aside at the moment. If it were not occurring, this whole Comcast issue would never have come to light and we wouldn’t be having this discussion.
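
To see why application-agnostic FIFO falls short, here is a toy comparison with a class-based scheme that gives P2P its own, smaller queue. This is purely an illustration of the concept, not any vendor’s implementation, and the queue limits are arbitrary.

```python
from collections import deque

def fifo_enqueue(queue, pkt, limit=10):
    """Application-agnostic FIFO: once the queue fills, every arriving
    packet is tail-dropped regardless of what application sent it."""
    if len(queue) < limit:
        queue.append(pkt)
        return True
    return False  # drop, whether it's VoIP, web, or P2P

def priority_enqueue(web_q, p2p_q, pkt, limit=10):
    """Class-based alternative: P2P gets its own, smaller queue, so a
    handful of heavy users can't starve everyone else's traffic."""
    q, cap = (p2p_q, limit // 2) if pkt["class"] == "p2p" else (web_q, limit)
    if len(q) < cap:
        q.append(pkt)
        return True
    return False

fifo, web_q, p2p_q = deque(), deque(), deque()
# Twelve P2P packets arrive back-to-back:
for i in range(12):
    fifo_enqueue(fifo, {"class": "p2p", "id": i})
    priority_enqueue(web_q, p2p_q, {"class": "p2p", "id": i})
print(len(fifo), len(p2p_q))  # FIFO fills entirely (10), leaving no room
                              # for web traffic; class-based admits only 5
```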

At the end of the day, the network neutrality debate is well worth having. But at the moment it appears to be clouded by inconsistent arguments, the absence of any discussion of the related technologies used in “network management”, vague terminology, unsubstantiated conspiracy theories, inaccurate assertions about competitive service offerings, and consumer feedback that may be tainted by generalized opinions solidified long before the cable companies started offering Internet access.

The definition of “reasonable” needs a lot more thought indeed.

By Dan Campbell, President, Millennia Systems, Inc.

Comments

But you have to consider wolfkeeper  –  Jul 15, 2008 5:07 PM

But you have to consider: if the service providers really attempted to disclose it, if they really tried to explain the theory behind oversubscription and how it keeps broadband prices low, could it be done in a way that the general layperson would understand (or would even read in the service agreement, for that matter)?

Well, ISPs in other countries do, countries with significant competition between ISPs, so the answer would appear to be yes.

The ISPs do disclose it... Dan Campbell  –  Jul 15, 2008 6:06 PM

Few people will read the service contract. Those that do will skim it, probably not understand the part about the shared bandwidth or at least not its implication, and even if they do they will likely not remember months later if and when a throttling issue becomes apparent. People are too busy or too lazy, don’t see it as important, or have only a few choices and figure that the DSL and cable providers probably have similar stipulations that they, the consumer, don’t have the power to change. And I’m pretty sure the ISPs have always disclosed it; it’s just vague and buried in the fine print. I’d have to go look at my own broadband service contract, but I’m pretty sure there will be some language in there right after the “Get speeds up to XYZ Mbps” saying something like “actual bandwidth may vary depending on time of day or network conditions…” I don’t know about overseas, but in the very litigious United States, where you can sue anyone anytime anywhere for anything, you can’t rent a car or sign up at Blockbuster to rent a movie without receiving a contract whose length rivals War and Peace, even when it is printed in 4-pt font. It has even become impractical to read through every detail in every document when you are closing on a house, a major purchase and contract agreement, and a lot is just assumed and taken on faith. In the broadband world, you would be trying to explain something that is both technical and a business model foreign to many a layperson (which is obvious when you read through a lot of blogs on this topic). So I wouldn’t expect contract language to help much, no matter how clearly it is spelled out. To a degree, it is buyer beware: the consumer must educate themselves a bit to make wise purchasing decisions and, if they are later unhappy, exercise the power to cancel service.

RE: Disclosure J Iannone  –  Jul 15, 2008 8:59 PM

The point of disclosure is not for users to read it. It’s for service providers to remain transparent. Had Comcast shown the necessary transparency, a lot of this hubbub could have been avoided. And regardless of the difficulty of the legalese involved, transparency is what matters most in this particular instance.

The problem, as Ars Technica points out, is at the edge. The core experiences little or no congestion. Regardless of traffic shaping or other ‘management’ practices, as the edge becomes oversubscribed, users suffer.
