Network Neutrality, UPS, and FedEx

Karl Auerbach

I buy a lot of things that are delivered by UPS or FedEx. And I kinda like to watch the progress of the shipments.

Now we all know that UPS and FedEx have different grades of service — Overnight, Two Day, Three Day, etc. And faster delivery costs more.

Several years ago UPS and FedEx would frequently deliver a Two Day package the next day, i.e. they would effectively elevate the class of service. A lot of us took advantage of that by sending almost everything at the lesser grade (and price) and often receiving a higher grade of delivery than we had paid for.

I am sure that that did not please the bean counters at the shipping companies.

Today, with better tracking systems, UPS and FedEx almost never deliver a package in advance of the delivery time for the paid class of service. They will hold packages in their warehouses in order to make this so. Today, if you want a given class of service you can get it only by paying for it; the old gambling trick no longer works. I am sure that this has increased UPS's and FedEx's revenue.

The thing to note here is that UPS and FedEx can carry packages Overnight, but that they impose a delay, often an artificial delay, on packages that aren't paying the premium Overnight tariff.

So what has this got to do with Network Neutrality?

Consider an ISP that adopts the UPS/FedEx model. In particular let's say that this ISP decides to impose a delay of 100 milliseconds on all standard-class packets and does so in a way that is completely neutral as to source, destination, or protocol. On a 10 gigabit/second link that means holding about 125 megabytes of traffic, in each direction, in a delay queue — a number readily within the range of today's technology.
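The arithmetic behind that 125 megabyte figure can be sketched in a few lines (a back-of-the-envelope check, not an implementation):

```python
# Sanity check on the figure above: a delay queue must hold everything
# that arrives during the delay window. Link rate and delay are as given
# in the text; this is just the arithmetic.
def delay_buffer_bytes(link_bps: float, delay_s: float) -> float:
    """Bytes that must be buffered to impose delay_s on a fully loaded link_bps link."""
    return link_bps * delay_s / 8  # bits in flight during the delay, converted to bytes

buf = delay_buffer_bytes(10e9, 0.100)  # 10 gigabit/s link, 100 ms delay
print(f"{buf / 1e6:.0f} MB per direction")  # prints "125 MB per direction"
```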

Then that ISP could offer premium, i.e. more expensive, grades of service that bypass some or all of that 100 millisecond delay.

I have never heard anyone claim that either UPS or FedEx is not acting with neutrality. It would seem that an ISP that acts as I have described would also be able to claim that it is just as neutral as UPS and FedEx.

I did not pick 100 milliseconds out of the air — rather I picked it because it can have a pernicious effect on VoIP. The ITU publishes 150ms as the one-way delay limit beyond which the users of a VoIP call go into "walkie-talkie" mode. 100ms, one way, does not reach that amount, but it is close enough that other network delays could easily push the connection over the edge; and round trip time will certainly exceed the threshold. In other words, a completely neutral application of 100ms to all packets, VoIP or not, will force VoIP users to upgrade to a premium service.
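To see how quickly a 100ms head start eats that budget, here is a rough one-way delay tally; every component value except the imposed delay is an illustrative assumption, not a measurement:

```python
# Rough one-way "mouth-to-ear" delay budget against the 150 ms guideline
# cited above. All component values below are illustrative assumptions.
LIMIT_MS = 150  # ITU one-way delay guideline for conversational voice

components = {
    "imposed ISP delay": 100,      # the artificial delay in the scenario
    "codec + packetization": 30,
    "jitter buffer": 30,
    "propagation + queuing": 20,
}

total = sum(components.values())
verdict = "over budget" if total > LIMIT_MS else "within budget"
print(f"{total} ms one-way: {verdict}")
```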

Other network activities would be impaired. Domain name transactions would slow down, causing user perceptions of sluggish service.

Bulk data transfers, such as web downloads of images, would be only marginally affected once TCP adapts to the round trip time. But ISPs could "fix" that by adding some packet loss and some delay jitter to their "standard" quality.

The point of this exercise is to suggest that ISPs have a well-stocked bag of tricks to induce users to pay more for what we used to get for free from "best effort" services on the internet.

By Karl Auerbach, Chief Technical Officer at InterWorking Labs


Comments

You have it exactly backwards, Karl  Larry Seltzer  –  Dec 24, 2009 9:52 AM PDT

You have it exactly backwards Karl. The real threat to uses like VoIP comes from unrestrained P2P users who cause jitter on the network.

Why would an ISP cause a costly support nightmare for themselves by doing what you suggest?

Points and counterpoints Dan Campbell  –  Dec 24, 2009 10:31 AM PDT

First of all, it's really not as easy as you say to arbitrarily place a delay as high as 100ms on all packets over a serial link much less end-to-end, not with regular routers/routing, and not without doing something really whacky like putting delay simulators on all of your WAN links for just such a purpose, or injecting a satellite link (much more than 100ms) arbitrarily in the network not because you wanted satellite but just because.  That's too much of a delay.  You can't hold data in router buffers forever.  They will fill up and packets will be discarded. And in any meshed network, at what point(s) do you put such buffering so that the 100ms delay only occurred, in theory, once?  Only at ingress?  Your access routers better have the capabilities you are suggesting.

Second, if you were to do such rate limiting / QoS (with smaller delays than 100ms of course), you would HAVE to do it by selecting source, destination or application (the latter by virtue of TCP/UDP ports or some other identifier).  You may say that by putting ALL traffic into the lower end queue, the catch-all for the best effort or worst effort traffic, and only specifically selecting higher paying customers (by source, destination or application) and putting them in the best queue, that you are still being neutral with respect to those in the lower queue; but I don't buy that. And I also don't buy the notion where you would put all traffic from a single specific customer into the lower end queue because, in essence, you are still selecting them by virtue of some network parameter, be it their source IP address or MAC address or how either maps into your DHCP server if this is broadband we are talking about (which really is always the essence of the Net Neutrality debate.) You are differentiating traffic in one way or another, and the only real ways to do that are using source and destination addresses, source and destination ports, or application-level marking where you can.

Third, jitter affects real-time applications but has little effect on "bulk data transfers, such as web downloads of images", provided that the jitter is not horrendous.  However, a 100ms delay WILL have an effect on such bulk transfers.  TCP won't "adapt to the round trip time" in any positive manner with respect to transfer time.  On the contrary, for a given window size its throughput falls in inverse proportion to the round trip delay, the window itself being bounded by the Bandwidth Delay Product (BDP).  If your worry is that an ISP would specifically target VoIP to degrade a competitor's service in favor of its own, well, it would also really affect all of its customers' traffic by doing this, and some worse than others, and it would likely lose some customers regardless of whether they use VoIP at all, much less theirs or a competitor's.
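To put rough numbers on that throughput point (the 64 KB window below is the classic default, used here purely for illustration):

```python
# With a fixed receive window, TCP throughput is capped at window / RTT,
# so adding 100 ms of delay slows bulk transfers. Illustrative numbers only.
def max_throughput_mbps(window_bytes: int, rtt_s: float) -> float:
    """Window-limited TCP throughput in megabits per second."""
    return window_bytes * 8 / rtt_s / 1e6

window = 64 * 1024  # classic 64 KB receive window (illustrative assumption)
print(f"{max_throughput_mbps(window, 0.020):.1f} Mbps at  20 ms RTT")
print(f"{max_throughput_mbps(window, 0.120):.1f} Mbps at 120 ms RTT")
```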

Fourth, there is no standard for latency or jitter on the public Internet, and there may never be, so use VoIP at your own risk.  The fact of the matter is that it is somewhat subjective as to when it degrades to being unusable.  MOS scores were initially purely subjective; there are better, more technical ways to calculate them now.  But plenty of voice, particularly from the third world, goes over satellite at 500ms round trip delay.  Once the callers get used to the "walkie-talkie" like effect, it works pretty well provided that the jitter is not bad.  VoIP will always suffer from delay and jitter, and it's not likely to get corrected on the Internet any time soon, particularly since Net Neutrality proponents oppose application-level intelligence (and thus QoS and intelligent queuing) on service provider networks.  (But note that my opinion on why end-to-end QoS doesn't exist on the Internet to date has nothing to do with the NN debate.)

Fifth, there are indeed many "tricks" that an ISP can do, and many (most? all?) of them have positive effects on the user community's traffic and online experience.  The trouble with a lot of the NN debate is that NN proponents often are only upset with ISP tactics that, in their view only, seem to negatively affect them personally.  But what about those that are or could be positive, such as QoS, email spam filtering, WAN optimization to mitigate latency effects, or transparent caching and content distribution?  The list is huge, and many have existed on the Internet for a while and are in operation right now regardless of any NN policy.

I realize it's just an analogy to FedEx and UPS, but I can spin it the other direction.  It's a business decision as to what UPS and FedEx do.  It is nice when companies give away little bonuses, whether accidentally or intentionally; it will bring you back to them and create customer retention.  Maybe their reasoning for holding packages, if that is indeed happening, isn't as conspiratorial as you suggest; maybe there were real evaluations regarding space on planes, or gas prices relative to the weight on planes, or the time it takes for their folks to pack the trucks and planes, or whatever other reason.  But even if not, that is their business model and their choice to make, and if consumers don't like it they can go with another provider like DHL or someone who doesn't operate that way.

Dan's made all the points I thought to make Suresh Ramasubramanian  –  Dec 25, 2009 6:15 PM PDT

So thanks Dan for spending Christmas day countering this FUD in a teacup.

cheating Larry Seltzer  –  Dec 24, 2009 10:55 AM PDT

BTW, I've also done the UPS trick you speak of, but it takes a lot of nerve to feel cheated for getting exactly what you pay for.

General response Karl Auerbach  –  Dec 27, 2009 2:59 PM PDT

What a surprising set of responses.

First let's deal with the technical issues:

There was a suggestion that it would not be "feasible" for a provider to degrade its service by injecting a constant delay on all packets.

To the contrary, it is quite feasible.  Up to gigabit rates one can do it today on cheap commodity personal computer hardware using open source software such as netem.  It is harder to obtain on commercial routing gear, but the reason for that has nothing to do with feasibility and more to do with the fact that providers are not, yet, demanding that their equipment vendors have such a feature.

(I have direct experience with some of this - I build gear that does this kind of thing for purposes of testing protocol implementations in the face of sub-par network conditions.)
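For reference, imposing that kind of flat delay with the Linux netem queueing discipline is a one-line configuration; the interface name below is a placeholder and the command requires root privileges:

```shell
tc qdisc add dev eth0 root netem delay 100ms
```

The same netem discipline can also add the packet loss and jitter mentioned earlier (the `loss` and delay-variation parameters), which is exactly the kind of gear used for testing protocols under sub-par conditions.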

Second let's deal with the non-technical issues:

One respondent seemed to believe that my suggestion that providers might create different classes of network service as a means to generate more revenue was an attack on the free market system.  That struck me as odd given that my intent was to suggest that in the context of competition among providers our debates about "network neutrality" ought to recognize that those providers might, in their pursuit of profits, find reasons to offer intentionally degraded service products that are completely neutral in their application (except for price).

One respondent discounted my note as "FUD" on grounds that seemed to me to say that because he could not imagine why a provider might want to do it, no provider would ever do it.  To that I would merely say that we technologists are often babes lost in the woods when we speculate about what a marketing person might induce his/her company to do or not to do.

When I wrote the original note I had in the back of my mind a scenario in which one big provider, let's call it Comcorp, sells its best-effort network access service at $30/month.  Then its competitor, AttVeriz, introduces a new rate structure in which 100ms delayed service is at $25/month, on-demand/switchable no-delay (best-effort) at $35/month, and never-delayed at $50/month.  Some of us techies might think "gee, Comcorp has the best technical/price ratio and hence the best deal".  But AttVeriz might be thinking "users will migrate to the lowest price, and then once they are signed up they can be induced into the higher priced version."  In other words, the creation of artificially tiered classes of service could be a useful marketing tool.

And don't we already see this kind of self-inflicted impairment today in the form of tiered bandwidth classes on consumer access links?

One respondent's comment reminded me of a line from the old Simon and Garfunkel song "At the Zoo":

Orangutans are skeptical
Of changes in their cages,

Why was I reminded of this?  Because many technologists seem to have a possessory emotion towards the internet; they feel that they must protect against change from the "clueless" and "newbies".  There is nothing new in that kind of protective mentality — those who built the telephone network were so protective of their creation that they launched a crusade against any outside changes, a crusade that reached a zenith of absurdity in the Hush-A-Phone case, when they tried to assert that a passive plastic cup attached to a telephone would destroy the telephone system.

Easier things to do than that Dan Campbell  –  Dec 28, 2009 8:57 AM PDT

Sorry, but the suggestion that a service provider might deliberately inject a 100ms delay on all “best effort” traffic in order to get customers to upgrade is far fetched.

When it is suggested that something is not feasible, it is in the full context of the service, which includes not only the technical aspect but also operational and financial.  It doesn’t mean “theoretically impossible”.  It means that it is not something that is practical to deploy in a commercial service.  Yes, many things can be proven and done in a lab.  Like Kevin Costner said in “JFK”, “Theoretical physics can prove that an elephant can hang from a cliff with its tail tied to a daisy.  But use your eyes and common sense.”

Software can be written to do almost anything these days, and the raw computing power of your average home PC is actually pretty amazing.  Hardware can be developed to do many things.  But it had better have significant market potential before you commit to developing it.  And you have to consider that in service provider networks that must be operated 24x7, simplicity brings operational stability.  There are literally hundreds if not thousands of possible configuration options on Cisco routers that no one in their right mind would deploy because of the operational complexity they create.  You might dabble with them on an enterprise network where you can tolerate issues a little better, but not on a service provider network.

I worked in the satellite industry for half my career, so I’m well aware of delay simulators and other test gear used to test “sub-par network conditions”.  Would it be practical to inject such technology into a national or global service that operates 24x7 and has thousands if not millions of customers?  And if you didn’t find a trustworthy off-the-shelf appliance from a reputable vendor with distribution chains and support, you’d be forced to develop it in house.  Then you would hand off dozens or hundreds of common machines with custom code to your operations group.  Doubtful.  Handing off these newer off-the-shelf hardened appliances that work in the layer 4-7 domain is hard enough.

And as you said, it is harder – i.e., impossible right now – to “obtain on commercial routing gear.” Could it be developed?  Sure.  Memory is not that expensive any more.  But what significant and legitimate demand from (multiple) service providers would cause vendors like Cisco or Juniper to do that, to assume that there would be a trend in service providers to deliberately buffer traffic for a long time period (even when there’s no congestion) just to sort of punish what is likely to be its largest subscriber class?  What start-up vendor (like Sandvine and P-Cube about a decade ago) would have the business plan and capital available to devote to this venture without there being a clear need, but just on the off chance that even one service provider would commit to such a strategy and buy hundreds of the appliances?

Cisco, the largest and most well known networking hardware provider of all, didn’t even have a product or feature set available to thwart P2P traffic when the whole Napster thing blew up.  Basic QoS couldn’t really do it.  A few products emerged and Cisco saw the need and benefit and bought into the market by buying P-Cube, a product they still deliver to this day (as the SCE).  A major router manufacturer like Cisco isn’t just going to devote resources and coding time to develop a feature on a whim, and certainly not from one service provider’s marketing department gone haywire.  The P2P issue that started with Napster and continued (continues) with BitTorrent and others was/is real and required real mitigation.

And if a broadband subscriber was savvy enough to figure out how the Sandvine boxes were thwarting P2P traffic for Comcast, I’m pretty sure within the first hour of service, many much-less-savvy subscribers will be sending their traceroutes into customer support and questioning why there’s 100ms of delay between two adjacent devices in the same POP or city, even if they somehow did commit to such “delayed service” when they bought it.  It won’t take long for that to boil over.  People complain about the natural propagation delay and router hops all the time.  Artificially induced delay?  Forget it.

It’s doubtful that a rogue marketing person, as technically misguided as they can be, would be able to deliver on a product with 100ms of deliberate delay.  It would probably not make it through the Product Development folks and Engineering, much less have enough muscle to make it all the way through a major 3rd party networking vendor.  You’d have to pay such a vendor quite a bit for what would be a customized solution unless you could really sway them that this demand is coming and deliberate backbone delays and buffering will become the norm.  Again, doubtful.

A bit more... Dan Campbell  –  Dec 28, 2009 8:58 AM PDT

It’s somewhat pessimistic to view the lowest tier of any class-based service as a kind of “worst effort” service or something that is intentionally degraded.  Often, that is the service provider’s bread and butter.  Think of the airlines.  Coach class may be uncomfortable and one with constantly diminishing features and amenities, but I highly doubt the marketing folks are deliberately causing that under the premise that the average person would upgrade from a $500 coach class ticket to a $4000 business or first class ticket for a 3 hour flight.  Coach may be the lowest margin for the airlines, but it is still the one that is and will continue to be used the most.  And it is perfectly suitable for most people for most flights.

Again, as I said earlier, if a service provider wants to create tiers of service where the lowest tiers are truly not “best effort” but actually are “deliberately delayed and worst effort”, there are easy ways to do it right now with packet shaping appliances that have been available for a decade, or QoS in routers.  You can simply make your best effort queue really small and create enough congestion and packet loss that it more or less has the same “cause bad service and make them upgrade” effect.  If nothing else, the provider may as well save on bandwidth and put up a few low speed backbone and transit links and policy route the best effort traffic over them, leaving those customers to contend for backbone bandwidth in the same constrained manner they contend for access, and leave the high paying subscribers to the higher speed links.  That would also be easily do-able with today’s technology, as would some other things that I mentioned, if a service provider is really out to get its core subscriber base.  Once again, doubtful.

(And I’d love to see the marketing spin on any new rate structure that advertised a “100ms delayed service at $25/month”.  Think of the airlines again with such a service.  Fly coach for cheap and we’ll circle the airport for 3 extra hours on what should have been a 2 hour flight.  No way.)

“And don't we already see this kind of self-inflicted impairment today in the form of tiered bandwidth classes on consumer access links?”

I’m not sure what you mean here.  On broadband or other IP-based networks, or in general?  For many if not most subscribers, the worst Comcast service (which I actually currently use) is perfectly fine.  I am technical, and Internet access at home is vital to me both to run my business as well as personal finance and other functions, not to mention the entertainment aspect.  For the most part the service I get is fine.  And I’m certain there are lower end subscribers than me out there.

Regarding comment #10... Dan Campbell  –  Dec 28, 2009 12:59 PM PDT

“As far as the P2P situation goes, there were no ‘bandwidth hogs.’”

There were indeed “bandwidth hogs”.  A few P2P users degraded the service for everyone else.  This is well known and not up for debate.

Most telecom services are based on some sort of contention, oversubscription, and statistical modeling, which is why we get broadband for $40/month.  If you don’t want a contention-based service, you can buy a leased line, just don’t expect to pay $40 but more like 10x that amount.
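To illustrate the contention model with made-up but plausible numbers (none of these figures are any real provider's):

```python
# Oversubscription sketch: many subscribers share an upstream link that is
# far smaller than the sum of their access rates. All figures below are
# illustrative assumptions.
subscribers = 500
access_rate_mbps = 20   # advertised per-subscriber rate
uplink_mbps = 1000      # shared upstream capacity

oversubscription = subscribers * access_rate_mbps / uplink_mbps
per_sub_worst_case = uplink_mbps / subscribers
print(f"{oversubscription:.0f}:1 oversubscription; "
      f"{per_sub_worst_case:.0f} Mbps each if everyone transmits at once")
```

The pricing works because most subscribers are idle most of the time; remove the statistical sharing and the $40/month price goes with it.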

Yes, service providers are notorious for poor choices of words.  I remember back around 1999 while consulting to an ISP and overhearing the DSL help desk take angry calls from some of the early adopters who were technically savvy (i.e., dangerous) enough to be quoting textbook-level theoretical data rate maximums and questioning why they weren’t seeing that, without regard to any of the realities of how broadband services are constructed and operated.  It was amusing to say the least.  Maybe it is tantamount to false advertising.  But they need succinct and common verbiage for the 30 second commercial or advertisement.  Even the term “high speed” is ambiguous.

But there is a certain spirit to the terms, and they shouldn’t be taken that literally or liberally.  Is an “all you can eat” restaurant really saying to the public that they support the consumption of all food for all mankind in a single sitting, or are they saying that the average (and above average) eater can pretty much eat until they are full and then some, within the reasonable confines of a single meal and timeframe?  But if you showed up with 10 buses of NFL football players, the “network congestion” would occur and the “oversubscription” model breaks down.  First off they would not have enough tables, plates and silverware to serve everyone at the same time, and there would be a queue where some people would give up and drop out while others would wait and get angry.  Then the ability to prepare food will succumb to the “patrons” ability to polish it off, causing more delays and anguish.  You may even run out of food or something in the kitchen would break.  And if someone took “all you can eat” to mean that they could show up at 10am and stay for breakfast, lunch and dinner, at some point the owners would pull the plug and cut it off, even if it meant permanently losing those customers.

For a more technical take, if you suggest that “unlimited” means I can do whatever I want as much as I want whenever I want and the interpretation of the term is up to me, then couldn’t I argue that I should have whatever bandwidth I really want, say a full GigE to my house?  Couldn’t I argue that I should be able to watch 5 simultaneous live HD streams at once, or download/upload a terabyte of data in 2 seconds, and have room to spare?  Since the provider said “unlimited”, is my definition of “unlimited” fair when it goes beyond 2009 technical possibilities?  I’m actually surprised we haven’t seen this yet.

A big reason why we have to review and sign a 25-page disclaimer-laden contract just to rent movies is in part because of consumers who abuse the system and in part because of business owners and their lawyers protecting themselves.  It’s a vicious cycle, and it gets tiring.  It’s why there are “limit one coupon per purchase” or “limit 2 per customer” rules, as consumers will take advantage of any loophole, and businesses will counter with legalese and limits.  I’m pretty sure broadband providers, regardless of language like “unlimited” used in advertising, covered themselves in the fine print with “actual speeds may vary based on time of day or network resources…” or something similar.

“Comcast could have very easily said the service is not ‘unlimited’ and simply put a bandwidth cap and charged for overages.”

That is what it has come to.  And what is funny is that instead of the way they were metering traffic, which was to throttle specific (but not all) application traffic for specific users, they’ve put forth a policy that basically will throttle ALL of the heavy users’ traffic and not just their P2P.  Their entire service could more or less get cut off.  NN may have won the application-agnostic battle, but the heavy downloaders could easily be worse off, and the rest of us better off!  Ironic.

“Instead, Comcast went ahead and tricked people while continuing to claim to offer ‘unlimited’ Internet.”

Again, there is this undercurrent that assumes the worst in the carriers, in this case deliberate deceptive advertising, rather than just misleading statements and poor choices of words.  I don’t see it that way, but if it was deliberate then they should be sued or fined for false advertising, however that works.  I’m not aware of anything along this line to date.  And even the FCC concedes that the definition of broadband is tough and has asked for industry and public help and feedback.

“In this case Comcast degraded speed intentionally in order to save money on the total bandwidth they had to purchase.”

No, I see it as improving service for all other users on the same segment while throttling aggressive applications that act like a virus, which I’m certain many would like to see them do (if it indeed was a virus).  At the time, the technology couldn’t really do much better, other than the brute force bandwidth cap tactic now imposed.  We’ll see how DOCSIS 3.0 does with it.  And bottom line, let’s not forget where the Napster thing originated – people committing copyright infringement, a federal crime, by illegally downloading music and movies, the bad side effect being the disruption of broadband service to other subscribers.  Not that the service providers care about the RIAA, but somehow that point (root cause) keeps getting overlooked.

vendor claims Larry Seltzer  –  Dec 28, 2009 4:23 PM PDT

For those complaining about vendor claims of "unlimited" service or about the rated speeds, I think government, working with industry and others, could play a constructive role in standardizing fair language for such things. If ISPs didn't have to one-up each other's marketing, everyone would be better off.

I'll cede the technical feasibility point to Christopher Parente  –  Dec 29, 2009 6:05 PM PDT

I'll cede the technical feasibility point to others more versed. Seems to me, Karl, you were trying to make a rather provocative point — that most businesses today offer tiered service, that Net Neutrality would be an artificial restriction on carriers, and that even if it were implemented, carriers would find ways to create a "floor" that won't satisfy most consumers.

Right or wrong? You're against NN, but also no fan of ISPs, since you write:

The point of this exercise is to suggest that ISPs have a well stocked bag of tricks to induce users to pay more for what we used to get for free from "best effort" services on the internet.

I'm not sure I agree with you re the best effort point. People want more services and cool stuff like video, which is far more taxing than email or surfing. I don't know the numbers, but I'd think the majority of consumers who use VoIP today do so over connections backed by business SLAs.

Thanks for sparking this conversation and happy holidays.

