
Packet Loss: How the Internet Enforces Speed Limits

Steven Bellovin

There's been a lot of controversy over the FCC's new Network Neutrality rules. Apart from the really big issues — should there be such rules at all? Is reclassification the right way to accomplish it? — one particular point has caught the eye of network engineers everywhere: the statement that packet loss should be published as a performance metric, with the consequent implication that ISPs should strive to achieve as low a value as possible. That would be a very bad thing to do. I'll give a brief, oversimplified explanation of why; Nicholas Weaver gives more technical details.

Let's consider a very simple case: a consumer on a phone trying to download an image-laden web page from a typical large site. There's a big speed mismatch: the site can send much faster than the consumer can receive. What will happen? The best way to see it is by analogy.

Imagine a multilane superhighway, with an exit ramp to a low-speed local road. A lot of cars want to use that exit, but of course it can't handle as many cars, nor can they drive as fast. Traffic will start building up on the ramp, until a cop sees the backup and waves cars past the exit until it has cleared a bit.

Now imagine that every car is really a packet, and a car that can't get off at that exit because the ramp is full is a dropped packet. What should you do? You could try to build a longer exit ramp, one that will hold more cars, but that only postpones the problem. What's really necessary is a way to slow down the desired exit rate. Fortunately, on the Internet we can do that, but I have to stretch the analogy a bit further.
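The exit ramp behaves like what network engineers call a tail-drop queue: a bounded buffer that simply discards new arrivals once it is full. Here is a minimal sketch of that idea; the class name, capacity, and arrival pattern are all illustrative, not from the article.

```python
from collections import deque

class ExitRamp:
    """A toy bounded FIFO buffer with tail drop, like a router queue."""

    def __init__(self, capacity):
        self.queue = deque()
        self.capacity = capacity
        self.dropped = 0

    def arrive(self, packet):
        if len(self.queue) >= self.capacity:
            self.dropped += 1        # ramp is full: this packet is lost
            return False
        self.queue.append(packet)
        return True

    def depart(self):
        """The slow local road drains one packet at a time."""
        return self.queue.popleft() if self.queue else None

ramp = ExitRamp(capacity=4)
# Ten packets arrive in a burst, but only one departs for every two arrivals:
for i in range(10):
    ramp.arrive(i)
    if i % 2:
        ramp.depart()
# ramp.dropped is now 2: once arrivals outpace departures, a longer ramp
# (a bigger capacity) only delays the drops, it cannot prevent them.
```

Note that increasing `capacity` just moves the point at which drops begin, which is exactly the "longer exit ramp" non-solution described above.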

Let's now assume that every car is really delivering pizza to some house. When a driver misses the exit, the pizza shop eventually notices and sends out a replacement pizza, one that's nice and hot. That's more like the real Internet: web sites notice dropped packets, and retransmit them. You rarely suffer any ill effects from dropped packets, other than lower throughput. But there's a very important difference here between a smart Internet host and a pizza place: Internet hosts interpret dropped packets as a signal to slow down. That is, the more packets are dropped (or the more cars that are waved past the exit), the slower the new pizzas are sent. Eventually, the sender transmits at exactly the rate at which the exit ramp can handle the traffic. The sender may try to speed up on occasion. If the ramp can now handle the extra traffic, all is well; if not, there are more dropped packets and the sender slows down again. Trying for a zero drop rate simply leads to more congestion; it's not sustainable. Packet drops are the only way the Internet can match sender and receiver speeds.
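The slow-down-on-drops behavior described above is, in real TCP, an additive-increase/multiplicative-decrease (AIMD) loop. Here is a toy simulation of that dynamic against a fixed bottleneck; the function name and the specific numbers are illustrative assumptions, not part of the article or of any real TCP implementation.

```python
def simulate_aimd(capacity, rounds):
    """Additive increase, multiplicative decrease against a fixed bottleneck.

    capacity: the bottleneck (exit-ramp) rate, in packets per round.
    Returns the sender's rate after each round.
    """
    rate = 1.0
    rates = []
    for _ in range(rounds):
        if rate > capacity:
            rate /= 2            # drops observed: cut the rate in half
        else:
            rate += 1            # no drops: cautiously probe for more bandwidth
        rates.append(rate)
    return rates

rates = simulate_aimd(capacity=10, rounds=50)
# The rate settles into a sawtooth just around the bottleneck capacity.
# Crucially, the sender never stops probing, so it keeps briefly exceeding
# capacity and causing drops: some loss is part of normal operation.
```

The sawtooth is the point: the sender discovers the available capacity only by occasionally overshooting it, so a nonzero drop rate is evidence the mechanism is working, not that something is broken.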

The reality on the Internet is far more complex, of course. I'll mention only two aspects of it; let it suffice to say that congestion on the net is in many ways worse than a traffic jam. First, you can get this sort of congestion at every "interchange". Second, it's not just your pizzas that are slowed down, it's all of the other "deliveries" as well.

How serious is this? The Internet was almost stillborn because this problem was not understood until the late 1980s. The network was dying of "congestion collapse" until Van Jacobson and his colleagues realized what was happening and showed how packet drops would solve the problem. It's that simple and that important, which is why I'm putting it in bold italics: without using packet drops for speed matching, the Internet wouldn't work at all, for anyone.

Measuring packet drops isn't a bad idea. Using the rate, in isolation, as a net neutrality metric is not just a bad idea, it's truly horrific. It would cause exactly the problem that the new rules are intended to solve: low throughput at inter-ISP connections.

By Steven Bellovin, Professor of Computer Science at Columbia University


Share your comments

Todd Knarr  –  Feb 28, 2015 1:26 PM PST

I think the intent wasn't for ISPs to concentrate on packet loss itself but to use it as a proxy for the same thing congestion control uses it for: sufficiency of bandwidth. If an ISP is suffering packet loss at an entrance or exit from its network, that's an indication that the entrance/exit doesn't have enough bandwidth to handle the traffic, and the ISP's response ought to be to increase the available bandwidth there (which eliminates the packet loss). I agree with you that trying to treat packet loss (the symptom) by making it go away without treating the lack of bandwidth (the underlying cause) is like treating a broken leg by giving the patient enough painkillers that they don't notice the pain when they walk.

