
Packet Loss: How the Internet Enforces Speed Limits

There’s been a lot of controversy over the FCC’s new Network Neutrality rules. Apart from the really big issues (should there be such rules at all? is reclassification the right way to accomplish it?), one particular point has caught the eye of network engineers everywhere: the statement that packet loss should be published as a performance metric, with the consequent implication that ISPs should strive to achieve as low a value as possible. That would be a very bad thing to do. I’ll give a brief, oversimplified explanation of why; Nicholas Weaver gives more technical details.

Let’s consider a very simple case: a consumer on a phone trying to download an image-laden web page from a typical large site. There’s a big speed mismatch: the site can send much faster than the consumer can receive. What will happen? The best way to see it is by analogy.

Imagine a multilane superhighway, with an exit ramp to a low-speed local road. A lot of cars want to use that exit, but of course it can’t handle as many cars, nor can they drive as fast. Traffic starts building up on the ramp; when a cop sees the backlog, he waves cars past the exit until it has cleared a bit.

Now imagine that every car is really a packet, and a car that can’t get off at that exit because the ramp is full is a dropped packet. What should you do? You could try to build a longer exit ramp, one that will hold more cars, but that only postpones the problem. What’s really necessary is a way to slow down the desired exit rate. Fortunately, on the Internet we can do that, but I have to stretch the analogy a bit further.
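To make the full ramp concrete, here is a minimal sketch, in Python, of what a “tail-drop” queue does. The buffer size and the arrival and drain rates are invented for illustration, and real router queues are more sophisticated, but the shape is the same: once the buffer fills, every additional arrival is simply dropped.

# A toy tail-drop queue: the Internet version of the full exit ramp.
# All numbers are made up for illustration. A fast link delivers 8
# packets per tick, the slow link forwards 5, and once the 20-packet
# buffer is full, each additional arrival is dropped.

BUFFER_SIZE = 20    # how many "cars" fit on the ramp
ARRIVAL_RATE = 8    # packets arriving from the fast link each tick
DRAIN_RATE = 5      # packets the slow link can forward each tick

queue = 0
dropped = 0
for tick in range(10):
    for _ in range(ARRIVAL_RATE):
        if queue < BUFFER_SIZE:
            queue += 1      # room on the ramp: buffer the packet
        else:
            dropped += 1    # ramp is full: the packet is dropped
    queue -= min(queue, DRAIN_RATE)   # the slow link forwards what it can
    print(f"tick {tick}: queue={queue}, dropped so far={dropped}")

Within a handful of ticks the queue is pegged at its limit and three packets are dropped every tick thereafter; doubling BUFFER_SIZE only moves that moment later, which is exactly why the longer exit ramp doesn’t help.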

Let’s now assume that every car is really delivering pizza to some house. When a driver misses the exit, the pizza shop eventually notices and sends out a replacement pizza, one that’s nice and hot. That’s more like the real Internet: web sites notice dropped packets, and retransmit them. You rarely suffer any ill effects from dropped packets, other than lower throughput. But there’s a very important difference here between a smart Internet host and a pizza place: Internet hosts interpret dropped packets as a signal to slow down. That is, the more packets are dropped (or the more cars that are waved past the exit), the slower the new pizzas are sent. Eventually, the sender transmits at exactly the rate at which the exit ramp can handle the traffic. The sender may try to speed up on occasion. If the ramp can now handle the extra traffic, all is well; if not, there are more dropped packets and the sender slows down again. Trying for a zero drop rate simply leads to more congestion; it’s not sustainable. Packet drops are the only way the Internet can match sender and receiver speeds.
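For readers who want to see that slow-down in code, here is an illustrative sketch in the spirit of TCP’s additive-increase/multiplicative-decrease rule: each round trip without a drop, the sender sends one more packet; after a drop, it halves its rate. The capacity figure is invented, and real TCP stacks add slow start, timeouts, fast retransmit, and much more.

# A toy model of the slow-down behavior described above, in the spirit
# of TCP's additive-increase/multiplicative-decrease (AIMD) rule.
# CAPACITY is a made-up stand-in for what the "exit ramp" can carry per
# round trip; in this model, sending more than that triggers a drop.

CAPACITY = 20.0   # packets per round trip the bottleneck can handle

cwnd = 1.0        # congestion window: packets in flight per round trip
for rtt in range(40):
    if cwnd <= CAPACITY:
        cwnd += 1.0   # no drop this round trip: probe for more bandwidth
    else:
        cwnd /= 2.0   # a drop signaled congestion: halve the rate
    print(f"RTT {rtt}: sending {cwnd:.1f} packets")

The output is the classic TCP “sawtooth”: the window climbs past capacity, a drop signals the overshoot, the sender halves its rate, and the cycle repeats, oscillating around a capacity the sender never knew in advance.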

The reality on the Internet is far more complex, of course. I’ll mention only two aspects of it; let it suffice to say that congestion on the net is in many ways worse than a traffic jam. First, you can get this sort of congestion at every “interchange”, that is, at every router along the path. Second, it’s not just your pizzas that are slowed down; it’s all of the other “deliveries” as well.

How serious is this? The Internet was almost stillborn because this problem was not understood until the late 1980s. The network was dying of “congestion collapse” until Van Jacobson and his colleagues realized what was happening and showed how packet drops would solve the problem. It’s that simple and that important, which is why I’m putting it in bold italics: without using packet drops for speed matching, the Internet wouldn’t work at all, for anyone.

Measuring packet drops isn’t a bad idea. Using the rate, in isolation, as a net neutrality metric is not just a bad idea, it’s truly horrific. It would cause exactly the problem that the new rules are intended to solve: low throughput at inter-ISP connections.

By Steven Bellovin, Professor of Computer Science at Columbia University

Bellovin is the co-author of Firewalls and Internet Security: Repelling the Wily Hacker, and holds several patents on cryptographic and network protocols. He has served on many National Research Council study committees, including those on information systems trustworthiness, the privacy implications of authentication technologies, and cybersecurity research needs.


Comments

Comment from Todd Knarr  –  Feb 28, 2015 9:26 PM

I think the intent wasn’t for ISPs to concentrate on packet loss itself but to use it as a proxy for the same thing congestion control uses it for: sufficiency of bandwidth. If an ISP is suffering packet loss at an entrance or exit from its network, that’s an indication that the entrance/exit doesn’t have enough bandwidth to handle the traffic, and the ISP’s response ought to be to increase the available bandwidth there (which results in eliminating the packet loss). I agree with you that trying to treat packet loss (the symptom) by making it go away without treating the lack of bandwidth (the underlying cause of the symptom) is like treating a broken leg by giving the patient enough painkillers that they don’t notice the pain when they walk.
