Introductory Remarks from Innovation ‘08

Here are my opening remarks from Media Access Project’s Innovation ‘08 in Santa Clara this morning. A DVD will be available shortly. This was a lively discussion, with Google and Vuze on the case.

Good morning and welcome. My name is Richard Bennett and I’m a network engineer. I’ve built networking products for 30 years and contributed to a dozen networking standards, including Ethernet and Wi-Fi. I was one of the witnesses at the FCC hearing at Harvard, and I wrote one of the dueling op-eds on net neutrality that ran in the Mercury News the day of the Stanford hearing.

I’m opposed to net neutrality regulations because they foreclose some engineering options that we’re going to need for the Internet to become the one true general-purpose network that links all of us to each other, connects all our devices to all our information, and makes the world a better place. Let me explain.

The neutrality framework doesn’t mesh with technical reality: The Internet is too neutral in some places, and not neutral enough in others.

For one thing, it has an application bias. Tim Wu, the law professor who coined the term network neutrality, admitted this in his original paper: “In a universe of applications, including both latency-sensitive and insensitive applications, it is difficult to regard the IP suite as truly neutral.” The distinction Professor Wu makes between latency-sensitive and insensitive applications isn’t precisely correct: the real distinction is between jitter sensitive and insensitive applications, as I explained to the FCC. VoIP wants its packets to have small but consistent gaps, and file transfer applications simply care about the time between the request for the file and the time the last bit is received. In between, it doesn’t matter if the packets are timed by a metronome or if they arrive in clumps. Jitter is the engineering term for variations in delay.
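To make the distinction concrete, here is a toy sketch of what jitter measures from a receiver's point of view. The packet timings are invented for illustration; nothing here comes from the talk itself.

```python
# Toy illustration: jitter as variation in packet spacing.
# A VoIP sender emits one packet every 20 ms; what the receiver cares about
# is how much the arrival gaps wander around that nominal spacing.

arrival_times_ms = [0.0, 21.0, 39.5, 62.0, 80.0, 99.0]  # hypothetical arrivals

gaps = [b - a for a, b in zip(arrival_times_ms, arrival_times_ms[1:])]
nominal_gap = 20.0  # ms between packets at the sender

jitter = sum(abs(g - nominal_gap) for g in gaps) / len(gaps)  # mean deviation, ms

print(f"arrival gaps: {gaps}")
print(f"mean jitter:  {jitter:.2f} ms")
# A file transfer only cares about sum(gaps) -- the total elapsed time --
# while VoIP cares about each individual deviation.
```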

The IP suite is good for transferring short files, and for doing things that are similar to short file transfers. It’s less good for handling phone calls, video-conferencing, and moving really large files. And it’s especially bad at doing a lot of different kinds of things at the same time.

The Internet’s congestion avoidance mechanism, an afterthought tacked on in the late ‘80s, reduces and increases the rate of TCP streams to match available network resources, but it doesn’t molest UDP at all. So the Internet is not neutral with respect to its two transport protocols.
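For readers who want a picture of that asymmetry, here is a rough sketch, not any particular TCP implementation, of the additive-increase/multiplicative-decrease behavior TCP applies to itself and the complete absence of any back-off in UDP:

```python
# Toy sketch (illustrative only, not any real TCP implementation) of the
# additive-increase/multiplicative-decrease behavior TCP applies to itself,
# versus UDP, which applies no back-off at all.

def tcp_aimd(cwnd, loss_detected, mss=1.0):
    """One round-trip update of a TCP congestion window in congestion avoidance."""
    if loss_detected:
        return max(cwnd / 2.0, mss)   # multiplicative decrease on loss
    return cwnd + mss                 # additive increase per RTT otherwise

cwnd, udp_rate = 10.0, 10.0           # arbitrary starting points
for rtt, loss in enumerate([False, False, True, False, False]):
    cwnd = tcp_aimd(cwnd, loss)       # TCP backs off when the network pushes back
    # udp_rate is untouched: nothing in UDP itself responds to congestion
    print(f"RTT {rtt}: TCP cwnd = {cwnd:4.1f}   UDP rate = {udp_rate:4.1f}")
```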

The Internet also has a location bias. The Internet’s traffic system gives preferential treatment to short communication paths. The technical term is “round-trip time effect.” The shorter your RTT, the faster TCP speeds up and the more traffic you can deliver. That’s why we have content delivery networks like Akamai and the vast Google server farms. Putting the content close to the consumer on a really fast computer gives you an advantage, effectively putting you in the fast lane.
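A rough way to see the RTT effect is the well-known Mathis et al. approximation for steady-state TCP throughput. The numbers below are hypothetical; the 1/RTT scaling is the point.

```python
# Rough illustration of the round-trip time effect using the Mathis et al.
# approximation: TCP throughput ~ (MSS / RTT) * C / sqrt(loss rate).

from math import sqrt

def tcp_throughput_bps(mss_bytes, rtt_s, loss_rate, c=1.22):
    return (mss_bytes * 8 / rtt_s) * (c / sqrt(loss_rate))

mss = 1460          # bytes per segment (typical Ethernet payload)
loss = 0.0001       # 0.01% packet loss, assumed identical on both paths

for label, rtt in [("nearby CDN node, 10 ms RTT", 0.010),
                   ("distant server, 100 ms RTT", 0.100)]:
    print(f"{label}: ~{tcp_throughput_bps(mss, rtt, loss) / 1e6:.0f} Mb/s")
# Ten times the RTT means roughly one tenth the achievable TCP throughput,
# which is the advantage Akamai-style content placement buys.
```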

The Internet is non-neutral with respect to applications and to location, but it’s overly neutral with respect to content, which causes gross inefficiency as we move into the large-scale transfer of HDTV over the Internet. Over-the-air delivery of TV programming moves one copy of each show regardless of the number of people watching, but the Internet transmits one copy per viewer, because the transport system doesn’t know anything about the content. Hit TV shows are viewed by tens of millions of viewers, and their size is increasing as HDTV catches on, so there’s major engineering to do to adapt the Internet to this mission. Until this re-engineering is done, HDTV is trouble for the Internet.
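The back-of-the-envelope arithmetic, with assumed figures for episode size and audience, looks like this:

```python
# Back-of-the-envelope arithmetic (hypothetical numbers) for the "one copy per
# viewer" point: unicast delivery scales with the audience, broadcast does not.

episode_size_gb = 3          # assume ~3 GB for an hour of HDTV
viewers = 10_000_000         # a hit show's audience

unicast_total_gb = episode_size_gb * viewers   # one copy per viewer
broadcast_total_gb = episode_size_gb           # one copy over the air, any audience

print(f"unicast:   {unicast_total_gb / 1e6:.0f} PB moved across the network")
print(f"broadcast: {broadcast_total_gb} GB transmitted, regardless of audience")
```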

Internet traffic follows the model where the more you ask for, the more you get, and when you share resources with others, as we all do on packet networks, that can be a problem if you have a neighbor who wants an awful lot. This is what happens with peer-to-peer, the system designed for transferring very large files, a work-around for the Internet’s content inefficiency.

According to Comcast, a surge in BitTorrent P2P traffic two or three years ago caused the performance of VoIP to tank. They noticed because customers called in and complained that their Vonage and Skype calls were distorted and impaired. Activists accused ISPs of degrading VoIP in order to sell their own phone services, but the problem was actually caused by a sudden increase in traffic. Comcast’s response was to install equipment that limited the delay P2P could inflict on VoIP through various traffic management techniques, such as fair queuing and caps. As the amount of traffic generated by P2P increased, the cable Internet companies escalated their traffic management, ultimately to the Sandvine system that caused the current dust-up. Cable Internet is more vulnerable than DSL and fiber to the delays caused by P2P because the first mile is shared.

This problem is not going to be solved simply by adding bandwidth to the network, any more than the problem of slow web page loading was solved that way in the late ‘90s or the Internet meltdown problem disappeared spontaneously in the ‘80s. What we need to do is engineer a better interface between P2P and the Internet, such that each can share information with the other to find the best way to copy desired content.

Where do we turn when we need enhancements to Internet protocols and the applications that use them? Not to Congress, and not to the FCC. Congress sees the Internet as an election-year ATM and little else. And while the Commission has done a bang-up job in creating the regulations that enabled Wi-Fi and UWB, two of the systems I’ve helped develop, this help had a distinct character: they removed licensing requirements, set standards for transmit power levels and duty cycles, and backed off. They didn’t get into the protocols, format the Beacon, or dictate the aggregation parameters. Industry did that, in forums like the IEEE 802 and the Wi-Fi Alliance. Engineers solve engineering problems.

Presently, the P2P problem is being worked on by the DCIA in its P4P Forum, and in the IETF in the P2P Infrastructure group. P2PI held a meeting last month at MIT, and will most likely meet again in Dublin the last week of July. They have an active e-mail list, and are charitably receiving complaints and suggestions about the best way to handle P2P interaction with the Internet’s core protocols. The system is working, and with any luck these efforts will address some of the unsolved problems in network architecture that have emerged in the last 15 to 20 years.

We need to let the system that has governed the Internet for the last 35 years continue to work. The legislation that has been introduced has been described by one of its sponsors, Rep. Adam Schiff (D, Burbank) as a “blunt instrument” designed to “send a message.” The message that it sends to me is that some people believe that the Internet’s core protocols have reached such a refined level of perfection that we don’t need to improve them any more.

I know that’s not true. The Internet has some real problems today, such as address exhaustion, the transition to IPv6, support for mobile devices and popular video content, and the financing of capacity increases. Network neutrality isn’t one of them.

By Richard Bennett, Consultant

Richard is co-creator of the Ethernet and Wi-Fi standards.

Comments

Informative trinsic  –  Jun 15, 2008 7:42 PM

I agree with this technical analysis. The internet is not neutral when it comes to applications and distances. That said, I think the NN we are talking about is the need for regulations to prevent ISPs and telecoms from doing network management based on content (i.e., censorship based on the agendas of the network providers themselves, as there is a clear conflict of interest there). I think it’s OK for them to do network management for protocol-specific things such as VoIP or HDTV, although the internet is not really designed for the large amounts of bandwidth that kind of content will need to reach the masses. If the world wants an on-demand service like this, there should be a build-out of a separate network for multimedia content that can be hooked in at the last mile and runs parallel to internet traffic. Leave the internet for person-to-person communications, transfers, and the like, as that is what it was designed for in the first place.

Another view: Why Net Neutrality is sub-optimal, but the only acceptable way right now. Sachin Agarwal  –  Jun 15, 2008 8:35 PM

Let us say, for simplicity, that most of us are connected to the best-effort, statistically multiplexed, Internet. What does this mean? This means that every byte on the Internet going from point A to point B will, on average, get the same service from the Internet (same probability of loss, same delay, same delay-jitter, etc.) Therefore the Internet has the tendency to treat each byte traversing it as equal because in our simple example of 2 bytes going from A to B, the fraction of service (or utility) that each byte receives from the Internet is equal.

However, most people agree that the importance, or utility, of every byte on the Internet is not equal. For example, it may be more important to quickly transfer a byte from a VoIP conversation than a byte from a file transfer. Or it may be more important to send bytes that update stock prices than to send bytes that play a YouTube video.

Or so I think. But what do you think? What does Skype think? What does Google think? What does Comcast think? And if they think differently, then whose voice matters? Or should anyone’s voice matter more than the others?

This is the key point of the Net Neutrality conundrum. Everyone agrees that the present design of the best-effort Internet is suboptimal in that it treats every byte as equal and gives equal precedence to equal fractions of content. But the issue with doing away with this net neutrality model is that vested interests will decide which particular byte is more important than another byte. Can we trust one single company, or authority, to make the correct decision on this one? As a market believer, I would first say: let the market decide, i.e., let the price per byte, and hence the value attached to that particular byte, be the deciding factor. But the big issue is whether a flexible, effective, and dynamic market of this sort can be set up and quickly integrated with the existing and upcoming Internet protocols. Until this happens, I am more comfortable with time-tested simple statistical multiplexing, the fair but sub-optimal egalitarian algorithm, to do the job.

I am relieved that the question of Net Neutrality does have a technical solution - set up a market to do the job. I am just concerned whether there is enough political patience to wait for the technology to develop the byte market we will ultimately need.

http://multicodecjukebox.blogspot.com/

Missed the point of net neutrality Anonymous Coward  –  Jun 15, 2008 10:31 PM

While I have only a layman’s understanding of the technical aspects behind network management practices and the needs of different types of software, these concerns are not quite what network neutrality is meant to address.

A quick explanation of net neutrality is this: As it is, individuals pay for access that is priced according to speed. Every website or internet service is then delivered to the individual at the paid-for speed (though of course subject to traffic conditions, distances, etc.). Network neutrality legislation is meant only to ensure that this remains so. It doesn’t necessarily preclude the use of network management practices or tie the internet to its current architecture or protocols. What such legislation is meant to stop is an ISP making a deal to favor certain companies over others. One example is a deal to provide better service to Skype’s VoIP than to Gizmo Project’s VoIP. Another example is a deal with AOL to prioritize their Oscar protocol over other, similar protocols such as XMPP or whatever Yahoo IM uses.

Comcast’s network management techniques are actually a separate question. If Comcast were throttling BitTorrent in order to promote their own Comcast P2P protocol, that would fall more squarely into the realm of net neutrality. As it is, the distaste with Comcast’s practices just comes from people who want to use BitTorrent without restrictions, and was aggravated by Comcast’s attempts to hide the throttling. Net neutrality legislation is meant to be a general protection against monopolies on the internet, not a restriction on network management practices meant to improve service or an attempt to freeze the web’s architecture in place. Lately, however, the Comcast issue is getting more mixed into the net neutrality debate (see the CDT link below), so the field is changing. On reading your article, I suggest that you take a stance in favor of net neutrality’s anti-monopoly policy, but with the warning that ISPs must be allowed to manage their networks (purely to improve service) and that the internet cannot be frozen in place. For a more in-depth explanation, see Public Knowledge (PK lists the Comcast issue separately under its “issues” tab), http://cdt.org/publications/policyposts/2008/7.

Comcast sent fraudulent packets: it did not "throttle" Mike O'Donnell  –  Jun 16, 2008 3:41 AM

I generally agree with the attitude of this particular Anonymous Coward. But I think that it is more important to learn precisely what is and isn't going on on the Internet than to affirm or deny particular attitudes. So, I'll pick at one statement: "the distaste with Comcast's practices just comes from people who want to use bittorrent without restrictions and was aggravated by Comcast's attempts to hide the throttling"

This is probably an accurate description of a widely held view regarding Comcast's recent behavior. It is technically and ethically a very misleading description. Comcast transmitted fraudulent IP packets purporting to come from its customers, and delivered fraudulent IP packets to its customers. In some cases, many of which appear to have been bittorrent communications, Comcast created IP packets with "From" addresses that it had assigned to its own customers, rather than addresses correctly associated with its own servers. The contents of those packets indicated that the customers were terminating a sequence of communications. They forwarded these fraudulent packets to locations with which their customers were communicating. Comcast also created similar packets with "From" addresses associated with its customers' correspondents, telling the customers that their correspondents were terminating a sequence of communications.

These fraudulent packets in many cases had the effect of slowing down bittorrent traffic. But "throttling" by a router refers more properly to dropping packets, not to tricking senders into thinking that their recipients are no longer available.

Years ago, when telephones had party lines, it was possible to have an operator request that one party relinquish a line in favor of a more urgent call. Leaving aside the question of the judgment of relative urgency, suppose that instead the operator was able to imitate the voices of two people in a conversation, telling each in the voice of the other, "goodbye." That's the sort of thing that Comcast was doing to conversations involving its customers. It was impersonating both parties in the conversation to induce them both to terminate the conversation.

So, the question of "net neutrality" should not have even arisen in the evaluation of Comcast's behavior. They were guilty at the least of gross dishonesty, probably of violation of their agreement to provide Internet service to their customers (creating fraudulent packets is very clearly not a legitimate part of Internet service), and possibly of wire fraud (I don't know enough about the law to even make a good guess about this).

Cheerio,

Mike O'Donnell
http://people.cs.uchicago.edu/~odonnell

Let's check the technical statements carefully Mike O'Donnell  –  Jun 16, 2008 3:16 AM

This is a very useful article as a starting point for thinking. But that thinking should lead us to examine, and in some cases contradict, some of the statements that it presents as obvious.

For those who worry: I’m not in favor of net neutrality legislation. I am in favor of getting as many people as possible to understand what’s really at stake in various technical decisions.

My first claim to pick apart.

“Over-the-air delivery of TV programming moves one copy of each show regardless of the number of people watching, but the Internet transmits one copy per viewer,”

The notion that radio/television broadcast moves “one copy” is based on a very strong assumption that transmission may only be localized in time, frequency, and geography. In a more basic sense, “Over-the-air delivery of TV programming” moves a separate copy of each show to every point within the broadcast area, whether or not there is any receiver at that point. How many copies that amounts to depends entirely on the resolution with which we partition the broadcast area.

There is surely a sensible way to compare radiated broadcast to multicast with time-shift over IP. It might be something like comparing the total energy expended in the multicast. Clearly, the Internet will win for sufficiently small sets of viewers sufficiently dispersed in space and/or time. Radiated broadcast will win for sufficiently large sets of viewers sufficiently concentrated in space and all viewing at the same time. It would be very interesting to know something about the break-even points, and something about the magnitude of the differences under various assumptions.

But the comparison quoted above compares two different definitions of “copy” rather than the actual performance of broadcast television vs. the Internet.

Cheerio,

Mike O’Donnell
http://people.cs.uchicago.edu/~odonnell

Is jitter really more important than latency? Mike O'Donnell  –  Jun 16, 2008 3:55 AM

Another technical point to pick at:

“the real distinction is between jitter sensitive and insensitive applications,”

I have heard concerns about jitter before, and have studied them rather carefully. I am not at all confident that they are wrong, but I have never heard a convincing explanation that they are right.

Given sufficient buffer space at the receiver (and low enough jitter in its access to the buffers, of course), jitter may always be eliminated in favor of higher latency. So, it appears that the real problem is that there is a probabilistic distribution of latencies. The maximum acceptable latency for an application must be longer than the point in the distribution where the maximum acceptable failure probability comes. That sounds to me like something the network engineer should treat as a probabilistic latency problem, not as a jitter problem.

Are there important receivers that just can’t be given enough buffer space? I’m not aware of them, and I’m inclined to think that the Internet will be better if it assumes substantial memory at its receivers, but I’d like to see reasons that it shouldn’t.

I have left off the possibility that the signals are used for timing purposes. I haven’t totally proved to myself that there mightn’t be substantial value in controlling jitter to the point that packet delivery can be used for timing. This arises some in MIDI networking, but the latency is also a problem there. Again, it seems that the natural assumption is that the Internet can produce the greatest total value by favoring high throughput first, low latency second, and only worrying about jitter as a component of the probabilistic description of latency.

So, I hope someone will post real reasons to worry about jitter that don’t assume that current limitations on software remain fixed.

Cheerio,

Mike O’Donnell
http://people.cs.uchicago.edu/~odonnell

Latency is network buffering Mike O'Donnell  –  Jun 16, 2008 4:00 AM

So, from another point of view, a communication link with high latency and low jitter is actually performing the buffering within the network. It's a lot of fun (which I'll leave to someone else) to calculate how many bits might be simultaneously in transit on an optical fiber from the Atlantic to the Pacific. So, if we should attack jitter while allowing latency to remain high, it appears that what we are doing is asking the network to perform buffering for us. Under what circumstances is that a really good idea? I can't think of any, but I don't take that as evidence that there are none.

VOIP is both latency sensitive wolfkeeper  –  Jun 16, 2008 2:56 PM

VoIP is both latency sensitive and jitter sensitive. As latency goes above about 100 ms, perceived quality plummets and you tend to get a lot of echoes, people talking over each other, and so forth. VoIP implementations generally avoid TCP partly for this reason, and use UDP and similar.

Distinguish network jitter from presentation jitter Mike O'Donnell  –  Jun 16, 2008 4:33 PM

The presentation of a sound signal to the ear is very sensitive to jitter in the transmission of the sound wave. But that doesn't imply that VoIP network services are inherently sensitive to jitter in the network. Assuming sufficient buffer space, network jitter can be removed from the audible presentation, at the cost of additional latency. Furthermore, since VoIP tolerates a certain amount of data loss with graceful degradation, rather than total failure, the latency can be limited at the cost of some data loss.

Let's choose a latency that appears to be acceptable in a voice conversation; I'll stipulate 50 ms. A VoIP receiver can buffer 50 ms worth of packets before presenting the results to the ear. Packets that take longer than 50 ms from sender to receiver can be dropped. The whole system is successful if the number and time-distribution of lost packets produces results tolerable to the listener. Yes, I know that the receiver doesn't have a precise knowledge of the age of each packet, but the behavior described above can be approximated pretty well using information available to the receiver. That is, the receiver can put a hard limit on presentation jitter, postulate a certain amount of buffering that appears to provide acceptable latency, and then performance degradation is all in the form of data loss.

I am aware that VoIP usually uses UDP instead of TCP. I'm pretty sure that is because TCP delays all packets to ensure that nothing is dropped, which defeats the strategy that I described above. This has more to do with data-loss tolerance than with jitter sensitivity.
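A minimal sketch of that playout-buffer strategy, with invented packet timings and not taken from any deployed VoIP stack, looks like this:

```python
# Minimal sketch of the playout-buffer strategy described above: hold packets
# for a fixed budget (50 ms here, as stipulated), play them on a steady clock,
# and drop anything that arrives after its slot. Numbers are illustrative.

PLAYOUT_DELAY_MS = 50.0
PACKET_INTERVAL_MS = 20.0   # one voice frame every 20 ms from the sender

def playout(packets):
    """packets: list of (sequence_number, arrival_time_ms), send time = seq * 20 ms."""
    played, dropped = [], []
    for seq, arrival in packets:
        deadline = seq * PACKET_INTERVAL_MS + PLAYOUT_DELAY_MS
        (played if arrival <= deadline else dropped).append(seq)
    return played, dropped

# Packet 2 is delayed well past its send time and misses the 50 ms budget.
trace = [(0, 12.0), (1, 35.0), (2, 120.0), (3, 71.0), (4, 95.0)]
played, dropped = playout(trace)
print(f"played: {played}, dropped: {dropped}")
# Network jitter never reaches the listener's ear; it shows up only as a
# fixed 50 ms of added latency plus occasional lost frames.
```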

Not all apps are jitter-sensitive Richard Bennett  –  Jun 16, 2008 6:02 PM

My comments about jitter were a response to some things one of the other panelists had written, to the effect that all apps care about latency. While it's true that we all want all our stuff to run fast, it's a fact that P2P can tolerate a lot more jitter than VoIP and still be successful. VoIP uses UDP because it doesn't want packets re-transmitted if a congestion or noise condition causes them to be dropped. VoIP loses interest in any packet that isn't delivered in 100ms or is delivered out-of-order. File transfer apps have a whole different definition of success.

I still don't see the jitter intolerance, only latency intolerance Mike O'Donnell  –  Jun 16, 2008 8:53 PM

I won't be offended if you get bored with this little point and drop it. But I don't want to leave the impression that things are settled in any simple way.

"P2P can tolerate a lot more jitter than VoIP and still be successful."

I haven't seen any support at all for this claim. Rather, I have seen a lot of support for the claim that file transfer (I think that's what you mean by P2P; the distributed aspect of the transfer doesn't appear to affect this particular point a lot) can tolerate a lot more LATENCY than VoIP and still be successful. Sure, it's a probabilistic notion of latency, not an absolute one. VoIP needs for the probability of a latency greater than, let's say 100 ms, to be low. The 100 ms is determined by human perception and cognition applied to audible conversation. The low probability is a bit more complicated and depends on the encoding, aggregation into packets, and the total bit rate at the source (since we can lose more packets if the number of bits sent is sufficiently liberal). There is also a difference between an independent chance of loss for each packet sent vs. a pattern of loss, particularly a substantial interval of packet loss. But none of this affects the basic fact that VoIP needs a low latency at high probability.

I don't know a precise quantitative definition of network "jitter," but it surely has a lot to do with the variance of the latency distribution. Since latency is always >0, there is a correlation between jitter and latency at a given confidence level, but it seems totally misleading to focus on the variance itself. For example, an engineering change that reduced variance drastically, while increasing the mean latency, could ruin VoIP. So, to the extent that latency and jitter are independent, it appears to be always the latency that VoIP finds hard to tolerate, not the jitter.

"VoIP loses interest in any packet that isn't delivered in 100ms or is delivered out-of-order"

That's probably true of current implementations, but not at all essential. With sufficient processing speed (well within current reach), a VoIP receiver could reorder packets within the 100 ms window. In some environments, it will even be better to aggregate samples so that a single packet doesn't contain contiguous samples, in order to make the fidelity degradation from packet loss more graceful.

Caveat: I have no special knowledge of the VoIP software deployed today. Rather, I've reasoned out how it should work from basic principles of acoustics and networking.

Jittering thoughts Nicolau Werneck  –  Jun 16, 2008 6:37 PM

I hope we don't turn this debate around a strange word ("neutrality") into a debate around another strange word ("jitter"). To the listener of VoIP, there is no jitter. It is important to realize that, because some people know "jitter" from arguments about CDs and digital audio versus analog media. It is not the same "waveform" jitter at all.

I would just like to rephrase some of the things Mr. O'Donnell said. Jitter in packet transmission means a variable delay for each packet. To play the packets out continuously, a buffer is needed. Packets that arrive too late are simply discarded. What the user receives is a continuous flow of packets, with some of them missing from time to time. From time to time, also, the program might need to change its estimate of the delay it should impose (the buffer size).

It is not easy to say how much jitter, delay, or transmission rate we need for VoIP. It is a complicated compromise. You must take into account the codec used, the bitrate used in the encoding (example: 10 LPC or 12 LPC), and the delay. Beyond that, you might need to consider different kinds of latency variation happening in the network; it is not always just a simple memoryless distribution around a mean. These are all current research topics in electrical engineering schools (et alii). We should also mention something interesting: frequently, to ensure a certain low delay in a transmission, one has to allocate a "band" over the network, possibly missing out on opportunities to transmit some packets. Sometimes there is an interesting compromise between delay and effective transmission rate; e.g., we allocate what looks like a constant 10 kbps channel to transmit just 5 kbps with more quality.

Now, where do the ISPs and net neutrality come into play? ISPs want to do just what telephone companies do: they want to tell their users they have access to a great service, a telephone line that works any time with a guaranteed QoS (a maximum delay and minimum bitrate), but the reality is that they multiplex all calls onto a line that has much less capacity than would be needed to serve all users. The actual network is smaller than it seems to be in the advertising.

On the Internet, there are people who want to transmit all the time (e.g., BitTorrent users). They are just like telephone users who would use the telephone lines all the time. There are also some people who want to host a web server at home, for example. ISPs don't like them because their "strange" usage profile forces the ISPs to work, and they are lazy. ISPs want all users to be sheep who use just VoIP and IM from time to time, so they can spend the minimum while advertising "100 Mbps" lines. So when users start to use BitTorrent, they get angry, saying the users are ruining the net.

What must be done is to recognize what people want, and give them that service. We must improve the Internet with IPv6 and QoS services. We need more lines, we need better management. The danger here is ISPs hijacking the Internet for users of VoIP, web pages, IM, and e-mail (what Mr. Bennett calls "transmission of short files"), telling the "power users" they don't belong there. They are making a choice about how they think the Internet should look. They don't want to pursue a project of how it should be, for example by implementing IPv6. They want to do as little work as they can while making as much money as they can.

"Net neutrality" means the net should be there for everyone, regardless of what they want to do, regardless of the size of the shoe they wear, regardless of their being different, which is what Mr. Bennett rather confusingly called "non-neutrality". ISPs must not "neutralize" out BitTorrent users. What we should do is improve the way the net works so that people with larger feet don't step on others. For that, the companies and all the organizations that manage the Internet must reach an agreement about how they will do it, and not just go on prohibiting people from walking outside the yellow line.

And I really don't understand why the IP suite would be "unfit" for all of that. IP can be used for VoIP, streaming, and all the rest. It _is_ used. The network must be MANAGED to do it. It is true that the IP suite doesn't have protocols for managing QoS, but that can be complemented. Mr. Bennett gives the impression that the Internet "is not fit" for BitTorrent plus VoIP, but that is not a consequence of the use of the IP suite; it is not an inherent flaw in the IP suite's design. It is a consequence of how the network was built. He makes it look like BitTorrent users are wrong because they are going against principles in IP's design, and that is plain wrong. He does give the impression of being concerned with IPv6, and that is my fight too. I want it to make life easier for people using BitTorrent and VoIP and whatever else they want. So what must be done? Do we spank the geek users until IPv6 comes, which may never happen if nobody seems to need it badly, or do we let people push the state of the art, and kick the large sitting butts of the rich companies?

I had a similar argument the other day with some friends. They were sad because they like to keep their wi-fi routers open, to be nice to neighbors who might be in need of a wi-fi connection. But they found some people "taking over" their connections, using all the bandwidth and exhausting the number of TCP connections the NAT could handle. They wanted to be nice and were exploited by nasty folks. They said these people were bad, shouldn't do what they did, and should be ashamed. I disagree. Of course those people are wrong, but they are not the ones to blame. The guilty ones are the companies, especially the router manufacturers, who didn't create an easy way to restrict the resources shared, who didn't come up with engineering solutions. They sell us crappy routers, and leave us, poor users, fighting each other for the rotten fish.

Word neutrality Nicolau Werneck  –  Jun 16, 2008 5:32 AM

This article is political, although seemingly technical. It is one of those texts that makes a lot of implicit statements, and so is hard to counter without seeming boring and dumb. But I am always boring anyway, so let’s do it.

I believe there is a misunderstanding of ideas going on here. The debates are happening around this term “net neutrality”, and “neutrality”, but I feel each person is using the word in whatever way he finds most convenient. What we must do is get away from these slogans and talk technically.

As I understand it, the first concern is regarding _access_ to services. The author of this article talks a lot about quality of service (QoS), which can certainly be seen as a concern with access in a wider sense, but it seems an extremely naïve concern compared to the true subject: the possibility of ISPs completely blocking users from certain services or activities, not necessarily because of administrative needs.

For example, people are concerned with ISPs charging extra for accessing YouTube, for reading Slashdot and CircleID, and, worse, charging extra to let you post comments to blogs or host a web server in your house. Think of it as the telephone company charging you extra to let you put an answering machine, or an old-time BBS, on your home telephone line. In fact, it is already very difficult to have a web server in your home, especially because of the use of NAT, and the idea that ISPs only sell the service like this because it is the only engineering solution, and not because it happens to serve their other interests, is very questionable.

ISPs must provide data transmission. Connections of X kbps with Y seconds delay, to whoever the user wants. They shouldn’t restrict the use of the service because “bittorrent and Quake III (I’m that old) are interfering with VoIP”. What we need is to develop the QoS protocols, and implement IPv6.

The author talked only partially about how QoS can be managed in a network. His first omission was implying that jitter is the “real issue” in VoIP. That is quite inaccurate. Jitter _translates into latency_ when the packets are fed to a buffer. The more delay, the less the jitter bothers you (the fewer packets are lost). A connection with zero jitter but a low transmission rate is not perfect for VoIP; a larger rate with some jitter (i.e., packet loss) can be more acceptable.

BitTorrent, on the other hand, is “latency-insensitive”. You leave the (free, non-copyrighted, of course) files downloading, and don’t care about them arriving immediately. It’s not like a word someone is speaking in VoIP, or a soccer match.

The whole problem ISPs have is that they must “guess” when the user wants just a certain transmission rate (with any jitter level), and when they want more quality. Letting users select QoS is something people talk about, but it is unknown to the user, and it is also something the Internet as a whole is far from ready to handle.

More quality means a more expensive network, and because of that ISPs are always complaining. They have a certain infrastructure, and want at the same time to tell people they can use VoIP with no problems and that they can transfer big files very quickly. But there is a limit, of course! Instead of recognizing that, and telling the truth to consumers, they simply beat up the users, saying it’s all their fault. It is something like telephone companies promising that people can call anywhere anytime, and then later complaining that people are calling too much. (This problem happens sometimes, as in disasters, when you can’t get a line.)

The author doesn’t make clear what deficiencies of the “IP suite” must be solved so that QoS becomes a reality and we can make the life of network administrators easier, with BitTorrent, Skype, and browsers living happily ever after.

I sometimes got the impression from this text that the protocols are all that must be discussed, but there is a whole world of network management decisions to be made outside of that. IPv6 will be great; I can’t wait for it, for multicasting, and for QoS protocols to create properly charged TCP connections with low jitter. I’m especially eager for everyone to have real IP addresses, just as today we all have telephone numbers.

But the development of such technologies is a separate problem from specifying what ISPs can do. Today the “non-neutrality” the author mentions in this text is being used as an excuse for doing things like restricting the use of P2P. We must make it explicit that these are two separate things: network administration and engineering problems on one side, and the desire to control people’s lives for money on the other.

So, I do agree with the author if he is saying that we can’t just go blind to the problems in the networks today. But we can’t treat ISPs as poor victims of their users. They are smart; they want you to give them your hand so they can try to eat your arm.

I would very much like to know how many ISPs care about deploying IPv6 today…

Partial Solution to Multiple Copies Problem Alex Westphal  –  Jun 16, 2008 8:12 AM

The issue of a single file being downloaded thousands or millions of times has already been partially addressed by using mirrors and caching. The problem with these methods is that unwanted versions of packets are often received, causing further problems to the network and end users. What we need is a more intelligent caching mechanism where packets can be marked as cache-able for movies or TV shows and not cache-able for VOIP or IM.

Even with this type of system in place, these problems will persist. The real problem is ISPs overselling their networks. Phone, power, and other utility companies aren’t allowed to sell x number of people y amount of service, then give them only half of y because their systems can’t handle y.

ISPs need to either sell only what they can provide or upgrade their systems to provide what they sell. Governments should enforce this and regulate internet services just as they do all other products.

Technical solutions vs rent seeking. John Lopez  –  Jun 16, 2008 6:32 PM

I agree with the technical analysis. The problem is that there are two issues with one name being attached: the first is network management and innovation, which would be a horrible thing to curtail. The second is rent seeking based on non-technical issues.

This latter issue arises because there are large companies that act as the conduit for most packets that travel the Internet, and they see the possibility of affecting (either delaying or accelerating) packets based on *source* as an additional revenue stream. The problems with this rent seeking should be obvious to anyone who has enjoyed the massive innovation that rides on top of the technical innovation of the networking itself.

Of course, it is politically difficult to say up front that you plan on extortion as a revenue stream (“wouldn’t it be a shame if something happened to your packets there” is unlikely to be spoken) but nonetheless the same *impact* can be achieved by couching network management “solutions” properly, especially when many of the incumbents are strongly motivated to see some of the new Internet based services fail (seeing these services as direct competitors to their other core businesses).

While Mr. Bennett brings up Milton Scritsmier  –  Jun 16, 2008 8:18 PM

While Mr. Bennett brings up valid technical reasons why various internet protocols conflict when bandwidth becomes limited, I disagree with his flat assertion that increased bandwidth is not the answer. I know because my own VOIP and file transfer conflicts ended when I paid for increased bandwidth. And the truth is that all these protocols coexisted quite happily a few years ago when there was simply less of each and the existing bandwidth was not overtaxed.

Mr. Bennett does make a valid point: The internet is not good at optimizing its existing capacity with all these different protocols. But with cable companies let us remember that they have plenty of bandwidth that could be devoted to the internet. They simply have made the choice to reserve that capacity for what they view as their primary business: cable TV, whether digital or analog. If they saw the internet as their primary business, we would not be having this discussion right now. Perhaps it is time for the cable companies to see that the interests and concerns of their customers are starting to change before their competitors do.

The latest tack by cable companies is to put a monthly cap on internet usage. Is this not simply just another way of solving the problem by using bandwidth? Instead of increasing bandwidth to match increasing usage, it solves the problem by limiting usage to the point where everything works again with the same old bandwidth. According to Mr. Bennett’s technical analysis, this strategy could never work, something I doubt he really believes.

Japan and Korea have congestion problems Richard Bennett  –  Jun 17, 2008 12:29 AM

One proof of the assertion that adding bandwidth to Internet access links doesn’t solve the congestion problem is the experience of countries that have already tried that experiment, such as Japan and Korea (that’s two proofs, but who’s counting?).

Adding bi-directional capacity in the first mile simply shifts congestion from that portion of the Internet access network one step closer to the core, onto the regional network inside the ISP. Japan’s traffic studies show that their Internet links run at 95% of capacity much of the time, a completely unhealthy scenario. And further analysis shows that a small minority of customers running a specific set of P2P applications is responsible for the bulk of the load. So Japan has had to institute bandwidth rationing for P2P simply to preserve bandwidth on its 100 Mb/s and 1 Gb/s networks.

For years we’ve been told that adding symmetric capacity to the Internet and its access networks would obviate the need for rationing and QoS, but that’s simply a myth. The more capacity you have for getting stuff off the Internet, the more you also have for putting stuff on it.

It’s no accident that the people who’ve been yelling the loudest for high-speed symmetric links are lawyers, not engineers.

What do we mean by "adding bandwidth"? Mike O'Donnell  –  Jun 17, 2008 12:59 AM

Now, I'm very confused about what you are claiming, so I despair of understanding well enough to get insight regarding the claims.

From the article: "This problem is not going to be solved simply by adding bandwidth to the network"

From your comment above: "Adding bi-directional capacity in the first mile simply shifts congestion from that portion of the Internet access network one step closer to the core, onto the regional network inside the ISP"

I never imagined that we were talking about adding bandwidth only to the last mile. That's not at all what I understand from "adding bandwidth to the network." Sure, we can all calculate that a bottleneck is likely to be annoying. Many of us (at least I) suppose that "adding bandwidth to the network" as a "solution" (though I'm not happy presenting this as a "solution" to a "problem" rather than an expansion of a service) means adding it in a rationally balanced way.

Cheerio,

Mike O'Donnell

The arithmetic is pretty simple Richard Bennett  –  Jun 17, 2008 1:05 AM

The demand for bandwidth comes from the network edge, Mike, so adding bandwidth at the edges simply increases supply and demand in equal parts. That doesn't actually affect congestion at all. What you have to do to reduce congestion is to add bandwidth in such a way that it's not immediately consumed, and that turns out to be an impossible task. The demand for bandwidth on a shared-pipe network always exceeds supply, so the real engineering task is to apportion it fairly.

High throughput may be a better goal than low congestion Mike O'Donnell  –  Jun 17, 2008 1:56 AM

"What you have to do to reduce congestion is to add bandwidth in such a way that it's not immediately consumed, and that turns out to be an impossible task." I'm not convinced that reducing congestion is, by itself, a worthwhile goal. To "add bandwidth in such a way that it's not immediately consumed" sounds a lot like adding bandwidth that's not used. I understand why the presence at most times of extra capacity can be valuable (nobody would be happy with 100% utilization of cars). But I need a lot of careful analysis, based on total value provided, to be convinced that a network that is exploited near capacity is worse than one with excess capacity. If the excess capacity provides a margin beyond valuable demands, that's good. But the most likely way to maintain excess capacity in the network is to impose bottlenecks destroying value outside the network. That doesn't really help the world. I didn't understand the article at first to be about congestion. It appears to me that, from the point of view of protocol design, it is wrong to try to eliminate congestion. Congestion on the network causes dropped packets, so it has a structure completely different from congestion on automobile highways, air routes, sea routes, train tracks. If input to the network exceeds capacity, then packets will be dropped. I don't see any reason to get emotional about that. As an engineer, I feel a responsibility to minimize the loss. I am very pessimistic about comparing the value of packets, so I concentrate on minimizing the number dropped. So, I distinguish "capacity drops," which are inevitable at a given network capacity, from "congestion drops," which could have been avoided by an omniscient and omnipotent global router. I don't see any point in losing sleep over the capacity drops as a protocol designer (I'll lose that sleep as a link provisioner). But I should look for ways to keep congestion drops low. Roghly, a congestion drop occurs when packet A causes packet B to be dropped, and then gets dropped itself, while B could have been delivered had it survived the collision with A. As you observed, the quick fix to TCP in the late 80s didn't address congestion very well. But I'm not convinced that rationing or reservation is the next line of defense to bring in. Rather, simple traffic-based throttling might be the better line. I got my ideas from Michael Greenwald (once at U Penn), who proposed a system for controlling congestion at routers, called AHBHA. Alas, I am not aware of a proper publication of the work, and I do not know where Mr. Greenwald is now. I cooked up the classification of drops to explain AHBHA to students, and you can find it in slides (with notes) numbers 82-103 at http://people.cs.uchicago.edu/~odonnell/Teacher/Courses/Strategic_Internet/Slides/ in case anyone has the fortitude to read through it. I *think* that the hardest design problem is aligning incentive with authority and knowledge. E.g., a sender of packets doesn't necessarily have a strong incentive to avoid crowing out other traffic, and in some cases (of DoS attacks by flooding) is particularly intent to do so. But the real problem in getting anything done is deployment. Fiddles to TCP were easy to deploy in the 80s. "The demand for bandwidth on a shared-pipe network always exceeds supply, so the real engineering task is to apportion it fairly." Perhaps the real engineering task is to provide as much of it as is cost-effective, and make it possible for a social system to apportion it productively and/or satisfactorily. 
I have very serious doubts that "fairly" is ever well defined except by one person's individual view (another post above treats the polymorphism of "fairness.") Cheerio, Mike O'D.

I think you are looking Milton Scritsmier  –  Jun 17, 2008 2:22 AM

I think you are looking at it from a strictly engineering perspective. From an economic perspective, if you want to keep people from using increased bandwidth at the central core, simply charge more for it. The pricing structure doesn't have to be linear; you can certainly charge the heavy users on a progressive scale until they back off. Just don't mess with how they use their bandwidth, that's all.

My empirical experience is that demand is not infinite at all times, nor does it automatically always equal supply. I've certainly had better luck with downloads late at night than during lunchtime. This leaves me confused. How do you reconcile this common experience with your assertion that demand will always equal the supply? If there were suddenly a million or a billion times more bandwidth available on the internet, do you think all of it would be used? At some point, of course not. So it's not really an immutable physical law that demand will equal supply. It's really about human behavior. It's simply that right now we are at a point where the actual demand does greatly exceed supply, so it appears that supply will always be used up. And this is just a classic economics problem where experience shows it is best handled by pricing, not by arbitrary rules that only serve to distort the market.

It amazes me that here we have a market where the users are clamoring for more. It's a huge opportunity for somebody to fill that demand and make a big profit. And yet we are talking as if this is the worst thing that can possibly happen, and that consumers must be made to suffer. Does that make any sense?

Cost-benefit analysis for "QoS" Mike O'Donnell  –  Jun 17, 2008 1:28 AM

"For years we've been told that adding symmetric capacity to the Internet and its access networks would obviate the need for rationing and QoS, but that's simply a myth." Oh dear. It's hard to get a grip on statements that contradict other postulated, but unquoted, statements by whoever it is who has "told" us. One could equally accurately (and equally fruitlessly) say that, "for years we've been told that the Internet needs QoS services, without any real analysis explaining why." I am only aware of one carefully reasoned study that analyzes the potential benefit of QoS support (which the authors and I understand will boil down to a reservation system): "Best-effort versus reservation: a simple comparative analysis", Lee Breslau & Scott Shenker, ACM SIGCOMM 1998. I heard Mr. Shenker speak for an hour on the analysis. I recall that he favors reservations, but found the analysis did not give it any strong support. The analysis depends unavoidably on the statistical mix of demands on the network, so it cannot give a "yes" or "no" answer---rather it can demonstrate the sorts of conditions that will make reservations worthwhile or not. I found Breslau/Shenker's "simple ... analysis" a bit complex, so I simplified further for a class that I taught. If you care enough to dig through it, you may find my analysis on slides 104-119 of the class material (the count comes from the "slides plus notes" version): http://people.cs.uchicago.edu/~odonnell/Teacher/Courses/Strategic_Internet/Slides/ The essence of the result is that reservations improve the total value of network operations when there are applications that derive a value from delivery of n packets that grows superlinearly in n. The essence of the reason is that reservations do not produce more service, or an overall higher reliability. They merely rearrange the granularity of failure. It is very easy to fallaciously compare the reliability of communication with a reservation to that without the reservation. For a meaningful analysis we must include the possibility of failure to obtain the reservation at all. If there is something worth reserving, then almost certainly the reservation cannot be provided every time that it is desired. My analysis, and as well as I remember, Breslau/Shenker's as well, fall apart when you include the mechanism for charging for reservations. If reservations help, then almost certainly there must be a charge, or some other disincentive to cause some self-selection in requests for reservations. But, the ways in which the models fail appear to harm the case for reservations, rather than helping it. I will be delighted to learn of other analyses, and/or arguments why QoS is NOT properly provided by a system of reservations. An analysis that includes [dis]incentive mechanisms will be particularly interesting. Based on my understanding to date, I do not favor reservations at the basic IP level. One may certainly provide them out of band, and there are clearly cases where that's a win. Every private network constitutes a reservation. But I am not confident in my decision about the IP level and would like to see some more analysis, whether mathematical or experimental. Cheerio, Mike O'Donnell

Reservations aren't the only form of QoS, actually Richard Bennett  –  Jun 17, 2008 1:54 AM

You're a bit off the track here. The previous question was about adding bandwidth to eliminate congestion. Before we go off on this tangent, do you agree with my arithmetic on that point? The QoS issue is complicated by the assumption you make connecting my statement about "rationing and QoS" to "reservations," a bit of an odd leap. The IETF has pretty well abandoned reservation-based QoS in favor of prioritized QoS, such as DSCP. DSCP simply re-orders transmit queues, it doesn't make end-to-end reservations. In fact it does redistribute latency, which was the earlier point on VoIP vs. file transfers. But while latency above 100 ms is failure in interactive voice, it's not for file transfer. Interactive delays, you see, are not infinitely extensible. One of the requirements of a good phone call is the ability to cut in, and another is the ability to reply quickly. File transfer has no such requirement.
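As a rough illustration, and not a router implementation, prioritized QoS at a single hop amounts to something like the sketch below, with the EF codepoint standing in for voice traffic:

```python
# Illustrative sketch of what priority-based QoS like DSCP amounts to at a
# single hop: the transmit queue is reordered so marked packets go first,
# but nothing is reserved end to end.

from collections import deque

EF = 46   # "expedited forwarding" DSCP codepoint, commonly used for voice
BE = 0    # best effort

class PriorityTransmitQueue:
    def __init__(self):
        self.high = deque()   # EF-marked packets (e.g. VoIP frames)
        self.low = deque()    # everything else (e.g. file transfer chunks)

    def enqueue(self, packet, dscp):
        (self.high if dscp == EF else self.low).append(packet)

    def dequeue(self):
        # Strict priority: drain the voice queue before touching bulk traffic.
        if self.high:
            return self.high.popleft()
        if self.low:
            return self.low.popleft()
        return None

q = PriorityTransmitQueue()
q.enqueue("file-chunk-1", BE)
q.enqueue("voip-frame-1", EF)
q.enqueue("file-chunk-2", BE)
print([q.dequeue() for _ in range(3)])   # the voice frame jumps the queue
```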

Congestion above, QoS comment here Mike O'Donnell  –  Jun 17, 2008 2:40 AM

"The previous question was about adding bandwidth to eliminate congestion. Before we go off on this tangent, do you agree with my arithmetic on that point?" Sure, I did the exercise in Peterson & Davie, and I understand how increased capacity near edges tends to increase congestion at centralish bottlenecks. I just posted above about my doubts whether reducing congestion is a good focus of attention here (vs. maximizing throughput). "The QoS issue is complicated by the assumption you make connecting my statement about "rationing and QoS" to "reservations," a bit of an odd leap." Not my leap. I got it from Breslau & Shenker, but I found their reasoning more step-like than leap-like. But I think they were assuming that the user of a QoS feature requires good knowledge of the probability of failure, independent of other load. Priorities don't deliver that. Reservations of capacity do. I started hearing about "QoS" many years ago, and it seems to have changed a bit since then. "The IETF has pretty well abandoned reservation-based QoS in favor of prioritized QoS, such as DSCP. DSCP simply re-orders transmit queues, it doesn't make end-to-end reservations." I understand priorities, and I'm just surprised that they are now being called "QoS." That seems like quite a leap, since priority can usually only deliver an ordering of quality, not a particular level of quality. I get both DSL last-mile connection and VoIP from Speakeasy, and within their own network, a 2-level priority system does a pretty good job of keeping VoIP reliable. But priority systems are hard to scale with good behavior. Anyway, a modest priority system, perhaps with only 2 levels, could be a no-brainer win for latency apportionment, as long as you can get people who only need the lover priority to label their traffic honestly. I suppose that, with the current system of last mile providers, if those providers could all be made to act rationally, a small charge for high-priority traffic could be a sufficient disincentive to mislabelling. I'm not optimistic, and I'm not eager to increase our dependence on these providers. What I'd really like to see, then, is an analysis of priorities vaguely analogous to the Breslau/Shenker analysis of reservations. You seem to imply that they determine queueing discipline, but I wonder whether they might be more important in determining who wins a collision (that is, the case when both packets eventually get service may be less important than the case when one is dropped due to queue overflow). There are lots of reasons to minimize queue length, and the more one succeeds, the less difference priorities make at that point.

minor clarification on reservations vis a vis QoS Mike O'Donnell  –  Jun 17, 2008 2:56 AM

BTW, neither the Breslau/Shenker analysis of reservations, nor mine, really depend on the reservations going end-to-end. Any reservation of resource at any point in the path (including the beginning and end) is subject to the same probabilistic phenomena. As I understand priority vs. reservation, either one can operate either locally, multilocally, or end-to-end. The difference is that reservation provides a certain stated amount of resource (which limits how much of the resource can be offered for reservation), while priority offers one class of packet better service than another, which still leaves the preferred packet subject to unlimited potential competition with others of its class. If badly deployed, priority can lead to starvation (there is a famous paradox of queueing theory where giving priority to the most important work leads to disaster, while giving priority to the least important is great).

Again, if Korea and Japan Milton Scritsmier  –  Jun 17, 2008 1:42 AM

Again, if Korea and Japan have problems with congestion in their central core, then add more bandwidth to the central core. And as I think you would agree, the people who should pay for it should be the heavy users. As I said before, when I needed more bandwidth I stepped up and paid for it. I just don't want anybody messing with the bandwidth I do pay for.

Look, I'll be the first to agree that adding bandwidth to solve our congestion problems is a bit like hitting a nail with a sledge hammer. It gets the job done, but it's a little bit of overkill. But as Mr. Bennett points out, the internet protocol really doesn't lend itself to elegant solutions anyway when you run it near capacity.

I ran across an article in the Wall Street Journal today (2008-06-16) that said a study by Cisco shows that video will soon overtake P2P as the largest category of traffic on the internet. It also predicts a six-fold jump in internet traffic between 2007 and 2012. That assumes, of course, a huge build-up of our networking infrastructure at all levels, one that involves far more than a little router tweaking. Nobody says that this should be done for free. But if, for example, I got my TV programs, movies, etc. over the internet instead of over a separate private cable-company network, I would be willing to pay a lot more for my internet service.

But then the cable companies would be cut out of the loop between the content providers and the end users, which is where the real profit is, and they will never allow it. I suppose instead the cable companies will sit smugly collecting their subscription fees until someone comes up with a new technology to provide internet service that goes around them. It won't be easy, but the customer discontent the cable companies are currently generating will only serve as a huge incentive for someone to try.

Comcast and the other cable companies are now upgrading their networks to provide as much as 100 Mbps to the last mile. It will be interesting to see just how much of that bandwidth is actually allowed onto the internet's central core (beyond the marketing hype, I mean). My guess is that most of it will be dedicated to the cable companies' private networks for HDTV.
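A quick sanity check on that growth figure (my own arithmetic, not Cisco's): a six-fold increase over the five years from 2007 to 2012 works out to roughly a 43% compound annual growth rate.

    # Six-fold traffic growth over five years, expressed as an annual rate.
    annual_growth = 6 ** (1 / 5)
    print(f"implied annual growth: {annual_growth - 1:.0%}")   # ~43% per year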

It's like this Richard Bennett  –  Jun 17, 2008 1:59 AM

Would you say there's some point at which a network has so much bandwidth that it's immune to congestion, Milton? I wouldn't, because whatever technology you use to transport packets can also be used to supply them. So incremental increases in bandwidth simply move congestion around, and wholesale increases merely delay it for a little while. The collection of computers connected to the Internet - some 1.3 billion of them - is collectively capable of generating vastly more traffic than any known transport network can move. The only reasons they don't are user motivation, traffic management, and cost.

The additional bandwidth on the cable networks comes about by re-allocating channels that are currently used for analog TV to Internet access. None of that is for the company's convenience; it's all for you, me, and the accounting department.
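A back-of-the-envelope version of that claim, with purely illustrative numbers for the per-host rate and the core capacity:

    hosts = 1.3e9            # computers on the Internet, the figure cited above
    per_host_mbps = 1.0      # suppose each could source a mere 1 Mbps on average
    offered_tbps = hosts * per_host_mbps / 1e6
    print(f"potential offered load: {offered_tbps:,.0f} Tbps")   # about 1,300 Tbps

    assumed_core_tbps = 100  # an assumed, generous figure for total backbone capacity
    print(f"that is {offered_tbps / assumed_core_tbps:,.0f}x an assumed "
          f"{assumed_core_tbps} Tbps of core capacity")

Even at a tiny fraction of their access speeds, the endpoints can offer more traffic than any core we know how to build, which is why motivation, management, and cost do the real work.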

The collection of computers connected Milton Scritsmier  –  Jun 17, 2008 2:42 AM

The collection of computers connected to the Internet - some 1.3 billion of them - is collectively capable of generating vastly more traffic than any known transport network can move. The only reasons they don't are user motivation, traffic management, and cost.
And what's wrong with controlling usage primarily by cost? Why the emphasis on traffic management? As you point out, traffic management needs to be dictated by technical considerations. But there's absolutely no correlation between those technical requirements and what consumers really want to do on the internet. On the other hand, cost can also control bandwidth demands while giving the consumer the greatest flexibility in what he or she wants to do.
The additional bandwidth on the cable networks comes about by re-allocating channels that are currently used for analog TV to Internet access. None of that is for the company's convenience, it's all for you, me, and the accounting department.
Yes, but is that the best use of bandwidth from my point of view, or from the accounting department's? Far more for the accounting department, I'd say.

Engineers like clever systems Richard Bennett  –  Jun 17, 2008 2:57 AM

Controlling usage by cost strikes me as using a blunt instrument, so I'd like to refine that approach just a little. My ideal system is one where each user has the right to send a certain quantity of packets each second with an average delay of something like 10 milliseconds, and the right to send an additional quantity with an average delay of something like 100 ms. After that, he would be allowed to send an additional number per 5 seconds with an average delay of 300 ms, and beyond that an unlimited amount that travels only when space is available. Let's say these classes are called Priority, Special Delivery, First Class, and Bulk Rate.

The pricing plans of all ISPs would use these terms and be testable. ISPs could then compete with each other on the basis of the number of packets in each class. Perhaps Comcast would offer a 40 - 400 - 4000 plan, and Verizon a 500 - 5000 - 50000 plan. I can tell at a glance what the plan does and which one offers me better value. These terms are much more meaningful than the raw download/upload numbers we presently get. Users don't have to consciously alter their behavior to conform to the plan limits, and they don't have any end-of-the-month surprises.

And yes, at the end of the day, usage patterns are constrained by economics, which has to happen, and the carriers are incentivized to invest in network upgrades. If I were the king of the world, this would be my policy. For a while, anyhow.
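To show roughly what such a plan might look like in practice, here is a minimal sketch of a per-subscriber meter for the four classes. The class names come from the proposal above; the quota numbers, the refill intervals, and the simple "spill down to the next class" rule are illustrative assumptions only, not a specification.

    # Hypothetical tiered packet classifier for a "40 - 400 - 4000"-style plan.
    import time

    class TieredPlan:
        def __init__(self, priority_pps, special_pps, first_per_5s):
            now = time.monotonic()
            # Each tier: [name, quota, refill interval (s), remaining, last refill time]
            self.tiers = [
                ["Priority",         priority_pps, 1.0, priority_pps, now],
                ["Special Delivery", special_pps,  1.0, special_pps,  now],
                ["First Class",      first_per_5s, 5.0, first_per_5s, now],
            ]

        def classify(self):
            """Return the class the next outgoing packet travels in, spilling downward."""
            now = time.monotonic()
            for tier in self.tiers:
                name, quota, interval, remaining, last = tier
                if now - last >= interval:     # refill this tier's bucket
                    tier[3] = remaining = quota
                    tier[4] = now
                if remaining > 0:
                    tier[3] -= 1
                    return name
            return "Bulk Rate"                 # unlimited, sent only when space is available

    # A hypothetical "40 - 400 - 4000" plan: the first 40 packets in any second go out
    # as Priority, the next 400 as Special Delivery, the next 4000 per 5 seconds as
    # First Class, and everything beyond that as Bulk Rate.
    plan = TieredPlan(priority_pps=40, special_pps=400, first_per_5s=4000)
    print([plan.classify() for _ in range(3)])   # ['Priority', 'Priority', 'Priority']

Because the meter is just counting packets per interval, an ISP's claims about the plan are testable from the outside, which is the point of giving the classes public names.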

The gaseous user strikes again Nicolau Werneck  –  Jun 17, 2008 3:58 PM

I totally agree that increasing bandwidth is no solution. This is just like the famous effect observed in personal computers, where the storage space always gets quickly occupied by the users. There will always be a number of users wanting to transmit as much as they can. What we must do is create a way to qualify transmissions, just like the "clever system" described elsewhere in this thread.

The problem we have today comes from the fact that the natural sharing of bandwidth between users is too slow for some applications. Torrents have large momenta. The problem is not simply the bandwidth, or even the quality of the connections (the size and variability of the latency), but the dynamics of resource allocation. TCP works fine "in steady state," sharing the resources between all users after a settling time. To allow VoIP connections to work as well as the telephone, we must complement the protocols with something else.

Pricing won't help. First of all, because the problem is not the "last mile" bandwidth. In fact, a large number of people are unaware that having a 100Mbps link to the Internet doesn't mean you will frequently be able to download anything at that speed. And users definitely lack the tools to do proper management of their network usage. It's very wrong to offer a certain speed, but then say that you can only use a lower one. It's a cheap way to make money.

The problems we are facing have to do with the whole of the Internet. Perhaps ISPs are like taxi companies taking customers through overloaded highways. It doesn't matter much if the car is fast, or if the street you live on has low traffic. The difference is that governments can easily dictate how people should drive, and create "express lanes". But who will do that on the Internet?...

And just a reminder: net neutrality means you can drive the car you want to go anywhere you want. This is possible on the Internet and not with cars, first of all because people don't get killed by speeding on the Internet, and second because it's not the user who controls the speed. The choice we face is: either beat up the "bully" users who are almost unknowingly pushing the limits of the system and disturbing others, or enhance the network so that it does exactly what we advertise. It's a choice between saying that the users are to blame for speeding too much, or that the fault lies with the car companies that made a very fast car.

ISPs must sell what they say they sell. The whole problem is that they simply say "it's a 100Mbps connection", and then afterwards they want to restrict your BitTorrent use. The limitations must be stated clearly. The true complexity of things must be acknowledged. But that's bad marketing... It's not just an engineering problem; the problem is marketing, advertising, rhetoric, politics.
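As a toy illustration of that "settling time" point (my own sketch, with made-up numbers, using the textbook additive-increase/multiplicative-decrease rule and the simplifying assumption that both flows see loss at the same moment): two TCP-like flows do converge toward equal shares, but only over many round trips, which is no comfort to a VoIP call that needs its small, steady share immediately.

    # Toy AIMD: two flows sharing one link, illustrative units only.
    LINK = 100.0             # link capacity, in arbitrary rate units
    ALPHA, BETA = 1.0, 0.5   # additive increase per RTT, multiplicative decrease factor

    a, b = 90.0, 1.0         # flow A already hogs the link when flow B starts up
    for rtt in range(1, 61):
        a += ALPHA
        b += ALPHA
        if a + b > LINK:     # congestion: both flows see loss and back off
            a *= BETA
            b *= BETA
        if rtt % 10 == 0:
            print(f"RTT {rtt:3d}: A = {a:5.1f}   B = {b:5.1f}")

The gap between the two flows halves at each backoff, so the sharing is fair eventually, but "eventually" is measured in dozens of round trips.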

I totally agree that increasing Milton Scritsmier  –  Jun 17, 2008 9:01 PM

I totally agree that increasing bandwidth is no solution. This is just like the famous effect observed in personal computers, where the storage space always gets quickly occupied by the users.
That's also observed in most offices. But once you have already purchased a disk, any unused space on that disk is essentially free. So why not fill it up with junk? The real question is why that user didn't buy a bigger disk in the first place, or more disks: because that is not free. The user made a cost/benefit choice that fit his or her needs.

When I was a kid, getting a long distance call was a big event (yes, I go back that far). You were given maybe 10 seconds to talk to a relative you might hear from once a year. Today long distance is bundled into my VoIP plan and is essentially "free". Yet I don't spend all my time making long distance calls; far from it.

I am not saying increasing bandwidth alone is the solution. In fact, to solve our current bandwidth problems we don't need to increase bandwidth at all -- just increase prices.
And just a reminder: net neutrality means you can drive the car you want to go anywhere you want. This is possible on the Internet and not with cars, first of all because people don't get killed by speeding on the Internet, and second because it's not the user who controls the speed.
I would disagree with you to the extent that traffic laws concerning speed, staying between the lines, etc., are more like the internet protocol itself. They are simply basic rules that make the flow of traffic possible. You may not, for example, go the wrong way on a one-way street, but there is a way to get your car to a business on that one-way street. Some of the more egregious traffic management policies the cable companies are proposing would have the effect of blocking off that one-way street altogether.

The increase of the speed Nicolau Werneck  –  Jun 17, 2008 4:25 PM

The increase of the speed of the whole network would have a beneficial effect only if the transmission speed at the leaves were kept the same. When we talk about increasing the speed we usually mean everybody's speed, but there is also the possibility of increasing just the "core" speed of the network, which amounts to a relative decrease in the users' maximum speed. Suppose all users still had their old 14400 modems. There is certainly a point in the growth of the network's bandwidth at which all those users would be receiving great service. The downside is that the network would start being "wasted". The core of the Internet needs more capacity than the traffic offered to it so that traffic jams can be dealt with more easily.

The problem is that it's always too easy to simply drop all barriers and say "oh, let those users bang their faces into the wall". It's too easy for ISPs to offer very fast leaf connections that can overload the "uplink" connections. Things would be easier if we simply stopped giving users such fast connections, but users would rather be slowed by congestion inside the network than be limited at the leaf...

With QoS protocols we have a better way to allocate resources in the network, an alternative to simply limiting the inputs. Limiting the inputs does work, but it is ugly. Resources must be shared in a more sophisticated way, something TCP/IP does not care about.

It's the network architects who need to figure out how to properly allocate the resources of the network. We must build the thing properly, and not complain about "evil users" who did nothing more than hang off a leaf of the network. And the solution must not involve looking inside people's packets.
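To put some numbers on that uplink problem, here is a quick oversubscription sketch; the subscriber count, leaf speed, and uplink capacity are hypothetical, not figures from any real ISP.

    subscribers = 500        # homes sharing one aggregation uplink (hypothetical)
    leaf_mbps = 100.0        # advertised last-mile speed per home (hypothetical)
    uplink_gbps = 10.0       # capacity of the shared uplink (hypothetical)

    oversub = subscribers * leaf_mbps / (uplink_gbps * 1000)
    print(f"oversubscription ratio: {oversub:.0f}:1")          # 5:1 with these numbers

    # The uplink only holds up while average simultaneous use stays below 1/ratio.
    print(f"the uplink saturates once more than {1 / oversub:.0%} of the advertised "
          f"leaf capacity is actually in use at the same time")

The faster the leaves get relative to the shared links above them, the smaller that safe fraction becomes, which is exactly the temptation being described.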

The increase of the speed Milton Scritsmier  –  Jun 17, 2008 9:16 PM

The increase of the speed of the whole network would have a beneficial effect only if the transmission speed at the leaves were kept the same.
That's one way. But another way would be to increase the speeds at the leaves as much as you want, but raise prices so much that the end user won't want to use all of it.
It's too easy for ISPs to offer very fast leaf connections that can easily overload the "uplink" connections.
I agree with you here. When I upgraded my Comcast internet service, I very much doubt that much of the extra money went to upgrading the internet core, even though increased internet bandwidth was all I cared about. More likely it is being used to help pay for the 100 Mbps upgrade to homes, and that's really being done to give them more bandwidth to provide digital TV. Internet service is a secondary concern to Comcast and the other cable companies, much to the detriment of the internet as a whole.

Controlling usage by cost strikes Milton Scritsmier  –  Jun 17, 2008 4:33 AM

Controlling usage by cost strikes me as using a blunt instrument, so I’d like to refine that approach just a little.

My guess is that these numbers are derived from your years of experience, and so I'm willing to bet that they would indeed help improve bandwidth in the aggregate. But as an end user I may be primarily interested in only one or two of your classes. Having to pay for the other classes would then, from my point of view, be a waste of my money, and there may not be enough competitors out there to provide me with a plan that I'd like. However, if your classes can reduce the cost of my internet usage to below that of an unrestricted plan at equivalent bandwidths, you might be able to convince me. However, as your comment above implies, everybody has a different break-even point.

If I were the king of the world, this would be my policy. For a while, anyhow.

:-)

I hope my comments do not give you the impression that I didn’t find your article valuable. You did convince me that traffic management has technical issues that make strict neutrality impossible. And as an engineer myself, I can appreciate the technical trade-offs that must be constantly on your mind when you attempt to design the best possible product for your customer.

Ironically, however, the convincing case you made for technical concerns governing effective traffic management left me more convinced than before that cost management might better satisfy end users. ;-)
