
Network Neutrality in the Wireless Space

There’s been a tremendous amount written about the Google-Verizon joint proposal for network neutrality regulation. Our commentary at the EFF offers some legal analysis of the good and the bad in this proposal. Much of the commentary has focused on the exemption for wireless networks, since many feel wireless is the real “where it’s gonna be,” if not the “where it’s at,” for the internet.

Previously I wrote about my support for the principles of a neutral network, my fear of FCC regulation, and my conclusion that the real issue here is monopoly regulation, not network regulation. My feelings remain the same. Wireless doesn’t have the broadband duopoly, but it is a space with huge barriers to entry, the biggest being the need to purchase a monopoly on spectrum from the government. I don’t believe anybody should get a monopoly on spectrum (either at auction or as a gift), and each spectrum auction creates another monopoly bound to hurt the free network.

Most defenders of the exemption for wireless think it’s obvious: bandwidth on wireless networks is much more limited, so it needs much more active management. Today, that’s arguably true. I have certainly been on wireless networks that were saturated, and on those networks I would like the big heavy users discouraged so that I can get better service.

[Photo: With Martin Cooper (left), the former Motorola vice president and division manager who in the 1970s led the team that developed the handheld mobile phone (as distinct from the car phone). Source: Wikipedia]
As I said, on those networks. Those networks were designed, inherently, around older, more expensive technology. But we know that technology gets cheaper every year, and wireless technology is getting cheaper very fast, with spectrum monopolies the main barrier to innovation. We would be fools to design and regulate our networks based on the assumptions of 2000 or even the rules of 2010. We need to plan a regime for what we expect in 2015, one which adapts and changes as wireless technology improves and gets cheaper. Planning for linear improvement is sure to be an error, even if nobody can tell you exactly what will be for sale in 2015. I just know it won’t be only marginally better or cheaper than what we have now.

The reality is, there is tons of wireless bandwidth; in fact, it’s effectively limitless. Last week I got to have dinner with Marty Cooper, who led the team that built the first handheld mobile phone, and he has observed that the total bandwidth we put into the ether has been on an exponential doubling curve for some time, with no signs of stopping. We were in violent agreement that the FCC’s policies are way out of date and really should not exist. (You’ll notice in the photo that he’s holding a Droid X while I have the replica Dyna-Tac. He found it refreshing not to be the one holding the Dyna-Tac.)
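
To see what that kind of doubling implies for the 2015 planning horizon, here is a quick back-of-envelope in Python. The 30-month doubling period is the figure usually quoted for Cooper’s law; I am assuming it here for illustration:

    # Project wireless capacity growth under an assumed Cooper's-law
    # doubling of total wireless capacity every 30 months.
    def capacity_multiplier(years: float, doubling_months: float = 30.0) -> float:
        """Capacity growth factor after `years` at a fixed doubling period."""
        return 2 ** (years * 12 / doubling_months)

    for year in (2015, 2020, 2030):
        factor = capacity_multiplier(year - 2010)
        print(f"{year}: ~{factor:.0f}x the wireless capacity of 2010")
    # 2015: ~4x, 2020: ~16x, 2030: ~256x -- anything but linear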

Bandwidth is limitless both because we keep improving the technology and because we can build picocells anywhere there is demand. Picocells use very high frequencies that won’t go through walls. You may think that’s a bug, but it’s actually a feature: two picocells in different rooms barely interfere with each other, so you can get gigabits in each individual room. And while wireless use is growing quickly, much of that growth is happening inside buildings.
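
The arithmetic of that frequency reuse is easy to sketch. The numbers below are hypothetical, chosen only to show how per-room cells multiply capacity rather than share it:

    # Hypothetical frequency-reuse arithmetic: walls isolate high-frequency
    # picocells, so every room reuses the same spectrum at full speed.
    per_cell_gbps = 1.0   # assumed throughput of one in-room picocell
    rooms = 40            # assumed rooms in a building, one picocell each

    aggregate_gbps = per_cell_gbps * rooms   # the cells run simultaneously
    print(f"In-building aggregate: {aggregate_gbps:.0f} Gbit/s")
    print(f"vs. one macro cell sharing {per_cell_gbps:.0f} Gbit/s "
          f"across the whole building")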

In the past, having so many cells would be too expensive. But today the electronics for the cells cost a pittance compared to what old thinking predicted. And that’s going to continue. This is just one way we know to get more bandwidth for everybody.

The original question was whether it is good for somebody to soak up the wireless bandwidth in your area downloading a movie, and whether networks need to throttle such users. We scream out that they should, but our thinking is short-term. It is the congestion caused by these heavy users, after all, that drives innovation and network expansion. If we can “solve” our problem with network management rather than by adding bandwidth, then we don’t create as much incentive to make bandwidth technology cheap. If the only way to solve the problem is to boost wireless capacity to match the wired network, that’s how we will solve it.
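
(For the curious, the per-user throttling at issue can be as simple as a token bucket. This is a minimal sketch of the general technique, not any carrier’s actual mechanism:)

    import time

    class TokenBucket:
        """Minimal per-user rate limiter of the kind 'management' implies.

        Tokens accrue at `rate` bytes/sec up to `burst`; a packet passes
        only if enough tokens remain, so sustained heavy users are slowed
        while light users never notice.
        """
        def __init__(self, rate: float, burst: float):
            self.rate, self.burst = rate, burst
            self.tokens = burst
            self.last = time.monotonic()

        def allow(self, nbytes: int) -> bool:
            now = time.monotonic()
            self.tokens = min(self.burst,
                              self.tokens + (now - self.last) * self.rate)
            self.last = now
            if self.tokens >= nbytes:
                self.tokens -= nbytes
                return True
            return False   # drop or delay: the heavy user is "discouraged"

    bucket = TokenBucket(rate=1_000_000, burst=5_000_000)  # ~1 MB/s sustained
    print(bucket.allow(1500))   # an ordinary packet passes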

Some have argued, in fact, that it’s cheaper to solve these problems with more bandwidth than with network management. Network management turns out to be pretty hard, requires lots of work by human beings, and is thus quite expensive. And it’s not getting cheaper, because it is not the kind of problem that Moore’s law (or Cooper’s law) helps much with. Boosting network capacity is exactly that kind of problem. And if you solve congestion this way, and drive the creation of better and cheaper products, you get not only reduced congestion but also a nice fast network when it’s not congested. That’s a huge win for the network and for the world, since everybody gets to buy the new technology, while not everybody needs the network management.

It’s been popular to tell Google they are being evil by getting together with Verizon on this deal. I suspect it’s more a case of not thinking about the future. Once the FCC encodes rules into law, we’ll have them for decades, and even if we’re lucky enough to get the right rules today, they won’t be the right rules for the future. Alas, they will probably be the rules the lobbyists want.

If the FCC or FTC want to make rules, they should be monopoly-busting rules. Let’s have better roaming, for example, so our devices can readily and rapidly make use of small cells. Most new phones have 802.11, so why not a system where the operator of any short-range access point can easily turn it into a picocell and sell service to the wireless company (now a wireless aggregator) at negotiated or auctioned rates? Most wifi hotspots would be happy to do this at very low rates (many do it for free right now) that could easily be bundled with any plan. A hotspot that wants to charge extra might only get premium customers.
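
Mechanically, the aggregator’s side of this could be very simple. Here is a hypothetical sketch, with invented names and rates, of picking the cheapest volunteered hotspot that meets a subscriber’s needs:

    # Hypothetical picocell marketplace: the aggregator takes the cheapest
    # offer that still meets the subscriber's bandwidth needs.
    offers = [
        {"owner": "cafe-wifi", "cents_per_gb": 0.0, "mbps": 20},   # free, as many are today
        {"owner": "hotel-ap",  "cents_per_gb": 5.0, "mbps": 100},  # premium hotspot
    ]

    def pick_hotspot(offers, min_mbps):
        usable = [o for o in offers if o["mbps"] >= min_mbps]
        return min(usable, key=lambda o: o["cents_per_gb"], default=None)

    print(pick_hotspot(offers, min_mbps=10))   # -> the free cafe hotspot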

A good roaming system helps enable the ethic I think is right for spectrum sharing: “don’t be selfish.” Under this regime you are required to use only as much power and spectrum as you need; if you’re inside a building with a nice 100-megabit in-room 5 GHz wireless link, you should not be broadcasting to everybody for a mile around at 850 MHz. Doing so is wasteful and doesn’t make sense. If the FCC needs to do anything, it should slightly tweak things to encourage such good behaviour.
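
Stated as a rule, the etiquette is: among the links that meet your needs, pick the one with the smallest radio footprint. A toy sketch, with hypothetical numbers:

    from dataclasses import dataclass

    @dataclass
    class Link:
        name: str
        mbps: float
        range_m: float   # rough coverage radius, a proxy for spectrum footprint

    def least_selfish(links, need_mbps):
        """Smallest-footprint link that still satisfies the demand."""
        adequate = [l for l in links if l.mbps >= need_mbps]
        return min(adequate, key=lambda l: l.range_m, default=None)

    links = [Link("in-room 5GHz", mbps=100, range_m=10),
             Link("850MHz macro cell", mbps=5, range_m=1600)]
    print(least_selfish(links, need_mbps=2).name)   # -> in-room 5GHz, not the macro cell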

By Brad Templeton, Electronic Frontier Foundation (EFF) Board Member, Entrepreneur and Technologist

Comments

Not dealing with reality, by Richard Bennett – Aug 26, 2010 9:32 PM

This post is largely fanciful. Most obviously, Cooper’s Law forecasts a doubling of data rate per hertz every 30 months. In optical systems, the doubling time is 8 months. The gap between wireless and wireline is permanent and will only grow larger.
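
(To put numbers on the rates cited here: if optical capacity doubles every 8 months and wireless every 30, the wireline lead compounds quickly:)

    # Compound the two doubling periods cited above.
    def growth(years, doubling_months):
        return 2 ** (years * 12 / doubling_months)

    for years in (5, 10):
        gap = growth(years, 8) / growth(years, 30)
        print(f"after {years} years, wireline's lead grows another ~{gap:.0f}x")
    # after 5 years: ~45x; after 10 years: ~2048x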

Secondly, the cost of employing picocells and femtocells outside your home is considerably greater than the cost of pulling optical cable, and the challenges in terms of dealing with interference in outdoor settings are severe.

Thirdly, management vs. capacity is not an either/or; in real systems, especially mobile broadband, both are essential technologies, and each does something the other can’t.

The magic bandwidth fairy is not going to solve this problem for us, and personal feelings aren’t the guide to a lasting solution. If we’ve learned anything from the Internet, it’s that some people will not play by the rules no matter how reasonable the rules may be.

People won't play by the rules, by Brad Templeton – Aug 26, 2010 9:54 PM

Indeed they don't play by the rules, and yet the wired network works fine with minimal bandwidth management, mostly DDOS protection against the deliberately malicious. My point is that if we think we can solve our problems with a managed, non-neutral network, then we reduce the incentives for solving them by building more capacity. And those are the solutions we want. You may not believe in the "bandwidth fairy," but if the fairy is going to come, it's because market demand needs it to come. If you feel you can't count on the bandwidth fairy to give you more bandwidth, consider that every time somebody has said the fairy would not come, it has come anyway. I don't say that wired isn't better. It is. We should always use wired where we can, put it everywhere it's easy to put, and then use wireless where the land link is not convenient. As such, you don't need picocells everywhere outside.

TCP is part of the problem, by Richard Bennett – Aug 26, 2010 10:27 PM

The Internet as we know it today "works fine" for some applications, doesn't work at all for others, and for some applications is usable only some of the time. One application that doesn't work at all on the Internet is immersive conferencing, such as Cisco TelePresence. To make this application work, users (typically large firms) need to purchase private wire or virtual private wire connections. The issue isn't bandwidth, it's latency and jitter. Any circuit that carries a traffic mix dominated by TCP will exhibit synchronization: periodic micro-congestion situations that last less than a second each but are noticeable to video users because they cause frames to be dropped or delayed. You can't solve this at the end point or with neutral management; it's caused by Jacobson's Algorithm. Firms that split their communication between the generic, best-efforts Internet and private lines would like to unify them, but they can't do this without non-neutral management. So that's your choice: allow meaningful traffic engineering on the public Internet, or fragment communications between managed networks and the generic Internet. Open is good, neutral is bad.
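
(The synchronization described here shows up even in a toy model: a few AIMD flows sharing one link halve their windows in lockstep whenever the queue overflows. A cartoon of congestion avoidance, not a faithful TCP model:)

    LINK = 100            # link capacity, packets per tick (hypothetical)
    flows = [20.0] * 5    # per-flow sending rates

    for tick in range(12):
        if sum(flows) > LINK:                 # overflow: every flow sees loss
            flows = [f / 2 for f in flows]    # multiplicative decrease, in lockstep
            event = "loss: all flows halve (the periodic jitter spike)"
        else:
            flows = [f + 2 for f in flows]    # additive increase
            event = "ramping"
        print(f"t={tick:2d} total={sum(flows):6.1f}/{LINK} {event}")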

I'm not saying we should require neutrality, by Brad Templeton – Aug 26, 2010 10:50 PM

I am not calling for laws to demand neutrality; I am pointing out ways in which it has worked well. Private lines can offer low jitter and packet loss, but only at significant cost. The question is, “what else can you get with that money?” using just the most basic tech. Because QoS and private lines are so expensive, the choice is often between 10x the bandwidth of plain IP and 1x of dedicated capacity for the same money. And you often end up getting better telepresence from the 10x network than from the 1x dedicated net. But more to the point, you get a much faster network for all sorts of things you did not even plan for.
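
(A crude way to see this trade-off: in a simple M/M/1 queue, delay scales like 1/(capacity - load), so a pipe with 10x headroom stays fast even when shared. The numbers here are hypothetical:)

    def mean_delay_ms(capacity_mbps, load_mbps, packet_kbit=12):
        """Mean M/M/1 sojourn time, 1/(mu - lambda), in milliseconds."""
        mu = capacity_mbps * 1000 / packet_kbit    # service rate, packets/sec
        lam = load_mbps * 1000 / packet_kbit       # arrival rate, packets/sec
        return 1000.0 / (mu - lam)

    print(f"1x dedicated pipe, 80% loaded: {mean_delay_ms(10, 8):.1f} ms")
    print(f"10x shared pipe, same traffic: {mean_delay_ms(100, 8):.3f} ms")
    # ~6 ms vs ~0.13 ms: the fat pipe wins on latency, not just throughput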

Now, you may dispute that the difference is as large as 10x (perhaps it’s only 5x), but it is a fair bit greater than 1x, particularly because a lot of the cost of QoS and managed networks is human cost.

I won’t tell you it’s always this way, but it happens more than people expect.

But my real message is not that you can do everything better on an unmanaged network. My point is that the challenges of the unmanaged network are part of what has driven us to build more bandwidth; that is what made that videoconferencing system workable in the first place.

Advocating a fanciful position, by Richard Bennett – Aug 26, 2010 11:05 PM

It's not really satisfactory to offer a defective analysis of a technical subject such as traffic engineering and then try to paper it over with "but I'm not saying anyone should ACT on what I've just said." In fact it's destructive. If your analysis of the trade-offs between capacity and management were correct, why shouldn't a law be passed to enforce the style of management you recommend? If TelePresence could be done cheaper over fat pipes than over dedicated ones, that's the way people would do it. Network administrators aren't so stupid as to waste money like that. For big firms like Cisco, as much as half the traffic on the corporate WAN is videoconferencing. You're making the same naive arguments I used to hear back in the 1980s when we were changing Ethernet from an edge-managed coax cable to a managed switch. They didn't work then and they don't work now. Everybody loves bandwidth, but it doesn't solve all the problems of data communication. And in particular, immersive videoconferencing is still not a practical application over the Internet, not even for Cisco, and I put a lot more stock in their expertise in network engineering than in your back-of-the-envelope speculations.

Abuse and overuse, by Brad Templeton – Aug 28, 2010 7:53 AM

This is fine if you can somehow avoid the problem of people defining overuse they don’t like as abuse. It’s one thing to say a DDOS or spambot is abuse, but people have also called VoIP, USENET, streaming, videoconferencing and bittorrent (the protocol, rather than a particular download) abuse. In fact, it is demand for many of these apps that has made people buy faster networks and given us what we have today.

