What's Wrong With the FCC's Consumer Broadband Test?

By Karl Auerbach

The FCC recently published some tools to let consumers measure some internet characteristics.

The context is the FCC's "National Broadband Plan". I guess the FCC wants to gather data about the kind of internet users receive today so that the National Broadband Plan, whatever it may turn out to be, actually improves on the status quo.

The motivation is nice but the FCC's methodology is technically weak.

There are several goals to which the National Broadband Plan ought to aspire:

This note will address only the first of these goals.

Two things are wrong. First, the FCC's tools are not well focused with regard to exactly which parts of the internet they are measuring. Second, the measurements that are taken are too vague to be of more than anecdotal value.

I've drawn up a simple diagram to illustrate.

This is a simplified diagram; it is intended to focus on the part of the net of concern to the National Broadband Plan. In particular, it looks at the part of the net that represents the "internet" product sold by today's Internet Service Providers (ISPs). The arrows in this drawing are interfaces where these clouds join; they are not communications lines.

This diagram shows things as connected clouds because that more accurately represents the way that users connect to the internet. The basic parts of the diagram are these:

The portions of interest to the FCC's National Broadband Plan are the part between "A" and "B" and between "A" and "C". These are shown inside the yellow box.

So what does all of this have to do with the National Broadband Plan in general and the FCC's Consumer Broadband test in particular?

First of all, we must recognize that a user's perception of network quality and speed is a complex function that involves the entire path between the user and the remote service.

Many protocol stacks and applications can degrade badly even if one seemingly small aspect changes. For example, the speed with which domain name system (DNS) queries are answered is often a major, or even the dominant, component of how quickly web pages are fetched and rendered. Indeed, with the increasing number of "analytics" web bugs and links to "share" content, the number of DNS queries involved in a page fetch can be quite surprising.

And DNS responsiveness is a matter that involves more than mere bandwidth.
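To illustrate how DNS's contribution could be measured on its own, here is a minimal Python sketch that times stub-resolver lookups. The hostname list is a placeholder; a real test would use the names a page actually references, and would distinguish cached from uncached answers:

```python
import socket
import time

def dns_lookup_ms(hostname):
    """Time a single stub-resolver lookup; a crude proxy for DNS cost."""
    start = time.perf_counter()
    socket.getaddrinfo(hostname, 80)  # resolves via the system resolver
    return (time.perf_counter() - start) * 1000.0

# A page with N third-party resources can require N distinct lookups;
# summing them approximates DNS's share of the page-load time.
hosts = ["localhost"]  # placeholder; substitute a real page's hostnames
total = sum(dns_lookup_ms(h) for h in hosts)
print(f"DNS time: {total:.1f} ms")
```

Even a sketch like this makes the point: the cost is per-query, so it scales with the number of distinct names on a page, not with the access link's bandwidth.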

Other applications degrade for other reasons. VoIP is often made incomprehensible by even small amounts of packet reordering, something that can occur quite often as a result of certain wireless technologies, load-balanced pathways, or routing behavior. And applications that use large packets, such as high-quality video, can be badly affected by fragmentation of packets due to link MTU values of less than about 1500 bytes.
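The fragmentation arithmetic is easy to sketch. Assuming a 20-byte IP header with no options (the header size and the example MTU figures below are illustrative), a datagram larger than the link MTU splits like this:

```python
import math

IP_HEADER = 20  # bytes; assumes an IPv4 header with no options

def fragment_count(datagram_len, mtu, header=IP_HEADER):
    """Number of IP fragments when a datagram crosses a smaller-MTU link."""
    if datagram_len <= mtu:
        return 1
    # Every fragment except the last carries payload in multiples of 8 bytes,
    # because the IP fragment-offset field counts in 8-byte units.
    per_frag = ((mtu - header) // 8) * 8
    return math.ceil((datagram_len - header) / per_frag)

# A 1500-byte video packet crossing a link with a 1400-byte MTU:
print(fragment_count(1500, 1400))  # → 2
```

Doubling the packet count for a flow in this way also doubles the per-packet overhead and the chance that loss of any one fragment discards the whole datagram.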

There are many characteristics that play a part. Among these are Quality of Service (QoS) handling, queuing disciplines and drop policies in routers, and congestion handling in protocol stacks. Moreover, there are an increasing number of protocol "accelerators" that try to obtain better user performance by abandoning the protocol etiquette algorithms that are built into well-implemented TCP stacks. Those accelerators may create local benefits for their users as long as the number of such users is small, but they damage the experience of other users.

The National Broadband Plan tends to be involved only with the "User Access Link" part of my drawing.

Yet the FCC's tests tend to lump all the parts of the drawing into one number, thus masking the contribution of each part.

A national broadband build-out that does not deal with the entire system will be a waste of time and money. A user whose ISP has a magnificent broadband User Access Link but inadequate backhaul and connectivity to the internet at large is a user who is going to be dissatisfied.

Thus for the FCC's tests to be meaningful they need to do two things:

• They need to isolate and separately report the attributes of the User Network, the User Access Link, the User's ISP Cloud, and the degree of private peering to large content providers.

• The attributes that are measured need to be much deeper than "bandwidth", "latency", and "jitter". I would recommend that the FCC look at the way that tools like PathChar and Pchar construct a detailed hop-by-hop analysis of network paths. Those tools require many thousands of packets over many tens of minutes for each hop in a path. In my own work I began (but never completed) a project to design a protocol to enable the fast and inexpensive measurement of path characteristics for proposed packet flows. That work is visible on the net at http://www.cavebear.com/archive/fpcp/fpcp-sept-19-2000.html.
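To give a flavor of the per-link inference those tools perform, here is a minimal Python sketch of the PathChar-style estimation step, run on synthetic timing numbers rather than real probes (the function names and the 10 Mb/s and 1 Mb/s link figures are invented for illustration). The idea: for each hop, the *minimum* RTT over many probes of size s is roughly linear in s, rtt_min(s) = fixed_delay + s/bandwidth, so a least-squares line through (size, min RTT) points yields a slope of 1/bandwidth, and subtracting successive hops' slopes isolates one link:

```python
def fit_slope(points):
    """Ordinary least-squares slope for (x, y) pairs."""
    n = len(points)
    mean_x = sum(x for x, _ in points) / n
    mean_y = sum(y for _, y in points) / n
    num = sum((x - mean_x) * (y - mean_y) for x, y in points)
    den = sum((x - mean_x) ** 2 for x, _ in points)
    return num / den

def link_bandwidth_bps(hop_prev, hop_curr):
    """Estimate one link's bandwidth from two hops' (bytes, seconds) minima."""
    slope = fit_slope(hop_curr) - fit_slope(hop_prev)  # seconds per byte
    return 8 / slope  # bits per second

# Synthetic minimum RTTs: hop 1 sits behind a 10 Mb/s link,
# hop 2 adds a 1 Mb/s link after it.
sizes = [64, 256, 512, 1024, 1500]
hop1 = [(s, 0.001 + s * 8 / 10e6) for s in sizes]
hop2 = [(s, 0.002 + s * 8 / 10e6 + s * 8 / 1e6) for s in sizes]
print(round(link_bandwidth_bps(hop1, hop2) / 1e6, 2))  # → 1.0
```

The real tools spend thousands of probes per hop precisely because the minimum over noisy samples converges slowly; the fitting step itself, as the sketch shows, is simple.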

By Karl Auerbach, Chief Technical Officer at InterWorking Labs. Visit the blog maintained by Karl Auerbach here.

Related topics: Access Providers, Broadband, Policy & Regulation, Telecom