
If Slate Comes in Standard Sizes, Why Not Broadband?

Why does the broadband industry, supposedly a “high technology” one, lag behind old and largely defunct industries that have now reached the “museum piece” stage?

Last week I was at the National Slate Museum in Wales watching slate being split apart. On the wall were sample pieces of all the standard sizes. These have cute names like “princess”. For each size, there were three standard qualities: the thinnest are the highest quality (at 5mm in thickness), and the thickest have the lowest quality (those of 13mm or more). Obviously, a lighter slate costs less to transport and lets you roof a wider span with less supporting wood, hence it is worth more.

These slates were sold around the world, driven by the industrial revolution and the need to build factories and other large structures for which “traditional” methods were unsuitable. Today we are building data centers instead of factories, and the key input is broadband access rather than building materials. Thankfully, telecoms is a far less dangerous industry, and doesn’t give us lung disease that kills us off in our late 30s. (The eye strain and backache from hunching over iDevices is our deserved punishment for refusing to talk to each other!)

What struck me was how this “primitive” industry had managed to create standard products, in terms of both quantity and quality, that were clearly fit for purpose for different uses, such as main roofs versus drainage versus ornamentation. This is in contrast to broadband, where there is high variability in the service, even with the same product from the same operator being delivered to different end users.

With broadband, we don’t have any kind of standard units for buyers to be able to evaluate a product or know if it offers better or worse utility and value than another. The only promise we make is not to over-deliver, by setting an “up to” maximum burst data throughput! Even this says nothing about the quality on offer.

In this sense, broadband is an immature craft industry which has yet to even reach the most basic level of sophistication in how it defines its products. To a degree, this is understandable, as the medium is a statistically multiplexed one, so naturally is variable in its properties. We haven’t yet standardized the metrics in which quantity and quality are expressed for such a thing. The desire is for something simple like a scalar average, but there is no quality in averages.
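To make that last point concrete, here is a toy sketch in Python (the traces and numbers are invented for illustration): two links can report the identical average delay while offering utterly different experiences.

    import statistics

    # Two invented delay traces, in milliseconds. Both average 20 ms.
    link_a = [20, 21, 19, 20, 22, 18, 20, 20]   # steady
    link_b = [5, 5, 5, 5, 5, 5, 5, 125]         # mostly idle, one huge spike

    for name, trace in [("A", link_a), ("B", link_b)]:
        print(name, "mean:", statistics.mean(trace), "worst:", max(trace))
    # Identical "average" service; only one of them could carry a decent VoIP call.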

Hence we need to engage with the probabilistic nature of broadband, and express its properties as odds, ideally using a suitable metric space that captures the likelihood of the desired outcome happening. This is by its nature something that is an internal measure for industry use, rather than something that end consumers might be exposed to.
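As a sketch of what “properties as odds” might look like (again in Python, with an invented trace and an invented 50ms delay budget), the metric is simply the empirical probability that a packet meets the bound the application cares about:

    def odds_of_outcome(delays_ms, bound_ms):
        """Empirical P(delay <= bound): the odds that a packet arrives
        in time for the desired outcome, rather than a scalar average."""
        return sum(d <= bound_ms for d in delays_ms) / len(delays_ms)

    # Invented trace of per-packet one-way delays (milliseconds).
    trace = [12, 14, 11, 90, 13, 12, 15, 11, 200, 13]

    print(odds_of_outcome(trace, 50))    # 0.8 -- 8 of 10 packets meet a 50 ms budget
    print(odds_of_outcome(trace, 150))   # 0.9 -- relax the budget, the odds improve

A buyer could then compare two products by their odds at the same bound, much as a builder compared two “princess” slates of the same grade.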

Without standard metrics and measures, and transparent labeling, a proper functioning market with substitutable suppliers is not possible. The question that sits with me is: whose job is it to standardize the product? The regulator? Equipment vendors? Standards bodies? Network operators? Industry trade groups? Or someone else?

At the moment we seem to lack both awareness of the issue and incentives to tackle it. My hunch is that the switch-over to software-defined networks will be a key driver of change. When resources are brought under software control, they have to be given units of measure. Network operators will have a low tolerance for control systems that have vendor lock-in at this elementary level. Hence the process of standardizing the metrics for quantity and quality will rise in visibility and importance over the next few years.

By Martin Geddes, Founder, Martin Geddes Consulting Ltd

He provides consulting, training and innovation services to telcos, equipment vendors, cloud services providers and industry bodies. For the latest fresh thinking on telecommunications, sign up for the free Geddes newsletter.


Comments

A big part of the problem in the US. Todd Knarr  –  Jan 16, 2017 7:42 PM

A big part of the problem in the US is that substitution of suppliers isn’t possible. The limitation is the last mile to consumers, where the physical limitations of running wires and obtaining RF spectrum are aggravated by contracts that deny other suppliers the access they need, or indeed bar them from entering the market at all.

The number of attributes is large. Karl Auerbach  –  Jan 17, 2017 7:33 PM

I want to begin by saying I agree with Todd Knarr’s point about the lack of choice making measurements an academic issue for many of us.

As for the attributes that have to be measured and reported:
  - For several ranges of packet sizes, and for each direction of packet flow:
    - MTU
    - Delay
    - Variation in delay (jitter)
    - Loss rate
    - Mis-/re-ordering rate
    - Duplication rate
    - Queue sizes (which would raise bufferbloat issues)
    - ICMP rate limits/suppression
  - Whether IPv6 is handled, and if so, with what options (such as proper prefix delegation - I’m looking at you, Comcast)
  - Presence of traffic shaping
  - Presence of filtering
  - Presence of hidden proxies (such as DNS interceptors or HTTP framing - again looking at you, Comcast)
  - Presence of NAT

etc.
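As a rough illustration of how such a report might be structured (a Python sketch; the field names and grouping are mine, not from any standard):

    from dataclasses import dataclass
    from typing import Dict

    @dataclass
    class DirectionalStats:
        """Measurements for one direction of flow and one packet-size range."""
        mtu_bytes: int
        delay_ms: float
        jitter_ms: float           # variation in delay
        loss_rate: float           # fraction of packets lost
        reorder_rate: float        # mis-/re-ordering rate
        duplication_rate: float
        queue_depth_packets: int   # large queues would surface bufferbloat
        icmp_rate_limited: bool

    @dataclass
    class ServiceReport:
        """One measurement report for a broadband product."""
        # Keyed by a packet-size range label, e.g. "64-128B" (label format invented).
        upstream: Dict[str, DirectionalStats]
        downstream: Dict[str, DirectionalStats]
        ipv6_supported: bool
        ipv6_prefix_delegation: bool   # "proper" delegation, per the list above
        traffic_shaping_present: bool
        filtering_present: bool
        hidden_proxies_present: bool   # e.g. DNS interception, HTTP reframing
        nat_present: bool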

In follow-up, it's a difference in leverage. Todd Knarr  –  Jan 17, 2017 7:48 PM

In follow-up, it’s a difference in leverage. With slate, a builder could easily switch suppliers; it was just a matter of whether or not a particular supplier could deliver the sizes the builder wanted. The builder had the leverage, and it was in any given supplier’s best interest to have the standard sizes on hand so he didn’t risk losing a sale because he didn’t have them and someone else did.

With bandwidth, on the consumer end there isn’t much opportunity to switch suppliers, and the cost of doing so is high. The consumer doesn’t have any leverage: if the edge ISP can’t or won’t deliver a standard product, the consumer can’t believably threaten to switch to a supplier that will. It’s not in the ISP’s best interest to standardize; in fact, it’s in their best interest not to standardize and not to lower the bar to moving even a little bit.

On the upstream side, where the edge ISPs connect to larger backbone networks, it’s a bit different, but I don’t think the issues of variability and non-standard measurements exist on those interfaces. Where they do, the edge ISP is in the same position: they’re the one and only path to those consumers, and if you want to reach their customers it’s their way or the highway.
