
Three Reasons Why Broadband Is So Unreliable

Martin Geddes

We all take the predictability and reliability of other utilities for granted. So why is broadband such a frustrating exception? Why do our Skype calls fail mid-way? What makes Netflix buffer like crazy? How come our gaming sessions are so laggy?


No real experience intention

Imagine if the design of your electrical supply was optimised to apply the biggest possible voltage and current to anything that was plugged in. That would clearly be ridiculous!

Imagine if the design of your kitchen tap was optimised to deliver as much water as possible at the highest possible pressure the moment you turned it on. That would clearly be ridiculous!

Imagine if the design of your gas cooker was optimised to burn everything to a crisp as fast as possible in a white hot inferno. That would clearly be ridiculous!

So, why have we optimised broadband to deliver as much bandwidth as possible? That's clearly ridiculous!

In order to work, applications need enough packets to arrive "fresh" enough. In other words, they are sensitive to quality, and need a sufficient quantity of good enough quality. Instead, we've aimed to deliver a maximum quantity with an undefined quality.

This is disconnected from what the user values, unlike all the other utilities. There is no specific experience intention, merely a "you get what you get".
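To make "a sufficient quantity of good enough quality" concrete, here is a minimal sketch (my illustration, not a standard industry metric) of how an application might judge a packet stream: count how many packets arrive within a delay budget, rather than how many bits arrive per second. The function name and the VoIP-like budget figures are assumptions chosen for the example.

    # Sketch: judge a packet stream by freshness, not raw throughput.
    # None represents a lost packet; delays are one-way, in milliseconds.
    def stream_is_usable(one_way_delays_ms, delay_budget_ms=150.0, max_stale_fraction=0.02):
        """True if packets arriving late (or lost) stay within the app's tolerance."""
        total = len(one_way_delays_ms)
        if total == 0:
            return False
        stale = sum(1 for d in one_way_delays_ms if d is None or d > delay_budget_ms)
        return stale / total <= max_stale_fraction

    # Two streams with the same average throughput but very different quality:
    steady = [40.0] * 98 + [60.0, 70.0]                  # nearly everything arrives fresh
    bursty = [20.0] * 90 + [400.0] * 7 + [None] * 3      # delay spikes and losses under load

    print(stream_is_usable(steady))   # True  -> the call sounds fine
    print(stream_is_usable(bursty))   # False -> same "bandwidth", broken experience

The two streams carry the same number of packets; only the second one fails, because quality, not quantity, is what the application actually consumes.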

Missing engineering specification

With a domestic AC power supply, we primarily define its quality through having a stable voltage and frequency. With gas we have a regulated composition and energy content. With water, it has to be potable and delivered under sufficient pressure.

So what's the specification for the quality of broadband? It is, and please don't laugh too hard, purely accidental. Yup, the quality delivered by every current ISP is an emergent property of random processes. Whilst it may be a stable and managed property, it is (unlike all those other utilities) not engineered to a specification with a known safety margin.

The quality of your broadband can and will suddenly shift (under load) in ways your ISP has effectively no control over. Some genius came up with the PR term of "best effort" to describe "out of control" and "not engineered".
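By way of contrast, here is a minimal sketch (again my own, with invented spec values) of what "engineered to a specification with a known safety margin" could mean operationally: measure delay, take a high percentile, and require headroom below a stated bound. Nothing like this is implied by "best effort".

    # Sketch: check measured delay against a quality spec, keeping a safety margin.
    def meets_spec(delays_ms, p99_bound_ms=100.0, safety_margin=0.2):
        """True if the 99th-percentile delay stays below the bound with 20% headroom."""
        ordered = sorted(delays_ms)
        p99 = ordered[min(len(ordered) - 1, int(0.99 * len(ordered)))]
        return p99 <= p99_bound_ms * (1.0 - safety_margin)

    quiet_hour = [30 + (i % 10) for i in range(1000)]               # stable, well inside the bound
    peak_hour = [30 + (i % 10) for i in range(950)] + [250] * 50    # the tail collapses under load

    print(meets_spec(quiet_hour))  # True  -> inside the engineered envelope
    print(meets_spec(peak_hour))   # False -> quality has drifted out of control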

Inappropriate operational mechanisms

With power, gas and water we understand that there are switches, valves and taps to regulate flow. With networks we have buffers. And we've chosen the wrong kind. Absolutely everywhere. Honest!

In every network you are likely to encounter, the default policy is to send as many packets as quickly as possible. After all, we wouldn't want any expensive data link to become sinfully idle, would we? We want a network that is busy, busy, busy!

Regrettably, this is a really dumb thing to do. Other industries figured this out decades ago with their 'lean' revolutions. More work in progress and busyness is not the same as delivering value.

What is happening is that we are sending packets into networks faster than downstream data links can process them. The excess "work" we do can only have one effect: those packets get in the way of other data being delivered, without creating any value.

So we have optimised our networks for instability and overload, not for the smooth flow of packets within the inherent limits of the system. This architectural error (called "work conservation") is ubiquitous.
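To see why, here is a minimal sketch (my own illustration; the rates are invented) of a single downstream buffer. Both senders get exactly the same number of packets through the bottleneck; the one that keeps the upstream link "busy, busy, busy" just parks the excess in a standing queue.

    # Sketch: a FIFO buffer fed faster (or not) than the downstream link can drain it.
    def queued_delay(send_rate_pps, drain_rate_pps, seconds):
        """Return (standing backlog in packets, queuing delay in seconds it adds)."""
        backlog = 0.0
        for _ in range(seconds):
            backlog += send_rate_pps                     # arrivals this second
            backlog -= min(backlog, drain_rate_pps)      # departures this second
        return backlog, backlog / drain_rate_pps

    drain = 1000  # downstream link forwards 1000 packets per second

    # "Send as fast as possible": 1500 pps into a 1000 pps link.
    print(queued_delay(1500, drain, seconds=10))  # (5000.0, 5.0) -> five seconds of queue

    # Paced to what downstream can absorb: same delivery rate, no standing queue.
    print(queued_delay(1000, drain, seconds=10))  # (0.0, 0.0)

The excess packets never arrive any sooner; they simply wait in the buffer, adding delay for everything else sharing the link.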

The core (and mistaken) industry belief is that the job of the network is to create as much "bandwidth" as possible by delivering as many packets as fast as possible. It doesn't matter whether it is cable, cellular, DSL, fibre or any other bearer: everyone is selling on bandwidth with unpredictable quality.

This is not the same as delivering a predictable user experience. Whoever first switches to an outcome-centric and engineered performance model may well revolutionise the broadband industry.

By Martin Geddes, Founder, Martin Geddes Consulting Ltd. He provides consulting, training and innovation services to telcos, equipment vendors, cloud services providers and industry bodies. For the latest fresh thinking on telecommunications, sign up for the free Geddes newsletter.
Related topics: Access Providers, Broadband