
NTIA on IoT - ICANN 2? And Reconsidering IoT as Distributed Process Control

Karl Auerbach

NTIA has published a Notice for Public Comment titled "The Benefits, Challenges, and Potential Roles for the Government in Fostering the Advancement of the Internet of Things". This could become ICANN 2, bigger, longer, and uncut; and with a much greater impact on the future direction of the internet.

However, my thoughts on this go well beyond the possibility of another ICANN.

The phrase "internet of things" really bothers me. The internet has always been about things.

Way back in 1989 I was involved in the creation of the first internet toasters. Many people today think those are apocryphal. However, they were very real, and they toasted real bread. Other "things" on the early internet were soda machines, Lego robots (to insert and remove bread from the toasters), elevators, talking bears, weather monitors, and model railroads.

The "internet of things" is a misnomer. It is not, as many people think, the deployment of small appliance things or larger scale workplace or industrial monitoring.

Rather, what is happening is that we are removing the human link between computer-made decisions and the operation of various kinds of infrastructure. Some of those infrastructures are small (like home thermostats) and some are large (like control of municipal water systems).

Yes, we have always had computer control systems. But in the past there was usually a geographically constrained and close relationship between the computers and the devices being controlled.

But that has been eroding in various ways:

First, rather than a single control computer, we are seeing the creation of distributed control systems, often with open or inadequately protected connections to the public internet.

Second, these control systems are gaining larger amounts of control while, at the same time, humans are being removed from the loop.

The net (pun intended) effect of these evolutionary changes is that computers are gaining more control and there is less opportunity for humans to notice that something is amiss and to apply corrective pressure.

We are seeing this play out in the arena of self-driving automobiles.

The advocates say that the computer software is better and faster than humans. That may be true. But even where it is true, it usually holds only so long as the software is operating within the range of inputs it was designed and tested for.

However, as we know from experience, software, particularly software for distributed network processes, is sometimes, perhaps often, under-designed, and is frequently under-tested against anything but routine real-world conditions. And it is rarely tested against the ravages of time, in which software components change (a particularly serious danger to distributed control systems), sensors age, or equipment is maintained with less than scrupulous adherence to same-part-replaces-same-part policies.

One of the most serious risks I see in the "internet of things", particularly when those things are outside, such as with self-driving vehicles, is the slow erosion of sensors and wiring connectors and the impact of inadequate repairs and substitution of used or incorrect replacement parts. Is the control software being designed with these kinds of things in mind? I do not have a good feeling about the answer.

My company (InterWorking Labs) builds tools to test how well network implementations, usually at the IP layer and above, react to real-life network conditions and readily foreseeable future conditions.

What I see scares me. Too much network software is ill-written. For example, a common flaw is that network code written in C lacks an "unsigned" qualifier on integer data types.
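A minimal sketch of that flaw (a hypothetical parser, not drawn from any particular product): a length octet of 0x80 or above, read into a signed type, is sign-extended to a negative value, which can defeat a later bounds check.

```c
#include <stdint.h>

/* Hypothetical example of the missing-"unsigned" flaw.  A length octet
   of 0x80 or more, read into a signed type, is sign-extended to a
   negative value; a bounds check such as "if (len > max)" then passes,
   and len can later reach memcpy() as a huge size_t. */
static int read_len_buggy(const char *pkt)
{
    signed char len = pkt[0];   /* BUG: plain char is signed on most ABIs */
    return len;                 /* octet 0x80 comes back as -128 */
}

static int read_len_fixed(const char *pkt)
{
    uint8_t len = (uint8_t)pkt[0];  /* unsigned: 0x80 stays 128 */
    return len;
}
```

The two functions differ by a single type qualifier, yet one of them turns a perfectly legal on-the-wire value into a negative length.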

And too much network software is written with the presumption that the quality of network traffic on the developer's laboratory network is reflective of the type and quality of network traffic that the software will encounter when deployed.

And it is not merely quality of network traffic (by which I mean things like delay, jitter, loss, re-sequencing, or rate limitation). A lot of our internet standards have fields that are reserved for the future or that are not used today. I've seen previously running software collapse or misbehave when a new device is placed onto a network. That new device may be totally RFC-compliant but does things in a way that tickles the existing devices in bad ways and causes them to misbehave or fail.
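To illustrate (using a made-up one-byte flags field, not any real protocol): a receiver that insists reserved bits be zero works fine for years, until a newer, still-compliant device starts using one of them.

```c
#include <stdint.h>

/* Hypothetical 1-byte flags field: bits 0-1 are defined today; bits 2-7
   are "reserved -- ignore on receipt" in our imaginary spec. */
#define FLAGS_DEFINED_MASK 0x03u

/* Brittle receiver: silently drops any packet with a reserved bit set.
   It runs fine -- until a newer, fully compliant device starts setting
   bit 2, and previously working gear begins discarding its traffic. */
static int accept_brittle(uint8_t flags)
{
    return (flags & ~FLAGS_DEFINED_MASK) == 0;
}

/* Tolerant receiver: acts only on the bits it understands. */
static int accept_tolerant(uint8_t flags)
{
    uint8_t known = flags & FLAGS_DEFINED_MASK;  /* mask off the rest */
    (void)known;                                 /* act on known bits only */
    return 1;
}
```

Neither device is "wrong" by the letter of the spec; the failure only appears when the two generations meet on the same wire.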

And over time there will be different implementations or generations of devices that adopt what seem like small variations on algorithms that, when they encounter one another, result in destructive patterns. We see this with humans who approach one another on a sidewalk. Most of the time these people will not collide. However, we have all seen and experienced a feedback loop in which each person steps to the same side, and the two people lock-step, going left-and-right in unison, until they collide or stop just before colliding.
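The sidewalk dance can be modeled in a few lines (a toy model, assuming two lanes and simultaneous, identical dodge rules):

```c
/* Toy model of the sidewalk feedback loop.  Two walkers share a
   two-lane corridor (lanes 0 and 1).  Each applies the identical rule:
   "if we are on a collision course, sidestep to the other lane."
   Because the rule is the same and the moves are simultaneous, the
   conflict is never resolved -- the pair oscillates in lock-step. */
static int steps_until_clear(int a, int b, int max_steps)
{
    for (int t = 0; t < max_steps; t++) {
        if (a != b)
            return t;       /* different lanes: they pass cleanly */
        a = 1 - a;          /* both sidestep...                   */
        b = 1 - b;          /* ...by the same rule, simultaneously */
    }
    return -1;              /* still mirrored after max_steps */
}
```

Humans escape this loop because someone eventually hesitates, breaking the symmetry. Deterministic protocol implementations have no such noise unless their designers deliberately add it, which is exactly why, for example, Ethernet's collision recovery uses randomized backoff.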

This kind of group emergent behavior is not usually seen in lab test conditions or when there are just a few "things" that are interacting. Rather, these behaviors begin to emerge as the number of participants increases. For instance, I have yet to see a test of self-driving vehicles (or flying drones) in which dozens or hundreds of such devices are present and interacting.

Before "internet of things" there was often a human that might notice that something had gone awry. But as we move towards a more integrated, more distributed world with more direct coupling between the control systems and our infrastructure, the space for bad things to happen greatly increases.

We need to begin viewing the internet not as a collection of things but rather as a system of distributed processes. And like all distributed process control systems, we need to recognize that this "internet of things" will have feedback loops that can become unstable if there are data errors, re-sequencing, losses, or delays. We need to recognize that a change in place A may well cause an unexpected effect in place B.
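A sketch of that instability (illustrative numbers, not from any real system): a simple proportional control loop that converges when it sees fresh measurements diverges when the very same measurements arrive a few samples late, as they do when sensor readings cross a congested network.

```c
#include <math.h>

#define DELAY 4  /* measurement lag in control steps (assumed value) */

/* Drive x toward a setpoint of 0 with a proportional controller and
   return the worst |error| observed.  When "delayed" is nonzero, the
   controller acts on a DELAY-step-old measurement instead of the
   current state.  The same gain that converges smoothly on fresh data
   over-corrects on stale data, and the loop oscillates with growing
   amplitude. */
static double worst_error(double gain, int delayed, int steps)
{
    double hist[DELAY] = {0};       /* ring buffer of past states */
    double x = 10.0;                /* start well off the setpoint */
    double worst = fabs(x);
    for (int t = 0; t < steps; t++) {
        int idx = t % DELAY;
        double measured = delayed ? hist[idx] : x;  /* stale vs fresh */
        hist[idx] = x;              /* record state, then correct */
        x -= gain * measured;       /* proportional correction */
        if (fabs(x) > worst)
            worst = fabs(x);
    }
    return worst;
}
```

With a gain of 0.5, fresh measurements halve the error every step; with the four-step lag, that same gain is past the loop's stability boundary and each correction lands too late, amplifying rather than damping the error.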

In designing the internet, we have been very lax about deploying the monitoring points and tools needed to detect and manage these kinds of failures.

Take a look at a presentation on some of these issues that I made back in 2003 at the 8th IFIP/IEEE International Symposium on Integrated Network Management. I titled it "From Barnstorming to Boeing — Transforming the Internet Into a Lifeline Utility".

The presentation slides are at:
http://www.cavebear.com/archive/rw/Barnstorming-to-Boeing.ppt

And my notes (which contain the heart of the talk) are at: http://www.cavebear.com/archive/rw/Barnstorming-to-Boeing.pdf

By Karl Auerbach, Chief Technical Officer at InterWorking Labs

