The Internet: Missing the Light

Today’s Internet is wonderful for solving hard problems such as connecting to Amazon to buy goods or to Netflix to watch video. Amazon and Netflix, among others, demonstrate what is possible if you put in enough effort.

Yet if we are to understand the Internet we need to look beyond those applications to the simplest one: sending a single bit of information from a light switch to a light fixture.

If we seek to improve on the Internet or, to speak loosely, get more Internet, we need to recognize that its power lies in making it easy for anyone and everyone to create their own solutions by simply exchanging a few bits between two end points.

And what could be simpler than sending one bit from a light switch to a light bulb? If we can’t do something that simple then we are stuck refining what we have now and will find it difficult to move beyond more of the same.

The Light Problem

In 1996 I was on a committee representing Microsoft and working with Honeywell and Intel on what was supposed to become Home Plug and Play. I wanted to use standard Internet Protocols but I was stymied by the simplest of problems—I couldn’t turn on a light using the Internet protocols!

Of course today people do produce light bulbs and light switches that use IP and I can send a message from the switch to the bulb to turn it on. But how do I set this up and establish the relationship between the switch and the bulb?

We can program a switch to send a message to a bulb. We could then register their identifiers in the DNS just as we do for a website. But does it really make sense to have to query the DNS just to turn on your light? If you lose connectivity and are running on a local generator, does that mean you can’t turn on your lights because you can’t reach the DNS?
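
To make the dependency concrete, here is a minimal sketch in Python (the hostname, port, and one-byte “on” message are all invented for illustration, not any standard):

    # A hypothetical switch that names its bulb in the DNS. The very
    # first step requires a working resolver: no DNS, no light.
    import socket

    def turn_on():
        # This lookup fails if the resolver is unreachable, even though
        # the bulb may be a few feet away on the same wire.
        addr = socket.getaddrinfo("bulb.example.home", 9999,
                                  family=socket.AF_INET,
                                  type=socket.SOCK_DGRAM)[0][4]
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.sendto(b"\x01", addr)  # the single bit: "on"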

Of course using the DNS has other problems. You have to pay an annual fee just to keep the name registered and if you miss a payment you lose ownership. In effect your light switch is a service and you are no longer in control of your own home!

Those who understand the Internet realize you don’t need to use the DNS. If two devices are on the same wire they can simply use the MAC address to send the message. Today IPv6 makes this simpler because you can use the MAC address to form the link-local IP address. This works as long as the two devices are on the same local area network and you don’t need to replace a device (thus changing the MAC address). MAC addresses are assigned from a central source—your DIY device won’t have an official MAC address although usually that doesn’t matter.
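
For the curious, here is a sketch of that construction, the modified EUI-64 method from RFC 4291 that embeds a MAC address in a link-local IPv6 address (the example MAC address is invented, and modern stacks often prefer randomized or stable privacy addresses instead):

    # Form an IPv6 link-local address from a 48-bit MAC address:
    # flip the universal/local bit, insert ff:fe in the middle,
    # and prepend the fe80:: link-local prefix.
    def mac_to_link_local(mac: str) -> str:
        octets = [int(b, 16) for b in mac.split(":")]
        octets[0] ^= 0x02  # flip the universal/local bit
        eui64 = octets[:3] + [0xFF, 0xFE] + octets[3:]
        groups = [f"{(eui64[i] << 8) | eui64[i + 1]:x}" for i in range(0, 8, 2)]
        return "fe80::" + ":".join(groups)

    print(mac_to_link_local("52:74:f2:b1:a8:7f"))  # fe80::5074:f2ff:feb1:a87f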

Using Wi-Fi presents an additional problem. We’ve accepted the misguided idea that security is implemented in the network by putting a perimeter around it. For this to work we have to carefully manage every step along the path, making sure the right credentials are presented at each hop, and any failure along the way will prevent the message from getting through. This is the opposite of the resilience that has made the Internet work.

The other reason for locking down access points is the fear that we will use up the Internet—another consequence of treating the Internet as a service we get from a provider as if we could use up the supply of ones and zeros. From bad metaphors come terrible policies.

At least if you’re careful enough you can get the light switch to work. Until you leave home with the switch. If you are using the DNS and the light bulb has a publicly valid IP address (that is, it is not hidden behind a NAT) you may be able to send a message to the bulb. Probably anyone else can too, since securing that relationship is yet another issue for which we don’t have standard practices. And if you are visiting a friend during some event that leaves local connectivity working but not global, you might be unable to reach the DNS servers at all. Yes, caching helps, but that’s neither a permanent solution nor a reliable one.

Once you’re away from home you find that most access points are locked down; typically you need an account and/or a browser in order to get access even if there is no charge. Sometimes it is as simple as pressing “agree” because the lawyers are doing their damnedest to remove any risks. Sadly this includes the risk of success. The problem is that the applications are totally unaware of why the bits aren’t getting through. There is no way to know, and the blockage may be further downstream. All you know is that the Internet pipe (the path) is clogged.

There is some irony in the fact that some of the efforts to make the Internet more available are making it worse. I now find that my apps often hang because unbeknownst to me I’ve connected to an Xfinity access point and something has gone wrong. Even a well-meaning totally open access point might fail or a local provider may have a policy rule in the path. While there is some route-around in the backbone the edge has brittle single-path (or should we say single-pipe) connectivity.

There are work-arounds for some of this, such as indirect references through a third party (AKA “the cloud”).

But it Works!

There’s some cognitive dissonance here: despite all of these complaints, the Internet works very well. You can reach websites around the world and casually have high-quality video conversations.

And you can get the IP light switch to work, though it may take some effort to work around the obstacles. And therein lies the problem. For high-value applications you can get things to work. For that matter, you could make a phone call across the country and send messages around the world a century ago, long before the Internet.

Trying to understand the Internet by looking at what we do with it limits us to doing more of the same. The Internet has been transformational because it created the opportunity to do what could not be anticipated. In fact the Internet itself wasn’t so much anticipated as happened.

Very simply, we can look at radio packet networks as a catalyst: they were unreliable, but that wasn’t a problem for programmers. If a packet was lost the program could simply resend it. What is less obvious is that not all packets need to get through. If you want the current temperature you just send the latest reading and ignore the ones that failed to get through.
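
The two recovery strategies are worth seeing side by side. In this sketch the peer address and the “ACK” convention are invented for illustration; the point is that the policy lives in the application, not the network:

    import socket
    import time

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(0.5)
    PEER = ("192.0.2.10", 9999)  # documentation address standing in for a peer

    def send_until_acked(payload: bytes, retries: int = 5) -> bool:
        """A packet that matters: the application itself resends it."""
        for _ in range(retries):
            sock.sendto(payload, PEER)
            try:
                reply, _ = sock.recvfrom(64)
                if reply == b"ACK":
                    return True
            except socket.timeout:
                pass  # lost somewhere along the way; just try again
        return False

    def stream_readings(read_temp):
        """Packets that don't: the freshest value supersedes any lost one."""
        while True:
            sock.sendto(f"{read_temp():.1f}".encode(), PEER)
            time.sleep(1.0)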

There was no third party in the path trying to add value by making promises of delivery and allowing only valuable packets through.

The Internet thrived because of this benign neglect. Initially it was useful for applications that weren’t very important, such as (slowly) exchanging files and email. If you wanted a guaranteed service such as a voice conversation then you used a different network—the phone system, designed to guarantee that specific services would work.

The Internet couldn’t make such guarantees because only the applications outside the network knew what the packets meant. In the middle there were just raw packets totally devoid of any meaning or even relationships between the packets.

In the early 1990s the world discovered the World Wide Web. It was one of many experiments enabled by the simple, unfettered connectivity the Internet provided. With no third party in the middle, Tim Berners-Lee was able to experiment and set an example that others could build on.

The big surprise is that the increased demand for “web” created a bigger supply of “Internet”. In technical terms, the more demand there was for the capacity to exchange packets, the more capacity was made available. This is the way real markets work: if customers are buying a product you supply more of it. This is especially true for the Internet because it’s merely a technique for using any available means to exchange packets.

We can casually have conversations over the Internet with no charge for a service like “video” only because no one promises that it will work. The phone network, on the other hand, promises that voice will work. It did try to promise that video would work in the 1960s, but to keep that promise it had to offer expensive, limited black-and-white video that fit within its technical and business constraints. In the end it was a novelty rather than something we could use casually.

Today voice on the Internet not only works but has made the entire existing phone service (the PSTN) obsolete and the FCC is planning to decommission it! Too bad the FCC’s approach is to kill the Internet with the kindness that has doomed the PSTN.

To make matters worse, the current Internet is a work in progress. As we’ve seen, the DNS provides us with a mechanism for finding other end points but not the stable relationships we need. We need to develop appropriate technologies rather than treat such shims as essential elements of Internet architecture and governance.

More Internet by Doing Less

The problem we face in trying to “get more Internet” (more of the benefits we associate with the Internet) is that the Internet isn’t a thing and, more problematic, that the traditional solution-finding processes of public policy and engineering principles such as layering work against us in this case.

In a sense the Internet is similar to other transitions. Railroads were transformative because they enabled commerce over a distance but, over time, became captive to the accidental properties of the rails and rolling stock and the attendant business models. We also “conquer distance” using facilities such as roads and sidewalks. Such facilities differ from railroads in that the value accrues to society as a whole rather than to an owner who must limit access in order to make a profit.

Today we would say that railroads were TaaS, or Transportation as a Service, as opposed to DIY transport in which we walk or use whatever means are available. TaaS is a rent-seeking model that can only provide transport to destinations that generate revenue for the provider.

Traditional telecommunications is CaaS (Communications as a Service) and its history mirrors that of the railroads, to the point that the FCC is modeled on the agency that regulated railroads, the Interstate Commerce Commission. The implicit assumption is that we must have a railroad-like system because in the days of the telegraph and analog telephony, communications was indeed very much like a railroad, with a service provider assuring reliable delivery.

And the Internet, going back to the days of the radio packet networks, has shown us how to do our own communicating using any means available. We understand how to take advantage of opportunities and don’t require ordered or reliable delivery. What we do need is an economic model that doesn’t require a direct relationship between the user (or application) and the parties along the path who may be assisting in the transport of packets. This is more like a sidewalk assisting walking than a railroad’s notion of assistance as a service.

It’s not just about money—relying on a third party for one’s name, as in the case of the DNS, is also a problem. But the economic model is the gating factor because it’s hard to work around a rent-seeker who owns the path and who must make a profit if we are to communicate at all.

We think of telecommunications in terms of the services provided, telephony and television being primary. But now all content is converted to bits, and we can use services from others (VoIP, Hulu, Apple TV, HBO, etc.) as well as services we create ourselves.

In this formulation everything is reduced to charging us to exchange bits. No wonder it’s so important for the carriers to make sure that all the access points are locked down. If the carriers are to charge you for exchanging bits they must first prevent unbilled bits from passing.

The idea that a carrier must prevent bits from flowing doesn’t strike us as strange because we’ve been taught that it is difficult and expensive to exchange bits. And that was true in the days of analog signaling. But think about your home connectivity—you pay nothing per month (within the home) and a gigabit switch is a very inexpensive purchase. That’s because we allow Moore’s Law to work where we have ownership. It doesn’t work when we have a rent-seeker setting the rules.

Yet we accept this because we treat the web as if it were provided by the carrier rather than as something we do with available connectivity. As long as high-value services such as websites, video, and commerce seem to work, everything seems fine. If anything we want more of the same, which translates into asking for faster and faster Internet.

We also want solutions so we ask for smart cities instead of empowering the smart people who live there. It may be nice to have a car that drives itself but it would be nicer to have a city that provides information no matter how the car might be driven.

Just as with our homes, there is no bounded thing that is “The Internet” or “the smarts”. Smart is what we do with the opportunities and information available. By thinking of the Internet as something separate and apart we accept the idea of a wall around the Internet which we must pay to cross. We let ourselves be clients rather than recognize our own empowerment.

Before we can address the protocol and policy issues we need to understand and appreciate the value of unfettered connectivity.

Back Home

I started this essay with the example of a light switch. It’s a very simple problem but a defining one because when we get down to the essentials it’s about sending a single message from point A to point B. All the rest is built on that simple capability.

When we look at the state of home control we find that it is possible to pay someone to put in a custom system for a house. Advanced users can do what I did and buy “smart” switches and devices and get them to work within the constraints I describe above.

But the norm for home control is still the common light switch. It is indeed a simple form of home control even if it is limited to directly controlling a single light by connecting or disconnecting it from a power source.

Everything changes once we think of this as a control system with the light switch, in effect, sending a message to the bulb to turn on (or off).

The relationships are no longer defined by the wiring and can be changed in software. We can also define rules by incorporating sensors and do something sophisticated like “first open the shades and then, if necessary, tell the bulb to provide light”.
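
As a sketch of what such a rule might look like in software (the device objects and the lux threshold are invented placeholders, not any real home-control API):

    AMBIENT_OK = 300  # lux; an assumed "bright enough" threshold

    def on_switch_pressed(shades, light_sensor, bulb):
        shades.open()  # first try daylight
        if light_sensor.lux() < AMBIENT_OK:
            bulb.on()  # only then ask the bulb for light

The relationship lives in the rule rather than in the wiring; re-pairing the switch with a different bulb, or a different policy, is a software change.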

We’re not just replacing the wire but shifting from a world of physical objects to a world of abstractions. Key to taking advantage of this is our ability to focus on relationships. Rather than treating communication as a service it becomes something we do with the available facilities.

To put it in pragmatic terms I can simply send a message from the switch to the bulb without having to worry about a provider expecting payment to let the bits pass.

This works to the extent that we can treat the wires and radios as a common facility, just like halls and sidewalks. We then extend the range of “just works” by joining with our neighbors to pay for the wires the way we pay for the sidewalks.

This approach mirrors the history of the Internet. First, individual research groups would pay for the wires (and radios). As the network’s range increased it would be funded by a university or a corporation. These systems were interconnected using leased lines as tunnels between the schools.

This is what happens with your home network now. You interconnect with others by buying a path through telecommunications. We call this “broadband” because it’s typically a tunnel through the broadband facilities. But don’t let that confuse you. You’re not consuming or using up “Internet”.

Ambient Connectivity

The term “provider” and the business model of telecom will soon become relics of the past, but no matter. The new story is powerful and will start to dominate.

To avoid confusion I’m using the fresh term Ambient Connectivity to emphasize this simple connectivity.

Just as corporations, universities, and research groups fund common connectivity, the management board of an apartment house can fund connectivity for a building. Eventually this perimeter would grow to cities and beyond. To connect beyond that, the community as a whole would pay for a shared connection and get the benefit of dramatically lower cost thanks to its combined purchasing power.

Within these connected areas we will be able to explore the possibilities of the simple connectivity epitomized by the switch/bulb relationship. If we can do connected switches then we can explore healthcare and other applications.

It is this ability to do simple things simply that is at the heart of the innovation and value creation we associate with the Internet.

It isn’t enough to refine today’s applications. We must instead provide the building blocks of the future and we are only at the very beginning of the process.

And then we can begin to address the challenges of making a light switch (and so much more) “just work”.

* * *

My Related Writings

There is a lot that is new in this essay, but there is much that I go into in more detail in other essays. First there are my columns in the IEEE CE Magazine. For convenience I have preprint versions available on my website:

Refactoring Consumer Electronics goes into considerable technical detail about the history of the Internet and the whys of what it is.

(Not) In Control addresses some of the home control issues and the importance of having a common protocol for exchanging messages.

The Internet of Things vs. Access goes into more detail about the problem of implementing the so-called Internet of Things given that we’ve accepted this strange notion that there is an Internet out there somewhere that we have to access.

There are also some essays here on CircleID, including:

Purpose versus Discovery raises similar issues in explaining that the value of the Internet comes from its lack of purpose.

Internet Connectivity: Toward a Sustainable Funding Model goes into more detail about what I call the sidewalk funding model.

Other essays include:

Understanding Ambient Connectivity explains more about the new framing of Ambient Connectivity.

Not Super is a relatively short essay explaining why the so-called broadband business model doesn’t make any sense.

Beyond Limits is a chapter I wrote in 1996 on why Moore’s Law is about markets. While this essay focuses on the Internet, the economic concepts are far deeper and more widely applicable. Writing that chapter helped catalyze my thinking.

By Bob Frankston, IEEE Fellow

Bob Frankston is best known for writing VisiCalc, the first electronic spreadsheet. While at Microsoft, he was instrumental in enabling home networking. Today, he is addressing the issues associated with coming to terms with a world being transformed by software.


Comments

Link-Local addresses are still problematic (Frank Bulk – Jul 26, 2013 9:47 PM)

“Link-Local addresses are still problematic because the full addresses are prefixed by the public address of the local network and that address can change whenever the routing tables change and you don’t even have a valid address until you get your first connection to the public network.”

I don’t believe Link-Local addresses are regenerated when routing tables change, and they definitely aren’t prefixed by the public address of the local network. See https://tools.ietf.org/html/rfc4291#section-2.5.6 and https://en.wikipedia.org/wiki/Link-local_address#IPv6

Oops. You're right (Bob Frankston – Jul 26, 2013 10:08 PM)

Oops. You're right -- link-local addresses do not have a prefix. As to the routing tables -- I meant that the external address changes, though, given that the prefix is not used in link-local addresses, that is moot. I'll see about correcting the technical error.

Link local works (Todd Knarr – Jul 26, 2013 10:34 PM)

Link-local networks (fe80::/10) work without needing a public network prefix. That’s the whole point of them. IIRC there’s also a site-local prefix. There’s also the unique-local block (fd00::/8). What’s missing is the idea of a site-local DNS domain (usable by everything on the local network but not visible outside it) and the ability for users to assign names within that site-local domain. We geeks have that because we regularly run our own nameserver software and can host our own authoritative zones, but standard home routers don’t have that capability.

And of course with the new gTLDs it’s not safe for us to just grab a TLD that we know isn’t used, because it might suddenly be used. It used to be safe to assume that TLDs like .local or .ttk weren’t ever going to collide with the official set of TLDs. Now, we really need a TLD reserved for local use, one that’s guaranteed to never resolve in the global DNS and that can safely be used locally without worrying about collision with a real TLD.

User-assigned code elements (David Conrad – Jul 27, 2013 2:28 AM)

“Now, we really need a TLD reserved for local use, one that's guaranteed to never resolve in the global DNS and that can safely be used locally without worrying about collision with a real TLD.”
I suspect the safest would be the "User-assigned" 2-letter code elements. See the light blue codes in the ISO-3166 Decoding Table.

Link-Local and the DNS do not support persistent relationships (Bob Frankston – Jul 27, 2013 2:40 AM)

Link-Local and the DNS do not support persistent relationships that work independent of the accidental properties of the physical and logical wiring. I used them as examples of what we can do to make things work if we try hard enough but which do not scale. Let's not confuse "I can make it work" with "it works".

DNS doesn't support persistent relationships (Todd Knarr – Jul 27, 2013 5:32 AM)

DNS doesn't support persistent relationships. It's not supposed to. It supports the prerequisite for setting up a relationship: identifying the endpoints. If I want to set up a control relationship between a switch and a light fixture, I first have to know which fixture I want to control and which switch I want to control it with. And I have to be able to identify them at the network level, not the human level. Hence DNS, which provides a way to create human-readable names and associate them with logical or physical network addresses (usually logical, with something like ARP to make the last hop from logical to physical addresses).

Which brings the next issue: how do you identify physical nodes in this network? The equivalent of MAC addresses won't work; the number of devices is too large (I'd guess conservatively tens of billions of light fixtures in the US alone, each of which would need an identifier) and their size and capabilities are too limited (light fixtures don't have a convenient console to do configuration with, and remote configuration runs up against the problem of how you identify the device you want to configure). The usual solution is to identify the device by its location in the physical or logical wiring: you know which socket you plugged a fixture into, so "the device plugged into the top socket of outlet 4 in the living room" is an understandable reference for most people and they can assign identity from there.

I think, though, that before going much further a perusal of the X10, KNX and C-Bus standards would be in order. They're designed to address the particular problems of home-automation networking, which is a lot of what you're describing. Other devices have different requirements, e.g. a refrigerator can usefully report back more information and has far fewer problems with identification (homes rarely have more than 1 or 2 refrigerators, ovens and such).

Of course for obvious reasons you want the network limited to the home itself with no ability to communicate with the outside world without assistance. We have enough problems with attacks on computers; we don't need to encourage kids with antisocial attitudes and too much time on their hands to start cycling people's lights at 2Hz at 1AM.

I'm only using the home as an example (Bob Frankston – Jul 27, 2013 5:50 AM)

I'm only using the home as an example. This is not about "home automation". It's about maintaining a persistent relationship between two end points as they move around the world or the house. It's just an example and you need to think about the architecture rather than just specific cases.

Am correcting the comment on link local (Bob Frankston – Jul 26, 2013 11:03 PM)

Am correcting the comment on link local.

As to local DNS—yes, geeks can work around a lot of issues but those solutions don’t really scale nor address the fundamental issues. That’s my point in listing some of the issues one faces in trying to do something simple.

Removed spurious comments on link-local prefixes (Bob Frankston – Jul 27, 2013 12:07 AM)

(In case later readers are puzzled by this discussion—it’s now moot)

Relational Time (James Bowery – Jul 30, 2013 7:45 PM)

At the risk of applying Ockham’s Chainsaw Massacre, I’ll boil your rather lengthy discussion down to:

“Key ... is our ability to focus on relationships.”

And at the risk of lengthy digression, let me relate a relevant anecdote:

At the Consumer Electronics Show, back in the summer of 1982, a friend of mine from our old PLATO network days took me to a hotel room where some of his friends were showing off this cool new software application called TK!Solver. 

The thing that struck me about the application was that it did something I hadn’t seen done properly ever since Bruce Sherwood added units to PLATO’s TUTOR compiler. Moreover, I had been looking at adding just such a feature to an authoring language for the network programming language I was then designing as Manager of Interactive Architectures at Viewdata Corp. of America. Another part of that network programming language I had been looking at incorporating was a distributed synchronization system based on an MIT PhD dissertation called “Naming and Synchronization in a Decentralized Computer System”. I had come up with a way of unifying that dissertation’s naming system with the token ID system for a dataflow virtual machine called the U-Interpreter – and thought I could, thereby, incorporate some ideas from a recent Turing Award Lecture titled “Can Programming Be Liberated From the von Neumann Style?”, which set forth a vision for functional programming that still informs much programming language design and research. The thing that bothered me about this functional approach to network programming was that functions are degenerate relations. I was therefore attempting to incorporate the ideas of a paper titled “Introduction to relational programming” by one of the foremost experts in programming language design of that time, Bruce J. MacLennan.

Upon entering the hotel room, I looked around and my eyes met up with a guy named David P. Reed, at which point we both asked of each other:  “What are YOU doing here?” 

You see, Dr. Reed was the author of the MIT PhD thesis and I had flown up to MIT from Miami to meet and talk with him about the isomorphism between his “names” (hierarchical timestamps) and the U-Interpreter’s data token IDs that had been concocted by two guys who just happened to be in the same building (LCS): Arvind and Gostelow. I won’t digress further into that meeting at MIT nor the hotel room meeting.

Anyway, I left Viewdata when my still-nascent architecture was rejected by Knight-Ridder editorial authority as undercutting the business model of Knight-Ridder and its business partners by allowing mere “consumers” to become “producers” of information and services. The exact quote was very close to “Jim, we see Videotex as we the institutions providing you the people with information and services.”

Moreover, when I tried to work with MacLennan to incorporate Reed’s thesis into a relational paradigm he produced a (in my humble opinion) horrible paper that, rather than incorporating the network synchronization into the relational virtual machine, implemented it on top of a predicate calculus virtual machine that dealt with state transitions as assertion and retraction of relationships, with appropriate incremental maintenance of extensional encachements of relational intension. What I wanted – what I needed – was something that reduced time, itself, to a relational construct. I figured out that what I needed was a relational model of time, and I just didn’t have what it took to figure it out on my own; neither the premier programming language designer of the era nor the premier network synchronization expert of the era had what it took either.

About 13 years later I finally got a chance to actually implement my unification between Reed’s thesis and functional programming in an industrial setting at Memex Corporation on College Ave in Palo Alto. It turns out that at that very time, Reed was with Interval Research headquartered just down the street. I didn’t have any particular need to bother Dr. Reed at that juncture since what I was implementing was 17-year-old news. However, it brought back to mind the need for a relational model of time that had blocked my progress years before. Fortunately, one of the members of the board of directors of Memex Corporation was Bob Johnson, a professor from the University of Utah who had previously been CTO at Burroughs Corporation where, among other things, he had developed the magnetic ink used to print the bank routing and account numbers you see at the bottom of all your checks. The reason this was “fortunate” is that Dr. Johnson had been struggling with a way to represent networked states in mathematical form. He found that he needed to generalize probability to include not only negative probabilities but also complex probabilities. He, of course, immediately recognized the intersection of this with quantum probability amplitudes. He never published these results as far as I know, but they did set me looking for someone, anyone, who had found a similar link and had pursued a relational derivation of the same sorts of generalized probabilities.

My search took me, again, to Interval Research where a consultant named Tom Etter, had been hired to advance quantum information systems.  Tom had been able to apply Macky’s “spinor” transformation to reduce complex probability amplitudes to a degenerate case of real-valued probability amplitudes, with the remaining “weirdness” that these probabilities could take on negative, as well as positive, values.  Relations can be viewed as probability distributions by treating each relationship as a case – allowing duplicate cases for each observation of that relationship resulting in a case count for each observed relationship.  But what about negative probabilities required by Etter’s Macky-simplified quantum programming?  The answer is to allow negative observations.  What does a “negative observation” even mean?  Well, that’s where we get quantum weirdness.  It is also where we find the basic building block to build time.

Interval Research dissolved before Tom could get very far, but I had turned down a position with Interval so I could go to work at Hewlett-Packard for more money. Lo and behold, about the time Interval went belly up, a $500M project got started at HP called “eSpeak” which was touted to be about creating “Internet Chapter II” based on “services”. Sound familiar? My project at that time had been gluing together all of HP’s websites into a single sign-on CRM. Having seen that succeed, and having some inkling of my deep background in networking, the eSpeak guys were trying to get me to go to work for their project. I declined because I couldn’t understand their idea. I finally agreed to go over to the eSpeak project on condition that I be allowed to pursue what I knew to be the necessary paradigm for “Internet Chapter II”. After some haggling to explain why I needed Tom Etter – and indeed having to threaten to resign if I couldn’t get him – I was able to hire Tom for a few months, during which he and I worked through the paper now residing at the Boundary Institute’s website, titled “Relation Arithmetic Revived” by Tom Etter, in which Tom debugs Principia Mathematica’s fourth volume on relation arithmetic and provides the start of a rigorous foundation for a new programming language paradigm. Another, very closely related, paper completed during that time was “Structure Theory”, which described a theory of relations that incorporated counts for columns as well as rows – including negative counts for both.

This brings me full circle to TK!Solver.

Columns are dimensions. In Tom’s theory a column count can be thought of as the exponent on a dimension, such as kg*m/s^2 where the column ‘s’ has a count of -2. By developing Russell’s relation arithmetic as debugged by Tom, dimensional analysis, hence units arithmetic, falls out as an emergent phenomenon – along with the core laws of quantum mechanics. Types disappear into dimensional commensurability.

No one at the eSpeak project understood the direction Tom and I were taking, and we exited that project together, one month before the burst of the DotCom bubble – never to be employed again. 

Tom died April 1, 2013 after suffering from dementia following his wife’s death a few years earlier.

Thanks for the memories (Bob Frankston – Jul 30, 2013 8:38 PM)

Thanks for the memories. Of course David Reed was a big influence on my thinking. Was it Ray Ozzie from the PLATO days? We had lots of great people at Software Arts in those days. Though VisiCalc is gone, you can still buy TK!Solver: http://www.uts.com/ItemDetails.asp?ItemID=0100-50-0010-00. Amazing.

Ray and Racter (James Bowery – Jul 30, 2013 8:54 PM)

Yes, it was Ray.  I agree—quite a crew.

Unfortunately I didn’t know Etter at that time or I would have put him together with Reed and tasked them to solve the hard problem; maybe TK!Solver could have evolved into a true “Internet Chapter II” early on and prevented a lot of blood-letting confusion. However, Etter, at that time, had not yet developed his theory of quantum information (called “link theory”) and was playing with statistical linguistics in the form of his toy program called “Racter”—so perhaps they wouldn’t have had the necessary theoretic tools.
