
Researchers Propose Faster, Safer Internet by Abandoning TCP/IP Protocol

Researchers at Aalborg University in Denmark, working with MIT and Caltech, reckon that the Internet can be made faster and more secure by abandoning the whole concept of packets and error correction. Error correction slows down traffic because, in many cases, chunks of data have to be sent more than once. The researchers use a mathematical technique instead: a formula that works out which parts of the data didn't make the hop. They say it can take the place of the packet resend.
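The approach described is generally known as random linear network coding: instead of resending lost packets, the sender transmits extra random combinations of them, and the receiver recovers the originals by solving a linear system. A minimal sketch over GF(2) — all function names here are illustrative, not the researchers' actual code:

```python
import random

def rlnc_encode(packets, n_coded, rng=None):
    """Encode k equal-length source packets into n_coded random linear
    combinations over GF(2). Each coded packet carries its coefficient
    vector plus the XOR of the source packets it selects."""
    rng = rng if rng is not None else random.Random()
    k, size = len(packets), len(packets[0])
    coded = []
    while len(coded) < n_coded:
        coeffs = [rng.randint(0, 1) for _ in range(k)]
        if not any(coeffs):
            continue  # an all-zero combination carries no information
        payload = bytearray(size)
        for c, p in zip(coeffs, packets):
            if c:
                for i in range(size):
                    payload[i] ^= p[i]
        coded.append((coeffs, bytes(payload)))
    return coded

def rlnc_decode(coded, k):
    """Recover the k source packets by Gaussian elimination over GF(2).
    Returns None if the received combinations do not span all k packets."""
    rows = [(list(c), bytearray(p)) for c, p in coded]
    for col in range(k):
        # Find a pivot row with a 1 in this column.
        pivot = next((i for i in range(col, len(rows)) if rows[i][0][col]), None)
        if pivot is None:
            return None  # rank-deficient: need more coded packets
        rows[col], rows[pivot] = rows[pivot], rows[col]
        pc, pp = rows[col]
        # XOR the pivot row out of every other row with a 1 in this column.
        for i in range(len(rows)):
            if i != col and rows[i][0][col]:
                c, p = rows[i]
                for j in range(k):
                    c[j] ^= pc[j]
                for j in range(len(p)):
                    p[j] ^= pp[j]
    return [bytes(rows[i][1]) for i in range(k)]

# Demo: 3 source packets, 8 coded packets, a lossy channel drops 3 of them.
src = [b'hello', b'world', b'again']
received = random.sample(rlnc_encode(src, 8), 5)
print(rlnc_decode(received, 3))  # usually the sources; None if unlucky
```

The point is that no particular coded packet matters: the receiver can decode from any k linearly independent combinations it happens to receive, with no resend round-trip.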

Read full story: Network World

Related topics: Internet Protocol



Comments

Can anyone understand this presentation? The Famous Brett Watson  –  Aug 12, 2014 4:25 AM PDT

The linked article is empty hype, and only useful for its links to better sources, such as the YouTube lecture. Summary of that lecture: we take packets, transcode them through a random matrix, and magic happens. Beyond that, it's unclear to me what's going on.

Towards the end of the presentation, he talks about encoding data for storage in the cloud, using multiple storage services in a manner similar to RAID. It sounds like the rough equivalent of putting your data through an all-or-nothing transform, then distributing it across multiple stores in such a way that any N of the M stores can reconstruct the data, but any fewer than that loses some data and renders the package transform irreversible.
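The any-N-of-M reconstruction described here is the defining property of an erasure code. The simplest instance, N = M − 1 via a single XOR parity shard, can be sketched as follows (function names are illustrative, not from the presentation):

```python
from functools import reduce

def xor_bytes(a, b):
    """Byte-wise XOR of two equal-length byte strings."""
    return bytes(x ^ y for x, y in zip(a, b))

def make_shards(data, m):
    """Split data into m-1 equal data shards plus one XOR parity shard,
    so that any m-1 of the m shards can rebuild the original."""
    k = m - 1
    size = -(-len(data) // k)                 # ceiling division
    padded = data.ljust(k * size, b'\0')      # pad so the shards divide evenly
    shards = [padded[i * size:(i + 1) * size] for i in range(k)]
    return shards + [reduce(xor_bytes, shards)]

def rebuild(survivors, m, orig_len):
    """survivors: {shard index: shard bytes}, with at most one shard missing.
    Because all m shards XOR to zero, the XOR of any m-1 of them equals
    the missing one."""
    if len(survivors) < m - 1:
        raise ValueError("lost more shards than this code can tolerate")
    if len(survivors) == m - 1:
        missing = next(i for i in range(m) if i not in survivors)
        lost = reduce(xor_bytes, survivors.values())
        survivors = dict(survivors)
        survivors[missing] = lost
    return b''.join(survivors[i] for i in range(m - 1))[:orig_len]
```

Real deployments use stronger codes (Reed–Solomon, for instance) that tolerate more than one lost store, but the principle is the same: redundancy buys loss tolerance at the cost of storing extra data.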

Exactly how this maps back to an improvement for multi-path end-to-end communications is a bit of a mystery. I can see how it might be used to transmit data redundantly over multiple network paths, rendering it more resistant to random packet loss, but that improvement comes at the expense of more packets in the network — hardly a free lunch. Maybe the approach has merit for networks with many unreliable paths, and I'm sure that there are networks like that, but I don't think it's about to make "The Internet" faster and more secure.

Then again, I struggled to understand most of the presentation, so I may be missing the point. Does anyone else have any insight into the matter?




