The Longevity of the Three-Napkin Protocol

Here is an interesting article for those who may have missed it:

http://www.washingtonpost.com/sf/business/2015/05/31/net-of-insecurity-part-2/

It is not often that I go out to my driveway to pick up the Washington Post (yes, I still enjoy reading a real physical paper, perhaps a sign of age) and find that the headline is NOT about how the (insert DC sports team here) lost last night but is instead about an IT technology. That technology is the Border Gateway Protocol (BGP), a major Internet protocol that has been around for more than a quarter century, since before the Internet was commercialized and before most people even knew what the Internet was. The article is interesting but, of course, nothing in it is news to anyone who has been submerged in BGP long enough to know the issues. It is in some ways a narrative of the Internet's history in general, where much has been built on collaboration and trust relationships. That is difficult and counterintuitive for the average layperson to accept in today's security-hyped world, but it has always been the norm for the Internet, and nowhere is that clearer than with BGP and peering.

There has always been a degree of inherent trust between Internet Service Providers when peering with each other and with downstream transit customers. There is no one-size-fits-all best practice for managing BGP routes, and certainly not one that is mandated for all providers. While some ISPs may authenticate the BGP session and/or apply strict prefix-list or AS-path filtering, others may not, or may do it in less secure ways. It is not hard to inject a false route into someone's network, or to accept one into yours, whether mistakenly or maliciously, if filters are inaccurate or non-existent. We long ago wondered when (or how many times already) someone would deliberately inject the route of a content provider or commercial company they didn't like so as to black-hole or hijack its traffic, or just create mischief. The Pakistan/YouTube outage was no surprise at all, even if that one was just a mistake. It would be easy to do deliberately.
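
To make the filtering idea concrete, here is a minimal sketch in Python (not any vendor's actual filter syntax) of the logic behind a per-customer prefix list: a route announcement is accepted only if the announced prefix falls within a block that customer is authorized to originate. The ASN, prefixes, and function name are hypothetical documentation examples.

    import ipaddress

    # Hypothetical allow-list: prefixes AS64500 (a documentation ASN) may originate.
    ALLOWED = {
        64500: [ipaddress.ip_network("203.0.113.0/24"),
                ipaddress.ip_network("198.51.100.0/24")],
    }

    def accept_route(origin_asn, announced):
        """Accept an announced prefix only if the allow-list for that origin covers it."""
        try:
            prefix = ipaddress.ip_network(announced)
        except ValueError:
            return False                  # malformed prefix: reject outright
        return any(prefix.subnet_of(block) for block in ALLOWED.get(origin_asn, []))

    print(accept_route(64500, "203.0.113.0/24"))   # True: within the customer's block
    print(accept_route(64500, "192.0.2.0/24"))     # False: someone else's space, dropped

The point is simply that the safety of the whole arrangement rests on whether such a list exists and is accurate for every peer and customer.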

An ISP may have a policy requiring that a transit customer provide, in writing, all of the routes it will advertise (specifically or in aggregate blocks) in advance of service commissioning so that a strict route filter can be built. This protects against mistakenly advertised routes. Once service is connected, it is typical for new route announcements (if not already covered by an existing aggregate) to be preceded by a Help Desk ticket and a filter update. In theory, the ISP should validate all such routes provided by its customer PRIOR to permitting them to be advertised to the Internet, to ensure they actually belong to the customer and that no mistakes were made.
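
A rough sketch of that pre-commissioning sanity check, applied to the customer's written list of routes, might look like the following. The bogon list is abbreviated and the /24 cutoff is an illustrative assumption, not a standard.

    import ipaddress

    # Abbreviated bogon list and an assumed /24 cutoff, for illustration only.
    BOGONS = [ipaddress.ip_network(n) for n in
              ("10.0.0.0/8", "172.16.0.0/12", "192.168.0.0/16",
               "127.0.0.0/8", "169.254.0.0/16")]
    MOST_SPECIFIC = 24

    def vet_submission(requested):
        """Keep only the submitted prefixes that pass basic sanity checks."""
        accepted = []
        for text in requested:
            try:
                prefix = ipaddress.ip_network(text)
            except ValueError:
                continue                   # typo in the written request
            if prefix.prefixlen > MOST_SPECIFIC:
                continue                   # more specific than the ISP will carry
            if any(prefix.subnet_of(bogon) for bogon in BOGONS):
                continue                   # private/reserved space must never be advertised
            accepted.append(prefix)
        return accepted

    print(vet_submission(["198.51.100.0/24", "10.1.0.0/24", "203.0.113.0/28"]))
    # -> [IPv4Network('198.51.100.0/24')]

Checking that the routes actually belong to the customer is the harder part, and that is where the public registries discussed next come in.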

There have been efforts to have IP address blocks and their underlying routes registered in automated public repositories such as the databases of the Regional Internet Registries (RIRs), the entities that allocate IP address blocks in the first place. Routes are registered with information pertaining to their "owners," a process sometimes called "SWIP'ing" after the Shared WHOIS Project (SWIP). There is also the Merit RADB registry, among other Internet Routing Registries. So, historically, there has been intent to make the routes belonging to a given entity publicly visible so that filters can be updated automatically as things change. But because there is no central enforcement entity, how well registration is kept up varies, as does how consistently ISPs take advantage of these tools to ensure route integrity.
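
For illustration, a script could pull the route objects registered for an origin AS from an IRR such as RADB over the plain whois protocol (TCP port 43) and rebuild a prefix filter from the result. The inverse-query syntax used below ("-i origin ASxxxx") is an assumption on my part; consult the registry's documentation before depending on it.

    import socket

    def irr_routes(asn, server="whois.radb.net"):
        """Collect the 'route:' prefixes registered in the IRR for a given origin AS."""
        with socket.create_connection((server, 43), timeout=10) as sock:
            # Assumed RIPE-style inverse query; verify against the registry's docs.
            sock.sendall(("-i origin %s\r\n" % asn).encode())
            chunks = []
            while True:
                data = sock.recv(4096)
                if not data:
                    break
                chunks.append(data)
        text = b"".join(chunks).decode(errors="replace")
        return [line.split(":", 1)[1].strip()
                for line in text.splitlines() if line.startswith("route:")]

    # Hypothetical usage: rebuild the customer's inbound prefix filter from the result.
    # print(irr_routes("AS64500"))

Of course, this only works if the registrations themselves are complete and current, which brings us back to the enforcement problem.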

I have seen cases where filtering starts off quite strict but eventually loosens: a customer advertises a route that is (correctly) filtered by the ISP's route filter, troubleshoots for a while thinking the problem is on their end, and finally places an angry call to the ISP, who relents and (depending on the NOC engineer) loosens the filter more than it should be. Mistakes in IP addresses and subnet masks are not uncommon either. Even within a single service provider, practice may not be consistent and may come down to the diligence of the staff creating the configurations at the time, whose work may not be double-checked. Ultimately, security is frequently in the hands of those deploying and operating the service, regardless of what policies are set in advance or what auditing is done afterwards. That is true of almost everything in IT, really.
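
As a small aside on address and mask mistakes, even a trivial strict-mode parse catches the most common typo, a prefix whose host bits are set. A quick sketch, using documentation prefixes:

    import ipaddress

    for candidate in ("198.51.100.0/22", "198.51.100.64/24"):
        try:
            ipaddress.ip_network(candidate, strict=True)
            print(candidate, "-> plausible network/mask")
        except ValueError:
            print(candidate, "-> host bits set; likely an address or mask typo")

Whether anyone runs such a check before the configuration goes live is, again, down to the diligence of the people involved.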

Because this article appears in a mainstream publication whose readership is predominantly non-IT laypersons, it may come across as somewhat of a criticism of BGP. I see it much like the articles on DNS that made headlines when the Kaminsky bug came out not so long ago. It was well known that there were security issues in DNS, if not that exact issue; but when something is finally big or interesting enough to make the mainstream news media, the average layperson is left with the feeling that the whole technology is stupid and poorly designed, and wonders why it couldn't have been corrected early on, if not avoided altogether.

Eventually, the author forgives BGP a bit by noting its successes rather than its shortcomings. I would tend to agree, and would put BGP on the short list of technologies directly responsible for the explosion of the Internet, permitting it to scale massively and become what it is today. We should wonder what it would have been like without it, or with something not designed as well, even if it was designed on the back of three napkins. The fact that it has gone for so long with minimal changes and is only making mainstream news headlines in 2015 is a testament to its success and longevity. Few things are designed so well. I would put DNS and RFC 1918/NAT in that same category. Both can be (and have been) widely criticized for various legitimate reasons, but both have been remarkably resilient. We would not be where we are now without them, and without other protocols and technologies too numerous to list, even if improvements are needed or obsolescence eventually arrives.

At the end of the day, I’m less concerned about BGP than I am about how (insert DC sports team here) lost last night.

By Dan Campbell, President, Millennia Systems, Inc.
