Protecting the Internet: Certified Attachments and Reverse Firewalls?

In many respects the internet is going to hell in a handbasket.

Spam, phishing, DNS poisoning, DDoS attacks, viruses, worms, and the like make the net a sick place. It is bad enough that bad folks are doing this. But it is worse that just about every user computer on the net offers a nice fertile place for such ill behavior to be secretly planted and operated as a zombie under the control of a distant and unknown zombie farmer.

Most people still think that the main risk of being on the net is the risk that one’s own machine might be damaged from things lurking out there on the net.

Some of us are coming to the converse point of view that the net is being endangered by the masses of ill-protected machines operated by users.

For decades upon decades Ma Bell (AT&T) insisted that the telephone network be protected against the dangers of non-Bell phones and other equipment. This reached the height of absurdity with the Hush-A-Phone case, when AT&T claimed that an innocent plastic hand could deafen operators, shock linemen off of poles, and otherwise wreak havoc.

Yet Ma Bell had a point - the telephone network could be damaged if I were to plug my Tesla-Coil Phone or my Arc-Welder Phone into the little phone jack on my wall. There clearly are some limits.

And those limits were found - today in the US, and I imagine in most other countries, telephones must pass muster and obtain a certification before they may be legally plugged into the telephone network.

Is it unreasonable to conceive of a day, perhaps a day not all that far distant, when only certified equipment can be legally plugged into the Internet?

When this thought first went through my head I said, nah, no way. I was thinking “a requirement to certify personal computers is a death knell for the kind of innovation we have had inside PC’s.” But then I looked at my own setups and considered how most people connect to the net: via intermediary boxes. It occurred to me that what would have to be certified are those intermediary boxes, not the user PC’s or the software they run.

At home I have a nice little router attached, in turn, to my nice little DSL box. These sit between me (the user) and the network. These are in a position not unlike that of the old ISDN NT-1 protection device. At the office I have a not-so-little router that sits between the internet at-large and my office networks.

The burden of certification would fall on exactly those companies best prepared to deal with the issue - companies like Cisco (Linksys) or Netgear - who build attachment devices. These devices are not open to general programming and have a well defined, and relatively fixed, function.

In order to obtain a certificate these devices would have to demonstrate that they offer robust protection to the network from adverse behavior on the customer side of the internet/customer-premise demarcation. In other words, part of the certificate would require that the device operate as a reverse firewall.

That’s easier to write than to do. When viewed through a peephole in which packets are observed one at a time or with only limited context, it is difficult to recognize and block behavior that constitutes a danger to the internet. (In fact the whole idea of what kinds of actions are dangerous is still somewhat obscure and few objective principles have been enunciated - and I once more refer to my First Law of the Internet as an attempt to propose one such principle.)

Despite the difficulty of finding a fully satisfying general definition, there are certainly several specific things that could be required for a certificate. For example, the following restrictions on outbound packets could be implemented without too much effort and would not significantly impair anyone's ability to use the internet or to create new, innovative uses.

  • Block the outflow of packets bearing false source addresses.
  • Block certain illegal bit patterns (e.g. TCP SYN+FIN or FIN+RST).
  • Require TCP packets to be related to established connections.
  • Block IP fragments and excessive ICMP activity.
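To make the four restrictions above concrete, here is a minimal sketch of how such an egress filter might be expressed in software. This is purely illustrative: the `Packet` record and the `ReverseFirewall` class are hypothetical names invented for this sketch, not part of any real router firmware or library, and a real device would of course work on raw frames rather than a tidy data structure.

```python
from dataclasses import dataclass, field
from ipaddress import ip_address, ip_network

# Hypothetical packet summary; the field names are illustrative only.
@dataclass
class Packet:
    src: str
    dst: str
    proto: str                                   # "tcp", "udp", or "icmp"
    tcp_flags: set = field(default_factory=set)  # e.g. {"SYN"} or {"FIN", "ACK"}
    is_fragment: bool = False

class ReverseFirewall:
    """Sketch of an egress filter for a customer-premises attachment device."""

    ILLEGAL_FLAG_COMBOS = [{"SYN", "FIN"}, {"FIN", "RST"}]

    def __init__(self, local_net: str, icmp_budget: int = 100):
        self.local_net = ip_network(local_net)
        self.connections = set()        # (src, dst) pairs opened from inside
        self.icmp_budget = icmp_budget  # crude stand-in for a rate limiter

    def allow_outbound(self, pkt: Packet) -> bool:
        # 1. Anti-spoofing: source address must belong to the customer's prefix.
        if ip_address(pkt.src) not in self.local_net:
            return False
        # 2. Drop IP fragments outright.
        if pkt.is_fragment:
            return False
        if pkt.proto == "tcp":
            # 3. Reject illegal TCP flag combinations.
            for combo in self.ILLEGAL_FLAG_COMBOS:
                if combo <= pkt.tcp_flags:
                    return False
            # 4. Stateful check: a bare SYN may open a connection; any other
            #    segment must match a connection the inside host initiated.
            key = (pkt.src, pkt.dst)
            if pkt.tcp_flags == {"SYN"}:
                self.connections.add(key)
            elif key not in self.connections:
                return False
        # 5. Cap ICMP activity with a simple budget.
        if pkt.proto == "icmp":
            if self.icmp_budget <= 0:
                return False
            self.icmp_budget -= 1
        return True
```

In practice this kind of logic would live in the forwarding path of the attachment box (the way stateful rules do in today's NAT routers); the point of the sketch is only that each of the listed restrictions is a small, mechanical check.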

I’m sure that this list could be easily extended without getting into contentious issues such as how a user might offer a network service rather than simply being a consumer of such services.

Bad people will ignore the requirement. But if good folks, the kind of people who make up the vast majority of machine owners, did use a certified attachment device then today’s big zombie farms would lose much of their ability to do bad things.

There are certain other potential benefits. For example, a certified box at the customer demarcation is a nice place to do remote loopbacks so that ISPs could more quickly diagnose and resolve service issues.

Of course this is yet another layer of regulation. And it’s imperfect and incomplete - it’s not a panacea. But I am not convinced that it is an idea that should be discarded without serious contemplation of the costs (long and short term) and the benefits.

---
Originally published on CaveBear Weblog.

By Karl Auerbach, Chief Technical Officer at InterWorking Labs

Comments

The Famous Brett Watson  –  Mar 17, 2005 1:32 AM

The analogy with telephone equipment is limited in its utility. Telephone equipment is regulated in terms of its electrical characteristics, and most domestic modems are likewise regulated because they *are* telephone equipment. And even when the devices are not explicitly regulated as part of the telephone network, they attach to some physical network that has its own set of requirements, or operate wirelessly in accordance with other regulations.

I’m aware that you’re not talking about physical layer compatibility, but that was the analogy you drew, and I think it could create confusion. Regulation of the physical layer is very different from regulation of the layers above that point. You’re talking about regulation that sits somewhere between “device certification” and the likes of CAN-SPAM.

My most significant concern with your proposal is that you’re already looking to regulation as a solution when I don’t think we have a sufficient understanding of the problem. If we regulate at a low level, specifying that devices must react in such-and-such a way to particular traffic patterns, we may possibly defeat a number of current attack vectors. At the same time, I’m quite sure that other vectors would remain open, or be discovered, and so the net effect would be like rearranging deck chairs on the Titanic. I doubt that the regulative effort could ever pay off in positive results, and I despise regulation for its own sake.

If you are going to consider regulative efforts, then I have an alternative target for you to consider. Make ISPs liable for the hostile traffic generated by their customers, with protection clauses for ISPs that take “reasonable measures” to prevent abuse and halt it when alerted to it. If ISPs decide that your black box solution is a good “reasonable measure”, then they are welcome to use it. They could just as easily black-box their own end of the link, or consider any number of other possible solutions.

My rationale for this approach is this: the real problem actors in this scenario are the ISPs that don’t want to be responsible for activity on their own networks. Some ISPs are already well-behaved in this regard; others are the devil incarnate. The bad guys see their lack of action as a cost saving measure, since they tend not to bear any direct costs for the garbage their networks spew. They are the Internet equivalent of industrial polluters, and need to be regulated as such.

Given that there are good ISPs out there already, it may be sufficient to make the threat, “industry, regulate thyself lest the government intervene”. Just as with your proposal, mine requires significant further thought, but I’d be much happier with it as an overall direction.

Stephane Bortzmeyer  –  Mar 27, 2005 8:19 PM

> Block certain illegal bit patterns (e.g. TCP SYN+FIN or FIN+RST).

SYN+FIN is not illegal. See RFC 1644.

The fact that even Karl Auerbach cannot get the list of forbidden things right clearly shows that it is a doomed task. I agree with Brett Watson: it's much simpler to protect against physical dangers (where it is clear that a behavior is a danger) than against behaviors, where there is no possible definition of dangerous behavior.

Karl Auerbach  –  Mar 28, 2005 12:57 AM

Partial response to Brett Watson and Stephane Bortzmeyer:

You are both right that it is very hard to define a good “reverse firewall” set.  But I remain convinced that a few relatively simple things - such as source address protection - could solve a large number of problems.

As for SYN+FIN: RFC 1644 (T/TCP) has remained an "experimental" RFC since 1994 and has well documented security issues.  SYN+FIN, because it has few, if any, legitimate uses, is one of those things that mom-and-pop PC's really ought not to be originating until the state of the art in T/TCP significantly matures.  But the larger question you are raising is: what ratio of marginal use to improper use justifies blocking or allowing a protocol pattern?  Given the Grokster case coming up this week, that is a rather timely question.

Brett suggests that ISP's ought to be subject to liability when they allow their customers to do bad things.  That approach is a valid one and it might be the right one.  (And in that case, the kind of device I'm talking about might be a useful tool for ISPs to consider as a means of constraining what their customers do.)

The larger issue is not whether ISP's impose the protection or whether it is done via a certified attachment device.  Rather, it is that some penalty be imposed for "doing bad things" to the net (and to other users).  Because the net is worldwide in scope, the means to impose that kind of responsibility is hard to establish.  And unless the responsibility falls equally on everyone, those who choose to ignore it get an advantage.

ISP’s have had a long time to get their act together and to prevent their customers from doing bad things.  And, unfortunately, the good guys have been good and done the right thing and the bad or uncaring guys have done nothing and let the rest of us bear the burden and the costs.

I’m not particularly thrilled with the idea of a device that sits between user machines and the net to prevent those user machines from doing bad things.  But given the rapidly deteriorating state of the net and the rise of armies of zombie machines I’m not particularly hopeful that other approaches will work.

And as I mentioned in my main post - my goal is not to prevent people from doing bad things, rather I’m trying to diminish the amount of damage that can be caused through the capture and use of machines of innocent users.

