
The Problem With HTTPS SSL Runs Deeper Than MD5

The recent research highlighting the alarming practice of Secure Sockets Layer (SSL) Certificate Authority (CA) vendors using the MD5 hashing algorithm (which has been known to be broken since 2005) has exposed a major crack in the foundation of the Web. While the latest research has shown that fake SSL certificates with MD5 hashes can be forged to perfection when the CA (such as VeriSign’s RapidSSL) uses predictable certificate fields, the bigger problem is that the Web has fundamentally botched secure authentication.

The problem is that web browsers make SSL entirely optional and rely on user vigilance and expertise to ensure that the secure authentication mechanism is working correctly. This is fundamentally unsound because we know that users will almost always make the wrong decision. One study from Harvard showed that most users will ignore an explicit warning about an invalid SSL certificate and, more alarmingly, that 100% of test subjects will log in to a webpage with SSL completely turned off. This isn’t difficult to believe given that half of all banks in the U.S. ran online banking sites with no SSL login page and their customers didn’t care. It took more than a year of public shaming for the majority of those banks to strengthen their security, but even today we can go to online banking sites like wachovia.com and they’ll still ask you to log in on a non-SSL webpage.

This is proof that the Web standard is fundamentally insecure no matter how secure the underlying cryptography is, because web browsers defer to the user for security decisions. Anyone wanting to steal a password doesn’t need to go to the trouble of generating a fake SSL certificate at a cost of several hundred dollars per certificate; they merely need to set up a fake http://bankofamerica.com, a fake http://GMail.com, a fake http://Salesforce.com, or any other website, and the user will gladly hand their password over. DNS hijacking wouldn’t even be necessary if the Internet connection is a wireless hotspot, because it’s trivial to create a fake hotspot.

But there is a 100% reliable way to implement SSL or Transport Layer Security (TLS) authentication securely: most VPN clients and Microsoft Outlook are good examples of how to do it. All of these secure authentication solutions have one thing in common: they take the decision of enabling SSL, and of which certificates to accept, out of the user’s hands. If SSL is turned off, the application simply doesn’t work. If the certificate is invalid, the application simply doesn’t work. The responsibility for keeping things working correctly falls on the shoulders of the Information Technology (IT) department, where it belongs, and users aren’t burdened with confusing security decisions which they aren’t qualified to handle in the first place. The application simply works without hassling the user, and it works securely 100% of the time.
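
To make that fail-closed behavior concrete, here is a minimal sketch in Python (purely illustrative; the host name is a placeholder, and this is not how any particular VPN client or mail client is actually implemented) of a client that refuses to proceed whenever certificate or host-name validation fails, with no user override:

    import socket
    import ssl

    def fetch_securely(host: str, port: int = 443) -> bytes:
        # The default context verifies the certificate chain against the
        # system trust store and checks the host name; any failure raises
        # an exception instead of asking the user what to do.
        context = ssl.create_default_context()
        with socket.create_connection((host, port)) as sock:
            with context.wrap_socket(sock, server_hostname=host) as tls:
                request = "GET / HTTP/1.0\r\nHost: %s\r\n\r\n" % host
                tls.sendall(request.encode("ascii"))
                return tls.recv(4096)

    # An invalid certificate raises ssl.SSLCertVerificationError: the
    # application "simply doesn't work", exactly as described above.
    print(fetch_securely("www.example.com")[:120])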

What is needed is an overhaul of the way web browsers handle secure authentication, but the challenge is transitioning to a more secure mechanism while maintaining backwards compatibility. This could be achieved by implementing a mandatory SSL/TLS HTTPS mechanism bolted on to the existing DNS infrastructure and web browsers, with the following mechanisms:

  • When browsing mandatory SSL/TLS websites or domains, web browsers should always require valid SSL/TLS certificates, with no exceptions and no opt-out interface accessible to the user.
  • Custom DNS records on a domain could explicitly announce to the world which host and/or domain names should operate in mandatory SSL/TLS mode (a hypothetical example record follows this list).
  • Web browsers could have a list of default mandatory SSL/TLS host names, e.g., “secure” or “wwws”.
  • Web browsers should maintain a list of any mandatory SSL/TLS HTTPS websites that the user has visited, so that even if DNS is untrustworthy, as with public wireless hotspots, the web browser will insist on mandatory SSL/TLS even when the DNS infrastructure says otherwise (a sketch of this enforcement logic appears below).
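
No such DNS record type existed at the time of writing, so the announcement record below is invented purely for illustration; it shows how small the opt-in could be, expressed as a single TXT record in the zone:

    $ORIGIN bank.com.
    ; Hypothetical policy record: the listed hosts must only be reached
    ; over valid SSL/TLS, with no user opt-out permitted.
    _mandatory-tls    TXT    "v=mtls1; hosts=www,secure; enforce=strict"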

The mandatory SSL/TLS HTTPS mechanism would require changes to all modern web browsers, but it would work with the existing DNS infrastructure. The system would be backward compatible and the mechanism would remain entirely optional. But once a company, a bank, or a cloud application vendor opts in to mandatory SSL/TLS, it will be able to enforce strong authentication policies across the entire globe.
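
A browser could implement the remember-and-enforce behavior with little more than a persistent set of pinned host names. The sketch below is a rough Python illustration, assuming the hypothetical _mandatory-tls TXT record above and the third-party dnspython library for lookups; none of it reflects any actual browser’s internals:

    import dns.resolver  # third-party dnspython library

    pinned_hosts = set()  # a real browser would persist this across sessions

    def domain_requires_tls(host: str) -> bool:
        # Query the hypothetical policy record for this host
        # (extracting the registered domain is omitted for brevity).
        try:
            answers = dns.resolver.resolve("_mandatory-tls." + host, "TXT")
        except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
            return False
        return any(b"v=mtls1" in b"".join(rr.strings) for rr in answers)

    def effective_url(host: str, requested_scheme: str) -> str:
        if host not in pinned_hosts and domain_requires_tls(host):
            pinned_hosts.add(host)
        if host in pinned_hosts:
            # Once pinned, even a lying resolver (e.g., a rogue hotspot)
            # cannot downgrade the site, and there is no opt-out dialog.
            return "https://" + host + "/"
        return requested_scheme + "://" + host + "/"

Combined with the strict certificate validation sketched earlier, this would be the entire user-visible mechanism: the user types a URL and either gets the authenticated site or gets nothing.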

Unfortunately, many companies value brand names and the perception of security more than actual security itself, and they balk at the notion of good mandatory SSL/TLS security. I’ve known companies that would rather run an invalid, out-of-date VeriSign certificate that cost them several hundred dollars than do it properly with a $20 GoDaddy certificate. These attitudes will eventually change once stronger authentication mechanisms are available and standardized, and hopefully they’ll implement good SSL security. In the long run, DNSSEC could facilitate even easier and simpler Public Key Infrastructure (PKI).

To move forward, we first need to acknowledge that the current HTTPS SSL mechanism is completely broken. It’s hard for a security professional like myself to say that, because I risk reinforcing the attitude that we might as well not bother with SSL security to begin with since it’s worthless to the majority of users. But when the mechanism is so fundamentally broken, it’s time that we all stand up, say enough is enough, and fix it.

The Internet standards bodies need to quickly adopt and standardize a custom DNS record for mandatory SSL/TLS. Then Microsoft, Mozilla, Apple, or Google need to implement the changes in their web browser products to enforce the strong SSL policies.

As Robert Graham, co-founder and CEO of Erratasec, pointed out, I have the implementation process backwards, because implementation has always come before standardization on the Internet. I must have been asleep writing that last paragraph because I should already know better.

Robert Graham: “What made the Internet different from all the other competing internetworks of the 1980s was that people would implement something first, then standardize it. OSI failed because standards led implementations. Either Microsoft or Mozilla should just implement something, and document the DNS format that they will accept. Standards bodies can catch up later.”

Microsoft or Mozilla, the two dominant web browser companies, should simply implement something immediately in their respective browsers and document the required custom DNS records. This would be similar to Microsoft’s SenderID standard for mail authentication, which was eventually standardized. Then, if the mandatory SSL/TLS mechanism gains acceptance, the standards bodies will follow.

Some might ask, “What about Extended Validation (EV) certificates?” They suffer from the same problem in that they rely on the user to decide whether the security mechanism should be enforced, which we know is virtually 100% unreliable. Furthermore, the cost of EV SSL certificates is insane, and they address a problem that is the least of our worries today. There’s nothing wrong with a cheap $20 SSL certificate if the system is systematically enforced correctly. $20 per year is a more than reasonable price given that the Certificate Authority (CA) is merely performing an automated email round-trip verification and a few math calculations to sign your digital certificate.

By George Ou, Policy Director - Digital Society


Comments

login from http page Carl Byington  –  Jan 5, 2009 4:40 PM

In the specific case of http://www.wachovia.com/, they used http to deliver the original form to the browser. However, that content includes form method=“post” action=“https://onlineservices.wachovia.com/auth/AuthService”, which should send the user credentials to Wachovia wrapped in SSL. It would be nice if the browser could show some indication of http/https when the mouse is over a link, but in many cases, including this one, the bank has used JavaScript or other methods to hide that information.

Carl, I don't care how they post it. George Ou  –  Jan 5, 2009 5:18 PM

Carl, I don’t care how they post it.  It doesn’t matter if you “securely” deliver your login credentials to an attacker whose identity you failed to verify due to the lack of HTTPS.  This is the same dumb argument those banks have been using to justify this bad practice.

The whole point of this MANDATORY SSL/TLS article is that the user should be taken out of this decision process and shouldn’t be bothered with it.  The site should be 100% valid or it should not work, period.

Threat model The Famous Brett Watson  –  Jan 6, 2009 2:45 AM

Like most proposals for a security solution, this article dedicates insufficient time to analysing the security problem. What are the possible threats? Which of the possible threats does this solution address? What avenues of attack remain open given each suggested implementation? What are the other drawbacks of the security mechanism? And ultimately, are the benefits (if any) worth the costs?

I have other things to do today, so I’m not going to perform that analysis here and now. I do note, however, that the solution does not address the problem of phishing at all: if a user can be tricked into visiting a malicious website, then no amount of SSL validation (or its absence by publicly stated policy) helps. At a glance, the proposal doesn’t seem to offer any significant benefits, particularly given the implementation effort involved.

The problem and the costs are fairly obvious George Ou  –  Jan 6, 2009 3:03 AM

The problem is that web authentication with SSL can be hijacked nearly 100% of the time through simple social engineering techniques, based on every study conducted.

As for the costs, do you see a problem if your bank account gets hijacked?  Do you see a problem if your email or Facebook account gets hijacked?  Do you see a problem if your eBay account gets hijacked?  Do you see a problem if any website with sensitive information gets hijacked?  If the answer is yes to any of these questions, then you should care about the proposed solution.

The proposed solution would facilitate a systematic way of locking down websites so that they can avoid page hijacking, which would prevent the problems described above.

You should wait till you've analyzed the proposed solution before drawing any conclusions Brett George Ou  –  Jan 6, 2009 3:41 AM

Brett says: “I do note, however, that the solution does not address the problem of phishing at all: if a user can be tricked into visiting a malicious website, then no amount of SSL validation (or its absence by publicly stated policy) helps. At a glance, the proposal doesn’t seem to offer any significant benefits, particularly given the implementation effort involved.”

If the user is redirected to a malicious but seemingly legitimate website (extremely simple to do with a fake hotspot), then the proposed solution DOES fix the problem.  The web browser in this case would systematically refuse to open the site (with no GUI opt-out) if the user has visited the website from a secure location before.  That’s why I proposed that web browsers should maintain a list of mandatory SSL/TLS websites.

If the user visits a site they’ve never been to from a compromised network, then the proposed solution would fail to operate, but this would be a smaller vector for attack.  Covering this last vector of attack would be extremely impractical without universal DNSSEC deployment with ZERO backward compatibility for plain old DNS, and that won’t be practical even when DNSSEC is 99% ubiquitous.  I suppose it might be possible if operating systems maintained a list of DNSSEC-enabled domains, but that would probably be impractical.

The proposed solution can’t possibly cover every angle of attack, e.g., if your computer is completely hijacked by malware.  What it does do is cover one of the biggest vectors of attack for uninfected computer systems when DNS is intact, and it even protects websites the user has already visited when DNS is not trustworthy.

And how does a normal user know jeroen  –  Jan 6, 2009 1:48 PM

And how does a normal user know that https://www.bank.com != https://www.b4nk.com !?
(and better fake names with the nice IDN names of course, or just https://www.thebank.com)

Both will have a valid security certificate, correctly received from a very trusted CA, and both domains and SSL certs are really owned by their respective owners; no need to do any hacking here. There just is no way that the user can know which one is the real one.

The b4nk.com website will just mirror 100% of the look and feel of the bank.com website, and that is all there is to it; the user will just think it is from their bank. That is all there is to it for all those phishing things, nothing difficult. SSL, DNSSEC and all that tech doesn’t help here at all.

One of the few solutions is to have a ‘white list of good domains’, but who decides what goes on the whitelist? IMHO b4nk.com is MY bank and is just as ‘legit’ and ‘good’ and ‘white’ as your bank.com domain. Same for your ‘mandatory tls/ssl’ thing: b4nk.com will be listed in the same way too. Where is the difference?

Jeroen, that's a far cry from proper DNS names George Ou  –  Jan 6, 2009 6:33 PM

“And how does a normal user know that https://www.bank.com != https://www.b4nk.com”

Jeroen, that’s a far cry from proper DNS names.  Like I said, the proposal is designed to help users against the biggest vector of attack, where consumers are protected from http://www.bank.com when they’re really trying to go to http://www.bank.com.

I agree that there will still be some users who might get suckered by a misspelled domain name, or by someone obfuscating the real domain name with a super long URL where one of the host names masquerades as a proper DNS name.  I think Google Chrome has the right idea by highlighting the proper DNS name and graying out everything else.

Why is that not a proper DNS jeroen  –  Jan 6, 2009 8:19 PM

Why is that not a proper DNS name? Mistypes and fake websites are how phishers work; hacking into “hotspots” really is not worth the effort (unless you get to hack up a big ISP network, but hey, why not go for the DSL and cable directly then if you are doing that).

I’ll assume you tried to write “http://www.bank.com when they’re really trying to go to https://www.bank.com.” note the missing ‘s’ after the second http.

How does an entry in DNS, which can be faked (otherwise the attacker can’t set up the fake http://www.bank.com either to point it somewhere else in the first place), help? One would need fully DNSSEC-supporting web browsers and resolvers to recognize that it is the wrong bank. And also, because of that silly MD5 problem, they can just fake the correct certificate; thus even if there is a white-list, the fact that it is SSL’d is useless.

> I think Google Chrome has the right idea by highlighting the proper DNS name and graying out everything else.

Since when is Google the Boss of The Internet who controls what is good and bad?


Your “proposal” (quoted as there is no real proposal, aka an IETF-like draft) amounts to having “a special DNS record which describes that a host should only use HTTPS”.

There is already such a record: SRV (with usage from the Zeroconf RFCs).
e.g., the following would specify that the TCP HTTP service is to be found at the HTTPS TCP service, which is actually located on 3 hosts with certain load-balance ratios (read up on SRV records; it is a really useful RR):
$ORIGIN example.com.
_http._tcp PTR _https._tcp
_https._tcp SRV 0 100 443 www1
_https._tcp SRV 0 0 443 www2
_https._tcp SRV 0 0 443 www3
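
(For illustration only: resolving such records from a client might look like this with the third-party dnspython library, using the same example zone:)

    import dns.resolver  # third-party dnspython library

    # Fetch the SRV records advertising where the HTTPS service lives,
    # then order them by priority (low first) and weight (high first).
    answers = dns.resolver.resolve("_https._tcp.example.com", "SRV")
    for rr in sorted(answers, key=lambda r: (r.priority, -r.weight)):
        print("connect to %s:%d (priority %d, weight %d)"
              % (rr.target, rr.port, rr.priority, rr.weight))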

And of course a bank (or for that matter any other secure thing) can just force all connections to be redirected to https using HTTP 302 redirects.
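
(At the protocol level such a redirect is tiny; the response to a plain-HTTP request would look roughly like this:)

    HTTP/1.1 302 Found
    Location: https://www.example.com/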

Unfortunately, SRV lookups are not enabled in any of the mainstream browsers. Safari does understand them, but doesn’t do the lookup for http://www.example.com.

The problem of course still is that one can always set up b4nk.com and make it identical to the original; the client won’t know better, whether it is SSL’d or not.

It's a "proper" name, just not the right name George Ou  –  Jan 6, 2009 8:32 PM

What I’m trying to say is that in the case of DNS hijacking (either DNS poisoning or fake wireless hotspot), you won’t be fooled by a fake bank.com when you’re trying to go to the real bank.com.

This mechanism doesn’t stop you from being redirected to b4nk.com or bank.com.iejl.ejhn.b4nk.com/login.  At that point, we are hoping that the user understands that they don’t want to be at the b4nk.com domain.  The latter case of name obfuscation is a little harder to combat and Google’s Chrome does a good job of highlighting b4nk.com and graying out everything else so that it makes it easier to recognize the domain name as b4nk.com.

As for HTTP redirects, that isn’t good enough in the case where DNS isn’t trustworthy.

Nobody ever types HTTP or HTTPS or bothers to check it. George Ou  –  Jan 6, 2009 9:18 PM

Nobody ever types HTTP or HTTPS or bothers to check it.  When people go to bank.com, they either type http://www.bank.com or just bank.com and expect to be taken to the right place.  I can’t think of a single person that manually types HTTP or HTTPS and it’s clear from all studies that people don’t care whether there’s an “s” in HTTP or not.  It’s also clear that a lot of banks (and websites like facebook.com) routinely use HTTP for user login.

> What I'm trying to say is jeroen  –  Jan 6, 2009 10:16 PM

> What I’m trying to say is that in the case of DNS hijacking
> (either DNS poisoning or fake wireless hotspot), you won’t be fooled by a
> fake bank.com when you’re trying to go to the real bank.com.

How? When one is in the middle, one can spoof it all, including your in-DNS entry saying “only use SSL”. Full support and verification of DNSSEC would be required to catch it. And of course, on top of that, a secure cert (thus not that MD5 thing *anywhere*; as long as browsers support that or another hole, the attacker can use it). Note that DNSSEC is not the full answer either, as routing can still be faked too.

> At that point, we are hoping that the user understands that they don’t want to be at the b4nk.com domain.

You assume that users actually know what they are doing. If they did, then these issues would not exist in the first place. The biggest issue with anything is stupid users; and I am not insulting anyone here, granny is not supposed to know or be bothered with anything like certificates.

As for ‘obscuring’ things: there exist quite a few CSS-based hacks which make sites look ‘ok’. The user still has to be educated on which part of the screen to trust and which part they are not supposed to trust.

Not forgetting the other annoying part: if your computer is hijacked, because you accidentally visited some shady site that installed something for you, or you accidentally followed that link in that email you got from that good “friend”, you still have nothing, as they can modify any data going in and out of your browser. Something like the device in the video here http://www.youtube.com/watch?v=mPZrkeHMDJ8 is supposed to fix that, although that thing is just the next step to be compromised.

> As for HTTP redirects, that isn’t good enough in the case where DNS isn’t trustworthy.

It is better than nothing: if the user always expects an SSL domain, then they will expect it. Though some users indeed will not notice at all.

Oh and as we are actually mostly jeroen  –  Jan 6, 2009 10:19 PM

Oh and as we are actually mostly mentioning banks, check http://video.google.com/videoplay?docid=3041861094296331549&hl=en to see a lot more ways to actually get around it all ;)

As I've said, the browser can maintain a list of visited sites George Ou  –  Jan 6, 2009 10:34 PM

“How? When one is in the middle, one can spoof it all, including your in-DNS entry saying “only use SSL”. Full support and verification of DNSSEC would be required to catch it. And of course, on top of that, a secure cert (thus not that MD5 thing *anywhere*; as long as browsers support that or another hole, the attacker can use it). Note that DNSSEC is not the full answer either, as routing can still be faked too.”

Full DNSSEC with no backward compatibility to regular DNS can certainly make DNS extremely secure even when the routing is faked.  However, as I’ve written above, it’s highly unlikely we can give up backward compatibility to regular unsigned DNS even if 99% of the world supports DNSSEC fully.

As I’ve said, the browser can maintain a list of sites the user has visited, and for any site that requires mandatory SSL/TLS it will insist on it perpetually until someone manually deletes or modifies the list.  Granted, this does not cover sites you haven’t visited before which may have been hijacked, but I never advertised the solution as foolproof.  The primary purpose of this solution is to cover the largest vector of attack and offer the user a reasonable amount of additional protection.  Even a partial implementation of DNSSEC would be helpful because it can override the cached settings and offer assurance when the DNS server isn’t trusted.

As for complete computer hijacking, nothing is going to protect you there, because the root DNS records and certificate authorities can be hacked and the hacker can already see everything you’re doing anyway.  No one has ever claimed to protect against this vector of attack.

“It is better than nothing: if the user always expects a SSL domain, then they will expect it. Though some users indeed will not notice at all.”

Expecting the user to check for SSL is NOT better than nothing, because research has shown it’s about as useful as nothing since everyone ignores the lack of SSL.  This proposal isn’t foolproof, but it’s miles ahead of nothing.

> ...it will insist on it perpetually jeroen  –  Jan 6, 2009 11:24 PM

> ...it will insist on it perpetually until someone manually deletes or modifies the list…

Take a small guess what malware will do for you, totally automatically and free ;)

> ...I never advertised the solution as foolproof.

If it does not fully cover the problem, why try to add it and encumber a lot of people and organisations with something that doesn’t work in the first place? Yes, it covers a part of the issue, but only when DNSSEC is globally deployed, which is unfortunately not going to happen. Next to that, most credentials get nicked by infected hosts, because the user is ‘stupid’.

> Even a partial implementation of DNSSEC would be helpful because it can override the cached settings and offer assurance when the DNS server isn’t trusted.

How exactly can it do that? Let’s say you are a bank, let’s say that you DNSSEC-protect your bank, and let’s say that you get your domain in the ISC DLV (should not be too hard); and then oops, oh yes, please upgrade ALL the users to fully support DNSSEC in both their DNS caches/resolvers, which are run by their ISPs in most cases, etc. Neither IE nor Firefox has DNSSEC support, thus even if you are able to upgrade all of that, you still need to upgrade those too. A lot of users simply do not upgrade their browser or computer, either due to corporate policy or due to simply not caring or knowing; the field of upgrading gets better, but there will always be computers which are not upgraded.

Nevertheless, before you are at a ‘partial DNSSEC implementation’, one is a lot of changes away. And even if it is partial, how do you distinguish between ‘there is no DNSSEC for this domain’ and ‘my packets are being routed to a DNS server which is simply ignoring all the DNSSEC muck’? Unless you start dropping all domains that are not DNSSEC-signed of course, but guess what, that will just break computers and make customers unhappy == support costs at the ISPs will go up == unhappy ISPs == not going to happen (even if I would like to see DNSSEC get deployed…)

> This proposal isn’t foolproof, but it’s miles ahead of nothing.

But it is also eons into the future before it can possibly be used due to the simple missing factor of globally deployed DNSSEC (infrastructure and applications).

If you have malware, it's already gameover George Ou  –  Jan 6, 2009 11:47 PM

You keep coming back to the malware issue but as I keep saying, it’s already game over if you have malware.  So the malware issue is moot in this discussion.

“If it does not fully cover the problem, why try to add it and encumber a lot of people and organisations with something that doesn’t work in the first place? Yes, it covers a part of the issue, but only when DNSSEC is globally deployed, which is unfortunately not going to happen.”

1.  It doesn’t encumber companies with anything if they don’t want to use it; but it does help consumers stay away from hijacked websites with perfectly legitimate names.

2.  It doesn’t require DNSSEC at all to offer the benefits that I have claimed.  I only brought up DNSSEC as a requirement to cover a wider array of attacks.

> You keep coming back to the jeroen  –  Jan 7, 2009 8:21 AM

> You keep coming back to the malware issue but as I keep saying, it’s already game over if you have malware. So the malware issue is moot in this discussion.

Thus you must simply accept that a bank protecting their site with this proposed mechanism is not doing anything to help the situation, as the user will accidentally stumble upon another site and get infected with malware, thus destroying all the effort of the first (effort == cost). And really, the cost-versus-benefit factor of what you are proposing is too low for it to be worthwhile. There are far more convenient and easier-to-mass-exploit attack vectors than waiting for somebody to stupidly connect to your wireless hotspot and start doing banking and other logins on it. Can I suggest again that you actually check http://video.google.com/videoplay?docid=3041861094296331549&hl=en; it will enlighten you a lot.

> 1. It doesn’t encumber companies with anything if they don’t want to use it;

It does, as their IT staff is not going to upgrade the computers of their users. There are loads of companies still using Internet Explorer 6 simply because they don’t dare to upgrade to IE7 due to expected incompatibilities. Same for companies still using Win2000, etc. Upgrades involve downtime, user inconvenience, lots of helpdesk calls and, more importantly, costs due to all of that and possible revenue loss. And that is the part on the user side. On the ‘server’ side, where e.g. bank.com would deploy something new, they also need to go through full implementation, testing and release cycles. When something breaks it will hurt and it will cost customers and money. Not happening either.

> 2. It doesn’t require DNSSEC at all to offer the benefits that I have claimed.

It does require DNSSEC, as otherwise one can fake any response in DNS. Simplest attack: just redirect all port 80/tcp and port 53/tcp&udp traffic to the hijacker box. Presto.

It benefits those who deploy a patched or new browser George Ou  –  Jan 7, 2009 9:48 AM

DNSSEC doesn’t protect against malware either, nor does it ever claim to be a substitute for anti-virus. The same is true of this proposal.

“It does, as their IT staff is not going to upgrade the computers of their users”

First of all, IT works for the company. If the company doesn’t want to bother with the upgrade, they don’t have to do it, and it won’t benefit them or affect them adversely. If the company wants it, then IT will do what they’re paid to do. The proposed solution allows people running newer or patched web browsers to benefit, and that’s all I can ask for.

“It does require DNSSEC, as otherwise one can fake any response in DNS. Simplest attack: just redirect all port 80/tcp and port 53/tcp&udp traffic to the hijacker box. Presto”

No it doesn’t. Read the proposal and the comments again, especially the part about the browser whitelist that operates on top of the scheme and the part about an independent DNSSEC server that will act as an authority for other DNS domains.

Simple suggestion The Famous Brett Watson  –  Jan 7, 2009 2:09 AM

Most phishing attacks involve either a lure to a false URL or malware at the client end. Given that your proposal addresses neither of these problems, I don’t see that it warrants much attention. Having said that, your proposal may offer a good idea for a Firefox plugin: a secure bookmark manager. This bookmark manager would be for sensitive sites (let the user decide what sites are sensitive). All bookmarks stored in the secure bookmark manager must be HTTPS or its equivalent, and the page will only load if all security checks pass.

As has been pointed out, this “secure” bookmark manager is just as vulnerable to tampering by malware as anything else, but it may be a good-enough and simple-enough interface to keep dear old grandma out of trouble on most days.

The malware issue is moot here George Ou  –  Jan 7, 2009 2:28 AM

The malware issue is moot here.  As I’ve stated over and over again, malware means game over for everything.  If you’re going to knock any solution for failing to protect against complete PC hijacking, then you can argue against any security solution.  Heck, PKI can be hacked with malware, since you can modify the Certificate Authority list; so would you argue that PKI is not worth it?

But you’ve essentially agreed with me that a secure bookmark manager (possibly an online validation service with digitally signed results) could be a very good solution.  So why would it be wrong to add a DNS mechanism to this to offer some centralized control?

Clarifications The Famous Brett Watson  –  Jan 7, 2009 4:15 AM

“But you’ve essentially agreed with me that a secure bookmark manager (possibly an online validation service with digitally signed results) could be a very good solution.”

I didn’t say it would be a very good solution. I said it would be good enough to keep grandma out of trouble on most days. In case the tone of that remark isn’t clear, I mean that it’s a small incremental improvement for a few fringe cases. That’s a far cry from “very good solution”.

“So why would it be wrong to add a DNS mechanism to this to offer some centralized control?”

I didn’t say it would be wrong to add a DNS mechanism. In fact, I didn’t mention the DNS aspect. If you want my comment on the DNS aspect, I’d say it’s not worth the bother. If your threat model includes DNS poisoning or a man-in-the-middle attack, then DNS records can’t offer additional protection unless they are fully signed. Without DNSSEC, then, the records only work when you aren’t under attack. If a site seriously wants to make a policy of HTTPS only, then they should simply offer HTTPS only. If a victim has been redirected to a false non-HTTPS site by malicious DNS records or traffic interception, then the DNS mechanism is already subverted. That’s why it’s not worth the bother: it’s not addressing the actual problem.

You're mistaken George Ou  –  Jan 7, 2009 4:41 AM

"If your threat model includes DNS poisoning or a man-in-the-middle attack, then DNS records can't offer additional protection unless they are fully signed. Without DNSSEC, then, the records only work when you aren't under attack. If a site seriously wants to make a policy of HTTPS only, then they should simply offer HTTPS only." The unsigned DNS mechanism would only act as an on-trigger for the "secure bookmark" manager. Once it's been toggled on, it stays on. Only a signed DNSSEC mechanism would act as an on- or off-trigger for the SSL requirement in the whitelist. Most of us have fairly predictable web surfing patterns and it doesn't require a massive list to maintain all the secure sites or unsecured sites we visit. That whitelist could automatically be populated by DNS mechanism based on the websites that we surf. Now you feel this mechanism is only a small incremental solution, I disagree and feel it's an important improvement given the fact that laptops are outselling desktops and hotspot wireless access is an increasingly popular way of accessing the Internet. There's also the potential for a centralized DNSSEC server to let client browsers know what should and shouldn't connect to over SSL and that covers all angles of attack provided the client refuses to access any website it can't verify from that central DNSSEC server or verify from its internal whitelist. That centralized DNSSEC server would also vouch for unsigned DNS sites that it can verify so it would add the benefit of DNSSEC to non DNSSEC domains.

Why confuse the user? George Ou  –  Jan 7, 2009 4:01 AM

“let the user decide what sites are sensitive”

Why confuse the user?  Would it be that awful if too much of their surfing habits were protected?  Encryption is so cheap that a $30 CPU can encrypt 800 Mbps of AES-256, so you can only do too little encryption, never too much.  Does a user really understand that the data in Google Maps could be used to track where they live?

Also, regarding fake and possibly similar-looking URLs, I’ve already responded to that; we try to protect against them using blacklisted sites on search engines and the anti-spyware protection built into Windows Vista and other anti-spyware applications.  They’re not perfect, but they help, and it’s not what this proposal addresses.  The key is that software and infrastructure should do all they can to protect the user, because processing is cheap.

Lastly, another great mechanism would be a secure DNSSEC server on the Internet that can respond to queries.  If the browser is going to bank.com, which isn’t in the secure bookmark list, it should query that DNSSEC server on the Internet and get a validated response.  If that server isn’t reachable for any reason and bank.com isn’t on the bookmark list as either a non-SSL site or an SSL site, then the secure response would be to refuse the connection.  There are too many HTTP hotspot portal systems that ask for login credentials and credit card numbers, and this mechanism could combat that fraud.
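
(A rough sketch of that fail-closed decision; the validation-server query is invented for illustration:)

    def decide_scheme(host, bookmarks, query_validation_server):
        # bookmarks maps host -> 'https' or 'http' for sites seen before;
        # query_validation_server is the hypothetical trusted DNSSEC service.
        try:
            return query_validation_server(host)  # signed, validated verdict
        except ConnectionError:
            pass  # server unreachable: fall back to local knowledge
        if host in bookmarks:
            return bookmarks[host]
        # No validated answer and no local knowledge: refuse outright.
        raise ConnectionRefusedError("cannot verify %s; refusing to connect" % host)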

Finally, it is important to acknowledge that wireless access is becoming the dominant form of Internet access as laptops begin to outsell desktops.  It’s critical that we secure this mode of Internet access as much as possible.

> So why would it be wrong jeroen  –  Jan 7, 2009 8:11 AM

> So why would it be wrong to add a DNS mechanism to this to offer some centralized control?

Because without *FULL* DNSSEC, the DNS is just another way to inject more false details. As such it doesn’t help. “Caching the first response” just means you cache the wrong information. And what if the bank policy changes?

> Most of us have fairly predictable web surfing patterns and it doesn’t require
> a massive list to maintain all the secure sites or unsecured sites we visit.

Does granny understand those concepts? Smart

> The unsigned DNS mechanism would only act as an on-trigger for the “secure bookmark” manager. 

There is already such a thing in place in Firefox. You can “View Page Info” (right mouse) and under the Security tab it at least tells you how often you have been at that site. That helps against bank.com != b4nk.com, as you have not been at the latter before; it does not help against somebody hijacking the route (be it using DNS or just re-routing it somewhere else) for that domain, though. But how many people know that they can check that, and how many people actually do?

> Encryption is so cheap that a $30 CPU can encrypt 800 Mbps of AES-256

Maybe $30 for the CPU, but the rest of the box, what will that cost? What will the interface costs be, interconnect, is it stable? The Internet at large also does a bit more than 800 Mbps. Just AMS-IX does a 1000-fold of that (see http://www.ams-ix.net/technical/stats/), and that is just public peering, ignoring all the 40GE links to Google around the world. I really would not want to tell my budget folks to upgrade everything to SSL and that we would need an extra million US$ in hardware because of SSL; sorry, but the accounting people will ask “what are you protecting against?” and there will be no proper answer. It all comes down to how important the data is, of course. Costs still outweigh that, though, especially for public services. For banks the story is different, and they already do SSL in most cases. The problem there is “do you really know who you are talking to”? And without DNSSEC you can’t be sure that bank.com is the bank.com from last time; and even if you know that bank.com should be correct, is it really your bank.com and not the competition?

> Does a user really understand that the data in Google maps could be used to track where they live?

Does it really help to encrypt that data if Google is storing it on their end anyway? It only ‘protects’ against intermediate sites seeing where you are going; does that help?

> Lastly, another great mechanism would be a secure DNSSEC server on the Internet that can respond to queries.

Sorry to ask, it probably sounds insulting, it most likely is, but do you know how DNSSEC works? There is no such thing as a “central server”. Next to that, as I reiterated before, ALL the tools and infrastructure need to understand it for it to work; if there is a cache/resolver in the middle which doesn’t pass through the DNSSEC-specific records (like quite a few DSL/cable modem resolvers just drop AAAA records or break on large TXT records as they don’t support EDNS0), then everything breaks and you are out of luck.

Indeed, if DNSSEC was in place then one could store something in DNS saying “hey, that site is HTTPS only”; as I also mentioned already, that CAN already be done with SRV records. Tools don’t use that though, and you would need to upgrade EVERY user for it to work (including those corp lap/desktops). The problem here is not the people who upgrade (they will be fine most of the time); it is the people who don’t upgrade and who can be tricked into things because they use outdated software; they get infected in most cases anyway. And a forced upgrade is not going to happen, as that would mean some users don’t have the tools to use it, which means revenue loss either due to customers going somewhere else and/or due to loads of helpdesk costs being incurred.

> There are too many HTTP hotspot portal systems that ask for login credentials
> and credit card numbers and this mechanism could combat that fraud.

If you are connecting to an unknown system, then that is your problem. Nothing one can do against that. As for ‘HTTP hotspot portal systems’ hijacking DNS and pushing one to a portal page, yes, they do, how else are those systems supposed to authenticate the user in a method that is convenient for the user? You really want the user to install a special application for every hotspot they might encounter (where do they get that app from, the malware site? :) 802.1x might work here a bit, but try convincing all those hotspot providers to remove their cool logos and advertisements and very handy instructions for new users from their portal sites.

If people just connect to random hotspots, then you can’t protect them against providing their credentials to banks either.

> It’s critical that we secure this mode of Internet access as much as possible.

Ever read something about WPA2+PSK? When that is in use it is more protected than an Ethernet link (which can be protected with more or less the same system, see 802.1x).

Even with those authentications in place, there is generally further no crypto happening over the rest of the Internet and this only solves the part of the access to the gateway. Anybody not using SSH or SSL-protected connectivity is sending their data in plaintext if that goes over Ethernet or Wireless (which is also Ethernet mind you ;). Nothing to do about that. If you want secured links, you will have to fix every part of the Internet, starting with a signed root in DNS (or using DLV) and demanding every single component that you are using to be DNSSEC compliant (apps, infrastructure etc). That requires a lot of work though, I would say that you go try to fix up DNSSEC for your own domain/organisation/site/computer etc, when you have done that, come back and explain how hard it was and how useful it can be.

I don't think you're getting it Jeroen George Ou  –  Jan 7, 2009 9:42 AM

"> Most of us have fairly predictable web surfing patterns and it doesn't require > a massive list to maintain all the secure sites or unsecured sites we visit. Does granny understand those concepts? Smart" The whole point of this proposal is that granny WON'T need to decide when the browser makes the decision for her. Right now we ask granny to figure it out herself and she fails like everyone else nearly 100% of the time. Once the browser remembers a site must be secure, it insists on it even if DNS says otherwise unless it's signed DNSSEC reply telling it to disable SSL. "> Encryption is so cheap that a $30 CPU can encrypt 800 Mbps of AES-256 Maybe $30 for the CPU, but the rest of the box, what will that cost? What will the interface costs be, interconnect, is it stable? The internet at large also does a bit more than 800 mbps"" I don't know if you're being intentionally obtuse or what. The point is that ANY end point computer can do about 800 Mbps PER END POINT. I don't know of too many end points that has more than 50 Mbps of Internet connectivity. Yes I'm an expert at wireless link layer security, and I can tell you that shared key WPA2-PSK mode is worthless in a hotspot environment because anyone who sniffs the initial WPA2-PSK handshake with knowledge of the PSK can sniff the session. I've actually proposed that WPA/802.1x with any known username/password can facilitate anonymous privacy (not access control), see http://blogs.zdnet.com/Ou/?p=587. Even when people know the username/password, they cannot sniff their fellow authenticated users. Unfortunately, almost no hotspots run that type of security. Furthermore, it doesn't stop the rogue hostile hotspot. The proposal closes some (not all) vectors for site/credential hijacking and it is an incremental improvement in security as part of a defense in-depth strategy. It's nothing more and nothing less. This isn't a very difficult concept and you keep confusing the very basics of it. I'll summarize one last time. Problem: * Humans don't bother to check for HTTPS. * Human goes to bank.com and sees http://bank.com and gives his password willingly. * http://bank.com is a fake website and human gets credentials stolen. Solution: * Don't bother ask human to check for HTTPS. * Announce to the world via some mechanism e.g., plain old DNS that bank.com should only work with proper SSL and a proper digital certificate. * Web browser checks said announcement and enforces policy with human out of decision making process. * Web browser caches settings in a white list for bank.com so that from this point forward, it will always insist on proper SSL for bank.com until user manually clears SSL white list or a trusted DNSSEC server says to stop doing it. What solution doesn't cover: * Malware. This isn't a substitute for anti-virus or anti-spam so don't expect it to be. * User going to b4nk.com or bank.com.login.somewebsite.ru/authentication. Google Chrome mitigates the latter exploit by highlighting the actual domain of SOMEWEBSITE.RU and grays out bank.com.login. * User going to site they've never been to when DNS or routing has been hijacked. This might be addressed with an online trusted DNSSEC server that the browser can check with but this would be the next phase.

