
Wow! BIND9 9.10 Is out, and What a List of Features!

Today the e-mail faerie brought news of the release of BIND9 9.10.0, which can be downloaded from here. BIND9 is the most popular name server on the Internet, and has been ever since it took that title away from BIND8, which had a few years earlier taken it from BIND4. I used to work on BIND, and I founded ISC, the home of BIND, and even though I left ISC in July 2013 to launch a commercial security startup company, I remain a fan of both ISC and BIND. I’m here to tell you, BIND9 9.10 is the most featureful single release of any open source DNS software package ever. Let’s look at some of the highlights:

DNS Response Rate Limiting (RRL), created by Vernon Schryver and Paul Vixie, is now part of the base server. It’s not enabled by default, though I have hopes for that in time to come. But since it’s in the name server, and turning it on is very simple, we can expect to see more authoritative name servers opt out of their long-held role as DDoS reflecting amplifiers in the months and years to come.
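For readers who want to try it, turning RRL on is a small named.conf fragment. The values below are illustrative starting points, not tuned recommendations:

```
options {
    rate-limit {
        // cap identical responses per client netblock per second
        responses-per-second 5;
        // averaging window, in seconds
        window 5;
    };
};
```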

Zone files can be stored in “map” format, which means the preparation of a large zone can take place offline, while the actual moment of publication, when the new zone data is made available by a running server, is instantaneous. Kudos to the NSD team for being first into the field with this fantastic idea. And kudos to the BIND team for not being too proud to copy a good idea when they see one.
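As a sketch of how this is used: a zone is compiled offline (e.g. with `named-compilezone -f text -F map`) and the resulting binary file is declared in named.conf. The file and zone names here are hypothetical:

```
zone "example.com" {
    type master;
    // binary image produced offline by named-compilezone -F map;
    // named maps it in rather than parsing text at load time
    file "example.com.db.map";
    masterfile-format map;
};
```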

DNS Response Policy Zones (RPZ), another advanced security feature brought to you by the creative team of Vernon Schryver and Paul Vixie, has been upgraded to moot the old performance problem whereby a name server subscribed to multiple RPZ feeds would slow down as more feeds were added. Also, DNS RPZ Format 3 is now supported, which makes it possible to preserve the original QNAME when a wildcard rule is used, a help when building walled gardens. It is also now possible to use the client’s IP address as an RPZ rule trigger, in case you find a bad actor who deserves the mushroom treatment (keep them in the dark and feed them manure).
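As a sketch (all names and addresses here are invented): with the resolver subscribed to a policy zone via `response-policy { zone "rpz.example.net"; };` in named.conf, the zone’s rules might look like this:

```
; force NXDOMAIN for one bad domain
bad.example.com                CNAME  .

; wildcard rule steering a whole domain into a walled garden
*.evil.example                 CNAME  garden.example.net.

; client-IP trigger: the client at 192.0.2.33/32 gets NXDOMAIN
; for everything it asks (the mushroom treatment)
32.33.2.0.192.rpz-client-ip    CNAME  .
```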

(Side note: My day job (Farsight Security) offers an RPZ feed we call Newly Observed Domains (NOD) which allows a recursive DNS server operator to pretend that extremely young domain names just don’t exist yet. It turns out that many online criminals create, abuse, and destroy their DNS names within a span of minutes or hours.)

There are a bazillion other smaller features and new utilities that I’ll skip here for the sake of the high points. I want to say to my old team:

BIND9 9.10.0 is the best thing ever, and: thanks for keeping the faith.

Excelsior!

By Paul Vixie, VP and Distinguished Engineer, AWS Security

Dr. Paul Vixie is the CEO of Farsight Security. He previously served as President, Chairman and Founder of Internet Systems Consortium (ISC), as President of MAPS, PAIX and MIBH, as CTO of Abovenet/MFN, and on the board of several for-profit and non-profit companies. He served on the ARIN Board of Trustees from 2005 to 2013, and as Chairman in 2008 and 2009. Vixie is a founding member of ICANN Root Server System Advisory Committee (RSSAC) and ICANN Security and Stability Advisory Committee (SSAC).


Comments

Deleting the newly created domains Phillip Hallam-Baker  –  May 2, 2014 2:41 PM

OK Paul, I accept that there is every good reason not to recognize newly created domains immediately. But you are challenging one of the foundational assumptions of DNSSEC, which is that ‘security’ means seeing the authoritative zone.

Don’t get me wrong here, I think that idea is a crock. I think the idea that I want my machines to be able to connect to a newly minted domain the second it comes online is rubbish. The only time I want that is when I am bringing up my own domains. I remember when we had to wait 24 hours for every change in .com; that led to expensive delays.

But the idea that DNSSEC is the be-all and end-all of DNS security is still a big problem. It solves one problem well, but there are far more DNS security problems than just that one.

I think that the new model you are moving to is the right one. Instead of regarding the resolver as a passive cache, it is potentially a point where we can edit out the bits of the DNS and the Internet that aren’t safe (or if we are dealing with critical infrastructure, aren’t known to be essential).

What I want though is that we run rather than walk. In particular I think that we want to look at the authentication of records published by the authoritative zones as a separate problem from authenticating messages from the resolver.

In this new world resolvers are no longer interchangeable. A resolver that is taking appropriate security feeds is better than a randomly chosen resolver.

Which is why I want to authenticate and encrypt communications between the stub and the resolver and not just encrypt.

security in the last mile Paul Vixie  –  May 3, 2014 6:01 AM

phillip, we’re in agreement this time. i’d like an EDNS option to allow a client to signal “if the policy-based response is different than the dnssec authentic response, please send me both, and sign the former with your SIG(0) key”. clients should be able to select between truth and pravda, having access to provably authentic both. this would enable clients to decide whether to believe the domain owner, or their ISP (or their government who is compelling their ISP), based on stub resolver config knob.
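No such EDNS option exists yet, but the wire shape any new option would take is fixed by RFC 6891: a 16-bit option code, a 16-bit length, then the option data. A minimal sketch in Python, using a made-up code from the local/experimental range, since no real code point has been assigned for this idea:

```python
import struct

def encode_edns_option(code: int, data: bytes) -> bytes:
    # RFC 6891 option wire format: OPTION-CODE, OPTION-LENGTH
    # (both 16-bit, network byte order), then the option data
    return struct.pack("!HH", code, len(data)) + data

# Hypothetical "send me both the policy answer and the DNSSEC-authentic
# answer" signal; 65001 sits in the range reserved for local and
# experimental use, because nothing has actually been assigned
option = encode_edns_option(65001, b"\x01")
```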

m. bortzmeyer has a JSON schema for dns transactions now. that means we could specify a RESTful API for it and move to TCP/445, using only privately shared keys (no X.509 CA allowed, since this is meant to enable DANE.)

the ingredients and requirements and understanding are finally coming together on this.

Well, let’s go further Phillip Hallam-Baker  –  May 3, 2014 4:13 PM

This is very similar to the approach I take in Omnibroker. The idea there being an extension of the SAML approach where an assertion consists of a series of statements that answer the question originally asked plus optional Conditions and Advice. The idea of Advice being that a proof chain can be added.

So let’s imagine that the client asks the resolver how to connect to example.com for a _http._tcp connection, and back comes the reply ‘Use TLS to port 443 of 10.1.2.3’. The response can optionally include a list of DNSSEC and DNS records giving the rationale. But the policy ‘use TLS’ probably came from an HTTP header seen by a crawler that pinged the site in the past.

At the moment we have a confusion of discovery and policy mechanisms. It should all be in DNS but we can’t get from where we are to where we want to be directly. Presence services tend to be LDAP or REST based.

So the starting point for Omnibroker was a high level interface. But now we have DNS Privacy on the table. So I think we need a low level interface as well.

I don’t think we have to go to TCP. We can of course, but that means a performance hit and stops use of anycast. All my transactions work with a scheme of one UDP request packet followed by up to 16 UDP response packets. There is space there for almost any sort of query and/or reply. And I can ask multiple DNS queries in one go and it still works.

As for wanting to force use of DANE, I disagree. DANE is still a science project and it is built on X.509v3 in any case, so your statement makes little sense. What I think you mean is don’t build this on the WebPKI, which I agree with. This is a platform to build the WebPKI on, not the other way round. Although I would not design a system to deliberately sabotage its use either.

The approach I have always looked for is one where each party is their own root of trust. So part of the idea of Omnibroker is that the Omnibroker is selected by the relying party. And all the arguments made for editing ICANN’s DNS zones apply to editing CA issued certs as well.

Here is how I would see it work in an enterprise: the customer replaces their DNS resolvers with broker/resolvers and establishes a private CA. This could be run in-house on the Entrust model or be outsourced. But either way it is closely linked to the DNS/broker service, which acts as a hub for managing the enterprise network. The enterprise then contracts with one or more providers of protection streams.

For the home consumer, this is going to have to be a little simpler. Which makes it harder from a design point of view as the requirements of a modern home are essentially the same as a university department back in 1995. I have 50 DHCP leases open right now and that is before I network all the light switches. This is the point where using the WebPKI to enable initial setup is useful. I can control a lot of risks by requiring the broker/trust stream configuration to be mediated by an EV certified provider.

dane is not x.509 v3 Paul Vixie  –  May 3, 2014 7:53 PM

> This is very similar to the approach I take in Omnibroker. The idea there being an extension of the SAML approach where an assertion consists of a series of statements that answer the question originally asked plus optional Conditions and Advice. The idea of Advice being that a proof chain can be added.

Phillip, I know that you intend this as an example, but I want to make it clear to anyone observing this exchange now or in the future that we're discussing open systems, not our various products. As to the point itself, while I do think that the Internet needs a higher-than-DNS level API for "connect me to where I want to go", I also do want a method of (a) securing the last mile of DNS itself, with (b) a signalling path by which "advice" or connection policy can be authentically expressed. So, to your proposal, I say, let's do both.

> I don't think we have to go to TCP. We can of course but that means a performance hit and stops use of anycast. All my transactions work with a scheme of one UDP request packet followed by up to 16 UDP response packets. There is space there for almost any sort of query and/or reply. And I can ask multiple DNS queries in one go and it still works.

I should have been clearer -- sorry! I don't want to move to a primary TCP transport for DNS and DNS-like lookups. EDNS works fine where the middlebox doesn't try to outsmart us. What I want, though, is a fallback transport for when EDNS does not work. Right now the fallback is "just use normal DNS", which foregoes DNSSEC, which means all an enemy coffee shop or hotel room has to do in order to control your resolution path is break EDNS and then intercept your DNS. We're not going to see DNSSEC-enabled apps (like DANE) until our end users can trust the last mile.

> As for wanting to force use of DANE, I disagree. DANE is still a science project and it is built on X.509v3 in any case so your statement makes little sense. What I think you mean is don't build this on the WebPKI which I agree with. This is a platform to build the WebPKI on, not the other way round. Although I would not design a system to deliberately sabotage use either.

Well, yes, but you make an interesting point. I don't care what DANE looks like or smells like -- I characterize it by its capability of letting a relying party trust what X.509 would call a self-signed cert, because the signature is DNSSEC verified. I share what seems to be your desire for a CA-free world. And I think we can safely characterize not just DANE, but IPv6 and DNSSEC and the web itself, as "still in its science fair project stage". We've never let that stop us from rolling big rocks toward the tops of various hills.

Yes, these are open systems Phillip Hallam-Baker  –  May 3, 2014 9:25 PM

Just to be clear here, yes Comodo does have a product in this area, to the extent that a free service is a ‘product’. The Omnibroker proposal is an attempt to generalize that product and turn it into an open system. All the specifications are public and there is open source code.

I certainly don’t want to be in a position where I have to enable every application on the Internet to use my scheme. So it has to be open to fulfill the purpose.

At the start I was looking at a high level API only. However all the complexity is actually in the initial setup of the security relationship between the client and the resolver. So adding a low level query interface for raw DNS queries is simple.

I agree that layering over HTTPS is a last resort to clean up the 3-5% of network situations where it is not possible to use UDP or DNS transport.

From a business point of view, I am not trying to get rid of CAs. There will always be a role for companies that manage cryptographic apparatus. As soon as cryptography is added to a system the stakes change and it becomes essential to get it right. For example, consider the difference between traditional unix systems and a system using an encrypted file store. On traditional unix, you make a mistake and root privilege can probably save you. On an encrypted file system you make a mistake and nothing can save you.

The point where I think traditional PKI fails is actually on the relying party side. The browser providers are not effective proxies for the interests of the relying parties. They don’t have an effective range of sanctions; dropping the Symantec roots out of the browser is not a credible threat. The cost would be too high.

What this proposal does is to provide an alternative approach where the relying party is in control rather than the subject. So this isn’t subject side PKI like the traditional CA model, it is relying party side. There are problems relying exclusively on either. Different problems have different needs. The WebPKI was only designed to meet the near term needs of Netscape’s IPO. Nothing more.

i think we can do what we need to do within the evolutionary context of dns itself Paul Vixie  –  May 4, 2014 7:23 AM

Phillip, you wrote:

> I agree that layering over HTTPS is a last resort to clean up the 3-5% of network situations where it is not possible to use UDP or DNS transport.

Where are you getting that data? Realizing that the plural of anecdote is not "data", I've got to say that looking at home WiFi, hotel and coffee shop WiFi, and office WiFi, my ability to get EDNS (and therefore DNSSEC) signalling in the last mile is well under 50%.

> From a business point of view, I am not trying to get rid of CAs. There will always be a role for companies that manage cryptographic apparatus. ...

If you mean a new kind of CA who can act as a trusted introducer, in the sense of a bank or church or school or insurance company, I'm all for it. If you mean in the style of the ~1900 or so X.509 CAs who sell certificates for SSL today, I'm 100% against it. The false sense of security in this one industry and technology is, by itself, enough to threaten the continued existence of civilization as we knew it. I am not singling out Comodo for any special mention here. Please don't take this personally -- I don't think less of Comodo than I do of the other ~1899 CAs out there. The problem isn't with any one company; it's with the basic idea, itself, of a CA industry that's independent of a certificate holder's other meatspace relationships.

> The point where I think traditional PKI fails is actually on the relying party side. The browser providers are not effective proxies for the interests of the relying parties. They don't have an effective range of sanctions; dropping the Symantec roots out of the browser is not a credible threat. The cost would be too high.

Well, yes. Which is why last mile DNSSEC, coupled with DANE, coupled with self-signed certificates backed by DNSSEC signatures, is workable by comparison. Mandatory outsourcing of trust, beyond getting one's DNS registry to accept some DNSSEC keying material (DS RR) and sign it with their key to get the registrant's keying material into "the PKI", provides the kind of local autonomy that made the Internet possible in the first place. ("Good fences make good neighbors.")
