
A Cynic’s View of 2015 Security Predictions - Part 3

A number of security predictions have been doing the rounds over the last few weeks, so I decided to put pen to paper and write a list of my own.

However, I have quite a few predictions, so I have listed them over several blog posts (see part 1 & part 2). After all, I didn't want to bombard you with too much information in one go!

Part three examines the threats associated with data breaches.

Data Breaches

What would an annual collection of predictions be if it didn't include a perspective on data breaches? The biggest news stories of 2014 were all about brand-name organizations being breached and the tens of millions of credit card numbers and personal records that were stolen.

Of course there’ll be more breaches in 2015—and they’ll be bigger and more sophisticated—the stars have foretold it! Well, actually, less reliance upon the movements of the stars and more likely a summary analysis of public breach disclosure statistics for the last decade would have been sufficient.

While it doesn’t take a soothsayer to predict the increase in data breaches for 2015, it does raise another important question. Are we being breached more often?

In many ways there is a large disparity between the breach statistics being recited and the volume of associated hacking activity. Most certainly the number and sophistication of hacks has been increasing year-on-year since the birth of the Internet, but the increase in hacking frequency most likely parallels the like-for-like growth of the Internet in general—while the growth in data breach disclosures looks more like a scary hockey-stick projection.

I think there are a handful of critical factors behind the dramatic rise in data breach metrics:

  • There are more legal and regulatory pressures upon businesses to openly disclose them.
  • Organizations found to have been hiding a data breach will be openly chastised by their shareholders, the media and, consequently, their customers.
  • It’s become acceptable to admit your failures publicly and to blame hackers (or foreign governments) for using sophisticated techniques they could never have stopped.
  • Network-based breach detection tools have improved and are being more widely deployed.
  • Law enforcement and threat intelligence companies are becoming adept at tracing the source of stolen data being sold and traded in the cyber underground.

All of the above, when combined, makes for a case of observational bias. It bears a striking similarity to another story, where a number of scientific papers released in the middle of the twentieth century attributed the increased annual count of tornadoes in the USA to increased settlement of the West, farming techniques and global warming, only to be later debunked.

The reality of the situation was that more people were settling in the West and communication channels and alerting mechanisms had advanced, which meant that there were more people capable of observing tornadoes and reporting them.

Vulnerabilities

Going hand-in-hand with data breaches is of course the discussion on vulnerabilities and vulnerability disclosure. Unlike previous years—where the mainstay of predictions had been vulnerability specific—very few vendors voiced their predictions for the growth of vulnerabilities.

This may be because these projections have been relegated to annual threat reports (of which there are many)—rather than summaries of the public’s top-ten things to worry about in 2015.

Vulnerability landscape

It is, however, interesting to see why so few people commented on what the vulnerability landscape will look like for the year. With notable pan-Internet threats such as Heartbleed and Shellshock making huge splashes in 2014, only a handful of commentators projected more of these big vulnerabilities.

When vulnerability predictions were made for 2015 they often took the form of changes in attack vector or emphasized a particular category of technology. For example, pointing out that point-of-sale (PoS) systems are vulnerable to attack and, as companies hardened those systems in the wake of big breaches in 2014, that attackers would move to exploit vulnerabilities in the payment processors instead.

Vulnerability disclosure has changed radically in the last five years. What once was largely the realm of security vendors investing in research teams to find or categorize new bugs, or setting up purchase programs to entice third-party researchers to disclose to them first (seeking to gain advantage over competitors by covering a vulnerability first), is rapidly becoming a standalone business.

Bug bounty programs—often funded by the vulnerable software vendors themselves—pay the researchers directly for their discoveries. The surprise for many is how well this new arrangement is working out.

Bug bounty frameworks

With a direct line between the researcher and the vulnerable vendor, a legal framework allowing them to hunt without fear of prosecution, and assurances of hassle-free payment, more bugs are being found and disclosed this way. In effect, a chunk of the security quality assurance process has been conveniently outsourced in a pay-for-results model. The grey and underground channels for researchers to sell and disclose newfound vulnerabilities will continue to exist, but it would seem that less talent is following that path now with the commercialization of bug bounty frameworks.

The elephant in the room is, however, the software and code not owned by any particular organization, but used by many. Unfortunately, the biggest vulnerabilities of 2014 lay undiscovered for years in some of the most popular open source software powering the Internet. The question on the lips of many is which new open source vulnerabilities will surprise us in 2015.

Conclusion

It is likely that 2015 will see a sea-change in the way open source code is viewed and managed. The ferocious media attention to Shellshock and Heartbleed has already initiated a renewed vigour in bug hunting open source projects. I think that, over the next couple of years, the key outcomes will be something along the following trajectory:

  • Vendors of automated code analysis and bug hunting tools will take the lead in analyzing popular open source projects. By uncovering new bugs they'll initially harness the media to extol the virtues of their advanced technology and, once the media tires of bug overload, they'll shift to publishing statistical reports and cite academic papers as competitive differentiators.

  • Software vendors that are (overly) reliant on particular open source projects for their own commercial products will have to invest more in analyzing the public code. Industry collectives have already formed to fund particularly sensitive open source code projects—but the number will increase, and so too will the amount being invested.
  • As automated code analysis tools improve we’ll see them uncover new bugs in code that had previously been analyzed by other automated scanners. The smarter vendors (or researchers behind a new proof-of-concept tool) will obviously use the findings for competitive advantage—and we’ll be okay with that.
  • Purchasers of software will be more demanding of their vendors and require assurances that any open source code included in their products be assessed as secure as their own proprietary code. Procurement teams will require that vendors disclose all open source elements in a product, along with a validation of the code’s security and integrity.
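One way a procurement team might operationalize that disclosure requirement is a per-product manifest of bundled open source components with integrity digests. A minimal sketch in Python — the field names, component list, and manifest format are hypothetical illustrations, not any formal standard:

```python
import hashlib
import json

# Hypothetical manifest of open source components bundled in a product.
# Field names are illustrative placeholders, not a formal standard.
components = [
    {"name": "openssl", "version": "1.0.1g", "license": "OpenSSL"},
    {"name": "bash", "version": "4.3", "license": "GPL-3.0"},
]

# Record an integrity digest for each component so the purchaser can
# verify that the code that was audited is the code that shipped.
for component in components:
    # Placeholder bytes; in practice you would hash the real source archive.
    archive_bytes = f"{component['name']}-{component['version']}".encode()
    component["sha256"] = hashlib.sha256(archive_bytes).hexdigest()

print(json.dumps(components, indent=2))
```

A manifest like this gives the procurement side something concrete to demand and verify, rather than a vague assurance that "the open source bits are secure".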

By Gunter Ollmann, CTO, Security (Cloud and Enterprise) at Microsoft

Comments

I'd add a fourth conclusion there Todd Knarr  –  Jan 30, 2015 3:38 AM

I’d add a fourth conclusion there:

* To the consternation of vendors, purchasers will require that the proprietary code be subject to the same analysis and assessment as the open-source code. Pressure will increase as problems are found to have more often originated in the proprietary code than the OSS code.

Why do I predict that? Because many OSS projects are already being scanned and analyzed for errors (e.g. the Linux kernel itself, the PostgreSQL database, Apache projects) by Coverity and others, and to date Coverity's found that the error density for OSS is significantly better than for the proprietary projects Coverity also scans (an average of 0.59 errors per thousand lines for OSS vs. 0.72 for proprietary in 2013). My experience as a developer is that most proprietary software outside of a few industries isn't routinely scanned for errors; it's seen as a cost with little benefit to sales and so gets the same treatment as most other QA (i.e. it's first on the chopping block when time has to be made to add the latest new feature Marketing's asked for). With OSS having gotten a head start and eliminated most of the low-hanging and even a lot of the high-hanging fruit, it's not hard to predict that the most errors are going to surface in the codebase that hasn't been subject to that analysis on a regular basis yet.
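The defect-density comparison in the comment above is straightforward arithmetic. A minimal Python sketch — the per-KLOC rates come from the comment, while the project size and raw error counts are illustrative placeholders chosen to produce those rates:

```python
def defect_density(errors: int, lines_of_code: int) -> float:
    """Errors per thousand lines of code (KLOC)."""
    return errors / (lines_of_code / 1000)

# Hypothetical 500 KLOC codebases with error counts picked to match
# the cited 2013 Coverity Scan averages.
oss_density = defect_density(295, 500_000)          # 0.59 errors/KLOC
proprietary_density = defect_density(360, 500_000)  # 0.72 errors/KLOC

print(f"OSS:         {oss_density:.2f} errors/KLOC")
print(f"Proprietary: {proprietary_density:.2f} errors/KLOC")
```

The point of normalizing per thousand lines is that it lets projects of very different sizes be compared on an equal footing.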

Lack of IPv4 addresses Jean Guillon  –  Jan 30, 2015 9:04 AM

Wasn’t the world going to collapse due to lack of IPv4 addresses?
;-)

