
A Cynic's View of 2015 Security Predictions - Part 3

Gunter Ollmann

A number of security predictions have been doing the rounds over the last few weeks, so I decided to put pen to paper and write a list of my own.

However, I have quite a few predictions, so I have listed them over several blog posts (see part 1 & part 2). After all, I didn't want to bombard you with too much information in one go!

Part three examines the threats associated with data breaches.

Data Breaches

What would an annual collection of predictions be if it didn't include a perspective on data breaches? The biggest news stories of 2014 were all about brand-name organizations being breached and the tens of millions of credit card numbers and personal records that were stolen.

Of course there'll be more breaches in 2015 — and they'll be bigger and more sophisticated — the stars have foretold it! Well, actually, it takes less reliance upon the movements of the stars than that: a summary analysis of public breach disclosure statistics for the last decade would have been sufficient.

While it doesn't take a soothsayer to predict the increase in data breaches for 2015, it does raise another important question. Are we being breached more often?

In many ways there is a large disparity between the breach statistics being recited and the volume of associated hacking activity. Most certainly the number and sophistication of hacks have been increasing year-on-year since the birth of the Internet, but the increase in hacking frequency most likely parallels the like-for-like growth of the Internet in general — while the growth in data breach disclosures looks more like a scary hockey-stick projection.

I think there are a handful of critical factors behind the dramatic incline in data breach metrics:

  • There are more legal and regulatory pressures upon businesses to openly disclose them.
  • Organizations found to have been hiding a data breach will be openly chastised by their shareholders, the media and, consequently, their customers.
  • It's become acceptable to admit your failures publicly and to blame hackers (or foreign governments) for using sophisticated techniques they could never have stopped.
  • Network-based breach detection tools have improved and are being more widely deployed.
  • Law enforcement and threat intelligence companies are becoming adept at tracing the source of stolen data being sold and traded in the cyber underground.

All of the above, when combined, makes for a case of observational bias. It bears a striking similarity to another story, where a number of scientific papers released in the middle of the twentieth century argued that the increasing annual count of tornadoes in the USA was due to increased settlement of the West, farming techniques and global warming, only to be later debunked.

The reality of the situation was that more people were settling in the West and communication channels and alerting mechanisms had advanced, which meant that there were more people capable of observing tornadoes and reporting them.

Vulnerabilities

Going hand-in-hand with data breaches is, of course, the discussion on vulnerabilities and vulnerability disclosure. Unlike previous years — where the mainstay of predictions had been vulnerability-specific — very few vendors voiced their predictions for the growth of vulnerabilities.

This may be because these projections have been relegated to annual threat reports (of which there are many) — rather than summaries of the public's top-ten things to worry about in 2015.

Vulnerability landscape

It is, however, interesting how few people commented on what the vulnerability landscape will look like for the year. With such notable pan-Internet threats as Heartbleed and Shellshock making huge splashes in 2014, only a handful of commentators projected more of these big vulnerabilities.

When vulnerability predictions were made for 2015, they often focused on changes in attack vector or emphasized a particular category of technology. For example, some pointed out that point-of-sale (PoS) systems are vulnerable to attack and predicted that, as companies hardened those systems in the wake of the big breaches of 2014, attackers would move to exploit vulnerabilities in the payment processors instead.

Vulnerability disclosure has changed radically in the last five years. What once was largely the realm of security vendors investing in research teams to find or categorize new bugs, or setting up purchase programs to entice third-party researchers to disclose to them first (seeking to gain advantage over competitors by covering a vulnerability first), is rapidly becoming a standalone business.

Bug bounty programs — often funded by the vulnerable software vendors themselves — pay the researchers directly for their discoveries. The surprise for many is how well this new arrangement is working out.

Bug bounty frameworks

With a direct line between the researcher and the vulnerable vendor, a legal framework allowing them to hunt without fear of prosecution, and assurances of hassle-free payment, more bugs are being found and disclosed this way. In effect, a chunk of the security quality assurance process has been conveniently outsourced in a pay-for-results model. The grey and underground channels for researchers to sell and disclose newfound vulnerabilities will continue to exist, but it would seem that less talent is following that path now with the commercialization of bug bounty frameworks.

The elephant in the room is, however, the software and code not owned by any particular organization, but used by many. Unfortunately, the biggest vulnerabilities of 2014 lay undiscovered for years in some of the most popular open source software powering the Internet. The question on the lips of many is which new open source vulnerabilities will surprise us in 2015.

Conclusion

It is likely that 2015 will see a sea-change in the way open source code is viewed and managed. The ferocious media attention given to Shellshock and Heartbleed has already initiated a renewed vigour in bug hunting within open source projects. I think that, over the next couple of years, the key outcomes will be something along the following trajectory:

  • Vendors of automated code analysis and bug hunting tools will take the lead in analyzing popular open source projects. By uncovering new bugs they'll initially harness the media to extol the virtues of their advanced technology and, once the media tires of bug overload, they'll shift to publishing statistical reports and citing academic papers as competitive differentiators.
  • Software vendors that are (overly) reliant on particular open source projects for their own commercial products will have to invest more in analyzing the public code. Industry collectives have already formed to fund particularly sensitive open source code projects — but their number will increase, and so too will the amount being invested.
  • As automated code analysis tools improve we'll see them uncover new bugs in code that had previously been analyzed by other automated scanners. The smarter vendors (or researchers behind a new proof-of-concept tool) will obviously use the findings for competitive advantage — and we'll be okay with that.
  • Purchasers of software will be more demanding of their vendors and require assurances that any open source code included in their products be assessed to the same standard of security as the vendor's own proprietary code. Procurement teams will require that vendors disclose all open source elements in a product, along with a validation of the code's security and integrity (a minimal sketch of such a disclosure check follows this list).
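To make that last point concrete, here is a minimal sketch of what a vendor-supplied open source disclosure, and a procurement-side integrity check against it, might look like. The JSON layout, the field names and the verify_manifest helper are hypothetical illustrations invented for this post, not an existing standard or tool.

```python
# Hypothetical sketch: checking a vendor-supplied manifest of open source
# components against the files actually shipped in a product. The manifest
# format and field names are illustrative assumptions, not a real standard.
import hashlib
import json
from pathlib import Path

# A vendor disclosure might list each open source element with its version,
# license, and the SHA-256 of the shipped artifact ("..." is a placeholder).
EXAMPLE_MANIFEST = """
{
  "components": [
    {"name": "openssl", "version": "1.0.1k", "license": "OpenSSL",
     "file": "lib/libcrypto.so", "sha256": "..."},
    {"name": "bash", "version": "4.3.30", "license": "GPL-3.0",
     "file": "bin/bash", "sha256": "..."}
  ]
}
"""

def sha256_of(path: Path) -> str:
    """Hash a shipped file so it can be compared with the declared digest."""
    digest = hashlib.sha256()
    with path.open("rb") as handle:
        for chunk in iter(lambda: handle.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_manifest(manifest_json: str, product_root: Path) -> list[str]:
    """Return discrepancies between the disclosure and the shipped product."""
    problems = []
    for component in json.loads(manifest_json)["components"]:
        shipped = product_root / component["file"]
        if not shipped.exists():
            problems.append(f"{component['name']}: declared file missing")
        elif sha256_of(shipped) != component["sha256"]:
            problems.append(f"{component['name']}: checksum mismatch")
    return problems

if __name__ == "__main__":
    # Run from (or point at) the product's install root; any printed line
    # is a discrepancy a procurement team would want explained.
    for issue in verify_manifest(EXAMPLE_MANIFEST, Path("/opt/example-product")):
        print(issue)
```

This only covers the integrity half of the requirement; validating that the disclosed code is actually secure is the harder, and more contested, part of the ask.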

By Gunter Ollmann, Chief Security Officer at Vectra

Related topics: Cyberattack, Cybersecurity, Malware

Comments

I'd add a fourth conclusion there Todd Knarr  –  Jan 29, 2015 8:38 PM PDT

I'd add a fourth conclusion there:

* To the consternation of vendors, purchasers will require that the proprietary code be subject to the same analysis and assessment as the open-source code. Pressure will increase as problems are found to have more often originated in the proprietary code than the OSS code.

Why do I predict that? Because many OSS projects are already being scanned and analyzed for errors (e.g. the Linux kernel itself, the PostgreSQL database, Apache projects) by Coverity and others, and to date Coverity's found that the error density for OSS is significantly better than for the proprietary projects Coverity also scans (an average of 0.59 errors per thousand lines for OSS vs. 0.72 for proprietary in 2013). My experience as a developer is that most proprietary software outside of a few industries isn't routinely scanned for errors; it's seen as a cost with little benefit to sales and so gets the same treatment as most other QA (i.e. it's first on the chopping block when time has to be made to add the latest new feature Marketing's asked for). With OSS having gotten a head start and eliminated most of the low-hanging and even a lot of the high-hanging fruit, it's not hard to predict that most errors are going to surface in the codebase that hasn't been subject to that analysis on a regular basis yet.
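For readers unfamiliar with the metric cited above, Coverity-style defect density is simply defects found divided by codebase size in thousands of lines of code (KLOC). A minimal sketch, where the project size and defect counts are made-up inputs chosen only to reproduce the 2013 averages quoted in the comment:

```python
# Defect density as static-analysis reports typically express it:
# defects per thousand lines of code (KLOC). The sample figures echo the
# 2013 averages quoted above; the project sizes are illustrative only.
def defect_density(defects_found: int, lines_of_code: int) -> float:
    """Return defects per thousand lines of code."""
    return defects_found / (lines_of_code / 1000)

# Illustration: a 500,000-line project at the quoted averages.
oss = defect_density(295, 500_000)          # ~0.59 per KLOC
proprietary = defect_density(360, 500_000)  # ~0.72 per KLOC
print(f"OSS: {oss:.2f}/KLOC, proprietary: {proprietary:.2f}/KLOC")
```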

Lack of IPv4 addresses Jean Guillon  –  Jan 30, 2015 2:04 AM PDT

Wasn't the world going to collapse due to lack of IPv4 addresses?
;-)

