
The Sins of the Flash

Steven Bellovin

Recent news stories (based on research by Stanford student Feross Aboukhadijeh) state that an Adobe bug made it possible for remote sites to turn on a viewer's camera and microphone. That sounds bad enough, but that's not the really disturbing part. Consider this text from the Register article:

Adobe said on Thursday it was planning to fix the vulnerability, which stems from flaws in the Flash Player Settings Manager. The panel, which is used to designate which sites may access feeds from an enduser's camera and mic, is delivered in the SWF format used by Flash.


Because the settings manager is hosted on Adobe servers, engineers were able to close the hole without updating enduser software, company spokeswoman Wiebke Lips said.

That's right — code on a remote computer somewhere decides whether or not random web sites can spy on you. If someone changes that code, accidentally or deliberately, your own computer has just been turned into a bug, without any need for them to attack your machine.

From a technical perspective, it's simply wrong for a design to outsource a critical access control decision to a third party. My computer should decide what sites can turn on my camera and microphone, not one of Adobe's servers.
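The principle can be made concrete with a short sketch. Everything here is hypothetical (the file name, format, and function are mine, not Adobe's); the point is only that the allow/deny decision is read from a file the user controls, so no remote server can change the answer:

```python
# Minimal sketch of a locally made access-control decision.
# The settings file name and JSON format are hypothetical.
import json
from pathlib import Path

def may_capture(origin: str, settings_path: Path) -> bool:
    """Return True only if the user has locally allowed this site
    to use the camera and microphone."""
    try:
        allowed = json.loads(settings_path.read_text())
    except (FileNotFoundError, json.JSONDecodeError):
        return False  # default-deny: no local grant, no access
    return origin in allowed.get("camera_and_mic", [])
```

No third-party host participates in the decision; altering code on a remote server cannot change what `may_capture()` returns.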

The policy side is even worse. What if the FBI wanted to bug you? Could they get a court order compelling Adobe to make an access control decision that would turn on your microphone? I don't know of any legal rulings directly on this point, but there are some analogs. In The Company v. U.S., 349 F.3d 1132 (9th Cir. 2003), the 9th Circuit considered a case with certain similarities. Some cars are equipped with built-in cell phones intended for remote assistance. OnStar is the best-known such system; in this case, analysis of court records suggests that ATX Technologies was involved. Briefly, the FBI got a court order requiring "The Company" to turn on the mike in a suspect's car. The Court of Appeals quashed that order, but only because, given the way that particular system was designed, turning it into a bug disabled its other functionality. That, the Court felt, conflicted with the wording of the wiretap statute, which required a "minimum of interference" with the service. If the service had been designed differently, the order would have stood. By analogy, if a Flash-tap doesn't interfere with a user's ability to have normal Flash-based voice and video interactions with a web site, such a court order would be legal.

No wonder the NSA's Mac OS X Security Configuration guide says to disable the camera and microphone functions, by physically removing the devices if necessary.

UPDATE: A few days ago, I posted the above criticism of Adobe for a design that, I thought, was seriously incorrect. I made a crucial error: the access control decision is (correctly) made locally; what is done remotely is the user interface to the settings panel. The bug that Adobe fixed was a way for a remote site to hide the view of the UI panel, thus tempting you to click on what you thought were innocuous things but in fact were changes to your privacy settings. (The annoying thing is that as I posted it, I had a niggling feeling that I had gotten something wrong, but I didn't follow up. Sigh.)

This is a much better (though hardly good) design. It still leaves open a vulnerability: at least in theory, the bug could be reinstated by court order, to aid in tricking a user into changing his or her own settings. In other words, a crucial part of the security and privacy process is still outsourced. The argument has been that back when Adobe designed the interface, it wasn't as obviously wrong. I don't think I agree — there was enough criticism of any form of active content going back to the very early days of the web — but I won't belabor the point.

There's one aspect I'm still unclear about. There is obviously some way to build a Flash file that tells the local plug-in to cooperate with Adobe's servers in changing local privacy settings. Is it possible for a malicious party to generate a Flash file whose settings interaction talks to their site rather than to Adobe? I hope (and think) not; if it is possible, the danger is obvious. Unless the interaction with Adobe is digitally signed, though, a malicious site could send a booby-trapped Flash file in conjunction with mounting a routing or DNS cache contamination attack and impersonate Adobe. This isn't a trivial attack, but routing attacks and DNS attacks have been known for a very long time; until we get BGPSEC (and probably OSPFSEC) and DNSSEC widely deployed, that risk will remain. I do note that when I invoke the current remote-UI settings manager, I'm doing so over a connection that is at least initially HTTP, not HTTPS; I don't know whether a second, secure connection is set up.
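The transport issue can be sketched briefly. This is illustrative only (the function and URL are mine, not Adobe's actual mechanism): over plain HTTP, a host reached via poisoned DNS or hijacked routes is indistinguishable from the real settings-manager host, whereas HTTPS with certificate and hostname verification makes the impersonator fail the handshake:

```python
# Sketch: require an authenticated transport before fetching a
# settings UI. Hypothetical function and URL, not Adobe's API.
import ssl
import urllib.request

def fetch_settings_ui(url: str) -> bytes:
    if not url.startswith("https://"):
        raise ValueError("refusing unauthenticated transport: " + url)
    # create_default_context() checks the certificate chain against
    # the system trust store and verifies the hostname, so a host an
    # attacker steered us to via DNS fails the TLS handshake.
    ctx = ssl.create_default_context()
    with urllib.request.urlopen(url, context=ctx) as resp:
        return resp.read()
```

The check does nothing against a compromise of the legitimate server itself, of course; it only defeats the routing/DNS impersonation described above.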

To its credit, Adobe has realized that there are a number of problems with the whole model of a Flash-based settings mechanism; if nothing else, it was hard for most people to find. Accordingly, they've recently released versions of Flash that use local preference-setting interfaces (Control Panel on Windows; System Preferences on the Mac; something else normal on Linux) to change the values. That's an excellent step forward. Now, they need to disable the remote variant (when contacted by a new Flash plug-in), and simply return a pointer to the local one…

By Steven Bellovin, Professor of Computer Science at Columbia University. More blog posts from Steven Bellovin can also be read here.

Related topics: Malware, Security

Richard M Stallman  –  Oct 26, 2011 9:05 AM PST

While sharing your outrage at allowing a remote computer the power to enable remote access to the camera or microphone in your computer, let us not forget other cases that have the same result and deserve the same outrage.

For instance, a nonfree operating system such as Windows, MacOS or iOS (used in the iPhone and iBad) effectively gives its developer the same power to use the hardware to spy on you. Windows and iOS contain known surveillance features.

Software that allows remote installation of modifications also allows remote insertion of malfeatures. Windows and Chrome include such remote-installation back doors.

Whenever a program is not free (freedom-respecting) software, it is under the control of someone other than its users. You should never trust nonfree software.

See fsf.org and gnu.org for more information.

