Recent news stories (based on research by Stanford student Feross Aboukhadijeh) state that an Adobe bug made it possible for remote sites to turn on a viewer's camera and microphone. That sounds bad enough, but that's not the really disturbing part. Consider this text from the Register article:
Adobe said on Thursday it was planning to fix the vulnerability, which stems from flaws in the Flash Player Settings Manager. The panel, which is used to designate which sites may access feeds from an enduser's camera and mic, is delivered in the SWF format used by Flash.
Because the settings manager is hosted on Adobe servers, engineers were able to close the hole without updating enduser software, company spokeswoman Wiebke Lips said.
That's right — code on a remote computer somewhere decides whether or not random web sites can spy on you. If someone changes that code, accidentally or deliberately, your own computer has just been turned into a bug, without any need for them to attack your machine.
From a technical perspective, it's simply wrong for a design to outsource a critical access control decision to a third party. My computer should decide what sites can turn on my camera and microphone, not one of Adobe's servers.
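As a sketch of that principle (with hypothetical names; this is not Adobe's actual code), a purely local access-control decision could be as simple as a user-managed allow list that no remote server ever touches:

```python
# Sketch of a purely local access-control decision (hypothetical
# names; not Adobe's actual implementation). The allow list lives
# on the user's own machine, so no remote server can change it.

ALLOWED_CAMERA_SITES = {"video-chat.example.com"}  # user-managed, stored locally

def may_use_camera(origin: str) -> bool:
    """Grant camera/microphone access only to sites the user has
    explicitly allowed on this machine; deny everything else."""
    return origin in ALLOWED_CAMERA_SITES

print(may_use_camera("video-chat.example.com"))      # True
print(may_use_camera("ad-network.example.net"))      # False
```

The point is where the decision is made, not how fancy the check is: the only way to change the answer is to change state on the local machine.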
The policy side is even worse. What if the FBI wanted to bug you? Could they get a court order compelling Adobe to make an access control decision that would turn on your microphone? I don't know of any legal rulings directly on this point, but there are some analogs. In The Company v. U.S., 349 F.3d 1132 (9th Cir. 2003), the 9th Circuit considered a case with certain similarities. Some cars are equipped with built-in cell phones intended for remote assistance; OnStar is the best-known such system, though in this case analysis of court records suggests that ATX Technologies was involved. Briefly, the FBI got a court order requiring "The Company" to turn on the mike in a suspect's car. The Court of Appeals quashed that order, but only because, given the way that particular system was designed, turning it into a bug disabled its other functionality. That, the Court felt, conflicted with the wording of the wiretap statute, which required a "minimum of interference" with the service. If the service had been designed differently, the order would have stood. By analogy, if a Flash-tap doesn't interfere with a user's ability to have normal Flash-based voice and video interactions with a web site, such a court order would be legal.
No wonder the NSA's Mac OS X Security Configuration guide says to disable the camera and microphone functions, by physically removing the devices if necessary.
UPDATE: A few days ago, I posted the above criticism of Adobe for a design that, I thought, was seriously incorrect. I made a crucial error: the access control decision is (correctly) made locally; what is done remotely is the user interface to the settings panel. The bug that Adobe fixed was a way for a remote site to hide the settings UI, tempting you into clicks that seemed innocuous but in fact changed your privacy settings. (The annoying thing is that as I posted it, I had a niggling feeling that I had gotten something wrong, but I didn't follow up. Sigh.)
This is a much better (though hardly good) design. It still leaves open a vulnerability: at least in theory, the bug could be reintroduced by court order, to aid in tricking users into changing their own settings. In other words, a crucial part of the security and privacy process is still outsourced. The counterargument has been that back when Adobe designed the interface, the flaw wasn't as obvious. I don't think I agree; there was enough criticism of any form of active content going back to the very early days of the web. But I won't belabor the point.
There's one aspect I'm still unclear about. There is obviously some way to build a Flash file that tells the local plug-in, in conjunction with Adobe, to change local privacy settings. Is it possible for a malicious party to craft a Flash file that carries out that interaction with their own site rather than with Adobe? I hope (and think) not; if it is possible, the danger is obvious. Unless the interaction with Adobe is digitally signed, though, a malicious site could send a booby-trapped Flash file while mounting a routing or DNS cache contamination attack and impersonate Adobe. This isn't a trivial attack, but routing attacks and DNS attacks have been known for a very long time; until we get BGPSEC (and probably OSPFSEC) and DNSSEC widely deployed, that risk will remain. I do note that when I invoke the current remote-UI settings manager, I'm doing so over a connection that is at least initially HTTP, not HTTPS; I don't know whether a second, secure connection is set up.
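To illustrate why an authenticated channel matters here, consider a minimal Python sketch: with TLS and certificate verification, an attacker who redirects traffic via routing or DNS games still cannot complete the handshake, because they cannot present a valid certificate for the expected name. (This is a general illustration of the principle, not a description of how the Flash plug-in actually talks to Adobe.)

```python
import socket
import ssl

def connect_verified(host: str, port: int = 443) -> str:
    """Open a TLS connection that validates the server's certificate
    chain and hostname against the system trust store. A routing or
    DNS cache attacker can redirect the packets, but without a valid
    certificate for `host` the handshake fails instead of silently
    reaching an impostor. Returns the negotiated TLS version."""
    ctx = ssl.create_default_context()
    # Both checks are on by default in a default context:
    assert ctx.verify_mode == ssl.CERT_REQUIRED
    assert ctx.check_hostname
    with socket.create_connection((host, port), timeout=10) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            return tls.version()

# Over plain HTTP there is no such check: whoever answers the
# (possibly poisoned) DNS lookup is trusted implicitly.
```

An unauthenticated HTTP fetch, by contrast, offers the attacker exactly the impersonation opportunity described above.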
To its credit, Adobe has realized that there are a number of problems with the whole model of a Flash-based settings mechanism; if nothing else, it was hard for most people to find. Accordingly, they've recently released versions of Flash that use local preference-setting interfaces (Control Panel on Windows; System Preferences on the Mac; something else normal on Linux) to change the values. That's an excellent step forward. Now, they need to disable the remote variant (when contacted by a new Flash plug-in), and simply return a pointer to the local one…