Eldo Kim sent an e-mail bomb threat to Harvard so he could skip a final exam. (It's just a coincidence that I was on the Harvard campus that day.) Even though he used an anonymous account and Tor, the FBI identified him. Reading the criminal complaint, it seems that the FBI got itself a list of Harvard users who accessed the Tor network, and went through them one by one to find the one who sent the threat.
This is one of the problems of using a rare security tool. The very thing that gives you plausible deniability also makes you the most likely suspect. The FBI didn't have to break Tor; they just used conventional police mechanisms to get Kim to confess.
Tor didn't break; Kim did.
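The investigative technique described above can be sketched as a simple set intersection: rather than breaking Tor, narrow the suspect pool to campus users who connected to known Tor entry nodes around the time of the threat. This is my illustration of the idea, not the FBI's actual procedure; all names, IP addresses, and timestamps below are invented.

```python
# Hypothetical sketch: intersect "who was on the campus network" with
# "who connected to known Tor entry nodes" near the time of the threat.
# Every value here is invented for illustration.

from datetime import datetime, timedelta

# Campus netflow logs: (user, destination IP, timestamp)
netflow = [
    ("alice", "203.0.113.7",  datetime(2013, 12, 16, 8, 55)),
    ("kim",   "198.51.100.4", datetime(2013, 12, 16, 8, 58)),
    ("bob",   "192.0.2.10",   datetime(2013, 12, 16, 9, 30)),
]

# Publicly listed Tor entry-guard IPs (invented addresses)
tor_guards = {"198.51.100.4", "203.0.113.7"}

threat_time = datetime(2013, 12, 16, 9, 0)
window = timedelta(minutes=30)

# Anyone who touched a Tor guard within the window is on the short list
suspects = sorted({
    user for user, dst, ts in netflow
    if dst in tor_guards and abs(ts - threat_time) <= window
})
print(suspects)
```

The point the sketch makes is the one in the post: on a network where almost nobody uses Tor, the anonymity tool itself is the selector that shrinks the list to a handful of people to interview.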
An interesting research paper documents a "honeymoon effect" when it comes to software and vulnerabilities: attackers are more likely to find vulnerabilities in older and more familiar code. It's a few years old, but I haven't seen it before now. The paper is by Sandy Clark, Stefan Frei, Matt Blaze, and Jonathan Smith: "Familiarity Breeds Contempt: The Honeymoon Effect and the Role of Legacy Code in Zero-Day Vulnerabilities," Annual Computer Security Applications Conference 2010.
Abstract: Work on security vulnerabilities in software has primarily focused on three points in the software life-cycle: (1) finding and removing software defects, (2) patching or hardening software after vulnerabilities have been discovered, and (3) measuring the rate of vulnerability exploitation. This paper examines an earlier period in the software vulnerability life-cycle, starting from the release date of a version through to the disclosure of the fourth vulnerability, with a particular focus on the time from release until the very first disclosed vulnerability.
Analysis of software vulnerability data, including up to a decade of data for several versions of the most popular operating systems, server applications and user applications (both open and closed source), shows that properties extrinsic to the software play a much greater role in the rate of vulnerability discovery than do intrinsic properties such as software quality. This leads us to the observation that (at least in the first phase of a product's existence), software vulnerabilities have different properties from software defects.
We show that the length of the period after the release of a software product (or version) and before the discovery of the first vulnerability (the 'Honeymoon' period) is primarily a function of familiarity with the system. In addition, we demonstrate that legacy code resulting from code re-use is a major contributor to both the rate of vulnerability discovery and the numbers of vulnerabilities found; this has significant implications for software engineering principles and practice.
This story is about how at least two professional online poker players had their hotel rooms broken into and their computers infected with malware.
I agree with the conclusion:
So, what's the moral of the story? If you have a laptop that is used to move large amounts of money, take good care of it. Lock the keyboard when you step away. Put it in a safe when you're not around it, and encrypt the disk to prevent off-line access. Don't surf the web with it (use another laptop/device for that, they're relatively cheap). This advice is true whether you're a poker pro using a laptop for gaming or a business controller in a large company using the computer for wiring a large amount of funds.
Snappy-looking bow tie.
As usual, you can also use this squid post to talk about the security stories in the news that I haven't covered.
This is an interesting story from World War II about trust:
Jones notes that the Germans doubted their system because they knew the British could radio false orders to the German bombers with no trouble. As Jones recalls, "In fact we did not do this, but it seemed such an easy countermeasure that the German crews thought that we might, and they therefore began to be suspicious about the instructions that they received."
The implications of this are perhaps obvious but worth stating nonetheless: a lack of trust can exist even if an adversary fails to exploit a weakness in the system. More importantly, this doubt can become a shadow adversary. According to Jones, "...it was not long before the crews found substance to their theory [that is, their doubt]." In support of this, he offers the anecdote of a German pilot who, returning to base after wandering off course, grumbled that "the British had given him a false order."
I think about this all the time with respect to our IT systems and the NSA. Even though we don't know which companies the NSA has compromised -- or by what means -- knowing that they could have compromised any of them is enough to make us mistrustful of all of them. This is going to make it hard for large companies like Google and Microsoft to get back the trust they lost. Even if they succeed in limiting government surveillance. Even if they succeed in improving their own internal security. The best they'll be able to say is: "We have secured ourselves from the NSA, except for the parts that we either don't know about or can't talk about."
Last week the Washington Post reported on how the NSA tracks mobile phones worldwide, and this week they followed up with source documents and more detail.
Barton Gellman and Ashkan Soltani are doing some fantastic reporting on the Snowden NSA documents. I hope to be able to do the same again, once Pierre Omidyar's media venture gets up and running.
The Washington Post has a detailed article on how the NSA uses cookie data to track individuals. The EFF also has a good post on this.
I have been writing and saying that surveillance is the business model of the Internet, and that government surveillance largely piggybacks on corporate capabilities. This is an example of that. The NSA doesn't need the cooperation of any Internet company to use their cookies for surveillance purposes, but it does need their capabilities. And because the Internet is largely unencrypted, the NSA can use those capabilities for its own purposes.
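The piggybacking point can be made concrete with a toy example: a passive eavesdropper on unencrypted HTTP doesn't need a company's cooperation to use its tracking cookie; it just reads the Cookie header off the wire and treats the value as a persistent identifier. The captured requests below are invented.

```python
# Simplified illustration: group sniffed HTTP requests by tracking
# cookie to link sessions to one user. All data here is invented.

from collections import defaultdict

captured = [
    {"src_ip": "192.0.2.5",   "host": "ads.example.com", "cookie": "id=abc123"},
    {"src_ip": "198.51.100.9", "host": "ads.example.com", "cookie": "id=abc123"},
    {"src_ip": "203.0.113.2", "host": "ads.example.com", "cookie": "id=zzz999"},
]

# Map each cookie value to every network location it was seen from
sightings = defaultdict(set)
for req in captured:
    sightings[req["cookie"]].add(req["src_ip"])

# The same cookie appearing from two networks links those sessions
# to a single user, even as their IP address changes.
for cookie, ips in sorted(sightings.items()):
    print(cookie, sorted(ips))
```

The cookie was set by the advertiser for its own tracking purposes; the eavesdropper simply rides along, which is exactly the public-private partnership the post describes.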
Reforming the NSA is not just about government surveillance. It has to address the public-private surveillance partnership. Even as a group of large Internet companies have come together to demand government surveillance reform, they are ignoring their own surveillance activities. But you can't reform one without the other. The Free Software Foundation has written about this as well.
Little has been written about how QUANTUM interacts with cookie surveillance. QUANTUM is the NSA's program for real-time responses to passive Internet monitoring. It's what allows them to do packet injection attacks. The NSA's Tor Stinks presentation talks about a subprogram called QUANTUMCOOKIE: "forces clients to divulge stored cookies." My guess is that the NSA uses frame injection to surreptitiously force anonymous users to visit common sites like Google and Facebook and reveal their identifying cookies. Combined with the rest of their cookie surveillance activities, this can de-anonymize Tor users if they use Tor from the same browser they use for other Internet activities.
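My frame-injection guess above can be sketched in miniature: an in-path attacker answers an anonymous user's request with a page that silently loads common third-party sites, so the browser attaches any stored identifying cookies to those requests. This is speculative: the actual QUANTUMCOOKIE mechanism is not public, and the payload below is just an illustration using the sites named in the text.

```python
# Speculative sketch of a frame-injection payload: a page of invisible
# iframes that forces the browser to contact common sites, revealing
# any identifying cookies it holds for them. Not the NSA's actual code.

TRACKED_SITES = ["https://www.google.com/", "https://www.facebook.com/"]

def injection_page(sites):
    """Build an HTML body of hidden iframes pointing at `sites`."""
    frames = "\n".join(
        f'  <iframe src="{url}" width="0" height="0" '
        f'style="display:none"></iframe>'
        for url in sites
    )
    return f"<html><body>\n{frames}\n</body></html>"

payload = injection_page(TRACKED_SITES)
print(payload)
```

A passive collector watching the resulting requests would then see the user's Google or Facebook cookies on the wire, tying the Tor session to a known identity.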
The NSA is spying on chats in World of Warcraft and other games. There's lots of information -- and a good source document. While it's fun to joke about the NSA and elves and dwarves from World of Warcraft, this kind of surveillance makes perfect sense. If, as Dan Geer has pointed out, your assigned mission is to ensure that something never happens, the only way you can be sure that something never happens is to know everything that does happen. Which puts you in the impossible position of having to eavesdrop on every possible communications channel, including online gaming worlds.
One bit (on page 2) jumped out at me:
The NMDC engaged SNORT, an open source packet-sniffing software, which runs on all FORNSAT survey packet data, to filter out WoW packets. GCHQ provided several WoW protocol parsing scripts to process the traffic and produce Warcraft metadata from all NMDC FORNSAT survey.
NMDC is the New Mission Development Center, and FORNSAT stands for Foreign Satellite Collection. MHS, which also appears in the source document, stands for -- I think -- Menwith Hill Station, a satellite eavesdropping location in the UK.
Since the Snowden documents first started being released, I have been saying that while the US has a bigger intelligence budget than the rest of the world's countries combined, agencies like the NSA are not made of magic. They're constrained by the laws of mathematics, physics, and economics -- just like everyone else. Here's an example. The NSA is using Snort -- an open source product that anyone can download and use -- because that's a more cost-effective tool than anything they can develop in-house.
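The filtering step the quoted document describes can be illustrated with a toy version: pull World of Warcraft traffic out of a larger capture. Classic WoW realm traffic used TCP port 3724; treating "destination port 3724" as the match condition is my assumption for illustration, not the actual Snort rule the NMDC ran.

```python
# Toy version of the WoW-filtering step: select packets by destination
# port and emit simple metadata. Port 3724 is WoW's classic realm port;
# the packets and the rule itself are invented for illustration.

packets = [
    {"src": "10.0.0.1", "dst_port": 443,  "len": 1500},
    {"src": "10.0.0.2", "dst_port": 3724, "len": 220},   # WoW
    {"src": "10.0.0.3", "dst_port": 3724, "len": 96},    # WoW
    {"src": "10.0.0.4", "dst_port": 53,   "len": 78},
]

WOW_PORT = 3724

# Filter the survey data down to game traffic, then reduce to metadata
wow_packets = [p for p in packets if p["dst_port"] == WOW_PORT]
metadata = [(p["src"], p["len"]) for p in wow_packets]
print(metadata)
```

The real pipeline would hand the matched packets to protocol-parsing scripts (as GCHQ's did) rather than just recording lengths, but the shape of the job — a cheap filter in front of an expensive parser — is the same.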
The weird squid-like creature floating around Bristol Harbour is a hoax.
As usual, you can also use this squid post to talk about the security stories in the news that I haven't covered.
I have a new book. It's Carry On: Sound Advice from Schneier on Security, and it's my second collection of essays. This book covers my writings from March 2008 to June 2013. (My first collection of essays, Schneier on Security, covered my writings from April 2002 to February 2008.)
There's nothing in this book that hasn't been published before, and nothing you can't get free off my website. But if you're looking for my recent writings in a convenient-to-carry hardcover-book format, this is the book for you.
I'm also happy with the cover.
The Kindle and Nook versions are available now, and they're 50% off for a limited time.
Unfortunately, the paper book isn't due in stores -- either online or brick-and-mortar -- until 12/27, which makes it a pretty lousy Christmas gift, though Amazon and B&N both claim it'll be in stock there on December 16. And if you don't mind waiting until after the new year, I will sell you a signed copy of the book here.
Suggestions for a title of my third collection of essays, to be published in five-ish years, are appreciated.
Telepathwords is a pretty clever research project that tries to evaluate password strength. It's different from normal strength meters, and I think it's better.
Telepathwords tries to predict the next character of your passwords by using knowledge of:
- common passwords, such as those made public as a result of security breaches
- common phrases, such as those that appear frequently on web pages or in common search queries
- common password-selection behaviors, such as the use of sequences of adjacent keys
Password-strength evaluators have generally been pretty poor, regularly assessing weak passwords as strong (and vice versa). I like seeing new research in this area.
Here's a new biometric I know nothing about:
The wristband relies on authenticating identity by matching the overall shape of the user's heartwave (captured via an electrocardiogram sensor). Unlike other biotech authentication methods -- like fingerprint scanning and iris-/facial-recognition tech -- the system doesn't require the user to authenticate every time they want to unlock something. Because it's a wearable device, the system sustains authentication so long as the wearer keeps the wristband on.
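A guess at how "matching the overall shape of the user's heartwave" might work — the vendor's actual algorithm isn't described in the quote — is to compare a fresh ECG sample against the enrolled template with normalized correlation and accept when the shapes are similar enough. The waveforms and threshold below are invented.

```python
# Hypothetical ECG shape matching via normalized correlation.
# Waveforms and the acceptance threshold are invented.

import math

def normalized_correlation(a, b):
    """Correlation coefficient between two equal-length waveforms."""
    ma = sum(a) / len(a)
    mb = sum(b) / len(b)
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    den = (math.sqrt(sum((x - ma) ** 2 for x in a))
           * math.sqrt(sum((y - mb) ** 2 for y in b)))
    return num / den

enrolled = [0.0, 0.1, 0.9, -0.4, 0.2, 0.05, 0.0, 0.0]      # template beat
sample   = [0.0, 0.12, 0.85, -0.38, 0.18, 0.04, 0.0, 0.0]  # same wearer
stranger = [0.0, 0.5, 0.5, 0.5, -0.9, 0.3, 0.1, 0.0]       # different shape

THRESHOLD = 0.95  # invented acceptance threshold

print(normalized_correlation(enrolled, sample) > THRESHOLD)    # accept
print(normalized_correlation(enrolled, stranger) > THRESHOLD)  # reject
```

The continuous-authentication property in the quote then follows from the form factor, not the matching: as long as the wristband stays on the same wrist, the match made at enrollment is assumed to still hold.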
EDITED TO ADD (12/13): A more technical explanation.
Some apps are being distributed with secret Bitcoin-mining software embedded in them. Coins found are sent back to the app owners, of course.
And to make it legal, it's part of the end-user license agreement (EULA):
COMPUTER CALCULATIONS, SECURITY: as part of downloading a Mutual Public, your computer may do mathematical calculations for our affiliated networks to confirm transactions and increase security. Any rewards or fees collected by WBT or our affiliates are the sole property of WBT and our affiliates.
This is a great example of why EULAs are bad. The stunt that resulted in 7,500 people giving Gamestation.co.uk their immortal souls a few years ago was funny, but hijacking users' computers for profit is actually bad.