Congratulations to Apple for featuring a fingerprint reader as part of its new iPhone. It was reported by The Wall Street Journal here, on the blog of Bruce Schneier here, by Time Tech here, and in dozens of other places. Unsurprisingly, this revelation spurred anxiety among the conspiracy theorists out there. The two common concerns that were raised are:

- that Apple will end up holding a giant database of its users' fingerprints, and
- that attackers will hack the device and steal its owner's fingerprint data.
(There is another line of concern, related to the Fifth Amendment and how its protection may be foiled by authenticating using biometrics alone, but this is a legal concern which is off topic here.)
While a bit of paranoid thinking is always helpful, security engineering requires more than crying foul each time a mega-corporation launches a new technology that involves private data. Assets and threats need to be determined first; only then can we decide whether the risk is worth the benefits.
Say you own an iPhone and use its fingerprint reader. There are two primary risks that you face, and they happen to map onto the two concerns quoted above.
Other imaginary threats, such as the fear that service providers may collect your fingerprint data, ignore the fact that, given proper design, the fingerprint itself never leaves the device as part of the authentication protocol. Applications never have access to the raw fingerprint data, only to trusted authentication services that use this data. If an application somehow gained unauthorized access to the fingerprint data (that is, even though the interface does not support such access), then that application falls under the "malware" category above.
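This match-on-device principle can be illustrated with a minimal sketch. The Python below is purely illustrative; the class and method names are invented and bear no relation to Apple's actual interfaces. The point is the shape of the interface: applications can ask for a verdict, but there is no call that returns the enrolled template.

```python
import hmac


class SecureEnclaveSim:
    """Toy model of match-on-device authentication: the enrolled
    template is private to the enclave, and the only operation the
    outside world can invoke returns a yes/no verdict."""

    def __init__(self, enrolled_template: bytes):
        # Held inside the enclave; no method ever returns it.
        self.__template = enrolled_template

    def authenticate(self, candidate_template: bytes) -> bool:
        # A constant-time byte comparison stands in for real fingerprint
        # matching, which is fuzzy and tolerates sensor noise.
        return hmac.compare_digest(self.__template, candidate_template)


# An application can only ask "did the user authenticate?" --
# it never sees the template itself.
enclave = SecureEnclaveSim(b"alice-fingerprint-template")
print(enclave.authenticate(b"alice-fingerprint-template"))  # True
print(enclave.authenticate(b"someone-else"))                # False
```

The design choice worth noting is that the sensitive data is an implementation detail of the service, not a value passed across the API boundary.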
Let us start with the Apple database. Theoretically, it is possible that Apple makes the device send the fingerprint data to their servers, where it is stored forever. Such a database would not be equivalent to the risky national biometric databases, which usually contain more than one fingerprint per person, but it is not a pleasant thought nonetheless. While this is possible, Apple clearly declared, as part of their press info, that the fingerprint data is kept inside the A7 chip on the device and is not sent to their servers. Of course, they could be lying; but that would be a blatant enough lie to surface sooner or later, and Apple has a lot to lose in that case. If they do lie, you will have your fingerprint on their servers. As irritating as this might be, consider that if Apple is indeed in the business of fraudulently collecting personal data, then, fingerprints aside, they may already have your facial photo and all your private information, calls, and correspondence on their servers.
Now let us assume that the device is hacked really badly; as badly as we can imagine. The A7 chip features what Apple refers to as the Secure Enclave. To the best of my understanding, this is an ARM TrustZone-based secure execution environment. This environment carries out its operations separately from the native operating system, so if all is implemented properly, it is not supposed to reveal the fingerprint data even in the face of a complete iOS compromise. If this is the case, then getting the fingerprint data out takes more than the typical rooting or jailbreaking process. The attacker would have to mount a more sophisticated attack, either exploiting the privileged code running in the Secure Enclave or attacking the hardware. While certainly not impossible, if Apple's engineers know what they are doing, this can be made fairly difficult.
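The separation argument can be sketched as well. In a TrustZone-style design, the normal-world OS reaches the secure world only through a narrow call gate; it has no primitive for reading secure-world memory. The toy Python below (invented names, and a MAC key standing in for any enclave secret) shows why a full OS compromise yields an oracle, not the secret itself.

```python
import hashlib
import hmac


class SecureWorld:
    """Toy model of a TrustZone-style secure world: callers can
    invoke a narrow set of services, but there is no call that
    dumps secure-world memory."""

    def __init__(self, device_key: bytes):
        # Provisioned into the secure world; never crosses the gate.
        self.__device_key = device_key

    def mac(self, message: bytes) -> bytes:
        # The only exposed service: *use* the key, never reveal it.
        return hmac.new(self.__device_key, message, hashlib.sha256).digest()


# Even an attacker with full control of the normal-world OS is
# limited to the call gate: they can request MACs at will, but
# cannot read the key out.
secure = SecureWorld(b"\x13" * 32)
tag = secure.mac(b"unlock-request")
```

Of course, an oracle can still be abused (the attacker can authorize operations while they control the OS), which is exactly why the remaining attack surface is bugs in the privileged code behind the gate, as the paragraph above argues.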
Difficult is not impossible, so let us discuss this scenario nonetheless. An attacker breaks into the Secure Enclave and obtains fingerprint data. According to Apple, what is stored is not the fingerprint image itself; still, let us err on the side of caution and assume it is. For a start, hardware attacks can be dismissed as uninteresting: an attacker who can get to your physical device can most likely scrape your fingerprint off the plastic case, or off the glass of water you just held. The more severe attack is one in which malware, installed by the user by mistake, carries an exploit payload that compromises the Secure Enclave and leaks the fingerprint data out. This is a scalable attack, and probably the most realistic and severe threat discussed so far.
Malware that can compromise the Secure Enclave can get your fingerprint data and send it to an adversary. However, malware with such capabilities can get at every asset on the device: it can take your photo, capture all your data, and also collect all the payment credentials stored in the Secure Enclave. That lost fingerprint, which is unique to you and yet hardly a secret, might be the least of your problems in such a case.
To sum things up, there is a risk; there seldom is not. Apple could turn out to be collecting, and possibly selling and/or losing, a fingerprint database. It is not too likely, but it might happen. Someone could also find an exploit against the Secure Enclave and extract useful fingerprint data from malware-infested devices, building a database of the fingerprints of the owners of compromised devices. This can also happen. But by the time attackers reach this capability, or if Apple turns against its users, either one will also be able to populate that database with your facial photo, all your photos, your data, your location history, and your secure payment information. So is that fingerprint really what you worry about most?