A few months ago, I wrote about the problem that emerges from having to rely on digital certificates issued by Certification Authorities of which we, the relying parties, are not the paying customers. As a result, we rely on the CA (Certification Authority) certification process, while the CA has no economic incentive to actually maintain a robust certification mechanism and justify our trust.
Unexpectedly, this post, titled “The Inevitable Collapse of the Certificate Model”, quickly became the most-read post on my blog, drawing more views than any other individual post.
One suggested alternative is CAcert.org, a community-based certification organization. Here are my thoughts on the ability of such a mechanism to solve the certification problem.
What CAcert proposes is a certification process built on a community of Assurers. The trust process is bootstrapped by physical introduction, cross-authentication, and certification of individuals. Individuals receive reputation scores (“assurance points”) that grow as more people vouch for them, and can subsequently certify others.
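To make the mechanics concrete, here is a toy sketch of such a points system. The threshold and point values below are hypothetical, chosen purely for illustration; CAcert’s actual rules (point caps, assurer tests, and so on) differ in detail:

```python
# Toy model of a community assurance-points scheme.
# All numbers are hypothetical, not CAcert's real policy.

ASSURER_THRESHOLD = 100  # points needed before you may assure others

points: dict[str, int] = {"alice": 120, "bob": 0}

def assure(assurer: str, assuree: str, awarded: int) -> None:
    """An established assurer awards points after a face-to-face check."""
    if points.get(assurer, 0) < ASSURER_THRESHOLD:
        raise PermissionError(f"{assurer} is not yet an assurer")
    points[assuree] = points.get(assuree, 0) + awarded

assure("alice", "bob", 35)   # one physical meeting earns bob some points
print(points["bob"])         # bob is still far below the assurer threshold
```

The key property is the bootstrap: only participants who have themselves accumulated enough points may hand out points to newcomers.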
This is a likable concept. I have a lot of appreciation for systems that maintain their usefulness and robustness by depending solely on their direct beneficiaries, that is, without relying on trusted third parties. The question to ask, however, is whether, and how, this mechanism can solve the certification issue at hand.
The mechanism adopted by CAcert is presented as one for certifying individuals, but it should work for organizations as well. For example, a company can have its public key signed by individuals (or other companies) that hold adequate reputation points, thus forming a certificate for that company which relying parties should find trustworthy.
One difficulty with such reputation-based systems is that the ease of entry allows them to be poisoned by entities that pretend to be benign for as long as they can, and then use the powers the community has granted them for malicious purposes. Such systems often protect themselves by the notion that while there are some rotten apples, the population is largely well-meaning; a concept often referred to as “The Kindness of Strangers”.
However, to harness the positive power of the well-meaning mass, a mechanism needs to be in place to dilute the impact of the few malicious participants. This can be achieved, for example, by aggregation or by probability. As an example of aggregation, consider customer-review websites. A bad product might still receive great reviews from a couple of individuals associated with the vendor, but the effect of these false reviews is likely to be diluted by the tens or hundreds of authentic reviews issued by everyone else. As an example of probability, consider my tip to my young son on how to act if he gets lost and needs the assistance of adult strangers. The tip is simple: count five adults from left to right and approach the fifth one (i.e., select an adult at random and approach him or her). While there are criminals out there, the chances that a person selected at random is a predator are ultra slim, because predators are such a minuscule part of society at large.
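A few lines of Python make both dilution effects concrete. Every number here is made up for illustration only:

```python
# Dilution by probability: a tiny malicious fraction of a large
# population makes a uniformly random pick overwhelmingly safe.
POPULATION = 100_000
MALICIOUS = 10  # hypothetical: 0.01% of the population

p_single = MALICIOUS / POPULATION  # chance one random pick is malicious

# Dilution by aggregation: a couple of shill reviews barely move
# the average of a hundred honest ones.
genuine_reviews = [4, 5, 4, 5, 5, 4, 3, 5, 4, 5] * 10  # 100 honest reviews
fake_reviews = [5, 5]                                   # 2 vendor shills

honest_avg = sum(genuine_reviews) / len(genuine_reviews)
with_fakes = (sum(genuine_reviews) + sum(fake_reviews)) / (
    len(genuine_reviews) + len(fake_reviews)
)

print(f"chance a random stranger is malicious: {p_single:.4%}")
print(f"honest average: {honest_avg:.2f}, with shills: {with_fakes:.2f}")
```

In both cases the defense is not that bad actors are absent, but that their weight is swamped by the honest majority.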
In CAcert, there does not seem to be any mechanism to protect us against the malicious few who become assurers. If CAcert becomes prevalent, enough false assurers will try to sneak into the system, and there is no way to prevent all of them from ever succeeding. In the extreme case, attackers can always bribe a once-benign assurer who needs cash, or simply break into his computer and steal his credentials.
Malicious assurers can then sign a rogue key as belonging to, say, Google. This certificate for Google, technically genuine but carrying a false key, may have a reputation score of 5. True, at the same time there is another certificate for the same Google, the original one, with a reputation score of 100,000. However, your browser is never in a position to compare two certificates; it only sees the one it got, and needs to decide whether to trust it. You could, of course, configure the browser to only accept certificates with a score of 100 or higher, but what about less popular sites that will never attract enough assurers to reach such a score? Also, say you set the threshold in your browser to 20, connect one day to your bank, and the browser receives a certificate with a score of 18. Will you get into your car and drive to the bank, or just lower the threshold, which is largely arbitrary anyway, by two points?
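The dilemma reduces to a few lines of code. The scores and thresholds below are the hypothetical ones from the paragraph above; the point is that the browser’s entire decision collapses to one arbitrary number compared against the single certificate it happens to see:

```python
# The browser never compares two certificates; it sees one score
# and one locally configured threshold. Scores are hypothetical.

def accept_certificate(score: int, threshold: int) -> bool:
    """Accept the presented certificate iff its score meets the threshold."""
    return score >= threshold

ROGUE_GOOGLE = 5        # rogue key signed by a few malicious assurers
REAL_GOOGLE = 100_000   # the genuine certificate the browser never sees
YOUR_BANK = 18          # legitimate, but a niche site with few assurers

for threshold in (100, 20, 18):
    print(
        f"threshold={threshold}: "
        f"rogue={accept_certificate(ROGUE_GOOGLE, threshold)}, "
        f"bank={accept_certificate(YOUR_BANK, threshold)}"
    )
```

Notice that no threshold behaves well for both sites: a strict one locks you out of your bank, and the temptation is always to lower it two more points.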
As a remedy for the situations above, the browser may also be taught to cache past certificates and compare them to new ones it receives, alerting you when a certificate changes for no anticipated reason. This can be, and is, done with ordinary certificates as well, forming a local (rather than global) reputation system.
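Here is a minimal sketch of such a local cache, in the spirit of trust-on-first-use pinning. Certificates are represented as raw bytes for brevity; a real browser would of course parse and validate full X.509 certificates:

```python
import hashlib

class PinStore:
    """Local cache of certificate fingerprints, keyed by hostname."""

    def __init__(self) -> None:
        self._pins: dict[str, str] = {}

    def check(self, hostname: str, cert_der: bytes) -> str:
        fingerprint = hashlib.sha256(cert_der).hexdigest()
        known = self._pins.get(hostname)
        if known is None:
            # First contact: trust and remember (trust on first use).
            self._pins[hostname] = fingerprint
            return "first-use: pinned"
        if known == fingerprint:
            return "match: trusted as before"
        return "MISMATCH: certificate changed -- alert the user"

store = PinStore()
print(store.check("bank.example", b"cert-bytes-v1"))  # first-use: pinned
print(store.check("bank.example", b"cert-bytes-v1"))  # match: trusted as before
print(store.check("bank.example", b"cert-bytes-v2"))  # mismatch alert
```

The weakness, of course, is the first connection: the cache can only detect a change relative to whatever it saw first, which is exactly why it is a local rather than global reputation system.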
Another question, just as important, is how would company websites get well-informed assurance votes? Say you are an Assurer with a score high enough to let you assign high scores to other individuals and sites. How would you know that Google’s certificate is indeed theirs and not a counterfeit? Will a company be required to convince a hundred independent individual assurers that it is who it claims to be? Remember, the fact that you logged into Google and nothing bad happened does not at all mean that you were not presented with a fake certificate. When a suppressive regime, or just an employer, mounts a MITM (Man-In-The-Middle) attack using a fake certificate, it captures all traffic, but only very seldom does it cause active, noticeable interference.
CAcert may be taking the right path. Harnessing the power of the kind masses is often the right way to solve wide public security problems. However, it is still not clear how this can solve the aforementioned inadequacy of the certificate model. There is probably more work to be done.