A few days ago, we were notified (e.g., here and here) that a hack into the network of RSA Security (the security division of EMC) led to the theft of something related to the SecurID token product.
We cannot determine the real impact of this security breach until RSA Security tells us what exactly got stolen. I believe that this information will be made available, as a result of legal or public pressure, if for no other reason. Until this data becomes available, let us examine the two most probable options, and how we may respond to each.
There are two main options as to what could have been stolen from the RSA Security network:
Seed values: the secret keys that allow the server component to determine which one-time password is displayed on each of a set of SecurID tokens at any given moment.
Source code or design of the implementation: either design documents, or source code of the logic that is carried out by the tokens, and/or by the server component, to determine the one-time password of a token at any given moment.
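To make the first option concrete, here is a minimal sketch of how a time-based one-time password can be derived from a per-token secret seed. This follows the publicly standardized TOTP construction (RFC 6238, built on RFC 4226's HOTP); SecurID's actual algorithm is proprietary and differs in detail, so treat this only as an illustration of why leaked seeds are fatal: anyone holding a token's seed can compute exactly what the token displays.

```python
import hashlib
import hmac
import struct
import time

def one_time_password(seed, t=None, step=60, digits=6):
    """Derive the current OTP from a per-token secret seed and the clock.

    Mirrors the public TOTP scheme (RFC 6238); SecurID's real algorithm
    is proprietary and differs in detail.
    """
    counter = int((time.time() if t is None else t) // step)
    msg = struct.pack(">Q", counter)                 # time counter as 8 bytes
    digest = hmac.new(seed, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                       # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)
```

The server holds a copy of the same seed and runs the same computation, which is why a stolen seed database compromises every affected token: the display value is a pure function of the seed and the time.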
If it is seed values that were stolen, then we should throw away any tokens we currently use and replace them with new tokens whose seed values were not stolen and cannot be derived from seed values that were stolen. I would certainly not rely on observations (circulating over the net) that even if the seed values were compromised, attacks are difficult to mount because the serial numbers of the attacked tokens are not known. Serial numbers of tokens are not secret values and cannot be expected to behave as such. They can be obtained by diverse means: perhaps by reverse-engineering traffic data, by various physical attacks, or by guessing (for instance, if the corporate serial-number space is known). If the relevant seed values are gone, then so are the tokens that rely on their secrecy for their uniqueness.
If it is the design, or source code, that was stolen, then we should throw away any tokens we currently use, as well as SecurID as a product, until it undergoes a major and well-scrutinized rewrite. A properly designed security product does not collapse upon the disclosure of its design and/or source code. Good security products deploy safe algorithms, and safe algorithms are ones that retain their robustness as long as their secret parameters are kept secret, regardless of the secrecy of their logic (this is Kerckhoffs's principle). This is a reasonable requirement, because the design cannot realistically be considered secret for long. It is known to too many people and is incorporated into too many apparatuses to just never be revealed. Good commercial security systems never rely on the secrecy of their logic; never.
In the RSA Security case, the only thing we do know is that some security degradation was caused. First, the filing read: “…could potentially compromise guarded networks in a ‘broader attack’ in the future.” Second, the very lack of essential information in this breach disclosure is an indication of the discomfort the company feels about it. If the design were proper, and no security impact had been caused, then the company would likely either not have mentioned this breach at all, or told us everything it knows about it. If it had nothing to hide, it would not behave as if it had.
If the design or code was stolen, and damage was caused as a result, then the RSA Security engineers should go back to the drawing board before issuing new tokens.
As a note for the sake of completeness, even if only seeds (rather than design or source code) were stolen from RSA Security, proper design could have prevented security harm. Optimally, the system could probably be designed so as not to rely on any secrets kept by RSA Security. However, I can accept that considerations of token cost may have prevented the adoption of such a design.
I do not think we should boycott RSA Security from now on. RSA Security is a good security company. Security is a ‘war’ won or lost not by individual incidents but by statistics, and RSA Security does not fare badly in this respect. I do not blame the company for facing an occasional security breach, as long as it properly deals with the aftermath. So far, I am only against the concealment of the details of the breach. Corporate and government users rely on RSA Security and on SecurID for protecting their assets, and when an event happens that jeopardizes the protection of these assets, RSA Security should provide the necessary information to allow these organizations to carry out their own independent assessment of the risk to which they were made subject.
Edited to add: By today (April 6th, 2011), we know that the attackers used phishing e-mails sent to several RSA employees, which carried a payload exploiting an Adobe Flash vulnerability. This allowed the attackers to take over machines inside RSA. There are no details yet addressing the more important question of what exactly was stolen.