It is already obvious that security is hard to do right. Bruce Schneier has written a good essay on this, "Why Cryptography Is Harder Than It Looks". The essay refers to cryptography, but it touches on the subject as a whole. It is still not always clear, however, where the hard core of security analysis work lies, and what exactly distinguishes it from QA and from other system engineering domains.
I would like to take a shot at explaining the fundamental difference between assuring functionality and assuring security, and pinpoint the toughest part of security analysis.
Typical system design and QA are about making sure a system does what it should. Security analysis is about making sure a system does not do what it should not. This is probably not the best definition of security analysis. Yet, once this is realized, it becomes apparent that assuring security is a problem of a wider dimension than assuring functionality.
Programming and system engineering consist of converting real-world requirements into the appropriate set of computer instructions. Computers are deterministic, and thus they are programmed by positive constructs, that is, by specifying what they shall do. They cannot be told what they shall not do.
Functional aspects of the design are all about mapping the shall’s of the functional requirements into shall’s that are implemented. It is not an easy translation, but it is a definitive one. QA is about deriving testing requirements from the functional requirements and from the implementation. It is sometimes more demanding than programming, because one must consider all cases, not just the intuitive ones, and must make sure that the system treats all of them properly. It requires a sharp mind, but not necessarily a creative one. Creativity is needed in QA not by the definition of the task, but for practicality: it is used to decide which subset of all possible cases shall actually be tested, when testing all of them is infeasible. Given an endless amount of time (or endless computation power), and an oracle for determining the correctness of results, typical QA could be done in a completely automated fashion.
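The claim that QA reduces, in principle, to mechanical enumeration can be sketched in a few lines. This is a toy illustration; the function under test, its input space, and the oracle are all hypothetical stand-ins:

```python
# Toy illustration: with an oracle and enough time, functional QA
# is just exhaustive enumeration of cases.

def clamp(x, lo=0, hi=100):
    """Hypothetical function under test: confine x to [lo, hi]."""
    return max(lo, min(x, hi))

def oracle(x, result):
    """The oracle defines correctness: the result must lie in range,
    and must equal x whenever x was already in range."""
    in_range = 0 <= result <= 100
    preserved = (result == x) if 0 <= x <= 100 else True
    return in_range and preserved

# Given unlimited time, "all cases" are simply enumerated and checked.
failures = [x for x in range(-1000, 1001) if not oracle(x, clamp(x))]
assert failures == []
```

In practice the input space is far too large to enumerate, which is exactly where the creative selection of a test subset comes in.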
Security analysis requires a different type of mapping. It is the mapping that needs to be done from all the shall not’s that are mentioned in the security policy documents, to the shall’s that need to be implemented.
Just like other aspects of system design, security design consists of phases that parallel the system design phases. However, there is a non-deterministic gap somewhere in the middle: a gap that cannot be crossed by automated means.
At the beginning, there are the security objectives, security policy, threat model, and/or security target documents. These largely specify the reasoning behind, and the objectives of, the security measures. These documents typically form a well-structured representation of what seems intuitive, often trivial. In a sense, they map our fears into a scientific document saying what we would like to avoid. Sometimes, these documents are broken into smaller pieces, due to the complexity of the system, or in order to state specific security objectives for different parts of the system. Breaking our objectives into statements of a smaller scope is useful, and it may be tricky, but it is still not rocket science.
What probably is the rocket science of security is what comes next: the specification of the security requirements. These requirements can take one of many forms: one long list of do’s and don’ts, or a sub-chapter at the end of every chapter of the detailed design document. Regardless of how the requirements are presented, they need to be followed by the implementer, and thus they shall be written in his language, and his language is deterministic. A statement like “Keys shall not be available outside the module in cleartext” is a valid objective. However, “make sure that the keys are not available in the clear” is not a valid requirement that the implementer can follow. The security analyst needs to map all that shall not happen into positive instructions of what shall be done.
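For illustration, the negative objective above might be decomposed into positive requirements such as: key material shall be confined to a single module, and the module’s public API shall return only derived values, never raw key bytes. A minimal sketch of what the implementer could then build, with all class and method names hypothetical:

```python
import hashlib
import hmac
import os

class KeyModule:
    """Hypothetical sketch of positive requirements standing in for the
    objective 'keys shall not be available outside the module in
    cleartext': (1) key material is confined to this class;
    (2) the public API returns only derived values (MAC tags)."""

    def __init__(self):
        self._key = os.urandom(32)  # kept private; never returned

    def mac(self, message: bytes) -> bytes:
        # Callers receive authentication tags, not key bytes.
        return hmac.new(self._key, message, hashlib.sha256).digest()

module = KeyModule()
tag = module.mac(b"hello")
assert len(tag) == 32  # a derived value crosses the module boundary,
                       # while the raw key, by API design, does not
```

The point is not this particular design; it is that each requirement is now a positive, checkable instruction the implementer can act on.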
The mapping of world conditions, specified by their negation, into positive instructions of all that shall be done to block the paths leading to those undesired conditions, is not something that can ever be done by a deterministic algorithm. As an exception, it is possible to prove the security of some limited systems with limited security targets. However, in real systems with real security objectives, security assurance leaves very little room for automation and a lot of room for intuition.
What is the intuitive part? It is the part where the requirements are drawn from the objectives. In order to convert a statement like “an opponent shall not be able to inject code into the running process” into positive requirements that can be followed by the designer or coder, the security analyst should imagine all the paths an opponent might take to inject code. Then, he needs to come up with requirements that will effectively mitigate every single possible path. Foreseeing all paths cannot be done by an algorithm, and it is the most error-prone part of the job. The rest is like typical system design and programming.
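One imagined injection path makes the mapping concrete: if the analyst foresees injection through database queries, the resulting positive requirement might read “all queries shall use bound parameters; user input shall never be concatenated into query text.” A sketch of code that satisfies such a requirement, with the table, columns, and function name all hypothetical:

```python
import sqlite3

# Hypothetical schema used only for this illustration.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

def get_role(name: str):
    # Positive requirement in action: the bound parameter '?' makes the
    # driver treat 'name' strictly as data, so input such as
    # "' OR '1'='1" cannot alter the structure of the query.
    row = conn.execute(
        "SELECT role FROM users WHERE name = ?", (name,)
    ).fetchone()
    return row[0] if row else None

assert get_role("alice") == "admin"
assert get_role("' OR '1'='1") is None  # injection attempt matches no user
```

The objective said only what shall not happen; the requirement, and the code that follows it, say exactly what shall be done instead.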
The definition of security objectives requires an analytical mind and clarity. Following security requirements, down to the lowest details of secure coding, requires typical engineering skills. Deriving the above requirements from the above objectives requires both, as well as a high level of creative thinking.