
Useful threat modelling

Do you know what all security documents have in common? They were all, at some point, called a “threat model”. A joke, and not the funniest one, but it makes a point: there is no single approach to threat modelling, and not even a single definition of what a threat model really is. So what is it? It is most often considered to be a document that introduces the security needs of a system, using any one of dozens of possible approaches. Whatever the modelling approach, the threat model has just one strong requirement: it needs to be useful for whatever purpose it is made to serve. Let us try to describe what we usually want from a threat model, and how to achieve it.


The high-level purpose of any sound threat model is to be the first document in a series of deliverables that form the security specification of the system. Naturally, this first document has to set the stage by listing the high-level objectives for the security properties of the system. If this sounds trivial to you, then you might not yet have seen enough threat models. Such documents, particularly when done unprofessionally, often delve into prosaic descriptions of evil attackers and detailed attack scenarios. Such an essay can make a good read, and is often well written, but it seldom qualifies as an engineering document, which a threat model eventually is. So with the purpose at hand, we are left with the question of how we document the security expectations of the system.

All models are wrong…

The British statistician George Box wrote in 1976 that “all models are wrong, some are useful.” This statement can be taken in multiple directions, for example: there is no right way, but there is no wrong way either. The most straightforward reading is that there is no absolute truth, so one should seek the approach that is most helpful (as long as it is not entirely flawed, of course, which would also make it unhelpful).

To this end, we approach threat modelling from the goal backwards: we would like to specify the security expectations from the product; how can we most effectively accomplish this? The industry so far entertains two main approaches:

  • the attack-centric approach, and

  • the asset-centric approach.

The attack-centric approach, as the name implies, defines the objectives of product security in terms of what attacks it needs to mitigate. The asset-centric approach defines the objectives in terms of what assets it needs to protect. These are two ways to look at the very same thing: all attacks that are interesting from the security perspective are eventually against assets, and everything that assets are protected against, in the security context, is attacks. Therefore, there is no right or wrong here, only more useful versus less useful.

The big difficulty in security engineering

Engineering methodologies largely take the form of a series of specification efforts that take a high level definition and peel off abstraction layers one at a time. For example, one may start with a high level product specification containing product requirements, then create a system specification to implement it, followed by a high level design document (HLDD) to implement that system, and conclude with a low-level design document (LLDD) that implements whatever is prescribed by the HLDD. Finally, code (or any other form of machine logic) is written to implement whatever the detailed design specified.

Security engineering has much of that, but it contains one large caveat that makes it more difficult, imperfect, and a combination of art and science. The caveat is that when specifying security, beyond the usual steps of reducing the level of abstraction, there is also a point at which descriptive high-level provisions of what should not happen (i.e., security objectives) are converted into prescriptive instructions of what shall be implemented (i.e., security controls, or: requirements). If the high-level objectives were not descriptive, or were not worded as “what it is that shall be protected”, coverage would be impossible to inspect, let alone assure; and yet, if controls were descriptive rather than prescriptive, how could they be implemented? Engineering is prescriptive, always.

So somewhere along the specification chain, a security engineer takes objectives such as “credit card data shall be protected from disclosure” and converts them into a set of controls, such as “credit card data shall be encrypted (this or that way) in transit” and “credit card data shall be protected in the databases (using this or that mechanism)”, etc. This conversion is not merely peeling off an abstraction level; it converts “that shall not happen” into “that shall be done”. This type of conversion can be traced, but it cannot be assured. Whereas some security flaws result from controls that are improperly implemented, many other flaws, often the harder ones to fix, emerge from an imperfect mapping of this kind.
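To make the traceability point concrete, here is a minimal sketch of how the credit-card objective above could be mapped to controls in a machine-checkable way. The identifiers, wording, and structure are hypothetical, chosen only for illustration; real traceability usually lives in a requirements-management tool, but the idea is the same: the mapping can be traced mechanically, even though its sufficiency cannot.

```python
from dataclasses import dataclass

@dataclass
class Objective:
    """A descriptive, high-level statement of what shall not happen."""
    id: str
    text: str

@dataclass
class Control:
    """A prescriptive instruction of what shall be implemented."""
    id: str
    text: str
    covers: list[str]  # objective ids this control helps satisfy

# The credit-card example from the text, expressed as a traceable mapping.
objectives = [Objective("OBJ-1", "Credit card data shall be protected from disclosure")]
controls = [
    Control("CTL-1", "Encrypt credit card data in transit", covers=["OBJ-1"]),
    Control("CTL-2", "Encrypt credit card data at rest in the databases", covers=["OBJ-1"]),
]

# Tracing is mechanical: every objective should be referenced by at least one control.
uncovered = [o.id for o in objectives
             if not any(o.id in c.covers for c in controls)]
print(uncovered)  # an empty list means every objective traces to some control
```

Note what this check can and cannot do: it detects objectives with no controls at all, but it cannot tell whether the listed controls are applicable, effective, or sufficient; that judgment remains the unassurable part of the conversion.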

(I wrote about this difference between security engineering and other types of engineering 13 years ago, in a post titled: The toughest part of designing secure products.)

What would make a threat model helpful?

A threat model is best seen as a security document that has two primary intents:

  • set and communicate the security promise of the product; i.e., define the security expectations from the security model of the product, and

  • form the first level of security specification towards the fulfillment of those expectations.

With this in mind, the threat model shall define the security objectives, whether in the attack-centric or the asset-centric way. In the attack-centric model, an objective could look like “the system shall protect against an attack by (this kill-chain)” or “the system shall be protected against (this or that) side-channel attack”. The objectives in the threat model could be global or specific, depending on its depth. In the asset-centric model, an objective may look like “this asset of the system shall be protected against (this or that) class of attacker” (regardless of the attack).

The former, attack-based, approach creates objectives that are riper for becoming prescriptive requirements (security controls). Since the attack vectors are spelled out, a security practitioner can use her library of security measures and pick all that is necessary. It is still much work, as she needs to make sure that the means are applicable, effective and sufficient, but it is better contained. The latter, asset-based, approach results in objectives that are just as detailed, but still descriptive. Protect the asset? How? The security practitioner first needs to envision the possible types of attacks, and then define controls to mitigate them.

The former approach makes the security engineer’s work easier by enumerating the attacks for her, but was this our goal with threat modelling? And on the engineering front: what does it imply for the maintainability of the threat model?

The bridge over the gap

A big challenge of security engineering, as explained above, is bridging the gap between the descriptive objectives and the prescriptive controls. The threat model is on the descriptive side, setting objectives, and the specification of controls is on the prescriptive side, telling what to do. To make the gap the safest and least error-prone to cross, the boundaries have to be clear: this is the objective, and this is how it is to be met; for example: “Objective: protect user passwords when stored; Control: hash passwords with strong salts, and store only password hashes.”
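The password control above can be made concrete. Here is a minimal sketch of one possible implementation, assuming Python’s standard-library scrypt is an acceptable memory-hard hash; the cost parameters shown are illustrative, not a vetted recommendation, and real systems should follow current guidance for their chosen function:

```python
import hashlib
import hmac
import os

def hash_password(password: str) -> tuple[bytes, bytes]:
    """Return (salt, digest); only these are stored, never the password."""
    salt = os.urandom(16)  # a unique random salt per password
    digest = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    """Re-derive the hash with the stored salt and compare in constant time."""
    candidate = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
    return hmac.compare_digest(candidate, digest)

salt, digest = hash_password("correct horse battery staple")
print(verify_password("correct horse battery staple", salt, digest))  # True
print(verify_password("wrong guess", salt, digest))                   # False
```

Note that this fulfils the stated control, and thereby the stated objective; whether it fulfils the actual security need is exactly the question the next paragraph raises.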

An objective that is strongly tied to a particular attack, and hence to its mitigation, may be easier to specify controls against, but at the same time it blurs what the real objective is, and hence makes coverage harder to ascertain. When the objective is “protect against (the attack by) a password cracker”, it is easy to come up with the control “hash passwords securely”; but say we did hash passwords securely, and the objective is indeed covered: is our security covered?

Higher-level objectives are more frightening, and leave more room for omissions in their interpretation, but they force more thought to be put into the security specification process, and omissions in interpretation are still easier to detect than omissions in definition…

Maintainability of design

Engineering processes shall converge, and they are designed to minimize the cost of handling modifications. This is very clear in agile methodologies, but waterfall methodologies are also built to react efficiently to changes in design. The cost of modifications is contained by structuring the design artifacts like a pyramid, where the most likely changes occur closer to the base, and fewer changes (if any) occur at the top.

Security specifications shall maintain the same paradigm. When an objective changes, it may require changing controls, which in turn require design and code to be rewritten. It is therefore beneficial to have multiple layers of documentation that can effectively absorb changes, so they affect only the parts that absolutely must change. When a threat model contains objectives that are specific to attack vectors, this threat model will have to be maintained whenever new attacks are discovered, triggering a domino effect across other design artifacts. If the objectives are generic enough, most newly discovered attacks will result in additional controls, without necessitating changes to the threat model.
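As a toy illustration of this layering (all names and wording hypothetical), a generic asset-centric objective can stay fixed while a newly discovered attack is absorbed entirely at the controls layer:

```python
# The top of the pyramid: a generic, asset-centric objective that stays stable.
objective = "User credentials shall be protected against offline attackers"

# A lower layer: controls, which are expected to change more often.
controls = {
    "CTL-1": "Hash passwords with a memory-hard function and per-user salts",
}

def absorb_new_attack(controls: dict[str, str], control_id: str, text: str) -> None:
    """A newly discovered attack adds a control; the objective is untouched."""
    controls[control_id] = text

# E.g., a newly published cracking technique prompts a new control:
absorb_new_attack(controls, "CTL-2",
                  "Raise password-hashing work factors per updated guidance")

print(objective)      # unchanged: no domino effect on upstream artifacts
print(len(controls))  # 2: the change was contained in one layer
```

Had the objective instead named the specific attack vector, the same discovery would have forced an edit at the top of the pyramid, cascading into every artifact derived from it.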


I still stress that all models are wrong and some are useful, and everyone should adopt what he/she finds useful. I encourage you to first decide what is useful for you, and to follow this decision when selecting a threat modelling methodology. By my own logic, asset-based threat models win the usefulness contest, but you may have your own right answer, if you just ask yourself the right question.
