
Product Security Governance: Why and How

The term “security governance” is not widely used in the product security context. A web search for a decent definition turns up, among the first results, a definition by Gartner that addresses cyber security rather than product security. Other sources I looked at also focus on IT and cyber security.

But product security governance does exist in practice, and where it doesn’t – it often should. Companies that develop products with security considerations do engage in some sort of product security activity: code reviews, pen-tests, etc.; it is just the “governance” part that is often missing.

Product security is science; treat it as such.

This post describes what I think “security governance” means in the context of product security. It presents a simple definition, a discussion on why it is an insanely important part of product security, and a short list of what “security governance” should consist of in practice.

An informal definition

The definition of “security governance” in the context of product security is quite simple and matches the Gartner definition for the cybersecurity context. It is the overall supervision of the actions taken to ascertain product security. Since we speak of ‘governance’, the focus is on the strategic decisions that guide the security teams in deploying security measures so they match corporate objectives and priorities, such as the corporate ‘risk appetite’ and how the company brands the particular product.

Security governance in the traditional sense of IT and cyber defense is already complex, but at least the connection between security activities and the risk reductions they bring is straightforward. For example, if you want to protect user accounts, there is a handful of technologies you can use, such as: multi-factor authentication, phishing prevention tools, training, etc. The challenge is significant, because there are many sources of risk and plenty of technologies to choose from, and prioritization also takes some work, but all this effort is not sophisticated to model; it is ‘just’ laborious to carry out.

If the company creates a product that has security considerations, then product security is a completely different beast, which calls for its own flavor of security governance. Security governance in the product security context means the same thing: making sure that security engineering actions are in line with corporate priorities. However, in the context of product security, the problem is substantially harder. Mapping the organizational priorities of product security, such as risk appetite, branding requirements, or the need to comply with certain industry standards, onto the prioritization of technical security activities is not trivial.

Aligning corporate objectives with product security activities takes more than deciding what to do. It calls for defining and prioritizing activities, of course, but it also calls for a complete feedback loop as part of risk-assessment; it calls for product security to behave like science.

Why do we need product security governance at all?

It is clear why we want product security. Nobody questions the need for products to be secure, at least for those products that have security considerations. But what do we get from the “governance” part? Why can’t we make do with testing the product’s security once in a while (as often as we see fit) to make sure it’s reasonably secure?

There are two equally important reasons, which I will address one at a time:

  1. Product security governance is required for properly securing the product as a whole, and in a cost-effective way.

  2. In markets that care at all about security, security has to be seen as much as it has to be done, and proper governance provides such demonstrability.

Security engineering, apart from being sophisticated and multidisciplinary, has another property that makes it different from most other domains of engineering: it involves an additional challenge of coverage, on top of the usual challenge of solving engineering problems. In other words, it is not enough to properly do all that needs to be done; one also needs to worry about properly enumerating all that needs to be done in the first place, which is a challenge in its own right. When a product manager asks an engineer to write code that quickly looks up a value in a database, the engineer lists what needs to be carried out, gets to work, and once he’s done, he knows it. When a product manager asks an engineer to make sure that credit card data is never exposed to unauthorized users, the engineer can define some measures and implement them, but determining whether those measures are sufficient is in itself another challenge. Is it enough to encrypt the data at rest? At rest and in transit? What about encryption of backups? And what about the APIs? And buffer overflows in the implementation code? And this data does have to be revealed to some users, so how are they authenticated? And authorized? And what if someone changes my code? And the list goes on…

(For more on this challenge, see my post from 14 years ago: The toughest part of designing secure products.)

What usually happens is that either too little is done, or too much is done, or too much is done against one attack vector while the bar is not really raised, due to lax protection against another attack vector touching the same asset. (This last case is the most devastating, because we gained no additional security, at a higher cost.) The question of “what is enough?”, which is trivial to answer in plain engineering (it’s enough when it meets all prescribed requirements), is not always as easy in security engineering, where the requirements are not prescriptive; this is what product security governance comes to solve.

Product security governance is the process by which the bar is uniformly defined across the product (or products), formally articulated, and consistently translated into a prescriptive engineering format for design, implementation, and testing alike.

The second point, of having security ‘seen’ and not just ‘done’, is tempting to overlook, particularly by engineers. This topic probably deserves its own post like the previous one, but I somehow overlooked writing it myself, demonstrating this very statement. For now, let it suffice to say that security is a ‘market for lemons’. This is the term used in economics to denote a market where the customer cannot determine the quality of the product before buying it. When the customer cannot determine if he gets security – he will not pay for security, leading to products being purposely made insecure (the customer does not pay for security anyway, so why not invest in what the customer does pay for instead?). This is bad for everyone. The customer does want security, and contrary to the common belief – he often is willing to pay for it, but only as long as he knows what he gets.

The customer wants to know how secure our product is, and we (if we indeed took security seriously) want the same. The question is: How can we ‘convey’ product security to the customer?

Unfortunately, as of today, there is no good answer. Security standards such as ISO and FIPS are one attempt, and some other certification schemes are another, but we are not there yet. Customers today, instead of measuring the quality of the security outcome, often fall back on checking the quality of our security story. The process by which we handle security engineering is easier to assess, and hopefully forms a close enough indicator of what can be expected from the security of our product. Many compliance schemes, such as SOC 2 and ISO 27001, follow a similar logic and focus on processes as indicators of product security.

In our striving to establish customer traction, it is one thing to show that we ran a pen-test and fixed its findings, and a completely different thing to show a security story, by which that pen-test is part of a holistic process of defining agreeable security objectives and of scientifically assessing the extent to which we meet those objectives on an ongoing basis.

What product security governance consists of

Product security governance consists of at least three main parts:

  • Objectives

  • Controls

  • Risk Assessment

Each one of those probably warrants its own post, but for the sake of completeness of this one, let us discuss the fundamentals of each.

Objectives

Objectives are written to help guarantee coverage. Those are the high-level statements that describe what we try to achieve security-wise. If properly written, Objectives help us to:

  • make sure that areas of security are not neglected (thanks to their wide scope and being phrased in a descriptive language),

  • define what security Controls (security requirements) are needed for security (or compliance) and just as importantly – what requirements are not, and finally:

  • form a baseline for continuous risk assessment, based on the level to which our Objectives are met.

Objectives do not go straight to engineering, as they are descriptive rather than prescriptive in nature, and engineering does not work that way. Objectives are articulated to substantiate prescriptive requirements, to allow us to properly enumerate the required security efforts, and to allow us to grade ourselves by this same enumeration.

The source of Objectives could be a combination of:

  • the corporate risk appetite,

  • branding requirements and customer expectations, and

  • industry standards and compliance requirements.

Controls

Controls, often referred to as security requirements (although there are significant differences between the two), are the means by which Objectives touch engineering. Those are the prescriptive ‘instructions’ that are substantiated by the Objectives and which describe what shall be done in practice for an Objective to be met.

Those Controls, once fed into engineering backlogs, also form the basic unit of measurement for our security posture and risk assessment. If we tried to measure our risk directly through Objectives, we would end up with an assessment that is largely subjective and unsubstantiated. A subjective, non-scientific assessment convinces neither ourselves nor our customers and auditors of our security and risk posture. Controls are prescriptive and actionable; hence, measuring the implementation status of Controls is more accurate, and provable (or at least, demonstrable).
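
To make this binding concrete, here is a minimal sketch of what such a data model could look like; the names (Objective, Control, Status) and fields are illustrative assumptions, not a prescribed schema:

```python
from dataclasses import dataclass
from enum import Enum


class Status(Enum):
    """Implementation status of a single Control."""
    PENDING = "pending"
    IMPLEMENTED = "implemented"


@dataclass
class Control:
    """A prescriptive, actionable requirement, fed into engineering backlogs."""
    control_id: str
    statement: str      # e.g. "Hardware-based MFA shall be enforced on all Manager accounts"
    objective_id: str   # the descriptive Objective this Control substantiates
    status: Status = Status.PENDING


@dataclass
class Objective:
    """A descriptive, high-level security statement; never handed to engineering directly."""
    objective_id: str
    statement: str      # e.g. "All privileged users shall be strongly authenticated"
```

The point of the sketch is the explicit objective_id link on every Control: it is what later lets us roll implementation statuses back up into an assessment of the Objectives.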

Controls can take many forms and tackle different areas of security, depending on the Objectives. In a typical case, you are likely to have Controls addressing:

  • security properties of the product design, as part of what is sometimes called ‘non-functional requirements’, such as requirements for deploying encryption, access controls, logging, etc.,

  • security properties of the implementation, such as the requirement for adherence to safe coding standards or the use of certain code libraries,

  • security properties of the implementation process, such as requirements addressing design and code reviews, static code analysis, dynamic analysis and fuzzing, as well as dev-ops, code supply chain, change management, CI/CD security, etc.,

  • security properties of the operation, if applicable, such as requirements covering cloud security, firewalls, WAFs, periodic pen-tests, the use of a Security Operations Center (SOC), etc., and sometimes even:

  • security properties of the organization.

As you can see, everything that ever happens security-wise is, one way or another, specified as a Control. The number of Controls can easily reach the hundreds, but the binding between Controls and higher-level Objectives keeps them maintainable. This fact, that every security activity is associated with a Control, is not a bug but a feature of the process. It casts all security-related activities into a unified form that allows orderly tracking, and makes our risk assessment accurate and defensible. It also makes sure that costly activities and requirements that do not serve a valid, recognized Objective are avoided.
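
One practical consequence of this binding is that it can be checked mechanically. As a toy illustration (hypothetical field names, plain dicts for brevity), flagging Controls that serve no recognized Objective might look like this:

```python
def orphan_controls(controls, objective_ids):
    """Return Controls whose objective_id matches no recognized Objective.

    Such Controls are candidates for removal: costly activities that
    do not serve a valid, recognized Objective.
    """
    return [c for c in controls if c["objective_id"] not in objective_ids]


controls = [
    {"control_id": "CTL-101", "objective_id": "OBJ-1", "statement": "Encrypt database backups at rest"},
    {"control_id": "CTL-102", "objective_id": "OBJ-9", "statement": "Quarterly review of office badge logs"},
]

for c in orphan_controls(controls, objective_ids={"OBJ-1"}):
    print(f"{c['control_id']} serves no recognized Objective: {c['statement']}")
```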

Risk Assessment

Risk Assessment is where it all comes together and where, in some sense, security governance concludes. This is the point at which the implementation status of all the hundreds (or thousands) of Controls is consolidated into a useful and convincing assessment.

When carrying out an assessment, those same Objectives that allowed us to ascertain coverage, by being the descriptive motivators of the prescriptive Controls, now serve the inverse function, which is just as important. They map the engineering-centric tabulation of implemented Controls (which can take the form of tickets in Jira or otherwise) back into statements that make sense in the descriptive realm of risk and security postures.

For example, the Objective stating “all privileged users of the system shall be strongly authenticated”, which led to a dozen Controls, one of which could be “hardware-based MFA shall be automatically enforced on all Manager accounts”, could now be assessed based on the Controls that were already implemented versus the ones that are still pending implementation. If this sample Control is still not implemented, then the Objective could be considered unmet, with the residual risk understood accordingly. On the other hand, if all such Controls were implemented, then we could say that the Objective is met, and be able to scientifically demonstrate it with full traceability. Refuting this assessment would require showing either that our Controls did not offer the right coverage, or that a Control that was marked as ‘implemented’ actually wasn’t. We have therefore reached an important goal of security governance: the deployment of risk assessment that is treated as science.
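
As a rough illustration of this roll-up, a sketch along these lines (standalone, with plain dicts instead of the hypothetical data model sketched earlier) could derive each Objective’s status from its Controls:

```python
def assess(objectives, controls):
    """Roll Control statuses up into per-Objective assessments.

    An Objective is considered met only when every Control that
    substantiates it is implemented; anything less is residual risk.
    """
    report = {}
    for obj_id, obj_statement in objectives.items():
        owned = [c for c in controls if c["objective_id"] == obj_id]
        pending = [c for c in owned if c["status"] != "implemented"]
        report[obj_id] = {
            "objective": obj_statement,
            "met": bool(owned) and not pending,
            "residual": [c["statement"] for c in pending],
        }
    return report


# The worked example from the text: one Objective, one (of a dozen) Controls.
objectives = {"OBJ-1": "All privileged users of the system shall be strongly authenticated"}
controls = [
    {
        "objective_id": "OBJ-1",
        "statement": "Hardware-based MFA shall be automatically enforced on all Manager accounts",
        "status": "pending",
    },
]

print(assess(objectives, controls))
# OBJ-1 comes out unmet: the pending MFA Control is the traceable residual risk.
```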

