
Recommended: A Corporate Anthropologist’s Guide to Product Security

I recently read a good essay by Alex Gantman titled A Corporate Anthropologist’s Guide to Product Security. It’s a year old, but I had not come across it before, and in any event, its contents are not time-sensitive at all. If you’re responsible for deploying an SDLC in any real production environment, then you are likely to find much truth in this essay.

Continue reading "Recommended: A Corporate Anthropologist’s Guide to Product Security"

Getting security requirements implemented

Every company that has both development teams and security teams also enjoys a healthy amount of tension between them. The specifics of the emotions involved may vary, but quite often security guys see developers as: not caring enough about security, focusing on short-term gains in features rather than on long-term robustness, and, all in all, despite best intentions, still not “seeing the light”. Developers, in turn, often see their security-practicing friends as people with an overly intense focus on security, which blinds them to all other needs of the product. They sometimes see those security preachers as people who maintain an overly simplistic view of the product design, and particularly of the cost and side-effects of the many changes they request for the sacred sake of security.

People of both camps are to a certain extent right, and to a certain extent exaggerating and not giving the other side enough credit. And yet, it doesn’t even matter where the truth lies, nor whether there is a single truth at all. What matters is that there are two groups that are both essential for product success, and that should work towards a common goal: a product that has many appealing properties, including security.

The rest of this post presents tips for proper collaboration between security and development teams, specifically when it comes to setting and implementing security requirements. Due to my default affiliation with the security camp, the actions I prescribe are targeted primarily at the security people, but I hope that both developers and security practitioners can benefit from the high-level perspective that I try to convey in the following five tips.

Continue reading "Getting security requirements implemented"

An interview on security challenges of organizations deploying IoT

On July 12th, I was interviewed on Security challenges of organizations deploying IoT. The recorded (and transcribed) video interview can be found here. For those who prefer a written abstract, here is the outline of what I said in reply to a short set of questions about the security challenges with IoT deployment, and the approach followed at Pelion to overcome them.

Continue reading "An interview on security challenges of organizations deploying IoT"

Machine Learning Security: a new crop of technologies

Artificial Intelligence (AI), and Machine Learning (ML) specifically, are now at the stage in which we start caring about their security implications. Why now? Because that’s the point at which we usually start caring about the security considerations of new technologies we’ve started using. Looking at previous cases, such as desktop computing, the Internet, car networks, and IoT (Internet of Things), those technologies first gained fast momentum through the urge to capitalize on their novel use-cases. They were deployed as fast as they could possibly be, by stakeholders rushing to secure their share of the emerging revenue pie. Once the systems started operating en masse, it was finally time to realize that where there is value – there is also malice, and every technology that processes an asset (valuable data that can be traded, the ability to display content to a user and grab her attention, potential for extortion money, etc.) will inevitably lure threat actors, who demonstrate impressive creativity when attempting to divert or exploit those assets.

This flow of events is barely surprising, and we were not really shocked to learn that the Internet does not provide much security out of the box, that cars could be hacked remotely through their wireless interfaces, or that cheap home automation gear doesn’t bother to encrypt its traffic. This is economics, and unless there is an immediate public safety issue causing the regulator to intervene (often later than it should), we act upon security considerations only once the new technology is deployed, and the security risks manifest themselves in a way that can no longer be ignored.

It happened with desktop computing in the 80’s, with the Internet in the 90’s, with car networks about a decade ago, and with mass IoT about half a decade ago. (By those approximate dates I am not referring to when the first security advocate pointed out that there are threats; that usually happened right away, if not before. I am referring to when enough security awareness had built up for the industry to commit resources towards mitigating some of those threats.) Finally, it’s now the turn of Machine Learning.

When we decide that a new technology “needs security” we look at the threats and see how we can address them. At this point, we usually divide into two camps:

  • Some players, such as those heavily invested in securing the new technology, and consultants keen on capitalizing on the new class of fear that the industry just brought on itself, assert that “this is something different”; everything we knew about security has to be re-learned, and all the tools and methodologies that we’ve built no longer suffice. In short, the sky is falling and we are here for the rescue.

  • Older security folks will point at the similarities, concluding that it’s the same security, just with different assets, requirements, and constraints that need to be accounted for. IoT security is the same security, just with resource-constrained devices, physical assets, long device lifetimes, and harsh network conditions; car security is the same security, with a different type of network, different latency requirements, and devastating kinetic effects in case of failure; and so forth.

I usually side with the second camp. Each new area of security introduces a lot of engineering work, but the basic paradigms remain intact. It’s all about securing computer systems, just with different properties. Those different properties make tremendous differences, and call for different specializations, but the principles of security governance, and even the nature of the high-level objectives, are largely reusable.

With Machine Learning the situation is different. This is a new flavor of security that calls for a new crop of technologies and startups that deploy a different mindset towards solving a new set of security challenges, including challenges that are not in focus in other domains. The remainder of this post will delve into why ML Security is different (unlike the previous examples), and what our next steps could look like when investing in mitigation technologies.

Continue reading "Machine Learning Security: a new crop of technologies"

Product Security Governance: Why and How

The term “security governance” is not widely used in the product security context. A web search for a decent definition turns up, among the first results, a definition by Gartner that addresses cyber security rather than product security. Other sources I looked at also focus on IT and cyber security.

But product security governance does exist in practice, and where it doesn’t – it often should. Companies that develop products that have security considerations do engage in some sort of product security activities: code reviews, pen-tests, etc.; it is just the “governance” part that is often missing.

Product security is science; treat it as such.

This post describes what I think “security governance” means in the context of product security. It presents a simple definition, a discussion on why it is an insanely important part of product security, and a short list of what “security governance” should consist of in practice.

Continue reading "Product Security Governance: Why and How"

SDL and Agile

One of the challenges that agile development methodologies brought with them is some level of perceived incompatibility with security governance methodologies and SDLs. No matter how you used to integrate security assurance activities with the rest of your engineering efforts, it is likely that Agile messed it up. It almost feels as if agile engineering methodologies had as a primary design goal the disruption of security processes.

But we often want Agile, and we want security too, so the gap has to be bridged. To this end, we first need to understand where the source of the conflict really is, and this also requires understanding where it is not. Understanding the non-issues is important, because there are some elements of agile engineering that are sometimes considered to contradict security interests when they really do not; and we would like to focus our efforts where it matters.

We will start by highlighting a few minor issues that are easy to overcome, and then discuss the more fundamental change that may in some cases be required to marry security governance with Agile.

Continue reading "SDL and Agile"

What makes company values?

How can you tell real company values apart from more superficial mantras or slogans?

There is one objective mark for values: they fight and they win when competing for scarce resources of any type.

A real company value wins fights against other interests when competing for budget, resource allocation, and other cost-bearing priorities.

If it does not fight – it’s not a value but a preference.

If it does not win – it’s not a value but a show.

Useful threat modelling

Do you know what all security documents have in common? They all were, at some point, called a “threat model”… A joke indeed, and not the funniest one, but it is here to make a point. There is no single approach to threat modelling, and not even a single definition of what a threat model really is. So what is it? It is most often considered to be a document that introduces the security needs of a system, using any one of dozens of possible approaches. Whatever the modelling approach is, the threat model really has just one strong requirement: it needs to be useful for whatever purpose it is made to serve. Let us try to describe what we often try to get from a threat model, and how to achieve it.

Continue reading "Useful threat modelling"

For and against security checklists, frameworks, and guidelines

We have seen many of those by now, starting with old ones like FIPS 140 and ending with more recent additions such as the NIST CSF (Cyber Security Framework). The question is: are those worth my time? What are they good for? Do we need to adhere to them? In a nutshell, I think they have their value, and need to be consulted, but not worshipped.

Continue reading "For and against security checklists, frameworks, and guidelines"

Running an effective security research team

I have been running a security research group at Sansa Security since 2006, and while I think about it often, I have never bothered to publish a post about how to run an effective security research team. So here is a first post on this topic, with the intention of writing additional installments in the future.

I will address a few random topics that come to mind at the moment: staffing, external interaction, being in the know, and logging. Feel free to bring up other topics of interest as comments to this post.

Continue reading "Running an effective security research team"