On July 12th, I was interviewed about the security challenges of organizations deploying IoT. The recorded (and transcribed) video interview can be found here. For those who prefer a written abstract, here is an outline of what I said in reply to a short set of questions about the security challenges of IoT deployment, and the approach followed at Pelion to overcome them.
Continue reading "An interview on security challenges of organizations deploying IoT"
Artificial Intelligence (AI), and Machine Learning (ML) specifically, are now at the stage at which we start caring about their security implications. Why now? Because that’s the point at which we usually start caring about the security considerations of new technologies we’ve started using. Looking at previous cases, such as desktop computing, the Internet, car networks, and IoT (the Internet of Things), those technologies first gained momentum through the urge to capitalize on their novel use-cases. They were deployed as fast as they could possibly be, by stakeholders rushing to secure their share of the emerging revenue pie. Once the systems were operating en masse, it was finally time to realize that where there is value there is also malice, and every technology that processes an asset (valuable data that can be traded, the ability to display content to a user and grab her attention, potential for extortion money, etc.) will inevitably lure threat actors, who demonstrate impressive creativity when attempting to divert or exploit those assets.
This flow of events is barely surprising, and we were not really shocked to learn that the Internet does not provide much security out of the box, that cars could be hacked remotely through their wireless interfaces, or that cheap home automation gear doesn’t bother to encrypt its traffic. This is economics, and unless there is an immediate public-safety issue that causes the regulator to intervene (often later than it should), we act on security considerations only once the new technology is deployed and the security risks manifest in a way that can no longer be ignored.
It happened with desktop computing in the 80s, with the Internet in the 90s, with car networks about a decade ago, and with mass IoT about half a decade ago. (By those approximate dates I am not referring to when the first security advocate pointed out that there are threats; that usually happened right away, if not before. I mean the point at which enough security awareness had built up for the industry to commit resources towards mitigating some of those threats.) Finally, it’s now the turn of Machine Learning.
When we decide that a new technology “needs security” we look at the threats and see how we can address them. At this point, we usually divide into two camps:
Some players, such as those heavily invested in securing the new technology, and consultants keen on capitalizing on the new class of fear the industry has just brought on itself, assert that “this is something different”: everything we knew about security has to be re-learned, and all the tools and methodologies we’ve built no longer suffice. In short, the sky is falling and we are here to the rescue.
Older security folks will point at the similarities, concluding that it’s the same security, just with different assets, requirements, and constraints to account for. IoT security is the same security, just with resource-constrained devices, physical assets, long device lifetimes, and harsh network conditions; car security is the same security with a different type of network, different latency requirements, and devastating kinetic effects in case of failure; and so forth.
I usually identify with the second camp. Each new area of security introduces a lot of engineering work, but the basic paradigms remain intact. It’s all about securing computer systems, just with different properties. Those different properties make tremendous differences and call for different specializations, but the principles of security governance, and even the nature of the high-level objectives, are largely reusable.
With Machine Learning the situation is different. This is a new flavor of security that calls for a new crop of technologies and startups deploying a different mindset towards solving a new set of security challenges, including challenges that are not in focus in other domains. The remainder of this post delves into why ML security is different (unlike the previous examples), and what our next steps could look like when investing in mitigation technologies.
Continue reading "Machine Learning Security: a new crop of technologies"
The term “security governance” is not widely used in the product security context. A web search for a decent definition turns up, among the first results, a definition by Gartner that addresses cyber security rather than product security. Other sources I looked at also focus on IT and cyber security.
But product security governance does exist in practice, and where it doesn’t, it often should. Companies that develop products with security considerations do engage in some sort of product security activity: code reviews, pen-tests, etc.; it is the “governance” part that is often missing.
Product security is science; treat it as such.
This post describes what I think “security governance” means in the context of product security. It presents a simple definition, a discussion of why it is an insanely important part of product security, and a short list of what “security governance” should consist of in practice.
Continue reading "Product Security Governance: Why and How"
One of the challenges that agile development methodologies brought with them is some level of perceived incompatibility with security governance methodologies and SDLs. No matter how you used to integrate security assurance activities with the rest of your engineering efforts, it is likely that Agile messed it up. It almost feels as if agile engineering methodologies had as a primary design goal the disruption of security processes.
But we often want Agile, and we want security too, so the gap has to be bridged. To this end, we need to first understand where the source of the conflict really is, and this also requires understanding where it is not. Understanding the non-issues is important, because some elements of agile engineering are sometimes considered to conflict with security interests when they really do not, and we would like to focus our efforts where it matters.
We will start by highlighting a few minor issues that are easy to overcome, and then discuss the more fundamental change that may in some cases be required to marry security governance with Agile.
Continue reading "SDL and Agile"
How can you tell apart real company values from more superficial mantras or slogans?
There is one objective mark of values: they fight, and they win, when competing for scarce resources of any type.
A real company value wins its fights against other interests when competing for budget, resource allocation, and other cost-bearing priorities.
If it does not fight – it’s not a value but a preference.
If it does not win – it’s not a value but a show.
We have seen many of those by now, starting with old ones like FIPS 140 and continuing with more recent additions such as the NIST CSF (Cyber Security Framework). The question is: are those worth my time? What are they good for? Do we need to adhere to them? In a nutshell, I think they have their value and need to be consulted, but not worshiped.
Continue reading "For and against security checklists, frameworks, and guidelines"
I have been running a security research group at Sansa Security since 2006, and while I think about it often, I never got around to publishing a post about how to run an effective security research team. So here is a first post on the topic, with the intention of writing additional installments in the future.
I will address a few topics that come to mind at the moment: staffing, external interaction, staying in the know, and logging. Feel free to bring up other topics of interest as comments to this post.
Continue reading "Running an effective security research team"