COVID vaccination certificates done almost right

Israel is probably the most advanced country to date in terms of COVID-19 vaccination. With more than one third of its residents fully inoculated, life can almost get back to pseudo-normal. This, however, requires being able to tell vaccinated people apart from those who are not. The green pass, or vaccination certificate, was made to achieve precisely that. Technically, this government-issued certificate is not substantially different from a driver’s license, except that it is shorter-lived, can be stored in a phone app, and, most importantly, was designed in a hurry.

For something that was launched so quickly, it seems to be decently architected, but slightly better work could still be done to protect that piece of attestation that is so critical to public health.

What do we require of a vaccination certificate? Not much, really. It obviously needs to be as secure as it can be made under the strict cost and distribution constraints. The certificate also has to be easily renewable (it currently expires every six months), and it has to be verifiable by a wide range of checkpoints with varying capabilities. Finally, verification has to be both reliable and fast; entry into a shopping mall cannot resemble passport control, and people cannot arbitrarily be locked out of key facilities just because of simple IT downtime.

The certificate itself is sent to its holder by e-mail (or via a web site), to be printed at home. No measure can prevent anyone with Microsoft Paint from crafting a fake copy of the printed certificate. The digital part of the vaccination certificate, i.e., the QR code printed on it, is the only part that can practically be used against forgery.
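As a rough illustration of the kind of scheme involved, here is a minimal sketch of how a signed QR payload could be produced and verified offline. This is not the actual green pass format; the payload fields, the six-month validity, and the choice of an Ed25519 signature are assumptions made for the example.

```python
# Hypothetical sketch of a signed certificate carried inside a QR code.
# NOT the actual green pass format; fields and algorithm are illustrative only.
import json
from datetime import datetime, timedelta, timezone

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric import ed25519

# Issuer side: the authority signs a short payload with its private key.
issuer_key = ed25519.Ed25519PrivateKey.generate()
expiry = (datetime.now(timezone.utc) + timedelta(days=180)).isoformat()  # ~six months
payload = json.dumps({"id": "123456789", "expires": expiry}, sort_keys=True).encode()
qr_content = payload + issuer_key.sign(payload)    # payload || 64-byte Ed25519 signature

# Verifier side: a checkpoint needs only the issuer's public key, no connectivity.
def verify(qr_content: bytes, public_key: ed25519.Ed25519PublicKey) -> bool:
    payload, signature = qr_content[:-64], qr_content[-64:]
    try:
        public_key.verify(signature, payload)      # any edit to the payload fails here
    except InvalidSignature:
        return False
    fields = json.loads(payload)
    return datetime.fromisoformat(fields["expires"]) > datetime.now(timezone.utc)

print(verify(qr_content, issuer_key.public_key()))  # True for a genuine, unexpired pass
```

The point of such a construction is that a checkpoint can verify the certificate quickly and offline, while anyone without the issuer’s private key cannot produce a payload that passes the signature check.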

Consider the following write-up a quick guide to cheap-but-secure attestation certificates, for COVID or otherwise.

Continue reading "COVID vaccination certificates done almost right"

The role of security-focused alternatives

Our digital lives are more or less governed by very few providers of products and services. Our desktop computing is almost invariably based on Microsoft Windows, our document collaboration is most likely based on either Google Docs or on O365, our instant messaging is either Whatsapp or Slack, our video collaboration is either Teams or Zoom, etc. Given the prevalence of digital life and work, you would expect more options to exist. However, each of those large pies seems to be divided into just a few thick slices. The lucky providers that won their dominance did so by catering to the needs of the masses while serving their own agendas, or more accurately: by serving their own agendas while giving enough to make their products preferred by the masses.

Customers appreciate ease of deployment and ease of use, and all of the dominant products excel at that. However, customers never said anything too explicit about security, and they never demanded data sovereignty. Those properties are also quite unappealing to some providers, either because they increase cost, because they prevent lock-in, or because they hinder business models that rely on using customer data. The vast majority of customers never really required, and hence never really got, anything more than ease of use and ease of deployment, along with a few key functional features. For most customers this is enough, but customers who also require security, privacy, and/or data sovereignty face a challenge when working out alternatives.

But alternatives do exist, for desktop computing, for collaboration and for messaging and video communication. Those alternatives play an important role in our digital ecosystem, even if most people never care to use them.

Continue reading "The role of security focused alternatives"

TEDTalk: "The counterintuitive way to be more persuasive"

This is a brilliant TED Talk by Niro Sivanathan.

It introduces the dilution effect. Information that is less relevant is not merely discarded, but rather dilutes the impact of the information that is relevant. So next time you bring up arguments for something, remember that your arguments don’t add up – they average out.
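A toy calculation makes the point; the “strength” scores below are made up, and the assumption is that the listener averages the arguments rather than summing them.

```python
# Toy illustration of the dilution effect with made-up argument "strength" scores.
strong_only = [9, 8]       # two strong arguments
with_weak = [9, 8, 3]      # the same two, plus a weak one

print(sum(strong_only) / len(strong_only))  # 8.5
print(sum(with_weak) / len(with_weak))      # ~6.7: the weak argument dilutes the case
```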

Machine Learning Security: a new crop of technologies

Artificial Intelligence (AI), and Machine Learning (ML) specifically, are now at the stage in which we start caring about their security implications. Why now? Because that’s the point at which we usually start caring about the security considerations of new technologies we’ve started using. Looking at previous cases, such as desktop computing, the Internet, car networks, and IoT (the Internet of Things), those technologies first gained fast momentum, driven by the urge to capitalize on their novel use-cases. They were deployed as fast as they could possibly be, by stakeholders rushing to secure their share of the emerging revenue pie. Once the systems started operating en masse, it was finally time to realize that where there is value – there is also malice, and every technology that processes an asset (valuable data that can be traded, the ability to display content to a user and grab her attention, potential for extortion money, etc.) will inevitably lure threat actors, who demonstrate impressive creativity when attempting to divert or exploit those assets.

This flow of events is barely surprising, and we were not really shocked to learn that the Internet does not provide much security out of the box, that cars could be hacked remotely through their wireless interfaces, or that cheap home automation gear doesn’t bother to encrypt its traffic. This is simply economics, and unless there is an immediate public safety issue that makes the regulator intervene (often later than it should), we act upon security considerations only once the new technology is deployed and the security risks manifest themselves in a way that can no longer be ignored.

It happened with desktop computing in the 80’s, with the Internet in the 90’s, with car networks about a decade ago, and with mass IoT about half a decade ago. (By those approximate dates I am not referring to when the first security advocate pointed out that threats exist; that usually happened right away, if not before. I refer to when enough security awareness had built up for the industry to commit resources towards mitigating some of those threats.) Finally, it’s now the turn of Machine Learning.

When we decide that a new technology “needs security” we look at the threats and see how we can address them. At this point, we usually divide into two camps:

  • Some players, such as those heavily invested in securing the new technology, and consultants keen on capitalizing on the new class of fear that the industry has just brought on itself, assert that “this is something different”: everything we knew about security has to be re-learned, and all the tools and methodologies that we have built no longer suffice. In short, the sky is falling and we are here to the rescue.

  • Older security folks will point at the similarities, concluding that it is the same security, just with different assets, requirements, and constraints that need to be accounted for. IoT security is the same security, just with resource-constrained devices, physical assets, long device lifetimes, and harsh network conditions; car security is the same security, just with a different type of network, different latency requirements, and devastating kinetic effects in case of failure; and so forth.

I usually associate with the second camp. Each new area of security introduces a lot of engineering work, but the basic paradigms remain intact. It’s all about securing computer systems, just with different properties. Those different properties make tremendous differences, and call for different specializations, but the principles of security governance, and even the nature of the high-level objectives, are largely reusable.

With Machine Learning the situation is different. This is a new flavor of security that calls for a new crop of technologies and startups, deploying a different mindset towards solving a new set of security challenges, including challenges that are not in focus in other domains. The remainder of this post delves into why ML Security is different (unlike the previous examples), and what our next steps could look like when investing in mitigation technologies.

Continue reading "Machine Learning Security: a new crop of technologies"

Book review: "Think Like a Rocket Scientist"

The book “Think Like a Rocket Scientist” by Ozan Varol (a real rocket scientist, actually) has nothing to do with security. However, I do have the habit of sharing recommendations on such resources as well, and this piece is certainly worthy of such a recommendation.

The text promotes the adoption, by everyone, of thought processes that are often used in engineering and science (primarily in rocket science, where mistakes are costly). The motivation of the book is probably best captured by a quote from Carl Sagan: “Science is a way of thinking much more than it is a body of knowledge”; a statement with which I could not agree more.

The book covers a few principles and delves into each one of them with excellent examples and historic facts, all written in an engaging style. Some of the topics that the author discusses are:

Continue reading "Book review: "Think Like a Rocket Scientist""

Product Security Governance: Why and How

The term “security governance” is not widely used in the product security context. A web search for a decent definition yields, among the first results, a definition by Gartner that addresses cyber security rather than product security. Other sources I looked at also focus on IT and cyber security.

But product security governance does exist in practice, and where it doesn’t – it often should. Companies that develop products with security considerations do engage in some sort of product security activities: code reviews, pen-tests, etc.; it is just the “governance” part that is often missing.

Product security is science; treat it as such.

This post describes what I think “security governance” means in the context of product security. It presents a simple definition, a discussion on why it is an insanely important part of product security, and a short list of what “security governance” should consist of in practice.

Continue reading "Product Security Governance: Why and How"

Addressing the shortcoming of machine-learning for security

In a previous post I wrote about cases in which machine-learning adds little to the reliability of security tools, because it often does not react well to novel threats. In this post I will share a thought about overcoming this limitation of machine-learning by properly augmenting it with other methods. The challenge we tackle is not that of finding additional methods of detection, as we assume such methods are already known and deployed in other systems. The challenge we tackle is how to combine traditional detection methods with those based on machine-learning, in a way that yields the best overall results. As promising as machine-learning (and artificial intelligence) is, it is less effective when deployed in a silo (that is, not in combination with existing technologies); hence the significance of properly marrying the two.

I propose to augment the data used in machine-learning with tags that come from other, i.e., traditional, classification algorithms. More importantly, I suggest distinguishing between the machine-learning-based assessment component and the decision component, and using the tagging in both components, independently.
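To make the proposal slightly more concrete, here is one hypothetical sketch of such a pipeline; the detectors, features, and thresholds are made up for illustration and are not taken from the original post. The rule-based tags are computed once, fed into the ML-based assessment as additional features, and then consulted again, independently, by the decision logic.

```python
# Hypothetical sketch: traditional (rule-based) tags feed both the ML-based
# assessment and, independently, the final decision. Illustration only.
from dataclasses import dataclass
from typing import List

@dataclass
class Verdict:
    score: float              # ML-based assessment, 0.0 (benign) .. 1.0 (malicious)
    tags: List[str]           # outputs of the traditional detectors
    block: bool               # final decision

def traditional_tags(sample: dict) -> List[str]:
    """Stand-in for rule-based detectors (signatures, blocklists, heuristics)."""
    tags = []
    if sample.get("sender_domain") == "known-bad.example":
        tags.append("blocklisted_sender")
    if sample.get("has_macro_attachment"):
        tags.append("macro_attachment")
    return tags

def ml_assessment(features: dict, tags: List[str]) -> float:
    """Stand-in for a trained model that also receives the tags as features."""
    score = 0.1
    score += 0.4 * len(tags)                          # tags shift the learned assessment
    score += 0.3 if features.get("url_count", 0) > 5 else 0.0
    return min(score, 1.0)

def decide(sample: dict) -> Verdict:
    tags = traditional_tags(sample)                   # computed once
    score = ml_assessment(sample, tags)               # used by the assessment...
    # ...and consulted again, independently, by the decision: a hard rule such as
    # a blocklisted sender holds even when the learned model is unsure.
    block = score > 0.8 or "blocklisted_sender" in tags
    return Verdict(score=score, tags=tags, block=block)

print(decide({"sender_domain": "known-bad.example", "url_count": 1}))
```

The separation matters because the decision component can enforce deterministic policy (and keep doing so while the model is still learning), while the assessment component benefits from the tags as extra features even when none of the hard rules fire.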

Continue reading "Addressing the shortcoming of machine-learning for security"

SDL and Agile

One of the challenges that agile development methodologies brought with them is some level of perceived incompatibility with security governance methodologies and SDLs. No matter how you used to integrate security assurance activities with the rest of your engineering efforts, it is likely that Agile messed it up. It almost feels as if agile engineering methodologies had as a primary design goal the disruption of security processes.

But we often want Agile, and we want security too, so the gap has to be bridged. To this end, we first need to understand where the source of the conflict really lies, and this also requires understanding where it does not. Understanding the non-issues is important, because some elements of agile engineering are sometimes considered to contradict security interests when they really do not, and we would like to focus our efforts where it matters.

We will start by highlighting a few minor issues that are easy to overcome, and then discuss the more fundamental change that may in some cases be required to marry security governance with Agile.

Continue reading "SDL and Agile"

Your Bitcoin wallet will never be your bank account

Don’t get me wrong; Bitcoin and crypto currencies are a big deal, at least technology-wise. Bitcoin and blockchains taught us a lot about what can be done with security protocols, and at a lower level, they even taught us that computational inefficiency is not always a bad word, but something that can yield benefits, if that inefficiency is properly orchestrated and exploited. Bitcoin was also the most prevalent demonstration of scarcity being artificially created by technology alone. As I wrote before, blockchains will probably have some novel use-cases one day, and Bitcoin, aside from being a mechanism for transferring money, also provides a target of speculation, which in itself can be (and is) monetized.

What I truly do not understand are the advocates who see Bitcoin wallets as the near-future replacement for bank accounts, with Bitcoin replacing banks (and other financial institutions). I understand the motivation, as those are dreams that are easy to fall for, but for crypto-currency wallets to replace financial institutions much more is needed, and for the sake of this discussion I will not even delve into the many technical difficulties.

Continue reading "Your Bitcoin wallet will never be your bank account"

An obvious limitation of machine-learning for security

I recently came across this study, titled “Unknown Threats are The Achilles Heel of Email Security”. It concludes that traditional e-mail scanning tools, which also utilize machine-learning to cope with emerging threats, are still not reacting fast enough to new threats. This is probably true, but I think the conclusion should be considered even more widely, beyond e-mail.

Threats are dynamic. Threat actors are creative and well-motivated enough to make threat mitigation an endlessly moving target. So aren’t we fortunate to have this new term, “machine learning”, recently join our tech jargon? Just like many other buzzwords, the term is newer than what it denotes, but nonetheless, a machine that learns the job autonomously seems to be precisely what we need for mitigating ever-changing threats.

All in all, machine-learning is good for security, yet in some cases it is a less significant addition to our defense arsenal than it may seem. Why? Because while you learn, you often don’t do the job well enough, and a machine is no different. Eventually, the merits of learning-while-doing are determined by the price of the resulting temporary imperfection.

Continue reading "An obvious limitation of machine-learning for security"