I recently read a good essay by Alex Gantman titled:
Every company that has both development teams and security teams also enjoys a healthy amount of tension between them. Specifics of the emotions involved may vary, but quite often security guys see developers as: not caring enough about security, focusing on short-term gains in features rather than on long-term robustness, and all-in-all, despite best intentions, still not “seeing the light”. Developers, in turn, often see their security-practicing friends as people with overly intense focus on security, which blinds them to all other needs of the product. They sometimes see those security preachers as people who maintain an overly simplistic view of the product design, and particularly of the cost and side-effects of the many changes they request for the sacred sake of security.
People of both camps are to a certain extent right, and to a certain extent exaggerating and not giving the other side enough credit. And yet, it doesn’t even matter where the truth lies, nor if there is truth at all. What matters is that there are two groups that are both essential for product success, and which should work towards a common goal: a product that has many appealing properties, including security.
The rest of this post presents tips for proper collaboration between security and development teams, specifically when it comes to setting and implementing security requirements. Due to my default affiliation with the security camp, the actions I prescribe are targeted primarily at the security people, but I hope that both developers and security practitioners can benefit from the high-level perspective that I try to convey in the following five tips.
Continue reading "Getting security requirements implemented"
A few months ago I read an interesting post, which I felt compelled to write about. The post, titled “Australian Court determines that an Artificial Intelligence system can be an inventor for the purposes of patent law”, tells exactly what its title denotes. The case in question comes from the drugs industry, which has always been an avid user of the patent system, but one can easily see how the verdict could apply to many, if not all, patent areas.
The article reads:
“In Australia, a first instance decision by Justice Beach of the Federal Court has provided some guidance: pursuant to Thaler v Commissioner of Patents (2021) FCA 879, an AI system can be the named inventor for an Australian patent application, with a person or corporation listed as the applicant for that patent, or a grantee of the patent.” [...] “Worldwide, this is the first court decision determining that an AI system can be an inventor for the purposes of patent law.” [...] “The UK Intellectual Property Office (UKIPO), European Patent Office (EPO), and US Patent and Trademark Office (USPTO) each determined that an inventor must be a natural person.”
An appeal process is still ongoing, but the judgment already serves as an important milestone in the anticipated future of artificial intelligence, which bears enough resemblance to traditional human intelligence to demand similar treatment, first as art, and now also as the subject of patents.
I must admit that when I first read this article it seemed to me like a joke, and a funny one at that. However, as I kept thinking about it, it made more and more sense. The purpose of this post is to take you through my thought process.
Just note that I am neither a lawyer nor a patent attorney, and I only express an opinion as someone who is nowhere close to being authoritative on the subject.
Continue reading "Patents invented by Machine Learning"
On July 12th, I was interviewed on the security challenges of organizations deploying IoT. The recorded (and transcribed) video interview can be found here. For those who prefer a written abstract, here is the outline of what I said in reply to a short set of questions about the security challenges with IoT deployment, and the approach followed at Pelion to overcome them.
Continue reading "An interview on security challenges of organizations deploying IoT"
I recently participated in a discussion about the role of machine-generated text in the spread of fake news.
The context of this discussion was the report titled “How Language Models Could Change Disinformation”. The progress made by the industry in the area of algorithmic text generation has led to concerns that such systems could be used to generate automated disinformation at scale. The report examines the capabilities of GPT-3, an AI system that writes text, to analyze its potential use for promoting disinformation (i.e., fake news).
The report reads:
In light of this breakthrough, we consider a simple but important question: can automation generate content for disinformation campaigns? If GPT-3 can write seemingly credible news stories, perhaps it can write compelling fake news stories; if it can draft op-eds, perhaps it can draft misleading tweets.
Following is my take on this.
Continue reading "Machine generated content helping spread fake news"
On May 12th, the Biden administration issued an Executive Order that was written to improve the overall security posture of software products that the government buys from the private sector. Recent events, such as the SolarWinds hack, contributed to the realization that such a move is necessary.
This Executive Order is a big deal. Of course, nothing will change overnight, but given the size and complexity of the software industry, as well as the overall culture behind software security (the culture of: “If the customer doesn’t see it — don’t spend money on it”), an Executive Order can probably yield the closest thing to immediate improvement that we could reasonably wish for. The US Government is a very large customer, and all major vendors will elect to comply with its requirements rather than cross it all off their addressable markets.
A lot has been written on how important it is for the government to use its buying power (if not its regulatory power) to drive vendors into shipping more secure products. Product security suffers from what could best be described as a market failure condition, which would call for such regulatory intervention.
Rather than repeat the mainstream media, I would like to focus on one unique aspect of the current Executive Order, and on how it can ignite a new trend that will change product and network security for the better. I’ll discuss true machine-readable security documentation.
Continue reading "One blessing of the Cybersecurity Executive Order"
An NFT (Non-Fungible Token) is a data structure that points at a particular data object in a unique way. See it as a way of naming digital objects, such as photos, texts, audio or video, in a way that allows referring to them with no ambiguity.
The ability to refer to data objects allows us to “mention” them in transactions. This seemingly trivial ability, when combined with the ability to create immutable records of transactions (as provided by blockchains), allows us to create immutable records that refer to data objects.
Technically, NFTs do not require blockchains. You could take a photo of a cat, create an NFT for this photo, which is essentially a unique pointer to (or: a descriptor of) it, and then go on to write a real contract on paper that says “this photo of a cat, bearing this unique ID, is hereby assigned to John Smith”, whatever this assignment means.
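To make the “unique pointer” idea concrete, here is a minimal sketch in Python. It uses a content hash as the unambiguous ID of a digital object; the function name and descriptor fields are mine, for illustration only, and do not follow any actual NFT standard (real standards such as ERC-721 define richer on-chain structures):

```python
import hashlib

def make_descriptor(file_bytes: bytes, title: str) -> dict:
    """Build a minimal NFT-like descriptor: a unique, content-derived
    pointer to a digital object (illustrative only)."""
    content_id = hashlib.sha256(file_bytes).hexdigest()
    return {
        "content_id": content_id,  # ID derived from the object's own bytes
        "title": title,
        "algorithm": "sha256",
    }

# The same photo always yields the same ID, so a contract that names this
# ID refers to this photo, and to this photo only.
cat_photo = b"pretend-cat-image-bytes"
descriptor = make_descriptor(cat_photo, "Photo of a cat")
assert descriptor["content_id"] == make_descriptor(cat_photo, "Photo of a cat")["content_id"]
```

Note that such a descriptor points at the object without containing it, which is exactly what lets a paper (or smart) contract refer to the object unambiguously.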
Blockchains and smart contract technologies allow for such digital agreements to be stored in a public immutable record that does not allow anyone to change it once it has been written. The combination of NFTs and blockchain-based smart contracts thus allows us to securely record agreements that declare ownership of digital goods. If you have any file (photo, text, video, etc.), you can create an attestation that tells the entire world that you assign this file to be owned by whoever. What does this “ownership” mean? – Good question; but whatever it means, billions of dollars have already been paid towards such ownerships. Is this real? The money surely is, but is the value?
Continue reading "On the value of NFT"
Israel is probably the most advanced to date in terms of COVID-19 vaccination. With more than one third of the residents fully inoculated, life can almost get back to pseudo-normal. This, however, requires being able to tell the vaccinated people apart from those who are not. The green pass, or vaccination certificate, is made to achieve precisely that. Technically, this government-issued certificate is not substantially different from a driver’s license, just that it’s shorter lived, can be stored in a phone app, and most importantly: was designed in a hurry.
For something that was launched so quickly, it seems to be decently architected, but slightly better work could still be done to protect that piece of attestation that is so critical to public health.
What do we require of a vaccination certificate? Not much, really. It obviously needs to be as secure as it could be made under the strict cost and distribution constraints. The certificate also has to be easily renewable (it currently expires every six months), and it has to be verifiable by a wide range of checkpoints with varying capabilities. Finally, verification has to be both reliable and fast; entry into a shopping mall cannot resemble passport control, and people cannot arbitrarily be locked out of key facilities just because of simple IT downtime.
The certificate itself is sent to its holder by e-mail (or via a web site), to be printed at home. There are no measures that could prevent anyone with Microsoft Paint from crafting such fake certificates. The digital part of the vaccination certificate, i.e., the QR code printed on it, is the only part of the certificate that can practically protect against forgery.
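To illustrate why the QR code is the part that can resist forgery, here is a hedged sketch of a signed, QR-encodable attestation. All names and fields are hypothetical and do not reflect the actual green pass format; moreover, a real scheme would use an asymmetric signature (e.g., ECDSA), so that checkpoints hold only a public key. A keyed MAC stands in here just to keep the example self-contained:

```python
import hmac
import hashlib
import json
import time

ISSUER_KEY = b"hypothetical-issuer-secret"  # placeholder; not a real key scheme

def issue_certificate(holder_id, valid_days=180):
    """Produce the signed payload that the printed QR code would carry."""
    payload = {"holder": holder_id,
               "expires": int(time.time()) + valid_days * 86400}
    blob = json.dumps(payload, sort_keys=True).encode()
    sig = hmac.new(ISSUER_KEY, blob, hashlib.sha256).hexdigest()
    return {"payload": payload, "sig": sig}

def verify_certificate(cert, now=None):
    """Checkpoint-side check: signature must match and cert must be unexpired."""
    blob = json.dumps(cert["payload"], sort_keys=True).encode()
    expected = hmac.new(ISSUER_KEY, blob, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, cert["sig"]):
        return False  # forged or tampered payload
    return cert["payload"]["expires"] > (now or int(time.time()))

cert = issue_certificate("citizen-123")
assert verify_certificate(cert)
cert["payload"]["holder"] = "someone-else"  # any edit breaks the signature
assert not verify_certificate(cert)
```

The point of the sketch is the asymmetry of effort: anyone can redraw the human-readable part of the certificate, but producing a QR payload that passes the signature check requires the issuer’s signing key.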
See the following write-up as a quick guide to cheap-but-secure attestation certificates; for COVID or otherwise.
Continue reading "COVID vaccination certificates done almost right"
Our digital lives are more or less governed by very few providers of products and services. Our desktop computing is almost invariably based on Microsoft Windows, our document collaboration is most likely based on either Google Docs or on O365, our instant messaging is either Whatsapp or Slack, our video collaboration is either Teams or Zoom, etc. Given the prevalence of digital life and work, you would expect more options to exist. However, each of those large pies seems to be divided into just a few thick slices. Those lucky providers that won their dominance did so by catering to the needs of the masses while serving their own agendas, or more accurately: by serving their own agendas while giving enough to make their products preferable by the masses.
Customers appreciate ease of deployment and ease of use, and all of the dominant products excel in that. However, customers never said anything too explicit about security, and customers never demanded data sovereignty. Those properties are also very non-compelling for some providers, either because they increase cost, because they prevent lock-in, or because they hinder business models that rely on using customer data. The vast majority of customers never really required, and hence never really got, anything more than ease of use and ease of deployment, along with a few key functional features. For most customers, this is enough, but customers who also require security, privacy, and/or data sovereignty, face a challenge when working out alternatives.
But alternatives do exist: for desktop computing, for collaboration, and for messaging and video communication. Those alternatives play an important role in our digital ecosystem, even if most people never care to use them.
Continue reading "The role of security focused alternatives"
Artificial Intelligence (AI), and Machine Learning (ML) specifically, are now at the stage in which we start caring about their security implications. Why now? Because that’s the point at which we usually start caring about the security considerations of new technologies we’ve started using. Previous technologies, such as desktop computing, the Internet, car networks, and IoT (Internet of Things), first gained fast momentum by the urge to capitalize on their novel use-cases. They were deployed as fast as they could possibly be, by stakeholders rushing to secure their share of the emerging revenue pie. Once the systems started operating en masse, it was finally time to realize that where there is value – there is also malice, and every technology that processes an asset (valuable data that can be traded, the ability to display content to a user and grab her attention, potential for extortion money, etc.) will inevitably lure threat actors who demonstrate impressive creativity when attempting to divert or exploit those assets.
This flow of events is barely surprising, and we were not really shocked to learn that the Internet does not provide much security out of the box, that cars could be hacked remotely through their wireless interfaces, or that cheap home automation gear doesn’t bother to encrypt its traffic. This is economics, and unless there is an immediate public safety issue causing the regulator to intervene (often later than it should), we act upon security considerations only once the new technology is deployed, and the security risks are manifested in a way that they can no longer be ignored.
It happened with desktop computing in the ’80s, with the Internet in the ’90s, with car networks about a decade ago, and with mass IoT about half a decade ago. (In those approximate dates I am not referring to when the first security advocate indicated that there are threats, as this usually happened right away if not before, but to when enough security awareness was built for the industry to commit resources towards mitigating some of those threats.) Finally, it’s now the turn of Machine Learning.
When we decide that a new technology “needs security” we look at the threats and see how we can address them. At this point, we usually divide into two camps:
Some players, such as those heavily invested in securing the new technology, and consultants keen on capitalizing on the new class of fear that the industry just brought on itself, assert that “this is something different”; everything we knew about security has to be re-learned, and all the tools and methodologies that we’ve built no longer suffice. In short, the sky is falling and we are here to the rescue.
Older security folks will point at the similarities, concluding that it’s the same security, just with different assets, requirements, and constraints that need to be accounted for. IoT security is the same security, just with resource-constrained devices, physical assets, long device lifetimes, and harsh network conditions; car security is the same security with a different type of network, different latency requirements, and devastating kinetic effects in case of failure, and so forth.
I usually associate with the second camp. Each new area of security introduces a lot of engineering work, but the basic paradigms remain intact. It’s all about securing computer systems, just with different properties. Those different properties make tremendous differences, and call for different specializations, but the principles of security governance, and even the nature of the high-level objectives, are largely reusable.
With Machine Learning the situation is different. This is a new flavor of security that calls for a new crop of technologies and startups that deploy a different mindset towards solving a new set of security challenges, including challenges that are not in focus in other domains. The remainder of this post will delve into why ML Security is different (unlike the previous examples), and what our next steps could look like when investing in mitigation technologies.
Continue reading "Machine Learning Security: a new crop of technologies"