
Pausing AI development? As if we could...

When we hear of a new way in which artificial intelligence (AI) can endanger humanity, or even just move someone’s cheese, the debate often shifts to whether we should pause AI development until some impact analysis is carried out or some regulation kicks in.

I do not think that pausing AI development is even a plausible option to consider. History has shown that technology never stops; we have tried such pauses in areas far riskier for humanity, without much success. We should accept that humanity is insufficiently consensus-driven, and that the stakeholders in AI technologies have too many easy ways to bypass bans, and too many good reasons to do so. Yes, there is serious lobbying promoting this pausing option, but I would argue that most of it is for pausing productization rather than development, and that it is, to a notable extent, led by players who are simply late to the market and need time to catch up.

Patents invented by Machine Learning

A few months ago I read an interesting post, which I felt compelled to write about. The post, titled “Australian Court determines that an Artificial Intelligence system can be an inventor for the purposes of patent law”, tells exactly what its title denotes. The case in question comes from the drug industry, which has always been an avid user of the patent system, but one can easily see how the verdict could apply to many (if not all) other patent areas.

The article reads:

“In Australia, a first instance decision by Justice Beach of the Federal Court has provided some guidance: pursuant to Thaler v Commissioner of Patents [2021] FCA 879, an AI system can be the named inventor for an Australian patent application, with a person or corporation listed as the applicant for that patent, or a grantee of the patent.” [...] “Worldwide, this is the first court decision determining that an AI system can be an inventor for the purposes of patent law.” [...] “The UK Intellectual Property Office (UKIPO), European Patent Office (EPO), and US Patent and Trademark Office (USPTO) each determined that an inventor must be a natural person.”

An appeal is still ongoing, but the judgment already serves as an important milestone in the anticipated future of artificial intelligence, which bears enough resemblance to traditional human intelligence to demand similar treatment: first as art, and now also as the subject of patents.

I must admit that when I first read this article it struck me as a joke, and even a funny one at that. However, as I kept thinking about it, it made more and more sense. The purpose of this post is to take you through my thought process.

Just note that I am neither a lawyer nor a patent attorney, and I only express an opinion as someone who is nowhere close to being authoritative on the subject.

Continue reading "Patents invented by Machine Learning"

Machine generated content helping spread fake news

I recently participated in a discussion about the role of machine-generated text in the spread of fake news.

The context of this discussion was the work titled “How Language Models Could Change Disinformation”. The progress made by the industry in the area of algorithmic text generation has led to concerns that such systems could be used to generate automated disinformation at scale. The report examines the capabilities of GPT-3, an AI system that writes text, to analyze its potential use for promoting disinformation (i.e., fake news).

The report reads:

In light of this breakthrough, we consider a simple but important question: can automation generate content for disinformation campaigns? If GPT-3 can write seemingly credible news stories, perhaps it can write compelling fake news stories; if it can draft op-eds, perhaps it can draft misleading tweets.
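For a sense of how low the technical bar already is, here is a minimal sketch using the Hugging Face transformers library; since GPT-3 is only reachable through a gated API, the openly downloadable GPT-2 model stands in for it, and the prompt templates are my own illustrative assumptions:

```python
# A minimal sketch of automated text generation, assuming the Hugging Face
# "transformers" library; GPT-2 stands in for GPT-3, which is API-gated.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

# Hypothetical prompt templates; each yields several fluent continuations.
prompts = [
    "Scientists announced today that",
    "Local officials confirmed that",
]
for prompt in prompts:
    for out in generator(prompt, max_length=60, num_return_sequences=3):
        print(out["generated_text"], "\n---")
```

A loop like this produces dozens of fluent variants in seconds, which is precisely the scaling property the report worries about.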

Following is my take on this.

Continue reading "Machine generated content helping spread fake news"

Machine Learning Security: a new crop of technologies

Artificial Intelligence (AI), and Machine Learning (ML) specifically, are now at the stage in which we start caring about their security implications. Why now? Because that’s the point at which we usually start caring about the security considerations of new technologies we’ve started using. Looking at previous cases, such as desktop computing, the Internet, car networks, and IoT (the Internet of Things), those technologies first gained fast momentum through the urge to capitalize on their novel use-cases. They were deployed as fast as they possibly could be, by stakeholders rushing to secure their share of the emerging revenue pie. Once the systems were operating en masse, it was finally time to realize that where there is value, there is also malice, and that every technology that processes an asset (valuable data that can be traded, the ability to display content to a user and grab her attention, potential for extortion money, etc.) will inevitably lure threat actors, who demonstrate impressive creativity when attempting to divert or exploit those assets.

This flow of events is barely surprising, and we were not really shocked to learn that the Internet does not provide much security out of the box, that cars can be hacked remotely through their wireless interfaces, or that cheap home automation gear doesn’t bother to encrypt its traffic. This is simple economics: unless there is an immediate public-safety issue causing the regulator to intervene (often later than it should), we act upon security considerations only once the new technology is deployed and the security risks manifest in a way that can no longer be ignored.

It happened with desktop computing in the ’80s, with the Internet in the ’90s, with car networks about a decade ago, and with mass IoT about half a decade ago. (By those approximate dates I am not referring to when the first security advocate pointed out the threats, which usually happened right away if not before, but to when enough security awareness had built up for the industry to commit resources towards mitigating some of them.) Finally, it’s now the turn of Machine Learning.

When we decide that a new technology “needs security” we look at the threats and see how we can address them. At this point, we usually divide into two camps:

  • Some players, such as those heavily invested in securing the new technology, and consultants keen on capitalizing on the new class of fear that the industry has just brought on itself, assert that “this is something different”: everything we knew about security has to be re-learned, and all the tools and methodologies we’ve built no longer suffice. In short, the sky is falling and we are here to the rescue.

  • Older security folks will point at the similarities, concluding that it’s the same security, just with different assets, requirements, and constraints that need to be accounted for. IoT security is the same security, just with resource-constrained devices, physical assets, long device lifetimes, and harsh network conditions; car security is the same security, just with a different type of network, different latency requirements, and devastating kinetic effects in case of failure; and so forth.

I usually associate myself with the second camp. Each new area of security introduces a lot of engineering work, but the basic paradigms remain intact. It’s all about securing computer systems, just with different properties. Those different properties make tremendous differences, and call for different specializations, but the principles of security governance, and even the nature of the high-level objectives, are largely reusable.

With Machine Learning the situation is different. This is a new flavor of security that calls for a new crop of technologies and startups deploying a different mindset towards solving a new set of security challenges, including challenges that are not in focus in other domains. The remainder of this post delves into why ML Security is different (unlike the previous examples), and what our next steps could look like when investing in mitigation technologies.
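To make the “different flavor” concrete, consider adversarial examples: inputs perturbed just enough to fool a model, a threat class with no real analogue in the earlier domains. The post names no specific attack, so the following PyTorch sketch of the well-known Fast Gradient Sign Method (FGSM) is purely my own illustration:

```python
import torch

def fgsm_perturb(model, loss_fn, x, y, epsilon=0.03):
    # Fast Gradient Sign Method: shift the input a small step in the
    # direction that increases the model's loss, which is often enough
    # to flip the prediction while looking unchanged to a human.
    x_adv = x.clone().detach().requires_grad_(True)
    loss = loss_fn(model(x_adv), y)
    loss.backward()
    return (x_adv + epsilon * x_adv.grad.sign()).detach()
```

No firewall rule or signature database addresses this failure mode; it lives inside the model itself, which is why the mitigation technologies look different too.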

Continue reading "Machine Learning Security: a new crop of technologies"

Addressing the shortcoming of machine-learning for security

In a previous post I wrote about cases in which machine-learning adds little to the reliability of security tools, because it often does not react well to novel threats. In this post I will share a thought about overcoming this limitation of machine-learning by properly augmenting it with other methods. The challenge we tackle is not that of finding additional methods of detection, as we assume such methods are already known and deployed in other systems. The challenge we tackle is how to combine traditional detection methods with those based on machine-learning, in a way that yields the best overall results. As promising as machine-learning (and artificial intelligence) is, it is less effective when deployed in a silo (i.e., not in combination with existing technologies), hence the significance of properly marrying the two.

I propose to augment the data used in machine-learning with tags that come from other, more traditional, classification algorithms. More importantly, I suggest distinguishing between the machine-learning-based assessment component and the decision component, and using the tagging in both components, independently.
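As a rough illustration of this architecture, here is a minimal Python sketch; the rule-based tagger, feature layout, thresholds, and scikit-learn model choice are all my own assumptions, not a prescription. The traditional tags are fed to the model as extra features (the assessment component) and then consulted again, independently, at decision time:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Hypothetical rule-based tagger standing in for the "traditional"
# classification algorithms; the thresholds are illustrative only.
def rule_tags(x):
    return np.array([x[0] > 0.9, x[1] < 0.1], dtype=float)

def augment(X):
    # Assessment input: the raw features plus the traditional tags.
    return np.hstack([X, np.apply_along_axis(rule_tags, 1, X)])

# Toy data, only to keep the sketch self-contained and runnable.
rng = np.random.default_rng(0)
X, y = rng.random((200, 4)), rng.integers(0, 2, 200)
model = RandomForestClassifier(random_state=0).fit(augment(X), y)

def decide(x, threshold=0.5):
    # Assessment component: ML score over the augmented features.
    score = model.predict_proba(augment(x[None, :]))[0, 1]
    # Decision component: consults the tags again, independently of the
    # model, so a firm traditional signal lowers the alerting bar.
    if rule_tags(x).any():
        threshold = min(threshold, 0.2)
    return "alert" if score > threshold else "pass"

print(decide(rng.random(4)))
```

The point of the split is that a confident traditional detection is never averaged away inside the model; it gets a second, independent say in the final decision.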

Continue reading "Addressing the shortcoming of machine-learning for security"

An obvious limitation of machine-learning for security

I recently came across a study titled “Unknown Threats are The Achilles Heel of Email Security”. It concludes that traditional e-mail scanning tools, even ones that utilize machine-learning to cope with emerging threats, still do not react fast enough to new threats. This is probably true, but I think the conclusion deserves to be considered more widely, beyond e-mail.

Threats are dynamic. Threat actors are creative and well-motivated enough to make threat mitigation an endlessly moving target. So aren’t we fortunate to have this new term, “machine learning”, recently join our tech jargon? Just like many other buzzwords, the term is newer than what it denotes, but nonetheless, a machine that learns the job autonomously seems to be precisely what we need for mitigating ever-changing threats.

All in all, machine-learning is good for security, and yet in some cases it is a less significant addition to our defense arsenal. Why? Because while you learn, you often don’t do the job well enough, and a machine is no different. Eventually, the merits of learning-while-doing are determined by the price of the resulting temporary imperfection.

Continue reading "An obvious limitation of machine-learning for security"

The Fake News problem will not be solved by technology

One reason we struggle to find a solution to the fake news problem is that we have never defined the problem properly. The term “fake news” started as referring to publications that look like news but are entirely fabricated. It then migrated to also cover news articles that are just grossly inaccurate, and later expanded further to include news one doesn’t like and tries to dispute.

It is amusing to see how we seek technical mitigations for a problem that is entirely semantic. Just as a lie detector does not detect untruths but only the artifacts of a lying person, all the technologies considered for fighting fake news do not detect untruths but mostly willful propaganda. However, just like plain deceiving, publishing propaganda comes in many shades of grey, implying that whatever solutions we find, we will never be happy with them.

We should recalculate our route.

Continue reading "The Fake News problem will not be solved by technology"