
Machine Learning Security: a new crop of technologies

Artificial Intelligence (AI), and Machine Learning (ML) specifically, are now at the stage at which we start caring about their security implications. Why now? Because that is the point at which we usually start caring about the security considerations of new technologies we’ve started using. Looking at previous cases, such as desktop computing, the Internet, car networks, and the IoT (Internet of Things), those technologies first gained momentum through the urge to capitalize on their novel use-cases. They were deployed as fast as they possibly could be, by stakeholders rushing to secure their share of the emerging revenue pie. Once the systems started operating en masse, it was finally time to realize that where there is value – there is also malice, and every technology that processes an asset (valuable data that can be traded, the ability to display content to a user and grab her attention, potential for extortion money, etc.) will inevitably lure threat actors, who demonstrate impressive creativity when attempting to divert or exploit those assets.

This flow of events is barely surprising, and we were not really shocked to learn that the Internet does not provide much security out of the box, that cars could be hacked remotely through their wireless interfaces, or that cheap home automation gear doesn’t bother to encrypt its traffic. This is economics, and unless there is an immediate public safety issue causing the regulator to intervene (often later than it should), we act upon security considerations only once the new technology is deployed and the security risks manifest in a way that can no longer be ignored.

It happened with desktop computing in the ’80s, with the Internet in the ’90s, with car networks about a decade ago, and with mass IoT about half a decade ago. (By those approximate dates I am not referring to when the first security advocate pointed out that there are threats – that usually happened right away, if not before – but to when enough security awareness was built for the industry to commit resources towards mitigating some of those threats.) Finally, it’s now the turn of Machine Learning.

When we decide that a new technology “needs security” we look at the threats and see how we can address them. At this point, we usually divide into two camps:

  • Some players, such as those heavily invested in securing the new technology, and consultants keen on capitalizing on the new class of fear that the industry just brought on itself, assert that “this is something different”: everything we knew about security has to be re-learned, and all the tools and methodologies we’ve built no longer suffice. In short, the sky is falling and they are here to the rescue.

  • Older security folks will point at the similarities, concluding that it’s the same security, just with different assets, requirements, and constraints that need to be accounted for. IoT security is the same security, just with resource-constrained devices, physical assets, long device lifetimes, and harsh network conditions; car security is the same security, just with a different type of network, different latency requirements, and devastating kinetic effects in case of failure; and so forth.

I usually side with the second camp. Each new area of security introduces a lot of engineering work, but the basic paradigms remain intact. It’s all about securing computer systems, just with different properties. Those different properties make tremendous differences, and call for different specializations, but the principles of security governance, and even the nature of the high-level objectives, are largely reusable.

With Machine Learning the situation is different. This is a new flavor of security that calls for a new crop of technologies and startups deploying a different mindset towards solving a new set of security challenges, including challenges that are not in focus in other domains. The remainder of this post delves into why ML security is different (unlike the previous examples), and what our next steps could look like when investing in mitigation technologies.

Continue reading "Machine Learning Security: a new crop of technologies"

Useful threat modelling

Do you know what all security documents have in common? They were all, at some point, called a “threat model”… A joke indeed, and not the funniest one, but it makes a point: there is no single approach to threat modelling, and not even a single definition of what a threat model really is. So what is it? It is most often considered to be a document that introduces the security needs of a system, using any one of dozens of possible approaches. Whatever the modelling approach, the threat model has just one strong requirement: it needs to be useful for whatever purpose it is made to serve. Let us try to describe what we often want to get from a threat model, and how to achieve it.

Continue reading "Useful threat modelling"

On protecting yourself against MITM in SSH

SSH is one of the best security protocols out there. It is used by anyone remotely logging into servers, as well as for secure connections to Git servers and for secure file transfers via SFTP. One of the key promises of SSH is protection against active man-in-the-middle (MITM) attacks. This makes SSH the best choice when connecting to a server over a hostile network, such as a public hotspot. However, some SSH clients (particularly on mobile phones) void this protection by not caching server keys. Can you do anything about it? Yes: use private keys instead of passwords for client authentication. Read on for the technical details.
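As a rough illustration of that advice, here is a minimal sketch using the third-party paramiko library; the host name, user name, and key path are placeholders, not taken from the post. It authenticates the client with a private key rather than a password, and refuses servers whose keys are not already cached:

```python
# Hedged sketch: key-based SSH client login with strict host-key checking.
# Assumes an existing key pair and a populated known_hosts file.
import os
import paramiko

client = paramiko.SSHClient()
client.load_system_host_keys()                                # use the cached server keys
client.set_missing_host_key_policy(paramiko.RejectPolicy())   # refuse unknown servers outright

client.connect(
    "git.example.com",                                      # placeholder host
    username="alice",                                       # placeholder user
    key_filename=os.path.expanduser("~/.ssh/id_ed25519"),   # private key instead of a password
    look_for_keys=False,
    allow_agent=False,
)
_, stdout, _ = client.exec_command("uptime")
print(stdout.read().decode())
client.close()
```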

Continue reading "On protecting yourself against MITM in SSH"

Poodle flaw and IoT

The Poodle flaw discovered by Google folks is a big deal. It will not be hard to fix, because for most systems there is simply no need to support SSLv3; fixing them only requires a configuration change that disallows the SSLv3 fallback. However, this flaw brings to our attention, again, how the weakest link in security often lies in the graceful-degradation mechanisms that are there to support interoperability. Logic that degrades security for the sake of interoperability is hard to get right and is often easy to exploit. Exploitation is usually carried out by the attacker connecting while pretending to be “the dumbest” principal, letting the “smarter” principal drop security to as low as it will go.
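To illustrate how small the client-side fix is, here is a minimal sketch using Python’s standard ssl module (the host name is a placeholder); the only relevant line is the option that refuses any fallback to SSLv3:

```python
# Hedged sketch: a TLS client that explicitly refuses the SSLv3 fallback.
import socket
import ssl

context = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)  # negotiate TLS, verify certificates
context.load_default_certs()
context.options |= ssl.OP_NO_SSLv3                 # never degrade to SSLv3

with socket.create_connection(("www.example.com", 443)) as sock:
    with context.wrap_socket(sock, server_hostname="www.example.com") as tls:
        print(tls.version())                       # e.g. "TLSv1.3", never "SSLv3"
```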

All this is not new. What may be new is a thought on what such types of flaws may imply for the emerging domain of the Internet of Things.

Continue reading "Poodle flaw and IoT"

Capturing PINs using an IR camera

This video demonstrates how an IR camera, of the type that can be bought for a reasonable price and attached to a smartphone, can be used to capture a PIN that was previously entered on a PIN pad, by analyzing a thermal image of the pad after the fact. When a human finger presses a non-metallic button, it leaves a thermal residue that can be detected in a thermal image, even one taken many seconds later.

The video refers to the article: Heat of the Moment: Characterizing the Efficacy of Thermal Camera-Based Attacks, written at UC San Diego.

OpenSSL "Heartbleed" bug: what's at risk on the server and what is not

A few days ago, a critical bug was found in the common OpenSSL library. OpenSSL is the library that implements the common SSL and TLS security protocols. These protocols provide the encrypted tunnel that secure services – over the web and otherwise – use to encrypt the traffic between the client (user) and the server.

The discovery of such a security bug is a big deal. Not only is OpenSSL very common, but the bug that was found can be readily exploited remotely, without any privilege on the attacker’s side. Moreover, the outcome of the attack it makes possible is devastating: exploiting the bug allows an attacker to obtain internal information, in the form of memory contents, from the attacked server or client. The memory space that the attacker can obtain a copy of can contain just about everything. Almost.

There are many essays and posts about the “everything” that could be lost, so I will take the optimistic side and dedicate this post to the “almost”. As opposed to other serious attacks, at least the leak is not complete and can be quantified, and the attack is not persistent.

Continue reading "OpenSSL "Heartbleed" bug: what's at risk on the server and what is not"

How risky to privacy is Apple's fingerprint reader?

Congratulations to Apple for featuring a fingerprint reader as part of its new iPhone. It was reported by The Wall Street Journal here, in the blog of Bruce Schneier here, by Time Tech here, and in dozens of other places. Quite expectedly, this revelation spurred anxiety among the conspiracy theorists out there. The two common concerns that were raised are:

  • Apple will have a database of all our fingerprints.

  • What if someone breaks into the device and gets at our fingerprint?

(There is another line of concern, related to the fifth amendment and how its protection may be foiled by authenticating using biometrics alone, but this is a legal concern which is off topic.)

While a bit of paranoid thinking is always helpful, security engineering requires more than crying out each time a mega-corporation launches a new technology that involves private data. Assets and threats need to be determined, and only then can we decide whether or not the risk is worth the benefits.

Continue reading "How risky to privacy is Apple's fingerprint reader?"

Improving the security provided by Yubikey for local encryption

In the previous post, I discussed the use of Yubikey for local encryption. I noted that the Yubikey can store a long string that can be used as an encryption key or a password. It provides no extra protection against key-loggers, but it still allows using strong passwords without having to remember and type them. Today, I would like to discuss a technique that makes Yubikey-based encryption more secure; still not resistant to a key-logger, but resistant to having the Yubikey “borrowed” by a thief.
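As a rough illustration of the general idea (not necessarily the exact technique described in the full post), the Yubikey’s long static string can be combined with a short memorized PIN through a key-derivation function, so that a stolen Yubikey alone is not enough to derive the key. The names, values, and iteration count below are illustrative only:

```python
# Hedged sketch: derive a volume key from a Yubikey static string plus a memorized PIN.
import hashlib

def derive_key(yubikey_static: str, memorized_pin: str, salt: bytes) -> bytes:
    """Stretch the combined secret with PBKDF2; the iteration count is an arbitrary example."""
    secret = (yubikey_static + memorized_pin).encode()
    return hashlib.pbkdf2_hmac("sha256", secret, salt, 200_000, dklen=32)

# Placeholder values: the Yubikey types its static string, the user types the short PIN.
key = derive_key("cccccc...longstaticstring", "4921", b"per-volume-salt")
```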

Continue reading "Improving the security provided by Yubikey for local encryption"

Preventing the Evil Maid Attack on FDE

The attack referred to as the “Evil Maid Attack”, or the “Cleaning Maid Attack”, against full disk encryption (FDE) is considered one of the serious attacks concerning people who travel with laptops full of confidential information. This attack involves an attacker who can obtain physical access to an FDE-protected laptop. The attacker boots the laptop from a second drive and modifies the boot sector so that subsequent boot-ups, e.g., by the owner, will execute malicious code that captures the passphrase and/or key used to boot the system. The attacker then needs to get hold of the laptop once more to collect the loot. This attack was discussed everywhere, including in the PGP Blog, LWN.net, ZDNet, and the blog of Bruce Schneier.

Some people claimed that there are no feasible countermeasures against this attack, other than making sure your laptop is never left alone for too long. A while ago, I traveled to a place where laptops were not allowed; I had to leave mine at the hotel every day for two weeks. This made me devise a practical solution, which can be dubbed:
be the cleaning maid yourself.
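As a rough sketch of the self-inspection idea (an illustration only, not necessarily the exact procedure described in the full post), one could boot from a trusted medium carried separately and compare a hash of the laptop’s boot sector against a previously recorded value before ever typing the FDE passphrase. The device path and reference hash below are placeholders:

```python
# Hedged sketch: check that the boot sector has not been tampered with before unlocking.
# Requires read access to the raw device (typically root), booted from a trusted medium.
import hashlib

BOOT_DEVICE = "/dev/sda"      # placeholder device path
EXPECTED_SHA256 = "d2f1..."   # hash recorded while the boot sector was known to be good (placeholder)

with open(BOOT_DEVICE, "rb") as dev:
    boot_sector = dev.read(512)   # classic MBR boot sector: the first 512 bytes

digest = hashlib.sha256(boot_sector).hexdigest()
if digest == EXPECTED_SHA256:
    print("Boot sector unchanged; safe to enter the passphrase.")
else:
    print("Boot sector CHANGED; do not enter the passphrase.")
```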

Continue reading "Preventing the Evil Maid Attack on FDE"

Automobile hack: we should have known better

No one in the automotive security industry could miss the recently published news article titled “Beware of Hackers Controlling Your Automobile”, published here, and a similar essay titled “Car hackers can kill brakes, engine, and more”, which can be found here. In short, they describe how researchers succeeded in taking over a running car, messing with its brakes, lights, data systems, and whatnot.

As alarming and serious as this is, it should not come as a surprise.

Continue reading "Automobile hack: we should have known better"

InZero provides some security

I was just made aware of InZero, a new physical device that you connect to your PC so that your browsing becomes secure. I find it amazing that some people treat it as among the most revolutionary of security solutions.

I think the InZero device is cool. I think it protects against some attack vectors, at some usability cost. It may even make a worthwhile trade-off for some people. But to consider the protection granted by this device as something revolutionary, or to claim that it is “giving hackers, criminals, and spies the middle finger”, is an exaggeration, even when it comes from marketing guys.

Continue reading "InZero provides some security"

Right, the kernel can access your encrypted volume keys. So what?

On January 15th, TechWorld published an article called Encryption programs open to kernel hack. Essentially, it warns that the key to encrypted volumes – that is, to software-encrypted virtual drives – is delivered by the encryption application to the operating system kernel, and thus may be captured by a malicious kernel.

“According to a paper […] such OTFE (on-the-fly-encryption) programs typically pass the password and file path information in the clear to a device driver through a Windows programming function called ‘DeviceIoControl’.”


And they consider it a threat:

“Dubbed the Mount IOCTL (input output control) Attack by Roellgen, an attacker would need to substitute a modified version of the DeviceIoControl function that is part of the kernel with one able to log I/O control codes in order to find the one used by an encryption driver. Once found, the plaintext passphrase used to encrypt and decrypt a mounted volume would be vulnerable.”


Such “findings” occur often when the security model of a security system is ignored.

Continue reading "Right, the kernel can access your encrypted volume keys. So what?"

Firewire threat to FDE

Full-Disk Encryption (FDE) has been suffering class attacks lately.

As if the latest research (which showed that RAM contents can be recovered after power-down) was not enough, it seems that Firewire ports form an even easier attack vector into FDE-locked laptops.

From TechWorld: Windows hacked in seconds via Firewire

The attack takes advantage of the fact that Firewire can directly read and write to a system’s memory, adding extra speed to data transfer.


The tool mentioned seems to only bypass the Win32 unlock screen, but given the free access to RAM, exploit code that digs out FDE keys is a matter of very little extra work.

This is nothing new. The concept was presented a couple of years ago, but I haven’t seen most FDE enthusiasts disable their Firewire ports yet.

Continue reading "Firewire threat to FDE"