
An obvious limitation of machine-learning for security

I recently came across a study titled “Unknown Threats are The Achilles Heel of Email Security”. It concludes that traditional e-mail scanning tools, even ones that utilize machine-learning to cope with emerging threats, still do not react fast enough to new threats. This is probably true, but I think the conclusion should be taken even more widely, beyond e-mail.

Threats are dynamic. Threat actors are creative and well-motivated enough to make threat mitigation an endlessly moving target. So aren’t we fortunate to have this new term, “machine learning”, recently join our tech jargon? Just like many other buzzwords, the term is newer than what it denotes, but nonetheless, a machine that learns the job autonomously seems to be precisely what we need for mitigating ever-changing threats.

All in all, machine-learning is good for security, and yet in some cases it is a less significant addition to our defense arsenal than it may seem. Why? Because while you learn, you often don’t do the job well enough, and a machine is no different. Ultimately, the merits of learning-while-doing are determined by the price of the resulting temporary imperfection.

The meaning of machine-learning for security

In the context of security, machine-learning has one goal: to allow a security system to augment and improve its logic in light of new information. That new information can be directly applicable, such as new threats that the system encounters for the first time, or indirectly applicable, such as non-harmful events that are indicative not of things going wrong, but of things going right. The latter is just as important for systems that try to classify events. Bayes-based spam filters, for example, need to know not only what spam looks like, but, just as importantly, what the genuine messages you normally receive look like.
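To make this concrete, here is a minimal sketch of a Bayes-based filter in Python. It is illustrative only (real filters use proper tokenization and far more careful smoothing), and the class name and scoring details are my own assumptions, not any particular product’s design. The point to notice is that training requires both spam and genuine (“ham”) messages:

```python
from collections import Counter
import math

# Minimal, illustrative naive Bayes spam filter. A word's spam probability
# is meaningless without knowing how often that word appears in legitimate
# mail, which is why both kinds of messages must be fed to train().
class NaiveBayesFilter:
    def __init__(self):
        self.spam_words = Counter()
        self.ham_words = Counter()
        self.spam_msgs = 0
        self.ham_msgs = 0

    def train(self, message: str, is_spam: bool) -> None:
        words = message.lower().split()
        if is_spam:
            self.spam_words.update(words)
            self.spam_msgs += 1
        else:
            self.ham_words.update(words)
            self.ham_msgs += 1

    def spam_score(self, message: str) -> float:
        # Log-odds of spam vs. ham, with add-one (Laplace) smoothing so
        # never-seen words don't zero out the whole product.
        score = math.log((self.spam_msgs + 1) / (self.ham_msgs + 1))
        spam_total = sum(self.spam_words.values()) + 1
        ham_total = sum(self.ham_words.values()) + 1
        for word in message.lower().split():
            p_spam = (self.spam_words[word] + 1) / spam_total
            p_ham = (self.ham_words[word] + 1) / ham_total
            score += math.log(p_spam / p_ham)
        return score  # positive means "more likely spam than ham"
```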

A machine-learning security system is one that augments its event classification logic based on new information that keeps coming.

The limitation of machine-learning for security

When a security system uses machine-learning, it follows that the system is designed to be imperfect as deployed, even against its own security model. Of course, no defense system is ever perfect; there is no need to point that out. But a system based on machine-learning is designed for temporary imperfection, even within its own optimal design model. A Bayes spam filter is known to do an awful job until it is properly trained (and hence it is first deployed in a learning mode, taking no action for a while).

This is not a disadvantage; it is just a fact. The security system can indeed be deployed in learning mode until it gains enough knowledge to be adequately reliable, at which point it is put into production mode and keeps improving forever. But we should still recognize that such a system relies, by its own design, on familiarity with certain inputs in order to be sufficiently effective.
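A minimal sketch of such a deployment gate, reusing the NaiveBayesFilter from the sketch above (the threshold of 1,000 messages is an arbitrary illustrative value, not a recommendation):

```python
# Learning mode: classify silently and keep training, but never block.
# Production mode: start acting on verdicts once enough labeled traffic
# has been seen. Assumes the NaiveBayesFilter sketched earlier.
MIN_TRAINING_MESSAGES = 1000

def handle_message(nb: NaiveBayesFilter, message: str) -> str:
    seen = nb.spam_msgs + nb.ham_msgs
    if seen < MIN_TRAINING_MESSAGES:
        return "deliver"        # learning mode: take no action yet
    if nb.spam_score(message) > 0:
        return "quarantine"     # production mode: act on the verdict
    return "deliver"
```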

This is a key point, particularly considering that the opponent has visibility, often accurate enough, into the inputs that the security system has been subjected to, and hence into its current state of know-how. The opponent will never be able to predict all the data that the system has seen, but will often be able to predict, with great accuracy, some inputs that the system has not yet seen: the inputs that result from the opponent’s own ingenuity.

Our takeaway is that security systems that deploy machine-learning reward attackers’ novelty more than other security systems do. This is not to say that such systems are ineffective or less worthy. Obviously, a system that does machine-learning on top of traditional capabilities may get the best of both worlds and end up better than the traditional method alone. Also, some security systems do not necessarily target attackers who craft their own novel exploits. A zero-day attack is hard to come by, and substantial harm can be, and is, caused by “script kiddies” utilizing known techniques. There is enough of a market just in mitigating those.

However, if the defense goal is the protection of a few valuable targets (such as a CFO’s laptop) against well-crafted unique attacks that are cost-efficient to produce as one-time or few-times disposable weapons, then a security system that focuses on machine-learning while neglecting more traditional techniques might, just as the referenced study showed, end up missing its objective.




Comments


Raphael Bar-El on :

I am convinced. Is there any response to that?

J.K. on :

Good essay.

This is also what we took from NVIDIA's MintNV tester. When it comes to cyber defense, AI moves the bar sideways (not necessarily upwards). Not a bad thing, just something to work with for getting the best out of!

I'll now read the continuation post on solutions...
