Opinions

AI And Machine Learning For Cybersecurity: Friend And Foe?

With global cyber attacks like WannaCry and NotPetya affecting hundreds of thousands of people, the cybersecurity industry needs to arm itself with new technologies, or risk being overrun.


Artificial intelligence and machine learning are increasingly promoted as a solution in this context, and you don't need to look far to find products and services trumpeting their use of such techniques as a key selling point.

At the same time, there is growing concern that the rise of AI will herald a new era of cybercrime, making attacks more complex and more difficult to stop.

Before looking at the potential downsides, it is worth thinking about the defensive role that AI can play in cybersecurity. The key areas here include spotting things that we would not ordinarily notice, and automating tasks that would otherwise rely upon manual intervention.

Although it’s getting significant attention at present, the opportunity to link AI into cybersecurity has been recognised for quite some time, with application in behavioural profiling and anomaly detection (e.g. determining what is ‘normal’, and then flagging things that sit outside of this for further investigation).

Such approaches have limited scope without AI. Statistical methods can work to some degree, but AI can pull out more subtle patterns, and can help to derive new rules that would be unlikely to emerge if security analysts were asked to specify them directly.

It is also useful for addressing the needle-in-the-haystack challenge that would otherwise face us when performing analysis and correlation across increasingly large and diverse data sets.
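As a loose illustration of the anomaly detection idea (a minimal sketch, not any vendor's actual approach), the Python fragment below trains scikit-learn's IsolationForest on a handful of invented 'normal' login sessions and then flags activity that sits outside that profile. The feature set and the example values are assumptions made purely for illustration.

import numpy as np
from sklearn.ensemble import IsolationForest

# Hypothetical per-session features: [login hour, MB transferred, failed logins]
normal_sessions = np.array([
    [9, 1.2, 0], [10, 0.8, 1], [14, 2.1, 0], [11, 1.5, 0], [16, 0.9, 1],
])

# Learn a profile of 'normal' behaviour from historical activity
model = IsolationForest(contamination=0.1, random_state=42)
model.fit(normal_sessions)

# Score new activity: the model returns -1 for anomalies, 1 for normal
new_sessions = np.array([
    [10, 1.1, 0],    # routine daytime session
    [3, 250.0, 12],  # 3am session, huge transfer, many failed logins
])
for session, label in zip(new_sessions, model.predict(new_sessions)):
    print(session, '->', 'flag for investigation' if label == -1 else 'normal')

The point is not the particular model but the division of labour: the machine learns the baseline and surfaces the outliers, while the analyst decides what the outliers mean.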

Meanwhile, from the automation perspective, AI can be linked to areas such as incident monitoring, recognising that the speed of response is often critical in limiting impact. This is particularly important as we consider the rise of the Internet of Things and face the challenge of monitoring and responding to larger attacks affecting increasingly diverse sets of targets.

AI can help to reduce the dependency upon human involvement in deciding how to respond, potentially increasing our speed of defence, as well as learning how to differentiate false alarms from genuine attacks.
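To make the false-alarm point concrete, here is a minimal sketch of a model that learns to triage new alerts before a human gets involved. It assumes a labelled history of past alerts exists; the features and the choice of a random forest are illustrative, not a prescribed method.

from sklearn.ensemble import RandomForestClassifier

# Hypothetical alert features: [severity, hosts affected, times rule fired today]
past_alerts = [
    [2, 1, 50],   # low severity, one host, noisy rule
    [9, 40, 1],   # high severity, many hosts
    [3, 1, 80],
    [8, 25, 2],
    [1, 1, 120],
    [7, 15, 3],
]
labels = [0, 1, 0, 1, 0, 1]  # 0 = false alarm, 1 = genuine attack

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(past_alerts, labels)

# Triage a new alert automatically; escalate only if it looks genuine
new_alert = [[8, 30, 1]]
print('Probability genuine:', clf.predict_proba(new_alert)[0][1])

In practice, the value comes from retraining as analysts confirm or dismiss alerts, so that the model's notion of a false alarm tracks the organisation's own history.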

Perhaps most significantly, AI can help us cope with the scale and complexity of the threats we now face, as the rise of the Internet of Things brings an increasingly large and diverse range of devices and data sets.

As a result, we don’t need to look far to find evidence of AI and machine learning becoming ever more commonplace.  For example, around half of the shortlisted candidates for Best Threat Intelligence Technologies in the SC Awards 2018 feature the use of related techniques as part of their solutions.

Moreover, further evidence of the mainstreaming of the technology is that the awards themselves now include a newly-introduced category for Best use of Machine Learning/AI.

However, as the recent report on The Malicious Use of Artificial Intelligence has highlighted, AI has the potential to be a double-edged sword, with a growing likelihood of being used against security as well as for it.

Even outside the context of cybercrime and attacks, various concerns have been expressed about where AI is heading and where people will be left as a result.  For example, the late Prof. Stephen Hawking repeatedly warned that AI could ultimately prove harmful to mankind, with ‘powerful autonomous weapons’ being cited amongst the possible negatives.

He was also one of the signatories to an Open Letter on Research Priorities for Robust and Beneficial Artificial Intelligence. This has attracted over 8,000 signatories to date (including Elon Musk, Steve Wozniak, and numerous academics), with cybersecurity and privacy flagged amongst the recommended research priorities.

Given the tendency for all technologies to find negative applications, it is reasonable to assume that AI will become a feature of future attacks. Indeed, given that the security community has already foreseen various contexts in which AI could be applied (e.g. to change the way that attacks are targeted, how they are launched, as well as to help automate, personalise, and disguise attacking activities) it is almost inconceivable that attackers would ignore it.

Such an eventuality clearly amplifies the challenge in an area where many are already struggling to keep pace. Indeed, at present, the need to use AI for attack is arguably limited by an abundance of low-hanging fruit: unpatched or misconfigured systems that can be targeted without it.

However, as the technology becomes standard in defence, it is likely to become equally standard in attack, with one side using AI to spot patterns of misuse and malicious activity while the other uses it to find vulnerabilities and evade detection. Taken to the extreme, this may reduce things to a machine-versus-machine conflict in which we become observers rather than participants.

Cyber attack and defence could literally become a case of each side pressing their ‘go’ buttons, and waiting for a message to tell them who won!

Of course, we are not close to that yet, and the technologies have some way to mature before they could fully replace the human participants on either side. Nonetheless, what will be interesting as we move forward is how much people remain actively in the loop, or at least the level of direct involvement they will need to have, and the extent to which security will be able to cope without technological assistance.

At the same time, there are aspects in which the human element may still trump the machine, supplementing the use of AI in the context of defence, and potentially outwitting it when faced with an attack.

While AI offers a speed of response that human analysts would be unable to match, there will be cases in which human judgement can deliver insights that an AI system would miss. People bring elements such as intuition and creativity (e.g. the ability to spot things by thinking like an attacker) that an AI system may not be able to replicate.

Similarly, human ingenuity and out-of-the-box thinking may be our salvation in defending against machine-driven attacks, doing the unexpected things that the underlying logic of an AI has not anticipated. Or, at least, this should be our hope if cybersecurity is ultimately to remain within our control!

Professor Steven Furnell is a senior member of the IEEE and Professor of Information Systems Security at the University of Plymouth.
