
Dubai: Artificial intelligence (AI) will play a stronger role in the cyber security space in the future. Its key purpose, initially, is to help automate mundane tasks, such as prioritising security logs, so that companies can reduce human time and effort.

A rapidly growing number of logs, metrics and all kinds of other indicators opens the door to two possibilities: cyber blindness or cyber intelligence.

Unfortunately, what mostly happens today is cyber blindness, essentially because there is no way to manually check the huge amount of data that cyber experts are confronted with every day.

Two options

The industry is again faced with two options: leave the data as it is, keeping open only the possibility of looking back later to verify it, or develop something that could help solutions providers analyse logs in real time and take decisions.

The second option is called machine learning or AI. More and more organisations are choosing machine learning and artificial intelligence today.

In fact, the concept of artificial intelligence was born along with the computing era itself, with Alan Turing.

In 1947, Turing described an infant’s mind as an “unorganised machine” and set out some of the earliest definitions of machine learning.

He saw the need for a seeded solution set of accurate or known potential output. Only now are concepts of artificial neural networks (ANNs) being applied to modern cyber security solutions.

Machine learning

And yes, AI is of extreme importance, since it is needed to stay on a level playing field with black hats as they adopt this technology themselves.

Machine learning will not benefit the industry unless it is applied within an expert system.

Machine learning allows a computer to learn for itself. Imagine an environment where a machine learning system is constantly analysing data across billions or trillions of logs per second (such as in a neural network) and is able to classify it, detect patterns and behaviours, and eventually predict an attack before it occurs.

This is the possibility and eventual promise of AI: to use machine learning so that a system does not need to be taught, and instead uses its own ‘instinct’ to take decisions fast.
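
To make the idea concrete, here is a minimal sketch in Python using the scikit-learn library; the feature names and sample log values are invented for illustration, and a real deployment would work on vastly larger, continuously streaming data. An unsupervised anomaly detector is trained on what “normal” log activity looks like and then flags new events that break the pattern.

# Toy sketch: unsupervised anomaly detection over log-derived features.
# The feature columns and numbers below are hypothetical examples.
import numpy as np
from sklearn.ensemble import IsolationForest

# Each row: [requests_per_minute, failed_logins, megabytes_sent_out]
historic = np.array([
    [120, 1, 5.0],
    [115, 0, 4.8],
    [130, 2, 5.3],
    [125, 1, 5.1],
])
new_events = np.array([
    [122, 1, 5.0],       # looks like normal activity
    [400, 35, 250.0],     # traffic burst, failed logins, large data egress
])

model = IsolationForest(contamination=0.1, random_state=0)
model.fit(historic)                  # learn the shape of "normal"
flags = model.predict(new_events)    # -1 = anomaly, 1 = normal
for event, flag in zip(new_events, flags):
    print(event, "ANOMALY" if flag == -1 else "ok")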

Security solutions providers have been using AI and machine learning to prevent attacks and keep their customers safe but, at the same time, cyber criminals are also deploying AI for attacks.

According to Symantec’s 2017 Internet Security Threat Report (ISTR), new levels of sophistication and innovation characterise the current threat landscape.


Haider Pasha, chief technology officer for emerging markets at Symantec, said that with the increased adoption of AI, cyber criminals are likely to exploit or leverage this technology to their advantage.

However, there are advancements in cyber security technologies to prevent such attacks. AI, like any other technology, will be used to create new sources of threats, and organisations will continue to protect their customers using the latest resources available.

Lee Fisher, head of security at Juniper Networks EMEA, said that there are simply far too many threats created every day — not by humans of course, but by automation.

Analysis on a massive scale

To cope with the scale, AI will help in a number of different ways. AI enables statistical, contextual analysis on a massive scale based on historic activity.
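
A minimal illustration of that kind of statistical baseline analysis (the numbers below are made up, and real systems track far more signals) is to compare current activity against historic behaviour and flag large deviations:

# Minimal sketch: flag activity that deviates sharply from its historic baseline.
# The login counts are invented for illustration.
from statistics import mean, stdev

historic_logins_per_hour = [42, 39, 45, 41, 44, 40, 43]
current = 118

mu = mean(historic_logins_per_hour)
sigma = stdev(historic_logins_per_hour)
z = (current - mu) / sigma

if z > 3:   # more than three standard deviations above normal
    print(f"Alert: {current} logins/hour is unusual (z-score {z:.1f})")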

Derek Manky, global security strategist at Fortinet, said the world is seeing more and more automation being built into black hats’ attack technology. What this means is that the time to respond to a cyber-attack is shrinking drastically. Ten years ago, weeks or days to respond to a cyber-attack was adequate. Today, “we begin to measure in minutes (less than an hour)”.

“In the future, we will start measuring this in seconds. Humans cannot operate on this level, and therefore AI is crucial to respond at machine speed to the threat of cyber-attack,” he said.

“For example, there is a vast number of events that we receive from security logs across our network (endpoints, firewalls, etc.). A large bank may receive billions of such events.

“On average, a Security Operations Centre (SOC) analyst will spend hours on a set of correlated events to determine if an incident is real or a false positive, even with an automated tool, such as a Security Information and Event Management (SIEM) system,” Pasha said.

Dmitry Bestuzhev, director of Kaspersky Lab’s Global Research and Analysis Team in Latin America, said the key is finding a way to analyse the huge amount of data the world generates in real time. The mission is to identify attacks and any kind of cyber anomaly in order to detect cybercriminal activity.

Infancy

“At this moment this is something very young, but it has already gained much importance today,” he said.

With machine learning, Pasha said, this task of prioritising incidents can be completed in seconds, providing a more accurate and near real-time analysis of an incident.
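
As a rough sketch of how such prioritisation could work (Python with scikit-learn; the features, past verdicts and new events are entirely hypothetical), a classifier trained on earlier analyst decisions scores incoming correlated events so that the most likely real incidents rise to the top of the queue:

# Hypothetical sketch: rank correlated events by how likely they are
# to be real incidents, based on past analyst verdicts.
from sklearn.linear_model import LogisticRegression

# Features per event: [distinct_hosts, failed_logins, known_bad_ip (0 or 1)]
past_events = [[1, 2, 0], [8, 40, 1], [2, 5, 0], [6, 30, 1]]
verdicts    = [0, 1, 0, 1]   # 0 = false positive, 1 = confirmed incident

clf = LogisticRegression().fit(past_events, verdicts)

new_events = [[3, 4, 0], [7, 55, 1]]
scores = clf.predict_proba(new_events)[:, 1]   # probability of a real incident
for score, event in sorted(zip(scores, new_events), reverse=True):
    print(f"priority {score:.2f}: {event}")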

When asked whether human intervention will no longer be needed with AI, or whether it will take a combination of people, processes and technologies to tackle today’s complex cyber security landscape, he said it should definitely be a combination of these, used to confirm the reasoning of the AI system.

For example, he said that if an AI deduces that a potential security incident requires a block on a firewall, a human should be able to approve the ultimate decision before this block is implemented.
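
A minimal, purely illustrative sketch of such a human-in-the-loop gate (the functions and the firewall call below are hypothetical placeholders, not a real vendor API):

# Illustrative only: the AI proposes a block, a person approves it
# before anything is actually pushed to the firewall.
def apply_block(ip_address: str) -> None:
    # Placeholder for a real firewall API call.
    print(f"Blocking traffic from {ip_address}")

def handle_ai_recommendation(ip_address: str, confidence: float) -> None:
    print(f"AI recommends blocking {ip_address} (confidence {confidence:.0%})")
    decision = input("Approve block? [y/N] ").strip().lower()
    if decision == "y":
        apply_block(ip_address)
    else:
        print("Recommendation logged for review; no block applied")

handle_ai_recommendation("203.0.113.42", 0.97)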

In time, some cyber security tasks may be handed over to automated detect-and-block across the cyber kill chain, but this will not happen overnight.

“I predict the amount of automation and autonomous usage will increase with AI so the amount of work by ‘people’ will be reduced, or be re-prioritised to more important, strategic, or advanced tasks,” Pasha said.

Can AI be defeated by humans?

Of course it can, Fisher said, adding that what is important is how quickly any system can react and adapt to new techniques.

Manky believes that humans will always be required to supervise, override and otherwise work alongside AI technology (as escalation paths). AI is useful for reducing cycles and replacing the mundane activities that humans are often tasked with day to day, so that humans can be re-purposed to more sophisticated tasks working with the AI technology.

Bestuzhev said that humans are essential to the successful functioning of AI, especially in the training process. A system cannot identify malicious activity in a log, an event or an indicator without first being taught by a person to single out anomalies or to look for a break in a pattern.

In his opinion, this will always be true as long as humans understand how a machine learning programme works.

“This is especially important today, as we face a shortage of cyber security professionals. The professionals we do have should be utilised to maximum efficiency — that cannot be done without AI,” Manky said.

Pasha said that it really depends on what level of advancement you are referring to with AI.

With a fully functional, self-aware and internet-connected AI, he said, the chances of defeating it are low.

“However, in a more practical environment, with kill switches and silos of AI analytical engines used for specific purposes, we would be able to control specific use cases.

"Let’s use an analogy of the car manufacturing factory. The factory uses robots to assemble the cars, but each robot has a specific task and can learn and conclude when a part is faulty. The implementation of something similar to this example for the software industry should be most beneficial,” he said.