The topic of artificial intelligence (AI) can be a thorny one. Some say it will take humanity to new heights; others think it will be the end of us. The truth is hard to determine, especially since many of these futuristic systems don’t even exist yet.
But AI is certainly very powerful today, particularly for pattern recognition, and pattern recognition is very useful if you are trying to keep cybercriminals at bay. Attackers exploit the speed and volume of data, as well as the complexity of modern networks and technology stacks, to slip into systems and hide their attacks.
But they are not invisible. Their actions often generate alerts. The problem is that so does everything else in an ICT estate. According to a survey by the Cloud Security Alliance, over 30% of IT security professionals ignore alerts because there are so many false positives.
The sheer volume of information generated by modern technology threatens to overwhelm security measures, says Tumelo Mashego, Business Unit Manager for Security at Axiz. “The era of big data means there is much more going around. Adding to this is the fact that IT professionals have a lot to do because of digital transformation and other technology influences. They also don’t have the control they used to rely on because data can now leave the company’s perimeter. Even just a poor BYOD security environment can become incredibly dangerous. Cybersecurity has never been harder or more complicated than it is now.”
Can AI save us?
Take security out of the picture, but leave in the factors overwhelming it, and you have the reasons why everyone struggles with today’s Information Age. From finding songs on Spotify to producing situation reports, we have too much information and too little time to make sense of it.
AI has become popular for this specific reason. It can move faster than humans, take in more data, connect more dots through pattern recognition and respond at the blink of a computer’s eye.
Computer systems have not been inept against cyber attackers, but they tend to focus on high-volume, low-sophistication attacks. When a threat is much more advanced, closer to a careful chess game than an indiscriminate malware infection, it becomes much harder to spot. That is how some black hat hackers have stayed inside systems for months, even years, on end.
But unsophisticated attacks can also have an edge that’s hard to stop, as Mashego explains: “Ransomware isn’t a very sophisticated attack. But once it’s in a system, it can spread quickly, right under the noses of security measures. You want to catch it at the source.
“Such unsophisticated attacks can also be introduced in sophisticated ways, such as spear-phishing. That’s when criminals use tailored correspondence to get to a specific person, usually to get their security credentials. Then the criminals can infect the systems using those login details.”
AI trained to detect behavioural anomalies can catch such attempts. In a practice called risk-based (or adaptive) authentication, often used to strengthen multi-factor authentication, indicators such as user behaviour, geography and timing are combined to calculate whether something doesn’t add up around a set of credentials. It’s not that different from a bank noticing your credit card is suddenly being used in Burma, only more sophisticated in the behaviours it spots.
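As a rough illustration of that idea (the signals, weights and threshold below are invented for the example, not taken from any product), a risk-based login check might combine several indicators into a single score and demand extra verification when it gets too high:

```python
# Hypothetical risk-based login check: combines simple signals
# (unfamiliar country, odd hour, unseen device) into a risk score.
# Signal names, weights and threshold are illustrative assumptions.

def login_risk_score(event, profile):
    """Score a login event against a user's historical profile."""
    score = 0.0
    if event["country"] not in profile["usual_countries"]:
        score += 0.5  # geography: login from an unfamiliar country
    if event["hour"] not in profile["usual_hours"]:
        score += 0.3  # timing: login at an unusual time of day
    if event["device_id"] not in profile["known_devices"]:
        score += 0.4  # behaviour: previously unseen device
    return score

def requires_step_up(event, profile, threshold=0.7):
    """Ask for extra verification when the combined risk is high."""
    return login_risk_score(event, profile) >= threshold

profile = {
    "usual_countries": {"ZA"},
    "usual_hours": set(range(7, 19)),  # normally logs in 07:00-18:59
    "known_devices": {"laptop-01"},
}
suspicious = {"country": "MM", "hour": 3, "device_id": "unknown-99"}
familiar = {"country": "ZA", "hour": 9, "device_id": "laptop-01"}

print(requires_step_up(suspicious, profile))  # True: several anomalies stack up
print(requires_step_up(familiar, profile))    # False: matches the profile
```

Real systems learn these weights and profiles from data rather than hard-coding them, but the shape of the decision is the same: many weak signals combined into one judgement.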
Sophisticated pattern recognition can also detect behaviour such as ransomware or other malware trying to spread. With the right policies in place, the AI can lock down infections before they propagate.
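A toy version of that behavioural detection (the window size and threshold here are invented for illustration; real products use far richer models than a single counter) might flag any process that suddenly rewrites an unusual number of files, which is what ransomware does while encrypting a disk:

```python
# Toy ransomware-behaviour detector: flag a process that modifies
# more than a threshold number of distinct files within a short
# time window. Window size and threshold are illustrative assumptions.
from collections import defaultdict, deque

class FileActivityMonitor:
    def __init__(self, window_seconds=10, max_files=20):
        self.window = window_seconds
        self.max_files = max_files
        self.events = defaultdict(deque)  # process -> deque of (timestamp, path)

    def record(self, process, timestamp, path):
        """Record a file modification; return True if it looks like spreading."""
        q = self.events[process]
        q.append((timestamp, path))
        # Drop events that have fallen outside the sliding window.
        while q and q[0][0] < timestamp - self.window:
            q.popleft()
        distinct_files = {p for _, p in q}
        return len(distinct_files) > self.max_files

monitor = FileActivityMonitor(window_seconds=10, max_files=20)
alerts = []
# A process rewriting 30 files in under 3 seconds trips the detector.
for i in range(30):
    if monitor.record("proc-1337", timestamp=i * 0.1, path=f"/docs/file{i}.docx"):
        alerts.append(i)

print(bool(alerts))  # True: the burst exceeded 20 distinct files in 10 seconds
```

The policy side is the second half of the story: once the detector fires, an automated response can suspend the process or isolate the host before the infection spreads further.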
So, should we just hand over security to AI? No, that would be a bad idea, but not because of AI trust issues. AI is still a machine doing a specific task. Just like a lawnmower can’t do much on dirt, an AI becomes useless outside of its parameters.
Cyber attacks are also ultimately performed by humans who make a career out of subverting security systems, and AI is just another system. Despite AI’s dominance at chess and Go, motivated and skilled cybercriminals can beat it. For good security, you need people in the mix.
This takes us back to the alert fatigue problem and another telling statistic from that Cloud Security Alliance survey: 40.4% of security professionals said they lacked the actionable intelligence to decide on an alert. The most potent use of AI in security is perhaps to collaborate with human security professionals. As Mashego puts it: “An AI can act quickly and stop certain things in their tracks. But that’s not foolproof. Humans have the intuition and experience to look at many factors and come up with creative explanations. AI can’t do that; not yet, anyway. But it can create greater context around alerts and decide what should be shown to security staff, who can then decide on the appropriate actions.”
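One way to picture that division of labour (the enrichment fields and scoring rules below are hypothetical, sketched only to show the shape of the idea) is a triage step that scores alerts on their context and surfaces only the most pressing ones to a human analyst:

```python
# Hypothetical alert-triage step: enrich raw alerts with context,
# score them, and surface only the highest-priority ones to analysts.
# Field names and scoring rules are illustrative assumptions.

CRITICAL_ASSETS = {"db-prod-01", "dc-01"}  # assumed crown-jewel systems

def triage_score(alert):
    score = alert["base_severity"]        # e.g. 1 (low) to 5 (critical)
    if alert["asset"] in CRITICAL_ASSETS:
        score += 3                        # context: touches a critical system
    if alert["related_alerts"] >= 3:
        score += 2                        # context: part of a wider pattern
    if alert["known_false_positive"]:
        score -= 4                        # context: matches a benign signature
    return score

def for_human_review(alerts, top_n=2):
    """Return the top_n alerts an analyst should look at first."""
    return sorted(alerts, key=triage_score, reverse=True)[:top_n]

alerts = [
    {"id": "A1", "base_severity": 2, "asset": "laptop-42",
     "related_alerts": 0, "known_false_positive": True},
    {"id": "A2", "base_severity": 3, "asset": "db-prod-01",
     "related_alerts": 4, "known_false_positive": False},
    {"id": "A3", "base_severity": 4, "asset": "web-01",
     "related_alerts": 1, "known_false_positive": False},
]

print([a["id"] for a in for_human_review(alerts)])  # ['A2', 'A3']
```

The machine does the tireless part, correlating and ranking thousands of alerts, while the human applies the intuition and creative judgement the quote describes.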
This raises the question: where is AI in today’s security products? Even though AI solutions are starting to appear, they are still quite scarce. One reason, Mashego says, is the associated cost: “You don’t just buy AI and install it and there it goes. AI needs to be trained and maintained. It can be a very demanding asset.”
Training is made harder by the scarcity of security data. Cyber attack activity is clandestine by nature; even the good guys often keep serious cyber weapons secret, so access to such datasets is inherently limited. The massive resource demands mentioned above are also not to be underestimated. For these reasons, security AI is usually delivered through managed security services, which can pool resources and data.
But Mashego adds that we shouldn’t focus on AI alone: “AI has potentially great benefits for security, but that doesn’t mean the other security practices fall away. Train people about good passwords and security hygiene. Put proper BYOD policies in place. Take data management seriously. Invest in end-point security and security skills, and work out the threats to your business for a security strategy. AI is emerging in today’s security products, and those products are already very good. But they are meant to work with people and a good security culture.”
AI won’t save us from cybercriminals. Yet by lending humans a hand and catching lightning-fast attacks before they land, it creates an advantage that neither we nor the cybercriminals can dismiss.