You may have heard of artificial intelligence (AI) driven malware. It’s painted as the stuff of nightmares: smart malicious algorithms that rule the roost in ongoing cyber wars, the spearhead of a dystopian digital world. Malware that can mimic a CEO’s emails down to the commas, dashes and verbs, that can be released by facial recognition, and that can infect millions of systems when a specific ‘switch’ is triggered.

But how real are these threats and should we be battening down the digital hatches and treating our computers like radioactive devices?
  • Today there is a lot of talk about AI-based malware because the threat is theoretically real: the potential for something really nasty and near undetectable does exist. Some researchers have gone as far as actually developing AI malware to prove the point.
What exactly is AI-driven malware?
  • AI-driven malware is conventional malware altered via AI to make it more effective. It can use its intelligence to infect computers faster or make attacks more efficient. Conventional malware is, in a sense, dumb: a set of pre-created, fixed code that tries to sneak past defences. In contrast, AI-driven malware can think for itself, to an extent.
How does it ‘think’?
  • AI uses deep learning: an algorithm fed with sample data creates its own rules. If, for example, it is fed enough pictures of a person, it will learn to detect that person’s face in new photos. Applied to malware, AI can perform tasks that are impossible with traditional, fixed software structures, which makes it very difficult for contemporary endpoint security, built around those traditional rules, to identify the resulting malware.
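To make this concrete, here is a deliberately simplified sketch in Python (using scikit-learn). The file traits, values and labels are invented for illustration only and bear no relation to any real detection engine:

    # A toy illustration of the principle above: instead of hand-written rules,
    # a small neural network infers its own decision rules from labelled samples.
    # All traits and values here are hypothetical.
    from sklearn.neural_network import MLPClassifier

    # Each sample is a vector of made-up traits extracted from a file:
    # [entropy, calls_network_api, writes_to_startup, is_packed]
    samples = [
        [7.8, 1, 1, 1],   # known-malicious examples
        [7.5, 1, 0, 1],
        [3.1, 0, 0, 0],   # known-benign examples
        [4.2, 1, 0, 0],
    ]
    labels = [1, 1, 0, 0]  # 1 = malicious, 0 = benign

    model = MLPClassifier(hidden_layer_sizes=(8,), max_iter=5000, random_state=0)
    model.fit(samples, labels)            # the model derives its own rules

    new_file = [[7.6, 1, 1, 0]]           # traits of a previously unseen file
    print(model.predict(new_file))        # e.g. [1] -> flagged as malicious
    print(model.predict_proba(new_file))  # confidence behind that decision

The same principle, scaled up with vastly more data and far richer features, is what allows a learned model to spot patterns that a fixed signature would miss.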
Are criminals using AI-driven malware?

There is little evidence to support the belief that criminal cyber gangs are already using AI to help launch and spread attacks. However, that doesn’t mean it doesn’t exist, and even short of purpose-built AI malware, AI certainly has the potential to help attackers drive through today’s protective measures. For example:
  • It could solve CAPTCHAs to sneak past this type of authentication method.
  • AI could be used to scan social media to find the right people to target with spear phishing campaigns.
  • It could be used to create more convincing spam, customised to the target victim.
In short, AI in the hands of cybercriminals could pave the way for malware that's harder to detect, more targeted threats and more convincing spam and phishing attacks. Realistically, however, we are unlikely to see threats that use AI emerge for several years yet, and even then it will likely be used sparingly. Why?

Cyber criminals today rely on tried-and-tested malware such as ransomware which, despite being around for years, is remarkably effective.
  • The UK’s National Cyber Security Centre recently highlighted the case of an organisation that paid just under £6.5 million to recover its files following a ransomware attack. The organisation made no attempt to work out how it had been attacked or to strengthen its network. Just under two weeks later it was hit again by the same attacker exploiting the same vulnerability, and had to pay the ransom again.
  • Because traditional malware is still so effective there is little incentive for cyber criminals to turn to AI-driven malware, or for malware creators to try their hand at developing it. In fact, the success of traditional malware has led to a resurgence in ransomware over the past 12 months.
  • Today AI-driven malware isn’t a threat to the general public. Rather, where it is used at all, it is developed by nation states and aimed at a specific ‘high value’ target for a specific reason.
Reassuring defences

BullGuard’s mission has always been to provide customers with protection that anticipates and defends against the latest threats. To this end, dynamic machine learning has been incorporated into BullGuard 2021 products:
  • BullGuard Dynamic Machine Learning continually draws on large, constantly updated pools of data to decide whether or not code is harmful, based on a series of traits. Some code traits, for instance, carry more weight than others (see the simple illustration below).
This dynamic machine learning approach is a current application of AI, based on the principle that the more data machines have access to, the faster and better they can learn. When it comes to detecting new attack vectors, new strains of malicious code and malware that is only just surfacing, it’s the perfect foil.
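As a purely conceptual illustration of traits carrying different weights, here is a hypothetical sketch in Python. It is not BullGuard’s engine, and in a real machine learning system the weights would be learned from those large data pools rather than set by hand:

    # Hypothetical trait-weighted scoring: each observed trait contributes to a
    # risk score, some traits more heavily than others, and the total decides
    # the verdict. All trait names, weights and the threshold are invented.
    TRAIT_WEIGHTS = {
        "modifies_registry_run_key":  0.35,
        "injects_into_other_process": 0.45,
        "contacts_known_bad_domain":  0.60,
        "signed_by_trusted_vendor":  -0.50,  # trust signals lower the score
    }

    THRESHOLD = 0.7  # at or above this, the file is treated as harmful

    def classify(observed_traits):
        score = sum(TRAIT_WEIGHTS.get(trait, 0.0) for trait in observed_traits)
        return ("harmful" if score >= THRESHOLD else "clean"), score

    verdict, score = classify(["injects_into_other_process",
                               "contacts_known_bad_domain"])
    print(verdict, round(score, 2))  # harmful 1.05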

This is why BullGuard is using this form of AI to protect customers against new threats, and you can be sure advanced detection technologies will be brought to bear on AI malware as soon as the need arises.