The risks of bad actors in AI

7 December 2020

Artificial intelligence is the beginning of a revolution, but in one respect it is just like every other revolution: it can be abused. Whether or not you use AI yourself, you need to understand two things: AI is raising the severity of security threats, but it can also deliver stronger defences.

AI systems are fast and dynamic, meaning they learn from experience instead of relying on pre-programmed assumptions. AI-powered malware doesn't require the hacker to know anything about you in advance. Equally, an AI-powered defence system needn't depend on fixed definitions of whom to trust or how access is gained: it can learn to recognise suspicious activity.

AI will power more advanced intrusion attempts against systems that are themselves more powerful. End users need to understand that the sophistication of an AI-powered tool does not make it secure. For example, AI-driven facial recognition can potentially be spoofed by another AI, granting building access to criminals or framing innocent people with forged video footage.

A Forrester report, "Using AI for Evil", warns that "mainstream AI-powered hacking is just a matter of time", and Ciaran Martin of the National Cyber Security Centre has said a major attack on the UK is a matter of "when, not if".

New AI threats 

Using "bot manipulation", malware can use AI to adapt its appearance so that antivirus software no longer recognises it. It can also use AI to sample normal network activity and mimic it as camouflage, a technique built on generative adversarial networks (GANs). When the target is itself an AI system, a malicious actor can feed "poisoned" data to the engine in order to bypass filters or simply cause damage. AI can also learn to impersonate a legitimate person or company in order to launch a social engineering attack.
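To make the idea of data poisoning concrete, here is a minimal, hypothetical sketch: a naive spam filter that learns a score threshold from labelled examples, and an attacker who feeds it mislabelled samples to drag that threshold upwards. All the data, scores and function names here are invented for illustration; real poisoning attacks target far more complex models.

```python
# Minimal sketch of a data-poisoning attack on a naive spam filter.
# All data, scores and thresholds are invented for illustration.

def train_threshold(samples):
    """Learn a score threshold as the midpoint between the average
    score of clean messages and the average score of spam messages."""
    clean = [score for score, is_spam in samples if not is_spam]
    spam = [score for score, is_spam in samples if is_spam]
    return (sum(clean) / len(clean) + sum(spam) / len(spam)) / 2

def is_spam(score, threshold):
    return score > threshold

# Honest training data: (suspicion score, labelled-as-spam)
clean_data = [(0.1, False), (0.2, False), (0.8, True), (0.9, True)]
t = train_threshold(clean_data)           # threshold ends up at 0.5
print(is_spam(0.7, t))                    # True: the spam is caught

# The attacker "poisons" the feed with high-scoring messages
# labelled as clean, dragging the learned threshold upwards.
poisoned = clean_data + [(0.95, False)] * 10
t_poisoned = train_threshold(poisoned)
print(is_spam(0.7, t_poisoned))           # False: the same spam now slips through
```

The point is not the toy model itself but the mechanism: if an attacker can influence the training data, they can shift the model's decision boundary without ever touching its code.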

New AI defences 

The ability of AI to react quickly and adjust its responses as situations evolve also makes it ideal for defenders. An AI security system gives defenders the edge by providing early warnings and rapid incident response, so attack vectors can be closed down before any real harm can be done. Darktrace is one such tool. 

Behaviour analytics is another important defensive tool. When unusual activity is detected, the AI can close off access to key resources while a deeper examination is undertaken; Varonis, for example, works this way. Mastercard's director of cyber and intelligence solutions in South Africa says AI is saving $20 billion per annum by detecting fraud in this way. Embedded malware code can be detected using a similar method.
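The core idea behind behaviour analytics can be sketched in a few lines: establish a per-user baseline, then flag any day whose activity deviates sharply from it. This is a deliberately simplified z-score check with invented figures, not how any particular product implements it.

```python
# Toy behaviour-analytics check: flag a user whose activity deviates
# sharply from their own historical baseline. Figures are invented.
from statistics import mean, stdev

def is_anomalous(history, today, z_limit=3.0):
    """Return True if today's count is more than z_limit standard
    deviations away from the user's historical average."""
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return today != mu
    return abs(today - mu) / sigma > z_limit

# A user who normally downloads 10-14 files a day...
baseline = [10, 12, 11, 13, 12, 14, 11]
print(is_anomalous(baseline, 12))    # False: an ordinary day
print(is_anomalous(baseline, 250))   # True: possible exfiltration, lock access
```

A real system models many signals at once (logins, locations, access patterns), but the principle is the same: the alarm is defined relative to learned behaviour, not a fixed rule.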

Security Information and Event Management (SIEM) 

AI-powered solutions also help by improving activity logging: centralising it and providing tools to zoom in on significant trails. The logs collected by Azure and other cloud platforms provide a good basis for an effective SIEM system. These tools also enable you to create and evaluate your alert-response workflows.
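A SIEM alert rule over centralised logs can be illustrated with a small sketch: count failed logins per source IP within a sliding window and raise an alert when a threshold is exceeded. The event format, window and limit below are assumptions for illustration, not the schema of any particular SIEM product.

```python
# Sketch of a SIEM-style alert rule over centralised logs: alert when
# one source IP produces too many failed logins in a short window.
# The log format and thresholds are invented for illustration.
from collections import defaultdict

def failed_login_alerts(events, window_seconds=60, limit=5):
    """events: (timestamp, source_ip, outcome) tuples, assumed sorted
    by timestamp per IP. Returns the set of IPs exceeding the limit."""
    recent = defaultdict(list)   # ip -> timestamps of recent failures
    alerts = set()
    for ts, ip, outcome in events:
        if outcome != "FAIL":
            continue
        # Keep only failures still inside the sliding window.
        recent[ip] = [t for t in recent[ip] if ts - t < window_seconds]
        recent[ip].append(ts)
        if len(recent[ip]) > limit:
            alerts.add(ip)
    return alerts

events = [(t, "10.0.0.9", "FAIL") for t in range(0, 12, 2)]  # 6 failures in 10s
events += [(t, "10.0.0.5", "OK") for t in range(0, 12, 2)]   # normal activity
print(failed_login_alerts(events))   # {'10.0.0.9'}
```

Production SIEMs express rules like this declaratively over normalised event streams; the value of centralisation is that one rule can watch every system at once.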

Once in the cloud, you have access to specialist security products and expertise that few enterprises could deliver in-house. Specialist companies constantly monitor the global situation to stay aware of threats emerging in particular sectors or locations. An ideal SIEM integrates this intelligence with your standard assets, such as logs, asset inventories, AI pattern detection and automated incident responses, and makes it easy to demonstrate statutory compliance.

Telling friend from foe

Unfortunately, we can't wait for someone else to solve our cybercrime problems. The very agencies we should be able to trust to protect us, the NSA and GCHQ, created the EternalBlue exploit used in recent ransomware attacks such as WannaCry, NotPetya and BadRabbit. They have also left exploitable flaws in Windows and implanted backdoors into server and router firmware. Ironically, despite the official warnings against Huawei, the NSA has placed similar backdoor access into products from Cisco, Juniper and Fortinet.

The problem with creating these weapons is that everyone else soon uses them, and innocent companies are the victims. According to WikiLeaks' Vault 7 release of 7 March 2017, the CIA regularly listens in through Samsung televisions and iPhones and can take control of numerous IoT devices and car computers. Where they lead, others will soon follow.

Secure your supply chain 

For businesses the goal is clear: keep spyware and vulnerabilities out of your software and hardware. That means taking a keen interest in where your IT products come from and investing in good security. There are limits to what is practical, but an integrated, AI-powered security system is the best available protection.
