Artificial Intelligence is everywhere right now. It is used to power chatbots, create digital art, write code, and help with even the most mundane tasks. But what happens when AI collides with the world of cybersecurity?
What has AI got to do with Cyber Security?
Cyber attacks are getting more advanced, and so are the defences. AI brings automation and intelligence to both sides of the fight. That means attackers can facilitate their attacks faster and smarter, whilst cyber defences can adapt and react quicker than before.
How cyber attackers use AI
Unfortunately, cyber criminals love playing around with new technology. Attackers are already taking advantage of AI by:
- Creating realistic phishing emails with perfect grammar and tone – This makes it harder to differentiate between a fake and a real email.
- Generating deepfakes, which are fake videos or audio that look and sound real – This can be used to impersonate trusted figures to trick people into sending money or sharing information. Deepfakes have commonly been seen, used for promoting fraudulent crypto projects or giveaways, with losses tied to AI deepfakes in crypto scams estimated at $4.6 billion.
- Automating attacks to enable thousands of victims to be targeted at once – This saves attackers so much time.
How cyber defenders use AI
It’s not all bad news. AI is also being used to fight back. Security systems such as enhanced anti-virus tools now use AI to:
- Detect unusual behaviour – AI uses machine learning to understand what normal looks like. For example, if an account suddenly logs in from a country or things happen that are out of the ordinary, AI can flag that behaviour and generate a security alert.
- Spot patterns in attacks – AI can sift through massive amounts of data to recognise the signs of a cyber attack early, through learning from historical data and spotting subtle warning signs a human analyst might miss.
- Automate responses – Some security tools used in large companies have an AI element that can respond to an attack in real time and neutralise the threat.
Prompt Engineering: Designing AI securely
With AI models becoming more powerful, how they’re designed matters. That’s where prompt engineering comes in.
What is prompt engineering?
It’s the art of creating precise instructions (prompts) that guide how AI systems behave. Think of it like giving very clear instructions to a friend. If you’re vague, they might do the wrong thing. With AI, vague prompts can sometimes be exploited by attackers. In cyber security, this means making sure AI doesn’t leak sensitive information or get tricked into doing something harmful.
A great example is the online Gandalf game, where players try to trick an AI “wizard” into giving up a secret password. It shows how attackers might manipulate an insecure chatbot, and why designing prompts securely is so important. You can test your AI manipulation skills here: https://gandalf.lakera.ai/baseline
The takeaway
AI is changing the cyber world fast. Attackers are using it to make scams more convincing, but defenders are using it to spot threats quicker than ever. The more we understand these tools, especially prompt engineering, the better chance we have of making AI part of the solution, not the problem.