By Anna Collard, SVP Content material Technique&Evangelist KnowBe4 Africa (www.KnowBe4.com).
Synthetic Intelligence is not only a device—it’s a gamechanger in our lives, our work in addition to in each cybersecurity and cybercrime. Whereas companies leverage AI to boost defences, cybercriminals are weaponising AI to make these assaults extra scalable and convincing.
In 2025, researchers forecast that AI brokers, or autonomous AI-driven programs able to performing advanced duties with minimal human enter, are revolutionising each cyberattacks and cybersecurity defences. Whereas AI-powered chatbots have been round for some time, AI brokers transcend easy assistants, functioning as self-learning digital operatives that plan, execute, and adapt in actual time. These developments don’t simply improve cybercriminal techniques—they might basically change the cybersecurity battlefield.
How Cybercriminals Are Weaponising AI: The New Menace Panorama
AI is reworking cybercrime, making assaults extra scalable, environment friendly, and accessible. The WEF Synthetic Intelligence and Cybersecurity Report (2025) ( highlights how AI has democratised cyber threats, enabling attackers to automate social engineering, increase phishing campaigns, and develop AI-driven malware. Equally, the Orange Cyberdefense Safety Navigator 2025 ( warns of AI-powered cyber extortion, deepfake fraud, and adversarial AI methods. And the 2025 State of Malware Report by Malwarebytes ( notes, whereas GenAI has enhanced cybercrime effectivity, it hasn’t but launched completely new assault strategies—attackers nonetheless depend on phishing, social engineering, and cyber extortion, now amplified by AI. Nonetheless, that is set to vary with the rise of AI brokers—autonomous AI programs able to planning, performing, and executing advanced duties—posing main implications for the way forward for cybercrime.
Here’s a listing of frequent (ab)use instances of AI by cybercriminals:
AI-Generated Phishing&Social Engineering
Generative AI and huge language fashions (LLMs) allow cybercriminals to craft extra plausible and complex phishing emails in a number of languages—with out the standard pink flags like poor grammar or spelling errors. AI-driven spear phishing now permits criminals to personalise scams at scale, mechanically adjusting messages based mostly on a goal’s on-line exercise. AI-powered Enterprise E mail Compromise (BEC) scams are rising, as attackers use AI-generated phishing emails despatched from compromised inside accounts to boost credibility. AI additionally automates the creation of faux phishing web sites, watering gap assaults and chatbot scams, that are bought as AI-powered crimeware as a service’ choices, additional reducing the barrier to entry for cybercrime.
Deepfake-Enhanced Fraud&Impersonation
Deepfake audio and video scams are getting used to impersonate enterprise executives, co-workers or relations to govern victims into transferring cash or revealing delicate information. Probably the most well-known 2024 incident was UK based mostly engineering agency Arup ( that misplaced $25 million after certainly one of their Hong Kong based mostly staff was tricked by deepfake executives in a video name. Attackers are additionally utilizing deepfake voice know-how to impersonate distressed kin or executives, demanding pressing monetary transactions.
Cognitive Assaults
On-line manipulation—as outlined by Susser et al. (2018) ( —is “at its core, hidden affect — the covert subversion of one other individual’s decision-making energy”. AI-driven cognitive assaults are quickly increasing the scope of on-line manipulation, leveraging digital platforms and state-sponsored actors more and more use generative AI to craft hyper-realistic faux content material, subtly shaping public notion whereas evading detection. These techniques are deployed to affect elections, unfold disinformation, and erode belief in democratic establishments. In contrast to typical cyberattacks, cognitive assaults don’t simply compromise programs—they manipulate minds, subtly steering behaviours and beliefs over time with out the goal’s consciousness. The mixing of AI into disinformation campaigns dramatically will increase the size and precision of those threats, making them tougher to detect and counter.
The Safety Dangers of LLM Adoption
Past misuse by menace actors, enterprise adoption of AI-chatbots and LLMs introduces their very own vital safety dangers—particularly when untested AI interfaces join the open web to essential backend programs or delicate information. Poorly built-in AI programs might be exploited by adversaries and allow new assault vectors, together with immediate injection, content material evasion, and denial-of-service assaults. Multimodal AI expands these dangers additional, permitting hidden malicious instructions in photos or audio to govern outputs.
Moreover, bias inside LLMs poses one other problem, as these fashions be taught from huge datasets which will include skewed, outdated, or dangerous biases. This may result in deceptive outputs, discriminatory decision-making, or safety misjudgments, probably exacerbating vulnerabilities moderately than mitigating them. As LLM adoption grows, rigorous safety testing, bias auditing, and threat evaluation are important to forestall exploitation and guarantee reliable, unbiased AI-driven decision-making.
When AI Goes Rogue: The Risks of Autonomous Brokers
With AI programs now able to self-replication, as demonstrated in a current examine ( the danger of uncontrolled AI propagation or rogue AI—AI programs that act in opposition to the pursuits of their creators, customers, or humanity at giant – is rising. Safety and AI researchers have raised considerations that these rogue programs can come up both by chance or maliciously, significantly when autonomous AI brokers are granted entry to information, APIs, and exterior integrations. The broader an AI’s attain via integrations and automation, the larger the potential menace of it going rogue, making sturdy oversight, safety measures, and moral AI governance important in mitigating these dangers.
The way forward for AI Brokers for Automation in Cybercrime
A extra disruptive shift in cybercrime can and can come from AI Brokers, which remodel AI from a passive assistant into an autonomous actor able to planning and executing advanced assaults. Google, Amazon, Meta, Microsoft, and Salesforce are already growing Agentic AI for enterprise use, however within the palms of cybercriminals, its implications are alarming. These AI brokers can be utilized to autonomously scan for vulnerabilities, exploit safety weaknesses, and execute cyberattacks at scale. They will additionally enable attackers to scrape large quantities of private information from social media platforms and mechanically compose and ship faux govt requests to staff or analyse divorce information throughout a number of international locations to determine people for AI-driven romance scams, orchestrated by an AI agent. These AI-driven fraud techniques don’t simply scale assaults—they make them extra personalised and tougher to detect. In contrast to present GenAI threats, Agentic AI has the potential to automate total cybercrime operations, considerably amplifying the danger.
How Defenders Can Use AI&AI Brokers
Organisations can not afford to stay passive within the face of AI-driven threats and safety professionals want to stay abreast of the newest growth. Listed here are among the alternatives in utilizing AI to defend in opposition to AI:
AI-Powered Menace Detection and Response:
Safety groups can deploy AI and AI-agents to observe networks in actual time, determine anomalies, and reply to threats sooner than human analysts can. AI-driven safety platforms can mechanically correlate huge quantities of information to detect delicate assault patterns which may in any other case go unnoticed, create dynamic menace modelling, real-time community behaviour evaluation, and deep anomaly detection. For instance, as outlined by researchers of Orange Cyber Protection (, AI-assisted menace detection is essential as attackers more and more use “Dwelling off the Land” (LOL) methods that mimic regular consumer behaviour, making it tougher for detection groups to separate actual threats from benign exercise. By analysing repetitive requests and strange visitors patterns, AI-driven programs can rapidly determine anomalies and set off real-time alerts, permitting for sooner defensive responses.
Nonetheless, regardless of the potential of AI-agents, human analysts nonetheless stay essential, as their instinct and adaptableness are important for recognising nuanced assault patterns and leverage actual incident and organisational insights to prioritise assets successfully.
Automated Phishing and Fraud Prevention:
AI-powered e-mail safety options can analyse linguistic patterns, and metadata to determine AI-generated phishing makes an attempt earlier than they attain staff, by analysing writing patterns and behavioural anomalies. AI also can flag uncommon sender behaviour and enhance detection of BEC assaults. Equally, detection algorithms can assist confirm the authenticity of communications and forestall impersonation scams. AI-powered biometric and audio evaluation instruments detect deepfake media by figuring out voice and video inconsistencies. *Nonetheless, real-time deepfake detection stays a problem, as know-how continues to evolve.
Person Schooling&AI-Powered Safety Consciousness Coaching:
AI-powered platforms (e.g., KnowBe4’s AIDA) ship personalised safety consciousness coaching, simulating AI-generated assaults to coach customers on evolving threats, serving to practice staff to recognise misleading AI-generated content material and strengthen their particular person susceptility elements and vulnerabilities.
Adversarial AI Countermeasures:
Simply as cybercriminals use AI to bypass safety, defenders can make use of adversarial AI methods, for instance deploying deception applied sciences—akin to AI-generated honeypots—to mislead and observe attackers, in addition to repeatedly coaching defensive AI fashions to recognise and counteract evolving assault patterns.
Utilizing AI to Combat AI-Pushed Misinformation and Scams:
AI-powered instruments can detect artificial textual content and deepfake misinformation, aiding fact-checking and supply validation. Fraud detection fashions can analyse information sources, monetary transactions, and AI-generated media to flag manipulation makes an attempt. Counter-attacks, like proven by analysis challenge Countercloud ( or O2 Telecoms AI agent “Daisy” ( present how AI based mostly bots and deepfake real-time voice chatbots can be utilized to counter disinformation campaigns in addition to scammers by partaking them in infinite conversations to waste their time and decreasing their skill to focus on actual victims.
In a future the place each attackers and defenders use AI, defenders want to concentrate on how adversarial AI operates and the way AI can be utilized to defend in opposition to their assaults. On this fast-paced setting, organisations want to protect in opposition to their biggest enemy: their very own complacency, whereas on the similar time contemplating AI-driven safety options thoughtfully and intentionally. Reasonably than dashing to undertake the subsequent shiny AI safety device, resolution makers ought to fastidiously consider AI-powered defences to make sure they match the sophistication of rising AI threats. Unexpectedly deploying AI with out strategic threat evaluation might introduce new vulnerabilities, making a aware, measured method important in securing the way forward for cybersecurity.
To remain forward on this AI-powered digital arms race, organisations ought to:
✅Monitor each the menace and AI panorama to remain abreast of newest developments on each side.
✅ Prepare staff ceaselessly on newest AI-driven threats, together with deepfakes and AI-generated phishing.✅ Deploy AI for proactive cyber protection, together with menace intelligence and incident response.✅ Repeatedly check your personal AI fashions in opposition to adversarial assaults to make sure resilience.
Distributed by APO Group on behalf of KnowBe4.