Ron Williams is the CEO and founder of Kindo.Ai. Artificial intelligence (AI) is no longer a futuristic tool for cybersecurity.
It’s gone mainstream. Threat actors have integrated AI into their operations with alarming success, using tools like WormGPT and GhostGPT, and even legitimate platforms like Google’s Gemini AI, to scale their attacks. Google’s Threat Intelligence Group recently detailed how state-sponsored actors have been abusing Gemini AI to enhance reconnaissance, scripting and privilege escalation.
These factors lead to a harsh reality: The asymmetry of power in AI between cybersecurity and bad actors is growing, and security teams are falling behind. If defenders don’t start using AI to automate workflows, mitigate threats and improve incident response, they risk being perpetually outpaced by modern attackers. The time to act is now, not after attackers have perfected the use of AI in their operations.
ChatGPT democratized consumer AI access, revolutionizing a whole range of industries. However, cybercriminals quickly recognized its potential for malicious use, and just a year after its launch, discussions on cybercrime networks about exploiting AI exploded, leading to an increase in AI-based attack strategies. Hundreds of thousands of ChatGPT accounts were being bought and sold on underground markets, and by mid-2023, WormGPT, a malicious chatbot designed to enhance business email compromise attacks and spear-phishing campaigns, sent shockwaves through the industry.
WormGPT was marketed as an AI tool specifically trained on malicious datasets to improve cybercrime operations, prompting headlines warning of AI-powered cybercrime on the rise. But WormGPT was just the beginning. Variants like FraudGPT, DarkBERT (not to be confused with DarkBART) and GhostGPT followed.
Fast-forward to today, and cybercriminals have found multiple ways to weaponize AI for their operations:

• Bypassing ethical constraints: Mainstream AI models like ChatGPT and Claude refuse to generate phishing emails. However, attackers have discovered ways to manipulate them into compliance using prompt engineering.

• Rebranding jailbroken chatbots as malicious tools: Some cybercriminals have wrapped jailbroken instances of legitimate AI models within custom interfaces, branding them as their own "evil" variants and selling access to others.

• Training AI models on malicious datasets: Rather than relying on trickery, some groups have trained their own AI models, fine-tuning them with cybercrime-related data to generate more accurate attack strategies. This is essentially how WormGPT and similar tools evolved within months.

Despite clear evidence of AI’s role in advancing cybercrime, many security teams remain hesitant to embrace AI defenses.
This reluctance typically stems from three key concerns: lack of trust in AI, implementation complexity and job security fears. Many cybersecurity professionals view AI as a “black box” technology and worry that it’s difficult to predict how AI will behave in a live security environment. If something goes wrong, security teams fear they won’t be able to remediate the issue because they don’t understand the model’s decision-making process.
However, while these concerns are valid, they can be addressed. Many AI-based workflows are built on well-documented APIs that offer transparency and allow customization. If security teams take the time to understand how AI-powered tools function in practical applications, much of their skepticism could be alleviated.
Another major roadblock is the perceived difficulty of integrating AI into legacy security infrastructure. A lot of organizations assume that AI adoption requires a fundamental overhaul of existing systems, which is daunting and expensive. However, security teams can start small by identifying repetitive, time-consuming tasks that AI can automate.
Take vulnerability management, for instance. Consultants spend a lot of time triaging vulnerabilities, mapping them to affected assets and prioritizing remediation efforts. AI can optimize this by automatically correlating vulnerabilities with exploitability data, assessing business impact and recommending remediation priorities.
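The triage step above can be sketched in a few lines. This is a minimal, hypothetical scoring model: the field names and the 40/40/20 weighting are illustrative assumptions, not any vendor's actual algorithm, but they show how severity, exploitability data and business impact can be blended into a single remediation priority.

```python
# Minimal sketch of automated vulnerability prioritization.
# Weights and field names are illustrative assumptions only.
from dataclasses import dataclass

@dataclass
class Vuln:
    cve_id: str
    cvss: float             # base severity, 0-10
    exploit_prob: float     # likelihood of exploitation, 0-1 (EPSS-style)
    asset_criticality: int  # business impact of the affected asset, 1-5

def priority_score(v: Vuln) -> float:
    # Blend severity, real-world exploitability and business impact
    # into one sortable number (higher = fix first).
    return (v.cvss / 10) * 0.4 + v.exploit_prob * 0.4 + (v.asset_criticality / 5) * 0.2

def triage(vulns: list[Vuln]) -> list[Vuln]:
    # Return vulnerabilities ordered by remediation priority.
    return sorted(vulns, key=priority_score, reverse=True)

backlog = [
    Vuln("CVE-2024-0001", cvss=9.8, exploit_prob=0.02, asset_criticality=2),
    Vuln("CVE-2024-0002", cvss=7.5, exploit_prob=0.90, asset_criticality=5),
]
for v in triage(backlog):
    print(v.cve_id, round(priority_score(v), 2))
```

Note how the lower-CVSS finding can outrank the critical one once exploitation likelihood and asset value are factored in; that reordering is exactly the judgment call analysts currently make by hand.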
A simple exercise to test AI’s effectiveness is to take a common, repetitive security task and design an AI-assisted workflow to replace it. Even partial automation can yield a large return on investment in saved time and improved accuracy.

Some security professionals fear that widespread AI adoption could automate them out of a job.
While discussions about AI replacing analysts entirely are common in the industry, AI should be framed as an augmentation tool rather than a replacement. Organizations that upskill their employees to work alongside AI will build stronger, more efficient security teams.
The bigger point here is that AI won’t eliminate security teams—it will empower them. By automating time-consuming and mundane tasks, security analysts can focus on higher-value work, like investigating more complex threats, threat hunting and incident response. Whether operating within a security operations center (SOC) or following a more agile approach, all security teams encounter repetitive tasks that can be automated.
AI-powered security solutions can assist with this by:

• Automating repetitive alert investigations, reducing analyst burnout and improving response times.

• Improving detection capabilities by identifying patterns in large datasets faster than human analysts.

Consider a typical security analyst’s workflow: They receive an alert, analyze it, extract indicators of compromise (IOCs), query threat intelligence databases, determine if it’s a genuine threat, document the findings and respond accordingly.
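One step in that workflow, extracting indicators of compromise from raw alert text, is straightforward to automate. The sketch below pulls IPv4 addresses and SHA-256 hashes out of an alert so they can be queried against threat intelligence feeds; the regexes are deliberately simplified illustrations, not production-grade parsers.

```python
# Minimal sketch of automated IOC extraction from alert text.
# Patterns are simplified for illustration.
import re

IPV4 = re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b")
SHA256 = re.compile(r"\b[a-fA-F0-9]{64}\b")

def extract_iocs(alert_text: str) -> dict[str, list[str]]:
    # Deduplicate while preserving order of first appearance.
    return {
        "ipv4": list(dict.fromkeys(IPV4.findall(alert_text))),
        "sha256": list(dict.fromkeys(SHA256.findall(alert_text))),
    }

alert = (
    "Outbound connection from 10.2.3.4 to 203.0.113.50; dropped file "
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855"
)
print(extract_iocs(alert))
```

The extracted indicators can then be fed to a threat intelligence lookup or an LLM-based enrichment step, leaving the analyst to review the conclusion rather than do the collection.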
AI automates much of this process, alleviating manual operational burdens. The benefits of AI and autonomous agents extend beyond the SOC; AI can also improve web application security, agile security in software development lifecycles, penetration testing and threat intelligence gathering. Security teams don’t need to overhaul their entire infrastructure overnight.
Incremental AI adoption can have immediate benefits. AI is not a passing trend—it’s the present and future of cybersecurity. Attackers are not waiting for defenders to catch up.
They are actively refining AI-augmented attack methods, making their operations faster, more scalable and more effective. Security teams must recognize that the only way to counter AI-based cyber threats is to fight fire with fire.
Stop Sleeping On AI: Why Security Teams Should Embrace The Technology
