Securing Software In The GenAI Coding Era



Chris Wysopal is Founder and Chief Security Evangelist at Veracode.

Generative artificial intelligence (GenAI) has very quickly established a foothold among developers as an essential tool in their workflow. Developers have shifted from the traditional code reuse model to generating new code snippets by prompting GenAI, a significant change in software development dynamics.

It’s easy to see why, as the merits are well-documented. GenAI-driven code generation comes with an unquestioned productivity boost. Research from Microsoft has shown that developers using GenAI were able to complete 26% more tasks on average, increase the number of code commits by 13.5% and increase builds by 38.4%. In a competitive marketplace where every company and developer is looking for an edge, these numbers reinforce the obvious: The GenAI co-generation era is here to stay.

The productivity boost from GenAI is clear, but it creates tension with the industry’s increasing push to secure coding and software security. Traditionally, developers wrote code over an extended period of time—weeks or even months—then tested it for vulnerabilities before production deployment. That approach changed with DevOps, which emphasized writing, testing and deploying smaller chunks of code in rapid cycles.

To address security concerns in this agile environment, the DevSecOps movement emerged, embedding security testing tools directly into the development pipeline. Now, AI-driven code generation has further accelerated this cycle. While code reuse decreases and code velocity increases in this new paradigm, vulnerability density remains consistent because the large language models (LLMs) that developers are using are often trained on open-source datasets rife with existing security flaws.
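
The kind of pipeline-embedded check that DevSecOps describes can be sketched in miniature. The patterns and function names below are hypothetical and purely illustrative; real pipelines run dedicated static analysis (SAST) tools rather than regexes.

```python
import re

# Toy DevSecOps gate (illustrative only): scan source text for two
# well-known risky patterns and report findings. In a real pipeline,
# a non-empty finding list would fail the build stage.
RISKY_PATTERNS = {
    "use of eval()": re.compile(r"\beval\("),
    "SQL built by string formatting": re.compile(r"execute\(\s*f?[\"'].*%s"),
}

def scan(source: str) -> list[str]:
    """Return the names of all risky patterns found in the source text."""
    return [name for name, pattern in RISKY_PATTERNS.items()
            if pattern.search(source)]

# Example: string-formatted SQL trips the gate.
findings = scan('cursor.execute("SELECT * FROM t WHERE id=%s" % uid)')
```

The point of wiring such checks into the pipeline, rather than running them periodically, is that every commit is gated automatically at the same velocity developers work.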

Because vulnerability density holds steady, faster code production leads to a proportional increase in vulnerabilities. Recent studies back up these concerns. New York University researchers found that 40% of code produced by Microsoft's Copilot AI contained known security vulnerabilities, while a similar study from Wuhan University found security weaknesses in 30% of Python and 24% of JavaScript Copilot-generated code snippets.
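
The proportionality claim can be put as back-of-the-envelope arithmetic. The density figure below is an assumed, hypothetical number chosen only to illustrate the scaling:

```python
# Assumed (hypothetical) density: 5 flaws per 1,000 lines of code.
FLAWS_PER_KLOC = 5

def expected_flaws(lines_of_code: int) -> float:
    """Flaw count implied by a fixed vulnerability density."""
    return lines_of_code * FLAWS_PER_KLOC / 1000

baseline_loc = 10_000
boosted_loc = baseline_loc * 126 // 100  # ~26% more output, per the figure above

# At constant density, a 26% jump in output means a 26% jump in flaws:
# 50 flaws at baseline become 63 flaws with the productivity boost.
```

Nothing about the density changed; only the volume did, and the flaw count scales with it.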

Despite this, developers often perceive AI-generated code to be more secure than it is. Stanford University found that developers using LLMs were more likely to write insecure code while being overly confident about its security. To fully leverage AI-assisted development, the first step is to approach code co-generation with open eyes.
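
A concrete example of the class of flaw such studies report is SQL built by string interpolation, which reads plausibly and passes casual review but is injectable. The snippet below (an illustrative sketch, not code from any cited study) contrasts it with a parameterized query:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
conn.execute("INSERT INTO users VALUES ('alice')")

def find_user_unsafe(name: str):
    # Vulnerable: attacker-controlled `name` becomes part of the SQL text.
    return conn.execute(
        f"SELECT name FROM users WHERE name = '{name}'"
    ).fetchall()

def find_user_safe(name: str):
    # Safe: the driver binds `name` as data, never as SQL.
    return conn.execute(
        "SELECT name FROM users WHERE name = ?", (name,)
    ).fetchall()

# A classic injection payload dumps every row from the unsafe variant
# while matching nothing in the parameterized one.
payload = "' OR '1'='1"
```

Both functions "work" on well-behaved input, which is precisely why overconfident review misses the difference.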

Organizations must acknowledge and actively counter the human biases that lead to overconfidence in AI-generated content. The data says that LLMs are fallible and likely to introduce just as many (if not more) security flaws as their human counterparts, but it’s up to the developers to heed that important warning. Developers must also understand that the quality of AI-generated code is only as good as the dataset on which it was trained.

If the training data includes vulnerable open-source code, those vulnerabilities will likely surface in the generated output. Using curated datasets known to include more secure code and incorporating security considerations into any GenAI prompts are important steps that will help developers ensure a foundation of secure AI co-generation. The increased velocity of vulnerability introduction has already gone beyond what human remediation can handle.
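
Incorporating security considerations into prompts can be as simple as a reusable preamble. The wording and function below are a hypothetical sketch, not a vetted standard:

```python
# Illustrative security preamble prepended to every code-generation prompt.
SECURITY_PREAMBLE = (
    "Use parameterized queries for all database access. "
    "Validate and sanitize all external input. "
    "Never hard-code credentials or secrets."
)

def build_prompt(task: str) -> str:
    """Combine standing security requirements with the developer's task."""
    return f"{SECURITY_PREAMBLE}\n\nTask: {task}"
```

Centralizing the preamble means the security guidance travels with every prompt instead of depending on each developer remembering to add it.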

Veracode's State of Software Security Report (SoSS) found that only 20% of applications achieve a monthly fix rate exceeding 10% of identified flaws. Persistent high-severity vulnerabilities, or "security debt" (i.e., security flaws left unfixed for more than one year), continue to accumulate as a result. As that debt adds up, it leads to more compliance risks, security alerts and quality issues. The only way to keep pace is to fight fire with fire in the form of AI-assisted remediation tools.

These solutions offer a way to address security issues without expanding development teams or diverting their focus from core objectives. As GenAI reshapes software development, security automation will become increasingly essential. From vulnerability detection to automated fixes, integrating AI into the security pipeline will ensure a balance between speed and security.

Developers will use GenAI as part of the software development process moving forward. The industry is simply too competitive to leave that bump in productivity on the table. But companies need to take a realistic approach, which means a complete reevaluation of security practices to address the risks inherent in AI-generated code.

By automating security processes and leveraging AI-powered remediation tools, developers can harness the full potential of GenAI while maintaining robust security standards. The era of GenAI demands faster code development and smarter, AI-driven security measures to ensure that the threat of vulnerability proliferation is kept in check.

Forbes Technology Council is an invitation-only community for world-class CIOs, CTOs and technology executives.