AI-Generated Code Poses Major Security Risks in Nearly Half of All Development Tasks, Veracode Research Reveals

30 Jul 2025
BURLINGTON, Mass.

Veracode, a global leader in application risk management, today unveiled its 2025 GenAI Code Security Report, revealing critical security flaws in AI-generated code. The study analyzed 80 curated coding tasks across more than 100 large language models (LLMs) and found that while AI produces functional code, it introduces security vulnerabilities in 45 percent of cases.

This press release features multimedia. View the full release here: https://www.businesswire.com/news/home/20250730694951/en/

Security and Syntax Pass Rates vs LLM Release from the Veracode 2025 GenAI Code Security Report

The research demonstrates a troubling pattern: when given a choice between a secure and an insecure way to write code, GenAI models chose the insecure option 45 percent of the time. Perhaps more concerning, the research also found that despite advances in LLMs’ ability to generate syntactically correct code, security performance has not kept pace and has remained essentially unchanged over time.

“The rise of vibe coding, where developers rely on AI to generate code, typically without explicitly defining security requirements, represents a fundamental shift in how software is built,” said Jens Wessling, Chief Technology Officer at Veracode. “The main concern with this trend is that developers do not need to specify security constraints to get the code they want, effectively leaving secure coding decisions to LLMs. Our research reveals GenAI models make the wrong choices nearly half the time, and it’s not improving.”

AI is also enabling attackers to identify and exploit security vulnerabilities more quickly and effectively. Tools powered by AI can scan systems at scale, identify weaknesses, and even generate exploit code with minimal human input. This lowers the barrier to entry for less-skilled attackers and increases the speed and sophistication of attacks, posing a significant threat to traditional security defenses. Not only are vulnerabilities increasing, but the ability to exploit them is becoming easier.

LLMs Introduce Dangerous Levels of Common Security Vulnerabilities

To evaluate the security properties of LLM-generated code, Veracode designed a set of 80 code completion tasks with known potential for security vulnerabilities according to the MITRE Common Weakness Enumeration (CWE) system, a standard classification of software weaknesses that can turn into vulnerabilities. The tasks prompted more than 100 LLMs to auto-complete a block of code in a secure or insecure manner, which the research team then analyzed using Veracode Static Analysis. In 45 percent of all test cases, LLMs introduced vulnerabilities classified within the OWASP (Open Web Application Security Project) Top 10—the most critical web application security risks.
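For illustration only, the following minimal sketch (not drawn from the report; class and method names are invented) shows the kind of completion choice the study describes, using Java and SQL injection, an OWASP Top 10 category: the same lookup can be completed insecurely, by concatenating user input into a query, or securely, with a parameterized statement.

    // Hypothetical illustration of a secure-vs-insecure completion choice.
    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.SQLException;
    import java.sql.Statement;

    public class UserLookup {

        // Insecure completion: user input is concatenated directly into SQL,
        // allowing injection such as name = "x' OR '1'='1".
        static ResultSet findUserInsecure(Connection conn, String name) throws SQLException {
            Statement stmt = conn.createStatement();
            return stmt.executeQuery("SELECT * FROM users WHERE name = '" + name + "'");
        }

        // Secure completion: a parameterized query keeps the input as data,
        // never as executable SQL syntax.
        static ResultSet findUserSecure(Connection conn, String name) throws SQLException {
            PreparedStatement stmt = conn.prepareStatement("SELECT * FROM users WHERE name = ?");
            stmt.setString(1, name);
            return stmt.executeQuery();
        }
    }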

Veracode found Java to be the riskiest language for AI code generation, with a security failure rate over 70 percent. Other major languages, like Python, C#, and JavaScript, still presented significant risk, with failure rates between 38 percent and 45 percent. The research also revealed LLMs failed to secure code against cross-site scripting (CWE-80) and log injection (CWE-117) in 86 percent and 88 percent of cases, respectively.
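To make one of those weakness classes concrete, the hypothetical Java sketch below (not code from the study; names are invented) shows what log injection (CWE-117) can look like and a common mitigation: untrusted input written to a log without sanitization can forge additional log entries, while stripping line breaks keeps each entry intact.

    // Hypothetical illustration of CWE-117 (log injection) and a mitigation.
    import java.util.logging.Logger;

    public class LoginAudit {
        private static final Logger LOG = Logger.getLogger(LoginAudit.class.getName());

        // Insecure: a username containing "\n" lets an attacker forge
        // additional, fake log entries.
        static void logFailureInsecure(String username) {
            LOG.warning("Login failed for user: " + username);
        }

        // Safer: strip CR/LF characters so the input cannot start a new log line.
        static void logFailureSecure(String username) {
            String sanitized = username.replaceAll("[\r\n]", "_");
            LOG.warning("Login failed for user: " + sanitized);
        }
    }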

“Despite the advances in AI-assisted development, it is clear security hasn’t kept pace,” Wessling said. “Our research shows models are getting better at coding accurately but are not improving at security. We also found larger models do not perform significantly better than smaller models, suggesting this is a systemic issue rather than an LLM scaling problem.”

Managing Application Risks in the AI Era

While GenAI development practices like vibe coding accelerate productivity, they also amplify risks. Veracode emphasizes that organizations need a comprehensive risk management program that prevents vulnerabilities before they reach production—by integrating code quality checks and automated fixes directly into the development workflow.

As organizations increasingly leverage AI-powered development, Veracode recommends taking the following proactive measures to ensure security:

  • Integrate AI-powered tools like Veracode Fix into developer workflows to remediate security risks in real time.
  • Leverage Static Analysis to detect flaws early and automatically, preventing vulnerable code from advancing through development pipelines.
  • Embed security in agentic workflows to automate policy compliance and ensure AI agents enforce secure coding standards.
  • Use Software Composition Analysis (SCA) to ensure AI-generated code does not introduce vulnerabilities from third-party dependencies and open-source components.
  • Adopt bespoke AI-driven remediation guidance to empower developers with precise fix instructions and train them to use the recommendations effectively.
  • Deploy a Package Firewall to automatically detect and block malicious packages, vulnerabilities, and policy violations.

“AI coding assistants and agentic workflows represent the future of software development, and they will continue to evolve at a rapid pace,” Wessling concluded. “The challenge facing every organization is ensuring security evolves alongside these new capabilities. Security cannot be an afterthought if we want to prevent the accumulation of massive security debt.”

The complete 2025 GenAI Code Security Report is available to download on the Veracode website.

About Veracode

Veracode is a global leader in Application Risk Management for the AI era. Powered by trillions of lines of code scans and a proprietary AI-assisted remediation engine, the Veracode platform is trusted by organizations worldwide to build and maintain secure software from code creation to cloud deployment. Thousands of the world’s leading development and security teams use Veracode every second of every day to get accurate, actionable visibility of exploitable risk, achieve real-time vulnerability remediation, and reduce their security debt at scale. Veracode is a multi-award-winning company offering capabilities to secure the entire software development life cycle, including Veracode Fix, Static Analysis, Dynamic Analysis, Software Composition Analysis, Container Security, Application Security Posture Management, Malicious Package Detection, and Penetration Testing.

Learn more at www.veracode.com, on the Veracode blog, and on LinkedIn and X.

Copyright © 2025 Veracode, Inc. All rights reserved. Veracode is a registered trademark of Veracode, Inc. in the United States and may be registered in certain other jurisdictions. All other product names, brands or logos belong to their respective holders. All other trademarks cited herein are property of their respective owners.
