Studies: AI Coding Assistants Create Serious Vulnerabilities in Code

A growing body of research from leading universities and cybersecurity organizations, actively discussed in professional circles on July 5, 2025, is raising alarms about the serious risks of widespread reliance on AI coding assistants such as GitHub Copilot. Despite the clear and significant boost in developer productivity, these assistants can silently introduce dangerous vulnerabilities into software, creating what is known as "security debt."

As highlighted in recent reports from OWASP (the Open Web Application Security Project) and in studies conducted at institutions such as TU Delft (Delft University of Technology), the core problem is that the underlying models are trained on vast amounts of publicly available code from platforms like GitHub. That code often reflects outdated, insecure, or simply poor programming practices, which the models learn and reproduce in their suggestions. A model optimized for rapidly generating functional code cannot reliably assess the security and robustness of that code in real-world conditions. The most commonly reported issues are suggestions containing classic vulnerabilities (such as SQL injection or XSS), reliance on outdated libraries with known security holes, and subtle logic flaws that standard testing struggles to detect.

Experts agree that this calls for a fundamental shift in software development practice: all AI-generated code must undergo mandatory, thorough review by experienced developers, and automated static application security testing (SAST) tools must be integrated into CI/CD pipelines.
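To make the first of these failure modes concrete, the following sketch contrasts a string-interpolated SQL query of the kind often flagged in generated code with its parameterized equivalent. It is an illustrative Python example, not output from any particular assistant; the users table, its columns, and the function names are hypothetical.

    import sqlite3

    def find_user_vulnerable(conn: sqlite3.Connection, username: str):
        # Anti-pattern: the user-supplied value is interpolated directly into
        # the SQL text, so input such as "' OR '1'='1" rewrites the query's
        # meaning (classic SQL injection).
        query = f"SELECT id, email FROM users WHERE username = '{username}'"
        return conn.execute(query).fetchall()

    def find_user_safe(conn: sqlite3.Connection, username: str):
        # Parameterized form: the value is passed to the database driver as
        # data and never becomes part of the SQL statement itself.
        query = "SELECT id, email FROM users WHERE username = ?"
        return conn.execute(query, (username,)).fetchall()

The safe variant is what a reviewer or a SAST rule would expect to see; the vulnerable variant is the sort of functional-but-unsafe suggestion the studies describe.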

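As for the second recommendation, the sketch below shows one minimal way a CI/CD job could gate merges on a SAST scan. It assumes Bandit, a Python-focused SAST scanner, is installed, that its default behavior of exiting non-zero when findings are reported applies, and that "src" stands in for the project's source directory; any comparable scanner wired up the same way would serve.

    import subprocess
    import sys

    def run_sast(source_dir: str = "src") -> int:
        # `bandit -r <dir>` scans the tree recursively and, by default, exits
        # with a non-zero status when it reports findings; `-q` suppresses
        # informational output.
        result = subprocess.run(["bandit", "-r", source_dir, "-q"], check=False)
        return result.returncode

    if __name__ == "__main__":
        # Propagating the scanner's exit code lets the CI runner fail the
        # build and block the merge until the findings are addressed.
        sys.exit(run_sast())

The design point is not the specific tool but the gate: AI-generated code only reaches the main branch after both an automated scan and a human review have passed.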