OpenAI has introduced a new cybersecurity-focused AI system called Codex Security. The tool is designed to help organizations automatically detect, analyze, and fix vulnerabilities in software codebases using advanced AI reasoning. Codex Security builds on the company's earlier Codex, an AI system built to understand programming languages and assist developers in writing and analyzing code. While Codex initially focused on generating code and helping with development tasks, the new security-focused agent is designed specifically to identify weaknesses in software systems.
Codex Security operates in multiple stages. The first stage involves mapping the architecture of the software system. The AI reads through source code, configuration files, dependency lists, and documentation to understand how the application functions. By building this contextual understanding, the system can generate a threat model of the application, essentially predicting where vulnerabilities are most likely to appear based on how the system is structured.
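To make the first stage concrete, here is a minimal sketch of how an automated reviewer might inventory a repository's artifacts before analysis. The file categories, names, and extensions below are illustrative assumptions, not details of Codex Security's actual implementation.

```python
# Hypothetical sketch: gather the repository artifacts an automated
# reviewer might read to build a contextual map of an application.
# Categories and file names are illustrative, not OpenAI's implementation.
from pathlib import Path

CATEGORIES = {
    "dependencies": {"requirements.txt", "package.json", "go.mod", "Cargo.toml"},
    "configuration": {".env.example", "settings.yaml", "config.json"},
    "documentation": {"README.md", "ARCHITECTURE.md"},
}

def map_repository(root: str) -> dict:
    """Group files by role so later analysis stages have context for each."""
    inventory = {name: [] for name in CATEGORIES}
    inventory["source"] = []
    for path in Path(root).rglob("*"):
        if not path.is_file():
            continue
        for category, names in CATEGORIES.items():
            if path.name in names:
                inventory[category].append(str(path))
                break
        else:
            # Anything else with a known code extension is source code.
            if path.suffix in {".py", ".js", ".go", ".rs"}:
                inventory["source"].append(str(path))
    return inventory
```

An inventory like this, fed to a reasoning model together with the file contents, is one plausible way to ground a threat model in how the system is actually structured.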
After building this model, the AI agent scans the code for possible weaknesses. These can include authentication errors, insecure data processing, improper access control, injection vulnerabilities, or logical flaws that attackers could exploit. Traditional security scanners often generate large numbers of warnings that developers must manually review. Codex Security attempts to reduce this problem by testing its findings in a controlled environment, verifying whether a vulnerability can actually be exploited before reporting it.
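As an illustration of one of the vulnerability classes mentioned above, the snippet below shows a classic SQL injection flaw alongside the standard parameterized-query fix. This is a generic textbook pattern, not output from Codex Security; the function names are invented for the example.

```python
# Illustrative example of an injection flaw a security scanner flags,
# and the standard fix. Generic pattern, not Codex Security output.
import sqlite3

def find_user_unsafe(conn, username):
    # VULNERABLE: attacker-controlled input is spliced into the SQL string,
    # so input like "x' OR '1'='1" changes the query's meaning.
    return conn.execute(
        f"SELECT id FROM users WHERE name = '{username}'"
    ).fetchall()

def find_user_safe(conn, username):
    # FIXED: a parameterized query treats the input as data, never as SQL.
    return conn.execute(
        "SELECT id FROM users WHERE name = ?", (username,)
    ).fetchall()
```

The exploit-verification step the article describes can be thought of in the same terms: rather than merely flagging the string concatenation, a verifying scanner would run a payload such as `x' OR '1'='1` against the unsafe query and confirm it returns rows it should not, before reporting the finding.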
Another feature of the system is its ability to suggest fixes automatically. Instead of simply identifying problems, Codex Security can propose patches that fit the surrounding architecture of the codebase. The AI firm claims that the latest tool was tested on large datasets during its early evaluation period. In one large analysis cycle, the system reportedly scanned more than one million software commits across numerous repositories. Within that dataset, the AI detected hundreds of critical vulnerabilities and more than ten thousand high-severity security issues. According to the testing results, critical security flaws appeared in fewer than 0.1% of commits.
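The reported figures are internally consistent, which a quick back-of-the-envelope check confirms. The counts below are the article's round numbers ("more than one million commits", "hundreds" of critical findings), not exact data.

```python
# Sanity check of the article's reported figures: with roughly one million
# commits scanned and "hundreds" of critical flaws found, the critical-flaw
# rate is indeed below 0.1%. Counts are the article's round numbers.
commits_scanned = 1_000_000
critical_findings = 999          # upper bound for "hundreds"
high_severity_findings = 10_000  # "more than ten thousand" (lower bound)

critical_rate = critical_findings / commits_scanned
print(f"critical-flaw rate: {critical_rate:.4%}")  # stays under 0.1%
assert critical_rate < 0.001
```

Since 0.1% of one million is exactly 1,000 commits, any count of critical findings in the hundreds necessarily falls under that threshold.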
The Sam Altman-led firm has made the new tool available to selected enterprise customers and research partners during the preview phase. The company is also offering temporary free access. Developers can connect their repositories to the platform and allow the AI agent to perform security reviews in the background. OpenAI is also extending the program to the open-source ecosystem. Selected open-source maintainers will receive access to Codex Security along with developer tools and AI assistance for code reviews.
The Tech Portal is published by Blue Box Media Private Limited.