GPT-5-Codex Addendum: A GPT-5 Variant Optimized for Agentic Coding, with Layered Safety Measures
Source: OpenAI, https://openai.com/index/gpt-5-system-card-addendum-gpt-5-codex
TL;DR
- OpenAI describes GPT-5-Codex as a version of GPT-5 optimized for agentic coding in Codex.
- Trained with reinforcement learning on real-world coding tasks across varied environments to generate code that mirrors human style and PR preferences, follows instructions precisely, and iteratively runs tests until they pass.
- Available locally in the terminal or IDE via the Codex CLI and IDE extension, and in the cloud via Codex web, GitHub, and the ChatGPT mobile app.
- The addendum outlines safety measures at both the model and product levels, including specialized safety training against harmful tasks and prompt injections, plus agent sandboxing and configurable network access. Publication: Sep 17, 2025; Safety: Sep 16, 2025.
Context and background
This document is an addendum to the GPT-5 system card and introduces GPT-5-Codex, a version of GPT-5 optimized for agentic coding within Codex. Like its predecessor, codex-1, the model was trained with reinforcement learning on real-world coding tasks across varied environments to produce code that closely mirrors human style and PR preferences, adheres precisely to instructions, and iteratively runs tests until results pass.

The addendum also details the safety and product mitigations designed to address risks associated with coding agents and automated tooling, and it accompanies the public release timeline and safety milestones noted above. The model is accessible through multiple channels: locally in the terminal or IDE via the Codex CLI and Codex IDE extension, and in the cloud via the Codex web interface, GitHub, and the ChatGPT mobile app.

The release emphasizes a safety framework that spans both model-level and product-level mitigations to reduce misuse while preserving productive coding workflows, and the publication and safety dates reflect ongoing governance around GPT-5-Codex. OpenAI frames the release as part of a broader effort to align advanced coding models with user instructions and real-world development practices while implementing layered safety controls.
What’s new
GPT-5-Codex is introduced as a distinct variant of GPT-5 tailored for agentic coding within Codex. It was trained with reinforcement learning on real-world coding tasks in diverse environments to generate code that mirrors human style and PR preferences, while strictly following instructions and iteratively running tests until they pass. The model continues to build on codex-1, underscoring continuity in using RL for code-generation fidelity. Key new elements include:
- A focus on coding tasks within Codex environments, optimizing for code that adheres to instructions and passes tests through iterative validation (a conceptual sketch of this loop appears after this list).
- Availability across both local and cloud surfaces: terminal and IDE experiences via Codex CLI and IDE extension, plus cloud access via Codex web, GitHub, and the ChatGPT mobile app.
- A structured safety framework that combines model-level mitigations, such as specialized safety training against harmful tasks and prompt injections, with product-level mitigations like agent sandboxing and configurable network access.
- Publication and safety governance milestones set for September 2025 to support ongoing oversight of the Codex lineage.

These changes position GPT-5-Codex as a dedicated tool for developers seeking robust code-generation capabilities with explicit safety mechanisms across deployment contexts.
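The test-and-iterate behavior described above can be pictured as a simple harness loop: propose a change, run the suite, feed failures back, repeat. The sketch below is purely illustrative and is not OpenAI's actual Codex harness; `generate_patch` and `apply_patch` are hypothetical callables standing in for the model call and patch application, and the test suite is assumed to run via `pytest`.

```python
# Conceptual sketch of an "iterate until tests pass" loop (not OpenAI's implementation).
# generate_patch and apply_patch are hypothetical; the suite is assumed to run via `pytest -q`.
import subprocess


def run_tests(repo_dir: str) -> tuple[bool, str]:
    """Run the test suite and return (passed, combined output)."""
    proc = subprocess.run(
        ["pytest", "-q"], cwd=repo_dir, capture_output=True, text=True
    )
    return proc.returncode == 0, proc.stdout + proc.stderr


def iterate_until_green(repo_dir, task, generate_patch, apply_patch, max_rounds=5):
    """Ask for a patch, apply it, rerun tests, and feed failures back until green."""
    feedback = ""
    for _ in range(max_rounds):
        patch = generate_patch(task=task, test_feedback=feedback)  # model call (hypothetical)
        apply_patch(repo_dir, patch)                               # e.g. apply a diff (hypothetical)
        passed, output = run_tests(repo_dir)
        if passed:
            return True
        feedback = output  # failing output becomes context for the next attempt
    return False
```

The point the addendum emphasizes is the feedback loop itself: failing test output is routed back into the next generation step until the suite passes or an iteration budget is exhausted.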
Why it matters (impact for developers/enterprises)
For developers and enterprises, GPT-5-Codex promises an agentic coding companion that can produce code aligned with human style and clear instructions, while running tests and iterating until they pass. Its dual-channel availability, local (terminal/IDE) and cloud (Codex web, GitHub, ChatGPT mobile app), supports diverse workflows, from on-premises development to cloud-based collaboration.

The addendum signals a strong emphasis on safety: model-level mitigations designed to limit harmful task execution and prompt-injection risks, complemented by product-level controls such as agent sandboxing and configurable network access. These layers aim to reduce misuse and unintended behavior in real-world coding tasks, which matters for enterprise deployments where compliance, reliability, and auditability are central. In practice, organizations can expect clearer boundaries around network access and agent behavior, which helps with governance, risk management, and integration of Codex-powered tooling into CI/CD pipelines and development environments. The combination of RL-based training, platform-wide availability, and explicit safety measures aligns GPT-5-Codex with professional software development requirements while offering scalability across teams and projects.
Technical details or Implementation
GPT-5-Codex is a version of GPT-5 optimized for agentic coding tasks within Codex, trained with reinforcement learning on real-world coding tasks across multiple environments. Like codex-1, it is designed to generate code that closely mirrors human style and PR preferences, adhere precisely to instructions, and iteratively run tests until results pass. Access is provided both locally and in the cloud through multiple channels. Key technical elements include:
- Training approach: reinforcement learning on real-world coding tasks in varied environments to improve code quality, instruction adherence, and test-driven results.
- Model lineage: continues the Codex lineage and mirrors the predecessor’s emphasis on realistic coding behavior and test-driven iteration.
- Deployment surfaces: local terminal or IDE via Codex CLI and IDE extension; cloud access via Codex web, GitHub, and the ChatGPT mobile app.
- Safety architecture: a blend of model-level and product-level mitigations, including specialized safety training against harmful tasks and prompt injections, plus agent sandboxing and configurable network access. Availability and safety are documented in the addendum with explicit dates (Publication: Sep 17, 2025; Safety: Sep 16, 2025); see the linked addendum for full details.

Platform availability by surface:
| Platform | Access |
|---|---|
| Local Terminal / IDE (Codex CLI and IDE extension) | Available locally on supported environments |
| Cloud (Codex web, GitHub, ChatGPT mobile app) | Available on Codex cloud surfaces |
OpenAI emphasizes that the safety measures include specialized safety training against harmful tasks and prompt injections, complemented by product-level mitigations such as agent sandboxing and configurable network access. The goal is to reduce risks while enabling productive coding workflows.
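To make the product-level mitigations more concrete, the sketch below shows what an agent sandbox policy with configurable network access could look like in a custom harness. It is an assumption-laden illustration, not Codex's actual configuration surface or sandbox implementation; every name here (the policy class, its fields, the environment variables) is hypothetical.

```python
# Illustrative sketch of product-level controls: agent sandboxing with configurable
# network access. All names are hypothetical; a real sandbox enforces isolation at
# the OS/product level (containers, seccomp, network namespaces), not via a wrapper.
import os
import subprocess
from dataclasses import dataclass, field


@dataclass
class AgentSandboxPolicy:
    workspace_dir: str                                      # only directory the agent works in
    allow_network: bool = False                             # network access is opt-in
    allowed_hosts: list[str] = field(default_factory=list)  # e.g. an internal package mirror
    command_timeout_s: int = 120                            # bound long-running commands


def run_in_sandbox(policy: AgentSandboxPolicy, command: list[str]) -> subprocess.CompletedProcess:
    """Run an agent-proposed command under the policy.

    Here the environment variables only signal intent to the child process;
    real enforcement would live below this layer.
    """
    env = {
        "PATH": os.environ.get("PATH", ""),
        "SANDBOX_ALLOW_NETWORK": "1" if policy.allow_network else "0",
        "SANDBOX_ALLOWED_HOSTS": ",".join(policy.allowed_hosts),
    }
    return subprocess.run(
        command,
        cwd=policy.workspace_dir,
        env=env,
        timeout=policy.command_timeout_s,
        capture_output=True,
        text=True,
    )


# Example: run the test suite offline inside the workspace (paths are placeholders).
# result = run_in_sandbox(AgentSandboxPolicy(workspace_dir="/path/to/repo"), ["pytest", "-q"])
```

The design point mirrors the addendum's framing: the default is restrictive (no network, scoped workspace), and broader access is an explicit, configurable exception rather than the baseline.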
Key takeaways
- GPT-5-Codex represents a dedicated Codex-optimized GPT-5 variant focused on agentic coding tasks.
- Training relies on reinforcement learning across real-world coding tasks to produce human-aligned code and enforce instruction following.
- Access is multi-channel, spanning local development environments and cloud platforms.
- Safety is architected through layered mitigations at both model and product levels, including sandboxing and configurable network controls.
- The addendum provides governance markers with publication and safety dates to support ongoing oversight.
FAQ
- What is GPT-5-Codex?
  A version of GPT-5 optimized for agentic coding in Codex, trained with reinforcement learning on real-world coding tasks to generate code that mirrors human style and adheres to instructions. [OpenAI](https://openai.com/index/gpt-5-system-card-addendum-gpt-5-codex)
- How is safety addressed in GPT-5-Codex?
  Through model-level mitigations like specialized safety training against harmful tasks and prompt injections, plus product-level mitigations such as agent sandboxing and configurable network access. [OpenAI](https://openai.com/index/gpt-5-system-card-addendum-gpt-5-codex)
- Where can I access GPT-5-Codex?
  Locally via the Codex CLI and the IDE extension, and in the cloud via Codex web, GitHub, and the ChatGPT mobile app. [OpenAI](https://openai.com/index/gpt-5-system-card-addendum-gpt-5-codex)
- When were the publication and safety milestones?
  Publication on Sep 17, 2025, and the safety milestone on Sep 16, 2025. [OpenAI](https://openai.com/index/gpt-5-system-card-addendum-gpt-5-codex)
References
- OpenAI, "GPT-5 system card addendum: GPT-5-Codex," https://openai.com/index/gpt-5-system-card-addendum-gpt-5-codex (published Sep 17, 2025).