OpenAI is making a push into cybersecurity, building a model specifically designed for the job. The company announced the expansion of its Trusted Access for Cyber (TAC) program alongside the launch of GPT-5.4-Cyber, a fine-tuned model built from the ground up for defensive security workflows.
Table of Contents
- A Cybersecurity Strategy 3 Years in the Making
- What GPT-5.4-Cyber Actually Does
- How to Get Access to GPT-5.4-Cyber
- A Model Built for What's Coming
A Cybersecurity Strategy 3 Years in the Making
OpenAI has been quietly laying the groundwork for this announcement since 2023, when it launched its Cybersecurity Grant Program and began formally evaluating its models' offensive and defensive capabilities. That same year, the company introduced its Preparedness Framework to guide how it assesses and manages risk as model capabilities grow.
What followed was a deliberate, iterative buildup.
Cyber-specific safety training began with GPT-5.2, expanded through GPT-5.3-Codex and continued into GPT-5.4 — which OpenAI classified as "high" cyber capability under its own framework. Earlier this year, the company launched Codex Security in research preview, a tool that automatically monitors codebases, flags vulnerabilities and proposes fixes. Since its launch, Codex Security has contributed to more than 3,000 critical and high-severity vulnerability fixes across the ecosystem.
The TAC program itself launched in February with a narrower focus: automated identity verification for individuals and limited partnerships with select organizations. Tuesday's expansion broadens that, opening access to thousands of verified individual defenders and hundreds of enterprise security teams, with a new tiered structure that scales permissions based on identity verification and accountability.
What GPT-5.4-Cyber Actually Does
The headline of this expansion is GPT-5.4-Cyber, a variant of GPT-5.4 with two key differences from standard models:
- Lower refusal thresholds for legitimate security work
- New capabilities tailored to advanced defensive workflows
The most notable addition is binary reverse engineering, the ability to analyze compiled software for malware and security weaknesses without access to the original source code. For security professionals, that capability has historically required specialized tooling and significant manual effort.
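To give a flavor of the manual effort involved, here is a minimal, hypothetical sketch (plain Python, no specialized tooling, fabricated sample bytes) of one of the simplest reverse-engineering steps: pulling printable strings out of a compiled binary, much like the classic Unix `strings` utility. Real tooling, and the automated analysis OpenAI describes, goes far deeper than this.

```python
import re

def extract_strings(data: bytes, min_len: int = 4) -> list[str]:
    """Find runs of printable ASCII characters in a binary blob,
    similar to the Unix `strings` utility."""
    pattern = rb"[\x20-\x7e]{%d,}" % min_len  # runs of printable ASCII
    return [m.decode("ascii") for m in re.findall(pattern, data)]

# Fabricated example blob: an ELF-like header, padding, and two
# embedded strings an analyst might flag as suspicious.
blob = (
    b"\x7fELF\x02\x01\x01\x00" + b"\x00" * 8
    + b"/tmp/payload.sh\x00"
    + b"\x90\x90"
    + b"curl -s http://example.test\x00"
)
print(extract_strings(blob))
# ['/tmp/payload.sh', 'curl -s http://example.test']
```

Spotting an embedded shell path or a callout URL like this is only the first pass; the hard work — disassembling and reasoning about compiled logic — is what has traditionally demanded specialized tools and expertise.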
Because the model is more permissive by design, OpenAI is rolling it out carefully. Initial access is limited to vetted security vendors, researchers and organizations. The company also noted that access to more permissive models may come with trade-offs, particularly around Zero-Data Retention (ZDR) scenarios and third-party platform deployments where OpenAI has less direct visibility into how the model is used.
How to Get Access to GPT-5.4-Cyber
TAC access is designed to scale, but it's not open to everyone by default.
- Individual users can verify their identity at chatgpt.com/cyber
- Enterprise teams can request access through an OpenAI representative
- Those already enrolled in TAC will need to complete an additional authentication step to qualify for the higher access tier
OpenAI is backing the broader initiative with a $10 million Cybersecurity Grant Program aimed at supporting the defender community — an acknowledgment that the threat landscape isn't waiting for large organizations to catch up.
A Model Built for What's Coming
The cybersecurity industry doesn't need general-purpose AI with the safety rails loosened — it needs a model built specifically for the job. That's what Anthropic aims to achieve with Claude Mythos, and it's what OpenAI is delivering here. As OpenAI put it, the goal is to scale cyber defenses in lockstep with increasing model capabilities.
For security teams, the practical upshot is access to tools that can keep pace with AI-enabled threats, something OpenAI argues humans simply can't do alone.