OpenAI announced the expansion of its Trusted Access for Cyber (TAC) program alongside the introduction of GPT-5.4-Cyber, a model specifically designed to support advanced cybersecurity defense use cases.
The initiative aims to scale access to AI-powered cybersecurity tools for verified defenders, including individual researchers, enterprises, and organizations that protect critical digital infrastructure. The expanded program introduces tiered access levels based on identity verification and trust signals, enabling broader but still controlled access to more capable models.
GPT-5.4-Cyber is a specialized variant of GPT-5.4 that has been fine-tuned to strengthen defensive cybersecurity workflows while relaxing restrictions for verified, legitimate use cases. The model supports advanced capabilities, including vulnerability identification, secure coding assistance, and binary reverse engineering, allowing security professionals to analyze compiled software for threats without requiring access to its source code.
OpenAI’s approach is guided by three core principles: democratized access, iterative deployment, and ecosystem resilience. The company aims to expand access to defensive tools while maintaining safeguards through verification systems, automated trust signals, and controlled rollout strategies.
The expansion builds on OpenAI’s broader cybersecurity efforts, including its Cybersecurity Grant Program, Preparedness Framework, and Codex Security platform, which has contributed to identifying and fixing thousands of vulnerabilities across software ecosystems.
OpenAI noted that as AI capabilities advance, both defensive and adversarial use cases are increasing, requiring continuous evolution of safeguards and deployment strategies. The company plans to scale defensive capabilities in parallel with model advancements, ensuring that cybersecurity tools remain accessible to legitimate users while mitigating risks of misuse.
The TAC program is now open to individual users through identity verification and to enterprises through direct engagement with OpenAI, with higher tiers providing access to more permissive and capable models under stricter controls.
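The tiering described above can be pictured as a simple gating function from trust signals to an access level. The sketch below is purely illustrative: the tier names, signal fields, and decision logic are assumptions for the sake of example, not published details of the TAC program.

```python
from dataclasses import dataclass

# Hypothetical illustration only: neither the tier names nor the trust
# signals below are published details of OpenAI's TAC program.

@dataclass
class TrustSignals:
    identity_verified: bool      # individual identity verification
    enterprise_agreement: bool   # direct engagement with the provider
    abuse_flags: int             # count of automated misuse indicators

def access_tier(signals: TrustSignals) -> str:
    """Map trust signals to an access tier (illustrative logic)."""
    if signals.abuse_flags > 0:
        # Automated trust signals can demote access regardless of identity.
        return "restricted"
    if signals.enterprise_agreement:
        # Most permissive models, under the strictest controls.
        return "enterprise"
    if signals.identity_verified:
        return "verified-individual"
    return "baseline"

print(access_tier(TrustSignals(identity_verified=True,
                               enterprise_agreement=False,
                               abuse_flags=0)))  # verified-individual
```

The key design idea this captures is that capability and scrutiny rise together: higher tiers unlock more permissive models but are contingent on stronger verification and clean trust signals.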