OpenAI has introduced a new optional security feature called Advanced Account Security for people who are more likely to face phishing or account-takeover attempts on ChatGPT and Codex.
This feature is meant for people at increased risk of digital attacks, such as journalists, elected officials, political dissidents, and researchers, as well as anyone who simply wants the highest level of protection.
The new mode adds strict access controls to make it much harder for attackers to take over accounts.
A tougher sign-in system with fewer recovery options
The main change is that users who enable Advanced Account Security can no longer sign in with a password. OpenAI explained that the setting requires passkeys or physical security keys and disables password-based login entirely.
WIRED noted that users need to add two physical security keys or passkeys to lower the risk of phishing attacks.
OpenAI also said the feature turns off both email and SMS recovery, replacing them with stronger options like backup passkeys, security keys, and recovery keys.
This stricter recovery model has a clear downside.
OpenAI said its support team will not be able to assist with account recovery for people using the new mode, since only the stronger recovery methods are allowed.
Support will not have access to any recovery routes, which lowers the risk that attackers could trick customer support into helping them regain access to an account.
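OpenAI has not published the mechanics of its recovery keys, but the general pattern behind such keys is well established: a high-entropy, human-typeable code generated once, shown to the user, and stored server-side only as a hash. A minimal Python sketch of that pattern, where every function name and format detail is an illustrative assumption rather than OpenAI's implementation:

```python
import hashlib
import hmac
import secrets

def generate_recovery_key(groups: int = 4, group_len: int = 5) -> str:
    """Generate a high-entropy recovery key like 'K3X9Q-M7T2W-...'."""
    # Alphabet omits ambiguous characters (0/O, 1/I/L) for easier typing.
    alphabet = "ABCDEFGHJKMNPQRSTVWXYZ23456789"
    return "-".join(
        "".join(secrets.choice(alphabet) for _ in range(group_len))
        for _ in range(groups)
    )

def hash_recovery_key(key: str) -> str:
    """The server stores only a hash, never the plaintext key."""
    return hashlib.sha256(key.replace("-", "").encode()).hexdigest()

def verify_recovery_key(submitted: str, stored_hash: str) -> bool:
    """Constant-time comparison to avoid timing side channels."""
    return hmac.compare_digest(hash_recovery_key(submitted), stored_hash)

key = generate_recovery_key()
stored = hash_recovery_key(key)
assert verify_recovery_key(key, stored)
assert not verify_recovery_key(key + "X", stored)
```

A production system would use a slow key-derivation function such as Argon2 rather than bare SHA-256; the sketch sticks to the standard library to stay self-contained.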
Shorter sessions, login alerts, and automatic training exclusion
OpenAI is also making changes to what happens after a user signs in. The company said Advanced Account Security shortens sign-in sessions “to reduce the window of exposure if a device or active session is compromised,” and gives users better insight into account activity.
The feature sends alerts whenever someone logs in to the protected account and lets users check a dashboard to review active ChatGPT and Codex sessions.
Another important change is how this setting affects model training. OpenAI said that with Advanced Account Security turned on, “conversations from those accounts will not be used to train our models.”
While regular users can choose to opt out of training, this exclusion happens automatically for accounts with the advanced security mode. This makes the feature both a stronger security measure and a privacy-focused option for people dealing with sensitive information.
Yubico partnership and cyber-program requirement
To make phishing-resistant logins easier to get, OpenAI has partnered with Yubico to offer special pricing on bundles of YubiKeys.
The company also said users can use any FIDO-compliant security key or software-based passkeys, a move intended to make hardware-backed account protection more accessible to users who want stronger security.
OpenAI will also require this feature for a specific group of users. Members of its Trusted Access for Cyber program must enable Advanced Account Security starting June 1, 2026, unless their organizations confirm they already use phishing-resistant authentication with enterprise single sign-on.
A sign that AI accounts are becoming higher-value targets
The launch shows how quickly mainstream AI accounts are being treated more like high-value digital identities than ordinary app logins. OpenAI said ChatGPT accounts can hold sensitive personal and professional context and increasingly sit at the center of connected tools and workflows.
The move is evidence that, as mainstream AI services spread, the accounts that control them increasingly need hardened protection.
In that sense, Advanced Account Security is not just a product update.
It is a signal that AI accounts are becoming important enough to need the kind of lock-down protections once reserved for the most sensitive online profiles.