OpenAI has introduced a new security feature called Advanced Account Security, designed to protect ChatGPT and Codex accounts belonging to individuals who may be at heightened risk of targeted attacks, such as journalists, activists, and politicians.
The optional setting, announced on Thursday, enforces stricter access controls intended to make account takeovers significantly harder. Similar programs, such as Google's Advanced Protection, have existed for years, but OpenAI's move comes as AI tools become central to sensitive work.
"People are turning to AI for deeply personal questions and increasingly high-stakes work," the company stated in a blog post. "Over time, a ChatGPT account can hold sensitive personal and professional context, and sit at the center of connected tools and workflows."
The launch is part of a broader cybersecurity strategy OpenAI outlined earlier this month, which aims to address growing concerns about account security as generative AI tools accumulate sensitive user data.