
OpenAI is looking for a new employee to help address the growing dangers of AI, and the tech company is willing to spend more than half a million dollars to fill the role.
OpenAI is hiring a “head of preparedness” to reduce harms associated with the technology in areas such as user mental health and cybersecurity, CEO Sam Altman wrote in an X post on Saturday. The position will pay $555,000 per year, plus equity, according to the job listing.
“This will be a stressful job and you’ll jump into the deep end pretty much immediately,” Altman said.
OpenAI’s push to hire a safety executive comes as companies grow increasingly concerned about AI’s risks to their operations and reputations. A November analysis of annual Securities and Exchange Commission filings by financial data and analytics company AlphaSense found that in the first 11 months of the year, 418 companies worth at least $1 billion cited AI-related reputational harm among their risk factors. These reputation-threatening risks include AI datasets that contain biased information or jeopardize security. Reports of AI-related reputational harm increased 46% from 2024, according to the analysis.
“Models are improving quickly and are now capable of many great things, but they are also starting to present some real challenges,” Altman said in the social media post.
“If you want to help the world figure out how to enable cybersecurity defenders with cutting edge capabilities while ensuring attackers can’t use them for harm, ideally by making all systems more secure, and similarly for how we release biological capabilities and even gain confidence in the safety of running systems that can self-improve, please consider applying,” he added.
OpenAI’s previous head of preparedness, Aleksander Madry, was reassigned last year to a role focused on AI reasoning, with AI safety remaining part of the job.
OpenAI’s efforts to address AI dangers
Founded in 2015 as a nonprofit with the intention of using AI to improve and benefit humanity, OpenAI has, in the eyes of some of its former leaders, struggled to prioritize its commitment to safe technology development. The company’s former vice president of research, Dario Amodei, along with his sister Daniela Amodei and several other researchers, left OpenAI in 2020, in part over concerns that the company was prioritizing commercial success over safety. Amodei founded Anthropic the following year.
OpenAI has faced multiple wrongful death lawsuits this year alleging that ChatGPT encouraged users’ delusions and that conversations with the bot were linked to some users’ suicides. A New York Times investigation published in November found nearly 50 cases of ChatGPT users having mental health crises while in conversation with the bot.
OpenAI said in August that its safety features could “degrade” during long conversations between users and ChatGPT, but it has since made changes to improve how its models interact with users. The company created an eight-person council earlier this year to advise it on guardrails that support users’ wellbeing, and it has updated ChatGPT to respond better in sensitive conversations and to increase access to crisis hotlines. At the beginning of the month, the company announced grants to fund research on the intersection of AI and mental health.
The tech company has also acknowledged that it needs stronger safety measures, saying in a blog post this month that some of its upcoming models could present a “high” cybersecurity risk as AI rapidly advances. To mitigate those risks, the company is training models not to respond to requests that would compromise cybersecurity and is refining its monitoring systems.
“We have a strong foundation of measuring growing capabilities,” Altman wrote on Saturday. “But we are entering a world where we need more nuanced understanding and measurement of how those capabilities could be abused, and how we can limit those downsides both in our products and in the world, in a way that lets us all enjoy the tremendous benefits.”