OpenAI Looking To Hire ‘Head Of Preparedness’ To Tackle AI Dangers
Authored by Naveen Athrappully via The Epoch Times,
OpenAI is seeking to hire a candidate for the post of “Head of Preparedness” to tackle dangers posed by the proliferation of artificial intelligence (AI), CEO Sam Altman said in a Dec. 27 post on X.
OpenAI sparked widespread public interest in AI chatbots with the launch of ChatGPT in November 2022.
“This is a critical role at an important time; models are improving quickly and are now capable of many great things, but they are also starting to present some real challenges,” Altman wrote.
“The potential impact of models on mental health was something we saw a preview of in 2025; we are just now seeing models get so good at computer security they are beginning to find critical vulnerabilities.”
The post comes as OpenAI faces a host of lawsuits related to mental health. In November, seven complaints were filed against the company in California, alleging that its ChatGPT chatbot drew three people into delusional “rabbit holes” and encouraged four others to kill themselves.
According to the lawsuits, the four deaths occurred following the victims’ conversations with ChatGPT about suicide. In some cases, the chatbot romanticized suicide, advising the victims on ways to carry out the act.
As for computer security, multiple reports have flagged the risks posed by AI models.
For instance, a May report from McKinsey & Company warned that AI models capable of detecting fraud and securing networks can also infer identities, expose sensitive information, and reassemble stripped-out details.
The State of Cybersecurity Resilience 2025 report from Accenture warned that 90 percent of companies are not modernized enough to defend against AI-driven threats.
In its Aug. 27 Threat Intelligence Report, AI company Anthropic, which makes the Claude AI, said that AI was being weaponized to carry out sophisticated cybercrimes. In one operation, a hacker used Claude to infiltrate 17 organizations, with the AI used to penetrate networks, analyze stolen data, and create psychologically targeted ransom notes.
In his post, Altman said that while OpenAI has a “strong foundation” for measuring the growing capabilities of its AI models, the company is entering a world where a “more nuanced understanding and measurement” is required to assess how these capabilities could be abused and to limit downsides.
These are “hard” questions with very little precedent, he said. Many ideas that sound good have “edge cases,” which are extreme or unusual scenarios that test the boundaries of a system, such as an AI.
“This will be a stressful job, and you’ll jump into the deep end pretty much immediately,” Altman said.
The role, based in San Francisco, requires the person to build capability evaluations, establish threat models, and build mitigations, according to OpenAI’s job posting. The position offers $555,000 in annual compensation plus equity.
The head of preparedness will oversee mitigation design across major risk areas, such as cyber and biology, and ensure that safeguards are “technically sound, effective, and aligned with underlying threat models.”
As of September, ChatGPT had 700 million weekly active users globally.
AI Threat
In an interview clip released on X on Aug. 18 as part of the “Making God” documentary film, Geoffrey Hinton, a computer scientist known as the “godfather of AI,” said he was “fairly confident” AI would drive massive unemployment.
However, “the risk I’ve been warning about the most … is the risk that we’ll develop an AI that’s much smarter than us, and it will just take over,” Hinton said. “It won’t need us anymore.”
In a July 18 post from Charles Darwin University in Darwin, Australia, Maria Randazzo, an academic at the university’s law school, warned that AI puts human dignity at risk.
While AI is an engineering triumph, it does not exhibit cognitive behavior; these models have no clue what they are doing or why, she said.
“There’s no thought process as a human would understand it, just pattern recognition stripped of embodiment, memory, empathy, or wisdom,” Randazzo said.
“Globally, if we don’t anchor AI development to what makes us human—our capacity to choose, to feel, to reason with care, to empathy and compassion—we risk creating systems that devalue and flatten humanity into data points, rather than improve the human condition.
“Humankind must not be treated as a means to an end.”
Tyler Durden
Mon, 12/29/2025 – 15:40