
OpenAI Seeks Senior Safety Lead Amid Rising AI Risk Concerns

Editorial


OpenAI Group PBC is actively recruiting a senior safety professional for the newly created position of head of preparedness. This role is essential for anticipating potential harms associated with the company’s artificial intelligence models. As scrutiny around AI safety intensifies, the head of preparedness will guide how risks are mitigated as OpenAI’s capabilities advance.

The job listing on OpenAI’s careers page indicates that the successful candidate will lead the technical strategy and execution of OpenAI’s Preparedness Framework. This framework is designed to monitor and assess frontier capabilities that could introduce new or severe risks. The risks under consideration encompass misuse scenarios, cybersecurity threats, and other negative impacts that may arise as AI models become more powerful.

As part of the Safety Systems organization, the head of preparedness will collaborate with research, policy, and product teams. Key responsibilities include developing threat models, conducting capability evaluations, establishing risk thresholds, and determining when additional safeguards or deployment restrictions are necessary. The insights gained from this role will directly influence decisions regarding the release of new models and features.

The position offers a competitive base salary of $550,000 plus equity. OpenAI emphasizes that this role requires extensive experience in large-scale technical systems, security, risk analysis, or safety governance. Candidates must also possess the ability to translate research findings into operational controls.

The recruitment comes in the wake of significant changes in OpenAI’s safety leadership. The former head of preparedness, Aleksander Madry, was reassigned in mid-2024, with senior executives Joaquin Quiñonero Candela and Lilian Weng taking over preparedness responsibilities. Following Weng’s departure and Quiñonero Candela’s move to lead recruiting, the role has been without a dedicated permanent head.

OpenAI’s CEO, Sam Altman, has previously highlighted preparedness as a vital internal function as AI model capabilities expand. Altman described the head of preparedness as “a critical role at an important time,” underscoring the challenges that arise with advancing model capabilities.

The urgency surrounding this hiring effort is amplified by increased attention on how advanced AI systems may be misused or lead to unintended harm. Industry discussions frequently cite concerns regarding AI-assisted cyberattacks, the discovery and exploitation of software vulnerabilities, and potential adverse effects on users’ mental health at scale.

In October 2025, OpenAI disclosed that more than one million people each week showed signs of severe mental distress in their conversations with ChatGPT. The company did not claim that ChatGPT caused this distress, but the figure indicated that users were discussing serious mental health issues with the AI.

As OpenAI moves forward with the hiring process, the role could shape how the company approaches AI safety and preparedness, underscoring the importance of proactive risk management as artificial intelligence continues to evolve.

Our Editorial team doesn’t just report the news—we live it. Backed by years of frontline experience, we hunt down the facts, verify them to the letter, and deliver the stories that shape our world. Fueled by integrity and a keen eye for nuance, we tackle politics, culture, and technology with incisive analysis. When the headlines change by the minute, you can count on us to cut through the noise and serve you clarity on a silver platter.

