About the job
About Our Team
Join the Intelligence and Investigations team at OpenAI, where we work to swiftly identify and address abuse and strategic risk, fostering a secure online environment. We focus on uncovering emerging abuse patterns, assessing risks, and collaborating with internal and external stakeholders to implement effective strategies that prevent misuse. Our mission supports OpenAI's broader vision of creating AI that benefits humanity.
We are developing a comprehensive “radar” for AI abuse and strategic risk—integrating internal signals, external insights, and real-world occurrences into actionable priorities for OpenAI’s safety and product development teams.
About the Position
As a Strategic Risk Analyst specializing in Behavioral & Psychological Risk, you will draw on deep expertise in human behavior to build a holistic view of risk across OpenAI’s products and platforms.
You will analyze how users interact with AI systems, particularly in high-stakes situations such as self-harm, manipulation, and coercion, and translate those findings into decision-ready risk assessments, mitigation strategies, and product insights.
This position merges clinical and behavioral expertise with intelligence analysis, transforming psychological indicators and trends into structured evaluations, early warnings, and actionable recommendations. A significant part of your work will be proactively identifying where analytical insight is most needed, anticipating emerging product, policy, and safety questions, and focusing effort on the analyses that shape critical decisions.
You will collaborate closely with investigators, engineers, policy experts, and trust & safety teams to enhance our understanding and mitigation of potential risks in human-AI interaction.
Key Responsibilities
Analyze AI system usage in complex or high-risk contexts (e.g., self-harm, suicidal ideation, substance-use escalation, and threats of violence), identifying patterns and trends that inform product, safety, and policy strategies.
Integrate behavioral, psychological, and intelligence signals into coherent narratives that elucidate user needs, system dynamics, and potential vulnerabilities.
Create decision-ready briefs and assessments to support product, safety, and policy decisions.
Develop and enhance behavioral risk frameworks, taxonomies, and indicators (e.g., severity models, escalation pathways, psychological harm classifications).

