About the job
About Our Team
The Intelligence and Investigations team is dedicated to swiftly identifying and mitigating abuse and strategic risks, ensuring a secure online environment through close collaboration with both internal and external partners. Our initiatives align with OpenAI's fundamental mission of developing AI technologies that benefit humanity.
The Strategic Intelligence & Analysis (SIA) team plays a crucial role in providing safety intelligence for OpenAI’s products. We focus on monitoring, analyzing, and forecasting real-world abuse, geopolitical risks, and strategic threats. Our work informs safety mitigations, product decisions, and partnerships, ensuring that OpenAI’s tools are deployed securely and responsibly across critical sectors.
About the Role
We are seeking an AI Emerging Risks Analyst to help us understand potential harms and misuse of AI in a rapidly and continuously changing landscape. This role involves identifying known threat actors who exploit new technologies, as well as emerging threats enabled by these advancements. You will apply strategic foresight methodologies to proactively detect and mitigate risks.
In this position, you will provide a strategic-level perspective on a diverse range of evolving risk areas. You will be instrumental in creating actionable risk taxonomies relevant to OpenAI’s platforms and broader business interests. Using both quantitative and qualitative methodologies, you will identify early warning signals, investigate concerning behaviors, and transform weak signals into prioritized risk assessments. Your focus will include upstream ecosystem scanning, competitive benchmarking, and external narrative and risk sense-making. Your contributions will guide cross-functional partners in the protection and safety domains, ensuring user, brand, and community safety while fostering productive and creative uses of our tools.
Key Responsibilities
Identify and prioritize emerging risks
Develop and continuously refine a comprehensive view of emerging signals and trends that may impact the AI ecosystem through proactive scanning.
Design and maintain harm taxonomies to foresee and warn about potential AI-related harms and misuse over the next 0-24 months and beyond.
Contribute to an ongoing risk register and prioritization framework that highlights the most pressing issues based on severity, prevalence, exposure, and trajectory.
Analyze emerging abuse patterns
Develop thorough strategies to investigate and understand these patterns.