About the Team
At OpenAI, security is integral to our mission of ensuring that artificial general intelligence serves the best interests of humanity.
Our Threat Intelligence team is dedicated to safeguarding OpenAI’s technology, personnel, research, and infrastructure. We proactively identify and mitigate threats from adversaries aiming to exploit our systems or misuse our models. By investigating complex threats, developing scalable analytical tools, and delivering intelligence, we shape our security strategies and provide leadership with actionable insights. Our approach combines technical expertise, investigative rigor, and robust cross-functional collaboration to detect threats and enhance security across the organization.
About the Role
We are seeking a Technical Threat Investigator to bolster our defenses against sophisticated adversaries targeting OpenAI and the wider ecosystem, including those attempting to misuse our models for cyber operations.
In this role, you will independently conduct comprehensive investigations into advanced threat actors, analyzing their behaviors, infrastructure, and emerging techniques, including how they integrate AI into their operations. Your findings will be crucial in proactively identifying malicious activity and enhancing detection, disruption, enforcement, and overall safety across the organization.
You will convert your investigative insights into scalable solutions. This includes developing lightweight tools, automating processes where feasible, and creating AI-assisted workflows to streamline investigations and improve effectiveness over time.
In this role, you will:
Perform thorough investigations into sophisticated threat actors interacting with OpenAI’s models, products, and ecosystem.
Adopt an adversarial mindset, modeling attacker behavior, predicting misuse patterns, and actively hunting for and disrupting malicious activities.
Utilize internal telemetry, open-source intelligence (OSINT), vendor data, and proprietary safety systems to generate high-confidence findings regarding adversarial use of our models in cyber operations and platform abuse.
Translate investigative findings into tangible enhancements across detection, enforcement, intelligence, and safety frameworks.
Develop tools, scripts, and automations to improve investigative processes and outcomes.
