About Our Team
At OpenAI, we prioritize security as a fundamental aspect of our mission to ensure that artificial general intelligence benefits all of humanity. Our Threat Intelligence team is dedicated to safeguarding OpenAI's technology, personnel, research, and infrastructure. We proactively identify and neutralize adversaries who aim to compromise our systems or misuse our models. Through sophisticated threat investigations, we build tools to enhance our analysis capabilities and deliver intelligence that informs our security strategy, providing leadership with timely, risk-aware insights. Our approach combines technical expertise, investigative rigor, and strong cross-functional collaboration to uncover threats and drive impactful outcomes across OpenAI's security and research domains.
About the Role
As a Technical Threat Investigator at OpenAI, you will play a crucial role in defending the company against advanced adversaries targeting OpenAI and the wider ecosystem, as well as those attempting to exploit our models for cyber operations.
This role involves deep investigative work. You will conduct complex, end-to-end investigations into sophisticated threat actors, analyzing their behaviors, infrastructure, and emerging techniques, particularly how they integrate AI into their workflows. Your insights will be vital in proactively identifying malicious activities and enhancing detection, disruption, enforcement, and safety measures across the organization.
You will convert your investigative findings into scalable solutions that produce a lasting impact. This includes developing and maintaining lightweight tools, automating processes where beneficial, and establishing AI-assisted workflows to streamline investigations, making them faster, more repeatable, and more effective over time.
Key Responsibilities:
- Conduct thorough, end-to-end investigations into advanced threat actors interacting with OpenAI's models, products, and ecosystem.
- Adopt an adversarial mindset to model attacker behavior, anticipate misuse patterns, and proactively hunt, identify, and disrupt malicious activities.
- Utilize internal telemetry, OSINT, vendor data, and proprietary safety systems to generate high-confidence findings regarding adversarial use of our models in cyber operations, platform abuse, and threats aimed at OpenAI.
- Translate investigative insights into tangible improvements across detection, enforcement, intelligence, and safety processes.
- Develop tools, scripts, and automations, and implement AI-assisted workflows to enhance investigation efficiency.
