
Researcher, Robustness & Safety Training

OpenAI · San Francisco
On-site · Full-time



Experience Level

Senior

Qualifications

  • A PhD in Computer Science or a related field with a focus on AI safety.
  • Proven experience in safety research, particularly in AI model robustness and adversarial training.
  • Strong analytical and problem-solving skills.
  • Excellent communication and collaboration abilities.
  • Passion for advancing AI safety and ethical deployment.

About the job

About the Team

The Safety Systems team is dedicated to ensuring the responsible deployment of our advanced AI models for societal benefit. We lead OpenAI's mission to develop and implement safe AGI, prioritizing transparency and trust in our AI systems.

The Model Safety Research team is focused on pioneering research to enhance the robustness and safety of AI models. Our goal is to tackle the evolving safety challenges that arise as AI becomes increasingly powerful and prevalent across various applications. Key areas of focus include the enforcement of nuanced safety policies, model robustness against adversarial threats, addressing privacy and security concerns, and ensuring trustworthiness in critical safety domains.

We are committed to understanding real-world deployment and maximizing the benefits of AI while ensuring its safe and responsible use.

About the Role

OpenAI is seeking a passionate and experienced Senior Researcher specializing in AI safety. This role will guide research initiatives aimed at enabling safe AGI and will involve projects that enhance the safety, alignment, and robustness of our AI systems against adversarial threats. You will play a pivotal role in shaping the future of safe AI at OpenAI and contribute significantly to our mission of deploying safe AGI.

In this role, you will:

  • Engage in cutting-edge research on AI safety topics such as Reinforcement Learning from Human Feedback (RLHF), adversarial training, and system robustness.
  • Implement innovative methods within OpenAI’s core model training processes and drive safety enhancements across our products.
  • Define research directions and strategies to bolster the safety, alignment, and robustness of our AI systems.
  • Collaborate with cross-functional teams, including Trust & Safety, legal, and policy experts, to ensure our products uphold the highest safety standards.
  • Continuously assess and analyze the safety of our models and systems, pinpointing risks and proposing effective mitigation strategies.

You might thrive in this role if you:

  • Have a strong enthusiasm for AI safety and a solid background in safety research.
  • Possess excellent analytical skills and the ability to think critically about complex safety challenges.
  • Are adept at collaborating with diverse teams and communicating findings effectively.
  • Have a proactive approach to problem-solving and a commitment to ethical AI deployment.

About OpenAI

OpenAI is a pioneering research organization committed to developing artificial intelligence that is safe and beneficial for humanity. We strive to lead the way in AI safety, ensuring that our powerful models are deployed responsibly and transparently. Join us in shaping the future of AI.
