
Fullstack Engineer, Safety Engineering

OpenAI · San Francisco
On-site · Full-time





Qualifications

  • Proven experience in full-stack development, with proficiency in both front-end and back-end technologies.

  • Strong understanding of AI safety principles and practices.

  • Excellent problem-solving skills and the ability to work collaboratively in a fast-paced environment.

  • Familiarity with tools for data management, analysis, and visualization.

  • Strong communication skills to engage effectively with cross-functional teams.

About the job

About Our Team

The Safety Systems Organization plays a crucial role in ensuring the safe deployment of our advanced models in real-world settings, contributing to OpenAI's mission of developing and implementing safe Artificial General Intelligence (AGI). We prioritize AI safety, trust, and transparency, and our work is vital in making a positive impact on society.

Our Safety Engineering team is dedicated to creating the platforms and tools that ensure OpenAI’s models are safe for real-world usage. We collaborate closely with researchers, product teams, and policy experts to transform safety concepts into robust, scalable systems: assessing risks, implementing safeguards, and continuously enhancing model performance in live environments. Our efforts sit at the confluence of product engineering, data science, and AI, directly influencing how millions engage with OpenAI’s technologies.

About the Position

We are seeking a proactive Fullstack Engineer who thrives in a dynamic, iterative environment, especially when developing internal tools that drive tangible societal impact. In this role, you will design and implement full-stack solutions for our Safety Systems teams, improving the safety and reliability of OpenAI’s models, particularly in sensitive areas such as mental health and protections for vulnerable users. Your contributions will help the team identify and resolve safety concerns more quickly, and will strengthen the feedback loop among policy, data, and model training cycles.

Key Responsibilities:

  • Lead the complete development cycle of internal tools aimed at enhancing the safety of OpenAI’s models, with a focus on critical areas such as mental health and vulnerable-user protections.

  • Collaborate closely with researchers, engineers, and model policy developers to identify workflows, challenges, and requirements, translating them into sustainable product solutions.

  • Create full-stack applications that support essential model policy workflows, including data labeling, failure case analysis, and insight generation for iterative development.

  • Enhance internal applications for usability, performance, and scalability, improving team efficiency and reducing resolution time for safety issues.

  • Transform successful AI-assisted safety workflows into externally facing safety products that empower developers to create safer AI, elevate industry standards for AI safety, and prepare society for more advanced AGI.

About OpenAI

At OpenAI, we are at the forefront of AI innovation, dedicated to ensuring that advanced technologies are harnessed safely and responsibly. Our work spans a variety of sectors, making a meaningful impact on society while fostering a culture of trust and transparency. Join us in our mission to develop AGI that benefits humanity.
