
Research Engineer, Privacy

OpenAI · San Francisco
On-site · Full-time




Qualifications

The ideal candidate will possess:

  • Strong expertise in machine learning, with a solid understanding of privacy-preserving technologies.

  • Experience in developing algorithms that prioritize data security while maintaining performance.

  • Proficiency in Python and machine learning frameworks such as TensorFlow or PyTorch.

  • Knowledge of data privacy laws and regulations, including the GDPR and CCPA.

  • Strong analytical skills with a focus on problem-solving and innovation.

  • Excellent communication skills, with the ability to convey complex concepts clearly to diverse audiences.

  • Experience in cross-functional collaboration and project management.

About the job

About Our Team

Join the Privacy Engineering Team at OpenAI, where we are dedicated to embedding privacy as a core principle within our mission to develop Artificial General Intelligence (AGI). We focus on ensuring that all OpenAI products and systems that process user data adhere to the highest standards of privacy and security.

Our team engineers essential production solutions, innovates privacy-preserving methodologies, and provides cross-functional engineering and research teams with the tools necessary for responsible data management. Our commitment to ethical data utilization is a cornerstone of OpenAI's vision for safely advancing AGI for the benefit of everyone.

About the Position

As a valued member of the Privacy Engineering Team, you will be instrumental in protecting user data while enhancing the usability and effectiveness of our AI systems. You will engage with cutting-edge research on privacy-enhancing technologies, including differential privacy, federated learning, and defenses against data memorization. Your role will also entail exploring the intersection of privacy and machine learning, innovating methods for better data anonymization, and mitigating risks associated with model inversion and membership inference attacks.

This position is based in San Francisco, and we offer relocation assistance.

Key Responsibilities:

  • Design and prototype scalable privacy-preserving machine learning algorithms (e.g., differential privacy, secure aggregation, federated learning) for deployment at OpenAI.

  • Evaluate and enhance model resilience against privacy threats such as membership inference, model inversion, and data memorization leaks, ensuring a balance between utility and security assurances.

  • Create internal libraries, evaluation frameworks, and documentation to make advanced privacy techniques accessible to engineering and research teams.

  • Conduct comprehensive investigations into the privacy-performance trade-offs of large models, sharing findings that guide model training and product safety protocols.

  • Establish and document privacy standards, threat models, and audit procedures to govern the entire machine learning lifecycle, from dataset curation to post-deployment oversight.

  • Work collaboratively with Security, Policy, Product, and Legal teams to translate evolving regulatory frameworks into actionable technical safeguards and tools.
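To give a concrete flavor of the differential-privacy work named in the responsibilities above, here is a minimal, hypothetical sketch of the Laplace mechanism, the textbook building block for releasing a numeric statistic with an epsilon-DP guarantee. The function name and parameters are illustrative only, not OpenAI's implementation:

```python
import numpy as np

def laplace_mechanism(true_value, sensitivity, epsilon, rng=None):
    """Return true_value plus Laplace noise calibrated for epsilon-DP.

    sensitivity: the maximum change one individual's record can cause
                 in the query result.
    epsilon:     the privacy budget (smaller = stronger privacy,
                 more noise).
    """
    if rng is None:
        rng = np.random.default_rng()
    scale = sensitivity / epsilon
    return true_value + rng.laplace(0.0, scale)

# Example: release a user count of 100 with epsilon = 1.
# A counting query has sensitivity 1 (one person changes it by at most 1).
noisy_count = laplace_mechanism(100.0, sensitivity=1.0, epsilon=1.0)
```

The scale of the noise, sensitivity / epsilon, is the privacy-utility trade-off the role's responsibilities describe: lowering epsilon strengthens the privacy guarantee but makes the released statistic noisier.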

About OpenAI

OpenAI is at the forefront of advancing artificial intelligence technology while ensuring ethical practices in AI development. Our mission is to ensure that AGI benefits all of humanity. We strive for transparency, collaboration, and integrity in our work, fostering an inclusive environment where groundbreaking ideas can thrive. Join us to be part of a team that is redefining the future of AI with an emphasis on safety and privacy.
