About Our Team
Join the Privacy Engineering Team at OpenAI, where we are dedicated to embedding privacy as a core principle within our mission to develop Artificial General Intelligence (AGI). We focus on ensuring that all OpenAI products and systems that process user data adhere to the highest standards of privacy and security.
Our team engineers essential production solutions, innovates privacy-preserving methodologies, and provides cross-functional engineering and research teams with the tools necessary for responsible data management. Our commitment to ethical data utilization is a cornerstone of OpenAI's vision for safely advancing AGI for the benefit of everyone.
About the Position
As a member of the Privacy Engineering Team, you will be instrumental in protecting user data while enhancing the usability and effectiveness of our AI systems. You will engage with cutting-edge research on privacy-enhancing technologies, including differential privacy, federated learning, and mitigations for data memorization. Your role will also entail exploring the intersection of privacy and machine learning, developing improved methods for data anonymization, and mitigating risks associated with model inversion and membership inference attacks.
This position is based in San Francisco, and we offer relocation assistance.
Key Responsibilities:
Design and prototype scalable privacy-preserving machine learning algorithms (e.g., differential privacy, secure aggregation, federated learning) for deployment at OpenAI.
Evaluate and enhance model resilience against privacy threats such as membership inference, model inversion, and data memorization leaks, balancing model utility with privacy guarantees.
Create internal libraries, evaluation frameworks, and documentation to make advanced privacy techniques accessible to engineering and research teams.
Conduct comprehensive investigations into the privacy-performance trade-offs of large models, sharing findings that guide model training and product safety protocols.
Establish and document privacy standards, threat models, and audit procedures to govern the entire machine learning lifecycle, from dataset curation to post-deployment oversight.
Work collaboratively with Security, Policy, Product, and Legal teams to translate evolving regulatory frameworks into actionable technical safeguards and tools.

