About the job
At Google DeepMind, we celebrate the richness of diverse experiences, knowledge, and backgrounds, leveraging these unique perspectives to achieve remarkable outcomes. Our commitment to equal employment opportunity extends to all individuals, irrespective of sex, race, religion, belief, ethnicity, nationality, disability, age, citizenship, marital status, sexual orientation, gender identity, pregnancy, or any other legally protected status. Should you require any accommodations due to a disability or other need, please feel free to inform us.
Snapshot
The Gemini Safety team plays a crucial role in ensuring the safety and equitable behavior of Google DeepMind's most recent Gemini models. As a Research Scientist, you will apply and develop data-driven and algorithmic solutions to improve our latest user-facing models. This role thrives in a fast-paced, collaborative environment with a strong culture of support and dedication.
About Us
At Google DeepMind, we believe that artificial intelligence can be one of humanity's most impactful inventions. Our team consists of scientists, engineers, and machine learning specialists working together to push the boundaries of AI technology for the benefit of the public and to facilitate scientific discovery. We prioritize safety and ethics in all of our work and collaborations as we address critical challenges.
The Role
We seek a dynamic Research Scientist who excels at tackling new research questions and turning innovative research ideas into working implementations.
Our team is focused on enhancing the safety and fairness of cutting-edge AI models, contributing foundational technologies that are integral to various product areas, including the Gemini App, the Cloud API, and Search.
Key Responsibilities:
- Post-training and instruction tuning of state-of-the-art large language models (LLMs), concentrating on text-to-text and image/video/audio-to-text modalities, as well as agentic capabilities.
- Investigate data, reasoning, and algorithmic solutions to ensure that Gemini Models are safe, maximally beneficial, and accessible to all.
- Enhance the adversarial robustness of Gemini, particularly in relation to high-stakes abuse risks.
- Design and maintain rigorous evaluation protocols to identify gaps in model behavior and opportunities for safety and fairness improvements.
- Develop and implement experimental plans to address identified gaps or to create entirely new capabilities.
- Foster innovation and deepen understanding of safety in AI systems.
