About the job
About Us
Nuro is a pioneering self-driving technology company striving to make autonomous vehicles accessible for everyone. Since our founding in 2016, we have been dedicated to creating the world’s most scalable driver, integrating advanced artificial intelligence with automotive-grade hardware. Our flagship technology, the Nuro Driver™, is licensed to support various applications, including robotaxis, commercial fleets, and personally owned vehicles. With a track record of successful self-driving deployments, Nuro is paving the way for automakers and mobility platforms to realize the commercial potential of autonomous vehicles, fostering a safer, more connected future.
Role Overview
As a Senior Machine Learning Research Scientist focusing on Vision-Language-Action (VLA) models, you will enhance our onboard Behavior & Planning stack to enable safe, robust decision-making in complex driving scenarios. Your expertise will drive the development of multimodal models that integrate scene understanding, contextual reasoning, and planning-relevant representations for real-world autonomous driving.
This position emphasizes advancing cutting-edge VLA models, including model development, large-scale training, fine-tuning, evaluation, and optimization for onboard deployment. You will collaborate closely with teams across behavior, planning, perception, systems, and infrastructure to translate research breakthroughs into practical applications deployed in our vehicles.
If you are passionate about creating and implementing state-of-the-art VLA systems in robotics, we encourage you to apply.
Key Responsibilities
- Develop and enhance VLA models for onboard Behavior & Planning in autonomous driving systems.
- Create multimodal models that facilitate safe decision-making in complex and ambiguous driving situations.
- Research and implement state-of-the-art techniques in vision-language-action modeling, multimodal representation learning, and foundational models for autonomy.
- Train, fine-tune, and evaluate large-scale VLAs using diverse, real-world driving datasets to improve model quality and robustness.
- Optimize models for efficient onboard deployment, focusing on inference speed, memory usage, and runtime performance.
- Collaborate with behavior, planning, perception, systems, and infrastructure teams to define training, evaluation, and deployment requirements.
- Design effective evaluation methodologies for multimodal models in safety-critical scenarios.
- Contribute to scalable model and data pipelines that support rapid experimentation and deployment.
Qualifications
- Proven expertise in machine learning, particularly in vision-language-action frameworks.
- Experience with multimodal model development and evaluation.
- Strong background in autonomous driving systems and decision-making processes.
- Familiarity with large-scale data training and optimization techniques.
- Excellent collaboration skills to work effectively with cross-functional teams.