About the job
The Behavior Understanding and Evaluation team at Motional is at the forefront of measuring and validating autonomous vehicle behavior at scale. As we prepare to deploy driverless vehicles, traditional manual reviews and static metric thresholds are becoming insufficient. Our goal is to build automated, statistically robust systems that use advanced machine learning to assess and understand our vehicles' performance in both real-world and simulated environments.

One of the key challenges we face is accurately determining whether a simulation has 'Passed' or 'Failed', given the complex, multi-modal nature of human driving. This research-oriented role contributes to the creation of a Next-Generation Semantic Validator: a production-ready machine learning evaluation system that learns the distribution of valid human driving behavior, establishing a 'Safety Ruler' for the release of autonomous vehicles.

Responsibilities:
- Develop Advanced Models: Apply Transformer-based generative models (such as Trajectory Transformer or MotionLM architectures) to learn the 'grammar' of valid human driving, leveraging known ground truths to enable rapid evaluation of large simulation suites that would otherwise require human review.
- Establish Statistical Safety Guarantees: Define and apply key evaluation metrics using methodologies such as Conformal Prediction to build rigorous, adaptive safety envelopes around predicted trajectories.
- Benchmark Methods: Lead benchmarking initiatives comparing traditional geometric methods (e.g., single-trajectory comparisons) with state-of-the-art generative/ML approaches, demonstrating a reduction in 'False Fails.'
- Collaborate: Work cross-functionally with the Behaviors, Actions, Simulation, System Engineering, and Research teams to exchange insights on multi-modal ground truths and probabilistic safety.

Qualifications:
- Currently pursuing a PhD in Computer Science, Robotics, Machine Learning, Statistics, or a related field.
- Strong understanding of scientific and statistical methodologies.
- Proficient in machine learning and deep learning, particularly modern sequence modeling (Transformers, self-attention, and cross-attention).

This internship is based in our Boston office and requires weekly in-office attendance.
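For candidates unfamiliar with the 'safety envelope' idea in the responsibilities, split conformal prediction can be sketched in a few lines. This is a minimal illustration only, assuming a synthetic trajectory-endpoint dataset and a Euclidean nonconformity score; it does not reflect Motional's actual models or data.

```python
import numpy as np

def conformal_radius(cal_pred, cal_true, alpha=0.1):
    """Split conformal prediction: from a held-out calibration set,
    compute a radius r such that a fresh observed point falls within
    r of its prediction with probability >= 1 - alpha (assuming
    exchangeability of calibration and test examples)."""
    # Nonconformity score: distance between predicted and observed positions.
    scores = np.linalg.norm(cal_pred - cal_true, axis=-1)
    n = len(scores)
    # Finite-sample-corrected quantile level.
    q = np.ceil((n + 1) * (1 - alpha)) / n
    return np.quantile(scores, min(q, 1.0), method="higher")

# Illustrative synthetic data: 500 calibration trajectory endpoints (x, y),
# with a hypothetical predictor whose error has scale ~0.3.
rng = np.random.default_rng(0)
cal_true = rng.normal(size=(500, 2))
cal_pred = cal_true + rng.normal(scale=0.3, size=(500, 2))

r = conformal_radius(cal_pred, cal_true, alpha=0.1)
# A simulation run 'Passes' this check if the observed endpoint lies
# inside the radius-r safety envelope around the predicted endpoint.
```

The envelope adapts to the model: a better predictor yields smaller calibration scores and hence a tighter radius, while the 1 - alpha coverage guarantee holds regardless of the model's quality.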