About the job
The Bot Company
We are on a mission to create a helpful robot for every household.
Our dynamic team of engineers, designers, and operators is headquartered in San Francisco, featuring talent from renowned companies such as Tesla, Cruise, OpenAI, Google, and Pixar. We have a proven track record of delivering exceptional products to hundreds of millions of users.
Our lean structure fosters swift decision-making and minimizes bureaucracy, empowering every team member with significant autonomy and responsibility. We embrace a culture of rapid iteration and execution across the tech stack.
What We Seek in Candidates
At The Bot Company, we value sharp minds capable of thriving in fast-paced, high-pressure environments. Candidates should exhibit:
Exceptional Mental Acuity: The ability to think quickly, assimilate new information instantly, and make connections across various domains.
Engineering Curiosity: A natural inclination to explore and understand how systems function, even beyond your specialized area.
High Performance Mindset: Comfort with rapid movement, adeptness in handling ambiguity, and excellence under demanding conditions.
Role Overview: ML Compiler Engineer
As a specialist in ML compilers for edge devices (custom silicon and others), you will play a pivotal role in building a robust deployment framework that executes large neural networks on our robots with minimal latency.
Key Qualifications
Proficient coding skills with extensive experience in C++ and/or Python.
Familiarity with modern compiler infrastructure (MLIR/LLVM, XLA, TVM, Glow, etc.).
Experience in deploying models on heterogeneous computing platforms (preferably edge devices).
Proficiency in writing kernels (CUDA/OpenCL).
Knowledge of quantization techniques is advantageous, though not mandatory.
Your Responsibilities
Design, develop, and maintain compiler infrastructure tailored for our hardware.
Collaborate across teams, including ML and Systems Software.
Independently diagnose and resolve complex numerical issues (such as discrepancies between training and inference) while enhancing performance.