Experience Level
Entry Level
Qualifications
We are looking for candidates with a strong background in performance modeling, data analysis, and software engineering. Ideal candidates will have:
Proficiency in programming languages such as Python and C++.
Experience with machine learning frameworks and performance optimization techniques.
A degree in Computer Science, Engineering, or a related field.
Excellent problem-solving skills and the ability to work collaboratively in a fast-paced environment.
About the job
OpenAI is seeking a Performance Modeling Engineer based in San Francisco. This role centers on building and improving models that enhance the performance and efficiency of AI systems. The work directly supports the technical backbone of OpenAI’s products.
Key responsibilities
Develop and refine models aimed at optimizing the performance of AI systems.
Collaborate with engineers and data scientists to tackle technical challenges as they arise.
Contribute to projects that improve the efficiency of large-scale AI infrastructure.
Role overview
This position offers the chance to work on foundational technology that underpins OpenAI’s products. The focus is on practical improvements and close teamwork with technical colleagues to advance the capabilities and efficiency of AI at scale.
About OpenAI
OpenAI is a leading research organization dedicated to advancing artificial intelligence in a safe and beneficial manner. Our mission is to ensure that artificial general intelligence (AGI) benefits all of humanity. Join us to work at the forefront of AI technology and contribute to projects that make a difference.
Role overview
At Mariana Minerals, we are on a mission to revolutionize refining processes for critical minerals, playing a pivotal role in the global energy transition. We are in search of a dynamic and driven Process Modeling Engineer who will be integral to this endeavor.
In this position, you will take charge of developing, validating, and optimizing heat and material balance models utilizing advanced software such as ASPEN Plus/HYSYS, SysCAD, OLI Studio, or METSIM. You will collaborate closely with R&D, pilot operations, and project execution teams to transform lab and pilot data into robust, scalable process models that are essential for the design of groundbreaking mineral refining facilities.
Key responsibilities
Create both steady-state and dynamic process models to determine heat and material balances for integrated mineral refinery systems using ASPEN, SysCAD, OLI, or METSIM.
Automate the sizing of equipment and processes (including reactors, heat exchangers, filters, crystallizers, evaporators, and separators) based on model outputs, linking models to datasheets and other engineering tools.
Develop and maintain comprehensive process simulation databases to ensure consistency and traceability among modeling assumptions, test data, and engineering outputs.
Calibrate and reconcile models using operational data from pilot plants to ensure model accuracy and predictive validity.
Conduct optimization studies to enhance energy recovery, recycling strategies, and material efficiency.
Develop dynamic models for validating PLC and DCS programming while assessing buffer sizing throughout the design process.
Integrate process models with CAPEX and OPEX estimation tools to streamline techno-economic model development.
Document modeling methodologies and results, ensuring clear technical communication for design reviews, techno-economic assessments, and regulatory submissions.
Zyphra is an innovative artificial intelligence company located in the heart of San Francisco, California.
The Opportunity
Join our dynamic team as a Research Engineer - Audio & Speech Models, where you will play a pivotal role in advancing Zyphra’s Audio Team. You will be instrumental in developing cutting-edge open-source text-to-speech and audio models. Your contributions will span the full spectrum of the model training process, from data collection and processing to the design of innovative architectures and training approaches.
Your Responsibilities
Conduct large-scale audio training operations
Optimize the performance of our training infrastructure
Collect, process, and evaluate audio datasets
Implement architectural and methodological improvements through rigorous testing
What We Seek
A strong research mindset with the ability to navigate projects from ideation to implementation and documentation.
Proficiency in rapid prototyping and implementation, allowing for swift experimentation.
Effective collaboration skills in a fast-paced research environment.
A quick learner who is eager to embrace and implement new concepts.
Excellent communication abilities, enabling you to contribute to both research and engineering tasks at scale.
Preferred Qualifications
Expertise in training audio models, such as text-to-speech, ASR, speech-to-speech, or emotion recognition.
Experience with training audio autoencoders.
Solid understanding of signal processing, particularly in audio.
Familiarity with diffusion models, consistency models, or GANs.
Experience with large-scale (multi-node) GPU training environments.
Strong understanding of experimental methodologies for conducting rigorous tests and ablations.
Interest in large-scale, parallel data processing pipelines.
Competence in PyTorch and Python programming.
Experience contributing to large, established codebases with rapid adaptation.
Full-time|$160K/yr - $230K/yr|On-site|San Francisco
About Meter
At Meter, we believe that networking is at the heart of technological advancement. We have innovatively unified the entire networking stack and are now on a mission to make it autonomous.
Our team is developing a cutting-edge neural network-driven system designed to analyze raw computer networks, enabling us to address all networking challenges. As outlined on Meter.ai, we are creating models within a closed-loop system that utilizes real-time telemetry, logs, and network events to autonomously troubleshoot issues, enhance performance, and resolve challenges.
To achieve this, we require not only exceptional models but also robust infrastructure that ensures our models have clean, versioned, and low-latency access to the necessary data throughout training, evaluation, and deployment phases.
Why this role is essential
Each Meter network deployed in the field serves as a valuable data source for our Models team. However, without meticulous infrastructure design, this data risks becoming fragmented, outdated, or inconsistent. In this role, you will ensure that such pitfalls are avoided. You will be responsible for the core data interface that drives our model development, experimentation, evaluation, and real-time inference.
This position is fundamental and offers a significant impact. Your contributions will shape the speed at which we can train new models, the reliability of their evaluations, and their seamless operation across hundreds of real-world networks. You will collaborate closely with modelers to deliver systems that are elegant, scalable, and robust.
Your responsibilities
Design and implement the Models API: a unified interface for accessing training, evaluation, and deployment data across raw, transformed, and feature-engineered layers.
Ensure backward compatibility and feature versioning across continually evolving schemas.
Develop scalable pipelines to ingest, transform, and serve petabytes of data across Kafka, Postgres, and ClickHouse.
Create CI/CD workflows that evolve the API in tandem with changes to the underlying data schema.
Facilitate fine-grained querying of historical and real-time data for any network, at any point in time.
Help establish and promote the principle of "smart data, dumb functions": maximizing operations in the data layer to minimize downstream code complexity.
Collaborate with modelers to co-design training frameworks that optimize performance.
Role overview
The Performance Modeling Engineer II position at OpenAI centers on building and applying performance models to enhance the efficiency of advanced AI systems. Based in San Francisco, this role contributes to the reliability and speed of OpenAI’s technologies.
What you will do
Develop and implement performance models for AI systems
Collaborate with data scientists and engineers to refine performance metrics
Support the efficiency and rigorous standards of OpenAI’s technologies
About Baseten
At Baseten, we are at the forefront of AI innovation, providing critical inference solutions for leading AI companies like Cursor, Notion, OpenEvidence, Abridge, Clay, Gamma, and Writer. Our platform combines advanced AI research, adaptable infrastructure, and intuitive developer tools, empowering organizations to deploy state-of-the-art models effectively. With rapid growth and a recent $300M Series E funding round backed by top-tier investors including BOND, IVP, Spark Capital, Greylock, and Conviction, we invite you to join our mission in building the platform of choice for engineers delivering AI products.
The role
As a member of Baseten’s Model Performance (MP) team, you will play a pivotal role in ensuring our platform’s model APIs are not only fast and reliable but also cost-effective. Your primary focus will be on developing and optimizing the infrastructure that supports our hosted API endpoints for cutting-edge open-source models. This role involves working with distributed systems, model serving, and enhancing the developer experience. You will collaborate with a small, dynamic team at the intersection of product development, model performance, and infrastructure, defining how developers interact with AI models on a large scale.
Responsibilities
Design, develop, and maintain the Model APIs surface, focusing on advanced inference features such as structured outputs (JSON mode, grammar-constrained generation), tool/function calling, and multi-modal serving.
Profile and optimize TensorRT-LLM kernels, analyze CUDA kernel performance, create custom CUDA operators, and enhance memory allocation patterns for maximum efficiency across multi-GPU setups.
Implement performance improvements across various runtimes based on a deep understanding of their internals, including speculative decoding, guided generation for structured outputs, and custom scheduling algorithms for high-performance serving.
Develop robust benchmarking frameworks to evaluate real-world performance across diverse model architectures, batch sizes, sequence lengths, and hardware configurations.
Enhance performance across runtimes (e.g., TensorRT, TensorRT-LLM) through techniques such as speculative decoding, quantization, batching, and KV-cache reuse.
Integrate deep observability mechanisms (metrics, traces, logs) and establish repeatable benchmarks to assess speed, reliability, and quality.
About our team
Join the Inference team at OpenAI, where we leverage cutting-edge research and technology to deliver exceptional AI products to consumers, enterprises, and developers. Our mission is to empower users to harness the full potential of our advanced AI models, enabling unprecedented capabilities. We prioritize efficient and high-performance model inference while accelerating research advancements.
About the role
We are seeking a passionate Software Engineer to optimize some of the world's largest and most sophisticated AI models for deployment in high-volume, low-latency, and highly available production and research environments.
Key responsibilities
Collaborate with machine learning researchers, engineers, and product managers to transition our latest technologies into production.
Work closely with researchers to enable advanced research initiatives through innovative engineering solutions.
Implement new techniques, tools, and architectures that enhance the performance, latency, throughput, and effectiveness of our model inference stack.
Develop tools to identify bottlenecks and instability sources, designing and implementing solutions for priority issues.
Optimize our code and Azure VM fleet to maximize every FLOP and GB of GPU RAM available.
You will excel in this role if you:
Possess a solid understanding of modern machine learning architectures and an intuitive grasp of performance optimization strategies, especially for inference.
Take ownership of problems end-to-end, demonstrating a willingness to acquire any necessary knowledge to achieve results.
Bring at least 5 years of professional software engineering experience.
Have or can quickly develop expertise in PyTorch, NVIDIA GPUs, and relevant optimization software stacks (such as NCCL and CUDA), along with HPC technologies like InfiniBand, MPI, and NVLink.
Have experience in architecting, building, monitoring, and debugging production distributed systems, with bonus points for working on performance-critical systems.
Have successfully rebuilt or significantly refactored production systems multiple times to accommodate rapid scaling.
Are self-driven, enjoying the challenge of identifying and addressing the most critical problems.
Full-time|Remote|Remote-Friendly (Travel Required)|San Francisco, CA|New York City, NY
Anthropic is looking for a Research Engineer focused on model evaluations. This position involves research and development to assess and strengthen the performance of AI models. Teams are based in San Francisco and New York City, and the role supports remote work with required travel.
Key responsibilities
Design and implement evaluations for Anthropic's AI models
Collaborate with team members to enhance model performance
Contribute to research that pushes the boundaries of AI systems
Location
Remote-friendly (travel required)|San Francisco, CA|New York City, NY
OpenAI is seeking a Software Engineer in San Francisco to focus on improving productivity by optimizing model performance. This position centers on developing solutions that make machine learning models more efficient and effective.
Role overview
This role involves working closely with teams across different functions to identify and address areas where model performance can be improved. The aim is to deliver changes that have a measurable impact on both systems and workflows.
What you will do
Collaborate with engineers and other specialists to enhance model efficiency
Develop and implement solutions that improve the effectiveness of machine learning systems
Contribute to projects that streamline processes and drive productivity gains
Impact
Your work will help shape improvements in how models operate and how teams at OpenAI achieve their goals. The changes you help deliver will support more effective use of resources and better outcomes for the organization.
Full-time|$192K/yr - $260K/yr|On-site|San Francisco, California
At Databricks, we are dedicated to empowering data teams to tackle the most challenging problems in the world — from realizing the future of transportation to fast-tracking medical innovations. We accomplish this by developing and operating the premier data and AI infrastructure platform, enabling our customers to harness profound data insights for business enhancement.
Our Model Serving product equips organizations with a cohesive, scalable, and governed solution for deploying and managing AI/ML models — ranging from traditional machine learning to intricate proprietary large language models. It ensures real-time, low-latency inference, governance, monitoring, and lineage. As the adoption of AI surges, Model Serving stands as a fundamental component of the Databricks platform, allowing customers to operationalize models at scale with robust SLAs and cost efficiency.
In the role of Staff Engineer, you will significantly influence both the product experience and the core infrastructure of Model Serving. Your responsibilities will include designing and constructing systems that facilitate high-throughput, low-latency inference across CPU and GPU workloads, steering architectural strategies, and collaborating extensively with platform, product, infrastructure, and research teams to create an exceptional serving platform.
About Baseten
At Baseten, we are at the forefront of enabling transformative AI solutions for some of the world's leading companies, including Cursor, Notion, OpenEvidence, Abridge, Clay, Gamma, and Writer. Our innovative platform combines cutting-edge AI research, adaptable infrastructure, and developer-friendly tools to facilitate the production of advanced models. Recently, we celebrated our rapid growth with a successful $300M Series E funding round from notable investors like BOND, IVP, Spark Capital, Greylock, and Conviction. We invite you to join our dynamic team and contribute to the evolution of AI product deployment.
The role
As a Senior Software Engineer specializing in Model Training at Baseten, you will play a pivotal role in constructing the infrastructure essential for the large-scale training and fine-tuning of foundational AI models. Your responsibilities will include designing and implementing distributed training systems, optimizing GPU utilization, and establishing scalable pipelines that empower Baseten and our clientele to adapt models with efficiency and reliability. This role demands a high level of technical expertise and hands-on involvement: you will be responsible for critical components of our training stack, collaborate with product and infrastructure teams to identify customer needs, and drive advancements in scalable training infrastructure.
Example work
Training open-source models that surpass GPT-5 capabilities for a leading digital insurer
Exploring specialized, continuously learning models as the future of AI
Overview of our training documentation
Research initiatives we've undertaken
Responsibilities
Design, construct, and sustain distributed training infrastructure for large foundation models
Develop scalable pipelines for fine-tuning and training across diverse GPU/accelerator clusters
Enhance training performance through optimization of algorithms and infrastructure
Collaborate closely with cross-functional teams to align technical solutions with business objectives
Stay abreast of advancements in machine learning and AI to continually improve our training processes
Join us in revolutionizing AI infrastructure
At Meter, we are pioneering the application of cutting-edge AI technology to transform the way the internet is constructed, monitored, and managed.
Our vertical integration encompasses the entire enterprise networking stack: from hardware and firmware to operating systems and operations. This unique position offers us comprehensive visibility and control over the entire stack via a singular API, along with a proprietary dataset that is unmatched in the industry, paving the way for complete end-to-end automation. Our solutions are already in use by Fortune 500 companies, educational institutions, manufacturing facilities, and cloud-scale clients.
We are assembling a founding core engineering team dedicated to developing and training models that can comprehend these systems, enhance operational efficiency, predict failures, and resolve issues proactively. In essence, you will be instrumental in creating the decision-making framework that underpins the infrastructure of the modern world.
You will collaborate closely with our founders, playing a key role in shaping the future of one of the most impactful applications of models available today.
Learn more about us at meter.ai.
Full-time|$166K/yr - $225K/yr|On-site|San Francisco, California
At Databricks, we are dedicated to empowering data teams to tackle some of the most challenging issues of our time—from realizing the future of transportation to speeding up medical innovations. We achieve this by developing and maintaining the premier data and AI infrastructure platform, allowing our clients to leverage profound data insights to enhance their operations.
Our Model Serving product equips organizations with a cohesive, scalable, and governed platform for deploying and overseeing AI/ML models, spanning traditional ML to specialized large language models. It provides real-time, low-latency inference, governance, monitoring, and lineage capabilities. With the rapid rise of AI adoption, Model Serving stands as a fundamental component of the Databricks platform, enabling clients to operationalize models efficiently and cost-effectively at scale.
As a Senior Engineer, your role will be pivotal in transforming both the product experience and the underlying infrastructure of Model Serving. You will design and create systems enabling high-throughput, low-latency inference across CPU and GPU workloads, influence architectural strategies, and work closely with platform, product, infrastructure, and research teams to deliver an exceptional serving platform.
Join Perplexity as a Research Engineering Manager, where you will spearhead a team of exceptional AI researchers and engineers dedicated to crafting the advanced models that power our innovative products. Our talented team has pioneered some of the most sophisticated models in agentic research, query understanding, and other critical domains that demand precision and depth. As we broaden our user base and expand our product offerings, our proprietary models are increasingly essential for delivering a premium experience to the world's most discerning users.
You will explore our extensive datasets of conversational and agentic queries, applying state-of-the-art training methodologies to enhance AI model performance. Through proactive technical and organizational leadership, you will empower your team to create cutting-edge models for the applications that are most significant to our business and our users.
At Hover, we empower individuals to design, enhance, and safeguard their cherished properties. Utilizing proprietary AI technology built on over a decade of real property data, we provide answers to pressing questions such as “What will it look like?” and “What will it cost?” Homeowners, contractors, and insurance professionals depend on Hover to receive fully measured, accurate, and interactive 3D models of any property—achieved through a smartphone scan in mere minutes.
We are driven by curiosity, purpose, and a collective commitment to our customers, communities, and each other. At Hover, we believe the most innovative ideas stem from diverse perspectives, and we take pride in fostering an inclusive, high-performance culture that encourages growth, accountability, and excellence. Supported by leading investors like Google Ventures and Menlo Ventures, and trusted by industry leaders including Travelers, State Farm, and Nationwide, we are transforming how people perceive and interact with their environments.
Why join Hover?
At Hover, 3D models are not just a feature; they are the essence of our product. Each scan and data point we process empowers homeowners, insurers, and contractors to make informed, data-driven decisions. We are seeking a Software Engineer who has a passion for geometry, automation, and making a tangible impact in the real world. In this role, you will design and implement systems that convert customer-captured imagery into meticulously accurate 3D models, enhancing the scalability and precision of Hover’s modeling pipeline. You will work collaboratively with designers and engineers across frontend, backend, computer vision, and DevOps to bring innovative capabilities to fruition, blending technical expertise with strong communication and cross-functional collaboration.
The 3D Modeling Pipeline team develops the tools essential for our in-house operations to transform customer-captured scans into highly detailed, accurate 3D models of buildings. This team is also responsible for creating the pipeline and systems that process 3D data through both automated and manual steps, as well as exporting data into customer-facing formats.
Your contributions will include
Owning and evolving backend systems that convert raw scan data into exact 3D models, ensuring timely delivery to key ecosystem partners like Xactimate and Cotality.
Building and refining internal modeling tools that enable teams to efficiently generate, validate, and optimize high-quality 3D data.
Collaborating with machine learning and computer vision engineers to implement new algorithms into production, bridging research with practical applications.
Enhancing customer and partner experiences by improving how Hover’s 3D outputs integrate with downstream workflows and external platforms.
Promoting innovation and ongoing enhancement across our modeling pipeline.
Embark on an exciting journey with ASM, where innovative technology converges with a collaborative atmosphere.
For over 55 years, ASM has led the way in technological advancements, pioneering innovations in the semiconductor industry. Our diverse team of over 4,500 professionals from 70 different nationalities is integral to shaping future technologies such as 5G, cloud computing, artificial intelligence, and autonomous vehicles. We are not just a tech company; we are committed to fostering diversity, promoting inclusion, and prioritizing sustainability to make a meaningful impact worldwide. Our development programs are designed to support your personal and professional growth, encouraging you to push the limits of innovation to realize your full potential.
We are currently looking for an enthusiastic intern to assist with advanced metrology development initiatives aimed at enhancing measurement precision and modeling stability across vital characterization techniques. This position is perfect for individuals eager to delve into semiconductor process characterization, data analysis, and design of experiments (DOE) methodologies.
Full-time|$217K/yr - $312.2K/yr|On-site|San Francisco, California
At Databricks, we are dedicated to empowering data teams to tackle the most challenging global issues—whether it's transforming transportation or speeding up medical advancements. We achieve this by constructing and managing the world's leading data and AI infrastructure platform, enabling our clients to leverage deep data insights for business enhancement.
The Model Serving product at Databricks offers enterprises a cohesive, scalable, and governed platform for deploying and managing AI/ML models—from conventional ML to sophisticated, proprietary large language models. It facilitates real-time, low-latency inference while providing governance, monitoring, and lineage capabilities. As AI adoption surges, Model Serving becomes a central component of the Databricks platform, allowing customers to operationalize models efficiently and cost-effectively.
As a Senior Engineering Manager, you will lead a team responsible for both the product experience and the underlying infrastructure of Model Serving. This role involves shaping user-facing features while architecting for scalability, extensibility, and performance across CPU and GPU inference. You will collaborate closely with various teams across the platform, product, infrastructure, and research domains.
The Bot Company
At The Bot Company, we are on a mission to create an innovative robotic assistant for every household.
Our dynamic team, composed of talented engineers, designers, and operators, is based in San Francisco. We have a rich background from industry leaders such as Tesla, Cruise, OpenAI, Google, and Pixar, and we have successfully delivered products to hundreds of millions of users, honing our ability to create exceptional products and experiences.
We pride ourselves on maintaining a streamlined team structure that fosters swift decision-making and minimizes bureaucracy. Each member is considered an Individual Contributor, granted substantial autonomy, ownership, and accountability. Our culture enables us to work across the technology stack with an emphasis on rapid iteration and execution.
What we seek in candidates
Candidates for all positions at The Bot Company must exhibit remarkable sharpness and the capacity to thrive in high-pressure environments. We expect candidates to showcase:
Exceptional Cognitive Abilities: You possess quick thinking, instant learning capabilities, and the ability to reason across diverse domains.
Engineering Curiosity: You demonstrate an innate desire to understand how systems function, even beyond your area of expertise.
Performance-Driven Attitude: You excel in fast-paced settings, effectively navigate ambiguity, and thrive under demanding circumstances.
Machine learning: multimodal foundation models
We are developing unified foundation models capable of reasoning across text, images, video, and kinematics to inform intelligent robotic behaviors. You will engage with large-scale multimodal networks, overseeing the complete process from data handling to model training and deployment.
Your responsibilities
Construct Native Multimodal Policies: Create architectures where vision, language, and other modalities are represented in a unified manner.
Enhance Cross-Modal Reasoning: Explore and implement strategies to ensure that the model not only correlates modalities but also comprehends them (e.g., linking visual physics to kinematic constraints).
Manage the Training Loop from Start to Finish: Design, execute, troubleshoot, and refine large-scale training experiments; identify failure points, enhance data mixtures, and tighten evaluations to achieve measurable improvements.
Deploy and Refine Real Systems: Integrate models into practical robotic frameworks, enhance robot code for model deployment, and optimize performance for edge inference.
Full-time|$127.5K/yr - $248.5K/yr|On-site|San Francisco, California, United States
About Redwood Materials
Redwood Materials is pioneering a sustainable battery supply chain that integrates recovery, reuse, and recycling—enabling the circulation of critical minerals and facilitating the energy transition. Established in 2017, we are proud to offer low-cost, large-scale energy storage solutions and produce battery materials within the U.S. for the first time, utilizing batteries that are already in circulation.
Modeling and Architecture Engineer, Energy Storage
As the technical lead of the Energy Storage Modeling and Architecture team, you will play a critical role in the design, development, and integration of Redwood Energy’s innovative second-life battery product. You will serve as the subject matter expert in creating a multiphysics and technoeconomic modeling platform that informs the system design of this cutting-edge battery energy storage solution. These models must be robust enough to accurately size future projects while remaining agile enough to facilitate a wide range of design decisions.
Additionally, you will be tasked with operationalizing these insights into an algorithm that optimally manages the use of each battery pack on our premises, maximizing the value extracted before recycling. This model will dictate operational parameters, including state of charge (SOC) windows and charge/discharge powers, along with decisions regarding when to replace a pack and what to use as a replacement.
The ideal candidate will be self-motivated, adaptable to a startup culture, and enthusiastic about tackling novel technical challenges. You should possess experience in leading modeling teams within the battery energy storage or electric vehicle sectors while being a first-principles thinker capable of initiating new modeling projects independently.
Full-time|$192K/yr - $260K/yr|On-site|San Francisco, California
At Databricks, we are driven by our commitment to empower data teams in tackling the world's most challenging problems — from transforming transportation solutions to accelerating medical advancements. Our mission revolves around constructing and maintaining the world's premier data and AI infrastructure platform, enabling our clients to harness deep data insights for enhanced business outcomes.
Foundation Model Serving is the API product for hosting and serving advanced AI model inference, catering to both open-source models like Llama, Qwen, and GPT OSS, as well as proprietary models such as Claude and OpenAI GPT. We welcome engineers who have experience managing high-scale operational systems, including customer-facing APIs, edge gateways, or ML inference services, even if they do not have a background in ML or AI. A passion for developing LLM APIs and runtimes at scale is essential.
As a Staff Engineer, you will play a pivotal role in defining both the product experience and the underlying infrastructure. You will design and build systems that facilitate high-throughput, low-latency inference on GPU workloads with cutting-edge models. Your influence will extend to architectural direction, working closely with platform, product, infrastructure, and research teams to deliver an exceptional foundation model API product.
The impact you will have
Design and implement core systems and APIs that drive Databricks Foundation Model Serving, ensuring scalability, reliability, and operational excellence.
Collaborate with product and engineering leaders to outline the technical roadmap and long-term architecture for workload serving.
Make architectural decisions to enhance performance, throughput, autoscaling, and operational efficiency for GPU serving workloads.
Contribute directly to critical components within the serving infrastructure, from systems like vLLM and SGLang to token-based rate limiters and optimizers, ensuring seamless and efficient operations at scale.
Work cross-functionally with product, platform, and research teams to transform customer requirements into dependable, high-performing systems.
Establish best practices for code quality, testing, and operational readiness while mentoring fellow engineers through design reviews and technical support.
Represent the team in inter-departmental technical discussions, influencing Databricks’ wider AI platform strategy.
Jan 30, 2026