Research Engineering Manager Model Training jobs in San Francisco – Browse 8,430 openings on RoboApply Jobs

Research Engineering Manager Model Training jobs in San Francisco

Open roles matching “Research Engineering Manager Model Training” with location signals for San Francisco. 8,430 active listings on RoboApply Jobs.


1 - 20 of 8,430 Jobs
Perplexity
Full-time|On-site|San Francisco

Join Perplexity as a Research Engineering Manager, where you will lead a team of exceptional AI researchers and engineers dedicated to building the advanced models that power our products. Our team has pioneered some of the most sophisticated models in agentic research, query understanding, and other critical domains that demand precision and depth. As we broaden our user base and expand our product offerings, our proprietary models are increasingly essential to delivering a premium experience to the world's most discerning users.

You will explore our extensive datasets of conversational and agentic queries, applying state-of-the-art training methodologies to enhance model performance. Through proactive technical and organizational leadership, you will empower your team to create cutting-edge models for the applications that matter most to our business and our users.

Feb 4, 2026
Zyphra
Full-time|On-site|San Francisco

Zyphra is an innovative leader in artificial intelligence, located in the heart of San Francisco, California.

Role Overview:
As a Research Engineer specializing in Language Model Pre-Training, you will play a pivotal role in defining our language model strategy through comprehensive pretraining development. Your close collaboration with our pretraining team will ensure that your insights contribute to the advancement of our next-generation models.

Key Responsibilities:
- Conduct large-scale training runs and implement model parallelization techniques.
- Optimize the performance of our pretraining stack.
- Oversee dataset collection, processing, and evaluation.
- Research architectures and methodologies, including optimizer ablations.

Qualifications:
- Demonstrated engineering prowess in developing reliable and robust systems.
- A quick learner with a passion for implementing innovative ideas.
- Exceptional communication and collaboration skills, capable of working effectively on both research and engineering implementations at scale.

Preferred Skills:
- Profound expertise in addressing machine learning challenges and training models.
- Experience training on large-scale (multi-node) GPU clusters.
- In-depth understanding of model training pipelines, including model/data parallelism and distributed optimizers.
- Strong methodology for conducting rigorous ablations and hypothesis testing.
- Familiarity with large-scale, high-performance data processing pipelines.
- High proficiency in PyTorch and Python programming.
- Ability to navigate and understand extensive pre-existing codebases swiftly.
- Published machine learning research in reputable venues is an advantage.
- Postgraduate degree in a relevant scientific field (Computer Science, Electrical Engineering, Mathematics, Physics).

Why Join Zyphra?
We value a research methodology that emphasizes thoughtful, methodical progress toward ambitious objectives. Deep research and engineering excellence are given equal importance. Join us in an environment that fosters innovation, collaboration, and professional growth.

Aug 28, 2025
OpenAI
Full-time|Hybrid|San Francisco

Join Our Innovative Team
At OpenAI, our Training team is at the forefront of developing the advanced language models that drive our research and products, bringing us closer to artificial general intelligence (AGI). This mission demands cutting-edge research into our architecture, datasets, and optimization methods, alongside strategic long-term initiatives that boost the efficiency and capabilities of future models. We ensure that our models, including recent breakthroughs like GPT-4-Turbo and GPT-4o, adhere to the highest standards of excellence.

Your Role
As an integral member of our architecture team, you will spearhead architectural advancements for OpenAI's leading models, enhancing their intelligence and efficiency while introducing novel capabilities. Your expertise in large language model (LLM) architectures and model inference will be crucial as you take a hands-on, empirical approach to problem-solving. Whether brainstorming creative breakthroughs, refining foundational systems, designing evaluations, or diagnosing performance issues, your diverse skill set will be invaluable.

This position is located in San Francisco, with a hybrid work arrangement of three days in the office each week; we provide relocation support for new hires.

Your Key Responsibilities:
- Innovate, prototype, and scale up new architectures to elevate model intelligence.
- Conduct and evaluate experiments both independently and collaboratively.
- Analyze, debug, and improve both model performance and computational efficiency.
- Contribute to the development of training and inference infrastructure.

Who You Are:
- You have made significant contributions to major LLM training projects.
- You excel at independently evaluating and enhancing deep learning architectures.
- You are driven to responsibly deploy LLMs in real-world applications.
- You are knowledgeable about state-of-the-art transformer modifications aimed at improving efficiency.

About OpenAI
OpenAI is a pioneering AI research and deployment organization committed to ensuring that artificial general intelligence benefits humanity. We focus on developing safe and effective AI technologies that empower individuals and organizations across the globe.

May 14, 2025
Anthropic
Full-time|Remote|Remote-Friendly (Travel-Required) | San Francisco, CA | New York City, NY

Anthropic is looking for a Research Engineer focused on model evaluations. This position involves research and development to assess and strengthen the performance of AI models. Teams are based in San Francisco and New York City, and the role supports remote work with required travel.

Key responsibilities:
- Design and implement evaluations for Anthropic's AI models
- Collaborate with team members to enhance model performance
- Contribute to research that pushes the boundaries of AI systems

Location:
Remote-friendly (travel required) | San Francisco, CA | New York City, NY

Apr 28, 2026
Zyphra
Full-time|On-site|San Francisco

Zyphra is an innovative artificial intelligence company located in the heart of San Francisco, California.

The Opportunity:
Join our dynamic team as a Research Engineer - Audio & Speech Models, where you will play a pivotal role on Zyphra's Audio Team. You will be instrumental in developing cutting-edge open-source text-to-speech and audio models. Your contributions will span the full model training process, from data collection and processing to the design of innovative architectures and training approaches.

Your Responsibilities:
- Conduct large-scale audio training operations
- Optimize the performance of our training infrastructure
- Collect, process, and evaluate audio datasets
- Implement architectural and methodological improvements through rigorous testing

What We Seek:
- A strong research mindset with the ability to navigate projects from ideation to implementation and documentation.
- Proficiency in rapid prototyping and implementation, allowing for swift experimentation.
- Effective collaboration skills in a fast-paced research environment.
- A quick learner who is eager to embrace and implement new concepts.
- Excellent communication abilities, enabling you to contribute to both research and engineering tasks at scale.

Preferred Qualifications:
- Expertise in training audio models, such as text-to-speech, ASR, speech-to-speech, or emotion recognition.
- Experience training audio autoencoders.
- Solid understanding of signal processing, particularly for audio.
- Familiarity with diffusion models, consistency models, or GANs.
- Experience with large-scale (multi-node) GPU training environments.
- Strong understanding of experimental methodologies for conducting rigorous tests and ablations.
- Interest in large-scale, parallel data processing pipelines.
- Competence in PyTorch and Python programming.
- Experience contributing to large, established codebases with rapid adaptation.

Aug 28, 2025
Baseten
Full-time|On-site|San Francisco

ABOUT BASETEN
At Baseten, we are at the forefront of enabling transformative AI solutions for some of the world's leading companies, including Cursor, Notion, OpenEvidence, Abridge, Clay, Gamma, and Writer. Our platform combines cutting-edge AI research, adaptable infrastructure, and developer-friendly tools to bring advanced models to production. Recently, we celebrated our rapid growth with a $300M Series E funding round from investors including BOND, IVP, Spark Capital, Greylock, and Conviction. We invite you to join our team and contribute to the evolution of AI product deployment.

THE ROLE
As a Senior Software Engineer specializing in Model Training at Baseten, you will play a pivotal role in building the infrastructure for large-scale training and fine-tuning of foundation AI models. Your responsibilities will include designing and implementing distributed training systems, optimizing GPU utilization, and establishing scalable pipelines that enable Baseten and our customers to adapt models efficiently and reliably. This role demands deep technical expertise and hands-on involvement: you will own critical components of our training stack, work with product and infrastructure teams to identify customer needs, and drive advancements in scalable training infrastructure.

EXAMPLE WORK:
- Training open-source models that surpass GPT-5 capabilities for a leading digital insurer
- Exploring specialized, continuously learning models as the future of AI
- Overview of our training documentation
- Research initiatives we've undertaken

RESPONSIBILITIES
- Design, build, and maintain distributed training infrastructure for large foundation models
- Develop scalable pipelines for fine-tuning and training across diverse GPU/accelerator clusters
- Improve training performance through optimization of algorithms and infrastructure
- Collaborate closely with cross-functional teams to align technical solutions with business objectives
- Stay abreast of advancements in machine learning and AI to continually improve our training processes

Aug 29, 2025
OpenAI
Full-time|Hybrid|San Francisco

Team Overview
The Human Data team at OpenAI is at the forefront of identifying and mitigating risks associated with advanced AI systems. Our mission is to enhance model reliability and public trust by designing thorough evaluations, uncovering vulnerabilities, and collaborating closely with researchers.

Role Overview
As a Technical Program Manager, you will spearhead initiatives aimed at assessing the safety and robustness of OpenAI's models through innovative experimentation and methodical evaluation. Your role will involve orchestrating efforts across research and engineering teams, translating ambiguous risk signals into actionable research programs that will shape the future of AI model development and deployment. We seek candidates who possess technical acumen, thrive in uncertain environments, and are passionate about pioneering the future of safe AI.

This position is based in San Francisco, CA, with a hybrid work model of three days in the office each week; relocation assistance is available for new hires.

Key Responsibilities
- Lead programs that investigate unexpected model behaviors and identify potential failure modes.
- Convert ambiguous risk signals into clear priorities and actionable research agendas.
- Design and execute innovative evaluations, experiments, and red-teaming initiatives.
- Collaborate with research, product, and deployment teams to integrate findings into model training and deployment pipelines.
- Establish repeatable systems for monitoring model performance and interpreting emerging behavior patterns.

Ideal Candidate Profile
- Proven experience in technical program management with exceptional organizational and communication abilities.
- Familiarity with large language models, prompt engineering, or model evaluation methodologies.
- Ability to manage fast-paced, high-uncertainty projects, shaping them from inception.
- Creative and resourceful in developing novel methods for evaluating model behavior and performance.
- Skilled at coordinating across both technical and non-technical stakeholders to ensure alignment and execution.

About OpenAI
OpenAI is a pioneering AI research and deployment company committed to ensuring that general-purpose artificial intelligence benefits all of humanity. We continually push the boundaries of AI capabilities and strive to deploy them safely through our products. Our mission is to harness the extraordinary potential of AI responsibly and equitably for a better future.

Jan 26, 2026
OpenAI
Full-time|On-site|San Francisco

OpenAI is hiring a Software Engineer for Post-Training Research in San Francisco. This position centers on improving the performance and capabilities of advanced machine learning models after their initial training phase.

Role overview
Work closely with a skilled team to explore new ways of strengthening AI systems. The focus is on researching and developing methods that push the boundaries of what these models can achieve once training is complete.

Collaboration
Expect to contribute to ongoing research efforts and share insights with colleagues who are passionate about advancing AI. Teamwork and knowledge exchange are key parts of this role.

Location
This position is based in San Francisco.

Apr 29, 2026
Zyphra
Full-time|On-site|San Francisco

Zyphra is a cutting-edge artificial intelligence firm headquartered in San Francisco, California.

Position Overview:
As a Research Scientist specializing in Model Architectures, you will play a pivotal role on Zyphra's AI Architecture Research Team. Your responsibilities will include the design and thorough evaluation of innovative model architectures and training methodologies aimed at enhancing essential modeling capabilities (e.g., loss per FLOP or loss per parameter) and tackling core limitations inherent in current models. You will collaborate closely with our pre-training team to ensure that your findings are seamlessly integrated into our next-generation models.

Qualifications:
- Strong research acumen and intuition.
- Proven ability to navigate research projects from initial conception to execution and final write-up.
- Exceptional implementation and prototyping skills, with the capability to swiftly transform ideas into experimental outcomes.
- A collaborative spirit and the ability to thrive in a fast-paced research environment.
- A deep curiosity and enthusiasm for understanding intelligence.

Requirements:
- Experience with long-term memory, RAG/retrieval systems, dynamic/adaptive computation, and alternative credit assignment strategies.
- Knowledge of reinforcement learning, control theory, and signal processing techniques.
- A passion for exploring and critically evaluating unconventional ideas, with the ability to maintain a unique perspective.
- Familiarity with modern training pipelines and the hardware requirements for designing efficient architectures compatible with GPUs.
- Strong understanding of experimental methodologies for conducting rigorous ablations and hypothesis testing.
- High proficiency in PyTorch and Python programming.
- Ability to quickly assimilate into large pre-existing codebases and contribute effectively.
- Prior publication of machine learning research in reputable venues.
- Postgraduate degree in a scientific discipline (e.g., Computer Science, Electrical Engineering, Mathematics, Physics).

Why Join Zyphra?
We emphasize a structured research methodology that systematically addresses ambitious challenges in AI.

Aug 28, 2025
Zyphra
Full-time|On-site|San Francisco

Join our innovative team at Zyphra as a Research Engineer specializing in Brain-Computer Interface (BCI) Models. In this pivotal role, you will contribute to groundbreaking research and development at the intersection of neuroscience and artificial intelligence. Your expertise will help shape the future of communication between humans and machines, enhancing quality of life for countless individuals.

As a Research Engineer, you will be responsible for designing, implementing, and testing advanced BCI models, collaborating closely with a diverse team of scientists and engineers. Your work will play a crucial role in advancing our understanding of neural dynamics and their applications in technology.

Mar 16, 2026
Pluralis Research
Full-time|On-site|San Francisco

Overview
Pluralis Research is at the forefront of Protocol Learning, a decentralized approach to training and deploying AI models that democratizes access beyond well-funded corporations. By aggregating computational resources from diverse participants, we incentivize collaboration while safeguarding against centralized control of model weights, paving the way for a truly open and cooperative environment for advanced AI.

We are seeking a talented Machine Learning Training Platform Engineer to design, develop, and scale the core infrastructure that powers our decentralized ML training platform. In this role, you will own essential systems including infrastructure orchestration, distributed computing, and service integration, facilitating ongoing experimentation and large-scale model training.

Responsibilities
- Multi-Cloud Infrastructure: Create resource management systems that provision and orchestrate computing resources across AWS, GCP, and Azure using infrastructure-as-code tools like Pulumi or Terraform. Manage dynamic scaling, state synchronization, and concurrent operations across hundreds of heterogeneous nodes.
- Distributed Training Systems: Design fault-tolerant infrastructure for distributed machine learning, including GPU clusters, the NVIDIA runtime, S3 checkpointing, large-dataset management and streaming, health monitoring, and resilient retry strategies.
- Real-World Networking: Develop systems that simulate and manage real-world network conditions such as bandwidth shaping, latency injection, and packet loss, while accommodating dynamic node churn and ensuring efficient data flow across workers with varying connectivity, since our training runs on consumer nodes and non-co-located infrastructure.

Apr 1, 2026
Baseten
Full-time|On-site|San Francisco

Join Baseten as a Post-Training Research Engineer and contribute to groundbreaking advancements in machine learning and AI. In this role, you will leverage your engineering skills to analyze and enhance models post-training, ensuring optimal performance and efficiency.

Mar 23, 2026
Cartesia
Full-time|On-site|*HQ - San Francisco, CA

Join Cartesia as a Model Architecture Researcher

At Cartesia, our vision is to revolutionize AI by creating interactive intelligence that is seamlessly integrated into your daily life. Unlike current models, our goal is to develop systems capable of processing extensive streams of audio, video, and text (1 billion text tokens, 10 billion audio tokens, and 1 trillion video tokens) directly on devices.

As pioneers in innovative model architectures, our founding team, which originated from the Stanford AI Lab, developed State Space Models (SSMs), a groundbreaking foundation for training efficient, large-scale models. Our diverse team merges deep expertise in model innovation with a design-focused engineering approach, allowing us to create and deploy state-of-the-art models and applications. Backed by leading investors such as Index Ventures and Lightspeed Venture Partners, along with industry veterans and advisors, we are poised to shape the future of AI.

Your Contribution
- Drive forward-thinking research in neural network architecture, focusing on alternative models like state space models, efficient transformers, and hybrid architectures.
- Create innovative architectures that enhance model performance, inference speed, and adaptability in environments ranging from cloud infrastructure to on-device deployments.
- Develop advanced model capabilities, including statefulness, long-range memory, and novel conditioning mechanisms to boost expressiveness and generalization.
- Analyze architectural decisions and their effects on model characteristics such as scalability, robustness, latency, and energy consumption.
- Create frameworks and tools to assess architectural advancements, benchmarking their performance in both research and production contexts.
- Collaborate with interdisciplinary teams to translate architectural insights into scalable systems that deliver real-world impact.

Your Qualifications
- Extensive experience in architecture design with a focus on advanced models such as state space models, transformers, and RNN/CNN variants.
- In-depth understanding of the interplay between architectural designs and system constraints, particularly in cloud and on-device deployments.
- Strong proficiency in the design and evaluation of neural network architectures.

Dec 12, 2024
Tavus
Full-time|On-site|San Francisco (London/Europe - OK)

Tavus – Multimodal AI Model Optimization Research Engineer

At Tavus, we are pioneering the human aspect of AI technology. Our objective is to make human-AI interactions as seamless and natural as in-person conversations, bringing a human touch to areas once considered unscalable. We accomplish this through groundbreaking research in multimodal AI, focusing on human-to-human communication modeling (encompassing language, audio, and video) and the development of audio-visual avatar behaviors. Our models drive applications ranging from text-to-video AI avatars to real-time conversational video experiences across sectors such as healthcare, recruitment, sales, and education. By empowering AI to perceive, listen, and engage with an authentic human-like presence, we are laying the groundwork for the next generation of AI workers, assistants, and companions. As a Series B company, we are backed by renowned investors including Sequoia, Y Combinator, and Scale VC. Join us as we shape the future of human-AI interaction.

The Role
We are seeking an accomplished Research Scientist/Engineer with expertise in model optimization to be a vital part of our core AI team. The ideal candidate thrives in dynamic startup environments, is adept at setting priorities independently, and is open to making calculated decisions. We are moving swiftly and need individuals who can help navigate our path forward.

Your Mission
- Transform state-of-the-art research models into fast, efficient, and production-ready systems through techniques such as sparsification, distillation, and quantization.
- Oversee the optimization lifecycle for critical models: establish metrics, conduct experiments, and evaluate trade-offs among latency, cost, and quality.
- Collaborate closely with researchers and engineers to convert innovative concepts into deployable solutions.

Requirements
- Extensive experience in deep learning with PyTorch.
- Practical experience in model optimization and compression, including knowledge distillation, pruning/sparsification, quantization, and mixed precision.
- Familiarity with efficient architectures such as low-rank adapters.
- Strong grasp of inference performance and GPU/accelerator fundamentals.
- Proficiency in Python and adherence to best practices in research engineering.
- Experience with large models and datasets in cloud environments.
- Ability to read ML literature, reproduce results, and adapt ideas as needed.

Apr 3, 2026
Scale AI
Full-time|$252K/yr - $315K/yr|On-site|San Francisco, CA; Seattle, WA; New York, NY

At Scale AI, we collaborate with leading AI laboratories to supply high-quality data and foster advancements in Generative AI research. We seek innovative Research Scientists and Research Engineers with a strong focus on post-training techniques for Large Language Models (LLMs), including Supervised Fine-Tuning (SFT), Reinforcement Learning from Human Feedback (RLHF), and reward modeling. This position emphasizes optimizing data curation and evaluation processes to boost LLM performance across text and multimodal formats. In this pivotal role, you will pioneer new methods to enhance the alignment and generalization of extensive generative models. You will work closely with fellow researchers and engineers to establish best practices in data-driven AI development. Additionally, you will collaborate with top foundation model labs, providing critical technical and strategic insights for the evolution of next-generation generative AI models.

Mar 26, 2026
OpenAI
Full-time|Hybrid|San Francisco

About the Team
Join the Post-Training team at OpenAI, where we focus on refining and elevating pre-trained models for deployment in ChatGPT, our API, and future products. Collaborating closely with various research and product teams, we conduct crucial research that prepares our models for real-world deployment to millions of users, ensuring they are safe, efficient, and reliable.

About the Role
As a Research Engineer / Scientist, you will spearhead the research and development of enhancements to our models. Our work sits at the intersection of reinforcement learning and product development, aiming to create cutting-edge solutions. We seek passionate individuals with robust machine learning engineering skills and research experience, particularly with innovative and powerful models. The ideal candidate will be driven by a commitment to product-oriented research.

This position is located in San Francisco, CA, and follows a hybrid work model requiring three days in the office each week. Relocation assistance is available for new employees.

In this role, you will:
- Lead and execute a research agenda aimed at enhancing model capabilities and performance.
- Work collaboratively with research and product teams to empower customers to optimize their models.
- Develop robust evaluation frameworks to monitor and assess modeling advancements.
- Design, implement, test, and debug code across our research stack.

You may excel in this role if you:
- Possess a deep understanding of machine learning and its applications.
- Have experience with relevant models and methodologies for evaluating model improvements.
- Are adept at navigating large ML codebases for debugging purposes.
- Thrive in a fast-paced and technically intricate environment.

About OpenAI
OpenAI is a pioneering AI research and deployment organization dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We are committed to pushing the boundaries of AI capabilities while prioritizing safety and human-centric values in our products. Our mission embraces diverse perspectives, voices, and experiences that represent the full spectrum of humanity, as we strive for a future where AI is a powerful ally for everyone.

Dec 1, 2025
Anthropic
Remote|Remote-Friendly (Travel Required) | San Francisco, CA

Join Anthropic as a Senior Research Scientist on our Reward Models team, where you will spearhead groundbreaking research aimed at enhancing our understanding of human preferences at scale. Your innovative contributions will directly influence how our AI models, including Claude, align with human values and optimize for user needs. You will delve into the forefront of reward modeling for large language models, designing novel architectures and training methodologies for Reinforcement Learning from Human Feedback (RLHF). Your research will explore advanced evaluation techniques, including rubric-based grading, and tackle challenges such as reward hacking. Collaboration is key, as you'll work alongside teams in Finetuning, Alignment Science, and our broader research organization to ensure your findings result in tangible advancements in AI capabilities and safety. This role offers you an opportunity to address critical AI alignment challenges, leveraging cutting-edge models and substantial computational resources to further the science of safe and capable AI systems.

Jan 29, 2026
Letta
Full-time|On-site|San Francisco Office

Advancing Self-Improving Superintelligence

At Letta, we are on a mission to revolutionize artificial intelligence by creating self-improving agents that learn and adapt like humans. Unlike current AI systems, which are often rigid and brittle, our approach aims to build adaptable AI that continually evolves through experience.

Founded by the visionaries behind MemGPT at UC Berkeley's Sky Computing Lab, the birthplace of Spark and Ray, we are backed by notable figures in AI infrastructure, including Jeff Dean and Clem Delangue. Our agents are already enhancing production systems for industry leaders such as 11x and Bilt Rewards, continually learning and improving in real time.

Join our team of researchers and engineers dedicated to tackling AI's most significant challenges: creating machines that can reason, remember, and learn as humans do.

This position requires in-person attendance (no hybrid option) at our downtown San Francisco office, five days a week.

Feb 4, 2025
Tavus
Full-time|On-site|San Francisco

About Tavus
Tavus is at the forefront of innovation in human computing. Our mission is to develop AI Humans: an advanced interface that bridges the gap between individuals and machines, eliminating the friction found in current technologies. Our state-of-the-art human simulation models empower machines to see, hear, respond, and even exhibit realistic appearances, facilitating genuine, face-to-face interactions. AI Humans integrate the emotional insight of humans with the scalability and dependability of machines, making them reliable agents accessible 24/7, in any language, on our terms.

Imagine having access to an affordable therapist, a personal trainer that fits your schedule, or a team of medical assistants dedicated to providing personalized care for every patient. With Tavus, individuals, enterprises, and developers have the tools to create AI Humans that connect, comprehend, and act with empathy at scale. We are a Series A company backed by esteemed investors such as Sequoia Capital, Y Combinator, and Scale Venture Partners. Join us in shaping a future where machines and humans genuinely understand one another.

The Position
We are seeking an AI Researcher to join our core AI team and advance the frontiers of multimodal conversational intelligence. If you excel in dynamic environments, enjoy transforming abstract concepts into functional code, and are motivated by pushing the boundaries of what is possible, this role is designed for you.

Your Responsibilities
- Engage in research on foundational multimodal models, specifically in the realm of conversational avatars (such as neural avatars and talking heads).
- Develop models for video, audio, and language sequences using autoregressive and predictive architectures (e.g., V-JEPA) and/or diffusion methodologies, with a focus on temporal and sequential data rather than static images.
- Collaborate closely with the Applied ML team to bring your research into production systems.
- Remain at the forefront of multimodal learning and help us define what "cutting edge" will mean in the future.

Ideal Candidate Profile
- PhD (or nearing completion) in a relevant field, or equivalent practical research experience.
- Experience in multimodal machine learning, particularly focused on conversational interfaces.

Oct 8, 2025
World Labs
Full-time|$250K/yr - $325K/yr|On-site|San Francisco

About World Labs:
At World Labs, we create foundational world models capable of perceiving, generating, reasoning, and interacting with the 3D environment. Our mission is to unlock the full potential of AI through spatial intelligence, transforming perception into action, reasoning into insight, and imagination into creation. We believe that spatial intelligence will revolutionize storytelling, creativity, design, simulation, and immersive experiences across both virtual and physical realms. Our world-class team is driven by curiosity and passion, with diverse backgrounds spanning AI research, systems engineering, and product design. This synergy fosters a tight feedback loop between our cutting-edge research and user-empowering products.

Role Overview
We are seeking an innovative Research Scientist specializing in generative modeling, especially diffusion models, to join our modeling team. This position is ideal for individuals with extensive expertise in applying diffusion models to images, videos, or 3D assets and scenes. While not mandatory, experience in any of the following areas will be considered a significant advantage:
- Large-scale model training
- Research in 3D computer vision

In this role, you will work closely with researchers, engineers, and product teams to translate advanced 3D modeling and machine learning techniques into practical applications, ensuring our technology stays at the forefront of visual innovation. This position entails substantial hands-on research and engineering work, taking projects from conception to production deployment.

Key Responsibilities
- Design, implement, and train large-scale diffusion models for generating 3D worlds.
- Develop and experiment with large-scale diffusion models to introduce novel control signals, align with target aesthetic preferences, or optimize for efficient inference.
- Collaborate closely with research and product teams to translate product requirements into actionable technical roadmaps.
- Contribute actively to all phases of model development, including data curation, experimentation, evaluation, and deployment.
- Continuously investigate and integrate the latest research in diffusion and generative AI.
- Serve as a key technical resource within the team, mentoring peers and promoting best practices in generative modeling and machine learning engineering.

Feb 18, 2026
