Qualifications
Candidates should possess a strong background in machine learning, statistics, or a related field, along with hands-on experience in developing and executing complex experiments. Proficiency in programming languages such as Python and familiarity with machine learning frameworks are essential. A keen interest in AI safety and its ethical implications is highly valued. Prior experience in collaborative research environments and excellent problem-solving skills will be beneficial.
About the job
About Anthropic
At Anthropic, we are driven by our mission to develop reliable, interpretable, and steerable AI systems. Our commitment is to ensure that AI is safe and beneficial not only for our users but also for society as a whole. Our rapidly expanding team comprises dedicated researchers, engineers, policy specialists, and business leaders collaborating to create impactful AI technologies.
About the Role:
As a Research Engineer focusing on Alignment Science, you will design and execute sophisticated machine learning experiments aimed at understanding and guiding the behavior of advanced AI systems. Your passion lies in making AI systems helpful, honest, and safe, particularly in the face of challenges posed by human-level capabilities. You embody both the scientific and engineering mindsets. In this role, you will engage in exploratory research on AI safety, concentrating on risks associated with future powerful systems (such as those classified as ASL-3 or ASL-4 under our Responsible Scaling Policy), often working in collaboration with teams focused on Interpretability, Fine-Tuning, and the Frontier Red Team.
Discover more about our current research topics and insights on our blog, as we delve into pressing issues such as:
Scalable Oversight: Innovating techniques to ensure that highly capable models remain helpful and truthful, even as they exceed human-level intelligence.
AI Control: Developing strategies to maintain the safety and harmlessness of advanced AI systems in novel or adversarial environments.
Alignment Stress Testing: Implementing rigorous testing frameworks to evaluate AI alignment under various conditions.
Team focus
The Alignment Science team at OpenAI works on intent alignment for artificial intelligence. Their goal is to develop models that accurately interpret and follow user requests, while maintaining high standards for safety and transparency. As AI models become more advanced, the team prioritizes keeping them honest about their capabilities and limitations, ensuring close alignment with user intent. Research spans both theoretical and applied domains. The team shares findings publicly and integrates new alignment techniques into OpenAI's deployed models. Recent efforts have targeted model honesty, studying how models admit mistakes, avoid generating false information, and resist manipulation. The team is looking for scalable solutions to improve instruction following and reliability in AI systems. Quantitative research is a core part of this work, especially reinforcement learning and related training and evaluation methods that support safer, more reliable AI interactions.

Role overview
This Researcher in Alignment Science position (which may be titled Research Engineer or Research Scientist) centers on designing and running experiments to improve how models follow user intent. Responsibilities include developing training protocols, building evaluation frameworks, and strengthening research infrastructure to support effective alignment in new models. The job is based in San Francisco, CA, with a hybrid schedule requiring three days per week in the office. OpenAI provides relocation support for new hires. Exceptional remote candidates who can work independently and collaborate closely with the team will also be considered.

Main responsibilities
Design and conduct experiments on alignment techniques, including intent following, honesty, calibration, and robustness.
Train and assess models using reinforcement learning and other empirical machine learning approaches.
Develop evaluation metrics for failure modes such as hallucination, compliance gaps, reward exploitation, and covert actions.
Investigate methods to encourage models to self-verify and report limitations honestly, including confession-style training objectives.
Create monitoring tools and interventions at inference time to help models act as intended.
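Calibration, one of the alignment properties named in this posting, has a standard quantitative check. As an illustrative sketch only (a textbook metric, not this team's actual tooling), expected calibration error bins predictions by stated confidence and compares each bin's average confidence to its empirical accuracy:

```python
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=10):
    """ECE: bin predictions by confidence, then average the gap between
    each bin's mean confidence and its empirical accuracy, weighted by
    bin size. Lower means better calibrated."""
    conf = np.asarray(confidences, dtype=float)
    corr = np.asarray(correct, dtype=float)
    # Map each confidence in [0, 1] to a bin index 0..n_bins-1.
    bins = np.clip((conf * n_bins).astype(int), 0, n_bins - 1)
    ece = 0.0
    for b in range(n_bins):
        mask = bins == b
        if mask.any():
            ece += mask.mean() * abs(corr[mask].mean() - conf[mask].mean())
    return ece

# A model that is right half the time but always reports 0.9 confidence
# is overconfident, and the 0.4 gap shows up directly in the score.
print(round(expected_calibration_error([0.9, 0.9, 0.9, 0.9], [1, 0, 1, 0]), 3))  # 0.4
```

The same bin-and-compare idea extends to reliability diagrams, which plot per-bin accuracy against confidence instead of collapsing the gaps into one number.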
Join Our Innovative Team
At OpenAI, our Alignment team is committed to building AI systems that prioritize safety, trustworthiness, and alignment with human values, even as these systems evolve and grow in complexity. We are at the forefront of AI research, developing advanced methodologies to ensure that AI adheres to human intent across diverse scenarios, including high-stakes and adversarial environments. Our focus is on tackling the most critical challenges, addressing areas where AI can have profound impacts. By quantifying risks and making meaningful improvements, we aim to prepare our models for the complexities of real-world applications.

Our approach is built on two foundational pillars: (1) integrating enhanced capabilities into alignment, ensuring our techniques evolve positively with increasing capabilities, and (2) centering human input through the development of mechanisms that allow humans to communicate their intent and effectively monitor AI systems, even in intricate situations.

Your Role in Shaping the Future
As a Research Engineer / Scientist on our Alignment team, you will play a pivotal role in ensuring our AI systems align with human intent in complex and unpredictable contexts. Your responsibilities will include designing and implementing scalable solutions that maintain alignment as AI capabilities expand, while incorporating human oversight into AI decision-making processes.

This position is based in San Francisco, CA, and follows a hybrid work model of three days in the office each week. We also offer relocation assistance to new team members.

Key Responsibilities:
Develop and assess alignment capabilities that are context-sensitive, subjective, and challenging to quantify.
Create evaluations to accurately measure risks and alignment with human values and intentions.
Construct tools and evaluations to examine model robustness across various scenarios.
Design experiments to explore how alignment scales with compute resources, data, context lengths, actions, and adversarial influences.
Innovate new Human-AI interaction frameworks and scalable supervision methods that enhance human engagement and understanding of AI systems.
About Our Team
The Future of Computing Research team is a dynamic applied research unit within the Consumer Devices group at OpenAI. We are dedicated to pioneering innovative methods, models, and evaluation frameworks that propel our vision for the future of computing. Our focus lies at the cutting edge of multimodal AI, transforming emerging model capabilities into product experiences that are not only functional and enjoyable but also foster long-term trust.

Our research delves into a new generation of AI systems capable of learning and evolving over time, adapting to individual needs, and enhancing daily life. This includes exploring long-term memory, user modeling, and personalized systems aligned with broader human goals, values, and overall well-being. We collaborate closely across multiple disciplines (research, engineering, design, product management, and safety) to define what it means to build AI systems that recognize and respond to user needs in a contextually aware and respectful manner, ensuring demonstrable benefits.

About the Position
We are seeking a passionate Research Engineer/Scientist to join our Future of Computing Research team, focusing on Reinforcement Learning from Human Feedback (RLHF) and post-training techniques for personalized multimodal AI systems.

In this role, you will be instrumental in establishing the learning and evaluation foundations necessary for models to become increasingly context-aware, adaptive, and useful over time. You will tackle challenges such as reward modeling, preference learning, long-horizon evaluation, and policy improvement for systems that are required to make high-quality behavioral decisions in real-world settings. Our success is measured not just by improved benchmark performance but by enhanced model behavior in actual use cases.

The ideal candidate is enthusiastic about advancing beyond simplistic one-turn assistant interactions towards systems that learn and grow through feedback, utilizing richer signals and training against meaningful notions of user value. This requires a thoughtful approach to reward design, feedback mechanisms, and evaluation frameworks that assess the long-term benefits of interventions.

This position is based in San Francisco, CA, with a hybrid work model of four days in the office each week. We also provide relocation assistance for new hires.

Key Responsibilities:
Develop RLHF and post-training strategies for multimodal models.
Create reward models and preference-learning pipelines to foster adaptive, personalized model behavior.
Engage in long-term evaluation and policy refinement to enhance user interactions.
About the Team
Join the innovative Post-Training team at OpenAI, where we focus on refining and elevating pre-trained models for deployment in ChatGPT, our API, and future products. Collaborating closely with various research and product teams, we conduct crucial research that prepares our models for real-world deployment to millions of users, ensuring they are safe, efficient, and reliable.

About the Role
As a Research Engineer / Scientist, you will spearhead the research and development of enhancements to our models. Our work intersects reinforcement learning and product development, aiming to create cutting-edge solutions. We seek passionate individuals with robust machine learning engineering skills and research experience, particularly with innovative and powerful models. The ideal candidate will be driven by a commitment to product-oriented research.

This position is located in San Francisco, CA, and follows a hybrid work model requiring three days in the office each week. Relocation assistance is available for new employees.

In this role, you will:
Lead and execute a research agenda aimed at enhancing model capabilities and performance.
Work collaboratively with research and product teams to empower customers to optimize their models.
Develop robust evaluation frameworks to monitor and assess modeling advancements.
Design, implement, test, and debug code across our research stack.

You may excel in this role if you:
Possess a deep understanding of machine learning and its applications.
Have experience with relevant models and methodologies for evaluating model improvements.
Are adept at navigating large ML codebases for debugging purposes.
Thrive in a fast-paced and technically intricate environment.

About OpenAI
OpenAI is a pioneering AI research and deployment organization dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We are committed to pushing the boundaries of AI capabilities while prioritizing safety and human-centric values in our products. Our mission is to embrace diverse perspectives, voices, and experiences that represent the full spectrum of humanity, as we strive for a future where AI is a powerful ally for everyone.
About Our Team
Join the forefront of AI innovation with the RL and Reasoning team at OpenAI. Our team is dedicated to advancing reinforcement learning research and has pioneered transformative projects, including o1 and o3. We are committed to pushing the limits of generative models while ensuring their scalable deployment.

About the Role
As a Research Engineer/Research Scientist at OpenAI, you will play a pivotal role in enhancing AI alignment and capabilities through state-of-the-art reinforcement learning techniques. Your contributions will be essential in training intelligent, aligned, and versatile agents that power various AI models. We seek individuals with a solid foundation in reinforcement learning research, agile coding skills, and a passion for rapid iteration.

This position is located in San Francisco, CA, and follows a hybrid work model of three days in the office per week. We also provide relocation assistance for new hires.

You may excel in this role if:
You are enthusiastic about being at the cutting edge of RL and language model research.
You take initiative, owning ideas and driving them to fruition.
You value principled methodologies, conducting simple experiments in controlled environments to draw trustworthy conclusions.
You thrive in a fast-paced, complex technical environment where rapid iteration is essential.
You are adept at navigating extensive ML codebases to troubleshoot and enhance them.
You possess a profound understanding of machine learning and its applications.

About OpenAI
OpenAI is a pioneering AI research and deployment organization committed to ensuring that general-purpose artificial intelligence serves the greater good for humanity. We strive to push the boundaries of AI system capabilities while prioritizing safe deployment through our innovative products. We recognize AI as a powerful tool that must be developed with safety and human-centric principles, embracing diverse perspectives to reflect the full spectrum of humanity.

We are proud to be an equal opportunity employer, welcoming applicants from all backgrounds without discrimination based on race, religion, color, national origin, sex, sexual orientation, age, veteran status, disability, genetic information, or any other legally protected characteristic.
Full-time|$122K/yr - $170K/yr|Remote|San Diego, California, United States; South San Francisco, California, United States
Join Veracyte, a leader in transforming cancer care through innovative diagnostics that empower patients and clinicians alike. At Veracyte, we provide an inspiring work environment where you can make a significant impact on patients' lives while advancing your career. Our culture, known as the Veracyte way, emphasizes collaboration, resilience, and a commitment to excellence.

Our Core Values:
We Seek A Better Way: We boldly innovate and learn from challenges to improve cancer care.
We Make It Happen: We prioritize urgency, quality, and enjoyment in our work.
We Are Stronger Together: We foster open collaboration and celebrate our collective successes.
We Care Deeply: We honor our diverse backgrounds and support each other in doing what is right.

The Role:
We are on the lookout for an enthusiastic Bioinformatics Research Scientist to join our Translational Bioinformatics group within the Bioinformatics and Data Science Team. This pivotal role involves leveraging computational and statistical techniques on extensive transcriptomic and genomic datasets, particularly focusing on prostate cancer biology and its clinical implications. The ideal candidate will play a key role in identifying clinically relevant molecular signatures that enhance disease stratification, prognosis, and the development of diagnostics, while effectively communicating findings to both scientific and clinical audiences. This position offers a unique opportunity to bridge the realms of data science, biology, and clinical research through close collaboration with scientists, clinicians, and cross-functional teams to convert complex data into actionable insights.
About Our Team
Join the Foundations Research team, where we tackle ambitious and innovative projects that could redefine the future of AI. Our mission is to enhance the science behind our training and scaling initiatives, focusing on pioneering frontier models. We are dedicated to advancing data utilization, scaling methodologies, optimization strategies, model architectures, and efficiency enhancements to accelerate our scientific breakthroughs.

About the Position
We are on the lookout for a dynamic technical research lead to spearhead our embeddings-focused retrieval initiatives. You will oversee a talented team of research scientists and engineers committed to developing foundational technologies that enable models to access and utilize the right information precisely when needed. This includes crafting innovative embedding training objectives, architecting scalable vector storage, and implementing adaptive indexing techniques. This pivotal role will contribute to various OpenAI products and internal research initiatives, offering opportunities for scientific publication and significant technical influence.

This position is located in San Francisco, CA, where we embrace a hybrid work model, requiring three days in the office weekly, and we provide relocation assistance for new hires.

Your Responsibilities
Lead cutting-edge research on embedding models and retrieval systems optimized for grounding, relevance, and adaptive reasoning.
Supervise a team of researchers and engineers in building an end-to-end infrastructure for training, evaluating, and integrating embeddings into advanced models.
Drive advancements in dense, sparse, and hybrid representation techniques, metric learning, and retrieval systems.
Work collaboratively with Pretraining, Inference, and other Research teams to seamlessly integrate retrieval throughout the model lifecycle.
Contribute to OpenAI's ambitious vision of developing AI systems with robust memory and knowledge access capabilities rooted in learned representations.

You Will Excel in This Role If You Possess
A proven track record of leading high-performance teams of researchers or engineers within ML infrastructure or foundational research.
In-depth technical knowledge in representation learning, embedding models, or vector retrieval systems.
Familiarity with transformer-based large language models and their interaction with embedding spaces and objectives.
Research experience in areas such as contrastive learning and retrieval-augmented generation.
About Granica
Granica is an innovative AI research and infrastructure firm dedicated to creating reliable, steerable representations of enterprise data. We establish trust through Crunch, a policy-driven health layer optimizing large tabular datasets for efficiency, reliability, and reversibility. Utilizing this foundation, we are developing Large Tabular Models, systems designed to learn cross-column and relational structures, delivering trustworthy answers and automation with integrated provenance and governance.

Our Mission
Current AI capabilities are hindered not only by model design but also by the inefficiencies of the data that supports it. At scale, each redundant byte, poorly organized dataset, and inefficient data pathway contributes to significant costs, latency, and energy waste. Granica’s mission is to eliminate these inefficiencies. We leverage groundbreaking research in information theory, probabilistic modeling, and distributed systems to craft self-optimizing data infrastructure: systems that continually enhance how information is represented and utilized by AI.

Led by Prof. Andrea Montanari from Stanford, Granica’s Research group merges advances in information theory with learning efficiency in large-scale distributed systems. We collectively believe that the next significant leap in AI will originate from innovations in efficient systems, rather than merely larger models. Granica is at the forefront of developing a new category of structured AI models: foundational models designed to learn and reason from the relational, tabular, and structured data that drives the global economy. While many focus on unstructured text or media, we are venturing into the next frontier: systems capable of comprehending and reasoning over structured information.

Your Contributions
Create and prototype algorithms that form the core of structured AI, enhancing representation learning and efficient information modeling for enterprise and tabular data at petabyte scale.
Develop adaptive learners merging statistical learning theory with systems optimization at scale, contributing to a new generation of foundational models for structured information.
Design architectures that unify symbolic, relational, and neural components, enabling AI systems to reason directly over structured enterprise data.
Construct cost models and optimization frameworks that enhance the efficiency of structured learning, both computationally and economically.
Role overview
The Principal Research Scientist – Scaling at Databricks leads research projects that advance how the company’s data analytics platform handles large workloads. This San Francisco-based role focuses on designing and improving algorithms that enable efficient large-scale data processing and machine learning. Collaboration is central, with regular work alongside engineering, product, and research teams.

What you will do
Lead research to develop algorithms that scale for data analytics applications.
Work with colleagues across engineering, product, and research to strengthen machine learning capabilities.
Use deep expertise to shape the direction and architecture of the Databricks platform.
Drive new ideas and solutions that influence the future of data science and analytics at Databricks.

Location
This role is based in San Francisco, California.
Remote-Friendly (Travel Required)|San Francisco, CA
Join Anthropic as a Senior Research Scientist on our Reward Models team, where you will spearhead groundbreaking research aimed at enhancing our understanding of human preferences at scale. Your innovative contributions will directly influence how our AI models, including Claude, align with human values and optimize for user needs. You will delve into the forefront of reward modeling for large language models, designing novel architectures and training methodologies for Reinforcement Learning from Human Feedback (RLHF). Your research will explore advanced evaluation techniques, including rubric-based grading, and tackle challenges such as reward hacking. Collaboration is key, as you'll work alongside teams in Finetuning, Alignment Science, and our broader research organization to ensure your findings result in tangible advancements in AI capabilities and safety. This role offers you an opportunity to address critical AI alignment challenges, leveraging cutting-edge models and substantial computational resources to further the science of safe and capable AI systems.
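The RLHF reward-modeling work this posting describes typically rests on pairwise preference learning. As a hedged sketch of the standard Bradley-Terry objective (a textbook formulation, not Anthropic's actual training code), the core loss pushes the reward model to score a human-preferred response above a rejected one:

```python
import numpy as np

def preference_loss(rewards_chosen, rewards_rejected):
    """Bradley-Terry pairwise loss: -log sigmoid(r_chosen - r_rejected),
    averaged over a batch of preference pairs. It is minimized when the
    reward model assigns higher scores to the preferred responses."""
    margin = (np.asarray(rewards_chosen, dtype=float)
              - np.asarray(rewards_rejected, dtype=float))
    # -log sigmoid(x) == log(1 + exp(-x)), computed stably via logaddexp.
    return float(np.mean(np.logaddexp(0.0, -margin)))

# The loss shrinks as the model separates chosen from rejected responses:
print(round(preference_loss([0.0], [0.0]), 4))  # 0.6931 (log 2: no separation)
print(round(preference_loss([2.0], [0.0]), 4))  # 0.1269
```

Reward hacking, also mentioned above, arises when the policy later maximizes this learned score in ways the underlying human preferences never endorsed, which is one reason the posting pairs reward modeling with evaluation research.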
Join OpenAI as a Research Scientist and explore cutting-edge machine learning innovations. In this role, you will be at the forefront of developing groundbreaking techniques while advancing our team's research initiatives. Collaborate with talented peers across various teams to discover transformative ideas that scale effectively. We seek individuals who are passionate about pushing the boundaries of AI and want to contribute to our unified research vision.
Merge Labs is an innovative research facility dedicated to merging biological sciences and artificial intelligence to enhance human capability, autonomy, and experience. Our mission is to pioneer revolutionary methodologies in brain-computer interfaces that facilitate high-bandwidth interactions with the brain, seamlessly integrate advanced AI, and maintain safety and accessibility for all users.

About the Team
At Merge, we are addressing some of the most ambitious challenges in molecular engineering, synthetic biology, and neuroscience. Our Research Platform Team is responsible for creating the experimental frameworks necessary to tackle these challenges with exceptional speed and precision. The tools and methodologies developed by our team significantly enhance molecular assembly, protein expression, mammalian cell culture, advanced microscopy, sequencing, and unique custom techniques. We collaborate with program teams to establish and optimize these capabilities, implement automation where beneficial, and integrate with our data science and machine learning pipelines, continuously pushing the boundaries of throughput and innovation.

About the Role
As a Platform Scientist, you will be instrumental in developing high-efficiency and high-throughput experimental pipelines that accelerate research initiatives. You will work closely with program leads, project scientists, data scientists, and engineers, leading your work and potentially recruiting additional team members as necessary.

Key Responsibilities:
Collaborate with program leads and scientists to identify critical experimental requirements and workflows.
Develop processes to facilitate high-throughput and/or high-efficiency experiments, including reagent production and analysis.
Scope, procure, construct, program, and validate instruments to support experimental workflows.
Ensure the quality, reliability, and integrity of data generated from automated pipelines, including defining and implementing suitable quality control checkpoints.
Work alongside data science and machine learning engineers to incorporate metadata tracking, computational design, and analysis into experimental pipelines.
Partner with electrical, mechanical, and software engineers to create custom setups.
Innovate and validate concepts to enhance experimental throughput.
Overview
Become an integral part of our dynamic R&D team dedicated to developing fully automated research systems that push the boundaries of AI. Zochi has achieved a milestone by publishing the first entirely AI-generated A* conference paper. Locus has set a new industry standard as the first AI system to surpass human experts in AI R&D.

Key Responsibilities
Conceptualize and develop innovative architectures for automated research.
Work collaboratively within a specialized team of researchers addressing cutting-edge challenges in long-horizon agentic capabilities, post-training for open-ended objectives, and environment crafting.
Document and publish key internal findings alongside success stories from external collaborations.

Qualifications
PhD or equivalent research experience in Computer Science, Machine Learning, Artificial Intelligence, or a related discipline. Outstanding candidates with significant research contributions are encouraged to apply, regardless of formal qualifications.
Demonstrated history of impactful AI/ML research contributions in academic or corporate environments.
Expertise in developing long-horizon, multi-agent systems and/or model post-training, especially in scientific domains or for open-ended discovery objectives.
A strong passion for advancing problem-solving processes and scientific discovery, thriving in high-autonomy roles and environments.

Our Culture
Competitive compensation and equity options.
Unlimited Paid Time Off (PTO), emphasizing team collaboration and a community-focused workplace.
Opportunities for conference participation and engagement in community initiatives.
Empowered roles with high levels of responsibility.

We are a small, passionate team of leading investors, researchers, and industry experts committed to the mission of accelerating discovery. Join us.
Full-time|$252K/yr - $315K/yr|On-site|San Francisco, CA; Seattle, WA; New York, NY
At Scale AI, we collaborate with leading AI laboratories to supply high-quality data and foster advancements in Generative AI research. We seek innovative Research Scientists and Research Engineers with a strong focus on post-training techniques for Large Language Models (LLMs), including Supervised Fine-Tuning (SFT), Reinforcement Learning from Human Feedback (RLHF), and reward modeling. This position emphasizes optimizing data curation and evaluation processes to boost LLM performance across text and multimodal formats. In this pivotal role, you will pioneer new methods to enhance the alignment and generalization of extensive generative models. You will work closely with fellow researchers and engineers to establish best practices in data-driven AI development. Additionally, you will collaborate with top foundation model labs, providing critical technical and strategic insights for the evolution of next-generation generative AI models.
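Of the post-training techniques this posting lists, SFT has the simplest objective, and the data-curation detail that matters most is loss masking. As an illustrative sketch under assumed conventions (per-token log-probabilities already computed; not Scale's pipeline), the loss averages negative log-likelihood over response tokens only:

```python
import numpy as np

def sft_loss(token_logprobs, response_mask):
    """Supervised fine-tuning objective for one packed example: mean
    negative log-likelihood over response tokens only. Prompt tokens
    (mask = 0) contribute nothing, so the model is trained to produce
    the demonstration rather than to imitate the prompt."""
    lp = np.asarray(token_logprobs, dtype=float)
    mask = np.asarray(response_mask, dtype=float)
    return float(-(lp * mask).sum() / mask.sum())

# Two prompt tokens are ignored; the loss averages the three response tokens.
print(sft_loss([-0.1, -0.2, -1.0, -2.0, -3.0], [0, 0, 1, 1, 1]))  # 2.0
```

The RLHF and reward-modeling stages mentioned alongside SFT then build on this fine-tuned model, replacing the fixed demonstrations with learned preference signals.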
On-site|New York City, NY; San Francisco, CA; Seattle, WA
About Anthropic
At Anthropic, we are dedicated to developing AI systems that are reliable, interpretable, and steerable. We aim to ensure that AI is safe, beneficial, and aligned with the needs of both our users and society. Our expanding team consists of passionate researchers, engineers, policy experts, and business leaders collaborating to create groundbreaking AI solutions.

About the Role
We are seeking a talented Research Engineer with a solid foundation in computer vision, who shares our belief that visual and spatial reasoning are essential for unleashing the full potential of large language models (LLMs). In this collaborative role, you will engage in research, development, and evaluation of cutting-edge Claude models, with a specific emphasis on enhancing visual and spatial capabilities. You will contribute across multiple facets of our research initiatives, employing a full-stack approach that encompasses pretraining, reinforcement learning (RL), and runtime techniques such as agentic harnesses. Additionally, you will work closely with our product team to ensure that your vision enhancements positively influence Claude's performance in real-world applications.
Join Baseten as a Post-Training Research Scientist, where you will play a vital role in advancing our machine learning capabilities. In this position, you will have the opportunity to conduct innovative research, analyze data, and contribute to the development of cutting-edge technologies. Your work will directly impact our projects and enhance the performance of our models.
Join Our Team as a Research Scientist
At Parallel, we are at the forefront of web infrastructure innovation, enabling businesses across sectors such as sales, marketing, insurance, and technology to harness the power of AI. Our state-of-the-art products empower users to develop superior AI agents with seamless and flexible access to the web.

With significant backing of $130 million from prominent investors like Kleiner Perkins, Index Ventures, and Spark Capital, we are dedicated to redefining the web for artificial intelligence. As we expand, we're assembling a top-tier team of engineers, designers, marketers, sales experts, researchers, and operational specialists committed to our vision.

Your Role: As a Research Scientist, you will tackle the challenge of training and scaling models designed to enhance web indexing capabilities.

About You: You possess a profound understanding of contemporary models and training methodologies. You enjoy engaging in discussions about the convergence of search, recommendations, and transformer models, and are passionate about translating your research into impactful products and systems utilized by millions.
Advancing Self-Improving Superintelligence
At Letta, we are on a mission to revolutionize artificial intelligence by creating self-improving agents that learn and adapt like humans. Unlike current AI systems that are often rigid and brittle, our innovative approach aims to build adaptable AI that continually evolves through experience.

Founded by the visionaries behind MemGPT at UC Berkeley's Sky Computing Lab, the birthplace of Spark and Ray, we are backed by notable figures in AI infrastructure, including Jeff Dean and Clem Delangue. Our agents are already enhancing production systems for industry leaders such as 11x and Bilt Rewards, continually learning and improving in real time.

Join our elite team of researchers and engineers dedicated to tackling AI's most significant challenges: creating machines that can reason, remember, and learn as humans do.

This position requires in-person attendance (no hybrid options) at our downtown San Francisco office, five days a week.
About Anthropic
At Anthropic, we are driven by our mission to develop reliable, interpretable, and steerable AI systems. Our commitment is to ensure that AI is safe and beneficial not only for our users but also for society as a whole. Our rapidly expanding team comprises dedicated researchers, engineers, policy specialists, and business leaders collaborating to create impactful AI technologies.

About the Role
As a Research Engineer focusing on Alignment Science, you will design and execute sophisticated machine learning experiments aimed at understanding and guiding the behavior of advanced AI systems. Your passion lies in making AI systems helpful, honest, and safe, particularly in the face of challenges posed by human-level capabilities. You embody both the scientific and engineering mindsets. In this role, you will engage in exploratory research on AI safety, concentrating on risks associated with future powerful systems (such as those classified as ASL-3 or ASL-4 under our Responsible Scaling Policy), often working in collaboration with teams focused on Interpretability, Fine-Tuning, and the Frontier Red Team.

Discover more about our current research topics and insights on our blog, as we delve into pressing issues such as:
Scalable Oversight: Innovating techniques to ensure that highly capable models remain helpful and truthful, even as they exceed human-level intelligence.
AI Control: Developing strategies to maintain the safety and harmlessness of advanced AI systems in novel or adversarial environments.
Alignment Stress Testing: Implementing rigorous testing frameworks to evaluate AI alignment under various conditions.
Team focus
The Alignment Science team at OpenAI works on intent alignment for artificial intelligence. Their goal is to develop models that accurately interpret and follow user requests, while maintaining high standards for safety and transparency. As AI models become more advanced, the team prioritizes keeping them honest about their capabilities and limitations, ensuring close alignment with user intent. Research spans both theoretical and applied domains. The team shares findings publicly and integrates new alignment techniques into OpenAI's deployed models. Recent efforts have targeted model honesty, studying how models admit mistakes, avoid generating false information, and resist manipulation. The team is looking for scalable solutions to improve instruction following and reliability in AI systems. Quantitative research is a core part of this work, especially reinforcement learning and related training and evaluation methods that support safer, more reliable AI interactions.

Role overview
This Researcher in Alignment Science position (which may be titled Research Engineer or Research Scientist) centers on designing and running experiments to improve how models follow user intent. Responsibilities include developing training protocols, building evaluation frameworks, and strengthening research infrastructure to support effective alignment in new models. The job is based in San Francisco, CA, with a hybrid schedule requiring three days per week in the office. OpenAI provides relocation support for new hires. Exceptional remote candidates who can work independently and collaborate closely with the team will also be considered.

Main responsibilities
Design and conduct experiments on alignment techniques, including intent following, honesty, calibration, and robustness.
Train and assess models using reinforcement learning and other empirical machine learning approaches.
Develop evaluation metrics for failure modes such as hallucination, compliance gaps, reward exploitation, and covert actions.
Investigate methods to encourage models to self-verify and report limitations honestly, including confession-style training objectives.
Create monitoring tools and interventions at inference time to help models act as intended.
Join Our Innovative Team
At OpenAI, our Alignment team is committed to building AI systems that prioritize safety, trustworthiness, and alignment with human values, even as these systems evolve and grow in complexity. We are at the forefront of AI research, developing advanced methodologies to ensure that AI adheres to human intent across diverse scenarios, including high-stakes and adversarial environments. Our focus is on tackling the most critical challenges, addressing areas where AI can have profound impacts. By quantifying risks and making meaningful improvements, we aim to prepare our models for the complexities of real-world applications.

Our approach is built on two foundational pillars: (1) integrating enhanced capabilities into alignment, ensuring our techniques evolve positively with increasing capabilities, and (2) centering human input through the development of mechanisms that allow humans to communicate their intent and effectively monitor AI systems, even in intricate situations.

Your Role in Shaping the Future
As a Research Engineer / Scientist on our Alignment team, you will play a pivotal role in ensuring our AI systems align with human intent in complex and unpredictable contexts. Your responsibilities will include designing and implementing scalable solutions that maintain alignment as AI capabilities expand, while incorporating human oversight into AI decision-making processes.

This position is based in San Francisco, CA, and follows a hybrid work model of three days in the office each week. We also offer relocation assistance to new team members.

Key Responsibilities:
Develop and assess alignment capabilities that are context-sensitive, subjective, and challenging to quantify.
Create evaluations to accurately measure risks and alignment with human values and intentions.
Construct tools and evaluations to examine model robustness across various scenarios.
Design experiments to explore how alignment scales with compute resources, data, context lengths, actions, and adversarial influences.
Innovate new Human-AI interaction frameworks and scalable supervision methods that enhance human engagement and understanding of AI systems.
About Our Team
The Future of Computing Research team is a dynamic applied research unit within the Consumer Devices group at OpenAI. We are dedicated to pioneering innovative methods, models, and evaluation frameworks that propel our vision for the future of computing. Our focus lies at the cutting edge of multimodal AI, transforming emerging model capabilities into product experiences that are not only functional and enjoyable but also foster long-term trust.

Our research delves into a new generation of AI systems capable of learning and evolving over time, adapting to individual needs, and enhancing daily life. This includes exploring long-term memory, user modeling, and personalized systems aligned with broader human goals, values, and overall well-being.

We collaborate closely across multiple disciplines (research, engineering, design, product management, and safety) to define what it means to build AI systems that recognize and respond to user needs in a contextually aware and respectful manner, ensuring demonstrable benefits.

About the Position
We are seeking a passionate Research Engineer/Scientist to join our Future of Computing Research team, focusing on Reinforcement Learning from Human Feedback (RLHF) and post-training techniques for personalized multimodal AI systems.

In this role, you will be instrumental in establishing the learning and evaluation foundations necessary for models to become increasingly context-aware, adaptive, and useful over time. You will tackle challenges such as reward modeling, preference learning, long-horizon evaluation, and policy improvement for systems that are required to make high-quality behavioral decisions in real-world settings. Our success is measured not just by improved benchmark performance but by enhanced model behavior in actual use cases.

The ideal candidate is enthusiastic about advancing beyond simplistic one-turn assistant interactions towards systems that learn and grow through feedback, utilizing richer signals and training against meaningful notions of user value. This requires a thoughtful approach to reward design, feedback mechanisms, and evaluation frameworks that assess the long-term benefits of interventions.

This position is based in San Francisco, CA, with a hybrid work model of four days in the office each week. We also provide relocation assistance for new hires.

Key Responsibilities:
Develop RLHF and post-training strategies for multimodal models.
Create reward models and preference-learning pipelines to foster adaptive, personalized model behavior.
Engage in long-term evaluation and policy refinement to enhance user interactions.
About the Team
Join the innovative Post-Training team at OpenAI, where we focus on refining and elevating pre-trained models for deployment in ChatGPT, our API, and future products. Collaborating closely with various research and product teams, we conduct crucial research that prepares our models for real-world deployment to millions of users, ensuring they are safe, efficient, and reliable.

About the Role
As a Research Engineer / Scientist, you will spearhead the research and development of enhancements to our models. Our work intersects reinforcement learning and product development, aiming to create cutting-edge solutions.

We seek passionate individuals with robust machine learning engineering skills and research experience, particularly with innovative and powerful models. The ideal candidate will be driven by a commitment to product-oriented research.

This position is located in San Francisco, CA, and follows a hybrid work model requiring three days in the office each week. Relocation assistance is available for new employees.

In this role, you will:
Lead and execute a research agenda aimed at enhancing model capabilities and performance.
Work collaboratively with research and product teams to empower customers to optimize their models.
Develop robust evaluation frameworks to monitor and assess modeling advancements.
Design, implement, test, and debug code across our research stack.

You may excel in this role if you:
Possess a deep understanding of machine learning and its applications.
Have experience with relevant models and methodologies for evaluating model improvements.
Are adept at navigating large ML codebases for debugging purposes.
Thrive in a fast-paced and technically intricate environment.

About OpenAI
OpenAI is a pioneering AI research and deployment organization dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity.
We are committed to pushing the boundaries of AI capabilities while prioritizing safety and human-centric values in our products. Our mission is to embrace diverse perspectives, voices, and experiences that represent the full spectrum of humanity, as we strive for a future where AI is a powerful ally for everyone.
About Our Team
Join the forefront of AI innovation with the RL and Reasoning team at OpenAI. Our team is dedicated to advancing reinforcement learning research and has pioneered transformative projects, including o1 and o3. We are committed to pushing the limits of generative models while ensuring their scalable deployment.

About the Role
As a Research Engineer/Research Scientist at OpenAI, you will play a pivotal role in enhancing AI alignment and capabilities through state-of-the-art reinforcement learning techniques. Your contributions will be essential in training intelligent, aligned, and versatile agents that power various AI models.

We seek individuals with a solid foundation in reinforcement learning research, agile coding skills, and a passion for rapid iteration.

This position is located in San Francisco, CA, and follows a hybrid work model of three days in the office per week. We also provide relocation assistance for new hires.

You may excel in this role if:
You are enthusiastic about being at the cutting edge of RL and language model research.
You take initiative, owning ideas and driving them to fruition.
You value principled methodologies, conducting simple experiments in controlled environments to draw trustworthy conclusions.
You thrive in a fast-paced, complex technical environment where rapid iteration is essential.
You are adept at navigating extensive ML codebases to troubleshoot and enhance them.
You possess a profound understanding of machine learning and its applications.

About OpenAI
OpenAI is a pioneering AI research and deployment organization committed to ensuring that general-purpose artificial intelligence serves the greater good for humanity. We strive to push the boundaries of AI system capabilities while prioritizing safe deployment through our innovative products. We recognize AI as a powerful tool that must be developed with safety and human-centric principles, embracing diverse perspectives to reflect the full spectrum of humanity.

We are proud to be an equal opportunity employer, welcoming applicants from all backgrounds without discrimination based on race, religion, color, national origin, sex, sexual orientation, age, veteran status, disability, genetic information, or any other legally protected characteristic.
Full-time|$122K/yr - $170K/yr|Remote|Remote; San Diego, California, United States; South San Francisco, California, United States
Join Veracyte, a leader in transforming cancer care through innovative diagnostics that empower patients and clinicians alike. At Veracyte, we provide an inspiring work environment where you can make a significant impact on patients' lives while advancing your career. Our culture, known as the Veracyte way, emphasizes collaboration, resilience, and a commitment to excellence.

Our Core Values:
We Seek A Better Way: We boldly innovate and learn from challenges to improve cancer care.
We Make It Happen: We prioritize urgency, quality, and enjoyment in our work.
We Are Stronger Together: We foster open collaboration and celebrate our collective successes.
We Care Deeply: We honor our diverse backgrounds and support each other in doing what is right.

The Role:
We are on the lookout for an enthusiastic Bioinformatics Research Scientist to join our Translational Bioinformatics group within the Bioinformatics and Data Science Team. This pivotal role involves leveraging computational and statistical techniques on extensive transcriptomic and genomic datasets, particularly focusing on prostate cancer biology and its clinical implications. The ideal candidate will play a key role in identifying clinically relevant molecular signatures that enhance disease stratification, prognosis, and the development of diagnostics, while effectively communicating findings to both scientific and clinical audiences. This position offers a unique opportunity to bridge the realms of data science, biology, and clinical research through close collaboration with scientists, clinicians, and cross-functional teams to convert complex data into actionable insights.
About Our Team
Join the Foundations Research team, where we tackle ambitious and innovative projects that could redefine the future of AI. Our mission is to enhance the science behind our training and scaling initiatives, focusing on pioneering frontier models. We are dedicated to advancing data utilization, scaling methodologies, optimization strategies, model architectures, and efficiency enhancements to accelerate our scientific breakthroughs.

About the Position
We are on the lookout for a dynamic technical research lead to spearhead our embeddings-focused retrieval initiatives. You will oversee a talented team of research scientists and engineers committed to developing foundational technologies that enable models to access and utilize the right information precisely when needed. This includes crafting innovative embedding training objectives, architecting scalable vector storage, and implementing adaptive indexing techniques.

This pivotal role will contribute to various OpenAI products and internal research initiatives, offering opportunities for scientific publication and significant technical influence.

This position is located in San Francisco, CA, where we embrace a hybrid work model, requiring three days in the office weekly, and we provide relocation assistance for new hires.

Your Responsibilities
Lead cutting-edge research on embedding models and retrieval systems optimized for grounding, relevance, and adaptive reasoning.
Supervise a team of researchers and engineers in building an end-to-end infrastructure for training, evaluating, and integrating embeddings into advanced models.
Drive advancements in dense, sparse, and hybrid representation techniques, metric learning, and retrieval systems.
Work collaboratively with Pretraining, Inference, and other Research teams to seamlessly integrate retrieval throughout the model lifecycle.
Contribute to OpenAI's ambitious vision of developing AI systems with robust memory and knowledge access capabilities rooted in learned representations.

You Will Excel in This Role If You Possess
A proven track record of leading high-performance teams of researchers or engineers within ML infrastructure or foundational research.
In-depth technical knowledge in representation learning, embedding models, or vector retrieval systems.
Familiarity with transformer-based large language models and their interaction with embedding spaces and objectives.
Research experience in areas such as contrastive learning and retrieval-augmented generation.
About Granica
Granica is an innovative AI research and infrastructure firm dedicated to creating reliable, steerable representations of enterprise data.

We establish trust through Crunch, a policy-driven health layer optimizing large tabular datasets for efficiency, reliability, and reversibility. Utilizing this foundation, we are developing Large Tabular Models: systems designed to learn cross-column and relational structures, delivering trustworthy answers and automation with integrated provenance and governance.

Our Mission
Current AI capabilities are hindered not only by model design but also by the inefficiencies of the data that supports it. At scale, each redundant byte, poorly organized dataset, and inefficient data pathway contributes to significant costs, latency, and energy waste.

Granica's mission is to eliminate these inefficiencies. We leverage groundbreaking research in information theory, probabilistic modeling, and distributed systems to craft self-optimizing data infrastructure: systems that continually enhance how information is represented and utilized by AI.

Led by Prof. Andrea Montanari from Stanford, Granica's Research group merges advances in information theory with learning efficiency in large-scale distributed systems. We collectively believe that the next significant leap in AI will originate from innovations in efficient systems, rather than merely larger models.

Granica is at the forefront of developing a new category of structured AI models: foundational models designed to learn and reason from the relational, tabular, and structured data that drives the global economy. While many focus on unstructured text or media, we are venturing into the next frontier: systems capable of comprehending and reasoning over structured information.

Your Contributions
Create and prototype algorithms that form the core of structured AI, enhancing representation learning and efficient information modeling for enterprise and tabular data at petabyte scale.
Develop adaptive learners merging statistical learning theory with systems optimization at scale, contributing to a new generation of foundational models for structured information.
Design architectures that unify symbolic, relational, and neural components, enabling AI systems to reason directly over structured enterprise data.
Construct cost models and optimization frameworks that enhance the efficiency of structured learning, both computationally and economically.
Role overview
The Principal Research Scientist – Scaling at Databricks leads research projects that advance how the company's data analytics platform handles large workloads. This San Francisco-based role focuses on designing and improving algorithms that enable efficient large-scale data processing and machine learning. Collaboration is central, with regular work alongside engineering, product, and research teams.

What you will do
Lead research to develop algorithms that scale for data analytics applications.
Work with colleagues across engineering, product, and research to strengthen machine learning capabilities.
Use deep expertise to shape the direction and architecture of the Databricks platform.
Drive new ideas and solutions that influence the future of data science and analytics at Databricks.

Location
This role is based in San Francisco, California.
Remote|Remote-Friendly (Travel Required)|San Francisco, CA
Join Anthropic as a Senior Research Scientist on our Reward Models team, where you will spearhead groundbreaking research aimed at enhancing our understanding of human preferences at scale. Your innovative contributions will directly influence how our AI models, including Claude, align with human values and optimize for user needs. You will delve into the forefront of reward modeling for large language models, designing novel architectures and training methodologies for Reinforcement Learning from Human Feedback (RLHF). Your research will explore advanced evaluation techniques, including rubric-based grading, and tackle challenges such as reward hacking. Collaboration is key, as you'll work alongside teams in Finetuning, Alignment Science, and our broader research organization to ensure your findings result in tangible advancements in AI capabilities and safety. This role offers you an opportunity to address critical AI alignment challenges, leveraging cutting-edge models and substantial computational resources to further the science of safe and capable AI systems.
Join OpenAI as a Research Scientist and explore cutting-edge machine learning innovations. In this role, you will be at the forefront of developing groundbreaking techniques while advancing our team's research initiatives. Collaborate with talented peers across various teams to discover transformative ideas that scale effectively. We seek individuals who are passionate about pushing the boundaries of AI and want to contribute to our unified research vision.
Merge Labs is an innovative research facility dedicated to merging biological sciences and artificial intelligence to enhance human capability, autonomy, and experience. Our mission is to pioneer revolutionary methodologies in brain-computer interfaces that facilitate high-bandwidth interactions with the brain, seamlessly integrate advanced AI, and maintain safety and accessibility for all users.

About the Team
At Merge, we are addressing some of the most ambitious challenges in molecular engineering, synthetic biology, and neuroscience. Our Research Platform Team is responsible for creating the experimental frameworks necessary to tackle these challenges with exceptional speed and precision. The tools and methodologies developed by our team significantly enhance molecular assembly, protein expression, mammalian cell culture, advanced microscopy, sequencing, and unique custom techniques. We collaborate with program teams to establish and optimize these capabilities, implement automation where beneficial, and integrate with our data science and machine learning pipelines, continuously pushing the boundaries of throughput and innovation.

About the Role
As a Platform Scientist, you will be instrumental in developing high-efficiency and high-throughput experimental pipelines that accelerate research initiatives. You will work closely with program leads, project scientists, data scientists, and engineers, leading your work and potentially recruiting additional team members as necessary.

Key Responsibilities:
Collaborate with program leads and scientists to identify critical experimental requirements and workflows.
Develop processes to facilitate high-throughput and/or high-efficiency experiments, including reagent production and analysis.
Scope, procure, construct, program, and validate instruments to support experimental workflows.
Ensure the quality, reliability, and integrity of data generated from automated pipelines, including defining and implementing suitable quality control checkpoints.
Work alongside data science and machine learning engineers to incorporate metadata tracking, computational design, and analysis into experimental pipelines.
Partner with electrical, mechanical, and software engineers to create custom setups.
Innovate and validate concepts to enhance experimental throughput.
Overview
Become an integral part of our dynamic R&D team dedicated to developing fully automated research systems that push the boundaries of AI. Zochi has achieved a milestone by publishing the first entirely AI-generated A* conference paper. Locus has set a new industry standard as the first AI system to surpass human experts in AI R&D.

Key Responsibilities
Conceptualize and develop innovative architectures for automated research.
Work collaboratively within a specialized team of researchers addressing cutting-edge challenges in long-horizon agentic capabilities, post-training for open-ended objectives, and environment crafting.
Document and publish key internal findings alongside success stories from external collaborations.

Qualifications
PhD or equivalent research experience in Computer Science, Machine Learning, Artificial Intelligence, or a related discipline. Outstanding candidates with significant research contributions are encouraged to apply, regardless of formal qualifications.
Demonstrated history of impactful AI/ML research contributions in academic or corporate environments.
Expertise in developing long-horizon, multi-agent systems and/or model post-training, especially in scientific domains or for open-ended discovery objectives.
A strong passion for advancing problem-solving processes and scientific discovery, thriving in high-autonomy roles and environments.

Our Culture
Competitive compensation and equity options.
Unlimited Paid Time Off (PTO), emphasizing team collaboration and a community-focused workplace.
Opportunities for conference participation and engagement in community initiatives.
Empowered roles with high levels of responsibility.

We are a small, passionate team of leading investors, researchers, and industry experts committed to the mission of accelerating discovery. Join us.
Full-time|$252K/yr - $315K/yr|On-site|San Francisco, CA; Seattle, WA; New York, NY
At Scale AI, we collaborate with leading AI laboratories to supply high-quality data and foster advancements in Generative AI research. We seek innovative Research Scientists and Research Engineers with a strong focus on post-training techniques for Large Language Models (LLMs), including Supervised Fine-Tuning (SFT), Reinforcement Learning from Human Feedback (RLHF), and reward modeling. This position emphasizes optimizing data curation and evaluation processes to boost LLM performance across text and multimodal formats. In this pivotal role, you will pioneer new methods to enhance the alignment and generalization of extensive generative models. You will work closely with fellow researchers and engineers to establish best practices in data-driven AI development. Additionally, you will collaborate with top foundation model labs, providing critical technical and strategic insights for the evolution of next-generation generative AI models.
On-site|On-site|New York City, NY; San Francisco, CA; Seattle, WA
About AnthropicAt Anthropic, we are dedicated to developing AI systems that are reliable, interpretable, and steerable. We aim to ensure that AI is safe, beneficial, and aligned with the needs of both our users and society. Our expanding team consists of passionate researchers, engineers, policy experts, and business leaders collaborating to create groundbreaking AI solutions.About the RoleWe are seeking a talented Research Engineer with a solid foundation in computer vision, who shares our belief that visual and spatial reasoning are essential for unleashing the full potential of large language models (LLMs). In this collaborative role, you will engage in research, development, and evaluation of cutting-edge Claude models, with a specific emphasis on enhancing visual and spatial capabilities. You will contribute across multiple facets of our research initiatives, employing a full-stack approach that encompasses pretraining, reinforcement learning (RL), and runtime techniques such as agentic harnesses. Additionally, you will work closely with our product team to ensure that your vision enhancements positively influence Claude's performance in real-world applications.
Join Baseten as a Post-Training Research Scientist, where you will play a vital role in advancing our machine learning capabilities. In this position, you will have the opportunity to conduct innovative research, analyze data, and contribute to the development of cutting-edge technologies. Your work will directly impact our projects and enhance the performance of our models.
Join Our Team as a Research Scientist
At Parallel, we are at the forefront of web infrastructure innovation, enabling businesses across sectors such as sales, marketing, insurance, and technology to harness the power of AI. Our state-of-the-art products empower users to develop superior AI agents with seamless and flexible access to the web. With significant backing of $130 million from prominent investors like Kleiner Perkins, Index Ventures, and Spark Capital, we are dedicated to redefining the web for artificial intelligence. As we expand, we're assembling a top-tier team of engineers, designers, marketers, sales experts, researchers, and operational specialists committed to our vision.
Your Role: As a Research Scientist, you will tackle the challenge of training and scaling models designed to enhance web indexing capabilities.
About You: You possess a profound understanding of contemporary models and training methodologies. You enjoy engaging in discussions about the convergence of search, recommendations, and transformer models, and are passionate about translating your research into impactful products and systems used by millions.
Advancing Self-Improving Superintelligence
At Letta, we are on a mission to revolutionize artificial intelligence by creating self-improving agents that learn and adapt like humans. Unlike current AI systems, which are often rigid and brittle, our approach aims to build adaptable AI that continually evolves through experience. Founded by the visionaries behind MemGPT at UC Berkeley's Sky Computing Lab, the birthplace of Spark and Ray, we are backed by notable figures in AI infrastructure, including Jeff Dean and Clem Delangue. Our agents are already enhancing production systems for industry leaders such as 11x and Bilt Rewards, continually learning and improving in real time.
Join our team of researchers and engineers dedicated to tackling one of AI's most significant challenges: creating machines that can reason, remember, and learn as humans do.
This position requires in-person attendance (no hybrid options) at our downtown San Francisco office, five days a week.
Aug 15, 2025