AI/ML Research Internship Opportunity jobs in San Francisco – Browse 4,920 openings on RoboApply Jobs

AI/ML Research Internship Opportunity jobs in San Francisco

Open roles matching “AI/ML Research Internship Opportunity” in San Francisco. 4,920 active listings on RoboApply Jobs.


1 - 20 of 4,920 Jobs
AfterQuery
Internship | On-site | San Francisco

Join AfterQuery as an AI/ML Research Intern and immerse yourself in groundbreaking artificial intelligence projects. This internship is designed for exceptional undergraduate and master's students eager to collaborate with our research team on advanced reasoning and agentic models. You will have the opportunity to access specialized datasets and work closely with industry experts, contributing to exciting AI research that could lead to co-authored papers and presentations at prestigious AI conferences.

We invite students currently enrolled in relevant programs to apply. This role requires a commitment of 10 to 40 hours per week, adaptable to the needs of the company.

Nov 6, 2025
fabrion
Full-time | On-site | San Francisco Bay Area

ML/AI Research Engineer - Founding Team at Agentic AI Lab
Location: San Francisco Bay Area
Type: Full-Time
Compensation: Competitive salary + meaningful equity (founding tier)

At fabrion, backed by 8VC, we are assembling a top-tier team dedicated to addressing one of the most pressing infrastructure challenges in the industry.

About the Role
Join us in shaping the future of enterprise AI infrastructure, focusing on agents, retrieval-augmented generation (RAG), knowledge graphs, and multi-tenant governance. As an ML/AI Research Engineer, you will spearhead the design, training, evaluation, and optimization of agent-native AI models. Your work will integrate LLMs, vector search, graph reasoning, and reinforcement learning, establishing the intelligence layer for our enterprise data fabric. This role goes beyond prompt engineering; it encompasses the entire ML lifecycle, from data curation and fine-tuning to thorough evaluation, interpretability, and deployment, all while considering cost-effectiveness, alignment, and agent coordination.

Core Responsibilities
- Fine-tune and assess open-source LLMs (e.g., LLaMA 3, Mistral, Falcon, Mixtral) for enterprise applications, leveraging both structured and unstructured data.
- Construct and enhance RAG pipelines utilizing LangChain, LangGraph, LlamaIndex, or Dust, integrating with our vector databases and internal knowledge graphs.
- Train agent architectures (ReAct, AutoGPT, BabyAGI, OpenAgents) using enterprise task datasets.
- Develop embedding-based memory and retrieval chains employing token-efficient chunking strategies.
- Create reinforcement learning pipelines to enhance agent behaviors (e.g., RLHF, DPO, PPO).
- Establish scalable evaluation harnesses for LLM and agent performance, including synthetic evaluations, trace capture, and explainability tools.
- Contribute to model observability, drift detection, error classification, and alignment efforts.
- Optimize inference latency and GPU resource utilization across both cloud and on-premises environments.

Desired Experience
Model Training: Deep understanding of machine learning principles and hands-on experience with model training.

Aug 28, 2025
Eight Sleep
Internship | On-site | San Francisco

Become a Part of the Sleep Fitness Revolution
At Eight Sleep, our mission is to unleash human potential through optimal sleep. As the pioneering sleep fitness company globally, we are transforming the concept of being well-rested, developing advanced hardware, software, and AI technologies to make this vision a reality. Our innovative products enhance mental, physical, and emotional performance by turning every night of sleep into a personalized, data-driven recovery experience. Trusted by elite athletes and health-conscious individuals in over 30 countries, we have been recognized by Fast Company as one of the Most Innovative Companies in 2019, 2022, and 2023, and have been honored twice by TIME as one of the “Best Inventions of the Year.” We operate with the speed and focus of a high-performance team, driven by a desire to make a significant impact. We are not just about delivering; we are about continuous improvement and a relentless attention to detail that helps our members sleep better and wake up stronger.

Every position at Eight Sleep is an opportunity to innovate cutting-edge technology, collaborate with world-class talent, and contribute to a future where sleep is not merely passive but a powerful tool for enhancing life quality. If you are ready to break away from the ordinary and are passionate about pushing boundaries, this is your chance to join us in leading the movement that is revolutionizing sleep and unlocking our capabilities upon waking.

Excellence is Our Standard
Our mission demands unwavering intensity. At Eight Sleep, we embody the mindset of the world’s top performers: focused, relentless, and driven to excel in our craft. Picture the mamba mentality of Kobe Bryant, applied to bold ideas, next-gen technology, and flawless execution. This is not a standard 9-to-5 role. Our team is deeply committed, often investing over 60 hours a week, not because of obligation, but out of genuine investment in our work. We are here to build swiftly, push boundaries, and deliver uncompromising results. If you thrive under pressure and are eager to engage in the most meaningful work of your career, you will find your home here. If you seek something less demanding, this role may not suit you.

The Internship Opportunity
Eight Sleep is searching for passionate Machine Learning research interns to tackle AI/ML challenges in the sleep fitness and personal health domains. You will collaborate closely with a cross-functional team to define problems, develop end-to-end prototypes, validate findings with data, and iterate towards solutions ready for deployment. Throughout the internship, you will receive hands-on mentorship, enhance your technical skills, and present your findings to leadership. Depending on the relevance, you may also have the opportunity to turn your work into a publication.

We are looking for interns who are outcome-oriented, think systematically, and make decisions grounded in data. If you are driven by results and eager to contribute to a transformative mission, we want to hear from you!

Oct 7, 2025
Scale AI, Inc.
Full-time | $218.4K/yr - $273K/yr | On-site | San Francisco, CA; Seattle, WA; New York, NY

Join Scale AI's ML platform team (RLXF) as a Machine Learning Research Engineer, where you will play a pivotal role in developing our advanced distributed framework for training and inference of large language models. This platform is vital for enabling machine learning engineers, researchers, data scientists, and operators to conduct rapid and automated training, as well as evaluation of LLMs and data quality.

At Scale, we occupy a unique position in the AI landscape, serving as an essential provider of training and evaluation data along with comprehensive solutions for the entire ML lifecycle. You will collaborate closely with Scale's ML teams and researchers to enhance the foundational platform that underpins our ML research and development initiatives. Your contributions will be crucial in optimizing the platform to support the next generation of LLM training, inference, and data curation.

If you are passionate about driving the future of AI through groundbreaking innovations, we want to hear from you!

Mar 26, 2026
Liquid AI
Full-time | On-site | San Francisco

About Liquid AI
Liquid AI, a pioneering company spun out of MIT CSAIL, is at the forefront of developing general-purpose AI systems that operate efficiently across various platforms, from data center accelerators to on-device hardware. Our commitment to low latency, minimal memory usage, privacy, and reliability allows us to partner with some of the most esteemed enterprises in consumer electronics, automotive, life sciences, and financial services. As we experience rapid growth, we are seeking exceptional talent to join our innovative journey.

The Opportunity
Join our cutting-edge Audio team, where we are developing advanced speech-language models capable of handling Speech-to-Text (STT), Text-to-Speech (TTS), and speech-to-speech tasks within a unified architecture. This pivotal role supports applied audio model development, directly collaborating with the technical lead to deliver production systems that operate on-device under real-time constraints. You will take ownership of key workstreams encompassing data pipelines, evaluation systems, and customer deployments. If you are eager to tackle unique technical challenges within a small, elite team where your contributions are impactful, this is the role for you.

What We're Looking For
We are seeking an individual who:
- Builds first, theorizes later: You prioritize shipping working systems over theoretical models; production-grade code is your default.
- Owns outcomes end-to-end: You take full responsibility for everything from data pipelines to customer deployments and don't shy away from challenges.
- Thrives under constraints: On-device, low-latency, memory-constrained environments motivate you. You view constraints as opportunities for innovative design.
- Ramps quickly on new territory: You are comfortable closing knowledge gaps swiftly and actively seek feedback to drive results.

The Work
- Develop and scale data pipelines for audio model training, including preprocessing, augmentation, and quality filtering at scale.
- Design, implement, and maintain evaluation systems that assess multimodal performance across both internal and public benchmarks.
- Fine-tune and adapt audio models to cater to customer-specific use cases, taking charge from requirement gathering through to deployment.
- Contribute production code to the core audio repository while collaborating closely with infrastructure and research teams.
- Facilitate experimentation under real hardware constraints, transitioning smoothly between customer-focused projects and core development initiatives.

Dec 16, 2025
Pluralis Research
Full-time | On-site | San Francisco

Overview
Pluralis Research is at the forefront of Protocol Learning, innovating a decentralized approach to train and deploy AI models that democratizes access beyond just well-funded corporations. By aggregating computational resources from diverse participants, we incentivize collaboration while safeguarding against centralized control of model weights, paving the way for a truly open and cooperative environment for advanced AI.

We are seeking a talented Machine Learning Training Platform Engineer to design, develop, and scale the core infrastructure that powers our decentralized ML training platform. In this role, you will have ownership over essential systems including infrastructure orchestration, distributed computing, and service integration, facilitating ongoing experimentation and large-scale model training.

Responsibilities
- Multi-Cloud Infrastructure: Create resource management systems that provision and orchestrate computing resources across AWS, GCP, and Azure using infrastructure-as-code tools like Pulumi or Terraform. Manage dynamic scaling, state synchronization, and concurrent operations across hundreds of diverse nodes.
- Distributed Training Systems: Design fault-tolerant infrastructure for distributed machine learning, including GPU clusters, NVIDIA runtime, S3 checkpointing, large dataset management and streaming, health monitoring, and resilient retry strategies.
- Real-World Networking: Develop systems that simulate and manage real-world network conditions, such as bandwidth shaping, latency injection, and packet loss, while accommodating dynamic node churn and ensuring efficient data flow across workers with varying connectivity, as our training occurs on consumer nodes and non-co-located infrastructure.

Apr 1, 2026
Pluralis Research
Full-time | On-site | San Francisco

Overview
Pluralis Research is at the forefront of innovation in Protocol Learning, specializing in the collaborative training of foundational models. Our approach ensures that no single participant ever has or can obtain a complete version of the model. This initiative aims to create community-driven, collectively owned frontier models that operate on self-sustaining economic principles.

We are seeking experienced Senior or Staff Machine Learning Engineers with over 5 years of expertise in distributed systems and large-scale machine learning training. In this role, you will design and implement a groundbreaking substrate for training distributed ML models that function effectively over consumer-grade internet connections.

Apr 1, 2026
Sygaldry Technologies
ML Infrastructure Engineer
Full-time | On-site | San Francisco

About Sygaldry Technologies
Sygaldry Technologies develops quantum-accelerated AI servers in San Francisco, focusing on faster AI training and inference. By combining quantum technology with artificial intelligence, the team addresses challenges in computing costs and energy efficiency. Their AI servers integrate multiple qubit types within a fault-tolerant system, aiming for a balance of cost, scalability, and speed. The company values optimism, rigor, and a drive to solve complex problems in physics, engineering, and AI.

Role Overview: ML Infrastructure Engineer
The ML Infrastructure Engineer joins the AI & Algorithms team, which includes research scientists, applied mathematicians, and quantum algorithm specialists. This role centers on building and maintaining the compute infrastructure that powers advanced research. The systems you build will support reliable GPU access, reproducible experiments, and scalable workloads, so researchers can focus on their core work without needing deep cloud expertise. Expect to design and manage compute platforms for a range of tasks, including quantum circuit simulation, large-scale numerical optimization, model training, tensor network contractions, and high-throughput data generation. These workloads span multiple cloud providers and on-premises GPU servers.

Key Responsibilities
- Develop compute abstractions for diverse workloads, such as GPU-accelerated simulations, distributed training, high-throughput CPU jobs, and interactive analyses using frameworks like PyTorch and JAX.
- Set up infrastructure to support experiment tracking and reproducibility.
- Create developer tools that make cloud computing feel local, streamlining environment setup, job submission, monitoring, and artifact management.
- Scale experiments from single-GPU prototypes to large, multi-node production runs.

Multi-Cloud GPU Orchestration
- Design orchestration strategies for workloads across multiple cloud providers, optimizing job routing for cost, availability, and capability.
- Monitor and improve cloud spending, keeping track of credit balances, burn rates, and expiration dates.

Apr 14, 2026
OpenAI
Full-time | On-site | San Francisco

About Our Team
At OpenAI, our mission is to develop safe artificial general intelligence (AGI) that serves the greater good of humanity. To accomplish this, we strive to attract the most talented individuals from around the globe to innovate and expand the possibilities of technology. The Research Recruiting team is integral to this mission, embedded within the research organization to ensure a deep understanding of our dynamic priorities. We partner closely with our research staff to foster trust and strategically influence the future of OpenAI's talent acquisition.

About the Position
In this pivotal role, you will spearhead and implement long-term strategies to identify, engage, and recruit top AI researchers, research engineers, and technical scientists who are at the cutting edge of machine learning. This position transcends traditional recruiting functions. As a strategic collaborator with our research teams, you will help define hiring priorities, shape effective search strategies, and influence candidate evaluation and hiring decisions that directly affect our research direction and mission fulfillment.

Your Responsibilities Will Include:
- Collaborating with research and technical staff to establish hiring priorities and anticipate future talent needs as technical roadmaps evolve.
- Proactively sourcing and nurturing exceptional talent in AI/ML research across various sectors, often before formal hiring needs arise.
- Leveraging market insights and candidate feedback to inform hiring decisions, including leveling and compensation strategies for niche research roles.
- Acting as a trusted advisor throughout the candidate evaluation and closing processes, assisting leaders in assessing research excellence, potential, and cultural fit.
- Working closely with your sourcing partner to execute complex, impactful searches in rapidly evolving technical domains.

You May Be a Good Fit If You:
- Possess significant experience in recruiting within highly technical or specialized environments.
- Have a deep interest in AI research and a passion for engaging with global research communities.

Feb 23, 2026
Sciforium
Full-time | On-site | San Francisco

At Sciforium, we are at the forefront of AI infrastructure, creating next-generation multimodal AI models and a proprietary high-efficiency serving platform. With substantial backing from AMD, our team is rapidly expanding to develop the complete stack necessary for cutting-edge AI models and real-time applications.

About the Role
We are on the lookout for a talented Senior Research Scientist with expertise in advanced AI and machine learning. This role entails spearheading innovative research projects focusing on large language models, generative media, model architecture, optimization, and scalable training systems. You will engage directly with contemporary ML frameworks, publish original research, and collaborate closely with engineering teams to transition impactful models into production. This position is perfect for a driven researcher excited about pioneering breakthroughs in AI.

What You'll Do
- Lead research initiatives in advanced machine learning topics such as LLMs, generative AI, foundational modeling, optimization strategies, diffusion models, and novel Transformer architectures.
- Design, implement, and assess new ML algorithms using frameworks like PyTorch and JAX.
- Conduct large-scale distributed training experiments utilizing multi-GPU/TPU systems and cutting-edge compute infrastructure.
- Enhance performance through debugging frameworks, optimizing speed, and refining training pipelines.
- Generate high-quality research outputs including academic papers, internal reports, patents, and reproducible code.
- Work collaboratively with engineering and product teams to convert research prototypes into robust production systems.
- Stay updated with the latest research advancements to incorporate state-of-the-art techniques into Sciforium's AI roadmap.
- Mentor junior researchers and actively contribute to fostering a world-class AI research culture.

Nov 15, 2025
Handshake AI
Internship | On-site | San Francisco, CA

About Handshake
Handshake stands at the forefront of the evolving AI economy, serving as a vital career network that connects over 20 million knowledge workers, 1,600 educational institutions, and 1 million employers, including all Fortune 50 companies. Our platform is trusted for career discovery, hiring, and skills development, facilitating everything from freelance AI training gigs to internships and full-time careers. Our unique value proposition has resulted in exceptional growth, with our annual recurring revenue tripling in 2025.

Why you should join Handshake now:
- Play a pivotal role in shaping the careers of individuals in the AI economy, impacting your community and peers.
- Collaborate closely with leading AI labs, Fortune 500 partners, and top-tier educational institutions.
- Be part of a team guided by experts from renowned companies such as Scale AI, Meta, xAI, Notion, Coinbase, and Palantir.
- Contribute to building a rapidly expanding business with substantial revenue potential.

About the Role
As an AI Research Intern at Handshake during the summer of 2026, you will be integral to our research team, which specializes in developing data engines that drive the next generation of large language models. This internship will offer you the opportunity to engage in focused projects that could lead to publishable research contributions and immediate application within our production stack. The internship is set to commence between May and June 2026.

Projects You Could Tackle
- LLM Post-Training: Explore innovative RLHF/GRPO pipelines, enhance instruction-following capabilities, and refine reasoning-trace supervision.
- LLM Evaluation: Develop new multilingual and domain-specific benchmarks, conduct studies comparing automatic and human preferences, and perform robustness diagnostics.
- Data Efficiency: Implement active-learning loops, assess data value, generate synthetic data, and devise low-resource fine-tuning strategies.

Each intern will own a defined research project, receiving mentorship from a senior scientist, with the goal of producing a manuscript suitable for archival or submission to a top-tier conference.

Oct 2, 2025
Air Apps
Full-time | On-site | San Francisco

Join Our Team at Air Apps
At Air Apps, we are on a mission to revolutionize resource management through innovative technology. Founded in 2018 in Lisbon, Portugal, we have expanded our reach with offices in both Lisbon and San Francisco, boasting over 100 million downloads globally. Our vision is to create the world’s first AI-powered Personal & Entrepreneurial Resource Planner (PRP), and we are looking for passionate individuals to help us achieve this goal.

Our commitment to challenging the status quo drives us to push the boundaries of AI-driven solutions that make a real impact. Here, you will have the opportunity to be a creative force, developing products that empower individuals worldwide. Join us as we embark on this journey to redefine how people plan, work, and live.

Feb 25, 2025
Runway ML
Full-time | Remote | San Francisco

At Runway ML, we are revolutionizing the intersection of art and science through innovative AI technology. Our mission is to build sophisticated world models that transcend traditional artificial intelligence limitations. We believe that to tackle the most pressing challenges, such as robotics, disease, and scientific breakthroughs, we need systems that can learn from experiences just like humans do. By simulating these experiences, we can expedite progress in ways that were previously unimaginable. Our diverse and driven team consists of creative thinkers who are passionate about pushing boundaries and achieving the extraordinary. If you share this ambition and are eager to contribute to our groundbreaking work, we invite you to join us.

About the Role
*We are open to hiring remotely across North America. We also have offices in NYC, San Francisco, and Seattle.

We are on the lookout for a highly skilled and intellectually inquisitive Technical Accounting Manager to be our go-to authority on intricate accounting issues. This position offers significant visibility and is ideal for a professional adept at interpreting complex accounting guidelines, formulating sound conclusions, and translating technical insights into practical accounting practices.

Mar 17, 2026
Merge Labs
Full-time | On-site | San Francisco Bay Area

Join Merge Labs, a pioneering research facility dedicated to merging biological and artificial intelligence to enhance human capabilities, agency, and experience. We aim to achieve this by crafting innovative brain-computer interfaces that communicate with the brain at high bandwidth, seamlessly integrate with cutting-edge AI, and prioritize safety and accessibility for all users.

About the Team:
At Merge Labs, we are on a mission to revolutionize brain-computer interfaces by leveraging advancements in synthetic biology, neuroscience, AI, and non-invasive imaging technologies. Our cross-functional data science team is situated at the convergence of computational modeling, neuroscience, and biomolecular engineering. This collaborative unit works closely with wet-lab scientists, automation specialists, and data engineers to develop machine learning frameworks that facilitate rapid molecule discovery and device enhancement.

About the Role:
We are seeking a talented Senior / Principal ML Scientist to architect and scale Bayesian optimization and reinforcement learning frameworks that guide molecular engineering initiatives through iterative design-build-test-learn (DBTL) cycles. You will start with a fresh approach to construct the company's closed-loop optimization infrastructure, establishing the data and modeling foundations that link experiments with these ML frameworks. Over time, you will transition prototypes into operational pipelines, significantly enhancing experimental throughput and discovery success across various biomolecular and neuroengineering sectors.

Key Responsibilities:
- Develop the scientific and engineering framework for active learning and closed-loop optimization, encompassing data ingestion, ML modeling, and library design.
- Collaborate with wet-lab scientists to establish feasible optimization objectives while incorporating domain-specific priors and constraints.
- Create prototypes for representation learning and acquisition strategies utilizing both internal and public datasets; benchmark and validate the performance of models.
- Integrate machine learning models with experimental data streams, making them accessible to non-domain experts for broader utilization.
- Extend machine learning frameworks to accommodate multi-objective or constrained optimization challenges.
- Stay abreast of the latest advancements in Bayesian optimization, active learning, and reinforcement learning, and prototype innovative algorithms to enhance the company's capabilities.

Jan 15, 2026
Varomoney
Full-time | On-site | San Francisco, CA

Join Varomoney as a Principal AI/ML Architect, where you will lead groundbreaking projects that leverage artificial intelligence and machine learning to transform financial services. Your expertise will guide our engineering teams in developing innovative solutions that not only meet but exceed client expectations. You will be at the forefront of AI/ML technology, driving strategic initiatives and ensuring the highest standards of technical excellence.

Mar 20, 2026
OpenAI
Full-time | Hybrid | San Francisco

Join Our Innovative Team
At OpenAI, our Safety Systems team is at the forefront of ensuring that AI models are safe, robust, and reliable for real-world applications. We are committed to the principle set forth in our charter: to widely distribute the benefits of AI technology. Our Health AI unit is dedicated to providing equitable access to high-quality medical information. By bridging AI safety research with healthcare applications, we strive to develop trustworthy AI systems that empower medical professionals and enhance patient care.

The Opportunity
We are on the lookout for passionate researchers who are eager to contribute to AI safety and improve health outcomes globally. As a Health AI Research Scientist, your role will involve developing safe and effective AI models tailored for healthcare applications. You will implement innovative methods to enhance the behavior, knowledge, and reasoning capabilities of our models. This necessitates research into safety and alignment techniques that can be generalized to ensure a beneficial AGI. This position is based in San Francisco, CA, utilizing a hybrid work model (3 days in the office per week), and we provide relocation assistance for new hires.

Key Responsibilities:
- Design and implement scalable methods to enhance the safety and reliability of our models, such as Reinforcement Learning from Human Feedback (RLHF), automated red teaming, and scalable oversight.
- Assess methodologies using health-related data to ensure models deliver accurate, reliable, and trustworthy information.
- Develop reusable libraries to apply general alignment techniques across our models.
- Proactively analyze the safety of our models and systems, identifying potential risk areas.
- Collaborate with cross-functional teams to embed safety methods into core model training and drive safety improvements in OpenAI products.

Ideal Candidate Profile:
- Align with OpenAI’s mission to ensure AGI benefits everyone and resonate with our charter.
- Exhibit a strong passion for AI safety and enhancing global health outcomes.
- Possess 4+ years of experience in AI research, with a focus on health applications.
- Demonstrate proficiency in machine learning frameworks and safety techniques.
- Showcase effective communication skills for cross-team collaboration.

Jan 29, 2025
OpenAI
Full-time | Hybrid | San Francisco

About Our Team
At OpenAI, our Hardware organization is pioneering the development of cutting-edge silicon and system-level solutions tailored to meet the distinctive needs of advanced AI workloads. We are dedicated to building the next generation of AI silicon, collaborating closely with software engineers and research partners to co-design hardware that integrates seamlessly with our AI models. Our mission includes not only delivering high-quality, production-grade silicon for OpenAI's supercomputing infrastructure but also creating custom design tools and methodologies that foster innovation and enable hardware optimized specifically for AI applications.

About the Role
We are on the lookout for a talented Research Hardware Co-Design Engineer to operate at the intersection of model research and silicon/system architecture. In this role, you will play a critical part in shaping the numerics, architecture, and technological strategies for the future of OpenAI's silicon in collaboration with both Research and Hardware teams. Your responsibilities will include diagnosing discrepancies between theoretical performance and real-world measurements, writing quantization kernels, assessing the risks associated with numerics through model evaluations, quantifying system architecture trade-offs, and implementing innovative numeric RTL. This is a hands-on position for individuals who are passionate about tackling challenging problems, seeking practical solutions, and driving them to production. Strong prioritization and transparent communication skills are vital for success in this role.

Location: San Francisco, CA (Hybrid: 3 days/week onsite)
Relocation assistance available.

Key Responsibilities:
- Enhance our roofline simulator to monitor evolving workloads and deliver analyses that quantify the impact of architectural decisions, supporting technology exploration.
- Identify and resolve discrepancies between performance simulations and actual measurements; effectively communicate root causes, bottlenecks, and incorrect assumptions.
- Develop emulation kernels for low-precision numerics and lossy compression techniques, equipping Research with the insights needed to balance efficiency with model quality.
- Prototype numeric modules by advancing RTL through synthesis; either hand off innovative numeric solutions cleanly or occasionally take ownership of an RTL module from start to finish.
- Proactively engage with new ML workloads, prototype them using rooflines and/or functional simulations, and initiate evaluations of new opportunities or risks.
- Gain a holistic understanding of the transition from ML science to hardware optimization, breaking down this comprehensive objective into actionable short-term deliverables.
- Foster collaborative relationships across diverse teams with varying goals and expertise, ensuring that progress remains unimpeded.
- Clearly articulate design trade-offs with explicit assumptions and rationale.

Jan 13, 2026
Apply
companyaisafety logo
Internship|On-site|San Francisco, CA

Join aisafety as a Research Engineer Intern for the Fall 2026 term! This internship offers hands-on experience in AI safety engineering: you will collaborate with seasoned professionals on innovative projects and take part in research and development that advances safety solutions.

Mar 5, 2026
Apply
companyDavid AI logo
Full-time|On-site|San Francisco

Join Our Innovative Team at David AI
David AI is pioneering the audio data research landscape. We adopt a rigorous R&D methodology for developing datasets that parallels the standards upheld by leading AI laboratories. Our vision is to seamlessly integrate AI into everyday experiences, with audio serving as the perfect conduit. The evolution of audio AI is rapidly unfolding, yet the availability of high-quality training data remains a critical challenge. This is where David AI steps in.

Founded in 2024 by a talented group of former engineers and operators from Scale AI, we have quickly become a trusted partner to numerous FAANG companies and AI research labs. Recently, we secured $50 million in a Series B funding round with notable investors, including Meritech, NVIDIA, and Alt Capital.

Our culture is built on sharp intellect, humility, ambition, and a close-knit community. We invite exceptional minds in research, engineering, product development, and operations to join us as we advance the field of audio AI.

Research Team Overview
At David AI, we are convinced that superior model capabilities stem from high-quality, differentiated data. Our research team is dedicated to conducting ambitious, long-term studies into audio technology while collaborating with both internal and external partners to turn cutting-edge research insights into practical applications.

Your Role as a Founding Audio AI Research Engineer
In this position, you will establish the research framework that influences how premier AI labs develop their audio models.
You will have access to a top-tier team of human AI trainers, robust computing resources, and the autonomy to shape your research agenda.

Key Responsibilities
- Create and implement comprehensive evaluation frameworks for assessing audio AI capabilities in areas such as speech, emotion detection, conversational dynamics, and acoustic patterns.
- Investigate and prototype innovative methodologies for audio quality assessment, automated labeling, and optimizing data collection processes.
- Design focused data collection pipelines aimed at capturing novel, high-value audio capabilities.
- Develop automated systems for ongoing classifier enhancement and prompt-engineering evaluation.
- Assess cutting-edge models and formulate actionable research strategies.
- Publish your findings at prestigious conferences.

Jun 24, 2025
Apply
companyDistyl AI logo
Full-time|On-site|San Francisco

About Distyl AI
Distyl AI specializes in creating high-performance AI systems that enhance the fundamental operational processes of Fortune 500 companies. Through a strategic alliance with OpenAI, proprietary software accelerators, and extensive expertise in enterprise AI, we deliver effective AI solutions with swift time-to-value, often within a quarter.

Our innovations have empowered Fortune 500 clients in various sectors, including insurance, consumer packaged goods, and non-profit organizations. Joining our team means you will assist organizations in recognizing, developing, and extracting value from their Generative AI investments, frequently for the first time. We prioritize customer needs, working backward from the client's challenges and ensuring we generate financial benefits while enhancing the experiences of end-users.

Distyl is guided by seasoned leaders from top-tier companies like Palantir and Apple and enjoys backing from prominent investors including Lightspeed, Khosla, Coatue, Dell Technologies Capital, Nat Friedman (Former CEO of GitHub), Brad Gerstner (Founder and CEO of Altimeter), along with board members from numerous Fortune 500 firms.

What We Are Looking For
At Distyl, we are at the forefront of leveraging AI within enterprises. We seek imaginative researchers who aspire to go beyond incremental enhancements on benchmarks and are eager to redefine the application of software in innovative ways. Our researchers hail from diverse academic disciplines but possess a robust research background, operate in an AI-centric manner, and would find conventional research environments unfulfilling.

Key Responsibilities
The AI Systems team is dedicated to architecting complex, comprehensive solutions that integrate perception, reasoning, planning, and execution.
Researchers amalgamate various components (LLMs, retrievers, evaluators, memory systems, and execution agents) into resilient, scalable systems that deliver consistent performance across dynamic enterprise workflows.

Researchers in AI Systems examine the principles governing intricate system interactions. They analyze coordination, information flow, and emergent behavior across multiple agents and models. Their research reveals the foundational mechanics of robustness, composability, and alignment, ultimately establishing the design paradigm for constructing intelligent systems.
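As a rough illustration of the composition pattern described above (a sketch only, not Distyl's actual architecture), a retriever, a generator, and an evaluator can be wired into a single pipeline with a retry loop; every component function here is a hypothetical stub standing in for a real LLM or vector-store call:

```python
from typing import Callable

def run_pipeline(query: str,
                 retrieve: Callable[[str], list],
                 generate: Callable[[str, list], str],
                 evaluate: Callable[[str], bool],
                 max_retries: int = 2) -> str:
    """Compose retriever -> generator -> evaluator, retrying on failed checks."""
    docs = retrieve(query)
    for _ in range(max_retries + 1):
        answer = generate(query, docs)
        if evaluate(answer):
            return answer
    return answer  # fall through with the last attempt

# Stub components standing in for real retrieval and generation:
answer = run_pipeline(
    "capital of France?",
    retrieve=lambda q: ["Paris is the capital of France."],
    generate=lambda q, docs: docs[0],
    evaluate=lambda a: "Paris" in a,
)
print(answer)
```

The evaluator-gated loop is one simple way such systems get the "resilient" property the listing mentions: a failed check triggers another attempt instead of surfacing a bad answer.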

Oct 16, 2025

