AI Researcher in Multimodal Audio Video Generation jobs in San Francisco – Browse 4,529 openings on RoboApply Jobs

AI Researcher in Multimodal Audio Video Generation jobs in San Francisco

Open roles matching “AI Researcher in Multimodal Audio Video Generation” with location signals for San Francisco. 4,529 active listings on RoboApply Jobs.

4,529 jobs found (showing 1–20)
Canva
Full-time|On-site|San Francisco

Join Canva as a Staff Research Scientist specializing in Video & Audio Generative AI. In this pivotal role, you will leverage your expertise in AI to develop innovative solutions that enhance our platform's multimedia capabilities. Collaborate with cross-functional teams to push the boundaries of AI technology, driving impactful projects that redefine user e…

Dec 18, 2025
Tavus
Full-time|On-site|San Francisco

About Tavus
Tavus is at the forefront of innovation in human computing. Our mission is to develop AI Humans: an advanced interface that bridges the gap between individuals and machines, eliminating the friction found in current technologies. Our state-of-the-art human simulation models empower machines to see, hear, respond, and even exhibit realistic appearances, facilitating genuine, face-to-face interactions. AI Humans integrate the emotional insight of humans with the scalability and dependability of machines, making them reliable agents accessible 24/7, in any language, on our terms.

Imagine having access to an affordable therapist, a personal trainer that fits your schedule, or a team of medical assistants dedicated to providing personalized care for every patient. With Tavus, individuals, enterprises, and developers have the tools to create AI Humans that connect, comprehend, and act with empathy on a large scale.

We are a Series A company supported by esteemed investors such as Sequoia Capital, Y Combinator, and Scale Venture Partners. Join us in shaping a future where machines and humans genuinely understand one another.

The Position
We are seeking an AI Researcher to join our core AI team and advance the frontiers of multimodal conversational intelligence. If you excel in dynamic environments, enjoy transforming abstract concepts into functional code, and derive motivation from pushing the boundaries of possibility, this role is designed for you.

Your Responsibilities
- Engage in research focusing on foundational multimodal models, specifically conversational avatars (such as neural avatars and talking heads).
- Develop models for video, audio, and language sequences using autoregressive and predictive architectures (e.g., V-JEPA) and/or diffusion methodologies, with a focus on temporal and sequential data rather than static images.
- Collaborate closely with the Applied ML team to implement your research into production systems.
- Remain at the forefront of multimodal learning and assist us in defining what “cutting edge” will mean in the future.

Ideal Candidate Profile
- PhD (or nearing completion) in a relevant field, or equivalent practical research experience.
- Experience in multimodal machine learning, particularly focused on conversational interfaces.

Oct 8, 2025
Eventual Computing
Full-time|On-site|San Francisco

Eventual Computing builds tools that help AI teams work with large, complex datasets. Based in San Francisco, the company supports projects in robotics, autonomous vehicles, and advanced video generation. Its open-source engine, Daft, is already in use at organizations with demanding data needs. The team focuses on making data curation and model training more efficient, so the right datasets are always within reach. The office is located in the Mission district, where collaboration with leading AI labs and infrastructure companies is part of daily work.

Role overview
The Research Engineer - Multimodal Data will join the Visual Understanding team. This position centers on building solutions to make vast amounts of video and sensor data accessible and easy to query. The work directly supports researchers who need to find and use specific datasets quickly.

What you will do
- Develop and refine systems that process petabytes of multimodal data, including video and sensor streams.
- Apply vision-language models to improve how data is discovered and retrieved.
- Define and influence the roadmap for visual understanding features.
- Train models to streamline large-scale data annotation and improve efficiency for research teams.

Apr 29, 2026
Audio Research Specialist

Thinking Machines Lab

Full-time|$350K/yr - $475K/yr|On-site|San Francisco

At Thinking Machines Lab, our mission is to enhance humanity's potential through the advancement of collaborative general intelligence. We envision a future where everyone can access the knowledge and tools necessary to leverage AI for their individual needs and objectives.

As a team of scientists, engineers, and innovators, we have developed some of the most widely utilized AI products, such as ChatGPT and Character.ai, along with open-weight models like Mistral, and popular open-source projects including PyTorch, OpenAI Gym, Fairseq, and Segment Anything.

Role Overview
At Thinking Machines, we adopt a multimodal-first approach, where multimodality is integral to our scientific goals and infrastructure. We are seeking skilled researchers to push the boundaries of audio capabilities. In this role, you will delve into how audio models can facilitate more natural and efficient communication and collaboration by preserving information and accurately capturing user intent.

This position requires strong collaboration across pre-training, post-training, and product development with top-tier researchers, infrastructure engineers, and designers. Here, you will have the chance to influence the foundational capabilities of AI systems that will be utilized by millions globally.

This role marries fundamental research with practical engineering, as we do not separate these functions internally. You will be expected to write high-performance code as well as engage with technical reports. It is an ideal position for someone who enjoys both extensive theoretical research and hands-on experimentation while laying the groundwork for how AI learns.

Note: This is an evergreen role that remains open continuously to gauge interest in this research area. We receive numerous applications, and there may not always be an immediate match for your skills and experience. Nevertheless, we encourage you to apply, as we routinely review applications and reach out as new opportunities arise. You are welcome to reapply if you gain additional experience, but please avoid applying more frequently than every six months. Occasionally, we also post specific roles for distinct project or team needs, in which case you are welcome to apply directly alongside an evergreen submission.

Nov 23, 2025
Bland Inc.
Full-time|On-site|San Francisco

Bland Inc. seeks a Machine Learning Researcher specializing in Multimodal Large Language Models (LLMs) to join the team in San Francisco. The focus is on advancing AI systems that integrate language with other types of data.

Role overview
This position centers on research and development aimed at improving how AI models process and understand information from multiple sources, such as text combined with images or other modalities.

What you will do
- Investigate how language interacts with additional data types within multimodal LLMs
- Create and evaluate new methods to enhance AI model performance
- Work closely with colleagues on projects designed to push the boundaries of machine learning

Location
This role is based in San Francisco.

Apr 21, 2026
Amplifier Health
Full-time|On-site|San Francisco, California, United States

Join Us at Amplifier Health!
We are pioneering healthcare innovations with the world's first Large Acoustic Model (LAM), a groundbreaking foundation model that utilizes the human voice to identify health conditions. This is where science fiction meets reality, and we have secured substantial funding from leading investors to establish a transformative new category in healthcare.

We are in search of a passionate AI researcher who is ready to break free from the traditional "publish or perish" mindset and focus on creating impactful intelligence that truly works in real-world applications.

The Reality of Our Work
- We are entering an exhilarating phase of rapid growth. Our commitment to pushing the boundaries of technology is matched only by our dedication to saving lives at scale.
- Our team collaborates in person in San Francisco, believing that the most challenging problems are best tackled together at a whiteboard rather than through virtual meetings.
- We operate at a fast pace, quickly transitioning from hypothesis to code, training, and validation with immediate feedback.
- We enjoy our work and thrive as a close-knit team on an exciting journey, driven by our passion for what we do.

Your Mission
As part of our elite AI Research team, you will elevate the state of the art in acoustic modeling. Your role will involve designing innovative architectures to extract clinical-grade biomarkers from raw audio data, not just fine-tuning existing models.

The Challenges Ahead
- Novel Architectures: You will explore how Transformer architectures can be adapted to process complex acoustic signals and long-range dependencies.
- Biomarker Discovery: You will conduct experiments to identify specific acoustic features (such as jitter, shimmer, and respiratory rate) that correlate with health conditions, often uncovering new signals that have yet to be recognized by medical science.
- Data Efficiency: You will contribute to building a foundation model, utilizing self-supervised learning techniques to harness vast amounts of unlabeled audio data.

Jan 30, 2026
worldlabs
Full-time|On-site|San Francisco

Join worldlabs as a Research Engineer focused on scaling multimodal data. In this dynamic role, you will leverage cutting-edge technologies and methodologies to enhance data processing capabilities. You will be responsible for developing innovative solutions that integrate various data types and drive impactful research outcomes.

Mar 12, 2026
Zyphra
Full-time|On-site|San Francisco

Zyphra is an innovative artificial intelligence company located in the heart of San Francisco, California.

The Opportunity
Join our dynamic team as a Research Engineer - Audio & Speech Models, where you will play a pivotal role in advancing Zyphra’s Audio Team. You will be instrumental in developing cutting-edge open-source text-to-speech and audio models. Your contributions will span the full spectrum of the model training process, from data collection and processing to the design of innovative architectures and training approaches.

Your Responsibilities
- Conduct large-scale audio training operations
- Optimize the performance of our training infrastructure
- Collect, process, and evaluate audio datasets
- Implement architectural and methodological improvements through rigorous testing

What We Seek
- A strong research mindset with the ability to navigate projects from ideation to implementation and documentation.
- Proficiency in rapid prototyping and implementation, allowing for swift experimentation.
- Effective collaboration skills in a fast-paced research environment.
- A quick learner who is eager to embrace and implement new concepts.
- Excellent communication abilities, enabling you to contribute to both research and engineering tasks at scale.

Preferred Qualifications
- Expertise in training audio models, such as text-to-speech, ASR, speech-to-speech, or emotion recognition.
- Experience with training audio autoencoders.
- Solid understanding of signal processing, particularly in audio.
- Familiarity with diffusion models, consistency models, or GANs.
- Experience with large-scale (multi-node) GPU training environments.
- Strong understanding of experimental methodologies for conducting rigorous tests and ablations.
- Interest in large-scale, parallel data processing pipelines.
- Competence in PyTorch and Python programming.
- Experience contributing to large, established codebases with rapid adaptation.

Aug 28, 2025
bland
Full-time|On-site|San Francisco

bland is looking for a Machine Learning Researcher with a focus on audio. This position is based in San Francisco and centers on advancing how machines process and understand sound. The team works on pushing the boundaries of audio technology for a range of platforms.

Responsibilities
- Research and develop new machine learning techniques for audio applications
- Contribute to projects that improve audio processing and analysis
- Collaborate with colleagues to bring research ideas into real-world audio products

Location
This role requires working onsite in San Francisco.

Apr 20, 2026
Retell AI
Full-time|On-site|San Francisco Bay Area

About Retell AI
Retell AI develops advanced voice AI technology for call centers, using first-principles approaches to create intelligent voice agents. These agents help businesses manage sales, support, and logistics communications while reducing reliance on large human teams. The company has reached $36M ARR in just 18 months, backed by Y Combinator and Alt Capital. With a team of 20, Retell AI is building a comprehensive customer experience platform, aiming for AI-powered contact centers by 2026. The vision: intelligent agents that execute, monitor, and improve customer interactions with minimal human oversight.

- Named a top 50 AI app by a16z: https://tinyurl.com/5853dt2x
- Ranked #4 on Brex's Fast-Growing Software Vendors of 2025: https://www.brex.com/journal/brex-benchmark-december-2025
- Featured among top startups: https://leanaileaderboard.com/

Role Overview: Research Scientist - Voice AI Innovation
This role centers on advancing machine learning for human-like voice agents in real-world settings. The Research Scientist will explore new methods in large language models (LLMs) and audio models, design evaluation techniques, and prototype systems that improve reasoning, reduce latency, and enhance conversational quality. The work involves open-ended ML challenges, rapid experimentation, and direct influence on the performance and cognitive abilities of voice AI systems at scale.

Location
San Francisco Bay Area

Apr 14, 2026
Altos Labs
Full-time|$211.2K/yr - $290K/yr|On-site|San Francisco Bay Area, CA;San Diego, CA

Our Mission
At Altos Labs, we are dedicated to revitalizing cell health and resilience through innovative cell rejuvenation techniques to reverse disease, injury, and the disabilities that arise throughout life. Discover more about our vision at altoslabs.com.

Our Values
Our core value is simple yet powerful: Everyone Owns Achieving Our Inspiring Mission.

Diversity at Altos
We understand that diverse perspectives are crucial for scientific breakthroughs and exploration. At Altos, exceptional scientists and industry leaders collaborate from around the globe to drive our shared mission forward. We prioritize a culture of belonging, ensuring that every employee feels valued for their unique contributions. We are all responsible for maintaining a diverse and inclusive workplace.

Your Contributions to Altos
Join Altos Labs in creating a premier AI ecosystem aimed at addressing the most intricate challenges in human biology. You will be instrumental in designing and developing high-performance, scalable solutions that integrate high-dimensional biomedical imaging with molecular and linguistic data. Your role will involve implementing large-scale multimodal data fusion, advancing beyond basic image analysis to develop predictive models that span various biological domains. You will engage directly with data and coding, partnering with our engineering team to ensure these models are scalable, efficiently trainable in distributed cloud environments, and accessible to our global research network.

Key Responsibilities
- Model Development: Create, implement, and train large-scale foundational models (e.g., Vision Transformers, Multimodal LLMs) capable of embedding spatial data and integrating diverse modalities.
- Innovative Data Fusion: Apply cutting-edge cross-domain mapping and fusion techniques to align heterogeneous biological datasets.
- Scaling & Training: Develop and oversee high-performance ML pipelines designed to handle petabyte-scale image collections and multi-omics data streams in a cloud infrastructure.
- Technical Collaboration: Work closely with experimental scientists and software engineers to convert biological complexity into high-performance code and reliable distributed systems.

Who You Are
We seek a technical expert who excels at unraveling "unsolvable" challenges through programming and meticulous experimentation. We welcome candidates at the Scientist I, Scientist II, or Senior Scientist levels.

Feb 19, 2026
Sieve
Full-time|On-site|San Francisco

Join Our Pioneering Team
At Sieve, we are trailblazers in the realm of AI research, specifically dedicated to harnessing the power of video data. Our cutting-edge infrastructure processes exabyte-scale video, utilizing innovative video understanding methodologies and integrating diverse data sources to create groundbreaking datasets that redefine video modeling. With video accounting for a staggering 80% of global internet traffic, it stands as the cornerstone of digital creativity, communication, gaming, AR/VR, and robotics. Our mission is to eliminate the primary barrier to the growth of these technologies: the scarcity of high-quality training data.

Having collaborated with leading AI laboratories, we achieved $XXM in revenue last quarter alone with a compact team of just 15 talented individuals. Our successful Series A funding round last year, backed by prestigious firms such as Matrix Partners, Swift Ventures, Y Combinator, and AI Grant, underscores our potential for exponential growth.

The Role You’ll Play
As an Applied Research Engineer at Sieve, you will be instrumental in constructing high-performance building blocks and expansive pipelines to achieve high-precision video comprehension at internet scale. Your role will often involve tackling ambiguous research challenges and devising ingenious solutions. You will engage with domains including computer vision, audio processing, and text processing.

The ideal candidate will possess a strong command of models and APIs, leveraging innovative pre/post-processing techniques, parallelism, pipelining, inference optimization, and occasional fine-tuning to maximize performance.

Apr 26, 2025
Liquid AI
Full-time|On-site|San Francisco

About Liquid AI
Liquid AI, a pioneering company spun out of MIT CSAIL, is at the forefront of developing general-purpose AI systems that operate efficiently across various platforms, from data center accelerators to on-device hardware. Our commitment to low latency, minimal memory usage, privacy, and reliability allows us to partner with some of the most esteemed enterprises in consumer electronics, automotive, life sciences, and financial services. As we experience rapid growth, we are seeking exceptional talent to join our innovative journey.

The Opportunity
Join our cutting-edge Audio team, where we are developing advanced speech-language models capable of handling Speech-to-Text (STT), Text-to-Speech (TTS), and speech-to-speech tasks within a unified architecture. This pivotal role supports applied audio model development, directly collaborating with the technical lead to deliver production systems that operate on-device under real-time constraints. You will take ownership of key workstreams encompassing data pipelines, evaluation systems, and customer deployments. If you are eager to tackle unique technical challenges within a small, elite team where your contributions are impactful, this is the role for you.

What We're Looking For
We are seeking an individual who:
- Builds first, theorizes later: You prioritize shipping working systems over theoretical models; production-grade code is your default.
- Owns outcomes end-to-end: You take full responsibility for everything from data pipelines to customer deployments and don't shy away from challenges.
- Thrives under constraints: On-device, low-latency, memory-constrained environments motivate you. You view constraints as opportunities for innovative design.
- Ramps quickly on new territory: You are comfortable closing knowledge gaps swiftly and actively seek feedback to drive results.

The Work
- Develop and scale data pipelines for audio model training, including preprocessing, augmentation, and quality filtering at scale.
- Design, implement, and maintain evaluation systems that assess multimodal performance across both internal and public benchmarks.
- Fine-tune and adapt audio models to cater to customer-specific use cases, taking charge from requirement gathering through to deployment.
- Contribute production code to the core audio repository while collaborating closely with infrastructure and research teams.
- Facilitate experimentation under real hardware constraints, transitioning smoothly between customer-focused projects and core development initiatives.

Dec 16, 2025
Scale AI
Full-time|$273K/yr - $393K/yr|On-site|San Francisco, CA; Seattle, WA; New York, NY

At Scale AI, we are at the forefront of artificial intelligence, driving innovation through our advanced data, infrastructure, and tooling that empower the most sophisticated models worldwide. Our teams thrive at the intersection of pioneering research, extensive engineering, and practical deployment, collaborating with leading labs, enterprises, and government entities to explore the vast potential of Generative AI. As AI technology evolves from static models to dynamic, intelligent systems, Scale AI is dedicated to establishing the essential research foundations, evaluation methodologies, and reinforcement learning infrastructure that will shape this transformative era.

Join our high-impact research organization, where you will contribute to advancing large language models, post-training evaluation, and agent-based reinforcement learning environments, influencing the future of AI development and implementation. As the Research Scientist Manager, you will spearhead a distinguished team of research scientists and engineers, define the strategic research roadmap, and oversee projects from initial prototyping to final deployment. You will excel in a fast-paced environment, harmonizing deep technical leadership with effective people management, visionary goal setting, and successful delivery.

Mar 26, 2026
Hike Medical
Full-time|On-site|San Francisco, CA

About Hike Medical
Hike Medical is building the future of musculoskeletal care by combining advanced technology with practical healthcare solutions. Based in San Francisco’s Rincon Hill, the team develops a platform that spans three core areas: an AI-powered vision system for rapid web-based foot scans that generate custom 3D-printed orthotics, an AI agent platform that manages the entire DME workflow from intake through claims, and SoleForge, a high-scale 3D printing facility for custom medical devices.

Hike Medical partners with some of the world’s largest employers and major orthotics and prosthetics organizations. Fortune 50 companies trust the platform to support employee well-being, and a broad network of clinical partners keeps the company connected to real-world needs.

Custom insoles are just the starting point. The long-term goal is to reshape the industry with bionic devices: AI-designed, robotically manufactured orthotic and prosthetic products. The company aims to reach this milestone by 2040. Learn more at bionics2040.com.

With $22 million raised across Seed and Series A rounds from leading investors, Hike Medical offers a results-oriented culture for those interested in the intersection of AI, manufacturing, and healthcare.

Apr 16, 2026
Latent Labs
Full-time|On-site|San Francisco

About Us
At Latent Labs, we are at the forefront of developing advanced models that delve into the core principles of biology. Our ambitious pursuits are driven by curiosity and a steadfast commitment to scientific excellence. The team behind Latent Labs previously co-developed DeepMind's Nobel Prize-winning AlphaFold, pioneered latent diffusion technology, and created groundbreaking laboratory data management systems alongside high-throughput protein screening platforms. By joining us, you will collaborate with some of the brightest minds in the fields of generative AI and biology.

We believe in fostering interdisciplinary collaboration, continuous learning, and teamwork. Our team offsites are designed to cultivate a culture of trust between our London and San Francisco locations. We're on the lookout for innovators who are passionate about solving complex challenges and making a positive global impact. Be part of our monumental mission!

Your Role
We are excited to invite a Research Scientist to join our team, working at the intersection of generative AI and biology. The ideal candidate will have a solid background in molecular biology, biochemistry, mammalian cell culture, and high-throughput screening techniques. You will collaborate with colleagues and work independently to develop and implement screening assays aimed at functionally testing and characterizing novel proteins generated through our proprietary models.

This is a unique opportunity to contribute to an organization that advances artificial intelligence in addressing long-standing scientific challenges. A strong technical skill set, adaptability, organizational prowess, and exceptional communication abilities are essential, as is the capability to thrive both independently and as part of a dynamic, collaborative team.

Apr 2, 2026
voltai
Full-time|On-site|San Francisco Bay Area

Join VOLT, a trailblazer in crafting advanced AI perception systems that enhance safety and security through real-time risk detection in the physical world.

We are on the lookout for a Senior Applied AI & Machine Learning Engineer dedicated to designing, optimizing, and deploying multimodal AI models capable of functioning reliably in diverse real-world scenarios. This is a hands-on role focused on transitioning models from conceptual data to practical production, encompassing both edge devices and cloud infrastructures.

In this position, you will engage with vision, video, and language-based models that interpret real-world scenes and events, ensuring their accuracy, latency, robustness, and cost-effectiveness in production systems. Reporting directly to the Head of Engineering, you will play a pivotal role in advancing VOLT AI’s core perception platform.

Jan 13, 2026
Spellbrush
Full-time|On-site|San Francisco or Tokyo

Overview
Join us at Spellbrush as we innovate in the world of gaming! We are developing an immersive 3D first-person adventure game where an AI companion drives the gameplay experience. Our goal is to create a game that integrates large language models (LLMs) in a way that enhances storytelling and player engagement, moving beyond simple chat interactions.

About Spellbrush
At Spellbrush, we are dedicated to crafting exceptional anime games. As the leading generative AI studio, we are the creative force behind niji・journey. Our mission is clear: to harness AI in bringing vibrant characters to life and to redefine narrative-driven gaming experiences.

Our Project
We have developed an innovative in-house LLM storytelling system that seamlessly integrates AI with narrative and gameplay, offering players a depth of interaction that transcends traditional gaming.

Role Overview
As an integral member of our small but highly skilled team, you will have the opportunity to shape the future of gaming. Collaborate with leading minds in the industry, including the creator of Warudo and a veteran from Google DeepMind behind Project Astra. Your role will afford you substantial creative and research freedom to pioneer LLM-driven storytelling.

Sep 2, 2025
Genmo
Full-time|On-site|San Francisco HQ

Genmo is a pioneering research laboratory dedicated to advancing cutting-edge models for video generation, with the mission of unlocking the creative potential of Artificial General Intelligence (AGI). We invite you to be a part of our innovative team, where you can contribute to shaping the future of AI and expanding the horizons of video generation technology.

Role Overview
We are on the lookout for a talented Research Scientist to join our dynamic team, specializing in alignment and post-training methodologies for large-scale video generation models. In this pivotal role, you will be instrumental in ensuring our diffusion-based video models consistently deliver high-quality, physically accurate, and safe outputs that align with human values and preferences.

Key Responsibilities
- Lead groundbreaking research initiatives in alignment and post-training strategies for video generation models, prioritizing enhanced quality, reliability, and alignment with human intent.
- Design and implement supervised fine-tuning and reinforcement learning from human feedback (RLHF) pipelines for video generation models.
- Establish robust evaluation frameworks to assess model alignment, safety, and output quality.
- Create and optimize data collection pipelines for capturing human feedback and preferences.
- Conduct experiments to validate alignment techniques and their scalability.
- Collaborate with cross-functional teams to incorporate alignment enhancements into our production workflow.
- Stay abreast of the latest developments by reviewing academic literature in generative AI and alignment.
- Mentor junior researchers and promote a culture of responsible AI development.
- Partner closely with product teams to ensure that alignment methods enhance model capabilities.

Qualifications
- Ph.D. in Computer Science, Artificial Intelligence, Machine Learning, or a closely related field.
- Demonstrated excellence with a strong publication record in top-tier conferences (e.g., NeurIPS, ICML, ICLR) focusing on reinforcement learning, alignment, or generative models.
- Extensive experience in implementing and optimizing large-scale training pipelines utilizing PyTorch.
- In-depth understanding of reinforcement learning techniques, especially RLHF.
- Proficient in distributed training systems and conducting large-scale experiments.
- Proven ability to design and implement robust evaluation strategies for models.

Feb 22, 2026
Achira
Full-time|On-site|San Francisco Office

Join Achira in shaping the future of deep learning with cutting-edge generative, representational, and simulation models for molecules and materials. Our mission is to create foundational models that render the atomistic universe understandable, predictable, and designable.

Why Choose Achira?
- Be part of an elite, cross-disciplinary team comprising ML researchers, physicists, chemists, and engineers who are redefining atomistic simulation through expansive foundation models.
- Advance the integration of deep learning with the principles of nature, merging generative AI, probabilistic reasoning, and molecular physics.
- Engage in projects at an unparalleled scale, tackling extensive datasets, computational challenges, and ambitious goals.
- Take full ownership of your research journey, from ideation and architecture to training, evaluation, and deployment.
- Flourish in a dynamic culture that values rigor, speed, creativity, and impact over bureaucracy.

Position Overview
As a Generative AI Researcher at Achira, you will contribute to the development of foundation simulation models: large-scale systems designed to learn the structure, dynamics, and energetics of the atomistic realm. These models will unite deep representation learning, generative modeling, and sophisticated simulation techniques.

Your responsibilities will include:
- Crafting and training state-of-the-art deep generative models, including diffusion, autoregressive, flow-based, and latent-variable architectures focused on molecules, materials, and atomic systems.
- Creating expressive representations of molecular and atomistic structures and dynamics utilizing equivariant graph neural networks, geometric transformers, and latent encoders that respect physical symmetries and constraints.
- Innovating advanced sampling and simulation techniques that blend probabilistic inference, deep learning, and reinforcement learning to facilitate efficient exploration and simulation of learned energy landscapes.
- Developing models that comprehend, generate, and simulate the physical world, merging reasoning, simulation, and predictive capabilities.
- Working collaboratively with physicists and chemists to validate models against ab initio, molecular dynamics, and experimental datasets.
- Rapidly prototyping, benchmarking, and iterating, converting research concepts into reusable, scalable model components across Achira’s foundation model suite.

Oct 24, 2025
