Experience Level
Entry Level
Qualifications
Strong foundation in machine learning algorithms and frameworks
Experience with Python and data processing libraries such as Pandas and NumPy
Familiarity with cloud platforms (e.g., AWS, Azure) and deployment tools
Ability to work collaboratively in a team-oriented environment
Excellent problem-solving skills and attention to detail
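The Pandas/NumPy experience above refers to everyday data-processing work. A minimal illustrative sketch (the dataset and column names here are hypothetical, not part of the posting):

```python
import numpy as np
import pandas as pd

# Hypothetical example of the kind of task the qualifications describe:
# clean a small measurement table and compute per-group statistics.
df = pd.DataFrame({
    "model": ["a", "a", "b", "b", "b"],
    "latency_ms": [12.0, np.nan, 30.0, 28.0, 32.0],
})

# Drop rows with missing measurements, then average latency per model.
mean_latency = (
    df.dropna(subset=["latency_ms"])
      .groupby("model")["latency_ms"]
      .mean()
)
print(mean_latency.to_dict())  # {'a': 12.0, 'b': 30.0}
```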
About the job
Achira is seeking a Machine Learning Research Engineer to help improve workflows and systems for artificial intelligence projects. This position is based in the San Francisco office.
Role overview
This role centers on developing and refining machine learning pipelines. The focus is on efficient deployment and scaling of AI models in production environments. Collaboration with colleagues from different disciplines is a key part of the work, aiming to bring forward new ideas and solid practices in machine learning systems.
What you will do
Design and optimize machine learning workflows for better performance and scalability
Work closely with cross-functional teams to implement improvements in AI systems
Support the deployment process, helping ensure models run efficiently in real-world settings
Location
This position is based at Achira's San Francisco office.
About Achira
Achira is a leading organization at the forefront of AI technology, dedicated to developing innovative solutions that enhance the efficiency of workflows and systems. Our team comprises passionate individuals who strive to push the boundaries of what's possible in machine learning and artificial intelligence.
Similar jobs
Full-time|$218.4K/yr - $273K/yr|On-site|San Francisco, CA; Seattle, WA; New York, NY
Join Scale AI's ML platform team (RLXF) as a Machine Learning Research Engineer, where you will play a pivotal role in developing our advanced distributed framework for training and inference of large language models. This platform is vital for enabling machine learning engineers, researchers, data scientists, and operators to conduct rapid and automated training, as well as evaluation of LLMs and data quality.

At Scale, we occupy a unique position in the AI landscape, serving as an essential provider of training and evaluation data along with comprehensive solutions for the entire ML lifecycle. You will collaborate closely with Scale's ML teams and researchers to enhance the foundational platform that underpins our ML research and development initiatives. Your contributions will be crucial in optimizing the platform to support the next generation of LLM training, inference, and data curation.

If you are passionate about driving the future of AI through groundbreaking innovations, we want to hear from you!
Overview

Pluralis Research is at the forefront of innovation in Protocol Learning, specializing in the collaborative training of foundational models. Our approach ensures that no single participant ever has or can obtain a complete version of the model. This initiative aims to create community-driven, collectively owned frontier models that operate on self-sustaining economic principles.

We are seeking experienced Senior or Staff Machine Learning Engineers with over 5 years of expertise in distributed systems and large-scale machine learning training. In this role, you will design and implement a groundbreaking substrate for training distributed ML models that function effectively over consumer-grade internet connections.
Overview

Pluralis Research is at the forefront of Protocol Learning, innovating a decentralized approach to train and deploy AI models that democratizes access beyond just well-funded corporations. By aggregating computational resources from diverse participants, we incentivize collaboration while safeguarding against centralized control of model weights, paving the way for a truly open and cooperative environment for advanced AI.

We are seeking a talented Machine Learning Training Platform Engineer to design, develop, and scale the core infrastructure that powers our decentralized ML training platform. In this role, you will have ownership over essential systems including infrastructure orchestration, distributed computing, and service integration, facilitating ongoing experimentation and large-scale model training.

Responsibilities

Multi-Cloud Infrastructure: Create resource management systems that provision and orchestrate computing resources across AWS, GCP, and Azure using infrastructure-as-code tools like Pulumi or Terraform. Manage dynamic scaling, state synchronization, and concurrent operations across hundreds of diverse nodes.

Distributed Training Systems: Design fault-tolerant infrastructure for distributed machine learning, including GPU clusters, NVIDIA runtime, S3 checkpointing, large dataset management and streaming, health monitoring, and resilient retry strategies.

Real-World Networking: Develop systems that simulate and manage real-world network conditions, such as bandwidth shaping, latency injection, and packet loss, while accommodating dynamic node churn and ensuring efficient data flow across workers with varying connectivity, as our training occurs on consumer nodes and non-co-located infrastructure.
On-site|San Francisco, CA; New York City, NY; Seattle, WA
Join Anthropic as a Machine Learning Systems Engineer within our Encodings and Tokenization team, where you'll play a pivotal role in refining and optimizing our tokenization systems across Pretraining and Finetuning workflows. By bridging the gap between our Pretraining and Finetuning teams, you will help shape the essential infrastructure that enhances how our AI models learn from diverse data. Your contributions will be crucial in ensuring our AI systems remain reliable, interpretable, and steerable, driving forward our mission of developing beneficial AI technologies.
Why Join Achira?

Become part of an elite team comprising scientists, machine learning researchers, and engineers dedicated to transforming the predictability of the physical microcosm and revolutionizing drug discovery.
Explore uncharted territories: we are on a mission to innovate next-generation model architectures that merge AI with chemistry.
Engage in large-scale operations: harness massive computational resources, extensive datasets, and ambitious objectives.
Take ownership of significant projects from inception to deployment on large-scale infrastructures.
Thrive in a culture that values precision, speed, execution, and a proactive mindset.

About the Position

At Achira, we are committed to developing state-of-the-art foundation models that tackle the most complex challenges in simulation for drug discovery and beyond. Our atomistic foundation simulation models (FSMs) serve as world models of the physical microcosm, incorporating machine learning interaction potentials (MLIPs), neural network potentials (NNPs), and various generative models.

We are seeking a Machine Learning Research Engineer (MLRE) who excels at the intersection of advanced machine learning and rigorous research methodologies. Collaborate closely with our research scientists to design and enhance intelligent training systems that propel us beyond contemporary architectures into a new era of ML-driven molecular modeling.

Your mission is clear yet ambitious: to establish the foundational frameworks for training atomistic simulation models at scale. This entails a deep dive into architecture, data, optimizers, losses, training metrics, and representation learning, all while constructing high-performance systems that maximize the potential of our models. In this role, you will be instrumental in creating a blueprint for pretraining FSMs similar to today's large-scale generative AI systems, making a significant impact on drug discovery.

At Achira, you will have the chance to pioneer models that comprehend and simulate the physical world at an atomic level, achieving unprecedented speed and accuracy.
Full-time|$252K/yr - $315K/yr|On-site|San Francisco, CA; New York, NY
Join Scale's innovative Large Language Model (LLM) post-training platform team, where you will contribute to the development of our internal distributed framework designed specifically for LLM training. This sophisticated platform empowers Machine Learning Engineers (MLEs), researchers, data scientists, and operators to perform rapid and automated training and evaluation of LLMs. Additionally, it underpins the training framework for our data quality evaluation pipeline.

Scale is at the forefront of the Artificial Intelligence sector, acting as a vital provider of training and evaluation data, as well as comprehensive solutions for the entire machine learning lifecycle. In this role, you will collaborate closely with Scale's ML teams and researchers to construct the foundational platform that supports all our ML research and development initiatives. Your work will involve building and optimizing this platform to facilitate the training, inference, and data curation of next-generation LLMs.

If you are passionate about driving the future of AI through groundbreaking innovations, we invite you to connect with us!
Company Overview:

At Specter, we are pioneering a software-defined control plane for the physical realm, beginning with safeguarding American enterprises through comprehensive monitoring of their physical assets.

Our innovative approach leverages a connected hardware-software ecosystem built on advanced multi-modal wireless mesh sensing technology. This breakthrough enables us to reduce the deployment costs and time for sensors by a factor of 10. Our ultimate goal is to establish a perception engine that provides real-time visibility of a company's physical environment and facilitates autonomous operations management.

Co-founders Xerxes and Philip are dedicated to empowering our partners in the rapidly evolving landscape of physical AI and robotics. Join our dynamic and rapidly expanding team comprised of talents from Anduril, Tesla, Uber, and the U.S. Special Forces.

Position Overview:

We are seeking a Perception AI Engineer who will be instrumental in transforming sensor data pipelines into actionable insights for our clients.

Key Responsibilities:

Implement and deploy a range of deep-learning models, including vision, vision-language, and large language models, within our sophisticated distributed perception system.
Design and scale a production-ready data collection, labeling, and model retraining platform.
Lead the design of a multimodal software user interface.
About Our Team

Join the innovative Sora team at OpenAI, where we are at the forefront of developing multimodal capabilities for our foundation models. Our hybrid research and product team is dedicated to seamlessly integrating multimodal functionalities into our AI solutions, ensuring they are dependable, user-centric, and aligned with our vision of benefiting society at large.

Role Overview

As a Machine Learning Engineer specializing in Distributed Data Systems, you will be instrumental in designing and scaling the infrastructure that facilitates large-scale multimodal training and evaluation at OpenAI. Your role will involve managing complex distributed data pipelines, collaborating closely with researchers to convert their requirements into robust, production-ready systems, and enhancing pipelines that are essential for Sora's rapid iteration cycles.

We are seeking detail-oriented engineers with extensive experience in distributed systems who thrive in high-stakes environments and excel in building resilient infrastructure.

This position is located in San Francisco, CA, and follows a hybrid work model, requiring three days in the office each week. We also provide relocation assistance for new team members.

Key Responsibilities:

Design, implement, and maintain data infrastructure systems, including distributed computing, data orchestration, distributed storage, streaming infrastructure, and machine learning systems, with a focus on scalability, reliability, and security.
Ensure our data platform can scale exponentially while maintaining high reliability and efficiency.
Collaborate with researchers to gain a deep understanding of their requirements, translating them into production-ready systems.
Strengthen, optimize, and manage critical data infrastructure systems that support multimodal training and evaluation.

You Will Excel in This Role If You:

Possess strong experience with distributed systems and large-scale infrastructure, coupled with a keen interest in data.
Exhibit meticulous attention to detail and a commitment to building and maintaining reliable systems.
Demonstrate solid software engineering fundamentals and effective organizational skills.
Thrive in environments characterized by ambiguity and rapid change.

About OpenAI

OpenAI is a trailblazing AI research and deployment organization committed to ensuring that general-purpose artificial intelligence serves humanity. We continuously push the boundaries of AI capabilities and strive to create technology that benefits everyone.
At Exa, we are revolutionizing the way AI applications access information by building a cutting-edge search engine from the ground up. Our team is dedicated to developing a robust infrastructure capable of crawling the web, training advanced embedding models, and creating high-performance vector databases using Rust to facilitate seamless searches.

As part of our ML team, you'll be instrumental in training foundational models that refine search capabilities. Our mission? To deliver precise answers to even the most complex queries, effectively transforming the web into an incredibly powerful knowledge database.

We are seeking a talented Machine Learning Research Engineer who is passionate about crafting embedding models that enhance web search efficiency. Your responsibilities will include innovating novel transformer-based architectures, curating extensive datasets, conducting evaluations, and continuously improving our state-of-the-art models.
Full-time|$350K/yr - $475K/yr|On-site|San Francisco
At Thinking Machines Lab, our mission is to empower humanity by advancing collaborative general intelligence. We're dedicated to crafting a future where everyone can harness the power of AI to meet their unique needs and aspirations.

Our team comprises scientists, engineers, and innovators who have developed some of the most widely utilized AI products, including ChatGPT and Character.ai, as well as open-weight models like Mistral, in addition to renowned open-source projects such as PyTorch, OpenAI Gym, Fairseq, and Segment Anything.

About the Role

We are seeking a talented Infrastructure Research Engineer to architect and develop the foundational systems that facilitate the scalable and efficient training of large models using reinforcement learning.

This position exists at the crossroads of research and large-scale systems engineering, requiring a professional who not only comprehends the algorithms behind reinforcement learning but also appreciates the practicalities of distributed training and inference at scale. You will have a diverse set of responsibilities, from optimizing rollout and reward pipelines to enhancing the reliability, observability, and orchestration of systems. Collaboration with researchers and infrastructure teams will be essential to ensure reinforcement learning is stable, rapid, and production-ready.

Note: This is an evergreen role that we maintain on an ongoing basis to express interest. Due to the high volume of applications we receive, there may not always be an immediate position that aligns perfectly with your skills and experience. We encourage you to apply, as we continuously review applications and reach out to candidates when new opportunities arise. You may reapply after gaining more experience, but please refrain from applying more than once every six months. Additionally, you may notice postings for specific roles that cater to unique project or team needs; in those circumstances, you are welcome to apply directly alongside this evergreen role.

What You'll Do

Design, implement, and optimize the infrastructure that supports large-scale reinforcement learning and post-training workloads.
Enhance the reliability and scalability of the RL training pipeline, including distributed RL workloads and training throughput.
Create shared monitoring and observability tools to ensure high uptime, debuggability, and reproducibility of RL systems.
Work closely with researchers to translate algorithmic concepts into production-quality training pipelines.
Develop evaluation and benchmarking infrastructure to assess model performance based on helpfulness, safety, and factual accuracy.
Publish and disseminate insights through internal documentation, open-source libraries, or technical reports that contribute to the advancement of scalable AI infrastructure.
Full-time|$218.4K/yr - $273K/yr|On-site|San Francisco, CA; New York, NY
As AI continues to play a crucial role across various sectors, Scale AI is committed to accelerating the evolution of AI applications. For nearly a decade, we have been at the forefront of AI data solutions, driving significant innovations such as generative AI, defense technologies, and autonomous vehicles. With recent funding from Meta, we are intensifying our efforts to develop cutting-edge post-training algorithms essential for enhancing the performance of complex enterprise agents globally.

The Enterprise ML Research Lab is at the forefront of this AI transformation. Our team is dedicated to creating a suite of proprietary research and resources tailored for our enterprise clientele.

As a Machine Learning Systems Research Engineer, you will play a pivotal role in developing algorithms for our next-generation Agent Reinforcement Learning (RL) training platform, support large-scale training operations, and integrate state-of-the-art technologies to optimize our machine learning systems. You will collaborate with other ML Research Engineers and AI Architects on the Enterprise AI team to apply these training algorithms to various client use cases, from next-gen AI cybersecurity firewalls to foundational healthtech search models.

If you are passionate about shaping the future of AI, we want to hear from you!
Full-time|$227.2K/yr - $417K/yr|Hybrid|San Francisco, CA; Los Angeles, CA; New York, NY (Hybrid); USA - Remote
About the Role:

Join our dynamic ML Infrastructure team as a Software Engineer, where you'll collaborate intimately with the Machine Learning and Product teams to construct top-tier machine learning inference platforms. These cutting-edge platforms drive vital services such as personalized recommendations, search functionalities, and content comprehension at Tubi.

Your primary focus will be on the development and maintenance of low-latency ML model serving systems that cater to Deep Learning, LLM, and Search models. This will include the creation of self-service infrastructure and critical components such as the inference engine, feature store, vector store, and experimentation engine.

In this role, you'll enhance our service deployment and operational processes, with opportunities to contribute to open-source projects. Enjoy architectural freedom to explore innovative frameworks, spearhead significant cross-functional projects, and elevate the capabilities of our ML and Product teams.

We are currently hiring for two positions:
Staff Software Engineer
Principal Software Engineer

Additional Details: As a Principal Engineer, you will serve as a technical leader and visionary, guiding the advancement of our machine learning platform. You'll address complex technical challenges, shape architectural decisions, and mentor senior engineers, fostering a culture of excellence and continuous improvement. Your contributions will impact millions of users.
Join David AI

At David AI, we are pioneering the audio data research landscape. Our research and development approach to data ensures that we deliver datasets with the same precision and rigor that leading AI labs apply to their models. Our mission is to seamlessly integrate AI into everyday life, leveraging audio as a key channel. As we witness advancements in audio AI and the emergence of new use cases, we recognize that high-quality training data is the critical component. This is where David AI steps in.

Founded in 2024 by a group of former engineers and operators from Scale AI, we have rapidly established partnerships with major FAANG companies and AI labs. Recently, we secured a $50M Series B funding round from prominent investors including Meritech, NVIDIA, Jack Altman (Alt Capital), Amplify Partners, and First Round Capital.

Our team is sharp, humble, and ambitious. We are on the lookout for talented individuals in research, engineering, product management, and operations to join us in our mission to redefine the audio AI landscape.

About Our Machine Learning Team

Our Machine Learning team operates at the forefront of innovative research and practical application, transforming raw audio into high-quality data for top AI labs and enterprises. We manage the entire machine learning lifecycle, from exploring novel speech processing algorithms to deploying models that handle terabytes of audio data daily.

Your Role

As an Applied ML Engineer at David AI, you will develop state-of-the-art speech and audio models, establish production inference systems, and create robust pipelines that demonstrate the true potential of high-quality data.

Key Responsibilities

Research and Design: Create solutions using advanced signal processing algorithms and cutting-edge ML models tailored for speech and audio applications.
Development: Build production-grade inference algorithms, pipelines, and APIs in collaboration with cross-functional teams to extract valuable insights for our clients.
Collaboration: Work alongside our Operations team to gather valuable training and evaluation datasets to enhance our model quality.
Architecture: Design systems that ensure durable and resilient inference and evaluations.
Full-time|Remote|Remote-Friendly (Travel-Required) | San Francisco, CA | New York City, NY
Anthropic is looking for a Research Engineer with a focus on Machine Learning, particularly Reinforcement Learning (RL) Velocity. This position involves collaborating with a team to design, build, and refine machine learning systems. Much of the work centers on experimenting with new ideas and advancing AI research.

What you will do

Work alongside researchers and engineers to develop and optimize machine learning models
Explore new methods in reinforcement learning to accelerate progress
Contribute to projects that push the boundaries of AI capabilities

Location and travel

This role offers flexibility to work remotely, with some required travel. Anthropic maintains offices in San Francisco, CA and New York City, NY.
About Our Team

At OpenAI, we are pioneers in the field of artificial intelligence, committed to driving innovation and shaping a future where AI benefits everyone. We seek passionate and visionary Research Engineers to become part of our Applied Voice Team. In this role, you'll engage in transformative research on speech models, translating these insights into real-world applications that can revolutionize industries, enhance human creativity, and tackle complex challenges.

About the Role

As a Research Engineer on OpenAI's Applied Voice Team, you will collaborate with some of the most talented professionals in AI. You will be responsible for designing and developing cutting-edge speech models, including speech-to-speech, transcription, and text-to-speech functionalities. Your work will help translate groundbreaking research into practical solutions for B2B applications, APIs, and ChatGPT AVM. If you are eager to make AI more accessible and impactful, this is your opportunity to leave a lasting legacy.

Key Responsibilities:

Innovate and Build: Conceptualize and create advanced machine learning models that address real-world challenges, transforming OpenAI's research into AI applications with significant impact.
Collaborate with Experts: Partner with software engineers, product managers, and deployed engineers to understand intricate business challenges, respond to customer needs, and deliver AI-driven solutions. Join a vibrant team environment where creativity and ideas flourish.
Optimize and Scale: Develop scalable data pipelines, enhance models for improved performance and accuracy, and ensure readiness for production. Contribute to high-tech projects that demand innovative methodologies.
Learn and Lead: Stay at the forefront of developments in machine learning and AI by participating in code reviews, sharing insights, and exemplifying high-quality engineering practices.
Make an Impact: Oversee and maintain deployed models to ensure they consistently provide value. Your contributions will significantly influence the role of AI in benefiting individuals, businesses, and society as a whole.

Ideal Candidate Profile:

Master's or PhD in Computer Science, Machine Learning, or a related discipline.
A minimum of 2 years of professional experience in engineering roles within technology and product-focused organizations (internships excluded).
About Wispr Flow

At Wispr Flow, we strive to make device interaction as seamless as conversing with a friend.

Wispr Flow has revolutionized voice dictation, now preferred by users over traditional keyboards due to its unparalleled accuracy on the first attempt. Our platform is context-aware, personalized, and effective across all devices, whether desktop or mobile.

By 2026, we aim to expand beyond dictation to develop native actions within an agentic framework that comprehends and responds to user needs reliably.

Our diverse team comprises AI researchers, designers, growth specialists, and engineers dedicated to reimagining human-computer interaction. We value team members who prioritize open communication, exhibit a user-centric mindset, and pay meticulous attention to detail. Our collaborative environment fosters spirited discussions, truth-seeking, and tangible impact.

Having achieved a remarkable 150% revenue growth quarterly for the past year, we have successfully raised $81 million from top-tier venture capitalists and renowned angel investors.
Full-time|$126K/yr - $196K/yr|Hybrid|San Francisco
About Scribd:

At Scribd Inc. (pronounced 'scribbed'), we're on a mission to ignite human curiosity. Join our innovative team as we craft a diverse world of stories and knowledge, democratizing the exchange of ideas and empowering collective intelligence through our four flagship products: Everand, Scribd, Slideshare, and Fable.

This job posting is for an exciting, open position within our organization.

We foster a culture where authenticity and boldness thrive, facilitating open debates and commitments as we embrace the unexpected. Every team member is empowered to take initiative, prioritizing the needs of our customers.

In terms of workplace structure, we prioritize a balance between personal flexibility and communal connections. Our Scribd Flex initiative allows employees, in collaboration with their managers, to determine their daily work styles that best suit their individual needs while promoting intentional in-person interactions to enhance collaboration and company culture. Therefore, occasional in-person attendance is mandatory for all employees, regardless of their location.

What do we seek in our new team members? We value 'GRIT', the intersection of passion and perseverance toward long-term goals. At Scribd Inc., we believe in harnessing the potential that GRIT unlocks and encourage each employee to adopt a GRIT-driven approach to their work. This means we are looking for individuals who can set and achieve Goals, deliver Results in their responsibilities, contribute Innovative ideas, and positively impact the broader Team through collaboration and a positive attitude.

About Our Machine Learning Team:

Our Machine Learning team is pivotal in developing the platform and product applications that drive personalized discovery, recommendations, and generative AI functionalities across Scribd, Slideshare, and Everand.
The ML team operates on the Orion ML Platform, providing essential ML infrastructure such as a feature store, model registry, model inference systems, and embedding-based retrieval (EBR). Our Machine Learning Engineers collaborate closely with the Product team to integrate machine learning into user-facing features, including real-time personalization and AskAI LLM-powered experiences.
Full-time|$275K/yr - $350K/yr|On-site|San Francisco, CA; Seattle, WA; New York, NY
About Scale AI

At Scale AI, we are dedicated to propelling the advancement of AI applications. Over the past eight years, we have established ourselves as the premier AI data foundry, supporting groundbreaking innovations in fields such as generative AI, defense technologies, and autonomous vehicles. Following our recent Series F funding round, we are intensifying our efforts to harness frontier data, paving the way toward achieving Artificial General Intelligence (AGI). Our work with enterprise clients and governments has enhanced our model evaluation capabilities, allowing us to expand our offerings for both public and private evaluations.

About the ACE Team

The Agent Capabilities & Environments (ACE) team, a vital part of Scale's Research organization, unites customer-focused Researchers and Applied AI Engineers. Our primary mission is to conduct research on agent environments and reinforcement learning reward signals, benchmark autonomous agent performance in real-world contexts, and develop robust data programs aimed at enhancing the capabilities of Large Language Models (LLMs). We are committed to creating foundational tools and frameworks for evaluating models as agents, focusing on autonomous agents that interact dynamically with a wide range of external environments, including code repositories and GUI interfaces.

About This Role

This position sits at the cutting edge of AI research and its practical applications, concentrating on the data types necessary for the development of state-of-the-art agents, including browser and software engineering agents. The ideal candidate will investigate the data landscape required to propel intelligent and adaptable AI agents, steering the data strategy at Scale to foster innovation. This role demands not only expertise in LLM agents and planning algorithms but also creative problem-solving skills to tackle novel challenges pertaining to data, interaction, and evaluation. You will contribute to influential research publications on agents, collaborate with customer researchers, and partner with the engineering team to transform these advancements into scalable real-world solutions.
Company Overview

Echo Neurotechnologies is a pioneering startup in the Brain-Computer Interface (BCI) sector, dedicated to revolutionizing the lives of individuals with disabilities through advanced hardware engineering and artificial intelligence solutions. Our vision is to develop innovative technologies that empower users, restoring autonomy and enhancing their quality of life.

Team Culture

We pride ourselves on cultivating an inclusive and dynamic team of skilled professionals who are passionate about their work. Our startup environment encourages ownership of impactful decisions and fosters continuous learning and collaboration, where every contribution is essential to our collective success.

Job Summary

We are on the lookout for a talented Machine Learning Research Engineer specialized in speech modeling to join our innovative team. The successful candidate will leverage ML/AI methodologies to create and refine adaptable speech models aimed at brain-computer interface applications, ultimately making a difference in the lives of patients facing severe disabilities. Candidates should possess significant expertise in speech modeling, feature engineering, time-series analysis, and the development of custom ML models.

Key Responsibilities

Design and evaluate diverse model architectures and strategies to enhance the accuracy and resilience of models for interpreting speech from brain activity.
Investigate and implement cutting-edge speech features and representations within neural-decoding frameworks, informed by speech science and functional neurophysiology.
Create pipelines for generating personalized and naturalistic speech from both text and brain activity inputs.
Develop algorithms to analyze both intact and compromised speech signals, identifying biomarkers linked to various diseases and disabilities.
Collaborate within a tight-knit team to build models, define R&D workflows, and translate scientific discoveries into practical applications.
Contribute to best practices ensuring reliability, observability, reproducibility, and scientific rigor across the R&D landscape.
Maintain well-documented, versioned code, analysis pipelines, and results for maximum interpretability and reproducibility.
Jan 29, 2026