Applied AI Forward Deployed Machine Learning Engineer jobs in Palo Alto – Browse 750 openings on RoboApply Jobs

Applied AI Forward Deployed Machine Learning Engineer jobs in Palo Alto

Open roles matching “Applied AI Forward Deployed Machine Learning Engineer” in Palo Alto. 750 active listings on RoboApply Jobs.

750 jobs found

Showing 1–20 of 750 jobs
Mistral
Full-time|On-site|Palo Alto

Role Overview
Mistral is hiring an Applied AI Forward Deployed Machine Learning Engineer in Palo Alto. The role centers on bringing advanced machine learning solutions into real-world client settings, directly shaping client outcomes and business impact.

What You Will Do
- Deploy machine learning models and systems for client projects
- Work closely with cross-functional teams to understand specific challenges
- Develop and adapt AI solutions to fit client needs, focusing on efficiency and practical results

Apr 15, 2026
Simular
Full-time|On-site|Palo Alto

As a Forward Deployed Engineer at Simular, you will play a crucial role in shaping the future of AI by collaborating directly with clients to implement and customize our AI agents in dynamic real-world settings.

Key Responsibilities:
- Engage directly with pioneering clients to deploy and tailor Simular’s AI agents for practical applications.
- Transform client requirements into actionable product features, prototypes, and seamless integrations.
- Develop scripts and workflows to customize our AI agents for specific client environments.
- Create bespoke solutions and tools that demonstrate Simular’s capabilities in operational contexts.
- Serve as a liaison between customer-facing teams and core engineering/research, translating feedback into product development priorities.
- Bring a resourceful, customer-centric mindset, acting as a foundational multiplier within the team.

Oct 13, 2025
Nectar
Full-time|Hybrid|Palo Alto

Overview
In today's rapidly evolving landscape of commerce, traditional marketing strategies are becoming obsolete. The new generation of consumers seeks authentic connections with brands, meaningful social interactions, and trusted community recommendations. At Nectar, we are at the forefront of this transformation, developing an AI-driven social operating system designed to revolutionize how brands connect with their audiences.

Our mission is to enhance every social interaction, fostering deeper relationships and creating real value for both brands and their communities. Our AI agents listen in real time, providing actionable insights that directly link engagement to revenue and transform conversations into tangible conversions.

Founded by former product and engineering leaders from Meta, we partner with dynamic brands such as OLIPOP, Hatch, K18, and Little Spoon as we pave the way for the future of social commerce, where community, conversation, and commerce seamlessly intersect.

The Role
As a Forward Deployed Engineer, you will serve as the vital technical liaison between Nectar's platform and our key enterprise clients, owning the entire technical journey from initial deployment to ongoing production optimization. This hybrid role blends robust technical execution with strategic customer collaboration, allowing you to deliver real solutions rather than presentations. Imagine functioning as a startup CTO embedded within our largest clients: you will work in small, agile teams, manage high-impact projects from start to finish, and directly influence both client success and product evolution.

What You'll Do
- Lead enterprise deployments: Oversee the technical implementation for strategic accounts from pilot programs through full production, ensuring swift realization of value.
- Create custom integrations: Design and execute integrations with client systems, including CRMs, CDPs, e-commerce platforms, and social APIs.
- Optimize AI agents: Tailor our AI agents to specific customer workflows, brand identities, and use cases.
- Influence the product roadmap: Convert field insights into actionable product improvements; identify trends that should evolve into platform features.
- Collaborate with GTM teams: Provide technical expertise during sales cycles, demonstrating feasibility and scoping implementations to help close deals.
- Engage with customers: Spend time on-site to fully understand their workflows and requirements.

Feb 3, 2026
Voltai
Full-time|On-site|Palo Alto Office

This position requires presence in our Palo Alto office and travel to customer sites throughout the Bay Area.

The Role
As a Forward Deployed Engineer, you will be integral in delivering our advanced AI platform to the hardware labs of the leading semiconductor and consumer electronics companies worldwide.

About Voltai
Voltai is at the forefront of developing innovative models and agents for evaluating, designing, and interacting with hardware and electronic systems.

The Team
Our team consists of hackers, researchers, and operators deeply passionate about building AI for electronic systems and semiconductors. With backgrounds including former Stanford professors, SAIL researchers, Olympiad medalists, and executives from industry leaders like Synopsys, GlobalFoundries, and Cadence, we prioritize execution over titles. We are supported by Stanford, top Silicon Valley investors, and leaders from Google, AMD, and Broadcom, giving us unparalleled access to the industry.

Responsibilities
- Lead deployments at customer offices across the SF Bay Area, including Cupertino and Santa Clara.
- Transfer features from our cloud platform to highly secure on-prem environments.
- Oversee cloud deployments across AWS, Azure, GCP, and OpenShift, ensuring optimal performance and troubleshooting issues as they arise.
- Act as a liaison between customer environments and our engineering team to facilitate feedback and improvements.

Qualifications
We value thought processes over keyword matching.
- Minimum of 3 years of experience in backend engineering, full-stack development, or infrastructure/DevOps; other disciplines (e.g., mobile, frontend) will be considered if you have a strong understanding of web applications and their deployment.
- Strong instincts for systems design and debugging.
- Familiarity with the Docker ecosystem, cloud infrastructure (AWS, Azure, GCP), and Infrastructure as Code (IaC) tools.
- Exceptional communication skills for engaging with senior engineers and executives at world-class companies.
- Willingness to travel, primarily within California.

Preferred Qualifications
- Experience with on-prem deployments.

Mar 17, 2026
Rhoda AI
Full-time|On-site|Palo Alto

At Rhoda AI, we are pioneering the future of humanoid robotics, building a comprehensive stack that spans advanced, software-defined hardware along with foundation models and video world models. Our robots are engineered to be versatile, capable of navigating complex real-world scenarios beyond their training environments. Our interdisciplinary research team, featuring experts from Stanford, Berkeley, and Harvard, works at the forefront of large-scale learning, robotics, and systems engineering. With over $400 million raised, we are investing heavily in research and development, hardware innovation, and scaling our manufacturing capabilities.

We are seeking a motivated Machine Learning Inference Engineer to develop and operate the inference systems that power our automation stack. You will ensure the efficient and reliable execution of large foundation models, collaborating closely with our robotic platforms and internal task tools.

Key Responsibilities:
- Develop and maintain infrastructure for model inference across both cloud and on-premises environments.
- Optimize the latency, throughput, and reliability of deployed machine learning models.
- Design and scale services for serving diverse foundation models in both research and production contexts.
- Collaborate with research and robotics teams on inference optimization and integration.
- Create tools for model deployment, version control, and observability to enable rapid iteration.
- Strengthen the robustness and scalability of the inference stack as model complexity and deployment demands grow.

Qualifications:
- Minimum of 3 years of experience in machine learning infrastructure, MLOps, or backend systems.
- Proven experience deploying and managing machine learning inference workloads in production environments.
- Excellent knowledge of Kubernetes and containerized deployment pipelines.
- Familiarity with cloud providers such as AWS and GCP, including GPU orchestration.
- Experience with ML frameworks such as PyTorch and TensorFlow, and model serving tools like Triton, TorchServe, and Ray Serve.
- Strong debugging skills and a proactive ownership mindset, comfortable resolving issues across the stack.

Mar 10, 2026
Glean
Full-time|$160K/yr - $270K/yr|Remote|Remote - US

About Glean:
Established in 2019, Glean is an AI-driven knowledge management platform that empowers organizations to swiftly find, organize, and share information across their teams. By integrating with platforms such as Google Drive, Slack, and Microsoft Teams, Glean enables employees to access essential knowledge precisely when they need it, enhancing productivity and collaboration. Our AI technology streamlines knowledge discovery, making it faster and more efficient for teams to use their collective intelligence.

Glean grew out of the insight of Founder & CEO Arvind Jain, who saw firsthand how fragmented knowledge and a sprawling array of SaaS tools hindered productivity. He set out to build an AI-powered enterprise search platform that lets users quickly and intuitively find the information they need. Since its inception, Glean has evolved into the leading Work AI platform, integrating enterprise-grade search, an AI assistant, and robust application- and agent-building features.

About the Role:
As a key member of our Forward Deployed Engineering Team, you will collaborate directly with our clients to design transformative AI solutions for their most pressing business challenges. Working closely with our Go-to-Market, Product, and Engineering teams, you will combine technical expertise, a deep understanding of enterprise AI systems, and client engagement skills to architect and deploy production platforms that deliver tangible business outcomes. You will own the complete engagement, from initial discovery and technical planning through solution architecture, development, and production launch.

Feb 6, 2026
Voltai
Full-time|On-site|Palo Alto Office

About Voltai
At Voltai, we are pioneering the future of artificial intelligence by developing world models and agents capable of learning, evaluating, planning, experimenting, and interacting with the physical world. Our initial focus is on understanding and creating advanced hardware, electronic systems, and semiconductors, using AI to design and innovate beyond human cognitive boundaries.

About Our Team
Our team is backed by esteemed Silicon Valley investors, Stanford University, and industry leaders including CEOs and presidents of Google, AMD, Broadcom, and Marvell. The group includes former Stanford professors, SAIL researchers, Olympiad medalists, CTOs of prominent tech firms, and officials with experience in national security and foreign policy.

What We Are Looking For
- Exceptional AI/ML engineering skills, ideally from top-tier programs in Computer Science, Electrical Engineering, Mathematics, or Physics.
- Demonstrated success in delivering AI/ML projects from initial concept through production deployment.
- Hands-on experience fine-tuning and deploying large language models (LLMs) in production environments.
- Experience with multi-modal models that integrate text, image, or audio inputs.

Bonus Points
- Experience in competitive programming.
- Contributions to open-source projects.
- Awards or publications in leading journals and conferences.
- Ability to thrive in a dynamic, fast-paced startup environment.

Sep 18, 2025
Rhoda AI
Full-time|On-site|Palo Alto

At Rhoda AI, we are building a comprehensive full-stack platform for the next generation of humanoid robots, spanning high-performance, software-defined hardware along with foundation models and video world models that power our robotic systems. Our robots are engineered as versatile generalists, adept at navigating intricate, real-world scenarios, including those not encountered during training. Our research team, drawn from Stanford, Berkeley, Harvard, and other leading institutions, operates at the forefront of large-scale learning, robotics, and systems engineering. With over $400M in funding, we are investing aggressively in research and development, hardware innovation, and scaling up manufacturing.

We are looking for a Staff / Principal Machine Learning Engineer to take charge of our training platform. This system is essential for keeping large-scale training reliable, reproducible, and straightforward to run. You will define the lifecycle of training jobs, including their launch, tracking, recovery, and debugging across our clusters, enabling researchers to innovate rapidly without infrastructure hindrances.

In this role, you will be at the heart of research efficiency: when a training job fails, your system will recover it automatically; when experiments become hard to reproduce, you will implement effective solutions; and when GPU hours are wasted, you will provide the visibility and safeguards to prevent it.

Apr 8, 2026
Gauss Labs
Full-time|On-site|Palo Alto, CA / Yeoksam, Seoul

Join our AI R&D team as an AI Scientist focused on Machine Learning. In this role, you will lead the development and implementation of advanced deep learning models that address real-world temporal modeling challenges in the manufacturing sector. We are looking for a candidate with extensive practical R&D experience grounded in robust theoretical principles and deep expertise across AI disciplines. The ideal candidate has a profound understanding of cutting-edge machine learning algorithms and techniques, alongside a proven record of contributions to top-tier conferences such as NeurIPS, ICML, ICLR, KDD, CVPR, or ICCV. A solid foundation in computer science and engineering is essential, and experience collaborating with software engineering teams to scale and commercialize ML solutions is highly regarded. This high-impact role merges foundational research, system-level design, and hands-on implementation; you will work closely with cross-functional teams to create solutions that drive strategic decisions and deliver significant business value.

Apr 22, 2025
Pathway
Internship|On-site|Palo Alto, California, United States

About Pathway
Pathway is revolutionizing artificial intelligence with the world’s first post-transformer model that mimics human thought processes. Our architecture surpasses traditional Transformer models, giving enterprises unparalleled transparency into model operations. By pairing this foundation model with the fastest data processing engine available, Pathway empowers organizations to go beyond incremental optimization toward genuinely contextualized, experience-driven intelligence. We are trusted by prestigious clients including NATO, La Poste, and Formula 1 racing teams.

Led by CEO Zuzanna Stamirowska, a complexity scientist, our team includes AI trailblazers such as CTO Jan Chorowski, who pioneered the application of attention in speech and collaborated with Nobel laureate Geoff Hinton at Google Brain, and CSO Adrian Kosowski, a distinguished computer scientist and quantum physicist who earned his PhD at just 20 years old. We are supported by prominent investors and advisors such as Lukasz Kaiser, co-author of the Transformer architecture (the “T” in ChatGPT) and a key researcher on OpenAI's reasoning models. Pathway is headquartered in Palo Alto, California.

The Opportunity
We are looking for passionate Machine Learning/AI Software Engineering interns with a solid foundation in machine learning model research.

Your Responsibilities
- Assist in training Large Language Models (LLMs)
- Benchmark LLMs
- Prepare and evaluate training datasets
- Collaborate with the core Pathway Research Team

Your contributions will directly shape the advancement of the AI landscape.

Jul 18, 2025
Rhoda AI
Full-time|On-site|Palo Alto

At Rhoda AI, we are building a comprehensive foundation for the next generation of humanoid robots, spanning high-performance, software-defined hardware through foundation models and video world models that govern robot behavior. Our robots are engineered to be versatile, capable of navigating intricate, real-world environments and tackling scenarios not encountered in training. We sit at the crossroads of large-scale learning, robotics, and systems, with a research team drawn from Stanford, Berkeley, and Harvard. Our ambition is not merely to add features; we are crafting a revolutionary computing platform for physical tasks, backed by over $400 million in funding and aggressive investment in research & development, hardware innovation, and manufacturing scale-up.

Role Overview
We are looking for a Principal Machine Learning Systems Engineer to own end-to-end training performance. You will define how our model training scales, improving efficiency, scalability, and accuracy across large multimodal training runs. This is a pivotal systems role, not merely infrastructure support. Your work will directly influence compute utilization, model scalability across thousands of GPUs, and the speed of research iteration.

Your Responsibilities
Oversee training performance end to end:
- Analyze and improve the performance of large-scale multimodal training spanning vision, video, proprioception, actions, and language.
- Build systematic performance attributions: break step time into compute, communication, and input pipeline; produce scaling curves across cluster sizes; identify key bottlenecks.
- Drive quantifiable improvements across:
  - Distributed efficiency (e.g., communication/compute overlap, bucketization, topology-aware mapping, and parallelism strategies).
  - Compute efficiency (e.g., kernel hotspots, operator fusion, attention optimization, and minimizing framework/runtime overhead).
  - Memory efficiency (e.g., activation checkpointing, sequence packing, and reducing fragmentation).

Design training systems rather than just tuning them:
- Define and refine parallelism strategies, including data, tensor, pipeline, sharding, and hybrid approaches.
- Improve execution efficiency through communication scheduling, graph capture, execution optimization, and runtime enhancements.
- Contribute innovative solutions to the overall system architecture.

Mar 10, 2026
Gauss Labs
Full-time|On-site|Palo Alto, CA

Gauss Labs is seeking a skilled Senior AI Engineer to pioneer transformative Industrial AI solutions, setting new standards for artificial intelligence in the manufacturing sector. Our collaborations with leading manufacturing clients give us unparalleled access to extensive real-time operational data, which we use to build AI and machine learning solutions that elevate manufacturing to unprecedented heights.

In this role, you will translate AI and machine learning research into resilient, scalable software applications, enabling the smooth deployment of models in production environments and contributing to the success of AI initiatives across the organization. You will collaborate closely with experienced Applied Scientists, Software Engineers, and Program Managers based in Palo Alto, California, and Seoul, South Korea.

Jun 19, 2025
Mistral AI
Full-time|On-site|Palo Alto

About Mistral AI
At Mistral AI, we harness the transformative power of artificial intelligence to streamline tasks, save valuable time, and foster creativity and learning. Our technology is built to integrate effortlessly into everyday work environments.

We are committed to democratizing AI through high-performance, optimized, open-source models, products, and solutions. Our AI platform serves both enterprise and individual needs, with products like Le Chat, La Plateforme, Mistral Code, and Mistral Compute, making cutting-edge intelligence accessible to all users.

We are a collaborative team driven by a passion for AI and its potential to revolutionize society. With teams distributed across France, the USA, the UK, Germany, and Singapore, we pride ourselves on creativity, humility, and team spirit. Discover more about our culture at https://mistral.ai/careers.

Role Overview
The Research Engineering team operates across Platform (shared infrastructure & clean coding practices) and Embedded (integrated within research squads). Our engineers can move along the research↔production spectrum as their interests and needs evolve.

As a Machine Learning Research Engineer, you will build and optimize the large-scale learning systems that underpin our open-weight models. Collaborating closely with Research Scientists, you may join either:
- the Platform RE Team, enhancing our shared training frameworks, data pipelines, and tools used across all teams; or
- the Embedded RE Team, joining a research squad (Alignment, Pre-training, Multimodal, etc.) to turn innovative ideas into scalable, repeatable code.

Key Responsibilities
- Support researchers by managing the complex aspects of large-scale ML pipelines and developing robust tools.
- Bridge cutting-edge research with production: integrate checkpoints, optimize evaluations, and create accessible APIs.
- Conduct experiments using the latest deep-learning techniques (sparsification on 70B+ models, distributed training across thousands of GPUs).
- Design, implement, and benchmark ML algorithms; write clear, efficient Python.
- Deliver prototypes that evolve into production-grade components for Le Chat and our enterprise API.

Jan 27, 2026
Glean
Full-time|$240K/yr - $300K/yr|On-site|San Francisco Bay Area

About Glean:
Founded in 2019, Glean is an AI-driven knowledge management platform that empowers organizations to efficiently discover, organize, and share vital information across their teams. By integrating with tools such as Google Drive, Slack, and Microsoft Teams, Glean enables employees to access critical knowledge precisely when they need it, enhancing productivity and collaboration. Our AI technology streamlines knowledge discovery, allowing teams to harness their collective intelligence more effectively.

Glean was founded by CEO Arvind Jain, who recognized the challenges employees face in navigating fragmented knowledge and an overwhelming array of SaaS tools. That insight drove him to build an AI-powered enterprise search platform designed for intuitive, rapid access to information. Glean has since evolved into a premier Work AI platform, merging enterprise-grade search, an AI assistant, and robust application- and agent-building capabilities.

About the Role:
We are seeking experienced engineers to help build next-generation intelligent enterprise AI assistants and autonomous AI agents. Our mission involves reimagining how large language models (LLMs) and agents can reason, plan, and execute complex, multi-step enterprise workflows. You will operate at the intersection of applied research and production engineering, focusing on areas such as agentic frameworks, LLM orchestration, low-latency LLM inference and optimization, domain-adapted and memory-augmented LLMs, reinforcement learning, and evaluation frameworks for intricate enterprise tasks. We collaborate closely with customers to deeply understand their challenges and apply the right blend of research-driven and practical engineering solutions.

Jan 22, 2026
Upwork Inc.
Full-time|$211.3K/yr - $385K/yr|On-site|Austin, Texas, United States; Chicago; Palo Alto, California, United States

Upwork Inc. connects businesses with skilled professionals in AI, machine learning, software development, sales, marketing, customer support, finance, and accounting. The company’s platforms, including the Upwork Marketplace and Lifted, help organizations of all sizes find and manage freelance, fractional, and payrolled talent for a range of contingent work needs. Upwork supports both large enterprises and entrepreneurs in sourcing talent and implementing AI-driven solutions. The company’s network covers more than 10,000 skills, enabling clients to scale and adapt their workforce to changing business demands. Since launch, Upwork has processed over $30 billion in transactions. The company’s mission centers on expanding opportunity at every stage of work.

Learn more
- Visit the Upwork Marketplace: upwork.com
- Learn about Lifted: go-lifted.com

Apr 28, 2026
Grindr
Full-time|Hybrid|Palo Alto

Join Grindr as a Staff Machine Learning Engineer in a hybrid arrangement based in our Palo Alto office, with in-office days on Tuesdays and Thursdays.

Why This Role is Exciting:
As a pivotal member of Grindr, you will play a crucial role in our AI-driven transformation. This is your opportunity to apply advanced machine learning techniques to enhance the way millions in the LGBTQ+ community connect, whether for casual chats, fleeting encounters, or enduring relationships. We are committed to making machine learning a cornerstone of Grindr, and your contributions will leave a lasting impact on our unique global platform.
- Impact from Day One: Join a focused team at the forefront of our machine learning initiatives, engaging in significant, innovative projects that lay the groundwork for our long-term ML vision.
- Transformative Recommendations: Develop systems that connect users to their next meaningful experiences, adapting to a variety of needs and preferences.
- Insightful Conversations: Use Large Language Models (LLMs) to extract insights, enhancing user interactions with precision and creativity.

Your Responsibilities:
- Design and implement scalable recommendation systems serving millions, balancing performance and innovation.
- Employ cutting-edge LLMs to analyze extensive conversational data and improve user connections.
- Prototype, refine, and deploy production-ready ML solutions that address real user challenges.
- Collaborate with engineering, data science, and product teams to bring bold ideas to fruition.
- Explore and adopt new AI tools and techniques to keep Grindr’s technology at the forefront.

Your Qualifications:
- A minimum of 7 years of experience building machine learning systems, particularly from the ground up. Experience with recommendation systems is advantageous.
- Demonstrated ability to deliver scalable solutions, with proficiency in Python and popular machine learning frameworks.
- A proactive approach to tackling complex challenges with tangible outcomes.
- Familiarity with data and deployment technologies (e.g., Snowflake) is beneficial.

Apr 8, 2025
Simile
Full-time|On-site|Palo Alto

Join Our Team at Simile
At Simile, we are revolutionizing decision-making in society by providing AI simulations that accurately model human behavior. Just as pilots and surgeons rely on simulations for training, we believe businesses deserve the same rigor when making high-stakes decisions. Our groundbreaking work has led to the first AI simulation of society, featuring generative agents that reflect real human experiences. Backed by $100 million from top investors, including Index Ventures and renowned AI experts, we are on a mission to predict human behavior with unparalleled accuracy.

The Role
As an Applied Research Engineer and Member of Technical Staff (MTS), you will be integral to refining our models of human behavior. With a strong emphasis on scientific rigor, you will participate in the entire research cycle, from designing experiments to implementing them in production systems that influence real-world decisions.

Your Responsibilities Will Include:
- Data insight extraction: Analyze extensive proprietary datasets, including unstructured interviews and behavioral data, to uncover meaningful insights.
- Hardware proficiency: Develop and optimize algorithms for cutting-edge NVIDIA hardware, running experiments that inform our model training.
- Scientific leadership: Design thorough evaluations that validate our behavioral simulations against industry standards.
- Pushing boundaries: Engage with the latest research in simulation and AI, continuously improving our methodologies and documentation.

Mar 18, 2026
Glean
Full-time|$200K/yr - $300K/yr|On-site|San Francisco Bay Area

About Glean:
Established in 2019, Glean is an AI-driven knowledge management platform designed to help organizations swiftly locate, structure, and share information among their teams. By integrating with tools such as Google Drive, Slack, and Microsoft Teams, Glean ensures that employees can access the right knowledge at the right time, enhancing productivity and collaboration. Our AI technology simplifies knowledge discovery, making it more efficient for teams to harness their collective intelligence.

Glean was founded by Arvind Jain, who recognized how fragmented knowledge and a sprawl of SaaS tools hinder productivity. He set out to build an AI-powered enterprise search platform for quick, intuitive access to essential information. Glean has since grown into a leading Work AI platform, integrating enterprise-grade search, an AI assistant, and robust application- and agent-building capabilities.

About the Role:
We are looking for talented Machine Learning Engineers eager to work on both quality assurance and traditional ML tasks in the development of our Enterprise Brain. The Enterprise Brain team is building a suite of proactive AI products that transform enterprise workflows by identifying and automating tasks for users, unlocking genuine productivity. This work rests on a deep understanding of user needs and a sophisticated enterprise graph, and will involve both LLM and advanced ML techniques, agent orchestration, and cutting-edge ranking methods.

Your Responsibilities:
Tackle challenging ML problems that involve...

Nov 26, 2025
Apply
company
Full-time|Hybrid|Palo Alto

Join us at Grindr in a hybrid position based in our Palo Alto or San Francisco offices, with in-office attendance required on Tuesdays and Thursdays.

Why This Role is Exciting:

As a pivotal figure at Grindr, you will lead our transformative AI journey. This is your opportunity to leverage state-of-the-art machine learning techniques to revolutionize the way millions within the LGBTQ+ community connect, whether through engaging conversations, casual meetups, or meaningful relationships. Our commitment to machine learning is strong, and you will play an essential role in shaping our strategy and execution on this unique global platform.

Impact from Day One: You will be instrumental in establishing foundational systems in an early-stage ML environment, charting the roadmap for our long-term strategy.
Innovative Recommendations: Design and scale recommendation platforms that connect millions to their next significant experience, tailored to diverse user intents.
Conversational Insights: Employ large language models (LLMs) to extract insights and establish best practices for conversational AI, enhancing user engagement with precision.

Key Responsibilities:
Develop and manage large-scale recommendation systems to serve millions of users while balancing performance and innovation.
Utilize advanced LLMs to analyze extensive conversation data, enhancing connections among users.
Prototype, iterate, and deploy production-ready ML solutions addressing real user challenges.
Provide technical guidance across teams, collaborating with engineering, data science, and product teams to turn innovative ideas into reality.
Assess and incorporate emerging AI tools and techniques organization-wide to maintain a leading-edge technology stack.

Qualifications We Seek:
Over 10 years of experience in building ML systems, particularly in developing 0-to-1 systems, platform architecture, and pioneering new capabilities. Familiarity with recommendation systems is advantageous.
Proven track record of delivering scalable solutions, with proficiency in Python and popular ML frameworks.
A proactive mindset and the ability to work in a fast-paced, dynamic environment.

Sep 22, 2025
Apply
companyProtegrity logo
Full-time|On-site|Palo Alto, CA

At Protegrity, we are at the forefront of data protection innovation, harnessing the power of AI and quantum-resistant cryptography. Our mission is to transform how sensitive data is safeguarded across cloud-native, hybrid, and on-premises environments. Utilizing cutting-edge cryptographic techniques, including tokenization and format-preserving encryption, we ensure that data remains both valuable and secure.

Join us in a collaborative environment where your contributions will directly impact our industry. By working with some of the brightest minds, you will help redefine data security in a GenAI era, where data is the ultimate currency. If you're passionate about shaping the future of data protection, then Protegrity is the place for you!

Mar 9, 2026
