Experience Level
Senior
Qualifications
- Proven experience in software engineering with a focus on machine learning technologies
- Strong programming skills in languages such as Python, Java, or C++
- Experience with machine learning frameworks and libraries (e.g., TensorFlow, PyTorch)
- Solid understanding of data structures, algorithms, and software design principles
- Ability to work collaboratively in a team-oriented environment
- Excellent problem-solving skills and attention to detail
About the job
Join our innovative team at latitude as a Senior Software Engineer specializing in Machine Learning Offboarding Models. In this role, you will leverage your expertise in software engineering and machine learning to develop robust models that enhance our offboarding processes. Your contributions will directly impact our operational efficiency and improve user experiences.
As a key member of our team, you will collaborate with cross-functional teams to design, implement, and optimize machine learning solutions. We seek a passionate engineer who thrives in a fast-paced environment and is eager to tackle challenging problems.
About latitude
latitude is at the forefront of technological innovation, dedicated to transforming the way businesses operate. Our mission is to harness the power of machine learning and software development to create solutions that drive efficiency and enhance user experiences. We pride ourselves on fostering a dynamic and inclusive workplace where creativity and collaboration thrive.
Similar jobs
1 - 20 of 769 Jobs
Machine Learning/AI Software Engineering Internship
Internship|On-site|Palo Alto, California, United States
About Pathway
Pathway is revolutionizing artificial intelligence with the introduction of the world’s first post-transformer model that mimics human thought processes. Our innovative architecture surpasses traditional Transformer models, providing enterprises with unparalleled transparency into model operations. By integrating this foundational model with the fastest data processing engine available, Pathway empowers organizations to transcend mere incremental optimization and achieve genuinely contextualized, experience-driven intelligence. Trusted by prestigious clients including NATO, La Poste, and Formula 1 racing teams, we are at the forefront of AI advancements.

Led by visionary CEO Zuzanna Stamirowska, a complexity scientist, our team includes AI trailblazers such as CTO Jan Chorowski, who pioneered the application of Attention in speech and collaborated with Nobel laureate Geoff Hinton at Google Brain, and CSO Adrian Kosowski, a distinguished computer scientist and quantum physicist who earned his PhD at just 20 years old. Supported by prominent investors and advisors like Lukasz Kaiser, co-author of the Transformer architecture (the “T” in ChatGPT) and a key researcher in OpenAI's reasoning models, Pathway is headquartered in Palo Alto, California.

The Opportunity
We are on the lookout for passionate Machine Learning/AI Software Engineering interns with a solid foundation in machine learning model research.

Your Responsibilities
- Assist in training Large Language Models (LLMs)
- Conduct benchmarking of LLMs
- Prepare and evaluate training datasets
- Collaborate with the core Pathway Research Team

Your contributions will significantly impact the advancement of the AI landscape.
At Rhoda AI, we are pioneering the future of humanoid robotics by establishing a comprehensive stack that includes advanced, software-defined hardware along with foundational models and video world models to drive our innovations. Our robots are engineered to be versatile, capable of navigating complex real-world scenarios that extend beyond traditional training environments. Our interdisciplinary research team, featuring experts from prestigious institutions such as Stanford, Berkeley, and Harvard, is at the forefront of large-scale learning, robotics, and systems engineering. With over $400 million raised, we are making significant investments in research and development, hardware innovation, and scaling our manufacturing capabilities to bring our vision to life.

We are seeking a motivated Machine Learning Inference Engineer to join our team and contribute to the development and operation of the inference systems that power our automation stack. You will play a crucial role in ensuring the efficient and reliable execution of large foundation models, collaborating closely with our robotic platforms and internal task tools.

Key Responsibilities:
- Develop and maintain infrastructure for model inference across both cloud and on-premises environments.
- Optimize the latency, throughput, and reliability of deployed machine learning models.
- Design and scale services for serving diverse foundation models in both research and production contexts.
- Collaborate with research and robotics teams to enhance inference optimization and integration.
- Create tools for model deployment, version control, and observability to facilitate rapid iteration cycles.
- Contribute to the robustness and scalability of the inference stack as model complexity and deployment demands evolve.

Qualifications:
- Minimum of 3 years of experience in machine learning infrastructure, MLOps, or backend systems.
- Proven experience in deploying and managing machine learning inference workloads in production environments.
- Excellent knowledge of Kubernetes and containerized deployment pipelines.
- Familiarity with cloud service providers such as AWS and GCP, including GPU orchestration capabilities.
- Experience with popular ML frameworks including PyTorch and TensorFlow, as well as model serving tools like Triton, TorchServe, and Ray Serve.
- Strong debugging capabilities and a proactive ownership mindset, comfortable resolving issues across the technology stack.
About Voltai
At Voltai, we are pioneering the future of artificial intelligence by developing world models and agents capable of learning, evaluating, planning, experimenting, and interacting with the physical world. Our initial focus is on understanding and creating advanced hardware, electronic systems, and semiconductors, utilizing AI to design and innovate beyond human cognitive boundaries.

About Our Team
Our remarkable team is backed by esteemed Silicon Valley investors, Stanford University, and industry leaders including CEOs and Presidents of Google, AMD, Broadcom, and Marvell. We boast a diverse group of former Stanford professors, SAIL researchers, Olympiad medalists, CTOs of prominent tech firms, and high-ranking officials with experience in national security and foreign policy.

What We Are Looking For
- Exceptional AI/ML engineering skills, ideally from top-tier programs in Computer Science, Electrical Engineering, Mathematics, or Physics.
- Demonstrated success in delivering AI/ML projects from initial concept through to production deployment.
- Hands-on experience in fine-tuning and deploying large language models (LLMs) within production environments.
- Experience working with multi-modal models that integrate text, image, or audio inputs.

Bonus Points
- Experience in competitive programming.
- Contributions to open-source projects.
- Recognition through awards or publications in leading journals and conferences.
- Ability to thrive in a dynamic, fast-paced startup environment.
Gauss Labs is seeking a dynamic and skilled Senior AI Engineer to pioneer transformative Industrial AI solutions, setting new standards for artificial intelligence in the manufacturing sector. Our collaborations with leading manufacturing clients provide unparalleled access to extensive real-time data derived from their operations. Leveraging advanced AI technologies, we are dedicated to creating innovative AI and machine learning solutions that elevate manufacturing to unprecedented heights.

In this pivotal role, you will be instrumental in translating groundbreaking AI and machine learning research into resilient, scalable software applications. Your contributions will facilitate the smooth deployment of models in production environments, thereby enhancing the overall success of AI initiatives within the organization. You will collaborate closely with experienced Applied Scientists, Software Engineers, and Program Managers based in both Palo Alto, California, and Seoul, South Korea.
Our Vision
At Tinder, we believe that the thrill of meeting new people is one of life’s greatest joys. We are dedicated to nurturing the magic of human connection, engaging tens of millions of users worldwide. With hundreds of millions of downloads, over 2 billion swipes daily, 20 million matches each day, and a presence in more than 190 countries, our influence is vast and continually expanding.

Our team collaborates to tackle intricate challenges, blending insights from human relationships, behavioral science, network economics, AI, and machine learning, while prioritizing user safety and cultural sensitivity. We explore the depths of loneliness, love, and connection.

Internship Duration
The internship will take place from June 1 to August 28, 2026.

Work Environment
This is a hybrid position, requiring in-office collaboration three days a week in our Palo Alto, California office.

Role Overview
As a member of the Tinder ML team, you will play a crucial role in shaping the product experience across diverse domains, including Recommendations, Trust & Safety, Profile Management, Chat, Growth, and Revenue. Our goal is to leverage machine learning to enhance user experiences, build trust, and drive business growth within Tinder's ecosystem. This internship offers a unique opportunity to work alongside experienced engineers to develop and implement machine learning solutions that align with Tinder’s strategic objectives.
Woven by Toyota is at the forefront of Toyota’s transformative journey into a mobility company. Building on a rich legacy of innovation aimed at enhancing lives, we are committed to redefining mobility through human-centric advancements that expand its meaning and utility in society.

Our initiatives revolve around four core areas: AD/ADAS, focusing on autonomous driving and advanced driver assistance technologies; Arene, our platform for software-defined vehicles; Woven City, a testing ground for innovative mobility solutions; and our Cloud & AI services, which provide the foundational digital infrastructure for collaboration. Our mission is bold and clear: to create a future with zero accidents and improved well-being for everyone.

TEAM
As part of Woven by Toyota, you will engage with a diverse array of challenges, from optimizing 3D geometric computer vision problems to minimizing latency in hardware accelerators, designing innovative neural network architectures, and advancing the state-of-the-art in machine learning for perception, prediction, and motion planning.

The Perception team is eager to welcome a talented machine learning intern for our 2025 program. During this internship, you will collaborate with our esteemed machine learning engineers, pushing the limits of our sophisticated perception systems for autonomous driving. Your colleagues are seasoned experts in the machine learning domain of autonomous driving, having developed and deployed numerous deep learning models across our software ecosystem. This internship presents a unique opportunity for you to learn from the best and contribute to meaningful societal impact.

WHAT WE OFFER YOU
This internship is designed to provide you with practical industry experience and the chance to conduct research in the autonomous driving sector. You will have access to our advanced machine learning infrastructure and a vast dataset for experimentation, ultimately contributing to system improvements. The emphasis will be on research and successfully deploying your findings onto our prototype vehicles, which you will get to experience on public roads.

Woven by Toyota internships are structured to facilitate both technical and professional growth. You will work on impactful projects that drive key results for our mission, receiving continuous feedback and guidance. Mentorship will be provided throughout your internship, culminating in a presentation of your project outcomes to the team and other interested parties.
At Rhoda AI, we are pioneering the development of a comprehensive full-stack platform for the next generation of humanoid robots. Our innovative approach encompasses high-performance, software-defined hardware along with foundational and video world models that empower our robotic systems. Our robots are engineered as versatile generalists, adept at navigating intricate, real-world scenarios, including those not encountered during training. Collaborating with a distinguished research team from Stanford, Berkeley, Harvard, and other leading institutions, we operate at the forefront of large-scale learning, robotics, and systems engineering. With over $400M in funding, we are aggressively investing in research and development, hardware innovation, and scaling up manufacturing to bring our vision to life.

We are on the lookout for a Staff / Principal Machine Learning Engineer to take charge of our training platform. This pivotal system is essential for ensuring that large-scale training is reliable, reproducible, and straightforward to execute. You will play a crucial role in defining the lifecycle of training jobs, including their launch, tracking, recovery, and debugging across our clusters. Your contributions will enable researchers to innovate rapidly without infrastructure hindrances.

In this role, you will be at the heart of enhancing research efficiency: when a training job fails, your system will allow for automatic recovery; when experiments become challenging to reproduce, you will implement effective solutions; and when GPU hours are squandered, you will ensure visibility and preventative measures are in place.
Internship|Hybrid|San Francisco, CA, US; Palo Alto, CA, US; Seattle, WA, US; New York, NY, US
Pinterest’s Advanced Technology Group is looking for a Machine Learning Intern at the master’s level for Fall 2026. This internship centers on visual search and recommendation systems, blending research and engineering in a collaborative environment. Interns may help launch new features for millions of users or take part in academic research projects.

Role overview
This 12-week paid internship runs from September 21 to December 11, 2026. Work may be remote or hybrid, with office locations available in San Francisco, Palo Alto, Seattle, or New York. Team placement determines whether the role is remote or hybrid.

What you will do
- Build and launch new user features using internal datasets and machine learning techniques.
- Conduct research focused on visual understanding and recommendation systems.
- Work closely with engineering teams and take part in external collaborations or mentoring.
- Contribute to features that may reach millions of users or support research for academic publication.

Locations
- San Francisco, CA
- Palo Alto, CA
- Seattle, WA
- New York, NY
- Remote (team dependent)

Internship details
- 12-week program: September 21 to December 11, 2026
- Paid internship
- Remote or hybrid work (based on team)

Application note
Applying to this position enters candidates into consideration for multiple machine learning internship roles across Pinterest’s ML teams. Submit only one application within the USA or Canada; multiple applications may delay review.
Join our dynamic AI R&D team as an AI Scientist focused on Machine Learning. In this pivotal role, you will lead the development and implementation of advanced deep learning models to address real-world temporal modeling challenges in the manufacturing sector. We are in search of a candidate with extensive practical R&D experience, firmly rooted in robust theoretical principles and possessing deep expertise across various AI disciplines. The ideal candidate will exhibit a profound understanding of cutting-edge machine learning algorithms and techniques, alongside a proven record of contributions to top-tier conferences such as NeurIPS, ICML, ICLR, KDD, CVPR, or ICCV. A solid foundation in computer science and engineering is essential. Familiarity with collaborating alongside software engineering teams to scale and commercialize ML solutions will be highly regarded. This high-impact role merges foundational research, system-level design, and hands-on implementation, allowing you to work closely with cross-functional teams to create innovative solutions that drive strategic decisions and deliver significant business value.
At Rhoda AI, we are pioneering the development of a comprehensive foundation for the next generation of humanoid robots. Our focus spans high-performance, software-defined hardware to advanced foundational models and video world models that govern robot functionality. Our robots are engineered to be versatile, capable of navigating intricate, real-world environments and tackling scenarios not previously encountered in training. We stand at the crossroads of large-scale learning, robotics, and systems, bolstered by a research team comprising experts from prestigious institutions such as Stanford, Berkeley, and Harvard. Our ambition is not merely to add features; we are crafting a revolutionary computing platform for physical tasks, underpinned by over $400 million in funding, driving aggressive investments in research & development, hardware innovation, and scaling up manufacturing to bring our vision to fruition.

Role Overview
We are in search of a Principal Machine Learning Systems Engineer to take charge of our training systems' performance from start to finish. You will be instrumental in defining the scaling of our model training, enhancing efficiency, scalability, and accuracy across extensive multimodal training environments. This is a pivotal systems role, not merely focused on infrastructure support. Your contributions will significantly influence our compute utilization efficiency, scalability of models across thousands of GPUs, and the speed of research iterations.

Your Responsibilities
- Oversee training performance from start to finish.
- Analyze and enhance the performance of large-scale multimodal training encompassing vision, video, proprioception, actions, and language.
- Create systematic performance attributions by breaking down step-time into compute, communication, and input pipeline, along with scaling curves for various cluster sizes and identifying key bottlenecks.
- Drive quantifiable improvements across:
  - Distributed efficiency (e.g., communication and compute overlap, bucketization, topology-aware mapping, and parallelism strategies).
  - Compute efficiency (e.g., identifying kernel hotspots, operator fusion, attention optimization, and minimizing framework/runtime overhead).
  - Memory efficiency (e.g., activation checkpointing, sequence packing, and reducing fragmentation).
- Design training systems rather than just tuning them.
- Define and refine parallelism strategies including data, tensor, pipeline, sharding, and hybrid approaches.
- Enhance execution efficiency through communication scheduling, graph capture, execution optimization, and runtime enhancements.
- Contribute to the overall system architecture with innovative solutions.
About Mistral AI
At Mistral AI, we harness the transformative power of artificial intelligence to streamline tasks, save valuable time, and foster enhanced creativity and learning. Our innovative technology is crafted to effortlessly integrate into everyday work environments.

We are committed to democratizing AI by offering high-performance, optimized, open-source models, products, and solutions. Our extensive AI platform caters to both enterprise and individual needs, featuring products like Le Chat, La Plateforme, Mistral Code, and Mistral Compute, creating cutting-edge intelligence accessible to all users.

As a vibrant and collaborative team, we are driven by our passion for AI and its potential to revolutionize society. Our diverse workforce excels in competitive settings and is dedicated to fostering innovation. With teams distributed across France, the USA, the UK, Germany, and Singapore, we pride ourselves on our creativity, humility, and team spirit.

Join us in shaping the future of AI at a pioneering company. Together, we can create a lasting impact. Discover more about our culture at https://mistral.ai/careers.

Role Overview
About the Research Engineering Team
The Research Engineering team operates across Platform (shared infrastructure & clean coding practices) and Embedded (integrated within research squads). Our engineers have the flexibility to navigate the research↔production spectrum as their interests and needs evolve.

As a Machine Learning Research Engineer, you will be responsible for building and optimizing large-scale learning systems that underpin our open-weight models. Collaborating closely with Research Scientists, you may join either:
- Platform RE Team: Focus on enhancing our shared training frameworks, data pipelines, and tools utilized across all teams; or
- Embedded RE Team: Become part of a research squad (Alignment, Pre-training, Multimodal, etc.) to turn innovative ideas into scalable, repeatable code.

Key Responsibilities
- Support researchers by managing the complex aspects of large-scale ML pipelines and developing robust tools.
- Bridge cutting-edge research with production: integrate checkpoints, optimize evaluations, and create accessible APIs.
- Conduct experiments utilizing the latest deep-learning techniques (sparsification on 70B+ models, distributed training across thousands of GPUs).
- Design, implement, and benchmark ML algorithms; produce clear and efficient code in Python.
- Deliver prototypes that evolve into production-grade components for Le Chat and our enterprise API.
Role Overview
Mistral is hiring an Applied AI Forward Deployed Machine Learning Engineer in Palo Alto. This role centers on bringing advanced machine learning solutions into real-world client settings. The work directly shapes client outcomes and business impact.

What You Will Do
- Deploy machine learning models and systems for client projects
- Work closely with cross-functional teams to understand specific challenges
- Develop and adapt AI solutions to fit client needs, focusing on efficiency and practical results
Full-time|$240K/yr - $300K/yr|On-site|San Francisco Bay Area
About Glean:
Founded in 2019, Glean is a pioneering AI-driven knowledge management platform that empowers organizations to efficiently discover, organize, and share vital information across their teams. By seamlessly integrating with tools such as Google Drive, Slack, and Microsoft Teams, Glean enables employees to access critical knowledge precisely when they need it, enhancing productivity and collaboration. Our state-of-the-art AI technology streamlines knowledge discovery, allowing teams to harness their collective intelligence more effectively.

Glean was conceived by Founder & CEO Arvind Jain, who recognized the challenges employees face in navigating fragmented knowledge and an overwhelming array of SaaS tools. This insight drove him to create a superior solution: an AI-powered enterprise search platform designed for intuitive and rapid access to information. Since its inception, Glean has evolved into a premier Work AI platform, merging enterprise-grade search, an AI assistant, and robust application and agent-building capabilities to fundamentally transform the way employees engage with their work.

About the Role:
We are seeking experienced engineers to contribute their expertise and vision in the development of next-generation intelligent enterprise AI assistants and autonomous AI agents. Our mission involves reimagining how LLMs (Large Language Models) and agents can reason, plan, and execute complex, multi-step enterprise workflows. You will operate at the intersection of applied research and production engineering, focusing on areas such as agentic frameworks, LLM orchestration, low-latency LLM inference and optimization, domain-adapted and memory-augmented LLMs, reinforcement learning, and creating evaluation frameworks for intricate enterprise tasks. Our approach emphasizes collaboration with customers to deeply understand their challenges and apply the ideal blend of research-driven and practical engineering solutions to address them.
Full-time|$211.3K/yr - $385K/yr|On-site|Austin, Texas, United States; Chicago; Palo Alto, California, United States
Upwork Inc. connects businesses with skilled professionals in AI, machine learning, software development, sales, marketing, customer support, finance, and accounting. The company’s platforms, including the Upwork Marketplace and Lifted, help organizations of all sizes find and manage freelance, fractional, and payrolled talent for a range of contingent work needs. Upwork supports both large enterprises and entrepreneurs in sourcing talent and implementing AI-driven solutions. The company’s network covers more than 10,000 skills, enabling clients to scale and adapt their workforce for changing business demands. Since launch, Upwork has processed over $30 billion in transactions. The company’s mission centers on expanding opportunities at every stage of work.

Learn more
- Visit the Upwork Marketplace: upwork.com
- Learn about Lifted: go-lifted.com
- Connect on LinkedIn, Facebook, Instagram, TikTok, and X
- Follow Lifted on LinkedIn
Join us at Grindr as a Staff Machine Learning Engineer in a dynamic hybrid work environment, primarily based in our Palo Alto office. You will be required to work in the office on Tuesdays and Thursdays.

Why This Role is Exciting:
As a pivotal member of Grindr, you will play a crucial role in our AI-driven transformation. This is your opportunity to leverage advanced machine learning techniques to enhance the way millions in the LGBTQ+ community connect, whether for casual chats, fleeting encounters, or enduring relationships. We are committed to making machine learning a cornerstone of Grindr, and your contributions will leave a lasting impact on our unique global platform.

- Impact from Day One: Join a focused team at the forefront of machine learning initiatives, where you will engage in significant, innovative projects that lay the groundwork for our long-term ML vision.
- Transformative Recommendations: Develop systems that connect users to their next meaningful experiences, adapting to a variety of needs and preferences.
- Insightful Conversations: Utilize Large Language Models (LLMs) to extract insights, enhancing user interactions with precision and creativity.

Your Responsibilities:
- Design and implement scalable recommendation systems to serve millions, ensuring a balance between performance and innovation.
- Employ cutting-edge LLMs to analyze extensive conversational data and improve user connections.
- Prototype, refine, and deploy production-ready ML solutions that address real user challenges.
- Work collaboratively with engineering, data science, and product teams to bring bold ideas to fruition.
- Explore and implement new AI tools and techniques to keep Grindr’s technology at the forefront.

Your Qualifications:
- A minimum of 7 years of experience in building machine learning systems, particularly in developing systems from the ground up.
- Experience with recommendation systems is advantageous.
- Demonstrated ability to deliver scalable solutions, with proficiency in Python and popular machine learning frameworks.
- A proactive approach to tackling complex challenges with tangible outcomes.
- Familiarity with data and deployment technologies (e.g., Snowflake) is beneficial.
Full-time|$200K/yr - $300K/yr|On-site|San Francisco Bay Area
About Glean:
Established in 2019, Glean is a pioneering AI-driven knowledge management platform designed to empower organizations to swiftly locate, structure, and disseminate information among their teams. By seamlessly integrating with tools such as Google Drive, Slack, and Microsoft Teams, Glean ensures that employees can access the right knowledge at the right time, enhancing productivity and collaboration. Our state-of-the-art AI technology simplifies knowledge discovery, making it more efficient for teams to harness their collective intelligence.

Glean was founded by Arvind Jain, who recognized the challenges employees face in navigating fragmented knowledge and diverse SaaS tools that hinder productivity. With a vision to create a superior solution, he developed an AI-powered enterprise search platform that facilitates quick and intuitive access to essential information. Since then, Glean has transformed into a leading Work AI platform, integrating enterprise-grade search, an AI assistant, and robust application and agent-building capabilities, fundamentally changing the way employees work.

About the Role:
We are on the lookout for talented Machine Learning Engineers who are eager to engage in both Quality Assurance and traditional ML tasks to aid in the development of our revolutionary Enterprise Brain. The Enterprise Brain team is crafting a suite of proactive AI products aimed at transforming enterprise workflows by identifying and automating tasks for users, thereby unlocking genuine productivity. This initiative is based on a profound understanding of user needs and a sophisticated Enterprise graph. The role will involve leveraging both LLM and advanced ML techniques, orchestrating agents, and employing cutting-edge ranking methods.

Your Responsibilities:
- Tackle challenging ML problems that involve...
Join us at Grindr in a hybrid position based in our Palo Alto or San Francisco offices, with in-office attendance required on Tuesdays and Thursdays.

Why This Role is Exciting:
As a pivotal figure at Grindr, you will lead our transformative AI journey. This is your opportunity to leverage state-of-the-art machine learning techniques to revolutionize the way millions within the LGBTQ+ community connect, whether through engaging conversations, casual meetups, or meaningful relationships. Our commitment to machine learning is strong, and you will play an essential role in shaping our strategy and execution on this unique global platform.

- Impact from Day One: You will be instrumental in establishing foundational systems in an early-stage ML environment, charting the roadmap for our long-term strategy.
- Innovative Recommendations: Design and scale recommendation platforms that connect millions to their next significant experience, tailored to diverse user intents.
- Conversational Insights: Employ large language models (LLMs) to extract insights and establish best practices for conversational AI, enhancing user engagement with precision.

Key Responsibilities:
- Develop and manage large-scale recommendation systems to serve millions of users while balancing performance and innovation.
- Utilize advanced LLMs to analyze extensive conversation data, enhancing connections among users.
- Prototype, iterate, and deploy production-ready ML solutions addressing real user challenges.
- Provide technical guidance across teams, collaborating with engineering, data science, and product teams to turn innovative ideas into reality.
- Assess and incorporate emerging AI tools and techniques organization-wide to maintain a leading-edge technology stack.

Qualifications We Seek:
- Over 10 years of experience in building ML systems, particularly in developing 0-to-1 systems, platform architecture, and pioneering new capabilities.
- Familiarity with recommendation systems is advantageous.
- Proven track record of delivering scalable solutions, with proficiency in Python and popular ML frameworks.
- A proactive mindset and the ability to work in a fast-paced, dynamic environment.
At Protegrity, we are at the forefront of data protection innovation, harnessing the power of AI and quantum-resistant cryptography. Our mission is to transform how sensitive data is safeguarded across cloud-native, hybrid, and on-premises environments. Utilizing cutting-edge cryptographic techniques, including tokenization and format-preserving encryption, we ensure that data remains both valuable and secure.

Join us in a collaborative environment where your contributions will directly impact our industry. By working with some of the brightest minds, you will help redefine data security in a GenAI era, where data is the ultimate currency. If you're passionate about shaping the future of data protection, then Protegrity is the place for you!
Full-time|$140K/yr - $265K/yr|On-site|San Francisco Bay Area
About Glean:
Established in 2019, Glean is a pioneering AI-driven knowledge management platform that empowers organizations to swiftly locate, organize, and disseminate information within their teams. By integrating flawlessly with platforms such as Google Drive, Slack, and Microsoft Teams, Glean ensures that employees have timely access to essential knowledge, enhancing productivity and fostering collaboration. Our state-of-the-art AI technology simplifies the discovery of knowledge, enabling teams to utilize their collective intelligence more rapidly and effectively.

The vision for Glean originated from the profound insights of our Founder & CEO, Arvind Jain, who recognized the hurdles employees encounter in accessing and comprehending information at work. Witnessing the fragmentation of knowledge and the overwhelming number of SaaS tools that hindered productivity, he set out to create a superior solution: an AI-powered enterprise search platform that facilitates quick and intuitive information retrieval. Since its inception, Glean has transformed into the premier Work AI platform, blending enterprise-grade search capabilities with an AI assistant and robust application- and agent-building functionalities to fundamentally reshape how employees operate.

About the Role:
We are on the lookout for talented engineers to join our mission of building the world's leading search and assistant product for workplace efficiency. Our engineering team engages with various systems across the technology stack, focusing on areas such as query comprehension, document analysis, domain-specific language modeling, natural language question-answering, evaluation, and experimentation. We maintain regular interactions with our customers to deeply understand their challenges and utilize the most effective tools, whether simple or complex, to address their needs.

Your Responsibilities:
- Develop innovative signals to enhance the personalization of our search engine
- Train models to analyze interactions between signals in our ranking processes
Jan 22, 2026