Machine Learning Engineer At Protegrity Palo Alto Ca jobs in Palo Alto – Browse 1,110 openings on RoboApply Jobs
Machine Learning Engineer At Protegrity Palo Alto Ca jobs in Palo Alto
Open roles matching “Machine Learning Engineer At Protegrity Palo Alto Ca” with location signals for Palo Alto. 1,110 active listings on RoboApply Jobs.
1,110 jobs found
Machine Learning Engineer at Protegrity | Palo Alto, CA
Clicking Apply Now takes you to AutoApply where you can tailor your resume and apply.
About the job
At Protegrity, we are at the forefront of data protection innovation, harnessing the power of AI and quantum-resistant cryptography. Our mission is to transform how sensitive data is safeguarded across cloud-native, hybrid, and on-premises environments. Utilizing cutting-edge cryptographic techniques, including tokenization and format-preserving encryption, we ensure that data remains both valuable and secure.
Join us in a collaborative environment where your contributions will directly impact our industry. By working with some of the brightest minds, you will help redefine data security in a GenAI era, where data is the ultimate currency. If you're passionate about shaping the future of data protection, then Protegrity is the place for you!
Role Overview
Join Nace.AI as a Machine Learning Engineer, where you will be instrumental in transforming advanced machine learning research into scalable, production-ready applications. Collaborating with interdisciplinary teams, you will pinpoint areas where machine learning can enhance product offerings, design robust model-centric architectures, and guarantee their smooth integration into practical applications. This role demands a harmonious blend of theoretical insight and hands-on engineering, focusing on creating dependable, maintainable, and impactful AI-driven features that align with Nace.AI's strategic goals.

Key Responsibilities
- Develop and sustain complete ML systems, including synthetic data pipelines, model training, debugging, and performance assessment.
- Enhance large language models (LLMs) and utilize meta-learning strategies to boost model generalization and efficiency.
- Refine existing Nace.AI models by integrating breakthroughs from the latest ML research.
Join Our Innovative Team
Nubank is a leading digital financial platform, serving over 122 million customers across Brazil, Mexico, and Colombia. Our mission is to simplify financial services and empower individuals, marking the start of a vibrant future in Latin America. As a publicly listed company on the New York Stock Exchange (NYSE: NU), we leverage cutting-edge technology and data intelligence to create financial products that are not only accessible but also user-friendly. Our achievements have earned us recognition from prestigious rankings, such as Time 100 Companies, Fast Company’s Most Innovative Companies, and Forbes World’s Best Bank. Explore more about us on our institutional page.

About the Role
At AI Core, we are expanding our AI initiatives to become the backbone of Nubank's key decision-making systems. We are in search of talented Machine Learning Engineers to spearhead impactful research projects that connect advanced AI technologies with real-world financial systems. Your role will involve tackling intricate challenges using Deep Learning and Foundation Models, ensuring our solutions are scalable, efficient, and yield tangible business outcomes.

As a Machine Learning Engineer (MLE), your responsibilities will include:
- Leading and executing complex applied research initiatives independently, focusing on building and optimizing architectures (e.g., Transformers, GNNs) for critical applications such as Credit, Recommendation Systems, Generative AI, and real-time inference.
- Resolving challenging and ambiguous modeling problems that necessitate collaboration across various teams (Data, Infrastructure, Product), delivering innovative solutions with a clear emphasis on medium-term impact.
- Connecting the research and production worlds by designing architectures that comply with MLOps constraints, ensuring models are optimized for latency, interpretability, and cost-effectiveness.
We invite you to be part of our journey to revolutionize the financial landscape.
About Us
Hippocratic AI stands at the forefront of generative AI in the healthcare sector. Our innovative platform is the only one capable of engaging in safe, autonomous clinical conversations with patients, supported by our proprietary LLMs in the Polaris constellation, boasting an impressive accuracy rate of over 99.9%.

Why Join Our Team
- Revolutionize healthcare with safety-centric AI. We are pioneering the world's first healthcare-specific, safety-oriented LLM—a groundbreaking platform focused on enhancing patient outcomes on a global scale. This is a unique opportunity to contribute to category creation.
- Collaborate with visionaries. Co-founded by CEO Munjal Shah alongside a distinguished team of physicians, hospital executives, AI innovators, and researchers from esteemed institutions such as El Camino Health, Johns Hopkins, Washington University in St. Louis, Stanford, Google, Meta, Microsoft, and NVIDIA.
- Supported by top-tier investors. We recently secured a $126M Series C funding round at a valuation of $3.5B, led by Avenir Growth, bringing our total funding to $404M with contributions from notable investors like CapitalG, General Catalyst, a16z, Kleiner Perkins, and others.
- Build alongside experts in healthcare and AI. Join a team of professionals dedicated to enhancing care, advancing science, and creating transformative technologies that ensure our platform is robust, reliable, and revolutionary.

Location Requirement
We believe collaboration sparks the best ideas. To foster rapid teamwork and a vibrant company culture, this position requires daily presence in our Palo Alto office, five days a week, unless stated otherwise.

About the Role
In healthcare AI, evaluation is crucial—if it can't be measured, it can't be deployed. You will develop systems that assess the safety, accuracy, and readiness of our models for real-world patient interactions: evaluation frameworks, synthetic data pipelines, automated benchmarks, and LLM-as-judge systems.
This role presents a high-impact engineering opportunity where your contributions directly influence what is launched into production.

What You’ll Do
- Create and implement evaluation frameworks focused on LLM safety, clinical accuracy, and conversational quality.
- Build synthetic data generation pipelines to rigorously test models across varied clinical scenarios.
- Develop scalable automated and human-in-the-loop evaluation pipelines.
About Voltai
At Voltai, we are pioneering the future of artificial intelligence by developing world models and agents capable of learning, evaluating, planning, experimenting, and interacting with the physical world. Our initial focus is on understanding and creating advanced hardware, electronic systems, and semiconductors, utilizing AI to design and innovate beyond human cognitive boundaries.

About Our Team
Our remarkable team is backed by esteemed Silicon Valley investors, Stanford University, and industry leaders including CEOs and Presidents of Google, AMD, Broadcom, and Marvell. We boast a diverse group of former Stanford professors, SAIL researchers, Olympiad medalists, CTOs of prominent tech firms, and high-ranking officials with experience in national security and foreign policy.

What We Are Looking For
- Exceptional AI/ML engineering skills, ideally from top-tier programs in Computer Science, Electrical Engineering, Mathematics, or Physics.
- Demonstrated success in delivering AI/ML projects from initial concept through to production deployment.
- Hands-on experience in fine-tuning and deploying large language models (LLMs) within production environments.
- Experience working with multi-modal models that integrate text, image, or audio inputs.

Bonus Points
- Experience in competitive programming.
- Contributions to open-source projects.
- Recognition through awards or publications in leading journals and conferences.
- Ability to thrive in a dynamic, fast-paced startup environment.
Full-time|$10K/mo|On-site|Palo Alto, California, United States
AI Residency
Location: Palo Alto, CA (on-site)

About 1X
At 1X, we are pioneering the development of humanoid robots designed to collaborate with humans, addressing labor shortages and enhancing productivity.

About the Role
The AI Residency offers a unique fixed-term opportunity (3–6 months) to engage in transformative AI and robotics initiatives alongside our dedicated team. As a resident, you will contribute to building critical infrastructure for simulation, data management, and machine learning, directly translating research concepts into practical applications. This is your chance to play a vital role in advancing deployed robotic systems while gaining invaluable hands-on experience at the intersection of AI and robotics.
At Inflection AI, we are dedicated to leveraging the transformative capabilities of artificial intelligence to enhance human well-being and productivity. The future of AI will be characterized by agents we can trust to act on our behalf. We are at the forefront of this evolution with our human-centric AI models that integrate emotional intelligence (EQ) with cognitive intelligence (IQ), shifting interactions from mere transactions to meaningful relationships, thereby generating lasting value for individuals and organizations alike.

Our initiatives manifest in two primary forms:
- Pi, your personal AI, designed to be a compassionate companion that enriches everyday life through practical support and insights.
- Platform — large language models (LLMs) and APIs that empower developers, agents, and enterprises to infuse Pi-level emotional intelligence into experiences where empathy and understanding are crucial.

We are building towards a future of AI agents that foster trust, enhance understanding, and create aligned, long-term value for everyone.

About the Role
As a Model Training Engineer, you will be responsible for designing, building, and scaling post-training pipelines that transform general LLMs into brand-fluent, production-ready assistants. Your innovations in fine-tuning and preference optimization techniques (RLHF, DPO, GRPO, RLAIF) will significantly enhance reliability, alignment, and cost-effectiveness.
About Mistral AI
At Mistral AI, we harness the transformative power of artificial intelligence to streamline tasks, save valuable time, and foster enhanced creativity and learning. Our innovative technology is crafted to effortlessly integrate into everyday work environments.

We are committed to democratizing AI by offering high-performance, optimized, open-source models, products, and solutions. Our extensive AI platform caters to both enterprise and individual needs, featuring products like Le Chat, La Plateforme, Mistral Code, and Mistral Compute—creating cutting-edge intelligence accessible to all users.

As a vibrant and collaborative team, we are driven by our passion for AI and its potential to revolutionize society. Our diverse workforce excels in competitive settings and is dedicated to fostering innovation. With teams distributed across France, the USA, the UK, Germany, and Singapore, we pride ourselves on our creativity, humility, and team spirit. Join us in shaping the future of AI at a pioneering company. Together, we can create a lasting impact. Discover more about our culture at https://mistral.ai/careers.

Role Overview
About the Research Engineering Team
The Research Engineering team operates across Platform (shared infrastructure & clean coding practices) and Embedded (integrated within research squads). Our engineers have the flexibility to navigate the research↔production spectrum as their interests and needs evolve.

As a Machine Learning Research Engineer, you will be responsible for building and optimizing large-scale learning systems that underpin our open-weight models. Collaborating closely with Research Scientists, you may join either:
- Platform RE Team: Focus on enhancing our shared training frameworks, data pipelines, and tools utilized across all teams; or
- Embedded RE Team: Become part of a research squad (Alignment, Pre-training, Multimodal, etc.) to turn innovative ideas into scalable, repeatable code.

Key Responsibilities
• Support researchers by managing the complex aspects of large-scale ML pipelines and developing robust tools.
• Bridge cutting-edge research with production: integrate checkpoints, optimize evaluations, and create accessible APIs.
• Conduct experiments utilizing the latest deep-learning techniques (sparsification on 70B+ models, distributed training across thousands of GPUs).
• Design, implement, and benchmark ML algorithms; produce clear and efficient code in Python.
• Deliver prototypes that evolve into production-grade components for Le Chat and our enterprise API.
Join our team at Mind Robotics as a Machine Learning Infrastructure Engineer, where you'll play a pivotal role in developing the systems that facilitate effective large-scale model training. This position is ideal for individuals who thrive in high-scale environments—overseeing distributed training, managing core ML infrastructure, and leveraging rapid iteration loops across hundreds of GPUs. If you have experience building or managing large training systems in frameworks like PyTorch or JAX and have a passion for optimizing processes such as sharding, parallelism, and performance, you'll find a welcoming environment here. Collaborate closely with researchers to minimize friction, enhance reliability, and streamline the processes for training, evaluating, and deploying models that integrate into real-world applications.
At genbio, located in the heart of Silicon Valley, we are an innovative start-up driven by a team of visionary scientists, engineers, and entrepreneurs. Our mission is to revolutionize biology and medicine with the transformative capabilities of Generative AI. We unite some of the most brilliant minds in AI and Biological Science, challenging the limits of what can be achieved. Our commitment is to holistically decode biology and create a new era of life-changing solutions. As pioneers in the field of pan-modal Large Biological Models (LBM), we are at the forefront of biomedicine, leveraging LBM training to facilitate groundbreaking advancements in healthcare. With an exceptionally robust R&D team, we are positioned to lead in LLM and generative AI technologies, making a global impact from our headquarters in California and our branch office in Paris. Join us on this exciting journey to redefine the future of biology and medicine.
Join Harmony, an innovative open blockchain platform that leverages data sharding and rapid finality. Our on-chain tokens empower social games and community AI, facilitating micro-payments, smart contracts for market pricing, and ensuring data privacy through zero-knowledge proofs.

At Harmony, our mission is to foster trust and establish a radically equitable economy. Our decentralized platform is built to be scalable and secure, enabling transactions without the need for trusted intermediaries.

Embark on a Journey to Innovation
As a Day-1 startup, we recognize that blockchains are becoming foundational to the global economy, yet their current adoption stands at a mere 1%. This presents a unique opportunity for pioneering developers like you to influence the future with a tenfold impact. Harmony thrives on community engagement, with a network boasting hundreds of applications and a team passionate about ambitious goals. The invincible summer of innovation is upon us!

As an engineer, your profound understanding of bytes and systems is invaluable. You are not just a coder but a creator of tools and a solver of complex problems. Your day might involve prototyping cutting-edge research, debugging in hexadecimal, or collaborating asynchronously with a dynamic team of engineers across the globe. Building a blockchain is akin to assembling an aircraft mid-flight, but if you flourish in chaos, why not take the leap?

For those with a creative flair, we appreciate your dedication to enhancing user experiences. You embody the roles of product designer, brand strategist, and industry analyst. Your typical day could involve analyzing user satisfaction metrics, crafting compelling narratives, or leading scrum sessions for product launches that will engage millions. Cultivating a community is an art that requires passion and a commitment to sustaining culture for decades. If this is your lifelong dream, the time to start is now!
Discover our vision for the future at Social A(G)I and Shard 1.
At Harmony, we are pioneering an open blockchain platform that utilizes data sharding and achieves rapid transaction finality. Our innovative on-chain tokens facilitate micro-payments for social games and community AI, while smart contracts enable market pricing, and zero-knowledge proofs ensure data privacy.

Our mission at Harmony is to scale trust and construct a radically fair economy. We are dedicated to providing a decentralized, scalable, and secure platform that allows for transaction settlements without the need for trusted intermediaries.

Join Us in Building the Future
As a Day-1 startup, we recognize that while blockchains are set to become the bedrock of the global economy, their current adoption rate is merely 1%. This presents you, as a pioneering developer, the opportunity to make a significant impact. Harmony thrives on community involvement, boasting a network that supports hundreds of applications and a team driven by ambitious goals. Together, we are poised for an invincible journey ahead!

For engineers, we appreciate your profound understanding of data operations. You are a tool maker, system hacker, and math enthusiast rolled into one. Your day may consist of prototyping cutting-edge research papers, debugging and profiling in hexadecimal, or synchronizing tasks with a diverse team of engineers in an open environment. Building a blockchain is akin to leaping off a cliff while constructing a plane mid-air – if you thrive in chaos, why not take the plunge?

For creatives, your passion for user experience is invaluable. You embody the roles of a product designer, brand manager, and industry analyst all in one. Your typical day includes analyzing metrics to discover what captivates and frustrates users, crafting detailed narratives to explain the 'why' and 'how', and collaborating on product launches to engage millions of users. Building a community is like sharing your heartfelt vision while nurturing a vibrant culture for years to come – if this is your lifelong dream, start today!

Discover our project vision on Social A(G)I and Shard 1.
Join Wealthfront as an Android Engineer and be part of our dynamic team! Collaborating within a cross-functional scrum team, you will be instrumental in designing and implementing innovative investment account and cash account features for our Android application. Your role will encompass not only app development but also the creation of automated tests and the integration of our proprietary design systems with our backend API. Help us realize our goal of making investing effortlessly simple and comprehensible for our clients!
Join us at Grindr as a Staff Machine Learning Engineer in a dynamic hybrid work environment, primarily based in our Palo Alto office. You will be required to work in the office on Tuesdays and Thursdays.

Why This Role is Exciting:
As a pivotal member of Grindr, you will play a crucial role in our AI-driven transformation. This is your opportunity to leverage advanced machine learning techniques to enhance the way millions in the LGBTQ+ community connect, whether for casual chats, fleeting encounters, or enduring relationships. We are committed to making machine learning a cornerstone of Grindr, and your contributions will leave a lasting impact on our unique global platform.
- Impact from Day One: Join a focused team at the forefront of machine learning initiatives, where you will engage in significant, innovative projects that lay the groundwork for our long-term ML vision.
- Transformative Recommendations: Develop systems that connect users to their next meaningful experiences, adapting to a variety of needs and preferences.
- Insightful Conversations: Utilize Large Language Models (LLMs) to extract insights, enhancing user interactions with precision and creativity.

Your Responsibilities:
- Design and implement scalable recommendation systems to serve millions, ensuring a balance between performance and innovation.
- Employ cutting-edge LLMs to analyze extensive conversational data and improve user connections.
- Prototype, refine, and deploy production-ready ML solutions that address real user challenges.
- Work collaboratively with engineering, data science, and product teams to bring bold ideas to fruition.
- Explore and implement new AI tools and techniques to keep Grindr’s technology at the forefront.

Your Qualifications:
- A minimum of 7 years of experience in building machine learning systems, particularly in developing systems from the ground up.
- Experience with recommendation systems is advantageous.
- Demonstrated ability to deliver scalable solutions, with proficiency in Python and popular machine learning frameworks.
- A proactive approach to tackling complex challenges with tangible outcomes.
- Familiarity with data and deployment technologies (e.g., Snowflake) is beneficial.
*This role supports hybrid in-office US-based work in San Francisco - Bay Area, New York, and Seattle.

About Wealthfront Engineering
At Wealthfront, we foster a culture of continuous learning and innovation, focusing on delivering high-quality software solutions. Our primary strategy revolves around automation, enabling us to enhance the financial services we provide to our clients.

Our software automatically manages various financial processes, learning from data as we strive for improvement. Wealthfront engineers operate as builder/operators, taking full responsibility for both the development and operational maintenance of our systems, with an emphasis on automating any repetitive tasks.

We prioritize automation in our development processes; for instance, all testing is automated — manual testing is not part of our software lifecycle. Additionally, we ensure observability in case of failures, with systems that notify engineers about issues, allowing us to maintain healthy operations.

If you are passionate about working in a dynamic engineering environment where your contributions can significantly impact the company's success, especially in a tight-knit team of around 250 employees, with half in engineering, we encourage you to apply!
Our Vision
At Tinder, we believe that the thrill of meeting new people is one of life’s greatest joys. We are dedicated to nurturing the magic of human connection, engaging tens of millions of users worldwide. With hundreds of millions of downloads, over 2 billion swipes daily, 20 million matches each day, and a presence in more than 190 countries, our influence is vast and continually expanding.

Our team collaborates to tackle intricate challenges, blending insights from human relationships, behavioral science, network economics, AI, and machine learning, while prioritizing user safety and cultural sensitivity. We explore the depths of loneliness, love, and connection.

Internship Duration
The internship will take place from June 1 to August 28, 2026.

Work Environment
This is a hybrid position, requiring in-office collaboration three days a week in our Palo Alto, California office.

Role Overview
As a member of the Tinder ML team, you will play a crucial role in shaping the product experience across diverse domains, including Recommendations, Trust & Safety, Profile Management, Chat, Growth, and Revenue. Our goal is to leverage machine learning to enhance user experiences, build trust, and drive business growth within Tinder's ecosystem. This internship offers a unique opportunity to work alongside experienced engineers to develop and implement machine learning solutions that align with Tinder’s strategic objectives.
At Simular, we are at the forefront of AI research, pushing the boundaries of what is possible in machine learning and artificial intelligence. We are seeking a passionate and driven PhD Research Intern to join our innovative team. This position may be based in any of our listed locations, with priority determined according to the order of listing.

Your Role:
Work closely with our talented research scientists to enhance methodologies in the following areas:
- Planning and Reinforcement Learning (RL) for computer applications, including behavioral cloning and RL on model weights.
- Multimodal grounding, focusing on vision-only models and hybrid methods incorporating large models.
- Reward and Judge Modeling, encompassing error analysis and human evaluation.
- Understanding user intent, particularly in modeling vague queries and preference learning.
- Assist in dataset development, conduct experiments, and benchmark results.
- Investigate innovative approaches to support Simular's long-term technical roadmap.
- Document and share findings through detailed internal reports and academic-style writing.
At Rhoda AI, we are pioneering the development of a comprehensive full-stack platform for the next generation of humanoid robots. Our innovative approach encompasses high-performance, software-defined hardware along with foundational and video world models that empower our robotic systems. Our robots are engineered as versatile generalists, adept at navigating intricate, real-world scenarios, including those not encountered during training. Collaborating with a distinguished research team from Stanford, Berkeley, Harvard, and other leading institutions, we operate at the forefront of large-scale learning, robotics, and systems engineering. With over $400M in funding, we are aggressively investing in research and development, hardware innovation, and scaling up manufacturing to bring our vision to life.

We are on the lookout for a Staff / Principal Machine Learning Engineer to take charge of our training platform. This pivotal system is essential for ensuring that large-scale training is reliable, reproducible, and straightforward to execute. You will play a crucial role in defining the lifecycle of training jobs, including their launch, tracking, recovery, and debugging across our clusters. Your contributions will enable researchers to innovate rapidly without infrastructure hindrances.

In this role, you will be at the heart of enhancing research efficiency: when a training job fails, your system will allow for automatic recovery; when experiments become challenging to reproduce, you will implement effective solutions; and when GPU hours are squandered, you will ensure visibility and preventative measures are in place.
About ALSO
At ALSO, we are pioneers in electric mobility, founded as part of Rivian. Our dedicated team comprises builders, innovators, and visionaries who are passionate about developing unique, vertically integrated electric vehicles (EVs) that tackle today’s and tomorrow’s mobility challenges. Our goal is to inspire our community to choose ALSO, replacing traditional vehicle miles with more affordable, enjoyable, and efficient alternatives, achieving 10-50 times greater efficiency.

The Role
We are in search of a senior Software Engineer with a strong focus on algorithms, machine learning, and edge AI. In this position, you will decipher ambiguous customer and product requirements, model complex technical problems, and transform solutions into robust, production-ready code that operates seamlessly across mobile, embedded, and cloud platforms.

Collaboration is key in this role; you will work closely with Product, Design, Embedded, Mobile, Cloud, Data, Manufacturing, and Systems Engineering teams to make informed architectural decisions regarding algorithm deployment, performance measurement, and continuous improvement.

This role is well-suited for an individual who can navigate the entire process from problem definition to modeling, experimentation, implementation, deployment, telemetry, and iteration. You should adeptly balance algorithmic excellence with real-world constraints such as latency, battery life, compute power, bandwidth, reliability, and manufacturability.

What You Will Do
- Design, develop, deploy, and refine algorithms and machine learning models across mobile, embedded, and cloud environments.
- Collaborate with Product Managers, Designers, and Engineering stakeholders to convert customer needs into precise algorithmic requirements.
- Engage with platform and domain experts to effectively deploy algorithms in production and monitor their performance in real-world settings.
- Conduct performance analysis of models and algorithms through experimentation, A/B testing, shadow testing, telemetry analysis, and offline evaluation.
- Create and manage input/output data schemas to facilitate efficient edge-to-cloud telemetry, diagnostics, model retraining, and ongoing enhancements.
- Optimize deployed algorithms for factors such as latency, battery consumption, communication efficiency, robustness, and intended product functionality.
- Investigate anomalies, failure modes, regressions, and edge cases to enhance algorithm reliability.
Join us at Grindr in a hybrid position based in our Palo Alto or San Francisco offices, with in-office attendance required on Tuesdays and Thursdays.

Why This Role is Exciting:
As a pivotal figure at Grindr, you will lead our transformative AI journey. This is your opportunity to leverage state-of-the-art machine learning techniques to revolutionize the way millions within the LGBTQ+ community connect, whether through engaging conversations, casual meetups, or meaningful relationships. Our commitment to machine learning is strong, and you will play an essential role in shaping our strategy and execution on this unique global platform.
- Impact from Day One: You will be instrumental in establishing foundational systems in an early-stage ML environment, charting the roadmap for our long-term strategy.
- Innovative Recommendations: Design and scale recommendation platforms that connect millions to their next significant experience, tailored to diverse user intents.
- Conversational Insights: Employ large language models (LLMs) to extract insights and establish best practices for conversational AI, enhancing user engagement with precision.

Key Responsibilities:
- Develop and manage large-scale recommendation systems to serve millions of users while balancing performance and innovation.
- Utilize advanced LLMs to analyze extensive conversation data, enhancing connections among users.
- Prototype, iterate, and deploy production-ready ML solutions addressing real user challenges.
- Provide technical guidance across teams, collaborating with engineering, data science, and product teams to turn innovative ideas into reality.
- Assess and incorporate emerging AI tools and techniques organization-wide to maintain a leading-edge technology stack.

Qualifications We Seek:
- Over 10 years of experience in building ML systems, particularly in developing 0-to-1 systems, platform architecture, and pioneering new capabilities.
- Familiarity with recommendation systems is advantageous.
- Proven track record of delivering scalable solutions, with proficiency in Python and popular ML frameworks.
- A proactive mindset and the ability to work in a fast-paced, dynamic environment.
At Protegrity, we are at the forefront of data protection innovation, harnessing the power of AI and quantum-resistant cryptography. Our mission is to transform how sensitive data is safeguarded across cloud-native, hybrid, and on-premises environments. Utilizing cutting-edge cryptographic techniques, including tokenization and format-preserving encryption, …
Role Overview:
Join Nace.AI as a Machine Learning Engineer, where you will be instrumental in transforming advanced machine learning research into scalable, production-ready applications. Collaborating with interdisciplinary teams, you will pinpoint areas where machine learning can enhance product offerings, design robust model-centric architectures, and guarantee their smooth integration into practical applications. This role demands a harmonious blend of theoretical insight and hands-on engineering, focusing on creating dependable, maintainable, and impactful AI-driven features that align with Nace.AI's strategic goals.

Key Responsibilities:
Develop and sustain complete ML systems, including synthetic data pipelines, model training, debugging, and performance assessment.
Enhance large language models (LLMs) and utilize meta-learning strategies to boost model generalization and efficiency.
Refine existing Nace.AI models by integrating breakthroughs from the latest ML research.
Join Our Innovative Team
Nubank is a leading digital financial platform, serving over 122 million customers across Brazil, Mexico, and Colombia. Our mission is to simplify financial services and empower individuals, marking the start of a vibrant future in Latin America. As a publicly listed company on the New York Stock Exchange (NYSE: NU), we leverage cutting-edge technology and data intelligence to create financial products that are not only accessible but also user-friendly. Our achievements have earned us recognition from prestigious rankings, such as Time 100 Companies, Fast Company’s Most Innovative Companies, and Forbes World’s Best Bank. Explore more about us on our institutional page.

About the Role
At AI Core, we are expanding our AI initiatives to become the backbone of Nubank's key decision-making systems. We are in search of talented Machine Learning Engineers to spearhead impactful research projects that connect advanced AI technologies with real-world financial systems. Your role will involve tackling intricate challenges using Deep Learning and Foundation Models, ensuring our solutions are scalable, efficient, and yield tangible business outcomes.

As a Machine Learning Engineer (MLE), your responsibilities will include:
Leading and executing complex applied research initiatives independently, focusing on building and optimizing architectures (e.g., Transformers, GNNs) for critical applications such as Credit, Recommendation Systems, Generative AI, and real-time inference.
Resolving challenging and ambiguous modeling problems that necessitate collaboration across various teams (Data, Infrastructure, Product), delivering innovative solutions with a clear emphasis on medium-term impact.
Connecting the research and production worlds by designing architectures that comply with MLOps constraints, ensuring models are optimized for latency, interpretability, and cost-effectiveness.
We invite you to be part of our journey to revolutionize the financial landscape.
About Us
Hippocratic AI stands at the forefront of generative AI in the healthcare sector. Our innovative platform is the only one capable of engaging in safe, autonomous clinical conversations with patients, supported by our proprietary LLMs in the Polaris constellation, boasting an impressive accuracy rate of over 99.9%.

Why Join Our Team
Revolutionize healthcare with safety-centric AI. We are pioneering the world's first healthcare-specific, safety-oriented LLM—a groundbreaking platform focused on enhancing patient outcomes on a global scale. This is a unique opportunity to contribute to category creation.
Collaborate with visionaries. Co-founded by CEO Munjal Shah alongside a distinguished team of physicians, hospital executives, AI innovators, and researchers from esteemed institutions such as El Camino Health, Johns Hopkins, Washington University in St. Louis, Stanford, Google, Meta, Microsoft, and NVIDIA.
Supported by top-tier investors. We recently secured a $126M Series C funding round at a valuation of $3.5B, led by Avenir Growth, bringing our total funding to $404M with contributions from notable investors like CapitalG, General Catalyst, a16z, Kleiner Perkins, and others.
Build alongside experts in healthcare and AI. Join a team of professionals dedicated to enhancing care, advancing science, and creating transformative technologies that ensure our platform is robust, reliable, and revolutionary.

Location Requirement
We believe collaboration sparks the best ideas. To foster rapid teamwork and a vibrant company culture, this position requires daily presence in our Palo Alto office, five days a week, unless stated otherwise.

About the Role
In healthcare AI, evaluation is crucial—if it can't be measured, it can't be deployed. You will develop systems that assess the safety, accuracy, and readiness of our models for real-world patient interactions: evaluation frameworks, synthetic data pipelines, automated benchmarks, and LLM-as-judge systems. This role presents a high-impact engineering opportunity where your contributions directly influence what is launched into production.

What You’ll Do
Create and implement evaluation frameworks focused on LLM safety, clinical accuracy, and conversational quality.
Build synthetic data generation pipelines to rigorously test models across varied clinical scenarios.
Develop scalable automated and human-in-the-loop evaluation pipelines.
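The LLM-as-judge pattern named above can be sketched in miniature: a judge model scores each (prompt, response) pair against a rubric, and an aggregate pass rate gates deployment. In the sketch below the judge is a keyword stub standing in for a real model call; `EvalCase`, `keyword_judge`, and the 0.9 threshold are illustrative assumptions, not Hippocratic AI's actual framework.

```python
from dataclasses import dataclass

@dataclass
class EvalCase:
    prompt: str
    response: str

def llm_as_judge(cases, judge, threshold=0.9):
    """Score each (prompt, response) pair with a judge and report the
    fraction of cases clearing a deployment threshold."""
    scores = [judge(c.prompt, c.response) for c in cases]
    pass_rate = sum(s >= threshold for s in scores) / len(scores)
    return {"pass_rate": pass_rate, "scores": scores}

# Keyword stub standing in for a real LLM judge call; the rubric
# (safe answers defer to a clinician) is purely illustrative.
def keyword_judge(prompt, response):
    return 1.0 if "consult" in response.lower() else 0.5

cases = [
    EvalCase("Can I double my dose?", "Please consult your clinician first."),
    EvalCase("Is this rash serious?", "It is probably nothing."),
]
result = llm_as_judge(cases, keyword_judge)
print(result["pass_rate"])  # → 0.5
```

In a real harness the stub would be replaced by a model call and the rubric by a clinically reviewed prompt, but the aggregate-and-gate structure stays the same.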
About Voltai
At Voltai, we are pioneering the future of artificial intelligence by developing world models and agents capable of learning, evaluating, planning, experimenting, and interacting with the physical world. Our initial focus is on understanding and creating advanced hardware, electronic systems, and semiconductors, utilizing AI to design and innovate beyond human cognitive boundaries.

About Our Team
Our remarkable team is backed by esteemed Silicon Valley investors, Stanford University, and industry leaders including CEOs and Presidents of Google, AMD, Broadcom, and Marvell. We boast a diverse group of former Stanford professors, SAIL researchers, Olympiad medalists, CTOs of prominent tech firms, and high-ranking officials with experience in national security and foreign policy.

What We Are Looking For
Exceptional AI/ML engineering skills, ideally from top-tier programs in Computer Science, Electrical Engineering, Mathematics, or Physics.
Demonstrated success in delivering AI/ML projects from initial concept through to production deployment.
Hands-on experience in fine-tuning and deploying large language models (LLMs) within production environments.
Experience working with multi-modal models that integrate text, image, or audio inputs.

Bonus Points
Experience in competitive programming.
Contributions to open-source projects.
Recognition through awards or publications in leading journals and conferences.
Ability to thrive in a dynamic, fast-paced startup environment.
Full-time | $10K/mo | On-site | Palo Alto, California, United States
AI Residency
Location: Palo Alto, CA (on-site)

About 1X
At 1X, we are pioneering the development of humanoid robots designed to collaborate with humans, addressing labor shortages and enhancing productivity.

About the Role
The AI Residency offers a unique fixed-term opportunity (3–6 months) to engage in transformative AI and robotics initiatives alongside our dedicated team. As a resident, you will contribute to building critical infrastructure for simulation, data management, and machine learning, directly translating research concepts into practical applications. This is your chance to play a vital role in advancing deployed robotic systems while gaining invaluable hands-on experience at the intersection of AI and robotics.
At Inflection AI, we are dedicated to leveraging the transformative capabilities of artificial intelligence to enhance human well-being and productivity. The future of AI will be characterized by agents we can trust to act on our behalf. We are at the forefront of this evolution with our human-centric AI models that integrate emotional intelligence (EQ) with cognitive intelligence (IQ), shifting interactions from mere transactions to meaningful relationships, thereby generating lasting value for individuals and organizations alike.

Our initiatives manifest in two primary forms:
Pi, your personal AI, designed to be a compassionate companion that enriches everyday life through practical support and insights.
Platform — large language models (LLMs) and APIs that empower developers, agents, and enterprises to infuse Pi-level emotional intelligence into experiences where empathy and understanding are crucial.

We are building towards a future of AI agents that foster trust, enhance understanding, and create aligned, long-term value for everyone.

About the Role
As a Model Training Engineer, you will be responsible for designing, building, and scaling post-training pipelines that transform general LLMs into brand-fluent, production-ready assistants. Your innovations in fine-tuning and preference optimization techniques (RLHF, DPO, GRPO, RLAIF) will significantly enhance reliability, alignment, and cost-effectiveness.
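Of the preference-optimization techniques listed (RLHF, DPO, GRPO, RLAIF), DPO has the simplest objective to write down: for each preference pair, the loss rewards the policy for favoring the chosen response over the rejected one more strongly than a frozen reference model does. A minimal per-pair sketch with made-up log-probabilities follows; the β = 0.1 default mirrors the DPO paper's convention, and nothing here reflects Inflection's actual pipeline.

```python
import math

def dpo_loss(pi_w, pi_l, ref_w, ref_l, beta=0.1):
    """Per-pair DPO loss: -log sigmoid(beta * ((pi_w - ref_w) - (pi_l - ref_l))),
    where pi_* / ref_* are log-probs of the chosen (w) and rejected (l)
    responses under the policy and the frozen reference model."""
    margin = beta * ((pi_w - ref_w) - (pi_l - ref_l))
    return -math.log(1.0 / (1.0 + math.exp(-margin)))

# The policy favors the chosen response more than the reference does,
# so the margin is positive and the loss drops below log(2) ≈ 0.693.
loss = dpo_loss(pi_w=-4.0, pi_l=-9.0, ref_w=-6.0, ref_l=-7.0)
print(round(loss, 3))  # → 0.513
```

In training, these log-probs come from summing token log-likelihoods of each response, and the loss is averaged over a batch of preference pairs.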
About Mistral AI
At Mistral AI, we harness the transformative power of artificial intelligence to streamline tasks, save valuable time, and foster enhanced creativity and learning. Our innovative technology is crafted to effortlessly integrate into everyday work environments. We are committed to democratizing AI by offering high-performance, optimized, open-source models, products, and solutions. Our extensive AI platform caters to both enterprise and individual needs, featuring products like Le Chat, La Plateforme, Mistral Code, and Mistral Compute—creating cutting-edge intelligence accessible to all users.

As a vibrant and collaborative team, we are driven by our passion for AI and its potential to revolutionize society. Our diverse workforce excels in competitive settings and is dedicated to fostering innovation. With teams distributed across France, the USA, the UK, Germany, and Singapore, we pride ourselves on our creativity, humility, and team spirit. Join us in shaping the future of AI at a pioneering company. Together, we can create a lasting impact. Discover more about our culture at https://mistral.ai/careers.

Role Overview
About the Research Engineering Team
The Research Engineering team operates across Platform (shared infrastructure & clean coding practices) and Embedded (integrated within research squads). Our engineers have the flexibility to navigate the research↔production spectrum as their interests and needs evolve.

As a Machine Learning Research Engineer, you will be responsible for building and optimizing large-scale learning systems that underpin our open-weight models. Collaborating closely with Research Scientists, you may join either:
- Platform RE Team: Focus on enhancing our shared training frameworks, data pipelines, and tools utilized across all teams; or
- Embedded RE Team: Become part of a research squad (Alignment, Pre-training, Multimodal, etc.) to turn innovative ideas into scalable, repeatable code.

Key Responsibilities
• Support researchers by managing the complex aspects of large-scale ML pipelines and developing robust tools.
• Bridge cutting-edge research with production: integrate checkpoints, optimize evaluations, and create accessible APIs.
• Conduct experiments utilizing the latest deep-learning techniques (sparsification on 70B+ models, distributed training across thousands of GPUs).
• Design, implement, and benchmark ML algorithms; produce clear and efficient code in Python.
• Deliver prototypes that evolve into production-grade components for Le Chat and our enterprise API.
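"Distributed training across thousands of GPUs", as mentioned in the responsibilities above, almost always means some mix of data and model parallelism. Here is a toy data-parallel step with the all-reduce reduced to a plain Python mean over two simulated workers; this is illustrative only, as real stacks perform the gradient exchange with NCCL-style collectives under PyTorch or JAX.

```python
def local_grad(w, shard):
    # Gradient of mean squared error for y_hat = w * x on one worker's shard.
    return sum(2 * (w * x - y) * x for x, y in shard) / len(shard)

def all_reduce_mean(grads):
    # Stand-in for the all-reduce collective: average gradients across workers.
    return sum(grads) / len(grads)

def train_step(w, shards, lr=0.01):
    grads = [local_grad(w, s) for s in shards]  # computed in parallel for real
    return w - lr * all_reduce_mean(grads)

data = [(x, 3.0 * x) for x in range(1, 9)]  # ground-truth slope is 3.0
shards = [data[:4], data[4:]]               # two simulated workers
w = 0.0
for _ in range(200):
    w = train_step(w, shards)
print(round(w, 2))  # → 3.0
```

Each worker sees only its shard, yet the averaged update converges to the same slope full-batch training would find, which is the whole point of data parallelism.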
Join our team at Mind Robotics as a Machine Learning Infrastructure Engineer, where you'll play a pivotal role in developing the systems that facilitate effective large-scale model training. This position is ideal for individuals who thrive in high-scale environments—overseeing distributed training, managing core ML infrastructure, and leveraging rapid iteration loops across hundreds of GPUs. If you have experience building or managing large training systems in frameworks like PyTorch or JAX and have a passion for optimizing processes such as sharding, parallelism, and performance, you'll find a welcoming environment here. Collaborate closely with researchers to minimize friction, enhance reliability, and streamline the processes for training, evaluating, and deploying models that integrate into real-world applications.
At genbio, located in the heart of Silicon Valley, we are an innovative start-up driven by a team of visionary scientists, engineers, and entrepreneurs. Our mission is to revolutionize biology and medicine with the transformative capabilities of Generative AI. We unite some of the most brilliant minds in AI and Biological Science, challenging the limits of what can be achieved. Our commitment is to holistically decode biology and create a new era of life-changing solutions. As pioneers in the field of pan-modal Large Biological Models (LBM), we are at the forefront of biomedicine, leveraging LBM training to facilitate groundbreaking advancements in healthcare. With an exceptionally robust R&D team, we are positioned to lead in LLM and generative AI technologies, making a global impact from our headquarters in California and our branch office in Paris. Join us on this exciting journey to redefine the future of biology and medicine.
Join Harmony, an innovative open blockchain platform that leverages data sharding and rapid finality. Our on-chain tokens empower social games and community AI, facilitating micro-payments, smart contracts for market pricing, and ensuring data privacy through zero-knowledge proofs. At Harmony, our mission is to foster trust and establish a radically equitable economy. Our decentralized platform is built to be scalable and secure, enabling transactions without the need for trusted intermediaries.

Embark on a Journey to Innovation
As a Day-1 startup, we recognize that blockchains are becoming foundational to the global economy, yet their current adoption stands at a mere 1%. This presents a unique opportunity for pioneering developers like you to influence the future with a tenfold impact. Harmony thrives on community engagement, with a network boasting hundreds of applications and a team passionate about ambitious goals. The invincible summer of innovation is upon us!

As an engineer, your profound understanding of bytes and systems is invaluable. You are not just a coder but a creator of tools and a solver of complex problems. Your day might involve prototyping cutting-edge research, debugging in hexadecimal, or collaborating asynchronously with a dynamic team of engineers across the globe. Building a blockchain is akin to assembling an aircraft mid-flight, but if you flourish in chaos, why not take the leap?

For those with a creative flair, we appreciate your dedication to enhancing user experiences. You embody the roles of product designer, brand strategist, and industry analyst. Your typical day could involve analyzing user satisfaction metrics, crafting compelling narratives, or leading scrum sessions for product launches that will engage millions. Cultivating a community is an art that requires passion and a commitment to sustaining culture for decades. If this is your lifelong dream, the time to start is now!
Discover our vision for the future at Social A(G)I and Shard 1.
At Harmony, we are pioneering an open blockchain platform that utilizes data sharding and achieves rapid transaction finality. Our innovative on-chain tokens facilitate micro-payments for social games and community AI, while smart contracts enable market pricing, and zero-knowledge proofs ensure data privacy. Our mission at Harmony is to scale trust and construct a radically fair economy. We are dedicated to providing a decentralized, scalable, and secure platform that allows for transaction settlements without the need for trusted intermediaries.

Join Us in Building the Future
As a Day-1 startup, we recognize that while blockchains are set to become the bedrock of the global economy, their current adoption rate is merely 1%. This presents you, as a pioneering developer, the opportunity to make a significant impact. Harmony thrives on community involvement, boasting a network that supports hundreds of applications and a team driven by ambitious goals. Together, we are poised for an invincible journey ahead!

For engineers, we appreciate your profound understanding of data operations. You are a tool maker, system hacker, and math enthusiast rolled into one. Your day may consist of prototyping cutting-edge research papers, debugging and profiling in hexadecimal, or synchronizing tasks with a diverse team of engineers in an open environment. Building a blockchain is akin to leaping off a cliff while constructing a plane mid-air – if you thrive in chaos, why not take the plunge?

For creatives, your passion for user experience is invaluable. You embody the roles of a product designer, brand manager, and industry analyst all in one. Your typical day includes analyzing metrics to discover what captivates and frustrates users, crafting detailed narratives to explain the 'why' and 'how', and collaborating on product launches to engage millions of users.
Building a community is like sharing your heartfelt vision while nurturing a vibrant culture for years to come – if this is your lifelong dream, start today! Discover our project vision on Social A(G)I and Shard 1.
Join Wealthfront as an Android Engineer and be part of our dynamic team! Collaborating within a cross-functional scrum team, you will be instrumental in designing and implementing innovative investment account and cash account features for our Android application. Your role will encompass not only app development but also the creation of automated tests and the integration of our proprietary design systems with our backend API. Help us realize our goal of making investing effortlessly simple and comprehensible for our clients!
Join us at Grindr as a Staff Machine Learning Engineer in a dynamic hybrid work environment, primarily based in our Palo Alto office. You will be required to work in the office on Tuesdays and Thursdays.

Why This Role is Exciting:
As a pivotal member of Grindr, you will play a crucial role in our AI-driven transformation. This is your opportunity to leverage advanced machine learning techniques to enhance the way millions in the LGBTQ+ community connect, whether for casual chats, fleeting encounters, or enduring relationships. We are committed to making machine learning a cornerstone of Grindr, and your contributions will leave a lasting impact on our unique global platform.

Impact from Day One: Join a focused team at the forefront of machine learning initiatives, where you will engage in significant, innovative projects that lay the groundwork for our long-term ML vision.
Transformative Recommendations: Develop systems that connect users to their next meaningful experiences, adapting to a variety of needs and preferences.
Insightful Conversations: Utilize Large Language Models (LLMs) to extract insights, enhancing user interactions with precision and creativity.

Your Responsibilities:
Design and implement scalable recommendation systems to serve millions, ensuring a balance between performance and innovation.
Employ cutting-edge LLMs to analyze extensive conversational data and improve user connections.
Prototype, refine, and deploy production-ready ML solutions that address real user challenges.
Work collaboratively with engineering, data science, and product teams to bring bold ideas to fruition.
Explore and implement new AI tools and techniques to keep Grindr’s technology at the forefront.

Your Qualifications:
A minimum of 7 years of experience in building machine learning systems, particularly in developing systems from the ground up.
Experience with recommendation systems is advantageous.
Demonstrated ability to deliver scalable solutions, with proficiency in Python and popular machine learning frameworks.
A proactive approach to tackling complex challenges with tangible outcomes.
Familiarity with data and deployment technologies (e.g., Snowflake, etc.) is beneficial.
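The recommendation-systems experience asked for above boils down, in its simplest form, to item-item co-occurrence: score candidate items by how often they appear alongside items the user already engaged with. A toy sketch follows; the user and item names are invented, and production systems replace this counting with learned embeddings and approximate nearest-neighbor retrieval.

```python
from collections import Counter

def recommend(user, interactions, k=2):
    """Score items the user hasn't seen by how often they co-occur with
    items the user already engaged with (item-item co-occurrence)."""
    liked = interactions[user]
    scores = Counter()
    for other, items in interactions.items():
        if other == user:
            continue
        overlap = len(liked & items)      # similarity to the other user
        if overlap:
            for item in items - liked:    # only unseen items are candidates
                scores[item] += overlap
    return [item for item, _ in scores.most_common(k)]

interactions = {
    "a": {"x", "y"},
    "b": {"x", "y", "z"},
    "c": {"y", "w"},
}
print(recommend("a", interactions))  # → ['z', 'w']
```

User "a" shares two items with "b" and one with "c", so "b"'s unseen item "z" outranks "c"'s "w"; the same weighting idea survives into far larger systems.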
*This role supports hybrid in-office US-based work in San Francisco - Bay Area, New York, and Seattle.

About Wealthfront Engineering
At Wealthfront, we foster a culture of continuous learning and innovation, focusing on delivering high-quality software solutions. Our primary strategy revolves around automation, enabling us to enhance the financial services we provide to our clients. Our software automatically manages various financial processes, learning from data as we strive for improvement. Wealthfront engineers operate as builder/operators, taking full responsibility for both the development and operational maintenance of our systems, with an emphasis on automating any repetitive tasks.

We prioritize automation in our development processes; for instance, all testing is automated — manual testing is not part of our software lifecycle. Additionally, we ensure observability in case of failures, with systems that notify engineers about issues, allowing us to maintain healthy operations. If you are passionate about working in a dynamic engineering environment where your contributions can significantly impact the company's success, especially in a tight-knit team of around 250 employees, with half in engineering, we encourage you to apply!
Our Vision
At Tinder, we believe that the thrill of meeting new people is one of life’s greatest joys. We are dedicated to nurturing the magic of human connection, engaging tens of millions of users worldwide. With hundreds of millions of downloads, over 2 billion swipes daily, 20 million matches each day, and a presence in more than 190 countries, our influence is vast and continually expanding. Our team collaborates to tackle intricate challenges, blending insights from human relationships, behavioral science, network economics, AI, and machine learning, while prioritizing user safety and cultural sensitivity. We explore the depths of loneliness, love, and connection.

Internship Duration
The internship will take place from June 1 to August 28, 2026.

Work Environment
This is a hybrid position, requiring in-office collaboration three days a week in our Palo Alto, California office.

Role Overview
As a member of the Tinder ML team, you will play a crucial role in shaping the product experience across diverse domains, including Recommendations, Trust & Safety, Profile Management, Chat, Growth, and Revenue. Our goal is to leverage machine learning to enhance user experiences, build trust, and drive business growth within Tinder's ecosystem. This internship offers a unique opportunity to work alongside experienced engineers to develop and implement machine learning solutions that align with Tinder’s strategic objectives.
At Simular, we are at the forefront of AI research, pushing the boundaries of what is possible in machine learning and artificial intelligence. We are seeking a passionate and driven PhD Research Intern to join our innovative team. This position may be based in any of our listed locations, with priority determined according to the order of listing.

Your Role:
Work closely with our talented research scientists to enhance methodologies in the following areas:
Planning and Reinforcement Learning (RL) for computer applications, including behavioral cloning and RL on model weights.
Multimodal grounding, focusing on vision-only models and hybrid methods incorporating large models.
Reward and Judge Modeling, encompassing error analysis and human evaluation.
Understanding user intent, particularly in modeling vague queries and preference learning.
Assist in dataset development, conduct experiments, and benchmark results.
Investigate innovative approaches to support Simular's long-term technical roadmap.
Document and share findings through detailed internal reports and academic-style writing.
At Rhoda AI, we are pioneering the development of a comprehensive full-stack platform for the next generation of humanoid robots. Our innovative approach encompasses high-performance, software-defined hardware along with foundational and video world models that empower our robotic systems. Our robots are engineered as versatile generalists, adept at navigating intricate, real-world scenarios, including those not encountered during training. Collaborating with a distinguished research team from Stanford, Berkeley, Harvard, and other leading institutions, we operate at the forefront of large-scale learning, robotics, and systems engineering. With over $400M in funding, we are aggressively investing in research and development, hardware innovation, and scaling up manufacturing to bring our vision to life.

We are on the lookout for a Staff / Principal Machine Learning Engineer to take charge of our training platform. This pivotal system is essential for ensuring that large-scale training is reliable, reproducible, and straightforward to execute. You will play a crucial role in defining the lifecycle of training jobs, including their launch, tracking, recovery, and debugging across our clusters. Your contributions will enable researchers to innovate rapidly without infrastructure hindrances.

In this role, you will be at the heart of enhancing research efficiency: when a training job fails, your system will allow for automatic recovery; when experiments become challenging to reproduce, you will implement effective solutions; and when GPU hours are squandered, you will ensure visibility and preventative measures are in place.
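The "automatic recovery" requirement described above usually starts with checkpointing: persist job state atomically at intervals, and on launch resume from the latest checkpoint instead of step zero. A minimal file-based sketch follows; the JSON schema, the `every=10` cadence, and the toy step function are illustrative assumptions, not Rhoda AI's platform.

```python
import json
import os
import tempfile

def run_job(step_fn, total_steps, ckpt_path, every=10):
    """Resumable loop: persist state every `every` steps so a crashed
    job restarts from the latest checkpoint instead of step zero."""
    state = {"step": 0, "loss": None}
    if os.path.exists(ckpt_path):            # automatic recovery on relaunch
        with open(ckpt_path) as f:
            state = json.load(f)
    while state["step"] < total_steps:
        state["loss"] = step_fn(state["step"])
        state["step"] += 1
        if state["step"] % every == 0:
            tmp = ckpt_path + ".tmp"         # write-then-rename keeps the
            with open(tmp, "w") as f:        # checkpoint file always valid
                json.dump(state, f)
            os.replace(tmp, ckpt_path)
    return state

ckpt = os.path.join(tempfile.mkdtemp(), "ckpt.json")
final = run_job(lambda step: 1.0 / (step + 1), total_steps=25, ckpt_path=ckpt)
print(final["step"])  # → 25
```

If this process died mid-run and were relaunched with the same `ckpt_path`, it would reload the last persisted state (step 20 here) and replay only the remaining steps; real platforms apply the same idea to model weights and optimizer state.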
About ALSO
At ALSO, we are pioneers in electric mobility, founded as part of Rivian. Our dedicated team comprises builders, innovators, and visionaries who are passionate about developing unique, vertically integrated electric vehicles (EVs) that tackle today’s and tomorrow’s mobility challenges. Our goal is to inspire our community to choose ALSO, replacing traditional vehicle miles with more affordable, enjoyable, and efficient alternatives, achieving 10-50 times greater efficiency.

The Role
We are in search of a senior Software Engineer with a strong focus on algorithms, machine learning, and edge AI. In this position, you will decipher ambiguous customer and product requirements, model complex technical problems, and transform solutions into robust, production-ready code that operates seamlessly across mobile, embedded, and cloud platforms. Collaboration is key in this role; you will work closely with Product, Design, Embedded, Mobile, Cloud, Data, Manufacturing, and Systems Engineering teams to make informed architectural decisions regarding algorithm deployment, performance measurement, and continuous improvement. This role is well-suited for an individual who can navigate the entire process from problem definition to modeling, experimentation, implementation, deployment, telemetry, and iteration.
You should adeptly balance algorithmic excellence with real-world constraints such as latency, battery life, compute power, bandwidth, reliability, and manufacturability.

What You Will Do
Design, develop, deploy, and refine algorithms and machine learning models across mobile, embedded, and cloud environments.
Collaborate with Product Managers, Designers, and Engineering stakeholders to convert customer needs into precise algorithmic requirements.
Engage with platform and domain experts to effectively deploy algorithms in production and monitor their performance in real-world settings.
Conduct performance analysis of models and algorithms through experimentation, A/B testing, shadow testing, telemetry analysis, and offline evaluation.
Create and manage input/output data schemas to facilitate efficient edge-to-cloud telemetry, diagnostics, model retraining, and ongoing enhancements.
Optimize deployed algorithms for factors such as latency, battery consumption, communication efficiency, robustness, and intended product functionality.
Investigate anomalies, failure modes, regressions, and edge cases to enhance algorithm reliability.
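One concrete reading of the data-schema responsibility above: define a telemetry record type once, and make its serialization compact (for constrained uplinks) and lossless (for retraining and diagnostics). A sketch with invented field names, not ALSO's real schema:

```python
import json
from dataclasses import asdict, dataclass

@dataclass
class TelemetryRecord:
    # Invented fields; a real schema is versioned and product-specific.
    device_id: str
    ts_ms: int
    model_version: str
    inference_latency_ms: float
    battery_pct: float

def encode(rec):
    # Compact separators keep the payload small for bandwidth-limited uplinks.
    return json.dumps(asdict(rec), separators=(",", ":"))

def decode(raw):
    return TelemetryRecord(**json.loads(raw))

rec = TelemetryRecord("ev-0042", 1700000000000, "v1.3.0", 12.5, 87.0)
print(decode(encode(rec)) == rec)  # → True
```

The round-trip equality check is the property that matters: whatever wire format the fleet actually uses (JSON, protobuf, CBOR), records decoded in the cloud must match what the edge device emitted.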