Machine Learning Data Engineer - Replica Pipelines
About Parallel Domain
Parallel Domain is at the forefront of innovation, creating the world’s most advanced digital twin and simulation platform. Our technology is pivotal for the development of autonomous systems, making a significant impact in fields like robotics and computer vision.
Similar jobs
Parallel Domain
Join Parallel Domain, where we are revolutionizing the field of autonomy, robotics, and computer vision with our cutting-edge simulation and digital twin technology. Our Replica product is at the forefront of creating expansive, photorealistic digital twins of real-world environments, essential for testing, validating, and developing autonomous systems. If you are passionate about machine learning and eager to contribute to the future of technology, we invite you to apply!
Later is the leading platform in influencer marketing, empowering brands to craft unforgettable campaigns with confidence. By leveraging genuine creator relationships, reliable intelligence, and expert support, Later eliminates uncertainty from one of marketing's most visible investments. Our AI-driven platform, backed by over a decade of proprietary data, including billions of social interactions, impressions, and more than $2.4 billion in verified influencer-generated purchases, enables teams to predict campaign success before launch. By merging valuable insights with expert guidance, Later takes the guesswork out of influencer marketing, allowing brands to select the ideal creators, execute thoroughly managed campaigns, and achieve significant growth in awareness, engagement, and revenue. Trusted by major enterprise brands such as Nike, Wayfair, Unilever, and Southwest Airlines, Later harmonizes creativity with performance to ensure campaigns not only look impressive but also yield tangible results. Discover more at later.com.

About this Position
We are seeking a Machine Learning Infrastructure Engineer to join our expanding Data & Platform team. In this pivotal role, you will establish the framework that drives our AI and machine learning capabilities across Later's product suite. As our inaugural dedicated ML Infrastructure Engineer, you will oversee the systems that facilitate model experimentation, training, deployment, and large-scale monitoring. This position is essential for accelerating our data science endeavors and fostering future AI innovations. You will design and maintain reliable, secure, and scalable ML infrastructure that empowers data scientists and engineers to deploy impactful models confidently. If you are passionate about creating robust ML systems in a dynamic environment and are eager to set the standard for ML Ops at Later, this is your chance.
About this Opportunity
Join a global leader in networking that is transforming how businesses manage their networks. Our AI Core group is at the forefront, developing pioneering platforms across domains such as Generative AI, AI Agents, RAG, Knowledge Bases, Data Mining, Anomaly Detection, and fine-tuning large language models. Here, innovation is not just welcomed; it is a core expectation.

The Role
As a pivotal AI ML Engineer, you will take on a leadership role in shaping our machine learning strategy. You'll be responsible for creating intelligent, high-performance multi-agent systems that can perceive, learn, and act in real time.

Key Responsibilities
- Define and lead the technical vision for machine learning solutions across our product portfolio.
- Manage the complete software development lifecycle, overseeing everything from design and code reviews to deployment and operational management.
- Architect robust, scalable microservices, including both synchronous and asynchronous web services.
- Develop real-time inference pipelines for complex models leveraging tools like Triton, TensorRT, and mixed-precision computing.
- Mentor fellow engineers, establish technical direction, and cultivate a strong team culture.
- Promote engineering excellence, system resilience, and continuous improvement in operations.
Sanctuary AI
Join Our Innovative Team
Sanctuary AI is at the forefront of developing dexterity-driven Physical AI for versatile robots. We are on the lookout for talented Machine Learning Engineers to become part of our dynamic team of engineers, researchers, and scientists. Together, we are committed to overcoming significant challenges in robotic perception, dexterous manipulation, planning, and reasoning.

In this role, you will concentrate on creating robust systems to train and implement machine learning policies in robotic platforms. You will collaborate closely with researchers to introduce and execute cutting-edge ML techniques in production environments. Utilizing our state-of-the-art in-house robotic systems, you will design, test, and enhance software that connects advanced methodologies to practical outcomes. This role presents a unique opportunity to contribute to both the engineering foundations and the applied ML capabilities that propel our robots into the future.

Note: Your role may require occasional travel (typically one to two weeks at a time, several times a year) to work closely with partners and teams on essential projects.
ZoomInfo
At ZoomInfo, we accelerate careers and foster an environment where innovation thrives. Our fast-paced culture empowers you to achieve exceptional results while collaborating with dedicated teammates who celebrate every success. Equipped with advanced tools and a supportive culture, you will not only contribute but drive impactful outcomes at a remarkable pace.

About the Role
We are looking for a Senior Data Pipeline Analyst who will become the authority on our extensive company data pipeline, which ingests, processes, and profiles millions of records critical to our clients' go-to-market strategies.

In this position, you will gain deep insights into the lifecycle of our company data, from acquisition through profiling to output. Your role will involve reading code to grasp data transformations and system dependencies, engaging in design discussions with our Engineering and Product teams, and influencing the evolution of our next-generation data infrastructure. As you deepen your expertise, you will lead strategic initiatives aimed at enhancing our data systems through both analytical and creative problem-solving approaches.

This role goes beyond mere dashboard creation or SQL reporting; it demands a thorough understanding of data systems at an architectural level and the ability to tackle complex data challenges. You will ensure that our pipeline infrastructure evolves continuously to meet client demands and retain a competitive edge.

Working closely with fellow data analysts during a dynamic transition phase, you will progressively take ownership of the pipeline architecture and strategic initiatives as our systems stabilize and your expertise grows. This position offers significant opportunities for advancement for individuals eager to become the leading technical expert on our company data systems.
Join Our Innovative Team
Sanctuary AI, a pioneering force in developing dexterity-driven Physical AI for versatile robots, is on the lookout for a talented and enthusiastic Machine Learning Research Intern to join our dynamic ML team. You will play a crucial role in engineering and innovating advanced robotic manipulation tasks.

As an intern reporting directly to the RL Lead, you will engage with diverse challenges related to perception, planning, and motion systems within humanoid general-purpose robots. This experience will provide you with invaluable insights into the design, architecture, and implementation of the simulation platforms and machine learning systems that drive our robots.

We offer flexibility regarding the internship start date and duration, tailoring it to suit your academic commitments.
Remarcable Inc.
Remarcable Inc. supports trade contractors in managing procurement, tools, and warehouse operations by providing greater visibility and control. As the company expands throughout North America, data remains at the heart of its strategy.

Role overview
The Data Engineer will help shape and maintain Remarcable's AWS Lakehouse infrastructure. This position focuses on designing ELT pipelines and building data models that inform decisions for product, operations, and leadership teams. The work also involves developing data infrastructure to enable AI and machine learning features for both contractors and back-office users.

Location
This hybrid role is based in Vancouver, BC.

What you will do
- Build and maintain AWS Lakehouse architecture
- Design and implement ELT data pipelines
- Develop data models to guide business and product choices
- Create and support data infrastructure for AI and ML features

What Remarcable values
- Strong interest in clean, reliable data architecture
- Motivation to influence the future of construction technology
- Ownership mindset and a willingness to take initiative
CruxOCM
About CruxOCM
CruxOCM is an innovative automation leader in the heavy industry sector, driven by venture capital investments. We are transforming the energy landscape with cutting-edge solutions. At CruxOCM, we believe that control room operators deserve top-tier tools that enhance safety and efficiency. Our mission also focuses on reducing environmental impact while maximizing revenue. If pilots can rely on autopilot technology, why shouldn't control room operators have similar support?

About the Role
We are seeking a skilled Pipeline Hydraulics Engineer to spearhead the hydraulic modeling aspects of our platform. In this role, you will ensure that both transient and steady-state pipeline hydraulics are meticulously analyzed, guaranteeing safe and precise operations. You will collaborate closely with our Product, Engineering, Advanced Process Control, and Deployment teams, all of whom possess expertise in areas such as chemistry, physics, mechatronics, automation, simulation, and technology. The projects you will work on are intricate and the challenges are significant. We pride ourselves on our agility and ability to deliver results swiftly! Your involvement will encompass the product development and deployment process, from initial design to final delivery, making your contribution impactful and far-reaching.
ZoomInfo Technologies Inc.
At ZoomInfo, we accelerate careers by fostering a fast-paced environment where bold thinking is encouraged. Join us to do the best work of your life alongside a supportive team that values collaboration, challenges the status quo, and celebrates achievements. We provide you with tools that enhance your impact and a culture that empowers your ambitions, allowing you to not just contribute but to truly make things happen.

About the Role
We are on the lookout for a Senior Data Engineering Analyst to become the go-to expert on our data pipeline, which is responsible for ingesting, processing, and profiling millions of company records that drive our clients' go-to-market strategies.

In this pivotal role, you will develop a comprehensive understanding of our data flow, from acquisition to profiling and output. You'll analyze code to grasp data transformations and system dependencies, provide valuable insights during design discussions with Engineering and Product teams, and help guide the evolution of our next-generation data infrastructure. As you gain expertise, you will lead strategic data improvement initiatives that require both systems thinking and innovative problem-solving.

This role transcends typical dashboard creation or SQL reporting; it focuses on understanding data systems architecturally, tackling ambiguous data challenges, and evolving our pipeline infrastructure to meet customer demands and maintain a competitive edge.

Your collaboration with fellow data analysts will be crucial during our active infrastructure transition. As our systems stabilize and your expertise grows, you will progressively take ownership of the pipeline architecture and lead strategic initiatives. This position offers significant growth potential for those looking to be the technical authority on company data systems.
AlgaeCal Inc.
Machine Learning Innovator
At AlgaeCal, we don't just work with data; we transform it into actionable insights that shape our strategies, optimize efficiencies, and drive growth. As a Data Scientist Intern, your predictive models will play an integral role in advancing our mission to combat bone loss, impacting millions of lives.

Our team thrives on uncovering hidden patterns and validating hypotheses. If you're passionate about integrating machine learning into real-world AI applications, we want to hear from you.

Join us and be part of something meaningful. With an estimated 54 million people in the U.S. facing low bone density, your work here will provide hope through our clinically-backed natural solutions.
impact.com builds a platform for brands to manage and grow partnerships across the entire customer journey. Over 5,000 companies, including major names like Walmart, Uber, and Shopify, use impact.com to oversee a network of more than 350,000 partnerships.

Role overview
The Senior Data Scientist, Programmatic Algorithms, joins the Programmatic Experience Group as an individual contributor. This role centers on designing and deploying machine learning models to optimize yield, pricing, and inventory allocation at scale. The work blends data science, platform engineering, and marketplace economics, directly influencing impact.com's programmatic marketplace by balancing advertiser results with publisher monetization. This position offers significant autonomy and responsibility across the full machine learning lifecycle. Projects range from building data pipelines and engineering features to deploying real-time inference systems that support rapid decision-making. Regular collaboration with Product, Data Science, and Programmatic Delivery Engine Engineering teams is expected.

What you will do
- Yield optimization and pricing models: Design and implement machine learning models for auction pricing, bid shading, floor price setting, and yield management across programmatic inventory.

Who succeeds in this role
Success in this position requires strong analytical rigor, practical machine learning engineering skills, and a genuine interest in both data science and engineering challenges. The role is based in New York, Seattle, or Vancouver, BC.
AlgaeCal Inc.
AlgaeCal Inc. is seeking a Data Scientist Intern for New Graduates in Vancouver, British Columbia. This internship focuses on transforming raw data into strategies that drive real business results. Interns will work closely with machine learning models, uncover patterns, and help embed AI applications into practical business solutions. AlgaeCal's mission centers on eliminating the fear of bone loss, offering a clinically-backed natural solution for those with low bone density. The team values curiosity, initiative, and a drive to turn data into meaningful action.

Role overview
This internship is designed for individuals eager to apply data science skills to real-world challenges. The position involves building and refining machine learning models, optimizing data architectures, and supporting the company's analytics efforts.

Requirements
- Background in Data Science or a related field, with experience designing predictive models and optimizing AI applications.
- Strong programming skills in Python or R, with the ability to build and enhance machine learning models using statistical modeling and data science tools.
- Proficiency in SQL and relational databases, including writing efficient queries and optimizing data structures for large datasets.
- Understanding of machine learning algorithms such as regression, clustering, and neural networks.
- Experience with data warehouses like BigQuery, including designing and maintaining scalable data storage solutions.
- Ability to standardize and improve data models for efficiency, accuracy, and usability.
- Strong attention to detail for identifying trends, outliers, and opportunities in data.
- Entrepreneurial mindset, using data insights to generate business value.

Those who are motivated by impactful work and want to contribute to a mission-driven company will find this internship rewarding. AlgaeCal welcomes candidates ready to turn data into action and support the company's growth.
Asana seeks a Staff Data Scientist in Vancouver, BC. This role centers on advanced analytics and machine learning to help guide decisions across the company.

Role overview
Work will span both product development and operational improvements. The goal is to generate insights that help shape Asana's products and make them more effective for users.

What you will do
- Apply machine learning and analytics to company data
- Support teams with insights that influence product direction
- Contribute to operational improvements using data-driven findings

Location
This position is based in Asana's Vancouver, BC office.
impact.com is a leading partnership marketing platform that brings together affiliates, influencers, content publishers, and brand advocates to drive business growth. The platform helps more than 5,000 global brands, including Walmart, Uber, Shopify, Lenovo, L'Oréal, and Fanatics, manage over 350,000 partnerships. impact.com's suite of products, Performance, Creator, and Advocate, integrates all partner types, supporting brands as they build authentic, performance-driven relationships throughout the customer journey.

Role overview
The Senior Data Scientist, Programmatic Algorithms, will join the Programmatic Experience Group as an embedded expert. This high-responsibility individual contributor role focuses on designing and implementing machine learning models that optimize yield, pricing, and inventory allocation at scale. The work sits at the intersection of data science, platform engineering, and marketplace economics.

Key responsibilities
- Lead the design and implementation of machine learning models to optimize auction pricing, bid shading, floor price settings, and yield across impact.com's programmatic inventory.
- Architect data pipelines to support model development and deployment.
- Engineer features that improve model performance.
- Deploy real-time inference systems that enable swift, data-driven decisions.

Collaboration and impact
This role works closely with Product, Data Science, and Programmatic Delivery Engine Engineering teams. While collaboration is essential, the position also offers significant autonomy. The Senior Data Scientist will influence how the programmatic marketplace balances advertiser performance with publisher monetization, making a measurable impact on the organization.

What we look for
Analytical rigor and practical instincts are essential. The ideal candidate brings deep experience in both data science and machine learning engineering, with a genuine enthusiasm for solving complex problems in both areas.
ZoomInfo
At ZoomInfo, we accelerate careers and thrive in a fast-paced, innovative environment. Our team is built on collaboration and support, empowering everyone to achieve remarkable results. With cutting-edge tools and a culture that fosters ambition, you won't just be a contributor; you'll be a catalyst for growth and change.

About the Role
We are looking for a Senior Data Systems Analyst to take charge of our data pipeline, which is critical in processing millions of company records that drive our customers' market strategies. In this pivotal role, you will gain profound insights into our data acquisition, profiling, and output processes.

You will dive into code to comprehend data transformations and system dependencies, providing valuable input during design discussions with our Engineering and Product teams. Your expertise will shape the future of our data infrastructure. As you grow your knowledge, you will lead strategic data enhancement initiatives that require both systems thinking and innovative problem-solving skills.

This position focuses on understanding data systems at an architectural level rather than merely producing dashboards or SQL reports. You will tackle complex data challenges and ensure our pipeline infrastructure evolves to meet customer needs and maintain our competitive edge.

During an active period of infrastructure transition, you will collaborate closely with fellow data analysts. As systems stabilize and your expertise deepens, you will take on greater ownership of the pipeline architecture and strategic projects, paving the way for your growth as the leading technical expert in our data systems.
Join our dynamic team at trackvfx as a Junior or Senior Pipeline Technical Director in Vancouver! This is a remarkable chance to contribute to the foundation of our innovative pipeline.

Joining during the formative years of our new studio, you will engage in a variety of projects, from feature films to commercials. Collaborating with seasoned artists across diverse productions, you will have the opportunity to leverage your skills and contribute to our rapidly growing company.

The Pipeline TD plays a crucial role in ensuring the integrity of our production pipeline. You will be responsible for identifying and resolving issues, as well as developing essential software tools to support our team. A thorough understanding of the VFX production pipeline is essential.

We provide comprehensive health benefits, flexible working hours, and an open, creative atmosphere for all our employees.
Asana
Role overview
Asana is seeking a Senior Data Engineer based in Vancouver, BC. This position centers on developing and enhancing the company's data architecture. The Senior Data Engineer partners with teams throughout the organization to build and maintain scalable data pipelines that support business growth.

What you will do
- Design, implement, and maintain data pipelines that adapt to evolving business requirements
- Work with cross-functional partners to gather and deliver on analytical needs
- Support and improve data infrastructure, ensuring reliable access for analytics and reporting

Impact
The solutions built in this role enable Asana to make informed, data-driven decisions. By maintaining effective and reliable data systems, the Senior Data Engineer helps ensure that analytics and reporting are accessible and trustworthy across the company.
Rivian and Volkswagen Group Technologies
Rivian and Volkswagen Group Technologies brings together two leaders in automotive innovation to shape the future of mobility. This collaboration focuses on developing advanced operating systems, zonal controllers, and cloud connectivity for software-defined vehicles. The team is dedicated to solving challenges in electric vehicle technology, aiming to set new standards in the industry. By combining expertise in connectivity, artificial intelligence, and security, the partnership works toward a more connected and sustainable future.

Role overview
The Senior Data Engineer, Ingestion Framework, joins the Data & AI Platform team in Vancouver. This role centers on supporting the big data platform, which operates at a petabyte scale. The main focus is on architecting and maintaining a custom vehicle data processing framework built with Go, and ensuring it integrates smoothly with the Databricks platform.

What you will do
- Design and maintain a custom-built data ingestion framework using Go.
- Support seamless integration between the ingestion framework and Databricks.
- Help ensure reliable, large-scale operation of the vehicle data platform.

Location
This position is based in Vancouver, British Columbia.
Klue helps businesses transform scattered competitive data into clear, actionable insights. The Vancouver team is expanding and seeking a Senior Software Engineer with a focus on AI agent development.

Role overview
This position centers on designing and operating large language model (LLM)-powered agents at scale. Responsibilities span multi-agent orchestration, sub-agent design, and building evaluation frameworks to ensure outputs are both reliable and measurable. The work covers the full stack, including optimizing inference costs, improving retrieval and query performance, and creating feedback loops for ongoing system improvement. In addition to technical execution, this engineer will help shape the product roadmap by offering technical guidance and collaborating closely with product leadership. The role involves guiding projects from early architecture through experimentation and into production, always with an eye on production readiness and measurable results.

What you will do
- Develop and deploy backend systems for agentic workflows. Design retrieval pipelines, orchestration layers, and agent architectures that process large volumes of competitive data, such as news, press releases, website updates, Slack messages, emails, reviews, and CRM data, into actionable intelligence for clients.
- Enhance LLM-powered workflows end-to-end. Work on prompt design, retrieval strategies, caching, and latency optimization to make agent responses fast, accurate, and reliable in production.
- Lead evaluations of agent systems at scale. Build and manage evaluation frameworks (automated, offline, and human-in-the-loop) to assess relevance, quality, latency, and overall task success. Define excellence metrics and set up infrastructure for ongoing measurement.
- Design and implement human-in-the-loop systems. Collaborate with product and design teams to propose and prototype feedback mechanisms, review workflows, and correction loops that help keep AI agents accurate and trustworthy over time.

Location
This role is based in Vancouver, British Columbia. Learn more about Klue at klue.com.
Join Flagler Health, a dynamic healthtech innovator, as we revolutionize the delivery of healthcare services through cutting-edge AI-driven workflow automation, remote patient engagement, and comprehensive chronic care programs. Our platform has positively impacted over 1.5 million patients and is a trusted partner for healthcare providers and payers aiming to improve operational efficiency and patient outcomes while reducing costs. With a unique freemium model and limited competition, we are set to capture a significant portion of the $4.5 trillion U.S. healthcare market.

Key Responsibilities

Databricks Platform Expertise
• Design, manage, and enhance data pipelines utilizing the Databricks platform.
• Troubleshoot and debug Spark applications to ensure optimal performance and reliability.
• Apply best practices in Spark computing and workload optimization.

Python Development
• Write clean, efficient, and reusable Python code adhering to object-oriented principles.
• Develop APIs to facilitate data integration and application functionality.
• Create scripts and tools that automate data processing and workflows.

MongoDB Management
• Manage, query, and integrate data within MongoDB.
• Ensure efficient data storage and retrieval tailored to application needs.
• Optimize MongoDB's performance for handling large datasets.

Collaboration and Problem Solving
• Collaborate closely with data scientists, analysts, and stakeholders to address data requirements and deliver effective solutions.
• Identify and resolve technical challenges related to data processing and system architecture.
Parallel Domain
Join Parallel Domain, where we are revolutionizing the field of autonomy, robotics, and computer vision with our cutting-edge simulation and digital twin technology. Our Replica product is at the forefront of creating expansive, photorealistic digital twins of real-world environments, essential for testing, validating, and developing autonomous systems. If you are passionate about machine learning and eager to contribute to the future of technology, we invite you to apply!
Later is the leading platform in influencer marketing, empowering brands to craft unforgettable campaigns with confidence. By leveraging genuine creator relationships, reliable intelligence, and expert support, Later eliminates uncertainty from one of marketing's most visible investments.Our AI-driven platform, backed by over a decade of proprietary data—including billions of social interactions, impressions, and more than $2.4 billion in verified influencer-generated purchases—enables teams to predict campaign success before launch.By merging valuable insights with expert guidance, Later takes the guesswork out of influencer marketing, allowing brands to select the ideal creators, execute thoroughly managed campaigns, and achieve significant growth in awareness, engagement, and revenue. Trusted by major enterprise brands such as Nike, Wayfair, Unilever, and Southwest Airlines, Later harmonizes creativity with performance to ensure campaigns not only look impressive but also yield tangible results. Discover more at later.com.About this Position:We are seeking a Machine Learning Infrastructure Engineer to join our expanding Data & Platform team. In this pivotal role, you will establish the framework that drives our AI and machine learning capabilities across Later's product suite. As our inaugural dedicated ML Infrastructure Engineer, you will oversee the systems that facilitate model experimentation, training, deployment, and large-scale monitoring.This position is essential for accelerating our data science endeavors and fostering future AI innovations. You will design and maintain reliable, secure, and scalable ML infrastructure that empowers data scientists and engineers to deploy impactful models confidently. If you are passionate about creating robust ML systems in a dynamic environment and are eager to set the standard for ML Ops at Later, this is your chance.
About this OpportunityJoin a global leader in networking that is transforming how businesses manage their networks. Our AI Core group is at the forefront, developing pioneering platforms across various domains such as Generative AI, AI Agents, RAG, Knowledge Bases, Data Mining, Anomaly Detection, and fine-tuning large language models. Here, innovation is not just welcomed; it is a core expectation.The RoleAs a pivotal AI ML Engineer, you will take on a leadership role in shaping our machine learning strategy. You'll be responsible for creating intelligent, high-performance multi-agent systems that can perceive, learn, and act in real-time.Key ResponsibilitiesDefine and lead the technical vision for machine learning solutions across our product portfolio.Manage the complete software development lifecycle, overseeing everything from design and code reviews to deployment and operational management.Architect robust, scalable microservices, including both synchronous and asynchronous web services.Develop real-time inference pipelines for complex models leveraging tools like Triton, TensorRT, and mixed-precision computing.Mentor fellow engineers, establish technical direction, and cultivate a strong team culture.Promote engineering excellence, system resilience, and continuous improvement in operations.
Sanctuary AI
Join Our Innovative TeamSanctuary AI is at the forefront of developing dexterity-driven Physical AI for versatile robots. We are on the lookout for talented Machine Learning Engineers to become part of our dynamic team of engineers, researchers, and scientists. Together, we are committed to overcoming significant challenges in robotic perception, dexterous manipulation, planning, and reasoning.In this role, you will concentrate on creating robust systems to train and implement machine learning policies in robotic platforms. You will collaborate closely with researchers to introduce and execute cutting-edge ML techniques in production environments. Utilizing our state-of-the-art in-house robotic systems, you will design, test, and enhance software that connects advanced methodologies to practical outcomes. This role presents a unique opportunity to contribute to both the engineering foundations and the applied ML capabilities that propel our robots into the future.Note: Your role may require occasional travel (typically one to two weeks at a time, several times a year) to work closely with partners and teams on essential projects.
ZoomInfo
At ZoomInfo, we accelerate careers and foster an environment where innovation thrives. Our fast-paced culture empowers you to achieve exceptional results while collaborating with dedicated teammates who celebrate every success. Equipped with advanced tools and a supportive culture, you will not only contribute but drive impactful outcomes at a remarkable pace. About the Role We are looking for a Senior Data Pipeline Analyst who will become the authority on our extensive company data pipeline, which ingests, processes, and profiles millions of records critical to our clients' go-to-market strategies. In this position, you will gain deep insights into the lifecycle of our company data, from acquisition through profiling to output. Your role will involve reading code to grasp data transformations and system dependencies, engaging in design discussions with our Engineering and Product teams, and influencing the evolution of our next-generation data infrastructure. As you deepen your expertise, you will lead strategic initiatives aimed at enhancing our data systems through both analytical and creative problem-solving approaches. This role goes beyond mere dashboard creation or SQL reporting; it demands a thorough understanding of data systems at an architectural level and the ability to tackle complex data challenges. You will ensure that our pipeline infrastructure evolves continuously to meet client demands and retain a competitive edge. Working closely with fellow data analysts during a dynamic transition phase, you will progressively take ownership of the pipeline architecture and strategic initiatives as our systems stabilize and your expertise grows. This position offers significant opportunities for advancement for individuals eager to become the leading technical expert on our company data systems.
Join Our Innovative Team Sanctuary AI, a pioneering force in developing dexterity-driven Physical AI for versatile robots, is on the lookout for a talented and enthusiastic Machine Learning Research Intern to join our dynamic ML team. You will play a crucial role in engineering and innovating advanced robotic manipulation tasks. As an intern reporting directly to the RL Lead, you will engage with diverse challenges related to perception, planning, and motion systems within humanoid general-purpose robots. This experience will provide you with invaluable insights into the design, architecture, and implementation of the simulation platforms and machine learning systems that drive our robots. We offer flexibility regarding the internship start date and duration, tailoring it to suit your academic commitments.
Remarcable Inc.
Remarcable Inc. supports trade contractors in managing procurement, tools, and warehouse operations by providing greater visibility and control. As the company expands throughout North America, data remains at the heart of its strategy. Role overview The Data Engineer will help shape and maintain Remarcable’s AWS Lakehouse infrastructure. This position focuses on designing ELT pipelines and building data models that inform decisions for product, operations, and leadership teams. The work also involves developing data infrastructure to enable AI and machine learning features for both contractors and back-office users. Location This hybrid role is based in Vancouver, BC. What you will do Build and maintain AWS Lakehouse architecture Design and implement ELT data pipelines Develop data models to guide business and product choices Create and support data infrastructure for AI and ML features What Remarcable values Strong interest in clean, reliable data architecture Motivation to influence the future of construction technology Ownership mindset and a willingness to take initiative
CruxOCM
About CruxOCM CruxOCM is an innovative automation leader in the heavy industry sector, driven by venture capital investments. We are transforming the energy landscape with cutting-edge solutions. At CruxOCM, we believe that control room operators deserve top-tier tools that enhance safety and efficiency. Our mission also focuses on reducing environmental impact while maximizing revenue. If pilots can rely on autopilot technology, why shouldn't control room operators have similar support? About the Role We are seeking a skilled Pipeline Hydraulics Engineer to spearhead the hydraulic modeling aspects of our platform. In this role, you will ensure that both transient and steady-state pipeline hydraulics are meticulously analyzed, guaranteeing safe and precise operations. You will collaborate closely with our Product, Engineering, Advanced Process Control, and Deployment teams, all of whom possess expertise in areas such as chemistry, physics, mechatronics, automation, simulation, and technology. The projects you will work on are intricate and the challenges are significant. We pride ourselves on our agility and ability to deliver results swiftly! Your involvement will encompass the product development and deployment process, from initial design to final delivery, making your contribution impactful and far-reaching.
ZoomInfo Technologies Inc.
At ZoomInfo, we accelerate careers by fostering a fast-paced environment where bold thinking is encouraged. Join us to do the best work of your life alongside a supportive team that values collaboration, challenges the status quo, and celebrates achievements. We provide you with tools that enhance your impact and a culture that empowers your ambitions, allowing you to not just contribute but to truly make things happen. About the Role We are on the lookout for a Senior Data Engineering Analyst to become the go-to expert on our data pipeline, which is responsible for ingesting, processing, and profiling millions of company records that drive our clients' go-to-market strategies. In this pivotal role, you will develop a comprehensive understanding of our data flow, from acquisition to profiling and output. You'll analyze code to grasp data transformations and system dependencies, provide valuable insights during design discussions with Engineering and Product teams, and help guide the evolution of our next-generation data infrastructure. As you gain expertise, you will lead strategic data improvement initiatives that require both systems thinking and innovative problem-solving. This role transcends typical dashboard creation or SQL reporting; it focuses on understanding data systems architecturally, tackling ambiguous data challenges, and evolving our pipeline infrastructure to meet customer demands and maintain a competitive edge. Your collaboration with fellow data analysts will be crucial during our active infrastructure transition. As our systems stabilize and your expertise grows, you will progressively take ownership of the pipeline architecture and lead strategic initiatives. This position offers significant growth potential for those looking to be the technical authority on company data systems.
AlgaeCal Inc.
Machine Learning Innovator: At AlgaeCal, we don't just work with data; we transform it into actionable insights that shape our strategies, optimize efficiencies, and drive growth. As a Data Scientist Intern, your predictive models will play an integral role in advancing our mission to combat bone loss, impacting millions of lives. Our team thrives on uncovering hidden patterns and validating hypotheses. If you're passionate about integrating machine learning into real-world AI applications, we want to hear from you. Join us and be part of something meaningful. With an estimated 54 million people in the U.S. facing low bone density, your work here will provide hope through our clinically-backed natural solutions.
impact.com builds a platform for brands to manage and grow partnerships across the entire customer journey. Over 5,000 companies, including major names like Walmart, Uber, and Shopify, use impact.com to oversee a network of more than 350,000 partnerships. Role overview The Senior Data Scientist, Programmatic Algorithms, joins the Programmatic Experience Group as an individual contributor. This role centers on designing and deploying machine learning models to optimize yield, pricing, and inventory allocation at scale. The work blends data science, platform engineering, and marketplace economics, directly influencing impact.com’s programmatic marketplace by balancing advertiser results with publisher monetization. This position offers significant autonomy and responsibility across the full machine learning lifecycle. Projects range from building data pipelines and engineering features to deploying real-time inference systems that support rapid decision-making. Regular collaboration with Product, Data Science, and Programmatic Delivery Engine Engineering teams is expected. What you will do Yield optimization and pricing models: Design and implement machine learning models for auction pricing, bid shading, floor price setting, and yield management across programmatic inventory. Who succeeds in this role Success in this position requires strong analytical rigor, practical machine learning engineering skills, and a genuine interest in both data science and engineering challenges. The role is based in New York, Seattle, or Vancouver, BC.
AlgaeCal Inc.
AlgaeCal Inc. is seeking a Data Scientist Intern for New Graduates in Vancouver, British Columbia. This internship focuses on transforming raw data into strategies that drive real business results. Interns will work closely with machine learning models, uncover patterns, and help embed AI applications into practical business solutions. AlgaeCal’s mission centers on eliminating the fear of bone loss, offering a clinically-backed natural solution for those with low bone density. The team values curiosity, initiative, and a drive to turn data into meaningful action. Role overview This internship is designed for individuals eager to apply data science skills to real-world challenges. The position involves building and refining machine learning models, optimizing data architectures, and supporting the company’s analytics efforts. Requirements Background in Data Science or a related field, with experience designing predictive models and optimizing AI applications. Strong programming skills in Python or R, with the ability to build and enhance machine learning models using statistical modeling and data science tools. Proficiency in SQL and relational databases, including writing efficient queries and optimizing data structures for large datasets. Understanding of machine learning algorithms such as regression, clustering, and neural networks. Experience with data warehouses like BigQuery, including designing and maintaining scalable data storage solutions. Ability to standardize and improve data models for efficiency, accuracy, and usability. Strong attention to detail for identifying trends, outliers, and opportunities in data. Entrepreneurial mindset, using data insights to generate business value. Those who are motivated by impactful work and want to contribute to a mission-driven company will find this internship rewarding. AlgaeCal welcomes candidates ready to turn data into action and support the company’s growth.
Asana seeks a Staff Data Scientist in Vancouver, BC. This role centers on advanced analytics and machine learning to help guide decisions across the company. Role overview Work will span both product development and operational improvements. The goal is to generate insights that help shape Asana’s products and make them more effective for users. What you will do Apply machine learning and analytics to company data Support teams with insights that influence product direction Contribute to operational improvements using data-driven findings Location This position is based in Asana’s Vancouver, BC office.
impact.com is a leading partnership marketing platform that brings together affiliates, influencers, content publishers, and brand advocates to drive business growth. The platform helps more than 5,000 global brands, including Walmart, Uber, Shopify, Lenovo, L’Oréal, and Fanatics, manage over 350,000 partnerships. impact.com’s suite of products, Performance, Creator, and Advocate, integrates all partner types, supporting brands as they build authentic, performance-driven relationships throughout the customer journey. Role overview The Senior Data Scientist, Programmatic Algorithms, will join the Programmatic Experience Group as an embedded expert. This high-responsibility individual contributor role focuses on designing and implementing machine learning models that optimize yield, pricing, and inventory allocation at scale. The work sits at the intersection of data science, platform engineering, and marketplace economics. Key responsibilities Lead the design and implementation of machine learning models to optimize auction pricing, bid shading, floor price settings, and yield across impact.com’s programmatic inventory. Architect data pipelines to support model development and deployment. Engineer features that improve model performance. Deploy real-time inference systems that enable swift, data-driven decisions. Collaboration and impact This role works closely with Product, Data Science, and Programmatic Delivery Engine Engineering teams. While collaboration is essential, the position also offers significant autonomy. The Senior Data Scientist will influence how the programmatic marketplace balances advertiser performance with publisher monetization, making a measurable impact on the organization. What we look for Analytical rigor and practical instincts are essential. The ideal candidate brings deep experience in both data science and machine learning engineering, with a genuine enthusiasm for solving complex problems in both areas.
ZoomInfo
At ZoomInfo, we accelerate careers and thrive in a fast-paced, innovative environment. Our team is built on collaboration and support, empowering everyone to achieve remarkable results. With cutting-edge tools and a culture that fosters ambition, you won't just be a contributor; you'll be a catalyst for growth and change. About the Role We are looking for a Senior Data Systems Analyst to take charge of our data pipeline, which is critical in processing millions of company records that drive our customers' market strategies. In this pivotal role, you will gain profound insights into our data acquisition, profiling, and output processes. You will dive into code to comprehend data transformations and system dependencies, providing valuable input during design discussions with our Engineering and Product teams. Your expertise will shape the future of our data infrastructure. As you grow your knowledge, you will lead strategic data enhancement initiatives that require both systems thinking and innovative problem-solving skills. This position focuses on understanding data systems at an architectural level rather than merely producing dashboards or SQL reports. You will tackle complex data challenges and ensure our pipeline infrastructure evolves to meet customer needs and maintain our competitive edge. During an active period of infrastructure transition, you will collaborate closely with fellow data analysts. As systems stabilize and your expertise deepens, you will take on greater ownership of the pipeline architecture and strategic projects, paving the way for your growth as the leading technical expert in our data systems.
Join our dynamic team at trackvfx as a Junior or Senior Pipeline Technical Director in Vancouver! This is a remarkable chance to contribute to the foundation of our innovative pipeline. As part of the formative years of our new studio, you will engage in a variety of projects, from feature films to commercials. Collaborating with seasoned artists across diverse productions, you will have the opportunity to leverage your skills and contribute to our rapidly growing company. The Pipeline TD plays a crucial role in ensuring the integrity of our production pipeline. You will be responsible for identifying and resolving issues, as well as developing essential software tools to support our team. A thorough understanding of the VFX production pipeline is essential. We provide comprehensive health benefits, flexible working hours, and an open, creative atmosphere for all our employees.
Asana
Role overview Asana is seeking a Senior Data Engineer based in Vancouver, BC. This position centers on developing and enhancing the company’s data architecture. The Senior Data Engineer partners with teams throughout the organization to build and maintain scalable data pipelines that support business growth. What you will do Design, implement, and maintain data pipelines that adapt to evolving business requirements Work with cross-functional partners to gather and deliver on analytical needs Support and improve data infrastructure, ensuring reliable access for analytics and reporting Impact The solutions built in this role enable Asana to make informed, data-driven decisions. By maintaining effective and reliable data systems, the Senior Data Engineer helps ensure that analytics and reporting are accessible and trustworthy across the company.
Rivian and Volkswagen Group Technologies
Rivian and Volkswagen Group Technologies brings together two leaders in automotive innovation to shape the future of mobility. This collaboration focuses on developing advanced operating systems, zonal controllers, and cloud connectivity for software-defined vehicles. The team is dedicated to solving challenges in electric vehicle technology, aiming to set new standards in the industry. By combining expertise in connectivity, artificial intelligence, and security, the partnership works toward a more connected and sustainable future. Role overview The Senior Data Engineer, Ingestion Framework, joins the Data & AI Platform team in Vancouver. This role centers on supporting the big data platform, which operates at a petabyte scale. The main focus is on architecting and maintaining a custom vehicle data processing framework built with Go, and ensuring it integrates smoothly with the Databricks platform. What you will do Design and maintain a custom-built data ingestion framework using Go. Support seamless integration between the ingestion framework and Databricks. Help ensure reliable, large-scale operation of the vehicle data platform. Location This position is based in Vancouver, British Columbia.
Klue helps businesses transform scattered competitive data into clear, actionable insights. The Vancouver team is expanding and seeking a Senior Software Engineer with a focus on AI agent development. Role overview This position centers on designing and operating large language model (LLM)-powered agents at scale. Responsibilities span multi-agent orchestration, sub-agent design, and building evaluation frameworks to ensure outputs are both reliable and measurable. The work covers the full stack, including optimizing inference costs, improving retrieval and query performance, and creating feedback loops for ongoing system improvement. In addition to technical execution, this engineer will help shape the product roadmap by offering technical guidance and collaborating closely with product leadership. The role involves guiding projects from early architecture through experimentation and into production, always with an eye on production readiness and measurable results. What you will do Develop and deploy backend systems for agentic workflows. Design retrieval pipelines, orchestration layers, and agent architectures that process large volumes of competitive data, such as news, press releases, website updates, Slack messages, emails, reviews, and CRM data, into actionable intelligence for clients. Enhance LLM-powered workflows end-to-end. Work on prompt design, retrieval strategies, caching, and latency optimization to make agent responses fast, accurate, and reliable in production. Lead evaluations of agent systems at scale. Build and manage evaluation frameworks (automated, offline, and human-in-the-loop) to assess relevance, quality, latency, and overall task success. Define excellence metrics and set up infrastructure for ongoing measurement. Design and implement human-in-the-loop systems. 
Collaborate with product and design teams to propose and prototype feedback mechanisms, review workflows, and correction loops that help keep AI agents accurate and trustworthy over time. Location This role is based in Vancouver, British Columbia. Learn more about Klue at klue.com.
Join Flagler Health, a dynamic healthtech innovator, as we revolutionize the delivery of healthcare services through cutting-edge AI-driven workflow automation, remote patient engagement, and comprehensive chronic care programs. Our platform has positively impacted over 1.5 million patients and is a trusted partner for healthcare providers and payers aiming to improve operational efficiency and patient outcomes while reducing costs. With a unique freemium model and limited competition, we are set to capture a significant portion of the $4.5 trillion U.S. healthcare market.
Key Responsibilities
Databricks Platform Expertise
• Design, manage, and enhance data pipelines utilizing the Databricks platform.
• Troubleshoot and debug Spark applications to ensure optimal performance and reliability.
• Apply best practices in Spark computing and workload optimization.
Python Development
• Write clean, efficient, and reusable Python code adhering to object-oriented principles.
• Develop APIs to facilitate data integration and application functionality.
• Create scripts and tools that automate data processing and workflows.
MongoDB Management
• Manage, query, and integrate data within MongoDB.
• Ensure efficient data storage and retrieval tailored to application needs.
• Optimize MongoDB's performance for handling large datasets.
Collaboration and Problem Solving
• Collaborate closely with data scientists, analysts, and stakeholders to address data requirements and deliver effective solutions.
• Identify and resolve technical challenges related to data processing and system architecture.