Experience Level
Senior
Qualifications
The ideal candidate will possess a strong background in data architecture and engineering, with proven experience in building scalable data pipelines and systems that support advanced analytics and AI applications. Familiarity with cloud technologies, machine learning frameworks, and data security practices is essential.
About the job
Join our dynamic team at Cargomatic as a Senior Data Architect in Data Engineering, where you'll play a pivotal role in designing and constructing scalable, cloud-native data infrastructures. Your work will empower analytics, machine learning, and AI-driven applications that revolutionize the local trucking industry.
In this fast-paced environment, you'll leverage your deep expertise in data architecture alongside hands-on experience with modern data platforms and LLM-enabled application development. You will be responsible for leading the design of enterprise-grade data models, architecting RAG systems, implementing agentic workflows, and integrating secure, production-ready LLM capabilities into our ecosystem. This high-impact position offers significant ownership and visibility, allowing you to shape the future of intelligent logistics technology.
About Cargomatic
Cargomatic is at the forefront of transforming the local trucking industry, utilizing innovative technology to connect shippers and carriers in real time. With a focus on transparency, efficiency, and data-driven solutions, we aim to modernize an $82 billion industry that has largely relied on outdated systems. Join us as we tackle complex logistics challenges daily and help shape the future of AI-powered logistics.
Similar jobs
ABOUT MITHRL
We envision a world where innovative drugs and therapies reach patients in months rather than years, expediting breakthroughs that save lives. Mithrl is at the forefront of creating the world's first commercially available AI Co-Scientist: an advanced discovery engine that enables life science teams to transform chaotic biological data into insightful discoveries in mere minutes. Scientists can pose questions in natural language, and Mithrl responds with genuine analysis, innovative targets, and patent-ready reports.

Our success is evident:
- 12X year-over-year revenue growth
- Trusted by leading biotech firms and major pharmaceutical companies across three continents
- Driving significant breakthroughs from target discovery to patient outcomes

WHAT YOU WILL DO
- Take the lead in creating and managing an AI-driven data ingestion and normalization pipeline to assimilate data from diverse sources, ranging from raw Excel/CSV uploads to lab and instrument exports, as well as processed outputs from internal systems.
- Develop comprehensive schema mapping, coercion, and conversion logic, including units normalization, metadata standardization, variable-name harmonization, addressing vendor-instrument peculiarities, plate-reader formats, reference-genome or annotation updates, and batch-effect corrections.
- Utilize LLM-driven and classical data-engineering tools to structure semi-structured or messy tabular data, focusing on metadata extraction, inferring column roles/types, cleaning free-text headers, resolving inconsistencies, and preparing final clean datasets.
- Ensure that all transformations that must occur only once, such as normalization, coercion, and batch correction, are executed during ingestion, so that downstream analytics and the AI Co-Scientist operate on clean, canonical data.
- Establish validation, verification, and quality-control measures to detect ambiguous, inconsistent, or corrupted data before it enters the platform.
- Collaborate with product teams, data science/bioinformatics colleagues, and infrastructure engineers to define and uphold data standards, ensuring that pipeline outputs integrate smoothly into downstream analysis and storage systems.

WHAT YOU BRING
Must-have:
- 5+ years of experience in data engineering or data wrangling with real-world tabular or semi-structured data.
- Strong proficiency in Python,
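The normalization duties this posting describes (cleaning free-text headers, coercing units, producing canonical records at ingestion) can be sketched in a few lines of Python. This is a minimal illustration of the general technique, not Mithrl's actual pipeline: the column names, the unit table, and the choice of micromolar as the canonical unit are assumptions made for the example.

```python
import csv
import io
import re

# Hypothetical unit-conversion table; canonical unit is micromolar (uM).
# A real pipeline would cover far more units and instrument quirks.
UNIT_FACTORS = {"nM": 1e-3, "uM": 1.0, "mM": 1e3}

def clean_header(raw: str) -> str:
    """Normalize a free-text header to snake_case, e.g. 'Conc. (nM)' -> 'conc'."""
    name = re.sub(r"\(.*?\)", "", raw)                 # drop parenthesized units
    return re.sub(r"[^a-z0-9]+", "_", name.lower()).strip("_")

def header_unit(raw: str):
    """Extract a unit token from a header like 'Conc. (nM)', if one is present."""
    m = re.search(r"\((\w+)\)", raw)
    return m.group(1) if m else None

def normalize_rows(csv_text: str) -> list:
    """Parse messy CSV text, clean headers, and coerce concentrations to uM."""
    reader = csv.DictReader(io.StringIO(csv_text))
    out = []
    for row in reader:
        rec = {}
        for raw_key, value in row.items():
            key = clean_header(raw_key)
            unit = header_unit(raw_key)
            if unit in UNIT_FACTORS:                   # unit-bearing numeric column
                rec[key + "_um"] = float(value) * UNIT_FACTORS[unit]
            else:
                rec[key] = value.strip()
        out.append(rec)
    return out

messy = "Sample ID,Conc. (nM)\nA1, 250 \nA2,1200\n"
print(normalize_rows(messy))
```

Because the conversion happens once at ingestion, everything downstream can assume a single canonical unit and clean snake_case column names, which is the point the posting makes about one-time transformations.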
Thank you for your interest in joining Uncountable Engineering!

Uncountable is on the lookout for recent graduates eager to embark on a career in data engineering, focusing on the management of customer datasets. Our mission is to transform industrial research and development. We are in search of driven software engineers who can contribute to the creation of tools designed to expedite the development of new chemicals and materials.

Your key responsibilities will include structuring and ingesting customer data into the Uncountable Web Platform, where scientists input and analyze their experimental data. In this role, you will (1) manipulate, transform, and upload R&D data using Python scripts; (2) establish ETL pipelines between Uncountable and customer data warehouses; (3) devise innovative solutions for users to import and export their data. This position requires a creative, analytical thinker with excellent cross-functional communication skills and the capability to analyze large datasets, identifying necessary steps to clean, process, and upload the data onto our platform.
Our Mission
At Reflection AI, we are committed to developing open superintelligence and making it accessible to everyone. Our initiative involves creating open weight models for individuals, enterprises, agents, and even nation-states. Our talented team of AI researchers and innovators hails from esteemed organizations such as DeepMind, OpenAI, Google Brain, Meta, Character.AI, Anthropic, and others.

About the Role
As data plays an increasingly pivotal role in AI advancements, your expertise will be key in transforming vast data sources from the open web into dependable, structured corpora essential for training cutting-edge models. You will take charge of the mechanisms that acquire, extract, normalize, version, and deliver data to our pre-training pipelines. Collaborating directly with world-class researchers, you will bridge the gap between data collection and its influence on model performance.

This position is well-suited for engineers who are passionate about constructing robust distributed systems while also enjoying the opportunity to run experiments, assess data acquisition trade-offs, and iterate rapidly based on measurable impacts.

In conjunction with our pre-training and data quality teams, your responsibilities will include:
- Designing and managing large-scale data ingestion systems for pre-training, encompassing web crawling, extraction, and dataset delivery
- Conducting experiments to evaluate various crawling strategies, extraction methods, and ingestion trade-offs
- Analyzing ingested data to detect gaps, redundancies, and improvement opportunities
- Creating ingestion pipelines that scale efficiently across extensive data campaigns
- Developing specialized crawlers for high-priority data sources
- Reviewing code, troubleshooting production issues, and continually enhancing ingestion infrastructure

About You:
- You possess an innate curiosity about how training data shapes model capabilities, and you can iterate swiftly based on observable downstream impacts.
- You excel in collaborating closely across various functions: researchers, infrastructure, operations, and external partners.
- You appreciate the balance between research and engineering in your work.

Skills and Qualifications:
- Proficiency in building and operating distributed systems
- Experience with data acquisition and processing
- Strong analytical skills for identifying data patterns and quality issues
Thank you for your interest in joining the Uncountable Engineering team!

Uncountable is on the lookout for enthusiastic recent graduates eager to embark on a data engineering career focused on managing customer datasets. Our mission is to transform the landscape of industrial research and development. We seek driven software engineers to develop tools that expedite the creation of new chemicals and materials.

In this pivotal role, you will structure and ingest customer data into the Uncountable Web Platform, where scientists conduct and analyze their experiments. Your responsibilities will include (1) manipulating, transforming, and uploading R&D data using Python scripts, (2) establishing ETL pipelines between Uncountable and various data warehouses utilized by our customers, and (3) devising innovative solutions for users to import and export their data. This role requires a creative and analytical mindset, strong cross-functional communication skills, and the ability to sift through large datasets to identify necessary steps for data cleaning, processing, and uploading onto our platform.
At Merge Labs, we are at the forefront of scientific innovation, dedicated to harmonizing biological and artificial intelligence to enhance human potential and experiences. Our mission is to pioneer groundbreaking brain-computer interface technologies that communicate with the brain at unprecedented speeds, integrate seamlessly with advanced AI, and ensure safety and accessibility for all users.

About Our Team:
Our team is committed to transforming advanced brain-computer interface visions into tangible algorithms. By integrating knowledge from synthetic biology, neuroscience, device physics, signal processing, and machine learning, we create effective methodologies to connect human intelligence with artificial intelligence. Our work involves designing experiments, developing analytical frameworks, collecting data, training models, and optimizing performance to construct scalable Brain-AI systems. We prioritize urgency while maintaining a balance between creative exploration and engineering rigor, as we believe that enhancing human ability, agency, and experience is one of the most pressing challenges of our era.

Role Overview:
As the most senior data engineer on our team, you will take charge of the data pipelines that capture, process, and deliver the essential data driving Merge's molecular optimization platform. Your role will include converting diverse laboratory outputs into structured, queryable datasets that enable scientific analysis and closed-loop machine learning. Collaborating closely with experimentalists, you will establish data standards and metadata conventions, and work alongside ML engineers to ensure results are integrated into production-grade systems.

This position reports to the Head of Software and involves extensive cross-functional collaboration, spanning software engineering, data architecture, and scientific informatics. As part of the Core Software team, you will be supported by infrastructure specialists and will coordinate with the Application Development Lead to ensure comprehensive scientific and user input capture.

Your Responsibilities Include:
- Developing and maintaining ingestion pipelines from laboratory instruments into centralized data storage.
- Designing schemas and capturing metadata standards for experimental data.
- Implementing post-processing pipelines to generate analysis-ready datasets for our scientific teams.
- Setting up monitoring, alerting, and structured logging for data pipelines to ensure optimal operation.
About Us
At Rox, we are dedicated to empowering individuals to achieve their greatest potential. Our innovative platform enhances sales efforts through autonomous revenue agents, allowing sellers to prioritize their expertise in selling. Just as coding agents transformed engineering, revenue agents amplify customer interactions.

We are revolutionizing the revenue stack by developing the world's first revenue operating system, encompassing everything from the application layer to systems of context. At Rox, we envision a future where humans evolve into orchestrators while agents handle the complete customer lifecycle.

Our solutions support Global 2000 leaders across sectors such as banking, construction, and AI, partnering with industry giants like Ramp and Cognition. Our success stems from a united belief in our mission and an unwavering commitment to making it a reality.

The Team
Our world-class team is the backbone of our innovative approach to redefining business operations. Our team members have:
- Founded and successfully exited companies
- Held top roles at Google, AWS, Confluent, and New Relic
- Won gold medals in international mathematics competitions
- Published groundbreaking research papers

We are proud to be backed by leading investors, having raised $50 million from Sequoia (Alfred Lin), General Catalyst (Hemant Taneja), Google Ventures, Elad Gil, and Chris Ré.

Core Principles
Taste: Craft beautiful experiences. We meticulously focus on every detail, striving to ensure that each interaction not only helps sellers accomplish their tasks but also enhances their experience. We are relentless in our pursuit of excellence, always exploring new ways to delight our sellers.
Obsession: Commit unreasonably. We are dedicated to our craft, responding to customer needs proactively and driving value even before they ask. Our commitment to continuous learning and self-improvement is unwavering.
Action: Get it done. Execution is key; we prioritize thoughtful yet swift decision-making and immediate delivery. Trust is essential in our field, and we earn it through our actions.
About Rox
At Rox, we are dedicated to empowering individuals to excel in their work. Our innovative platform enhances seller performance with autonomous revenue agents that handle manual tasks, allowing sales professionals to concentrate on their primary role: selling. Just as coding agents exponentially increased engineering productivity, our revenue agents elevate customer engagement.

We are revolutionizing the revenue landscape by developing the world's first revenue operating system, integrating everything from the application layer to the system of context. At Rox, we envision a future where humans become orchestrators, while our agents efficiently manage the complete customer lifecycle.

Rox serves Global 2000 leaders across sectors such as banking, hardware, construction, and sovereign AI, while also supporting prominent AI innovators like Ramp and Cognition. This success is built on a steadfast belief in our mission and an unwavering commitment to achieving it.

Join Our Team
Our world-class team is the cornerstone of our success, tasked with redefining business operations. Team members have:
- Founded and exited successful startups
- Held top positions at Google, AWS, Confluent, and New Relic
- Achieved gold medals in IMO and IOI
- Published groundbreaking research papers

We are proud to be backed by top-tier investors, having raised $50M from Sequoia (Alfred Lin), General Catalyst (Hemant Taneja), Google Ventures, Elad Gil, and Chris Ré.

Core Principles
Taste: Craft exceptional experiences. Every detail matters; we ensure that each interaction is optimized for the seller's success and satisfaction. We strive for perfection and continuously seek ways to delight our users.
Obsession: Commit without reservation. We are dedicated to our craft, proactively addressing customer needs and constantly improving our skills and products.
Action: Deliver results. Execution is key; we prioritize thoughtful, rapid decision-making and immediate delivery, fostering a culture of trust.
About Us
At Imprint, we are transforming the landscape of co-branded credit cards and financial products to be more innovative, rewarding, and brand-centric. We collaborate with distinguished companies such as Crate & Barrel, Rakuten, Booking.com, H-E-B, Fetch, and Brooks Brothers to launch contemporary credit programs that enhance customer loyalty, unlock savings, and stimulate growth. Our platform seamlessly integrates advanced payment infrastructure, intelligent underwriting, and a user-friendly experience, enabling brands to offer impactful financial products without the need to operate as a bank.

Co-branded cards represent over $300 billion in annual spending in the U.S., yet many are still reliant on outdated banking systems. Imprint stands out as the modern alternative: adaptable, technology-driven, and tailored for today's consumers. Supported by renowned investors such as Kleiner Perkins, Thrive Capital, and Khosla Ventures, we are assembling a top-tier team to revolutionize payment methods and foster brand growth. If you're eager to work in a dynamic environment, tackle challenging problems, and make a significant impact, we invite you to join us.

Our Team
The Data Engineering team at Imprint is tasked with developing and expanding the data infrastructure that underpins product innovation, analytics, operations, and machine learning throughout the organization. We are responsible for creating the pipelines, platforms, and processes that enable our stakeholders to trust and leverage data effectively.

We are seeking a Data Engineer to advance our modern data architecture and deliver dependable, scalable data solutions. Your contributions will directly influence decision-making and innovation across various business areas, from financial operations to real-time personalization.
At Plaid, we envision a future where financial interactions are seamless and intuitive. We are committed to driving this evolution by crafting innovative tools and experiences that empower thousands of developers to build their own applications. Our platform supports millions of users in achieving healthier financial lives by connecting their financial accounts with the apps and services they love. Collaborating with industry leaders like Venmo, SoFi, numerous Fortune 500 companies, and major banks, Plaid's extensive network encompasses over 12,000 financial institutions across the United States, Canada, the UK, and Europe. Established in 2013 and headquartered in San Francisco, we also have offices in New York, Washington D.C., London, and Amsterdam.

As a Senior Data Engineer, you will play a pivotal role in our Data Engineering team, which is focused on developing robust golden datasets to enhance our insights-driven product offerings. Data-driven decision-making is at the heart of Plaid's culture, and your expertise will help us scale our data systems while ensuring data accuracy and integrity. You will provide essential tooling and guidance to teams across engineering, product, and business, enabling them to access and utilize data effectively. Our engineers rely heavily on SQL and Python to create data workflows, employing tools such as dbt, Airflow, Redshift, Elasticsearch, Atlan, and Retool for orchestrating data pipelines and defining workflows. Collaborating closely with engineers, product managers, business intelligence, and data analysts, you will contribute to shaping Plaid's data strategy and fostering a data-first mindset. We thrive on an engineering culture that values individual contributions, encouraging bottom-up ideation and empowering our talented team. We seek engineers passionate about making a meaningful impact for our users and customers while growing together as a cohesive team, delivering MVPs, and continuously improving our processes.

In this high-impact position, you will empower business leaders to make faster, more informed decisions based on the datasets you create. You will have the opportunity to define the ownership and scope of internal datasets and visualizations across Plaid, an area we aim to develop further by establishing SLAs. This role presents a unique chance to enhance your technical skills and learn best practices from our strong Data Engineering team and the broader Data Platform team. You will collaborate extensively across all teams at Plaid, from Engineering to Product to Marketing and Finance, forging strong cross-functional partnerships.
Full-time|$160K/yr - $210K/yr|On-site|New York, NY, San Francisco, CA or Los Angeles, CA
The Opportunity
Join Enigma at a pivotal moment as we prepare to unveil our next generation of small business intelligence products, informed by years of industry experience. We are in search of a seasoned Senior Data Engineer who will play a crucial role in shaping the future of these innovative products. Your contributions will significantly enhance the accuracy, scalability, and predictive analytics of our data platform, directly impacting our clients' key business strategies.

The Role
As a Senior Data Engineer, your primary responsibility will be to design, construct, and manage our foundational small business data product. You'll work closely with cross-functional teams to develop new product functionalities, starting from raw data sources, through scalable data pipelines and models, to the customer interface. Our culture emphasizes customer focus and complete ownership, measuring your success by the value delivered and the technical excellence of our product.

What You'll Do
- Identify and address the unmet needs of our customers through thorough data exploration, prototyping, and evaluation, culminating in the development of robust and maintainable data pipelines.
- Construct scalable and maintainable data systems that narrate the stories of millions of U.S. small businesses, utilizing tools like Databricks SQL, PySpark, and MLflow.
- Leverage our pioneering data infrastructure designed for iterative development and experimentation, utilizing Dagster for orchestration and LakeFS for data version control.
- Employ a data-driven, first-principles approach to problem-solving, favoring straightforward and elegant solutions while being capable of deploying advanced models and techniques when necessary.
- Motivate your colleagues to excel while nurturing a collaborative and supportive team environment.

What Makes This Role Exciting?
- Ownership and Impact: Collaborate directly with customers and internal teams to develop product features in a high-trust environment.
- Technical Challenge: Solve complex data and engineering challenges, balancing innovation speed with scalability and reliability.
- Learning: Engage with unique, impactful data challenges that demand creative solutions and cutting-edge methodologies.
Front Careers seeks a Senior Data Engineer based in San Francisco, CA. This position centers on building and maintaining data systems that power analytics and reporting throughout the organization.

Key responsibilities
- Design and maintain data architecture to meet analytics and reporting requirements
- Develop, optimize, and support data pipelines for reliability and scalability
- Collaborate with teams across the company to improve data quality, accessibility, and security
- Enhance and maintain the data infrastructure supporting company operations

Role overview
This role focuses on ensuring that data flows efficiently and securely, enabling teams to access the information they need for business decisions. The Senior Data Engineer will play a central part in shaping how data is managed and used at Front Careers.
About Us:
At Notion, we empower individuals and teams to create stunning productivity tools tailored to their unique workflows. In a digital landscape crowded with applications and tabs, our platform unifies documents, notes, projects, calendars, and emails, all enhanced with AI capabilities to streamline tasks and provide quick insights. Trusted by millions, including industry leaders like Toyota, Figma, and OpenAI, Notion is chosen for its versatility, helping users optimize their time and resources.

We believe in the value of in-person collaboration, making it a cornerstone of our company culture. To foster teamwork, all employees are required to work from our San Francisco office on Mondays, Tuesdays, and Thursdays, our designated Anchor Days. Certain roles may necessitate additional in-office presence.

About The Role:
As our company expands at an impressive pace, we are on the lookout for skilled Data Engineers to join our finance team. In this role, you will be instrumental in constructing robust datasets and data pipelines that underpin our financial reporting. Your work will bridge our product, financial, and operational systems, creating reliable processes that drive our growth. If you have a passion for data analytics, complex modeling, and innovative problem-solving, we would love to have you on board.

What You'll Achieve:
- Develop essential datasets that act as the definitive sources for Notion's financial reporting, integrating information from financial systems, business databases, and our product.
- Collaborate closely with our Finance, Monetization Engineering, Business Intelligence, and Data Science teams to address critical financial analysis and reporting requirements.
- Design, construct, and oversee data pipelines that not only meet current demands but are also scalable as our data needs grow.
- Facilitate widespread access to high-quality financial data across the Finance, Staff, and go-to-market teams.

Skills You'll Need to Bring:
- A minimum of 4 years of experience as a data engineer, focusing on building core datasets and supporting various business sectors, particularly in product and high-volume data environments. A strong enthusiasm for analytics use cases, data modeling, and tackling intricate data challenges is essential.
- Experience in creating integrations and reporting datasets for payment, financial, and business systems such as Stripe, NetSuite, Adaptive, Anaplan, Salesforce, or equivalent.
- A proactive mindset, with the ability to gather and synthesize high-impact needs from business partners, designing and implementing suitable technical solutions.
Full-time|$140K/yr - $160K/yr|On-site|San Francisco, CA
About Alembic
Alembic is at the forefront of transforming marketing strategies, demonstrating the actual ROI of marketing initiatives. Our cutting-edge Alembic Marketing Intelligence Platform employs advanced algorithms and AI models to address this longstanding challenge effectively. By joining our team, you'll contribute to the development of tools that deliver unparalleled insights into how marketing influences revenue, empowering a growing roster of Fortune 500 companies to make data-driven decisions with confidence.

About the Role
As a Senior Data Engineer at Alembic, you will play a crucial role in our data platform. You will be responsible for creating scalable and dependable data pipelines, optimizing storage solutions, and facilitating both real-time and batch analytics. Collaborating closely with data scientists, software engineers, and product leaders, you will design and implement robust data architectures that propel our mission forward.

Key Responsibilities
- Design, develop, and maintain scalable ETL pipelines that efficiently ingest, process, and transform extensive volumes of structured and unstructured data.
- Optimize data storage solutions utilizing modern data lakehouse architectures and industry best practices to enhance cost-effectiveness, performance, and reliability.
- Collaborate with data scientists and engineers to seamlessly integrate machine learning models and analytical workloads into production environments.
- Ensure the integrity, quality, and security of data by implementing monitoring, alerting, and governance best practices.
- Work with cloud-based data warehouses and distributed data processing frameworks to support our data initiatives.
- Continuously assess and implement innovative technologies to enhance data infrastructure and operational efficiency.

What We're Looking For
- 10+ years of experience in data engineering, software engineering, or a related field.
- Strong proficiency in SQL and Python for data processing.
- Experience with contemporary data warehousing and lakehouse solutions (e.g., Iceberg or similar).
- Expertise in distributed systems and big data technologies (Apache Spark, Hadoop, Kafka, Flink).
- Hands-on experience with cloud platforms (AWS, GCP, Azure) and related data services.
- Deep understanding of data management and governance practices.
About Us
At Imprint, we're on a mission to revolutionize co-branded credit cards and financial products, creating smarter, more rewarding experiences that prioritize brand identity. We collaborate with industry leaders like Crate & Barrel, Rakuten, Booking.com, H-E-B, Fetch, and Brooks Brothers to design innovative credit programs that enhance customer loyalty, unlock savings, and stimulate growth. Our cutting-edge platform seamlessly integrates advanced payment solutions, intelligent underwriting, and user-friendly design, enabling brands to offer powerful financial products without the need to become a bank.

Co-branded credit cards represent over $300 billion in annual expenditure in the U.S., yet many are still rooted in outdated banking systems. Imprint stands as the modern alternative: adaptable, technology-driven, and designed for today's savvy consumers. Supported by prominent investors such as Kleiner Perkins, Thrive Capital, and Khosla Ventures, we are assembling a world-class team to reshape payment methods and drive brand growth. If you're eager to work in a fast-paced environment, tackle challenging problems, and make a significant impact, we want to hear from you! Discover more about our innovations on Imprint's Technology blog.

Our Team
The Data Engineering team at Imprint is pivotal in constructing and scaling our data infrastructure, which underpins product development, analytics, operations, and machine learning throughout the organization. We manage the pipelines, platforms, and processes that empower our stakeholders to trust and utilize our data effectively.

Your Role
As a Senior Data Engineer, you will be responsible for designing and implementing our data platform, addressing our most challenging technical issues. You will lay the groundwork for Imprint's upcoming decade of growth by scaling our infrastructure for rapid expansion, providing insights that inform multi-million dollar decisions, facilitating secure partner data sharing, and transforming how our teams leverage data. Join us in building a platform that not only meets today's demands but also anticipates future opportunities.

Key Responsibilities
Infrastructure Development & Scaling
- Design and optimize our next-generation data platform, leveraging tools such as Snowflake, dbt Cloud, and real-time CDC pipelines for enterprise-level scalability.
- Create secure and compliant partner data delivery systems utilizing Snowflake shares, S3/SFTP integrations, and Marketplace listings.
- Develop critical financial reporting pipelines ensuring exceptional accuracy and reliability.
Pursue Technical Excellence
- Implement best practices in data engineering, ensuring robust data governance and security.
- Collaborate with cross-functional teams to enhance data accessibility and usability.
About Us:
At Notion, we empower teams to create stunning tools tailored for their unique workflows. In a digital landscape filled with countless applications and tabs, Notion stands out as the singular platform where teams can consolidate their efforts, effortlessly merging documents, notes, projects, calendars, and emails, all enhanced with AI to expedite processes and uncover insights. Our diverse user base, ranging from individuals to giants like Toyota, Figma, and OpenAI, appreciates Notion for its versatility, efficiency, and cost-saving capabilities.

Collaboration is at the heart of Notion's culture. To foster this, all team members are required to work from our offices on Mondays, Tuesdays, and Thursdays, which we refer to as Anchor Days. Certain teams may have additional in-office requirements.

About The Role:
As Notion experiences rapid growth, we are on the lookout for skilled Data Engineers to join our dynamic team. Your role will involve constructing foundational datasets and pipelines that bolster our go-to-market initiatives. You will play a pivotal role in integrating our product and business systems, establishing robust processes that drive our success. If you have a passion for analytics, data modeling, and unraveling intricate data challenges, we would love to have you on board.

What You'll Accomplish:
- Develop core datasets that serve as the authoritative sources for Notion's marketing and sales reporting, ensuring seamless data flow between business systems and our product.
- Collaborate closely with our Marketing, Sales, Revenue Operations, Business Technology, Business Intelligence, and Data Science teams to fulfill critical reporting and analysis needs.
- Design, implement, and oversee pipelines that address current demands while being capable of scaling alongside our expanding data landscape.
- Promote accessibility to high-quality data across go-to-market teams, staff, and the organization at large.

Required Qualifications:
- Minimum of 3 years of experience as a Data Engineer, focusing on building core datasets and supporting various business verticals, particularly in high-data-volume product and business environments. A strong enthusiasm for analytics, data models, and solving complex data challenges is essential.
- Proven experience working with Marketing and Sales datasets within a SaaS context, including integrations with business systems such as Salesforce, NetSuite, Marketo, and Zendesk.
- Self-motivated individual who proactively gathers and synthesizes impactful business needs, designing and implementing effective data solutions.
Join our innovative team at alljoined as a Data Infrastructure Engineer, where you will play a pivotal role in shaping our data architecture and ensuring the reliability and efficiency of our data systems. You will collaborate with cross-functional teams to design and implement scalable data solutions that empower data-driven decision-making.
Full-time | Hybrid: San Francisco, Los Angeles, New York
Join Laurel as a Staff Data Engineer, where you will play a pivotal role in designing and implementing scalable data solutions. You will collaborate with cross-functional teams to optimize data architecture and enhance data analytics capabilities.
Embrace the Future of Commerce with Whatnot!

Whatnot is the premier live shopping platform across North America and Europe, dedicated to enabling users to buy, sell, and discover their passions. We are transforming e-commerce by merging community engagement, shopping, and entertainment into a unique experience crafted just for you. Operating as a remote co-located team, we thrive on innovation and are deeply rooted in our core values. With operational hubs in the US, UK, Germany, Ireland, and Poland, we are collaboratively shaping the future of online marketplaces. From fashion and beauty to electronics and collectibles like trading cards and comic books, our live auctions offer a diverse range of products for everyone.

And we're just getting started! As one of the fastest-growing marketplaces, we are on the lookout for innovative, forward-thinking problem solvers to join our expanding team. Stay up to date with the latest from Whatnot through our news and engineering blogs, and help us empower individuals to turn their passions into thriving businesses while fostering community through commerce.

Role Overview
Data plays a pivotal role in Whatnot's mission to unite people through commerce. As our newest Data Engineer, you will design and enhance the systems that facilitate data-driven decision-making across the organization. You will collaborate directly with stakeholders from various departments, including product, sales, marketing, finance, and trust teams, to create resilient data architectures, develop robust pipelines, and establish foundational data products that drive Whatnot's growth, both internally and externally.

On any given day, your responsibilities may include:
End-to-End Data Architecture Ownership. Define our approach to capturing, modeling, and serving critical business data, and implement solutions in production. You will make key architectural decisions regarding storage formats, computing patterns, and SLAs that balance cost, scalability, and consistency.
Development of Mission-Critical Pipelines. Design and maintain streaming and batch data workflows that handle high-volume events across domains such as user engagement, transactions, experimentation, marketing performance, and operational telemetry, while ensuring low latency, completeness, and accuracy.
Canonical Model Design and Implementation. Develop domain-oriented data models that serve as the backbone for our analytics and reporting needs.
About Us
At Rox Data Corp, we are dedicated to empowering individuals to excel in their roles. Our innovative platform enhances sales processes through autonomous revenue agents, allowing sales professionals to concentrate on their core strength: selling. Just as coding agents revolutionized engineering, our revenue agents significantly enhance customer engagement. We are pioneering the revenue stack with the world's first revenue operating system, integrating seamlessly from the application layer to the system of context. At Rox, humans evolve into orchestrators while our agents efficiently manage the complete customer lifecycle. Our solutions empower Global 2000 leaders across sectors such as banking, hardware, construction, and sovereign AI, while also supporting top AI innovators like Ramp and Cognition. This success is rooted in our shared commitment to our mission, and we are determined to achieve it through unwavering dedication.

The Team
Our world-class team is essential in redefining business operations. Team members have:
Founded and exited successful companies
Held senior positions at industry giants like Google, AWS, Confluent, and New Relic
Achieved gold medals at the IMO and IOI
Published groundbreaking research papers
We are proud to be backed by leading investors, having raised $50M from Sequoia (Alfred Lin), General Catalyst (Hemant Taneja), Google Ventures, Elad Gil, and Chris Ré.

Core Principles
Taste: Craft Beautiful Experiences. We pay attention to every detail, ensuring that each interaction not only assists the seller but also enhances their overall experience. We pursue perfection, continually seeking to delight our users.
Obsession: Commit Unreasonably. We are proactive in adding value and responsive to customer needs. Our team is dedicated to continuous learning and improvement, striving to enhance both our skills and our product daily.
Action: Get It Done. Execution is key; we prioritize thoughtful and swift decision-making to deliver results that build trust with our clients.
Sep 25, 2025