Experience Level
Mid to Senior
Qualifications
We are seeking candidates with a strong background in data engineering, proficiency in SQL and Python, and experience with cloud platforms such as AWS or Azure. A bachelor's degree in Computer Science, Engineering, or a related field is preferred. Ideal candidates should possess excellent problem-solving skills and the ability to work in a fast-paced environment.
About the job
Join m1finance as a Senior Data Engineer and play a vital role in transforming our data architecture and analytics capabilities. You will collaborate with cross-functional teams to design and implement robust, scalable data solutions that empower our organization to make data-driven decisions.
About m1finance
m1finance is revolutionizing the way people manage their investments by providing an intuitive platform that combines finance with technology. We are committed to innovation, and our mission is to empower individuals to take control of their financial futures.
About Us: At Carefeed, we empower senior living and long-term care providers with a comprehensive platform that streamlines operations, enhancing the experience for both staff and residents. Our innovative solution replaces outdated paper processes, phone calls, and disparate systems with an integrated digital approach, allowing teams to focus on what truly matters: caring for residents and their families.

We seamlessly integrate with existing EHR and HR systems, reducing operational strain while keeping communities organized. Carefeed is designed for ease of use, functionality, and the realities of multi-community care, helping providers maintain efficiency and confidence in their operations. With thousands of communities in the US and Canada relying on us, Carefeed is dedicated to supporting organizations in delivering exceptional experiences for residents, families, and caregivers alike.

About the Opportunity: We are seeking a Senior Data Engineer to spearhead the development of our data infrastructure and enhance our developer tools. In this pivotal role, you will design and implement robust data systems, develop tools that improve developer productivity, and create the foundational technology for impactful data-driven products. This position is perfect for an experienced data engineer ready to transition into infrastructure management. Your primary project will involve designing and constructing a central data lake for 2026, alongside the necessary pipelines (e.g., SQS, Kafka) to support it. As the inaugural data engineer at Carefeed, you will play a key role in guiding design and tooling decisions and executing those plans. As part of our Infrastructure team, you will also engage in standard infrastructure tasks, including observability and deployments.
Data Engineer
Employment Type: Full-Time, Mid-Level
Department: Business Intelligence

CGS Federal is in search of a dedicated and innovative Data Engineer to bolster our expanding Data Analytics and Business Intelligence platform. This platform is designed to provide federal clients with the essential tools and capabilities to transform data into actionable insights. The ideal candidate will possess strong analytical skills and a love for continuous learning, eager to navigate and develop expertise across a variety of technologies while addressing some of our clients' most challenging problems. At CGS, we unite motivated, skilled, and creative individuals to tackle the government's most pressing challenges with state-of-the-art technology. We are looking for candidates who are enthusiastic about contributing to government innovation, value teamwork, and can proactively address the needs of others. We foster an environment where our employees feel supported and promote professional development through diverse learning opportunities.

Key Responsibilities:
- Develop and maintain data pipelines to effectively store, manage, and provide data to users.
- Actively participate in an Agile/Scrum team, adhering to best practices.
- Write efficient code to ensure reliable data extraction and processing.
- Facilitate continuous automation of data ingestion processes.
- Promote technical excellence by following lean-agile engineering principles, including API-first design, simple designs, continuous integration, version control, and automated testing.
- Collaborate with program management and engineers to understand and document complex, evolving requirements.
- Foster an environment that encourages customer service excellence and teamwork.
- Work collaboratively within a cross-functional team, including user experience researchers, product managers, and other specialists.

Qualifications:
- U.S. Citizenship is required.
- Ability to obtain a Public Trust Clearance.
- A minimum of 7 years of IT experience, focusing on the design and management of large, complex datasets and models.
- Proven experience developing data pipelines from various sources, including structured and unstructured data formats.
- Proficiency in creating ETL processes and conducting testing and validation.
- Strong skills in data manipulation using tools such as Python, R, SQL, or SAS.
Are you an innovative Data Engineer looking to make a significant impact in a dynamic environment? Join Reply, a leading technology consulting firm, where you will be at the forefront of data transformation and analytics. As a Data Engineer, you will design and implement robust data pipelines, ensuring that our clients leverage data effectively to drive business decisions.
Join our innovative team as a Data Engineering Manager at Stratacareers, where you will lead a dynamic group of data engineers dedicated to building robust and scalable data infrastructure that empowers leading healthcare providers across the nation.

In this pivotal role, you won’t just be responsible for coding; you will also define strategic initiatives, mentor your team, and drive the transformation of data into actionable insights that propel our clients’ missions forward. You will play a key role in scaling our data platform, integrating cutting-edge technologies, and collaborating across departments to ensure our data engineering efforts are efficient and aligned with future goals.

If you thrive on mentoring engineers, tackling technical challenges, and enhancing processes while staying hands-on with data, we would love to hear from you!
Join Our Award-Winning Technology Team as a Senior Data Engineer at RAPP Chicago

About Us: At RAPP, we are pioneers in driving growth with precision and empathy on a global scale. As a leading next-generation precision marketing agency, we blend data, creativity, technology, and empathy to stimulate client growth. We pride ourselves on creating tailored marketing solutions that resonate with the individuality of our clients and their audiences. We are committed to fostering an inclusive workplace that prioritizes personal well-being.

Our Approach: Our dynamic team of superconnectors excels in delivering value from personal brand experiences by focusing on three primary areas: connected data, connected content, and connected decision-making. Our data analysts identify individual insights, our strategists decode client needs, and our talented technologists and creatives craft solutions for authentic customer engagement. Part of Omnicom’s Precision Marketing Group, RAPP comprises over 2,000 creatives, technologists, strategists, and data scientists across more than 15 global markets.

Your Role: We are seeking a highly skilled Senior Data Engineer with extensive experience in creating scalable, cloud-native data pipelines and platforms. The ideal candidate will be proficient in Python, Apache Airflow, AWS Lambda, DynamoDB, and dbt, and possess a track record of designing reliable data workflows that facilitate advanced analytics, reporting, and machine learning applications. A keen attention to detail, a passion for information management, and the ability to collaborate effectively with creative teams to optimize asset workflows are essential.

Your Responsibilities:
Data Pipeline Development
- Design, implement, and maintain robust ETL/ELT pipelines leveraging Python and Airflow.
- Craft serverless workflows using AWS Lambda for scalable, event-driven data processing.
- Optimize dbt models for analytics and transformations.
Data Architecture & Storage
- Design schemas and manage data in DynamoDB and other cloud-native storage solutions.
- Ensure high availability and performance of data storage solutions.
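The serverless, event-driven processing pattern mentioned above can be sketched as a plain-Python Lambda-style handler. This is a minimal, hypothetical sketch: the SQS-like event shape and the `user_id`/`amount` field names are assumptions for illustration, not details from the posting, and a real pipeline would write the normalized rows to a store such as DynamoDB rather than return them.

```python
import json

# Sketch of an AWS Lambda-style handler for event-driven data processing.
# The event shape (SQS-like "Records" with JSON string bodies) and the
# field names are illustrative assumptions.
def handler(event, context=None):
    rows = []
    for record in event.get("Records", []):
        payload = json.loads(record["body"])
        # Light validation and normalization before downstream storage:
        # skip records missing required fields, coerce types.
        if "user_id" in payload and "amount" in payload:
            rows.append({
                "user_id": str(payload["user_id"]),
                "amount": round(float(payload["amount"]), 2),
            })
    return {"processed": len(rows), "rows": rows}
```

Keeping the handler a pure function of the event makes it easy to unit-test locally before wiring it to a queue trigger.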
Join RAPP Chicago as a Data Engineer!

About Us: At RAPP, we stand at the forefront of precision marketing, leading the charge in fostering growth with empathy and precision on a global scale. Our innovative approach combines data, creativity, and technology to deliver tailored marketing solutions that resonate with individual client needs. We believe in nurturing a diverse and inclusive workplace that prioritizes personal well-being and growth.

Our Methodology: Our dynamic team is composed of fearless superconnectors who harness the power of connected data, content, and decision-making to create value from personal brand experiences. We strive to build authentic customer connections through meticulous data analysis, strategic insights, and cutting-edge technology.

Your Role: We are on the lookout for a passionate Data Engineer who is eager to learn and contribute to the design and implementation of scalable, cloud-native data pipelines and platforms. The ideal candidate will possess foundational Python knowledge and a keen interest in utilizing modern technologies such as Apache Airflow, AWS Lambda, DynamoDB, and dbt to enhance data workflows. Curiosity about data systems and a commitment to mastering data engineering best practices are essential as you gain hands-on experience.
Join the innovative team at Reply as a Senior Data Engineer, where you will play a crucial role in developing and maintaining data architectures that drive business intelligence and analytics. You will collaborate with cross-functional teams to streamline data processes and ensure robust data governance. Your expertise in data modeling, ETL processes, and cloud technologies will be essential in helping us harness the power of data to make informed business decisions.
Contract|$60/hr - $120/hr|Remote|Chicago, Illinois, United States
Notice: We are currently not accepting new applications for this position.

Position: Contract Data Engineer - Civic & Political Data
Role Overview: As a Contract Data Engineer, you will play a critical role in enhancing and maintaining the essential data infrastructure and tools at Climate Cabinet. This position is key in identifying opportunities to support the election of pro-climate candidates and the advancement of impactful pro-climate legislation across various state houses.
Location: Remote, available to candidates across the United States.
Application Deadline: March 17, 2026
Expected Start Date: March 2026
Contract Duration: 6 months with an option to extend.
Hours: Flexible commitment ranging from 15 to 40 hours per week, with project assignments tailored to contractor availability.
Compensation: Anticipated hourly rates between $60 and $120, commensurate with experience and demonstrated skill level. Those with extensive experience in political data systems, AI infrastructure, and independent project leadership may receive compensation at the higher end of this range.
Full-time|Remote|Netherlands, Remote; Spain, Remote; United Kingdom, Remote; United States, Atlanta; United States, Boston; United States, Charlotte; United States, Chicago; United States, Cincinnati; United States, Miami; United States, Milwaukee; United States, Minneapolis; United States, Philadelphia; United States, Raleigh; United States, St. Louis; United States, Tampa
About Dataiku
Dataiku provides a platform for building, deploying, and managing AI and analytics at scale. The company helps organizations design and operate analytics, machine learning, and AI agents with transparency and control. Dataiku connects the enterprise AI stack, supporting centralized governance and multi-vendor environments. Learn more on the Dataiku blog, LinkedIn, X, and YouTube.

Role Overview
Dataiku is hiring a Data Engineer II for the Enterprise Data and Analytics (EDA) team. This role focuses on delivering reliable data to drive analytics, AI, and insights for teams across the company. The Data Engineer II will play a central part in supporting the Data Platform, which serves centralized analytics, Generative AI engineering, embedded analytics teams, and self-service users.

What You Will Do
- Contribute technical expertise to the Data Platform, working with Snowflake, Dataiku, and GitHub.
- Develop solutions using Python and SQL, with DataOps processes integrated into GitHub Actions and Dataiku.
- Collaborate with engineers from other teams to deliver solutions across various technical domains.
- Promote engineering best practices within the EDA team and the broader Analytics Community.
- Support analytics and AI initiatives for a wide range of stakeholders.

What We Look For
- Experience with Snowflake, Dataiku, GitHub, Python, and SQL.
- Understanding of the software development lifecycle and DataOps methodologies.
- Strong collaboration skills and the ability to work across teams.
- Clear verbal and written communication.
- Analytical mindset and curiosity.

Locations
This position is open to candidates in the Netherlands (remote), Spain (remote), United Kingdom (remote), and the following US cities: Atlanta, Boston, Charlotte, Chicago, Cincinnati, Miami, Milwaukee, Minneapolis, Philadelphia, Raleigh, St. Louis, and Tampa.
Full-time|$276K/yr - $379.5K/yr|On-site|Bellevue, Washington; Chicago, Illinois; San Francisco, California; Washington, DC
Okta is an independent identity provider focused on building secure, trusted infrastructure for both AI and human users. The Technology, Data, and Intelligence (TDI) team supports Okta’s global workforce by providing the technology and systems employees need to succeed.

Role overview
The Senior Director of Data Platform and Engineering leads a global group of data and analytics engineers. This leader maximizes the value of Okta’s data assets and reports to the VP of Data and Insights. The position requires a balance of deep technical expertise and the ability to engage in strategic business discussions. As a player-coach, the Senior Director builds and mentors the team, guides key initiatives, and maintains a strong data foundation. This role is central to Okta’s AI strategy. As AI becomes more integrated into Okta’s products and operations, the quality and governance of data are increasingly important. The Senior Director ensures that clean, trusted, and well-managed data supports all AI projects, shaping the platform for Okta’s ongoing growth.

What you will do
- Lead and develop a high-performing team: Mentor, grow, and support a diverse group of data and analytics engineers. Foster a culture of excellence, collaboration, and continuous learning.
- Advance AI enablement: Work closely with AI Engineering teams to ensure data infrastructure and practices are ready for AI development and deployment. Define data governance standards, build quality training datasets, and develop scalable data pipelines for AI and machine learning models.

Location
Bellevue, Washington; Chicago, Illinois; San Francisco, California; Washington, DC
Join Valorem Reply, an award-winning digital transformation agency dedicated to driving solutions in data-centric enterprises, IT modernization, customer experience, product transformation, and digital workplace innovation through the power of Microsoft technologies. We specialize in hyper-scale and agile delivery of unique digital business services, strategic business models, and design-focused user interactions, ensuring our clients experience secure and rapid transformations in their operations. We are currently seeking a passionate and skilled Manager for our Data & AI team. In this role, you will primarily concentrate on developing modern data platforms utilizing Microsoft cloud technologies like Fabric and Databricks for our diverse clientele. As a technical leader, you will help chart the technical course of our practice while staying abreast of the latest trends in data-focused technology. Strong communication ability is essential, as you will act as a trusted partner to our customers. A solid grasp of data governance and the ability to guide clients on frameworks is critical to success in this role. This position is ideal for individuals who thrive in collaborative settings, appreciate flexibility and autonomy in their work, and enjoy tackling complex challenges.
About Akuna Capital:
Akuna Capital is a forward-thinking trading firm dedicated to collaboration, innovative technologies, data-driven solutions, and automation. We excel as an options market-maker, committed to providing competitive quotes for buying and selling. To achieve this, we develop and implement our own low-latency technologies, trading strategies, and mathematical models. Founded by our partners in Sydney, we established our first office in 2011 in Chicago, the heart of the derivatives industry and the options capital of the world. Today, Akuna proudly operates additional offices in Sydney, Shanghai, London, and Singapore.

Your Role as a Software Engineer in Data Engineering at Akuna:
As a data-centric organization, we leverage our data as a critical competitive asset. The Akuna Data Engineering team comprises world-class professionals responsible for designing, building, and maintaining the systems, applications, and infrastructure necessary for collecting, storing, processing, managing, and querying our data assets. This team plays a vital role in ensuring trustworthy data is accessible and reliable to support various initiatives within Akuna’s Quant, Trading, and Business Operations units.

Key Responsibilities:
- Contribute to the growth of our Data Engineering division, supporting the strategic importance of data at Akuna.
- Lead the design and enhancement of our data platform across diverse data sources, facilitating multiple streaming, operational, and research workflows.
- Collaborate closely with Trading, Quant, Technology, and Business Operations teams to identify data production and consumption processes, and define impactful projects.
- Develop and deploy batch and streaming pipelines to collect and transform our expanding Big Data set within a hybrid cloud architecture, utilizing Kubernetes/EKS, Kafka/MSK, and Databricks/Spark.
- Mentor junior engineers on software and data engineering best practices.
- Create clean, well-tested, and well-documented code to support mission-critical applications.
- Implement automated data validation test suites to ensure data is processed and published according to established Service Level Agreements (SLAs) regarding data quality.
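The automated data-validation idea in the last responsibility above can be sketched in plain Python. This is a hypothetical example: the 99.5% completeness threshold and the `symbol`/`price`/`ts` field names are invented for illustration and are not Akuna's actual SLAs or schemas.

```python
# Sketch of an automated data-quality check against an SLA threshold.
# The required fields and the 99.5% completeness target are illustrative
# assumptions, not real SLA values.
REQUIRED_FIELDS = ("symbol", "price", "ts")
COMPLETENESS_SLA = 0.995  # fraction of records that must be complete

def is_complete(record):
    # A record is complete when every required field is present and non-null.
    return all(record.get(f) is not None for f in REQUIRED_FIELDS)

def check_completeness(records):
    # Returns the observed completeness ratio and whether it meets the SLA.
    if not records:
        return {"complete_ratio": 0.0, "sla_met": False}
    ratio = sum(is_complete(r) for r in records) / len(records)
    return {"complete_ratio": ratio, "sla_met": ratio >= COMPLETENESS_SLA}
```

A check like this would typically run as a post-publish step in the pipeline and raise an alert when `sla_met` is false.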
Full-time|$130K/yr - $145K/yr|Hybrid|Chicago, IL; Boston, MA
Join Our Team at Kalderos

At Kalderos, we are dedicated to developing innovative technologies that foster transparency, trust, and equity across the healthcare landscape, with a particular emphasis on pharmaceutical pricing. Our true measure of success lies in empowering the healthcare community to prioritize improving the health of individuals. Our achievements are driven by our greatest asset: our people. We thrive on problem-solving, innovation, and constructive feedback from our colleagues. We are passionate about our mission and are seeking like-minded individuals to join our dynamic team.

We are currently in search of a collaborative Data Engineer II. You should be eager to work in a rapidly growing and evolving environment, with the skills and experience necessary to excel. Familiarity with the fast-paced and often unpredictable nature of operations will enable you to deliver meaningful results. This is a full-time, hybrid position that can be based in either Chicago, IL or Boston, MA. Please note that relocation assistance will not be provided.

Expected Salary Range: $130,000 - $145,000 base salary + bonus
Join Above Lending as a Senior Data Engineer, where you'll play a pivotal role in transforming our data architecture to enhance decision-making processes. In this position, you will work closely with data scientists and analysts to develop robust data pipelines and ensure data integrity across our systems. Your expertise will drive the optimization of our data storage and retrieval processes, ultimately contributing to our mission of providing superior lending solutions.
Full-time|$185K/yr - $225K/yr|On-site|Chicago, United States
Join our dynamic team at IMC Trading as a Data Platform Operations Engineer focused on performance. In this pivotal role, you'll leverage your data expertise and operational skills to enhance the reliability, accuracy, and efficiency of the data systems that underpin performance analysis and strategic decision-making. As a production-facing engineer, you will collaborate closely with the Performance Team and key stakeholders to ensure that our data workflows are robust, observable, and efficient.

Key Responsibilities:
- Oversee the operational health of the Performance Team's data platform.
- Monitor and maintain ETL processes and data pipelines to ensure correctness, timeliness, and stability.
- Design and implement data quality checks, reports, and alerts tailored to performance data.
- Analyze and resolve data issues and pipeline failures, identifying root causes to prevent recurrence.
- Manage and troubleshoot GitLab CI/CD pipeline challenges related to performance data workflows.
- Oversee workloads and resource usage, identifying and rectifying imbalances and bottlenecks.
- Develop and maintain operational tools, automation, and runbooks.
- Continuously optimize data pipelines and workloads to enhance performance and resource efficiency.
Join a leading investment bank as a Big Data Engineer in vibrant Chicago, IL! Our client seeks a talented individual to enhance their team, focusing on streamlining data access for clients. Currently, support analysts spend approximately one hour daily accessing various data sources to answer client queries. This project aims to empower clients by granting them direct access to the necessary data.

Your Responsibilities:
- Collaborate with business stakeholders to gather and analyze technical requirements.
- Design and implement comprehensive end-to-end solutions.
- Select the most suitable technology stack for effective project execution.

Qualifications:
- Proficiency with tools in the Big Data ecosystem.
- Experience developing Kafka streaming applications using Java.
- Hands-on expertise in building Spark applications with Java or Scala.
- Familiarity with NoSQL databases.
- Full-stack Java development experience.
- Proficiency in developing production applications using Node.js and React.

Project Background: Cross Asset Query Tool, designed to enable low-latency query capabilities across our Cross Asset data.
As a Lead Machine Learning and Data Science Engineer at CapTech, you will spearhead the design and implementation of innovative, data-driven solutions for our clients, focusing on the development and deployment of scalable machine learning systems within enterprise frameworks. Our collaborative environment fosters continuous learning and knowledge sharing with fellow analysts, architects, and clients.

Key Responsibilities:
- Collaborate with clients, data scientists, engineers, and cross-functional teams to devise end-to-end machine learning solutions that address evolving business needs.
- Provide technical guidance and ensure alignment of technical solutions with client requirements.
- Translate client requirements into actionable data-driven processes and analytical models.
- Examine and transform extensive datasets hosted on leading enterprise data platforms such as AWS, Azure, and GCP.
- Develop and deploy cutting-edge analytical solutions leveraging client-specific data, including recommender systems and natural language processing.
- Optimize and productionize machine learning systems to meet client expectations for performance and scalability.
- Enhance CapTech’s Machine Learning and Data Science practices by delivering engaging presentations, crafting proposals, participating in business development initiatives, and mentoring junior data scientists and engineers.
Full-time|On-site|Chicago, Illinois, United States
Role Overview
SpotOn is looking for a Senior Data Engineer with deep experience in ClickHouse. This position is based in Chicago, Illinois. The engineer will focus on strengthening SpotOn's data infrastructure and work closely with teams across the company.

What You Will Do
- Design, build, and maintain scalable data solutions using ClickHouse.
- Work with colleagues from different departments to support new and existing products and services.
- Optimize data processing workflows for efficiency and reliability.
- Safeguard data integrity throughout the data pipeline.
- Help turn data into actionable insights that inform business strategy.

Impact
This role shapes the direction of SpotOn's data operations. The engineer's work will directly support decision-making and the evolution of SpotOn's offerings.
Full-time|$199.1K/yr - $234.1K/yr|On-site|Chicago; Dallas; Los Angeles; Minneapolis; New York; San Francisco; Seattle; Washington, D.C.
Are you prepared to make a significant impact in the realm of data engineering? West Monroe is on the lookout for a Senior Architect to become a vital part of our dynamic Technology and Experience Practice. This is a remarkable chance to partner with clients in shaping their Big Data strategies and roadmaps, while gaining insights across diverse sectors such as Healthcare, Financial Services, Insurance, Consumer & Industrial Products, and Energy & Utilities.
Mar 19, 2026