Experience Level
Mid to Senior
Qualifications
7+ years of professional experience in enterprise-level Python programming.
Bachelor's degree in Computer Science, Management Information Systems, or a related field.
Proven expertise in Data Engineering and constructing data pipelines.
Experience with web scraping utilizing tools such as Requests, Beautiful Soup, Selenium, etc.
Knowledge of Oracle/PL SQL development, including stored procedures.
Strong understanding of object-oriented design, design patterns, and SOA architectures.
Previous experience in an Agile/Scrum environment.
Familiarity with peer-reviewing, code versioning, and bug/issue tracking tools.
Proficient in using Pandas and NumPy libraries.
Excellent communication skills, both written and verbal.
Experience in the Commodities/Energy sector is preferred.
Familiarity with containerization technologies such as Docker and Kubernetes is a plus.
About the job
Vitol is seeking a skilled and driven Python Data Engineer to enhance our data assets and support our analytical initiatives on a full-time basis. In this role, you will collaborate closely with traders, analysts, researchers, and data scientists to define requirements and fulfill diverse data-related needs.
Key Responsibilities:
Develop modular, reusable components to facilitate communication between external data sources, internal tools, and databases.
Engage with business stakeholders to clarify requirements for data ingestion and accessibility.
Convert business requirements into actionable technical solutions.
Ensure the integrity and organization of the Vitol Python codebase, adhering to established designs and coding standards.
Enhance our developer tools and Python ETL toolkit through standardization and consolidation of core functionalities.
Effectively coordinate efforts with our global team.
Participate in Vitol’s Python development community and act as a liaison to support our expanding business development initiatives.
About Vitol
Vitol is a global leader in the energy and commodities sector, dedicated to producing, managing, and delivering energy and commodities to consumers and industries worldwide. With over $10 billion invested in long-term infrastructure assets, Vitol continues to expand its operations globally. Our esteemed clientele includes national oil companies, multinational corporations, leading industrial firms, and utility providers. Established in Rotterdam in 1966, Vitol now operates from around 40 offices worldwide, generating revenues of approximately $400 billion in 2023.
Our people are our business. We value talent and strive to foster an environment where individuals can achieve their full potential, free from hierarchical constraints. Our diverse team comprises over 65 nationalities, and we are committed to promoting and maintaining a diverse workforce.
Part-time|$32/hr|Remote — Virginia, United States
Mindrift seeks a Freelance Python Data Scraping Engineer to support specialized data workflows for the Tendem project. This is a remote, part-time contract based in Virginia, United States. The role centers on complex data extraction, quality control, and collaboration with Tendem Agents who automate routine tasks.
Role overview
This position involves designing and managing data scraping operations for dynamic and interactive websites. The focus is on delivering structured, reliable datasets that meet project requirements. As an "AI Pilot," the engineer will handle the more challenging aspects of extraction while ensuring data quality and actionable outcomes.
What you will do
Lead end-to-end extraction from complex sites, producing accurate and structured data.
Utilize internal tools like Apify and OpenRouter, along with custom Python workflows, for collecting, validating, and processing data.
Adjust techniques to extract information from JavaScript-heavy or interactive sources.
Perform validation checks and consistency controls to maintain high data quality before delivery.
Scale operations for large datasets by batching or parallelizing tasks, monitor for failures, and adapt to changes in website structures.
Compensation
Rates reach up to $32 per hour, based on experience, speed, and project complexity. Actual pay may vary depending on the specific assignment and required skills.
How to apply
Submit an application to this post and complete the qualification steps. Candidates who qualify may join projects that fit their technical background and availability. Assignments include coding, automation, and refining AI-driven outputs, contributing to practical AI solutions.
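The scaling pattern these postings describe (batching a large URL list and parallelizing fetches while tracking failures) can be sketched in plain Python. This is a minimal illustration, not any platform's actual tooling; `fetch_page` is a hypothetical stand-in for a real downloader built on Requests or similar.

```python
from concurrent.futures import ThreadPoolExecutor, as_completed


def batched(items, size):
    """Yield successive chunks of `items`, each with at most `size` elements."""
    for i in range(0, len(items), size):
        yield items[i:i + size]


def scrape_all(urls, fetch_page, batch_size=50, workers=8):
    """Fetch every URL in parallel batches, separating successes from failures.

    `fetch_page` is any callable url -> data. Failures are recorded rather
    than aborting the run, so one bad page cannot sink a whole batch.
    """
    results, failures = {}, {}
    for batch in batched(urls, batch_size):
        with ThreadPoolExecutor(max_workers=workers) as pool:
            futures = {pool.submit(fetch_page, url): url for url in batch}
            for future in as_completed(futures):
                url = futures[future]
                try:
                    results[url] = future.result()
                except Exception as exc:  # record the error and keep going
                    failures[url] = str(exc)
    return results, failures
```

A monitoring step would inspect `failures` after each batch and raise an alert when the failure rate crosses a threshold.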
Part-time|$32/hr|Remote — San Antonio, Texas, United States
Mindrift brings together technical specialists and AI-driven projects from major technology innovators. The platform’s goal is to blend generative AI with real-world expertise from a global network.
Role overview
This part-time, contract position focuses on building and maintaining Python-based data scraping workflows for the Tendem project. The role is open to candidates in San Antonio, Texas, or working remotely from anywhere in the United States. As a Freelance Python Data Scraping Engineer, you will work within a hybrid system that combines AI and human input. Internally, this position is known as an AI Pilot, collaborating with Tendem Agents who manage routine tasks. The AI Pilot uses domain expertise and quality assurance skills to deliver reliable, actionable datasets.
What you will do
Oversee end-to-end data extraction workflows on complex websites, ensuring accurate and well-structured results.
Utilize internal tools such as Apify and OpenRouter, along with custom Python scripts, to manage data collection, validation, and processing.
Adjust extraction methods for dynamic or JavaScript-heavy sites, refining techniques as website behaviors evolve.
Apply data quality checks, including validation, cross-source consistency, and adherence to formatting standards before delivering datasets.
Scale up scraping operations for large data sets using batching or parallelization, monitor workflow stability, and address errors from minor site changes.
Requirements
Minimum of 3 years’ experience in data engineering, web scraping, automation, or a closely related technical field.
Strong Python programming skills and practical experience with data extraction tools.
Demonstrated ability to think critically and solve problems.
High attention to detail and a strong focus on data quality.
Compensation
Earn up to $32 per hour, depending on experience and contribution speed. Actual pay depends on project scope, complexity, and required expertise. Other projects may offer different rates.
How to apply
Submit your application to this post. Qualified candidates may be invited to work on projects that align with their technical skills and availability. Projects include coding, automation, and optimizing AI outputs, contributing directly to practical AI applications.
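The quality-control duties listed above (validation, cross-source consistency, formatting standards) amount to running every record through a set of checks before delivery. A minimal sketch under assumed record shapes; the field names and tolerance are illustrative, not project specifics.

```python
def validate_record(record, required_fields=("id", "price", "url")):
    """Return a list of problems with one scraped record (empty = clean)."""
    problems = []
    for field in required_fields:
        if record.get(field) in (None, ""):
            problems.append(f"missing {field}")
    price = record.get("price")
    if price is not None and not isinstance(price, (int, float)):
        problems.append("price is not numeric")
    return problems


def cross_source_consistent(a, b, tolerance=0.05):
    """Compare two {id: price} mappings from different sources.

    Returns the ids whose prices disagree by more than `tolerance`
    (relative), i.e. the records that need manual review before delivery.
    """
    disagreements = []
    for key in a.keys() & b.keys():
        pa, pb = a[key], b[key]
        if pa and abs(pa - pb) / abs(pa) > tolerance:
            disagreements.append(key)
    return sorted(disagreements)
```

In a real pipeline these checks would run as a gate between extraction and delivery, with failing records quarantined rather than shipped.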
Jobgether is seeking a Senior Python Data Scraping Engineer for a remote freelance role based in the US. This position involves working with a partner company to deliver high-quality data extraction solutions.
Role overview
The main focus is on building and maintaining web data extraction systems that can handle both scale and complexity. Projects often involve scraping dynamic websites and processing large datasets, with a strong emphasis on accuracy and reliability. The workflow combines AI-driven agents with human oversight to ensure quality control as requirements shift.
What you will do
Develop and support scalable, reliable web scraping systems using Python
Tackle advanced scraping challenges, including dynamic content and sizable data volumes
Deliver well-structured data for downstream processes
Collaborate within a hybrid AI-human workflow to maintain accuracy and quality
Adjust scraping strategies as websites and project needs change
Requirements
Significant experience with Python in web scraping contexts
Strong problem-solving skills and adaptability in evolving technical settings
Ability to work independently and deliver precise, high-quality outcomes
Background in managing large, complex datasets
Benefits
Fully remote freelance arrangement with flexible scheduling
Engage with projects that support advanced AI and analytics initiatives
Contribute to dependable datasets for emerging technologies
Part-time|$32/hr|Remote — Iowa, United States
Mindrift brings together professionals from around the world to work on AI projects for major technology companies. The team’s focus is on advancing Generative AI by connecting specialists with real-world expertise.
Role overview
This part-time, remote contract is for a Freelance Python Data Scraping Engineer (AI Pilot) supporting the Tendem project. Candidates must be based in Iowa, United States. The work centers on managing and carrying out web data extraction tasks, collaborating closely with Tendem Agents, and applying critical thinking to ensure the accuracy and relevance of collected data. Quality assurance is a key part of the position.
What you will do
Manage end-to-end data extraction workflows for complex websites, delivering structured datasets with precision and reliability.
Use internal tools such as Apify and OpenRouter, along with custom workflows, to collect, validate, and process data according to project needs.
Adapt scraping methods for dynamic web sources, including handling JavaScript-rendered content and responding to changing site behaviors.
Apply strict data quality standards, running validation checks and systematic verification before delivering results.
Scale operations for large datasets using batching or parallelization, monitor for failures, and maintain stability when site structures change.
Requirements
Minimum 3 years of experience in data engineering, web scraping, automation, or a related field.
Compensation
Earn up to $32 per hour, based on expertise and contribution speed. Actual pay may vary depending on project scope and complexity, and may differ across projects on the Mindrift platform.
How to apply
Submit an application through this posting to be considered for projects that fit your technical background and availability. Work may involve coding, automation, or refining AI outputs, all contributing to AI advancement and practical use cases.
Part-time|$32/hr|Remote — New York, United States
Mindrift connects technical experts with AI-driven projects, focusing on the intersection of generative AI and specialized human knowledge. The company partners with technology leaders to deliver real-world solutions powered by collaborative intelligence.
Role overview
This part-time, remote contract position centers on advanced data extraction and processing for the Tendem project. As a Freelance Python Data Scraping Engineer (called "AI Pilot" at Mindrift), the role works within an AI-human hybrid system. Engineers collaborate with Tendem Agents, who handle routine tasks, while focusing on critical thinking and applying domain expertise to produce accurate, actionable data insights.
What you will do
Run end-to-end workflows to extract data from complex websites, delivering structured datasets with high accuracy.
Use internal tools such as Apify and OpenRouter, as well as custom-built solutions, to collect, validate, and process data according to project needs.
Adapt extraction methods for dynamic or JavaScript-rendered content and evolving site structures.
Ensure data quality through validation checks, cross-source comparisons, formatting standards, and systematic verification before delivery.
Optimize large-scale scraping with batching or parallelization, monitor for failures, and maintain resilience to minor layout changes.
Requirements
Minimum of 3 years of experience in data engineering, web scraping, automation, or a closely related technical field.
Compensation
Earn up to $32 per hour on this project, depending on experience and pace. Actual rates may vary by project scope, complexity, and required skills. Other projects on the Mindrift platform may offer different compensation based on their needs.
How to apply
Submit an application to this posting. After qualifying, join projects that fit your technical strengths and work on a flexible schedule. Tasks may include coding, automation, and refining AI outputs, all contributing to the advancement of AI and its real-world uses.
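"Resilience to minor layout changes," mentioned above, is often achieved by trying several extraction strategies in order instead of relying on one brittle selector. A sketch of that fallback pattern; the regex extractors below stand in for the CSS/XPath selectors a real scraper would use, and the field names are illustrative.

```python
import re


def first_match(extractors, html):
    """Try extraction strategies in order; return the first non-empty hit.

    Each extractor is a callable html -> value-or-None, ordered from the
    most precise selector down to the loosest fallback.
    """
    for extract in extractors:
        value = extract(html)
        if value:
            return value
    return None


# Illustrative extractors: a precise pattern first, then looser fallbacks
# that still work if the site renames a class or restructures its markup.
PRICE_EXTRACTORS = [
    lambda h: (m := re.search(r'class="price">\$([\d.]+)<', h)) and m.group(1),
    lambda h: (m := re.search(r'data-price="([\d.]+)"', h)) and m.group(1),
    lambda h: (m := re.search(r'\$([\d.]+)', h)) and m.group(1),
]
```

When a page only matches a late fallback, that is itself a useful signal that the site's structure has drifted and the primary selector needs updating.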
Part-time|$32/hr|Remote — Wisconsin, United States
Mindrift connects specialists with AI-driven projects from technology innovators. The company blends generative AI with expertise from contributors worldwide.
Role overview
This part-time, remote position is open to candidates based in Wisconsin, United States. As a Freelance Python Data Scraping Engineer, you will support the Tendem project by executing specialized data scraping workflows within an AI and human collaboration system. Internally, this role is called an AI Pilot. You will work closely with Tendem Agents to tackle repetitive tasks, applying critical thinking and domain knowledge to deliver accurate, actionable data. Consistent quality control and attention to detail are essential in this role.
Main responsibilities
Manage end-to-end data extraction workflows across complex websites, ensuring thorough coverage and accuracy.
Use internal tools such as Apify and OpenRouter, along with custom-built workflows, to accelerate data collection, validation, and task completion.
Adapt extraction strategies for dynamic web sources, including those with JavaScript-rendered content or changing structures.
Apply validation checks and systematic verification to maintain data quality before delivering results.
Scale scraping operations for large datasets using efficient methods that remain stable as target sites evolve.
Compensation
Earn up to $32 per hour, depending on experience and pace of contribution. Actual pay may vary by project scope and complexity.
Application process
Submit your application and demonstrate your technical skills to be considered. This freelance role offers the chance to contribute to real-world AI projects while working on a flexible schedule.
Lead Python Engineer - Data Infrastructure
About AscentAI
AscentAI is at the forefront of developing intelligent software solutions tailored for risk and compliance teams within financial institutions. Our innovative platform simplifies complex regulatory information into actionable insights, empowering teams to mitigate risks, enhance operational efficiency, and proactively adapt to changes in global regulations. As a vibrant, mission-driven organization, we are pushing the limits of machine learning and artificial intelligence, combined with human-in-the-loop systems, to tackle some of the most challenging issues in regulatory compliance.
The Role
We are seeking a skilled Python Engineer to join our dynamic team. In this pivotal role, you will lead the design and development of robust, large-scale web scraping platforms that underpin AscentAI's data infrastructure. You will work collaboratively with fellow engineers and analysts to define data requirements, architect efficient data pipelines, and ensure the delivery of reliable, high-quality data at scale. Your expertise will also be critical in advising on scraping strategies, counteracting anti-bot measures, and implementing best practices in data extraction for cross-functional stakeholders in engineering, data science, and product development. This is a significant role that offers ownership and visibility, providing an opportunity to influence our technical architecture and overall business success.
What You’ll Do
Lead the design and development of large-scale web scraping platforms using Python and related frameworks.
Mentor junior developers, providing technical guidance and conducting code reviews to ensure high-quality and maintainable code.
Devise advanced strategies to navigate and overcome sophisticated anti-bot defenses such as CAPTCHAs, Cloudflare, and IP blocking, while adhering to legal and ethical standards and website terms of service.
Collaborate with data analysts and engineers to establish data requirements and facilitate seamless data integration into databases.
Optimize scrapers for performance, speed, and stability; set up real-time monitoring and alert systems to quickly respond to failures or changes in target sites.
Create comprehensive technical documentation and engage effectively with cross-functional teams to ensure alignment and manage expectations.
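The monitoring and alerting responsibilities above usually pair with retry logic: transient failures are retried with exponential backoff, and only persistent failures trigger an alert. A minimal sketch; `alert` is a hypothetical hook that production code would wire to paging or a chat channel.

```python
import time


def with_retries(task, attempts=3, base_delay=0.1, alert=print, sleep=time.sleep):
    """Run `task` with exponential backoff; alert once if all attempts fail.

    `sleep` is injectable so tests can skip the real waiting. The delay
    doubles on each failed attempt: base_delay, 2x, 4x, and so on.
    """
    for attempt in range(1, attempts + 1):
        try:
            return task()
        except Exception as exc:
            if attempt == attempts:
                alert(f"task failed after {attempts} attempts: {exc}")
                raise
            sleep(base_delay * 2 ** (attempt - 1))
```

A scraper would wrap each page fetch in `with_retries`, so one flaky response does not surface as a failure while a persistently broken target still pages someone.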
Join 10alabs as a Data Engineer specializing in web scraping, where you will play a pivotal role in gathering and processing data from diverse online sources. Your expertise will help drive our data-driven decision-making processes, enhance our data pipelines, and optimize our web scraping solutions.
Full-time|$75K/yr - $150K/yr|Remote — New Jersey, United States
Role Overview
mlabs is hiring a Web Scraping Specialist to support large-scale data extraction for AI model training. This full-time position is fully remote, but requires at least six hours of workday overlap with the Eastern Standard Time (EST) zone. The team manages distributed crawlers and complex pipelines that process billions of data points, including video, transcripts, and audio.
Compensation
Annual salary ranges from $75,000 to $150,000, depending on experience.
What You Will Do
Develop and optimize code: Build, test, and refine high-performance scraping solutions for a wide range of online sources, with a focus on reliability and efficiency.
Oversee data retrieval: Manage complex extraction tasks, including handling pagination and dynamic content such as AJAX-loaded pages.
Ensure data quality: Clean and format collected data to meet strict standards for downstream analysis and processing.
Database management: Organize and store scraped data in appropriate databases, with attention to speed and long-term integrity.
Monitor and maintain systems: Continuously track scraping operations and infrastructure to quickly identify and resolve issues, maintaining steady data flow.
Work Environment
This role suits someone who enjoys technical challenges and prefers working without heavy bureaucracy. Expect a hands-on, collaborative setting focused on delivering results.
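Pagination handling of the kind described above typically loops on a "next page" cursor until the source is exhausted. A sketch with `get_page` as a hypothetical stand-in for the real HTTP call against a paginated or AJAX endpoint; the cursor protocol is an assumption for illustration.

```python
def collect_paginated(get_page, max_pages=1000):
    """Walk a cursor-paginated source and accumulate all items.

    `get_page` is any callable cursor -> (items, next_cursor); a
    next_cursor of None signals the last page. `max_pages` guards
    against endless loops on misbehaving endpoints.
    """
    items, cursor = [], None
    for _ in range(max_pages):
        page_items, cursor = get_page(cursor)
        items.extend(page_items)
        if cursor is None:
            break
    return items
```

The same loop shape works whether the cursor is a page number, an opaque token from a JSON API, or a "load more" offset scraped out of the page itself.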
Join our dynamic team at Prosper as a Senior Software Engineer specializing in Python for Data Platforms. In this role, you will be instrumental in designing and developing robust data solutions that drive impactful business decisions. You will work closely with cross-functional teams to leverage data for innovative product development.
Full-time|$141K/yr - $232K/yr|Remote — United States
About ClickHouse
Ranked among the 2025 Forbes Cloud 100, ClickHouse stands as a leader in the private cloud sector, recognized for its innovative solutions and rapid growth. With a customer base exceeding 3,000 and an annual recurring revenue (ARR) increase of over 250% year-on-year, ClickHouse excels in real-time analytics, data warehousing, observability, and AI workloads. The company’s remarkable trajectory was further affirmed by a recent $400M Series D funding round. In just the past three months, notable clients such as Capital One, Lovable, Decagon, Polymarket, and Airwallex have embraced our platform, expanding their existing deployments. These businesses join a prestigious roster of AI pioneers and global leaders, including Meta, Cursor, Sony, and Tesla. Our mission is to revolutionize data utilization across industries. Join us on this exciting journey!
The Connectors team serves as the crucial link between ClickHouse and the broader data ecosystem. We develop and maintain integrations that make ClickHouse accessible to millions of developers, data professionals, and AI agents globally, from high-level data visualization tools like Tableau, PowerBI, Superset, and Metabase to connectors for frameworks such as Apache Spark, Flink, Kafka Connect, and Fivetran, as well as orchestration platforms and AI tools. Our initiatives directly influence how organizations manage colossal datasets: real-time analytics platforms processing millions of events per second, observability systems that monitor global infrastructure, and increasingly, AI-enhanced data applications transforming team workflows. We work closely with the open-source community, internal teams, and enterprise clients to ensure that ClickHouse integrations exemplify superior performance, reliability, and user experience.
About the Role
As a Senior Software Engineer with a focus on Python and the Data Ecosystem, you will be a key contributor, owning and advancing critical components of ClickHouse’s data engineering ecosystem. This role merges high-performance database engineering with developer experience, enabling Data Engineers and Data Scientists to leverage ClickHouse’s speed and scalability within their preferred frameworks. We seek an individual who has firsthand experience as a Data Engineer or Data Scientist. The landscape for data practitioners is evolving rapidly: databases are no longer mere query targets but active participants in AI-driven workflows, acting as vector stores for RAG pipelines, backends for LLM-powered agents, and real-time feature stores for ML inference. You understand these processes intimately, having worked within them. Your contributions extend beyond building integrations; you provide product-level insights that enhance the developer experience.
Are you passionate about Python and eager to share your expertise with a vast community of developers? The Real Python tutorial team is renowned for delivering top-tier Python tutorials online. Our mission is to empower Python developers globally to enhance their skills. With over 3 million monthly visitors, we are excited about our journey thus far, but believe we can reach even greater heights!
To elevate our tutorial quality and broaden our content offerings, we are seeking enthusiastic video course instructors who:
Have a deep love for Python and a desire to assist learners in advancing their skills
Understand the significance of clarity and tone in educational video content
Aspire to refine their craft and leverage our comprehensive publishing process
Can commit to producing one or more new recordings each month and adhere to deadlines
This position is fully remote. For more details, visit: realpython.com/jobs/video-course-instructor
Ideal candidates will:
Possess several years of programming experience
Be passionate about teaching programming concepts and have experience recording screencasts. The content you create will primarily be derived from existing written tutorials, so your ability to transform written material into engaging short videos is crucial.
Have the ability to integrate Real Python into their weekly routine, as this role requires a notable time commitment.
Joining the Real Python team comes with numerous benefits:
Continuous Learning: Engage in ongoing learning and enjoy the process, enhancing your skills as a developer, writer, and communicator while forming valuable connections.
Wide Reach: Our website attracts significant traffic, with over 3 million visitors monthly and consistently growing. We are frequently highlighted in various Python publications and manage one of the largest email newsletters and social media channels in the community. Our YouTube channel boasts over 150,000 subscribers, ensuring that your published video series garners substantial viewership and appreciation.
Content Enhancement: Upon submission of a video series to Real Python, our dedicated team will work with you to ensure the highest quality output.
Join MongoDB as a Senior Python Engineer and become a key player in innovating and enhancing our database solutions. In this role, you will leverage your expertise in Python to develop robust applications, contribute to system architecture, and collaborate with cross-functional teams to deliver high-quality software.
Job Overview:
Join our dynamic team as a Senior Data Engineer, where you will play a pivotal role in designing and implementing robust data pipelines, enterprise data products, and AI-driven tools. Your expertise will be essential in deploying API and integration platforms, as well as building seamless system integrations across Mariner’s suite of enterprise platforms.
In this role, you will harness ETL/ELT tools and methodologies, bridging cloud and legacy databases, APIs, and automation systems to create scalable business solutions. You will lead our CI/CD and Infrastructure as Code (IaC) strategies, mentor junior engineers, conduct code reviews, manage development projects, and uphold software lifecycle best practices. Collaborating closely with teams in Data Management, Business Intelligence, Operations, Trading, Compliance, and IT, you will contribute to Mariner’s expanding portfolio of businesses.
Full-time|$130K/yr - $160K/yr|On-site|Salt Lake City, Utah, United States
About the Role
At iCapital Network, we are seeking a skilled Python Automation Engineer with a proven background in web scraping, automated portal interactions, and cloud-native deployment in AWS. The successful candidate will possess extensive hands-on experience with Playwright for browser automation, adeptly managing multi-factor authentication (MFA) flows, and deploying scalable scraping tasks using AWS Lambda and associated services. Your role will involve architecting and developing robust, secure, and scalable scraping solutions that effectively interact with complex web applications and secure portals.
Responsibilities
Collaborate in the design and implementation of sophisticated scraping solutions utilizing Python, Playwright, and AWS services.
Automate interactions with JavaScript-dense and authentication-secured websites, effectively handling MFA, CAPTCHAs, and session or token-based login flows.
Architect scraping pipelines employing serverless AWS components such as Lambda, Step Functions, S3, CloudWatch, and Secrets Manager.
Develop systems capable of scaling to support high volumes of data extraction with fault tolerance, retries, and advanced logging mechanisms.
Integrate and manage intricate workflows across multiple portals, APIs, and data sources.
Contribute to architectural decisions, tooling, and best practices.
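A serverless pipeline like the one described (Lambda functions behind Step Functions) usually reduces each scrape task to a small, stateless handler. The event shape and `run_scrape` below are assumptions for illustration, not iCapital's actual design; errors are returned in the payload so the orchestrator can branch on them.

```python
import json


def handler(event, context=None, run_scrape=None):
    """Minimal Lambda-style entry point for one scrape task.

    `event` carries the target URL and options; `run_scrape` (injected
    here for testability) would do the actual Playwright/HTTP work and
    return a list of records. Failures are reported in the response
    body rather than crashing, so the calling workflow can decide
    whether to retry, alert, or skip.
    """
    url = event["url"]
    try:
        records = run_scrape(url, event.get("options", {}))
        return {"statusCode": 200,
                "body": json.dumps({"url": url, "count": len(records)})}
    except Exception as exc:
        return {"statusCode": 500,
                "body": json.dumps({"url": url, "error": str(exc)})}
```

Keeping the handler this thin also makes it easy to run the same scrape logic locally, outside Lambda, during development.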
Join Chime as a Senior Python Core Engineer, where you will play a pivotal role in developing and enhancing our core banking platform. This is an exciting opportunity to leverage your expertise in Python and contribute to innovative solutions that redefine the banking experience for our members.
About the Position
As a Senior Data Acquisition Engineer, you will take ownership of and enhance the systems designed for large-scale web data collection. Your responsibilities will include the design and maintenance of high-performance scraping and ingestion infrastructure that allows various teams to systematically add and manage data sources with reliability. This role is integral to the Data Engineering team, concentrating on creating scalable, observable, and resilient systems that effectively manage production data. You will collaborate closely with data engineers and product teams to elevate data pipelines and ensure that data acquisition processes scale alongside business growth. Join us to shape the future of AI-driven technologies and contribute significantly to impactful real-world applications. This position is based in our NYC office and operates under a hybrid working model.
About Us:
At Wynd Labs, we are at the forefront of building infrastructure that provides vast amounts of web data to organizations developing the world's leading AI models. Our team plays a pivotal role in supporting Grass, a bandwidth-sharing network that enables us to operate an extensive distributed crawler, granting us unique access to high-quality public web data on a global scale. We have also established pipelines for processing and annotating billions of videos, transcripts, and audio files, facilitating dataset creation for cutting-edge laboratories. We pride ourselves on being a nimble and technical team, characterized by swift decision-making and a focus on innovation. Join us as we redefine what is achievable in the realm of open web data and artificial intelligence.
Position Overview:
We are looking for a skilled Web Scraping Specialist with substantial experience in data extraction and web scraping methodologies. You will be part of a small, dedicated team responsible for collecting and analyzing data, enhancing scraping processes, and advancing our mission of transforming internet data accessibility through Grass.
We are in search of a highly skilled and innovative Data Engineer to join our dynamic team. As a pivotal technical leader, you will:
Be the go-to expert in your team, guiding projects with your technical acumen.
Conquer complex challenges that others find daunting.
Deliver intricate features at an unparalleled pace.
Produce exceptionally clean and maintainable code.
Enhance the quality of our entire codebase.
If you're an exceptional developer with a proven track record, we want to hear from you! This role requires a unique blend of skills and experience, designed for the best in the field.
Responsibilities:
Develop, optimize, and scale data pipelines and infrastructure utilizing technologies such as Python, TypeScript, Apache Airflow, PySpark, AWS Glue, and Snowflake.
Design, operationalize, and oversee ingestion and transformation workflows, including DAGs, alerting, retries, SLAs, lineage, and cost controls.
Partner with platform and AI/ML teams to automate ingestion, validation, and real-time compute workflows, contributing towards a feature store.
Integrate pipeline health and metrics into engineering dashboards for enhanced visibility and observability.
Model data and execute efficient, scalable transformations using Snowflake and PostgreSQL.
Create reusable frameworks and connectors to standardize internal data publishing and consumption.
Sep 8, 2025