Experience Level
Senior
Qualifications
Strong proficiency in Python and experience with machine learning frameworks such as TensorFlow or PyTorch.
Proven experience in AI and data science projects, particularly in the logistics sector.
Excellent problem-solving skills and the ability to work collaboratively in a fast-paced environment.
Familiarity with cloud platforms (AWS, Google Cloud) is a plus.
Strong communication skills, both written and verbal.
About the job
Delivery Hero is looking for a Senior Python AI Engineer to focus on logistics, data, and machine learning projects in Berlin. This position centers on developing AI-powered tools that support and improve delivery operations.
Role overview
This role involves designing and building applications using Python, with a strong emphasis on artificial intelligence and data science. The main goal is to create solutions that make logistics processes more efficient and help raise customer satisfaction.
What you will do
Develop and maintain AI-driven applications tailored for logistics challenges
Apply machine learning techniques to optimize delivery workflows
Work closely with teams to identify and solve data-related problems
Requirements
Extensive experience with Python
Background in artificial intelligence, data science, and machine learning
Ability to build solutions that improve operational efficiency
About Delivery Hero SE
Delivery Hero is a global leader in the food delivery industry, connecting millions of customers with their favorite restaurants. Our mission is to provide an outstanding delivery experience through cutting-edge technology and a relentless focus on innovation. Join us and be part of a team that is shaping the future of logistics and e-commerce!
Data Engineer (Python)
Company Overview
Orcrist Technologies is at the forefront of innovation with the Orcrist Intelligence Platform (OIP), a cutting-edge data intelligence system built on Kubernetes. Our platform is available as a SaaS solution or can be deployed on-premises, including air-gapped setups. We manage both streaming and batch data pipelines that empower search functionalities, machine learning enrichment, and investigative workflows for our mission-critical clientele.
Role Summary
As a Data Engineer, you will play a pivotal role in quickly validating new data initiatives from inception to deployment, ensuring they are adoptable and scalable. You will prototype effective connectors and pipelines, generate performance assessments, and create handoff packages for productization by our Foundation or delivery team.
Key Responsibilities
Prototype ingestion and connector patterns (batch and streaming) using NiFi, Kafka, Kafka Connect/Streams, and Change Data Capture approaches.
Design schemas and data models that are both prototype-grade and easily adoptable, ensuring semantic clarity and a disciplined approach to evolution.
Develop incremental lakehouse datasets using Hudi, Iceberg, and Delta patterns, producing outputs for real-world latency and throughput evaluations.
Implement data quality and provenance considerations early in the process, incorporating checks, metadata hooks, and operational basics.
Containerize and deploy prototypes on Kubernetes, providing minimal runbooks and configurations for seamless adoption.
Create adoption artifacts including schemas, reference implementations, technical design notes, and a backlog for integration.
Qualifications
Minimum of 3 years of experience in data engineering, with a proven track record of delivering real-world data pipelines beyond ad-hoc scripts.
Proficient in Python and SQL, skilled in building transformations, validation tools, and pipeline integration code.
Solid understanding of streaming and Change Data Capture fundamentals, along with experience in the Kafka ecosystem.
Familiar with lakehouse architectures and query layers (e.g., Hudi, Iceberg, Delta, Trino, Hive, Postgres) and their role in making datasets accessible.
Comfortable working in Kubernetes and container environments, and adept at documenting technical decisions clearly.
Must be eligible to work in Germany; EU/NATO citizenship is preferred, and export-control screening will apply.
Preferred Qualifications
Experience with data quality tools such as Great Expectations or metadata/lineage platforms (OpenMetadata, DataHub, Atlas).
Experience with on-premises or air-gapped deployments and awareness of governance and policy for regulated environments.
Proficiency in German (B1+) and familiarity with OSINT, GEOINT, or multi-INT data structures.
What We Offer
A modern data stack with real-world constraints: Kafka, NiFi, and more.
Role Overview
Jobgether is hiring a Senior Python Data Scraping Engineer for a freelance position based in Germany. This role focuses on designing, building, and maintaining web data extraction systems that serve both AI-powered and human-driven workflows. The position is fully remote within Germany and values contributors who work independently and pay close attention to detail.
What You Will Do
Create and improve scalable Python scraping solutions for dynamic websites and large datasets.
Produce structured, accurate data ready for advanced analytics and AI use cases.
Adjust scraping methods to keep up with changing web technologies and site structures.
Collaborate with AI agents, maintaining high standards through validation and quality checks.
Combine hands-on coding with analytical problem-solving to tackle complex extraction issues.
Requirements
Advanced Python skills, especially in web scraping and automation.
Background in extracting data from dynamic websites and managing large-scale projects.
Ability to adapt techniques as web environments evolve.
Comfort working independently and taking responsibility for project outcomes.
Keen attention to detail and a strong focus on data quality.
Work Arrangement
Freelance contract
Remote work within Germany
Flexible schedule with emphasis on autonomy
Part-time | $32/hr | Remote (Stuttgart, Baden-Württemberg, Germany)
Mindrift is looking for a Freelance Python Data Scraping Engineer to join its remote team, supporting projects from Stuttgart, Baden-Württemberg, Germany. This part-time, freelance position centers on the Tendem project, where engineers contribute as "AI Pilots" to streamline and improve data workflows.
Role overview
This role focuses on extracting, processing, and validating data from complex and dynamic web sources. The engineer collaborates with Tendem Agents, applying analytical skills and domain expertise to ensure data is accurate and relevant. Work is fully remote and designed for professionals with a strong background in web scraping and data processing.
What you will do
Lead data extraction projects on complex websites, delivering structured datasets that meet quality standards.
Use internal tools such as Apify and OpenRouter, along with custom workflows, to collect, verify, and process data according to project needs.
Develop and refine scraping methods for dynamic and interactive sites, including those with JavaScript-rendered content or frequent changes.
Maintain high data quality through validation checks, consistency controls, formatting compliance, and systematic verification.
Optimize large-scale scraping using batching or parallel processing, and build resilience against minor site structure changes.
Requirements
Minimum 3 years of hands-on experience in data engineering, web scraping, and automation.
Compensation
Contributors can earn up to $32 per hour, depending on project complexity, performance, and expertise. Actual rates may vary between projects.
How to apply
Submit an application through the provided link. Qualified candidates are matched with projects that suit their technical strengths and can set their own schedules. Work includes programming, automation, and refining AI outputs, supporting the practical use of AI in real-world scenarios.
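The validation-check responsibility above can be sketched in plain Python. This is an illustrative example only, not Mindrift or Tendem tooling; the schema (`title`, `url`, `price`) and the rules are invented for the illustration.

```python
# Hypothetical sketch: validating scraped records before delivery,
# with simple completeness and consistency checks.
import re

REQUIRED_FIELDS = {"title", "url", "price"}  # invented schema

def validate_record(record: dict) -> list[str]:
    """Return a list of validation errors; an empty list means the record passes."""
    errors = []
    missing = REQUIRED_FIELDS - record.keys()
    if missing:
        errors.append(f"missing fields: {sorted(missing)}")
    url = record.get("url", "")
    if url and not re.match(r"https?://", url):
        errors.append(f"malformed url: {url!r}")
    price = record.get("price")
    if price is not None and not isinstance(price, (int, float)):
        errors.append(f"non-numeric price: {price!r}")
    return errors

records = [
    {"title": "Widget", "url": "https://example.com/w", "price": 9.99},
    {"title": "Gadget", "url": "example.com/g", "price": "n/a"},
]
# Keep only records that pass every check.
clean = [r for r in records if not validate_record(r)]
print(len(clean))  # 1 record passes
```

In a real pipeline these checks would typically run per batch, with rejected records logged for review rather than silently dropped.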
Contract | $58/hr | Remote (Stuttgart, Baden-Württemberg, Germany)
Please submit your CV in English and indicate your English proficiency level. toloka-ai, working with Mindrift, connects data science professionals to project-based contract work in AI. Assignments support technology companies as they develop, test, and evaluate AI systems. This is not a permanent employment role; work is structured around specific projects.
Role overview
Design computational data science problems that mirror analytics workflows in industries such as telecom, finance, government, e-commerce, and healthcare.
Create problems requiring Python programming, using libraries like Pandas, Numpy, Scipy, Scikit-learn, Statsmodels, Matplotlib, and Seaborn.
Ensure challenges are computationally intensive, with solutions that would take significant manual effort.
Develop scenarios involving complex data processing, statistical analysis, feature engineering, predictive modeling, and extracting insights.
Write deterministic problems with reproducible outcomes by avoiding randomness or using fixed seeds.
Base challenges on real business needs such as customer analytics, risk assessment, fraud detection, forecasting, optimization, and operational efficiency.
Design end-to-end data science tasks covering data ingestion, cleaning, exploratory analysis, modeling, validation, and deployment considerations.
Include big data scenarios that require scalable computational solutions.
Validate solutions in Python using standard data science libraries and statistical techniques.
Document each problem clearly, provide realistic business context, and supply verified correct answers.
Requirements
Minimum 5 years of hands-on data science experience with demonstrated business impact.
Portfolio of completed projects or publications showing practical problem-solving.
Advanced Python programming for data science, including Pandas, Numpy, Scipy, Scikit-learn, and Statsmodels.
Strong background in statistical analysis and machine learning, with knowledge of key algorithms and applications.
Solid SQL skills for data manipulation and analysis.
Familiarity with GenAI tools and concepts (LLMs, RAG, prompt engineering, vector databases).
Understanding of MLOps and model deployment workflows.
Experience with frameworks such as TensorFlow, PyTorch, and LangChain.
Excellent written English (C1+ level).
Application process
Apply
Pass qualifications
Join a project
Complete tasks
Receive compensation
This freelance, remote position is open to candidates based in Stuttgart, Baden-Württemberg, Germany, as well as those working remotely.
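The determinism requirement ("reproducible outcomes by avoiding randomness or using fixed seeds") can be illustrated with a short, self-contained sketch. The dataset parameters and the toy "problem" below are invented for illustration; only the seeding pattern is the point.

```python
# Hedged sketch: a generated task is deterministic because the random
# generator is seeded, so the verified answer is reproducible on every run.
import random
import statistics

def generate_task(seed: int = 42) -> list[float]:
    rng = random.Random(seed)          # isolated, seeded generator
    return [rng.gauss(mu=100, sigma=15) for _ in range(1000)]

def solve(data: list[float]) -> float:
    # Toy "problem": mean of observations more than one sample std dev
    # above the sample mean.
    cutoff = statistics.mean(data) + statistics.stdev(data)
    high = [x for x in data if x > cutoff]
    return round(statistics.mean(high), 4)

# Two independent runs yield identical data, hence identical answers.
assert generate_task() == generate_task()
print(solve(generate_task()))
```

Using `random.Random(seed)` instead of the module-level functions keeps the generator isolated, so other code cannot perturb the sequence between runs.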
About ClickHouse
Listed among the top innovators on the 2025 Forbes Cloud 100 list, ClickHouse is a leading, rapidly expanding private cloud company. With over 3,000 customers and annual recurring revenue growth exceeding 250%, ClickHouse is at the forefront of real-time analytics, data warehousing, observability, and AI workloads. Our momentum was recently reinforced by a successful $400 million Series D funding round. In the past three months, renowned clients such as Capital One, Lovable, Decagon, Polymarket, and Airwallex have adopted or expanded their use of our platform, joining a roster of AI innovators and global brands including Meta, Cursor, Sony, and Tesla. Join us in our mission to revolutionize the way companies leverage data!
The Connectors team serves as the vital link between ClickHouse and the vast data ecosystem. We develop and maintain integrations that make ClickHouse accessible to millions of developers, data practitioners, and AI agents worldwide, ranging from high-level data visualization plugins (like Tableau, PowerBI, Superset, Metabase) to connectors for data frameworks (Apache Spark, Flink, Kafka Connect, Fivetran), orchestration platforms, and AI tools. Our work is pivotal in shaping how organizations process massive datasets: enabling real-time analytics platforms to ingest millions of events per second, observability systems to monitor global infrastructure, and, increasingly, AI-driven data applications that redefine how teams collaborate with data. We work closely with the open-source community, our internal teams, and enterprise users to ensure that ClickHouse integrations lead the way in performance, reliability, and developer experience.
About the Role
As a Senior Software Engineer specializing in Python and the Data Ecosystem, you will be a key contributor, responsible for owning and advancing essential components of ClickHouse's data engineering ecosystem. This role sits at the crossroads of high-performance database engineering and developer experience. You will create tools that empower Data Engineers and Data Scientists to fully leverage ClickHouse's speed and scale within the frameworks they already know.
We are seeking someone with direct experience as a Data Engineer or Data Scientist. The landscape for data practitioners is evolving rapidly; databases have progressed beyond mere query targets and are now integral components of AI-powered workflows, serving as vector stores for RAG pipelines, backends for LLM-powered agents, and real-time feature stores for ML inference. You understand these workflows not from an outsider's perspective but from personal experience within them. Your role is not just to build integrations; you will contribute product-level insights that enhance the user experience.
Are you ready to take your career to the next level? Join dev2 as a Senior Data Engineer specializing in Python and play a pivotal role in our data-driven projects. You will work closely with cross-functional teams to design and implement scalable data solutions that drive innovation and efficiency.
In this role, you will:
Design, develop, and maintain data pipelines and ETL processes
Collaborate with data scientists and analysts to optimize data storage and retrieval
Ensure data quality and integrity through rigorous testing and validation
Contribute to architectural discussions and guide best practices
Join Nagarro as a Staff Engineer specializing in Python and contribute to innovative projects that shape the future of technology. As part of our dynamic engineering team, you will engage with cutting-edge technologies, collaborate with talented professionals, and drive meaningful change in a fully remote environment.
Intellectsoft is looking for a Senior Python Developer to join a remote team supporting a BaFin-regulated fintech client based in Germany. The client specializes in Crypto-as-a-Service infrastructure for institutional partners. This role is open to candidates located in Georgia and offers the chance to work fully remotely.
What you will do
Develop scalable backend systems using Python to support financial services
Create and maintain integrations that bridge traditional finance with Web3 technologies
Operate within a high-security environment shaped by regulatory requirements
Who this role suits
This position fits experienced Python developers who are comfortable tackling complex backend projects and financial integrations, particularly in regulated environments. Intellectsoft works with clients across North America, Latin America, the Nordics, the UK, and Europe, delivering digital solutions in sectors such as fintech, healthcare, education technology, construction, and hospitality. Clients include startups as well as Fortune 500 companies like Jaguar Motors, Universal Pictures, and Harley-Davidson. More information can be found at www.intellectsoft.net.
We are seeking a highly skilled Senior Data Engineer to join our dynamic team at repriskag in Berlin. In this pivotal role, you will be responsible for designing, building, and maintaining scalable data pipelines that support our data-driven decision-making processes. You will work closely with data scientists, analysts, and other stakeholders to ensure data quality and accessibility.
forvismazars is looking for a Werkstudent in Data Analytics & Python to join the team in Berlin. This role is designed for students interested in data analysis and programming, offering a chance to apply academic knowledge in a professional setting.
What you will do
Support the data analytics team on ongoing projects
Work with Python and data analysis tools
Gain practical experience in real business scenarios
Who we're looking for
Currently enrolled as a student
Interest in data analytics and programming
Eager to develop technical skills and learn from experienced colleagues
This position offers hands-on involvement in data projects and the chance to build technical expertise while working alongside professionals in the field.
LITIT is a joint venture between NTT DATA and Reiz Tech, focused on delivering IT solutions in the DACH region. The company brings together German precision, Japanese work ethics, and Lithuanian talent to provide IT services and support. This remote Data Analytics Engineer position centers on building and improving the IoT Insurance Data Platform (IDP) on AWS. The role involves designing, implementing, and maintaining scalable data pipelines and shared platform services. The work supports analytics, data products, and machine learning applications for industrial IoT and insurance clients.
Key responsibilities
Architect, deploy, and manage cloud-native data pipelines on AWS.
Develop and maintain scalable ETL workflows, data lakes, and data mesh components.
Create and optimize PySpark jobs for processing large-scale and time-series data.
Manage schemas, tables, and metadata using AWS Glue Data Catalog and Lake Formation.
Collaborate with data platform, analytics, and product teams.
Desired qualifications
Extensive hands-on experience with AWS services, especially Lambda, Glue, and S3; familiarity with Athena, Lake Formation, Step Functions, and DynamoDB is a plus.
Strong background in data engineering, including designing and executing ETL/ELT pipelines for large-scale or streaming data.
Proficiency in Spark or PySpark for distributed data processing.
Understanding of modern data formats such as Apache Iceberg and Parquet.
Proven ability to deliver production-grade, enterprise-level data solutions.
Experience with API integration, including AWS API Gateway and data exchange APIs.
Familiarity with CI/CD pipelines and automated deployment processes.
Experience working in cross-functional Scrum teams and an agile mindset.
Willingness to travel domestically or internationally as project or client needs arise, sometimes on short notice.
Compensation and development
Salary range: €4700 - €5700 gross per month.
Opportunities for learning and continuous professional growth.
Technical Lead
About Orcrist Technologies
Orcrist Technologies is at the forefront of innovation with the Orcrist Intelligence Platform (OIP), a cutting-edge, Kubernetes-native solution designed to facilitate secure open-source intelligence collection and analysis for defense and public safety sectors.
Role Overview
As a Technical Lead, you will steer the architecture of our data platform. Your responsibilities will include designing and constructing the lakehouse infrastructure (utilizing technologies such as Hudi, Trino, Kafka, NiFi, and PySpark) that underpins all downstream AI and analytics offerings, while also providing mentorship to our team of Data Engineers.
Responsibilities
Take ownership of the architecture for our data lakehouse: make informed build-vs-buy decisions, craft technical designs, and direct implementation efforts.
Develop production-level Python code for essential data pipelines and infrastructure components (including PySpark, Kafka, and NiFi).
Establish engineering standards for code quality, testing, observability, and documentation.
Guide and mentor Data Engineers through code reviews, one-on-one sessions, and technical advice.
Collaborate with Product teams to transform requirements into scalable technical solutions.
Design and implement data governance frameworks focusing on lineage, cataloging, and compliance with government security regulations.
Qualifications
Minimum of 6 years of experience in data engineering, with at least 2 years in a leadership or architectural capacity.
Proficiency in Python (including PySpark and pandas) and SQL, along with a comprehensive understanding of lakehouse architectures and data modeling.
Experience with streaming technologies (e.g., Kafka), batch processing frameworks (e.g., Spark), and modern table formats (e.g., Hudi, Iceberg, or Delta).
Proven track record in building and maintaining large-scale data platforms in production environments.
Eligibility to work in Germany; EU/NATO citizenship preferred for roles involving export control.
Preferred Qualifications
Proficiency in the German language (B1 or higher).
Experience in defense or government data environments.
Familiarity with graph databases or machine learning infrastructure.
Benefits
Access to a modern technological stack (Hudi, Ozone, PySpark, Trino, Kafka, Kubernetes).
Work on mission-driven projects that make a significant impact.
Remote-first work culture based in Germany.
Opportunities for meetups in Berlin.
30 vacation days per year.
Budget for equipment and professional development.
Contract | $58/hr | Remote (Stuttgart, Baden-Württemberg, Germany)
Please submit your CV in English and specify your English proficiency level. Mindrift, in partnership with toloka-ai, offers project-based freelance work for experienced data scientists. This role centers on testing, evaluating, and enhancing AI systems for leading technology companies. All assignments are project-based, not permanent employment.
Key Responsibilities
Design original data science challenges that reflect analytical work from sectors like telecom, finance, government, e-commerce, and healthcare.
Create tasks requiring advanced Python programming, using libraries such as Pandas, Numpy, Scipy, scikit-learn, Statsmodels, Matplotlib, and Seaborn.
Develop problems that require significant computational effort, making manual solutions impractical.
Formulate scenarios involving complex data processing, statistical analysis, feature engineering, predictive modeling, and extracting business insights.
Ensure each challenge is deterministic and reproducible, using fixed random seeds when needed.
Base tasks on real business needs, including customer analytics, risk assessment, fraud detection, forecasting, optimization, and operational efficiency.
Build end-to-end challenges covering the full data science workflow: data ingestion, cleaning, exploratory analysis, modeling, validation, and deployment considerations.
Include scenarios that require scalable computational methods for large datasets.
Validate solutions in Python using standard data science libraries and statistical techniques.
Write clear documentation for each problem, providing business context and verified solutions.
Requirements
Minimum 5 years of hands-on data science experience with measurable business outcomes.
Portfolio of completed projects or publications showing real-world problem-solving.
Advanced Python programming skills for data science (pandas, numpy, scipy, scikit-learn, statsmodels).
Expertise in statistical analysis and machine learning, with strong knowledge of algorithms and methodologies.
Proficient SQL skills for data analysis and manipulation.
Familiarity with GenAI technologies, including LLMs, RAG, prompt engineering, and vector databases.
Understanding of MLOps and model deployment workflows.
Knowledge of frameworks like TensorFlow, PyTorch, and LangChain.
Strong written English skills at C1 level or higher.
How to Get Started
Apply
Complete qualifications
Join a project
Finish assigned tasks
Receive compensation
Location: Remote (Stuttgart, Baden-Württemberg, Germany)
Full-time | Remote (Germany, Netherlands, Spain, or United Kingdom)
About Dataiku
Dataiku provides a unified platform for building, deploying, and managing AI and analytics across the enterprise. The platform connects teams and tools, supporting transparency, collaboration, and centralized governance. Organizations use Dataiku to run analytics, machine learning, and AI projects across multiple vendors and cloud environments. Many leading global companies rely on Dataiku to operationalize AI and drive measurable business value. Learn more through the Dataiku blog, LinkedIn, X, and YouTube.
Role Overview: Data Engineer I (Remote)
The Enterprise Data and Analytics (EDA) team at Dataiku is hiring a Data Engineer I. This internal, non-client-facing position supports the data platform that enables analytics, embedded analytics teams, Generative AI engineering, and self-service users across the company. The role is fully remote and open to candidates based in Germany, the Netherlands, Spain, or the United Kingdom.
What You Will Do
Deliver and maintain data pipelines that power analytics and insights for teams throughout Dataiku.
Split time equally between Data Operations (support, troubleshooting) and new development.
Work daily with the data platform, primarily using Snowflake, Dataiku, and GitHub.
Develop solutions using Python and SQL.
Contribute to DataOps processes within GitHub Actions and Dataiku.
Support data platform processes within Snowflake and Dataiku.
What We Look For
Technical skills in Python, SQL, and data platform tools (Snowflake, Dataiku, GitHub).
Strong analytical thinking and problem-solving abilities.
Excellent verbal and written communication skills.
Curiosity and a commitment to continuous learning.
Ability to work collaboratively with engineers from different teams.
Positive attitude and focus on shared goals.
Additional Notes
This is an internal, non-client-facing role.
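The "Python and SQL" pipeline work described above can be illustrated with a minimal data-quality probe of the kind a DataOps rotation might run. This is a hedged sketch only: `sqlite3` stands in for the warehouse (the posting names Snowflake), and the `orders` table and `null_rate` helper are invented for the example.

```python
# Illustrative sketch: a pipeline-style data-quality check in Python + SQL.
# sqlite3 is used here as a self-contained stand-in for a warehouse.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, amount REAL)")
conn.executemany(
    "INSERT INTO orders (id, amount) VALUES (?, ?)",
    [(1, 19.5), (2, None), (3, 42.0)],
)

def null_rate(conn, table: str, column: str) -> float:
    """Fraction of NULLs in a column: a typical data-quality probe."""
    total, nulls = conn.execute(
        f"SELECT COUNT(*), SUM(CASE WHEN {column} IS NULL THEN 1 ELSE 0 END) "
        f"FROM {table}"
    ).fetchone()
    return (nulls or 0) / total

rate = null_rate(conn, "orders", "amount")
print(rate)  # 1 of 3 rows has a NULL amount
```

In a scheduled pipeline, a check like this would typically gate promotion of a dataset: if the rate exceeds a threshold, the run fails and the on-call engineer is alerted.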
About dltHub
dltHub is at the forefront of innovation, leveraging the dlt library, a leading open-source Python tool for data loading with approximately 4 million downloads monthly. We are on a mission to develop the next generation of AI-native data tools built upon our robust open-source foundation. Our team is agile and thrives on experimentation, automation, and user-centric AI workflows. Founded by seasoned data scientists and machine learning engineers, we have our roots in Berlin and a growing presence in the United States. We are backed by prominent investors including Bessemer Venture Partners, Foundation Capital, Dig Ventures, and technical pioneers from Hugging Face, Instana, Matillion, and more. Our core values drive us: we communicate with courage, prioritize meaningful projects, automate continuously, maintain high energy, honor our commitments, and achieve success collectively.
About the Role
As we continue to enhance our AI-first data platform, we are seeking a passionate and entrepreneurial software engineer in Berlin. This role offers the opportunity to tackle data platform challenges related to ingestion, storage, performance, reliability, observability, and AI-enhanced developer experiences. If you are driven, eager to collaborate with a motivated team, and prioritize high-quality code and innovative data systems, this position is tailored for you.
Distribusion is the premier global marketplace for ground transportation, providing travelers with effortless online access to a variety of ground transport options from search to ticket purchase. Our state-of-the-art B2B technology platform connects bus, rail, and ferry operators across 70+ countries to major online retailers, including Google Maps and Booking.com.
We are redefining the future of travel by constructing the largest global network of transport providers and retailers. Having achieved a remarkable 10x growth in the past year, we are one of the fastest-growing startups in the travel sector. Supported by four prominent venture capital firms (TQ Ventures, Creandum, Northzone, and Lightrock), and following our recent Series C funding round of $80 million, we are poised to expand even further.
We are on the lookout for a talented Senior Python Software Engineer to join our Demand Team. This team is crucial to our product, acting as the first touchpoint for end customers. Delivering a seamless and efficient search experience is therefore vital for driving sales, ensuring customers can swiftly find optimal choices and successfully complete their bookings.
Key Responsibilities:
Write clean, efficient, and thoroughly documented Python code.
Develop and maintain backend services and APIs for web projects.
Implement backend solutions for business-critical applications.
Debug, test, and optimize applications for enhanced performance and scalability.
Ensure system uptime of at least 99.9%.
Engage in code reviews and contribute to best practices.
Work Environment:
Our headquarters is located in Berlin, where the team frequently meets. However, we embrace a remote-first culture, with team members dispersed globally.
About Qualifyze:
Founded in 2019, Qualifyze has quickly established itself as a premier provider of supply chain compliance management in the Life Sciences sector, gaining the trust of over 1,500 pharmaceutical and healthcare firms worldwide. Our sophisticated digital suite of solutions seamlessly connects manufacturers, suppliers, and a global network of more than 250 auditors and quality professionals.
With a portfolio of over 4,500 audits conducted across 85+ countries, the largest and most accurate supplier network, and advanced data analytics tools, Qualifyze is a comprehensive partner for quality compliance and supply chain risk mitigation in the Life Sciences industry.
peec is building and improving the backend systems that support its products. As a Senior Python Engineer based in Berlin, you will design, develop, and maintain complex backend infrastructure to ensure reliable, high-quality delivery.
Role overview
This role involves full ownership of backend systems, from initial planning and architecture through implementation and deployment. The work spans API design, distributed systems, and ongoing performance improvements.
What you will do
Lead the end-to-end development of backend systems, including scoping, architecture, execution, and delivery.
Design and implement scalable APIs and distributed systems that meet high performance standards.
Work closely with Product, Frontend, and Design teams to translate product ideas into solid backend solutions.
Explore and apply modern backend technologies to improve architecture and the developer experience.
Help evolve backend architecture and infrastructure to support fast iteration and growth.
Identify areas for system and developer experience improvement, and drive initiatives to address them.
Requirements
Significant experience with Python and backend development is essential. Strong skills in API design, distributed systems, and cross-team collaboration will be important for success in this role.
Distribusion Technologies stands at the forefront of the global ground transportation marketplace, offering travelers unparalleled access to diverse transport options online, from seamless search functionalities to straightforward ticket purchasing. Our state-of-the-art B2B technology platform bridges the gap between bus, rail, and ferry operators in over 70 countries and leading online retailers such as Google Maps and Booking.com.
As we continue to redefine the travel landscape, we are expanding the largest global network of transportation providers and retailers. After experiencing a remarkable 10x growth last year, we are recognized as one of the fastest-growing startups in the travel sector. With the support of four prominent venture capital firms (TQ Ventures, Creandum, Northzone, and Lightrock) and our recent successful $80 million Series C funding round, we are poised for significant advancements.
We are currently seeking a dynamic Senior Python Software Engineer to join our Portal Team. In this pivotal role, you will play a crucial part in developing our core B2B product, which connects global carriers, retailers, and travel partners to Distribusion's robust infrastructure. The Portal serves as the central interface within our ecosystem, facilitating high-performance booking and analytics tools, as well as identity management and operational dashboards. You will navigate multiple domains, influence architectural decisions, integrate complex workflows, and collaborate with teams across Data, Platform, and E-commerce to create a seamless and scalable user experience. We are looking for a developer who thrives on ownership, enjoys tackling complex challenges, and is passionate about building the foundation of how the world books ground transportation.
Apr 3, 2026