Experience Level
Entry Level
Qualifications
Proficiency in Python programming and biological sciences. Experience with machine learning concepts and AI training methodologies. Strong communication skills and the ability to convey complex concepts in an understandable manner. Prior experience in training, teaching, or educational content development is a plus.
About the job
Toloka AI is seeking a freelance specialist with strong expertise in biology and Python programming. This remote role, based in Israel, centers on supporting AI training efforts with hands-on subject matter knowledge.
Role overview
This position involves developing training materials that help improve artificial intelligence systems, with a focus on biological data. The work includes designing content, guiding AI models, and ensuring accurate interpretation of complex biological information.
What you will do
Create clear and effective training content for AI models in the biology domain
Facilitate learning experiences that connect biological concepts with Python programming
Guide AI systems to better understand and process biological data
Requirements
Expertise in biology
Strong programming skills in Python
Ability to explain biological concepts and coding techniques clearly
About Toloka AI
Toloka AI is at the forefront of artificial intelligence training, committed to enhancing the capabilities of AI systems through expertise in various fields, including biology. Our mission is to bridge the gap between human knowledge and machine learning, creating intelligent solutions that can solve real-world problems.
Similar jobs
Freelance Python Data Scraping Engineer
Role overview
Jobgether is hiring a Senior Python Data Scraping Engineer for a freelance position based in Israel. This role offers remote flexibility and involves collaboration with a partner organization. The focus is on building and maintaining sophisticated data extraction systems that support AI and analytics initiatives.
What you will do
Design and implement scalable web data extraction pipelines for large and complex datasets
Create advanced scraping solutions for dynamic websites, adapting to frequent changes in site structure and content
Work with hybrid AI-human workflows, partnering with AI-driven agents to improve data validation and quality control
Deliver structured, accurate, and validated data to support downstream projects
Share technical expertise to refine scraping strategies and uphold reliability standards
Requirements
Deep experience with Python, especially for web scraping and data extraction tasks
Track record of engineering resilient and scalable scraping tools for complex, changing web environments
Keen attention to detail and a focus on producing precise, high-quality datasets
Ability to work independently as a remote freelancer
Experience with both human and AI-driven workflows is a plus
Location
This position is open to candidates based in Israel. Remote work is available.
Role overview
Toloka AI seeks a freelance AI trainer with expertise in civil engineering and strong Python programming skills. This position is fully remote and available to candidates based in Israel.
What you will do
Use civil engineering knowledge and Python to contribute to the training and improvement of AI models
Work closely with Toloka AI's team to assist in developing advanced AI systems
Location
This is a remote role for candidates residing in Israel.
About ClickHouse
Listed among the 2025 Forbes Cloud 100, ClickHouse stands out as a pioneering and rapidly expanding private cloud organization. With a customer base exceeding 3,000 and annual recurring revenue (ARR) growth surpassing 250% year over year, we are at the forefront of real-time analytics, data warehousing, observability, and AI workloads. Our recent $400M Series D funding round validates our continued momentum. In the last quarter, notable clients such as Capital One, Lovable, Decagon, Polymarket, and Airwallex have either implemented or expanded their use of our platform, joining a roster of AI innovators and global brands including Meta, Cursor, Sony, and Tesla. Join us in our mission to revolutionize data utilization across industries!
The Connectors team serves as the vital link between ClickHouse and the expansive data ecosystem. We design and maintain integrations that empower millions of developers, data professionals, and AI systems worldwide, ranging from data visualization tools (Tableau, PowerBI, Superset, Metabase) to connectors for data processing frameworks (Apache Spark, Flink, Kafka Connect, Fivetran), orchestration platforms, and AI toolsets. Our work shapes how organizations handle vast datasets, from real-time analytics platforms managing millions of events per second to observability systems that monitor global infrastructures, and increasingly, AI-driven data applications that redefine team workflows. We collaborate closely with the open-source community, internal teams, and enterprise users to ensure our integrations set new benchmarks in performance, reliability, and user experience.
About the Role
As a Senior Software Engineer with expertise in Python and the data ecosystem, you will play a pivotal role in developing and advancing key components of ClickHouse's data engineering ecosystem. This position sits at the intersection of high-performance database engineering and user experience. Your contributions will involve creating tools that enable Data Engineers and Data Scientists to leverage ClickHouse's speed and scalability seamlessly within their existing frameworks. We seek a candidate with firsthand experience as a Data Engineer or Data Scientist. The landscape for data practitioners is evolving rapidly: databases are transforming from mere query targets into active participants in AI-driven workflows, functioning as vector stores for retrieval-augmented generation (RAG) pipelines, backends for large language model (LLM)-driven agents, and real-time feature stores for machine learning inference. You understand these workflows not from an external viewpoint but from active engagement. You don't just develop integrations; you provide product-level insights that drive innovation.
Please submit your CV in English and indicate your English proficiency level.
Mindrift connects skilled specialists with project-based AI initiatives for leading tech companies. The focus is on testing, evaluating, and improving AI systems. This is a project-based freelance role, not a permanent position.
What you will do
Create original computational mathematics problems that reflect real mathematical research methods
Develop problems that require Python programming to solve, using libraries such as NumPy, SciPy, and SymPy
Ensure problems are computationally intensive and cannot be solved manually within days or weeks
Design challenges involving advanced reasoning in areas like number theory, combinatorics, graph theory, and numerical analysis
Base problems on genuine research questions or practical mathematical applications
Validate solutions in Python with established mathematical libraries
Document each problem clearly, providing both the statement and a verified solution
Requirements
Degree in Mathematics (Pure or Applied) or a closely related field
Proficiency in Python for numerical validation; experience with MATLAB, R, C, SQL, NumPy, Pandas, SciPy, or Stata, or knowledge of another programming language, is also acceptable
At least 2 years of professional experience, which may include research, teaching, or applied work
Familiarity with numerical methods and symbolic computation
Ability to design problems that mirror authentic mathematical research workflows
Understanding of computational complexity theory
Strong written English skills at C1 level or higher
Application process
Apply → Complete qualifications → Join a project → Execute tasks → Receive payment
Project commitment
During active phases, tasks are expected to require approximately 10–20 hours per week. This estimate depends on project needs and is not a guaranteed workload.
Compensation
Contributors may earn up to $35 per hour, depending on expertise and pace. Actual compensation varies by project scope, complexity, and required skills. Other projects on the platform may offer different earning potential based on their requirements.
Submit your CV in English and specify your English proficiency level.
Mindrift offers project-based freelance roles for professionals interested in AI education. This remote position, open to candidates based in Israel, focuses on creating and validating computational chemistry problems for AI system evaluation. Engagements are project-based rather than permanent employment.
What you will do
Design computational chemistry problems that reflect real-world research workflows
Develop problem sets requiring Python programming for solutions, using libraries such as NumPy, SciPy, and chemistry-specific packages
Ensure problems are computation-intensive and not easily solvable by hand within a reasonable timeframe
Create tasks involving advanced reasoning in physical chemistry, quantum chemistry, and molecular modeling
Base scenarios on authentic research challenges or practical chemistry applications
Validate solutions in Python, applying established computational chemistry methods
Document problems clearly and provide accurate, verified solutions
Requirements
Degree in Chemistry or a related field
Proficiency in Python for numerical validation; experience with MATLAB, R, C, SQL, NumPy, Pandas, SciPy, or other relevant programming languages is also acceptable
At least 2 years of relevant experience (applied, research, or teaching)
Familiarity with numerical methods in chemistry and computational chemistry concepts
Strong written English skills at C1 level or above
How projects work
Apply, pass qualifications, join a project, complete assigned tasks, and receive payment.
Time commitment
During active project phases, expect approximately 10–20 hours of work per week, depending on project needs. Actual workload may vary.
Compensation
Earn up to $35 per hour, based on experience and contribution speed. Payment depends on project scope, complexity, and expertise required. Other projects on the platform may offer different rates.
Please submit your CV in English and indicate your level of English proficiency.
Mindrift connects specialists with project-based AI work for leading technology companies. The platform focuses on testing, evaluating, and improving AI systems. All roles are flexible and non-permanent.
Role overview
This freelance position centers on applying material science expertise and Python skills to help train and evaluate AI systems. Projects change over time, but the core work involves creating and validating engineering problems that reflect real-world scenarios.
What you will do
Design original materials that reflect authentic engineering challenges
Create programming problems using Python for calculations and simulations
Ensure tasks require computational work, numerical methods, or iterative solutions
Develop projects focused on system design, optimization, and analysis
Base problems on real research issues or practical engineering situations
Verify solutions using Python and standard engineering libraries
Document problem statements clearly and provide validated solutions
Requirements
Degree in Material Science or a closely related field
Strong Python skills for numerical validation; familiarity with MATLAB, R, C, SQL, NumPy, Pandas, SciPy, or similar tools is a plus
At least 2 years of relevant experience, including applied research or teaching
Understanding of practical engineering constraints and approximations
Excellent written English at C1 level or higher
How to get started
Apply → Pass qualification(s) → Join a project → Complete tasks → Get compensated
Project commitment
During active project phases, tasks usually require 10–20 hours per week. Actual workload may vary depending on project needs.
Compensation
Contributors can earn up to $35 per hour, depending on the nature and complexity of the project. Pay varies by project scope.
Role overview
Toloka Annotators seeks a Freelance AI Trainer - Data Annotation Specialist to join its remote team. This contract position is open to candidates living in Israel and centers on supporting AI development projects from home.
What you will do
Label and annotate data accurately to support improvements in artificial intelligence systems
Participate in projects that help advance AI performance
Work independently as part of a distributed team
Focus of the work
This role involves careful data labeling and review. The goal is to help ensure AI models meet quality standards and function reliably.
About the Role
Tipalti Solutions is growing and looking for a Data Engineer in the Tel Aviv District. This role centers on developing and refining data integration processes, building data pipelines, and strengthening the company's data platform. The position supports analytics across teams, systems, and products.
What You Will Do
Design and implement ELT and streaming processes, including SQL queries, to move data between the data warehouse and other sources
Create scalable, automated workflows for large-scale data analysis
Help develop dashboards and datasets that deliver actionable insights
Work with business owners to build datasets tailored to their specific questions
Collaborate with analytics and business teams to improve data models for business intelligence tools and support data-driven decisions
Partner with engineering teams to plan long-term data platform architecture
Maintain and enhance data lake pipelines, including schema management and ongoing improvements
Who Succeeds Here
This role suits someone who enjoys building and optimizing data systems from the ground up, manages data efficiently, and works well with diverse teams. A self-motivated approach and an interest in evolving data architecture are important.
Join our Platform Infrastructure team as a Senior Software Engineer and help design and build the core Axonius Platform that underpins all current and future offerings. You will tackle demanding engineering problems across the entire stack, from low-level system configurations to advanced distributed backend applications. As part of our diverse global team, you'll serve as a technical point of reference, conducting in-depth research and troubleshooting complex performance and architectural issues that others find elusive.
Jobgether is looking for a Senior Full-Stack Engineer with strong skills in Python and React. This position is based in Israel and focuses on building and supporting modern web applications.
Role overview
This role centers on both backend and frontend development. The Senior Full-Stack Engineer will work with Python and React to create new features and improve existing systems. Collaboration with other teams is a regular part of the job, ensuring projects move forward smoothly.
Key responsibilities
Develop and maintain web applications using Python and React
Work closely with cross-functional teams to deliver project goals
Promote technical quality and consistency across projects
Requirements
Proven experience as a full-stack engineer
Strong knowledge of Python and React
Ability to collaborate with diverse teams
Jobgether is looking for a Senior Data Engineer based in Israel. This position centers on building and maintaining data pipelines and architectures that help drive business goals.
Role overview
The Senior Data Engineer will design, develop, and support data systems. Collaboration with teams across the company is a regular part of the job, with a focus on delivering reliable and secure data solutions.
Key responsibilities
Create and manage data pipelines and architectures
Work with multiple teams to support data quality and availability
Help ensure data security in all processes
Location
This role is based in Israel.
About Teads
Teads is an omnichannel advertising platform focused on improving results for both brand and performance advertisers across multiple screens. The company uses predictive AI to connect high-quality media with engaging brand creatives, offering context-driven addressability and measurement. Teads partners with over 10,000 publishers and 20,000 advertisers worldwide. Its headquarters are in New York City, and the team includes about 1,700 professionals in more than 30 countries. Learn more at www.teads.com.
Engineering at Teads
Build user-friendly, efficient web products used by thousands of people from leading publishers, advertisers, and agencies
Work with a diverse tech stack and system architecture, prioritizing performance, scalability, resiliency, and cost efficiency; the main technologies are Scala and TypeScript
Support a high-traffic environment: 2.2 billion users monthly, 100 billion events daily, and 2 million requests per second, with responses delivered in under 150 milliseconds
Handle large datasets, ensuring millisecond-level access for complex auction resolution algorithms and near real-time processing (18 million predictions per second)
Collaborate closely with Product teams and adapt Cloud infrastructure to roll out new features in a fast-moving setting
Full-time | On-site | Herzliya, Tel Aviv District, Israel
Role Overview
Shift4 Payments, Inc. is hiring a Senior Data Engineer to join the Data Infrastructure team in Herzliya. This team builds and maintains the data systems that support Shift4's payment processing and technology products.
What You Will Do
Design, build, and maintain scalable data pipelines
Work closely with data scientists and analysts to improve data workflows
Ensure data integrity across systems and processes
Impact and Collaboration
This role shapes the backbone of Shift4's data capabilities. Senior Data Engineers here help strengthen the company's data infrastructure, supporting both current needs and future growth.
Join Axonius as a Senior Data Infrastructure Engineer and take charge of designing and constructing robust data architectures, ML pipelines, and cloud infrastructure tailored to transform extensive and fragmented datasets into actionable insights. Your expertise will enable business stakeholders to achieve in-depth visibility, automate essential processes, and drive strategic outcomes organization-wide.
About Tavily - Search API
Tavily is an innovative company dedicated to delivering a state-of-the-art Search API. We enable developers and businesses to access fast, reliable, and relevant search results. Join our dynamic team to contribute to building robust data infrastructure that enhances and scales our core product.
The Role
We are in search of a driven and skilled Data Engineer to join our expanding team. Your role will involve designing, building, testing, and maintaining highly scalable data management systems. You will collaborate closely with our engineering and DevOps teams to create and optimize the data pipelines essential for the functionality and accuracy of our Search API.
Responsibilities
Design, develop, and sustain efficient ETL/ELT pipelines for data warehousing
Enhance and maintain our data infrastructure
Guarantee data quality, integrity, and security across all data platforms
Optimize data systems for improved performance and scalability
Diagnose and resolve issues related to data pipelines and infrastructure
Work with cross-functional teams to understand data requirements and deliver effective solutions
Minimum Qualifications
Bachelor's degree in Computer Science, Statistics, Engineering, or a related quantitative field
3+ years of professional experience as a Data Engineer or in a similar role focused on data infrastructure
Strong proficiency in Python
Extensive experience with relational (SQL) and NoSQL databases
Familiarity with AWS and its various data services
Demonstrated experience in constructing and optimizing data pipelines and architectures
Preferred Qualifications
Experience with MongoDB, Snowflake, Redis, S3 General, and S3 Express
Knowledge of big data technologies
Experience with Apache Airflow
Familiarity with containerization and orchestration tools such as Docker and Kubernetes
Background in companies focused on API services or search technology
Role Overview
d-fendsolutions is looking for a Python Automation Developer to help improve operational efficiency. This position is based in Raanana, Israel.
What You Will Do
Develop and maintain automation scripts using Python
Collaborate with teams across the company to spot areas where automation can help
Implement solutions that simplify and streamline internal processes
MyHeritage is seeking a visionary Data Engineering Team Leader to spearhead our exceptional team in architecting and developing our data infrastructure. In this pivotal leadership role, you will be instrumental in planning, designing, and deploying MyHeritage's data platform. Our organization thrives on data — join us in the movement toward data democratization! Are you ready to embrace challenges in crafting innovative data solutions and managing state-of-the-art big data architectures at scale? With a complex ecosystem comprising over 7 billion individual profiles and 30 billion historical records, you'll have the opportunity to address substantial big data challenges in a modern Continuous Integration/Continuous Deployment (CI/CD) environment. Engage with various data types, including structured, unstructured, free-text, and DNA genome data. Your role will involve gathering and defining the organization’s data needs while collaborating closely with numerous stakeholders. You'll adopt a comprehensive view of their challenges, propose effective solutions, and lead a high-performing data engineering team to realize them. Your efforts will foster robust alignment and widespread adoption across the organization. Your contributions will help create an online platform that provides profound emotional value to millions of families worldwide.
Nice Ltd. seeks a BI Data Engineer based in Raanana, Israel. This position centers on transforming data into actionable insights that support business decision-making.
What you will do
Design and develop data pipelines to collect, process, and organize information from various sources
Build and enhance analytics tools that enable teams to access and interpret data effectively
Work with modern technologies to ensure reliable and scalable data solutions
Role focus
This role emphasizes improving the way data is gathered and analyzed, making it easier for the business to draw conclusions and act on findings.
Join Axonius as a Senior Software Engineer specializing in Data Engineering. In this role, you will take charge of the design and development of data architectures, machine learning pipelines, and the cloud infrastructure essential for converting vast and fragmented data into actionable insights. Your contributions will enable business leaders to achieve better visibility, automate critical workflows, and foster strategic advancements across the organization.
Key Responsibilities
Data Infrastructure Development: Architect and implement all components of the data platform, including machine learning pipelines and necessary infrastructure
Cloud Data Lake Optimization: Establish and enhance an AWS-based Data Lake while adhering to cloud architecture best practices in metadata management, partitioning, and security to facilitate enterprise-level operations
Project Leadership: Oversee the full lifecycle of data projects, from initial infrastructure design to production monitoring and performance tuning
Integration Solutions: Employ effective ETL/ELT patterns and advanced querying techniques to tackle complex data integration challenges arising from both structured and unstructured datasets
Mar 10, 2026