Experience Level
Senior
Qualifications
The ideal candidate will possess:
Proficiency in Python and React frameworks
Strong understanding of web development principles
Experience with RESTful APIs and microservices architecture
Ability to work collaboratively in a team environment
Excellent problem-solving skills and attention to detail
Strong communication skills, both verbal and written
About the job
Jobgether is looking for a Senior Full-Stack Engineer with strong skills in Python and React. This position is based in Greece and centers on building and improving the company’s platforms.
Role overview
This role focuses on developing new features and maintaining existing applications using Python and React. Collaboration with other engineers and team members is a regular part of the job, with the goal of delivering reliable and user-friendly products.
What you will do
Design, build, and refine applications across the stack using Python and React
Work closely with peers to solve technical challenges and deliver updates
Play a key part in shaping the direction of jobgether’s platforms
Requirements
Professional experience with both Python and React
Ability to work effectively with a team
Comfortable contributing to projects in a collaborative setting
About jobgether
At jobgether, we are dedicated to connecting talented individuals with exciting job opportunities. Our mission is to create a seamless job search experience that empowers job seekers and employers alike. We value innovation, collaboration, and the pursuit of excellence in everything we do.
Please submit your CV in English and specify your English proficiency level. Toloka AI, via the Mindrift platform, offers freelance opportunities for specialists working on AI projects. This Senior Python Systems Developer role centers on functional testing and is open to candidates based in Greece. The position is fully remote and structured as a freelance …
METRO AEBE is recognized as one of Greece's leading employers, proudly supporting over 11,000 employees. Operating under the renowned My Market brand, we manage one of the largest retail networks in the country, featuring 290 stores nationwide. Additionally, we dominate the wholesale market with 50 METRO Cash & Carry stores catering to professionals across Greece.
To fuel our ongoing expansion, we are looking for a skilled Data Engineer to join our Data Warehouse (DWH) team. In this role, you will play a pivotal part in designing, enhancing, and optimizing enterprise data products that empower data-driven decision-making.
Responsibilities:
Design and develop robust, automated data pipelines (ETL/ELT) to efficiently ingest data from diverse sources into our Data Warehouse or Data Lake.
Conduct data wrangling tasks, including data cleaning and transformation, to convert raw data into actionable formats for analysis, visualization, or machine learning applications.
Ensure data quality and monitor pipeline performance to uphold data integrity and reliability.
Implement data access controls in alignment with corporate regulations and policies.
Contribute to machine learning and AI initiatives by preparing, validating, and serving high-quality datasets for model training and evaluation.
Work collaboratively with Data and BI Analysts, providing technical support as required.
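A data-wrangling responsibility like the one above can be sketched in a few lines of Python. This is an illustration only: the pandas library is assumed to be available, and the store/sales columns are invented for the example, not an actual retail schema.

```python
import pandas as pd

# Hypothetical raw extract; column names are invented for illustration.
raw = pd.DataFrame({
    "store_id": ["S01", "S01", None, "S02"],
    "sales_eur": ["120.5", "99.0", "45.0", "n/a"],
})

def clean_sales(df: pd.DataFrame) -> pd.DataFrame:
    """Wrangle raw rows into an analysis-ready table:
    drop incomplete rows, coerce types, then aggregate per store."""
    out = df.dropna(subset=["store_id"]).copy()
    out["sales_eur"] = pd.to_numeric(out["sales_eur"], errors="coerce")
    out = out.dropna(subset=["sales_eur"])
    return out.groupby("store_id", as_index=False)["sales_eur"].sum()

tidy = clean_sales(raw)
```

In a warehouse pipeline, a transformation like this would typically run between the ingestion step and the load into the DWH or Data Lake.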
Please submit your CV in English and specify your English proficiency level. This freelance, project-based position is offered through Mindrift and supports Toloka AI. Mindrift connects skilled professionals with assignments focused on testing, evaluating, and improving AI systems for leading technology companies. This is not a permanent employment role.
Role overview
This remote role, open to candidates based in Greece, seeks a Senior Python Developer with strong expertise in code migration and functional testing. The work involves handling large codebases in multiple programming languages, designing and executing tests, managing Docker environments, and using LLM-powered tools to streamline development tasks.
What you will do
Design and implement functional black-box tests for large codebases written in various source languages.
Set up and maintain Docker environments to enable reproducible builds and cross-platform testing.
Monitor code coverage and configure automated scoring systems to meet industry standards.
Utilize LLMs (such as Roo Code and Claude Code) to accelerate development, automate repetitive tasks, and enhance code quality.
Requirements
Minimum 5 years of software engineering experience, with a focus on Python.
Strong knowledge of pytest, including fixtures, session-scoped tests, timeouts, and creating black-box functional tests for CLI tools.
Advanced experience with Docker (writing reproducible Dockerfiles, managing user contexts, securing workspaces).
Proficient in Linux and Bash scripting, including debugging within containers.
Familiarity with modern Python tooling (uv, pyproject.toml, packaging).
Ability to read and understand code in multiple languages (C, C++, Rust, Go) with assistance from LLMs.
Experience using LLMs (Claude Code, Roo Code, Cursor) to speed up development and generate test cases.
English proficiency at B2 level or above.
Preferred qualifications
Background with agent evaluation platforms and MCP CLI.
Technologies and tools
Python (pytest, uv, Pillow), Docker, Bash, Git Submodules, reading C/C++/Rust/Go, Dagger, GitHub Codespaces, LLMs (Claude Code, Roo Code, Cursor), coverage.py, gcov, kcov.
Benefits and work arrangement
Freelance, project-based assignments through the Mindrift platform (powered by Toloka AI).
Fully remote work with flexible hours (20-30 hours per week).
Compensation is project-dependent; for this project, AI trainers can earn up to $30 per hour.
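For readers unfamiliar with the pytest techniques this posting asks for, here is a minimal sketch of a black-box functional test for a CLI tool using a session-scoped fixture. Everything here is illustrative: `python -c` stands in for the real CLI under test, so the example runs anywhere Python is installed.

```python
import subprocess
import sys

import pytest

def run_cli(args, timeout=10):
    """Invoke the tool as a subprocess and capture its streams.
    Black box: we observe only exit code, stdout, and stderr."""
    return subprocess.run(args, capture_output=True, text=True, timeout=timeout)

@pytest.fixture(scope="session")
def cli():
    # Session scope: the wrapper is built once for the whole test run.
    # `python -c` is a stand-in for the CLI binary being tested.
    return lambda code: run_cli([sys.executable, "-c", code])

def test_exit_code_and_stdout(cli):
    result = cli("print('ok')")
    assert result.returncode == 0        # observable behaviour only
    assert result.stdout.strip() == "ok"
```

Timeouts can be enforced per subprocess call (as above) or per test via the pytest-timeout plugin; the black-box style means nothing in the tool's internals is imported or mocked.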
Role Overview
finartix is looking for an ETL/SSIS Data Engineer to join the team in Athens, Attica, Greece. This role focuses on building and maintaining data solutions for clients in the Greek market. The position works closely with IT professionals to improve data ecosystems and streamline data delivery for a range of sectors.
What You Will Do
Develop, test, and maintain data solutions throughout the full software development lifecycle.
Apply effective methods for collecting and analyzing data to support strategic recommendations that fit client business goals.
Act as a technical advisor, offering insights and solutions to clients.
Work as part of an Agile team, contributing to collaboration and new ideas.
Qualifications
BS or MS in Computer Science, Engineering, or a related field.
Minimum 3 years of experience in software development using MS SQL Server and ETL tools, especially SSIS.
At least 2 years working on data migration projects.
2 years of experience in the Banking Industry.
Solid understanding of software application fundamentals and how they affect user experience.
Strong skills in testing and quality assurance.
Proficient programming abilities and a creative, problem-solving approach.
Good communication and time management skills.
Comfortable working both independently and as part of a team.
Demonstrated analytical thinking and a solution-oriented mindset.
Proficiency with Microsoft Office Suite.
Fluent in both English and Greek, written and spoken.
About EveryPay
EveryPay is dedicated to revolutionizing the digital financial landscape of e-commerce in Greece. Our mission is to empower Marketplaces and Merchants, enabling them to succeed in a competitive environment.
We are a vibrant team of young professionals, united by our core values of Empowering Customers, Collaborating as a Team, Managing Risks, and Delivering Results.
We take pride in having created the payment infrastructure that connects numerous Greek Marketplaces and Merchants with global payment schemes such as Visa and MasterCard. Our services extend to Greece's largest and most successful marketplace, Skroutz.
Our systems interface with thousands of banks, both domestically and internationally. Our technology handles tens of thousands of transactions daily, amounting to billions of euros in e-commerce. If you have made an online purchase in Greece, you have likely interacted with our payment solutions.
EveryPay is a wholly-owned subsidiary of the Skroutz Group of Companies, functioning as both a Technology Firm and a Regulated Financial Services Institution. This unique position offers you exposure to both the Tech Payments Sector and the realm of Financial Services.
Your Role in EveryPay's Vision:
We are looking to expand our Data Platform team by hiring a skilled Data Platform Engineer. In this role, you will design, build, and maintain the core data platform that drives analytics and business intelligence at EveryPay. You will be instrumental in developing robust data ingestion pipelines, establishing scalable data infrastructure, and enabling our BI team to extract actionable insights from data.
Your contributions will ensure that high-quality, reliable data is readily accessible to all stakeholders within the organization.
Key Challenges You Will Tackle:
Data Ingestion at Scale: Design and implement scalable, reliable data ingestion pipelines that handle data from diverse internal and external sources.
Platform Enablement: Construct, operate, and optimize our data platform to empower BI and analytics teams to easily explore, analyze, and visualize data.
Data Quality & Governance: Establish and uphold best practices for data quality, lineage, and governance to ensure data trustworthiness and compliance.
Your Responsibilities:
Architect, build, and maintain ETL/ELT pipelines for ingesting data from various systems (e.g., payment systems, marketplaces, SaaS tools).
Establish and manage data platform infrastructure (cloud data warehouses, databases, orchestration tools, etc.).
Collaborate closely with the BI team to understand data requirements and deliver efficient, reliable data models and datasets.
Monitor pipeline performance and data quality, proactively troubleshooting and resolving any issues.
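As a rough sketch of what the load step of such an ETL/ELT pipeline can look like, the snippet below uses Python's built-in sqlite3 as a stand-in for the warehouse, with an invented payments payload. The table and column names are hypothetical, not EveryPay's actual schema.

```python
import sqlite3

# Invented payload; real sources would be payment systems or SaaS APIs.
payments = [
    ("tx-001", "merchant-a", 49.90),
    ("tx-002", "merchant-b", 12.00),
]

conn = sqlite3.connect(":memory:")  # stand-in for the cloud warehouse
conn.execute(
    "CREATE TABLE fact_payments ("
    "  tx_id TEXT PRIMARY KEY, merchant TEXT, amount_eur REAL)"
)
# Idempotent load: re-running the pipeline must not duplicate rows,
# so the transaction id is the primary key and loads upsert on it.
conn.executemany(
    "INSERT OR REPLACE INTO fact_payments VALUES (?, ?, ?)", payments
)
conn.commit()

total = conn.execute(
    "SELECT SUM(amount_eur) FROM fact_payments"
).fetchone()[0]
```

The idempotency point is the design choice worth noting: ingestion jobs get retried, so keying loads on a stable identifier keeps replays from corrupting the fact table.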
Toloka AI seeks a Freelance Machine Learning Engineer based in Greece to work remotely. This position centers on developing and refining machine learning models for a variety of AI projects.
Role overview
Work involves building new models and enhancing existing solutions to support projects in several industries. Projects may span different domains, offering exposure to diverse applications of AI and machine learning.
Location
This is a remote role open to candidates located in Greece.
Full-time|Remote|Remote — Thessaloniki, Central Macedonia, Greece
Are you an enthusiastic Software Engineer eager to advance your career while working on significant projects? We invite you to join our innovative team at European Dynamics, where remote work is not just allowed, but embraced. Collaborate with a supportive project team to develop challenging applications for prominent public organizations across Germany, Austria, and Switzerland. We pride ourselves on high software development standards and offer ongoing coaching and training opportunities to enhance your skills. Proficiency in German is a plus, and a desire to learn is essential.
Your Responsibilities:
Engaging in the design and development of sophisticated web applications;
Creating, testing, and maintaining large-scale software applications;
Fostering effective collaboration within the team;
Ensuring compliance with software quality standards;
Preparing detailed technical documentation.
About the Role
Netcompany is looking for a Junior to Mid-level Data Engineer in Athens. This role focuses on building and improving data pipelines that support business decisions. The work involves designing, implementing, and optimizing data flows to keep information accurate and reliable.
Main Responsibilities
Design and build data pipelines for business and analytics needs
Optimize existing data processes for efficiency and quality
Work with data scientists, analysts, and engineers to improve data architecture
Help ensure data integrity across projects
Collaboration
This position works closely with cross-functional teams, including data scientists and analysts, to support analytical projects and improve how data is used throughout the company.
Who We're Looking For
Interest in data engineering and analytics
Willingness to learn new technologies and approaches
Strong teamwork and communication skills
Optasia is a cutting-edge B2B2X financial technology platform specializing in scoring, financial decision-making, disbursement, and collections. Our mission is to promote financial inclusion for everyone, and we pride ourselves on transforming the world in our own unique way.
We are on the lookout for passionate and proactive professionals who are driven by results and possess a can-do attitude. Join a team of like-minded individuals dedicated to delivering innovative solutions in an exciting environment.
We invite you to apply for the position of Data Engineer within our expanding Data Engineering team. In this role, you will design and implement highly scalable end-to-end batch and streaming data pipelines, contributing to the overall success of Optasia.
Your responsibilities will include:
Enhancing the scalability, stability, accuracy, speed, and efficiency of our existing data systems.
Designing and developing end-to-end data processing pipelines.
Navigating a diverse technology stack, including Scala, Spark, Python3, Bash/Python scripting, Hadoop, and SQL.
Designing, constructing, testing, and deploying new libraries, frameworks, or complete systems while adhering to the highest standards of testing and code quality.
Developing, maintaining, and optimizing core libraries for batch processing and large-volume data ingestion into our big data infrastructure.
Building and maintaining CI/CD orchestration.
What we expect from you:
Bachelor's or Master's degree in Computer Science or Informatics.
A minimum of 2 years of experience in Data Engineering.
Proven experience in software/data engineering and/or operations/DevOps/DataOps.
Familiarity with the Apache Hadoop ecosystem (YARN, HDFS, HBase, Spark).
Hands-on experience with both relational and NoSQL databases.
Proficient in systems administration with Linux.
Experience in deploying, configuring, and maintaining distributed systems and data/software engineering tools.
Your key attributes:
Experience with fluid virtual infrastructures such as containers (e.g., Docker, Kubernetes).
Familiarity with data and ML flow engines and tools, such as Apache Airflow.
A strong passion for learning new technologies and collaborating with other creative professionals.
Why you should join us:
We offer a range of benefits, including:
Flexible hybrid working options
Competitive remuneration package
An extra day off on your birthday
Performance-based bonus scheme
Comprehensive private healthcare insurance
All the tech gear you need to work efficiently
Experience the Optasia perks:
Join our multicultural working environment
Engage with a unique and promising business and industry
Gain insights into the future market landscape
Enjoy a solid career path within our working family.
cepal seeks a Senior Data Quality Specialist in Nea Smyrni, Attica, Greece. This position focuses on strengthening data integrity and maintaining high standards across the organization.
Key Responsibilities
Develop and apply data quality frameworks that align with business needs.
Carry out regular audits to detect inconsistencies and data gaps.
Collaborate with teams throughout the company to ensure data quality standards are met.
Identify data quality concerns and propose actionable solutions.
Encourage ongoing improvement in data management practices.
Requirements
Strong analytical skills with attention to detail.
Background working with data quality frameworks and audit procedures.
Experience cooperating with colleagues from various departments.
Dedication to upholding and enhancing data quality.
Join a pioneering leader in education technology, recognized globally for excellence in the assessment and certification of professional skills across more than 200 countries. PeopleCert is actively seeking a talented Data Engineer to enhance our dynamic Data & AI team. This position is crucial in architecting, developing, and sustaining the infrastructure and data solutions that empower our AI-driven projects. The ideal candidate will possess substantial practical experience with Microsoft Azure technologies and demonstrate a strong passion for data engineering practices that facilitate machine learning, advanced analytics, and large-scale data processing.
In this role, you will collaborate closely with the AI Center of Excellence, working alongside data scientists, ML engineers, software developers, analysts, and business stakeholders to enable data accessibility and drive intelligent applications.
Your responsibilities will include:
Designing, implementing, and maintaining scalable data pipelines and workflows to facilitate AI/ML model training, evaluation, and inference.
Building and optimizing data integration solutions utilizing Azure data tools such as Synapse Analytics, Azure Data Factory, Databricks, and Delta Lake.
Partnering with data scientists and AI engineers to ensure data is available in the correct format and quality for modeling purposes.
Developing and maintaining APIs and data services that power AI-driven applications and insights delivery.
Supporting the development of data lakes and lakehouses tailored for advanced analytics and AI use cases.
Writing efficient, reusable Python and SQL code for data processing, cleaning, and transformation.
Participating in code reviews and knowledge-sharing sessions within the team to cultivate best practices and continuous learning.
Keeping abreast of emerging tools, cloud services, and trends in data engineering and AI infrastructure.
Join Kpler as a Business Intelligence Data Engineer where you will play a crucial role in transforming data into actionable insights. You will work with various data sources and be part of a dynamic team focused on enhancing our data platforms. You will have the opportunity to leverage your analytical skills to drive strategic decision-making and contribute to our innovative solutions.
Are you excited about Data & AI? At Satori Analytics, we are redefining the landscape of data and artificial intelligence. Our mission is to empower global brands by providing unparalleled clarity through innovative data solutions. We develop cloud-based ecosystems for fintech and predictive models for airlines, offering cutting-edge solutions that span the entire data lifecycle, from ingestion to AI applications.
As a rapidly growing scale-up, our dynamic team of over 100 tech professionals, including Data Engineers, Data Scientists, and more, delivers transformative analytics solutions across diverse sectors such as FMCG, retail, manufacturing, and financial services. Join us in spearheading the data revolution in South-Eastern Europe and beyond!
What Your Day Might Look Like:
Technical & Delivery Leadership
Lead the development and enhancement of data engineering standards, best practices, and architectural principles for all Satori projects.
Serve as a senior technical authority for complex data platforms, including cloud data stacks, pipelines, streaming, and orchestration.
Assist project teams in solution design, risk management, and technical decision-making processes.
Evaluate and critique designs to ensure they meet scalability, performance, security, and cost-effectiveness criteria.
Collaborate with Tech Leads to maintain consistency and quality across projects.
People Management & Leadership
Oversee Senior Data Engineers and Tech Leads, fostering growth, performance, and career advancement.
Mentor engineers on technical depth, ownership, communication, and leadership skills.
Contribute to performance evaluations, development plans, and promotion decisions in line with Satori's competency framework.
Exemplify Satori's values of collaboration, transparency, and accountability.
Cross-Functional Collaboration
Work in tandem with Product Owners to align technical solutions with client requirements and delivery constraints.
Partner with Data Science, AI, and Cloud teams to ensure seamless end-to-end solutions.
Support presales and discovery phases by providing technical insights, estimations, and solution framing when necessary.
Organizational Impact
Identify skill gaps, tooling, or process improvements and recommend practical solutions.
Engage in internal initiatives, such as guilds, playbooks, training, and knowledge sharing.
Help scale the data engineering capabilities as Satori expands, ensuring quality and culture are preserved.
iKnowHow S.A. is part of the iKnowHow Group, a technology company with more than 24 years in the field and a team of over 300 professionals. The group delivers technology solutions to sectors such as Energy, Telecommunications, Banking & Financial Services, and the Public Sector. Specialized subsidiaries within the group focus on areas like Health and Robotics, integrating advanced technologies for clients and internal projects. The company's portfolio covers Data & AI platforms, enterprise integration, cloud-native applications, and digital transformation initiatives for organizations in both public and private sectors.
Role overview
iKnowHow S.A. is hiring an SSIS Data Engineer in Gerakas, Attica, Greece. This role will support ongoing and upcoming data migration projects, working as part of a collaborative technology team.
Elevate your career with us! Join our dynamic development teams in Athens or work remotely as a Data Engineer. In this vital role within our agile team, you will help design and implement cutting-edge big data solutions on a scalable cloud platform. You will analyze millions of real-time data points to extract advanced insights and enhance analytics capabilities for our end users.
Your Responsibilities:
Develop and implement batch processing pipelines utilizing Spark (Python or Scala) and SQL;
Design and execute streaming ETL/ELT processes from a variety of data sources;
Write and maintain code for developing comprehensive big data solutions, focusing on data integration and analytics use cases;
Create and implement APIs using contemporary Python frameworks;
Collaborate effectively with our Business Analysis teams to align technical solutions with business needs;
Conduct end-to-end and functional testing using open-source tools;
Set up monitoring solutions for our data platform, including alerts and dashboards.
Essential Qualifications:
Bachelor's degree in Computer Science or Software Engineering;
Extensive knowledge of Apache Spark;
Proficient in Python and database management;
Previous experience as a Data Engineer;
Familiarity with Azure Data Lake Storage and Delta Live Tables;
Fluency in English, both written and spoken;
Strong analytical skills and a team-oriented mindset;
A passion for learning and professional growth in data engineering.
Preferred Qualifications:
Experience with Databricks;
Proficiency in API development with FastAPI;
Familiarity with cloud platforms (AWS, Azure, GCP, etc.);
Experience with Docker.
Why Join Us?
We value talent and commitment, offering a range of benefits for our team members, including:
Competitive full-time salary;
Comprehensive private health coverage under the company's group program;
Flexible working hours;
Access to state-of-the-art tools;
Opportunities for professional development, including language courses and specialized training;
Career advancement opportunities with industry-leading specialists;
A dynamic work environment that encourages personal and professional growth through challenging goals and mentorship.
If you're ready to embrace an exciting challenge, work with cutting-edge technologies, and enjoy your daily tasks, we invite you to apply! Please submit your detailed CV in English, referencing: (SDE/02/26).
Explore all our open vacancies by visiting the career section of our website.
Are you enthusiastic about big data and eager to engage with advanced technologies? We invite you to explore a thrilling opportunity as a Data Engineer - Spark Developer with our dynamic and growing development teams. Whether you prefer the vibrant atmosphere of our Athens office or the flexibility of remote work, we are excited to welcome your expertise and passion.
Key Responsibilities:
Architect, develop, test, deploy, maintain, and enhance data pipelines;
Implement coding solutions using Apache Spark on Azure Databricks;
Create and design big data architectures leveraging Azure Data Factory, Service Bus, BI, Databricks, and other Azure Services.
Essential Qualifications:
Bachelor's degree in Computer Science or Software Engineering;
Strong analytical mindset, team-oriented, dedicated to quality, and eager to learn;
Comprehensive understanding of Apache Spark;
Proven experience as a Data Engineer;
Advanced proficiency in Python or Scala;
Expertise in Spark query tuning and performance enhancement;
Familiarity with cloud platforms such as Azure, AWS, or GCP;
Fluent in both spoken and written English.
Preferred Qualifications:
Ability to understand and analyze Directed Acyclic Graph (DAG) operations;
Experience in providing cost estimates for big data processing;
Capability to write and review architecture documentation.
Benefits:
We value talent and commitment and offer a range of benefits to our team members:
Competitive full-time salary;
Comprehensive private health coverage under the company's group program;
Flexible working hours;
Access to state-of-the-art tools;
Opportunities for professional development, including language courses, specialized training, and continuous learning;
Career advancement opportunities with leading specialists in the industry;
A dynamic work environment that promotes challenging goals, autonomy, and mentorship, supporting both personal and company growth.
If you are looking for an exciting challenge, keen to work with innovative technologies, and enjoy your work, we would love to hear from you! Please submit your detailed CV in English, referencing: (DESD/02/26).
Explore our other open positions by visiting our career section at www.eurodyn.com and follow us on Twitter (@EURODYN_Careers) and LinkedIn.
European Dynamics (www.eurodyn.com) is a prominent European company specializing in Software, Information, and Communication Technologies, with a robust international presence.
Full-time|On-site|Athens or Ioannina, Sterea Ellada, Greece
Location: Athens or Ioannina, Sterea Ellada, Greece
About Snappi Bank
Snappi Bank is building a neobank from the ground up. The team focuses on financial freedom by delivering transparent, technology-driven digital banking services. The company aims to reshape how people interact with their finances.
Role Overview
The Data Engineer will design, build, and manage data architecture and pipelines that support data acquisition, storage, processing, and analysis across the organization. This position is open in both the Athens and Ioannina offices.
Main Responsibilities
Create and maintain data pipelines and infrastructure for efficient ingestion, processing, and storage of large datasets.
Work with data scientists, analysts, and other stakeholders to understand data needs and translate them into technical solutions.
Develop and optimize data models and schemas for effective storage and retrieval.
Build and manage ETL processes to bring data from various sources into data warehouses or lakes.
Monitor and troubleshoot pipelines to ensure data integrity, reliability, and performance.
Evaluate and introduce new tools or technologies to improve data processing and operational efficiency.
Document pipelines, processes, and solutions to support knowledge sharing and maintainability.
Partner with infrastructure and DevOps teams to deploy and manage data systems in cloud environments.
Keep up with trends and best practices in data engineering and analytics.
Qualifications
Bachelor's degree in Computer Science, Electronics, or equivalent experience in data roles.
Minimum 5 years of experience in a similar position (7+ years preferred; 3-5 years considered for junior roles).
Strong skills in SQL and Python; experience with Azure Data Factory is a plus.
Excellent interpersonal skills, including listening, negotiation, and presentation.
Clear verbal and written communication abilities.
Attention to detail.
Effective decision-making, problem analysis, and resolution skills.
Strong organizational habits.
Proactive approach to problem-solving.
Comfort working in a fast-changing environment.
Interest in agile software processes, data-driven development, reliability, and experimentation; experience with Agile product teams is a plus.
Why Work at Snappi?
Snappi Bank values innovation, trust, and ongoing growth. The team focuses on solutions and results. This is a chance to make a real impact on the future of banking and improve financial services for a broad audience.
Join our innovative team as a Semantic Data Engineer, where you'll play a crucial role in enhancing a sophisticated platform centered around RDF data models, SPARQL queries, and structured datasets. Your primary responsibilities will involve comprehending, maintaining, and advancing the semantic layer of our system, collaborating closely with backend engineers and architects. This position is ideal for a passionate specialist with a keen interest in data modeling, semantics, and knowledge representation within real-world production environments.
Key Responsibilities:
Analyze and uphold RDF/TTL data models and vocabularies;
Design, optimize, and manage SPARQL queries;
Facilitate data ingestion, transformation, and validation processes;
Ensure the consistency and accuracy of semantic data throughout the platform;
Work alongside backend engineers to integrate semantic logic into application workflows;
Assist in documenting semantic models, assumptions, and constraints;
Engage in troubleshooting data quality and reasoning challenges.
We are seeking a talented Site Reliability Data Engineer to join our dynamic team in Athens. As a Site Reliability Data Engineer, you will be at the forefront of ensuring the reliability and performance of our data systems. Your expertise will play a critical role in maintaining service uptime, optimizing system performance, and enhancing our data infrastructure.
Your responsibilities will include monitoring system performance, troubleshooting issues, and implementing automation tools to streamline processes. You will collaborate with cross-functional teams to design and deploy scalable infrastructure solutions.