About the job
Join Us in Revolutionizing Healthcare
About Vitestro
Established in 2017 in Utrecht, Vitestro is at the forefront of transforming blood collection with the Aletta® Autonomous Robotic Phlebotomy Device™ (ARPD™). This innovative medical device combines multi-modal imaging technologies (near-infrared, ultrasound, and Doppler ultrasound) with robotics and artificial intelligence to perform diagnostic blood draws autonomously.
By tackling significant healthcare staffing challenges and improving the patient experience, Vitestro is redefining one of the most vital medical procedures. With a rapidly growing team of over 90 dedicated professionals, we are scaling our impact. As we deploy our first devices with clients, we are actively expanding our team to ensure successful implementation and sustained reliability.
At Vitestro, we prioritize continuous innovation and improvement.
Become a part of our dynamic team as a Data Engineer.
As a Data Engineer within the Application Team at Vitestro, you will be instrumental in enabling data-driven decisions and deriving valuable insights into our device performance. This is an ideal opportunity for a proactive engineer to make a substantial impact during a crucial phase of our growth.
Your Responsibilities
In a dynamic, cloud-native setting, your key responsibilities will include:
Creating, enhancing, and maintaining scalable ETL/ELT (batch) data pipelines in Python (Polars) within containerized deployments, including writing unit tests, managing schema versioning, and conducting impact analyses.
Integrating pipelines using cloud-native orchestration and execution services.
Diagnosing and resolving data discrepancies, collaborating with software engineers on necessary adjustments, and expanding automated data quality checks (e.g., completeness and consistency).
Working closely with the Senior Data Engineer to optimize pipeline performance while adhering to established architectural guidelines.
Engaging with engineering and business stakeholders to understand data requirements and deliver clean, structured, and insightful dashboards.
Contributing to pipeline auditability, traceability, and controlled change management.
Additionally, you will have the chance to expand your contributions beyond business intelligence by participating in data science initiatives, developing fault analysis tools, and supporting machine learning and AI workflows integrated within our devices.
