About Us
At Heidi, we believe that healthcare deserves a new rhythm, one that fosters continuous and compassionate care. Our innovative AI Care Partner is designed to work seamlessly with clinicians, enhancing their ability to provide exceptional patient care.
Our dynamic team comprises doctors, engineers, designers, researchers, and creatives committed to developing tools that allow healthcare professionals to focus on what truly matters: their patients.
In just 18 months, Heidi has reclaimed over 18 million hours for healthcare providers, facilitating 73 million patient visits across 116 countries. We are proud to support over two million patient visits each week on a global scale.
With nearly $100 million in funding, Heidi is expanding in the US, UK, Canada, and Europe, collaborating with leading health systems such as the NHS, Beth Israel Lahey Health, and Monash Health.
Your Role
As a Senior AI Engineer on our Orchestration team, you will spearhead the design and implementation of a clinician-grade retrieval and question-answering framework. This will encompass data ingestion, indexing, ranking, grounding, and secure deployment.
You will establish the technical vision, define quality standards, and guide collaborative efforts among engineering, product, and clinical teams.
Bridging research and production, you will transform prototypes into dependable services with defined SLAs, traceable outputs, and the metrics that matter in clinical settings.
Key Responsibilities:
- Architect end-to-end systems for literature and guideline ingestion, normalization, metadata extraction, de-duplication, and versioning.
- Develop hybrid search and retrieval solutions that combine lexical, vector, and re-ranking approaches while meeting strict latency and cost constraints.
- Create grounding and answer synthesis mechanisms that accurately cite sources, maintain provenance, and quantify confidence levels.
- Lead model enhancements through prompting, fine-tuning, distillation, and integration of tools to boost reliability, coverage, and utility.
- Establish gold-standard evaluation, including offline information-retrieval metrics, factuality audits, and human review processes.
- Conduct online experiments to continuously refine and validate model performance.
