About the job
Intercom builds AI-powered customer service tools that help businesses deliver reliable, round-the-clock support. Our AI agent, Fin, and the Intercom Customer Service Suite combine automation with human expertise to handle millions of customer inquiries every month. Since 2011, nearly 30,000 businesses worldwide have chosen Intercom to raise their customer service standards.
Role overview
The Senior AI Infrastructure Engineer role focuses on developing and scaling the systems that train and deploy Intercom’s next generation of AI products. The AI Infrastructure team works across the stack, from GPU-level engineering up to the user-facing agents that power customer support. This team has built training pipelines and inference systems for models like Fin Apex, which are tailored for customer service and serve as the foundation for Intercom’s AI capabilities.
What you will do
- Design and scale training pipelines for large transformer and LLM models, including data ingestion, preprocessing, distributed training, and evaluation.
- Build and improve inference services to ensure fast, reliable experiences for clients, covering auto-scaling, routing, and failover.
- Optimize GPU-level performance by tuning kernels, improving resource utilization, and identifying bottlenecks in training and inference workflows.
- Work closely with ML scientists to implement new training and inference techniques.
What we’re looking for
- Strong experience with large-scale model training or inference, and hands-on knowledge of low-level GPU programming (such as CUDA or Triton). Experience across more than one of these areas is especially valuable.
Location
This position is based in Dublin, Ireland.