About the job
Join Klue and Help Shape the Future of Competitive Intelligence!
Exciting Opportunity: Klue Engineering is Expanding!
We are looking for a Senior Software Engineer to join our Vancouver team. If you are passionate about building and refining cutting-edge LLM-powered agents at scale, we want to hear from you! You will bring a builder's mindset, scientific rigor, and an unwavering focus on customer satisfaction. Your contributions will significantly shape our products while you stay at the forefront of AI innovation.
In this role, you will define the architecture for building and operating AI agents at scale, covering everything from multi-agent orchestration and sub-agent design to the evaluation frameworks that make outputs reliable and measurable. You'll optimize across the entire stack, including inference costs, retrieval and query performance, and the feedback loops that drive continual improvement.
You will not only execute on our strategic roadmap but also help shape it, providing technical insight into the product's direction and collaborating closely with product leadership. You will own projects from inception to completion, guiding architectural decisions and experimentation strategies and ensuring production readiness for our LLM-powered agents.
Your Responsibilities
Develop and deploy backend systems for agentic workflows. You'll design retrieval pipelines, orchestration layers, and complex agent architectures that transform vast amounts of competitive data (news, press releases, website updates, Slack messages, emails, reviews, and CRM data) into actionable intelligence for our clients.
Enhance LLM-powered workflows end to end. This includes prompt design, retrieval strategies, caching, and latency optimization, making our agent responses faster, more precise, and consistently reliable in production.
Lead evaluations of agent systems at scale. You will create and manage evaluation frameworks (automated, offline, and human-in-the-loop) to assess relevance, quality, latency, and overall task success across our agent pipelines. You'll define the metrics for excellence and build the infrastructure for continuous measurement.
Design and implement human-in-the-loop systems. Collaborating closely with product and design teams, you will propose and prototype feedback mechanisms, review workflows, and correction loops that maintain the accuracy and trustworthiness of AI agents over time.

