About the job
Join the Sana Team
At Sana, we are at the forefront of AI innovation, dedicated to creating superintelligence solutions that enhance workplace efficiency. We believe organizations achieve their goals faster when teams can access knowledge effortlessly, automate mundane tasks, and harness the capabilities of agentic AI. As a proud part of Workday, we focus on developing AI systems that support and augment human capabilities rather than replace them.
Our mission is realized through two groundbreaking products: Sana Agents and Sana Learn. Sana Agents provides a streamlined interface to all company applications, knowledge, and data, with AI agents that take on substantive tasks so teams can process and act on information at unprecedented scale. Sana Learn is a dynamic, AI-powered learning hub that combines an intuitive learning platform with intelligent features such as an AI tutor, automated content creation, and interactive applications, ensuring knowledge is not just accessible but actionable.
We are a highly skilled, product-driven team of engineers and designers from renowned companies including Google, Spotify, Apple, and Databricks, united by a commitment to technical excellence and rapid innovation. Our tools are already enhancing the productivity of over a million users across various leading enterprises, and we are just beginning our journey.
About the Role
As the Quality Assurance Engineer for Sana's AI agent platform, you will play a pivotal role in ensuring that our LLM-powered products are robust, reliable, and user-friendly. You will design and implement test strategies that match our pace of rapid development, automate essential workflows, and foster a culture of quality across our engineering teams. This is a hands-on position ideal for someone who excels at creating scalable testing methods, identifying the unique edge cases of agentic and LLM systems, and establishing safeguards that keep issues out of production. You will help deliver safe, trustworthy, and enterprise-ready agent workflows in today's evolving AI landscape.
Key Responsibilities
Design and execute comprehensive test plans for agent infrastructure, LLM-based APIs, and user journey flows.
Develop and sustain automated test suites across backend, frontend, and integration layers, including validation of prompts and responses for generative models.
Create tools and frameworks to expedite testing processes and detect regressions early, particularly in areas of agent reasoning, tool usage, and context management.
Work closely with engineers to integrate quality assurance into every development phase, focusing on the specific challenges posed by AI/LLM systems (e.g., non-determinism, hallucinations, safety concerns).
Lead root cause analyses and drive the resolution of critical incidents and issues.