About Us
At Gather AI, we are revolutionizing the supply chain industry. Our vision-powered platform uses autonomous drones and existing infrastructure to gather real-time data, digitizing traditionally manual, error-prone workflows. This lets facilities operate smarter, safer, and more efficiently, redefining the standard for on-time, complete delivery.
Join us if you are eager to contribute to transformative technology and make a substantial impact in a crucial industry. As leaders in the rapidly advancing robotics sector, we invite you to help reshape the global supply chain, one intelligent warehouse at a time.
About the Team
You will work closely with our full-stack engineering team and its leads, partnering frequently with the cloud engineering team on API and backend testing and with our ML team on feature validation. Your cross-departmental collaboration will extend to Product Management for acceptance criteria and release readiness, and to Customer Support for addressing field-reported issues.
About the Role
We are seeking a Senior QA Engineer to own the end-to-end test automation strategy for Gather AI’s full-stack platform. You will build and maintain Playwright-based E2E test suites, design API automation coverage, integrate tests into CI/CD pipelines, and develop a structured evaluation framework for LLM-powered features.
This is a rare opportunity to shape QA practices and tooling from the ground up at a production AI/robotics company, not a ticket-closing QA factory. Your contributions will directly influence product quality for our customers.
What You’ll Do
- Develop and maintain Playwright-based E2E test automation suites covering essential user workflows, API validation, and edge case scenarios across web applications.
- Design and implement a robust API automation framework for backend services, ensuring thorough REST endpoint coverage and data validation.
- Integrate automated test suites into CI/CD pipelines (GitHub Actions) with comprehensive reporting, failure alerts, and quality gates.
- Create a structured LLM tool evaluation framework — defining testing methodologies, evaluation criteria, and repeatable benchmarks for AI/ML-powered features.
- Identify and close gaps in test coverage, eliminate flaky tests, and pay down QA technical debt; champion quality metrics such as test coverage, defect escape rate, and automation coverage.
- Collaborate with cross-functional teams to ensure quality throughout the development lifecycle.
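To illustrate the kind of structured LLM evaluation framework described above, here is a minimal sketch in Python. All names (`EvalCase`, `run_eval`, `fake_model`) are hypothetical and the stand-in model is for illustration only; a real harness would call the LLM-powered feature under test and use richer scoring than pass/fail checks.

```python
# Minimal sketch of a repeatable LLM evaluation harness (hypothetical names).
# Each benchmark case pairs a prompt with a checker that scores the output.
from dataclasses import dataclass
from typing import Callable

@dataclass
class EvalCase:
    name: str
    prompt: str
    check: Callable[[str], bool]  # returns True if the model output passes

def run_eval(model: Callable[[str], str], cases: list[EvalCase]) -> dict:
    """Run every benchmark case against the model and report a pass rate."""
    results = {c.name: c.check(model(c.prompt)) for c in cases}
    passed = sum(results.values())
    return {"results": results, "pass_rate": passed / len(cases)}

# Stand-in model for illustration; swap in the real LLM feature under test.
def fake_model(prompt: str) -> str:
    return "pallet count: 42" if "count" in prompt else "unknown"

cases = [
    EvalCase("reports_count", "How many pallets? Give a count.",
             lambda out: "42" in out),
    EvalCase("handles_unknown", "Describe the weather.",
             lambda out: out == "unknown"),
]

report = run_eval(fake_model, cases)
print(report["pass_rate"])
```

Because the cases and checkers are declared as data, the same benchmark runs identically in CI on every commit, which is what makes the evaluation repeatable.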

