Qualifications
Minimum Qualifications:
- 8+ years in security engineering (application security, offensive security, or security architecture), with at least 1 year focused on GenAI/LLM/agentic security.
- Demonstrated expertise in the OWASP Top 10 for LLM Applications and its application to real-world systems.
- Proven knowledge of agentic system risks and application of the OWASP Top 10 for Agentic Applications (2026).
- Experience in secure software architecture.
- Strong practical skills in executing and clearly explaining complex security testing, including reproducible proofs of concept and actionable mitigations.
- Ability to write scalable standards and build cross-team consensus.
- Exceptional communication skills, with the ability to engage both senior engineers and security specialists.
About the Role
Distro is at the forefront of advancing GenAI developer tools, including IDE/CLI agents and MCP-based workflows. We are searching for a talented Senior AI Security Engineer to lead the standardization of AI tool evaluations and governance, reducing bespoke review burdens while implementing robust guardrails.
This role blends AI red teaming, security architecture, and security standards stewardship. You will partner closely with engineering teams and Engineering Security partners to develop a coherent, capability-based framework for safely approving and operating AI tools.
What You’ll Do:
- Act as the internal authority on AI security threat models and standards.
- Operationalize the OWASP Top 10 for LLM Applications and the OWASP Top 10 for Agentic Applications (2026).
- Develop tailored mappings for necessary controls and approval criteria.
- Lead AI security testing that is rapid, comprehensive, and AI-enhanced.
- Design and execute adversarial evaluations for agentic tools.
- Use AI to accelerate security work by building automated test harnesses, reproducible proofs of concept, and regression suites for new releases.
- Provide clear deliverables including reproduction steps, severity justifications, mitigations, vendor requests, and guardrails, while advocating for systemic improvements.
- Shape client-side defenses and reference architectures.
- Establish minimum guardrail architectures for AI developer tooling.
- Collaborate with other security teams to ensure that policies are enforceable and not merely documented.
- Standardize processes for vendor and model onboarding.
- Create reusable artifacts, such as standard security and telemetry requirements, and default trust tiers.
- Offer guidance for hosting open-source models.
- Drive clarity and adoption among developers.
- Publish and maintain clear guidance on desktop agents versus IDE/CLI agents.
- Clarify the distinction between safe defaults and behavioral restrictions with measurable outcomes.
- Conduct office hours and enablement sessions to align stakeholders with a unified playbook.