About the job
Your Impact at Lila Sciences
As a Principal Engineer specializing in AI Security, you will shape and execute the technical strategy for securing AI applications across Lila's enterprise. Operating as a senior individual contributor, you will collaborate closely with IT and business teams to ensure the safe and compliant adoption of AI tools and platforms.
This position emphasizes securing both third-party and internally developed AI tools, safeguarding sensitive data, intellectual property, and vital scientific processes as AI technology becomes integral to our operations.
Key Responsibilities
- Enterprise AI Security Strategy: Develop and implement robust security controls and guidelines for the use of AI tools across the organization, including LLM APIs and SaaS AI platforms.
- AI Gateway & Agentic Gateway Security: Design and enforce AI gateway controls to monitor and manage access to AI systems, ensuring secure agentic workflows through identity and authorization constraints.
- AI Red Teaming & Adversarial Testing: Conduct red teaming exercises and adversarial testing to identify vulnerabilities in AI usage, including prompt injection and data exfiltration tactics.
- Data Protection for AI Usage: Design and implement measures to prevent sensitive data from leaking through AI systems, focusing on input/output filtering and secure data management.
- Multi-Layer AI Security: Integrate AI security with existing enterprise security layers, ensuring comprehensive visibility and control over AI service access and data handling.
- AI Threat Modeling: Construct threat models specific to enterprise AI applications, addressing risks associated with data leakage and unauthorized agent actions.
- Vendor & Platform Security: Evaluate and guide secure integration of third-party AI vendors, scrutinizing their data handling and model behavior.

