About Tenable:
Tenable® is a leading Exposure Management company, empowering over 44,000 organizations worldwide to identify and mitigate cyber risks. We proudly serve 65% of the Fortune 500, 45% of the Global 2000, and numerous government agencies. Join us on our mission to enhance cybersecurity!
Why Work at Tenable?
Our team's spirit is our greatest asset! We collaborate to develop innovative, top-tier cybersecurity solutions while fostering a culture of inclusion, respect, and excellence. As part of our #OneTenable team, you will work alongside some of the industry’s brightest minds, receiving the support and resources needed to make a meaningful impact. Together, we achieve remarkable results!
Your Opportunity:
Tenable Cloud Security is on the lookout for a Senior AI Security Researcher to join our elite product research team. This pivotal role involves shaping the emerging domain of AI security. As a key player, you will direct research initiatives aimed at identifying novel risks in AI systems and translating these insights into actionable product features and groundbreaking research. You will collaborate with seasoned researchers and engineers who are passionate about security, with the autonomy to explore and innovate in this rapidly evolving field.
We seek a distinguished security researcher who thrives in ambiguity, possesses an attacker's mindset, and can make sense of complexity in a nascent field. Your curiosity and technical expertise will drive your success in defining risks for systems whose behavior is still being understood.
Your Role:
- Lead research in a burgeoning field by analyzing AI frameworks, services, and architectures to identify risks, vulnerabilities, and attack vectors before they become widespread issues.
- Define AI security risks by evaluating how exposure is generated and exploited in AI systems, collaborating with engineering and product teams to transform AI research into practical applications.
- Assess the risks associated with pre-trained models, vector databases, and orchestration frameworks (e.g., LangChain, LlamaIndex), and demonstrate how shadow AI can create vulnerabilities for organizations.
- Produce insightful blogs, whitepapers, and technical advisories that shape industry standards and guide best practices.
