About the job
At Thinking Machines Lab, our mission is to enhance human capabilities through the development of collaborative general intelligence. We are building a future where everyone can use AI tailored to their specific needs and aspirations.
Our team includes accomplished scientists, engineers, and innovators behind some of the most widely used AI applications, including ChatGPT and Character.ai, renowned open-weight models such as Mistral, and influential open-source projects such as PyTorch, OpenAI Gym, Fairseq, and Segment Anything.
About the Role
We are looking for a security-focused Software Engineer to help keep our products secure by design while enabling fast, ambitious product development. You will work closely with product and research teams to build security into the design and development process, and create tools and automation that keep our systems safe at scale.
Note: This is an ongoing opportunity, and we encourage you to express your interest. We receive many applications and may not always have an immediate match for your skills, but we review applications regularly and will reach out as new roles become available. You may reapply if you gain additional experience, but please limit applications to once every six months. We also post specific roles for particular projects or teams, and you are welcome to apply for those as well.
What You’ll Do
- Collaborate with product and research teams to integrate security into the development lifecycle: threat modeling, design reviews, and establishing secure defaults for new features.
- Design and implement security controls throughout our product stack (authentication, authorization, session management, input validation, etc.).
- Create and maintain security tooling and automation for engineers: secure frameworks and templates, CI/CD checks, dependency management, and vulnerability detection.
- Work alongside researchers to identify and address AI-specific product risks, such as model abuse, prompt injection, data leakage, or misuse of capabilities.
- Enhance observability and detection for security-related events: access anomalies, abuse patterns, and suspicious behavior in production.

