About the job
Join Us If You:
Are eager to learn from a group of experienced engineers who have successfully delivered over $10 billion in value.
Prefer to work in our San Francisco office three days a week.
Excel in navigating uncertainty.
Possess a product-oriented mindset with a strong emphasis on customer satisfaction.
Are passionate about working with Large Language Models (LLMs), the Model Context Protocol (MCP), Cloud Infrastructure, and Observability tools.
Bring at least five years of professional or open-source experience.
Bonus: Have previous experience in a startup environment and understand the dynamics involved.
About TierZero
At TierZero, we are redefining how engineering teams leverage AI to ship code faster and more efficiently. While AI accelerates the development cycle, productionizing that code remains a challenge. Our platform empowers agile engineering teams to manage code in production effectively, ensuring quicker incident response times, comprehensive operational visibility, and shared knowledge across the whole team.
Backed by $7 million in funding from leading investors like Accel and SV Angel, TierZero is trusted by industry leaders such as Discord, Drata, and Framer to operate their high-scale systems and create the foundational layer for AI-driven engineering teams.
The Role
As a founding member of our team, you will play a crucial role in conceptualizing and building our core product and systems from the ground up. Collaborating closely with the CEO, CTO, and our customers, you will work on a variety of projects, including:
Designing and implementing intelligent AI systems capable of analyzing extensive unstructured data.
Delivering full-stack features informed by direct user feedback.
Enhancing the product experience to ensure agents are not only intelligent but also user-friendly and reliable for engineers.
Creating systems that autonomously assess LLM outputs, enhancing agent reasoning through iterative self-play and feedback mechanisms.
Developing machine learning pipelines encompassing data ingestion, feature generation, embedding stores, retrieval-augmented generation (RAG) pipelines, vector search infrastructure, and graph databases.
Investigating and prototyping with open-source and cutting-edge LLMs to assess their capabilities and trade-offs.
Establishing scalable infrastructure to support long-running, multi-step agents, addressing aspects like memory management, state handling, and asynchronous workflows.
