Qualifications
You might be an ideal candidate if you possess:
- Hands-on experience in training and evaluating large-scale deep learning models.
- Expertise in popular deep learning frameworks such as PyTorch and JAX.
- A strong background in deploying machine learning algorithms within software systems at scale.
- The adaptability to thrive in a dynamic environment with a degree of uncertainty.
- A collaborative and supportive approach to teamwork.
About the job
As a Technical Staff Member specializing in Machine Learning, you will:
Engage in the complete development lifecycle of innovative large-scale deep learning models.
Curate datasets, architect solutions, implement algorithms, and train and assess models to enhance our offerings.
Work collaboratively with engineers and researchers to convert groundbreaking research into real-world applications.
Join us at a pivotal time, take on diverse roles, and contribute to building transformative products from the ground up!
About Reka
Reka is on a mission to create valuable multimodal artificial intelligence that empowers organizations and businesses. As a startup focused on foundation models, we are headquartered in the San Francisco Bay Area, California, with a commitment to a remote-first culture. Our diverse team comprises top talent from around the globe, including contributors to significant AI advancements over the past decade.
Similar jobs
1 - 20 of 2,022 Jobs
At Gimlet Labs, we are pioneering the first heterogeneous neocloud tailored for AI workloads. As AI technology evolves, the industry confronts critical limitations in power, capacity, and cost linked to the traditional homogeneous, vertically integrated infrastructure. Gimlet addresses these challenges by decoupling AI workloads from the underlying hardware, intelligently partitioning them into components and orchestrating each to the hardware that best meets its performance and efficiency needs. This approach enables heterogeneous systems across diverse vendors and generations of hardware, including the latest emerging accelerators, resulting in significant improvements in performance and cost efficiency at scale.

Building upon this platform, Gimlet is developing a production-grade neocloud for agentic workloads. Our customers can deploy and manage their workloads through stable, production-ready APIs without the complexities of hardware selection, placement, or low-level performance optimization. Gimlet collaborates with foundational labs, hyperscalers, and AI-native companies to enable real production workloads designed to scale to gigawatt-class AI datacenters.

We are currently in search of a Technical Staff Member specializing in distributed systems. In this role, you will be instrumental in developing the core platform responsible for scheduling, routing, and managing AI workloads reliably at production scale. You will engage with systems that coordinate execution across thousands of nodes, provide stable production APIs, and guarantee predictable workload performance under real-world conditions of load and failure. This position is ideal for engineers passionate about building foundational infrastructure, understanding end-to-end systems, and operating at scale.
About Liquid AI
Originating from MIT CSAIL, Liquid AI specializes in the development of general-purpose AI systems designed to operate seamlessly across various platforms, including data center accelerators and on-device hardware. Our focus is on delivering low latency, efficient memory usage, privacy, and reliability. We collaborate with organizations in diverse sectors such as consumer electronics, automotive, life sciences, and financial services. As we experience rapid growth, we seek outstanding talent to join our mission.

The Opportunity
The Training Infrastructure team is at the forefront of building the distributed systems that empower our next-generation Liquid Foundation Models. As our operations expand, we aim to innovate, implement, and enhance the infrastructure crucial for large-scale training. This role is centered around high ownership of training systems, emphasizing runtime, performance, and reliability rather than a typical platform or SRE function. You will collaborate within a small, agile team, creating vital systems from the ground up instead of working with pre-existing infrastructure. While San Francisco and Boston are preferred, we are open to other locations.

What We're Looking For
We are seeking an individual who:
- Embraces the complexity of distributed systems: Our team is dedicated to maintaining stability during extensive training runs, troubleshooting training failures across GPU clusters, and enhancing overall performance.
- Is passionate about building: We value team members who take pride in developing robust, efficient, and reliable infrastructure.
- Excels in uncertain environments: Our systems are designed to support evolving model architectures. You will be making decisions based on incomplete information and rapidly iterating.
- Aligns with team goals and delivers results: The best engineers on our team align with collective priorities while providing data-driven feedback when challenges arise.

The Work
- Design and develop core systems that ensure quick and reliable large training runs.
- Create scalable distributed training infrastructure for GPU clusters.
- Implement and refine parallelism and sharding strategies for evolving architectures.
- Optimize distributed efficiency through topology-aware collectives, communication/compute overlap, and straggler mitigation.
- Develop data loading systems to eliminate I/O bottlenecks for multimodal datasets.
TierZero builds tools that help engineering teams deliver and manage code efficiently. The platform enables quicker incident response, clearer operational visibility, and shared knowledge among engineers. Backed by $7 million from investors like Accel and SV Angel, TierZero supports clients such as Discord, Drata, and Framer as they strengthen infrastructure for AI-driven work. This role is based at TierZero's San Francisco headquarters, with a hybrid schedule requiring three days onsite each week. As a founding member of the technical staff, you will work directly with the CEO, CTO, and customers to influence the direction of TierZero's core products and systems. The position calls for flexibility as priorities shift and close collaboration across the company.

What you will do
- Design and develop AI systems that handle large volumes of unstructured data.
- Build full-stack product features, informed by direct feedback from users.
- Enhance the product so agents are intelligent, reliable, and easy for engineers to use.
- Create systems to automatically evaluate outputs from large language models and improve agentic reasoning through self-play and feedback.
- Construct machine learning pipelines, including data ingestion, feature creation, embedding stores, retrieval-augmented generation (RAG) pipelines, vector search, and graph databases.
- Experiment with open-source and emerging large language models to compare different approaches.
- Develop scalable infrastructure for long-running, multi-step agents, including memory, state management, and asynchronous workflows.

Requirements
- Interest in working with large language models, managed cloud platforms, cloud infrastructure, and observability tools.
- At least 5 years of professional experience or significant open-source contributions.
- Comfort with shifting priorities and tackling new technical problems.
- Strong product focus and commitment to customer outcomes.
- Openness to learning from a team with a track record of delivering over $10 billion in value.
- Ability to work onsite in San Francisco three days per week.
- Bonus: Experience in a startup setting and familiarity with startup dynamics.
At Catalog, we are pioneering the commerce infrastructure for AI: creating the essential framework that enables digital agents to not only explore the web but also comprehend, analyze, and engage with products. Our innovations drive the future of AI-driven shopping experiences, fundamentally transforming how consumers discover and purchase items online.

Role Overview
As a Technical Staff Member, you will be instrumental in developing core systems, shaping our engineering culture, and transitioning our vision from prototype to a robust platform. This role requires full-stack expertise and a commitment to owning and resolving challenges from start to finish.

Who You Are
- You have experience creating beloved and trusted products from the ground up.
- You combine technical proficiency with a keen product sense and data-driven intuition.
- You are well-versed in AI technologies.
- You prioritize speed, write clean code, and ensure thorough instrumentation.
- You seek a high level of ownership within a small, talent-rich team based in San Francisco.

Challenges You Will Tackle
- Develop and deploy agentic-search APIs that deliver structured and real-time product data in milliseconds.
- Build checkout systems enabling agents to conduct transactions with any merchant.
- Create an embeddings and retrieval layer that optimizes recall, precision, and cost efficiency.
- Establish a product graph and ranking pipeline that adapts based on actual user outcomes.

Preferred Qualifications
- Proven experience shipping data-centric products in a live environment.
- Experience with recommendation systems or information retrieval methodologies.
- Familiarity with API development, search indexing, and data pipeline construction.

Our Work Culture
We operate with a small, high-trust, and highly motivated team, fostering an environment of in-person collaboration in North Beach, San Francisco. Our process involves debate, decision-making, and execution. If your profile aligns with our needs, we will contact you to arrange 2-3 brief technical interviews, followed by an onsite meeting in our office where you will collaborate on a small project, exchange ideas, and meet the team.
Join our dynamic team at Adyen as a Technical Staff Member in San Francisco! We are seeking innovative minds passionate about technology and problem-solving. In this role, you will collaborate with cross-functional teams to craft solutions that enhance our services and improve customer experiences.
About TierZero
TierZero helps engineering teams use AI to build and ship code more efficiently. The platform targets the bottleneck of human speed in production, giving teams tools for faster incident response, better operational visibility, and shared knowledge. TierZero is backed by $7M in funding from investors including Accel and SV Angel. Companies like Discord, Drata, and Framer trust TierZero to strengthen their infrastructure for AI-driven engineering.

Role Overview: Founding Member of Technical Staff
This is an on-site role based at TierZero's San Francisco headquarters, with three days a week in the office. As a founding member, you will collaborate directly with the CEO, CTO, and early customers to shape the direction of both product and systems. The work spans hands-on development and close engagement with users and leadership.

What You Will Do
- Design and build intelligent AI systems to analyze large volumes of unstructured data.
- Deliver full-stack features based on real user feedback.
- Improve the product experience so AI agents are both reliable and easy for engineers to use.
- Develop systems that automatically evaluate LLM outputs and advance agentic reasoning using self-play and feedback loops.
- Create machine learning pipelines, including data ingestion, feature generation, embedding stores, retrieval-augmented generation (RAG), vector search, and graph databases.
- Prototype with open-source and new LLMs, comparing their strengths and weaknesses.
- Build scalable infrastructure for long-running, multi-step agents, with attention to memory, state, and asynchronous workflows.

What We Look For
- Over five years of relevant professional or open-source experience.
- Comfort working in environments with uncertainty and evolving challenges.
- Strong product focus and a drive for customer satisfaction.
- Interest in large language models (LLMs), MCPs, cloud infrastructure, and observability tools.
- Previous startup experience is a plus.

Location
This position is based in San Francisco, with on-site work three days per week at TierZero's HQ.
tierzero is looking for a Founding Member of Technical Staff to help shape the direction of its technology from the ground up. This role is based at the company's San Francisco headquarters.

Role overview
As an early technical hire, you will work closely with engineers and product managers to build new products and features. The work centers on designing, coding, and delivering software solutions that address client needs and support tierzero's growth.

Impact
Contributions in this role will directly influence the company's future. The team values initiative and hands-on problem solving, giving each member a chance to make a visible difference in how the company evolves.

Collaboration
This position involves regular collaboration with a small, focused team. Input and ideas from every member help guide product direction and technical decisions.
At Gimlet Labs, we are pioneering the development of the first heterogeneous neocloud designed specifically for AI workloads. As the demand for AI systems surges, traditional homogeneous infrastructures face critical limits in power, capacity, and cost. Our platform effectively decouples AI workloads from their hardware foundations, intelligently partitioning tasks and orchestrating them to the most suitable hardware for optimal performance and efficiency. This strategy fosters heterogeneous systems that span multiple vendors and generations, including cutting-edge accelerators, enabling significant enhancements in performance and cost-effectiveness at scale.

In addition to this foundational work, Gimlet is establishing a robust neocloud for agentic workloads. Our clients benefit from deploying and managing their workloads via stable, production-ready APIs, without the need to navigate hardware selection or performance optimization intricacies. We collaborate with foundation labs, hyperscalers, and AI-native companies to drive real production workloads capable of scaling to gigawatt-class AI datacenters.

We are currently seeking a Member of Technical Staff specializing in ML systems and inference. In this pivotal role, you will be responsible for designing and constructing inference systems that facilitate the execution of complete models in real production environments. You will operate at the intersection of model architecture and system performance to ensure that inference processes are swift, predictable, and scalable. This position is perfect for engineers with a deep understanding of modern model execution and a passion for optimizing latency, throughput, and memory utilization across the entire inference lifecycle.
tierzero seeks a Founding Member of Technical Staff to play a key role in building the company's technology from the earliest stages. This position is based at the San Francisco headquarters and offers the chance to collaborate directly with founders and engineers.

Role overview
As an early team member, you will help design and develop new products and systems. The work involves close collaboration with others in the office, shaping both the technical direction and the culture of the engineering team.

What you will do
- Develop core technology in partnership with founders and engineers
- Contribute ideas and code that guide the evolution of tierzero's products
- Help define engineering standards and establish best practices

Location
This position is based onsite at the San Francisco HQ.
At Magic, we are driven by our mission to develop safe Artificial General Intelligence (AGI) that propels humanity forward in addressing the most critical challenges. We firmly believe that the future of safe AGI lies in automating research and code generation, allowing us to enhance models and tackle alignment issues more effectively than humans alone can manage. Our approach combines cutting-edge pre-training, domain-specific reinforcement learning (RL), ultra-long context, and efficient inference-time computation to realize this vision.

Position Overview
As a Software Engineer within the Inference & RL Systems team, you will play a pivotal role in designing and managing the distributed systems that enable our models to function seamlessly in production, supporting extensive post-training workflows. This position operates at the intersection of model execution and distributed infrastructure, focusing on systems that influence inference latency, throughput, stability, and the reliability of RL and post-training loops.

Our long-context models impose significant execution demands, including KV-cache scaling, managing memory constraints for lengthy sequences, batching strategies, long-horizon trajectory rollouts, and ensuring consistent throughput under real-world workloads. You will be responsible for the infrastructure that ensures both production inference and large-scale RL iterations are efficient and dependable.

Key Responsibilities
- Craft and scale high-performance inference serving systems.
- Optimize KV-cache management, batching methods, and scheduling processes.
- Enhance throughput and latency for long-context tasks.
- Develop and sustain distributed RL and post-training infrastructure.
- Boost reliability across rollout, evaluation, and reward pipelines.
- Automate fault detection and recovery mechanisms for serving and RL systems.
- Analyze and eliminate performance bottlenecks across GPU, networking, and storage components.
- Collaborate with Kernel and Research teams to ensure alignment between execution systems and model architecture.

Qualifications
- Solid foundation in software engineering and distributed systems.
- Proven experience in building or managing large-scale inference or training systems.
- In-depth understanding of GPU execution constraints and memory trade-offs.
- Experience troubleshooting performance issues in production machine learning systems.
- Capability to analyze system-level trade-offs between latency, throughput, and cost.
Overview
Due to increasing market demand and a robust six-month product roadmap, Listen Labs is expanding its engineering team. We seek a technically adept individual (our team includes three IOI medalists) who is eager to contribute to a product that is revolutionizing corporate decision-making. If you are passionate about solving intricate problems from start to finish, we invite you to connect with us.

About Listen Labs
Listen Labs is an AI-driven research platform that empowers teams to swiftly extract insights from customer interviews in hours rather than months. Our technology enables clients to analyze conversations, identify recurring themes, and expedite informed product decisions.

Company Highlights
- Exceptional Team: Composed of seasoned entrepreneurs (with prior AI exits), co-founders, and experts from leading firms such as Jane Street, Twitter, Stripe, Affirm, Bain, Goldman Sachs, and more, our team is built on a foundation of excellence.
- Rapid Growth: We are a dynamic team of 40, supported by Sequoia, achieving a remarkable growth trajectory from $0 to a $14 million run-rate in less than a year. We prioritize speed, craftsmanship, and collaboration with individuals who embrace ownership.
- Impressive Traction: We have seen rapid growth across various sectors, securing enterprise clients such as Google, Microsoft, Nestlé, and P&G.
- Outstanding Performance: Our industry-leading win rate is a direct result of our uniquely differentiated product.
- Market Validation: We consistently attract customers across every segment, often landing six-figure deals that lead to quick expansions.
- Viral Product: Our interviews are shared with tens of thousands of viewers, driving product-led growth, organic expansion, and daily inquiries from Fortune 500 companies.

Technical Challenges
- Research Agent Development: Unlike a traditional software purchase, hiring McKinsey buys insights and execution expertise. We are building Listen Labs with that mindset: an AI agent that understands our platform and best research practices, assisting users in project setup, interview execution, and response analysis.
- Human Database Creation: A core value proposition is our capability to connect users with specific demographics. We are developing a database of millions of individuals, continually enhancing our understanding of user needs as they engage with Listen Labs.
At Composio, we are developing advanced infrastructure that enables agents to seamlessly interact with essential work tools such as GitHub, Gmail, Notion, Salesforce, and more. Our dedicated team of engineers is committed to tackling challenges ranging from contextual understanding to search functionalities, ensuring we provide an exceptional bridge between your agents and their tools.

Having secured $25M in Series A funding from Lightspeed, alongside prominent angel investors like Guillermo Rauch (CEO of Vercel), Dharmesh Shah (CTO of HubSpot), and Gokul Rajaram, we have experienced remarkable growth, tripling our ARR at the start of this year. Our clientele ranges from Y Combinator cohorts to Wabi, Glean, Zoom, and beyond.

Your Role
- Enhance the experience of teams utilizing our platform by refining our core APIs and SDK.
- Create intuitive interfaces for both frontend and SDK applications.
- Take ownership of product development from concept through to production.
- Collaborate closely with customers to cultivate their loyalty while enhancing the product.
- Craft clear and concise documentation.
Overview
Join Listen Labs as we respond to a surge in market demand with an ambitious 6-month product roadmap. We are expanding our engineering team and are on the lookout for a highly skilled technical expert (our current team includes three IOI medalists) who is eager to build a transformative product that reshapes decision-making for businesses. If you have a passion for solving intricate problems from start to finish, we want to connect with you.

About Listen Labs
Listen Labs is an AI-driven research platform designed to help teams quickly extract insights from customer interviews in a matter of hours rather than months. We empower our clients by enabling them to analyze conversations, identify key themes, and make faster, more informed product decisions.

Why Work with Us?
- Exceptional Team: Founded by seasoned entrepreneurs with a successful AI exit, along with talent from renowned companies such as Jane Street, Twitter, Stripe, Affirm, Bain, and Goldman Sachs, our team boasts impressive credentials including IOI and ICPC backgrounds.
- Rapid Growth: As a 40-person team backed by Sequoia Capital, we have achieved a remarkable growth trajectory, scaling from $0 to a $14 million run-rate in less than a year. We prioritize craftsmanship and thrive on collaboration with individuals who take ownership.
- Impressive Traction: We are experiencing rapid growth across various sectors, securing enterprise clients such as Google, Microsoft, Nestlé, and Procter & Gamble.
- Proven Performance: We maintain an industry-leading win rate driven by our uniquely differentiated product.
- Market Validation: We consistently attract customers from diverse segments, achieving six-figure contracts that facilitate quick expansions.
- Viral Product: Our interviews reach tens of thousands of viewers, promoting product-led growth, organic expansion, and daily interest from Fortune 500 companies.

Technical Challenges Await
- Research Agent Development: Unlike a traditional software purchase, hiring McKinsey buys valuable opinions, expertise, and execution. We aim to provide users with an AI agent that possesses complete knowledge of our platform and best research practices, assisting them with project setup, conducting interviews, and analyzing responses.
- Human Database Creation: One of our core offerings is the ability to identify target users effectively (e.g., "power users of ChatGPT and Excel"). We are building a comprehensive database that connects users with the insights they need.
Full-time | $192K/yr - $260K/yr | On-site | San Francisco, California
At Databricks, we are passionate about empowering data teams to tackle some of the world's most challenging problems, from security threat detection to cancer drug development. Our mission is to build and operate the leading data and AI infrastructure platform, enabling our customers to concentrate on the high-value challenges that are integral to their own objectives. Founded in 2013 by the original creators of Apache Spark™, Databricks has rapidly evolved from a small office in Berkeley, California, to a global company with over 1,000 employees. Trusted by thousands of organizations, from startups to Fortune 100 companies, we are recognized as one of the fastest-growing SaaS companies worldwide.

Our engineering teams create highly sophisticated products that address significant needs in the industry. We continuously push the limits of data and AI technology while maintaining the resilience, security, and scalability essential for our customers' success on our platform. We manage one of the largest-scale software platforms, consisting of millions of virtual machines that generate terabytes of logs and process exabytes of data daily. At this scale, we frequently encounter cloud hardware, network, and operating system faults, and our software must effectively shield our customers from these challenges. Modern data analysis leverages advanced techniques, such as machine learning, that far exceed the capabilities of traditional SQL query engines.

As a Software Engineer on the Runtime team at Databricks, you will be instrumental in developing the next generation of distributed data storage and processing systems that outperform specialized SQL query engines in relational query performance, while providing the flexibility and programming abstractions to support a variety of workloads, from ETL to data science. Examples of projects you may work on include:
- Apache Spark™: Contributing to the de facto open-source framework for big data.
- Data Plane Storage: Developing reliable, high-performance services and client libraries for storing and accessing vast amounts of data on cloud storage backends like AWS S3 and Azure Blob Store.
- Delta Lake: A storage management system that merges the scalability and cost-effectiveness of data lakes with the performance and reliability of data warehouses, featuring low-latency streaming. Its higher-level abstractions and guarantees, including ACID transactions and time travel, significantly reduce the complexity of real-world data engineering architectures.
- Delta Pipelines: Aiming to simplify the management of data engineering pipelines.
Join the team at Mirendil as a Member of Technical Staff specializing in Machine Learning Systems. In this role, you will leverage your expertise to develop innovative solutions that enhance our ML frameworks and contribute to groundbreaking projects in the AI space. Collaborate with top talent in a dynamic environment that promotes creativity and technical excellence.
Full-time | $250K/yr - $300K/yr | Hybrid | San Francisco
About Us
At Ambience Healthcare, we are not just another scribe; we are pioneering an AI intelligence platform that reintegrates humanity into healthcare, delivering significant ROI for health systems nationwide. Our technology empowers providers to concentrate on delivering exceptional care by alleviating the administrative burdens that distract them from their patients and essential duties. Ambience offers real-time, coding-aware documentation and clinical workflow support across various healthcare settings at the leading health systems in North America.

Our teams operate with unwavering dedication and extreme ownership to develop optimal solutions for our healthcare partners. We value transparency, positivity, and deep contemplation, holding each other to high standards because we recognize that the challenges we tackle are of utmost importance. Recognized as the leader in enhancing clinician experience by KLAS Research in their Emerging Solutions Top 20 Report, honored by Fast Company as one of the Next Big Things in Tech, acknowledged by Inc. as one of the best AI companies in healthcare, and selected as a LinkedIn Top Startup in 2024 and 2025, we are proudly supported by Oak HC/FT, Andreessen Horowitz (a16z), the OpenAI Startup Fund, and Kleiner Perkins, and we are just beginning our journey.

The Role
Ambience processes millions of patient encounters across the largest health systems in the country. These organizations rely on us for real-time clinical workflows where latency and reliability significantly influence patient care. A delay during a patient visit is not merely a negative metric; it can lead to a physician abandoning the tool.

In this position, you will own the core systems that enable Ambience to scale with reliability: database architecture, caching, multi-tenancy, and performance optimization that shapes the user experience for clinicians. You will design database architectures that accommodate our growth, build caching systems that prevent EHR API latency from affecting critical processes, and develop multi-tenant infrastructure that protects customer data while enhancing performance. Your ultimate goal will be to create infrastructure that other teams rely on effortlessly.

Our engineering roles are hybrid, requiring presence in our San Francisco office three times a week.
About Liquid AI
Originating from the prestigious MIT CSAIL, Liquid AI crafts cutting-edge, general-purpose AI systems designed for optimal efficiency across a variety of platforms, from data center accelerators to edge devices. Our solutions prioritize low latency, minimal memory requirements, privacy, and reliability. We collaborate with industry leaders in consumer electronics, automotive, life sciences, and financial services, and as we expand rapidly, we are looking for exceptional talent to join our journey.

The Opportunity
Join us at the exciting crossroads of advanced foundation models and the open-source community. In this pivotal role, you will oversee developer relations and community engagement, influencing how our models are adopted, documented, and integrated throughout the AI ecosystem. This unique position allows you to balance impactful community work with essential technical contributions, giving you the chance to shape how our models are represented and utilized by developers worldwide. If you are passionate about excellent documentation, enhancing developer experience, and democratizing access to powerful AI models, this is your chance to influence the future of open-source AI.

What We're Looking For
We seek a proactive individual who:
- Takes ownership: Manages open-source partnerships from initial outreach to ongoing collaboration.
- Thinks community-first: Integrates documentation, tutorials, integrations, and support into a seamless developer experience.
- Is pragmatic: Focuses on developer adoption and partner success rather than superficial metrics.
- Communicates clearly: Bridges the gap between technical teams and external partners, representing Liquid's interests while fostering genuine relationships.

The Work
- Serve as the primary liaison for open-source partners.
- Assist in model releases with both marketing and technical content.
- Create tutorials, articles, and guides on training and utilizing our foundation models.
- Enhance and maintain LFM documentation for clarity and thoroughness.
- Collect community feedback and communicate insights to internal teams.
About tierzero
tierzero builds tools that help engineering teams manage production code with stronger incident response, better operational visibility, and collaborative knowledge sharing. Companies like Discord, Drata, and Framer use tierzero to support their infrastructure in an AI-driven landscape. Backed by $7 million from investors including Accel and SV Angel, tierzero is growing quickly from its San Francisco headquarters.

Role Overview: Founding Member of Technical Staff
This is a hands-on role shaping tierzero's core product and systems from the ground up. The founding technical team works closely with the CEO, CTO, and early customers to solve real engineering challenges. The position is based in San Francisco, with a hybrid schedule of three days each week in the office.

What You'll Do
Design and build intelligent AI systems that process large volumes of unstructured data.
Deliver full-stack features informed by real-time user feedback.
Improve usability so AI agents are both effective and trustworthy for engineers.
Develop systems for automated evaluation of LLM outputs, including feedback loops and self-play.
Construct machine learning pipelines for data ingestion, feature generation, embedding storage, retrieval-augmented generation (RAG), vector search, and graph databases.
Prototype with open-source LLMs to understand their strengths and weaknesses.
Create scalable infrastructure for complex, multi-step agents, focusing on memory, state management, and asynchronous workflows.

Who We're Looking For
5+ years of professional experience or significant open-source contributions.
Interest in LLMs, MCPs, cloud infrastructure, and observability tools.
Comfort working in changing, ambiguous situations.
Product-focused and customer-first mindset.
Experience learning from and collaborating with engineers from diverse backgrounds.
Bonus: previous experience in a startup setting.

Work Location
Hybrid schedule: three days per week in-person at the San Francisco HQ.
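The retrieval side of the pipeline work described in this posting (embedding storage, vector search, RAG) can be sketched minimally. Everything below is hypothetical and illustrative: the `embed` function is a toy character-trigram hash standing in for a trained embedding model, and `VectorStore` stands in for a real vector database.

```python
from math import sqrt

def embed(text: str, dim: int = 64) -> list[float]:
    # Toy stand-in for an embedding model: hash character trigrams
    # into a fixed-size vector, then L2-normalize. A real pipeline
    # would call a trained encoder here.
    vec = [0.0] * dim
    for i in range(len(text) - 2):
        vec[hash(text[i:i + 3]) % dim] += 1.0
    norm = sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]

class VectorStore:
    """Minimal in-memory vector store with cosine-similarity search."""

    def __init__(self) -> None:
        self.items: list[tuple[str, list[float]]] = []

    def add(self, doc: str) -> None:
        # Embed the document once at ingestion time and keep the pair.
        self.items.append((doc, embed(doc)))

    def search(self, query: str, k: int = 1) -> list[str]:
        # Embed the query, rank stored docs by dot product (vectors are
        # normalized, so this is cosine similarity), return the top k.
        q = embed(query)
        scored = sorted(
            self.items,
            key=lambda item: -sum(a * b for a, b in zip(q, item[1])),
        )
        return [doc for doc, _ in scored[:k]]

store = VectorStore()
store.add("incident response runbook for database failover")
store.add("quarterly marketing report")
print(store.search("how do we handle a database incident?")[0])
```

In a RAG system, the returned documents would then be concatenated into the LLM prompt as grounding context; the ingestion, feature generation, and graph-database stages from the posting sit upstream and alongside this retrieval step.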
Apr 20, 2026