Experience Level
Senior
Qualifications
Education: BS/MS/PhD in Computer Science or a related field preferred, or equivalent practical experience.
Experience: A minimum of 6 years in data infrastructure, with a strong emphasis on big data technologies.
Technical Skills
Big Data Expertise: Proficient in big data technologies including Spark, Trino, Kubernetes, and AWS EMR.
Programming Skills: Strong command of programming languages such as Java, Scala, and SQL.
System Design: Extensive experience in designing, building, and maintaining scalable, fault-tolerant distributed systems.
Database Knowledge: Familiarity with a variety of database systems, encompassing both SQL and NoSQL.
About the job
Founded in 2007, Airbnb has transformed the way people experience travel, connecting over 5 million hosts with more than 2 billion guests worldwide. Our platform enables unique stays and authentic experiences, fostering connections with local communities.
The Team You Will Join:
As a pivotal member of the Data Warehouse Infrastructure team, you will help shape the backbone of Airbnb's big data capabilities, enabling hundreds of engineers to efficiently collect, manage, and analyze vast amounts of data. We leverage cutting-edge open-source technologies such as Hadoop, Spark, Trino, Iceberg, and Airflow.
Typical Responsibilities:
Design and architect Airbnb's next-generation big data compute platform to enhance data ETL, analytics, and machine learning efforts.
Oversee the platform's operations, focusing on improving reliability, performance, observability, and cost-effectiveness.
Create high-quality, maintainable, and self-documenting code while engaging actively in code review processes.
Contribute to open-source projects, making a significant impact on the industry.
About Airbnb, Inc.
Airbnb, Inc. is a global leader in hospitality, offering travelers unique accommodations and experiences through its platform. With millions of listings worldwide, Airbnb enables hosts to share their spaces and connect with guests seeking authentic local experiences.
Position Overview
Join OpenEvidence as a Data Infrastructure Software Engineer, where you will engineer comprehensive systems that drive essential product and research operations. Your focus will be on optimizing performance, ensuring scalability, and enhancing accuracy, while enjoying the autonomy to manage the infrastructure that assists healthcare professionals in navigating complex clinical decisions in real time.
We value exceptional creators who thrive in versatile roles. Our engineers engage across various products and projects, taking ownership wherever they can make the most significant impact.
About OpenEvidence
OpenEvidence is the leading medical AI platform globally, utilized by over 40% of clinicians in the U.S. in just over a year through organic product-led growth. As a $12 billion company, our engineering team comprises 30 talented individuals from MIT, Harvard, and Stanford. We believe that groundbreaking products are born from a small group of exceptional builders, driven by focused goals and empowered to take ownership and act swiftly. We are expanding our team to capitalize on an unparalleled opportunity to set the standard for medical AI platforms.
If you are a top-tier engineer or scientist eager to push the boundaries and achieve tangible outcomes that affect millions of lives, we want to connect with you.
Our Culture
We expect our work to be performed at an elite level. The journey from concept to execution and scaling is akin to a professional sport, where excellence is non-negotiable. We believe that the creation of innovative technologies is only achievable through complete ownership. Significant achievements happen when individuals take the initiative to see them through.
Your Profile
This role is not for those seeking a 9-to-5 job or merely looking to write papers. If you are ready to dive into the trenches, tackle challenges head-on, and create something from scratch that could impact millions and drive substantial revenue, you might be the perfect fit.
We seek brilliant builders who are intelligent, ambitious, resourceful, self-reliant, detail-oriented, driven, hardworking, and humble. Does this sound rare? It is, as we have only found 30 of them so far, and we are eager to discover more.
Full-time|$153K/yr - $376K/yr|Remote|San Francisco, CA • New York, NY • United States
At Figma, we are expanding our team of dedicated creatives and innovators committed to making design accessible for everyone. Our platform empowers teams to transform ideas into reality—whether you're brainstorming, prototyping, converting designs into code, or utilizing AI for enhancements. From concept to product, Figma enables teams to optimize workflows, accelerate processes, and collaborate in real-time from anywhere in the world. If you're passionate about shaping the future of design and teamwork, we invite you to join us!
The Data Platform team at Figma is responsible for constructing and managing the essential systems that drive analytics, AI/ML initiatives, and data-informed decision-making across our organization. We cater to a wide array of stakeholders, including AI researchers, machine learning engineers, data scientists, product engineers, and business teams that depend on data for insights and strategic planning. Our team is tasked with owning and scaling critical platforms such as the Snowflake data warehouse, ML Datalake, orchestration and pipeline infrastructure, and extensive data ingestion and processing systems, overseeing all data transactions that occur within these platforms.
Despite our small size, we tackle significant, high-impact challenges. In the upcoming years, we are focused on developing the data infrastructure layer for Figma's AI-driven products, enhancing cost and performance efficiencies across our data stack, scaling our ingestion and reverse ETL capabilities for new product applications, and reinforcing data quality, reliability, and compliance at every level. If you are enthusiastic about creating scalable, high-performance data platforms that empower teams across Figma, we would love to connect with you!
This is a full-time role that can be performed from one of our US hubs or remotely within the United States.
Full-time|$350K/yr - $475K/yr|On-site|San Francisco
At Thinking Machines Lab, our vision is to enhance human potential by advancing collaborative general intelligence. We are dedicated to creating a future where individuals have the resources and knowledge to harness AI for their specific objectives and aspirations.
Our team comprises scientists, engineers, and innovators who have developed some of the most popular AI products, including ChatGPT and Character.ai, as well as influential open-weight models like Mistral, along with highly regarded open-source projects such as PyTorch, OpenAI Gym, Fairseq, and Segment Anything.
About the Role
We are seeking a talented engineer to enhance our data infrastructure. You will join a dynamic, high-impact team tasked with designing and scaling the foundational infrastructure for distributed training pipelines, multimodal data catalogs, and sophisticated processing systems that manage petabytes of data.
Our infrastructure is pivotal; it serves as the foundation for every groundbreaking achievement. You will collaborate directly with researchers to expedite experiments, develop novel datasets, optimize infrastructure efficiency, and derive essential insights from our data repositories.
If you are passionate about distributed systems, large-scale data mining, and open-source tools such as Spark, Kafka, Beam, Ray, and Delta Lake, and enjoy building innovative solutions from scratch, we encourage you to apply.
Note: This is an evergreen role that we keep open continuously for expressions of interest. We receive a high volume of applications, and while there may not always be an immediate position that aligns perfectly with your skills and experience, we encourage you to apply. We regularly review applications and reach out as new opportunities arise. You are welcome to reapply after gaining more experience, but please refrain from applying more than once every six months. We may also post specific roles for particular projects or team needs; in those cases, you are welcome to apply directly in addition to this evergreen role.
About Our Team
At OpenAI, our Data Platform team is at the heart of our innovative approaches to data management, powering essential product, research, and analytics workflows. We manage some of the largest Spark compute fleets in production, architect data lakes and metadata systems on Iceberg and Delta, and envision exabyte-scale architectures. Our high-throughput streaming platforms utilize Kafka and Flink, while our orchestration is powered by Airflow. We also support machine learning feature engineering tools such as Chronon. Our mission is to provide secure, reliable, and efficient data access at scale, thereby enhancing intelligent, AI-assisted data workflows.
Join us in building and maintaining these core platforms that are foundational to OpenAI's products, research, and analytics capabilities. We are not just scaling infrastructure; we are transforming the way people engage with data. Our vision includes intelligent interfaces and AI-powered workflows that make data interactions faster, more reliable, and intuitive.
About the Position
In this role, you will focus on constructing and managing data infrastructure that supports extensive compute fleets and storage systems optimized for high performance and scalability. You will be instrumental in designing, developing, and operating the next generation of data infrastructure at OpenAI. Your responsibilities will encompass scaling and securing big data compute and storage platforms, building and maintaining high-throughput streaming systems, ensuring low-latency data ingestion, and facilitating secure, governed data access for machine learning and analytics. You will also prioritize reliability and performance at extreme scales.
You will have complete ownership of the full lifecycle: from architecture to implementation, production operations, and on-call responsibilities.
You should be experienced with platforms such as Spark, Kafka, Flink, Airflow, Trino, or Iceberg. Familiarity with infrastructure tools like Terraform, along with expertise in debugging large-scale distributed systems, is essential. A passion for addressing data infrastructure challenges in the AI domain is a must.
This role is based in San Francisco, CA. We offer a hybrid work model requiring 3 days in the office each week and provide relocation assistance for new hires.
Responsibilities:
Design, build, and maintain data infrastructure systems including distributed compute, data orchestration, distributed storage, streaming infrastructure, and machine learning infrastructure, ensuring they are scalable, reliable, and secure.
Ensure our data platform can scale significantly while maintaining reliability and efficiency.
Enhance company productivity by empowering your fellow engineers and teammates through innovative data solutions.
Foxglove develops data infrastructure for robotics teams operating in real-world environments such as factories and warehouses. As robots leave the lab, engineers need reliable tools for analyzing data, diagnosing issues, and improving system performance. Foxglove delivers observability, visualization, and data management solutions designed to help teams manage large volumes of multimodal sensor data from deployed fleets.
Role overview
This Software Engineer - Robotics Data Infrastructure position centers on building and optimizing the systems behind Foxglove’s products. The scope covers desktop and web visualization tools, backend services for data ingestion and streaming, and client libraries running directly on robots. Work ranges from enhancing decoding performance in Rust, to extending MCAP tooling in C++, to integrating new data sources with TypeScript, and occasionally working with customers to resolve performance issues.
What you will do
Design, build, and deploy product features from start to finish, incorporating feedback from users.
Work across the stack: from Rust and C++ libraries on devices, to backend cloud services, to browser-based visualization tools.
Identify and address performance bottlenecks in data pipelines, including ingestion, decoding, streaming, and rendering.
Contribute to MCAP and other open-source libraries used by the robotics community.
Collaborate with customers and robotics engineers to gather requirements and validate new solutions.
Maintain high engineering standards and help foster a culture of ownership within the team.
Design systems for efficient storage and querying of petabyte-scale robotics data.
Requirements
At least 5 years of experience developing production software.
Strong proficiency in Rust, C++, and TypeScript, with a willingness to learn new languages or frameworks as needed.
Location
This position is based in San Francisco, CA.
At Plaid, we believe in the power of data-driven decision-making. Our data culture demands robust and scalable data systems that ensure accuracy and completeness. As a Senior Software Engineer focusing on Data Infrastructure, you will play a pivotal role in empowering teams across engineering, product, and business to swiftly and securely extract valuable data insights. Your work will directly enhance our ability to serve customers effectively.
You will be responsible for building and optimizing our data and machine learning infrastructure, allowing Plaid engineers to innovate and iterate on products built on consumer-permissioned financial data. Our Data Infrastructure engineers are experts in Data Warehousing, Data Lakehouse architecture, Spark, Workflow Orchestration, and Streaming technologies. You will enhance our existing data pipelines for performance and cost efficiency while creating intuitive abstractions that simplify the development process for other engineers at Plaid.
Full-time|$160K/yr - $225K/yr|Hybrid|San Francisco, CA (Hybrid)
About Fable Security
At Fable Security, we recognize that AI-driven threats and human error pose significant risks to enterprise security. Cybercriminals exploit human behavior, which is responsible for over 70% of security breaches. Our mission is to empower individuals with the right tools, transforming them from targets into an active line of defense.
We have developed a human risk platform that effectively shapes employee behavior. Our user-friendly and scalable platform integrates complex employee data, identifies risky behaviors, and automatically delivers timely, relevant interventions where employees are most engaged—in real time.
Supported by renowned investors such as Redpoint Ventures and Greylock Partners, and founded by members of the Abnormal Security team, Fable is addressing one of cybersecurity’s most pressing challenges within a multi-billion-dollar market. Our diverse team includes alumni from Meta, Twitter, and prestigious universities like Columbia, Stanford, and UCLA. As we experience rapid growth, this is a prime opportunity to contribute to and influence the future of security.
Why Join Us
Help us build and scale the core data infrastructure that drives a groundbreaking product.
Collaborate with engineering, data science, and product teams to operationalize data effectively at scale.
Be part of a small, elite team where your contributions will have a significant impact.
As part of an early-stage company, every engineer plays a crucial role in shaping product functionality and evolution. You will define not only the technical architecture but also the company’s data philosophy.
Your Role
In the position of Data Infrastructure Engineer, you will be responsible for the architecture, scalability, and reliability of our data platform. You will design and construct systems that support everything from real-time product functionalities to internal analytics and machine learning processes, covering the spectrum from data ingestion to production-ready datasets. Additionally, you will establish best practices that underpin our data-driven products.
This role is highly cross-functional, requiring close collaboration with engineering, data, and product teams to ensure our data foundation evolves in tandem with our growth.
Responsibilities
Design, develop, and sustain scalable data systems.
Implement best practices for data architecture and management.
Collaborate with cross-functional teams to facilitate data-driven decision-making.
Innovating the Future of Software
As we approach 2026, the software industry is facing an unprecedented challenge: the 'infinite software crisis.' At Sazabi, we are dedicated to redefining how engineering teams support, maintain, and operate the rapid growth in application development.
Introducing Sazabi: The AI-Native Observability Platform for Agile Engineering Teams.
Our platform empowers teams by providing a centralized solution to inquire about their production systems in natural language, visualize system activities automatically, and diagnose issues ten times faster. Say goodbye to tedious instrumentation, dashboard setups, and alert tuning—just straightforward answers.
We are proud to be backed by pioneers from leading AI organizations, including Vercel, Graphite, Daytona, Browserbase, LangChain, Mastra, Replit, and others.
Full-time|$162K/yr - $216K/yr|Hybrid|San Francisco, California, United States
Who We Are
Baton is Ryder’s innovative product development division dedicated to leveraging cutting-edge technologies to transform the transportation and logistics landscape. Managing over $10 billion in freight, our technology has a significant impact across the U.S. economy.
We are committed to creating and delivering software that not only meets but exceeds the needs of Ryder and its 50,000+ clients, which include some of the most recognized brands globally. Our projects range from user-centric applications to the robust data platform that will drive the future of Ryder’s innovations.
Baton’s mission: to enable a supply chain that operates on autopilot.
Since Ryder’s acquisition of Baton in 2022, we have been operating with the agility of a startup while benefiting from the extensive reach of a Fortune 500 company. If you're passionate about tackling intricate challenges and making a real impact in the backbone of the American economy, you’ll thrive with us.
Role: Software Engineer - Infrastructure
Department: Data Platform
Location: Hayes Valley, San Francisco, CA
Full-time|$200K/yr - $275K/yr|On-site|San Francisco
About Watney Robotics
At Watney Robotics, we are pioneers in developing autonomous robotic solutions aimed at enhancing critical infrastructure. Having recently secured $21 million in seed funding from leading investors such as Conviction, Abstract, and A*, we are collaborating with the world’s largest hyperscalers to propel the expansion of data centers and streamline maintenance processes.
This is an extraordinary opportunity to join our team at a pivotal stage as we transition from prototype to large-scale production. Be part of a team that not only ships cutting-edge systems but also plays a crucial role in shaping the operational framework of an innovative robotics company.
About Our Innovative Team
Join the Workload team at OpenAI, where we are at the forefront of designing and managing the cutting-edge infrastructure that drives the training and inference of large language models (LLMs) at an unprecedented scale. Our systems are engineered to harmonize the complex processes of model training and serving, abstracting performance, parallelism, and execution across extensive GPU and accelerator networks. This robust foundation allows researchers to concentrate on elevating model capabilities, while we take care of the scalability, efficiency, and reliability needed to bring these advanced models to life.
Your Role and Responsibilities
We are seeking a talented engineer to design and implement the dataset infrastructure that will fuel OpenAI’s next-generation training stack. Your primary focus will be on creating standardized dataset interfaces, scaling pipelines across thousands of GPUs, and proactively identifying and addressing performance bottlenecks. Collaboration with multimodal researchers and infrastructure teams will be key to ensuring that our datasets are unified, efficient, and user-friendly.
Key Responsibilities Include:
Design and maintain standardized dataset APIs, including those for multimodal (MM) data that exceeds memory capacity.
Develop proactive testing and validation pipelines for dataset loading at GPU scale.
Work collaboratively to integrate datasets into training and inference pipelines, ensuring seamless user experiences.
Document and maintain dataset interfaces to ensure they are discoverable, consistent, and easily adoptable by other teams.
Establish validation systems to ensure datasets remain reproducible and unchanged once standardized.
Identify and troubleshoot performance bottlenecks in distributed dataset loading, such as stragglers impacting global training speed.
Create visualization and inspection tools to highlight errors, bugs, or bottlenecks in datasets.
Ideal Candidate Profile
Possess strong engineering fundamentals and experience in distributed systems, data pipelines, or infrastructure.
Have a proven track record in building APIs, modular code, and scalable abstractions, with a user-centric approach to design.
Be adept at debugging performance issues across large-scale machine fleets.
Demonstrate a passion for advancing data infrastructure to enhance research capabilities.
Full-time|$130.6K/yr - $235K/yr|On-site|San Francisco, CA; Sunnyvale, CA
About Our Team
At DoorDash, data drives our success. Our Data Engineering team is pivotal in building robust database solutions tailored for diverse applications, including reporting, product analytics, marketing optimization, and financial reporting. By architecting pipelines, data structures, and data warehouse environments, we enable data-driven decision-making across the organization.
About the Role
We are seeking a talented Software Engineer II to join our team as a technical leader, responsible for scaling our data infrastructure, enhancing automation, and developing tools to support our expanding business needs.
What You Will Do
Collaborate with business partners and stakeholders to gather and understand data requirements.
Work alongside engineering, product teams, and external partners to ensure seamless data collection.
Design, develop, and implement high-performance data models and pipelines for our Data Lake and Data Warehouse.
Establish and execute data quality checks, conduct thorough QA, and implement monitoring routines.
Enhance the reliability and scalability of our ETL processes.
Manage a suite of data products that deliver accurate and trustworthy data.
Support and onboard new engineers as they join our team.
What We Are Looking For
3+ years of professional experience in data engineering, business intelligence, or a related field.
Proficiency in programming languages such as Python and Java.
3+ years of experience with ETL orchestration and workflow management tools, including Airflow, Flink, Oozie, and Azkaban, on AWS/GCP platforms.
Strong understanding of database fundamentals, SQL, and distributed computing.
3+ years of experience with distributed data ecosystems (e.g., Spark, Hive, Druid, Presto) and streaming technologies like Kafka and Flink.
Experience with Snowflake, Redshift, PostgreSQL, and/or other database management systems.
Excellent communication skills with a proven ability to liaise with both technical and non-technical teams.
Familiarity with reporting tools such as Tableau, Superset, and Looker.
Ability to thrive in a fast-paced and dynamic environment.
About the Team
Join OpenAI's Privacy Engineering team, where we operate at the vital crossroads of Security, Privacy, Legal, and Core Infrastructure. Our mission is to develop cutting-edge data infrastructure and systems that empower our privacy, legal, and security teams to operate securely, swiftly, and at scale. We adhere to principles of defensibility by default, enabling impactful research, and fostering a robust security culture in preparation for transformative technologies.
About the Role
We are seeking a talented Software Engineer to design and implement technical systems that facilitate legal compliance workflows, including secure data processing and document review. In this role, you will collaborate closely with Legal, Security, IT, and engineering teams to translate legal processes into actionable technical workflows. This position is perfect for an engineer who is passionate about large-scale data challenges and understands the meticulousness required in ensuring compliance.
Located in San Francisco, we offer relocation assistance for qualified candidates.
Key Responsibilities:
Design and maintain scalable data storage pipelines.
Develop search and discovery services (e.g., Spark/Databricks, index layers, metadata catalogs) tailored to partner team requirements.
Automate secure data transfers, including encryption, checksumming, and auditing exports to reviewers.
Establish secure compute environments that balance usability with stringent security controls.
Implement monitoring and KPIs to ensure accountability of data holds and productions.
Work cross-functionally to document SOPs, threat models, and chain-of-custody documentation that can withstand scrutiny.
Ideal Candidates Will:
Possess practical experience in building or operating large-scale data-lake or backup systems (Azure, AWS, GCP).
Be proficient with Terraform or Pulumi and CI/CD processes, and capable of converting ad-hoc legal requests into repeatable pipelines.
Be comfortable working with discovery workflows (legal holds, enterprise document collections, secure review) or eager to quickly gain expertise.
Effectively communicate technical concepts—from storage governance to block-ID APIs—to interdisciplinary teams such as Legal and Engineering.
About Us
At Sierra, we are revolutionizing the way businesses engage with their customers by building a cutting-edge platform that harnesses the power of AI. Our headquarters is located in San Francisco, with additional offices expanding in Atlanta, New York, London, France, Singapore, and Japan.
Our company culture is deeply rooted in our core values: Trust, Customer Obsession, Craftsmanship, Intensity, and Family. These principles guide our actions and foster an environment where innovation thrives.
Sierra was co-founded by visionary leaders Bret Taylor, who currently serves as the Board Chair of OpenAI and has a rich history with Salesforce and Facebook, and Clay Bavor, who previously led Google Labs and spearheaded initiatives like Google Lens and Project Starline.
Your Role
As a Software Engineer focusing on Infrastructure at Sierra, you will play a pivotal role in designing, constructing, and maintaining the foundational systems that empower our AI platform. Your expertise will ensure that our infrastructure is not only secure and reliable but also scalable, allowing product teams to execute their work with agility and confidence.
Guarantee the reliability, scalability, and performance of our platform and LLM inference serving in response to increasing traffic demands.
Develop and oversee cloud infrastructure using Terraform to create secure, scalable, and reproducible environments.
Establish and manage a self-service infrastructure platform to empower engineering teams in deploying and operating services independently.
Take ownership of and improve CI/CD pipelines and release management processes, facilitating rapid and reliable deployments across Sierra’s platform.
Design and manage distributed systems utilizing distributed databases, retrieval systems, and machine learning models.
Develop and sustain core data serving abstractions along with essential authentication and security features (SSO, RBAC, authentication controls).
Effectively navigate and integrate our technology stack with enterprise customer environments in a scalable and maintainable manner.
At Exa, we are on a mission to create a cutting-edge search engine from the ground up, designed to cater to the diverse needs of AI applications. Our team is building a robust infrastructure that enables us to crawl the internet, train advanced embedding models for indexing, and develop high-performance vector databases using Rust. Additionally, we manage a significant $5M H200 GPU cluster that powers tens of thousands of machines.
The Infrastructure Team at Exa is responsible for developing the essential tools and infrastructure that support our entire system. We are looking for talented infrastructure engineers to help us scale our capabilities rapidly. Your work could involve orchestrating GPU clusters with Kubernetes, implementing map-reduce batch jobs on Ray, or creating top-tier observability tools that set industry standards.
Join our innovative team at alljoined as a Data Infrastructure Engineer where you will play a pivotal role in shaping our data architecture and ensuring the reliability and efficiency of our data systems. You will collaborate with cross-functional teams to design and implement scalable data solutions that empower data-driven decision-making.
At Hover, we empower individuals to conceptualize, enhance, and safeguard the spaces they cherish. Utilizing proprietary AI and over a decade's worth of real property data, we provide answers to pivotal questions such as, 'What will it look like?' and 'What will it cost?' Our platform offers homeowners, contractors, and insurance professionals accurately measured, interactive 3D models of properties — all achievable from a smartphone scan in mere minutes.
Driven by curiosity and purpose, we maintain a strong commitment to our customers, communities, and one another. We believe that diverse perspectives foster the best ideas, and we take pride in nurturing an inclusive, high-performance culture that encourages growth, accountability, and excellence. Supported by premier investors like Google Ventures and Menlo Ventures, and trusted by industry leaders such as Travelers, State Farm, and Nationwide, we are revolutionizing how individuals perceive and interact with their environments.
About the Role
As a Senior Software Engineer specializing in Infrastructure, you will delve into cloud infrastructure challenges unique to a company focused on 3D data, computer vision, and machine learning. Your enthusiasm for building internal tools and your talent for crafting elegant solutions to complex issues will be crucial in this role.
Our Infrastructure team is responsible for everything beyond the application binary, serving as a critical partner to the rest of the engineering department. Through automation, we aim to streamline processes, ensuring that the simplest path is also the fastest and most secure. We manage and optimize all cloud infrastructure components including our Kubernetes environment, databases, networks, storage, and caching systems. Collaborating with engineering peers, we establish consistent solutions to common architectural challenges, particularly those involving rich geospatial and machine learning workloads. We are well-versed in best practices for cloud architecture and CI/CD, leveraging application development as a means to implement these practices.
Your Contributions
You will play a pivotal role in developing straightforward solutions to intriguing problems, thereby enhancing the foundation upon which our engineering teams build. Collaborating closely with engineers across the organization, you will help make their applications faster, easier to manage, and more reliable in production. Your work will span frontend, backend, computer vision, data, security, and machine learning teams to scale new ideas into production effectively. Given the small and highly collaborative nature of our team, you can expect a varied and impactful workload, which may include:
Designing scalable cloud architecture
Enhancing CI/CD pipelines and developer tooling
Who We Are
Serval is an innovative AI-driven automation platform redefining operational efficiency for enterprises. Our intelligent agents seamlessly comprehend and execute real-world workflows, replacing outdated manual processes with adaptive, self-learning software. Since our inception in early 2024, we have garnered the trust of industry leaders such as General Motors, Notion, Perplexity, Vercel, Mercor, LangChain, and Verkada, streamlining high-volume operational tasks across their organizations.
At the heart of Serval is a cutting-edge agentic AI platform that transforms natural language into actionable workflows. Our agents not only respond to queries but also reason, act across various systems, and continuously enhance their performance. What started as a solution for operational tasks has rapidly expanded into a versatile AI automation layer utilized across IT, HR, Finance, Security, Legal, and Engineering sectors.
Our mission is to eradicate repetitive, manual tasks within enterprises, empowering teams through intelligent automation. In the long run, we aim to establish a universal AI operations layer: a system of agents that integrates across business functions, maintaining the momentum of modern companies.
We are proud to be backed by renowned investors including Sequoia Capital, Redpoint Ventures, Meritech, First Round, General Catalyst, and Elad Gil, and founded by seasoned product and engineering leaders from Verkada.
Role Overview
As a Senior Software Engineer in Infrastructure at Serval, you will be pivotal in developing and scaling the core systems that empower our AI agents and workflow automation platform. A crucial aspect of this role involves enabling and supporting self-hosted deployments for enterprise clients needing on-premises or private cloud environments. We are looking for engineers with profound expertise in distributed systems, infrastructure-as-code, production operations, and customer-facing support, who aspire to influence the technical architecture of a rapidly evolving platform.
What You'll Do
Design, implement, and operate large-scale distributed systems that power Serval's AI agents, workflow orchestration, and data pipelines.
Create and maintain Terraform modules to provision and manage cloud infrastructure across AWS, GCP, or Azure environments.
Develop and sustain deployment packages, installation scripts, and infrastructure templates, enabling customers to self-host Serval in their own environments.
Provide technical support and guidance to enterprise customers during installation and deployment phases.
About Us
At Imprint, we are revolutionizing the world of co-branded credit cards and innovative financial solutions, focusing on smarter, more rewarding, and brand-first experiences. We collaborate with renowned brands such as Crate & Barrel, Rakuten, Booking.com, H-E-B, Fetch, and Brooks Brothers to establish modern credit programs that enhance customer loyalty, unlock savings, and stimulate growth. Our robust platform integrates advanced payment technologies, intelligent underwriting, and a seamless user experience, enabling brands to offer impactful financial products without the complexities of becoming a bank.
Co-branded credit cards represent over $300 billion in U.S. annual spending, yet many are still managed by outdated banking systems. Imprint stands as the modern alternative: flexible, technology-driven, and tailored for today's consumers. Supported by notable investors like Kleiner Perkins, Thrive Capital, and Khosla Ventures, we are assembling a world-class team dedicated to reshaping payment methods and driving brand growth. If you thrive in fast-paced environments, enjoy tackling complex challenges, and aspire to make a significant impact, we would be delighted to meet you.
Discover more about us on Imprint's Technology Blog.
The Team
The Tech Platform Engineering Team at Imprint is pioneering the democratization of access to advanced technologies, empowering teams across our organization to innovate and excel. Our commitment to redefining the Fintech landscape drives us to build secure, highly available infrastructure while equipping our engineers with comprehensive development tools, allowing them to rapidly create world-class products.
Your Role
Design, build, and manage cloud and web infrastructure with a strong emphasis on security, reliability, and scalability.
Implement and maintain infrastructure components across computing, networking, and data platforms.
Adhere to security best practices in cloud infrastructure, ensuring proper access control, network isolation, and secure communication between services.
Monitor system health and engage in incident response, root cause analysis, and reliability enhancements.
Collaborate with platform, security, and product engineers to deliver safe and efficient infrastructure solutions.
Jan 16, 2026