Key Responsibilities:
- Develop low-latency networking infrastructure to connect embedded and cloud systems.
- Design resource-efficient pipelines for the egress and ingress of multimodal sensor data and telemetry.
- Create low-latency command and control frameworks across distributed sensor networks.

Qualifications:
- Minimum of 5 years of experience with Rust (preferred) or C++ in low-latency and/or embedded environments.
- In-depth knowledge of networking protocols, including UDP, TCP, and QUIC.
- Strong understanding of IPC fundamentals and mechanics on Linux systems.
- Experience with modern databases, whether distributed or otherwise.

Preferred Qualifications:
- Experience with real-time, multimodal machine learning data ingestion systems.
- Familiarity with modern video codec implementations across various hardware platforms.
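As an illustration only (not Specter's code), the Linux IPC fundamentals called out above can be made concrete with a minimal round-trip measurement over an AF_UNIX socketpair; the function name, payload, and timing approach are our own assumptions.

```python
import socket
import time

def roundtrip_us(payload: bytes = b"ping", iters: int = 1000) -> float:
    """Average round-trip latency in microseconds over an AF_UNIX socketpair,
    a basic Linux IPC primitive. Datagram sockets preserve message boundaries,
    mirroring the command/telemetry framing the posting describes."""
    a, b = socket.socketpair(socket.AF_UNIX, socket.SOCK_DGRAM)
    start = time.perf_counter()
    for _ in range(iters):
        a.send(payload)         # command leg
        b.send(b.recv(64))      # echo back: telemetry leg
        a.recv(64)
    elapsed = time.perf_counter() - start
    a.close()
    b.close()
    return elapsed / iters * 1e6  # microseconds per round trip
```

Measuring a per-hop baseline like this is typically the first step in building the kind of end-to-end latency budget a role like this manages.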
About the job
Specter is revolutionizing how businesses perceive their physical environments by developing a software-defined control plane. Our mission is to enhance the security of American enterprises by providing them with comprehensive visibility over their physical assets.
We are pioneering a connected hardware-software ecosystem that leverages multi-modal wireless mesh sensing technology, reducing the deployment costs and time for sensors by a factor of ten. Our platform aims to be the perception engine for a company’s physical presence, facilitating real-time visibility of perimeters and enabling autonomous operational management.
Founded by passionate innovators from Anduril, Tesla, Uber, and the U.S. Special Forces, our co-founders, Xerxes and Philip, are dedicated to empowering our partners in the rapidly evolving landscape of physical AI and robotics.
About Specter
Specter is at the forefront of creating a secure and perceptive environment for businesses. By integrating cutting-edge technology with a dedicated team, we aim to redefine the operational landscape for American enterprises.
About Our Team
The Frontier Systems team at OpenAI is at the forefront of technology, responsible for creating, deploying, and maintaining some of the world's largest supercomputers. These supercomputers are pivotal for training our most advanced AI models, pushing the boundaries of innovation. We transform sophisticated data center designs into operational systems and develop the software infrastructure necessary for extensive frontier model training. Our goal is to ensure these hyperscale supercomputers operate reliably and efficiently, supporting groundbreaking AI research.

About the Role
As a key member of the Frontier Systems team, you will be instrumental in designing the critical infrastructure that ensures our supercomputers function seamlessly for pioneering AI research. In this role, you'll address system-level challenges and implement automation solutions that minimize disruptions during large-scale training processes. Your responsibilities will encompass end-to-end ownership of your projects, allowing you to make significant contributions to our mission.
This position is ideal for individuals who excel in diagnosing complex system issues and crafting automation strategies to proactively resolve problems across a vast network of machines.

Your Responsibilities Include:
- Enhancing system health checks to maintain the stability of our hyperscale supercomputers during model training.
- Conducting in-depth investigations into hardware failures and system-level bugs to uncover root causes.
- Developing automation tools that monitor and resolve issues across thousands of systems, enabling uninterrupted research progress.

You May Be a Great Fit If You Possess:
- 7+ years of hands-on experience in software engineering.
- Strong proficiency in Python and shell scripting.
- Expertise in analyzing complex data sets using SQL, PromQL, Pandas, or other relevant tools.
- Experience in creating reproducible analyses.
- A solid balance of skills in both building and operationalizing systems.

Prior experience with hardware is not a prerequisite for this role.

Preferred Qualifications:
- Familiarity with the intricacies of hardware components, protocols, and Linux tools (e.g., PCIe, InfiniBand, networking, power management, kernel performance tuning).
- Experience with system optimization and performance tuning.
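As a hedged sketch of the health-check-and-remediate loop this role describes (not OpenAI's actual tooling): the signal names, thresholds, and actions below are invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class NodeReport:
    """Hypothetical per-node health signals collected by fleet checks."""
    node: str
    ecc_errors: int   # correctable memory errors since last sweep
    link_flaps: int   # network link up/down events since last sweep

def triage(reports: list[NodeReport],
           ecc_limit: int = 100,
           flap_limit: int = 3) -> dict[str, list[str]]:
    """Bucket nodes into remediation actions based on health signals."""
    actions: dict[str, list[str]] = {"drain": [], "reboot": [], "healthy": []}
    for r in reports:
        if r.ecc_errors > ecc_limit:
            actions["drain"].append(r.node)    # suspected bad DIMM: pull from the pool
        elif r.link_flaps > flap_limit:
            actions["reboot"].append(r.node)   # flaky link: try a reset first
        else:
            actions["healthy"].append(r.node)
    return actions
```

In practice a loop like this runs continuously across the fleet, and the "drain"/"reboot" buckets feed automated remediation rather than human tickets.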
About Our Team
Join the innovative Frontier Systems team at OpenAI, where we develop, deploy, and maintain some of the world's largest supercomputers used for pioneering model training. Our expertise in transforming data center designs into fully operational systems enables us to build the software necessary for large-scale frontier model training. Our mission is to establish, stabilize, and ensure the dependability and efficiency of these hyperscale supercomputers throughout the training of our advanced models.

About This Position
As a Software Engineer on the Frontier Systems team with a focus on power management, you will play a pivotal role in enhancing our groundbreaking research capabilities. Given the significant power demands of large-scale supercomputers, your expertise will be essential in optimizing power management to maximize computational efficiency. This role is vital for maintaining smooth operations within our cutting-edge research supercomputing framework, ensuring both reliability and grid-level power consistency. Our team fosters an environment that empowers talented engineers with substantial autonomy and ownership, allowing for impactful contributions.
You will be challenged to conduct thorough system-level investigations and develop automated solutions, tackling complex issues with depth and precision while creating scalable automation for detection and remediation.

Your Responsibilities Will Include:
- Design and implement both system-level and software-level solutions to optimize power consumption in large-scale supercomputers, ensuring efficient and reliable operations.
- Develop automation tools to monitor power consumption patterns during training workloads and create algorithms to stabilize these fluctuations, safeguarding grid reliability.
- Collaborate with researchers and engineers to create tools for real-time monitoring, detection, and resolution of power-related hardware and system issues.
- Work cross-functionally to translate complex electrical system requirements into executable code, driving ongoing enhancements in our power management strategies.
- Lead the creation of power throttling mechanisms at the IT system level, dynamically adjusting power usage based on workload demands and infrastructure constraints.
- Partner with hardware design teams to integrate system-level power control requirements into hardware design, ensuring seamless collaboration between software-driven power management and hardware functionalities.
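For a sense of what a power-throttling mechanism can look like, here is an illustrative proportional capping step; the gain, wattage limits, and function name are assumptions, not OpenAI's implementation.

```python
def next_power_cap(current_cap_w: float,
                   measured_w: float,
                   target_w: float,
                   gain: float = 0.5,
                   floor_w: float = 200.0,
                   ceil_w: float = 700.0) -> float:
    """Nudge a per-node power cap toward a target draw, clamped to safe bounds.

    A proportional controller: positive error means headroom (raise the cap),
    negative error means the node is over budget (lower the cap)."""
    error = target_w - measured_w
    cap = current_cap_w + gain * error
    return max(floor_w, min(ceil_w, cap))  # never leave the hardware's safe range
```

Run once per control interval on each node, a rule like this damps the fleet-wide power swings that synchronized training steps create, which is what "safeguarding grid reliability" amounts to in practice.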
Join Crusoe as a Principal Systems Software Engineer and play a vital role in revolutionizing the tech industry. You will lead the development of innovative software solutions that enhance our systems and platforms, contributing to the overall mission of providing efficient and sustainable computing resources. Your expertise will help shape the future of our software architecture and ensure seamless integration across various applications.
About the Team
Join the innovative Frontier Systems team at OpenAI, where we design, implement, and maintain the world's largest supercomputers, essential for advancing our most groundbreaking model training initiatives. We transform data center blueprints into operational systems while crafting the software necessary for executing large-scale frontier model training. Our mission is to establish, stabilize, and ensure the reliability and efficiency of these hyperscale supercomputers throughout the training of our frontier models.

About the Role
We are seeking passionate engineers to manage the next generation of compute clusters that underpin OpenAI's frontier research. This position merges distributed systems engineering with practical infrastructure work across our expansive data centers. You will scale Kubernetes clusters to unprecedented levels, automate bare-metal setups, and create the software layer that simplifies the complexity of numerous nodes across various data centers. Your work will be at the crossroads of hardware and software, where speed and reliability are paramount.
Be prepared to oversee dynamic operations, swiftly identify and resolve pressing issues, and constantly elevate the standards for automation and uptime.

In this role, you will:
- Provision and scale extensive Kubernetes clusters, including automation for deployment, bootstrapping, and lifecycle management
- Create software abstractions that integrate multiple clusters and provide a cohesive interface for training workloads
- Oversee node deployment from bare metal to firmware upgrades, ensuring rapid, repeatable setups at scale
- Enhance operational metrics by reducing cluster restart times (e.g., from hours to minutes) and expediting firmware and OS upgrade cycles
- Integrate networking and hardware health systems to ensure end-to-end reliability across servers, switches, and data center infrastructure
- Develop monitoring and observability systems to identify issues early and maintain cluster stability under high loads

You might thrive in this role if you:
- Have extensive experience operating or scaling Kubernetes clusters or similar container orchestration systems in high-growth or hyperscale environments
- Possess strong programming skills in languages relevant to cloud and infrastructure management
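The bare-metal-to-ready node lifecycle above can be sketched as a small state machine; the state names and allowed transitions are illustrative assumptions, not a real provisioning API.

```python
# Legal transitions in a toy node-lifecycle model: a node is discovered,
# gets firmware, gets an OS, bootstraps into the cluster, serves, and can
# be drained back out for another firmware cycle.
ALLOWED = {
    "discovered": {"firmware_update"},
    "firmware_update": {"os_install"},
    "os_install": {"bootstrapping"},
    "bootstrapping": {"ready"},
    "ready": {"draining"},
    "draining": {"firmware_update", "ready"},
}

def advance(state: str, target: str) -> str:
    """Move a node to `target`, refusing transitions the lifecycle forbids."""
    if target not in ALLOWED.get(state, set()):
        raise ValueError(f"illegal transition {state} -> {target}")
    return target
```

Encoding the lifecycle explicitly is one common way automation keeps thousands of nodes from drifting into undefined states mid-upgrade.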
About Our Team:
Join the innovative Database Systems team at OpenAI, where we specialize in high-performance distributed databases. We are the architects behind Rockset, a cutting-edge real-time search, analytics, and vector database that powers all vector search and retrieval-augmented generation (RAG) at OpenAI. Rockset underpins core functionalities across all OpenAI product lines and supports various critical internal applications.

About the Role:
We are in search of engineers who are passionate about distributed systems, low-level performance optimization (our core engine is developed in C++), and constructing scalable database infrastructure from scratch. As a member of the Database Systems team, you will play a key role in enhancing the core database engine, making significant contributions to ingestion, query execution, indexing, and storage improvements. You will collaborate with multiple teams across OpenAI to unlock new product capabilities and ensure the reliability and scalability of our online database as usage expands exponentially.

Your Responsibilities Will Include:
- Design, develop, and maintain high-performance distributed systems.
- Identify and address performance bottlenecks to elevate infrastructure capabilities.
- Define and guide the long-term technical vision and evolution of the system.
- Collaborate with product, engineering, and research teams to deliver robust and scalable infrastructure.
- Investigate complex production issues across the entire technology stack.
- Contribute to incident response, retrospective analyses, and establishing best practices for system reliability.

You Will Excel In This Role If You:
- Possess substantial experience in building, scaling, and optimizing distributed systems.
- Exhibit a keen interest in database internals, storage engines, or low-latency query systems.
- Enjoy tackling complex performance challenges in high-throughput systems.
- Have experience managing and operating production clusters at scale (e.g., Kubernetes or similar orchestration tools).
- Approach scalability, correctness, and reliability with a rigorous mindset.
- Thrive in a fast-paced environment where you can make a significant impact.

Qualifications:
- 4+ years of relevant industry experience with a focus on distributed systems.
- Proficiency in C++ or similar low-level programming languages.
- Strong problem-solving skills and attention to detail.
- Experience with performance monitoring and optimization tools.
- Excellent collaboration and communication skills.
About AfterQuery
AfterQuery partners with leading AI labs to advance training data and evaluation frameworks. The team builds high-signal datasets and runs thorough evaluations that go beyond standard benchmarks. As a post-Series A, early-stage company in San Francisco, AfterQuery gives each team member room to shape the future of AI models.

Role Overview: Research Scientist - Frontier Data
This role focuses on designing datasets and developing evaluation systems that influence how top AI models are trained and assessed. Working closely with research teams at major AI labs, the scientist explores new data collection techniques, investigates where models fall short, and sets up metrics to track progress. The work is hands-on and experimental, moving quickly from hypothesis to live testing and directly impacting large-scale model training.

Key Responsibilities
- Design data slices and analyze data structures to uncover model weaknesses in areas like finance, software development, and enterprise operations.
- Build and refine evaluation rubrics and reward signals for RLHF and RLVR training approaches.
- Study annotator behavior and run experiments to improve model capabilities across different domains.
- Develop quantitative frameworks to measure dataset quality, diversity, and their effect on model alignment and performance.
- Work with research teams to turn training objectives into concrete data and evaluation needs.

What We Look For
- Experience as an undergraduate or master's research student (PhD not required).
- Background or internships with RL environments or AI safety and benchmarking organizations (e.g., METR, Artificial Analysis) is a strong plus.
- Genuine interest in how data structure, selection, and quality affect model outcomes.
- Demonstrated skill in designing experiments, acting quickly, and extracting insights from complex data.
- Comfort working across sectors such as finance, software engineering, and policy.
- Strong quantitative background and familiarity with LLM training pipelines, RLHF/RLVR methods, or evaluation frameworks.
- A hands-on mindset focused on building practical solutions.
Join us at sfcompute, where we are revolutionizing the future by mitigating risks associated with the largest infrastructure development in history. As the demand for GPU clusters surges, financing these data centers and their supporting infrastructure has never been more critical. Our innovative approach ensures that financing is secured through long-term contracts, providing peace of mind to both lenders and developers. In the fast-paced world of AI and compute resources, we are creating a liquid market for GPU offtake, allowing even small startups to access high-end computing power without the burdens of traditional financing.

About the Role
As a Systems Software Engineer at sfcompute, you will be instrumental in developing a GPU market that brings the advanced software capabilities of hyperscalers to our innovative GPU neoclouds. Your responsibilities will encompass provisioning and monitoring bare metal servers with our virtualization orchestration software, as well as collaborating with our GPU marketplace to facilitate user configurations of VMs, networks, and storage. Key tasks include creating and maintaining a Linux OS image tailored for our tools, ensuring consistent deployment across nodes with specific data-center adjustments, and designing the API protocols and servers for user interaction. Our primary programming language is Rust, which enables us to write efficient code across all system layers, from web servers to kernel coordination. If you are familiar with manually memory-managed languages like C and possess experience in higher-level programming, we encourage you to apply.
Full-time | $138K/yr - $259.4K/yr | On-site | San Francisco, CA; St. Louis, MO; New York, NY; Washington, DC
Scale AI is on the lookout for an exceptionally talented and driven Software Engineer, Frontier AI Infrastructure to become an integral part of our innovative Public Sector Engineering team. In this role, you will take charge of the model inference layer, enabling cutting-edge AI models, troubleshooting the latest AI tools, managing networking tasks, addressing latency issues, and monitoring pricing and usage metrics for AI models. You will spearhead technical discussions with cloud vendors and clients to fulfill critical contracts and resolve platform challenges. Additionally, you will collaborate closely with Product teams to anticipate feature requirements, transitioning from reactive "infra-only debugging" to proactive integration testing.

Your Responsibilities Include:
- Designing and implementing secure, scalable backend systems tailored for Public Sector clients, utilizing Scale's advanced cloud-native AI infrastructure.
- Owning services or systems while defining long-term health objectives and enhancing the health of related components.
- Redesigning the architecture to operate in compliant or restrictive environments, which entails creating swappable components (authentication, storage, logging) to adhere to government and security regulations without compromising product integrity.
- Collaborating with Product teams to develop integration tests that identify issues early, shifting focus from "infra-only debugging" to preventing upstream failures.
- Actively participating in customer engagements, liaising with stakeholders to comprehend requirements and deliver innovative solutions.
- Contributing to the platform roadmap and product strategy for Scale AI's Public Sector division, playing a vital role in shaping the future trajectory of our offerings.
About Lumafield:
Established in 2019, Lumafield has pioneered the development of the world's first accessible X-ray CT scanner specifically designed for engineers. Our intuitive scanner, combined with cloud-based software, empowers engineers to gain unparalleled insights into their projects at a remarkably affordable cost. Engineers face high-stakes decisions daily, necessitating tools that provide maximum visibility into their designs. By delivering exceptional product clarity and AI-enhanced tools that identify issues and produce quantitative insights, Lumafield is set to transform the creation, manufacturing, and application of complex products across various sectors. Our company thrives on impact and is dedicated to delivering the utmost value to our customers, ensuring their needs drive our development. Our talented team consists of leading researchers, industrial designers, PhD holders, innovators, and startup founders, all working collaboratively without egos. We proudly receive backing from prestigious venture capital firms, including Kleiner Perkins, Lux Capital, DCVC, and Spark Capital. Headquartered in Cambridge, MA, with an additional office in San Francisco, CA, we are excited to grow our team.

About the Role:
As a Senior Systems Software Engineer at Lumafield, you will be instrumental in developing the software that drives our cutting-edge, in-line manufacturing CT scanning products. You will engage with state-of-the-art X-ray physics, high-speed detectors, image processing, and embedded systems. Collaborating within a small team focused on our latest hardware, you will harness your expertise to maximize system performance and achieve outstanding results for our clients. This position is perfect for those eager to take ownership of embedded systems, firmware, and software design in an early-stage product environment. This role is based in our San Francisco, CA office, with occasional travel required to our Cambridge, MA office.
About Us:
Aurelius Systems is a venture-capital-backed startup at the forefront of defense technology, specializing in the development of autonomous, edge-deployed robotic systems utilizing directed energy for counter-unmanned aerial systems (counter-UAS). Our innovative approach involves creating laser systems designed to neutralize drones. With a dedicated team of approximately 10 engineers, former U.S. military personnel, and industry experts, we are committed to advancing America's capabilities in directed energy technology, delivering the first cost-effective and reliable laser weapon systems.

Inspired by the philosophy of Marcus Aurelius, we emphasize consistent effort and accountability in our work, embodying a culture of high output without excuses. Following in the footsteps of pioneers like Henry Ford, we embrace innovation and action within our small but impactful team. In addition to our San Francisco headquarters, we are proud to operate a manufacturing hub in Detroit and conduct field tests weekly on our expansive private range. If you thrive on seeing your engineering contributions directly in action rather than being confined to a lab, we encourage you to explore this opportunity.

The Position & Your Contribution:
As a Robotics Software Systems Engineer, your primary responsibility will be to ensure that all subsystems function seamlessly and efficiently together. Our system comprises a complex array of subsystems including sensing, computer vision, machine learning inference, control systems, power management, and mechanical actuation. Achieving minimal processing time and inter-process latency is crucial for successfully targeting our nimble and evasive UAS. The key area we are looking to fill is real-time systems performance at the hardware interface.
You should possess a deep understanding of how software execution impacts physical system behavior, how latency accumulates across CPU, GPU, memory, and I/O, and how bandwidth limitations influence sensor data processing. We need an engineer who is detail-oriented, considering microseconds, memory bandwidth, cache behavior, and system determinism. In our tight-knit team of around 10 engineers, you will have the opportunity to take ownership of systems that are field-tested. The success of our tests is binary: it's either effective or it isn't, and your role will involve iterative improvement based on real-world outcomes.

Your Responsibilities:
- Manage the latency budget for the entire platform, from data sensing to actuation.
- Profile and mitigate latency across CPU, GPU, memory, and I/O interfaces.
- Develop and optimize kernels for high-throughput, low-latency operations.
- Adjust memory access patterns for optimal performance.
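A sense-to-actuation latency budget like the one this role manages can be sketched as a simple audit over per-stage figures; the stage names and microsecond numbers below are invented for illustration, not Aurelius Systems data.

```python
def audit_budget(measured_us: dict[str, float],
                 budget_us: dict[str, float]) -> tuple[float, list[str]]:
    """Return total pipeline latency and the stages exceeding their budget."""
    total = sum(measured_us.values())
    overruns = [s for s, us in measured_us.items() if us > budget_us[s]]
    return total, overruns

# Hypothetical per-stage figures for a sense -> infer -> track -> actuate pipeline.
measured = {"sense": 120.0, "infer": 900.0, "track": 150.0, "actuate": 80.0}
budget = {"sense": 150.0, "infer": 700.0, "track": 200.0, "actuate": 100.0}
total_us, over = audit_budget(measured, budget)  # total 1250.0 us; "infer" is over
```

Keeping each stage accountable to an explicit number is what makes "managing the latency budget" an engineering task rather than a slogan: the overrun list tells you exactly where to profile next.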
About Our Team
The Platform Systems team at OpenAI is at the forefront of innovation, merging advanced AI technologies with large-scale distributed systems. We are tasked with creating the engineering and research infrastructure essential for training OpenAI's premier models on some of the most powerful, custom-built supercomputers globally. Our team is dedicated to developing the core software for model training, delving deep into the technology stack. This encompasses collective communication, compute efficiency, parallelism strategies, fault tolerance, failure detection, and observability. The systems we design are pivotal to enhancing OpenAI's research capabilities, facilitating reliable and efficient training at the leading edge of technology. We work in close partnership with researchers across the organization, continuously integrating insights from various OpenAI projects to advance our training platform.

About the Role
As a Software Engineer specializing in Platform Systems, you will architect and develop distributed systems that enhance visibility into large-scale training operations, ensuring their dependable operation at scale. Your responsibilities will include designing systems for failure detection, tracing, and observability that pinpoint slow or malfunctioning nodes, identify performance bottlenecks, and assist engineers in optimizing extensive distributed training tasks.
This infrastructure is integral to the functionality of OpenAI's training stack and is continuously evolving to accommodate new use cases and increasingly intricate workloads. This position is central to our training infrastructure, merging systems engineering, performance analysis, and large-scale debugging.

Key Responsibilities
- Design and develop distributed failure detection, tracing, and profiling systems tailored for large-scale AI training jobs.
- Create tools to identify slow, faulty, or errant nodes and deliver actionable insights into system behavior.
- Enhance observability, reliability, and performance across OpenAI's training platform.
- Troubleshoot and resolve issues within complex, high-throughput distributed systems.
- Collaborate effectively with systems, infrastructure, and research teams to advance platform capabilities.
- Adapt and expand failure detection and tracing systems to support new training paradigms and workloads.

Ideal Candidate Profile
- Possesses a deep passion for performance, stability, and observability in distributed systems.
- Demonstrates proficiency in systems engineering and performance analysis.
- Has experience in debugging high-throughput distributed systems.
- Exhibits strong collaboration skills with a track record of working with cross-functional teams.
- Shows adaptability and eagerness to embrace new technologies and methodologies.
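One simple form of the slow-node detection described above is robust outlier flagging on per-node step times; this sketch uses median plus median absolute deviation, and the threshold constant is an assumption, not OpenAI's method.

```python
from statistics import median

def stragglers(step_ms: dict[str, float], k: float = 3.0) -> list[str]:
    """Flag nodes whose training-step time exceeds median + k * MAD.

    MAD (median absolute deviation) is robust to the stragglers themselves,
    unlike a mean/stddev cutoff which they would drag upward."""
    times = list(step_ms.values())
    med = median(times)
    mad = median(abs(t - med) for t in times)
    cutoff = med + k * max(mad, 1e-9)  # guard against zero MAD on uniform fleets
    return sorted(n for n, t in step_ms.items() if t > cutoff)
```

Because synchronous training advances at the pace of its slowest participant, flagging and ejecting the one node stepping at 180 ms while its peers step at 100 ms recovers throughput for the whole job.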
Location: San Francisco, CA (Hybrid: 4 days onsite/week). Relocation assistance available.

About Our Team:
At OpenAI, we are at the forefront of technology, creating foundational platform software that ensures our consumer products are reliable, secure, and high-performing. Our team collaborates across various system layers, working closely with engineering partners to deliver exceptional capabilities from initial concept to final launch.

Role Overview:
We are looking for a passionate Systems Software Engineer to lead the design, implementation, and debugging of critical platform components and the pipelines that build and update system images. Your focus will span operating system layers, emphasizing performance optimization, security enhancements, and in-depth system debugging to deliver production-grade systems that exceed expectations.

Key Responsibilities:
- Design and develop robust system-level components and services within both kernel and user spaces.
- Configure and maintain essential OS platform services (init, services, networking, security policies) and related tools.
- Build and manage image and update pipelines, ensuring their reliability, reproducibility, and rollback safety.
- Instrument system performance through profiling and tracing; enhance CPU, memory, I/O, and energy efficiency.
- Oversee platform observability and reliability, including logging, crash capture, watchdogs, and diagnostics.
- Collaborate with cross-functional teams to define interfaces and deliver comprehensive end-to-end features.
- Establish and promote strong engineering practices such as code reviews, continuous integration, reproducible builds, and effective release management.
- Work alongside external vendors to support builds and deployments.

You Will Excel in This Role If You:
- Have successfully launched production systems software on modern operating systems.
- Possess proficiency in C/C++ and a scripting language, with a strong understanding of OS internals including concurrency, memory management, filesystems, networking, and power management.
- Demonstrate exceptional systems debugging skills utilizing debuggers, tracers, profilers, and logs across kernel/user-space boundaries.
- Comprehend the configuration of platform services and interfaces, effectively translating requirements into stable, well-documented APIs.
- Are knowledgeable about user-space foundations including service management, IPC, networking, packaging, and automation.
- Have experience collaborating with external partners to deliver high-quality software solutions.
Why Join Achira?
- Become part of an exceptional team composed of scientists, ML researchers, and engineers dedicated to transforming the landscape of drug discovery.
- Engage with cutting-edge machine learning infrastructure at an unprecedented scale, leveraging extensive computing resources, vast datasets, and ambitious goals.
- Take ownership of significant projects from conception through to architecture and deployment on large-scale infrastructure.
- Thrive in a culture that values thoroughness, speed, and a proactive, builder-oriented mindset.

About the Role
At Achira, we are developing state-of-the-art foundation models that address the most complex challenges in simulation for drug discovery and beyond. Our atomistic foundation simulation models (FSMs) serve as comprehensive representations of the physical microcosm, encompassing machine learning interaction potentials (MLIPs), neural network potentials (NNPs), and various generative model classes. We are looking for a Software Engineer who is enthusiastic about distributed computing and its applications in machine learning. You will play a pivotal role in designing and constructing the infrastructure for our ML data generation pipelines, model training, and fine-tuning workflows across large-scale distributed systems. Your expertise will be crucial in ensuring our compute clusters are efficient, observable, cost-effective, and dependable, enabling us to advance the frontiers of ML development. If you are passionate about distributed systems, performance optimization, and cloud cost efficiency, we encourage you to apply. You will be empowered to conceptualize and manage complex workloads across multiple vendors worldwide. Achira's mission revolves around computation, and providing seamless access to our uniquely tailored workloads at the lowest possible cost is critical to our success.
Join our innovative team at Crusoe as a Staff Software Engineer, where you will leverage your expertise in systems engineering to develop cutting-edge software solutions. In this dynamic role, you will collaborate with cross-functional teams to design, implement, and optimize systems that drive our mission forward. Your contributions will be pivotal in enhancing our technology stack and ensuring the seamless operation of our systems.
About Granica
Granica is an innovative AI research and infrastructure firm dedicated to creating reliable and steerable representations of enterprise data. We build trust through our product Crunch, a policy-driven health layer that ensures large tabular datasets remain efficient, reliable, and reversible. On this solid foundation, we are developing Large Tabular Models: systems designed to learn cross-column and relational structures in order to provide trustworthy answers and automation with inherent provenance and governance.

Our Mission
AI is currently hampered not only by the design of models but also by the inefficiencies of the data that supports them. Every redundant byte, poorly organized dataset, and inefficient data pathway contributes to significant costs, latency, and energy waste as we scale. Granica aims to eliminate these inefficiencies. We merge cutting-edge research in information theory, probabilistic modeling, and distributed systems to craft self-optimizing data infrastructure: systems that consistently enhance the representation and utilization of information by AI. Our engineering team collaborates closely with the Granica Research group led by Prof. Andrea Montanari of Stanford University, bridging advancements in information theory and learning efficiency with large-scale distributed systems.
Together, we believe the next major advance in AI will come from breakthroughs in efficient systems rather than merely larger models.

Your Contributions

Global Metadata Substrate: Design a transactional metadata substrate that supports time travel, schema evolution, and atomic consistency across petabyte-scale tabular datasets.
Adaptive Engines: Build systems that autonomously reorganize data, learning from access patterns and workloads to maintain peak efficiency without manual tuning.
Intelligent Data Layouts: Optimize bit-level organization (encoding, compression, and layout) to maximize the signal extracted per byte read.
Autonomous Compute Pipelines: Create distributed compute systems that scale predictably, adapt to dynamic loads, and remain reliable under failure.
Research to Production: Productionize new algorithms in compression, representation, and optimization as they emerge from ongoing research. We encourage publishing and open-sourcing your work.
Latency as Intelligence: Treat low latency as a measure of system intelligence and design for it from the start.
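The metadata-substrate idea above can be sketched in miniature: an append-only snapshot log gives you time travel and schema evolution essentially for free, provided the underlying data files are immutable. This is an illustrative toy, not Granica's implementation; all names and structures here are assumptions.

```python
import time
from dataclasses import dataclass


@dataclass(frozen=True)
class Snapshot:
    version: int
    timestamp: float
    schema: tuple   # column names visible at this version
    files: tuple    # immutable set of data files backing this version


class MetadataLog:
    """Append-only commit log: every write yields a new immutable
    snapshot, so any past version stays readable (time travel) and
    schema changes never rewrite old data."""

    def __init__(self):
        self._log = []

    def commit(self, schema, files):
        snap = Snapshot(len(self._log), time.time(), tuple(schema), tuple(files))
        self._log.append(snap)  # a real system would make this an atomic swap
        return snap.version

    def latest(self):
        return self._log[-1]

    def as_of(self, version):
        return self._log[version]
```

Reading "as of" an old version simply returns that frozen snapshot, so a schema change in version 1 never affects a reader pinned to version 0.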
At NerdWallet, we are committed to empowering individuals to make informed financial decisions. Our team comprises exceptional people who thrive in an inclusive, flexible, and candid environment. Whether you work remotely or in the office, we prioritize your well-being, professional development, and the impact you can make. We believe that when one of us raises our skills, the whole team benefits.

As part of NerdWallet’s Platform team, you will oversee the systems that serve as the backbone of our consumer experience: our centralized product data platform, partner ingestion pipelines, publishing and click-tracking infrastructure, GraphQL gateway operations, and our high-traffic, headless WordPress CMS. These platforms deliver accurate, compliant, and high-performance product and content experiences to millions of users on web and mobile. We are looking for a Senior Engineering Manager to lead this team in modernizing legacy services into scalable, reliable systems while advancing our vision of a decoupled, adaptable platform that enables faster publishing, better observability, and future growth.

As Senior Engineering Manager for Platform Systems, you will guide and support a team of engineers in delivering high-quality, scalable, and secure software aligned with NerdWallet’s product and business objectives. You will collaborate closely with Product Managers and other cross-functional partners to define the roadmap, prioritize work, and remove obstacles, while nurturing strong engineering practices and a culture of continuous improvement.
Your responsibilities will include ensuring technical quality, team well-being, and smooth daily operations, while mentoring engineers, making strategic technical decisions, and balancing immediate deliverables with long-term sustainability, compliance, and reliability. This position reports to the Director of Engineering.

Opportunities for Impact:

Lead, mentor, and develop a high-performing engineering team responsible for NerdWallet’s platform systems, including the Content Platform, CMS, and Product Data Platform.
Collaborate with Product Managers and cross-functional teams to plan, prioritize, and execute the product roadmap.
Champion consistent adherence to software development best practices, including code quality, testing, documentation, and operational excellence.
Guide technical and architectural decisions to ensure solutions are scalable, secure, reliable, and compliant with regulatory standards.
Balance immediate project needs with long-term vision and maintainability.
Full-time | $216.2K/yr - $270.3K/yr | On-site | San Francisco, CA; New York, NY
The Frontier Data team at Scale AI develops innovative data solutions and robust systems that power the most advanced applications of frontier AI, including autonomous capabilities such as coding agents, tool use, and automation of graphical user interfaces. We tackle complex, ambiguous challenges by creating scalable platforms and pipelines that consistently yield high-quality outcomes. We are seeking a Senior Backend Engineer who excels in uncertain environments, works at a fast pace, and is passionate about taking on significant technical challenges. You will be instrumental in designing and building scalable systems while collaborating closely with research, product, operations, and other engineering teams.
Full-time | $350K/yr - $475K/yr | On-site | San Francisco
At Thinking Machines Lab, we are dedicated to empowering humanity by advancing collaborative general intelligence. Our vision is a future where everyone can access the knowledge and tools needed to harness AI for their unique needs.

Our team of scientists, engineers, and builders has developed some of the most recognized AI products, including ChatGPT and Character.ai, notable open-weight models like Mistral, and popular open-source projects such as PyTorch, OpenAI Gym, Fairseq, and Segment Anything.

About the Role

We are seeking versatile infrastructure and systems engineers to help build the foundational systems that support our models and enable research and product development. Your contributions will let teams create and deliver groundbreaking AI products.

As a member of a small, high-impact team, you will architect and scale the core infrastructure that underpins our operations. The role spans the entire technical stack, from solving complex distributed-systems challenges to developing robust, scalable platforms.

Infrastructure is vital to our success; it is the foundation for every innovation. You will collaborate directly with researchers to speed up experiments, improve infrastructure efficiency, and extract critical insights from our models, products, and data assets.

Note: This is an evergreen role, meaning we continuously accept expressions of interest. Given the volume of applications, there may not always be an immediate match for your skills and experience, but we encourage you to apply. Applications are reviewed regularly, and we reach out to candidates as new opportunities arise. You may reapply if you gain additional experience, but please wait at least six months between applications.
We also occasionally post roles for specific projects or teams, and you are welcome to apply for those as well.

What You’ll Do

Interviews are conducted in a general manner, but project selection will weigh your interests and experience alongside the needs of the organization. This flexible approach lets us place engineers on the infrastructure teams where they will have the greatest impact and room for growth.

Depending on your expertise and interests, you may contribute to areas such as:

Core Infrastructure: Supporting the teams that train, research, and ultimately serve AI models by building the infrastructure required for reliable and secure training of frontier models. This may include developing systems and managing large Kubernetes clusters with GPU workloads.
About Us

At LatchBio, we are transforming biological discovery through the fusion of laboratory automation, high-throughput assays, and machine learning. Our platform is designed to store, visualize, and analyze the next wave of scientific discoveries. Trusted by teams across pharmaceutical, biotech, and solution-provider sectors, our technology plays a crucial role in enhancing, informing, and delivering groundbreaking products.

Our team of engineers has spent over four years building and selling cutting-edge technology in a challenging market that is often hesitant to embrace newcomers. We serve a diverse clientele with varying product expectations, which requires close collaboration and nuanced communication with both technical and non-technical users. Our systems routinely handle computational tasks involving multiple terabytes of data.

This commitment and perseverance have produced significant market validation, with revenue more than tripling over the past year. Looking ahead, we aim for a sustainable growth trajectory: a repeatable sales process and $50 million in annual recurring revenue (ARR) within the next three years.

Our core product offerings:

Distributed file system with metadata on Postgres and blobs on S3, featuring a web UI and a FUSE driver.
Workflow orchestrator built on Kubernetes.
On-demand interactive compute instances based on Kubernetes containers.
Statically-typed tabular data storage engine.
Reactive Python-based web application framework for data analysis and visualization.
Upcoming: a cluster orchestrator and workflow engine designed to accept compute nodes from anywhere on the internet.

While various startups focus on niche solutions, our comprehensive approach sets us apart in this tech-heavy industry.
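The first product above rests on a classic split: metadata in a transactional store (Postgres), immutable blobs in object storage (S3). A minimal sketch of that split, with in-memory dicts standing in for both stores; all class and method names here are illustrative assumptions, not Latch's API.

```python
import hashlib


class BlobStore:
    """Toy stand-in for S3: immutable, content-addressed blobs."""

    def __init__(self):
        self._blobs = {}

    def put(self, data: bytes) -> str:
        key = hashlib.sha256(data).hexdigest()  # content addressing dedupes identical blobs
        self._blobs[key] = data
        return key

    def get(self, key: str) -> bytes:
        return self._blobs[key]


class MetadataLayer:
    """Toy stand-in for the Postgres side: maps paths to blob keys,
    so listings and renames are pure metadata operations that never
    touch blob storage."""

    def __init__(self, blobs: BlobStore):
        self._blobs = blobs
        self._paths = {}  # path -> blob key

    def write(self, path: str, data: bytes) -> None:
        self._paths[path] = self._blobs.put(data)

    def read(self, path: str) -> bytes:
        return self._blobs.get(self._paths[path])

    def rename(self, old: str, new: str) -> None:
        self._paths[new] = self._paths.pop(old)  # metadata-only: no blob copy
```

Because a rename rewrites only the path-to-key mapping, moving a multi-terabyte file is as cheap as moving a one-byte file, which is the practical payoff of this design.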
Jun 22, 2024