Experience Level
Entry Level
Qualifications
Strong programming skills in languages such as Java, Python, or C++.
Experience with cloud platforms and services.
Excellent analytical and problem-solving capabilities.
Ability to work collaboratively in a fast-paced environment.
Prior experience in a technical role is preferred.
About the job
Join our dynamic team at Adyen as a Technical Staff Member in San Francisco! We are seeking innovative minds passionate about technology and problem-solving. In this role, you will collaborate with cross-functional teams to craft solutions that enhance our services and improve customer experiences.
About Adyen
Adyen is a leading global payment company that enables businesses to accept payments and grow their revenue. Our innovative technology provides a seamless experience for merchants and their customers, making transactions faster and more secure.
About Liquid AI
Originating from MIT CSAIL, Liquid AI specializes in creating versatile AI systems that operate efficiently across various platforms, from data center accelerators to on-device hardware, focusing on low latency, minimal memory consumption, privacy, and dependability. Our collaborations extend across industries including consumer electronics, automotive, life sciences, and financial services. As we undergo rapid expansion, we are on the lookout for outstanding individuals to join our journey.

The Opportunity
The Vision-Language Models (VLM) team is dedicated to developing cutting-edge vision-language models that function seamlessly on devices, adhering to stringent latency and memory requirements without compromising quality. Having already launched four premier models, we are excited about what lies ahead.

This team is responsible for the complete VLM pipeline, encompassing research on novel architectures, training algorithms, data curation, evaluation, and deployment. You will be part of a dedicated, hands-on team that directly engages with models and works closely with our pretraining, post-training, and infrastructure teams. Your success will be gauged by the performance of the models we deliver.
Join Composio as we revolutionize the way agents communicate with the tools you rely on, such as GitHub, Gmail, Notion, Salesforce, and more. As part of our dynamic team of engineers, you will tackle challenges from context to search, creating a seamless connection between agents and their essential tools.

We've successfully raised a $25M Series A from Lightspeed, backed by visionary investors like Guillermo Rauch (CEO of Vercel), Dharmesh Shah (CTO of HubSpot), and Gokul Rajaram. This year, we have tripled our ARR, serving a diverse clientele that includes startups from Y Combinator to established companies like Wabi, Glean, and Zoom.

Your Responsibilities
Develop large evaluations using real tool-calling data to assess model performance in long-term tool execution.
Address search challenges by identifying semantically similar tools and optimizing cached tool execution paths and plans.
Train expansive agentic harness systems to enhance session accuracy using millions of real tool calls as baseline data.

Essential Qualifications
If you're exceptionally skilled, nothing is a strict requirement.
Research Expertise: Ability to independently advance research objectives. Skilled in rapid prototyping and testing of experiments. Able to collaborate with product and engineering teams to transition research concepts into production swiftly.
Strong Writing Skills: Capable of documenting effectively and articulating complex ideas clearly.
Interpersonal Skills: Foster trust and acknowledge areas for growth.
Join our innovative team at Liquid AI as a Member of the Technical Staff specializing in audio applications. In this post-training role, you will have the opportunity to apply your knowledge of cutting-edge audio technologies, contributing to the development of advanced machine learning solutions.

This position is ideal for individuals who are eager to work in a collaborative environment and are passionate about audio technology and its applications in artificial intelligence.
About Us:
Modal is at the forefront of AI infrastructure. We provide seamless access to GPUs, quick container startups, and integrated storage solutions, simplifying the process of model training, batch job execution, and low-latency inference. Leading companies such as Suno, Lovable, and Substack trust Modal to transition from prototypes to full-scale production without the complexities of infrastructure management.

Our rapidly expanding team operates out of NYC, San Francisco, and Stockholm. With a remarkable 9-figure annual recurring revenue (ARR), we recently achieved a valuation of $1.1 billion after our successful Series B funding round. Thousands of customers, including Lovable, Scale AI, Substack, and Suno, depend on us for their AI workload needs.

Joining Modal means becoming part of a dynamic, rapidly growing AI infrastructure company with substantial opportunities for personal and professional advancement. Our team comprises creators of well-known open-source projects (e.g., Seaborn, Luigi), academic researchers, international competition medalists, and seasoned engineering and product leaders with extensive experience.

The Role:
We are seeking a strategic Solutions Architect to influence technical strategies across our key enterprise accounts. In this position, you will serve as the technical partner to Enterprise Account Executives, managing intricate evaluations, developing infrastructure modernization strategies, and promoting multi-product adoption across AI and ML applications. This is a consultative role that demands a robust architectural background, a strong executive presence, and the ability to impact major infrastructure decisions worth millions.

You will collaborate directly with CTOs, VPs of Engineering, and leaders of ML platforms to redefine the construction and operation of AI infrastructure. If you excel in fast-paced technical sales contexts and are eager to shape the infrastructure that drives modern AI innovations, we would love to hear from you.
Technical Staff Member
Mirendil is a pioneering technology company dedicated to addressing fundamental challenges that propel significant advancements in science and technology. Our primary mission is to democratize access to cutting-edge AI research and development across various scientific fields. We believe that accelerating scientific discovery is one of the most impactful ways to enhance humanity's future, with AI playing a crucial role in achieving this vision.

We are in the process of establishing a leading AI research company, developing our own models from the ground up. Our focus encompasses model training, reinforcement learning, reasoning systems, and the infrastructure necessary for large-scale experiments. Our team comprises accomplished researchers and engineers from esteemed organizations such as Anthropic, Google DeepMind, xAI, OpenAI, Microsoft, Apple, and MIT.

Position Overview
We are seeking skilled engineers and researchers to join us as Members of Technical Staff. This role is designed to be flexible and open-ended. Depending on your expertise and interests, you may engage in:
Enhancing and training advanced AI models
Developing reinforcement learning and reasoning systems
Building infrastructure for extensive experimental projects
Creating systems to automate or expedite research workflows

If you are passionate about tackling ambitious challenges at the crossroads of AI, research, and scientific innovation, we would love to connect with you.
At Catalog, we are pioneering the commerce infrastructure for AI, creating the essential framework that enables digital agents to not only explore the web but also comprehend, analyze, and engage with products. Our innovations drive the future of AI-driven shopping experiences, fundamentally transforming how consumers discover and purchase items online.

Role Overview
As a Technical Staff Member, you will be instrumental in developing core systems, shaping our engineering culture, and transitioning our vision from prototype to a robust platform. This role requires full-stack expertise and a commitment to owning and resolving challenges from start to finish.

Who You Are
You have experience creating beloved and trusted products from the ground up.
You combine technical proficiency with a keen product sense and data-driven intuition.
You are well-versed in AI technologies.
You prioritize speed, write clean code, and ensure thorough instrumentation.
You seek a high level of ownership within a small, talent-rich team based in San Francisco.

Challenges You Will Tackle
Develop and deploy agentic-search APIs that deliver structured and real-time product data in milliseconds.
Build checkout systems enabling agents to conduct transactions with any merchant.
Create an embeddings and retrieval layer that optimizes recall, precision, and cost efficiency.
Establish a product graph and ranking pipeline that adapts based on actual user outcomes.

Preferred Qualifications
Proven experience shipping data-centric products in a live environment.
Experience with recommendation systems or information retrieval methodologies.
Familiarity with API development, search indexing, and data pipeline construction.

Our Work Culture
We operate with a small, high-trust, and highly motivated team, fostering an environment of in-person collaboration in North Beach, San Francisco. Our process involves debate, decision-making, and execution.

If your profile aligns with our needs, we will contact you to arrange 2-3 brief technical interviews, followed by an onsite meeting in our office where you will collaborate on a small project, exchange ideas, and meet the team.
Join Canva as a Staff Research Scientist specializing in Video & Audio Generative AI. In this pivotal role, you will leverage your expertise in AI to develop innovative solutions that enhance our platform's multimedia capabilities. Collaborate with cross-functional teams to push the boundaries of AI technology, driving impactful projects that redefine user experiences.
Overview: Due to increasing market demand and a robust six-month product roadmap, Listen Labs is expanding its engineering team. We seek a technically adept individual (our team includes three IOI medalists) who is eager to contribute to a product that is revolutionizing corporate decision-making. If you are passionate about solving intricate problems from start to finish, we invite you to connect with us.

About Listen Labs
Listen Labs is an innovative AI-driven research platform that empowers teams to swiftly extract insights from customer interviews in hours rather than months. Our technology enables clients to analyze conversations, identify recurring themes, and expedite informed product decisions.

Company Highlights:
Exceptional Team: Composed of seasoned entrepreneurs (with prior AI exits), co-founders, and experts from leading firms such as Jane Street, Twitter, Stripe, Affirm, Bain, Goldman Sachs, and more, our team is built on a foundation of excellence.
Rapid Growth: We are a dynamic team of 40, supported by Sequoia, achieving a remarkable growth trajectory from $0 to $14 million run-rate in less than a year. We prioritize speed, craftsmanship, and collaboration with individuals who embrace ownership.
Impressive Traction: We have seen rapid growth across various sectors, securing enterprise clients such as Google, Microsoft, Nestlé, and P&G.
Outstanding Performance: Our industry-leading win rate is a direct result of our uniquely differentiated product.
Market Validation: We consistently attract customers across every segment, often landing six-figure deals that lead to quick expansions.
Viral Product: Our interviews are shared with tens of thousands of viewers, driving product-led growth, organic expansion, and daily inquiries from Fortune 500 companies.

Technical Challenges:
Research Agent Development: Hiring McKinsey, unlike buying traditional software, means gaining both insight and execution expertise. We are building Listen Labs with that mindset — an AI agent that understands our platform and best research practices, assisting users in project setup, interview execution, and response analysis.
Human Database Creation: A core value proposition is our capability to connect users with specific demographics. We are developing a database of millions of individuals, continually enhancing our understanding of user needs as they engage with Listen Labs.
Join the team at Mirendil as a Member of Technical Staff specializing in Machine Learning Systems. In this role, you will leverage your expertise to develop innovative solutions that enhance our ML frameworks and contribute to groundbreaking projects in the AI space. Collaborate with top talent in a dynamic environment that promotes creativity and technical excellence.
At Composio, we are developing advanced infrastructure that enables agents to seamlessly interact with essential work tools such as GitHub, Gmail, Notion, Salesforce, and more. Our dedicated team of engineers is committed to tackling challenges ranging from contextual understanding to search functionalities, ensuring we provide an exceptional bridge between your agents and their tools.

Having secured $25M in Series A funding from Lightspeed, alongside prominent angel investors like Guillermo Rauch (CEO of Vercel), Dharmesh Shah (CTO of HubSpot), and Gokul Rajaram, we have experienced remarkable growth, tripling our ARR at the start of this year. Our clientele includes notable names from Y Combinator cohorts to Wabi, Glean, Zoom, and beyond.

Your Role
Enhance the experience of teams utilizing our platform by refining our core APIs and SDK.
Create intuitive interfaces for both frontend and SDK applications.
Take ownership of product development from concept through to production.
Collaborate closely with customers to cultivate their loyalty while enhancing the product.
Craft clear and concise documentation.
TierZero builds tools that help engineering teams deliver and manage code efficiently. The platform enables quicker incident response, clearer operational visibility, and shared knowledge among engineers. Backed by $7 million from investors like Accel and SV Angel, TierZero supports clients such as Discord, Drata, and Framer as they strengthen infrastructure for AI-driven work.

This in-person role is based at TierZero's San Francisco headquarters, with a hybrid schedule requiring three days onsite each week. As a founding member of the technical staff, you will work directly with the CEO, CTO, and customers to influence the direction of TierZero's core products and systems. The position calls for flexibility as priorities shift and close collaboration across the company.

What you will do
Design and develop AI systems that handle large volumes of unstructured data.
Build full-stack product features, informed by direct feedback from users.
Enhance the product so agents are intelligent, reliable, and easy for engineers to use.
Create systems to automatically evaluate outputs from large language models and improve agentic reasoning through self-play and feedback.
Construct machine learning pipelines, including data ingestion, feature creation, embedding stores, retrieval-augmented generation (RAG) pipelines, vector search, and graph databases.
Experiment with open-source and emerging large language models to compare different approaches.
Develop scalable infrastructure for long-running, multi-step agents, including memory, state management, and asynchronous workflows.

Requirements
Interest in working with large language models, managed cloud platforms, cloud infrastructure, and observability tools.
At least 5 years of professional experience or significant open-source contributions.
Comfort with shifting priorities and tackling new technical problems.
Strong product focus and commitment to customer outcomes.
Openness to learning from a team with a track record of delivering over $10 billion in value.
Ability to work onsite in San Francisco three days per week.
Bonus: Experience in a startup setting and familiarity with startup dynamics.
Full-time|On-site|Cambridge, MA USA; London, UK; San Francisco, CA USA
Join Lila Sciences as a Staff or Principal Engineer specializing in Technical Mitigations Research. This role offers an exciting opportunity to leverage your engineering expertise to develop innovative solutions in the field of technical mitigations.
Zyphra is an innovative artificial intelligence company located in the heart of San Francisco, California.

The Opportunity:
Join our dynamic team as a Research Engineer - Audio & Speech Models, where you will play a pivotal role in advancing Zyphra’s Audio Team. You will be instrumental in developing cutting-edge open-source text-to-speech and audio models. Your contributions will span the full spectrum of the model training process, from data collection and processing to the design of innovative architectures and training approaches.

Your Responsibilities:
Conduct large-scale audio training operations
Optimize the performance of our training infrastructure
Collect, process, and evaluate audio datasets
Implement architectural and methodological improvements through rigorous testing

What We Seek:
A strong research mindset with the ability to navigate projects from ideation to implementation and documentation.
Proficiency in rapid prototyping and implementation, allowing for swift experimentation.
Effective collaboration skills in a fast-paced research environment.
A quick learner who is eager to embrace and implement new concepts.
Excellent communication abilities, enabling you to contribute to both research and engineering tasks at scale.

Preferred Qualifications:
Expertise in training audio models, such as text-to-speech, ASR, speech-to-speech, or emotion recognition.
Experience with training audio autoencoders.
Solid understanding of signal processing, particularly in audio.
Familiarity with diffusion models, consistency models, or GANs.
Experience with large-scale (multi-node) GPU training environments.
Strong understanding of experimental methodologies for conducting rigorous tests and ablations.
Interest in large-scale, parallel data processing pipelines.
Competence in PyTorch and Python programming.
Experience contributing to large, established codebases with rapid adaptation.
Overview
Pluralis Research is at the forefront of Protocol Learning, innovating a decentralized approach to train and deploy AI models that democratizes access beyond just well-funded corporations. By aggregating computational resources from diverse participants, we incentivize collaboration while safeguarding against centralized control of model weights, paving the way for a truly open and cooperative environment for advanced AI.

We are seeking a talented Machine Learning Training Platform Engineer to design, develop, and scale the core infrastructure that powers our decentralized ML training platform. In this role, you will have ownership over essential systems including infrastructure orchestration, distributed computing, and service integration, facilitating ongoing experimentation and large-scale model training.

Responsibilities
Multi-Cloud Infrastructure: Create resource management systems that provision and orchestrate computing resources across AWS, GCP, and Azure using infrastructure-as-code tools like Pulumi or Terraform. Manage dynamic scaling, state synchronization, and concurrent operations across hundreds of diverse nodes.
Distributed Training Systems: Design fault-tolerant infrastructure for distributed machine learning, including GPU clusters, the NVIDIA runtime, S3 checkpointing, large dataset management and streaming, health monitoring, and resilient retry strategies.
Real-World Networking: Develop systems that simulate and manage real-world network conditions—such as bandwidth shaping, latency injection, and packet loss—while accommodating dynamic node churn and ensuring efficient data flow across workers with varying connectivity, as our training occurs on consumer nodes and non-co-located infrastructure.
Full-time|On-site|San Francisco / Tel Aviv / Zurich
Tzafon is at the forefront of machine intelligence, operating as a cutting-edge foundation model lab dedicated to building scalable computing systems. With offices in San Francisco, Zurich, and Tel Aviv, we have secured over $12 million in funding to propel our mission of expanding the boundaries of machine intelligence.

Our talented team comprises engineers and scientists with extensive expertise in ML infrastructure and research, founded by distinguished IOI and IMO medalists, PhD holders, and alumni from top tech firms such as Google DeepMind, Character, and NVIDIA. We specialize in training models and constructing infrastructure for swarms of agents to automate tasks across real-world environments.

In this role, you'll collaborate with our product and post-training teams to deploy Large Action Models that deliver results. Your responsibilities will include building evaluations, benchmarks, and fine-tuning pipelines, as well as defining optimal model behavior and achieving it at scale.