Technical Staff Intern AI Workload Deployment jobs in San Francisco – Browse 5,349 openings on RoboApply Jobs

Technical Staff Intern AI Workload Deployment jobs in San Francisco

Open roles matching “Technical Staff Intern AI Workload Deployment” with location signals for San Francisco. 5,349 active listings on RoboApply Jobs.

5,349 jobs found

1 - 20 of 5,349 Jobs
Gimlet Labs
Internship|On-site|San Francisco

Gimlet Labs is pioneering the first heterogeneous neocloud designed specifically for AI workloads. As demand for AI systems grows, the industry faces significant power, capacity, and cost challenges with today's homogeneous, vertically integrated infrastructure. Gimlet tackles these limitations by decoupling AI workloads from the physical hardware, intelligently partitioning tasks into components and orchestrating them onto the most suitable hardware, optimizing for performance and efficiency. This approach enables heterogeneous systems spanning multiple vendors and generations of hardware, including the latest cutting-edge accelerators, delivering substantial improvements in performance and cost efficiency at scale.

Gimlet is also developing a robust, production-grade neocloud tailored for agentic workloads. Our customers deploy and manage their workloads seamlessly through stable, production-ready APIs, without needing to focus on hardware selection, placement, or intricate performance optimizations.

Collaborating with foundation labs, hyperscalers, and AI-native organizations, Gimlet powers real-world production workloads capable of scaling to gigawatt-class AI datacenters.

Gimlet Labs is looking for a Technical Staff Intern to help develop our platform for deploying and monitoring AI workloads. In this role, you will leverage the latest AI methodologies to create frameworks that enhance and optimize AI workloads, and play a vital part in advancing Gimlet's compilation framework for partitioning and orchestrating AI workloads across varied hardware environments. Your designs will lead to scalable systems capable of handling production workloads of millions of requests per second.

Mar 31, 2025
Gimlet Labs
Full-time|On-site|San Francisco

Gimlet Labs is pioneering the first heterogeneous neocloud designed specifically for AI workloads. As demand for AI systems grows, current infrastructure is reaching critical limits in power, capacity, and cost efficiency. Gimlet addresses these challenges by decoupling AI workloads from the hardware they run on: our platform intelligently partitions workloads into components and allocates each to the most suitable hardware, optimizing for performance and efficiency. This approach enables heterogeneous systems that operate across diverse vendors and generations of hardware, including the latest cutting-edge accelerators, delivering substantial gains in performance and cost-effectiveness.

Gimlet is also developing a production-grade neocloud for agentic workloads, allowing clients to deploy and manage their tasks through stable, production-ready APIs without considering hardware selection, placement, or intricate performance optimizations.

Our collaborations span foundation labs, hyperscalers, and AI-native companies, all aimed at powering real production workloads built to scale in gigawatt-class AI datacenters.

Gimlet Labs is currently looking for a Technical Staff Member specializing in AI research. In this role, you will assess and apply innovative techniques to improve performance and quality across the newest AI models. Your contributions will be central to exploring new model architectures and experimenting with advanced inference-efficiency methods such as KV caching and FlashAttention. The research team designs and prototypes frameworks that use fine-tuning and knowledge distillation to push the limits of model performance.

Feb 26, 2025
F2-ai
Full-time|On-site|San Francisco

About the Role:
As an AI Deployment Strategist at F2-ai, you will bridge customer success, sales, and engineering teams to ensure technical precision and effective implementation of our F2 product. You will become the go-to expert in AI prompting, serving as the technical foundation of our go-to-market strategy.

Your Responsibilities:
- Assist in the F2 sales process: join prospect and client meetings to resolve technical questions about our product.
- Manage customer due diligence: complete InfoSec questionnaires and continuously improve our due diligence knowledge base.
- Review reports produced by users, identify issues, and collaborate with engineering to ensure customers derive value from our AI solutions.
- Develop prompts: craft natural language prompts tailored to specific customer requirements to guarantee accurate, relevant report outputs.
- Conduct light technical implementations, such as configuring Single Sign-On (SSO) or establishing custom data retention policies.
- Maintain CRM hygiene: keep data clean and structured to support the scalability of the customer success and sales teams.
- Act as the customer advocate to the engineering team, providing feedback that drives product enhancement.

Who You Are:
You translate fluently between engineers and customers. Your experience with enterprise buyers lets you convey complex systems in a straightforward manner, fostering cross-functional collaboration that leads to impactful results.
- Experience in prompt engineering and working with LLMs and AI SaaS tools.
- Familiarity with enterprise information security concepts and due diligence procedures.
- Excellent communication skills; commercially astute, customer-focused, and technologically proficient. You can simplify complex ideas for non-technical audiences.
- Able to manage intricate workflows and technical deliverables without writing production code.
- Strong technical aptitude and willingness to improve your coding skills.

Feb 9, 2026
Stuut AI
Full-time|On-site|San Francisco

At Stuut AI, we are revolutionizing accounts receivable for B2B companies, making collections smarter and more efficient. Our platform is being adopted by finance teams across sectors including industrials, chemicals, and manufacturing, with clients ranging from Fortune 10 enterprises to growing midmarket firms. We are backed by investors including Andreessen Horowitz, Khosla Ventures, Activant, 1984 Ventures, and Page One.

Position Overview
We are seeking a Technical Staff Member focused on Internal AI Tooling. This pivotal role involves building the foundational systems that allow Stuut to scale effectively. Your primary responsibility will be to design and implement internal infrastructure, automation, and AI-driven workflows that enhance operational efficiency across departments, starting with marketing and extending to sales, operations, and product development.

This is a significant role for a proactive individual who thrives on transforming manual or disjointed processes into scalable systems. You will collaborate with leadership and cross-functional teams to design AI agents, automation pipelines, and internal tools that streamline operations and unlock new capabilities.

We are transitioning to an agent-first model, not only in our products but in our operational approach. This role is central to that evolution.

Mar 16, 2026
Gimlet Labs
Full-time|On-site|San Francisco

Gimlet Labs is pioneering the first heterogeneous neocloud specifically designed for AI workloads. As demand for AI systems increases, the industry faces critical power, capacity, and cost challenges with existing homogeneous infrastructure. Gimlet decouples AI workloads from hardware, partitioning tasks into manageable components and directing them to the optimal hardware to maximize performance and efficiency. This strategy enables heterogeneous systems across various vendors and generations of hardware, including cutting-edge accelerators, yielding significant gains in performance and cost efficiency at scale.

Building on this framework, Gimlet is developing a robust neocloud for agentic workloads, enabling clients to seamlessly deploy and manage their operations via stable, production-ready APIs and eliminating the need to navigate hardware selection, placement, or intricate performance optimizations.

We collaborate with foundation labs, hyperscalers, and AI-native enterprises to support production workloads designed to scale to gigawatt-class AI datacenters.

We are looking for a Member of Technical Staff specializing in compilers. In this position, you will work on the core compilation infrastructure that converts high-level AI workloads into highly efficient executable programs across a diverse array of advanced hardware. Your role includes designing and implementing compiler systems that partition workloads, optimize them through various intermediate representations (IRs), and target multiple execution environments and accelerators.

This position is ideal for engineers who thrive on building practical systems, engaging closely with hardware, and turning emerging AI models and execution patterns into reliable, production-ready infrastructure.

Mar 10, 2026
Gimlet Labs
Full-time|On-site|San Francisco

At Gimlet Labs, we are pioneering the first heterogeneous neocloud tailored for AI workloads. As AI technology evolves, the industry confronts critical limits in power, capacity, and cost tied to traditional homogeneous, vertically integrated infrastructure. Gimlet addresses these challenges by decoupling AI workloads from the underlying hardware, intelligently partitioning them into components and orchestrating each onto the hardware that best meets its performance and efficiency needs. This approach enables heterogeneous systems across diverse vendors and generations of hardware, including the latest emerging accelerators, delivering significant improvements in performance and cost efficiency at scale.

Building on this platform, Gimlet is developing a production-grade neocloud for agentic workloads. Our customers deploy and manage their workloads through stable, production-ready APIs without the complexities of hardware selection, placement, or low-level performance optimization.

Gimlet collaborates with foundation labs, hyperscalers, and AI-native companies to enable real production workloads designed to scale to gigawatt-class AI datacenters.

We are currently seeking a Technical Staff Member specializing in distributed systems. In this role, you will help develop the core platform responsible for scheduling, routing, and managing AI workloads reliably at production scale. You will work on systems that coordinate execution across thousands of nodes, provide stable production APIs, and guarantee predictable workload performance under real-world load and failure conditions.

This position is ideal for engineers passionate about building foundational infrastructure, understanding systems end to end, and operating at scale.

Mar 10, 2026
OpenAI
Full-time|On-site|San Francisco

At OpenAI, we are committed to ensuring that advances in artificial general intelligence (AGI) benefit humanity as a whole. To achieve this goal, we are laying the technical groundwork that enables enterprises to move from initial experiments to real-world, production-level applications of our platform and coding solutions.

The Lead Technical AI Deployment Manager plays a pivotal role in this mission. You will shape how OpenAI's extensive technical knowledge is shared with developers and engineering groups, transforming our approach from high-touch, limited delivery to a scalable, consistent technical training model that drives significant customer results.

This role involves architecting and expanding OpenAI's technical adoption strategies for developers. The Technical AI Deployment Manager will convert OpenAI's platform capabilities into actionable, high-quality technical training that empowers clients to design, build, and deploy effective systems. The position operates at the intersection of technical expertise, education, and scalability.

You will lead by example, initially delivering impactful technical training yourself, and subsequently build a team, set standards, and establish an operational framework that supports this work globally.

This is a foundational leadership position with substantial visibility and a well-defined path for growth as you demonstrate your impact.

Mar 5, 2026
Anthropic
On-site|Atlanta, GA; Austin, TX; Boston, MA; Chicago, IL; San Francisco, CA; New York City, NY; Washington, DC

Join Anthropic as an Engagement Manager on our Applied AI team, where you will spearhead the delivery of cutting-edge AI solutions for Fortune 500 companies. In this pivotal role, you will collaborate with customers to create bespoke AI agents that enhance their core business processes. You will oversee the entire project lifecycle from the signed Statement of Work (SOW) to production deployment, coordinating cross-functional teams that include Engineering, Product, Design, and key customer stakeholders. This position goes beyond traditional project management; you will adeptly navigate complex enterprise environments, eliminate technical and organizational obstacles, and drive measurable business outcomes while upholding our commitment to safety and reliability. Work closely with Forward Deployed Engineers (FDEs) to manage stakeholder relationships and organizational intricacies, ensuring seamless delivery of AI innovations. Additionally, you will champion our mission in the field and develop the frameworks that enable scalability in our growing initiatives.

Jan 29, 2026
Strala
Full-time|On-site|San Francisco

About the Role:
Strala is looking for exceptional operators to take on one of the most challenging roles in AI-native services: enabling critical AI solutions for some of the world's largest insurance companies. This pivotal role involves direct engagement with clients to ensure successful deployments of our technology. You will navigate complex environments, connecting our engineering efforts with the intricate realities faced by large insurers, ultimately delivering substantial ROI. If you possess a rare blend of technical expertise and leadership skills, capable of guiding high-stakes discussions with executives while also diving deep into data models to resolve issues, we want to hear from you.

About Strala:
At Strala, we are revolutionizing the claims process and improving loss ratios for the largest insurers globally. Our mission is to reshape the $280 billion foundation of the insurance industry using applied AI. In under a year, we have reached multi-seven-figure ARR with approximately 50% month-over-month growth, all without dedicated growth personnel. We are funded by top-tier investors, including early partners of OpenAI, Cognition, and Ramp. Our lean, powerful team includes top sales leaders; engineers from companies such as DRW, Optiver, AWS, and Palantir; AI researchers from institutions such as Oxford and Cambridge; and multiple ICPC and Olympiad winners.

Key Responsibilities:
- Oversee multiple client deployments from post-sale design through go-live and stabilization, serving as the primary contact for VP-level stakeholders.
- Collaborate with client teams on architecture, data models, workflow design, and source-of-record decisions.
- Transform complex client realities into structured plans, including milestones, ownership, risks, dependencies, and progress tracking.
- Communicate effectively with both client executives and internal engineering teams, establishing trust with senior stakeholders while engaging deeply with technical teams and earning credibility in both arenas.
- Coordinate daily with Strala's engineering and sales teams, translating client requirements into engineering priorities and incorporating deployment context into deal strategies.
- Partner with Sales on late-stage deals to refine project scope and identify expansion opportunities.

This position is based in San Francisco. We believe in-person collaboration leads to better results.

Mar 10, 2026
Gimlet Labs
Full-time|On-site|San Francisco

At Gimlet Labs, we are pioneering the first heterogeneous neocloud tailored for AI workloads. As demand for AI systems grows, traditional infrastructure faces significant limits in power, capacity, and cost. Our platform addresses these challenges by decoupling AI workloads from hardware, intelligently partitioning tasks and directing each component to the most suitable hardware for optimal performance and efficiency. This method enables heterogeneous systems spanning multiple vendors and generations of hardware, including the latest cutting-edge accelerators, achieving substantial improvements in performance and cost-effectiveness.

Building on this foundation, Gimlet is developing a production-grade neocloud designed for agentic workloads. Our customers deploy and manage their workloads with stable, production-ready APIs, eliminating the complexities of hardware selection, placement, or low-level performance optimization.

We collaborate with foundation labs, hyperscalers, and AI-native companies to drive real production workloads capable of scaling to gigawatt-class AI data centers.

We are currently seeking a Member of Technical Staff specializing in kernels and GPU performance. In this role, you will work closely with accelerators and execution hardware to extract maximum performance from AI workloads across diverse, rapidly evolving platforms. You will analyze low-level execution behavior, design and optimize kernels, and ensure consistent performance across both established and emerging hardware.

This position is perfect for engineers who thrive on deep performance analysis, enjoy exploring hardware trade-offs, and are passionate about turning theoretical peak performance into tangible real-world results.

Mar 10, 2026
OpenAI
Full-time|On-site|San Francisco

Join our team at OpenAI as a Software Engineer specializing in Workload Enablement. In this role, you will design and implement solutions that enhance our operational efficiency and performance. Your expertise will contribute to building scalable systems that let our teams harness the full potential of AI technologies. You will collaborate with cross-functional teams to identify challenges and deliver solutions that drive our mission forward.

Mar 28, 2026
OpenAI
Full-time|Hybrid|San Francisco

About Our Team
At OpenAI, our AI Deployment Manager team collaborates with visionary organizations to turn innovative AI technologies into impactful solutions. We specialize in technical enablement and adoption, guiding clients on how to integrate OpenAI's products into their workflows, teams, and processes. Our initiatives encompass structured workshops, technical enablement sessions, and adoption programs designed to move customers from initial interactions to confident, scalable usage. We work closely with Sales, AI Success Engineers, Solutions Engineering, and Product teams to ensure our clients are empowered and positioned for long-term success as our platform evolves. Our clientele ranges from rapidly growing digital enterprises to the largest global corporations, government agencies, and educational institutions. Each engagement is a chance to maximize the benefits of AI in the way individuals work, create, and innovate; this role is pivotal to achieving that goal.

About the Role
The AI Deployment Manager is a specialized post-sales enablement role focused on delivering transformative enablement and adoption services across OpenAI's product offerings. The position is responsible for crafting and executing technical enablement experiences built on a repeatable adoption framework, fostering sustained engagement, broadening usage, and delivering measurable business outcomes across OpenAI's product suite, including ChatGPT Enterprise, Codex, Agents, and our API. This entails helping customers understand and effectively apply the deployment harnesses, evaluation layers, and operational controls essential for reliable use.

The role combines strong technical fluency, instructional design expertise, and customer advisory skills. You will facilitate live trainings, workshops, and adoption interventions for audiences ranging from technical builders to executive leaders, guiding customers not only on the capabilities of OpenAI's products but on practical usage in real-world applications. Success in this role is measured by enhanced customer confidence, increased product adoption, successful launches of new features, and clients translating technical functionality into tangible business results.

This position is located at our San Francisco headquarters under a hybrid work model of three days in the office per week, and we offer relocation assistance to new team members.

In this role, you will:
- Oversee the technical enablement of OpenAI products, including ChatGPT Enterprise, Codex, Agents, and API capabilities, defining effective enablement patterns to support adoption...

Jan 29, 2026
OpenAI
Full-time|Hybrid|San Francisco

About Our Team
The Technical Success team at OpenAI ensures the seamless and secure deployment of ChatGPT and OpenAI API applications for both developers and enterprises. Acting as trusted advisors, we empower our customers to maximize the value they derive from our models and products. Within this team, the AI Deployment Engineering group supports high-impact, strategic partners. We engage collaboratively to overcome technical challenges, provide in-depth expertise, and co-create ecosystem experiences that showcase the benefits of partnering with OpenAI.

About the Role
We are seeking an AI Deployment Engineer to help shape the future of OpenAI's partner ecosystem within ChatGPT across applications and commerce. As the primary technical resource for our strategic partners, you will lead solution design and co-develop innovative experiences that fully leverage our platform's capabilities. You will provide expert technical guidance while collaborating with our Partnerships, Product, Engineering, and Go-To-Market teams, reporting to the Technical Success department.

This position is based in our San Francisco or New York City office under a hybrid work model of three days in office each week, with relocation assistance available for new hires.

Key Responsibilities:
- Deliver outstanding partner experiences by providing technical expertise, scoping use cases, and aiding the development of applications within ChatGPT and commerce checkout processes alongside technical stakeholders and strategic partners.
- Collaborate with our Partnerships team to design and support ecosystem integrations, ensuring technical feasibility and impactful launches.
- Advise partners on best practices for using the OpenAI API and ChatGPT to build secure, scalable, and unique experiences.
- Act as the initial point of contact for questions about design, security, compliance, and architecture, escalating complex requirements to internal experts when necessary.
- Develop and maintain documentation, implementation guides, and FAQs addressing common partner requirements and technical hurdles.
- Collect partner feedback to represent the ecosystem's voice within internal teams, influencing product roadmaps and future partner initiatives.

Oct 14, 2025
liquid-ai
Full-time|On-site|San Francisco

Role overview
As a Product Engineer at liquid-ai, you will shape the company's internal data and agent platform. The work involves designing, building, and launching solutions that reinforce the product lineup, with regular collaboration across multiple teams.

What you will do
- Partner with colleagues from various disciplines to define and deliver technical solutions.
- Develop and maintain systems that support internal data and agent platform requirements.
- Facilitate smooth integration between platforms and optimize system performance.

Location: This role is based in San Francisco.

Apr 21, 2026
Parallel
Full-time|On-site|San Francisco or Palo Alto

About Us
At Parallel, we redefine web infrastructure to empower businesses across sectors including sales, marketing, insurance, and coding. Our products enable the creation of top-tier AI agents, providing them with flexible, robust programmatic access to the web. Having secured $130 million in funding from investors including Kleiner Perkins, Index Ventures, Spark Capital, Khosla Ventures, First Round, and Terrain, we are on a mission to build the web for AI applications. We are assembling an elite team of engineers, designers, marketers, sellers, researchers, and operational experts to realize this vision.

Job Role: As a member of our research team, your primary objective will be to explore methods to train and scale a model capable of serving a comprehensive web index.

Your Profile: You possess a deep understanding of modern AI models and training methodologies. You enjoy discussing the convergence of search algorithms, recommendation systems, and transformer models. You are passionate about ensuring your research translates into practical applications that reach millions.

Life at Parallel
Our team is fully in-person, collaborating between our headquarters in Palo Alto and our San Francisco office. We maintain a flat organizational structure that values talent and is committed to tackling both technical and creative challenges. We look for individuals equally passionate about leveraging science, creativity, and consistency to address significant, complex challenges that lead to substantial outcomes.

Our core values include:
- Own Customer Impact: We take responsibility for delivering real-world results for our clients.
- Obsess Over Craft: We perfect every detail, because quality compounds over time.
- Accelerate Change: We prioritize swift shipping, rapid adaptation, and the implementation of pioneering ideas.
- Create Win-Wins: We strive to turn trade-offs into advantages.
- Make High-Conviction Bets: We embrace experimentation, learning from failures to achieve extraordinary successes.

Compensation & Benefits
- Competitive salary
- Generous equity options
- Visa sponsorship available
- 401K retirement plans
- Daily lunches & office snacks
- Dinner provided at the office
- Unlimited vacation policy
- Caltrain pass reimbursement

Jun 13, 2025
Liquid AI
Full-time|On-site|San Francisco

About Liquid AI
Originating from MIT CSAIL, Liquid AI builds cutting-edge, general-purpose AI systems designed for optimal efficiency across a variety of platforms, from data center accelerators to edge devices. Our solutions prioritize low latency, minimal memory requirements, privacy, and reliability. We collaborate with industry leaders in consumer electronics, automotive, life sciences, and financial services, and as we expand rapidly, we are looking for exceptional talent to join our journey.

The Opportunity
Join us at the crossroads of advanced foundation models and the open-source community. In this pivotal role, you will oversee developer relations and community engagement, influencing how our models are adopted, documented, and integrated throughout the AI ecosystem. The position balances impactful community work with essential technical contributions, giving you the chance to shape how our models are represented and used by developers worldwide. If you are passionate about excellent documentation, enhancing developer experience, and democratizing access to powerful AI models, this is your chance to influence the future of open-source AI.

What We're Looking For
We seek a proactive individual who:
- Takes ownership: manages open-source partnerships from initial outreach to ongoing collaboration.
- Thinks community-first: integrates documentation, tutorials, integrations, and support into a seamless developer experience.
- Is pragmatic: focuses on developer adoption and partner success rather than superficial metrics.
- Communicates clearly: bridges the gap between technical teams and external partners, representing Liquid's interests while fostering genuine relationships.

The Work
- Serve as the primary liaison for open-source partners.
- Support model releases with both marketing and technical content.
- Create tutorials, articles, and guides on training and using our foundation models.
- Improve and maintain LFM documentation for clarity and thoroughness.
- Collect community feedback and relay insights to internal teams.

Feb 4, 2026
Reka
Full-time|Remote|US, UK, Remote

Join Reka as a Member of the Technical Staff in Applied AI!
- Leverage cutting-edge AI models to tackle intricate real-world challenges.
- Collaborate closely with researchers and fellow team members to explore the latest developments in AI and ML.
- Partner with our customers to seamlessly integrate our models into their existing technology frameworks.
- Drive business success with a strong sense of product ownership and accountability.
- Be part of a pioneering team in a rapidly growing environment, taking on diverse roles.

Jan 21, 2026
Cooper AI
Full-time|On-site|San Francisco, CA

Cooper AI develops artificial intelligence tools for the commercial insurance industry. The platform integrates with existing systems to automate manual tasks, allowing professionals to focus on clients and business growth. With backing from Lightspeed, General Catalyst, and Valor, Cooper AI spun out of Nirvana, a Series D insurtech company valued at $1.5 billion. The company's mission centers on modernizing the global insurance market by automating workflows and delivering real-time improvements for carriers and brokers.

Role overview
The AI Deployment Strategist shapes Cooper AI's market presence and delivers value to customers throughout the deployment lifecycle. This role adapts the platform to fit each agency's processes, partners with engineers to connect systems, and trains users for sustained adoption. The strategist acts as an advisor to insurance leaders and works closely with product and engineering teams. The position's impact extends beyond software setup, helping insurance organizations transform their operations.

What you will do
- Lead deployments from start to finish: manage multiple customer projects, from initial scoping and configuring Cooper's features to setting up integrations and supporting adoption among producers, service representatives, and account managers.
- Drive meaningful activation: transform agency operations by customizing the platform, empowering internal champions to train others, and tracking usage to ensure Cooper becomes part of daily workflows.
- Be a trusted advisor: build relationships with agency owners and user champions, map AI capabilities to real-world use cases, share data-driven insights, and help shape product and adoption strategies.

Requirements
- 4+ years' experience in management consulting or a related area, with strong knowledge of the insurance industry.
- Technical skills in software integration and managing deployment projects.
- Clear communication and the ability to collaborate with a range of stakeholders.

Location: San Francisco, CA

Apr 27, 2026
David AI
Full-time|On-site|San Francisco

About David AI
David AI is a pioneering audio data research firm dedicated to transforming how artificial intelligence utilizes audio data. Our R&D-driven methodology allows us to create high-quality datasets that power the industry's best AI models. We aim to integrate AI into everyday life, leveraging audio as a vital component due to its versatility and accessibility. As audio AI advances, the demand for superior training data is paramount, and this is where David AI excels.

Founded in 2024 by a team of experienced professionals from Scale AI, we have quickly gained the trust of major clients, including leading FAANG companies and AI labs. Our recent funding round raised $50M from top-tier investors such as Meritech, NVIDIA, and Amplify Partners, underscoring our rapid growth and potential.

Our team embodies sharp intellect, humility, and ambition. We invite talented individuals in research, engineering, product, and operations to join our journey in advancing audio AI.

About Our Forward Deployed Team
Our Forward Deployed team collaborates closely with clients on their most essential projects, transforming complexity into clarity and strategy into actionable steps. By embedding ourselves within customer organizations and maintaining deep connections with our product, research, and engineering teams, we ensure that each interaction generates significant value and contributes to the ongoing evolution of David AI.

Your Role
We are seeking a Deployment Strategist to act as a trusted advisor for our clients on their key audio AI initiatives. In this role, you will shape strategic directions and provide guidance on audio training approaches. Collaborating closely with senior stakeholders, you will identify opportunities, set priorities, and design solutions that ensure David AI consistently delivers exceptional results.

Key Responsibilities
- Lead the customer journey: Oversee the process from ideation to launch.
- Build strong relationships: Cultivate consultative partnerships across various business units and levels within client organizations.
- Translate objectives: Collaborate with research and engineering teams to develop actionable strategies that drive long-term success.
- Identify opportunities: Discover new avenues within accounts where David AI's insights can create added value.

Aug 17, 2025
OpenAI
Full-time|Hybrid|San Francisco

About Our Team
At OpenAI, our Forward Deployed Engineering (FDE) team collaborates with clients to transform cutting-edge research into robust production systems. We sit at the confluence of customer engagement and fundamental platform enhancement.

About the Position
As the Technical Deployment Lead (TDL) specializing in Life Sciences, you'll be instrumental in shaping how OpenAI delivers complex systems to our clients. You will oversee the development, deployment, and integration of these systems, ensuring alignment with business objectives. Your role includes translating strategic goals into actionable technical plans, managing the day-to-day activities of FDEs, researchers, and customer engineers, and collaborating with client teams to ensure that our solutions meet their unique needs.

Your primary focus will be the Life Sciences sector, working alongside pharmaceutical firms, clinical research organizations, and data service providers to implement innovative AI solutions for drug discovery, development, and operational processes. You will own the entire delivery process: engaging with Life Sciences clients to delineate workflows and success metrics, ensuring timely component shipments, and leading readiness and change management efforts for effective user adoption. You'll monitor progress, manage interdependencies, make sequencing decisions, and drive the development of prototypes from concept to MVP and beyond. Additionally, you'll convey field insights to our Product and Research teams, helping to shape our strategic roadmap and priorities.

Your success will be gauged primarily by the impact of your deployments: delivering tangible value that aligns with client objectives, fostering widespread adoption, and becoming integral to client operations. Other success indicators include delivery reliability, operational efficiency, quality of judgment under pressure, and the influence of your work on product development.

This position offers a high level of trust and autonomy. To excel, you'll need strong technical project management skills, a deep sense of ownership over outcomes, and the ability to understand client workflows while collaborating with their teams to tackle intricate engineering challenges swiftly.

Your Responsibilities Include:
- Leading the technical delivery strategy for multiple interconnected workstreams. Convert business goals into a comprehensive roadmap with milestones, dependencies, and acceptance criteria.
- Overseeing daily engineering operations. Monitor and drive delivery across OpenAI FDE and client teams.

Mar 20, 2026
