Go-to-Market Champion for GPU & AI Infrastructure
Experience Level
Mid to Senior
Impossible Cloud
Group: Impossible Cloud / Impossible Cloud Network (ICN)
Focus: Integrating Enterprise Storage with Decentralized GPU Orchestration

Our Mission
At Impossible Cloud, we are transforming enterprise storage through our patented decentralized object storage technology, delivering a high-performance, cost-effective infrastructure. We aim to expand this foundation by…
sfcompute
At sfcompute, we are pioneering a transformative approach to de-risk the largest infrastructure build-out in history.

As organizations invest in GPU clusters, the datacenters that host them, and the supporting infrastructure, they require 'offtake' agreements—pre-signed contracts to lease these clusters before they are constructed. Financing GPU clusters involves inherent risk: margins are narrow and volumes are substantial. Lenders are hesitant to bear the risk that cluster developers may default on loans, while developers are equally wary of failing to sell their clusters. This dynamic pushes risk onto customers through fixed-price, long-term contracts.

Without adequate risk mitigation for customers, market instability can arise. Unlike traditional SaaS, where application-layer companies sign multi-year contracts for compute and inference services while offering monthly subscriptions to their own clients, the stakes in this arena are significantly higher. A miscalculation in purchasing can have dire consequences: even a small shift in revenue growth may be the difference between profitability and insolvency. Imagine a scenario where companies could exit their contracts by selling them back to the market—this is the solution we offer.

In an era where AI is rapidly scaling, compute power will only be accessible to those prepared to shoulder the associated risks. A small startup based in a San Francisco Victorian cannot realistically enter into a five-year obligation for $100 million supercomputers. However, it might acquire a month's worth of liquidity from the secondary market.

Our mission is to create a liquid market for GPU offtake.

The Role
We are seeking a Go-To-Market (GTM) Generalist to serve as the bridge between our sales, growth, and customer onboarding teams.
Collaborating closely with our Chief Revenue Officer, you will manage complex enterprise deals and facilitate onboarding for government and national clients, while also providing rapid value to small AI teams. Your role will involve partnering with both our technical team and the supply side to strategize on capacity growth. While deep technical expertise is not mandatory, a background in working with or selling technical products is essential. This position demands outstanding organizational skills, sound judgment, and customer-focused insights in high-pressure environments. Experience with marketplaces and two-sided platforms will be an added advantage.

What You’ll Do
Execute Deals with the CRO: Prepare and lead enterprise and federal discovery sessions, formulate ROI narratives, manage evaluation processes, coordinate security and legal requirements, and drive contracts to signature.
Onboard Governments & Enterprises: Develop structured onboarding plans, implement data and security checklists, and enhance the overall customer experience.
About the Role
We invite you to join our innovative team at Wafer as a Technical Intern, where you will have the opportunity to shape the future of inference, GPU optimization, and AI infrastructure. You will collaborate closely with our full-time engineers to define our technical direction and develop the core systems that drive our GPU optimization platform.

Your Responsibilities
Design and implement scalable infrastructure for AI model training and inference.
Make pivotal technical decisions and influence architectural choices.
Lightning AI
About Us
Lightning AI, the innovative force behind PyTorch Lightning, has been revolutionizing the AI landscape since 2019. We provide an all-encompassing platform designed to streamline the development, training, and deployment of AI systems, making the transition from research to production effortless. Following our merger with Voltage Park, a cutting-edge neocloud and AI Factory, we unite developer-centric software with cost-effective, large-scale computing solutions. Our tools are tailored for experimentation, training, and production inference, incorporating built-in security, observability, and control.

We cater to clients ranging from individual researchers to startups and large enterprises, operating globally with offices in key cities including New York, San Francisco, Seattle, and London. We're proud to be backed by prestigious investors like Coatue, Index Ventures, Bain Capital Ventures, and Firstminute.

Our Core Values
Move Fast: We prioritize speed and accuracy, breaking down complex challenges into manageable tasks.
Focus: We aim to achieve one goal at a time, working collaboratively to deliver precise features.
Balance: We believe sustained performance comes from adequate rest and recovery, ensuring a healthy work-life balance.
Craftsmanship: We strive for excellence in every detail, taking pride in our work and its impact.
Minimal: We embrace simplicity to drive innovation, eliminating unnecessary complexity and focusing on what truly matters.

Role Overview
We are on the lookout for a GPU & Compute Infrastructure Engineer to become a vital member of our Infrastructure Engineering team. In this pivotal role, you will manage image systems, diagnostics, and validation across expansive bare-metal computing infrastructure, particularly for GPU-optimized systems.
You will work at the crossroads of hardware, systems, and software, developing automation, enhancing reliability, and facilitating efficient cluster setups for AI/ML and HPC workloads.Your responsibilities will include overseeing our image pipeline, running validation environments and test clusters, and supporting GPU hardware qualification. This role is essential for maintaining the integrity of our infrastructure, ensuring consistency, performance, and reliability.
At Sciforium, we are at the forefront of AI infrastructure, pioneering advanced multimodal AI models and an innovative, high-efficiency serving platform. With substantial backing from AMD and a dedicated team of engineers, we are rapidly expanding our capabilities to support the next generation of frontier AI models and real-time applications.

About the Role
We are looking for a highly skilled Senior HPC & GPU Infrastructure Engineer who will be responsible for ensuring the health, reliability, and performance of our GPU compute cluster. As the primary custodian of our high-density accelerator environment, you will serve as the crucial link between hardware operations, distributed systems, and machine learning workflows. This position encompasses a range of responsibilities, from hands-on Linux systems engineering and GPU driver setup to maintaining the ML software stack (CUDA/ROCm, PyTorch, JAX, vLLM). If you are passionate about optimizing hardware performance, enjoy troubleshooting GPUs at scale, and aspire to create world-class AI infrastructure, we would love to hear from you.

Your Responsibilities
1. System Health & Reliability (SRE)
On-Call Response: Be the primary responder for system outages, GPU failures, node crashes, and other cluster-wide incidents, ensuring rapid issue resolution to minimize downtime.
Cluster Monitoring: Develop and maintain monitoring protocols for GPU health, thermal behavior, PCIe/NVLink topology issues, memory errors, and general system load.
Vendor Liaison: Collaborate with data center personnel, hardware vendors, and on-site technicians for repairs, RMA processing, and physical maintenance of the cluster.

2. Linux & Network Administration
OS Management: Oversee the installation, patching, and maintenance of Linux distributions (Ubuntu / CentOS / RHEL), ensuring consistent configuration, kernel tuning, and automation for large node fleets.
Security & Access Controls: Set up VPNs, iptables/firewalls, SSH hardening, and network routing to secure our computing infrastructure.
Identity & Storage Management: Manage LDAP/FreeIPA/AD for user identity and administer distributed file systems like NFS, GPFS, or Lustre.

3. GPU & ML Stack Engineering
Deployment & Bring-Up: Spearhead the deployment of new GPU nodes, including BIOS configuration and software integration to ensure optimal performance.
About the Role
We're excited to invite you to join wafer as a Spring Intern, where you will play a crucial role in shaping the future of AI infrastructure and GPU optimization. As part of our innovative team, you will work closely with full-time engineers to define our technical strategies and contribute to the development of the essential systems that drive our GPU optimization platform.

Your Responsibilities
Design and implement scalable infrastructure for AI model training and inference tasks.
Guide the team in making technical decisions and architectural choices.

Qualifications We Seek
Essential Technical Skills
GPU Fundamentals: A strong grasp of GPU architectures, CUDA programming, and parallel computing methodologies.
Deep Learning Frameworks: Skilled in PyTorch, TensorFlow, or JAX, especially for GPU-accelerated applications.
Knowledge of LLM/AI: Solid foundation in large language models, including training, fine-tuning, prompting, and evaluation.
Systems Engineering: Proficient in C++, Python, and potentially Rust/Go for developing tools around CUDA.

Preferred Background
Publications or contributions to open-source projects related to GPU computing, inference, or ML/AI are advantageous.
Hands-on experience in conducting large-scale experiments, benchmarking, and performance optimization.
Fluidstack
About Fluidstack
Fluidstack is at the forefront of building groundbreaking infrastructure designed for the future of intelligence. We collaborate with premier AI research labs, government entities, and leading enterprises like Mistral, Poolside, Black Forest Labs, and Meta to deliver compute solutions at unparalleled speeds. Our mission is to expedite the realization of Artificial General Intelligence (AGI). Our team is dedicated, passionate, and driven to create world-class infrastructure, treating our clients' success as our own. If you possess a strong sense of purpose, a dedication to excellence, and the willingness to work diligently to transform the future of intelligence, we welcome you to join us in shaping what lies ahead.

About the Role
We are seeking a Product Manager to spearhead New Product Introduction (NPI) for our GPU infrastructure. You will collaborate with our datacenter, infrastructure, and networking teams to launch new GPU SKUs and compute solutions. Your role will involve defining the frameworks through which Fluidstack assesses, qualifies, and brings new GPU generations to market—from NVIDIA Blackwell and Rubin to AMD MI300X and future accelerators. This highly cross-functional position demands strong technical acumen, adept vendor relationship management, and a clear understanding of how hardware capabilities align with customer workload requirements. In doing so, you will help ensure that Fluidstack remains a leader in providing optimal compute options tailored for training, inference, and specialized AI workloads.

Key Responsibilities
Manage the NPI roadmap for GPU SKUs, including evaluation criteria, qualification timelines, and market strategies for new hardware generations.
Collaborate with datacenter teams to establish requirements for power delivery (HVDC/LVDC), cooling systems (liquid vs. air), rack architecture, and the physical infrastructure necessary for next-gen GPUs.
Engage with infrastructure engineers to validate hardware performance across essential metrics: training throughput (MFU), inference latency (TTFT, TBT), memory bandwidth, and interconnect topology (NVLink, InfiniBand).
Foster vendor relationships with NVIDIA, AMD, and emerging XPU providers—conducting in-depth technical discussions, negotiating supply agreements, and overseeing early access programs.
Define product specifications for system configurations: single-GPU instances, multi-GPU nodes, full rack deployments, and megacluster architectures.
Analyze customer workload profiles to identify the optimal GPU mix: H100 for large model training, L40S for inference, B200 for frontier research, and MI300X for cost-sensitive workloads.
Develop business cases for new SKU introductions.
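As context for the first of those metrics: Model FLOPs Utilization (MFU) for dense-transformer training is commonly estimated from the ~6 FLOPs-per-parameter-per-token rule of thumb. The sketch below uses that approximation; all numbers in the example are hypothetical, not Fluidstack figures.

```python
def training_mfu(params: float, tokens_per_sec: float,
                 num_gpus: int, peak_flops_per_gpu: float) -> float:
    """Estimate Model FLOPs Utilization for dense-transformer training.

    Uses the common ~6 * params FLOPs-per-token approximation
    (forward + backward pass) and divides by aggregate peak FLOPs.
    """
    achieved_flops = 6.0 * params * tokens_per_sec
    peak_flops = num_gpus * peak_flops_per_gpu
    return achieved_flops / peak_flops

# Hypothetical example: a 70B-parameter model at 150k tokens/s
# on 512 GPUs rated at 989 TFLOP/s (dense BF16) each.
mfu = training_mfu(70e9, 150e3, 512, 989e12)
print(f"MFU: {mfu:.1%}")  # → MFU: 12.4%
```

Qualification work then largely amounts to seeing how close a new SKU gets achieved throughput to its rated peak under realistic parallelism and interconnect constraints.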
About Our Team
Join the Fleet team at OpenAI, where we empower groundbreaking research and product innovation through our advanced computing infrastructure. We manage extensive systems across data centers, GPUs, and networking, ensuring optimal performance, high availability, and efficiency. Our work is crucial in enabling OpenAI’s models to function seamlessly at scale, supporting both our internal research endeavors and external products like ChatGPT. We are committed to prioritizing safety, reliability, and the ethical deployment of AI technology.

About the Role
As a Software Engineer on the Fleet High Performance Computing (HPC) team, you will play a vital role in ensuring the reliability and uptime of OpenAI’s compute fleet. Minimizing hardware failures is essential for smooth research training progress and uninterrupted services, as even minor hardware issues can lead to significant setbacks. With the rise of large supercomputers, the stakes in maintaining efficiency and stability have never been higher.

At the cutting edge of technology, we often lead the charge in troubleshooting complex, state-of-the-art systems at scale. This is a unique opportunity for you to engage with groundbreaking technologies and create innovative solutions that enhance the health and efficiency of our supercomputing infrastructure. Our team fosters a culture of autonomy and ownership, enabling skilled engineers to drive meaningful change. In this role, you will focus on comprehensive system investigations and develop automated solutions to enhance our operations. We seek individuals who dive deep into challenges, conduct thorough investigations, and create scalable automation for detection and remediation.

Key Responsibilities:
Develop and maintain automation systems for provisioning and managing server fleets.
Create tools to monitor server health, performance metrics, and lifecycle events.
Collaborate effectively with teams across clusters, networking, and infrastructure.
Work closely with external operators to maintain a high level of service quality.
Identify and resolve performance bottlenecks and inefficiencies in the system.
Continuously enhance automation processes to minimize manual intervention.

You Will Excel in This Role if You Have:
Experience in managing large-scale server environments.
A blend of technical skills in systems programming and infrastructure management.
Strong problem-solving abilities and a methodical approach to troubleshooting.
Familiarity with high-performance computing technologies and tools.
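The detection side of the detection-and-remediation automation described above can be pictured, in very simplified form, as a policy that flags unhealthy nodes from collected health signals. Everything in this sketch is a hypothetical illustration (node names, thresholds, and fields), not OpenAI tooling:

```python
from dataclasses import dataclass

@dataclass
class NodeHealth:
    name: str
    ecc_errors: int    # uncorrectable GPU memory errors since boot
    gpu_temp_c: float  # hottest GPU temperature on the node
    xid_events: int    # GPU driver error-event count

# Hypothetical policy thresholds for flagging a node.
MAX_ECC, MAX_TEMP_C, MAX_XID = 0, 90.0, 0

def triage(fleet: list[NodeHealth]) -> list[str]:
    """Return names of nodes that should be cordoned for repair."""
    unhealthy = []
    for node in fleet:
        if (node.ecc_errors > MAX_ECC
                or node.gpu_temp_c > MAX_TEMP_C
                or node.xid_events > MAX_XID):
            unhealthy.append(node.name)
    return unhealthy

fleet = [
    NodeHealth("gpu-001", ecc_errors=0, gpu_temp_c=74.0, xid_events=0),
    NodeHealth("gpu-002", ecc_errors=2, gpu_temp_c=71.5, xid_events=0),
    NodeHealth("gpu-003", ecc_errors=0, gpu_temp_c=93.2, xid_events=1),
]
print(triage(fleet))  # → ['gpu-002', 'gpu-003']
```

In production, the interesting work sits around a loop like this: collecting trustworthy signals at fleet scale, avoiding false positives that drain capacity, and automating the remediation that follows the flag.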
rockstar-3
Join Rockstar as we revolutionize the AI infrastructure landscape! We are building the foundational AI backbone for the next wave of intelligent products, enabling fast-growing AI startups to design, fine-tune, evaluate, deploy, and maintain specialized models across text, vision, and embeddings. Think of us as the 'AWS for AI models'—a comprehensive backend for fine-tuning, reinforcement learning, inference, and model maintenance. Our clientele consists of Series A–C AI companies developing enterprise-grade products, with a straightforward promise: enhancing your AI systems.

We are seeking a Founding Go-To-Market Lead to spearhead our initial GTM strategy from the ground up, encompassing content creation, event management, partnership development, customer engagement, and product strategy. This position offers a unique opportunity to immerse yourself in a technically advanced product while influencing the commercial roadmap from inception.

Why This Role is Crucial
The current AI infrastructure landscape is disjointed and primarily designed for researchers rather than product teams. Our mission is to change that. As our first GTM hire, you'll have the chance to delineate the market sectors we engage in, formulate compelling narratives that can shift market perceptions, and construct a robust top-of-funnel engine for the company.
Collaborating directly with the founders, you will help shape product direction, pricing strategies, ideal customer profile evolution, and partnerships within the ecosystem (such as with GPU providers, model vendors, and frameworks). This is a builder role ideal for an individual passionate about being at the crossroads of engineering, product development, community engagement, and commercialization.

Key Responsibilities
Execution (0→1 GTM Buildout)
• Drive top-of-funnel growth by identifying, qualifying, and nurturing early customer segments, including Series A–C AI startups, infrastructure-heavy teams, and enterprise ML teams.
• Produce compelling technical content: create in-depth case studies, benchmarks, architecture articles, and thought leadership pieces that resonate with technical founders and ML engineers.
• Enhance event and community presence: participate in, and speak at, AI meetups, infrastructure conferences, and ecosystem events; represent the company with charisma and technical credibility.
• Manage GPU and ecosystem partnerships: work closely with GPU providers, cloud partners, and model ecosystem vendors to foster co-marketing and co-selling initiatives.
• Connect product and customer needs: gain an in-depth understanding of the company's capabilities and assist prospects in aligning their infrastructure with their model strategies (fine-tuning to reinforcement learning to pre-training).

Strategy (Shaping the Company)
• Establish pricing and packaging strategies for model-centric customers across fine-tuning, reinforcement learning workflows, inference, maintenance, evaluation, and platform usage.
• Expand the ideal customer profile over time by identifying adjacent customer segments and guiding the upmarket movement.
• Influence the product roadmap by helping prioritize both horizontal (modalities, pipelines) and vertical (evaluations, agents, monitoring) expansions.
• Craft a compelling category narrative: assist in defining the market position and storytelling that resonates with customers and partners.
About Us
At Parallel, we are pioneering the future of web infrastructure. Our innovative products empower leading organizations in sectors such as sales, marketing, insurance, and software development to create exceptional AI agents, offering unparalleled programmatic access to the web. With robust funding of $130 million from top-tier investors including Kleiner Perkins, Index Ventures, Spark Capital, Khosla Ventures, First Round, and Terrain, we are on a mission to redefine the web for AI applications. We are assembling a diverse and talented team of engineers, designers, marketers, sales professionals, researchers, and operational experts to help us achieve our ambitious goals.

Role Overview
As a Go-To-Market Strategist, you will take ownership of the entire customer journey. You'll navigate the full sales cycle for some of the most ambitious AI companies globally, ranging from innovative startups to established multi-billion-dollar enterprises. Your responsibilities will encompass prospecting, conducting discovery sessions, managing proofs of concept (PoCs), negotiating contracts, closing deals, and overseeing account management. You'll have the unique opportunity to not only sell our product but also influence its development. Additionally, you will contribute to the growth of our Go-To-Market organization by developing playbooks, pricing strategies, automation processes, and metrics for success.

Your Profile
You are a first-principles thinker with relentless resourcefulness and technical credibility, particularly in discussions surrounding APIs, systems, and large language models (LLMs). You possess a competitive drive to lead the market and have a background that could include early Go-To-Market roles, technical founding teams, solutions or customer engineering, venture capital, or other unconventional paths.

Life at Parallel
Our team operates fully in-person across our headquarters in Palo Alto and our office in San Francisco.
We pride ourselves on being a flat, talent-rich organization focused on tackling both technical and creative challenges. We are looking for passionate individuals who share our commitment to applying science, creativity, and consistency to complex problems with significant outcomes.

Our core values:
Own Customer Impact: We take responsibility for delivering real-world outcomes for our customers.
Obsess Over Craft: We strive for perfection in every detail because quality compounds.
Accelerate Change: We prioritize fast shipping and quick adaptation, moving innovative ideas into production swiftly.
Create Win-Wins: We creatively transform trade-offs into opportunities.
Make High-Conviction Bets: We embrace experimentation, learn from failures, and succeed disproportionately.

Compensation & Benefits
Competitive salary
Generous equity
Visa sponsorship available
HumanSignal
The future of artificial intelligence—whether it involves training, evaluation, classical machine learning, or agentic workflows—begins with high-quality data. At HumanSignal, we're developing a cutting-edge platform that drives the creation, curation, and evaluation of this vital data. Our tools are utilized by top AI teams to ensure models are anchored in real-world signals rather than mere noise.

Our open-source product, Label Studio, has established itself as the industry standard for labeling and evaluating data across various modalities, including text, images, time series, and agent-environment interactions. With over 250,000 users and hundreds of millions of labeled samples, it stands as the most widely adopted open-source solution for teams engaged in building AI systems. Label Studio Enterprise enhances our offering with the security, collaboration, and scalability features essential for supporting mission-critical AI pipelines—facilitating everything from model training datasets to evaluation test sets and continuous feedback loops. We were pioneers before foundation models became mainstream, and we're intensifying our efforts as AI continues to transform industries. If you're passionate about empowering leading AI teams to develop smarter and more accurate systems, we want to hear from you.

About the Role
We are in search of a driven AI Engineer to revolutionize our go-to-market operations.
While we prefer candidates in San Francisco, Austin, or Lisbon for collaboration, this position is open to remote or hybrid arrangements. As the inaugural GTM engineer, you will design and implement AI-driven systems that support our entire go-to-market strategy, taking ownership of the technology stack, collaborating with stakeholders to define workflows, deploying AI agents and applications, and continually enhancing how the company attracts, engages, and retains customers. This is a highly hands-on position suited for someone who thrives on developing systems, rapidly experimenting, and tackling complex real-world challenges with software.

Why This Role Matters
This position acts as a force multiplier for the entire organization. You will establish the infrastructure enabling the go-to-market team to accelerate operations, enhance learning, and scale effectively without proportional increases in headcount.
Andromeda Cluster
Location: North America Remote / San Francisco · Full-Time

About Andromeda
Founded by Nat Friedman and Daniel Gross, Andromeda Cluster provides early-stage startups with access to scaled AI infrastructure once exclusive to hyperscalers. Our journey began with a single managed cluster that rapidly gained demand, leading us to develop a robust system, network, and orchestration layer to democratize AI infrastructure. Today, we partner with leading AI labs, data centers, and cloud providers to efficiently deliver compute resources wherever needed. Our platform expertly routes training and inference jobs across global supply chains, promoting flexibility and efficiency in one of the fastest-growing markets in the world. Our vision is to create a liquidity layer for global AI compute, and we are on the lookout for bright minds in AI infrastructure, research, and engineering to join our expanding team.

The Opportunity
We are seeking a dedicated Global GPU Commodity Manager to enhance the supply and demand matching on our platform. This role is an Individual Contributor position reporting to the Head of Infrastructure.
The Infrastructure team is pivotal to our operations, responsible for acquiring and facilitating compute resources across the organization while collaborating closely with compute providers, sales, and technical teams to align supply with demand. With a solid foundation established with our providers, we are now scaling to expand our network and liquidity, broaden our service offerings, and accelerate our growth trajectory.

What You'll Do
Match incoming leads from the sales team to internal and external market capacity.
Maximize utilization of compute resources.
Source and onboard new compute suppliers globally.
Identify capacity based on customer requirements and market trends.
Resolve customer and supplier challenges in a fast-paced environment.
Analyze technical and commercial differences between suppliers to optimize our capacity funnel.
Develop a proactive compute strategy driven by market intelligence.
Negotiate costs with suppliers and other vendors.
Create and implement processes around capacity planning.
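At its simplest, the lead-to-capacity matching in the list above is a capacity-allocation problem. The toy sketch below greedily places the largest requests first and picks the tightest-fitting supplier to limit fragmentation; supplier names and numbers are hypothetical, and real matching would also weigh price, location, interconnect, and contract terms:

```python
def match_leads(leads: list[tuple[str, int]],
                capacity: dict[str, int]) -> dict[str, str]:
    """Greedily assign each lead (name, gpus_needed) to a supplier.

    Largest leads are placed first; among suppliers with enough free
    capacity, the tightest fit is chosen to reduce fragmentation.
    """
    assignments: dict[str, str] = {}
    free = dict(capacity)  # remaining GPUs per supplier
    for lead, needed in sorted(leads, key=lambda x: -x[1]):
        candidates = [s for s, c in free.items() if c >= needed]
        if not candidates:
            continue  # unmatched lead: escalate to supply sourcing
        best = min(candidates, key=lambda s: free[s])
        free[best] -= needed
        assignments[lead] = best
    return assignments

leads = [("lab-a", 512), ("startup-b", 64), ("lab-c", 1024)]
capacity = {"supplier-x": 1024, "supplier-y": 640}
print(match_leads(leads, capacity))
# → {'lab-c': 'supplier-x', 'lab-a': 'supplier-y', 'startup-b': 'supplier-y'}
```

Leads that fall through the greedy pass are exactly the signal for the sourcing and capacity-planning duties listed above.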
Hyperbolic Labs
Join Our Mission
At Hyperbolic Labs, we are dedicated to democratizing artificial intelligence by eliminating barriers to computing power through our Open-Access AI Cloud. We aggregate global computing resources to provide an innovative GPU marketplace and AI inference service, making AI affordable and accessible for everyone. As pioneers at the crossroads of AI and open-source technology, we envision a future where AI innovation is driven by imagination, not resource limitations. We invite forward-thinking individuals who share our vision of making AI universally accessible, secure, and cost-effective to join us in crafting a platform that empowers innovators to realize their groundbreaking AI projects. As we gear up for expansion following our Series A funding, our team, led by co-founders with PhDs in AI, Mathematics, and Computer Science, is set to transform the landscape of computing.

The Role
We are on the lookout for a Senior Infrastructure Engineer to drive the development and scaling of Hyperbolic's GPU Cloud Marketplace. In this pivotal role, you will create a multi-tenancy provisioning and virtualization solution that transforms raw GPUs from diverse global suppliers into a programmable, orchestrated resource pool serving thousands of AI developers and researchers. You will work at the forefront of cloud infrastructure, building the core orchestration layer that allows our platform to deliver cost savings of up to 75% compared to traditional cloud providers.
At Gong, we leverage the power of artificial intelligence to fundamentally change the way revenue teams achieve success. Our Gong Revenue AI Operating System seamlessly integrates data, insights, and workflows into a single, reliable system that observes, guides, and collaborates with the most successful revenue teams globally. With the Gong Revenue Graph, AI-driven intelligence, specialized agents, and trusted applications, we empower over 5,000 companies worldwide to gain deep insights into their teams and customers, automate essential sales workflows, and close more deals with reduced effort. For further details, visit www.gong.io.

Joining Gong means becoming part of a forward-thinking company that values innovative products, ambitious objectives, and passionate individuals. We are at the forefront of shaping the future of revenue intelligence, and we are seeking team members who are eager to build what lies ahead. You will collaborate with a team that dares to dream big, moves rapidly, and is deeply committed to both the craft and each other. Here, transparency and trust are at the core of our operations, allowing every individual to make a tangible impact. If you aspire to grow, challenge yourself, and engage in work that truly matters, Gong is the perfect place to do the best work of your career.

We are currently seeking a Director of Go-To-Market (GTM) Systems & Infrastructure to design and oversee the systems, automation, and data infrastructure that drive our GTM organization. This pivotal role sits at the crossroads of AI, automation, internal tools, and data orchestration, aimed at accelerating revenue by transforming the operational framework of our GTM teams. You will architect and scale the robust technical foundation that interlinks our GTM systems, operational workflows, and revenue data, creating intelligent, automated processes across Sales, SDR, Customer Success, Marketing, and Partnerships.
Prime Intellect
Join Our Mission to Build Open Superintelligence Infrastructure
At Prime Intellect, we are pioneering the development of an open superintelligence stack that encompasses cutting-edge agentic models and the infrastructure that empowers anyone to create, train, and deploy these advanced AI systems. Our innovative approach aggregates and orchestrates global computational resources into a cohesive control plane, complemented by a comprehensive reinforcement learning (RL) post-training toolkit that includes environments, secure sandboxes, verifiable evaluations, and our asynchronous RL trainer. We provide researchers, startups, and enterprises with the capabilities to execute end-to-end reinforcement learning at unparalleled scale, adapting models to real-world tools, workflows, and deployment scenarios.

As a Solutions Architect for GPU Infrastructure, you will be the technical authority responsible for translating customer needs into robust, production-ready systems designed to train the world’s most sophisticated AI models. With a recent funding round raising $15 million (totaling $20 million) led by Founders Fund, alongside contributions from Menlo Ventures and illustrious angels such as Andrej Karpathy (Tesla, OpenAI), Tri Dao (Together AI), Dylan Patel (SemiAnalysis), Clem Delangue (Huggingface), and Emad Mostaque (Stability AI), we are poised for significant growth and innovation.

Key Technical Responsibilities
This role requires a blend of deep technical knowledge and hands-on implementation skills.
Your contributions will be crucial in:

Customer Architecture & Design
Collaborating with clients to comprehend workload specifications and architect optimal GPU cluster solutions.
Drafting technical proposals and conducting capacity planning for clusters ranging from 100 to over 10,000 GPUs.
Formulating deployment strategies for large language model (LLM) training, inference, and high-performance computing (HPC) tasks.
Delivering architectural recommendations to both technical teams and executive stakeholders.

Infrastructure Deployment & Optimization
Implementing and configuring orchestration frameworks such as SLURM and Kubernetes for distributed workloads.
Establishing high-performance networking through InfiniBand, RoCE, and NVLink interconnects.
Enhancing GPU utilization, memory management, and inter-node communication.
Setting up parallel file systems (Lustre, BeeGFS, GPFS) to maximize I/O efficiency.
Tuning system performance, from kernel parameters to CUDA configurations.

Production Operations & Support
Ensuring the reliability and performance of GPU infrastructure through continuous monitoring and support.
Collaborating with cross-functional teams to troubleshoot and optimize operational workflows.
Documenting processes and creating training materials for team members and clients.
Join the Innovative Team at Liquid AI

Founded as a spin-off from MIT's CSAIL, Liquid AI is at the forefront of developing AI systems that operate seamlessly across platforms, from data center accelerators to on-device hardware. Our technology is designed for low latency, efficient memory usage, privacy, and reliability. We collaborate with leading enterprises in sectors such as consumer electronics, automotive, life sciences, and financial services as we rapidly scale our operations. We are seeking talented individuals who are passionate about technology and innovation.

Your Role in Our Team

As a GPU Performance Engineer, your expertise will be critical in pushing our models and workflows beyond the capabilities of standard frameworks. You will design and deploy custom CUDA kernels, conduct hardware-level profiling, and transform research concepts into production code that yields tangible improvements across our pipelines (training, post-training, and inference). Our team values initiative and ownership, and we are looking for a candidate who thrives on complex challenges involving memory hierarchies, tensor cores, and profiling outputs.

While San Francisco and Boston are preferred, we welcome applications from other locations.
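Kernel-level optimization of the kind this role describes usually begins with a roofline check: is the kernel limited by memory bandwidth or by compute? A minimal sketch of that reasoning (the peak-TFLOPs and bandwidth defaults below are illustrative H100-class figures chosen for this example, not Liquid AI internals):

```python
def arithmetic_intensity(flops, bytes_moved):
    """FLOPs performed per byte of DRAM traffic."""
    return flops / bytes_moved

def attainable_tflops(intensity, peak_tflops=989.0, mem_bw_tb_per_s=3.35):
    """Roofline model: performance is capped by whichever limit is hit
    first, peak compute or bandwidth * arithmetic intensity."""
    return min(peak_tflops, mem_bw_tb_per_s * intensity)

# Elementwise fp16 add: 1 FLOP per element, 6 bytes moved (2 reads + 1 write),
# so attainable throughput is well under 1 TFLOP/s: badly memory-bound.
print(attainable_tflops(arithmetic_intensity(1, 6)))

# Large matmuls have intensity in the hundreds and can approach peak compute.
print(attainable_tflops(500.0))
```

This is why kernel fusion matters: fusing elementwise ops into an adjacent matmul raises arithmetic intensity by eliminating round trips to DRAM.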
About Sim

At Sim, we are building the foremost platform for AI agents, empowering developers and enterprises to visually create, deploy, and oversee high-performance agents that automate workflows globally. Our mission is straightforward: AI agents will govern the world, and Sim is the primary facilitator of this transformation. Backed by elite investors including Standard Capital, Paul Graham, Perplexity, SV Angel, and Y Combinator, we have embraced open-source principles from our inception.

As a compact team based in San Francisco, we collaborate in person every day. With over 70,000 developers using Sim, your contributions will ship quickly and have a significant impact on real users at scale. We champion ownership, excellence, and ambitious builders committed to shaping the future from the ground up.

About the Role

We are seeking our first Go-To-Market (GTM) Engineer to build the company's go-to-market and Revenue Operations (RevOps) infrastructure from the ground up. This is a pivotal role in which you will not only refine existing workflows but also design and architect new systems. You will collaborate closely with leadership to define, implement, and scale our GTM systems, automations, and data flows.
If you aspire to act as a technical founder for our GTM function, with extensive ownership and rapid deployment, this opportunity is tailored for you.

This position is full-time and in-person at our San Francisco office.

What You'll Do:
- Build from scratch: Develop Sim's GTM infrastructure from the ground up, integrating top-tier tools such as Salesforce, Clay, Gong, enrichment APIs, and more into an automated ecosystem.
- Create internal automations: Design and implement AI agents and bespoke workflows to automate account research, reduce manual tasks, and personalize outreach at scale.
- Engineer data flows: Construct the middleware and webhooks needed to keep data accurate and synchronized across Sim's product, CRM, and internal systems.
- Programmatic GTM: Translate overarching GTM strategies into automated workflows, customized internal tools, and data-driven outbound campaigns.
- Establish and scale: Set the technical standards and shape the foundations of Sim's GTM/Revenue Operations for years to come.

What We're Looking For:
- 3+ years of experience in GTM Engineering, technical RevOps, or comparable technical roles.
- Strong coding skills (Python, JavaScript, or SQL) and experience with APIs.
- A proven history of integrating, automating, and optimizing workflows.
Higgsfield AI
About Higgsfield

Higgsfield AI is a pioneering company in video artificial intelligence, transforming synthetic media across social platforms. We have reached a run-rate of over $200 million in sales within nine months of launch, bolstered by a recent $130 million Series A funding round.

Who We Are Seeking

We are looking for a strategically minded Go-To-Market (GTM) professional who:
- Is customer-focused and adept at leading revenue discussions.
- Possesses a seller's mindset while adhering to strategic GTM practices.
- Is skilled at translating advanced AI functionality into tangible business outcomes.
- Thrives in dynamic, uncertain startup settings.
- Quickly establishes trust with founders, executives, and operational teams.

Your Responsibilities
- Lead Commercial Initiatives: Identify, engage, and advance B2B customer relationships from initial contact to closure.
- Conduct Customer Discovery: Collaborate directly with prospects to uncover impactful use cases and integrate Higgsfield's solutions into their operations.
- Shape Deals: Help structure pilots, proofs of concept, and initial commercial agreements that pave the way for sustained revenue growth.
- Engage with Live Products: Demonstrate, prototype, and iterate with customers in real time using our products.
- Foster Account Growth: Support expansion and upselling by identifying new use cases and revenue opportunities within existing accounts.
- Craft Value Narratives: Develop clear, persuasive narratives that highlight ROI, differentiation, and the strategic advantages of partnering with Higgsfield.
- Feedback Loop: Gather insights from customer interactions to inform Product, Engineering, and Leadership, shaping our roadmap and market positioning.

Essential Qualifications

*We encourage applications from individuals who may not meet every requirement but excel in specific areas and are eager to develop further.*
- Proven Revenue Track Record: Demonstrated success in achieving and surpassing multi-million-dollar revenue goals and scaling sales from early stage to maturity.
- Experience in B2B SaaS: Proven background in B2B SaaS sales and/or account management.
Join us at Catalog as we revolutionize agentic commerce with data infrastructure that enables AI to understand and analyze the real economy. This unique role offers you the chance to lead our go-to-market (GTM) strategy while closing our initial set of deals.

As our first GTM hire, you will own every aspect of pipeline generation and deal execution, establishing our footprint within frontier AI labs, innovative startups, and retailers. This position goes beyond a conventional sales role: you will build our GTM function from the ground up while driving sales.

Collaborating closely with our founders in San Francisco, you will define our Ideal Customer Profile (ICP), craft a comprehensive sales playbook, design pricing strategies, and establish scalable, repeatable processes. Expect a role characterized by high autonomy, significant impact, and complete ownership.

Pipeline Generation
- Develop and execute Account-Based Marketing (ABM) campaigns targeting retailers, agentic commerce applications, and commerce infrastructure providers.
- Experiment with messaging, channels, and outbound sequences to determine what works.
- Own conference strategy, including outreach before events, engagement during them, and follow-up conversions afterward.

Deal Execution
- Manage the entire sales cycle: prospecting, qualifying, demonstrating, negotiating, and closing.
- Articulate the commercial value of our API/data infrastructure to technical stakeholders.
- Design pilot structures and pricing models for new use cases.
- Assess customer ROI and build compelling business cases for platform adoption.

GTM Strategy
- Validate target verticals and personas through direct engagement with customers.
- Establish our sales processes, qualification criteria, and partnership frameworks.
- Collaborate with engineering to guide product development based on customer insights.
- Create the reporting infrastructure and metrics needed to scale our GTM initiatives.

Ideal Candidate Profile
- 1.5-3+ years of sales experience, particularly closing technical or platform deals.
- Demonstrated ability to independently generate pipeline and deliver results.
About Us

At Parallel, we are redefining web infrastructure. Our products empower leading businesses in sectors such as sales, marketing, insurance, and coding to build top-tier AI agents with flexible, robust programmatic access to the web.

Having secured $130 million in funding from investors including Kleiner Perkins, Index Ventures, Spark Capital, Khosla Ventures, First Round, and Terrain, we are dedicated to revolutionizing the web for AI applications. Our mission is supported by a world-class team of engineers, designers, marketers, sales professionals, researchers, and operational experts.

Role Overview: Your focus will be enterprise customers, where you will cultivate trusted relationships with senior stakeholders and convert genuine demand into substantial revenue. Sound judgment and the ability to adapt to diverse customer environments will be key to closing deals.

Your Profile: You are an inquisitive, first-principles thinker with relentless drive and competitiveness. With a background in GTM roles for technical products, you thrive in dynamic environments and do not settle for less than success.
