Experience Level
Entry Level
About the job
About Us
At Roboflow, our mission is to empower developers to make the world programmable through advanced artificial intelligence solutions. We believe that vision is a fundamental way we comprehend our environment, and soon, this understanding will be reflected in the software we utilize.
We are dedicated to creating tools, fostering community, and providing resources that simplify the development and deployment of computer vision models. Over 1 million developers, including teams from half of the Fortune 100, use Roboflow's open-source and hosted machine learning tools. Their applications span many industries: accelerating cancer research through cell counting, improving construction site safety, digitizing floor plans, preserving coral reef ecosystems, guiding drone operations, and much more.
Our compact team is driven by a culture of collaboration, where we believe that our users' success is our success. One of our team members aptly described us as a company of
About Roboflow Inc.
Roboflow is at the forefront of making the world programmable through artificial intelligence. Our innovative tools have empowered countless developers to utilize computer vision in transformative ways across diverse sectors. With a strong backing from leading investors and a commitment to user success, we are dedicated to expanding the potential of AI in real-world applications.
Reflection AI builds open weight models for a wide range of users, including individuals, businesses, and governments. The team brings together talent from organizations such as DeepMind, OpenAI, Google Brain, Meta, Character.AI, and Anthropic, all working to advance open superintelligence. Role overview The AI Compute and Infrastructure Counsel acts as the …
Full-time|Remote|Global Remote / San Francisco, CA
Location: North America Remote / San Francisco · Full-Time

About Andromeda
Founded by Nat Friedman and Daniel Gross, Andromeda Cluster is on a mission to democratize access to advanced AI infrastructure for early-stage startups. Initially starting with a single managed cluster, we rapidly expanded our capabilities to build a robust orchestration layer that enhances global AI infrastructure accessibility.
We collaborate with prominent AI labs, data centers, and cloud providers to ensure compute resources are efficiently delivered where and when they are most required. Our innovative platform optimizes the routing of training and inference jobs globally, enhancing flexibility and operational efficiency in one of the most dynamic markets around.
Our vision is to establish the liquidity layer for global AI compute, and we are continually seeking exceptional talent in AI infrastructure, research, and engineering.

The Opportunity
We are in search of an Infrastructure Manager to enhance the alignment of supply and demand on our platform. This role is an Individual Contributor position, reporting directly to the Head of Infrastructure.
The Infrastructure team forms the backbone of our operations, focusing on acquiring and managing compute resources in collaboration with our compute providers, sales, and technical teams. As we scale our operations, we aim to broaden our network and liquidity while deepening our service offerings and accelerating growth.

What You'll Do
• Align incoming leads from the sales team with both internal and external compute capacities.
• Optimize the utilization of our compute resources.
• Identify and onboard new compute suppliers globally.
• Source capacity tailored to customer requirements and market trends.
• Address customer and supplier challenges in a fast-paced, dynamic environment.
• Analyze technical and commercial differences among suppliers to refine our capacity strategies.
• Formulate a proactive compute strategy driven by market insights.
• Negotiate costs with suppliers and vendors.
• Design and implement capacity planning processes.
Genesis Molecular AI is building the GEMS molecular AI platform, driving advances in foundation model training and industrial screening. Strategic partnerships and a strong compute infrastructure are central to the company’s growth and mission.

Role Overview
The Director of AI Infrastructure Partnerships will lead efforts to secure and manage critical technology alliances, investments, and compute resources. This leader will work closely with top AI organizations, hardware providers, and investors, including firms like a16z and NVIDIA, to support Genesis’s technical and business goals. The role is based in either New York City or the San Francisco Bay Area.

What You Will Do
• Oversee partnerships with NVIDIA and identify new opportunities with leading AI organizations.
• Structure contracts, equity deals, technical collaborations, co-publications, and data-sharing agreements for both public and proprietary experimental and synthetic data.
• Create presentations and written materials that clearly communicate Genesis’s platform vision and technical strengths to partners and investors, and integrate these messages into broader external communications.
• Serve as the business lead and chief negotiator for major cloud computing and AI infrastructure deals. Secure high-performance compute at competitive rates and maintain strong relationships with key partners.
• Monitor the AI compute market, evaluating providers for cost, reliability, and availability to support research and deployment needs.
• Work with ML Engineering to forecast compute requirements for model training, synthetic data generation, fine-tuning, and large-scale inference. Optimize performance and budget across multiple cloud environments and track usage to maximize value.
• Manage the internal budgeting process for compute spend. Translate technical needs into financial forecasts and present capital allocation recommendations to company leadership.
What We’re Looking For
• Significant experience in AI and cloud computing, including managing high-value negotiations and partnerships.
• Strong analytical and strategic skills, with the ability to assess market trends and make informed decisions.
• Excellent communication and interpersonal abilities, comfortable explaining complex topics to a range of audiences.
Team and Platform Focus
The Compute Infrastructure team at OpenAI designs, builds, and maintains the systems that support AI research at scale. This work brings together accelerators, CPUs, networking, storage, data centers, orchestration software, agent infrastructure, developer tools, and observability. The aim is to create a reliable, unified experience for researchers and product teams across the company.
Projects span the full stack: capacity planning, cluster lifecycle management, bare-metal automation, and distributed systems. The team manages Kubernetes scheduling, system optimization, high-performance networking, storage, fleet health, reliability, workload profiling, benchmarking, and improvements to the developer experience. Even small improvements in communication, scheduling, hardware efficiency, or debugging can significantly accelerate research. OpenAI matches engineers to areas within Compute Infrastructure that align with their skills and interests.

Role Overview
This Software Engineer role centers on building and evolving the compute platform that supports OpenAI’s research and products. Candidates may bring expertise in low-level systems, high-performance computing, distributed infrastructure, reliability, CaaS, agent infrastructure, developer platforms, tooling, or infrastructure user experience. The most important qualities are strong analytical skills, the ability to write resilient code, and a collaborative approach that helps colleagues move faster and with more confidence.
What You Will Work On
• Working close to hardware or at the user interaction layer
• Developing CaaS and agent infrastructure
• Managing control and data planes that connect the system
• Bringing new supercomputing capabilities online
• Optimizing training workloads through profiler traces and benchmarks
• Improving NCCL and collective communication
• Analyzing GPUs, NICs, topology, firmware, thermal dynamics, and failure modes
• Designing abstractions to unify diverse clusters into a single platform

Areas of Expertise
No one is expected to cover every area listed. Some engineers focus on system performance, kernel or runtime behavior, large-scale networking protocols, RDMA, NCCL, GPU hardware, benchmarking, scheduling, or hardware reliability. Others improve the platform’s usability through APIs, tools, workflows, and developer experience. The team values strong engineering judgment and a drive to advance the field.
Full-time|$180K/yr - $200K/yr|Remote|New York, New York, United States; Remote; San Francisco, California, United States; Seattle, Washington, United States
About Us
Lightning AI, the innovative force behind PyTorch Lightning, has been revolutionizing the AI landscape since 2019. We provide an all-encompassing platform designed to streamline the development, training, and deployment of AI systems, facilitating the transition from research to production effortlessly.
Following our merger with Voltage Park, a cutting-edge neocloud and AI Factory, we unite developer-centric software with cost-effective, large-scale computing solutions. Our tools are tailored for experimentation, training, and production inference, incorporating built-in security, observability, and control.
We cater to various clients, from individual researchers to startups and large enterprises, operating globally with offices in key cities including New York, San Francisco, Seattle, and London. We're proud to be backed by prestigious investors like Coatue, Index Ventures, Bain Capital Ventures, and Firstminute.

Our Core Values
• Move Fast: We prioritize speed and accuracy, breaking down complex challenges into manageable tasks.
• Focus: We aim to achieve one goal at a time, working collaboratively to deliver precise features.
• Balance: We believe sustained performance comes from adequate rest and recovery, ensuring a healthy work-life balance.
• Craftsmanship: We strive for excellence in every detail, taking pride in our work and its impact.
• Minimal: We embrace simplicity to drive innovation, eliminating unnecessary complexity and focusing on what truly matters.

Role Overview
We are on the lookout for a GPU & Compute Infrastructure Engineer to become a vital member of our Infrastructure Engineering team. In this pivotal role, you will manage image systems, diagnostics, and validation across expansive bare-metal computing infrastructure, particularly for GPU-optimized systems.
You will work at the crossroads of hardware, systems, and software, developing automation, enhancing reliability, and facilitating efficient cluster setups for AI/ML and HPC workloads. Your responsibilities will include overseeing our image pipeline, running validation environments and test clusters, and supporting GPU hardware qualification. This role is essential for maintaining the integrity of our infrastructure, ensuring consistency, performance, and reliability.
Full-time|On-site|San Francisco, CA | New York City, NY | Seattle, WA
Role Overview
Anthropic is hiring a Commercial Counsel focused on Compute & Infrastructure. This role centers on providing legal guidance that supports technology initiatives while maintaining compliance. The position involves advising on legal matters tied to infrastructure and compute, helping the company manage risks and meet its business goals.

Location
San Francisco, CA | New York City, NY | Seattle, WA
About Our Team
The Compute Infrastructure Team at OpenAI manages a robust fleet of GPUs and extensive compute clusters that support the models powering ChatGPT and our API. This team also accommodates the training demands for our upcoming models. We specialize in operating a state-of-the-art GPU fleet, offering a cohesive platform for various OpenAI teams to effortlessly execute production-level Applied AI and research training tasks.
Our mission is to harness the potential of AI responsibly, ensuring its benefits are shared while prioritizing safety over unrestrained growth.

Role Overview
As a Technical Program Manager on our engineer-centric TPM team, you will take charge of the comprehensive delivery of large-scale GPU clusters, collaborating closely with engineers to initiate clusters across external providers and partners. You will manage a diverse portfolio that encompasses hardware, networking, power, and cooling—steering execution, risk management, and establishing clear alignment from operational teams to leadership, all aimed at delivering scalable, production-ready capacity.
This position is located in San Francisco, CA, operating under a hybrid work model requiring three days in the office weekly.
We also provide relocation assistance for new hires.

Key Responsibilities
• Oversee the complete delivery of new Compute SKUs and large-scale GPU clusters within an external partner network while aiding capacity planning for both training and inference workloads.
• Drive multi-threaded program initiatives involving hardware, networking, power, and cooling—taking ownership of plans, interdependencies, and critical pathways.
• Collaborate with chip providers to mitigate risks associated with long-term onboarding to new hardware platforms, engaging with teams across kernels, communications, hardware, and scheduling.
• Develop and implement program mechanisms such as roadmaps, milestones, risk registers, and runbooks to ensure predictable delivery at scale.
• Work alongside engineering teams to enhance cluster turn-up reliability, repeatability, and automation, thereby decreasing the time-to-serve for new capacities.
• Facilitate cross-functional readiness involving security, finance, operations, and product/research stakeholders to ensure the launch of production-ready compute capabilities.
• Manage integrations and transitions among teams and partners to guarantee seamless execution, transparent communication, and prompt issue resolution.
• Identify operational bottlenecks and systemic deficiencies, driving sustainable improvements across tooling, processes, and partner interactions.
Full-time|$190K/yr - $253.8K/yr|On-site|Mountain View, California; San Francisco, California
P-931
At Databricks, we are dedicated to empowering data teams to tackle some of the most challenging problems in the world—from revolutionizing transportation to fast-tracking medical innovations. We achieve this by developing and managing the foremost data and AI infrastructure platform, enabling our clients to leverage profound data insights to enhance their enterprises. Founded by engineers with a customer-centric approach, we seize every chance to resolve technical challenges, from crafting next-generation UI/UX for data interactions to scaling our services and infrastructure across millions of virtual machines. And we’re just getting started.
Within Databricks, the Compute Infrastructure organization is responsible for building and operating the essential framework that supports all Data, AI, and stateful workloads across major cloud platforms. Our system launches tens of millions of VMs daily, manages thousands of Kubernetes clusters, and must deliver exceptional elasticity, reliability, and cost-effectiveness. We are in search of an Engineering Manager to lead a team focused on pivotal components of this platform. Your contributions will significantly impact product delivery speed, customer satisfaction, and our company's scalability.

The impact you will have:
• Own and enhance the compute platform to support all Databricks workloads, enabling engineers to create top-tier products with high velocity and superior performance.
• Recruit exceptional engineers and nurture their development through guidance, feedback, and career advancement opportunities.
• Elevate the technical and operational standards through robust design practices, rigorous testing, and a culture of engineering excellence and platform thinking.
• Collaborate with engineering and product leadership to establish long-term strategies and roadmaps.
• Lead cross-functional initiatives encompassing both product and infrastructure domains.
• Influence architectural decisions that extend beyond your immediate team.
Full-time|$200K/yr - $235K/yr|On-site|San Francisco, CA
Role Overview
Sigma Computing is hiring a Corporate Counsel - Commercial to join the legal team in San Francisco, CA. This in-person role reports directly to the General Counsel. The position calls for strong judgment, careful risk assessment, and close attention to detail.

Key Responsibilities
• Draft and negotiate technology transactions, including:
  • Sales and related agreements (master subscription agreements, data processing agreements, order forms, and non-disclosure agreements)
  • Alliance agreements with third-party partners
  • Cloud infrastructure contracts (such as agreements with AWS, Azure, GCP, and others)
  • Consulting and independent service agreements
• Conduct legal and regulatory due diligence in international jurisdictions where Sigma Computing plans to enter into technology agreements.
• Work with cross-functional teams to ensure agreements follow internal corporate policies and processes, and secure approvals from stakeholders to support business goals.
• Provide legal and commercial support for existing contracts, including assistance with dispute resolution.
• Offer practical legal advice on a range of legal and business matters to internal teams.

Location
This position is based on-site in San Francisco, CA.
Innovating Open Superintelligence Infrastructure
At Prime Intellect, we are pioneering the development of an open superintelligence stack, encompassing everything from advanced agentic models to the infrastructure that empowers anyone to create, train, and deploy them. By aggregating and orchestrating global compute resources into a unified control plane, we complement this with a comprehensive RL post-training suite: environments, secure sandboxes, verifiable evaluations, and our asynchronous RL trainer. Our platform enables researchers, startups, and enterprises to execute end-to-end reinforcement learning at an unprecedented scale, seamlessly integrating models into real-world tools, workflows, and deployment environments.
Recently, we secured $15 million in funding (bringing our total to $20 million), led by Founders Fund, with contributions from Menlo Ventures and notable angel investors such as Andrej Karpathy, Tri Dao, Dylan Patel, Clem Delangue, Emad Mostaque, and many others.

Your Role
In the AI era, compute is the cornerstone of advancement. The companies, models, and capabilities that will define the next decade will be determined by access to computing resources, the terms of that access, the economic implications, and the allocation of these resources across various systems. The financial and operational frameworks for such a significant asset class are still under construction, and the strategies for navigating this landscape are yet to be established.
The individuals who develop these strategies will influence the evolution of the AI infrastructure industry over the next decade.
You will be responsible for establishing the analytical foundation that informs our understanding of global compute markets: pricing supply across different regions and contract lengths, modeling the economics of substantial GPU commitments, assessing neoclouds and hyperscalers, and translating this analysis into provider decisions, commercial structures, and customer-facing offerings.
Your work will lie at the intersection of infrastructure, finance, and AI systems. You will tackle inquiries such as determining when an H200 cluster is preferable over a GB200 or GB300, analyzing how networking and storage limitations impact real workload performance, understanding how utilization assumptions affect the economics of multi-year commitments, and tracing how regional power, colocation, and capital costs influence GPU-hour pricing. You will assess providers not only based on headline pricing but also on delivery timelines, cluster architecture, reliability, support models, contractual risks, and their capability to handle frontier AI workloads.
The decisions you contribute to will directly influence Prime Intellect’s capacity to deliver high-quality compute to researchers, AI labs, and companies leveraging our stack.

Responsibilities
Compute Economics
Develop and oversee financial models that accurately price our compute supply: per-cluster...
About Eventual
At Eventual, we are reimagining how AI applications process vast amounts of data, from images to complex datasets. Traditional data platforms are not equipped to handle the petabytes of multimodal data essential for AI, causing teams to struggle with inadequate infrastructure. Founded in 2022, our mission is to simplify data querying, making it as intuitive as working with tables while ensuring scalability for production workloads.
Our open-source engine, Daft, is specifically designed for real-world AI systems. It efficiently manages external APIs and GPU clusters, and addresses failures that traditional engines cannot handle. Daft is already integral to operations at leading companies such as Amazon, Mobileye, Together AI, and CloudKitchens.
We pride ourselves on our exceptional team, which includes talents from Databricks, AWS, Nvidia, Pinecone, GitHub Copilot, Tesla, and others. We have quadrupled our team size in just a year, supported by Series A and seed funding from notable investors like Felicis, CRV, Microsoft M12, and Y Combinator. We are now eager to expand further. Join us—Eventual is just getting started.
We are seeking passionate individuals who are excited to collaborate in a close-knit team environment, working together four days a week in our San Francisco Mission district office.

Your Role:
As a Software Engineer, you will take charge of developing Eventual's core products and architecture. You’ll deliver features that our customers will use immediately and collaborate with a dedicated team that values open communication and cross-functional teamwork. Our fast-paced environment is focused on solving a variety of complex technical and product challenges.
While our experienced team is here to provide guidance and mentorship, we appreciate engineers who can independently identify and tackle challenging technical issues.

Key Responsibilities:
• Design and develop highly reliable and resilient products and features.
• Collaborate closely with cross-functional product and customer-facing teams to understand requirements and deliver thoughtful solutions.
• Write high-quality, extensible, and maintainable code.
• Create and build scalable applications and components.
• Architect and manage Kubernetes clusters optimized for our needs.
Prime Intellect develops an open superintelligence framework, supporting advanced agentic models and the infrastructure needed to create, train, and deploy them. The company’s mission is to unify global computational resources under a single control plane, integrating a full reinforcement learning (RL) post-training stack. The platform includes secure sandboxes, verifiable evaluations, environments, and an asynchronous RL trainer. Researchers, startups, and enterprises use Prime Intellect to run end-to-end RL at scale, adapting models for practical deployment. Prime Intellect has raised $20 million in funding, including a recent $15 million round. Investors include Founders Fund, Menlo Ventures, and individuals such as Andrej Karpathy, Tri Dao, Dylan Patel, Clem Delangue, and Emad Mostaque.

Role overview
The Head of Compute leads all aspects of GPU resource management at Prime Intellect from the San Francisco office. This function covers sourcing, economics, contracting, and the strategic direction for compute resources, critical for model training, serving, and sales. Compute is both the company’s core product and the main constraint in the open AI ecosystem. The role exists to keep Prime Intellect and the broader open ecosystem competitive in a landscape where every major lab contends for the same GPUs.
What you will do
• Direct sourcing and procurement of GPU resources for model training and serving
• Manage compute economics and contracts, balancing long-term commitments with spot market activity
• Shape Prime Intellect’s strategic position in the global compute market
• Identify and prioritize key geographic compute hubs and hardware generations for broad access
• Collaborate with research and engineering teams to design the compute layer for the open model ecosystem
• Build and maintain commercial relationships with neocloud providers and industry partners
• Secure early access to new accelerator hardware and develop the operational framework for sustained compute advantage
• Decide what to train, where, and under which cost structures

What success looks like
• Modeling unit economics for multi-year GPU commitments as the market evolves
• Turning research needs into actionable compute strategies
• Negotiating significant contracts for reserved resources
• Working with neocloud leaders and internal teams to advance open post-training

Location
This position is based in San Francisco.
Databricks is looking for a Senior Software Engineer focused on Compute Infrastructure in San Francisco, California. This position centers on building and improving compute architecture to support greater performance and scalability across Databricks' platform.

What you will do
• Develop and optimize compute infrastructure to handle demanding data processing and analytics workloads.
• Work closely with teams from different disciplines to deliver reliable, high-quality solutions for customers.

Impact
Your contributions will help define how data processing and analytics evolve at Databricks. The work directly supports customers’ ability to scale and perform complex tasks in the cloud.

Who we’re looking for
• Strong background in cloud technologies and compute systems.
• Enjoys tackling complex technical challenges.
• Collaborative approach to problem-solving with cross-functional teams.
About Our Team
The Legal team at OpenAI is integral to our mission, addressing groundbreaking legal challenges in the realm of artificial intelligence. If you are a dedicated technology lawyer eager to engage in meaningful and innovative work, this team is the perfect fit for you. Our diverse team includes experts in technology, AI, privacy, intellectual property, corporate law, employment law, tax law, regulatory issues, and litigation.

About This Role
We are on the lookout for an AI Policy Counsel to expertly navigate the evolving legislative and regulatory landscape surrounding AI, both in the United States and internationally. We seek a proactive legal professional with extensive experience in technology policy who excels at analyzing intricate legislative and regulatory proposals, often in uncharted territory. This position reports directly to our Senior Counsel of AI Policy.
This role is located in our San Francisco, CA or Washington, DC offices, operating under a hybrid work model of three days in the office each week, with relocation assistance available for new hires.

Your Responsibilities Will Include:
• Evaluating proposed legislation and regulations related to AI to assess their implications for OpenAI and the larger AI ecosystem.
• Working collaboratively with Legal and Global Affairs teams to formulate policy stances.
• Providing strategic insights and guidance to the Global Affairs team and other internal partners.
• Advising on the dynamic regulatory and industry standards impacting OpenAI’s operations and clientele.

You May Excel in This Role if You:
• Possess over 5 years of experience in both in-house environments and technology-focused law firms.
• Have a robust understanding of technology policy, complemented by experience in analyzing complex legislative frameworks.
• Demonstrate familiarity with AI or possess a strong interest in AI-related laws and regulations.
• Exhibit exceptional writing skills, effectively simplifying complex concepts into clear, persuasive narratives.
• Have a proven track record of engaging with regulators and policymakers.

About OpenAI
OpenAI is at the forefront of AI research and deployment, committed to ensuring that general-purpose artificial intelligence serves the greater good for all humanity. We push the boundaries of innovation while prioritizing ethical considerations.
Full-time|On-site|San Francisco, Los Angeles, or New York
Laurel develops AI-powered time platforms designed for professional services firms. By automating the capture of work time and connecting time data to business outcomes, Laurel helps firms improve profitability, client service, and strategic decision-making. Clients include organizations such as EY, Aprio, Crowell & Moring, and Frost Brown Todd. Laurel’s systems process over 1 billion previously untracked work activities each year. The team includes experts in AI, product, and engineering, all working to rethink productivity in the knowledge economy. Laurel’s mission centers on helping employees accomplish more in less time, creating space for creativity and impact.

Role overview
The Lead Counsel for AI & Privacy is Laurel’s first legal hire, based in San Francisco, Los Angeles, or New York. This position focuses on product, data, and AI strategy, setting the foundation for legal guidance as the company grows.

Key focus areas
• Provide direction on how Laurel builds, deploys, and scales AI responsibly for enterprise clients.
• Collaborate with Product, Engineering, Security, and Sales to influence data use, AI system design, and customer trust.
• Tackle legal and ethical questions where AI, privacy, and data intersect.
• Establish foundational practices and policies for Laurel’s technology and business operations.

Collaboration
This role works closely with cross-functional teams, including Product, Legal, and Go-To-Market, to influence both strategy and day-to-day execution.

Location
San Francisco, Los Angeles, or New York.
About Us:
At novita-ai, we are a rapidly growing global provider of AI cloud infrastructure, leading the charge in the artificial intelligence revolution. Our innovative platform equips developers and enterprises with powerful, scalable, and user-friendly solutions such as Model APIs, GPU Instances, and Serverless Computing. As organizations around the globe strive to integrate AI into their offerings, we serve as the essential engine that fuels their innovative efforts.
Join our world-class team and contribute to our expanding customer base. This unique opportunity allows you to be part of a dynamic company in a hyper-growth market, where your technical skills will directly impact customer success and drive our business forward.

The Role:
As a Solutions Engineer, you will act as the primary technical leader and trusted advisor for our clients throughout their journey. You will collaborate closely with the sales team to bridge the gap between complex customer challenges and our sophisticated technical solutions. Your mission is to build technical credibility, demonstrate the capabilities of our platform, and design tailored solutions that empower our clients to achieve their AI-related business objectives.

What You'll Do:
• Technical Discovery & Solution Design: Collaborate with Account Executives to gain a deep understanding of customer needs, technical requirements, and business goals. Develop elegant and effective solutions utilizing our AI infrastructure stack (Model APIs, GPU Instances, Serverless).
• Product Demonstration & Proof of Concept (POC): Conduct engaging, customized product demonstrations and interactive workshops. Plan, manage, and execute successful POCs, showcasing the value and performance of our platform within the client’s environment.
• Technical Evangelism & Trusted Advisory: Communicate the value proposition of our platform to diverse audiences, including both technical and non-technical stakeholders, from engineers to C-level executives. Establish yourself as the go-to expert for customers on best practices in AI infrastructure.
• Sales Enablement & Market Feedback Loop: Create and maintain technical sales materials, including whitepapers, best practice guides, and demo scripts. Serve as the voice of the customer, relaying valuable feedback from the field to our Product and Engineering teams to influence our product roadmap.
• Onboarding & Implementation Guidance: Facilitate a seamless post-sales transition by providing initial onboarding support and architectural guidance, setting customers up for sustained success.
Full-time|$178.1K/yr - $267.1K/yr|Remote|Remote, District of Columbia, USA
Exciting Career Opportunity
Join Unity Technologies as a Senior Managing Counsel for AI Governance, reporting directly to the Vice President and Deputy General Counsel for Privacy and AI Governance. In this pivotal role, you will be responsible for crafting, executing, and supervising a privacy-focused governance framework that supports the development and deployment of AI technologies. As a key advisor and process architect, your work will ensure adherence to global AI regulations, privacy laws, and ethical benchmarks through collaboration with Product, Security, Legal, and Engineering teams.

Your Responsibilities
- Lead AI Program Governance and Policy: Assess the relevance of international AI regulations, spearhead policy formulation, act as the Framework Owner, and set ethical guidelines for AI development. Optimize privacy, safety, and AI assessments while proposing AI-enabled tools to enhance efficiency.
- Manage AI Incident Response: Revise and enhance incident response protocols to include AI-specific triggers and regulatory reporting structures, aligning with existing roles and Security’s AI Acceptable Use Policy.
- Oversee AI Risk Management: Maintain and refine the AI Risk Management Policy, establishing a comprehensive risk taxonomy that distinguishes between rights-based and technical risks.
- Ensure Accuracy, Robustness, and Cybersecurity: Provide counsel on adapting security and compliance strategies to meet AI-specific legal requirements, collaborating with technical teams to ensure that AI models meet established accuracy and compliance criteria.
- Implement Quality Management and Monitoring: Assess and refine standard Quality Management System (QMS) protocols for AI design, testing, validation, and post-market monitoring. Sustain standardized processes and reporting templates for ongoing model performance and compliance.
- Specialist Consultation: Facilitate specialized legal analysis (e.g., Intellectual Property, Employment) with relevant co-counsel.
Join Anthropic as a Strategic Deals Lead focused on our Compute & Infrastructure initiatives. In this pivotal role, you will spearhead the development of strategic partnerships and enhance our infrastructure capabilities. You will work closely with cross-functional teams to optimize operational efficiency while ensuring that our technical solutions are scalable and robust. Your leadership and vision will be crucial in navigating complex negotiations and driving successful outcomes for our organization.
Full-time|$130K/yr - $190K/yr|On-site|San Francisco
Job Category: AI & Robotics

About Avala AI
Avala AI is a pioneering AI Data Infrastructure company at the forefront of real-world AI and its integration with the labor economy. We excel in delivering high-quality data labeling, comprehensive dataset management, and insightful data visualization, providing 4D labeling solutions tailored for autonomous vehicles, humanoid robots, and drone applications. Our mission is to empower AI-driven sectors—ranging from AV companies to robotics innovators and drone enterprises—by equipping them with the essential data infrastructure to propel the next generation of intelligent systems while offering dignified digital employment opportunities globally.

The Role
In your capacity as a 3D Computer Vision Engineer at Avala AI, you will be responsible for designing and implementing cutting-edge solutions for both offline and online 3D reconstruction and scene understanding, ensuring robustness, accuracy, and performance. You will collaborate on a world-class spatial computing platform deployed extensively in autonomous vehicles, advanced robotic systems, and drone technologies. Your contributions will advance the capabilities of real-world AI while utilizing the latest advancements in deep learning and 3D computer vision techniques.

What You’ll Do
- Spatial Computing & Reconstruction: Innovate through the application of NeRFs, Diffusion Models, Gaussian Splatting, Multiview Stereo, TSDF Fusion, Structure from Motion, and SLAM methodologies.
- Mission-Critical Perception: Develop robust 3D perception systems and scene understanding frameworks that enhance safety and operational performance across various robotics and AV applications.
- 4D Data Labeling & Visualization: Work collaboratively with cross-functional teams to enhance and expand Avala’s 4D labeling platform for automobiles, humanoid robots, and drones.
- Software Engineering Best Practices: Apply strong coding, testing, and deployment methodologies to ensure rapid, safe, and efficient development of innovative solutions.
- Boundary-Pushing Innovation: Actively explore new methodologies and technologies that advance the field of 3D vision, neural rendering, and large-scale data processing.