Sourcing Manager For Critical Components At Cerebras Systems jobs in Sunnyvale – Browse 381 openings on RoboApply Jobs


Open roles matching “Sourcing Manager For Critical Components At Cerebras Systems” in and around Sunnyvale. 381 active listings on RoboApply Jobs.


1 - 20 of 381 Jobs
Full-time|$200K/yr - $240K/yr|On-site|Sunnyvale, CA

Cerebras Systems is at the forefront of AI innovation, creating the world's largest AI chip that is 56 times larger than conventional GPUs. Our unique wafer-scale architecture combines the computational power of numerous GPUs into a single chip, offering unparalleled programming simplicity. This allows us to deliver exceptional training and inference speeds,…

May 4, 2026
Full-time|On-site|Sunnyvale, CA

Cerebras Systems is pioneering artificial intelligence with the world’s largest AI chip, 56 times larger than traditional GPUs. Our innovative wafer-scale architecture delivers the computational power of dozens of GPUs on a single chip while maintaining the programming simplicity of a single device. This unique approach enables us to achieve unparalleled training and inference speeds, allowing machine learning professionals to seamlessly execute large-scale ML applications without the complexities of managing numerous GPUs or TPUs.

Our clientele includes leading model laboratories, global enterprises, and avant-garde AI-native startups. Recently, OpenAI formed a multi-year partnership with Cerebras to harness 750 megawatts of scale, revolutionizing essential workloads with ultra-high-speed inference.

Thanks to our groundbreaking wafer-scale architecture, Cerebras Inference provides the fastest Generative AI inference solution globally, exceeding GPU-based hyperscale cloud inference services by more than ten times. This speed is transforming the user experience of AI applications, facilitating real-time iteration and augmenting intelligence via enhanced agentic computation.

About The Role

We are seeking a Head of IT to establish and manage the internal technology infrastructure of a rapidly scaling organization operating at the forefront of AI hardware and software. This is not a conventional IT leadership position; it is a build-and-scale opportunity for someone who thrives in a fast-moving environment.

You will oversee the systems that Cerebras employees, contractors, and executives depend on daily, including laptops, identity management, SaaS, networking, collaboration tools, endpoint security, internal support, and the essential IT controls required for a company of our maturity. You will keep our highly technical, fast-paced engineering workforce unimpeded while fortifying the environment to meet the standards expected of a company at our stage, including SOX-grade ITGCs and SOC 2 compliance.

Apr 9, 2026
Full-time|On-site|Sunnyvale, CA

Cerebras Systems is pioneering the future of artificial intelligence with the world's largest AI chip, an astonishing 56 times bigger than conventional GPUs. Our innovative wafer-scale architecture delivers the AI computational power of dozens of GPUs on a single chip while maintaining programming simplicity akin to a single device. This state-of-the-art approach allows us to deliver unparalleled training and inference speeds, enabling machine learning practitioners to seamlessly execute large-scale ML applications without the complexities of managing multiple GPUs or TPUs.

We proudly serve a diverse clientele, including leading model labs, multinational corporations, and innovative AI-native startups. Notably, OpenAI recently announced a multi-year partnership with Cerebras to harness 750 megawatts of scale, revolutionizing critical workloads with ultra-high-speed inference.

With our groundbreaking wafer-scale architecture, Cerebras Inference provides the fastest Generative AI inference solution globally, achieving speeds over ten times faster than GPU-based hyperscale cloud inference services. This remarkable speed is transforming the user experience for AI applications, unlocking real-time iteration and enriching intelligence through enhanced computational capabilities.

About The Role

As a Network Architect on the Cluster Architecture Team, you will collaborate closely with vendors, internal networking teams, and industry experts to create top-tier interconnect architecture for current and future generations of Cerebras AI clusters. Your responsibilities will include developing proof-of-concept designs for new network features that promote a resilient, reliable network tailored for AI workloads. The role demands cross-functional collaboration and engagement with a variety of hardware components, including network devices and the Wafer-Scale Engine, as well as software across multiple layers of the stack, from host-side networking to cluster-level coordination. A strong understanding of network monitoring systems and debugging methodologies is essential.

Responsibilities

- Design AI/ML and HPC clusters.
- Identify and mitigate performance or efficiency bottlenecks, ensuring optimal resource utilization, low latency, and high-throughput communication.
- Lead technical projects involving multiple teams and diverse software and hardware components to realize advanced network solutions.

Feb 17, 2026
Full-time|$150K/yr - $260K/yr|On-site|Sunnyvale, CA

Cerebras Systems is at the forefront of AI technology, engineering the world’s largest AI chip, which is 56 times larger than conventional GPUs. Our innovative wafer-scale architecture enables unprecedented AI computational power, equivalent to dozens of GPUs operating as a single unit, thereby simplifying programming for machine learning tasks. This revolutionary approach not only provides unmatched training and inference speeds but also allows users to execute large-scale machine learning applications without the complexity of managing multiple GPUs or TPUs.

Our esteemed clientele includes leading model laboratories, global corporations, and pioneering AI startups. Recently, OpenAI announced a multi-year collaboration with Cerebras, deploying 750 megawatts of processing capacity that revolutionizes critical workloads through ultra-fast inference.

With our groundbreaking wafer-scale architecture, Cerebras Inference delivers the fastest Generative AI inference solution globally, outperforming GPU-based hyperscale cloud services by over 10 times. This increase in speed is reshaping the user experience of AI applications, facilitating real-time iteration and enhancing intelligence through advanced computational capabilities.

Feb 19, 2026
Full-time|On-site|Sunnyvale, CA

Cerebras Systems is at the forefront of AI technology, creating the world's largest AI chip, 56 times the size of traditional GPUs. Our innovative wafer-scale architecture delivers the computational power of dozens of GPUs on a single chip while simplifying programming to the ease of a single device. This groundbreaking approach enables unparalleled training and inference speeds, empowering machine learning professionals to seamlessly execute large-scale ML applications without the complexities of managing numerous GPUs or TPUs.

Cerebras serves an impressive clientele that includes top model laboratories, multinational corporations, and pioneering AI-native startups. Recently, OpenAI announced a multi-year partnership with Cerebras, deploying 750 megawatts of scale to revolutionize critical workloads with ultra-high-speed inference.

Our wafer-scale architecture also powers the fastest Generative AI inference solution globally, exceeding GPU-based hyperscale cloud inference services by over 10 times. This speed enhancement is transforming the user experience of AI applications, facilitating real-time iteration and boosting intelligence through additional computational capabilities.

About The Role

We are searching for a talented Compiler Engineer to contribute to the design and implementation of new features in our CSL (Cerebras Software Language) compiler. CSL is a Zig-like programming language used both internally and externally to program our wafer-scale engine (WSE). The language offers high-level abstractions to simplify programming the WSE while providing low-level access to hardware internals for optimal hardware utilization. The compiler leverages MLIR infrastructure to translate CSL into LLVM IR, which is further compiled by a dedicated LLVM mid-end/backend into executable files.

Responsibilities

- Design and implement front-end language features, semantic analysis, intermediate representations, and lowering pipelines from CSL to MLIR dialect(s) and LLVM IR.
- Develop and enhance abstraction layers between the CSL language and the underlying hardware.

Feb 17, 2026
Full-time|$190K/yr - $230K/yr|On-site|Sunnyvale, CA

Cerebras Systems is at the forefront of AI innovation, creating the world's largest AI chip, which is 56 times larger than traditional GPUs. Our pioneering wafer-scale architecture delivers AI computational power equivalent to dozens of GPUs on a single chip, offering users unparalleled simplicity and efficiency. This unique approach enables industry-leading training and inference speeds, allowing machine learning practitioners to run extensive ML applications seamlessly without the complexities of managing multiple GPUs or TPUs.

Our clientele includes renowned model labs, leading global enterprises, and innovative AI-first startups. Recently, OpenAI announced a multi-year collaboration with Cerebras, leveraging 750 megawatts of scale to revolutionize critical workloads with ultra-high-speed inference.

Thanks to our cutting-edge wafer-scale technology, Cerebras Inference offers the fastest Generative AI inference solution globally, achieving speeds over 10 times faster than typical GPU-based hyperscale cloud services. This speed enhancement is reshaping the user experience in AI applications, enabling real-time iteration and enhancing intelligence through advanced computation.

About The Role

As a Senior Mechanical Engineer at Cerebras, you will spearhead the design of innovative mechanical systems for our next-generation wafer-scale engine. Your key responsibilities will include ensuring adherence to specifications, validating manufacturability, and delivering high-quality products in a dynamic environment, addressing some of the most intricate challenges in the rapidly advancing AI landscape. In this role, you will be instrumental in developing the mechanical infrastructure for Cerebras' custom hardware systems.

- Rapidly iterate on designs and analyses to inform high-level systems decisions and guide the overall product strategy.
- Provide extensive support for environmental and performance testing on hardware, validate analyses, and ensure compliance with design criteria.
- Take ownership of technical deliverables.
- Conduct first-article inspections and functional analyses, identifying and resolving issues as they arise.
- Collaborate closely with design, manufacturing, production, diagnostics, and embedded software engineering teams, contractors, and suppliers.
- Perform detailed structural analyses and simulations to optimize designs.

Feb 17, 2026
Full-time|$150K/yr - $270K/yr|On-site|Sunnyvale, CA

Cerebras Systems is revolutionizing artificial intelligence with our groundbreaking wafer-scale architecture, a chip 56 times larger than traditional GPUs. Our innovative design provides unparalleled AI compute power, allowing users to run extensive machine learning applications effortlessly, without the complexities of managing multiple GPUs or TPUs.

Our esteemed clientele includes leading model labs, major global enterprises, and pioneering AI startups. Recently, OpenAI announced a multi-year partnership with Cerebras to deploy 750 megawatts of transformative computing power, enabling ultra-fast inference for critical workloads.

With our advanced wafer-scale architecture, Cerebras Inference delivers the fastest Generative AI inference solution globally, achieving speeds over 10 times faster than GPU-based hyperscale cloud services. This leap in performance is redefining the user experience for AI applications, facilitating real-time iteration and enhancing intelligence through additional agentic computation.

About The Role

Join our physical design team as a 3D Physical Design Engineer, focusing on the design and analysis of 3D integrated products. This role requires a blend of traditional ASIC/SoC physical design expertise along with skills in packaging, power management, clock distribution, and thermal analysis. Collaborating closely with the architecture and RTL teams, you will contribute to research and development on innovative concepts for 3D integration.

Feb 17, 2026
Full-time|$200K/yr - $300K/yr|On-site|Sunnyvale, CA

Cerebras Systems is at the forefront of AI technology, producing the world's largest AI chip, which is 56 times larger than traditional GPUs. Our innovative wafer-scale architecture delivers AI compute power equivalent to dozens of GPUs on a single chip, simplifying programming to the ease of a single device. This revolutionary approach enables unmatched speeds in training and inference, empowering machine learning professionals to run extensive ML applications seamlessly, without the complexities of managing multiple GPUs or TPUs.

Our esteemed clients include leading model labs, major global enterprises, and pioneering AI-native startups. Recently, OpenAI forged a multi-year partnership with Cerebras to deploy an impressive 750 megawatts of scale, significantly enhancing key workloads with ultra-high-speed inference.

Thanks to our cutting-edge wafer-scale architecture, Cerebras Inference provides the fastest Generative AI inference solution globally, surpassing GPU-based hyperscale cloud inference services by over 10 times. This increase in speed is revolutionizing the user experience in AI applications, facilitating real-time iteration and enhancing intelligence through advanced computational capabilities.

Job Summary

We are seeking a Director of Strategic Sourcing for Contract Manufacturers to spearhead the development and implementation of global sourcing strategies for outsourced manufacturing services. This pivotal role focuses on driving cost efficiency, optimizing supplier performance, and managing risks across the contract manufacturing supply chain, ensuring alignment with our business objectives and operational excellence.

Feb 17, 2026
Full-time|On-site|Sunnyvale, CA

Cerebras Systems is at the forefront of AI innovation, creating the world’s largest AI chip, which is 56 times the size of conventional GPUs. Our unique wafer-scale architecture provides the power equivalent to dozens of GPUs on a single chip with the simplicity of programming a single device. This cutting-edge approach enables unparalleled training and inference speeds, allowing machine learning professionals to seamlessly execute large-scale ML applications without the complexity of managing multiple GPUs or TPUs.

We proudly serve a diverse clientele, including leading model labs, renowned global enterprises, and pioneering AI-native startups. Notably, OpenAI has recently announced a multi-year partnership with us to deploy 750 megawatts of scale, revolutionizing key workloads with ultra-fast inference capabilities.

Our groundbreaking wafer-scale architecture empowers Cerebras Inference to deliver the fastest Generative AI inference solution globally, achieving speeds over 10 times faster than traditional GPU-based hyperscale cloud inference services. This leap in speed redefines the user experience of AI applications, facilitating real-time iteration and enhancing intelligence through advanced computation.

Feb 17, 2026
Full-time|On-site|Sunnyvale, CA

Cerebras Systems is at the forefront of AI innovation, creating the world's largest AI chip, 56 times larger than traditional GPUs. Our unique wafer-scale architecture delivers the computational power of numerous GPUs on a single chip, simplifying programming while providing unparalleled training and inference speeds. This revolutionary approach enables users to run extensive machine learning applications effortlessly, eliminating the complexity of managing multiple GPUs or TPUs.

Cerebras serves a diverse clientele, including leading model labs, major global enterprises, and pioneering AI-native startups. Recently, OpenAI announced a multi-year partnership with Cerebras to deploy 750 megawatts of scale, redefining key workloads with ultra-high-speed inference.

Our groundbreaking wafer-scale architecture makes Cerebras Inference the fastest Generative AI inference solution globally, achieving speeds more than ten times faster than GPU-based hyperscale cloud services. This performance is transforming the user experience of AI applications, facilitating real-time iteration and boosting intelligence through enhanced computational capabilities.

About The Role

We are seeking a Senior Performance Analyst to join our Product team. As a specialist in state-of-the-art inference performance, you will be the go-to expert on how Cerebras measures up against alternative inference providers on pricing and performance. The role combines performance benchmarking from first principles with competitive intelligence, and revolves around two pillars:

Performance Benchmarking

You will develop, execute, and maintain reproducible benchmarks that assess Cerebras inference performance on real customer workloads, including metrics such as tokens per second, time to first token, latency under concurrency, and total cost of ownership (TCO).

Competitive Analysis

You will analyze market trends and competitor offerings to position Cerebras effectively within the inference landscape.
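For context, two of the metrics named in this listing can be computed directly from per-request timestamps. A minimal Python sketch under that assumption; the `RequestTrace` fields and helper names here are hypothetical illustrations, not a Cerebras or RoboApply API:

```python
from dataclasses import dataclass


@dataclass
class RequestTrace:
    """Timestamps (seconds) and token count recorded for one inference request."""
    sent_at: float         # when the request was issued
    first_token_at: float  # when the first output token arrived
    done_at: float         # when the final output token arrived
    tokens_out: int        # number of generated tokens


def time_to_first_token(t: RequestTrace) -> float:
    """TTFT: delay between sending the request and receiving the first token."""
    return t.first_token_at - t.sent_at


def output_tokens_per_second(t: RequestTrace) -> float:
    """Generation throughput over the streaming phase (first token to last)."""
    gen_time = t.done_at - t.first_token_at
    return t.tokens_out / gen_time if gen_time > 0 else float("inf")


# Example: 0.25 s to first token, then 100 tokens over the next 0.5 s.
trace = RequestTrace(sent_at=0.0, first_token_at=0.25, done_at=0.75, tokens_out=100)
print(time_to_first_token(trace))       # 0.25
print(output_tokens_per_second(trace))  # 200.0
```

In practice a benchmark would collect many such traces under varying concurrency and report percentile latencies rather than single values; TTFT captures perceived responsiveness, while generation-phase tokens per second captures streaming throughput.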

Apr 13, 2026
Full-time|$175K/yr - $275K/yr|On-site|Sunnyvale, CA

Cerebras Systems is a pioneer in AI technology, renowned for creating the world’s largest AI chip, an astounding 56 times larger than traditional GPUs. Our innovative wafer-scale architecture provides AI computing capabilities equivalent to dozens of GPUs on a single chip while ensuring the programming simplicity of a single device. This unique approach enables Cerebras to achieve unmatched training and inference speeds, allowing machine learning practitioners to seamlessly execute large-scale ML applications without the complexity of managing vast arrays of GPUs or TPUs.

Cerebras' impressive clientele includes leading model laboratories, global enterprises, and cutting-edge AI-focused startups. Recently, OpenAI announced a multi-year partnership with Cerebras, enhancing transformative workloads through ultra-high-speed inference utilizing 750 megawatts of scale.

With our groundbreaking wafer-scale architecture, Cerebras Inference delivers the fastest Generative AI inference solution globally, achieving speeds over 10 times faster than GPU-based hyperscale cloud inference services. This speed enhancement is revolutionizing the user experience in AI applications, enabling real-time iteration and boosting intelligence through advanced computational capabilities.

About The Role

As the Lead RTL Design Engineer, you will play a pivotal role on our team responsible for designing and developing the next iterations of the Cerebras Wafer Scale Engine (WSE). The position demands extensive expertise in RTL design and integration, with a strong emphasis on delivering high-performance, power-efficient, and scalable solutions. Additionally, you will oversee collaboration with external ASIC vendors and work closely with design verification, physical design, software, and system teams to take innovative semiconductor architectures from concept to production, addressing the unique challenges of building WSE systems.

Feb 17, 2026
Full-time|On-site|Sunnyvale, CA or Toronto, Canada

Cerebras Systems is revolutionizing the AI industry by developing the world’s largest AI chip, 56 times the size of traditional GPUs. Our innovative wafer-scale architecture delivers the computational power of dozens of GPUs on a single chip, while simplifying programming to the ease of a single device. This unique approach enables Cerebras to achieve unmatched training and inference speeds, empowering machine learning professionals to seamlessly operate large-scale ML applications without the complexities of managing multiple GPUs or TPUs.

Our esteemed clientele includes leading model labs, global corporations, and pioneering AI-native startups. Recently, OpenAI announced a multi-year partnership with Cerebras to leverage 750 megawatts of scale, transforming critical workloads through ultra-high-speed inference.

With our groundbreaking wafer-scale architecture, Cerebras Inference provides the fastest Generative AI inference solution globally, achieving speeds over 10 times faster than GPU-based hyperscale cloud inference services. This unprecedented speed enhances the user experience of AI applications, enabling real-time iteration and increased intelligence through advanced computation capabilities.

About The Role

The AI Infrastructure Operations Engineer (SiteOps) is an entry-level position focused on the deployment, initialization, monitoring, and first-response troubleshooting of Cerebras AI infrastructure in data center settings. The role plays a critical part in supporting Cerebras systems, cluster server hardware, networking hardware, and monitoring tools. Your responsibilities will include ensuring the reliable operation and scalability of Cerebras AI clusters by executing established hardware initialization and validation protocols, monitoring telemetry data, performing initial troubleshooting, and escalating issues according to predefined workflows.

Feb 17, 2026
Full-time|On-site|Sunnyvale, CA or Toronto, Canada

Cerebras Systems revolutionizes the AI landscape with the world’s largest AI chip, a remarkable 56 times larger than conventional GPUs. Our innovative wafer-scale architecture delivers the computational power of numerous GPUs on a single chip, simplifying programming for users. This unique approach enables Cerebras to achieve unparalleled training and inference speeds, empowering machine learning practitioners to seamlessly execute large-scale ML applications without the complexities of managing hundreds of GPUs or TPUs.

Our clientele includes leading model laboratories, global enterprises, and pioneering AI-native startups. Notably, OpenAI recently announced a multi-year partnership with Cerebras to deploy 750 megawatts of scale, significantly enhancing key workloads with ultra-high-speed inference.

Thanks to our groundbreaking wafer-scale architecture, Cerebras Inference provides the fastest Generative AI inference solution globally, exceeding the performance of GPU-based hyperscale cloud inference services by over ten times. This speed enhancement transforms the user experience of AI applications, facilitating real-time iteration and augmented intelligence through additional agentic computation.

About The Role

We are looking for a highly skilled and experienced AI Infrastructure Operations Engineer to oversee and manage our state-of-the-art machine learning compute clusters. In this role, you will work with the world’s largest computer chip, the Wafer-Scale Engine (WSE), and the systems that leverage its extraordinary power. You will play a pivotal role in ensuring the health, performance, and availability of our infrastructure, maximizing compute capacity, and supporting our expanding AI initiatives. The position requires an in-depth understanding of Linux-based systems, expertise in containerization technologies, and experience monitoring and troubleshooting complex distributed systems. The ideal candidate is a proactive problem-solver with a strong background in large-scale compute infrastructure who is reliable and committed to customer success.

Feb 17, 2026
Full-time|On-site|Sunnyvale

We are seeking a talented and motivated Reliability Manager to join our team at dstaff, focusing on Photonic Integrated Components. In this role, you will lead initiatives to enhance the reliability of our products and ensure that they meet the highest quality standards. You will collaborate with cross-functional teams to conduct reliability testing, analyze data, and implement improvements.

May 14, 2015
Full-time|On-site|Sunnyvale, CA or Toronto, Canada

Cerebras Systems is pioneering the field of artificial intelligence with the world's largest AI chip, which is 56 times larger than traditional GPUs. Our innovative wafer-scale architecture delivers the computational power equivalent to dozens of GPUs on a single chip, simplifying programming to a single device. This breakthrough allows Cerebras to achieve unparalleled training and inference speeds, enabling machine learning practitioners to seamlessly run extensive ML applications without the complexity of managing numerous GPUs or TPUs.

Our clientele includes leading model labs, global corporations, and cutting-edge AI-native startups. Cerebras recently formed a transformative multi-year partnership with OpenAI, focusing on deploying 750 megawatts of scale to enhance critical workloads through ultra-fast inference.

Thanks to our unique wafer-scale architecture, Cerebras Inference provides the fastest Generative AI inference solution globally, outperforming GPU-based hyperscale cloud services by over ten times. This dramatic increase in speed is revolutionizing the user experience of AI applications, facilitating real-time iteration and enhancing intelligence through additional agentic computation.

As an Infrastructure Hardware Technical Program Manager (Server and Network Systems) within the Cluster Architecture Team, you will oversee the comprehensive delivery of server and network platform programs across Cerebras CS-3-based AI clusters. Your responsibilities will range from requirements gathering and vendor selection to lab bring-up, qualification, and production rollout. You will act as the execution lead for multi-team programs involving OEM/ODM partners, component vendors, internal software/runtime teams, architects, validation/QA, and deployment/operations.

This position requires a strong technical background: you should grasp server, network, and system-level trade-offs to effectively conduct technical reviews, keep programs aligned with real-world constraints, and maintain clear decision documentation. Collaborating closely with Compute, Server, and Network Platform Architects, you will ensure detailed technical direction and approval. Additionally, you will establish mutual understanding with our rack/elevations and physical data center design partners to ensure server and network modifications are implemented smoothly in real deployments (without directly managing physical data center design).

Feb 25, 2026
Full-time|On-site|Sunnyvale

Join Intuitive Surgical, Inc. as a Senior Sourcing Manager specializing in HR and Professional Services. In this pivotal role, you will lead strategic sourcing initiatives, driving supplier performance and ensuring alignment with organizational goals. You will collaborate closely with cross-functional teams to identify and implement innovative sourcing strategies that enhance our operational efficiency and effectiveness.

Your expertise will be crucial in negotiating contracts, managing supplier relationships, and optimizing service delivery. This is an exciting opportunity to make a significant impact within a forward-thinking company dedicated to transforming healthcare.

Mar 23, 2026
Full-time|$140K/yr - $240K/yr|On-site|Sunnyvale, CA

At Cerebras Systems, we are pioneering the future of artificial intelligence with the world's largest AI chip, an astonishing 56 times larger than traditional GPUs. Our innovative wafer-scale architecture combines the computational power of numerous GPUs into a single chip, simplifying programming and enhancing efficiency. This unique approach enables unparalleled training and inference speeds, empowering machine learning practitioners to run extensive ML applications seamlessly, without the complexities of juggling multiple GPUs or TPUs.

Our clientele includes leading model labs, global corporations, and groundbreaking AI-focused startups. Notably, OpenAI has recently partnered with Cerebras to harness 750 megawatts of scale, revolutionizing critical workloads with ultra-fast inference capabilities.

Thanks to our cutting-edge wafer-scale technology, Cerebras Inference delivers the fastest Generative AI inference solutions available, exceeding GPU-based hyperscale cloud services by over ten times. This leap in speed is revolutionizing user interactions with AI applications, facilitating real-time adjustments and enhancing intelligence through advanced computational capabilities.

About The Role

As the security lead for Cerebras's AI cluster product, you will be at the forefront of securing our large-scale AI clusters, which consist of hundreds of wafer-scale accelerator systems, thousands of high-performance servers, numerous networking ports and switches, and network-attached storage within a vast data center. Your primary responsibility will be to implement security measures based on established best practices and first principles, protecting Cerebras's extensive AI clusters. These clusters comprise intricate hardware components, networking systems, and a fully integrated cluster management software stack, from bare-metal deployments to sophisticated management systems that enable multi-tenant training and inference services across these expansive clusters. You will focus on guaranteeing end-to-end security and privacy for cluster applications, developing security engineering solutions that incorporate robust network access controls, user access management, and an exceptional multi-tenancy framework.

Feb 17, 2026
Senior Sourcing Manager - MRO

Intuitive Surgical, Inc.

Full-time|On-site|Sunnyvale

Join Intuitive Surgical as a Senior Sourcing Manager specializing in Maintenance, Repair, and Operations (MRO). In this role, you will lead sourcing strategies and initiatives that support our operational excellence and drive cost efficiency. You will collaborate with cross-functional teams to ensure our supply chain is robust and responsive to the needs of our innovative medical devices.

Mar 19, 2026
Full-time|$175K/yr - $275K/yr|On-site|Sunnyvale, CA

Cerebras Systems is at the forefront of AI technology, manufacturing the world’s largest AI chip, which is 56 times larger than traditional GPUs. Our innovative wafer-scale architecture delivers AI computing power comparable to dozens of GPUs on a single chip, while maintaining the programming simplicity of a single device. This groundbreaking approach enables Cerebras to achieve unmatched training and inference speeds, allowing machine learning professionals to seamlessly deploy large-scale ML applications without the complexities of managing multiple GPUs or TPUs.

Cerebras’ impressive client roster includes leading model labs, major global enterprises, and pioneering AI-driven startups. Recently, OpenAI announced a multi-year partnership with Cerebras aimed at harnessing 750 megawatts of scale to revolutionize key workloads through ultra-high-speed inference.

Leveraging our cutting-edge wafer-scale architecture, Cerebras Inference offers the fastest Generative AI inference solution globally, boasting speeds over 10 times faster than GPU-based hyperscale cloud inference services. This dramatic increase in speed is transforming the user experience for AI applications, facilitating real-time iteration and enhancing intelligence through advanced agentic computation.

The Role

Join our Embedded Software team to contribute to the critical software framework that powers Cerebras Wafer Scale technology. You will work on innovative projects that push the boundaries of AI and embedded systems development, collaborating with a talented group of engineers focused on delivering exceptional performance for our clients.

Feb 17, 2026
Program Manager, NPI Sourcing

Intuitive Surgical, Inc.

Full-time|On-site|Sunnyvale

Join our team as a Program Manager for New Product Introduction (NPI) Sourcing at Intuitive Surgical. In this pivotal role, you will lead the sourcing strategy for our innovative medical devices, ensuring that we meet the highest standards of quality and efficiency. You will collaborate with cross-functional teams to drive product development from concept to commercialization, enabling us to deliver cutting-edge solutions to our customers.

Apr 3, 2026
