Experience Level
Entry Level
Qualifications
Proven experience in frontend development, particularly with frameworks such as React or Angular. Strong knowledge of web technologies including HTML, CSS, and JavaScript. Experience with API integration and working with backend services. Ability to collaborate effectively with cross-functional teams and communicate technical concepts clearly. A passion for creating seamless user experiences and optimizing performance.
About the job
Join Cerebras Systems as a Staff Frontend Engineer specializing in Inference. In this pivotal role, you will be instrumental in developing innovative solutions that push the boundaries of AI and machine learning. Your expertise will drive the design and implementation of user-friendly interfaces that enhance our cutting-edge technology.
About Cerebras Systems
Cerebras Systems is at the forefront of AI innovation, dedicated to delivering unparalleled computational power to tackle the most complex challenges in machine learning. Our team is composed of experts who are passionate about pushing the limits of technology, fostering an environment where creativity and collaboration thrive.
Cerebras Systems is revolutionizing the AI landscape with the world's largest AI chip, which is 56 times larger than traditional GPUs. Our innovative wafer-scale architecture enables us to deliver the computational power of dozens of GPUs on a single chip, while offering the programming ease of a single device. This groundbreaking approach empowers Cerebras to achieve unparalleled training and inference speeds, allowing machine learning practitioners to run large-scale ML applications effortlessly without the complexities of managing numerous GPUs or TPUs.

Cerebras serves a diverse clientele that includes leading model laboratories, global corporations, and pioneering AI-focused startups. Recently, OpenAI announced a multi-year collaboration with Cerebras to harness 750 megawatts of scale, significantly enhancing key workloads through ultra-fast inference capabilities.

With our cutting-edge wafer-scale architecture, Cerebras Inference provides the fastest Generative AI inference solution globally, exceeding the speed of GPU-based hyperscale cloud inference services by more than ten times. This extraordinary speed is reshaping the user experience of AI applications, facilitating real-time iteration and boosting intelligence through additional agentic computation.
Role Overview
Cerebras Systems is looking for a Staff Software Engineer focused on Inference Cloud. This position is based in Sunnyvale, CA.

What You Will Do
Design, develop, and optimize software for inference products. Work closely with team members to improve performance and reliability. Apply advanced AI and machine learning methods to real-world challenges.

Collaboration
Work alongside experienced engineers on projects that shape the future of inference technology at Cerebras Systems.
Join our dynamic team as a Staff Software Engineer specializing in Frontend development. We are seeking a talented individual with a robust background in building scalable e-commerce applications or mobile software. Your expertise in modern JavaScript frameworks and attention to detail will be instrumental in delivering high-quality web applications that enhance user experience.
At Cerebras Systems, we are revolutionizing AI computing by developing the world’s largest AI chip, which is 56 times larger than traditional GPUs. Our innovative wafer-scale architecture provides the computational power equivalent to dozens of GPUs on a single chip, simplifying programming to the level of a single device. This unique approach enables us to achieve unparalleled training and inference speeds, allowing machine learning practitioners to run large-scale ML applications without the complexity of managing multiple GPUs or TPUs.

Our esteemed clientele includes leading model laboratories, prominent global enterprises, and forward-thinking AI-native startups. Notably, OpenAI has entered a multi-year partnership with Cerebras to leverage 750 megawatts of scale, enhancing critical workloads with ultra-high-speed inference.

With our groundbreaking wafer-scale architecture, Cerebras Inference delivers the fastest Generative AI inference solution globally, outperforming GPU-based hyperscale cloud inference services by over tenfold. This dramatic increase in speed is transforming how users experience AI applications, facilitating real-time iterations and enhancing intelligence through additional agentic computation.

Location: Toronto / Sunnyvale

We are seeking a highly technical, hands-on engineering leader for our Inference Service Platform. In this role, you will guide a high-performing team to address a critical challenge: scaling large language model (LLM) inference on Cerebras’ advanced compute clusters and delivering a world-class, on-premise solution for enterprise customers. You will establish the technical vision while maintaining close engagement with the code, focusing on architecting highly reliable and low-latency distributed systems. If you possess proven expertise in distributed systems and scaling modern model-serving frameworks, we encourage you to apply.
Cerebras Systems is at the forefront of AI technology, developing the world's largest AI chip, which is 56 times larger than conventional GPUs. Our innovative wafer-scale architecture delivers the computational capabilities of numerous GPUs on a single chip, simplifying programming to the level of a single device. This groundbreaking approach enables Cerebras to achieve unmatched training and inference speeds, allowing machine learning practitioners to seamlessly execute large-scale ML applications without the complexities of managing extensive GPU or TPU resources. Our clientele includes leading model laboratories, global corporations, and pioneering AI-centric startups. Notably, OpenAI has recently entered into a multi-year partnership with Cerebras, aiming to deploy 750 megawatts of capacity, revolutionizing key workloads with exceptionally rapid inference speeds. Thanks to our extraordinary wafer-scale architecture, Cerebras Inference provides the fastest Generative AI inference solution available today, operating over ten times faster than GPU-based hyperscale cloud inference services. This significant boost in speed is reshaping the user experience in AI applications, facilitating real-time iterations and enhancing intelligence through advanced agentic computation.

About The Role
We are looking for an exceptionally talented Deployment Engineer to design and manage our state-of-the-art inference clusters. In this role, you will have the opportunity to work with the unparalleled Wafer-Scale Engine (WSE) and the systems that exploit its extraordinary capabilities.
Cerebras Systems is at the forefront of AI innovation, creating the world’s largest AI chip, which is 56 times larger than traditional GPUs. Our groundbreaking wafer-scale architecture delivers the computational power equivalent to dozens of GPUs on a single chip, combined with the programming simplicity of a unified device. This innovative approach allows us to offer unparalleled training and inference speeds, enabling machine learning practitioners to execute extensive ML applications seamlessly, without the complexities of managing multiple GPUs or TPUs.

Cerebras boasts an impressive clientele, including premier model labs, global corporations, and pioneering AI startups. Recently, OpenAI announced a multi-year partnership with Cerebras, aimed at deploying 750 megawatts of scale, revolutionizing critical workloads with ultra-fast inference capabilities.

Our unique wafer-scale architecture enables Cerebras Inference to provide the fastest Generative AI inference solution globally, surpassing GPU-based hyperscale cloud inference services by more than tenfold. This remarkable enhancement in speed is reshaping the AI application user experience, facilitating real-time iteration and boosting intelligence through enhanced computational capabilities.

About The Role
The Inference ML Engineering team at Cerebras Systems is committed to empowering our rapid generative inference solution through intuitive APIs, supported by a distributed runtime that operates on extensive clusters of our proprietary hardware. Our goal is to enable enterprises, developers, and researchers to fully harness the capabilities of our platform, leveraging its exceptional performance, scalability, and flexibility. The team collaborates closely with cross-functional groups, including compiler developers, cluster orchestrators, ML scientists, cloud architects, and product teams, to deliver impactful solutions that redefine the limits of ML performance and usability.

As a Senior Software Engineer on the Inference ML Engineering team, you will be instrumental in designing and implementing APIs, ML features, and tools that facilitate the execution of state-of-the-art generative AI models on our custom hardware. Your role will involve architecting solutions that allow for seamless model translation and execution, ensuring high throughput and minimal latency while maintaining user-friendliness. You will lead technical initiatives and collaborate with other engineering teams to enhance our solutions.
Full-time | Remote | Remote Office: Sunnyvale, CA or Toronto, Canada
Cerebras Systems is at the forefront of AI innovation, manufacturing the largest AI chip in the world, which is 56 times larger than conventional GPUs. Our cutting-edge wafer-scale architecture provides the computational power equivalent to dozens of GPUs on a single chip, simplifying programming to the level of a single device. This pioneering approach enables us to offer unmatched training and inference speeds, allowing machine learning practitioners to smoothly execute large-scale ML applications without the complexity of managing numerous GPUs or TPUs. Our clientele includes leading model laboratories, major global corporations, and innovative AI-native startups. Notably, OpenAI has recently partnered with Cerebras to leverage 750 megawatts of scale, revolutionizing critical workloads with ultra-high-speed inference. Our advanced wafer-scale architecture makes Cerebras Inference the fastest Generative AI inference solution available, outperforming GPU-based hyperscale cloud inference services by over tenfold. This remarkable speed enhancement is reshaping the user experience of AI applications, enabling real-time iterations and enhanced intelligence through additional agentic computation.

In late 2024, we launched Cerebras Inference, setting a new standard for Generative AI inference speed. Since its launch, we have rapidly scaled our services to meet the rising demand from AI labs, enterprises, and a vibrant developer community. In October 2025, we closed our Series G funding round, raising $1.1 billion USD to accelerate the growth of our product offerings and services to satisfy global AI demand.

About the Team
The Cerebras Inference team is dedicated to delivering the most efficient, secure, and reliable enterprise-grade AI service. We design and manage expansive distributed systems that facilitate AI inference with unparalleled speed and efficiency. Join us in scaling our inference capabilities to new heights!
Join CoreWeave as a Senior Software Engineer I specializing in inference, where you will spearhead architectural designs, elevate engineering standards, and significantly enhance latency, throughput, and reliability across various services. Collaborate closely with product, orchestration, and hardware teams to advance our Kubernetes-native inference platform, ensuring we achieve stringent P99 SLAs at scale.
Your Impact
Become an integral part of a dynamic team dedicated to developing cutting-edge cybersecurity solutions from inception to launch. Under the guidance of industry leaders with a history of success, you will have the chance to design, construct, and roll out innovative products that make a significant difference. This is a perfect opportunity for you to advance your career and enhance your skills alongside a world-class team from the very beginning.

Role Overview
In this pivotal role, you will design user experiences and implement user interfaces for a next-generation security product. This unique position merges both design and implementation of user experience, giving you the chance to utilize modern frontend technologies, explore the integration of AI for optimal user outcomes, and directly influence the success of an exciting new product line.
Join Intuitive Surgical as a Staff Value Engineer, where you'll have the opportunity to shape the future of minimally invasive surgery. Our team is dedicated to advancing surgical technology and improving patient outcomes. In this role, you will leverage your engineering expertise to analyze and optimize the value of our surgical systems.
We are seeking a talented and motivated Staff Supplier Engineer to join our dynamic team at Intuitive Surgical, Inc. In this role, you will play a crucial part in managing supplier relationships and ensuring the highest quality of materials and components for our innovative surgical systems. You will be responsible for evaluating suppliers, conducting audits, and collaborating closely with cross-functional teams to drive continuous improvement.
Join our dynamic team as a Staff Quality Engineer at Intuitive Surgical, where you will play a pivotal role in ensuring the highest standards of quality in our innovative medical devices. You will collaborate with cross-functional teams to enhance product reliability and maintain compliance with industry regulations. Your expertise will contribute to our mission of advancing minimally invasive surgical technologies.
Join Intuitive Surgical, a pioneering company at the forefront of robotic-assisted surgery, as a Staff Electrical Engineer. In this role, you will collaborate on innovative projects that enhance surgical precision and patient safety. Your expertise will help drive the development of cutting-edge medical devices that are transforming healthcare.
Join Intuitive Surgical, a pioneering leader in minimally invasive robotic-assisted surgery, as a Managing Staff Engineer. In this critical role, you will oversee engineering projects and lead a talented team of engineers to innovate and improve our surgical systems. You will have the opportunity to drive advancements in technology and contribute to transforming surgical practices across the globe.
Join Intuitive Surgical as a Staff Research Engineer and become a vital member of our innovative team. In this role, you will contribute to the development of advanced robotic systems designed to enhance surgical procedures. Your expertise will be crucial in pushing the boundaries of technology and improving patient outcomes.
Cerebras Systems is at the forefront of AI innovation, creating the world's largest AI chip, which is 56 times larger than traditional GPUs. Our unique wafer-scale architecture delivers the computational power of numerous GPUs on a single chip, simplifying programming while providing unparalleled training and inference speeds. This revolutionary approach enables users to run extensive machine learning applications effortlessly, eliminating the complexity of managing multiple GPUs or TPUs.

Cerebras serves a diverse clientele, including leading model labs, major global enterprises, and pioneering AI-native startups. Recently, OpenAI announced a multi-year partnership with Cerebras, aiming to deploy 750 megawatts of scale that will redefine key workloads with ultra-high-speed inference.

Our groundbreaking wafer-scale architecture ensures that Cerebras Inference provides the fastest Generative AI inference solution globally, achieving speeds that are over ten times faster than GPU-based hyperscale cloud services. This significant enhancement in performance is transforming the user experience of AI applications, facilitating real-time iteration and boosting intelligence through enhanced computational capabilities.

About The Role
We are seeking a Senior Performance Analyst to join our dynamic Product team. As a specialist in state-of-the-art inference performance, you will be the go-to expert on how Cerebras measures up against alternative inference providers in terms of pricing and performance. This role combines performance benchmarking from foundational principles with competitive intelligence. The position revolves around two key pillars:

Performance Benchmarking
You will develop, execute, and sustain reproducible benchmarks that assess Cerebras inference performance for actual customer workloads. This includes metrics such as tokens per second, time to first token, latency under concurrency, and total cost of ownership (TCO).

Competitive Analysis
You will analyze market trends and competitor offerings to position Cerebras effectively within the inference landscape.
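For readers unfamiliar with these metrics, a minimal sketch shows how they are typically computed from per-request traces. This is an illustrative example only, not Cerebras' actual benchmarking harness; the request data below is hypothetical.

```python
import math
import statistics

def p99(samples):
    """Nearest-rank 99th percentile of a non-empty list of latency samples."""
    ordered = sorted(samples)
    rank = max(1, math.ceil(0.99 * len(ordered)))  # 1-based nearest rank
    return ordered[rank - 1]

# Hypothetical per-request traces: (time_to_first_token_s, total_time_s, tokens_generated).
# A real benchmark would replay representative customer workloads under concurrency.
requests = [
    (0.12, 1.8, 512),
    (0.15, 2.1, 600),
    (0.09, 1.5, 480),
    (0.30, 3.2, 700),
]

ttft_values = [ttft for ttft, _, _ in requests]
throughputs = [tokens / total for _, total, tokens in requests]  # tokens per second

print(f"median TTFT:     {statistics.median(ttft_values):.3f} s")
print(f"p99 TTFT:        {p99(ttft_values):.3f} s")
print(f"mean tokens/sec: {statistics.mean(throughputs):.1f}")
```

Tail percentiles such as p99 matter more than means for interactive inference, because a small fraction of slow requests dominates perceived responsiveness under load.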
Illumio develops technology to contain ransomware and breaches, helping organizations guard against cyberattacks and maintain business continuity. The company’s breach containment platform uses the Illumio AI Security Graph to detect and isolate threats across hybrid and multi-cloud environments. Illumio is recognized for its leadership in microsegmentation and its commitment to the Zero Trust model, serving critical infrastructure and organizations worldwide. This Senior Staff Engineer - Cybersecurity position is based onsite in Sunnyvale, California, with work expected in the office five days a week.

Role overview
The engineering team at Illumio values leadership, autonomy, and ownership. Members work with a modern technology stack that spans operating systems, distributed applications, and advanced UI and visualization tools. The team is focused on building products that address today’s cybersecurity challenges, drawing on diverse perspectives and a shared drive for innovation.

What you will do
Develop containerized microservices for a distributed, multi-tenant system that processes data, real-time events, and network telemetry from multiple public clouds. These services provide customers with real-time insights, visibility, and security recommendations to reduce cloud risks. Design and architect platform components and subsystems, working through technical details, presenting and defending designs to peers, and ensuring thorough implementation. Mentor junior engineers, recent graduates, and interns to support their professional development and integration into the team. Write code primarily in Go and manage data pipelines using SQL or similar technologies. Familiarity with Kubernetes for service infrastructure is considered a plus. The team welcomes candidates with varied programming backgrounds who are eager to learn.
Take ownership of critical features and subsystems, overseeing the software development lifecycle from requirements to deployment and customer adoption. Contribute to operational excellence and help drive engineering efforts toward greater innovation and efficiency.
Role Overview
Intuitive Surgical, Inc. is hiring a Staff Software Engineer in Sunnyvale. This position focuses on designing, building, and maintaining software that supports surgical robotics and improves patient care.

What You Will Do
Develop and refine software solutions for surgical robotics systems. Work closely with teams from different disciplines to deliver reliable, high-quality products. Contribute technical expertise to projects that advance healthcare technology.
Join Us in Shaping the Future!
At Illumio, we are pioneers in ransomware and breach containment, transforming the way organizations tackle cyber threats and enhance operational resilience. Our innovative breach containment platform, fueled by the Illumio AI Security Graph, effectively identifies and mitigates threats across hybrid multi-cloud environments, preventing potential disasters before they escalate. As a recognized leader in the Forrester Wave™ for Microsegmentation, Illumio champions Zero Trust principles, bolstering cyber resilience for the critical infrastructure and systems that sustain global operations.

Our Vision:
Our Engineering team thrives on a culture of visionary leadership, autonomy, and ownership, fostering a collaborative environment that propels us forward in the rapidly evolving cybersecurity landscape. By joining our team, you will contribute to the leader in Zero Trust Segmentation, utilizing a cutting-edge technology stack that encompasses operating systems, distributed applications, and immersive UI/visualization tools. Together, we are shaping the future of cybersecurity, crafting world-class products led by diverse perspectives and a shared commitment to innovation amidst unprecedented cybersecurity challenges.

Your Responsibilities:
Architect cloud solutions that effectively address business challenges while balancing architectural integrity and business margins. Collaborate with cross-functional teams, including product managers, developers, and DevOps engineers, to understand business requirements and design scalable cloud architectures. Design, deploy, and manage cloud architectures adhering to industry best practices, with a focus on efficiency, scalability, availability, performance, and security. Assess and select suitable cloud technologies and platforms, including Kubernetes, to meet organizational needs and foster innovation. Enhance cloud-based systems for high availability, fault tolerance, and disaster recovery capabilities. Implement and oversee monitoring, logging, and alerting systems to ensure the health and performance of cloud infrastructure. Identify and rectify performance bottlenecks, security vulnerabilities, and operational challenges within the cloud environment. Stay abreast of the latest trends, technologies, and best practices in cloud computing, distributed systems, and cybersecurity.
Join Cerebras Systems as a Staff Frontend Engineer specializing in Inference. In this pivotal role, you will be instrumental in developing innovative solutions that push the boundaries of AI and machine learning. Your expertise will drive the design and implementation of user-friendly interfaces that enhance our cutting-edge technology.
Cerebras Systems is revolutionizing the AI landscape with the world's largest AI chip, which is 56 times more extensive than traditional GPUs. Our innovative wafer-scale architecture enables us to deliver the computational power of dozens of GPUs on a single chip, while offering the ease of programming like a single device. This groundbreaking approach empowers Cerebras to achieve unparalleled training and inference speeds, allowing machine learning practitioners to run large-scale ML applications effortlessly without the complexities of managing numerous GPUs or TPUs.Cerebras serves a diverse clientele that includes leading model laboratories, global corporations, and pioneering AI-focused startups. Recently, OpenAI announced a multi-year collaboration with Cerebras to harness 750 megawatts of scale, significantly enhancing key workloads through ultra-fast inference capabilities.With our cutting-edge wafer-scale architecture, Cerebras Inference provides the fastest Generative AI inference solution globally, exceeding the speed of GPU-based hyperscale cloud inference services by over ten times. This extraordinary speed transformation is reshaping the user experience of AI applications, facilitating real-time iterations and boosting intelligence through enhanced agentic computation.
Role Overview Cerebras Systems is looking for a Staff Software Engineer focused on Inference Cloud. This position is based in Sunnyvale, CA. What You Will Do Design, develop, and optimize software for inference products Work closely with team members to improve performance and reliability Apply advanced AI and machine learning methods to real-world challenges Collaboration Work alongside experienced engineers on projects that shape the future of inference technology at Cerebras Systems.
Join our dynamic team as a Staff Software Engineer specializing in Frontend development. We are seeking a talented individual with a robust background in building scalable e-commerce applications or mobile software. Your expertise in modern JavaScript frameworks and attention to detail will be instrumental in delivering high-quality web applications that enhance user experience.
At Cerebras Systems, we are revolutionizing AI computing by developing the world’s largest AI chip, which is 56 times larger than traditional GPUs. Our innovative wafer-scale architecture provides the computational power equivalent to dozens of GPUs on a single chip, simplifying programming to the level of a single device. This unique approach enables us to achieve unparalleled training and inference speeds, allowing machine learning practitioners to run large-scale ML applications without the complexity of managing multiple GPUs or TPUs.Our esteemed clientele includes leading model laboratories, prominent global enterprises, and forward-thinking AI-native startups. Notably, OpenAI has entered a multi-year partnership with Cerebras to leverage 750 megawatts of scale, enhancing critical workloads with ultra-high-speed inference.With our groundbreaking wafer-scale architecture, Cerebras Inference delivers the fastest Generative AI inference solution globally, outperforming GPU-based hyperscale cloud inference services by over tenfold. This dramatic increase in speed is transforming how users experience AI applications, facilitating real-time iterations and enhancing intelligence through additional agentic computation.Location: Toronto / SunnyvaleWe are seeking a highly technical, hands-on engineering leader for our Inference Service Platform. In this role, you will guide a high-performing team to address a critical challenge: scaling large language model (LLM) inference on Cerebras’ advanced compute clusters and delivering a world-class, on-premise solution for enterprise customers. You will establish the technical vision while maintaining close engagement with the code, focusing on architecting highly reliable and low-latency distributed systems. If you possess proven expertise in distributed systems and scaling modern model-serving frameworks, we encourage you to apply.
Cerebras Systems is at the forefront of AI technology, developing the world's largest AI chip that is 56 times greater than conventional GPUs. Our innovative wafer-scale architecture delivers the computational capabilities of numerous GPUs on a single chip, simplifying programming to the level of a single device. This groundbreaking approach enables Cerebras to achieve unmatched training and inference speeds, allowing machine learning practitioners to seamlessly execute large-scale ML applications without the complexities of managing extensive GPU or TPU resources. Our clientele includes leading model laboratories, global corporations, and pioneering AI-centric startups. Notably, OpenAI has recently entered into a multi-year partnership with Cerebras, aiming to deploy 750 megawatts of capacity, revolutionizing key workloads with exceptionally rapid inference speeds. Thanks to our extraordinary wafer-scale architecture, Cerebras Inference provides the swiftest Generative AI inference solution available today, operating over ten times faster than GPU-based hyperscale cloud inference services. This significant boost in speed is reshaping the user experience in AI applications, facilitating real-time iterations and enhancing intelligence through advanced agentic computation. About The Role We are looking for an exceptionally talented Deployment Engineer to design and manage our state-of-the-art inference clusters. In this role, you will have the opportunity to work with the unparalleled Wafer-Scale Engine (WSE) and the systems that exploit its extraordinary capabilities.
Cerebras Systems is at the forefront of AI innovation, creating the world’s largest AI chip, which is 56 times larger than traditional GPUs. Our groundbreaking wafer-scale architecture delivers the computational power equivalent to dozens of GPUs on a single chip, combined with the programming simplicity of a unified device. This innovative approach allows us to offer unparalleled training and inference speeds, enabling machine learning practitioners to execute extensive ML applications seamlessly, without the complexities of managing multiple GPUs or TPUs.Cerebras boasts an impressive clientele, including premier model labs, global corporations, and pioneering AI startups. Recently, OpenAI announced a multi-year partnership with Cerebras, aimed at deploying 750 megawatts of scale, revolutionizing critical workloads with ultra-fast inference capabilities.Our unique wafer-scale architecture enables Cerebras Inference to provide the fastest Generative AI inference solution globally, surpassing GPU-based hyperscale cloud inference services by more than tenfold. This remarkable enhancement in speed is reshaping the AI application user experience, facilitating real-time iteration and boosting intelligence through enhanced computational capabilities.About The RoleThe Inference ML Engineering team at Cerebras Systems is committed to empowering our rapid generative inference solution through intuitive APIs, supported by a distributed runtime that operates on extensive clusters of our proprietary hardware. Our goal is to enable enterprises, developers, and researchers to fully harness the capabilities of our platform, leveraging its exceptional performance, scalability, and flexibility. 
The team collaborates closely with cross-functional groups, including compiler developers, cluster orchestrators, ML scientists, cloud architects, and product teams, to deliver impactful solutions that redefine the limits of ML performance and usability.As a Senior Software Engineer on the Inference ML Engineering team, you will be instrumental in designing and implementing APIs, ML features, and tools that facilitate the execution of state-of-the-art generative AI models on our custom hardware. Your role will involve architecting solutions that allow for seamless model translation and execution, ensuring high throughput and minimal latency while maintaining user-friendliness. You will lead technical initiatives and collaborate with other engineering teams to enhance our solutions.
Full-time|Remote|Remote Office; Sunnyvale CA or Toronto Canada
Cerebras Systems is at the forefront of AI innovation, manufacturing the largest AI chip in the world, which is 56 times bigger than conventional GPUs. Our cutting-edge wafer-scale architecture provides the computational power equivalent to dozens of GPUs on a single chip, simplifying programming to the level of a single device. This pioneering approach enables us to offer unmatched training and inference speeds, allowing machine learning practitioners to smoothly execute large-scale ML applications without the complexity of managing numerous GPUs or TPUs. Our clientele includes leading model laboratories, major global corporations, and innovative AI-native startups. Notably, OpenAI has recently partnered with Cerebras to leverage 750 megawatts of scale, revolutionizing critical workloads with ultra-high-speed inference. Our advanced wafer-scale architecture makes Cerebras Inference the fastest Generative AI inference solution available, outperforming GPU-based hyperscale cloud inference services by over tenfold. This remarkable speed enhancement is reshaping the user experience of AI applications, enabling real-time iterations and enhanced intelligence through additional agentic computation.In late 2024, we launched Cerebras Inference, setting a new standard for Generative AI inference speed. Since its launch, we have rapidly scaled our services to meet the rising demand from AI labs, enterprises, and a vibrant developer community.In October 2025, we celebrated our Series G funding round, successfully raising $1.1 billion USD to accelerate the growth of our product offerings and services to satisfy global AI demand.About the TeamThe Cerebras Inference team is dedicated to delivering the most efficient, secure, and reliable enterprise-grade AI service. We design and manage expansive distributed systems that facilitate AI inference with unparalleled speed and efficiency. Join us in scaling our inference capabilities to new heights!
Join CoreWeave as a Senior Software Engineer I specializing in inference, where you will spearhead architectural designs, elevate engineering standards, and significantly enhance latency, throughput, and reliability across various services. Collaborate closely with product, orchestration, and hardware teams to advance our Kubernetes-native inference platform, ensuring we achieve stringent P99 SLAs at scale.
Your Impact
Become an integral part of a dynamic team developing cutting-edge cybersecurity solutions from inception to launch. Under the guidance of industry leaders with a track record of success, you will design, build, and launch innovative products that make a significant difference. This is an ideal opportunity to advance your career and sharpen your skills alongside a world-class team from the very beginning.

Role Overview
In this pivotal role, you will design user experiences and implement user interfaces for a next-generation security product. This unique position merges both the design and the implementation of user experience, giving you the chance to use modern frontend technologies, explore integrating AI for optimal user outcomes, and directly influence the success of an exciting new product line.
Join Intuitive Surgical as a Staff Value Engineer, where you'll have the opportunity to shape the future of minimally invasive surgery. Our team is dedicated to advancing surgical technology and improving patient outcomes. In this role, you will leverage your engineering expertise to analyze and optimize the value of our surgical systems.
We are seeking a talented and motivated Staff Supplier Engineer to join our dynamic team at Intuitive Surgical, Inc. In this role, you will play a crucial part in managing supplier relationships and ensuring the highest quality of materials and components for our innovative surgical systems. You will be responsible for evaluating suppliers, conducting audits, and collaborating closely with cross-functional teams to drive continuous improvement.
Join our dynamic team as a Staff Quality Engineer at Intuitive Surgical, where you will play a pivotal role in ensuring the highest standards of quality in our innovative medical devices. You will collaborate with cross-functional teams to enhance product reliability and maintain compliance with industry regulations. Your expertise will contribute to our mission of advancing minimally invasive surgical technologies.
Join Intuitive Surgical, a pioneering company at the forefront of robotic-assisted surgery, as a Staff Electrical Engineer. In this role, you will collaborate on innovative projects that enhance surgical precision and patient safety. Your expertise will help drive the development of cutting-edge medical devices that are transforming healthcare.
Join Intuitive Surgical, a pioneering leader in minimally invasive robotic-assisted surgery, as a Managing Staff Engineer. In this critical role, you will oversee engineering projects and lead a talented team of engineers to innovate and improve our surgical systems. You will have the opportunity to drive advancements in technology and contribute to transforming surgical practices across the globe.
Join Intuitive Surgical as a Staff Research Engineer and become a vital member of our innovative team. In this role, you will contribute to the development of advanced robotic systems designed to enhance surgical procedures. Your expertise will be crucial in pushing the boundaries of technology and improving patient outcomes.
Cerebras Systems is at the forefront of AI innovation, creating the world's largest AI chip, 56 times larger than traditional GPUs. Our wafer-scale architecture delivers the computational power of dozens of GPUs on a single chip, simplifying programming while providing unparalleled training and inference speeds. This approach lets users run large-scale machine learning applications without the complexity of managing multiple GPUs or TPUs.

Cerebras serves a diverse clientele, including leading model labs, major global enterprises, and pioneering AI-native startups. Recently, OpenAI announced a multi-year partnership with Cerebras to deploy 750 megawatts of scale, redefining key workloads with ultra-high-speed inference. Our wafer-scale architecture makes Cerebras Inference the fastest Generative AI inference solution available, achieving speeds more than ten times faster than GPU-based hyperscale cloud services. This performance is transforming the user experience of AI applications, enabling real-time iteration and greater intelligence through additional computation.

About the Role
We are seeking a Senior Performance Analyst to join our Product team. As a specialist in state-of-the-art inference performance, you will be the go-to expert on how Cerebras compares with alternative inference providers on price and performance. The role combines performance benchmarking from first principles with competitive intelligence, and rests on two pillars:

Performance Benchmarking
You will develop, run, and maintain reproducible benchmarks that assess Cerebras inference performance on real customer workloads, covering metrics such as tokens per second, time to first token, latency under concurrency, and total cost of ownership (TCO).

Competitive Analysis
You will analyze market trends and competitor offerings to position Cerebras effectively within the inference landscape.
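The two core streaming metrics named here, time to first token (TTFT) and tokens per second, can be sketched with a short timing harness. This is a minimal illustration, not any specific Cerebras or vendor API: `fake_stream` is a hypothetical stand-in for a real streaming inference client, and the timing constants are invented for the example.

```python
import time
from dataclasses import dataclass


@dataclass
class InferenceStats:
    ttft_s: float        # time from request start to first token
    tokens_per_s: float  # decode throughput after the first token
    total_s: float       # end-to-end latency for the whole response


def benchmark_stream(token_stream):
    """Consume a token iterator, recording TTFT and decode throughput."""
    start = time.perf_counter()
    first = None
    count = 0
    for _tok in token_stream:
        if first is None:
            first = time.perf_counter()
        count += 1
    end = time.perf_counter()
    decode = end - first
    tps = (count - 1) / decode if count > 1 and decode > 0 else float("nan")
    return InferenceStats(ttft_s=first - start, tokens_per_s=tps, total_s=end - start)


def fake_stream(n_tokens=50, ttft=0.02, per_token=0.001):
    """Hypothetical stand-in for a streaming model response."""
    time.sleep(ttft)  # simulated prefill delay before the first token
    for i in range(n_tokens):
        if i:
            time.sleep(per_token)  # simulated per-token decode delay
        yield f"tok{i}"


stats = benchmark_stream(fake_stream())
print(f"TTFT: {stats.ttft_s * 1000:.1f} ms, throughput: {stats.tokens_per_s:.0f} tok/s")
```

In a real benchmark the same harness would wrap an actual streaming API call, be run at multiple concurrency levels, and feed a cost model to derive TCO per million tokens.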
Illumio develops technology to contain ransomware and breaches, helping organizations guard against cyberattacks and maintain business continuity. The company's breach containment platform uses the Illumio AI Security Graph to detect and isolate threats across hybrid and multi-cloud environments. Illumio is recognized for its leadership in microsegmentation and its commitment to the Zero Trust model, serving critical infrastructure and organizations worldwide. This Senior Staff Engineer - Cybersecurity position is based onsite in Sunnyvale, California, with work expected in the office five days a week.

Role overview
The engineering team at Illumio values leadership, autonomy, and ownership. Members work with a modern technology stack that spans operating systems, distributed applications, and advanced UI and visualization tools. The team is focused on building products that address today's cybersecurity challenges, drawing on diverse perspectives and a shared drive for innovation.

What you will do
Develop containerized microservices for a distributed, multi-tenant system that processes data, real-time events, and network telemetry from multiple public clouds. These services give customers real-time insights, visibility, and security recommendations to reduce cloud risk.
Design and architect platform components and subsystems, working through technical details, presenting and defending designs to peers, and ensuring thorough implementation.
Mentor junior engineers, recent graduates, and interns to support their professional development and integration into the team.
Write code primarily in Go and manage data pipelines using SQL or similar technologies. Familiarity with Kubernetes for service infrastructure is a plus; the team welcomes candidates with varied programming backgrounds who are eager to learn.
Take ownership of critical features and subsystems, overseeing the software development lifecycle from requirements through deployment and customer adoption.
Contribute to operational excellence and help drive engineering efforts toward greater innovation and efficiency.
Role Overview
Intuitive Surgical, Inc. is hiring a Staff Software Engineer in Sunnyvale. This position focuses on designing, building, and maintaining software that supports surgical robotics and improves patient care.

What You Will Do
Develop and refine software solutions for surgical robotics systems
Work closely with teams from different disciplines to deliver reliable, high-quality products
Contribute technical expertise to projects that advance healthcare technology
Join Us in Shaping the Future!
At Illumio, we are pioneers in ransomware and breach containment, transforming the way organizations tackle cyber threats and build operational resilience. Our breach containment platform, powered by the Illumio AI Security Graph, identifies and mitigates threats across hybrid multi-cloud environments, stopping potential disasters before they escalate. As a recognized leader in the Forrester Wave™ for Microsegmentation, Illumio champions Zero Trust principles, strengthening cyber resilience for the critical infrastructure and systems that sustain global operations.

Our Vision
Our Engineering team thrives on a culture of visionary leadership, autonomy, and ownership, fostering a collaborative environment in a rapidly evolving cybersecurity landscape. By joining our team, you will contribute to the leader in Zero Trust Segmentation, working with a modern technology stack that spans operating systems, distributed applications, and immersive UI and visualization tools. Together, we are shaping the future of cybersecurity, building world-class products led by diverse perspectives and a shared commitment to innovation amid unprecedented cybersecurity challenges.

Your Responsibilities
Architect cloud solutions that address business challenges while balancing architectural integrity and business margins.
Collaborate with cross-functional teams, including product managers, developers, and DevOps engineers, to understand business requirements and design scalable cloud architectures.
Design, deploy, and manage cloud architectures that follow industry best practices, with a focus on efficiency, scalability, availability, performance, and security.
Assess and select suitable cloud technologies and platforms, including Kubernetes, to meet organizational needs and foster innovation.
Enhance cloud-based systems for high availability, fault tolerance, and disaster recovery.
Implement and oversee monitoring, logging, and alerting systems to ensure the health and performance of cloud infrastructure.
Identify and fix performance bottlenecks, security vulnerabilities, and operational issues in the cloud environment.
Stay current with the latest trends, technologies, and best practices in cloud computing, distributed systems, and cybersecurity.
Mar 24, 2025