About the role
Cerebras Systems builds the largest AI chip available: a wafer-scale processor that consolidates the compute power of dozens of GPUs into a single device. This architecture lets machine learning teams run large-scale applications without managing complex GPU or TPU clusters.
The company collaborates with a range of organizations, including research labs, global enterprises, and AI-focused startups. OpenAI has entered a multi-year partnership with Cerebras to deploy 750 megawatts of compute for high-speed inference workloads. The Cerebras Inference platform delivers generative AI inference significantly faster than typical GPU-based cloud services, enabling real-time AI interactions and more advanced agentic computation.
Growth Team overview
The Growth Team leads AI adoption at Cerebras. This multidisciplinary group spans product development, engineering, and marketing. Members design agentic workflows, internal knowledge systems, and developer infrastructure to support teams such as kernel engineering, design verification, and cloud platform engineering. The team’s stack includes Claude Code, MCP (Model Context Protocol), RAG pipelines, and multi-agent systems, and the team works closely with the chip and inference platform teams.
Role overview
The AI Engineering Intern position on the Growth Team is a summer internship based in either Sunnyvale, CA or Toronto, Canada. Interns will work alongside engineers to build AI tools that streamline hardware and software development workflows. These tools will be used across the company, directly shaping day-to-day engineering work.
Location
Sunnyvale, CA or Toronto, Canada

