Experience Level
Entry Level
Qualifications
Strong foundation in machine learning algorithms and frameworks
Experience with Python and data processing libraries such as Pandas and NumPy
Familiarity with cloud platforms (e.g., AWS, Azure) and deployment tools
Ability to work collaboratively in a team-oriented environment
Excellent problem-solving skills and attention to detail
About the job
Achira is seeking a Machine Learning Research Engineer to help improve workflows and systems for artificial intelligence projects. This position is based in the San Francisco office.
Role overview
This role centers on developing and refining machine learning pipelines. The focus is on efficient deployment and scaling of AI models in production environments. Collaboration with colleagues from different disciplines is a key part of the work, aiming to bring forward new ideas and solid practices in machine learning systems.
What you will do
Design and optimize machine learning workflows for better performance and scalability
Work closely with cross-functional teams to implement improvements in AI systems
Support the deployment process, helping ensure models run efficiently in real-world settings
Location
This position is based at Achira's San Francisco office.
About Achira
Achira is a leading organization at the forefront of AI technology, dedicated to developing innovative solutions that enhance the efficiency of workflows and systems. Our team comprises passionate individuals who strive to push the boundaries of what's possible in machine learning and artificial intelligence.
Account Executive
Location: San Francisco
Employment Type: Full time
Department: Operations
Empowering the Future of Open Superintelligence
At Prime Intellect, we are at the forefront of developing a cutting-edge open superintelligence stack, bridging the gap between advanced agentic models and accessible infrastructure. Our platform integrates global computing resources into a unified control plane, complemented by a comprehensive reinforcement learning (RL) post-training suite that includes secure environments, verifiable evaluations, and our innovative asynchronous RL trainer. We empower researchers, startups, and enterprises to execute end-to-end reinforcement learning at an unprecedented scale, aligning models with practical tools, workflows, and deployment contexts.
Recently, we secured $15 million in funding (totaling $20 million) led by Founders Fund, with contributions from Menlo Ventures and distinguished angel investors including Andrej Karpathy (Eureka AI, Tesla, OpenAI), Tri Dao (Chief Scientific Officer of Together AI), Dylan Patel (SemiAnalysis), Clem Delangue (Huggingface), and Emad Mostaque (Stability AI), among others.
Your Opportunity
As the Director of Growth Strategy, you will spearhead our efforts to connect innovative technology with market opportunities. This role encompasses leadership in sales, marketing, partnerships, and customer success initiatives. You'll be instrumental in shaping how Prime Intellect communicates its mission, securing substantial post-training and compute contracts, and developing systems for efficient revenue scaling.
This dynamic position is ideal for a strategic and execution-focused leader who excels at the intersection of infrastructure and artificial intelligence.
Key Responsibilities
Strategic Leadership
Establish and manage a cross-functional Growth team encompassing Sales, Marketing, Partnerships, and Customer Success
Craft and implement our go-to-market strategy for RL infrastructure, including pricing, packaging, and positioning
Oversee revenue forecasting, pipeline management, and operational frameworks (CRM, dashboards, forecasting tools)
Create data-driven systems to enhance deal tracking, reporting, and forecasting accuracy
Sales & Partnerships Development
Lead complex enterprise sales processes for post-training and multi-node cluster agreements (64+ GPUs, 6+ month engagements)
Develop scalable playbooks for converting design partners and ensuring seamless enterprise onboarding
Forge strategic alliances with compute providers, AI companies, and research institutions to extend our ecosystem
Prime Intellect is at the forefront of developing an open superintelligence framework, integrating advanced agentic models and the infrastructure necessary for seamless creation, training, and deployment. We harness global computing resources into a unified control plane, complemented by our comprehensive reinforcement learning post-training stack, which includes environments, secure sandboxes, verifiable evaluations, and an asynchronous RL trainer. Our mission is to empower researchers, startups, and enterprises to conduct end-to-end reinforcement learning at unprecedented scales, effectively adapting models for real-world applications and workflows.
We are seeking outstanding interns who have demonstrated their ability to build robust systems, contribute to open-source projects, or excel in various technical fields. If you excel in tackling complex, open-ended problems and are eager to advance the capabilities of open, distributed AI, we would love to connect with you. Whether your expertise lies in AI, systems engineering, distributed computing, cryptography, or other innovative areas, we value rapid learning, effective execution, and critical thinking.
Recently, we secured $15 million in our latest funding round, bringing our total funding to $20 million, with support from prominent investors like Founders Fund, Menlo Ventures, and notable angels including Andrej Karpathy, Tri Dao, Dylan Patel, Clem Delangue, and Emad Mostaque.
Share with us what inspires you about Prime Intellect, highlight a remarkable project you've undertaken, and tell us how you envision accelerating decentralized AGI.
On-site | San Francisco, CA | New York City, NY | Seattle, WA
Join Anthropic as an Infrastructure Engineer on our Sandboxing team, where you'll play a pivotal role in building and scaling secure execution environments for AI research. Your expertise will ensure that researchers can safely experiment with AI-generated code in isolated settings. As our models advance, the infrastructure that supports these environments becomes increasingly vital. Your contributions will help maintain security and reliability at scale, directly aligning with our mission to develop trustworthy and beneficial AI systems.
At Prime Intellect, we are pioneering the development of an open superintelligence framework, encompassing everything from cutting-edge agentic models to the infrastructure that empowers individuals to create, train, and deploy these advanced systems. Our innovative platform consolidates global computational resources into a unified control plane and integrates a comprehensive Reinforcement Learning (RL) post-training suite, including environments, secure sandboxes, verifiable evaluations, and our asynchronous RL trainer. We provide researchers, startups, and enterprises with the tools necessary to execute end-to-end reinforcement learning at the forefront of technology, seamlessly adapting models to real-world applications, workflows, and deployment scenarios.
We are eager to connect with individuals who have successfully crafted intricate technical systems, made significant contributions to open-source initiatives, or who possess expertise across diverse domains. Whether your specialization lies in AI, distributed computing, cryptography, systems programming, or an unconventional area, your ability to learn rapidly, think critically, and execute effectively is what we value most.
Recently, we secured $15 million in funding (bringing our total to $20 million), spearheaded by Founders Fund, with contributions from Menlo Ventures and notable angels such as Andrej Karpathy (Eureka AI, Tesla, OpenAI), Tri Dao (Chief Scientific Officer of Together AI), Dylan Patel (SemiAnalysis), Clem Delangue (Huggingface), and Emad Mostaque (Stability AI), among others.
We invite you to share your enthusiasm for Prime Intellect, highlight an impressive project you've undertaken, and describe how you envision advancing open and decentralized AGI.
Join Our Mission to Build Open Superintelligence Infrastructure
At Prime Intellect, we are pioneering the development of an open superintelligence stack that encompasses cutting-edge agentic models and the infrastructure that empowers anyone to create, train, and deploy these advanced AI systems. Our innovative approach aggregates and orchestrates global computational resources into a cohesive control plane, complemented by a comprehensive reinforcement learning (RL) post-training toolkit that includes environments, secure sandboxes, verifiable evaluations, and our asynchronous RL trainer. We provide researchers, startups, and enterprises with the capabilities to execute end-to-end reinforcement learning at unparalleled scale, adapting models to real-world tools, workflows, and deployment scenarios.
As a Solutions Architect for GPU Infrastructure, you will be the technical authority responsible for translating customer needs into robust, production-ready systems designed to train the world's most sophisticated AI models.
With a recent funding round raising $15 million (totaling $20 million) led by Founders Fund, alongside contributions from Menlo Ventures and illustrious angels such as Andrej Karpathy (Tesla, OpenAI), Tri Dao (Together AI), Dylan Patel (SemiAnalysis), Clem Delangue (Huggingface), and Emad Mostaque (Stability AI), we are poised for significant growth and innovation.
Key Technical Responsibilities
This role requires a blend of deep technical knowledge and hands-on implementation skills.
Your contributions will be crucial in:
Customer Architecture & Design
Collaborating with clients to comprehend workload specifications and architect optimal GPU cluster solutions
Drafting technical proposals and conducting capacity planning for clusters ranging from 100 to over 10,000 GPUs
Formulating deployment strategies for large language model (LLM) training, inference, and high-performance computing (HPC) tasks
Delivering architectural recommendations to both technical teams and executive stakeholders
Infrastructure Deployment & Optimization
Implementing and configuring orchestration frameworks such as SLURM and Kubernetes for distributed workloads
Establishing high-performance networking through InfiniBand, RoCE, and NVLink interconnects
Enhancing GPU utilization, memory management, and inter-node communication
Setting up parallel file systems (Lustre, BeeGFS, GPFS) to maximize I/O efficiency
Tuning system performance, from kernel parameters to CUDA configurations
Production Operations & Support
Ensuring the reliability and performance of GPU infrastructure through continuous monitoring and support
Collaborating with cross-functional teams to troubleshoot and optimize operational workflows
Documenting processes and creating training materials for team members and clients
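To give a concrete flavor of the orchestration work this role involves, here is a minimal sketch of a SLURM batch script for a multi-node GPU training job. The partition name, resource counts, and `train.py` script are hypothetical placeholders, not values from any actual Prime Intellect cluster:

```shell
#!/bin/bash
# Hypothetical SLURM batch script for a multi-node distributed training job.
# Partition, time limit, and script paths are illustrative placeholders.
#SBATCH --job-name=llm-train
#SBATCH --partition=gpu
#SBATCH --nodes=8                 # 8 nodes x 8 GPUs = 64 GPUs total
#SBATCH --gpus-per-node=8
#SBATCH --ntasks-per-node=8       # one task (process) per GPU
#SBATCH --cpus-per-task=12
#SBATCH --time=72:00:00

# Derive a rendezvous address for distributed launchers from the node list
export MASTER_ADDR=$(scontrol show hostnames "$SLURM_JOB_NODELIST" | head -n 1)
export MASTER_PORT=29500

# srun starts one training process per allocated task across all nodes
srun python train.py --config config.yaml
```

Submitted with `sbatch`, a script like this requests the allocation, exports rendezvous variables that most distributed training launchers expect, and lets `srun` fan the training processes out across the cluster.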