Qualifications
Proven experience in technical program management, preferably within AI or technology sectors.
Strong understanding of AI alignment principles and methodologies.
Excellent communication skills, with the ability to convey complex technical concepts to non-technical stakeholders.
Ability to lead teams and foster collaboration across diverse groups.
Experience with project management tools and methodologies.
About the job
Anthropic is hiring a Technical Program Manager focused on Alignment. This role centers on guiding projects that help keep AI systems safe and beneficial. The position is based in either San Francisco, CA or New York City, NY.
What You Will Do
Oversee alignment-related projects, ensuring they move forward smoothly and meet safety objectives.
Work closely with teams across disciplines to set project scopes and define clear timelines.
Coordinate initiatives that support Anthropic’s strategic direction for AI safety and benefit.
About Anthropic
Anthropic is a leading AI safety and research company dedicated to developing advanced AI systems that prioritize safety and ethical considerations. We are committed to building a collaborative and inclusive work environment where innovation thrives and our team members can grow.
Join Our Innovative Team
At OpenAI, our Alignment team is committed to building AI systems that prioritize safety, trustworthiness, and alignment with human values, even as these systems evolve and grow in complexity. We are at the forefront of AI research, developing advanced methodologies to ensure that AI adheres to human intent across diverse scenarios, including high-stakes and adversarial environments. Our focus is on tackling the most critical challenges, addressing areas where AI can have profound impacts. By quantifying risks and making meaningful improvements, we aim to prepare our models for the complexities of real-world applications.
Our approach is built on two foundational pillars: (1) integrating enhanced capabilities into alignment, ensuring our techniques evolve positively with increasing capabilities, and (2) centering human input through the development of mechanisms that allow humans to communicate their intent and effectively monitor AI systems, even in intricate situations.
Your Role in Shaping the Future
As a Research Engineer / Scientist on our Alignment team, you will play a pivotal role in ensuring our AI systems align with human intent in complex and unpredictable contexts. Your responsibilities will include designing and implementing scalable solutions that maintain alignment as AI capabilities expand, while incorporating human oversight into AI decision-making processes.
This position is based in San Francisco, CA, and follows a hybrid work model of three days in the office each week. We also offer relocation assistance to new team members.
Key Responsibilities
Develop and assess alignment capabilities that are context-sensitive, subjective, and challenging to quantify.
Create evaluations to accurately measure risks and alignment with human values and intentions.
Construct tools and evaluations to examine model robustness across various scenarios.
Design experiments to explore how alignment scales with compute resources, data, context lengths, actions, and adversarial influences.
Innovate new Human-AI interaction frameworks and scalable supervision methods that enhance human engagement and understanding of AI systems.
Team focus
The Alignment Science team at OpenAI works on intent alignment for artificial intelligence. Their goal is to develop models that accurately interpret and follow user requests, while maintaining high standards for safety and transparency. As AI models become more advanced, the team prioritizes keeping them honest about their capabilities and limitations, ensuring close alignment with user intent. Research spans both theoretical and applied domains. The team shares findings publicly and integrates new alignment techniques into OpenAI's deployed models. Recent efforts have targeted model honesty, studying how models admit mistakes, avoid generating false information, and resist manipulation. The team is looking for scalable solutions to improve instruction following and reliability in AI systems. Quantitative research is a core part of this work, especially reinforcement learning and related training and evaluation methods that support safer, more reliable AI interactions.
Role overview
This Researcher in Alignment Science position (which may be titled Research Engineer or Research Scientist) centers on designing and running experiments to improve how models follow user intent. Responsibilities include developing training protocols, building evaluation frameworks, and strengthening research infrastructure to support effective alignment in new models. The job is based in San Francisco, CA, with a hybrid schedule requiring three days per week in the office. OpenAI provides relocation support for new hires. Exceptional remote candidates who can work independently and collaborate closely with the team will also be considered.
Main responsibilities
Design and conduct experiments on alignment techniques, including intent following, honesty, calibration, and robustness.
Train and assess models using reinforcement learning and other empirical machine learning approaches.
Develop evaluation metrics for failure modes such as hallucination, compliance gaps, reward exploitation, and covert actions.
Investigate methods to encourage models to self-verify and report limitations honestly, including confession-style training objectives.
Create monitoring tools and interventions at inference time to help models act as intended.
About Our Team
The Future of Computing Research team is a dynamic applied research unit within the Consumer Devices group at OpenAI. We are dedicated to pioneering innovative methods, models, and evaluation frameworks that propel our vision for the future of computing. Our focus lies at the cutting edge of multimodal AI, transforming emerging model capabilities into product experiences that are not only functional and enjoyable but also foster long-term trust.
Our research delves into a new generation of AI systems capable of learning and evolving over time, adapting to individual needs, and enhancing daily life. This includes exploring long-term memory, user modeling, and personalized systems aligned with broader human goals, values, and overall well-being.
We collaborate closely across multiple disciplines (research, engineering, design, product management, and safety) to define what it means to build AI systems that recognize and respond to user needs in a contextually aware and respectful manner, ensuring demonstrable benefits.
About the Position
We are seeking a passionate Research Engineer/Scientist to join our Future of Computing Research team, focusing on Reinforcement Learning from Human Feedback (RLHF) and post-training techniques for personalized multimodal AI systems.
In this role, you will be instrumental in establishing the learning and evaluation foundations necessary for models to become increasingly context-aware, adaptive, and useful over time. You will tackle challenges such as reward modeling, preference learning, long-horizon evaluation, and policy improvement for systems that are required to make high-quality behavioral decisions in real-world settings. Our success is measured not just by improved benchmark performance but by enhanced model behavior in actual use cases.
The ideal candidate is enthusiastic about advancing beyond simplistic one-turn assistant interactions towards systems that learn and grow through feedback, utilizing richer signals and training against meaningful notions of user value. This requires a thoughtful approach to reward design, feedback mechanisms, and evaluation frameworks that assess the long-term benefits of interventions.
This position is based in San Francisco, CA, with a hybrid work model of four days in the office each week. We also provide relocation assistance for new hires.
Key Responsibilities
Develop RLHF and post-training strategies for multimodal models.
Create reward models and preference-learning pipelines to foster adaptive, personalized model behavior.
Engage in long-term evaluation and policy refinement to enhance user interactions.
About Anthropic
At Anthropic, we are driven by our mission to develop reliable, interpretable, and steerable AI systems. Our commitment is to ensure that AI is safe and beneficial not only for our users but also for society as a whole. Our rapidly expanding team comprises dedicated researchers, engineers, policy specialists, and business leaders collaborating to create impactful AI technologies.
About the Role
As a Research Engineer focusing on Alignment Science, you will design and execute sophisticated machine learning experiments aimed at understanding and guiding the behavior of advanced AI systems. Your passion lies in making AI systems helpful, honest, and safe, particularly in the face of challenges posed by human-level capabilities. You embody both the scientific and engineering mindsets. In this role, you will engage in exploratory research on AI safety, concentrating on risks associated with future powerful systems (such as those classified as ASL-3 or ASL-4 under our Responsible Scaling Policy), often working in collaboration with teams focused on Interpretability, Fine-Tuning, and the Frontier Red Team. Discover more about our current research topics and insights on our blog, as we delve into pressing issues such as:
Scalable Oversight: Innovating techniques to ensure that highly capable models remain helpful and truthful, even as they exceed human-level intelligence.
AI Control: Developing strategies to maintain the safety and harmlessness of advanced AI systems in novel or adversarial environments.
Alignment Stress Testing: Implementing rigorous testing frameworks to evaluate AI alignment under various conditions.
About Our Team
The Alignment team at OpenAI is dedicated to ensuring our AI systems are capable of recursive self-improvement while consistently aligning with human intents in complex real-world scenarios. We focus on developing AI that avoids catastrophic outcomes, remains controllable, auditable, and fundamentally aligned with human values as our technological capabilities grow.
About the Position
We are seeking a skilled Program Manager to enhance OpenAI’s alignment and safety initiatives through effective program execution, relationship management, and operational leadership. This role involves close collaboration with alignment leadership to manage key external programs and partnerships, streamline coordination among collaborators, and address ongoing operational needs in a dynamic research environment.
This position is based in San Francisco, CA, following a hybrid work model of three days in the office each week. We also provide relocation assistance for new hires.
Key Responsibilities
Manage logistics and execution for alignment-related events, coordinating with researchers, external participants, and internal stakeholders.
Act as the operational liaison for third-party collaborations, overseeing program management, contract coordination, follow-up, and cross-functional tracking.
Foster and oversee external collaborations within the alignment ecosystem, including research partnerships related to misalignment or shared infrastructure initiatives.
Assist in recruiting for alignment ecosystem roles by sourcing, mapping, and engaging with trusted candidates and communities.
Serve as the Program Manager counterpart for the Alignment blog, facilitating publishing operations, editorial coordination, and drafting support as needed.
Support compute management processes for the team, ensuring consistent coordination, tracking, and operational follow-up.
If the pilot program succeeds, oversee the Safety Fellows program from start to finish, including the selection process, participant support, programming, and operational management.
Ideal Candidate Profile
Possess 4+ years of experience in program management, operations, partnerships, or related fields within research, policy, or technical environments.
Demonstrate excellent organizational and multitasking skills, with a keen ability to work in fast-paced environments.
Show strong interpersonal skills and a commitment to collaborative teamwork.
Full-time | Hybrid | San Francisco, CA | New York City, NY
Remote | Remote-Friendly (Travel Required) | San Francisco, CA
Join Anthropic as a Senior Research Scientist on our Reward Models team, where you will spearhead groundbreaking research aimed at enhancing our understanding of human preferences at scale. Your innovative contributions will directly influence how our AI models, including Claude, align with human values and optimize for user needs. You will delve into the forefront of reward modeling for large language models, designing novel architectures and training methodologies for Reinforcement Learning from Human Feedback (RLHF). Your research will explore advanced evaluation techniques, including rubric-based grading, and tackle challenges such as reward hacking. Collaboration is key, as you'll work alongside teams in Finetuning, Alignment Science, and our broader research organization to ensure your findings result in tangible advancements in AI capabilities and safety. This role offers you an opportunity to address critical AI alignment challenges, leveraging cutting-edge models and substantial computational resources to further the science of safe and capable AI systems.
Join Anthropic as a Research Engineer focusing on Economic Research. In this role, you will leverage your analytical skills to conduct in-depth economic analysis and contribute to innovative projects aimed at enhancing our understanding of economic models and their implications.
About the Team
Join the innovative Post-Training team at OpenAI, where we focus on refining and elevating pre-trained models for deployment in ChatGPT, our API, and future products. Collaborating closely with various research and product teams, we conduct crucial research that prepares our models for real-world deployment to millions of users, ensuring they are safe, efficient, and reliable.
About the Role
As a Research Engineer / Scientist, you will spearhead the research and development of enhancements to our models. Our work intersects reinforcement learning and product development, aiming to create cutting-edge solutions.
We seek passionate individuals with robust machine learning engineering skills and research experience, particularly with innovative and powerful models. The ideal candidate will be driven by a commitment to product-oriented research.
This position is located in San Francisco, CA, and follows a hybrid work model requiring three days in the office each week. Relocation assistance is available for new employees.
In this role, you will:
Lead and execute a research agenda aimed at enhancing model capabilities and performance.
Work collaboratively with research and product teams to empower customers to optimize their models.
Develop robust evaluation frameworks to monitor and assess modeling advancements.
Design, implement, test, and debug code across our research stack.
You may excel in this role if you:
Possess a deep understanding of machine learning and its applications.
Have experience with relevant models and methodologies for evaluating model improvements.
Are adept at navigating large ML codebases for debugging purposes.
Thrive in a fast-paced and technically intricate environment.
About OpenAI
OpenAI is a pioneering AI research and deployment organization dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We are committed to pushing the boundaries of AI capabilities while prioritizing safety and human-centric values in our products. Our mission is to embrace diverse perspectives, voices, and experiences that represent the full spectrum of humanity, as we strive for a future where AI is a powerful ally for everyone.
About Our Team
Join the forefront of AI innovation with the RL and Reasoning team at OpenAI. Our team is dedicated to advancing reinforcement learning research and has pioneered transformative projects, including o1 and o3. We are committed to pushing the limits of generative models while ensuring their scalable deployment.
About the Role
As a Research Engineer/Research Scientist at OpenAI, you will play a pivotal role in enhancing AI alignment and capabilities through state-of-the-art reinforcement learning techniques. Your contributions will be essential in training intelligent, aligned, and versatile agents that power various AI models.
We seek individuals with a solid foundation in reinforcement learning research, agile coding skills, and a passion for rapid iteration.
This position is located in San Francisco, CA, and follows a hybrid work model of three days in the office per week. We also provide relocation assistance for new hires.
You may excel in this role if:
You are enthusiastic about being at the cutting edge of RL and language model research.
You take initiative, owning ideas and driving them to fruition.
You value principled methodologies, conducting simple experiments in controlled environments to draw trustworthy conclusions.
You thrive in a fast-paced, complex technical environment where rapid iteration is essential.
You are adept at navigating extensive ML codebases to troubleshoot and enhance them.
You possess a profound understanding of machine learning and its applications.
About OpenAI
OpenAI is a pioneering AI research and deployment organization committed to ensuring that general-purpose artificial intelligence serves the greater good for humanity. We strive to push the boundaries of AI system capabilities while prioritizing safe deployment through our innovative products. We recognize AI as a powerful tool that must be developed with safety and human-centric principles, embracing diverse perspectives to reflect the full spectrum of humanity.
We are proud to be an equal opportunity employer, welcoming applicants from all backgrounds without discrimination based on race, religion, color, national origin, sex, sexual orientation, age, veteran status, disability, genetic information, or any other legally protected characteristic.
Pluralis Research is at the forefront of Protocol Learning, an innovative decentralized approach to training and deploying AI models that democratizes access to this technology for individuals, rather than just large corporations. By aggregating computing resources from numerous contributors, incentivizing participation, and ensuring no single entity can dominate the model's complete weights, we are forging a truly open and collaborative pathway to cutting-edge AI.
Role Overview
We are seeking a passionate Developer Relations Lead to serve as the crucial technical liaison between Pluralis's research initiatives and the broader machine learning and systems communities. In this role, you will transform complex, groundbreaking research (including distributed training, communication-efficient model parallelism, and fault-tolerant optimization) into clear, engaging, and accessible content for researchers, engineers, and innovators.
This position is not merely a traditional marketing role. We are looking for an individual who can digest our research papers, grasp the underlying architecture, and convey these insights effectively through blog posts, conference presentations, or social media updates. You will shape our technical narrative and become the face of Pluralis's contributions within the community.
abundant seeks a Research Lead based in San Francisco. This position steers research activities that help shape the company’s direction. The Research Lead partners with colleagues to analyze data, draw meaningful insights, and support projects where research has a clear business impact.
Key responsibilities
Plan, manage, and execute research initiatives from start to finish
Work with team members to analyze data and spot important trends
Turn research results into practical recommendations for the business
Support projects that guide company strategy
Collaboration and impact
This role involves close teamwork and communication across departments. Research findings directly inform business decisions and contribute to the company’s ongoing growth.
Overview
Become an integral part of our dynamic R&D team dedicated to developing fully automated research systems that push the boundaries of AI. Zochi has achieved a milestone by publishing the first entirely AI-generated A* conference paper. Locus has set a new industry standard as the first AI system to surpass human experts in AI R&D.
Key Responsibilities
Conceptualize and develop innovative architectures for automated research.
Work collaboratively within a specialized team of researchers addressing cutting-edge challenges in long-horizon agentic capabilities, post-training for open-ended objectives, and environment crafting.
Document and publish key internal findings alongside success stories from external collaborations.
Qualifications
PhD or equivalent research experience in Computer Science, Machine Learning, Artificial Intelligence, or a related discipline. Outstanding candidates with significant research contributions are encouraged to apply, regardless of formal qualifications.
Demonstrated history of impactful AI/ML research contributions in academic or corporate environments.
Expertise in developing long-horizon, multi-agent systems and/or model post-training, especially in scientific domains or for open-ended discovery objectives.
A strong passion for advancing problem-solving processes and scientific discovery, thriving in high-autonomy roles and environments.
Our Culture
Competitive compensation and equity options.
Unlimited Paid Time Off (PTO), emphasizing team collaboration and a community-focused workplace.
Opportunities for conference participation and engagement in community initiatives.
Empowered roles with high levels of responsibility.
We are a small, passionate team of leading investors, researchers, and industry experts committed to the mission of accelerating discovery. Join us.
OpenAI’s Safety Systems team is building a dedicated group to tackle misalignment risks in artificial general intelligence. This San Francisco-based team focuses on identifying, quantifying, and reducing ways AGI could act against human interests as the technology matures. The work centers on safeguarding society by proactively addressing these challenges.
Research Focus Areas
Worst-Case Demonstrations: Develop compelling examples that show how AI systems might fail, especially in scenarios where misaligned AGI could harm human priorities.
Adversarial & Frontier Safety Evaluations: Build rigorous tests using these demonstrations to measure dangerous capabilities, such as deception or power-seeking behavior.
System-Level Stress Testing: Create automated infrastructure to push product stacks to their limits, adapting tests as the systems evolve.
Alignment Stress-Testing Research: Study where mitigations break down, share findings to inform strategy, and work with other research groups to improve safeguards.
Role Overview
This Senior Researcher position centers on AI safety and red-teaming. The role involves designing and running innovative attacks, supporting adversarial evaluations, and uncovering how safety measures can fail or be improved. Insights from this work help shape both product launches and long-term safety planning at OpenAI.
Key Responsibilities
Develop and execute worst-case demonstrations that clarify AGI alignment risks for stakeholders in high-impact situations.
Create thorough adversarial and system-level evaluations based on these demonstrations, and help integrate them across the company.
Design automated tools and frameworks to strengthen red-teaming and stress-testing efforts.
Full-time | $220K/yr | On-site | San Francisco
The Perplexity Research Residency stands as our premier initiative designed to empower outstanding research talent from diverse fields to influence the future of artificial intelligence (AI). This program opens avenues for exceptional researchers, engineers, and analysts from disciplines outside traditional AI research to make significant contributions to the advancement of AI and its implications for users. We welcome applications from theoretical physicists, cognitive scientists, biochemists, quants, mathematicians, philosophers, and distinguished researchers in any other relevant field.
For comprehensive details on the Perplexity Research Residency and the application process, please visit our program homepage. We encourage you to review the “What We’re Looking For” section for the specific criteria that will guide our selection of candidates.
The annual cash compensation for this position is set at $220,000, prorated for a three-month term.
Join Anthropic as a Research Operations Specialist focused on Economic Research. In this role, you will facilitate the smooth execution of research projects and support our team in analyzing and interpreting economic data. Your contributions will play a crucial part in driving our mission to create safe and beneficial AI systems.
Join OpenAI as a Research Scientist and explore cutting-edge machine learning innovations. In this role, you will be at the forefront of developing groundbreaking techniques while advancing our team's research initiatives. Collaborate with talented peers across various teams to discover transformative ideas that scale effectively. We seek individuals who are passionate about pushing the boundaries of AI and want to contribute to our unified research vision.
Join Cloudflare as a Research Manager and play a pivotal role in driving innovative research initiatives that enhance our security and performance solutions. You will lead a team of skilled researchers, collaborating closely with cross-functional teams to identify market trends and develop groundbreaking strategies that align with our business objectives.
Your responsibilities will include overseeing research projects from inception to completion, analyzing data to derive actionable insights, and presenting findings to stakeholders. You will foster a culture of creativity and critical thinking, ensuring that our research efforts remain at the forefront of industry standards.
Join aiedu as a Senior Lead in Research & Evaluation, where you will drive impactful research initiatives that shape educational practices and policies. In this role, you will lead a team of researchers in designing and executing comprehensive evaluations that inform our strategic direction. Your expertise will be critical in analyzing data, generating insights, and communicating findings to stakeholders.
About Our Team
Join the Foundations Research team, where we tackle ambitious and innovative projects that could redefine the future of AI. Our mission is to enhance the science behind our training and scaling initiatives, focusing on pioneering frontier models. We are dedicated to advancing data utilization, scaling methodologies, optimization strategies, model architectures, and efficiency enhancements to accelerate our scientific breakthroughs.
About the Position
We are on the lookout for a dynamic technical research lead to spearhead our embeddings-focused retrieval initiatives. You will oversee a talented team of research scientists and engineers committed to developing foundational technologies that enable models to access and utilize the right information precisely when needed. This includes crafting innovative embedding training objectives, architecting scalable vector storage, and implementing adaptive indexing techniques.
This pivotal role will contribute to various OpenAI products and internal research initiatives, offering opportunities for scientific publication and significant technical influence.
This position is located in San Francisco, CA, where we embrace a hybrid work model, requiring three days in the office weekly, and we provide relocation assistance for new hires.
Your Responsibilities
Lead cutting-edge research on embedding models and retrieval systems optimized for grounding, relevance, and adaptive reasoning.
Supervise a team of researchers and engineers in building an end-to-end infrastructure for training, evaluating, and integrating embeddings into advanced models.
Drive advancements in dense, sparse, and hybrid representation techniques, metric learning, and retrieval systems.
Work collaboratively with Pretraining, Inference, and other Research teams to seamlessly integrate retrieval throughout the model lifecycle.
Contribute to OpenAI's ambitious vision of developing AI systems with robust memory and knowledge access capabilities rooted in learned representations.
You Will Excel in This Role If You Possess
A proven track record of leading high-performance teams of researchers or engineers within ML infrastructure or foundational research.
In-depth technical knowledge in representation learning, embedding models, or vector retrieval systems.
Familiarity with transformer-based large language models and their interaction with embedding spaces and objectives.
Research experience in areas such as contrastive learning and retrieval-augmented generation.
Jun 16, 2025