Minor Safety Analyst jobs in San Francisco – Browse 538 openings on RoboApply Jobs

Open roles matching “Minor Safety Analyst” with location signals for San Francisco. 538 active listings on RoboApply Jobs.

1 - 20 of 538 Jobs
Full-time|$65K/yr|Remote — San Francisco, California, United States

The Minor Safety Analyst is essential in protecting vulnerable users by thoroughly reviewing and processing reports of safety and abuse incidents that could endanger minors. The role requires detailed open-source and in-platform investigations to identify harmful behaviors, recognize threat actors, and evaluate potential risks. The analyst manages high-volume analytical workflows, ensuring that all findings are documented accurately while adhering to tight deadlines, and produces clear, concise written summaries and investigative reports for both internal teams and client stakeholders. Analysts also collaborate with cross-functional teams to ensure that safety requests are processed effectively and in accordance with established protocols.

This role requires availability from Tuesday to Saturday, 9:00 AM to 5:00 PM local time.

Responsibilities:
- Review and process reports of abuse and safety incidents, particularly those that may involve potential harm to minors, while adhering to established operational procedures.
- Conduct open-source and in-platform investigations into behavioral abuse and threat actors to identify techniques, context, and impact.
- Support analytical workflows by documenting findings clearly, meeting deadlines, and maintaining a high level of accuracy and consistency.
- Generate concise written summaries, briefings, and investigative reports for internal and client stakeholders.
- Work collaboratively with team members, cross-functional partners, and vendor teams to ensure requests are processed efficiently and accurately.
- Contribute to the continuous improvement of investigative workflows and knowledge resources.
- Engage in team training sessions, shadowing, and peer reviews to enhance professional expertise and investigative capabilities.

Feb 2, 2026
Full-time|$80K/yr - $100K/yr|Remote — San Francisco, California, United States

As a Senior Analyst for Minor Safety at Control Risks, you will be instrumental in helping a leading global technology firm safeguard its online platform. The role focuses on reviewing and addressing safety and abuse incidents involving minors, conducting in-depth investigations of behavioral abuse and potential threat actors, and delivering high-quality analytical insights in a dynamic environment. You will join a dedicated, mission-oriented team that collaborates across disciplines to protect vulnerable populations, allowing the client to operate securely and responsibly in a constantly shifting risk landscape.

Work Schedule: Tuesday to Saturday, 9:00 AM - 5:00 PM local time.

Responsibilities:
- Thoroughly review incident reports related to safety and abuse targeting minors, taking decisive action in accordance with operational policies and ensuring consistent follow-through on every report.
- Conduct investigations into behavioral abuse and threat actors on the client's platform to analyze techniques, impacts, and attribution accurately.
- Work within a fast-paced analytic workflow, meeting tight deadlines while maintaining a high standard of analytic excellence and ensuring that reporting deliverables align with best-practice intelligence assessments.
- Prepare detailed written reports, presentations, and strategic insights for senior leadership and the broader organization.
- Lead improvement initiatives by offering guidance on policy development and executing projects that enhance existing workflows.
- Review and provide constructive feedback on team members' work, and design and deliver training programs.
- Identify emerging issues and trends, escalating them as necessary.
- Contribute to enhancing support resources and content.
- Act as a consultative partner to the vendor team, providing expertise in processing all types of requests with exceptional quality and efficiency.
- Serve as a role model and mentor to colleagues, demonstrating flexibility and outstanding teamwork to prioritize competing demands effectively.

Jan 27, 2026
OpenAI
Full-time|Hybrid|San Francisco

Join Our Dynamic Team

At OpenAI, our Trust, Safety & Risk Operations teams are dedicated to protecting our products, users, and the organization from threats including abuse, fraud, scams, and regulatory challenges. We operate at the nexus of operations, compliance, user trust, and safety, collaborating closely with Legal, Policy, Engineering, Product, Go-To-Market, and external partners to ensure our platforms are secure, compliant, and reliable for a diverse, global audience. Our team supports users across ChatGPT, our API, enterprise solutions, and developer tools. We handle sensitive inbound inquiries, develop detection and enforcement systems, and scale operational workflows to meet the demands of a fast-paced, high-stakes environment.

Your Role and Responsibilities

We are looking for seasoned analysts with expertise in one or more of the following domains:
- Content Integrity & Scaled Enforcement: proactively identify, review, and respond to policy violations, harmful content, and emerging abuse trends at scale.
- Emerging Risk Operations: detect, assess, and mitigate new and complex safety, policy, or integrity challenges in the rapidly changing AI landscape.

In this role, you will manage high-sensitivity workflows, serve as the incident manager for complex cases, and develop scalable operational systems, including tools, automation, and vendor processes, that uphold user safety and trust while fulfilling our legal, ethical, and product commitments. Our work culture embraces a hybrid model of three days in the San Francisco office each week, and we provide relocation assistance for new hires. Please note that this role may involve exposure to sensitive content, including material that may be sexual, violent, or otherwise unsettling.

Your Key Responsibilities Include:
- Manage and resolve high-priority cases within your area of expertise (content enforcement, fraud/scams, compliance, or emerging risks).
- Conduct thorough risk assessments and investigations using internal tools, product signals, and external data sources.
- Act as the incident manager for escalated cases requiring intricate policy, legal, or regulatory analysis.
- Collaborate with cross-functional teams to design and implement top-tier operational workflows, decision trees, and automation strategies.
- Establish feedback loops and continuous improvement initiatives to enhance operational effectiveness.

Aug 14, 2025
OpenAI
Full-time|On-site|San Francisco

About Our Team

At OpenAI, our User Safety & Risk Operations team is dedicated to protecting our platform and users from various forms of abuse, fraud, and emerging threats. We operate at the crucial intersection of product risk, operational scale, and real-time safety response, supporting a diverse range of users from individuals to global enterprises, as well as advertisers and creators. The Ads Trust & Safety Operations team is committed to ensuring the safety of our users, advertisers, and creators across all monetized surfaces. As OpenAI rolls out new revenue-generating formats and partnerships, this team ensures that these experiences are safe, compliant, of high quality, and aligned with our overarching safety standards. We work closely with Product, Engineering, Policy, and Legal teams to identify potential risks, develop and enhance enforcement systems, and ensure scalable, high-integrity operations.

About the Role

We are seeking a seasoned operator to help expand and enhance Ads Trust & Safety Operations at OpenAI. In this pivotal role, you will oversee critical Ads T&S workstreams from inception to execution, collaborating closely with Product, Policy, Engineering, Legal, and Operations teams to design scalable enforcement processes, strengthen detection mechanisms, and ensure safe support for Ads and monetization at scale. You will navigate the intersection of strategy and execution, translating ambiguity into structured programs, identifying operational risks, and driving measurable improvements across systems and workflows. This position requires an individual who is highly operational, excels at execution, and is comfortable providing clarity in uncertain situations. You should be enthusiastic about building scalable systems and processes from the ground up and working in tandem with policy and product teams as we rapidly iterate on advertising strategies and features.

Key Responsibilities:
- Oversee complex, high-impact Ads Trust & Safety problem areas from strategy through execution.
- Design and scale operational workflows for Ads Trust & Safety, encompassing enforcement models, review processes, escalation paths, and quality frameworks.
- Work closely with Product, Policy, and Engineering teams to translate risk and policy requirements into scalable systems, tools, and automation.
- Drive operational readiness for new Ads and monetization launches, features, and markets, identifying risks early and ensuring appropriate mitigations are in place.
- Leverage data to identify trends, gaps, and emerging risks across Ads surfaces, and develop proposals for enhancements.

Feb 24, 2026
OpenAI
Full-time|Hybrid|San Francisco

About Our Team

At OpenAI, our Trust, Safety & Risk Operations teams play a crucial role in protecting our products, users, and the integrity of our organization from threats such as abuse, fraud, scams, and regulatory non-compliance. We work collaboratively across operations, compliance, and user safety, partnering closely with Legal, Policy, Engineering, Product, Go-To-Market teams, and external stakeholders to ensure our platforms are secure, compliant, and trusted by a diverse, global audience. The Global Safety Response Operations team provides round-the-clock support for user safety, risk management, and regulatory escalations pertaining to OpenAI's suite of products. We address high-priority cases that require human judgment and swift action. As the core escalation management and delivery unit of OpenAI's safety operations, we strive to ensure our products are safe and aligned with our policies while delivering timely, empathetic, and consistent support to our users.

About the Role

We are seeking experienced Trust, Safety, and Risk Operations analysts with expertise in areas such as policy enforcement, content moderation, fraud prevention, developer risk, or privacy and regulatory matters. As a key player in safety escalation management, you will triage and resolve urgent and sensitive cases while collaborating across subject matter areas, systems, and processes to ensure operational excellence, drive process enhancements, and surface valuable insights and trends. This role operates within a 24/7 global framework and requires flexibility to work rotating shifts, including nights, weekends, and holidays, as part of an on-call support model. Our hybrid work model offers three days in-office weekly, alongside relocation assistance for new hires. Please note that this position may involve exposure to sensitive content, which may be sexual, violent, or otherwise disturbing.

Your Responsibilities:
- Manage and resolve high-priority cases across all harm and risk areas, ensuring prompt and appropriate resolutions in accordance with policy and legal standards.
- Utilize multiple systems and tools to oversee user reports, internal escalations, and other critical investigations.
- Serve as incident manager for escalations that require nuanced interpretation of policies, legal stipulations, or regulatory guidelines.
- Identify and implement process enhancements and automation initiatives to boost efficiency, accuracy, and coverage.
- Collaborate with cross-functional teams to drive continuous improvement and optimize operational workflows.

Oct 30, 2025
Anthropic
Full-time|Remote-Friendly (Travel Required)|San Francisco, CA; Washington, DC; New York City, NY

Join Anthropic as a Safeguards Enforcement Analyst, where you will play a pivotal role in safety evaluations of our AI systems. The role focuses on analyzing compliance with safeguards and developing strategies to strengthen safety protocols. You will collaborate with cross-functional teams to assess risks and implement robust solutions that align with our commitment to responsible AI.

Mar 12, 2026
Full-time|On-site|San Francisco

At Intrinsic Safety, we're leveraging innovative technologies to tackle some of the most challenging issues of our digital era using safe and effective AI solutions. Your contributions will help fraud prevention and Trust & Safety teams concentrate on impactful tasks rather than repetitive manual reviews. We're experiencing rapid growth, partnering with some of the largest and most dynamic social media and online service platforms.

We are looking for an exceptional Technical Account Manager (TAM) to join our core Customer Success team. In this role, you will ensure our customers, particularly our enterprise clients, derive maximum value from Intrinsic Safety by providing expert technical support and guidance. You will act as the essential link between our AI platform and our customers' technical teams, fostering deep platform adoption, troubleshooting complex issues, and facilitating their long-term success. This is a unique opportunity to contribute to our mission by transforming intricate technical challenges into practical solutions that effectively combat fraud and abuse.

This role requires in-person attendance at our San Francisco office.

Your Responsibilities:
- Post-Integration Technical Partnership: serve as the main technical liaison for our enterprise customers post-deployment, gaining a thorough understanding of their unique technical environments and data flows as they interact with the Intrinsic Safety platform.
- Drive Ongoing Technical Adoption & Optimization: assist customers in navigating advanced platform features and configurations to ensure they maximize the benefits of Intrinsic Safety's offerings.
- Build Scalable Technical Resources: develop and enhance our knowledge base, technical documentation, and best-practice guides to empower both customers and the Customer Success team.
- Strategic Technical Troubleshooting & Resolution: identify and resolve sophisticated technical challenges that arise after implementation, working closely with engineering and product teams to deliver scalable solutions.
- Product Evolution: act as a key contributor to the continual improvement of our product offerings.

Aug 4, 2025
Zūm
Full-time|On-site|San Francisco, CA

Zūm is transforming mass mobility with its Connected Mobility Experience (Zūm CMX) system, which synchronizes people, vehicles, and operations in real time. In the student mobility sector, Zūm CMX aims to deliver reliable, transparent, and efficient transportation for students and their families. More than 4,500 schools rely on Zūm CMX. The company has received recognition from Fast Company's World's Most Innovative Companies, CNBC's Disruptor 50, and the Financial Times' Fastest Growing Companies. Backing comes from investors such as Sequoia Capital, GIC, TPG, and Softbank.

Role overview

The Safety Manager will oversee safety operations for Zūm in San Francisco. This position covers the full safety lifecycle: developing programs, delivering training, managing incidents, and ensuring compliance with regulatory standards. The role leads the safety department and works to build a strong culture of behavioral safety, aiming to protect employees, students, and the community. The Safety Manager is responsible for upholding safety as a core value, making sure everyone returns home safely each day.

What you will do
- Develop and implement safety programs and standards
- Deliver safety training and promote behavioral change
- Manage incident response and investigate accidents
- Ensure compliance with all regulatory safety requirements
- Lead the safety department and foster a culture of shared responsibility

What makes a candidate successful
- Strong commitment to safety and achieving zero incidents
- Ability to balance proactive prevention and responsive incident management
- Experience designing and delivering engaging safety training
- Skill in analyzing incident data and conducting field observations
- Confidence presenting safety metrics to administrators and investigating accident scenes
- Clear communication and the ability to interpret safety regulations

Apr 28, 2026
Full-time|On-site|San Francisco

Join our dynamic team at Intrinsic Safety as an Enterprise Account Executive. In this pivotal role, you will drive sales and foster relationships with key enterprise clients, contributing directly to the growth and success of our safety solutions.

We are looking for a passionate professional who thrives in a fast-paced environment, excels at building strong client connections, and is driven to achieve targets. You will work closely with cross-functional teams to deliver outstanding service and value to our clients.

Mar 29, 2026
OpenAI
Full-time|On-site|San Francisco

About Our Team

The Safety Systems team at OpenAI is dedicated to advancing safety protocols so that our models can be deployed responsibly, ultimately benefiting society. We are at the forefront of OpenAI's commitment to creating and deploying safe Artificial General Intelligence (AGI), fostering a culture rooted in trust and transparency. The Pretraining Safety team aspires to develop safer, more capable base models while enabling early and reliable safety assessments during the training phase. Our objectives include:
- Establishing upstream safety evaluations to track the emergence of unsafe behaviors and goals;
- Creating safer priors through strategic pretraining and mid-training interventions that enhance downstream alignment;
- Designing safe-by-design architectures that improve control over model capabilities.

Additionally, we conduct foundational research to understand how behaviors develop, generalize, and can be accurately measured throughout the training process.

About the Role

The Pretraining Safety team is pioneering the integration of safety into models prior to their post-training and deployment stages. In this position, you will engage with the complete model development lifecycle, with a focus on pre-training:
- Identifying safety-relevant behaviors as they emerge in base models;
- Assessing and mitigating risk without waiting for extensive training runs;
- Designing architectures and training setups that prioritize safer behavior;
- Enhancing models by integrating comprehensive, early safety signals.

Our collaborative efforts span OpenAI's safety ecosystem, from Safety Systems to Training, to ensure our safety foundations are robust, scalable, and grounded in real-world considerations.

Your Responsibilities Will Include:
- Developing innovative techniques to predict, measure, and assess unsafe behavior in early-stage models;
- Crafting data curation strategies that refine pretraining priors and mitigate downstream risk;
- Investigating safe-by-design architectures and training configurations to enhance controllability;
- Collaborating with cross-functional teams to ensure adherence to safety standards.

Oct 30, 2025
Full-time|On-site|San Francisco

We are seeking an ambitious and organized Chief of Staff to join our dynamic team in San Francisco. This role is pivotal in helping us scale our operations and navigate the challenges of growth. As Chief of Staff, you will collaborate closely with the CEO to define and streamline our operational cadence. You will execute essential tasks across functions including recruitment, finance, and the implementation of new initiatives. Being part of a lean team means your decisions will directly impact our trajectory, and you will have the autonomy to shape our processes and outcomes.

Key Responsibilities:
- Oversee internal operations, including financial management and human resources
- Manage day-to-day administrative tasks such as payroll, expense tracking, and vendor relations
- Lead recruitment efforts, conduct interviews, and onboard new team members
- Drive new initiatives and special projects focused on growth, sales, marketing, and partnerships
- Support the go-to-market strategy for the enterprise
- Develop and automate processes that enhance efficiency as we scale

This is a high-impact role for a motivated individual eager to become a top-tier operator. You will gain hands-on experience across domains such as recruitment, marketing, operations, and customer development. As our company expands, so will the scope of your responsibilities and opportunities.

Oct 21, 2025
Full-time|From $100K/yr|On-site|San Francisco

Join Our Team as a Software Engineer

At Intrinsic Safety, we are at the forefront of developing AI agents that tackle complex decision-making in risk investigations, fraud detection, and identity verification. Our mission revolves around empowering machines to make the most challenging judgment calls efficiently and accurately. As a dynamic and compact team based in San Francisco, we are addressing challenges that impact billions of transactions and entities. Our clientele includes Fortune 500 companies, global marketplaces, and regulated financial institutions. If you are driven by ownership, quick execution, and collaboration with founders, you will thrive here.

Mar 30, 2026
Full-time|On-site|San Francisco

Role Overview

At Variance, we are at the forefront of teaching machines to execute high-stakes judgment calls at scale. This involves developing AI agents that navigate the complex domains of risk investigations, fraud detection, and identity verification. Our San Francisco-based team is small yet exceptionally talented, comprising former founders and specialists from leading AI laboratories. We serve an impressive clientele, including Fortune 500 companies, global marketplaces, and regulated financial institutions. If you are passionate about taking ownership, working swiftly, and collaborating closely with founders, you will thrive in our environment.

We are seeking a Security Engineer to help establish a robust security foundation. You will collaborate across product, infrastructure, and internal systems to ensure that Variance is secure by design, enabling us to meet the rigorous standards needed to deploy AI in critical workflows for the world's largest corporations.

Mar 30, 2026
Anthropic
Full-time|On-site|San Francisco, CA

Join our dynamic team at Anthropic as a Policy Analyst focusing on the Latin American region. This role offers an exciting opportunity to engage with policy-making processes and contribute to our mission of ensuring advanced AI systems are safe and beneficial for humanity. You will be responsible for analyzing regional policies, collaborating with stakeholders, and providing insights that drive our strategic objectives.

Apr 13, 2026
Anthropic
Full-time|On-site|San Francisco, CA

Join The Anthropic Institute as an Analyst, where you will play a pivotal role in advancing research and insights that shape the future of AI safety and policy. Contribute to impactful projects, collaborate with a diverse team of experts, and help foster a safer AI landscape.

Mar 12, 2026
Integrated Resources Inc.
Senior Safety Surveillance Specialist

Full-time|On-site|San Francisco

Join Integrated Resources Inc. as a Senior Safety Surveillance Specialist, where you will play a crucial role in the safety monitoring of our innovative projects. We are looking for a detail-oriented professional who can analyze safety data and implement effective safety protocols.

Jul 22, 2017
Full-time|$100K/yr|On-site|San Francisco

Join our innovative team at Intrinsic Safety, where we leverage cutting-edge technologies to tackle some of the most challenging issues of the digital era using safe and effective AI. Your role will be pivotal in enabling Trust & Safety teams to minimize time spent on tedious manual reviews and investigations, empowering them to focus on what truly matters. By transforming the methods these teams use to protect their communities from threats including spam, scams, misinformation, hate speech, and physical security issues, you will significantly impact the lives of many individuals. We are experiencing rapid growth, serving major social media and online service platforms.

We are seeking our inaugural Business Development Representative (BDR) to spearhead pipeline growth and help scale our sales efforts. In this role, you will be responsible for outbound prospecting, qualifying leads, and scheduling meetings with decision-makers across online marketplaces and digital platforms. Collaborating closely with the go-to-market (GTM) team, you will refine messaging, optimize outreach strategies, and cultivate relationships with potential customers. This is a high-impact position that offers the chance to shape our sales approach and advance your career as we grow.

This position requires in-person attendance at our San Francisco office.

Key Responsibilities:
- Generate and qualify leads through strategic outbound prospecting, focusing on trust & safety, legal, and compliance leaders.
- Implement multi-channel outreach strategies via email, phone, and LinkedIn to engage key decision-makers and secure discovery meetings.
- Conduct initial conversations to assess prospect needs, evaluate fit, and ensure seamless transitions to the GTM team.
- Accurately track outreach activities, interactions, and pipeline progress in our CRM system.
- Provide insights from customer interactions to refine our sales messaging, ideal customer profile, and go-to-market strategy.

Mar 31, 2025
Full-time|$100K/yr|On-site|San Francisco

Role Overview

At Variance, we are revolutionizing the way machines make critical judgment calls at scale. Our mission is to develop AI agents that tackle complex challenges in risk investigations, fraud detection, and identity verification. Based in San Francisco, our close-knit team comprises talented individuals with backgrounds from top AI labs and previous startups. We serve an impressive clientele, including Fortune 500 companies, global marketplaces, and regulated financial institutions. Success in this competitive landscape demands not only robust products and engineering but also building trust with discerning buyers and integrating seamlessly into essential workflows.

We are seeking a driven Enterprise Sales Engineer to lead the technical aspects of our enterprise sales approach. In this role, you will collaborate closely with account executives, founders, and prospective clients to understand their workflows, conduct technical discovery, create engaging demos, and demonstrate how Variance delivers value in high-stakes settings. This pre-sales opportunity is ideal for a technically adept individual who excels at client engagement and is passionate about enabling large enterprises to adopt trustworthy AI solutions.

This position requires in-person presence at our San Francisco office.

Mar 31, 2026
OpenAI
Full-time|On-site|San Francisco

About the Team

The Safety Systems team is dedicated to ensuring the responsible deployment of our advanced AI models for societal benefit. We lead OpenAI's mission to develop and implement safe AGI, prioritizing transparency and trust in our AI systems. The Model Safety Research team is focused on pioneering research to enhance the robustness and safety of AI models. Our goal is to tackle the evolving safety challenges that arise as AI becomes increasingly powerful and prevalent across applications. Key areas of focus include the enforcement of nuanced safety policies, model robustness against adversarial threats, privacy and security concerns, and trustworthiness in critical safety domains. We are committed to understanding real-world deployment and maximizing the benefits of AI while ensuring its safe and responsible use.

About the Role

OpenAI is looking for a passionate and experienced Senior Researcher specializing in AI safety. This role will guide research initiatives aimed at enabling safe AGI and will involve projects that enhance the safety, alignment, and robustness of our AI systems against adversarial threats. You will play a pivotal role in shaping the future of safe AI at OpenAI, contributing significantly to our mission of deploying safe AGI.

In this role, you will:
- Engage in cutting-edge research on AI safety topics such as Reinforcement Learning from Human Feedback (RLHF), adversarial training, and system robustness.
- Implement innovative methods within OpenAI's core model training processes and drive safety enhancements across our products.
- Define research directions and strategies to bolster the safety, alignment, and robustness of our AI systems.
- Collaborate with cross-functional teams, including Trust & Safety, legal, and policy experts, to ensure our products uphold the highest safety standards.
- Continuously assess and analyze the safety of our models and systems, pinpointing risks and proposing effective mitigation strategies.

You might thrive in this role if you:
- Have a strong enthusiasm for AI safety and a solid background in safety research.
- Possess excellent analytical skills and the ability to think critically about complex safety challenges.
- Are adept at collaborating with diverse teams and communicating findings effectively.
- Have a proactive approach to problem-solving and a commitment to ethical AI deployment.

May 25, 2023
Intrinsic Safety
Research Engineer, Evals

Full-time|On-site|San Francisco

Role Overview

At Intrinsic Safety, we are pioneering the development of AI systems capable of making critical decisions in high-stakes environments such as risk investigations, fraud detection, and identity verification. Our dedicated team in San Francisco is at the forefront of tackling complex challenges where traditional AI solutions often fall short.

We are searching for a Research Engineer to play a pivotal role in shaping our model evaluation strategies. You will create benchmarks, datasets, and evaluation frameworks that accurately assess our systems' performance in real-world scenarios. This position bridges research, product development, and engineering, focusing on rigorous evaluations that reflect actual customer workflows and identify key failure points to propel the next generation of AI advancements.

Mar 31, 2026
