Senior Data Scientist Trust And Safety jobs in San Francisco – Browse 4,208 openings on RoboApply Jobs


Open roles matching “Senior Data Scientist Trust And Safety” in San Francisco. 4,208 active listings on RoboApply Jobs.


1 - 20 of 4,208 Jobs
Databricks
Full-time|$192K/yr - $260K/yr|On-site|San Francisco, California

Join Databricks, where we are dedicated to creating the most advanced and secure platform for data and AI. Our commitment to innovation drives us to develop cutting-edge solutions in security, compliance, and governance. As a vital member of the Trust and Safety Data Science team, you will engage in projects that are essential for maintaining the security and…

Jan 30, 2026
Scale AI, Inc.
Full-time|$198K/yr - $247.5K/yr|On-site|San Francisco, CA

About the Role
At Scale AI, we are pioneering the future of Generative AI and enhancing human-AI collaboration. As a member of the Gen AI Ops Trust and Safety team, you will play a crucial role in safeguarding contributor integrity within a vast marketplace that encompasses hundreds of thousands of contributors involved in training foundational models.

We are in search of a data-driven Data Science Lead who thrives in a fast-paced environment, thinks in systems, and seamlessly integrates AI coding tools into their daily workflow. This position offers a high degree of autonomy. You will be responsible for the end-to-end development of fraud and abuse detection models, from defining labels to feature engineering, training, evaluation, and final deployment in production.

Collaborating within a compact team, you will achieve remarkable velocity by combining keen analytical insights with AI-enhanced development techniques (using tools like Cursor and Claude Code). If you have previously encountered limitations in teams that operate slowly or draw a line between 'analysis' and 'building', this role is designed to bridge that gap completely.

Mar 26, 2026
Lyft, Inc.
Full-time|On-site|San Francisco, CA

Role overview
The Senior Manager, Trust & Safety Policy at Lyft leads the team that shapes and updates policies to protect riders and drivers. This position ensures Lyft’s standards align with legal requirements and promote a secure experience on the platform. The role involves both policy development and hands-on implementation.

Key responsibilities
Guide a team dedicated to creating and carrying out trust and safety policies. Draft and update policies that keep users safe while meeting legal and regulatory standards. Collaborate with colleagues from multiple departments to design solutions that work in practice. Share policy changes and decisions clearly throughout the company.

What Lyft looks for
Ability to think strategically and solve complex problems. Strong communication skills. Experience working with teams across different functions. Background in trust and safety, policy, or a related area is helpful.

Location
San Francisco, CA

Apr 27, 2026
OpenAI
Full-time|On-site|San Francisco

About Our Team
The Safety Systems team is committed to ensuring the safety, robustness, and reliability of AI models in real-world applications. By leveraging years of practical alignment and applied safety efforts, this team addresses emerging safety challenges and develops innovative solutions to facilitate the secure deployment of our advanced models and future AGI, ensuring that AI is both beneficial and trustworthy. Discover more about OpenAI’s safety initiatives.

About the Position
As a Data Scientist within the Safety Systems team, you will spearhead a data-driven methodology for analyzing, evaluating, and overseeing the safety of our production systems. You will collaborate with various partners across the organization to define key metrics, develop and implement statistical methods to operationalize these metrics, analyze the effects of our products, and create comprehensive dashboards that serve as a reliable source of truth for addressing safety-related inquiries. Most importantly, you will play a pivotal role in the Safety Systems team, working closely with researchers and engineers to further our mission of establishing safe, robust, and reliable AI. This position is based at our headquarters in San Francisco, and we provide relocation assistance for new employees.

Key Responsibilities
Lead initiatives to assess and quantify the real-world safety impacts of OpenAI’s existing and upcoming products. Explore novel approaches to enhance our methodologies for measuring and mitigating harm and abuse. Develop and execute statistical methods necessary for the operationalization of safety metrics. Provide strategic direction and project coordination within the realm of safety. Foster a data-driven culture in Safety Systems by defining, tracking, and operationalizing metrics at the feature, product, and company levels. Create and share dashboards, reports, and tools that empower the team and the organization to independently address safety-related questions. Construct a safety data flywheel and supply safety research with production insights and data for training and evaluation.

Mar 16, 2026
Quizlet Inc.
Full-time|On-site|San Francisco, CA

Join Quizlet as a Senior Software Engineer specializing in Trust & Safety, where you will play a crucial role in enhancing the security and integrity of our platform. You will collaborate with cross-functional teams to develop robust software solutions that protect our community and ensure a safe learning environment.

Mar 25, 2026
DoorDash
Full-time|$124.1K/yr - $223.5K/yr|On-site|San Francisco, CA; Seattle, WA; New York, NY; Los Angeles, CA; Chicago, IL; Austin, TX; Washington D.C.

Join the dynamic Analytics team at DoorDash as a Data Scientist or Senior Data Scientist, where your expertise will directly influence strategic decisions and operational improvements. You will uncover valuable insights from vast datasets, transforming them into actionable recommendations that drive company-wide initiatives. Collaborate with diverse teams in areas like Consumer & Growth, Business Operations, and Customer Experience to elevate our analytics capabilities and impact our business in meaningful ways.

Feb 5, 2026
OpenAI
Full-time|Hybrid|San Francisco

Join Our Dynamic Team
At OpenAI, our Trust, Safety & Risk Operations teams are dedicated to protecting our innovative products, users, and the organization from various threats, including abuse, fraud, scams, and regulatory challenges. We operate at the nexus of operations, compliance, user trust, and safety, collaborating closely with Legal, Policy, Engineering, Product, Go-To-Market, and external partners to ensure our platforms are secure, compliant, and reliable for a diverse, global audience. Our team supports users across ChatGPT, our API, enterprise solutions, and developer tools. We handle sensitive inbound inquiries, develop detection and enforcement systems, and scale operational workflows to address the demands of a fast-paced, high-stakes environment.

Your Role and Responsibilities
We are looking for seasoned analysts with expertise in one or more of the following domains:
Content Integrity & Scaled Enforcement – Proactively identify, review, and respond to policy violations, harmful content, and emerging abuse trends on a large scale.
Emerging Risk Operations – Detect, assess, and mitigate new and intricate safety, policy, or integrity challenges in the rapidly changing AI landscape.
In this role, you will manage high-sensitivity workflows, serve as the incident manager for complex cases, and develop scalable operational systems, including tools, automation, and vendor processes that uphold user safety and trust while fulfilling our legal, ethical, and product commitments. Our work culture embraces a hybrid model of three days in the San Francisco office each week, and we provide relocation assistance for new hires. Please be advised that this role may involve exposure to sensitive content, including material that may be sexual, violent, or otherwise unsettling.

Your Key Responsibilities Include
Manage and resolve high-priority cases within your area of expertise (content enforcement, fraud/scams, compliance, or emerging risks). Conduct thorough risk assessments and investigations utilizing internal tools, product signals, and external data sources. Act as the incident manager for escalated cases necessitating intricate policy, legal, or regulatory analysis. Collaborate with cross-functional teams to design and implement top-tier operational workflows, decision trees, and automation strategies. Establish feedback loops and continuous improvement initiatives to enhance operational effectiveness.

Aug 14, 2025
Lyft, Inc.
Full-time|On-site|San Francisco, CA

Join Lyft as a Manager of Trust & Safety Policy, where you will play a crucial role in shaping and implementing policies that ensure the safety and trust of our community. Your leadership will guide strategic initiatives, engage with stakeholders, and drive data-informed decisions to foster a secure environment for our riders and drivers.

Mar 23, 2026
DoorDash
Full-time|$193.8K/yr - $285K/yr|On-site|San Francisco, CA; Seattle, WA; New York, NY; Chicago, IL

About the Team
The Trust & Safety, Integrity, and Fraud Product team at DoorDash is committed to creating a secure and reliable experience for all users on our platform, including Consumers, Merchants, and Dashers. We address intricate challenges such as fraud prevention, account takeover prevention, authenticity verification, and regulatory compliance, all while ensuring a seamless user experience. Our collaborative efforts with cross-functional teams, including Engineering, Data Science, Compliance, and Risk Operations, drive strategic initiatives that safeguard our business while promoting growth.

About the Role
As DoorDash expands beyond restaurants into a broader marketplace, our commitment to safety and trust remains paramount. We are seeking a Senior Product Manager to spearhead cross-functional teams tackling complex challenges that affect Consumers, Dashers, and Merchants. Your role will encompass various aspects of our business, from new user onboarding and in-app experiences to innovative products and services that are unparalleled in the market. Depending on your expertise, you will either lead a vertical fraud team focused on protecting Consumers or a horizontal fraud platform team dedicated to enhancing our Risk Engine, Data Signals Intelligence, and Automation/Anomaly Detection capabilities. This is an exceptional opportunity to shape the future of DoorDash during a period of rapid growth and significant impact.

You’re excited about this opportunity because you will…
Establish the vision and long-term product strategy for a vertical or horizontal fraud team. Develop and implement a customer-centric product roadmap in close collaboration with senior leadership, Operations, Data Science, Analytics, Design, and Engineering teams.

Feb 5, 2026
OpenAI
Full-time|On-site|San Francisco

About the Team
The Applied Foundations team at OpenAI is focused on ensuring our innovative technology remains secure against a range of adversarial threats. We are committed to safeguarding the integrity of our platforms as they expand. Our team plays a crucial role in defending against financial abuse, large-scale attacks, and various forms of misuse that could compromise user experience or operational stability. The Integrity pillar within Applied Foundations is tasked with developing robust systems that identify and respond to harmful actors and activities on OpenAI’s platforms. As these systems evolve to address significant usage harms, we are seeking skilled data scientists to accurately measure the prevalence of these issues and assess the effectiveness of our responses.

About the Role
We are in search of experienced trust and safety data scientists who can enhance, operationalize, and oversee the measurement of complex actor- and network-level harms. The selected data scientist will be responsible for measurement and metrics across various established harm verticals, including estimating the prevalence of on-platform (and occasionally off-platform) harm, while also conducting analyses to uncover gaps and opportunities in our responses. This position is based in our San Francisco or New York office and may require addressing urgent escalations outside of regular working hours. Many harm areas may involve sensitive content, including sexual, violent, or otherwise disturbing material.

In this role, you will:
Lead measurement and quantitative analysis for a range of severe, actor- and network-based usage harm verticals. Develop and apply AI-first methodologies for prevalence measurement and other standardized safety metrics, potentially incorporating off-platform indicators and non-traditional datasets. Create metrics that can be utilized for goal-setting or A/B testing where traditional prevalence or top-line metrics may not apply. Manage dashboards and metrics reporting for harm verticals. Perform analyses and generate insights to guide improvements in review, detection, and enforcement processes, while influencing strategic roadmaps. Optimize LLM prompts specifically for measurement purposes. Collaborate with other safety teams to identify key safety issues and develop relevant policies to address safety needs. Provide metrics for leadership to support informed decision-making.

Feb 18, 2026
Faire
Full-time|On-site|San Francisco, CA

As the Trust and Safety Strategy Lead at Faire, you will play a pivotal role in shaping our approach to ensuring the security and trust within our marketplace. You will be responsible for developing and executing strategic initiatives that promote a safe environment for our users, driving policy development, and collaborating with various teams to implement safety measures and risk management protocols. This position is ideal for a strategic thinker with a strong background in trust and safety, who thrives in a fast-paced, innovative environment.

Apr 8, 2026
Chime
Full-time|On-site|San Francisco, CA, USA

Chime is hiring a Product Manager focused on Trust & Safety in San Francisco. This role centers on protecting the platform and its users by driving initiatives that strengthen safety and reduce fraud.

Role overview
The Product Manager will work with teams across the company to design and launch strategies that address user safety concerns. Efforts will target the identification and prevention of fraudulent activities, ensuring that Chime remains a secure place for members.

Key responsibilities
Develop and implement product strategies to enhance trust and safety. Collaborate with engineering, operations, and other teams to address risks and improve user security. Shape product direction with a focus on maintaining a trustworthy platform.

Impact
Your work will directly influence how Chime protects its community, helping to build a safer experience for all users.

Apr 29, 2026
OpenAI
Full-time|On-site|San Francisco

About Our Team
At OpenAI, our User Safety & Risk Operations team is dedicated to protecting our platform and users from various forms of abuse, fraud, and emerging threats. We operate at the crucial intersection of product risk, operational scale, and real-time safety response, supporting a diverse range of users from individuals to global enterprises, as well as advertisers and creators. The Ads Trust & Safety Operations team is committed to ensuring the safety of our users, advertisers, and creators across all monetized surfaces. As OpenAI rolls out new revenue-generating formats and partnerships, this team guarantees that these experiences are safe, compliant, of high quality, and aligned with our overarching safety standards. We work closely with Product, Engineering, Policy, and Legal teams to identify potential risks, develop and enhance enforcement systems, and ensure scalable, high-integrity operations.

About the Role
We are seeking a seasoned operator to help expand and enhance the Ads Trust & Safety Operations at OpenAI. In this pivotal role, you will oversee critical Ads T&S workstreams from inception to execution, collaborating closely with Product, Policy, Engineering, Legal, and Operations teams to design scalable enforcement processes, strengthen detection mechanisms, and ensure safe support for Ads and monetization at scale. You will navigate the intersection of strategy and execution, translating ambiguity into structured programs, identifying operational risks, and driving measurable improvements across systems and workflows. This position requires an individual who is highly operational, excels at execution, and is comfortable providing clarity in uncertain situations. You should be enthusiastic about building scalable systems and processes from the ground up and working in tandem with policy and product teams as we rapidly iterate on advertising strategies and features.

Key Responsibilities
Oversee complex, high-impact Ads Trust & Safety problem areas from strategy through execution. Design and scale operational workflows for Ads Trust & Safety, encompassing enforcement models, review processes, escalation paths, and quality frameworks. Work closely with Product, Policy, and Engineering teams to translate risk and policy requirements into scalable systems, tools, and automation. Drive operational readiness for new Ads and monetization launches, features, and markets, identifying risks early and ensuring appropriate mitigations are in place. Leverage data to identify trends, gaps, and emerging risks across Ads surfaces; develop proposals for enhancements.

Feb 24, 2026
Discord Inc.
Full-time|$248K/yr - $279K/yr|On-site|San Francisco Bay Area

Discord, a platform frequented by over 200 million users monthly, thrives on its vibrant gaming community, where more than 90% of users engage in gaming activities. With a staggering 1.5 billion hours spent playing diverse titles each month, Discord is pivotal in shaping the gaming landscape. Our mission is to enhance social interactions for gamers before, during, and after gameplay. We are seeking an outstanding Trust & Safety Counsel to join our dynamic legal team. This influential role offers the opportunity to contribute significantly at one of the most exciting companies in the tech industry! As our second Trust & Safety Counsel, you will be integral in supporting our Trust & Safety organization, addressing law enforcement data requests, identifying and removing harmful content and actors, and ensuring compliance with international laws and regulations.

Mar 17, 2026
suno
Full-time|On-site|San Francisco

Join suno as an Engineering Manager in our Trust & Safety team, where you will lead the development and implementation of innovative solutions to enhance user safety and trust on our platform. You will work closely with cross-functional teams to ensure the integrity of our systems and the protection of our users. Your leadership will be vital in driving engineering excellence and fostering a culture of safety and accountability.

Mar 7, 2026
Descript
Full-time|$100K/yr - $170K/yr|Remote|San Francisco, CA

At Descript, we are on a mission to revolutionize the way audio and video content is created. Our innovative platform has earned the trust of renowned podcasters, influencers, and prestigious organizations like BBC, ESPN, HubSpot, Shopify, and The Washington Post. With $100M in funding from top investors, including the OpenAI Startup Fund, Andreessen Horowitz, Redpoint Ventures, and Spark Capital, we are poised for tremendous growth. We are seeking a proactive and analytical Senior Marketing Data Scientist to join our dynamic Growth team. In this role, you will collaborate with the Performance Marketing team to design and implement a comprehensive measurement framework, optimize marketing performance across the entire funnel, and drive data-informed business strategies. This crucial position will enable Descript to enhance its marketing capabilities across both B2C and B2B sectors. You will leverage AI-driven analytics tools to streamline decision-making processes and deliver scalable predictive insights. This position can be performed remotely within the United States or onsite in the San Francisco Bay Area.

Mar 7, 2026
Pinterest, Inc.
Full-time|Remote|San Francisco, CA, US; Remote, US

Join our innovative team at Pinterest as a Senior Data Scientist in the Core division. We're looking for a data-driven professional who is passionate about leveraging data analytics and machine learning to enhance user experience and drive business outcomes. At Pinterest, you will collaborate with cross-functional teams to develop data models and insights that influence strategic decisions.

Mar 18, 2026
Grow Therapy
Full-time|On-site|San Francisco

About the Role
Grow Therapy is hiring a Senior or Staff Data Scientist in San Francisco. This role focuses on using data to inform key decisions and improve therapy services.

Apr 18, 2026
tvScientific powered by Pinterest
Senior Data Scientist

Full-time|$139.8K/yr - $287.7K/yr|Remote|San Francisco, CA, US; Remote, US

About tvScientific
tvScientific is the premier CTV advertising platform specifically designed for performance marketers. By harnessing extensive data and innovative science, we automate and enhance TV advertising to achieve tangible business results. Our integrated solution encompasses media buying, optimization, measurement, and attribution, all within a single, efficient platform. Developed by industry veterans with deep experience in programmatic advertising, digital media, and ad verification, our platform provides advertisers with the trust they need to scale their business.

As a vital member of our Data Science team, you will transform data into actionable insights, influence business strategies, and lead analytics projects from start to finish. Your daily responsibilities will include crafting tailored reporting and analytics tools to address the specific needs of our clients.

Apr 7, 2026
Airbnb, Inc.
Full-time|$248K/yr - $310K/yr|Remote|Remote - US

Airbnb started in 2007 when two hosts welcomed three guests into their San Francisco home. Since then, the platform has grown to over 5 million hosts and more than 2 billion guests worldwide. Hosts offer unique stays and experiences that connect travelers with local communities.

Trust Engineering at Airbnb
Trust sits at the heart of Airbnb’s platform. The Trust Engineering team builds technology to keep the community safe and uphold high standards for hosts, guests, homes, and experiences. Their work addresses both online risks, such as account compromise, fake listings, and financial loss, and offline concerns like theft, property damage, and personal safety. The team’s responsibilities include user onboarding, screening, identity, and reputation systems. Trust Engineering leads the technical vision for these systems and integrates them throughout Airbnb’s platform.

Role overview
The Senior Staff Software Engineer, Trust, is a senior individual contributor role. This engineer partners with technical leaders across Airbnb to shape, plan, and deliver a broad roadmap of Trust engineering projects. The position involves extensive collaboration with teams throughout the company. While highly senior, this is still a hands-on engineering role: every Airbnb software engineer, regardless of level, contributes code and development work.

What you will do
Define and drive the long-term vision and strategy for the Trust Platform, setting architectural direction for core systems that support scalable, high-quality fraud detection, safety, and trust decisions across Airbnb. Work deeply within Trust Platform components, developing system and performance tools, and identifying ways to improve technical quality, operational excellence, and developer experience. Promote an AI-first engineering approach, using LLM-powered agents to generate and refine code, so you can focus on problem-solving, system design, and quality oversight.

Location
This position is remote and based in the United States.

Apr 21, 2026
