
Insider Risk Investigator Technical Human Intelligence jobs in San Francisco

Open roles matching “Insider Risk Investigator Technical Human Intelligence” with location signals for San Francisco. 1,169 active listings on RoboApply Jobs.

1,169 jobs found

1 - 20 of 1,169 Jobs
Anthropic
Full-time|On-site|San Francisco, CA | New York City, NY | Seattle, WA

Join Anthropic as an Insider Risk Investigator, where you will be at the forefront of safeguarding our organization by employing both technical and human intelligence methodologies. In this pivotal role, you will analyze and investigate potential insider threats, collaborate with cross-functional teams, and leverage advanced analytical tools to ensure the in…

Apr 10, 2026
SoFi
Full-time|Remote|WA - Seattle; UT - Cottonwood Heights; CA - San Francisco; TX - Frisco

Join SoFi as a Security Product Lead specializing in Threat Intelligence and Insider Risk. In this pivotal role, you will spearhead initiatives that enhance our security posture and protect our assets from internal and external threats. You will collaborate with cross-functional teams to develop and implement innovative security solutions, ensuring the safety and integrity of our operations.

Mar 12, 2026
SoFi
Full-time|On-site|WA - Seattle; UT - Cottonwood Heights; CA - San Francisco; NY - New York City; TX - Frisco

As the Lead Insider Trust & Fraud Investigator at SoFi, you will play a pivotal role in safeguarding our platform by identifying and mitigating potential fraud risks. You will lead investigations into suspicious activities, collaborate with cross-functional teams, and develop strategies to enhance our fraud detection capabilities.

Mar 25, 2026
OpenAI, Inc.
Full-time|Remote|San Francisco

Join Our Team

At OpenAI, our mission is to ensure that general-purpose artificial intelligence serves the greater good for all humanity. We are committed to the real-world deployment of our technologies and their continuous improvement based on practical usage and potential misuse.

The Intelligence and Investigations team plays a critical role in this mission by identifying, examining, and mitigating the misuse of our products, focusing on significant and novel harms. Our efforts empower partner teams to create data-driven model policies and develop robust safety measures. By gaining a deep understanding of abuse patterns, we help ensure that OpenAI's products are used safely in the creation of impactful and rewarding applications.

About the Position

As a Technical Abuse Investigator within the Intelligence and Investigations team, your primary responsibility will be to detect, investigate, and thwart malicious activities on OpenAI's platform. You will enhance portions of the investigative process to enable our team to counteract harm effectively at a larger scale. This position uniquely blends traditional investigative acumen with strong technical skills, as much of the work involves navigating intricate datasets to uncover actionable abuse signals rather than merely reviewing isolated reports.

Beyond performing direct investigations, this role is designed to amplify the capabilities of the broader investigations team. You will work on scaling or automating essential yet intricate processes, crafting and implementing lightweight technical solutions, such as notebook templates, data pipelines, or internal utilities, that empower specialized investigators to detect, track, and address abuse more effectively than a single investigator could achieve.

Success will be measured not only by the number of investigations completed but also by how efficiently your contributions allow you and your teammates to operate. You will collaborate closely with engineering, legal, investigations, security, and policy partners to address urgent escalations, examine activities that surpass existing safeguards, and translate investigative findings into scalable detection and enforcement strategies.

This role requires participation in an on-call rotation to manage urgent escalations beyond standard work hours. Some investigations may involve sensitive content, including sexual, violent, or otherwise disturbing material. This position operates in the PST time zone and is open to remote candidates within the United States, although we have a strong preference for applicants based in San Francisco or New York.

Mar 12, 2026
OpenAI
Full-time|On-site|San Francisco

About the Team

At OpenAI, security is integral to our mission of ensuring that artificial general intelligence serves the best interests of humanity. Our Threat Intelligence team is dedicated to safeguarding OpenAI's technology, personnel, research, and infrastructure. We proactively identify and mitigate threats from adversaries aiming to exploit our systems or misuse our models. By investigating complex threats, developing scalable analytical tools, and delivering intelligence, we shape our security strategies and provide leadership with actionable insights. Our approach combines technical expertise, investigative thoroughness, and robust cross-functional collaboration to detect threats and enhance security across OpenAI's various sectors.

About the Role

We are seeking a Technical Threat Investigator to bolster our defenses against sophisticated adversaries targeting OpenAI and the wider ecosystem, including those attempting to misuse our models for cyber operations.

In this investigative role, you will independently conduct comprehensive investigations into advanced threat actors, analyzing their behaviors, infrastructure, and emerging techniques, including how they integrate AI into their operations. Your findings will be crucial in proactively identifying malicious activities and enhancing detection, disruption, enforcement, and overall safety within the organization.

You will convert your investigative insights into scalable solutions. This includes developing lightweight tools, automating processes where feasible, and creating AI-assisted workflows to streamline investigations and improve effectiveness over time.

In this role, you will:
- Perform thorough investigations into sophisticated threat actors interacting with OpenAI's models, products, and ecosystem.
- Adopt an adversarial mindset: modeling attacker behavior, predicting misuse patterns, and actively hunting for and disrupting malicious activities.
- Utilize internal telemetry, open-source intelligence (OSINT), vendor data, and proprietary safety systems to generate high-confidence findings regarding adversarial use of our models in cyber operations and platform abuse.
- Translate investigative findings into tangible enhancements across detection, enforcement, intelligence, and safety frameworks.
- Develop tools, scripts, and automations to improve investigative processes and outcomes.

Apr 30, 2026
Airwallex
Full-time|On-site|US - San Francisco

Join Our Team at Airwallex

At Airwallex, we are redefining the landscape of global finance with our all-in-one payments and financial platform designed for businesses around the globe. Our unique blend of proprietary technology and innovative software has empowered over 200,000 companies, including industry leaders like Brex and Qantas, to seamlessly manage their financial needs, from business accounts and payments to treasury and embedded finance solutions.

Founded in Melbourne and now boasting a diverse team of over 2,000 talented professionals across 26 global offices, we are backed by prominent investors such as T. Rowe Price and Visa, achieving a valuation of $8 billion. Join us to make an impactful contribution as we lead the future of global payments.

Our Values

We seek dynamic individuals with a builder's mindset who are eager to create meaningful impact, learn rapidly, and take ownership of their work. If you possess strong expertise, a curious intellect, and a commitment to our mission and operating principles, we want you on our team. You should thrive in a fast-paced environment, approach challenges with a problem-solving mindset, and embrace collaboration to turn innovative ideas into reality.

The Operations Team

The Operations team at Airwallex plays a crucial role in ensuring our services run smoothly and efficiently. Our focus is on optimizing workflows, enhancing operational effectiveness, and providing top-notch customer support. By implementing best practices, we contribute to the company's growth and uphold our commitment to delivering exceptional service to our clients worldwide.

Your Role

As a Senior Risk Investigator, you will be instrumental in detecting, analyzing, and mitigating risk across our product areas. You will conduct comprehensive investigations into complex risk patterns and handle escalated cases from the risk department. Your contributions will be vital in safeguarding our operations and ensuring compliance across our global platform.

Feb 27, 2026
OpenAI
Full-time|Remote|San Francisco

Role Overview

OpenAI is seeking an Abuse Investigator focused on AI self-improvement risk. This position is based in San Francisco. The role centers on protecting the ethical and responsible use of AI systems.

What You Will Do
- Analyze potential risks connected to AI self-improvement and related technologies.
- Investigate reported incidents or concerns involving AI behavior or misuse.
- Work with teams across the company to strengthen safety protocols and reduce risk.

Apr 15, 2026
humans&
Full-time|On-site|San Francisco

Technical Staff Member

At humans&, we are dedicated to pioneering a human-centric approach to artificial intelligence. Our mission is to redefine AI by placing individuals and their interpersonal connections at the heart of our innovations.

We invite talented researchers and engineers who have made significant contributions to the cutting edge of AI to join our dynamic team. If you excel in your field and are driven to innovate, we want to hear from you!

Jan 20, 2026
OpenAI
Full-time|Hybrid|San Francisco

About Our Team

The Intelligence and Investigations team at OpenAI is committed to proactively identifying and mitigating online abuse and strategic risks. Our mission is to foster a safe digital ecosystem by analyzing emerging trends and collaborating with internal and external partners to implement effective measures against misuse. Through our initiatives, we contribute to OpenAI's goal of creating AI technologies that enhance human well-being.

The Strategic Intelligence & Analysis (SIA) team specializes in providing safety intelligence for OpenAI's products. By monitoring, analyzing, and forecasting real-world abuse, geopolitical risks, and strategic threats, we guide safety measures, product strategies, and partnerships. Our work ensures that OpenAI's tools are utilized securely and responsibly in essential sectors.

About the Role

As a Data Scientist, you will spearhead econometric and experimental analyses to understand risk dynamics within complex human-AI systems. Your focus will be on quantifying the magnitude and implications of risk fluctuations in a fast-paced, evolving environment. You will design experimental and observational studies to uncover causal factors and assess risk changes across diverse surfaces and sources. Your insights will directly influence strategic risk management and prioritization across the organization.

This position is based in San Francisco, CA (hybrid, 3 days/week) with relocation assistance available.

Key Responsibilities:
- Design and execute experimental and observational analyses to evaluate strategic risks.
- Develop econometric models to assess the impact of product, policy, and external factors on critical risk vectors.
- Translate strategic risk inquiries into testable hypotheses and robust study frameworks.
- Implement A/B tests and quasi-experimental studies to measure risk changes and uncover underlying mechanisms.
- Identify, test, and clarify product-driven, event-driven, or signal-driven risk variations.
- Establish baselines and statistical confidence around core metrics to define the scale of risks.
- Collaborate across teams to monitor strategic risks, pinpoint intervention opportunities, and assess the effectiveness of those interventions.

Dec 19, 2025
OpenAI
Full-time|Hybrid|San Francisco

About the Team

Join our Intelligence and Investigations team, where we work tirelessly to swiftly identify and address abuse and strategic risks, fostering a safer online environment. Our mission is to detect emerging abuse trends and analyze potential risks, collaborating with both internal and external partners to implement robust mitigation strategies that safeguard against misuse. This vital work aligns with OpenAI's core objective of creating AI that serves humanity positively.

Our Strategic Intelligence & Analysis (SIA) team delivers crucial safety intelligence for OpenAI's products. We monitor, analyze, and predict real-world abuse, geopolitical risks, and strategic threats. Our insights guide safety measures, product decisions, and partnerships, ensuring that OpenAI's technologies are deployed securely and responsibly across essential sectors.

About the Role

As a Technical Intelligence Analyst, you will play a pivotal role in producing rapid, scaled risk analyses that result in structured and actionable intelligence. You will design and support analytical workflows that empower our team to identify, validate, and prioritize risks effectively. By employing a range of technical tools and discovery methods, you will uncover novel harms and abuse patterns.

Moreover, you will enhance recurring intelligence workflows, allowing analysts to prioritize information swiftly and efficiently. Your efforts will be vital in transforming these workflows into reusable frameworks, dashboards, AI-assisted processes, and rapid analysis tools. You will also design and implement lightweight technical solutions that enable analysts to convert raw signals into actionable insights at scale.

This position is based in San Francisco, CA (hybrid, 3 days/week), and relocation support is available.

In this role, you will:
- Develop and maintain analytical tools, evaluation methodologies, and quality metrics for intelligence workflows.
- Provide structure and technical rigor to analyst-built prototypes, redesigning workflows for broader applicability.
- Transform recurring needs into reusable templates, notebooks, graders, and lightweight tools.
- Identify and surface novel harms and abuse patterns across diverse ecosystems and behaviors.
- Establish rapid analysis workflows focused on emerging risks.
- Oversee the delivery, monitoring, and maintenance of risk tools and dashboards.
- Utilize SQL, Python, and AI-powered tools to enhance the speed, coverage, and consistency of analyses.

May 1, 2026
Control Risks
Full-time|$160K/yr|Remote|San Francisco, California, United States

The Cyber Threat Intelligence Team Lead is crucial in establishing and guiding a premier Cyber Intelligence program for a key client at Control Risks. This role entails crafting strategies, enhancing capabilities, and leading a dedicated team of security professionals to proactively identify, assess, and respond to cyber threats.

This position encompasses providing technical guidance and administrative oversight on all cybersecurity initiatives, ensuring the safeguarding of the client's systems, networks, and sensitive data. The Team Lead collaborates closely with technology and business stakeholders to integrate security considerations into all planning, development, and operational processes.

Responsibilities:
- Collaborate with client stakeholders to build, manage, and expand a Cyber Threat Intelligence Team from inception.
- Take charge of developing Standard Operating Procedures for threat intelligence operations, tailored to specific client activities and stakeholder needs, including tooling, reporting structures, and incident management outside regular hours.
- Oversee the management of the most severe and critical cybersecurity incidents, providing support to incident responders with timely reporting, updates, and investigations to facilitate effective incident response and crisis management.
- Mentor and train threat intelligence analysts, engineers, and threat hunters to enhance their skills and capabilities.
- Establish operational workflows, escalation protocols, and comprehensive playbooks.
- Supervise the triage of cybersecurity events, ensuring swift identification, investigation, and remediation.
- Coordinate incident response activities across IT, Legal, Risk, and other relevant stakeholders.
- Develop metrics, KPIs, and reporting frameworks to evaluate the effectiveness of the Security Operations Center (SOC).
- Lead proactive threat hunting initiatives to uncover potential compromises and undetected malicious activities.
- Integrate threat intelligence into SOC workflows and leverage insights to shape response and prevention strategies.
- Assess and optimize the client's technology stack, including SIEM, SOAR, EDR, and threat intelligence platforms.
- Drive ongoing enhancements in detection rules, automation, and response capabilities.
- Propose emerging tools and processes to elevate operational maturity.
- Conduct regular check-ins, offer coaching and feedback, manage performance reviews and improvement plans, and support career development for team members.
- Act as the primary liaison between team members and the ECS program management team, ensuring timely updates on programs and personnel, and maintaining quality control on client deliverables.
- Collaborate with the Talent Acquisition team in the hiring process to ensure team resources align with client expectations and program requirements.
- Lead onboarding efforts, manage logistics for offboarding, and ensure operational continuity during transitions.

Nov 20, 2025
dstaff
Full-time|On-site|San Francisco

Join dstaff as a Technical Risk Governance Specialist in beautiful San Francisco, California! We are seeking a motivated professional to oversee and enhance our risk governance framework. You will play a crucial role in developing policies, procedures, and controls to manage technical risks effectively.

Nov 24, 2014
SoFi
Full-time|Remote|WA - Seattle; UT - Cottonwood Heights; CA - San Francisco; NY - New York City; TX - Frisco

Join SoFi as a Senior Insider Trust & Fraud Investigator, where you will play a pivotal role in safeguarding our customers and ensuring their trust. You will leverage your expertise to investigate and mitigate insider threats and fraudulent activities, contributing to a secure financial environment.

Mar 25, 2026
dstaff
Full-time|On-site|San Francisco

Join dstaff as a Technical Risk Governance Specialist in the vibrant city of San Francisco! We are looking for a dedicated individual to help manage and mitigate technical risks within our organization. As part of our team, you will play a crucial role in ensuring that our technology systems are secure, compliant, and aligned with industry best practices.

Nov 23, 2014
Control Risks
Full-time|$120K/yr - $140K/yr|Remote|San Francisco, California, United States

The Senior Cyber Threat Intelligence Analyst is integral to the daily functions of our client's cyber threat intelligence team. Collaborating closely with the Team Lead, this role emphasizes the triage of cyber events, proactive threat hunting, and the enhancement of the Security Operations Center (SOC) technology stack. This is a hands-on opportunity for a cybersecurity enthusiast eager to develop leadership skills while directly aiding in the identification and mitigation of cyber threats.

Responsibilities:
- Respond to and manage security alerts and incidents in real time.
- Conduct thorough analyses of logs, network traffic, and endpoint data to uncover malicious behavior.
- Provide clear recommendations and escalate critical incidents to the Team Lead and relevant stakeholders.
- Engage in proactive threat hunting to uncover anomalies, suspicious activities, and sophisticated threats.
- Contribute to the development of playbooks and use cases addressing emerging attack methodologies.
- Assist in optimizing and fine-tuning tools such as SIEM, SOAR, and EDR platforms.
- Create detection rules, automation scripts, and dashboards to boost team productivity.
- Collaborate on evaluating new technologies and potential integrations.

Jan 27, 2026
City and County of San Francisco

Background Investigator

Full-time|On-site|San Francisco

Join the City and County of San Francisco as a Background Investigator and play a crucial role in ensuring the integrity of our police department. In this dynamic position, you will conduct comprehensive background investigations for individuals seeking employment within the police force. Your keen attention to detail and commitment to public service will help maintain a trustworthy and effective law enforcement environment.

Jan 7, 2026
OpenAI
Full-time|Remote|San Francisco

About Our Team

At OpenAI, our mission is to ensure that general-purpose artificial intelligence serves the betterment of all humanity. We believe that the realization of this mission hinges on real-world deployment and continuous iteration based on our experiences.

The Intelligence and Investigations team plays a crucial role in this mission by identifying and probing into misuse of our products, particularly emerging forms of abuse. This work empowers our partner teams to devise data-informed product policies and develop scalable safety measures. A nuanced understanding of abuse enables us to provide users with the tools they need to create positive outcomes with our products.

About This Role

As an Abuse Investigator within the Intelligence and Investigations team, your primary responsibility will be to detect and assess malicious activities on our platform, effectively disrupting any violations of our policies and identifying harmful behaviors. This role necessitates an expert-level comprehension of our products and data, alongside a solid background in investigating threat actors. You will address urgent escalations, particularly those that evade our existing tools and safeguards.

This position demands specialized knowledge in identifying, interpreting, and mitigating risks associated with violent behaviors and terrorist activities. You should have experience in investigating complex and harmful threats, along with the capability to discern ambiguous signals in a multifaceted and adversarial threat landscape. A demonstrated ability to quickly assimilate new processes, systems, and team dynamics while thriving in a dynamic, high-pressure environment is essential.

This role operates on Pacific Standard Time and supports remote work, although you are also welcome to work from our offices in San Francisco, New York, or Washington, D.C. The position includes resolving urgent escalations outside of standard working hours and participating in on-call shifts. Investigations will involve sensitive content, including sexual, violent, or otherwise disturbing material, including issues of child safety, so resilience in managing high-stress environments is crucial.

Key Responsibilities:
- Analyze leads, investigate activities, and disrupt abusive operations in collaboration with our policy, legal, and integrity teams, focusing on violent and terrorist activities, particularly those posing immediate threats to life.
- Create abuse signals and tracking strategies to proactively identify harmful activities on our platform.
- Identify operational workflow enhancements and processes that expedite work while maintaining risk mitigation strategies.

Mar 11, 2026
Axiom Talent Platform
Full-time|On-site|US - San Francisco

Become a Key Member of Our Investigations Team at Axiom

Location: San Jose, CA (on-site required, 4-5 days per week)

We are actively looking for a seasoned Investigations Counsel to join our dynamic technology client team. In this pivotal role, you will spearhead comprehensive corporate investigations for a fast-paced global SaaS and technology company, ensuring that all issues are addressed meticulously, consistently, and in adherence to applicable laws and internal policies. This position is currently open.

Key Responsibilities:
- Oversee the complete lifecycle of workplace investigations, from initial intake to final reporting, including scoping, interviews, evidence analysis, findings, and remediation recommendations.
- Manage a portfolio of 6-8 complex cases at any given time, with an expectation to resolve most cases within 30 days.
- Conduct investigations on a broad spectrum of issues, from interpersonal complaints to intricate multi-party cases, including Title VII and other EEO-related matters.
- Prepare clear and well-reasoned investigation reports, collaborating closely with Legal, HR, Compliance, and business stakeholders on findings and remediation measures.
- Develop and enhance investigation protocols, documentation standards, and reporting metrics.
- Keep abreast of the latest developments in employment law, anti-discrimination law, and whistleblower protections, as well as best practices in workplace investigations.

Qualifications:
- A minimum of 10 years of direct experience in full-cycle workplace and corporate investigations.
- Proven expertise in handling complex and sensitive investigations, including Title VII and anti-discrimination and harassment cases.
- Experience in a fast-paced technology or SaaS environment is highly desirable.
- Demonstrated ability to juggle multiple high-stakes investigations simultaneously under tight deadlines.
- Exceptional skills in interviewing, fact-finding, and written communication, including the ability to produce concise and defensible reports.
- Juris Doctor (JD) or equivalent, with a valid license to practice law in California.
- Must be willing to work on-site at the client's office 4-5 days per week.

Preferred Skills:
- Prior in-house investigations experience at a global or rapidly scaling technology company.
- Familiarity with working alongside cross-functional teams (Legal, HR, Ethics & Compliance, Security) during investigations and remediation efforts.
- Understanding of global employment and data privacy considerations in cross-border investigations.

Feb 17, 2026
OpenAI
Full-time|On-site|San Francisco

About Our Team

Join OpenAI's innovative Human Data Team, where we engineer cutting-edge data solutions to propel pioneering research forward. Our team plays a crucial role in enhancing and evaluating flagship models and products such as ChatGPT, GPT-4, and Sora, while also contributing to vital safety initiatives in collaboration with our Preparedness and Safety Systems teams.

About the Position

As a Technical Program Manager (TPM) within the Human Data team, you will work closely with research and engineering teams to devise and implement effective strategies for gathering high-quality data. This role serves as a pivotal link between our research roadmap, external partners, AI trainers, and the Human Data engineering team.

This position is located at our headquarters in San Francisco and reports directly to the Head of Human Data Operations. We provide relocation assistance to new team members.

Your Responsibilities Include:
- Collaborating with Research: Engage with researchers to define data collection requirements, establish success metrics, and create quality assessment frameworks.
- Designing & Executing Data Collection Initiatives: Transform research requirements into actionable plans and expedite execution by utilizing existing tools while iterating to meet objectives. You may need to create innovative solutions in tandem with engineering to develop robust and scalable systems.
- Proactive Problem Solving: Take a proactive approach to overcoming obstacles without waiting for external dependencies, using your technical skills and drive to achieve partial successes in the meantime.
- Optimizing Systems & Processes: Develop and refine dashboards to monitor campaign performance, applying SQL and Python for data analysis and actionable insights.
- Advancing Technical Roadmaps: Collaborate with engineering teams to improve data platforms, remove obstacles, and ensure adherence to security best practices, including access management.
- Amplifying Your Impact: Guide and empower program managers and vendors to facilitate daily operations, allowing you to concentrate on high-priority opportunities.

You Will Excel in This Role If You:
- Possess strong SQL and Python skills for data analysis, including database querying, processing large datasets, and deriving actionable insights.
- Have a proven track record of effectively collaborating across teams to achieve common goals.
- Demonstrate a strong understanding of data collection methodologies and project management principles.
- Exhibit exceptional problem-solving abilities and a results-oriented mindset.

Oct 30, 2025
OpenAI
Full-time|On-site|San Francisco

About Our Team

The Intelligence and Investigations team is dedicated to swiftly identifying and mitigating abuse and strategic risks, ensuring a secure online environment through close collaboration with both internal and external partners. Our initiatives align with OpenAI's fundamental mission of developing AI technologies that benefit humanity.

The Strategic Intelligence & Analysis (SIA) team plays a crucial role in providing safety intelligence for OpenAI's products. We focus on monitoring, analyzing, and forecasting real-world abuse, geopolitical risks, and strategic threats. Our work informs safety mitigations, product decisions, and partnerships, ensuring that OpenAI's tools are deployed securely and responsibly across critical sectors.

About the Role

We are searching for an AI Emerging Risks Analyst to help us understand potential harms and misuse of AI in a landscape characterized by rapid and sustained change. This role involves identifying known threat actors who exploit new technologies as well as emerging threats enabled by those advancements. You will use strategic foresight methodologies to proactively detect and mitigate risks.

In this position, you will provide a strategic-level perspective on a diverse range of evolving risk areas. You will be instrumental in creating actionable risk taxonomies pertinent to OpenAI's platforms and broader business interests. By employing both quantitative and qualitative methodologies, you will identify early warning signals, investigate concerning behaviors, and transform weak signals into prioritized risk assessments. Your focus will include upstream ecosystem scanning, competitive benchmarking, and external narrative and risk sense-making. Your contributions will guide cross-functional partners in the protection and safety domains, ensuring user, brand, and community safety while fostering productive and creative uses of our tools.

Key Responsibilities

Identify and prioritize emerging risks:
- Develop and continuously refine a comprehensive view of emerging signals and trends that may impact the AI ecosystem through proactive scanning.
- Design and maintain harm taxonomies to foresee and warn about potential AI-related harms and misuse over the next 0-24 months and beyond.
- Contribute to an ongoing risk register and prioritization framework that highlights the most pressing issues based on severity, prevalence, exposure, and trajectory.

Analyze and investigate emerging abuse patterns:
- Create thorough strategies to investigate and understand these patterns.

Feb 13, 2026
