Experience Level
Entry Level
Qualifications
The ideal candidate will possess strong analytical skills, a deep understanding of network protocols, and hands-on experience with penetration testing tools. A background in information security, including knowledge of threat modeling and risk assessment, is preferred. Strong communication skills and the ability to work collaboratively in a team environment are essential.
About the job
Become a pivotal part of our cybersecurity team as a Red Team Security Engineer at Astranis. In this role, you will have the opportunity to simulate sophisticated attacks on our systems, helping to identify vulnerabilities and strengthen our security framework. We are looking for innovative thinkers who can think like adversaries and enhance our defenses.
About Astranis
Astranis is at the forefront of satellite technology, committed to providing affordable internet access across the globe. Our innovative solutions aim to bridge the digital divide, making a difference in the lives of millions. Join us in our mission to connect the unconnected while working in an inspiring and dynamic environment.
Similar jobs
Security Research Manager, Coverage Team
Full-time|$188K/yr - $254K/yr|On-site|San Francisco, Boston, New York, Denver
About Semgrep
Semgrep is at the forefront of code security for developers, enabling innovative work without compromising safety. Our platform allows teams to identify, report, and rectify genuine issues before deployment, supported by an intelligent security system that evolves alongside development. Semgrep enhances code security as it is authored, providing essential guardrails that allow developers to operate swiftly while maintaining security. Built for creators and endorsed by security professionals, our solution integrates seamlessly into developers' workflows, delivering solutions that preserve productivity while granting security teams enhanced oversight, control, and assurance. As Semgrep evolves, our AI adapts to your context, minimizing false positives and prioritizing actionable vulnerabilities, a claim validated by 95% of security reviewers across over 6 million findings. We are committed to making zero false positives a reality, enabling AppSec teams to manage 80% fewer false alarms across Code and Supply Chain, significantly reducing backlog.
Founded in San Francisco and supported by investors such as Menlo Ventures, Felicis Ventures, Lightspeed Venture Partners, Redpoint Ventures, and Sequoia Capital, Semgrep has been acknowledged by Gartner in Application Security Testing and is trusted by top-tier organizations like Snowflake, Dropbox, and Figma. Discover more at semgrep.dev.
About the Role
As the Security Research Manager for the Coverage Team, you will spearhead a group of Security Researchers dedicated to enhancing detection rules for Secrets, Code, and Supply Chain across all Semgrep products. Your responsibilities will include:
- Crafting high-quality detection rules
- Innovating research and automation techniques to expedite and enhance rule creation
- Evaluating and elevating the overall quality and scope of detections
In this managerial role, you will report directly to the Head of Security Research. You will define the strategic roadmap, collaborate with Product Management to concentrate on the most impactful detection areas, and drive ongoing enhancements in both detection accuracy and coverage breadth. Achieving success in this position means leading a team that produces exceptional detections, scales rule generation through automation and AI, and expands the limits of contemporary vulnerability research.
Your Responsibilities:
- Recruit, mentor, and nurture your team, fostering a productive, engaging, diverse, and inclusive workplace that aligns with Semgrep's core values
- Collaborate closely with product management, sales, and development teams across all product lines
- Analyze, measure, and enhance the velocity and quality of Semgrep detections
About the Team
At OpenAI, security is integral to our mission of ensuring that artificial general intelligence benefits all of humanity. Codex Security, our pioneering security agent, is designed to scan GitHub Cloud repositories, verify genuine vulnerabilities, and collaborate with Codex to generate effective fixes.
About the Role
In this critical position, you will spearhead initiatives to identify, characterize, and prioritize vulnerabilities across multiple layers in advanced AI systems, including data pipelines, training and inference runtimes, and system supply chains. Your work will encompass offensive research, technical documentation, product enhancement, and serving as OpenAI’s primary technical liaison to select external partners, including potential U.S. government stakeholders.
Key Responsibilities:
- Conduct comprehensive security research on real-world software systems to uncover intricate vulnerabilities across extensive codebases and distributed architectures.
- Validate vulnerabilities identified by AI-driven security agents through the development of proofs-of-concept and exploit demonstrations.
- Collaborate with engineering teams to optimize automated vulnerability discovery, validation, and remediation workflows within product development.
- Create high-quality security datasets and evaluations that will enhance the cybersecurity capabilities of models.
- Advance AI models used for vulnerability discovery and remediation by establishing datasets, evaluations, and feedback mechanisms based on real-world research.
- Publish insightful technical write-ups, research findings, and vulnerability analyses to elevate application security standards.
You Will Excel If You:
- Possess extensive experience in vulnerability research, exploit development, or offensive security.
- Demonstrate a strong command of advanced offensive security techniques.
- Are well-versed in AI/ML infrastructure (data, training, inference, schedulers, accelerators) and can perform comprehensive threat modeling.
- Exhibit the ability to work independently, unify diverse teams, and meet tight deadlines.
- Communicate effectively and succinctly with both technical experts and decision-makers.
- Have a passion for enhancing the security of widely utilized software and open-source platforms.
Join Anthropic as an Offensive Security Research Engineer in our Safeguards team, where you will play a critical role in identifying and mitigating security risks. Your expertise will be essential in enhancing our security protocols and ensuring the integrity of our systems. You will collaborate with cross-functional teams to develop innovative solutions that prioritize safety and security.
At Perplexity, we are on the lookout for passionate researchers and engineers to join our pioneering Secure Intelligence Institute (SII). As our primary research hub, SII is dedicated to enhancing security, privacy, and trust within the realm of frontier intelligence. Our mission focuses on pushing the boundaries of AI security research, implementing significant enhancements in Perplexity's systems, and disseminating insights that bolster the wider AI ecosystem.
As a member of SII, your role will involve undertaking original and influential research aimed at bolstering the security and privacy of advanced intelligence systems. You will strive to ensure that your research is not only theoretically sound but also pragmatically applicable to improve systems that are relied upon daily by millions of users and thousands of businesses. You will be expected to effectively translate your research findings, as well as advancements from the broader research community, into actionable improvements that safeguard Perplexity's users.
About Our Team
At OpenAI, our Safety Systems organization is dedicated to ensuring the responsible development and deployment of our most advanced AI models. We create robust evaluations, safeguards, and safety frameworks that guarantee our models operate as intended in real-world environments.
The Preparedness team, an integral part of the Safety Systems organization, is guided by OpenAI’s Preparedness Framework. While frontier AI models hold the promise of benefiting humanity, they also introduce significant risks. The Preparedness team is tasked with monitoring and preparing for catastrophic risks associated with these advanced AI models to ensure they drive positive change.
Our mission includes:
- Proactively monitoring and assessing the evolving capabilities of frontier AI systems, focusing on identifying catastrophic risks.
- Establishing concrete procedures, infrastructure, and partnerships to effectively mitigate these risks and manage the development of powerful AI systems safely.
This dynamic and impactful role connects capability assessments, evaluations, internal red teaming, and mitigations for frontier models, playing a crucial role in our overall AGI preparedness efforts.
About the Position
In this pivotal role, you will spearhead the Automated Red Teaming (ART) initiative, developing scalable, research-driven systems that continuously identify failure modes in our AI models and implement actionable improvements. Your primary goal will be to minimize potential harm by identifying the most critical vulnerabilities early and reliably.
Your Responsibilities
You will direct the research and technical strategy for automated red teaming across critical risk areas, focusing initially on:
- Automated discovery of classifier jailbreak vulnerabilities (cybersecurity and biosecurity).
- Automated elicitation of bio threat-development scenarios (worst-case planning uplift).
About DepthFirst AI
At DepthFirst, we are committed to reinforcing the backbone of modern civilization: software. However, vulnerabilities pose significant risks to its integrity, security, and resilience. Our mission is to innovate solutions to enhance software security.
We are pioneering intelligence technologies designed to detect and remediate critical software vulnerabilities. Our focus is on training and scaling security AI agents capable of identifying zero-day vulnerabilities across extensive customer codebases and widely used open source software.
With a founding team comprising experts from leading organizations like DeepMind, Databricks, Square, and Faire, we seek talented individuals eager to work at the confluence of AI, Security, and Infrastructure.
About This Role
We are in search of an accomplished Security Researcher to contribute to the development and training of AI agents specialized in vulnerability discovery and exploitation. Your role will involve creating technology that can uncover vulnerabilities at scale, akin to discovering the next Log4j, and facilitating the identification and remediation of vulnerabilities in both customer and open source codebases.
We are seeking dynamic security researchers with a keen intuition for identifying, analyzing, and investigating application vulnerabilities. Collaborating with AI researchers and engineers, you will explore novel attack vectors and assist in the advancement of sophisticated detection and defense mechanisms. Your contributions will be vital in reshaping security practices for organizations worldwide.
You Will Be Excited About This Role Because You Will:
- Develop technologies capable of identifying novel vulnerabilities at scale in both proprietary and open source codebases.
- Design techniques aimed at minimizing false positives through automated exploitation, proof-of-concept generation, and context inference.
- Engage closely with engineers to comprehend limitations and innovate methodologies to enhance our systems.
- Publish internal technical reports and contribute to security advisories as necessary.
- Work on a product that addresses critical security issues, already proving valuable to customers by helping them rectify significant vulnerabilities within days of implementation.
Join Cloudflare as a Senior Research Manager, where you'll lead innovative research initiatives that shape the future of internet security. As a pivotal member of our team, you will be responsible for driving strategic research projects, collaborating with cross-functional teams, and leveraging data insights to enhance our product offerings. Your leadership will guide the research direction and ensure alignment with our company goals.
Join Cloudflare as a Research Manager and play a pivotal role in driving innovative research initiatives that enhance our security and performance solutions. You will lead a team of skilled researchers, collaborating closely with cross-functional teams to identify market trends and develop groundbreaking strategies that align with our business objectives.
Your responsibilities will include overseeing research projects from inception to completion, analyzing data to derive actionable insights, and presenting findings to stakeholders. You will foster a culture of creativity and critical thinking, ensuring that our research efforts remain at the forefront of industry standards.
We are seeking a highly motivated and detail-oriented Research Program Manager to join our dynamic team at mercor. The ideal candidate will play a pivotal role in overseeing various research initiatives, coordinating projects, and ensuring that all objectives are met efficiently and effectively.
This position offers an exciting opportunity to work in a fast-paced environment where innovation and collaboration are key. You will be responsible for managing project timelines, budgets, and resources while fostering a culture of continuous improvement within the team.
Full-time|$165K/yr - $242K/yr|On-site|Livingston, NJ / New York, NY / Sunnyvale, CA / Bellevue, WA / San Francisco, CA
CoreWeave is seeking a Security Engineering Manager to lead the Platform Security team. This position is based in Livingston, NJ, New York, NY, Sunnyvale, CA, Bellevue, WA, or San Francisco, CA. The team’s mission is to embed security into CoreWeave’s Kubernetes-based platform and public cloud environments, supporting high-performance infrastructure for AI and machine learning workloads.
Role overview
This manager will oversee and expand the Platform Security engineering team, reporting to the Senior Director of Security Foundations. The focus is on hands-on leadership and technical execution, with an emphasis on building and implementing security controls rather than policy development. The role requires close collaboration with Infrastructure, Platform Engineering, Site Reliability Engineering, and other security teams to ensure security measures keep pace with business growth and evolving needs.
What you will do
- Lead and grow the Platform Security engineering team.
- Integrate security into Kubernetes infrastructure and public cloud platforms such as AWS, GCP, and Azure.
- Define and execute strategies for cloud security posture, workload isolation, platform guardrails, image integrity, and multi-cloud security.
- Develop and implement security controls across CoreWeave’s infrastructure.
- Work closely with other technical teams to align platform security with business needs.
The Platform Security team
The Platform Security team at CoreWeave engineers systems that enforce security at the infrastructure layer. Their work spans both CoreWeave’s own Kubernetes-based platform and third-party public cloud environments. The team supports GPU-accelerated infrastructure for demanding AI and machine learning workloads, ensuring that both customer and internal services remain secure as CoreWeave’s global presence expands.
At Contrast Security, we are revolutionizing the way organizations safeguard their software in the fast-paced realm of modern development. With our industry-leading Application Detection and Response (ADR), we empower teams to detect, halt, and remediate genuine threats in real time. If you are driven by the desire to create smarter, faster, and more effective security solutions, you will find a welcoming home here. We are in search of innovative thinkers, courageous creators, and problem solvers who excel at transforming complex challenges into groundbreaking solutions.
As the Senior Channel Account Manager for the West region, you will play a pivotal role in bringing our leading application security tool to the forefront of the market. We are looking for a proactive and imaginative leader to cultivate mutually beneficial partnerships with both new and existing channel business partners. This role is heavily focused on driving revenue growth and requires a strong aptitude for expanding our existing partnerships while identifying new ones that will propel the Contrast Security brand into the market. You will be accountable for a regional sales quota and will be required to travel frequently.
Ideal candidates will possess a strong hunter mentality, a proven consultative selling approach, and exceptional presentation skills. We seek a talented, enthusiastic, and ambitious channel partner account manager who is ready to join our team and elevate Contrast Security to new heights. Partner account managers are crucial to the growth of our partner organizations. The primary objective of this role is to establish credibility and trust with Senior Partner Executives, fostering investments in selling and driving the growth of Contrast products. At Contrast, you will find an environment where innovation and success stem from collaborative creativity.
Full-time|$100K/yr - $300K/yr|On-site|San Francisco, CA
About Cogent Security
Cogent Security is at the forefront of cybersecurity innovation, leveraging applied AI to develop next-generation AI agents. In an era where cyber attacks evolve rapidly, our AI Taskforce analyzes vast amounts of enterprise data to proactively address vulnerabilities and prevent critical breaches. We combine pioneering research with practical execution, ensuring that our innovative solutions meet real-world challenges. Our Cogent Research division acts as our dedicated AI lab, driving the development of advanced security workflows.
Since our emergence from stealth mode, we have rapidly grown, collaborating with Fortune 500 companies to secure complex production environments globally. Supported by Greylock, we have gathered a team of top talent from renowned institutions and leading organizations in the AI and cybersecurity sectors.
About the Role
As an Agent Engineer at Cogent Security, you will be pivotal in designing, building, and deploying critical AI agents tailored for complex client environments. Your role is highly cross-functional, involving direct collaboration with customers to understand their unique needs, adapting our platform accordingly, and iterating on scalable solutions to handle millions of real-world security events.
You will manage projects from inception to deployment, including data onboarding and integrating feedback into our core agent platform. Your contributions will shape how AI agents detect threats, triage incidents, and automate security workflows for some of the most sophisticated organizations worldwide.
This position is ideal for engineers who excel in dynamic environments, enjoy tackling complex technical challenges, and wish to see the tangible impact of their work.
As a Product Manager specializing in Research at Decagon, you will play a pivotal role in shaping our product strategy and driving innovation. You will work closely with cross-functional teams to identify market opportunities, define product vision, and deliver actionable insights that meet customer needs.
Your responsibilities will include conducting thorough market research, analyzing user feedback, and collaborating with engineering, design, and marketing teams to ensure successful product launches. You will leverage your expertise to prioritize features and enhancements that align with our company objectives.
About Our Team
Join the Privacy Engineering Team at OpenAI, where we are dedicated to embedding privacy as a core principle within our mission to develop Artificial General Intelligence (AGI). We focus on ensuring that all OpenAI products and systems that process user data adhere to the highest standards of privacy and security.
Our team engineers essential production solutions, innovates privacy-preserving methodologies, and provides cross-functional engineering and research teams with the tools necessary for responsible data management. Our commitment to ethical data utilization is a cornerstone of OpenAI's vision for safely advancing AGI for the benefit of everyone.
About the Position
As a valued member of the Privacy Engineering Team, you will be instrumental in protecting user data while enhancing the usability and effectiveness of our AI systems. You will engage with cutting-edge research on privacy-enhancing technologies, including differential privacy, federated learning, and data memorization techniques. Your role will also entail exploring the intersection of privacy and machine learning, innovating methods for better data anonymization, and mitigating risks associated with model inversion and membership inference attacks.
This position is based in San Francisco, and we offer relocation assistance.
Key Responsibilities:
- Design and prototype scalable privacy-preserving machine learning algorithms (e.g., differential privacy, secure aggregation, federated learning) for deployment at OpenAI.
- Evaluate and enhance model resilience against privacy threats such as membership inference, model inversion, and data memorization leaks, ensuring a balance between utility and security assurances.
- Create internal libraries, evaluation frameworks, and documentation to make advanced privacy techniques accessible to engineering and research teams.
- Conduct comprehensive investigations into the privacy-performance trade-offs of large models, sharing findings that guide model training and product safety protocols.
- Establish and document privacy standards, threat models, and audit procedures to govern the entire machine learning lifecycle, from dataset curation to post-deployment oversight.
- Work collaboratively with Security, Policy, Product, and Legal teams to translate evolving regulatory frameworks into actionable technical safeguards and tools.
Join Hive as a Security Compliance Manager and take the lead in enhancing our security framework. Collaborate with engineers and auditors to ensure compliance with industry standards such as ISO and SOC, as well as federal regulations. You will own the execution of our Information Security program, focusing on improving personnel screening compliance and risk monitoring. Your role will require effective communication with technology and business leaders across all levels, driving consensus among stakeholders to ensure security controls are effective and remediated as necessary.
Join Anthropic as a Research Engineer focusing on Economic Research. In this role, you will leverage your analytical skills to conduct in-depth economic analysis and contribute to innovative projects aimed at enhancing our understanding of economic models and their implications.
About Our Team
At OpenAI, our Foundations team is dedicated to examining how model behavior evolves as we scale up models, data, and computing resources. We meticulously analyze the relationships between model architecture, optimization strategies, and training datasets to inform the design and training of next-generation models.
About the Position
As a Team Lead in Research Inference, you will be instrumental in constructing systems that empower advanced AI models to operate efficiently at scale. Your role lies at the crossroads of model research and systems engineering, where you will translate innovative architectural concepts into high-performance inference systems, clearly illustrating the trade-offs in performance, memory usage, and scalability.
Your contributions will significantly shape model design, evaluation, and iteration processes across our research organization. By developing and refining high-performance inference infrastructure, you will provide researchers with the tools necessary to explore new ideas while understanding their computational and systems implications.
This position does not involve serving products; instead, it supports research through a focus on performance, accuracy, and realism, ensuring that our AI research is firmly rooted in scalable solutions.
Responsibilities
- Design and develop optimized inference runtimes for large-scale AI models, emphasizing efficiency, reliability, and scalability.
- Take ownership of optimizing core execution processes, including model execution, memory management, batching, and scheduling.
- Enhance and expand distributed inference across multiple GPUs, focusing on parallelism, communication patterns, and runtime coordination.
- Implement and refine critical inference operators and kernels based on real-world workloads.
- Collaborate closely with research teams to ensure accurate and efficient support for new model architectures within inference systems.
- Identify and resolve performance bottlenecks through comprehensive profiling, benchmarking, and low-level debugging.
- Contribute to the observability, correctness, and reliability of large-scale AI systems.
Ideal Candidate Profile
- Experience in developing production-level inference systems, beyond just training and executing models.
- Proficiency in GPU-centric performance engineering, including managing memory behavior and understanding latency/throughput trade-offs.
- Strong analytical skills and familiarity with performance profiling tools.
About the Team
Join the innovative Post-Training team at OpenAI, where we focus on refining and elevating pre-trained models for deployment in ChatGPT, our API, and future products. Collaborating closely with various research and product teams, we conduct crucial research that prepares our models for real-world deployment to millions of users, ensuring they are safe, efficient, and reliable.
About the Role
As a Research Engineer / Scientist, you will spearhead the research and development of enhancements to our models. Our work intersects reinforcement learning and product development, aiming to create cutting-edge solutions. We seek passionate individuals with robust machine learning engineering skills and research experience, particularly with innovative and powerful models. The ideal candidate will be driven by a commitment to product-oriented research.
This position is located in San Francisco, CA, and follows a hybrid work model requiring three days in the office each week. Relocation assistance is available for new employees.
In this role, you will:
- Lead and execute a research agenda aimed at enhancing model capabilities and performance.
- Work collaboratively with research and product teams to empower customers to optimize their models.
- Develop robust evaluation frameworks to monitor and assess modeling advancements.
- Design, implement, test, and debug code across our research stack.
You may excel in this role if you:
- Possess a deep understanding of machine learning and its applications.
- Have experience with relevant models and methodologies for evaluating model improvements.
- Are adept at navigating large ML codebases for debugging purposes.
- Thrive in a fast-paced and technically intricate environment.
About OpenAI
OpenAI is a pioneering AI research and deployment organization dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We are committed to pushing the boundaries of AI capabilities while prioritizing safety and human-centric values in our products. Our mission is to embrace diverse perspectives, voices, and experiences that represent the full spectrum of humanity, as we strive for a future where AI is a powerful ally for everyone.
About Our Team
Join the forefront of AI innovation with the RL and Reasoning team at OpenAI. Our team is dedicated to advancing reinforcement learning research and has pioneered transformative projects, including o1 and o3. We are committed to pushing the limits of generative models while ensuring their scalable deployment.
About the Role
As a Research Engineer/Research Scientist at OpenAI, you will play a pivotal role in enhancing AI alignment and capabilities through state-of-the-art reinforcement learning techniques. Your contributions will be essential in training intelligent, aligned, and versatile agents that power various AI models. We seek individuals with a solid foundation in reinforcement learning research, agile coding skills, and a passion for rapid iteration.
This position is located in San Francisco, CA, and follows a hybrid work model of three days in the office per week. We also provide relocation assistance for new hires.
You may excel in this role if:
- You are enthusiastic about being at the cutting edge of RL and language model research.
- You take initiative, owning ideas and driving them to fruition.
- You value principled methodologies, conducting simple experiments in controlled environments to draw trustworthy conclusions.
- You thrive in a fast-paced, complex technical environment where rapid iteration is essential.
- You are adept at navigating extensive ML codebases to troubleshoot and enhance them.
- You possess a profound understanding of machine learning and its applications.
About OpenAI
OpenAI is a pioneering AI research and deployment organization committed to ensuring that general-purpose artificial intelligence serves the greater good for humanity. We strive to push the boundaries of AI system capabilities while prioritizing safe deployment through our innovative products. We recognize AI as a powerful tool that must be developed with safety and human-centric principles, embracing diverse perspectives to reflect the full spectrum of humanity.
We are proud to be an equal opportunity employer, welcoming applicants from all backgrounds without discrimination based on race, religion, color, national origin, sex, sexual orientation, age, veteran status, disability, genetic information, or any other legally protected characteristic.
May 14, 2025