Strategic Risk Analyst, Behavioral & Psychological Risk

OpenAI
Full-time
On-site
San Francisco, California, United States
$288,000 - $320,000 USD yearly
Analyst

About the team

The Intelligence and Investigations team seeks to rapidly identify and mitigate abuse and strategic risks to ensure a safe online ecosystem. We are dedicated to identifying emerging abuse trends, analyzing risks, and working with our internal and external partners to implement effective mitigation strategies to protect against misuse. Our efforts contribute to OpenAI's overarching goal of developing AI that benefits humanity.

We are building a horizontal “radar” for AI abuse and strategic risk—correlating internal signals, external intelligence, and real-world events into clear, actionable priorities for OpenAI’s safety and product decision-makers.

About the role

As a Strategic Risk Analyst, Behavioral & Psychological Risk, you will bring deep expertise in human behavior to our central view of risk across OpenAI’s products and platforms.

You will analyze how users think, feel, and behave when interacting with AI systems—especially in high-risk contexts such as self-harm, manipulation, coercion, and influence—and translate these insights into decision-ready risk assessments, mitigation strategies, and product guidance.

This role bridges clinical/behavioral expertise and intelligence analysis, turning psychological signals and patterns into structured judgments, early indicators, and actionable recommendations. A key part of this role is proactively identifying where analytical insight is most needed, anticipating emerging product, policy, and safety questions, and focusing efforts on analyses that shape critical decisions.

You will partner closely with investigators, engineers, policy, and trust & safety teams to shape how we understand and mitigate potential risks in human-AI interactions.

In this role, you will:

  • Develop insights into how AI systems are used in complex or high-risk situations (e.g., self-harm, suicidal ideation, substance-use escalation, and threats of violence), identifying recurring patterns and emerging trends that help guide product, safety, and policy decisions.

  • Synthesize behavioral, psychological, and intelligence signals into clear narratives about user needs, system dynamics, and potential areas of risk or vulnerability.

  • Produce decision-ready briefs and assessments that inform product, safety, and policy decisions.

  • Develop and refine behavioral risk frameworks, taxonomies, and indicators (e.g., severity models, escalation pathways, psychological harm categories).

  • Identify early indicators of emerging issues and assess whether observed patterns represent meaningful safety concerns, helping prioritize and inform appropriate mitigations.

  • Assess the effectiveness of mitigations—such as product changes, safeguards, and guidance—using behavioral evidence and real-world outcomes.

  • Contribute to incident reviews and post-incident analysis by bringing a behavioral perspective to root cause analysis and prevention.

  • Bridge research and operations, translating academic and clinical literature into practical safeguards, policies, and product decisions.

You might thrive in this role if you:

  • Bring 5+ years in forensic, clinical, trust & safety, or applied academic settings assessing risk of violence, self-harm, or addiction, with strong mixed-methods research skills.

  • Have familiarity with AI systems, language models, or human-AI interaction dynamics, and are interested in applying psychological expertise to emerging AI risks (experience working on AI safety, trust & safety, or related domains is a plus).

  • Can translate human behavior into structured intelligence, connecting individual cases to system-level patterns and risks.

  • Are comfortable working across qualitative and quantitative inputs, including casework, interaction data, research literature, and metrics.

  • Have experience designing or using risk frameworks, taxonomies, or evaluation methods to structure ambiguity.

  • Communicate clearly across disciplines, turning complex behavioral insights into concise, actionable recommendations.

  • Thrive in fast-moving, ambiguous environments, and can prioritize effectively under uncertainty.

About OpenAI

OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products. AI is an extremely powerful tool that must be created with safety and human needs at its core, and to achieve our mission, we must encompass and value the many different perspectives, voices, and experiences that form the full spectrum of humanity. 

We are an equal opportunity employer, and we do not discriminate on the basis of race, religion, color, national origin, sex, sexual orientation, age, veteran status, disability, genetic information, or any other applicable legally protected characteristic.

For additional information, please see OpenAI’s Affirmative Action and Equal Employment Opportunity Policy Statement.

Background checks for applicants will be administered in accordance with applicable law, and qualified applicants with arrest or conviction records will be considered for employment consistent with those laws, including the San Francisco Fair Chance Ordinance, the Los Angeles County Fair Chance Ordinance for Employers, and the California Fair Chance Act, for US-based candidates. For unincorporated Los Angeles County workers: we reasonably believe that criminal history may have a direct, adverse and negative relationship with the following job duties, potentially resulting in the withdrawal of a conditional offer of employment: protect computer hardware entrusted to you from theft, loss or damage; return all computer hardware in your possession (including the data contained therein) upon termination of employment or end of assignment; and maintain the confidentiality of proprietary, confidential, and non-public information. In addition, job duties require access to secure and protected information technology systems and related data security obligations.

To notify OpenAI that you believe this job posting is non-compliant, please submit a report through this form. No response will be provided to inquiries unrelated to job posting compliance.

We are committed to providing reasonable accommodations to applicants with disabilities, and requests can be made via this link.

OpenAI Global Applicant Privacy Policy

At OpenAI, we believe artificial intelligence has the potential to help people solve immense global challenges, and we want the upside of AI to be widely shared. Join us in shaping the future of technology.