Red Teaming Expert

mpathic
Contract
Remote
United States
$30 - $60 USD hourly
Specialist

    About mpathic.ai

    Keeping the human in AI. mpathic is a trusted leader in advancing quality and safety in AI systems through expert-led evaluation and human data. We partner with leading technology companies to support red teaming, trust & safety, expert annotation, and model evaluation across high-stakes domains.


    About the Role

    mpathic is seeking part-time, project-based Red Teaming Experts to support a red-teaming and evaluation campaign focused on AI safety and model behavior in sensitive, real-world interactions.



    In this role, you will design, simulate, and evaluate conversations with AI systems to assess safety, risk, and behavioral performance. You will identify failure modes, edge cases, and policy gaps, particularly in scenarios involving distress, ambiguity, or escalation.



    This role involves roleplaying and reviewing clinical scenarios with AI agents. As such, we ideally seek candidates who bring creative or performance-driven strengths, as these competencies enhance the realism, nuance, and emotional depth needed for AI safety testing. Relevant backgrounds include, but are not limited to:

    • Theatre degrees or studies
    • Acting, theatre, improv, or voice-over experience 
    • Strong writing skills, especially dialogue or scenario writing 
    • Experience creating or inhabiting characters (e.g., performers, TTRPG roleplayers, narrative designers)
    • Conversational design, interaction writing, or scripted roleplay experience 
    • Participation in gaming, interactive storytelling, or digital communities where roleplay is common


    What You’ll Be Working On 

    You will help identify, characterize, and prevent risks that emerge when users interact with AI systems.



    Responsibilities may include:

    • Designing and executing red-teaming scenarios across diverse user behaviors
    • Reviewing AI-generated responses for safety, accuracy, and policy compliance
    • Identifying failure modes, edge cases, and behavioral risks
    • Assessing whether AI appropriately recognizes and responds to distress or escalation
    • Evaluating tone, boundaries, and appropriateness in sensitive interactions
    • Detecting misleading, overconfident, or unsafe responses
    • Evaluating multi-turn conversations for consistency and risk handling
    • Identifying gaps in responses, including missed signals or incomplete handling
    • Conducting qualitative analysis to identify behavioral patterns and system weaknesses
    • Documenting edge cases, failure patterns, and safety risks
    • Applying or contributing to evaluation rubrics, taxonomies, and frameworks
    • Supporting quality assurance (QA) to ensure consistency across evaluations
    • Collaborating with internal teams on AI safety and evaluation improvements
    • Participating in red-teaming exercises to surface system vulnerabilities
    • Maintaining strict confidentiality and quality standards


    What We’re Looking For

    Successful candidates are detail-oriented, analytically strong, and experienced in evaluating or stress-testing AI systems in complex or high-risk scenarios.



    Professional experience in one or more of the following:

    • LLM red teaming or AI safety evaluation
    • Trust & safety, content moderation, or policy enforcement
    • AI/ML evaluation, annotation, or QA workflows
    • Conversational analysis or behavioral risk assessment
    • Work involving sensitive or high-stakes user interactions

    Strong understanding of:

    • AI safety principles and common failure modes
    • Behavioral risk, escalation patterns, and edge-case handling
    • Mental health sensitivity, boundaries, and responsible AI behavior
    • How users express distress, confusion, or harmful intent in conversation

    Ability to identify:

    • Safety violations and policy gaps
    • Missed or mishandled risk signals
    • Unsafe, misleading, or overconfident responses
    • Inappropriate tone or boundary-setting
    • Failures in escalation, de-escalation, or resolution
    • Inconsistencies across multi-turn interactions

    Experience with or interest in:

    • Red teaming methodologies and adversarial testing
    • Evaluating conversational AI systems or chatbots
    • Developing or applying evaluation frameworks and rubrics
    • Understanding how AI systems perform under real user behavior

    Comfort with:

    • Tech tools and platforms (Slack, spreadsheets, dashboards)
    • Evaluating AI-generated responses (no coding required)
    • Ambiguity, iteration, and feedback-driven workflows

    Willingness to:

    • Sign NDAs and work with sensitive or high-impact content

    Nice to Have (Not Required)

    • Background in mental health, behavioral science, or psychology
    • Experience in QA, annotation, or qualitative analysis
    • Experience with AI systems in sensitive domains (e.g., healthcare, safety)
    • Familiarity with evaluation metrics or safety frameworks


    Compensation

    $30–$60/hour, depending on experience and the difficulty of specific project tasks



    Location

    Seattle, Washington (Remote)


    Department

    Experts


    Employment Type

    Contractor


    Minimum Experience

    Mid-level

