
Youth Safety Expert, Part-Time

mpathic
Contract
Remote
United States
$30 - $200 USD hourly
Specialist

About mpathic

Keeping the human in AI. mpathic is a trusted leader in advancing clinical accuracy and quality through AI-enhanced solutions. mpathic offers human services in red teaming, trust & safety, central rating and monitoring for clinical trials, and expert data annotation for LLM builders. Our reviewers specialize in behavioral analysis, conversational design, mental health, psychiatry, social services, and clinical trial settings.


About the Role

If you have applied for a part-time expert role within the past 12 months, you do not need to reapply.


mpathic is seeking part-time Youth Safety Experts to support confidential initiatives focused on AI safety for children and adolescents—particularly in conversational, entertainment, and gaming-adjacent contexts.


This role is ideal for individuals with professional or lived experience working with youth, as well as those deeply familiar with online youth culture, gaming communities, and social platforms.


What You’ll Be Doing

Responsibilities vary by project and may include:

  • Roleplaying youth-facing scenarios with AI agents
  • Red teaming for youth safety edge cases, age-inappropriate content, and model failure modes
  • Identifying signs of distress, vulnerability, coercion, grooming, or unsafe influence
  • Evaluating AI responses for age-appropriateness, tone, clarity, and harm prevention
  • Rating and reviewing AI models for youth safety and policy compliance
  • Developing behavioral taxonomies, personas, and evaluation rubrics specific to children and adolescents
  • Conducting qualitative analysis of AI conversations involving youth users
  • Providing expert feedback on internal youth safety and child protection policies
  • Collaborating with engineers, researchers, and safety teams on youth-specific safeguards
  • Documenting edge cases, emergent risks, and cultural trends impacting youth safety
  • Maintaining strict confidentiality, QA standards, and ethical guidelines
  • Participating in interdisciplinary review and calibration sessions


Who You’ll Protect

  • Children and teens navigating online and AI-mediated experiences
  • Young people exploring identity, relationships, and social belonging
  • Youth exposed to peer pressure, viral challenges, or risky behaviors
  • Minors engaging in gaming, entertainment, and social platforms


What We’re Looking For

Successful candidates are thoughtful, culturally aware, communicative, and comfortable working at the intersection of youth development, online culture, and emerging technology.


Basic Qualifications

  • Background in education, child development, youth services, psychology, social work, or a related field OR deep, demonstrated familiarity with youth gaming and online culture
  • Experience working with children, adolescents, or teen communities (formal or informal)
  • Strong understanding of age-appropriate communication, boundaries, and youth safety principles
  • Familiarity with online platforms, gaming ecosystems, or youth-oriented digital spaces
  • Comfort evaluating conversational content and nuanced social interactions
  • Ability to work remotely using Slack, LLM tools, and Google Docs
  • Strong ethical judgment and attention to safety, consent, and power dynamics
  • Willingness to sign NDAs and work with sensitive or youth-related content
  • Availability of up to 10 hours per week, including occasional scheduled meetings


Above and Beyond

  • Professional experience in K–12 education, higher education, or informal learning environments
  • Fluency in English and at least one additional language
  • Experience in child safety, trust & safety, moderation, or community management
  • Familiarity with gaming culture, livestreaming platforms, or esports communities
  • Experience working with marginalized or vulnerable youth populations
  • Background in youth mental health, crisis response, or online harm prevention
  • Experience with content moderation, behavioral labeling, or policy enforcement
  • Interest in AI safety, online harms, or digital well-being
  • Experience participating in or moderating online communities (e.g., Discord, Twitch, Reddit)
  • Strong cultural awareness of evolving youth trends, slang, and social norms


Location

United States (Remote)


Department

Experts


Employment Type

Contractor


Minimum Experience

Mid-level


Compensation

$30-$200/hr depending on experience