
AI Safety Analyst

Alignerr
Full-time
Remote
Türkiye
Analyst
About The Role

What if your skeptical, questioning mind could make AI safer for millions of people? We're looking for AI Safety Analysts to put cutting-edge AI systems under pressure — probing for harmful outputs, testing edge cases, and uncovering the unexpected behaviors that matter most before they cause real-world harm.

This is meaningful, intellectually engaging work at the frontier of AI development. No technical background required — just sharp critical thinking and a talent for asking the questions others don't think to ask.

  • Organization: Alignerr
  • Type: Hourly Contract
  • Location: Remote
  • Commitment: 10–40 hours/week

What You'll Do

  • Probe AI systems with challenging, adversarial, and edge-case inputs designed to surface unsafe or unexpected behavior
  • Identify harmful, inappropriate, or policy-violating AI outputs across a range of topics and scenarios
  • Evaluate AI responses using structured safety and helpfulness rating scales
  • Document issues clearly and precisely with supporting examples and explanations
  • Follow red-teaming protocols and safety testing guides to ensure consistent, high-quality evaluations
  • Work independently and asynchronously on your own schedule

Who You Are

  • A natural critical thinker who enjoys poking holes in things and asking "what if?"
  • Comfortable exploring unusual, sensitive, or morally complex scenarios with professionalism
  • Strong written communicator — able to describe issues with precision and clarity
  • Detail-oriented and consistent in your approach to structured evaluations
  • Genuinely interested in AI safety, responsible technology, or the ethics of AI
  • No cybersecurity, AI, or technical background required

Nice to Have

  • Experience in writing, journalism, research, or quality assurance
  • Familiarity with AI tools or large language models as an end user
  • Background in ethics, philosophy, policy, or social sciences
  • Prior experience with content moderation or trust and safety work

Why Join Us

  • Work on real AI safety projects alongside leading research labs shaping the future of AI
  • Fully remote and flexible — work when and where it suits you
  • Freelance autonomy with the structure of meaningful, task-based assignments
  • Contribute to work that has a genuine impact on how safely AI systems interact with people
  • Variety in every session — no two testing scenarios are the same
  • Potential for ongoing work and contract extension as new projects launch