
AI Safety – Trust & Safety Specialist (Remote)

Taskify
Part-time
Remote
United Kingdom
Specialist
Why This Role Exists
At Taskify, we believe the foundation of AI safety is high-quality human data. Models cannot evaluate themselves — they need humans who apply structured judgment to complex, nuanced outputs. We’re building a flexible pod of safety specialists, contributors from both technical and non-technical backgrounds, who will serve as expert data annotators. This pod will annotate and evaluate AI behaviors to ensure these systems are safe. No prior annotation experience is required — instead, we’re seeking individuals capable of making careful, consistent decisions in ambiguous situations.
This role may involve reviewing AI outputs that address sensitive topics such as bias, misinformation, or harmful behavior. All work is text-based, and participation in higher-sensitivity projects is optional and supported by clear guidelines and wellness resources.

What You’ll Do
Produce high-quality human data by annotating AI outputs against safety criteria (e.g., bias, misinformation, disallowed content, unsafe reasoning)
Apply harm taxonomies and guidelines consistently, even when tasks are ambiguous
Document your reasoning to improve guidelines
Collaborate to provide the human data that powers AI safety research, model improvements, and risk audits

Who You Are
Experienced in trust & safety, governance, or policy-to-product frameworks
Familiar with harm taxonomies, safety-by-design principles, or regulatory frameworks such as the EU AI Act or NIST AI RMF
Skilled at translating abstract policies into concrete evaluation criteria
Motivated by reducing user harm and ensuring systems are safe, ethical, and compliant
Examples of past titles: Trust & Safety Analyst, Online Safety Specialist, Policy Researcher, Governance Specialist, UX Researcher, Risk & Policy Associate, Regulatory Affairs Analyst, Safety Policy Manager, Ethics & Compliance Coordinator.

What Success Looks Like
Your annotations are accurate, high-quality, and consistent, even across ambiguous cases
You help surface risks early that automated tools miss
Guidelines and taxonomies improve based on your feedback
The data you produce directly strengthens AI model safety and compliance

Why Join Us
Work at the frontier of AI safety, providing the human data that shapes how models behave
Gain experience in a rapidly growing field with direct impact on how labs deploy advanced systems
Be part of a team committed to making AI systems safer, more trustworthy, and aligned with human values

About Taskify
Taskify connects top experts with leading AI labs and organizations pioneering the future. We collaborate with thousands of professionals across various domains, including law, research, engineering, and creative fields, on cutting-edge AI projects.