
Hindi Trust & Safety Data Trainer

SME Careers
Contract
Remote
India
Trainer
This is a remote, hourly-paid contract role in which you will review AI-generated content and safety decisions, evaluate reasoning quality and step-by-step problem-solving, and provide expert feedback so that outputs are accurate, logical, and clearly explained. You will assess solutions for correctness and clarity, spot methodological and conceptual errors, fact-check where needed, and rate and compare multiple responses for safety and policy alignment. You must be fluent (near-native or native) in Hindi and able to make nuanced judgments across Hindi and English content.

This role is with SME Careers, a fast-growing AI Data Services company and subsidiary of SuperAnnotate that supports many of the world's largest AI companies and foundation-model labs. Your annotations on explicit safety tasks will be used to prevent models from generating toxic or unsafe outputs, whether produced unintentionally or elicited adversarially. As part of this work to improve the world's premier AI models, you may be exposed to content that is sexual, violent, or psychologically disturbing in nature.
Key Responsibilities
Label and quality-check safety data across categories such as hate/harassment, sexual content, self-harm, violence, bias, illegal goods/services, malicious activities, malicious code, and deliberate misinformation.
Perform red-teaming and adversarial testing by identifying realistic attack patterns, edge cases, and policy gray areas; document rationales and recommend mitigations to reduce unsafe outcomes.
Apply and localize safety policies consistently across Hindi and English: detect cultural nuance, slang, coded language, and context shifts; escalate uncertainty using documented decision paths.
Your Profile
Bachelor’s degree or higher in a relevant field (e.g., Communications, Linguistics, Psychology, Law/Policy, Security Studies) or equivalent professional experience.
Near-native or native Hindi proficiency (reading/writing) for high-precision safety labeling and cultural-linguistic nuance.
Minimum C1 English proficiency (reading/writing) for policy interpretation, prompt understanding, and consistent documentation.
Several years of senior-level experience in Trust & Safety, content moderation, policy operations, risk, compliance, investigations, or related safety functions.
LLM red-teaming / adversarial testing experience is required (documented examples of edge-case discovery and mitigation recommendations).
Localization/translation experience is highly preferred; able to preserve meaning, severity, and intent across languages.
Emotional resilience: comfortable annotating unsafe, explicit, and/or toxic content, including content of a sexual, violent, or psychologically disturbing nature.
Highly detail-oriented with strong judgment, consistency, and ability to follow evolving written guidelines.
Strong analytical writing: concise rationales, clear decision paths, and reproducible reasoning for disagreements.
Secure and confidential handling of sensitive content; reliable remote work practices and time management.
Strong hands-on experience using AI tools such as Perplexity, Gemini, and ChatGPT.