T3 logo

Experienced AI/ML Red Teamer

T3
Contract
On-site
United States
Contract Overview
We are seeking an experienced AI/ML Red Teamer to join on a contract basis to assess and stress-test generative AI systems used in media, publishing, and digital content workflows. The role will focus on adversarial testing, bias detection, reputational risk assessment, and content integrity assurance, helping the organisation ensure its AI tools are safe, reliable, and brand-aligned.
This is a hands-on contract suited to a senior practitioner who can rapidly design and execute red teaming exercises, uncover vulnerabilities, and provide clear, actionable recommendations.
Key Deliverables
Adversarial Testing
Run red team exercises against LLMs and generative AI tools used for media content creation.
Test for jailbreaks, prompt injections, and unintended harmful or off-brand outputs.
Content Safety & Integrity
Evaluate risks of misinformation, hallucinations, and fact-inaccuracy in AI-generated outputs.
Assess susceptibility to generating offensive, biased, or reputationally damaging content.
Bias & Fairness
Test outputs for gender, racial, cultural, and political bias that could impact brand trust.
Provide mitigation strategies and monitoring frameworks.
IP & Copyright Risk
Identify risks relating to the use of training data and AI-generated content in a media/legal context.
Stress-test outputs for plagiarism, copyright infringement, or deepfake potential.
Knowledge Transfer
Deliver documentation and training sessions for product teams to build internal awareness of AI vulnerabilities

Contract Requirements
Experience: 5+ years in adversarial ML, AI red teaming, or generative AI assurance.
Technical Skills:
Strong Python and ML frameworks (PyTorch, TensorFlow).
Hands-on with red teaming and adversarial testing libraries (TextAttack, ART, Garak, LLM vulnerability tools).
Domain Knowledge:
Understanding of generative AI in media production (text, image, video).
Awareness of IP, copyright, and misinformation risks in content industries.
Contractor Mindset:
Able to work independently, deliver outputs against deadlines, and engage with non-technical stakeholders (editorial, legal, comms).

Contract Requirements
Proven Experience: 5+ years in adversarial ML, AI red teaming, or applied AI security.
Technical Skills:
Advanced Python; strong knowledge of ML frameworks (PyTorch, TensorFlow).
Hands-on with adversarial ML/red teaming libraries (CleverHans, ART, TextAttack, Garak, LLM attack tooling).
Domain Knowledge:
Familiarity with AI governance standards
Strong understanding of bias/fairness evaluation metrics.
Consulting/Contractor Mindset:
Ability to operate independently, deliver defined outputs to agreed deadlines, and engage credibly with both technical teams and senior management.

Desirable Background
Previous assignments in media, publishing, entertainment, or digital platforms.
Familiarity with content moderation, brand safety, and trust & safety practices.
Knowledge of deepfake detection and synthetic media risks.
Experience briefing senior editorial or brand stakeholders on technical findings.

Engagement Details
Contract Type: Day rate / assignment-based.
Duration: 2 months, with extension possible.
Location: Hybrid (on-site collaboration with media teams, plus remote).
Reporting Line: Head of AI/Innovation
Show more Show less