
Enforcement Operations Lead

Anthropic
Full-time
On-site
United States
Manager

About Anthropic

Anthropic’s mission is to create reliable, interpretable, and steerable AI systems. We want AI to be safe and beneficial for our users and for society as a whole. Our team is a quickly growing group of committed researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems.

About the Role

Anthropic's Safeguards team is responsible for enforcing our policies, protecting users, and ensuring our platform is not misused. As the Enforcement Operations Lead, you'll play a central role in keeping our enforcement operations accurate, compliant, and scalable. You'll manage content moderation vendor relationships, own regulatory enforcement reporting, oversee copyright enforcement workflows, and help build the processes and documentation that allow the team to scale this work over time.

This role requires someone who is detail-oriented, comfortable navigating ambiguity, and capable of coordinating across teams to break new ground and drive work to completion. This work is deeply cross-functional — you'll partner closely with policy experts, Safeguards engineering teams, and many other stakeholders throughout the organization to ensure our enforcement workflows are comprehensive and current, and that operational findings translate into meaningful improvements.

Responsibilities

Important context for this role: In this position you may be exposed to and engage with explicit content spanning a range of topics, including those of a sexual, violent, or psychologically disturbing nature.

Vendor Operations

  • Own end-to-end management of content moderation vendor relationships, including onboarding, performance management, quality assurance, and capacity planning
  • Partner with internal stakeholders to define vendor scope, set SLAs, and evaluate vendor output quality on an ongoing basis
  • Identify opportunities to scale content review operations efficiently as Anthropic's product surface area grows
  • Develop and maintain standard operating procedures (SOPs) for all vendor-executed review workflows, ensuring consistency and accuracy across content surfaces

Regulatory Reporting and Enforcement

  • Partner with Regulatory Operations to ensure that new product features and content surfaces are incorporated into Safeguards reporting workflows as they launch
  • Own enforcement reporting for Regulatory Operations requirements, including maintaining and updating dashboards and tracking mechanisms that provide accurate, timely data to regulatory bodies
  • Produce on-request read-outs of enforcement metrics over specified time ranges to support regulatory reporting obligations
  • Identify and drive improvements to existing reporting infrastructure — including transitioning manual, spreadsheet-based workflows to more robust and scalable solutions
  • Oversee the user-reported content review pipeline, including reviews submitted via the Content Reporting Form across all supported content surfaces
  • Ensure SOPs for content review workflows are kept current as new features and surfaces are added
  • Work collaboratively with the RegOps team to ensure intake processes are prepared to handle emerging report types (e.g., third-party MCP server reports)
  • Maintain a strong understanding of Anthropic's policy framework to provide informed operational guidance and escalation support

Copyright Operations

  • Oversee Safeguards copyright systems, ensuring the right operational processes are in place to handle copyright-related enforcement at scale
  • Partner closely with the Regulatory Operations team to scale copyright operations as Anthropic's products grow, with a particular focus on reducing false positives and improving the accuracy of copyright enforcement workflows
  • Identify gaps in current copyright operational processes and drive cross-functional solutions in collaboration with policy, legal, and engineering stakeholders

You may be a good fit if you:

  • Have 5+ years of experience in trust and safety operations, content moderation program management, or a related field
  • Have managed external vendor or contractor relationships, including performance management and quality assurance
  • Are comfortable working across policy, legal, and operations teams to translate compliance requirements into practical workflows
  • Have experience building or improving operational reporting, dashboards, or enforcement tracking systems
  • Are highly organized, with a track record of maintaining rigorous documentation and SOPs in fast-moving environments
  • Communicate clearly and precisely — both in writing and verbally — across technical and non-technical audiences
  • Are energized by the challenge of building scalable systems in an environment where not everything is already figured out
  • Care deeply about the responsible deployment of AI and the role enforcement operations plays in that mission

Strong candidates may also have:

  • Experience working with regulatory reporting requirements, particularly in the context of online platforms or AI systems
  • Familiarity with content moderation tooling and review workflows at scale
  • Experience with copyright enforcement operations, including false positive mitigation strategies
  • Background in policy enforcement, legal operations, or compliance program management
  • Experience supporting or standing up a new operational function, including writing foundational SOPs and building institutional knowledge from scratch
  • Comfort working with data and metrics to inform operational decisions and surface trends to leadership

The annual compensation range for this role is listed below. 

For sales roles, the range provided is the role’s On Target Earnings ("OTE") range, meaning that the range includes both the sales commissions/sales bonuses target and annual base salary for the role.

Annual Salary:
$230,000 - $270,000 USD

Logistics

Education requirements: We require at least a Bachelor's degree in a related field or equivalent experience.

Location-based hybrid policy:
Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.

Visa sponsorship: We do sponsor visas! However, we aren't able to successfully sponsor visas for every role and every candidate. But if we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this.

We encourage you to apply even if you do not believe you meet every single qualification. Not all strong candidates will meet every qualification listed. Research shows that people who identify as being from underrepresented groups are more prone to experiencing imposter syndrome and doubting the strength of their candidacy, so we urge you not to exclude yourself prematurely and to submit an application if you're interested in this work. We think AI systems like the ones we're building have enormous social and ethical implications. We think this makes representation even more important, and we strive to include a range of diverse perspectives on our team.

Your safety matters to us. To protect yourself from potential scams, remember that Anthropic recruiters only contact you from @anthropic.com email addresses. In some cases, we may partner with vetted recruiting agencies who will identify themselves as working on behalf of Anthropic. Be cautious of emails from other domains. Legitimate Anthropic recruiters will never ask for money, fees, or banking information before your first day. If you're ever unsure about a communication, don't click any links—visit anthropic.com/careers directly for confirmed position openings.

How we're different

We believe that the highest-impact AI research will be big science. At Anthropic we work as a single cohesive team on just a few large-scale research efforts. And we value impact — advancing our long-term goals of steerable, trustworthy AI — rather than work on smaller and more specific puzzles. We view AI research as an empirical science, which has as much in common with physics and biology as with traditional efforts in computer science. We're an extremely collaborative group, and we host frequent research discussions to ensure that we are pursuing the highest-impact work at any given time. As such, we greatly value communication skills.

The easiest way to understand our research directions is to read our recent research. This research continues many of the directions our team worked on prior to Anthropic, including: GPT-3, Circuit-Based Interpretability, Multimodal Neurons, Scaling Laws, AI & Compute, Concrete Problems in AI Safety, and Learning from Human Preferences.

Come work with us!

Anthropic is a public benefit corporation headquartered in San Francisco. We offer competitive compensation and benefits, optional equity donation matching, generous vacation and parental leave, flexible working hours, and a lovely office space in which to collaborate with colleagues.

Guidance on Candidates' AI Usage: Learn about our policy for using AI in our application process.