
AI Safety & Responsibility Policy Manager

Swooped
Full-time
Remote
United States
$150,000 - $200,000 USD yearly
Manager

Swooped is a Talent Platform (not a staffing agency). We are not the employer for this role and do not make hiring decisions. Clicking "Apply" takes you to Swooped to review the role and hiring company details.


About the Opportunity

The company is building AI to simulate the world by merging art and science. The organization believes that world models are at the frontier of progress in artificial intelligence. Language models alone won't solve the world's hardest problems: robotics, disease, scientific discovery. Real progress requires models that experience the world and learn from their mistakes, the same way humans do.


This kind of trial and error can be massively accelerated when done in simulation rather than in the real world. World models offer the clearest path to general-purpose simulation, changing how stories are told, how scientific progress is made, and how the next frontiers of humanity are reached.


The team consists of creative, open-minded, caring, and ambitious people who are determined to change the world. The organization aspires to continuously build impossible things, and its ability to do so relies on building an incredible team. If you're driven to do the same, the organization would love to hear from you.


About the role

Open to remote hires across the US and Europe; offices are also available in NYC, San Francisco, Seattle, and London.


As the organization's products evolve and become increasingly capable, and as the user base expands, the need for clear and accurate content policies has never been greater. This role is for a Policy Manager to own and evolve the policies that govern what AI systems can and cannot do, across consumer products, enterprise offerings, and third-party model integrations.

This role will own the creation, iteration, maintenance, and implementation of the organization's content policies, working to capture genuine harms without unnecessarily blocking legitimate use cases. As the organization invests more heavily in LLM-based moderation, the quality of its policy will increasingly determine the quality of its moderation, and this role ensures that policy evolves along with the product and surrounding environment.


What you’ll do

  • Own and maintain the organization's content policies, balancing user safety, creative expression, and operational feasibility
  • Translate policies into LLM prompts and continuously iterate to drive accuracy improvements
  • Track shifts in cultural and market norms and new use cases, and continuously evaluate and update policies accordingly
  • Build frameworks for ongoing policy review, ensuring policy remains nimble, accurate, and appropriate for the organization's user base
  • Serve as the policy subject matter expert for key internal partners, particularly the enterprise enablement team
  • Translate the organization's policies into clear documentation for internal teams, enterprise and API partners, and end users
  • As part of a small, collaborative AI Safety and Responsibility team, contribute to work across the broader team as priorities and needs evolve


What you’ll need

  • 5+ years of experience in content policy, trust & safety policy, or a closely related field at a technology company
  • Strong understanding of content moderation systems
  • Hands-on experience using machine learning systems for content moderation, policy enforcement, or risk assessment
  • Excellent written communication skills, including the ability to translate nuanced policy positions into clear, functional documentation
  • Strong ability to use data/statistics to inform policy decisions
  • Comfort with ambiguity and a track record of making principled decisions in fast-moving environments
  • Ability to act as a self-starter, taking initiative to identify opportunities to improve or build on processes and work products
  • Collaborative working style with the ability to influence cross-functional teams without direct authority


Nice to Have

  • Familiarity with generative AI products, including video, image, or avatar-based applications
  • Experience using LLMs for policy drafting and/or enforcement
  • Experience in a high-growth startup environment where T&S/policy infrastructure was being built from scratch


The organization strives to recruit and retain exceptional talent from diverse backgrounds while ensuring pay equity for its team. Salary ranges are based on competitive market rates for the organization's size, stage and industry, and salary is just one part of the overall compensation package provided.


There are many factors that go into salary determinations, including relevant experience, skill level and qualifications assessed during the interview process, and maintaining internal equity with peers on the team. The range shared below is a general expectation for the function as posted, but the organization is also open to considering candidates who may be more or less experienced than outlined in the job description. In this case, any updates in the expected salary range will be communicated.


Lastly, the provided range is the expected salary for candidates in the U.S. For candidates outside the U.S., the range may differ, and any change will again be communicated to candidates.


Working at the Organization

Great things come from great teams. The organization would love to hear from you.

The organization is committed to creating a space where its employees can bring their full selves to work and have equal opportunity to succeed. So regardless of race, gender identity or expression, sexual orientation, religion, origin, ability, age, or veteran status, if joining this mission speaks to you, the organization encourages you to apply.


Compensation

$150K – $200K