
Manager, Content Engineering — AI Content Understanding

Meta
2 hours ago
Full-time
On-site
Menlo Park, California, United States
$162,000 - $227,000 USD yearly
Product Content Engineering is a horizontal function supporting initiatives across Meta's Family of Apps. We partner closely with product and technical teams to solve problems by providing content-centered solutions, setting standards of quality, and building the frameworks that ensure AI-powered experiences actually work for people.

We're looking for a people manager to lead a team within PCE's AI Content Understanding group. This team is responsible for executing evaluations and improving the quality of Meta's content understanding models — the AI systems that understand what content is about and how it should be ranked, recommended, and moderated. The work spans content quality, AI evaluation, and the search and recommendation systems that power Meta's products — building the frameworks, rubrics, and pipelines that hold AI outputs to a high standard. Your team assesses model behavior, identifies where it falls short, and works cross-functionally with engineering, product, research, and data science teams to make it better.

Reporting to the AI CU lead, you'll manage a team of Content Engineers and contingent workers, owning the day-to-day execution of human evaluations, golden dataset creation, auto-eval calibration, and cross-functional delivery with engineering, data science, and product partners. You'll drive operational rigor while navigating a fast-moving environment where evaluation frameworks, model capabilities, and team processes are evolving simultaneously.

Responsibilities

  • Manage and develop a team of Content Engineers and contingent workers, setting clear goals, providing regular feedback, and supporting career growth
  • Own the execution of continuous CU model evaluations — coordinating sprint planning, reviewer assignments, QA processes, and delivery timelines across multiple concurrent workstreams
  • Drive the creation and maintenance of golden datasets that serve as ground truth for model benchmarking and auto-eval calibration
  • Partner with engineering, data science, and product teams to translate evaluation insights into actionable recommendations for model improvement and prompt optimization
  • Lead the team's contribution to LLM-as-a-Judge (auto-eval) initiatives — ensuring human evaluation data is used to calibrate, validate, and improve automated evaluation systems
  • Define and maintain evaluation guidelines, rubrics, and quality standards in partnership with Lead Content Engineers, ensuring consistency across reviewers and use cases
  • Build repeatable operational processes for evaluation sprints, including reviewer training, calibration sessions, and escalation workflows
  • Manage CW workforce planning — hiring, onboarding, allocation across workstreams, and performance management
  • Synthesize evaluation results into structured reports and present findings to cross-functional leadership, including engineering leads and product leadership
  • Identify and mitigate operational risks — staffing gaps, timeline conflicts, quality regressions — before they impact delivery
  • If you have proven experience proactively identifying, scoping, and implementing innovative solutions, a demonstrated record of spotting issues and improving processes to create impact, and strong content judgment to bring to AI evaluation, we encourage you to apply


Minimum Qualifications

  • 8+ years of experience in content strategy, content operations, AI evaluation, or a related field
  • 2+ years of people management experience, including hiring, developing, and performance-managing direct reports
  • Experience managing cross-functional programs with engineering, data science, and product partners in fast-paced environments
  • 1+ years working with generative AI products, AI evaluation, prompt engineering, annotation, and/or content labeling and analysis
  • Experience designing or operationalizing evaluation frameworks, annotation guidelines, or quality rubrics for AI/ML systems
  • Demonstrated ability to manage multiple concurrent workstreams with competing priorities and tight deadlines
  • Proven analytical skills with experience interpreting evaluation data and communicating findings to technical and non-technical audiences
  • Track record of building team operational processes and quality standards from the ground up or during periods of significant change


Preferred Qualifications

  • Demonstrated ability to integrate AI tools to optimize/redesign workflows and drive measurable impact (e.g., efficiency gains, quality improvements)
  • Experience adhering to and implementing responsible, ethical AI practices (e.g., risk assessment, bias mitigation, quality and accuracy reviews)
  • Demonstrated ongoing AI skill development (e.g., prompt/context engineering, agent orchestration) and staying current with emerging AI technologies
  • Experience managing contingent worker (CW) teams, vendor relationships, or scaled annotation operations
  • Familiarity with AI evaluation methods such as human eval, LLM-as-judge, model benchmarking, A/B testing, or red-teaming
  • Experience with Python, SQL, or other tools for data analysis and evaluation automation
  • Background in content understanding, search quality, recommendation systems, or trust and safety
  • Experience managing through organizational transitions, including shifts from manual to automated workflows
  • BA or BS in Computer Science, Data Science, Linguistics, or related field


$162,000/year to $227,000/year + bonus + equity + benefits