Principal Product Manager - AI Integrity

Microsoft
Full-time
On-site
Mountain View, California, United States
$139,900 - $304,200 USD yearly
Director
Overview

The AI Integrity & Provenance team builds post‑deployment safety, abuse monitoring, and content authenticity systems for frontier AI models and experiences across Microsoft. We are looking for a Principal Product Manager to own strategy and execution for integrity and provenance capabilities that enable responsible deployment, regulatory compliance, and real‑world abuse detection of AI systems at scale.



Responsibilities
  • Lead product strategy for AI Integrity Foundations across provenance, abuse monitoring, incident response, and social listening, enabling safe, accountable, and resilient deployment of AI systems and agents at scale. 
  • Define the long-term vision, strategy, and roadmap for foundational integrity capabilities within Azure AI Foundry, ensuring consistent post-deployment safeguards across models, applications, and agentic workflows.
  • Improve abuse monitoring and detection systems that identify and mitigate real-world AI threats and misuse, including prompt injection, jailbreaks, data exfiltration, malicious tool calls, coordinated abuse, model exploitation, and other novel vectors.
  • Own incident response product capabilities, enabling rapid detection, triage, investigation, and remediation of AI-related safety and security incidents, with clear metrics for MTTR, coverage, and enforcement effectiveness.
  • Evolve provenance and content authenticity capabilities, supporting traceability, attribution, auditability, and regulatory requirements for trustworthy AI outputs.
  • Partner closely with security engineers, red teams, AI researchers, and integrity analysts to translate emerging attack patterns, abuse signals, and novel harm vectors into durable, productized protections.
  • Integrate AI integrity and security capabilities with Microsoft’s broader ecosystem, including Defender (threat detection and response), Entra (identity and access control), and Purview (data protection, governance, and compliance).
  • Drive 0‑to‑1 product development, taking new integrity and safety concepts from early experimentation through production launch, customer adoption, and operational maturity.
  • Establish and own metrics and dashboards for AI integrity posture and product success, including detection coverage, signal quality, response effectiveness, customer impact, and regulatory readiness. 


Qualifications
Required/minimum qualifications
  • Bachelor's Degree AND 8+ years of experience in product/program management
    • OR equivalent experience.
Other Requirements 
Ability to meet Microsoft, customer, and/or government security screening requirements is required for this role. These requirements include but are not limited to the following specialized security screenings: Microsoft Cloud Background Check: This position will be required to pass the Microsoft Cloud background check upon hire/transfer and every two years thereafter.
 
Additional or preferred qualifications
  • Bachelor's Degree AND 12+ years of experience in product/program management
    • OR equivalent experience.
  • 4+ years of experience taking a product, feature, or experience to market (e.g., design, addressing product-market fit, launching an internal tool or framework).
  • 6+ years of experience improving product metrics for a product, feature, or experience in a market (e.g., growing the customer base, expanding customer usage, avoiding customer churn).
  • 6+ years of experience disrupting a market for a product, feature, or experience (e.g., competitive disruption, taking the place of an established competing product).
  • Platform PM experience driving foundational or horizontal capabilities.
  • Demonstrated systems‑level thinking in safety, security, or reliability‑critical domains.
  • Experience shipping AI platforms or trust, safety, or integrity‑focused products into production.
  • Experience with AI security testing, evaluation, or automated red‑teaming techniques for generative AI or agentic systems.
  • Familiarity with post‑deployment AI monitoring, incident response workflows, and operational metrics such as detection coverage, signal quality, and response effectiveness.
  • Exposure to enterprise governance, data protection, and compliance systems, particularly as they relate to AI deployments.
  • Background working on safety-critical, security-critical, or high-risk systems operating at global scale.


Product Management IC5 - The typical base pay range for this role across the U.S. is USD $139,900 - $274,800 per year. A different range applies to specific work locations within the San Francisco Bay Area and the New York City metropolitan area; the base pay range for this role in those locations is USD $188,000 - $304,200 per year.

Certain roles may be eligible for benefits and other compensation. Find additional benefits and pay information here:
https://careers.microsoft.com/us/en/us-corporate-pay


This position will be open for a minimum of 5 days, with applications accepted on an ongoing basis until the position is filled.




Microsoft is an equal opportunity employer. All qualified applicants will receive consideration for employment without regard to age, ancestry, citizenship, color, family or medical care leave, gender identity or expression, genetic information, immigration status, marital status, medical condition, national origin, physical or mental disability, political affiliation, protected veteran or military status, race, ethnicity, religion, sex (including pregnancy), sexual orientation, or any other characteristic protected by applicable local laws, regulations and ordinances. If you need assistance with religious accommodations and/or a reasonable accommodation due to a disability during the application process, read more about requesting accommodations.