Principal Product Manager, AI Safety and Content Moderation

Google
Full-time
On-site
Sunnyvale, California, United States
$272,000 - $383,000 USD yearly
Director
In accordance with Washington state law, we are highlighting our comprehensive benefits package, which is available to all eligible US based employees. Benefits for this role include:

Health, dental, vision, life, disability insurance
Retirement Benefits: 401(k) with company match
Paid Time Off: 20 days of vacation per year, accruing at a rate of 6.15 hours per pay period for the first five years of employment
Sick Time: 40 hours/year (statutory, where applicable); 5 days/event (discretionary)
Maternity Leave (Short-Term Disability + Baby Bonding): 28-30 weeks
Baby Bonding Leave: 18 weeks
Holidays: 13 paid days per year

Note: By applying to this position you will have an opportunity to share your preferred working location from the following: Sunnyvale, CA, USA; Seattle, WA, USA.

Minimum qualifications:

Bachelor’s degree in Computer Science, a related technical field, or equivalent practical experience.
15 years of product management experience.

Preferred qualifications:

Master's degree or PhD.
15 years of experience working within large-scale, global, technical organizations with multiple product lines.
Ability to lead, mentor, and grow a team of product managers and technical talent.
Strong communication skills, with the ability to represent and convey complex business or technical concepts to senior leadership.
Demonstrated success leading initiatives, developing policies, or providing thought leadership in regulated, high-visibility domains.
Demonstrated success aligning organizations across multiple product lines and successfully influencing the overall direction of a product or company.

About The Job

The Core User Protection organization is seeking a highly analytical and empathetic Product Manager to work on the most complex, deep-seated problems related to AI safety. You will be responsible for defining the product vision, strategy, and roadmap for features that protect our users from abuse, harm, and platform manipulation, often dealing with ambiguous and highly contentious policy issues.

Within Core UP, this role will partner closely with the PM and Eng Directors and the Principal and Distinguished Engineers focused on content moderation and our safety platforms. Outside of our organization, you will collaborate regularly with teams in Trust & Safety, Google DeepMind, and the largest Google product areas.

The Core team builds the technical foundation behind Google’s flagship products. We are owners and advocates for the underlying design elements, developer platforms, product components, and infrastructure at Google. These are the essential building blocks for excellent, safe, and coherent experiences for our users and drive the pace of innovation for every developer. We look across Google’s products to build central solutions, break down technical barriers and strengthen existing systems. As the Core team, we have a mandate and a unique opportunity to impact important technical decisions across the company.

The US base salary range for this full-time position is $272,000-$383,000 + bonus + equity + benefits. Our salary ranges are determined by role, level, and location. Within the range, individual pay is determined by work location and additional factors, including job-related skills, experience, and relevant education or training. Your recruiter can share more about the specific salary range for your preferred location during the hiring process.

Please note that the compensation details listed in US role postings reflect the base salary only, and do not include bonus, equity, or benefits. Learn more about benefits at Google.

Responsibilities

Transform content moderation across Google, moving from a purely human-centric model to one that is AI-first and AI-assisted. Partner closely with leaders across Google to create, drive, and deliver innovative privacy and security products for our users across all Google platforms.
Understand, anticipate, and address emerging threats to user safety, especially those arising from the increasing use of AI by malicious actors.
Drive product strategy for integrating disparate AI safety solutions across Google into a cohesive, scalable, and cost-effective ecosystem.
Scale and productionize the roadmap for cutting-edge AI safety research. Define unified, holistic safety outcome metrics that can guide the AI safety work and priorities of the User Protection team.
Inspire innovation that results in product transformation, drives improvements in security, safety, and privacy metrics, and addresses gaps and opportunities across product experiences.


Google is proud to be an equal opportunity workplace and is an affirmative action employer. We are committed to equal employment opportunity regardless of race, color, ancestry, religion, sex, national origin, sexual orientation, age, citizenship, marital status, disability, gender identity, or Veteran status. We also consider qualified applicants regardless of criminal histories, consistent with legal requirements. See also Google's EEO Policy and EEO is the Law. If you have a disability or special need that requires accommodation, please let us know by completing our Accommodations for Applicants form.