Associate Principal, Kids and Learning Trust and Safety

Google
Full-time
On-site
Seattle, Washington, United States
$132,000 - $194,000 USD yearly
Director

Minimum qualifications:

  • Bachelor's degree or equivalent practical experience.
  • 7 years of experience in data analytics, Trust and Safety, policy, cybersecurity, or related fields.
  • 1 year of experience with AI safety and security, adversarial testing, or red teaming.
  • Experience with common LLM security vulnerabilities (e.g., prompt injection, jailbreaking, data exfiltration) and designing mitigation strategies.

Preferred qualifications:

  • Master's degree or PhD in a relevant field.
  • Experience in SQL, data collection/transformation, visualization/dashboards, or a scripting/programming language (e.g., Python).
  • Experience with machine learning.
  • Ability to work with data analytics, interpret ML model performance metrics, and understand technical architecture to identify safety gaps.
  • Excellent problem-solving and critical thinking skills with attention to detail in a fluid environment.
  • Excellent communication and presentation skills with the ability to influence cross-functionally at various levels.

About the job:

Trust & Safety team members are tasked with identifying and taking on the biggest problems that challenge the safety and integrity of our products. They use technical know-how, excellent problem-solving skills, user insights, and proactive communication to protect users and our partners from abuse across Google products like Search, Maps, Gmail, and Google Ads. On this team, you're a big-picture thinker and strategic team-player with a passion for doing what’s right. You work globally and cross-functionally with Google engineers and product managers to identify and fight abuse and fraud cases at Google speed - with urgency. And you take pride in knowing that every day you are working hard to promote trust in Google and ensuring the highest levels of user safety.

The Trust and Safety (T&S) team reduces risk and protects the experience of our youngest users and business partners across Google's expanding base of products. We work with a variety of teams—from Engineering to Legal and Public Policy—to set policies and combat fraud and abuse at scale, often by finding innovative solutions. We use technical know-how, user insights, and proactive communication to pursue the highest possible quality and safety standards for users...

The Kids and Learning T&S team works alongside Product, Engineering, and Policy teams to proactively understand the risk for evolving GenAI experiences on Search. The team detects harm patterns, develops state-of-the-art Applied AI solutions to manage novel trust problems, and defines industry best practices. This is an exciting opportunity to be part of unlocking safe access to GenAI experiences for global youth, which is a company-level priority.

At Google we work hard to earn our users' trust every day. Trust & Safety is Google's team of abuse-fighting and user trust experts working daily to make the internet a safer place. We partner with teams across Google to deliver bold solutions in abuse areas such as malware, spam, and account hijacking. A team of Analysts, Policy Specialists, Engineers, and Program Managers, we work to reduce risk and fight abuse across all of Google's products, protecting our users, advertisers, and publishers across the globe in over 40 languages.

The US base salary range for this full-time position is $132,000-$194,000 + bonus + equity + benefits. Our salary ranges are determined by role, level, and location. Within the range, individual pay is determined by work location and additional factors, including job-related skills, experience, and relevant education or training. Your recruiter can share more about the specific salary range for your preferred location during the hiring process.

Please note that the compensation details listed in US role postings reflect the base salary only, and do not include bonus, equity, or benefits. Learn more about benefits at Google.

Responsibilities:

  • Own the safety strategy for new GenAI features on Search, including defining youth-specific risks for new capabilities (e.g., image, video, agentic), analyzing and prioritizing emerging risks, managing testing, and prioritizing mitigation. 
  • Lead cross-functional teams (Product, Engineering, Responsible AI Testing, Research, Policy) to implement safety initiatives. Act as a key advisor to executive stakeholders (T&S, Legal, and Product teams) on safety issues.
  • Use technical judgment to develop testing requirements, analyze results, design mitigations, and drive post-launch monitoring.
  • Act as a trusted partner in a fluid environment, coordinating and providing a consolidated view of risks and mitigations across all launch pillars (e.g., policy, testing, features) to cross-functional partners and leadership.
  • Perform on-call responsibilities on a rotating schedule, including weekend coverage. Work with sensitive content or situations and may be exposed to graphic, controversial, or upsetting topics or content.