
Senior Principal Analyst, Search, Trust and Safety

Google
Full-time
On-site
Sunnyvale, California, United States
$206,000 - $289,000 USD yearly
Analyst
Minimum qualifications:

Bachelor's degree or equivalent practical experience.
13 years of experience in data analytics, Trust and Safety, policy, cybersecurity, business strategy, or related fields.

Preferred qualifications:

Master's or PhD in relevant field.
Experience with machine learning.
Experience working with engineering and product teams to create tools, solutions, or automation to improve user safety.
Experience working with policy teams.
Experience with SQL, data collection/transformation, visualization/dashboards, and knowledge of a scripting/programming language (e.g., Python).
Excellent problem-solving and critical thinking skills with attention to detail in a fluid environment.

About The Job

Trust & Safety team members are tasked with identifying and taking on the biggest problems that challenge the safety and integrity of our products. They use technical know-how, excellent problem-solving skills, user insights, and proactive communication to protect users and our partners from abuse across Google products like Search, Maps, Gmail, and Google Ads. On this team, you're a big-picture thinker and strategic team-player with a passion for doing what's right. You work globally and cross-functionally with Google engineers and product managers to identify and fight abuse and fraud cases at Google speed, with urgency. And you take pride in knowing that every day you are working hard to promote trust in Google and ensure the highest levels of user safety.

As a Senior Principal Analyst, you will collaborate with many teams within and outside of Trust and Safety. You will lead and partner with the Engineering, Product, Legal, Policy, and Scaled Operations teams to set strategy, enable integration across teams, and drive solutions to ecosystem-level problems. You will enable efficiency by driving clarity of intent and execution excellence to deliver cross-functional initiatives.

You will functionally lead one or more teams, setting strategy and representing the team's work, and will be responsible for reducing policy-violating activity across all Generative AI products for Search and Assistant. You will also enable the deployment of key defenses to stop abuse, and lead process improvement efforts to increase the speed and quality of response to abuse. You will identify platform needs, influence enforcement capability design, and enable professional and career success for the team.

At Google we work hard to earn our users' trust every day. Trust & Safety is Google's team of abuse-fighting and user trust experts working daily to make the internet a safer place. We partner with teams across Google to deliver bold solutions in abuse areas such as malware, spam, and account hijacking. A team of Analysts, Policy Specialists, Engineers, and Program Managers, we work to reduce risk and fight abuse across all of Google's products, protecting our users, advertisers, and publishers across the globe in over 40 languages.

The US base salary range for this full-time position is $206,000-$289,000 + bonus + equity + benefits. Our salary ranges are determined by role, level, and location. Within the range, individual pay is determined by work location and additional factors, including job-related skills, experience, and relevant education or training. Your recruiter can share more about the specific salary range for your preferred location during the hiring process.

Please note that the compensation details listed in US role postings reflect the base salary only, and do not include bonus, equity, or benefits. Learn more about benefits at Google.

Responsibilities

Lead a team responsible for delivering safety strategy for generative AI launches, ensuring achievement of business outcomes and objectives.
Test critical areas that are important for the business and make recommendations for business processes. Lead decision-making on organizational strategy, identifying trends and opportunities for people initiatives, workforce drives, and experience.
Work across user-centric issues in search results to prevent user harm, financial fraud, and identity theft, including non-consensual explicit images, involuntary fake pornography, exploitative removal practices, and doxxing.
Lead with the ability to pivot in a fluid environment, coordinating and providing a consolidated view of risks and mitigations across the various pillars of the launch (policy, testing, features) to the cross-functional group and leadership.
Perform on-call responsibilities on a rotating basis, including weekend and holiday coverage. You will be exposed to graphic, controversial, and upsetting content.


Google is proud to be an equal opportunity workplace and is an affirmative action employer. We are committed to equal employment opportunity regardless of race, color, ancestry, religion, sex, national origin, sexual orientation, age, citizenship, marital status, disability, gender identity or Veteran status. We also consider qualified applicants regardless of criminal histories, consistent with legal requirements. See also Google's EEO Policy and EEO is the Law. If you have a disability or special need that requires accommodation, please let us know by completing our Accommodations for Applicants form.