
Senior Strategist, Generative AI, Trust and Safety

Google
Full-time
On-site
Austin, Texas, United States
$110,000 - $157,000 USD yearly

Minimum qualifications:

  • Bachelor's degree or equivalent practical experience.
  • 4 years of experience in data analytics, Trust and Safety, policy, cybersecurity, or related fields.

Preferred qualifications:

  • Master's degree in a relevant field.
  • Experience working on novel AI risks and threat actors, including cyber misuse, societal harms, and weaponization.
  • Experience with SQL, Python, or other scripting languages for data analysis and prototyping.
  • Experience addressing the technical and policy issues inherent in AI systems.
  • Experience leading cross-functional projects and setting direction.
  • Excellent problem-solving and critical-thinking skills, with attention to detail in an ever-changing environment.

About the job:

Trust & Safety team members are tasked with identifying and taking on the biggest problems that challenge the safety and integrity of our products. They use technical know-how, excellent problem-solving skills, user insights, and proactive communication to protect users and our partners from abuse across Google products like Search, Maps, Gmail, and Google Ads. On this team, you're a big-picture thinker and strategic team-player with a passion for doing what’s right. You work globally and cross-functionally with Google engineers and product managers to identify and fight abuse and fraud cases at Google speed - with urgency. And you take pride in knowing that every day you are working hard to promote trust in Google and ensuring the highest levels of user safety.

As a Senior Strategist, GenAI, you will be a subject matter expert responsible for architecting our approach to the risks associated with AI. You will go beyond day-to-day analysis and investigations to inform the long-term roadmap for model safety. You will anticipate future threats, develop novel evaluation paradigms, and influence Google's product and research direction to ensure safety is a foundational, non-negotiable component of our AI systems.

You will have experience in analytics, an investigative mindset, critical-thinking skills, and the ability to work across all levels of the organization. You will be a pivotal voice in discussions that shape the future of AI at Google and beyond. You will collaborate with many teams within and outside of Trust and Safety. You will also partner with teams to drive operational excellence and deliver cross-functional initiatives at scale.

At Google, we work hard to earn our users' trust every day. Trust & Safety is Google's team of abuse-fighting and user trust experts working daily to make the internet a safer place. We partner with teams across Google to deliver bold solutions in abuse areas such as malware, spam, and account hijacking. A team of Analysts, Policy Specialists, Engineers, and Program Managers, we work to reduce risk and fight abuse across all of Google's products, protecting our users, advertisers, and publishers across the globe in over 40 languages.

The US base salary range for this full-time position is $110,000-$157,000 + bonus + equity + benefits. Our salary ranges are determined by role, level, and location. Within the range, individual pay is determined by work location and additional factors, including job-related skills, experience, and relevant education or training. Your recruiter can share more about the specific salary range for your preferred location during the hiring process.

Please note that the compensation details listed in US role postings reflect the base salary only, and do not include bonus, equity, or benefits. Learn more about benefits at Google.

Responsibilities:

  • Conduct analyses and investigations to inform and implement next-generation safety mitigations.
  • Partner with Engineering, Product, Policy, and Legal to set precedents and create scalable, defensible principles for new AI capabilities. Guide engineering and research teams in building sophisticated technical solutions, from fine-tuning techniques to classifier-based guardrails.
  • Analyze the evolving AI threat landscape. Identify and forecast future misuse vectors and adversarial techniques, translating these insights into a proactive mitigation agenda.
  • Be the go-to person for issues in your area of the business, and use your domain knowledge to provide partners with insights and analyses.
  • Review or be exposed to sensitive or violative content as part of the core role. Perform on-call responsibilities on a rotating basis, including weekend and holiday coverage.