Senior Strategist, Generative AI

Google
1 hour ago
Full-time
On-site
Hyderabad, Telangana, India
Minimum qualifications:

  • Bachelor's degree or equivalent practical experience.
  • 4 years of experience in SQL, Python, and automation, with strong communication and stakeholder management skills.

Preferred qualifications:

  • Master's degree in a relevant field.
  • Experience working with the technical and policy challenges of AI systems.
  • Experience working on novel AI risks and on threat actors engaging in cyber misuse, societal harms, weaponization, etc.
  • Experience leading complex, cross-functional projects and setting strategic direction, along with excellent analytical, communication (synthesizing information and recognizing goals), and problem-solving skills, and an interest in innovation, technology, and Google products.
  • Experience with Python or other scripting languages for data analysis and prototyping using statistical analysis and hypothesis testing.
  • Excellent problem-solving and critical thinking skills, with attention to detail in an ever-changing environment.
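
The Python bullet above mentions data analysis with statistical analysis and hypothesis testing. As a hedged illustration of the kind of prototyping this likely refers to, the sketch below runs a two-sample permutation test using only the standard library; the data and scenario (abuse-flag rates for two hypothetical model variants) are invented for the example, not taken from the posting.

```python
import random
import statistics

def permutation_test(a, b, n_iter=10_000, seed=0):
    """Two-sided permutation test for a difference in means.

    Returns the fraction of random label shufflings whose
    absolute mean difference is at least as extreme as the
    observed one (an empirical p-value).
    """
    rng = random.Random(seed)
    observed = abs(statistics.mean(a) - statistics.mean(b))
    pooled = list(a) + list(b)
    n_a = len(a)
    extreme = 0
    for _ in range(n_iter):
        rng.shuffle(pooled)
        diff = abs(statistics.mean(pooled[:n_a]) - statistics.mean(pooled[n_a:]))
        if diff >= observed:
            extreme += 1
    return extreme / n_iter

# Hypothetical data: abuse flags per 1,000 prompts for a baseline
# model and a candidate model, measured over six evaluation runs.
baseline = [3.1, 2.8, 3.4, 3.0, 2.9, 3.2]
candidate = [4.0, 4.3, 3.9, 4.1, 4.4, 3.8]
p = permutation_test(baseline, candidate)
# A small p-value suggests the difference in flag rates is
# unlikely under random assignment alone.
```

A permutation test is chosen here because it makes no distributional assumptions, which suits small, skewed safety-metric samples; with larger datasets one would typically reach for a library such as SciPy instead.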

About the job:

Trust and Safety team members are tasked with identifying and taking on the biggest problems that challenge the safety and integrity of our products. They use technical know-how, excellent problem-solving skills, user insights, and proactive communication to protect users and our partners from abuse across Google products like Search, Maps, Gmail, and Google Ads. On this team, you're a big-picture thinker and strategic team player with a passion for doing what's right. You work globally and cross-functionally with Google engineers and product managers to identify and fight abuse and fraud cases at Google speed, with urgency. And you take pride in knowing that every day you are working hard to promote trust in Google and ensure the highest levels of user safety.

As a Senior Strategist, Generative AI, you will be a subject matter expert responsible for architecting our approach to risks associated with AI. You will go beyond day-to-day analysis and investigations to inform the long-term strategic roadmap for model safety. Your role will be to anticipate future threats, develop novel evaluation paradigms, and influence Google's product and research direction to ensure safety is a foundational, non-negotiable component of AI systems.

This role requires deep expertise in analytics, an investigative mindset, critical thinking, and the ability to work across all levels of the organization. You will be a pivotal voice in discussions that shape the future of AI at Google and beyond. You will collaborate with many teams within and outside of Trust and Safety, and partner with teams to drive operational excellence and deliver cross-functional initiatives at scale.

Responsibilities:

  • Execute analyses and investigations to implement next-generation safety mitigations, while guiding engineering and research teams in developing sophisticated technical solutions, from fine-tuning techniques to classifier-based guardrails.
  • Monitor the evolving AI threat landscape by identifying and forecasting future misuse vectors and adversarial techniques, and translate these insights into a proactive mitigation agenda.
  • Serve as the go-to person for business-related issues and use domain knowledge to provide partners with insights and analyses.
  • Evaluate sensitive or violative content as the core part of the role, and fulfill on-call responsibilities on a rotating basis, including weekend and holiday coverage.
  • Collaborate with Engineering, Product, Policy, and Legal to set precedents and create scalable, defensible principles for new AI capabilities.