
Technical Program Manager, Generative AI Safety

Google
Full-time
On-site
Singapore
Manager

Minimum qualifications:

  • Bachelor's degree in a technical field, or equivalent practical experience.
  • 5 years of experience in program management.
  • 2 years of experience developing or launching products or technologies within safety, security, privacy, or a related area.
  • Experience with generative AI and machine learning.
  • Experience managing multi-quarter technical programs involving distributed systems, machine learning pipelines, or infrastructure.
  • Experience designing cross-functional engagement models and operations to lead teams through the full execution lifecycle.

Preferred qualifications:

  • Master's degree.
  • 7 years of experience in program or project management.
  • Experience in content safety, Trust and Safety, responsible AI, or product policy, including evaluating malicious threats at scale.
  • Experience driving alignment across a distributed landscape of stakeholders (e.g., central engineering, Trust and Safety, and various product teams) to land high-impact cross-functional efforts.
  • Experience driving and critiquing technical requirements for sensitive, scalable detection systems, including Machine Learning (ML) and Large Language Model (LLM) concepts (e.g., transformers, activations, efficient training and deployment).
  • Understanding of adversarial dynamics and the ability to prioritize coverage gaps with a problem-centric mindset.

About the job:

A problem isn’t truly solved until it’s solved for all. That’s why Googlers build products that help create opportunities for everyone, whether down the street or across the globe. As a Technical Program Manager at Google, you’ll use your technical expertise to lead complex, multi-disciplinary projects from start to finish. You’ll work with stakeholders to plan requirements, identify risks, manage project schedules, and communicate clearly with cross-functional partners across the company. You're equally comfortable explaining your team's analyses and recommendations to executives as you are discussing the technical tradeoffs in product development with engineers.

The Core team builds the technical foundation behind Google’s flagship products. We are owners and advocates for the underlying design elements, developer platforms, product components, and infrastructure at Google. These are the essential building blocks for excellent, safe, and coherent experiences for our users and drive the pace of innovation for every developer. We look across Google’s products to build central solutions, break down technical barriers and strengthen existing systems. As the Core team, we have a mandate and a unique opportunity to impact important technical decisions across the company.

Responsibilities:

  • Lead complex, multi-quarter initiatives to expand our content safety infrastructure. Manage ambiguous problems, such as integrating specialized safety classifiers or building rapid-response capabilities for AI abuse vectors.
  • Partner with cross-functional leaders to convert emerging threat intelligence and safety objectives into scalable, production-ready models and technical protections within our serving stack.
  • Orchestrate the strategy and execution of our Safety Engineering teams. Ensure our programs tangibly reduce abuse prevalence, improve user safety metrics, and optimize the person-hours required for model training and deployment.
  • Manage global workflows, coordinating with regional teams to ensure continuous coverage, seamless handoffs, and timely integration and evaluation of safety models for business-critical Gemini releases.
  • Coordinate between Infrastructure teams, generative AI product groups, and foundational model researchers to integrate safety signals into primary models.