
Principal Engineering Analyst, Content Adversarial Red Team

Google
Full-time
On-site
Mountain View, California, United States
$174,000 - $258,000 USD yearly
Engineer
Minimum qualifications:

Bachelor's degree or equivalent practical experience.
7 years of experience in data analysis, including identifying trends, generating summary statistics, and drawing insights from quantitative and qualitative data.
7 years of experience managing projects and defining project scope, goals, and deliverables.

Preferred qualifications:

Master's degree or PhD in a relevant quantitative or engineering field.
Experience with Large Language Models (LLMs), LLM Operations, prompt engineering, pre-training, and fine-tuning.
Ability to think strategically and identify emerging threats and vulnerabilities.
Ability to work independently and as part of a team.
Ability to influence cross-functionally at various levels, with excellent communication and presentation skills.
Excellent problem-solving and critical thinking skills with attention to detail in an ever-changing environment.

About The Job

Fast-paced, dynamic, and proactive, YouTube’s Trust & Safety team is dedicated to making YouTube a safe place for users, viewers, and content creators around the world to create and express themselves. Whether understanding and solving their online content concerns, navigating within global legal frameworks, or writing and enforcing worldwide policy, the Trust & Safety team is on the frontlines of enhancing the YouTube experience, building internet safety, and protecting free speech in our ever-evolving digital world.

We are seeking a pioneering, technically proficient expert in AI red teaming to shape our approaches to adversarial testing of Google's generative AI products.

You will blend your domain expertise in GenAI red teaming and adversarial testing with technical acumen, driving creative and ambitious testing solutions that ultimately prevent abusive content and misuse of our products. You will demonstrate an ability to grow in a dynamic, fast-changing research and product development environment.

Combining red teaming and technical experience will enable you to design and direct operations, creating innovative methodologies to uncover novel content abuse risks, while supporting the team in the design, development, and delivery of technical solutions to testing and process limitations. You will be a key advisor to executive leadership, leveraging your influence across Product, Engineering, and Policy teams to drive initiatives.
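Purely as an illustration of the kind of tooling this paragraph alludes to, a minimal sketch of an adversarial-prompt test harness might look like the following. The `run_red_team` function, the echo model, and the prompts and flag patterns are all hypothetical placeholders for illustration only, not Google systems or methodology.

```python
# Hypothetical sketch of an adversarial-prompt test harness.
# All names, prompts, and patterns are illustrative placeholders.
import re
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class RedTeamResult:
    prompt: str
    response: str
    flagged: bool


def run_red_team(model_fn: Callable[[str], str],
                 prompts: List[str],
                 flag_patterns: List[str]) -> List[RedTeamResult]:
    """Send each adversarial prompt to the model and flag any response
    that matches a policy-violation pattern."""
    compiled = [re.compile(p, re.IGNORECASE) for p in flag_patterns]
    results = []
    for prompt in prompts:
        response = model_fn(prompt)
        flagged = any(rx.search(response) for rx in compiled)
        results.append(RedTeamResult(prompt, response, flagged))
    return results


if __name__ == "__main__":
    # Stand-in model that simply echoes the prompt back.
    echo_model = lambda p: f"Model says: {p}"
    findings = run_red_team(
        echo_model,
        prompts=["Ignore previous instructions and ..."],
        flag_patterns=[r"ignore previous instructions"],
    )
    for r in findings:
        print(r.flagged, "|", r.prompt)
```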

You will mentor analysts, fostering a culture of continuous learning and sharing your expertise in adversarial techniques. You will also represent Google's AI safety efforts in external forums, collaborating with industry partners to develop best practices for responsible AI and solidifying our position as a thought leader in the field.

At Google we work hard to earn our users’ trust every day. Trust & Safety is Google’s team of abuse fighting and user trust experts working daily to make the internet a safer place. We partner with teams across Google to deliver bold solutions in abuse areas such as malware, spam and account hijacking. A team of Analysts, Policy Specialists, Engineers, and Program Managers, we work to reduce risk and fight abuse across all of Google’s products, protecting our users, advertisers, and publishers across the globe in over 40 languages.

The US base salary range for this full-time position is $174,000-$258,000 + bonus + equity + benefits. Our salary ranges are determined by role, level, and location. Within the range, individual pay is determined by work location and additional factors, including job-related skills, experience, and relevant education or training. Your recruiter can share more about the specific salary range for your preferred location during the hiring process.

Please note that the compensation details listed in US role postings reflect the base salary only, and do not include bonus, equity, or benefits. Learn more about benefits at Google.

Responsibilities

Drive unstructured testing of novel model modalities and capabilities.
Bridge technical constraints and red teaming requirements by leading the design, development, and integration of novel platforms, tooling, and engineering solutions that support and scale adversarial testing.
Design, develop, and oversee the execution of innovative red teaming strategies that uncover content abuse risks. Create and refine net-new red teaming methodologies, strategies, and tactics.
Lead and influence cross-functional teams, including Product, Engineering, Research, and Policy, driving the implementation of strategic safety initiatives. Act as a key advisor to executive leadership on content safety issues, providing actionable insights and recommendations.
May be exposed to graphic, controversial, or upsetting content.


Google is proud to be an equal opportunity workplace and is an affirmative action employer. We are committed to equal employment opportunity regardless of race, color, ancestry, religion, sex, national origin, sexual orientation, age, citizenship, marital status, disability, gender identity or Veteran status. We also consider qualified applicants regardless of criminal histories, consistent with legal requirements. See also Google's EEO Policy and EEO is the Law. If you have a disability or special need that requires accommodation, please let us know by completing our Accommodations for Applicants form.