Engineering Analyst, Content Adversarial Red Team

Google
Full-time
On-site
London, England, United Kingdom
Engineer
Minimum qualifications:

Bachelor's degree or equivalent practical experience.
7 years of experience in Trust and Safety, risk mitigation, cybersecurity, or related fields.
7 years of experience with one or more of the following languages: SQL, R, Python, or C++.
6 years of experience in adversarial testing, red teaming, jailbreaking for trust and safety, or a related field, with a focus on AI safety.
Experience with the Google infrastructure/tech stack and tooling, APIs and web services, Colab deployment, SQL and data handling, and MLOps or other AI infrastructure.

Preferred qualifications:

Master's degree or PhD in a relevant quantitative or engineering field.
Experience in an individual contributor role within a technology company, focused on product safety or risk management.
Experience working closely with both technical and non-technical teams on dynamic solutions or automations to improve user safety.
Understanding of AI systems/architecture including specific vulnerabilities, machine learning, and AI responsibility principles.
Ability to effectively articulate technical concepts to both technical and non-technical stakeholders.
Excellent communication and presentation skills (written and verbal) and the ability to influence cross-functionally at various levels.

About The Job

Fast-paced, dynamic, and proactive, YouTube’s Trust & Safety team is dedicated to making YouTube a safe place for users, viewers, and content creators around the world to create and express themselves. Whether understanding and solving their online content concerns, navigating within global legal frameworks, or writing and enforcing worldwide policy, the Trust & Safety team is on the frontlines of enhancing the YouTube experience, building internet safety, and protecting free speech in our ever-evolving digital world.

As a pioneering expert in AI red teaming, you will apply your technical proficiency to shape our sustainable, future-proof approaches to adversarial testing of Google's generative AI products.

In this role, you will design and direct red teaming operations, creating innovative methodologies to uncover novel content abuse risks. You will support the team in the design, development, and delivery of technical solutions to testing and process limitations. You will act as a key advisor to executive leadership, leveraging your influence across Product, Engineering, and Policy teams to drive strategic initiatives.

You will be a mentor, fostering a culture of continuous learning and sharing your deep expertise in adversarial techniques. You will represent Google's AI safety efforts externally, collaborating with industry partners to develop best practices for responsible AI and solidifying our position as a thought leader in the field.

At Google, we work hard to earn our users’ trust every day. Trust & Safety is Google’s team of abuse-fighting and user trust experts working daily to make the internet a safer place. We partner with teams across Google to deliver bold solutions in abuse areas such as malware, spam, and account hijacking. A team of Analysts, Policy Specialists, Engineers, and Program Managers, we work to reduce risk and fight abuse across all of Google’s products, protecting our users, advertisers, and publishers across the globe in over 40 languages.

Responsibilities

Influence across Product, Engineering, Research and Policy to drive the implementation of strategic safety initiatives. Be a key advisor to executive leadership on complex content safety issues, providing actionable insights and recommendations.
Mentor and guide junior and senior analysts, fostering excellence and continuous learning within the team. Act as a subject matter expert, sharing deep knowledge of adversarial and red teaming techniques and risk mitigation.
Represent Google's AI safety efforts in external forums and conferences. Contribute to the development of industry-wide best practices for responsible AI development.
Be exposed to graphic, controversial, or upsetting content.
Bridge technical constraints and red teaming requirements by leading the design, development, and integration of novel platforms, tooling, and engineering solutions to support and scale adversarial testing.


Google is proud to be an equal opportunity workplace and is an affirmative action employer. We are committed to equal employment opportunity regardless of race, color, ancestry, religion, sex, national origin, sexual orientation, age, citizenship, marital status, disability, gender identity or Veteran status. We also consider qualified applicants regardless of criminal histories, consistent with legal requirements. See also Google's EEO Policy and EEO is the Law. If you have a disability or special need that requires accommodation, please let us know by completing our Accommodations for Applicants form.