Investigations Analyst, Ethical ML, Trust and Safety

Google

Google's brand is only as strong as our users' trust, and their steadfast belief that our guiding principles are what's best for them. Our Trust and Safety team has the critical responsibility of protecting Google's users by fighting web abuse and fraud across Google products like Search, Maps, AdWords and AdSense. On this team, you're a big-picture thinker and strategic leader. You understand the user's point of view and are passionate about using your combined technical, sales and customer service acumen to protect our users. You work globally and cross-functionally with Google developers and Product Managers to navigate challenging online safety situations and handle abuse and fraud cases at Google speed (read: fast!). Help us prove that quality on the Internet trumps all.

You will ensure responsible and high-impact AI development and deployment in Trust & Safety (T&S) and across Google. Our team's scope has two main components: Ethical ML, which addresses technological and ethical issues, and ML in T&S, which seeks to establish T&S as a leader in responsible, fraud-fighting AI development and deployment.

At Google we work hard to earn our users’ trust every day. Gaining and retaining this trust is critically important to Google’s success. We defend Google's integrity by fighting spam, fraud and abuse, and develop and communicate state-of-the-art product policies. The Trust and Safety team reduces risk and protects the experience of our users and business partners in more than 40 languages and across Google's expanding base of products. We work with a variety of teams from Engineering to Legal, Public Policy and Sales Engineering to set policies and combat fraud and abuse in a scalable way, often with an eye to finding industry-wide solutions. Trust and Safety team members are motivated to find innovative solutions, and use technical know-how, user insights and proactive communication to pursue the highest possible quality and safety standards for users across Google products.

Responsibilities

  • Perform deep-dives and case studies into fairness issues and vulnerabilities in advanced technologies, and conduct assessments outlining user impact issues, abuse vectors and trends.
  • Design and implement operational processes to investigate and detect machine learning fairness issues across Google products.
  • Assess and experiment with different approaches and workflows to identify a scalable, stable operating model.
  • Surface insights and communicate effectively to influence and guide larger, cross-functional working groups towards solutions.
  • Proactively partner with other teams in Trust & Safety and across Google to exchange information on emerging ethical ML and abuse trends, and establish yourself as a leading expert.

Qualifications

Minimum qualifications:

  • BA/BS degree in Psychology, Anthropology, Ethnography, or equivalent practical experience.
  • 6 years of experience in technical fields (e.g., Data Analysis, Hazard Assessment, Risk and Fraud Investigation, Security Vulnerabilities, Penetration Testing).
  • Experience with risk management, failure modes or experiment design.

Preferred qualifications:

  • Knowledge of ethics and socio-technical considerations of technology and the future of AI.
  • Data analysis skills through SQL (or similar programming language) and scripting languages (e.g. Python).
  • Ability to surface actionable insights to mitigate and combat ethical AI concerns across a variety of products.
