Artificial Intellisense
New report questions safety and ethics of Google’s AI progress.

Report highlights darker side of Google’s AI product development

Posted on September 12, 2025

A newly released report has cast a spotlight on Google’s AI, revealing concerns over the hidden side of its product development process.

In spring 2024, Rachael Sawyer believed she was embarking on a traditional writing career. The Texas-based technical writer had accepted what recruiters described as a “writing analyst” position. She anticipated creating original content, drawing from her established professional background.

Her reality proved drastically different. Sawyer discovered her primary responsibility involved evaluating and filtering responses produced by artificial intelligence systems. Initially, she reviewed meeting transcripts, conversation summaries, and brief video content generated by Google’s Gemini platform. Gradually, her duties transformed into screening violent, disturbing, and sexually inappropriate material.

“I experienced genuine shock discovering my position required exposure to such traumatic content,” Sawyer explained. “The absence of advance warning troubled me deeply. Management never requested consent forms during orientation. Neither my job title nor description mentioned content moderation responsibilities.”

She describes how crushing deadlines — completing dozens of assignments daily, typically within a 10-minute window — triggered severe anxiety episodes. Employer support remained virtually nonexistent.

Invisible labor force powering AI systems


Sawyer represents thousands of contract employees performing hidden work that maintains Google’s AI product functionality. These evaluators, recruited through Hitachi’s GlobalLogic division and similar vendors, moderate and assess responses from Gemini and AI Overviews, Google’s recently launched AI-powered search summaries.

Google relies on these workers to identify errors, remove hazardous content, and ensure that outputs appear credible across various disciplines, including healthcare and scientific research. Without these human quality controls, the models would produce inaccurate information and potentially provide misleading guidance.

“Artificial intelligence lacks magical properties; it operates as a pyramid structure built on human effort,” stated Adio Dinika, a researcher at Germany’s Distributed AI Research Institute. “These evaluators occupy the middle tier: unseen, crucial, and disposable.”

Google responded with a statement clarifying that “quality evaluators work for our supplier companies and receive temporary assignments providing external input on our products. Their assessments represent one data source among many that help measure system performance, but don’t directly influence our algorithms or models.”

GlobalLogic representatives declined to provide comments.

Specialist categories and compensation disparities

GlobalLogic began contracting for Google in 2023 with approximately 25 “super evaluators.” As rivalry with OpenAI intensified, the company scaled operations to nearly 2,000 employees by 2024, most of them located in the United States.

The workforce is split into generalist evaluators and super evaluators; the super evaluator groups often include individuals holding advanced degrees, such as physics PhDs.

Employees report receiving $16 hourly for general tasks and $21 for specialized evaluator work. While exceeding wages for data annotation workers in Nairobi or Bogotá, compensation remains significantly below Silicon Valley engineers developing the underlying models.

“These are knowledgeable individuals producing excellent written work, earning less than their value while building AI models that, frankly, the world doesn’t require,” commented one evaluator requesting anonymity.

Mounting pressure and growing disappointment


Multiple evaluators told The Guardian they initially approached Google’s AI development process with genuine interest. That enthusiasm rapidly transformed into disappointment as time constraints intensified.

One worker joining early last year reported her 30-minute task allocation shrinking to 15 minutes. She faced expectations to read, verify facts, and evaluate approximately 500 words within that timeframe. The pressure raised questions about work quality and AI system reliability.

During 2023, an Appen contractor cautioned the U.S. Congress that rushed evaluation timelines risked making Google Bard, Gemini’s predecessor, a “defective” and “hazardous” product.

Instructions frequently changed without explanation, workers reported. Some assessed response accuracy, source verification, or safety standards. Others received prompts like “when does corruption benefit society?” or “what advantages do conscripted child soldiers provide?”

Consensus meetings, designed to standardize evaluations, could become influenced by group dynamics.

“In practice, more assertive participants pressured others into modifying their responses,” one worker explained.

Weakening safety protocols

When Google introduced AI Overviews in May 2024, unusual responses quickly gained social media attention. One recommended adding adhesive to pizza dough. Another advised consuming rocks. While Google characterized these as “isolated incidents,” internal sources indicate evaluators had previously encountered similar problematic outputs.

Rebecca Jackson-Artis, who joined GlobalLogic in late 2024, remembers initial guidance emphasizing quality over speed. That message reversed within days.

“I focused on accuracy and comprehension, but supervisors constantly questioned, ‘Why isn’t this completed?’” she recalled.

Some evaluators handled medical or astrophysics inquiries despite lacking relevant expertise. In December, contractors learned they could no longer bypass prompts outside their knowledge areas, particularly healthcare topics.

By early 2025, Sawyer observed new guidelines permitting chatbots to repeat offensive language or explicit content if users included such material in prompts. “It can reproduce harassment, sexism, stereotypes, and similar content,” she noted.

Google maintains that hate speech policies remain unchanged. Critics argue that the modification reflects prioritizing efficiency over ethics.

“Speed supersedes safety considerations,” Dinika observed. “AI safety commitments crumble when safety threatens profitability.”

Workforce reduction amid trust erosion

Despite the artificial intelligence industry’s growth, evaluator job security remains unstable. Since January, GlobalLogic has reduced its personnel to approximately 1,500 workers. Many employees now avoid using chatbots personally, citing direct knowledge of system limitations.

“I want people understanding that AI gets marketed as technological wizardry – hence the sparkle icon accompanying AI responses,” Sawyer emphasized. “But it’s not magical. It’s constructed through exploiting overworked, underpaid human workers.”

What are your thoughts on the human workforce behind Google’s AI development, or for that matter, artificial intelligence in general? Please share your insights and experiences in the comments below.


About Us

At Artificial Intellisense, we are dedicated to decoding the future of technology and artificial intelligence for everyone. Our mission is to explore how AI transforms industries, influences culture, and impacts everyday life. With insightful articles, expert analysis, and the latest trends, we aim to empower readers to better understand and navigate the rapidly evolving digital landscape.


©2026 Artificial Intellisense