Meta AI glasses under fire over privacy violations by Kenya-based contractor

Posted on March 5, 2026

Britain’s data privacy watchdog is pressing Meta for answers. The focus: whether workers at a Nairobi-based outsourcing firm watched sensitive footage shot through Meta AI-powered smart glasses. The investigation is raising hard questions about data security, informed consent, and the hidden human cost of training artificial intelligence.

UK regulator steps in after damaging report

The UK’s Information Commissioner’s Office (ICO) has reached out to Meta after a Swedish media investigation revealed troubling details about how data from the Ray-Ban Meta smart glasses is handled. The ICO, which enforces data protection rules across Britain, said the findings raised serious concerns.

The regulator made its position clear: smart devices that process personal data must give users meaningful control and full transparency. Companies operating in this space are required to explain exactly what data is gathered and how it is used. The ICO confirmed it would gather further information from Meta to assess whether the company has complied with UK data protection law.

The inquiry reflects growing pressure on technology firms developing AI consumer products that depend on large-scale data collection from everyday users.

Contractors reportedly saw deeply personal footage

The investigation, conducted by Swedish outlets Svenska Dagbladet and Göteborgs-Posten, examined the data review pipeline behind the Meta AI wearables. Reporters found that workers at a Meta outsourcing partner had been reviewing photos and videos recorded through the glasses. The content was reportedly far more intimate than users likely expected.

“We see everything – from living rooms to naked bodies,” one worker reportedly said.

One contractor described the scope of the footage bluntly, indicating they saw virtually everything recorded inside people’s homes. According to the report, some videos captured individuals using the bathroom or engaging in sexual activity. In one account, a pair of glasses left recording in a bedroom captured a woman undressing.

The workers reviewing this content were data annotators. Their role in the Meta AI training pipeline is to tag and label visual content so that machine learning models can learn to recognize objects, environments, and human behavior. This type of human-in-the-loop annotation is a standard but often opaque part of building AI systems.

Sama, the Nairobi firm at the center of the controversy

The annotators were employed by Sama, an outsourcing company headquartered in Nairobi, Kenya. The firm has handled data labeling and content review work for several major tech companies. Workers also reportedly reviewed transcripts of conversations between users and the glasses’ built-in AI assistant, checking whether the AI gave accurate and helpful responses.

Sama started as a nonprofit with a mission to generate technology jobs in emerging economies. It later became a certified B Corporation, a designation intended to signal a commitment to ethical business practices. However, the company has faced criticism in the past over working conditions tied to its content moderation contracts, particularly the psychological burden placed on workers reviewing disturbing material. Sama eventually withdrew from content moderation work entirely.

The BBC reported it had contacted Sama for comment on the most recent allegations.

Meta says privacy filters are in place

Meta acknowledged that human contractors review some user data but insisted this happens only in limited circumstances and serves to improve the user experience. The company said data shared with Meta AI through the glasses undergoes filtering before any contractor sees it, with privacy protections applied as part of that process.

The company’s privacy policy does disclose that user interactions with Meta AI systems may be reviewed, either through automated processes or by human reviewers. Meta said the practice is consistent with industry norms.

However, workers cited in the investigation contradicted the company’s assurances. They said the privacy filters did not always function as intended, and that identifiable faces were sometimes visible in the footage they reviewed. That gap between Meta’s stated policy and the reported experience of its contractors is now at the heart of the ICO’s inquiry.

“Ray-Ban Meta glasses help you use AI, hands free, to answer questions about the world around you,” the tech giant told BBC News, adding: “When people share content with Meta AI, like other companies we sometimes use contractors to review this data to improve people’s experience with the glasses, as stated in our Privacy Policy.”

“This data is first filtered to protect people’s privacy.”

How do the Ray-Ban Meta glasses work?

Meta’s AI-enabled eyewear was developed in collaboration with EssilorLuxottica, the global lens and frame manufacturer behind brands including Ray-Ban and Oakley. The glasses incorporate cameras, microphones, and embedded AI software to deliver hands-free assistance throughout the day.

Users can ask the glasses to describe what they see, translate written text, or record photos and videos. A small indicator light is designed to signal when recording is active. Meta advises users to exercise discretion and avoid filming in private or sensitive settings.

Critics argue that the indicator light is too discreet to reliably alert bystanders that they are being recorded. Because the cameras are embedded in frames that look like ordinary eyewear, the glasses can operate in social spaces without drawing attention.

AI wearables and the broader surveillance debate

The controversy sits within a much wider conversation about the privacy implications of AI-powered wearable technology. Devices capable of capturing and interpreting real-world environments in real time are becoming more common. For people with visual impairments, such tools can be genuinely life-changing, enabling them to navigate spaces and access information independently.

But the same capabilities that make these devices useful also enable passive surveillance. Privacy advocates have long warned that AI wearables blur the line between personal technology and recording equipment. Several women previously told BBC News they had been filmed without consent by people wearing smart glasses. Those accounts helped intensify calls for tighter safeguards around AI-enabled wearables.

A transparency test for the AI industry

The ICO’s investigation arrives as regulators across multiple countries push for clearer rules governing AI development. Transparency around data use, the role of human reviewers in training pipelines, and the rights of users to control how their information is processed have all moved to the center of the policy debate.

This case may set a precedent. If regulators determine that Meta failed to adequately inform users about human review of their data, the company could face enforcement action. More broadly, the outcome could shape how AI companies disclose data practices to consumers going forward.

For technology companies racing to bring the next generation of AI wearables to market, the message from regulators is becoming harder to ignore. Innovation and user trust are not mutually exclusive. But closing the gap between the two will require more than a line buried in a privacy policy.

What do you think?

Should AI wearable companies disclose how and when human contractors review your recorded footage? Do current privacy laws go far enough to protect users from this kind of data exposure? Please share your thoughts in the comments below.
