Meta is in damage control mode, and the timing couldn’t look worse. According to BBC reporting, the social media giant cancelled a major contract with Sama, a US-based outsourcing company, just weeks after workers alleged they had to review intimate and graphic content captured by Meta’s AI-powered smart glasses. The fallout has left over 1,000 people without jobs and raised uncomfortable questions about whether Meta’s real problem was the work quality or the workers who spoke up.
When the Glasses Saw Too Much
In late February, Swedish newspapers Svenska Dagbladet and Göteborgs-Posten published an investigation featuring accounts from unnamed workers at Sama. These weren’t engineers or designers. They were data annotators in Kenya, hired to teach Meta’s AI to interpret images by manually labelling content. Their job description never mentioned that they’d be watching people undress, use the bathroom, or have sex.
“We see everything from living rooms to naked bodies,” one worker told the Swedish outlets, according to BBC reporting.
The glasses themselves, which Meta launched in partnership with Ray-Ban and Oakley, feature a physical light indicator that turns on when recording. But that doesn’t stop misuse. In one instance documented by the Swedish investigation, a man’s glasses were left recording in a bedroom and later captured his wife undressing without her knowledge or consent. The workers were expected to review this footage as part of their work training Meta’s AI.
Meta acknowledged that subcontracted workers sometimes review content from the glasses when users share it with Meta’s AI systems. The company framed this as standard practice, done with “clear user consent” to improve product performance. Technically true on the consent part. Practically nightmarish for the people reviewing it.
The Contract Ends, Questions Linger
Within weeks of the Swedish investigation, Meta paused its work with Sama. Less than two months after the initial reporting, the company terminated the contract entirely. Sama said the decision would leave 1,108 workers redundant.
Meta’s explanation was blunt: Sama didn’t meet its standards. That’s where the story might have ended, except Sama pushed back hard. In a statement to BBC News, the company said it had “consistently met the operational, security and quality standards required across our client engagements, including with Meta” and that it was “never notified of any failure to meet those standards.”
Here’s where the timing becomes suspicious. Naftali Wambalo from the Africa Tech Workers Movement, who is involved in ongoing legal action against Meta over earlier business practices, told the BBC he believed the real issue wasn’t standards at all. “What I think are the standards they are talking about here are standards of secrecy,” he said.
The argument cuts deep. If Sama genuinely failed to meet quality benchmarks, why wasn’t that flagged at some point during a relationship spanning months, if not years? Why the sudden termination right after workers went public? Meta hasn’t directly addressed these questions.
A Pattern Worth Noticing
This isn’t Meta’s first rough partnership with Sama, either. The company also hired Sama to moderate Facebook content, a contract that later attracted criticism and legal action from former employees who described being traumatised by exposure to graphic material. Workers on that contract also faced pushback when they complained.
Mercy Mutemi, a lawyer representing petitioners in the Facebook moderation case and executive director of campaign group the Oversight Lab, sees a troubling pattern. She warned the Kenyan government about building an AI industry on what she called “a very flimsy foundation.” Her concern: Kenya risks becoming a global hub for outsourced technology work without proper protections for its workers.
Regulators have started paying attention. The UK’s Information Commissioner’s Office wrote to Meta expressing concern shortly after the Swedish investigation. Kenya’s Office of the Data Protection Commissioner announced its own investigation into privacy issues surrounding the glasses.
The Real Questions
The most troubling part of this situation isn’t just Meta’s treatment of Sama or its workers. It’s the larger implication: major tech companies are building the infrastructure of artificial intelligence partly through outsourced labour in developing countries, often with minimal oversight and maximum exposure to content that would horrify most people. When workers speak up, the response isn’t always accountability. Sometimes it’s a cancelled contract.
Meta has previously stated that users are informed about human review in its terms of service. But knowing that humans might review your content and actually confronting what that means for both the user and the reviewer are entirely different things.
The glasses themselves represent a genuinely useful innovation for people with visual impairments. Translation features, object recognition, real-time assistance. These aren’t trivial applications. But innovation doesn’t exempt a company from accountability, especially when the human cost of that innovation falls disproportionately on workers in countries with fewer labour protections.
So far, Meta has maintained its position that it ended the contract due to unmet standards. Sama says that’s not true. Workers say they were punished for speaking out. And regulators are watching to see what sticks. In the absence of transparency, the suspicion that Meta prioritised silence over standards will likely persist.