A joint investigation reveals that Meta’s Ray-Ban smart glasses and other AI eyewear send user recordings to human contractors for review, including explicit and sensitive content. Although Meta’s terms of use permit this review, most users are unaware of it. With over seven million AI glasses sold in 2025, the privacy implications are massive.
The explosive growth of AI-powered smart glasses—with over seven million units sold in 2025 alone—has masked a disturbing reality: the footage you capture may be reviewed by human contractors thousands of miles away. This isn’t speculation; it’s the verified finding of a joint investigation by Swedish newspapers Svenska Dagbladet and Göteborgs-Posten that exposes how private recordings from devices like Meta’s Ray-Ban glasses end up on reviewers’ screens.
EssilorLuxottica, the French-Italian eyewear conglomerate that manufactures Meta’s Ray-Ban glasses, reported sales of over seven million AI glasses in 2025, a dramatic increase from the roughly two million sold across 2023 and 2024 combined. The surge reflects broader industry momentum—approximately 60 companies showcased smart glasses at CES 2026, and Meta itself demoed lightweight models that won Innovation Awards. But the investigation uncovers what happens behind the scenes: contractors in locations such as Nairobi, Kenya, are tasked with annotating user recordings, a process essential for training AI algorithms but one that puts humans in front of some of the most intimate moments imaginable.
The scope of what reviewers see is staggering. Contractors told the Swedish newspapers they’ve annotated videos containing sex scenes filmed with the smart glasses, footage of people watching pornography, and recordings capturing private details such as bank cards and sensitive personal chats discussing crimes. One annotator directly stated: “In some videos you can see someone going to the toilet, or getting undressed. I don’t think they know, because if they knew they wouldn’t be recording.”
This practice isn’t hidden—it’s explicitly permitted by Meta’s AI terms of use. The terms state: “In some cases, Meta will review your interactions with AIs, including the content of your conversations with or messages to AIs, and this review may be automated or manual (human).” The terms further clarify that user data “must be processed and may be shared onwards” for AI functionality to work, with no opt-out from data processing. Meta cautions users not to share information they don’t want the AI to use and retain, effectively placing the burden of privacy on the user despite the invasive nature of the review process.
The disconnect between marketing and reality is stark. Meta CEO Mark Zuckerberg launched the Ray-Ban AR glasses in 2025 as “an all-in-one assistant that could help you quickly accomplish some of your everyday tasks without breaking your flow.” The devices pack microphones, speakers, and cameras for features like real-time translation and moment capture. Meanwhile, reports indicate Meta is planning to add facial recognition capabilities, a development that would exponentially increase privacy concerns. Yet the average user, lured by sleek design and convenience, remains unaware that their “private” recordings could be viewed by anonymous contractors abroad.
For developers and product designers, this investigation underscores a critical failure in ethical design. The default data-handling practices prioritize AI training over user privacy, burying consent in lengthy terms of service. The technical architecture—where footage is automatically uploaded for annotation—means even seemingly mundane recordings can become part of a global human-review pipeline. This model isn’t unique to Meta; it’s an industry-wide pattern in AI product development that treats user data as a limitless training resource.
Users must now confront a sobering calculus: every time you capture a video with smart glasses, you risk permanently sharing that moment with unseen reviewers. The convenience of hands-free recording comes with an invisible human element that most would never anticipate. While Meta’s terms technically disclose this, the investigation proves disclosure isn’t enough—the average user won’t parse legal jargon to understand that “data processing” includes humans watching them undress.
The path forward requires transparency by design. Companies should implement clear, real-time indicators when recording data is being uploaded for annotation, and provide granular controls to opt out of human review entirely. For now, the prudent advice is to treat smart glasses as inherently public devices—assume anything you record could be viewed by third parties. Until the industry aligns its practices with reasonable privacy expectations, the safest smart glasses are the ones that don’t exist.
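To make the recommendation concrete, here is a minimal sketch of what consent-gated annotation uploads could look like. Everything here is hypothetical—`ReviewConsent`, `upload_for_annotation`, and the `notify` callback are illustrative names, not any real Meta or vendor API—but it shows the two ideas above: a granular opt-out that is checked before any data leaves the device, and a real-time indicator fired at upload time rather than buried in terms of service.

```python
from dataclasses import dataclass
from enum import Enum


class ReviewConsent(Enum):
    """Granular, user-chosen levels of annotation consent (hypothetical)."""
    NONE = "none"                      # never upload recordings for review
    AUTOMATED_ONLY = "automated_only"  # allow machine annotation only
    FULL = "full"                      # allow human review as well


@dataclass
class Recording:
    path: str
    user_id: str


def upload_for_annotation(recording: Recording,
                          consent: ReviewConsent,
                          notify) -> bool:
    """Gate annotation uploads on explicit consent and surface an indicator.

    Returns True only if the upload was permitted and announced to the user.
    """
    if consent is ReviewConsent.NONE:
        # Opt-out is honored on-device: nothing is transmitted at all.
        return False
    reviewer = "human" if consent is ReviewConsent.FULL else "automated"
    # Real-time indicator BEFORE any data leaves the device.
    notify(f"Uploading {recording.path} for {reviewer} review")
    # ... actual upload to the annotation pipeline would happen here ...
    return True
```

The design choice worth noting is that the consent check happens on the device, before transmission, so "no human review" means the footage never enters the pipeline at all, rather than being filtered somewhere downstream.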