
Workers report watching Ray-Ban Meta-shot footage of people using the bathroom


Meta accused of “concealing the facts” about smart glasses users’ privacy.

A marketing image for Ray-Ban Meta smart glasses. Credit: Meta

Meta’s approach to user privacy is under renewed scrutiny following a Swedish report that employees of a Meta subcontractor have watched footage captured by Ray-Ban Meta smart glasses showing sensitive user content.

The workers reportedly work for Kenya-headquartered Sama and provide data annotation for Ray-Ban Metas.

The February report, a collaboration between Swedish newspapers Svenska Dagbladet and Göteborgs-Posten and Kenya-based freelance journalist Naipanoi Lepapa, is, per a machine translation, based on interviews with more than 30 employees at various levels of Sama, including several people who work on video, image, and speech annotation for Meta’s AI systems. Some of those interviewed have worked on projects other than Meta’s smart glasses. The report’s authors said they did not gain access to the materials that Sama workers handle or to the area where workers perform data annotation. The report also draws on interviews with former US Meta employees who have reportedly witnessed live data annotation for several Meta projects.

The report pointed to, per the translation, a “stream of privacy-sensitive data that is fed straight into the tech giant’s systems,” which the authors said makes Sama workers uncomfortable. Several people interviewed for the report said they have seen footage shot with Ray-Ban Meta smart glasses that shows people having sex and using the bathroom.

“I saw a video where a man puts the glasses on the bedside table and leaves the room. Shortly afterwards, his wife comes in and changes her clothes,” an anonymous Sama employee reportedly said, per the machine translation.

Another anonymous employee said that they have seen users’ partners come out of the bathroom naked.

“You understand that it is someone’s private life you are looking at, but at the same time you are just expected to carry out the work,” an anonymous Sama employee reportedly said.

Meta confirms use of data annotators

In statements shared with the BBC on Wednesday, Meta confirmed that it “sometimes” shares content that users submit to the Meta AI generative AI chatbot with contractors for review, with “the purpose of improving people’s experience, as many other companies do.”

“This data is first filtered to protect people’s privacy,” the statement said, pointing to, as an example, blurring out faces in images.

Meta’s privacy policy for wearables says that photos and videos taken with its smart glasses are sent to Meta “when you turn on cloud processing on your AI Glasses, interact with the Meta AI service on your AI Glasses, or upload your media to certain services provided by Meta (i.e., Facebook or Instagram). You can change your choices about cloud processing of your Media at any time in Settings.”

The policy also says that video and audio from livestreams recorded with Ray-Ban Metas are sent to Meta, as are text transcripts and voice recordings created by Meta’s chatbot.

“We use machine learning and trained reviewers to process this data to improve, troubleshoot, and train our products. We share that information with third-party vendors and service providers to improve our products. You can access and delete recordings and related transcripts in the Meta AI App,” the policy says.

Meta’s broader privacy policy for the Meta AI chatbot adds: “In some cases, Meta will review your interactions with AIs, including the content of your conversations with or messages to AIs, and this review may be automated or manual (human).”

That policy also warns users against sharing “information that you don’t want the AIs to use and retain, such as information about sensitive topics.”

“When information is shared with AIs, the AIs will sometimes retain and use that information,” the Meta AI privacy policy says.

Notably, in August, Meta turned “Meta AI with camera” on by default unless a user turns off support for the “Hey Meta” voice command, per an email sent to users at the time. Meta spokesperson Albert Aydin told The Verge at the time that “photos and videos captured on Ray-Ban Meta are on your phone’s camera roll and not used by Meta for training.”

However, some Ray-Ban Meta users may not have read or understood the numerous privacy policies associated with Meta’s smart glasses.

Sama employees suggested that Ray-Ban Meta owners may be unaware that the devices are sometimes recording. Employees reportedly pointed to users recording their bank card or porn that they’re watching, seemingly inadvertently.

Meta’s smart glasses flash a red light when they are recording video or taking a photo, but there has been criticism that people may not notice the light or misinterpret its meaning.

“We see everything, from living rooms to naked bodies. Meta has that type of content in its databases. People can record themselves in the wrong way and not even know what they are recording,” an anonymous employee was quoted as saying.

When reached for comment by Ars Technica, a Sama representative shared a statement saying that Sama doesn’t “comment on specific client relationships or projects” but is GDPR and CCPA-compliant and uses “rigorously audited policies and procedures designed to protect all customer information, including personally identifiable information.”

Sama’s statement added:

This work is conducted in secure, access-controlled facilities. Personal devices are not permitted on production floors, and all team members undergo background checks and receive ongoing training in data protection, confidentiality, and responsible AI practices. Our teams receive living wages and full benefits, and have access to comprehensive wellness resources and on-site support.

Meta sued

The Swedish report has reignited concerns about the privacy of Meta’s smart glasses, including from the Information Commissioner’s Office, a UK data watchdog that has written to Meta about the report. The debate also comes as Meta is reportedly planning to add facial recognition to its Ray-Ban and Oakley-branded smart glasses “as soon as this year,” per a February report from The New York Times citing anonymous people “involved with the plans.”

The claims have also led to a proposed class-action lawsuit [PDF] filed yesterday against Meta and Luxottica of America, a subsidiary of Ray-Ban parent company EssilorLuxottica. The lawsuit challenges Meta’s slogan for the glasses, “designed for privacy, controlled by you,” saying:

No reasonable consumer would understand “designed for privacy, controlled by you” and similar promises like “built for your privacy” to mean that deeply personal footage from inside their homes would be viewed and catalogued by human workers overseas. Meta chose to make privacy the centerpiece of its pervasive marketing campaign while concealing the facts that reveal those promises to be false.

The lawsuit alleges that Meta has broken state consumer protection laws and seeks damages, punitive penalties, and an injunction requiring Meta to change business practices “to prevent or mitigate the risk of the consumer deception and violations of law.”

Ars Technica reached out to Meta for comment but didn’t hear back before publication. Meta has declined to comment on the lawsuit to other outlets.


Scharon is a Senior Technology Reporter at Ars Technica writing news, reviews, and analysis on consumer gadgets and services. She’s been reporting on technology for over 10 years, with bylines at Tom’s Hardware, Channelnomics, and CRN UK.


Meta to cut 5% of employees deemed unfit for Zuckerberg’s AI-fueled future

Anticipating that 2025 will be an “intense year” requiring rapid innovation, Mark Zuckerberg reportedly announced that Meta would be cutting 5 percent of its workforce—targeting “lowest performers.”

Bloomberg reviewed the internal memo explaining the cuts, which was posted to Meta’s internal Workplace forum Tuesday. In it, Zuckerberg confirmed that Meta was shifting its strategy to “move out low performers faster” so that Meta can hire new talent to fill those vacancies this year.

“I’ve decided to raise the bar on performance management,” Zuckerberg said. “We typically manage out people who aren’t meeting expectations over the course of a year, but now we’re going to do more extensive performance-based cuts during this cycle.”

Cuts will likely impact more than 3,600 employees, as Meta’s most recent headcount in September totaled about 72,000 employees. It may not be as straightforward as letting go anyone with an unsatisfactory performance review, as Zuckerberg said that any employee not currently meeting expectations could be spared if Meta is “optimistic about their future performance,” The Wall Street Journal reported.

Any employees affected will be notified by February 10 and receive “generous severance,” Zuckerberg’s memo promised.

This is the biggest round of cuts at Meta since 2023, when Meta laid off 10,000 employees during what Zuckerberg dubbed the “year of efficiency.” Those layoffs followed a prior round where 11,000 lost their jobs and Zuckerberg realized that “leaner is better.” He told employees in 2023 that a “surprising result” from reducing the workforce was “that many things have gone faster.”

“A leaner org will execute its highest priorities faster,” Zuckerberg wrote in 2023. “People will be more productive, and their work will be more fun and fulfilling. We will become an even greater magnet for the most talented people. That’s why in our Year of Efficiency, we are focused on canceling projects that are duplicative or lower priority and making every organization as lean as possible.”


Meta smart glasses can be used to dox anyone in seconds, study finds

To prevent anyone from being doxxed, the co-creators are not releasing the code, Nguyen said on social media site X. They did, however, outline how their disturbing tech works, and how shocked the random strangers used as test subjects were to discover how easily identifiable they are just from the smart glasses accessing information posted publicly online.

Nguyen and Ardayfio tested out their technology at a subway station “on unsuspecting people in the real world,” 404 Media noted. To demonstrate how the tech could be abused to trick people, the students even claimed to know some of the test subjects, seemingly using information gleaned through the glasses to make resonant references and feign acquaintance.

Dozens of test subjects were identified, the students claimed, although some results have been contested, 404 Media reported. To keep their face-scanning under the radar, the students covered up the light that automatically comes on when the Ray-Ban Meta smart glasses are recording, Ardayfio said on X.

Opt out of PimEyes now, students warn

For Nguyen and Ardayfio, the point of the project was to persuade people to opt out of invasive search engines to protect their privacy online. An attempt to use I-XRAY to identify 404 Media reporter Joseph Cox, for example, didn’t work because he’d opted out of PimEyes.

But while privacy is clearly important to the students and their demo video strove to remove identifying information, at least one test subject was “easily” identified anyway, 404 Media reported. That test subject couldn’t be reached for comment, 404 Media reported.

So far, neither Facebook nor Google has chosen to release similar technologies that they developed linking smart glasses to face search engines, The New York Times reported.


AR Startup Brilliant Labs Secures $3M Seed Funding from Oculus & Siri Co-founders

Brilliant Labs, an AR startup working to integrate AI into daily life, announced that it has raised $3 million in seed funding which it will use to expand its team and invest in R&D for its open-source, AI-powered smartglasses.

The funding round was led by Brendan Iribe, co-founder of Oculus, Adam Cheyer, co-founder of Siri, Eric Migicovsky, founder of Pebble, and Plug & Play Ventures, among others.

Founded in 2019, Brilliant Labs describes its design approach as “embodied intelligence.” Its one-eyed ‘Monocle’ smartglasses dev kit is an open-source device which began shipping in February 2023, offering up a single-lens design that’s supposed to clip onto existing eyewear. For now, Monocle boasts a six-hour battery life with a charging case, which includes fast charging technology.

Monocle | Image courtesy Brilliant Labs

Similar to Google Glass, Brilliant Labs’ Monocle serves up text via a single waveguide, doing things like letting you see important information while remaining present in the moment. Monocle also includes an embedded microphone, computer vision-ready camera, and hackable FPGA accelerator chip.

In addition to the latest funding round, Brilliant Labs also announced the launch of arGPT, the company’s first ChatGPT integration for Monocle, letting developers directly use the generative AI as well as build apps on top of arGPT.

“We believe that Generative AI is the key enabler for AR, so at Brilliant Labs, we’re building an open-source ecosystem to support developers and creatives reimagining the future, and Monocle is just the beginning. We’re excited to see what developers create with it,” said Bobak Tavangar, Founder and CEO of Brilliant Labs. “We’re thrilled to have the support of our investors as we usher in a new era of embodied intelligence – the intersection of AI and AR.”

Other investors in its seed funding round include Steve Sarowitz, founder of Paylocity and Chairman of Wayfarer Studios, Nirav Patel, former core team member at Oculus and founder of Framework, Francisco Tolmasky, member of the original iPhone team, and Moveon Technologies.


Want to know the difference between smartglasses and AR glasses? Check out our primer on what’s what (and why everyone is confused).


Google Discontinues Glass Enterprise Edition Smartglasses

Google Glass Enterprise Edition 2, the company’s work-focused version of its iconic but once maligned smartglasses, is being discontinued.

Google says in a device support FAQ that, starting March 15th, it will no longer sell Glass Enterprise 2, adding that it will only support the device until September 15th, 2023.

While the company says it’s not pushing out any more software for Glass Enterprise Edition after that date, its most recent system images will remain publicly available until at least April 1st, 2024.

Launched in 2017, Google Glass for enterprise was a revival of sorts, as the company had ceased production of the storied device in 2015.

Google Glass Explorer Edition | Image courtesy Alphabet

Starting in 2012, the company was hoping to seed the device among prosumers with its Glass Explorer Editions, although public backlash spawned the term “glasshole,” putting a severe dent in Google’s ambitions to launch a more consumer-focused version of the device.

Google hasn’t explained why it’s killing off Glass for enterprise. In response to PC Mag, a Google spokesperson left this comment:

“For years, we’ve been building AR into many Google products and we’ll continue to look at ways to bring new, innovative AR experiences across our product portfolio.”

To be fair, Google probably has bigger fish to fry, and the aging smartglasses platform may well be replaced sooner rather than later. Google said last summer it would be conducting real-world tests of its early AR prototypes, emphasizing things like real-time translation and AR turn-by-turn navigation.

There’s also the issue of emerging competition. Apple’s upcoming mixed reality (MR) headset is rumored to arrive sometime in mid-2023, while Meta is prepping multiple generations of its MR Quest headsets.

Granted, these MR headsets probably won’t be the same sort of workhorses, although many companies see MR headsets as a steppingstone in preparation for the sort of all-day AR glasses the industry is hoping to commercialize in the near future.

– – — – –

To be clear, Google Glass is a style of smartglasses and not an AR device as such; Glass provides a single heads-up display (HUD) that doesn’t place digital imagery naturally in the user’s perceived environment, as HoloLens 2 or Magic Leap 2 do, but rather flatly projects the sort of useful information you might also see on a smartwatch. You can learn more about the differences between AR headsets and smartglasses here.
