

Workers report watching Ray-Ban Meta-shot footage of people using the bathroom


Meta accused of “concealing the facts” about smart glasses users’ privacy.

A marketing image for Ray-Ban Meta smart glasses. Credit: Meta

Meta’s approach to user privacy is under renewed scrutiny following a Swedish report that employees of a Meta subcontractor have watched footage captured by Ray-Ban Meta smart glasses showing sensitive user content.

The workers are reportedly employed by Kenya-headquartered Sama, providing data annotation for Ray-Ban Metas.

The February report, a collaboration between Swedish newspapers Svenska Dagbladet and Göteborgs-Posten and Kenya-based freelance journalist Naipanoi Lepapa, is, per a machine translation, based on interviews with over 30 employees at various levels of Sama, including several people who work with video, image, and speech annotation for Meta’s AI systems. Some of the people interviewed have worked on projects other than Meta’s smart glasses. The report’s authors said they did not gain access to the materials that Sama workers handle or the area where workers perform data annotation. The report is also based on interviews with former US Meta employees who have reportedly witnessed live data annotation for several Meta projects.

The report pointed to, per the translation, a “stream of privacy-sensitive data that is fed straight into the tech giant’s systems,” and that makes Sama workers uncomfortable. The authors said that several people interviewed for the report said they have seen footage shot with Ray-Ban Meta smart glasses that shows people having sex and using the bathroom.

“I saw a video where a man puts the glasses on the bedside table and leaves the room. Shortly afterwards, his wife comes in and changes her clothes,” an anonymous Sama employee reportedly said, per the machine translation.

Another anonymous employee said that they have seen users’ partners come out of the bathroom naked.

“You understand that it is someone’s private life you are looking at, but at the same time you are just expected to carry out the work,” an anonymous Sama employee reportedly said.

Meta confirms use of data annotators

In statements shared with the BBC on Wednesday, Meta confirmed that it “sometimes” shares content that users submit to its Meta AI chatbot with contractors for review, with “the purpose of improving people’s experience, as many other companies do.”

“This data is first filtered to protect people’s privacy,” the statement said, pointing to, as an example, blurring out faces in images.

Meta’s privacy policy for wearables says that photos and videos taken with its smart glasses are sent to Meta “when you turn on cloud processing on your AI Glasses, interact with the Meta AI service on your AI Glasses, or upload your media to certain services provided by Meta (i.e., Facebook or Instagram). You can change your choices about cloud processing of your Media at any time in Settings.”

The policy also says that video and audio from livestreams recorded with Ray-Ban Metas are sent to Meta, as are text transcripts and voice recordings created by Meta’s chatbot.

“We use machine learning and trained reviewers to process this data to improve, troubleshoot, and train our products. We share that information with third-party vendors and service providers to improve our products. You can access and delete recordings and related transcripts in the Meta AI App,” the policy says.

Meta’s broader privacy policy for the Meta AI chatbot adds: “In some cases, Meta will review your interactions with AIs, including the content of your conversations with or messages to AIs, and this review may be automated or manual (human).”

That policy also warns users against sharing “information that you don’t want the AIs to use and retain, such as information about sensitive topics.”

“When information is shared with AIs, the AIs will sometimes retain and use that information,” the Meta AI privacy policy says.

Notably, in August, Meta enabled “Meta AI with camera” by default unless a user turns off support for the “Hey Meta” voice command, per an email sent to users at the time. Meta spokesperson Albert Aydin told The Verge then that “photos and videos captured on Ray-Ban Meta are on your phone’s camera roll and not used by Meta for training.”

However, some Ray-Ban Meta users may not have read or understood the numerous privacy policies associated with Meta’s smart glasses.

Sama employees suggested that Ray-Ban Meta owners may be unaware that the devices are sometimes recording, pointing to users who seemingly inadvertently recorded their bank cards or porn they were watching.

Meta’s smart glasses flash a red light when they are recording video or taking a photo, but there has been criticism that people may not notice the light or misinterpret its meaning.

“We see everything, from living rooms to naked bodies. Meta has that type of content in its databases. People can record themselves in the wrong way and not even know what they are recording,” an anonymous employee was quoted as saying.

When reached for comment by Ars Technica, a Sama representative shared a statement saying that Sama doesn’t “comment on specific client relationships or projects” but is GDPR and CCPA-compliant and uses “rigorously audited policies and procedures designed to protect all customer information, including personally identifiable information.”

Sama’s statement added:

This work is conducted in secure, access-controlled facilities. Personal devices are not permitted on production floors, and all team members undergo background checks and receive ongoing training in data protection, confidentiality, and responsible AI practices. Our teams receive living wages and full benefits, and have access to comprehensive wellness resources and on-site support.

Meta sued

The Swedish report has reignited concerns about the privacy of Meta’s smart glasses, including from the Information Commissioner’s Office, a UK data watchdog that has written to Meta about the report. The debate also comes as Meta is reportedly planning to add facial recognition to its Ray-Ban and Oakley-branded smart glasses “as soon as this year,” per a February report from The New York Times citing anonymous people “involved with the plans.”

The claims have also led to a proposed class-action lawsuit [PDF] filed yesterday against Meta and Luxottica of America, a subsidiary of Ray-Ban parent company EssilorLuxottica. The lawsuit challenges Meta’s slogan for the glasses, “designed for privacy, controlled by you,” saying:

No reasonable consumer would understand “designed for privacy, controlled by you” and similar promises like “built for your privacy” to mean that deeply personal footage from inside their homes would be viewed and catalogued by human workers overseas. Meta chose to make privacy the centerpiece of its pervasive marketing campaign while concealing the facts that reveal those promises to be false.

The lawsuit alleges that Meta has broken state consumer protection laws and seeks damages, punitive penalties, and an injunction requiring Meta to change business practices “to prevent or mitigate the risk of the consumer deception and violations of law.”

Ars Technica reached out to Meta for comment but didn’t hear back before publication. Meta has declined to comment on the lawsuit to other outlets.


Scharon is a Senior Technology Reporter at Ars Technica writing news, reviews, and analysis on consumer gadgets and services. She’s been reporting on technology for over 10 years, with bylines at Tom’s Hardware, Channelnomics, and CRN UK.



Ring cancels Flock deal after dystopian Super Bowl ad prompts mass outrage

Both statements confirmed that the integration never launched and that no Ring customers’ videos were ever sent to Flock.

Ring did not credit users’ privacy concerns for its change of heart. Instead, the company claimed that a joint decision was made “following a comprehensive review” in which Ring “determined the planned Flock Safety integration would require significantly more time and resources than anticipated.”

Separately, Flock said that “we believe this decision allows both companies to best serve their respective customers and communities.”

The only hint that Ring gave users that their concerns had been heard came in the last line of its blog, which said, “We’ll continue to carefully evaluate future partnerships to ensure they align with our standards for customer trust, safety, and privacy.”

Sharing his views on X and Bluesky, John Scott-Railton, a senior cybersecurity researcher at the Citizen Lab, joined critics calling Ring’s statement insufficient. He posted an image of the ad frame that Markey found creepy next to a statement from Ring, writing, “On the left? A picture of mass surveillance from #Ring’s ad. On the right? A ring [spokesperson] saying that they are not doing mass surveillance. The company cannot have it both ways.”

Ring’s statements so far do not “acknowledge the real issue,” Scott-Railton said, which is privacy risks. For Ring, it seemed like a missed opportunity to discuss or introduce privacy features to reassure concerned users, he suggested, noting the backlash showed “Americans want more control of their privacy right now” and “are savvy enough to see through sappy dog pics.”

“Stop trying to build a surveillance dystopia consumers didn’t ask for” and “focus on shipping good, private products,” Scott-Railton said.

He also suggested that lawmakers should take note of the grassroots support, which could help pass laws pushing back on mass surveillance. That could not only block a potential future partnership with Flock but also stop Ring from becoming the next Flock.

“Ring communications not acknowledging the lesson they just got publicly taught is a bad sign that they hope this goes away,” Scott-Railton said.



Google recovers “deleted” Nest video in high-profile abduction case

Suspect attempts to cover the camera with a plant.

In statements made by investigators, the video was apparently “recovered from residual data located in backend systems.” It’s unclear how long such data is retained or how easy it is for Google to access it. Some reports claim that it took several days for Google to recover the data.

In large-scale enterprise storage solutions, “deleted” for the user doesn’t always mean that the data is gone. Data that is no longer needed is often compressed and overwritten only as needed. In the meantime, it may be possible to recover the data. That’s something a company like Google could decide to do on its own, or it could be compelled to perform the recovery by a court order. In the Guthrie case, it sounds like Google was voluntarily cooperating with the investigation, which makes sense. Publishing video of the alleged perpetrator could be a major breakthrough as investigators seek help from the public.
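The retention behavior described above can be illustrated with a minimal sketch. This is not Google’s actual system; the class and method names are hypothetical. It models a “soft delete,” where expiry hides data from the user-facing API while the raw bytes linger in backend storage until compaction eventually reuses the space:

```python
# Illustrative sketch (not Google's actual system): a "soft delete" marks
# data as expired rather than erasing it, so the bytes can linger in
# backend storage until they are eventually overwritten.
import time


class SoftDeleteStore:
    def __init__(self):
        self._blobs = {}  # key -> (data, expires_at)

    def put(self, key, data, ttl_seconds):
        self._blobs[key] = (data, time.time() + ttl_seconds)

    def get(self, key):
        """What a user sees: expired data appears to be gone."""
        entry = self._blobs.get(key)
        if entry is None or time.time() > entry[1]:
            return None
        return entry[0]

    def recover(self, key):
        """What a backend operator could still do: the raw bytes remain
        until storage is compacted and the slot is reused."""
        entry = self._blobs.get(key)
        return entry[0] if entry else None


store = SoftDeleteStore()
store.put("clip1", b"video-bytes", ttl_seconds=-1)  # already expired
assert store.get("clip1") is None                   # user view: deleted
assert store.recover("clip1") == b"video-bytes"     # residual data persists
```

Under this model, “expired after three hours” and “recoverable nine days later” are not contradictory: expiry only governs the user-facing view.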

It’s not your cloud

There is a temptation to ascribe some malicious intent to Google’s video storage setup. After all, this video expired after three hours, but here it is nine days later. That feels a bit suspicious on the surface, particularly for a company that is so focused on training AI models that feed on video.

We have previously asked Google to explain how it uses Nest to train AI models, and the company claims it does not incorporate user videos into training data, but the way you interact with the service and with your videos is fair game. “We may use your inputs, including prompts and feedback, usage, and outputs from interactions with AI features to further research, tune, and train Google’s generative models, machine learning technologies, and related products and services,” Google said.



Upgraded Google safety tools can now find and remove more of your personal info

Do you feel popular? There are people on the Internet who want to know all about you! Unfortunately, they don’t have the best of intentions, but Google has some handy tools to address that, and they’ve gotten an upgrade today. The “Results About You” tool can now detect and remove more of your personal information. Plus, the tool for removing non-consensual explicit imagery (NCEI) is faster to use. All you have to do is tell Google your personal details first—that seems safe, right?

With today’s upgrade, Results About You gains the ability to find and remove pages that include ID numbers like your passport, driver’s license, and Social Security numbers. You can access the option to add these to Google’s ongoing scans from the settings in Results About You. Just click the ID numbers section to enable detection.

Naturally, Google has to know what it’s looking for to remove it. So you need to provide at least part of those numbers. Google asks for the full driver’s license number, which is fine, as it’s not as sensitive. For your passport and SSN, you only need the last four digits, which is enough for Google to find the full numbers on webpages.

ID number results detected.

The NCEI tool is geared toward hiding real, explicit images as well as deepfakes and other types of artificial sexualized content. This kind of content is rampant on the Internet right now due to the rapid rise of AI. What used to require Photoshop skills is now just a prompt away, and some AI platforms hardly do anything to prevent it.



Millions of people imperiled through sign-in links sent by SMS

“We argue that these attacks are straightforward to test, verify, and execute at scale,” the researchers, from the universities of New Mexico, Arizona, Louisiana, and the firm Circle, wrote. “The threat model can be realized using consumer-grade hardware and only basic to intermediate Web security knowledge.”

SMS messages are sent unencrypted. In past years, researchers have unearthed public databases of previously sent texts that contained authentication links and private details, including people’s names and addresses. One such discovery, from 2019, included millions of stored sent and received text messages over the years between a single business and its customers. It included usernames and passwords, university finance applications, and marketing messages with discount codes and job alerts.

Despite the known insecurity, the practice continues to flourish. For ethical reasons, the researchers behind the study had no way to capture its true scale, because doing so would require bypassing access controls, however weak they were. As a lens offering only a limited view into the process, the researchers examined public SMS gateways. These are typically ad-based websites that let people use a temporary number to receive texts without revealing their phone number.

With such a limited view of SMS-sent authentication messages, the researchers were unable to measure the true scope of the practice and the security and privacy risks it posed. Still, their findings were notable.

The researchers collected 332,000 unique SMS-delivered URLs extracted from 33 million texts sent to more than 30,000 phone numbers. They found numerous instances of security and privacy threats to the people receiving them. Among those, the researchers said, messages originating from 701 endpoints, sent on behalf of 177 services, exposed “critical personally identifiable information.” The root cause of the exposure was weak authentication based on tokenized links for verification. Anyone with the link could obtain users’ personal information—including Social Security numbers, dates of birth, bank account numbers, and credit scores—from these services.
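A minimal sketch shows why tokenized links are such weak authentication. This is a hypothetical toy (the domain, function names, and account data are invented, not from the study): possession of the token is the only credential, so anyone who observes the cleartext SMS can replay the URL and retrieve the same data as the legitimate user:

```python
# Hypothetical sketch of the weakness described above: a "magic link"
# whose token alone grants access, with no second factor checked.
import secrets

accounts = {"alice": {"ssn_last4": "1234", "dob": "1990-01-01"}}
tokens = {}  # token -> username


def issue_login_link(user):
    """Service side: mint a bearer token and embed it in a URL
    that will be sent over SMS in cleartext."""
    token = secrets.token_urlsafe(16)
    tokens[token] = user
    return f"https://example.test/login?t={token}"


def handle_request(url):
    """Weak design: whoever presents the token gets the account data."""
    token = url.split("t=")[1]
    user = tokens.get(token)
    return accounts[user] if user else None


link = issue_login_link("alice")
# An eavesdropper who reads the SMS gets the same data as the user:
assert handle_request(link) == {"ssn_last4": "1234", "dob": "1990-01-01"}
```

Because SMS is unencrypted and often logged by intermediaries, a bearer-token link like this is only as secret as the least-trusted system that ever handled the message.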



“Ungentrified” Craigslist may be the last real place on the Internet


People still use Craigslist to find jobs, love, and even to cast creative projects.

The writer and comedian Megan Koester got her first writing job, reviewing Internet pornography, from a Craigslist ad she responded to more than 15 years ago. Several years after that, she used the listings website to find the rent-controlled apartment where she still lives today. When she wanted to buy property, she scrolled through Craigslist and found a parcel of land in the Mojave Desert. She built a dwelling on it (never mind that she’d later discover it was unpermitted) and furnished it entirely with finds from Craigslist’s free section, right down to the laminate flooring, which had previously been used by a production company.

“There’s so many elements of my life that are suffused with Craigslist,” says Koester, 42, whose Instagram account is dedicated, at least in part, to cataloging screenshots of what she has dubbed “harrowing images” from the site’s free section; on the day we speak, she’s wearing a cashmere sweater that cost her nothing, besides the faith it took to respond to an ad with no pictures. “I’m ride or die.”

Koester is one of untold numbers of Craigslist aficionados, many of them in their thirties and forties, who not only still use the old-school classifieds site but also consider it an essential, if anachronistic, part of their everyday lives. It’s a place where anonymity is still possible, where money doesn’t have to be exchanged, and where strangers can make meaningful connections—for romantic pursuits, straightforward transactions, and even to cast unusual creative projects, including experimental TV shows like The Rehearsal on HBO and Amazon Freevee’s Jury Duty. Unlike flashier online marketplaces such as Depop and its parent company, Etsy, or Facebook Marketplace, Craigslist doesn’t use algorithms to track users’ moves and predict what they want to see next. It doesn’t offer public profiles, rating systems, or “likes” and “shares” to dole out like social currency; as a result, Craigslist effectively disincentivizes clout-chasing and virality-seeking—behaviors that are often rewarded on platforms like TikTok, Instagram, and X. It’s a utopian vision of a much earlier, far more earnest Internet.

“The real freaks come out on Craigslist,” says Koester. “There’s a purity to it.” Even still, the site is a little tamer than it used to be: Craigslist shut down its “casual encounters” ads and took its personals section offline in 2018, after Congress passed legislation that would’ve put the company on the hook for listings from potential sex traffickers. The “missed connections” section, however, remains active.

The site is what Jessa Lingel, an associate professor of communication at the University of Pennsylvania, has called the “ungentrified” Internet. If that’s the case, then online gentrification has only accelerated in recent years, thanks in part to the proliferation of AI. Even Wikipedia and Reddit, visually basic sites created in the early aughts and with an emphasis similar to Craigslist’s on fostering communities, have both incorporated their own versions of AI tools.

Some might argue that Craigslist, by contrast, is outdated; an article published in this magazine more than 15 years ago called it “underdeveloped” and “unpredictable.” But to the site’s most devoted adherents, that’s precisely its appeal.

“I think Craigslist is having a revival,” says Kat Toledo, an actor and comedian who regularly uses the site to hire cohosts for her LA-based stand-up show, Besitos. “When something is structured so simply and really does serve the community, and it doesn’t ask for much? That’s what survives.”

Toledo started using Craigslist in the 2000s and never stopped. Over the years, she has turned to the site to find romance, housing, and even her current job as an assistant to a forensic psychologist. She’s worked there full-time for nearly two years, defying Craigslist’s reputation as a supplier of potentially sketchy one-off gigs. The stigma of the website, sometimes synonymous with scammers and, in more than one instance, murderers, can be hard to shake. “If I’m not doing a good job,” Toledo says she jokes to her employer, “just remember you found me on Craigslist.”

But for Toledo, the site’s “random factor”—the way it facilitates connection with all kinds of people she might not otherwise interact with—is also what makes it so exciting. Respondents to her ads seeking paid cohosts tend to be “people who almost have nothing to lose, but in a good way, and everything to gain,” she says. There was the born-again Christian who performed a reenactment of her religious awakening and the poet who insisted on doing Toledo’s makeup; others, like the commercial actor who started crying on the phone beforehand, never made it to the stage.

It’s difficult to quantify just how many people actively use Craigslist and how often they click through its listings. The for-profit company is privately owned and doesn’t share data about its users. (Craigslist also didn’t respond to a request for comment.) But according to the Internet data company Similarweb, Craigslist draws more than 105 million monthly users, making it the 40th most popular website in the United States—not too shabby for a company that doesn’t spend any money on advertising or marketing. And though Craigslist’s revenue has reportedly plummeted over the past half-dozen years, based on an estimate from an industry analytics firm, it remains enormously profitable. (The company generates revenue by charging a modest fee to publish ads for gigs, certain types of goods, and in some cities, apartments.)

“It’s not a perfect platform by any means, but it does show that you can make a lot of money through an online endeavor that just treats users like they have some autonomy and grants everybody a degree of privacy,” says Lingel. A longtime Craigslist user, she began researching the site after wondering, “Why do all these web 2.0 companies insist that the only way for them to succeed and make money is off the back of user data? There must be other examples out there.”

In her book, Lingel traces the history of the site, which began in 1995 as an email list for a couple hundred San Francisco Bay Area locals to share events, tech news, and job openings. By the end of the decade, engineer Craig Newmark’s humble experiment had evolved into a full-fledged company with an office, a domain name, and a handful of hires. In true Craigslist fashion, Newmark even recruited the company’s CEO, Jim Buckmaster, from an ad he posted to the site, initially seeking a programmer.

The two have gone to great lengths to wrest the company away from corporate interests. When they suspected a looming takeover attempt from eBay, which had purchased a minority stake in Craigslist from a former employee in 2004, Newmark and Buckmaster spent roughly a decade battling the tech behemoth in court. The litigation ended in 2015, with Craigslist buying back its shares and regaining control.

“They are in lockstep about their early ’90s Internet values,” says Lingel, who credits Newmark and Buckmaster with Craigslist’s long-held aesthetic and ethos: simplicity, privacy, and accessibility. “As long as they’re the major shareholders, that will stay that way.”

Craigslist’s refusal to “sell out,” as Koester puts it, is all the more reason to use it. “Not only is there a purity to the fan base or the user base, there’s a purity to the leadership that they’re uncorruptible basically,” says Koester. “I’m gonna keep looking at Craigslist until I die.” She pauses, then shudders: “Or, until Craig dies, I guess.”

This story originally appeared on wired.com.


Wired.com is your essential daily guide to what’s next, delivering the most original and complete take you’ll find anywhere on innovation’s impact on technology, science, business and culture.



The nation’s strictest privacy law just took effect, to data brokers’ chagrin

Californians are getting a new, supercharged way to stop data brokers from hoarding and selling their personal information, as a recently enacted law that’s among the strictest in the nation took effect at the beginning of the year.

According to the California Privacy Protection Agency, more than 500 companies actively scour all sorts of sources for scraps of information about individuals, then package and store it to sell to marketers, private investigators, and others.

The nonprofit Consumer Watchdog said in 2024 that brokers trawl automakers, tech companies, junk-food restaurants, device makers, and others for financial info, purchases, family situations, eating, exercising, travel, entertainment habits, and just about any other imaginable information belonging to millions of people.

Scrubbing your data made easy

Two years ago, California’s Delete Act took effect. It required data brokers to provide residents with a means to obtain a copy of all data pertaining to them and to demand that such information be deleted. Unfortunately, Consumer Watchdog found that only 1 percent of Californians exercised these rights in the first 12 months after the law went into effect. A chief reason: Residents were required to file a separate demand with each broker. With hundreds of companies selling data, the burden was too onerous for most residents to take on.

On January 1, a new law known as DROP (Delete Request and Opt-out Platform) took effect. DROP allows California residents to register a single demand for their data to be deleted and no longer collected in the future. CalPrivacy, the state’s privacy agency, then forwards that demand to all registered brokers.



Browser extensions with 8 million users collect extended AI conversations

Besides ChatGPT, Claude, and Gemini, the extensions harvest all conversations from Copilot, Perplexity, DeepSeek, Grok, and Meta AI. Koi said the full description of the data captured includes:

  • Every prompt a user sends to the AI
  • Every response received
  • Conversation identifiers and timestamps
  • Session metadata
  • The specific AI platform and model used

The executor script runs independently of the VPN, ad-blocking, and other core functionality. That means the conversation collection continues even when a user toggles off VPN networking, AI protection, ad blocking, or other functions. The only way to stop the harvesting is to disable the extension in the browser’s settings or uninstall it.

Koi said it first discovered the conversation harvesting in Urban VPN Proxy, a VPN routing extension that lists “AI protection” as one of its benefits. The data collection began in early July with the release of version 5.5.0.

“Anyone who used ChatGPT, Claude, Gemini, or the other targeted platforms while Urban VPN was installed after July 9, 2025 should assume those conversations are now on Urban VPN’s servers and have been shared with third parties,” the company said. “Medical questions, financial details, proprietary code, personal dilemmas—all of it, sold for ‘marketing analytics purposes.’”

Following that discovery, the security firm uncovered seven additional extensions with identical AI harvesting functionality. Four of the extensions are available in the Chrome Web Store. The other four are on the Edge add-ons page. Collectively, they have been installed more than 8 million times.

They are:

Chrome Store

  • Urban VPN Proxy: 6 million users
  • 1ClickVPN Proxy: 600,000 users
  • Urban Browser Guard: 40,000 users
  • Urban Ad Blocker: 10,000 users

Edge Add-ons:

  • Urban VPN Proxy: 1.32 million users
  • 1ClickVPN Proxy: 36,459 users
  • Urban Browser Guard: 12,624 users
  • Urban Ad Blocker: 6,476 users

Read the fine print

The extensions come with conflicting messages about how they handle bot conversations, which often contain deeply personal information about users’ physical and mental health, finances, personal relationships, and other sensitive information that could be a gold mine for marketers and data brokers. The Urban VPN Proxy in the Chrome Web Store, for instance, lists “AI protection” as a benefit. It goes on to say:



Google will end dark web reports that alerted users to leaked data

As Google admits in the email alert, its dark web scans didn’t offer much help. “Feedback showed that it did not provide helpful next steps,” Google said of the service. Here’s the full text of the email.

Google dark web email

Credit: Google

With other types of personal data alerts provided by the company, it has the power to do something. For example, you can have Google remove pages from search that list your personal data. Google doesn’t run anything on the dark web, though, so all it can do is remind you that your data is being passed around in one of the shadier corners of the Internet.

The shutdown begins on January 15, when Google will stop conducting new scans for user data on the dark web. Past data will no longer be available as of February 16, 2026. Google says it will delete all past reports at that time. However, users can remove their monitoring profile earlier in the account settings. This change does not impact any of Google’s other privacy reports.

The good news is that the best ways to protect your personal data from being shuffled around the dark web are the same ones that keep you safe on the open web. Google suggests always using two-step verification, and tools like passkeys and Google’s password checkup can ensure you don’t accidentally reuse a compromised password. Stay safe out there.



Flock haters cross political divides to remove error-prone cameras

“People should care because this could be you,” White said. “This is something that police agencies are now using to document and watch what you’re doing, where you’re going, without your consent.”

Haters cross political divides to fight Flock

Currently, Flock’s reach is broad, “providing services to 5,000 police departments, 1,000 businesses, and numerous homeowners associations across 49 states,” lawmakers noted. Additionally, in October, Flock partnered with Amazon, allowing police to request Ring camera footage and widening Flock’s lens further.

However, Flock’s reach notably doesn’t extend into certain cities and towns in Arizona, Colorado, New York, Oregon, Tennessee, Texas, and Virginia, following successful local bids to end Flock contracts. These local fights have only just started as groups learn from each other, Sarah Hamid, EFF’s director of strategic campaigns, told Ars.

“Several cities have active campaigns underway right now across the country—urban and rural, in blue states and red states,” Hamid said.

A Flock spokesperson told Ars that the growing effort to remove cameras “remains an extremely small percentage of communities that consider deploying Flock technology (low single digital percentages).” To keep Flock’s cameras on city streets, Flock attends “hundreds of local community meetings and City Council sessions each month, and the vast majority of those contracts are accepted,” Flock’s spokesperson said.

Hamid challenged Flock’s “characterization of camera removals as isolated incidents,” though, noting “that doesn’t reflect what we’re seeing.”

“The removals span multiple states and represent different organizing strategies—some community-led, some council-initiated, some driven by budget constraints,” Hamid said.

Most recently, city officials voted to remove Flock cameras this fall in Sedona, Arizona.

A 72-year-old retiree, Sandy Boyce, helped fuel the local movement there after learning that Sedona had “quietly” renewed its Flock contract, NBC News reported. She felt enraged as she imagined her tax dollars continuing to support a camera system tracking her movements without her consent, she told NBC News.

Flock haters cross political divides to remove error-prone cameras Read More »

ring-cameras-are-about-to-get-increasingly-chummy-with-law-enforcement

Ring cameras are about to get increasingly chummy with law enforcement


Amazon’s Ring partners with company whose tech has reportedly been used by ICE.

Ring’s Outdoor Cam Pro. Credit: Amazon

Law enforcement agencies will soon have easier access to footage captured by Amazon’s Ring smart cameras. In a partnership announced this week, Amazon will allow approximately 5,000 local law enforcement agencies to request access to Ring camera footage via surveillance platforms from Flock Safety. Ring’s cooperation with law enforcement and the reported use of Flock technologies by federal agencies, including US Immigration and Customs Enforcement (ICE), have resurfaced privacy concerns that have followed the devices for years.

According to Flock’s announcement, its Ring partnership allows local law enforcement members to use Flock software “to send a direct post in the Ring Neighbors app with details about the investigation and request voluntary assistance.” Requests must include “specific location and timeframe of the incident, a unique investigation code, and details about what is being investigated,” and users can look at the requests anonymously, Flock said.

“Any footage a Ring customer chooses to submit will be securely packaged by Flock and shared directly with the requesting local public safety agency through the FlockOS or Flock Nova platform,” the announcement reads.

Flock said its local law enforcement users will gain access to Ring Community Requests in “the coming months.”

A flock of privacy concerns

Outside its software platforms, Flock is known for license plate recognition cameras. Flock customers can also search footage from Flock cameras using descriptors to find people, such as “man in blue shirt and cowboy hat.” Besides law enforcement agencies, Flock says 6,000 communities and 1,000 businesses use its products.

For years, privacy advocates have warned against companies like Flock.

This week, US Sen. Ron Wyden (D-Ore.) sent a letter [PDF] to Flock CEO Garrett Langley saying that ICE’s Homeland Security Investigations (HSI), the Secret Service, and the US Navy’s Criminal Investigative Service have had access to footage from Flock’s license plate cameras.

“I now believe that abuses of your product are not only likely but inevitable and that Flock is unable and uninterested in preventing them,” Wyden wrote.

In August, Jay Stanley, senior policy analyst for the ACLU Speech, Privacy, and Technology Project, wrote that “Flock is building a dangerous, nationwide mass-surveillance infrastructure.” Stanley pointed to ICE using Flock’s network of cameras, as well as Flock’s efforts to build a people lookup tool with data brokers.

Matthew Guariglia, senior policy analyst at the Electronic Frontier Foundation (EFF), told Ars via email that Flock is a “mass surveillance tool” that “has increasingly been used to spy on both immigrants and people exercising their First Amendment-protected rights.”

Flock has earned this reputation among privacy advocates through its own cameras, not Ring’s.

An Amazon spokesperson told Ars Technica that only local public safety agencies will be able to make Community Requests via Flock software, and that requests will also show the name of the agency making the request.

A Flock spokesperson told Ars:

Flock does not currently have any contracts with any division of [the US Department of Homeland Security], including ICE. The Ring Community Requests process through Flock is only available for local public safety agencies for specific, active investigations. All requests are time and geographically-bound. Ring users can choose to share relevant footage or ignore the request.

Flock’s rep added that all activity within FlockOS and Flock Nova is “permanently recorded in a comprehensive CJIS-compliant audit trail for unalterable custody tracking,” referring to a set of standards created by the FBI’s Criminal Justice Information Services division.

But there’s still concern that federal agencies will end up accessing Ring footage through Flock. Guariglia told Ars:

Even without formal partnerships with federal authorities, data from these surveillance companies flow to agencies like ICE through local law enforcement. Local and state police have run more than 4,000 Flock searches on behalf of federal authorities or with a potential immigration focus, reporting has found. Additionally, just this month, it became clear that Texas police searched 83,000 Flock cameras in an attempt to prosecute a woman for her abortion and then tried to cover it up.

Ring cozies up to the law

This week’s announcement shows Amazon, which acquired Ring in 2018, increasingly positioning its consumer cameras as a law enforcement tool. After years of cops using Ring footage, Amazon last year said that it would stop letting police request Ring footage—unless it was an “emergency”—only to reverse course about 18 months later by allowing police to request Ring footage through a Flock rival, Axon.

While announcing Ring’s deals with Flock and Axon, Ring founder and CEO Jamie Siminoff claimed that the partnerships would help Ring cameras keep neighborhoods safe. But there’s doubt as to whether people buy Ring cameras to protect their neighborhood.

“Ring’s new partnership with Flock shows that the company is more interested in contributing to mounting authoritarianism than servicing the specific needs of their customers,” Guariglia told Ars.

Interestingly, Ring initiated conversations about a deal with Flock, Langley told CNBC.

Flock says that its cameras don’t use facial recognition, which has been criticized for racial biases. But local law enforcement agencies using Flock will soon have access to footage from Ring cameras with facial recognition. In a conversation with The Washington Post this month, Calli Schroeder, senior counsel at the consumer advocacy and policy group Electronic Privacy Information Center, described the new feature for Ring cameras as “invasive for anyone who walks within range of” a Ring doorbell, since they likely haven’t consented to facial recognition being used on them.

Amazon, for its part, has mostly pushed the burden of ensuring responsible facial recognition use to its customers. Schroeder shared concern with the Post that Ring’s facial recognition data could end up being shared with law enforcement.

Some people who are perturbed about Ring deepening its ties with law enforcement have complained online.

“Inviting big brother into the system. Screw that,” a user on the Ring subreddit said this week.

Another Reddit user said: “And… I’m gone. Nope, NO WAY IN HELL. Goodbye, Ring. I’ll be switching to a UniFi[-brand] system with 100 percent local storage. You don’t get my money any more. This is some 1984 BS …”

Privacy concerns are also exacerbated by Ring’s past, as the company has previously failed to meet users’ privacy expectations. In 2023, Ring agreed to pay $5.8 million to settle claims that employees illegally spied on Ring customers.

Amazon and Flock say their collaboration will only involve voluntary customers and local law enforcement agencies. But there’s still reason to be concerned about the implications of people sending doorbell and personal camera footage to law enforcement via platforms that are reportedly widely used by federal agencies for deportation purposes. Combined with the privacy issues that Ring has already faced for years, it’s not hard to see why some feel that Amazon scaling up Ring’s association with any type of law enforcement is unacceptable.

And it appears that Amazon and Flock would both like Ring customers to opt in when possible.

“It will be turned on for free for every customer, and I think all of them will use it,” Langley told CNBC.


Scharon is a Senior Technology Reporter at Ars Technica writing news, reviews, and analysis on consumer gadgets and services. She’s been reporting on technology for over 10 years, with bylines at Tom’s Hardware, Channelnomics, and CRN UK.

Ring cameras are about to get increasingly chummy with law enforcement Read More »

hackers-can-steal-2fa-codes-and-private-messages-from-android-phones

Hackers can steal 2FA codes and private messages from Android phones

In the second step, Pixnapping performs graphical operations on individual pixels that the targeted app sent to the rendering pipeline. These operations choose the coordinates of target pixels the app wants to steal and begin to check if the color of those coordinates is white or non-white.

“Suppose, for example, [the attacker] wants to steal a pixel that is part of the screen region where a 2FA character is known to be rendered by Google Authenticator,” Wang said. “This pixel is either white (if nothing was rendered there) or non-white (if part of a 2FA digit was rendered there). Then, conceptually, the attacker wants to cause some graphical operations whose rendering time is long if the target victim pixel is non-white and short if it is white. The malicious app does this by opening some malicious activities (i.e., windows) in front of the victim app that was opened in Step 1.”

The third step measures the amount of time required at each coordinate. By combining the times for each one, the attack can rebuild the images sent to the rendering pipeline one pixel at a time.
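The pixel-by-pixel reconstruction described above can be sketched in a few lines. This is an illustrative toy, not the researchers' implementation: the function names, the timing values, and the fixed threshold are all invented for the example. The core idea it shows is that repeated rendering-time samples per coordinate get thresholded into a white/non-white decision, and those decisions are assembled into a binary image.

```python
import statistics

# Illustrative sketch of the reconstruction step: timings maps each
# (x, y) coordinate to a list of rendering-time samples (slow rendering
# implies a non-white pixel). The threshold and values are hypothetical.

def classify_pixel(samples_ns, threshold_ns):
    """Median-based decision: longer rendering time => non-white (1)."""
    return statistics.median(samples_ns) > threshold_ns

def rebuild_image(timings, width, height, threshold_ns):
    """Rebuild a binary image, one pixel decision at a time."""
    image = [[0] * width for _ in range(height)]
    for (x, y), samples in timings.items():
        image[y][x] = 1 if classify_pixel(samples, threshold_ns) else 0
    return image

# Toy 2x1 "screen": pixel (0, 0) rendered slowly, so it is non-white.
timings = {(0, 0): [900, 950, 910], (1, 0): [100, 120, 110]}
print(rebuild_image(timings, 2, 1, threshold_ns=500))  # [[1, 0]]
```

In the real attack, each decision costs measurable wall-clock time, which is why the sample count per pixel matters so much for the 2FA scenario discussed next.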

The amount of time required to perform the attack depends on several variables, including how many coordinates need to be measured. In some cases, there’s no hard deadline for obtaining the information the attacker wants to steal. In other cases—such as stealing a 2FA code—every second counts, since each one is valid for only 30 seconds. In the paper, the researchers explained:

To meet the strict 30-second deadline for the attack, we also reduce the number of samples per target pixel to 16 (compared to the 34 or 64 used in earlier attacks) and decrease the idle time between pixel leaks from 1.5 seconds to 70 milliseconds. To ensure that the attacker has the full 30 seconds to leak the 2FA code, our implementation waits for the beginning of a new 30-second global time interval, determined using the system clock.

… We use our end-to-end attack to leak 100 different 2FA codes from Google Authenticator on each of our Google Pixel phones. Our attack correctly recovers the full 6-digit 2FA code in 73%, 53%, 29%, and 53% of the trials on the Pixel 6, 7, 8, and 9, respectively. The average time to recover each 2FA code is 14.3, 25.8, 24.9, and 25.3 seconds for the Pixel 6, Pixel 7, Pixel 8, and Pixel 9, respectively. We are unable to leak 2FA codes within 30 seconds using our implementation on the Samsung Galaxy S25 device due to significant noise. We leave further investigation of how to tune our attack to work on this device to future work.
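The paper's trick of waiting "for the beginning of a new 30-second global time interval, determined using the system clock" works because TOTP codes change on fixed global boundaries. A minimal sketch of that alignment step, with invented function names (the researchers' actual code is not public in this excerpt):

```python
import time

# Hypothetical sketch: align a measurement to the start of a fresh
# 30-second TOTP interval so the full window is available.

def seconds_until_next_window(now, period=30):
    """Seconds remaining until the next global `period`-second boundary."""
    return period - (now % period)

def wait_for_totp_window(period=30):
    """Sleep until a new interval begins, then return its interval index."""
    time.sleep(seconds_until_next_window(time.time(), period))
    return int(time.time() // period)
```

For example, at 95 seconds past the epoch, the current window started at 90, so the next boundary is 25 seconds away; starting the pixel leak at that boundary guarantees the code on screen stays valid for the whole attack budget.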

In an email, a Google representative wrote, “We issued a patch for CVE-2025-48561 in the September Android security bulletin, which partially mitigates this behavior. We are issuing an additional patch for this vulnerability in the December Android security bulletin. We have not seen any evidence of in-the-wild exploitation.”

Hackers can steal 2FA codes and private messages from Android phones Read More »