data privacy


Shopping app Temu is “dangerous malware,” spying on your texts, lawsuit claims

“Cleverly hidden spyware” —

Temu “surprised” by the lawsuit, plans to “vigorously defend” itself.

A person is holding a package from Temu.

Temu—the Chinese shopping app that has rapidly grown so popular in the US that even Amazon is reportedly trying to copy it—is “dangerous malware” that’s secretly monetizing a broad swath of unauthorized user data, Arkansas Attorney General Tim Griffin alleged in a lawsuit filed Tuesday.

Griffin cited research and media reports exposing Temu’s allegedly nefarious design, which “purposely” allows Temu to “gain unrestricted access to a user’s phone operating system, including, but not limited to, a user’s camera, specific location, contacts, text messages, documents, and other applications.”

“Temu is designed to make this expansive access undetected, even by sophisticated users,” Griffin’s complaint said. “Once installed, Temu can recompile itself and change properties, including overriding the data privacy settings users believe they have in place.”

Griffin fears that Temu is capable of accessing virtually all data on a person’s phone, exposing both users and non-users to extreme privacy and security risks. It appears that anyone texting or emailing someone with the shopping app installed risks Temu accessing private data, Griffin’s suit claimed, which Temu then allegedly monetizes by selling it to third parties, “profiting at the direct expense” of users’ privacy rights.

“Compounding” risks is the possibility that Temu’s Chinese owners, PDD Holdings, are legally obligated to share data with the Chinese government, the lawsuit said, due to Chinese “laws that mandate secret cooperation with China’s intelligence apparatus regardless of any data protection guarantees existing in the United States.”

Griffin’s suit cited an extensive forensic investigation into Temu that Grizzly Research—which analyzes publicly traded companies to inform investors—published last September. In its report, Grizzly Research alleged that PDD Holdings is a “fraudulent company” and that “Temu is cleverly hidden spyware that poses an urgent security threat to United States national interests.”

As Griffin sees it, Temu baits users with misleading promises of discounted, quality goods, angling to get access to as much user data as possible by adding addictive features that keep users logged in, like spinning a wheel for deals. Meanwhile, hundreds of complaints to the Better Business Bureau showed that Temu’s goods are actually low-quality, Griffin alleged, apparently supporting his claim that Temu’s end goal isn’t to be the world’s biggest shopping platform but to steal data.

Investigators agreed, the lawsuit said, concluding “we strongly suspect that Temu is already, or intends to, illegally sell stolen data from Western country customers to sustain a business model that is otherwise doomed for failure.”

Seeking an injunction to stop Temu from allegedly spying on users, Griffin is hoping a jury will find that Temu’s alleged practices violated the Arkansas Deceptive Trade Practices Act (ADTPA) and the Arkansas Personal Information Protection Act. If Temu loses, it could be on the hook for $10,000 per violation of the ADTPA and ordered to disgorge profits from data sales and deceptive sales on the app.

Temu “surprised” by lawsuit

The company that owns Temu, PDD Holdings, was founded in 2015 by a former Google employee, Colin Huang. It was originally based in China, but after security concerns were raised, the company relocated its “principal executive offices” to Ireland, Griffin’s complaint said. This, Griffin suggested, was intended to distance the company from debate over national security risks posed by China, but because the majority of its business operations remain in China, risks allegedly remain.

PDD Holdings’ relocation came amid heightened scrutiny of Pinduoduo, the Chinese app on which Temu’s shopping platform is based. Last year, Pinduoduo came under fire for privacy and security risks that got the app suspended from Google Play as suspected malware. Experts said Pinduoduo took security and privacy risks “to the next level,” the lawsuit said. And “around the same time,” Apple’s App Store also flagged Temu’s data privacy terms as misleading, further heightening scrutiny of two of PDD Holdings’ biggest apps, the complaint noted.

Researchers found that Pinduoduo “was programmed to bypass users’ cell phone security in order to monitor activities on other apps, check notifications, read private messages, and change settings,” the lawsuit said. “It also could spy on competitors by tracking activity on other shopping apps and getting information from them,” as well as “run in the background and prevent itself from being uninstalled.” The motivation behind the malicious design was apparently “to boost sales.”

According to Griffin, the same concerns that got Pinduoduo suspended last year remain today for Temu users, but the App Store and Google Play have allegedly failed to take action to prevent unauthorized access to user data. Within a year of Temu’s launch, the “same software engineers and product managers who developed Pinduoduo” allegedly “were transitioned to working on the Temu app.”

Google and Apple did not immediately respond to Ars’ request for comment.

A Temu spokesperson provided a statement to Ars disputing Grizzly Research’s investigation and saying the company was “surprised and disappointed by the Arkansas Attorney General’s Office for filing the lawsuit without any independent fact-finding.”

“The allegations in the lawsuit are based on misinformation circulated online, primarily from a short-seller, and are totally unfounded,” Temu’s spokesperson said. “We categorically deny the allegations and will vigorously defend ourselves.”

While Temu plans to defend against the claims, the company also appears open to making changes based on criticism lobbed in Griffin’s complaint.

“We understand that as a new company with an innovative supply chain model, some may misunderstand us at first glance and not welcome us,” Temu’s spokesperson said. “We are committed to the long-term and believe that scrutiny will ultimately benefit our development. We are confident that our actions and contributions to the community will speak for themselves over time.”



Google agrees to delete Incognito data despite prior claim that’s “impossible”

Deleting files —

What a lawyer calls “a historic step,” Google considers not that “significant.”


To settle a class-action dispute over Chrome’s “Incognito” mode, Google has agreed to delete billions of data records reflecting users’ private browsing activities.

In a statement provided to Ars, users’ lawyer, David Boies, described the settlement as “a historic step in requiring honesty and accountability from dominant technology companies.” Based on Google’s insights, users’ lawyers valued the settlement between $4.75 billion and $7.8 billion, the Monday court filing said.

Under the settlement, Google agreed to delete class-action members’ private browsing data collected in the past, as well as to “maintain a change to Incognito mode that enables Incognito users to block third-party cookies by default.” This, plaintiffs’ lawyers noted, “ensures additional privacy for Incognito users going forward, while limiting the amount of data Google collects from them” over the next five years. Plaintiffs’ lawyers said that this means that “Google will collect less data from users’ private browsing sessions” and “Google will make less money from the data.”

“The settlement stops Google from surreptitiously collecting user data worth, by Google’s own estimates, billions of dollars,” Boies said. “Moreover, the settlement requires Google to delete and remediate, in unprecedented scope and scale, the data it improperly collected in the past.”

Google had already updated disclosures to users, changing the splash screen displayed “at the beginning of every Incognito session” to inform users that Google was still collecting private browsing data. Under the settlement, those disclosures to all users must be completed by March 31, after which the disclosures must remain. Google also agreed to “no longer track people’s choice to browse privately,” and the court filing said that “Google cannot roll back any of these important changes.”

Notably, the settlement does not award monetary damages to class members. Instead, Google agreed that class members retain “rights to sue Google individually for damages” through arbitration, which, users’ lawyers wrote, “is important given the significant statutory damages available under the federal and state wiretap statutes.”

“These claims remain available for every single class member, and a very large number of class members recently filed and are continuing to file complaints in California state court individually asserting those damages claims in their individual capacities,” the court filing said.

While “Google supports final approval of the settlement,” the company “disagrees with the legal and factual characterizations contained in the motion,” the court filing said. Google spokesperson José Castañeda told Ars that the tech giant thinks that the “data being deleted isn’t as significant” as Boies represents, confirming that Google was “pleased to settle this lawsuit, which we always believed was meritless.”

“The plaintiffs originally wanted $5 billion and are receiving zero,” Castañeda said. “We never associate data with users when they use Incognito mode. We are happy to delete old technical data that was never associated with an individual and was never used for any form of personalization.”

While Castañeda said that Google was happy to delete the data, a footnote in the court filing noted that initially, “Google claimed in the litigation that it was impossible to identify (and therefore delete) private browsing data because of how it stored data.” Now, under the settlement, however, Google has agreed “to remediate 100 percent of the data set at issue.”

Mitigation efforts include deleting fields Google used to detect users in Incognito mode, “partially redacting IP addresses,” and deleting “detailed URLs, which will prevent Google from knowing the specific pages on a website a user visited when in private browsing mode.” Keeping “only the domain-level portion of the URL (i.e., only the name of the website) will vastly improve user privacy by preventing Google (or anyone who gets their hands on the data) from knowing precisely what users were browsing,” the court filing said.
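To make that remediation concrete, here is a minimal, hypothetical sketch of the kind of record-level scrubbing the filing describes: dropping private-browsing detection fields, partially redacting the IP address, and truncating URLs to the domain. The field names (`is_incognito`, `ip`, `url`) are illustrative assumptions, not Google’s actual schema.

```python
from urllib.parse import urlparse

def remediate_record(record: dict) -> dict:
    """Illustrative scrub of one browsing-log record (field names are hypothetical)."""
    cleaned = dict(record)
    # Delete the field(s) used to detect Incognito sessions.
    cleaned.pop("is_incognito", None)
    # Partially redact the IP address: zero out the last octet (assumes IPv4).
    octets = cleaned["ip"].split(".")
    octets[-1] = "0"
    cleaned["ip"] = ".".join(octets)
    # Keep only the domain-level portion of the URL, dropping the specific page.
    cleaned["url"] = urlparse(cleaned["url"]).netloc
    return cleaned

print(remediate_record({
    "is_incognito": True,
    "ip": "203.0.113.42",
    "url": "https://example.com/private/page?id=7",
}))
# -> {'ip': '203.0.113.0', 'url': 'example.com'}
```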

Because Google did not oppose the motion for final approval, US District Judge Yvonne Gonzalez Rogers is expected to issue an order approving the settlement on July 30.



Florida braces for lawsuits over law banning kids from social media


On Monday, Florida became the first state to ban kids under 14 from social media without parental permission. It appears likely that the law—considered one of the most restrictive in the US—will face significant legal challenges, however, before taking effect on January 1.

Under HB 3, apps like Instagram, Snapchat, and TikTok would need to verify the ages of users, then delete any accounts for users under 14 when parental consent is not granted. Companies that “knowingly or recklessly” fail to block underage users risk paying up to $10,000 in damages to anyone suing on behalf of child users. They could also be liable for up to $50,000 per violation in civil penalties.

In a statement, Florida Governor Ron DeSantis said the “landmark law” gives “parents a greater ability to protect their children” from a variety of social media harms. Florida House Speaker Paul Renner, who spearheaded the law, explained some of that harm, saying that passing HB 3 was critical because “the Internet has become a dark alley for our children where predators target them and dangerous social media leads to higher rates of depression, self-harm, and even suicide.”

But tech groups critical of the law have suggested that they are already considering suing to block it from taking effect.

In a statement provided to Ars, the Computer & Communications Industry Association (CCIA), a nonprofit opposing the law, said that while CCIA “supports enhanced privacy protections for younger users online,” it is concerned that “any commercially available age verification method that may be used by a covered platform carries serious privacy and security concerns for users while also infringing upon their First Amendment protections to speak anonymously.”

“This law could create substantial obstacles for young people seeking access to online information, a right afforded to all Americans regardless of age,” Khara Boender, CCIA’s state policy director, warned. “It’s foreseeable that this legislation may face legal opposition similar to challenges seen in other states.”

Carl Szabo, vice president and general counsel for NetChoice—a trade association with members including Meta, TikTok, and Snap—went even further, warning that Florida’s “unconstitutional law will protect exactly zero Floridians.”

Szabo suggested that there are “better ways to keep Floridians, their families, and their data safe and secure online without violating their freedoms.” Democratic state house representative Anna Eskamani opposed the bill, arguing that “instead of banning social media access, it would be better to ensure improved parental oversight tools, improved access to data to stop bad actors, alongside major investments in Florida’s mental health systems and programs.”

NetChoice expressed “disappointment” that DeSantis agreed to sign a law requiring an “ID for the Internet” after “his staunch opposition to this idea both on the campaign trail” and when vetoing a prior version of the bill.

“HB 3 in effect will impose an ‘ID for the Internet’ on any Floridian who wants to use an online service—no matter their age,” Szabo said, warning of invasive data collection needed to verify that a user is under 14 or a parent or guardian of a child under 14.

“This level of data collection will put Floridians’ privacy and security at risk, and it violates their constitutional rights,” Szabo said, noting that in court rulings in Arkansas, California, and Ohio over similar laws, “each of the judges noted the similar laws’ constitutional and privacy problems.”



Mozilla’s privacy service drops a provider with ties to people-search sites

People search —

Owner of Onerep removal service launched “dozens of people-search services.”

Mozilla Monitor Plus dashboard. (Credit: Mozilla)

Mozilla’s Monitor Plus, a service launched by the privacy-minded tech firm in February, notes on its pitch page that there is “a $240 billion industry of data brokers selling your private information for profit” and that its offering can “take back your privacy.”

Mozilla’s most recent move to protect privacy has been to cut out one of the key providers of Monitor Plus’ people-search protections, Onerep. That comes after reporting from security journalist Brian Krebs, who revealed that Onerep CEO and founder Dimitri Shelest had launched “dozens of people-search services since 2010,” including one, Nuwber, that still sells the very kind of “background reports” that Monitor Plus seeks to curb.

Shelest told Krebs in a statement (PDF) that he did have an ownership stake in Nuwber, but that Nuwber has “zero cross-over or information-sharing with Onerep” and that he no longer operates any other people-search sites. Shelest admitted the bad look but said that his experience with people search gave Onerep “the best tech and team in the space.”

Brandon Borrman, vice president of communications at Mozilla, said in a statement that while “customer data was never at risk, the outside financial interests and activities of Onerep’s CEO do not align with our values.” Mozilla is “working now to solidify a transition plan,” Borrman said. A Mozilla spokesperson confirmed to Ars today that Mozilla is continuing to offer Monitor Plus, suggesting no pause in subscriptions, at least for the moment.

Monitor Plus also keeps track of a user’s potential data breach exposures in partnership with HaveIBeenPwned. Troy Hunt, founder of HaveIBeenPwned, told Krebs that aside from Onerep’s potential conflict of interest, broker removal services tend to be inherently fraught: “[R]emoving your data from legally operating services has minimal impact, and you can’t remove it from the outright illegal ones who are doing the genuine damage.”

Still, every bit—including removing yourself from the first page of search results—likely counts. Beyond sites that scrape public records and court documents for your information, there are the other data brokers selling barely anonymized data from web browsing, app sign-ups, and other activity. A recent FTC settlement with antivirus and security firm Avast highlighted the depth of identifying information that often is available for sale to both commercial and government entities.



Dropbox spooks users with new AI features that send data to OpenAI when used

adventures in data consent —

AI feature turned on by default worries users; Dropbox responds to concerns.



On Wednesday, news quickly spread on social media about a new enabled-by-default Dropbox setting that shares Dropbox data with OpenAI for an experimental AI-powered search feature, but Dropbox says data is only shared if the feature is actively being used. Dropbox says that user data shared with third-party AI partners isn’t used to train AI models and is deleted within 30 days.

Even with assurances of data privacy laid out by Dropbox on an AI privacy FAQ page, the discovery that the setting had been enabled by default upset some Dropbox users. The setting was first noticed by writer Winifred Burton, who shared information about the Third-party AI setting through Bluesky on Tuesday, and frequent AI critic Karla Ortiz shared more information about it on X.

Wednesday afternoon, Drew Houston, the CEO of Dropbox, apologized for customer confusion in a post on X and wrote, “The third-party AI toggle in the settings menu enables or disables access to DBX AI features and functionality. Neither this nor any other setting automatically or passively sends any Dropbox customer data to a third-party AI service.”

Critics say that communication about the change could have been clearer. AI researcher Simon Willison wrote, “Great example here of how careful companies need to be in clearly communicating what’s going on with AI access to personal data.”

A screenshot of Dropbox’s third-party AI feature switch. (Credit: Benj Edwards)

So why would Dropbox ever send user data to OpenAI anyway? In July, the company announced an AI-powered feature called Dash that allows AI models to perform universal searches across platforms like Google Workspace and Microsoft Outlook.

According to the Dropbox privacy FAQ, the third-party AI opt-out setting is part of the “Dropbox AI alpha,” which is a conversational interface for exploring file contents that involves chatting with a ChatGPT-style bot using an “Ask something about this file” feature. To make it work, an AI language model similar to the one that powers ChatGPT (like GPT-4) needs access to your files.

According to the FAQ, the third-party AI toggle in your account settings is turned on by default if “you or your team” are participating in the Dropbox AI alpha. Still, multiple Ars Technica staff who had no knowledge of the Dropbox AI alpha found the setting enabled by default when they checked.

In a statement to Ars Technica, a Dropbox representative said, “The third-party AI toggle is only turned on to give all eligible customers the opportunity to view our new AI features and functionality, like Dropbox AI. It does not enable customers to use these features without notice. Any features that use third-party AI offer disclosure of third-party use, and link to settings that they can manage. Only after a customer sees the third-party AI transparency banner and chooses to proceed with asking a question about a file, will that file be sent to a third-party to generate answers. Our customers are still in control of when and how they use these features.”

Right now, the only third-party AI provider for Dropbox is OpenAI, writes Dropbox in the FAQ. “Open AI is an artificial intelligence research organization that develops cutting-edge language models and advanced AI technologies. Your data is never used to train their internal models, and is deleted from OpenAI’s servers within 30 days.” It also says, “Only the content relevant to an explicit request or command is sent to our third-party AI partners to generate an answer, summary, or transcript.”
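Taken together, the FAQ describes a consent gate: file content goes to the third-party provider only after the account-level toggle is on, the disclosure banner has been shown, and the user explicitly asks a question about a file. Here is a minimal, hypothetical sketch of that flow; every name in it (User, ask_about_file, and so on) is invented for illustration and is not Dropbox’s actual code.

```python
from dataclasses import dataclass

@dataclass
class User:
    third_party_ai_enabled: bool = True   # on by default for eligible accounts
    seen_transparency_banner: bool = False

def relevant_content(file_text: str, question: str) -> str:
    # Stand-in for "only the content relevant to an explicit request" being sent.
    return file_text[:200]

def ask_about_file(user: User, file_text: str, question: str) -> str:
    """Hypothetical gate mirroring the FAQ: no data leaves without the toggle,
    the disclosure banner, and an explicit question from the user."""
    if not user.third_party_ai_enabled:
        return "Third-party AI is off; nothing is sent."
    if not user.seen_transparency_banner:
        user.seen_transparency_banner = True
        return "Disclosure banner shown; nothing sent yet. Ask again to proceed."
    # Only now is the (relevant) file content sent to the provider.
    return f"Sent to provider: {relevant_content(file_text, question)!r}"

u = User()
print(ask_about_file(u, "Q3 notes...", "Summarize this file"))  # shows banner first
print(ask_about_file(u, "Q3 notes...", "Summarize this file"))  # now data is sent
```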

Disabling the feature is easy if you prefer not to use Dropbox AI features. Log into your Dropbox account in a desktop web browser, then click your profile photo > Settings > Third-party AI. On that page, click the switch beside “Use artificial intelligence (AI) from third-party partners so you can work faster in Dropbox” to toggle it into the “Off” position.

This story was updated on December 13, 2023, at 5:35 pm ET with clarifications about when and how Dropbox shares data with OpenAI, as well as statements from Dropbox reps and its CEO.



Google My Activity: How you can use it to keep your data safe

