privacy


Proton is taking its privacy-first apps to a nonprofit foundation model

Proton going nonprofit —

Because of Swiss laws, there are no shareholders, and only one mission.

Swiss flag flying over a landscape of Swiss mountains, with tourists looking on from a nearby ledge

Getty Images

Proton, the secure-minded email and productivity suite, is becoming a nonprofit foundation, but it doesn’t want you to think about it in the way you think about other notable privacy and web foundations.

“We believe that if we want to bring about large-scale change, Proton can’t be billionaire-subsidized (like Signal), Google-subsidized (like Mozilla), government-subsidized (like Tor), donation-subsidized (like Wikipedia), or even speculation-subsidized (like the plethora of crypto “foundations”),” Proton CEO Andy Yen wrote in a blog post announcing the transition. “Instead, Proton must have a profitable and healthy business at its core.”

The announcement comes exactly 10 years to the day after a crowdfunding campaign saw 10,000 people give more than $500,000 to launch Proton Mail. To make it happen, Yen, along with co-founder Jason Stockman and first employee Dingchao Lu, endowed the Proton Foundation with some of their shares. The Proton Foundation is now the primary shareholder of the business Proton, which Yen states will “make irrevocable our wish that Proton remains in perpetuity an organization that places people ahead of profits.” Among other members of the Foundation’s board is Sir Tim Berners-Lee, inventor of HTML, HTTP, and almost everything else about the web.

Of particular importance is where Proton and the Proton Foundation are located: Switzerland. As Yen noted, Swiss foundations do not have shareholders and are instead obligated to act “in accordance with the purpose for which they were established.” While the for-profit entity Proton AG can still do things like offer stock options to recruits and even raise its own capital on private markets, the Foundation serves as a backstop against moving too far from Proton’s founding mission, Yen wrote.

There’s a lot more Proton to protect these days

Proton has gone from a single email offering to a wide range of services, many of which specifically target the often invasive offerings of other companies (read, mostly: Google). You can now take your cloud files, passwords, and calendars over to Proton and use its VPN services, most of which offer end-to-end encryption and open source core software hosted in Switzerland, with its notably strong privacy laws.

None of that guarantees that a Swiss court can’t compel some forms of compliance from Proton, as happened in 2021. But compared to most service providers, Proton offers a far clearer and easier-to-grasp privacy model: It can’t see your stuff, and it only makes money from subscriptions.

Of course, foundations are only as strong as the people who guide them, and seemingly firewalled profit/non-profit models can be changed. Time will tell if Proton’s new model can keep up with changing markets—and people.


Apple’s AI promise: “Your data is never stored or made accessible by Apple”

…and throw away the key —

And publicly reviewable server code means experts can “verify this privacy promise.”

Apple Senior VP of Software Engineering Craig Federighi announces “Private Cloud Compute” at WWDC 2024.

Apple

With most large language models being run on remote, cloud-based server farms, some users have been reluctant to share personally identifiable and/or private data with AI companies. In its WWDC keynote today, Apple stressed that the new “Apple Intelligence” system it’s integrating into its products will use a new “Private Cloud Compute” to ensure any data processed on its cloud servers is protected in a transparent and verifiable way.

“You should not have to hand over all the details of your life to be warehoused and analyzed in someone’s AI cloud,” Apple Senior VP of Software Engineering Craig Federighi said.

Trust, but verify

Part of what Apple calls “a brand new standard for privacy and AI” is achieved through on-device processing. Federighi said “many” of Apple’s generative AI models can run entirely on a device powered by A17+ or M-series chips, eliminating the risk of sending your personal data to a remote server.

When a bigger, cloud-based model is needed to fulfill a generative AI request, though, Federighi stressed that it will “run on servers we’ve created especially using Apple silicon,” which allows for the use of security tools built into the Swift programming language. The Apple Intelligence system “sends only the data that’s relevant to completing your task” to those servers, Federighi said, rather than giving blanket access to the entirety of the contextual information the device has access to.

And Apple says that minimized data is not going to be saved for future server access or used to further train Apple’s server-based models, either. “Your data is never stored or made accessible by Apple,” Federighi said. “It’s used exclusively to fill your request.”

But you don’t just have to trust Apple on this score, Federighi claimed. That’s because the server code used by Private Cloud Compute will be publicly accessible, meaning that “independent experts can inspect the code that runs on these servers to verify this privacy promise.” The entire system has been set up cryptographically so that Apple devices “will refuse to talk to a server unless its software has been publicly logged for inspection.”
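Apple didn’t detail the mechanism on stage, but the gist is a client-side check against a public, append-only log of server software measurements. What follows is a purely illustrative Python sketch of that “refuse to talk unless publicly logged” idea; it is not Apple’s actual Private Cloud Compute protocol, and the log format, names, and measurement scheme are invented for the example.

import hashlib

# Hypothetical set of software measurements already published to a
# transparency log (hex SHA-256 digests of server images).
PUBLIC_LOG = {
    hashlib.sha256(b"pcc-node-image-v1").hexdigest(),
}

def server_is_acceptable(attested_image: bytes) -> bool:
    """Accept only servers whose software measurement appears in the log."""
    return hashlib.sha256(attested_image).hexdigest() in PUBLIC_LOG

def send_request(attested_image: bytes, payload: dict) -> None:
    # The device refuses to talk to servers running unlogged software.
    if not server_is_acceptable(attested_image):
        raise ConnectionRefusedError("server software not publicly logged")
    ...  # proceed with the encrypted request

send_request(b"pcc-node-image-v1", {"task": "summarize my notes"})

The part outside researchers will want to verify, once the server code is published, is how the real system binds a logged measurement to the hardware actually answering a given request.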

While the keynote speech was light on details for the moment, the focus on privacy during the presentation shows that Apple is at least prioritizing security concerns in its messaging as it wades into the generative AI space for the first time. We’ll see what security experts have to say when these servers and their code are made publicly available in the near future.


Message-scraping, user-tracking service Spy Pet shut down by Discord

Discord message privacy —

Bot-driven service was also connected to targeted harassment site Kiwi Farms.

Image of various message topics locked away in a wireframe box, with a Discord logo and lock icon nearby.

Discord

Spy Pet, a service that sold access to a rich database of allegedly more than 3 billion Discord messages and details on more than 600 million users, has seemingly been shut down.

404 Media, which broke the story of Spy Pet’s offerings, reports that the service is now mostly offline. Spy Pet’s website was unavailable as of this writing. A Discord spokesperson told Ars that the company’s safety team had been “diligently investigating” Spy Pet and that it had banned accounts affiliated with it.

“Scraping our services and self-botting are violations of our Terms of Service and Community Guidelines,” the spokesperson wrote. “In addition to banning the affiliated accounts, we are considering appropriate legal action.” The spokesperson noted that Discord server administrators can adjust server permissions to prevent future such monitoring on otherwise public servers.

Kiwi Farms ties, GDPR violations

The number of servers monitored by Spy Pet had been fluctuating in recent days. The site’s administrator told 404 Media’s Joseph Cox that they were rewriting part of the service while admitting that Discord had banned a number of bots. The administrator had also told 404 Media that he did not “intend for my tool to be used for harassment,” despite a likely related user offering Spy Pet data on Kiwi Farms, a notorious hub for doxxing and online harassment campaigns that frequently targets trans and non-binary people, members of the LGBTQ community, and women.

Even if Spy Pet can somehow work past Discord’s bans or survive legal action, the site’s very nature runs against a number of other Internet regulations across the globe. It’s almost certainly in violation of the European Union’s General Data Protection Regulation (GDPR). As pointed out by StackDiary, Spy Pet and services like it seem to violate at least three articles of the GDPR, including the “right to be forgotten” in Article 17.

Under Article 8 of the GDPR, and likely in the eyes of the FTC, gathering data from what could be children’s accounts and profiting from it is almost certain to draw scrutiny, if not legal action.

Ars was unsuccessful in reaching the administrator of Spy Pet by email and Telegram message. Their last message on Telegram stated that their domain had been suspended and a backup domain was being set up. “TL;DR: Never trust the Germans,” they wrote.


Billions of public Discord messages may be sold through a scraping service

Discord chat-scraping service —

Cross-server tracking suggests a new understanding of “public” chat servers.

Discord logo, warped by vertical perspective over a phone displaying the app

Getty Images

It’s easy to get the impression that Discord chat messages are ephemeral, especially across different public servers, where lines fly upward at a near-unreadable pace. But someone claims to be catching and compiling that data and is offering packages that can track more than 600 million users across more than 14,000 servers.

Joseph Cox at 404 Media confirmed that Spy Pet, a service that sells access to a database of purportedly 3 billion Discord messages, offers data “credits” to customers who pay in bitcoin, ethereum, or other cryptocurrency. Searching individual users will reveal the servers that Spy Pet can track them across, a raw and exportable table of their messages, and connected accounts, such as GitHub. Ominously, Spy Pet lists more than 86,000 other servers in which it has “no bots,” but “we know it exists.”

  • An example of Spy Pet’s service from its website. Shown are a user’s nicknames, connected accounts, banner image, server memberships, and messages across those servers tracked by Spy Pet.

    Spy Pet

  • Statistics on servers, users, and messages purportedly logged by Spy Pet.

    Spy Pet

  • An example image of the publicly available data gathered by Spy Pet, in this example for a public server for the game Deep Rock Galactic: Survivor.

    Spy Pet

As Cox notes, Discord doesn’t make messages inside server channels easy to publicly access and search the way blog posts or unlocked social media feeds are. But many Discord users may not expect their messages, server memberships, bans, or other data to be grabbed by a bot, compiled, and sold to anybody wishing to pin them all on a particular user. 404 Media confirmed the service’s function with multiple user examples. Private messages are not mentioned by Spy Pet and are presumably still secure.

Spy Pet openly asks those training AI models, or “federal agents looking for a new source of intel,” to contact them for deals. As noted by 404 Media and confirmed by Ars, clicking on the “Request Removal” link plays a clip of J. Jonah Jameson from Spider-Man (the Tobey Maguire/Sam Raimi version) laughing at the idea of advance payment before an abrupt “You’re serious?” Users of Spy Pet, however, are assured of “secure and confidential” searches, with random usernames.

This author found nearly every public Discord he had ever dropped into for research or reporting in Spy Pet’s server list. Those who haven’t paid for message access can only see fairly benign public-facing elements, like stickers, emojis, and charted member totals over time. But as an indication of the reach of Spy Pet’s scraping, it’s an effective warning, or enticement, depending on your goals.

Ars has reached out to Spy Pet for comment and will update this post if we receive a response. A Discord spokesperson told Ars that the company is investigating whether Spy Pet violated its terms of service and community guidelines. It will take “appropriate steps to enforce our policies,” the company said, and could not provide further comment.


Facebook let Netflix see user DMs, quit streaming to keep Netflix happy: Lawsuit

A promotional image for Sorry for Your Loss, which was a Facebook Watch original scripted series.

Last April, Meta revealed that it would no longer support original shows, like Jada Pinkett Smith’s talk show Red Table Talk, on Facebook Watch. Meta’s streaming business, once viewed as competition for the likes of YouTube and Netflix, is effectively dead now; Facebook doesn’t produce original series, and Facebook Watch is no longer available as a video-streaming app.

The streaming business’ demise seemed related to cost cuts at Meta that also included layoffs. However, recently unsealed court documents in an antitrust suit against Meta [PDF] claim that Meta squashed its streaming dreams in order to appease one of its biggest ad customers: Netflix.

Facebook allegedly gave Netflix creepy privileges

As spotted by Gizmodo, a letter was filed on April 14 in relation to a class-action antitrust suit brought by Meta customers, accusing Meta of anti-competitive practices that harm social media competition and consumers. The letter, made public Saturday, asks a court to have Reed Hastings, Netflix’s founder and former CEO, respond to a subpoena for documents that plaintiffs claim are relevant to the case. The original complaint filed in December 2020 [PDF] doesn’t mention Netflix beyond stating that Facebook “secretly signed Whitelist and Data sharing agreements” with Netflix, along with “dozens” of other third-party app developers. The case is still ongoing.

The letter alleges that Netflix’s relationship with Facebook was remarkably strong due to the former’s ad spend with the latter and that Hastings directed “negotiations to end competition in streaming video” from Facebook.

One of the first questions that may come to mind is why a company like Facebook would allow Netflix to influence such a major business decision. The litigation claims the companies formed a lucrative business relationship that included Facebook allegedly giving Netflix access to Facebook users’ private messages:

By 2013, Netflix had begun entering into a series of “Facebook Extended API” agreements, including a so-called “Inbox API” agreement that allowed Netflix programmatic access to Facebook’s users’ private message inboxes, in exchange for which Netflix would “provide to FB a written report every two weeks that shows daily counts of recommendation sends and recipient clicks by interface, initiation surface, and/or implementation variant (e.g., Facebook vs. non-Facebook recommendation recipients). … In August 2013, Facebook provided Netflix with access to its so-called “Titan API,” a private API that allowed a whitelisted partner to access, among other things, Facebook users’ “messaging app and non-app friends.”

Meta said it rolled out end-to-end encryption “for all personal chats and calls on Messenger and Facebook” in December. And in 2018, Facebook told Vox that it doesn’t use private messages for ad targeting. But a few months later, The New York Times, citing “hundreds of pages of Facebook documents,” reported that Facebook “gave Netflix and Spotify the ability to read Facebook users’ private messages.”

Meta didn’t respond to Ars Technica’s request for comment. The company told Gizmodo that it has standard agreements with Netflix currently but didn’t answer the publication’s specific questions.


GM stops sharing driver data with brokers amid backlash

woo and indeed hoo —

Customers, wittingly or not, had their driving data shared with insurers.

Scissors cut off a stream of data from a toy car to a cloud

Aurich Lawson | Getty Images

After public outcry, General Motors has decided to stop sharing driving data from its connected cars with data brokers. Last week, news broke that customers enrolled in GM’s OnStar Smart Driver app have had their data shared with LexisNexis and Verisk.

Those data brokers in turn shared the information with insurance companies, resulting in some drivers finding it much harder or more expensive to obtain insurance. To make matters much worse, customers allege they never signed up for OnStar Smart Driver in the first place, claiming the choice was made for them by salespeople during the car-buying process.

Now, in what feels like an all-too-rare win for privacy in the 21st century, that data-sharing deal is no more.

“As of March 20th, OnStar Smart Driver customer data is no longer being shared with LexisNexis or Verisk. Customer trust is a priority for us, and we are actively evaluating our privacy processes and policies,” GM told us in a statement.


Hackers can read private AI-assistant chats even though they’re encrypted

CHATBOT KEYLOGGING —

All non-Google chat GPTs affected by side channel that leaks responses sent to users.


Aurich Lawson | Getty Images

AI assistants have been widely available for a little more than a year, and they already have access to our most private thoughts and business secrets. People ask them about becoming pregnant or terminating or preventing pregnancy, consult them when considering a divorce, seek information about drug addiction, or ask for edits in emails containing proprietary trade secrets. The providers of these AI-powered chat services are keenly aware of the sensitivity of these discussions and take active steps—mainly in the form of encrypting them—to prevent potential snoops from reading other people’s interactions.

But now, researchers have devised an attack that deciphers AI assistant responses with surprising accuracy. The technique exploits a side channel present in all of the major AI assistants, with the exception of Google Gemini. It then refines the fairly raw results through large language models specially trained for the task. The result: Someone with a passive adversary-in-the-middle position—meaning an adversary who can monitor the data packets passing between an AI assistant and the user—can infer the specific topic of 55 percent of all captured responses, usually with high word accuracy. The attack can deduce responses with perfect word accuracy 29 percent of the time.

Token privacy

“Currently, anybody can read private chats sent from ChatGPT and other services,” Yisroel Mirsky, head of the Offensive AI Research Lab at Ben-Gurion University in Israel, wrote in an email. “This includes malicious actors on the same Wi-Fi or LAN as a client (e.g., same coffee shop), or even a malicious actor on the Internet—anyone who can observe the traffic. The attack is passive and can happen without OpenAI or their client’s knowledge. OpenAI encrypts their traffic to prevent these kinds of eavesdropping attacks, but our research shows that the way OpenAI is using encryption is flawed, and thus the content of the messages are exposed.”

Mirsky was referring to OpenAI, but with the exception of Google Gemini, all other major chatbots are also affected. As an example, the attack can infer the encrypted ChatGPT response:

  • Yes, there are several important legal considerations that couples should be aware of when considering a divorce, …

as:

  • Yes, there are several potential legal considerations that someone should be aware of when considering a divorce. …

and the Microsoft Copilot encrypted response:

  • Here are some of the latest research findings on effective teaching methods for students with learning disabilities: …

is inferred as:

  • Here are some of the latest research findings on cognitive behavior therapy for children with learning disabilities: …

While the differences show that the precise wording isn’t perfect, the meaning of the inferred sentence is highly accurate.

Attack overview: A packet capture of an AI assistant’s real-time response reveals a token-sequence side-channel. The side-channel is parsed to find text segments that are then reconstructed using sentence-level context and knowledge of the target LLM’s writing style.

Weiss et al.

The following video demonstrates the attack in action against Microsoft Copilot:

Token-length sequence side-channel attack on Bing.

A side channel is a means of obtaining secret information from a system through indirect or unintended sources, such as physical manifestations or behavioral characteristics: the power consumed, the time required, or the sound, light, or electromagnetic radiation produced during a given operation. By carefully monitoring these sources, attackers can assemble enough information to recover encrypted keystrokes or encryption keys from CPUs, browser cookies from HTTPS traffic, or secrets from smartcards. The side channel used in this latest attack resides in tokens that AI assistants use when responding to a user query.

Tokens are akin to words that are encoded so they can be understood by LLMs. To enhance the user experience, most AI assistants send tokens on the fly, as soon as they’re generated, so that end users receive responses continuously, word by word, rather than all at once after the assistant has composed the entire answer. While the token delivery is encrypted, the real-time, token-by-token transmission exposes a previously unknown side channel, which the researchers call the “token-length sequence.”
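To make the side channel concrete, here is a rough Python sketch of the first step: recovering a token-length sequence from the sizes of streamed, encrypted records. The numbers and the fixed framing overhead are hypothetical, and a real capture would have to deal with TLS record boundaries and providers that resend the cumulative message; this is the idea, not the researchers’ tooling.

from typing import List

FRAMING_OVERHEAD = 87  # hypothetical fixed header/JSON bytes per streamed record

def token_length_sequence(record_sizes: List[int]) -> List[int]:
    """Convert observed record sizes into the token-length side channel."""
    lengths = []
    for size in record_sizes:
        payload = size - FRAMING_OVERHEAD
        if payload > 0:  # skip keep-alives and control frames
            lengths.append(payload)
    return lengths

# Hypothetical capture of one streamed reply, one token per record.
captured = [91, 93, 90, 94, 95]
print(token_length_sequence(captured))  # -> [4, 6, 3, 7, 8]

In the attack, sequences like that one are handed to language models trained to guess, using sentence-level context and the assistant’s typical phrasing, which words fit those lengths.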


Spain tells Sam Altman, Worldcoin to shut down its eyeball-scanning orbs

Only for real humans —

Cryptocurrency launched by OpenAI’s Altman is drawing scrutiny from regulators.

Worldcoin’s “Orb,” a device that scans your eyeballs to verify that you’re a real human.

Spain has moved to block Sam Altman’s cryptocurrency project Worldcoin, the latest blow to a venture that has raised controversy in multiple countries by collecting customers’ personal data using an eyeball-scanning “orb.”

The AEPD, Spain’s data protection regulator, has demanded that Worldcoin immediately cease collecting personal information in the country via the scans and stop using the data it has already gathered.

The regulator announced on Wednesday that it had taken the “precautionary measure” at the start of the week and had given Worldcoin 72 hours to demonstrate its compliance with the order.

Mar España Martí, AEPD director, said Spain was the first European country to move against Worldcoin and that it was impelled by special concern that the company was collecting information about minors.

“What we have done is raise the alarm in Europe. But this is an issue that affects… citizens in all the countries of the European Union,” she said. “That means there has to be coordinated action.”

Worldcoin, co-founded by Altman in 2019, has been offering tokens of its own cryptocurrency to people around the world, in return for their consent to have their eyes scanned by an orb.

The scans are used as a form of identification; the company is trying to create a reliable mechanism for distinguishing between humans and machines as artificial intelligence becomes more advanced.

Worldcoin was not immediately available for comment.

The Spanish regulator’s decision is the latest blow to the aspirations of the OpenAI boss and his Worldcoin co-founders Max Novendstern and Alex Blania following a series of setbacks elsewhere in the world.

At the point of its rollout last summer, the San Francisco- and Berlin-headquartered start-up avoided launching its crypto tokens in the US on account of the country’s harsh crackdown on the digital assets sector.

The Worldcoin token is also not available in major global markets such as China and India, while watchdogs in Kenya last year ordered the project to shut down operations. The UK’s Information Commissioner’s Office has previously said it would be making inquiries into Worldcoin.

While some jurisdictions have raised concerns about the viability of a Worldcoin cryptocurrency token, Spain’s latest crackdown targets the start-up’s primary efforts to establish a method to prove customers’ “personhood”—work that Altman characterizes as essential in a world where sophisticated AI is harder to distinguish from humans.

In the face of growing scrutiny, Altman told the Financial Times he could imagine a world where his start-up could exist without its in-house cryptocurrency.

Worldcoin has registered 4 million users, according to a person with knowledge of the matter. Investors poured roughly $250 million into the company, including venture capital groups Andreessen Horowitz and Khosla Ventures, internet entrepreneur Reid Hoffman and, prior to the collapse of his FTX empire, Sam Bankman-Fried.

The project attracted media attention and prompted a handful of consumer complaints in Spain as queues began to grow at the stands in shopping centers where Worldcoin is offering cryptocurrency in exchange for eyeball scans.

In January, the data protection watchdog in the Basque country, one of Spain’s autonomous regions, issued a warning about the eye-scanning technology Worldcoin was using in a Bilbao mall. The watchdog, the AVPD, said it fell under biometric data protection rules and that a risk assessment was needed.

España Martí said the Spanish agency was acting on concerns that the Worldcoin initiative did not comply with biometric data laws, which demand that users be given adequate information about how their data will be used and that they have the right to erase it.

Sharing such biometric data, she said, opened people up to a variety of risks ranging from identity fraud to breaches of health privacy and discrimination.

“I want to send a message to young people. I understand that it can be very tempting to get €70 or €80 that sorts you out for the weekend,” España Martí said, but “giving away personal data in exchange for these derisory amounts of money is a short, medium and long-term risk.”


How your sensitive data can be sold after a data broker goes bankrupt

playing fast and loose —

Sensitive location data could be sold off to the highest bidder.

Blue-toned cityscape and network connection concept, with a map pin over the business district

In 2021, Near, a company specializing in collecting and selling location data, bragged that it was “The World’s Largest Dataset of People’s Behavior in the Real-World,” with data representing “1.6B people across 44 countries.” Last year, the company went public with a $1 billion valuation (via a SPAC). Seven months later, it filed for bankruptcy and agreed to sell the company.

But for the “1.6B people” that Near said its data represents, the important question is: What happens to Near’s mountain of location data? Any company could gain access to it through purchasing the company’s assets.

The prospect of this data, including Near’s collection of location data from sensitive locations such as abortion clinics, being sold off in bankruptcy has raised alarms in Congress. Last week, Sen. Ron Wyden (D-Ore.) wrote the Federal Trade Commission (FTC) urging the agency to “protect consumers and investors from the outrageous conduct” of Near, citing his office’s investigation into the India-based company.

Wyden’s letter also urged the FTC “to intervene in Near’s bankruptcy proceedings to ensure that all location and device data held by Near about Americans is promptly destroyed and is not sold off, including to another data broker.” The FTC took such an action in 2010 to block the use of 11 years’ worth of subscriber personal data during the bankruptcy proceedings of XY Magazine, which was oriented to young gay men. The agency requested that the data be destroyed to prevent its misuse.

Wyden’s investigation was spurred by a May 2023 Wall Street Journal report that Near had licensed location data to the anti-abortion group Veritas Society so it could target ads to visitors of Planned Parenthood clinics and attempt to dissuade women from seeking abortions. Wyden’s investigation revealed that the group’s geofencing campaign focused on 600 Planned Parenthood clinics in 48 states. The Journal also revealed that Near had been selling its location data to the Department of Defense and intelligence agencies.

As of publication, Near has not responded to requests for comment.

According to Near’s privacy policy, all of the data it has collected can be transferred to new owners. Under the heading “Who do you share my personal data with?” it lists “Prospective buyers of our business.”

This type of clause is common in privacy policies, and is a regular part of businesses being bought and sold. Where it gets complicated is when the company being sold owns data containing sensitive information.

This week, a new bankruptcy court filing showed that Wyden’s requests were granted. The order places restrictions on the use, sale, licensing, or transfer of location data collected from sensitive locations in the US. Any company that purchases the data must establish a “sensitive location data program” with detailed policies for handling such data, ensure ongoing monitoring and compliance, and maintain a list of sensitive locations such as reproductive health care facilities, doctors’ offices, houses of worship, mental health care providers, corrections facilities, and shelters, among others. The order also demands that, unless consumers have explicitly provided consent, the company cease any collection, use, or transfer of location data.

In a statement emailed to The Markup, Wyden wrote, “I commend the FTC for stepping in—at my request—to ensure that this data broker’s stockpile of Americans’ sensitive location data isn’t abused, again.”

Wyden called for protecting sensitive location data from data brokers, citing the new legal threats to women since the Supreme Court’s June 2022 decision to overturn the abortion-rights ruling Roe v. Wade. Wyden wrote, “The threat posed by the sale of location data is clear, particularly to women who are seeking reproductive care.”

The bankruptcy order also provided a rare glimpse into how data brokers license data to one another. Near’s list of contracts included agreements with several location brokers, ad platforms, universities, retailers, and city governments.

It is not clear from the filing if the agreements covered Near data being licensed, Near licensing the data from the companies, or both.

This article was originally published on The Markup and was republished under the Creative Commons Attribution-NonCommercial-NoDerivatives license.


Avast ordered to stop selling browsing data from its browsing privacy apps

Security, privacy, things of that nature —

Identifiable data included job searches, map directions, “cosplay erotica.”

Avast logo on a phone in front of the words

Getty Images

Avast, a name known for its security research and antivirus apps, has long offered Chrome extensions, mobile apps, and other tools aimed at increasing privacy.

Avast’s apps would “block annoying tracking cookies that collect data on your browsing activities,” and prevent web services from “tracking your online activity.” Deep in its privacy policy, Avast said information that it collected would be “anonymous and aggregate.” In its fiercest rhetoric, Avast’s desktop software claimed it would stop “hackers making money off your searches.”

All of that language was offered up while Avast was collecting users’ browser information from 2014 to 2020, then selling it to more than 100 other companies through a since-shuttered entity known as Jumpshot, according to the Federal Trade Commission. Under a recently proposed FTC order (PDF), Avast must pay $16.5 million, which is “expected to be used to provide redress to consumers,” according to the FTC. Avast will also be prohibited from selling future browsing data, must obtain express consent for future data gathering, notify customers about prior data sales, and implement a “comprehensive privacy program” to address prior conduct.

Reached for comment, Avast provided a statement that noted the company’s closure of Jumpshot in early 2020. “We are committed to our mission of protecting and empowering people’s digital lives. While we disagree with the FTC’s allegations and characterization of the facts, we are pleased to resolve this matter and look forward to continuing to serve our millions of customers around the world,” the statement reads.

Data was far from anonymous

The FTC’s complaint (PDF) notes that after Avast acquired then-antivirus competitor Jumpshot in early 2014, it rebranded the company as an analytics seller. Jumpshot advertised that it offered “unique insights” into the habits of “[m]ore than 100 million online consumers worldwide.” That included the ability to “[s]ee where your audience is going before and after they visit your site or your competitors’ sites, and even track those who visit a specific URL.”

While Avast and Jumpshot claimed that the data had identifying information removed, the FTC argues this was “not sufficient.” Jumpshot offerings included a unique device identifier for each browser, included in data like an “All Clicks Feed,” “Search Plus Click Feed,” “Transaction Feed,” and more. The FTC’s complaint detailed how various companies would purchase these feeds, often with the express purpose of pairing them with a company’s own data, down to an individual user basis. Some Jumpshot contracts attempted to prohibit re-identifying Avast users, but “those prohibitions were limited,” the complaint notes.
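A device identifier that persists across a person’s browsing is what makes that pairing possible. The toy sketch below uses invented field names rather than Jumpshot’s actual schema, but it shows how a buyer with its own first-party records could join a purchased click feed back to an individual.

# Purchased click feed (hypothetical fields, not Jumpshot's real schema).
click_feed = [
    {"device_id": "ab12", "url": "https://example-clinic.test/appointments",
     "timestamp": "2019-03-02T14:05:11Z"},
]

# The buyer's own first-party data, keyed on the same persistent device ID.
buyer_records = [
    {"device_id": "ab12", "email": "jane@example.test"},
]

by_device = {row["device_id"]: row for row in buyer_records}
for click in click_feed:
    match = by_device.get(click["device_id"])
    if match:
        print(match["email"], "visited", click["url"], "at", click["timestamp"])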

The connection between Avast and Jumpshot became broadly known in January 2020, after reporting by Vice and PC Magazine revealed that clients, including Home Depot, Google, Microsoft, Pepsi, and McKinsey, were buying data from Jumpshot, as seen in confidential contracts. Data obtained by the publications showed that buyers could purchase data including Google Maps look-ups, individual LinkedIn and YouTube pages, porn sites, and more. “It’s very granular, and it’s great data for these companies, because it’s down to the device level with a timestamp,” one source told Vice.

The FTC’s complaint provides more detail on how Avast, on its own web forums, sought to downplay its Jumpshot presence. Avast suggested both that only aggregated data was provided to Jumpshot and that users were informed during product installation about collecting data to “better understand new and interesting trends.” Neither of these claims proved true, the FTC suggests. And the data collected was far from harmless, given its re-identifiable nature:

For example, a sample of just 100 entries out of trillions retained by Respondents showed visits by consumers to the following pages: an academic paper on a study of symptoms of breast cancer; Sen. Elizabeth Warren’s presidential candidacy announcement; a CLE course on tax exemptions; government jobs in Fort Meade, Maryland with a salary greater than $100,000; a link (then broken) to the mid-point of a FAFSA (financial aid) application; directions on Google Maps from one location to another; a Spanish-language children’s YouTube video; a link to a French dating website, including a unique member ID; and cosplay erotica.

In a blog post accompanying its announcement, FTC Senior Attorney Lesley Fair writes that, in addition to the dual nature of Avast’s privacy products and Jumpshot’s extensive tracking, the FTC is increasingly viewing browsing data as “highly sensitive information that demands the utmost care.” “Data about the websites a person visits isn’t just another corporate asset open to unfettered commercial exploitation,” Fair writes.

FTC commissioners voted 3-0 to issue the complaint and accept the proposed consent agreement. Chair Lina Khan, along with commissioners Rebecca Slaughter and Alvaro Bedoya, issued a statement on their vote.

Since the conduct detailed in the FTC’s complaint and the closure of its Jumpshot business, Avast has been acquired by Gen Digital, a firm whose brands include Norton, LifeLock, Avira, AVG, CCleaner, and ReputationDefender, among other security businesses.

Disclosure: Condé Nast, Ars Technica’s parent company, received data from Jumpshot before its closure.


ChatGPT is leaking passwords from private conversations of its users, Ars reader says

OPENAI SPRINGS A LEAK —

Names of unpublished research papers, presentations, and PHP scripts also leaked.

OpenAI logo displayed on a phone screen and ChatGPT website displayed on a laptop screen.

Getty Images

ChatGPT is leaking private conversations that include login credentials and other personal details of unrelated users, screenshots submitted by an Ars reader on Monday indicated.

Two of the seven screenshots the reader submitted stood out in particular. Both contained multiple pairs of usernames and passwords that appeared to be connected to a support system used by employees of a pharmacy prescription drug portal. An employee using the AI chatbot seemed to be troubleshooting problems they encountered while using the portal.

“Horrible, horrible, horrible”

“THIS is so f-ing insane, horrible, horrible, horrible, i cannot believe how poorly this was built in the first place, and the obstruction that is being put in front of me that prevents it from getting better,” the user wrote. “I would fire [redacted name of software] just for this absurdity if it was my choice. This is wrong.”

Besides the candid language and the credentials, the leaked conversation includes the name of the app the employee is troubleshooting and the store number where the problem occurred.

The entire conversation goes well beyond what’s shown in the redacted screenshot above. A link Ars reader Chase Whiteside included showed the chat conversation in its entirety. The URL disclosed additional credential pairs.

The results appeared Monday morning shortly after reader Whiteside had used ChatGPT for an unrelated query.

“I went to make a query (in this case, help coming up with clever names for colors in a palette) and when I returned to access moments later, I noticed the additional conversations,” Whiteside wrote in an email. “They weren’t there when I used ChatGPT just last night (I’m a pretty heavy user). No queries were made—they just appeared in my history, and most certainly aren’t from me (and I don’t think they’re from the same user either).”

Other conversations leaked to Whiteside include the name of a presentation someone was working on, details of an unpublished research proposal, and a script using the PHP programming language. The users for each leaked conversation appeared to be different and unrelated to each other. The conversation involving the prescription portal included the year 2020. Dates didn’t appear in the other conversations.

The episode, and others like it, underscore the wisdom of stripping out personal details from queries made to ChatGPT and other AI services whenever possible. Last March, ChatGPT maker OpenAI took the AI chatbot offline after a bug caused the site to show titles from one active user’s chat history to unrelated users.

In November, researchers published a paper reporting how they used queries to prompt ChatGPT into divulging email addresses, phone and fax numbers, physical addresses, and other private data that was included in material used to train the ChatGPT large language model.

Concerned about the possibility of proprietary or private data leakage, companies, including Apple, have restricted their employees’ use of ChatGPT and similar sites.

As mentioned in an article from December, when multiple people found that Ubiquiti’s UniFi devices broadcast private video belonging to unrelated users, these sorts of experiences are as old as the Internet. As explained in the article:

The precise root causes of this type of system error vary from incident to incident, but they often involve “middlebox” devices, which sit between the front- and back-end devices. To improve performance, middleboxes cache certain data, including the credentials of users who have recently logged in. When mismatches occur, credentials for one account can be mapped to a different account.
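It isn’t known whether OpenAI’s bug looked anything like this, but the class of failure the article describes is easy to sketch: a layer that keys cached responses on the URL alone, ignoring which session asked, will hand one user’s data to the next requester. The names below are invented for illustration.

from typing import Dict

cache: Dict[str, str] = {}

def handle(path: str, session_user: str) -> str:
    if path in cache:  # BUG: the cache key omits the session/user
        return cache[path]
    response = f"conversation history for {session_user}"
    cache[path] = response
    return response

print(handle("/api/conversations", "alice"))  # Alice's history, now cached
print(handle("/api/conversations", "bob"))    # Bob is served Alice's history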

An OpenAI representative said the company was investigating the report.


Apple AirDrop leaks user data like a sieve. Chinese authorities say they’re scooping it up.


Aurich Lawson | Getty Images

Chinese authorities recently said they’re using an advanced encryption attack to de-anonymize users of AirDrop in an effort to crack down on citizens who use the Apple file-sharing feature to mass-distribute content that’s outlawed in that country.

According to a 2022 report from The New York Times, activists have used AirDrop to distribute scathing critiques of the Communist Party of China to nearby iPhone users in subway trains and stations and other public venues. A document one protester sent in October of that year called General Secretary Xi Jinping a “despotic traitor.” A few months later, with the release of iOS 16.1.1, AirDrop users in China found that the “everyone” configuration, the setting that makes files available to all other users nearby, automatically reset to the more restrictive contacts-only setting. Apple has yet to acknowledge the move. Critics continue to see it as a concession Apple CEO Tim Cook made to Chinese authorities.

The rainbow connection

On Monday, eight months after the half-measure was put in place, officials with the local government in Beijing said some people have continued mass-sending illegal content. As a result, the officials said, they were now using an advanced technique publicly disclosed in 2021 to fight back.

“Some people reported that their iPhones received a video with inappropriate remarks in the Beijing subway,” the officials wrote, according to translations. “After preliminary investigation, the police found that the suspect used the AirDrop function of the iPhone to anonymously spread the inappropriate information in public places. Due to the anonymity and difficulty of tracking AirDrop, some netizens have begun to imitate this behavior.”

In response, the authorities said they’ve implemented the technical measures to identify the people mass-distributing the content.

  • Screenshot showing log files containing the hashes to be extracted

  • Screenshot showing a dedicated tool converting extracted AirDrop hashes.

The scant details and the roughness of Internet-based translations make the technique hard to pin down. All the translations, however, say it involves the use of what are known as rainbow tables to defeat the technical measures AirDrop uses to obfuscate users’ phone numbers and email addresses.

Rainbow tables were first proposed in 1980 as a means for vastly reducing what at the time was the astronomical amount of computing resources required to crack at-scale hashes, the one-way cryptographic representations used to conceal passwords and other types of sensitive data. Additional refinements made in 2003 made rainbow tables more useful still.

When AirDrop is configured to distribute files only between people who know each other, Apple says, it relies heavily on hashes to conceal the real-world identities of each party until the service determines there’s a match. Specifically, AirDrop broadcasts Bluetooth advertisements that contain a partial cryptographic hash of the sender’s phone number and/or email address.

If any of the truncated hashes match the truncated hashes of any phone number or email address in the other device’s address book, or if the devices are set to send or receive from everyone, the two devices will engage in a mutual authentication handshake. When the hashes match, the devices exchange the full SHA-256 hashes of the owners’ phone numbers and email addresses. This technique falls under an umbrella term known as private set intersection, often abbreviated as PSI.
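Those shortened hashes offer little protection because the search space behind them is tiny: there are only so many valid phone numbers. The Python sketch below uses an invented truncation length and a plain reverse-lookup table rather than Apple’s exact on-wire encoding or the Darmstadt team’s tooling; a rainbow table is essentially this table compressed into hash chains, trading storage for recomputation.

import hashlib
from itertools import product

def truncated_hash(phone: str, nbytes: int = 3) -> bytes:
    """Hypothetical stand-in for AirDrop's shortened identity hash."""
    return hashlib.sha256(phone.encode()).digest()[:nbytes]

# Precompute hash -> number for a tiny, illustrative numbering range.
prefix = "+1415555"
table = {truncated_hash(prefix + "".join(digits)): prefix + "".join(digits)
         for digits in product("0123456789", repeat=4)}

observed = truncated_hash("+14155550123")  # value sniffed from an advertisement
print(table.get(observed))  # -> +14155550123 (barring a truncation collision)

Scaling such a table to an entire national numbering plan is well within reach of commodity hardware.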

In 2021, researchers at Germany’s Technical University of Darmstadt reported that they had devised practical ways to crack what Apple calls the identity hashes used to conceal identities while AirDrop determines if a nearby person is in the contacts of another. One of the researchers’ attack methods relies on rainbow tables.
