privacy

How your sensitive data can be sold after a data broker goes bankrupt

playing fast and loose —

Sensitive location data could be sold off to the highest bidder.

In 2021, a company specializing in collecting and selling location data called Near bragged that it was “The World’s Largest Dataset of People’s Behavior in the Real-World,” with data representing “1.6B people across 44 countries.” Last year the company went public via a SPAC with a valuation of $1 billion. Seven months later it filed for bankruptcy and agreed to sell the company.

But for the “1.6B people” that Near said its data represents, the important question is: What happens to Near’s mountain of location data? Any company could gain access to it by purchasing the company’s assets.

The prospect of this data, including Near’s collection of location data from sensitive locations such as abortion clinics, being sold off in bankruptcy has raised alarms in Congress. Last week, Sen. Ron Wyden (D-Ore.) wrote the Federal Trade Commission (FTC) urging the agency to “protect consumers and investors from the outrageous conduct” of Near, citing his office’s investigation into the India-based company.

Wyden’s letter also urged the FTC “to intervene in Near’s bankruptcy proceedings to ensure that all location and device data held by Near about Americans is promptly destroyed and is not sold off, including to another data broker.” The FTC took such an action in 2010 to block the use of 11 years’ worth of subscriber personal data during the bankruptcy proceedings of XY Magazine, which was oriented to young gay men. The agency requested that the data be destroyed to prevent its misuse.

Wyden’s investigation was spurred by a May 2023 Wall Street Journal report that Near had licensed location data to the anti-abortion group Veritas Society so it could target ads to visitors of Planned Parenthood clinics and attempt to dissuade women from seeking abortions. Wyden’s investigation revealed that the group’s geofencing campaign focused on 600 Planned Parenthood clinics in 48 states. The Journal also revealed that Near had been selling its location data to the Department of Defense and intelligence agencies.

As of publication, Near has not responded to requests for comment.

According to Near’s privacy policy, all of the data it has collected can be transferred to new owners. Under the heading “Who do you share my personal data with?” it lists “Prospective buyers of our business.”

This type of clause is common in privacy policies and is a regular part of businesses being bought and sold. It gets complicated when the company being sold holds data containing sensitive information.

This week, a new bankruptcy court filing showed that Wyden’s requests were granted. The order places restrictions on the use, sale, licensing, or transfer of location data collected from sensitive locations in the US. Any company that purchases the data must establish a “sensitive location data program” with detailed policies for handling such data and ensure ongoing monitoring and compliance, including maintaining a list of sensitive locations such as reproductive health care facilities, doctor’s offices, houses of worship, mental health care providers, corrections facilities, and shelters. The order also demands that, unless consumers have explicitly provided consent, the company cease any collection, use, or transfer of location data.

In a statement emailed to The Markup, Wyden wrote, “I commend the FTC for stepping in—at my request—to ensure that this data broker’s stockpile of Americans’ sensitive location data isn’t abused, again.”

Wyden called for protecting sensitive location data from data brokers, citing the new legal threats to women since the Supreme Court’s June 2022 decision to overturn the abortion-rights ruling Roe v. Wade. Wyden wrote, “The threat posed by the sale of location data is clear, particularly to women who are seeking reproductive care.”

The bankruptcy order also provided a rare glimpse into how data brokers license data to one another. Near’s list of contracts included agreements with several location brokers, ad platforms, universities, retailers, and city governments.

It is not clear from the filing if the agreements covered Near data being licensed, Near licensing the data from the companies, or both.

This article was originally published on The Markup and was republished under the Creative Commons Attribution-NonCommercial-NoDerivatives license.

Avast ordered to stop selling browsing data from its browsing privacy apps

Security, privacy, things of that nature —

Identifiable data included job searches, map directions, “cosplay erotica.”

Avast, a name known for its security research and antivirus apps, has long offered Chrome extensions, mobile apps, and other tools aimed at increasing privacy.

Avast’s apps would “block annoying tracking cookies that collect data on your browsing activities,” and prevent web services from “tracking your online activity.” Deep in its privacy policy, Avast said information that it collected would be “anonymous and aggregate.” In its fiercest rhetoric, Avast’s desktop software claimed it would stop “hackers making money off your searches.”

All of that language was offered up while Avast was collecting users’ browser information from 2014 to 2020, then selling it to more than 100 other companies through a since-shuttered entity known as Jumpshot, according to the Federal Trade Commission. Under a recent proposed FTC order (PDF), Avast must pay $16.5 million, which is “expected to be used to provide redress to consumers,” according to the FTC. Avast will also be prohibited from selling future browsing data, must obtain express consent on future data gathering, notify customers about prior data sales, and implement a “comprehensive privacy program” to address prior conduct.

Reached for comment, Avast provided a statement that noted the company’s closure of Jumpshot in early 2020. “We are committed to our mission of protecting and empowering people’s digital lives. While we disagree with the FTC’s allegations and characterization of the facts, we are pleased to resolve this matter and look forward to continuing to serve our millions of customers around the world,” the statement reads.

Data was far from anonymous

The FTC’s complaint (PDF) notes that after Avast acquired then-antivirus competitor Jumpshot in early 2014, it rebranded the company as an analytics seller. Jumpshot advertised that it offered “unique insights” into the habits of “[m]ore than 100 million online consumers worldwide.” That included the ability to “[s]ee where your audience is going before and after they visit your site or your competitors’ sites, and even track those who visit a specific URL.”

While Avast and Jumpshot claimed that the data had identifying information removed, the FTC argues this was “not sufficient.” Jumpshot offerings included a unique device identifier for each browser, included in data like an “All Clicks Feed,” “Search Plus Click Feed,” “Transaction Feed,” and more. The FTC’s complaint detailed how various companies would purchase these feeds, often with the express purpose of pairing them with a company’s own data, down to an individual user basis. Some Jumpshot contracts attempted to prohibit re-identifying Avast users, but “those prohibitions were limited,” the complaint notes.
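The re-identification risk the FTC describes is essentially a database join: a buyer holding its own first-party records keyed by the same stable device identifier can match the nominally anonymous feed row by row. A minimal sketch in Python (all field names and values here are invented for illustration, not taken from Jumpshot’s actual feeds):

```python
# "Anonymized" clickstream rows, still carrying a stable per-device ID,
# as in a hypothetical "All Clicks Feed"-style export.
clicks_feed = [
    {"device_id": "dev-123", "url": "https://example-clinic.test/appointments"},
    {"device_id": "dev-456", "url": "https://example-shop.test/checkout"},
]

# A buyer's own first-party records, e.g. from its e-commerce logins,
# keyed by the same device identifier.
first_party = {"dev-123": "alice@example.test"}

# Joining on the device ID de-anonymizes the feed for every matching row.
reidentified = [
    {**row, "email": first_party[row["device_id"]]}
    for row in clicks_feed
    if row["device_id"] in first_party
]
print(reidentified)
```

Contractual prohibitions on performing exactly this join are the “limited” safeguards the complaint refers to; technically, nothing in the data itself prevents it.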

The connection between Avast and Jumpshot became broadly known in January 2020, after reporting by Vice and PC Magazine revealed that clients, including Home Depot, Google, Microsoft, Pepsi, and McKinsey, were buying data from Jumpshot, as seen in confidential contracts. Data obtained by the publications showed that buyers could purchase data including Google Maps look-ups, individual LinkedIn and YouTube pages, porn sites, and more. “It’s very granular, and it’s great data for these companies, because it’s down to the device level with a timestamp,” one source told Vice.

The FTC’s complaint provides more detail on how Avast, on its own web forums, sought to downplay its Jumpshot presence. Avast suggested both that only non-aggregated data was provided to Jumpshot and that users were informed during product installation about collecting data to “better understand new and interesting trends.” Neither of these claims proved true, the FTC suggests. And the data collected was far from harmless, given its re-identifiable nature:

For example, a sample of just 100 entries out of trillions retained by Respondents showed visits by consumers to the following pages: an academic paper on a study of symptoms of breast cancer; Sen. Elizabeth Warren’s presidential candidacy announcement; a CLE course on tax exemptions; government jobs in Fort Meade, Maryland with a salary greater than $100,000; a link (then broken) to the mid-point of a FAFSA (financial aid) application; directions on Google Maps from one location to another; a Spanish-language children’s YouTube video; a link to a French dating website, including a unique member ID; and cosplay erotica.

In a blog post accompanying its announcement, FTC Senior Attorney Lesley Fair writes that, in addition to the dual nature of Avast’s privacy products and Jumpshot’s extensive tracking, the FTC is increasingly viewing browsing data as “highly sensitive information that demands the utmost care.” “Data about the websites a person visits isn’t just another corporate asset open to unfettered commercial exploitation,” Fair writes.

FTC commissioners voted 3-0 to issue the complaint and accept the proposed consent agreement. Chair Lina Khan, along with commissioners Rebecca Slaughter and Alvaro Bedoya, issued a statement on their vote.

Since the time of the FTC’s complaint and its Jumpshot business, Avast has been acquired by Gen Digital, a firm that contains Norton, Avast, LifeLock, Avira, AVG, CCleaner, and ReputationDefender, among other security businesses.

Disclosure: Condé Nast, Ars Technica’s parent company, received data from Jumpshot before its closure.

ChatGPT is leaking passwords from private conversations of its users, Ars reader says

OPENAI SPRINGS A LEAK —

Names of unpublished research papers, presentations, and PHP scripts also leaked.

ChatGPT is leaking private conversations that include login credentials and other personal details of unrelated users, screenshots submitted by an Ars reader on Monday indicated.

Two of the seven screenshots the reader submitted stood out in particular. Both contained multiple pairs of usernames and passwords that appeared to be connected to a support system used by employees of a pharmacy prescription drug portal. An employee using the AI chatbot seemed to be troubleshooting problems they encountered while using the portal.

“Horrible, horrible, horrible”

“THIS is so f-ing insane, horrible, horrible, horrible, i cannot believe how poorly this was built in the first place, and the obstruction that is being put in front of me that prevents it from getting better,” the user wrote. “I would fire [redacted name of software] just for this absurdity if it was my choice. This is wrong.”

Besides the candid language and the credentials, the leaked conversation includes the name of the app the employee is troubleshooting and the store number where the problem occurred.

The entire conversation goes well beyond what’s shown in the redacted screenshot above. A link Ars reader Chase Whiteside included showed the chat conversation in its entirety. The URL disclosed additional credential pairs.

The leaked conversations appeared Monday morning, shortly after Whiteside had used ChatGPT for an unrelated query.

“I went to make a query (in this case, help coming up with clever names for colors in a palette) and when I returned to access moments later, I noticed the additional conversations,” Whiteside wrote in an email. “They weren’t there when I used ChatGPT just last night (I’m a pretty heavy user). No queries were made—they just appeared in my history, and most certainly aren’t from me (and I don’t think they’re from the same user either).”

Other conversations leaked to Whiteside include the name of a presentation someone was working on, details of an unpublished research proposal, and a script using the PHP programming language. The users for each leaked conversation appeared to be different and unrelated to each other. The conversation involving the prescription portal included the year 2020. Dates didn’t appear in the other conversations.

The episode, and others like it, underscore the wisdom of stripping out personal details from queries made to ChatGPT and other AI services whenever possible. Last March, ChatGPT maker OpenAI took the AI chatbot offline after a bug caused the site to show titles from one active user’s chat history to unrelated users.

In November, researchers published a paper reporting how they used queries to prompt ChatGPT into divulging email addresses, phone and fax numbers, physical addresses, and other private data that was included in material used to train the ChatGPT large language model.

Concerned about the possibility of proprietary or private data leakage, companies, including Apple, have restricted their employees’ use of ChatGPT and similar sites.

As an article from December noted, when multiple people found that Ubiquiti’s UniFi devices broadcast private video belonging to unrelated users, these sorts of experiences are as old as the Internet. As explained in that article:

The precise root causes of this type of system error vary from incident to incident, but they often involve “middlebox” devices, which sit between the front- and back-end devices. To improve performance, middleboxes cache certain data, including the credentials of users who have recently logged in. When mismatches occur, credentials for one account can be mapped to a different account.

An OpenAI representative said the company was investigating the report.

Apple AirDrop leaks user data like a sieve. Chinese authorities say they’re scooping it up.

Chinese authorities recently said they’re using an advanced encryption attack to de-anonymize users of AirDrop in an effort to crack down on citizens who use the Apple file-sharing feature to mass-distribute content that’s outlawed in that country.

According to a 2022 report from The New York Times, activists have used AirDrop to distribute scathing critiques of the Communist Party of China to nearby iPhone users in subway trains and stations and other public venues. A document one protester sent in October of that year called General Secretary Xi Jinping a “despotic traitor.” A few months later, with the release of iOS 16.1.1, AirDrop users in China found that the “everyone” configuration, the setting that makes files available to all other users nearby, automatically reset to the more restrictive contacts-only setting. Apple has yet to acknowledge the move. Critics continue to see it as a concession Apple CEO Tim Cook made to Chinese authorities.

The rainbow connection

On Monday, eight months after the half-measure was put in place, officials with the local government in Beijing said some people have continued mass-sending illegal content. As a result, the officials said, they were now using an advanced technique publicly disclosed in 2021 to fight back.

“Some people reported that their iPhones received a video with inappropriate remarks in the Beijing subway,” the officials wrote, according to translations. “After preliminary investigation, the police found that the suspect used the AirDrop function of the iPhone to anonymously spread the inappropriate information in public places. Due to the anonymity and difficulty of tracking AirDrop, some netizens have begun to imitate this behavior.”

In response, the authorities said they’ve implemented the technical measures to identify the people mass-distributing the content.

The scant details and the quality of Internet-based translations don’t explicitly describe the technique. All the translations, however, have said it involves the use of what are known as rainbow tables to defeat the technical measures AirDrop uses to obfuscate users’ phone numbers and email addresses.

Rainbow tables were first proposed in 1980 as a means for vastly reducing what at the time was the astronomical amount of computing resources required to crack at-scale hashes, the one-way cryptographic representations used to conceal passwords and other types of sensitive data. Additional refinements made in 2003 made rainbow tables more useful still.

When AirDrop is configured to distribute files only between people who know each other, Apple says, it relies heavily on hashes to conceal the real-world identities of each party until the service determines there’s a match. Specifically, AirDrop broadcasts Bluetooth advertisements that contain a partial cryptographic hash of the sender’s phone number and/or email address.

If any of the truncated hashes match any phone number or email address in the address book of the other device, or if the devices are set to send or receive from everyone, the two devices will engage in a mutual authentication handshake. When the hashes match, the devices exchange the full SHA-256 hashes of the owners’ phone numbers and email addresses. This technique falls under an umbrella term known as private set intersection, often abbreviated as PSI.

In 2021, researchers at Germany’s Technical University of Darmstadt reported that they had devised practical ways to crack what Apple calls the identity hashes used to conceal identities while AirDrop determines if a nearby person is in the contacts of another. One of the researchers’ attack methods relies on rainbow tables.
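The core weakness the researchers exploited is that phone numbers occupy a tiny keyspace by cryptographic standards, so even a truncated hash can be reversed by precomputing a lookup table over all candidate numbers. A minimal Python sketch of the idea (the truncation length and number formatting here are hypothetical, not Apple’s actual encoding):

```python
import hashlib

def truncated_hash(phone: str, length: int = 2) -> bytes:
    """Hash a phone number and keep only the first few bytes, mimicking
    the idea of AirDrop's shortened identity hashes (illustrative only)."""
    return hashlib.sha256(phone.encode()).digest()[:length]

def build_lookup(numbers):
    """Precompute truncated-hash -> candidate-numbers, rainbow-table style."""
    table = {}
    for n in numbers:
        table.setdefault(truncated_hash(n), []).append(n)
    return table

# Toy keyspace: every number under a single area-code-and-exchange prefix.
candidates = [f"+1202555{i:04d}" for i in range(10_000)]
table = build_lookup(candidates)

# A sniffer that captures a truncated hash from a Bluetooth advertisement
# can immediately look up which numbers could have produced it.
observed = truncated_hash("+12025550123")
print(table[observed])  # a short list of candidates, including the real one
```

A real attack precomputes over all plausible phone numbers in a region, and uses rainbow tables rather than a flat dictionary to trade storage for a little recomputation; the principle is the same.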

Researchers come up with better idea to prevent AirTag stalking

Apple’s AirTags are meant to help you effortlessly find your keys or track your luggage. But the same features that make them easy to deploy and inconspicuous in your daily life have also allowed them to be abused as a sinister tracking tool that domestic abusers and criminals can use to stalk their targets.

Over the past year, Apple has taken protective steps to notify iPhone and Android users if an AirTag is in their vicinity for a significant amount of time without the presence of its owner’s iPhone, which could indicate that an AirTag has been planted to secretly track their location. Apple hasn’t said exactly how long this time interval is, but to create the much-needed alert system, Apple made some crucial changes to the location privacy design the company originally developed a few years ago for its “Find My” device tracking feature. Researchers from Johns Hopkins University and the University of California, San Diego, say, though, that they’ve developed a cryptographic scheme to bridge the gap—prioritizing detection of potentially malicious AirTags while also preserving maximum privacy for AirTag users.

The Find My system uses both public and private cryptographic keys to identify individual AirTags and manage their location tracking. But Apple developed a particularly thoughtful mechanism to regularly rotate the public device identifier—every 15 minutes, according to the researchers. This way, it would be much more difficult for someone to track your location over time using a Bluetooth scanner to follow the identifier around. This worked well for privately tracking the location of, say, your MacBook if it was lost or stolen, but the downside of constantly changing this identifier for AirTags was that it provided cover for the tiny devices to be deployed abusively.

In reaction to this conundrum, Apple revised the system so an AirTag’s public identifier now only rotates once every 24 hours if the AirTag is away from an iPhone or other Apple device that “owns” it. The idea is that this way other devices can detect potential stalking, but won’t be throwing up alerts all the time if you spend a weekend with a friend who has their iPhone and the AirTag on their keys in their pockets.

In practice, though, the researchers say that these changes have created a situation where AirTags are broadcasting their location to anyone who’s checking within a 30- to 50-foot radius over the course of an entire day—enough time to track a person as they go about their life and get a sense of their movements.

“We had students walk through cities, walk through Times Square and Washington, DC, and lots and lots of people are broadcasting their locations,” says Johns Hopkins cryptographer Matt Green, who worked on the research with a group of colleagues, including Nadia Heninger and Abhishek Jain. “Hundreds of AirTags were not near the device they were registered to, and we’re assuming that most of those were not stalker AirTags.”
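The trade-off described above can be made concrete with a small sketch. This is illustrative only (Apple’s real scheme derives identifiers from rotating cryptographic keys, not a bare hash of a secret): the point is that when the tag is away from its owner, the rotation epoch changes only once a day, so the broadcast identifier is stable for an entire day.

```python
import hashlib

NEAR_OWNER_PERIOD = 15 * 60         # rotate every 15 minutes near the owner
AWAY_FROM_OWNER_PERIOD = 24 * 3600  # rotate once every 24 hours when away

def current_identifier(device_secret: bytes, now: int, near_owner: bool) -> str:
    """Derive the identifier broadcast at time `now` (seconds since epoch)."""
    period = NEAR_OWNER_PERIOD if near_owner else AWAY_FROM_OWNER_PERIOD
    epoch = now // period
    return hashlib.sha256(device_secret + epoch.to_bytes(8, "big")).hexdigest()[:16]

secret = b"example-device-secret"

# Near its owner, the identifier changes every 15 minutes...
assert current_identifier(secret, 0, True) != current_identifier(secret, 16 * 60, True)

# ...but away from its owner it stays constant for nearly a full day, which
# is exactly the window that lets a passive scanner follow the tag around.
assert current_identifier(secret, 0, False) == current_identifier(secret, 23 * 3600, False)
```

The researchers’ proposal, in essence, looks for a scheme that rotates fast enough to frustrate this kind of passive tracking while still letting nearby phones recognize a tag that has been following them.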

Apple has been working with companies like Google, Samsung, and Tile on a cross-industry effort to address the threat of tracking from products similar to AirTags. And for now, at least, the researchers say that the consortium seems to have adopted Apple’s approach of rotating the device public identifiers once every 24 hours. But the privacy trade-off inherent in this solution made the researchers curious about whether it would be possible to design a system that better balanced both privacy and safety.

Google agrees to settle Chrome incognito mode class action lawsuit

Not as private as you thought —

2020 lawsuit accused Google of tracking incognito activity, tying it to users’ profiles.

Google has indicated that it is ready to settle a class-action lawsuit filed in 2020 over its Chrome browser’s Incognito mode. Arising in the Northern District of California, the lawsuit accused Google of continuing to “track, collect, and identify [users’] browsing data in real time” even when they had opened a new Incognito window.

The lawsuit, filed by Florida resident William Byatt and California residents Chasom Brown and Maria Nguyen, accused Google of violating wiretap laws. It also alleged that sites using Google Analytics or Ad Manager collected information from browsers in Incognito mode, including web page content, device data, and IP address. The plaintiffs also accused Google of taking Chrome users’ private browsing activity and then associating it with their already-existing user profiles.

Google initially attempted to have the lawsuit dismissed by pointing to the message displayed when users turned on Chrome’s incognito mode. That warning tells users that their activity “might still be visible to websites you visit.”

Judge Yvonne Gonzalez Rogers rejected Google’s bid for summary judgment in August, pointing out that Google never revealed to its users that data collection continued even while surfing in Incognito mode.

“Google’s motion hinges on the idea that plaintiffs consented to Google collecting their data while they were browsing in private mode,” Rogers ruled. “Because Google never explicitly told users that it does so, the Court cannot find as a matter of law that users explicitly consented to the at-issue data collection.”

According to the notice filed on Tuesday, Google and the plaintiffs have agreed to terms that will result in the litigation being dismissed. The agreement is to be presented to the court by the end of January, with final court approval expected by the end of February.

UniFi devices broadcasted private video to other users’ accounts

CASE OF MISTAKEN IDENTITY —

“I was presented with 88 consoles from another account,” one user reports.

Users of UniFi, the popular line of wireless devices from manufacturer Ubiquiti, are reporting receiving private camera feeds from, and control over, devices belonging to other users, posts published to social media site Reddit over the past 24 hours show.

“Recently, my wife received a notification from UniFi Protect, which included an image from a security camera,” one Reddit user reported. “However, here’s the twist—this camera doesn’t belong to us.”

Stoking concern and anxiety

The post included two images. The first showed a notification pushed to the person’s phone reporting that their UDM Pro, a network controller and network gateway used by tech-enthusiast consumers, had detected someone moving in the backyard. A still shot of video recorded by a connected surveillance camera showed a three-story house surrounded by trees. The second image showed the dashboard belonging to the Reddit user. The user’s connected device was a UDM SE, and the video it captured showed a completely different house.

Less than an hour later, a different Reddit user posting to the same thread replied: “So it’s VERY interesting you posted this, I was just about to post that when I navigated to unifi.ui.com this morning, I was logged into someone else’s account completely! It had my email on the top right, but someone else’s UDM Pro! I could navigate the device, view, and change settings! Terrifying!!”

Two other people took to the same thread to report similar behavior happening to them.

Other Reddit threads posted in the past day reporting UniFi users connecting to private devices or feeds belonging to others are here and here. The first one reported that the Reddit poster gained full access to someone else’s system. The post included two screenshots showing what the poster said was the captured video of an unrecognized business. The other poster reported logging into their Ubiquiti dashboard to find system controls for someone else. “I ended up logging out, clearing cookies, etc seems fine now for me…” the poster wrote.

Yet another person reported the same problem in a post published to Ubiquiti’s community support forum on Thursday, as this Ars story was being reported. The person reported logging into the UniFi console as is their routine each day.

“However this time I was presented with 88 consoles from another account,” the person wrote. “I had full access to these consoles, just as I would my own. This was only stopped when I forced a browser refresh, and I was presented again with my consoles.”

Ubiquiti on Thursday said it had identified the glitch and fixed the errors that caused it.

“Specifically, this issue was caused by an upgrade to our UniFi Cloud infrastructure, which we have since solved,” officials wrote. They went on:

1. What happened?

1,216 Ubiquiti accounts (“Group 1”) were improperly associated with a separate group of 1,177 Ubiquiti accounts (“Group 2”).

2. When did this happen?

December 13, from 6:47 AM to 3:45 PM UTC.

3. What does this mean?

During this time, a small number of users from Group 2 received push notifications on their mobile devices from the consoles assigned to a small number of users from Group 1.

Additionally, during this time, a user from Group 2 that attempted to log into his or her account may have been granted temporary remote access to a Group 1 account.

The reports are understandably stoking concern and even anxiety for users of UniFi products, which include wireless access points, switches, routers, controller devices, VoIP phones, and access control products. As the Internet-accessible portals into the local networks of users, UniFi devices provide a means for accessing cameras, mics, and other sensitive resources inside the home.

“I guess I should stop walking around naked in my house now,” a participant in one of the forums joked.

To Ubiquiti’s credit, company employees proactively responded to reports, signaling they took the reports seriously and began actively investigating early on. The employees said the problem has been corrected, and the account mix-ups are no longer occurring.

It’s useful to remember that this sort of behavior—legitimately logging into an account only to find the data or controls belonging to a completely different account—is as old as the Internet. Recent examples: A T-Mobile mistake in September, and similar glitches involving Chase Bank, First Virginia Banks, Credit Karma, and Sprint.

The precise root causes of this type of system error vary from incident to incident, but they often involve “middlebox” devices, which sit between the front- and back-end devices. To improve performance, middleboxes cache certain data, including the credentials of users who have recently logged in. When mismatches occur, credentials for one account can be mapped to a different account.
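As a toy illustration of that failure mode (all names here are hypothetical; real middleboxes are far more complex), consider a cache whose key omits the user’s session, so whichever user hits a path first has their response served to everyone:

```python
class NaiveMiddleboxCache:
    """A caching proxy that sits between clients and a backend."""

    def __init__(self, backend):
        self.backend = backend
        self.cache = {}

    def handle(self, path, session_token):
        # BUG: the cache key ignores the session token, so a response
        # generated for one authenticated user is replayed to others.
        if path not in self.cache:
            self.cache[path] = self.backend(path, session_token)
        return self.cache[path]

def backend(path, session_token):
    """Stand-in for the real application server."""
    return f"dashboard for {session_token}"

proxy = NaiveMiddleboxCache(backend)
print(proxy.handle("/account", "alice"))  # prints "dashboard for alice"
print(proxy.handle("/account", "bob"))    # also "dashboard for alice": a leak
```

The standard fix is to make the cache key include everything the response varies on (the session, at minimum), or to mark per-user responses as uncacheable.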

In an email, a Ubiquiti official said company employees are still gathering “information to provide an accurate assessment.”

Dropbox spooks users with new AI features that send data to OpenAI when used

adventures in data consent —

AI feature turned on by default worries users; Dropbox responds to concerns.

On Wednesday, news quickly spread on social media about a new enabled-by-default Dropbox setting that shares Dropbox data with OpenAI for an experimental AI-powered search feature, but Dropbox says data is only shared if the feature is actively being used. Dropbox says that user data shared with third-party AI partners isn’t used to train AI models and is deleted within 30 days.

Even with assurances of data privacy laid out by Dropbox on an AI privacy FAQ page, the discovery that the setting had been enabled by default upset some Dropbox users. The setting was first noticed by writer Winifred Burton, who shared information about the Third-party AI setting through Bluesky on Tuesday, and frequent AI critic Karla Ortiz shared more information about it on X.

Wednesday afternoon, Drew Houston, the CEO of Dropbox, apologized for customer confusion in a post on X and wrote, “The third-party AI toggle in the settings menu enables or disables access to DBX AI features and functionality. Neither this nor any other setting automatically or passively sends any Dropbox customer data to a third-party AI service.”

Critics say that communication about the change could have been clearer. AI researcher Simon Willison wrote, “Great example here of how careful companies need to be in clearly communicating what’s going on with AI access to personal data.”

A screenshot of Dropbox’s third-party AI feature switch. (Credit: Benj Edwards)

So why would Dropbox ever send user data to OpenAI anyway? In July, the company announced an AI-powered feature called Dash that allows AI models to perform universal searches across platforms like Google Workspace and Microsoft Outlook.

According to the Dropbox privacy FAQ, the third-party AI opt-out setting is part of the “Dropbox AI alpha,” which is a conversational interface for exploring file contents that involves chatting with a ChatGPT-style bot using an “Ask something about this file” feature. To make it work, an AI language model similar to the one that powers ChatGPT (like GPT-4) needs access to your files.

According to the FAQ, the third-party AI toggle in your account settings is turned on by default if “you or your team” are participating in the Dropbox AI alpha. Still, multiple Ars Technica staff who had no knowledge of the Dropbox AI alpha found the setting enabled by default when they checked.

In a statement to Ars Technica, a Dropbox representative said, “The third-party AI toggle is only turned on to give all eligible customers the opportunity to view our new AI features and functionality, like Dropbox AI. It does not enable customers to use these features without notice. Any features that use third-party AI offer disclosure of third-party use, and link to settings that they can manage. Only after a customer sees the third-party AI transparency banner and chooses to proceed with asking a question about a file, will that file be sent to a third-party to generate answers. Our customers are still in control of when and how they use these features.”

Right now, the only third-party AI provider for Dropbox is OpenAI, writes Dropbox in the FAQ. “Open AI is an artificial intelligence research organization that develops cutting-edge language models and advanced AI technologies. Your data is never used to train their internal models, and is deleted from OpenAI’s servers within 30 days.” It also says, “Only the content relevant to an explicit request or command is sent to our third-party AI partners to generate an answer, summary, or transcript.”
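Dropbox’s description amounts to a consent-gating pattern: file content leaves for a third-party provider only when the account-level toggle is on and the user explicitly proceeds past the disclosure banner. A minimal sketch of that pattern (function names and behavior are illustrative, not Dropbox’s actual API):

```python
# Illustrative consent gate, assuming two independent checks must pass
# before any file content is sent to a third-party AI provider.

def send_to_ai_partner(content, question):
    # Stand-in for the real third-party API call.
    return f"Answer about {len(content)} bytes of content"

def ask_about_file(file_content, question, third_party_ai_enabled,
                   user_confirmed_banner):
    if not third_party_ai_enabled:
        raise PermissionError("third-party AI toggle is off")
    if not user_confirmed_banner:
        raise PermissionError("user has not proceeded past the disclosure banner")
    # Only the content relevant to this explicit request is sent.
    return send_to_ai_partner(file_content, question)
```

Under this structure, either check failing means no data leaves the account, which matches Dropbox’s claim that the toggle alone does not send customer data anywhere.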

Disabling the feature is easy if you prefer not to use Dropbox AI features. Log into your Dropbox account in a desktop web browser, then click your profile photo > Settings > Third-party AI. On that page, click the switch beside “Use artificial intelligence (AI) from third-party partners so you can work faster in Dropbox” to toggle it to the “Off” position.

This story was updated on December 13, 2023, at 5:35 pm ET with clarifications about when and how Dropbox shares data with OpenAI, as well as statements from Dropbox reps and its CEO.



Challenges Behind Applying Real-World Laws to XR Spaces and Ensuring User Safety

Immersive technologies that bridge the gap between the physical and digital worlds can create new business opportunities. However, they also give rise to new challenges in regulation and in applying real-world laws to XR spaces. According to a World Economic Forum report, legal frameworks for emerging technologies like AR and VR have been relatively slow to develop.

Common Challenges of Applying Laws to AR and VR

XR technologies like AR and VR are already considered beneficial and are used in industries like medicine and education. However, XR still harbors risks to human rights, according to an Electronic Frontier Foundation (EFF) article.

Issues like data harvesting and online harassment pose real threats to users, and self-regulation when it comes to data protection and ethical guidelines is insufficient in mitigating such risks. Some common challenges that crop up when applying real-world laws to AR and VR include intellectual property, virtual privacy and security, and product liability.

There’s also the need for a new framework tailored to fit emerging technologies, but legislative attempts at regulation may face several hurdles. It’s also worth noting that while regulation can help keep users safe, it may also potentially hamper the development of such technologies, according to Digikonn co-founder Chirag Prajapati.

Can Real-World Laws Be Applied to XR Spaces?

In an interview with IEEE Spectrum in 2018, Robyn Chatwood, an intellectual property and information technology partner at Dentons Australia, gave an example of an incident that occurred in a VR space where a user experienced sexual assault. Unfortunately, Chatwood remarked that there are no laws saying that sexual assault in VR is the same as in the real world. When asked when she thinks these issues will be addressed, Chatwood remarked that, in several years, another incident could draw more widespread attention to the problems in XR spaces. It’s also possible that, through increased adoption, society will begin to recognize the need to develop regulations for XR spaces.

On a more positive note, the trend toward regulations for XR spaces has been changing recently. For instance, Meta has rolled out a minimum distance between avatars in Horizon Worlds, its VR social media platform. This boundary prevents other avatars from getting into your avatar’s personal space: the system halts a user’s forward movement as they approach the boundary.
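Mechanically, a personal boundary like this is a distance clamp on movement. A minimal sketch, with the radius and step size as illustrative values rather than Meta’s actual parameters:

```python
# Minimal personal-boundary check: forward movement is truncated once
# it would bring one avatar inside another's boundary radius.

import math

BOUNDARY_RADIUS = 1.2  # meters; illustrative value

def step_toward(pos, other, step=0.5):
    """Move `pos` toward `other`, halting at the boundary."""
    dist = math.dist(pos, other)
    if dist - step < BOUNDARY_RADIUS:
        # Shorten the step so the avatar stops at the boundary
        # instead of entering the other avatar's personal space.
        step = max(0.0, dist - BOUNDARY_RADIUS)
    dx = (other[0] - pos[0]) / dist
    dy = (other[1] - pos[1]) / dist
    return (pos[0] + dx * step, pos[1] + dy * step)

p = (0.0, 0.0)
for _ in range(10):
    p = step_toward(p, (5.0, 0.0))
print(p)  # ≈ (3.8, 0.0): halted 1.2 m short of the other avatar
```

The same check runs every frame in a real engine, so continued forward input simply produces zero displacement once the boundary is reached.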

There are also new laws being drafted to protect users in online spaces. In particular, the UK’s Online Safety Bill, which had its second reading in the House of Commons in April 2022, aims to protect users by ensuring that online platforms have safety measures in place against harmful and illegal content and covers four new criminal offenses.

In the paper, The Law and Ethics of Virtual Assault, author John Danaher proposes a broader definition of virtual sexual assault, which allows for what he calls the different “sub-types of virtual sexual assault.” Danaher also provides suggestions on when virtual acts should be criminalized and how virtual sexual assault can be criminalized. The paper also touches on topics like consent and criminal responsibility for such crimes.

There’s even a short film that brings to light pressing metaverse concerns. Privacy Lost aims to educate policymakers about the potential dangers, such as manipulation, that come with emerging technologies.

While many legal issues in the virtual world are resolved through criminal courts and tort systems, according to Gamma Law’s David B. Hoppe, these approaches lack the nuance and context needed for such disputes. Hoppe remarks that real-world laws may not have the specificity to tackle new privacy issues in XR spaces, and he sees a need for a more nuanced legal strategy and tailored legal documents to help protect users in XR spaces.

Issues with Existing Cyber Laws

The novelty of AR and VR technologies makes it challenging to implement legislation. However, for users to maximize the benefits of such technologies, their needs should be considered by developers, policymakers, and organizations that implement them. While cyber laws are in place, persistent issues still need to be tackled, such as challenges in executing sanctions for offenders and the lack of adequate responses.

The United Nations Office on Drugs and Crime (UNODC) also cites several obstacles to cybercrime investigations: the anonymity that many technologies afford users; attribution, or determining who or what is responsible for a crime; and traceback, which can be time-consuming. The UNODC also notes that the lack of coordinated national cybercrime laws and of international standards for evidence can hamper investigations.

Creating Safer XR Spaces for Users

Based on guidelines provided by the World Economic Forum, there are several key issues that legislators should consider. These include how laws and regulations apply to XR conduct governed by private platforms and how rules can potentially apply when an XR user’s activities have direct, real-world effects.

The XR Association (XRA) has also provided guidelines to help create safe and inclusive immersive spaces. Its conduct policy tips to address abuse include creating tailored policies that align with a business’ product and community and including notifications of possible violations. Moreover, the XRA has been proactive in rolling out measures for the responsible development and adoption of XR. For instance, it has held discussions on user privacy and safety in mixed reality spaces, zeroing in on how developers, policymakers, and organizations can better promote privacy, safety, and inclusion, as well as tackle issues that are unique to XR spaces. It also works with XRA member companies to create guidelines for age-appropriate use of XR technology, helping develop safer virtual spaces for younger users.

Other Key Players in XR Safety

Aside from the XRA, other organizations are also taking steps to create safer XR spaces. X Reality Safety Intelligence (XRSI), formerly known as X Reality Safety Initiative, is one of the world’s leading organizations focused on providing intelligence and advisory services to promote the safety and well-being of ecosystems for emerging technologies.

It has created a number of programs that help tackle critical issues and risks in the metaverse, focusing on aspects like diversity and inclusion, trustworthy journalism, and child safety. For instance, the organization has shown support for the Kids PRIVACY Act, proposed legislation that aims to implement more robust measures to protect younger users online.

XRSI has also published research and shared guidelines to create standards for XR spaces. It has partnered with Standards Australia to create the first-ever Metaverse Standards whitepaper, which serves as a guide for standards in the metaverse to protect users against risks unique to it, categorized as Human Risks, Regulatory Risks, Financial Risks, and Legal Risks, among others.

The whitepaper is a collaborative effort that brings together cybersecurity experts, VR and AR pioneers, strategists, and AI and metaverse specialists. One of its authors, Dr. Catriona Wallace, is the founder of the social enterprise The Responsible Metaverse Alliance. Cybersecurity professional Kavya Pearlman, the founder and CEO of XRSI, is also one of its authors. Pearlman works with various organizations and governments, advising on policymaking and cybersecurity to help keep users safe in emerging technology ecosystems.

One issue highlighted by XRSI is the set of risks that come with XR data collection in three areas: medical XR and healthcare, learning and education, and employment and work. Its report describes how emerging technologies create new privacy and safety concerns, with risks such as a lack of inclusivity, a lack of equality in education, and a lack of experience in handling data collected in XR spaces already cropping up.

In light of these issues, the XRSI has created goals and guidelines to help address these risks. Some of the goals include establishing a standards-based workflow to manage XR-collected data and adopting a new approach to classifying such data.

The EU is also taking steps to ensure data protection in emerging technologies, with new EU laws aiming to complement the GDPR’s requirements for XR technologies and services. Moreover, the EU data protection law applies to most XR technologies, particularly for commercial applications. It’s possible that a user’s explicit consent may be required to make data processing operations legitimate.

According to the Information Technology & Innovation Foundation (ITIF), policymakers need to mitigate so-called regulatory uncertainty by making it clear how and when laws apply to AR and VR technologies. The same ITIF report stresses that they need to collaborate with stakeholder communities and industry leaders to create and implement comprehensive guidelines and clear standards for AR and VR use.

However, while creating safer XR spaces is of utmost importance, the ITIF also highlights the risks of over-regulation, which can stifle the development of new technologies. To mitigate this risk, policymakers can instead focus on developing regulations that help promote innovation in the field, such as creating best practices for law enforcement agencies to tackle cybercrime and focusing on funding for user safety research.

Moreover, the ITIF also provides some guidelines regarding privacy concerns from AR in public spaces, as well as what steps leaders and policymakers could take to mitigate the risks and challenges that come with the use of immersive technologies.

The EFF also shares that governments need to execute or update data protection legislation to protect users and their data.

There is still a long way to go when applying real-world laws to XR spaces. However, many organizations, policymakers, and stakeholders are already taking steps to help make such spaces safer for users.



“PRIVACY LOST”: New Short Film Shows Metaverse Concerns

Experts have been warning that, as exciting as AI and the metaverse are, these emerging technologies may have negative effects if used improperly. However, it seems like the promise of these technologies may be easier to convey than some of the concerns. A new short film, titled PRIVACY LOST, is a theatrical exploration of some of those concerns.

To learn more, ARPost talked with the writer of PRIVACY LOST – CEO and Chief Scientist of Unanimous AI and a long-time emerging technology engineer and commentator, Dr. Louis Rosenberg.

PRIVACY LOST

Parents and their son sit in a restaurant. The parents are wearing slim AR glasses while the child plays on a tablet.

As the parents argue with one another, their glasses display readouts of the other’s emotional state. The husband is made aware when his wife is getting angry and the wife is made aware when her husband is lying.


A waiter appears and the child puts down the tablet and puts on a pair of AR glasses. The actual waiter never appears on screen but appears to the husband as a pleasant-looking tropical server, to the wife as a fit surf-bro, and to the child as an animated stuffed bear.


Just as the husband and wife used emotional information about one another to try to navigate their argument, the waiter uses emotional information to try to most effectively sell menu items – aided through 3D visual samples. The waiter takes drink orders and leaves. The couple resumes arguing.


PRIVACY LOST presents what could be a fairly typical scene in the near future. But, should it be?

“It’s short and clean and simple, which is exactly what we aimed for – a quick way to take the complex concept of AI-powered manipulation and make it easily digestible by anyone,” Rosenberg says of PRIVACY LOST.

Creating the Film

“I’ve been developing VR, AR, and AI for over 30 years because I am convinced they will make computing more natural and human,” said Rosenberg. “I’m also keenly aware that these technologies can be abused in very dangerous ways.”

For as long as Rosenberg has been developing these technologies, he has been warning about their potential societal ramifications. However, for much of that career, people have viewed his concerns as largely theoretical. As first the metaverse and now AI have developed and attained their moments in the media, Rosenberg’s concerns take on a new urgency.

“ChatGPT happened and suddenly these risks no longer seemed theoretical,” said Rosenberg. “Almost immediately, I got flooded by interest from policymakers and regulators who wanted to better understand the potential for AI-powered manipulation in the metaverse.”

Rosenberg reached out to the Responsible Metaverse Alliance. With support from them, the XR Guild, and XRSI, Rosenberg wrote a script for PRIVACY LOST, which was produced with help from Minderoo Pictures and HeadQ Production & Post.

“The goal of the video, first and foremost, is to educate and motivate policymakers and regulators about the manipulative dangers that will emerge as AI technologies are unleashed in immersive environments,” said Rosenberg. “At the same time, the video aims to get the public thinking about these issues because it’s the public that motivates policymakers.”

Finding Middle Ground

While Rosenberg is far from the only person calling for regulation in emerging tech, that concept is still one that many see as problematic.

“Some people think regulation is a dirty word that will hurt the industry. I see it the opposite way,” said Rosenberg. “The one thing that would hurt the industry most of all is if the public loses trust. If regulation makes people feel safe in virtual and augmented worlds, the industry will grow.”

The idea behind PRIVACY LOST isn’t to prevent the development of any of the technologies shown in the video – most of which already exist, even though they don’t work together or to the exact ends displayed in the cautionary vignette. These technologies, like any technology, have the capacity to be useful but could also be used and abused for profit, or worse.

For example, sensors that could be used to determine emotion are already used in fitness apps to allow for more expressive avatars. If this data is communicated to other devices, it could enable the kinds of manipulative behavior shown in PRIVACY LOST. If it is stored and studied over time, it could be used at even greater scales and potentially for more dangerous uses.

“We need to allow for real-time emotional tracking, to make the metaverse more human, but ban the storage and profiling of emotional data, to protect against powerful forms of manipulation,” said Rosenberg. “It’s about finding a smart middle ground and it’s totally doable.”

The Pace of Regulation

Governments around the world respond to emerging technologies in different ways and at different paces, according to Rosenberg. However, across the board, policymakers tend to be “receptive but realistic, which generally means slow.” That’s not for lack of interest or effort – after all, the production of PRIVACY LOST was prompted by policymaker interest in these technologies.

“I’ve been impressed with the momentum in the EU and Australia to push regulation forward, and I am seeing genuine efforts in the US as well,” said Rosenberg. “I believe governments are finally taking these issues very seriously.”

The Fear of (Un)Regulated Tech

Depending on how you view the government, regulation can seem scary. In the case of technology, however, it seems to never be as scary as no regulation. PRIVACY LOST isn’t an exploration of a world where a controlling government prevents technological progress, it’s a view of a world where people are controlled by technology gone bad. And it doesn’t have to be that way.
