privacy

Researchers come up with better idea to prevent AirTag stalking

Apple’s AirTags are meant to help you effortlessly find your keys or track your luggage. But the same features that make them easy to deploy and inconspicuous in your daily life have also allowed them to be abused as a sinister tracking tool that domestic abusers and criminals can use to stalk their targets.

Over the past year, Apple has taken protective steps to notify iPhone and Android users if an AirTag is in their vicinity for a significant amount of time without the presence of its owner’s iPhone, which could indicate that an AirTag has been planted to secretly track their location. Apple hasn’t said exactly how long this time interval is, but to create the much-needed alert system, Apple made some crucial changes to the location privacy design the company originally developed a few years ago for its “Find My” device tracking feature. Researchers from Johns Hopkins University and the University of California, San Diego, say, though, that they’ve developed a cryptographic scheme to bridge the gap—prioritizing detection of potentially malicious AirTags while also preserving maximum privacy for AirTag users.

The Find My system uses both public and private cryptographic keys to identify individual AirTags and manage their location tracking. But Apple developed a particularly thoughtful mechanism to regularly rotate the public device identifier—every 15 minutes, according to the researchers. This way, it would be much more difficult for someone to track your location over time using a Bluetooth scanner to follow the identifier around. This worked well for privately tracking the location of, say, your MacBook if it was lost or stolen, but the downside of constantly changing this identifier for AirTags was that it provided cover for the tiny devices to be deployed abusively.
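
To make the rotation mechanism concrete, here is a minimal, hypothetical sketch of how a broadcast identifier can be derived from a per-device secret and the current time window. Apple's real Find My protocol uses rotating elliptic-curve keys rather than this toy HMAC construction, but the linking property is similar: only a party holding the secret can connect one identifier to the next.

```python
import hashlib
import hmac
import time

# Hypothetical constant: the researchers describe a 15-minute rotation window
# while the AirTag is near its owner's device.
ROTATION_SECONDS = 15 * 60


def current_identifier(device_secret: bytes, now=None) -> str:
    """Derive the identifier broadcast during the current time window.

    Toy HMAC-based construction, not Apple's actual scheme. Without
    device_secret, successive identifiers look unrelated; the owner can
    recompute them for any window.
    """
    if now is None:
        now = time.time()
    window = int(now // ROTATION_SECONDS)
    return hmac.new(device_secret, str(window).encode(), hashlib.sha256).hexdigest()[:16]


# A passive Bluetooth scanner only ever sees short-lived values like this one;
# the owner, who holds the secret, can regenerate the identifier for any window
# and match it against crowd-sourced sighting reports.
secret = b"example-device-secret"
print(current_identifier(secret))
```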

In reaction to this conundrum, Apple revised the system so an AirTag’s public identifier now only rotates once every 24 hours if the AirTag is away from an iPhone or other Apple device that “owns” it. The idea is that this way other devices can detect potential stalking, but won’t be throwing up alerts all the time if you spend a weekend with a friend who has their iPhone and the AirTag on their keys in their pockets.

In practice, though, the researchers say that these changes have created a situation where AirTags are broadcasting their location to anyone who’s checking within a 30- to 50-foot radius over the course of an entire day—enough time to track a person as they go about their life and get a sense of their movements.

“We had students walk through cities, walk through Times Square and Washington, DC, and lots and lots of people are broadcasting their locations,” says Johns Hopkins cryptographer Matt Green, who worked on the research with a group of colleagues, including Nadia Heninger and Abhishek Jain. “Hundreds of AirTags were not near the device they were registered to, and we’re assuming that most of those were not stalker AirTags.”

Apple has been working with companies like Google, Samsung, and Tile on a cross-industry effort to address the threat of tracking from products similar to AirTags. And for now, at least, the researchers say that the consortium seems to have adopted Apple’s approach of rotating the device public identifiers once every 24 hours. But the privacy trade-off inherent in this solution made the researchers curious about whether it would be possible to design a system that better balanced both privacy and safety.

Google agrees to settle Chrome incognito mode class action lawsuit

Not as private as you thought —

2020 lawsuit accused Google of tracking incognito activity, tying it to users’ profiles.

Google has indicated that it is ready to settle a class-action lawsuit filed in 2020 over its Chrome browser’s Incognito mode. Arising in the Northern District of California, the lawsuit accused Google of continuing to “track, collect, and identify [users’] browsing data in real time” even when they had opened a new Incognito window.

The lawsuit, filed by Florida resident William Byatt and California residents Chasom Brown and Maria Nguyen, accused Google of violating wiretap laws. It also alleged that sites using Google Analytics or Ad Manager collected information from browsers in Incognito mode, including web page content, device data, and IP address. The plaintiffs also accused Google of taking Chrome users’ private browsing activity and then associating it with their already-existing user profiles.

Google initially attempted to have the lawsuit dismissed by pointing to the message displayed when users turned on Chrome’s Incognito mode. That warning tells users that their activity “might still be visible to websites you visit.”

Judge Yvonne Gonzalez Rogers rejected Google’s bid for summary judgment in August, pointing out that Google never revealed to its users that data collection continued even while they browsed in Incognito mode.

“Google’s motion hinges on the idea that plaintiffs consented to Google collecting their data while they were browsing in private mode,” Rogers ruled. “Because Google never explicitly told users that it does so, the Court cannot find as a matter of law that users explicitly consented to the at-issue data collection.”

According to the notice filed on Tuesday, Google and the plaintiffs have agreed to terms that will result in the litigation being dismissed. The agreement is to be presented to the court by the end of January, with final approval expected by the end of February.

UniFi devices broadcasted private video to other users’ accounts

CASE OF MISTAKEN IDENTITY —

“I was presented with 88 consoles from another account,” one user reports.

An assortment of Ubiquiti cameras.

Users of UniFi, the popular line of wireless devices from manufacturer Ubiquiti, are reporting receiving private camera feeds from, and control over, devices belonging to other users, posts published to social media site Reddit over the past 24 hours show.

“Recently, my wife received a notification from UniFi Protect, which included an image from a security camera,” one Reddit user reported. “However, here’s the twist—this camera doesn’t belong to us.”

Stoking concern and anxiety

The post included two images. The first showed a notification pushed to the person’s phone reporting that their UDM Pro, a network controller and network gateway used by tech-enthusiast consumers, had detected someone moving in the backyard. A still shot of video recorded by a connected surveillance camera showed a three-story house surrounded by trees. The second image showed the dashboard belonging to the Reddit user. The user’s connected device was a UDM SE, and the video it captured showed a completely different house.

Less than an hour later, a different Reddit user posting to the same thread replied: “So it’s VERY interesting you posted this, I was just about to post that when I navigated to unifi.ui.com this morning, I was logged into someone else’s account completely! It had my email on the top right, but someone else’s UDM Pro! I could navigate the device, view, and change settings! Terrifying!!”

Two other people took to the same thread to report similar behavior happening to them.

Other Reddit threads posted in the past day reporting UniFi users connecting to private devices or feeds belonging to others are here and here. The first one reported that the Reddit poster gained full access to someone else’s system. The post included two screenshots showing what the poster said was the captured video of an unrecognized business. The other poster reported logging into their Ubiquiti dashboard to find system controls for someone else. “I ended up logging out, clearing cookies, etc seems fine now for me…” the poster wrote.

Yet another person reported the same problem in a post published to Ubiquiti’s community support forum on Thursday, as this Ars story was being reported. The person said they had logged into the UniFi console as part of their daily routine.

“However this time I was presented with 88 consoles from another account,” the person wrote. “I had full access to these consoles, just as I would my own. This was only stopped when I forced a browser refresh, and I was presented again with my consoles.”

Ubiquiti on Thursday said it had identified the glitch and fixed the errors that caused it.

“Specifically, this issue was caused by an upgrade to our UniFi Cloud infrastructure, which we have since solved,” officials wrote. They went on:

1. What happened?

1,216 Ubiquiti accounts (“Group 1”) were improperly associated with a separate group of 1,177 Ubiquiti accounts (“Group 2”).

2. When did this happen?

December 13, from 6:47 AM to 3:45 PM UTC.

3. What does this mean?

During this time, a small number of users from Group 2 received push notifications on their mobile devices from the consoles assigned to a small number of users from Group 1.

Additionally, during this time, a user from Group 2 that attempted to log into his or her account may have been granted temporary remote access to a Group 1 account.

The reports are understandably stoking concern and even anxiety for users of UniFi products, which include wireless access points, switches, routers, controller devices, VoIP phones, and access control products. As the Internet-accessible portals into the local networks of users, UniFi devices provide a means for accessing cameras, mics, and other sensitive resources inside the home.

“I guess I should stop walking around naked in my house now,” a participant in one of the forums joked.

To Ubiquiti’s credit, company employees proactively responded to reports, signaling they took the reports seriously and began actively investigating early on. The employees said the problem has been corrected, and the account mix-ups are no longer occurring.

It’s useful to remember that this sort of behavior—legitimately logging into an account only to find the data or controls belonging to a completely different account—is as old as the Internet. Recent examples include a T-Mobile mistake in September and similar glitches involving Chase Bank, First Virginia Banks, Credit Karma, and Sprint.

The precise root causes of this type of system error vary from incident to incident, but they often involve “middlebox” devices that sit between end users’ devices and the back-end servers they connect to. To improve performance, middleboxes cache certain data, including the credentials of users who have recently logged in. When a caching mix-up occurs, credentials for one account can be mapped to a different account.
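
As an illustration of how that failure mode can arise, here is a minimal, hypothetical sketch in which a per-user response is cached under a key that omits the requesting user’s identity, so one account’s data ends up served to another. It is not Ubiquiti’s code, and the function names and data are invented for the example.

```python
# Hypothetical sketch of the caching failure mode described above; a real
# middlebox or CDN does this at the HTTP layer, but the mistake is the same.

cache = {}


def load_consoles(user_id: str) -> list:
    """Stand-in for a real back-end lookup of the consoles a user owns."""
    return [f"{user_id}-console-1"]


def get_dashboard(user_id: str) -> dict:
    # BUG: the cache key ignores which user is asking, so whoever hits the
    # endpoint first populates the entry that every later user receives.
    key = "/api/dashboard"
    if key not in cache:
        cache[key] = {"owner": user_id, "consoles": load_consoles(user_id)}
    return cache[key]


def get_dashboard_fixed(user_id: str) -> dict:
    # Fix: scope the cache entry to the authenticated user (or mark the
    # response uncacheable) so entries can never cross accounts.
    key = f"/api/dashboard:{user_id}"
    if key not in cache:
        cache[key] = {"owner": user_id, "consoles": load_consoles(user_id)}
    return cache[key]


print(get_dashboard("alice"))        # alice populates the shared entry
print(get_dashboard("bob"))          # bob is handed alice's dashboard: the bug
print(get_dashboard_fixed("bob"))    # a user-scoped key returns bob's own data
```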

In an email, a Ubiquiti official said company employees are still gathering “information to provide an accurate assessment.”

Dropbox spooks users with new AI features that send data to OpenAI when used

adventures in data consent —

AI feature turned on by default worries users; Dropbox responds to concerns.

Updated

On Wednesday, news quickly spread on social media about a new enabled-by-default Dropbox setting that shares Dropbox data with OpenAI for an experimental AI-powered search feature, but Dropbox says data is only shared if the feature is actively being used. The company adds that user data shared with third-party AI partners isn’t used to train AI models and is deleted within 30 days.

Even with assurances of data privacy laid out by Dropbox on an AI privacy FAQ page, the discovery that the setting had been enabled by default upset some Dropbox users. The setting was first noticed by writer Winifred Burton, who shared information about the Third-party AI setting through Bluesky on Tuesday, and frequent AI critic Karla Ortiz shared more information about it on X.

Wednesday afternoon, Drew Houston, the CEO of Dropbox, apologized for customer confusion in a post on X and wrote, “The third-party AI toggle in the settings menu enables or disables access to DBX AI features and functionality. Neither this nor any other setting automatically or passively sends any Dropbox customer data to a third-party AI service.”

Critics say that communication about the change could have been clearer. AI researcher Simon Willison wrote, “Great example here of how careful companies need to be in clearly communicating what’s going on with AI access to personal data.”

A screenshot of Dropbox's third-party AI feature switch.

Benj Edwards

So why would Dropbox ever send user data to OpenAI anyway? In July, the company announced an AI-powered feature called Dash that allows AI models to perform universal searches across platforms like Google Workspace and Microsoft Outlook.

According to the Dropbox privacy FAQ, the third-party AI opt-out setting is part of the “Dropbox AI alpha,” which is a conversational interface for exploring file contents that involves chatting with a ChatGPT-style bot using an “Ask something about this file” feature. To make it work, an AI language model similar to the one that powers ChatGPT (like GPT-4) needs access to your files.

According to the FAQ, the third-party AI toggle in your account settings is turned on by default if “you or your team” are participating in the Dropbox AI alpha. Still, multiple Ars Technica staff who had no knowledge of the Dropbox AI alpha found the setting enabled by default when they checked.

In a statement to Ars Technica, a Dropbox representative said, “The third-party AI toggle is only turned on to give all eligible customers the opportunity to view our new AI features and functionality, like Dropbox AI. It does not enable customers to use these features without notice. Any features that use third-party AI offer disclosure of third-party use, and link to settings that they can manage. Only after a customer sees the third-party AI transparency banner and chooses to proceed with asking a question about a file, will that file be sent to a third-party to generate answers. Our customers are still in control of when and how they use these features.”

Right now, the only third-party AI provider for Dropbox is OpenAI, writes Dropbox in the FAQ. “Open AI is an artificial intelligence research organization that develops cutting-edge language models and advanced AI technologies. Your data is never used to train their internal models, and is deleted from OpenAI’s servers within 30 days.” It also says, “Only the content relevant to an explicit request or command is sent to our third-party AI partners to generate an answer, summary, or transcript.”

Disabling the feature is easy if you prefer not to use Dropbox AI features. Log into your Dropbox account on a desktop web browser, then click your profile photo > Settings > Third-party AI. This link may take you to that page more quickly. On that page, click the switch beside “Use artificial intelligence (AI) from third-party partners so you can work faster in Dropbox” to toggle it into the “Off” position.

This story was updated on December 13, 2023, at 5:35 pm ET with clarifications about when and how Dropbox shares data with OpenAI, as well as statements from Dropbox reps and its CEO.

Challenges Behind Applying Real-World Laws to XR Spaces and Ensuring User Safety

Immersive technologies that bridge the gap between the physical and digital worlds can create new business opportunities. However, they also give rise to new challenges in regulation and in applying real-world laws to XR spaces. According to a World Economic Forum report, we have been relatively slow to develop new legal frameworks for emerging technologies like AR and VR.

Common Challenges of Applying Laws to AR and VR

XR technologies like AR and VR are already considered beneficial and are used in industries like medicine and education. However, XR still harbors risks to human rights, according to an Electronic Frontier Foundation (EFF) article.

Issues like data harvesting and online harassment pose real threats to users, and self-regulation when it comes to data protection and ethical guidelines is insufficient in mitigating such risks. Some common challenges that crop up when applying real-world laws to AR and VR include intellectual property, virtual privacy and security, and product liability.

There’s also a need for a new framework tailored to emerging technologies, but legislative attempts at regulation may face several hurdles. It’s also worth noting that while regulation can help keep users safe, it may also hamper the development of such technologies, according to Digikonn co-founder Chirag Prajapati.

Can Real-World Laws Be Applied to XR Spaces?

In an interview with IEEE Spectrum in 2018, Robyn Chatwood, an intellectual property and information technology partner at Dentons Australia, gave an example of an incident in a VR space where a user experienced sexual assault. Unfortunately, Chatwood noted, there are no laws saying that sexual assault in VR is the same as in the real world. Asked when she thinks these issues will be addressed, Chatwood suggested that, in several years, another incident could draw more widespread attention to the problems in XR spaces. It’s also possible that, through increased adoption, society will begin to recognize the need to develop regulations for XR spaces.

On a more positive note, the trend toward regulations for XR spaces has been changing recently. For instance, Meta has rolled out a minimum distance between avatars in Horizon Worlds, its VR social media platform. This boundary prevents other avatars from getting into your avatar’s personal space, and it works by halting a user’s forward movement as they get close to the boundary.

There are also new laws being drafted to protect users in online spaces. In particular, the UK’s Online Safety Bill, which had its second reading in the House of Commons in April 2022, aims to protect users by ensuring that online platforms have safety measures in place against harmful and illegal content and covers four new criminal offenses.

In the paper, The Law and Ethics of Virtual Assault, author John Danaher proposes a broader definition of virtual sexual assault, which allows for what he calls the different “sub-types of virtual sexual assault.” Danaher also provides suggestions on when virtual acts should be criminalized and how virtual sexual assault can be criminalized. The paper also touches on topics like consent and criminal responsibility for such crimes.

There’s even a short film that brings to light pressing metaverse concerns. Privacy Lost aims to educate policymakers about the potential dangers, such as manipulation, that come with emerging technologies.

While many legal issues in the virtual world are resolved through criminal courts and tort systems, according to Gamma Law’s David B. Hoppe, these approaches lack the necessary nuance and context to resolve such legal disputes. Hoppe remarks that real-world laws may not have the specificity that will allow them to tackle new privacy issues in XR spaces and shares that there is a need for a more nuanced legal strategy and tailored legal documents to help protect users in XR spaces.

Issues with Existing Cyber Laws

The novelty of AR and VR technologies makes it challenging to implement legislation. However, for users to maximize the benefits of such technologies, their needs should be considered by developers, policymakers, and organizations that implement them. While cyber laws are in place, persistent issues still need to be tackled, such as challenges in executing sanctions for offenders and the lack of adequate responses.

The United Nations Office on Drugs and Crime (UNODC) also cites several obstacles to cybercrime investigations, such as the anonymity some technologies afford users; attribution, or determining who or what is responsible for a crime; and traceback, which can be time-consuming. The UNODC also notes that the lack of coordinated national cybercrime laws and international standards for evidence can hamper cybercrime investigations.

Creating Safer XR Spaces for Users

Based on guidelines provided by the World Economic Forum, there are several key issues that legislators should consider. These include how laws and regulations apply to XR conduct governed by private platforms and how rules can potentially apply when an XR user’s activities have direct, real-world effects.

The XR Association (XRA) has also provided guidelines to help create safe and inclusive immersive spaces. Its conduct policy tips to address abuse include creating tailored policies that align with a business’ product and community and including notifications of possible violations. Moreover, the XRA has been proactive in rolling out measures for the responsible development and adoption of XR. For instance, it has held discussions on user privacy and safety in mixed reality spaces, zeroing in on how developers, policymakers, and organizations can better promote privacy, safety, and inclusion, as well as tackle issues that are unique to XR spaces. It also works with XRA member companies to create guidelines for age-appropriate use of XR technology, helping develop safer virtual spaces for younger users.

Other Key Players in XR Safety

Aside from the XRA, other organizations are also taking steps to create safer XR spaces. X Reality Safety Intelligence (XRSI), formerly known as X Reality Safety Initiative, is one of the world’s leading organizations focused on providing intelligence and advisory services to promote the safety and well-being of ecosystems for emerging technologies.

It has created a number of programs that help tackle critical issues and risks in the metaverse, focusing on aspects like diversity and inclusion, trustworthy journalism, and child safety. For instance, the organization has shown support for the Kids PRIVACY Act, legislation that aims to implement more robust measures to protect younger users online.

XRSI has also published research and shared guidelines to create standards for XR spaces. It has partnered with Standards Australia to create the first-ever Metaverse Standards whitepaper, which serves as a guide for standards in the metaverse to protect users against risks unique to the metaverse. These are categorized as Human Risks, Regulatory Risks, Financial Risks, and Legal Risks, among other metaverse-unique risks.

The whitepaper is a collaborative effort that brings together cybersecurity experts, VR and AR pioneers, strategists, and AI and metaverse specialists. One of its authors, Dr. Catriona Wallace, is the founder of the social enterprise The Responsible Metaverse Alliance. Cybersecurity professional Kavya Pearlman, the founder and CEO of XRSI, is also one of its authors. Pearlman works with various organizations and governments, advising on policymaking and cybersecurity to help keep users safe in emerging technology ecosystems.

One set of issues highlighted by XRSI is the risks that come with XR data collection in three areas: medical XR and healthcare, learning and education, and employment and work. The report describes how emerging technologies create new privacy and safety concerns, with risks such as a lack of inclusivity, a lack of equality in education, and a lack of experience in handling data collected in XR spaces.

In light of these issues, the XRSI has created goals and guidelines to help address these risks. Some of the goals include establishing a standards-based workflow to manage XR-collected data and adopting a new approach to classifying such data.

The EU is also taking steps to ensure data protection in emerging technologies, with new EU laws aiming to complement the GDPR’s requirements for XR technologies and services. Moreover, the EU data protection law applies to most XR technologies, particularly for commercial applications. It’s possible that a user’s explicit consent may be required to make data processing operations legitimate.

According to the Information Technology & Innovation Foundation (ITIF), policymakers need to mitigate so-called regulatory uncertainty by making it clear how and when laws apply to AR and VR technologies. The same ITIF report stresses that they need to collaborate with stakeholder communities and industry leaders to create and implement comprehensive guidelines and clear standards for AR and VR use.

However, while creating safer XR spaces is of utmost importance, the ITIF also highlights the risks of over-regulation, which can stifle the development of new technologies. To mitigate this risk, policymakers can instead focus on developing regulations that help promote innovation in the field, such as creating best practices for law enforcement agencies to tackle cybercrime and focusing on funding for user safety research.

Moreover, the ITIF also provides some guidelines regarding privacy concerns from AR in public spaces, as well as what steps leaders and policymakers could take to mitigate the risks and challenges that come with the use of immersive technologies.

The EFF also argues that governments need to enact or update data protection legislation to protect users and their data.

There is still a long way to go when applying real-world laws to XR spaces. However, many organizations, policymakers, and stakeholders are already taking steps to help make such spaces safer for users.

“PRIVACY LOST”: New Short Film Shows Metaverse Concerns

Experts have been warning that, as exciting as AI and the metaverse are, these emerging technologies may have negative effects if used improperly. However, it seems like the promise of these technologies may be easier to convey than some of the concerns. A new short film, titled PRIVACY LOST, is a theatrical exploration of some of those concerns.

To learn more, ARPost talked with the writer of PRIVACY LOST – CEO and Chief Scientist of Unanimous AI and a long-time emerging technology engineer and commentator, Dr. Louis Rosenberg.

PRIVACY LOST

Parents and their son sit in a restaurant. The parents are wearing slim AR glasses while the child plays on a tablet.

As the parents argue with one another, their glasses display readouts of each other’s emotional state. The husband is made aware when his wife is getting angry, and the wife is made aware when her husband is lying.

A waiter appears and the child puts down the tablet and puts on a pair of AR glasses. The actual waiter never appears on screen but appears to the husband as a pleasant-looking tropical server, to the wife as a fit surf-bro, and to the child as an animated stuffed bear.

Just as the husband and wife used emotional information about one another to try to navigate their argument, the waiter uses emotional information to try to most effectively sell menu items – aided through 3D visual samples. The waiter takes drink orders and leaves. The couple resumes arguing.

PRIVACY LOST presents what could be a fairly typical scene in the near future. But, should it be?

“It’s short and clean and simple, which is exactly what we aimed for – a quick way to take the complex concept of AI-powered manipulation and make it easily digestible by anyone,” Rosenberg says of PRIVACY LOST.

Creating the Film

“I’ve been developing VR, AR, and AI for over 30 years because I am convinced they will make computing more natural and human,” said Rosenberg. “I’m also keenly aware that these technologies can be abused in very dangerous ways.”

For as long as Rosenberg has been developing these technologies, he has been warning about their potential societal ramifications. However, for much of that career, people have viewed his concerns as largely theoretical. As first the metaverse and now AI have developed and attained their moments in the media, Rosenberg’s concerns take on a new urgency.

“ChatGPT happened and suddenly these risks no longer seemed theoretical,” said Rosenberg. “Almost immediately, I got flooded by interest from policymakers and regulators who wanted to better understand the potential for AI-powered manipulation in the metaverse.”

Rosenberg reached out to the Responsible Metaverse Alliance. With support from them, the XR Guild, and XRSI, Rosenberg wrote a script for PRIVACY LOST, which was produced with help from Minderoo Pictures and HeadQ Production & Post.

“The goal of the video, first and foremost, is to educate and motivate policymakers and regulators about the manipulative dangers that will emerge as AI technologies are unleashed in immersive environments,” said Rosenberg. “At the same time, the video aims to get the public thinking about these issues because it’s the public that motivates policymakers.”

Finding Middle Ground

While Rosenberg is far from the only person calling for regulation in emerging tech, that concept is still one that many see as problematic.

“Some people think regulation is a dirty word that will hurt the industry. I see it the opposite way,” said Rosenberg. “The one thing that would hurt the industry most of all is if the public loses trust. If regulation makes people feel safe in virtual and augmented worlds, the industry will grow.”

The idea behind PRIVACY LOST isn’t to prevent the development of any of the technologies shown in the video – most of which already exist, even though they don’t work together or to the exact ends displayed in the cautionary vignette. These technologies, like any technology, have the capacity to be useful but could also be used and abused for profit, or worse.

For example, sensors that could be used to determine emotion are already used in fitness apps to allow for more expressive avatars. If this data is communicated to other devices, it could enable the kinds of manipulative behavior shown in PRIVACY LOST. If it is stored and studied over time, it could be used at even greater scales and potentially for more dangerous uses.

“We need to allow for real-time emotional tracking, to make the metaverse more human, but ban the storage and profiling of emotional data, to protect against powerful forms of manipulation,” said Rosenberg. “It’s about finding a smart middle ground and it’s totally doable.”

The Pace of Regulation

Governments around the world respond to emerging technologies in different ways and at different paces, according to Rosenberg. However, across the board, policymakers tend to be “receptive but realistic, which generally means slow.” That’s not for lack of interest or effort – after all, the production of PRIVACY LOST was prompted by policymaker interest in these technologies.

“I’ve been impressed with the momentum in the EU and Australia to push regulation forward, and I am seeing genuine efforts in the US as well,” said Rosenberg. “I believe governments are finally taking these issues very seriously.”

The Fear of (Un)Regulated Tech

Depending on how you view the government, regulation can seem scary. In the case of technology, however, it seems never to be as scary as no regulation. PRIVACY LOST isn’t an exploration of a world where a controlling government prevents technological progress; it’s a view of a world where people are controlled by technology gone bad. And it doesn’t have to be that way.
