

“MyTerms” wants to become the new way we dictate our privacy on the web

Searls and his group are drawing up the standard and letting the browsers, extension makers, website managers, mobile platforms, and other pieces of the tech stack craft the tools. So long as the human is the first party to a contract, the digital entity is the second, a “disinterested non-profit” provides the roster of agreements, and both sides keep records of what they agreed to, the function can take whatever shape the Internet decides.

Terms offered, not requests submitted

The standard from Searls and his group is a plea for a sensible alternative to the modern reality of accessing web information. It asks us to stop pretending that we’re all reading agreements stuffed full of opaque language, agreeing to thousands upon thousands of words’ worth of terms every day and willingly offering up information about ourselves. And, of course, it invites the question of whether it is destined to become another version of Do Not Track.

Do Not Track was a request, while MyTerms is inherently a demand. Websites and services could, of course, simply refuse to show or provide content and data if a MyTerms agent is present, or they could ask or demand that people set the least restrictive terms.

There is nothing inherently wrong with setting up a user-first privacy scheme and pushing for sites and software to do the right thing and abide by it. People may choose to stick to search engines and sites that agree to MyTerms. Time will tell if MyTerms can gain the kind of leverage Searls is aiming for.



Privacy-problematic DeepSeek pulled from app stores in South Korea

In a media briefing held Monday, the South Korean Personal Information Protection Commission indicated that it had paused new downloads within the country of Chinese AI startup DeepSeek’s mobile app. The restriction took effect on Saturday and doesn’t affect South Korean users who already have the app installed on their devices. The DeepSeek service also remains accessible in South Korea via the web.

Per Reuters, PIPC explained that representatives from DeepSeek acknowledged the company had “partially neglected” some of its obligations under South Korea’s data protection laws, which provide South Koreans some of the strictest privacy protections globally.

PIPC investigation division director Nam Seok is quoted by the Associated Press as saying DeepSeek “lacked transparency about third-party data transfers and potentially collected excessive personal information.” DeepSeek reportedly has dispatched a representative to South Korea to work through any issues and bring the app into compliance.

It’s unclear how long the app will remain unavailable in South Korea, with PIPC saying only that the privacy issues it identified with the app might take “a considerable amount of time” to resolve.

Western infosec sources have also expressed dissatisfaction with aspects of DeepSeek’s security. Mobile security company NowSecure reported two weeks ago that the app sends information unencrypted to servers located in China and controlled by TikTok owner ByteDance; the week before that, another security company found an open, web-accessible database filled with DeepSeek customer chat history and other sensitive data.

Ars attempted to ask DeepSeek’s DeepThink (R1) model about the Tiananmen Square massacre or its favorite “Winnie the Pooh” movie, but the LLM continued to have no comment.



DeepSeek iOS app sends data unencrypted to ByteDance-controlled servers


Apple’s defenses that protect data from being sent in the clear are globally disabled.

A little over two weeks ago, a largely unknown China-based company named DeepSeek stunned the AI world with the release of an open source AI chatbot that had simulated reasoning capabilities that were largely on par with those from market leader OpenAI. Within days, the DeepSeek AI assistant app climbed to the top of the iPhone App Store’s “Free Apps” category, overtaking ChatGPT.

On Thursday, mobile security company NowSecure reported that the app sends sensitive data over unencrypted channels, making the data readable to anyone who can monitor the traffic. More sophisticated attackers could also tamper with the data while it’s in transit. Apple strongly encourages iPhone and iPad developers to enforce encryption of data sent over the wire using ATS (App Transport Security). For unknown reasons, that protection is globally disabled in the app, NowSecure said.

Basic security protections MIA

What’s more, the data is sent to servers that are controlled by ByteDance, the Chinese company that owns TikTok. While some of that data is properly encrypted using transport layer security, once it’s decrypted on the ByteDance-controlled servers, it can be cross-referenced with user data collected elsewhere to identify specific users and potentially track queries and other usage.

More technically, the DeepSeek AI chatbot uses an open-weights simulated reasoning model. Its performance is largely comparable with OpenAI’s o1 simulated reasoning (SR) model on several math and coding benchmarks. The feat, which largely took AI industry watchers by surprise, was all the more stunning because DeepSeek reported spending only a small fraction of what OpenAI spent to develop its model.

A NowSecure audit of the app found other behaviors that researchers deemed potentially concerning. For instance, the app uses a symmetric encryption scheme known as 3DES, or triple DES. The scheme was deprecated by NIST following research in 2016 that showed it could be broken in practical attacks to decrypt web and VPN traffic. Another concern is that the symmetric keys, which are identical for every iOS user, are hardcoded into the app and stored on the device.
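To see why a key that ships identically inside every copy of an app is a failure regardless of the cipher, consider the following sketch. It uses a toy hash-based stream cipher as a stand-in for 3DES (the weakness illustrated is the key handling, not the algorithm), and all names here are hypothetical:

```python
import hashlib

# A key hardcoded into the app binary: every install carries the same bytes,
# so anyone who decompiles ONE copy of the app has every user's key.
HARDCODED_KEY = b"same-key-in-every-install"

def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    # Derive a deterministic pseudorandom keystream from key + nonce + counter.
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:length]

def encrypt(key: bytes, nonce: bytes, data: bytes) -> bytes:
    # XOR stream cipher: applying it twice with the same keystream decrypts.
    return bytes(a ^ b for a, b in zip(data, keystream(key, nonce, len(data))))

# Any user's app encrypts a query with the shared key...
ciphertext = encrypt(HARDCODED_KEY, b"nonce-1", b"sensitive user query")

# ...so an attacker who extracted the key from their own copy of the app
# can decrypt every other user's traffic as well.
recovered = encrypt(HARDCODED_KEY, b"nonce-1", ciphertext)
print(recovered)  # b'sensitive user query'
```

Per-user keys negotiated at runtime (or simply TLS) avoid this: extracting one device's secret would then compromise only that device.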

The app is “not equipped or willing to provide basic security protections of your data and identity,” NowSecure co-founder Andrew Hoog told Ars. “There are fundamental security practices that are not being observed, either intentionally or unintentionally. In the end, it puts your and your company’s data and identity at risk.”

Hoog said the audit is not yet complete, so there are many questions and details left unanswered or unclear. He said the findings were concerning enough that NowSecure wanted to disclose what is currently known without delay.

In a report, he wrote:

NowSecure recommends that organizations remove the DeepSeek iOS mobile app from their environment (managed and BYOD deployments) due to privacy and security risks, such as:

  1. Privacy issues due to insecure data transmission
  2. Vulnerability issues due to hardcoded keys
  3. Data sharing with third parties such as ByteDance
  4. Data analysis and storage in China

Hoog added that the DeepSeek app for Android is even less secure than its iOS counterpart and should also be removed.

Representatives for both DeepSeek and Apple didn’t respond to an email seeking comment.

The data sent entirely in the clear during the app’s initial registration includes:

  • organization id
  • the version of the software development kit used to create the app
  • user OS version
  • language selected in the configuration

Apple strongly encourages developers to implement ATS to ensure the apps they submit don’t transmit any data insecurely over HTTP channels. For reasons that Apple hasn’t explained publicly, Hoog said, this protection isn’t mandatory. DeepSeek has yet to explain why ATS is globally disabled in the app or why it uses no encryption when sending this information over the wire.
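The behavior NowSecure describes corresponds to the standard ATS override mechanism: an app opts out of ATS globally by setting a key in its Info.plist. As a sketch of what such a (discouraged) configuration looks like in general — not DeepSeek’s actual file:

```xml
<!-- Info.plist fragment: globally disables App Transport Security,
     allowing the app to make plaintext HTTP connections to any host.
     App Store review can ask developers to justify this key. -->
<key>NSAppTransportSecurity</key>
<dict>
    <key>NSAllowsArbitraryLoads</key>
    <true/>
</dict>
```

Apple also supports narrower per-domain exceptions (`NSExceptionDomains`), which is why a blanket `NSAllowsArbitraryLoads` is rarely defensible.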

This data, along with a mix of other encrypted information, is sent to DeepSeek over infrastructure provided by Volcengine, a cloud platform developed by ByteDance. While the IP address the app connects to geo-locates to the US and is owned by US-based telecom Level 3 Communications, the DeepSeek privacy policy makes clear that the company “store[s] the data we collect in secure servers located in the People’s Republic of China.” The policy further states that DeepSeek:

may access, preserve, and share the information described in “What Information We Collect” with law enforcement agencies, public authorities, copyright holders, or other third parties if we have good faith belief that it is necessary to:

• comply with applicable law, legal process or government requests, as consistent with internationally recognised standards.

NowSecure still doesn’t know precisely the purpose of the app’s use of 3DES encryption functions. The fact that the key is hardcoded into the app, however, is a major security failure that’s been recognized for more than a decade when building encryption into software.

No good reason

NowSecure’s Thursday report adds to a growing list of safety and privacy concerns that have already been reported by others.

One was the terms spelled out in the above-mentioned privacy policy. Another came last week in a report from researchers at Cisco and the University of Pennsylvania. It found that DeepSeek R1, the simulated reasoning model, failed to block a single one of 50 malicious prompts designed to generate toxic content.

A third concern is research from security firm Wiz that uncovered a publicly accessible, fully controllable database belonging to DeepSeek. It contained more than 1 million instances of “chat history, backend data, and sensitive information, including log streams, API secrets, and operational details,” Wiz reported. An open web interface also allowed for full database control and privilege escalation, with internal API endpoints and keys available through the interface and common URL parameters.

Thomas Reed, staff product manager for Mac endpoint detection and response at security firm Huntress, and an expert in iOS security, said he found NowSecure’s findings concerning.

“ATS being disabled is generally a bad idea,” he wrote in an online interview. “That essentially allows the app to communicate via insecure protocols, like HTTP. Apple does allow it, and I’m sure other apps probably do it, but they shouldn’t. There’s no good reason for this in this day and age.”

He added: “Even if they were to secure the communications, I’d still be extremely unwilling to send any remotely sensitive data that will end up on a server that the government of China could get access to.”

HD Moore, founder and CEO of runZero, said he was less concerned about ByteDance or other Chinese companies having access to data.

“The unencrypted HTTP endpoints are inexcusable,” he wrote. “You would expect the mobile app and their framework partners (ByteDance, Volcengine, etc) to hoover device data, just like anything else—but the HTTP endpoints expose data to anyone in the network path, not just the vendor and their partners.”

On Thursday, US lawmakers began pushing to immediately ban DeepSeek from all government devices, citing national security concerns that the Chinese Communist Party may have built a backdoor into the service to access Americans’ sensitive private data. If passed, DeepSeek could be banned within 60 days.

This story was updated to add further examples of security concerns regarding DeepSeek.


Dan Goodin is Senior Security Editor at Ars Technica, where he oversees coverage of malware, computer espionage, botnets, hardware hacking, encryption, and passwords. In his spare time, he enjoys gardening, cooking, and following the independent music scene. Dan is based in San Francisco. Follow him on Mastodon and Bluesky. Contact him on Signal at DanArs.82.



Millions of Subarus could be remotely unlocked, tracked due to security flaws


Flaws also allowed access to one year of location history.

About a year ago, security researcher Sam Curry bought his mother a Subaru, on the condition that, at some point in the near future, she let him hack it.

It took Curry until last November, when he was home for Thanksgiving, to begin examining the 2023 Impreza’s Internet-connected features and start looking for ways to exploit them. Sure enough, he and a researcher working with him online, Shubham Shah, soon discovered vulnerabilities in a Subaru web portal that let them hijack the ability to unlock the car, honk its horn, and start its ignition, reassigning control of those features to any phone or computer they chose.

Most disturbing for Curry, though, was that they found they could also track the Subaru’s location—not merely where it was at the moment but also where it had been for the entire year that his mother had owned it. The map of the car’s whereabouts was so accurate and detailed, Curry says, that he was able to see her doctor visits, the homes of the friends she visited, even which exact parking space his mother parked in every time she went to church.

A year of location data for Sam Curry’s mother’s 2023 Subaru Impreza that Curry and Shah were able to access in Subaru’s employee admin portal thanks to its security vulnerabilities.

Credit: Sam Curry


“You can retrieve at least a year’s worth of location history for the car, where it’s pinged precisely, sometimes multiple times a day,” Curry says. “Whether somebody’s cheating on their wife or getting an abortion or part of some political group, there are a million scenarios where you could weaponize this against someone.”

Curry and Shah today revealed in a blog post their method for hacking and tracking millions of Subarus, which they believe would have allowed hackers to target any of the company’s vehicles equipped with its digital features known as Starlink in the US, Canada, or Japan. Vulnerabilities they found in a Subaru website intended for the company’s staff allowed them to hijack an employee’s account to both reassign control of cars’ Starlink features and also access all the vehicle location data available to employees, including the car’s location every time its engine started, as shown in their video below.

Curry and Shah reported their findings to Subaru in late November, and Subaru quickly patched its Starlink security flaws. But the researchers warn that the Subaru web vulnerabilities are just the latest in a long series of similar web-based flaws they and other security researchers working with them have found that have affected well over a dozen carmakers, including Acura, Genesis, Honda, Hyundai, Infiniti, Kia, Toyota, and many others. There’s little doubt, they say, that similarly serious hackable bugs exist in other auto companies’ web tools that have yet to be discovered.

In Subaru’s case, in particular, they also point out that their discovery hints at how pervasively those with access to Subaru’s portal can track its customers’ movements, a privacy issue that will last far longer than the web vulnerabilities that exposed it. “The thing is, even though this is patched, this functionality is still going to exist for Subaru employees,” Curry says. “It’s just normal functionality that an employee can pull up a year’s worth of your location history.”

When WIRED reached out to Subaru for comment on Curry and Shah’s findings, a spokesperson responded in a statement that “after being notified by independent security researchers, [Subaru] discovered a vulnerability in its Starlink service that could potentially allow a third party to access Starlink accounts. The vulnerability was immediately closed and no customer information was ever accessed without authorization.”

The Subaru spokesperson also confirmed to WIRED that “there are employees at Subaru of America, based on their job relevancy, who can access location data.” The company offered as an example that employees have that access to share a vehicle’s location with first responders in the case when a collision is detected. “All these individuals receive proper training and are required to sign appropriate privacy, security, and NDA agreements as needed,” Subaru’s statement added. “These systems have security monitoring solutions in place which are continually evolving to meet modern cyber threats.”

Responding to Subaru’s example of notifying first responders about a collision, Curry notes that would hardly require a year’s worth of location history. The company didn’t respond to WIRED asking how far back it keeps customers’ location histories and makes them available to employees.

Shah and Curry’s research that led them to the discovery of Subaru’s vulnerabilities began when they found that Curry’s mother’s Starlink app connected to the domain SubaruCS.com, which they realized was an administrative domain for employees. Scouring that site for security flaws, they found that they could reset employees’ passwords simply by guessing their email address, which gave them the ability to take over any employee’s account whose email they could find. The password reset functionality did ask for answers to two security questions, but they found that those answers were checked with code that ran locally in a user’s browser, not on Subaru’s server, allowing the safeguard to be easily bypassed. “There were really multiple systemic failures that led to this,” Shah says.
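The client-side check the researchers describe is a classic pattern, sketched below with hypothetical names (not Subaru’s actual code): the broken server trusts a verdict computed in the user’s browser, while the correct one compares the submitted answers against its own records.

```python
# Server-side record of a (hypothetical) employee's security answers.
SECURITY_ANSWERS = {"first_pet": "rex", "birth_city": "tokyo"}

def reset_password_insecure(request: dict) -> bool:
    # Broken: the browser's JavaScript checked the answers locally and the
    # server only sees a boolean flag. An attacker simply sends
    # answers_ok=True in a hand-crafted request and bypasses the check.
    return request.get("answers_ok") is True

def reset_password_secure(request: dict) -> bool:
    # Correct: the server verifies the submitted answers itself and trusts
    # nothing the client asserts about its own validity.
    submitted = request.get("answers", {})
    return all(submitted.get(k) == v for k, v in SECURITY_ANSWERS.items())

# A forged request containing no correct answers at all:
forged = {"answers_ok": True, "answers": {}}
print(reset_password_insecure(forged))  # True  -> account takeover
print(reset_password_secure(forged))    # False -> reset denied
```

The general rule is that any input validation relevant to security must be repeated (or done exclusively) on the server, since everything running in the browser is under the attacker’s control.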

The two researchers say they found the email address for a Subaru Starlink developer on LinkedIn, took over the employee’s account, and immediately found that they could use that staffer’s access to look up any Subaru owner by last name, zip code, email address, phone number, or license plate to access their Starlink configurations. In seconds, they could then reassign control of the Starlink features of that user’s vehicle, including the ability to remotely unlock the car, honk its horn, start its ignition, or locate it, as shown in the video below.

Those vulnerabilities alone, for drivers, present serious theft and safety risks. Curry and Shah point out that a hacker could have targeted a victim for stalking or theft, looked up someone’s vehicle’s location, then unlocked their car at any time—though a thief would have to somehow also use a separate technique to disable the car’s immobilizer, the component that prevents it from being driven away without a key.

Those car hacking and tracking techniques alone are far from unique. Last summer, Curry and another researcher, Neiko Rivera, demonstrated to WIRED that they could pull off a similar trick with any of millions of vehicles sold by Kia. Over the prior two years, a larger group of researchers, of which Curry and Shah are a part, discovered web-based security vulnerabilities that affected cars sold by Acura, BMW, Ferrari, Genesis, Honda, Hyundai, Infiniti, Mercedes-Benz, Nissan, Rolls Royce, and Toyota.

More unusual in Subaru’s case, Curry and Shah say, is that they were able to access fine-grained, historical location data for Subarus going back at least a year. Subaru may in fact collect multiple years of location data, but Curry and Shah tested their technique only on Curry’s mother, who had owned her Subaru for about a year.

Curry argues that Subaru’s extensive location tracking is a particularly disturbing demonstration of the car industry’s lack of privacy safeguards around its growing collection of personal data on drivers. “It’s kind of bonkers,” he says. “There’s an expectation that a Google employee isn’t going to be able to just go through your emails in Gmail, but there’s literally a button on Subaru’s admin panel that lets an employee view location history.”

The two researchers’ work contributes to a growing sense of concern over the enormous amount of location data that car companies collect. In December, information a whistleblower provided to the German hacker collective the Chaos Computer Club and Der Spiegel revealed that Cariad, a software company that partners with Volkswagen, had left detailed location data for 800,000 electric vehicles publicly exposed online. Privacy researchers at the Mozilla Foundation in September warned in a report that “modern cars are a privacy nightmare,” noting that 92 percent give car owners little to no control over the data they collect, and 84 percent reserve the right to sell or share your information. (Subaru tells WIRED that it “does not sell location data.”)

“While we worried that our doorbells and watches that connect to the Internet might be spying on us, car brands quietly entered the data business by turning their vehicles into powerful data-gobbling machines,” Mozilla’s report reads.

Curry and Shah’s discovery of Subaru’s security vulnerabilities in its tracking system demonstrates a particularly egregious exposure of that data—but also a privacy problem that’s hardly less disturbing now that the vulnerabilities are patched, says Robert Herrell, the executive director of the Consumer Federation of California, which has sought to create legislation for limiting a car’s data tracking.

“It seems like there are a bunch of employees at Subaru that have a scary amount of detailed information,” Herrell says. “People are being tracked in ways that they have no idea are happening.”

This story originally appeared on wired.com.


Wired.com is your essential daily guide to what’s next, delivering the most original and complete take you’ll find anywhere on innovation’s impact on technology, science, business and culture.



Time to check if you ran any of these 33 malicious Chrome extensions

Screenshot showing the phishing email sent to Cyberhaven extension developers. Credit: Amit Assaraf

A link in the email led to a Google consent screen requesting access permission for an OAuth application named Privacy Policy Extension. A Cyberhaven developer granted the permission and, in the process, unknowingly gave the attacker the ability to upload new versions of Cyberhaven’s Chrome extension to the Chrome Web Store. The attacker then used the permission to push out the malicious version 24.10.4.

Screenshot showing the Google permission request. Credit: Amit Assaraf

As word of the attack spread in the early hours of December 25, developers and researchers discovered that other extensions were targeted, in many cases successfully, by the same spear phishing campaign. John Tuckner, founder of Secure Annex, a browser extension analysis and management firm, said that as of Thursday afternoon, he knew of 19 other Chrome extensions that were similarly compromised. In every case, the attacker used spear phishing to push a new malicious version and custom, look-alike domains to issue payloads and receive authentication credentials. Collectively, the 20 extensions had 1.46 million downloads.

“For many I talk to, managing browser extensions can be a lower priority item in their security program,” Tuckner wrote in an email. “Folks know they can present a threat, but rarely are teams taking action on them. We’ve often seen in security [that] one or two incidents can cause a reevaluation of an organization’s security posture. Incidents like this often result in teams scrambling to find a way to gain visibility and understanding of impact to their organizations.”

The earliest compromise occurred in May 2024. Tuckner provided the following spreadsheet:

Name | ID | Version | Patch Available | Users | Start | End
VPNCity | nnpnnpemnckcfdebeekibpiijlicmpom | 2.0.1 | FALSE | 10,000 | 12/12/24 | 12/31/24
Parrot Talks | kkodiihpgodmdankclfibbiphjkfdenh | 1.16.2 | TRUE | 40,000 | 12/25/24 | 12/31/24
Uvoice | oaikpkmjciadfpddlpjjdapglcihgdle | 1.0.12 | TRUE | 40,000 | 12/26/24 | 12/31/24
Internxt VPN | dpggmcodlahmljkhlmpgpdcffdaoccni | 1.1.1 (fixed in 1.2.0) | TRUE | 10,000 | 12/25/24 | 12/29/24
Bookmark Favicon Changer | acmfnomgphggonodopogfbmkneepfgnh | 4.00 | TRUE | 40,000 | 12/25/24 | 12/31/24
Castorus | mnhffkhmpnefgklngfmlndmkimimbphc | 4.40 (fixed in 4.41) | TRUE | 50,000 | 12/26/24 | 12/27/24
Wayin AI | cedgndijpacnfbdggppddacngjfdkaca | 0.0.11 | TRUE | 40,000 | 12/19/24 | 12/31/24
Search Copilot AI Assistant for Chrome | bbdnohkpnbkdkmnkddobeafboooinpla | 1.0.1 | TRUE | 20,000 | 7/17/24 | 12/31/24
VidHelper – Video Downloader | egmennebgadmncfjafcemlecimkepcle | 2.2.7 | TRUE | 20,000 | 12/26/24 | 12/31/24
AI Assistant – ChatGPT and Gemini for Chrome | bibjgkidgpfbblifamdlkdlhgihmfohh | 0.1.3 | FALSE | 4,000 | 5/31/24 | 10/25/24
TinaMind – The GPT-4o-powered AI Assistant! | befflofjcniongenjmbkgkoljhgliihe | 2.13.0 (fixed in 2.14.0) | TRUE | 40,000 | 12/15/24 | 12/20/24
Bard AI chat | pkgciiiancapdlpcbppfkmeaieppikkk | 1.3.7 | FALSE | 100,000 | 9/5/24 | 10/22/24
Reader Mode | llimhhconnjiflfimocjggfjdlmlhblm | 1.5.7 | FALSE | 300,000 | 12/18/24 | 12/19/24
Primus (prev. PADO) | oeiomhmbaapihbilkfkhmlajkeegnjhe | 3.18.0 (fixed in 3.20.0) | TRUE | 40,000 | 12/18/24 | 12/25/24
Cyberhaven security extension V3 | pajkjnmeojmbapicmbpliphjmcekeaac | 24.10.4 (fixed in 24.10.5) | TRUE | 400,000 | 12/24/24 | 12/26/24
GraphQL Network Inspector | ndlbedplllcgconngcnfmkadhokfaaln | 2.22.6 (fixed in 2.22.7) | TRUE | 80,000 | 12/29/24 | 12/30/24
GPT 4 Summary with OpenAI | epdjhgbipjpbbhoccdeipghoihibnfja | 1.4 | FALSE | 10,000 | 5/31/24 | 9/29/24
Vidnoz Flex – Video recorder & Video share | cplhlgabfijoiabgkigdafklbhhdkahj | 1.0.161 | FALSE | 6,000 | 12/25/24 | 12/29/24
YesCaptcha assistant | jiofmdifioeejeilfkpegipdjiopiekl | 1.1.61 | TRUE | 200,000 | 12/29/24 | 12/31/24
Proxy SwitchyOmega (V3) | hihblcmlaaademjlakdpicchbjnnnkbo | 3.0.2 | TRUE | 10,000 | 12/30/24 | 12/31/24
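Chrome stores each installed extension in a profile directory named by its ID, so a quick local check against the IDs above is possible. The profile paths below are assumptions for a default install with the standard "Default" profile; adjust them for your OS and profile name:

```python
from pathlib import Path

# Extension IDs from the table above.
COMPROMISED_IDS = {
    "nnpnnpemnckcfdebeekibpiijlicmpom",  # VPNCity
    "kkodiihpgodmdankclfibbiphjkfdenh",  # Parrot Talks
    "oaikpkmjciadfpddlpjjdapglcihgdle",  # Uvoice
    "dpggmcodlahmljkhlmpgpdcffdaoccni",  # Internxt VPN
    "acmfnomgphggonodopogfbmkneepfgnh",  # Bookmark Favicon Changer
    "mnhffkhmpnefgklngfmlndmkimimbphc",  # Castorus
    "cedgndijpacnfbdggppddacngjfdkaca",  # Wayin AI
    "bbdnohkpnbkdkmnkddobeafboooinpla",  # Search Copilot AI Assistant
    "egmennebgadmncfjafcemlecimkepcle",  # VidHelper
    "bibjgkidgpfbblifamdlkdlhgihmfohh",  # AI Assistant - ChatGPT and Gemini
    "befflofjcniongenjmbkgkoljhgliihe",  # TinaMind
    "pkgciiiancapdlpcbppfkmeaieppikkk",  # Bard AI chat
    "llimhhconnjiflfimocjggfjdlmlhblm",  # Reader Mode
    "oeiomhmbaapihbilkfkhmlajkeegnjhe",  # Primus (prev. PADO)
    "pajkjnmeojmbapicmbpliphjmcekeaac",  # Cyberhaven security extension V3
    "ndlbedplllcgconngcnfmkadhokfaaln",  # GraphQL Network Inspector
    "epdjhgbipjpbbhoccdeipghoihibnfja",  # GPT 4 Summary with OpenAI
    "cplhlgabfijoiabgkigdafklbhhdkahj",  # Vidnoz Flex
    "jiofmdifioeejeilfkpegipdjiopiekl",  # YesCaptcha assistant
    "hihblcmlaaademjlakdpicchbjnnnkbo",  # Proxy SwitchyOmega (V3)
}

# Default profile locations (assumptions; adjust for your setup).
CANDIDATE_DIRS = [
    Path.home() / ".config/google-chrome/Default/Extensions",                      # Linux
    Path.home() / "Library/Application Support/Google/Chrome/Default/Extensions",  # macOS
    Path.home() / "AppData/Local/Google/Chrome/User Data/Default/Extensions",      # Windows
]

def find_compromised(bases=CANDIDATE_DIRS):
    """Return the compromised extension IDs present under any base directory."""
    hits = []
    for base in bases:
        if base.is_dir():
            hits.extend(e.name for e in base.iterdir() if e.name in COMPROMISED_IDS)
    return hits

if __name__ == "__main__":
    found = find_compromised()
    if found:
        print("Compromised extensions installed:", ", ".join(found))
    else:
        print("None of the listed extension IDs found.")
```

Note that finding an ID only shows the extension is installed; whether the malicious version was ever fetched depends on the version and dates in the table.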

But wait, there’s more

One of the compromised extensions is called Reader Mode. Further analysis showed it had been compromised not just in the campaign targeting the other 19 extensions but in a separate campaign that started no later than April 2023. Tuckner said the source of the compromise appears to be a code library developers can use to monetize their extensions. The code library collects details about each web visit a browser makes. In exchange for incorporating the library into the extensions, developers receive a commission from the library creator.



Booking.com says typos giving strangers access to private trip info is not a bug

For Booking.com, it’s essential that users can book travel for other users by adding their email addresses to a booking because that’s how people frequently book trips together. And if it happens that the email address added to a booking is also linked to an existing Booking.com user, the trip is automatically added to that person’s account. After that, there’s no way for Booking.com to remove the trip from the stranger’s account, even if there’s a typo in the email or if auto-complete adds the wrong email domain and the user booking the trip doesn’t notice.

According to Booking.com, there is nothing to fix because this is not a “system glitch,” and there was no “security breach.” What Alfie encountered is simply the way the platform works, which, like any app where users input information, has the potential for human error.

In the end, Booking.com declined to remove the trip from Alfie’s account, saying that would have violated the privacy of the user booking the trip. The only resolution was for Alfie to remove the trip from his account and pretend it never happened.

Alfie remains concerned, telling Ars, “I can’t help thinking this can’t be the only occurrence of this issue.” But Jacob Hoffman-Andrews, a senior staff technologist for the digital rights group the Electronic Frontier Foundation, told Ars that after talking to other developers, his “gut reaction” is that Booking.com didn’t have a ton of options to prevent typos during bookings.

“There’s only so much they can do to protect people from their own typos,” Hoffman-Andrews said.

One step Booking.com could take to protect privacy

Perhaps the bigger concern exposed by Alfie’s experience, beyond typos, is Booking.com’s practice of automatically adding bookings to accounts linked to email addresses entered by users those account holders don’t know. Once the trip is added to someone’s account, that person can seemingly access sensitive information about the users booking the trip that Booking.com otherwise would not share.
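One mitigation for this pattern is to hold a shared trip in a pending state until the address-holder proves control of the inbox, rather than attaching it to whatever account matches the typed email. The sketch below is hypothetical (not Booking.com’s real API) and elides the actual email sending:

```python
import secrets

pending = {}   # token -> (recipient_email, trip)
accounts = {}  # email -> list of trips attached to that account

def share_trip(email: str, trip: str) -> str:
    # Generate an unguessable token and mail a confirmation link containing
    # it to `email` (sending not shown). Nothing is attached yet.
    token = secrets.token_urlsafe(16)
    pending[token] = (email, trip)
    return token

def confirm(token: str) -> bool:
    # Only when the recipient clicks the link (proving inbox control) does
    # the trip land on their account. A typoed address never confirms, so
    # the trip, and its sensitive details, never reach a stranger's account.
    entry = pending.pop(token, None)
    if entry is None:
        return False
    email, trip = entry
    accounts.setdefault(email, []).append(trip)
    return True

t = share_trip("friend@example.com", "Lisbon, March 12-15")
assert "friend@example.com" not in accounts  # pending until confirmed
confirm(t)
print(accounts["friend@example.com"])  # ['Lisbon, March 12-15']
```

The trade-off is friction for legitimate co-travelers, which may be why platforms prefer automatic attachment; the pending-state design simply bounds the damage a typo can do.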

While engaging with the Booking.com support team member, Alfie told Ars that he “probed for as much information as possible” to find out who was behind the strange booking on his account. And seemingly because the booking was added to Alfie’s account, the support team member had no problem sharing sensitive information that went beyond the full name and last four digits of the credit card used for the booking, which were listed in the trip information by default.



Phone tracking tool lets government agencies follow your every move

Both operating systems will display a list of apps and whether they are permitted access always, never, only while the app is in use, or to prompt for permission each time. Both also allow users to choose whether the app sees precise locations down to a few feet or only a coarse-grained location.

For most users, there’s usefulness in allowing an app for photos, transit or maps to access a user’s precise location. For other classes of apps—say those for Internet jukeboxes at bars and restaurants—it can be helpful for them to have an approximate location, but giving them precise, fine-grained access is likely overkill. And for other apps, there’s no reason for them ever to know the device’s location. With a few exceptions, there’s little reason for apps to always have location access.

Not surprisingly, Android users who want to block intrusive location gathering have more settings to change than iOS users. The first thing to do is open Settings > Security & Privacy > Ads and choose “Delete advertising ID.” Then, promptly ignore the long, scary warning Google provides and hit the button confirming the decision at the bottom. If you don’t see that setting, good for you. It means you already deleted it. Google provides documentation of the process.

iOS, by default, doesn’t give apps access to “Identifier for Advertisers,” Apple’s version of the unique tracking number assigned to iPhones, iPads, and AppleTVs. Apps, however, can display a window asking that the setting be turned on, so it’s useful to check. iPhone users can do this by accessing Settings > Privacy & Security > Tracking. Any apps with permission to access the unique ID will appear. While there, users should also turn off the “Allow Apps to Request to Track” button. While in iOS Privacy & Security, users should navigate to Apple Advertising and ensure Personalized Ads is turned off.

Additional coverage of Location X is available from Haaretz and NOTUS. The New York Times, the other publication given access to the data, hadn’t posted an article at the time this Ars post went live.



Redbox easily reverse-engineered to reveal customers’ names, zip codes, rentals

Thousands of Redboxes getting dumped

It’s worth noting that the amount of data stored on any one Redbox is small compared with Redbox’s overall business. Since Redbox once rented out millions of DVDs weekly, the data retrieved likely represents only a small portion of the business conducted on that specific kiosk. That might not be much comfort to those whose data is left vulnerable, though.

The problem is more alarming when considering how many Redboxes are still out in the wild with uncertain futures. High demand for Redbox removals has resulted in all sorts of people, like Turing, gaining access to kiosk hardware and/or data. For example, The Wall Street Journal reported last week about a “former Redbox employee who convinced a 7-Eleven franchisee” to give him a Redbox, a 19-year-old who persuaded a contractor hauling a kiosk away from a drugstore to give it to him instead, as well as a Redbox landing in an Illinois dumpster.

Consumer privacy concerns

Chicken Soup’s actions may violate consumer privacy regulations, including the Video Privacy Protection Act outlawing “wrongful disclosure of video tape rental or sale records.” However, Chicken Soup’s bankruptcy (most of its assets are in a holding pattern, Lowpass reported) makes customer remediation more complicated and less likely.

Mario Trujillo, staff attorney for the Electronic Frontier Foundation, told Ars that this incident “highlights the importance of security research in uncovering flaws that can leave customers unprotected.”

“While it may be hard to hold a bankrupt company accountable, uncovering the flaw is the first step,” he added.

Turing, who reverse-engineers a lot of tech, said that the privacy problem she encountered with Redbox storage “isn’t terribly uncommon.”

Overall, the situation underscores the need for stricter controls around consumer data, whether imposed internally by companies or, as some would argue, through government regulation.

“This security flaw is a reminder that all companies should be obligated to minimize the amount of data they collect and retain in the first place,” Trujillo said. “We need strong data privacy laws to do that.”

Redbox easily reverse-engineered to reveal customers’ names, zip codes, rentals Read More »

android-15’s-security-and-privacy-features-are-the-update’s-highlight

Android 15’s security and privacy features are the update’s highlight

Android 15 started rolling out to Pixel devices Tuesday and will arrive, through various third-party efforts, on other Android devices at some point. There are always a bunch of little changes to discover in an Android release, whether by reading, poking around, or letting your phone show you 25 new things after it restarts.

In Android 15, some of the most notable changes involve making your device less appealing to snoops and thieves, and more secure against the kids you hand your phone to keep them quiet at dinner. There are also smart fixes for screen sharing, OTP codes, and cellular hacking prevention, but details about them are spread across Google’s own docs and blogs and various news sites’ reports.

Here’s what is notable and new in how Android 15 handles privacy and security.

Private Space for apps

In the Android 15 settings, you can find “Private Space,” where you can set up a separate PIN code, password, biometric check, and optional Google account for apps you don’t want to be available to anybody who happens to have your phone. This could add a layer of protection onto sensitive apps, like banking and shopping apps, or hide other apps for whatever reason.

In your list of apps, drag any app down to the lock space that now appears in the bottom right. The space shows only as a lock until you unlock it; then you’ll see the apps available in your new Private Space. After moving an app there, you should probably delete the original from the main app list. Dave Taylor has a rundown of the process and its quirks.

It’s obviously more involved than Apple’s “Hide and Require Face ID” tap option but with potentially more robust hiding of the app.

Hiding passwords and OTP codes

A second form of authentication is good security, but allowing apps to access the notification text with the code in it? Not so good. In Android 15, a new permission, likely to be given only to the most critical apps, prevents the leaking of one-time passcodes (OTPs) to other apps waiting for them. Sharing your screen will also hide OTP notifications, along with usernames, passwords, and credit card numbers.

Android 15’s security and privacy features are the update’s highlight Read More »

nintendo’s-new-clock-tracks-your-movement-in-bed

Nintendo’s new clock tracks your movement in bed

The motion detectors reportedly work with various bed sizes, from twin to king. As users shift position, the clock’s display responds by moving on-screen characters from left to right and playing sound effects from Nintendo video games based on different selectable themes.

A photo of Nintendo Sound Clock Alarmo. Credit: Nintendo

The Verge’s Chris Welch examined the new device at Nintendo’s New York City store shortly after its announcement, noting that setting up Alarmo involves a lengthy process of configuring its motion-detection features. The setup cannot be skipped and might prove challenging for younger users. The clock prompts users to input the date, time, and bed-related information to calibrate its sensors properly. Even so, Welch described “small, thoughtful Nintendo touches throughout the experience.”

Themes and sounds

Beyond motion tracking, the clock has a few other tricks up its sleeve. Its screen brightness adjusts automatically based on ambient light levels, and users can control Alarmo through buttons on top, including a large dial for navigation and selection.

The device’s full-color rectangular display shows the time and 35 different scenes that feature animated Nintendo characters from games like the aforementioned Super Mario Odyssey, The Legend of Zelda: Breath of the Wild, and Splatoon 3, as well as Pikmin 4 and Ring Fit Adventure.

A promotional image for a Super Mario Odyssey theme for the Nintendo Sound Clock Alarmo. Nintendo

Alarmo also offers sleep sounds to help users doze off. Nintendo plans to release additional downloadable sounds and themes for the device in the future using its built-in Wi-Fi capabilities, which are accessible after linking a Nintendo account. The Nintendo website mentions upcoming themes for Mario Kart 8 Deluxe and Animal Crossing: New Horizons in particular.

As of today, Nintendo Online members can order an Alarmo online, and as mentioned above, Nintendo says the clock will be available through other retailers in January 2025.

Nintendo’s new clock tracks your movement in bed Read More »

neo-nazis-head-to-encrypted-simplex-chat-app,-bail-on-telegram

Neo-Nazis head to encrypted SimpleX Chat app, bail on Telegram

“SimpleX, at its core, is designed to be truly distributed with no central server. This allows for enormous scalability at low cost, and also makes it virtually impossible to snoop on the network graph,” Poberezkin wrote in a company blog post published in 2022.

SimpleX’s policies expressly prohibit “sending illegal communications” and outline how SimpleX will remove such content if it is discovered. Much of the content that these terrorist groups have shared on Telegram—and are already resharing on SimpleX—has been deemed illegal in the UK, Canada, and Europe.

Argentino wrote in his analysis that discussion about moving from Telegram to platforms with better security measures began in June, with discussion of SimpleX as an option taking place in July among a number of extremist groups. Though it wasn’t until September, and the Terrorgram arrests, that the decision was made to migrate to SimpleX, the groups are already establishing themselves on the new platform.

“The groups that have migrated are already populating the platform with legacy material such as Terrorgram manuals and are actively recruiting propagandists, hackers, and graphic designers, among other desired personnel,” the ISD researchers wrote.

However, the additional security SimpleX provides comes with downsides: it is harder for these groups to network and therefore grow, and disseminating propaganda faces similar restrictions.

“While there is newfound enthusiasm over the migration, it remains unclear if the platform will become a central organizing hub,” ISD researchers wrote.

And Poberezkin believes that the current limitations of his technology will mean these groups will eventually abandon SimpleX.

“SimpleX is a communication network rather than a service or a platform where users can host their own servers, like in OpenWeb, so we were not aware that extremists have been using it,” says Poberezkin. “We never designed groups to be usable for more than 50 users and we’ve been really surprised to see them growing to the current sizes despite limited usability and performance. We do not think it is technically possible to create a social network of a meaningful size in the SimpleX network.”

This story originally appeared on wired.com.

Neo-Nazis head to encrypted SimpleX Chat app, bail on Telegram Read More »

meta-pays-the-price-for-storing-hundreds-of-millions-of-passwords-in-plaintext

Meta pays the price for storing hundreds of millions of passwords in plaintext

GOT HASHES?

Company failed to follow one of the most sacrosanct rules for password storage.


Officials in Ireland have fined Meta $101 million for storing hundreds of millions of user passwords in plaintext and making them broadly available to company employees.

Meta disclosed the lapse in early 2019. The company said that apps for connecting to various Meta-owned social networks had logged user passwords in plaintext and stored them in a database that had been searched by roughly 2,000 company engineers, who collectively queried the stash more than 9 million times.

Meta investigated for five years

Meta officials said at the time that the error was found during a routine security review of the company’s internal network data storage practices. They went on to say that they uncovered no evidence that anyone internally improperly accessed the passcodes or that the passcodes were ever accessible to people outside the company.

Despite those assurances, the disclosure exposed a major security failure on the part of Meta. For more than three decades, the best practice across just about every industry has been to cryptographically hash passwords. Hashing is the practice of passing a password through a one-way cryptographic algorithm that produces a long string of characters unique to each plaintext input.

Because the conversion works in only one direction—from plaintext to hash—there is no cryptographic means for converting the hashes back into plaintext. More recently, these best practices have been mandated by laws and regulations in countries worldwide.

Because hashing algorithms work in only one direction, the only way to obtain the corresponding plaintext is to guess, a process that can require large amounts of time and computational resources. The idea behind hashing passwords is similar to the idea of fire insurance for a home. In the event of an emergency—the hacking of a password database in one case, or a house fire in the other—the protection insulates the stakeholder from harm that otherwise would have been more dire.

For hashing schemes to work as intended, they must follow a host of requirements. One is that the algorithm must require large amounts of computing resources to evaluate. That makes algorithms such as SHA1 and MD5 unsuitable, because they’re designed to hash messages quickly with minimal computing overhead. By contrast, algorithms specifically designed for hashing passwords, such as bcrypt, PBKDF2, and sha512crypt, are deliberately slow and computationally expensive.

Another requirement is that the algorithms must include cryptographic “salting,” in which a small number of extra characters, unique to each password, is added to the plaintext before it’s hashed. Salting further increases the workload required to crack the hashes. Cracking is the process of passing large numbers of guesses, often measured in the hundreds of millions, through the algorithm and comparing each hash against the hash found in the breached database.

The ultimate aim of hashing is to store passwords only in hashed format and never as plaintext. That prevents hackers and malicious insiders alike from being able to use the data without first having to expend large amounts of resources.
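The scheme the last few paragraphs describe can be sketched with Python’s standard library, which ships PBKDF2, one of the password-hashing algorithms named above. This is a minimal illustration, not Meta’s actual implementation; the function names, salt length, and iteration count are assumptions chosen for the example.

```python
import hashlib
import hmac
import os

# Illustrative work factor; real deployments should follow current guidance.
ITERATIONS = 600_000

def hash_password(password: str) -> tuple[bytes, bytes]:
    """Return a (salt, digest) pair using PBKDF2-HMAC-SHA256."""
    salt = os.urandom(16)  # fresh random salt for every password
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    """Recompute the hash with the stored salt and compare in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return hmac.compare_digest(candidate, digest)

salt, digest = hash_password("correct horse battery staple")
print(verify_password("correct horse battery staple", salt, digest))  # True
print(verify_password("tr0ub4dor&3", salt, digest))  # False
```

Only the salt and digest ever touch the database; the plaintext is discarded after hashing, which is exactly the property Meta’s logging pipeline failed to preserve.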

When Meta disclosed the lapse in 2019, it was clear the company had failed to adequately protect hundreds of millions of passwords.

“It is widely accepted that user passwords should not be stored in plaintext, considering the risks of abuse that arise from persons accessing such data,” Graham Doyle, deputy commissioner at Ireland’s Data Protection Commission, said. “It must be borne in mind, that the passwords, the subject of consideration in this case, are particularly sensitive, as they would enable access to users’ social media accounts.”

The commission has been investigating the incident since Meta disclosed it more than five years ago. The government body, the lead European Union regulator for most US Internet services, imposed a fine of $101 million (91 million euros) this week. To date, the EU has fined Meta more than $2.23 billion (2 billion euros) for violations of the General Data Protection Regulation (GDPR), which went into effect in 2018. That amount includes last year’s record $1.34 billion (1.2 billion euro) fine, which Meta is appealing.

Meta pays the price for storing hundreds of millions of passwords in plaintext Read More »