Policy

Apple warns proposed UK law will affect software updates around the world

Heads up —

Apple may leave the UK if required to provide advance notice of product updates.

Apple is “deeply concerned” that proposed changes to a United Kingdom law could give the UK government unprecedented power to “secretly veto” privacy and security updates to its products and services, the tech giant said in a statement provided to Ars.

If passed, potentially this spring, the amendments to the UK’s Investigatory Powers Act (IPA) could deprive not just UK users, but all users globally of important new privacy and security features, Apple warned.

“Protecting our users’ privacy and the security of their data is at the very heart of everything we do at Apple,” Apple said. “We’re deeply concerned the proposed amendments” to the IPA “now before Parliament place users’ privacy and security at risk.”

The IPA was initially passed in 2016 to ensure that UK officials had lawful access to user data to investigate crimes like child sexual exploitation or terrorism. Proposed amendments were announced last November, after a review showed that the “Act has not been immune to changes in technology over the last six years” and “there is a risk that some of these technological changes have had a negative effect on law enforcement and intelligence services’ capabilities.”

The proposed amendments would require any company that fields government data requests to notify UK officials of any updates it plans to make that could restrict the UK government’s access to this data, including any updates impacting users outside the UK.

UK officials said that this would “help the UK anticipate the risk to public safety posed by the rolling out of technology by multinational companies that precludes lawful access to data. This will reduce the risk of the most serious offenses such as child sexual exploitation and abuse or terrorism going undetected.”

According to the BBC, the House of Lords will begin debating the proposed changes on Tuesday.

Ahead of that debate, Apple described the amendments on Monday as “an unprecedented overreach by the government” that “if enacted” could allow the UK to “attempt to secretly veto new user protections globally, preventing us from ever offering them to customers.”

In a letter last year, Apple argued that “it would be improper for the Home Office to act as the world’s regulator of security technology.”

Apple told the UK Home Office that imposing “secret requirements on providers located in other countries” that apply to users globally “could be used to force a company like Apple, that would never build a backdoor, to publicly withdraw critical security features from the UK market, depriving UK users of these protections.” It could also “dramatically disrupt the global market for security technologies, putting users in the UK and around the world at greater risk,” Apple claimed.

The proposed changes, Apple said, “would suppress innovation, stifle commerce, and—when combined with purported extraterritorial application—make the Home Office the de facto global arbiter of what level of data security and encryption are permissible.”

UK defends proposed changes

The UK Home Office has repeatedly stressed that these changes do not “provide powers for the Secretary of State to approve or refuse technical changes,” but “simply” require companies “to inform the Secretary of State of relevant changes before those changes are implemented.”

“The intention is not to introduce a consent or veto mechanism or any other kind of barrier to market,” a UK Home Office fact sheet said. “A key driver for this amendment is to give operational partners time to understand the change and adapt their investigative techniques where necessary, which may in some circumstances be all that is required to maintain lawful access.”

The Home Office has also claimed that “these changes do not directly relate to end-to-end encryption,” while admitting that they “are designed to ensure that companies are not able to unilaterally make design changes which compromise exceptional lawful access where the stringent safeguards of the IPA regime are met.”

This seems to suggest that companies will not be allowed to cut off the UK government from accessing encrypted data under certain circumstances, which concerns privacy advocates who consider end-to-end encryption a vital user privacy and security protection. Earlier this month, civil liberties groups including Big Brother Watch, Liberty, Open Rights Group, and Privacy International filed a joint brief opposing the proposed changes, the BBC reported, warning that passing the amendments would be “effectively transforming private companies into arms of the surveillance state and eroding the security of devices and the Internet.”

“We have always been clear that we support technological innovation and private and secure communications technologies, including end-to-end encryption, but this cannot come at a cost to public safety,” a UK government official told the BBC.

The UK government may face opposition to the amendments from more than just tech companies and privacy advocates, though. In Apple’s letter last year, the tech giant noted that the proposed changes to the IPA could conflict with EU and US laws, including the EU’s General Data Protection Regulation—considered the world’s strongest privacy law.

Under the GDPR, companies must implement measures to safeguard users’ personal data, Apple said, noting that “encryption is one means by which a company can meet” that obligation.

“Secretly installing backdoors in end-to-end encrypted technologies in order to comply with UK law for persons not subject to any lawful process would violate that obligation,” Apple argued.

NSA finally admits to spying on Americans by purchasing sensitive data

Leaving Americans in the dark —

Violating Americans’ privacy “not just unethical but illegal,” senator says.

The National Security Agency (NSA) has admitted to buying records from data brokers detailing which websites and apps Americans use, US Senator Ron Wyden (D-Ore.) revealed Thursday.

This news follows Wyden’s push last year that forced the FBI to admit that it was also buying Americans’ sensitive data. Now, the senator is calling on all intelligence agencies to “stop buying personal data from Americans that has been obtained illegally by data brokers.”

“The US government should not be funding and legitimizing a shady industry whose flagrant violations of Americans’ privacy are not just unethical but illegal,” Wyden said in a letter to Director of National Intelligence (DNI) Avril Haines. “To that end, I request that you adopt a policy that, going forward,” intelligence agencies “may only purchase data about Americans that meets the standard for legal data sales established by the FTC.”

Wyden suggested that the intelligence community might be helping data brokers violate an FTC order requiring that Americans are provided “clear and conspicuous” disclosures and give informed consent before their data can be sold to third parties. In the seven years that Wyden has been investigating data brokers, he said that he has not been made “aware of any company that provides such a warning to users before collecting their data.”

The FTC’s order came after reaching a settlement with a data broker called X-Mode, which admitted to selling sensitive location data without user consent and even to selling data after users revoked consent.

In his letter, Wyden referred to this order as the FTC outlining “new rules,” but that’s not exactly what happened. Instead of issuing rules, FTC settlements often serve as “common law,” signaling to marketplaces which practices violate laws like the FTC Act.

According to the FTC’s analysis of the order on its site, X-Mode violated the FTC Act by “unfairly selling sensitive data, unfairly failing to honor consumers’ privacy choices, unfairly collecting and using consumer location data, unfairly collecting and using consumer location data without consent verification, unfairly categorizing consumers based on sensitive characteristics for marketing purposes, deceptively failing to disclose use of location data, and providing the means and instrumentalities to engage in deceptive acts or practices.”

The FTC declined to comment on whether the order also applies to data purchases by intelligence agencies. In defining “location data,” the FTC order seems to carve out exceptions for any data collected outside the US and used for either “security purposes” or “national security purposes conducted by federal agencies or other federal entities.”

NSA must purge data, Wyden says

NSA officials told Wyden that the intelligence agency is not only purchasing data on Americans located in the US but has also bought Americans’ Internet metadata.

Wyden warned that the former “can reveal sensitive, private information about a person based on where they go on the Internet, including visiting websites related to mental health resources, resources for survivors of sexual assault or domestic abuse, or visiting a telehealth provider who focuses on birth control or abortion medication.” And the latter “can be equally sensitive.”

To fix the problem, Wyden wants intelligence agencies to inventory and then “promptly” purge the data that they allegedly illegally collected on Americans without a warrant. Buying this data, Wyden said, has “in effect” allowed agencies like the NSA and the FBI to use “their credit card to circumvent the Fourth Amendment.”

X-Mode’s practices, the FTC said, were likely to cause “substantial injury to consumers that are not outweighed by countervailing benefits to consumers or competition and are not reasonably avoidable by consumers themselves.” Wyden’s spokesperson, Keith Chu, told Ars that “the data brokers selling Internet records to the government appear to engage in nearly identical conduct” to X-Mode.

The FTC’s order also indicates “that Americans must be told and agree to their data being sold to ‘government contractors for national security purposes’ for the practice to be allowed,” Wyden said.

DoD defends shady data broker dealings

In response to Wyden’s letter to Haines, the Under Secretary of Defense for Intelligence & Security, Ronald Moultrie, said that the Department of Defense (DoD) “adheres to high standards of privacy and civil liberties protections” when buying Americans’ location data. He also said that he was “not aware of any requirement in US law or judicial opinion” forcing the DoD to “obtain a court order in order to acquire, access, or use” commercially available information that “is equally available for purchase to foreign adversaries, US companies, and private persons as it is to the US government.”

In another response, NSA Director Gen. Paul Nakasone told Wyden that the “NSA takes steps to minimize the collection of US person information” and “continues to acquire only the most useful data relevant to mission requirements.” That includes some commercially available information on Americans “where one side of the communications is a US Internet Protocol address and the other is located abroad,” data which Nakasone said is “critical to protecting the US Defense Industrial Base” that sustains military weapons systems.

While the FTC has so far cracked down on a few data brokers, Wyden believes that the shady practice of selling data without Americans’ informed consent is an “industry-wide” problem in need of regulation. Rather than being a customer in this sketchy marketplace, intelligence agencies should stop funding companies allegedly guilty of what the FTC has described as “intrusive” and “unchecked” surveillance of Americans, Wyden said.

According to Moultrie, DNI Haines decides what information sources are “relevant and appropriate” to aid intelligence agencies.

But Wyden believes that Americans should have the opportunity to opt out of consenting to such invasive, secretive data collection. He said that by purchasing data from shady brokers, US intelligence agencies have helped create a world where consumers have no opportunity to consent to intrusive tracking.

“The secrecy around data purchases was amplified because intelligence agencies have sought to keep the American people in the dark,” Wyden told Haines.

X can’t stop spread of explicit, fake AI Taylor Swift images

Escalating the situation —

Will Swifties’ war on AI fakes spark a deepfake porn reckoning?

Explicit, fake AI-generated images sexualizing Taylor Swift began circulating online this week, quickly sparking mass outrage that may finally force a mainstream reckoning with harms caused by spreading non-consensual deepfake pornography.

A wide variety of deepfakes targeting Swift began spreading on X, the platform formerly known as Twitter, yesterday.

Ars found that some posts have been removed, while others remain online, as of this writing. One X post was viewed more than 45 million times over approximately 17 hours before it was removed, The Verge reported. Seemingly fueling more spread, X promoted these posts under the trending topic “Taylor Swift AI” in some regions, The Verge reported.

The Verge noted that since these images started spreading, “a deluge of new graphic fakes have since appeared.” According to Fast Company, these harmful images were posted on X but soon spread to other platforms, including Reddit, Facebook, and Instagram. Some platforms, like X, ban sharing of AI-generated images but seem to struggle with detecting banned content before it becomes widely viewed.

Ars’ AI reporter Benj Edwards warned in 2022 that AI image-generation technology was rapidly advancing, making it easy to train an AI model on just a handful of photos of a person and then use it to create convincing fake images of that person in virtually unlimited quantities. That is seemingly what happened to Swift, and it’s currently unknown how many different non-consensual deepfakes have been generated or how widely those images have spread.

It’s also unknown what consequences have resulted from spreading the images. At least one verified X user had their account suspended after sharing fake images of Swift, The Verge reported, but Ars reviewed posts on X from Swift fans targeting other users who allegedly shared the images and whose accounts remain active. Swift fans have also been uploading countless favorite photos of Swift to bury the harmful images and prevent them from appearing in various X searches. Her fans seem dedicated to reducing the spread however they can, with some posting addresses, seemingly in attempts to dox an X user they allege was the initial source of the images.

Neither X nor Swift’s team has yet commented on the deepfakes, but it seems clear that solving the problem will require more than just requesting removals from social media platforms. The AI model trained on Swift’s images is likely still out there, probably procured through one of the known websites that specialize in making fine-tuned celebrity AI models. As long as the model exists, anyone with access could crank out as many new images as they want, making it hard for even someone with Swift’s resources to make the problem go away for good.

In that way, Swift’s predicament might raise awareness of why creating and sharing non-consensual deepfake pornography is harmful, perhaps moving the culture away from persistent notions that nobody is harmed by non-consensual AI-generated fakes.

Swift’s plight could also inspire regulators to act faster to combat non-consensual deepfake porn. Last year, she inspired a Senate hearing after a Live Nation scandal frustrated her fans, triggering lawmakers’ antitrust concerns about the leading ticket seller, The New York Times reported.

Some lawmakers are already working to combat deepfake porn. Congressman Joe Morelle (D-NY) proposed a law criminalizing deepfake porn last year after teen boys at a New Jersey high school used AI image generators to create and share non-consensual fake nude images of female classmates. Under that proposed law, anyone sharing deepfake pornography without an individual’s consent risks fines and up to two years in prison. Damages could climb as high as $150,000, with imprisonment of up to 10 years, if sharing the images facilitates violence or impacts the proceedings of a government agency.

Elsewhere, the UK’s Online Safety Act bans sharing illegal content on platforms, including deepfake pornography. It requires platforms to moderate such content or risk fines of more than $20 million, or 10 percent of their global annual turnover, whichever amount is higher.

The UK law, however, is controversial because it requires companies to scan private messages for illegal content. That makes it practically impossible for platforms to provide end-to-end encryption, which the American Civil Liberties Union has described as vital for user privacy and security.

As regulators tangle with legal questions and social media users with moral ones, some AI image generators have moved to stop their models from producing NSFW outputs. Some, such as Stability AI, the company behind Stable Diffusion, did this by removing much of the sexualized imagery from their models’ training data. Others, like Microsoft’s Bing image creator, make it easy for users to report NSFW outputs.

But so far, keeping up with reports of deepfake porn seems to fall squarely on social media platforms’ shoulders. Swift’s battle this week shows how unprepared even the biggest platforms currently are to handle blitzes of harmful images seemingly uploaded faster than they can be removed.

Amazon Ring stops letting police request footage in Neighbors app after outcry

Neighborhood watch —

Warrantless access may still be granted during vaguely defined “emergencies.”

Amazon Ring has shut down a controversial feature in its community safety app Neighbors that for years allowed police to contact homeowners and request doorbell and surveillance camera footage without a warrant.

In a blog, head of the Neighbors app Eric Kuhn confirmed that “public safety agencies like fire and police departments can still use the Neighbors app to share helpful safety tips, updates, and community events,” but the Request for Assistance (RFA) tool will be disabled.

“They will no longer be able to use the RFA tool to request and receive video in the app,” Kuhn wrote.

Kuhn did not explain why Neighbors chose to “sunset” the RFA tool, but privacy advocates and lawmakers have long criticized Ring for helping to expand police surveillance in communities, seemingly threatening privacy and enabling racial profiling, CNBC reported. Among the staunchest critics of Ring’s seemingly tight relationship with law enforcement is the Electronic Frontier Foundation (EFF), which has long advocated for Ring and its users to stop sharing footage with police without a warrant.

In a statement provided to Ars, EFF senior policy analyst Matthew Guariglia noted that Ring had launched the RFA tool after EFF and other organizations had criticized Ring for allowing police to privately email warrantless requests for footage in the Neighbors app. Rather than end requests through the app entirely, Ring appeared to see the RFA tool as a middle ground, providing transparency about how many requests were being made, without ending police access to community members readily sharing footage on the app.

“Now, Ring hopefully will altogether be out of the business of platforming casual and warrantless police requests for footage to its users,” Guariglia said.

Moving forward, police and public safety agencies with warrants will still be able to request footage, which Amazon documents in transparency reports published every six months. These reports show thousands of search warrant requests and even more “preservation requests,” which allow government agencies to request to preserve user information for up to 90 days, “pending the receipt of a legally valid and binding order.”

“If we are legally required to comply, we will provide information responsive to the government demand,” Ring’s website says.

Ring rebrand embraces “hope and joy”

Guariglia said that Ring sunsetting the RFA tool “is a step in the right direction,” but it has “come after years of cozy relationships with police and irresponsible handling of data” that has, for many, damaged trust in Ring.

In 2022, EFF reported that Ring admitted that “there are ’emergency’ instances when police can get warrantless access to Ring personal devices without the owner’s permission.” And last year, Ring reached a $5.8 million settlement with the Federal Trade Commission, refunding customers for what the FTC described as “compromising its customers’ privacy by allowing any employee or contractor to access consumers’ private videos and by failing to implement basic privacy and security protections, enabling hackers to take control of consumers’ accounts, cameras, and videos.”

Because of this history, Guariglia said that EFF is “still deeply skeptical about law enforcement’s and Ring’s ability to determine what is, or is not, an emergency that requires the company to hand over footage without a warrant or user consent.”

EFF recommends additional steps that Ring could take to enhance user privacy, like enabling end-to-end encryption by default and turning off default audio collection, Guariglia said.

Bloomberg noted that this change to the Neighbors app comes after a new CEO, Liz Hamren, came on board; Hamren announced last year that Ring was rethinking its mission statement. Because Ring was adding indoor and backyard home monitoring and business services, the company’s initial mission statement—“to reduce crime in neighborhoods”—was no longer, as founding Ring CEO Jamie Siminoff had promoted it, “at the core” of what Ring does.

In Kuhn’s blog, barely any attention is given to ending the RFA tool. A Ring spokesperson declined to tell Ars how many users had volunteered to use the tool, so it remains unclear how popular it was.

Rather than clarifying the RFA tool controversy, Kuhn’s blog primarily focused on describing how much Ring users loved “heartwarming or silly” footage like a “bear relaxing in a pool.” Under Hamren and Kuhn’s guidance, it appears that the Neighbors app is embracing a new mission of connecting communities to find “hope and joy” in their areas by adding new features to Neighbors like Moments and Best of Ring.

By contrast, when Ring introduced the RFA tool, it said that its mission was “to make neighborhoods safer for everyone.” On a help page, Ring bragged that police had used Neighbors to recover stolen guns and medical supplies. Because of these selling points, Ring’s community safety features may still be priorities for some users. So, while Ring may be ready to move on from highlighting its partnership with law enforcement as a “core” part of its service, its users may still be used to seeing their cameras as tools that should be readily accessible to police.

As law enforcement agencies lose access to Neighbors’ RFA tool, Guariglia said that it’s important to raise awareness among Ring owners that police can’t demand access to footage without a warrant.

“This announcement will not stop police from trying to get Ring footage directly from device owners without a warrant,” Guariglia said. “Ring users should also know that when police knock on their door, they have the right to, and should, request that police get a warrant before handing over footage.”

eBay lays off 1,000 employees, about 9 percent of full-time workforce

eBay layoffs —

Cutting 1,000 jobs, eBay says “headcount and expenses have outpaced” growth.

eBay is laying off approximately 1,000 employees in a move that reduces its full-time workforce by 9 percent, the company announced yesterday. eBay also plans “to scale back the number of contracts we have within our alternate workforce over the coming months,” CEO Jamie Iannone wrote in a message to staff that was titled, “Ensuring eBay’s Long-Term Success.”

Iannone cited “the challenging macroeconomic environment” and said that eBay has too many employees. “While we are making progress against our strategy, our overall headcount and expenses have outpaced the growth of our business,” he wrote.

eBay asked all US-based employees to work from home on Wednesday “to provide some space and privacy” for conversations in which laid-off employees were to be given the bad news. The 1,000 layoffs come nearly one year after eBay eliminated 500 employees.

eBay reported $2.5 billion of revenue in its most recent quarterly earnings, for Q3 2023, a rise of 5 percent year over year. Q3 2023 net income was $1.3 billion, whereas the company had reported a net loss of $70 million in Q3 2022. eBay’s Q3 operating income was $455 million, down from $568 million the previous year.

eBay exceeded earnings expectations

eBay also said it “returned $783 million to shareholders in Q3, including $651 million of share repurchases and $132 million paid in cash dividends.” eBay’s stock price was up 0.48 percent today but has fallen about 5 percent this month.

“In Q3, we met or exceeded expectations across all of our key financial metrics,” eBay Chief Financial Officer Steve Priest said at the time. “Our strong balance sheet and operational rigor enable us to adapt to the evolving changes in this dynamic macro environment. We will continue to be prudent with cost efficiencies, saving to invest for the future, while remaining good stewards of capital for our shareholders.”

Even though eBay beat earnings estimates in Q3, The Wall Street Journal pointed out some challenges facing the company going forward. “The company has been under pressure amid rising competition from the likes of Amazon.com and Walmart, as well as from emerging Chinese retailers such as Temu and Shein,” the WSJ wrote. “High interest rates and sticky inflation in the US and other major economies have also weighed on consumers’ discretionary spending.”

eBay’s layoff announcement is the latest in a string of job cuts in the tech industry. Amazon this month announced layoffs of 500 employees at Twitch and several hundred more at its MGM and Prime Video divisions. Google announced layoffs of 100 employees at YouTube after previously laying off hundreds of workers in several other divisions.

Mugger take your phone? Cash apps too easily let thieves drain accounts, DA says

Popular apps like Venmo, Zelle, and Cash App aren’t doing enough to protect consumers from fraud that occurs when unauthorized users gain access to unlocked devices, Manhattan District Attorney Alvin Bragg warned.

“Thousands or even tens of thousands can be drained from financial accounts in a matter of seconds with just a few taps,” Bragg said in letters to app makers. “Without additional protections, customers’ financial and physical safety is being put at risk.”

According to Bragg, his office and the New York Police Department have been increasingly prosecuting crimes where phones are commandeered by bad actors to quickly steal large amounts of money through financial apps.

This can happen to unwitting victims when fraudsters ask “to use an individual’s smartphone for personal use” or to transfer funds to initiate a donation for a specific cause. Or “in the most disturbing cases,” Bragg said, “offenders have violently assaulted or drugged victims, and either compelled them to provide a password for a device or used biometric ID to open the victim’s phone before transferring money once the individual is incapacitated.”

But prosecuting crimes alone won’t solve this problem, Bragg suggested. Prevention is necessary. That’s why the DA is requesting meetings with executives managing widely used financial apps to discuss “commonsense” security measures that Bragg said can be taken to “combat this growing concern.”

Bragg appears particularly interested in Apple’s recently developed “Stolen Device Protection,” which he said is “making it harder for perpetrators to use a phone’s passcode to steal funds when the user’s phone is not at home or at work.”

Apple just rolled out “Stolen Device Protection” for iOS 17.3. On its website, Apple explained that when “Stolen Device Protection” is enabled, “some features and actions have additional security requirements when your iPhone is away from familiar locations such as home or work.”

For users taking advantage of this enhanced security layer, biometric authentication via Face ID or Touch ID is required to access devices, with no option to fall back to a passcode. This alone could help deter the crimes Bragg described, potentially stopping thieves from rifling through someone’s passwords to get instant access to a cash app. “Stolen Device Protection” also sets up a security delay that could stop thieves from immediately changing the account password and locking an owner out of their device. To change a password in this more secure mode, thieves would need to wait one hour—perhaps giving the owner time to report the phone stolen or missing—and then pass a second biometric check.

Bragg wants financial apps like Zelle and Venmo to follow Apple’s lead and build similar safeguards. He suggested that Apple’s release makes it clear the technology exists for apps to detect when a user is attempting to send a large transaction from an unknown location and, absent secondary verification, to block or delay that transaction for up to a day. This could afford victims more time to discover and cancel fraudulent transfers before they go through, rather than after the theft, when it’s usually harder to claw back funds.
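
Bragg’s letters don’t prescribe an implementation, but the safeguard he describes boils down to a simple risk rule. Below is a minimal sketch of that logic in Python; the dollar threshold, delay length, and function names are hypothetical, not anything Venmo, Zelle, or Cash App actually ships.

from datetime import datetime, timedelta

# Hypothetical values; a real app would tune these per account and risk model.
LARGE_TRANSFER_LIMIT = 1_000          # dollars
SECURITY_DELAY = timedelta(hours=24)  # "up to a day," per Bragg's suggestion

def review_transfer(amount, location_is_familiar, has_secondary_verification):
    """Decide whether a transfer proceeds now or is held for review.
    A sketch of the safeguard Bragg describes, not any app's real logic."""
    requested_at = datetime.now()
    if amount < LARGE_TRANSFER_LIMIT or location_is_familiar:
        return ("approve", requested_at)
    if has_secondary_verification:
        return ("approve", requested_at)
    # Large transfer from an unknown location without secondary verification:
    # hold it, giving a victim time to report the phone stolen.
    return ("delay", requested_at + SECURITY_DELAY)

# Example: $5,000 sent from an unfamiliar location is held for a day.
print(review_transfer(5_000, location_is_familiar=False,
                      has_secondary_verification=False))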

This problem goes well beyond Manhattan, Bragg wrote, pointing to “similar thefts and robberies” that have been “publicly reported” in major cities like Los Angeles and Orlando, as well as in West Virginia, Louisiana, Illinois, Kansas, Tennessee, Virginia, and “elsewhere across the United States.”

Overall, the DA traced a pattern showing that the more people were using financial apps, the more fraud claims spiked, “tripling between 2020 and 2022” and “costing consumers hundreds of millions of dollars each year.”

“While cash apps, like Cash App, offer consumers an easy and fast method to transfer funds, they also have made these platforms a favorite of fraudsters because consumers have no option to cancel transactions, even moments after authorizing them,” Bragg wrote to Cash App CEO Brian Grassadonia. “I am concerned about the troubling rise in illegal behavior that has developed because of insufficient security measures connected with your software and business policy decisions.”

While building tech like Apple’s “Stolen Device Protection” seems to be the most extreme step Bragg recommended, he also pushed “commonsense solutions” that he claimed financial apps currently overlook. These include steps like requiring multifactor authentication to help keep thieves locked out and lowering limits on daily transfers to make the scam less appealing to thieves looking for a big payday.

Patreon: Blocking platforms from sharing user video data is unconstitutional

Patreon, a monetization platform for content creators, has asked a federal judge to deem unconstitutional a rarely invoked law that some privacy advocates consider one of the nation’s “strongest protections of consumer privacy against a specific form of data collection.” Such a ruling would end decades of careful US protection of the privacy of millions of Americans’ personal video viewing habits.

The Video Privacy Protection Act (VPPA) blocks businesses from sharing data with third parties on customers’ video purchases and rentals. At a minimum, the VPPA requires written consent each time a business wants to share this sensitive video data—including the title, description, and, in most cases, the subject matter.

The VPPA was passed in 1988 in response to backlash over a reporter sharing the video store rental history of a judge, Robert Bork, who had been nominated to the Supreme Court by Ronald Reagan. The report revealed that Bork apparently liked spy thrillers and British costume dramas and suggested that maybe the judge had a family member who dug John Hughes movies.

Although the videos that Bork rented “revealed nothing particularly salacious” about the judge, the intent of reporting the “Bork Tapes” was to confront the judge “with his own vulnerability to privacy harms” during a time when the Supreme Court nominee had “criticized the constitutional right to privacy” as “a loose canon in the law,” Harvard Law Review noted.

Even though no harm was caused by sharing the “Bork Tapes,” policymakers on both sides of the aisle agreed that First Amendment protections ought to safeguard the privacy of people’s viewing habits, or else risk chilling speech by pressuring people to alter what they watch. The US government has not budged on this stance since, supporting a lawsuit filed in 2022 by Patreon users who claimed that, while no harms were caused, damages are owed because Patreon allegedly violated the VPPA by using the Meta Pixel to share data with Facebook about videos they watched on the platform, without their written consent.

“Restricting the ability of those who possess a consumer’s video purchase, rental, or request history to disclose such information directly advances the goal of keeping that information private and protecting consumers’ intellectual freedom,” the Department of Justice’s brief said.

The Meta Pixel is a piece of code used by companies like Patreon to better target content to users by tracking their activity and monitoring conversions on Meta platforms. “In simplest terms,” Patreon users said in an amended complaint, “the Pixel allows Meta to know what video content one of its users viewed on Patreon’s website.”
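
Mechanically, the Pixel is a snippet of JavaScript (with an image-tag fallback) that fires an HTTP request to Meta carrying event metadata whenever a tracked action occurs. The Python sketch below approximates the shape of such a request, modeled on the Pixel’s documented noscript /tr endpoint; the pixel ID and content name here are invented for illustration.

from urllib.parse import urlencode

def pixel_hit(pixel_id, event, custom_data):
    # Base parameters mirror the Pixel's <noscript> image fallback;
    # custom data (e.g., which video was viewed) rides along as cd[...] params.
    params = {"id": pixel_id, "ev": event, "noscript": 1}
    params.update({f"cd[{k}]": v for k, v in custom_data.items()})
    return "https://www.facebook.com/tr?" + urlencode(params)

# Invented pixel ID and title, purely for illustration.
print(pixel_hit("1234567890", "ViewContent",
                {"content_name": "Example Patreon video title"}))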

The Pixel is currently at the center of a pile of privacy lawsuits, where people have accused various platforms of using the Pixel to covertly share sensitive data without users’ consent, including health and financial data.

Several lawsuits have specifically lobbed VPPA claims, which users have argued validates the urgency of retaining the VPPA protections that Patreon now seeks to strike. The DOJ argued that “the explosion of recent VPPA cases” is proof “that the disclosures the statute seeks to prevent are a legitimate concern,” despite Patreon’s arguments that the statute does “nothing to materially or directly advance the privacy interests it supposedly was enacted to protect.”

Patreon’s attack on the VPPA

Patreon has argued in a recent court filing that the VPPA was not enacted to protect average video viewers from embarrassing and unwarranted disclosures but “for the express purpose of silencing disclosures about political figures and their video-watching, an issue of undisputed continuing public interest and concern.”

That’s one of many ways that the VPPA silences speech, Patreon argued, by allegedly preventing disclosures regarding public figures that are relevant to public interest.

Among other “fatal flaws,” Patreon alleged, the VPPA “restrains speech” while “doing little if anything to protect privacy” and never protecting privacy “by the least restrictive means.”

Patreon claimed that the VPPA is too narrow, focusing only on pre-recorded videos. It prevents video service providers from disclosing to any other person the titles of videos that someone watched, but it does not necessarily stop platforms from sharing information about “the genres, performers, directors, political views, sexual content, and every other detail of pre-recorded video that those consumers watch.”

Robocall with artificial Joe Biden voice tells Democrats not to vote

A bunch of malarkey —

Fake Biden voice urges New Hampshire Democrats to skip tomorrow’s primary.

An anti-voting robocall that seems to use an artificially generated version of President Joe Biden’s voice is being investigated by the New Hampshire Attorney General’s office. The calls sent on Sunday told Democrats to avoid voting in the Presidential Primary on January 23.

“Although the voice in the robocall sounds like the voice of President Biden, this message appears to be artificially generated based on initial indications,” the state AG’s office said in an announcement today. The recorded message appears “to be an unlawful attempt to disrupt the New Hampshire Presidential Primary Election and to suppress New Hampshire voters,” the announcement said.

The “Biden” voice in the recording (available with this NBC News article) sounds a bit off but perhaps could fool some people into thinking it came from the president.

“What a bunch of malarkey,” the voice says. “You know the value of voting Democratic when our votes count. It’s important that you save your vote for the November election. We’ll need your help in electing Democrats up and down the ticket. Voting this Tuesday only enables the Republicans in their quest to elect Donald Trump again. Your vote makes a difference in November, not this Tuesday.”

NBC News reported that a spokesperson for the Trump campaign said it had no connection to the fake Biden call. “Not us, we have nothing to do with it,” the spokesperson said.

Spoofed Caller ID

The apparently spoofed Caller ID displayed the personal cell phone number of “a prominent New Hampshire Democrat,” NBC News wrote. Biden’s national campaign manager, Julie Chavez Rodriguez, said the “campaign is actively discussing additional actions to take immediately,” according to NBC News.

Biden isn’t officially on the ballot in New Hampshire this week because of a dispute over scheduling between New Hampshire Democrats and the Democratic National Committee. But there’s a write-in campaign supporting Biden in the Democratic primary, and a spokesperson for the write-in campaign described the robocall as “deepfake disinformation designed to harm Joe Biden, suppress votes, and damage our democracy.”

The New Hampshire AG’s office said the fake Biden call “appears to have been ‘spoofed’ to falsely show that it had been sent by the treasurer of a political committee that has been supporting the New Hampshire Democratic Presidential Primary write-in efforts for President Biden. The message’s content directed recipients who wished to be removed from a calling list to call the number belonging to this person.”

The AG’s office pointed out that no law prevents someone from voting in both January and November. “Voting in the New Hampshire Presidential Primary Election does not preclude a voter from additionally voting in the November General Election,” the AG’s office said.

Meta relents to EU, allows unlinking of Facebook and Instagram accounts

Meta will allow some Facebook and Instagram users to unlink their accounts as part of the platform’s efforts to comply with the European Union’s Digital Markets Act (DMA) ahead of enforcement starting March 1.

In a blog, Meta’s competition and regulatory director, Tim Lamb, wrote that Instagram and Facebook users in the EU, the European Economic Area, and Switzerland would be notified in the “next few weeks” about “more choices about how they can use” Meta’s services and features, including new opportunities to limit data-sharing across apps and services.

Most significantly, users can choose to either keep their accounts linked or “manage their Instagram and Facebook accounts separately so that their information is no longer used across accounts.” Up to this point, linking user accounts had provided Meta with more data to more effectively target ads to more users. The perk of accessing data on Instagram’s widening younger user base, TechCrunch noted, was arguably the $1 billion selling point explaining why Facebook acquired Instagram in 2012.

Also announced today, users protected by the DMA will soon be able to separate their Facebook Messenger, Marketplace, and Gaming accounts. However, doing so will limit some social features available in some of the standalone apps.

While Messenger users choosing to disconnect the chat service from their Facebook accounts will still “be able to use Messenger’s core service offering such as private messaging and chat, voice and video calling,” Marketplace users making that same choice will have to email sellers and buyers, rather than using Facebook’s messenger service. And unlinked Gaming app users will only be able to play single-player games, severing their access to social gaming otherwise supported by linking the Gaming service to their Facebook social networks.

While Meta may have had options other than stripping features from users who unlink their accounts, it had no real choice about offering the newly announced unlinking options. The DMA specifically requires that very large platforms designated as “gatekeepers” give users the “specific choice” of opting out of sharing personal data across a platform’s different core services or across any separate services that the gatekeepers manage.

Without gaining “specific” consent, gatekeepers will no longer be allowed to “combine personal data from the relevant core platform service with personal data from any further core platform services” or “cross-use personal data from the relevant core platform service in other services provided separately by the gatekeeper,” the DMA says. The “specific” requirement is designed to block platforms from securing consent at sign-up, then hoovering up as much personal data as possible as new services are added in an endless pursuit of advertising growth.

As defined under the General Data Protection Regulation, the “specific” consent requirement stops platforms from gaining blanket user consent for broadly defined data processing, instead establishing “the need for granularity” so that platforms must seek consent for each “specific” data “processing purpose.”

“This is an important ‘safeguard against the gradual widening or blurring of purposes for which data is processed, after a data subject has agreed to the initial collection of the data,’” the European Data Protection Supervisor explained in public comments describing “commercial surveillance and data security practices that harm consumers” provided at the request of the FTC in 2022.
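
A toy model makes the “granularity” point concrete: under this reading, consent is recorded per processing purpose, so a grant obtained at sign-up for one purpose never covers a later purpose like combining data across services. The purpose names below are invented for illustration.

# Per-purpose consent ledger: (user_id, purpose) -> bool.
consents = {}

def record_consent(user_id, purpose, granted):
    consents[(user_id, purpose)] = granted

def may_process(user_id, purpose):
    # Absent an explicit grant for this exact purpose, processing is off.
    return consents.get((user_id, purpose), False)

record_consent("alice", "instagram_ads", True)
print(may_process("alice", "instagram_ads"))               # True
print(may_process("alice", "combine_facebook_instagram"))  # False: needs its own opt-in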

According to Meta’s help page, once users opt out of sharing data between apps and services, Meta will “stop combining your info across these accounts” within 15 days “after you’ve removed them.” However, all “previously combined info would remain combined.”

Google and AT&T invest in Starlink rival for satellite-to-smartphone service

Satellite for smartphones —

AST SpaceMobile gets $206.5 million and is partnering with Google and AT&T.

Illustration of AST SpaceMobile’s cellular satellite. (Image: AST SpaceMobile)

Google, AT&T, and Vodafone are investing $206.5 million in AST SpaceMobile, a Starlink competitor that plans to offer smartphone service from low-Earth-orbit satellites.

This is the first investment in AST SpaceMobile from Google and AT&T, while Vodafone had already put money into the satellite company. AST SpaceMobile announced the funding in a press release on Thursday and announced a $100 million public offering of its stock on the same day.

“Vodafone and AT&T have placed purchase orders for network equipment from AST SpaceMobile to support planned commercial service,” the satellite company said. Google has meanwhile “agreed to collaborate on product development, testing, and implementation plans for SpaceMobile network connectivity on Android and related devices.” AST, which has one very large test satellite in orbit, previously received investments from Rakuten, American Tower, and Bell Canada.

SpaceX subsidiary Starlink has deals with T-Mobile in the US and several carriers in other countries for satellite-to-smartphone service. T-Mobile is expected to offer Starlink-enabled text messaging this year, with voice and data service beginning sometime in 2025.

Though AT&T hadn’t previously invested in AST SpaceMobile, the companies were already working together. AT&T is leasing spectrum in the 700 MHz and 850 MHz bands to AST SpaceMobile. They plan “to provide mobile broadband to unserved and underserved areas covered by the Leased Spectrum,” the companies told the Federal Communications Commission in an application last year.

AST SpaceMobile’s BlueWalker 3 test satellite, which is 693 square feet in size. (Image: AST SpaceMobile)

For hard-to-reach areas

Satellite-to-smartphone technology is generally seen as a supplement to cellular networks in hard-to-reach areas. “Because AST’s technology can focus satellite coverage in discrete portions of licensed areas, it does not need a nationwide swath of terrestrial mobile spectrum that a mobile network operator licensee has left fallow. Rather than displacing terrestrial network facilities nationwide, AST’s coverage will be complementary to AT&T’s extensive terrestrial network coverage,” the companies’ FCC filing said.

In April 2023, the companies announced that they completed the first two-way voice calls using AST SpaceMobile’s test satellite with standard mobile phones. “The first voice call was made from the Midland, Texas area to Rakuten in Japan over AT&T spectrum using a Samsung Galaxy S22 smartphone,” the announcement said.

In September 2023, AST SpaceMobile said it made “the first-ever 5G connection for voice and data between an everyday, unmodified smartphone and a satellite in space” and that it achieved a download rate of 14Mbps.

Five satellites should launch soon

AST SpaceMobile’s prototype satellite launched from a SpaceX rocket in September 2022. AST’s early plans detailed in 2020 called for 243 satellites overall, and its first five satellites for commercial operations are expected to launch by March 31, 2024. AST is manufacturing the satellites at its Texas facilities.

The prototype satellite delivers data over 5 MHz channels. “For the company’s planned operational satellites, beams are designed to support capacity of up to 40 MHz, potentially enabling data transmission speeds of up to 120Mbps,” the company said.
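
As a back-of-envelope check on those figures, 120Mbps delivered over a 40 MHz channel implies a spectral efficiency of about 3 bits per second per hertz, in line with what LTE-class links achieve.

# Implied spectral efficiency of the planned beams, from the figures above.
peak_rate_bps = 120e6  # 120Mbps
bandwidth_hz = 40e6    # 40 MHz

spectral_efficiency = peak_rate_bps / bandwidth_hz
print(f"{spectral_efficiency:.1f} bits/s/Hz")  # 3.0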

An AST description of its satellite says it has “a large surface area of phased-array antennas, which work together to electronically form, steer, and shape wireless communication beams into cells of coverage,” similarly to cell towers on the ground. AST says its BlueWalker 3 test satellite is 693 square feet.

AST said it has “over 40 agreements and understandings with mobile network operators globally, who collectively service over 2 billion subscribers.” Besides Vodafone and AT&T, these “agreements and understandings” are with firms including Rakuten Mobile, Bell Canada, Orange, Telefonica, TIM, MTN, Saudi Telecom Company, Zain KSA, Etisalat, Indosat Ooredoo Hutchison, Telkomsel, Smart Communications, Globe Telecom, Millicom, Smartfren, Telecom Argentina, Telstra, Africell, and Liberty Latin America.

While Starlink already has over 5,000 satellites delivering home Internet service and plans to launch tens of thousands more, it isn’t too far ahead of AST SpaceMobile in terms of cellular-enabled satellites. SpaceX launched the first six Starlink satellites that can provide cellular transmissions to standard LTE phones a few weeks ago and demonstrated the technology with text messages sent between T-Mobile phones.

HP CEO evokes James Bond-style hack via ink cartridges

Last Thursday, HP CEO Enrique Lores addressed the company’s controversial practice of bricking printers when users load them with third-party ink. Speaking to CNBC Television, he said, “We have seen that you can embed viruses in the cartridges. Through the cartridge, [the virus can] go to the printer, [and then] from the printer, go to the network.”

That frightening scenario could help explain why HP, which was hit this month with another lawsuit over its Dynamic Security system, insists on deploying it to printers.

Dynamic Security stops HP printers from functioning if an ink cartridge without an HP chip or HP electronic circuitry is installed. HP has issued firmware updates that block printers with such ink cartridges from printing, leading to the above lawsuit (PDF), which is seeking class-action certification. The suit alleges that HP printer customers were not made aware that printer firmware updates issued in late 2022 and early 2023 could result in printer features not working. The lawsuit seeks monetary damages and an injunction preventing HP from issuing printer updates that block ink cartridges without an HP chip.
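
HP has not published how its cartridge chips prove themselves to the printer, but chip checks of this kind are commonly built as a challenge-response handshake: the printer sends a random nonce, and the cartridge must answer with a MAC computed using a key baked into the chip. The Python sketch below shows only that generic pattern; the key and names are invented, and this is not HP’s actual protocol.

import hashlib
import hmac
import os

SHARED_KEY = b"example-secret-key"  # invented; would live in chip and firmware

def cartridge_respond(challenge: bytes) -> bytes:
    # Runs on the cartridge chip: MAC the printer's random challenge.
    return hmac.new(SHARED_KEY, challenge, hashlib.sha256).digest()

def printer_verify(respond) -> bool:
    # Runs in printer firmware: issue a fresh nonce, check the returned MAC.
    challenge = os.urandom(16)
    expected = hmac.new(SHARED_KEY, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(respond(challenge), expected)

print(printer_verify(cartridge_respond))       # True: chip knows the key
print(printer_verify(lambda c: b"\x00" * 32))  # False: chip without the key

In such a scheme, a chip that lacks the key fails the check regardless of what other data it carries.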

But are hacked ink cartridges something we should actually be concerned about?

To investigate, I turned to Ars Technica Senior Security Editor Dan Goodin. He told me that he didn’t know of any attacks actively used in the wild that are capable of using a cartridge to infect a printer.

Goodin also put the question to Mastodon, and cybersecurity professionals, many with expertise in embedded-device hacking, were decidedly skeptical.

One commenter, going by Graham Sutherland / Polynomial on Mastodon, referred to serial presence detect (SPD) electrically erasable programmable read-only memory (EEPROM), a form of flash memory used extensively in ink cartridges, saying:

I’ve seen and done some truly wacky hardware stuff in my life, including hiding data in SPD EEPROMs on memory DIMMs (and replacing them with microcontrollers for similar shenanigans), so believe me when I say that his claim is wildly implausible even in a lab setting, let alone in the wild, and let alone at any scale that impacts businesses or individuals rather than selected political actors.

HP’s evidence

Unsurprisingly, Lores’ claim comes from HP-backed research. The company’s bug bounty program tasked researchers from Bugcrowd with determining if it’s possible to use an ink cartridge as a cyberthreat. HP argued that ink cartridge microcontroller chips, which are used to communicate with the printer, could be an entryway for attacks.

As detailed in a 2022 article from research firm Actionable Intelligence, a researcher in the program found a way to hack a printer via a third-party ink cartridge. The researcher was reportedly unable to perform the same hack with an HP cartridge.

Shivaun Albright, HP’s chief technologist of print security, said at the time:

A researcher found a vulnerability over the serial interface between the cartridge and the printer. Essentially, they found a buffer overflow. That’s where you have got an interface that you may not have tested or validated well enough, and the hacker was able to overflow into memory beyond the bounds of that particular buffer. And that gives them the ability to inject code into the device.

Albright added that the malware “remained on the printer in memory” after the cartridge was removed.
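
For readers unfamiliar with the bug class Albright describes: a buffer overflow occurs when code copies attacker-supplied input into a fixed-size buffer without checking the input’s length, letting the copy spill into adjacent memory. Python itself is memory-safe, so the sketch below only simulates the mechanics, modeling firmware memory as one flat bytearray; all sizes are invented.

# Simulated firmware memory: a 16-byte input buffer followed by other state.
memory = bytearray(32)
BUF_START, BUF_SIZE = 0, 16  # adjacent printer state lives at offsets 16-31

def copy_from_cartridge(payload: bytes):
    # The bug: copy however many bytes the cartridge sends, with no check
    # against BUF_SIZE -- the classic recipe for a buffer overflow.
    for i, b in enumerate(payload):
        memory[BUF_START + i] = b

copy_from_cartridge(b"A" * 24)  # 24 bytes arrive for a 16-byte buffer
print(memory[16:24])            # adjacent state now holds attacker bytes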

HP acknowledges that there’s no evidence of such a hack occurring in the wild. Still, because chips used in third-party ink cartridges are reprogrammable (their “code can be modified via a resetting tool right in the field,” according to Actionable Intelligence), they’re less secure, the company says. The chips are said to be programmable so that they can still work in printers after firmware updates.

HP also questions the security of third-party ink companies’ supply chains, especially compared to its own supply chain security, which is ISO/IEC-certified.

So HP did find a theoretical way for cartridges to be hacked, and it’s reasonable for the company to issue a bug bounty to identify such a risk. But its solution for this threat was announced before it showed there could be a threat. HP added ink cartridge security training to its bug bounty program in 2020, and the above research was released in 2022. HP started using Dynamic Security in 2016, ostensibly to solve the problem that it sought to prove exists years later.

Further, there’s a sense from cybersecurity professionals that Ars spoke with that even if such a threat exists, it would take a high level of resources and skills, which are usually reserved for targeting high-profile victims. Realistically, the vast majority of individual consumers and businesses shouldn’t have serious concerns about ink cartridges being used to hack their machines.
