

Elon Musk’s X allows China-based propaganda banned on other platforms

Rinse-wash-repeat —

X accused of overlooking propaganda flagged by Meta and criminal prosecutors.


Lax content moderation on X (aka Twitter) has disrupted coordinated efforts between social media companies and law enforcement to tamp down on “propaganda accounts controlled by foreign entities aiming to influence US politics,” The Washington Post reported.

Now propaganda is “flourishing” on X, The Post said, while other social media companies are stuck in endless cycles, watching some of the propaganda that they block proliferate on X, then inevitably spread back to their platforms.

Meta, Google, and then-Twitter began coordinating takedown efforts with law enforcement and disinformation researchers after Russian-backed influence campaigns manipulated their platforms in hopes of swaying the 2016 US presidential election.

The next year, all three companies promised Congress to work tirelessly to stop Russian-backed propaganda from spreading on their platforms. The companies created explicit election misinformation policies and began meeting biweekly to compare notes on propaganda networks each platform uncovered, according to The Post’s interviews with anonymous sources who participated in these meetings.

However, after Elon Musk purchased Twitter and rebranded the company as X, his company withdrew from the alliance in May 2023.

Sources told The Post that the last X meeting attendee was Irish intelligence expert Aaron Rodericks—who was allegedly disciplined for liking an X post calling Musk “a dipshit.” Rodericks was subsequently laid off when Musk dismissed the entire election integrity team last September, and after that, X apparently ditched the biweekly meeting entirely and “just kind of disappeared,” a source told The Post.

In 2023, for example, Meta flagged 150 “artificial influence accounts” identified on its platform, of which “136 were still present on X as of Thursday evening,” according to The Post’s analysis. X’s seeming oversight extends to all but eight of the 123 “deceptive China-based campaigns” connected to accounts that Meta flagged last May, August, and December, The Post reported.

The Post’s report also provided an exclusive analysis from the Stanford Internet Observatory (SIO), which found that 86 propaganda accounts that Meta flagged last November “are still active on X.”

The majority of these accounts—81—were China-based accounts posing as Americans, SIO reported. These accounts frequently ripped photos from Americans’ LinkedIn profiles, then changed the real Americans’ names while posting about both China and US politics, as well as people often trending on X, such as Musk and Joe Biden.

Meta has warned that China-based influence campaigns are “multiplying,” The Post noted, while X’s standards remain seemingly too relaxed. Even accounts linked to criminal investigations remain active on X. One “account that is accused of being run by the Chinese Ministry of Public Security,” The Post reported, remains on X despite its posts being cited by US prosecutors in a criminal complaint.

Prosecutors connected that account to “dozens” of X accounts attempting to “shape public perceptions” about the Chinese Communist Party, the Chinese government, and other world leaders. The accounts also comment on hot-button topics like the fentanyl problem or police brutality, seemingly to convey “a sense of dismay over the state of America without any clear partisan bent,” Elise Thomas, an analyst for a London nonprofit called the Institute for Strategic Dialogue, told The Post.

Some X accounts flagged by The Post had more than 1 million followers. Five have paid X for verification, suggesting that their disinformation campaigns—targeting hashtags to confound discourse on US politics—are seemingly being boosted by X.

SIO technical research manager Renée DiResta criticized X’s decision to stop coordinating with other platforms.

“The presence of these accounts reinforces the fact that state actors continue to try to influence US politics by masquerading as media and fellow Americans,” DiResta told The Post. “Ahead of the 2022 midterms, researchers and platform integrity teams were collaborating to disrupt foreign influence efforts. That collaboration seems to have ground to a halt, Twitter does not seem to be addressing even networks identified by its peers, and that’s not great.”

Musk shut down X’s election integrity team because he claimed that the team was actually “undermining” election integrity. But analysts are bracing for floods of misinformation ahead of the 2024 elections: some major platforms have dropped their election misinformation policies just as rapid advances in AI have made fabricated text, images, audio, and video harder for the average person to detect.

In one prominent example, a robocall used AI voice technology to pose as Biden and tell Democrats not to vote. That incident seemingly pushed the Federal Trade Commission to propose, on Thursday, penalties for AI impersonation.

It seems apparent that propaganda accounts from foreign entities on X will use every tool available to get eyes on their content, perhaps expecting Musk’s platform to be the slowest to police them. According to The Post, some of the X accounts spreading propaganda are using what appears to be AI-generated images of Biden and Donald Trump to garner tens of thousands of views on posts.

It’s possible that X will start tightening up on content moderation as elections draw closer. Yesterday, X joined Amazon, Google, Meta, OpenAI, TikTok, and other Big Tech companies in signing an agreement to fight “deceptive use of AI” during 2024 elections. Among the top goals identified in the “AI Elections accord” are identifying where propaganda originates, detecting how propaganda spreads across platforms, and “undertaking collective efforts to evaluate and learn from the experiences and outcomes of dealing” with propaganda.



Apple disables iPhone web apps in EU, says it’s too hard to comply with rules

Digital Markets Act —

Apple says it can’t secure home-screen web apps with third-party browser engines.

Photo of an iPhone focusing on the app icons for Phone, Safari, Messages, and Music. (Getty Images | NurPhoto)

Apple is removing the ability to install home screen web apps from iPhones and iPads in Europe when iOS 17.4 comes out, saying it’s too hard to keep offering the feature under the European Union’s new Digital Markets Act (DMA). Apple is required to comply with the law by March 6.

Apple said the change is necessitated by a requirement to let developers “use alternative browser engines—other than WebKit—for dedicated browser apps and apps providing in-app browsing experiences in the EU.” Apple explained its stance in a developer Q&A under the heading, “Why don’t users in the EU have access to Home Screen web apps?” It says:

Addressing the complex security and privacy concerns associated with web apps using alternative browser engines would require building an entirely new integration architecture that does not currently exist in iOS and was not practical to undertake given the other demands of the DMA and the very low user adoption of Home Screen web apps. And so, to comply with the DMA’s requirements, we had to remove the Home Screen web apps feature in the EU.

It will still be possible to add website bookmarks to iPhone and iPad home screens, but those bookmarks will open the site in the user’s web browser instead of in a separate web app. The change was recently rolled out to beta versions of iOS 17.4.

The Digital Markets Act targets “gatekeepers” of certain technologies such as operating systems, browsers, and search engines. It requires gatekeepers to let third parties interoperate with the gatekeepers’ own services, and prohibits them from favoring their own services at the expense of competitors. As 9to5Mac notes, allowing home screen web apps with Safari but not third-party browser engines might cause Apple to violate the rules.

Apple warns of “malicious web apps”

As Apple explains, iOS “has traditionally provided support for Home Screen web apps by building directly on WebKit and its security architecture. That integration means Home Screen web apps are managed to align with the security and privacy model for native apps on iOS, including isolation of storage and enforcement of system prompts to access privacy impacting capabilities on a per-site basis.”

Apple said it won’t be able to guarantee this isolation once alternative browser engines are supported. “Without this type of isolation and enforcement, malicious web apps could read data from other web apps and recapture their permissions to gain access to a user’s camera, microphone or location without a user’s consent. Browsers also could install web apps on the system without a user’s awareness and consent,” Apple’s FAQ said.

Despite the change, Apple said that “EU users will be able to continue accessing websites directly from their Home Screen through a bookmark with minimal impact to their functionality.”

Apple previously announced that its DMA compliance will bring sideloading to Europe, allowing developers to offer iOS apps from stores other than Apple’s official App Store.

Browser choice, security requirements

One browser-related change will be immediately obvious to EU users once they install the new iOS version. “When users in the EU first open Safari on iOS 17.4, they’ll be prompted to choose their default browser and presented with a list of the main web browsers available in their market to select as their default browser,” Apple’s developer FAQ said.

Apple said it had to prepare carefully for the requirement to let developers use alternative browser engines because browser engines “are constantly exposed to untrusted and potentially malicious content and have visibility into sensitive user data,” making them “one of the most common attack vectors for malicious actors.”

Apple said it is requiring developers who use alternative browser engines to meet certain security standards:

To help keep users safe online, Apple will only authorize developers to implement alternative browser engines after meeting specific criteria and committing to a number of ongoing privacy and security requirements, including timely security updates to address emerging threats and vulnerabilities. Apple will provide authorized developers of dedicated browser apps access to security mitigations and capabilities to enable them to build secure browser engines, and access features like passkeys for secure user login, multiprocess system capabilities to improve security and stability, web content sandboxes that combat evolving security threats, and more.

Overall, Apple said its DMA preparations have involved “an enormous amount of engineering work to add new functionality and capabilities for developers and users in the European Union—including more than 600 new APIs and a wide range of developer tools.”



Air Canada must honor refund policy invented by airline’s chatbot

Blame game —

Air Canada appears to have quietly killed its costly chatbot support.


After months of resisting, Air Canada was forced to give a partial refund to a grieving passenger who was misled by an airline chatbot inaccurately explaining the airline’s bereavement travel policy.

On the day Jake Moffatt’s grandmother died, Moffatt immediately visited Air Canada’s website to book a flight from Vancouver to Toronto. Unsure of how Air Canada’s bereavement rates worked, Moffatt asked Air Canada’s chatbot to explain.

The chatbot provided inaccurate information, encouraging Moffatt to book a flight immediately and then request a refund within 90 days. In reality, Air Canada’s policy explicitly stated that the airline will not provide refunds for bereavement travel after the flight is booked. Moffatt dutifully attempted to follow the chatbot’s advice and request a refund but was shocked that the request was rejected.

Moffatt tried for months to convince Air Canada that a refund was owed, sharing a screenshot from the chatbot that clearly claimed:

If you need to travel immediately or have already travelled and would like to submit your ticket for a reduced bereavement rate, kindly do so within 90 days of the date your ticket was issued by completing our Ticket Refund Application form.

Air Canada argued that because the chatbot response elsewhere linked to a page with the actual bereavement travel policy, Moffatt should have known bereavement rates could not be requested retroactively. Instead of a refund, the best Air Canada would do was to promise to update the chatbot and offer Moffatt a $200 coupon to use on a future flight.

Unhappy with this resolution, Moffatt refused the coupon and filed a small claims complaint in Canada’s Civil Resolution Tribunal.

Air Canada argued that Moffatt never should have trusted the chatbot and that the airline should not be liable for the chatbot’s misleading information because, as a court order put it, the airline essentially claimed that “the chatbot is a separate legal entity that is responsible for its own actions.”

Experts told the Vancouver Sun that Moffatt’s case appeared to be the first time a Canadian company tried to argue that it wasn’t liable for information provided by its chatbot.

Tribunal member Christopher Rivers, who decided the case in favor of Moffatt, called Air Canada’s defense “remarkable.”

“Air Canada argues it cannot be held liable for information provided by one of its agents, servants, or representatives—including a chatbot,” Rivers wrote. “It does not explain why it believes that is the case” or “why the webpage titled ‘Bereavement travel’ was inherently more trustworthy than its chatbot.”

Further, Rivers found that Moffatt had “no reason” to believe that one part of Air Canada’s website would be accurate and another would not.

Air Canada “does not explain why customers should have to double-check information found in one part of its website on another part of its website,” Rivers wrote.

In the end, Rivers ruled that Moffatt was entitled to a partial refund of $650.88 in Canadian dollars (CAD) off the original fare (about $482 USD), which was $1,640.36 CAD (about $1,216 USD), as well as additional damages to cover interest on the airfare and Moffatt’s tribunal fees.

Air Canada told Ars it will comply with the ruling and considers the matter closed.

Air Canada’s chatbot appears to be disabled

When Ars visited Air Canada’s website on Friday, there appeared to be no chatbot support available, suggesting that Air Canada has disabled the chatbot.

Air Canada did not respond to Ars’ request to confirm whether the chatbot is still part of the airline’s online support offerings.

Last March, Air Canada’s chief information officer Mel Crocker told the Globe and Mail that the airline had launched the chatbot as an AI “experiment.”

Initially, the chatbot was used to lighten the load on Air Canada’s call center when flights experienced unexpected delays or cancellations.

“So in the case of a snowstorm, if you have not been issued your new boarding pass yet and you just want to confirm if you have a seat available on another flight, that’s the sort of thing we can easily handle with AI,” Crocker told the Globe and Mail.

Over time, Crocker said, Air Canada hoped the chatbot would “gain the ability to resolve even more complex customer service issues,” with the airline’s ultimate goal to automate every service that did not require a “human touch.”

If Air Canada can use “technology to solve something that can be automated, we will do that,” Crocker said.

Air Canada was seemingly so invested in experimenting with AI that Crocker told the Globe and Mail that “Air Canada’s initial investment in customer service AI technology was much higher than the cost of continuing to pay workers to handle simple queries.” It was worth it, Crocker said, because “the airline believes investing in automation and machine learning technology will lower its expenses” and “fundamentally” create “a better customer experience.”

It’s now clear that for at least one person, the chatbot created a more frustrating customer experience.

Experts told the Vancouver Sun that Air Canada may have succeeded in avoiding liability in Moffatt’s case if its chatbot had warned customers that the information that the chatbot provided may not be accurate.

Because Air Canada seemingly failed to take that step, Rivers ruled that “Air Canada did not take reasonable care to ensure its chatbot was accurate.”

“It should be obvious to Air Canada that it is responsible for all the information on its website,” Rivers wrote. “It makes no difference whether the information comes from a static page or a chatbot.”



AMC to pay $8M for allegedly violating 1988 law with use of Meta Pixel

Stream like no one is watching —

Proposed settlement impacts millions using AMC apps like Shudder and AMC+.


On Thursday, AMC notified subscribers of a proposed $8.3 million settlement that provides awards to an estimated 6 million subscribers of its six streaming services: AMC+, Shudder, Acorn TV, ALLBLK, SundanceNow, and HIDIVE.

The settlement comes in response to allegations that AMC illegally shared subscribers’ viewing history with tech companies like Google, Facebook, and X (aka Twitter) in violation of the Video Privacy Protection Act (VPPA).

Passed in 1988, the VPPA prohibits AMC and other video service providers from sharing “information which identifies a person as having requested or obtained specific video materials or services from a video tape service provider.” It was originally passed to protect individuals’ right to private viewing habits, after a journalist published the mostly unrevealing video rental history of a judge, Robert Bork, who had been nominated to the Supreme Court by Ronald Reagan.

The so-called “Bork Tapes” revealed little—other than that the judge frequently rented spy thrillers and British costume dramas—but lawmakers recognized that speech could be chilled by monitoring anyone’s viewing habits. While the law was born in the era of Blockbuster Video, subscribers suing AMC wrote in their amended complaint that “the importance of legislation like the VPPA in the modern era of datamining is more pronounced than ever before.”

According to subscribers suing, AMC allegedly installed tracking technologies—including the Meta Pixel, the X Tracking Pixel, and Google Tracking Technology—on its website, allowing their personally identifying information to be connected with their viewing history.

Some trackers, like the Meta Pixel, required AMC to choose what kind of activity can be tracked, and subscribers claimed that AMC had willingly opted into sharing video names and URLs with Meta, along with a Facebook ID. “Anyone” could use the Facebook ID, subscribers said, to identify the AMC subscribers “simply by entering https://www.facebook.com/[unencrypted FID]/” into a browser.

X’s ID could similarly be de-anonymized, subscribers alleged, by using tweeterid.com.
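As a rough illustration of the de-anonymization step the subscribers describe, turning an unencrypted Facebook ID into a profile link is trivial string work. The event payload and ID below are hypothetical placeholders invented for this sketch, not data from the case:

```python
# Sketch of the mechanism alleged in the complaint: a tracking-pixel event
# that bundles a video title with an unencrypted Facebook ID (FID) lets
# anyone map that viewing record to a public profile URL.

def profile_url_from_fid(fid: str) -> str:
    """Build the facebook.com profile URL the complaint says 'anyone' could visit."""
    return f"https://www.facebook.com/{fid}/"

# Hypothetical pixel event (all values are placeholders):
pixel_event = {
    "video_title": "Example Horror Film",
    "page_url": "https://www.shudder.com/watch/example",
    "fid": "100000000000000",  # not a real Facebook ID
}

print(profile_url_from_fid(pixel_event["fid"]))
```

Pairing that URL with the video title and page URL carried in the same event is what would link a named person to a specific piece of viewing history, which is exactly the disclosure the VPPA prohibits.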

AMC “could easily program its AMC Services websites so that this information is not disclosed” to tech companies, subscribers alleged.

Denying wrongdoing, AMC has defended its use of tracking technologies but is proposing to settle with subscribers to avoid uncertain outcomes from litigation, the proposed settlement said.

A hearing to approve the proposed settlement has been scheduled for May 16.

If it’s approved, AMC has agreed to “suspend, remove, or modify operation of the Meta Pixel and other Third-Party Tracking Technologies so that use of such technologies on AMC Services will not result in AMC’s disclosure to the third-party technology companies of the specific video content requested or obtained by a specific individual.”

Google and X did not immediately respond to Ars’ request to comment. Meta declined to comment.

All registered users of AMC services who “requested or obtained video content on at least one of the six AMC services” between January 18, 2021, and January 10, 2024, are currently eligible to submit claims under the proposed settlement. The deadline to submit is April 9.

In addition to distributing the $8.3 million settlement fund among class members, subscribers will receive a free one-week digital subscription.

According to AMC’s notice to subscribers (full disclosure, I am one), AMC’s agreement to avoid sharing subscribers’ viewing histories may change if the VPPA is amended, repealed, or invalidated. If the law changes to permit sharing viewing data at the core of subscribers’ claim, AMC may resume sharing that information with tech companies.

That day could come soon if Patreon has its way. Recently, Patreon asked a federal judge to rule that the VPPA is unconstitutional.

Patreon’s lawsuit is similar in its use of the Meta Pixel, allegedly violating the VPPA by sharing video views on its platform with Meta.

Patreon has argued that the VPPA is unconstitutional because it chills speech. Patreon said that the law was enacted “for the express purpose of silencing disclosures about political figures and their video-watching, an issue of undisputed continuing public interest and concern.”

According to Patreon, the VPPA narrowly prohibits video service providers from sharing video titles, but not from sharing information that people may wish to keep private, such as “the genres, performers, directors, political views, sexual content, and every other detail of pre-recorded video that those consumers watch.”

Therefore, Patreon argued, the VPPA “restrains speech” while “doing little if anything to protect privacy” and never protecting privacy “by the least restrictive means.”

That lawsuit remains ongoing, but Patreon’s position is likely to be met with opposition from experts who typically also defend freedom of speech. Experts at the Electronic Privacy Information Center, like the AMC subscribers suing, consider the VPPA one of America’s “strongest protections of consumer privacy against a specific form of data collection.” And the Electronic Frontier Foundation (EFF) has already moved to convince the court to reject Patreon’s claim, describing the VPPA in a blog post as an “essential” privacy protection.

“EFF is second to none in fighting for everyone’s First Amendment rights in court,” EFF’s blog said. “But Patreon’s First Amendment argument is wrong and misguided. The company seeks to elevate its speech interests over those of Internet users who benefit from the VPPA’s protections.”



Musk’s X sold checkmarks to Hezbollah and other terrorist groups, report says

A photo of Elon Musk next to the logo for X, the social network formerly known as Twitter. (Getty Images | NurPhoto)

A watchdog group’s investigation found that terrorist group Hezbollah and other US-sanctioned entities have accounts with paid checkmarks on X, the Elon Musk-owned social network that still resides at the twitter.com domain.

The Tech Transparency Project (TTP), a nonprofit that is critical of Big Tech companies, said in a report today that “X, the platform formerly known as Twitter, is providing premium, paid services to accounts for two leaders of a US-designated terrorist group and several other organizations sanctioned by the US government.”

After buying Twitter for $44 billion, Musk started charging users for checkmarks that were previously intended to verify that an account was notable and authentic. “Along with the checkmarks, which are intended to confer legitimacy, X promises various perks for premium accounts, including the ability to post longer text and videos and greater visibility for some posts,” the Tech Transparency Project report noted.

The Tech Transparency Project suggests that X may be violating US sanctions. “The accounts identified by TTP include two that apparently belong to the top leaders of Lebanon-based Hezbollah and others belonging to Iranian and Russian state-run media,” the report said. “The fact that X requires users to pay a monthly or annual fee for premium service suggests that X is engaging in financial transactions with these accounts, a potential violation of US sanctions.”

Some of the accounts were verified before Musk bought Twitter, but verification was a free service at the time. Musk’s decision to charge for checkmarks means that X is “providing a premium, paid service to sanctioned entities,” which may raise “new legal issues,” the Tech Transparency Project said.

Report details 28 checkmarked accounts

Musk’s X charges $1,000 a month for a Verified Organizations subscription and last month added a basic tier for $200 a month. For individuals, the X Premium tiers that come with checkmarks cost $8 or $16 a month.

It’s possible for US companies to receive a license from the government to engage in certain transactions with sanctioned entities, but it doesn’t seem likely that X has such a license. X’s rules explicitly prohibit users from purchasing X Premium “if you are a person with whom X is not permitted to have dealings under US and any other applicable economic sanctions and trade compliance law.”

In all, the Tech Transparency Project said it found 28 “verified” accounts tied to sanctioned individuals or entities. These include individuals and groups listed by the US Treasury Department’s Office of Foreign Assets Control (OFAC) as “Specially Designated Nationals.”

“Of the 28 X accounts identified by TTP, 18 show they got verified after April 1, 2023, when X began requiring accounts to subscribe to paid plans to get a checkmark. The other 10 were legacy verified accounts, which are required to pay for a subscription to retain their checkmarks,” the group wrote, adding that it “found advertising in the replies to posts in 19 of the 28 accounts.”

We contacted X today and will update this article if we get a comment. Our email to [email protected] triggered the standard auto-reply from [email protected] that says, “Busy now, please check back later.”

Update at 4:28 pm ET: After this article was published, X issued the following statement: “X has a robust and secure approach in place for our monetization features, adhering to legal obligations, along with independent screening by our payments providers. Several of the accounts listed in the Tech Transparency Report are not directly named on sanction lists, while some others may have visible account check marks without receiving any services that would be subject to sanctions. Our teams have reviewed the report and will take action if necessary. We’re always committed to ensuring that we maintain a safe, secure and compliant platform.”



Backdoors that let cops decrypt messages violate human rights, EU court says

Building of the European Court of Human Rights in Strasbourg, France.

The European Court of Human Rights (ECHR) has ruled that weakening end-to-end encryption disproportionately risks undermining human rights. The international court’s decision could potentially disrupt the European Commission’s proposed plans to require email and messaging service providers to create backdoors that would allow law enforcement to easily decrypt users’ messages.

This ruling came after Russia’s intelligence agency, the Federal Security Service (FSB), began requiring Telegram to share users’ encrypted messages to deter “terrorism-related activities” in 2017, the ECHR’s ruling said. A Russian Telegram user alleged that the FSB’s requirement violated his rights to a private life and private communications, as well as all Telegram users’ rights.

The Telegram user was apparently disturbed enough to move to block the required disclosures after Telegram refused to comply with an FSB order to decrypt the messages of six users suspected of terrorism. According to Telegram, “it was technically impossible to provide the authorities with encryption keys associated with specific users,” and therefore, “any disclosure of encryption keys” would affect the “privacy of the correspondence of all Telegram users,” the ECHR’s ruling said.

For refusing to comply, Telegram was fined, and one court even ordered the app to be blocked in Russia, while dozens of Telegram users rallied to continue challenging the order to maintain Telegram services in Russia. Ultimately, users’ multiple court challenges failed, sending the case before the ECHR while Telegram services seemingly tenuously remained available in Russia.

The Russian government told the ECHR that “allegations that the security services had access to the communications of all users” were “unsubstantiated” because the FSB’s request concerned only six Telegram users.

They further argued that Telegram providing encryption keys to FSB “did not mean that the information necessary to decrypt encrypted electronic communications would become available to its entire staff.” Essentially, the government believed that FSB staff’s “duty of discretion” would prevent any intrusion on private life for Telegram users as described in the ECHR complaint.

Seemingly most critically, the government told the ECHR that any intrusion on private lives resulting from decrypting messages was “necessary” to combat terrorism in a democratic society. To back up this claim, the government pointed to a 2017 terrorist attack that was “coordinated from abroad through secret chats via Telegram.” The government claimed that a second terrorist attack that year was prevented after the government discovered it was being coordinated through Telegram chats.

However, privacy advocates backed up Telegram’s claims that the messaging services couldn’t technically build a backdoor for governments without impacting all its users. They also argued that the threat of mass surveillance could be enough to infringe on human rights. The European Information Society Institute (EISI) and Privacy International told the ECHR that even if governments never used required disclosures to mass surveil citizens, it could have a chilling effect on users’ speech or prompt service providers to issue radical software updates weakening encryption for all users.

In the end, the ECHR concluded that the Telegram user’s rights had been violated, partly due to privacy advocates and international reports that corroborated Telegram’s position that complying with the FSB’s disclosure order would force changes impacting all its users.

The “confidentiality of communications is an essential element of the right to respect for private life and correspondence,” the ECHR’s ruling said. Thus, requiring messages to be decrypted by law enforcement “cannot be regarded as necessary in a democratic society.”

Martin Husovec, a law professor who helped to draft EISI’s testimony, told Ars that EISI is “obviously pleased that the Court has recognized the value of encryption and agreed with us that state-imposed weakening of encryption is a form of indiscriminate surveillance because it affects everyone’s privacy.”



Judge rejects most ChatGPT copyright claims from book authors

Insufficient evidence —

OpenAI plans to defeat authors’ remaining claim at a “later stage” of the case.


A US district judge in California has largely sided with OpenAI, dismissing the majority of claims raised by authors alleging that large language models powering ChatGPT were illegally trained on pirated copies of their books without their permission.

By repackaging original works as ChatGPT outputs, the authors alleged, OpenAI’s most popular chatbot was just a high-tech “grift” that violated copyright laws, as well as state laws preventing unfair business practices and unjust enrichment.

According to Judge Araceli Martínez-Olguín, the authors behind three separate lawsuits—including Sarah Silverman, Michael Chabon, and Paul Tremblay—have failed to provide evidence supporting any of their claims except for direct copyright infringement.

OpenAI had argued as much in its motion to dismiss these cases, filed last August. At that time, OpenAI said it expected to beat the direct infringement claim at a “later stage” of the proceedings.

Among copyright claims tossed by Martínez-Olguín were accusations of vicarious copyright infringement. Perhaps most significantly, Martínez-Olguín agreed with OpenAI that the authors’ allegation that “every” ChatGPT output “is an infringing derivative work” is “insufficient” to allege vicarious infringement, which requires evidence that ChatGPT outputs are “substantially similar” or “similar at all” to authors’ books.

“Plaintiffs here have not alleged that the ChatGPT outputs contain direct copies of the copyrighted books,” Martínez-Olguín wrote. “Because they fail to allege direct copying, they must show a substantial similarity between the outputs and the copyrighted materials.”

Authors also failed to convince Martínez-Olguín that OpenAI violated the Digital Millennium Copyright Act (DMCA) by allegedly removing copyright management information (CMI)—such as author names, titles of works, and terms and conditions for use of the work—from training data.

This claim failed because authors cited “no facts” that OpenAI intentionally removed the CMI or built the training process to omit CMI, Martínez-Olguín wrote. Further, the authors cited examples of ChatGPT referencing their names, which would seem to suggest that some CMI remains in the training data.

Some of the remaining claims depended on the copyright claims to survive, Martínez-Olguín wrote.

The claim that OpenAI caused economic injury by unfairly repurposing authors’ works also failed: even if authors could show evidence of a DMCA violation, they could only speculate about what injury was caused, the judge said.

Similarly, allegations of “fraudulent” unfair conduct—accusing OpenAI of “deceptively” designing ChatGPT to produce outputs that omit CMI—”rest on a violation of the DMCA,” Martínez-Olguín wrote.

The only claim under California’s unfair competition law that was allowed to proceed alleged that OpenAI used copyrighted works to train ChatGPT without authors’ permission. Because the state law broadly defines what’s considered “unfair,” Martínez-Olguín said that it’s possible that OpenAI’s use of the training data “may constitute an unfair practice.”

Remaining claims of negligence and unjust enrichment failed, Martínez-Olguín wrote, because authors only alleged intentional acts and did not explain how OpenAI “received and unjustly retained a benefit” from training ChatGPT on their works.

Authors have been ordered to consolidate their complaints and have until March 13 to amend arguments and continue pursuing any of the dismissed claims.

To shore up the tossed copyright claims, authors would likely need to provide examples of ChatGPT outputs that are similar to their works, as well as evidence of OpenAI intentionally removing CMI to “induce, enable, facilitate, or conceal infringement,” Martínez-Olguín wrote.

Ars could not immediately reach the authors’ lawyers or OpenAI for comment.

As authors likely prepare to continue fighting OpenAI, the US Copyright Office has been fielding public input before releasing guidance that could one day help rights holders pursue legal claims and may eventually require works to be licensed from copyright owners for use as training materials. Among the thorniest questions is whether AI tools like ChatGPT should be considered authors of outputs that end up included in creative works.

While the Copyright Office prepares to release three reports this year “revealing its position on copyright law in relation to AI,” according to The New York Times, OpenAI recently made it clear that it does not plan to stop referencing copyrighted works in its training data. Last month, OpenAI said it would be “impossible” to train AI models without copyrighted materials, because “copyright today covers virtually every sort of human expression—including blogposts, photographs, forum posts, scraps of software code, and government documents.”

According to OpenAI, it doesn’t just need old copyrighted materials; it needs current copyrighted materials to ensure that the outputs of chatbots and other AI tools “meet the needs of today’s citizens.”

Rights holders will likely be bracing throughout this confusing time, waiting for the Copyright Office’s reports. But once there is clarity, those reports could “be hugely consequential, weighing heavily in courts, as well as with lawmakers and regulators,” The Times reported.



Cryptocurrency maker sues former Ars reporter for writing about fraud lawsuit

Image from Bitcoin Latinum’s website (credit: Bitcoin Latinum)

The cryptocurrency firm Bitcoin Latinum has sued journalists at Forbes and Poker.org, claiming that the writers made false and defamatory statements in articles that described securities fraud lawsuits filed against the crypto firm.

Bitcoin Latinum and its founder, Donald Basile, filed a libel lawsuit against Forbes reporter Cyrus Farivar and another libel lawsuit against Poker.org and its reporter Haley Hintze. (Farivar was a long-time Ars Technica reporter.)

The lawsuits are surprising because the Forbes article and the Poker.org article, both published in 2022, are very much like thousands of other news stories that describe allegations in a lawsuit. In both articles, it is clear that the allegations come from the filer of the lawsuit and not from the author of the article.

But both of Bitcoin Latinum’s lawsuits, which were filed last week in Delaware’s Court of Chancery, demand that the articles be retracted. They contain the following claim in exactly the same words:

The Article contains statements which insinuate and lead the reader to believe that Assofi’s allegations against Plaintiff Latinum and Plaintiff Basile are factual and correct, and which statements are not couched as the opinion of the author, but rather, are presented as fact, and therefore do not fall under any applicable privilege.

“Assofi’s allegations” are those made in a lawsuit filed against Bitcoin Latinum and Basile in November 2022. That lawsuit from Arshad Assofi, who said he lost over $15 million investing in worthless tokens, alleged that Bitcoin Latinum “is a scam” and accused the defendants of securities fraud and other violations. Bitcoin Latinum calls itself “the future of Bitcoin.”

Lawsuit cites wrong article

It’s especially surprising that Bitcoin Latinum’s lawsuit against Hintze contains the statement about “Assofi’s allegations” because the Hintze article cited in the lawsuit never mentions Assofi. The Hintze article on Poker.org is about a different lawsuit from different plaintiffs who also alleged securities fraud.

In fact, the Hintze article was published in February 2022, 10 months before the Assofi lawsuit was filed. Techdirt’s Mike Masnick pointed out this error in an article yesterday:

It appears that Latinum’s lawyer actually meant to sue over a different Poker.org article, that was published in November about the Assofi lawsuit, but repeatedly claims that the article was published on February 5, 2022, rather than the actual publication date of the article she meant, which was November 21, 2022. Also, Latinum’s lawyer included the February 5th article as the exhibit, rather than the November 21st article. Such attention to detail to talk about the wrong article and include the wrong article as an exhibit. Top notch lawyering.

Masnick also points out that the statute of limitations is two years, and the lawsuit against Hintze was filed more than two years after her February 2022 article.

In libel cases, journalists may defend themselves with the “fair report privilege.” This applies to accurate reporting on official government matters, including court proceedings.

The lawyer for Bitcoin Latinum in the Farivar and Hintze cases is Holly Whitney, who specializes in estate planning and probate cases. We contacted Whitney and Bitcoin Latinum about the lawsuits today and will update this article if we get a response.



Apple’s iMessage is not a “core platform” in EU, so it can stay walled off

Too core to fail —

Microsoft’s Edge browser, Bing search, and ad business also avoid regulations.

Apple Messages in a Mac dock (credit: Getty Images)

Apple’s iMessage service is not a “gatekeeper” prone to unfair business practices and will thus not be required under the Digital Markets Act to open up to messages, files, and video calls from other services, the European Commission announced earlier today.

Apple was one of several companies, including Amazon, Alphabet (Google’s parent company), Meta, and Microsoft, to have its “gatekeeper” status investigated by the European Union. The iMessage service did meet the definition of a “core platform,” serving at least 45 million EU users monthly and being controlled by a firm with at least 75 billion euros in market capitalization. But after “a thorough assessment of all arguments” during a five-month investigation, the Commission found that iMessage and Microsoft’s Bing search, Edge browser, and ad platform “do not qualify as gatekeeper services.” The unlikelihood of EU demands on iMessage was apparent in early December, when Bloomberg reported that the service didn’t have enough sway with business users to warrant regulation.

Had the Commission ruled otherwise, Apple would have had until August to open up its service. It would have been interesting to see how the company would have complied, given that iMessage provides end-to-end encryption and identifies senders based on information from their registered Apple devices.

Google had pushed the Commission to force Apple into “gatekeeper status,” part of Google’s larger campaign to make Apple treat Android users better when they trade SMS messages with iPhone users. While Apple has agreed to adopt RCS, an upgraded form of carrier messaging with typing indicators and better image and video quality, it will not provide encryption for Android-to-iPhone SMS, nor remove the harsh green coloring that marks Android users’ messages, a distinction that particularly resonates with younger users.

Apple must still comply with the Digital Markets Act’s other requirements for its iOS operating system, its App Store, and its Safari browser. The European Union version of iOS 17.4, due in March, will offer “alternative app marketplaces,” or sideloading, along with tools that let those other app stores provide updates and other services. Browsers on iOS will also be able to use their own rendering engines rather than providing features only on top of mobile Safari’s rendering. Microsoft, among other firms, will make similar concessions in certain areas of Europe with Windows 11 and other products.

While it’s unlikely to result in the same kind of action, Brendan Carr, a commissioner at the Federal Communications Commission, said at a conference yesterday that the FCC “has a role to play” in investigating whether Apple’s blocking of the Beeper Mini app violated Part 14 rules regarding accessibility and usability. “I think the FCC should launch an investigation to look at whether Apple’s decision to degrade the Beeper Mini functionality… was a step that violated the FCC’s rules in Part 14,” Carr said at the State of the Net policy conference in Washington, DC.

Beeper Mini launched with the ability for Android users to send fully encrypted iMessage messages to Apple users, based on reverse-engineering of its protocol and registration. Days after its launch, Apple blocked its users and issued a statement saying that it was working to stop exploits and spam. The blocking and workarounds continued until Beeper announced that it was shifting its focus away from iMessage and back to being a multi-service chat app, minus one particular service. Beeper’s experience had previously garnered recognition from Senators Elizabeth Warren (D-Mass.) and Amy Klobuchar (D-Minn.).

Ars has reached out to Apple, Microsoft, and Google for comment and will update this post if we receive responses.



Amazon hides cheaper items with faster delivery, lawsuit alleges

A game of hide-and-seek —

Hundreds of millions of Amazon’s US customers have overpaid, class action says.


Amazon rigged its platform to “routinely” push an overwhelming majority of customers to pay more for items that could’ve been purchased at lower costs with equal or faster delivery times, a class-action lawsuit has alleged.

The lawsuit claims that a biased algorithm drives Amazon’s “Buy Box,” which appears on an item’s page and prompts shoppers to “Buy Now” or “Add to Cart.” According to customers suing, nearly 98 percent of Amazon sales are of items featured in the Buy Box, because customers allegedly “reasonably” believe that featured items offer the best deal on the platform.

“But they are often wrong,” the complaint said, claiming that instead, Amazon features items from its own retailers and sellers that participate in Fulfillment By Amazon (FBA), both of which pay Amazon higher fees and gain secret perks like appearing in the Buy Box.

“The result is that consumers routinely overpay for items that are available at lower prices from other sellers on Amazon—not because consumers don’t care about price, or because they’re making informed purchasing decisions, but because Amazon has chosen to display the offers for which it will earn the highest fees,” the complaint said.

Authorities in the US and the European Union have investigated Amazon’s allegedly anticompetitive Buy Box algorithm, confirming that it’s “favored FBA sellers since at least 2016,” the complaint said. In 2021, Amazon was fined more than $1 billion by the Italian Competition Authority over these unfair practices, and in 2022, the European Commission ordered Amazon to “apply equal treatment to all sellers when deciding what to feature in the Buy Box.”

These investigations served as the first public notice that Amazon’s Buy Box couldn’t be trusted, customers suing said. Amazon claimed that the algorithm was fixed in 2020, but so far, Amazon does not appear to have addressed all concerns over its Buy Box algorithm. As of 2023, European regulators have continued pushing Amazon “to take further action to remedy its Buy Box bias in their respective jurisdictions,” the customers’ complaint said.

The class action was filed by two California-based long-time Amazon customers, Jeffrey Taylor and Robert Selway. Both feel that Amazon “willfully” and “deceptively” tricked them and hundreds of millions of US customers into purchasing the featured item in the Buy Box when better deals existed.

Taylor and Selway’s lawyer, Steve Berman, told Reuters that Amazon has placed “a great burden” on its customers, who must invest more time on the platform to identify the best deals. Unlike other lawsuits over Amazon’s Buy Box, this is the first lawsuit to seek compensation over harms to consumers, not over antitrust concerns or harms to sellers, Reuters noted.

The lawsuit has been filed on behalf of “all persons who made a purchase using the Buy Box from 2016 to the present.” Because Amazon supposedly “frequently” features more expensive items in the Buy Box and most sales result from Buy Box placements, the plaintiffs allege that “the chances that any Class member was unharmed by one or more purchases is virtually non-existent.”

“Our team expects the class to include hundreds of millions of Amazon consumers because virtually all purchases are made from the Buy Box,” a spokesperson for plaintiffs’ lawyers told Ars.

Customers suing are hoping that a jury will decide that Amazon continues to “deliberately steer” customers to purchase higher-priced items in the Buy Box to spike its own profits. They’ve asked a US district court in Washington, where Amazon is based, to permanently stop Amazon from using allegedly biased algorithms to drive sales through its Buy Box.

The extent of damages that Amazon could owe is currently unknown but appears significant. An estimated 80 percent of Amazon’s 300 million users are US subscribers, each allegedly overpaying on most of their purchases over the past seven years. Last year, Amazon’s US sales exceeded $574 billion.

“Amazon claims to be a ‘customer-centric’ company that works to offer the lowest prices to its customers, but in violation of the Washington Consumer Protection Act, Amazon employs a deceptive scheme to keep its profits—and consumer prices—high,” the customers’ lawsuit alleged.



US funds $5B chip effort after lagging on semiconductor innovation

Now hiring? —

US had failed to fund the “science half” of CHIPS and Science Act, critic said.

US President Joe Biden speaks before signing the CHIPS and Science Act of 2022.

The Biden administration announced investments Friday totaling more than $5 billion in semiconductor research and development, intended to re-establish the US as a global leader in manufacturing the “next generation of semiconductor technologies.”

Through sizeable investments, the US will “advance US leadership in semiconductor R&D, cut down on the time and cost of commercializing new technologies, bolster US national security, and connect and support workers in securing good semiconductor jobs,” a White House press release said.

Currently, the US produces “less than 10 percent” of the global chips supply and “none of the most advanced chips,” the White House said. But investing in programs like the National Semiconductor Technology Center (NSTC)—considered the “centerpiece” of the CHIPS and Science Act’s four R&D programs—and training a talented workforce could significantly increase US production of semiconductors that the Biden administration described as the “backbone of the modern economy.”

The White House projected that the NSTC’s workforce activities would launch in the summer of 2024. The Center’s prime directive will be developing new semiconductor technologies by “supporting design, prototyping, and piloting and through ensuring innovators have access to critical capabilities.”

Moving forward, the NSTC will operate as a public-private consortium, involving both government and private sector institutions, the White House confirmed. It will be run by a recently established nonprofit called the National Center for the Advancement of Semiconductor Technology (Natcast), which will coordinate with the secretaries of Commerce, Defense, and Energy, as well as the National Science Foundation’s director. Any additional stakeholders can provide input on the NSTC’s goals by joining the NSTC Community of Interest at no cost.

The National Institute of Standards and Technology (NIST) has explained why achieving the NSTC’s mission to develop cutting-edge semiconductor technology in the US will not be easy:

The smallest dimensions of leading-edge semiconductor devices have reached the atomic scale and the complexity of the circuit architecture is increasing exponentially with the use of three-dimensional structures, the incorporation of new materials, and improvements in the thousands of process steps needed to make advanced chips. Into the future, as new applications demand higher-performance semiconductors, their design and production will become even more complex. This complexity makes it increasingly difficult and costly to implement innovations because of the dependencies between design and manufacturing, between manufacturing steps, and between front-end and back-end processes.

The complexity of keeping up with semiconductor tech is why it’s critical for the US to create clear pathways for skilled workers to break into this burgeoning industry. The Biden administration said it plans to invest “at least hundreds of millions of dollars in the NSTC’s workforce efforts,” creating a Workforce Center of Excellence with locations throughout the US and piloting new training programs, including initiatives engaging underserved communities. The Workforce Center will start by surveying best practices in semiconductor education programs, then establish a baseline program to attract workers seeking dependable paths to break into the industry.

Last year, the Semiconductor Industry Association (SIA) released a study showing that the US was not adequately preparing a highly skilled workforce. Some “67,000, or 58 percent, of projected new jobs, may remain unfulfilled at the current trajectory,” SIA estimated.

A skilled workforce is just part of the equation, though. The US also needs facilities where workers can experiment with new technologies without breaking the bank. To that end, the Department of Commerce announced it would be investing “at least $200 million” in a first-of-its-kind CHIPS Manufacturing USA Institute. That institute will “allow innovators to replicate and experiment with physical manufacturing processes at low cost.”

Other Commerce Department investments announced include “up to $300 million” for advanced packaging R&D necessary for discovering new applications for semiconductor technologies and over $100 million in funding for dozens of projects to help inventors “more easily scale innovations into commercial products.”

A Commerce Department spokesperson told Ars that “the location of the NSTC headquarters has not yet been determined” but will “directly support the NSTC research strategy and give engineers, academics, researchers, engineers at startups, small and large companies, and workforce developers the capabilities they need to innovate.” In 2024, NSTC’s efforts to kick off research appear modest, with the center expecting to prioritize engaging community members and stakeholders, launching workforce programs, and identifying early start research programs.

So far, Biden’s efforts to ramp up semiconductor manufacturing in the US have not gone smoothly. Earlier this year, TSMC predicted further delays at chips plants under construction in Arizona and confirmed that the second plant would not be able to manufacture the most advanced chips, as previously expected.

That news followed criticism from private entities last year. In November, Nvidia CEO Jensen Huang predicted that the US was “somewhere between a decade and two decades away” from semiconductor supply chain independence. The US Chamber of Commerce said last August that the US remained so far behind because it had failed to prioritize funding for the “science half” of the CHIPS and Science Act.

In 2024, the Biden administration appears to be attempting to finally start funding a promised $11 billion total in research and development efforts. Once NSTC kicks off research, the pressure will be on to chase the Center’s highest ambition of turning the US into a consistent birthplace of life-changing semiconductor technologies once again.



Canada declares Flipper Zero public enemy No. 1 in car-theft crackdown

FLIPPING YOUR LID —

How do you ban a device built with open source hardware and software anyway?

A Flipper Zero device (image: https://flipperzero.one/)

Canadian Prime Minister Justin Trudeau has identified an unlikely public enemy No. 1 in his new crackdown on car theft: the Flipper Zero, a $200 piece of open source hardware used to capture, analyze, and interact with simple radio communications.

On Thursday, the Innovation, Science and Economic Development Canada agency said it will “pursue all avenues to ban devices used to steal vehicles by copying the wireless signals for remote keyless entry, such as the Flipper Zero, which would allow for the removal of those devices from the Canadian marketplace through collaboration with law enforcement agencies.” A social media post by François-Philippe Champagne, the minister of that agency, said that as part of the push “we are banning the importation, sale and use of consumer hacking devices, like flippers, used to commit these crimes.”

In remarks made the same day, Trudeau said the push will target similar tools that he said can be used to defeat anti-theft protections built into virtually all new cars.

“In reality, it has become too easy for criminals to obtain sophisticated electronic devices that make their jobs easier,” he said. “For example, to copy car keys. It is unacceptable that it is possible to buy tools that help car theft on major online shopping platforms.”

Presumably, such tools subject to the ban would include HackRF One and LimeSDR, which have become crucial for analyzing and testing the security of all kinds of electronic devices to find vulnerabilities before they’re exploited. None of the government officials identified any of these tools, but in an email, a representative of the Canadian government reiterated the use of the phrase “pursuing all avenues to ban devices used to steal vehicles by copying the wireless signals for remote keyless entry.”

A humble hobbyist device

The push to ban any of these tools has been met with fierce criticism from hobbyists and security professionals. Their case has only been strengthened by Trudeau’s focus on the Flipper Zero. This slim, lightweight device bearing the logo of an adorable dolphin acts as a Swiss Army knife for sending, receiving, and analyzing all kinds of wireless communications. It can interact with radio signals, including RFID, NFC, Bluetooth, Wi-Fi, and standard radio. People can use it to covertly change the channels of a TV at a bar, clone simple hotel key cards, read the RFID chip implanted in pets, open and close some garage doors, and, until Apple issued a patch, send iPhones into a never-ending DoS loop.

The price and ease of use make the Flipper Zero ideal for beginners and hobbyists who want to understand how increasingly ubiquitous communications protocols such as NFC and Wi-Fi work. It bundles various open source hardware and software into a portable form factor that sells for an affordable price. What appears lost on the Canadian government is that the device isn’t especially useful for stealing cars, because it lacks the more advanced capabilities required to bypass the anti-theft protections introduced over the past two decades.

One thing the Flipper Zero is exceedingly ill-equipped for is defeating modern antihack protections built into cars, smartcards, phones, and other electronic devices.

The most prevalent form of electronics-assisted car theft these days, for instance, uses what are known as signal amplification relay devices against keyless ignition and entry systems. This form of hack works by holding one device near a key fob and a second device near the vehicle the fob works with. In the most typical scenario, the fob is located on a shelf near a locked front door, and the car is several dozen feet away in a driveway. By placing one device near the front door and another next to the car, thieves relay the radio signals necessary to unlock and start the vehicle.
