Policy


50 injured on Boeing 787 as “strong shake” reportedly sent heads into ceiling

Boeing nosedive —

LATAM Airlines said “technical event” in mid-flight “caused a strong movement.”

[Image: A LATAM Airlines Boeing 787-9 Dreamliner taxiing at Arturo Merino Benítez International Airport in Chile on March 20, 2019. Credit: Getty Images | SOPA Images]

About 50 people were injured on a LATAM Airlines flight today when a Boeing 787-9 Dreamliner suffered a technical problem that caused a “strong shake,” reportedly sending some passengers’ heads into the ceiling.

The plane flying from Australia to New Zealand “experienced a strong shake during flight, the cause of which is currently under investigation,” LATAM said on its website today. LATAM, a Chilean airline, was also quoted in news reports as saying the plane suffered “a technical event during the flight which caused a strong movement.”

The Boeing plane, carrying 263 passengers and nine flight and cabin crew members, landed at Auckland Airport as scheduled. New Zealand ambulance service Hato Hone St. John published a statement saying that its “ambulance crews assessed and treated approximately 50 patients, with one patient in a serious condition and the remainder in a moderate to minor condition.” Twelve patients were taken to hospitals, the statement said.

Most of the patients were “discharged shortly after,” LATAM said on its website. “Only one passenger and one cabin crew member required additional attention, but without any life-threatening risks.”

The plane was originally supposed to continue from New Zealand to Chile, but that leg of the trip was rescheduled. LATAM said it is “working in coordination with the respective authorities to support the investigations into the incident.”

Boeing told news outlets that it is “working to gather more information about the flight and will provide any support needed by our customers.” We contacted Boeing today and will update this article if it provides more information.

Passenger describes nosedive, people hitting the ceiling

Passenger Brian Jokat described the frightening incident in interviews with several media outlets. “The ceiling’s broken from people’s heads and bodies hitting it,” Jokat said, according to ABC News. “Basically neck braces were being put on people, guys’ heads were cut and they were bleeding. It was just crazy.”

Jokat was also quoted as saying that he “felt the plane take a nosedive—it felt like it was at the top of a roller coaster, and then it flattened out again.” It all happened in “split seconds,” he reportedly said.

Today’s flight came about two months after a near-disaster involving a Boeing 737 Max 9 plane used by Alaska Airlines. On January 5, the plane was forced to return to Portland International Airport in Oregon after a passenger door plug blew off the aircraft during flight.

The National Transportation Safety Board concluded that four bolts securing the door plug were missing from the plane. The Justice Department has opened a criminal investigation into the incident, The Wall Street Journal reported Saturday.

Boeing was seeking a safety exemption from the US Federal Aviation Administration related to its 737 Max 7 aircraft, but withdrew the application in January after the 737 Max 9 door-plug blowout.


Nvidia sued over AI training data as copyright clashes continue

In authors’ bad books —

Copyright suits over AI training data reportedly decreasing AI transparency.


Book authors are suing Nvidia, alleging that the chipmaker’s AI platform NeMo—used to power customized chatbots—was trained on a controversial dataset that illegally copied and distributed their books without their consent.

In a proposed class action, novelists Abdi Nazemian (Like a Love Story), Brian Keene (Ghost Walk), and Stewart O’Nan (Last Night at the Lobster) argued that Nvidia should pay damages and destroy all copies of the Books3 dataset used to power NeMo large language models (LLMs).

The Books3 dataset, the novelists argued, copied “all of Bibliotik,” a shadow library of approximately 196,640 pirated books. Initially shared through the AI community Hugging Face, the Books3 dataset today “is defunct and no longer accessible due to reported copyright infringement,” the Hugging Face website says.

According to the authors, Hugging Face removed the dataset last October, but not before AI companies like Nvidia grabbed it and “made multiple copies.” The authors alleged that by training NeMo models on this dataset, Nvidia “violated their exclusive rights under the Copyright Act.” They argued that the US district court in San Francisco must intervene and stop Nvidia because the company “has continued to make copies of the Infringed Works for training other models.”

A Hugging Face spokesperson clarified to Ars that “Hugging Face never removed this dataset, and we did not host the Books3 dataset on the Hub.” Instead, “Hugging Face hosted a script that downloads the data from The Eye, which is the place where Eleuther hosted the data,” until “Eleuther removed the data from The Eye” over copyright concerns, causing the dataset script on Hugging Face to break.
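
Hugging Face’s description points at an indirection worth spelling out: the hub hosted only a small loader script, while the actual books lived on a third-party host. Here is a minimal sketch of that pattern in Python; the URL is hypothetical, not the real Books3 location, and this is an illustration of the general mechanism rather than the actual script.

```python
# Sketch of a dataset "loader": the script itself contains no books; it just
# fetches an archive from an external host. If that host takes the file down,
# the loader breaks -- which matches Hugging Face's account of what happened
# when the data was removed from The Eye. The URL below is hypothetical.
import requests

BOOKS3_URL = "https://external-host.example/books3.tar.gz"  # hypothetical

def download_books3(url: str = BOOKS3_URL) -> bytes:
    resp = requests.get(url, timeout=60)
    resp.raise_for_status()  # raises an HTTPError (e.g., 404) once the host removes the file
    return resp.content
```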

Nvidia did not immediately respond to Ars’ request to comment.

Demanding a jury trial, authors are hoping the court will rule that Nvidia has no possible defense for both allegedly violating copyrights and intending “to cause further infringement” by distributing NeMo models “as a base from which to build further models.”

AI models decreasing transparency amid suits

The class action was filed by the same legal team representing authors suing OpenAI, whose lawsuit recently saw many claims dismissed, but crucially not their claim of direct copyright infringement. Lawyers told Ars last month that authors would be amending their complaints against OpenAI and were “eager to move forward and litigate” their direct copyright infringement claim.

In that lawsuit, the authors alleged copyright infringement both when OpenAI trained LLMs and when chatbots referenced books in outputs. But authors seemed more concerned about alleged damages from chatbot outputs, warning that AI tools had an “uncanny ability to generate text similar to that found in copyrighted textual materials, including thousands of books.”

Uniquely, in the Nvidia suit, authors are focused exclusively on Nvidia’s training data, seemingly concerned that Nvidia could empower businesses to create any number of AI models from the controversial dataset, allegedly infringing the works of thousands of authors through the training process alone.

There’s no telling yet how courts will rule on the direct copyright claims in either lawsuit—or in the New York Times’ lawsuit against OpenAI—but so far, OpenAI has failed to convince courts to toss claims aside.

However, OpenAI doesn’t appear very shaken by the lawsuits. In February, OpenAI said that it expected to beat book authors’ direct copyright infringement claim at a “later stage” of the case and, most recently in the New York Times case, tried to convince the court that NYT “hacked” ChatGPT to “set up” the lawsuit.

And Microsoft, a co-defendant in the NYT lawsuit, even more recently introduced a new argument that could help tech companies defeat copyright suits over LLMs. Last month, Microsoft argued that The New York Times was attempting to stop a “groundbreaking new technology” and would fail, just like movie producers attempting to kill off the VCR in the 1980s.

“Despite The Times’s contentions, copyright law is no more an obstacle to the LLM than it was to the VCR (or the player piano, copy machine, personal computer, Internet, or search engine),” Microsoft wrote.

In December, Hugging Face’s machine learning and society lead, Yacine Jernite, noted that developers appeared to be growing less transparent about training data after copyright lawsuits raised red flags about companies using the Books3 dataset, “especially for commercial models.”

Meta, for example, “limited the amount of information [it] disclosed about” its LLM, Llama-2, “to a single paragraph description and one additional page of safety and bias analysis—after [its] use of the Books3 dataset when training the first Llama model was brought up in a copyright lawsuit,” Jernite wrote.

Jernite warned that AI models lacking transparency could hinder “the ability of regulatory safeguards to remain relevant as training methods evolve, of individuals to ensure that their rights are respected, and of open science and development to play their role in enabling democratic governance of new technologies.” To support “more accountability,” Jernite recommended “minimum meaningful public transparency standards to support effective AI regulation,” as well as companies providing options for anyone to opt out of their data being included in training data.

“More data transparency supports better governance and fosters technology development that more reliably respects peoples’ rights,” Jernite wrote.


Apple and Tesla feel the pain as China opts for homegrown products

[Image: Domestically made smartphones were much in evidence at the National People’s Congress in Beijing. Credit: Wang Zhao/AFP/Getty Images]

Apple and Tesla cracked China, but now the two largest US consumer companies in the country are experiencing cracks in their own strategies as domestic rivals gain ground and patriotic buying often trumps their allure.

Falling market share and sales figures reported this month indicate the two groups face rising competition and the whiplash of US-China geopolitical tensions. Both have turned to discounting to try to maintain their appeal.

A shift away from Apple, in particular, has been sharp, spurred on by a top-down campaign to reduce iPhone usage among state employees and the triumphant return of Chinese national champion Huawei, which last year overcame US sanctions to roll out a homegrown smartphone capable of near 5G speeds.

Apple’s troubles were on full display at China’s annual Communist Party bash in Beijing this month, where a dozen participants told the Financial Times they were using phones from Chinese brands.

“For people coming here, they encourage us to use domestic phones, because phones like Apple are not safe,” said Zhan Wenlong, a nuclear physicist and party delegate. “[Apple phones] are made in China, but we don’t know if the chips have back doors.”

Wang Chunru, a member of China’s top political advisory body, the Chinese People’s Political Consultative Conference, said he was using a Huawei device. “We all know Apple has eavesdropping capabilities,” he said.

Delegate Li Yanfeng from Guangxi said her phone was manufactured by Huawei. “I trust domestic brands, using them was a uniform request.”


Outside of the US, China is both Apple and Tesla’s single-largest market, respectively contributing 19 percent and 22 percent of total revenues during their most recent fiscal years. Their mounting challenges in the country have caught Wall Street’s attention, contributing to Apple’s 9 percent share price slide this year and Tesla’s 28 percent fall, making them the poorest performers among the so-called Magnificent Seven tech stocks.

Apple and Tesla are the latest foreign companies to feel the pain of China’s shift toward local brands. Sales of Nike and Adidas clothing have yet to return to their 2021 peak. A recent McKinsey report showed a growing preference among Chinese consumers for local brands.


Op-ed: Charges against journalist Tim Burke are a hack job

Permission required? —

Burke was indicted after sharing outtakes of a Fox News interview.


Caitlin Vogus is the deputy director of advocacy at Freedom of the Press Foundation and a First Amendment lawyer. Jennifer Stisa Granick is the surveillance and cybersecurity counsel with the ACLU’s Speech, Privacy, and Technology Project. The opinions in this piece do not necessarily reflect the views of Ars Technica.

Imagine a journalist finds a folder on a park bench, opens it, and sees a telephone number inside. She dials the number. A famous rapper answers and spews a racist rant. If no one gave her permission to open the folder and the rapper’s telephone number was unlisted, should the reporter go to jail for publishing what she heard?

If that sounds ridiculous, it’s because it is. And yet, add in a computer and the Internet, and that’s basically what a newly unsealed federal indictment accuses Florida journalist Tim Burke of doing when he found and disseminated outtakes of Tucker Carlson’s Fox News interview with Ye, the artist formerly known as Kanye West, in which Ye went on the first of many antisemitic diatribes.

The vast majority of the charges against Burke are under the Computer Fraud and Abuse Act (CFAA), a law that the ACLU and Freedom of the Press Foundation have long argued is vague and subject to abuse. Now, in a new and troubling move, the government suggests in the Burke indictment that journalists violate the CFAA if they don’t ask for permission to use information they find publicly posted on the Internet.

According to news reports and statements from Burke’s lawyer, the charges are, in part, related to the unaired segments of the interview between Carlson and Ye. After Burke gave the video to news sites to publish, Ye’s disturbing remarks, and Fox’s decision to edit them out of the interview when broadcast, quickly made national news.

According to Burke, the video of Carlson’s interview with Ye was streamed via a publicly available, unencrypted URL that anyone could access by typing the address into their browser. Those URLs were not listed in any search engine, but Burke says that a source pointed him to a website on the Internet Archive where a radio station had posted “demo credentials” that gave access to a page where the URLs were listed.

The credentials were for a webpage created by LiveU, a company that provides video streaming services to broadcasters. Using the demo username and password, Burke logged into the website, and, Burke’s lawyer claims, the list of URLs for video streams automatically downloaded to his computer.
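
For readers unfamiliar with the mechanics, what the indictment describes is technically unremarkable. A minimal sketch follows, with hypothetical endpoints and credentials standing in for LiveU’s actual systems; this is not Burke’s tooling, just an illustration of the two steps.

```python
# Neither request below bypasses any technical barrier: the first presents
# credentials that were published openly, and the second fetches a URL the
# server is configured to serve to anyone who asks. All addresses and
# credentials here are hypothetical.
import requests

session = requests.Session()

# Log in with the publicly posted "demo credentials"
session.post(
    "https://broadcast-portal.example.com/login",
    data={"username": "demo", "password": "demo"},
)

# Fetch an unencrypted, unlisted stream URL; no credential is required here
resp = session.get("http://streams.example.com/feeds/12345.m3u8")
print(resp.status_code)  # 200 means the server chose to serve the content
```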

And that, the government says, is a crime. It charges Burke with violating the CFAA’s prohibition on intentionally accessing a computer “without authorization” because he accessed the LiveU website and URLs without having been authorized by Fox or LiveU. In other words, because Burke didn’t ask Fox or LiveU for permission to use the demo account or view the URLs, the indictment alleges, he acted without authorization.

But there’s a difference between LiveU and Fox’s subjective wishes about what journalists or others would find, and what the services and websites they maintained and used permitted people to find. The relevant question should be the latter. Generally, it is both a First Amendment and a due process problem to allow a private party’s desire to control information to form the basis of criminal prosecutions.

The CFAA charges against Burke take advantage of the vagueness of the statutory term “without authorization.” The law doesn’t define the term, and its murkiness has enabled plenty of ill-advised prosecutions over the years. In Burke’s case, because the list of unencrypted URLs was password protected and the company didn’t want outsiders to access the URLs, the government claims that Burke acted “without authorization.”

Using a published demo password to get a list of URLs that anyone could have guessed and accessed with a software program isn’t that big of a deal. What was a big deal is that Burke’s research embarrassed Fox News. But that’s what journalists are supposed to do—uncover questionable practices of powerful entities.

Journalists need never ask corporations for permission to investigate or embarrass them, and the law shouldn’t encourage or force them to. Just because someone doesn’t like what a reporter does online doesn’t mean that it’s without authorization and that what he did is therefore a crime.

Still, this isn’t the first time that prosecutors have abused computer hacking laws to go after journalists and others, like security researchers. Until a 2021 Supreme Court ruling, researchers and journalists worried that their good faith investigations of algorithmic discrimination could expose them to CFAA liability for exceeding sites’ terms of service.

Even now, the CFAA and similarly vague state computer crime laws continue to threaten press freedom. Just last year, in August, police raided the newsroom of the Marion County Record and accused its journalists of breaking state computer hacking laws by using a government website to confirm a tip from a source. Police dropped the case after a national outcry.

The White House seemed concerned about the Marion ordeal. But now the same administration is using an overly broad interpretation of a hacking law to target a journalist. Merely filing charges against Burke sends a chilling message that the government will attempt to penalize journalists for engaging in investigative reporting it dislikes.

Even worse, if the Burke prosecution succeeds, it will encourage the powerful to use the CFAA as a veto over news reporting based on online sources just because it is embarrassing or exposes their wrongdoing. These charges were also an excuse for the government to seize Burke’s computer equipment and digital work—and demand to keep it permanently. The seizure interferes with Burke’s ongoing reporting, a tactic the government could repeat in other investigations.

If journalists must seek permission to publish information they find online from the very people they’re exposing, as the government’s indictment of Burke suggests, it’s a good bet that most information from the obscure but public corners of the Internet will never see the light of day. That would endanger both journalism and public access to important truths. The court reviewing Burke’s case should dismiss the charges.


Florida middle-schoolers charged with making deepfake nudes of classmates

no consent —

AI tool was used to create nudes of 12- to 13-year-old classmates.

[Image: Jacqui VanLiew; Getty Images]

Two teenage boys from Miami, Florida, were arrested in December for allegedly creating and sharing AI-generated nude images of male and female classmates without consent, according to police reports obtained by WIRED via public record request.

The arrest reports say the boys, aged 13 and 14, created the images of the students who were “between the ages of 12 and 13.”

The Florida case appears to be the first to come to light in which arrests and criminal charges resulted from alleged sharing of AI-generated nude images. The boys were charged with third-degree felonies—the same level of crime as grand theft auto or false imprisonment—under a state law passed in 2022 that makes it a felony to share “any altered sexual depiction” of a person without their consent.

The parent of one of the boys arrested did not respond to a request for comment in time for publication. The parent of the other boy said that he had “no comment.” The detective assigned to the case and the state attorney handling it also did not respond to requests for comment in time for publication.

As AI image-making tools have become more widely available, there have been several high-profile incidents in which minors allegedly created AI-generated nude images of classmates and shared them without consent. No arrests have been disclosed in the publicly reported cases—at Issaquah High School in Washington, Westfield High School in New Jersey, and Beverly Vista Middle School in California—even though police reports were filed. At Issaquah High School, police opted not to press charges.

The first media reports of the Florida case appeared in December, saying that the two boys were suspended from Pinecrest Cove Academy in Miami for 10 days after school administrators learned of allegations that they created and shared fake nude images without consent. After parents of the victims learned about the incident, several began publicly urging the school to expel the boys.

Nadia Khan-Roberts, the mother of one of the victims, told NBC Miami in December that the incident was traumatizing for all of the families whose children were victimized. “Our daughters do not feel comfortable walking the same hallways with these boys,” she said. “It makes me feel violated, I feel taken advantage [of] and I feel used,” one victim, who asked to remain anonymous, told the TV station.

WIRED obtained arrest records this week that say the incident was reported to police on December 6, 2023, and that the two boys were arrested on December 22. The records accuse the pair of using “an artificial intelligence application” to make the fake explicit images. The name of the app was not specified, and the reports claim the boys shared the pictures with each other.

“The incident was reported to a school administrator,” the reports say, without specifying who reported it, or how that person found out about the images. After the school administrator “obtained copies of the altered images” the administrator interviewed the victims depicted in them, the reports say, who said that they did not consent to the images being created.

After their arrest, the two boys accused of making the images were transported to the Juvenile Service Department “without incident,” the reports say.

A handful of states have laws on the books that target fake, nonconsensual nude images. There’s no federal law targeting the practice, but a group of US senators recently introduced a bill to combat the problem after fake nude images of Taylor Swift were created and distributed widely on X.

The boys were charged under a Florida law passed in 2022 that state legislators designed to curb harassment involving deepfake images made using AI-powered tools.

Stephanie Cagnet Myron, a Florida lawyer who represents victims of nonconsensually shared nude images, tells WIRED that anyone who creates fake nude images of a minor would be in possession of child sexual abuse material, or CSAM. However, she claims it’s likely that the two boys accused of making and sharing the material were not charged with CSAM possession due to their age.

“There’s specifically several crimes that you can charge in a case, and you really have to evaluate what’s the strongest chance of winning, what has the highest likelihood of success, and if you include too many charges, is it just going to confuse the jury?” Cagnet Myron added.

Mary Anne Franks, a professor at the George Washington University School of Law and a lawyer who has studied the problem of nonconsensual explicit imagery, says it’s “odd” that Florida’s revenge porn law, which predates the 2022 statute under which the boys were charged, makes the offense only a misdemeanor, while this situation represented a felony.

“It is really strange to me that you impose heftier penalties for fake nude photos than for real ones,” she says.

Franks adds that although she believes distributing nonconsensual fake explicit images should be a criminal offense, thus creating a deterrent effect, she doesn’t believe offenders should be incarcerated, especially not juveniles.

“The first thing I think about is how young the victims are and worried about the kind of impact on them,” Franks says. “But then [I] also question whether or not throwing the book at kids is actually going to be effective here.”

This story originally appeared on wired.com.


Tesla drivers who sued over exaggerated EV range are forced into arbitration

Tesla beats drivers —

Judge upholds arbitration agreement but says Tesla can still face injunction.

[Image: Tesla Superchargers at the Bonarka shopping center in Krakow, Poland, on March 4, 2024. Credit: Getty Images | NurPhoto]

Tesla drivers who say the carmaker “grossly” exaggerated the ranges of its electric vehicles have lost their attempt to sue Tesla as a class. They will have to pursue claims individually in arbitration, a federal judge ruled yesterday.

Two related lawsuits were filed after a Reuters investigation last year found that Tesla consistently exaggerated the driving range of its electric vehicles, leading car owners to think something was broken when the actual driving range was much lower than advertised. Tesla reportedly created a “Diversion Team” to handle these complaints and routinely canceled service appointments because there was no way to improve the actual distance Tesla cars could drive between charges.

Several Tesla drivers sued in US District Court for the Northern District of California, seeking class-action status to represent buyers of Tesla cars.

When buying their Teslas, each named plaintiff in the two lawsuits signed an order agreement that included an arbitration provision, US District Judge Yvonne Gonzalez Rogers wrote. The agreement says that “any dispute arising out of or relating to any aspect of the relationship between you and Tesla will not be decided by a judge or jury but instead by a single arbitrator in an arbitration administered by the American Arbitration Association.”

The agreement has a severance clause that says, “If a court or arbitrator decides that any part of this agreement to arbitrate cannot be enforced as to a particular claim for relief or remedy, then that claim or remedy (and only that claim or remedy) must be brought in court and any other claims must be arbitrated.”

Tesla drivers argued that the arbitration agreement is not enforceable under the McGill v. Citibank precedent, in which the California Supreme Court ruled that arbitration provisions are unenforceable if they waive a plaintiff’s right to seek public injunctive relief. However, the McGill precedent doesn’t always give plaintiffs the right to pursue claims as a class, Gonzalez Rogers wrote. In the Tesla case, “the Arbitration Provision does not prohibit plaintiffs from pursuing public injunctive relief in their individual capacities,” the ruling said.

Tesla could still be hit with injunction

Public injunctive relief is “brought on behalf of an individual for the benefit of the public, not as a class or representative claim,” the judge wrote. Public injunctive relief is supposed to benefit the public at large. When an injunction benefits the plaintiff, it does so “only incidentally and/or as a member of the general public.”

In other words, a Tesla driver could win an arbitration case and seek an injunction that forces Tesla to change its practices. In a case won by Comcast, the US Court of Appeals for the 9th Circuit in 2021 stated that “public injunctive relief within the meaning of McGill is limited to forward-looking injunctions that seek to prevent future violations of law for the benefit of the general public as a whole, as opposed to a particular class of persons… without the need to consider the individual claims of any non-party.”

Gonzalez Rogers ruled that Tesla’s arbitration agreement “permits plaintiffs to seek public injunctive relief in arbitration.” The US District Court could also issue an injunction against Tesla after an arbitration case.

The Tesla drivers are seeking remedies under the California Consumer Legal Remedies Act (CLRA), the California Unfair Competition Law (UCL), and the California False Advertising Law (FAL). After arbitration, the court “will be able to craft appropriate public injunctive relief if plaintiffs successfully arbitrate their UCL, FAL, and CLRA claims and such relief is deemed unavailable,” Gonzalez Rogers wrote.

The judge stayed the case “pending resolution of the arbitration in case it is required to adjudicate any request for public injunctive relief… The Court finds that the Arbitration Provision does not prohibit plaintiffs from pursuing public injunctive relief in their individual capacities. To the extent an arbitrator finds otherwise, the Court STAYS the action as such relief is severable and can be separately adjudicated by this Court.”

Tesla arbitration clause upheld in earlier case

Tesla previously won a different case in the same court involving its arbitration clause. In September 2023, Judge Haywood Gilliam Jr. ruled that four Tesla drivers who sued the company over its allegedly deceptive “self-driving” claims would have to go to arbitration instead of pursuing a class action.

The plaintiffs in that case argued that “Tesla’s arbitration agreement is unconscionable, and thus [un]enforceable.” They said the arbitration agreement “is not referenced on the Order page” and “is buried in small font in the middle of an Order Agreement, which is only accessible through an inconspicuous hyperlink.”

Ruling against the plaintiffs, Gilliam found that Tesla’s “order payment screens provided conspicuous notice of the order agreements.” He also found that provisions such as a 30-day opt-out clause were enforceable, even though Tesla drivers argued it was too short because it “typically takes much more than 30 days for Tesla to configure and deliver a car.”


US lawmakers vote 50-0 to force sale of TikTok despite angry calls from users

Divest or get out —

Lawmaker: TikTok must “sever relationship with the Chinese Communist Party.”

[Image: A large TikTok ad at a subway station. Credit: Getty Images | Bloomberg]

The House Commerce Committee today voted 50-0 to approve a bill that would force TikTok owner ByteDance to sell the company or lose access to the US market.

The Protecting Americans from Foreign Adversary Controlled Applications Act “addresses the immediate national security risks posed by TikTok and establishes a framework for the Executive Branch to protect Americans from future foreign adversary controlled applications,” a committee memo said. “If an application is determined to be operated by a company controlled by a foreign adversary—like ByteDance, Ltd., which is controlled by the People’s Republic of China—the application must be divested from foreign adversary control within 180 days.”

If the bill passes in the House and Senate and is signed into law by President Biden, TikTok would eventually be dropped from app stores in the US if its owner doesn’t sell. It also would lose access to US-based web-hosting services.

“If the application is not divested, entities in the United States would be prohibited from distributing the application through an application marketplace or store and providing web hosting services,” the committee memo said.

Chair: “CCP weaponizes applications it controls”

The bill was introduced on Tuesday and had 20 sponsors split evenly between Democrats and Republicans. TikTok urged its users to protest the bill, sending a notification that said, “Congress is planning a total ban of TikTok… Let Congress know what TikTok means to you and tell them to vote NO.”

Many users called lawmakers’ offices to complain, congressional staffers told Politico. “It’s so so bad. Our phones have not stopped ringing. They’re teenagers and old people saying they spend their whole day on the app and we can’t take it away,” one House GOP staffer was quoted as saying.

House Commerce Committee Chair Cathy McMorris Rodgers (R-Wash.) said that TikTok enlisting users to call lawmakers showed “in real time how the Chinese Communist Party can weaponize platforms like TikTok to manipulate the American people.”

“This is just a small taste of how the CCP weaponizes applications it controls to manipulate tens of millions of people to further their agenda. These applications present a clear national security threat to the United States and necessitate the decisive action we will take today,” she said before the vote.

The American Civil Liberties Union opposes the TikTok bill, saying it “would violate the First Amendment rights of hundreds of millions of Americans who use the app to communicate and express themselves daily.”

Bill sponsor: “It’s not a ban”

Bill sponsor Rep. Mike Gallagher (R-Wis.) expressed anger at TikTok for telling its users that the bill would ban the app completely, pointing out that the bill would only ban the app if it isn’t sold.

“If you actually read the bill, it’s not a ban. It’s a divestiture,” Gallagher said, according to Politico. Gallagher also said his bill puts the decision “squarely in the hands of TikTok to sever their relationship with the Chinese Communist Party.”

TikTok issued a statement calling the bill “an outright ban of TikTok, no matter how much the authors try to disguise it.” The House Commerce Committee responded to TikTok’s claim, calling it “yet another lie.”

While the bill’s text could potentially cover other apps in the future, it specifically lists the ByteDance-owned TikTok as a “foreign adversary controlled application.”

“It shall be unlawful for an entity to distribute, maintain, or update (or enable the distribution, maintenance, or updating of) a foreign adversary controlled application,” the bill says. An app would be allowed to stay in the US market after a divestiture if the president determines that the sale “would result in the relevant covered company no longer being controlled by a foreign adversary.”


“Disgraceful”: Messy ToS update allegedly locks Roku devices until users give in

Show’s over —

Users are opted in automatically unless they write a letter to Roku by March 21.

[Image: A promotional image for a Roku TV.]

Roku customers are threatening to stop using, or even to dispose of, their low-priced TVs and streaming gadgets after the company apparently began locking the devices of people who don’t agree to its recently updated terms of service (ToS).

This month, users on Roku’s support forums reported suddenly seeing a message when turning on their Roku TV or streaming device reading: “We’ve made an important update: We’ve updated our Dispute Resolution Terms. Select ‘Agree’ to agree to these updated Terms and to continue enjoying our products and services. Press to view these updated Terms.” A large button reading “Agree” follows. The pop-up doesn’t offer a way to disagree, and users are unable to use their device unless they hit agree.

Customers have left pages of complaints on Roku’s forum. One user going by “rickstanford” said they were “FURIOUS!!!!” and expressed interest in sending their reported six Roku devices back to the company since “apparently I don’t own them despite spending hundreds of dollars on them.”

Another user going by Formercustomer, who, I suspect, is aptly named, wrote:

So, you buy a product, and you use it. And they want to change the terms limiting your rights, and they basically brick the device … if you don’t accept their new terms. … I hope they get their comeuppance here, as this is disgraceful.

Roku has further aggravated customers by making it harder than necessary to disagree with its updated terms. Roku is willing to accept agreement with a single button press, but to opt out, users must jump through hoops that include finding that old book of stamps.

To opt out of Roku’s ToS update, which primarily changes the “Dispute Resolution Terms,” users must send a letter to Roku’s general counsel in California mentioning: “the name of each person opting out and contact information for each such person, the specific product models, software, or services used that are at issue, the email address that you used to set up your Roku account (if you have one), and, if applicable, a copy of your purchase receipt.” Roku required all this to opt out of its terms previously, as well.

But the new update means that while users read this information and have their letter delivered, they’re unable to use products they already paid for and used, in some cases for years, under different “dispute resolution terms.”

“I can’t watch my TV because I don’t agree to the Dispute Resolution Terms. Please help,” a user going by Campbell220 wrote on Roku’s support forum.

Based on the ToS’s wording, users could technically choose to agree to the ToS on their device and then write a letter saying they’d like to opt out. But opting into an agreement only to use a device under terms you don’t agree with is counterintuitive.

Even more pressing, Roku’s ToS states that users must opt out “within 30 days of you first becoming subject to” the updated terms, which took effect on February 20. Otherwise, they’re opted in automatically.

Archived records of Roku’s ToS website seem to show the new ToS being online since at least August. But it was only this month that users reported that their TVs were useless unless they accepted the terms via an on-screen message. Roku declined to answer Ars Technica’s questions about the changes, including why it didn’t alert users about them earlier. But a spokesperson shared a statement saying:

Like many companies, Roku updates its terms of service from time to time. When we do, we take steps to make sure customers are informed of the change.

What Roku changed

Customers are criticizing Roku for aggressively pushing them to accept the ToS changes. The updates focus on Roku’s terms for dispute resolution, which prevent users from suing Roku and have long required disputes to go through the arbitration process the terms describe. The new ToS is more detailed, including specifics for “mass arbitrations.” The biggest change is the introduction of a section called “Required Informal Dispute Resolution.” It states that, apart from a small number of described exceptions (which include claims around intellectual property), users must make “a good-faith effort” to negotiate with Roku, or vice versa, for at least 45 days before entering arbitration.

Roku is also taking heat for using forced arbitration at all, which some argue can have one-sided benefits. In a similar move in December, for example, 23andMe said users had 30 days to opt out of its new dispute resolution terms, which included mass arbitration rules (the genetics firm let customers opt out via email, though). The changes came after 23andMe user data was stolen in a cyberattack. Forced arbitration clauses are frequently used by large companies to avoid being sued by fed-up customers.

Roku’s forced arbitration rules aren’t new but are still making customers question their streaming hardware, especially considering that there are rivals, like Amazon, Apple, and Google, that don’t force arbitration on users.

Based on comments in Roku’s forums, some users were unaware they were already subject to arbitration rules and only learned this as a result of Roku’s abrupt pop-up.

But with the functionality of already-owned devices blocked until users give in, Roku’s methods are questionable, and Roku may lose customers over it. Per an anonymous user on Roku’s forum:

I’m unplugging right now.


US gov’t announces arrest of former Google engineer for alleged AI trade secret theft

Don’t trade the secrets dept. —

Linwei Ding faces four counts of trade secret theft, each with a potential 10-year prison term.

[Image: A Google sign stands in front of the building at the opening of the new Google Cloud data center in Hanau, Hesse, in October 2023.]

On Wednesday, authorities arrested former Google software engineer Linwei Ding in Newark, California, on charges of stealing AI trade secrets from the company. The US Department of Justice alleges that Ding, a Chinese national, committed the theft while secretly working with two China-based companies.

According to the indictment, Ding, who was hired by Google in 2019 and had access to confidential information about the company’s data centers, began uploading hundreds of files into a personal Google Cloud account two years ago.

The trade secrets Ding allegedly copied contained “detailed information about the architecture and functionality of GPU and TPU chips and systems, the software that allows the chips to communicate and execute tasks, and the software that orchestrates thousands of chips into a supercomputer capable of executing at the cutting edge of machine learning and AI technology,” according to the indictment.

Shortly after the alleged theft began, Ding was offered the position of chief technology officer at an early-stage technology company in China that touted its use of AI technology. The company offered him a monthly salary of about $14,800, plus an annual bonus and company stock. Ding reportedly traveled to China, participated in investor meetings, and sought to raise capital for the company.

Investigators reviewed surveillance camera footage showing another employee scanning Ding’s name badge at the entrance of the Google building where Ding worked, making it appear that Ding was in his office when he was actually traveling.

Ding also founded and served as the chief executive of a separate China-based startup company that aspired to train “large AI models powered by supercomputing chips,” according to the indictment. Prosecutors say Ding did not disclose either affiliation to Google, which described him as a junior employee. He resigned from Google on December 26 of last year.

The FBI served a search warrant at Ding’s home in January, seizing his electronic devices and later executing an additional warrant for the contents of his personal accounts. Authorities found more than 500 unique files of confidential information that Ding allegedly stole from Google. The indictment says that Ding copied the files into the Apple Notes application on his Google-issued Apple MacBook, then converted the Apple Notes into PDF files and uploaded them to an external account to evade detection.

“We have strict safeguards to prevent the theft of our confidential commercial information and trade secrets,” Google spokesperson José Castañeda told Ars Technica. “After an investigation, we found that this employee stole numerous documents, and we quickly referred the case to law enforcement. We are grateful to the FBI for helping protect our information and will continue cooperating with them closely.”

Attorney General Merrick Garland announced the case against the 38-year-old at an American Bar Association conference in San Francisco. Ding faces four counts of federal trade secret theft, each carrying a potential sentence of up to 10 years in prison.


Law enforcement doesn’t want to be “customer service” reps for Meta any more

No help —

“Dramatic and persistent spike” in account takeovers is “substantial drain” on resources.

[Image: The icons of WhatsApp, Messenger, Instagram, and Facebook displayed on an iPhone in front of a Meta logo. Credit: Getty Images | Chesnot]

Forty-one state attorneys general penned a letter to Meta’s top attorney on Wednesday saying complaints are skyrocketing across the United States about Facebook and Instagram user accounts being stolen and declaring “immediate action” necessary to mitigate the rolling threat.

The coalition of top law enforcement officials, spearheaded by New York Attorney General Letitia James, says the “dramatic and persistent spike” in complaints concerning account takeovers amounts to a “substantial drain” on governmental resources, as many stolen accounts are also tied to financial crimes—some of which allegedly profit Meta directly.

“We have received a number of complaints of threat actors fraudulently charging thousands of dollars to stored credit cards,” says the letter addressed to Meta’s chief legal officer, Jennifer Newstead. “Furthermore, we have received reports of threat actors buying advertisements to run on Meta.”

“We refuse to operate as the customer service representatives of your company,” the officials add. “Proper investment in response and mitigation is mandatory.”

In addition to New York, the letter is signed by attorneys general from Alabama, Alaska, Arizona, California, Colorado, Connecticut, Delaware, Florida, Georgia, Hawaii, Illinois, Iowa, Kentucky, Louisiana, Maryland, Massachusetts, Michigan, Minnesota, Nebraska, Nevada, New Hampshire, New Jersey, New Mexico, North Carolina, Ohio, Oklahoma, Oregon, Pennsylvania, Rhode Island, South Carolina, South Dakota, Tennessee, Utah, Vermont, Virginia, Washington, West Virginia, Wisconsin, Wyoming, and the District of Columbia.

“Scammers use every platform available to them and constantly adapt to evade enforcement. We invest heavily in our trained enforcement and review teams and have specialized detection tools to identify compromised accounts and other fraudulent activity,” Meta says in a statement provided by spokesperson Erin McPike. “We regularly share tips and tools people can use to protect themselves, provide a means to report potential violations, work with law enforcement, and take legal action.”

Account takeovers can occur as a result of phishing as well as other more sophisticated and targeted techniques. Once an attacker gains access to an account, the owner can be easily locked out by changing passwords and contact information. Private messages and personal information are left up for grabs for a variety of nefarious purposes, from impersonation and fraud to pushing misinformation.

“It’s basically a case of identity theft and Facebook is doing nothing about it,” said one user whose complaint was cited in the letter to Meta’s Newstead.

The state officials said the accounts that were stolen to run ads on Facebook often run afoul of its rules while doing so, leading them to be permanently suspended, punishing the victims—often small business owners—twice over.

“Having your social media account taken over by a scammer can feel like having someone sneak into your home and change all of the locks,” New York’s James said in a statement. “Social media is how millions of Americans connect with family, friends, and people throughout their communities and the world. To have Meta fail to properly protect users from scammers trying to hijack accounts and lock rightful owners out is unacceptable.”

Other complaints forwarded to Newstead show hacking victims expressing frustration over Meta’s lack of response. In many cases, users report no action being taken by the company. Some say the company encourages users to report such problems but never responds, leaving them unable to salvage their accounts or the businesses they built around them.

After being hacked and defrauded of $500, one user complained that their ability to communicate with their own customer base had been “completely disrupted,” and that Meta had never responded to the report they filed, though the user had followed the instructions the company provided them to obtain help.

“I can’t get any help from Meta. There is no one to talk to and meanwhile all my personal pictures are being used. My contacts are receiving false information from the hacker,” one user wrote.

Wrote another: “This is my business account, which is important to me and my life. I have invested my life, time, money and soul in this account. All attempts to contact and get a response from the Meta company, including Instagram and Facebook, were crowned with complete failure, since the company categorically does not respond to letters.”

Figures provided by James’ office in New York show a tenfold increase in complaints between 2019 and 2023—from 73 complaints to more than 780 last year. In January alone, more than 128 complaints were received, James’ office says. Other states saw similar spikes in complaints during that period, according to the letter, with Pennsylvania recording a 270 percent increase, North Carolina a 330 percent jump, and Vermont a 740 percent surge.

The letter notes that, while the officials cannot be “certain of any connection,” the drastic increase in complaints occurred “around the same time” as layoffs at Meta affecting roughly 11,000 employees in November 2022, around 13 percent of its staff at the time.

This story originally appeared on wired.com.


Spain tells Sam Altman, Worldcoin to shut down its eyeball-scanning orbs

Only for real humans —

Cryptocurrency launched by OpenAI’s Altman is drawing scrutiny from regulators.

[Image: Worldcoin’s “Orb,” a device that scans your eyeballs to verify that you’re a real human.]

Spain has moved to block Sam Altman’s cryptocurrency project Worldcoin, the latest blow to a venture that has raised controversy in multiple countries by collecting customers’ personal data using an eyeball-scanning “orb.”

The AEPD, Spain’s data protection regulator, has demanded that Worldcoin immediately cease collecting personal information in the country via the scans and stop using the data it has already gathered.

The regulator announced on Wednesday that it had taken the “precautionary measure” at the start of the week and had given Worldcoin 72 hours to demonstrate its compliance with the order.

Mar España Martí, the AEPD’s director, said Spain was the first European country to move against Worldcoin and that the agency was driven by particular concern that the company was collecting information about minors.

“What we have done is raise the alarm in Europe. But this is an issue that affects… citizens in all the countries of the European Union,” she said. “That means there has to be coordinated action.”

Worldcoin, co-founded by Altman in 2019, has been offering tokens of its own cryptocurrency to people around the world, in return for their consent to have their eyes scanned by an orb.

The scans serve as a form of identification; Worldcoin is seeking to create a reliable way to distinguish humans from machines as artificial intelligence becomes more advanced.
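
Worldcoin’s full pipeline isn’t described here, but the general idea behind biometric proof-of-personhood can be sketched: derive a stable code from the iris scan, then check it against a registry so that each human can enroll only once. The toy example below is a heavily simplified illustration under that assumption, not Worldcoin’s actual scheme.

```python
# Toy proof-of-personhood check: enrollment succeeds only if the (hashed)
# iris code hasn't been seen before. Real systems use fuzzy matching on
# biometric templates rather than exact hashes; this is purely illustrative.
import hashlib

registry: set[str] = set()  # in practice, a distributed or audited database

def enroll(iris_code: bytes) -> bool:
    """Return True if this appears to be a new, unique human."""
    digest = hashlib.sha256(iris_code).hexdigest()
    if digest in registry:
        return False  # already enrolled: one person, one identity
    registry.add(digest)
    return True

print(enroll(b"iris-template-alice"))  # True: first enrollment
print(enroll(b"iris-template-alice"))  # False: duplicate rejected
```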

Worldcoin was not immediately available for comment.

The Spanish regulator’s decision is the latest blow to the aspirations of the OpenAI boss and his Worldcoin co-founders Max Novendstern and Alex Blania following a series of setbacks elsewhere in the world.

At the point of its rollout last summer, the San Francisco- and Berlin-headquartered start-up avoided launching its crypto tokens in the US on account of the country’s harsh crackdown on the digital assets sector.

The Worldcoin token is also not available in major global markets such as China and India, while watchdogs in Kenya last year ordered the project to shut down operations. The UK’s Information Commissioner’s Office has previously said it would be making inquiries into Worldcoin.

While some jurisdictions have raised concerns about the viability of a Worldcoin cryptocurrency token, Spain’s latest crackdown targets the start-up’s primary efforts to establish a method to prove customers’ “personhood”—work that Altman characterizes as essential in a world where sophisticated AI is harder to distinguish from humans.

In the face of growing scrutiny, Altman told the Financial Times he could imagine a world where his start-up could exist without its in-house cryptocurrency.

Worldcoin has registered 4 million users, according to a person with knowledge of the matter. Investors, including venture capital groups Andreessen Horowitz and Khosla Ventures, internet entrepreneur Reid Hoffman and, prior to the collapse of his FTX empire, Sam Bankman-Fried, poured roughly $250 million into the company.

The project attracted media attention and prompted a handful of consumer complaints in Spain as queues began to grow at the stands in shopping centers where Worldcoin is offering cryptocurrency in exchange for eyeball scans.

In January, the data protection watchdog in the Basque country, one of Spain’s autonomous regions, issued a warning about the eye-scanning technology Worldcoin was using in a Bilbao mall. The watchdog, the AVPD, said it fell under biometric data protection rules and that a risk assessment was needed.

España Martí said the Spanish agency was acting on concerns that the Worldcoin initiative did not comply with biometric data laws, which demand that users be given adequate information about how their data will be used and that they have the right to erase it.

Sharing such biometric data, she said, opened people up to a variety of risks ranging from identity fraud to breaches of health privacy and discrimination.

“I want to send a message to young people. I understand that it can be very tempting to get €70 or €80 that sorts you out for the weekend,” España Martí said, but “giving away personal data in exchange for these derisory amounts of money is a short, medium and long-term risk.”


Oregon OKs right-to-repair bill that bans the blocking of aftermarket parts

Right to repair —

Governor’s signature would stop software locks from impairing replacement parts.

[Image: An iPhone battery being removed from an iPhone over a blue repair mat. Credit: Getty Images]

Oregon has joined the small but growing list of states that have passed right-to-repair legislation. Oregon’s bill stands out for a provision that would prevent companies from requiring that official parts be unlocked with encrypted software checks before they will fully function.
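
In practice, “parts pairing” usually means firmware that grants a replacement part full functionality only if the part’s identity is cryptographically vouched for by the manufacturer. A minimal sketch of the kind of check the provision targets is below; the key, names, and degraded-mode behavior are illustrative assumptions, not any specific vendor’s implementation.

```python
# Toy parts-pairing check: the device enables full functionality only for a
# part whose serial number carries a valid signature from the factory key.
# An electrically identical aftermarket part fails the check and is degraded.
import hashlib
import hmac

FACTORY_KEY = b"manufacturer-secret-key"  # provisioned at the factory (assumed)

def signature_for(serial: str) -> bytes:
    """What the factory would stamp into a 'paired' official part."""
    return hmac.new(FACTORY_KEY, serial.encode(), hashlib.sha256).digest()

def part_status(serial: str, signature: bytes) -> str:
    if hmac.compare_digest(signature_for(serial), signature):
        return "full functionality"
    return "degraded mode with warning"  # the behavior SB 1596 would restrict

official = signature_for("SCRN-001")
print(part_status("SCRN-001", official))      # full functionality
print(part_status("SCRN-001", b"\x00" * 32))  # degraded mode with warning
```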

Bill SB 1596 passed Oregon’s House by a 42-to-13 margin. Gov. Tina Kotek has five days to sign the bill into law. Consumer groups and right-to-repair advocates praised it as “the best bill yet,” while its chief sponsor, state Sen. Janeen Sollman (D), pointed to potential reductions in waste and a stronger second-hand market that could help close the digital divide.

“Oregon improves on Right to Repair laws in California, Minnesota and New York by making sure that consumers have the choice of buying new parts, used parts, or third-party parts for the gadgets and gizmos,” said Gay Gordon-Byrne, executive director of Repair.org, in a statement.

Like bills passed in New York, California, and Minnesota, Oregon’s bill requires companies to offer the same parts, tools, and documentation to individual and independent repair shops that are already offered to authorized repair technicians.

Unlike other states’ bills, however, Oregon’s bill doesn’t demand a set number of years after device manufacture for such repair implements to be produced. That suggests companies could effectively close their repair channels entirely rather than comply with the new requirements. California’s bill mandated seven years of availability.

If signed, the law’s requirements for parts, tools, and documentation would apply to devices sold after 2015, except for phones, which are covered after July 2021. The prohibition against parts pairing only covers devices sold in 2025 and later. Like other repair bills, a number of device categories are exempted, including video game consoles, HVAC and medical gear, solar systems, vehicles, and, very specifically, “Electric toothbrushes.”

Apple had surprised many with its support for California’s repair bill. But the company, notable for its pairing requirements for certain repair parts, opposed Oregon’s repair bill. John Perry, a senior manager for secure design at Apple, testified at an Oregon hearing that the pairing restriction would “undermine the security, safety, and privacy of Oregonians by forcing device manufacturers to allow the use of parts of unknown origin in consumer devices.”

Perry also noted Apple’s improved repair workflow, which no longer requires online access or a phone call to pair parts. Apple devices will still issue notifications and warnings if an unauthorized screen or battery, for example, is installed in an iPhone.

Disclosure: Kevin Purdy previously worked for iFixit. He has no financial ties to the company.
