Policy

Publisher: OpenAI’s GPT Store bots are illegally scraping our textbooks

For the past few months, Morten Blichfeldt Andersen has spent many hours scouring OpenAI’s GPT Store. Since it launched in January, the marketplace for bespoke bots has filled up with a deep bench of useful and sometimes quirky AI tools. Cartoon generators spin up New Yorker–style illustrations and vivid anime stills. Programming and writing assistants offer shortcuts for crafting code and prose. There’s also a color analysis bot, a spider identifier, and a dating coach called RizzGPT. Yet Blichfeldt Andersen is hunting for only one very specific type of bot: those built on his employer’s copyright-protected textbooks without permission.

Blichfeldt Andersen is publishing director at Praxis, a Danish textbook purveyor. The company has been embracing AI and created its own custom chatbots. But it is currently engaged in a game of whack-a-mole in the GPT Store, and Blichfeldt Andersen is the man holding the mallet.

“I’ve been personally searching for infringements and reporting them,” Blichfeldt Andersen says. “They just keep coming up.” He suspects the culprits are primarily young people uploading material from textbooks to create custom bots to share with classmates—and that he has uncovered only a tiny fraction of the infringing bots in the GPT Store. “Tip of the iceberg,” Blichfeldt Andersen says.

It is easy to find bots in the GPT Store whose descriptions suggest they might be tapping copyrighted content in some way, as TechCrunch noted in a recent article claiming OpenAI’s store was overrun with “spam.” Using copyrighted material without permission is permissible in some contexts, but in others rightsholders can take legal action. WIRED found a GPT called Westeros Writer that claims to “write like George R.R. Martin,” the creator of Game of Thrones. Another, Voice of Atwood, claims to imitate the writer Margaret Atwood. Yet another, Write Like Stephen, is intended to emulate Stephen King.

When WIRED tried to trick the King bot into revealing the “system prompt” that tunes its responses, the output suggested it had access to King’s memoir On Writing. Write Like Stephen was able to reproduce passages from the book verbatim on demand, even noting which page the material came from. (WIRED could not make contact with the bot’s developer, because it did not provide an email address, phone number, or external social profile.)

OpenAI spokesperson Kayla Wood says it responds to takedown requests against GPTs made with copyrighted content but declined to answer WIRED’s questions about how frequently it fulfills such requests. She also says the company proactively looks for problem GPTs. “We use a combination of automated systems, human review, and user reports to find and assess GPTs that potentially violate our policies, including the use of content from third parties without necessary permission,” Wood says.

New disputes

The GPT Store’s copyright problem could add to OpenAI’s existing legal headaches. The company is facing a number of high-profile lawsuits alleging copyright infringement, including one brought by The New York Times and several brought by different groups of fiction and nonfiction authors, including big names like George R.R. Martin.

Chatbots offered in OpenAI’s GPT Store are based on the same technology as its own ChatGPT but are created by outside developers for specific functions. To tailor their bot, a developer can upload extra information that it can tap to augment the knowledge baked into OpenAI’s technology. The process of consulting this additional information to respond to a person’s queries is called retrieval-augmented generation, or RAG. Blichfeldt Andersen is convinced that the RAG files behind the bots in the GPT Store are a hotbed of copyrighted materials uploaded without permission.
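The RAG process described above can be illustrated with a toy sketch. This is not OpenAI’s implementation: a real GPT Store bot uses embedding-based retrieval and a large language model, while here a naive keyword retriever and a template response stand in for both, and the “uploaded textbook” snippets are invented placeholders.

```python
# Minimal sketch of retrieval-augmented generation (RAG).
# A developer uploads extra documents; at query time the system retrieves
# the most relevant one and feeds it to the model as added context.

def retrieve(query, documents, top_k=1):
    """Rank uploaded documents by naive keyword overlap with the query."""
    q_words = set(query.lower().split())
    scored = [(len(q_words & set(doc.lower().split())), doc) for doc in documents]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [doc for score, doc in scored[:top_k] if score > 0]

def answer(query, documents):
    """Augment the prompt with retrieved context before 'generating'."""
    context = retrieve(query, documents)
    if not context:
        return "No relevant material found."
    # A real system would pass this augmented prompt to a language model.
    return f"Based on the uploaded material: {context[0]}"

# Hypothetical uploaded "textbook" snippets (not real Praxis content).
docs = [
    "Photosynthesis converts light energy into chemical energy.",
    "The mitochondria is the powerhouse of the cell.",
]
print(answer("how does photosynthesis work", docs))
```

The copyright concern follows directly from this design: whatever text is uploaded into the retrieval store can be surfaced, sometimes verbatim, in the bot’s answers.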

FCC won’t block California net neutrality law, says states can “experiment”

California can keep enforcing its state net neutrality law after the Federal Communications Commission implements its own rules. The FCC could preempt future state laws if they go far beyond the national standard but said that states can “experiment” with different regulations for interconnection payments and zero-rating.

The FCC scheduled an April 25 vote on Chairwoman Jessica Rosenworcel’s proposal to restore net neutrality rules similar to the ones introduced during the Obama era and repealed under former President Trump. The FCC yesterday released the text of the pending order, which could still be changed but isn’t likely to get any major overhaul.

State-level enforcement of net neutrality rules can benefit consumers, the FCC said. The order said that “state enforcement generally supports our regulatory efforts by dedicating additional resources to monitoring and enforcement, especially at the local level, and thereby ensuring greater compliance with our requirements.”

California stepped in to regulate broadband providers after then-FCC Chairman Ajit Pai led a vote to repeal the federal rules. California beat ISPs in court, ensuring that it could enforce the state law even though Pai’s FCC attempted to preempt all state net neutrality rules.

The California law mostly mirrored the FCC’s repealed rules by prohibiting paid prioritization and blocking or throttling of lawful traffic, on both fixed and mobile networks. California went further than the FCC in regulating zero-rating by imposing a ban on paid data cap exemptions.

That means ISPs operating in California can’t exempt Internet traffic from customers’ data usage allowances in exchange for payment from a third party. In response to the state law, AT&T stopped exempting HBO Max from its mobile data caps and stopped its “sponsored data” program in which it charged other companies for similar exemptions from AT&T’s data caps.

FCC: No reason to preempt California

In the order scheduled for an April 25 vote, the FCC said the California law “appears largely to mirror or parallel our federal rules. Thus we see no reason at this time to preempt it.”

That doesn’t mean the rules are exactly the same. Instead of banning certain types of zero-rating entirely, the FCC will judge on a case-by-case basis whether any specific zero-rating program harms consumers and conflicts with the goal of preserving an open Internet. The FCC said it will evaluate sponsored-data “programs based on a totality of the circumstances, including potential benefits.”

The FCC order cautions that the agency will take a dimmer view of zero-rating in exchange for payment from a third party or zero-rating that favors an affiliated entity. But those categories will still be judged by the FCC on a case-by-case basis, whereas California bans paid data cap exemptions entirely.

Despite that difference, the FCC said it is “not persuaded on the record currently before us that the California law is incompatible with the federal rules.” The FCC also found that California’s approach to interconnection payments is compatible with the pending federal rule. Interconnection was the subject of a major controversy involving Netflix and big ISPs a decade ago.

Interconnection and zero-rating

The FCC’s new order addressed interconnection and zero-rating as follows:

As to the former, California prohibits BIAS [Broadband Internet Access Service] providers from requiring interconnection agreements “that have the purpose or effect of evading the other prohibitions” by blocking, throttling, or charging for traffic at the interconnection point. We have likewise stated in this Order that BIAS providers may not engage in interconnection practices that circumvent the prohibitions contained in the open Internet rules.

As to the latter, California restricts zero-rating when applied discriminatorily to only a subset of “Internet content, applications, services, or devices in a category” or when performed “in exchange for consideration, monetary or otherwise, from a third party.” We have likewise explained in this Order that sponsored-data programs—where a BIAS provider zero rates an edge product in exchange for consideration (monetary or otherwise) from a third party or where a BIAS provider favors an affiliate’s edge products—raise concerns under the general conduct standard.

The FCC said it found no evidence that the California law has “unduly burdened or interfered with interstate communications service.” When it comes to zero-rating and interconnection, the FCC said there is “room for states to experiment and explore their own approaches within the bounds of our overarching federal framework.”

The FCC said it will reconsider preemption of California rules if “California state enforcement authorities or state courts seek to interpret or enforce these requirements in a manner inconsistent with how we intend our rules to apply.”

Elon Musk shares “extremely false” allegation of voting fraud by “illegals”

Texas Secretary of State Jane Nelson yesterday issued a statement debunking claims of widespread voter fraud that were amplified by X owner Elon Musk on the social network formerly named Twitter. Election officials in two other states also disputed the “extremely false” information shared by Musk.

Musk is generally a big fan of Texas, but on Tuesday he shared a post by the account “End Wokeness” that claimed, “The number of voters registering without a photo ID is SKYROCKETING in 3 key swing states: Arizona, Texas, and Pennsylvania.” The account claimed there were 1.25 million such registrations in Texas since the beginning of 2024, over 580,000 in Pennsylvania, and over 220,000 in Arizona.

“Extremely concerning,” Musk wrote in a repost. The End Wokeness post shared by Musk suggested that “illegals” are registering to vote in large numbers by using Social Security numbers that can be obtained for work authorizations. The End Wokeness post has been viewed 63 million times so far, and Musk’s repost has been viewed 58.2 million times.

Nelson’s statement on the Texas government’s website called the claim “totally inaccurate.” For one thing, the real number of voter registrations is a small fraction of the number claimed in the post shared by Musk, the secretary of state wrote:

It is totally inaccurate that 1.2 million voters have registered to vote in Texas without a photo ID this year. The truth is our voter rolls have increased by 57,711 voters since the beginning of 2024. This is less than the number of people registered in the same timeframe in 2022 (about 65,000) and in 2020 (about 104,000).

“Extremely false”

The Texas Secretary of State office reports having 17,948,242 registered voters for the March 2024 elections, a gain of just under 189,000 voters since November 2023. The total gain over the past 24 months is a little over 764,000.

Pennsylvania’s data shows the state has 8.7 million registered voters and 87,440 voter registrations so far in 2024. Most of those were applications for party changes, while the other 39,877 were new-voter registrations.

Arizona’s total number of registered voters has been declining. While Arizona had 4.28 million registered voters in 2020 and 4.14 million in 2022, the state’s tally in March 2024 was 4,096,260.

Musk’s “Extremely concerning” post got a reply from Maricopa County Recorder Stephen Richer, who called it “extremely false.”

“We haven’t even had that many new registrants TOTAL in 2024 in Arizona,” stated Richer, an elected official and Republican who has been active in calling out election misinformation on X. “And we have fewer than 35,000 registrants (out of 4.1 million registered voters in Arizona) who haven’t provided documented proof of citizenship.”

Musk’s platform has faced plenty of criticism over its moderation of misinformation on elections and other topics. After reports of deep cuts to X’s election integrity team in September 2023, Musk claimed the ex-X employees were “undermining election integrity.”

Google sues two crypto app makers over allegedly vast “pig butchering” scheme

Crypto and other investment app scams promoted on YouTube targeted 100K users.

Google has sued two app developers based in China over an alleged scheme targeting 100,000 users globally over four years with at least 87 fraudulent cryptocurrency and other investor apps distributed through the Play Store.

The tech giant alleged that scammers lured victims with “promises of high returns” from “seemingly legitimate” apps offering investment opportunities in cryptocurrencies and other products. Commonly known as “pig-butchering schemes,” these scams displayed fake returns on investments, but when users went to withdraw the funds, they discovered they could not.

In some cases, Google alleged, developers would “double down on the scheme by requesting various fees and other payments from victims that were supposedly necessary for the victims to recover their principal investments and purported gains.”

Google accused the app developers—Yunfeng Sun (also known as “Alphonse Sun”) and Hongnam Cheung (also known as “Zhang Hongnim” and “Stanford Fischer”)—of conspiring to commit “hundreds of acts of wire fraud” to further “an unlawful pattern of racketeering activity” that siphoned up to $75,000 from each user successfully scammed.

Google was able to piece together the elaborate alleged scheme because the developers used a wide array of Google products and services to target victims, Google said, including Google Play, Voice, Workspace, and YouTube, breaching each one’s terms of service. Perhaps most notably, the Google Play Store’s developer program policies “forbid developers to upload to Google Play ‘apps that expose users to deceptive or harmful financial products and services,’ including harmful products and services ‘related to the management or investment of money and cryptocurrencies.'”

In addition to harming Google consumers, Google claimed that each product and service’s reputation would continue to be harmed unless the US district court in New York ordered a permanent injunction stopping developers from using any Google products or services.

“By using Google Play to conduct their fraud scheme,” scammers “have threatened the integrity of Google Play and the user experience,” Google alleged. “By using other Google products to support their scheme,” the scammers “also threaten the safety and integrity of those other products, including YouTube, Workspace, and Google Voice.”

Google’s lawsuit is the company’s most recent attempt to block fraudsters from targeting Google products by suing individuals directly, Bloomberg noted. Last year, Google sued five people accused of distributing a fake Bard AI chatbot that instead downloaded malware to Google users’ devices, Bloomberg reported.

How did the alleged Google Play scams work?

Google said that the accused developers “varied their approach from app to app” when allegedly trying to scam users out of thousands of dollars but primarily relied on three methods to lure victims.

The first method relied on sending text messages using Google Voice—such as “I am Sophia, do you remember me?” or “I miss you all the time, how are your parents Mike?”—”to convince the targeted victims that they were sent to the wrong number.” From there, the scammers would apparently establish “friendships” or “romantic relationships” with victims before moving the conversation to apps like WhatsApp, where they would “offer to guide the victim through the investment process, often reassuring the victim of any doubts they had about the apps.” These supposed friends, Google claimed, would “then disappear once the victim tried to withdraw funds.”

Another strategy allegedly employed by scammers relied on videos posted to platforms like YouTube, where fake investment opportunities would be promoted, promising “rates of return” as high as “two percent daily.”

The third tactic, Google said, pushed bogus affiliate marketing campaigns, promising users commissions for “signing up additional users.” These apps, Google claimed, were advertised on social media as “a guaranteed and easy way to earn money.”

Once a victim was drawn into using one of the fraudulent apps, “user interfaces sought to convince victims that they were maintaining balances on the app and that they were earning ‘returns’ on their investments,” Google said.

Occasionally, users would be allowed to withdraw small amounts, convincing them that it was safe to invest more money, but “later attempts to withdraw purported returns simply did not work.” And sometimes the scammers would “bilk” victims out of “even more money,” Google said, by requesting additional funds be submitted to make a withdrawal.

“Some demands” for additional funds, Google found, asked for anywhere “from 10 to 30 percent to cover purported commissions and/or taxes.” Victims, of course, “still did not receive their withdrawal requests even after these additional fees were paid,” Google said.

Which apps were removed from the Play Store?

Google tried to remove apps as soon as they were discovered to be fraudulent, but Google claimed that scammers concocted new aliases and infrastructure to “obfuscate their connection to suspended fraudulent apps.” Because scammers relied on so many different Google services, Google was able to connect the scheme to the accused developers through various business records.

Fraudulent apps named in the complaint include fake cryptocurrency exchanges called TionRT and SkypeWallet. To make the exchanges appear legitimate, scammers put out press releases on newswire services and created YouTube videos likely relying on actors to portray company leadership.

In one YouTube video promoting SkypeWallet, the supposed co-founder of Skype Coin uses the name “Romser Bennett,” which is the same name used for the supposed founder of another fraudulent app called OTCAI2.0, Google said. In each video, a different actor, presumably hired, plays the part of “Romser Bennett.” In other videos, Google found, the exact same actor plays an engineer named “Rodriguez” for one app and a technical leader named “William Bryant” for another.

Another fraudulent app that was flagged by Google was called the Starlight app. Promoted on TikTok and Instagram, Google said, that app promised “that users could earn commissions by simply watching videos.”

The Starlight app was downloaded approximately 23,000 times and seemingly primarily targeted users in Ghana, allegedly scamming at least 6,000 Ghanaian users out of initial investment capital that they were told was required before they could start earning money on the app.

Across all 87 fraudulent apps that Google has removed, Google estimated that approximately 100,000 users were victimized, including approximately 8,700 in the United States.

Currently, Google is not aware of any live apps in the Play Store connected to the alleged scheme, the complaint said, but scammers intent on furthering the scheme “will continue to harm Google and Google Play users” without a permanent injunction, Google warned.

Man pleads guilty to stealing former coworker’s identity for 30 years

Victim was jailed for 428 days after LA cops failed to detect true identity.

A high-level Iowa hospital systems administrator, Matthew Kierans, has admitted to stealing a coworker’s identity and posing as William Donald Woods for more than 30 years, The Register reported.

On top of using Woods’ identity to commit crimes and rack up debt, Kierans’ elaborate identity theft scheme led to Woods’ incarceration after Kierans accused his victim of identity theft and Los Angeles authorities failed to detect which man was the true William Donald Woods. Kierans could face up to 32 years in prison, The Register reported, and must pay a $1.25 million fine.

According to a proposed plea agreement with the US Attorney’s Office for the Northern District of Iowa, Kierans met Woods “in about 1988” when they worked together at a hot dog stand in New Mexico. “For the next three decades,” Kierans used Woods’ “identity in every aspect of his life,” including when obtaining “employment, insurance, a social security number, driver’s licenses, titles, loans, and credit,” as well as when paying taxes. Kierans even got married and had a child using Woods’ name.

Kierans apparently hatched the scheme in 1990 when he was working as a newspaper carrier for the Denver Post. That’s when he first obtained an identification document in Woods’ name. The next year, Kierans bought a vehicle for $600 using two checks in Woods’ name, the plea agreement said. After both checks bounced, Kierans “absconded with the stolen vehicle to Idaho, where the car broke down and he abandoned it.” As a result, an arrest warrant was issued in Woods’ name, while Kierans moved to Oregon and the whereabouts of the real Woods were seemingly unknown.

Eventually in summer 2012, Kierans relocated to Wisconsin, researching Woods’ family history on Ancestry.com and then fraudulently obtaining Woods’ certified birth certificate from the State of Kentucky, seemingly to aid his job hunt. Sometime in 2013, Kierans was hired by a hospital in Iowa City to work remotely from Wisconsin as a “high level administrator in the hospital’s information technology department,” using Woods’ birth certificate and a fictitious I-9 form to pass the hospital background check.

Over the next decade, Kierans earned about $700,000 in that role, while furthering his identity theft scheme. Between 2016 and 2022, Kierans used Woods’ name, Social Security number, and date of birth and “repeatedly obtained” eight “vehicle and personal loans” from two credit unions, each one totaling between $15,000 and $44,000.

Woods only discovered the identity theft in 2019, when he was homeless and discovered that he inexplicably had $130,000 in debt to his name. Woods attempted to close bank accounts that Kierans had opened in Woods’ name, and that’s when Kierans went on the offensive, successfully pushing to get Woods arrested to conceal Kierans’ decades of identity theft.

LAPD fails to detect true identity

In 2019, Woods walked into the branch of a national bank in Los Angeles, telling an assistant branch manager that “he had recently discovered that someone was using his credit and had accumulated large amounts of debt,” the plea agreement said.

Woods presented his real Social Security card and an authentic state of California ID card, but the assistant branch manager became suspicious when Woods could not answer security questions that Kierans had set for the bank accounts.

The bank employee called the phone number listed on the accounts, which was Kierans’ number. At that point, Kierans correctly answered the security questions, and the assistant branch manager contacted the Los Angeles Police Department to investigate Woods.

As a result of LAPD’s investigation—which included contacting Kierans and reviewing Kierans’ fraudulent documents, which at times used a different middle name—Woods was arrested for unauthorized use of personal information. Subsequently, Woods was charged with committing felony crimes of identity theft and false impersonation, facing as much as three years’ incarceration for each count.

Woods continued insisting that he was the real victim of identity theft, while the California court system insisted he was actually Kierans. This continued until December 2019 when a state public defender told a court that Woods did not have the mental competency to stand trial. The court ordered Woods to be detained in a publicly funded California mental hospital until his mental competency improved, where he was given psychotropic medication in 2020.

Ultimately, in March 2021, Woods was convicted of the felony charges and after his release, he was ordered to “use only” what California decided was his “true name, Matthew Kierans.” In total, Woods spent 428 days in jail and 147 days in a mental hospital because California officials failed to detect his true identity.

X filing “thermonuclear lawsuit” in Texas should be “fatal,” Media Matters says

Ever since Elon Musk’s X Corp sued Media Matters for America (MMFA) over a pair of reports that X (formerly Twitter) claims caused an advertiser exodus in 2023, one big question has remained for onlookers: Why is this fight happening in Texas?

In a motion to dismiss filed in Texas’ northern district last month, MMFA argued that X’s lawsuit should be dismissed not just because of a “fatal jurisdictional defect,” but “dismissal is also required for lack of venue.”

Notably, MMFA is based in Washington, DC, while “X is organized under Nevada law and maintains its principal place of business in San Francisco, California, where its own terms of service require users of its platform to litigate any disputes.”

“Texas is not a fair or reasonable forum for this lawsuit,” MMFA argued, suggesting that “the case must be dismissed or transferred” because “neither the parties nor the cause of action has any connection to Texas.”

Last Friday, X responded to the motion to dismiss, claiming that the lawsuit—which Musk has described as “thermonuclear”—was appropriately filed in Texas because MMFA “intentionally” targeted readers and at least two X advertisers located in Texas, Oracle and AT&T. According to X, because MMFA “identified Oracle, a Texas-based corporation, by name in its coverage,” MMFA “cannot claim surprise at being held to answer for its conduct in Texas.” X also claimed that Texas has jurisdiction because Musk resides in Texas and “makes numerous critical business decisions about X while in Texas.”

This so-called targeting of Texans caused a “substantial part” of alleged financial harms that X attributes to MMFA’s reporting, X alleged.

According to X, MMFA specifically targeted X in Texas by sending newsletters sharing its reports with “hundreds or thousands” of Texas readers and by allegedly soliciting donations from Texans to support MMFA’s reporting.

But MMFA pushed back, saying that “Texas subscribers comprise a disproportionately small percentage of Media Matters’ newsletter recipients” and that MMFA did “not solicit Texas donors to fund Media Matters’s journalism concerning X.” Because of this, X’s “efforts to concoct claim-related Texas contacts amount to a series of shots in the dark, uninformed guesses, and irrelevant tangents,” MMFA argued.

On top of that, MMFA argued that X could not attribute any financial harms allegedly caused by MMFA’s reports to either of the two Texas-based advertisers that X named in its court filings. Oracle, MMFA said, “by X’s own admission,… did not withdraw its ads” from X, and AT&T was not named in MMFA’s reporting, and thus, “any investigation AT&T did into its ad placement on X was of its own volition and is not plausibly connected to Media Matters.” MMFA has argued that advertisers, particularly sophisticated Fortune 500 companies, made their own decisions to stop advertising on X, perhaps due to widely reported increases in hate speech on X or even Musk’s own seemingly antisemitic posting.

Ars could not immediately reach X, Oracle, or AT&T for comment.

X’s suit allegedly designed to break MMFA

MMFA President Angelo Carusone, who is a defendant in X’s lawsuit, told Ars that X’s recent filing has continued to “expose” the lawsuit as a “meritless and vexatious effort to inflict maximum damage on critical research and reporting about the platform.”

“It’s solely designed to basically break us or stop us from doing the work that we were doing originally,” Carusone said, confirming that the lawsuit has negatively impacted MMFA’s hate speech research on X.

MMFA argued that Musk could have sued in other jurisdictions, such as Maryland, DC, or California, and MMFA would not have disputed the venue, but Carusone suggested that Musk sued in Texas in hopes that it would be “a more friendly jurisdiction.”

Apple wouldn’t let Jon Stewart interview FTC Chair Lina Khan, TV host claims

Tech company also didn’t want a segment on Stewart’s show criticizing AI.

Before the cancellation of The Problem with Jon Stewart on Apple TV+, Apple forbade the inclusion of Federal Trade Commission Chair Lina Khan as a guest and steered the show away from confronting issues related to artificial intelligence, according to Jon Stewart.

This isn’t the first we’ve heard of this rift between Apple and Stewart. When the Apple TV+ show was canceled last October, reports circulated that he told his staff that creative differences over guests and topics were a factor in the decision.

The New York Times reported that both China and AI were sticking points between Apple and Stewart. Stewart confirmed the broad strokes of that narrative in a CBS Morning Show interview after it was announced that he would return to The Daily Show.

“They decided that they felt that they didn’t want me to say things that might get me into trouble,” he explained.

Stewart’s comments during his interview with Khan yesterday were the first time he’s gotten more specific publicly.

“I’ve got to tell you, I wanted to have you on a podcast, and Apple asked us not to do it—to have you. They literally said, ‘Please don’t talk to her,'” Stewart said while interviewing Khan on the April 1, 2024, episode of The Daily Show.

Khan appeared on the show to explain and evangelize the FTC’s efforts to battle corporate monopolies both in and outside the tech industry in the US and to explain the challenges the organization faces.

She became the FTC chair in 2021 and has since garnered a reputation for an aggressive and critical stance against monopolistic tendencies or practices among Big Tech companies like Amazon and Meta.

Stewart also confirmed previous reports that AI was a sensitive topic for Apple. “They wouldn’t let us do that dumb thing we did in the first act on AI,” he said, referring to the desk monologue segment that preceded the Khan interview in the episode.

The segment on AI in the first act of the episode mocked various tech executives for their utopian framing of AI and interspersed those claims with acknowledgments from many of the same leaders that AI would replace many people’s jobs. (It did not mention Apple or its leadership, though.)

Stewart and The Daily Show‘s staff also included clips of current tech leaders suggesting that workers be retrained to work with or on AI when their current roles are disrupted by it. That was followed by a montage of US political leaders promising to retrain workers after various technological and economic disruptions over the years, with the implication that those retraining efforts were rarely as successful as promised.

The segment effectively lampooned some of the doublespeak about AI, though Stewart stopped short of venturing any solutions or alternatives to the current path, so it mostly just prompted outrage and laughs.

The Daily Show host Jon Stewart’s segment criticizing tech and political leaders on the topic of AI.

Apple currently uses AI-related technologies in its software, services, and devices, but it has yet to launch anything built on generative AI, the new frontier that has drawn worry, optimism, and criticism from various parties.

However, the company is expected to roll out its first generative AI features as part of iOS 18, a new operating system update for iPhones. iOS 18 will likely be detailed during Apple’s annual developer conference in June and will reach users’ devices sometime in the fall.




Google agrees to delete Incognito data despite prior claim that’s “impossible”

Deleting files —

What a lawyer calls “a historic step,” Google considers not that “significant.”


To settle a class-action dispute over Chrome’s “Incognito” mode, Google has agreed to delete billions of data records reflecting users’ private browsing activities.

In a statement provided to Ars, users’ lawyer, David Boies, described the settlement as “a historic step in requiring honesty and accountability from dominant technology companies.” Based on Google’s insights, users’ lawyers valued the settlement between $4.75 billion and $7.8 billion, the Monday court filing said.

Under the settlement, Google agreed to delete class-action members’ private browsing data collected in the past, as well as to “maintain a change to Incognito mode that enables Incognito users to block third-party cookies by default.” This, plaintiffs’ lawyers noted, “ensures additional privacy for Incognito users going forward, while limiting the amount of data Google collects from them” over the next five years. Plaintiffs’ lawyers said that this means that “Google will collect less data from users’ private browsing sessions” and “Google will make less money from the data.”

“The settlement stops Google from surreptitiously collecting user data worth, by Google’s own estimates, billions of dollars,” Boies said. “Moreover, the settlement requires Google to delete and remediate, in unprecedented scope and scale, the data it improperly collected in the past.”

Google had already updated disclosures to users, changing the splash screen displayed “at the beginning of every Incognito session” to inform users that Google was still collecting private browsing data. Under the settlement, those disclosures must be rolled out to all users by March 31 and must remain in place afterward. Google also agreed to “no longer track people’s choice to browse privately,” and the court filing said that “Google cannot roll back any of these important changes.”

Notably, the settlement does not award monetary damages to class members. Instead, Google agreed that class members retain “rights to sue Google individually for damages” through arbitration, which, users’ lawyers wrote, “is important given the significant statutory damages available under the federal and state wiretap statutes.”

“These claims remain available for every single class member, and a very large number of class members recently filed and are continuing to file complaints in California state court individually asserting those damages claims in their individual capacities,” the court filing said.

While “Google supports final approval of the settlement,” the company “disagrees with the legal and factual characterizations contained in the motion,” the court filing said. Google spokesperson José Castañeda told Ars that the tech giant thinks that the “data being deleted isn’t as significant” as Boies represents, confirming that Google was “pleased to settle this lawsuit, which we always believed was meritless.”

“The plaintiffs originally wanted $5 billion and are receiving zero,” Castañeda said. “We never associate data with users when they use Incognito mode. We are happy to delete old technical data that was never associated with an individual and was never used for any form of personalization.”

While Castañeda said that Google was happy to delete the data, a footnote in the court filing noted that initially, “Google claimed in the litigation that it was impossible to identify (and therefore delete) private browsing data because of how it stored data.” Now, under the settlement, however, Google has agreed “to remediate 100 percent of the data set at issue.”

Mitigation efforts include deleting fields Google used to detect users in Incognito mode, “partially redacting IP addresses,” and deleting “detailed URLs, which will prevent Google from knowing the specific pages on a website a user visited when in private browsing mode.” Keeping “only the domain-level portion of the URL (i.e., only the name of the website) will vastly improve user privacy by preventing Google (or anyone who gets their hands on the data) from knowing precisely what users were browsing,” the court filing said.
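In code, the two redaction steps described in the filing amount to something like the following sketch. This is purely illustrative: Google’s actual pipeline is not public, the function names are hypothetical, and zeroing the last octet is just one plausible reading of “partially redacting IP addresses.”

```python
from urllib.parse import urlparse

def redact_url(url: str) -> str:
    # Keep only the domain (netloc); drop the page-level path and
    # query string, so the specific pages visited are unrecoverable.
    return urlparse(url).netloc

def redact_ip(ip: str) -> str:
    # Partially redact an IPv4 address by zeroing its last octet
    # (one plausible scheme; the filing gives no specifics).
    return ".".join(ip.split(".")[:3]) + ".0"

print(redact_url("https://example.com/health/condition?id=42"))  # example.com
print(redact_ip("203.0.113.77"))  # 203.0.113.0
```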

Because Google did not oppose the motion for final approval, US District Judge Yvonne Gonzalez Rogers is expected to issue an order approving the settlement on July 30.



AT&T acknowledges data leak that hit 73 million current and former users

A lot of leaked data —

Data leak hit 7.6 million current AT&T users, 65.4 million former subscribers.


AT&T reset passcodes for millions of customers after acknowledging a massive leak involving the data of 73 million current and former subscribers.

“Based on our preliminary analysis, the data set appears to be from 2019 or earlier, impacting approximately 7.6 million current AT&T account holders and approximately 65.4 million former account holders,” AT&T said in an update posted to its website on Saturday.

An AT&T support article said the carrier is “reaching out to all 7.6 million impacted customers and have reset their passcodes. In addition, we will be communicating with current and former account holders with compromised sensitive personal information.” AT&T said the leaked information varied by customer but included full names, email addresses, mailing addresses, phone numbers, Social Security numbers, dates of birth, AT&T account numbers, and passcodes.

AT&T’s acknowledgement of the leak described it as “AT&T data-specific fields [that] were contained in a data set released on the dark web.” But the same data appears to be on the open web as well. As security researcher Troy Hunt wrote, the data is “out there in plain sight on a public forum easily accessed by a normal web browser.”

The hacking forum has a public version accessible with any browser and a hidden service that requires a Tor network connection. Based on forum posts we viewed today, the leak seems to have appeared on both the public and Tor versions of the hacking forum on March 17 of this year. Viewing the AT&T data requires a hacking forum account and site “credits” that can be purchased or earned by posting on the forum.

Hunt told Ars today that the term “dark web” is “incorrect and misleading” in this case. The forum where the AT&T data appeared “does not meet the definition of dark web,” he wrote in an email. “No special software, no special network, just a plain old browser. It’s easily discoverable via a Google search and immediately shows many PII [Personal Identifiable Information] records from the AT&T breach. Registration is then free for anyone with the only remaining barrier being obtaining credits.”

We contacted AT&T today and will update this article if we get a response.

49 million email addresses

Hunt’s post on March 19 said the leaked information included a file with 73,481,539 lines of data that contained 49,102,176 unique email addresses. Another file with decrypted Social Security numbers had 43,989,217 lines, he wrote.

Hunt, who runs the “Have I Been Pwned” database that lets you check if your email was in a data breach, says the 49 million email addresses in the AT&T leak have been added to his database.

BleepingComputer covered the leak two weeks ago, writing that it is the same data involved in a 2021 incident in which a hacker shared samples of the data and attempted to sell the entire data set for $1 million. In 2021, AT&T told BleepingComputer that “the information that appeared in an Internet chat room does not appear to have come from our systems.”

AT&T maintained that position last month. “AT&T continues to tell BleepingComputer today that they still see no evidence of a breach in their systems and still believe that this data did not originate from them,” the news site’s March 17, 2024, article said.

AT&T says data may have come from itself or vendor

AT&T’s update on March 30 acknowledged that the data may have come from AT&T itself, but said it also may have come from an AT&T vendor:

AT&T has determined that AT&T data-specific fields were contained in a data set released on the dark web approximately two weeks ago. While AT&T has made this determination, it is not yet known whether the data in those fields originated from AT&T or one of its vendors. With respect to the balance of the data set, which includes personal information such as Social Security numbers, the source of the data is still being assessed.

“Currently, AT&T does not have evidence of unauthorized access to its systems resulting in exfiltration of the data set,” the company update also said. AT&T said it “is communicating proactively with those impacted and will be offering credit monitoring at our expense where applicable.”

AT&T said the passcodes that it reset are generally four digits and are different from AT&T account passwords. The passcodes are used when calling customer support, when managing an account at a retail store, and when signing in to the AT&T website “if you’ve chosen extra security.”



After overreaching TOS angers users, cloud provider Vultr backs off

“Clearly causing confusion” —

Terms seemed to grant an “irrevocable” right to commercialize any user content.


After backlash, the cloud provider Vultr has updated its terms to remove a clause that a Reddit user feared required customers to “fork over rights” to “anything” hosted on its platform.

The alarming clause seemed to grant Vultr a “non-exclusive, perpetual, irrevocable” license to “use and commercialize” any user content uploaded, posted, hosted, or stored on Vultr “in any way that Vultr deems appropriate, without any further consent” or compensation to users or third parties.

Here’s the full clause that was removed:

You hereby grant to Vultr a non-exclusive, perpetual, irrevocable, royalty-free, fully paid-up, worldwide license (including the right to sublicense through multiple tiers) to use, reproduce, process, adapt, publicly perform, publicly display, modify, prepare derivative works, publish, transmit and distribute each of your User Content, or any portion thereof, in any form, medium or distribution method now known or hereafter existing, known or developed, and otherwise use and commercialize the User Content in any way that Vultr deems appropriate, without any further consent, notice and/or compensation to you or to any third parties, for purposes of providing the Services to you.

In a statement provided to Ars, Vultr CEO J.J. Kardwell said that the terms were revised to “simplify and clarify” language causing confusion for some users.

“A Reddit post incorrectly took portions of our Terms of Service out of context, which only pertain to content provided to Vultr on our public mediums (community-related content on public forums, as an example) for purposes of rendering the needed services—e.g., publishing comments, posts, or ratings,” Kardwell said. “This is separate from a user’s own, private content that is deployed on Vultr services.”

It’s easy to see why the Reddit user was confused, as the previous terms did not clearly differentiate between a user’s public and “private content” in the paragraph where it was included. Kardwell told The Register that the old terms, which were drafted in 2021, were “clearly causing confusion for some portion of users” and were updated because Vultr recognized “that the average user doesn’t have a law degree.”

According to Kardwell, the part of the removed clause that “ends with ‘for purposes of providing the Services to you'” was “intended to make it clear that any rights referenced are solely for the purposes of providing the Services to you.” Kevin Cochrane, Vultr’s chief marketing officer, told Ars that users were intended to scroll down to understand that the line only applied to community content described in a section labeled “content that you make publicly available.” He said that the removed clause was necessary in 2021 when Vultr provided forums and collected ratings, but that the clause could be stripped now because “we don’t actually use” that kind of community content “any longer.”

“We’re very focused on being responsive to the community and the concerns people have, and we believe the strongest thing we can do to demonstrate that there is no bad intent here is to remove it,” Kardwell told The Register.

A plain read of the terms without scrolling seemed to substantiate the Reddit user’s worst fears that “it’s possible Vultr may want the expansive license grant to do AI/Machine Learning based on the data they host. Or maybe they could mine database contents to resell [personally identifiable information]. Given the (perpetual!) license, there’s not really any limit to what they might do. They could even clone someone’s app and sell their own rebranded version, and they’d be legally in the clear.”

The user claimed to have been locked out of their Vultr account for five days after refusing to agree to the terms, with Vultr’s support team seemingly providing little recourse to migrate data to a new cloud provider.

“Migrating all my servers and DNS without being able to log in to my account is going to be both a headache and error prone,” the Reddit user wrote. “I feel like they’re holding my business hostage and extorting me into accepting a license I would never consent to under duress.”

Ars was not able to reach the Reddit user to see if Vultr removing the line from the terms has resolved the issue. Other users on the thread claimed that they had terminated their Vultr accounts over the controversy. Cochrane told Ars that the company had heard from many customers over the past two days but had no way to identify the Reddit user to confirm whether they had terminated their account. He said the support team was actively reaching out to users to determine whether their complaints stemmed from discomfort with the previous terms.

In his statement, Kardwell reiterated that Vultr “customers own 100 percent of their content,” clarifying that Vultr “has never claimed any rights to, used, accessed, nor allowed access to or shared” user content, “other than as may be required by law or for security purposes.”

He also confirmed that Vultr would be conducting a “full review” of its terms and publishing another update “soon.” Kardwell told The Register that the most recent update to its terms that led the Reddit user to call out the company was “actually spurred by unrelated Microsoft licensing changes,” promising that Vultr has no plans to use or commercialize user data.

“We do not use user data,” Kardwell told The Register. “We never have, and we never will. We take privacy and security very seriously. It’s at the core of what we do globally.”



NYC’s government chatbot is lying about city laws and regulations

Close enough for government work? —

You can be evicted for not paying rent, despite what the “MyCity” chatbot says.

Has a government employee checked all those zeroes and ones floating above the skyline?


If you follow generative AI news at all, you’re probably familiar with LLM chatbots’ tendency to “confabulate” incorrect information while presenting that information as authoritatively true. That tendency seems poised to cause some serious problems now that a chatbot run by the New York City government is making up incorrect answers to some important questions of local law and municipal policy.

NYC’s “MyCity” ChatBot launched as a “pilot” program last October. The announcement touted the ChatBot as a way for business owners to “save … time and money by instantly providing them with actionable and trusted information from more than 2,000 NYC Business webpages and articles on topics such as compliance with codes and regulations, available business incentives, and best practices to avoid violations and fines.”

But a new report from The Markup and local nonprofit news site The City found the MyCity chatbot giving dangerously wrong information about some pretty basic city policies. To cite just one example, the bot said that NYC buildings “are not required to accept Section 8 vouchers,” when an NYC government info page says clearly that Section 8 housing subsidies are one of many lawful sources of income that landlords are required to accept without discrimination. The Markup also received incorrect information in response to chatbot queries regarding worker pay and work hour regulations, as well as industry-specific information like funeral home pricing.

Welcome news for people who think the rent is too damn high, courtesy of the MyCity chatbot.


Further testing from BlueSky user Kathryn Tewson shows the MyCity chatbot giving some dangerously wrong answers regarding the treatment of workplace whistleblowers, as well as some hilariously bad answers regarding the need to pay rent.

This is going to keep happening

The result isn’t too surprising if you dig into the token-based predictive models that power these kinds of chatbots. MyCity’s Microsoft Azure-powered chatbot uses a complex process of statistical associations across millions of tokens to essentially guess at the most likely next word in any given sequence, without any real understanding of the underlying information being conveyed.

That can cause problems when a single factual answer to a question might not be reflected precisely in the training data. In fact, The Markup said that at least one of its tests resulted in the correct answer on the same query about accepting Section 8 housing vouchers (even as “ten separate Markup staffers” got the incorrect answer when repeating the same question).
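That guess-the-next-word behavior can be illustrated with a toy bigram model. This is a deliberately simplified sketch with a made-up three-sentence corpus, not the Azure-powered system NYC uses; production chatbots use neural networks over enormous vocabularies, but the core move is the same: pick a statistically plausible continuation, with no check on whether the resulting claim is true.

```python
import random
from collections import Counter, defaultdict

# A tiny corpus containing both a correct statement and a
# superficially similar incorrect one.
corpus = (
    "landlords must accept section 8 vouchers . "
    "landlords must accept lawful income . "
    "landlords are not required to accept cash ."
).split()

# Count which token follows which.
bigrams: dict[str, Counter] = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def next_token(prev: str) -> str:
    # Sample a continuation weighted by how often it appeared.
    counts = bigrams[prev]
    tokens, weights = zip(*counts.items())
    return random.choices(tokens, weights=weights)[0]

print(next_token("must"))    # always "accept" in this corpus
print(next_token("accept"))  # could be "section", "lawful", or "cash"
```

Because the draw after “accept” is random, the same prompt can yield contradictory continuations from run to run, which is roughly what happened when ten Markup staffers got a different answer than one of their colleagues on the same Section 8 question.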

The MyCity Chatbot—which is prominently labeled as a “Beta” product—tells users who bother to read the warnings that it “may occasionally produce incorrect, harmful or biased content” and that users should “not rely on its responses as a substitute for professional advice.” But the page also states front and center that it is “trained to provide you official NYC Business information” and is being sold as a way “to help business owners navigate government.”

Andrew Rigie, executive director of the NYC Hospitality Alliance, told The Markup that he had encountered inaccuracies from the bot himself and had received reports of the same from at least one local business owner. But NYC Office of Technology and Innovation Spokesperson Leslie Brown told The Markup that the bot “has already provided thousands of people with timely, accurate answers” and that “we will continue to focus on upgrading this tool so that we can better support small businesses across the city.”

NYC Mayor Eric Adams touts the MyCity chatbot in an October announcement event.

The Markup’s report highlights the danger of governments and corporations rolling out chatbots to the public before their accuracy and reliability have been fully vetted. Last month, a court forced Air Canada to honor a fraudulent refund policy invented by a chatbot available on its website. A recent Washington Post report found that chatbots integrated into major tax preparation software provide “random, misleading, or inaccurate … answers” to many tax queries. And some crafty prompt engineers have reportedly been able to trick car dealership chatbots into accepting a “legally binding offer – no take backsies” for a $1 car.

These kinds of issues are already leading some companies away from more generalized LLM-powered chatbots and toward more narrowly scoped Retrieval-Augmented Generation (RAG) models, which ground their answers in a small set of vetted, relevant documents. That kind of focus could become that much more important if the FTC is successful in its efforts to make chatbots liable for “false, misleading, or disparaging” information.
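Conceptually, the retrieval half of a RAG pipeline looks like the sketch below. Everything here is illustrative: the documents are made up, and the keyword-overlap scoring is a stand-in for the vector-embedding search real systems use, after which an LLM phrases an answer constrained to the retrieved text.

```python
# Hypothetical vetted knowledge base (e.g., official city policy pages).
docs = {
    "section8": "Landlords in NYC must accept Section 8 housing vouchers.",
    "overtime": "Employers must pay overtime after 40 hours per week.",
}

def retrieve(query: str) -> str:
    # Naive keyword-overlap scoring; production RAG systems rank
    # documents by embedding similarity instead.
    query_words = set(query.lower().split())
    def score(text: str) -> int:
        return len(query_words & set(text.lower().split()))
    return max(docs.values(), key=score)

print(retrieve("must landlords accept section 8 vouchers"))
```

The point of the design is that the generation step only sees the retrieved passage, so a wrong answer requires a retrieval miss rather than a free-form confabulation.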



Jails banned visits in “quid pro quo” with prison phone companies, lawsuits say


Two lawsuits filed by a civil rights group allege that county jails in Michigan banned in-person visits in order to maximize revenue from voice and video calls as part of a “quid pro quo kickback scheme” with prison phone companies.

Civil Rights Corps filed the lawsuits on March 15 against the county governments, two county sheriffs, and two prison phone companies. The suits filed in county courts seek class-action status on behalf of people unable to visit family members detained in the local jails, including children who have been unable to visit their parents.

Defendants in one lawsuit include St. Clair County Sheriff Mat King, prison phone company Securus Technologies, and Securus owner Platinum Equity. In the other lawsuit, defendants include Genesee County Sheriff Christopher Swanson and prison phone company ViaPath Technologies. ViaPath was formerly called Global Tel*Link Corporation (GTL), and the lawsuit primarily refers to the company as GTL.

Each year, thousands of people spend months in the county jails, the lawsuit said. Many of the detainees have not been convicted of any crime and are awaiting trial; if they are convicted and receive long sentences, they are transferred to the Michigan Department of Corrections.

The named plaintiffs in both cases include family members, including children identified by their initials.

“Hundreds of jails” eliminated visits

The Michigan counties are far from alone in implementing visitation bans, Civil Rights Corps said in a lawsuit announcement. “Across the United States, hundreds of jails have eliminated in-person family visits over the last decade,” the group said, adding:

Why has this happened? The answer highlights a profound flaw in how decisions too often get made in our legal system: for-profit jail telecom companies realized that they could earn more profit from phone and video calls if jails eliminated free in-person visits for families. So the companies offered sheriffs and county jails across the country a deal: if you eliminate family visits, we’ll give you a cut of the increased profits from the larger number of calls. This led to a wave across the country, as local jails sought to supplement their budgets with hundreds of millions of dollars in cash from some of the poorest families in our society.

St. Clair County implemented its family visitation ban in September 2017, “prohibiting people from visiting their family members detained inside the county jail,” Civil Rights Corps alleged. This “decision was part of a quid pro quo kickback scheme with Securus Technologies, a for-profit company that contracts with jails to charge the families of incarcerated persons exorbitant rates to communicate with one another through ‘services’ such as low-quality phone and video calls,” the lawsuit said.

Under the contract, “Securus pays the County 50 percent of the $12.99 price tag for every 20-minute video call and 78 percent of the $0.21 per minute cost of every phone call,” the lawsuit said. The contract has “a guarantee that Securus would pay the County at least $190,000 each year,” the St. Clair County lawsuit said.
