defamation

Elon Musk’s Grok keeps making up fake news based on X users’ jokes

It’s all jokes until it isn’t —

X likely hopes to avoid liability with disclaimer that Grok “can make mistakes.”

X’s chatbot Grok is supposed to be an AI engine crunching the platform’s posts to surface and summarize breaking news, but this week, Grok’s flaws were once again exposed when the chatbot got confused and falsely accused an NBA star of criminal vandalism.

“Klay Thompson Accused in Bizarre Brick-Vandalism Spree,” Grok’s headline read in an AI-powered trending-tab post that has remained on X (formerly Twitter) for days. Beneath the headline, Grok went into even more detail to support its fake reporting:

In a bizarre turn of events, NBA star Klay Thompson has been accused of vandalizing multiple houses with bricks in Sacramento. Authorities are investigating the claims after several individuals reported their houses being damaged, with windows shattered by bricks. Klay Thompson has not yet issued a statement regarding the accusations. The incidents have left the community shaken, but no injuries were reported. The motive behind the alleged vandalism remains unclear.

Grok appears to have been confused by a common basketball term: players are said to be throwing “bricks” when they badly miss their shots. According to SF Gate, which was one of the first outlets to report the Grok error, Thompson had an “all-time rough shooting” night, hitting none of his shots in what was an emotional last game with the Golden State Warriors before becoming an unrestricted free agent.

In small type under Grok’s report, X includes a disclaimer saying, “Grok is an early feature and can make mistakes. Verify its outputs.”

But instead of verifying Grok’s outputs, it appeared that X users—in the service’s famously joke-y spirit—decided to fuel Grok’s misinformation. Under the post, X users, some NBA fans, commented with fake victim reports, using the same joke format to seemingly convince Grok that “several individuals reported their houses being damaged.” Some of these joking comments were viewed by millions.

First off… I am ok.

My house was vandalized by bricks 🧱

After my hands stopped shaking, I managed to call the Sheriff…They were quick to respond🚨

My window was gone and the police asked if I knew who did it👮‍♂️

I said yes, it was Klay Thompson

— LakeShowYo (@LakeShowYo) April 17, 2024

First off…I am ok.

My house was vandalized by bricks in Sacramento.

After my hands stopped shaking, I managed to call the Sheriff, they were quick to respond.

My window is gone, the police asked me if I knew who did it.

I said yes, it was Klay Thompson. pic.twitter.com/smrDs6Yi5M

— KeeganMuse (@KeegMuse) April 17, 2024

First off… I am ok.

My house was vandalized by bricks 🧱

After my hands stopped shaking, I managed to call the Sheriff…They were quick to respond🚨

My window was gone and the police asked if I knew who did it👮‍♂️

I said yes, it was Klay Thompson pic.twitter.com/JaWtdJhFli

— JJJ Muse (@JarenJJMuse) April 17, 2024

X did not immediately respond to Ars’ request for comment or confirm if the post will be corrected or taken down.

In the past, both Microsoft and chatbot maker OpenAI have faced defamation lawsuits over similar fabrications in which ChatGPT falsely accused a politician and a radio host of completely made-up criminal histories. Microsoft was also sued by an aerospace professor who Bing Chat falsely labeled a terrorist.

Experts told Ars that it remains unclear if disclaimers like X’s will spare companies from liability should more people decide to sue over fake AI outputs. Defamation claims might depend on proving that platforms “knowingly” publish false statements, and a disclaimer admitting that Grok “can make mistakes” could arguably be read as evidence of that knowledge. Last July, the Federal Trade Commission launched an investigation into OpenAI, demanding that the company address the FTC’s fears of “false, misleading, or disparaging” AI outputs.

Because the FTC doesn’t comment on its investigations, it’s impossible to know if its probe will impact how OpenAI conducts business.

For people suing AI companies, the urgency of protecting against false outputs seems obvious. Last year, the radio host suing OpenAI, Mark Walters, accused the company of “sticking its head in the sand” and “recklessly disregarding whether the statements were false under circumstances when they knew that ChatGPT’s hallucinations were pervasive and severe.”

X just released Grok to all premium users this month, TechCrunch reported, right around the time that X began giving away premium access to the platform’s top users. During that wider rollout, X touted Grok’s new ability to summarize all trending news and topics, perhaps stoking interest in the feature and pushing Grok usage to a peak just before Grok spat out the potentially defamatory post about the NBA star.

Thompson has not issued any statements on Grok’s fake reporting.

Grok’s false post about Thompson may be the first widely publicized example of potential defamation from Grok, but it wasn’t the first time that Grok promoted fake news in response to X users joking around on the platform. During the solar eclipse, a Grok-generated headline read, “Sun’s Odd Behavior: Experts Baffled,” Gizmodo reported.

While it’s amusing to some X users to manipulate Grok, the pattern suggests that Grok may also be vulnerable to being manipulated by bad actors into summarizing and spreading more serious misinformation or propaganda. That’s apparently already happening, too. In early April, Grok made up a headline about Iran attacking Israel with heavy missiles, Mashable reported.

Judge mocks X for “vapid” argument in Musk’s hate speech lawsuit

It looks like Elon Musk may lose X’s lawsuit against hate speech researchers who encouraged a major brand boycott after flagging ads appearing next to extremist content on X, the social media site formerly known as Twitter.

X is trying to argue that the Center for Countering Digital Hate (CCDH) violated the site’s terms of service and illegally accessed non-public data to conduct its reporting, allegedly posing a security risk for X. The boycott, X alleged, cost the company tens of millions of dollars by spooking advertisers, while X contends that the CCDH’s reporting is misleading and ads are rarely served on extremist content.

But at a hearing Thursday, US district judge Charles Breyer told the CCDH that he would consider dismissing X’s lawsuit, repeatedly appearing to mock X’s decision to file it in the first place.

Seemingly skeptical of X’s entire argument, Breyer appeared particularly focused on how X intended to prove that the CCDH could have known that its reporting would trigger such substantial financial losses, as the lawsuit hinges on whether the alleged damages were “foreseeable,” NPR reported.

X’s lawyer, Jon Hawk, argued that when the CCDH joined Twitter in 2019, the group agreed to terms of service that noted those terms could change. So when Musk purchased Twitter and updated rules to reinstate accounts spreading hate speech, the CCDH should have been able to foresee those changes in terms and therefore anticipate that any reporting on spikes in hate speech would cause financial losses.

According to CNN, this is where Breyer became frustrated, telling Hawk, “I’m trying to figure out in my mind how that’s possibly true, because I don’t think it is.”

“What you have to tell me is, why is it foreseeable?” Breyer said. “That they should have understood that, at the time they entered the terms of service, that Twitter would then change its policy and allow this type of material to be disseminated?

“That, of course, reduces foreseeability to one of the most vapid extensions of law I’ve ever heard,” Breyer added. “‘Oh, what’s foreseeable is that things can change, and therefore, if there’s a change, it’s ‘foreseeable.’ I mean, that argument is truly remarkable.”

According to NPR, Breyer suggested that X was trying to “shoehorn” its legal theory by using language from a breach of contract claim, when what the company actually appeared to be alleging was defamation.

“You could’ve brought a defamation case; you didn’t bring a defamation case,” Breyer said. “And that’s significant.”

Breyer directly noted that one reason why X might not bring a defamation suit was if the CCDH’s reporting was accurate, NPR reported.

CCDH’s CEO and founder, Imran Ahmed, provided a statement to Ars, confirming that the group is “very pleased with how yesterday’s argument went, including many of the questions and comments from the court.”

“We remain confident in the strength of our arguments for dismissal,” Ahmed said.

Over a decade later, climate scientist prevails in libel case

What a long, strange trip it’s been —

But the case is not entirely over, as he plans to go after the publishers again.

Climate scientist Michael Mann.

This is a story I had sporadically wondered whether I’d ever have the chance to write. Over a decade ago, I covered a lawsuit filed by climate scientist Michael Mann, who finally had enough of being dragged through the mud online. When two authors accused him of fraud and compared his academic position to that of a convicted child molester, he sued for defamation.

Mann was considered a public figure, which makes winning defamation cases extremely challenging. But his case was based on the fact that multiple institutions on two different continents had scrutinized his work and found no hint of scientific malpractice; thus, he argued, anyone who accused him of fraud was acting with reckless disregard for the truth.

Over the ensuing decade, the case was narrowed, decisions were appealed, and long periods went by without any apparent movement. But recently, amazingly, the case finally went to trial, and a jury rendered a verdict yesterday: Mann is entitled to damages from the writers. Even if you don’t care about the case, it’s worth reflecting on how much has changed since it was first filed.

The suit

The piece that started the whole mess was posted on the blog of a free market think tank called the Competitive Enterprise Institute. In it, Rand Simberg accused Mann of manipulating data and compared the investigations at Penn State (where he was faculty at the time) to the university’s lack of interest in pursuing investigations of one of its football coaches who was convicted of molesting children. A few days later, a second author, Mark Steyn, echoed those accusations at the publication National Review.

Mann’s case was based on the accusations of fraud in those pieces. He had been a target for years after he published work showing that the recent warming was unprecedented in the last few thousand years. This graph, known as the “hockey stick” due to its sudden swerve upwards, later graced the cover of an IPCC climate report. The pieces were also published just a few years after a large trove of emails from climate scientists were obtained illicitly from the servers of a research institution, leading to widespread accusations of misconduct against climate scientists.

Out of the public eye were a large number of investigations, both by the schools involved and the governments that funded the researchers, all of which cleared those involved, including Mann. But Simberg and Steyn were part of a large collection of writers and bloggers who were convinced that Mann (and by extension, all of modern climate science) had to be wrong. So they assumed—and in Simberg and Steyn’s case, wrote—that the investigations were simply whitewashes.

Mann’s suit alleged the exact opposite: that, by accusing him of fraud despite these investigations, the two authors showed a reckless disregard for truth. That would be enough to hold them responsible for defamation despite the fact that Mann was a public figure. The authors’ defense was largely focused on the fact that they genuinely believed their own opinions and so should be free to express them under the First Amendment.

In essence, the case came down to whether people who appear to be incapable of incorporating evidence into their opinions should still be able to voice those opinions without consequences, even if doing so has consequences for others.

Victory at last-ish

In the end, the jury decided they did not. And their damage awards suggest that they understood the present circumstances quite well. For starters, the compensatory damages awarded to Mann for the defamation itself were minimal: one dollar each from Simberg and Steyn. While Mann alleged he lost grants and suffered public scorn due to the columns, he’s since become a successful book author and received a tenured chair at the University of Pennsylvania, where he now heads its Center for Science, Sustainability, and the Media.

But the suit also sought punitive damages to discourage future behavior of the sort. Here, there was a dramatic split. Simberg, who now tends to write about politics rather than science and presents himself as a space policy expert, was placed on the hook for just $1,000. Steyn, who is still actively fighting the climate wars and hosts a continued attack on Mann on his website, was told to pay Mann $1 million.

That said, the suit’s not over yet. Steyn has suggested that there are grounds to appeal the monetary award, while Mann has indicated that he will appeal the decision that had terminated his case against the Competitive Enterprise Institute and National Review. So, check back in another decade and we may have another decision.

OpenAI must defend ChatGPT fabrications after failing to defeat libel suit

One false move —

ChatGPT users may soon learn whether false outputs will be allowed to ruin lives.

OpenAI may finally have to answer for ChatGPT’s “hallucinations” in court after a Georgia judge recently ruled against the tech company’s motion to dismiss a radio host’s defamation suit.

OpenAI had argued that ChatGPT’s output cannot be considered libel, partly because the chatbot output cannot be considered a “publication,” which is a key element of a defamation claim. In its motion to dismiss, OpenAI also argued that Georgia radio host Mark Walters could not prove that the company acted with actual malice or that anyone believed the allegedly libelous statements were true or that he was harmed by the alleged publication.

It’s too early to say whether Judge Tracie Cason found OpenAI’s arguments persuasive. In her order denying OpenAI’s motion to dismiss, which MediaPost has shared, Cason did not specify how she arrived at her decision, saying only that she had “carefully” considered arguments and applicable laws.

There may be some clues as to how Cason reached her decision in a court filing from John Monroe, attorney for Walters, when opposing the motion to dismiss last year.

Monroe had argued that OpenAI improperly moved to dismiss the lawsuit by arguing facts that have yet to be proven in court. If OpenAI intended the court to rule on those arguments, Monroe suggested that a motion for summary judgment would have been the proper step at this stage in the proceedings, not a motion to dismiss.

Had OpenAI gone that route, though, Walters would have had an opportunity to present additional evidence. To survive a motion to dismiss, all Walters had to do was show that his complaint was reasonably supported by facts, Monroe argued.

Having failed to convince the court that Walters had no case, OpenAI will now likely see its legal theories regarding liability for ChatGPT’s “hallucinations” face their first test in court.

“We are pleased the court denied the motion to dismiss so that the parties will have an opportunity to explore, and obtain a decision on, the merits of the case,” Monroe told Ars.

What’s the libel case against OpenAI?

Walters sued OpenAI after a journalist, Fred Riehl, warned him that in response to a query, ChatGPT had fabricated an entire lawsuit. Generating an entire complaint with an erroneous case number, ChatGPT falsely claimed that Walters had been accused of defrauding and embezzling funds from the Second Amendment Foundation.

Walters is the host of Armed America Radio and has a reputation as the “Loudest Voice in America Fighting For Gun Rights.” He claimed that OpenAI “recklessly” disregarded whether ChatGPT’s outputs were false, alleging that OpenAI knew that “ChatGPT’s hallucinations were pervasive and severe” and did not work to prevent allegedly libelous outputs. As Walters saw it, the false statements were serious enough to be potentially career-damaging, “tending to injure Walter’s reputation and exposing him to public hatred, contempt, or ridicule.”

Monroe argued that Walters had “adequately stated a claim” of libel per se as a private citizen, “for which relief may be granted under Georgia law,” where “malice is inferred” in “all actions for defamation” but “may be rebutted” by OpenAI.

Pushing back, OpenAI argued that Walters was a public figure who must prove that OpenAI acted with “actual malice” when allowing ChatGPT to produce allegedly harmful outputs. But Monroe told the court that OpenAI “has not shown sufficient facts to establish that Walters is a general public figure.”

Whether or not Walters is a public figure could be another key question leading Cason to rule against OpenAI’s motion to dismiss.

Perhaps also frustrating the court, OpenAI introduced “a large amount of material” in its motion to dismiss that fell outside the scope of the complaint, Monroe argued. That included pointing to a disclaimer in ChatGPT’s terms of use that warns users that ChatGPT’s responses may not be accurate and should be verified before publishing. According to OpenAI, this disclaimer makes Riehl the “owner” of any libelous ChatGPT responses to his queries.

“A disclaimer does not make an otherwise libelous statement non-libelous,” Monroe argued. And even if the disclaimer made Riehl liable for publishing the ChatGPT output—an argument that may give some ChatGPT users pause before querying—”that responsibility does not have the effect of negating the responsibility of the original publisher of the material,” Monroe argued.

Additionally, OpenAI referenced a conversation between Walters and OpenAI, even though Monroe said that the complaint “does not allege that Walters ever had a chat” with OpenAI. And OpenAI also somewhat oddly argued that ChatGPT outputs could be considered “intra-corporate communications” rather than publications, suggesting that ChatGPT users could be considered private contractors when querying the chatbot.

With the lawsuit moving forward, curious chatbot users everywhere may finally get the answer to a question that has been unclear since ChatGPT quickly became the fastest-growing consumer application of all time after its launch in November 2022: Will ChatGPT’s hallucinations be allowed to ruin lives?

In the meantime, the FTC is seemingly still investigating potential harms caused by ChatGPT’s “false, misleading, or disparaging” generations.

An FTC spokesperson previously told Ars that the FTC does not generally comment on nonpublic investigations.

OpenAI did not immediately respond to Ars’ request to comment.

Twin Galaxies, Billy Mitchell settle Donkey Kong score case before trial

Billy Mitchell (left) and Twin Galaxies owner Jace Hall (center) attend an event at the Arcade Expo 2015 in Banning, California.

The long, drawn-out legal fight between famed high-score chaser Billy Mitchell and “International Scoreboard” Twin Galaxies appears to be over. Courthouse News reports that Mitchell and Twin Galaxies have reached a confidential settlement in the case months before an oft-delayed trial was finally set to start.

The settlement comes as Twin Galaxies counsel David Tashroudian had come under fire for legal misconduct after making improper contact with two of Mitchell’s witnesses in the case. Tashroudian formally apologized to the court for that contact in a filing earlier this month, writing that he had “debased myself before this Court” and “allowed my personal emotions to cloud my judgement” by reaching out to the witnesses outside of official court proceedings.

But in the same statement, Tashroudian took Mitchell’s side to task for “what appeared to me to be the purposeful fabrication and hiding of evidence.” The emotional, out-of-court contact was intended “to prove what I still genuinely believe is fraud on this Court,” he wrote.

Billy Mitchell reviews a document in front of a Donkey Kong machine decked out for an annual “Kong Off” high score competition.

In a filing last month, Tashroudian asked the court to sanction Mitchell for numerous alleged lies and fabrications during the evidence-discovery process. Those alleged lies encompass subjects including an alleged $33,000 payment associated with the sale of Twin Galaxies; the technical cabinet testing of Carlos Pineiro; the setup of a recording device for one of Mitchell’s high-score performances; a supposed “Player of the Century” plaque Mitchell says he received from Namco; and a technical analysis showing, according to Tashroudian, “that the videotaped recordings of his score in question could not have come from original unmodified Donkey Kong hardware.”

Tashroudian asked the court to impose sanctions on Mitchell—up to and including dismissing the case—for these and other “deliberate and egregious [examples of] discovery abuse throughout the course of this litigation by lying at deposition and by engaging in the spoliation of evidence with the intent to defraud the Court.” A hearing on both Mitchell and Tashroudian’s alleged actions was scheduled for later this week; Tashroudian could still face referral to the State Bar for his misconduct.

“Plaintiff wants nothing more than for me to be kicked off of this case,” Tashroudian continued in his apology statement. “I know this will not stop. I am now [Mitchell’s] and his counsel’s target. The facts support [Twin Galaxies’] defense and now [Mitchell] realizes that. He also realizes that he has dug himself into a hole by lying in discovery. I do not say that lightly.”

Mitchell, Tashroudian, and representatives for Twin Galaxies were not immediately available to respond to a request for comment from Ars Technica.
