Judge rejects most ChatGPT copyright claims from book authors

Insufficient evidence —

OpenAI plans to defeat authors’ remaining claim at a “later stage” of the case.
A US district judge in California has largely sided with OpenAI, dismissing the majority of claims raised by authors alleging that large language models powering ChatGPT were illegally trained on pirated copies of their books without their permission.

By allegedly repackaging original works as ChatGPT outputs, authors alleged, OpenAI’s most popular chatbot was just a high-tech “grift” that seemingly violated copyright laws, as well as state laws preventing unfair business practices and unjust enrichment.

According to Judge Araceli Martínez-Olguín, authors behind three separate lawsuits—including Sarah Silverman, Michael Chabon, and Paul Tremblay—have failed to provide evidence supporting any of their claims except for direct copyright infringement.

OpenAI had argued as much in its motion to dismiss these cases last August. At that time, OpenAI said that it expected to beat the direct infringement claim at a “later stage” of the proceedings.

Among copyright claims tossed by Martínez-Olguín were accusations of vicarious copyright infringement. Perhaps most significantly, Martínez-Olguín agreed with OpenAI that the authors’ allegation that “every” ChatGPT output “is an infringing derivative work” is “insufficient” to allege vicarious infringement, which requires evidence that ChatGPT outputs are “substantially similar” or “similar at all” to authors’ books.

“Plaintiffs here have not alleged that the ChatGPT outputs contain direct copies of the copyrighted books,” Martínez-Olguín wrote. “Because they fail to allege direct copying, they must show a substantial similarity between the outputs and the copyrighted materials.”

Authors also failed to convince Martínez-Olguín that OpenAI violated the Digital Millennium Copyright Act (DMCA) by allegedly removing copyright management information (CMI)—such as author names, titles of works, and terms and conditions for use of the work—from training data.

This claim failed because authors cited “no facts” that OpenAI intentionally removed the CMI or built the training process to omit CMI, Martínez-Olguín wrote. Further, the authors cited examples of ChatGPT referencing their names, which would seem to suggest that some CMI remains in the training data.

Some of the remaining claims were dependent on copyright claims to survive, Martínez-Olguín wrote.

Even if authors could show evidence of a DMCA violation, the judge said, they could only speculate about what economic injury was caused by OpenAI allegedly repurposing their works unfairly.

Similarly, allegations of “fraudulent” unfair conduct—accusing OpenAI of “deceptively” designing ChatGPT to produce outputs that omit CMI—”rest on a violation of the DMCA,” Martínez-Olguín wrote.

The only claim under California’s unfair competition law that was allowed to proceed alleged that OpenAI used copyrighted works to train ChatGPT without authors’ permission. Because the state law broadly defines what’s considered “unfair,” Martínez-Olguín said that it’s possible that OpenAI’s use of the training data “may constitute an unfair practice.”

Remaining claims of negligence and unjust enrichment failed, Martínez-Olguín wrote, because authors only alleged intentional acts and did not explain how OpenAI “received and unjustly retained a benefit” from training ChatGPT on their works.

Authors have been ordered to consolidate their complaints and have until March 13 to amend arguments and continue pursuing any of the dismissed claims.

To shore up the tossed copyright claims, authors would likely need to provide examples of ChatGPT outputs that are similar to their works, as well as evidence of OpenAI intentionally removing CMI to “induce, enable, facilitate, or conceal infringement,” Martínez-Olguín wrote.

Ars could not immediately reach the authors’ lawyers or OpenAI for comment.

As authors likely prepare to continue fighting OpenAI, the US Copyright Office has been fielding public input before releasing guidance that could one day help rights holders pursue legal claims and may eventually require works to be licensed from copyright owners for use as training materials. Among the thorniest questions is whether AI tools like ChatGPT should be considered authors of the outputs they generate that end up in creative works.

While the Copyright Office prepares to release three reports this year “revealing its position on copyright law in relation to AI,” according to The New York Times, OpenAI recently made it clear that it does not plan to stop referencing copyrighted works in its training data. Last month, OpenAI said it would be “impossible” to train AI models without copyrighted materials, because “copyright today covers virtually every sort of human expression—including blogposts, photographs, forum posts, scraps of software code, and government documents.”

According to OpenAI, it doesn’t just need old copyrighted materials; it needs current copyrighted materials to ensure that chatbot and other AI tools’ outputs “meet the needs of today’s citizens.”

Rights holders will likely be bracing throughout this confusing time, waiting for the Copyright Office’s reports. But once there is clarity, those reports could “be hugely consequential, weighing heavily in courts, as well as with lawmakers and regulators,” The Times reported.
This “smoking gun” killed the McDonald’s ice cream hackers’ startup
A little over three years have passed since McDonald’s sent out an email to thousands of its restaurant owners around the world that abruptly cut short the future of a three-person startup called Kytch—and with it, perhaps one of McDonald’s best chances for fixing its famously out-of-order ice cream machines.

Until then, Kytch had been selling McDonald’s restaurant owners a popular Internet-connected gadget designed to attach to their notoriously fragile and often broken soft-serve McFlurry dispensers, manufactured by McDonald’s equipment partner Taylor. The Kytch device would essentially hack into the ice cream machine’s internals, monitor its operations, and send diagnostic data over the Internet to an owner or manager to help keep it running. But despite Kytch’s efforts to solve the Golden Arches’ intractable ice cream problems, a McDonald’s email in November 2020 warned its franchisees not to use Kytch, stating that it represented a safety hazard for staff. Kytch says its sales dried up practically overnight.

Now, after years of litigation, the ice-cream-hacking entrepreneurs have unearthed evidence that they say shows that Taylor, the soft-serve machine maker, helped engineer McDonald’s Kytch-killing email—kneecapping the startup not because of any safety concern, but in a coordinated effort to undermine a potential competitor. And Taylor’s alleged order, as Kytch now describes it, came all the way from the top.

On Wednesday, Kytch filed a newly unredacted motion for summary adjudication in its lawsuit against Taylor for alleged trade libel, tortious interference, and other claims. The new motion, which replaces a redacted version from August, refers to internal emails Taylor released in the discovery phase of the lawsuit, which were quietly unsealed over the summer. The motion focuses in particular on one email from Timothy FitzGerald, the CEO of Taylor parent company Middleby, that appears to suggest that either Middleby or McDonald’s send a communication to McDonald’s franchise owners to dissuade them from using Kytch’s device.

“Not sure if there is anything we can do to slow up the franchise community on the other solution,” FitzGerald wrote on October 17, 2020. “Not sure what communication from either McD or Midd can or will go out.”

In their legal filing, the Kytch co-founders, of course, interpret “the other solution” to mean their product. In fact, FitzGerald’s message was sent in an email thread that included Middleby’s then COO, David Brewer, who had wondered earlier whether Middleby could instead acquire Kytch. Another Middleby executive responded to FitzGerald on October 17 to write that Taylor and McDonald’s had already met the previous day to discuss sending out a message to franchisees about McDonald’s lack of support for Kytch.

But Jeremy O’Sullivan, a Kytch co-founder, claims—and Kytch argues in its legal motion—that FitzGerald’s email nonetheless proves Taylor’s intent to hamstring a potential competitor. “It’s the smoking gun,” O’Sullivan says of the email. “He’s plotting our demise.”

Although FitzGerald’s email doesn’t actually order anyone to act against Kytch, the company’s motion argues that Taylor played a key role in what happened next. It’s an “ambiguous yet direct message to his underlings,” argues Melissa Nelson, Kytch’s other co-founder. “It’s just like a mafia boss giving coded instructions to his team to whack someone.”

On November 2, 2020, a little over two weeks after FitzGerald’s open-ended suggestion that perhaps a “communication” from McDonald’s or Middleby to franchisees could “slow up” adoption of “the other solution,” McDonald’s sent out its email blast cautioning restaurant owners not to use Kytch’s product.

The email stated that the Kytch gadget “allows complete access to all aspects of the equipment’s controller and confidential data”—meaning Taylor’s and McDonald’s data, not the restaurant owners’ data; that it “creates a potential very serious safety risk for the crew or technician attempting to clean or repair the machine”; and finally, that it could cause “serious human injury.” The email concluded with a warning in italics and bold: “McDonald’s strongly recommends that you remove the Kytch device from all machines and discontinue use.”