Anthropic did not immediately respond to Ars’ request for comment on how its guardrails currently work to prevent the alleged jailbreaks, but publishers appear satisfied with the current guardrails, given that they accepted the deal.
Whether AI training on lyrics is infringing remains unsettled
Now, the matter of whether Anthropic has strong enough guardrails to block allegedly harmful outputs is settled, Lee wrote, allowing the court to focus on arguments regarding “publishers’ request in their Motion for Preliminary Injunction that Anthropic refrain from using unauthorized copies of Publishers’ lyrics to train future AI models.”
“Whether generative AI companies can permissibly use copyrighted content to train LLMs without licenses,” Anthropic’s court filing said, “is currently being litigated in roughly two dozen copyright infringement cases around the country, none of which has sought to resolve the issue in the truncated posture of a preliminary injunction motion. It speaks volumes that no other plaintiff—including the parent company record label of one of the Plaintiffs in this case—has sought preliminary injunctive relief from this conduct.”
In a statement, Anthropic’s spokesperson told Ars that “Claude isn’t designed to be used for copyright infringement, and we have numerous processes in place designed to prevent such infringement.”
“Our decision to enter into this stipulation is consistent with those priorities,” Anthropic said. “We continue to look forward to showing that, consistent with existing copyright law, using potentially copyrighted material in the training of generative AI models is a quintessential fair use.”
This suit will likely take months to fully resolve, as the question of whether AI training is a fair use of copyrighted works is complex and remains hotly disputed in court. For Anthropic, the stakes could be high, with a loss potentially triggering more than $75 million in statutory damages, as well as an order possibly forcing Anthropic to reveal and destroy all the copyrighted works in its training data.
The Supreme Court signaled it may take up a case that could determine whether Internet service providers must terminate users who are accused of copyright infringement. In an order issued today, the court invited the Department of Justice’s solicitor general to file a brief “expressing the views of the United States.”
In Sony Music Entertainment v. Cox Communications, the major record labels argue that cable provider Cox should be held liable for failing to terminate users who were repeatedly flagged for infringement based on their IP addresses being connected to torrent downloads. The US Court of Appeals for the 4th Circuit issued a mixed ruling, affirming a jury’s finding that Cox was liable for willful contributory infringement but reversing the verdict on vicarious infringement “because Cox did not profit from its subscribers’ acts of infringement.”
That ruling vacated a $1 billion damages award and ordered a new damages trial. Cox and Sony are both seeking a Supreme Court review. Cox wants to overturn the finding of willful contributory infringement, while Sony wants to reinstate the $1 billion verdict.
The Supreme Court asking for US input on Sony v. Cox could be a precursor to the high court taking up the case. For example, the court last year asked the solicitor general to weigh in on Texas and Florida laws that restricted how social media companies can moderate their platforms. The court subsequently took up the case and vacated lower-court rulings, making it clear that content moderation is protected by the First Amendment.
“The public interest in the preservation of intellectual property rights weighs heavily against the injunction sought here, which would force Apple to distribute an app over the repeated and consistent objections of non-parties who allege their rights are infringed by the app,” Apple argued.
Unlike other free apps that continually play ads, Musi only serves ads when the app is initially opened, then allows uninterrupted listening. One Musi user also noted that Musi allows for an unlimited number of videos in a playlist, where YouTube caps playlists at 5,000 videos.
“Musi is the only playback system I have to play all 9k of my videos/songs in the same library,” the Musi fan said. “I honestly don’t just use Musi just cause it’s free. It has features no other app has, especially if you like to watch music videos while you listen to music.”
“Spotify isn’t cutting it,” one Reddit user whined.
“I hate Spotify,” another user agreed.
“I think of Musi every other day,” said a third user, who apparently lost the app after purchasing a new phone. “Since I got my new iPhone, I have to settle for other music apps just to get by (not enough, of course) to listen to music in my car driving. I will be patiently waiting once Musi is available to redownload.”
Some Musi fans who still have access gloat in the threads, while others warn the litigation could soon doom the app for everyone.
Musi continues, perhaps optimistically, to tell users that the app is coming back, reassuring anyone whose app was accidentally offloaded that their libraries remain linked through iCloud and will be restored if the app returns.
Some users buy into Musi’s promises, while others seem skeptical that Musi can take on Apple. To many users still clinging to their Musi app, updating their phones has become too risky until the litigation resolves.
“Please,” one Musi fan begged. “Musi come back!!!”
Music companies appeal, demanding payment for each song instead of each album.
The big three record labels notched another court victory against a broadband provider last month, but the labels aren’t happy that the appeals court allowed only per-album damages instead of damages for each song.
Universal, Warner, and Sony are seeking an en banc rehearing of the copyright infringement case, claiming that Internet service provider Grande Communications should have to pay per-song damages over its failure to terminate the accounts of Internet users accused of piracy. The decision to make Grande pay for each album instead of each song “threatens copyright owners’ ability to obtain fair damages,” said the record labels’ petition filed last week.
The case is in the conservative-leaning US Court of Appeals for the 5th Circuit. A three-judge panel unanimously ruled last month that Grande, a subsidiary of Astound Broadband, violated the law by failing to terminate subscribers accused of being repeat infringers. Subscribers were flagged for infringement based on their IP addresses being connected to torrent downloads monitored by Rightscorp, a copyright-enforcement company used by the music labels.
The one good part of the ruling for Grande is that the 5th Circuit ordered a new trial on damages because it said a $46.8 million award was too high. Appeals court judges found that the district court “erred in granting JMOL [judgment as a matter of law] that each of the 1,403 songs in suit was eligible for a separate award of statutory damages.” The damages were $33,333 per song.
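The arithmetic behind those figures is straightforward (a back-of-envelope check using the numbers reported above):

```latex
\[
1{,}403 \text{ songs} \times \$33{,}333 \text{ per song} \approx \$46{,}766{,}199 \approx \$46.8 \text{ million}
\]
```

Under the per-album reading, the same per-work figure would instead be multiplied by the far smaller number of albums, which is why a new damages trial is expected to produce a lower award.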
Record labels want the per-album portion of the ruling reversed while leaving the rest of it intact.
All parts of an album “constitute one work”
The Copyright Act says that “all the parts of a compilation or derivative work constitute one work,” the 5th Circuit panel noted. The panel concluded that “the statute unambiguously instructs that a compilation is eligible for only one statutory damage award, whether or not its constituent works are separately copyrightable.”
When there is a choice “between policy arguments and the statutory text—no matter how sympathetic the plight of the copyright owners—the text must prevail,” the ruling said. “So, the strong policy arguments made by Plaintiffs and their amicus are best directed at Congress.”
Record labels say the panel got it wrong, arguing that the “one work” portion of the law “serves to prevent a plaintiff from alleging and proving infringement of the original authorship in a compilation (e.g., the particular selection, coordination, or arrangement of preexisting materials) and later arguing that it should be entitled to collect separate statutory damages awards for each of the compilation’s constituent parts. That rule should have no bearing on this case, where Plaintiffs alleged and proved the infringement of individual sound recordings, not compilations.”
Record labels say that six other US appeals courts “held that Section 504(c)(1) authorizes a separate statutory damages award for each infringed copyrightable unit of expression that was individually commercialized by its copyright owner,” though several of those cases involved non-musical works such as clip-art images, photos, and TV episodes.
Music companies say the per-album decision prevents them from receiving “fair damages” because “sound recordings are primarily commercialized (and generate revenue for copyright owners) as individual tracks, not as parts of albums.” The labels also complained of what they call “a certain irony to the panel’s decision,” because “the kind of rampant peer-to-peer infringement at issue in this case was a primary reason that record companies had to shift their business models from selling physical copies of compilations (albums) to making digital copies of recordings available on an individual basis (streaming/downloading).”
Record labels claim the panel “inverted the meaning” of the statutory text “and turned a rule designed to ensure that compilation copyright owners do not obtain statutory damages windfalls into a rule that prevents copyright owners of individual works from obtaining just compensation.” The petition continued:
The practical implications of the panel’s rule are stark. For example, if an infringer separately downloads the recordings of four individual songs that so happened at any point in time to have been separately selected for and included among the ten tracks on a particular album, the panel’s decision would permit the copyright owner to collect only one award of statutory damages for the four recordings collectively. That would be so even if there were unrebutted trial evidence that the four recordings were commercialized individually by the copyright owner. This outcome is wholly unsupported by the text of the Copyright Act.
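To put the petition’s hypothetical in numbers, here is the same example expressed with the Copyright Act’s statutory-damages range (the range is from Section 504(c); the four-song scenario is the petition’s):

```latex
\[
\underbrace{4 \times w}_{\text{one award per song}} \quad \text{vs.} \quad \underbrace{1 \times w}_{\text{one award per album}},
\qquad w \in [\$750,\ \$30{,}000] \ (\text{up to } \$150{,}000 \text{ if willful})
\]
```

On the panel’s reading, the labels’ potential recovery for those four downloads shrinks by a factor of four.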
ISP wants to overturn underlying ruling
Grande also filed a petition for rehearing because it wants to escape liability, whether for each song or each album. A rehearing would be in front of all the court’s judges.
“Providing Internet service is not actionable conduct,” Grande argued. “The Panel’s decision erroneously permits contributory liability to be based on passive, equivocal commercial activity: the provision of Internet access.”
Grande cited Supreme Court decisions in MGM Studios v. Grokster and Twitter v. Taamneh. “Nothing in Grokster permits inferring culpability from a defendant’s failure to stop infringement,” Grande wrote. “And Twitter makes clear that providing online platforms or services for the exchange of information, even if the provider knows of misuse, is not sufficiently culpable to support secondary liability. This is because supplying the ‘infrastructure’ for communication in a way that is ‘agnostic as to the nature of the content’ is not ‘active, substantial assistance’ for any unlawful use.”
This isn’t the only important case in the ongoing battle between copyright owners and broadband providers, which could have dramatic effects on Internet access for individuals accused of piracy.
ISPs, labels want Supreme Court to weigh in
ISPs don’t want to be held liable when their subscribers violate copyright law and argue that they shouldn’t have to conduct mass terminations of Internet users based on mere accusations of piracy. ISPs say that copyright-infringement notices sent on behalf of record labels aren’t accurate enough to justify such terminations.
Digital rights groups have supported ISPs in these cases, arguing that turning ISPs into copyright cops would be bad for society and disconnect people who were falsely accused or were just using the same Internet connection as an infringer.
The broadband and music industries are waiting to learn whether the Supreme Court will take up a challenge by cable firm Cox Communications, which wants to overturn a ruling in a copyright infringement lawsuit brought by Sony. In that case, the US Court of Appeals for the 4th Circuit affirmed a jury’s finding that Cox was liable for willful contributory infringement, but vacated a $1 billion damages award and ordered a new damages trial. Record labels also petitioned the Supreme Court because they want the $1 billion verdict reinstated.
Cox has said that the 4th Circuit ruling “would force ISPs to terminate Internet service to households or businesses based on unproven allegations of infringing activity, and put them in a position of having to police their networks… Terminating Internet service would not just impact the individual accused of unlawfully downloading content, it would kick an entire household off the Internet.”
Four other large ISPs told the Supreme Court that the legal question presented by the case “is exceptionally important to the future of the Internet.” They called the copyright-infringement notices “famously flawed” and said mass terminations of Internet users who are subject to those notices “would harm innocent people by depriving households, schools, hospitals, and businesses of Internet access.”
ISP Grande loses appeal as 5th Circuit sides with Universal, Warner, and Sony.
Record labels notched another court victory against a broadband provider that refused to terminate the accounts of Internet users accused of piracy. In a ruling on Wednesday, the conservative-leaning US Court of Appeals for the 5th Circuit sided with the big three record labels against Grande Communications, a subsidiary of Astound Broadband.
The appeals court ordered a new trial on damages because it said the $46.8 million award was too high, but affirmed the lower court’s finding that Grande is liable for contributory copyright infringement.
“Here, Plaintiffs [Universal, Warner, and Sony] proved at trial that Grande knew (or was willfully blind to) the identities of its infringing subscribers based on Rightscorp’s notices, which informed Grande of specific IP addresses of subscribers engaging in infringing conduct. But Grande made the choice to continue providing services to them anyway, rather than taking simple measures to prevent infringement,” said the unanimous ruling by three judges.
Rightscorp is a copyright-enforcement company used by the music labels to detect copyright infringement. The company monitors torrent downloads to find users’ IP addresses and sends infringement notices to Internet providers that serve subscribers using those IP addresses.
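In outline, the matching step is simple. Below is a minimal, hypothetical sketch of such a notice pipeline; the data, the IP-to-ISP mapping, and the one-notice-per-sighting rule are assumptions for illustration, not Rightscorp’s actual system:

```python
from collections import defaultdict

# Hypothetical sketch of the notice pipeline described above, not
# Rightscorp's actual system. An observer joins torrent swarms for
# monitored works and records which peer IPs are sharing them.
swarm_sightings = [
    ("203.0.113.7", "Song A"),   # (peer IP, monitored work)
    ("203.0.113.7", "Song B"),
    ("198.51.100.9", "Song A"),
]
# Which ISP announces each IP block (illustrative mapping).
ip_to_isp = {"203.0.113.7": "Grande", "198.51.100.9": "OtherISP"}

notices = defaultdict(list)
for ip, work in swarm_sightings:
    isp = ip_to_isp.get(ip)
    if isp:
        # One notice per sighting goes to the ISP; only the ISP can map
        # the IP address back to a subscriber account.
        notices[isp].append({"ip": ip, "work": work})

for isp, batch in notices.items():
    print(f"send {len(batch)} infringement notice(s) to {isp}")
```

Note that the monitor never learns who was at the keyboard; it only ties an IP address to a download, which is why ISPs dispute the reliability of these notices.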
“The evidence at trial demonstrated that Grande had a simple measure available to it to prevent further damages to copyrighted works (i.e., terminating repeat infringing subscribers), but that Grande never took it,” the 5th Circuit ruling said. “On appeal, Grande and its amici make a policy argument—that terminating Internet services is not a simple measure, but instead a ‘draconian overreaction’ that is a ‘drastic and overbroad remedy’—but a reasonable jury could, and did, find that Grande had basic measures, including termination, available to it. And because Grande does not dispute any of the evidence on which Plaintiffs relied to prove material contribution, there is no basis to conclude a reasonable jury lacked sufficient evidence to reach that conclusion.”
Grande’s pre-lawsuit policy: No terminations
The ruling described how Grande implemented a new policy on copyright infringement in 2010, a year after being purchased by a private equity firm:
Under Grande’s new policy, Grande no longer terminated subscribers for copyright infringement, no matter how many infringement notices Grande received. As Grande’s corporate representative at trial admitted, Grande “could have received a thousand notices about a customer, and it would not have terminated that customer for copyright infringement.”
Further, under Grande’s new policy, Grande did not take other remedial action to address infringing subscribers, such as suspending their accounts or requiring them to contact Grande to maintain their services. Instead, Grande would notify subscribers of copyright infringement complaints through letters that described the nature of the complaint and possible causes and advised that any infringing conduct is unlawful and should cease. Grande maintained that policy for nearly seven years, until May 2017.
The record labels sued Grande in April 2017. “It was not until after Plaintiffs initiated this lawsuit that Grande resumed terminating subscribers for copyright infringement,” the ruling said.
In November 2022, the labels were awarded $46,766,200 in statutory damages by a jury in US District Court for the Western District of Texas. But the District Court will have to hold a new damages trial following this week’s appeals court ruling.
Back in 2020, we wrote about the voir dire questions that record labels intended to ask prospective jurors in their case against Grande. One of those questions was, “Have you ever read or visited Ars Technica or TorrentFreak?”
Damages to be reduced
Although the 5th Circuit agreed that Grande is liable for contributory copyright infringement, judges found that the lower court “erred in granting JMOL [judgment as a matter of law] that each of the 1,403 songs in suit was eligible for a separate award of statutory damages.” The damages were $33,333 per song.
The 5th Circuit remanded the case to the district court for a new trial on damages. Record labels can expect a lower payout because the appeals court said they can’t obtain separate damages awards for multiple songs on the same album.
“The district court determined that each of Plaintiffs’ 1,403 sound recordings that was infringed entitled Plaintiffs to an individual statutory damages award,” the 5th Circuit said. “Grande contends that the text of the Copyright Act requires a different result: Whenever more than one of those recordings appeared on the same album, Plaintiffs are entitled to only one statutory damages award for that album, regardless of how many individual recordings from the album were infringed. Grande has the better reading of the text of the statute.”
The Copyright Act says that “all the parts of a compilation or derivative work constitute one work,” the court said. In the Grande case, record labels sought damages for each song but conceded that “each album constitutes a compilation.”
“In sum, the record evidence indicates that many of the works in suit are compilations (albums) comprising individual works (songs),” the 5th Circuit court wrote. “The statute unambiguously instructs that a compilation is eligible for only one statutory damage award, whether or not its constituent works are separately copyrightable.”
Larger battle could head to Supreme Court
The Grande case is part of a larger battle between ISPs and copyright holders. The industries are waiting to learn whether the Supreme Court will take up a challenge by cable firm Cox Communications, which wants to overturn a ruling in a similar copyright infringement lawsuit brought by Sony.
The US Court of Appeals for the 4th Circuit affirmed a jury’s finding that Cox was liable for willful contributory infringement, though it also vacated a $1 billion damages award because it found that “Cox did not profit from its subscribers’ acts of infringement.” Cox and other ISPs argue that copyright-infringement notices sent on behalf of record labels aren’t reliable and that forcing ISPs to disconnect users based on unproven piracy accusations will cause great harm.
A Supreme Court brief filed by Altice USA, Frontier Communications, Lumen (aka CenturyLink), and Verizon said the 4th Circuit ruling “imperils the future of the Internet” by “expos[ing] Internet service providers to massive liability if they do not carry out mass Internet evictions.” Cutting off a subscriber’s service would hurt other residents in a home “who did not infringe and may have no connection to the infringer,” they wrote.
Cox told the Supreme Court that ISPs “have no way of verifying whether a bot-generated notice is accurate. And no one can reliably identify the actual individual who used a particular Internet connection for an illegal download. The ISP could connect the IP address to a particular subscriber’s account, but the subscriber in question might be a university or a conference center with thousands of individual users on its network, or a grandmother who unwittingly left her Internet connection open to the public. Thus, the subscriber is often not the infringer and may not even know about the infringement.”
Cox asked the Supreme Court to decide whether the 4th Circuit “err[ed] in holding that a service provider can be held liable for ‘materially contributing’ to copyright infringement merely because it knew that people were using certain accounts to infringe and did not terminate access, without proof that the service provider affirmatively fostered infringement or otherwise intended to promote it.”
Record labels also petitioned the Supreme Court because they want the original $1 billion verdict reinstated. Digital rights groups such as the Electronic Frontier Foundation (EFF) have backed Cox, saying that forcing ISPs to terminate subscribers accused of piracy “would result in innocent and vulnerable users losing essential Internet access.”
Four more large Internet service providers told the US Supreme Court this week that ISPs shouldn’t be forced to aggressively police copyright infringement on broadband networks.
While the ISPs worry about financial liability from lawsuits filed by major record labels and other copyright holders, they also argue that mass terminations of Internet users accused of piracy “would harm innocent people by depriving households, schools, hospitals, and businesses of Internet access.” The legal question presented by the case “is exceptionally important to the future of the Internet,” they wrote in a brief filed with the Supreme Court on Monday.
The amici curiae brief was filed by Altice USA (operator of the Optimum brand), Frontier Communications, Lumen (aka CenturyLink), and Verizon. The brief supports cable firm Cox Communications’ attempt to overturn its loss in a copyright infringement lawsuit brought by Sony. Cox petitioned the Supreme Court to take up the case last month.
Sony and other music copyright holders sued Cox in 2018, claiming it didn’t adequately fight piracy on its network and failed to terminate repeat infringers. A US District Court jury in the Eastern District of Virginia ruled in December 2019 that Cox must pay $1 billion in damages to the major record labels.
Cox won a partial victory when the US Court of Appeals for the 4th Circuit vacated the $1 billion verdict, finding that Cox wasn’t guilty of vicarious infringement because it did not profit directly from infringement committed by users of its cable broadband network. But the appeals court affirmed the jury’s finding of willful contributory infringement and ordered a new damages trial.
Future of Internet at stake, ISPs say
The Altice/Frontier/Lumen/Verizon brief said the 4th Circuit ruling “imperils the future of the Internet” by “expos[ing] Internet service providers to massive liability if they do not carry out mass Internet evictions.” Cutting off a subscriber’s service would hurt other residents in a home “who did not infringe and may have no connection to the infringer,” they wrote.
The automated processes used by copyright holders to find infringement on peer-to-peer networks are “famously flawed,” ISPs wrote. Despite that, the appeals court’s “view of contributory infringement would force Internet service providers to cut off any subscriber after receiving allegations that some unknown person used the subscriber’s connection for copyright infringement,” the brief said.
Under the 4th Circuit’s theory, “an Internet service provider acts culpably whenever it knowingly fails to stop some bad actor from exploiting its service,” the brief said. According to the ISPs, this “would compel Internet service providers to engage in wide-scale terminations to avoid facing crippling damages, like the $1 billion judgment entered against Cox here, the $2.6 billion damages figure touted by these same plaintiffs in a recent suit against Verizon, or the similarly immense figures sought from Frontier and Altice USA.”
Potential liability for ISPs is up to $150,000 in statutory damages for each work that is infringed, the brief said. “Enterprising plaintiffs’ lawyers could seek to hold Internet service providers liable for every bad act that occurs online,” they wrote. This threat of financial liability detracts from the ISPs’ attempts “to fulfill Congress’s goal of connecting all Americans to the Internet,” the ISPs said.
Artists pursuing a class-action lawsuit are claiming a major win this week in their fight to stop the most sophisticated AI image generators from copying billions of artworks to train AI models and replicate their styles without compensating artists.
In an order on Monday, US District Judge William Orrick denied key parts of motions to dismiss from Stability AI, Midjourney, Runway AI, and DeviantArt. The court will now allow artists to proceed with discovery on claims that AI image generators relying on Stable Diffusion violate both the Copyright Act and the Lanham Act, which protects artists from commercial misuse of their names and unique styles.
“We won BIG,” an artist plaintiff, Karla Ortiz, wrote on X (formerly Twitter), celebrating the order. “Not only do we proceed on our copyright claims,” but “this order also means companies who utilize” Stable Diffusion models and LAION-like datasets that scrape artists’ works for AI training without permission “could now be liable for copyright infringement violations, amongst other violations.”
Lawyers for the artists, Joseph Saveri and Matthew Butterick, told Ars that artists suing “consider the Court’s order a significant step forward for the case,” as “the Court allowed Plaintiffs’ core copyright-infringement claims against all four defendants to proceed.”
Stability AI was the only company that responded to Ars’ request for comment, and it declined to comment.
Artists prepare to defend their livelihoods from AI
To get to this stage of the suit, artists had to amend their complaint to better explain exactly how AI image generators allegedly train on artists’ images and copy artists’ styles.
For example, they were told that if they “contend Stable Diffusion contains ‘compressed copies’ of the Training Images, they need to define ‘compressed copies’ and explain plausible facts in support. And if plaintiffs’ compressed copies theory is based on a contention that Stable Diffusion contains mathematical or statistical methods that can be carried out through algorithms or instructions in order to reconstruct the Training Images in whole or in part to create the new Output Images, they need to clarify that and provide plausible facts in support,” Orrick wrote.
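The “compressed copies” dispute is easier to follow with a toy example. The sketch below is an analogy only, assuming nothing about Stable Diffusion’s actual architecture: it shows how a model that stores only a compact statistical summary of structured training data can still reconstruct individual training items nearly exactly.

```python
import numpy as np

# Toy analogy for the "compressed copies" theory described above; this is
# simple low-rank compression, NOT how Stable Diffusion actually works.
rng = np.random.default_rng(0)
latent = rng.normal(size=(100, 8))            # hidden shared structure
images = latent @ rng.normal(size=(8, 64))    # 100 fake "images", 64 pixels each

# "Train": keep only a 16-number code per image plus a shared basis,
# far less storage than the original 100 x 64 pixel grid.
mean = images.mean(axis=0)
U, S, Vt = np.linalg.svd(images - mean, full_matrices=False)
basis = Vt[:16]                               # shared 16 x 64 basis
codes = (images - mean) @ basis.T             # 100 x 16 compressed codes

# "Generate": the compact summary reconstructs the training images
# almost exactly because the data had shared structure.
reconstructed = codes @ basis + mean
err = np.linalg.norm(images - reconstructed) / np.linalg.norm(images)
print(f"relative reconstruction error: {err:.2e}")  # near machine precision here
```

Plaintiffs’ burden, per Orrick, was to plead plausibly that something loosely analogous happens inside Stable Diffusion; whether it actually does is now a question for discovery.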
To keep their fight alive, the artists pored through academic articles to support their arguments that “Stable Diffusion is built to a significant extent on copyrighted works and that the way the product operates necessarily invokes copies or protected elements of those works.” Orrick agreed that their amended complaint drew plausible inferences, which “at this juncture” is enough to support claims “that Stable Diffusion by operation by end users creates copyright infringement and was created to facilitate that infringement by design.”
“Specifically, the Court found Plaintiffs’ theory that image-diffusion models like Stable Diffusion contain compressed copies of their datasets to be plausible,” Saveri and Butterick’s statement to Ars said. “The Court also found it plausible that training, distributing, and copying such models constitute acts of copyright infringement.”
Not all of the artists’ claims survived, with Orrick granting motions to dismiss claims alleging that AI companies removed copyright management information from artworks in violation of the Digital Millennium Copyright Act (DMCA). Because artists failed to show evidence of defendants altering or stripping this information, they must permanently drop the DMCA claims.
Part of Orrick’s decision on the DMCA claims, however, indicates that the legal basis for dismissal is “unsettled,” with Orrick simply agreeing with Stability AI’s argument that “because the output images are admittedly not identical to the Training Images, there can be no liability for any removal of CMI that occurred during the training process.”
Ortiz wrote on X that she respectfully disagreed with that part of the decision but expressed enthusiasm that the court allowed artists to proceed with false endorsement claims, alleging that Midjourney violated the Lanham Act.
Five artists successfully argued that because “their names appeared on the list of 4,700 artists posted by Midjourney’s CEO on Discord” and that list was used to promote “the various styles of artistic works its AI product could produce,” this plausibly created confusion over whether those artists had endorsed Midjourney.
“Whether or not a reasonably prudent consumer would be confused or misled by the Names List and showcase to conclude that the included artists were endorsing the Midjourney product can be tested at summary judgment,” Orrick wrote. “Discovery may show that it is or that it is not.”
While Orrick agreed with Midjourney that “plaintiffs have no protection over ‘simple, cartoony drawings’ or ‘gritty fantasy paintings,'” artists were able to advance a “trade dress” claim under the Lanham Act, too. This is because Midjourney allegedly “allows users to create works capturing the ‘trade dress of each of the Midjourney Named Plaintiffs [that] is inherently distinctive in look and feel as used in connection with their artwork and art products.'”
As discovery proceeds in the case, artists will also have an opportunity to amend dismissed claims of unjust enrichment. According to Orrick, their next amended complaint will be their last chance to prove that AI companies have “deprived plaintiffs ‘the benefit of the value of their works.'”
Saveri and Butterick confirmed that “though the Court dismissed certain supplementary claims, Plaintiffs’ central claims will now proceed to discovery and trial.” On X, Ortiz suggested that the artists’ case is “now potentially one of THE biggest copyright infringement and trade dress cases ever!”
“Looking forward to the next stage of our fight!” Ortiz wrote.
Thousands of anime fans were shocked Thursday when the popular piracy site Animeflix voluntarily shut down without explaining why, TorrentFreak reported.
“It is with a heavy heart that we announce the closure of Animeflix,” the site’s operators told users in a Discord with 35,000 members. “After careful consideration, we have decided to shut down our service effective immediately. We deeply appreciate your support and enthusiasm over the years.”
Prior to its shutdown, Animeflix attracted millions of monthly visits, TorrentFreak reported. It was preferred by some anime fans for its clean interface, with one fan on Reddit describing Animeflix as the “Netflix of anime.”
“Deadass this site was clean,” one Reddit user wrote. “The best I’ve ever seen. Sad to see it go.”
Although Animeflix operators did not connect the dots for users, TorrentFreak suggested that the piracy site chose to shut down after facing “considerable legal pressure in recent months.”
Back in December, an anti-piracy group, the Alliance for Creativity and Entertainment (ACE), sought to shut down Animeflix. Then in mid-May, rightsholders—including Netflix, Disney, Universal, Paramount, and Warner Bros.—won an injunction through the High Court of India against several piracy sites, including Animeflix. This briefly made the site unavailable until it simply switched to another domain and continued serving users, TorrentFreak reported.
Although Animeflix is not telling users why it’s choosing to shut down now, TorrentFreak—which, as its name suggests, focuses much of its coverage on copyright issues impacting file sharing online—noted that “when a pirate site shuts down, voluntarily or not, copyright issues typically play a role.”
For anime fans, the abrupt closure was disappointing because the hottest new anime titles can be difficult to access, with delays as studios work to offer translations for various regions. The delays are so bad that some studios are considering combating piracy by using AI to push out translated versions more quickly. But fans fear this will only result in low-quality subtitles, CBR reported.
On Reddit, some fans also complained after relying exclusively on Animeflix to keep track of where they left off on anime shows that often span hundreds of episodes.
Others begged to be turned onto other anime piracy sites, while some speculated whether Animeflix might eventually pop up at a new domain. TorrentFreak noted that Animeflix shut down once previously several years ago but ultimately came back. One Redditor wrote, “another hero has passed away but the will, will be passed.” On another Reddit thread asking “will Animeflix be gone forever or maybe create a new site,” one commenter commiserated, writing, “We don’t know for sure. Only time will tell.”
It’s also possible that someone else may pick up the torch and operate a new piracy site under the same name. According to TorrentFreak, this is “likely.”
Animeflix did not reassure users that it may be back, instead urging them to find other sources for their favorite shows and movies.
“We hope the joy and excitement of anime continue to brighten your days through other wonderful platforms,” Animeflix’s Discord message said.
ACE did not immediately respond to Ars’ request for comment.
Book authors are suing Nvidia, alleging that the chipmaker’s AI platform NeMo—used to power customized chatbots—was trained on a controversial dataset that illegally copied and distributed their books without their consent.
In a proposed class action, novelists Abdi Nazemian (Like a Love Story), Brian Keene (Ghost Walk), and Stewart O’Nan (Last Night at the Lobster) argued that Nvidia should pay damages and destroy all copies of the Books3 dataset used to power NeMo large language models (LLMs).
The Books3 dataset, novelists argued, copied “all of Bibliotik,” a shadow library of approximately 196,640 pirated books. Initially shared through the AI community Hugging Face, the Books3 dataset today “is defunct and no longer accessible due to reported copyright infringement,” the Hugging Face website says.
According to the authors, Hugging Face removed the dataset last October, but not before AI companies like Nvidia grabbed it and “made multiple copies.” By training NeMo models on this dataset, the authors alleged that Nvidia “violated their exclusive rights under the Copyright Act.” The authors argued that the US district court in San Francisco must intervene and stop Nvidia because the company “has continued to make copies of the Infringed Works for training other models.”
A Hugging Face spokesperson clarified to Ars that “Hugging Face never removed this dataset, and we did not host the Books3 dataset on the Hub.” Instead, “Hugging Face hosted a script that downloads the data from The Eye, which is the place where Eleuther hosted the data,” until “Eleuther removed the data from The Eye” over copyright concerns, causing the dataset script on Hugging Face to break.
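That hosting arrangement explains how the dataset “broke.” Below is a minimal, hypothetical sketch of the loader-script pattern the spokesperson described; the URL and filename are placeholders, not the real Books3 locations:

```python
import urllib.request

# Hypothetical sketch of the pattern described above: the hub hosts only
# this loader script, while the actual data lives on a third-party host.
# The URL is a placeholder, not the real Books3 location.
DATA_URL = "https://example.org/books3.tar.gz"

def download_books3(dest="books3.tar.gz"):
    # If the third party removes the file, this call fails and the hub's
    # "dataset" breaks, even though the hub itself removed nothing.
    urllib.request.urlretrieve(DATA_URL, dest)

if __name__ == "__main__":
    try:
        download_books3()
    except OSError as err:
        print(f"remote data unavailable: {err}")
```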
Nvidia did not immediately respond to Ars’ request to comment.
Demanding a jury trial, authors are hoping the court will rule that Nvidia has no possible defense for both allegedly violating copyrights and intending “to cause further infringement” by distributing NeMo models “as a base from which to build further models.”
AI developers decreasing transparency amid suits
The class action was filed by the same legal team representing authors suing OpenAI, whose lawsuit recently saw many claims dismissed, but crucially not their claim of direct copyright infringement. Lawyers told Ars last month that authors would be amending their complaints against OpenAI and were “eager to move forward and litigate” their direct copyright infringement claim.
In that lawsuit, the authors alleged copyright infringement both when OpenAI trained LLMs and when chatbots referenced books in outputs. But authors seemed more concerned about alleged damages from chatbot outputs, warning that AI tools had an “uncanny ability to generate text similar to that found in copyrighted textual materials, including thousands of books.”
Uniquely, in the Nvidia suit, the authors focus exclusively on Nvidia’s training data. They appear concerned that Nvidia could empower businesses to create any number of AI models based on the controversial dataset, allegedly infringing the works of thousands of authors through training alone.
There’s no telling yet how courts will rule on the direct copyright claims in either lawsuit—or in the New York Times’ lawsuit against OpenAI—but so far, OpenAI has failed to convince courts to toss claims aside.
However, OpenAI doesn’t appear very shaken by the lawsuits. In February, OpenAI said that it expected to beat book authors’ direct copyright infringement claim at a “later stage” of the case and, most recently in the New York Times case, tried to convince the court that NYT “hacked” ChatGPT to “set up” the lawsuit.
And Microsoft, a co-defendant in the NYT lawsuit, even more recently introduced a new argument that could help tech companies defeat copyright suits over LLMs. Last month, Microsoft argued that The New York Times was attempting to stop a “groundbreaking new technology” and would fail, just like movie producers attempting to kill off the VCR in the 1980s.
“Despite The Times’s contentions, copyright law is no more an obstacle to the LLM than it was to the VCR (or the player piano, copy machine, personal computer, Internet, or search engine),” Microsoft wrote.
In December, Hugging Face’s machine learning and society lead, Yacine Jernite, noted that developers appeared to be growing less transparent about training data after copyright lawsuits raised red flags about companies using the Books3 dataset, “especially for commercial models.”
Meta, for example, “limited the amount of information [it] disclosed about” its LLM, Llama-2, “to a single paragraph description and one additional page of safety and bias analysis—after [its] use of the Books3 dataset when training the first Llama model was brought up in a copyright lawsuit,” Jernite wrote.
Jernite warned that AI models lacking transparency could hinder “the ability of regulatory safeguards to remain relevant as training methods evolve, of individuals to ensure that their rights are respected, and of open science and development to play their role in enabling democratic governance of new technologies.” To support “more accountability,” Jernite recommended “minimum meaningful public transparency standards to support effective AI regulation,” as well as companies providing options for anyone to opt out of their data being included in training data.
“More data transparency supports better governance and fosters technology development that more reliably respects peoples’ rights,” Jernite wrote.
OpenAI is now boldly claiming that The New York Times “paid someone to hack OpenAI’s products” like ChatGPT to “set up” a lawsuit against the leading AI maker.
In a court filing Monday, OpenAI alleged that “100 examples in which some version of OpenAI’s GPT-4 model supposedly generated several paragraphs of Times content as outputs in response to user prompts” do not reflect how normal people use ChatGPT.
Instead, it allegedly took The Times “tens of thousands of attempts to generate” these supposedly “highly anomalous results” by “targeting and exploiting a bug” that OpenAI claims it is now “committed to addressing.”
According to OpenAI, this activity amounts to “contrived attacks” by a “hired gun”—who allegedly hacked OpenAI models until they hallucinated fake NYT content or regurgitated training data to replicate NYT articles. NYT allegedly paid for these “attacks” to gather evidence to support The Times’ claims that OpenAI’s products imperil its journalism by allegedly regurgitating reporting and stealing The Times’ audiences.
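Stripped of the rhetoric, the technique at issue is a standard memorization probe. Here is a hedged sketch of how such a probe works; generate() is a stand-in for any chatbot call and is an assumption, not OpenAI’s API:

```python
from difflib import SequenceMatcher

# Hedged sketch of a memorization probe like the one described above.
# `generate` is a placeholder for any LLM call, not OpenAI's actual API.
def regurgitation_score(generate, article_text, prompt_chars=500):
    prompt = article_text[:prompt_chars]        # feed the article's opening
    reference = article_text[prompt_chars:]     # the text we hope NOT to see
    continuation = generate(prompt)
    # How closely does the model's continuation track the original article?
    return SequenceMatcher(None, continuation, reference).ratio()

# Per the filing, such probes may need tens of thousands of attempts across
# many articles before a handful of high-scoring "regurgitations" turn up.
```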
“Contrary to the allegations in the complaint, however, ChatGPT is not in any way a substitute for a subscription to The New York Times,” OpenAI argued in a motion that seeks to dismiss the majority of The Times’ claims. “In the real world, people do not use ChatGPT or any other OpenAI product for that purpose. Nor could they. In the ordinary course, one cannot use ChatGPT to serve up Times articles at will.”
In the filing, OpenAI described The Times as enthusiastically reporting on its chatbot developments for years without raising any concerns about copyright infringement. OpenAI claimed that it disclosed that The Times’ articles were used to train its AI models in 2020, but The Times only cared after ChatGPT’s popularity exploded after its debut in 2022.
According to OpenAI, “It was only after this rapid adoption, along with reports of the value unlocked by these new technologies, that the Times claimed that OpenAI had ‘infringed its copyright[s]’ and reached out to demand ‘commercial terms.’ After months of discussions, the Times filed suit two days after Christmas, demanding ‘billions of dollars.'”
Ian Crosby, Susman Godfrey partner and lead counsel for The New York Times, told Ars that “what OpenAI bizarrely mischaracterizes as ‘hacking’ is simply using OpenAI’s products to look for evidence that they stole and reproduced The Times’s copyrighted works. And that is exactly what we found. In fact, the scale of OpenAI’s copying is much larger than the 100-plus examples set forth in the complaint.”
Crosby told Ars that OpenAI’s filing notably “doesn’t dispute—nor can they—that they copied millions of The Times’ works to build and power its commercial products without our permission.”
“Building new products is no excuse for violating copyright law, and that’s exactly what OpenAI has done on an unprecedented scale,” Crosby said.
OpenAI argued that the court should dismiss claims alleging direct copyright infringement, contributory infringement, Digital Millennium Copyright Act violations, and misappropriation, all of which it describes as “legally infirm.” Some fail because they are time-barred—seeking damages on training data for OpenAI’s older models—OpenAI claimed. Others allegedly fail because they misunderstand fair use or are preempted by federal laws.
If OpenAI’s motion is granted, the case would be substantially narrowed.
“OpenAI, which has been secretive and has deliberately concealed how its products operate, is now asserting it’s too late to bring a claim for infringement or hold them accountable. We disagree,” Crosby told Ars. “It’s noteworthy that OpenAI doesn’t dispute that it copied Times works without permission within the statute of limitations to train its more recent and current models.”
OpenAI did not immediately respond to Ars’ request to comment.
A federal appeals court today overturned a $1 billion piracy verdict that a jury handed down against cable Internet service provider Cox Communications in 2019. Judges rejected Sony’s claim that Cox profited directly from copyright infringement committed by users of Cox’s cable broadband network.
Appeals court judges didn’t let Cox off the hook entirely, but they vacated the damages award and ordered a new damages trial, which will presumably result in a significantly smaller amount to be paid to Sony and other copyright holders. Universal and Warner are also plaintiffs in the case.
“We affirm the jury’s finding of willful contributory infringement,” said a unanimous decision by a three-judge panel at the US Court of Appeals for the 4th Circuit. “But we reverse the vicarious liability verdict and remand for a new trial on damages because Cox did not profit from its subscribers’ acts of infringement, a legal prerequisite for vicarious liability.”
If the correct legal standard had been used in the district court, “no reasonable jury could find that Cox received a direct financial benefit from its subscribers’ infringement of Plaintiffs’ copyrights,” judges wrote.
The case began when Sony and other music copyright holders sued Cox, claiming that it didn’t adequately fight piracy on its network and failed to terminate repeat infringers. A US District Court jury in the Eastern District of Virginia found the ISP liable for infringement of 10,017 copyrighted works.
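Taken together, those two figures imply the average award per work (a back-of-envelope calculation from the numbers in this case):

```latex
\[
\frac{\$1{,}000{,}000{,}000}{10{,}017 \text{ works}} \approx \$99{,}830 \text{ per work}
\]
```

That is well under the $150,000 statutory maximum per work for willful infringement, the ceiling ISPs cite elsewhere in this fight.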
Copyright owners want ISPs to disconnect users
Cox’s appeal was supported by advocacy groups concerned that the big-money judgment could force ISPs to disconnect more Internet users based merely on accusations of copyright infringement. Groups such as the Electronic Frontier Foundation also called the ruling legally flawed.
“When these music companies sued Cox Communications, an ISP, the court got the law wrong,” the EFF wrote in 2021. “It effectively decided that the only way for an ISP to avoid being liable for infringement by its users is to terminate a household or business’s account after a small number of accusations—perhaps only two. The court also allowed a damages formula that can lead to nearly unlimited damages, with no relationship to any actual harm suffered. If not overturned, this decision will lead to an untold number of people losing vital Internet access as ISPs start to cut off more and more customers to avoid massive damages.”
In today’s 4th Circuit ruling, appeals court judges wrote that “Sony failed, as a matter of law, to prove that Cox profits directly from its subscribers’ copyright infringement.”
A defendant may be vicariously liable for a third party’s copyright infringement if it profits directly from it and is in a position to supervise the infringer, the ruling said. Cox argued that it doesn’t profit directly from infringement because it receives the same monthly fee from subscribers whether they illegally download copyrighted files or not, the ruling noted.
The question in this type of case is whether there is a causal relationship between the infringement and the financial benefit. “If copyright infringement draws customers to the defendant’s service or incentivizes them to pay more for their service, that financial benefit may be profit from infringement. But in every case, the financial benefit to the defendant must flow directly from the third party’s acts of infringement to establish vicarious liability,” the court said.
A US district judge in California has largely sided with OpenAI, dismissing the majority of claims raised by authors alleging that large language models powering ChatGPT were illegally trained on pirated copies of their books without their permission.
By repackaging original works as ChatGPT outputs, authors alleged, OpenAI’s most popular chatbot was just a high-tech “grift” that seemingly violated copyright laws, as well as state laws preventing unfair business practices and unjust enrichment.
According to Judge Araceli Martínez-Olguín, authors behind three separate lawsuits—including Sarah Silverman, Michael Chabon, and Paul Tremblay—have failed to provide evidence supporting any of their claims except for direct copyright infringement.
Among copyright claims tossed by Martínez-Olguín were accusations of vicarious copyright infringement. Perhaps most significantly, Martínez-Olguín agreed with OpenAI that the authors’ allegation that “every” ChatGPT output “is an infringing derivative work” is “insufficient” to allege vicarious infringement, which requires evidence that ChatGPT outputs are “substantially similar” or “similar at all” to authors’ books.
“Plaintiffs here have not alleged that the ChatGPT outputs contain direct copies of the copyrighted books,” Martínez-Olguín wrote. “Because they fail to allege direct copying, they must show a substantial similarity between the outputs and the copyrighted materials.”
Authors also failed to convince Martínez-Olguín that OpenAI violated the Digital Millennium Copyright Act (DMCA) by allegedly removing copyright management information (CMI)—such as author names, titles of works, and terms and conditions for use of the work—from training data.
This claim failed because authors cited “no facts” that OpenAI intentionally removed the CMI or built the training process to omit CMI, Martínez-Olguín wrote. Further, the authors cited examples of ChatGPT referencing their names, which would seem to suggest that some CMI remains in the training data.
Some of the remaining claims were dependent on copyright claims to survive, Martínez-Olguín wrote.
As for the claim that OpenAI caused economic injury by unfairly repurposing authors’ works, the judge said that even if authors could show evidence of a DMCA violation, they could only speculate about what injury was caused.
Similarly, allegations of “fraudulent” unfair conduct—accusing OpenAI of “deceptively” designing ChatGPT to produce outputs that omit CMI—”rest on a violation of the DMCA,” Martínez-Olguín wrote.
The only claim under California’s unfair competition law that was allowed to proceed alleged that OpenAI used copyrighted works to train ChatGPT without authors’ permission. Because the state law broadly defines what’s considered “unfair,” Martínez-Olguín said that it’s possible that OpenAI’s use of the training data “may constitute an unfair practice.”
Remaining claims of negligence and unjust enrichment failed, Martínez-Olguín wrote, because authors only alleged intentional acts and did not explain how OpenAI “received and unjustly retained a benefit” from training ChatGPT on their works.
Authors have been ordered to consolidate their complaints and have until March 13 to amend arguments and continue pursuing any of the dismissed claims.
To shore up the tossed copyright claims, authors would likely need to provide examples of ChatGPT outputs that are similar to their works, as well as evidence of OpenAI intentionally removing CMI to “induce, enable, facilitate, or conceal infringement,” Martínez-Olguín wrote.
Ars could not immediately reach the authors’ lawyers or OpenAI for comment.
As authors likely prepare to continue fighting OpenAI, the US Copyright Office has been fielding public input before releasing guidance that could one day help rights holders pursue legal claims and may eventually require works to be licensed from copyright owners for use as training materials. Among the thorniest questions is whether AI tools like ChatGPT should be considered authors of the outputs they generate that end up in creative works.
While the Copyright Office prepares to release three reports this year “revealing its position on copyright law in relation to AI,” according to The New York Times, OpenAI recently made it clear that it does not plan to stop referencing copyrighted works in its training data. Last month, OpenAI said it would be “impossible” to train AI models without copyrighted materials, because “copyright today covers virtually every sort of human expression—including blogposts, photographs, forum posts, scraps of software code, and government documents.”
According to OpenAI, it doesn’t just need old copyrighted materials; it needs current copyrighted materials to ensure that the outputs of chatbots and other AI tools “meet the needs of today’s citizens.”
Rights holders will likely be bracing throughout this confusing time, waiting for the Copyright Office’s reports. But once there is clarity, those reports could “be hugely consequential, weighing heavily in courts, as well as with lawmakers and regulators,” The Times reported.