
NY Times copyright suit wants OpenAI to delete all GPT instances

Not the sincerest form of flattery —

The suit shows evidence that GPT-based systems will reproduce Times articles if asked.

Microsoft is named in the suit for allegedly building the system that allowed GPT derivatives to be trained using infringing material.

In August, word leaked out that The New York Times was considering joining the growing legion of creators suing AI companies for misappropriating their content. The Times had reportedly been negotiating with OpenAI about licensing its material, but those talks had not gone smoothly. So, four months after the company was reportedly considering suing, the suit has now been filed.

The Times is targeting various companies under the OpenAI umbrella, as well as Microsoft, an OpenAI partner that both uses OpenAI’s technology to power its Copilot service and helped provide the infrastructure for training the GPT large language models. But the suit goes well beyond the use of copyrighted material in training, alleging that OpenAI-powered software will happily circumvent the Times’ paywall and ascribe hallucinated misinformation to the Times.

Journalism is expensive

The suit notes that The Times maintains a large staff that allows it to dedicate reporters to a huge range of beats and engage in important investigative journalism, among other things. Because of those investments, the newspaper is often considered an authoritative source on many matters.

All of that costs money, and The Times earns it by limiting access to its reporting through a robust paywall. Each print edition carries a copyright notification, the Times’ terms of service limit the copying and use of any published material, and the paper can be selective about how it licenses its stories. In addition to driving revenue, these restrictions also help it maintain its reputation as an authoritative voice by controlling how its works appear.

The suit alleges that OpenAI-developed tools undermine all of that. “By providing Times content without The Times’s permission or authorization, Defendants’ tools undermine and damage The Times’s relationship with its readers and deprive The Times of subscription, licensing, advertising, and affiliate revenue,” the suit alleges.

Part of the unauthorized use The Times alleges came during the training of various versions of GPT. Prior to GPT-3.5, information about the training dataset was made public. One of the sources used is a large collection of online material called “Common Crawl,” which the suit alleges contains information from 16 million unique records from sites published by The Times. That places the Times as the third most referenced source, behind Wikipedia and a database of US patents.

OpenAI no longer discloses as many details of the data used to train recent GPT versions, but all indications are that full-text NY Times articles are still part of that process. (Much more on that in a moment.) Expect access to training information to be a major issue during discovery if this case moves forward.

Not just training

A number of suits have been filed regarding the use of copyrighted material during training of AI systems. But the Times’ suit goes well beyond that to show how the material ingested during training can come back out during use. “Defendants’ GenAI tools can generate output that recites Times content verbatim, closely summarizes it, and mimics its expressive style, as demonstrated by scores of examples,” the suit alleges.

The suit alleges—and we were able to verify—that it’s comically easy to get GPT-powered systems to offer up content that is normally protected by the Times’ paywall. The suit shows a number of examples of GPT-4 reproducing large sections of articles nearly verbatim.

The suit includes screenshots of ChatGPT being given the title of a piece at The New York Times and asked for the first paragraph, which it delivers. Getting the ensuing text is apparently as simple as repeatedly asking for the next paragraph.

OpenAI has apparently closed that loophole between the preparation of the suit and the present. We entered some of the prompts shown in the suit and were advised, “I recommend checking The New York Times website or other reputable sources,” although we can’t rule out that context provided prior to that prompt could produce copyrighted material.

Ask for a paragraph, and Copilot will hand you a wall of normally paywalled text.

John Timmer

But not all loopholes have been closed. The suit also shows output from Bing Chat, since rebranded as Copilot. We were able to verify that asking for the first paragraph of a specific article at The Times caused Copilot to reproduce the first third of the article.

The suit is dismissive of attempts to justify this as a form of fair use. “Publicly, Defendants insist that their conduct is protected as ‘fair use’ because their unlicensed use of copyrighted content to train GenAI models serves a new ‘transformative’ purpose,” the suit notes. “But there is nothing ‘transformative’ about using The Times’s content without payment to create products that substitute for The Times and steal audiences away from it.”

Reputational and other damages

The hallucinations common to AI also came under fire in the suit for potentially damaging the value of the Times’ reputation, and possibly damaging human health as a side effect. The suit alleges that “a GPT model completely fabricated” a claim that The New York Times published an article on January 10, 2020, titled “Study Finds Possible Link between Orange Juice and Non-Hodgkin’s Lymphoma.” “The Times never published such an article,” the suit notes.

Similarly, asking about a Times article on heart-healthy foods allegedly resulted in Copilot saying the article contained a list of examples (which it didn’t). When asked for that list, 80 percent of the foods Copilot offered weren’t even mentioned in the original article. In another case, recommendations were ascribed to Wirecutter for products that its staff hadn’t even reviewed.

As with the Times material, it’s alleged that it’s possible to get Copilot to offer up large chunks of Wirecutter articles (Wirecutter is owned by The New York Times). But the suit notes that these article excerpts have the affiliate links stripped out of them, depriving Wirecutter of its primary source of revenue.

The suit targets various OpenAI companies for developing the software, as well as Microsoft—the latter for both offering OpenAI-powered services, and for having developed the computing systems that enabled the copyrighted material to be ingested during training. Allegations include direct, contributory, and vicarious copyright infringement, as well as DMCA and trademark violations. Finally, it alleges “Common Law Unfair Competition By Misappropriation.”

The suit seeks nothing less than the erasure of any GPT instances that the parties have trained using material from the Times, as well as the destruction of the datasets used for that training. It also asks for a permanent injunction to prevent similar conduct in the future. The Times also wants money, lots and lots of money: “statutory damages, compensatory damages, restitution, disgorgement, and any other relief that may be permitted by law or equity.”



Elon Musk will see you in court: The top Twitter and X Corp. lawsuits of 2023

Elon Musk speaks at the Atreju political convention organized by Fratelli d’Italia (Brothers of Italy) on December 15, 2023, in Rome, Italy.

Getty Images | Antonio Masiello

Elon Musk’s ownership of Twitter, now called X, began with a lawsuit. When Musk tried to break a $44 billion merger agreement, Twitter filed a lawsuit that gave Musk no choice but to complete the deal.

In the year-plus since Musk bought the company, he’s been the defendant and plaintiff in many more lawsuits involving Twitter and X Corp. As 2023 comes to a close, this article rounds up a selection of notable lawsuits involving the Musk-led social network and provides updates on the status of the cases.

Musk sues Twitter law firm

Musk seemingly held a grudge against the law firm that helped Twitter force Musk to complete the merger. In July, X Corp. sued Wachtell, Lipton, Rosen & Katz in an attempt to claw back the $90 million that Twitter paid the firm before Musk completed the acquisition.

Most of that money was paid to Wachtell hours before the merger closed. X’s lawsuit in San Francisco County Superior Court claimed that “Wachtell arranged to effectively line its pockets with funds from the company cash register while the keys were being handed over to the Musk Parties.”

Wachtell sought to move the dispute into arbitration, pointing out that the contract between itself and Twitter contained a binding arbitration clause. In October, the court granted Wachtell’s motion to compel arbitration and stayed the lawsuit pending the outcome.

Unpaid-bill lawsuits

While Twitter paid the Wachtell legal bill before Musk could block the payment, dozens of lawsuits allege that X has refused to pay bills owed to other companies that started providing services to Twitter before the Musk takeover.

The suits were filed by software vendors, landlords, event planning firms, a private jet company, an office renovator, consultants, and other companies. The lawsuits helped some companies obtain payment via settlements, but X has continued to fight many of the allegations. We covered the unpaid-bill lawsuits in-depth in this lengthy article published in September.

Musk sues Media Matters

Musk has repeatedly blamed outside parties for X’s financial problems, which are largely due to advertisers not wanting to be associated with offensive and controversial content that used to be more heavily moderated before Musk slashed the company’s staff.

One of the biggest ad-spending drops came after a November 16 Media Matters report that said corporate ads were placed “next to content that touts Adolf Hitler and his Nazi Party.” Musk’s X Corp responded by suing Media Matters a few days later, claiming the group “manipulated the algorithms governing the user experience on X to bypass safeguards and create images of X’s largest advertisers’ paid posts adjacent to racist, incendiary content.”

The suit was filed in US District Court for the Northern District of Texas. There aren’t any significant updates on the case to report yet.

X Corp. previously filed a similar lawsuit against the nonprofit Center for Countering Digital Hate (CCDH), claiming the group “improperly gain[ed] access” to data, and “cherry-pick[ed] from the hundreds of millions of posts made each day on X” in order to “falsely claim it had statistical support showing the platform is overwhelmed with harmful content.”

The CCDH filed a motion to dismiss X’s lawsuit on November 16, saying that its actions constituted “newsgathering activity in furtherance of the CCDH defendants’ protected speech and reporting.” The motion and case are still pending in US District Court for the Northern District of California.

Musk suit against data scrapers tossed

In July, X Corp. sued unidentified data scrapers in Dallas County District Court, accusing them of “severely tax[ing]” company servers by “flooding Twitter’s sign-up page with automated requests.” The lawsuit was filed days after Twitter imposed rate limits capping the number of tweets users could view each day.

“Several entities tried to scrape every tweet ever made in a short period of time. That is why we had to put rate limits in place,” Musk wrote at the time.

The lawsuit initially listed four John Doe defendants and was amended to raise the number of defendants to 11. This was a tough lawsuit for X to pursue because it didn’t know who the scrapers were and identified them only by their IP addresses.

X issued subpoenas to Amazon Web Services, Akamai, and Google in attempts to gain information on the John Does behind the IP addresses, but the case fizzled out. On October 30, a Dallas County judge dismissed the lawsuit “for want of prosecution” and ordered X to pay the court costs.



US agency tasked with curbing risks of AI lacks funding to do the job

more dollars needed —

Lawmakers fear NIST will have to rely on companies developing the technology.

They know…

Aurich / Getty

US President Joe Biden’s plan for containing the dangers of artificial intelligence already risks being derailed by congressional bean counters.

A White House executive order on AI announced in October calls on the US to develop new standards for stress-testing AI systems to uncover their biases, hidden threats, and rogue tendencies. But the agency tasked with setting these standards, the National Institute of Standards and Technology (NIST), lacks the budget needed to complete that work independently by the July 26, 2024, deadline, according to several people with knowledge of the work.

Speaking at the NeurIPS AI conference in New Orleans last week, Elham Tabassi, associate director for emerging technologies at NIST, described this as “an almost impossible deadline” for the agency.

Some members of Congress have grown concerned that NIST will be forced to rely heavily on AI expertise from private companies that, due to their own AI projects, have a vested interest in shaping standards.

The US government has already tapped NIST to help regulate AI. In January 2023 the agency released an AI risk management framework to guide business and government. NIST has also devised ways to measure public trust in new AI tools. But the agency, which standardizes everything from food ingredients to radioactive materials and atomic clocks, has puny resources compared to those of the companies on the forefront of AI. OpenAI, Google, and Meta each likely spent upwards of $100 million to train the powerful language models that undergird applications such as ChatGPT, Bard, and Llama 2.

NIST’s budget for 2023 was $1.6 billion, and the White House has requested that it be increased by 29 percent in 2024 for initiatives not directly related to AI. Several sources familiar with the situation at NIST say that the agency’s current budget will not stretch to figuring out AI safety testing on its own.

On December 16, the same day Tabassi spoke at NeurIPS, six members of Congress signed a bipartisan open letter raising concern about the prospect of NIST enlisting private companies with little transparency. “We have learned that NIST intends to make grants or awards to outside organizations for extramural research,” they wrote. The letter warns that there does not appear to be any publicly available information about how those awards will be decided.

The lawmakers’ letter also claims that NIST is being rushed to define standards even though research into testing AI systems is at an early stage. As a result there is “significant disagreement” among AI experts over how to work on or even measure and define safety issues with the technology, it states. “The current state of the AI safety research field creates challenges for NIST as it navigates its leadership role on the issue,” the letter claims.

NIST spokesperson Jennifer Huergo confirmed that the agency had received the letter and said that it “will respond through the appropriate channels.”

NIST is making some moves that would increase transparency, including issuing a request for information on December 19, soliciting input from outside experts and companies on standards for evaluating and red-teaming AI models. It is unclear if this was a response to the letter sent by the members of Congress.

The concerns raised by lawmakers are shared by some AI experts who have spent years developing ways to probe AI systems. “As a nonpartisan scientific body, NIST is the best hope to cut through the hype and speculation around AI risk,” says Rumman Chowdhury, a data scientist and CEO of Parity Consulting who specializes in testing AI models for bias and other problems. “But in order to do their job well, they need more than mandates and well wishes.”

Yacine Jernite, machine learning and society lead at Hugging Face, a company that supports open source AI projects, says big tech has far more resources than the agency given a key role in implementing the White House’s ambitious AI plan. “NIST has done amazing work on helping manage the risks of AI, but the pressure to come up with immediate solutions for long-term problems makes their mission extremely difficult,” Jernite says. “They have significantly fewer resources than the companies developing the most visible AI systems.”

Margaret Mitchell, chief ethics scientist at Hugging Face, says the growing secrecy around commercial AI models makes measurement more challenging for an organization like NIST. “We can’t improve what we can’t measure,” she says.

The White House executive order calls for NIST to perform several tasks, including establishing a new Artificial Intelligence Safety Institute to support the development of safe AI. In April, a UK taskforce focused on AI safety was announced. It will receive $126 million in seed funding.

The executive order gave NIST an aggressive deadline for coming up with, among other things, guidelines for evaluating AI models, principles for “red-teaming” (adversarially testing) models, developing a plan to get US-allied nations to agree to NIST standards, and coming up with a plan for “advancing responsible global technical standards for AI development.”

Although it isn’t clear how NIST is engaging with big tech companies, discussions on NIST’s risk management framework, which took place prior to the announcement of the executive order, involved Microsoft; Anthropic, a startup formed by ex-OpenAI employees that is building cutting-edge AI models; Partnership on AI, which represents big tech companies; and the Future of Life Institute, a nonprofit dedicated to existential risk, among others.

“As a quantitative social scientist, I’m both loving and hating that people realize that the power is in measurement,” Chowdhury says.

This story originally appeared on wired.com.



From CZ to SBF, 2023 was the year of the fallen crypto bro

Aurich Lawson | Getty Images (Bloomberg/Antonio Masiello)

Looking back, 2023 will likely be remembered as the year of the fallen crypto bro.

While celebrities like Kim Kardashian and Matt Damon last year faced public backlash after shilling for cryptocurrency, this year’s top headlines traced the downfalls of two of the most successful and influential crypto bros of all time: FTX co-founder Sam Bankman-Fried (often referred to as SBF) and Binance founder Changpeng Zhao (commonly known as CZ).

At 28 years old, Bankman-Fried made Forbes’ 30 Under 30 list in 2021. But within two short years, his recently updated Forbes profile notes, the man who was once “one of the richest people in crypto” has suffered “a stunning fall from grace” and now has a real-time net worth of $0.

In November, Bankman-Fried was convicted by a 12-member jury of defrauding FTX customers, after a monthlong trial where federal prosecutors accused him of building FTX into “a pyramid of deceit.” The trial followed months of wild headlines—comparing Bankman-Fried to a cartoon villain, accusing Bankman-Fried of stealing $2.2 billion from FTX customers to buy things like a $16.4 million house for his parents, and revealing that Bankman-Fried casually joked about losing track of $50 million.

Defending against the charges, Bankman-Fried argued that “dishonesty and unfair dealing” aren’t fraud and even claimed that he couldn’t recall what he did at FTX, while FTX scrambled to recover $7.3 billion and put out the “dumpster fire.”

Ultimately, Bankman-Fried’s former FTX/Alameda Research partners, including his ex-girlfriend Caroline Ellison, testified against him. Ellison’s testimony led to even weirder revelations about SBF, like Bankman-Fried’s aspirations to become US president and his professed rejection of moral ideals like “don’t steal.” By the end of the trial, it seemed like very few felt any sympathy for the once-FTX kingpin.

Bankman-Fried now faces a maximum sentence of 110 years. His exact sentence is scheduled to be determined by a US district judge in March 2024, Reuters reported.

While FTX had been considered a giant force in the cryptocurrency world, Binance is still the world’s biggest cryptocurrency exchange—and considered more “systemically important” to crypto enthusiasts, Bloomberg reported. That’s why it was a huge deal when Binance was rocked by its own scandal in 2023 that ended in its founder and CEO, Zhao, admitting to money laundering and resigning.

Arguably Zhao’s fall from grace may have been more shocking to cryptocurrency fans than Bankman-Fried’s. Just one month prior to Zhao’s resignation, after FTX collapsed, The Economist had dubbed CZ as “crypto’s last man standing.”

Zhao launched Binance in 2017 and the next year was featured on the cover of Forbes’ first list of the wealthiest people in crypto. Peering out from under a hoodie, Zhao was considered by Forbes to be a “crypto overlord,” going from “zero to billionaire in six months,” where other crypto bros had only managed to become millionaires.

But 2023 put an abrupt end to Zhao’s reign at Binance. In March, the Commodity Futures Trading Commission (CFTC) sued Binance and Zhao over suspected money laundering and sanctions violations, triggering a Securities and Exchange Commission lawsuit in June and a Department of Justice (DOJ) probe. In the end, Binance owed billions in fines to the DOJ and the CFTC, which Secretary of the Treasury Janet Yellen called “historic penalties.” For personally directing Binance employees to skirt US regulatory compliance—and hide more than 100,000 suspicious transactions linked to terrorism, child sexual abuse materials, and ransomware attacks—Zhao now personally owes the CFTC $150 million.

On the social media platform X (formerly Twitter), Zhao wrote that after stepping down as Binance’s CEO, he would take a break and would likely never helm a startup again.

“I am content being [a] one-shot (lucky) entrepreneur,” Zhao wrote.



Banks use your deposits to loan money to fossil-fuel, emissions-heavy firms

Money for something —

Your $1,000 in the bank creates emissions equal to a flight from NYC to Seattle.


When you drop money in the bank, it looks like it’s just sitting there, ready for you to withdraw. In reality, your institution makes money on your money by lending it elsewhere, including to the fossil fuel companies driving climate change, as well as emissions-heavy industries like manufacturing.

So just by leaving money in a bank account, you’re unwittingly contributing to worsening catastrophes around the world. According to a new analysis, for every $1,000 the average American keeps in savings, each year they indirectly create emissions equivalent to flying from New York to Seattle. “We don’t really take a look at how the banks are using the money we keep in our checking account on a daily basis, where that money is really circulating,” says Jonathan Foley, executive director of Project Drawdown, which published the analysis. “But when we look under the hood, we see that there’s a lot of fossil fuels.”

By switching to a climate-conscious bank, you could reduce those emissions by about 75 percent, the study found. In fact, if you moved $8,000—the median balance for US customers—the reduction in your indirect emissions would be twice the direct emissions you’d avoid by switching to a vegetarian diet.

Put another way: You as an individual have a carbon footprint—by driving a car, eating meat, running a gas furnace instead of a heat pump—but your money also has a carbon footprint. Banking, then, is an underappreciated yet powerful avenue for climate action on a mass scale. “Not just voting every four years, or not just skipping the hamburger, but also where my money sits, that’s really important,” says Foley.

Just as you can borrow money from a bank, so too do fossil fuel companies and the companies that support that industry—think of building pipelines and other infrastructure. “Even if it’s not building new pipelines, for a fossil fuel company to be doing just its regular operations—whether that’s maintaining the network of gas stations that it owns, or maintaining existing pipelines, or paying its employees—it’s going to need funding for that,” says Paddy McCully, senior analyst at Reclaim Finance, an NGO focused on climate action.

A fossil fuel company’s need for those loans varies from year to year, given the fluctuating prices of those fuels. That’s where you, the consumer, come in. “The money that an individual puts into their bank account makes it possible for the bank to then lend money to fossil fuel companies,” says Richard Brooks, climate finance director at Stand.earth, an environmental and climate justice advocacy group. “If you look at the top 10 banks in North America, each of them lends out between $20 billion and $40 billion to fossil fuel companies every year.”

The new report finds that on average, 11 of the largest US banks lend 19.4 percent of their portfolios to carbon-intensive industries. (The American Bankers Association did not immediately respond to a request to comment for this story.) To be very clear: Oil, gas, and coal companies wouldn’t be able to keep producing these fuels—when humanity needs to be reducing carbon emissions dramatically and rapidly—without these loans. New fossil fuel projects aren’t simply fleeting endeavors, but will operate for years, locking in a certain amount of emissions going forward.

At the same time, Brooks says, big banks are under-financing the green economy. As a civilization, we’re investing in the wrong kind of energy if we want to avoid the ever-worsening effects of climate change. Yes, 2022 was the first year that climate finance surpassed the trillion-dollar mark. “However, the alarming aspect is that climate finance must increase by at least fivefold annually, as swiftly as possible, to mitigate the worst impacts of climate change,” says Valerio Micale, senior manager of the Climate Policy Initiative. “An even more critical consideration is that this cost, which would accumulate to $266 trillion until 2050, pales in comparison to the costs of inaction, estimated at over $2,000 trillion over the same period.”

Smaller banks, at least, are less likely to be providing money for the fossil fuel industry. A credit union operates more locally, so it’s much less likely to be fronting money for, say, a new oil pipeline. “Big fossil fuel companies go to the big banks for their financing,” says Brooks. “They’re looking for loans in the realm of hundreds of millions of dollars, sometimes multibillion-dollar loans, and a credit union wouldn’t be able to provide that.”

This makes banking a uniquely powerful lever to pull when it comes to climate action, Foley says. Compared to switching to vegetarianism or veganism to avoid the extensive carbon emissions associated with animal agriculture, money is easy to move. “If large numbers of people start to tell their financial institutions that they don’t really want to participate in investing in fossil fuels, that slowly kind of drains capital away from what’s available for fossil fuels,” says Foley.

While the new report didn’t go so far as to exhaustively analyze the lending habits of the thousands of banks in the US, Foley says there’s a growing number that deliberately don’t invest in fossil fuels. If you’re not sure about what your bank is investing in, you can always ask. “I think when people hear we need to move capital out of fossil fuels into climate solutions, they probably think only Warren Buffett can do that,” says Foley. “That’s not entirely true. We can all do a little bit of that.”

This story originally appeared on wired.com.



FTC suggests new rules to shift parents’ burden of protecting kids to websites

Ending the endless tracking of kids —

FTC seeking public comments on new rules to expand children’s privacy law.


The Federal Trade Commission (FTC) is currently seeking comments on new rules that would further restrict platforms’ efforts to monetize children’s data.

Through the Children’s Online Privacy Protection Act (COPPA), the FTC initially sought to give parents more control over what kinds of information various websites and apps can collect from their kids. Now, the FTC wants to update COPPA and “shift the burden from parents to providers to ensure that digital services are safe and secure for children,” the FTC’s press release said.

“By requiring firms to better safeguard kids’ data, our proposal places affirmative obligations on service providers and prohibits them from outsourcing their responsibilities to parents,” FTC chair Lina Khan said.

Among proposed rules, the FTC would require websites to turn off targeted advertising by default and prohibit sending push notifications to encourage kids to use services more than they want to. Surveillance in schools would be further restricted, so that data is only collected for educational purposes. And data security would be strengthened by mandating that websites and apps “establish, implement, and maintain a written children’s personal information security program that contains safeguards that are appropriate to the sensitivity of the personal information collected from children.”

Perhaps most significantly, COPPA would also be updated to stop companies from retaining children’s data forever, explicitly stating that “operators cannot retain the information indefinitely.” In a statement, commissioner Alvaro Bedoya called this a “critical protection” at a time when “new, machine learning-fueled systems require ever larger amounts of training data.”

These proposed changes were designed to address “the evolving ways personal information is being collected, used, and disclosed, including to monetize children’s data,” the FTC said.

Keeping up with advancing technology, the FTC said, also requires expanding COPPA’s definition of “personal information” to include biometric identifiers. That change was likely inspired by charges brought against Amazon earlier this year, when the FTC accused Amazon of violating COPPA by retaining tens of thousands of children’s Alexa voice recordings forever.

Once the notice of proposed rulemaking is published to the Federal Register, the public will have 60 days to submit comments. The FTC likely anticipates that thousands of parents and stakeholders will weigh in, noting that when COPPA was last updated in 2019, more than 175,000 comments were submitted.

Endless tracking of kids not a “victimless crime”

Bedoya said that updating the already-expansive children’s privacy law would prevent known harms. He also expressed concern that these harms are increasingly being overlooked, citing a federal judge in California who preliminarily enjoined California’s Age-Appropriate Design Code in September. That judge had suggested that California’s law was “actually likely to exacerbate” online harm to kids, but Bedoya challenged that decision as reinforcing a “critique that has quietly proliferated around children’s privacy: the idea that many privacy invasions do not actually hurt children.”

For decades, COPPA has protected against the unauthorized or unnecessary collection, use, retention, and disclosure of children’s information, which Bedoya said “endangers children’s safety,” “exposes children and families to hacks and data breaches,” and “allows third-party companies to develop commercial relationships with children that prey on their trust and vulnerability.”

“I think each of these harms, particularly the latter, undermines the idea that the pervasive tracking of children online is [a] ‘victimless crime,'” Bedoya said, adding that “the harms that COPPA sought to prevent remain real, and COPPA remains relevant and profoundly important.”

According to Bedoya, COPPA is more vital than ever, as “we are only at the beginning of an era of biometric fraud.”

Khan characterized the proposed changes as “much-needed” in an “era where online tools are essential for navigating daily life—and where firms are deploying increasingly sophisticated digital tools to surveil children.”

“Kids must be able to play and learn online without being endlessly tracked by companies looking to hoard and monetize their personal data,” Khan said.



Child sex abuse images found in dataset training image generators, report says

More than 1,000 images of known child sexual abuse material (CSAM) were found in a large open dataset—known as LAION-5B—that was used to train popular text-to-image generators such as Stable Diffusion, Stanford Internet Observatory (SIO) researcher David Thiel revealed on Wednesday.

SIO’s report seems to confirm rumors swirling on the Internet since 2022 that LAION-5B included illegal images, Bloomberg reported. In an email to Ars, Thiel warned that “the inclusion of child abuse material in AI model training data teaches tools to associate children in illicit sexual activity and uses known child abuse images to generate new, potentially realistic child abuse content.”

Thiel began his research in September after discovering in June that AI image generators were being used to create thousands of fake but realistic AI child sex images rapidly spreading on the dark web. His goal was to find out what role CSAM may play in the training process of AI models powering the image generators spouting this illicit content.

“Our new investigation reveals that these models are trained directly on CSAM present in a public dataset of billions of images, known as LAION-5B,” Thiel’s report said. “The dataset included known CSAM scraped from a wide array of sources, including mainstream social media websites”—like Reddit, X, WordPress, and Blogspot—as well as “popular adult video sites”—like XHamster and XVideos.

Shortly after Thiel’s report was published, a spokesperson for LAION, the Germany-based nonprofit that produced the dataset, told Bloomberg that LAION “was temporarily removing LAION datasets from the Internet” due to LAION’s “zero tolerance policy” for illegal content. The datasets will be republished once LAION ensures “they are safe,” the spokesperson said. A spokesperson for Hugging Face, which hosts a link to a LAION dataset that’s currently unavailable, confirmed to Ars that the dataset is now unavailable to the public after being switched to private by the uploader.

Removing the datasets now doesn’t fix any lingering issues with previously downloaded datasets or previously trained models, though, like Stable Diffusion 1.5. Thiel’s report said that Stability AI’s subsequent versions of Stable Diffusion—2.0 and 2.1—filtered out some or most of the content deemed “unsafe,” “making it difficult to generate explicit content.” But because users were dissatisfied by these later, more filtered versions, Stable Diffusion 1.5 remains “the most popular model for generating explicit imagery,” Thiel’s report said.

A spokesperson for Stability AI told Ars that Stability AI is “committed to preventing the misuse of AI and prohibit the use of our image models and services for unlawful activity, including attempts to edit or create CSAM.” The spokesperson pointed out that SIO’s report “focuses on the LAION-5B dataset as a whole,” whereas “Stability AI models were trained on a filtered subset of that dataset” and were “subsequently fine-tuned” to “mitigate residual behaviors.” The implication seems to be that Stability AI’s filtered dataset is not as problematic as the larger dataset.

Stability AI’s spokesperson also noted that Stable Diffusion 1.5 “was released by Runway ML, not Stability AI.” There seems to be some confusion on that point, though, as a Runway ML spokesperson told Ars that Stable Diffusion “was released in collaboration with Stability AI.”

A demo of Stable Diffusion 1.5 noted that the model was “supported by Stability AI” but released by CompVis and Runway. A YCombinator thread linked to a blog post by Stability AI’s former chief information officer, Daniel Jeffries—titled “Why we chose not to release Stable Diffusion 1.5 as quickly”—that might have clarified the matter, but the post has since been deleted.

Runway ML’s spokesperson declined to comment on any updates being considered for Stable Diffusion 1.5 but linked Ars to a Stability AI blog from August 2022 that said, “Stability AI co-released Stable Diffusion alongside talented researchers from” Runway ML.

Stability AI’s spokesperson said that Stability AI does not host Stable Diffusion 1.5 but has taken other steps to reduce harmful outputs. Those include only hosting “versions of Stable Diffusion that include filters” that “remove unsafe content” and “prevent the model from generating unsafe content.”

“Additionally, we have implemented filters to intercept unsafe prompts or unsafe outputs when users interact with models on our platform,” Stability AI’s spokesperson said. “We have also invested in content labelling features to help identify images generated on our platform. These layers of mitigation make it harder for bad actors to misuse AI.”

Beyond verifying 1,008 instances of CSAM in the LAION-5B dataset, SIO found 3,226 instances of suspected CSAM in the LAION dataset. Thiel’s report warned that both figures are “inherently a significant undercount” due to researchers’ limited ability to detect and flag all the CSAM in the datasets. His report also predicted that “the repercussions of Stable Diffusion 1.5’s training process will be with us for some time to come.”

“The most obvious solution is for the bulk of those in possession of LAION‐5B‐derived training sets to delete them or work with intermediaries to clean the material,” SIO’s report said. “Models based on Stable Diffusion 1.5 that have not had safety measures applied to them should be deprecated and distribution ceased where feasible.”



Republicans slam broadband discounts for poor people, threaten to kill program


Senate Minority Whip John Thune (R-S.D.) speaks to reporters after the weekly Senate Republican caucus lunch on November 14, 2023, in Washington, DC.

Getty Images | Anna Rose Layden

Republican members of Congress blasted a program that gives $30 monthly broadband discounts to people with low incomes, accusing the Federal Communications Commission of being “wasteful.” The lawmakers suggested in a letter to FCC Chairwoman Jessica Rosenworcel that they may try to block funding for the Affordable Connectivity Program (ACP), which is expected to run out of money in April 2024.

“As lawmakers with oversight responsibility over the ACP, we have raised concerns, shared by the FCC Inspector General, regarding the program’s effectiveness in connecting non-subscribers to the Internet,” the lawmakers wrote. “While you have repeatedly claimed that the ACP is necessary for connecting participating households to the Internet, it appears the vast majority of tax dollars have gone to households that already had broadband prior to the subsidy.”

The letter was sent Friday by Sen. John Thune (R-S.D.), Sen. Ted Cruz (R-Texas), Rep. Cathy McMorris Rodgers (R-Wash.), and Rep. Bob Latta (R-Ohio). Cruz is the top Republican on the Senate Commerce Committee, and Thune is the top Republican on the Subcommittee on Communications, Media, and Broadband. McMorris Rodgers is chair of the House Commerce Committee, and Latta is chair of the House Subcommittee on Communications and Technology.

The letter questioned Rosenworcel’s testimony at a recent House hearing in which she warned that 25 million households could lose Internet access if Congress doesn’t renew the ACP discounts. The ACP was created by congressional legislation, but Republicans are wary of continuing it. The program began with $14.2 billion a little less than two years ago.

“At a hearing before the House Energy and Commerce Committee on November 30, 2023, you asserted—without evidence and contrary to the FCC’s own data—that ’25 million households’ would be ‘unplug[ged]…from the Internet’ if Congress does not provide new funding for the ACP,” the letter said. “This is not true. As Congress considers the future of taxpayer broadband subsidies, we ask you to correct the hearing record and make public accurate information about the ACP.”

“Reckless spending spree”

The letter criticizes what it calls “the Biden administration’s reckless spending spree” and questions whether the ACP is worth paying for:

It is incumbent on lawmakers to protect taxpayers and make funding decisions based on clear evidence. Unfortunately, your testimony pushes “facts” about the ACP that are deeply misleading and have the potential to exacerbate the fiscal crisis without producing meaningful benefits to the American consumer. We therefore ask you to supplement your testimony from November 30, 2023, with the correct information about the number of Americans that will “lose” broadband if the ACP does not receive additional funds, and correct the hearing record accordingly by January 5, 2024.

During the November 30 hearing, Rep. Yvette Clarke (D-N.Y.) said she will introduce legislation to re-fund the program. The ACP has widespread support from consumer advocates and the telecom industry. Additionally, the governors of 25 US states and Puerto Rico urged Congress to extend the ACP in a November 13 letter.

The Biden administration has requested $6 billion to fund the program through December 2024. Rosenworcel’s office declined to comment on the Republicans’ letter when contacted by Ars today.

Although the FCC operates the discount program, it has to do so within parameters set by Congress. The FCC’s ACP rulemaking noted that the income-eligibility guidelines were determined by Congress.



Binance to pay $2.7 billion fine after hiding shady transactions from feds

Ill-gotten gains —

Binance’s former chief compliance officer must also pay a $1.5 million fine.


Founder and CEO of Binance Changpeng Zhao, commonly known as “CZ,” on May 10, 2022, in Rome, Italy.

Now that a federal court has approved a settlement with Binance, the world’s largest cryptocurrency exchange is hoping to move past a money-laundering scandal that forced its founder and CEO, Changpeng Zhao, to resign and drained more than $1 billion in assets from its platform overnight.

Under the settlement, Binance will “disgorge $1.35 billion of ill-gotten transaction fees and pay a $1.35 billion penalty” to the Commodity Futures Trading Commission (CFTC), the federal agency announced in a press release.

Additionally, Zhao will personally pay a $150 million civil monetary penalty. According to a plea agreement with the US Department of Justice—which ordered Binance to pay a “historic” penalty of $4.3 billion—Zhao’s previously ordered $50 million fine can be credited under certain terms against the amount that Zhao owes the CFTC.

The CFTC found that Zhao directed Binance to dodge US regulatory requirements and violate Binance’s own terms of use to hide unauthorized US trading on the exchange. Binance did this by soliciting US customers to trade on the platform without being subjected to Binance’s know-your-customer (KYC) procedures.

“Zhao and Binance were aware of US regulatory requirements, but chose to ignore them and knowingly concealed the presence of US customers on the platform,” the CFTC’s press release said. “The order also finds Zhao and other members of Binance’s senior management actively facilitated violations of US law, including instructing US customers to evade compliance controls.”

Among those “aiding and abetting Binance’s violations,” the CFTC said, was Binance’s former chief compliance officer, Samuel Lim. Under a separate order, Lim must pay a $1.5 million civil monetary penalty, the CFTC noted.

As part of the settlement, Binance will no longer allow customers to use sub-accounts to skirt KYC procedures and has agreed to remove all non-compliant accounts from the platform. Moving forward, Binance has agreed to “no longer allow existing sub-accounts, including those opened by prime brokers, to bypass the platform’s compliance controls,” the CFTC said.

Binance must also implement a new corporate governance structure, adding a board of directors with independent members and compliance and audit committees. This structure is intended to prevent Binance from approving suspicious transactions linked to terrorism, child sexual abuse, and ransomware attacks, as well as from violating anti-money laundering and sanctions laws.

In November, when Zhao resigned, Binance said that settling these lawsuits would help the crypto exchange “turn the page,” Reuters reported.

Zhao’s plea agreement prevents him from making any public statements contradicting his acceptance of responsibility for Binance’s schemes, and he has kept his word on that front. Shortly after resigning, Zhao wrote on the social media platform X (formerly Twitter) that he had made mistakes and must take responsibility “for our community, for Binance, and for myself.”

Within a day of Zhao’s resignation, though, some Binance users appeared to lose confidence in the platform, withdrawing more than $1 billion from the exchange, CNBC reported. A market analyst told CNBC that Binance’s token suffered most from the CEO stepping down.

However, the majority of Binance’s assets—more than $65 billion—remained on the platform, CNBC reported, indicating that Binance is likely big enough to survive this year’s legal storms.

Zhao said he was “proud to point out” that the plea deals “do not allege that Binance misappropriated any user funds” or “that Binance engaged in any market manipulation.” Naming his successor as CEO—Binance’s former global head of regional markets, Richard Teng—Zhao expressed confidence that Teng would “ensure Binance delivers on our next phase of security, transparency, compliance, and growth.”



Disgraced Nikola founder Trevor Milton gets 4-year sentence for lying about EVs

Web of lies —

Prosecutors had asked for a heavier sentence to deter future fraud.


Trevor Milton, founder of Nikola Corp., arrives at court in New York on Monday, Dec. 18, 2023. Milton is set to be sentenced on Monday after being found guilty of securities fraud and wire fraud in October 2022.

The disgraced founder and former CEO of the “zero emissions” truck company Nikola, Trevor Milton, was sentenced to four years in prison on Monday, Bloomberg reported.

That’s a lighter sentence than prosecutors had requested after a jury found Milton guilty of one count of securities fraud and two counts of wire fraud in 2022. During the trial, Milton was accused of lying about “nearly all aspects of the business,” CNBC reported.

From 2016 to 2020, Milton’s “extravagant claims” were fueled by a desire to pump up the value of Nikola stock, The New York Times reported. He was accused of misleading investors about everything from fake prototypes of emission-free long-haul trucks to billions worth of supposedly binding orders for hydrogen fuel cells and batteries that were never shipped. In a sentencing memo, prosecutors said that Milton targeted “less sophisticated investors,” the Times reported, engaging “in a sustained scheme to take advantage of” their inexperience.

Nikola’s stock peaked in 2020, but then dozens of fraud allegations were reported by the investment firm Hindenburg Research, causing Nikola’s stock to plummet. “We have never seen this level of deception at a public company, especially of this size,” Hindenburg Research’s report said. Facing backlash, Milton resigned, voluntarily withdrawing from his company and selling off $100 million in Nikola stock to fund more than $85 million in luxury purchases, the Times reported. Today, Milton remains Nikola’s second-largest shareholder, Bloomberg reported.

By 2021, Nikola had admitted to the US Securities and Exchange Commission that nine statements made by Milton were “inaccurate.”

The price of these lies to investors was more than $660 million, prosecutors claimed.

Through it all, Milton has denied the charges, requesting to be sentenced to only probation while holding back tears, Bloomberg reported. At his sentencing hearing, he said that his “misstatements” came from a place of “deeply held optimism,” and he did not intend to cause any harm, Yahoo reported.

“I was not a very seasoned CEO,” Milton reportedly said.

Prosecutors sought heavier consequences, asking the judge to order Milton to pay a $5 million fine and sentence Milton to 11 years in prison.

Milton is likely to appeal, Bloomberg reported.

Nikola’s spokesperson provided Ars with a statement on the sentencing.

“Nikola has a strong foundation and is in the process of achieving our mission to decarbonize the trucking industry, which is our focus,” Nikola’s statement said. “We have made significant progress year-over-year and will continue with the same level of discipline and commitment in 2024. We are pleased to move forward and remind the public that the company founder has not had any active role in Nikola since September 2020.”

Nikola’s shaky road to recovery

Current Nikola CEO Steve Girsky has recently said that Nikola will recover by attracting “world-class people to execute on our business plan” and working toward “establishing ourselves as the leader in zero-emissions commercial transportation,” Forbes reported.

Girsky seems keen to move past the scandal by promoting Nikola’s latest successes. In September, Girsky boasted that daily tests showed that one of Nikola’s fuel cell trucks could successfully run for 900 miles.

“This was quite an accomplishment, and I defy anyone to find another zero-emission vehicle truck anywhere that can run up to 900 miles in a day,” Girsky said.

However, since the 2020 scandal, Nikola’s stock has dropped 99 percent, Forbes reported, and now an investor analytics company called Macroaxis has estimated that Nikola has an 81 percent chance of going bankrupt.

While Forbes credited Milton with most of Nikola’s current woes, it’s not just the scandal causing investment setbacks for Nikola. In August, Nikola also recalled most of its battery-electric trucks—about 209—after a fire probe revealed a “defective part” that “is believed to have caused a battery to overheat” and risk setting trucks on fire, The Wall Street Journal reported.

This represented “virtually all” the battery-electric trucks that Nikola had shipped to customers, the Journal reported. While engineers worked on a solution to keep battery-electric trucks on the roads, Nikola temporarily halted sales of the battery-electric trucks, ramping up production instead on hydrogen fuel-cell electric trucks that remain Nikola’s core focus.

In September, Girsky described the recall as a setback but pointed to all of Nikola’s progress since Milton’s departure.

“It’s a setback, but we’re in it for the long haul,” Girsky said. “We’ve proved the skeptics wrong who said we couldn’t engineer a truck, couldn’t build a truck, and couldn’t sell a truck, and we’re not planning on stopping any time soon.”



Musk’s X hit with EU’s first investigation of Digital Services Act violations

EU investigates X —

EU probes disinformation, election policy, Community Notes, and paid checkmarks.


Getty Images | Chris Delmas

The European Union has opened a formal investigation into whether Elon Musk’s X platform (formerly Twitter) violated the Digital Services Act (DSA), which could result in fines of up to 6 percent of global revenue. A European Commission announcement today said the agency “opened formal proceedings to assess whether X may have breached the Digital Services Act (DSA) in areas linked to risk management, content moderation, dark patterns, advertising transparency and data access for researchers.”

This is the commission’s first formal investigation under the Digital Services Act, which applies to large online platforms and has requirements on content moderation and transparency. The step has been in the works since at least October, when a formal request for information was sent amid reports of widespread Israel/Hamas disinformation.

The European Commission today said it “decided to open formal infringement proceedings against X under the Digital Services Act” after reviewing X’s replies to the request for information on topics including “the dissemination of illegal content in the context of Hamas’ terrorist attacks against Israel.” The commission said the investigation will focus on dissemination of illegal content, the effectiveness of measures taken to combat information manipulation on X, transparency, and “a suspected deceptive design of the user interface.”

The illegal content probe will focus on “risk assessment and mitigation measures” and “the functioning of the notice and action mechanism for illegal content” that is mandated by the DSA. The commission said this will be evaluated “in light of X’s content moderation resources,” a reference to the deep staff cuts made by Musk since purchasing Twitter in October 2022.

Community Notes and paid checkmarks under review

The information manipulation portion of the investigation will evaluate “the effectiveness of X’s so-called ‘Community Notes’ system in the EU and the effectiveness of related policies mitigating risks to civic discourse and electoral processes,” the announcement said. The transparency probe “concerns suspected shortcomings in giving researchers access to X’s publicly accessible data as mandated by Article 40 of the DSA, as well as shortcomings in X’s ads repository,” the commission said.

Musk’s decision to make “verification” checkmarks a paid feature will figure into the commission’s probe of whether the X user interface has a deceptive design. The commission said it will evaluate “checkmarks linked to certain subscription products, the so-called Blue checks.”

The investigation will include more requests for information, interviews, and “inspections,” the commission said. There is no legal deadline for completing the investigation.

“The opening of formal proceedings empowers the Commission to take further enforcement steps, such as interim measures, and non-compliance decisions. The Commission is also empowered to accept any commitment made by X to remedy on the matters subject to the proceeding,” the announcement said.

In a statement today, X said it is committed to complying with the Digital Services Act and is cooperating with regulators. “It is important that this process remains free of political influence and follows the law,” the company said. “X is focused on creating a safe and inclusive environment for all users on our platform, while protecting freedom of expression, and we will continue to work tirelessly towards this goal.”



Adobe gives up on $20 billion acquisition of Figma

No deal —

Competition probes in the EU and UK made regulatory approval dicey.


Adobe has abandoned its proposed $20 billion acquisition of product design software company Figma, as there was “no clear path to receive necessary regulatory approvals” from UK and EU watchdogs.

The deal had faced probes from both UK and EU competition regulators over fears it would harm competition in the product design, image editing, and illustration markets.

Adobe refused to offer remedies to satisfy the UK Competition and Markets Authority’s concerns last week, according to a document published by the regulator on Monday, arguing that a divestment would be “wholly disproportionate.”

Hours later, the two companies issued a joint statement terminating the merger, citing the regulatory challenges. Adobe will pay Figma a $1 billion termination fee under the terms of the merger agreement.

“Adobe and Figma strongly disagree with the recent regulatory findings, but we believe it is in our respective best interests to move forward independently,” said Shantanu Narayen, chair and chief executive of Adobe.

The companies had been battling multiple regulatory challenges, with the EU’s executive body, the European Commission, publishing a statement of objections to the deal last month arguing the takeover could “significantly reduce competition in the global markets.”

Margrethe Vestager, the EU’s competition commissioner, said: “By combining these two companies, the proposed acquisition would have terminated all current and prevented all future competition between them. Our in-depth investigation showed that this would lead to higher prices, reduced quality or less choice for customers.”

Competition regulators around the world have sent mixed signals over the aspirations of Big Tech groups hoping to acquire promising start-ups and potential rivals, at a time when public markets have been largely closed to new listings.

The EU’s antitrust watchdog has made a formal objection to Amazon’s $1.7 billion proposed purchase of Roomba-maker iRobot. However, Microsoft was able to complete its $75 billion takeover of games maker Activision after it made revisions to the deal to appease UK regulators.

Speaking with the Financial Times last week, Figma chief executive Dylan Field said: “It is important that those paths of acquisition remain available because very few companies make it all the way to IPO. So many companies fail on the way.”

Shares in Adobe were up almost 2 percent in pre-market trading. Since the deal was announced, Adobe has turned its focus to embedding generative artificial intelligence into its products by, for example, enabling users to create novel stock imagery with AI.

The huge price that Adobe was willing to pay for San Francisco-based Figma had been seen by critics of the deal as an effort to quash the software giant’s most promising new rival in decades.

The deal, which was first negotiated during the COVID-19 pandemic’s boom in tech investment and announced in September 2022, would have valued Figma at roughly 50 times its annual recurring revenue, and double its last private funding round in 2021.

The companies were expected to appear in front of the CMA to contest the regulator’s provisional findings on Thursday this week.

Under its proposed remedies in November, the CMA said it was considering either prohibiting the deal or demanding the divestiture of overlapping operations, such as Adobe’s Illustrator or Photoshop, or Figma’s core product, Figma Design.

Field said that the latter suggestion left him amazed at “the idea of buying a company so you can divest the company.”

“When I read that document and saw that was one of the proposals, I thought it was quite amusing; it felt like a bit of a punchline to a joke. I was surprised to see that as a proposal from the agency.” In a statement on Monday, Field said he was “disappointed in the outcome.”

Earlier on Monday, the CMA had published the companies’ responses to its provisional findings, which Adobe and Figma said contained “serious errors of law and fact” and took “an irrational approach to the gathering and appraisal of evidence.”

“Requiring a multibillion-dollar global divestment of Photoshop or Illustrator in order to address an uncertain and speculative theory of harm is wholly disproportionate,” they wrote.

© 2023 The Financial Times Ltd. All rights reserved. Not to be redistributed, copied, or modified in any way.
