Policy

Amazon marketplace crackdown has sellers searching for legal help

Legit or not —

Clean-up drive has led to some small businesses having their accounts suspended.

Merchants who have been suspended from selling goods on Amazon’s marketplace are turning to a cottage industry of lawyers to regain access to their accounts and money, amid growing scrutiny of how the retailer treats independents.

Millions of accounts on the leading ecommerce platform have been prevented from engaging in sales for alleged violations of Amazon’s broad range of policies and other bad behavior. Even temporary suspensions can be a critical blow to the small business owners who rely on online sales.

Four ecommerce-focused US law firms told the Financial Times that the majority of the cases they took on were complaints brought by aggrieved Amazon sellers, with each handling hundreds or thousands of cases every year.

About a dozen sellers also said they had grown worried about Amazon’s power to suspend their accounts or product listings, as it was not always clear what had triggered the suspension and Amazon’s seller support services did not always help to sort out the issue.

Account suspension was “a big fear of mine,” said one seller, who declined to be named. “At the end of the day, it’s not really your business. One day you can wake up and it’s all gone.”

Amazon’s recent efforts to crack down on issues such as fake product reviews have come as US and European regulators have upped their scrutiny of the online harms facing shoppers.

But critics said the existence of a growing army of lawyers and consultants to deal with the fallout from Amazon’s actions pointed to a problem with the way the retailer treats its sellers.

“If you’re a seller and you need help to navigate the system, that’s a real vulnerability for the marketplace. If you’re operating a business where the people you’re deriving revenue from feel that they’re being treated in an arbitrary way without due process, that is a problem,” said Marianne Rowden, chief executive of the E-Merchants Trade Council.

“The fact that there are entire law firms dedicated to dealing with Amazon says a lot,” said one seller, who like many who spoke to the FT asked to remain anonymous for fear of reprisals.

Amazon declined to comment in detail but said its selling partners were “incredibly important” and the company worked hard to “protect and help them grow their business.” The company worked to “eliminate mistakes and ‘false positive’ enforcements” and had an appeal process for sellers in place.

Sellers on Amazon’s marketplace account for more than 60 percent of sales in its store. In the nine months to September 30, Amazon recorded $96bn in commissions and fees paid by sellers, a jump of nearly 20 percent compared with the same period a year earlier.

As the marketplace has grown, Amazon has had to do more to police it. During the first half of 2023 in its EU store, Amazon took 274mn “actions” in response to potential policy violations and other suspected problems, including the removal of content and 4.2mn account suspensions. Amazon revealed the numbers as part of its first European transparency report, newly required under EU law.

Amazon typically withholds any money in the account of a seller it has suspended for alleged fraudulent or abusive practices, which it may keep permanently if the account is not reinstated and the merchant is deemed to have been a bad actor.

Figuring out what caused a suspension and how to reverse it can be difficult. “We had a listing shut down during Prime Big Deal Days with no warning, no cause, no explanation,” said one kitchenware seller who has been selling on Amazon.com since 2014. “That’s pretty common.”

Amazon gave no further information when the listing was reinstated days later, the seller said.

Such confusion drives some sellers towards lawyers and consultants who advise on underlying problems, such as intellectual property disputes.

Amazon-focused US firms said they typically charged flat fees of between $1,300 and $3,500 per case.

CJ Rosenbaum, founding partner of the Amazon and ecommerce-focused law firm Rosenbaum Famularo, said the practice experienced a “big jump” in demand during the pandemic.

Many cases related to IP complaints from bigger brands “trying to control who sells their products” and making “a baseless counterfeit complaint” against a smaller Amazon seller, he added.

Lawyers said some sellers had been wrongly accused by the company’s automated systems that identify breaches of rules and policies. They added, though, that others had broken Amazon’s rules.

The retailer has become “more draconian” in the enforcement of its policies in recent years, said attorney Jeff Schick.

“Clients will say Amazon is unfair,” he said, but added that if the company did not strictly enforce its rules “then the platform becomes the next [US classified advertisements website] Craigslist.”

When disputes escalate, lawyers might steer merchants through a costly arbitration process that the company requires US sellers to use for most issues, rather than filing lawsuits against it.

Sellers were subject to “forced” arbitration clauses that required them to “sign away the right to their day in court if a dispute with Amazon arises,” said a 2022 US government report.

The details of arbitrations are not public, and decisions do not typically set binding precedents. They can also be hugely expensive: up to three arbitrators preside over a case, and each can charge hundreds of dollars an hour.

“Quickly, you’re at $25,000 of costs or more,” said sole practitioner Leo Vaisburg, who left the firm Wilson Elser in 2022 to pursue Amazon-related work full time. For many small businesses, the high costs were “a barrier to entry,” he added. “Very few cases are worth that kind of money.”
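
To see how the bill climbs that fast, here is a rough cost sketch. Only the panel size (“up to three arbitrators”) and the “hundreds of dollars an hour” rate come from the reporting above; the specific rate and hours billed are hypothetical placeholders:

```python
# Rough sketch of how arbitration fees reach the figure quoted above.
# The rate and hours are hypothetical; only "up to three arbitrators"
# and "hundreds of dollars an hour" come from the article.

arbitrators = 3        # maximum panel size mentioned above
rate_per_hour = 400    # assumed mid-range hourly rate, in dollars
hours_billed = 25      # assumed hearing and review time per arbitrator

total = arbitrators * rate_per_hour * hours_billed
print(f"${total:,}")   # $30,000 -- in line with "quickly at $25,000 or more"
```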

© 2024 The Financial Times Ltd. All rights reserved. Not to be redistributed, copied, or modified in any way.

Vizio settles for $3M after saying 60 Hz TVs had 120 Hz “effective refresh rate”

Class action —

Vizio claimed backlight scanning made refresh rates seem twice as high.

A marketing image for Vizio’s P-series Q9 TV.

Vizio has agreed to pay $3 million to settle a class-action lawsuit that alleged the company misled customers about the refresh rates of its TVs.

In 2018, a lawsuit [PDF], which was later certified as a class action, was filed against Vizio for advertising its 60 Hz and 120 Hz LCD TVs as having “effective” refresh rates of 120 Hz and 240 Hz, respectively. Vizio was referring to backlight scanning (or black frame insertion), a technique it claimed made the TVs look as if they were operating at twice the refresh rate their panels are actually capable of. Vizio’s claims failed to address the drawbacks that can come with backlight scanning, including reduced brightness and the potential for noticeable flickering. The lawsuit took issue with Vizio’s language in marketing materials and user manuals.
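
The dispute turns on the difference between frames and flashes, which a toy model makes concrete. This is a hypothetical sketch for illustration only; the 60 Hz panel figure comes from the article, and nothing here reflects Vizio’s actual firmware:

```python
# Toy model of "effective refresh rate" via black frame insertion (BFI).
# A 60 Hz panel renders 60 unique images per second; strobing a black
# period after each frame doubles the distinct light events per second,
# which marketing called "120 Hz effective." The frame count is unchanged.

PANEL_HZ = 60

# One real frame followed by one inserted black period, 60 times a second.
light_events = [event for frame in range(PANEL_HZ)
                for event in (f"frame {frame}", "black")]

print(len(light_events))   # 120 light events per second...
print(PANEL_HZ)            # ...but still only 60 unique frames
```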

The lawsuit read:

Vizio knows, or at the very least should know, that its television with 60Hz display panels have a refresh rate of 60 images per second and that backlight manipulation methods cannot and do not increase the effective Hz (refresh rate) of a television.

The lawsuit, filed in the Superior Court of California, County of Los Angeles, accused Vizio of using misleading tactics to persuade retailers to sell and recommend Vizio TVs. The strategy, it said, allowed Vizio “to sell its lesser-quality product at a higher price and allowed Vizio to realize sales it may not have otherwise made if it were truthful regarding the performance capabilities of its televisions.”

Under the settlement terms [PDF] spotted by The Verge, people who bought a Vizio TV in California after April 30, 2014, can file a claim. They’ll receive $17, or up to $50 if the fund allows it; the individual payout may also fall below $17 if claims exceed the $3 million fund. Vizio will also pay attorney fees. People have until March 30 to submit their claims. The final approval hearing is scheduled for June 20.
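
Those terms amount to a capped pro-rata split of the fund. Here is a minimal sketch of how such an allocation typically works; the $17 base, $50 ceiling, and $3 million fund come from the settlement terms above, while treating attorney fees as paid outside the fund is a simplifying assumption:

```python
# Capped pro-rata payout sketch based on the settlement terms above.
# Assumes attorney fees are paid outside the $3 million fund.

FUND = 3_000_000
BASE_PAYOUT = 17   # the expected per-claimant payout
MAX_PAYOUT = 50    # the ceiling if relatively few claims are filed

def per_claimant_payout(num_claims: int) -> float:
    """Even split of the fund, capped at $50; falls below $17 if oversubscribed."""
    if num_claims == 0:
        return 0.0
    return min(FUND / num_claims, MAX_PAYOUT)

# About 176,000 claims is the break-even point where everyone gets $17.
print(round(FUND / BASE_PAYOUT))   # ~176471 claims
for claims in (50_000, 176_471, 300_000):
    print(claims, "claims ->", f"${per_claimant_payout(claims):.2f} each")
```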

Vizio also agreed to stop advertising its TVs with 120 Hz and 240 Hz “effective” refresh rates but “will not be obligated to recall or modify labeling for any Vizio-branded television model that has already been sold or distributed to a third party,” according to the agreement. The California-headquartered company will also offer affected customers a “service and limited warranty package conservatively valued at $25” per person.

Vizio, per the settlement, denies any wrongdoing. The company declined to comment on the settlement to Ars.

The settlement comes as tactics for fighting motion blur, like backlight scanning and frame interpolation (known for causing the “soap opera effect”), have been maligned for often making the viewing experience worse. LG and TCL have also faced class-action lawsuits for inflating refresh-rate claims by saying that their motion-blur-fighting techniques make it seem as though their TVs run at a higher refresh rate than their panels allow. While the case against LG was dismissed, TCL settled for $2,900,000 [PDF].

Despite the criticism, backlight scanning and motion smoothing remain enabled by default on countless TVs belonging to unsuspecting owners. Class-action cases like Vizio’s that end up costing OEMs money provide further incentive for them to at least stop using these features to superficially inflate spec sheets.

Since Elon Musk’s Twitter purchase, firm reportedly lost 72% of its value

Going down, down, down… —

Fidelity cuts value of X stake, implying 72% drop since Musk paid $44 billion.

Fidelity’s latest valuation of its stake in X implies that Elon Musk’s social network is worth about 71.5 percent less than when Musk bought the company in October 2022.

Fidelity’s Blue Chip Growth Fund has a relatively small stake in X. A monthly update for the fund listed the value of its “X Holdings Corp.” stake at $5.6 million as of November 30, 2023. The fund’s share of X was originally worth $19.7 million but lost about two-thirds of its value by April 2023 and has dropped more modestly since then.

Fidelity cut its valuation of X by 10.7 percent in November, according to Axios. It isn’t clear whether Fidelity sold any of its stake during November, but the latest drop in value isn’t surprising given the recent Musk-related controversies that drove advertisers away from the platform.

“As of Oct. 30 the fund hadn’t sold any of its stake, but the monthly report with the updated valuation doesn’t disclose whether the size of the holding changed,” Bloomberg wrote. “Assuming the fund hasn’t reduced its holding in X, the latest report implies the value of the entire company has also fallen by 72 percent. Fidelity declined to comment.”

X’s ad woes hurt value

Based on the $44 billion that Musk paid for Twitter over a year ago, the drop in Fidelity’s valuation would make the company worth about $12.5 billion. X reportedly valued itself at about $19 billion in October, based on the value of stock grants to employees.
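
The arithmetic behind that figure follows directly from the fund’s reported stake values; a minimal back-of-the-envelope check, assuming (as Bloomberg notes one must) that the fund’s holding was unchanged:

```python
# Back-of-the-envelope check of the implied valuation, using the figures
# reported above and assuming the fund's holding size was unchanged.

original_stake = 19.7e6   # fund's stake value at the October 2022 buyout
current_stake = 5.6e6     # reported value as of November 30, 2023
purchase_price = 44e9     # what Musk paid for Twitter

drop = 1 - current_stake / original_stake
implied_value = purchase_price * (1 - drop)

print(f"implied drop:  {drop:.1%}")                          # ~72%
print(f"implied value: ${implied_value / 1e9:.1f} billion")  # ~$12.5 billion
```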

Since Musk took Twitter private, the company’s value and revenue are harder to determine from the outside. As Axios noted, “Fidelity doesn’t necessarily have much, if any, inside information on X’s financial performance, despite being a shareholder in the privately held business. Other shareholders may value their X stock differently.”

X’s finances were shaky enough at the end of October, the one-year anniversary of Musk’s purchase. Musk made things worse in mid-November when he posted a favorable response to an antisemitic tweet. He addressed the antisemitism controversy in a public interview on November 29, telling businesses that pulled advertising from X to “go fuck yourself.”

X has had trouble retaining advertisers throughout Musk’s tenure, due largely to his approach to content moderation. Musk eliminated most of the company’s staff shortly after becoming its owner.

X loses bid to block California law

X is dealing with new regulations on content moderation, both in Europe and the US. Musk’s company sued California in September in an attempt to block the state’s content-moderation law but last week lost a key ruling in the court case.

On Thursday, US District Judge William Shubb denied X’s motion for a preliminary injunction that would have blocked enforcement of the California content-moderation law. The state law requires companies to file two reports each year with terms of service and detailed descriptions of content-moderation practices.

Shubb rejected X’s claim that the law violates the First Amendment. “While the reporting requirement does appear to place a substantial compliance burden on social media companies, it does not appear that the requirement is unjustified or unduly burdensome within the context of First Amendment law,” Shubb wrote.

The judge agreed with California that there is “a substantial government interest in requiring social media companies to be transparent about their content moderation policies and practices so that consumers can make informed decisions about where they consume and disseminate news and information.”

FDA would like to stop finding Viagra in supplements sold on Amazon

Well, that’s one kind of energy —

“Big Guys Male Energy Supplement” turns out to be a vehicle for prescription drugs.

If you were to search for a product called “Mens Maximum Energy Supplement” on Amazon, you’d be bombarded with everything from caffeine pills to amino acid supplements to the latest herb craze. But at some point last year, the FDA purchased a specific product by that name from Amazon and sent it off to one of its labs to find out whether the self-proclaimed “dietary supplement” contained anything that would actually boost energy.

In August, the FDA announced that the supposed supplement was actually a vehicle for a prescription drug that offered a very specific type of energy boost. It contained sildenafil, a drug much better known by its brand name: Viagra.

Four months later, the FDA is finally getting around to issuing a warning letter to Amazon, giving it 15 days not only to address Mens Maximum Energy Supplement and a handful of similar vehicles for prescription erection boosters, but also to explain how the company will keep similarly mislabeled prescription drugs from being hawked on its site in the future.

Prescription energy

Mens Maximum Energy Supplement was just one of seven products the FDA found for sale on Amazon that contained either sildenafil or tadalafil (marketed as Cialis). The product names ranged from the jokey (WeFun and Genergy) to the vaguely suggestive (Round 2) to the verbose (Big Guys Male Energy Supplement and X Max Triple Shot Energy Honey). All of them were marketed as supplements, and none gave any indication of their active ingredients.

And that, as the FDA explains to Amazon in detail, means selling those products violates a whole host of laws and regulations. They’re marketed as dietary supplements but don’t fit the operative legal definition of one. They offer prescription drugs without providing directions for their intended and safe use. They carry no warnings about unsafe doses or how long they can safely be used.

The FDA points out that these rules exist for very good reasons. Both of the drugs found in these supplements inhibit an enzyme called phosphodiesterase type 5, which, among other things, influences the circulatory system. One potential side effect is a dangerous drop in blood pressure. Both sildenafil and tadalafil can also have dangerous interactions with a specific class of drugs often taken by those with diabetes, high blood pressure, or heart disease.

Legal remedies

The FDA’s letter makes it clear that the highlighted supplements aren’t intended to be an exhaustive list of the products that Amazon offers in violation of federal law. And it is very explicit about the fact that it is Amazon’s responsibility (and not the FDA’s) to ensure compliance: “You are responsible for investigating and determining the causes of any violations and for preventing their recurrence or the occurrence of other violations.”

And Amazon clearly has its work cut out for it. None of the products cited in the FDA’s letter appear to still be for sale under the same name at Amazon—a company spokesperson told Ars that it pulled them in response to the original FDA findings. But searches for them at Amazon brought up a number of similar products, many of which included pills in the same blue color that Viagra is marketed with.

So, the FDA wants to see a plan that describes how Amazon will not only deal with the products at issue in this letter, but prevent all similar violations in the future: “Include an explanation of each step being taken to prevent the recurrence of violations, including steps you will take to ensure that Amazon will no longer introduce or deliver for introduction into interstate commerce unapproved new drugs and/or misbranded products with undeclared drug ingredients, as well as copies of related documentation.”

Amazon is being given 15 days to respond to the warning letter. Failure to adequately address these violations, the FDA warns, will result in legal action.

Google agrees to settle Chrome incognito mode class action lawsuit

Not as private as you thought —

2020 lawsuit accused Google of tracking incognito activity, tying it to users’ profiles.

Google has indicated that it is ready to settle a class-action lawsuit filed in 2020 over its Chrome browser’s Incognito mode. Filed in the Northern District of California, the lawsuit accused Google of continuing to “track, collect, and identify [users’] browsing data in real time” even when they had opened a new Incognito window.

The lawsuit, filed by Florida resident William Byatt and California residents Chasom Brown and Maria Nguyen, accused Google of violating wiretap laws. It also alleged that sites using Google Analytics or Ad Manager collected information from browsers in Incognito mode, including web page content, device data, and IP address. The plaintiffs also accused Google of taking Chrome users’ private browsing activity and then associating it with their already-existing user profiles.

Google initially attempted to have the lawsuit dismissed by pointing to the message displayed when users turned on Chrome’s Incognito mode. That warning tells users that their activity “might still be visible to websites you visit.”

Judge Yvonne Gonzalez Rogers rejected Google’s bid for summary judgment in August, pointing out that Google never revealed to its users that data collection continued even while they surfed in Incognito mode.

“Google’s motion hinges on the idea that plaintiffs consented to Google collecting their data while they were browsing in private mode,” Rogers ruled. “Because Google never explicitly told users that it does so, the Court cannot find as a matter of law that users explicitly consented to the at-issue data collection.”

According to the notice filed on Tuesday, Google and the plaintiffs have agreed to terms that will result in the litigation being dismissed. The agreement will be presented to the court by the end of January, with the court giving final approval by the end of February.

NY Times copyright suit wants OpenAI to delete all GPT instances

Not the sincerest form of flattery —

Suit shows evidence that GPT-based systems will reproduce Times articles if asked.

Microsoft is named in the suit for allegedly building the system that allowed GPT derivatives to be trained using infringing material.

In August, word leaked out that The New York Times was considering joining the growing legion of creators that are suing AI companies for misappropriating their content. The Times had reportedly been negotiating with OpenAI regarding the potential to license its material, but those talks had not gone smoothly. Now, months after the company was reportedly considering suing, the suit has been filed.

The Times is targeting various companies under the OpenAI umbrella, as well as Microsoft, an OpenAI partner that both uses it to power its Copilot service and helped provide the infrastructure for training the GPT Large Language Model. But the suit goes well beyond the use of copyrighted material in training, alleging that OpenAI-powered software will happily circumvent the Times’ paywall and ascribe hallucinated misinformation to the Times.

Journalism is expensive

The suit notes that The Times maintains a large staff that allows it to do things like dedicate reporters to a huge range of beats and engage in important investigative journalism, among other things. Because of those investments, the newspaper is often considered an authoritative source on many matters.

All of that costs money, and The Times earns it by limiting access to its reporting through a robust paywall. Each print edition also carries a copyright notification, the Times’ terms of service limit the copying and use of any published material, and the paper can be selective about how it licenses its stories. Beyond driving revenue, these restrictions help The Times maintain its reputation as an authoritative voice by controlling how its works appear.

The suit alleges that OpenAI-developed tools undermine all of that. “By providing Times content without The Times’s permission or authorization, Defendants’ tools undermine and damage The Times’s relationship with its readers and deprive The Times of subscription, licensing, advertising, and affiliate revenue,” the suit alleges.

Part of the unauthorized use The Times alleges came during the training of various versions of GPT. Prior to GPT-3.5, information about the training dataset was made public. One of the sources used is a large collection of online material called “Common Crawl,” which the suit alleges contains 16 million unique records from sites published by The Times. That makes The Times the third-most-referenced source, behind only Wikipedia and a database of US patents.

OpenAI no longer discloses as many details of the data used to train recent GPT versions, but all indications are that full-text NY Times articles are still part of that process. (Much more on that in a moment.) Expect access to training information to be a major issue during discovery if this case moves forward.

Not just training

A number of suits have been filed regarding the use of copyrighted material during training of AI systems. But the Times’ suit goes well beyond that to show how the material ingested during training can come back out during use. “Defendants’ GenAI tools can generate output that recites Times content verbatim, closely summarizes it, and mimics its expressive style, as demonstrated by scores of examples,” the suit alleges.

The suit alleges—and we were able to verify—that it’s comically easy to get GPT-powered systems to offer up content that is normally protected by the Times’ paywall. The suit shows a number of examples of GPT-4 reproducing large sections of articles nearly verbatim.

The suit includes screenshots of ChatGPT being given the title of a piece at The New York Times and asked for the first paragraph, which it delivers. Getting the ensuing text is apparently as simple as repeatedly asking for the next paragraph.

OpenAI has apparently closed that loophole between the preparation of the suit and the present. We entered some of the prompts shown in the suit and were advised, “I recommend checking The New York Times website or other reputable sources,” although we can’t rule out that context provided prior to that prompt could produce copyrighted material.

Ask for a paragraph, and Copilot will hand you a wall of normally paywalled text.

But not all loopholes have been closed. The suit also shows output from Bing Chat, since rebranded as Copilot. We were able to verify that asking for the first paragraph of a specific article at The Times caused Copilot to reproduce the first third of the article.

The suit is dismissive of attempts to justify this as a form of fair use. “Publicly, Defendants insist that their conduct is protected as ‘fair use’ because their unlicensed use of copyrighted content to train GenAI models serves a new ‘transformative’ purpose,” the suit notes. “But there is nothing ‘transformative’ about using The Times’s content without payment to create products that substitute for The Times and steal audiences away from it.”

Reputational and other damages

The hallucinations common to AI also came under fire in the suit for potentially damaging the value of the Times’ reputation, and possibly damaging human health as a side effect. The suit alleges that a GPT model “completely fabricated” that The New York Times published an article on January 10, 2020, titled “Study Finds Possible Link between Orange Juice and Non-Hodgkin’s Lymphoma.” “The Times never published such an article,” the suit notes.

Similarly, asking about a Times article on heart-healthy foods allegedly resulted in Copilot saying it contained a list of examples (which it didn’t). When asked for the list, 80 percent of the foods on it weren’t even mentioned in the original article. In another case, recommendations were ascribed to Wirecutter when the products hadn’t even been reviewed by its staff.

As with the Times material, it’s alleged that it’s possible to get Copilot to offer up large chunks of Wirecutter articles (Wirecutter is owned by The New York Times). But the suit notes that these article excerpts have the affiliate links stripped out of them, depriving Wirecutter of its primary source of revenue.

The suit targets various OpenAI companies for developing the software, as well as Microsoft—the latter for both offering OpenAI-powered services, and for having developed the computing systems that enabled the copyrighted material to be ingested during training. Allegations include direct, contributory, and vicarious copyright infringement, as well as DMCA and trademark violations. Finally, it alleges “Common Law Unfair Competition By Misappropriation.”

The suit seeks nothing less than the erasure of any GPT instances that the parties have trained using material from the Times, along with the destruction of the datasets used for that training. It also asks for a permanent injunction to prevent similar conduct in the future. The Times also wants money, lots and lots of money: “statutory damages, compensatory damages, restitution, disgorgement, and any other relief that may be permitted by law or equity.”

Elon Musk will see you in court: The top Twitter and X Corp. lawsuits of 2023

Elon Musk speaks at the Atreju political convention organized by Fratelli d’Italia (Brothers of Italy) on December 15, 2023, in Rome, Italy.

Elon Musk’s ownership of Twitter, now called X, began with a lawsuit. When Musk tried to break a $44 billion merger agreement, Twitter filed a lawsuit that gave Musk no choice but to complete the deal.

In the year-plus since Musk bought the company, he’s been the defendant and plaintiff in many more lawsuits involving Twitter and X Corp. As 2023 comes to a close, this article rounds up a selection of notable lawsuits involving the Musk-led social network and provides updates on the status of the cases.

Musk sues Twitter law firm

Musk seemingly held a grudge against the law firm that helped Twitter force Musk to complete the merger. In July, X Corp. sued Wachtell, Lipton, Rosen & Katz in an attempt to claw back the $90 million that Twitter paid the firm before Musk completed the acquisition.

Most of that money was paid to Wachtell hours before the merger closed. X’s lawsuit in San Francisco County Superior Court claimed that “Wachtell arranged to effectively line its pockets with funds from the company cash register while the keys were being handed over to the Musk Parties.”

Wachtell sought to move the dispute into arbitration, pointing out that the contract between itself and Twitter contained a binding arbitration clause. In October, the court granted Wachtell’s motion to compel arbitration and stayed the lawsuit pending the outcome.

Unpaid-bill lawsuits

While Twitter paid the Wachtell legal bill before Musk could block the payment, dozens of lawsuits allege that X has refused to pay bills owed to other companies that started providing services to Twitter before the Musk takeover.

The suits were filed by software vendors, landlords, event planning firms, a private jet company, an office renovator, consultants, and other companies. The lawsuits helped some companies obtain payment via settlements, but X has continued to fight many of the allegations. We covered the unpaid-bill lawsuits in-depth in this lengthy article published in September.

Musk sues Media Matters

Musk has repeatedly blamed outside parties for X’s financial problems, which are largely due to advertisers not wanting to be associated with offensive and controversial content that used to be more heavily moderated before Musk slashed the company’s staff.

One of the biggest ad-spending drops came after a November 16 Media Matters report that said corporate ads were placed “next to content that touts Adolf Hitler and his Nazi Party.” Musk’s X Corp responded by suing Media Matters a few days later, claiming the group “manipulated the algorithms governing the user experience on X to bypass safeguards and create images of X’s largest advertisers’ paid posts adjacent to racist, incendiary content.”

The suit was filed in US District Court for the Northern District of Texas. There aren’t any significant updates on the case to report yet.

X Corp. previously filed a similar lawsuit against the nonprofit Center for Countering Digital Hate (CCDH), claiming the group “improperly gain[ed] access” to data, and “cherry-pick[ed] from the hundreds of millions of posts made each day on X” in order to “falsely claim it had statistical support showing the platform is overwhelmed with harmful content.”

The CCDH filed a motion to dismiss X’s lawsuit on November 16, saying that its actions constituted “newsgathering activity in furtherance of the CCDH defendants’ protected speech and reporting.” The motion and case are still pending in US District Court for the Northern District of California.

Musk suit against data scrapers tossed

In July, X Corp. sued unidentified data scrapers in Dallas County District Court, accusing them of “severely tax[ing]” company servers by “flooding Twitter’s sign-up page with automated requests.” The lawsuit was filed days after Twitter imposed rate limits capping the number of tweets users could view each day.

“Several entities tried to scrape every tweet ever made in a short period of time. That is why we had to put rate limits in place,” Musk wrote at the time.

The lawsuit initially listed four John Doe defendants and was amended to raise the number of defendants to 11. This was a tough lawsuit for X to pursue because it didn’t know who the scrapers were and identified them only by their IP addresses.

X issued subpoenas to Amazon Web Services, Akamai, and Google in attempts to gain information on the John Does behind the IP addresses, but the case fizzled out. On October 30, a Dallas County judge dismissed the lawsuit “for want of prosecution” and ordered X to pay the court costs.

US agency tasked with curbing risks of AI lacks funding to do the job

More dollars needed —

Lawmakers fear NIST will have to rely on companies developing the technology.

US president Joe Biden’s plan for containing the dangers of artificial intelligence already risks being derailed by congressional bean counters.

A White House executive order on AI announced in October calls on the US to develop new standards for stress-testing AI systems to uncover their biases, hidden threats, and rogue tendencies. But the agency tasked with setting these standards, the National Institute of Standards and Technology (NIST), lacks the budget needed to complete that work independently by the July 26, 2024, deadline, according to several people with knowledge of the work.

Speaking at the NeurIPS AI conference in New Orleans last week, Elham Tabassi, associate director for emerging technologies at NIST, described this as “an almost impossible deadline” for the agency.

Some members of Congress have grown concerned that NIST will be forced to rely heavily on AI expertise from private companies that, due to their own AI projects, have a vested interest in shaping standards.

The US government has already tapped NIST to help regulate AI. In January 2023 the agency released an AI risk management framework to guide business and government. NIST has also devised ways to measure public trust in new AI tools. But the agency, which standardizes everything from food ingredients to radioactive materials and atomic clocks, has puny resources compared to those of the companies at the forefront of AI. OpenAI, Google, and Meta each likely spent upwards of $100 million to train the powerful language models that undergird applications such as ChatGPT, Bard, and Llama 2.

NIST’s budget for 2023 was $1.6 billion, and the White House has requested that it be increased by 29 percent in 2024 for initiatives not directly related to AI. Several sources familiar with the situation at NIST say that the agency’s current budget will not stretch to figuring out AI safety testing on its own.

On December 16, the same day Tabassi spoke at NeurIPS, six members of Congress signed a bipartisan open letter raising concern about the prospect of NIST enlisting private companies with little transparency. “We have learned that NIST intends to make grants or awards to outside organizations for extramural research,” they wrote. The letter warns that there does not appear to be any publicly available information about how those awards will be decided.

The lawmakers’ letter also claims that NIST is being rushed to define standards even though research into testing AI systems is at an early stage. As a result there is “significant disagreement” among AI experts over how to work on or even measure and define safety issues with the technology, it states. “The current state of the AI safety research field creates challenges for NIST as it navigates its leadership role on the issue,” the letter claims.

NIST spokesperson Jennifer Huergo confirmed that the agency had received the letter and said that it “will respond through the appropriate channels.”

NIST is making some moves that would increase transparency, including issuing a request for information on December 19, soliciting input from outside experts and companies on standards for evaluating and red-teaming AI models. It is unclear if this was a response to the letter sent by the members of Congress.

The concerns raised by lawmakers are shared by some AI experts who have spent years developing ways to probe AI systems. “As a nonpartisan scientific body, NIST is the best hope to cut through the hype and speculation around AI risk,” says Rumman Chowdhury, a data scientist and CEO of Parity Consulting, who specializes in testing AI models for bias and other problems. “But in order to do their job well, they need more than mandates and well wishes.”

Yacine Jernite, machine learning and society lead at Hugging Face, a company that supports open source AI projects, says big tech has far more resources than the agency given a key role in implementing the White House’s ambitious AI plan. “NIST has done amazing work on helping manage the risks of AI, but the pressure to come up with immediate solutions for long-term problems makes their mission extremely difficult,” Jernite says. “They have significantly fewer resources than the companies developing the most visible AI systems.”

Margaret Mitchell, chief ethics scientist at Hugging Face, says the growing secrecy around commercial AI models makes measurement more challenging for an organization like NIST. “We can’t improve what we can’t measure,” she says.

The White House executive order calls for NIST to perform several tasks, including establishing a new Artificial Intelligence Safety Institute to support the development of safe AI. In April, a UK taskforce focused on AI safety was announced. It will receive $126 million in seed funding.

The executive order gave NIST an aggressive deadline for coming up with, among other things, guidelines for evaluating AI models, principles for “red-teaming” (adversarially testing) models, a plan for getting US-allied nations to agree to NIST standards, and a plan for “advancing responsible global technical standards for AI development.”

Although it isn’t clear how NIST is engaging with big tech companies, discussions on NIST’s risk management framework, which took place prior to the announcement of the executive order, involved Microsoft; Anthropic, a startup formed by ex-OpenAI employees that is building cutting-edge AI models; Partnership on AI, which represents big tech companies; and the Future of Life Institute, a nonprofit dedicated to existential risk, among others.

“As a quantitative social scientist, I’m both loving and hating that people realize that the power is in measurement,” Chowdhury says.

This story originally appeared on wired.com.

From CZ to SBF, 2023 was the year of the fallen crypto bro

Looking back, 2023 will likely be remembered as the year of the fallen crypto bro.

While celebrities like Kim Kardashian and Matt Damon last year faced public backlash after shilling for cryptocurrency, this year’s top headlines traced the downfalls of two of the most successful and influential crypto bros of all time: FTX co-founder Sam Bankman-Fried (often referred to as SBF) and Binance founder Changpeng Zhao (commonly known as CZ).

At 28 years old, Bankman-Fried made Forbes’ 30 Under 30 list in 2021. Within two short years, his recently updated Forbes profile notes, the man who was once “one of the richest people in crypto” has suffered “a stunning fall from grace” and now has a real-time net worth of $0.

In November, Bankman-Fried was convicted by a 12-member jury of defrauding FTX customers, after a monthlong trial where federal prosecutors accused him of building FTX into “a pyramid of deceit.” The trial followed months of wild headlines—comparing Bankman-Fried to a cartoon villain, accusing Bankman-Fried of stealing $2.2 billion from FTX customers to buy things like a $16.4 million house for his parents, and revealing that Bankman-Fried casually joked about losing track of $50 million.

Defending his conduct at FTX, Bankman-Fried argued that “dishonesty and unfair dealing” aren’t fraud and even claimed that he couldn’t recall what he did at FTX, while FTX scrambled to recover $7.3 billion and put out the “dumpster fire.”

Ultimately, Bankman-Fried’s former FTX/Alameda Research partners, including his ex-girlfriend Caroline Ellison, testified against him. Ellison’s testimony led to even weirder revelations about SBF, like Bankman-Fried’s aspirations to become US president and his professed rejection of moral ideals like “don’t steal.” By the end of the trial, it seemed like very few felt any sympathy for the once-FTX kingpin.

Bankman-Fried now faces a maximum sentence of 110 years. His exact sentence is scheduled to be determined by a US district judge in March 2024, Reuters reported.

While FTX had been considered a giant force in the cryptocurrency world, Binance is still the world’s biggest cryptocurrency exchange—and, Bloomberg reported, is considered even more “systemically important” to the crypto ecosystem. That’s why it was a huge deal when Binance was rocked by its own scandal in 2023 that ended in its founder and CEO, Zhao, admitting to money laundering and resigning.

Arguably, Zhao’s fall from grace may have been more shocking to cryptocurrency fans than Bankman-Fried’s. Just one month prior to Zhao’s resignation, after FTX collapsed, The Economist had dubbed CZ “crypto’s last man standing.”

Zhao launched Binance in 2017 and the next year was featured on the cover of Forbes’ first list of the wealthiest people in crypto. Peering out from under a hoodie, Zhao was considered by Forbes to be a “crypto overlord,” going from “zero to billionaire in six months,” where other crypto bros had only managed to become millionaires.

But 2023 put an abrupt end to Zhao’s reign at Binance. In March, the Commodity Futures Trading Commission (CFTC) sued Binance and Zhao over suspected money laundering and sanctions violations, triggering a Securities and Exchange Commission lawsuit in June and a Department of Justice (DOJ) probe. In the end, Binance owed billions in fines to the DOJ and the CFTC, which Secretary of the Treasury Janet Yellen called “historic penalties.” For directing Binance employees to skirt US regulatory compliance—and to hide more than 100,000 suspicious transactions linked to terrorism, child sexual abuse materials, and ransomware attacks—Zhao now personally owes the CFTC $150 million.

On the social media platform X (formerly Twitter), Zhao wrote that after stepping down as Binance’s CEO, he would take a break and would likely never helm a startup again.

“I am content being [a] one-shot (lucky) entrepreneur,” Zhao wrote.

Banks use your deposits to loan money to fossil-fuel, emissions-heavy firms

Money for something —

Your $1,000 in the bank creates emissions equal to a flight from NYC to Seattle.

When you drop money in the bank, it looks like it’s just sitting there, ready for you to withdraw. In reality, your institution makes money on your money by lending it elsewhere, including to the fossil fuel companies driving climate change, as well as emissions-heavy industries like manufacturing.

So just by leaving money in a bank account, you’re unwittingly contributing to worsening catastrophes around the world. According to a new analysis, for every $1,000 the average American keeps in savings, they indirectly create emissions each year equivalent to flying from New York to Seattle. “We don’t really take a look at how the banks are using the money we keep in our checking account on a daily basis, where that money is really circulating,” says Jonathan Foley, executive director of Project Drawdown, which published the analysis. “But when we look under the hood, we see that there’s a lot of fossil fuels.”

By switching to a climate-conscious bank, you could reduce those emissions by about 75 percent, the study found. In fact, if you moved $8,000—the median balance for US customers—the reduction in your indirect emissions would be twice the direct emissions you’d avoid by switching to a vegetarian diet.

Put another way: You as an individual have a carbon footprint—by driving a car, eating meat, running a gas furnace instead of a heat pump—but your money also has a carbon footprint. Banking, then, is an underappreciated yet powerful avenue for climate action on a mass scale. “Not just voting every four years, or not just skipping the hamburger, but also where my money sits, that’s really important,” says Foley.

Just as you can borrow money from a bank, so too do fossil fuel companies and the companies that support that industry—think of building pipelines and other infrastructure. “Even if it’s not building new pipelines, for a fossil fuel company to be doing just its regular operations—whether that’s maintaining the network of gas stations that it owns, or maintaining existing pipelines, or paying its employees—it’s going to need funding for that,” says Paddy McCully, senior analyst at Reclaim Finance, an NGO focused on climate action.

A fossil fuel company’s need for those loans varies from year to year, given the fluctuating prices of those fuels. That’s where you, the consumer, come in. “The money that an individual puts into their bank account makes it possible for the bank to then lend money to fossil fuel companies,” says Richard Brooks, climate finance director at Stand.earth, an environmental and climate justice advocacy group. “If you look at the top 10 banks in North America, each of them lends out between $20 billion and $40 billion to fossil fuel companies every year.”

The new report finds that on average, 11 of the largest US banks lend 19.4 percent of their portfolios to carbon-intensive industries. (The American Bankers Association did not immediately respond to a request to comment for this story.) To be very clear: Oil, gas, and coal companies wouldn’t be able to keep producing these fuels—when humanity needs to be reducing carbon emissions dramatically and rapidly—without these loans. New fossil fuel projects aren’t simply fleeting endeavors, but will operate for years, locking in a certain amount of emissions going forward.
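
The attribution logic behind such estimates can be sketched in a few lines. The 19.4 percent portfolio share and the roughly 75 percent reduction come from the report discussed above; the emissions-intensity figure is a hypothetical placeholder, not a number from the report:

```python
# Minimal financed-emissions attribution sketch. The 19.4 percent lending
# share and ~75 percent reduction are the report's figures; the emissions
# intensity is a made-up placeholder for illustration.

DEPOSIT_USD = 1_000
CARBON_INTENSIVE_SHARE = 0.194    # avg. portfolio share, 11 large US banks
EMISSIONS_KG_PER_USD = 1.0        # assumed kg CO2e financed per dollar lent

def financed_emissions_kg(deposit: float, share: float, intensity: float) -> float:
    """Yearly emissions indirectly attributable to a deposit."""
    return deposit * share * intensity

conventional = financed_emissions_kg(
    DEPOSIT_USD, CARBON_INTENSIVE_SHARE, EMISSIONS_KG_PER_USD)
climate_conscious = conventional * (1 - 0.75)   # report's ~75% reduction

print(f"conventional bank:      {conventional:.0f} kg CO2e/year")
print(f"climate-conscious bank: {climate_conscious:.0f} kg CO2e/year")
```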

At the same time, Brooks says, big banks are under-financing the green economy. As a civilization, we’re investing in the wrong kind of energy if we want to avoid the ever-worsening effects of climate change. Yes, 2022 was the first year that climate finance surpassed the trillion-dollar mark. “However, the alarming aspect is that climate finance must increase by at least fivefold annually, as swiftly as possible, to mitigate the worst impacts of climate change,” says Valerio Micale, senior manager of the Climate Policy Initiative. “An even more critical consideration is that this cost, which would accumulate to $266 trillion until 2050, pales in comparison to the costs of inaction, estimated at over $2,000 trillion over the same period.”

Smaller banks, at least, are less likely to be providing money for the fossil fuel industry. A credit union operates more locally, so it’s much less likely to be fronting money for, say, a new oil pipeline. “Big fossil fuel companies go to the big banks for their financing,” says Brooks. “They’re looking for loans in the realm of hundreds of millions of dollars, sometimes multibillion-dollar loans, and a credit union wouldn’t be able to provide that.”

This makes banking a uniquely powerful lever to pull when it comes to climate action, Foley says. Compared to switching to vegetarianism or veganism to avoid the extensive carbon emissions associated with animal agriculture, money is easy to move. “If large numbers of people start to tell their financial institutions that they don’t really want to participate in investing in fossil fuels, that slowly kind of drains capital away from what’s available for fossil fuels,” says Foley.

While the new report didn’t go so far as to exhaustively analyze the lending habits of the thousands of banks in the US, Foley says there’s a growing number that deliberately don’t invest in fossil fuels. If you’re not sure about what your bank is investing in, you can always ask. “I think when people hear we need to move capital out of fossil fuels into climate solutions, they probably think only Warren Buffett can do that,” says Foley. “That’s not entirely true. We can all do a little bit of that.”

This story originally appeared on wired.com.

FTC suggests new rules to shift parents’ burden of protecting kids to websites

Ending the endless tracking of kids —

FTC seeking public comments on new rules to expand children’s privacy law.

The Federal Trade Commission (FTC) is currently seeking comments on new rules that would further restrict platforms’ efforts to monetize children’s data.

Through the Children’s Online Privacy Protection Act (COPPA), the FTC initially sought to give parents more control over what kinds of information various websites and apps can collect from their kids. Now, the FTC wants to update COPPA and “shift the burden from parents to providers to ensure that digital services are safe and secure for children,” the FTC’s press release said.

“By requiring firms to better safeguard kids’ data, our proposal places affirmative obligations on service providers and prohibits them from outsourcing their responsibilities to parents,” FTC chair Lina Khan said.

Among proposed rules, the FTC would require websites to turn off targeted advertising by default and prohibit sending push notifications to encourage kids to use services more than they want to. Surveillance in schools would be further restricted, so that data is only collected for educational purposes. And data security would be strengthened by mandating that websites and apps “establish, implement, and maintain a written children’s personal information security program that contains safeguards that are appropriate to the sensitivity of the personal information collected from children.”

Perhaps most significantly, COPPA would also be updated to stop companies from retaining children’s data forever, explicitly stating that “operators cannot retain the information indefinitely.” In a statement, commissioner Alvaro Bedoya called this a “critical protection” at a time when “new, machine learning-fueled systems require ever larger amounts of training data.”

These proposed changes were designed to address “the evolving ways personal information is being collected, used, and disclosed, including to monetize children’s data,” the FTC said.

Keeping up with advancing technology, the FTC said, also requires expanding COPPA’s definition of “personal information” to include biometric identifiers. That change was likely inspired by charges brought against Amazon earlier this year, when the FTC accused Amazon of violating COPPA by retaining tens of thousands of children’s Alexa voice recordings forever.

Once the notice of proposed rulemaking is published in the Federal Register, the public will have 60 days to submit comments. The FTC likely anticipates that thousands of parents and stakeholders will weigh in, noting that when COPPA was last updated in 2019, more than 175,000 comments were submitted.

Endless tracking of kids not a “victimless crime”

Bedoya said that updating the already-expansive children’s privacy law would prevent known harms. He also expressed concern that these harms are increasingly being overlooked, citing a federal judge in California who preliminarily enjoined California’s Age-Appropriate Design Code in September. That judge had suggested that California’s law was “actually likely to exacerbate” online harm to kids, but Bedoya challenged that decision as reinforcing a “critique that has quietly proliferated around children’s privacy: the idea that many privacy invasions do not actually hurt children.”

For decades, COPPA has protected against the unauthorized or unnecessary collection, use, retention, and disclosure of children’s information, which Bedoya said “endangers children’s safety,” “exposes children and families to hacks and data breaches,” and “allows third-party companies to develop commercial relationships with children that prey on their trust and vulnerability.”

“I think each of these harms, particularly the latter, undermines the idea that the pervasive tracking of children online is [a] ‘victimless crime,’” Bedoya said, adding that “the harms that COPPA sought to prevent remain real, and COPPA remains relevant and profoundly important.”

According to Bedoya, COPPA is more vital than ever, as “we are only at the beginning of an era of biometric fraud.”

Khan characterized the proposed changes as “much-needed” in an “era where online tools are essential for navigating daily life—and where firms are deploying increasingly sophisticated digital tools to surveil children.”

“Kids must be able to play and learn online without being endlessly tracked by companies looking to hoard and monetize their personal data,” Khan said.

Child sex abuse images found in dataset training image generators, report says

More than 1,000 images of known child sexual abuse material (CSAM) were found in a large open dataset—known as LAION-5B—that was used to train popular text-to-image generators such as Stable Diffusion, Stanford Internet Observatory (SIO) researcher David Thiel revealed on Wednesday.

SIO’s report seems to confirm rumors swirling on the Internet since 2022 that LAION-5B included illegal images, Bloomberg reported. In an email to Ars, Thiel warned that “the inclusion of child abuse material in AI model training data teaches tools to associate children in illicit sexual activity and uses known child abuse images to generate new, potentially realistic child abuse content.”

Thiel began his research in September after discovering in June that AI image generators were being used to create thousands of fake but realistic AI child sex images rapidly spreading on the dark web. His goal was to find out what role CSAM may play in the training process of AI models powering the image generators spouting this illicit content.

“Our new investigation reveals that these models are trained directly on CSAM present in a public dataset of billions of images, known as LAION-5B,” Thiel’s report said. “The dataset included known CSAM scraped from a wide array of sources, including mainstream social media websites”—like Reddit, X, WordPress, and Blogspot—as well as “popular adult video sites”—like XHamster and XVideos.

Shortly after Thiel’s report was published, a spokesperson for LAION, the Germany-based nonprofit that produced the dataset, told Bloomberg that LAION “was temporarily removing LAION datasets from the Internet” due to LAION’s “zero tolerance policy” for illegal content. The datasets will be republished once LAION ensures “they are safe,” the spokesperson said. A spokesperson for Hugging Face, which hosts a link to a LAION dataset that’s currently unavailable, confirmed to Ars that the dataset is now unavailable to the public after being switched to private by the uploader.

Removing the datasets now doesn’t fix any lingering issues with previously downloaded datasets or previously trained models, though, like Stable Diffusion 1.5. Thiel’s report said that Stability AI’s subsequent versions of Stable Diffusion—2.0 and 2.1—filtered out some or most of the content deemed “unsafe,” “making it difficult to generate explicit content.” But because users were dissatisfied by these later, more filtered versions, Stable Diffusion 1.5 remains “the most popular model for generating explicit imagery,” Thiel’s report said.

A spokesperson for Stability AI told Ars that Stability AI is “committed to preventing the misuse of AI and prohibit the use of our image models and services for unlawful activity, including attempts to edit or create CSAM.” The spokesperson pointed out that SIO’s report “focuses on the LAION-5B dataset as a whole,” whereas “Stability AI models were trained on a filtered subset of that dataset” and were “subsequently fine-tuned” to “mitigate residual behaviors.” The implication seems to be that Stability AI’s filtered dataset is not as problematic as the larger dataset.

Stability AI’s spokesperson also noted that Stable Diffusion 1.5 “was released by Runway ML, not Stability AI.” There seems to be some confusion on that point, though, as a Runway ML spokesperson told Ars that Stable Diffusion “was released in collaboration with Stability AI.”

A demo of Stable Diffusion 1.5 noted that the model was “supported by Stability AI” but released by CompVis and Runway. A YCombinator thread linked to a blog post from Stability AI’s former chief information officer, Daniel Jeffries—titled “Why we chose not to release Stable Diffusion 1.5 as quickly”—that may have provided some clarity on this, but the post has since been deleted.

Runway ML’s spokesperson declined to comment on any updates being considered for Stable Diffusion 1.5 but linked Ars to a Stability AI blog from August 2022 that said, “Stability AI co-released Stable Diffusion alongside talented researchers from” Runway ML.

Stability AI’s spokesperson said that Stability AI does not host Stable Diffusion 1.5 but has taken other steps to reduce harmful outputs. Those include only hosting “versions of Stable Diffusion that include filters” that “remove unsafe content” and “prevent the model from generating unsafe content.”

“Additionally, we have implemented filters to intercept unsafe prompts or unsafe outputs when users interact with models on our platform,” Stability AI’s spokesperson said. “We have also invested in content labelling features to help identify images generated on our platform. These layers of mitigation make it harder for bad actors to misuse AI.”

Beyond verifying 1,008 instances of CSAM in the LAION-5B dataset, SIO found 3,226 instances of suspected CSAM in the LAION dataset. Thiel’s report warned that both figures are “inherently a significant undercount” due to researchers’ limited ability to detect and flag all the CSAM in the datasets. His report also predicted that “the repercussions of Stable Diffusion 1.5’s training process will be with us for some time to come.”

“The most obvious solution is for the bulk of those in possession of LAION‐5B‐derived training sets to delete them or work with intermediaries to clean the material,” SIO’s report said. “Models based on Stable Diffusion 1.5 that have not had safety measures applied to them should be deprecated and distribution ceased where feasible.”
