Spy Pet, a service that sold access to a rich database of allegedly more than 3 billion Discord messages and details on more than 600 million users, has seemingly been shut down.
404 Media, which broke the story of Spy Pet’s offerings, reports that the service is now mostly offline. Spy Pet’s website was unavailable as of this writing. A Discord spokesperson told Ars that the company’s safety team had been “diligently investigating” Spy Pet and that it had banned accounts affiliated with it.
“Scraping our services and self-botting are violations of our Terms of Service and Community Guidelines,” the spokesperson wrote. “In addition to banning the affiliated accounts, we are considering appropriate legal action.” The spokesperson noted that Discord server administrators can adjust server permissions to prevent this kind of monitoring on otherwise public servers going forward.
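Those permission changes live in Discord’s server settings UI, but they can also be applied programmatically. Below is a minimal sketch, assuming the discord.py library and an admin bot with the Manage Roles permission (the server ID and token are placeholders), of one way to deny the @everyone role access to message history, which is what a newly joined scraper account would rely on to bulk-read old messages:

```python
# Minimal sketch: deny "Read Message History" to @everyone in every
# text channel. Members still see new messages as they arrive, but a
# freshly joined scraper bot cannot scroll back through the archive.
import discord

intents = discord.Intents.default()
client = discord.Client(intents=intents)

@client.event
async def on_ready():
    guild = client.get_guild(123456789012345678)  # placeholder server ID
    for channel in guild.text_channels:
        await channel.set_permissions(
            guild.default_role,          # the @everyone role
            read_message_history=False,
        )
    await client.close()

client.run("YOUR_BOT_TOKEN")  # placeholder token
```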
Kiwi Farms ties, GDPR violations
The number of servers monitored by Spy Pet had been fluctuating in recent days. The site’s administrator told 404 Media’s Joseph Cox that they were rewriting part of the service while admitting that Discord had banned a number of its bots. The administrator had also told 404 Media that they did not “intend for my tool to be used for harassment,” despite a likely related user offering Spy Pet data on Kiwi Farms, a notorious hub for doxxing and online harassment campaigns that frequently target trans and non-binary people, members of the LGBTQ community, and women.
Even if Spy Pet can somehow work past Discord’s bans or survive legal action, the site’s very nature runs afoul of a number of Internet regulations across the globe. It’s almost certainly in violation of the European Union’s General Data Protection Regulation (GDPR). As pointed out by StackDiary, Spy Pet and services like it seem to violate at least three articles of the GDPR, including the “right to be forgotten” in Article 17.
Ars was unsuccessful in reaching the administrator of Spy Pet by email and Telegram message. Their last message on Telegram stated that their domain had been suspended and a backup domain was being set up. “TL;DR: Never trust the Germans,” they wrote.
TikTok owner ByteDance is preparing to sue the US government now that President Biden has signed into law a bill that will ban TikTok in the US if its Chinese owner doesn’t sell the company within 270 days. While it’s impossible to predict the outcome with certainty, law professors speaking to Ars believe that ByteDance will have a strong First Amendment case in its lawsuit against the US.
One reason for this belief is that just a few months ago, a US District Court judge blocked a Montana state law that attempted to ban TikTok. In October 2020, another federal judge in Pennsylvania blocked a Trump administration order that would have banned TikTok from operating inside the US. TikTok also won a preliminary injunction against Trump in US District Court for the District of Columbia in September 2020.
“Courts have said that a TikTok ban is a First Amendment problem,” Santa Clara University law professor Eric Goldman, who writes frequent analysis of legal cases involving technology, told Ars this week. “And Congress didn’t really try to navigate away from that. They just went ahead and disregarded the court rulings to date.”
The fact that previous attempts to ban TikTok have failed is “pretty good evidence that the government has an uphill battle justifying the ban,” Goldman said.
TikTok users engage in protected speech
The Montana law “bans TikTok outright and, in doing so, it limits constitutionally protected First Amendment speech,” US District Judge Donald Molloy wrote in November 2023 when he granted a preliminary injunction that blocks the state law.
“The Montana court concluded that the First Amendment challenge would be likely to succeed. This will give TikTok some hope that other courts will follow suit with respect to a national order,” Georgetown Law professor Anupam Chander told Ars.
Molloy’s ruling said that without TikTok, “User Plaintiffs are deprived of communicating by their preferred means of speech, and thus First Amendment scrutiny is appropriate.” TikTok’s speech interests must be considered “because the application’s decisions related to how it selects, curates, and arranges content are also protected by the First Amendment,” the ruling said.
Banning apps that let people talk to each other “is categorically impermissible,” Goldman said. While the Chinese government engaging in propaganda is a problem, “we need to address that as a government propaganda problem, and not just limited to China,” he said. In Goldman’s view, a broader approach should also be used to stop governments from siphoning user data.
TikTok and opponents of bans haven’t won every case. A federal judge in Texas ruled in favor of Texas Governor Greg Abbott in December 2023. But that ruling only concerned a ban on state employees using TikTok on government-issued devices rather than a law that potentially affects all users of TikTok.
Weighing national security vs. First Amendment
US lawmakers have alleged that the Chinese Communist Party can weaponize TikTok to manipulate public opinion and access user data. But Chander was skeptical of whether the US government could convincingly justify its new law in court on national security grounds.
“Thus far, the government has refused to make public its evidence of a national security threat,” he told Ars. “TikTok put in an elaborate set of controls to insulate the app from malign foreign influence, and the government hasn’t shown why those controls are insufficient.”
The ruling against Trump by a federal judge in Pennsylvania noted that “the Government’s own descriptions of the national security threat posed by the TikTok app are phrased in the hypothetical.”
Chander stressed that the outcome of ByteDance’s planned case against the US is difficult to predict, however. “I would vote against the law if I were a judge, but it’s unclear how judges will weigh the alleged national security risks against the real free expression incursions,” he said.
Montana case may be “bellwether”
There are at least three types of potential plaintiffs that could lodge constitutional challenges to a TikTok ban, Goldman said. There’s TikTok itself, the users of TikTok who would no longer be able to post on the platform, and app stores that would be ordered not to carry the TikTok app.
Montana was sued by TikTok and users. Lead plaintiff Samantha Alario runs a local swimwear business and uses TikTok to market her products.
Montana Attorney General Austin Knudsen appealed the ruling against his state to the US Court of Appeals for the 9th Circuit. The Montana case could make it to the Supreme Court before there is any resolution on the enforceability of the US law, Goldman said.
“It’s possible that the Montana ban is actually going to be the bellwether that’s going to set the template for the constitutional review of the Congressional action,” Goldman said.
On Friday, a federal judicial panel convened in Washington, DC, to discuss the challenges of policing AI-generated evidence in court trials, according to a Reuters report. The US Judicial Conference’s Advisory Committee on Evidence Rules, an eight-member panel responsible for drafting evidence-related amendments to the Federal Rules of Evidence, heard from computer scientists and academics about the potential risks of AI being used to manipulate images and videos or create deepfakes that could disrupt a trial.
The meeting took place amid broader efforts by federal and state courts nationwide to address the rise of generative AI models (such as those that power OpenAI’s ChatGPT or Stability AI’s Stable Diffusion), which can be trained on large datasets with the aim of producing realistic text, images, audio, or videos.
In the published 358-page agenda for the meeting, the committee offers up this definition of a deepfake and the problems AI-generated media may pose in legal trials:
A deepfake is an inauthentic audiovisual presentation prepared by software programs using artificial intelligence. Of course, photos and videos have always been subject to forgery, but developments in AI make deepfakes much more difficult to detect. Software for creating deepfakes is already freely available online and fairly easy for anyone to use. As the software’s usability and the videos’ apparent genuineness keep improving over time, it will become harder for computer systems, much less lay jurors, to tell real from fake.
During Friday’s three-hour hearing, the panel wrestled with the question of whether existing rules, which predate the rise of generative AI, are sufficient to ensure the reliability and authenticity of evidence presented in court.
Some judges on the panel, such as US Circuit Judge Richard Sullivan and US District Judge Valerie Caproni, reportedly expressed skepticism about the urgency of the issue, noting that there have been few instances so far of judges being asked to exclude AI-generated evidence.
“I’m not sure that this is the crisis that it’s been painted as, and I’m not sure that judges don’t have the tools already to deal with this,” said Judge Sullivan, as quoted by Reuters.
Last year, US Supreme Court Chief Justice John Roberts acknowledged the potential benefits of AI for litigants and judges, while emphasizing the need for the judiciary to consider its proper uses in litigation. US District Judge Patrick Schiltz, the evidence committee’s chair, said that determining how the judiciary can best react to AI is one of Roberts’ priorities.
In Friday’s meeting, the committee considered several deepfake-related rule changes. In the agenda for the meeting, US District Judge Paul Grimm and attorney Maura Grossman proposed modifying Federal Rule 901(b)(9) (see page 5), which involves authenticating or identifying evidence. They also recommended the addition of a new rule, 901(c), which might read:
901(c): Potentially Fabricated or Altered Electronic Evidence. If a party challenging the authenticity of computer-generated or other electronic evidence demonstrates to the court that it is more likely than not either fabricated, or altered in whole or in part, the evidence is admissible only if the proponent demonstrates that its probative value outweighs its prejudicial effect on the party challenging the evidence.
The panel agreed during the meeting that this proposal, meant to address concerns about litigants challenging evidence as deepfakes, did not work as written and would be reworked before being reconsidered later.
Another proposal by Andrea Roth, a law professor at the University of California, Berkeley, suggested subjecting machine-generated evidence to the same reliability requirements as expert witnesses. However, Judge Schiltz cautioned that such a rule could hamper prosecutions by allowing defense lawyers to challenge any digital evidence without establishing a reason to question it.
For now, no definitive rule changes have been made, and the process continues. But we’re witnessing the first steps of how the US justice system will adapt to an entirely new class of media-generating technology.
Putting aside risks from AI-generated evidence, generative AI has already led to embarrassing moments for lawyers in court over the past two years. In May 2023, US lawyer Steven Schwartz of the firm Levidow, Levidow & Oberman apologized to a judge for using ChatGPT to help write court filings that inaccurately cited six nonexistent cases, leading to serious questions about the reliability of AI in legal research. And in November, a lawyer for Michael Cohen cited three fake cases that were potentially influenced by a confabulating AI assistant.
The US Chamber of Commerce and other business groups sued the Federal Trade Commission and FTC Chair Lina Khan today in an attempt to block a newly issued ban on noncompete clauses.
The lawsuit was filed in US District Court for the Eastern District of Texas. The US Chamber of Commerce was joined in the suit by Business Roundtable, the Texas Association of Business, and the Longview Chamber of Commerce. The suit seeks a court order that would vacate the rule in its entirety.
The lawsuit claimed that noncompete clauses “benefit employers and workers alike—the employer protects its workforce investments and sensitive information, and the worker benefits from increased training, access to more information, and an opportunity to bargain for higher pay.”
“Having invested in their people and entrusted them with valuable company secrets, businesses have strong interests in preventing others from free-riding on those investments or gaining improper access to competitive, confidential information,” the lawsuit said.
Lawsuit filed one day after FTC issued rule
The lawsuit came one day after the FTC issued its rule banning noncompete clauses, determining that such clauses are an unfair method of competition and thus a violation of Section 5 of the FTC Act. The rule is scheduled to take effect in about four months and would render the vast majority of existing noncompetes unenforceable.
The only existing noncompetes that won’t be nullified are those for senior executives, defined as people earning more than $151,164 a year and who are in policymaking positions. Existing noncompetes for all other workers would be nullified, and the FTC is prohibiting employers from imposing any new noncompetes on both senior executives and other workers.
“By invalidating existing noncompete agreements and prohibiting businesses and their workers from ever entering into such agreements going forward, the rule will force businesses all over the country—including in this District—to turn to inadequate and expensive alternatives to protect their confidential information, such as nondisclosure agreements and trade-secret lawsuits,” the Chamber of Commerce said in its complaint.
The Chamber argues that the FTC overstepped its authority. “The Commission’s astounding assertion of power breaks with centuries of state and federal law and rests on novel claims of authority by the Commission. From the Founding forward, States have always regulated noncompete agreements,” the lawsuit said.
The FTC says it can impose the ban using authority under sections 5 and 6(g) of the FTC Act. Section 6(g) authorizes the Commission to “make rules and regulations for the purpose of carrying out the provisions of” the FTC Act, including the Section 5 prohibition on unfair methods of competition, the FTC said in its rule.
FTC: “Our legal authority is crystal clear”
“Our legal authority is crystal clear,” an FTC spokesperson said in a statement provided to Ars today. “In the FTC Act, Congress specifically ‘empowered and directed’ the FTC to prevent ‘unfair methods of competition’ and to ‘make rules and regulations for the purposes of carrying out the provisions of’ the FTC Act. This authority has repeatedly been upheld by courts and reaffirmed by Congress. Addressing noncompetes that curtail Americans’ economic freedom is at the very heart of our mandate, and we look forward to winning in court.”
The Chamber’s lawsuit said “the sheer economic and political significance of a nationwide noncompete ban demonstrates that this is a question for Congress to decide, rather than an agency.” If the FTC’s claim of authority is upheld, it “would reflect a boundless and unconstitutional delegation of legislative power to the Executive Branch,” the lawsuit said.
If the US District Court in Texas grants an injunction blocking the ban, the FTC could challenge the ruling in a federal appeals court.
Separately, a lobby group for cable TV and broadband companies issued a statement opposing the ban on noncompetes. “It is disappointing the FTC is poised to undercut small, independent, and rural broadband providers with a sweeping ban on non-competes,” said America’s Communications Association (formerly the American Cable Association). “This unjustified action will make it more challenging to provide quality service, crush competition and allow large incumbents to raid talent and obtain proprietary information.”
Khan said yesterday that “noncompete clauses keep wages low, suppress new ideas, and rob the American economy of dynamism.” The ban “will ensure Americans have the freedom to pursue a new job, start a new business, or bring a new idea to market,” she said.
Federal Trade Commission Chair Lina Khan talks with guests during an event in the Eisenhower Executive Office Building on April 3, 2024. Credit: Getty Images | Chip Somodevilla
The Federal Trade Commission (FTC) today announced that it has issued a final rule banning noncompete clauses. The rule will render the vast majority of current noncompete clauses unenforceable, according to the agency.
“In the final rule, the Commission has determined that it is an unfair method of competition and therefore a violation of Section 5 of the FTC Act, for employers to enter into noncompetes with workers and to enforce certain noncompetes,” the FTC said.
The US Chamber of Commerce said it will sue the FTC in an effort to block the rule, claiming the ban is “a blatant power grab that will undermine American businesses’ ability to remain competitive.”
The FTC proposed the rule in January 2023 and received over 26,000 public comments on its proposal. Over 25,000 of the comments supported the proposed ban, the FTC said. The final rule announced today will take effect 120 days after it is published in the Federal Register, unless opponents of the rule secure a court order blocking it.
The FTC said that “noncompetes are a widespread and often exploitative practice imposing contractual conditions that prevent workers from taking a new job or starting a new business. Noncompetes often force workers to either stay in a job they want to leave or bear other significant harms and costs, such as being forced to switch to a lower-paying field, being forced to relocate, being forced to leave the workforce altogether, or being forced to defend against expensive litigation.”
Noncompete clauses currently bind about 30 million workers in the US, the agency said. “Under the FTC’s new rule, existing noncompetes for the vast majority of workers will no longer be enforceable after the rule’s effective date,” the FTC said.
FTC: “Noncompete clauses keep wages low”
The only existing noncompetes that won’t be nullified are those for senior executives, who represent less than 0.75 percent of workers, the FTC said. The rule defines senior executives as people earning more than $151,164 a year and who are in policy-making positions.
“The final rule allows existing noncompetes with senior executives to remain in force because this subset of workers is less likely to be subject to the kind of acute, ongoing harms currently being suffered by other workers subject to existing noncompetes and because commenters raised credible concerns about the practical impacts of extinguishing existing noncompetes for senior executives,” the FTC said.
Senior executives will be protected from new noncompete clauses after the rule takes effect. Employers will be “banned from entering into or attempting to enforce any new noncompetes, even if they involve senior executives,” the FTC said. “Employers will be required to provide notice to workers other than senior executives who are bound by an existing noncompete that they will not be enforcing any noncompetes against them.”
The FTC vote was 3-2, with Democrats supporting the noncompete ban and Republicans opposing it.
“Noncompete clauses keep wages low, suppress new ideas, and rob the American economy of dynamism, including from the more than 8,500 new startups that would be created a year once noncompetes are banned,” FTC Chair Lina Khan said. “The FTC’s final rule to ban noncompetes will ensure Americans have the freedom to pursue a new job, start a new business, or bring a new idea to market.”
Chamber of Commerce CEO Suzanne Clark argued that “the FTC has never been granted the constitutional and statutory authority to write its own competition rules… The Chamber will sue the FTC to block this unnecessary and unlawful rule and put other agencies on notice that such overreach will not go unchecked.”
FTC cites authority, urges businesses to raise wages
The FTC argues that it can impose the rule using authority under sections 5 and 6(g) of the FTC Act:
Alongside section 5, Congress adopted section 6(g) of the Act, in which it authorized the Commission to “make rules and regulations for the purpose of carrying out the provisions of” the FTC Act, which include the Act’s prohibition of unfair methods of competition. The plain text of section 5 and section 6(g), taken together, empower the Commission to promulgate rules for the purpose of preventing unfair methods of competition. That includes legislative rules defining certain conduct as an unfair method of competition.
The FTC said it found evidence that “noncompetes tend to negatively affect competitive conditions in product and service markets, inhibiting new business formation and innovation” and “lead to increased market concentration and higher prices for consumers.”
Businesses can protect trade secrets without noncompetes, the agency said:
Trade secret laws and nondisclosure agreements (NDAs) both provide employers with well-established means to protect proprietary and other sensitive information. Researchers estimate that over 95 percent of workers with a noncompete already have an NDA.
The Commission also finds that instead of using noncompetes to lock in workers, employers that wish to retain employees can compete on the merits for the worker’s labor services by improving wages and working conditions.
Grindr is facing a class action lawsuit from hundreds of users over the sharing of HIV statuses and other sensitive personal information with third-party firms.
UK law firm Austen Hays filed the claim in the High Court in London yesterday, the firm announced. The class action “alleges the misuse of private information of thousands of affected UK Grindr users, including highly sensitive information about their HIV status and latest tested date,” the law firm said.
The law firm said it has signed up over 670 potential class members and “is in discussions with thousands of other individuals who are interested in joining the claim.” Austen Hays said that “claimants could receive thousands in damages” from Grindr, a gay dating app, if the case is successful.
Austen Hays alleges that Grindr violated UK data protection laws by sharing sensitive data for commercial purposes without users’ consent, including when it “unlawfully processed and shared users’ data with third parties, including advertising companies Localytics and Apptimize.”
While Austen Hays describes Localytics and Apptimize as advertising firms, they do not seem to be in the business of selling ads. Localytics is software for mobile app marketing and analytics, while Apptimize says it provides A/B testing and feature release management for product teams.
Grindr admitted sharing HIV status, said it stopped
Grindr has admitted sharing HIV status with the firms but stressed that it wasn’t for advertising purposes and pledged to stop sharing that information. The sharing of HIV status came to light in 2018 thanks to the work of independent researchers. At the time, Grindr said it “has never sold, nor will we ever sell, personal user information—especially information regarding HIV status or last test date—to third parties or advertisers.”
Grindr said it “consult[ed] several international health organizations” before determining in 2016 that it would be “beneficial for the health and well-being of our community to give users the option to publish, at their discretion, their HIV status and their ‘Last Tested Date’ to their public profile.”
Grindr acknowledged that it had been “sharing HIV status information with our trusted vendors, Apptimize and Localytics.” Apptimize software helped Grindr test and deploy new app features including an “HIV Testing Reminder” feature, while Localytics software was used “to confirm that the new features were not causing problems with the functioning of the Grindr app,” Grindr said.
Today, Grindr provided Ars with a statement in response to the lawsuit. “We are committed to protecting our users’ data and complying with all applicable data privacy regulations, including in the UK,” the company said. “Grindr has never shared user-reported health information for ‘commercial purposes’ and has never monetized such information. We intend to respond vigorously to this claim, which appears to be based on a mischaracterization of practices from more than four years ago, prior to early 2020.”
Congress passed and President Biden signed a reauthorization of Title VII of the Foreign Intelligence Surveillance Act (FISA), approving a bill that opponents say includes a “major expansion of warrantless surveillance” under Section 702 of FISA.
Over the weekend, the Reforming Intelligence and Securing America Act was approved by the Senate in a 60-34 vote. The yes votes included 30 Republicans, 28 Democrats, and two independents who caucus with Democrats. The bill, which was previously passed by the House and reauthorizes Section 702 of FISA for two years, was signed by President Biden on Saturday.
“Thousands and thousands of Americans could be forced into spying for the government by this new bill and with no warrant or direct court oversight whatsoever,” Sen. Ron Wyden (D-Ore.), a member of the Senate Select Committee on Intelligence, said on Friday. “Forcing ordinary Americans and small businesses to conduct secret, warrantless spying is what authoritarian countries do, not democracies.”
Wyden and Sen. Cynthia Lummis (R-Wyo.) led a bipartisan group of eight senators who submitted an amendment to reverse what Wyden’s office called “a major expansion of warrantless surveillance under Section 702 of the Foreign Intelligence Surveillance Act that was included in the House-passed bill.” After the bill was approved by the Senate without the amendment, Wyden said it seemed “that senators were unwilling to send this bill back to the House, no matter how common-sense the amendment before them.”
Sen. Ted Cruz (R-Texas) said he voted against the reauthorization “because it failed to include the most important requirement to protect Americans’ civil rights: that law enforcement get a warrant before targeting a US citizen.”
Bill expands definition of service provider
The Wyden/Lummis amendment would have struck language that expands the definition of an electronic communication service provider to include, with some exceptions, any “service provider who has access to equipment that is being or may be used to transmit or store wire or electronic communications.” The exceptions are for public accommodation facilities, dwellings, community facilities, and food service establishments.
“Instead of using the opportunity to curb warrantless surveillance of Americans’ private communications and protect the public’s privacy, Congress passed an expansive, unchecked surveillance authority,” Sen. Edward J. Markey (D-Mass.) said after the vote. “This FISA reauthorization legislation is a step backwards, doing nothing to address the extent to which the government conducts surveillance over its own citizens.”
Under the 2008 FISA Amendments Act, electronic communication service providers already included telecommunications carriers, providers of electronic communication services, providers of remote computing services, and “any other communication service provider who has access to wire or electronic communications either as such communications are transmitted or as such communications are stored.” These entities must provide the government with information, facilities, and assistance necessary to obtain communications.
The Brennan Center for Justice at New York University School of Law called the reauthorization “the largest expansion of domestic surveillance authority since the Patriot Act.”
“The bill, which would effectively grant the federal government access to the communications equipment of almost any business in the United States, is a gift to any president who may wish to spy on political enemies,” said Elizabeth Goitein, senior director of the Brennan Center’s Liberty and National Security Program.
An Apple Store in Shanghai, China, on April 11, 2024. Credit: CFOTO/Future Publishing via Getty Images
Apple said it complied with orders from the Chinese government to remove the Meta-owned WhatsApp and Threads from its App Store in China. Apple also removed Telegram and Signal from China.
“We are obligated to follow the laws in the countries where we operate, even when we disagree,” Apple said in a statement quoted by several news outlets. “The Cyberspace Administration of China ordered the removal of these apps from the China storefront based on their national security concerns. These apps remain available for download on all other storefronts where they appear.”
The Wall Street Journal paraphrased a person familiar with the matter as saying that the Chinese cyberspace agency “asked Apple to remove WhatsApp and Threads from the App Store because both contain political content that includes problematic mentions of the Chinese president [Xi Jinping].”
The New York Times similarly wrote that “a person briefed on the situation said the Chinese government had found content on WhatsApp and Threads about China’s president, Xi Jinping, that was inflammatory and violated the country’s cybersecurity laws. The specifics of what was in the content were unclear, the person said.”
Meta apps Facebook, Instagram, and Messenger were still available for iOS in China today, according to Reuters. As Reuters noted, the four apps removed from Apple’s China store were not widely used in the country, where WeChat is the dominant service.
“These apps and many foreign apps are normally blocked on Chinese networks by the ‘Great Firewall’—the country’s extensive cybersystem of censorship—and can only be used with a virtual private network or other proxy tools,” Reuters wrote. WhatsApp, Threads, Telegram, and Signal were reportedly still available on Apple devices in Hong Kong and Macau, China’s special administrative regions.
US House moves on forcing TikTok sale or ban
China’s crackdown on foreign messaging apps comes amid US debate over whether to ban or force a sale of the Chinese-owned TikTok. The House Commerce Committee last month voted 50–0 to approve a bill that would force TikTok owner ByteDance to sell the company or lose access to the US market.
US lawmakers argue that TikTok poses national security risks, saying that China can use the app to obtain sensitive personal data and manipulate US public opinion. House leaders are reportedly planning a floor vote on the TikTok bill on Saturday.
US lawmakers raised concerns about Apple’s China ties after the recent cancellation of Apple TV+ show The Problem with Jon Stewart. Stewart reportedly told members of his staff that Apple executives were concerned about potential show topics related to China and artificial intelligence.
Apple pulled The New York Times app from its store in China in December 2016, saying that Apple was informed by China “that the app is in violation of local regulations.” The New York Times news app is still unavailable on Apple’s App Store in China, the Reuters article said.
“For years, Apple has bowed to Beijing’s demands that it block an array of apps, including newspapers, VPNs, and encrypted messaging services,” The New York Times noted yesterday. “It also built a data center in the country to house Chinese citizens’ iCloud information, which includes personal contacts, photos and email.”
A jury has unanimously convicted Avi Eisenberg in the US Department of Justice’s first case involving cryptocurrency open-market manipulation, the DOJ announced Thursday.
The jury found Eisenberg guilty of commodities fraud, commodities market manipulation, and wire fraud in connection with his manipulation of a decentralized cryptocurrency exchange called Mango Markets.
Eisenberg is scheduled to be sentenced on July 29 and is facing “a maximum penalty of 10 years in prison on the commodities fraud count and the commodities manipulation count, and a maximum penalty of 20 years in prison on the wire fraud count,” the DOJ said.
On the Mango Markets exchange, Eisenberg was “engaged in a scheme to fraudulently obtain approximately $110 million worth of cryptocurrency from Mango Markets and its customers by artificially manipulating the price of certain perpetual futures contracts,” the DOJ said. The scheme harmed both investors trading on the platform and the exchange itself, which had to suspend operations after Eisenberg’s attack left it insolvent.
Nicole M. Argentieri, the principal deputy assistant attorney general who heads the DOJ’s criminal division, said that Eisenberg’s manipulative trading scheme “puts our financial markets and investors at risk.”
“This prosecution—the first involving the manipulation of cryptocurrency through open-market trades—demonstrates the Criminal Division’s commitment to protecting US financial markets and holding wrongdoers accountable, no matter what mechanism they use to commit manipulation and fraud,” Argentieri said.
Mango Labs has similarly sued Eisenberg over the price manipulation scheme, but that lawsuit was stayed until the DOJ’s case was resolved. Mango Labs is expecting a status update today from the US government and is hoping to proceed with its lawsuit.
Ars could not immediately reach Mango Labs for comment.
Eisenberg’s lawyer, Brian Klein, provided a statement to Ars confirming that Eisenberg’s legal team is “obviously disappointed” but “will keep fighting for our client.”
How the Mango Markets scheme worked
Mango Labs has accused Eisenberg of being a “notorious cryptocurrency market manipulator,” noting in its complaint that he has a “history of attacking multiple cryptocurrency platforms and manipulating cryptocurrency markets.” That history includes allegedly embezzling $14 million in 2021 while Eisenberg was working as a developer for another decentralized marketplace called Fortress, Mango Labs’ complaint said.
Eisenberg’s attack on Mango Markets was designed to grab tens of millions of dollars more than the alleged Fortress scheme. When Eisenberg was first charged, the DOJ explained how his Mango Markets price-manipulation scheme worked.
On Mango Markets, investors can “purchase and borrow cryptocurrencies and cryptocurrency-related financial products,” including buying and selling “perpetual futures contracts.”
“When an investor buys or sells a perpetual for a particular cryptocurrency, the investor is not buying or selling that cryptocurrency but is, instead, buying or selling exposure to future movements in the value of that cryptocurrency relative to another cryptocurrency,” the DOJ explained.
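Based on the DOJ’s description and public reporting on the exploit, the scheme reduces to mark-to-market arithmetic: pump the thinly traded underlying token, let a large long perpetual position show an enormous unrealized gain, then borrow real assets against that paper profit. The Python sketch below illustrates the mechanism; every figure is a hypothetical round number chosen for illustration, not actual trade data from the case:

```python
# Illustrative arithmetic only -- all numbers are hypothetical.

position_size = 400_000_000   # long perpetual position, in tokens
entry_price = 0.04            # token price (USD) when the position was opened
pumped_price = 0.50           # price after aggressive buying on thin spot markets

# The exchange marks the position to the oracle price, so pushing the
# spot price up inflates the position's unrealized profit.
unrealized_gain = position_size * (pumped_price - entry_price)
print(f"Unrealized gain: ${unrealized_gain:,.0f}")  # $184,000,000

# If the platform counts unrealized gains as collateral, the attacker
# can borrow real assets against paper profit and withdraw them. When
# the token price collapses, the collateral evaporates and the
# exchange is left insolvent.
collateral_factor = 0.6       # hypothetical loan-to-value ratio
borrowable = unrealized_gain * collateral_factor
print(f"Borrowable against paper gains: ${borrowable:,.0f}")  # $110,400,000
```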
A cropped image showing Raw TV’s poster for the Netflix documentary What Jennifer Did, which features a long front tooth that leads critics to believe it was AI-generated.
An executive producer of the Netflix hit What Jennifer Did has responded to accusations that the true crime documentary used AI images when depicting Jennifer Pan, a woman currently imprisoned in Canada for orchestrating a murder-for-hire scheme targeting her parents.
What Jennifer Did shot to the top spot in Netflix’s global top 10 when it debuted in early April, attracting swarms of true crime fans who wanted to know more about why Pan paid hitmen $10,000 to murder her parents. But the documentary quickly became a source of controversy as fans began noticing glaring flaws in images used in the movie, from weirdly mismatched earrings to a nose that appeared to lack nostrils, the Daily Mail reported in a post showing a plethora of examples from the film.
Futurism was among the first to point out that these flawed images (around the 28-minute mark of the documentary) “have all the hallmarks of an AI-generated photo, down to mangled hands and fingers, misshapen facial features, morphed objects in the background, and a far-too-long front tooth.” The image with the long front tooth was even used in Netflix’s poster for the movie.
Because the movie’s credits do not mention any uses of AI, critics called out the documentary filmmakers for potentially embellishing a movie that’s supposed to be based on real-life events.
But Jeremy Grimaldi—who is also the crime reporter who wrote a book on the case and provided the documentary with research and police footage—told the Toronto Star that the images were not AI-generated.
Grimaldi confirmed that all images of Pan used in the movie were real photos. He said that some of the images were edited, though, not to blur the lines between truth and fiction, but to protect the identity of the source of the images.
“Any filmmaker will use different tools, like Photoshop, in films,” Grimaldi told The Star. “The photos of Jennifer are real photos of her. The foreground is exactly her. The background has been anonymized to protect the source.”
While Grimaldi’s comments provide some assurance that the photos are edited versions of real photos of Pan, they are also vague enough to obscure whether AI was among the “different tools” used to edit the photos.
One photographer, Joe Foley, wrote in a post for Creative Bloq that he thought “documentary makers may have attempted to enhance old low-resolution images using AI-powered upscaling or photo restoration software to try to make them look clearer on a TV screen.”
“The problem is that even the best AI software can only take a poor-quality image so far, and such programs tend to over sharpen certain lines, resulting in strange artifacts,” Foley said.
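Foley’s hypothesis is easy to demonstrate with conventional tools. The sketch below uses the Pillow imaging library (not whatever software the filmmakers actually used) on a hypothetical input file: enlarging a low-resolution photo and then applying heavy unsharp masking produces the halos and exaggerated edges he describes:

```python
# Rough sketch of an upscale-and-sharpen pipeline using Pillow. This is
# classical image processing, not the AI tools Foley refers to, but
# over-aggressive settings yield comparable edge artifacts.
from PIL import Image, ImageFilter

img = Image.open("low_res_photo.jpg")  # hypothetical input file

# Enlarge 4x; interpolation can only smooth pixels, not invent detail.
upscaled = img.resize((img.width * 4, img.height * 4), Image.LANCZOS)

# Heavy unsharp masking: a large radius and high percent exaggerate
# edges, creating halos like the "over sharpened" lines Foley describes.
sharpened = upscaled.filter(
    ImageFilter.UnsharpMask(radius=6, percent=300, threshold=0)
)
sharpened.save("enhanced_photo.jpg")
```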
Foley suggested that Netflix should have “at the very least” clarified that images had been altered “to avoid this kind of backlash,” noting that “any kind of manipulation of photos in a documentary is controversial because the whole point is to present things as they were.”
X’s chatbot Grok is supposed to be an AI engine crunching the platform’s posts to surface and summarize breaking news, but this week, Grok’s flaws were once again exposed when the chatbot got confused and falsely accused an NBA star of criminal vandalism.
“Klay Thompson Accused in Bizarre Brick-Vandalism Spree,” Grok’s headline read in an AI-powered trending-tab post that has remained on X (formerly Twitter) for days. Beneath the headline, Grok went into even more detail to support its fake reporting:
In a bizarre turn of events, NBA star Klay Thompson has been accused of vandalizing multiple houses with bricks in Sacramento. Authorities are investigating the claims after several individuals reported their houses being damaged, with windows shattered by bricks. Klay Thompson has not yet issued a statement regarding the accusations. The incidents have left the community shaken, but no injuries were reported. The motive behind the alleged vandalism remains unclear.
Grok appears to have confused a common basketball term: players are said to be throwing “bricks” when they badly miss a shot. According to SF Gate, which was one of the first outlets to report the Grok error, Thompson had an “all-time rough shooting” night, hitting none of his shots in what was his emotional last game with the Golden State Warriors before becoming an unrestricted free agent.
In small type under Grok’s report, X includes a disclaimer saying, “Grok is an early feature and can make mistakes. Verify its outputs.”
But instead of verifying Grok’s outputs, it appeared that X users—in the service’s famously joke-y spirit—decided to fuel Grok’s misinformation. Under the post, X users, some of them NBA fans, commented with fake victim reports, using the same joke format to seemingly convince Grok that “several individuals reported their houses being damaged.” Some of these joking comments were viewed by millions.
First off… I am ok.
My house was vandalized by bricks
After my hands stopped shaking, I managed to call the Sheriff…They were quick to respond
My window was gone and the police asked if I knew who did it
Experts told Ars that it remains unclear if disclaimers like X’s will spare companies from liability should more people decide to sue over fake AI outputs. Defamation claims might depend on proving that platforms “knowingly” publish false statements, which disclaimers suggest they do. Last July, the Federal Trade Commission launched an investigation into OpenAI, demanding that the company address the FTC’s fears of “false, misleading, or disparaging” AI outputs.
Because the FTC doesn’t comment on its investigations, it’s impossible to know if its probe will impact how OpenAI conducts business.
For people suing AI companies, the urgency of protecting against false outputs seems obvious. Last year, the radio host suing OpenAI, Mark Walters, accused the company of “sticking its head in the sand” and “recklessly disregarding whether the statements were false under circumstances when they knew that ChatGPT’s hallucinations were pervasive and severe.”
X just released Grok to all premium users this month, TechCrunch reported, right around the time that X began giving away premium access to the platform’s top users. During that wider rollout, X touted Grok’s new ability to summarize all trending news and topics, perhaps stoking interest in the feature and driving usage to a peak just before Grok spat out the potentially defamatory post about the NBA star.
Thompson has not issued any statements on Grok’s fake reporting.
Grok’s false post about Thompson may be the first widely publicized example of potential defamation from Grok, but it wasn’t the first time that Grok promoted fake news in response to X users joking around on the platform. During the solar eclipse, a Grok-generated headline read, “Sun’s Odd Behavior: Experts Baffled,” Gizmodo reported.
While it’s amusing to some X users to manipulate Grok, the pattern suggests that Grok may also be vulnerable to being manipulated by bad actors into summarizing and spreading more serious misinformation or propaganda. That’s apparently already happening, too. In early April, Grok made up a headline about Iran attacking Israel with heavy missiles, Mashable reported.
The US Constitution’s Fifth Amendment protection against self-incrimination does not prohibit police officers from forcing a suspect to unlock a phone with a thumbprint scan, a federal appeals court ruled yesterday. The ruling does not apply to all cases in which biometrics are used to unlock an electronic device but is a significant decision in an unsettled area of the law.
The US Court of Appeals for the 9th Circuit had to grapple with the question of “whether the compelled use of Payne’s thumb to unlock his phone was testimonial,” the ruling in United States v. Jeremy Travis Payne said. “To date, neither the Supreme Court nor any of our sister circuits have addressed whether the compelled use of a biometric to unlock an electronic device is testimonial.”
A three-judge panel at the 9th Circuit ruled unanimously against Payne, affirming a US District Court’s denial of Payne’s motion to suppress evidence. Payne was a California parolee who was arrested by California Highway Patrol (CHP) after a 2021 traffic stop and charged with possession with intent to distribute fentanyl, fluorofentanyl, and cocaine.
There was a dispute in District Court over whether a CHP officer “forcibly used Payne’s thumb to unlock the phone.” But for the purposes of Payne’s appeal, the government “accepted the defendant’s version of the facts, i.e., ‘that defendant’s thumbprint was compelled.'”
Payne’s Fifth Amendment claim “rests entirely on whether the use of his thumb implicitly related certain facts to officers such that he can avail himself of the privilege against self-incrimination,” the ruling said. Judges rejected his claim, holding “that the compelled use of Payne’s thumb to unlock his phone (which he had already identified for the officers) required no cognitive exertion, placing it firmly in the same category as a blood draw or fingerprint taken at booking.”
“When Officer Coddington used Payne’s thumb to unlock his phone—which he could have accomplished even if Payne had been unconscious—he did not intrude on the contents of Payne’s mind,” the court also said.
Suspect’s mental process is key
Payne conceded that “the use of biometrics to open an electronic device is akin to providing a physical key to a safe” but argued it is still a testimonial act because it “simultaneously confirm[s] ownership and authentication of its contents,” the court said. “However, Payne was never compelled to acknowledge the existence of any incriminating information. He merely had to provide access to a source of potential information.”
The appeals court cited two Supreme Court rulings in cases involving the US government. In Doe v. United States in 1988, the government compelled a person to sign forms consenting to disclosure of bank records relating to accounts that the government already knew about. The Supreme Court “held that this was not a testimonial production, reasoning that the signing of the forms related no information about existence, control, or authenticity of the records that the bank could ultimately be forced to produce,” the 9th Circuit said.
In United States v. Hubbell in 2000, a subpoena compelled a suspect to produce 13,120 pages of documents and records and respond “to a series of questions that established that those were all of the documents in his custody or control that were responsive to the commands in the subpoena.” The Supreme Court ruled against the government, as the 9th Circuit explained:
The Court held that this act of production was of a fundamentally different kind than that at issue in Doe because it was “unquestionably necessary for respondent to make extensive use of ‘the contents of his own mind’ in identifying the hundreds of documents responsive to the requests in the subpoena.” The “assembly of those documents was like telling an inquisitor the combination to a wall safe, not like being forced to surrender the key to a strongbox.” Thus, the dividing line between Doe and Hubbell centers on the mental process involved in a compelled act, and an inquiry into whether that act implicitly communicates the existence, control, or authenticity of potential evidence.