Policy

Crypto influencer guilty of $110M scheme that shut down Mango Markets

A jury has unanimously convicted Avi Eisenberg in the US Department of Justice’s first case involving cryptocurrency open-market manipulation, the DOJ announced Thursday.

The jury found Eisenberg guilty of commodities fraud, commodities market manipulation, and wire fraud in connection with his manipulation of the decentralized cryptocurrency exchange Mango Markets.

Eisenberg is scheduled to be sentenced on July 29 and is facing “a maximum penalty of 10 years in prison on the commodities fraud count and the commodities manipulation count, and a maximum penalty of 20 years in prison on the wire fraud count,” the DOJ said.

On the Mango Markets exchange, Eisenberg was “engaged in a scheme to fraudulently obtain approximately $110 million worth of cryptocurrency from Mango Markets and its customers by artificially manipulating the price of certain perpetual futures contracts,” the DOJ said. The scheme harmed both the investors trading on the platform and the exchange itself, which had to suspend operations after Eisenberg’s attack rendered it insolvent.

Nicole M. Argentieri, the principal deputy assistant attorney general who heads the DOJ’s criminal division, said that Eisenberg’s manipulative trading scheme “puts our financial markets and investors at risk.”

“This prosecution—the first involving the manipulation of cryptocurrency through open-market trades—demonstrates the Criminal Division’s commitment to protecting US financial markets and holding wrongdoers accountable, no matter what mechanism they use to commit manipulation and fraud,” Argentieri said.

Mango Labs has also sued Eisenberg over the price manipulation scheme, but that lawsuit was stayed until the DOJ’s case was resolved. Mango Labs expects a status update from the US government today and hopes to proceed with its lawsuit.

Ars could not immediately reach Mango Labs for comment.

Eisenberg’s lawyer, Brian Klein, provided a statement to Ars confirming that Eisenberg’s legal team is “obviously disappointed” but “will keep fighting for our client.”

How the Mango Markets scheme worked

Mango Labs has accused Eisenberg of being a “notorious cryptocurrency market manipulator,” noting in its complaint that he has a “history of attacking multiple cryptocurrency platforms and manipulating cryptocurrency markets.” That history includes allegedly embezzling $14 million in 2021 while Eisenberg was working as a developer for another decentralized marketplace called Fortress, Mango Labs’ complaint said.

Eisenberg’s attack on Mango Markets was intended to net tens of millions of dollars more than the alleged Fortress attack. When Eisenberg was first charged, the DOJ explained how his Mango Markets price manipulation scheme worked.

On Mango Markets, investors can “purchase and borrow cryptocurrencies and cryptocurrency-related financial products,” including buying and selling “perpetual futures contracts.”

“When an investor buys or sells a perpetual for a particular cryptocurrency, the investor is not buying or selling that cryptocurrency but is, instead, buying or selling exposure to future movements in the value of that cryptocurrency relative to another cryptocurrency,” the DOJ explained.
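
The mechanics are easier to see with toy numbers. Below is a minimal, hypothetical Python sketch of the pattern the DOJ describes: pump the price that marks a large perpetual position, then borrow against the inflated paper profit. All figures and the lending ratio are invented for illustration and do not come from the case record.

```python
# Hypothetical illustration of the manipulation pattern described above.
# All numbers are invented; none come from the case record.

def unrealized_pnl(position: float, entry_price: float, mark_price: float) -> float:
    """Paper profit on a long perpetual position, in the quote currency."""
    return position * (mark_price - entry_price)

position = 500_000_000  # perpetual contracts held long
entry = 0.04            # price when the position was opened
pumped = 0.50           # price after aggressive spot buying inflates the oracle

profit = unrealized_pnl(position, entry, pumped)  # $230,000,000 on paper
borrowable = 0.5 * profit  # assume the venue lends against half of paper profit
print(f"paper profit: ${profit:,.0f}; borrowable: ${borrowable:,.0f}")
```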

Netflix doc accused of using AI to manipulate true crime story

Everything is not as it seems —

Producer remained vague about whether AI was used to edit photos.

A cropped image showing Raw TV’s poster for the Netflix documentary What Jennifer Did, which features a long front tooth that leads critics to believe it was AI-generated.

An executive producer of the Netflix hit What Jennifer Did has responded to accusations that the true crime documentary used AI images when depicting Jennifer Pan, a woman currently imprisoned in Canada for orchestrating a murder-for-hire scheme targeting her parents.

What Jennifer Did shot to the top spot in Netflix’s global top 10 when it debuted in early April, attracting swarms of true crime fans who wanted to know more about why Pan paid hitmen $10,000 to murder her parents. But the documentary quickly became a source of controversy as fans noticed glaring flaws in images used in the movie, from weirdly mismatched earrings to Pan’s nose appearing to lack nostrils, the Daily Mail reported in a post showing a plethora of example images from the film.

Futurism was among the first to point out that these flawed images (around the 28-minute mark of the documentary) “have all the hallmarks of an AI-generated photo, down to mangled hands and fingers, misshapen facial features, morphed objects in the background, and a far-too-long front tooth.” The image with the long front tooth was even used in Netflix’s poster for the movie.

Because the movie’s credits do not mention any uses of AI, critics called out the documentary filmmakers for potentially embellishing a movie that’s supposed to be based on real-life events.

But Jeremy Grimaldi—who is also the crime reporter who wrote a book on the case and provided the documentary with research and police footage—told the Toronto Star that the images were not AI-generated.

Grimaldi confirmed that all images of Pan used in the movie were real photos. He said that some of the images were edited, though, not to blur the lines between truth and fiction, but to protect the identity of the source of the images.

“Any filmmaker will use different tools, like Photoshop, in films,” Grimaldi told The Star. “The photos of Jennifer are real photos of her. The foreground is exactly her. The background has been anonymized to protect the source.”

While Grimaldi’s comments provide some assurance that the photos are edited versions of real photos of Pan, they are also vague enough to obscure whether AI was among the “different tools” used to edit the photos.

One photographer, Joe Foley, wrote in a post for Creative Bloq that he thought “documentary makers may have attempted to enhance old low-resolution images using AI-powered upscaling or photo restoration software to try to make them look clearer on a TV screen.”

“The problem is that even the best AI software can only take a poor-quality image so far, and such programs tend to over sharpen certain lines, resulting in strange artifacts,” Foley said.
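
Foley’s theory is straightforward to reproduce in code. The sketch below is a hypothetical example of the kind of AI-powered upscaling he describes, using OpenCV’s contrib dnn_superres module with a pretrained EDSR model; the model file and image paths are placeholders, not anything the filmmakers have confirmed using.

```python
# A sketch of AI-powered photo upscaling of the kind Foley speculates was used.
# Requires opencv-contrib-python plus a pretrained EDSR model file (downloaded
# separately); the file and image paths here are placeholders.
import cv2

sr = cv2.dnn_superres.DnnSuperResImpl_create()
sr.readModel("EDSR_x4.pb")  # pretrained 4x super-resolution weights
sr.setModel("edsr", 4)      # algorithm name and scale factor

low_res = cv2.imread("old_photo.jpg")
upscaled = sr.upsample(low_res)  # the network synthesizes detail never captured

# Classical interpolation as a comparison point: blurrier, but it cannot
# invent features (teeth, fingers) that weren't in the original.
baseline = cv2.resize(low_res, None, fx=4, fy=4,
                      interpolation=cv2.INTER_LANCZOS4)

cv2.imwrite("upscaled_ai.png", upscaled)
cv2.imwrite("upscaled_classical.png", baseline)
```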

Foley suggested that Netflix should have “at the very least” clarified that images had been altered “to avoid this kind of backlash,” noting that “any kind of manipulation of photos in a documentary is controversial because the whole point is to present things as they were.”

Hollywood’s increasing use of AI has indeed been controversial, with screenwriters’ unions opposing AI tools as “plagiarism machines” and artists stirring recent backlash over the “experimental” use of AI art in a horror film. Even using AI for a movie poster, as Civil War did, is enough to generate controversy, the Hollywood Reporter reported.

Neither Raw TV, the production company behind What Jennifer Did, nor Netflix responded to Ars’ request for comment.

Elon Musk’s Grok keeps making up fake news based on X users’ jokes

It’s all jokes until it isn’t —

X likely hopes to avoid liability with disclaimer that Grok “can make mistakes.”

X’s chatbot Grok is supposed to be an AI engine crunching the platform’s posts to surface and summarize breaking news, but this week, Grok’s flaws were once again exposed when the chatbot got confused and falsely accused an NBA star of criminal vandalism.

“Klay Thompson Accused in Bizarre Brick-Vandalism Spree,” Grok’s headline read in an AI-powered trending-tab post that has remained on X (formerly Twitter) for days. Beneath the headline, Grok went into even more detail to support its fake reporting:

In a bizarre turn of events, NBA star Klay Thompson has been accused of vandalizing multiple houses with bricks in Sacramento. Authorities are investigating the claims after several individuals reported their houses being damaged, with windows shattered by bricks. Klay Thompson has not yet issued a statement regarding the accusations. The incidents have left the community shaken, but no injuries were reported. The motive behind the alleged vandalism remains unclear.

Grok appears to have been confused by a common basketball term: players are said to be throwing “bricks” when they badly miss shots. According to SF Gate, which was one of the first outlets to report the Grok error, Thompson had an “all-time rough shooting” night, hitting none of his shots on what was his emotional last game with the Golden State Warriors before becoming an unrestricted free agent.

In small type under Grok’s report, X includes a disclaimer saying, “Grok is an early feature and can make mistakes. Verify its outputs.”

But instead of verifying Grok’s outputs, it appeared that X users—in the service’s famously joke-y spirit—decided to fuel Grok’s misinformation. Under the post, X users, some of them NBA fans, commented with fake victim reports, using the same joke format to seemingly convince Grok that “several individuals reported their houses being damaged.” Some of these joking comments were viewed by millions.

First off… I am ok.

My house was vandalized by bricks 🧱

After my hands stopped shaking, I managed to call the Sheriff…They were quick to respond🚨

My window was gone and the police asked if I knew who did it👮‍♂️

I said yes, it was Klay Thompson

— LakeShowYo (@LakeShowYo) April 17, 2024

First off…I am ok.

My house was vandalized by bricks in Sacramento.

After my hands stopped shaking, I managed to call the Sheriff, they were quick to respond.

My window is gone, the police asked me if I knew who did it.

I said yes, it was Klay Thompson. pic.twitter.com/smrDs6Yi5M

— KeeganMuse (@KeegMuse) April 17, 2024

First off… I am ok.

My house was vandalized by bricks 🧱

After my hands stopped shaking, I managed to call the Sheriff…They were quick to respond🚨

My window was gone and the police asked if I knew who did it👮‍♂️

I said yes, it was Klay Thompson pic.twitter.com/JaWtdJhFli

— JJJ Muse (@JarenJJMuse) April 17, 2024

X did not immediately respond to Ars’ request for comment or confirm if the post will be corrected or taken down.

In the past, both Microsoft and chatbot maker OpenAI have faced defamation lawsuits over similar fabrications in which ChatGPT falsely accused a politician and a radio host of completely made-up criminal histories. Microsoft was also sued by an aerospace professor who Bing Chat falsely labeled a terrorist.

Experts told Ars that it remains unclear if disclaimers like X’s will spare companies from liability should more people decide to sue over fake AI outputs. Defamation claims might depend on proving that platforms “knowingly” publish false statements, which disclaimers suggest they do. Last July, the Federal Trade Commission launched an investigation into OpenAI, demanding that the company address the FTC’s fears of “false, misleading, or disparaging” AI outputs.

Because the FTC doesn’t comment on its investigations, it’s impossible to know if its probe will impact how OpenAI conducts business.

For people suing AI companies, the urgency of protecting against false outputs seems obvious. Last year, the radio host suing OpenAI, Mark Walters, accused the company of “sticking its head in the sand” and “recklessly disregarding whether the statements were false under circumstances when they knew that ChatGPT’s hallucinations were pervasive and severe.”

X just released Grok to all premium users this month, TechCrunch reported, right around the time that X began giving away premium access to the platform’s top users. During that wider rollout, X touted Grok’s new ability to summarize all trending news and topics, perhaps stoking interest in the feature and driving Grok usage to a peak just before Grok spat out the potentially defamatory post about the NBA star.

Thompson has not issued any statements on Grok’s fake reporting.

Grok’s false post about Thompson may be the first widely publicized example of potential defamation from Grok, but it wasn’t the first time that Grok promoted fake news in response to X users joking around on the platform. During the solar eclipse, a Grok-generated headline read, “Sun’s Odd Behavior: Experts Baffled,” Gizmodo reported.

While it’s amusing to some X users to manipulate Grok, the pattern suggests that Grok may also be vulnerable to being manipulated by bad actors into summarizing and spreading more serious misinformation or propaganda. That’s apparently already happening, too. In early April, Grok made up a headline about Iran attacking Israel with heavy missiles, Mashable reported.
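
The failure mode is easy to model. The toy Python stand-in below (purely hypothetical; Grok’s real pipeline is not public) treats every matching post as an independent report, so a few copies of one joke read like corroborating witnesses:

```python
# Toy stand-in for an LLM trend summarizer; not Grok's actual pipeline.
# Repeated copies of a joke look like independent corroborating reports.

posts = [
    "Klay Thompson was throwing bricks all night",                # basketball slang
    "My house was vandalized by bricks. It was Klay Thompson",    # joke victim report
    "My window is gone. Police asked who did it: Klay Thompson",  # same joke, reworded
]

def naive_headline(posts: list[str]) -> str:
    reports = [p for p in posts if "vandalized" in p.lower() or "window" in p.lower()]
    if len(reports) >= 2:  # "several individuals reported their houses being damaged"
        return "Klay Thompson Accused in Bizarre Brick-Vandalism Spree"
    return "No confirmed incidents"

print(naive_headline(posts))  # the joke format becomes a fake news headline
```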

Cops can force suspect to unlock phone with thumbprint, US court rules

The US Constitution’s Fifth Amendment protection against self-incrimination does not prohibit police officers from forcing a suspect to unlock a phone with a thumbprint scan, a federal appeals court ruled yesterday. The ruling does not apply to all cases in which biometrics are used to unlock an electronic device but is a significant decision in an unsettled area of the law.

The US Court of Appeals for the 9th Circuit had to grapple with the question of “whether the compelled use of Payne’s thumb to unlock his phone was testimonial,” the ruling in United States v. Jeremy Travis Payne said. “To date, neither the Supreme Court nor any of our sister circuits have addressed whether the compelled use of a biometric to unlock an electronic device is testimonial.”

A three-judge panel at the 9th Circuit ruled unanimously against Payne, affirming a US District Court’s denial of Payne’s motion to suppress evidence. Payne was a California parolee who was arrested by California Highway Patrol (CHP) after a 2021 traffic stop and charged with possession with intent to distribute fentanyl, fluorofentanyl, and cocaine.

There was a dispute in District Court over whether a CHP officer “forcibly used Payne’s thumb to unlock the phone.” But for the purposes of Payne’s appeal, the government “accepted the defendant’s version of the facts, i.e., ‘that defendant’s thumbprint was compelled.'”

Payne’s Fifth Amendment claim “rests entirely on whether the use of his thumb implicitly related certain facts to officers such that he can avail himself of the privilege against self-incrimination,” the ruling said. Judges rejected his claim, holding “that the compelled use of Payne’s thumb to unlock his phone (which he had already identified for the officers) required no cognitive exertion, placing it firmly in the same category as a blood draw or fingerprint taken at booking.”

“When Officer Coddington used Payne’s thumb to unlock his phone—which he could have accomplished even if Payne had been unconscious—he did not intrude on the contents of Payne’s mind,” the court also said.

Suspect’s mental process is key

Payne conceded that “the use of biometrics to open an electronic device is akin to providing a physical key to a safe” but argued it is still a testimonial act because it “simultaneously confirm[s] ownership and authentication of its contents,” the court said. “However, Payne was never compelled to acknowledge the existence of any incriminating information. He merely had to provide access to a source of potential information.”

The appeals court cited two Supreme Court rulings in cases involving the US government. In Doe v. United States in 1988, the government compelled a person to sign forms consenting to disclosure of bank records relating to accounts that the government already knew about. The Supreme Court “held that this was not a testimonial production, reasoning that the signing of the forms related no information about existence, control, or authenticity of the records that the bank could ultimately be forced to produce,” the 9th Circuit said.

In United States v. Hubbell in 2000, a subpoena compelled a suspect to produce 13,120 pages of documents and records and respond “to a series of questions that established that those were all of the documents in his custody or control that were responsive to the commands in the subpoena.” The Supreme Court ruled against the government, as the 9th Circuit explained:

The Court held that this act of production was of a fundamentally different kind than that at issue in Doe because it was “unquestionably necessary for respondent to make extensive use of ‘the contents of his own mind’ in identifying the hundreds of documents responsive to the requests in the subpoena.” The “assembly of those documents was like telling an inquisitor the combination to a wall safe, not like being forced to surrender the key to a strongbox.” Thus, the dividing line between Doe and Hubbell centers on the mental process involved in a compelled act, and an inquiry into whether that act implicitly communicates the existence, control, or authenticity of potential evidence.

Feds appoint “AI doomer” to run AI safety at US institute

Confronting doom —

Former OpenAI researcher once predicted a 50 percent chance of AI killing all of us.

The US AI Safety Institute—part of the National Institute of Standards and Technology (NIST)—has finally announced its leadership team after much speculation.

Appointed as head of AI safety is Paul Christiano, a former OpenAI researcher who pioneered a foundational AI safety technique called reinforcement learning from human feedback (RLHF) but is also known for predicting that “there’s a 50 percent chance AI development could end in ‘doom.'” While Christiano’s research background is impressive, some fear that by appointing a so-called “AI doomer,” NIST risks encouraging the kind of non-scientific thinking that many critics view as sheer speculation.
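
For context on the technique itself: RLHF starts by fitting a reward model to human preference pairs, then optimizes the language model against that reward. The minimal PyTorch sketch below shows the standard pairwise (Bradley-Terry-style) loss used in the reward-modeling step; it is a generic textbook illustration, not code from Christiano’s work or any production system.

```python
import torch
import torch.nn.functional as F

# Reward-modeling step of RLHF, in miniature: a reward model scores a
# human-preferred and a rejected response to the same prompt, and the
# pairwise loss pushes the preferred score above the rejected one.

def reward_model_loss(r_chosen: torch.Tensor, r_rejected: torch.Tensor) -> torch.Tensor:
    # -log sigmoid(r_chosen - r_rejected), averaged over the batch
    return -F.logsigmoid(r_chosen - r_rejected).mean()

# Toy scores for a batch of three preference pairs
r_chosen = torch.tensor([1.2, 0.3, 0.8])
r_rejected = torch.tensor([0.4, 0.9, -0.1])
print(reward_model_loss(r_chosen, r_rejected).item())
# Loss shrinks as chosen responses consistently outscore rejected ones.
```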

There have been rumors that NIST staffers oppose the hiring. A controversial VentureBeat report last month cited two anonymous sources claiming that, seemingly because of Christiano’s so-called “AI doomer” views, NIST staffers were “revolting.” Some staff members and scientists allegedly threatened to resign, VentureBeat reported, fearing “that Christiano’s association” with effective altruism and “longtermism could compromise the institute’s objectivity and integrity.”

NIST’s mission is rooted in advancing science by working to “promote US innovation and industrial competitiveness by advancing measurement science, standards, and technology in ways that enhance economic security and improve our quality of life.” Effective altruists believe in “using evidence and reason to figure out how to benefit others as much as possible,” and longtermists believe that “we should be doing much more to protect future generations.” Both stances are more subjective and opinion-based.

On the Bankless podcast last year, Christiano said that “there’s something like a 10–20 percent chance of AI takeover” that results in humans dying, and that “overall, maybe you’re getting more up to a 50-50 chance of doom shortly after you have AI systems that are human level.”

“The most likely way we die involves—not AI comes out of the blue and kills everyone—but involves we have deployed a lot of AI everywhere… [And] if for some reason, God forbid, all these AI systems were trying to kill us, they would definitely kill us,” Christiano said.

Critics of so-called “AI doomers” have warned that focusing on potentially overblown talk of hypothetical killer AI systems or existential AI risks may stop humanity from focusing on current perceived harms from AI, including environmental, privacy, ethics, and bias issues. Emily Bender, a University of Washington professor of computational linguistics who has warned about AI doomers thwarting important ethical work in the field, told Ars that because “weird AI doomer discourse” was included in Joe Biden’s AI executive order, “NIST has been directed to worry about these fantasy scenarios” and “that’s the underlying problem” leading to Christiano’s appointment.

“I think that NIST probably had the opportunity to take it a different direction,” Bender told Ars. “And it’s unfortunate that they didn’t.”

As head of AI safety, Christiano will seemingly have to monitor for current and potential risks. He will “design and conduct tests of frontier AI models, focusing on model evaluations for capabilities of national security concern,” steer processes for evaluations, and implement “risk mitigations to enhance frontier model safety and security,” the Department of Commerce’s press release said.

Christiano has experience mitigating AI risks. He left OpenAI to found the Alignment Research Center (ARC), which the Commerce Department described as “a nonprofit research organization that seeks to align future machine learning systems with human interests by furthering theoretical research.” Part of ARC’s mission is to test if AI systems are evolving to manipulate or deceive humans, ARC’s website said. ARC also conducts research to help AI systems scale “gracefully.”

Because of Christiano’s research background, some people think he is a good choice to helm the safety institute, such as Divyansh Kaushik, an associate director for emerging technologies and national security at the Federation of American Scientists. On X (formerly Twitter), Kaushik wrote that the safety institute is designed to mitigate chemical, biological, radiological, and nuclear risks from AI, and Christiano is “extremely qualified” for testing those AI models. Kaushik cautioned, however, that “if there’s truth to NIST scientists threatening to quit” over Christiano’s appointment, “obviously that would be serious if true.”

The Commerce Department does not comment on its staffing, so it’s unclear if anyone actually resigned or plans to resign over Christiano’s appointment. Since the announcement, Ars has not found any public statements from NIST staffers suggesting that they are considering stepping down.

In addition to Christiano, the safety institute’s leadership team will include Mara Quintero Campbell, a Commerce Department official who led projects on COVID response and CHIPS Act implementation, as acting chief operating officer and chief of staff. Adam Russell, an expert focused on human-AI teaming, forecasting, and collective intelligence, will serve as chief vision officer. Rob Reich, a human-centered AI expert on leave from Stanford University, will be a senior advisor. And Mark Latonero, a former White House global AI policy expert who helped draft Biden’s AI executive order, will be head of international engagement.

“To safeguard our global leadership on responsible AI and ensure we’re equipped to fulfill our mission to mitigate the risks of AI and harness its benefits, we need the top talent our nation has to offer,” Gina Raimondo, US Secretary of Commerce, said in the press release. “That is precisely why we’ve selected these individuals, who are the best in their fields, to join the US AI Safety Institute executive leadership team.”

VentureBeat’s report claimed that Raimondo directly appointed Christiano.

Bender told Ars that there’s no advantage to NIST including “doomsday scenarios” in its research on “how government and non-government agencies are using automation.”

“The fundamental problem with the AI safety narrative is that it takes people out of the picture,” Bender told Ars. “But the things we need to be worrying about are what people do with technology, not what technology autonomously does.”

Billions of public Discord messages may be sold through a scraping service

Discord chat-scraping service —

Cross-server tracking suggests a new understanding of “public” chat servers.

It’s easy to get the impression that Discord chat messages are ephemeral, especially across different public servers, where lines fly upward at a near-unreadable pace. But someone claims to be catching and compiling that data and is offering packages that can track more than 600 million users across more than 14,000 servers.

Joseph Cox at 404 Media confirmed that Spy Pet, a service that sells access to a database of purportedly 3 billion Discord messages, offers data “credits” to customers who pay in bitcoin, ethereum, or other cryptocurrency. Searching individual users will reveal the servers that Spy Pet can track them across, a raw and exportable table of their messages, and connected accounts, such as GitHub. Ominously, Spy Pet lists more than 86,000 other servers in which it has “no bots,” but “we know it exists.”

  • An example of Spy Pet’s service from its website, showing a user’s nicknames, connected accounts, banner image, server memberships, and messages across the servers Spy Pet tracks. (Spy Pet)

  • Statistics on servers, users, and messages purportedly logged by Spy Pet. (Spy Pet)

  • An example of the publicly available data gathered by Spy Pet, here for a public server for the game Deep Rock Galactic: Survivor. (Spy Pet)

As Cox notes, Discord doesn’t make messages inside server channels easy to publicly access and search the way blog posts or unlocked social media feeds are. But many Discord users may not expect their messages, server memberships, bans, or other data to be grabbed by a bot, compiled, and sold to anybody wishing to pin them all on a particular user. 404 Media confirmed the service’s function with multiple user examples. Private messages are not mentioned by Spy Pet and are presumably still secure.

Spy Pet openly asks those training AI models, or “federal agents looking for a new source of intel,” to contact them for deals. As noted by 404 Media and confirmed by Ars, clicking on the “Request Removal” link plays a clip of J. Jonah Jameson from Spider-Man (the Tobey Maguire/Sam Raimi version) laughing at the idea of advance payment before an abrupt “You’re serious?” Users of Spy Pet, however, are assured of “secure and confidential” searches, with random usernames.

This author found nearly every public Discord he had ever dropped into for research or reporting in Spy Pet’s server list. Those who haven’t paid for message access can only see fairly benign public-facing elements, like stickers, emojis, and charted member totals over time. But as an indication of the reach of Spy Pet’s scraping, it’s an effective warning, or enticement, depending on your goals.

Ars has reached out to Spy Pet for comment and will update this post if we receive a response. A Discord spokesperson told Ars that the company is investigating whether Spy Pet violated its terms of service and community guidelines. It will take “appropriate steps to enforce our policies,” the company said, and could not provide further comment.

Tesla asks shareholders to approve Texas move and restore Elon Musk’s $56B pay

Tesla CEO Elon Musk at an opening event for Tesla’s Gigafactory on March 22, 2022, in Gruenheide, southeast of Berlin. (Getty Images | Patrick Pleul)

Tesla is asking shareholders to approve a move to Texas and to re-approve a $55.8 billion pay package for CEO Elon Musk that was recently voided by a Delaware judge.

Musk’s 2018 pay package was voided in a ruling by Delaware Court of Chancery Judge Kathaleen McCormick, who found that the deal was unfair to shareholders. After the ruling, Musk said he would seek a shareholder vote on transferring Tesla’s state of incorporation from Delaware to Texas.

The proposed move to Texas and Musk’s pay package will be up for votes at Tesla’s 2024 annual meeting on June 13, Tesla Board Chairperson Robyn Denholm wrote in a letter to shareholders that was included in a regulatory filing today.

“Because the Delaware Court second-guessed your decision, Elon has not been paid for any of his work for Tesla for the past six years that has helped to generate significant growth and stockholder value,” the letter said. “That strikes us—and the many stockholders from whom we already have heard—as fundamentally unfair, and inconsistent with the will of the stockholders who voted for it.”

On the proposed move to Texas, the letter to shareholders said that “Texas is already our business home, and we are committed to it.” Moving the state of incorporation is really about operating under a state’s laws and court system, though. Incorporating in Texas “will restore Tesla’s stockholder democracy,” Denholm wrote.

Judge: Board members “were beholden to Musk”

Musk is a member of Tesla’s board. Although Musk and his brother Kimbal recused themselves from the 2018 pay-plan vote, McCormick’s ruling said that “five of the six directors who voted on the Grant were beholden to Musk or had compromising conflicts.” McCormick determined that the proxy statement given to investors for the 2018 vote “inaccurately described key directors as independent and misleadingly omitted details about the process.”

McCormick also wrote that Denholm had a “lackadaisical approach to her oversight obligations” and that she “derived the vast majority of her wealth from her compensation as a Tesla director.”

The ruling in favor of lead plaintiff and Tesla shareholder Richard Tornetta rescinded Musk’s pay package in order to “restore the parties to the position they occupied before the challenged transaction.”

Tornetta’s lawyer, Greg Varallo, declined to provide any detailed comment on Tesla’s plan for a new shareholder vote. “We are studying the Tesla proxy and will decide on any response in due course,” Varallo told Ars today.

In the new letter to shareholders, Denholm wrote that Tesla’s performance since 2018 proves that the pay package was deserved. Although Tesla’s stock price has fallen about 37 percent this year, it is up more than 630 percent since the March 2018 shareholder vote.

“We do not agree with what the Delaware Court decided, and we do not think that what the Delaware Court said is how corporate law should or does work,” Denholm wrote. “So we are coming to you now so you can help fix this issue—which is a matter of fundamental fairness and respect to our CEO. You have the chance to reinstate your vote and make it count. We are asking you to make your voice heard—once again—by voting to approve ratification of Elon’s 2018 compensation plan.”

ISPs can charge extra for fast gaming under FCC’s Internet rules, critics say

Fast lanes —

FCC plan rejected request to ban what agency calls “positive” discrimination.

Some net neutrality proponents are worried that soon-to-be-approved Federal Communications Commission rules will allow harmful fast lanes because the plan doesn’t explicitly ban “positive” discrimination.

FCC Chairwoman Jessica Rosenworcel’s proposed rules for Internet service providers would prohibit blocking, throttling, and paid prioritization. The rules mirror the ones imposed by the FCC during the Obama era and repealed during Trump’s presidency. But some advocates are criticizing a decision to let Internet service providers speed up certain types of applications as long as application providers don’t have to pay for special treatment.

Stanford Law Professor Barbara van Schewick, who has consistently argued for stricter net neutrality rules, wrote in a blog post on Thursday that “harmful 5G fast lanes are coming.”

“T-Mobile, AT&T and Verizon are all testing ways to create these 5G fast lanes for apps such as video conferencing, games, and video where the ISP chooses and controls what gets boosted,” van Schewick wrote. “They use a technical feature in 5G called network slicing, where part of their radio spectrum gets used as a special lane for the chosen app or apps, separated from the usual Internet traffic. The FCC’s draft order opens the door to these fast lanes, so long as the app provider isn’t charged for them.”
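
Conceptually, a slice behaves like a strict-priority queue: traffic in the boosted class is always served first. The toy Python simulation below illustrates that effect; it is an analogy only, since real 5G slicing reserves radio and core-network resources rather than reordering a software queue.

```python
import heapq
from dataclasses import dataclass, field

# Toy strict-priority scheduler as an analogy for a network "fast lane."
# Packets in the boosted class (priority 0) always transmit before
# best-effort traffic (priority 1), regardless of arrival order.

@dataclass(order=True)
class Packet:
    priority: int            # 0 = boosted slice, 1 = best-effort
    seq: int                 # arrival order, breaks ties
    app: str = field(compare=False)

queue: list[Packet] = []
arrivals = [("email", False), ("cloud-game", True), ("web", False), ("cloud-game", True)]
for seq, (app, boosted) in enumerate(arrivals):
    heapq.heappush(queue, Packet(0 if boosted else 1, seq, app))

while queue:
    pkt = heapq.heappop(queue)
    print(f"transmit {pkt.app} (class {pkt.priority})")
# cloud-game packets leave first even though they arrived after email
```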

In an FCC filing yesterday, AT&T said that carriers will use network slicing “to better meet the needs of particular business applications and consumer preferences than they could over a best-efforts network that generally treats all traffic the same.”

Carriers could charge more for faster gaming

Van Schewick warns that carriers could charge consumers more for plans that speed up specific types of content. For example, a mobile operator could offer a basic plan alongside more expensive tiers that boost certain online games or a tier that boosts services like YouTube and TikTok.

Ericsson, a telecommunications vendor that sells equipment to carriers including AT&T, Verizon, and T-Mobile, has pushed for exactly this type of service. In a report on how network slicing can be used commercially, Ericsson said that “many gamers are willing to pay for enhanced gaming experiences” and would “pay up to $10.99 more for a guaranteed gaming experience on top of their 5G monthly subscription.”

Before the draft net neutrality order was released, van Schewick urged the FCC to “clarify that its proposed no-throttling rule prohibits ISPs from speeding up and slowing down applications and classes of applications.”

In a different filing last month, several advocacy groups similarly argued that the “no-throttling rule needs to ban selective speeding up, in addition to slowing down.” That filing was submitted by the American Civil Liberties Union, the Electronic Frontier Foundation, the Open Technology Institute at New America, Public Knowledge, Fight for the Future, and United Church of Christ Media Justice Ministry.

The request for a ban on selective speeding was denied in paragraph 492 of Rosenworcel’s draft rules, which are scheduled for an April 25 vote. The draft order argues that the FCC’s definition of “throttling” is expansive enough that an explicit ban on what the agency called positive discrimination isn’t needed:

With the no-throttling rule, we ban conduct that is not outright blocking, but inhibits the delivery of particular content, applications, or services, or particular classes of content, applications, or services. Likewise, we prohibit conduct that impairs or degrades lawful traffic to a non-harmful device or class of devices. We interpret this prohibition to include, for example, any conduct by a BIAS [Broadband Internet Access Service] provider that impairs, degrades, slows down, or renders effectively unusable particular content, services, applications, or devices, that is not reasonable network management. Our interpretation of “throttling” encompasses a wide variety of conduct that could impair or degrade an end user’s ability to access content of their choosing; thus, we decline commenters’ request to modify the rule to explicitly include positive and negative discrimination of content.

So much for free speech on X; Musk confirms new users must soon pay to post

100 pennies for your thoughts? —

The fee, likely $1, is aimed at stopping “relentless” bots, Musk said.

Elon Musk confirmed Monday that X (formerly Twitter) plans to start charging new users to post on the platform, TechCrunch reported.

“Unfortunately, a small fee for new user write access is the only way to curb the relentless onslaught of bots,” Musk wrote on X.

In October, X confirmed that it was testing whether users would pay a small annual fee to access the platform by suddenly charging new users in New Zealand and the Philippines $1. Paying the fee enabled new users in those countries to post, reply, like, and bookmark X posts.

That test was dubbed the “Not-A-Bot” program, and it’s unclear how successful it was at stopping bots. But X’s decision to expand the program suggests the test had at least some success.

Musk has not yet clarified when X’s “small fee” might be required for new users, only confirming in a later post that any new users who avoid paying the fee will be able to post after three months. Ars created new accounts on the web and in the app, and neither signup yet required a fee.

Although Musk’s posts only mention paying for “write access,” it seems likely that the other features limited by the “Not-A-Bot” program will also be restricted during those three months for new users who do not pay the fee. An X account called @x_alerts_ noticed on Sunday that X was updating its web app text, seemingly to enable the “Not-A-Bot” program.

“Changes have been detected in the texts of the X web app!” @x_alerts_ wrote, noting that the altered text seemed to limit not just posting and replying, but also liking and bookmarking X posts.

“It looks like this text has been in the app, but they recently changed it, so not sure whether it’s an indication of launch or not!” the user wrote.

Back when X launched the “Not-A-Bot” program, Musk claimed that charging a $1 annual fee would make it “1000X harder to manipulate the platform.” In a help center post, X said that the “test was developed to bolster our already significant efforts to reduce spam, manipulation of our platform, and bot activity.”

Earlier this month, X warned users it was widely purging spam accounts, TechCrunch noted. X Support confirmed that follower counts would likely be impacted during that purge, because “we’re casting a wide net to ensure X remains secure and free of bots.”

But that attempt to purge bots apparently did not work as well as X hoped. This week, Musk confirmed that X is still struggling with “AI (and troll farms)” that he said are easily able to pass X’s “are you a bot” tests.

It’s hard to keep up with X’s inconsistent messaging on its bot problem since Musk took over. Last summer, Musk told attendees of The Wall Street Journal’s CEO Council that the platform had “eliminated at least 90 percent of scams,” claiming there had been a “dramatic improvement” in the platform’s ability to “detect and remove troll armies.”

At that time, experts told The Journal that solving X’s bot problem was nearly impossible because spammers’ tactics were always evolving and bots had begun using generative AI to avoid detection.

Musk’s plan to charge a fee to overcome bots won’t work, experts told WSJ, because anyone determined to spam X can just find credit cards and buy disposable phones on the dark web. And any bad actor who can’t find what they need on the dark web could theoretically just wait three months to launch scams or spread harmful content like disinformation or propaganda. This leads some critics to wonder what the point of charging the small fee really is.

When the “Not-A-Bot” program launched, X Support directly disputed critics’ claims that the program was simply testing whether charging small fees might expand X’s revenue to help Musk get the platform out of debt.

“This new test was developed to bolster our already successful efforts to reduce spam, manipulation of our platform, and bot activity, while balancing platform accessibility with the small fee amount,” X Support wrote on X. “It is not a profit driver.”

It seems likely that Musk is simply trying everything he can think of to reduce bots on the platform, even though it’s widely known that charging a subscription fee has failed to stop bots from overrunning other online platforms (just ask frustrated fans of World of Warcraft). Musk, who famously overpaid for Twitter and has been climbing out of debt since, has claimed since before the Twitter deal closed that his goal was to eliminate bots on the platform.

“We will defeat the spam bots or die trying!” Musk tweeted back in 2022, when a tweet was still a tweet and everyone could depend on accessing Twitter for free.

US woman arrested, accused of targeting young boys in $1.7M sextortion scheme

Preventing leaks —

FBI has warned of significant spike in teen sextortion in 2024.

A 28-year-old Delaware woman, Hadja Kone, was arrested after cops linked her to an international sextortion scheme targeting thousands of victims—mostly young men and including some minors, the US Department of Justice announced Friday.

Citing a recently unsealed indictment, the DOJ alleged that Kone and co-conspirators “operated an international, financially motivated sextortion and money laundering scheme in which the conspirators engaged in cyberstalking, interstate threats, money laundering, and wire fraud.”

Through the scheme, conspirators allegedly sought to extort about $6 million from “thousands of potential victims,” the DOJ said, and ultimately successfully extorted approximately $1.7 million.

Young men from the United States, Canada, and the United Kingdom fell for the scheme, the DOJ said. They were allegedly targeted by scammers posing as “young, attractive females online,” who initiated conversations by offering to send sexual photographs or video recordings, then invited victims to “web cam” or “live video chat” sessions.

“Unbeknownst to the victims, during the web cam/live video chats,” the DOJ said, the scammers would “surreptitiously” record the victims “as they exposed their genitals and/or engaged in sexual activity.” The scammers then threatened to publish the footage online or else share the footage with “the victims’ friends, family members, significant others, employers, and co-workers,” unless payments were sent, usually via Cash App or Apple Pay.

Much of this money was allegedly transferred overseas to Kone’s accused co-conspirators, including 22-year-old Siaka Ouattara of the West African nation of Ivory Coast. Ouattara was arrested by Ivorian authorities in February, the DOJ said.

“If convicted, Kone and Ouattara each face a maximum penalty of 20 years in prison for each conspiracy count and money laundering count, and a maximum penalty of 20 years in prison for each wire fraud count,” the DOJ said.

The FBI has said that it has been cracking down on sextortion after “a huge increase in the number of cases involving children and teens being threatened and coerced into sending explicit images online.” In 2024, the FBI announced a string of arrests, but none of the schemes so far have been as vast or far-reaching as the scheme that Kone allegedly helped operate.

In January, the FBI issued a warning about the “growing threat” to minors, warning parents that victims are “typically males between the ages of 14 to 17, but any child can become a victim.” Young victims are at risk of self-harm or suicide, the FBI said.

“From October 2021 to March 2023, the FBI and Homeland Security Investigations received over 13,000 reports of online financial sextortion of minors,” the FBI’s announcement said. “The sextortion involved at least 12,600 victims—primarily boys—and led to at least 20 suicides.”

For years, reports have shown that payment apps have been used in sextortion schemes with seemingly little intervention. When it comes to protecting minors, sextortion protections seem sparse, as neither Apple Pay nor Cash App appears to have any specific policies to combat the issue. Both apps allow minors to create accounts only if they are at least 13 and supervised by an authorized adult.

Apple and Cash App did not immediately respond to Ars’ request to comment.

Instagram, Snapchat add sextortion protections

Some social media platforms are responding to the spike in sextortion targeting minors.

Last year, Snapchat released a report finding that nearly two-thirds of more than 6,000 teens and young adults in six countries said that “they or their friends have been targeted in online ‘sextortion’ schemes” across many popular social media platforms. As a result of that report and prior research, Snapchat began allowing users to report sextortion specifically.

“Under the reporting menu for ‘Nudity or sexual content,’ a Snapchatter’s first option is to click, ‘They leaked/are threatening to leak my nudes,'” the report said.

Additionally, the DOJ’s announcement of Kone’s arrest came one day after Instagram confirmed that it was “testing new features to help protect young people from sextortion and intimate image abuse, and to make it more difficult for potential scammers and criminals to find and interact with teens.”

One feature will by default blur out sexual images shared over direct message, which Instagram said would protect minors from “scammers who may send nude images to trick people into sending their own images in return.” Instagram will also provide safety tips to anyone receiving a sexual image over DM, “encouraging them to report any threats to share their private images and reminding them that they can say no to anything that makes them feel uncomfortable.”

Perhaps more impactful, Instagram claimed that it was “developing technology to help identify where accounts may potentially be engaging in sextortion scams, based on a range of signals that could indicate sextortion behavior.” Better signals, the platform said, make it “harder for potential sextortion accounts to message or interact with people” because message requests from those accounts are hidden. Instagram also by default blocks adults from messaging users under 16 in some countries and under 18 in others.

Instagram said that other tech companies have also started “sharing more signals about sextortion accounts” through Lantern, a program that Meta helped to found with the Tech Coalition to prevent child sexual exploitation. Snapchat also participates in the cross-platform research.

According to the special agent in charge of the FBI’s Norfolk field office, Brian Dugan, “one of the best lines of defense to stopping a crime like this is to educate our most vulnerable on common warning signs, as well as empowering them to come forward if they are ever victimized.”

Both Instagram and Snapchat said they were also increasing sextortion resources available to educate young users.

“We know that sextortion is a risk teens and adults face across a range of platforms, and have developed tools and resources to help combat it,” Snap’s spokesperson told Ars. “We have extra safeguards for teens to protect against unwanted contact, and don’t offer public friend lists, which we know can be used to extort people. We also want to help young people learn the signs of this type of crime, and recently launched in-app resources to raise awareness of how to spot and report it.”

Judge halts Texas probe into Media Matters’ reporting on X

Texas Attorney General Ken Paxton speaks during the annual Conservative Political Action Conference (CPAC) meeting on February 23, 2024.

A judge has preliminarily blocked what Media Matters for America (MMFA) described as Texas Attorney General Ken Paxton’s attempt to “rifle through” confidential documents to prove that MMFA fraudulently manipulated X (formerly Twitter) data to ruin X’s advertising business, as Elon Musk has alleged.

After Musk accused MMFA of publishing reports that Musk claimed were designed to scare advertisers off X, Paxton promptly launched his own investigation into MMFA last November.

Investigating MMFA over alleged violations of Texas’ Deceptive Trade Practices Act—which prohibits “disparaging the goods, services, or business of another by false or misleading representation of facts”—Paxton sought a wide range of MMFA documents through a civil investigative demand (CID). In a motion to block the CID, MMFA told the court that the demand violated the media organization’s First Amendment rights, providing evidence that Paxton’s investigation and CID had chilled MMFA’s speech.

Paxton had requested Media Matters’ financial records—including “direct and indirect sources of funding for all Media Matters operations involving X research or publications”—as well as “internal and external communications” on “Musk’s purchase of X” and X’s current CEO Linda Yaccarino. He also asked for all of Media Matters’ communications with X representatives and X advertisers.

But perhaps most invasive, Paxton wanted to see all the communications about Media Matters’ X reporting that triggered the lawsuits, which, as US District Judge Amit Mehta wrote in an opinion published Friday, was a compelled disclosure that “poses a serious threat to the vitality of the newsgathering process.”

Mehta credited MMFA’s showing that “Media Matters’ editorial leaders have pared back reporting and publishing, particularly on any topics that could be perceived as relating to the Paxton investigation,” including two follow-ups on its X reporting. Because of Paxton’s alleged First Amendment retaliation, MMFA said, it did not publish “two pieces concerning X’s placement of advertising alongside antisemitic, pro-Nazi accounts”—“not out of legitimate concerns about fairness or accuracy,” but “out of fear of harassment, threats, and retaliation.”

According to Mehta’s order, Paxton did not contest that Texas’ lawsuit had chilled MMFA’s speech. Further, Paxton had given at least one podcast interview where he called upon other state attorneys general to join him in investigating MMFA.

Because Paxton “projected himself across state lines and asserted a pseudo-national executive authority,” as Mehta wrote, and repeatedly described MMFA as a “radical anti-free speech” or “radical left-wing organization,” the court saw sufficient “evidence of retaliatory intent.”

“Notably,” Mehta wrote, Paxton remained “silent” and never “submitted a sworn declaration that explains his reasons for opening the investigation.”

In his press release, Paxton justified the investigation by saying, “We are examining the issue closely to ensure that the public has not been deceived by the schemes of radical left-wing organizations who would like nothing more than to limit freedom by reducing participation in the public square.”

Ultimately, Mehta granted MMFA’s request for a preliminary injunction to block Paxton’s CID because the judge found that the investigation and the CID have caused MMFA “to self-censor when making research and publication decisions, adversely affected the relationships between editors and reporters, and restricted communications with sources and journalists.”

“Only injunctive relief will ‘prevent the [ongoing] deprivation of free speech rights,'” Mehta’s opinion said, deeming MMFA’s reporting as “core First Amendment activities.”

Mehta’s order also banned Paxton from taking any steps to further his investigation until the lawsuit is decided.

In a statement Friday, MMFA President and CEO Angelo Carusone celebrated the win as not just against Paxton but also against Musk.

“Elon Musk encouraged Republican state attorneys general to use their power to harass their critics and stifle reporting about X,” Carusone said. “Ken Paxton was one of those AGs that took up the call and he was defeated. Today’s decision is a victory for free speech.”

Paxton has not yet responded to the preliminary injunction, and his office did not respond to Ars’ request for comment.

Media Matters’ lawyer, Aria C. Branch, a partner at Elias Law Group, told Ars that “while Attorney General Paxton’s office has not yet responded to Friday’s ruling, the preliminary injunction should certainly put an end to these kind of lawless, politically motivated attempts to muzzle the press.”

Elon Musk’s X to stop allowing users to hide their blue checks

Nothing to hide —

X previously promised to “evolve” the “hide your checkmark” feature.

X will soon stop allowing users to hide their blue checkmarks, and some users are not happy.

Previously, a blue tick on Twitter was a mark of a notable account, providing some assurance to followers of the account’s authenticity. But then Elon Musk decided to start charging for the blue tick instead, and mayhem ensued as a wave of imposter accounts began jokingly posing as brands.

After that, paying for a blue checkmark began to attract derision, as non-paying users passed around a meme under blue-checked posts, saying, “This MF paid for Twitter.” To help spare paid subscribers this embarrassment, X began allowing users to hide their blue check last August, turning “hide your checkmark” into a feature of paid subscriptions.

However, earlier this month, X decided that hiding a checkmark would no longer be allowed, deleting the feature from its webpage detailing what comes with X Premium. An archive of X’s page shows that the language about how to hide your checkmark was removed after April 6, with X no longer promising to “continue to evolve this feature to make it better for you” but instead abruptly ending the perk.

X’s decision to stop hiding checkmarks came after the platform began gifting blue checkmarks to popular accounts. Back in April 2023, then-Twitter had awarded blue checks to celebrity accounts with more than a million followers. Last week, now-X doled out even more blue checks to accounts with over 2,500 paid verified followers. Now, accounts with more than 2,500 paid verified followers get Premium features for free, and accounts with more than 5,000 paid verified followers get Premium+.

You might think that X giving out freebies would be well-received, but Business Insider tech reporter Katie Notopoulos, one of many accounts suddenly gifted the blue check, summed up how many X users were feeling about the gifted tick by asking, “does it seem uncool?”

X doesn’t seem to care anymore if blue checks are seen as uncool, though. Anyone who doesn’t want the complimentary check can refuse it, and any paid subscriber upset about losing the ability to hide their checkmark can always just stop paying for Premium features.

According to X, anyone deciding to cancel their subscription over the loss of the “hide your checkmark” feature can expect the check to remain on their account “until the end of the subscription term you paid for, unless your account is suspended or the blue checkmark is otherwise removed by X for any reason.”

X could also suddenly remove a checkmark without refunding users in extreme circumstances.

“X reserves the right without notice to remove your blue checkmark at any time in its sole discretion without offering you a refund, including if you violate our Terms of Service or if your account is suspended,” X’s subscription page warns.

X Daily, an X news account, announced that the change was coming this week, gathering “meltdown reactions” from users who are upset that their blue checks will soon no longer be hidden.

“Let me hide my checkmark, I’m not a fucking bot,” a user called @4gntt posted, the complaint seemingly alluding to Musk’s claim that paid subscriptions are the only way to stop bots from overrunning X.

“Oh no,” another user, @jeremyphoward, posted. “I signed up to X Premium since it’s required for them to pay me… but now they [are] making the cringemark non-optional 🙁 Not sure if it’s worth it.”

It’s currently unclear when the “hide your checkmark” feature will stop working. Neither of those users criticizing X currently display a blue tick on their profile, suggesting that their checks are still hidden, but it’s also possible that some users immediately stopped paying in response to the policy change.
