Donald Trump

Elon Musk’s X loses battle over federal request for Trump’s DMs


Prosecutors now have a “blueprint” to seize privileged communications, X warned.

Last year, special counsel Jack Smith asked X (formerly Twitter) to hand over Donald Trump’s direct messages from his presidency without telling Trump. Refusing to comply, X spent the past year arguing that the gag order was an unconstitutional prior restraint on X’s speech and an “end-run” around a records law shielding privileged presidential communications.

Under its owner Elon Musk, a self-described free speech absolutist, X took this fight all the way to the Supreme Court, only for the nation’s highest court to decline to review X’s appeal on Monday.

It’s unclear exactly why SCOTUS rejected X’s appeal, but in a court filing opposing SCOTUS review, Smith told the court that X’s “contentions lack merit and warrant no further review.” And SCOTUS seemingly agreed.

The government had argued that its nondisclosure order was narrowly tailored to serve a compelling interest in stopping Trump from either deleting his DMs or intimidating witnesses involved in those DMs from his time in office.

At that time, Smith was publicly probing interference with the peaceful transfer of power after the 2020 presidential election, and courts had agreed that “there were ‘reasonable grounds to believe’ that disclosing the warrant” to Trump “‘would seriously jeopardize the ongoing investigation’ by giving him ‘an opportunity to destroy evidence, change patterns of behavior, [or] notify confederates,’” Smith’s court filing said.

Under the Stored Communications Act (SCA), the government can request data and apply for a nondisclosure order gagging any communications provider from tipping off an account holder about search warrants for limited periods deemed appropriate by a court, Smith noted. X was only prohibited from alerting Trump to the search warrant for 180 days, Smith said, and only restricted from discussing the existence of the warrant.

As the government sees it, this reliance on the SCA “does not give unbounded, standardless discretion to government officials or otherwise create a risk of ‘freewheeling censorship,'” like X claims. But the government warned that affirming X’s appeal “would mean that no SCA warrant could be enforced without disclosure to a potential privilege holder, regardless of the dangers to the integrity of the investigation.”

Court finds X’s alternative to gag order “unpalatable”

X tried to wave a red flag in its SCOTUS petition, warning the court that this was “the first time in American history” that a court “ordered disclosure of presidential communications without notice to the President and without any adjudication of executive privilege.”

The social media company argued that it receives “tens of thousands” of government data requests annually—including “thousands” with nondisclosure orders—and pushes back on any request for privileged information that does not allow users to assert their privileges. Allowing the lower court rulings to stand, X warned SCOTUS, could create a path for government to illegally seize information not just protected by executive privilege, but also by attorney-client, doctor-patient, or journalist-source privileges.

X’s “policy is to notify users about law enforcement requests ‘prior to disclosure of account information’ unless legally ‘prohibited from doing so,'” X argued.

X suggested that rather than seize Trump’s DMs without giving him a chance to assert his executive privilege, the government should have designated a representative capable of weighing and asserting whether some of the data requested was privileged. That’s how the Presidential Records Act (PRA) works, X noted, suggesting that Smith’s team was improperly trying to avoid PRA compliance by invoking SCA instead.

But the US government didn’t have to prove that the less-restrictive alternative X submitted would have compromised its investigation, X said, because the court categorically rejected X’s submission as “unworkable” and “unpalatable.”

According to the court, designating a representative would have strained the government by forcing it to deduce whether the representative could be trusted not to disclose the search warrant. But X pointed out that the government had no explanation for why a PRA-designated representative, Steven Engel—a former assistant attorney general for the Office of Legal Counsel who “publicly testified about resisting the former President’s conduct”—“could not be trusted to follow a court order forbidding him from further disclosure.”

“Going forward, the government will never have to prove it could avoid seriously jeopardizing its investigation by disclosing a warrant to only a trusted representative—a common alternative to nondisclosure orders,” X argued.

In a brief supporting X, attorneys for the nonprofit digital rights group the Electronic Frontier Foundation (EFF) wrote that the court was “unduly dismissive of the arguments” X raised and “failed to apply exacting scrutiny, relieving the government of its burden to actually demonstrate, with evidence, that these alternatives would be ineffective.”

Further, X argued that none of the government’s arguments for nondisclosure made sense. Not only was Smith’s investigation announced publicly—allowing Trump ample time to delete his DMs already—but also “there was no risk of destruction of the requested records because Twitter had preserved them.” On top of that, during the court battle, the government eventually admitted that one rationale for the nondisclosure order—that Trump posed a supposed “flight risk” if the search warrant was known—”was implausible because the former President already had announced his re-election run.”

X unsuccessfully pushed SCOTUS to take on the Trump case as an “ideal” and rare opportunity to publicly decide when nondisclosure orders cross the line in seeking to seize potentially privileged information on social media.

In its petition for SCOTUS review, X pointed out that every social media or communications platform is bombarded with government data requests that only the platforms can challenge. That leaves it up to platforms to figure out when data requests are problematic, which they frequently are, as “the government often agrees to modify or vacate them in informal negotiations,” X argued.

But when the government refuses to negotiate, as it did here, platforms have to decide if litigation is worth it, risking sanctions if the court finds the platform in contempt; X itself was sanctioned $350,000 in the Trump case. If courts deemed a less restrictive alternative appropriate, such as appointing a trusted representative, platforms would not have to guess when data requests threaten to expose their users’ privileged information, X argued.

According to X, another case like this, in which court filings wouldn’t have to be redacted and a ruling wouldn’t have to happen behind closed doors, won’t come around for decades.

But the government seemingly persuaded the Supreme Court to decline to review the case, partly by arguing that X’s challenge to its nondisclosure order was moot. Responding to X’s objections, the government had eventually agreed to modify the nondisclosure order to disclose the warrant to Trump, so long as the name of the case agent assigned to the investigation was redacted. So X’s appeal is really over nothing, the government suggested.

Additionally, the government argued that “this case would not be an appropriate vehicle” for SCOTUS’ review of the question X raised because “no executive privilege issue actually existed in this case.”

“If review of the underlying legal issues were ever warranted, the Court should await a live case in which the issues are concretely presented,” Smith’s court filing said.

X is likely deflated by SCOTUS’ decision declining to review its appeal. In its petition, X claimed that the court system risked providing “a blueprint for prosecutors who wish to obtain potentially privileged materials,” warning that “this end-run will not be limited to federal prosecutors.” State prosecutors will likely be emboldened to do the same now that the precedent has been set, X predicted.

In their brief supporting X, EFF lawyers noted that the government already has “far too much authority to shield its activities from public scrutiny.” By failing to prevent nondisclosure orders from restraining speech, the court system risks making it harder to “meaningfully test these gag orders in court,” EFF warned.

“Even a meritless gag order that is ultimately voided by a court causes great harm while it is in effect,” EFF’s lawyers said, while disclosure “ensures that individuals whose information is searched have an opportunity to defend their privacy from unwarranted and unlawful government intrusions.”

Due to AI fakes, the “deep doubt” era is here


Given the flood of photorealistic AI-generated images washing over social media networks like X and Facebook these days, we’re seemingly entering a new age of media skepticism: the era of what I’m calling “deep doubt.” While questioning the authenticity of digital content stretches back decades—and analog media long before that—easy access to tools that generate convincing fake content has led to a new wave of liars using AI-generated scenes to deny real documentary evidence. Along the way, people’s existing skepticism toward online content from strangers may be reaching new heights.

Deep doubt is skepticism of real media that stems from the existence of generative AI. This manifests as broad public skepticism toward the veracity of media artifacts, which in turn leads to a notable consequence: People can now more credibly claim that real events did not happen and suggest that documentary evidence was fabricated using AI tools.

The concept behind “deep doubt” isn’t new, but its real-world impact is becoming increasingly apparent. Since the term “deepfake” first surfaced in 2017, we’ve seen a rapid evolution in AI-generated media capabilities. This has led to recent examples of deep doubt in action, such as conspiracy theorists claiming that President Joe Biden has been replaced by an AI-powered hologram and former President Donald Trump’s baseless accusation in August that Vice President Kamala Harris used AI to fake crowd sizes at her rallies. And on Friday, Trump cried “AI” again at a photo of him with E. Jean Carroll, a writer who successfully sued him for sexual abuse, that contradicts his claim of never having met her.

Legal scholars Danielle K. Citron and Robert Chesney foresaw this trend years ago, coining the term “liar’s dividend” in 2019 to describe the consequence of deep doubt: deepfakes being weaponized by liars to discredit authentic evidence. But whereas deep doubt was once a hypothetical academic concept, it is now our reality.

The rise of deepfakes, the persistence of doubt

Doubt has been a political weapon since ancient times. This modern AI-fueled manifestation is just the latest evolution of a tactic where the seeds of uncertainty are sown to manipulate public opinion, undermine opponents, and hide the truth. AI is the newest refuge of liars.

Over the past decade, the rise of deep-learning technology has made it increasingly easy for people to craft false or modified pictures, audio, text, or video that appear to be non-synthesized organic media. Deepfakes were named after a Reddit user going by the name “deepfakes,” who shared AI-faked pornography on the service, swapping out the face of a performer with the face of someone else who wasn’t part of the original recording.

In the 20th century, one could argue that a certain part of our trust in media produced by others was a result of how expensive and time-consuming it was, and the skill it required, to produce documentary images and films. Even texts required a great deal of time and skill. As the deep doubt phenomenon grows, it will erode this 20th-century media sensibility. But it will also affect our political discourse, legal systems, and even our shared understanding of historical events that rely on that media to function—we rely on others to get information about the world. From photorealistic images to pitch-perfect voice clones, our perception of what we consider “truth” in media will need recalibration.

In April, a panel of federal judges highlighted the potential for AI-generated deepfakes to not only introduce fake evidence but also cast doubt on genuine evidence in court trials. The concern emerged during a meeting of the US Judicial Conference’s Advisory Committee on Evidence Rules, where the judges discussed the challenges of authenticating digital evidence in an era of increasingly sophisticated AI technology. Ultimately, the judges decided to postpone making any AI-related rule changes, but their meeting shows that the subject is already being considered by American judges.

Taylor Swift cites AI deepfakes in endorsement for Kamala Harris

It’s raining creepy men

Taylor Swift on AI: “The simplest way to combat misinformation is with the truth.”

A screenshot of Taylor Swift’s Kamala Harris Instagram post, captured on September 11, 2024.

On Tuesday night, Taylor Swift endorsed Vice President Kamala Harris for US President on Instagram, citing concerns over AI-generated deepfakes as a key motivator. The artist’s warning aligns with current trends in technology, especially in an era where AI synthesis models can easily create convincing fake images and videos.

“Recently I was made aware that AI of ‘me’ falsely endorsing Donald Trump’s presidential run was posted to his site,” she wrote in her Instagram post. “It really conjured up my fears around AI, and the dangers of spreading misinformation. It brought me to the conclusion that I need to be very transparent about my actual plans for this election as a voter. The simplest way to combat misinformation is with the truth.”

In August 2024, former President Donald Trump posted AI-generated images on Truth Social falsely suggesting Swift endorsed him, including a manipulated photo depicting Swift as Uncle Sam with text promoting Trump. The incident sparked Swift’s fears about the spread of misinformation through AI.

This isn’t the first time Swift and generative AI have appeared together in the news. In February, we reported that a flood of explicit AI-generated images of Swift originated from a 4chan message board where users took part in daily challenges to bypass AI image generator filters.

Listing image by Ronald Woan/CC BY-SA 2.0

Google’s threat team confirms Iran targeting Trump, Biden, and Harris campaigns

It is only August

Another Big Tech firm seems to confirm Trump adviser Roger Stone was hacked.

Roger Stone, former adviser to Donald Trump’s presidential campaign, center, during the Republican National Convention (RNC) in Milwaukee on July 17, 2024. Credit: Getty Images

Google’s Threat Analysis Group confirmed Wednesday that it has observed a threat actor backed by the Iranian government targeting Google accounts associated with US presidential campaigns, in addition to stepped-up attacks on Israeli targets.

APT42, associated with Iran’s Islamic Revolutionary Guard Corps, “consistently targets high-profile users in Israel and the US,” the Threat Analysis Group (TAG) writes. The Iranian group uses hosted malware, phishing pages, malicious redirects, and other tactics to gain access to Google, Dropbox, OneDrive, and other cloud-based accounts. Google’s TAG writes that it reset accounts, sent warnings to users, and blacklisted domains associated with APT42’s phishing attempts.

Among APT42’s tools were Google Sites pages that appeared to be a petition from legitimate Jewish activists, calling on Israel to mediate its ongoing conflict with Hamas. The page was fashioned from image files, not HTML, and an ngrok redirect sent users to phishing pages when they moved to sign the petition.

A petition purporting to be from The Jewish Agency for Israel, seeking support for mediation measures—but signatures quietly redirect to phishing sites, according to Google. Credit: Google

In the US, Google’s TAG notes that, as with the 2020 elections, APT42 is actively targeting the personal emails of “roughly a dozen individuals affiliated with President Biden and former President Trump.” TAG confirms that APT42 “successfully gained access to the personal Gmail account of a high-profile political consultant,” which may be longtime Republican operative Roger Stone, as reported by The Guardian, CNN, and The Washington Post, among others. Microsoft separately noted last week that a “former senior advisor” to the Trump campaign had his Microsoft account compromised, which Stone also confirmed.

“Today, TAG continues to observe unsuccessful attempts from APT42 to compromise the personal accounts of individuals affiliated with President Biden, Vice President Harris and former President Trump, including current and former government officials and individuals associated with the campaigns,” Google’s TAG writes.

PDFs and phishing kits target both sides

Google’s post details the ways in which APT42 targets operatives in both parties. The broad strategy is to get the target off their email and into channels like Signal, Telegram, or WhatsApp, or possibly a personal email address that may not have two-factor authentication and threat monitoring set up. By establishing trust through sending legitimate PDFs, or luring them to video meetings, APT42 can then push links that use phishing kits with “a seamless flow” to harvest credentials from Google, Hotmail, and Yahoo.

After gaining a foothold, APT42 will often work to preserve its access by generating application-specific passwords inside the account, which typically bypass multifactor tools. Google notes that its Advanced Protection Program, intended for individuals at high risk of attack, disables such measures.

Publications, including Politico, The Washington Post, and The New York Times, have reported being offered documents from the Trump campaign, potentially stemming from Iran’s phishing efforts, in an echo of Russia’s 2016 targeting of Hillary Clinton’s campaign. None of them have moved to publish stories related to the documents.

John Hultquist, with Google-owned cybersecurity firm Mandiant, told Wired’s Andy Greenberg that what looks initially like spying or political interference by Iran can easily escalate to sabotage and that both parties are equal targets. He also said that current thinking about threat vectors may need to expand.

“It’s not just a Russia problem anymore. It’s broader than that,” Hultquist said. “There are multiple teams in play. And we have to keep an eye out for all of them.”
