Social Media


X sues to block copycat NY content moderation law after California win

“It is our sincere belief that the current social media landscape makes it far too easy for bad actors to promote false claims, hatred and dangerous conspiracies online, and some large social media companies are not able or willing to regulate this hate speech themselves,” the letter said.

Although the letter acknowledged that X was not the only platform targeted by the law, the lawmakers noted that hateful and harmful content spiked on the platform after Musk took over Twitter. They said it seemed “clear to us that X needs to provide greater transparency for their moderation policies and we believe that our law, as written, will do that.”

This clearly aggravated X. In its complaint, the company alleged that the letter showed New York’s law was “tainted by viewpoint discriminatory motives” and that the lawmakers were biased against X and Musk.

X seeks injunction in New York

Just as it alleged in the California lawsuit, X claims that the New York law forces it “to make politically charged disclosures about content moderation” in order to “generate public controversy about content moderation in a way that will pressure social media companies, such as X Corp., to restrict, limit, disfavor, or censor certain constitutionally protected content on X that the State dislikes.”

“These forced disclosures violate the First Amendment” and the New York constitution, X alleged, and the content categories covered in the disclosures “were taken word-for-word” from California’s enjoined law.

X argues that New York has no compelling interest, or any legitimate interest at all, in applying “pressure” to govern social media platforms’ content moderation choices. Because X faces penalties of up to $15,000 per day per violation, the company has asked the court to grant an injunction blocking enforcement of key provisions of the law.

“Deciding what content should appear on a social media platform is a question that engenders considerable debate among reasonable people about where to draw the correct proverbial line,” X’s complaint said. “This is not a role that the government may play.”



Ads are “rolling out gradually” to WhatsApp

For the first time since launching in 2009, WhatsApp will now show users advertisements. The ads are “rolling out gradually,” the company said.

For now, the ads will only appear on WhatsApp’s Updates tab, where users can update their status and access channels or groups targeting specific interests they may want to follow. In its announcement of the ads, parent company Meta claimed that placing ads under Updates means that the ads won’t “interrupt personal chats.”

Meta said that 1.5 billion people use the Updates tab daily. However, if you exclusively use WhatsApp for direct messages and personal group chats, you could avoid ever seeing ads.

“Now the Updates tab is going to be able to help Channel admins, organizations, and businesses build and grow,” Meta’s announcement said.

WhatsApp users will see three different types of ads on their messaging app. One is through the tab’s Status section, where users typically share photos, videos, voice notes, and/or text with their friends that disappear after 24 hours. While scrolling through friends’ status updates, users will see status updates from advertisers and can send a message to the company about the offering that it is promoting.

There are also Promoted Channels: “For the first time, admins have a way to increase their Channel’s visibility,” Meta said.

Finally, WhatsApp is letting Channel admins charge users a monthly fee to “receive exclusive updates.” For example, people could subscribe to a cooking Channel and request alerts for new recipes.

To decide which ads users see, Meta says WhatsApp will use information like a user’s country code, age, device language settings, and “general (not precise) location, like city or country.”
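Those signals amount to coarse, profile-level targeting rather than precise tracking. Purely as an illustration, here is a minimal Python sketch of what filtering ads on those four signals could look like; the `Ad` and `UserSignals` types, field names, and matching rules are assumptions made up for this example, not anything Meta has described.

```python
# Hypothetical sketch of coarse-signal ad selection. The types, field names,
# and matching rules are illustrative assumptions; Meta has only named the
# signals it will use, not the selection logic.
from dataclasses import dataclass, field

@dataclass
class Ad:
    name: str
    countries: set[str] = field(default_factory=set)  # empty = any country
    languages: set[str] = field(default_factory=set)  # empty = any language
    cities: set[str] = field(default_factory=set)     # empty = anywhere
    min_age: int = 0

@dataclass
class UserSignals:
    country_code: str      # e.g., "US"
    age: int
    device_language: str   # e.g., "en"
    general_location: str  # coarse, per Meta: "city or country"

def eligible_ads(user: UserSignals, inventory: list[Ad]) -> list[Ad]:
    """Keep only the ads whose coarse targeting matches the user's signals."""
    return [
        ad for ad in inventory
        if (not ad.countries or user.country_code in ad.countries)
        and (not ad.languages or user.device_language in ad.languages)
        and (not ad.cities or user.general_location in ad.cities)
        and user.age >= ad.min_age
    ]

# Example: an English-language ad targeted at adult US users.
inventory = [Ad("cookware-sale", countries={"US"}, languages={"en"}, min_age=18)]
print(eligible_ads(UserSignals("US", 34, "en", "Chicago"), inventory))
```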



False claims that ivermectin treats cancer, COVID lead states to pass OTC laws

Doctors told the Times that they have already seen some cases where patients with treatable, early-stage cancers have delayed effective treatments to try ivermectin, only to see no effect and return to their doctor’s office with cancers that have advanced.

Risky business

Nevertheless, the malignant misinformation on social media has made its way into state legislatures. According to an investigation by NBC News published Monday, 16 states have proposed or passed legislation that would make ivermectin available over the counter. The intention is to make it much easier for people to get ivermectin and use it for any ailment they believe it can cure.

Idaho, Arkansas, and Tennessee have passed laws to make ivermectin available over the counter. On Monday, Louisiana’s state legislature passed a bill to do the same, and it now awaits signing by the governor. The other states that have considered or are considering such bills include: Alabama, Georgia, Kentucky, Maine, Minnesota, Mississippi, New Hampshire, North Carolina, North Dakota, Pennsylvania, South Carolina, and West Virginia.

State laws don’t mean the dewormer would be readily available, however; ivermectin is still regulated by the Food and Drug Administration, and it has not been approved for over-the-counter use yet. NBC News called 15 independent pharmacies in the three states that have laws on the books allowing ivermectin to be sold over the counter (Idaho, Arkansas, and Tennessee) and couldn’t find a single pharmacist who would sell it without a prescription. Pharmacists pointed to the federal regulations.

Likewise, CVS Health said its pharmacies are not currently selling ivermectin over the counter in any state. Walgreens declined to comment.

Some states, such as Alabama, have considered legislation that would protect pharmacists from any possible disciplinary action for dispensing ivermectin without a prescription. However, one pharmacist in Idaho, who spoke with NBC News, said that such protection would still not be enough. As a prescription-only drug, ivermectin is not packaged for retail sale. If it were, it would include over-the-counter directions and safety statements written specifically for consumers.

“If you dispense something that doesn’t have directions or safety precautions on it, who’s ultimately liable if that causes harm?” the pharmacist said. “I don’t know that I would want to assume that risk.”

It’s a risk people on social media don’t seem to be concerned with.



Discord CTO says he’s “constantly bringing up enshittification” during meetings

Discord members are biting their nails. As reports swirl that the social media company is planning an initial public offering this year and increasingly leans on advertising revenue, there’s fear that Discord will become engulfed in the enshittification that has already scarred so many online communities. Co-founder and CTO Stanislav Vishnevskiy claims he’s worried about that, too.

In an interview with Engadget published today, Vishnevskiy claimed that Discord employees regularly discuss concerns about Discord going astray and angering users.

“I understand the anxiety and concern,” Vishnevskiy said. “I think the things that people are afraid of are what separate a great, long-term focused company from just any other company.”

But there are reasons for long-time Discord users to worry about the platform changing for the worse in the coming years. The most obvious one is Discord’s foray into ads, something the company has avoided since launching in 2015. Discord started showing ads in March 2024 via its desktop and console apps. Since then, it has introduced video ads to its mobile app and launched Orbs, which Discord users can earn by clicking on ads in Discord and trade for in-game rewards. Discord also recently said that it plans to start selling ads to more companies.

Fanning expectations of Discord going public soon and looking different in the future, Discord co-founder and CEO Jason Citron left in April. His replacement, Humam Sakhnini, has experience leading public companies, like Activision Blizzard. When Citron announced his departure in April, GamesBeat asked him if Discord was going public. Citron claimed there were “no specific plans” but added that “hiring someone like Humam is a step in that direction.” Vishnevskiy declined to comment on a potential Discord IPO while speaking to Engadget.

Amid current and imminent changes, though, Vishnevskiy claims to be eyeing Discord’s enshittification risk, telling Engadget:

I’m definitely the one who’s constantly bringing up enshittification [at internal meetings]. It’s not a bad thing to build a strong business and to monetize a product. That’s how we can reinvest and continue to make things better. But we have to be extremely thoughtful about how we do that.

Discord has axed bad ideas before

For some, the inclusion of ads is automatic enshittification. However, Discord’s ad load, at least for now, is minimally intrusive. The ads appear in sidebars within the platform that expand only if clicked upon and can lead to user rewards.



Florida ban on kids using social media likely unconstitutional, judge rules

A federal judge ruled today that Florida cannot enforce a law that requires social media platforms to block kids from using their platforms. The state law “is likely unconstitutional,” US Judge Mark Walker of the Northern District of Florida ruled while granting the tech industry’s request for a preliminary injunction.

The Florida law “prohibits some social media platforms from allowing youth in the state who are under the age of 14 to create or hold an account on their platforms, and similarly prohibits allowing youth who are 14 or 15 to create or hold an account unless a parent or guardian provides affirmative consent for them to do so,” Walker wrote.

The law is subject to intermediate scrutiny under the First Amendment, meaning it must be “narrowly tailored to serve a significant governmental interest,” must “leave open ample alternative channels for communication,” and must not “burden substantially more speech than is necessary to further the government’s legitimate interests,” the ruling said.

Florida claimed its law is designed to prevent harm to youth and is narrowly tailored because it targets sites that use specific features that have been deemed to be addictive. But the law applies too broadly, Walker found:

Even assuming the significance of the State’s interest in limiting the exposure of youth to websites with “addictive features,” the law’s restrictions are an extraordinarily blunt instrument for furthering it. As applied to Plaintiffs’ members alone, the law likely bans all youth under 14 from holding accounts on, at a minimum, four websites that provide forums for all manner of protected speech: Facebook, Instagram, YouTube, and Snapchat. It also bans 14- and 15-year-olds from holding accounts on those four websites absent a parent’s affirmative consent, a requirement that the Supreme Court has clearly explained the First Amendment does not countenance.

Walker said the Florida “law applies to any social media site that employs any one of the five addictive features under any circumstances, even if, for example, the site only sends push notifications if users opt in to receiving them, or the site does not auto-play video for account holders who are known to be youth. Accordingly, even if a social media platform created youth accounts for which none of the purportedly ‘addictive’ features are available, it would still be barred from allowing youth to hold those accounts if the features were available to adult account holders.”



Meta argues enshittification isn’t real in bid to toss FTC monopoly case

Further, Meta argued that the FTC did not show evidence that users sharing friends-and-family content were shown more ads. Meta noted that it “does not profit by showing more ads to users who do not click on them,” so it only shows more ads to users who click ads.

Meta also insisted that there’s “nothing but speculation” showing that Instagram or WhatsApp would have been better off or grown into rivals had Meta not acquired them.

The company claimed that without Meta’s resources, Instagram may have died off. Meta noted that Instagram co-founder Kevin Systrom testified that his app was “pretty broken and duct-taped” together, making it “vulnerable to spam” before Meta bought it.

Rather than enshittification, what Meta did to Instagram could be considered “a consumer-welfare bonanza,” Meta argued, while dismissing “smoking gun” emails from Mark Zuckerberg discussing buying Instagram to bury it as “legally irrelevant.”

Dismissing these as “a few dated emails,” Meta argued that “efforts to litigate Mr. Zuckerberg’s state of mind before the acquisition in 2012 are pointless.”

“What matters is what Meta did,” Meta argued, which was pump Instagram with resources that allowed it “to ‘thrive’—adding many new features, attracting hundreds of millions and then billions of users, and monetizing with great success.”

In the case of WhatsApp, Meta argued that nobody could credibly claim WhatsApp intended to pivot to social media, given that the founders testified their goal was never to add social features, preferring to offer a simple, clean messaging app. And Meta disputed any claim that it bought WhatsApp out of fear that Google would acquire the app and turn it into a Facebook rival, arguing that “the sole Meta witness to (supposedly) learn of Google’s acquisition efforts testified that he did not have that worry.”



Kids are short-circuiting their school-issued Chromebooks for TikTok clout

Schools across the US are warning parents about an Internet trend that has students purposefully trying to damage their school-issued Chromebooks so that they start smoking or catch fire.

Various school districts, including some in Colorado, New Jersey, North Carolina, and Washington, have sent letters to parents warning about the trend that’s largely taken off on TikTok.

Per reports from school districts and videos that Ars Technica has reviewed online, the so-called Chromebook Challenge includes students sticking things into Chromebook ports to short-circuit the system. Students are using various easily accessible items to do this, including writing utensils, paper clips, gum wrappers, and pushpins.

The Chromebook Challenge has caused chaos for US schools, leading to laptop fires that have forced school evacuations, early dismissals, and the summoning of first responders.

Schools are also warning that damage to school property can result in disciplinary action and, in some states, legal action.

In Plainville, Connecticut, a middle schooler allegedly “intentionally stuck scissors into a laptop, causing smoke to emit from it,” Superintendent Brian Reas told local news station WFSB. The incident reportedly led to one student going to the hospital due to smoke inhalation and is suspected to be connected to the viral trend.

“Although the investigation is ongoing, the student involved will be referred to juvenile court to face criminal charges,” Reas said.



At monopoly trial, Zuckerberg redefined social media as texting with friends


“The magic of friends has fallen away”

Mark Zuckerberg played up TikTok rivalry at monopoly trial, but judge may not buy it.

The Meta monopoly trial has raised a question that Meta hopes the Federal Trade Commission (FTC) can’t effectively answer: How important is it to use social media to connect with friends and family today?

Connecting with friends was, of course, Facebook’s primary use case as it became the rare social network to hit 1 billion users—not by being acquired by a Big Tech company but based on the strength of its clean interface and the network effects that kept users locked in simply because all the important people in their life chose to be there.

According to the FTC, Meta took advantage of Facebook’s early popularity, and it has since bought out rivals and otherwise cornered the market on personal social networks. Only Snapchat and MeWe (a privacy-focused Facebook alternative) are competitors to Meta platforms, the FTC argues, and social networks like TikTok or YouTube aren’t interchangeable, because those aren’t destinations focused on connecting friends and family.

For Meta CEO Mark Zuckerberg, however, those early days of Facebook bringing old friends back together are apparently over. He took the stand this week to testify that the FTC’s market definition ignores the reality that Meta contends with today, where “the amount that people are sharing with friends on Facebook, especially, has been declining,” CNN reported.

“Even the amount of new friends that people add … I think has been declining,” Zuckerberg said, although he did not indicate how steep the decline is. “I don’t know the exact numbers,” Zuckerberg admitted. Meta’s former chief operating officer, Sheryl Sandberg, also took the stand and reportedly testified that while she was at Meta, “friends and family sharing went way down over time… If you have a strategy of targeting friends and family, you’d have serious revenue issues.”

In particular, TikTok’s explosive popularity has shifted the dynamics of social media today, Zuckerberg suggested. For many users, “apps now serve primarily as discovery engines,” Zuckerberg testified, and social interactions increasingly come from sharing fun creator content in private messages, rather than through engaging with a friend or family member’s posts.

That’s why Meta added Reels, Zuckerberg testified, and, more recently, TikTok Shop-like functionality. To stay relevant, Meta had to make its platforms more like TikTok, investing heavily in its discovery algorithm, and even willing to irk loyal Instagram users by turning their perfectly curated square grids into rectangles, Wired noted in a piece probing Meta’s efforts to lure TikTok users to Instagram.

There was seemingly no bridge too far. As Zuckerberg put it, “TikTok is still bigger than either Facebook or Instagram, and I don’t like it when our competitors do better than us.” And since Meta has no interest in buying TikTok, due to fears of basing business in China, Big Tech on Trial reported, Meta’s only choice was to TikTok-ify its apps to avoid a mass exodus after Facebook’s user numbers declined for the first time in 2022. Committing to this future, the next year, Meta doubled the amount of force-fed filler in Instagram feeds.

Right now, Meta is positioning TikTok as one of its biggest competitors, with the company supposedly flagging TikTok as a “top priority” and “highly urgent” competitive threat as early as 2018, Zuckerberg said. Further, Zuckerberg testified that while TikTok’s popularity grew, Meta’s “growth slowed down dramatically,” TechCrunch reported. And perhaps most persuasively, when TikTok briefly went dark earlier this year, some TikTokers moved to Instagram, Meta argued, suggesting that some users consider the platforms interchangeable.

If Meta can convince the court that the FTC’s market definition is wrong and that TikTok is Meta’s biggest rival, then Meta’s market share drops below monopolist standards, “undercutting” the FTC’s case, Big Tech on Trial reported.

But are Facebook and Instagram substitutes for TikTok?

Although Meta paints the picture that TikTok users naturally gravitated to Instagram during the TikTok outage, it’s clear that Meta advertised heavily to move them in that direction. There was even a conspiracy theory that Meta had bought TikTok in the hours before TikTok went down, Wired reported, as users noticed Meta banners encouraging them to link their TikTok accounts to Meta platforms. However, even the reported Meta ad blitz seemingly didn’t sway that many TikTok users, as Sensor Tower data at the time apparently indicated that “Instagram and Facebook appeared to receive only a modest increase in daily active users and downloads” during the TikTok outage, Wired reported.

Perhaps a more interesting question that the court may entertain is not where TikTok users go when TikTok is down, but where Instagram or Facebook users turn if they no longer want to use those platforms. If the FTC can argue that people seeking a destination to connect with friends or family wouldn’t substitute TikTok for that purpose, its market definition might fly.

Kenneth Dintzer, a partner at Crowell & Moring and the former lead attorney in the DOJ’s winning Google search monopoly case, told Ars that the chief judge in the case, James Boasberg, made clear at summary judgment that acknowledging Meta’s rivalry with TikTok “doesn’t really answer the question about friends and family.”

So even though Zuckerberg was “pretty persuasive,” his testimony on TikTok may not move the judge much. However, there was one exchange at the trial where Boasberg asked, “How much does it matter if friends are on a particular platform, if friends can share outside of it?” Zuckerberg praised this as a “good question” and “explained that it doesn’t matter much because people can fluidly share across platforms, using each one for its value as a ‘discovery engine,'” Big Tech on Trial reported.

Dintzer noted that Zuckerberg seemed to attempt to float a different theory explaining why TikTok was a valid rival—curiously attempting to redefine “social media” to overcome the judge’s skepticism in considering TikTok a true Meta rival.

Zuckerberg’s theory, Dintzer said, suggests that “if I open up something on TikTok or on YouTube, and I send it to a friend, that is social media.”

But that broad definition could be problematic, since it would suggest that all texting and messaging are social media, Dintzer said.

“That didn’t seem particularly persuasive,” Dintzer said. Although that kind of social sharing is “certainly something that people enjoy,” it still “doesn’t seem to be quite the same thing as posting something on Facebook for your friends and family.”

Another wrinkle that may scramble Meta’s defense is that Meta has publicly declared that its priority is to bring back “OG Facebook” and refresh how friends and family connect on its platforms. Just today, Instagram chief Adam Mosseri announced a new Instagram feature called “blend” that strives to connect friends and family through sharing access to their unique discovery algorithms.

Those initiatives seem like a strategy that fully relies on Meta’s core use case of connecting friends and family (and network effects that Zuckerberg downplayed) to propel engagement that could spike revenue. However, that goal could invite scrutiny, perhaps signaling to the court that Meta still benefits from the alleged monopoly in personal social networking and will only continue locking in users seeking to connect with friends and family.

“The magic of friends has fallen away,” Meta’s blog said, which, despite seeming at odds, could serve as both a tagline for its new “Friends” tab on Facebook and the headline of its defense so far in the monopoly trial.


Ashley is a senior policy reporter for Ars Technica, dedicated to tracking social impacts of emerging policies and new technologies. She is a Chicago-based journalist with 20 years of experience.



Disgruntled users roast X for killing Support account

After X (formerly Twitter) announced it would be killing its “Support” account, disgruntled users quickly roasted the social media platform for providing “essentially non-existent” support.

“We’ll soon be closing this account to streamline how users can contact us for help,” X’s Support account posted, explaining that now, paid “subscribers can get support via @Premium, and everyone can get help through our Help Center.”

On X, the Support account was one of the few channels users had to publicly escalate help requests that the platform seemed to be ignoring. For suspended users, it was viewed as a lifeline. Replies to the account were commonly flooded with users trying to get X to fix reported issues, and several seemingly paying users cracked jokes in response to the news that the account would soon be removed.

“Lololol your support for Premium is essentially non-existent,” a subscriber with more than 200,000 followers wrote, while another quipped “Okay, so no more support? lol.”

On Reddit, X users recently suggested that contacting the Premium account is the only way to get human assistance after briefly interacting with a bot. But some self-described Premium users complained in the Support thread of waiting six months or longer for responses from X’s help center.

Some users who don’t pay for access to the platform similarly complained. But for paid subscribers or content creators, lack of Premium support is perhaps most frustrating, as one user claimed their account had been under review for years, allegedly depriving them of revenue. And another user claimed they’d had “no luck getting @Premium to look into” an account suspension while supposedly still getting charged. Several accused X of sending users into a never-ending loop, where the help center only serves to link users to the help center.



X’s globe-trotting defense of ads on Nazi posts violates TOS, Media Matters says

“X conceded that depending on what content a user follows and how long they’ve had their account, they might see advertisements placed next to extremist content,” MMFA alleged.

As MMFA sees it, Musk is trying to blame the organization for ad losses spurred by his own decisions after taking over the platform—like cutting content moderation teams, de-amplifying hateful content instead of removing it, and bringing back banned users. Through the lawsuits, Musk allegedly wants to make MMFA pay “hundreds of millions of dollars in lost advertising revenue” simply because its report didn’t outline “what accounts Media Matters followed or how frequently it refreshed its screen,” MMFA argued, previously likening this to suing MMFA for scrolling on X.

MMFA has already spent millions to defend against X’s multiple lawsuits, its filing said, while consistently contesting X’s chosen venues. If X loses the fight in California, the platform could owe damages for improperly filing litigation outside the venue agreed upon in its TOS.

“This proliferation of claims over a single course of conduct, in multiple jurisdictions, is abusive,” MMFA’s complaint said, noting that the organization has a hearing in Singapore next month and another in Dublin in May. And it “does more than simply drive up costs: It means that Media Matters cannot focus its time and resources to mounting the best possible defense in one forum and must instead fight back piecemeal,” which allegedly prejudices MMFA’s “ability to most effectively defend itself.”

“Media Matters should not have to defend against attempts by X to hale Media Matters into court in foreign jurisdictions when the parties already agreed on the appropriate forum for any dispute related to X’s services,” MMFA’s complaint said. “That is—this Court.”

X still recovering from ad boycott

Although X CEO Linda Yaccarino started 2025 by signaling that the X ad boycott was over, Ars found that external data did not support that conclusion. More recently, Business Insider cited independent data sources that similarly concluded that while X’s advertiser pool seemed to be growing, its ad revenue remained “far” from where Twitter’s was prior to Musk’s takeover.



Elon Musk blames X outages on “massive cyberattack”

After DownDetector reported that tens of thousands of users globally experienced repeated X (formerly Twitter) outages, Elon Musk confirmed the issues are due to an ongoing cyberattack on the platform.

“There was (still is) a massive cyberattack against X,” Musk wrote on X. “We get attacked every day, but this was done with a lot of resources. Either a large, coordinated group and/or a country is involved.”

Details remain vague beyond Musk’s post, but rumors were circulating that X was under a distributed denial-of-service (DDoS) attack.

X’s official support channel, which has been dormant since August, has so far remained silent on the outage, but one user asked Grok—X’s chatbot that provides AI summaries of news—what was going on, and the chatbot echoed suspicions about the DDoS attack while raising other theories.

“Over 40,000 users reported issues, with the platform struggling to load globally,” Grok said. “No clear motive yet, but some speculate it’s political since X is the only target. Outages hit hard in the US, Switzerland, and beyond.”

As X goes down, users cry for Twitter

It has been almost two years since Elon Musk declared that Twitter “no longer exists,” haphazardly rushing to rebrand his social media company as X despite critics warning that users wouldn’t easily abandon the Twitter brand.

Fast-forward to today, and Musk got a reminder that his efforts to kill off the Twitter brand never really caught on with a large chunk of his platform.



Reddit mods are fighting to keep AI slop off subreddits. They could use help.


Mods ask Reddit for tools as generative AI gets more popular and inconspicuous.

Redditors in a treehouse with a NO AI ALLOWED sign

Credit: Aurich Lawson (based on a still from Getty Images)

Like it or not, generative AI is carving out its place in the world. And some Reddit users are definitely in the “don’t like it” category. While some subreddits openly welcome AI-generated images, videos, and text, others have responded to the growing trend by banning most or all posts made with the technology.

To better understand the reasoning and obstacles associated with these bans, Ars Technica spoke with moderators of subreddits that totally or partially ban generative AI. Almost all these volunteers described moderating against generative AI as a time-consuming challenge they expect to get more difficult as time goes on. And most are hoping that Reddit will release a tool to help their efforts.

It’s hard to know how much AI-generated content is actually on Reddit, and getting an estimate would be a large undertaking. Image library Freepik has analyzed the use of AI-generated content on social media but leaves Reddit out of its research because “it would take loads of time to manually comb through thousands of threads within the platform,” spokesperson Bella Valentini told me. For its part, Reddit doesn’t publicly disclose how many Reddit posts involve generative AI use.

To be clear, we’re not suggesting that Reddit has a large problem with generative AI use. By now, many subreddits seem to have agreed on their approach to AI-generated posts, and generative AI has not superseded the real, human voices that have made Reddit popular.

Still, mods largely agree that generative AI will likely get more popular on Reddit over the next few years, making generative AI modding increasingly important to both moderators and general users. Generative AI’s rising popularity has also had implications for Reddit the company, which in 2024 started licensing Reddit posts to train the large language models (LLMs) powering generative AI.

(Note: All the moderators I spoke with for this story requested that I use their Reddit usernames instead of their real names due to privacy concerns.)

No generative AI allowed

When it comes to anti-generative AI rules, numerous subreddits have zero-tolerance policies, while others permit posts that use generative AI if it’s combined with human elements or is executed very well. These rules task mods with identifying posts using generative AI and determining if they fit the criteria to be permitted on the subreddit.

Many subreddits have rules against posts made with generative AI because their mod teams or members consider such posts “low effort” or believe AI runs counter to the subreddit’s mission of providing real human expertise and creations.

“At a basic level, generative AI removes the human element from the Internet; if we allowed it, then it would undermine the very point of r/AskHistorians, which is engagement with experts,” the mods of r/AskHistorians told me in a collective statement.

The subreddit’s goal is to provide historical information, and its mods think generative AI could make information shared on the subreddit less accurate. “[Generative AI] is likely to hallucinate facts, generate non-existent references, or otherwise provide misleading content,” the mods said. “Someone getting answers from an LLM can’t respond to follow-ups because they aren’t an expert. We have built a reputation as a reliable source of historical information, and the use of [generative AI], especially without oversight, puts that at risk.”

Similarly, Halaku, a mod of r/wheeloftime, told me that the subreddit’s mods banned generative AI because “we focus on genuine discussion.” Halaku believes AI content can’t facilitate “organic, genuine discussion” and “can drown out actual artwork being done by actual artists.”

The r/lego subreddit banned AI-generated art because it caused confusion in online fan communities and retail stores selling Lego products, r/lego mod Mescad said. “People would see AI-generated art that looked like Lego on [I]nstagram or [F]acebook and then go into the store to ask to buy it,” they explained. “We decided that our community’s dedication to authentic Lego products doesn’t include AI-generated art.”

Not all of Reddit is against generative AI, of course. Subreddits dedicated to the technology exist, and some general subreddits permit the use of generative AI in some or all forms.

“When it comes to bans, I would rather focus on hate speech, Nazi salutes, and things that actually harm the subreddits,” said 3rdusernameiveused, who moderates r/consoom and r/TeamBuilder25, which don’t ban generative AI. “AI art does not do that… If I was going to ban [something] for ‘moral’ reasons, it probably won’t be AI art.”

“Overwhelmingly low-effort slop”

Some generative AI bans are reflective of concerns that people are not being properly compensated for the content they create, which is then fed into LLM training.

Mod Mathgeek007 told me that r/DeadlockTheGame bans generative AI because its members consider it “a form of uncredited theft,” adding:

You aren’t allowed to sell/advertise the work of others, and AI in a sense is using patterns derived from the work of others to create mockeries. I’d personally have less of an issue with it if the artists involved were credited and compensated—and there are some niche AI tools that do this.

Other moderators simply think generative AI reduces the quality of a subreddit’s content.

“It often just doesn’t look good… the art can often look subpar,” Mathgeek007 said.

Similarly, r/videos bans most AI-generated content because, according to its announcement, the videos are “annoying” and “just bad video” 99 percent of the time. In an online interview, r/videos mod Abrownn told me:

It’s overwhelmingly low-effort slop thrown together simply for views/ad revenue. The creators rarely care enough to put real effort into post-generation [or] editing of the content [and] rarely have coherent narratives [in] the videos, etc. It seems like they just throw the generated content into a video, export it, and call it a day.

An r/fakemon mod told me, “I can’t think of anything more low-effort in terms of art creation than just typing words and having it generated for you.”

Some moderators say generative AI helps people spam unwanted content on a subreddit, including posts that are irrelevant to the subreddit and posts that attack users.

“[Generative AI] content is almost entirely posted for purely self promotional/monetary reasons, and we as mods on Reddit are constantly dealing with abusive users just spamming their content without regard for the rules,” Abrownn said.

A moderator of the r/wallpaper subreddit, which permits generative AI, disagrees. The mod told me that generative AI “provides new routes for novel content” in the subreddit and questioned concerns about generative AI stealing from human artists or offering lower-quality work, saying those problems aren’t unique to generative AI:

Even in our community, we observe human-generated content that is subjectively low quality (poor camera/[P]hotoshopping skills, low-resolution source material, intentional “shitposting”). It can be argued that AI-generated content amplifies this behavior, but our experience (which we haven’t quantified) is that the rate of such behavior (whether human-generated or AI-generated content) has not changed much within our own community.

But we’re not a very active community—[about] 13 posts per day … so it very well could be a “frog in boiling water” situation.

Generative AI “wastes our time”

Many mods are confident in their ability to effectively identify posts that use generative AI. A bigger problem is how much time it takes to identify these posts and remove them.

The r/AskHistorians mods, for example, noted that all bans on the subreddit (including bans unrelated to AI) have “an appeals process,” and “making these assessments and reviewing AI appeals means we’re spending a considerable amount of time on something we didn’t have to worry about a few years ago.”

They added:

Frankly, the biggest challenge with [generative AI] usage is that it wastes our time. The time spent evaluating responses for AI use, responding to AI evangelists who try to flood our subreddit with inaccurate slop and then argue with us in modmail [direct messages to a subreddit’s mod team], and discussing edge cases could better be spent on other subreddit projects, like our podcast, newsletter, and AMAs, … providing feedback to users, or moderating input from users who intend to positively contribute to the community.

Several other mods I spoke with agree. Mathgeek007, for example, named “fighting AI bros” as a common obstacle. And for r/wheeloftime moderator Halaku, the biggest challenge in moderating against generative AI is “a generational one.”

“Some of the current generation don’t have a problem with it being AI because content is content, and [they think] we’re being elitist by arguing otherwise, and they want to argue about it,” they said.

A couple of mods noted that it’s less time-consuming to moderate subreddits that ban generative AI than it is to moderate those that allow posts using generative AI, depending on the context.

“On subreddits where we allowed AI, I often take a bit longer time to actually go into each post where I feel like… it’s been AI-generated to actually look at it and make a decision,” explained N3DSdude, a mod of several subreddits with rules against generative AI, including r/DeadlockTheGame.

MyarinTime, a moderator for r/lewdgames, which allows generative AI images, highlighted the challenges of identifying human-prompted generative AI content versus AI-generated content prompted by a bot:

When the AI bomb started, most of those bots started using AI content to work around our filters. Most of those bots started showing some random AI render, so it looks like you’re actually talking about a game when you’re not. There’s no way to know when those posts are legit games unless [you check] them one by one. I honestly believe it would be easier if we kick any post with [AI-]generated image… instead of checking if a button was pressed by a human or not.

Mods expect things to get worse

Most mods told me it’s pretty easy for them to detect posts made with generative AI, pointing to the distinct tone and favored phrases of AI-generated text. A few said that AI-generated video is harder to spot but still detectable. But as generative AI gets more advanced, moderators are expecting their work to get harder.

In a joint statement, r/dune mods Blue_Three and Herbalhippie said, “AI used to have a problem making hands—i.e., too many fingers, etc.—but as time goes on, this is less and less of an issue.”

R/videos’ Abrownn also wonders how easy it will be to detect AI-generated Reddit content “as AI tools advance and content becomes more lifelike.”

Mathgeek007 added:

AI is becoming tougher to spot and is being propagated at a larger rate. When AI style becomes normalized, it becomes tougher to fight. I expect generative AI to get significantly worse—until it becomes indistinguishable from ordinary art.

Moderators currently use various methods to fight generative AI, but they’re not perfect. r/AskHistorians mods, for example, use “AI detectors, which are unreliable, problematic, and sometimes require paid subscriptions, as well as our own ability to detect AI through experience and expertise,” while N3DSdude pointed to tools like Quid and GPTZero.
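Whatever detector a team uses, the workflow the mods describe is the same: score each new post and queue suspicious ones for a human to review, since the detectors are too unreliable to justify automatic removal. Here is a minimal Python sketch of that triage loop; the phrase list and scoring are toy assumptions standing in for a real detector like GPTZero, not how any of these tools actually work.

```python
# Toy sketch of mod-side triage: flag likely AI posts for human review.
# The phrase list and scoring are illustrative stand-ins for a real external
# detector; as the mods note, detectors are unreliable, so nothing here is
# auto-removed, only queued for a human.
from dataclasses import dataclass

# Stand-in for the "favored phrases" mods say give AI text away (illustrative).
SUSPECT_PHRASES = ("as an ai language model", "in conclusion,", "delve into")

@dataclass
class Post:
    post_id: str
    body: str

def ai_likelihood(text: str) -> float:
    """Toy score: fraction of suspect phrases found in the post.
    A real pipeline would call an external detector here instead."""
    lowered = text.lower()
    hits = sum(phrase in lowered for phrase in SUSPECT_PHRASES)
    return hits / len(SUSPECT_PHRASES)

def triage(posts: list[Post], threshold: float = 0.3) -> list[Post]:
    """Return posts exceeding the threshold, queued for human review."""
    return [post for post in posts if ai_likelihood(post.body) >= threshold]

# Example usage:
queue = triage([Post("t3_abc123", "Let's delve into this. In conclusion, ...")])
print([post.post_id for post in queue])  # -> ['t3_abc123']
```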

To manage current and future work around blocking generative AI, most of the mods I spoke with said they’d like Reddit to release a proprietary tool to help them.

“I’ve yet to see a reliable tool that can detect AI-generated video content,” Abrownn said. “Even if we did have such a tool, we’d be putting hundreds of hours of content through the tool daily, which would get rather expensive rather quickly. And we’re unpaid volunteer moderators, so we will be outgunned shortly when it comes to detecting this type of content at scale. We can only hope that Reddit will offer us a tool at some point in the near future that can help deal with this issue.”

A Reddit spokesperson told me that the company is evaluating what such a tool could look like. But Reddit doesn’t have a rule banning generative AI overall, and the spokesperson said the company doesn’t want to release a tool that would hinder expression or creativity.

For now, Reddit seems content to rely on moderators to remove AI-generated content when appropriate. Reddit’s spokesperson added:

Our moderation approach helps ensure that content on Reddit is curated by real humans. Moderators are quick to remove content that doesn’t follow community rules, including harmful or irrelevant AI-generated content—we don’t see this changing in the near future.

Making a generative AI Reddit tool wouldn’t be easy

Reddit is handling the evolving concerns around generative AI as it has handled other content issues, including by leveraging AI and machine learning tools. Reddit’s spokesperson said that this includes testing tools that can identify AI-generated media, such as images of politicians.

But making a proprietary tool that allows moderators to detect AI-generated posts won’t be easy, if it happens at all. The current tools for detecting generative AI are limited in their capabilities, and as generative AI advances, Reddit would need to provide tools that are more advanced than the AI-detecting tools that are currently available.

That would require a good deal of technical resources and would also likely present notable economic challenges for the social media platform, which only became profitable last year. And as noted by r/videos moderator Abrownn, tools for detecting AI-generated video still have a long way to go, making a Reddit-specific system especially challenging to create.

But even with a hypothetical Reddit tool, moderators would still have their work cut out for them. And because Reddit’s popularity is largely due to its content from real humans, that work is important.

Since Reddit’s inception, that has meant relying on moderators, which Reddit has said it intends to keep doing. As r/dune mods Blue_Three and Herbalhippie put it, it’s in Reddit’s “best interest that much/most content remains organic in nature.” After all, Reddit’s profitability has a lot to do with how much AI companies are willing to pay to access Reddit data. That value would likely decline if Reddit posts became largely AI-generated themselves.

But providing the technology to ensure that generative AI isn’t abused on Reddit would be a large challenge. For now, volunteer laborers will continue to bear the brunt of generative AI moderation.

Advance Publications, which owns Ars Technica parent Condé Nast, is the largest shareholder of Reddit.


Scharon is a Senior Technology Reporter at Ars Technica writing news, reviews, and analysis on consumer gadgets and services. She’s been reporting on technology for over 10 years, with bylines at Tom’s Hardware, Channelnomics, and CRN UK.
