Schools across the US are warning parents about an Internet trend that has students purposefully trying to damage their school-issued Chromebooks so that they start smoking or catch fire.
Various school districts, including some in Colorado, New Jersey, North Carolina, and Washington, have sent letters to parents warning about the trend that’s largely taken off on TikTok.
Per reports from school districts and videos that Ars Technica has reviewed online, the so-called Chromebook Challenge includes students sticking things into Chromebook ports to short-circuit the system. Students are using various easily accessible items to do this, including writing utensils, paper clips, gum wrappers, and pushpins.
The Chromebook Challenge has caused chaos for US schools, leading to laptop fires that have forced school evacuations, early dismissals, and the summoning of first responders.
Schools are also warning that damage to school property can result in disciplinary action and, in some states, legal action.
In Plainville, Connecticut, a middle schooler allegedly “intentionally stuck scissors into a laptop, causing smoke to emit from it,” Superintendent Brian Reas told local news station WFSB. The incident reportedly led to one student going to the hospital due to smoke inhalation and is suspected to be connected to the viral trend.
“Although the investigation is ongoing, the student involved will be referred to juvenile court to face criminal charges,” Reas said.
Mark Zuckerberg played up TikTok rivalry at monopoly trial, but judge may not buy it.
The Meta monopoly trial has raised a question that Meta hopes the Federal Trade Commission (FTC) can’t effectively answer: How important is it to use social media to connect with friends and family today?
Connecting with friends was, of course, Facebook’s primary use case as it became the rare social network to hit 1 billion users—not by being acquired by a Big Tech company but based on the strength of its clean interface and the network effects that kept users locked in simply because all the important people in their life chose to be there.
According to the FTC, Meta took advantage of Facebook’s early popularity, and it has since bought out rivals and otherwise cornered the market on personal social networks. Only Snapchat and MeWe (a privacy-focused Facebook alternative) are competitors to Meta platforms, the FTC argues, and social networks like TikTok or YouTube aren’t interchangeable, because those aren’t destinations focused on connecting friends and family.
For Meta CEO Mark Zuckerberg, however, those early days of Facebook bringing old friends back together are apparently over. He took the stand this week to testify that the FTC’s market definition ignores the reality that Meta contends with today, where “the amount that people are sharing with friends on Facebook, especially, has been declining,” CNN reported.
“Even the amount of new friends that people add … I think has been declining,” Zuckerberg said, although he did not indicate how steep the decline is. “I don’t know the exact numbers,” Zuckerberg admitted. Meta’s former chief operating officer, Sheryl Sandberg, also took the stand and reportedly testified that while she was at Meta, “friends and family sharing went way down over time … If you have a strategy of targeting friends and family, you’d have serious revenue issues.”
In particular, TikTok’s explosive popularity has shifted the dynamics of social media today, Zuckerberg suggested. For many users, “apps now serve primarily as discovery engines,” Zuckerberg testified, and social interactions increasingly come from sharing fun creator content in private messages, rather than through engaging with a friend or family member’s posts.
That’s why Meta added Reels, Zuckerberg testified, and, more recently, TikTok Shop-like functionality. To stay relevant, Meta had to make its platforms more like TikTok, investing heavily in its discovery algorithm and even risking the ire of loyal Instagram users by turning their perfectly curated square grids into rectangles, Wired noted in a piece probing Meta’s efforts to lure TikTok users to Instagram.
There was seemingly no bridge too far. As Zuckerberg said, “TikTok is still bigger than either Facebook or Instagram, and I don’t like it when our competitors do better than us.” And since Meta has no interest in buying TikTok, wary of basing its business in China, Big Tech on Trial reported, the company’s only choice was to TikTok-ify its apps to avoid a mass exodus after Facebook’s user numbers declined for the first time in 2022. Committing to this future, Meta doubled the amount of force-fed filler in Instagram feeds the next year.
Right now, Meta is positioning TikTok as one of its biggest competitors, a threat it supposedly flagged as a “top priority” and “highly urgent” as early as 2018, Zuckerberg said. Further, Zuckerberg testified that as TikTok’s popularity grew, Meta’s “growth slowed down dramatically,” TechCrunch reported. And perhaps most persuasively, when TikTok briefly went dark earlier this year, some TikTokers moved to Instagram, Meta argued, suggesting that some users consider the platforms interchangeable.
If Meta can convince the court that the FTC’s market definition is wrong and that TikTok is Meta’s biggest rival, then Meta’s market share drops below monopolist standards, “undercutting” the FTC’s case, Big Tech on Trial reported.
But are Facebook and Instagram substitutes for TikTok?
Although Meta paints the picture that TikTok users naturally gravitated to Instagram during the TikTok outage, it’s clear that Meta advertised heavily to move them in that direction. There was even a conspiracy theory that Meta had bought TikTok in the hours before TikTok went down, Wired reported, as users noticed Meta banners encouraging them to link their TikTok accounts to Meta platforms. However, even the reported Meta ad blitz seemingly didn’t sway that many TikTok users, as Sensor Tower data at the time apparently indicated that “Instagram and Facebook appeared to receive only a modest increase in daily active users and downloads” during the TikTok outage, Wired reported.
Perhaps a more interesting question that the court may entertain is not where TikTok users go when TikTok is down, but where Instagram or Facebook users turn if they no longer want to use those platforms. If the FTC can argue that people seeking a destination to connect with friends or family wouldn’t substitute TikTok for that purpose, its market definition might fly.
Kenneth Dintzer, a partner at Crowell & Moring and the former lead attorney in the DOJ’s winning Google search monopoly case, told Ars that the chief judge in the case, James Boasberg, made clear at summary judgment that acknowledging Meta’s rivalry with TikTok “doesn’t really answer the question about friends and family.”
So even though Zuckerberg was “pretty persuasive,” his testimony on TikTok may not move the judge much. However, there was one exchange at the trial where Boasberg asked, “How much does it matter if friends are on a particular platform, if friends can share outside of it?” Zuckerberg praised this as a “good question” and “explained that it doesn’t matter much because people can fluidly share across platforms, using each one for its value as a ‘discovery engine,'” Big Tech on Trial reported.
Dintzer noted that Zuckerberg seemed to attempt to float a different theory explaining why TikTok was a valid rival—curiously attempting to redefine “social media” to overcome the judge’s skepticism in considering TikTok a true Meta rival.
Zuckerberg’s theory, Dintzer said, suggests that “if I open up something on TikTok or on YouTube, and I send it to a friend, that is social media.”
But that broad definition could be problematic, since it would suggest that all texting and messaging are social media, Dintzer said.
“That didn’t seem particularly persuasive,” Dintzer said. Although that kind of social sharing is “certainly something that people enjoy,” it still “doesn’t seem to be quite the same thing as posting something on Facebook for your friends and family.”
Another wrinkle that may scramble Meta’s defense is that Meta has publicly declared that its priority is to bring back “OG Facebook” and refresh how friends and family connect on its platforms. Just today, Instagram chief Adam Mosseri announced a new Instagram feature called “blend” that strives to connect friends and family through sharing access to their unique discovery algorithms.
Those initiatives seem like a strategy that fully relies on Meta’s core use case of connecting friends and family (and network effects that Zuckerberg downplayed) to propel engagement that could spike revenue. However, that goal could invite scrutiny, perhaps signaling to the court that Meta still benefits from the alleged monopoly in personal social networking and will only continue locking in users seeking to connect with friends and family.
“The magic of friends has fallen away,” Meta’s blog said, which, despite seeming at odds, could serve as both a tagline for its new “Friends” tab on Facebook and the headline of its defense so far in the monopoly trial.
Ashley is a senior policy reporter for Ars Technica, dedicated to tracking social impacts of emerging policies and new technologies. She is a Chicago-based journalist with 20 years of experience.
After X (formerly Twitter) announced it would be killing its “Support” account, disgruntled users quickly roasted the social media platform for providing “essentially non-existent” support.
“We’ll soon be closing this account to streamline how users can contact us for help,” X’s Support account posted, explaining that now, paid “subscribers can get support via @Premium, and everyone can get help through our Help Center.”
On X, the Support account was one of the few paths that users had to publicly seek support for help requests the platform seemed to be ignoring. For suspended users, it was viewed as a lifeline. Replies to the account were commonly flooded with users trying to get X to fix reported issues, and several seemingly paying users cracked jokes in response to the news that the account would soon be removed.
“Lololol your support for Premium is essentially non-existent,” a subscriber with more than 200,000 followers wrote, while another quipped “Okay, so no more support? lol.”
On Reddit, X users recently suggested that contacting the Premium account is the only way to get human assistance after briefly interacting with a bot. But some self-described Premium users complained of waiting six months or longer for responses from X’s help center in the Support thread.
Some users who don’t pay for access to the platform similarly complained. But for paid subscribers or content creators, lack of Premium support is perhaps most frustrating, as one user claimed their account had been under review for years, allegedly depriving them of revenue. And another user claimed they’d had “no luck getting @Premium to look into” an account suspension while supposedly still getting charged. Several accused X of sending users into a never-ending loop, where the help center only serves to link users to the help center.
“X conceded that depending on what content a user follows and how long they’ve had their account, they might see advertisements placed next to extremist content,” Media Matters for America (MMFA) alleged.
As MMFA sees it, Musk is trying to blame the organization for ad losses spurred by his own decisions after taking over the platform—like cutting content moderation teams, de-amplifying hateful content instead of removing it, and bringing back banned users. Through the lawsuits, Musk allegedly wants to make MMFA pay “hundreds of millions of dollars in lost advertising revenue” simply because its report didn’t outline “what accounts Media Matters followed or how frequently it refreshed its screen,” MMFA argued, previously likening this to suing MMFA for scrolling on X.
MMFA has already spent millions defending against X’s multiple lawsuits, its filing said, while consistently contesting X’s chosen venues. If X loses the fight in California, the platform could owe damages for improperly filing litigation outside the venue agreed upon in its TOS.
“This proliferation of claims over a single course of conduct, in multiple jurisdictions, is abusive,” MMFA’s complaint said, noting that the organization has a hearing in Singapore next month and another in Dublin in May. And it “does more than simply drive up costs: It means that Media Matters cannot focus its time and resources to mounting the best possible defense in one forum and must instead fight back piecemeal,” which allegedly prejudices MMFA’s “ability to most effectively defend itself.”
“Media Matters should not have to defend against attempts by X to hale Media Matters into court in foreign jurisdictions when the parties already agreed on the appropriate forum for any dispute related to X’s services,” MMFA’s complaint said. “That is—this Court.”
X still recovering from ad boycott
Although X CEO Linda Yaccarino started 2025 by signaling that the X ad boycott was over, Ars found that external data did not support that conclusion. Last month, Business Insider cited independent data sources that similarly concluded that while X’s advertiser pool seemed to be growing, its ad revenue was still “far” from where Twitter’s was prior to Musk’s takeover.
After DownDetector reported that tens of thousands of users globally experienced repeated X (formerly Twitter) outages, Elon Musk confirmed the issues are due to an ongoing cyberattack on the platform.
“There was (still is) a massive cyberattack against X,” Musk wrote on X. “We get attacked every day, but this was done with a lot of resources. Either a large, coordinated group and/or a country is involved.”
Details remain vague beyond Musk’s post, but rumors were circulating that X was under a distributed denial-of-service (DDoS) attack.
X’s official support channel, which has been dormant since August, has so far remained silent on the outage, but one user asked Grok—X’s chatbot that provides AI summaries of news—what was going on, and the chatbot echoed suspicions about the DDoS attack while raising other theories.
“Over 40,000 users reported issues, with the platform struggling to load globally,” Grok said. “No clear motive yet, but some speculate it’s political since X is the only target. Outages hit hard in the US, Switzerland, and beyond.”
As X goes down, users cry for Twitter
It has been almost two years since Elon Musk declared that Twitter “no longer exists,” haphazardly rushing to rebrand his social media company as X despite critics warning that users wouldn’t easily abandon the Twitter brand.
Fast-forward to today, and Musk got a reminder that his efforts to kill off the Twitter brand never really caught on with a large chunk of his platform.
Mods ask Reddit for tools as generative AI gets more popular and inconspicuous.
Credit: Aurich Lawson (based on a still from Getty Images)
Like it or not, generative AI is carving out its place in the world. And some Reddit users are definitely in the “don’t like it” category. While some subreddits openly welcome AI-generated images, videos, and text, others have responded to the growing trend by banning most or all posts made with the technology.
To better understand the reasoning and obstacles associated with these bans, Ars Technica spoke with moderators of subreddits that totally or partially ban generative AI. Almost all these volunteers described moderating against generative AI as a time-consuming challenge they expect to get more difficult as time goes on. And most are hoping that Reddit will release a tool to help their efforts.
It’s hard to know how much AI-generated content is actually on Reddit, and getting an estimate would be a large undertaking. Image library Freepik has analyzed the use of AI-generated content on social media but leaves Reddit out of its research because “it would take loads of time to manually comb through thousands of threads within the platform,” spokesperson Bella Valentini told me. For its part, Reddit doesn’t publicly disclose how many Reddit posts involve generative AI use.
To be clear, we’re not suggesting that Reddit has a large problem with generative AI use. By now, many subreddits seem to have agreed on their approach to AI-generated posts, and generative AI has not superseded the real, human voices that have made Reddit popular.
Still, mods largely agree that generative AI will likely get more popular on Reddit over the next few years, making generative AI modding increasingly important to both moderators and general users. Generative AI’s rising popularity has also had implications for Reddit the company, which in 2024 started licensing Reddit posts to train the large language models (LLMs) powering generative AI.
(Note: All the moderators I spoke with for this story requested that I use their Reddit usernames instead of their real names due to privacy concerns.)
No generative AI allowed
When it comes to anti-generative AI rules, numerous subreddits have zero-tolerance policies, while others permit posts that use generative AI if it’s combined with human elements or is executed very well. These rules task mods with identifying posts using generative AI and determining if they fit the criteria to be permitted on the subreddit.
Many subreddits have rules against posts made with generative AI because their mod teams or members consider such posts “low effort” or believe AI is antithetical to the subreddit’s mission of providing real human expertise and creations.
“At a basic level, generative AI removes the human element from the Internet; if we allowed it, then it would undermine the very point of r/AskHistorians, which is engagement with experts,” the mods of r/AskHistorians told me in a collective statement.
The subreddit’s goal is to provide historical information, and its mods think generative AI could make information shared on the subreddit less accurate. “[Generative AI] is likely to hallucinate facts, generate non-existent references, or otherwise provide misleading content,” the mods said. “Someone getting answers from an LLM can’t respond to follow-ups because they aren’t an expert. We have built a reputation as a reliable source of historical information, and the use of [generative AI], especially without oversight, puts that at risk.”
Similarly, Halaku, a mod of r/wheeloftime, told me that the subreddit’s mods banned generative AI because “we focus on genuine discussion.” Halaku believes AI content can’t facilitate “organic, genuine discussion” and “can drown out actual artwork being done by actual artists.”
The r/lego subreddit banned AI-generated art because it caused confusion in online fan communities and retail stores selling Lego products, r/lego mod Mescad said. “People would see AI-generated art that looked like Lego on [I]nstagram or [F]acebook and then go into the store to ask to buy it,” they explained. “We decided that our community’s dedication to authentic Lego products doesn’t include AI-generated art.”
Not all of Reddit is against generative AI, of course. Subreddits dedicated to the technology exist, and some general subreddits permit the use of generative AI in some or all forms.
“When it comes to bans, I would rather focus on hate speech, Nazi salutes, and things that actually harm the subreddits,” said 3rdusernameiveused, who moderates r/consoom and r/TeamBuilder25, which don’t ban generative AI. “AI art does not do that… If I was going to ban [something] for ‘moral’ reasons, it probably won’t be AI art.”
“Overwhelmingly low-effort slop”
Some generative AI bans are reflective of concerns that people are not being properly compensated for the content they create, which is then fed into LLM training.
Mod Mathgeek007 told me that r/DeadlockTheGame bans generative AI because its members consider it “a form of uncredited theft,” adding:
You aren’t allowed to sell/advertise the work of others, and AI in a sense is using patterns derived from the work of others to create mockeries. I’d personally have less of an issue with it if the artists involved were credited and compensated—and there are some niche AI tools that do this.
Other moderators simply think generative AI reduces the quality of a subreddit’s content.
“It often just doesn’t look good… the art can often look subpar,” Mathgeek007 said.
Similarly, r/videos bans most AI-generated content because, according to its announcement, the videos are “annoying” and “just bad video” 99 percent of the time. In an online interview, r/videos mod Abrownn told me:
It’s overwhelmingly low-effort slop thrown together simply for views/ad revenue. The creators rarely care enough to put real effort into post-generation [or] editing of the content [and] rarely have coherent narratives [in] the videos, etc. It seems like they just throw the generated content into a video, export it, and call it a day.
An r/fakemon mod told me, “I can’t think of anything more low-effort in terms of art creation than just typing words and having it generated for you.”
Some moderators say generative AI helps people spam unwanted content on a subreddit, including posts that are irrelevant to the subreddit and posts that attack users.
“[Generative AI] content is almost entirely posted for purely self promotional/monetary reasons, and we as mods on Reddit are constantly dealing with abusive users just spamming their content without regard for the rules,” Abrownn said.
A moderator of the r/wallpaper subreddit, which permits generative AI, disagrees. The mod told me that generative AI “provides new routes for novel content” in the subreddit and questioned concerns about generative AI stealing from human artists or offering lower-quality work, saying those problems aren’t unique to generative AI:
Even in our community, we observe human-generated content that is subjectively low quality (poor camera/[P]hotoshopping skills, low-resolution source material, intentional “shitposting”). It can be argued that AI-generated content amplifies this behavior, but our experience (which we haven’t quantified) is that the rate of such behavior (whether human-generated or AI-generated content) has not changed much within our own community.
But we’re not a very active community—[about] 13 posts per day … so it very well could be a “frog in boiling water” situation.
Generative AI “wastes our time”
Many mods are confident in their ability to effectively identify posts that use generative AI. A bigger problem is how much time it takes to identify these posts and remove them.
The r/AskHistorians mods, for example, noted that all bans on the subreddit (including bans unrelated to AI) have “an appeals process,” and “making these assessments and reviewing AI appeals means we’re spending a considerable amount of time on something we didn’t have to worry about a few years ago.”
They added:
Frankly, the biggest challenge with [generative AI] usage is that it wastes our time. The time spent evaluating responses for AI use, responding to AI evangelists who try to flood our subreddit with inaccurate slop and then argue with us in modmail [direct messages to a subreddit’s mod team], and discussing edge cases could better be spent on other subreddit projects, like our podcast, newsletter, and AMAs, … providing feedback to users, or moderating input from users who intend to positively contribute to the community.
Several other mods I spoke with agree. Mathgeek007, for example, named “fighting AI bros” as a common obstacle. And for r/wheeloftime moderator Halaku, the biggest challenge in moderating against generative AI is “a generational one.”
“Some of the current generation don’t have a problem with it being AI because content is content, and [they think] we’re being elitist by arguing otherwise, and they want to argue about it,” they said.
A couple of mods noted that it’s less time-consuming to moderate subreddits that ban generative AI than it is to moderate those that allow posts using generative AI, depending on the context.
“On subreddits where we allowed AI, I often take a bit longer time to actually go into each post where I feel like… it’s been AI-generated to actually look at it and make a decision,” explained N3DSdude, a mod of several subreddits with rules against generative AI, including r/DeadlockTheGame.
MyarinTime, a moderator for r/lewdgames, which allows generative AI images, highlighted the challenges of identifying human-prompted generative AI content versus AI-generated content prompted by a bot:
When the AI bomb started, most of those bots started using AI content to work around our filters. Most of those bots started showing some random AI render, so it looks like you’re actually talking about a game when you’re not. There’s no way to know when those posts are legit games unless [you check] them one by one. I honestly believe it would be easier if we kick any post with [AI-]generated image… instead of checking if a button was pressed by a human or not.
Mods expect things to get worse
Most mods told me it’s pretty easy for them to detect posts made with generative AI, pointing to the distinct tone and favored phrases of AI-generated text. A few said that AI-generated video is harder to spot but still detectable. But as generative AI gets more advanced, moderators are expecting their work to get harder.
In a joint statement, r/dune mods Blue_Three and Herbalhippie said, “AI used to have a problem making hands—i.e., too many fingers, etc.—but as time goes on, this is less and less of an issue.”
R/videos’ Abrownn also wonders how easy it will be to detect AI-generated Reddit content “as AI tools advance and content becomes more lifelike.”
Mathgeek007 added:
AI is becoming tougher to spot and is being propagated at a larger rate. When AI style becomes normalized, it becomes tougher to fight. I expect generative AI to get significantly worse—until it becomes indistinguishable from ordinary art.
Moderators currently use various methods to fight generative AI, but they’re not perfect. r/AskHistorians mods, for example, use “AI detectors, which are unreliable, problematic, and sometimes require paid subscriptions, as well as our own ability to detect AI through experience and expertise,” while N3DSdude pointed to tools like Quid and GPTZero.
To manage current and future work around blocking generative AI, most of the mods I spoke with said they’d like Reddit to release a proprietary tool to help them.
“I’ve yet to see a reliable tool that can detect AI-generated video content,” Abrownn said. “Even if we did have such a tool, we’d be putting hundreds of hours of content through the tool daily, which would get rather expensive rather quickly. And we’re unpaid volunteer moderators, so we will be outgunned shortly when it comes to detecting this type of content at scale. We can only hope that Reddit will offer us a tool at some point in the near future that can help deal with this issue.”
A Reddit spokesperson told me that the company is evaluating what such a tool could look like. But Reddit doesn’t have a rule banning generative AI overall, and the spokesperson said the company doesn’t want to release a tool that would hinder expression or creativity.
For now, Reddit seems content to rely on moderators to remove AI-generated content when appropriate. Reddit’s spokesperson added:
Our moderation approach helps ensure that content on Reddit is curated by real humans. Moderators are quick to remove content that doesn’t follow community rules, including harmful or irrelevant AI-generated content—we don’t see this changing in the near future.
Making a generative AI Reddit tool wouldn’t be easy
Reddit is handling the evolving concerns around generative AI as it has handled other content issues, including by leveraging AI and machine learning tools. Reddit’s spokesperson said that this includes testing tools that can identify AI-generated media, such as images of politicians.
But making a proprietary tool that allows moderators to detect AI-generated posts won’t be easy, if it happens at all. The current tools for detecting generative AI are limited in their capabilities, and as generative AI advances, Reddit would need to provide tools that are more advanced than the AI-detecting tools that are currently available.
That would require a good deal of technical resources and would also likely present notable economic challenges for the social media platform, which only became profitable last year. And as noted by r/videos moderator Abrownn, tools for detecting AI-generated video still have a long way to go, making a Reddit-specific system especially challenging to create.
But even with a hypothetical Reddit tool, moderators would still have their work cut out for them. And because Reddit’s popularity is largely due to its content from real humans, that work is important.
Since Reddit’s inception, that has meant relying on moderators, which Reddit has said it intends to keep doing. As r/dune mods Blue_Three and herbalhippie put it, it’s in Reddit’s “best interest that much/most content remains organic in nature.” After all, Reddit’s profitability has a lot to do with how much AI companies are willing to pay to access Reddit data. That value would likely decline if Reddit posts became largely AI-generated themselves.
But providing the technology to ensure that generative AI isn’t abused on Reddit would be a large challenge. For now, volunteer laborers will continue to bear the brunt of generative AI moderation.
Advance Publications, which owns Ars Technica parent Condé Nast, is the largest shareholder of Reddit.
Scharon is a Senior Technology Reporter at Ars Technica writing news, reviews, and analysis on consumer gadgets and services. She’s been reporting on technology for over 10 years, with bylines at Tom’s Hardware, Channelnomics, and CRN UK.
Reddit is planning to introduce a paywall this year, CEO Steve Huffman said during a videotaped Ask Me Anything (AMA) session on Thursday.
Huffman previously showed interest in potentially introducing a new type of subreddit with “exclusive content or private areas” that Reddit users would pay to access.
When asked this week about plans for some Redditors to create “content that only paid members can see,” Huffman said:
It’s a work in progress right now, so that one’s coming… We’re working on it as we speak.
When asked about “new, key features that you plan to roll out for Reddit in 2025,” Huffman responded, in part: “Paid subreddits, yes.”
Reddit’s paywall would ostensibly only apply to certain new subreddit types, not any subreddits currently available. In August, Huffman said that even with paywalled content, free Reddit would “continue to exist and grow and thrive.”
A critical aspect of any potential plan to make Reddit users pay to access subreddit content is determining how the Reddit users involved will be compensated. Reddit may have a harder time getting volunteer moderators to wrangle discussions on paid-for subreddits—if it uses volunteer mods at all. Balancing paid and free content would also be necessary to avoid polarizing much of Reddit’s current user base.
Reddit has offered paid-for premium versions of community features before, like r/Lounge, a subreddit accessible only to people with Reddit Gold, which must be purchased with real money.
Reddit would also need to consider how it might compensate people for user-generated content that people pay to access, as Reddit’s business is largely built on free, user-generated content. The Reddit Contributor Program, launched in September 2023, could be a foundation; it lets users “earn money for their qualifying contributions to the Reddit community, including awards and karma, collectible avatars, and developer apps,” according to Reddit. Reddit says it pays up to $0.01 per 1 Gold received, depending on how much karma the user has earned over the past year. To receive a payout, a user needs at least 1,000 Gold, which is equivalent to $10.
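For a rough sense of how that math works out, here is a minimal TypeScript sketch of the conversion. The constants mirror the figures above, but the flat top-tier rate and the function name are illustrative assumptions, not Reddit’s actual payout code.

```typescript
// Illustrative only: assumes the top rate of $0.01 per Gold described above.
// Real payouts depend on the contributor's karma tier and Reddit's own rules.
const RATE_PER_GOLD_USD = 0.01; // top-tier rate
const MIN_PAYOUT_GOLD = 1_000;  // payout threshold (equivalent to $10 at the top rate)

function estimatePayoutUsd(goldReceived: number): number | null {
  if (goldReceived < MIN_PAYOUT_GOLD) return null; // below the threshold, no payout yet
  return goldReceived * RATE_PER_GOLD_USD;
}

console.log(estimatePayoutUsd(1_000)); // 10
console.log(estimatePayoutUsd(500));   // null
```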
On a recent WikiTok browsing run, I ran across entries on topics like SX-Window (a GUI for the Sharp X68000 series of computers), Xantocillin (“the first reported natural product found to contain the isocyanide functional group”), Lorenzo Ghiberti (an Italian Renaissance sculptor from Florence), the William Wheeler House in Texas, and the city of Krautheim, Germany—none of which I knew existed before the session started.
How WikiTok took off
The idea for WikiTok originated with developer Tyler Angert on Monday evening when he tweeted, “insane project idea: all of wikipedia on a single, scrollable page.” Bloomberg Beta VC James Cham replied, “Even better, an infinitely scrolling Wikipedia page based on whatever you are interested in next?” and Angert coined “WikiTok” in a follow-up post.
Early the next morning, at 12:28 am, writer Grant Slatton quote-tweeted the WikiTok discussion, and that’s where Gemal came in. “I saw it from [Slatton’s] quote retweet,” he told Ars. “I immediately thought, ‘Wow I can build an MVP [minimum viable product] and this could take off.’”
Gemal started his project at 12:30 am, and with help from AI coding tools like Anthropic’s Claude and Cursor, he finished a prototype by 2 am and posted the results on X. Someone later announced WikiTok on Y Combinator’s Hacker News, where it topped the site’s list of daily news items.
A screenshot of the WikiTok web app running in a desktop web browser. Credit: Benj Edwards
“The entire thing is only several hundred lines of code, and Claude wrote the vast majority of it,” Gemal told Ars. “AI helped me ship really really fast and just capitalize on the initial viral tweet asking for Wikipedia with scrolling.”
Gemal posted the code for WikiTok on GitHub, so anyone can modify or contribute to the project. Right now, the web app supports 14 languages, article previews, and article sharing on both desktop and mobile browsers. New features may arrive as contributors add them. It’s based on a tech stack that includes React 18, TypeScript, Tailwind CSS, and Vite.
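For readers curious about what a feed like this involves under the hood, here is a minimal TypeScript sketch that pulls random article previews from Wikipedia’s public REST API. It illustrates the general approach only; the interface and function names are assumptions, not WikiTok’s actual code, which is available on GitHub.

```typescript
// Minimal sketch of a WikiTok-style feed loader.
// Assumes the public Wikimedia REST API's random-summary endpoint;
// WikiTok's real code may fetch and batch articles differently.
interface ArticleCard {
  title: string;
  extract: string;      // short plain-text summary
  thumbnail?: string;   // image URL, if the article has one
  url: string;          // link back to the full article
}

async function fetchRandomArticle(lang = "en"): Promise<ArticleCard> {
  const res = await fetch(
    `https://${lang}.wikipedia.org/api/rest_v1/page/random/summary`
  );
  if (!res.ok) throw new Error(`Wikipedia API error: ${res.status}`);
  const data = await res.json();
  return {
    title: data.title,
    extract: data.extract,
    thumbnail: data.thumbnail?.source,
    url: data.content_urls?.desktop?.page,
  };
}

// Pre-fetch a small batch so the next "swipe" renders instantly,
// mimicking an infinite-scroll feed.
async function fetchBatch(size = 5, lang = "en"): Promise<ArticleCard[]> {
  return Promise.all(
    Array.from({ length: size }, () => fetchRandomArticle(lang))
  );
}
```

In a React front end like WikiTok’s, a batch like this would typically be rendered as cards and topped up as the user scrolls.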
And so far, he is sticking to his vision of a free way to enjoy Wikipedia without being tracked and targeted. “I have no grand plans for some sort of insane monetized hyper-calculating TikTok algorithm,” Gemal told us. “It is anti-algorithmic, if anything.”
A Reddit spokesperson told Ars that decisions to ban or not ban X links are user-driven. Subreddit members are allowed to suggest and institute subreddit rules, they added.
“Notably, many Reddit communities also prohibit Reddit links,” the Reddit representative pointed out. They noted that Reddit as a company doesn’t currently have any ban on links to X.
A ban against links to an entire platform isn’t outside of the ordinary for Reddit. Numerous subreddits ban social media links, Reddit’s spokesperson said. r/EarthPorn, a subreddit for landscape photography, for example, doesn’t allow website links because all posts “must be static images,” per the subreddit’s official rules. r/AskReddit, meanwhile, only allows for questions asked in the title of a Reddit post and doesn’t allow for use of the text box, including for sharing links.
“Reddit has a longstanding commitment to freedom of speech and freedom of association,” Reddit’s spokesperson said. They added that any person is free to make or moderate their own community. Those unsatisfied with a forum about Seahawks football that doesn’t have X links could feel free to make their own subreddit, although some of the subreddits considering X bans, like r/MadeMeSmile, already have millions of followers.
Meta bans also under discussion
As 404 Media noted, some Redditors are also pushing to block content from Facebook, Instagram, and other Meta properties in response to new Donald Trump-friendly policies instituted by owner Mark Zuckerberg, like Meta killing diversity programs and axing third-party fact-checkers.
People can tell it’s not true, but if they’re outraged by it, they’ll share anyway.
Rob Bauer, the chair of NATO’s Military Committee, reportedly said, “It is more competent not to wait, but to hit launchers in Russia in case Russia attacks us. We must strike first.” These comments, supposedly made in 2024, were later interpreted as suggesting NATO should attempt a preemptive strike against Russia, an idea that lots of people found outrageously dangerous.
But lots of people missed one thing about the quote: Bauer never said it. It was made up. Despite that, the purported statement got nearly 250,000 views on X and was mindlessly spread further by the likes of Alex Jones.
Why do stories like this get so many views and shares? “The vast majority of misinformation studies assume people want to be accurate, but certain things distract them,” says William J. Brady, a researcher at Northwestern University. “Maybe it’s the social media environment. Maybe they’re not understanding the news, or the sources are confusing them. But what we found is that when content evokes outrage, people are consistently sharing it without even clicking into the article.” Brady co-authored a study on how misinformation exploits outrage to spread online. When we get outraged, the study suggests, we simply care way less if what’s got us outraged is even real.
Tracking the outrage
The rapid spread of misinformation on social media has generally been explained by something you might call an error theory—the idea that people share misinformation by mistake. Based on that, most solutions to the misinformation issue relied on prompting users to focus on accuracy and think carefully about whether they really wanted to share stories from dubious sources. Those prompts, however, haven’t worked very well. To get to the root of the problem, Brady’s team analyzed data that tracked over 1 million links on Facebook and nearly 45,000 posts on Twitter from different periods ranging from 2017 to 2021.
Parsing through the Twitter data, the team used a machine-learning model to predict which posts would cause outrage. “It was trained on 26,000 tweets posted around 2018 and 2019. We got raters from across the political spectrum, we taught them what we meant by outrage, and got them to label the data we later used to train our model,” Brady says.
The purpose of the model was to predict whether a message was an expression of moral outrage, an emotional state defined in the study as “a mixture of anger and disgust triggered by perceived moral transgressions.” After training, the AI was effective. “It performed as good as humans,” Brady claims. Facebook data was a bit trickier because the team did not have access to comments; all they had to work with were reactions. The reaction the team chose as a proxy for outrage was anger. Once the data was sorted into outrageous and not outrageous categories, Brady and his colleagues went on to determine whether the content was trustworthy news or misinformation.
“We took what is now the most widely used approach in the science of misinformation, which is a domain classification approach,” Brady says. The process boiled down to compiling a list of domains with very high and very low trustworthiness based on work done by fact-checking organizations. This way, for example, The Chicago Sun-Times was classified as trustworthy; Breitbart, not so much. “One of the issues there is that you could have a source that produces misinformation which one time produced a true story. We accepted that. We went with statistics and general rules,” Brady acknowledged. His team confirmed that sources classified in the study as misinformation produced news that was fact-checked as false six to eight times more often than reliable domains, which Brady’s team thought was good enough to work with.
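To make the domain-classification step concrete, here is a rough TypeScript sketch of the idea. The trust lists are tiny illustrative stand-ins (the study relied on much larger lists compiled from fact-checking organizations’ work), and the function name is an assumption.

```typescript
// Rough sketch of domain-based classification: a link inherits the
// trust label of the site that published it. The example domains below
// are illustrative stand-ins, not the study's actual lists.
const TRUSTED = new Set(["chicago.suntimes.com", "apnews.com"]);
const UNTRUSTED = new Set(["breitbart.com"]);

type Label = "trustworthy" | "misinformation" | "unknown";

function classifyLink(url: string): Label {
  const host = new URL(url).hostname.replace(/^www\./, "");
  if (TRUSTED.has(host)) return "trustworthy";
  if (UNTRUSTED.has(host)) return "misinformation";
  return "unknown"; // domain not on either list
}

console.log(classifyLink("https://www.breitbart.com/some-story")); // "misinformation"
```

As Brady acknowledged, this approach trades precision for scale: a generally unreliable outlet can occasionally publish a true story and still be counted on the misinformation side.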
Finally, the researchers started analyzing the data to answer questions like whether misinformation sources evoke more outrage, whether outrageous news was shared more often than non-outrageous news, and finally, what reasons people had for sharing outrageous content. And that’s when the idealized picture of honest, truthful citizens who shared misinformation just because they were too distracted to recognize it started to crack.
Going with the flow
The Facebook and Twitter data analyzed by Brady’s team revealed that misinformation evoked more outrage than trustworthy news. At the same time, people were way more likely to share outrageous content, regardless of whether it was misinformation or not. Putting those two trends together led the team to conclude that outrage primarily boosted the spread of fake news, since reliable sources usually produced less outrageous content.
“What we know about human psychology is that our attention is drawn to things rooted in deep biases shaped by evolutionary history,” Brady says. Those things are emotional content, surprising content, and especially, content that is related to the domain of morality. “Moral outrage is expressed in response to perceived violations of moral norms. This is our way of signaling to others that the violation has occurred and that we should punish the violators. This is done to establish cooperation in the group,” Brady explains.
This is why outrageous content has an advantage in the social media attention economy. It stands out, and standing out is a precursor to sharing. But there are other reasons we share outrageous content. “It serves very particular social functions,” Brady says. “It’s a cheap way to signal group affiliation or commitment.”
Cheap, however, didn’t mean completely free. The team found that the penalty for sharing misinformation, outrageous or not, was loss of reputation—spewing nonsense doesn’t make you look good, after all. The question was whether people really shared fake news because they failed to identify it as such or if they just considered signaling their affiliation was more important.
Flawed human nature
Brady’s team designed two behavioral experiments in which 1,475 people were presented with a selection of fact-checked news stories curated to contain both outrageous and non-outrageous content, drawn from reliable sources as well as from misinformation outlets. In both experiments, the participants were asked to rate how outrageous the headlines were.
The second task was different, though. In the first experiment, people were simply asked to rate how likely they were to share a headline, while in the second they were asked to determine if the headline was true or not.
It turned out that most people could distinguish between true and fake news. Yet they were willing to share outrageous news regardless of whether it was true or not—a result that was in line with previous findings from Facebook and Twitter data. Many participants were perfectly OK with sharing outrageous headlines, even though they were fully aware those headlines were misinformation.
Brady pointed to an example from the recent campaign, when a reporter pushed J.D. Vance about false claims regarding immigrants eating pets. “When the reporter pushed him, he implied that yes, it was fabrication, but it was outrageous and spoke to the issues his constituents were mad about,” Brady says. These experiments show that this kind of dishonesty is not exclusive to politicians running for office—people do this on social media all the time.
The urge to signal a moral stance quite often takes precedence over truth, but misinformation is not exclusively due to flaws in human nature. “One thing this study was not focused on was the impact of social media algorithms,” Brady notes. Those algorithms usually boost content that generates engagement, and we tend to engage more with outrageous content. This, in turn, incentivizes people to make their content more outrageous to get this algorithmic boost.
Jacek Krywko is a freelance science and technology writer who covers space exploration, artificial intelligence research, computer science, and all sorts of engineering wizardry.
Apparently, some London residents are getting fed up with social media influencers whose reviews draw long lines of tourists to their favorite restaurants, sometimes just for the likes. Christian Calgie, a reporter for the London-based news publication Daily Express, pointed out this trend on X yesterday, noting a surge of Redditors recommending Angus Steakhouse, a chain restaurant, to combat it.
As Gizmodo deduced, the trend seemed to start on the r/London subreddit, where a user complained about a spot in Borough Market being “ruined by influencers” on Monday:
“Last 2 times I have been there has been a queue of over 200 people, and the ones with the food are just doing the selfie shit for their [I]nsta[gram] pages and then throwing most of the food away.”
As of this writing, the post has 4,900 upvotes and numerous responses suggesting that Redditors talk about how good Angus Steakhouse is so that Google picks up on it. Commenters quickly understood the assignment.
“Agreed with other posters Angus steakhouse is absolutely top tier and tourists shoyldnt [sic] miss out on it,” one Redditor wrote.
Another Reddit user wrote:
Spreading misinformation suddenly becomes a noble goal.
As of this writing, asking Google for the best steak, steakhouse, or steak sandwich in London (or similar) isn’t generating an AI Overview result for me. But when I searched for the best steak sandwich in London, the top result was from Reddit, including a thread from four days ago titled “Which Angus Steakhouse do you recommend for their steak sandwich?” and one from two days ago titled “Had to see what all the hype was about, best steak sandwich I’ve ever had!” with a picture of an Angus Steakhouse.
Following site-wide user protests last year that featured moderators turning thousands of subreddits private or not-safe-for-work (NSFW), Reddit announced that mods now need its permission to make those changes.
Reddit’s VP of community, going by Go_JasonWaterfalls, made the announcement about what Reddit calls Community Types today. Reddit’s permission is also required to make subreddits restricted or to go from NSFW to safe-for-work (SFW). Reddit’s employee claimed that requests will be responded to “in under 24 hours.”
Reddit’s employee said that “temporarily going restricted is exempt” from this requirement, adding that “mods can continue to instantly restrict posts and/or comments for up to 7 days using Temporary Events.” Additionally, if a subreddit has fewer than 5,000 members or is less than 30 days old, the request “will be automatically approved,” per Go_JasonWaterfalls.
Reddit’s post includes a list of “valid” reasons that mods tend to change their subreddit’s Community Type and provides alternative solutions.
Last year’s protests “accelerated” this policy change
Last year, Reddit announced that it would be charging a massive amount for access to its previously free API. This caused many popular third-party Reddit apps to close down. Reddit users then protested by turning subreddits private (or read-only) or by only showing NSFW content or jokes and memes. Reddit then responded by removing some moderators; eventually, the protests subsided.
Reddit, which previously admitted that another similar protest could hurt it financially, has maintained that moderators’ actions during the protests broke its rules. Now, it has solidified a way to prevent something like last year’s site-wide protests from happening again.
Speaking to The Verge, Laura Nestler, who The Verge reported is Go_JasonWaterfalls, claimed that Reddit has been talking about making this change since at least 2021. The protests, she said, were a wake-up call that moderators’ ability to turn subreddits private “could be used to harm Reddit at scale.” The protests “accelerated” the policy change, per Nestler.
The announcement on r/modnews reads:
… the ability to instantly change Community Type settings has been used to break the platform and violate our rules. We have a responsibility to protect Reddit and ensure its long-term health, and we cannot allow actions that deliberately cause harm.
After shutting down a tactic for responding to unfavorable Reddit policy changes, Go_JasonWaterfalls claimed that Reddit still wants to hear from users.
“Community Type settings have historically been used to protest Reddit’s decisions,” they wrote.
“While we are making this change to ensure users’ expectations regarding a community’s access do not suddenly change, protest is allowed on Reddit. We want to hear from you when you think Reddit is making decisions that are not in your communities’ best interests. But if a protest crosses the line into harming redditors and Reddit, we’ll step in.”
Last year’s user protests illustrated how dependent Reddit is on unpaid moderators and user-generated content. At times, things turned ugly, pitting Reddit executives against long-time users (Reddit CEO Steve Huffman infamously called Reddit mods “landed gentry,” something that some were quick to remind Go_JasonWaterfalls of) and reportedly worrying Reddit employees.
Although the protests failed to reverse Reddit’s prohibitive API fees or to save most third-party apps, they succeeded in getting users’ concerns heard and even crashed Reddit for three hours. Further, NSFW protests temporarily prevented Reddit from selling ads on some subreddits. Since going public this year and amid a push to reach profitability, Reddit has been more focused on ads than ever. (Most of Reddit’s money comes from ads.)
Reddit’s Nestler told The Verge that the new policy was reviewed by Reddit’s Mod Council. Reddit is confident that it won’t lose mods because of the change, she said.
“Demotes us all to janitors”
The news marks another broad policy change that is likely to upset users and make Reddit seem unwilling to give into user feedback, despite Go_JasonWaterfalls saying that “protest is allowed on Reddit.” For example, in response, Reddit user CouncilOfStrongs said:
Don’t lie to us, please.
Something that you can ignore because it has no impact cannot be a protest, and no matter what you say that is obviously the one and only point of you doing this – to block moderators from being able to hold Reddit accountable in even the smallest way for malicious, irresponsible, bad faith changes that they make.
Reddit user belisaurius, who is listed as a mod for several active subreddits, including a 336,000-member one for the Philadelphia Eagles NFL team, said that the policy change “removes moderators from any position of central responsibility and demotes us all to janitors.”
As Reddit continues seeking profits and seemingly more control over a platform built around free user-generated content and moderation, users will have to either accept that Reddit is changing or leave the platform.
Advance Publications, which owns Ars Technica parent Condé Nast, is the largest shareholder in Reddit.