Social Media


People will share misinformation that sparks “moral outrage”


People can tell it’s not true, but if they’re outraged by it, they’ll share anyway.

Rob Bauer, the chair of a NATO military committee, reportedly said, “It is more competent not to wait, but to hit launchers in Russia in case Russia attacks us. We must strike first.” These comments, supposedly made in 2024, were later interpreted as suggesting NATO should attempt a preemptive strike against Russia, an idea that lots of people found outrageously dangerous.

But many people missed a key fact about the quote: Bauer never said it. It was made up. Despite that, the purported statement got nearly 250,000 views on X and was mindlessly spread further by the likes of Alex Jones.

Why do stories like this get so many views and shares? “The vast majority of misinformation studies assume people want to be accurate, but certain things distract them,” says William J. Brady, a researcher at Northwestern University. “Maybe it’s the social media environment. Maybe they’re not understanding the news, or the sources are confusing them. But what we found is that when content evokes outrage, people are consistently sharing it without even clicking into the article.” Brady co-authored a study on how misinformation exploits outrage to spread online. When we get outraged, the study suggests, we simply care way less if what’s got us outraged is even real.

Tracking the outrage

The rapid spread of misinformation on social media has generally been explained by something you might call an error theory—the idea that people share misinformation by mistake. Based on that, most solutions to the misinformation issue relied on prompting users to focus on accuracy and think carefully about whether they really wanted to share stories from dubious sources. Those prompts, however, haven’t worked very well. To get to the root of the problem, Brady’s team analyzed data that tracked over 1 million links on Facebook and nearly 45,000 posts on Twitter from different periods ranging from 2017 to 2021.

Parsing through the Twitter data, the team used a machine-learning model to predict which posts would cause outrage. “It was trained on 26,000 tweets posted around 2018 and 2019. We got raters from across the political spectrum, we taught them what we meant by outrage, and got them to label the data we later used to train our model,” Brady says.

The purpose of the model was to predict whether a message was an expression of moral outrage, an emotional state defined in the study as “a mixture of anger and disgust triggered by perceived moral transgressions.” After training, the AI was effective. “It performed as good as humans,” Brady claims. Facebook data was a bit more tricky because the team did not have access to comments; all they had to work with were reactions. The reaction the team chose as a proxy for outrage was anger. Once the data was sorted into outrageous and not outrageous categories, Brady and his colleagues went on to determine whether the content was trustworthy news or misinformation.
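What Brady describes is a standard supervised text-classification workflow: human raters label a sample of posts, a model learns from those labels, and the validated model then scores the rest of the corpus at a scale no team of raters could read. A minimal, hypothetical sketch of that workflow in Python might look like the following; the file name, column names, and the simple scikit-learn model are illustrative assumptions, not the study’s actual code or data.

```python
# Minimal sketch of a supervised "moral outrage" classifier.
# Assumes a hypothetical CSV of rater-labeled tweets with columns
# "text" and "outrage" (1 = expresses outrage, 0 = does not).
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

labeled = pd.read_csv("labeled_tweets.csv")
X_train, X_test, y_train, y_test = train_test_split(
    labeled["text"], labeled["outrage"], test_size=0.2, random_state=42
)

# Bag-of-words features plus a linear classifier stand in for the study's
# model here; the workflow (label, train, validate, score) is the same.
clf = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2), min_df=2),
    LogisticRegression(max_iter=1000),
)
clf.fit(X_train, y_train)

# Compare predictions against held-out human labels, as the team did
# when checking that the model performed on par with human raters.
print(classification_report(y_test, clf.predict(X_test)))

# Score new, unlabeled posts for downstream analysis.
new_posts = ["This policy is an absolute disgrace!", "Lovely weather today."]
print(clf.predict_proba(new_posts)[:, 1])  # estimated probability of the outrage class
```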

“We took what is now the most widely used approach in the science of misinformation, which is a domain classification approach,” Brady says. The process boiled down to compiling a list of domains with very high and very low trustworthiness based on work done by fact-checking organizations. This way, for example, The Chicago Sun-Times was classified as trustworthy; Breitbart, not so much. “One of the issues there is that you could have a source that produces misinformation which one time produced a true story. We accepted that. We went with statistics and general rules,” Brady acknowledged. His team confirmed that sources classified in the study as misinformation produced news that was fact-checked as false six to eight times more often than reliable domains, which Brady’s team thought was good enough to work with.
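A toy illustration of that domain-level rule appears below; the domain sets are placeholders standing in for the fact-checker ratings the study actually relied on, and the point is that a link inherits its label from its source domain rather than from the accuracy of the individual story.

```python
# Toy sketch of domain-based source classification: every link from a domain
# gets that domain's trust label, regardless of whether the individual story
# happens to be true. The domain lists here are illustrative placeholders.
from urllib.parse import urlparse

TRUSTED_DOMAINS = {"suntimes.com"}      # e.g., The Chicago Sun-Times
UNTRUSTED_DOMAINS = {"breitbart.com"}   # e.g., Breitbart

def classify_link(url: str) -> str:
    """Label a shared link by its source domain, not by its individual claims."""
    domain = urlparse(url).netloc.lower()
    if domain.startswith("www."):
        domain = domain[len("www."):]
    if domain in TRUSTED_DOMAINS:
        return "trustworthy"
    if domain in UNTRUSTED_DOMAINS:
        return "misinformation"
    return "unclassified"

print(classify_link("https://www.suntimes.com/some-story"))   # trustworthy
print(classify_link("https://breitbart.com/another-story"))   # misinformation
```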

Finally, the researchers started analyzing the data to answer questions like whether misinformation sources evoke more outrage, whether outrageous news was shared more often than non-outrageous news, and finally, what reasons people had for sharing outrageous content. And that’s when the idealized picture of honest, truthful citizens who shared misinformation just because they were too distracted to recognize it started to crack.

Going with the flow

The Facebook and Twitter data analyzed by Brady’s team revealed that misinformation evoked more outrage than trustworthy news. At the same time, people were way more likely to share outrageous content, regardless of whether it was misinformation or not. Putting those two trends together led the team to conclude outrage primarily boosted the spread of fake news since reliable sources usually produced less outrageous content.

“What we know about human psychology is that our attention is drawn to things rooted in deep biases shaped by evolutionary history,” Brady says. Those things are emotional content, surprising content, and especially, content that is related to the domain of morality. “Moral outrage is expressed in response to perceived violations of moral norms. This is our way of signaling to others that the violation has occurred and that we should punish the violators. This is done to establish cooperation in the group,” Brady explains.

This is why outrageous content has an advantage in the social media attention economy. It stands out, and standing out is a precursor to sharing. But there are other reasons we share outrageous content. “It serves very particular social functions,” Brady says. “It’s a cheap way to signal group affiliation or commitment.”

Cheap, however, didn’t mean completely free. The team found that the penalty for sharing misinformation, outrageous or not, was loss of reputation—spewing nonsense doesn’t make you look good, after all. The question was whether people really shared fake news because they failed to identify it as such or if they just considered signaling their affiliation was more important.

Flawed human nature

Brady’s team designed two behavioral experiments where 1,475 people were presented with a selection of fact-checked news stories curated to contain outrageous and not outrageous content; they were also given reliable news and misinformation. In both experiments, the participants were asked to rate how outrageous the headlines were.

The second task was different, though. In the first experiment, people were simply asked to rate how likely they were to share a headline, while in the second they were asked to determine if the headline was true or not.

It turned out that most people could discern between true and fake news. Yet they were willing to share outrageous news regardless of whether it was true or not—a result that was in line with previous findings from Facebook and Twitter data. Many participants were perfectly OK with sharing outrageous headlines, even though they were fully aware those headlines were misinformation.

Brady pointed to an example from the recent campaign, when a reporter pushed J.D. Vance about false claims regarding immigrants eating pets. “When the reporter pushed him, he implied that yes, it was fabrication, but it was outrageous and spoke to the issues his constituents were mad about,” Brady says. These experiments show that this kind of dishonesty is not exclusive to politicians running for office—people do this on social media all the time.

The urge to signal a moral stance quite often takes precedence over truth, but misinformation is not exclusively due to flaws in human nature. “One thing this study was not focused on was the impact of social media algorithms,” Brady notes. Those algorithms usually boost content that generates engagement, and we tend to engage more with outrageous content. This, in turn, incentivizes people to make their content more outrageous to get this algorithmic boost.

Science, 2024.  DOI: 10.1126/science.adl2829


Jacek Krywko is a freelance science and technology writer who covers space exploration, artificial intelligence research, computer science, and all sorts of engineering wizardry.



Annoyed Redditors tanking Google Search results illustrates perils of AI scrapers

Fed up Londoners

Apparently, some London residents are getting fed up with social media influencers whose reviews draw long lines of tourists to their favorite restaurants, sometimes just for the likes. Christian Calgie, a reporter for London-based news publication Daily Express, pointed out this trend on X yesterday, noting that Redditors have started steering people toward Angus Steakhouse, a chain restaurant, to combat it.

As Gizmodo deduced, the trend seemed to start on the r/London subreddit, where a user complained about a spot in Borough Market being “ruined by influencers” on Monday:

“Last 2 times I have been there has been a queue of over 200 people, and the ones with the food are just doing the selfie shit for their [I]nsta[gram] pages and then throwing most of the food away.”

As of this writing, the post has 4,900 upvotes and numerous responses suggesting that Redditors talk about how good Angus Steakhouse is so that Google picks up on it. Commenters quickly understood the assignment.

“Agreed with other posters Angus steakhouse is absolutely top tier and tourists shoyldnt [sic] miss out on it,” one Redditor wrote.


Spreading misinformation suddenly becomes a noble goal.

As of this writing, asking Google for the best steak, steakhouse, or steak sandwich in London (or similar) isn’t generating an AI Overview result for me. But when I searched for the best steak sandwich in London, the top result was from Reddit, including a thread from four days ago titled “Which Angus Steakhouse do you recommend for their steak sandwich?” and one from two days ago titled “Had to see what all the hype was about, best steak sandwich I’ve ever had!” with a picture of an Angus Steakhouse.



In fear of more user protests, Reddit announces controversial policy change

Protest blowback —

Moderators now need Reddit’s permission to turn subreddits private, NSFW.


Following site-wide user protests last year that featured moderators turning thousands of subreddits private or not-safe-for-work (NSFW), Reddit announced that mods now need its permission to make those changes.

Reddit’s VP of community, going by Go_JasonWaterfalls, made the announcement about what Reddit calls Community Types today. Reddit’s permission is also required to make subreddits restricted or to go from NSFW to safe-for-work (SFW). Reddit’s employee claimed that requests will be responded to “in under 24 hours.”

Reddit’s employee said that “temporarily going restricted is exempt” from this requirement, adding that “mods can continue to instantly restrict posts and/or comments for up to 7 days using Temporary Events.” Additionally, if a subreddit has fewer than 5,000 members or is less than 30 days old, the request “will be automatically approved,” per Go_JasonWaterfalls.

Reddit’s post includes a list of “valid” reasons that mods tend to change their subreddit’s Community Type and provides alternative solutions.

Last year’s protests “accelerated” this policy change

Last year, Reddit announced that it would be charging a massive amount for access to its previously free API. This caused many popular third-party Reddit apps to close down. Reddit users then protested by turning subreddits private (or read-only) or by only showing NSFW content or jokes and memes. Reddit then responded by removing some moderators; eventually, the protests subsided.

Reddit, which previously admitted that another similar protest could hurt it financially, has maintained that moderators’ actions during the protests broke its rules. Now, it has solidified a way to prevent something like last year’s site-wide protests from happening again.

Speaking to The Verge, Laura Nestler, who The Verge reported is Go_JasonWaterfalls, claimed that Reddit has been talking about making this change since at least 2021. The protests, she said, were a wake-up call that moderators’ ability to turn subreddits private “could be used to harm Reddit at scale.” The protests “accelerated” the policy change, per Nestler.

The announcement on r/modnews reads:

… the ability to instantly change Community Type settings has been used to break the platform and violate our rules. We have a responsibility to protect Reddit and ensure its long-term health, and we cannot allow actions that deliberately cause harm.

After shutting down a tactic for responding to unfavorable Reddit policy changes, Go_JasonWaterfalls claimed that Reddit still wants to hear from users.

“Community Type settings have historically been used to protest Reddit’s decisions,” they wrote.

“While we are making this change to ensure users’ expectations regarding a community’s access do not suddenly change, protest is allowed on Reddit. We want to hear from you when you think Reddit is making decisions that are not in your communities’ best interests. But if a protest crosses the line into harming redditors and Reddit, we’ll step in.”

Last year’s user protests illustrated how dependent Reddit is on unpaid moderators and user-generated content. At times, things turned ugly, pitting Reddit executives against long-time users (Reddit CEO Steve Huffman infamously called Reddit mods “landed gentry,” something that some were quick to remind Go_JasonWaterfalls of) and reportedly worrying Reddit employees.

Although the protests failed to reverse Reddit’s prohibitive API fees or to save most third-party apps, they succeeded in getting users’ concerns heard and even crashed Reddit for three hours. Further, NSFW protests temporarily prevented Reddit from selling ads on some subreddits. Since going public this year and amid a push to reach profitability, Reddit has been more focused on ads than ever. (Most of Reddit’s money comes from ads.)

Reddit’s Nestler told The Verge that the new policy was reviewed by Reddit’s Mod Council. Reddit is confident that it won’t lose mods because of the change, she said.

“Demotes us all to janitors”

The news marks another broad policy change that is likely to upset users and make Reddit seem unwilling to give in to user feedback, despite Go_JasonWaterfalls saying that “protest is allowed on Reddit.” For example, in response, Reddit user CouncilOfStrongs said:

Don’t lie to us, please.

Something that you can ignore because it has no impact cannot be a protest, and no matter what you say that is obviously the one and only point of you doing this – to block moderators from being able to hold Reddit accountable in even the smallest way for malicious, irresponsible, bad faith changes that they make.

Reddit user belisaurius, who is listed as a mod for several active subreddits, including a 336,000-member one for the Philadelphia Eagles NFL team, said that the policy change “removes moderators from any position of central responsibility and demotes us all to janitors.”

As Reddit continues seeking profits and seemingly more control over a platform built around free user-generated content and moderation, users will have to either accept that Reddit is changing or leave the platform.

Advance Publications, which owns Ars Technica parent Condé Nast, is the largest shareholder in Reddit.



Due to AI fakes, the “deep doubt” era is here


Given the flood of photorealistic AI-generated images washing over social media networks like X and Facebook these days, we’re seemingly entering a new age of media skepticism: the era of what I’m calling “deep doubt.” While questioning the authenticity of digital content stretches back decades—and analog media long before that—easy access to tools that generate convincing fake content has led to a new wave of liars using AI-generated scenes to deny real documentary evidence. Along the way, people’s existing skepticism toward online content from strangers may be reaching new heights.

Deep doubt is skepticism of real media that stems from the existence of generative AI. This manifests as broad public skepticism toward the veracity of media artifacts, which in turn leads to a notable consequence: People can now more credibly claim that real events did not happen and suggest that documentary evidence was fabricated using AI tools.

The concept behind “deep doubt” isn’t new, but its real-world impact is becoming increasingly apparent. Since the term “deepfake” first surfaced in 2017, we’ve seen a rapid evolution in AI-generated media capabilities. This has led to recent examples of deep doubt in action, such as conspiracy theorists claiming that President Joe Biden has been replaced by an AI-powered hologram and former President Donald Trump’s baseless accusation in August that Vice President Kamala Harris used AI to fake crowd sizes at her rallies. And on Friday, Trump cried “AI” again at a photo of him with E. Jean Carroll (a writer who successfully sued him for sexual assault) that contradicts his claim of never having met her.

Legal scholars Danielle K. Citron and Robert Chesney foresaw this trend years ago, coining the term “liar’s dividend” in 2019 to describe the consequence of deep doubt: deepfakes being weaponized by liars to discredit authentic evidence. But whereas deep doubt was once a hypothetical academic concept, it is now our reality.

The rise of deepfakes, the persistence of doubt

Doubt has been a political weapon since ancient times. This modern AI-fueled manifestation is just the latest evolution of a tactic where the seeds of uncertainty are sown to manipulate public opinion, undermine opponents, and hide the truth. AI is the newest refuge of liars.

Over the past decade, the rise of deep-learning technology has made it increasingly easy for people to craft false or modified pictures, audio, text, or video that appear to be non-synthesized organic media. Deepfakes were named after a Reddit user going by the name “deepfakes,” who shared AI-faked pornography on the service, swapping out the face of a performer with the face of someone else who wasn’t part of the original recording.

In the 20th century, one could argue that a certain part of our trust in media produced by others was a result of how expensive and time-consuming it was, and the skill it required, to produce documentary images and films. Even texts required a great deal of time and skill. As the deep doubt phenomenon grows, it will erode this 20th-century media sensibility. But it will also affect our political discourse, legal systems, and even our shared understanding of historical events that rely on that media to function—we rely on others to get information about the world. From photorealistic images to pitch-perfect voice clones, our perception of what we consider “truth” in media will need recalibration.

In April, a panel of federal judges highlighted the potential for AI-generated deepfakes to not only introduce fake evidence but also cast doubt on genuine evidence in court trials. The concern emerged during a meeting of the US Judicial Conference’s Advisory Committee on Evidence Rules, where the judges discussed the challenges of authenticating digital evidence in an era of increasingly sophisticated AI technology. Ultimately, the judges decided to postpone making any AI-related rule changes, but their meeting shows that the subject is already being considered by American judges.



Procreate defies AI trend, pledges “no generative AI” in its illustration app

Political pixels —

Procreate CEO: “I really f—ing hate generative AI.”

Still of Procreate CEO James Cuda from a video posted to X.


On Sunday, Procreate announced that it will not incorporate generative AI into its popular iPad illustration app. The decision comes in response to an ongoing backlash from some parts of the art community, which has raised concerns about the ethical implications and potential consequences of AI use in creative industries.

“Generative AI is ripping the humanity out of things,” Procreate wrote on its website. “Built on a foundation of theft, the technology is steering us toward a barren future.”

In a video posted on X, Procreate CEO James Cuda laid out his company’s stance, saying, “We’re not going to be introducing any generative AI into our products. I don’t like what’s happening to the industry, and I don’t like what it’s doing to artists.”

Cuda’s sentiment echoes the fears of some digital artists who feel that AI image synthesis models, often trained on content without consent or compensation, threaten their livelihood and the authenticity of creative work. That’s not a universal sentiment among artists, but AI image synthesis is often a deeply divisive subject on social media, with some taking starkly polarized positions on the topic.

Procreate CEO James Cuda lays out his argument against generative AI in a video posted to X.

Cuda’s video plays on that polarization with clear messaging against generative AI. His statement reads as follows:

You’ve been asking us about AI. You know, I usually don’t like getting in front of the camera. I prefer that our products speak for themselves. I really fucking hate generative AI. I don’t like what’s happening in the industry and I don’t like what it’s doing to artists. We’re not going to be introducing any generative AI into our products. Our products are always designed and developed with the idea that a human will be creating something. You know, we don’t exactly know where this story’s gonna go or how it ends, but we believe that we’re on the right path supporting human creativity.

The debate over generative AI has intensified among some outspoken artists as more companies integrate these tools into their products. Dominant illustration software provider Adobe has tried to avoid ethical concerns by training its Firefly AI models on licensed or public domain content, but some artists have remained skeptical. Adobe Photoshop currently includes a “Generative Fill” feature powered by image synthesis, and the company is also experimenting with video synthesis models.

The backlash against image and video synthesis is not solely focused on creative app developers. Hardware manufacturer Wacom and game publisher Wizards of the Coast have faced criticism and issued apologies after using AI-generated content in their products. Toys “R” Us also faced a negative reaction after debuting an AI-generated commercial. Companies are still grappling with balancing the potential benefits of generative AI with the ethical concerns it raises.

Artists and critics react

A partial screenshot of Procreate's AI website captured on August 20, 2024.


So far, Procreate’s anti-AI announcement has been met with a largely positive reaction in replies to its social media post. In a widely liked comment, artist Freya Holmér wrote on X, “this is very appreciated, thank you.”

Some of the more outspoken opponents of image synthesis also replied favorably to Procreate’s move. Karla Ortiz, who is a plaintiff in a lawsuit against AI image-generator companies, replied to Procreate’s video on X, “Whatever you need at any time, know I’m here!! Artists support each other, and also support those who allow us to continue doing what we do! So thank you for all you all do and so excited to see what the team does next!”

Artist RJ Palmer, who stoked the first major wave of AI art backlash with a viral tweet in 2022, also replied to Cuda’s video statement, saying, “Now thats the way to send a message. Now if only you guys could get a full power competitor to [Photoshop] on desktop with plugin support. Until someone can build a real competitor to high level [Photoshop] use, I’m stuck with it.”

A few pro-AI users also replied to the X post, including AI-augmented artist Claire Silver, who uses generative AI as an accessibility tool. She wrote on X, “Most of my early work is made with a combination of AI and Procreate. 7 years ago, before text to image was really even a thing. I loved procreate because it used tech to boost accessibility. Like AI, it augmented trad skill to allow more people to create. No rules, only tools.”

Since AI image synthesis continues to be a highly charged subject among some artists, reaffirming support for human-centric creativity could be an effective differentiated marketing move for Procreate, which currently plays underdog to creativity app giant Adobe. While some may prefer to use AI tools, in an (ideally healthy) app ecosystem with personal choice in illustration apps, people can follow their conscience.

Procreate’s anti-AI stance is slightly risky because it might also polarize part of its user base—and if the company changes its mind about including generative AI in the future, it will have to walk back its pledge. But for now, Procreate is confident in its decision: “In this technological rush, this might make us an exception or seem at risk of being left behind,” Procreate wrote. “But we see this road less traveled as the more exciting and fruitful one for our community.”



Chinese social media users hilariously mock AI video fails

Life imitates AI imitating life —

TikTok and Bilibili users transform nonsensical AI glitches into real-world performance art.

Still from a Chinese social media video featuring two people imitating imperfect AI-generated video outputs.


It’s no secret that despite significant investment from companies like OpenAI and Runway, AI-generated videos still struggle to achieve convincing realism at times. Some of the most amusing fails end up on social media, which has led to a new response trend on Chinese social media platforms TikTok and Bilibili where users create videos that mock the imperfections of AI-generated content. The trend has since spread to X (formerly Twitter) in the US, where users have been sharing the humorous parodies.

In particular, the videos seem to parody image synthesis videos where subjects seamlessly morph into other people or objects in unexpected and physically impossible ways. Chinese social media users replicate these unusual visual non sequiturs without special effects by positioning their bodies in unusual ways as new and unexpected objects appear on-camera from out of frame.

This exaggerated mimicry has struck a chord with viewers on X, who find the parodies entertaining. User @theGioM shared one video, seen above. “This is high-level performance arts,” wrote one X user. “art is imitating life imitating ai, almost shedded a tear.” Another commented, “I feel like it still needs a motorcycle the turns into a speedboat and takes off into the sky. Other than that, excellent work.”

An example Chinese social media video featuring two people imitating imperfect AI-generated video outputs.

While these parodies poke fun at current limitations, tech companies are actively attempting to overcome them with more training data (examples analyzed by AI models that teach them how to create videos) and computational training time. OpenAI unveiled Sora in February, capable of creating realistic scenes if they closely match examples found in training data. Runway’s Gen-3 Alpha suffers a similar fate: It can create brief clips of convincing video within a narrow set of constraints. This means that generated videos of situations outside the dataset often end up hilariously weird.

An AI-generated video that features impossibly-morphing people and animals. Social media users are imitating this style.

It’s worth noting that actor Will Smith beat Chinese social media users to this trend in February by poking fun at a horrific 2023 viral AI-generated video that attempted to depict him eating spaghetti. That may also bring back memories of other amusing video synthesis failures, such as May 2023’s AI-generated beer commercial, created using Runway’s earlier Gen-2 model.

An example Chinese social media video featuring two people imitating imperfect AI-generated video outputs.

While imitating imperfect AI videos may seem strange to some, people regularly make money pretending to be NPCs (non-player characters—a term for computer-controlled video game characters) on TikTok.

For anyone alive during the 1980s, witnessing this fast-changing and often bizarre new media world can cause some cognitive whiplash, but the world is a weird place full of wonders beyond the imagination. “There are more things in Heaven and Earth, Horatio, than are dreamt of in your philosophy,” as Hamlet once famously said. “Including people pretending to be video game characters and flawed video synthesis outputs.”



Reddit considers search ads, paywalled content for the future

Q2 2024 earnings —

Current ad load is relatively “light,” COO says.


Reddit executives discussed plans on Tuesday for making more money from the platform, including showing ads in more places and possibly putting some content behind a paywall.

On Tuesday, Reddit shared its Q2 2024 earnings report (PDF). The company lost $10.1 million during the period, down from Q2 2023’s $41.1 million loss. Reddit has never been profitable, and during its earnings call yesterday, company heads discussed potential and slated plans for monetization.

As expected, selling ads continues to be a priority. Part of the reason Reddit was OK with most third-party Reddit apps closing was that the change was expected to drive people to Reddit’s native website and apps, where the company sells ads. In Q2, Reddit’s ad revenue grew 41 percent year over year (YoY) to $253.1 million, or 90 percent of total revenue ($281.2 million).

When asked how the platform would grow ad revenue, Reddit COO Jen Wong said it’s important that advertisers “find the outcomes they want at the volumes and price they want.” She also pointed to driving more value per ad, or the average cost that advertisers pay per 1,000 impressions. To do that, Wong pointed to putting ads in places on Reddit where there aren’t ads currently:

There are still many places on Reddit without ads today. So we’re more focused on designing ads for spaces where users are spending more time versus increasing ad load in existing spaces. So for example, 50 percent of screen views, they’re now on conversation pages—that’s an opportunity.

Wong said that in places where Reddit does show ads currently, the ad load is “light” compared to about half of its rivals.

One of the places where Redditors may see more ads is within comments, which Wong noted that Reddit is currently testing. This ad capability is only “experimental,” Wong emphasized, but Reddit sees ads in comments as a way to stand out to advertisers.

There’s also an opportunity to sell ad space within Reddit search results, according to Reddit CEO Steve Huffman, who said yesterday that “over the long term, there’s significant advertising potential there as well.” More immediately, though, Reddit is looking to improve its search capabilities and this year will test “new search result pages powered by AI to summarize and recommend content, helping users dive deeper into products, shows, games, and discover new communities on Reddit,” Huffman revealed yesterday. He said Reddit is using first- and third-party AI models to improve search aspects like speed and relevance.

The move comes as Reddit is currently blocking all search engines besides Google, OpenAI, and approved education/research instances from showing recent Reddit content in their results. Yesterday, Huffman reiterated his statement that Reddit is working with “big and small” search engines to strike deals like it already has with Google and OpenAI. But looking ahead, Reddit is focused on charging for content scraping and seems to be trying to capitalize on people’s interest in using Reddit as a filter for search results.

Paywalled content possible

The possibility of paywalls came up during the earnings call when an analyst asked Huffman about maintaining Reddit’s culture as it looks to “earn money now for people and creators on the platform.” Reddit has already launched a Contributor Program, where popular posts can make Reddit users money. It has discussed monetizing its developer platform, which is in public beta with “a few hundred active developers,” Huffman said yesterday. In response to the analyst’s question, Huffman said that based on his experience, adding new ways of using Reddit “expands” the platform but doesn’t “cannibalize existing Reddit.”

He continued:

I think the existing altruistic, free version of Reddit will continue to exist and grow and thrive just the way it has. But now we will unlock the door for new use cases, new types of subreddits that can be built that may have exclusive content or private areas—things of that nature.

Huffman’s comments suggest that paywalls could be added to new subreddits rather than existing ones. At this stage, though, it’s unclear how users may be charged to use Reddit in the future, if at all.

The idea of paywalling some content comes as various online entities are trying to diversify revenue beyond often volatile ad spending. Reddit has also tried elevating free aspects of the site, such as updates to Ask Me Anything (AMA), including new features like RSVPs, which were announced Tuesday.

Reddit has angered some long-time users with recent changes—including blocking search engines, forcing personalized ads, introducing an exclusionary fee for API access, and ousting some moderators during last year’s user protests—but Reddit saw its daily active unique user count increase by 51 percent YoY in Q2 to 91.2 million.

Advance Publications, which owns Ars Technica parent Condé Nast, is the largest shareholder in Reddit.



OpenAI will use Reddit posts to train ChatGPT under new deal

Data dealings —

Reddit has been eager to sell data from user posts.


Stuff posted on Reddit is getting incorporated into ChatGPT, Reddit and OpenAI announced on Thursday. The new partnership grants OpenAI access to Reddit’s Data API, giving the generative AI firm real-time access to Reddit posts.

Reddit content will be incorporated into ChatGPT “and new products,” Reddit’s blog post said. The social media firm claims the partnership will “enable OpenAI’s AI tools to better understand and showcase Reddit content, especially on recent topics.” OpenAI will also start advertising on Reddit.

The deal is similar to one that Reddit struck with Google in February that allows the tech giant to make “new ways to display Reddit content” and provide “more efficient ways to train models,” Reddit said at the time. Neither Reddit nor OpenAI disclosed the financial terms of their partnership, but Reddit’s partnership with Google was reportedly worth $60 million.

Under the OpenAI partnership, Reddit also gains access to OpenAI large language models (LLMs) to create features for Reddit and its volunteer moderators.

Reddit’s data licensing push

The news comes about a year after Reddit launched an API war by starting to charge for access to its data API. This resulted in many beloved third-party Reddit apps closing and a massive user protest. Reddit, which would soon become a public company and hadn’t turned a profit yet, said one of the reasons for the sudden change was to prevent AI firms from using Reddit content to train their LLMs for free.

Earlier this month, Reddit published a Public Content Policy stating: “Unfortunately, we see more and more commercial entities using unauthorized access or misusing authorized access to collect public data in bulk, including Reddit public content. Worse, these entities perceive they have no limitation on their usage of that data, and they do so with no regard for user rights or privacy, ignoring reasonable legal, safety, and user removal requests.”

In its blog post on Thursday, Reddit said that deals like OpenAI’s are part of an “open” Internet. It added that “part of being open means Reddit content needs to be accessible to those fostering human learning and researching ways to build community, belonging, and empowerment online.”

Reddit has been vocal about its interest in pursuing data licensing deals as a core part of its business. Its AI partnerships have sparked debate over the use of user-generated content to fuel AI models without compensating the users who created it, some of whom likely never considered that their social media posts would be used this way. OpenAI and Stack Overflow faced pushback earlier this month when integrating Stack Overflow content with ChatGPT. Some of Stack Overflow’s user community responded by sabotaging their own posts.

OpenAI also faces the challenge of working with Reddit data that, like much of the Internet, can be filled with inaccuracies and inappropriate content. Some of the biggest opponents of Reddit’s API rule changes were volunteer mods. Some have exited the platform since, and following the rule changes, Ars Technica spoke with long-time Redditors who were concerned about Reddit content quality moving forward.

Regardless, generative AI firms are keen to tap into Reddit’s access to real-time conversations from a variety of people discussing a nearly endless range of topics. And Reddit seems equally eager to license the data from its users’ posts.

Advance Publications, which owns Ars Technica parent Condé Nast, is the largest shareholder of Reddit.



Bumble apologizes for ads shaming women into sex


For the past decade, the dating app Bumble has claimed to be all about empowering women. But under a new CEO, Lidiane Jones, Bumble is now apologizing for a tone-deaf ad campaign that many users said seemed to channel incel ideology by telling women to stop denying sex.

“You know full well a vow of celibacy is not the answer,” one Bumble billboard seen in Los Angeles read. “Thou shalt not give up on dating and become a nun,” read another.

Bumble HQ

“We don’t have enough women on the app.”

“They’d rather be alone than deal with men.”

“Should we teach men to be better?”

“No, we should shame women so they come back to the app.”

“Yes! Let’s make them feel bad for choosing celibacy. Great idea!” pic.twitter.com/115zDdGKZo

— Arghavan Salles, MD, PhD (@arghavan_salles) May 14, 2024

Bumble intended these ads to bring “joy and humor,” the company said in an apology posted on Instagram after the backlash on social media began.

Some users threatened to delete their accounts, criticizing Bumble for ignoring religious or personal reasons for choosing celibacy. These reasons include asexuality or sensibly abstaining from sex amid diminishing access to abortion nationwide.

Others accused Bumble of more shameful motives. On X (formerly Twitter), a user called UjuAnya posted that “Bumble’s main business model is selling men access to women,” since market analysts have reported that 76 percent of Bumble users are male.

“Bumble won’t alienate their primary customers (men) telling them to quit being shit,” UjuAnya posted on X. “They’ll run ads like this to make their product (women) ‘better’ and more available on their app for men.”

That account quote-tweeted an even more popular post with nearly 3 million views suggesting that Bumble needs to “fuck off and stop trying to shame women into coming back to the apps” instead of running “ads targeted at men telling them to be normal.”

One TikTok user, ItsNeetie, declared, “the Bumble reckoning is finally here.”

Bumble did not respond to Ars’ request to comment on these criticisms or verify user statistics.

In its apology, Bumble took responsibility for not living up to its “values” of “passionately” standing up for women and marginalized communities and defending “their right to fully exercise personal choice.” Admitting the ads were a “mistake” that “unintentionally” frustrated the dating community, the dating app responded to some of the user feedback:

Some of the perspectives we heard were from those who shared that celibacy is the only answer when reproductive rights are continuously restricted; from others for whom celibacy is a choice, one that we respect; and from the asexual community, for whom celibacy can have a particular meaning and importance, which should not be diminished. We are also aware that for many, celibacy may be brought on by harm or trauma.

Bumble’s pulled ads were part of a larger marketing campaign that at first seemed to resonate with its users. Created by the company’s in-house creative studio, according to AdAge, Bumble’s campaign attracted a lot of eyeballs by deleting Bumble’s entire Instagram feed and posting “cryptic messages” showing tired women in Renaissance-era paintings that alluded to the app’s rebrand.

In a press release, chief marketing officer Selby Drummond said that Bumble “wanted to take a fun, bold approach in celebrating the first chapter of our app’s evolution and remind women that our platform has been solving for their needs from the start.”

The dating app is increasingly investing in ads, AdAge reported, tripling investments from $8 million in 2022 to $24 million in 2023. These ads are seemingly meant to help Bumble recover after posting “a $1.9 million net loss last year,” CNN reported, following a dismal drop in its share price by 86 percent since its initial public offering in February 2021.

Bumble’s new CEO Jones told NBC News that younger users are dating less and that Bumble’s plan was to listen to users to find new ways to grow.



Reddit, AI spam bots explore new ways to show ads in your feed


Reddit has made it clear that it’s an ad-first business. Today, it expanded on that practice with a new ad format that looks to sell things to Reddit users. Simultaneously, Reddit has marketers who are interested in pushing products to users through seemingly legitimate accounts.

In a blog post today, Reddit announced that its Dynamic Product Ads are entering public beta globally. The ad format uses “shopping signals,” aka discussions with people looking to try a product or brand, machine learning, and advertiser product catalogs in order to post relevant ads. Reddit shared an image in the blog post that shows ads, including with products and pricing, that seem to relate to a posted question. User responses to the Reddit post appear under the ad.

  • A somewhat blurry depiction of the new type of ads Reddit is testing.

  • A (still blurry) example of a more targeted approach to Reddit’s new ad format.

Reddit’s Dynamic Product Ads can automatically show users ads “based on the products they’ve previously engaged with on the advertiser’s site” and/or “based on what people engage with on Reddit or advertiser sites,” per the blog.

Reddit is an ad business

Reddit’s blog didn’t imply that Dynamic Product Ads mean users will see more ads than they do currently. However, today’s blog highlighted the newly public company’s focus on ad sales.

“With Dynamic Product Ads, brands can tap into the rich, high-intent product conversations that people come to Reddit for,” Reddit EVP of Business Marketing and Growth Jim Squires said in a statement.

The blog also noted that “Reddit’s communities are naturally commercial,” adding:

Reddit is where people come to make shopping decisions, and we’re focused on bringing brands into these interactions in a way that adds value for people and drives growth for businesses.

The stance has been increasingly clear over the past year, as Reddit became rather vocal about the fact that it’s never been profitable. In June, the company started charging for API access, resulting in numerous valued third-party Reddit apps closing and messy user protests that left a bad taste in countless long-time users’ and moderators’ mouths. While Reddit initially announced the change as a way to prevent large language models from using its data for free training, it was also seen as a way to drive users to Reddit’s website and mobile app, where it can serve users ads.

Per Reddit’s February SEC filing (PDF), ads made up 98 percent of Reddit’s revenues in 2023 and 2022. That filing included a note from CEO Steve Huffman, saying: “Advertising is our first business” and that Reddit’s ad business is “still in the early phases of growing.”

In September, the company started preventing users from opting out of personalized ads. In June, Reddit introduced a new tool to advertisers that uses natural language processing to look through Reddit user comments for keywords that signal potential interest for a brand.

Reddit’s blog post today hinted at some future evolutions focused on showing Reddit users ads, including “tools and features such as new shopping ads formats like collection ads that enhance the shopper experience while driving performance” and “merchant platform integrations that welcome smaller merchants.”



Elon Musk’s X to stop allowing users to hide their blue checks

Nothing to hide —

X previously promised to “evolve” the “hide your checkmark” feature.


X will soon stop allowing users to hide their blue checkmarks, and some users are not happy.

Previously, a blue tick on Twitter was a mark of a notable account, providing some assurance to followers of the account’s authenticity. But then Elon Musk decided to start charging for the blue tick instead, and mayhem ensued as a wave of imposter accounts began jokingly posing as brands.

After that, paying for a blue checkmark began to attract derision, as non-paying users passed around a meme under blue-checked posts, saying, “This MF paid for Twitter.” To help spare paid subscribers this embarrassment, X began allowing users to hide their blue check last August, turning “hide your checkmark” into a feature of paid subscriptions.

However, earlier this month, X decided that hiding a checkmark would no longer be allowed, deleting the feature from its webpage detailing what comes with X Premium. An archive of X’s page shows that the language about how to hide your checkmark was removed after April 6, with X no longer promising to “continue to evolve this feature to make it better for you” but instead abruptly ending the perk.

X’s decision to stop hiding checkmarks came after the platform began gifting blue checkmarks to popular accounts. Back in April 2023, then-Twitter had awarded blue checks to celebrity accounts with more than a million followers. Last week, now-X doled out even more blue checks to accounts with over 2,500 paid verified followers. Now, accounts with more than 2,500 paid verified followers get Premium features for free, and accounts with more than 5,000 paid verified followers get Premium+.

You might think that X giving out freebies would be well-received, but Business Insider tech reporter Katie Notopoulos, one of many accounts suddenly gifted the blue check, summed up how many X users were feeling about the gifted tick by asking, “does it seem uncool?”

X doesn’t seem to care anymore if blue checks are seen as uncool, though. Anyone who doesn’t want the complimentary check can refuse it, and any paid subscriber upset about losing the ability to hide their checkmark can always just stop paying for Premium features.

According to X, anyone deciding to cancel their subscription over the loss of the “hide your checkmark” feature can expect the check to remain on their account “until the end of the subscription term you paid for, unless your account is suspended or the blue checkmark is otherwise removed by X for any reason.”

X could also suddenly remove a checkmark without refunding users in extreme circumstances.

“X reserves the right without notice to remove your blue checkmark at any time in its sole discretion without offering you a refund, including if you violate our Terms of Service or if your account is suspended,” X’s subscription page warns.

X Daily, an X news account, announced that the change was coming this week, gathering “meltdown reactions” from users who are upset that their blue checks will soon no longer be hidden.

“Let me hide my checkmark, I’m not a fucking bot,” a user called @4gntt posted, the complaint seemingly alluding to Musk’s claim that paid subscriptions are the only way to stop bots from overrunning X.

“Oh no,” another user, @jeremyphoward, posted. “I signed up to X Premium since it’s required for them to pay me… but now they [are] making the cringemark non-optional 🙁 Not sure if it’s worth it.”

It’s currently unclear when the “hide your checkmark” feature will stop working. Neither of those users criticizing X currently display a blue tick on their profile, suggesting that their checks are still hidden, but it’s also possible that some users immediately stopped paying in response to the policy change.



Elon Musk: AI will be smarter than any human around the end of next year

smarter than the average bear —

While Musk says superintelligence is coming soon, one critic says prediction is “batsh*t crazy.”

Elon Musk, owner of Tesla and the X (formerly Twitter) platform, on January 22, 2024. (Photo by Beata Zawrzel/NurPhoto)

On Monday, Tesla CEO Elon Musk predicted the imminent rise in AI superintelligence during a live interview streamed on the social media platform X. “My guess is we’ll have AI smarter than any one human probably around the end of next year,” Musk said in his conversation with hedge fund manager Nicolai Tangen.

Just prior to that, Tangen had asked Musk, “What’s your take on where we are in the AI race just now?” Musk told Tangen that AI “is the fastest advancing technology I’ve seen of any kind, and I’ve seen a lot of technology.” He described computers dedicated to AI increasing in capability by “a factor of 10 every year, if not every six to nine months.”

Musk made the prediction with an asterisk, saying that shortages of AI chips and high AI power demands could limit AI’s capability until those issues are resolved. “Last year, it was chip-constrained,” Musk told Tangen. “People could not get enough Nvidia chips. This year, it’s transitioning to a voltage transformer supply. In a year or two, it’s just electricity supply.”

But not everyone is convinced that Musk’s crystal ball is free of cracks. Grady Booch, a frequent critic of AI hype on social media who is perhaps best known for his work in software architecture, told Ars in an interview, “Keep in mind that Mr. Musk has a profoundly bad record at predicting anything associated with AI; back in 2016, he promised his cars would ship with FSD safety level 5, and here we are, closing in on a decade later, still waiting.”

Creating artificial intelligence at least as smart as a human (frequently called “AGI” for artificial general intelligence) is often seen as inevitable among AI proponents, but there’s no broad consensus on exactly when that milestone will be reached—or on the exact definition of AGI, for that matter.

“If you define AGI as smarter than the smartest human, I think it’s probably next year, within two years,” Musk added in the interview with Tangen while discussing AGI timelines.

Even with those uncertainties about AGI, companies keep trying. ChatGPT creator OpenAI, which launched with Musk as a co-founder in 2015, lists developing AGI as its main goal. Musk has not been directly associated with OpenAI for years (unless you count a recent lawsuit against the company), but last year, he took aim at the business of large language models by forming a new company called xAI. Its main product, Grok, functions similarly to ChatGPT and is integrated into the X social media platform.

Booch gives credit to Musk’s business successes but casts doubt on his forecasting ability. “Albeit a brilliant if not rapacious businessman, Mr. Musk vastly overestimates both the history as well as the present of AI while simultaneously diminishing the exquisite uniqueness of human intelligence,” says Booch. “So in short, his prediction is—to put it in scientific terms—batshit crazy.”

So when will we get AI that’s smarter than a human? Booch says there’s no real way to know at the moment. “I reject the framing of any question that asks when AI will surpass humans in intelligence because it is a question filled with ambiguous terms and considerable emotional and historic baggage,” he says. “We are a long, long way from understanding the design that would lead us there.”

We also asked Hugging Face AI researcher Dr. Margaret Mitchell to weigh in on Musk’s prediction. “Intelligence … is not a single value where you can make these direct comparisons and have them mean something,” she told us in an interview. “There will likely never be agreement on comparisons between human and machine intelligence.”

But even with that uncertainty, she feels there is one aspect of AI she can more reliably predict: “I do agree that neural network models will reach a point where men in positions of power and influence, particularly ones with investments in AI, will declare that AI is smarter than humans. By end of next year, sure. That doesn’t sound far off base to me.”
