Social Media


OpenAI will use Reddit posts to train ChatGPT under new deal

Data dealings

Reddit has been eager to sell data from user posts.


Stuff posted on Reddit is getting incorporated into ChatGPT, Reddit and OpenAI announced on Thursday. The new partnership grants OpenAI access to Reddit’s Data API, giving the generative AI firm real-time access to Reddit posts.
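
For context on what “Data API” access looks like in practice, here is a minimal, hypothetical Python sketch that polls Reddit’s public JSON listing endpoint for a subreddit’s newest posts. It only illustrates the general shape of programmatic, near-real-time access to Reddit content; the actual endpoints, rate limits, and licensing terms of the OpenAI partnership have not been disclosed.

```python
# Minimal sketch of pulling recent posts from Reddit's public JSON listing
# endpoint. This is NOT the partner Data API integration described in the deal,
# whose endpoints and terms have not been published; it only illustrates the
# general shape of programmatic access to Reddit posts.
import requests

def fetch_new_posts(subreddit: str, limit: int = 10) -> list[tuple[str, float]]:
    """Return (title, created_utc) pairs for the newest posts in a subreddit."""
    resp = requests.get(
        f"https://www.reddit.com/r/{subreddit}/new.json",
        params={"limit": limit},
        headers={"User-Agent": "data-api-demo/0.1"},  # Reddit expects a descriptive UA
        timeout=10,
    )
    resp.raise_for_status()
    children = resp.json()["data"]["children"]
    return [(c["data"]["title"], c["data"]["created_utc"]) for c in children]

if __name__ == "__main__":
    for title, created_utc in fetch_new_posts("technology", limit=5):
        print(created_utc, title)
```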

Reddit content will be incorporated into ChatGPT “and new products,” Reddit’s blog post said. The social media firm claims the partnership will “enable OpenAI’s AI tools to better understand and showcase Reddit content, especially on recent topics.” OpenAI will also start advertising on Reddit.

The deal is similar to one that Reddit struck with Google in February that allows the tech giant to make “new ways to display Reddit content” and provide “more efficient ways to train models,” Reddit said at the time. Neither Reddit nor OpenAI disclosed the financial terms of their partnership, but Reddit’s partnership with Google was reportedly worth $60 million.

Under the partnership, Reddit also gains access to OpenAI’s large language models (LLMs) to build features for Redditors, including its volunteer moderators.

Reddit’s data licensing push

The news comes about a year after Reddit launched an API war by starting to charge for access to its data API. This resulted in many beloved third-party Reddit apps closing and a massive user protest. Reddit, which would soon become a public company and hadn’t turned a profit yet, said one of the reasons for the sudden change was to prevent AI firms from using Reddit content to train their LLMs for free.

Earlier this month, Reddit published a Public Content Policy stating: “Unfortunately, we see more and more commercial entities using unauthorized access or misusing authorized access to collect public data in bulk, including Reddit public content. Worse, these entities perceive they have no limitation on their usage of that data, and they do so with no regard for user rights or privacy, ignoring reasonable legal, safety, and user removal requests.”

In its blog post on Thursday, Reddit said that deals like OpenAI’s are part of an “open” Internet. It added that “part of being open means Reddit content needs to be accessible to those fostering human learning and researching ways to build community, belonging, and empowerment online.”

Reddit has been vocal about its interest in pursuing data licensing deals as a core part of its business. Its AI partnerships have sparked debate about user-generated content being used to fuel AI models without users being compensated, and about users who may never have considered that their social media posts would be used this way. OpenAI and Stack Overflow faced pushback earlier this month when integrating Stack Overflow content with ChatGPT; some of Stack Overflow’s user community responded by sabotaging their own posts.

OpenAI also faces the challenge of working with Reddit data that, like much of the Internet, can be filled with inaccuracies and inappropriate content. Some of the biggest opponents of Reddit’s API rule changes were volunteer mods, some of whom have since left the platform. Following the rule changes, Ars Technica spoke with long-time Redditors who were concerned about the quality of Reddit content going forward.

Regardless, generative AI firms are keen to tap into Reddit’s access to real-time conversations from a variety of people discussing a nearly endless range of topics. And Reddit seems equally eager to license the data from its users’ posts.

Advance Publications, which owns Ars Technica parent Condé Nast, is the largest shareholder of Reddit.



Bumble apologizes for ads shaming women into sex

For the past decade, the dating app Bumble has claimed to be all about empowering women. But under a new CEO, Lidiane Jones, Bumble is now apologizing for a tone-deaf ad campaign that many users said seemed to channel incel ideology by telling women to stop denying sex.

“You know full well a vow of celibacy is not the answer,” one Bumble billboard seen in Los Angeles read. “Thou shalt not give up on dating and become a nun,” read another.

Bumble HQ

“We don’t have enough women on the app.”

“They’d rather be alone than deal with men.”

“Should we teach men to be better?”

“No, we should shame women so they come back to the app.”

“Yes! Let’s make them feel bad for choosing celibacy. Great idea!” pic.twitter.com/115zDdGKZo

— Arghavan Salles, MD, PhD (@arghavan_salles) May 14, 2024

Bumble intended these ads to bring “joy and humor,” the company said in an apology posted on Instagram after the backlash on social media began.

Some users threatened to delete their accounts, criticizing Bumble for ignoring religious or personal reasons for choosing celibacy. These reasons include being asexual or choosing to abstain from sex amid diminishing access to abortion nationwide.

Others accused Bumble of more shameful motives. On X (formerly Twitter), a user called UjuAnya posted that “Bumble’s main business model is selling men access to women,” since market analysts have reported that 76 percent of Bumble users are male.

“Bumble won’t alienate their primary customers (men) telling them to quit being shit,” UjuAnya posted on X. “They’ll run ads like this to make their product (women) ‘better’ and more available on their app for men.”

That account quote-tweeted an even more popular post with nearly 3 million views suggesting that Bumble needs to “fuck off and stop trying to shame women into coming back to the apps” instead of running “ads targeted at men telling them to be normal.”

One TikTok user, ItsNeetie, declared, “the Bumble reckoning is finally here.”

Bumble did not respond to Ars’ request for comment on these criticisms or to verify the user statistics.

In its apology, Bumble took responsibility for not living up to its “values” of “passionately” standing up for women and marginalized communities and defending “their right to fully exercise personal choice.” Admitting the ads were a “mistake” that “unintentionally” frustrated the dating community, the dating app responded to some of the user feedback:

Some of the perspectives we heard were from those who shared that celibacy is the only answer when reproductive rights are continuously restricted; from others for whom celibacy is a choice, one that we respect; and from the asexual community, for whom celibacy can have a particular meaning and importance, which should not be diminished. We are also aware that for many, celibacy may be brought on by harm or trauma.

Bumble’s pulled ads were part of a larger marketing campaign that at first seemed to resonate with its users. Created by the company’s in-house creative studio, according to AdAge, Bumble’s campaign attracted a lot of eyeballs by deleting Bumble’s entire Instagram feed and posting “cryptic messages” showing tired women in Renaissance-era paintings that alluded to the app’s rebrand.

In a press release, chief marketing officer Selby Drummond said that Bumble “wanted to take a fun, bold approach in celebrating the first chapter of our app’s evolution and remind women that our platform has been solving for their needs from the start.”

The dating app is increasingly investing in ads, AdAge reported, tripling its spending from $8 million in 2022 to $24 million in 2023. These ads are seemingly meant to help Bumble recover after posting “a $1.9 million net loss last year,” CNN reported, and after a dismal 86 percent drop in its share price since its initial public offering in February 2021.

Bumble’s new CEO Jones told NBC News that younger users are dating less and that Bumble’s plan was to listen to users to find new ways to grow.



Reddit, AI spam bots explore new ways to show ads in your feed


Reddit has made it clear that it’s an ad-first business. Today, it expanded on that practice with a new ad format that aims to sell things to Reddit users. At the same time, marketers are interested in pushing products to Reddit users through seemingly legitimate accounts.

In a blog post today, Reddit announced that its Dynamic Product Ads are entering public beta globally. The ad format uses “shopping signals” (discussions from people looking to try a product or brand), machine learning, and advertiser product catalogs to post relevant ads. Reddit shared an image in the blog post that shows ads, including products and pricing, that seem to relate to a posted question. User responses to the Reddit post appear under the ad.


Reddit’s Dynamic Product Ads can automatically show users ads “based on the products they’ve previously engaged with on the advertiser’s site” and/or “based on what people engage with on Reddit or advertiser sites,” per the blog.
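
Reddit hasn’t published how Dynamic Product Ads actually pick items, so the following is purely a hypothetical sketch: it ranks an advertiser’s catalog against a user’s recent engagement signals by simple keyword overlap, standing in for whatever machine-learning models Reddit really uses. All product names and signals here are invented.

```python
# Hypothetical sketch only: Reddit has not disclosed how Dynamic Product Ads
# select items. This ranks an advertiser's catalog against engagement signals
# by keyword overlap; the real system reportedly uses machine learning.
from dataclasses import dataclass

@dataclass(frozen=True)
class Product:
    name: str
    price: float
    keywords: frozenset

CATALOG = [
    Product("Trail Runner 3 shoes", 129.99, frozenset({"running", "trail", "shoes"})),
    Product("Carbon road bike", 1899.00, frozenset({"cycling", "road", "bike"})),
]

def rank_products(signals: set, catalog: list) -> list:
    """Order products by how many engagement keywords they share with the user."""
    scored = [(len(signals & p.keywords), p) for p in catalog]
    return [p for score, p in sorted(scored, key=lambda s: s[0], reverse=True) if score > 0]

# Keywords drawn from posts and ads a user engaged with (invented example).
user_signals = {"trail", "running", "beginner"}
for product in rank_products(user_signals, CATALOG):
    print(f"{product.name} - ${product.price}")
```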

Reddit is an ad business

Reddit’s blog didn’t imply that Dynamic Product Ads mean users will see more ads than they currently do. However, today’s blog highlighted the newly public company’s focus on ad sales.

“With Dynamic Product Ads, brands can tap into the rich, high-intent product conversations that people come to Reddit for,” Reddit EVP of Business Marketing and Growth Jim Squires said in a statement.

The blog also noted that “Reddit’s communities are naturally commercial,” adding:

Reddit is where people come to make shopping decisions, and we’re focused on bringing brands into these interactions in a way that adds value for people and drives growth for businesses.

The stance has been increasingly clear over the past year, as Reddit became rather vocal about the fact that it’s never been profitable. In June, the company started charging for API access, resulting in numerous valued third-party Reddit apps closing and messy user protests that left a bad taste in countless long-time users’ and moderators’ mouths. While Reddit initially announced the change as a way to prevent large language models from using its data for free training, it was also seen as a way to drive users to Reddit’s website and mobile app, where it can serve users ads.

Per Reddit’s February SEC filing (PDF), ads made up 98 percent of Reddit’s revenues in 2023 and 2022. That filing included a note from CEO Steve Huffman, saying: “Advertising is our first business” and that Reddit’s ad business is “still in the early phases of growing.”

In September, the company started preventing users from opting out of personalized ads. In June, Reddit introduced a new tool for advertisers that uses natural language processing to scan Reddit user comments for keywords that signal potential interest in a brand.
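
Reddit hasn’t detailed how that advertiser tool works under the hood. As a rough, hypothetical illustration of keyword-based intent detection, a simple pass over comments might look like the sketch below; the phrases are invented and the real pipeline is presumably far more sophisticated.

```python
# Toy illustration of keyword-based purchase-intent detection in comments.
# Reddit's actual advertiser tool and its NLP pipeline are not public;
# the phrases below are invented for the example.
INTENT_PHRASES = ("any recommendations", "worth it", "alternatives to", "best budget")

def flags_purchase_interest(comment: str) -> bool:
    """Return True if a comment contains a phrase suggesting shopping intent."""
    text = comment.lower()
    return any(phrase in text for phrase in INTENT_PHRASES)

comments = [
    "Any recommendations for a mechanical keyboard under $100?",
    "I just lurk here for the memes.",
]
print([c for c in comments if flags_purchase_interest(c)])
```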

Reddit’s blog post today hinted at some future evolutions focused on showing Reddit users ads, including “tools and features such as new shopping ads formats like collection ads that enhance the shopper experience while driving performance” and “merchant platform integrations that welcome smaller merchants.”



Elon Musk’s X to stop allowing users to hide their blue checks

Nothing to hide

X previously promised to “evolve” the “hide your checkmark” feature.


X will soon stop allowing users to hide their blue checkmarks, and some users are not happy.

Previously, a blue tick on Twitter was a mark of a notable account, providing some assurance to followers of the account’s authenticity. But then Elon Musk decided to start charging for the blue tick instead, and mayhem ensued as a wave of imposter accounts began jokingly posing as brands.

After that, paying for a blue checkmark began to attract derision, as non-paying users passed around a meme under blue-checked posts, saying, “This MF paid for Twitter.” To help spare paid subscribers this embarrassment, X began allowing users to hide their blue check last August, turning “hide your checkmark” into a feature of paid subscriptions.

However, earlier this month, X decided that hiding a checkmark would no longer be allowed, deleting the feature from its webpage detailing what comes with X Premium. An archive of X’s page shows that the language about how to hide your checkmark was removed after April 6, with X no longer promising to “continue to evolve this feature to make it better for you” but instead abruptly ending the perk.

X’s decision to stop hiding checkmarks came after the platform began gifting blue checkmarks to popular accounts. Back in April 2023, then-Twitter had awarded blue checks to celebrity accounts with more than a million followers. Last week, now-X doled out even more blue checks to accounts with over 2,500 paid verified followers. Now, accounts with more than 2,500 paid verified followers get Premium features for free, and accounts with more than 5,000 paid verified followers get Premium+.

You might think that X giving out freebies would be well-received, but Business Insider tech reporter Katie Notopoulos, one of many accounts suddenly gifted the blue check, summed up how many X users were feeling about the gifted tick by asking, “does it seem uncool?”

X doesn’t seem to care anymore if blue checks are seen as uncool, though. Anyone who doesn’t want the complimentary check can refuse it, and any paid subscriber upset about losing the ability to hide their checkmark can always just stop paying for Premium features.

According to X, anyone deciding to cancel their subscription over the loss of the “hide your checkmark” feature can expect the check to remain on their account “until the end of the subscription term you paid for, unless your account is suspended or the blue checkmark is otherwise removed by X for any reason.”

X could also suddenly remove a checkmark without refunding users in extreme circumstances.

“X reserves the right without notice to remove your blue checkmark at any time in its sole discretion without offering you a refund, including if you violate our Terms of Service or if your account is suspended,” X’s subscription page warns.

X Daily, an X news account, announced that the change was coming this week, gathering “meltdown reactions” from users who are upset that their blue checks will soon no longer be hidden.

“Let me hide my checkmark, I’m not a fucking bot,” a user called @4gntt posted, the complaint seemingly alluding to Musk’s claim that paid subscriptions are the only way to stop bots from overrunning X.

“Oh no,” another user, @jeremyphoward, posted. “I signed up to X Premium since it’s required for them to pay me… but now they [are] making the cringemark non-optional 🙁 Not sure if it’s worth it.”

It’s currently unclear when the “hide your checkmark” feature will stop working. Neither of those users criticizing X currently display a blue tick on their profile, suggesting that their checks are still hidden, but it’s also possible that some users immediately stopped paying in response to the policy change.



Elon Musk: AI will be smarter than any human around the end of next year

smarter than the average bear

While Musk says superintelligence is coming soon, one critic says the prediction is “batsh*t crazy.”


Elon Musk, owner of Tesla and the X (formerly Twitter) platform, on January 22, 2024.

On Monday, Tesla CEO Elon Musk predicted the imminent rise of AI superintelligence during a live interview streamed on the social media platform X. “My guess is we’ll have AI smarter than any one human probably around the end of next year,” Musk said in his conversation with hedge fund manager Nicolai Tangen.

Just prior to that, Tangen had asked Musk, “What’s your take on where we are in the AI race just now?” Musk told Tangen that AI “is the fastest advancing technology I’ve seen of any kind, and I’ve seen a lot of technology.” He described computers dedicated to AI increasing in capability by “a factor of 10 every year, if not every six to nine months.”
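
To put those claimed rates in perspective, compounding a tenfold gain per period over two years yields very different totals depending on how long the period is. The short snippet below just does that arithmetic on Musk’s stated figures, which are his claims rather than independent measurements.

```python
# Compounding Musk's claimed AI-compute growth rates ("a factor of 10 every
# year, if not every six to nine months") over a two-year span. The rates are
# his claims, not verified measurements.
def total_growth(months: float, months_per_10x: float) -> float:
    """Return the cumulative growth factor over `months` at 10x per period."""
    return 10 ** (months / months_per_10x)

for period in (12, 9, 6):  # months per tenfold increase, per the quote
    print(f"10x every {period} months -> ~{total_growth(24, period):,.0f}x over two years")
```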

Musk made the prediction with an asterisk, saying that shortages of AI chips and high AI power demands could limit AI’s capability until those issues are resolved. “Last year, it was chip-constrained,” Musk told Tangen. “People could not get enough Nvidia chips. This year, it’s transitioning to a voltage transformer supply. In a year or two, it’s just electricity supply.”

But not everyone is convinced that Musk’s crystal ball is free of cracks. Grady Booch, a frequent critic of AI hype on social media who is perhaps best known for his work in software architecture, told Ars in an interview, “Keep in mind that Mr. Musk has a profoundly bad record at predicting anything associated with AI; back in 2016, he promised his cars would ship with FSD safety level 5, and here we are, closing in on a decade later, still waiting.”

Creating artificial intelligence at least as smart as a human (frequently called “AGI” for artificial general intelligence) is often seen as inevitable among AI proponents, but there’s no broad consensus on exactly when that milestone will be reached—or on the exact definition of AGI, for that matter.

“If you define AGI as smarter than the smartest human, I think it’s probably next year, within two years,” Musk added in the interview with Tangen while discussing AGI timelines.

Even with uncertainties about AGI, that hasn’t kept companies from trying. ChatGPT creator OpenAI, which launched with Musk as a co-founder in 2015, lists developing AGI as its main goal. Musk has not been directly associated with OpenAI for years (unless you count a recent lawsuit against the company), but last year, he took aim at the business of large language models by forming a new company called xAI. Its main product, Grok, functions similarly to ChatGPT and is integrated into the X social media platform.

Booch gives credit to Musk’s business successes but casts doubt on his forecasting ability. “Albeit a brilliant if not rapacious businessman, Mr. Musk vastly overestimates both the history as well as the present of AI while simultaneously diminishing the exquisite uniqueness of human intelligence,” says Booch. “So in short, his prediction is—to put it in scientific terms—batshit crazy.”

So when will we get AI that’s smarter than a human? Booch says there’s no real way to know at the moment. “I reject the framing of any question that asks when AI will surpass humans in intelligence because it is a question filled with ambiguous terms and considerable emotional and historic baggage,” he says. “We are a long, long way from understanding the design that would lead us there.”

We also asked Hugging Face AI researcher Dr. Margaret Mitchell to weigh in on Musk’s prediction. “Intelligence … is not a single value where you can make these direct comparisons and have them mean something,” she told us in an interview. “There will likely never be agreement on comparisons between human and machine intelligence.”

But even with that uncertainty, she feels there is one aspect of AI she can more reliably predict: “I do agree that neural network models will reach a point where men in positions of power and influence, particularly ones with investments in AI, will declare that AI is smarter than humans. By end of next year, sure. That doesn’t sound far off base to me.”



Discord starts down the dangerous road of ads this week

Sponsored Quests

Discord’s first real foray into ads seems minimally intrusive.


Discord had long been strongly opposed to ads, but starting this week, it’s giving video game makers the ability to advertise to its users. The introduction of so-called Sponsored Quests marks a notable change from the startup’s previous business model, but, at least for now, it seems much less intrusive than the ads shoved into other social media platforms, especially since Discord users can choose not to engage with them.

Discord first announced Sponsored Quests on March 7, with Peter Sellis, Discord’s SVP of product, writing in a blog post that users would start seeing them in the “coming weeks.” Sponsored Quests offer PC gamers in-game rewards for getting friends to watch a stream of them playing through Discord. Discord senior product communications manager Swaleha Carlson confirmed to Ars Technica that Sponsored Quests launch this week.

Discord shared this image in March as an example of the new type of ads.


The goal is for video games to get exposure to more gamers, serving as a form of marketing. On Saturday, The Wall Street Journal (WSJ) reported that it viewed a slide from a slideshow Discord shows to game developers regarding the ads that reads: “We’ll get you in front of players. And those players will get you into their friend groups.”

Sellis told WSJ that Discord will target ads depending on users’ age, geographic location data, and gameplay. The ads will live on the bottom-left of the screen, but users can opt out of personalized promotions for Quests that are based on activity or data shared with Discord, Carlson told Ars Technica.

“Users may still see Quests, however, if they navigate to their Gift Inventory and/or through contextual entry points like a user’s friends’ activity. They’ll also have the option to hide an in-app promotion for a specific Quest or game they’re not interested in,” she said.


Discord already tested the ads in May with Lucasfilm Games and Epic Games. Discord users were able to receive Star Wars-themed gear in Fortnite for getting a friend to watch them play Fortnite on PC for at least 15 minutes.

Jason Citron, Discord co-founder and CEO, told Bloomberg in March that the company hopes that one day “every game will offer Quests on Discord.”

Discord used to be anti-ads

It may be a nuisance for users to have to disable personalized promotions for Sponsored Quests they never asked for, but it should bring long-term users at least some comfort that their data purportedly doesn’t have to contribute to the marketing. However, it’s unclear whether Discord may one day change this. The fact that the platform is implementing ads at all is somewhat surprising: Discord named its avoidance of advertising as one of its key differentiators from traditional social media platforms as recently as late January.

In March 2021, Citron told WSJ that Discord had eschewed ads until that point because ads would be intrusive, considering Discord’s purpose of instant back-and-forth communication and people’s general distaste for viewing ads and having their data shared with other companies.

“We really believe we can build products that make Discord more fun and that people will pay for them. It keeps our incentives aligned,” Citron told WSJ at the time.

That same year, Citron, in response to a question about why being ad-free is important to Discord, told NPR: “We believe that people’s data is their data and that people should feel comfortable and safe to have conversations and that their data is not going to be used against them in any way that is improper.”

Sponsored Quests differ from other types of ads that would more obviously disrupt Discord users’ experiences, such as pop-up ads or ads shown alongside chat windows.

A tightrope to walk

Beyond Sponsored Quests, Discord, which launched in 2015, previously announced that it would start selling sponsored profile effects and avatar decorations in the Discord Shop. In March, Discord’s Sellis said this would arrive in the “coming weeks.” Discord is also trying to hire more than 12 people to work in ad sales, WSJ said Saturday, citing anonymous “people familiar with [Discord’s] plans.”

Discord’s Carlson declined to comment to Ars on whether or not Discord plans to incorporate other types of ads into Discord. She noted that Sponsored Quests “are currently in the pilot phase” and that the company will “continue to iterate based on what we learn.”

In 2021, Discord enjoyed a nearly threefold revenue boost that it attributed to subscription sales for Nitro, which adds features like HD video streaming and uploads of up to 500MB. In March, Citron told Bloomberg that Discord has more than 200 million monthly active users and that the company will “probably” go public eventually.

The publication, citing unnamed “people with knowledge of the matter,” also reported that Discord makes over $600 million in annualized revenue. The startup has raised over $1 billion in funding and is reported to have over $700 million in cash. However, the company reportedly isn’t profitable. It also laid off 17 percent of staffers, or 170 workers, in January.

Meanwhile, ads are the top revenue generator for many other social media platforms, such as Reddit, which recently went public.

While Discord’s first real ads endeavor seems like it will have minimal impact on users who aren’t interested in them, it brings the company down a tricky road that it hasn’t previously navigated. A key priority should be ensuring that any form of ads doesn’t disrupt the primary reasons people like using Discord. As it stands, Sponsored Quests might already put off some users.

“I don’t want my friendships to be monetized or productized in any way,” Zack Mohsen, reportedly a long-time user and a computer hardware engineer based in Seattle, told WSJ.

Updated April 1, 2024 at 5:32 p.m. ET to add information and comment from Discord.



Facebook let Netflix see user DMs, quit streaming to keep Netflix happy: Lawsuit


A promotional image for Sorry for Your Loss, which was a Facebook Watch original scripted series.

Last April, Meta revealed that it would no longer support original shows, like Jada Pinkett Smith’s talk show Red Table Talk, on Facebook Watch. Meta’s streaming business, once viewed as competition for the likes of YouTube and Netflix, is effectively dead now; Facebook doesn’t produce original series, and Facebook Watch is no longer available as a video-streaming app.

The streaming business’s demise seemed tied to cost cuts at Meta that have also included layoffs. However, recently unsealed court documents in an antitrust suit against Meta [PDF] claim that Meta squashed its streaming dreams in order to appease one of its biggest ad customers: Netflix.

Facebook allegedly gave Netflix creepy privileges

As spotted by Gizmodo, a letter was filed on April 14 in relation to a class-action antitrust suit brought by Meta customers, accusing Meta of anti-competitive practices that harm social media competition and consumers. The letter, made public Saturday, asks a court to have Reed Hastings, Netflix’s founder and former CEO, respond to a subpoena for documents that plaintiffs claim are relevant to the case. The original complaint filed in December 2020 [PDF] doesn’t mention Netflix beyond stating that Facebook “secretly signed Whitelist and Data sharing agreements” with Netflix, along with “dozens” of other third-party app developers. The case is still ongoing.

The letter alleges that Netflix’s relationship with Facebook was remarkably strong due to the former’s ad spend with the latter and that Hastings directed “negotiations to end competition in streaming video” from Facebook.

One of the first questions that may come to mind is why a company like Facebook would allow Netflix to influence such a major business decision. The litigation claims the companies formed a lucrative business relationship that included Facebook allegedly giving Netflix access to Facebook users’ private messages:

By 2013, Netflix had begun entering into a series of “Facebook Extended API” agreements, including a so-called “Inbox API” agreement that allowed Netflix programmatic access to Facebook’s users’ private message inboxes, in exchange for which Netflix would “provide to FB a written report every two weeks that shows daily counts of recommendation sends and recipient clicks by interface, initiation surface, and/or implementation variant (e.g., Facebook vs. non-Facebook recommendation recipients).” … In August 2013, Facebook provided Netflix with access to its so-called “Titan API,” a private API that allowed a whitelisted partner to access, among other things, Facebook users’ “messaging app and non-app friends.”

Meta said it rolled out end-to-end encryption “for all personal chats and calls on Messenger and Facebook” in December. And in 2018, Facebook told Vox that it doesn’t use private messages for ad targeting. But a few months later, The New York Times, citing “hundreds of pages of Facebook documents,” reported that Facebook “gave Netflix and Spotify the ability to read Facebook users’ private messages.”

Meta didn’t respond to Ars Technica’s request for comment. The company told Gizmodo that it has standard agreements with Netflix currently but didn’t answer the publication’s specific questions.



Public officials can block haters—but only sometimes, SCOTUS rules


There are some circumstances where government officials are allowed to block people from commenting on their social media pages, the Supreme Court ruled Friday.

According to the Supreme Court, the key question is whether officials are speaking as private individuals or on behalf of the state when posting online. Issuing two opinions, the Supreme Court declined to set a clear standard for when personal social media use constitutes state speech, leaving each unique case to be decided by lower courts.

Instead, SCOTUS provided a test for courts to decide first if someone is or isn’t speaking on behalf of the state on their social media pages, and then if they actually have authority to act on what they post online.

The ruling suggests that government officials can block people from commenting on personal social media pages where they discuss official business, so long as that speech cannot be attributed to the state and merely reflects personal remarks. In other words, blocking is acceptable when the official either has no authority to speak for the state or is not exercising that authority when speaking on their page.

That authority empowering officials to speak for the state could be granted by a written law. It could also be granted informally if officials have long used social media to speak on behalf of the state to the point where their power to do so is considered “well-settled,” one SCOTUS ruling said.

SCOTUS broke it down like this: An official might be viewed as speaking for the state if the social media page is managed by the official’s office, if a city employee posts on their behalf to their personal page, or if the page is handed down from one official to another when terms in office end.

Posting on a personal page might also be considered speaking for the state if the information shared has not already been shared elsewhere.

Examples of officials clearly speaking on behalf of the state include a mayor holding a city council meeting online or an official using their personal page as an official channel for comments on proposed regulations.

Because SCOTUS did not set a clear standard, officials risk liability when blocking followers on so-called “mixed use” social media pages, SCOTUS cautioned. That liability could be diminished by keeping personal pages entirely separate or by posting a disclaimer stating that posts represent only officials’ personal views and not efforts to speak on behalf of the state. But any official using a personal page to make official comments could expose themselves to liability, even with a disclaimer.

SCOTUS test for when blocking is OK

These clarifications came in two SCOTUS opinions addressing conflicting outcomes in two separate complaints about officials in California and Michigan who blocked followers heavily criticizing them on Facebook and X. The lower courts’ decisions have been vacated, and courts must now apply the Supreme Court’s test to issue new decisions in each case.

One opinion was brief and unsigned, discussing a case where California parents sued school district board members who blocked them from commenting on public Twitter pages used for campaigning and discussing board issues. The board members claimed they blocked their followers after the parents left dozens and sometimes hundreds of identical comments on tweets.

In the second—which was unanimous, with no dissenting opinions—Justice Amy Coney Barrett responded at length to a case from a Facebook user named Kevin Lindke. This opinion provides varied guidance that courts can apply when considering whether blocking is appropriate or violating constituents’ First Amendment rights.

Lindke was blocked by a Michigan city manager, James Freed, after leaving comments criticizing the city’s response to COVID-19 on a page that Freed created as a college student, sometime before 2008. Among these comments, Lindke called the city’s pandemic response “abysmal” and told Freed that “the city deserves better.” On a post showing Freed picking up a takeout order, Lindke complained that residents were “suffering,” while Freed ate at expensive restaurants.

After Freed hit 5,000 followers, he converted the page to reflect his public-figure status. But while he primarily still used the page for personal posts about his family and always managed the page himself, the page went into murkier territory when he also shared updates about his job as city manager. Those included news about city efforts, screenshots of city press releases, and solicitations for public feedback, like links to city surveys.



Reddit admits more moderator protests could hurt its business

SEC filing

Losing third-party tools “could harm our moderators’ ability to review content…”


Reddit filed to go public on Thursday (PDF), revealing various details of the social media company’s inner workings. Among the revelations, Reddit acknowledged the threat of future user protests and the value of third-party Reddit apps.

On July 1, Reddit enacted API rule changes—including new, expensive pricing—that resulted in many third-party Reddit apps closing. Disturbed by the changes, their timeline, and concerns that Reddit wasn’t properly appreciating third-party app developers and moderators, thousands of Reddit users protested by making the subreddits they moderate private or read-only, or by engaging in other forms of protest, such as only discussing John Oliver or porn.

Protests went on for weeks and, at their onset, crashed Reddit for three hours. At the time, Reddit CEO Steve Huffman said the protests did not have “any significant revenue impact so far.”

In its filing with the Securities and Exchange Commission (SEC), though, Reddit acknowledged that another such protest could hurt its pockets:

While these activities have not historically had a material impact on our business or results of operations, similar actions by moderators and/or their communities in the future could adversely affect our business, results of operations, financial condition, and prospects.

The company also said that bad publicity and media coverage, such as the kind that stemmed from the API protests, could be a risk to Reddit’s success. The Form S-1 said bad PR around Reddit, including its practices, prices, and mods, “could adversely affect the size, demographics, engagement, and loyalty of our user base,” adding:

For instance, in May and June 2023, we experienced negative publicity as a result of our API policy changes.

Reddit’s filing also said that negative publicity and moderators disrupting the normal operation of subreddits could hurt user growth and engagement goals. The company highlighted financial incentives associated with having good relationships with volunteer moderators, noting that if enough mods decided to disrupt Reddit (like they did when they led protests last year), “results of operations, financial condition, and prospects could be adversely affected.” Reddit infamously forcibly removed moderators from their posts during the protests, saying they broke Reddit rules by refusing to reopen the subreddits they moderated.

“As communities grow, it can become more and more challenging for communities to find qualified people willing to act as moderators,” the filing says.

Losing third-party tools could hurt Reddit’s business

Much of the momentum for last year’s protests came from users, including long-time Redditors, mods, and people with accessibility needs, feeling that third-party apps were necessary to enjoyably and properly access and/or moderate Reddit. Reddit’s own technology has disappointed users in the past (leading some to cling to Old Reddit, which uses an older interface, for example). In its SEC filing, Reddit pointed to the value of third-party “tools” despite its API pricing killing off many of the most popular examples.

Reddit’s filing discusses losing moderators as a business risk and notes how important third-party tools are in maintaining mods:

While we provide tools to our communities to manage their subreddits, our moderators also rely on their own and third-party tools. Any disruption to, or lack of availability of, these third-party tools could harm our moderators’ ability to review content and enforce community rules. Further, if we are unable to provide effective support for third-party moderation tools, or develop our own such tools, our moderators could decide to leave our platform and may encourage their communities to follow them to a new platform, which would adversely affect our business, results of operations, financial condition, and prospects.

Since Reddit’s API policy changes, a small number of third-party Reddit apps remain available. But some of the remaining third-party Reddit app developers have previously told Ars Technica that they’re unsure of their apps’ viability under Reddit’s terms. Nondisclosure agreement requirements and the lack of a finalized developer platform also drive uncertainty around the longevity of the third-party Reddit app ecosystem, according to devs Ars spoke with this year.



EU accuses TikTok of failing to stop kids pretending to be adults

Getting TikTok’s priorities straight

TikTok becomes the second platform suspected of Digital Services Act breaches.


The European Commission (EC) is concerned that TikTok isn’t doing enough to protect kids, alleging that the short-video app may be sending kids down rabbit holes of harmful content while making it easy for kids to pretend to be adults and avoid the protective content filters that do exist.

The allegations came Monday when the EC announced a formal investigation into how TikTok may be breaching the Digital Services Act (DSA) “in areas linked to the protection of minors, advertising transparency, data access for researchers, as well as the risk management of addictive design and harmful content.”

“We must spare no effort to protect our children,” Thierry Breton, European Commissioner for Internal Market, said in the press release, reiterating that the “protection of minors is a top enforcement priority for the DSA.”

This makes TikTok the second platform investigated for possible DSA breaches after X (aka Twitter) came under fire last December. Both are being scrutinized after submitting transparency reports in September that the EC said failed to satisfy the DSA’s strict standards, including by not providing enough advertising transparency or data access for researchers.

But while X is additionally being investigated over alleged dark patterns and disinformation—following accusations last October that X wasn’t stopping the spread of Israel/Hamas disinformation—it’s TikTok’s young user base that appears to be the focus of the EC’s probe into its platform.

“As a platform that reaches millions of children and teenagers, TikTok must fully comply with the DSA and has a particular role to play in the protection of minors online,” Breton said. “We are launching this formal infringement proceeding today to ensure that proportionate action is taken to protect the physical and emotional well-being of young Europeans.”

Likely over the coming months, the EC will request more information from TikTok, picking apart its DSA transparency report. The probe could require interviews with TikTok staff or inspections of TikTok’s offices.

Upon concluding its investigation, the EC could require TikTok to take interim measures to fix any issues that are flagged. The Commission could also make a decision regarding non-compliance, potentially subjecting TikTok to fines of up to 6 percent of its global turnover.

An EC press officer, Thomas Regnier, told Ars that the Commission suspected that TikTok “has not diligently conducted” risk assessments to properly maintain mitigation efforts protecting “the physical and mental well-being of their users, and the rights of the child.”

In particular, its algorithm may risk “stimulating addictive behavior,” and its recommender systems “might drag its users, in particular minors and vulnerable users, into a so-called ‘rabbit hole’ of repetitive harmful content,” Regnier told Ars. Further, TikTok’s age verification system may be subpar, with the EU alleging that TikTok perhaps “failed to diligently assess the risk of 13-17-year-olds pretending to be adults when accessing TikTok,” Regnier said.

To better protect TikTok’s young users, the EU’s investigation could force TikTok to update its age-verification system and overhaul its default privacy, safety, and security settings for minors.

“In particular, the Commission suspects that the default settings of TikTok’s recommender systems do not ensure a high level of privacy, security, and safety of minors,” Regnier said. “The Commission also suspects that the default privacy settings that TikTok has for 16-17-year-olds are not the highest by default, which would not be compliant with the DSA, and that push notifications are, by default, not switched off for minors, which could negatively impact children’s safety.”

TikTok could avoid steep fines by committing to remedies recommended by the EC at the conclusion of its investigation.

Regnier told Ars that the EC does not comment on ongoing investigations, but its probe into X has spanned three months so far. Because the DSA does not provide any deadlines that may speed up these kinds of enforcement proceedings, ultimately, the duration of both investigations will depend on how much “the company concerned cooperates,” the EU’s press release said.

A TikTok spokesperson told Ars that TikTok “would continue to work with experts and the industry to keep young people on its platform safe,” confirming that the company “looked forward to explaining this work in detail to the European Commission.”

“TikTok has pioneered features and settings to protect teens and keep under-13s off the platform, issues the whole industry is grappling with,” TikTok’s spokesperson said.

All online platforms are now required to comply with the DSA, but enforcement on TikTok began near the end of July 2023. A TikTok press release last August promised that the platform would be “embracing” the DSA. But in its transparency report, submitted the next month, TikTok acknowledged that the report only covered “one month of metrics” and may not satisfy DSA standards.

“We still have more work to do,” TikTok’s report said, promising that “we are working hard to address these points ahead of our next DSA transparency report.”



Elon Musk’s X allows China-based propaganda banned on other platforms

Rinse-wash-repeat.

X accused of overlooking propaganda flagged by Meta and criminal prosecutors.


Lax content moderation on X (aka Twitter) has disrupted coordinated efforts between social media companies and law enforcement to tamp down on “propaganda accounts controlled by foreign entities aiming to influence US politics,” The Washington Post reported.

Now propaganda is “flourishing” on X, The Post said, while other social media companies are stuck in endless cycles, watching some of the propaganda that they block proliferate on X, then inevitably spread back to their platforms.

Meta, Google, and then-Twitter began coordinating takedown efforts with law enforcement and disinformation researchers after Russian-backed influence campaigns manipulated their platforms in hopes of swaying the 2016 US presidential election.

The next year, all three companies promised Congress to work tirelessly to stop Russian-backed propaganda from spreading on their platforms. The companies created explicit election misinformation policies and began meeting biweekly to compare notes on propaganda networks each platform uncovered, according to The Post’s interviews with anonymous sources who participated in these meetings.

However, after Elon Musk purchased Twitter and rebranded the company as X, his company withdrew from the alliance in May 2023.

Sources told The Post that the last X meeting attendee was Irish intelligence expert Aaron Rodericks—who was allegedly disciplined for liking an X post calling Musk “a dipshit.” Rodericks was subsequently laid off when Musk dismissed the entire election integrity team last September, and after that, X apparently ditched the biweekly meeting entirely and “just kind of disappeared,” a source told The Post.

In 2023, for example, Meta flagged 150 “artificial influence accounts” identified on its platform, of which “136 were still present on X as of Thursday evening,” according to The Post’s analysis. X’s seeming oversight extends to all but eight of the 123 “deceptive China-based campaigns” connected to accounts that Meta flagged last May, August, and December, The Post reported.

The Post’s report also provided an exclusive analysis from the Stanford Internet Observatory (SIO), which found that 86 propaganda accounts that Meta flagged last November “are still active on X.”

The majority of these accounts—81—were China-based accounts posing as Americans, SIO reported. These accounts frequently ripped photos from Americans’ LinkedIn profiles, then changed the real Americans’ names while posting about both China and US politics, as well as people often trending on X, such as Musk and Joe Biden.

Meta has warned that China-based influence campaigns are “multiplying,” The Post noted, while X’s standards remain seemingly too relaxed. Even accounts linked to criminal investigations remain active on X. One “account that is accused of being run by the Chinese Ministry of Public Security,” The Post reported, remains on X despite its posts being cited by US prosecutors in a criminal complaint.

Prosecutors connected that account to “dozens” of X accounts attempting to “shape public perceptions” about the Chinese Communist Party, the Chinese government, and other world leaders. The accounts also comment on hot-button topics like the fentanyl problem or police brutality, seemingly to convey “a sense of dismay over the state of America without any clear partisan bent,” Elise Thomas, an analyst for a London nonprofit called the Institute for Strategic Dialogue, told The Post.

Some X accounts flagged by The Post had more than 1 million followers. Five have paid X for verification, suggesting that their disinformation campaigns—targeting hashtags to confound discourse on US politics—are seemingly being boosted by X.

SIO technical research manager Renée DiResta criticized X’s decision to stop coordinating with other platforms.

“The presence of these accounts reinforces the fact that state actors continue to try to influence US politics by masquerading as media and fellow Americans,” DiResta told The Post. “Ahead of the 2022 midterms, researchers and platform integrity teams were collaborating to disrupt foreign influence efforts. That collaboration seems to have ground to a halt, Twitter does not seem to be addressing even networks identified by its peers, and that’s not great.”

Musk shut down X’s election integrity team because he claimed that the team was actually “undermining” election integrity. But analysts are bracing for floods of misinformation to sway 2024 elections, as some major platforms have removed election misinformation policies just as rapid advances in AI technologies have made misinformation spread via text, images, audio, and video harder for the average person to detect.

In one prominent example, a robocall relied on AI voice technology to pose as Biden and tell Democrats not to vote. That incident seemingly pushed the Federal Trade Commission on Thursday to propose penalizing AI impersonation.

It seems apparent that propaganda accounts from foreign entities on X will use every tool available to get eyes on their content, perhaps expecting Musk’s platform to be the slowest to police them. According to The Post, some of the X accounts spreading propaganda are using what appears to be AI-generated images of Biden and Donald Trump to garner tens of thousands of views on posts.

It’s possible that X will start tightening up on content moderation as elections draw closer. Yesterday, X joined Amazon, Google, Meta, OpenAI, TikTok, and other Big Tech companies in signing an agreement to fight “deceptive use of AI” during 2024 elections. Among the top goals identified in the “AI Elections accord” are identifying where propaganda originates, detecting how propaganda spreads across platforms, and “undertaking collective efforts to evaluate and learn from the experiences and outcomes of dealing” with propaganda.



Reddit must share IP addresses of piracy-discussing users, film studios say


For the third time in less than a year, film studios with copyright infringement complaints against a cable Internet provider are trying to force Reddit to share information about users who have discussed piracy on the site.

In 2023, film companies lost two attempts to have Reddit unmask its users. In the first instance, US Magistrate Judge Laurel Beeler ruled in the US District Court for the Northern District of California that the First Amendment right to anonymous speech meant Reddit didn’t have to disclose the names, email addresses, and other account registration information for nine Reddit users. Film companies, including Bodyguard Productions and Millennium Media, had subpoenaed Reddit in relation to a copyright infringement lawsuit against Astound Broadband-owned RCN about subscribers allegedly pirating 34 movie titles, including Hellboy (2019), Rambo V: Last Blood, and Tesla.

In the second instance, the same companies sued Astound Broadband-owned ISP Grande, again for alleged copyright infringement occurring over the ISP’s network. The studios subpoenaed Reddit for user account information, including “IP address registration and logs from 1/1/2016 to present, name, email address, and other account registration information” for six Reddit users, per a July 2023 court filing.

In August, a federal court again quashed that subpoena, citing First Amendment rights. In her ruling, Beeler noted that while the First Amendment right to anonymous speech is not absolute, the film producers had already received the names of 118 Grande subscribers. She also said the film producers had failed to prove that “the identifying information is directly or materially relevant or unavailable from another source.”

Third piracy-related subpoena

This week, as reported by TorrentFreak, film companies Voltage Holdings, which was part of the previous two subpoenas, and Screen Media Ventures, another film studio with litigation against RCN, filed a motion to compel [PDF] Reddit to respond to the subpoena in the US District Court for the Northern District of California. The studios said they’re seeking information to support their claims that the “ability to pirate content efficiently without any consequences is a draw for becoming a Frontier subscriber” and that Frontier Communications “does not have an effective policy for terminating repeat infringers.” The film studios are claimants against Frontier in its bankruptcy case and are represented by the same lawyers as in the two aforementioned cases.

The studios are asking the court to require Reddit to provide “IP address log information from 1/1/2017 to present” for six anonymous Reddit users who talked about piracy on the site, although the Reddit posts shared in the court filing only date back to 2021.

Reddit responded to the studios’ subpoena with a letter [PDF] on January 2 stating that the subpoena “does not satisfy the First Amendment standard for disclosure of identifying information regarding an anonymous speaker.” Reddit also noted the two previously quashed subpoenas and suggested that it did not have to comply with the new request because the studios could acquire equivalent or better information elsewhere.

As with the previously mentioned litigation against ISPs, Reddit is a non-party. However, because the film companies claim that Frontier has refused to produce customer-identifying information and Reddit denied their requests, the film companies filed their motion to compel.

The studios argue that the information requests do not implicate the First Amendment and that the rulings around the two aforementioned subpoenas are not applicable because the new subpoena is only about IP address logs and not other user-identifying information.

“The Reddit users do not have a recognized privacy interest in their IP addresses,” the motion says.
