Social Media


Discord starts down the dangerous road of ads this week

Sponsored Quests —

Discord’s first real foray into ads seems minimally intrusive.


Discord has long been strongly opposed to ads, but starting this week, it’s giving video game makers the ability to advertise to its users. The introduction of so-called Sponsored Quests marks a notable change from the startup’s previous business model, but, at least for now, it seems much less intrusive than the ads shoved into other social media platforms, especially since Discord users can choose not to engage with them.

Discord first announced Sponsored Quests on March 7, with Peter Sellis, Discord’s SVP of product, writing in a blog post that users would start seeing them in the “coming weeks.” Sponsored Quests offer PC gamers in-game rewards for getting friends to watch a stream of them playing through Discord. Discord senior product communications manager Swaleha Carlson confirmed to Ars Technica that Sponsored Quests launch this week.

Discord shared this image in March as an example of the new type of ads.

The goal is for video games to get exposure to more gamers, serving as a form of marketing. On Saturday, The Wall Street Journal (WSJ) reported that it viewed a slide from a slideshow Discord shows to game developers regarding the ads that reads: “We’ll get you in front of players. And those players will get you into their friend groups.”

Sellis told WSJ that Discord will target ads depending on users’ age, geographic location data, and gameplay. The ads will live on the bottom-left of the screen, but users can opt out of personalized promotions for Quests that are based on activity or data shared with Discord, Carlson told Ars Technica.

“Users may still see Quests, however, if they navigate to their Gift Inventory and/or through contextual entry points like a user’s friends’ activity. They’ll also have the option to hide an in-app promotion for a specific Quest or game they’re not interested in,” she said.

Discord already tested the ads last May with Lucasfilm Games and Epic Games. Discord users were able to receive Star Wars-themed gear in Fortnite for getting a friend to watch them play Fortnite on PC for at least 15 minutes.

Jason Citron, Discord co-founder and CEO, told Bloomberg in March that the company hopes that one day “every game will offer Quests on Discord.”

Discord used to be anti-ads

It may be a nuisance for users to have to disable personalized promotions for Sponsored Quests that they never asked for, but it should bring long-term users at least some comfort that their data purportedly doesn’t have to contribute to the marketing. However, it’s unclear whether Discord will one day change this. The fact that the platform is implementing ads at all is somewhat surprising: Discord named its avoidance of advertising as one of its key differentiators from traditional social media platforms as recently as late January.

In March 2021, Citron told WSJ that Discord had eschewed ads until that point because ads would be intrusive, considering Discord’s purpose of instant back-and-forth communication and people’s general distaste for viewing ads and having their data shared with other companies.

“We really believe we can build products that make Discord more fun and that people will pay for them. It keeps our incentives aligned,” Citron told WSJ at the time.

That same year, Citron, in response to a question about why being ad-free is important to Discord, told NPR: “We believe that people’s data is their data and that people should feel comfortable and safe to have conversations and that their data is not going to be used against them in any way that is improper.”

Sponsored Quests differ from other types of ads that would more obviously disrupt Discord users’ experiences, such as pop-up ads or ads viewed alongside chat windows.

A tightrope to walk

Beyond Sponsored Quests, Discord, which launched in 2015, previously announced that it would start selling sponsored profile effects and avatar decorations in the Discord Shop. In March, Discord’s Sellis said this would arrive in the “coming weeks.” Discord is also trying to hire more than 12 people to work in ad sales, WSJ said Saturday, citing anonymous “people familiar with [Discord’s] plans.”

Discord’s Carlson declined to comment to Ars on whether Discord plans to incorporate other types of ads into the platform. She noted that Sponsored Quests “are currently in the pilot phase” and that the company will “continue to iterate based on what we learn.”

In 2021, Discord enjoyed a nearly threefold revenue boost that it attributed to subscription sales for Nitro, which adds features like HD video streaming and uploads of up to 500MB. In March, Citron told Bloomberg that Discord has more than 200 million monthly active users and that the company will “probably” go public eventually.

The publication, citing unnamed “people with knowledge of the matter,” also reported that Discord makes over $600 million in annualized revenue. The startup has raised over $1 billion in funding and is reported to have over $700 million in cash. However, the company reportedly isn’t profitable. It also laid off 17 percent of staffers, or 170 workers, in January.

Meanwhile, ads are the top revenue generator for many other social media platforms, such as Reddit, which recently went public.

While Discord’s first real ads endeavor seems like it will have minimal impact on users who aren’t interested in them, it brings the company down a tricky road that it hasn’t previously navigated. A key priority should be ensuring that any form of ads doesn’t disrupt the primary reasons people like using Discord. As it stands, Sponsored Quests might already put off some users.

“I don’t want my friendships to be monetized or productized in any way,” Zack Mohsen, a reported long-time user and computer hardware engineer based in Seattle, told WSJ.

Updated April 1, 2024 at 5:32 p.m. ET to add information and comment from Discord.



Facebook let Netflix see user DMs, quit streaming to keep Netflix happy: Lawsuit

A promotional image for Sorry for Your Loss, which was a Facebook Watch original scripted series.

Last April, Meta revealed that it would no longer support original shows, like Jada Pinkett Smith’s Red Table Talk talk show, on Facebook Watch. Meta’s streaming business, once viewed as competition for the likes of YouTube and Netflix, is effectively dead now; Facebook doesn’t produce original series, and Facebook Watch is no longer available as a video-streaming app.

The streaming business’ demise has seemed related to cost cuts at Meta that have also included layoffs. However, recently unsealed court documents in an antitrust suit against Meta [PDF] claim that Meta has squashed its streaming dreams in order to appease one of its biggest ad customers: Netflix.

Facebook allegedly gave Netflix creepy privileges

As spotted via Gizmodo, a letter was filed on April 14 in relation to a class-action antitrust suit that was filed by Meta customers, accusing Meta of anti-competitive practices that harm social media competition and consumers. The letter, made public Saturday, asks a court to have Reed Hastings, Netflix’s founder and former CEO, respond to a subpoena for documents that plaintiffs claim are relevant to the case. The original complaint filed in December 2020 [PDF] doesn’t mention Netflix beyond stating that Facebook “secretly signed Whitelist and Data sharing agreements” with Netflix, along with “dozens” of other third-party app developers. The case is still ongoing.

The letter alleges that Netflix’s relationship with Facebook was remarkably strong due to the former’s ad spend with the latter and that Hastings directed “negotiations to end competition in streaming video” from Facebook.

One of the first questions that may come to mind is why a company like Facebook would allow Netflix to influence such a major business decision. The litigation claims the companies formed a lucrative business relationship that included Facebook allegedly giving Netflix access to Facebook users’ private messages:

By 2013, Netflix had begun entering into a series of “Facebook Extended API” agreements, including a so-called “Inbox API” agreement that allowed Netflix programmatic access to Facebook’s users’ private message inboxes, in exchange for which Netflix would “provide to FB a written report every two weeks that shows daily counts of recommendation sends and recipient clicks by interface, initiation surface, and/or implementation variant (e.g., Facebook vs. non-Facebook recommendation recipients).” … In August 2013, Facebook provided Netflix with access to its so-called “Titan API,” a private API that allowed a whitelisted partner to access, among other things, Facebook users’ “messaging app and non-app friends.”

Meta said it rolled out end-to-end encryption “for all personal chats and calls on Messenger and Facebook” in December. And in 2018, Facebook told Vox that it doesn’t use private messages for ad targeting. But a few months later, The New York Times, citing “hundreds of pages of Facebook documents,” reported that Facebook “gave Netflix and Spotify the ability to read Facebook users’ private messages.”

Meta didn’t respond to Ars Technica’s request for comment. The company told Gizmodo that it has standard agreements with Netflix currently but didn’t answer the publication’s specific questions.



Public officials can block haters—but only sometimes, SCOTUS rules


There are some circumstances where government officials are allowed to block people from commenting on their social media pages, the Supreme Court ruled Friday.

According to the Supreme Court, the key question is whether officials are speaking as private individuals or on behalf of the state when posting online. Issuing two opinions, the Supreme Court declined to set a clear standard for when personal social media use constitutes state speech, leaving each unique case to be decided by lower courts.

Instead, SCOTUS provided a test for courts to decide first if someone is or isn’t speaking on behalf of the state on their social media pages, and then if they actually have authority to act on what they post online.

The ruling suggests that government officials can block people from commenting on personal social media pages where they discuss official business when that speech cannot be attributed to the state and merely reflects personal remarks. This means that blocking is acceptable when the official either has no authority to speak for the state or is not exercising that authority when posting on their page.

That authority empowering officials to speak for the state could be granted by a written law. It could also be granted informally if officials have long used social media to speak on behalf of the state to the point where their power to do so is considered “well-settled,” one SCOTUS ruling said.

SCOTUS broke it down like this: An official might be viewed as speaking for the state if the social media page is managed by the official’s office, if a city employee posts on their behalf to their personal page, or if the page is handed down from one official to another when terms in office end.

Posting on a personal page might also be considered speaking for the state if the information shared has not already been shared elsewhere.

Examples of officials clearly speaking on behalf of the state include a mayor holding a city council meeting online or an official using their personal page as an official channel for comments on proposed regulations.

Because SCOTUS did not set a clear standard, officials risk liability when blocking followers on so-called “mixed use” social media pages, SCOTUS cautioned. That liability could be diminished by keeping personal pages entirely separate or by posting a disclaimer stating that posts represent only officials’ personal views and not efforts to speak on behalf of the state. But any official using a personal page to make official comments could expose themselves to liability, even with a disclaimer.

SCOTUS test for when blocking is OK

These clarifications came in two SCOTUS opinions addressing conflicting outcomes in two separate complaints about officials in California and Michigan who blocked followers heavily criticizing them on Facebook and X. The lower courts’ decisions have been vacated, and courts must now apply the Supreme Court’s test to issue new decisions in each case.

One opinion was brief and unsigned, discussing a case where California parents sued school district board members who blocked them from commenting on public Twitter pages used for campaigning and discussing board issues. The board members claimed they blocked the parents after the parents left dozens, and sometimes hundreds, of identical comments on tweets.

In the second—which was unanimous, with no dissenting opinions—Justice Amy Coney Barrett responded at length to a case from a Facebook user named Kevin Lindke. This opinion provides varied guidance that courts can apply when considering whether blocking is appropriate or violates constituents’ First Amendment rights.

Lindke was blocked by a Michigan city manager, James Freed, after leaving comments criticizing the city’s response to COVID-19 on a page that Freed created as a college student, sometime before 2008. Among these comments, Lindke called the city’s pandemic response “abysmal” and told Freed that “the city deserves better.” On a post showing Freed picking up a takeout order, Lindke complained that residents were “suffering,” while Freed ate at expensive restaurants.

After Freed hit 5,000 followers, he converted the page to reflect his public-figure status. But while he primarily still used the page for personal posts about his family and always managed the page himself, the page went into murkier territory when he also shared updates about his job as city manager. Those posts included updates on city efforts, screenshots of city press releases, and solicitations for public feedback, like links to city surveys.



Reddit admits more moderator protests could hurt its business

SEC filing —

Losing third-party tools “could harm our moderators’ ability to review content…”


Reddit filed to go public on Thursday (PDF), revealing various details of the social media company’s inner workings. Among the revelations, Reddit acknowledged the threat of future user protests and the value of third-party Reddit apps.

On July 1, Reddit enacted API rule changes—including new, expensive pricing—that resulted in many third-party Reddit apps closing. Disturbed by the changes, their timeline, and concerns that Reddit wasn’t properly appreciating third-party app developers and moderators, thousands of Reddit users protested by making the subreddits they moderate private or read-only, or by engaging in other forms of protest, such as only discussing John Oliver or porn.

Protests went on for weeks and, at their onset, crashed Reddit for three hours. At the time, Reddit CEO Steve Huffman said the protests did not have “any significant revenue impact so far.”

In its filing with the Securities and Exchange Commission (SEC), though, Reddit acknowledged that another such protest could hurt its bottom line:

While these activities have not historically had a material impact on our business or results of operations, similar actions by moderators and/or their communities in the future could adversely affect our business, results of operations, financial condition, and prospects.

The company also said that bad publicity and media coverage, such as the kind that stemmed from the API protests, could be a risk to Reddit’s success. The Form S-1 said bad PR around Reddit, including its practices, prices, and mods, “could adversely affect the size, demographics, engagement, and loyalty of our user base,” adding:

For instance, in May and June 2023, we experienced negative publicity as a result of our API policy changes.

Reddit’s filing also said that negative publicity and moderators disrupting the normal operation of subreddits could hurt user growth and engagement goals. The company highlighted the financial incentives associated with having good relationships with volunteer moderators, noting that if enough mods decided to disrupt Reddit (like they did when they led protests last year), “results of operations, financial condition, and prospects could be adversely affected.” Reddit infamously stripped moderators of their positions during the protests, saying they broke Reddit’s rules by refusing to reopen the subreddits they moderated.

“As communities grow, it can become more and more challenging for communities to find qualified people willing to act as moderators,” the filing says.

Losing third-party tools could hurt Reddit’s business

Much of the momentum for last year’s protests came from users, including long-time Redditors, mods, and people with accessibility needs, who felt that third-party apps were necessary to enjoyably and properly access and/or moderate Reddit. Reddit’s own technology has disappointed users in the past (leading some to cling to Old Reddit, which uses an older interface, for example). In its SEC filing, Reddit pointed to the value of third-party “tools” despite its API pricing killing off many of the most popular examples.

Reddit’s filing discusses losing moderators as a business risk and notes how important third-party tools are in maintaining mods:

While we provide tools to our communities to manage their subreddits, our moderators also rely on their own and third-party tools. Any disruption to, or lack of availability of, these third-party tools could harm our moderators’ ability to review content and enforce community rules. Further, if we are unable to provide effective support for third-party moderation tools, or develop our own such tools, our moderators could decide to leave our platform and may encourage their communities to follow them to a new platform, which would adversely affect our business, results of operations, financial condition, and prospects.

Since Reddit’s API policy changes, a small number of third-party Reddit apps remain available. But some of the remaining developers have previously told Ars Technica that they’re unsure of their apps’ viability under Reddit’s terms. Nondisclosure agreement requirements and the lack of a finalized developer platform also drive uncertainty around the longevity of the third-party Reddit app ecosystem, according to devs Ars spoke with this year.



EU accuses TikTok of failing to stop kids pretending to be adults

Getting TikTok’s priorities straight —

TikTok becomes the second platform suspected of Digital Services Act breaches.


The European Commission (EC) is concerned that TikTok isn’t doing enough to protect kids, alleging that the short-video app may be sending kids down rabbit holes of harmful content while making it easy for kids to pretend to be adults and avoid the protective content filters that do exist.

The allegations came Monday when the EC announced a formal investigation into how TikTok may be breaching the Digital Services Act (DSA) “in areas linked to the protection of minors, advertising transparency, data access for researchers, as well as the risk management of addictive design and harmful content.”

“We must spare no effort to protect our children,” Thierry Breton, European Commissioner for Internal Market, said in the press release, reiterating that the “protection of minors is a top enforcement priority for the DSA.”

This makes TikTok the second platform investigated for possible DSA breaches after X (aka Twitter) came under fire last December. Both are being scrutinized after submitting transparency reports in September that the EC said failed to satisfy the DSA’s strict standards in predictable areas, such as insufficient advertising transparency and data access for researchers.

But while X is additionally being investigated over alleged dark patterns and disinformation—following accusations last October that X wasn’t stopping the spread of Israel/Hamas disinformation—it’s TikTok’s young user base that appears to be the focus of the EC’s probe into its platform.

“As a platform that reaches millions of children and teenagers, TikTok must fully comply with the DSA and has a particular role to play in the protection of minors online,” Breton said. “We are launching this formal infringement proceeding today to ensure that proportionate action is taken to protect the physical and emotional well-being of young Europeans.”

Likely over the coming months, the EC will request more information from TikTok, picking apart its DSA transparency report. The probe could require interviews with TikTok staff or inspections of TikTok’s offices.

Upon concluding its investigation, the EC could require TikTok to take interim measures to fix any issues that are flagged. The Commission could also make a decision regarding non-compliance, potentially subjecting TikTok to fines of up to 6 percent of its global turnover.

An EC press officer, Thomas Regnier, told Ars that the Commission suspected that TikTok “has not diligently conducted” risk assessments to properly maintain mitigation efforts protecting “the physical and mental well-being of their users, and the rights of the child.”

In particular, its algorithm may risk “stimulating addictive behavior,” and its recommender systems “might drag its users, in particular minors and vulnerable users, into a so-called ‘rabbit hole’ of repetitive harmful content,” Regnier told Ars. Further, TikTok’s age verification system may be subpar, with the EU alleging that TikTok perhaps “failed to diligently assess the risk of 13-17-year-olds pretending to be adults when accessing TikTok,” Regnier said.

To better protect TikTok’s young users, the EU’s investigation could force TikTok to update its age-verification system and overhaul its default privacy, safety, and security settings for minors.

“In particular, the Commission suspects that the default settings of TikTok’s recommender systems do not ensure a high level of privacy, security, and safety of minors,” Regnier said. “The Commission also suspects that the default privacy settings that TikTok has for 16-17-year-olds are not the highest by default, which would not be compliant with the DSA, and that push notifications are, by default, not switched off for minors, which could negatively impact children’s safety.”

TikTok could avoid steep fines by committing to remedies recommended by the EC at the conclusion of its investigation.

Regnier told Ars that the EC does not comment on ongoing investigations, but its probe into X has spanned three months so far. Because the DSA does not provide any deadlines that may speed up these kinds of enforcement proceedings, ultimately, the duration of both investigations will depend on how much “the company concerned cooperates,” the EU’s press release said.

A TikTok spokesperson told Ars that TikTok “would continue to work with experts and the industry to keep young people on its platform safe,” confirming that the company “looked forward to explaining this work in detail to the European Commission.”

“TikTok has pioneered features and settings to protect teens and keep under-13s off the platform, issues the whole industry is grappling with,” TikTok’s spokesperson said.

All online platforms are now required to comply with the DSA, but enforcement on TikTok began near the end of July 2023. A TikTok press release last August promised that the platform would be “embracing” the DSA. But in its transparency report, submitted the next month, TikTok acknowledged that the report only covered “one month of metrics” and may not satisfy DSA standards.

“We still have more work to do,” TikTok’s report said, promising that “we are working hard to address these points ahead of our next DSA transparency report.”



Elon Musk’s X allows China-based propaganda banned on other platforms

Rinse-wash-repeat. —

X accused of overlooking propaganda flagged by Meta and criminal prosecutors.


Lax content moderation on X (aka Twitter) has disrupted coordinated efforts between social media companies and law enforcement to tamp down on “propaganda accounts controlled by foreign entities aiming to influence US politics,” The Washington Post reported.

Now propaganda is “flourishing” on X, The Post said, while other social media companies are stuck in endless cycles, watching some of the propaganda that they block proliferate on X, then inevitably spread back to their platforms.

Meta, Google, and then-Twitter began coordinating takedown efforts with law enforcement and disinformation researchers after Russian-backed influence campaigns manipulated their platforms in hopes of swaying the 2016 US presidential election.

The next year, all three companies promised Congress to work tirelessly to stop Russian-backed propaganda from spreading on their platforms. The companies created explicit election misinformation policies and began meeting biweekly to compare notes on propaganda networks each platform uncovered, according to The Post’s interviews with anonymous sources who participated in these meetings.

However, after Elon Musk purchased Twitter and rebranded the company as X, his company withdrew from the alliance in May 2023.

Sources told The Post that the last X meeting attendee was Irish intelligence expert Aaron Rodericks—who was allegedly disciplined for liking an X post calling Musk “a dipshit.” Rodericks was subsequently laid off when Musk dismissed the entire election integrity team last September, and after that, X apparently ditched the biweekly meeting entirely and “just kind of disappeared,” a source told The Post.

In 2023, for example, Meta flagged 150 “artificial influence accounts” identified on its platform, of which “136 were still present on X as of Thursday evening,” according to The Post’s analysis. X’s seeming oversight extends to all but eight of the 123 “deceptive China-based campaigns” connected to accounts that Meta flagged last May, August, and December, The Post reported.

The Post’s report also provided an exclusive analysis from the Stanford Internet Observatory (SIO), which found that 86 propaganda accounts that Meta flagged last November “are still active on X.”

The majority of these accounts—81—were China-based accounts posing as Americans, SIO reported. These accounts frequently ripped photos from Americans’ LinkedIn profiles, then changed the real Americans’ names while posting about both China and US politics, as well as people often trending on X, such as Musk and Joe Biden.

Meta has warned that China-based influence campaigns are “multiplying,” The Post noted, while X’s standards remain seemingly too relaxed. Even accounts linked to criminal investigations remain active on X. One “account that is accused of being run by the Chinese Ministry of Public Security,” The Post reported, remains on X despite its posts being cited by US prosecutors in a criminal complaint.

Prosecutors connected that account to “dozens” of X accounts attempting to “shape public perceptions” about the Chinese Communist Party, the Chinese government, and other world leaders. The accounts also comment on hot-button topics like the fentanyl problem or police brutality, seemingly to convey “a sense of dismay over the state of America without any clear partisan bent,” Elise Thomas, an analyst for a London nonprofit called the Institute for Strategic Dialogue, told The Post.

Some X accounts flagged by The Post had more than 1 million followers. Five have paid X for verification, suggesting that their disinformation campaigns—targeting hashtags to confound discourse on US politics—are seemingly being boosted by X.

SIO technical research manager Renée DiResta criticized X’s decision to stop coordinating with other platforms.

“The presence of these accounts reinforces the fact that state actors continue to try to influence US politics by masquerading as media and fellow Americans,” DiResta told The Post. “Ahead of the 2022 midterms, researchers and platform integrity teams were collaborating to disrupt foreign influence efforts. That collaboration seems to have ground to a halt, Twitter does not seem to be addressing even networks identified by its peers, and that’s not great.”

Musk shut down X’s election integrity team because he claimed that the team was actually “undermining” election integrity. But analysts are bracing for floods of misinformation to sway 2024 elections, as some major platforms have removed election misinformation policies just as rapid advances in AI technologies have made misinformation spread via text, images, audio, and video harder for the average person to detect.

In one prominent example, a robocall relied on AI voice technology to pose as Biden and tell Democrats not to vote. That incident seemingly pushed the Federal Trade Commission on Thursday to propose penalizing AI impersonation.

It seems apparent that propaganda accounts from foreign entities on X will use every tool available to get eyes on their content, perhaps expecting Musk’s platform to be the slowest to police them. According to The Post, some of the X accounts spreading propaganda are using what appears to be AI-generated images of Biden and Donald Trump to garner tens of thousands of views on posts.

It’s possible that X will start tightening up on content moderation as elections draw closer. Yesterday, X joined Amazon, Google, Meta, OpenAI, TikTok, and other Big Tech companies in signing an agreement to fight “deceptive use of AI” during 2024 elections. Among the top goals identified in the “AI Elections accord” are identifying where propaganda originates, detecting how propaganda spreads across platforms, and “undertaking collective efforts to evaluate and learn from the experiences and outcomes of dealing” with propaganda.



Reddit must share IP addresses of piracy-discussing users, film studios say


For the third time in less than a year, film studios with copyright infringement complaints against a cable Internet provider are trying to force Reddit to share information about users who have discussed piracy on the site.

In 2023, film companies lost two attempts to have Reddit unmask its users. In the first instance, US Magistrate Judge Laurel Beeler ruled in the US District Court for the Northern District of California that the First Amendment right to anonymous speech meant Reddit didn’t have to disclose the names, email addresses, and other account registration information for nine Reddit users. Film companies, including Bodyguard Productions and Millennium Media, had subpoenaed Reddit in relation to a copyright infringement lawsuit against Astound Broadband-owned RCN about subscribers allegedly pirating 34 movie titles, including Hellboy (2019), Rambo V: Last Blood, and Tesla.

In the second instance, the same companies sued Astound Broadband-owned ISP Grande, again for alleged copyright infringement occurring over the ISP’s network. The studios subpoenaed Reddit for user account information, including “IP address registration and logs from 1/1/2016 to present, name, email address, and other account registration information” for six Reddit users, per a July 2023 court filing.

In August, a federal court again quashed that subpoena, citing First Amendment rights. In her ruling, Beeler noted that while the First Amendment right to anonymous speech is not absolute, the film producers had already received the names of 118 Grande subscribers. She also said the film producers had failed to prove that “the identifying information is directly or materially relevant or unavailable from another source.”

Third piracy-related subpoena

This week, as reported by TorrentFreak, film company Voltage Holdings, which was involved in the previous two subpoenas, and Screen Media Ventures, another film studio with litigation against RCN, filed a motion to compel [PDF] Reddit to respond to a subpoena in the US District Court for the Northern District of California. The studios said they’re seeking information to support their claims that the “ability to pirate content efficiently without any consequences is a draw for becoming a Frontier subscriber” and that Frontier Communications “does not have an effective policy for terminating repeat infringers.” The film studios are claimants against Frontier in its bankruptcy case and are represented by the same lawyers used in the two aforementioned cases.

The studios are asking the court to require Reddit to provide “IP address log information from 1/1/2017 to present” for six anonymous Reddit users who talked about piracy on the site, although the Reddit posts shared in the court filing only date back to 2021.

Reddit responded to the studios’ subpoena with a letter [PDF] on January 2 stating that the subpoena “does not satisfy the First Amendment standard for disclosure of identifying information regarding an anonymous speaker.” Reddit also noted the two previously quashed subpoenas and suggested that it did not have to comply with the new request because the studios could acquire equivalent or better information elsewhere.

As with the previously mentioned litigation against ISPs, Reddit is a non-party. But after the film companies claimed that Frontier had refused to produce customer-identifying information and Reddit denied their requests, the film companies filed their motion to compel.

The studios argue that the information requests do not implicate the First Amendment and that the rulings around the two aforementioned subpoenas are not applicable because the new subpoena is only about IP address logs and not other user-identifying information.

“The Reddit users do not have a recognized privacy interest in their IP addresses,” the motion says.

