

“IG is a drug”: Internal messages may doom Meta at social media addiction trial


Social media addiction test case

A loss could cost social media companies billions and force changes on platforms.

Mark Zuckerberg testifies during the US Senate Judiciary Committee hearing, “Big Tech and the Online Child Sexual Exploitation Crisis,” in 2024.

Anxiety, depression, eating disorders, and death. These can be the consequences for vulnerable kids who get addicted to social media, according to more than 1,000 personal injury lawsuits that seek to punish Meta and other platforms for allegedly prioritizing profits while downplaying child safety risks for years.

Social media companies have faced scrutiny before, with congressional hearings forcing CEOs to apologize, but until now, they’ve never had to convince a jury that they aren’t liable for harming kids.

This week, the first high-profile lawsuit—considered a “bellwether” case that could set meaningful precedent for hundreds of other complaints—goes to trial. That lawsuit documents the case of a 19-year-old, K.G.M., who hopes the jury will agree that Meta and YouTube caused psychological harm by designing features like infinite scroll and autoplay to push her down a path that she alleges triggered depression, anxiety, self-harm, and suicidality.

TikTok and Snapchat were also targeted by the lawsuit, but both have settled. The Snapchat settlement came last week, while TikTok settled on Tuesday just hours before the trial started, Bloomberg reported.

For now, YouTube and Meta remain in the fight. K.G.M. allegedly started watching YouTube when she was 6 years old and joined Instagram by age 11. She’s fighting to claim unspecified damages—potentially including punitive damages—to help her family recoup losses from her pain and suffering, and to punish social media companies and deter them from promoting harmful features to kids. She also wants the court to require prominent safety warnings on platforms to make parents aware of the risks.

Platforms failed to blame mom for not reading TOS

A loss could cost social media companies billions, CNN reported.

To avoid that, platforms have alleged that other factors caused K.G.M.’s psychological harm—like school bullies and family troubles—while insisting that Section 230 and the First Amendment protect platforms from being blamed for any harmful content targeted to K.G.M.

They also argued that K.G.M.’s mom never read the terms of service and, therefore, supposedly would not have benefited from posted warnings. And ByteDance, before settling, seemingly tried to pass the buck by claiming that K.G.M. “already suffered mental health harms before she began using TikTok.”

But the judge, Carolyn B. Kuhl, wrote in a ruling denying all platforms’ motions for summary judgment that K.G.M. showed enough evidence that her claims don’t stem from content to go to trial.

Further, platforms can’t liken warnings buried in terms of service to prominently displayed warnings, Kuhl said, since K.G.M.’s mom testified she would have restricted the minor’s app usage if she were aware of the alleged risks.

Two platforms settling before the trial seems like a good sign for K.G.M. However, Snapchat has not settled other social media addiction lawsuits that it’s involved in, including one raised by school districts, and perhaps is waiting to see how K.G.M.’s case shakes out before taking further action.

To win, K.G.M.’s lawyers will need to “parcel out” how much harm is attributable to each platform’s design features, as opposed to the content that was targeted to K.G.M., wrote Clay Calvert, a technology policy expert and senior fellow at the American Enterprise Institute, a think tank. Internet law expert Eric Goldman told The Washington Post that detailing those harms will likely be K.G.M.’s biggest struggle, since social media addiction has yet to be legally recognized, and tracing who caused which harms may not be straightforward.

However, Matthew Bergman, founder of the Social Media Victims Law Center and one of K.G.M.’s lawyers, told the Post that K.G.M. is prepared to put up this fight.

“She is going to be able to explain in a very real sense what social media did to her over the course of her life and how in so many ways it robbed her of her childhood and her adolescence,” Bergman said.

Internal messages may be “smoking-gun evidence”

The research is unclear on whether social media is harmful for kids or whether social media addiction exists, Tamar Mendelson, a professor at Johns Hopkins Bloomberg School of Public Health, told the Post. And so far, research only shows a correlation between Internet use and mental health, Mendelson noted, which could doom K.G.M.’s case and others’.

However, social media companies’ internal research might concern a jury, Bergman told the Post. On Monday, the Tech Oversight Project, a nonprofit working to rein in Big Tech, published a report analyzing recently unsealed documents in K.G.M.’s case that supposedly provide “smoking-gun evidence” that platforms “purposefully designed their social media products to addict children and teens with no regard for known harms to their wellbeing”—while putting increased engagement from young users at the center of their business models.

In the report, Sacha Haworth, executive director of The Tech Oversight Project, accused social media companies of “gaslighting and lying to the public for years.”

Most of the recently unsealed documents highlighted in the report came from Meta, which also faces a trial from dozens of state attorneys general on social media addiction this year.

Those documents included an email stating that Mark Zuckerberg—who is expected to testify at K.G.M.’s trial—decided that Meta’s top priority in 2017 was teens, who needed to be locked in to using the company’s family of apps.

The next year, a Facebook internal document showed that the company pondered letting “tweens” access a private mode inspired by the popularity of fake Instagram accounts teens know as “finstas.” That document included an “internal discussion on how to counter the narrative that Facebook is bad for youth and admission that internal data shows that Facebook use is correlated with lower well-being (although it says the effect reverses longitudinally).”

Other allegedly damning documents showed Meta seemingly bragging that “teens can’t switch off from Instagram even if they want to” and an employee declaring, “oh my gosh yall IG is a drug,” likening all social media platforms to “pushers.”

Similarly, a 2020 Google document detailed the company’s plan to keep kids engaged “for life,” despite internal research showing young YouTube users “disproportionately” suffered from “habitual heavy use, late night use, and unintentional use” that deteriorated their “digital well-being.”

Shorts, YouTube’s TikTok rival, is also a concern for the parents suing. Three years later, documents showed Google choosing to target teens with Shorts, despite research flagging that the “two biggest challenges for teen wellbeing on YouTube” were prominently linked to watching Shorts. Those challenges included Shorts bombarding teens with “low quality content recommendations that can convey & normalize unhealthy beliefs or behaviors” and teens reporting that “prolonged unintentional use” was “displacing valuable activities like time with friends or sleep.”

Bergman told the Post that these documents will help the jury decide if companies owed young users better protections sooner but prioritized profits while pushing off interventions that platforms have more recently introduced amid mounting backlash.

“Internal documents that have been held establishing the willful misconduct of these companies are going to—for the first time—be given a public airing,” Bergman said. “The public is going to know for the first time what social media companies have done to prioritize their profits over the safety of our kids.”

Platforms failed to get experts’ testimony tossed

One seeming advantage K.G.M. has heading into the trial is that the tech companies failed in their bid to exclude the expert testimony that backs up her claims.

Platforms tried to exclude testimony from several experts, including Kara Bagot, a board-certified adult, child, and adolescent psychiatrist, as well as Arturo Bejar, a former Meta safety researcher and whistleblower. They claimed that experts’ opinions were irrelevant because they were based on K.G.M.’s interactions with content. They also suggested that child safety experts’ opinions “violate the standards of reliability” since the causal links they draw don’t account for “alternative explanations” and allegedly “contradict the experts’ own statements in non-litigation contexts.”

However, Kuhl ruled that platforms will have the opportunity to counter the experts’ opinions at trial, while reminding social media companies that “ultimately, the critical question of causation is one that must be determined by the jury.” Only one expert’s testimony was excluded, the Social Media Victims Law Center noted: that of a licensed clinical psychologist deemed unqualified.

“Testimony by Bagot as to design features that were employed on TikTok as well as on other social media platforms is directly relevant to the question of whether those design features cause the type of harms allegedly suffered by K.G.M. here,” Kuhl wrote.

That means that a jury will get a chance to weigh Bagot’s opinion that “social media overuse and addiction causes or plays a substantial role in causing or exacerbating psychopathological harms in children and youth, including depression, anxiety and eating disorders, as well as internalizing and externalizing psychopathological symptoms.”

The jury will also consider the insights and information Bejar (a fact witness and former consultant for the company) will share about Meta’s internal safety studies. That includes hearing about “his personal knowledge and experience related to how design defects on Meta’s platforms can cause harm to minors (e.g., age verification, reporting processes, beauty filters, public like counts, infinite scroll, default settings, private messages, reels, ephemeral content, and connecting children with adult strangers),” as well as “harms associated with Meta’s platforms including addiction/problematic use, anxiety, depression, eating disorders, body dysmorphia, suicidality, self-harm, and sexualization.” 

If K.G.M. can convince the jury that she was not harmed by platforms’ failure to remove content but by companies “designing their platforms to addict kids” and “developing algorithms that show kids not what they want to see but what they cannot look away from,” Bergman thinks her case could become a “data point” for “settling similar cases en masse,” he told Barron’s.

“She is very typical of so many children in the United States—the harms that they’ve sustained and the way their lives have been altered by the deliberate design decisions of the social media companies,” Bergman told the Post.




Australian plumber is a YouTube sensation

My personal favorites are when Bruce takes on clogged restaurant grease traps, including the one at the top of this article in which he pulls out a massive greaseberg “the size of a chihuahua.” When it’s Bruce versus a nasty grease trap, the man remains undefeated (well, almost—sometimes he needs to get a grease trap pumped out before he can fix the problem). And I have learned more than I probably ever needed to know about how grease traps work.

Schematic illustration showing how a grease trap works. Credit: YouTube/Drain Cleaning Australia

Each video is its own little adventure. Bruce arrives on a job, checks out the problem (“she is chock-a-block, mate!”), and starts methodically working that problem until he solves it, which inevitably involves firing up “the bloody jet” to blast through blockages with 5,000 psi of water pressure (“Go, you good thing!”). This being Australia, he’ll occasionally encounter not just cockroaches but venomous spiders and snakes. And he’s caught so many facefuls of wastewater and sewage while jetting that he really ought to invest in a hazmat suit. Even the cheesy canned techno music playing during lulls in the action is low-budget perfection.

Bruce isn’t the only plumber with a YouTube channel—it’s a surprisingly good-sized subgenre—but he’s the most colorful and entertaining. His unbridled enthusiasm for what many would consider the dirtiest of jobs is positively infectious. He regularly effuses about having the best job in the world, insisting that unclogging gross drains is “living the dream,” and often asks his audience, “How good is this? I mean, where else would you rather be?” Sure, he says it with an ironic (unseen) wink at the camera, but deep down, you know he truly loves the work.

And you know what? Bruce is right. It might not be your definition of “what dreams are made of,” but there really is something profoundly satisfying about a free-flowing drain—and a job well done.



Google temporarily disabled YouTube’s advanced captions without warning

YouTubers have been increasingly frustrated with Google’s management of the platform, with disinformation welcomed back and an aggressive push for more AI (except where Google doesn’t like it). So it’s no surprise that creators have been up in arms over the suspicious removal of YouTube’s advanced SRV3 caption format. You don’t have to worry too much just yet—Google says this is only temporary, and it’s working on a fix for the underlying bug.

Google added support for this custom subtitle format around 2018, giving creators more customization options than with traditional captions. SRV3 (also known as YTT or YouTube Timed Text) allows for custom colors, transparency, animations, fonts, and precise positioning in videos. Uploaders using this format can color-code and position captions to help separate multiple speakers, create sing-along animations, or style them to match the video.
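To illustrate the kind of control creators feared losing, here is a minimal sketch of an SRV3 (YTT) caption file, built with Python’s standard library. The element and attribute names below (`pen`, `wp`, `fc`, `ah`, `av`, and so on) reflect community reverse-engineering of the format—for example, the YTSubConverter project’s notes—rather than official Google documentation, so treat them as assumptions.

```python
# A minimal sketch of an SRV3/YTT caption file. Element and attribute names
# follow community reverse-engineering of the format, not official docs,
# and may be incomplete.
import xml.etree.ElementTree as ET

root = ET.Element("timedtext", format="3")
head = ET.SubElement(root, "head")

# A "pen" defines text styling: fc = foreground color, bo = background opacity.
ET.SubElement(head, "pen", id="1", fc="#FFFF00", bo="0")
# A "wp" (window position) anchors the caption on screen:
# ah/av are horizontal/vertical position values, ap is the anchor point.
ET.SubElement(head, "wp", id="1", ap="6", ah="50", av="20")

body = ET.SubElement(root, "body")
# Each <p> is one cue: t = start time (ms), d = duration (ms);
# p and wp reference the pen and window position defined above.
cue = ET.SubElement(body, "p", t="1000", d="3000", p="1", wp="1")
cue.text = "Speaker one, styled and positioned"

ET.ElementTree(root).write("captions.srv3.xml", encoding="utf-8",
                           xml_declaration=True)
```

Because each cue can reference a different pen and window position, a creator can, for instance, give each speaker a distinct color and screen corner—precisely the per-speaker styling that plain caption formats can’t express.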

Over the last several days, creators who’ve become accustomed to this level of control have been dismayed to see that YouTube is no longer accepting videos with this Google-created format. Many worried Google had ditched the format entirely, which could be problematic for all those previously uploaded videos.

Google has now posted a brief statement and confirmed to Ars that it has not ended support for SRV3. However, all is not well. The company says it has temporarily limited the serving of SRV3 caption files because they may break playback for some users. That’s pretty vague, but it sounds like developers made a change to the platform without accounting for how it might interfere with SRV3 captions. Rather than allow those videos to become non-functional, Google is disabling most of the captions.



YouTube bans two popular channels that created fake AI movie trailers

Deadline reports that the behavior of these creators ran afoul of YouTube’s spam and misleading-metadata policies. At the same time, Google loves generative AI—YouTube has added more ways for creators to use generative AI, and the company says more gen AI tools are coming in the future. It’s quite a tightrope for Google to walk.

AI movie trailers

A selection of videos from the now-defunct Screen Culture channel. Credit: Ryan Whitwam

While passing off AI videos as authentic movie trailers is definitely spammy conduct, the recent changes to the legal landscape could be a factor, too. Disney recently entered into a partnership with OpenAI, bringing its massive library of characters to the company’s Sora AI video app. At the same time, Disney sent a cease-and-desist letter to Google demanding the removal of Disney content from Google AI. The letter specifically cited AI content on YouTube as a concern.

Both the banned trailer channels made heavy use of Disney properties, sometimes even incorporating snippets of real trailers. For example, Screen Culture created 23 AI trailers for The Fantastic Four: First Steps, some of which outranked the official trailer in searches. It’s unclear if either account used Google’s Veo models to create the trailers, but Google’s AI will recreate Disney characters without issue.

While Screen Culture and KH Studio were the largest purveyors of AI movie trailers, they are far from alone. There are others with five- and six-digit subscriber counts, some of which include disclosures about fan-made content. Is that enough to save them from the ban hammer? Many YouTube viewers probably hope not.



He got sued for sharing public YouTube videos; nightmare ended in settlement


Librarian vows to stop invasive ed tech after ending lawsuit with Proctorio.

Librarian Ian Linkletter remains one of Proctorio’s biggest critics after a five-year legal battle. Credit: Ashley Linkletter

Nobody expects to get sued for re-posting a YouTube video on social media by using the “share” button, but librarian Ian Linkletter spent the past five years embroiled in a copyright fight after doing just that.

Now that a settlement has been reached, Linkletter told Ars why he thinks his 2020 tweets sharing public YouTube videos put a target on his back.

Linkletter’s legal nightmare started in 2020 after an education technology company, Proctorio, began monitoring student backlash on Reddit over its AI tool used to remotely scan rooms, identify students, and prevent cheating on exams. On Reddit, students echoed serious concerns raised by researchers, warning of privacy issues, racist and sexist biases, and barriers to students with disabilities.

At that time, Linkletter was a learning technology specialist at the University of British Columbia. He had been aware of Proctorio as a tool that some professors used, but he ultimately joined UBC students in criticizing it after, practically overnight, it became a default tool that every teacher relied on during the early stages of the pandemic.

To Linkletter, the AI tool not only seemed flawed, but it also seemingly made students more anxious about exams. However, he didn’t post any tweets criticizing the tech—until he grew particularly disturbed to see Proctorio’s CEO, Mike Olsen, “showing up in the comments” on Reddit to fire back at one of his university’s loudest student critics. Defending Proctorio, Olsen roused even more backlash by posting the student’s private chat logs publicly to prove the student “lied” about a support interaction, The Guardian reported.

“If you’re gonna lie bro … don’t do it when the company clearly has an entire transcript of your conversation,” Olsen wrote, later apologizing for the now-deleted post.

“That set me off, and I was just like, this is completely unacceptable for a CEO to be going after our students like this,” Linkletter told Ars.

The more that Linkletter researched Proctorio, the more concerned he became. Taking to then-Twitter, he posted a series of seven tweets over a couple days that linked to YouTube videos that Proctorio hosted in its help center. He felt the videos—which showed how Proctorio flagged certain behaviors, tracked “abnormal” eye and head movements, and scanned rooms—helped demonstrate why students were so upset. And while he had fewer than 1,000 followers, he hoped that the influential higher education administrators who followed him would see his posts and consider dropping the tech.

Rather than request that Linkletter remove the tweets—which was the company’s standard practice—Proctorio moved quickly to delete the videos. Proctorio apparently expected that the removals would put Linkletter on notice to stop tweeting out help center videos. Instead, Linkletter posted a screenshot of the help center showing all the disabled videos, suggesting that Proctorio seemed so invested in secrecy that it was willing to gut its own support resources to censor criticism of its tools.

Together, the videos, the help center screenshot, and another screenshot showing course material describing how Proctorio works were enough for Proctorio to take Linkletter to court.

The ed tech company promptly filed a lawsuit and obtained a temporary injunction by spuriously claiming that Linkletter shared private YouTube videos containing confidential information. Because the YouTube videos—which were public but “unlisted” when Linkletter shared them—had been removed, Linkletter did not have to delete the seven tweets that initially caught Proctorio’s attention, but the injunction required that he remove two tweets, including the screenshots.

In the five years since, the legal fight dragged on, with no end in sight until last week, as Canadian courts tangled with copyright allegations that tested a recently passed law intended to shield Canadian rights to free expression, the Protection of Public Participation Act.

To fund his defense, Linkletter said in a blog announcing the settlement that he invested his life savings “ten times over.” Additionally, about 900 GoFundMe supporters and thousands of members of the Association of Administrative and Professional Staff at UBC contributed tens of thousands more. For the last year of the battle, a law firm, Norton Rose Fulbright, agreed to represent him on a pro bono basis, which Linkletter said “was a huge relief to me, as it meant I could defend myself all the way if Proctorio chose to proceed with the litigation.”

The terms of the settlement remain confidential, but both Linkletter and Proctorio confirmed that no money was exchanged.

For Proctorio, the settlement made permanent the injunction that restricted Linkletter from posting the company’s help center content or instructional materials. But it doesn’t stop Linkletter from remaining the company’s biggest critic, as “there are no other restrictions on my freedom of expression,” Linkletter’s blog noted.

“I’ve won my life back!” Linkletter wrote, while reassuring his supporters that he’s “fine” with how things ended.

“It doesn’t take much imagination to understand why Proctorio is a nightmare for students,” Linkletter wrote. “I can say everything that matters about Proctorio using public information.”

Proctorio’s YouTube “mistake” triggered injunction

In a statement to Ars, Kevin Rockmael, Proctorio’s head of marketing, suggested that the ed tech company sees the settlement as a win.

“After years of successful litigation, we are pleased that this settlement (which did not include any monetary compensation) protects our interests by making our initial restraining order permanent,” Rockmael said. “Most importantly, we are glad to close this chapter and focus our efforts on helping teachers and educational institutions deliver valuable and secure assessments.”

Responding to Rockmael, Linkletter clarified that the settlement upholds a modified injunction, noting that Proctorio’s initial injunction was significantly narrowed after a court ruled it overly broad. Linkletter also pointed to testimony from Proctorio’s former head of marketing, John Devoy, whose affidavit, which “mistakenly” swore that Linkletter was sharing private YouTube videos, was the sole basis for the court approving the injunction. That testimony, Linkletter told Ars, suggested that Proctorio knew the librarian had shared videos the company had accidentally made public and used it as “some sort of excuse to pull the trigger” on a lawsuit after Linkletter commented on the subreddit incident.

“Even a child understands how YouTube works, so how are we supposed to trust a surveillance company that doesn’t?” Linkletter wrote in his blog.

Grilled by Linkletter’s lawyer, Devoy insisted that he was not “lying” when he claimed the videos Linkletter shared came from a private channel. Instead—even though he knew the difference between a private and public channel—Devoy claimed that he made a simple mistake, even suggesting that the inaccurate claim was a “typo.”

Linkletter maintains that Proctorio’s lawsuit had nothing to do with the videos he shared—which his legal team discovered had been shared publicly by many parties, including UBC, none of which Proctorio decided to sue. Instead, he felt targeted to silence his criticism of the company, and he successfully fought to keep Proctorio from accessing his private communications, which seemed to be a fishing expedition to find other critics to monitor.

“In my opinion, and this is just my opinion, one of the purposes of the lawsuit was to have a chilling effect on public discourse around proctoring,” Linkletter told Ars. “And it worked. I mean, a lot of people were scared to use the word Proctorio, especially in writing.”

Joe Mullin, a senior policy analyst who monitored Linkletter’s case for the nonprofit digital rights group the Electronic Frontier Foundation, agreed that Proctorio’s lawsuit risked chilling speech.

“We’re glad to see this lawsuit finally resolved in a way that protects Ian Linkletter’s freedom to speak out,” Mullin told Ars, noting that Linkletter “raised serious concerns about proctoring software at a time when students were subjected to unprecedented monitoring.”

“This case should never have dragged on for five years,” Mullin said. “Using copyright claims to retaliate against critics is wrong, and it chills public debate about surveillance technology.”

Preventing the “next” Proctorio

Linkletter is not the only critic to be targeted by Proctorio, Lia Holland, campaigns and communications director for a nonprofit digital rights group called Fight for the Future, told Ars.

Holland’s group was subpoenaed in a US lawsuit after Proctorio sent a copyright infringement notice to Erik Johnson, a then-18-year-old college freshman who shared one of Linkletter’s screenshots. The ensuing litigation was similarly settled after Proctorio “threw every semi-plausible legal weapon at Johnson full force,” Holland told Ars. The pressure forced Johnson to choose between “living his life and his life being this suit from Proctorio,” Holland said.

Linkletter suspected that he and Johnson were added to a “list” of critics that Proctorio closely monitored online, but Proctorio has denied that such a list exists. Holland pushed back, though, telling Ars that Proctorio has “an incredibly long history of fudging the truth in the interest of profit.”

“We’re no strangers to Proctorio’s shady practices when it comes to oppressing dissent or criticism of their technologies,” Holland said. “I am utterly not shocked that they would employ tactics that appear to be doing the same thing when it comes to Ian Linkletter’s case.”

Regardless of Proctorio’s tactics for brand management, it seems clear that public criticism has impacted Proctorio’s sales. In 2021, Vice reported that student backlash led some schools to quickly abandon the software. UBC dropped Proctorio in 2021, too, citing “ethical concerns.”

Today, Linkletter works as an emerging technology and open education librarian at the British Columbia Institute of Technology (BCIT). While he considers himself an expert on Proctorio and continues to give lectures discussing harms of academic surveillance software, he’s ready to get away from discussing Proctorio now that the lawsuit has ended.

“I think I will continue to pay attention to what they do and say, and if there’s any new reports of harm that I can elevate,” Linkletter told Ars. “But I have definitely made my points in terms of my specific concerns, and I feel less obliged to spend more and more and more time repeating myself.”

Instead, Linkletter is determined to “prevent the next Proctorio” from potentially blindsiding students on his campus. In his role as vice chair of BCIT’s educational technology and learning design committee, he’s establishing “checks and balances” to ensure that if another pandemic-like situation arises forcing every student to work from home, he can stop “a bunch of creepy stuff” from being rolled out.

“I spent the last year advocating for and implementing algorithmic impact assessments as a mandatory thing that the institute has to do, including identifying how risk is going to be mitigated before we approve any new ed tech ever again,” Linkletter explained.

He also created the Canadian Privacy Library, where he posts privacy impact assessments that he collects by sending freedom-of-information requests to higher education institutions in British Columbia. That’s one way local students could monitor privacy concerns as AI use expands across campuses, increasingly impacting not just how exams are proctored, but how assignments are graded.

Holland told Ars that students concerned about ed tech surveillance “are most powerful when they act in solidarity with each other.” While the pandemic was widely forcing remote learning, student groups were able to successfully remove harmful proctoring tech by “working together so that there was not one single scapegoat or one single face that the ed tech company could go after,” she suggested. Those movements typically start with one or two students learning how the technology works, so that they can educate others about top concerns, Holland said.

Since Linkletter’s lawsuit started, Proctorio has stopped fighting with students on Reddit and suing critics over tweets, Holland said. But Linkletter told Ars that the company still seems to leave students in the dark about how its software works, which “could lead to academic discipline for honest students, and unnecessary stress for everyone,” as his earliest court filing defending his tweets put it.

“I was and am gravely concerned about Proctorio’s lack of transparency about how its algorithms work, and how it labels student behaviours as ‘suspicious,’” Linkletter swore in the filing. One of his deleted tweets urged schools to demand transparency and ask why Proctorio was “hiding” information about how the software worked. But in the end, Linkletter saw no point in continuing to argue over whether two deleted tweets re-posting Proctorio’s videos using YouTube’s sharing tool violated Proctorio’s copyrights.

“I didn’t feel too censored,” Linkletter told Ars. “But yeah, I guess it’s censorship, and I do believe they filed it to try and censor me. But as you can see, I just refused to go down, and I remained their biggest critic.”

As universities prepare to break ahead of the winter holidays, Linkletter told Ars that he’s looking forward to a change in dinner table conversation topics.

“It’s one of those things where I’m 41 and I have aging parents, and I’ve had to waste the last five Christmases talking to them about the lawsuit and their concerns about me,” Linkletter said. “So I’m really looking forward to this Thanksgiving, this Christmas, with this all behind me and the ability to just focus with my parents and my family.”




Meta wins monopoly trial, convinces judge that social networking is dead


People are “bored” by their friends’ content, judge ruled, siding with Meta.

Mark Zuckerberg arrives at court after the Federal Trade Commission alleged that the acquisitions of Instagram in 2012 and WhatsApp in 2014 gave Meta a social media monopoly. Credit: Bloomberg

After years of pushback from the Federal Trade Commission over Meta’s acquisitions of Instagram and WhatsApp, Meta has defeated the FTC’s monopoly claims.

In a Tuesday ruling, US District Judge James Boasberg said the FTC failed to show that Meta has a monopoly in a market dubbed “personal social networking.” In that narrowly defined market, the FTC unsuccessfully argued, Meta supposedly faces only two rivals, Snapchat and MeWe, which struggle to compete due to Meta’s alleged monopoly.

But the days of grouping apps into “separate markets of social networking and social media” are over, Boasberg wrote. He cited the Greek philosopher Heraclitus, who “posited that no man can ever step into the same river twice,” while telling the FTC it missed its chance to block Meta’s purchases.

Essentially, Boasberg agreed with Meta that social media—as it was known in Facebook’s early days—is dead. And that means that Meta now competes with a broader set of rival apps, which includes two hugely popular platforms: TikTok and YouTube.

“When the evidence implies that consumers are reallocating massive amounts of time from Meta’s apps to these rivals and that the amount of substitution has forced Meta to invest gobs of cash to keep up, the answer is clear: Meta is not a monopolist insulated from competition,” Boasberg wrote.

In fact, adding just TikTok alone to the market defeated the FTC’s claims, Boasberg wrote, leaving him to conclude that “Meta holds no monopoly in the relevant market.”

The FTC is not happy about the loss, which comes after Boasberg determined that one of the agency’s key expert witnesses, Scott Hemphill, could not have approached his testimony “with an open mind.” According to Boasberg, Hemphill was aligned with figures publicly calling for the breakup of Facebook, and that made “neutral evaluation of his opinions more difficult” in a case with little direct evidence of monopoly harms.

“We are deeply disappointed in this decision,” Joe Simonson, the FTC’s director of public affairs, told CNBC. “The deck was always stacked against us with Judge Boasberg, who is currently facing articles of impeachment. We are reviewing all our options.”

For Meta, the win ends years of FTC fights intended to break up the company’s family of apps: Facebook, Instagram, and WhatsApp.

“The Court’s decision today recognizes that Meta faces fierce competition,” Jennifer Newstead, Meta’s chief legal officer, said. “Our products are beneficial for people and businesses and exemplify American innovation and economic growth. We look forward to continuing to partner with the Administration and to invest in America.”

Reels’ popularity helped save Meta

Meta app users clicking on Reels helped Meta win.

Boasberg noted that “a majority of Americans’ time” on both Facebook and Instagram “is now spent watching videos,” with Reels becoming “the single most-used part of Facebook.” That puts Meta apps more on par with entertainment apps like TikTok and YouTube, the judge said.

While “connecting with friends remains an important part of both apps,” the judge cited Meta’s evidence showing that Meta had to pump more recommended content from strangers into users’ feeds to account for a trend where its users grew increasingly less inclined to post publicly.

“Both scrolling and sharing have transformed” since Facebook was founded, Boasberg wrote, citing six factors that he concluded invalidated the FTC’s market definition as markets exist today.

The initial market-shifting factors stemmed from leaps in innovation. “First, smartphone usage exploded,” Boasberg explained, then “cell phone data got better,” which made it easier to watch videos without frustrating “freezing and buffering.” Soon after, content recommendation systems improved, with “advanced AI algorithms” helping users “find engaging videos about the things” they “care most about in the world.”

Other factors stemmed from social changes, the judge suggested, describing the fourth factor as a trend where Meta app users started feeling “increasingly bored by their friends’ posts.”

“Longtime users’ friend lists” start fresh, but over time, they “become an often-outdated archive of people they once knew: a casual friend from college, a long-ago friend from summer camp, some guy they met at a party once,” Boasberg wrote. “Posts from friends have therefore grown less interesting.”

Then came TikTok, the fifth factor, Boasberg said, which forced Meta to “evolve” Facebook and Instagram by adding Reels.

And finally, “those five changes both caused and were reinforced by a change in social norms, which evolved to discourage public posting,” Boasberg wrote. “People have increasingly become less interested in blasting out public posts that hundreds of others can see.”

As a result of these tech advancements and social trends, Boasberg said, “Facebook, Instagram, TikTok, and YouTube have thus evolved to have nearly identical main features.” That reality undermined the FTC’s claims that users preferred Facebook and Instagram before Meta shifted its focus away from friends-and-family content.

“The Court simply does not find it credible that users would prefer the Facebook and Instagram apps that existed ten years ago to the versions that exist today,” Boasberg wrote.

Meta apps have not deteriorated, judge ruled

Boasberg repeatedly emphasized that the FTC failed to prove that Meta has a monopoly “now,” either actively or imminently causing harms.

The FTC tried to win by claiming that “Meta has degraded its apps’ quality by increasing their ad load, that falling user sentiment shows that the apps have deteriorated and that Meta has sabotaged its apps by underinvesting in friend sharing,” Boasberg noted.

But, Boasberg said, the FTC failed to show that Meta’s app quality has diminished—a trend that Cory Doctorow dubbed “enshittification,” which Meta apparently successfully argued is not real.

The judge was also swayed by Meta’s arguments that users like seeing ads. Meta showed evidence that it can only profitably increase its ad load when ad quality improves; otherwise, it risks losing engagement. Because “the rate at which users buy something or subscribe to a service based on Meta’s ads has steadily risen,” this suggested “that the ads have gotten more and more likely to connect users to products in which they have an interest,” Boasberg said.

Additionally, surveys of Meta app users that show declining user sentiment are not evidence that its apps are deteriorating in quality, Boasberg said, but are more about “brand reputation.”

“That is unsurprising: ask people how they feel about, say, Exxon Mobil, and their answers will tell you very little about how good its oil is,” Boasberg wrote. “The FTC’s claim that worsening sentiment shows a worsening product is unpersuasive.”

Finally, the FTC’s claim that Meta underinvested in friends-and-family content, to the detriment of its core app users, “makes no sense,” Boasberg wrote, given Meta’s data showing that user posting declined.

“While it is true that users see less content from their friends these days, that is largely due to the friends themselves: people simply post less,” Boasberg wrote. “Users are not seeing less friend content because Meta is hiding it from them, but instead because there is less friend content for Meta to show.”

It’s not even “clear that users want more friend posts,” the judge noted, agreeing with Meta that “instead, what users really seem to want is Reels.”

Further, if Meta were a monopolist, Boasberg seemed to suggest that the platform might be more invested in forcing friends-and-family content than Reels, since “Reels earns Meta less money” due to its smaller ad load.

“Courts presume that sophisticated corporations act rationally,” Boasberg wrote. “Here, the FTC has not offered even an ordinarily persuasive case that Meta is making the economically irrational choice to underinvest in its most lucrative offerings. It certainly has not made a particularly persuasive one.”

Among the critics unhappy with the ruling is Nidhi Hegde, executive director of the American Economic Liberties Project, who suggested that Boasberg’s ruling was “a colossally wrong decision” that “turns a willful blind eye to Meta’s enormous power over social media and the harms that flow from it.”

“Judge Boasberg has purposefully ignored the overwhelming evidence of how Meta became a monopoly—not by building a better product, but by buying its rivals to shut down any real competitors before they could grow,” Hegde said. “These deals let Meta fuse Facebook, Instagram, and WhatsApp into one machine that poisons our children and discourse, bullies publishers and advertisers, and destroys the possibility of healthy online connections with friends and family. By pretending that TikTok’s rise wipes away over a decade of illegal conduct, this court has effectively told every aspiring monopolist that our current justice system is on their side.”

On the other side, industry groups cheered the ruling. Matt Schruers, president of the Computer & Communications Industry Association, suggested that Boasberg concluded “what every Internet user knows—that Meta competes with a number of platforms and the company’s relevant market shares are therefore nowhere close to those required to establish monopoly power.”




YouTube TV’s Disney blackout reminds users that they don’t own what they stream

“I don’t know (or care) which side is responsible for this, but the DVR is not VOD, it is your recording, and shows recorded before the dispute should be available. This is a hard lesson for us all,” an apparently affected customer wrote on Reddit this week.

For current or former cable subscribers, this experience isn’t new. Carrier disputes have temporarily and permanently killed cable subscribers’ access to many channels over the years. And since the early 2000s, many cable companies have phased out DVRs with local storage in favor of cloud-based DVRs. Since then, cable companies have been able to revoke customers’ access to DVR files if, for example, the customer stopped paying for the channel from which the content was recorded. What we’re seeing with YouTube TV’s DVR feature is one of several ways that streaming services mirror cable companies.

Google exits Movies Anywhere

In a move that appears to be best described as tit for tat, Google has removed content purchased via Google Play and YouTube from Movies Anywhere, a Disney-owned unified platform that lets people access digital video purchases from various distributors, including Amazon Prime Video and Fandango.

In removing users’ content, Google may gain some leverage in its discussions with Disney, which is reportedly seeking a larger carriage fee from YouTube TV. The content removals, however, are just one more pain point of the fragmented streaming landscape customers are already dealing with.

Customers inconvenienced

As of this writing, Google and Disney have yet to reach an agreement. On Monday, Google publicly rejected Disney’s request to restore ABC to YouTube TV for yesterday’s Election Day, although the company showed a willingness to find a way to quickly bring back ABC and ESPN (“the channels that people want,” per Google). Disney has escalated things by making its content unavailable to rent or purchase on all Google platforms.

Google is trying to appease customers by saying it will give YouTube TV subscribers a $20 credit if Disney “content is unavailable for an extended period of time.” Some people online have reported receiving a $10 credit already.

Regardless of how this saga ends, the immediate effects have inconvenienced customers of both companies. People subscribe to streaming services and rely on digital video purchases and recordings for easy, instant access, which Google and Disney’s disagreement has disrupted. The squabble has also served as another reminder that in the streaming age, you don’t really own anything.



YouTube denies AI was involved with odd removals of tech tutorials


YouTubers suspect AI is bizarrely removing popular video explainers.

This week, tech content creators began to suspect that AI was making it harder to share some of the most highly sought-after tech tutorials on YouTube, but now YouTube is denying that odd removals were due to automation.

Creators grew alarmed when educational videos that YouTube had allowed for years were suddenly flagged as “dangerous” or “harmful,” with seemingly no way to trigger human review to overturn the removals. AI appeared to be running the show, with creators’ appeals getting denied faster than a human could possibly review them.

Late Friday, a YouTube spokesperson confirmed that videos flagged by Ars have been reinstated, promising that YouTube will take steps to ensure that similar content isn’t removed in the future. But, to creators, it remains unclear why the videos got taken down, as YouTube claimed that both initial enforcement decisions and decisions on appeals were not the result of an automation issue.

Shocked creators were stuck speculating

Rich White, a computer technician who runs an account called CyberCPU Tech, had two videos removed that demonstrated workarounds to install Windows 11 on unsupported hardware.

These videos are popular, White told Ars, with people looking to bypass Microsoft account requirements each time a new build is released. For tech content creators like White, “these are bread and butter videos,” dependably yielding “extremely high views,” he said.
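For context, the best-known workaround in this genre is widely documented: from the Windows Setup environment (Shift+F10 opens a command prompt), you add “LabConfig” values to the registry so the installer skips its hardware compatibility checks. Below is a minimal sketch of that trick in Python; the key path and value names come from community documentation of the bypass, not from White’s videos, so treat the specifics as assumptions.

```python
# Minimal sketch of the widely documented "LabConfig" registry bypass for
# installing Windows 11 on unsupported hardware. Key path and value names
# follow community documentation, not an official Microsoft API; run only
# from the Windows Setup environment, at your own risk.
import winreg

LABCONFIG_PATH = r"SYSTEM\Setup\LabConfig"

# Each DWORD tells Windows Setup to skip one hardware compatibility check.
BYPASSES = {
    "BypassTPMCheck": 1,         # skip the TPM 2.0 requirement
    "BypassSecureBootCheck": 1,  # skip the Secure Boot requirement
    "BypassRAMCheck": 1,         # skip the minimum-RAM requirement
}

key = winreg.CreateKey(winreg.HKEY_LOCAL_MACHINE, LABCONFIG_PATH)
try:
    for name, value in BYPASSES.items():
        winreg.SetValueEx(key, name, 0, winreg.REG_DWORD, value)
finally:
    winreg.CloseKey(key)
```

Tutorials like White’s typically walk through the same edits interactively in regedit; none of it circumvents licensing, which is why creators were baffled that the videos were flagged as harmful.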

Because there’s such high demand, many tech content creators’ channels are filled with these kinds of videos. White’s account has “countless” examples, he said, and in the past, YouTube even featured his most popular video in the genre on a trending list.

To White and others, it’s unclear exactly what has changed on YouTube that triggered removals of this type of content.

YouTube only seemed to be removing recently posted content, White told Ars. However, if the takedowns ever impacted older content, entire channels documenting years of tech tutorials risked disappearing in “the blink of an eye,” another YouTuber behind a tech tips account called Britec09 warned after one of his videos was removed.

The stakes appeared high for everyone, White warned, in a video titled “YouTube Tech Channels in Danger!”

White had already censored content that he planned to post on his channel, fearing it wouldn’t be worth the risk of potentially losing his account, which began in 2020 as a side hustle but has since become his primary source of income. If he continues to change the content he posts to avoid YouTube penalties, it could hurt his account’s reach and monetization. Britec told Ars that he paused a sponsorship due to the uncertainty that he said has already hurt his channel and caused a “great loss of income.”

YouTube’s policies are strict, with the platform known to swiftly remove accounts that receive three strikes for violating community guidelines within 90 days. But, curiously, White had not received any strikes following his content removals. Although Britec reported that his account had received a strike following his video’s removal, White told Ars that YouTube so far had only given him two warnings, so his account is not yet at risk of a ban.

Creators weren’t sure why YouTube might deem this content as harmful, so they tossed around some theories. It seemed possible, White suggested in his video, that AI was detecting this content as “piracy,” but that shouldn’t be the case, he claimed, since his guides require users to have a valid license to install Windows 11. He also thinks it’s unlikely that Microsoft prompted the takedowns, suggesting tech content creators have a “love-hate relationship” with the tech company.

“They don’t like what we’re doing, but I don’t think they’re going to get rid of it,” White told Ars, suggesting that Microsoft “could stop us in our tracks” if it were motivated to end workarounds. But Microsoft doesn’t do that, White said, perhaps because it benefits from popular tutorials that attract swarms of Windows 11 users who otherwise may not use “their flagship operating system” if they can’t bypass Microsoft account requirements.

Those users could become loyal to Microsoft, White said. And eventually, some users may even “get tired of bypassing the Microsoft account requirements, or Microsoft will add a new feature that they’ll happily get the account for, and they’ll relent and start using a Microsoft account,” White suggested in his video. “At least some people will, not me.”

Microsoft declined Ars’ request to comment.

To White, it seemed possible that YouTube was leaning on AI to catch more violations but perhaps recognized the risk of over-moderation and, therefore, wasn’t allowing AI to issue strikes on his account.

But that was just a “theory” that he and other creators came up with and couldn’t confirm, since YouTube’s chatbot for supporting creators seemed to also be “suspiciously AI-driven,” apparently auto-responding even when a “supervisor” was connected, White said in his video.

Absent more clarity from YouTube, creators who post tutorials, tech tips, and computer repair videos were spooked. Their biggest fear was that unannounced changes to automated content moderation could knock them off YouTube for posting videos that seem ordinary and commonplace in tech circles, White and Britec said.

“We are not even sure what we can make videos on,” White said. “Everything’s a theory right now because we don’t have anything solid from YouTube.”

YouTube recommends making the content it’s removing

White’s channel gained popularity after YouTube highlighted an early trending video that he made, showing a workaround to install Windows 11 on unsupported hardware. Following that video, his channel’s views spiked, and then he gradually built up his subscriber base to around 330,000.

In the past, White’s videos in that category had been flagged as violative, but human review got them quickly reinstated.

“They were striked for the same reason, but at that time, I guess the AI revolution hadn’t taken over,” White said. “So it was relatively easy to talk to a real person. And by talking to a real person, they were like, ‘Yeah, this is stupid.’ And they brought the videos back.”

Now, YouTube suggests that human review is causing the removals, which likely doesn’t completely ease creators’ fears about arbitrary takedowns.

Britec’s video was also flagged as dangerous or harmful. He has managed his account, which currently has nearly 900,000 subscribers, since 2009, and he’s worried he risked losing “years of hard work,” he said in his video.

Britec told Ars that “it’s very confusing” for panicked tech content creators trying to understand what content is permissible. It’s particularly frustrating, he noted in his video, that YouTube’s creator tool inspiring “ideas” for posts seemed to contradict the mods’ content warnings and continued to recommend that creators make content on specific topics like workarounds to install Windows 11 on unsupported hardware.

Screenshot from Britec09’s YouTube video, showing YouTube prompting creators to make content that could get their channels removed. Credit: via Britec09

“This tool was to give you ideas for your next video,” Britec said. “And you can see right here, it’s telling you to create content on these topics. And if you did this, I can guarantee you your channel will get a strike.”

From there, creators hit what White described as a “brick wall,” with one of his appeals denied within one minute, which felt like it must be an automated decision. As Britec explained, “You will appeal, and your appeal will be rejected instantly. You will not be speaking to a human being. You’ll be speaking to a bot or AI. The bot will be giving you automated responses.”

YouTube insisted that the decisions weren’t automated, even when an appeal was denied within one minute.

White told Ars that it’s easy for creators to be discouraged and censor their channels rather than fight with the AI. After wasting “an hour and a half trying to reason with an AI about why I didn’t violate the community guidelines” once his first appeal was quickly denied, he “didn’t even bother using the chat function” after the second appeal was denied even faster, White confirmed in his video.

“I simply wasn’t going to do that again,” White said.

All week, the panic spread, reaching fans who follow tech content creators. On Reddit, people recommended saving tutorials lest they risk YouTube taking them down.

“I’ve had people come out and say, ‘This can’t be true. I rely on this every time,’” White told Ars.




TV-focused YouTube update brings AI upscaling, shopping QR codes

YouTube has been streaming for 20 years, but only in the last couple of years has it come to dominate TV streaming. Google’s video platform attracts more TV viewers than Netflix, Disney+, and all the other apps, and Google is looking to further beef up its big-screen appeal with a raft of new features, including shopping, immersive channel surfing, and an official version of the AI upscaling that had creators miffed a few months back.

According to Google, YouTube’s growth has translated into higher payouts. The number of channels earning more than $100,000 annually is up 45 percent in 2025 versus 2024. YouTube is now giving creators some tools to boost their appeal (and hopefully their income) on TV screens. Those elaborate video thumbnails featuring surprised, angry, smiley hosts are about to get even prettier with the new 50MB file size limit. That’s up from a measly 2MB.

Video upscaling is also coming to YouTube, and creators will be opted in automatically. To start, YouTube will be upscaling lower-quality videos to 1080p. In the near future, Google plans to support “super resolution” up to 4K.

The site stresses that it’s not modifying original files—creators will have access to both the original and upscaled files, and they can opt out of upscaling. In addition, super resolution videos will be clearly labeled on the user side, allowing viewers to select the original upload if they prefer. The lack of transparency was a sticking point for creators, some of whom complained about the sudden artificial look of their videos during YouTube’s testing earlier this year.



YouTube’s likeness detection has arrived to help stop AI doppelgängers

AI content has proliferated across the Internet over the past few years, but those early confabulations with mutated hands have evolved into synthetic images and videos that can be hard to differentiate from reality. Having helped to create this problem, Google has some responsibility to keep AI video in check on YouTube. To that end, the company has started rolling out its promised likeness detection system for creators.

Google’s powerful and freely available AI models have helped fuel the rise of AI content, some of which is aimed at spreading misinformation and harassing individuals. Creators and influencers fear their brands could be tainted by a flood of AI videos that show them saying and doing things that never happened—even lawmakers are fretting about this. Google has placed a large bet on the value of AI content, so banning AI from YouTube, as many want, simply isn’t happening.

Earlier this year, YouTube promised tools that would flag face-stealing AI content on the platform. The likeness detection tool, which is similar to the site’s copyright detection system, has now expanded beyond the initial small group of testers. YouTube says the first batch of eligible creators have been notified that they can use likeness detection, but interested parties will need to hand Google even more personal information to get protection from AI fakes.

Sneak Peek: Likeness Detection on YouTube.

Currently, likeness detection is a beta feature in limited testing, so not all creators will see it as an option in YouTube Studio. When it does appear, it will be tucked into the existing “Content detection” menu. In YouTube’s demo video, the setup flow appears to assume the channel has only a single host whose likeness needs protection. That person must verify their identity, which requires a photo of a government ID and a video of their face. It’s unclear why YouTube needs this data in addition to the videos people have already posted with their oh-so-stealable faces, but rules are rules.



YouTube prepares to welcome back banned creators with “second chance” program

A few weeks ago, Google told US Rep. Jim Jordan (R-Ohio) that it would allow creators banned for COVID and election misinformation to rejoin the platform. It didn’t offer many details in the letter, but now YouTube has explained the restoration process. YouTube’s “second chances” are actually more expansive than the letter made it seem. Going forward, almost anyone banned from YouTube will have an opportunity to request a new channel. The company doesn’t guarantee approval, but you can expect to see plenty of banned creators back on Google’s video platform in the coming months.

YouTube will now allow banned creators to request reinstatement, but this is separate from appealing a ban. Creators whose channels are banned still have the option of appealing; if successful, their channel comes back as if nothing happened. After one year, creators will now also have the “second chance” option.

“We know many terminated creators deserve a second chance,” the blog post reads, noting that YouTube itself doesn’t always get things right the first time. The option for getting a new channel will appear in YouTube Studio on the desktop, and Google expects to begin sending out these notices in the coming months. However, anyone terminated for copyright violations is out of luck—Google does not forgive such infringement as easily as it does claiming that COVID is a hoax.

The readmission process will still come with a review by YouTube staff, and the company says it will take multiple factors into consideration, including whether or not the behavior that got the channel banned is still against the rules. This is clearly a reference to COVID and election misinformation, which Google did not allow on YouTube for several years but has since stopped policing. The site will also consider factors like how severe or persistent the violations were and whether the creator’s actions “harmed or may continue to harm the YouTube community.”



YouTube Music is testing AI hosts that will interrupt your tunes

YouTube has a new Labs program, allowing listeners to “discover the next generation of YouTube.” In case you were wondering, that generation is apparently all about AI. The streaming site says Labs will offer a glimpse of the AI features it’s developing for YouTube Music, and it starts with AI “hosts” that will chime in while you’re listening to music. Yes, really.

The new AI music hosts are supposed to provide a richer listening experience, according to YouTube. As you’re listening to tunes, the AI will generate audio snippets similar to, but shorter than, the fake podcasts you can create in NotebookLM. The “Beyond the Beat” host will break in every so often with relevant stories, trivia, and commentary about your musical tastes. YouTube says this feature will appear when you are listening to mixes and radio stations.

The experimental feature is intended to be a bit like having a radio host drop some playful banter while cueing up the next song. It resembles Spotify’s AI DJ, though the YouTube AI doesn’t create playlists like Spotify’s robot does. This is still generative AI, which comes with the risk of hallucinations and low-quality slop, neither of which belongs in your music. That said, Google’s Audio Overviews are often surprisingly good in small doses.
