AI trains on kids’ photos even when parents use strict privacy settings

“Outrageous” —

Even unlisted YouTube videos are used to train AI, watchdog warns.

Human Rights Watch (HRW) continues to reveal how photos of real children casually posted online years ago are being used to train AI models powering image generators—even when platforms prohibit scraping and families use strict privacy settings.

Last month, HRW researcher Hye Jung Han found 170 photos of Brazilian kids that were linked in LAION-5B, a popular AI dataset built from Common Crawl snapshots of the public web. Now, she has released a second report, flagging 190 photos of children from all of Australia’s states and territories, including indigenous children who may be particularly vulnerable to harms.

These photos are linked in the dataset “without the knowledge or consent of the children or their families.” They span the entirety of childhood, making it possible for AI image generators to generate realistic deepfakes of real Australian children, Han’s report said. Perhaps even more concerning, the URLs in the dataset sometimes reveal identifying information about children, including their names and locations where photos were shot, making it easy to track down children whose images might not otherwise be discoverable online.

That puts children in danger of privacy and safety risks, Han said, and some parents thinking they’ve protected their kids’ privacy online may not realize that these risks exist.

From a single link to one photo that showed “two boys, ages 3 and 4, grinning from ear to ear as they hold paintbrushes in front of a colorful mural,” Han could trace “both children’s full names and ages, and the name of the preschool they attend in Perth, in Western Australia.” And perhaps most disturbingly, “information about these children does not appear to exist anywhere else on the Internet”—suggesting that families were particularly cautious in shielding these boys’ identities online.

Another image Han found linked in the dataset had been posted with stricter privacy settings. The photo showed “a close-up of two boys making funny faces, captured from a video posted on YouTube of teenagers celebrating” during the week after their final exams, Han reported. Whoever posted that YouTube video adjusted privacy settings so that it would be “unlisted” and would not appear in searches.

Only someone with a link to the video was supposed to have access, but that didn’t stop Common Crawl from archiving the image, nor did YouTube policies prohibiting AI scraping or harvesting of identifying information.

Reached for comment, YouTube’s spokesperson, Jack Malon, told Ars that YouTube has “been clear that the unauthorized scraping of YouTube content is a violation of our Terms of Service, and we continue to take action against this type of abuse.” But Han worries that even if YouTube did join efforts to remove images of children from the dataset, the damage has been done, since AI tools have already trained on them. That’s why—even more than parents need tech companies to up their game blocking AI training—kids need regulators to intervene and stop training before it happens, Han’s report said.

Han’s report comes a month before Australia is expected to release a reformed draft of the country’s Privacy Act. Those reforms include a draft of Australia’s first child data protection law, known as the Children’s Online Privacy Code, but Han told Ars that even people involved in long-running discussions about reforms aren’t “actually sure how much the government is going to announce in August.”

“Children in Australia are waiting with bated breath to see if the government will adopt protections for them,” Han said, emphasizing in her report that “children should not have to live in fear that their photos might be stolen and weaponized against them.”

AI uniquely harms Australian kids

To hunt down the photos of Australian kids, Han “reviewed fewer than 0.0001 percent of the 5.85 billion images and captions contained in the data set.” Because her sample was so small, Han expects that her findings represent a significant undercount of how many children could be impacted by the AI scraping.

“It’s astonishing that out of a random sample size of about 5,000 photos, I immediately fell into 190 photos of Australian children,” Han told Ars. “You would expect that there would be more photos of cats than there are personal photos of children,” since LAION-5B is a “reflection of the entire Internet.”
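LAION-5B ships as metadata—billions of rows pairing an image URL with its caption—rather than as the images themselves, which is what makes a sampling audit like HRW's possible. As a rough illustration only, here is a minimal sketch of how such a sample might be drawn with pandas, assuming a local copy of one of the dataset's parquet metadata shards; the filename and keyword list are hypothetical, and any hit would still need manual verification by a researcher.

```python
# LAION-5B metadata is distributed as parquet files of image URLs and
# captions, not images. The filename and keywords below are hypothetical.
import pandas as pd

df = pd.read_parquet("laion5b_metadata_part_00000.parquet", columns=["URL", "TEXT"])

# Draw a small random sample, mirroring the tiny fraction HRW reviewed.
sample = df.sample(n=5000, random_state=0)

# Crude keyword filter over captions to surface candidate rows for review.
keywords = ["birthday", "preschool", "kindergarten", "my son", "my daughter"]
hits = sample[sample["TEXT"].str.contains("|".join(keywords), case=False, na=False)]
print(f"{len(hits)} candidate captions flagged for manual review")
```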

LAION is working with HRW to remove links to all the images flagged, but cleaning up the dataset does not seem to be a fast process. Han told Ars that based on her most recent exchange with the German nonprofit, LAION had not yet removed links to photos of Brazilian kids that she reported a month ago.

LAION declined Ars’ request for comment.

In June, LAION’s spokesperson, Nathan Tyler, told Ars that, “as a nonprofit, volunteer organization,” LAION is committed to doing its part to help with the “larger and very concerning issue” of misuse of children’s data online. But removing links from the LAION-5B dataset does not remove the images online, Tyler noted, where they can still be referenced and used in other AI datasets, particularly those relying on Common Crawl. And Han pointed out that removing the links from the dataset doesn’t change AI models that have already trained on them.

“Current AI models cannot forget data they were trained on, even if the data was later removed from the training data set,” Han’s report said.

Kids whose images are used to train AI models are exposed to a variety of harms, Han reported, including a risk that image generators could more convincingly create harmful or explicit deepfakes. In Australia last month, “about 50 girls from Melbourne reported that photos from their social media profiles were taken and manipulated using AI to create sexually explicit deepfakes of them, which were then circulated online,” Han reported.

For First Nations children—”including those identified in captions as being from the Anangu, Arrernte, Pitjantjatjara, Pintupi, Tiwi, and Warlpiri peoples”—the inclusion of links to photos threatens unique harms. Because culturally, First Nations peoples “restrict the reproduction of photos of deceased people during periods of mourning,” Han said the AI training could perpetuate harms by making it harder to control when images are reproduced.

Once an AI model trains on the images, there are other obvious privacy risks, including a concern that AI models are “notorious for leaking private information,” Han said. Guardrails added to image generators do not always prevent these leaks, with some tools “repeatedly broken,” Han reported.

For parents troubled by the privacy risks, LAION recommends removing images of kids from the web entirely, calling that the most effective way to prevent abuse. But Han told Ars that’s “not just unrealistic, but frankly, outrageous.”

“The answer is not to call for children and parents to remove wonderful photos of kids online,” Han said. “The call should be [for] some sort of legal protections for these photos, so that kids don’t have to always wonder if their selfie is going to be abused.”

YouTube tries convincing record labels to license music for AI song generator

Jukebox zeroes —

Video site needs labels’ content to legally train AI song generators.

YouTube is in talks with record labels to license their songs for artificial intelligence tools that clone popular artists’ music, hoping to win over a skeptical industry with upfront payments.

The Google-owned video site needs labels’ content to legally train AI song generators, as it prepares to launch new tools this year, according to three people familiar with the matter.

The company has recently offered lump sums of cash to the major labels—Sony, Warner, and Universal—to try to convince more artists to allow their music to be used in training AI software, according to several people briefed on the talks.

However, many artists remain fiercely opposed to AI music generation, fearing it could undermine the value of their work. Any move by a label to force their stars into such a scheme would be hugely controversial.

“The industry is wrestling with this. Technically the companies have the copyrights, but we have to think through how to play it,” said an executive at a large music company. “We don’t want to be seen as a Luddite.”

YouTube last year began testing a generative AI tool that lets people create short music clips by entering a text prompt. The product, initially named “Dream Track,” was designed to imitate the sound and lyrics of well-known singers.

But only 10 artists agreed to participate in the test phase, including Charli XCX, Troye Sivan, and John Legend, and Dream Track was made available to just a small group of creators.

YouTube wants to sign up “dozens” of artists to roll out a new AI song generator this year, said two of the people.

YouTube said: “We’re not looking to expand Dream Track but are in conversations with labels about other experiments.”

Licenses or lawsuits

YouTube is seeking new deals at a time when AI companies such as OpenAI are striking licensing agreements with media groups to train large language models, the systems that power AI products such as the ChatGPT chatbot. Some of those deals are worth tens of millions of dollars to media companies, insiders say.

The deals being negotiated in music would be different. They would not be blanket licenses but rather would apply to a select group of artists, according to people briefed on the discussions.

It would be up to the labels to encourage their artists to participate in the new projects. That means the final amounts YouTube might be willing to pay the labels are at this stage undetermined.

The deals would look more like the one-off payments from social media companies such as Meta or Snap to entertainment groups for access to their music, rather than the royalty-based arrangements labels have with Spotify or Apple, these people said.

YouTube’s new AI tool, which is unlikely to carry the Dream Track brand, could form part of YouTube’s Shorts platform, which competes with TikTok. Talks continue and deal terms could still change, the people said.

YouTube’s latest move comes as the leading record companies on Monday sued two AI start-ups, Suno and Udio, which they allege are illegally using copyrighted recordings to train their AI models. A music industry group is seeking “up to $150,000 per work infringed,” according to the filings.

After facing the threat of extinction following the rise of Napster in the 2000s, music companies are trying to get ahead of disruptive technology this time around. The labels are keen to get involved with licensed products that use AI to create songs using their music copyrights—and get paid for it.

Sony Music, which did not participate in the first phase of YouTube’s AI experiment, is in negotiations with the tech group to make available some of its music to the new tools, said a person familiar with the matter. Warner and Universal, whose artists participated in the test phase, are also in talks with YouTube about expanding the product, these people said.

In April, more than 200 musicians, including Billie Eilish and the estate of Frank Sinatra, signed an open letter.

“Unchecked, AI will set in motion a race to the bottom that will degrade the value of our work and prevent us from being fairly compensated for it,” the letter said.

YouTube added: “We are always testing new ideas and learning from our experiments; it’s an important part of our innovation process. We will continue on this path with AI and music as we build for the future.”

© 2024 The Financial Times Ltd. All rights reserved. Not to be redistributed, copied, or modified in any way.

Washing machine chime scandal shows how absurd YouTube copyright abuse can get

YouTube’s Content ID system—which automatically detects content registered by rightsholders—is “completely fucking broken,” a YouTuber called “Albino” declared in a rant on X (formerly Twitter) viewed more than 950,000 times.

Albino, who is also a popular Twitch streamer, complained that his YouTube video of a Fallout playthrough was demonetized because a Samsung washing machine randomly chimed to signal that a laundry cycle had finished while he was streaming.

Apparently, YouTube had automatically scanned Albino’s video and detected the washing machine chime as a song called “Done”—which Albino quickly saw was uploaded to YouTube by a musician known as Audego nine years ago.

But when Albino hit play on Audego’s song, the only thing that he heard was a 30-second clip of the washing machine chime. To Albino it was obvious that Audego didn’t have any rights to the jingle, which Dexerto reported actually comes from the song “Die Forelle” (“The Trout”) by Austrian composer Franz Schubert.

The song was composed in 1817 and is in the public domain. Samsung has used it to signal the end of a wash cycle for years, sparking debate over whether it’s the catchiest washing machine song and inspiring at least one violinist to perform a duet with her machine. It’s been a source of delight for many Samsung customers, but for Albino, hearing the jingle appropriated on YouTube only inspired ire.

“A guy recorded his fucking washing machine and uploaded it to YouTube with Content ID,” Albino said in a video on X. “And now I’m getting copyright claims” while “my money” is “going into the toilet and being given to this fucking slime.”

Albino suggested that YouTube had potentially allowed Audego to make invalid copyright claims for years without detecting the seemingly obvious abuse.

“How is this still here?” Albino asked. “It took me one Google search to figure this out,” and “now I’m sharing revenue with this? That’s insane.”

At first, Team YouTube gave Albino a boilerplate response on X, writing, “We understand how important it is for you. From your vid, it looks like you’ve recently submitted a dispute. When you dispute a Content ID claim, the person who claimed your video (the claimant) is notified and they have 30 days to respond.”

Albino expressed deep frustration at YouTube’s response, given how “egregious” he considered the copyright abuse to be.

“Just wait for the person blatantly stealing copyrighted material to respond,” Albino responded to YouTube. “Ah okay, yes, I’m sure they did this in good faith and will make the correct call, though it would be a shame if they simply clicked ‘reject dispute,’ took all the ad revenue money and forced me to risk having my channel terminated to appeal it!! XDxXDdxD!! Thanks Team YouTube!”

Soon after, YouTube confirmed on X that Audego’s copyright claim was indeed invalid. YouTube ultimately released the claim and told Albino to expect the changes to be reflected on his channel within two business days.

Ars could not immediately reach YouTube or Albino for comment.

Widespread abuse of Content ID continues

YouTubers have complained about abuse of Content ID for years. Techdirt’s Timothy Geigner agreed with Albino’s assessment that the YouTube system is “hopelessly broken,” noting that sometimes content is flagged by mistake. But just as easily, bad actors can abuse the system to claim “content that simply isn’t theirs” and sometimes seize millions in ad revenue.

In 2021, YouTube announced that it had invested “hundreds of millions of dollars” to create content management tools, of which Content ID quickly emerged as the platform’s go-to solution to detect and remove copyrighted materials.

At that time, YouTube claimed that Content ID was created as a “solution for those with the most complex rights management needs,” like movie studios and record labels whose movie clips and songs are most commonly uploaded by YouTube users. YouTube warned that without Content ID, “rightsholders could have their rights impaired and lawful expression could be inappropriately impacted.”

Since its rollout, more than 99 percent of copyright actions on YouTube have consistently been triggered automatically through Content ID.
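Content ID’s matcher is proprietary, but the general idea behind automated audio matching is well known from systems like Shazam: reduce a recording to a compact “fingerprint” of spectrogram peaks and compare fingerprints rather than raw audio. The sketch below is a toy illustration of that general technique, not YouTube’s algorithm; it also hints at why a 30-second washing machine chime registered as a reference file can match the same chime in anyone else’s video.

```python
# Toy audio fingerprinting: pick out prominent spectrogram peaks and hash
# pairs of peaks into compact keys that survive re-encoding and noise.
# Illustrative only -- not Content ID's actual implementation.
import numpy as np
from scipy.ndimage import maximum_filter
from scipy.signal import spectrogram

def fingerprint(samples: np.ndarray, rate: int) -> set[tuple[int, int, int]]:
    _, _, spec = spectrogram(samples, fs=rate, nperseg=1024)
    spec = np.log1p(spec)
    # Keep points that are the maximum of their local neighborhood and
    # louder than the overall average: the "constellation" of peaks.
    peaks = (spec == maximum_filter(spec, size=20)) & (spec > spec.mean())
    f_idx, t_idx = np.nonzero(peaks)
    order = np.argsort(t_idx)
    f_idx, t_idx = f_idx[order], t_idx[order]
    hashes = set()
    # Pair each peak with a few peaks shortly after it; (f1, f2, dt) is a
    # key robust to volume changes and mild distortion.
    for i in range(len(t_idx)):
        for j in range(i + 1, min(i + 6, len(t_idx))):
            dt = int(t_idx[j] - t_idx[i])
            if 0 < dt <= 64:
                hashes.add((int(f_idx[i]), int(f_idx[j]), dt))
    return hashes

# A claimed chime matches a video when their hash sets overlap heavily, e.g.:
#   shared = fingerprint(chime, 44100) & fingerprint(video_audio, 44100)
```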

And just as consistently, YouTube has seen widespread abuse of Content ID, terminating “tens of thousands of accounts each year that attempt to abuse our copyright tools,” YouTube said. YouTube also acknowledged in 2021 that “just one invalid reference file in Content ID can impact thousands of videos and users, stripping them of monetization or blocking them altogether.”

To help rightsholders and creators track how much copyrighted content is removed from the platform, YouTube started releasing biannual transparency reports in 2021. The Electronic Frontier Foundation (EFF), a nonprofit digital rights group, applauded YouTube’s “move towards transparency” while criticizing the company’s “claim that YouTube is adequately protecting its creators.”

“That rings hollow,” EFF reported in 2021, noting that “huge conglomerates have consistently pushed for more and more restrictions on the use of copyrighted material, at the expense of fair use and, as a result, free expression.” As EFF saw it then, YouTube’s Content ID system mainly served to appease record labels and movie studios, while creators felt “pressured” not to dispute Content ID claims out of “fear” that their channel might be removed if YouTube consistently sided with rights holders.

According to YouTube, “it’s impossible for matching technology to take into account complex legal considerations like fair use or fair dealing,” and that impossibility seemingly ensures that creators bear the brunt of automated actions even when it’s fair to use copyrighted materials.

At that time, YouTube described Content ID as “an entirely new revenue stream from ad-supported, user generated content” for rights holders, who made more than $5.5 billion from Content ID matches by December 2020. More recently, YouTube reported that figure had climbed above $9 billion as of December 2022. With so much money at play, it’s easy to see how the system could be seen as disproportionately favoring rights holders, while creators continue to suffer from income diverted by the automated system.

Studio: Takedown notice for 15-year-old fan-made Hunt for Gollum was a mistake

The precious will be ours —

The Hunt for Gollum fan film racked up more than 13 million views on YouTube.

A day after announcing that the tentatively titled Lord of the Rings: The Hunt for Gollum was scheduled for a 2026 release, Warner Bros. immediately moved to block a beloved 2009 unauthorized fan film with the exact same name on YouTube.

Less than 12 hours later, though, the studio appeared to back down from this copyright fight, reinstating the fan film on YouTube amid fan backlash protesting the copyright strike on Reddit as a “dick move.”

In 2009, director Chris Bouchard—who most recently directed Netflix’s The Little Mermaid—released The Hunt for Gollum through Independent Online Cinema after he claimed to have “reached an understanding” with the rightsholder of The Lord of the Rings books, then called Tolkien Enterprises (now called Middle-earth Enterprises).

Bouchard’s film was developed on a shoestring budget of $5,000, with a 39-minute story based on appendixes that Tolkien wrote for The Lord of the Rings books, The Washington Post reported. Upon its release, the fan film quickly racked up millions of views online, reaching more than 13 million on YouTube before it was suddenly blocked this week.

“Video unavailable,” the video hosted on Independent Online Cinema’s YouTube channel temporarily said. “This video contains content from Warner Bros. Entertainment, who has blocked it on copyright grounds.”

“That’s so lame,” one Reddit user wrote, dissing Warner Bros. as “greedy fks” that “can’t help but hoard every penny, like Smaug. The video already had 13 million views and was peacefully existing for all these years.”

By mid-day Friday, however, Independent Online Cinema announced that the video was reinstated, confirming that “we’re back, thanks to WB for being so understanding to us as fans and artists.” In an email, Bouchard told Ars that it’s still unclear what triggered the block, but “we’re just all very happy our small film still is out there, as it was made for love.”

Bouchard’s film was never intended to turn a profit, opening with a disclaimer that noted the fan film was not affiliated with Tolkien, The Lord of the Rings movie director Peter Jackson, or any studio, and was made “solely for the personal, uncompensated enjoyment of ourselves and other Tolkien fans.”

Bouchard confirmed to Ars that the fan film never made money on YouTube, noting that “there have been no ads or revenue” and “the film is a fan film, non-commercial project, made just for the creative endeavor.”

While the movie was blocked, fans and media outlets speculated that Warner Bros. may have wielded the copyright claim because the major motion picture might tell a similar story to Bouchard’s film, or possibly just to ensure that there’s no confusion when moviegoers search for details about the new movie online. The Hollywood Reporter speculated that the fan film may even have helped inspire the release of the Warner Bros. film.

Some fans, however, agreed that Warner Bros. was acting logically to protect its intellectual property (IP), saying that it “makes sense” and “any other company would have done the same.”

Others moved to help fans download the film before it was potentially removed everywhere online, offering to share their own downloads if necessary and pointing out that the film could still be found on at least one unofficial YouTube channel not seemingly associated with Independent Online Cinema.

Bouchard was touched by all the fan outcry, telling Ars that it has “been incredible that plenty of folks out there seemed to remember our project.”

It’s possible the backlash pushed Warner Bros. to reverse the block. In 2019, the Harvard Business Review noted that “until recently, companies have largely tolerated individuals who seek to bring their fictional worlds to life, on the theory that going after one’s fans is not good for business.”

“Overreaching by companies can threaten creativity, competition, fan goodwill, and, more fundamentally, the freedom to play and ‘geek out’ about the stories we love,” HBR warned.

At least one Redditor agreed with this viewpoint, posting, “I will never understand moves like this. Literally no one will pass on watching the movie because some fan film exists. Same with gaming companies that take down every fan project (Nintendo obviously). I’ve read before, that it is to protect the IP, but other companies encourage that stuff and don’t lose the IP.”

Warner Bros. already faces “heavy skepticism” that a full-length feature seemingly centered on Gollum will be any good—even one directed by and starring original Gollum actor Andy Serkis, The Hollywood Reporter reported. Some Redditors declared that they might boycott the new movie if Bouchard’s film wasn’t reinstated on YouTube.

“This makes me more likely to pass on WB’s movie honestly,” one Reddit user wrote, while another declared, “I already hate their new film now.”

But it’s also just as possible that Warner Bros. did not actively seek to remove the fan film online, instead submitting video files to YouTube that potentially triggered an automated removal. Bouchard told Ars that’s his suspicion.

Ars could not immediately reach Warner Bros. or YouTube for comment.

While some Lord of the Rings fans aren’t sure that the new movie will be as good as the widely acclaimed Jackson trilogy, Bouchard said that his team can’t wait to see Warner Bros.’ version of The Hunt for Gollum.

“We are excited as fans about the new movie” and seeing “how the story will be visualized!” Bouchard told Ars. His film centers on Gandalf and Aragorn as they hunt for Gollum, but there’s no telling yet whether Warner Bros.’ movie will avoid making Gollum its main character.

On YouTube, the Independent Online Cinema account posted under the video that they’re “glad” that any fans eager to see the Warner Bros. film can, in the “meantime,” enjoy “our low-budget effort at the story.”

Google sues two crypto app makers over allegedly vast “pig butchering” scheme

Foul Play —

Crypto and other investment app scams promoted on YouTube targeted 100K users.

Google has sued two app developers based in China over an alleged scheme targeting 100,000 users globally over four years with at least 87 fraudulent cryptocurrency and other investor apps distributed through the Play Store.

The tech giant alleged that scammers lured victims with “promises of high returns” from “seemingly legitimate” apps offering investment opportunities in cryptocurrencies and other products. Commonly known as “pig-butchering schemes,” these scams displayed fake returns on investments, but when users went to withdraw the funds, they discovered they could not.

In some cases, Google alleged, developers would “double down on the scheme by requesting various fees and other payments from victims that were supposedly necessary for the victims to recover their principal investments and purported gains.”

Google accused the app developers—Yunfeng Sun (also known as “Alphonse Sun”) and Hongnam Cheung (also known as “Zhang Hongnim” and “Stanford Fischer”)—of conspiring to commit “hundreds of acts of wire fraud” to further “an unlawful pattern of racketeering activity” that siphoned up to $75,000 from each user successfully scammed.

Google was able to piece together the elaborate alleged scheme because the developers used a wide array of Google products and services to target victims, Google said, including Google Play, Voice, Workspace, and YouTube, breaching each one’s terms of service. Perhaps most notably, the Google Play Store’s developer program policies “forbid developers to upload to Google Play ‘apps that expose users to deceptive or harmful financial products and services,’ including harmful products and services ‘related to the management or investment of money and cryptocurrencies.'”

Beyond the alleged harm to Google users, Google claimed that each product and service’s reputation would continue to suffer unless the US district court in New York ordered a permanent injunction stopping the developers from using any Google products or services.

“By using Google Play to conduct their fraud scheme,” scammers “have threatened the integrity of Google Play and the user experience,” Google alleged. “By using other Google products to support their scheme,” the scammers “also threaten the safety and integrity of those other products, including YouTube, Workspace, and Google Voice.”

Google’s lawsuit is the company’s most recent attempt to block fraudsters from targeting Google products by suing individuals directly, Bloomberg noted. Last year, Google sued five people accused of distributing a fake Bard AI chatbot that instead downloaded malware to Google users’ devices, Bloomberg reported.

How did the alleged Google Play scams work?

Google said that the accused developers “varied their approach from app to app” when allegedly trying to scam users out of thousands of dollars but primarily relied on three methods to lure victims.

The first method relied on sending text messages using Google Voice—such as “I am Sophia, do you remember me?” or “I miss you all the time, how are your parents Mike?”—”to convince the targeted victims that they were sent to the wrong number.” From there, the scammers would apparently establish “friendships” or “romantic relationships” with victims before moving the conversation to apps like WhatsApp, where they would “offer to guide the victim through the investment process, often reassuring the victim of any doubts they had about the apps.” These supposed friends, Google claimed, would “then disappear once the victim tried to withdraw funds.”

Another strategy allegedly employed by scammers relied on videos posted to platforms like YouTube, where fake investment opportunities would be promoted, promising “rates of return” as high as “two percent daily.”

The third tactic, Google said, pushed bogus affiliate marketing campaigns, promising users commissions for “signing up additional users.” These apps, Google claimed, were advertised on social media as “a guaranteed and easy way to earn money.”

Once a victim was drawn into using one of the fraudulent apps, “user interfaces sought to convince victims that they were maintaining balances on the app and that they were earning ‘returns’ on their investments,” Google said.

Occasionally, users would be allowed to withdraw small amounts, convincing them that it was safe to invest more money, but “later attempts to withdraw purported returns simply did not work.” And sometimes the scammers would “bilk” victims out of “even more money,” Google said, by requesting additional funds be submitted to make a withdrawal.

“Some demands” for additional funds, Google found, asked for anywhere “from 10 to 30 percent to cover purported commissions and/or taxes.” Victims, of course, “still did not receive their withdrawal requests even after these additional fees were paid,” Google said.

Which apps were removed from the Play Store?

Google tried to remove apps as soon as they were discovered to be fraudulent, but Google claimed that scammers concocted new aliases and infrastructure to “obfuscate their connection to suspended fraudulent apps.” Because scammers relied on so many different Google services, Google was able to connect the scheme to the accused developers through various business records.

Fraudulent apps named in the complaint include fake cryptocurrency exchanges called TionRT and SkypeWallet. To make the exchanges appear legitimate, scammers put out press releases on newswire services and created YouTube videos likely relying on actors to portray company leadership.

In one YouTube video promoting SkypeWallet, the supposed co-founder of Skype Coin uses the name “Romser Bennett,” which is the same name used for the supposed founder of another fraudulent app called OTCAI2.0, Google said. In each video, a different actor, presumably hired, plays the part of “Romser Bennett.” In other videos, Google found the same actor playing an engineer named “Rodriguez” for one app and a technical leader named “William Bryant” for another app.

Another fraudulent app that was flagged by Google was called the Starlight app. Promoted on TikTok and Instagram, Google said, that app promised “that users could earn commissions by simply watching videos.”

The Starlight app was downloaded approximately 23,000 times and seemingly primarily targeted users in Ghana, allegedly scamming at least 6,000 Ghanaian users out of initial investment capital that they were told was required before they could start earning money on the app.

Across all 87 fraudulent apps that Google has removed, Google estimated that approximately 100,000 users were victimized, including approximately 8,700 in the United States.

Currently, Google is not aware of any live apps in the Play Store connected to the alleged scheme, the complaint said, but scammers intent on furthering the scheme “will continue to harm Google and Google Play users” without a permanent injunction, Google warned.

Facebook secretly spied on Snapchat usage to confuse advertisers, court docs say

“I can’t think of a good argument for why this is okay” —

Zuckerberg told execs to “figure out” how to spy on encrypted Snapchat traffic.

Unsealed court documents have revealed more details about a secret Facebook project initially called “Ghostbusters,” designed to sneakily access encrypted Snapchat usage data to give Facebook a leg up on its rival, just when Snapchat was experiencing rapid growth in 2016.

The documents were filed in a class-action lawsuit from consumers and advertisers, accusing Meta of anticompetitive behavior that blocks rivals from competing in the social media ads market.

“Whenever someone asks a question about Snapchat, the answer is usually that because their traffic is encrypted, we have no analytics about them,” Facebook CEO Mark Zuckerberg (who has since rebranded his company as Meta) wrote in a 2016 email to Javier Olivan.

“Given how quickly they’re growing, it seems important to figure out a new way to get reliable analytics about them,” Zuckerberg continued. “Perhaps we need to do panels or write custom software. You should figure out how to do this.”

At the time, Olivan was Facebook’s head of growth, but now he’s Meta’s chief operating officer. He responded to Zuckerberg’s email saying that he would have the team from Onavo—a controversial traffic-analysis app acquired by Facebook in 2013—look into it.

Olivan told the Onavo team that he needed “out of the box thinking” to satisfy Zuckerberg’s request. He “suggested potentially paying users to ‘let us install a really heavy piece of software'” to intercept users’ Snapchat data, a court document shows.

What the Onavo team eventually came up with was a project internally known as “Ghostbusters,” an obvious reference to Snapchat’s logo featuring a white ghost. Later, as the project grew to include other Facebook rivals, including YouTube and Amazon, the project was called the “In-App Action Panel” (IAAP).

The IAAP program’s purpose was to gather granular insights into users’ engagement with rival apps to help Facebook develop products as needed to stay ahead of competitors. For example, two months after Zuckerberg’s 2016 email, Meta launched Stories, a Snapchat copycat feature, on Instagram, which the Motley Fool noted rapidly became a key ad revenue source for Meta.

In an email to Olivan, the Onavo team described the “technical solution” devised to help Zuckerberg figure out how to get reliable analytics about Snapchat users. It worked by “develop[ing] ‘kits’ that can be installed on iOS and Android that intercept traffic for specific sub-domains, allowing us to read what would otherwise be encrypted traffic so we can measure in-app usage,” the Onavo team said.

Olivan was told that these so-called “kits” used a “man-in-the-middle” attack typically employed by hackers to secretly intercept data passed between two parties. Users were recruited by third parties who distributed the kits “under their own branding” so that users wouldn’t connect the kits to Onavo unless they analyzed the traffic with a specialized tool like Wireshark. TechCrunch reported in 2019 that sometimes teens were paid to install these kits. After that report, Facebook promptly shut down the project.

This “man-in-the-middle” tactic, consumers and advertisers suing Meta have alleged, “was not merely anticompetitive, but criminal,” seemingly violating the Wiretap Act. It was used to snoop on Snapchat starting in 2016, on YouTube from 2017 to 2018, and on Amazon in 2018, relying on creating “fake digital certificates to impersonate trusted Snapchat, YouTube, and Amazon analytics servers to redirect and decrypt secure traffic from those apps for Facebook’s strategic analysis.”
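The filings describe the technique only at a high level. As a generic illustration of the “man-in-the-middle” class of interception, here is a minimal sketch using the open source mitmproxy tool, which can decrypt and log traffic to a chosen subdomain once a device has been made to trust the proxy’s root certificate; the subdomain below is a placeholder, and this is not Onavo’s actual code.

```python
# sniff.py -- run with: mitmproxy -s sniff.py
# Generic TLS-interception sketch: once a client trusts the proxy's root
# certificate, the proxy can decrypt, inspect, and re-encrypt traffic.
# The subdomain below is a placeholder, not one named in the filings.
from mitmproxy import http

WATCHED = ("analytics.example.com",)

def request(flow: http.HTTPFlow) -> None:
    # Called for every proxied HTTP(S) request.
    if flow.request.pretty_host.endswith(WATCHED):
        print(flow.request.method, flow.request.pretty_url)
```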

Ars could not reach Snapchat, Google, or Amazon for comment.

Facebook allegedly sought to confuse advertisers

Not everyone at Facebook supported the IAAP program. “The company’s highest-level engineering executives thought the IAAP Program was a legal, technical, and security nightmare,” another court document said.

Pedro Canahuati, then-head of security engineering, warned that incentivizing users to install the kits did not necessarily mean that users understood what they were consenting to.

“I can’t think of a good argument for why this is okay,” Canahuati said. “No security person is ever comfortable with this, no matter what consent we get from the general public. The general public just doesn’t know how this stuff works.”

Mike Schroepfer, then-chief technology officer, argued that Facebook wouldn’t want rivals to employ a similar program analyzing their encrypted user data.

“If we ever found out that someone had figured out a way to break encryption on [WhatsApp] we would be really upset,” Schroepfer said.

While the unsealed emails detailing the project have recently raised eyebrows, Meta’s spokesperson told Ars that “there is nothing new here—this issue was reported on years ago. The plaintiffs’ claims are baseless and completely irrelevant to the case.”

According to Business Insider, advertisers suing said that Meta never disclosed its use of Onavo “kits” to “intercept rivals’ analytics traffic.” This is seemingly relevant to their case alleging anticompetitive behavior in the social media ads market, because Facebook’s conduct, allegedly breaking wiretapping laws, afforded Facebook an opportunity to raise its ad rates “beyond what it could have charged in a competitive market.”

Since the documents were unsealed, Meta has responded with a court filing that said: “Snapchat’s own witness on advertising confirmed that Snap cannot ‘identify a single ad sale that [it] lost from Meta’s use of user research products,’ does not know whether other competitors collected similar information, and does not know whether any of Meta’s research provided Meta with a competitive advantage.”

This conflicts with testimony from a Snapchat executive, who alleged that the project “hamper[ed] Snap’s ability to sell ads” by causing “advertisers to not have a clear narrative differentiating Snapchat from Facebook and Instagram.” Both internally and externally, “the intelligence Meta gleaned from this project was described” as “devastating to Snapchat’s ads business,” a court filing said.

YouTube will require disclosure of AI-manipulated videos from creators

You could also just ban manipulations altogether? —

YouTube wants “realistic” likenesses or audio fabrications to be labeled.

YouTube is rolling out a new requirement for content creators: You must disclose when you’re using AI-generated content in your videos. The disclosure appears in the video upload UI and will be used to power an “altered content” warning on videos.

Google previewed the “misleading AI content” policy in November, but the questionnaire is now going live. Google is mostly concerned about altered depictions of real people or events, which sounds like another election-season concern about how AI can mislead people. Just last week, Google disabled election questions for its “Gemini” chatbot.

As always, the exact rules on YouTube are up for interpretation. Google says it’s “requiring creators to disclose to viewers when realistic content—content a viewer could easily mistake for a real person, place, or event—is made with altered or synthetic media, including generative AI,” but doesn’t require creators to disclose manipulated content that is “clearly unrealistic, animated, includes special effects, or has used generative AI for production assistance.”

Google gives examples of when a disclosure is necessary, and the new video upload questionnaire walks content creators through these requirements:

  • Using the likeness of a realistic person: Digitally altering content to replace the face of one individual with another’s or synthetically generating a person’s voice to narrate a video.
  • Altering footage of real events or places: Such as making it appear as if a real building caught fire, or altering a real cityscape to make it appear different from reality.
  • Generating realistic scenes: Showing a realistic depiction of fictional major events, like a tornado moving toward a real town.
In screenshots of Google’s video upload questionnaire, the resulting disclosure appears as a small “altered or synthetic content” message at the bottom of the video description, which viewers can expand for slightly more info.

Google says the labels will start rolling out “across all YouTube surfaces and formats in the weeks ahead, beginning with the YouTube app on your phone, and soon on your desktop and TV.” The company says it’s also working on a process for people who are the subject of an AI-manipulated video to request its removal, but it doesn’t have details on that yet.

Climate denialists find new ways to monetize disinformation on YouTube

Content creators have spent the past five years developing new tactics to evade YouTube’s policies blocking monetization of videos making false claims about climate change, a report from a nonprofit advocacy group, the Center for Countering Digital Hate (CCDH), warned Tuesday.

What the CCDH found is that content creators who could no longer monetize videos spreading “old” forms of climate denial—including claims that “global warming is not happening” or “human-generated greenhouse gasses are not causing global warming”—have moved on.

Now they’re increasingly pushing other claims that contradict climate science, which YouTube has not yet banned and may not ever ban. These include harmful claims that “impacts of global warming are beneficial or harmless,” “climate solutions won’t work,” and “climate science and the climate movement are unreliable.”

The CCDH uncovered these new climate-denial tactics by using artificial intelligence to scan transcripts of 12,058 videos posted on 96 YouTube channels that the CCDH found had previously posted climate-denial content. The AI model’s labels, verified by researchers, were judged accurate in identifying climate-denial content approximately 78 percent of the time.
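The report doesn’t publish the model itself, so as a rough illustration only, here is a minimal sketch of the general shape such a transcript-claim classifier could take in scikit-learn; the category labels echo the report’s taxonomy, but the training snippets are invented and far too few for real use, which would require thousands of hand-labeled transcript passages.

```python
# Toy transcript classifier in the general style of the CCDH's approach.
# Category names follow the report's taxonomy; the examples are invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

train_texts = [
    "global warming stopped years ago and the temperature data is fabricated",
    "a warmer planet simply means longer growing seasons and fewer cold deaths",
    "wind and solar are unreliable and will never power a modern grid",
    "climate scientists are alarmists chasing grant money",
]
train_labels = [
    "warming_not_happening",  # "old" denial, already demonetized by YouTube
    "impacts_harmless",       # "new" denial
    "solutions_wont_work",    # "new" denial
    "science_unreliable",     # "new" denial
]

# TF-IDF features plus a linear classifier: a standard text-labeling baseline.
clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(train_texts, train_labels)

print(clf.predict(["solar panels are useless on a cloudy winter day"]))
```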

According to the CCDH’s analysis, content disputing climate solutions, climate science, and the impacts of climate change now makes up 70 percent of climate-denial content—a share that doubled from 2018 to 2023. Over the same period, the share of content pushing old climate-denial claims, which are harder or impossible to monetize, fell from 65 percent in 2018 to 30 percent in 2023.

These “new forms of climate denial,” the CCDH warned, are designed to delay climate action by spreading disinformation.

“A new front has opened up in this battle,” Imran Ahmed, the CCDH’s chief executive, said on a call with reporters, according to Reuters. “The people that we’ve been looking at, they’ve gone from saying climate change isn’t happening to now saying, ‘Hey, climate change is happening, but there is no hope. There are no solutions.'”

Since 2018—based on “estimates of typical ad pricing on YouTube” by social media analytics tool Social Blade—YouTube may have profited by as much as $13.4 million annually from videos flagged by the CCDH. And YouTube confirmed that some of these videos featured climate denialism that YouTube already explicitly bans.

In response to the CCDH’s report, YouTube de-monetized some videos found to be in violation of its climate change policy. But a spokesperson confirmed to Ars that the majority of videos that the CCDH found were considered compliant with YouTube’s ad policies.

The fact that most of these videos remain compliant is precisely why the CCDH is calling on YouTube to update its policies, though.

Currently, YouTube’s policy prohibits monetization of content “that contradicts well-established scientific consensus around the existence and causes of climate change.”

“Our climate change policy prohibits ads from running on content that contradicts well-established scientific consensus around the existence and causes of climate change,” YouTube’s spokesperson told Ars. “Debate or discussions of climate change topics, including around public policy or research, is allowed. However, when content crosses the line to climate change denial, we stop showing ads on those videos. We also display information panels under relevant videos to provide additional information on climate change and context from third parties.”

The CCDH worries that YouTube standing by its current policy is too short-sighted. The group recommended tweaking the policy to instead specify that YouTube prohibits content “that contradicts the authoritative scientific consensus on the causes, impacts, and solutions to climate change.”

If YouTube and other social media platforms don’t acknowledge new forms of climate denial and “urgently” update their disinformation policies in response, these new attacks on climate change science “will only increase,” the CCDH warned.

“It is vital that those advocating for action to avert climate disaster take note of this substantial shift from denial of anthropogenic climate change to undermining trust in both solutions and science itself, and shift our focus, our resources and our counternarratives accordingly,” the CCDH’s report said, adding that “demonetizing climate-denial” content “removes the economic incentives underpinning its creation and protects advertisers from bankrolling harmful content.”

YouTube appears to be reducing video and site performance for ad-block users

Surely this is an accident —

Latest consensus is that YouTube performance issues seem to be Adblock Plus’ fault.

Updated

YouTube appeared to be continuing its war on ad blockers, with users complaining that the company was slowing down the site for anyone it caught running an ad blocker. 9to5Google spotted a Reddit thread filled with users reporting poor loading performance with ad blockers enabled.

A video at the top of the Reddit post shows what some users are seeing: A video with an ad blocker on can’t load quickly enough to keep up with the playback speed (which isn’t on normal; it’s maybe 2x) and has to pause at around 30 seconds. Turning off the ad blocker immediately improves loading performance, with the white line on YouTube’s progress bar showing significantly more buffering runway. Users report that the ad-block detection causes strange issues, like “lag” that makes full screen or comments not work or Chrome being unable to load other webpages while YouTube is open.

YouTube has used all sorts of tactics to get people to turn off ad blockers and subscribe to YouTube Premium. The company previously has been showing pop-up messages saying ad blockers violate YouTube terms of service. Earlier, the company was caught adding a five-second delay to the initial site load for ad blockers. The changes have kicked off a cat-and-mouse game between Google/YouTube and the ad blocker community.

But the slowdowns may be a big accident from ad blockers altering YouTube’s code: Adblock Plus has published a bug report covering “performance issues” introduced by version 3.22 and says things should be fixed in version 3.22.1. uBlock Origin developer Raymond Hill says the issue is limited to AdBlock Plus and its spinoffs and that blaming YouTube is “an incorrect diagnosis.”

Regardless of whether this is due to the updated Adblock code, it’s not the first time this has happened with YouTube. The straightforward thing would be to show more of these pop-ups and not send people on a wild goose chase after fake technical issues. Users in the thread certainly seem confused about why YouTube suddenly stopped working. The top comment says, “I thought there was something wrong with my internet connection,” while another high-ranking user’s comment was to plan to reinstall Chrome.

This post was updated on January 15 at 4:20 pm ET with Adblock Plus’ bug report information and developer Raymond Hill’s statement.

Google announces April 2024 shutdown date for Google Podcasts

I want to get off Google’s wild ride —

Does this mean YouTube Podcasts is ready for prime time?

Google Podcasts has been sitting on Google’s death row for a few months now, since the September announcement. Now, a new support article details Google’s plans to kill the product, with a shutdown coming in April 2024.

Google Podcasts (2016–2024) is Google’s third attempt at a podcasting app after the Google Reader-powered Google Listen (2009–2012) and Google Play Music Podcasts (2016–2020). The product is being shut down in favor of podcast app No. 4, YouTube Podcasts, which launched in 2022.

A Google support article details how you can take your subscriptions with you. If you want to move from Google Podcasts to YouTube Podcasts, Google makes that pretty easy with a one-click button at music.youtube.com/transfer_podcasts. If you want to leave the Google ecosystem for something with less of a chance of being shut down in three to four years, you can also export your Google Podcasts subscriptions as an OPML file at podcasts.google.com/settings. Google says exports will be available until August 2024.
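OPML is a plain XML outline format, so the exported file can be imported by nearly any podcast app—or inspected by hand. A minimal sketch using only the standard library, assuming the export has been saved locally (the filename is hypothetical):

```python
# Each subscription in an OPML export appears as an <outline> element whose
# xmlUrl attribute holds the show's RSS feed URL. The filename is assumed.
import xml.etree.ElementTree as ET

tree = ET.parse("google_podcasts_subscriptions.opml")
for outline in tree.iter("outline"):
    feed_url = outline.get("xmlUrl")
    if feed_url:
        print(outline.get("text"), "->", feed_url)
```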

With the shutdown of Google Podcasts coming, we might assume YouTube Podcasts is ready, but it’s still a pretty hard service to use. I think all the core podcast features exist somewhere, but they are buried in several menus. For instance, you can go to youtube.com/podcasts, where you will see a landing page of “podcast episodes,” but there’s no clear way to subscribe to a show and build a podcast feed, which is the core feature of a podcast app. YouTube still only surfaces the regular YouTube subscription buttons, meaning you’ll be polluting your video-first YouTube subscription feed with audio-first podcast content.

I thought the original justification for “YouTube Podcasts” was that a lot of people already put podcast-style content on YouTube, in the form of news or talk shows, so adding some podcast-style interfaces would make that content easier to consume. I don’t think that ever happened, though. There are more podcast-centric features sequestered away in YouTube Music, where a button with the very confusing label “Save to library” will subscribe to a podcast feed. The problem is that the world’s second most popular website is YouTube, not YouTube Music. Music is a different interface, site, and app, so none of those billions of YouTube viewers are seeing these podcast features. Even if you try to engage with YouTube Music’s podcast features, it will still pollute your YouTube playlists and library with podcast content. It’s all very difficult to use, even for someone seeking this stuff out and trying to understand it.

But this is the future of Google’s podcast content, so the company is plowing ahead with it. You’ve got just a few months left to use Google Podcasts. If you’re looking to get off Google’s wild ride and want something straightforward that works across platforms, I recommend Pocket Casts.

YouTube Music wants your thoughts on making its free tier better

YouTube’s new homescreen widgets are going live for everyone