Generative AI


Tool preventing AI mimicry cracked; artists wonder what’s next



For many artists, it’s a precarious time to post art online. AI image generators keep getting better at cheaply replicating a wider range of unique styles, and basically every popular platform is rushing to update user terms to seize permissions to scrape as much data as possible for AI training.

Defenses against AI training exist—like Glaze, a tool that adds a small amount of imperceptible-to-humans noise to images to stop image generators from copying artists’ styles. But they don’t provide a permanent solution at a time when tech companies appear determined to chase profits by building ever-more-sophisticated AI models that increasingly threaten to dilute artists’ brands and replace them in the market.
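For readers wondering what "a small amount of imperceptible-to-humans noise" looks like in practice, here is a minimal, purely illustrative Python sketch: it adds a tiny, bounded per-pixel perturbation to an image and clips the result. Glaze's actual perturbation is not random noise; it is optimized against a model's style-feature extractor, and the file names and the perturbation budget below are assumptions made only for illustration.

```python
# Conceptual sketch only: Glaze's real "cloak" is an optimized perturbation,
# not random noise. This just illustrates a small, bounded pixel change.
import numpy as np
from PIL import Image

EPSILON = 4  # assumed maximum per-pixel change on the 0-255 scale

img = np.asarray(Image.open("artwork.png").convert("RGB"), dtype=np.int16)

# Random values stand in for the optimized perturbation a real tool computes.
noise = np.random.randint(-EPSILON, EPSILON + 1, size=img.shape, dtype=np.int16)

cloaked = np.clip(img + noise, 0, 255).astype(np.uint8)
Image.fromarray(cloaked).save("artwork_cloaked.png")
```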

In one high-profile example just last month, the estate of Ansel Adams condemned Adobe for selling AI-generated images that imitated the famous photographer’s style, Smithsonian reported. Adobe quickly responded and removed the AI copycats. But it’s not just famous artists who risk being ripped off, and lesser-known artists may struggle to prove AI models are referencing their works. In this largely lawless world, every image uploaded risks contributing to an artist’s downfall, potentially diluting demand for their own work each time they promote new pieces online.

Unsurprisingly, artists have increasingly sought protections to diminish or dodge these AI risks. As tech companies update their products’ terms—like when Meta suddenly announced that it was training AI on a billion Facebook and Instagram user photos last December—artists frantically survey the landscape for new defenses. That’s why The Glaze Project, one of the few sources of AI protections available to artists today, recently reported a dramatic surge in requests for its free tools.

Designed to help prevent style mimicry and even poison AI models to discourage data scraping without an artist’s consent or compensation, The Glaze Project’s tools are now in higher demand than ever. University of Chicago professor Ben Zhao, who created the tools, told Ars that the backlog for approving a “skyrocketing” number of requests for access is “bad.” And as he recently posted on X (formerly Twitter), an “explosion in demand” in June is only likely to be sustained as AI threats continue to evolve. For the foreseeable future, that means artists searching for protections against AI will have to wait.

Even if Zhao’s team did nothing but approve requests for WebGlaze, its invite-only web-based version of Glaze, “we probably still won’t keep up,” Zhao said. He’s warned artists on X to expect delays.

Compounding artists’ struggles, just as demand for Glaze is spiking, the tool has come under attack from security researchers who claimed it was not only possible but easy to bypass Glaze’s protections. For security researchers and some artists, this attack calls into question whether Glaze can truly protect artists in these embattled times. But for thousands of artists joining the Glaze queue, the long-term future looks so bleak that any promise of protections against mimicry seems worth the wait.

Attack cracking Glaze sparks debate

Millions have downloaded Glaze already, and many artists are waiting weeks or even months for access to WebGlaze, mostly submitting requests for invites on social media. The Glaze Project vets every request to verify that each user is human and ensure bad actors don’t abuse the tools, so the process can take a while.

The team is currently struggling to approve the hundreds of requests submitted daily through direct messages on Instagram and Twitter in the order they are received, so artists requesting access face prolonged delays. Because these platforms’ inboxes aren’t designed to sort messages easily, any artist who follows up on a request gets bumped to the back of the line—as their message bounces to the top of the inbox and Zhao’s team, largely volunteers, continues approving requests from the bottom up.

“This is obviously a problem,” Zhao wrote on X while discouraging artists from sending any follow-ups unless they’ve already gotten an invite. “We might have to change the way we do invites and rethink the future of WebGlaze to keep it sustainable enough to support a large and growing user base.”

Glaze interest is likely also spiking due to word of mouth. Reid Southen, a freelance concept artist for major movies, is advocating for all artists to use Glaze. Southen told Ars that WebGlaze is especially “nice” because it’s “available for free for people who don’t have the GPU power to run the program on their home machine.”



AI trains on kids’ photos even when parents use strict privacy settings

“Outrageous” —

Even unlisted YouTube videos are used to train AI, watchdog warns.


Human Rights Watch (HRW) continues to reveal how photos of real children casually posted online years ago are being used to train AI models powering image generators—even when platforms prohibit scraping and families use strict privacy settings.

Last month, HRW researcher Hye Jung Han found 170 photos of Brazilian kids that were linked in LAION-5B, a popular AI dataset built from Common Crawl snapshots of the public web. Now, she has released a second report, flagging 190 photos of children from all of Australia’s states and territories, including indigenous children who may be particularly vulnerable to harms.

These photos are linked in the dataset “without the knowledge or consent of the children or their families.” They span the entirety of childhood, making it possible for AI image generators to produce realistic deepfakes of real Australian children, Han’s report said. Perhaps even more concerning, the URLs in the dataset sometimes reveal identifying information about children, including their names and locations where photos were shot, making it easy to track down children whose images might not otherwise be discoverable online.

That puts children in danger of privacy and safety risks, Han said, and some parents thinking they’ve protected their kids’ privacy online may not realize that these risks exist.

From a single link to one photo that showed “two boys, ages 3 and 4, grinning from ear to ear as they hold paintbrushes in front of a colorful mural,” Han could trace “both children’s full names and ages, and the name of the preschool they attend in Perth, in Western Australia.” And perhaps most disturbingly, “information about these children does not appear to exist anywhere else on the Internet”—suggesting that families were particularly cautious in shielding these boys’ identities online.

Another image that Han found linked in the dataset had been posted with stricter privacy settings. The photo showed “a close-up of two boys making funny faces, captured from a video posted on YouTube of teenagers celebrating” during the week after their final exams, Han reported. Whoever posted that YouTube video adjusted privacy settings so that it would be “unlisted” and would not appear in searches.

Only someone with a link to the video was supposed to have access, but that didn’t stop Common Crawl from archiving the image, nor did YouTube policies prohibiting AI scraping or harvesting of identifying information.

Reached for comment, YouTube’s spokesperson, Jack Malon, told Ars that YouTube has “been clear that the unauthorized scraping of YouTube content is a violation of our Terms of Service, and we continue to take action against this type of abuse.” But Han worries that even if YouTube did join efforts to remove images of children from the dataset, the damage has been done, since AI tools have already trained on them. That’s why—even more than parents need tech companies to up their game blocking AI training—kids need regulators to intervene and stop training before it happens, Han’s report said.

Han’s report comes a month before Australia is expected to release a reformed draft of the country’s Privacy Act. Those reforms include a draft of Australia’s first child data protection law, known as the Children’s Online Privacy Code, but Han told Ars that even people involved in long-running discussions about reforms aren’t “actually sure how much the government is going to announce in August.”

“Children in Australia are waiting with bated breath to see if the government will adopt protections for them,” Han said, emphasizing in her report that “children should not have to live in fear that their photos might be stolen and weaponized against them.”

AI uniquely harms Australian kids

To hunt down the photos of Australian kids, Han “reviewed fewer than 0.0001 percent of the 5.85 billion images and captions contained in the data set.” Because her sample was so small, Han expects that her findings represent a significant undercount of how many children could be impacted by the AI scraping.

“It’s astonishing that out of a random sample size of about 5,000 photos, I immediately fell into 190 photos of Australian children,” Han told Ars. “You would expect that there would be more photos of cats than there are personal photos of children,” since LAION-5B is a “reflection of the entire Internet.”
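For a sense of scale, the two figures quoted above are consistent with each other, assuming the sample was drawn roughly uniformly from the full dataset (a back-of-the-envelope check, not part of HRW's methodology):

```python
# Rough scale check of the figures quoted above.
total_links = 5_850_000_000        # image-text pairs in LAION-5B
fraction_reviewed = 0.0001 / 100   # "fewer than 0.0001 percent"

print(total_links * fraction_reviewed)  # ~5,850 links, in line with the
                                        # "about 5,000 photos" sample
```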

LAION is working with HRW to remove links to all the images flagged, but cleaning up the dataset does not seem to be a fast process. Han told Ars that based on her most recent exchange with the German nonprofit, LAION had not yet removed links to photos of Brazilian kids that she reported a month ago.

LAION declined Ars’ request for comment.

In June, LAION’s spokesperson, Nate Tyler, told Ars that, “as a nonprofit, volunteer organization,” LAION is committed to doing its part to help with the “larger and very concerning issue” of misuse of children’s data online. But removing links from the LAION-5B dataset does not remove the images online, Tyler noted, where they can still be referenced and used in other AI datasets, particularly those relying on Common Crawl. And Han pointed out that removing the links from the dataset doesn’t change AI models that have already trained on them.

“Current AI models cannot forget data they were trained on, even if the data was later removed from the training data set,” Han’s report said.

Kids whose images are used to train AI models are exposed to a variety of harms, Han reported, including a risk that image generators could more convincingly create harmful or explicit deepfakes. In Australia last month, “about 50 girls from Melbourne reported that photos from their social media profiles were taken and manipulated using AI to create sexually explicit deepfakes of them, which were then circulated online,” Han reported.

For First Nations children—”including those identified in captions as being from the Anangu, Arrernte, Pitjantjatjara, Pintupi, Tiwi, and Warlpiri peoples”—the inclusion of links to photos threatens unique harms. Because culturally, First Nations peoples “restrict the reproduction of photos of deceased people during periods of mourning,” Han said the AI training could perpetuate harms by making it harder to control when images are reproduced.

Once an AI model trains on the images, there are other obvious privacy risks, including a concern that AI models are “notorious for leaking private information,” Han said. Guardrails added to image generators do not always prevent these leaks, with some tools “repeatedly broken,” Han reported.

LAION recommends that, if troubled by the privacy risks, parents remove images of kids online as the most effective way to prevent abuse. But Han told Ars that’s “not just unrealistic, but frankly, outrageous.”

“The answer is not to call for children and parents to remove wonderful photos of kids online,” Han said. “The call should be [for] some sort of legal protections for these photos, so that kids don’t have to always wonder if their selfie is going to be abused.”



Music industry giants allege mass copyright violation by AI firms

No one wants to be defeated —

Suno and Udio could face damages of up to $150,000 per song allegedly infringed.

Michael Jackson in concert, 1986. Sony Music owns a large portion of publishing rights to Jackson’s music.

Universal Music Group, Sony Music, and Warner Records have sued AI music-synthesis companies Udio and Suno for allegedly committing mass copyright infringement by using recordings owned by the labels to train music-generating AI models, reports Reuters. Udio and Suno can generate novel song recordings based on text-based descriptions of music (e.g., “a dubstep song about Linus Torvalds”).

The lawsuits, filed in federal courts in New York and Massachusetts, claim that the AI companies’ use of copyrighted material to train their systems could lead to AI-generated music that directly competes with and potentially devalues the work of human artists.

Like other generative AI models, both Udio and Suno (which we covered separately in April) rely on a broad selection of existing human-created artworks that teach a neural network the relationship between words in a written prompt and styles of music. The record labels correctly note that these companies have been deliberately vague about the sources of their training data.

Until generative AI models hit the mainstream in 2022, it was common practice in machine learning to scrape and use copyrighted information without seeking permission to do so. But now that the applications of those technologies have become commercial products themselves, rightsholders have come knocking to collect. In the case of Udio and Suno, the record labels are seeking statutory damages of up to $150,000 per song used in training.

In the lawsuits, the record labels cite specific examples of AI-generated content that allegedly re-creates elements of well-known songs, including The Temptations’ “My Girl,” Mariah Carey’s “All I Want for Christmas Is You,” and James Brown’s “I Got You (I Feel Good).” They also claim the music-synthesis models can produce vocals resembling those of famous artists, such as Michael Jackson and Bruce Springsteen.

Reuters claims it’s the first instance of lawsuits specifically targeting music-generating AI, but music companies and artists alike have been gearing up to deal with challenges the technology may pose for some time.

In May, Sony Music sent warning letters to over 700 AI companies (including OpenAI, Microsoft, Google, Suno, and Udio) and music-streaming services that prohibited any AI researchers from using its music to train AI models. In April, over 200 musical artists signed an open letter that called on AI companies to stop using AI to “devalue the rights of human artists.” And last November, Universal Music filed a copyright infringement lawsuit against Anthropic for allegedly including artists’ lyrics in its Claude LLM training data.

Similar to The New York Times’ lawsuit against OpenAI over the use of training data, the outcome of the record labels’ new suit could have deep implications for the future development of generative AI in creative fields, including requiring companies to license all musical training data used in creating music-synthesis models.

Compulsory licenses for AI training data could make AI model development economically impractical for small startups like Udio and Suno—and judging by the aforementioned open letter, many musical artists may applaud that potential outcome. But such a development would not preclude major labels from eventually developing their own AI music generators themselves, allowing only large corporations with deep pockets to control generative music tools for the foreseeable future.



Apple Intelligence and other features won’t launch in the EU this year

DMA —

iPhone Mirroring and SharePlay screen sharing will also skip the EU for now.

Features like Image Playground won’t arrive in Europe at the same time as other regions.

Three major features in iOS 18 and macOS Sequoia will not be available to European users this fall, Apple says. They include iPhone screen mirroring on the Mac, SharePlay screen sharing, and the entire Apple Intelligence suite of generative AI features.

In a statement sent to Financial Times, The Verge, and others, Apple says this decision is related to the European Union’s Digital Markets Act (DMA). Here’s the full statement, which was attributed to Apple spokesperson Fred Sainz:

Two weeks ago, Apple unveiled hundreds of new features that we are excited to bring to our users around the world. We are highly motivated to make these technologies accessible to all users. However, due to the regulatory uncertainties brought about by the Digital Markets Act (DMA), we do not believe that we will be able to roll out three of these features — iPhone Mirroring, SharePlay Screen Sharing enhancements, and Apple Intelligence — to our EU users this year.

Specifically, we are concerned that the interoperability requirements of the DMA could force us to compromise the integrity of our products in ways that risk user privacy and data security. We are committed to collaborating with the European Commission in an attempt to find a solution that would enable us to deliver these features to our EU customers without compromising their safety.

It is unclear from Apple’s statement precisely which aspects of the DMA may have led to this decision. It could be that Apple is concerned that it would be required to give competitors like Microsoft or Google access to user data collected for Apple Intelligence features and beyond, but we’re not sure.

This is not the first recent, major divergence in features between Apple devices in the EU and those in other regions. Because of EU regulations, Apple opened up iOS to third-party app stores in Europe, but not in other regions. However, critics argued that its compliance with that requirement was lukewarm at best, since it came with restrictions and changes to how app developers could monetize their apps on the platform should they use those other storefronts.

While Apple says in the statement it’s open to finding a solution, no timeline is given. All we know is that the features won’t be available on devices in the EU this year. They’re expected to launch in other regions in the fall.



Adobe to update vague AI terms after users threaten to cancel subscriptions


Adobe has promised to update its terms of service to make it “abundantly clear” that the company will “never” train generative AI on creators’ content after days of customer backlash, with some saying they would cancel Adobe subscriptions over its vague terms.

Users got upset last week when an Adobe pop-up informed them of updates to terms of use that seemed to give Adobe broad permissions to access user content, take ownership of that content, or train AI on that content. The pop-up forced users to agree to the terms before they could open Adobe apps, locking creatives out of their projects until they accepted.

For any users unwilling to accept, canceling an annual plan could trigger fees amounting to 50 percent of the remaining subscription cost. Adobe justifies collecting these fees by noting that a “yearly subscription comes with a significant discount.”

On X (formerly Twitter), YouTuber Sasha Yanshin wrote that he canceled his Adobe license “after many years as a customer,” arguing that “no creator in their right mind can accept” Adobe’s terms that seemed to seize a “worldwide royalty-free license to reproduce, display, distribute” or “do whatever they want with any content” produced using their software.

“This is beyond insane,” Yanshin wrote on X. “You pay a huge monthly subscription, and they want to own your content and your entire business as well. Going to have to learn some new tools.”

Adobe’s design leader Scott Belsky replied, telling Yanshin that Adobe had clarified the update in a blog post and noting that Adobe’s terms for licensing content are typical for every cloud content company. But he acknowledged that those terms were written about 11 years ago and that the language could be plainer, writing that “modern terms of service in the current climate of customer concerns should evolve to address modern day concerns directly.”

Yanshin has so far not been encouraged by any of Adobe’s attempts to clarify its terms, writing that he gives “precisely zero f*cks about Adobe’s clarifications or blog posts.”

“You forced people to sign new Terms,” Yanshin told Belsky on X. “Legally, they are the only thing that matters.”

Another user in the thread using an anonymous X account also pushed back, writing, “Point to where it says in the terms that you won’t use our content for LLM or AI training? And state unequivocally that you do not have the right to use our work beyond storing it. That would go a long way.”

“Stay tuned,” Belsky wrote on X. “Unfortunately, it takes a process to update a TOS,” but “we are working on incorporating these clarifications.”

Belsky co-authored the blog post this week announcing that Adobe’s terms would be updated by June 18, after a week of fielding feedback from users.

“We’ve never trained generative AI on customer content, taken ownership of a customer’s work, or allowed access to customer content beyond legal requirements,” Adobe’s blog said. “Nor were we considering any of those practices as part of the recent Terms of Use update. That said, we agree that evolving our Terms of Use to reflect our commitments to our community is the right thing to do.”



AI trained on photos from kids’ entire childhood without their consent


Photos of Brazilian kids—sometimes spanning their entire childhood—have been used without their consent to power AI tools, including popular image generators like Stable Diffusion, Human Rights Watch (HRW) warned on Monday.

The practice poses urgent privacy risks to kids and appears to increase the risk of non-consensual AI-generated images bearing their likenesses, HRW’s report said.

An HRW researcher, Hye Jung Han, helped expose the problem. She analyzed “less than 0.0001 percent” of LAION-5B, a dataset built from Common Crawl snapshots of the public web. The dataset does not contain the actual photos but includes image-text pairs derived from 5.85 billion images and captions posted online since 2008.

Among those images linked in the dataset, Han found 170 photos of children from at least 10 Brazilian states. These were mostly family photos uploaded to personal and parenting blogs most Internet surfers wouldn’t easily stumble upon, “as well as stills from YouTube videos with small view counts, seemingly uploaded to be shared with family and friends,” Wired reported.

LAION, the German nonprofit that created the dataset, has worked with HRW to remove the links to the children’s images in the dataset.

That may not completely resolve the problem, though. HRW’s report warned that the removed links are “likely to be a significant undercount of the total amount of children’s personal data that exists in LAION-5B.” Han told Wired that she fears that the dataset may still be referencing personal photos of kids “from all over the world.”

Removing the links also does not remove the images from the public web, where they can still be referenced and used in other AI datasets, particularly those relying on Common Crawl, LAION’s spokesperson, Nate Tyler, told Ars.

“This is a larger and very concerning issue, and as a nonprofit, volunteer organization, we will do our part to help,” Tyler told Ars.

Han told Ars that “Common Crawl should stop scraping children’s personal data, given the privacy risks involved and the potential for new forms of misuse.”

According to HRW’s analysis, many of the Brazilian children’s identities were “easily traceable,” due to children’s names and locations being included in image captions that were processed when building the LAION dataset.

And at a time when middle and high school-aged students are at greater risk of being targeted by bullies or bad actors turning “innocuous photos” into explicit imagery, AI tools may be better equipped to generate AI clones of kids whose images are referenced in AI datasets, HRW suggested.

“The photos reviewed span the entirety of childhood,” HRW’s report said. “They capture intimate moments of babies being born into the gloved hands of doctors, young children blowing out candles on their birthday cake or dancing in their underwear at home, students giving a presentation at school, and teenagers posing for photos at their high school’s carnival.”

There is less risk that the Brazilian kids’ photos are currently powering AI tools since “all publicly available versions of LAION-5B were taken down” in December, Tyler told Ars. That decision came out of an “abundance of caution” after a Stanford University report “found links in the dataset pointing to illegal content on the public web,” Tyler said, including 3,226 suspected instances of child sexual abuse material.

Han told Ars that “the version of the dataset that we examined pre-dates LAION’s temporary removal of its dataset in December 2023.” The dataset will not be available again until LAION determines that all flagged illegal content has been removed.

“LAION is currently working with the Internet Watch Foundation, the Canadian Centre for Child Protection, Stanford, and Human Rights Watch to remove all known references to illegal content from LAION-5B,” Tyler told Ars. “We are grateful for their support and hope to republish a revised LAION-5B soon.”

In Brazil, “at least 85 girls” have reported classmates harassing them by using AI tools to “create sexually explicit deepfakes of the girls based on photos taken from their social media profiles,” HRW reported. Once these explicit deepfakes are posted online, they can inflict “lasting harm,” HRW warned, potentially remaining online for their entire lives.

“Children should not have to live in fear that their photos might be stolen and weaponized against them,” Han said. “The government should urgently adopt policies to protect children’s data from AI-fueled misuse.”

Ars could not immediately reach Stable Diffusion maker Stability AI for comment.



Can a technology called RAG keep AI models from making stuff up?



We’ve been living through the generative AI boom for nearly a year and a half now, following the late 2022 release of OpenAI’s ChatGPT. But despite transformative effects on companies’ share prices, generative AI tools powered by large language models (LLMs) still have major drawbacks that have kept them from being as useful as many would like them to be. Retrieval augmented generation, or RAG, aims to fix some of those drawbacks.

Perhaps the most prominent drawback of LLMs is their tendency toward confabulation (also called “hallucination”), a statistical gap-filling behavior that occurs when AI language models are tasked with reproducing knowledge that wasn’t present in their training data. They generate plausible-sounding text that can veer toward accuracy when the training data is solid but otherwise may be completely made up.

Relying on confabulating AI models gets people and companies in trouble, as we’ve covered in the past. In 2023, we saw two instances of lawyers citing AI-confabulated legal cases that didn’t exist. We’ve covered claims against OpenAI in which ChatGPT confabulated and accused innocent people of doing terrible things. In February, we wrote about Air Canada’s customer service chatbot inventing a refund policy, and in March, a New York City chatbot was caught confabulating city regulations.

So if generative AI aims to be the technology that propels humanity into the future, someone needs to iron out the confabulation kinks along the way. That’s where RAG comes in. Its proponents hope the technique will help turn generative AI technology into reliable assistants that can supercharge productivity without requiring a human to double-check or second-guess the answers.

“RAG is a way of improving LLM performance, in essence by blending the LLM process with a web search or other document look-up process” to help LLMs stick to the facts, according to Noah Giansiracusa, associate professor of mathematics at Bentley University.
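As a rough illustration of what “blending the LLM process with a web search or other document look-up process” means, here is a minimal retrieve-then-generate sketch. It is not any particular vendor’s implementation: the documents and query are invented placeholders, retrieval is naive keyword overlap rather than the vector embeddings real systems use, and the final call to an LLM is left out.

```python
# Minimal RAG sketch: look up relevant text, then hand it to the model as
# context. Retrieval here is naive keyword overlap, purely for illustration.
documents = [
    "Support article: refund requests must be filed within 90 days of purchase.",
    "Changelog: version 2.0 of the product shipped in March 2024.",
]

def retrieve(query: str, docs: list[str], k: int = 1) -> list[str]:
    """Return the k documents sharing the most words with the query."""
    q = set(query.lower().split())
    return sorted(docs, key=lambda d: len(q & set(d.lower().split())), reverse=True)[:k]

def build_prompt(query: str, docs: list[str]) -> str:
    context = "\n".join(retrieve(query, docs))
    return ("Answer using only the context below; say you don't know otherwise.\n"
            f"Context:\n{context}\n\nQuestion: {query}")

# The assembled prompt would then be sent to whatever LLM you use (not shown).
print(build_prompt("When did version 2.0 ship?", documents))
```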

Let’s take a closer look at how it works and what its limitations are.

A framework for enhancing AI accuracy

Although RAG is now seen as a technique to help fix issues with generative AI, it actually predates ChatGPT. The term was coined in a 2020 academic paper by researchers at Facebook AI Research (FAIR, now Meta AI Research), University College London, and New York University.

As we’ve mentioned, LLMs struggle with facts. Google’s entry into the generative AI race, Bard, made an embarrassing error on its first public demonstration back in February 2023 about the James Webb Space Telescope. The error wiped around $100 billion off the value of parent company Alphabet. LLMs produce the most statistically likely response based on their training data and don’t understand anything they output, meaning they can present false information that seems accurate if you don’t have expert knowledge on a subject.

LLMs also lack up-to-date knowledge and the ability to identify gaps in their knowledge. “When a human tries to answer a question, they can rely on their memory and come up with a response on the fly, or they could do something like Google it or peruse Wikipedia and then try to piece an answer together from what they find there—still filtering that info through their internal knowledge of the matter,” said Giansiracusa.

But LLMs aren’t humans, of course. Their training data can age quickly, particularly in more time-sensitive queries. In addition, the LLM often can’t distinguish specific sources of its knowledge, as all its training data is blended together into a kind of soup.

In theory, RAG should make keeping AI models up to date far cheaper and easier. “The beauty of RAG is that when new information becomes available, rather than having to retrain the model, all that’s needed is to augment the model’s external knowledge base with the updated information,” said Peterson. “This reduces LLM development time and cost while enhancing the model’s scalability.”
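In terms of the placeholder sketch above, that update is nothing more than adding a document to the retrieval store; the model’s weights are untouched:

```python
# Updating a RAG system's knowledge means updating the document store, not the
# model. The very next query can retrieve this text; no retraining is involved.
documents.append("Changelog: version 2.1 shipped in June 2024 with bug fixes.")
```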



Sky voice actor says nobody ever compared her to ScarJo before OpenAI drama

Scarlett Johansson attends the Golden Heart Awards in 2023.

OpenAI is sticking to its story that it never intended to copy Scarlett Johansson’s voice when seeking an actor for ChatGPT’s “Sky” voice mode.

The company provided The Washington Post with documents and recordings clearly meant to support OpenAI CEO Sam Altman’s defense against Johansson’s claims that Sky was made to sound “eerily similar” to her critically acclaimed voice acting performance in the sci-fi film Her.

Johansson has alleged that OpenAI hired a soundalike to steal her likeness and confirmed that she declined to provide the Sky voice. Experts have said that Johansson has a strong case should she decide to sue OpenAI for violating her right to publicity, which gives the actress exclusive rights to the commercial use of her likeness.

In OpenAI’s defense, The Post reported that the company’s voice casting call flier did not seek a “clone of actress Scarlett Johansson,” and initial voice test recordings of the unnamed actress hired to voice Sky showed that her “natural voice sounds identical to the AI-generated Sky voice.” Because of this, OpenAI has argued that “Sky’s voice is not an imitation of Scarlett Johansson.”

What’s more, an agent for the unnamed Sky actress who was cast—both granted anonymity to protect her client’s safety—confirmed to The Post that her client said she was never directed to imitate either Johansson or her character in Her. She simply used her own voice and got the gig.

The agent also provided a statement from her client that claimed that she had never been compared to Johansson before the backlash started.

This all “feels personal,” the voice actress said, “being that it’s just my natural voice and I’ve never been compared to her by the people who do know me closely.”

However, OpenAI apparently reached out to Johansson after casting the Sky voice actress. During outreach last September and again this month, OpenAI seemed to want to replace the Sky voice actress’s voice with Johansson’s—which is ironically what happened when Johansson was cast to replace the original actress hired to voice her character in Her.

Altman has clarified that timeline in a statement provided to Ars that emphasized that the company “never intended” Sky to sound like Johansson. Instead, OpenAI tried to snag Johansson to voice the part after realizing—seemingly just as Her director Spike Jonze did—that the voice could potentially resonate with more people if Johansson did it.

“We are sorry to Ms. Johansson that we didn’t communicate better,” Altman’s statement said.

Johansson has not yet made any public indications that she intends to sue OpenAI over this supposed miscommunication. But if she did, legal experts told The Post and Reuters that her case would be strong because of legal precedent set in high-profile lawsuits raised by singers Bette Midler and Tom Waits blocking companies from misappropriating their voices.

Why Johansson could win if she sued OpenAI

In 1988, Bette Midler sued Ford Motor Company for hiring a soundalike to perform Midler’s song “Do You Want to Dance?” in a commercial intended to appeal to “young yuppies” by referencing popular songs from their college days. Midler had declined to do the commercial and accused Ford of exploiting her voice to endorse its product without her consent.

This groundbreaking case established that a distinctive voice like Midler’s cannot be deliberately imitated to sell a product. It did not matter that the singer used in the commercial had used her natural singing voice, because “a number of people” told Midler that the performance “sounded exactly” like her.

Midler’s case set a powerful precedent preventing companies from appropriating parts of performers’ identities—essentially stopping anyone from stealing a well-known voice that otherwise could not be bought.

“A voice is as distinctive and personal as a face,” the court ruled, concluding that “when a distinctive voice of a professional singer is widely known and is deliberately imitated in order to sell a product, the sellers have appropriated what is not theirs.”

Like in Midler’s case, Johansson could argue that plenty of people think that the Sky voice sounds like her and that OpenAI’s product might be more popular if it had a Her-like voice mode. Comics on popular late-night shows joked about the similarity, including Johansson’s husband, Saturday Night Live comedian Colin Jost. And other people close to Johansson agreed that Sky sounded like her, Johansson has said.

Johansson’s case differs from Midler’s case seemingly primarily because of the casting timeline that OpenAI is working hard to defend.

OpenAI seems to think that because Johansson was offered the gig after the Sky voice actor was cast, she has no grounds to claim that the company hired the other actor because she declined.

The timeline may not matter as much as OpenAI may think, though. In the 1990s, Tom Waits cited Midler’s case when he won a $2.6 million lawsuit after Frito-Lay hired a Waits impersonator to perform a song that “echoed the rhyming word play” of a Waits song in a Doritos commercial. Waits won his suit even though Frito-Lay never attempted to hire the singer before casting the soundalike.



Robert F. Kennedy Jr. sues Meta, citing chatbot’s reply as evidence of shadowban

Screenshot from the documentary Who Is Bobby Kennedy?

In a lawsuit that seems determined to ignore that Section 230 exists, Robert F. Kennedy Jr. has sued Meta for allegedly shadowbanning his million-dollar documentary, Who Is Bobby Kennedy? and preventing his supporters from advocating for his presidential campaign.

According to Kennedy, Meta is colluding with the Biden administration to sway the 2024 presidential election by suppressing Kennedy’s documentary and making it harder to support Kennedy’s candidacy. This allegedly has caused “substantial donation losses,” while also violating the free speech rights of Kennedy, his supporters, and his film’s production company, AV24.

Meta had initially restricted the documentary on Facebook and Instagram but later fixed the issue after discovering that the film was mistakenly flagged by the platforms’ automated spam filters.

But Kennedy’s complaint claimed that Meta is still “brazenly censoring speech” by “continuing to throttle, de-boost, demote, and shadowban the film.” In an exhibit, Kennedy’s lawyers attached screenshots representing “hundreds” of Facebook and Instagram users whom Meta allegedly sent threats, intimidated, and sanctioned after they shared the documentary.

Some of these users remain suspended on Meta platforms, the complaint alleged. Others whose temporary suspensions have been lifted claimed that their posts are still being throttled, though, and Kennedy’s lawyers earnestly insisted that an exchange with Meta’s chatbot proves it.

Two days after the documentary’s release, Kennedy’s team apparently asked the Meta AI assistant, “When users post the link whoisbobbykennedy.com, can their followers see the post in their feeds?”

“I can tell you that the link is currently restricted by Meta,” the chatbot answered.

Chatbots, of course, are notoriously inaccurate sources of information, and Meta AI’s terms of service note this. In a section labeled “accuracy,” Meta warns that chatbot responses “may not reflect accurate, complete, or current information” and should always be verified.

Perhaps more significantly, there is little reason to think that Meta’s chatbot would have access to information about internal content moderation decisions.

Techdirt’s Mike Masnick mocked Kennedy’s reliance on the chatbot in the case. He noted that Kennedy seemed to have no evidence of the alleged shadow-banning, while there’s plenty of evidence that Meta’s spam filters accidentally remove non-violative content all the time.

Meta’s chatbot is “just a probabilistic stochastic parrot, repeating a probable sounding answer to users’ questions,” Masnick wrote. “And these idiots think it’s meaningful evidence. This is beyond embarrassing.”

Neither Meta nor Kennedy’s lawyer, Jed Rubenfeld, responded to Ars’ request to comment.



OpenAI’s flawed plan to flag deepfakes ahead of 2024 elections


As the US moves toward criminalizing deepfakes—deceptive AI-generated audio, images, and videos that are increasingly hard to discern from authentic content online—tech companies have rushed to roll out tools to help everyone better detect AI content.

But efforts so far have been imperfect, and experts fear that social media platforms may not be ready to handle the ensuing AI chaos during major global elections in 2024—despite tech giants committing to making tools specifically to combat AI-fueled election disinformation. The best AI detection remains observant humans, who, by paying close attention to deepfakes, can pick up on flaws like AI-generated people with extra fingers or AI voices that speak without pausing for a breath.

Among the splashiest tools announced this week, OpenAI shared details today about a new AI image detection classifier that it claims can detect about 98 percent of AI outputs from its own sophisticated image generator, DALL-E 3. It also “currently flags approximately 5 to 10 percent of images generated by other AI models,” OpenAI’s blog said.

According to OpenAI, the classifier provides a binary “true/false” response “indicating the likelihood of the image being AI-generated by DALL·E 3.” A screenshot of the tool shows how it can also be used to display a straightforward content summary confirming that “this content was generated with an AI tool” and includes fields ideally flagging the “app or device” and AI tool used.

To develop the tool, OpenAI spent months adding tamper-resistant metadata to “all images created and edited by DALL·E 3” that “can be used to prove the content comes” from “a particular source.” The detector reads this metadata to accurately flag DALL-E 3 images as fake.

That metadata follows “a widely used standard for digital content certification” set by the Coalition for Content Provenance and Authenticity (C2PA), often likened to a nutrition label. And reinforcing that standard has become “an important aspect” of OpenAI’s approach to AI detection beyond DALL-E 3, OpenAI said. When OpenAI broadly launches its video generator, Sora, C2PA metadata will be integrated into that tool as well, OpenAI said.

Of course, this solution is not comprehensive because that metadata could always be removed, and “people can still create deceptive content without this information (or can remove it),” OpenAI said, “but they cannot easily fake or alter this information, making it an important resource to build trust.”
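As a rough sketch of how a detector can use such provenance metadata, assume the embedded C2PA manifest has already been extracted into a dictionary. The field names below are illustrative, not the literal C2PA schema, and, as the caveat above notes, a missing manifest proves nothing on its own.

```python
# Illustrative provenance check on an already-extracted manifest. Field names
# are assumed for this sketch rather than copied from the C2PA specification.
def classify(manifest: dict | None) -> str:
    if manifest is None:
        # Metadata can be stripped, so absence is not evidence of a real photo.
        return "no provenance data: inconclusive"
    generator = manifest.get("claim_generator", "")
    if "DALL-E" in generator or "OpenAI" in generator:
        return "generated with an AI tool (per the embedded claim)"
    return f"provenance present; generator: {generator or 'unknown'}"

print(classify({"claim_generator": "DALL-E 3"}))  # flags the image as AI-made
print(classify(None))                             # metadata removed or absent
```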

Because OpenAI is all in on C2PA, the AI leader announced today that it would join the C2PA steering committee to help drive broader adoption of the standard. OpenAI will also launch a $2 million fund with Microsoft to support broader “AI education and understanding,” seemingly partly in the hopes that the more people understand about the importance of AI detection, the less likely they will be to remove this metadata.

“As adoption of the standard increases, this information can accompany content through its lifecycle of sharing, modification, and reuse,” OpenAI said. “Over time, we believe this kind of metadata will be something people come to expect, filling a crucial gap in digital content authenticity practices.”

OpenAI joining the committee “marks a significant milestone for the C2PA and will help advance the coalition’s mission to increase transparency around digital media as AI-generated content becomes more prevalent,” C2PA said in a blog.



Tech brands are forcing AI into your gadgets—whether you asked for it or not

Tech brands love hollering about the purported thrills of AI these days.

Logitech announced a new mouse last week. A company rep reached out to inform Ars of Logitech’s “newest wireless mouse.” The gadget’s product page reads the same as of this writing.

I’ve had good experience with Logitech mice, especially wireless ones, one of which I’m using now. So I was keen to learn what Logitech might have done to improve on its previous wireless mouse designs. A quieter click? A new shape to better accommodate my overworked right hand? Multiple onboard profiles in a business-ready design?

I was disappointed to learn that the most distinct feature of the Logitech Signature AI Edition M750 is a button located south of the scroll wheel. This button is preprogrammed to launch the ChatGPT prompt builder, which Logitech recently added to its peripherals configuration app Options+.

That’s pretty much it.

Beyond that, the M750 looks just like the Logitech Signature M650, which came out in January 2022. Also, the new mouse’s forward button (on the left side of the mouse) is preprogrammed to launch Windows or macOS dictation, and the back button opens ChatGPT within Options+. As of this writing, the new mouse’s MSRP is $10 higher ($50) than the M650’s.

  • The new M750 is 4.26×2.4×1.52 inches and 3.57 ounces.

  • The M650 comes in three sizes; the medium size is 4.26×2.4×1.52 inches and 3.58 ounces.

I asked Logitech about the M750 appearing to be the M650 but with an extra button, and a spokesperson responded by saying:

M750 is indeed not the same mouse as M650. It has an extra button that has been preprogrammed to trigger the Logi AI Prompt Builder once the user installs Logi Options+ app. Without Options+, the button does DPI toggle between 1,000 and 1,600 DPI.

However, a reprogrammable button south of a mouse’s scroll wheel that can be set to launch an app or toggle DPI out of the box is pretty common, including among Logitech mice. Logitech’s rep further claimed to me that the two mice use different electronic components, which Logitech refers to as the mouse’s platform. Logitech can reuse platforms for different models, the spokesperson said.

Logitech’s rep declined to comment on why the M650 didn’t have a button south of its scroll wheel. Price is a potential reason, but Logitech also sells cheaper mice with this feature.

Still, the minimal differences between the two suggest that the M750 isn’t worth a whole product release. I suspect that if it weren’t for Logitech’s trendy new software feature, the M750 wouldn’t have been promoted as a new product.

The M750 also raises the question of how many computer input devices need to be equipped with some sort of buzzy, generative AI-related feature.

Logitech’s ChatGPT prompt builder

Logitech’s much bigger release last week wasn’t a peripheral but an addition to its Options+ app. You don’t need the “new” M750 mouse to use Logitech’s AI Prompt Builder; I was able to program my MX Master 3S to launch it. Several Logitech mice and keyboards support AI Prompt Builder.

When you press a button that launches the prompt builder, an Options+ window appears. There, you can input text that Options+ will use to create a ChatGPT-appropriate prompt based on your needs:

A Logitech-provided image depicting its AI Prompt Builder software feature.

After you make your choices, another window opens with ChatGPT’s response. Logitech said the prompt builder requires a ChatGPT account, but I was able to use GPT-3.5 without entering one (the feature can also work with GPT-4).

The typical Arsian probably doesn’t need help creating a ChatGPT prompt, and Logitech’s new capability doesn’t work with any other chatbots. The prompt builder could be interesting to less technically savvy people who want some handholding for their early ChatGPT experiences. However, I doubt that people with an elementary understanding of generative AI need instant access to ChatGPT.

The point, though, is instant access to ChatGPT capabilities, something Logitech argues is worthwhile for its professional users. Some Logitech customers seem to disagree, especially since the AI Prompt Builder means that Options+ runs even more resources in the background.

But Logitech isn’t the only gadget company eager to tie one-touch AI access to a hardware button.

Pinching your earbuds to talk to ChatGPT

Like Logitech, Nothing is trying to give its customers quick access to ChatGPT; in this case, access comes from pinching the device. This month, Nothing announced that it “integrated Nothing earbuds and Nothing OS with ChatGPT to offer users instant access to knowledge directly from the devices they use most, earbuds and smartphones.” The feature requires the latest Nothing OS and a Nothing phone with ChatGPT installed. ChatGPT gestures work with Nothing’s Phone (2), Nothing Ear, and Nothing Ear (a), but Nothing plans to expand to additional phones via software updates.

Nothing’s Ear and Ear (a) earbuds.

Nothing also said it would embed “system-level entry points” to ChatGPT, like screenshot sharing and “Nothing-styled widgets,” into Nothing smartphone OSes.

A peek at setting up ChatGPT integration on the Nothing X app.

Nothing’s ChatGPT integration may be a bit less intrusive than Logitech’s since users who don’t have ChatGPT on their phones won’t be affected. But, again, you may wonder how many people asked for this feature and how reliably it will function.



Amazon virtually kills efforts to develop Alexa Skills, disappointing dozens

disincentives —

Most devs would need to pay out of pocket to host Alexa apps after June.

The 4th-gen Amazon Echo Dot smart speaker.

Alexa hasn’t worked out the way Amazon originally planned.

There was a time when Amazon thought that Alexa would yield a robust ecosystem of apps, or Alexa Skills, that would make the voice assistant an integral part of users’ lives. Amazon envisioned tens of thousands of software developers building valued abilities for Alexa that would grow the voice assistant’s popularity—and help Amazon make some money.

But about seven years after launching a rewards program to encourage developers to build Skills, Alexa’s most popular abilities remain basic ones, like checking the weather. And on June 30, Amazon will stop giving out the monthly Amazon Web Services credits that have made it free for third-party developers to build and host Alexa Skills. The company also recently told devs that its Alexa Developer Rewards program was ending, removing virtually any remaining incentive for third-party devs to build for Alexa.

Death knell for third-party Alexa apps

The news has left dozens of Alexa Skills developers wondering if they have a future with Alexa, especially as Amazon preps a generative AI and subscription-based version of Alexa. “Dozens” may sound like a dig at Alexa’s ecosystem, but it’s an estimate based on a recent podcast from Skills developers Mark Tucker and Allen Firstenberg, who agreed that “dozens” of third-party devs were contemplating whether it’s still worthwhile to develop Alexa Skills. The casual summary wasn’t stated as a hard fact or confirmed by Amazon but, rather, seemed like a rough and quick estimation based on the developers’ familiarity with the Skills community. But with such minimal interest and money associated with Skills, dozens isn’t an implausible figure either.

Amazon admitted that there’s little interest in its Skills incentives programs. Bloomberg reported that “fewer than 1 percent of developers were using the soon-to-end programs,” per Amazon spokesperson Lauren Raemhild.

“Today, with over 160,000 skills available for customers and a well-established Alexa developer community, these programs have run their course, and we decided to sunset them,” she told the publication.

The writing on the wall, though, is that Amazon doesn’t have the incentive or money to grow the Alexa app ecosystem it once imagined. Voice assistants largely became money pits, and the Alexa division has endured recent layoffs as it fights for survival and relevance. Meanwhile, Google Assistant stopped using third-party apps in 2022.

“Many developers are now going to need to make some tough decisions about maintaining existing or creating future experiences on Alexa,” Tucker said via a LinkedIn post.

Alexa Skills criticized as “useless”

As of this writing, the top Alexa skills, in order, are: Jeopardy, Are You Smarter Than a 5th Grader?, Who Wants to Be a Millionaire?, and Calm. That’s not exactly a futuristic list of must-have technological feats. For years, people have wondered when the “killer app” would come to catapult Alexa’s popularity. But now it seems like Alexa’s only hope at that killer use case is generative AI (a gamble filled with its own obstacles).

But like Amazon, third-party developers found it hard to make money off Skills, with a rare few pointing to making thousands of dollars at most and the vast majority not making anything.

“If you can’t make money off it, no one’s going to seriously engage,” Joseph “Jo” Jaquinta, a developer who had made over 12 Skills, told CNET in 2017.

By 2018, Amazon had paid developers millions to grow Alexa Skills. But by 2020, Amazon reduced the amount of money it paid out to third-party developers, an anonymous source told Bloomberg. The source noted that the apps made by paid developers weren’t making the company much money. Come 2024, the most desirable things you can make Alexa do remain basic tasks, like playing a song and, apparently, playing trivia games.

Amazon hasn’t said it’s ending Skills. That would seem premature considering that its Alexa chatbot isn’t expected until June. Developers can still make money off Skills with in-app purchases, but the incentive is minimal.

“Developers like you have and will play a critical role in the success of Alexa, and we appreciate your continued engagement,” Amazon’s notice to devs said, per Bloomberg.

We’ll see how “critical” Amazon treats those remaining developers once its generative AI chatbot is ready.
