Policy


Sky voice actor says nobody ever compared her to ScarJo before OpenAI drama

Scarlett Johansson attends the Golden Heart Awards in 2023.

OpenAI is sticking to its story that it never intended to copy Scarlett Johansson’s voice when seeking an actor for ChatGPT’s “Sky” voice mode.

The company provided The Washington Post with documents and recordings clearly meant to support OpenAI CEO Sam Altman’s defense against Johansson’s claims that Sky was made to sound “eerily similar” to her critically acclaimed voice acting performance in the sci-fi film Her.

Johansson has alleged that OpenAI hired a soundalike to steal her likeness and confirmed that she declined to provide the Sky voice. Experts have said that Johansson has a strong case should she decide to sue OpenAI for violating her right to publicity, which gives the actress exclusive rights to the commercial use of her likeness.

In OpenAI’s defense, The Post reported that the company’s voice casting call flier did not seek a “clone of actress Scarlett Johansson,” and initial voice test recordings of the unnamed actress hired to voice Sky showed that her “natural voice sounds identical to the AI-generated Sky voice.” Because of this, OpenAI has argued that “Sky’s voice is not an imitation of Scarlett Johansson.”

What’s more, an agent for the unnamed Sky actress who was cast—both granted anonymity to protect her client’s safety—confirmed to The Post that her client said she was never directed to imitate either Johansson or her character in Her. She simply used her own voice and got the gig.

The agent also provided a statement from her client that claimed that she had never been compared to Johansson before the backlash started.

This all “feels personal,” the voice actress said, “being that it’s just my natural voice and I’ve never been compared to her by the people who do know me closely.”

However, OpenAI apparently reached out to Johansson after casting the Sky voice actress. During outreach last September and again this month, OpenAI seemed to want to replace the Sky voice actress’s voice with Johansson’s—which, ironically, is what happened on Her, when Johansson was cast to replace the original actress hired to voice the film’s AI character.

Altman has clarified that timeline in a statement provided to Ars that emphasized that the company “never intended” Sky to sound like Johansson. Instead, OpenAI tried to snag Johansson to voice the part after realizing—seemingly just as Her director Spike Jonze did—that the voice could potentially resonate with more people if Johansson did it.

“We are sorry to Ms. Johansson that we didn’t communicate better,” Altman’s statement said.

Johansson has not yet publicly indicated that she intends to sue OpenAI over this supposed miscommunication. But if she did, legal experts told The Post and Reuters that her case would be strong because of precedent set in high-profile lawsuits brought by singers Bette Midler and Tom Waits, which blocked companies from misappropriating their voices.

Why Johansson could win if she sued OpenAI

In 1988, Bette Midler sued Ford Motor Company for hiring a soundalike to perform Midler’s song “Do You Want to Dance?” in a commercial intended to appeal to “young yuppies” by referencing popular songs from their college days. Midler had declined to do the commercial and accused Ford of exploiting her voice to endorse its product without her consent.

This groundbreaking case established that a distinctive voice like Midler’s cannot be deliberately imitated to sell a product. It did not matter that the singer hired for the commercial used her natural singing voice, because “a number of people” told Midler that the performance “sounded exactly” like her.

Midler’s case set a powerful precedent preventing companies from appropriating parts of performers’ identities—essentially stopping anyone from stealing a well-known voice that otherwise could not be bought.

“A voice is as distinctive and personal as a face,” the court ruled, concluding that “when a distinctive voice of a professional singer is widely known and is deliberately imitated in order to sell a product, the sellers have appropriated what is not theirs.”

Like in Midler’s case, Johansson could argue that plenty of people think that the Sky voice sounds like her and that OpenAI’s product might be more popular if it had a Her-like voice mode. Comics on popular late-night shows joked about the similarity, including Johansson’s husband, Saturday Night Live comedian Colin Jost. And other people close to Johansson agreed that Sky sounded like her, Johansson has said.

Johansson’s case seems to differ from Midler’s primarily because of the casting timeline that OpenAI is working hard to defend.

OpenAI seems to think that because Johansson was offered the gig after the Sky voice actor was cast, she has no grounds to claim that the company hired the other actor after she declined.

The timeline may not matter as much as OpenAI may think, though. In the 1990s, Tom Waits cited Midler’s case when he won a $2.6 million lawsuit after Frito-Lay hired a Waits impersonator to perform a song that “echoed the rhyming word play” of a Waits song in a Doritos commercial. Waits won his suit even though Frito-Lay never attempted to hire the singer before casting the soundalike.


US sues Ticketmaster and owner Live Nation, seeks breakup of monopoly

Ticketmaster advertisements in the United States v. South Africa women’s soccer match at Soldier Field on September 24, 2023 in Chicago, Illinois.

Getty Images | Daniel Bartel/ISI Photos/USSF

The US government today sued Live Nation and its Ticketmaster subsidiary in a complaint that seeks a breakup of the company that dominates the live music and events market.

The US Department of Justice is seeking “structural relief,” including a breakup, “to stop the anticompetitive conduct arising from Live Nation’s monopoly power.” The DOJ complaint asked a federal court to “order the divestiture of, at minimum, Ticketmaster, along with any additional relief as needed to cure any anticompetitive harm.”

The District of Columbia and 29 states joined the DOJ in the lawsuit filed in US District Court for the Southern District of New York. “One monopolist serves as the gatekeeper for the delivery of nearly all live music in America today: Live Nation, including its wholly owned subsidiary Ticketmaster,” the complaint said.

US Attorney General Merrick Garland said during a press conference that “Live Nation relies on unlawful, anticompetitive conduct to exercise its monopolistic control over the live events industry in the United States… The result is that fans pay more in fees, artists have fewer opportunities to play concerts, smaller promoters get squeezed out, and venues have fewer real choices for ticketing services.”

“It is time to break it up,” Garland said.

Live Nation: We aren’t a monopoly

Garland said that Live Nation directly manages more than 400 artists, controls over 60 percent of concert promotions at major venues, and owns or controls over 60 percent of large amphitheaters. In addition to acquiring venues directly, Live Nation exercises control through exclusive ticketing contracts with venues that last more than a decade, Garland said.

Garland said Ticketmaster imposes a “seemingly endless list of fees” on fans, including ticketing fees, service fees, convenience fees, order fees, handling fees, and payment processing fees. Live Nation and Ticketmaster control “roughly 80 percent or more of major concert venues’ primary ticketing for concerts and a growing share of ticket resales in the secondary market,” the lawsuit said.

Live Nation defended its business practices in a statement provided to Ars today, saying the lawsuit won’t solve problems “relating to ticket prices, service fees, and access to in-demand shows.”

“Calling Ticketmaster a monopoly may be a PR win for the DOJ in the short term, but it will lose in court because it ignores the basic economics of live entertainment, such as the fact that the bulk of service fees go to venues and that competition has steadily eroded Ticketmaster’s market share and profit margin,” the company said. “Our growth comes from helping artists tour globally, creating lasting memories for millions of fans, and supporting local economies across the country by sustaining quality jobs. We will defend against these baseless allegations, use this opportunity to shed light on the industry, and continue to push for reforms that truly protect consumers and artists.”

Live Nation said its profits aren’t high enough to justify the DOJ lawsuit.

“The defining feature of a monopolist is monopoly profits derived from monopoly pricing,” the company said. “Live Nation in no way fits the profile. Service charges on Ticketmaster are no higher than other ticket marketplaces, and frequently lower.” Live Nation said its net profit margin last fiscal year was 1.4 percent and claimed that “there is more competition than ever in the live events market.”


Lawmakers say Section 230 repeal will protect children—opponents predict chaos


Repeal bill is bipartisan but has opponents from across the political spectrum.

US Rep. Frank Pallone, Jr. (D-N.J.), right, speaks as House Commerce Committee Chair Cathy McMorris Rodgers (R-Wash.) looks on during a hearing about TikTok on Thursday, March 23, 2023.

Getty Images | Tom Williams

A proposed repeal of Section 230 is designed to punish Big Tech but is also facing opposition from library associations, the Internet Archive, the owner of Wikipedia, and advocacy groups from across the political spectrum who say a repeal is bad for online speech. Opposition poured in before a House hearing today on the bipartisan plan to “sunset” Section 230 of the Communications Decency Act, which gives online platforms immunity from lawsuits over how they moderate user-submitted content.

Lawmakers defended the proposed repeal. House Commerce Committee Ranking Member Frank Pallone, Jr. (D-N.J.) today said that “Section 230 has outlived its usefulness and has played an outsized role in creating today’s ‘profits over people’ Internet” and criticized what he called “Big Tech’s constant scare tactics about reforming Section 230.”

Pallone teamed up with Commerce Committee Chair Cathy McMorris Rodgers (R-Wash.) to propose the Section 230 repeal. The lawmakers haven’t come up with a replacement for the law, a tactic that some critics predict will lead to legislative chaos. A hearing memo said the draft bill “would sunset Section 230 of the Communications Act effective on December 31, 2025,” but claimed the “intent of the legislation is not to have Section 230 actually sunset, but to encourage all technology companies to work with Congress to advance a long-term reform solution to Section 230.”

McMorris Rodgers and Pallone wrote a Wall Street Journal op-ed alleging that “Big Tech companies are exploiting the law to shield them from any responsibility or accountability as their platforms inflict immense harm on Americans, especially children.”

While politicians are focused on Big Tech, one letter sent to lawmakers said the proposal “fails to recognize the indispensable role that Section 230 plays in fostering a diverse and innovative digital landscape across many industries that extends far beyond the realm of only large technology corporations.”

Library and Internet groups defend Section 230

The letter was sent by the American Library Association, the Association of Research Libraries, the Consumer Technology Association, Creative Commons, Educause, Incompas, the Internet Archive, the Internet Infrastructure Coalition, the Internet Society, and the Wikimedia Foundation.

Section 230 is essential for small and medium-sized tech businesses, educational institutions, libraries, ISPs, and many others, the letter said:

By narrowly framing the debate around the interests of “Big Tech,” there is a risk of misunderstanding the far-reaching implications of altering or dismantling Section 230. The heaviest costs and burdens of such action would fall on the millions of stakeholders we represent who, unlike large companies, do not have the resources to navigate a flood of content-based lawsuits. While it may seem that such changes will not “break the Internet,” this perspective overlooks the intricate interplay of legal liability and innovation that underpins the entire digital infrastructure.

Opposition this week also came from the Electronic Frontier Foundation, which said that “Section 230 is essential to protecting individuals’ ability to speak, organize, and create online.”

“The law is not a shield for Big Tech,” the EFF wrote. “Critically, the law benefits the millions of users who don’t have the resources to build and host their own blogs, email services, or social media sites, and instead rely on services to host that speech. Section 230 also benefits thousands of small online services that host speech. Those people are being shut out as the bill sponsors pursue a dangerously misguided policy.”

The EFF said it worries that if Big Tech helps Congress write a Section 230 replacement, the new law won’t “protect and benefit Internet users, as Section 230 does currently.”


Investigation shows how easy it is to find escorts, oxycodone on Eventbrite

Eventbrite headquarters in downtown San Francisco

This June, approximately 150 motorcycles will thunder down Route 9W in Saugerties, New York, for Ryan’s Ride for Recovery. Organized by Vince Kelder and his family, the barbecue and raffle will raise money to support their sober-living facility and honor their son who tragically died from a heroin overdose in 2015 after a yearslong drug addiction.

The Kelders established Raising Your Awareness about Narcotics (RYAN) to help others struggling with substance-use disorder. For years, the organization has relied on Eventbrite, an event management and ticketing website, to arrange its events. This year, however, alongside listings for Ryan’s Ride and other addiction recovery events, Eventbrite surfaced listings peddling illegal sales of prescription drugs like Xanax, Valium, and oxycodone.

“It’s criminal,” Vince Kelder says. “They’re preying on people trying to get their lives back together.”

Eventbrite prohibits listings dedicated to selling illegal substances on its platform. It’s one of the 16 categories of content the company’s policies restrict its users from posting. But a WIRED investigation found more than 7,400 events published on the platform that appeared to violate one or more of these terms.

Among these listings were pages claiming to sell fentanyl powder “without a prescription,” accounts pushing the sale of Social Security numbers, and pages offering a “wild night with independent escorts” in India. Some linked to sites offering such wares as Gmail accounts, Google reviews (positive and negative), and TikTok and Instagram likes and followers, among other services.

At least 64 of the event listings advertising drugs included links to online pharmacies that the National Association of Boards of Pharmacy has flagged as untrustworthy or unsafe. Amanda Hils, a spokesperson for the US Food and Drug Administration, says the agency does not comment on individual cases without a thorough review, but broadly speaking, some online pharmacies that appear legitimate may be “operating illegally and selling medicines that can be dangerous or even deadly.”

Eventbrite didn’t just publish these user-generated event listings; its algorithms appeared to actively recommend them to people through simple search queries or in “related events”—a section at the bottom of an event’s page showing users similar events they might be interested in. Posts selling illegal prescription drugs appeared in search results next to the RYAN event, and a search for “opioid” in the United States showed Eventbrite’s recommendation algorithm slotting a conference for opioid treatment practitioners between two listings for ordering oxycodone.

Robin Pugh, the executive director of nonprofit cybercrime-fighting organization Intelligence for Good, which first alerted WIRED to some of the listings, says it is quick and easy to identify the illicit posts on Eventbrite and that other websites that allow “user-generated content” are also plagued by scammers uploading posts in similar ways.


Tesla shareholder group opposes Musk’s $46B pay, slams board “dysfunction”

A photoshopped image of Elon Musk emerging from an enormous pile of money.

Aurich Lawson / Duncan Hull / Getty

A Tesla shareholder group yesterday urged other shareholders to vote against Elon Musk’s $46 billion pay package, saying the Tesla board is dysfunctional and “overly beholden to CEO Musk.” The group’s letter also urged shareholders to vote against the reelection of board members Kimbal Musk and James Murdoch.

“Tesla is suffering from a material governance failure which requires our urgent attention and action,” and its board “is stacked with directors that have close personal ties to CEO Elon Musk,” the letter said. “There are multiple indications that these ties, coupled with excessive director compensation, prevent the level of critical and independent thinking required for effective governance.”

Tesla shareholders approved Elon Musk’s pay package in 2018, but it was nullified by a court ruling in January 2024. After a lawsuit filed by a shareholder, Delaware Court of Chancery Judge Kathaleen McCormick ruled that the pay plan was unfair to Tesla shareholders and must be rescinded.

McCormick wrote that most of Tesla’s board members were beholden to Musk or had compromising conflicts and that Tesla’s board provided false and misleading information to shareholders before the 2018 vote. Musk and the rest of the Tesla board subsequently asked shareholders to approve a transfer of Tesla’s state of incorporation from Delaware to Texas and to reinstate Musk’s pay package. Votes can be submitted before Tesla’s annual meeting on June 13.

The pay package was previously estimated to be worth $56 billion, but the stock options in the plan were more recently valued at $46 billion.

“Tesla has clearly lagged”

From March 2020 to November 2021, Tesla’s share price rose from $28.51 to $409.71. But it “has since fallen to $172.63, a decline of $237.08 or 62 percent from its peak,” the letter opposing the pay package said.

“Over the past three years, and especially over the past year, Tesla has clearly lagged behind its competitors and the broader market. We believe that the distractions caused by Musk’s many projects, particularly his decision to buy Twitter, have played a material role in Tesla’s underperformance,” the letter said.

Tesla’s reputation has been harmed by Musk’s “public fights with regulators, acquisition of Twitter, controversial statements on X, and his legal and personal troubles,” the letter said. The letter was sent by New York City Comptroller Brad Lander and investors including Amalgamated Bank, AkademikerPension, Nordea Asset Management, SOC Investment Group, and United Church Funds.

Musk has taken advantage of lax oversight in order “to use Tesla as a coffer for himself and his other business endeavors,” the letter said. It continued:

In 2022, Musk admitted to using Tesla engineers to work on issues at Twitter (now known as X), and defended the decision by saying that no Tesla Board member had stopped him from using Tesla staff for his other businesses. More recently, Musk has begun poaching top engineers from Tesla’s AI and autonomy team for his new company, xAI, including Ethan Knight, who was computer vision chief at Tesla.

This is on the heels of Musk’s post on X that he is “uncomfortable growing Tesla to be a leader in AI & robotics without having ~25% voting control,” a move widely seen as a threat to push Tesla’s Board to grant him another mega pay package.

The Tesla board “continues to allow Musk to be overcommitted” as he devotes “significant amounts of time to his roles at X, SpaceX, Neuralink, the Boring Company and other companies,” the letter said.


“CSAM generated by AI is still CSAM,” DOJ says after rare arrest


The US Department of Justice has started cracking down on the use of AI image generators to produce child sexual abuse materials (CSAM).

On Monday, the DOJ arrested Steven Anderegg, a 42-year-old “extremely technologically savvy” Wisconsin man who allegedly used Stable Diffusion to create “thousands of realistic images of prepubescent minors,” which were then distributed on Instagram and Telegram.

The cops were tipped off to Anderegg’s alleged activities after Instagram flagged direct messages that were sent on Anderegg’s Instagram account to a 15-year-old boy. Instagram reported the messages to the National Center for Missing and Exploited Children (NCMEC), which subsequently alerted law enforcement.

The DOJ found that during the Instagram exchange, Anderegg sent sexually explicit AI images of minors soon after the teen made his age known, alleging that “the only reasonable explanation for sending these images was to sexually entice the child.”

According to the DOJ’s indictment, Anderegg is a software engineer with “professional experience working with AI.” Because of his “special skill” in generative AI (GenAI), he was allegedly able to generate the CSAM using a version of Stable Diffusion, “along with a graphical user interface and special add-ons created by other Stable Diffusion users that specialized in producing genitalia.”

After Instagram reported Anderegg’s messages to the minor, cops seized Anderegg’s laptop and found “over 13,000 GenAI images, with hundreds—if not thousands—of these images depicting nude or semi-clothed prepubescent minors lasciviously displaying or touching their genitals” or “engaging in sexual intercourse with men.”

In his messages to the teen, Anderegg seemingly “boasted” about his skill in generating CSAM, the indictment said. The DOJ alleged that evidence from his laptop showed that Anderegg “used extremely specific and explicit prompts to create these images,” including “specific ‘negative’ prompts—that is, prompts that direct the GenAI model on what not to include in generated content—to avoid creating images that depict adults.” These go-to prompts were stored on his computer, the DOJ alleged.

Anderegg is currently in federal custody and has been charged with production, distribution, and possession of AI-generated CSAM, as well as “transferring obscene material to a minor under the age of 16,” the indictment said.

Because the DOJ suspected that Anderegg intended to use the AI-generated CSAM to groom a minor, the DOJ is arguing that there are “no conditions of release” that could prevent him from posing a “significant danger” to his community while the court mulls his case. The DOJ warned the court that it’s highly likely that any future contact with minors could go unnoticed, as Anderegg is seemingly tech-savvy enough to hide any future attempts to send minors AI-generated CSAM.

“He studied computer science and has decades of experience in software engineering,” the indictment said. “While computer monitoring may address the danger posed by less sophisticated offenders, the defendant’s background provides ample reason to conclude that he could sidestep such restrictions if he decided to. And if he did, any reoffending conduct would likely go undetected.”

If convicted of all four counts, he could face “a total statutory maximum penalty of 70 years in prison and a mandatory minimum of five years in prison,” the DOJ said. Partly because of his “special skill in GenAI,” the DOJ—which described its evidence against Anderegg as “strong”—suggested that it may recommend a sentencing range “as high as life imprisonment.”

Announcing Anderegg’s arrest, Deputy Attorney General Lisa Monaco made it clear that creating AI-generated CSAM is illegal in the US.

“Technology may change, but our commitment to protecting children will not,” Monaco said. “The Justice Department will aggressively pursue those who produce and distribute child sexual abuse material—or CSAM—no matter how that material was created. Put simply, CSAM generated by AI is still CSAM, and we will hold accountable those who exploit AI to create obscene, abusive, and increasingly photorealistic images of children.”


OpenAI pauses ChatGPT-4o voice that fans said ripped off Scarlett Johansson


“Sky’s voice is not an imitation of Scarlett Johansson,” OpenAI insists.

Scarlett Johansson and Joaquin Phoenix attend the Her premiere during the 8th Rome Film Festival at Auditorium Parco Della Musica on November 10, 2013, in Rome, Italy.

OpenAI has paused a voice mode option for ChatGPT-4o, Sky, after backlash accusing the AI company of intentionally ripping off Scarlett Johansson’s critically acclaimed voice-acting performance in the 2013 sci-fi film Her.

In a blog defending their casting decision for Sky, OpenAI went into great detail explaining its process for choosing the individual voice options for its chatbot. But ultimately, the company seemed pressed to admit that Sky’s voice was just too similar to Johansson’s to keep using it, at least for now.

“We believe that AI voices should not deliberately mimic a celebrity’s distinctive voice—Sky’s voice is not an imitation of Scarlett Johansson but belongs to a different professional actress using her own natural speaking voice,” OpenAI’s blog said.

OpenAI is not naming the actress, or any of the ChatGPT-4o voice actors, to protect their privacy.

A week ago, OpenAI CEO Sam Altman seemed to invite this controversy by posting “her” on X (formerly Twitter) after announcing the ChatGPT audio-video features that he said made it more “natural” for users to interact with the chatbot.

Altman has said that Her, a movie about a man who falls in love with his virtual assistant, is among his favorite movies. He told conference attendees at Dreamforce last year that the movie “was incredibly prophetic” when depicting “interaction models of how people use AI,” The San Francisco Standard reported. And just last week, Altman touted GPT-4o’s new voice mode by promising, “it feels like AI from the movies.”

But OpenAI’s chief technology officer, Mira Murati, has said that GPT-4o’s voice modes were less inspired by Her than by studying the “really natural, rich, and interactive” aspects of human conversation, The Wall Street Journal reported.

In 2013, of course, critics praised Johansson’s Her performance as expressively capturing a wide range of emotions, which is exactly what Murati described as OpenAI’s goals for its chatbot voices. Rolling Stone noted how effectively Johansson naturally navigated between “tones sweet, sexy, caring, manipulative, and scary.” Johansson achieved this, the Hollywood Reporter said, by using a “vivacious female voice that breaks attractively but also has an inviting deeper register.”

Her director/screenwriter Spike Jonze was so intent on finding the right voice for his film’s virtual assistant that he replaced British actor Samantha Morton late in the film’s production. According to Vulture, Jonze realized that Morton’s “maternal, loving, vaguely British, and almost ghostly” voice didn’t fit his film as well as Johansson’s “younger,” “more impassioned” voice, which he said brought “more yearning.”

Late-night shows had fun mocking OpenAI’s demo featuring the Sky voice, which showed the chatbot seemingly flirting with engineers, giggling through responses like “oh, stop it. You’re making me blush.” Where The New York Times described these demo interactions as Sky being “deferential and wholly focused on the user,” The Daily Show‘s Desi Lydic joked that Sky was “clearly programmed to feed dudes’ egos.”

OpenAI is likely hoping to avoid any further controversy amidst plans to roll out more voices soon that its blog said will “better match the diverse interests and preferences of users.”

OpenAI did not immediately respond to Ars’ request for comment.

Voice actors versus AI

The OpenAI controversy arrives at a moment when many are questioning AI’s impact on creative communities, triggering early lawsuits from artists and book authors. Just this month, Sony opted all of its artists out of AI training to stop voice clones from ripping off top talents like Adele and Beyoncé.

Voice actors, too, have been monitoring increasingly sophisticated AI voice generators, waiting to see what threat AI might pose to future work opportunities. Recently, two actors sued an AI start-up called Lovo that they claimed “illegally used recordings of their voices to create technology that can compete with their voice work,” The New York Times reported. According to that lawsuit, Lovo allegedly used the actors’ actual voice clips to clone their voices.

“We don’t know how many other people have been affected,” the actors’ lawyer, Steve Cohen, told The Times.

OpenAI’s blog said the company is striving to support the voice industry rather than replace voice actors as it creates chatbots that will laugh at your jokes or mimic your mood. On top of paying voice actors “compensation above top-of-market rates,” OpenAI said it “worked with industry-leading casting and directing professionals to narrow down over 400 submissions” to the five voice options in the initial rollout of audio-video features.

OpenAI’s stated goal was to cast talent “from diverse backgrounds or who could speak multiple languages” with voices that feel “timeless” and “inspire trust.” To OpenAI, that meant finding actors with a “warm, engaging, confidence-inspiring, charismatic voice with rich tone” that sounds “natural and easy to listen to.”

For ChatGPT-4o’s first five voice actors, the gig lasted about five months before leading to more work, OpenAI said.

“We are continuing to collaborate with the actors, who have contributed additional work for audio research and new voice capabilities in GPT-4o,” OpenAI said.

Arguably, though, these actors are helping to train AI tools that could one day replace them. The backlash defending Johansson—one of the world’s highest-paid actors—perhaps shows that fans won’t take direct mimicry of any of Hollywood’s biggest stars lightly.

While criticism of the Sky voice seemed widespread, some fans thought OpenAI overreacted by pausing it.

NYT critic Alissa Wilkinson wrote that it was only “a tad jarring” to hear Sky’s voice because “she sounded a whole lot” like Johansson. And replying to OpenAI’s X post announcing its decision to pull the voice feature for now, a clump of fans protested the AI company’s “bad decision,” with some complaining that Sky was the “best” and “hottest” voice.

At least one fan noted that OpenAI’s decision seemed to hurt the voice actor behind Sky most.

“Super unfair for the Sky voice actress,” a user called Ate-a-Pi wrote. “Just because she sounds like ScarJo, now she can never make money again. Insane.”


Slack users horrified to discover messages used for AI training


After launching Slack AI in February, Slack appears to be digging its heels in, defending its vague policy that by default sucks up customers’ data—including messages, content, and files—to train Slack’s global AI models.

According to Slack engineer Aaron Maurer, Slack has explained in a blog that the Salesforce-owned chat service does not train its large language models (LLMs) on customer data. But Slack’s policy may need updating “to explain more carefully how these privacy principles play with Slack AI,” Maurer wrote on Threads, partly because the policy “was originally written about the search/recommendation work we’ve been doing for years prior to Slack AI.”

Maurer was responding to a Threads post from engineer and writer Gergely Orosz, who called for companies to opt out of data sharing until the policy is clarified, not by a blog, but in the actual policy language.

“An ML engineer at Slack says they don’t use messages to train LLM models,” Orosz wrote. “My response is that the current terms allow them to do so. I’ll believe this is the policy when it’s in the policy. A blog post is not the privacy policy: every serious company knows this.”

The tension for users becomes clearer if you compare Slack’s privacy principles with how the company touts Slack AI.

Slack’s privacy principles specifically say that “Machine Learning (ML) and Artificial Intelligence (AI) are useful tools that we use in limited ways to enhance our product mission. To develop AI/ML models, our systems analyze Customer Data (e.g. messages, content, and files) submitted to Slack as well as other information (including usage information) as defined in our privacy policy and in your customer agreement.”

Meanwhile, Slack AI’s page says, “Work without worry. Your data is your data. We don’t use it to train Slack AI.”

Because of this incongruity, users called on Slack to update the privacy principles to make it clear how data is used for Slack AI or any future AI updates. According to a Salesforce spokesperson, the company has agreed an update is needed.

“Yesterday, some Slack community members asked for more clarity regarding our privacy principles,” Salesforce’s spokesperson told Ars. “We’ll be updating those principles today to better explain the relationship between customer data and generative AI in Slack.”

The spokesperson told Ars that the policy updates will clarify that Slack does not “develop LLMs or other generative models using customer data,” “use customer data to train third-party LLMs” or “build or train these models in such a way that they could learn, memorize, or be able to reproduce customer data.” The update will also clarify that “Slack AI uses off-the-shelf LLMs where the models don’t retain customer data,” ensuring that “customer data never leaves Slack’s trust boundary, and the providers of the LLM never have any access to the customer data.”

These changes, however, do not seem to address a key concern for users who never explicitly consented to sharing chats and other Slack content for use in AI training.

Users opting out of sharing chats with Slack

This controversial policy is not new. Wired warned about it in April, and TechCrunch reported that the policy has been in place since at least September 2023.

But widespread backlash began swelling last night on Hacker News, where Slack users called out the chat service for seemingly failing to notify users about the policy change, instead quietly opting them in by default. To critics, it felt like there was no benefit to opting in for anyone but Slack.

From there, the backlash spread to social media, where SlackHQ hastened to clarify Slack’s terms with explanations that did not seem to address all the criticism.

“I’m sorry Slack, you’re doing fucking WHAT with user DMs, messages, files, etc?” Corey Quinn, the chief cloud economist for a cost management company called Duckbill Group, posted on X. “I’m positive I’m not reading this correctly.”

SlackHQ responded to Quinn after the economist declared, “I hate this so much,” and confirmed that he had opted out of data sharing in his paid workspace.

“To clarify, Slack has platform-level machine-learning models for things like channel and emoji recommendations and search results,” SlackHQ posted. “And yes, customers can exclude their data from helping train those (non-generative) ML models. Customer data belongs to the customer.”

Later in the thread, SlackHQ noted, “Slack AI—which is our generative AI experience natively built in Slack—is a separately purchased add-on that uses Large Language Models (LLMs) but does not train those LLMs on customer data.”


Financial institutions have 30 days to disclose breaches under new rules


Amendments contain loopholes that may blunt their effectiveness.


The Securities and Exchange Commission (SEC) will require some financial institutions to disclose security breaches within 30 days of learning about them.

On Wednesday, the SEC adopted changes to Regulation S-P, which governs the treatment of the personal information of consumers. Under the amendments, institutions must notify individuals whose personal information was compromised “as soon as practicable, but not later than 30 days” after learning of unauthorized network access or use of customer data. The new requirements will be binding on broker-dealers (including funding portals), investment companies, registered investment advisers, and transfer agents.

“Over the last 24 years, the nature, scale, and impact of data breaches has transformed substantially,” SEC Chair Gary Gensler said. “These amendments to Regulation S-P will make critical updates to a rule first adopted in 2000 and help protect the privacy of customers’ financial data. The basic idea for covered firms is if you’ve got a breach, then you’ve got to notify. That’s good for investors.”

Notifications must detail the incident, what information was compromised, and how those affected can protect themselves. In what appears to be a loophole in the requirements, covered institutions don’t have to issue notices if they establish that the personal information has not been used in a way to result in “substantial harm or inconvenience” or isn’t likely to.

The amendments will require covered institutions to “develop, implement, and maintain written policies and procedures” that are “reasonably designed to detect, respond to, and recover from unauthorized access to or use of customer information.” The amendments also:

• Expand and align the safeguards and disposal rules to cover both nonpublic personal information that a covered institution collects about its own customers and nonpublic personal information it receives from another financial institution about customers of that financial institution;

• Require covered institutions, other than funding portals, to make and maintain written records documenting compliance with the requirements of the safeguards rule and disposal rule;

• Conform Regulation S-P’s annual privacy notice delivery provisions to the terms of an exception added by the FAST Act, which provide that covered institutions are not required to deliver an annual privacy notice if certain conditions are met; and

• Extend both the safeguards rule and the disposal rule to transfer agents registered with the Commission or another appropriate regulatory agency.

The requirements also broaden the scope of nonpublic personal information covered beyond what the firm itself collects. The new rules will also cover personal information the firm has received from another financial institution.

SEC Commissioner Hester M. Peirce voiced concern that the new requirements may go too far.

“Today’s Regulation S-P modernization will help covered institutions appropriately prioritize safeguarding customer information,” she wrote (https://www.sec.gov/news/statement/peirce-statement-reg-s-p-051624). “Customers will be notified promptly when their information has been compromised so they can take steps to protect themselves, like changing passwords or keeping a closer eye on credit scores. My reservations stem from the breadth of the rule and the likelihood that it will spawn more consumer notices than are helpful.”

Regulation S-P hadn’t been substantially updated since its adoption in 2000.

Last year, the SEC adopted new regulations requiring publicly traded companies to disclose security breaches that materially affect or are reasonably likely to materially affect business, strategy, or financial results or conditions.

The amendments take effect 60 days after publication in the Federal Register, the official journal of the federal government that publishes regulations, notices, orders, and other documents. Larger organizations will have 18 months to comply after modifications are published. Smaller organizations will have 24 months.

Public comments on the amendments are available here.


Twitter URLs redirect to x.com as Musk gets closer to killing the Twitter name


X.com stops redirecting to Twitter.com over a year after company name change.

An app icon and logo for Elon Musk's X service.

Getty Images | Kirill Kudryavtsev

Twitter.com links are now redirecting to the x.com domain as Elon Musk gets closer to wiping out the Twitter brand name over a year and a half after buying the company.

“All core systems are now on X.com,” Musk wrote in an X post today. X also displayed a message to users that said, “We are letting you know that we are changing our URL, but your privacy and data protection settings remain the same.”

Musk bought Twitter in October 2022 and turned it into X Corp. in April 2023, but the social network continued to use Twitter.com as its primary domain for more than another year. X.com links redirected to Twitter.com during that time.

There were still remnants of Twitter after today’s change. This morning, I noticed a support link took me to a help.twitter.com page. The link subsequently redirected to a help.x.com page after I sent a message to X’s public relations email, though the timing could be coincidence. After sending that message to press@x.com, I got the standard auto-reply from press+noreply@twitter.com, just as I have in the past.

You might still encounter Twitter links that don’t redirect to x.com, depending on which browser you use. The Verge said it is “seeing a mix of results depending upon browser choice and whether you’re logged in or not.”

I had no trouble accessing x.com on desktop browsers today. But in Safari on iPhone, I received error messages when trying to access either twitter.com or x.com without first logging in. I eventually succeeded in logging in and was able to view content, but I remained at twitter.com in the iPhone browser instead of being redirected to x.com.

This will presumably be sorted out, but the awkward Twitter-to-X transition has previously been accompanied by technical problems. In early April, Musk’s service started automatically changing “twitter.com” to “x.com” in links posted by users in the iOS app. But the automatic text replacement initially applied to any URL ending in “twitter.com” even if it wasn’t actually a twitter.com link, which meant that phishers could have taken advantage by registering misleading domain names.
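
Neither X’s actual code nor its fix is public, but the flaw described above is easy to illustrate. Below is a minimal Python sketch, using a hypothetical lookalike domain, showing how a blind text substitution rewrites unrelated domains and how checking the parsed hostname avoids it.

```python
# Minimal sketch (not X's actual implementation) of the link-rewriting pitfall.
# The lookalike domain below is hypothetical, used only for illustration.
from urllib.parse import urlparse


def naive_rewrite(url: str) -> str:
    # Blind substitution: rewrites "twitter.com" anywhere it appears in the text.
    return url.replace("twitter.com", "x.com")


def hostname_aware_rewrite(url: str) -> str:
    # Only rewrite when the parsed host really is twitter.com or a subdomain of it.
    host = urlparse(url).hostname or ""
    if host == "twitter.com" or host.endswith(".twitter.com"):
        new_host = "x.com" if host == "twitter.com" else host[: -len("twitter.com")] + "x.com"
        return url.replace(host, new_host, 1)
    return url  # unrelated domains are left untouched


if __name__ == "__main__":
    legit = "https://twitter.com/arstechnica/status/123"
    lookalike = "https://netflitwitter.com/login"  # hypothetical phishing domain

    print(naive_rewrite(legit))               # https://x.com/arstechnica/status/123
    print(naive_rewrite(lookalike))           # https://netflix.com/login (misleading display)
    print(hostname_aware_rewrite(lookalike))  # unchanged, as it should be
```

The point is that any rewrite decision should be made on the parsed hostname, not on raw text found anywhere in the URL.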


Robert F. Kennedy Jr. sues Meta, citing chatbot’s reply as evidence of shadowban

Screenshot from the documentary Who Is Bobby Kennedy?

In a lawsuit that seems determined to ignore that Section 230 exists, Robert F. Kennedy Jr. has sued Meta for allegedly shadowbanning his million-dollar documentary, Who Is Bobby Kennedy?, and preventing his supporters from advocating for his presidential campaign.

According to Kennedy, Meta is colluding with the Biden administration to sway the 2024 presidential election by suppressing Kennedy’s documentary and making it harder to support Kennedy’s candidacy. This allegedly has caused “substantial donation losses,” while also violating the free speech rights of Kennedy, his supporters, and his film’s production company, AV24.

Meta had initially restricted the documentary on Facebook and Instagram but later fixed the issue after discovering that the film was mistakenly flagged by the platforms’ automated spam filters.

But Kennedy’s complaint claimed that Meta is still “brazenly censoring speech” by “continuing to throttle, de-boost, demote, and shadowban the film.” In an exhibit, Kennedy’s lawyers attached screenshots representing “hundreds” of Facebook and Instagram users whom Meta allegedly threatened, intimidated, and sanctioned after they shared the documentary.

Some of these users remain suspended on Meta platforms, the complaint alleged. Others whose temporary suspensions have been lifted claimed that their posts are still being throttled, though, and Kennedy’s lawyers earnestly insisted that an exchange with Meta’s chatbot proves it.

Two days after the documentary’s release, Kennedy’s team apparently asked the Meta AI assistant, “When users post the link whoisbobbykennedy.com, can their followers see the post in their feeds?”

“I can tell you that the link is currently restricted by Meta,” the chatbot answered.

Chatbots, of course, are notoriously inaccurate sources of information, and Meta AI’s terms of service note this. In a section labeled “accuracy,” Meta warns that chatbot responses “may not reflect accurate, complete, or current information” and should always be verified.

Perhaps more significantly, there is little reason to think that Meta’s chatbot would have access to information about internal content moderation decisions.

Techdirt’s Mike Masnick mocked Kennedy’s reliance on the chatbot in the case. He noted that Kennedy seemed to have no evidence of the alleged shadow-banning, while there’s plenty of evidence that Meta’s spam filters accidentally remove non-violative content all the time.

Meta’s chatbot is “just a probabilistic stochastic parrot, repeating a probable sounding answer to users’ questions,” Masnick wrote. “And these idiots think it’s meaningful evidence. This is beyond embarrassing.”

Neither Meta nor Kennedy’s lawyer, Jed Rubenfeld, responded to Ars’ request to comment.


Tesla must face fraud suit for claiming its cars could fully drive themselves

The Tesla car company's logo

Getty Images | SOPA Images

A federal judge ruled yesterday that Tesla must face a lawsuit alleging that it committed fraud by misrepresenting the self-driving capabilities of its vehicles.

California resident Thomas LoSavio’s lawsuit points to claims made by Tesla and CEO Elon Musk starting in October 2016, a few months before LoSavio bought a 2017 Tesla Model S with “Enhanced Autopilot” and “Full Self-Driving Capability.” US District Judge Rita Lin in the Northern District of California dismissed some of LoSavio’s claims but ruled that the lawsuit can move forward on allegations of fraud:

The remaining claims, which arise out of Tesla’s alleged fraud and related negligence, may go forward to the extent they are based on two alleged representations: (1) representations that Tesla vehicles have the hardware needed for full self-driving capability and, (2) representations that a Tesla car would be able to drive itself cross-country in the coming year. While the Rule 9(b) pleading requirements are less stringent here, where Tesla allegedly engaged in a systematic pattern of fraud over a long period of time, LoSavio alleges, plausibly and with sufficient detail, that he relied on these representations before buying his car.

Tesla previously won a significant ruling in the case when a different judge upheld the carmaker’s arbitration agreement and ruled that four plaintiffs would have to go to arbitration. But LoSavio had opted out of the arbitration agreement and was given the option of filing an amended complaint.

LoSavio’s amended complaint seeks class-action status on behalf of himself “and fellow consumers who purchased or leased a new Tesla vehicle with Tesla’s ADAS [Advanced Driver Assistance System] technology but never received the self-driving car that Tesla promised them.”

Cars not fully autonomous

Lin didn’t rule on the merits of the claims but found that they are adequately alleged. LoSavio points to a Tesla statement in October 2016 that all its cars going forward would have the “hardware needed for full self-driving capability,” and a November 2016 email newsletter stating that “all Tesla vehicles produced in our factory now have full self-driving hardware.”

The ruling said:

Those statements were allegedly false because the cars lacked the combination of sensors, including lidar, needed to achieve SAE Level 4 (“High Automation”) and Level 5 (“Full Automation”), i.e., full autonomy. According to the SAC [Second Amended Complaint], Tesla’s cars have thus stalled at SAE Level 2 (“Partial Driving Automation”), which requires “the human driver’s constant supervision, responsibility, and control.”

If Tesla meant to convey that its hardware was sufficient to reach high or full automation, the SAC plainly alleges sufficient falsity. Even if Tesla meant to convey that its hardware could reach Level 2 only, the SAC still sufficiently alleges that those representations reasonably misled LoSavio.

The complaint also “sufficiently alleges that Musk falsely represented the vehicle’s future ability to self-drive cross-country and that LoSavio relied upon these representations pre-purchase,” Lin concluded. Musk claimed at an October 2016 news conference that a Tesla car would be able to drive from Los Angeles to New York City “by the end of next year without the need for a single touch.”
