Policy

Washing machine chime scandal shows how absurd YouTube copyright abuse can get

YouTube’s Content ID system—which automatically detects content registered by rightsholders—is “completely fucking broken,” a YouTuber called “Albino” declared in a rant on X (formerly Twitter) viewed more than 950,000 times.

Albino, who is also a popular Twitch streamer, complained that his YouTube video of a Fallout playthrough was demonetized because a Samsung washing machine randomly chimed to signal a laundry cycle had finished while he was streaming.

Apparently, YouTube had automatically scanned Albino’s video and detected the washing machine chime as a song called “Done”—which Albino quickly saw was uploaded to YouTube by a musician known as Audego nine years ago.

But when Albino hit play on Audego’s song, the only thing that he heard was a 30-second clip of the washing machine chime. To Albino it was obvious that Audego didn’t have any rights to the jingle, which Dexerto reported actually comes from the song “Die Forelle” (“The Trout”) from Austrian composer Franz Schubert.

The song was composed in 1817 and is in the public domain. Samsung has used it to signal the end of a wash cycle for years, sparking debate over whether it’s the catchiest washing machine song and inspiring at least one violinist to perform a duet with her machine. It’s been a source of delight for many Samsung customers, but for Albino, hearing the jingle appropriated on YouTube only inspired ire.

“A guy recorded his fucking washing machine and uploaded it to YouTube with Content ID,” Albino said in a video on X. “And now I’m getting copyright claims” while “my money” is “going into the toilet and being given to this fucking slime.”

Albino suggested that YouTube had potentially allowed Audego to make invalid copyright claims for years without detecting the seemingly obvious abuse.

“How is this still here?” Albino asked. “It took me one Google search to figure this out,” and “now I’m sharing revenue with this? That’s insane.”

At first, Team YouTube gave Albino a boilerplate response on X, writing, “We understand how important it is for you. From your vid, it looks like you’ve recently submitted a dispute. When you dispute a Content ID claim, the person who claimed your video (the claimant) is notified and they have 30 days to respond.”

Albino expressed deep frustration at YouTube’s response, given how “egregious” he considered the copyright abuse to be.

“Just wait for the person blatantly stealing copyrighted material to respond,” Albino responded to YouTube. “Ah okay, yes, I’m sure they did this in good faith and will make the correct call, though it would be a shame if they simply clicked ‘reject dispute,’ took all the ad revenue money and forced me to risk having my channel terminated to appeal it!! XDxXDdxD!! Thanks Team YouTube!”

Soon after, YouTube confirmed on X that Audego’s copyright claim was indeed invalid. The platform ultimately released the claim and told Albino to expect the changes to be reflected on his channel within two business days.

Ars could not immediately reach YouTube or Albino for comment.

Widespread abuse of Content ID continues

YouTubers have complained about abuse of Content ID for years. Techdirt’s Timothy Geigner agreed with Albino’s assessment that the YouTube system is “hopelessly broken,” noting that sometimes content is flagged by mistake. But just as easily, bad actors can abuse the system to claim “content that simply isn’t theirs” and seize sums that sometimes run into the millions in ad revenue.

In 2021, YouTube announced that it had invested “hundreds of millions of dollars” to create content management tools, of which Content ID quickly emerged as the platform’s go-to solution to detect and remove copyrighted materials.

At that time, YouTube claimed that Content ID was created as a “solution for those with the most complex rights management needs,” like movie studios and record labels whose movie clips and songs are most commonly uploaded by YouTube users. YouTube warned that without Content ID, “rightsholders could have their rights impaired and lawful expression could be inappropriately impacted.”

Since its rollout, more than 99 percent of copyright actions on YouTube have consistently been triggered automatically through Content ID.

And just as consistently, YouTube has seen widespread abuse of Content ID, terminating “tens of thousands of accounts each year that attempt to abuse our copyright tools,” YouTube said. YouTube also acknowledged in 2021 that “just one invalid reference file in Content ID can impact thousands of videos and users, stripping them of monetization or blocking them altogether.”

To help rightsholders and creators track how much copyrighted content is removed from the platform, YouTube started releasing biannual transparency reports in 2021. The Electronic Frontier Foundation (EFF), a nonprofit digital rights group, applauded YouTube’s “move towards transparency” while criticizing YouTube’s “claim that YouTube is adequately protecting its creators.”

“That rings hollow,” EFF reported in 2021, noting that “huge conglomerates have consistently pushed for more and more restrictions on the use of copyrighted material, at the expense of fair use and, as a result, free expression.” As EFF saw it then, YouTube’s Content ID system mainly served to appease record labels and movie studios, while creators felt “pressured” not to dispute Content ID claims out of “fear” that their channel might be removed if YouTube consistently sided with rights holders.

According to YouTube, “it’s impossible for matching technology to take into account complex legal considerations like fair use or fair dealing,” and that impossibility seemingly ensures that creators bear the brunt of automated actions even when it’s fair to use copyrighted materials.

At that time, YouTube described Content ID as “an entirely new revenue stream from ad-supported, user generated content” for rights holders, who made more than $5.5 billion from Content ID matches by December 2020. More recently, YouTube reported that figure had climbed above $9 billion, as of December 2022. With so much money at play, it’s easy to see how the system could be seen as disproportionately favoring rights holders, while creators continue to suffer from income diverted by the automated system.

Google accused of secretly tracking drivers with disabilities

Google needs to pump the brakes when it comes to tracking sensitive information shared with DMV sites, a new lawsuit suggests.

Filing a proposed class-action suit in California, Katherine Wilson has accused Google of using Google Analytics and DoubleClick trackers on the California DMV site to unlawfully obtain information about her personal disability without her consent.

This, Wilson argued, violated the Driver’s Privacy Protection Act (DPPA), as well as the California Invasion of Privacy Act (CIPA), and impacted perhaps millions of drivers who had no way of knowing Google was collecting sensitive information shared only for DMV purposes.

“Google uses the personal information it obtains from motor vehicle records to create profiles, categorize individuals, and derive information about them to sell its customers the ability to create targeted marketing and advertising,” Wilson alleged.

According to Wilson, California’s DMV “encourages” drivers “to use its website rather than visiting one of the DMV’s physical locations” without telling drivers that Google has trackers all over its site.

Likely due to promoting the website’s convenience, the DMV reported a record number of online transactions in 2020, Wilson’s complaint said. And people with disabilities have taken advantage of that convenience. In 2023, approximately “40 percent of the 1.6 million disability parking placard renewals occurred online.”

Wilson most recently visited the DMV site last summer to renew her disability parking placard online. At that time, she did not know that Google obtained her personal information when she filled out her application, communicated directly with the DMV, searched on the site, or clicked on various URLs, all of which she said revealed that either she had a disability or believed she had a disability.

Her complaint alleged that Google secretly gathers information about the contents of the DMV’s online users’ searches, logging sensitive keywords like “teens,” “disabled drivers,” and any “inquiries regarding disabilities.”

Google “knowingly” obtained this information, Wilson alleged, to quietly expand user profiles for ad targeting, “intentionally” disregarding DMV website users’ “reasonable expectation of privacy.”

“Google then uses the personal information and data to generate revenue from the advertising and marketing services that Google sells to businesses and individuals,” Wilson’s complaint alleged. “That Plaintiff and Class Members would not have consented to Google obtaining their personal information or learning the contents of their communications with the DMV is not surprising.”

Congressman James P. Moran, who sponsored the DPPA in 1994, made it clear that the law was enacted specifically to keep marketers from taking advantage of computers making it easy to “pull up a person’s DMV record” with the “click of a button.”

Even back then, some people were instantly concerned about any potential “invasion of privacy,” Moran said, noting that “if you review the way in which people are classified by direct marketers based on DMV information, you can see why some individuals might object to their personal information being sold.”

Another US state repeals law that protected ISPs from municipal competition

Win for municipal broadband —

With Minnesota repeal, number of states restricting public broadband falls to 16.

Minnesota this week eliminated two laws that made it harder for cities and towns to build their own broadband networks. The state-imposed restrictions were repealed in an omnibus commerce policy bill signed on Tuesday by Gov. Tim Walz, a Democrat.

Minnesota was previously one of about 20 states that imposed significant restrictions on municipal broadband. The number can differ depending on who’s counting because of disagreements over what counts as a significant restriction. But the list has gotten smaller in recent years because states including Arkansas, Colorado, and Washington repealed laws that hindered municipal broadband.

The Minnesota bill enacted this week repealed a requirement that municipal telecommunications networks be approved in an election with 65 percent of the vote. The law is over a century old, the Institute for Local Self-Reliance’s Community Broadband Network Initiative wrote yesterday.

“Though intended to regulate telephone service, the way the law had been interpreted after the invention of the Internet was to lump broadband in with telephone service thereby imposing that super-majority threshold to the building of broadband networks,” the broadband advocacy group said.

The Minnesota omnibus bill also changed a law that let municipalities build broadband networks, but only if no private providers offer service or will offer service “in the reasonably foreseeable future.” That restriction had been in effect since at least the year 2000.

The caveat that prevented municipalities from competing against private providers was eliminated from the law when this week’s omnibus bill was passed. As a result, the law now lets cities and towns “improve, construct, extend, and maintain facilities for Internet access and other communications purposes” even if private ISPs already offer service.

“States are dropping misguided barriers”

The omnibus bill also added language intended to keep government-operated and private networks on a level playing field. The new language says cities and towns may “not discriminate in favor of the municipality’s own communications facilities by granting the municipality more favorable or less burdensome terms and conditions than a nonmunicipal service provider” with respect to the use of public rights-of-way, publicly owned equipment, and permitting fees.

Additional new language requires “separation between the municipality’s role as a regulator… and the municipality’s role as a competitive provider of services,” and forbids the sharing of “inside information” between the local government’s regulatory and service-provider divisions.

With Minnesota having repealed its anti-municipal broadband laws, the Institute for Local Self-Reliance says that 16 states still restrict the building of municipal networks.

The Minnesota change “is a significant win for the people of Minnesota and highlights a positive trend—states are dropping misguided barriers to deploying public broadband as examples of successful community-owned networks proliferate across the country,” said Gigi Sohn, executive director of the American Association for Public Broadband (AAPB), which represents community-owned broadband networks and co-ops.

There are about 650 public broadband networks in the US, Sohn said. “While 16 states still restrict these networks in various ways, we’re confident this number will continue to decrease as more communities demand the freedom to choose the network that best serves their residents,” she said.

State laws restricting municipal broadband have been passed for the benefit of private ISPs. Although cities and towns generally only build networks when private ISPs haven’t fully met their communities’ needs, those attempts to build municipal networks often face opposition from private ISPs and “dark money” groups that don’t reveal their donors.

Biden’s new import rules will hit e-bike batteries too

tariff tussle —

The tariffs’ effects on the bike industry are still up in the air.

Last week, the Biden administration announced it would levy dramatic new tariffs on electric vehicles, electric vehicle batteries, and battery components imported into the United States from China. The move kicked off another round of global debate on how best to push the transportation industry toward an emissions-free future, and how global automotive manufacturers outside of China should compete with the Asian country’s well-engineered and low-cost car options.

But what is an electric vehicle exactly? China has dominated bicycle manufacturing, too; it was responsible for some 80 percent of US bicycle imports in 2021, according to one report. In cycling circles, the US’s new trade policies have raised questions about how much bicycle companies will have to pay to get Chinese-made bicycles and components into the US, and whether any new costs will get passed on to US customers.

On Wednesday, the Office of the United States Trade Representative—the US agency that creates trade policy—clarified that e-bike batteries would be affected by the new policy, too.

In a written statement, Angela Perez, a spokesperson for the USTR, said that e-bike batteries imported from China on their own will be subject to new tariffs of 25 percent in 2026, up from 7.5 percent.

But it’s unclear whether imported complete e-bikes, as well as other cycling products including children’s bicycles and bicycle trailers, might be affected by new US trade policies. These products have technically been subject to 25 percent tariffs since the Trump administration. But US trade officials have consistently used exclusions to waive tariffs for many of those cycling products. The latest round of exclusions are set to expire at the end of this month.

Perez, the USTR spokesperson, said the future of tariff exclusions related to bicycles would be “addressed in the coming days.”

If the administration does not extend tariff exclusions for some Chinese-made bicycle products, “it will not help adoption” of e-bikes, says Matt Moore, the head of policy at the bicycle advocacy group PeopleForBikes. Following the announcement of additional tariffs on Chinese products earlier this month, PeopleForBikes urged its members to contact local representatives and advocate for an extension of the tariff exclusions. The group estimates tariff exclusions have saved the bike industry more than $130 million since 2018. It’s hard to pinpoint how much this has saved bicycle buyers, but in general, Moore says, companies that pay higher “landed costs”—that is, the cost of the product to get from the factory floor to an owner’s home—raise prices to cover their margins.

The tariff tussle comes as the US is in the midst of an extended electric bicycle boom. US sales of e-bikes peaked in 2022 at $903 million, up from $240 million in 2019, according to Circana’s Retail Tracking Service. Sales spiked as Americans looked for ways to get active and take advantage of the pandemic era’s empty streets. E-bike sales fell last year, but have ticked up by 4 percent since the start of 2024, according to Circana.

In the US, climate-conscious state and local governments have started to think more seriously about subsidizing electric bicycles in the way they have electric autos. States including Colorado and Hawaii give rebates to income-qualified residents. E-bike rebate programs in Denver and Connecticut were so popular among cyclists that they ran out of funding in days.

A paper published last year by researchers with the University of California, Davis, suggests these sorts of programs might work. It found that people who used local and state rebate programs to buy e-bikes reported bicycling more after their purchases. Almost 40 percent of respondents said they replaced at least one weekly car trip with their e-bike in the long-term—the kind of shift that could put a noticeable dent in carbon emissions.

This story originally appeared on wired.com

OpenAI backpedals on scandalous tactic to silence former employees

That settles that? —

OpenAI releases employees from evil exit agreement in staff-wide memo.

OpenAI CEO Sam Altman.

Former and current OpenAI employees received a memo this week that the AI company hopes will end the most embarrassing scandal that Sam Altman has ever faced as OpenAI’s CEO.

The memo finally clarified for employees that OpenAI would not enforce a non-disparagement contract that employees since at least 2019 were pressured to sign within a week of termination or else risk losing their vested equity. For an OpenAI employee, that could mean losing millions for expressing even mild criticism about OpenAI’s work.

You can read the full memo below in a post on X (formerly Twitter) from Andrew Carr, a former OpenAI employee whose LinkedIn confirms that he left the company in 2021.

“I guess that settles that,” Carr wrote on X.

OpenAI faced a major public backlash when Vox revealed the unusually restrictive language in the non-disparagement clause last week after OpenAI co-founder and chief scientist Ilya Sutskever resigned, along with his superalignment team co-leader Jan Leike.

As questions swirled regarding these resignations, the former OpenAI staffers provided little explanation for why they suddenly quit. Sutskever basically wished OpenAI well, expressing confidence “that OpenAI will build AGI that is both safe and beneficial,” while Leike only offered two words: “I resigned.”

Amid an explosion of speculation about whether OpenAI was perhaps forcing out employees or doing dangerous or reckless AI work, some wondered if OpenAI’s non-disparagement agreement was keeping employees from warning the public about what was really going on at OpenAI.

According to Vox, employees had to sign the exit agreement within a week of quitting or else potentially lose millions in vested equity that could be worth more than their salaries. The extreme terms of the agreement were “fairly uncommon in Silicon Valley,” Vox found, allowing OpenAI to effectively censor former employees by requiring that they never criticize OpenAI for the rest of their lives.

“This is on me and one of the few times I’ve been genuinely embarrassed running OpenAI,” Altman posted on X, while claiming, “I did not know this was happening and I should have.”

Vox reporter Kelsey Piper called Altman’s apology “hollow,” noting that Altman had recently signed separation letters that seemed to “complicate” his claim that he was unaware of the harsh terms. Piper reviewed hundreds of pages of leaked OpenAI documents and reported that in addition to financially pressuring employees to quickly sign exit agreements, OpenAI also threatened to block employees from selling their equity.

Even requests for an extra week to review the separation agreement, which could afford the employees more time to seek legal counsel, were seemingly denied—”as recently as this spring,” Vox found.

“We want to make sure you understand that if you don’t sign, it could impact your equity,” an OpenAI representative wrote in an email to one departing employee. “That’s true for everyone, and we’re just doing things by the book.”

OpenAI Chief Strategy Officer Jason Kwon told Vox that the company began reconsidering this language about a month before the controversy hit.

“We are sorry for the distress this has caused great people who have worked hard for us,” Kwon told Vox. “We have been working to fix this as quickly as possible. We will work even harder to be better.”

Altman sided with OpenAI’s biggest critics, writing on X that the non-disparagement clause “should never have been something we had in any documents or communication.”

“Vested equity is vested equity, full stop,” Altman wrote.

These long-awaited updates make clear that OpenAI will never claw back vested equity if employees leave the company and then openly criticize its work (unless both parties sign a non-disparagement agreement). Prior to this week, some former employees feared steep financial retribution for sharing true feelings about the company.

One former employee, Daniel Kokotajlo, publicly posted that he refused to sign the exit agreement, even though he had no idea how to estimate how much his vested equity was worth. He guessed it represented “about 85 percent of my family’s net worth.”

And while Kokotajlo said that he wasn’t sure if the sacrifice was worth it, he still felt it was important to defend his right to speak up about the company.

“I wanted to retain my ability to criticize the company in the future,” Kokotajlo wrote.

Even mild criticism could seemingly cost employees, like Kokotajlo, who confirmed that he was leaving the company because he was “losing confidence” that OpenAI “would behave responsibly” when developing generative AI.

In OpenAI’s defense, the company confirmed that it had never enforced the exit agreements. But now, OpenAI’s spokesperson told CNBC, OpenAI is backtracking and “making important updates” to its “departure process” to eliminate any confusion the prior language caused.

“We have not and never will take away vested equity, even when people didn’t sign the departure documents,” OpenAI’s spokesperson said. “We’ll remove non-disparagement clauses from our standard departure paperwork, and we’ll release former employees from existing non-disparagement obligations unless the non-disparagement provision was mutual.”

The memo sent to current and former employees reassured everyone at OpenAI that “regardless of whether you executed the Agreement, we write to notify you that OpenAI has not canceled, and will not cancel, any Vested Units.”

“We’re incredibly sorry that we’re only changing this language now; it doesn’t reflect our values or the company we want to be,” OpenAI’s spokesperson said.

Sky voice actor says nobody ever compared her to ScarJo before OpenAI drama

Scarlett Johansson attends the Golden Heart Awards in 2023.

OpenAI is sticking to its story that it never intended to copy Scarlett Johansson’s voice when seeking an actor for ChatGPT’s “Sky” voice mode.

The company provided The Washington Post with documents and recordings clearly meant to support OpenAI CEO Sam Altman’s defense against Johansson’s claims that Sky was made to sound “eerily similar” to her critically acclaimed voice acting performance in the sci-fi film Her.

Johansson has alleged that OpenAI hired a soundalike to steal her likeness and confirmed that she declined to provide the Sky voice. Experts have said that Johansson has a strong case should she decide to sue OpenAI for violating her right to publicity, which gives the actress exclusive rights to the commercial use of her likeness.

In OpenAI’s defense, The Post reported that the company’s voice casting call flier did not seek a “clone of actress Scarlett Johansson,” and initial voice test recordings of the unnamed actress hired to voice Sky showed that her “natural voice sounds identical to the AI-generated Sky voice.” Because of this, OpenAI has argued that “Sky’s voice is not an imitation of Scarlett Johansson.”

What’s more, an agent for the unnamed Sky actress who was cast—both granted anonymity to protect her client’s safety—confirmed to The Post that her client said she was never directed to imitate either Johansson or her character in Her. She simply used her own voice and got the gig.

The agent also provided a statement from her client that claimed that she had never been compared to Johansson before the backlash started.

This all “feels personal,” the voice actress said, “being that it’s just my natural voice and I’ve never been compared to her by the people who do know me closely.”

However, OpenAI apparently reached out to Johansson after casting the Sky voice actress. During outreach last September and again this month, OpenAI seemed to want to substitute the Sky voice actress’s voice with Johansson’s voice—which is ironically what happened when Johansson got cast to replace the original actress hired to voice her character in Her.

Altman has clarified that timeline in a statement provided to Ars that emphasized that the company “never intended” Sky to sound like Johansson. Instead, OpenAI tried to snag Johansson to voice the part after realizing—seemingly just as Her director Spike Jonze did—that the voice could potentially resonate with more people if Johansson did it.

“We are sorry to Ms. Johansson that we didn’t communicate better,” Altman’s statement said.

Johansson has not yet made any public indications that she intends to sue OpenAI over this supposed miscommunication. But if she did, legal experts told The Post and Reuters that her case would be strong because of legal precedent set in high-profile lawsuits brought by singers Bette Midler and Tom Waits, which blocked companies from misappropriating their voices.

Why Johansson could win if she sued OpenAI

In 1988, Bette Midler sued Ford Motor Company for hiring a soundalike to perform Midler’s song “Do You Want to Dance?” in a commercial intended to appeal to “young yuppies” by referencing popular songs from their college days. Midler had declined to do the commercial and accused Ford of exploiting her voice to endorse its product without her consent.

This groundbreaking case proved that a distinctive voice like Midler’s cannot be deliberately imitated to sell a product. It did not matter that the singer hired for the commercial had used her natural singing voice, because “a number of people” told Midler that the performance “sounded exactly” like her.

Midler’s case set a powerful precedent preventing companies from appropriating parts of performers’ identities—essentially stopping anyone from stealing a well-known voice that otherwise could not be bought.

“A voice is as distinctive and personal as a face,” the court ruled, concluding that “when a distinctive voice of a professional singer is widely known and is deliberately imitated in order to sell a product, the sellers have appropriated what is not theirs.”

Like in Midler’s case, Johansson could argue that plenty of people think that the Sky voice sounds like her and that OpenAI’s product might be more popular if it had a Her-like voice mode. Comics on popular late-night shows joked about the similarity, including Johansson’s husband, Saturday Night Live comedian Colin Jost. And other people close to Johansson agreed that Sky sounded like her, Johansson has said.

The main apparent difference between Johansson’s case and Midler’s is the casting timeline that OpenAI is working hard to defend.

OpenAI seems to think that because Johansson was offered the gig after the Sky voice actor was cast, she cannot claim that the company hired the other actor only after she declined.

The timeline may not matter as much as OpenAI may think, though. In the 1990s, Tom Waits cited Midler’s case when he won a $2.6 million lawsuit after Frito-Lay hired a Waits impersonator to perform a song that “echoed the rhyming word play” of a Waits song in a Doritos commercial. Waits won his suit even though Frito-Lay never attempted to hire the singer before casting the soundalike.

US sues Ticketmaster and owner Live Nation, seeks breakup of monopoly

Ticketmaster advertisements at the United States v. South Africa women’s soccer match at Soldier Field on September 24, 2023, in Chicago, Illinois.

Getty Images | Daniel Bartel/ISI Photos/USSF

The US government today sued Live Nation and its Ticketmaster subsidiary in a complaint that seeks a breakup of the company that dominates the live music and events market.

The US Department of Justice is seeking “structural relief,” including a breakup, “to stop the anticompetitive conduct arising from Live Nation’s monopoly power.” The DOJ complaint asked a federal court to “order the divestiture of, at minimum, Ticketmaster, along with any additional relief as needed to cure any anticompetitive harm.”

The District of Columbia and 29 states joined the DOJ in the lawsuit filed in US District Court for the Southern District of New York. “One monopolist serves as the gatekeeper for the delivery of nearly all live music in America today: Live Nation, including its wholly owned subsidiary Ticketmaster,” the complaint said.

US Attorney General Merrick Garland said during a press conference that “Live Nation relies on unlawful, anticompetitive conduct to exercise its monopolistic control over the live events industry in the United States… The result is that fans pay more in fees, artists have fewer opportunities to play concerts, smaller promoters get squeezed out, and venues have fewer real choices for ticketing services.”

“It is time to break it up,” Garland said.

Live Nation: We aren’t a monopoly

Garland said that Live Nation directly manages more than 400 artists, controls over 60 percent of concert promotions at major venues, and owns or controls over 60 percent of large amphitheaters. In addition to acquiring venues directly, Live Nation uses exclusive ticketing contracts with venues that last over a decade to exercise control, Garland said.

Garland said Ticketmaster imposes a “seemingly endless list of fees on fans,” including ticketing fees, service fees, convenience fees, order fees, handling fees, and payment processing fees. Live Nation and Ticketmaster control “roughly 80 percent or more of major concert venues’ primary ticketing for concerts and a growing share of ticket resales in the secondary market,” the lawsuit said.

Live Nation defended its business practices in a statement provided to Ars today, saying the lawsuit won’t solve problems “relating to ticket prices, service fees, and access to in-demand shows.”

“Calling Ticketmaster a monopoly may be a PR win for the DOJ in the short term, but it will lose in court because it ignores the basic economics of live entertainment, such as the fact that the bulk of service fees go to venues and that competition has steadily eroded Ticketmaster’s market share and profit margin,” the company said. “Our growth comes from helping artists tour globally, creating lasting memories for millions of fans, and supporting local economies across the country by sustaining quality jobs. We will defend against these baseless allegations, use this opportunity to shed light on the industry, and continue to push for reforms that truly protect consumers and artists.”

Live Nation said its profits aren’t high enough to justify the DOJ lawsuit.

“The defining feature of a monopolist is monopoly profits derived from monopoly pricing,” the company said. “Live Nation in no way fits the profile. Service charges on Ticketmaster are no higher than other ticket marketplaces, and frequently lower.” Live Nation said its net profit margin last fiscal year was 1.4 percent and claimed that “there is more competition than ever in the live events market.”

US sues Ticketmaster and owner Live Nation, seeks breakup of monopoly Read More »

lawmakers-say-section-230-repeal-will-protect-children—opponents-predict-chaos

Lawmakers say Section 230 repeal will protect children—opponents predict chaos

Section 230 repeal bill —

Repeal bill is bipartisan but has opponents from across the political spectrum.

A US lawmaker speaks at a congressional hearing

Enlarge / US Rep. Frank Pallone, Jr. (D-N.J.), right, speaks as House Commerce Committee Chair Cathy McMorris Rodgers (R-Wash.) looks on during a hearing about TikTok on Thursday, March 23, 2023.

Getty Images | Tom Williams

A proposed repeal of Section 230 is designed to punish Big Tech but is also facing opposition from library associations, the Internet Archive, the owner of Wikipedia, and advocacy groups from across the political spectrum who say a repeal is bad for online speech. Opposition poured in before a House hearing today on the bipartisan plan to “sunset” Section 230 of the Communications Decency Act, which gives online platforms immunity from lawsuits over how they moderate user-submitted content.

Lawmakers defended the proposed repeal. House Commerce Committee Ranking Member Frank Pallone, Jr. (D-N.J.) today said that “Section 230 has outlived its usefulness and has played an outsized role in creating today’s ‘profits over people’ Internet” and criticized what he called “Big Tech’s constant scare tactics about reforming Section 230.”

Pallone teamed up with Commerce Committee Chair Cathy McMorris Rodgers (R-Wash.) to propose the Section 230 repeal. The lawmakers haven’t come up with a replacement for the law, a tactic that some critics predict will lead to legislative chaos. A hearing memo said the draft bill “would sunset Section 230 of the Communications Act effective on December 31, 2025,” but claimed the “intent of the legislation is not to have Section 230 actually sunset, but to encourage all technology companies to work with Congress to advance a long-term reform solution to Section 230.”

McMorris Rodgers and Pallone wrote a Wall Street Journal op-ed alleging that “Big Tech companies are exploiting the law to shield them from any responsibility or accountability as their platforms inflict immense harm on Americans, especially children.”

While politicians are focused on Big Tech, one letter sent to lawmakers said the proposal “fails to recognize the indispensable role that Section 230 plays in fostering a diverse and innovative digital landscape across many industries that extends far beyond the realm of only large technology corporations.”

Library and Internet groups defend Section 230

The letter was sent by the American Library Association, the Association of Research Libraries, the Consumer Technology Association, Creative Commons, Educause, Incompas, the Internet Archive, the Internet Infrastructure Coalition, the Internet Society, and the Wikimedia Foundation.

Section 230 is essential for small and medium-sized tech businesses, educational institutions, libraries, ISPs, and many others, the letter said:

By narrowly framing the debate around the interests of “Big Tech,” there is a risk of misunderstanding the far-reaching implications of altering or dismantling Section 230. The heaviest costs and burdens of such action would fall on the millions of stakeholders we represent who, unlike large companies, do not have the resources to navigate a flood of content-based lawsuits. While it may seem that such changes will not “break the Internet,” this perspective overlooks the intricate interplay of legal liability and innovation that underpins the entire digital infrastructure.

Opposition this week also came from the Electronic Frontier Foundation, which said that “Section 230 is essential to protecting individuals’ ability to speak, organize, and create online.”

“The law is not a shield for Big Tech,” the EFF wrote. “Critically, the law benefits the millions of users who don’t have the resources to build and host their own blogs, email services, or social media sites, and instead rely on services to host that speech. Section 230 also benefits thousands of small online services that host speech. Those people are being shut out as the bill sponsors pursue a dangerously misguided policy.”

The EFF said it worries that if Big Tech helps Congress write a Section 230 replacement, the new law won’t “protect and benefit Internet users, as Section 230 does currently.”

Lawmakers say Section 230 repeal will protect children—opponents predict chaos Read More »

investigation-shows-how-easy-it-is-to-find-escorts,-oxycodone-on-eventbrite

Investigation shows how easy it is to find escorts, oxycodone on Eventbrite

Eventbrite headquarters in downtown San Francisco

This June, approximately 150 motorcycles will thunder down Route 9W in Saugerties, New York, for Ryan’s Ride for Recovery. Organized by Vince Kelder and his family, the barbecue and raffle will raise money to support their sober-living facility and honor their son who tragically died from a heroin overdose in 2015 after a yearslong drug addiction.

The Kelders established Raising Your Awareness about Narcotics (RYAN) to help others struggling with substance-use disorder. For years, the organization has relied on Eventbrite, an event management and ticketing website, to arrange its events. This year, however, alongside listings for Ryan’s Ride and other addiction recovery events, Eventbrite surfaced listings peddling illegal sales of prescription drugs like Xanax, Valium, and oxycodone.

“It’s criminal,” Vince Kelder says. “They’re preying on people trying to get their lives back together.”

Eventbrite prohibits listings dedicated to selling illegal substances on its platform. It’s one of the 16 categories of content the company’s policies restrict its users from posting. But a WIRED investigation found more than 7,400 events published on the platform that appeared to violate one or more of these terms.

Among these listings were pages claiming to sell fentanyl powder “without a prescription,” accounts pushing the sale of Social Security numbers, and pages offering a “wild night with independent escorts” in India. Some linked to sites offering such wares as Gmail accounts, Google reviews (positive and negative), and TikTok and Instagram likes and followers, among other services.

At least 64 of the event listings advertising drugs included links to online pharmacies that the National Association of Boards of Pharmacy has flagged as untrustworthy or unsafe. Amanda Hils, a spokesperson for the US Food and Drug Administration, says the agency does not comment on individual cases without a thorough review, but broadly some online pharmacies that appear legitimate may be “operating illegally and selling medicines that can be dangerous or even deadly.”

Eventbrite didn’t just publish these user-generated event listings; its algorithms appeared to actively recommend them to people through simple search queries or in “related events,” a section at the bottom of an event’s page showing users similar events they might be interested in. Posts selling illegal prescription drugs appeared in search results next to the RYAN event, and a search for “opioid” in the United States showed Eventbrite’s recommendation algorithm suggesting a conference for opioid treatment practitioners between two listings for ordering oxycodone.

Robin Pugh, the executive director of nonprofit cybercrime-fighting organization Intelligence for Good, which first alerted WIRED to some of the listings, says it is quick and easy to identify the illicit posts on Eventbrite and that other websites that allow “user-generated content” are also plagued by scammers uploading posts in similar ways.

Investigation shows how easy it is to find escorts, oxycodone on Eventbrite Read More »

tesla-shareholder-group-opposes-musk’s-$46b-pay,-slams-board-“dysfunction”

Tesla shareholder group opposes Musk’s $46B pay, slams board “dysfunction”

A photoshopped image of Elon Musk emerging from an enormous pile of money.

Aurich Lawson / Duncan Hull / Getty

A Tesla shareholder group yesterday urged other shareholders to vote against Elon Musk’s $46 billion pay package, saying the Tesla board is dysfunctional and “overly beholden to CEO Musk.” The group’s letter also urged shareholders to vote against the reelection of board members Kimbal Musk and James Murdoch.

“Tesla is suffering from a material governance failure which requires our urgent attention and action,” and its board “is stacked with directors that have close personal ties to CEO Elon Musk,” the letter said. “There are multiple indications that these ties, coupled with excessive director compensation, prevent the level of critical and independent thinking required for effective governance.”

Tesla shareholders approved Elon Musk’s pay package in 2018, but it was nullified by a court ruling in January 2024. After a lawsuit filed by a shareholder, Delaware Court of Chancery Judge Kathaleen McCormick ruled that the pay plan was unfair to Tesla shareholders and must be rescinded.

McCormick wrote that most of Tesla’s board members were beholden to Musk or had compromising conflicts and that Tesla’s board provided false and misleading information to shareholders before the 2018 vote. Musk and the rest of the Tesla board subsequently asked shareholders to approve a transfer of Tesla’s state of incorporation from Delaware to Texas and to reinstate Musk’s pay package. Votes can be submitted before Tesla’s annual meeting on June 13.

The pay package was previously estimated to be worth $56 billion, but the stock options in the plan were more recently valued at $46 billion.

“Tesla has clearly lagged”

From March 2020 to November 2021, Tesla’s share price rose from $28.51 to $409.71. But it “has since fallen to $172.63, a decline of $237.08 or 62 percent from its peak,” the letter opposing the pay package said.

“Over the past three years, and especially over the past year, Tesla has clearly lagged behind its competitors and the broader market. We believe that the distractions caused by Musk’s many projects, particularly his decision to buy Twitter, have played a material role in Tesla’s underperformance,” the letter said.

Tesla’s reputation has been harmed by Musk’s “public fights with regulators, acquisition of Twitter, controversial statements on X, and his legal and personal troubles,” the letter said. The letter was sent by New York City Comptroller Brad Lander and investors including Amalgamated Bank, AkademikerPension, Nordea Asset Management, SOC Investment Group, and United Church Funds.

Musk has taken advantage of lax oversight in order “to use Tesla as a coffer for himself and his other business endeavors,” the letter said. It continued:

In 2022, Musk admitted to using Tesla engineers to work on issues at Twitter (now known as X), and defended the decision by saying that no Tesla Board member had stopped him from using Tesla staff for his other businesses. More recently, Musk has begun poaching top engineers from Tesla’s AI and autonomy team for his new company, xAI, including Ethan Knight, who was computer vision chief at Tesla.

This is on the heels of Musk’s post on X that he is “uncomfortable growing Tesla to be a leader in AI & robotics without having ~25% voting control,” a move widely seen as a threat to push Tesla’s Board to grant him another mega pay package.

The Tesla board “continues to allow Musk to be overcommitted” as he devotes “significant amounts of time to his roles at X, SpaceX, Neuralink, the Boring Company and other companies,” the letter said.

Tesla shareholder group opposes Musk’s $46B pay, slams board “dysfunction” Read More »

“csam-generated-by-ai-is-still-csam,”-doj-says-after-rare-arrest

“CSAM generated by AI is still CSAM,” DOJ says after rare arrest

“CSAM generated by AI is still CSAM,” DOJ says after rare arrest

The US Department of Justice has started cracking down on the use of AI image generators to produce child sexual abuse materials (CSAM).

On Monday, the DOJ arrested Steven Anderegg, a 42-year-old “extremely technologically savvy” Wisconsin man who allegedly used Stable Diffusion to create “thousands of realistic images of prepubescent minors,” which were then distributed on Instagram and Telegram.

The cops were tipped off to Anderegg’s alleged activities after Instagram flagged direct messages that were sent on Anderegg’s Instagram account to a 15-year-old boy. Instagram reported the messages to the National Center for Missing and Exploited Children (NCMEC), which subsequently alerted law enforcement.

During the Instagram exchange, the DOJ found that Anderegg sent sexually explicit AI images of minors soon after the teen made his age known, alleging that “the only reasonable explanation for sending these images was to sexually entice the child.”

According to the DOJ’s indictment, Anderegg is a software engineer with “professional experience working with AI.” Because of his “special skill” in generative AI (GenAI), he was allegedly able to generate the CSAM using a version of Stable Diffusion, “along with a graphical user interface and special add-ons created by other Stable Diffusion users that specialized in producing genitalia.”

After Instagram reported Anderegg’s messages to the minor, cops seized Anderegg’s laptop and found “over 13,000 GenAI images, with hundreds—if not thousands—of these images depicting nude or semi-clothed prepubescent minors lasciviously displaying or touching their genitals” or “engaging in sexual intercourse with men.”

In his messages to the teen, Anderegg seemingly “boasted” about his skill in generating CSAM, the indictment said. The DOJ alleged that evidence from his laptop showed that Anderegg “used extremely specific and explicit prompts to create these images,” including “specific ‘negative’ prompts—that is, prompts that direct the GenAI model on what not to include in generated content—to avoid creating images that depict adults.” These go-to prompts were stored on his computer, the DOJ alleged.

Anderegg is currently in federal custody and has been charged with production, distribution, and possession of AI-generated CSAM, as well as “transferring obscene material to a minor under the age of 16,” the indictment said.

Because the DOJ suspected that Anderegg intended to use the AI-generated CSAM to groom a minor, the DOJ is arguing that there are “no conditions of release” that could prevent him from posing a “significant danger” to his community while the court mulls his case. The DOJ warned the court that it’s highly likely that any future contact with minors could go unnoticed, as Anderegg is seemingly tech-savvy enough to hide any future attempts to send minors AI-generated CSAM.

“He studied computer science and has decades of experience in software engineering,” the indictment said. “While computer monitoring may address the danger posed by less sophisticated offenders, the defendant’s background provides ample reason to conclude that he could sidestep such restrictions if he decided to. And if he did, any reoffending conduct would likely go undetected.”

If convicted of all four counts, he could face “a total statutory maximum penalty of 70 years in prison and a mandatory minimum of five years in prison,” the DOJ said. Partly because of “special skill in GenAI,” the DOJ—which described its evidence against Anderegg as “strong”—suggested that they may recommend a sentencing range “as high as life imprisonment.”

Announcing Anderegg’s arrest, Deputy Attorney General Lisa Monaco made it clear that creating AI-generated CSAM is illegal in the US.

“Technology may change, but our commitment to protecting children will not,” Monaco said. “The Justice Department will aggressively pursue those who produce and distribute child sexual abuse material—or CSAM—no matter how that material was created. Put simply, CSAM generated by AI is still CSAM, and we will hold accountable those who exploit AI to create obscene, abusive, and increasingly photorealistic images of children.”

“CSAM generated by AI is still CSAM,” DOJ says after rare arrest Read More »

openai-pauses-chatgpt-4o-voice-that-fans-said-ripped-off-scarlett-johansson

OpenAI pauses ChatGPT-4o voice that fans said ripped off Scarlett Johansson

“Her” —

“Sky’s voice is not an imitation of Scarlett Johansson,” OpenAI insists.

Enlarge / Scarlett Johansson and Joaquin Phoenix attend Her premiere during the 8th Rome Film Festival at Auditorium Parco Della Musica on November 10, 2013, in Rome, Italy.

OpenAI has paused a voice mode option for ChatGPT-4o, Sky, after backlash accusing the AI company of intentionally ripping off Scarlett Johansson’s critically acclaimed voice-acting performance in the 2013 sci-fi film Her.

In a blog defending their casting decision for Sky, OpenAI went into great detail explaining its process for choosing the individual voice options for its chatbot. But ultimately, the company seemed pressed to admit that Sky’s voice was just too similar to Johansson’s to keep using it, at least for now.

“We believe that AI voices should not deliberately mimic a celebrity’s distinctive voice—Sky’s voice is not an imitation of Scarlett Johansson but belongs to a different professional actress using her own natural speaking voice,” OpenAI’s blog said.

OpenAI is not naming the actress, or any of the ChatGPT-4o voice actors, to protect their privacy.

A week ago, OpenAI CEO Sam Altman seemed to invite this controversy by posting “her” on X (formerly Twitter) after announcing the ChatGPT audio-video features that he said made it more “natural” for users to interact with the chatbot.

Altman has said that Her, a movie about a man who falls in love with his virtual assistant, is among his favorite movies. He told conference attendees at Dreamforce last year that the movie “was incredibly prophetic” when depicting “interaction models of how people use AI,” The San Francisco Standard reported. And just last week, Altman touted GPT-4o’s new voice mode by promising, “it feels like AI from the movies.”

But OpenAI’s chief technology officer, Mira Murati, has said that GPT-4o’s voice modes were less inspired by Her than by studying the “really natural, rich, and interactive” aspects of human conversation, The Wall Street Journal reported.

In 2013, of course, critics praised Johansson’s Her performance as expressively capturing a wide range of emotions, which is exactly what Murati described as OpenAI’s goals for its chatbot voices. Rolling Stone noted how effectively Johansson naturally navigated between “tones sweet, sexy, caring, manipulative, and scary.” Johansson achieved this, the Hollywood Reporter said, by using a “vivacious female voice that breaks attractively but also has an inviting deeper register.”

Her director/screenwriter Spike Jonze was so intent on finding the right voice for his film’s virtual assistant that he replaced British actor Samantha Morton late in the film’s production. According to Vulture, Jonze realized that Morton’s “maternal, loving, vaguely British, and almost ghostly” voice didn’t fit his film as well as Johansson’s “younger,” “more impassioned” voice, which he said brought “more yearning.”

Late-night shows had fun mocking OpenAI’s demo featuring the Sky voice, which showed the chatbot seemingly flirting with engineers, giggling through responses like “oh, stop it. You’re making me blush.” Where The New York Times described these demo interactions as Sky being “deferential and wholly focused on the user,” The Daily Show‘s Desi Lydic joked that Sky was “clearly programmed to feed dudes’ egos.”

OpenAI is likely hoping to avoid any further controversy amidst plans to roll out more voices soon that its blog said will “better match the diverse interests and preferences of users.”

OpenAI did not immediately respond to Ars’ request for comment.

Voice actors versus AI

The OpenAI controversy arrives at a moment when many are questioning AI’s impact on creative communities, triggering early lawsuits from artists and book authors. Just this month, Sony opted all of its artists out of AI training to stop voice clones from ripping off top talents like Adele and Beyoncé.

Voice actors, too, have been monitoring increasingly sophisticated AI voice generators, waiting to see what threat AI might pose to future work opportunities. Recently, two actors sued an AI start-up called Lovo that they claimed “illegally used recordings of their voices to create technology that can compete with their voice work,” The New York Times reported. According to that lawsuit, Lovo allegedly used the actors’ actual voice clips to clone their voices.

“We don’t know how many other people have been affected,” the actors’ lawyer, Steve Cohen, told The Times.

Rather than replacing voice actors, OpenAI’s blog said the company is striving to support the voice industry as it creates chatbots that will laugh at your jokes or mimic your mood. On top of paying voice actors “compensation above top-of-market rates,” OpenAI said it “worked with industry-leading casting and directing professionals to narrow down over 400 submissions” to the five voice options in the initial roll-out of audio-video features.

OpenAI’s goal was to hire talents “from diverse backgrounds or who could speak multiple languages,” casting actors whose voices feel “timeless” and “inspire trust.” To OpenAI, that meant finding actors with a “warm, engaging, confidence-inspiring, charismatic voice with rich tone” that sounds “natural and easy to listen to.”

For ChatGPT-4o’s first five voice actors, the gig lasted about five months before leading to more work, OpenAI said.

“We are continuing to collaborate with the actors, who have contributed additional work for audio research and new voice capabilities in GPT-4o,” OpenAI said.

Arguably, though, these actors are helping to train AI tools that could one day replace them. And the backlash defending Johansson—one of the world’s highest-paid actors—perhaps shows that fans won’t take direct mimicry of Hollywood’s biggest stars lightly.

While criticism of the Sky voice seemed widespread, some fans seemed to think that OpenAI has overreacted by pausing the Sky voice.

NYT critic Alissa Wilkinson wrote that it was only “a tad jarring” to hear Sky’s voice because “she sounded a whole lot” like Johansson. And replying to OpenAI’s X post announcing its decision to pull the voice feature for now, a clump of fans protested the AI company’s “bad decision,” with some complaining that Sky was the “best” and “hottest” voice.

At least one fan noted that OpenAI’s decision seemed to hurt the voice actor behind Sky most.

“Super unfair for the Sky voice actress,” a user called Ate-a-Pi wrote. “Just because she sounds like ScarJo, now she can never make money again. Insane.”

OpenAI pauses ChatGPT-4o voice that fans said ripped off Scarlett Johansson Read More »