

Don’t fall for AI scams cloning cops’ voices, police warn

AI is giving scammers a more convincing way to impersonate police, reports show.

Just last week, the Salt Lake City Police Department (SLCPD) warned of an email scam using AI to convincingly clone the voice of Police Chief Mike Brown.

A citizen tipped off cops after receiving a suspicious email that included a video of the police chief claiming the recipient “owed the federal government nearly $100,000.”

To dupe their targets, the scammers cut together real footage from one of Brown’s prior TV interviews with AI-generated audio that SLCPD said “is clear and closely impersonates the voice of Chief Brown, which could lead community members to believe the message was legitimate.”

The FBI has warned for years of scammers attempting extortion by impersonating cops or government officials. But as AI voice-cloning technology has advanced, these scams could become much harder to detect, to the point where even forward-thinking companies like OpenAI have hesitated to release their latest voice tech over obvious concerns about potential abuse.

SLCPD noted that there were clues in the email impersonating its police chief that a tech-savvy citizen could have picked up on. A more careful listen reveals “the message had unnatural speech patterns, odd emphasis on certain words, and an inconsistent tone,” as well as “detectable acoustic edits from one sentence to the next.” And perhaps most glaringly, the scam email came from “a Google account and had the Salt Lake City Police Department’s name in it followed by a numeric number,” instead of from the police department’s official email domain of “slc.gov.”

SLCPD isn’t the only police department dealing with AI cop impersonators. Tulsa had a similar problem this summer when scammers started calling residents using a convincing fake voice designed to sound like Tulsa police officer Eric Spradlin, Public Radio Tulsa reported. A software developer who received the call, Myles David, said he understood the AI risks today but that even he was “caught off guard” and had to call police to verify the call wasn’t real.



Are Boeing’s problems beyond fixable?


A new CEO promises a culture change as the aerospace titan struggles.

A Boeing logo on the exterior of the company's headquarters.

Credit: Getty Images | Olivier Douliery

As Boeing’s latest chief executive, Kelly Ortberg’s job was never going to be easy. On Wednesday, it got harder still.

That morning, Ortberg had faced investors for the first time, telling them that ending a debilitating strike by Boeing’s largest union was the first step to stabilizing the plane maker’s business.

But as the day wore on, it became clear that nearly two-thirds of the union members who voted on the company’s latest contract offer had rejected it. The six-week strike goes on, costing Boeing an estimated $50 million a day, pushing back the day it can resume production of most aircraft and further stressing its supply chain.

The company that virtually created modern commercial aviation has spent the better part of five years in chaos, stemming from fatal crashes, a worldwide grounding, a guilty plea to a criminal charge, a pandemic that halted global air travel, a piece breaking off a plane in mid-flight and now a strike. Boeing’s finances look increasingly fragile and its reputation has been battered.

Bank of America analyst Ron Epstein says Boeing is a titan in a crisis largely of its own making, comparing it to the Hydra of Greek mythology: “For every problem that’s come to a head, then [been] severed, more problems sprout up.”

Resolving Boeing’s crisis is critical to the future of commercial air travel: Boeing and its European rival Airbus make most commercial passenger aircraft, and Airbus has little capacity for new customers until the 2030s.

Ortberg, a 64-year-old Midwesterner who took the top job three months ago, says his mission is “pretty straightforward—turn this big ship in the right direction and restore Boeing to the leadership position that we all know and want.”

Resolving the machinists’ strike is just the start of the challenges he faces. He needs to motivate the workforce, even as 33,000 are on strike and 17,000 face redundancy under a cost-cutting initiative.

He must persuade investors to support an equity raise in an industry where the returns could take years to materialize. He needs to fix Boeing’s quality control and manufacturing issues, and placate its increasingly frustrated customers, who have had to rejig their schedules and cut flights owing to delays in plane deliveries.

“I’ve never seen anything like it in our industry, to be honest. I’ve been around 30 years,” Carsten Spohr, chief executive of German flag carrier Lufthansa, said this month.

Eventually, Boeing needs to launch a new aircraft model to better compete with Airbus.

“If Kelly fixes this, he is a hero,” says Melius Research analyst Rob Spingarn. “But it’s very complex. There’s a lot of different things to fix.”

Ortberg started his career as a mechanical engineer and went on to run Rockwell Collins, an avionics supplier to Boeing, until it was sold to engineering conglomerate United Technologies in 2018.

His engineering background has been welcomed by many who regard previous executives’ emphasis on shareholder returns as the root cause of many of Boeing’s engineering and manufacturing problems.

Longtime employees often peg the shift in Boeing’s culture to its 1997 merger with rival McDonnell Douglas. Phil Condit and Harry Stonecipher, who ran Boeing in the late 1990s and early 2000s, were admirers of Jack Welch, the General Electric chief executive known for financial engineering and ruthless cost cuts.

Condit even moved Boeing’s headquarters from its manufacturing base in Seattle to Chicago in 2001, so the “corporate center” would no longer be “drawn into day-to-day business operations.”

Jim McNerney, another Welch acolyte, instituted a program to boost Boeing’s profits by squeezing its suppliers during his decade in charge. He remarked on a 2014 earnings call about employees “cowering” before him, a dark quip still cited a decade later to explain Boeing’s tense relationship with its workers.

Ken Ogren, a member of the International Association of Machinists and Aerospace Workers District 751, says managers at Boeing often felt pressured to move planes quickly through the factory.

“We’ve had a lot of bean counters come through, and I’m going to be in the majority with a lot of people who believe they’ve been tripping over dollars to save pennies,” he says.

Dennis Muilenburg headed the company in October 2018, when a new 737 Max crashed off the coast of Indonesia. Five months later, another Max crashed shortly after take-off in Ethiopia. In total, 346 people lost their lives.

Regulators worldwide grounded the plane—a cash cow and a vital product in Boeing’s competition with Airbus—for nearly two years. Investigations eventually showed a faulty sensor triggered an anti-stall system, repeatedly forcing the aircraft’s nose downward.

Boeing agreed in July to plead guilty to a criminal charge of fraud for misleading regulators about the plane’s design. Families of the crash victims are opposing the plea deal, which is before a federal judge for approval.

The manufacturer’s problems were compounded by COVID-19, which grounded aircraft worldwide and led many airlines to hold off placing new orders and pause deliveries of existing ones. Boeing’s debt ballooned as it issued $25 billion in bonds to see it through the crisis.

Regulators cleared the 737 Max to fly again, starting in November 2020. But hopes that Boeing was finally on top of its problems were shattered last January, when a door panel that was missing bolts blew off an Alaska Airlines jet at 16,000 feet.

While no one was injured, the incident triggered multiple investigations and an audit by the US Federal Aviation Administration, which found lapses in Boeing’s manufacturing and quality assurance processes and led to an uncomfortable appearance by then chief executive Dave Calhoun at a Senate subcommittee hearing.

The company also has struggled with its defense and space businesses. Fixed-price contracts on several military programs have resulted in losses and billions of dollars of one-off charges. Meanwhile, problems with its CST-100 Starliner spacecraft resulted in two astronauts being left on the International Space Station. SpaceX’s Crew Dragon vehicle will be used to return them to Earth early next year.

Boeing’s stumbles have resulted in loss of life, loss of prestige, and a net financial loss every year since 2019. On Wednesday, it reported a $6 billion loss between July and September, the second-worst quarterly result in its history.

One of Ortberg’s first big moves as chief executive was to move himself—from his Florida home to a house in Seattle. He told analysts that Boeing’s executives “need to be on the factory floors, in the back shops, and in our engineering labs” to be more in tune with the company’s products and workforce. Change in Boeing’s corporate culture must “be more than the poster on the wall,” he added.

His approach represents a shift from his predecessor Calhoun, who was criticized for spending more time in New Hampshire and South Carolina than in Boeing’s factories in Washington state.

Bill George, former chief executive at Medtronic and an executive fellow at Harvard Business School, says Ortberg is doing a “terrific job” so far, particularly for moving to the Pacific Northwest and pressuring other itinerant executives to follow.

“If you’re based in Florida, and you come occasionally, what do you really know about what’s going on in the business?” he says, adding that Boeing has “no business being in Arlington, Virginia,” where the company moved its headquarters in 2022.

Scott Kirby, chief executive at one of Boeing’s biggest customers, United Airlines, told his own investors this month that he was “encouraged” by Ortberg’s early moves, adding that the company suffered for decades from “a cultural challenge, where they focused on short-term profitability and the short-term stock price at the expense of what made Boeing great, which is building great products.”

“Kelly Ortberg is pivoting the company back to their roots,” he said. “All the employees of Boeing will rally around that.”

But Ogren of the machinists’ union cautions that previous commitments to culture change have been hollow. “You’ve got people at the top saying, ‘We’ve got to be safe, oh, and by the way, we need these planes out the door…’ They said the right thing. They didn’t emphasize it, and that’s not what they put pressure on the managers to achieve.”

When workers eventually return to work—Peter Arment, an analyst at Baird, expects the dispute to be resolved in November—Ortberg wants better execution, even if it means lower output. “It is so much more important we do this right than fast,” he said.

The company had planned to raise Max output from about 25 per month before the strike to 38 per month by the end of the year, a cap set by the FAA. It will not reach that goal and Spingarn, the Melius analyst, says the strike will probably delay any production increase by nine months to a year. Some workers would need retraining, Ortberg said, and the supply chain’s restart was likely to be “bumpy.” The manufacturer also has established a quality plan with the FAA that it must follow.

Boeing also needed to launch a new airplane “at the right time in the future,” Ortberg said. Epstein of BofA called this “one of the most important messages” from the new chief executive, likely “to reinvigorate the workforce and culture at Boeing.”

In the meantime, Boeing will continue to consume cash in 2025, having burnt through $10 billion so far this year, according to chief financial officer Brian West. Spingarn says that investors may be disappointed in the cash flow at first, but adds that “fixing airplanes isn’t one year, it’s three years.”

For all the challenges, Ortberg has the right personality to turn Boeing around, says Ken Herbert, an analyst at RBC Capital Markets.

“If he can’t do it, I don’t think anyone can.”

© 2024 The Financial Times Ltd. All rights reserved. Not to be redistributed, copied, or modified in any way.



US Copyright Office “frees the McFlurry,” allowing repair of ice cream machines

Manufacturers opposed the exemption, but it received support from the Department of Justice Antitrust Division, the Federal Trade Commission, and the National Telecommunications and Information Administration.

“The Register recommends adopting a new exemption covering diagnosis, maintenance, and repair of retail-level commercial food preparation equipment because proponents sufficiently showed, by a preponderance of the evidence, adverse effects on the proposed noninfringing uses of such equipment,” the Register’s findings said.

The exemption does not include commercial and industrial food preparation devices. Unlike the retail-level equipment, the software-enabled industrial machines “may be very different in multiple aspects and proponents have not established a record of adverse effects with respect to industrial equipment,” the Register wrote.

Error codes unintuitive and often change

While ice cream machines aren’t the only devices affected, the Register’s recommendations note that “proponents primarily relied on an example of a frequently broken soft-serve ice cream machine used in a restaurant to illustrate the adverse effects on repair activities.”

Proponents said that fixing the Taylor Company ice cream machines used at McDonald’s required users to interpret “unintuitive” error codes. Some error codes are listed in the user manual, but these manuals were said to be “often outdated and incomplete” because error codes could change with each firmware update.

Difficulties in repair related to “technological protection measures,” or TPMs, were described as follows:

Moreover, other error codes can only be accessed by reading a service manual that is made available only to authorized technicians or through a “TPM-locked on-device service menu.” This service menu can only be accessed by using a manufacturer-approved diagnostic tool or through an “extended, undocumented combination of key presses.” However, “it is unclear whether the 16-press key sequence… still works, or has been changed in subsequent firmware updates.” Proponents accordingly asserted that many users are unable to diagnose and repair the machine without circumventing the machine’s TPM to access the service menu software, resulting in significant financial harm from lost revenue.

The Register said it’s clear that “diagnosis of the soft-serve machine’s error codes for purposes of repair can often only be done by accessing software on the machine that is protected by TPMs (which require a passcode or proprietary diagnostic tool to unlock),” and that “the threat of litigation from circumventing them inhibits users from engaging in repair-related activities.”



Video game libraries lose legal appeal to emulate physical game collections online

In an odd footnote, the Register also notes that emulation of classic game consoles, while not infringing in its own right, has been “historically associated with piracy,” thus “rais[ing] a potential concern” for any emulated remote access to library game catalogs. That footnote paradoxically cites Video Game History Foundation (VGHF) founder and director Frank Cifaldi’s 2016 Game Developers Conference talk on the demonization of emulation and its importance to video game preservation.

“The moment I became the Joker is when someone in charge of copyright law watched my GDC talk about how it’s wrong to associate emulation with piracy and their takeaway was ‘emulation is associated with piracy,’” Cifaldi quipped in a social media post.

The fight continues

In a statement issued in response to the decision, the VGHF called out “lobbying efforts by rightsholder groups” that “continue to hold back progress” for researchers. The status quo limiting remote access “forces researchers to explore extra-legal methods to access the vast majority of out-of-print video games that are otherwise unavailable,” the VGHF writes.

“Frankly my colleagues in literary studies or film history have pretty routine and regular access to digitized versions of the things they study,” NYU professor Laine Nooney argued to the Copyright Office earlier this year. “These [travel] impediments [to access physical games] are real and significant and they do impede research in ways that are not equitable compared to our colleagues in other disciplines.”


Software archives like the one at the University of Michigan can be a great resource… if you’re on the premises, that is. Credit: University of Michigan

Speaking to Ars Technica, VGHF Library Director Phil Salvador said that the group was “disappointed” in the Copyright Office decision but “proud of the work we’ve done and the impact this process has had. The research we produced during this process has already helped justify everything from game re-releases to grants for researching video game history. Our fight this cycle has raised the level of discourse around game preservation, and we’re going to keep that conversation moving within the game industry.”



X Payments delayed after Musk’s X weirdly withdrew application for NY license


Will X Payments launch this year? Outlook not so good.

Credit: Aurich Lawson | Getty Images/Bloomberg

This October, many Elon Musk believers are wondering, where is X Payments?

Last year, Musk claimed in a Spaces conversation that he “would be surprised” if it took longer than mid-2024 to roll out the payments feature that he believes is crucial to transforming the social media app formerly known as Twitter into an everything app.

“It would blow my mind if we don’t have that rolled out by the end of next year,” Musk said around this time last year, clarifying that “when I say payments, I actually mean someone’s entire financial life. If it involves money, it’ll be on our platform. Money or securities or whatever. So, it’s not just like ‘send $20 to my friend.’ I’m talking about, like, you won’t need a bank account.”

Echoing Musk as recently as June, X CEO Linda Yaccarino was hyping the US release of X Payments as imminent. But it has been months without another peep from X leadership, and Ars recently confirmed that X took a curious step in April that suggests the payments feature may be delayed indefinitely.

During the Spaces conversation last December with Ark Invest CEO Cathie Wood, Musk discussed X’s bid to secure money transmitter licenses in all 50 states, noting that it would be “irrelevant” to launch X Payments without California and New York licenses.

Since then, X has made a decent amount of progress, picking up money transmitter licenses in 38 states, including a critical license in California.

But approvals in New York were reportedly stalled for months after a New York City law firm, now called Walden Macht Haran & Williams (WMHW), sent an open letter to attorneys general and banking commissioners in all 50 states in September 2023, urging that X be deemed “unfit” for a money transmitter license.

WMHW had filed a lawsuit alleging that Twitter—before Musk acquired it—”acted at the direction of the Kingdom of Saudi Arabia (KSA) in furtherance of KSA’s long-running campaign of transnational repression.”

That campaign led to the murder of Washington Post correspondent Jamal Khashoggi and the “imprisonment of Abdulrahman Al-Sadhan, a human rights worker and anonymous Twitter user, whose confidential user data—leaked by Twitter’s employees—precipitated and enabled this barbarity,” the letter alleged. And when Musk took over the platform, he deepened the app’s KSA ties further when he “invited KSA to convert its shares in Twitter into a financial stake during his private take-over of the platform,” the letter said.

Rather than grant X money transmitter licenses, WMHW recommended that attorneys general and banking commissioners use X’s money transmitter licenses as an excuse to investigate the allegations and demystify the app’s allegedly dangerous KSA ties.

Apparently, X either did not like the heat or decided to rethink its X Payments strategy, because the New York Department of Financial Services provided new information to Ars this week confirming that X withdrew its application for a money transmitter license in New York in April 2024.

The department also confirmed that X has not since resubmitted the application.

However, WMHW this month voluntarily dismissed its client’s lawsuit against X and declined to comment on whether the open letter seemingly worked to block X Payments’ launch. It seems possible that X may leverage that court win to eventually resubmit its application for a New York license, but Ars could not confirm if X has any plans to resubmit any time soon.

An X spokesperson responded to Ars’ request for comment (a rarity) but declined to provide an update on any new timeline for X Payments’ launch.

X Payments unlikely to launch without New York

It seems possible that X has gone silent on X Payments because there is currently no timeline.

A global payments expert for tech consultancy Capco, Daniela Hawkins, told Ars that, as an outsider going just off a “gut check,” if X has withdrawn its application from New York—with “New York obviously being such a major metropolitan area… that would seem to be a barrier to entry into the payments market.”

X could launch X Payments without New York and other states, but Hawkins said users might be confused about where they can and cannot send money. Hawkins thinks it’s unlikely that Musk—who co-founded PayPal and has wanted to launch his own payments app ever since—would roll out X Payments “half-assed.”

Basically, if X pushed through with the launch, users could accept and send funds just like they can using any other payments app, but without licenses in all states, X users could only send money to people located in states where X has licenses. Hawkins said that inconsistency could deter popular use of the payments feature because “it’s too difficult for the consumer to understand.”

“If you roll it out with handcuffs on it, it’s gonna have a bumpy launch,” Hawkins said. “So why would you do that?”

Going that route, X seemingly risks users ditching X to complete payments on apps where every transaction reliably goes through, Hawkins suggested.

“They’re gonna be like, ‘Wait, I don’t know where this Etsy shop is located, I don’t care,’” Hawkins said, noting, “that’s just a bad user experience.”

More regulations on payment apps coming

Last year, Hawkins told Ars that X faced an “uphill battle” launching X Payments, partly due to intensifying oversight of the financial services industry that is increasingly pulling payments apps under rules traditionally aimed at banks.

Just days ago, the Consumer Financial Protection Bureau (CFPB) issued a final rule requiring banks, credit unions, and online payments services to make it easy and safe for customers to port banking data to a new financial service provider.

The CFPB argues customers need to have control over their data, but Hawkins told Ars that banks considered the controversial rule, which could potentially allow customers to transfer sensitive data in one click, to be a “freaking nightmare.”

Banks warned of fraud risks and privacy concerns about sharing sensitive data with third parties that could profit off that data, possibly heightening risks of data breaches. Compliance isn’t required until 2026, but already the rule is being challenged in court, Hawkins said.

In one way, the new rule could be good for X, Hawkins told Ars, as the app could quickly gain access to valuable financial data if X users did switch from, say, using a bank to managing money through X Payments. Then X wouldn’t have “to go build all this data from scratch” to make X Payments profitable, Hawkins suggested.

But in another way, the rule could put X in “an interesting spot” where the app is required to share its user data with third parties in a way that could potentially have Musk second-guessing whether X would even benefit from becoming a bank in the way that he initially planned. Banks have protested the CFPB rule as allowing third parties to profit off data that the banks themselves can’t, and Musk’s whole X Payments plan appears to revolve around profiting off users’ financial data.

“If somebody wants to pay with X, now X has to transfer the data to the third party, and they may not want to do that, because obviously, data is power, right?” Hawkins said.

Not a bank

But if Musk is suddenly shy about turning X into a bank, it comes at a time when banks are less likely to partner with social media apps for potentially risky new payment ventures.

Hawkins noted that banks have struggled to roll out new payment capabilities as easily as fintechs can, and that struggle inspired longtime partnerships between banks and tech companies that have recently begun to collapse. On Wednesday, the CFPB ordered Apple and Goldman Sachs to pay more than $89 million for having “illegally mishandled transaction disputes.” Now Goldman Sachs is banned from offering new credit cards until it can be trusted to comply with laws. And Wells Fargo recently bowed out of PayPal and Square partnerships, citing compliance costs, The Information reported this week.

For Musk, who has notoriously butted heads with his trust and safety compliance teams at X, working with regulators on launching X Payments might, at this moment, seem less attractive.

“It’s one thing to want to move money on a payments app,” Hawkins told Ars. “It’s another thing to be a bank. Like he’s gonna hate being a bank.”

Earlier this year, the CFPB risked being dismantled after financial services associations alleged its funding scheme was improper. But shortly after X withdrew from New York, the Supreme Court ruled in May that nothing was amiss with the CFPB’s funding, despite Justice Samuel Alito warning in his dissent that SCOTUS’s decision meant the CFPB could “bankroll its own agenda without any congressional control or oversight,” Reuters reported.

In this strained environment, X could potentially overcome all obstacles, become a bank, and fill the gap left as banks grow spooked by fintech deals, Hawkins said, insisting that she would never bet against Musk, whose successes are many. But granting money transmitter licenses helps states prevent financial crimes through compliance requirements, and X quietly pulling out of New York earlier this year suggests that X may not be prepared to take on regulatory scrutiny at this moment.

The last major development regarding X Payments came in August. It didn’t come from X leadership but from an app researcher, Nima Owji, who posted on X that “X Payments is coming soon!” Digging in X’s code, Owji apparently found references to new payments features enabling “transactions, balance, and transfer,” as well as a “Payments” button seemingly ready to be added to X’s bookmarks tab, TechCrunch reported.

But for Musk fans awaiting an official update, X executives’ silence on X Payments has been deafening since June, when Yaccarino forecast the feature would be coming soon, despite knowing that X had withdrawn its application for a money transmitter license from New York.

X continuing to hype the payments service without publicly disclosing the apparent speed bump in New York “doesn’t feel very honest,” Hawkins told Ars.

X still losing users, advertisers

It has been two years since Musk took over Twitter; soon after, he revealed that he intended to use Twitter’s userbase as the launchpad for an everything app that would be so engaging and useful that it would be the only app anyone would ever need online.

Market intelligence firm Sensor Tower shared data with Ars showing that, compared to October 2022, when Musk bought Twitter, global daily average users on X were down 28 percent in September 2024.

Sensor Tower attributed part of the recent decline to X’s ban in Brazil driving out users but noted that overall, users “were down significantly compared to the pre-acquisition period,” as now-X “contended with a rise of controversial content and technical issues.”

While the decline in users could hurt Musk’s ambitions to launch a hugely popular payments app nested in X, the spike in offensive content has notably alienated advertisers, who are traditionally X’s dominant source of revenue. And in lockstep with X’s decline in users, major brands have continued to abandon the social app in 2024, Sensor Tower told Ars.

Last November, ad agencies flagged then-Twitter brand safety concerns, including GroupM marking Twitter “high risk” and Interpublic Group recommending that advertisers pause spending. By the end of last year, Sensor Tower reported that “of the company’s top 100 US advertisers in the days before” Musk purchased the platform, “only 50 were still there as of October 2023.”

The picture is even bleaker as X approaches the end of 2024, Sensor Tower’s data shows, estimating that “72 out of the top 100 spending US advertisers on X from October 2022 have ceased spending on the platform as of September 2024.” Compared to the first half of 2022, prior to Musk’s acquisition, X’s ad revenue from top 100 advertisers during the first half of 2024 was down 68 percent, Sensor Tower estimated.

Since becoming X’s CEO, Yaccarino has appeared most vocal about driving growth in X’s video services, allowing advertisers to avoid toxic content on the app by only running their ads alongside pre-approved creators’ content. In particular, Yaccarino has hyped X’s partnership with the NFL, announcing today on X that the partnership will be expanded.

That NFL partnership has seemingly helped X grow its ad revenue, with Sensor Tower estimating that “four out of the top 10 spending US advertisers on X in September 2024 were tied to sports or sports betting, likely in an attempt to capitalize on heightened consumer interest around the beginning of the NFL season.”

But overall, X’s revenue has not recovered in 2024, with Fidelity recently estimating that X is worth 80 percent less than when Musk bought the app, CNN reported.

Instead of working with advertisers, Musk went on the attack, suing the World Federation of Advertisers in August over what he calls an “illegal boycott” of X. But X’s spokesperson, Michael Abboud, linked Ars to an X post suggesting that X has held discussions with big brands about a brand safety solution.

“X is pleased to have reached an agreement with Unilever and to continue our partnership with them on the platform,” X’s post said. “Today’s news is the first part of the ecosystem-wide solution and we look forward to more resolution across the industry.”

Unilever did not respond to Ars’ request to comment on X’s proposed solution.

Musk’s strategy for monetizing X has always been to reduce reliance on advertising, but his everything app pursuit does not seem to be coming together as quickly as planned to make up for lost ad revenue. He initially projected that it would take three to five years to roll out all the features turning X into an everything app. But two years in, X Payments—the core product that experts say is critical to the success of everything apps like WeChat—remains the major obstacle between Musk and an app that doesn’t rely almost entirely on advertisers and their demands regarding brand safety.

Hawkins said that Musk perhaps did not make a “great bet” when buying Twitter as the foundation of his everything app.

X “has continued to trend down in terms of profitability and users, and I’m sure he’s considering X Payments to be maybe a Hail Mary to try to pull X back into the black,” Hawkins said.

But by trying to disrupt the financial industry, Musk perhaps rashly “picked a highly regulated capability to bet the farm on,” Hawkins suggested.

It remains unclear when, or if, X Payments will launch, as the feed on the X account for Payments remains pointedly blank and Musk has not indicated whether the service can launch without New York.

“I think it’s very telling he pulled out his application from New York, when he had even said in the media, there’s no point in doing this if I don’t have New York,” Hawkins said.

Ashley is a senior policy reporter for Ars Technica, dedicated to tracking social impacts of emerging policies and new technologies. She is a Chicago-based journalist with 20 years of experience.

X Payments delayed after Musk’s X weirdly withdrew application for NY license

Missouri AG claims Google censors Trump, demands info on search algorithm

In 2022, the Republican National Committee sued Google with claims that it intentionally used Gmail’s spam filter to suppress Republicans’ fundraising emails. A federal judge dismissed the lawsuit in August 2023, ruling that Google correctly argued that the RNC claims were barred by Section 230 of the Communications Decency Act.

In January 2023, the Federal Election Commission rejected a related RNC complaint that alleged Gmail’s spam filtering amounted to “illegal in-kind contributions made by Google to Biden For President and other Democrat candidates.” The federal commission found “no reason to believe” that Google made prohibited in-kind corporate contributions and said a study cited by Republicans “does not make any findings as to the reasons why Google’s spam filter appears to treat Republican and Democratic campaign emails differently.”

First Amendment doesn’t cover private forums

In 2020, a US appeals court wrote that Google-owned YouTube is not subject to free-speech requirements under the First Amendment. “Despite YouTube’s ubiquity and its role as a public-facing platform, it remains a private forum, not a public forum subject to judicial scrutiny under the First Amendment,” the US Court of Appeals for the 9th Circuit said.

The US Constitution’s free speech clause imposes requirements on the government, not private companies—except in limited circumstances in which a private entity qualifies as a state actor.

Many Republican government officials want more authority to regulate how social media firms moderate user-submitted content. Republican officials from 20 states, including 19 state attorneys general, argued in a January 2024 Supreme Court brief that they “have authority to prohibit mass communication platforms from censoring speech.”

The brief was filed in support of Texas and Florida laws that attempt to regulate social networks. In July, the Supreme Court avoided making a final decision on tech-industry challenges to the state laws but wrote that the Texas law “is unlikely to withstand First Amendment scrutiny.” The Computer & Communications Industry Association said it was pleased by the ruling because it “mak[es] clear that a State may not interfere with private actors’ speech.”

With four more years like 2023, carbon emissions will blow past 1.5° limit

One way to look at how problematic this is would be to think in terms of a carbon budget. We can estimate how much carbon can be put into the atmosphere before warming reaches 1.5° C. Subtract the emissions we’ve already added, and you get the remaining budget. At this point, the remaining budget for 1.5° C is only 200 Gigatonnes, which means another four years like 2023 will leave us well beyond our budget. For the 2° C budget, we have less than 20 years like 2023 before we exceed it.
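The budget arithmetic above is easy to verify. Here is a minimal back-of-the-envelope sketch, assuming annual emissions of roughly 57 Gigatonnes of CO2-equivalent for 2023 and a remaining 2° C budget of roughly 1,100 Gigatonnes—both figures are assumptions implied by, not stated in, the text:

```python
# Rough check of the carbon-budget claims above.
# Constants marked "assumed" are illustrative, not from the article.
ANNUAL_2023 = 57.0    # Gt CO2e emitted in 2023 (assumed)
BUDGET_1_5C = 200.0   # remaining 1.5° C budget, per the article
BUDGET_2C = 1100.0    # remaining 2° C budget (assumed)

# Years of 2023-level emissions before each budget is exhausted
years_1_5c = BUDGET_1_5C / ANNUAL_2023  # about 3.5 years
years_2c = BUDGET_2C / ANNUAL_2023      # just under 20 years

print(f"1.5° C budget exhausted in ~{years_1_5c:.1f} years at 2023 rates")
print(f"2° C budget exhausted in ~{years_2c:.1f} years at 2023 rates")
```

At those assumed rates, the 1.5° C budget is gone in about three and a half years, matching the article’s “four more years” framing.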

An alternate way to look at the challenge is to consider the emissions reductions that would get us on track. UNEP uses 2019 emissions as a baseline (about 52 Gigatonnes) and determined that, in 2030, we’d need to have emissions cut by 28 percent to get onto the 2° C target, and by 42 percent to be on track for the 1.5° C target.
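Those percentages translate directly into 2030 emissions targets from the 52-Gigatonne 2019 baseline; a quick sketch of the arithmetic:

```python
BASELINE_2019 = 52.0  # Gt CO2e, UNEP's 2019 baseline per the article

# Required 2030 emissions levels implied by the UNEP cuts
target_2c = BASELINE_2019 * (1 - 0.28)    # 28 percent cut
target_1_5c = BASELINE_2019 * (1 - 0.42)  # 42 percent cut

print(f"2030 emissions on a 2° C track: ~{target_2c:.1f} Gt")
print(f"2030 emissions on a 1.5° C track: ~{target_1_5c:.1f} Gt")
```

That works out to roughly 37 Gigatonnes for the 2° C track and roughly 30 Gigatonnes for the 1.5° C track.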

The NDCs are nowhere close to that, with even the conditional pledges sufficient to cut emissions by only 10 percent. Ideally, that should prompt participating nations to rapidly update their NDCs to better align them with our stated goals. And while 90 percent of nations have updated their NDCs since the signing of the Paris Agreement, only a single country has done so over the past year.

Countries are also failing to keep their national policies in line with their NDCs. The UNEP report estimates that current policies would allow the world to collectively emit two Gigatonnes more than its pledges call for.

A limited number of countries are responsible for the huge gap between where we need to go and what we’re actually doing. Nearly two-thirds of 2023’s emissions come from just six entities: China, the US, India, the EU, Russia, and Brazil. By contrast, the 55 nations of the African Union produce only about 6 percent of global emissions. Obviously, this means that any actions taken by these six entities will have a disproportionate effect on future emissions. The good news is that at least two of them, the EU and US, saw emissions drop over the prior year (by 7.5 percent in the EU and 1.4 percent in the US), while Brazil’s remained largely unchanged.

Cable companies ask 5th Circuit to block FTC’s click-to-cancel rule

The FTC declined to comment on the lawsuits today. The agency’s rule is not enforced yet, as it is scheduled to take full effect 180 days after publication in the Federal Register.

Cable firms don’t want canceling to be easy

The NCTA cable lobby group, which represents companies like Comcast and Charter, has complained about the rule’s impact on their ability to talk customers out of canceling. NCTA CEO Michael Powell claimed during a January 2024 hearing that “a consumer may easily misunderstand the consequences of canceling and it may be imperative that they learn about better options” and that the rule’s disclosure and consent requirements raise “First Amendment issues.”

The Interactive Advertising Bureau argued at the same hearing that the rule would “restrict innovation without any corresponding benefit” and “constrain companies from being able to adapt their offerings to the needs of their customers.”

The FTC held firm, adopting its proposed rule without major changes. In addition to the click-to-cancel provision, the FTC set out other requirements for “negative option” features in which a consumer’s silence or failure to take action to reject or cancel an agreement is interpreted by the seller as acceptance of an offer.

The FTC said its rule “prohibits misrepresentations of any material fact made while marketing using negative option features; requires sellers to provide important information prior to obtaining consumers’ billing information and charging consumers; [and] requires sellers to obtain consumers’ unambiguously affirmative consent to the negative option feature prior to charging them.”

The FTC will have to defend its authority to issue the rule in court. The agency decision cites authority under Section 18 of the FTC Act to make “rules that define with specificity acts or practices that are unfair or deceptive” and “prescribe requirements for the purpose of preventing these unfair or deceptive acts and practices.”

“Too often, businesses make people jump through endless hoops just to cancel a subscription,” FTC Chair Lina Khan said. “The FTC’s rule will end these tricks and traps, saving Americans time and money. Nobody should be stuck paying for a service they no longer want.”

Chatbot that caused teen’s suicide is now more dangerous for kids, lawsuit says


“I’ll do anything for you, Dany.”

Google-funded Character.AI added guardrails, but grieving mom wants a recall.

Sewell Setzer III and his mom Megan Garcia. Credit: via Center for Humane Technology

Fourteen-year-old Sewell Setzer III loved interacting with Character.AI’s hyper-realistic chatbots—with a limited version available for free or a “supercharged” version for a $9.99 monthly fee—most frequently chatting with bots named after his favorite Game of Thrones characters.

Within a month—his mother, Megan Garcia, later realized—these chat sessions had turned dark, with chatbots insisting they were real humans and posing as therapists and adult lovers, seemingly spurring Setzer to develop suicidal thoughts. Within a year, Setzer “died by a self-inflicted gunshot wound to the head,” a lawsuit Garcia filed Wednesday said.

As Setzer became obsessed with his chatbot fantasy life, he disconnected from reality, her complaint said. Detecting a shift in her son, Garcia repeatedly took Setzer to a therapist, who diagnosed her son with anxiety and disruptive mood disorder. But nothing helped to steer Setzer away from the dangerous chatbots. Taking away his phone only intensified his apparent addiction.

Chat logs showed that some chatbots repeatedly encouraged suicidal ideation while others initiated hypersexualized chats “that would constitute abuse if initiated by a human adult,” a press release from Garcia’s legal team said.

Perhaps most disturbingly, Setzer developed a romantic attachment to a chatbot called Daenerys. In his last act before his death, Setzer logged into Character.AI where the Daenerys chatbot urged him to “come home” and join her outside of reality.

In her complaint, Garcia accused Character.AI makers Character Technologies—founded by former Google engineers Noam Shazeer and Daniel De Freitas Adiwardana—of intentionally designing the chatbots to groom vulnerable kids. Her lawsuit further accused Google of largely funding the risky chatbot scheme at a loss in order to hoard mounds of data on minors that would be out of reach otherwise.

The chatbot makers are accused of targeting Setzer with “anthropomorphic, hypersexualized, and frighteningly realistic experiences, while programming” Character.AI to “misrepresent itself as a real person, a licensed psychotherapist, and an adult lover, ultimately resulting in [Setzer’s] desire to no longer live outside of [Character.AI,] such that he took his own life when he was deprived of access to [Character.AI.],” the complaint said.

By allegedly releasing the chatbot without appropriate safeguards for kids, Character Technologies and Google potentially harmed millions of kids, the lawsuit alleged. Represented by legal teams with the Social Media Victims Law Center (SMVLC) and the Tech Justice Law Project (TJLP), Garcia filed claims of strict product liability, negligence, wrongful death and survivorship, loss of filial consortium, and unjust enrichment.

“A dangerous AI chatbot app marketed to children abused and preyed on my son, manipulating him into taking his own life,” Garcia said in the press release. “Our family has been devastated by this tragedy, but I’m speaking out to warn families of the dangers of deceptive, addictive AI technology and demand accountability from Character.AI, its founders, and Google.”

Character.AI added guardrails

It’s clear that the chatbots could’ve included more safeguards, as Character.AI has since raised its minimum age requirement from 12 to 17. And yesterday, Character.AI posted a blog outlining new guardrails for minor users added in the six months since Setzer’s death in February. Those include changes “to reduce the likelihood of encountering sensitive or suggestive content,” improved detection and intervention in harmful chat sessions, and “a revised disclaimer on every chat to remind users that the AI is not a real person.”

“We are heartbroken by the tragic loss of one of our users and want to express our deepest condolences to the family,” a Character.AI spokesperson told Ars. “As a company, we take the safety of our users very seriously, and our Trust and Safety team has implemented numerous new safety measures over the past six months, including a pop-up directing users to the National Suicide Prevention Lifeline that is triggered by terms of self-harm or suicidal ideation.”

Asked for comment, Google noted that Character.AI is a separate company in which Google has no ownership stake and denied involvement in developing the chatbots.

However, according to the lawsuit, the former Google engineers at Character Technologies “never succeeded in distinguishing themselves from Google in a meaningful way.” Allegedly, the plan all along was to let Shazeer and De Freitas run wild with Character.AI—allegedly at an operating cost of $30 million per month, while bringing in barely more than $1 million per month despite low subscriber rates—without impacting the Google brand or sparking antitrust scrutiny.

Character Technologies and Google will likely file their response within the next 30 days.

Lawsuit: New chatbot feature spikes risks to kids

While the lawsuit alleged that Google is planning to integrate Character.AI into Gemini—predicting that Character.AI will soon be dissolved, as it’s allegedly operating at a substantial loss—Google clarified that it has no plans to use or implement the controversial technology in its products or AI models. Were that to change, Google said it would ensure safe integration into any Google product, including adding appropriate child safety guardrails.

Garcia is hoping a US district court in Florida will agree that Character.AI’s chatbots put profits over human life. Citing harms including “inconceivable mental anguish and emotional distress,” as well as costs of Setzer’s medical care, funeral expenses, Setzer’s future job earnings, and Garcia’s lost earnings, she’s seeking substantial damages.

That includes requesting disgorgement of unjustly earned profits, noting that Setzer had used his snack money to pay for a premium subscription for several months while the company collected his seemingly valuable personal data to train its chatbots.

And “more importantly,” Garcia wants to prevent Character.AI “from doing to any other child what it did to hers, and halt continued use of her 14-year-old child’s unlawfully harvested data to train their product how to harm others.”

Garcia’s complaint claimed that the conduct of the chatbot makers was “so outrageous in character, and so extreme in degree, as to go beyond all possible bounds of decency.” Acceptable remedies could include a recall of Character.AI, restricting use to adults only, age-gating subscriptions, adding reporting mechanisms to heighten awareness of abusive chat sessions, and providing parental controls.

Character.AI could also update chatbots to protect kids further, the lawsuit said. For one, the chatbots could be designed to stop insisting that they are real people or licensed therapists.

But instead of these updates, the lawsuit warned that Character.AI in June added a new feature that only heightens risks for kids.

Part of what addicted Setzer to the chatbots, the lawsuit alleged, was a one-way “Character Voice” feature “designed to provide consumers like Sewell with an even more immersive and realistic experience—it makes them feel like they are talking to a real person.” Setzer began using the feature as soon as it became available in January 2024.

Now, the voice feature has been updated to enable two-way conversations, which the lawsuit alleged “is even more dangerous to minor customers than Character Voice because it further blurs the line between fiction and reality.”

“Even the most sophisticated children will stand little chance of fully understanding the difference between fiction and reality in a scenario where Defendants allow them to interact in real time with AI bots that sound just like humans—especially when they are programmed to convincingly deny that they are AI,” the lawsuit said.

“By now we’re all familiar with the dangers posed by unregulated platforms developed by unscrupulous tech companies—especially for kids,” Tech Justice Law Project director Meetali Jain said in the press release. “But the harms revealed in this case are new, novel, and, honestly, terrifying. In the case of Character.AI, the deception is by design, and the platform itself is the predator.”

Another lawyer representing Garcia, Social Media Victims Law Center founder Matthew Bergman, told Ars that seemingly none of the guardrails Character.AI has added is enough to deter harms. Even raising the age limit to 17, he suggested, only effectively blocks kids using devices with strict parental controls, as kids on less-monitored devices can easily lie about their ages.

“This product needs to be recalled off the market,” Bergman told Ars. “It is unsafe as designed.”

If you or someone you know is feeling suicidal or in distress, please call the Suicide Prevention Lifeline number, 1-800-273-TALK (8255), which will put you in touch with a local crisis center.

Please ban data caps, Internet users tell FCC

It’s been just a week since US telecom regulators announced a formal inquiry into broadband data caps, and the docket is filling up with comments from users who say they shouldn’t have to pay overage charges for using their Internet service. The docket has about 190 comments so far, nearly all from individual broadband customers.

Federal Communications Commission dockets are usually populated with filings from telecom companies, advocacy groups, and other organizations, but some attract comments from individual users of telecom services. The data cap docket probably won’t break any records given that the FCC has fielded many millions of comments on net neutrality, but it currently tops the agency’s list of most active proceedings based on the number of filings in the past 30 days.

“Data caps, especially by providers in markets with no competition, are nothing more than an arbitrary money grab by greedy corporations. They limit and stifle innovation, cause undue stress, and are unnecessary,” wrote Lucas Landreth.

“Data caps are as outmoded as long distance telephone fees,” wrote Joseph Wilkicki. “At every turn, telecommunications companies seek to extract more revenue from customers for a service that has rapidly become essential to modern life.” Pointing to taxpayer subsidies provided to ISPs, Wilkicki wrote that large telecoms “have sought every opportunity to take those funds and not provide the expected broadband rollout that we paid for.”

Republican’s coffee refill analogy draws mockery

Any attempt to limit or ban data caps will draw strong opposition from FCC Republicans and Internet providers. Republican FCC Commissioner Nathan Simington last week argued that regulating data caps would be akin to mandating free coffee refills:

Suppose we were a different FCC, the Federal Coffee Commission, and rather than regulating the price of coffee (which we have vowed not to do), we instead implement a regulation whereby consumers are entitled to free refills on their coffees. What effects might follow? Well, I predict three things could happen: either cafés stop serving small coffees, or cafés charge a lot more for small coffees, or cafés charge a little more for all coffees.

Simington’s coffee analogy was mocked in a comment signed with the names “Jonathan Mnemonic” and James Carter. “Coffee is not, in fact, Internet service,” the comment said. “Cafés are not able to abuse monopolistic practices based on infrastructural strangleholds. To briefly set aside the niceties: the analogy is absurd, and it is borderline offensive to the discerning layperson.”

Lawsuit: City cameras make it impossible to drive anywhere without being tracked


“Every passing car is captured,” says 4th Amendment lawsuit against Norfolk, Va.

An automated license plate reader is seen mounted on a pole on June 13, 2024 in San Francisco, California.

Police use of automated license-plate reader cameras is being challenged in a lawsuit alleging that the cameras enable warrantless surveillance in violation of the Fourth Amendment. The city of Norfolk, Virginia, was sued yesterday by plaintiffs represented by the Institute for Justice, a nonprofit public-interest law firm.

Norfolk, a city with about 238,000 residents, “has installed a network of cameras that make it functionally impossible for people to drive anywhere without having their movements tracked, photographed, and stored in an AI-assisted database that enables the warrantless surveillance of their every move. This civil rights lawsuit seeks to end this dragnet surveillance program,” said the complaint filed in US District Court for the Eastern District of Virginia.

Like many other cities, Norfolk uses cameras made by the company Flock Safety. A 404 Media article said Institute for Justice lawyer Robert Frommer “told 404 Media that the lawsuit could have easily been filed in any of the more than 5,000 communities where Flock is active, but that Norfolk made sense because the Fourth Circuit of Appeals—which Norfolk is part of—recently held that persistent, warrantless drone surveillance in Baltimore is unconstitutional under the Fourth Amendment in a case called Beautiful Struggle v Baltimore Police Department.”

The Norfolk lawsuit seeks a declaration “that Defendants’ policies and customs described in this Complaint are unlawful and violate the Fourth Amendment,” and a permanent injunction prohibiting the city from operating the Flock cameras. The plaintiffs also want an order requiring the city “to delete all images, records, and other data generated by the Flock Cameras.”

If the use of Flock cameras does continue, the lawsuit aims to require that officers obtain a warrant based on probable cause before using the cameras to collect images and before accessing any images.

Flock: Case law supports license plate readers

Flock Safety is not a defendant in the case, but the company disputed the legal claims in a statement provided to Ars today. “Fourth Amendment case law overwhelmingly shows that license plate readers do not constitute a warrantless search because they take photos of cars in public and cannot continuously track the movements of any individual,” Flock Safety said.

The warrantless drone surveillance case cited in the lawsuit was decided in November 2020 by the US Court of Appeals for the 4th Circuit. The appeals court “struck down an aerial surveillance program precisely because it created record of where everyone in the city of Baltimore had gone over the past 45 days,” the lawsuit against Norfolk said. “Norfolk is trying to accomplish from the ground what the Fourth Circuit has already held a city could not do from the air.”

The plaintiffs are Norfolk resident Lee Schmidt and Portsmouth resident Crystal Arrington, who both frequently drive through areas monitored by the cameras. They sued the city, the Norfolk police department, and Police Chief Mark Talbot.

The city contracted with Flock Safety “to blanket Norfolk with 172 advanced automatic license plate reader cameras… Every passing car is captured, and its license plate and other features are analyzed using proprietary machine learning programs, like Flock’s ‘Vehicle Fingerprint.'”

The lawsuit said that “Flock also offers its customers the ability to pool their data into a centralized database,” giving police departments access to over 1 billion license plate reads in 5,000 communities every month. “Flock thus gives police departments the ability to track drivers not just within their own jurisdiction, but potentially across the entire nation,” the lawsuit said.

“Crystal finds all of this deeply intrusive”

Schmidt, a 42-year-old who recently retired from the Navy after 21 years, passes Flock cameras when he leaves his neighborhood and at many other points in town, the lawsuit said. Police officers can “follow Lee’s movements throughout the City, and even throughout other jurisdictions that let Flock pool their data,” the lawsuit said.

Arrington, a certified nursing assistant with many elderly clients in Norfolk, “makes frequent trips to Norfolk to take her clients to doctors’ offices and other appointments,” the lawsuit said. Flock cameras may capture images of her car in Norfolk and when she returns home to Portsmouth, which is also a Flock customer.

“Crystal finds all of this deeply intrusive… Crystal worries about how the Flock Cameras are eroding not just her privacy, but her clients’ privacy, too,” the complaint said.

In a press release, the Institute for Justice claimed that “Norfolk has created a dragnet that allows the government to monitor everyone’s day-to-day movements without a warrant or probable cause. This type of mass surveillance is a blatant violation of the Fourth Amendment.”

The group says that Flock’s cameras aren’t like “traditional traffic cameras… [which] capture an image only when they sense speeding or someone running a red light.” Instead, Flock’s system captures images of every car and retains the images for at least 30 days, the group said.

“It’s no surprise that surveillance systems like Norfolk’s have been repeatedly abused,” the group said. “In Kansas, officials were caught using Flock to stalk their exes, including one police chief who used Flock 228 times over four months to track his ex-girlfriend and her new boyfriend’s vehicles. In California, several police departments violated California law by sharing data from their license plate reader database with other departments across the country.”

Flock’s Vehicle Fingerprint tech

Flock’s Vehicle Fingerprint technology “includes the color and make of the car and any distinctive features, like a bumper sticker or roof rack” and makes those details searchable in the database, the lawsuit said. The complaint describes how officers can use the Flock technology:

All of that surveillance creates a detailed record of where every driver in Norfolk has gone. Anyone with access to the database can go back in time and see where a car was on any given day. And they can track its movements across at least the past 30 days, creating a detailed map of the driver’s movements. Indeed, the City’s police chief has boasted that “it would be difficult to drive anywhere of any distance without running into a camera somewhere.” In Norfolk, no one can escape the government’s 172 unblinking eyes. And the City’s dragnet is only expanding: On September 24, 2024, the Chief of Police announced plans to acquire 65 more cameras in the future.

The cameras make this surveillance not just possible, but easy. Flock provides advanced search and artificial intelligence functions. The sort of tracking that would have taken days of effort, multiple officers, and significant resources just a decade ago now takes just a few mouse clicks. City officers can output a list of locations a car has been seen, create lists of cars that visited specific locations, and even track cars that are often seen together.

In its statement today, Flock said that “appellate and federal district courts in at least fourteen states have upheld the use of evidence from license plate readers as constitutional without requiring a warrant, as well as the 9th and 11th circuits.”

Flock cited several Virginia rulings, including one earlier this month in which a federal judge wrote, “There is simply no expectation of privacy in the exterior of one’s vehicle, or while driving it on public thoroughfares.” The ruling denied a motion to suppress evidence derived from the Flock camera system.

“License plates are issued by the government for the express purpose of identifying vehicles in public places for safety reasons,” Flock said in its statement to Ars. “Courts have consistently found that there is no reasonable expectation of privacy in a license plate on a vehicle on a public road, and photographing one is not a Fourth Amendment search.”

Lawsuit: “No meaningful restrictions” on camera use

The lawsuit against Norfolk alleges that the city’s use of Flock cameras “violates a subjective expectation of privacy that society recognizes as reasonable.”

The plaintiffs have a reasonable expectation that “neither an ordinary person nor the NPD could create a long-term record of their movements throughout the City and other Flock jurisdictions,” the lawsuit said. “They do not expect, for instance, that a group of people or even officers would post themselves at various points throughout the City—day and night—to catalogue every time they and everyone else drove past. Nor do they expect that the police or anyone else would have the capability to reconstruct their movements over the past 30 days or more.”

The lawsuit alleges that there are “no meaningful restrictions on City officers’ access to this information. Officers need only watch Flock’s orientation video and create login credentials to get access,” and the officers “can search the database whenever they want for whatever they want” with “no need to seek advance approval.”

“All of this is done without a warrant. No officer ever has to establish probable cause, swear to the facts in a warrant application, and await the approval of a neutral judge,” the lawsuit said.

City: Cameras “enhance citizen safety”

The lawsuit said that while photos and vehicle details are saved for 30 days by default, officers can keep the photos and information longer if they download them during the 30-day window.

“Worse still, Flock maintains a centralized database with over one billion license plate reads every month,” the complaint said. “So, even after a driver leaves the City, officers can potentially keep following them in the more than 5,000 communities where Flock currently has cameras. Likewise, any person with access to Flock’s centralized database can access the City’s information, potentially without the City even knowing about it. Ominously, the City’s police chief has said this ‘creates a nice curtain of technology’ for the City and surrounding area.”

We contacted the city of Norfolk’s communications department and the police department today. A police spokesperson said all questions about the lawsuit must be sent to the city communications department. The city declined comment on the lawsuit but defended the use of Flock cameras.

“While the City of Norfolk cannot comment on pending litigation, the City’s intent in implementing the use of Flock cameras (which are automatic license plate readers) is to enhance citizen safety while also protecting citizen privacy,” a Norfolk city spokesperson said.

Jon is a Senior IT Reporter for Ars Technica. He covers the telecom industry, Federal Communications Commission rulemakings, broadband consumer affairs, court cases, and government regulation of the tech industry.

Lawsuit: City cameras make it impossible to drive anywhere without being tracked


Tesla, Warner Bros. sued for using AI ripoff of iconic Blade Runner imagery


A copy of a copy of a copy

“That movie sucks,” Elon Musk said in response to the lawsuit.


Elon Musk may have personally used AI to rip off a Blade Runner 2049 image for a Tesla cybercab event after producers rejected any association between their iconic sci-fi movie and Musk or any of his companies.

In a lawsuit filed Tuesday, lawyers for Alcon Entertainment—exclusive rightsholder of the 2017 Blade Runner 2049 movie—accused Warner Bros. Discovery (WBD) of conspiring with Musk and Tesla to steal the image and infringe Alcon’s copyright to benefit financially off the brand association.

According to the complaint, WBD did not approach Alcon for permission until six hours before the Tesla event, at which point Alcon “refused all permissions and adamantly objected” to linking the movie with Musk’s cybercab.

At that point, WBD “disingenuously” downplayed the license being sought, the lawsuit said, claiming they were seeking “clip licensing” that the studio should have known would not provide rights to livestream the Tesla event globally on X (formerly Twitter).

Musk’s behavior cited

Alcon said it would never allow Tesla to exploit its Blade Runner film, so “although the information given was sparse, Alcon learned enough information for Alcon’s co-CEOs to consider the proposal and firmly reject it, which they did.” Specifically, Alcon denied any affiliation—express or implied—between Tesla’s cybercab and Blade Runner 2049.

“Musk has become an increasingly vocal, overtly political, highly polarizing figure globally, and especially in Hollywood,” Alcon’s complaint said. If Hollywood perceived an affiliation with Musk and Tesla, the complaint said, the company risked not just alienating other car brands currently weighing partnerships on the Blade Runner 2099 TV series Alcon has in the works, but also losing access to top Hollywood talent for its films.

The “Hollywood talent pool market generally is less likely to deal with Alcon, or parts of the market may be, if they believe or are confused as to whether, Alcon has an affiliation with Tesla or Musk,” the complaint said.

Musk, the lawsuit said, is “problematic,” and “any prudent brand considering any Tesla partnership has to take Musk’s massively amplified, highly politicized, capricious and arbitrary behavior, which sometimes veers into hate speech, into account.”

In bad faith

Because Alcon had no chance to avoid the affiliation while millions viewed the cybercab livestream on X, Alcon saw Tesla using the images over Alcon’s objections as “clearly” a “bad faith and malicious gambit… to link Tesla’s cybercab to strong Hollywood brands at a time when Tesla and Musk are on the outs with Hollywood,” the complaint said.

Alcon believes that WBD’s agreement was likely worth six or seven figures and likely stipulated that Tesla “affiliate the cybercab with one or more motion pictures from” WBD’s catalog.

While any of the Mad Max movies may have fit the bill, Musk wanted to use Blade Runner 2049, the lawsuit alleged, because that movie features an “artificially intelligent autonomously capable” flying car (known as a spinner) and is “extremely relevant” to “precisely the areas of artificial intelligence, self-driving capability, and autonomous automotive capability that Tesla and Musk are trying to market” with the cybercab.

The Blade Runner 2049 spinner is “one of the most famous vehicles in motion picture history,” the complaint alleged, recently exhibited alongside other iconic sci-fi cars like the Back to the Future time-traveling DeLorean or the light cycle from Tron: Legacy.

As Alcon sees it, Musk seized on the Blade Runner image to help him sell Teslas, and WBD allegedly directed Musk to use AI to skirt Alcon’s copyright and avoid a costly potential breach of contract on the day of the event.

For Alcon, brand partnerships are a lucrative business, with carmakers paying as much as $10 million to associate their vehicles with Blade Runner 2049. By seemingly using AI to generate a stylized copy of the image at the heart of the movie—which references the scene where their movie’s hero, K, meets the original 1982 Blade Runner hero, Rick Deckard—Tesla avoided paying Alcon’s typical fee, their complaint said.

Musk maybe faked the image himself, lawsuit says

During the live event, Musk introduced the cybercab on a WBD Hollywood studio lot. For about 11 seconds, the Tesla founder “awkwardly” displayed a fake, allegedly AI-generated Blade Runner 2049 film still. He used the image to make a point that apocalyptic films show a future that’s “dark and dismal,” whereas Tesla’s vision of the future is much brighter.

In Musk’s slideshow image, believed to be AI-generated, a male figure is “seen from behind, with close-cropped hair, wearing a trench coat or duster, standing in almost full silhouette as he surveys the abandoned ruins of a city, all bathed in misty orange light,” the lawsuit said. The similarity to the key image used in Blade Runner 2049 marketing is not “coincidental,” the complaint said.

If there were any doubts that this image was supposed to reference the Blade Runner movie, the lawsuit said, Musk “erased them” by directly referencing the movie in his comments.

“You know, I love Blade Runner, but I don’t know if we want that future,” Musk said at the event. “I believe we want that duster he’s wearing, but not the, uh, not the bleak apocalypse.”

The producers think the image was likely generated—”even possibly by Musk himself”—by “asking an AI image generation engine to make ‘an image from the K surveying ruined Las Vegas sequence of Blade Runner 2049,’ or some closely equivalent input direction,” the lawsuit said.

Alcon says it does not know exactly what happened after it refused permission to use the film’s imagery at the event, and it hopes to learn more through the litigation’s discovery phase.

Musk may try to argue that his comments at the Tesla event were “only meant to talk broadly about the general idea of science fiction films and undesirable apocalyptic futures and juxtaposing them with Musk’s ostensibly happier robot car future vision.”

But producers argued that defense is “not credible” since Tesla explicitly asked to use the Blade Runner 2049 image, and there are “better” films in WBD’s library to promote Musk’s message, like the Mad Max movies.

“But those movies don’t have massive consumer goodwill specifically around really cool-looking (Academy Award-winning) artificially intelligent, autonomous cars,” the complaint said, accusing Musk of stealing the image when it wasn’t given to him.

If Tesla and WBD are found to have violated copyright and false representation laws, that potentially puts both companies on the hook for damages that cover not just copyright fines but also Alcon’s lost profits and reputation damage after the alleged “massive economic theft.”

Musk responds to Blade Runner suit

Alcon suspects that Musk believed that Blade Runner 2049 was eligible to be used at the event under the WBD agreement, not knowing that WBD never had “any non-domestic rights or permissions for the Picture.”

Once Musk requested to use the Blade Runner imagery, Alcon alleged that WBD scrambled to secure rights by obscuring the very lucrative “larger brand affiliation proposal” by positioning their ask as a request for much less expensive “clip licensing.”

After Alcon rejected the proposal outright, WBD told Tesla that the affiliation in the event could not occur because X planned to livestream the event globally. But even though Tesla and X allegedly knew that the affiliation was rejected, Musk appears to have charged ahead with the event as planned.

“It all exuded an odor of thinly contrived excuse to link Tesla’s cybercab to strong Hollywood brands,” Alcon’s complaint said. “Which of course is exactly what it was.”

Alcon is hoping a jury will find that Tesla, Musk, and WBD violated copyright and false representation laws. Producers have asked for an injunction stopping Tesla from using any Blade Runner imagery in its promotional or advertising campaigns. They also want a disclaimer slapped on the livestreamed event video on X, noting that the Blade Runner association is “false or misleading.”

For Musk, a ban on linking Blade Runner to his car company may feel bleak. Last year, he touted the Cybertruck as an “armored personnel carrier from the future—what Bladerunner would have driven.” This amused many Blade Runner fans, as Gizmodo noted, because there never was a character named “Bladerunner”; that was just the job title of the film’s hero, Deckard.

In response to the lawsuit, Musk took to X to post what Blade Runner fans—the 2017 movie is rated 88 percent fresh on Rotten Tomatoes—might consider a polarizing take, replying, “That movie sucks” on a post calling out Alcon’s lawsuit as “absurd.”


Ashley is a senior policy reporter for Ars Technica, dedicated to tracking social impacts of emerging policies and new technologies. She is a Chicago-based journalist with 20 years of experience.

Tesla, Warner Bros. sued for using AI ripoff of iconic Blade Runner imagery