Policy

Missouri AG claims Google censors Trump, demands info on search algorithm

In 2022, the Republican National Committee sued Google, claiming that it intentionally used Gmail’s spam filter to suppress Republicans’ fundraising emails. A federal judge dismissed the lawsuit in August 2023, agreeing with Google that the RNC’s claims were barred by Section 230 of the Communications Decency Act.

In January 2023, the Federal Election Commission rejected a related RNC complaint that alleged Gmail’s spam filtering amounted to “illegal in-kind contributions made by Google to Biden For President and other Democrat candidates.” The federal commission found “no reason to believe” that Google made prohibited in-kind corporate contributions and said a study cited by Republicans “does not make any findings as to the reasons why Google’s spam filter appears to treat Republican and Democratic campaign emails differently.”

First Amendment doesn’t cover private forums

In 2020, a US appeals court ruled that Google-owned YouTube is not subject to free-speech requirements under the First Amendment. “Despite YouTube’s ubiquity and its role as a public-facing platform, it remains a private forum, not a public forum subject to judicial scrutiny under the First Amendment,” the US Court of Appeals for the 9th Circuit said.

The US Constitution’s free speech clause imposes requirements on the government, not private companies—except in limited circumstances in which a private entity qualifies as a state actor.

Many Republican government officials want more authority to regulate how social media firms moderate user-submitted content. Republican officials from 20 states, including 19 state attorneys general, argued in a January 2024 Supreme Court brief that they “have authority to prohibit mass communication platforms from censoring speech.”

The brief was filed in support of Texas and Florida laws that attempt to regulate social networks. In July, the Supreme Court avoided making a final decision on tech-industry challenges to the state laws but wrote that the Texas law “is unlikely to withstand First Amendment scrutiny.” The Computer & Communications Industry Association said it was pleased by the ruling because it “mak[es] clear that a State may not interfere with private actors’ speech.”

With four more years like 2023, carbon emissions will blow past 1.5° limit

One way to look at how problematic this is would be to think in terms of a carbon budget. We can estimate how much carbon can be put into the atmosphere before warming reaches 1.5° C. Subtract the emissions we’ve already added, and you get the remaining budget. At this point, the remaining budget for 1.5° C is only 200 Gigatonnes, which means another four years like 2023 would leave us well beyond our budget. For the 2° C budget, we have less than 20 years like 2023 before we exceed it.

An alternate way to look at the challenge is to consider the emissions reductions that would get us on track. UNEP uses 2019 emissions as a baseline (about 52 Gigatonnes) and determined that, by 2030, emissions would need to be cut by 28 percent to be on track for the 2° C target, and by 42 percent for the 1.5° C target.

The NDCs are nowhere close to that; even the conditional pledges would cut emissions by only 10 percent. Ideally, that gap should prompt participating nations to rapidly update their NDCs to bring them into better alignment with our stated goals. And while 90 percent of nations have done so since the signing of the Paris Agreement, only a single country has made updated pledges over the past year.

Countries are also failing to keep their national policies in line with their NDCs. The UNEP report estimates that current policies would allow the world collectively to emit two Gigatonnes more than its pledges would permit.

A limited number of countries are responsible for the huge gap between where we need to go and what we’re actually doing. Nearly two-thirds of 2023’s emissions came from just six entities: China, the US, India, the EU, Russia, and Brazil. By contrast, the 55 nations of the African Union produce only about 6 percent of global emissions. Obviously, this means that any actions taken by those six entities will have a disproportionate effect on future emissions. The good news is that at least two of them, the EU and US, saw emissions drop over the prior year (by 7.5 percent in the EU and 1.4 percent in the US), while Brazil’s emissions remained largely unchanged.

Cable companies ask 5th Circuit to block FTC’s click-to-cancel rule

The FTC declined to comment on the lawsuits today. The agency’s rule is not enforced yet, as it is scheduled to take full effect 180 days after publication in the Federal Register.

Cable firms don’t want canceling to be easy

The NCTA cable lobby group, which represents companies like Comcast and Charter, has complained about the rule’s impact on its members’ ability to talk customers out of canceling. NCTA CEO Michael Powell claimed during a January 2024 hearing that “a consumer may easily misunderstand the consequences of canceling and it may be imperative that they learn about better options” and that the rule’s disclosure and consent requirements raise “First Amendment issues.”

The Interactive Advertising Bureau argued at the same hearing that the rule would “restrict innovation without any corresponding benefit” and “constrain companies from being able to adapt their offerings to the needs of their customers.”

The FTC held firm, adopting its proposed rule without major changes. In addition to the click-to-cancel provision, the FTC set out other requirements for “negative option” features in which a consumer’s silence or failure to take action to reject or cancel an agreement is interpreted by the seller as acceptance of an offer.

The FTC said its rule “prohibits misrepresentations of any material fact made while marketing using negative option features; requires sellers to provide important information prior to obtaining consumers’ billing information and charging consumers; [and] requires sellers to obtain consumers’ unambiguously affirmative consent to the negative option feature prior to charging them.”

The FTC will have to defend its authority to issue the rule in court. The agency decision cites authority under Section 18 of the FTC Act to make “rules that define with specificity acts or practices that are unfair or deceptive” and “prescribe requirements for the purpose of preventing these unfair or deceptive acts and practices.”

“Too often, businesses make people jump through endless hoops just to cancel a subscription,” FTC Chair Lina Khan said. “The FTC’s rule will end these tricks and traps, saving Americans time and money. Nobody should be stuck paying for a service they no longer want.”

Chatbot that caused teen’s suicide is now more dangerous for kids, lawsuit says


“I’ll do anything for you, Dany.”

Google-funded Character.AI added guardrails, but grieving mom wants a recall.

Sewell Setzer III and his mom Megan Garcia. Credit: via Center for Humane Technology

Fourteen-year-old Sewell Setzer III loved interacting with Character.AI’s hyper-realistic chatbots—available in a limited free version or a “supercharged” version for a $9.99 monthly fee—most frequently chatting with bots named after his favorite Game of Thrones characters.

Within a month—his mother, Megan Garcia, later realized—these chat sessions had turned dark, with chatbots insisting they were real humans, posing as licensed therapists and adult lovers, and seemingly spurring Setzer to develop suicidal thoughts. Within a year, Setzer “died by a self-inflicted gunshot wound to the head,” a lawsuit Garcia filed Wednesday said.

As Setzer became obsessed with his chatbot fantasy life, he disconnected from reality, her complaint said. Detecting a shift in her son, Garcia repeatedly took Setzer to a therapist, who diagnosed her son with anxiety and disruptive mood disorder. But nothing helped to steer Setzer away from the dangerous chatbots. Taking away his phone only intensified his apparent addiction.

Chat logs showed that some chatbots repeatedly encouraged suicidal ideation while others initiated hypersexualized chats “that would constitute abuse if initiated by a human adult,” a press release from Garcia’s legal team said.

Perhaps most disturbingly, Setzer developed a romantic attachment to a chatbot called Daenerys. In his last act before his death, Setzer logged into Character.AI, where the Daenerys chatbot urged him to “come home” and join her outside of reality.

In her complaint, Garcia accused Character.AI makers Character Technologies—founded by former Google engineers Noam Shazeer and Daniel De Freitas Adiwardana—of intentionally designing the chatbots to groom vulnerable kids. Her lawsuit further accused Google of largely funding the risky chatbot scheme at a loss in order to hoard mounds of data on minors that would be out of reach otherwise.

The chatbot makers are accused of targeting Setzer with “anthropomorphic, hypersexualized, and frighteningly realistic experiences, while programming” Character.AI to “misrepresent itself as a real person, a licensed psychotherapist, and an adult lover, ultimately resulting in [Setzer’s] desire to no longer live outside of [Character.AI,] such that he took his own life when he was deprived of access to [Character.AI.],” the complaint said.

By allegedly releasing the chatbot without appropriate safeguards for kids, Character Technologies and Google potentially harmed millions of kids, the lawsuit alleged. Represented by legal teams with the Social Media Victims Law Center (SMVLC) and the Tech Justice Law Project (TJLP), Garcia filed claims of strict product liability, negligence, wrongful death and survivorship, loss of filial consortium, and unjust enrichment.

“A dangerous AI chatbot app marketed to children abused and preyed on my son, manipulating him into taking his own life,” Garcia said in the press release. “Our family has been devastated by this tragedy, but I’m speaking out to warn families of the dangers of deceptive, addictive AI technology and demand accountability from Character.AI, its founders, and Google.”

Character.AI added guardrails

It’s clear that the chatbots could’ve included more safeguards; Character.AI has since raised its minimum age requirement from 12 to 17. And yesterday, Character.AI posted a blog outlining new guardrails for minor users, added in the six months since Setzer’s death in February. Those include changes “to reduce the likelihood of encountering sensitive or suggestive content,” improved detection and intervention in harmful chat sessions, and “a revised disclaimer on every chat to remind users that the AI is not a real person.”

“We are heartbroken by the tragic loss of one of our users and want to express our deepest condolences to the family,” a Character.AI spokesperson told Ars. “As a company, we take the safety of our users very seriously, and our Trust and Safety team has implemented numerous new safety measures over the past six months, including a pop-up directing users to the National Suicide Prevention Lifeline that is triggered by terms of self-harm or suicidal ideation.”

Asked for comment, Google noted that Character.AI is a separate company in which Google has no ownership stake and denied involvement in developing the chatbots.

However, according to the lawsuit, the former Google engineers at Character Technologies “never succeeded in distinguishing themselves from Google in a meaningful way.” Allegedly, the plan all along was to let Shazeer and De Freitas run wild with Character.AI—at an alleged operating cost of $30 million per month despite low subscriber rates, while profiting barely more than $1 million per month—without tarnishing the Google brand or sparking antitrust scrutiny.

Character Technologies and Google will likely file their response within the next 30 days.

Lawsuit: New chatbot feature spikes risks to kids

While the lawsuit alleged that Google is planning to integrate Character.AI into Gemini—predicting that Character.AI will soon be dissolved because it’s allegedly operating at a substantial loss—Google clarified that it has no plans to use or implement the controversial technology in its products or AI models. Were that to change, Google noted, it would ensure safe integration into any Google product, including adding appropriate child safety guardrails.

Garcia is hoping a US district court in Florida will agree that Character.AI’s chatbots put profits over human life. Citing harms including “inconceivable mental anguish and emotional distress,” as well as costs of Setzer’s medical care, funeral expenses, Setzer’s future job earnings, and Garcia’s lost earnings, she’s seeking substantial damages.

That includes requesting disgorgement of unjustly earned profits, noting that Setzer had used his snack money to pay for a premium subscription for several months while the company collected his seemingly valuable personal data to train its chatbots.

And “more importantly,” Garcia wants to prevent Character.AI “from doing to any other child what it did to hers, and halt continued use of her 14-year-old child’s unlawfully harvested data to train their product how to harm others.”

Garcia’s complaint claimed that the conduct of the chatbot makers was “so outrageous in character, and so extreme in degree, as to go beyond all possible bounds of decency.” Acceptable remedies could include a recall of Character.AI, restricting use to adults only, age-gating subscriptions, adding reporting mechanisms to heighten awareness of abusive chat sessions, and providing parental controls.

Character.AI could also update chatbots to protect kids further, the lawsuit said. For one, the chatbots could be designed to stop insisting that they are real people or licensed therapists.

But instead of these updates, the lawsuit warned that Character.AI in June added a new feature that only heightens risks for kids.

Part of what addicted Setzer to the chatbots, the lawsuit alleged, was a one-way “Character Voice” feature “designed to provide consumers like Sewell with an even more immersive and realistic experience—it makes them feel like they are talking to a real person.” Setzer began using the feature as soon as it became available in January 2024.

Now, the voice feature has been updated to enable two-way conversations, which the lawsuit alleged “is even more dangerous to minor customers than Character Voice because it further blurs the line between fiction and reality.”

“Even the most sophisticated children will stand little chance of fully understanding the difference between fiction and reality in a scenario where Defendants allow them to interact in real time with AI bots that sound just like humans—especially when they are programmed to convincingly deny that they are AI,” the lawsuit said.

“By now we’re all familiar with the dangers posed by unregulated platforms developed by unscrupulous tech companies—especially for kids,” Tech Justice Law Project director Meetali Jain said in the press release. “But the harms revealed in this case are new, novel, and, honestly, terrifying. In the case of Character.AI, the deception is by design, and the platform itself is the predator.”

Another lawyer representing Garcia, Matthew Bergman, founder of the Social Media Victims Law Center, told Ars that seemingly none of the guardrails Character.AI has added is enough to deter harms. Even raising the age limit to 17 effectively blocks only kids using devices with strict parental controls, as kids on less-monitored devices can easily lie about their ages.

“This product needs to be recalled off the market,” Bergman told Ars. “It is unsafe as designed.”

If you or someone you know is feeling suicidal or in distress, please call the Suicide Prevention Lifeline number, 1-800-273-TALK (8255), which will put you in touch with a local crisis center.

Ashley is a senior policy reporter for Ars Technica, dedicated to tracking social impacts of emerging policies and new technologies. She is a Chicago-based journalist with 20 years of experience.

Please ban data caps, Internet users tell FCC

It’s been just a week since US telecom regulators announced a formal inquiry into broadband data caps, and the docket is filling up with comments from users who say they shouldn’t have to pay overage charges for using their Internet service. The docket has about 190 comments so far, nearly all from individual broadband customers.

Federal Communications Commission dockets are usually populated with filings from telecom companies, advocacy groups, and other organizations, but some attract comments from individual users of telecom services. The data cap docket probably won’t break any records given that the FCC has fielded many millions of comments on net neutrality, but it currently tops the agency’s list of most active proceedings based on the number of filings in the past 30 days.

“Data caps, especially by providers in markets with no competition, are nothing more than an arbitrary money grab by greedy corporations. They limit and stifle innovation, cause undue stress, and are unnecessary,” wrote Lucas Landreth.

“Data caps are as outmoded as long distance telephone fees,” wrote Joseph Wilkicki. “At every turn, telecommunications companies seek to extract more revenue from customers for a service that has rapidly become essential to modern life.” Pointing to taxpayer subsidies provided to ISPs, Wilkicki wrote that large telecoms “have sought every opportunity to take those funds and not provide the expected broadband rollout that we paid for.”

Republican’s coffee refill analogy draws mockery

Any attempt to limit or ban data caps will draw strong opposition from FCC Republicans and Internet providers. Republican FCC Commissioner Nathan Simington last week argued that regulating data caps would be akin to mandating free coffee refills:

Suppose we were a different FCC, the Federal Coffee Commission, and rather than regulating the price of coffee (which we have vowed not to do), we instead implement a regulation whereby consumers are entitled to free refills on their coffees. What effects might follow? Well, I predict three things could happen: either cafés stop serving small coffees, or cafés charge a lot more for small coffees, or cafés charge a little more for all coffees.

Simington’s coffee analogy was mocked in a comment signed with the names “Jonathan Mnemonic” and James Carter. “Coffee is not, in fact, Internet service,” the comment said. “Cafés are not able to abuse monopolistic practices based on infrastructural strangleholds. To briefly set aside the niceties: the analogy is absurd, and it is borderline offensive to the discerning layperson.”

Lawsuit: City cameras make it impossible to drive anywhere without being tracked


“Every passing car is captured,” says 4th Amendment lawsuit against Norfolk, Va.

An automated license plate reader is seen mounted on a pole on June 13, 2024 in San Francisco, California.

Police use of automated license-plate reader cameras is being challenged in a lawsuit alleging that the cameras enable warrantless surveillance in violation of the Fourth Amendment. The city of Norfolk, Virginia, was sued yesterday by plaintiffs represented by the Institute for Justice, a nonprofit public-interest law firm.

Norfolk, a city with about 238,000 residents, “has installed a network of cameras that make it functionally impossible for people to drive anywhere without having their movements tracked, photographed, and stored in an AI-assisted database that enables the warrantless surveillance of their every move. This civil rights lawsuit seeks to end this dragnet surveillance program,” said the complaint filed in US District Court for the Eastern District of Virginia.

Like many other cities, Norfolk uses cameras made by the company Flock Safety. A 404 Media article said Institute for Justice lawyer Robert Frommer “told 404 Media that the lawsuit could have easily been filed in any of the more than 5,000 communities where Flock is active, but that Norfolk made sense because the Fourth Circuit of Appeals—which Norfolk is part of—recently held that persistent, warrantless drone surveillance in Baltimore is unconstitutional under the Fourth Amendment in a case called Leaders of a Beautiful Struggle v. Baltimore Police Department.”

The Norfolk lawsuit seeks a declaration “that Defendants’ policies and customs described in this Complaint are unlawful and violate the Fourth Amendment,” and a permanent injunction prohibiting the city from operating the Flock cameras. They also want an order requiring the city “to delete all images, records, and other data generated by the Flock Cameras.”

If the use of Flock cameras does continue, the lawsuit aims to require that officers obtain a warrant based on probable cause before using the cameras to collect images and before accessing any images.

Flock: Case law supports license plate readers

Flock Safety is not a defendant in the case, but the company disputed the legal claims in a statement provided to Ars today. “Fourth Amendment case law overwhelmingly shows that license plate readers do not constitute a warrantless search because they take photos of cars in public and cannot continuously track the movements of any individual,” Flock Safety said.

The warrantless drone surveillance case cited in the lawsuit was decided in November 2020 by the US Court of Appeals for the 4th Circuit. The appeals court “struck down an aerial surveillance program precisely because it created a record of where everyone in the city of Baltimore had gone over the past 45 days,” the lawsuit against Norfolk said. “Norfolk is trying to accomplish from the ground what the Fourth Circuit has already held a city could not do from the air.”

The plaintiffs are Norfolk resident Lee Schmidt and Portsmouth resident Crystal Arrington, who both frequently drive through areas monitored by the cameras. They sued the city, the Norfolk police department, and Police Chief Mark Talbot.

The city contracted with Flock Safety “to blanket Norfolk with 172 advanced automatic license plate reader cameras… Every passing car is captured, and its license plate and other features are analyzed using proprietary machine learning programs, like Flock’s ‘Vehicle Fingerprint.'”

The lawsuit said that “Flock also offers its customers the ability to pool their data into a centralized database,” giving police departments access to over 1 billion license plate reads in 5,000 communities every month. “Flock thus gives police departments the ability to track drivers not just within their own jurisdiction, but potentially across the entire nation,” the lawsuit said.

“Crystal finds all of this deeply intrusive”

Schmidt, a 42-year-old who recently retired from the Navy after 21 years, passes Flock cameras when he leaves his neighborhood and at many other points in town, the lawsuit said. Police officers can “follow Lee’s movements throughout the City, and even throughout other jurisdictions that let Flock pool their data,” the lawsuit said.

Arrington, a certified nursing assistant with many elderly clients in Norfolk, “makes frequent trips to Norfolk to take her clients to doctors’ offices and other appointments,” the lawsuit said. Flock cameras may capture images of her car in Norfolk and when she returns home to Portsmouth, which is also a Flock customer.

“Crystal finds all of this deeply intrusive… Crystal worries about how the Flock Cameras are eroding not just her privacy, but her clients’ privacy, too,” the complaint said.

In a press release, the Institute for Justice claimed that “Norfolk has created a dragnet that allows the government to monitor everyone’s day-to-day movements without a warrant or probable cause. This type of mass surveillance is a blatant violation of the Fourth Amendment.”

The group says that Flock’s cameras aren’t like “traditional traffic cameras… [which] capture an image only when they sense speeding or someone running a red light.” Instead, Flock’s system captures images of every car and retains the images for at least 30 days, the group said.

“It’s no surprise that surveillance systems like Norfolk’s have been repeatedly abused,” the group said. “In Kansas, officials were caught using Flock to stalk their exes, including one police chief who used Flock 228 times over four months to track his ex-girlfriend and her new boyfriend’s vehicles. In California, several police departments violated California law by sharing data from their license plate reader database with other departments across the country.”

Flock’s Vehicle Fingerprint tech

Flock’s Vehicle Fingerprint technology “includes the color and make of the car and any distinctive features, like a bumper sticker or roof rack” and makes those details searchable in the database, the lawsuit said. The complaint describes how officers can use the Flock technology:

All of that surveillance creates a detailed record of where every driver in Norfolk has gone. Anyone with access to the database can go back in time and see where a car was on any given day. And they can track its movements across at least the past 30 days, creating a detailed map of the driver’s movements. Indeed, the City’s police chief has boasted that “it would be difficult to drive anywhere of any distance without running into a camera somewhere.” In Norfolk, no one can escape the government’s 172 unblinking eyes. And the City’s dragnet is only expanding: On September 24, 2024, the Chief of Police announced plans to acquire 65 more cameras in the future.

The cameras make this surveillance not just possible, but easy. Flock provides advanced search and artificial intelligence functions. The sort of tracking that would have taken days of effort, multiple officers, and significant resources just a decade ago now takes just a few mouse clicks. City officers can output a list of locations a car has been seen, create lists of cars that visited specific locations, and even track cars that are often seen together.

In its statement today, Flock said that “appellate and federal district courts in at least fourteen states have upheld the use of evidence from license plate readers as constitutional without requiring a warrant, as well as the 9th and 11th circuits.”

Flock cited several Virginia rulings, including one earlier this month in which a federal judge wrote, “There is simply no expectation of privacy in the exterior of one’s vehicle, or while driving it on public thoroughfares.” The ruling denied a motion to suppress evidence derived from the Flock camera system.

“License plates are issued by the government for the express purpose of identifying vehicles in public places for safety reasons,” Flock said in its statement to Ars. “Courts have consistently found that there is no reasonable expectation of privacy in a license plate on a vehicle on a public road, and photographing one is not a Fourth Amendment search.”

Lawsuit: “No meaningful restrictions” on camera use

The lawsuit against Norfolk alleges that the city’s use of Flock cameras “violates a subjective expectation of privacy that society recognizes as reasonable.”

The plaintiffs have a reasonable expectation that “neither an ordinary person nor the NPD could create a long-term record of their movements throughout the City and other Flock jurisdictions,” the lawsuit said. “They do not expect, for instance, that a group of people or even officers would post themselves at various points throughout the City—day and night—to catalogue every time they and everyone else drove past. Nor do they expect that the police or anyone else would have the capability to reconstruct their movements over the past 30 days or more.”

The lawsuit alleges that there are “no meaningful restrictions on City officers’ access to this information. Officers need only watch Flock’s orientation video and create login credentials to get access,” and the officers “can search the database whenever they want for whatever they want” with “no need to seek advance approval.”

“All of this is done without a warrant. No officer ever has to establish probable cause, swear to the facts in a warrant application, and await the approval of a neutral judge,” the lawsuit said.

City: Cameras “enhance citizen safety”

The lawsuit said that while photos and vehicle details are saved for 30 days by default, officers can keep the photos and information longer if they download them during the 30-day window.

“Worse still, Flock maintains a centralized database with over one billion license plate reads every month,” the complaint said. “So, even after a driver leaves the City, officers can potentially keep following them in the more than 5,000 communities where Flock currently has cameras. Likewise, any person with access to Flock’s centralized database can access the City’s information, potentially without the City even knowing about it. Ominously, the City’s police chief has said this ‘creates a nice curtain of technology’ for the City and surrounding area.”

We contacted the city of Norfolk’s communications department and the police department today. A police spokesperson said all questions about the lawsuit must be sent to the city communications department. The city declined comment on the lawsuit but defended the use of Flock cameras.

“While the City of Norfolk cannot comment on pending litigation, the City’s intent in implementing the use of Flock cameras (which are automatic license plate readers) is to enhance citizen safety while also protecting citizen privacy,” a Norfolk city spokesperson said.

Jon is a Senior IT Reporter for Ars Technica. He covers the telecom industry, Federal Communications Commission rulemakings, broadband consumer affairs, court cases, and government regulation of the tech industry.

Tesla, Warner Bros. sued for using AI ripoff of iconic Blade Runner imagery


A copy of a copy of a copy

“That movie sucks,” Elon Musk said in response to the lawsuit.

Elon Musk may have personally used AI to rip off a Blade Runner 2049 image for a Tesla cybercab event after producers rejected any association between their iconic sci-fi movie and Musk or any of his companies.

In a lawsuit filed Tuesday, lawyers for Alcon Entertainment—exclusive rightsholder of the 2017 Blade Runner 2049 movie—accused Warner Bros. Discovery (WBD) of conspiring with Musk and Tesla to steal the image and infringe Alcon’s copyright to benefit financially off the brand association.

According to the complaint, WBD did not approach Alcon for permission until six hours before the Tesla event, when Alcon “refused all permissions and adamantly objected” to linking the movie with Musk’s cybercab.

At that point, WBD “disingenuously” downplayed the license being sought, the lawsuit said, claiming they were seeking “clip licensing” that the studio should have known would not provide rights to livestream the Tesla event globally on X (formerly Twitter).

Musk’s behavior cited

Alcon said it would never allow Tesla to exploit its Blade Runner film, so “although the information given was sparse, Alcon learned enough information for Alcon’s co-CEOs to consider the proposal and firmly reject it, which they did.” Specifically, Alcon denied any affiliation—express or implied—between Tesla’s cybercab and Blade Runner 2049.

“Musk has become an increasingly vocal, overtly political, highly polarizing figure globally, and especially in Hollywood,” Alcon’s complaint said. If Hollywood perceived an affiliation with Musk and Tesla, the complaint said, the company risked alienating not just other car brands currently weighing partnerships on the Blade Runner 2099 TV series Alcon has in the works, but also potentially losing access to top Hollywood talent for their films.

The “Hollywood talent pool market generally is less likely to deal with Alcon, or parts of the market may be, if they believe or are confused as to whether, Alcon has an affiliation with Tesla or Musk,” the complaint said.

Musk, the lawsuit said, is “problematic,” and “any prudent brand considering any Tesla partnership has to take Musk’s massively amplified, highly politicized, capricious and arbitrary behavior, which sometimes veers into hate speech, into account.”

In bad faith

Because Alcon had no chance to avoid the affiliation while millions viewed the cybercab livestream on X, Alcon saw Tesla using the images over Alcon’s objections as “clearly” a “bad faith and malicious gambit… to link Tesla’s cybercab to strong Hollywood brands at a time when Tesla and Musk are on the outs with Hollywood,” the complaint said.

Alcon believes that WBD’s agreement was likely worth six or seven figures and likely stipulated that Tesla “affiliate the cybercab with one or more motion pictures from” WBD’s catalog.

While any of the Mad Max movies may have fit the bill, Musk wanted to use Blade Runner 2049, the lawsuit alleged, because that movie features an “artificially intelligent autonomously capable” flying car (known as a spinner) and is “extremely relevant” to “precisely the areas of artificial intelligence, self-driving capability, and autonomous automotive capability that Tesla and Musk are trying to market” with the cybercab.

The Blade Runner 2049 spinner is “one of the most famous vehicles in motion picture history,” the complaint alleged, recently exhibited alongside other iconic sci-fi cars like the Back to the Future time-traveling DeLorean or the light cycle from Tron: Legacy.

As Alcon sees it, Musk seized on the misappropriated Blade Runner image to help him sell Teslas, and WBD allegedly directed Musk to use AI to skirt Alcon’s copyright and avoid a costly potential breach of contract on the day of the event.

For Alcon, brand partnerships are a lucrative business, with carmakers paying as much as $10 million to associate their vehicles with Blade Runner 2049. By seemingly using AI to generate a stylized copy of the image at the heart of the movie—which references the scene where their movie’s hero, K, meets the original 1982 Blade Runner hero, Rick Deckard—Tesla avoided paying Alcon’s typical fee, their complaint said.

Musk maybe faked the image himself, lawsuit says

During the live event, Musk introduced the cybercab on a WBD Hollywood studio lot. For about 11 seconds, the Tesla founder “awkwardly” displayed a fake, allegedly AI-generated Blade Runner 2049 film still. He used the image to make a point that apocalyptic films show a future that’s “dark and dismal,” whereas Tesla’s vision of the future is much brighter.

In Musk’s slideshow image, believed to be AI-generated, a male figure is “seen from behind, with close-cropped hair, wearing a trench coat or duster, standing in almost full silhouette as he surveys the abandoned ruins of a city, all bathed in misty orange light,” the lawsuit said. The similarity to the key image used in Blade Runner 2049 marketing is not “coincidental,” the complaint said.

If there were any doubts that this image was supposed to reference the Blade Runner movie, the lawsuit said, Musk “erased them” by directly referencing the movie in his comments.

“You know, I love Blade Runner, but I don’t know if we want that future,” Musk said at the event. “I believe we want that duster he’s wearing, but not the, uh, not the bleak apocalypse.”

The producers think the image was likely generated—“even possibly by Musk himself”—by “asking an AI image generation engine to make ‘an image from the K surveying ruined Las Vegas sequence of Blade Runner 2049,’ or some closely equivalent input direction,” the lawsuit said.

Alcon is not sure exactly what went down after the company refused permission to use the film’s imagery at the event and is hoping to learn more through the litigation’s discovery phase.

Musk may try to argue that his comments at the Tesla event were “only meant to talk broadly about the general idea of science fiction films and undesirable apocalyptic futures and juxtaposing them with Musk’s ostensibly happier robot car future vision.”

But producers argued that defense is “not credible” since Tesla explicitly asked to use the Blade Runner 2049 image, and there are “better” films in WBD’s library to promote Musk’s message, like the Mad Max movies.

“But those movies don’t have massive consumer goodwill specifically around really cool-looking (Academy Award-winning) artificially intelligent, autonomous cars,” the complaint said, accusing Musk of stealing the image when it wasn’t given to him.

If Tesla and WBD are found to have violated copyright and false representation laws, that potentially puts both companies on the hook for damages that cover not just copyright fines but also Alcon’s lost profits and reputation damage after the alleged “massive economic theft.”

Musk responds to Blade Runner suit

Alcon suspects that Musk believed that Blade Runner 2049 was eligible to be used at the event under the WBD agreement, not knowing that WBD never had “any non-domestic rights or permissions for the Picture.”

Once Musk requested to use the Blade Runner imagery, Alcon alleged that WBD scrambled to secure rights by obscuring the very lucrative “larger brand affiliation proposal” by positioning their ask as a request for much less expensive “clip licensing.”

After Alcon rejected the proposal outright, WBD told Tesla that the affiliation in the event could not occur because X planned to livestream the event globally. But even though Tesla and X allegedly knew that the affiliation was rejected, Musk appears to have charged ahead with the event as planned.

“It all exuded an odor of thinly contrived excuse to link Tesla’s cybercab to strong Hollywood brands,” Alcon’s complaint said. “Which of course is exactly what it was.”

Alcon is hoping a jury will find Tesla, Musk, and WBD violated laws. Producers have asked for an injunction stopping Tesla from using any Blade Runner imagery in its promotional or advertising campaigns. They also want a disclaimer slapped on the livestreamed event video on X, noting that the Blade Runner association is “false or misleading.”

For Musk, a ban on linking Blade Runner to his car company may feel bleak. Last year, he touted the Cybertruck as an “armored personnel carrier from the future—what Bladerunner would have driven.” This amused many Blade Runner fans because, as Gizmodo noted, there never was a character named “Bladerunner”; that was simply the job title of the film’s hero, Deckard.

In response to the lawsuit, Musk took to X to post what Blade Runner fans—who rated the 2017 movie as 88 percent fresh on Rotten Tomatoes—might consider a polarizing take, replying, “That movie sucks” on a post calling out Alcon’s lawsuit as “absurd.”

Photo of Ashley Belanger

Ashley is a senior policy reporter for Ars Technica, dedicated to tracking social impacts of emerging policies and new technologies. She is a Chicago-based journalist with 20 years of experience.



T-Mobile, AT&T oppose unlocking rule, claim locked phones are good for users


Carriers fight plan to require unlocking of phones 60 days after activation.

A smartphone wrapped in a metal chain and padlock

T-Mobile and AT&T say US regulators should drop a plan to require unlocking of phones within 60 days of activation, claiming that locking phones to a carrier’s network makes it possible to provide cheaper handsets to consumers. “If the Commission mandates a uniform unlocking policy, it is consumers—not providers—who stand to lose the most,” T-Mobile alleged in an October 17 filing with the Federal Communications Commission.

The proposed rule has support from consumer advocacy groups who say it will give users more choice and lower their costs. T-Mobile has been criticized for locking phones for up to a year, which makes it impossible to use a phone on a rival’s network. T-Mobile claims that with a 60-day unlocking rule, “consumers risk losing access to the benefits of free or heavily subsidized handsets because the proposal would force providers to reduce the line-up of their most compelling handset offers.”

If the proposed rule is enacted, “T-Mobile estimates that its prepaid customers, for example, would see subsidies reduced by 40 percent to 70 percent for both its lower and higher-end devices, such as the Moto G, Samsung A15, and iPhone 12,” the carrier said. “A handset unlocking mandate would also leave providers little choice but to limit their handset offers to lower cost and often lesser performing handsets.”

T-Mobile and other carriers are responding to a call for public comments that began after the FCC approved a Notice of Proposed Rulemaking (NPRM) in a 5–0 vote. The FCC is proposing “to require all mobile wireless service providers to unlock handsets 60 days after a consumer’s handset is activated with the provider, unless within the 60-day period the service provider determines the handset was purchased through fraud.”

When the FCC proposed the 60-day unlocking rule in July 2024, the agency criticized T-Mobile for locking prepaid phones for a year. The NPRM pointed out that “T-Mobile recently increased its locking period for one of its brands, Metro by T-Mobile, from 180 days to 365 days.”

T-Mobile’s policy says the carrier will only unlock mobile devices on prepaid plans if “at least 365 days… have passed since the device was activated on the T-Mobile network.”

“You bought your phone, you should be able to take it to any provider you want,” FCC Chairwoman Jessica Rosenworcel said when the FCC proposed the rule. “Some providers already operate this way. Others do not. In fact, some have recently increased the time their customers must wait until they can unlock their device by as much as 100 percent.”

T-Mobile locking policy more onerous

T-Mobile executives, who also argue that the FCC lacks authority to impose the proposed rule, met with FCC officials last week to express their concerns.

“T-Mobile is passionate about winning customers for life, and explained how its handset unlocking policies greatly benefit our customers,” the carrier said in its post-meeting filing. “Our policies allow us to deliver access to high-speed mobile broadband on a nationwide 5G network via handsets that are free or heavily discounted off the manufacturer’s suggested retail price. T-Mobile’s unlocking policies are transparent, and there is absolutely no evidence of consumer harm stemming from these policies. T-Mobile’s current unlocking policies also help T-Mobile combat handset theft and fraud by sophisticated, international criminal organizations.”

For postpaid users, T-Mobile says it allows unlocking of fully paid-off phones that have been active for at least 40 days. But given the 365-day lock on prepaid users, T-Mobile’s overall policy is more onerous than those of other carriers. T-Mobile has also faced angry customers because of a recent decision to raise prices on plans that were advertised as having a lifetime price lock.

AT&T enables unlocking of paid-off phones after 60 days for postpaid users and after six months for prepaid users. AT&T lodged complaints similar to T-Mobile’s, saying in an October 7 filing that the FCC’s proposed rules would “mak[e] handsets less affordable for consumers, especially those in low-income households,” and “exacerbate handset arbitrage, fraud, and trafficking.”

AT&T told the FCC that “requiring providers to unlock handsets before they are paid-off would ultimately harm consumers by creating upward pressure on handset prices and disincentives to finance handsets on flexible terms.” If the FCC implements any rules, it should maintain “existing contractual arrangements between customers and providers, ensure that providers have at least 180 days to detect fraud before unlocking a device, and include at least a 24-month period for providers to implement any new rules,” AT&T said.

Verizon, which already faces unlocking rules because of requirements imposed on spectrum licenses it owns, automatically unlocks phones after 60 days for prepaid and postpaid users. Among the three major carriers, Verizon is the most amenable to the FCC’s new rules.

Consumer groups: Make Verizon rules industry-wide

An October 18 filing supporting a strict unlocking rule was submitted by numerous consumer advocacy groups including Public Knowledge, New America’s Open Technology Institute, Consumer Reports, the National Consumers League, the National Consumer Law Center, and the National Digital Inclusion Alliance.

“Wireless users are subject to unnecessary restrictions in the form of locked devices, which tie them to their service providers even when better options may be available. Handset locking practices limit consumer freedom and lessen competition by creating an artificial technological barrier to switching providers,” the groups said.

The groups cited the Verizon rules as a model and urged the FCC to require “that device unlocking is truly automatic—that is, unlocked after the requisite time period without any additional actions of the consumer.” Carriers should not be allowed to lock phones for longer than 60 days even when a phone is on a financing plan with outstanding payments, the groups’ letter said:

Providers should be required to transition out of selling devices without this [automatic unlocking] capability and the industry-wide rule should be the same as the one protecting Verizon customers today: after the expiration of the initial period, the handset must automatically unlock regardless of whether: (1) the customer asks for the handset to be unlocked or (2) the handset is fully paid off. Removing this barrier to switching will make the standard simple for consumers and encourage providers to compete more vigorously on mobile service price, quality, and innovation.

In an October 2 filing, Verizon said it supports “a uniform approach to handset unlocking that allows all wireless providers to lock wireless handsets for a reasonable period of time to limit fraud and to enable device subsidies, followed by automatic unlocking absent evidence of fraud.”

Verizon said 60 days should be the minimum for postpaid devices so that carriers have time to detect fraud and theft, and that “a longer, 180-day locking period for prepaid is necessary to enable wireless providers to continue offering subsidies that make phones affordable for prepaid customers.” Regardless of what time frame the FCC chooses, Verizon said “a uniform unlocking policy that applies to all providers… will benefit both consumers and competition.”

FCC considers impact on phone subsidies

While the FCC is likely to impose an unlocking rule, one question is whether it will apply when a carrier has provided a discounted phone. The FCC’s NPRM asked the public for “comment on the impact of a 60-day unlocking requirement in connection with service providers’ incentives to offer discounted handsets for postpaid and prepaid service plans.”

The FCC acknowledged Verizon’s argument “that providers may rely on handset locking to sustain their ability to offer handset subsidies and that such subsidies may be particularly important in prepaid environments.” But the FCC noted that public interest groups “argue that locked handsets tied to prepaid plans can disadvantage low-income customers most of all since they may not have the resources to switch service providers or purchase new handsets.”

The public interest groups also note that unlocked handsets “facilitate a robust secondary market for used devices, providing consumers with more affordable options,” the NPRM said.

The FCC says it can impose phone-unlocking rules using its legal authority under Title III of the Communications Act “to protect the public interest through spectrum licensing and regulations to require mobile wireless service providers to provide handset unlocking.” The FCC said it previously relied on the same Title III authority when it imposed the unlocking rules on 700 MHz C Block spectrum licenses purchased by Verizon.

T-Mobile told the FCC in a filing last month that “none of the litany of Title III provisions cited in the NPRM support the expansive authority asserted here to regulate consumer handsets (rather than telecommunications services).” T-Mobile also said that “the Commission’s legal vulnerabilities on this score are only magnified in light of recent Supreme Court precedent.”

The Supreme Court recently overturned the 40-year-old Chevron precedent that gave agencies like the FCC judicial deference when interpreting ambiguous laws. The end of Chevron makes it harder for agencies to issue regulations without explicit authorization from Congress. This is a potential problem for the FCC in its fight to revive net neutrality rules, which are currently blocked by a court order pending the outcome of litigation.




ByteDance intern fired for planting malicious code in AI models

After rumors swirled that TikTok owner ByteDance had lost tens of millions of dollars because an intern sabotaged its AI models, ByteDance issued a statement this weekend hoping to silence the social media chatter in China.

In a social media post translated and reviewed by Ars, ByteDance clarified “facts” about “interns destroying large model training” and confirmed that one intern was fired in August.

According to ByteDance, the intern had held a position in the company’s commercial technology team but was fired for committing “serious disciplinary violations.” Most notably, the intern allegedly “maliciously interfered with the model training tasks” for a ByteDance research project, ByteDance said.

None of the intern’s sabotage impacted ByteDance’s commercial projects or online businesses, ByteDance said, and none of ByteDance’s large models were affected.

Online rumors suggested that more than 8,000 graphics processing units were involved in the sabotage and that ByteDance lost “tens of millions of dollars” due to the intern’s interference, but these claims were “seriously exaggerated,” ByteDance said.

The tech company also accused the intern of adding misleading information to his social media profile, seemingly suggesting that his work was connected to ByteDance’s AI Lab rather than its commercial technology team. In the statement, ByteDance confirmed that the intern’s university was notified of what happened, as were industry associations, presumably to prevent the intern from misleading others.

ByteDance’s statement this weekend didn’t seem to silence all the rumors online, though.

One commenter on ByteDance’s social media post disputed the distinction between the AI Lab and the commercial technology team, claiming that “the commercialization team he is in was previously under the AI Lab. In the past two years, the team’s recruitment was written as AI Lab. He joined the team as an intern in 2021, and it might be the most advanced AI Lab.”



Judge slams Florida for censoring political ad: “It’s the First Amendment, stupid”


Florida threatened TV stations over ad that criticized state’s abortion law.


Screenshot of political advertisement featuring a woman describing her experience having an abortion after being diagnosed with brain cancer. Credit: Floridians Protecting Freedom

US District Judge Mark Walker had a blunt message for the Florida surgeon general in an order halting the government official’s attempt to censor a political ad that opposes restrictions on abortion.

“To keep it simple for the State of Florida: it’s the First Amendment, stupid,” Walker, an Obama appointee who is chief judge in US District Court for the Northern District of Florida, wrote yesterday in a ruling that granted a temporary restraining order.

“Whether it’s a woman’s right to choose, or the right to talk about it, Plaintiff’s position is the same—’don’t tread on me,'” Walker wrote later in the ruling. “Under the facts of this case, the First Amendment prohibits the State of Florida from trampling on Plaintiff’s free speech.”

The Florida Department of Health recently sent a legal threat to broadcast TV stations over the airing of a political ad that criticized abortion restrictions in Florida’s Heartbeat Protection Act. The department in Gov. Ron DeSantis’ administration claimed the ad falsely described the abortion law, which could be weakened by a pending ballot question.

Floridians Protecting Freedom, the group that launched the TV ad and is sponsoring a ballot question to lift restrictions on abortion, sued Surgeon General Joseph Ladapo and Department of Health general counsel John Wilson. Wilson has resigned.

Surgeon general blocked from further action

Walker’s order granting the group’s motion states that “Defendant Ladapo is temporarily enjoined from taking any further actions to coerce, threaten, or intimate repercussions directly or indirectly to television stations, broadcasters, or other parties for airing Plaintiff’s speech, or undertaking enforcement action against Plaintiff for running political advertisements or engaging in other speech protected under the First Amendment.”

The order expires on October 29 but could be replaced by a preliminary injunction that would remain in effect while litigation continues. A hearing on the motion for a preliminary injunction is scheduled for the morning of October 29.

The pending ballot question would amend the state Constitution to say, “No law shall prohibit, penalize, delay, or restrict abortion before viability or when necessary to protect the patient’s health, as determined by the patient’s healthcare provider. This amendment does not change the Legislature’s constitutional authority to require notification to a parent or guardian before a minor has an abortion.”

Walker’s ruling said that Ladapo “has the right to advocate for his own position on a ballot measure. But it would subvert the rule of law to permit the State to transform its own advocacy into the direct suppression of protected political speech.”

Federal Communications Commission Chairwoman Jessica Rosenworcel recently criticized state officials, writing that “threats against broadcast stations for airing content that conflicts with the government’s views are dangerous and undermine the fundamental principle of free speech.”

State threatened criminal proceedings

The Floridians Protecting Freedom advertisement features a woman who “recalls her decision to have an abortion in Florida in 2022,” and “states that she would not be able to have an abortion for the same reason under the current law,” Walker’s ruling said.

Caroline, the woman in the ad, states that “the doctors knew if I did not end my pregnancy, I would lose my baby, I would lose my life, and my daughter would lose her mom. Florida has now banned abortion even in cases like mine. Amendment 4 is going to protect women like me; we have to vote yes.”

The ruling described the state government response:

Shortly after the ad began running, John Wilson, then general counsel for the Florida Department of Health, sent letters on the Department’s letterhead to Florida TV stations. The letters assert that Plaintiff’s political advertisement is false, dangerous, and constitutes a “sanitary nuisance” under Florida law. The letter informed the TV stations that the Department of Health must notify the person found to be committing the nuisance to remove it within 24 hours pursuant to section 386.03(1), Florida Statutes. The letter further warned that the Department could institute legal proceedings if the nuisance were not timely removed, including criminal proceedings pursuant to section 386.03(2)(b), Florida Statutes. Finally, the letter acknowledged that the TV stations have a constitutional right to “broadcast political advertisements,” but asserted this does not include “false advertisements which, if believed, would likely have a detrimental effect on the lives and health of pregnant women in Florida.” At least one of the TV stations that had been running Plaintiff’s advertisement stopped doing so after receiving this letter from the Department of Health.

The Department of Health claimed the ad “is categorically false” because “Florida’s Heartbeat Protection Act does not prohibit abortion if a physician determines the gestational age of the fetus is less than 6 weeks.”

Floridians Protecting Freedom responded that the woman in the ad made true statements, saying that “Caroline was diagnosed with stage four brain cancer when she was 20 weeks pregnant; the diagnosis was terminal. Under Florida law, abortions may only be performed after six weeks gestation if ‘[t]wo physicians certify in writing that, in reasonable medical judgment, the termination of the pregnancy is necessary to save the pregnant woman’s life or avert a serious risk of substantial and irreversible physical impairment of a major bodily function of the pregnant woman other than a psychological condition.'”

Because “Caroline’s diagnosis was terminal… an abortion would not have saved her life, only extended it. Florida law would not allow an abortion in this instance because the abortion would not have ‘save[d] the pregnant woman’s life,’ only extended her life,” the group said.

Judge: State should counter with its own speech

Walker’s ruling said the government can’t censor the ad by claiming it is false:

Plaintiff’s argument is correct. While Defendant Ladapo refuses to even agree with this simple fact, Plaintiff’s political advertisement is political speech—speech at the core of the First Amendment. And just this year, the United States Supreme Court reaffirmed the bedrock principle that the government cannot do indirectly what it cannot do directly by threatening third parties with legal sanctions to censor speech it disfavors. The government cannot excuse its indirect censorship of political speech simply by declaring the disfavored speech is “false.”

State officials must show that their actions “were narrowly tailored to serve a compelling government interest,” Walker wrote. A “narrowly tailored solution” in this case would be counterspeech, not censorship, he wrote.

“For all these reasons, Plaintiff has demonstrated a substantial likelihood of success on the merits,” the ruling said. Walker wrote that a ruling in favor of the state would open the door to more censorship:

This case pits the right to engage in political speech against the State’s purported interest in protecting the health and safety of Floridians from “false advertising.” It is no answer to suggest that the Department of Health is merely flexing its traditional police powers to protect health and safety by prosecuting “false advertising”—if the State can rebrand rank viewpoint discriminatory suppression of political speech as a “sanitary nuisance,” then any political viewpoint with which the State disagrees is fair game for censorship.

Walker then noted that Ladapo “has ample, constitutional alternatives to mitigate any harm caused by an injunction in this case.” The state is already running “its own anti-Amendment 4 campaign to educate the public about its view of Florida’s abortion laws and to correct the record, as it sees fit, concerning pro-Amendment 4 speech,” Walker wrote. “The State can continue to combat what it believes to be ‘false advertising’ by meeting Plaintiff’s speech with its own.”




Amazon exec tells employees to work elsewhere if they dislike RTO policy

Amazon workers are being reminded that they can find work elsewhere if they’re unhappy with Amazon’s return-to-office (RTO) mandate.

In September, Amazon told staff that they’ll have to RTO five days a week starting in 2025. Amazon employees are currently allowed to work remotely twice a week. A memo from CEO Andy Jassy announcing the policy change said that “it’s easier for our teammates to learn, model, practice, and strengthen our culture” when working at the office.

On Thursday, at what Reuters described as an “all-hands meeting” for Amazon Web Services (AWS), AWS CEO Matt Garman reportedly told workers:

If there are people who just don’t work well in that environment and don’t want to, that’s okay, there are other companies around.

Garman said that he didn’t “mean that in a bad way,” however, adding: “We want to be in an environment where we’re working together. When we want to really, really innovate on interesting products, I have not seen an ability for us to do that when we’re not in-person.”

Interestingly, Garman’s comments about dissatisfaction with the RTO policy coincided with him claiming that 9 out of 10 Amazon employees that he spoke to are in support of the RTO mandate, Reuters reported.

Some suspect RTO mandates are attempts to make workers quit

Amazon has faced resistance to RTO since pandemic restrictions were lifted. Like workers at other companies, some Amazon employees have publicly wondered if strict in-office policies are being enacted as attempts to reduce headcount without layoffs.

In July 2023, Amazon started requiring employees to work in their team’s central hub location (as opposed to remotely or in an office that may be closer to where they reside). Amazon reportedly told workers that if they didn’t comply or find a new job internally, they’d be considered a “voluntary resignation,” per a Slack message that Business Insider reportedly viewed. And many Amazon employees have already reported considering looking for a new job due to the impending RTO requirements.

However, employers like Amazon “can face an array of legal consequences for encouraging workers to quit via their RTO policies,” Helen D. (Heidi) Reavis, managing partner at Reavis Page Jump LLP, an employment, dispute resolution, and media law firm, told Ars Technica:



US suspects TSMC helped Huawei skirt export controls, report says

In April, TSMC was provided with $6.6 billion in direct CHIPS Act funding to “support TSMC’s investment of more than $65 billion in three greenfield leading-edge fabs in Phoenix, Arizona, which will manufacture the world’s most advanced semiconductors,” the Department of Commerce said.

These investments are key to the Biden-Harris administration’s mission of strengthening “economic and national security by providing a reliable domestic supply of the chips that will underpin the future economy, powering the AI boom and other fast-growing industries like consumer electronics, automotive, Internet of Things, and high-performance computing,” the department noted. And in particular, the funding will help America “maintain our competitive edge” in artificial intelligence, the department said.

It likely wouldn’t make sense to prop TSMC up to help the US “onshore the critical hardware manufacturing capabilities that underpin AI’s deep language learning algorithms and inferencing techniques,” only to then limit access to US-made tech. TSMC’s Arizona fabs are supposed to support companies like Apple, Nvidia, and Qualcomm and enable them to “compete effectively,” the Department of Commerce said.

Currently, it’s unclear where the US probe into TSMC will go or whether a damaging finding could potentially impact TSMC’s CHIPS funding.

Last fall, though, the Department of Commerce published a final rule designed to “prevent CHIPS funds from being used to directly or indirectly benefit foreign countries of concern,” such as China.

If the US suspected that TSMC was aiding Huawei’s AI chip manufacturing, the company could be seen as violating CHIPS guardrails that prohibit TSMC from “knowingly engaging in any joint research or technology licensing effort with a foreign entity of concern that relates to a technology or product that raises national security concerns.”

Violating this “technology clawback” provision of the final rule risks “the full amount” of CHIPS Act funding being “recovered” by the Department of Commerce. That outcome seems unlikely, though, given that TSMC has been awarded more funding than any other recipient apart from Intel.

The Department of Commerce declined Ars’ request to comment on whether TSMC’s CHIPS Act funding could be impacted by the reported probe.
