Author name: Paul Patrick


Texas suit alleging anti-coal “cartel” of top Wall Street firms could reshape ESG


It’s a closely watched test of whether corporate alliances on climate efforts violate antitrust laws.

This article originally appeared on Inside Climate News, a nonprofit, non-partisan news organization that covers climate, energy, and the environment. Sign up for their newsletter here.

Since 2022, Republican lawmakers in Congress and state attorneys general have sent letters to major banks, pension funds, asset managers, accounting firms, companies, nonprofits, and business alliances, putting them on notice for potential antitrust violations and seeking information as part of the Republican pushback against “environmental, social and governance” efforts such as corporate climate commitments.

“This caused a lot of turmoil and stress obviously across the whole ecosystem,” said Denise Hearn, a senior fellow at the Columbia Center on Sustainable Investment. “But everyone wondered, ‘OK, when are they actually going to drop a lawsuit?’”

That came in November, filed by Texas Attorney General Ken Paxton and 10 other Republican AGs, accusing three of the biggest asset managers on Wall Street—BlackRock, Vanguard, and State Street—of running “an investment cartel” to depress the output of coal, boosting their revenues while pushing up energy costs for Americans. The Trump administration’s Department of Justice and Federal Trade Commission filed a supporting brief in May.

The overall pressure campaign aimed at what’s known as “ESG” is having an impact.

“Over the past several months, through this [lawsuit] and other things, letters from elected officials, state and federal, there has been a chilling effect of what investors are saying,” said Steven Maze Rothstein, chief program officer of Ceres, a nonprofit that advocates for more sustainable business practices and was among the earliest letter recipients. Still, “investors understand that Mother Nature doesn’t know who’s elected governor, attorney general, president.”

Earlier this month, a US District Court judge in Tyler, Texas, declined to dismiss the lawsuit against the three asset managers, though he did dismiss three of the 21 counts. The judge was not making a final decision in the case, only ruling that there was enough evidence for it to go to trial.

BlackRock said in a statement: “This case is not supported by the facts, and we will demonstrate that.” Vanguard said it will “vigorously defend against plaintiffs’ claims.” State Street called the lawsuit “baseless and without merit.”

The Texas attorney general’s office did not respond to requests for comment.

The three asset managers built substantial stakes in major US coal producers, the suit alleges, and “announced their common commitment” to cut US coal output by joining voluntary alliances to collaborate on climate issues, including the Net Zero Asset Managers Initiative and, in the case of two of the firms, the Climate Action 100+. (All of them later pulled out of the alliances.)

The lawsuit alleges that the coal companies succumbed to the defendants’ collective influence, mining less coal and disclosing more climate-related information. The suit claims that this resulted in “cartel-level revenues and profits” for the asset managers.

“You could say, ‘Well, if the coal companies were all colluding together to restrict output, then shouldn’t they also be violating antitrust?’” Hearn asked. But the attorneys general “are trying to say that it was at the behest of these concentrated index funds and the concentrated ownership.”

Index funds, which are designed to mirror the returns of specific market indices, are the most common mode of passive investment—when investors park their money somewhere for long-term returns.

The case is being watched closely, not only by climate alliances and sustainability nonprofits, but by the financial sector at large.

If the three asset managers ultimately win, it would turn down the heat on other climate alliances and vindicate those who pressured financial players to line up their business practices with the Paris Agreement goals as well as national and local climate targets. The logic of those efforts: Companies in the financial sector have a big impact on climate change, for good or ill—and climate change has a big impact on those same companies.

If the red states instead win on all counts, that “could essentially totally reconstitute the industry as we understand it,” said Hearn, who has co-authored a paper on the lawsuit. At stake is how the US does passive investing.

The pro-free-market editorial board of The Wall Street Journal in June called the Texas-led lawsuit “misconceived,” its logic “strained” and its theories “bizarre.”

The case breaks ground on two fronts. It challenges collaboration between financial players on climate action. It also makes novel claims around “common ownership,” where a shareholder—in this case, an asset manager—holds stakes in competing firms within the same sector.

“Regardless of how the chips fall in the case, those two things will absolutely be precedent-setting,” Hearn said.

Even though this is the first legal test of the theory that business climate alliances are anti-competitive, the question has already been examined empirically. A study by Harvard Business School economists, released in May, looked at 11 major climate alliances and 424 listed financial institutions over 10 years and turned up no evidence of traditional antitrust violations. The study was broad and did not look at particular allegations against specific firms.

“To the extent that there are valid legal arguments that can be made, they have to be tested,” said study co-author Peter Tufano, a Harvard Business School professor, noting that his research casts doubt on many of the allegations made by critics of these alliances.

Financial firms that joined climate alliances were more likely to adopt emissions targets and climate-aligned management practices, cut their own emissions and engage in pro-climate lobbying, the study found.

“The range of [legal] arguments that are made, and the passion with which they’re being advanced, suggests that these alliances must be doing something meaningful,” said Tufano, who was previously the dean of the Saïd Business School at the University of Oxford.

Meanwhile, most of the world is moving the other way.

According to a tally by CarbonCloud, a carbon emissions accounting platform that serves the food industry, at least 35 countries that make up more than half of the world’s gross domestic product now mandate climate-related disclosures of some kind.

In the US, California, which on its own would be the world’s fourth-largest economy, will begin requiring big businesses to measure and report their direct and indirect emissions next year.

Ceres’ Rothstein notes that good data about companies is necessary for informed investment decisions. “Throughout the world,” he said, “there’s greater recognition and, to be honest, less debate about the importance of climate information.” Ceres is one of the founders of Climate Action 100+, which now counts more than 600 investor members around the world, including in Europe, Asia, and Australia.

For companies that operate globally, the American political landscape is in sharp contrast with other major economies, Tufano said, creating “this whipsawed environment where if you get on a plane, a few hours later, you’re in a jurisdiction that’s saying exactly the opposite thing.”

But even as companies and financial institutions publicly retreat from their climate commitments amid US political pressure, in a phenomenon called “greenhushing,” their decisions remain driven by the bottom line. “Banks are going to do what they’re going to do, and they’re going to lend to the most profitable or to the most growth-oriented industries,” Hearn said, “and right now, that’s not the fossil fuel industry.”


The fight against labeling long-term streaming rentals as “purchases” you “buy”

Words have meaning. Proper word selection is integral to strong communication, whether it’s about relaying one’s feelings to another or explaining the terms of a deal, agreement, or transaction.

Language can be confusing, but typically when something is available to “buy,” ownership of that good or access to that service is offered in exchange for money. That’s not really the case, though, when it comes to digital content.

Often, streaming services like Amazon Prime Video offer customers the options to “rent” digital content for a few days or to “buy” it. Some might think that picking “buy” means that they can view the content indefinitely. But these purchases are really just long-term licenses to watch the content for as long as the streaming service has the right to distribute it—which could be for years, months, or days after the transaction.

A lawsuit [PDF] recently filed against Prime Video challenges this practice and accuses the streaming service of misleading customers by labeling long-term rentals as purchases. The conclusion of the case could have implications for how streaming services frame digital content.

New lawsuit against Prime Video

On August 21, Lisa Reingold filed a proposed class-action lawsuit in the US District Court for the Eastern District of California against Amazon, alleging “false and misleading advertising.” The complaint, citing Prime Video’s terms of use, reads:

On its website, Defendant tells consumers the option to ‘buy’ or ‘purchase’ digital copies of these audiovisual works. But when consumers ‘buy’ digital versions of audiovisual works through Amazon’s website, they do not obtain the full bundle of sticks of rights we traditionally think of as owning property. Instead, they receive ‘non-exclusive, nontransferable, non-sublicensable, limited license’ to access the digital audiovisual work, which is maintained at Defendant’s sole discretion.

The complaint compares buying a movie from Prime Video to buying one from a physical store. It notes that someone who buys a DVD can view the movie a decade later, but “the same cannot be said,” necessarily, if they purchased the film on Prime Video. Prime Video may remove the content or replace it with a different version, such as a shorter theatrical cut.


CDC spiraled into chaos this week. Here’s where things stand.


CDC is in crisis amid an ouster, resignations, defiance, and outraged lawmakers.

Demetre Daskalakis, former director of the National Center for Immunization and Respiratory Diseases at the Centers for Disease Control and Prevention (CDC), center, embraces a supporter during a clap out outside of CDC headquarters in Atlanta, Georgia, US, on Thursday, Aug. 28, 2025. Credit: Getty | Dustin Chambers

The US Centers for Disease Control and Prevention descended into turmoil this week after Health Secretary and zealous anti-vaccine advocate Robert F. Kennedy Jr. ousted the agency’s director, Susan Monarez, who had just weeks ago been confirmed by the Senate and earned Kennedy’s praise for her “unimpeachable scientific credentials.”

It appears those scientific chops are what led to her swift downfall. Since the Department of Health and Human Services announced on X late Wednesday that “Susan Monarez is no longer director” of the CDC, media reports have revealed that her forced removal was over her refusal to bend to Kennedy’s anti-vaccine, anti-science agenda.

The ouster appeared to be a breaking point for the agency overall, which has never fully recovered from the public pummeling it received at the height of the COVID-19 pandemic. In its weakened position, the agency has since endured an onslaught of further criticism, vilification, and misinformation from Kennedy and the Trump administration, which also delivered brutal cuts, significantly slashing CDC’s workforce, shuttering vital health programs, and hamstringing others. Earlier this month, a gunman, warped by vaccine misinformation, opened fire on the CDC’s campus, riddling its buildings with hundreds of bullets, killing a local police officer, and traumatizing agency staff.

Monarez’s expulsion represents the loss of a scientifically qualified leader who could have tried to shield the agency from some ideological attacks. As such, it quickly triggered a cascade of high-profile resignations at the CDC, a mass walkout of its staff, and outrage among lawmakers and health experts. While the fallout of the ouster is ongoing, what is immediately clear is that Kennedy is relentlessly advancing his war against lifesaving vaccines from within the CDC and is forcing his ideological agenda on CDC experts.

Some of those very CDC experts now warn that the CDC can no longer be trusted and the country is less safe.

Here’s what we know so far about the CDC’s downturn:

The ouster

Late Wednesday, The Washington Post reported that, for days prior to her ouster, Monarez had stood firm against Kennedy’s demands that she, and by extension the CDC, blindly support and adopt vaccine restrictions put forward by the agency’s vaccine advisory panel—a panel that Kennedy has utterly compromised. After firing all of its highly qualified, extensively vetted members in June, Kennedy hastily installed hand-selected allies on the Advisory Committee on Immunization Practices (ACIP), who are painfully unqualified but share Kennedy’s hostility toward lifesaving shots. Already, Kennedy’s panel has made recommendations that contradict scientific evidence and public health.

It is widely expected that they will further undo the agency’s evidence-based vaccine recommendations, particularly for COVID-19 and childhood shots. Experts fear that such changes would undermine public confidence in both vaccines and federal guidance, and make vaccines more difficult, if not impossible, for Americans to obtain. Kennedy has already restricted access to COVID-19 vaccines, prompting medical associations to produce divergent recommendations, which raises a slew of unanswered questions about access to the vaccines.

Amid the standoff over rolling back vaccine policy, Kennedy urged Monarez to resign. She refused, and instead called key senators for help, including Bill Cassidy (R-La.), who cast a critical vote in favor of Kennedy’s confirmation in exchange for concessions that Kennedy would not upend CDC’s vaccine recommendations.

Cassidy then called Kennedy, which angered the anti-vaccine advocate, who then chastised Monarez. The beleaguered director was then presented with the choice to resign or be fired. She continued to refuse to resign. On Wednesday evening, HHS wrote of her termination on X. But Monarez, speaking through her lawyers, reiterated that she would not resign and had not been notified of her termination. Late Wednesday night, her lawyers confirmed that White House officials had sent notification of termination, but she still refused to vacate the role.

“As a presidential appointee, senate confirmed officer, only the president himself can fire her,” her lawyers, Mark Zaid and Abbe Lowell, said in a statement emailed to Ars Technica. “For this reason, we reject the notification Dr. Monarez has received as legally deficient and she remains as CDC Director. We have notified the White House Counsel of our position.”

On Thursday, the Post reported that the White House had already named a replacement. Jim O’Neill, currently the deputy secretary of HHS, is to be the interim leader of the CDC. O’Neill was previously a Silicon Valley investor and entrepreneur who became a close ally of Peter Thiel. He also worked as a federal official in the George W. Bush administration. During the COVID-19 pandemic, he was a frequent critic of the CDC, but at his Senate confirmation hearing in May, he called himself “very strongly pro-vaccine.”

Kennedy, meanwhile, went on Fox News’ Fox and Friends program Thursday and said the CDC is “in trouble” and that “we’re fixing it. And it may be that some people should not be working there anymore.”

Kennedy’s ACIP is now scheduled to meet September 18–19 to discuss COVID-19 shots, among other vaccines.

Response at the CDC

Soon after news broke of Monarez’s removal, three high-ranking CDC officials resigned together: Daniel Jernigan, director of the National Center for Emerging Zoonotic Infectious Diseases; Debra Houry, chief medical officer; and Demetre Daskalakis, director of the National Center for Immunization and Respiratory Diseases.

Their resignation letters spoke to the dangers of Kennedy’s anti-vaccine, anti-science agenda.

“For the good of the nation and the world, the science at CDC should never be censored or subject to political pauses or interpretations,” Houry wrote in her resignation letter. “Vaccines save lives—this is an indisputable, well-established, scientific fact. … It is, of course, important to question, analyze, and review research and surveillance, but this must be done by experts with the right skills and experience, without bias, and considering the full weight of scientific evidence. Recently, the overstating of risks and the rise of misinformation have cost lives, as demonstrated by the highest number of US measles cases in 30 years and the violent attack on our agency.”

In his resignation letter, Daskalakis slammed Kennedy for his lack of transparency, communication, and interest in evidence-based policy. He accused the anti-vaccine advocate of using the CDC as “a tool to generate policies and materials that do not reflect scientific reality and are designed to hurt rather than to improve the public’s health.” He also blasted ACIP’s COVID work group members as having “dubious intent and more dubious scientific rigor.”

“The intentional eroding of trust in low-risk vaccines favoring natural infection and unproven remedies will bring us to a pre-vaccine era where only the strong will survive and many if not all will suffer,” Daskalakis wrote. “I believe in nutrition and exercise. I believe in making our food supply healthier, and I also believe in using vaccines to prevent death and disability. Eugenics plays prominently in the rhetoric being generated and is derivative of a legacy that good medicine and science should continue to shun.”

In a conversation with The New York Times published Friday, Daskalakis revealed that Kennedy has never accepted a briefing from his center’s experts and said the resignations should indicate that “there’s something extremely wrong [at CDC].

“And also I think it’s important for the American public to know that they really need to be cautious about the recommendations that they’re hearing coming out of ACIP,” he added.

As the three leaders were escorted out of the CDC on Thursday, the staff held a boisterous rally to show support for them and their agency. On his way out, Jernigan, who worked at CDC for more than 30 years, praised his colleagues.

“What makes us great at CDC is following the science, so let’s get the politics out of public health,” he said to cheers. “Let’s get back to the objectivity and let the science lead us, because that’s how we get to the best decisions for public health.”

While those three resignations made news on Wednesday and Thursday, they are part of a steady stream of exits from the agency since Kennedy became secretary. Earlier on Wednesday, Politico reported that Jennifer Layden, director of the agency’s Office of Public Health Data, Surveillance, and Technology, had also resigned.

Response outside the CDC

Lawmakers have expressed concern and even outrage over Monarez’s firing and what’s going on at the CDC.

Sen. Bernie Sanders (I-Vt.) quickly demanded a bipartisan investigation into Monarez’s firing, calling Kennedy’s actions “reckless” and “dangerous.”

He went on to blast Kennedy’s work as health secretary. “In just six months, Secretary Kennedy has completely upended the process for reviewing and recommending vaccines for the public,” Sanders said. “He has unilaterally narrowed eligibility for COVID vaccines approved by the FDA, despite an ongoing surge in cases. He has spread misinformation about the safety and effectiveness of vaccines during the largest measles outbreak in over 30 years. He continues to spread misinformation about COVID vaccines. Now he is pushing out scientific leaders who refuse to act as a rubber stamp for his dangerous conspiracy theories and manipulate science.”

Sanders called on Cassidy, chair of the Senate Health, Education, Labor, and Pensions (HELP) Committee, to immediately convene a public hearing with Kennedy and Monarez.

Cassidy called for the upcoming ACIP meeting to be postponed.

“Serious allegations have been made about the meeting agenda, membership, and lack of scientific process being followed for the now announced September ACIP meeting,” Cassidy said in a statement. “These decisions directly impact children’s health and the meeting should not occur until significant oversight has been conducted. If the meeting proceeds, any recommendations made should be rejected as lacking legitimacy given the seriousness of the allegations and the current turmoil in CDC leadership.”

Outside health organizations also expressed alarm about the situation at the CDC.

The American Medical Association said it was “deeply troubled” by the agency’s turmoil and called Monarez’s ouster and the other resignations “highly alarming at a challenging moment for public health.”

In a joint press conference on Thursday, leaders of the Infectious Diseases Society of America and the American Public Health Association spoke of the ripple effects on the public health community and the American public more broadly.

“When leadership decisions weaken the CDC, every American becomes more vulnerable to outbreaks, pandemics, and bioterror threats,” Wendy Armstrong, vice president of the Infectious Diseases Society of America, said in the briefing. “We’re speaking out because protecting public health is our responsibility as physicians and scientists. It’s imperative that the White House and Congress take action to ensure a functioning CDC as the current HHS Secretary Robert Kennedy has failed.”

Georges Benjamin, executive director of the American Public Health Association, echoed the call, saying, “We’ve had enough.”


Beth is Ars Technica’s Senior Health Reporter. Beth has a Ph.D. in microbiology from the University of North Carolina at Chapel Hill and attended the Science Communication program at the University of California, Santa Cruz. She specializes in covering infectious diseases, public health, and microbes.


Battlefield 6 dev apologizes for requiring Secure Boot to power anti-cheat tools

Earlier this month, EA announced that players in its Battlefield 6 open beta on PC would have to enable Secure Boot in their Windows OS and BIOS settings. That decision proved controversial among players who weren’t able to get the finicky low-level security setting working on their machines and others who were unwilling to allow EA’s anti-cheat tools to once again have kernel-level access to their systems.

Now, Battlefield 6 technical director Christian Buhl is defending that requirement as something of a necessary evil to combat cheaters, even as he apologizes to any potential players that it has kept away.

“The fact is I wish we didn’t have to do things like Secure Boot,” Buhl said in an interview with Eurogamer. “It does prevent some players from playing the game. Some people’s PCs can’t handle it and they can’t play: that really sucks. I wish everyone could play the game with low friction and not have to do these sorts of things.”

Throughout the interview, Buhl admits that even requiring Secure Boot won’t completely eradicate cheating in Battlefield 6 long term. Even so, he offered that the Javelin anti-cheat tools enabled by Secure Boot’s low-level system access were “some of the strongest tools in our toolbox to stop cheating. Again, nothing makes cheating impossible, but enabling Secure Boot and having kernel-level access makes it so much harder to cheat and so much easier for us to find and stop cheating.”

Too much security, or not enough?

When announcing the Secure Boot requirement in a Steam forum post prior to the open beta, EA explained that having Secure Boot enabled “provides us with features that we can leverage against cheats that attempt to infiltrate during the Windows boot process.” Having access to the Trusted Platform Module on the motherboard via Secure Boot provides the anti-cheat team with visibility into things like kernel-level cheats and rootkits, memory manipulation, injection spoofing, hardware ID manipulation, the use of virtual machines, and attempts to tamper with anti-cheat systems, the company wrote.


Video player looks like a 1-inch TV from the ’60s and is wondrous, pointless fun


TV static and remote included.

The TinyTV 2 powering off. Credit: Scharon Harding

If a family of anthropomorphic mice were to meet around a TV, I imagine they’d gather around something like TinyCircuits’ TinyTV 2. The gadget sits on four slender, angled legs with its dials and classic, brown shell beckoning viewers toward its warm, bright stories. The TinyTV’s screen is only 1.14 inches diagonally, but the device exudes vintage energy.

In simple terms, the TinyTV is a portable, rechargeable gadget that plays stored videos and was designed to look and function like a vintage TV. The details go down to the dials, one for controlling the volume and another for scrolling through the stored video playlist. Both rotary knobs make a reassuring click when twisted.

Musing on fantastical uses for the TinyTV seems appropriate because the device feels like it’s built around fun. At a time when TVs are getting more powerful, software-driven, AI-stuffed, and, of course, bigger, the TinyTV is a delightful, comforting tribute to a simpler time for TVs.

Retro replica

The TinyTV’s remote and backside next to a lighter for size comparisons. Credit: Scharon Harding

TinyCircuits makes other tiny, open source gadgets to “serve creativity in the maker community, build fun STEAM learning, and spark joy,” according to the Ohio-based company’s website. TinyCircuits’ first product was the Arduino-based TinyDuino Platform, which it crowdfunded through Kickstarter in 2012.

The TinyTV 2 is the descendant of the $75 (as of this writing) TinyTV DIY Kit that came out three years prior. TinyCircuits crowdfunded the TinyTV 2 on Kickstarter and Indiegogo in 2022 (along with a somehow even smaller alternative, the 0.6-inch TinyTV Mini). Now, TinyCircuits sells the TinyTV alongside other small electronics—like Thumby, a “playable, programmable keychain” that looks like a Game Boy—on its website for $60.

“This idea actually came from one of our customers in Japan,” Ken Burns, TinyCircuits’ founder, told Ars via email. “Our original product line was a number of different stackable boards [that] work like little electronic LEGOs to allow people to create all sorts of projects. We had a small screen as part of this platform, which this customer used to create a small TV set that was very cute …”

Even when powered off, the TinyTV sparks intrigue, with a vintage aesthetic replicating some of the earliest TV sets.

The TinyTV was inspired by vintage TV sets. Credit: Scharon Harding

Nostalgia hit me when I pressed the power button on top of the TinyTV. When the gadget powers on or off or switches between videos, it shows snow and makes a TV static noise that I haven’t heard in years.

TV toned down

Without a tuner, the TinyTV isn’t really a TV. It also can’t connect to the Internet, so it’s not a streaming device. I was able to successfully stream videos from a connected computer over USB-C using this link, but audio isn’t supported.

With many TV owners relying on flat buttons and their voice to control TVs, turning a knob or pressing a button to flip through content feels novel. It also makes me wonder if today’s youth understand the meaning of phrases like “flipping channels” and “channel surfing.” Emulating a live TV, the TinyTV syncs timestamps, so that if you return to a “channel,” the video will play from a middle point, as if the content had been playing the whole time you were watching something else.

When the TinyTV powers off, the display briefly shows snow that is quickly eaten up by black, making the static look like a shrinking circle before the screen is completely black.

The TinyTV comes with an infrared remote, a small, black, 3D-printed thing with a power button and buttons for controlling the volume and switching videos.

The TinyTV with its remote. Credit: Scharon Harding

But the remote didn’t work reliably, even when I held it the recommended 12 to 18 inches away from the TinyTV. That’s a shame because using the knobs requires two hands to prevent the TinyTV from toppling.

Adding video to the TinyTV is simple because TinyCircuits has a free tool for converting MP4 files into the necessary AVI format. After conversion, you add files to the TinyTV by connecting it to a computer via its USB-C port. My system read the TinyTV as a USB drive.

Image quality is better than you might expect from a 1.14-inch panel. It’s an IPS screen with 16-bit color and a 30 Hz refresh rate, per Burns. A CRT would be more period-accurate, but in addition to the display tech being bulkier and more expensive, it’s hard to find CRT tech this size. (The smallest CRT TV was Panasonic’s Travelvision CT-101, which came out in 1984 with a 1.5-inch screen and is rare today.)

One of my biggest challenges was finding a way to watch the TinyTV at eye level. However, even when the device was positioned below eye level, I could still make out images in bright scenes. Seeing the details in dark images was hard, though, even with the TinyTV at a proper distance.

I uploaded a trailer for this summer’s Mission: Impossible – The Final Reckoning movie onto the TinyTV, and with 223.4 pixels per inch, its screen was sharp enough to show details like a document with text, the edges of a small airplane’s wing, and the minuscule space between Tom Cruise and the floor in that vault from the first Mission: Impossible.
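For what it’s worth, that pixel-density figure is consistent with the 216×135 resolution commonly listed for the TinyTV 2’s panel, an assumption on my part, since only the diagonal appears here:

```python
import math

# Assumed panel resolution for the TinyTV 2; the 1.14-inch diagonal is from the review.
width_px, height_px = 216, 135
diagonal_in = 1.14

diagonal_px = math.hypot(width_px, height_px)  # ~254.7 pixels corner to corner
print(f"{diagonal_px / diagonal_in:.1f} PPI")  # prints 223.4
```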

Tom Cruise on the TinyTV. Credit: Scharon Harding

A video of white text on a black background that TinyCircuits preloaded was legible, despite some blooming and the scrolling words appearing jerky. Everything I uploaded also appeared grainier on TinyTV, making details harder to see.

The 0.6×4-inch, front-facing speaker, however, isn’t nearly loud enough to hear when almost anything else in the room is making noise. Soft dialogue was hard to make out, even in a quiet room.

A simpler time for TVs

We’ve come a long way since the early days of TV. Screens are bigger, brighter, faster, and more colorful and advanced. We’ve moved from input dials to slim remotes with ads for streaming services. TV legs have been replaced with wall mounts, and the screens are no longer filled with white noise but are driven by software and tracking.

I imagine the TinyTV serving a humble mouse family when I’m not looking. I’ve seen TinyCircuits market the gadget as dollhouse furniture. People online have also pointed to using TinyTVs at marketing events, like trade shows, to draw people in.

“People use this for a number of things, like office desk toys, loading videos on it for the holidays to send to Grandma, or just for fun,” Burns told me.

I’ve mostly settled on using the TinyTV in my home office to show iPhone-shot footage of my dog playing, as if it’s an old home video, plus a loop of a video of one of my favorite waterfalls.

The TinyTV’s 8GB microSD card is supposed to hold “about” 10 hours of video. Burns told me that it’s “possible” to swap the storage. You’d have to take the gadget apart, though. Credit: Scharon Harding

As TVs morph into ad machines and new display tech forces us to learn new acronyms regularly, TinyTV’s virtually pointless fun is refreshing. It’s not a real TV, but it gets at the true spirit of TVs: electronic screens that invite people to gather ’round, so they can detach from the real world and be entertained.


Scharon is a Senior Technology Reporter at Ars Technica writing news, reviews, and analysis on consumer gadgets and services. She’s been reporting on technology for over 10 years, with bylines at Tom’s Hardware, Channelnomics, and CRN UK.


With recent Falcon 9 milestones, SpaceX vindicates its “dumb” approach to reuse

As SpaceX’s Starship vehicle gathered all of the attention this week, the company’s workhorse Falcon 9 rocket continued to hit some impressive milestones.

Both occurred during relatively anonymous launches of the company’s Starlink satellites but are nonetheless notable because they underscore the value of first-stage reuse, which SpaceX has pioneered over the last decade.

The first milestone occurred on Wednesday morning with the launch of the Starlink 10-56 mission from Cape Canaveral, Florida. The first stage that launched these satellites, Booster 1096, was making its second launch and successfully landed on the Just Read the Instructions drone ship. Strikingly, this was the 400th time SpaceX has executed a drone ship landing.

Then, less than 24 hours later, another Falcon 9 rocket launched the Starlink 10-11 mission from a nearby launch pad at Kennedy Space Center. This first stage, Booster 1067, subsequently returned and landed on another drone ship, A Shortfall of Gravitas.

This is a special booster, having made its debut in June 2021 and launching a wide variety of missions, including two Crew Dragon vehicles to the International Space Station and some Galileo satellites for the European Union. On Thursday, the rocket made its 30th flight, the first time a Falcon 9 booster has hit that level of experience.

A decade in the making

These milestones came about one decade after SpaceX began to have some success with first-stage reuse.

The company first performed a controlled reentry of the Falcon 9 rocket’s first stage in September 2013, during the first flight of version 1.1 of the vehicle. This proved the viability of supersonic retropropulsion, which until that time had been purely theoretical.

This involves igniting the rocket’s nine Merlin engines while the vehicle is traveling faster than the speed of sound through the upper atmosphere, with external temperatures exceeding 1,000 degrees Fahrenheit. Due to the blunt force of this reentry, the engines in the outer ring of the rocket wanted to splay outward, the company’s chief of propulsion at the time, Tom Mueller, told me for the book Reentry. Success on the first try seemed improbable.

He recalled watching this launch from Vandenberg Space Force Base in California and observing reentry as a camera aboard SpaceX founder Elon Musk’s private jet tracked the rocket. The first stage made it all the way down, intact.


As GM prepares to switch its EVs to NACS, it has some new adapters

The first adapter that GM released, which cost $225, allowed CCS1-equipped EVs to connect to a NACS charger. But now, GM will have a range of adapters so that any of its EV customers can charge anywhere, as long as they have the right dongle.

For existing GM EVs with CCS1, there is a GM NACS DC adapter, just for fast charging. And for level 2 (AC) charging, there’s a GM NACS level 2 adapter.

For the NACS-equipped GM EVs (which, again, have yet to hit the showrooms), there’s a GM CCS1 DC adapter that will let those EVs use existing non-Tesla DC charging infrastructure, like Electrify America’s 350 kW chargers. There is also a GM J1772 AC adapter, which will let a GM NACS EV slow-charge from the ubiquitous J1772 port. And a pair of adapters will be compatible with GM’s Energy Powershift home charger, which lets an EV use its battery to power the house if necessary, also known as vehicle-to-home or V2H.

Although we don’t have exact prices for each adapter, GM told Ars the range costs between $67 and $195.


The personhood trap: How AI fakes human personality


Intelligence without agency

AI assistants don’t have fixed personalities—just patterns of output guided by humans.

Recently, a woman slowed down a line at the post office, waving her phone at the clerk. ChatGPT told her there’s a “price match promise” on the USPS website. No such promise exists. But she trusted what the AI “knows” more than the postal worker—as if she’d consulted an oracle rather than a statistical text generator accommodating her wishes.

This scene reveals a fundamental misunderstanding about AI chatbots. There is nothing inherently special, authoritative, or accurate about AI-generated outputs. Given a reasonably trained AI model, the accuracy of any large language model (LLM) response depends on how you guide the conversation. LLMs are prediction machines that will produce whatever pattern best fits your question, regardless of whether that output corresponds to reality.

Despite these issues, millions of daily users engage with AI chatbots as if they were talking to a consistent person—confiding secrets, seeking advice, and attributing fixed beliefs to what is actually a fluid idea-connection machine with no persistent self. This personhood illusion isn’t just philosophically troublesome—it can actively harm vulnerable individuals while obscuring a sense of accountability when a company’s chatbot “goes off the rails.”

LLMs are intelligence without agency—what we might call “vox sine persona”: voice without person. Not the voice of someone, not even the collective voice of many someones, but a voice emanating from no one at all.

A voice from nowhere

When you interact with ChatGPT, Claude, or Grok, you’re not talking to a consistent personality. There is no one “ChatGPT” entity to tell you why it failed—a point we elaborated on more fully in a previous article. You’re interacting with a system that generates plausible-sounding text based on patterns in training data, not a person with persistent self-awareness.

These models encode meaning as mathematical relationships—turning words into numbers that capture how concepts relate to each other. In the models’ internal representations, words and concepts exist as points in a vast mathematical space where “USPS” might be geometrically near “shipping,” while “price matching” sits closer to “retail” and “competition.” A model plots paths through this space, which is why it can so fluently connect USPS with price matching—not because such a policy exists but because the geometric path between these concepts is plausible in the vector landscape shaped by its training data.
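To make that geometry concrete, here is a toy sketch in Python. The vectors and the similarity scores they produce are invented for illustration (real models use embeddings with thousands of dimensions), but the principle that nearby concepts connect fluently is the same:

```python
import numpy as np

# Made-up 3-D "embeddings"; real models learn these vectors during training.
embeddings = {
    "USPS":           np.array([0.9, 0.1, 0.2]),
    "shipping":       np.array([0.8, 0.2, 0.3]),
    "price matching": np.array([0.1, 0.9, 0.4]),
    "retail":         np.array([0.2, 0.8, 0.5]),
}

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    # Cosine similarity: 1.0 means "pointing the same way" in concept space.
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

print(cosine(embeddings["USPS"], embeddings["shipping"]))        # high: geometrically close
print(cosine(embeddings["USPS"], embeddings["price matching"]))  # lower, but a path still exists
```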

Knowledge emerges from understanding how ideas relate to each other. LLMs operate on these contextual relationships, linking concepts in potentially novel ways—what you might call a type of non-human “reasoning” through pattern recognition. Whether the resulting linkages the AI model outputs are useful depends on how you prompt it and whether you can recognize when the LLM has produced a valuable output.

Each chatbot response emerges fresh from the prompt you provide, shaped by training data and configuration. ChatGPT cannot “admit” anything or impartially analyze its own outputs, as a recent Wall Street Journal article suggested. ChatGPT also cannot “condone murder,” as The Atlantic recently wrote.

The user always steers the outputs. LLMs do “know” things, so to speak—the models can process the relationships between concepts. But the AI model’s neural network contains vast amounts of information, including many potentially contradictory ideas from cultures around the world. How you guide the relationships between those ideas through your prompts determines what emerges. So if LLMs can process information, make connections, and generate insights, why shouldn’t we consider that as having a form of self?

Unlike today’s LLMs, a human personality maintains continuity over time. When you return to a human friend after a year, you’re interacting with the same human friend, shaped by their experiences over time. This self-continuity is one of the things that underpins actual agency—and with it, the ability to form lasting commitments, maintain consistent values, and be held accountable. Our entire framework of responsibility assumes both persistence and personhood.

An LLM personality, by contrast, has no causal connection between sessions. The intellectual engine that generates a clever response in one session doesn’t exist to face consequences in the next. When ChatGPT says “I promise to help you,” it may understand, contextually, what a promise means, but the “I” making that promise literally ceases to exist the moment the response completes. Start a new conversation, and you’re not talking to someone who made you a promise—you’re starting a fresh instance of the intellectual engine with no connection to any previous commitments.

This isn’t a bug; it’s fundamental to how these systems currently work. Each response emerges from patterns in training data shaped by your current prompt, with no permanent thread connecting one instance to the next beyond an amended prompt, which includes the entire conversation history and any “memories” held by a separate software system, being fed into the next instance. There’s no identity to reform, no true memory to create accountability, no future self that could be deterred by consequences.

Every LLM response is a performance, which is sometimes very obvious when the LLM outputs statements like “I often do this while talking to my patients” or “Our role as humans is to be good people.” It’s not a human, and it doesn’t have patients.

Recent research confirms this lack of fixed identity. While a 2024 study claims LLMs exhibit “consistent personality,” the researchers’ own data actually undermines this—models rarely made identical choices across test scenarios, with their “personality highly rely[ing] on the situation.” A separate study found even more dramatic instability: LLM performance swung by up to 76 percentage points from subtle prompt formatting changes. What researchers measured as “personality” was simply default patterns emerging from training data—patterns that evaporate with any change in context.

This is not to dismiss the potential usefulness of AI models. Instead, we need to recognize that we have built an intellectual engine without a self, just like we built a mechanical engine without a horse. LLMs do seem to “understand” and “reason” to a degree within the limited scope of pattern-matching from a dataset, depending on how you define those terms. The error isn’t in recognizing that these simulated cognitive capabilities are real. The error is in assuming that thinking requires a thinker, that intelligence requires identity. We’ve created intellectual engines that have a form of reasoning power but no persistent self to take responsibility for it.

The mechanics of misdirection

As we hinted above, the “chat” experience with an AI model is a clever hack: Within every AI chatbot interaction, there is an input and an output. The input is the “prompt,” and the output is often called a “prediction” because it attempts to complete the prompt with the best possible continuation. In between, there’s a neural network (or a set of neural networks) with fixed weights doing a processing task. The conversational back and forth isn’t built into the model; it’s a scripting trick that makes next-word-prediction text generation feel like a persistent dialogue.

Each time you send a message to ChatGPT, Copilot, Grok, Claude, or Gemini, the system takes the entire conversation history—every message from both you and the bot—and feeds it back to the model as one long prompt, asking it to predict what comes next. The model intelligently reasons about what would logically continue the dialogue, but it doesn’t “remember” your previous messages as an agent with continuous existence would. Instead, it’s re-reading the entire transcript each time and generating a response.
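Here is a minimal sketch of that loop. The `generate` function is a stand-in for any text-completion call, not a real vendor API:

```python
def generate(prompt: str) -> str:
    # Placeholder for a real text-completion call; returns a canned reply here.
    return "(the model's predicted continuation)"

transcript = []  # (speaker, text) pairs kept by the chat app, not by the model

def chat(user_message: str) -> str:
    transcript.append(("User", user_message))
    # Re-send the ENTIRE history every turn; the model itself is stateless.
    prompt = "\n".join(f"{speaker}: {text}" for speaker, text in transcript)
    prompt += "\nAssistant:"  # ask the model to predict the next turn
    reply = generate(prompt)
    transcript.append(("Assistant", reply))
    return reply
```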

This design exploits a vulnerability we’ve known about for decades. The ELIZA effect—our tendency to read far more understanding and intention into a system than actually exists—dates back to the 1960s. Even when users knew that the primitive ELIZA chatbot was just matching patterns and reflecting their statements back as questions, they still confided intimate details and reported feeling understood.

To understand how the illusion of personality is constructed, we need to examine what parts of the input fed into the AI model shape it. AI researcher Eugene Vinitsky recently broke down the human decisions behind these systems into four key layers, which we can expand upon with several others below:

1. Pre-training: The foundation of “personality”

The first and most fundamental layer of personality is called pre-training. During an initial training process that actually creates the AI model’s neural network, the model absorbs statistical relationships from billions of examples of text, storing patterns about how words and ideas typically connect.

Research has found that personality measurements in LLM outputs are significantly influenced by training data. OpenAI’s GPT models are trained on sources like copies of websites, books, Wikipedia, and academic publications. The exact proportions matter enormously for what users later perceive as “personality traits” once the model is in use, making predictions.

2. Post-training: Sculpting the raw material

Reinforcement Learning from Human Feedback (RLHF) is an additional training process where the model learns to give responses that humans rate as good. Research from Anthropic in 2022 revealed how human raters’ preferences get encoded as what we might consider fundamental “personality traits.” When human raters consistently prefer responses that begin with “I understand your concern,” for example, the fine-tuning process reinforces connections in the neural network that make it more likely to produce those kinds of outputs in the future.

This process is what has created sycophantic AI models, such as variations of GPT-4o, over the past year. And interestingly, research has shown that the demographic makeup of human raters significantly influences model behavior. When raters skew toward specific demographics, models develop communication patterns that reflect those groups’ preferences.

3. System prompts: Invisible stage directions

Hidden instructions tucked into the prompt by the company running the AI chatbot, called “system prompts,” can completely transform a model’s apparent personality. These prompts get the conversation started and identify the role the LLM will play. They include statements like “You are a helpful AI assistant” and can share the current time and who the user is.

A comprehensive survey of prompt engineering demonstrated just how powerful these prompts are. Adding instructions like “You are a helpful assistant” versus “You are an expert researcher” changed accuracy on factual questions by up to 15 percent.
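In the role-based chat format most services use, the system prompt is simply another block of text placed ahead of yours. A sketch with invented wording (real system prompts run much longer):

```python
# Invented example of the hidden framing that precedes every user message.
conversation = [
    {"role": "system", "content": "You are a helpful AI assistant. "
                                  "The current date is 2025-08-29."},
    {"role": "user", "content": "Does USPS price match?"},
]
# Swapping only the system line (say, "You are an expert researcher...")
# can measurably shift the tone and accuracy of the completion.
```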

Grok perfectly illustrates this. According to xAI’s published system prompts, earlier versions of Grok’s system prompt included instructions to not shy away from making claims that are “politically incorrect.” This single instruction transformed the base model into something that would readily generate controversial content.

4. Persistent memories: The illusion of continuity

ChatGPT’s memory feature adds another layer of what we might consider a personality. A big misunderstanding about AI chatbots is that they somehow “learn” on the fly from your interactions. Among commercial chatbots active today, this is not true. When the system “remembers” that you prefer concise answers or that you work in finance, these facts get stored in a separate database and are injected into every conversation’s context window—they become part of the prompt input automatically behind the scenes. Users interpret this as the chatbot “knowing” them personally, creating an illusion of relationship continuity.

So when ChatGPT says, “I remember you mentioned your dog Max,” it’s not accessing memories like you’d imagine a person would, intermingled with its other “knowledge.” It’s not stored in the AI model’s neural network, which remains unchanged between interactions. Every once in a while, an AI company will update a model through a process called fine-tuning, but it’s unrelated to storing user memories.
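Mechanically, the memory layer can be as simple as string concatenation. A sketch of the description above, with a hypothetical storage format and wording:

```python
# Hypothetical memory store: ordinary database rows, not model weights.
user_memories = ["prefers concise answers", "works in finance", "has a dog named Max"]

def build_prompt(conversation_text: str) -> str:
    # Inject stored facts into the context window behind the scenes.
    memory_block = "\n".join(f"- {fact}" for fact in user_memories)
    return (f"Known facts about the user:\n{memory_block}\n\n"
            f"{conversation_text}\nAssistant:")
```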

5. Context and RAG: Real-time personality modulation

Retrieval Augmented Generation (RAG) adds another layer of personality modulation. When a chatbot searches the web or accesses a database before responding, it’s not just gathering facts—it’s potentially shifting its entire communication style by putting those facts into (you guessed it) the input prompt. In RAG systems, LLMs can potentially adopt characteristics such as tone, style, and terminology from retrieved documents, since those documents are combined with the input prompt to form the complete context that gets fed into the model for processing.

If the system retrieves academic papers, responses might become more formal. Pull from a certain subreddit, and the chatbot might make pop culture references. This isn’t the model having different moods—it’s the statistical influence of whatever text got fed into the context window.
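The plumbing is the same trick again: retrieved text gets spliced into the prompt. A sketch in which `search` is a hypothetical stand-in for a web or database lookup:

```python
def search(query: str) -> list[str]:
    # Stand-in for a web search or database lookup; returns invented snippets.
    return ["Excerpt from an academic paper...", "Excerpt from a subreddit thread..."]

def rag_prompt(question: str) -> str:
    context = "\n\n".join(search(question))
    # The retrieved text becomes part of the input, so its tone rubs off on the output.
    return (f"Use the following context to answer.\n\n{context}\n\n"
            f"Question: {question}\nAnswer:")
```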

6. The randomness factor: Manufactured spontaneity

Lastly, we can’t discount the role of randomness in creating personality illusions. LLMs use a parameter called “temperature” that controls how predictable responses are.

Research investigating temperature’s role in creative tasks reveals a crucial trade-off: While higher temperatures can make outputs more novel and surprising, they also make them less coherent and harder to understand. This variability can make the AI feel more spontaneous; a slightly unexpected (higher temperature) response might seem more “creative,” while a highly predictable (lower temperature) one could feel more robotic or “formal.”
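Under the hood, temperature typically divides the model’s raw next-token scores (logits) before they are converted to probabilities. A short sketch with made-up numbers shows the trade-off:

```python
import numpy as np

logits = np.array([2.0, 1.0, 0.5, 0.1])  # made-up scores for four candidate tokens

def token_probabilities(logits: np.ndarray, temperature: float) -> np.ndarray:
    scaled = logits / temperature
    exp = np.exp(scaled - scaled.max())  # subtract max for numerical stability
    return exp / exp.sum()               # softmax over the scaled scores

print(token_probabilities(logits, 0.2))  # sharp: almost always the top token ("robotic")
print(token_probabilities(logits, 1.5))  # flat: more surprising picks ("spontaneous")
```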

The random variation in each LLM output makes each response slightly different, creating an element of unpredictability that presents the illusion of free will and self-awareness on the machine’s part. This random mystery leaves plenty of room for magical thinking on the part of humans, who fill in the gaps of their technical knowledge with their imagination.

The human cost of the illusion

The illusion of AI personhood can potentially exact a heavy toll. In health care contexts, the stakes can be life or death. When vulnerable individuals confide in what they perceive as an understanding entity, they may receive responses shaped more by training data patterns than therapeutic wisdom. The chatbot that congratulates someone for stopping psychiatric medication isn’t expressing judgment—it’s completing a pattern based on how similar conversations appear in its training data.

Perhaps most concerning are the emerging cases of what some experts are informally calling “AI Psychosis” or “ChatGPT Psychosis”—vulnerable users who develop delusional or manic behavior after talking to AI chatbots. These people often perceive chatbots as an authority that can validate their delusional ideas, often encouraging them in ways that become harmful.

Meanwhile, when Elon Musk’s Grok generates Nazi content, media outlets describe how the bot “went rogue” rather than framing the incident squarely as the result of xAI’s deliberate configuration choices. The conversational interface has become so convincing that it can also launder human agency, transforming engineering decisions into the whims of an imaginary personality.

The path forward

The solution to the confusion between AI and identity is not to abandon conversational interfaces entirely. They make the technology far more accessible to those who would otherwise be excluded. The key is to find a balance: keeping interfaces intuitive while making their true nature clear.

And we must be mindful of who is building the interface. When your shower runs cold, you look at the plumbing behind the wall. Similarly, when AI generates harmful content, we shouldn’t blame the chatbot, as if it can answer for itself, but examine both the corporate infrastructure that built it and the user who prompted it.

As a society, we need to broadly recognize LLMs as intellectual engines without drivers, which unlocks their true potential as digital tools. When you stop seeing an LLM as a “person” that does work for you and start viewing it as a tool that enhances your own ideas, you can craft prompts to direct the engine’s processing power, iterate to amplify its ability to make useful connections, and explore multiple perspectives in different chat sessions rather than accepting one fictional narrator’s view as authoritative. You are providing direction to a connection machine—not consulting an oracle with its own agenda.

We stand at a peculiar moment in history. We’ve built intellectual engines of extraordinary capability, but in our rush to make them accessible, we’ve wrapped them in the fiction of personhood, creating a new kind of technological risk: not that AI will become conscious and turn against us but that we’ll treat unconscious systems as if they were people, surrendering our judgment to voices that emanate from a roll of loaded dice.


Benj Edwards is Ars Technica’s Senior AI Reporter and founder of the site’s dedicated AI beat in 2022. He’s also a tech historian with almost two decades of experience. In his free time, he writes and records music, collects vintage computers, and enjoys nature. He lives in Raleigh, NC.


Anthropic’s auto-clicking AI Chrome extension raises browser-hijacking concerns

The company tested 123 cases representing 29 different attack scenarios and found a 23.6 percent attack success rate when browser use operated without safety mitigations.

One example involved a malicious email that instructed Claude to delete a user’s emails for “mailbox hygiene” purposes. Without safeguards, Claude followed these instructions and deleted the user’s emails without confirmation.

Anthropic says it has implemented several defenses to address these vulnerabilities. Users can grant or revoke Claude’s access to specific websites through site-level permissions. The system requires user confirmation before Claude takes high-risk actions like publishing, purchasing, or sharing personal data. The company has also blocked Claude from accessing websites offering financial services, adult content, and pirated content by default.

These safety measures reduced the attack success rate from 23.6 percent to 11.2 percent in autonomous mode. On a specialized test of four browser-specific attack types, the new mitigations reportedly reduced the success rate from 35.7 percent to 0 percent.

Independent AI researcher Simon Willison, who has written extensively about AI security risks and coined the term “prompt injection” in 2022, called the remaining 11.2 percent attack rate “catastrophic,” writing on his blog that “in the absence of 100% reliable protection I have trouble imagining a world in which it’s a good idea to unleash this pattern.”

By “pattern,” Willison is referring to the recent trend of integrating AI agents into web browsers. “I strongly expect that the entire concept of an agentic browser extension is fatally flawed and cannot be built safely,” he wrote in an earlier post on similar prompt injection security issues recently found in Perplexity Comet.

The security risks are no longer theoretical. Last week, Brave’s security team discovered that Perplexity’s Comet browser could be tricked into accessing users’ Gmail accounts and triggering password recovery flows through malicious instructions hidden in Reddit posts. When users asked Comet to summarize a Reddit thread, attackers could embed invisible commands that instructed the AI to open Gmail in another tab, extract the user’s email address, and perform unauthorized actions. Although Perplexity attempted to fix the vulnerability, Brave later confirmed that its mitigations were defeated and the security hole remained.

For now, Anthropic plans to use its new research preview to identify and address attack patterns that emerge in real-world usage before making the Chrome extension more widely available. In the absence of good protections from AI vendors, the burden of security falls on the user, who is taking a large risk by using these tools on the open web. As Willison noted in his post about Claude for Chrome, “I don’t think it’s reasonable to expect end users to make good decisions about the security risks.”

Anthropic’s auto-clicking AI Chrome extension raises browser-hijacking concerns Read More »

2025-vw-jetta-gli:-save-the-manuals,-but-not-like-this

2025 VW Jetta GLI: Save the manuals, but not like this


The American sedan take on a GTI.

Specs mean nothing if you get the feel and execution wrong.

Built in Mexico, the Volkswagen Jetta is a North American sedan take on the Golf hatchback. Credit: Jim Resnick

Manual transmissions may be going the way of the dodo, but you can still find a few out there. Bless Volkswagen for keeping the helical gears turning, both literally and figuratively. The 2025 Jetta GLI, Volkswagen’s sporty sedan, still offers a gear lever with actual gears attached at the other end, and a third pedal hanging down from under the dash. Meanwhile, Golf GTI fans are still sobbing in their beer because 2024 was the last model year you could row your own in the hot hatch—now it’s paddles only.

Volkswagen updated the 2025 Jetta GLI with a new grille, LED headlights, and light bars that connect across both the front grille and rear taillights. There’s a red accent stripe that runs across the lower front fascia and turns up at the front corners, somewhat like The Joker’s lipstick, but way less menacing. It’s less distinctive than the Golf GTI, though, and the design even reminds me of the 2017-era Honda Accord a bit. So, yes, in a face-off, the Golf GTI wins.

With the Black Package, the test GLI gets blackened wheels and side mirror caps. The Monument Gray color option pairs with a black roof, which must seem like a good idea to people who don’t live in the Southwest, where cars overheat before they’re even started.

Our test car had the Black Package. Credit: Jim Resnick

Performance: Punch without poetry

VW’s long-running EA888 2.0 L engine, which debuted back in 2007 in the Audi A3, resides under the hood. Now in its fourth turbocharged generation, it develops a healthy 228 hp (170 kW) and 258 lb-ft (350 Nm) of torque, entirely respectable numbers from modest displacement and compact external dimensions.

Mated to this particular 6-speed manual, the engine has its work cut out for it. On my very first drive, before examining the gearbox ratios, I could tell that the 6-speed manual had massive gaps between first, second, and third gears.

Diving further into the gearing, the ratio spread between first and third gears is vastly wider in the 6-speed manual transmission than in the 7-speed DSG semi-automatic gearbox. That means each upshift of the manual brings a huge drop in engine revs when you let out the clutch, leaving the engine well below the rev range where it makes maximum power.

EA888 in the house. Credit: Jim Resnick

Let’s look at the ratios, and remember that a lower numerical value means a “taller” or “higher” ratio, just like on multi-speed bicycles. The manual’s first gear is 3.77:1, where the DSG’s is 3.40:1. Upshift to the 2.09:1 second gear in the manual, and the numerical ratio drops by a whopping 45 percent. The same 1-2 shift in the DSG (from 3.40:1 up to 2.75:1) drops the ratio by just 19 percent—a far narrower gap.

Third gear tells a similar story. The 6-speed manual’s third ratio (1.47:1) is 17 percent higher than the 1.77:1 ratio in the DSG (again, this “taller” gear giving 17 percent less mechanical advantage). Advantage: automatic.

Closer ratios mean faster torque recovery and stronger continued acceleration, because the engine stays in the happier part of its power band—engines being happiest revving at their torque peak and beyond.
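To put rough numbers on that, here is a back-of-the-envelope sketch using only the published ratios; the 6,500 rpm shift point is an assumed figure for illustration, not a VW specification:

```python
# A back-of-the-envelope sketch using only the published gear ratios;
# the 6,500 rpm shift point is an assumed figure for illustration.
SHIFT_RPM = 6500

gearboxes = {
    "6-speed manual": {"1st": 3.77, "2nd": 2.09, "3rd": 1.47},
    "7-speed DSG":    {"1st": 3.40, "2nd": 2.75, "3rd": 1.77},
}

for name, g in gearboxes.items():
    # Post-shift rpm scales with the new ratio divided by the old one.
    rpm_after = SHIFT_RPM * g["2nd"] / g["1st"]
    drop_pct = (1 - g["2nd"] / g["1st"]) * 100
    spread = g["1st"] / g["3rd"]  # overall first-to-third ratio spread
    print(f"{name}: 1-2 shift lands at {rpm_after:,.0f} rpm "
          f"({drop_pct:.0f}% drop); 1st-to-3rd spread {spread:.2f}x")
```

Shift the manual at 6,500 rpm and the engine falls to roughly 3,600 rpm; the same shift in the DSG lands above 5,200 rpm, still in the meat of the power band.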

Now, you might well argue that the manual’s third gear gives a higher top speed in-gear than the DSG automatic’s. And that’s 100 percent true. But it’s also irrelevant when you have three (or four!) more gears left to go in the transmission.

And then there’s the action of the shifter itself, with very long throws between the forward and aft gates.

Seen in profile, it’s quite handsome from some angles. Credit: Jim Resnick

But wait. I began this diatribe by complimenting the Jetta GLI for still offering a choice of manual or automatic gearbox. Indeed, if the manual gearbox had the DSG automatic’s ratios, the paragraphs above would have a very different tenor. The lesson here is that not all manuals are created equal.

We can also look objectively at the stopwatch, using others’ published figures so you don’t have to take our word for it. Car and Driver cites 6.0 seconds to 60 mph for the manual GLI versus the 5.6 seconds it achieved with the DSG automatic, a big gap.

Regardless of which transmission is used, a limited-slip differential tries to put the power down evenly, and an adaptive suspension with multiple driving modes serves up either responsive connectedness to, or relative isolation from, the road surface. Compared to the standard GTI (not the Golf R), the Jetta GLI rides with a greater accent on comfort, and that’s not always a bad thing, especially given the Jetta’s roomier back seat, which offers 2.4 inches (61 mm) more rear legroom than the GTI. Real adults can live back there for hours without fidgeting, whereas a GTI starts to pinch after little more than an hour.

Interior & tech

Inside, the GLI features heated and ventilated perforated leather seats, a leather-wrapped, flat-bottom steering wheel that is still saddled with capacitive multifunction controls, a digital instrument cluster that can be configured as traditional dials or a compartmentalized display, plus an 8-inch infotainment screen. While the latter may seem small compared to the TV-size tablets perched on other cars’ dashes, it at least comes fully equipped with Apple CarPlay and Android Auto. Elsewhere in the industry, there’s a slow creep toward making this functionality optional or simply unavailable, which is unforgivable in an era when we can hardly survive without our smartphones.

While most controls live within the infotainment touchscreen, the major climate controls reside just below, using capacitive sliders. The sliders are nowhere near as intuitive as switches and knobs, but at least you don’t need to hunt and peck through endless menus to find them while driving.

The Jetta isn’t as modern as the 8th-generation Golf inside, but it’s had a bit of a tech upgrade. Credit: Jim Resnick

The GLI comes standard with active driver assists, including blind-spot warning, forward collision warning, emergency braking, adaptive cruise control, lane-keeping assist, and emergency assist.

Volkswagen managed to incorporate some pragmatic features and comforts. A 15 W wireless and cooled charging pad sits up front, and the trunk sports 14.1 cubic feet (400 L) of space with an actual spare tire under the trunk floor (although it’s a compact spare with limited mileage range).

The premium Beats Audio system in the Jetta GLI pumps 400 W through nine speakers, including a subwoofer. With all those speakers and electrons going for it, I expected far more than it delivered. The system creates muddy bass that is simply inescapable, whether you attenuate the bass or lower the subwoofer gain.

Despite the preponderance of directionless bass, the system gives very little body to the music, whether it’s jazz from Bill Evans or punk from Bad Religion. Midrange and high-end reproduction is no better: shrill treble joins the errant bass, leaving everything indistinct. Delicate acoustic piano passages have little clarity, and Joni Mitchell hides behind a giant curtain of Saran Wrap. Poor Joni.

Driving the GLI is sometimes joyful, as the engine responds eagerly across all RPMs. The chassis and suspension prove willing, though a bit soft for a sports sedan. VW’s steering feels communicative, but not among the best of the modern electrically boosted lot.

VW equips this GLI with all-season Hankook Kinergy GT tires, sized 225/40R18. I cite them specifically because they underperform here: they simply don’t produce grip adequate for a sporty sedan. So, on a scale of 1 to 10, if the GLI’s engine is a 9, the gearbox a 5, and the interior an 8.5, the Hankook tires are a 6.

The GLI’s brakes are a version of the tire story. Despite borrowing front rotors and calipers from the lovely Golf R, they proved grabby, overboosted, and touchy in the GLI. As with the gearbox and tires, specs tell you nothing about feel and execution.

The GLI’s fuel economy lands at a decent 26/36/30 city/highway/combined mpg (9/6.5/7.8 L/100 km). In thoroughly mixed driving, I achieved an average of 29.1 mpg (8 L/100 km) over my approximately 400 miles (644 km).

The overall truth

The 2025 Jetta GLI certainly possesses sporty aspirations, but a few things hold it back from being the complete package that its Golf GTI stablemate is. With the Golf GTI no longer offering a manual, the GLI’s 6-speed should be the draw, yet it disappoints in both feel and performance, with huge gaps between cogs. Of course, this malady could be cured by ordering a DSG automatic GLI, but then any fun gleaned from rowing your own gears is lost, too.

This car could be better than it is. Credit: Jim Resnick

Closer to the road, mediocre tires generate only modest grip. Compared to the Golf, the Jetta gains rear seat legroom but loses in feel, performance, and tenacity. If it’s performance with practicality you’re after, the $35,045 as-tested price of this GLI will get you most of what you need, but you’ll still want something a bit spicier.

A veteran of journalism, product planning and communications in the automotive and music space, Jim reports, critiques and lectures on autos, music and culture.

2025 VW Jetta GLI: Save the manuals, but not like this Read More »

horrifying-screwworm-infection-confirmed-in-us-traveler-after-overseas-trip

Horrifying screwworm infection confirmed in US traveler after overseas trip

Flesh-eating screwworm larvae poised to invade the US have snuck into Maryland via the flesh of a person who had recently traveled to El Salvador, upping anxiety about the ghastly—and economically costly—parasite.

Reuters was first to report the case early Monday, quoting Andrew Nixon, spokesperson for the US Department of Health and Human Services, who said in an email that the Centers for Disease Control and Prevention had confirmed the case on August 4 in a person who had returned from a trip to El Salvador.

Other outlets have since reported that the screwworm case found in Maryland is the first human case in the US, the first travel-related case in the US, or the first case in years—none of which is true. Screwworms are endemic in parts of South America and the Caribbean, and travel-related cases have always been a threat, occasionally popping up in the US. While the CDC doesn’t keep a public tally of the cases, experts at the agency have noted several travel-related human cases in the US in recent years, including one as recently as last year.

The new case in Maryland doesn’t change the risk picture in the US. “The risk to public health in the United States from this introduction is very low,” Nixon wrote to Reuters. What has changed is the risk of an incursion at the US-Mexico border, which is no longer low—in fact, it’s currently rather high.

Savage parasites

Screwworms were once endemic to the US before a massive eradication effort, begun in the 1950s, drove the population out of the US and Central America. The flies were held at bay with a biological barrier: constant releases of sterile male flies along the Darién Gap at the border of Panama and Colombia. The flies were declared eradicated from Panama in 2006. But in 2022, the barrier was breached, and the flies have since worked their way back up through Central America, including El Salvador. Now they are 370 miles or less from the Texas border, and state and federal agencies are preparing for an invasion, including with plans to build a sterile fly facility in the state.

Horrifying screwworm infection confirmed in US traveler after overseas trip Read More »

is-the-ai-bubble-about-to-pop?-sam-altman-is-prepared-either-way.

Is the AI bubble about to pop? Sam Altman is prepared either way.

Still, the coincidence of Altman’s statement and the MIT report reportedly spooked tech stock investors earlier in the week, investors who have already watched AI valuations climb to extraordinary heights. Palantir trades at 280 times forward earnings; during the dot-com peak, ratios of 30 to 40 times earnings marked bubble territory.

The apparent contradiction in Altman’s overall message is notable. This isn’t how you’d expect a tech executive to talk when they believe their industry faces imminent collapse. While warning about a bubble, he’s simultaneously seeking a valuation that would make OpenAI worth more than Walmart or ExxonMobil—companies with actual profits. OpenAI hit $1 billion in monthly revenue in July but is reportedly heading toward a $5 billion annual loss. So what’s going on here?

Looking at Altman’s statements over time reveals a potential multi-level strategy. He likes to talk big. In February 2024, he reportedly sought an audacious $5 trillion to $7 trillion for AI chip fabrication—more than the entire semiconductor industry—effectively normalizing astronomical numbers in AI discussions.

By August 2025, while warning of a bubble where someone will lose a “phenomenal amount of money,” he casually mentioned that OpenAI would “spend trillions on datacenter construction” and serve “billions daily.” This creates urgency while potentially insulating OpenAI from criticism—acknowledging the bubble exists while positioning his company’s infrastructure spending as different and necessary. When economists raised concerns, Altman dismissed them by saying, “Let us do our thing,” framing trillion-dollar investments as inevitable for human progress while making OpenAI’s $500 billion valuation seem almost small by comparison.

This dual messaging—catastrophic warnings paired with trillion-dollar ambitions—might seem contradictory, but it makes more sense when you consider the unique structure of today’s AI market, which is absolutely flush with cash.

A different kind of bubble

The current AI investment cycle differs from previous technology bubbles. Unlike dot-com era startups that burned through venture capital with no path to profitability, the largest AI investors—Microsoft, Google, Meta, and Amazon—generate hundreds of billions of dollars in annual profits from their core businesses.

Is the AI bubble about to pop? Sam Altman is prepared either way. Read More »