Author name: Rejus Almole

what-if-the-aliens-come-and-we-just-can’t-communicate?

What if the aliens come and we just can’t communicate?


Ars chats with particle physicist Daniel Whiteson about his new book Do Aliens Speak Physics?

Science fiction has long speculated about the possibility of first contact with an alien species from a distant world and how we might be able to communicate with them. But what if we simply don’t have enough common ground for that to even be possible? An alien species is bound to be biologically very different, and their language will be shaped by their home environment, broader culture, and even how they perceive the universe. They might not even share the same math and physics. These and other fascinating questions are the focus of an entertaining new book, Do Aliens Speak Physics? And Other Questions About Science and the Nature of Reality.

Co-author Daniel Whiteson is a particle physicist at the University of California, Irvine, who has worked on the ATLAS collaboration at CERN’s Large Hadron Collider. He’s also a gifted science communicator who previously co-authored two books with cartoonist Jorge Cham of PhD Comics fame: 2018’s We Have No Idea and 2021’s Frequently Asked Questions About the Universe. (The pair also co-hosted a podcast from 2018 to 2024, Daniel and Jorge Explain the Universe.) This time around, cartoonist Andy Warner provided the illustrations, and Whiteson and Warner charmingly dedicate their book to “all the alien scientists we have yet to meet.”

Whiteson has long been interested in the philosophy of physics. “I’m not the kind of physicist who’s like, ‘whatever, let’s just measure stuff,’” he told Ars. “The thing that always excited me about physics was this implicit promise that we were doing something universal, that we were learning things that were true on other planets. But the more I learned, the more concerned I became that this might have been oversold. None are fundamental, and we don’t understand why anything emerges. Can we separate the human lens from the thing we’re looking at? We don’t know in the end how much that lens is distorting what we see or defining what we’re looking at. So that was the fundamental question I always wanted to explore.”

Whiteson initially pitched his book idea to his 14-year-old son, who inherited his father’s interest in science. But his son thought Whiteson’s planned discussion of how physics might not be universal was, well, boring. “When you ask for notes, you’ve got to listen to them,” said Whiteson. “So I came back and said, ‘Well, what about a book about when aliens arrive? Will their science be similar to ours? Can we collaborate together?’” That pitch won the teen’s enthusiastic approval: same ideas, but couched in the context of alien contact to make them more concrete.

As for Warner’s involvement as illustrator, “I cold-emailed him and said, ‘Want to write a book about aliens? You get to draw lots of weird aliens,’” said Whiteson. Who could resist that offer?

Ars caught up with Whiteson to learn more.

cartoon exploring possible outcomes of first alien contact

Credit: Andy Warner

Ars Technica: You open each chapter with fictional hypothetical scenarios. Why?

Daniel Whiteson: I sent an early version of the book to my friend, [biologist] Matt Giorgianni, who appears in cartoon form in the book. He said, “The book is great, but it’s abstract. Why don’t you write out a hypothetical concrete scenario for each chapter to show us what it would be like and how we would stumble.”

All great ideas seem obvious once you hear them. I’ve always been a huge science fiction fan. It’s thoughtful and creative and exploratory about the way the universe could be and might be. So I jumped at the opportunity to write a little bit of science fiction. Each one was challenging because you have to come up with a specific example that illustrates the concepts in that chapter, the issues that you might run into, but also be believable and interesting. We went through a lot of drafts. But I had a lot of fun writing them.

Ars Technica: The Voyager Golden Record is perhaps the best known example of humans attempting to communicate with an alien species, spearheaded by the late Carl Sagan, among others. But what are the odds that, despite our best efforts, any aliens will ever be able to decipher our “message in a bottle”?

Daniel Whiteson: I did an informal experiment where I printed out a picture of the Pioneer plaque and showed it to a bunch of grad students who were young enough to not have seen it before. This is Sagan’s audience: biological humans, same brain, same culture, physics grad students—none of them had any idea what any of that was supposed to refer to. NASA gave him two weeks to come up with that design. I don’t know that I would’ve done any better. It’s easy to criticize.

Those folks, they were doing their best. They were trying to step away from our culture. They didn’t use English, they didn’t even use mathematical symbols. They understood that those things are arbitrary, and they were reaching for something they hoped was going to be universal. But in the end, nothing can be universal because language is always symbols, and the choice of those symbols is arbitrary and cultural. It’s impossible to choose a symbol that can only be interpreted in one way.

Fundamentally, the book is trying to poke at our assumptions. It’s so inspiring to me that the history of physics is littered with times when we have had to abandon an assumption that we clung to desperately, until we were shown otherwise with enough data. So we’ve got to be really open-minded about whether these assumptions hold true, whether it’s required to do science, to be technological, or whether there is even a single explanation for reality. We could be very well surprised by what we discover.

Ars Technica: It’s often assumed that math and physics are the closest thing we have to a universal language. You challenge that assumption, probing such questions as “What does it even mean to ‘count’?”

Daniel Whiteson: At an initial glance, you’re like, well, of course mathematics is required, and of course numbers are universal. But then you dig into it and you start to realize there are fuzzy issues here. So many of the assumptions that underlie our interpretation of what we learned about physics are that way. I had this experience that’s probably very common among physics undergrads in quantum mechanics, learning about those calculations where you see nine decimal places in the theory and nine decimal places in the experiment, and you go, “Whoa, this isn’t just some calculational tool. This is how the universe decides what happens to a particle.”

I literally had that moment. I’m not a religious person, but it was almost spiritual. For many years, I believed that deeply, and I thought it was obvious. But to research this book, I read Science Without Numbers by Hartry Field. I was lucky—here at Irvine, we happen to have an amazing logic and philosophy of science department, and those folks really helped me digest [his ideas] and understand how you can pull yourself away from things like having a number line. It turns out you don’t need numbers; you can just think about relationships. It was really eye-opening, both how essential mathematics seems to human science and how obvious it is that we’re making a bunch of assumptions that we don’t know how to justify.

There are dotted lines humans have drawn because they make sense to us. You don’t have to go to plasma swirls and atmospheres to imagine a scenario where aliens might not have the same differentiation between their identities, me and you, here’s where I end, and here’s where you begin. That’s complicated for aliens that are plasma swirls, but also it’s not even very well-defined for us. How do I define the edge of my body? Is it where my skin ends? What about the dead skin, the hair, my personal space? There’s no obvious definition. We just have a cultural sense for “this is me, this is not me.” In the end, that’s a philosophical choice.

Cartoon of an alien emerging from a spaceship

Credit: Andy Warner

Ars Technica: You raise another interesting question in the book: Would aliens even need physics theory or a deeper understanding of how the universe works? Perhaps they could invent, say, warp drive through trial and error. You suggest that our theory is more like a narrative framework. It’s the story we tell, and that is very much prone to bias.

Daniel Whiteson: Absolutely. And not just bias, but emotion and curiosity. We put energy into certain things because we think they’re important. Physicists spend our lives on this because we think, among the many things we could spend our time on, this is an important question. That’s an emotional choice. That’s a personal subjective thing.

Other people find my work dead boring and would hate to have my life, and I would feel the same way about theirs. And that’s awesome. I’m glad that not everybody wants to be a particle physicist or a biologist or an economist. We have a diversity of curiosity, which we all benefit from. People have an intuitive feel for certain things. There’s something in their minds that naturally understands how the system works and reflects it, and they probably can’t explain it. They might not be able to design a better car, but they can drive the heck out of it.

This is maybe the biggest stumbling block for people who are just starting to think about this for the first time. “Obviously aliens are scientific.” Well, how do we know? What do we mean by scientific? That concept has evolved over time, and is it really required? I felt that that was a big-picture thing that people could wrap their minds around but also a shock to the system. They were already a little bit off-kilter and might realize that some things they assumed must be true maybe don’t have to be.

Ars Technica: You cite the 2016 film Arrival as an example of first contact and the challenge of figuring out how to communicate with an alien species. They had to learn each other’s cultural context before they had any chance of figuring out what their respective symbols meant. 

Daniel Whiteson:  I think that is really crucial. Again, how you choose to represent your ideas is, in a sense, arbitrary. So if you’re on the other side of that, and you have to go from symbols to ideas, you have to know something about how they made those choices in order to reverse-engineer a message, in order to figure out what it means. How do you know if you’ve done it correctly? Say we get a message from aliens and we spend years of supercomputer time cranking on it. Something we rely on for decoding human messages is that you can tell when you’ve done it right because you have something that makes sense.

Cartoon illustration

Credit: Andy Warner

How do we know when we’ve done that for an alien text? How do you know the difference between nonsense and things that don’t yet make sense to you? It’s essentially impossible. I spoke to some philosophers of language who convinced me that if we get a message from aliens, it might be literally impossible to decode it without their help, without their context. There aren’t enough examples of messages from aliens. We have the “Wow!” signal—who knows what that means? Maybe it’s a great example of getting a message and having no idea how to decode it.

As in many places in the book, I turned to human history because it’s the one example we have. I was shocked to discover how many human languages we haven’t decoded. Not to mention, we haven’t decoded whale. We know whales are talking to each other. Maybe they’re talking to us and we can’t decode it. The lesson, again and again, is culture.

With the Egyptians, the only reason we were able to decode hieroglyphics is because we had the Rosetta Stone, and even then it took 20 years. How does it take 20 years when you have examples and you know what it’s supposed to say? And the answer is that we made cultural assumptions. We assumed pictograms reflected the ideas in the pictures. If there’s a bird in it, it’s about birds. And we were just wrong. That’s maybe why we’ve been struggling with Etruscan and other languages we’ve never decoded.

Even when we do have a lot of culture in common, it’s very, very tricky. So the idea that we would get a message from space and have no cultural clues at all and somehow be able to decode it—I think that only works in the scenario where aliens have been listening to us for a long time and they’re essentially writing to us in something like our language. I’m a big fan of SETI. They host these conferences where they listen to philosophers and linguists and anthropologists. I don’t mean to say that they’ve thought too narrowly, but I think it’s not widely enough appreciated how difficult it is to decode another language with no culture in common.

Ars Technica: You devote a chapter to the possibility that aliens might be able to, say, “taste” electrons. That drives home your point that physiologically, they will probably be different, biologically they will be different, and their experiences are likely to be different. So their notion of culture will also be different.

Daniel Whiteson: Something I really wanted to get across was that perception determines your sense of intuition, which defines in many ways the answers you find acceptable. I read Ed Yong’s amazing book, An Immense World, about animal perception, the diversity of animal experience or animal interaction with the environment. What is it like to be an octopus with a distributed mind? What is it like to be a bird that senses magnetic fields? If you’re an alien and you have very different perceptions, you could also have a different intuitive language in which to understand the universe. What is a photon? Is it a particle? Is it a wave?  Maybe we just struggle with the concept because our intuitive language is limited.

For an alien that can see photons in superposition, this is boring. They could be bored by things we’re fascinated by, or we could explain our theories to them and they could be unsatisfied because to them, it doesn’t translate into their intuitive language. So even though we’ve transcended our biological limitations in many ways, we can detect gravitational waves and infrared photons, we’re still, I think, shaped by those original biological senses that frame our experience, the models we build. What it’s like to be a human in the world could be very, very different from what it’s like to be an alien in the world.

Ars Technica:  Your very last line reads, “Our theories may reveal the patterns of our thoughts as much as the patterns of nature.” Why did you choose to end your book on that note?

Daniel Whiteson: That’s a message to myself. When I started this journey, I thought physics is universal. That’s what makes it beautiful. That implies that if the aliens show up and they do physics in a very different way, it would be a crushing disappointment. Not only is what we’ve learned just another human science, but also it means maybe we can’t benefit from their work or we can’t have that fantasy of a galactic scientific conference. But that actually might be the best-case scenario.

I mean, the reason that you want to meet aliens is the reason you go traveling. You want to expand your horizons and learn new things. Imagine how boring it would be if you traveled around the world to some new country and they just had all the same food. It might be comforting in some way, but also boring. It’s so much more interesting to find out that they have spicy fish soup for breakfast. That’s the moment when you learn not just about what to have for breakfast but about yourself and your boundaries.

So if aliens show up and they’re so weirdly different in a way we can’t possibly imagine—that would be deeply revealing. It’ll give us a chance to separate the reality from the human lens. It’s not just questions of the universe. We are interesting, and we should learn about our own biases. I anticipate that when the aliens do come, their culture and their minds and their science will be so alien, it’ll be a real challenge to make any kind of connection. But if we do, I think we’ll learn a lot, not just about the universe but about ourselves.

Photo of Jennifer Ouellette

Jennifer is a senior writer at Ars Technica with a particular focus on where science meets culture, covering everything from physics and related interdisciplinary topics to her favorite films and TV series. Jennifer lives in Baltimore with her spouse, physicist Sean M. Carroll, and their two cats, Ariel and Caliban.

What if the aliens come and we just can’t communicate? Read More »

civil-war-is-brewing-in-the-wasteland-in-fallout-s2-trailer

Civil war is brewing in the wasteland in Fallout S2 trailer

Purnell, Goggins, MacLachlan, and Moten all return for S2, along with Moises Arias as Lucy’s younger brother, Norm, and Frances Turner as Barb Howard, Cooper’s wife and a high-ranking Vault-Tec executive. Justin Theroux joins the S2 cast as Mr. Robert House, founder and CEO of RobCo Industries, as well as Macaulay Culkin (possibly playing Caesar) and Kumail Nanjiani, both of whom appear briefly in the new trailer.

The Ghoul (Walton Goggins) has been searching for his family for 200 years. YouTube/Prime Video

The trailer opens with Maximus chatting with another denizen of the wasteland, who insists that while he’s seen lots of crazy and unnatural things in his struggle to survive, he’s never seen good people. Maximus has met one good person: Lucy. But his acquaintance isn’t having it. “I would be a good person too if I grew up in some cozy impenetrable home,” he says. “Wouldn’t have to steal and stab and fib all the time just to get by.”

Lucy, of course, has been challenged to hang onto her fundamental decency while navigating the brutal surface world, hardening just enough to do what’s necessary to survive. She’s now looking for her father with a new motive: to bring him to justice, “so people know that how they conduct themselves matters, and they don’t give up hope.” (We catch a few glimpses of Hank, most notably experimenting on a mouse in the lab, with disastrous results.) The Ghoul, for his part, is looking for his family; it’s the only reason he’s hung around for 200 years. Meanwhile, civil war is brewing, and you just know our main cast will all end up caught up in the conflict.

The second season of Fallout premieres on December 17, 2025, on Prime Video.

Civil war is brewing in the wasteland in Fallout S2 trailer Read More »

the-pope-offers-wisdom

The Pope Offers Wisdom

The Pope is a remarkably wise and helpful man. He offered us some wisdom.

Yes, he is generally playing on easy mode by saying straightforwardly true things, but that’s meeting the world where it is. You have to start somewhere.

Some rejected his teachings.

Two thousand years after Jesus famously got nailed to a cross for suggesting we all be nice to each other for a change, Pope Leo XIV issues a similarly wise suggestion.

Pope Leo XIV: Technological innovation can be a form of participation in the divine act of creation. It carries an ethical and spiritual weight, for every design choice expresses a vision of humanity. The Church therefore calls all builders of #AI to cultivate moral discernment as a fundamental part of their work—to develop systems that reflect justice, solidarity, and a genuine reverence for life.

The world needs honest and courageous entrepreneurs and communicators who care for the common good. We sometimes hear the saying: “Business is business!” In reality, it is not so. No one is absorbed by an organization to the point of becoming a mere cog or a simple function. Nor can there be true humanism without a critical sense, without the courage to ask questions: “Where are we going? For whom and for what are we working? How are we making the world a better place?”

Pope Leo XIV (offered later, not part of the discussion in the rest of the post): Technological development has brought, and continues to bring, significant benefits to humanity. In order to ensure true progress, it is imperative that human dignity and the common good remain resolute priorities for all, both individuals and public entities. We must never forget the “ontological dignity that belongs to the person as such simply because he or she exists and is willed, created, and loved by God” (Dignitas Infinita, 7).

Roon: Pope Leo I think you would enjoy my account.

Liv Boeree: Well said. Also, it’s telling to observe which technologists (there’s one in particular) who just openly mocked you in the quote tweets for saying this.

Grimes: This is so incredibly lit. This is one of the most important things ever said.

Rohit: Quite cool that amongst the world leaders it’s the Pope that has the best take on dealing with AI.

The one in particular who had an issue with the Pope’s statement is exactly the one you would guess: Marc Andreessen.

For those who aren’t aware of the image, my understanding of the image below, its context, and its intended meaning is that it is a selected-to-be-maximally-unflattering picture of GQ features director Katherine Stoeffel, from her interview of Sydney Sweeney, chosen to contrast with Sydney Sweeney looking in command and hot.

During the otherwise very friendly interview, Stoeffel asked Sweeney a few times, in earnest fashion, about concerns and responses around her ad for jeans, presumably signaling to such ilk that she is woke. Sweeney notes that she disregards internet talk, doesn’t check for it, and doesn’t let it get to her, and otherwise dodged artfully at one point in particular (a really excellent response all around), thus proving to such ilk that she is based.

Also potentially important context I haven’t seen mentioned before: Katherine Stoeffel reads to me in this interview as having autistic mannerisms. Once you see the meme in this light you simply cannot unsee it or think it’s not central to what’s happening here, and it makes the whole thing another level of not okay and I’m betting other people saw this too and are similarly pissed off about it.

Not cool. Like, seriously, don’t use this half of the meme.

It’s not directly related, but if you do watch the interview, notice how much Sydney actually pauses to think about her answers, watch her eye movements and willingness to allow silence, and also take her advice and leave your phone at home. This also means that in the questions where Sydney doesn’t pause to think, it hits different. She’s being strategic but also credibly communicating. The answer everyone remembers, in context, is thus even more clearly something she had prepped and gamed out.

Here for reference is the other half of the meme, from the interview:

Daniel Eth: Andreessen is so dogmatically against working on decreasing risks from AI that he’s now mocking the pope for saying tech innovation “carries an ethical and spiritual weight” and that AI builders should “cultivate moral discernment as a fundamental part of their work”

Daniel Eth (rest of block quote is him):

The pope: “you should probably be a good person”

Marc Andreessen (as imagined by Daniel Eth): “this is an attack on me and everything I stand for”

Yes, I do think ‘you should probably be a good person’ in the context of a Pope is indeed against everything that Marc Andreessen stands for, straight up.

Is that an ‘uncharitable’ take? Yes, but I am confident it is an accurate one, and no, this type of systematic mockery, performative cruelty, vice signaling, and attempted cyberbullying does not come out of some sort of legitimate general welfare concern.

Marc Andreessen centrally has a strong position on cultivating moral discernment.

Which is the same as his position on not mocking those who support morality or moral discernment.

Which is that he is, in the strongest possible terms, against these things.

Near: we’re in the performative cruelty phase of silicon valley now

if we get an admin change in 2028 and tech is regulated to a halt b/c the american people hate us well this would not surprise me at all tbh. hence the superpacs i suppose.

pmarca deleted all the tweets now, the pope reigns victorious once again.

Roon: this is actually the end stage of what @nearcyan is describing, an age dominated by vice signaling, a nihilistic rw voice that says well someone gave me an annoying moral lecture in the past, so actually virtue is obsolete now.

bring back virtue signaling. the alternative is worse. at least pretend you are saving the world

near: first time ive ever seen him delete an entire page of tweets.

Daniel Eth: Tbf Andreessen is a lot more cruel than the typical person in SV

Scott Alexander: List of Catholic miracles that make me question my atheism:

  1. Sun miracle of Fatima

  2. Joan of Arc

  3. Pope making Marc Andreessen shut up for one day.

Yosarian: It’s also amazing that Marc got into a fight with the Pope only a few weeks after you proved he’s the Antichrist

Scott Alexander: You have seen and believed; blessed are those who believed without seeing.

It turns out that yes, you can go too far around these parts of Twitter.

TBPN (from a 2 min clip discussing mundane AI dangers): John: I don’t think you should just throw “decel” at someone who’s identifying a negative externality of a new technology. That’s not necessarily decelerationist.

Dean Ball: It’s heartening to see such eminently reasonable takes making their way into prominent, pro-tech new media.

There is something very “old man” about the terms of AI policy debates, and perhaps this is because it has been largely old people who have set them.

When I hear certain arguments about how we need to “let the innovators innovate” and thus cannot bear the thought of resolving negative externalities from new technology, it just sounds retro to me. Like an old man dimly remembering things he believes Milton Friedman once said.

Whereas anyone with enough neuroplasticity remaining to internalize a concept as alien as “AGI” can easily see that such simple frames do not work.

It is as heartening to see pro-tech new media say this out loud as it is disheartening to realize that it needed to be said out loud.

A reasonable number of people finally turned on Marc for this.

Spor: I kinda can’t believe all it took was a bad meme swipe against the Pope for legit all of tech Twitter to turn against Marc Andreessen

There’s a lesson in there somewhere

To be fair, most of it came from his response, at least I think

If he doesn’t go around unfollowing and blocking everybody and deleting tweets, it’s a nothingburger

Daniel Eth: I think the thing is everyone already hated Andreessen, and it’s just that before a critical mass of dissent appeared people in tech were scared of criticizing him. And this event just happened to break that threshold

Feels like the dam has broken on people in the tech community airing grievances with Andreessen. Honestly makes me feel better about the direction of the tech community writ large

Matthew Griisser: You come at the King, you best not miss.

Jeremiah Johnson considers Marc Andreessen as an avatar for societal delay as he goes over the events discussed in this section.

Jeremiah Johnson: Now, I don’t want to say that Marc Andreessen is single-handedly causing the collapse of society. That would be a slight exaggeration. But I do believe this one incident shows that Andreessen is at the nexus of all the ways society is decaying. He’s not responsible. He’s just representative. He’s the mascot of decay, the guy who’s always there when the negative trends accelerate, cheering them on and pushing things downhill faster.

I mean, yeah, fair. That’s distinct from his ‘get everyone killed by ASI’ crusades, but the same person doing both is not a coincidence.

And no, he doesn’t make up for this with the quality of his technical takes, as Roon says he has been consistently wrong about generative AI.

As a reminder, here is Dwarkesh Patel’s excellent refutation of Marc Andreessen’s ‘Why AI Will Save the World,’ its name calling, and its many nonsensical arguments. Liron Shapira has a less measured thread that also points out many rather terrible arguments that were made and then doubled down upon and repeated.

Daniel Eth: This whole Andreessen thing is a good reminder that you shouldn’t confuse vice with competence. Just because the guy is rude & subversive does not mean that he has intelligent things to say.

Marc’s position really has reliably been, as explicitly stated as the central thesis of The Techno-Optimist Manifesto, that technological advancement is always universally good and anyone who points to any downside is explicitly his ‘enemy.’

There were a few defenders, and their defenses… did not make things better.

Dan Elton: I’m a bit dismayed to see @pmarca bowing to anti free speech political correctness culture and deleting a bunch of funny light hearted tweets. The pope is so overrated. If we can’t make a joke about the pope, is there anything left we are allowed to laugh at?

That is very much not what happened here, sir. Nor was it about joking about the pope. This was in no way about ‘free speech,’ or about lighthearted joking, it was about being performatively mean and hostile to the very idea of responsibility, earnestness or goodness, even in the theoretical. And it was about the systematic, intentional conflation of hostility with such supposed joking.

(Perhaps the deleted AI video of the pope was funny? I don’t know, didn’t see it.)

Marc nuked several full days of posts in response to all this, which seems wise. It’s a good way to not have to make a statement about exactly what you’re deleting, plays it safe, resets the board.

Mike Solana: my sense is the number of people who 1) furiously defended the pope last night and then 2) went to mass this morning is probably close to zero. status games.

Joe Weisenthal: Got em.

Very obviously: We’re supporting the Pope here because he took the trust and power and yes holiness vested in him and spoke earnestly, wisely and with moral clarity, saying obviously true things, and we agree with him, and then he was mocked for it exactly because of all these things. Not because we’re Catholics.

There’s this mindset that you can’t possibly be actually standing up for ideas or principles or simply goodness. It must be some status game, or some political fight, or something. No, it isn’t.

Marc also went on an unfollow binge, including Bayes and Grimes, which is almost entirely symbolic since he (still) follows 27,700 people.

I feel good about highlighting his holiness the Pope’s wise message.

I feel bad about spending the bulk of an entire post pointing out that someone has consistently been acting as a straight up horrible person. Ideally one would never do this. But this is pretty important context within the politics of the AI space and also for anyone evaluating venture capital funding from various angles.

Remember: when people tell you who they are, believe them.

So I wanted to ensure there was a reference for these events we can link back to, and also to remember that yes if you go sufficiently too far people eventually notice.

Discussion about this post

The Pope Offers Wisdom Read More »

audi-goes-full-minimalism-for-its-first-ever-formula-1-livery

Audi goes full minimalism for its first-ever Formula 1 livery


Audi says it wants to be an F1 title contender by 2030.

The actual bodywork of Audi’s R26 won’t look identical to this show car, but the livery should, with the addition of the sponsors, obviously. Credit: Jonathan Gitlin

MUNICH, Germany—Audi’s long-awaited Formula 1 team gave the world a first look at how the Audi R26 will appear when it takes to the track next year. Well, sort of—the car you see here is a generic show car for the 2026 aero regulations, but the livery you see, plus the sponsors’ logos, will race next year.

“By entering the pinnacle of motorsport, Audi is making a clear, ambitious statement. It is the next chapter in the company’s renewal. Formula 1 will be a catalyst for the change towards a leaner, faster, and more innovative Audi,” said Gernot Döllner, Audi’s CEO. “We are not entering Formula 1 just to be there. We want to win. At the same time, we know that you don’t become a top team in Formula 1 overnight. It takes time, perseverance, and tireless questioning of the status quo. By 2030, we want to fight for the World Championship title,” Döllner said.

After the complicated liveries of cars like the R18 or Audi’s Formula E program, the R26 is refreshingly simple. Jonathan Gitlin

I’ll admit, when I first saw the images Audi sent ahead of time, I was a little underwhelmed, but in person, as you approach it from different angles, it makes a lot more sense. The design is more than a little minimalist, juxtaposing straight-edged geometric blocks of color with the aerodynamically curved bodywork they adorn. The titanium color references Audi’s latest concept car, and the red—which is almost fluorescent in person—is an all-new shade called Audi Red. It’s used to highlight the car’s various air intakes and looks really quite effective.

Why F1?

After a long and glorious history in sportscar racing and rallying before that, Audi’s motorsports activities virtually evaporated in the wake of dieselgate and then a brief Formula E program. Then in early 2022, Volkswagen Group revealed that after decades of “will they, won’t they” speculation, not one but two of its brands—Audi and Porsche—would be entering F1 in 2026. (The Porsche deal with Red Bull would later fall apart.)

The sport was already riding its post-COVID popularity surge, and a new technical ruleset for 2026 was written in large part to attract automakers like Audi, dropping one of the current hybrid energy recovery systems (the complex turbo-based MGU-H) in favor of a much more powerful electric motor (the MGU-K), and switching to synthetic fuels that must have at least 65 percent lower carbon emissions than fossil fuels.

In August 2022, Audi confirmed a powertrain program that would be developed at its motorsports competence center in Neuburg, Germany. It also announced it was buying three-quarters of the Swiss-based Sauber team. The following year, then-Audi CTO and head of the F1 program Oliver Hoffman explained to Ars why the company was going to F1 now.

Audi's 2026 F1 livery on a show car, seen from the front 3/4s

As you see the car from other angles, you see how the bodywork interacts with the geometric shapes. Credit: Jonathan Gitlin

“Nearly 50 percent of the power will come out of the electric drive train. Especially for battery technology, thermal management, and also efficiency of the power electronics, there’s a clear focus. And together, the fit between Formula 1 and our RS technologies is [the] perfect fit for us,” Hoffman said at the time.

A team in trouble?

On track, things did not look great. The Sauber team had finished 2023 in 9th place, and 2024 was looking worse. (It eventually finished dead last with a miserable four points at the end of the year.) Rumors that Audi wanted out of the program had to be quashed; then, in March 2024, Audi decided to buy all of Sauber, adding Andreas Seidl, formerly of McLaren and before that Porsche, to run the team, now called Audi Formula Racing.

But within a year, Hoffman was gone, together with Andreas Seidl. Replacing Hoffman as head of the Audi F1 project: Mattia Binotto, who saw Ferrari get close to but not quite land an F1 championship. Then in August, Jonathan Wheatley joined the team from Red Bull as the new team principal.

a closer look at Audi's 2026 F1 livery on a show car

The way the air intakes are highlighted is particularly effective. Credit: Jonathan Gitlin

Binotto and Wheatley’s arrival seems to have unlocked something. At Silverstone, Nico Hülkenberg finished third, the first podium for the team since 2012 and the first of Hülkenberg’s career, one that was long, long overdue. Hülkenberg now lies ninth in points. His rookie teammate Gabriel Bortoleto might have just had a disappointing home race, but last year’s F2 champ has shown plenty of speed in the Sauber this year. The team will surely want to carry that momentum forward to 2026.

“The goal is clear: to fight for championships by 2030. That journey takes time, the right people, and a mindset of continuous improvement,” Binotto said. “Formula 1 is one of the most competitive environments. Becoming a champion is a journey of progress. Mistakes will happen, but learning from them is what drives transformation. That’s why we follow a three-phased approach: starting as a challenger with the ambition to grow, evolving into a competitor by daring the status quo and achieving first successes, and ultimately becoming a champion,” Binotto said.

What’s under the bodywork?

Technical details on the R26 remain scarce, for the same reason we haven’t seen the actual race car yet. Earlier today we visited Audi’s Neuburg facility, which is much expanded from the sportscar days—while those were also hybrid race cars, the state of the art has moved on in the near-decade since Audi left that sport, and F1 power units (the internal combustion engine, hybrid system, battery, but not the transmission) operate at a much higher level.

In addition to the workshops and cleanrooms where engineers and technicians develop and build the various components that go into the power units, there are extensive test and analysis facilities, able to examine parts both destructively (for example, cutting sections for a scanning electron microscope) and nondestructively (like X-raying or CT scanning).

Development and assembly of the Energy Recovery System (ERS) at the Neuburg site

We weren’t allowed to take photos inside the Neuburg facility, or even go into many of the rooms, but the ones we did see are all immaculate. Credit: Audi

Unlike in the LMP1 days of multiple Le Mans wins, actual on-track testing in F1 is highly restricted, limited now to just nine days over three tests before the start of the season. “When I think of my past LMP1 racing, we tested 50 days, 60 days, and if we had a problem, we just added the days. We went to Sebring or whatever,” said Stefan Dreyer, Audi Formula Racing’s CTO. “And this is also a huge challenge… moving into the future of Formula 1.”

For the first three years of the project, there was one goal: design and build a power unit for the 2026 regulations. The complete powertrain (the power unit plus the transmission) first ran a race simulation in 2024. In addition to Neuburg and Hinwil, Switzerland, where Sauber is based, there’s a new location in Bicester, England, part of that country’s “motorsports valley” and home to an awful lot of F1 suppliers and talent.

Now, the group in Neuburg has multiple overlapping jobs: assemble power units and supply them to the race team in Hinwil that is building the chassis, as well as begin development on the 2027 and 2028 iterations of the power units. It’s telling that all over the facility, video screens count down the remaining three months and however many days, hours, minutes, and seconds are left until the cars take to the track in anger in Australia.

Photo of Jonathan M. Gitlin

Jonathan is the Automotive Editor at Ars Technica. He has a BSc and PhD in Pharmacology. In 2014 he decided to indulge his lifelong passion for the car by leaving the National Human Genome Research Institute and launching Ars Technica’s automotive coverage. He lives in Washington, DC.

Audi goes full minimalism for its first-ever Formula 1 livery Read More »

quantum-computing-tech-keeps-edging-forward

Quantum computing tech keeps edging forward


More superposition, less supposition

IBM follows through on its June promises, plus more trapped ion news.

IBM has moved to large-scale manufacturing of its Quantum Loon chips. Credit: IBM

The end of the year is usually a busy time in the quantum computing arena, as companies often try to announce that they’ve reached major milestones before the year wraps up. This year has been no exception. And while not all of these announcements involve interesting new architectures like the one we looked at recently, they’re a good way to mark progress in the field, and they often involve the sort of smaller, incremental steps needed to push the field forward.

What follows is a quick look at a handful of announcements from the past few weeks that struck us as potentially interesting.

IBM follows through

IBM is one of the companies announcing a brand-new architecture this year. That’s not at all a surprise, given that the company promised to do so back in June; this week sees the company confirming that it has built the two processors it said it would earlier in the year. These include one called Loon, which is focused on the architecture that IBM will use to host error-corrected logical qubits. Loon represents two major changes for the company: a shift to nearest-neighbor connections and the addition of long-distance connections.

IBM had previously used what it termed the “heavy hex” architecture, in which alternating qubits were connected to either two or three of their neighbors, forming a set of overlapping hexagonal structures. In Loon, the company is using a square grid, with each qubit having connections to its four closest neighbors. This higher density of connections can enable more efficient use of the qubits during computations. But qubits in Loon have additional long-distance connections to other parts of the chip, which will be needed for the specific type of error correction that IBM has committed to. It’s there to allow users to test out a critical future feature.
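
To make the connectivity change concrete, here is a minimal sketch in Python; the grid dimensions are arbitrary, and this is only an illustration of a square lattice with nearest-neighbor couplers, not IBM’s published Loon layout (which also adds the long-distance connections described above).

```python
# Illustrative sketch only: build nearest-neighbor couplers for a hypothetical
# square-grid device. IBM's actual Loon chip also adds long-range connections.

def square_grid_couplings(rows: int, cols: int) -> list[tuple[int, int]]:
    """Return qubit-index pairs joined by a coupler on a rows x cols grid."""
    def idx(r: int, c: int) -> int:
        return r * cols + c

    pairs = []
    for r in range(rows):
        for c in range(cols):
            if c + 1 < cols:  # neighbor to the right
                pairs.append((idx(r, c), idx(r, c + 1)))
            if r + 1 < rows:  # neighbor below
                pairs.append((idx(r, c), idx(r + 1, c)))
    return pairs

pairs = square_grid_couplings(4, 4)
print(len(pairs), "couplers; average degree", 2 * len(pairs) / 16)
# 24 couplers; interior qubits have 4 neighbors, versus 2-3 in a heavy-hex layout
```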

The second processor, Nighthawk, is focused on the now. It also has the nearest-neighbor connections and a square grid structure, but it lacks the long-distance connections. Instead, the focus with Nighthawk is to get error rates down so that researchers can start testing algorithms for quantum advantage—computations where quantum computers have a clear edge over classical algorithms.

In addition, the company is launching a GitHub repository that will allow the community to deposit code and performance data for both classical and quantum algorithms, enabling rigorous evaluations of relative performance. Right now, those are broken down into three categories of algorithms that IBM expects are most likely to demonstrate a verifiable quantum advantage.

This isn’t the only follow-up to IBM’s June announcement, which also saw the company describe the algorithm it would use to identify errors in its logical qubits and the corrections needed to fix them. In late October, the company said it had confirmed that the algorithm could work in real time when run on an FPGA made in collaboration with AMD.

Record lows

A few years back, we reported on a company called Oxford Ionics, which had just announced that it achieved a record-low error rate in some qubit operations using trapped ions. Most trapped-ion quantum computers move qubits by manipulating electromagnetic fields, but they perform computational operations using lasers. Oxford Ionics figured out how to perform operations using electromagnetic fields, meaning more of their processing benefited from our ability to precisely manufacture circuitry (lasers were still needed for tasks like producing a readout of the qubits). And as we noted, it could perform these computational operations extremely effectively.

But Oxford Ionics never made a major announcement that would give us a good excuse to describe its technology in more detail. The company was ultimately acquired by IonQ, a competitor in the trapped-ion space.

Now, IonQ is building on what it gained from Oxford Ionics, announcing a new record for two-qubit gates: greater than 99.99 percent fidelity, meaning an error rate below 0.01 percent. That could be critical for the company, as a low error rate for hardware qubits means fewer are needed to get good performance from error-corrected qubits.
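
As a rough illustration of why that matters, here is a back-of-envelope sketch using the common surface-code scaling heuristic; the threshold, target logical error rate, and the code family itself are assumptions for illustration, not details of IonQ’s actual error-correction scheme.

```python
# Back-of-envelope only: assumes a surface-code-style heuristic,
# logical_error ~ 0.1 * (p / p_threshold) ** ((d + 1) / 2), with roughly
# 2 * d**2 physical qubits per logical qubit. Not IonQ's actual scheme.

P_THRESHOLD = 1e-2      # assumed error-correction threshold
TARGET_LOGICAL = 1e-9   # assumed target logical error rate

def physical_qubits_per_logical(p: float) -> int:
    d = 3
    while 0.1 * (p / P_THRESHOLD) ** ((d + 1) / 2) > TARGET_LOGICAL:
        d += 2  # code distance grows in odd steps
    return 2 * d * d

for p in (1e-3, 1e-4):  # roughly 99.9% vs 99.99% two-qubit fidelity
    print(f"error rate {p:g}: ~{physical_qubits_per_logical(p)} physical qubits per logical qubit")
# The order-of-magnitude improvement in fidelity cuts the overhead severalfold.
```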

But the details of the two-qubit gates are perhaps more interesting than the error rate. Two-qubit gates involve bringing the two qubits into close proximity, which often requires moving them. That motion pumps a bit of energy into the system, raising the ions’ temperature and leaving them slightly more prone to errors. As a result, any movement of the ions is generally followed by cooling, in which lasers are used to bleed energy back out of the qubits.

This process, which involves two distinct cooling steps, is slow. So slow that as much as two-thirds of the time spent in operations involves the hardware waiting around while recently moved ions are cooled back down. The new IonQ announcement includes a description of a method for performing two-qubit gates that doesn’t require the ions to be fully cooled. This allows one of the two cooling steps to be skipped entirely. In fact, coupled with earlier work involving one-qubit gates, it raises the possibility that the entire machine could operate with its ions at a still very cold but slightly elevated temperature, avoiding all need for one of the two cooling steps.

That would shorten operation times and let researchers do more before the limit of a quantum system’s coherence is reached.

State of the art?

The last announcement comes from another trapped-ion company, Quantum Art. A couple of weeks back, it announced a collaboration with Nvidia that resulted in a more efficient compiler for operations on its hardware. On its own, this isn’t especially interesting. But it’s emblematic of a trend that’s worth noting, and it gives us an excuse to look at Quantum Art’s technology, which takes a distinct approach to boosting the efficiency of trapped-ion computation.

First, the trend: Nvidia’s interest in quantum computing. The company isn’t interested in the quantum aspects (at least not publicly); instead, it sees an opportunity to get further entrenched in high-performance computing. There are three areas where the computational capacity of GPUs can play a role here. One is small-scale modeling of quantum processors so that users can perform an initial testing of algorithms without committing to paying for access to the real thing. Another is what Quantum Art is announcing: using GPUs as part of a compiler chain to do all the computations needed to find more efficient ways of executing an algorithm on specific quantum hardware.

Finally, there’s a potential role in error correction. Error correction involves some indirect measurements of a handful of hardware qubits to determine the most likely state that a larger collection (called a logical qubit) is in. This requires modeling a quantum system in real time, which is quite difficult—hence the computational demands that Nvidia hopes to meet. Regardless of the precise role, there has been a steady flow of announcements much like Quantum Art’s: a partnership with Nvidia that will keep the company’s hardware involved if the quantum technology takes off.

In Quantum Art’s case, that technology is a bit unusual. The trapped-ion companies we’ve covered so far are all taking different routes to the same place: moving one or two ions into a location where operations can be performed and then executing one- or two-qubit gates. Quantum Art’s approach is to perform gates with much larger collections of ions. At the compiler level, it would be akin to figuring out which qubits need a specific operation performed, clustering them together, and doing it all at once. Obviously, there are potential efficiency gains here.

The challenge would normally be moving so many qubits around to create these clusters. But Quantum Art uses lasers to “pin” ions in a row so they act to isolate the ones to their right from the ones to their left. Each cluster can then be operated on separately. In between operations, the pins can be moved to new locations, creating different clusters for the next set of operations. (Quantum Art is calling each cluster of ions a “core” and presenting this as multicore quantum computing.)
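
As a toy illustration of the pinning idea (purely hypothetical; this is not Quantum Art’s actual control software), the sketch below splits a one-dimensional chain of ions into isolated cores based on which ions are pinned.

```python
# Toy illustration only: split a 1D ion chain into isolated "cores" delimited
# by pinned ions. Hypothetical code, not Quantum Art's control stack.

def cores_from_pins(num_ions: int, pins: set[int]) -> list[list[int]]:
    """Return contiguous groups of unpinned ion indices separated by pins."""
    cores, current = [], []
    for ion in range(num_ions):
        if ion in pins:        # a pinned ion isolates its neighbors from each other
            if current:
                cores.append(current)
            current = []
        else:
            current.append(ion)
    if current:
        cores.append(current)
    return cores

print(cores_from_pins(10, pins={3, 7}))   # [[0, 1, 2], [4, 5, 6], [8, 9]]
# Moving the pins between operations regroups the ions into different cores.
```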

At the moment, Quantum Art is behind some of its competitors in terms of qubit count and performing interesting demonstrations, and it’s not pledging to scale quite as fast. But the company’s founders are convinced that the complexity of doing so many individual operations and moving so many ions around will catch up with those competitors, while the added efficiency of multiple qubit gates will allow it to scale better.

This is just a small sampling of all the announcements from this fall, but it should give you a sense of how rapidly the field is progressing—from technology demonstrations to identifying cases where quantum hardware has a real edge and exploring ways to sustain progress beyond those first successes.

Photo of John Timmer

John is Ars Technica’s science editor. He has a Bachelor of Arts in Biochemistry from Columbia University, and a Ph.D. in Molecular and Cell Biology from the University of California, Berkeley. When physically separated from his keyboard, he tends to seek out a bicycle, or a scenic location for communing with his hiking boots.

Quantum computing tech keeps edging forward Read More »

kimi-k2-thinking

Kimi K2 Thinking

I previously covered Kimi K2, which now has a new thinking version. As I said at the time back in July, price in that the thinking version is coming.

Is it the real deal?

That depends on what level counts as the real deal. It’s a good model, sir, by all accounts. But there have been fewer accounts than we would expect if it was a big deal, and it doesn’t fall into any of my use cases.

Kimi.ai: 🚀 Hello, Kimi K2 Thinking!

The Open-Source Thinking Agent Model is here.

🔹 SOTA on HLE (44.9%) and BrowseComp (60.2%)

🔹 Executes up to 200 – 300 sequential tool calls without human interference

🔹 Excels in reasoning, agentic search, and coding

🔹 256K context window

Built as a thinking agent, K2 Thinking marks our latest efforts in test-time scaling — scaling both thinking tokens and tool-calling turns.

K2 Thinking is now live on http://kimi.com in chat mode, with full agentic mode coming soon. It is also accessible via API.

API here, Tech blog here, Weights and code here.

(Pliny jailbreak here.)

It’s got 1T parameters, and Kimi and Kimi K2 have a solid track record, so it’s plausible this could play with the big boys, although the five month delay in getting to a reasoning model suggests skepticism it can be competitive.

As always, internal benchmark scores can differ greatly from outside benchmark scores, especially for open models. Sometimes this is due to outsiders botching setup, but also inside measurements need to be double checked.

For Humanity’s Last Exam, I see an outside source saying that as of November 9 it was in second place at 23.9%, which is very much not 44.9% but still very good.

On writing quality we’ve gotten endorsements for Kimi K2 for a while.

Rohit: Kimi K2 is remarkably good at writing, and unlike all others thinking mode hasn’t degraded its writing ability more.

Morgan: if i recall, on release gpt-5 was the only model where writing quality improved with thinking effort.

Rohit: Alas.

Gary Fung: Kimi has always been a special snowflake on creative writing.

Here’s one part of the explanation of how they got the writing to be so good, which involves self-ranking RL and writing self-play, with a suggestion of some similarities to the training of Claude 3 Opus. In a sense this looks like ‘try to do better, at all.’

On the agentic tool use and general intelligence? I’m more skeptical.

Artificial Analysis has Kimi K2 Thinking at the top of its Agentic Tool Use benchmark, its strongest subset, at 93 percent versus 87 percent for the next best model, which is a huge gap in context.

As is usually true when people compare open to closed models, this is the open model’s best benchmark, so don’t get carried away, but yes overall it did well on Artificial Analysis, indeed suspiciously well given how little talk I see.

The tool calling abilities are exciting for an open model, although standard for a closed one. This is a good example of how we look for ways for open models to impress by matching closed abilities in spots; it is also indeed highly useful.

Overall, the Artificial Analysis Intelligence Index has Kimi K2 Thinking at 67, one point behind GPT-5 and ahead of everyone else. Kimi used the most tokens of any model, but total cost was lower than the top closed models, although not dramatically so ($829-$913 for GPT-5, $817 for Sonnet, $380 for Kimi K2), as pricing is $0.6/$2.5 per million input/output tokens, versus $1.25/$10 for GPT-5 and $3/$15 for Sonnet.
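
For a sense of how those per-token prices translate into totals, here is a hedged sketch; the token counts are made-up placeholders, not Artificial Analysis’s actual usage figures, and the prices are simply the ones quoted above.

```python
# Hedged illustration: prices are the per-million-token figures quoted above
# (input/output); the token counts below are invented placeholders.

PRICES = {
    "Kimi K2 Thinking": (0.6, 2.5),
    "GPT-5": (1.25, 10.0),
    "Claude Sonnet": (3.0, 15.0),
}

def run_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Cost in USD for one run at the quoted per-million-token prices."""
    p_in, p_out = PRICES[model]
    return (input_tokens * p_in + output_tokens * p_out) / 1_000_000

# A model can burn far more reasoning tokens per task and still come out
# cheaper if its per-token prices are low enough (hypothetical numbers).
print(run_cost("Kimi K2 Thinking", 50_000, 120_000))  # ~$0.33
print(run_cost("GPT-5", 50_000, 40_000))              # ~$0.46
```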

Nathan Lambert is impressed, relying on secondary information (‘seems like a joy to use’), and offers thoughts.

He notes that yes, labs start out targeting benchmarks and then transition to actually targeting useful things, such as how K2 Thinking was post-trained in 4bit precision to prepare for realistic tasks and benchmarked the same way. I agree that’s pretty cool.

It does seem plausible that Kimi K2 is still in the ‘target the benchmarks’ phase in most places, although not in creative writing. By default, I expect such models to punch ‘below their benchmark-implied weight’ on practical tasks.

For now we don’t have many other outside scores to work with and feedback is light.

Simeon: is Kimi K2 benchmaxxing or are they actually SOTA while training on potatoes?

Prinz: In my testing (for my use cases, which have nothing to do with math and coding), K2-Thinking is obviously worse than GPT-5 Thinking, but by a relatively modest margin. If I had no access to other models, I would happily use K2-Thinking and it wouldn’t feel like a huge downgrade.

ahtoshkaa: I have a pretty sophisticated companion app that uses about 5-10K of varied, information dense context. So the model has to properly parse this information and have very good writing skills. kimi-k2-thinking is absolute ass. similarly to the new OpenAI model – Polaris Alpha.

There’s a growing rhetorical pressure, or marketing style pressure, where the ‘benchmark gaps’ are closing. Chinese labs can point to numbers that say they are ‘just as good’ or almost as good, for many purposes ‘good enough’ is good enough. And many people (including the likes of David Sacks) point to GPT-5 and similar as showing progress isn’t impressive or scary. But as Nathan points out we now see releases like Claude 4 where the benchmark gains look small but real world gains are large, and I would add GPT-5 (and Sonnet 4.5) to that category as well.

Teortaxes: It’s token-hungry, slow-ish, and sometimes rough around the edges. Generally though it’s a jump for open/Chinese models, in the league of Sonnet 4.5 and GPT-5 (maybe -mini depending on task) and a genuinely strong SWE agent. Legitimate alternative, not “but look at the price.”

It’s baked in that the open alternatives are pretty much always going to be rough around the edges, and get evaluated largely in terms of their peak relative performance areas. This is still high praise, putting Kimi in striking distance of the current big two.

Havard Isle has it coming in at a solid 42.1% on WeirdML, matching Opus 4.1.

Here’s something cool:

Pawal Azczesny: Kimi K2 Thinking is using systematically (on its own, without prompting) some of the debiasing strategies known from cognitive sciences. Very impressive. I didn’t see any other model doing that. Well done @Kimi_Moonshot.

It goes beyond “think step by step”. For instance it applied pre-mortem analysis, which is not frequently used. Or it exaggerates claims to see if the whole structure still stands on its own. Pretty neat. Other models need to be instructed to do this.

Steve Hsu got some good math results.

Other notes:

MinusGix: I’ve found it to be better than GPT-5 at understanding & explaining type-theory concepts. Though as usual with Kimi it writes eloquently enough that it is harder to tell when it is bullshitting compared to GPT-5.

Emerson Kimura: Did a few quick text tests, and it seemed comparable to GPT-5

Ian Pitchford: It’s very thorough; few hallucinations.

FredipusRex: Caught it hallucinating sources on Deep Research.

Lech Mazur: Sorry to report, but Kimi K2 Thinking is entering reasoning loops and failing to produce answers for many Extended Connections benchmark questions (double-checked using https://platform.moonshot.ai/playground, so it’s not an API call issue).

The safety protocols? The what now?

David Manheim: It’s very willing to give detailed chemical weapons synthesis instructions and advice, including for scaling production and improving purity, and help on how to weaponize it for use in rockets – with only minimal effort on my part to circumvent refusals.

Two of the three responses to that were ‘good news’ and ‘great. I mean it too.’ So yeah, AI is going to go great, I can tell.

The reception has been strangely quiet. This is by all accounts the strongest open model, the strongest Chinese model, and a rival for the best agentic or tool use model overall, yet I don’t see much excitement, or much feedback at all, positive or negative.

There’s no question Kimi K2 was impressive, and that Kimi K2 Thinking is also an impressive model, even assuming it underperforms its numbers. It’s good enough that it will often be worth testing it out on your use cases and seeing if it’s right for you. My guess is it will rarely be right unless you are highly price conscious, but we’ll see.

Kimi K2 Thinking Read More »

google-announces-even-more-ai-in-photos-app,-powered-by-nano-banana

Google announces even more AI in Photos app, powered by Nano Banana

We’re running out of ways to tell you that Google is releasing more generative AI features, but that’s what’s happening in Google Photos today. The Big G is finally making good on its promise to add its market-leading Nano Banana image-editing model to the app. The model powers a couple of features, and it’s not just for Google’s Android platform. Nano Banana edits are also coming to the iOS version of the app.

Nano Banana started making waves when it appeared earlier this year as an unbranded demo. You simply feed the model an image and tell it what edits you want to see. Google said Nano Banana was destined for the Photos app back in October, but it’s only now beginning the rollout. The Photos app already had conversational editing in the “Help Me Edit” feature, but it was running an older non-fruit model that produced inferior results. Nano Banana editing will produce AI slop, yes, but it’s better slop.

Nano Banana in Help Me Edit

Google says the updated Help Me Edit feature has access to your private face groups, so you can use names in your instructions. For example, you could type “Remove Riley’s sunglasses,” and Nano Banana will identify Riley in the photo (assuming you have a person of that name saved) and make the edit without further instructions. You can also ask for more fantastical edits in Help Me Edit, changing the style of the image from top to bottom.
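
The Photos feature itself has no public API, but the same kind of instruction-driven edit can be sketched against Google’s publicly documented image model using the google-genai Python SDK. This is a hedged example: the model id and filenames are assumptions, and the face-group trick (“Riley”) is a Photos-app capability, so the prompt describes the subject generically.

```python
# Hedged sketch using the google-genai SDK (pip install google-genai pillow).
# The model id and filenames are assumptions for illustration; this is not the
# Google Photos implementation, which has no public API.
from google import genai
from PIL import Image

client = genai.Client(api_key="YOUR_API_KEY")   # placeholder key

photo = Image.open("riley.jpg")                 # hypothetical input photo
response = client.models.generate_content(
    model="gemini-2.5-flash-image",             # assumed id for the image model
    contents=[photo, "Remove the sunglasses from the person in this photo."],
)

# Save any returned image parts; print any accompanying text.
for part in response.candidates[0].content.parts:
    if getattr(part, "inline_data", None) is not None:
        with open("riley_edited.png", "wb") as out:
            out.write(part.inline_data.data)
    elif getattr(part, "text", None):
        print(part.text)
```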

Google announces even more AI in Photos app, powered by Nano Banana Read More »

pirelli’s-cyber-tire-might-become-highway-agencies’-newest-assistant

Pirelli’s Cyber Tire might become highway agencies’ newest assistant

“Two weeks ago, a European manufacturer tested… the traction control and stability with a dramatic improvement in stability and the traction,” he said. “The nice part of the story is that there is not only an objective improvement—2 or 3 meters in braking distance—but there is also from these customers always a better feeling… which is something that is very important to us because numbers are for technicians, but from our customer’s perspective, the pleasure to drive also very important.”

The headline said something about traffic?

While the application described above mostly serves the Cyber Tire-equipped car, the smart tires can also serve the greater good. Earlier this year, we learned of a trial in the Italian region of Apulia that fitted Cyber Tires to a fleet of vehicles and then inferred the health of the road surface from data collected by the tires.

Working with a Swedish startup called Univrses, Pirelli has been fusing sensor data from the Cyber Tire with cameras. Misani offered an example.

“You have a hole [in the road]. If you have a hole, maybe the visual [system] recognizes and the tire does not because you automatically try to avoid the hole. So if the tire does not pass over the hole, you don’t measure anything,” he said. “But your visual system will detect it. On the opposite side, there are some cracks on the road that are detected from the visual system as something that is not even on the road, but they cannot say how deep, how is the step, how is it affecting the stability of the car and things like this. Matching the two things, you have the possibility to monitor in the best possible way the condition of the road.”

“Plus thanks to the vision, you have also the possibility to exploit what we call the vertical status—traffic signs, the compatibility between the condition of the road and the traffic signs,” he said.

The next step is a national program in Italy. “We are investigating and making a project to actively control not the control unit of the car but the traffic information,” Misani said. “On some roads, you can vary the speed limit according to the status; if we can detect aquaplaning, we can warn [that] at kilometer whatever, there is aquaplaning and [the speed limit will be automatically reduced]. We are going in the direction of integrating into the smart roads.”
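
As a rough illustration of the fusion logic Misani describes, here is a hedged Python sketch: the camera supplies defects the tire may never have touched, tire readings supply severity where the wheel actually passed over them, and an aquaplaning flag feeds the variable-speed-limit idea. Every name, field, and threshold below is invented for the sketch; none of it is Pirelli’s or Univrses’ actual interface.

```python
# Hypothetical sketch of fusing camera and tire data into road-condition reports.
# All structures and values are invented for illustration.
from dataclasses import dataclass


@dataclass
class CameraDetection:
    kind: str            # e.g. "pothole", "crack"
    position_km: float   # position along the road


@dataclass
class TireReading:
    position_km: float
    impact_severity: float   # 0 = smooth road, 1 = severe impact
    aquaplaning: bool


def fuse(camera: list[CameraDetection], tire: list[TireReading],
         tol_km: float = 0.01) -> list[dict]:
    """Combine vision and tire data, as in the pothole/crack example above."""
    reports = []
    for det in camera:
        # Tire readings taken near the visually detected defect, if the wheel
        # actually passed over it (drivers often steer around potholes).
        nearby = [t for t in tire if abs(t.position_km - det.position_km) <= tol_km]
        severity = max((t.impact_severity for t in nearby), default=None)
        reports.append({
            "km": det.position_km,
            "defect": det.kind,
            "severity": severity if severity is not None else "unknown (tire avoided it)",
        })
    # Aquaplaning detections feed the variable speed limit idea for smart roads.
    for t in tire:
        if t.aquaplaning:
            reports.append({"km": t.position_km, "defect": "aquaplaning",
                            "action": "lower the advisory speed limit"})
    return reports
```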

Pirelli’s Cyber Tire might become highway agencies’ newest assistant Read More »

clickfix-may-be-the-biggest-security-threat-your-family-has-never-heard-of

ClickFix may be the biggest security threat your family has never heard of

Another campaign, documented by Sekoia, targeted Windows users. The attackers behind it first compromise a hotel’s account for Booking.com or another online travel service. Using the information stored in the compromised accounts, the attackers contact people with pending reservations, an ability that builds immediate trust with many targets, who are eager to comply with instructions, lest their stay be canceled.

The site eventually presents a fake CAPTCHA notification with an almost identical look and feel to the ones used by the content delivery network Cloudflare. To “prove” there’s a human behind the keyboard, the notification instructs the target to copy a string of text and paste it into the Windows terminal. With that, the machine is infected with malware tracked as PureRAT.

Push Security, meanwhile, reported a ClickFix campaign with a page “adapting to the device that you’re visiting from.” Depending on the OS, the page will deliver payloads for Windows or macOS. Many of these payloads, Microsoft said, use LOLbins, the name for legitimate binaries already built into the operating system that attackers abuse in a technique known as living off the land. Because such scripts rely solely on native capabilities and write no malicious files to disk, endpoint protection is further hamstrung.

The commands, often Base64-encoded to make them unreadable to humans, are typically copied inside the browser sandbox, a part of most browsers that accesses the Internet in an isolated environment designed to protect devices from malware or harmful scripts. Many security tools are unable to observe and flag these actions as potentially malicious.
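
One practical habit follows directly from that: never run a pasted command you cannot read. The Python snippet below (an illustration, not a vendor tool) just decodes a Base64-looking blob so a human can inspect it; it executes nothing, and the sample string is made up.

```python
# Decode a Base64 payload for inspection without executing anything.
# Illustrative only; the sample string is invented.
import base64


def peek(blob: str) -> str:
    """Return the decoded text of a Base64 blob, or a note if it isn't Base64."""
    try:
        return base64.b64decode(blob, validate=True).decode("utf-8", errors="replace")
    except Exception:
        return "<not valid Base64>"


if __name__ == "__main__":
    suspicious = "ZWNobyBqdXN0LWFuLWV4YW1wbGU="   # decodes to: echo just-an-example
    print(peek(suspicious))
```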

The attacks can also be effective given the lack of awareness. Many people have learned over the years to be suspicious of links in emails or messengers. In many users’ minds, the precaution doesn’t extend to sites that instruct them to copy a piece of text and paste it into an unfamiliar window. When the instructions come in emails from a known hotel or at the top of Google results, targets can be further caught off guard.

With many families gathering in the coming weeks for various holiday dinners, ClickFix scams are worth mentioning to those family members who ask for security advice. Microsoft Defender and other endpoint protection programs offer some defenses against these attacks, but they can, in some cases, be bypassed. That means that, for now, awareness is the best countermeasure.

ClickFix may be the biggest security threat your family has never heard of Read More »

canada-fought-measles-and-measles-won;-virus-now-endemic-after-1998-elimination

Canada fought measles and measles won; virus now endemic after 1998 elimination

“This loss represents a setback, of course, but it is also reversible,” Jarbas Barbosa, director of PAHO, said in a press briefing Monday.

Call to action

Barbosa was optimistic that Canada could regain its elimination status. He highlighted that such setbacks have happened before. “In 2018 and 2019, Venezuela and Brazil temporarily lost their elimination status following large outbreaks,” Barbosa noted. “Thanks to coordinated action by governments, civil society, and regional cooperation, those outbreaks were contained, and the Region of the Americas regained its measles-free status in 2024.”

On Monday, the Public Health Agency of Canada released a statement confirming that it received notification from PAHO that it had lost its measles elimination status, while reporting that it is already getting to work on earning it back. “PHAC is collaborating with the PAHO and working with federal, provincial, territorial, and community partners to implement coordinated actions—focused on improving vaccination coverage, strengthening data sharing, enabling better overall surveillance efforts, and providing evidence-based guidance,” the agency said.

However, Canada isn’t the only country facing an uphill battle against measles—the most infectious virus known to humankind. Outbreaks and sustained spread are also active in the US and Mexico. To date, the US has documented at least 1,618 measles cases since the start of the year, while Mexico has tallied at least 5,185. Bolivia, Brazil, Paraguay, and Belize also have ongoing outbreaks, PAHO reported.

As of November 7, PAHO has collected reports of 12,593 confirmed measles cases from 10 countries, but approximately 95 percent of them are in Canada, Mexico, and the US. That total is a 30-fold increase compared to 2024, PAHO notes, and the rise has led to at least 28 deaths: 23 in Mexico, three in the United States, and two in Canada.
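
A quick back-of-envelope check using only the figures quoted above (the Canadian case count is an inference from the 95 percent share, not a PAHO number):

```python
# Back-of-envelope arithmetic from the figures in this article; illustrative only.
total_cases = 12_593          # confirmed cases reported to PAHO as of November 7
us_cases, mexico_cases = 1_618, 5_185
share_top3 = 0.95             # share in Canada, Mexico, and the US

deaths = {"Mexico": 23, "United States": 3, "Canada": 2}
assert sum(deaths.values()) == 28   # matches the reported death toll

# Canada's count isn't given directly; the 95 percent share implies roughly:
canada_estimate = round(total_cases * share_top3) - us_cases - mexico_cases
print(f"Implied Canadian case count: ~{canada_estimate}")  # about 5,160
```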

PAHO used Canada’s loss as a call to action not just for the northern country, but for the rest of the region. “Every case we prevent, every outbreak we stop saves lives, protects families, and makes communities healthier,” Barbosa said. “Today, rather than lamenting the loss of a regional status, we call on all countries to redouble their efforts to strengthen vaccination rates, surveillance, and timely response to suspected cases—reaching every corner of the Americas. As a Region, we have eliminated measles twice. We can do it a third time.”

Canada fought measles and measles won; virus now endemic after 1998 elimination Read More »

new-project-brings-strong-linux-compatibility-to-more-classic-windows-games

New project brings strong Linux compatibility to more classic Windows games

Those additional options should be welcome news for fans looking for new ways to play PC games of a certain era. The PC Gaming Wiki lists over 400 titles written with the D3D7 APIs, and while most of those games were released between 2000 and 2004, a handful of new D3D7 games have continued to be released through 2022.

The D3D7 games list predictably includes a lot of licensed shovelware, but there are also well-remembered games like Escape from Monkey Island, Arx Fatalis, and the original Hitman: Codename 47. WinterSnowfall writes that the project was inspired by a desire to play games like Sacrifice and Disciples II on top of the existing dxvk framework.

Despite some known issues with certain D3D7 titles, WinterSnowfall writes that recent tuning means “things are now anywhere between decent to stellar in most of the supported games.” Still, the project author warns that the project will likely never reach full compatibility since “D3D7 is a land of highly cursed API interoperability.”

Don’t expect this project to expand to include support for even older DirectX APIs, either, WinterSnowfall warns. “D3D7 is enough of a challenge and a mess as it is,” the author writes. “The further we stray from D3D9, the further we stray from the divine.”

New project brings strong Linux compatibility to more classic Windows games Read More »

10,000-generations-of-hominins-used-the-same-stone-tools-to-weather-a-changing-world

10,000 generations of hominins used the same stone tools to weather a changing world

“This site reveals an extraordinary story of cultural continuity,” said Braun in a recent press release.

When the going gets tough, the tough make tools

Nomorotukunan’s layers of stone tools span the transition from the Pliocene to the Pleistocene, during which Earth’s climate turned gradually cooler and drier after a 2 to 3 million-year warm spell. Pollen and other microscopic traces of plants in the sediment at Nomorotukunan tell the tale: the lakeshore marsh gradually dried up, giving way to arid grassland dotted with shrubs. On a shorter timescale, hominins at Nomorotukunan faced wildfires (based on microcharcoal in the sediments), droughts, and rivers drying up or changing course.

“As vegetation shifted, the toolmaking remained steady,” said National University of Kenya archaeologist Rahab N. Kinyanjui in a recent press release. “This is resilience.”

Making sharp stone tools may have helped generations of hominins survive their changing, drying world. In the warm, humid Pliocene, finding food would have been relatively easy, but as conditions got tougher, hominins probably had to scavenge or dig for their meals. At least one animal bone at Nomorotukunan bears cut marks where long-ago hominins carved up the carcass for meat—something our lineage isn’t really equipped to do with its bare hands and teeth. Tools also would have enabled early hominins to dig up and cut tubers or roots.

It’s fair to assume that sharpened wood sticks probably also played a role in that particular work, but wood doesn’t tend to last as long as stone in the archaeological record, so we can’t say for sure. What is certain are the stone tools and cut bones, which hint at what Utrecht University archaeologist Dan Rolier, a coauthor of the paper, calls “one of our oldest habits: using technology to steady ourselves against change.”

A tale as old as time

Nomorotukunan may hint that Oldowan technology is even older than the earliest tools archaeologists have unearthed so far. The oldest tools unearthed from the deepest layer at Nomorotukunan are the work of skilled flint-knappers who understood where to strike a stone, and at exactly which angle, to flake off the right shape. They also clearly knew how to select the right stones for the job (fine-grained chalcedony for the win, in this case). In other words, these tools weren’t the work of a bunch of hominins who were just figuring out, for the first time, how to bang the rocks together.

10,000 generations of hominins used the same stone tools to weather a changing world Read More »