Author name: Kris Guyer

why-synthetic-emerald-green-pigments-degrade-over-time

Why synthetic emerald-green pigments degrade over time

Perhaps most relevant to this current paper is a 2020 study in which scientists analyzed Munch’s The Scream, which was showing alarming signs of degradation. They concluded the damage was not the result of exposure to light, but humidity—specifically, from the breath of museum visitors, perhaps as they lean in to take a closer look at the master’s brushstrokes.

Let there be (X-ray) light

Co-author Letizia Monico during the experiments at the European Synchrotron. ESRF

Emerald-green pigments are particularly prone to degradation, so that’s the pigment the authors of this latest paper decided to analyze. “It was already known that emerald-green decays over time, but we wanted to understand exactly the role of light and humidity in this degradation,” said co-author Letizia Monico of the University of Perugia in Italy.

The first step was to collect emerald-green paint microsamples with a scalpel and stereomicroscope from an artwork of that period—in this case, The Intrigue (1890) by James Ensor, currently housed in the Royal Museum of Fine Arts, in Antwerp, Belgium. The team analyzed the untreated samples using Fourier transform infrared imaging, then embedded the samples in polyester resin for synchrotron radiation X-ray analysis. They conducted separate analyses on both commercial and historical samples of emerald-green pigment powders and paint tubes, including one from a museum collection of paint tubes used by Munch.

Next, the authors created their own paint mockups by mixing commercial emerald-green pigment powders and their lab-made powders with linseed oil, and then applied the concoctions to polycarbonate substrates. They also squeezed paint from the Munch paint tube onto a substrate. Once the mockups were dry, thin samples were sliced from each mockup and also analyzed with synchrotron radiation. Then the mockups were subjected to two aging protocols designed to determine the effects of UV light (to simulate indoor lighting) and humidity on the pigments.

The results: In the mockups, light and humidity trigger different degradation pathways in emerald-green paints. Humidity results in the formation of arsenolite, making the paint brittle and prone to flaking. Light dulls the color by causing trivalent arsenic already in the pigment to oxidize into pentavalent compounds, forming a thin white layer on the surface. Those findings are consistent with the analyzed samples taken from The Intrigue, confirming the degradation is due to photo-oxidation. Light, it turns out, is the greatest threat to that particular painting, and possibly other masterpieces from the same period.

Science Advances, 2025. DOI: 10.1126/sciadv.ady1807  (About DOIs).

Why synthetic emerald-green pigments degrade over time Read More »

f1-in-las-vegas:-this-sport-is-a-200-mph-soap-opera

F1 in Las Vegas: This sport is a 200 mph soap opera

Then there’s the temperatures. The desert gets quite chilly in November without the sun shining on things, and the track surface gets down to just 11° C (52° F); by contrast, at the recent Singapore GP, also at night, the track temperature was more like 36° C (97° F).

LAS VEGAS, NEVADA - NOVEMBER 21: Lando Norris of Great Britain driving the (4) McLaren MCL39 Mercedes lifts a wheel on track during qualifying ahead of the F1 Grand Prix of Las Vegas at Las Vegas Strip Circuit on November 21, 2025 in Las Vegas, Nevada. (Photo by )

It’s rare to see an F1 car on full wet tires but not running behind the safety car. Credit: Clive Rose/Getty Images

So, low aero and mechanical grip, an unusual layout compared to most F1 tracks, and very cold temperatures all combine to create potential surprises, shaking up the usual running order.

We saw this last year, where the Mercedes shined in the cold, able to keep their tires in the right operating window, something the team wasn’t able to do at hotter races. But it was hard to tell much from Thursday’s two practice sessions, one of which was interrupted due to problems with a maintenance hatch, albeit not as serious as when one damaged a Ferrari in 2023. The cars looked impressively fast going through turn 17, and the hybrid power units are a little louder than I remember them, even if they’re not a patch on the naturally aspirated engines of old.

Very little of any use was learned by any of the teams for qualifying on Friday night, which took place in at times damp, at times wet conditions—so wet that the Pirelli intermediate tire wasn’t grooved enough, pushing teams to use the full wet-weather spec rubber. Norris took pole from Red Bull’s Max Verstappen, with Williams’ Carlos Sainz making best use of the opportunity to grab third. Piastri would start fifth, behind the Mercedes of last year’s winner, George Russell.

If the race is boring, the off-track action won’t be

Race night was a little windy, but dry. And the race itself was rather boring—Norris tried to defend pole position going into Turn 1 but ran wide, and Verstappen slipped into the lead, never looking back. Norris followed him home in second, with Piastri fourth, leaving Norris 30 points ahead of Piastri and 42 points ahead of Verstappen with two more race weekends and 58 points left on offer.

F1 in Las Vegas: This sport is a 200 mph soap opera Read More »

“go-generate-a-bridge-and-jump-off-it”:-how-video-pros-are-navigating-ai

“Go generate a bridge and jump off it”: How video pros are navigating AI


I talked with nine creators about economic pressures and fan backlash.

Credit: Aurich Lawson | Getty Images

Credit: Aurich Lawson | Getty Images

In 2016, the legendary Japanese filmmaker Hayao Miyazaki was shown a bizarre AI-generated video of a misshapen human body crawling across a floor.

Miyazaki declared himself “utterly disgusted” by the technology demo, which he considered an “insult to life itself.”

“If you really want to make creepy stuff, you can go ahead and do it,” Miyazaki said. “I would never wish to incorporate this technology into my work at all.”

Many fans interpreted Miyazaki’s remarks as rejecting AI-generated video in general. So they didn’t like it when, in October 2024, filmmaker PJ Accetturo used AI tools to create a fake trailer for a live-action version of Miyazaki’s animated classic Princess Mononoke. The trailer earned him 22 million views on X. It also earned him hundreds of insults and death threats.

“Go generate a bridge and jump off of it,” said one of the funnier retorts. Another urged Accetturo to “throw your computer in a river and beg God’s forgiveness.”

Someone tweeted that Miyazaki “should be allowed to legally hunt and kill this man for sport.”

PJ Accetturo is a director and founder of Genre AI, an AI ad agency. Credit: PJ Accetturo

The development of AI image and video generation models has been controversial, to say the least. Artists have accused AI companies of stealing their work to build tools that put people out of a job. Using AI tools openly is stigmatized in many circles, as Accetturo learned the hard way.

But as these models have improved, they have sped up workflows and afforded new opportunities for artistic expression. Artists without AI expertise might soon find themselves losing work.

Over the last few weeks, I’ve spoken to nine actors, directors, and creators about how they are navigating these tricky waters. Here’s what they told me.

Actors have emerged as a powerful force against AI. In 2023, SAG-AFTRA, the Hollywood actors’ union, had its longest-ever strike, partly to establish more protections for actors against AI replicas.

Actors have lobbied to regulate AI in their industry and beyond. One actor I talked with, Erik Passoja, has testified before the California Legislature in favor of several bills, including for greater protections against pornographic deepfakes. SAG-AFTRA endorsed SB 1047, an AI safety bill regulating frontier models. The union also organized against the proposed moratorium on state AI bills.

A recent flashpoint came in September, when Deadline Hollywood reported that talent agencies were interested in signing “AI actress” Tilly Norwood.

Actors weren’t happy. Emily Blunt told Variety, “This is really, really scary. Come on agencies, don’t do that.”

Natasha Lyonne, star of Russian Doll, posted on an Instagram Story: “Any talent agency that engages in this should be boycotted by all guilds. Deeply misguided & totally disturbed.”

The backlash was partly specific to Tilly Norwood—Lyonne is no AI skeptic, having cofounded an AI studio—but it also reflects a set of concerns around AI common to many in Hollywood and beyond.

Here’s how SAG-AFTRA explained its position:

Tilly Norwood is not an actor, it’s a character generated by a computer program that was trained on the work of countless professional performers — without permission or compensation. It has no life experience to draw from, no emotion and, from what we’ve seen, audiences aren’t interested in watching computer-generated content untethered from the human experience. It doesn’t solve any “problem” — it creates the problem of using stolen performances to put actors out of work, jeopardizing performer livelihoods and devaluing human artistry.

This statement reflects three broad criticisms that come up over and over in discussions of AI art:

Content theft: Most leading AI video models have been trained on broad swathes of the Internet, including images and films made by artists. In many cases, companies have not asked artists for permission to use this content, nor compensated them. Courts are still working out whether this is fair use under copyright law. But many people I talked to consider AI companies’ training efforts to be theft of artists’ work.

Job loss:  If AI tools can make passable video quickly or drastically speed up editing tasks, that potentially takes jobs away from actors or film editors. While past technological advancements have also eliminated jobs—the adoption of digital cameras drastically reduced the number of people cutting physical film—AI could have an even broader impact.

Artistic quality:  A lot of people told me they just didn’t think AI-generated content could ever be good art. Tess Dinerstein stars in vertical dramas—episodic programs optimized for viewing on smartphones. She told me that AI is “missing that sort of human connection that you have when you go to a movie theater and you’re sobbing your eyes out because your favorite actor is talking about their dead mom.”

The concern about theft is potentially solvable by changing how models are trained. Around the time Accetturo released the “Princess Mononoke” trailer, he called for generative AI tools to be “ethically trained on licensed datasets.”

Some companies have moved in this direction. For instance, independent filmmaker Gille Klabin told me he “feels pretty good” using Adobe products because the company trains its AI models on stock images that it pays royalties for.

But the other two issues—job losses and artistic integrity—will be harder to finesse. Many creators—and fans—believe that AI-generated content misses the fundamental point of art, which is about creating an emotional connection between creators and viewers.

But while that point is compelling in theory, the details can be tricky.

Dinerstein, the vertical drama actress, told me that she’s “not fundamentally against AI”—she admits “it provides a lot of resources to filmmakers” in specialized editing tasks—but she takes a hard stance against it on social media.

“It’s hard to ever explain gray areas on social media,” she said, and she doesn’t want to “come off as hypocritical.”

Even though she doesn’t think that AI poses a risk to her job—“people want to see what I’m up to”—she does fear people (both fans and vertical drama studios) making an AI representation of her without her permission. And she has found it easiest to just say, “You know what? Don’t involve me in AI.”

Others see it as a much broader issue. Actress Susan Spano told me it was “an issue for humans, not just actors.”

“This is a world of humans and animals,” she said. “Interaction with humans is what makes it fun. I mean, do we want a world of robots?”

It’s relatively easy for actors to take a firm stance against AI because they inherently do their work in the physical world. But things are more complicated for other Hollywood creatives, such as directors, writers, and film editors. AI tools can genuinely make them more productive, and they’re at risk of losing work if they don’t stay on the cutting edge.

So the non-actors I talked to took a range of approaches to AI. Some still reject it. Others have used the tools reluctantly and tried to keep their heads down. Still others have openly embraced the technology.

Kavan Cardoza is a director and AI filmmaker. Credit: Phantom X

Take Kavan Cardoza, for example. He worked as a music video director and photographer for close to a decade before getting his break into filmmaking with AI.

After the image model Midjourney was first released in 2022, Cardoza started playing around with image generation and later video generation. Eventually, he “started making a bunch of fake movie trailers” for existing movies and franchises. In December 2024, he made a fan film in the Batman universe that “exploded on the Internet,” before Warner Bros. took it down for copyright infringement.

Cardoza acknowledges that he re-created actors in former Batman movies “without their permission.” But he insists he wasn’t “trying to be malicious or whatever. It was truly just a fan film.”

Whereas Accetturo received death threats, the response to Cardoza’s fan film was quite positive.

“Every other major studio started contacting me,” Cardoza said. He set up an AI studio, Phantom X, with several of his close friends. Phantom X started by making ads (where AI video is catching on quickest), but Cardoza wanted to focus back on films.

In June, Cardoza made a short film called Echo Hunter, a blend of Blade Runner and The Matrix. Some shots look clearly AI-generated, but Cardoza used motion-capture technology from Runway to put the faces of real actors into his AI-generated world. Overall, the piece pretty much hangs together.

Cardoza wanted to work with real actors because their artistic choices can help elevate the script he’s written: “There’s a lot more levels of creativity to it.” But he needed SAG-AFTRA’s approval to make a film that blends AI techniques with the likenesses of SAG-AFTRA actors. To get it, he had to promise not to reuse the actors’ likenesses in other films.

In Cardoza’s view, AI is “giving voices to creators that otherwise never would have had the voice.”

But Cardoza isn’t wedded to AI. When an interviewer asked him whether he’d make a non-AI film if required to, he responded, “Oh, 100 percent.” Cardoza added that if he had the budget to do it now, “I’d probably still shoot it all live action.”

He acknowledged to me that there will be losers in the transition—“there’s always going to be changes”—but he compares the rise of AI with past technological developments in filmmaking, like the rise of visual effects. This created new jobs making visual effects digitally, but reduced jobs making elaborate physical sets.

Cardoza expressed interest in reducing the amount of job loss. In another interview, Cardoza said that for his film project, “we want to make sure we include as many people as possible,” not just actors, but sound designers, script editors, and other specialized roles.

But he believes that eventually, AI will get good enough to do everyone’s job. “Like I say with tech, it’s never about if, it’s just when.”

Accetturo’s entry into AI was similar. He told me that he worked for 15 years as a filmmaker, “mostly as a commercial director and former documentary director.” During the pandemic, he “raised millions” for an animated TV series, but it got caught up in development hell.

AI gave him a new chance at success. Over the summer of 2024, he started playing around with AI video tools. He realized that he was in the sweet spot to take advantage of AI: experienced enough to make something good, but not so established that he was risking his reputation. After Google released Veo 3 in May, Accetturo released a fake medicine ad that went viral. His studio now produces ads for prominent companies like Oracle and Popeyes.

Accetturo says the backlash against him has subsided: “It truly is nothing compared to what it was.” And he says he’s committed to working on AI: “Everyone understands that it’s the future.”

Between the anti- and pro-AI extremes, there are a lot of editors and artists quietly using AI tools without disclosing it. Unsurprisingly, it’s difficult to find people who will speak about this on the record.

“A lot of people want plausible deniability right now,” according to Ryan Hayden, a Hollywood talent agent. “There is backlash about it.”

But if editors don’t use AI tools, they risk becoming obsolete. Hayden says that he knows a lot of people in the editing field trying to master AI because “there’s gonna be a massive cut” in the total number of editors. Those who know AI might survive.

As one comedy writer involved in an AI project told Wired, “We wanted to be at the table and not on the menu.”

Clandestine AI usage extends into the upper reaches of the industry. Hayden knows an editor who works with a major director who has directed $100 million films. “He’s already using AI, sometimes without people knowing.”

Some artists feel morally conflicted but don’t think they can effectively resist. Vinny Dellay, a storyboard artist who has worked on Marvel films and Super Bowl ads, released a video detailing his views on the ethics of using AI as a working artist. Dellay said that he agrees that “AI being trained off of art found on the Internet without getting permission from the artist, it may not be fair, it may not be honest.” But refusing to use AI products won’t stop their general adoption. Believing otherwise is “just being delusional.”

Instead, Dellay said that the right course is to “adapt like cockroaches after a nuclear war.” If they’re lucky, using AI in storyboarding workflows might even “let a storyboard artist pump out twice the boards in half the time without questioning all your life’s choices at 3 am.”

Gille Klabin is an independent writer, director, and visual effects artist. Credit: Gille Klabin

Gille Klabin is an indie director and filmmaker currently working on a feature called Weekend at the End of the World.

As an independent filmmaker, Klabin can’t afford to hire many people. There are many labor-intensive tasks—like making a pitch deck for his film—that he’d otherwise have to do himself. An AI tool “essentially just liberates us to get more done and have more time back in our life.”

But he’s careful to stick to his own moral lines. Any time he mentioned using an AI tool during our interview, he’d explain why he thought that was an appropriate choice. He said he was fine with AI use “as long as you’re using it ethically in the sense that you’re not copying somebody’s work and using it for your own.”

Drawing these lines can be difficult, however. Hayden, the talent agent, told me that as AI tools make low-budget films look better, it gets harder to make high-budget films, which employ the most people at the highest wage levels.

If anything, Klabin’s AI uptake is limited more by the current capabilities of AI models. Klabin is an experienced visual effects artist, and he finds AI products to generally be “not really good enough to be used in a final project.”

He gave me a concrete example. Rotoscoping is a process in which you trace out the subject of the shot so you can edit the background independently. It’s very labor-intensive—one has to edit every frame individually—so Klabin has tried using Runway’s AI-driven rotoscoping. While it can make for a decent first pass, the result is just too messy to use as a final project.

Klabin sent me this GIF of a series of rotoscoped frames from his upcoming movie. While the model does a decent job of identifying the people in the frame, its boundaries aren’t consistent from frame to frame. The result is noisy.

Current AI tools are full of these small glitches, so Klabin only uses them for tasks that audiences don’t see (like creating a movie pitch deck) or in contexts where he can clean up the result afterward.

Stephen Robles reviews Apple products on YouTube and other platforms. He uses AI in some parts of the editing process, such as removing silences or transcribing audio, but doesn’t see it as disruptive to his career.

Stephen Robles is a YouTuber, podcaster, and creator covering tech, particularly Apple. Credit: Stephen Robles

“I am betting on the audience wanting to trust creators, wanting to see authenticity,” he told me. AI video tools don’t really help him with that and can’t replace the reputation he’s sought to build.

Recently, he experimented with using ChatGPT to edit a video thumbnail (the image used to advertise a video). He got a couple of negative reactions about his use of AI, so he said he “might slow down a little bit” with that experimentation.

Robles didn’t seem as concerned about AI models stealing from creators like him. When I asked him about how he felt about Google training on his data, he told me that “YouTube provides me enough benefit that I don’t think too much about that.”

Professional thumbnail artist Antioch Hwang has a similarly pragmatic view toward using AI. Some channels he works with have audiences that are “very sensitive to AI images.” Even using “an AI upscaler to fix up the edges” can provoke strong negative reactions. For those channels, he’s “very wary” about using AI.

Antioch Hwang is a YouTube thumbnail artist. Credit: Antioch Creative

But for most channels he works for, he’s fine using AI, at least for technical tasks. “I think there’s now been a big shift in the public perception of these AI image generation tools,” he told me. “People are now welcoming them into their workflow.”

He’s still careful with his AI use, though, because he thinks that having human artistry helps in the YouTube ecosystem. “If everyone has all the [AI] tools, then how do you really stand out?” he said.

Recently, top creators have started using more rough-looking thumbnails for their videos. AI has made polished thumbnails too easy to create, so top creators are using what Hwang would call “poorly made thumbnails” to help videos stand out.

Hwang told me something surprising: even as AI makes it easier for creators to make thumbnails themselves, business has never been better for thumbnail artists, even at the lower end. He said that demand has soared because “AI as a whole has lowered the barriers for content creation, and now there’s more creators flooding in.”

Still, Hwang doesn’t expect the good times to last forever. “I don’t see AI completely taking over for the next three-ish years. That’s my estimated timeline.”

Everyone I talked to had different answers to when—if ever—AI would meaningfully disrupt their part of the industry.

Some, like Hwang, were pessimistic. Actor Erik Passoja told me he thought the big movie studios—like Warner Bros. or Paramount—would be gone in three to five years.

But others were more optimistic. Tess Dinerstein, the vertical drama actor, said, “I don’t think that verticals are ever going to go fully AI.” Even if it becomes technologically feasible, she argued, “that just doesn’t seem to be what the people want.”

Gille Klabin, the independent filmmaker, thought there would always be a place for high-quality human films. If someone’s work is “fundamentally derivative,” then they are at risk. But he thinks the best human-created work will still stand out. “I don’t know how AI could possibly replace the borderline divine element of consciousness,” he said.

The people who were most bullish on AI were, if anything, the least optimistic about their own career prospects. “I think at a certain point it won’t matter,” Kavan Cardoza told me. “It’ll be that anyone on the planet can just type in some sentences” to generate full, high-quality videos.

This might explain why Accetturo has become something of an AI evangelist; his newsletter tries to teach other filmmakers how to adapt to the coming AI revolution.

AI “is a tsunami that is gonna wipe out everyone” he told me. “So I’m handing out surfboards—teaching people how to surf. Do with it what you will.”

Kai Williams is a reporter for Understanding AI, a Substack newsletter founded by Ars Technica alum Timothy B. Lee. His work is supported by a Tarbell FellowshipSubscribe to Understanding AI to get more from Tim and Kai.

“Go generate a bridge and jump off it”: How video pros are navigating AI Read More »

ai-trained-on-bacterial-genomes-produces-never-before-seen-proteins

AI trained on bacterial genomes produces never-before-seen proteins

The researchers argue that this setup lets Evo “link nucleotide-level patterns to kilobase-scale genomic context.” In other words, if you prompt it with a large chunk of genomic DNA, Evo can interpret that as an LLM would interpret a query and produce an output that, in a genomic sense, is appropriate for that interpretation.

The researchers reasoned that, given the training on bacterial genomes, they could use a known gene as a prompt, and Evo should produce an output that includes regions that encode proteins with related functions. The key question is whether it would simply output the sequences for proteins we know about already, or whether it would come up with output that’s less predictable.

Novel proteins

To start testing the system, the researchers prompted it with fragments of the genes for known proteins and determined whether Evo could complete them. In one example, if given 30 percent of the sequence of a gene for a known protein, Evo was able to output 85 percent of the rest. When prompted with 80 percent of the sequence, it could return all of the missing sequence. When a single gene was deleted from a functional cluster, Evo could also correctly identify and restore the missing gene.

The large amount of training data also ensured that Evo correctly identified the most important regions of the protein. If it made changes to the sequence, they typically resided in the areas of the protein where variability is tolerated. In other words, its training had enabled the system to incorporate the rules of evolutionary limits on changes in known genes.

So, the researchers decided to test what happened when Evo was asked to output something new. To do so, they used bacterial toxins, which are typically encoded along with an anti-toxin that keeps the cell from killing itself whenever it activates the genes. There are a lot of examples of these out there, and they tend to evolve rapidly as part of an arms race between bacteria and their competitors. So, the team developed a toxin that was only mildly related to known ones, and had no known antitoxin, and fed its sequence to Evo as a prompt. And this time, they filtered out any responses that looked similar to known antitoxin genes.

AI trained on bacterial genomes produces never-before-seen proteins Read More »

google’s-latest-swing-at-chromebook-gaming-is-a-free-year-of-geforce-now

Google’s latest swing at Chromebook gaming is a free year of GeForce Now

Earlier this year, Google announced the end of its efforts to get Steam running on Chromebooks, but it’s not done trying to make these low-power laptops into gaming machines. Google has teamed up with Nvidia to offer a version of GeForce Now cloud streaming that is perplexingly limited in some ways and generous in others. Starting today, anyone who buys a Chromebook will get a free year of a new service called GeForce Now Fast Pass. There are no ads and less waiting for server slots, but you don’t get to play very long.

Back before Google killed its Stadia game streaming service, it would often throw in a few months of the Pro subscription with Chromebook purchases. In the absence of its own gaming platform, Google has turned to Nvidia to level up Chromebook gaming. GeForce Now (GFN), which has been around in one form or another for more than a decade, allows you to render games on a remote server and stream the video output to the device of your choice. It works on computers, phones, TVs, and yes, Chromebooks.

The new Chromebook feature is not the same GeForce Now subscription you can get from Nvidia. Fast Pass, which is exclusive to Chromebooks, includes a mishmash of limits and bonuses that make it a pretty strange offering. Fast Pass is based on the free tier of GeForce Now, but users will get priority access to server slots. So no queuing for five or 10 minutes to start playing. It also lacks the ads that Nvidia’s standard free tier includes. Fast Pass also uses the more powerful RTX servers, which are otherwise limited to the $10-per-month ($100 yearly) Performance tier.

Google’s latest swing at Chromebook gaming is a free year of GeForce Now Read More »

nasa-really-wants-you-to-know-that-3i/atlas-is-an-interstellar-comet

NASA really wants you to know that 3I/ATLAS is an interstellar comet

The HiRISE camera, meant to image Mars’ surface, was repurposed to capture 3I/ATLAS. Credit: NASA/JPL-Caltech/University of Arizona

As eccentricity continues to rise from there, the question shifts from “what shape is its trajectory?” to “how much does the Sun alter its path through the Solar System?” For 3I/Atlas, with an eccentricity of over six, the answer is “not very much at all.” The object has approached the inner Solar System along a reasonable approximation of a straight line, experienced a gentle bend around the Sun near Mars’ orbit, and now will be zipping straight out of the Solar System again.

So, the object clearly did not originate here, which means getting a better look at it is a high priority. Unfortunately, 3I/ATLAS’s closest approach to Earth’s orbit happened when it was on the far side of the Sun from Earth. We’ve been getting closer to it since, but the hardware that got the best views was all orbiting Mars and is designed largely to point down. NASA’s Nicky Fox, the associate administrator for Science, praised the operators for getting NASA’s hardware “pushed beyond their designed capabilities” when imaging the object.

That includes using the MAVEN mission (designed to study Mars’ atmosphere) to get spectral information, and the HiRISE camera, which captured the image below. Other images came from a solar observatory and two separate missions that are on their way to visit asteroids. Other hardware that can normally image objects like this, such as the Hubble and JWST, pivoted to image 3I/ATLAS as well.

What we now know

Hubble has gotten the best view of 3I/ATLAS; its data suggests that the comet is, at most, just a couple of kilometers across. It doesn’t show much variability over time, suggesting that, if it’s rotating, it’s doing so very slowly. It has shown some differences as it warmed up, first producing a jet of material on its side facing the Sun before radiation pressure pushed that behind it to form a tail. There is some indication that, as we saw during the Rosetta mission’s visit to one of our Solar System’s comets, most of the material may be jetting out of distinct “hotspots” on the comet’s surface.

NASA really wants you to know that 3I/ATLAS is an interstellar comet Read More »

massive-cloudflare-outage-was-triggered-by-file-that-suddenly-doubled-in-size

Massive Cloudflare outage was triggered by file that suddenly doubled in size

Cloudflare’s proxy service has limits to prevent excessive memory consumption, with the bot management system having “a limit on the number of machine learning features that can be used at runtime.” This limit is 200, well above the actual number of features used.

“When the bad file with more than 200 features was propagated to our servers, this limit was hit—resulting in the system panicking” and outputting errors, Prince wrote.

Worst Cloudflare outage since 2019

The number of 5xx error HTTP status codes served by the Cloudflare network is normally “very low” but soared after the bad file spread across the network. “The spike, and subsequent fluctuations, show our system failing due to loading the incorrect feature file,” Prince wrote. “What’s notable is that our system would then recover for a period. This was very unusual behavior for an internal error.”

This unusual behavior was explained by the fact “that the file was being generated every five minutes by a query running on a ClickHouse database cluster, which was being gradually updated to improve permissions management,” Prince wrote. “Bad data was only generated if the query ran on a part of the cluster which had been updated. As a result, every five minutes there was a chance of either a good or a bad set of configuration files being generated and rapidly propagated across the network.”

This fluctuation initially “led us to believe this might be caused by an attack. Eventually, every ClickHouse node was generating the bad configuration file and the fluctuation stabilized in the failing state,” he wrote.

Prince said that Cloudflare “solved the problem by stopping the generation and propagation of the bad feature file and manually inserting a known good file into the feature file distribution queue,” and then “forcing a restart of our core proxy.” The team then worked on “restarting remaining services that had entered a bad state” until the 5xx error code volume returned to normal later in the day.

Prince said the outage was Cloudflare’s worst since 2019 and that the firm is taking steps to protect against similar failures in the future. Cloudflare will work on “hardening ingestion of Cloudflare-generated configuration files in the same way we would for user-generated input; enabling more global kill switches for features; eliminating the ability for core dumps or other error reports to overwhelm system resources; [and] reviewing failure modes for error conditions across all core proxy modules,” according to Prince.

While Prince can’t promise that Cloudflare will never have another outage of the same scale, he said that previous outages have “always led to us building new, more resilient systems.”

Massive Cloudflare outage was triggered by file that suddenly doubled in size Read More »

rocket-lab-electron-among-first-artifacts-installed-in-ca-science-center-space-gallery

Rocket Lab Electron among first artifacts installed in CA Science Center space gallery

It took the California Science Center more than three years to erect its new Samuel Oschin Air and Space Center, including stacking NASA’s space shuttle Endeavour for its launch pad-like display.

Now the big work begins.

“That’s completing the artifact installation and then installing the exhibits,” said Jeffrey Rudolph, president and CEO of the California Science Center in Los Angeles, in an interview. “Most of the exhibits are in fabrication in shops around the country and audio-visual production is underway. We’re full-on focused on exhibits now.”

On Tuesday, the science center is marking the addition of the first artifacts to the Kent Kresa Space Gallery. Named for the former chairman and CEO of Northrop Grumman and former chairman of General Motors, the completed gallery will complement the Samuel Oschin Shuttle Gallery (featuring Endeavour) with three areas devoted to the themes of “Rocket Science,” “Robots in Space,” and “Humans in Space.”

Now in place are a space shuttle main engine (SSME), a walk-through segment of a shuttle solid rocket booster, and a Rocket Lab Electron rocket.

Erecting Electron

“The biggest thing we have put in—other than the space shuttle—was the Electron, which we think is really significant,” said Rudolph. “We’re really happy to show next-generation technologies from startup companies with new launch vehicles, particularly if the company is based in California. Our goal is to inspire and motivate the next generation, and we think that showing folks that there are still a lot of innovative things going on, happening in their backyard, is a really great opportunity to inspire kids and people of all ages.”

a large yellow crane is used to lift a long, black cylindrical artifact into place inside a museum

Credit: California Science Center

Founded in New Zealand in 2006 and now based in Long Beach, Rocket Lab developed the Electron as the first carbon-composite launch vehicle intended to service the small satellite market. It was also the first orbital-class rocket to use electric pump-fed engines. Having now flown 75 successful missions (including five suborbital flights), the Electron is the third most-launched small-lift rocket in history.

Of course, “small” can be relative. At 59 feet tall (18 meters), one floor of the Kresa gallery was not enough.

“The Electron rocket is actually at the center of a staircase, a section which is open all the way from level two, where you enter, to the lower level, which is 25 feet (7.6 meters) below. The Electron is standing up in that opening and it pretty much fills the whole thing,” said Rudolph.

Rocket Lab Electron among first artifacts installed in CA Science Center space gallery Read More »

critics-scoff-after-microsoft-warns-ai-feature-can-infect-machines-and-pilfer-data

Critics scoff after Microsoft warns AI feature can infect machines and pilfer data


Integration of Copilot Actions into Windows is off by default, but for how long?

Credit: Photographer: Chona Kasinger/Bloomberg via Getty Images

Microsoft’s warning on Tuesday that an experimental AI agent integrated into Windows can infect devices and pilfer sensitive user data has set off a familiar response from security-minded critics: Why is Big Tech so intent on pushing new features before their dangerous behaviors can be fully understood and contained?

As reported Tuesday, Microsoft introduced Copilot Actions, a new set of “experimental agentic features” that, when enabled, perform “everyday tasks like organizing files, scheduling meetings, or sending emails,” and provide “an active digital collaborator that can carry out complex tasks for you to enhance efficiency and productivity.”

Hallucinations and prompt injections apply

The fanfare, however, came with a significant caveat. Microsoft recommended users enable Copilot Actions only “if you understand the security implications outlined.”

The admonition is based on known defects inherent in most large language models, including Copilot, as researchers have repeatedly demonstrated.

One common defect of LLMs causes them to provide factually erroneous and illogical answers, sometimes even to the most basic questions. This propensity for hallucinations, as the behavior has come to be called, means users can’t trust the output of Copilot, Gemini, Claude, or any other AI assistant and instead must independently confirm it.

Another common LLM landmine is the prompt injection, a class of bug that allows hackers to plant malicious instructions in websites, resumes, and emails. LLMs are programmed to follow directions so eagerly that they are unable to discern those in valid user prompts from those contained in untrusted, third-party content created by attackers. As a result, the LLMs give the attackers the same deference as users.

Both flaws can be exploited in attacks that exfiltrate sensitive data, run malicious code, and steal cryptocurrency. So far, these vulnerabilities have proved impossible for developers to prevent and, in many cases, can only be fixed using bug-specific workarounds developed once a vulnerability has been discovered.

That, in turn, led to this whopper of a disclosure in Microsoft’s post from Tuesday:

“As these capabilities are introduced, AI models still face functional limitations in terms of how they behave and occasionally may hallucinate and produce unexpected outputs,” Microsoft said. “Additionally, agentic AI applications introduce novel security risks, such as cross-prompt injection (XPIA), where malicious content embedded in UI elements or documents can override agent instructions, leading to unintended actions like data exfiltration or malware installation.”

Microsoft indicated that only experienced users should enable Copilot Actions, which is currently available only in beta versions of Windows. The company, however, didn’t describe what type of training or experience such users should have or what actions they should take to prevent their devices from being compromised. I asked Microsoft to provide these details, and the company declined.

Like “macros on Marvel superhero crack”

Some security experts questioned the value of the warnings in Tuesday’s post, comparing them to warnings Microsoft has provided for decades about the danger of using macros in Office apps. Despite the long-standing advice, macros have remained among the lowest-hanging fruit for hackers out to surreptitiously install malware on Windows machines. One reason for this is that Microsoft has made macros so central to productivity that many users can’t do without them.

“Microsoft saying ‘don’t enable macros, they’re dangerous’… has never worked well,” independent researcher Kevin Beaumont said. “This is macros on Marvel superhero crack.”

Beaumont, who is regularly hired to respond to major Windows network compromises inside enterprises, also questioned whether Microsoft will provide a means for admins to adequately restrict Copilot Actions on end-user machines or to identify machines in a network that have the feature turned on.

A Microsoft spokesperson said IT admins will be able to enable or disable an agent workspace at both account and device levels, using Intune or other MDM (Mobile Device Management) apps.

Critics voiced other concerns, including the difficulty for even experienced users to detect exploitation attacks targeting the AI agents they’re using.

“I don’t see how users are going to prevent anything of the sort they are referring to, beyond not surfing the web I guess,” researcher Guillaume Rossolini said.

Microsoft has stressed that Copilot Actions is an experimental feature that’s turned off by default. That design was likely chosen to limit its access to users with the experience required to understand its risks. Critics, however, noted that previous experimental features—Copilot, for instance—regularly become default capabilities for all users over time. Once that’s done, users who don’t trust the feature are often required to invest time developing unsupported ways to remove the features.

Sound but lofty goals

Most of Tuesday’s post focused on Microsoft’s overall strategy for securing agentic features in Windows. Goals for such features include:

  • Non-repudiation, meaning all actions and behaviors must be “observable and distinguishable from those taken by a user”
  • Agents must preserve confidentiality when they collect, aggregate, or otherwise utilize user data
  • Agents must receive user approval when accessing user data or taking actions

The goals are sound, but ultimately they depend on users reading the dialog windows that warn of the risks and require careful approval before proceeding. That, in turn, diminishes the value of the protection for many users.

“The usual caveat applies to such mechanisms that rely on users clicking through a permission prompt,” Earlence Fernandes, a University of California, San Diego professor specializing in AI security, told Ars. “Sometimes those users don’t fully understand what is going on, or they might just get habituated and click ‘yes’ all the time. At which point, the security boundary is not really a boundary.”

As demonstrated by the rash of “ClickFix” attacks, many users can be tricked into following extremely dangerous instructions. While more experienced users (including a fair number of Ars commenters) blame the victims falling for such scams, these incidents are inevitable for a host of reasons. In some cases, even careful users are fatigued or under emotional distress and slip up as a result. Other users simply lack the knowledge to make informed decisions.

Microsoft’s warning, one critic said, amounts to little more than a CYA (short for cover your ass), a legal maneuver that attempts to shield a party from liability.

“Microsoft (like the rest of the industry) has no idea how to stop prompt injection or hallucinations, which makes it fundamentally unfit for almost anything serious,” critic Reed Mideke said. “The solution? Shift liability to the user. Just like every LLM chatbot has a ‘oh by the way, if you use this for anything important be sure to verify the answers” disclaimer, never mind that you wouldn’t need the chatbot in the first place if you knew the answer.”

As Mideke indicated, most of the criticisms extend to AI offerings other companies—including Apple, Google, and Meta—are integrating into their products. Frequently, these integrations begin as optional features and eventually become default capabilities whether users want them or not.

Photo of Dan Goodin

Dan Goodin is Senior Security Editor at Ars Technica, where he oversees coverage of malware, computer espionage, botnets, hardware hacking, encryption, and passwords. In his spare time, he enjoys gardening, cooking, and following the independent music scene. Dan is based in San Francisco. Follow him at here on Mastodon and here on Bluesky. Contact him on Signal at DanArs.82.

Critics scoff after Microsoft warns AI feature can infect machines and pilfer data Read More »

faced-with-naked-man,-doordasher-demands-police-action;-they-arrest-her-for-illegal-surveillance

Faced with naked man, DoorDasher demands police action; they arrest her for illegal surveillance

“The only justice I’m getting is exposing this man and having posted that video,” she added. “And it has gone viral. Now he can live with shame and embarrassment if people have seen it.”

“I’m the victim!” she said. “Is this making sense to any-fucking-body?”

Her numerous videos attracted huge followings—anywhere from 5 million to 30 million views each—and DoorDash eventually felt the need to respond.

“DoorDash never deactivates someone for reporting [sexual assault]—full stop,” said the company.

But, it added, “posting a video of a customer in their home, and disclosing their personal details publicly, is a clear violation of our policies. That is the sole reason that this Dasher’s account was deactivated, along with the customer’s, while we investigated. We’ve also ensured that the Dasher has full access to their earnings.”

Meanwhile, the police were doing something—but not something that Henderson wanted.

The cops determined that the nude man in question “was incapacitated and unconscious on his couch due to alcohol consumption.” Being drunk and naked inside your own home apparently does not qualify as sexual assault on a delivery driver, and the police department said in a press release yesterday that “the investigation by the Oswego Police Department determined that no sexual assault occurred.”

As part of their investigation, the cops found that Henderson had filmed the man and “subsequently posted the video to social media, where it drew significant attention.” This shifted their attention to Henderson’s decision to film and upload the video without the man’s consent.

The police eventually arrested Henderson, who is now charged with two felonies: “Unlawful Surveillance in the Second Degree” and “Dissemination of an Unlawful Surveillance Image in the First Degree.” She was released after being charged, and her case will be heard by the Oswego City Court.

Henderson has stopped releasing videos on TikTok about the situation.

Faced with naked man, DoorDasher demands police action; they arrest her for illegal surveillance Read More »

google-unveils-gemini-3-ai-model-and-ai-first-ide-called-antigravity

Google unveils Gemini 3 AI model and AI-first IDE called Antigravity


Google’s flagship AI model is getting its second major upgrade this year.

Google has kicked its Gemini rollout into high gear over the past year, releasing the much-improved Gemini 2.5 family and cramming various flavors of the model into Search, Gmail, and just about everything else the company makes.

Now, Google’s increasingly unavoidable AI is getting an upgrade. Gemini 3 Pro is available in a limited form today, featuring more immersive, visual outputs and fewer lies, Google says. The company also says Gemini 3 sets a new high-water mark for vibe coding, and Google is announcing a new AI-first integrated development environment (IDE) called Antigravity, which is also available today.

The first member of the Gemini 3 family

Google says the release of Gemini 3 is yet another step toward artificial general intelligence (AGI). The new version of Google’s flagship AI model has expanded simulated reasoning abilities and shows improved understanding of text, images, and video. So far, testers like it—Google’s latest LLM is once again atop the LMArena leaderboard with an ELO score of 1,501, besting Gemini 2.5 Pro by 50 points.

Gemini 3 LMArena

Credit: Google

Factuality has been a problem for all gen AI models, but Google says Gemini 3 is a big step in the right direction, and there are myriad benchmarks to tell the story. In the 1,000-question SimpleQA Verified test, Gemini 3 scored a record 72.1 percent. Yes, that means the state-of-the-art LLM still screws up almost 30 percent of general knowledge questions, but Google says this still shows substantial progress. On the much more difficult Humanity’s Last Exam, which tests PhD-level knowledge and reasoning, Gemini set another record, scoring 37.5 percent without tool use.

Math and coding are also a focus of Gemini 3. The model set new records in MathArena Apex (23.4 percent) and WebDev Arena (1487 ELO). In the SWE-bench Verified, which tests a model’s ability to generate code, Gemini 3 hit an impressive 76.2 percent.

So there are plenty of respectable but modest benchmark improvements, but Gemini 3 also won’t make you cringe as much. Google says it has tamped down on sycophancy, a common problem in all these overly polite LLMs. Outputs from Gemini 3 Pro are reportedly more concise, with less of what you want to hear and more of what you need to hear.

You can also expect Gemini 3 Pro to produce noticeably richer outputs. Google claims Gemini’s expanded reasoning capabilities keep it on task more effectively, allowing it to take action on your behalf. For example, Gemini 3 can triage and take action on your emails, creating to-do lists, summaries, recommended replies, and handy buttons to trigger suggested actions. This differs from the current Gemini models, which would only create a text-based to-do list with similar prompts.

The model also has what Google calls a “generative interface,” which comes in the form of two experimental output modes called visual layout and dynamic view. The former is a magazine-style interface that includes lots of images in a scrollable UI. Dynamic view leverages Gemini’s coding abilities to create custom interfaces—for example, a web app that explores the life and work of Vincent Van Gogh.

There will also be a Deep Think mode for Gemini 3, but that’s not ready for prime time yet. Google says it’s being tested by a small group for later release, but you should expect big things. Deep Think mode manages 41 percent in Humanity’s Last Exam without tools. Believe it or not, that’s an impressive score.

Coding with vibes

Google has offered several ways of generating and modifying code with Gemini models, but the launch of Gemini 3 adds a new one: Google Antigravity. This is Google’s new agentic development platform—it’s essentially an IDE designed around agentic AI, and it’s available in preview today.

With Antigravity, Google promises that you (the human) can get more work done by letting intelligent agents do the legwork. Google says you should think of Antigravity as a “mission control” for creating and monitoring multiple development agents. The AI in Antigravity can operate autonomously across the editor, terminal, and browser to create and modify projects, but everything they do is relayed to the user in the form of “Artifacts.” These sub-tasks are designed to be easily verifiable so you can keep on top of what the agent is doing. Gemini will be at the core of the Antigravity experience, but it’s not just Google’s bot. Antigravity also supports Claude Sonnet 4.5 and GPT-OSS agents.

Of course, developers can still plug into the Gemini API for coding tasks. With Gemini 3, Google is adding a client-side bash tool, which lets the AI generate shell commands in its workflow. The model can access file systems and automate operations, and a server-side bash tool will help generate code in multiple languages. This feature is starting in early access, though.

AI Studio is designed to be a faster way to build something with Gemini 3. Google says Gemini 3 Pro’s strong instruction following makes it the best vibe coding model yet, allowing non-programmers to create more complex projects.

A big experiment

Google will eventually have a whole family of Gemini 3 models, but there’s just the one for now. Gemini 3 Pro is rolling out in the Gemini app, AI Studio, Vertex AI, and the API starting today as an experiment. If you want to tinker with the new model in Google’s Antigravity IDE, that’s also available for testing today on Windows, Mac, and Linux.

Gemini 3 will also launch in the Google search experience on day one. You’ll have the option to enable Gemini 3 Pro in AI Mode, where Google says it will provide more useful information about a query. The generative interface capabilities from the Gemini app will be available here as well, allowing Gemini to create tools and simulations when appropriate to answer the user’s question. Google says these generative interfaces are strongly preferred in its user testing. This feature is available today, but only for AI Pro and Ultra subscribers.

Because the Pro model is the only Gemini 3 variant available in the preview, AI Overviews isn’t getting an immediate upgrade. That will come, but for now, Overviews will only reach out to Gemini 3 Pro for especially difficult search queries—basically the kind of thing Google thinks you should have used AI Mode to do in the first place.

There’s no official timeline for releasing more Gemini 3 models or graduating the Pro variant to general availability. However, given the wide rollout of the experimental release, it probably won’t be long.

Photo of Ryan Whitwam

Ryan Whitwam is a senior technology reporter at Ars Technica, covering the ways Google, AI, and mobile technology continue to change the world. Over his 20-year career, he’s written for Android Police, ExtremeTech, Wirecutter, NY Times, and more. He has reviewed more phones than most people will ever own. You can follow him on Bluesky, where you will see photos of his dozens of mechanical keyboards.

Google unveils Gemini 3 AI model and AI-first IDE called Antigravity Read More »

5-plead-guilty-to-laptop-farm-and-id-theft-scheme-to-land-north-koreans-us-it-jobs

5 plead guilty to laptop farm and ID theft scheme to land North Koreans US IT jobs

Each defendant also helped the IT workers pass employer vetting procedures. Travis and Salazar, for example, appeared for drug testing on behalf of the workers.

Travis, an active-duty member of the US Army at the time, received at least $51,397 for his participation in the scheme. Phagnasay and Salazar earned at least $3,450 and $4,500, respectively. In all, the fraudulent jobs earned roughly $1.28 million in salary payments from the defrauded US companies, the vast majority of which were sent to the IT workers overseas.

The fifth defendant, Ukrainian national Oleksandr Didenko, pleaded guilty to one count of aggravated identity theft, in addition to wire fraud. He admitted to participating in a “years-long scheme that stole the identities of US citizens and sold them to overseas IT workers, including North Korean IT workers, so they could fraudulently gain employment at 40 US companies.” Didenko received hundreds of thousands of dollars from victim companies who hired the fraudulent applicants. As part of the plea agreement, Didenko is forfeiting more than $1.4 million, including more than $570,000 in fiat and virtual currency seized from him and his co-conspirators.

In 2022, the US Treasury Department said that the Democratic People’s Republic of Korea employs thousands of skilled IT workers around the world to generate revenue for the country’s weapons of mass destruction and ballistic missile programs.

“In many cases, DPRK IT workers represent themselves as US-based and/or non-North Korean teleworkers,” Treasury Department officials wrote. “The workers may further obfuscate their identities and/or location by sub-contracting work to non North Koreans. Although DPRK IT workers normally engage in IT work distinct from malicious cyber activity, they have used the privileged access gained as contractors to enable the DPRK’s malicious cyber intrusions. Additionally, there are likely instances where workers are subjected to forced labor.”

Other US government advisories posted in 2023 and 2024 concerning similar programs have been removed with no explanation.

In Friday’s release, the Justice Department also said it’s seeking the forfeiture of more than $15 million worth of USDT, a cryptocurrency stablecoin pegged to the US dollar, that the FBI seized in March from North APT38 actors. The seized funds were derived from four heists APT38 carried out, two in July 2023 against virtual currency payment processors in Estonia and Panama and two in November 2023 thefts from exchanges in Panama and Seychelles.

Justice Department attempts to locate, seize, and forfeit all the stolen assets remain ongoing because APT38 has laundered them through virtual currency bridges, mixers, exchanges, and over-the-counter traders, the Justice Department said.

5 plead guilty to laptop farm and ID theft scheme to land North Koreans US IT jobs Read More »