Author name: Kris Guyer


Google’s new command-line tool can plug OpenClaw into your Workspace data

The command line is hot again. For some people, command lines were never not hot, of course, but terminal-based workflows are becoming much more common in the age of AI. Google launched a Gemini command-line tool last year, and now it has a new AI-centric command-line option for cloud products. The new Google Workspace CLI bundles the company’s existing cloud APIs into a package that makes it easy to integrate with a variety of AI tools, including OpenClaw. How do you know this setup won’t blow up and delete all your data? That’s the fun part—you don’t.

There are some important caveats with the Workspace tool. While this new GitHub project is from Google, it’s “not an officially supported Google product.” So you’re on your own if you choose to use it. The company notes that functionality may change dramatically as Google Workspace CLI continues to evolve, and that could break workflows you’ve created in the meantime.

For people who are interested in tinkering with AI automations and don’t mind the inherent risks, Google Workspace CLI has a lot to offer, even at this early stage. It includes the APIs for every Workspace product, including Gmail, Drive, and Calendar. It’s designed for use by both humans and AI agents, but like everything else Google does now, there’s a clear emphasis on AI.

The tool supports structured JSON outputs, and there are more than 40 agent skills included, says Google Cloud director Addy Osmani. The focus of Workspace CLI seems to be on agentic systems that can create command-line inputs and directly parse JSON outputs. The integrated tools can load and create Drive files, send emails, create and edit Calendar appointments, send chat messages, and much more.
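
Here’s a minimal sketch of the agent-facing pattern described above: invoke the CLI, get structured JSON back, and parse it programmatically. The `gws` command name, subcommands, and flags are hypothetical stand-ins, since we haven’t documented the tool’s actual syntax:

```python
import json
import subprocess

# Hypothetical invocation; the real Google Workspace CLI's command names
# and flags may differ. The point is the pattern: structured JSON out,
# parsed directly by an agent or script rather than scraped from text.
result = subprocess.run(
    ["gws", "calendar", "list-events", "--format", "json"],
    capture_output=True, text=True, check=True,
)

events = json.loads(result.stdout)
for event in events:
    print(event.get("summary"), event.get("start"))
```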



MS exec: Microsoft’s next console will play “Xbox and PC games”

Last summer, we here at Ars made the argument that Microsoft’s next Xbox console should give up the walled-garden approach and just run Windows already. Now, newly named Microsoft Executive Vice President for Gaming Asha Sharma has strongly hinted that this is indeed the direction Microsoft is going, saying its next-generation console will “play your Xbox and PC games.”

In a social media post Thursday afternoon, Sharma said that “our commitment to the return of Xbox” would include a new console codenamed Project Helix that “will lead in performance and play your Xbox and PC games.” Sharma said she would be discussing that commitment and that console itself with developers and partners at her first Game Developers Conference next week.

Sharma’s statement leaves a little wiggle room for Project Helix to be something other than a full-fledged Windows-based living room gaming box. The coming console’s access to PC games could be limited to Microsoft’s existing streaming solution via PC Game Pass, for instance, or to games designed for Microsoft’s own Xbox-branded PC SDK and the PC Xbox app.

Still, a plain reading of Sharma’s statement suggests that Microsoft is getting ready to open up its next console to a complete Windows installation, with the ability to play tens of thousands of existing PC games. That doesn’t come as a complete shock, considering that Microsoft already used the Xbox name for last year’s Windows-based ROG Xbox Ally (and its somewhat console-esque full-screen “Xbox Experience”). Microsoft has also been slowly reducing the number of games that are fully exclusive to Xbox consoles, lowering the value of a walled-off console platform (Sony, meanwhile, pulled back this week from its recent trend of releasing first-party titles on PC as well). Meanwhile, Valve’s coming Steam Machine is threatening to bring Windows-free PC gaming to living rooms everywhere in the near future.



OpenAI introduces GPT-5.4 with more knowledge-work capability

Additionally, there are improvements to visual understanding: the model can now more carefully analyze images up to 10.24 million pixels in size, or up to a 6,000-pixel maximum dimension. OpenAI also claims responses from this model are 18 percent less likely to contain factual errors than before.
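
For scale, here’s a quick sketch of what those two limits imply for input images. The 10.24-megapixel and 6,000-pixel figures come from the paragraph above; the fit-check logic itself is just an illustration, not OpenAI’s documented behavior:

```python
MAX_PIXELS = 10_240_000   # 10.24 million pixels
MAX_DIM = 6_000           # maximum width or height in pixels

def fits(width: int, height: int) -> bool:
    """Check whether an image falls within the stated analysis limits."""
    return width * height <= MAX_PIXELS and max(width, height) <= MAX_DIM

print(fits(3200, 3200))   # True: exactly 10.24 MP, square
print(fits(6000, 1700))   # True: 10.2 MP at the maximum dimension
print(fits(6000, 1800))   # False: 10.8 MP exceeds the pixel budget
```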

ChatGPT reportedly lost some users to competitor Anthropic in recent days, after OpenAI announced a deal with the Pentagon in the wake of a public feud between the Trump administration and Anthropic over limitations Anthropic wanted to impose on military applications of its models. However, it’s unclear just how many folks jumped ship or whether that led to a substantial dip in the product’s massive base of over 900 million users.

To take advantage of the situation, Anthropic rolled out the once-subscriber-only memory feature to free users and introduced a tool for importing memory from elsewhere. Anthropic says March 2 was its largest single day ever for new sign-ups.

OpenAI needs to compete on capability as well as on cost and token efficiency to maintain its relative popularity with users, and this update aims to support that objective.

GPT-5.4 is available to users of the ChatGPT web and native apps, Codex, and the API starting today. Subscribers to Plus, Team, and Pro are also getting GPT-5.4 Thinking, and GPT-5.4 Pro is hitting the API, Edu, and Enterprise.



Trump gets data center companies to pledge to pay for power generation

On Wednesday, the Trump administration announced that a large collection of tech companies had signed on to what it’s calling the Ratepayer Protection Pledge. By agreeing, the initial signatories—Amazon, Google, Meta, Microsoft, OpenAI, Oracle, and xAI—are saying they will pay for the new generation and transmission capacities needed for any additional data centers they build. But the agreement has no enforcement mechanism, and it will likely run into issues with hardware supplies. It also ignores basic economics.

Other than that, it seems like a great idea.

What’s being agreed to

The agreement is quite simple, laying out five points, of which the first three are key: the companies building data centers pledge to pay for new generating capacity, either building it themselves or paying for it as part of a new or expanded power plant. They’ll also pay for any transmission infrastructure needed to connect their data centers and the new supply to the grid, and they’ll cover these costs whether or not the power ultimately gets used by their facilities.

The companies also pledge to consider allowing the local grid to use on-site backup generators to handle emergency power shortages affecting the community. They will also hire and train locally when they build new data centers.

The agreement suggests that these promises will protect American consumers from price hikes due to the expansion of data centers and will somehow “lower electricity costs for consumers in the long term.” How that will happen is not specified.

Also missing from the agreement is any sort of enforcement mechanism. If a company decides to ignore the agreement, the worst it is guaranteed to suffer is bad publicity, something these companies already have experience handling. That said, Trump has been known to resort to blatantly illegal tactics to pressure companies to conform to his wishes, so ignoring the agreement carries risks.

That’s important because the companies will struggle to live up to the agreement. (Though Google, for its part, told Ars that it has typically followed the guidelines as a normal part of its process for building new data centers.)



Lanterns teaser swaps superhero hijinks for gritty realism

James Gunn and Peter Safran injected a much-needed shot of levity into the DC Universe when they took over the franchise and launched their “Gods and Monsters” chapter. But they’re getting a bit more serious with the latest installment: Lanterns, an eight-episode series that reimagines the Green Lantern mythology as a gritty prestige crime drama/spy thriller in the vein of True Detective and Slow Horses.

The logline says the show will focus on two Green Lanterns who find themselves “drawn into a dark, Earth-based mystery as they investigate a murder in the American Heartland” (i.e., Nebraska). Will it work? We’ll see. This series was barely on my radar before, but the extended teaser that dropped last night is tonally unique for the DCU and so well done that the show now has a place on my must-watch TV list for 2026.

Kyle Chandler plays Hal Jordan, a former test pilot who is nearing his retirement from the Green Lantern Corps. He’s training a new recruit, John Stewart Jr. (Aaron Pierre), to replace him. Nathan Fillion reprises his Superman role as the obnoxious Guy Gardner. The cast also includes Kelly MacDonald as Kerry, a small-town family-oriented sheriff; Jason Ritter as Billy Macon, Kerry’s husband; Garret Dillahunt as William Macon, Kerry’s cowboy father-in-law; Poorna Jagannathan as a woman named Zoe; Ulrich Thomsen as Sinestro, a former Corps member who’s gone rogue; and Paul Ben-Victor as an extraterrestrial called Antaan.

Sherman Augustus plays John Stewart Sr., with J. Alphonse Nicholson playing the younger version; Nicole Ari Parker plays Bernadette Stewart (mother to John Jr.), with Jasmine Cephas Jones playing the young version of the character. In addition, Chris Coy plays a suspiciously nervous truck driver, Waylon Sanders; Cary Christopher plays a gifted child named Noah; and Laura Linney and Paula Patton will appear in as-yet-undisclosed guest roles.



TerraPower gets OK to start construction of its first nuclear plant

On Wednesday, the US Nuclear Regulatory Commission announced that it had issued its first construction approval in nearly a decade. The approval will allow a company called TerraPower to begin work on a site in Kemmerer, Wyoming. That company is most widely recognized for being financially backed by Bill Gates, but it’s attempting to build a radically new reactor, one that is sodium-cooled and incorporates energy storage as part of its design.

This doesn’t necessarily mean it will gain approval to operate the reactor, but it’s a critical step for the company.

The TerraPower design, which it calls Natrium and which has been developed jointly with GE Hitachi, has several novel features. Probably the most notable of these is the use of liquid sodium for cooling and heat transfer. This allows the primary coolant to remain liquid, avoiding the challenges posed by the high-pressure steam used in water-cooled reactors. But it also carries risk: sodium is highly reactive when exposed to air or water. Natrium is also a fast-neutron reactor, which could allow it to consume some isotopes that would otherwise end up as radioactive waste in more traditional reactor designs.

The reactor is also relatively small compared to most current nuclear plants (245 megawatts versus roughly one gigawatt), and it incorporates energy storage. Rather than using the heat extracted by the sodium to boil water immediately, the plant will put that heat into a salt-based storage material, where it can either be used to generate electricity right away or held for later use. This will allow the plant to operate around renewable power, which would otherwise undercut it on price. The storage system will also allow it to temporarily output up to 500 MW of electricity.
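
Some rough arithmetic on those figures: the 245 MW and 500 MW numbers come from the paragraph above, while the storage capacity below is a made-up assumption, included only to show how the boost duration would be computed:

```python
base_output_mw = 245    # continuous electrical output (from the article)
boost_output_mw = 500   # temporary peak output (from the article)

# Power the salt-based storage must supply on top of the reactor's output.
extra_mw = boost_output_mw - base_output_mw

assumed_storage_mwh = 1_000  # hypothetical thermal-storage capacity

hours = assumed_storage_mwh / extra_mw
print(f"Storage must supply {extra_mw} MW during boost; "
      f"{assumed_storage_mwh} MWh would sustain that for {hours:.1f} hours")
```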



Large genome model: Open source AI trained on trillions of bases


System can identify genes, regulatory sequences, splice sites, and more.

Late in 2025, we covered the development of an AI system called Evo that was trained on massive numbers of bacterial genomes. So many that, when prompted with sequences from a cluster of related genes, it could correctly identify the next one or suggest a completely novel protein.

That system worked because bacteria tend to cluster related genes together—something that’s not true in organisms with complex cells, which tend to have equally complex genome structures. Given that, our coverage noted, “It’s not clear that this approach will work with more complex genomes.”

Apparently, the team behind Evo viewed that as a challenge, because today it is describing Evo 2, an open source AI that has been trained on genomes from all three domains of life (bacteria, archaea, and eukaryotes). After training on trillions of base pairs of DNA, Evo 2 developed internal representations of key features in even complex genomes like ours, including things like regulatory DNA and splice sites, which can be challenging for humans to spot.

Genome features

Bacterial genomes are organized along relatively straightforward principles. Any genes that encode proteins or RNAs are contiguous, with no interruptions in the coding sequence. Genes that perform related functions, like metabolizing a sugar or producing an amino acid, tend to be clustered together, allowing them to be controlled by a single, compact regulatory system. It’s all straightforward and efficient.

Eukaryotes are not like that. The coding sections of genes are interrupted by introns, which don’t encode anything. Genes are regulated by sequences that can be scattered across hundreds of thousands of base pairs. The sequences that define the edges of introns or the binding sites of regulatory proteins are all weakly defined—while they have a few bases that are absolutely required, most positions simply have an above-average tendency to hold a specific base (something like “45 percent of the time it’s a T”). Surrounding all of this in most eukaryotic genomes is a huge amount of DNA that has been termed junk: inactive viruses, terminally damaged genes, and so on.
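
That kind of fuzzy, position-by-position preference is usually summarized as a position weight matrix built from aligned examples of a motif. Here’s a minimal sketch using made-up aligned splice-site sequences (real motif tables are built from thousands of annotated sites):

```python
from collections import Counter

# Hypothetical aligned 5' splice-site sequences, for illustration only.
sites = ["CAGGTAAGT", "AAGGTGAGT", "CAGGTATGT", "TAGGTAAGA", "CAGGTAAGG"]

for pos in range(len(sites[0])):
    counts = Counter(seq[pos] for seq in sites)
    freqs = {base: counts.get(base, 0) / len(sites) for base in "ACGT"}
    best = max(freqs, key=freqs.get)
    # Prints lines like: "position 2: G 100% of the time" for required
    # bases, and much weaker preferences at the fuzzy positions.
    print(f"position {pos}: {best} {freqs[best]:.0%} of the time")
```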

That complexity has made eukaryotic genomes more difficult to interpret. And while a lot of specialized tools have been developed to identify things like splice sites, they’re all sufficiently error-prone that the errors become a problem when you’re analyzing something as large as a 3-billion-base genome. We can learn a lot more by making evolutionary comparisons and looking for sequences that have been conserved, but there are limits to that, and we’re often just as interested in the differences between species.

These sorts of statistical patterns, however, are well suited to neural networks, which are great at recognizing subtle regularities that can be impossible to pick out by eye. But you’d need absolutely massive amounts of data, and the computing time to process it, to pick out some of these subtle features.

We now have the raw genome data that the process needs. Putting together a system to feed it into an effective AI training program, however, remained an open problem, and that’s the challenge the team behind Evo took on.

Training a large genome model

The foundation of the Evo 2 system is a convolutional neural network called StripedHyena 2. The training took place in two stages. The initial stage focused on teaching the system to identify important genome features by feeding it sequences rich in them in chunks about 8,000 bases long. After that, there was a second stage in which sequences were fed a million bases at a time to provide the system the opportunity to identify large-scale genome features.
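
As a rough illustration of that two-stage setup, here’s a minimal sketch of how sequence windows might be cut for each stage. The window sizes come from the description above; everything else (the placeholder genome, the stride choices) is assumed for illustration:

```python
def windows(genome: str, size: int, stride: int):
    """Yield fixed-size chunks of a genome string for training."""
    for start in range(0, len(genome) - size + 1, stride):
        yield genome[start:start + size]

genome = "ACGT" * 500_000  # placeholder; a real pipeline streams FASTA files

# Stage 1: short, feature-dense windows of about 8,000 bases.
stage1 = list(windows(genome, size=8_000, stride=8_000))

# Stage 2: million-base windows so the model can pick up
# large-scale genome structure.
stage2 = list(windows(genome, size=1_000_000, stride=1_000_000))

print(len(stage1), "short windows;", len(stage2), "long windows")
```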

The researchers trained two versions of their system on a dataset called OpenGenome2, which contains 8.8 trillion bases from all three domains of life, as well as viruses that infect bacteria. (They did not include viruses that attack eukaryotes because they were concerned that the system could be misused to create threats to humans.) One version had 7 billion parameters and was tuned using 2.4 trillion bases; the full version, with 40 billion parameters, was trained on the complete OpenGenome2 dataset.

The logic behind the training is pretty simple: if something’s important enough to have been evolutionarily conserved across a lot of species, it will show up in multiple contexts, and the system should see it repeatedly during training. “By learning the likelihood of sequences across vast evolutionary datasets, biological sequence models capture conserved sequence patterns that often reflect functional importance,” the researchers behind the work write. “These constraints allow the models to perform zero-shot prediction without any task-specific fine-tuning or supervision.”
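
In code, that zero-shot scoring amounts to asking the model how probable a sequence is, base by base. A minimal sketch, with a toy uniform model standing in for a trained one (the `prob` interface here is hypothetical, not Evo 2’s actual API):

```python
import math

class ToyModel:
    """Stand-in for a trained genome model; assigns uniform probabilities."""
    def prob(self, base: str, context: str) -> float:
        return 0.25  # a real model would condition on the preceding context

def sequence_log_likelihood(model, seq: str) -> float:
    """Sum of per-base log-probabilities under the model."""
    return sum(math.log(model.prob(base, seq[:i]))
               for i, base in enumerate(seq))

print(sequence_log_likelihood(ToyModel(), "ATGGCCAAGTAA"))
```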

That last aspect is important. We could, for example, tell it what known splice sites look like, which might help it pick out additional ones. But that might make it harder for the model to flag unusual splice sites that we haven’t identified yet. Skipping the fine-tuning might also help it identify genome features that we’re not aware of at all at the moment but that could become apparent through future research.

All of this has now been made available to the public. “We have made Evo 2 fully open, including model parameters, training code, inference code, and the OpenGenome2 dataset,” the paper announces.

The researchers also used a system that can identify internal features in neural networks to poke around inside of Evo 2 and figure out what things it had learned to recognize. They trained a separate neural network to recognize the firing patterns in Evo 2 and identify high-level features in it. It clearly recognized protein-coding regions and the boundaries of the introns that flanked them. It was also able to recognize some structural features of proteins within the coding regions (alpha helices and beta sheets), as well as mutations that disrupt their coding sequence. Even something like mobile genetic elements (which you can think of as DNA-level parasites) ended up with a feature within Evo 2.
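
The paper’s interpretability setup trains a separate network on Evo 2’s internal activations. A common lightweight stand-in for that idea is a linear probe, sketched here on synthetic data (the activations and labels below are random placeholders, so this probe learns nothing real; it only shows the recipe):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Stand-ins for per-position hidden activations from the genome model,
# labeled by whether each position falls inside a protein-coding region.
activations = rng.normal(size=(1_000, 64))
labels = rng.integers(0, 2, size=1_000)

# Fit a linear probe: if a feature is linearly decodable from the
# activations, the model has, in some sense, learned to represent it.
probe = LogisticRegression(max_iter=1_000).fit(activations, labels)
print("probe accuracy on random placeholders:", probe.score(activations, labels))
```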

What is this good for?

To test the system, the researchers made single-base mutations and fed the altered sequences into Evo 2 to see how it responded. Evo 2 could detect problems when the mutations affected the sites in DNA where transcription into RNA starts, or the sites where translation of that RNA into protein begins. It also recognized the severity of mutations: those that would interrupt protein translation, such as the introduction of stop signals, were identified as more significant changes than those that left the translation intact.
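
Scoring a mutation in this framework is just a likelihood comparison: score the reference sequence, score the mutated one, and look at the difference. A sketch that reuses the `sequence_log_likelihood` helper and `ToyModel` stand-in from the earlier snippet:

```python
def variant_effect(model, ref: str, pos: int, alt: str) -> float:
    """Log-likelihood change caused by a single-base substitution;
    more negative means the model finds the variant less plausible."""
    mut = ref[:pos] + alt + ref[pos + 1:]
    return (sequence_log_likelihood(model, mut)
            - sequence_log_likelihood(model, ref))

# With the toy uniform model this is always 0.0; a trained model would
# penalize disruptive changes, such as a new premature stop codon,
# far more than a synonymous substitution.
print(variant_effect(ToyModel(), "ATGGCCAAGTAA", pos=4, alt="T"))
```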

It also recognized when sequences weren’t translated at all. Many key cellular functions are carried out directly by RNAs, and Evo 2 was able to recognize when mutations disrupted those, as well.

Impressively, the ability to recognize features in eukaryotic genomes occurred without the loss of its ability to recognize them in bacteria and archaea. In fact, the system seemed to be able to work out what species it was working in. A number of evolutionary groups use genetic codes with a different set of signals to stop the translation of proteins. Evo 2 was able to recognize when it was looking at a sequence from one of those species, and used the correct genetic code for them.

It was also good at recognizing features that tolerate a lot of variability, such as sites that signal where to splice RNAs to remove introns from the coding sequence of proteins. By some measures, it was better than software specialized for that task. The same was true when evaluating mutations in the BRCA2 gene, where many of the mutations are associated with cancer. Given additional training on known BRCA2 mutations, its performance improved further.

Overall, Evo 2 seems great for evaluating genomes and identifying key features. The researchers who built it suggest it could serve as a good automated tool for preliminary genome annotation.

But the striking thing about the early version of Evo was that, when prompted with a chunk of sequence that included known bacterial genes, some of its responses included entirely new proteins with related functions. Now that the system has been trained on more complex eukaryotic genes, can it do the same?

We don’t entirely know. If given a bunch of DNA from yeast (a eukaryote), it would respond with a sequence that included functional RNAs, and gene-like sequences with regulatory information and splice sites. But the researchers didn’t test whether any of the proteins did anything in particular. And it’s difficult to see how they could even do that test. With bacterial genes, they could safely assume that the AI-generated gene should be doing something related to the nearby genes. But that’s generally not the case in eukaryotes, so it’s difficult to guess what functions they should even test for.

In a somewhat more informative test, the researchers asked Evo 2 to make some regulatory DNA that was active in one cell type and not another, after giving it information about which sequences were active in each of those cell types. The sequences that came out were then inserted into these cells and tested, but the results were pretty weak, with only 17 percent having activity that differed by a factor of two or more between the two cell types. That’s not nothing, but it isn’t in the same realm as designing brand-new proteins.

What’s next?

Overall, given that this has come out less than four months after the paper describing the original Evo, it’s not at all surprising that there wasn’t more work done to test what Evo 2 can do for designing biologically relevant DNA sequences. Biology experiments are hard and time-consuming, and it’s not always easy to judge in advance which ones will provide the most compelling information. So we’ll probably have to wait months to years to find out whether the community finds interesting things to do with Evo 2, and whether it’s good at solving any useful protein design problems.

There’s also the question of whether further training and specialization can create Evo 2 relatives that are especially good at specific tasks, such as evaluating genomes from cancer cells or annotating newly sequenced genomes. To an extent, it appears the research team wanted to get this out so that others could start exploring how it might be put to use; that’s consistent with the fact that all of the software was made available.

The big open question is whether this system has identified anything that we don’t know how to test for. Things like intron/exon boundaries and regulatory DNA have been subjected to decades of study, so we already know how to look for them and can recognize when Evo 2 spots them. But we’ve discovered a steady stream of new features in the genome—CRISPR repeats, microRNAs, and more—over the past decades. It remains technically possible that there are features in the genome we’re not aware of yet and that Evo 2 has picked them out.

It’s possible to imagine ways to use the tools described here to query Evo 2 and pick out new genome features. So I’m looking forward to seeing what might ultimately come out of that sort of work.

Nature, 2026. DOI: 10.1038/s41586-026-10176-5 (About DOIs).


John is Ars Technica’s science editor. He has a Bachelor of Arts in Biochemistry from Columbia University, and a Ph.D. in Molecular and Cell Biology from the University of California, Berkeley. When physically separated from his keyboard, he tends to seek out a bicycle, or a scenic location for communing with his hiking boots.



Google and Epic announce settlement to end app store antitrust case

Google is in the midst of rewriting the rules for mobile applications, spurred by ongoing legal cases and an apparent desire to clamp down on perceived security weaknesses. Late last year, Google and Epic concocted a settlement that would end the long-running antitrust dispute that stemmed from Fortnite fees. The sides have now announced an updated version of the agreement with new changes aimed at placating US courts and putting this whole mess in the rearview mirror. The gist is that Android will get more app stores, and developers will pay lower fees.

A US court ruled against Google in the case in 2023, and the remedies announced in 2024 threatened to upend Google’s Play Store model. It tried unsuccessfully to have the verdict reversed, but then Epic came to the rescue. In late 2025, the companies announced a settlement that skipped many of the court’s orders.

Epic leadership professed interest in leveling the playing field for all developers on Android’s platform. But US District Judge James Donato expressed skepticism of the settlement in January, noting that it may be a “sweetheart deal” that benefited Epic more than other developers. The specifics of the arrangement were not fully disclosed, but it included lower Play Store fees, cross-licensing, attorneys’ fees, and other partnership offers.

It’s starting to look like both companies want to wrap up this case. For Epic, this all started as a way to avoid paying Google a 30 percent cut of Fortnite purchases—the game has been banned from the Play Store this whole time. Google, meanwhile, is in the midst of a major change to Android app distribution with its developer verification program. After all these years, the end is in sight. So the new settlement includes more explicit limits on Play Store fees and resurrects one of Donato’s more far-reaching remedies.

Google’s “new era” of apps

Representatives for Epic and Google have both expressed enthusiastic support for the newly announced settlement, which is subject to Judge Donato’s approval. The parties say the agreement will resolve their dispute globally, not only in the US.

The settlement affirms that developers in the Play Store will be able to steer users to other forms of payment. This is what got Fortnite pulled from the Play Store (and Apple App Store) back in 2020. When developers choose to use Google’s billing platform, they’ll pay lower fees as well.



After a rocky six years, Sony cancels future single-player PC game releases

Finally, Bloomberg’s sources cautioned that Sony’s strategy for single-player releases could change again at some point in the future.

Historically, Sony did not release its first-party games on PCs. That began to change in 2020, and the company has put out titles like Horizon Zero Dawn, Helldivers 2, and Ghost of Tsushima on PCs, among others. Sony’s PC launch experiments haven’t been without confusion or drama, however.

The company was inconsistent about which titles reached the platform and about the timelines for those releases. Single-player titles hit Steam months or even years after their console releases, long after the gaming community buzz around them had died down.

Further, some titles required players to sign in to a PlayStation account to access core features, which wasn’t a popular choice with everyone, and the back-and-forth on that policy felt chaotic to many players.

Sony has been less decisive about its PC strategy compared to the other two major console manufacturers. Nintendo simply does not release its games on PC at all, while Microsoft has released all of its first-party Xbox titles on PC.

Bloomberg also notes that some recent releases have not sold as well on PC as hoped, suggesting that Sony’s test-the-waters approach has found said water lukewarm.



Downdetector, Speedtest sold to IT service provider Accenture in $1.2B deal

In a statement, Accenture CEO and chair Julie Sweet said:

By acquiring Ookla, we will help our clients across business and government scale AI safely and build the trusted data foundations they need to deliver the reliable, seamless connectivity that creates value.

Current Accenture public sector clients include the US Air Force, the US Social Security Administration, and, recently, the US Department of State.

Speedtest and Downdetector are popular among people seeking something to help quickly test their current internet speed and the status of online services, respectively. Downdetector is often cited by media reports discussing the availability of websites, apps, banks, and more.

Under Ziff Davis, both services also have business-to-business (B2B) applications. Using Speedtest, for instance, Ookla gathers, aggregates, and analyzes data for “billions of mobile network samples daily, which measure radio signal levels, network coverage, and availability, and [quality of experience] metrics for a number of connected experiences, such as streaming video, video conferencing, gaming, web browsing, and CDN and cloud provider performance,” Ookla says. Currently, Speedtest claims telecommunications operators, regulatory and trade bodies, analysts, journalists, and nonprofits as B2B customers.

Downdetector Explorer, meanwhile, is a monitoring tool that’s supposed to help businesses detect outages. Customers include streaming services, banks, social networks, and communication service providers.

Should Accenture’s acquisition close, the IT consultant will similarly use data from Speedtest and Downdetector to inform clients, and individual users will be subject to a new privacy policy and any other changes Accenture potentially makes.

An Accenture spokesperson told Ars Technica that Accenture plans to operate the Ookla “business as it operates today.” 



What we can learn from scientific analysis of Renaissance recipes


“a key change in how people constructed knowledge”

Multispectral imaging, proteomics, historical texts yield new insights into 16th-century medical manuals.

Credit: The John Rylands Research Institute and Library, The University of Manchester

Forget “eye of newt and toe of frog/wool of bat and tongue of dog.” People in the 16th century were more akin to DIY scientists than Macbeth’s three witches when it came to concocting home remedies for everything from hair loss and toothache to kidney stones and fungal infections. Medical manuals targeted to the layperson were hugely popular at the time, according to Stefan Hanss, an early modern historian at the University of Manchester in the UK. “Reader-practitioners” would tinker with the various recipes, tweaking them as needed and making personalized notes in the margins. And they left telltale protein traces behind as they did so.

Hanss is part of an interdisciplinary team of archaeologists, chemists, historians, conservators, and materials scientists who have analyzed trace proteins from the fingerprints of Renaissance people rifling through the pages of medical manuals. The team reported their findings in a paper published in The American Historical Review. It’s the first time researchers have used proteomics to analyze Renaissance recipes, enhanced further by in-depth archival research to place the scientific results in the proper historical context.

“We have so many recipes of that time, [including] cosmetic, medical, and culinary recipes, as well as handwritten recipes passed down for generations,” Hanss told Ars. “It’s really a key element of Renaissance culture, and [the manuscripts] are all covered with scribbled marginalia of [past] users. Experimentation was everywhere. It’s not only about book-learned knowledge but hands-on practical knowledge. It’s a key change in the way people constructed knowledge at that time.”

As previously reported, a number of analytical techniques have emerged over the last few decades to create historical molecular records of the culture in which various artworks were created. For instance, studying the microbial species that congregate on works of art may lead to new ways to slow down the deterioration of priceless aging art. Case in point: Scientists analyzed the microbes found on seven of Leonardo da Vinci’s drawings in 2020 using a third-generation sequencing method known as Nanopore, which uses protein nanopores embedded in a polymer membrane for sequencing. They combined the Nanopore sequencing with a whole-genome-amplification protocol and found that each drawing had its own unique microbiome.

Mass spectrometry-based proteomics is a relative newcomer to the field and is capable of providing a thorough and very detailed characterization of any protein residues present in a given sample, as well as any accumulated damage. The technique is so sensitive that less sample material is needed compared to other methods. And unlike, say, gas chromatography-mass spectrometry, it’s also capable of characterizing all proteins present in a sample (regardless of the complexity of the mixture), rather than being narrowly targeted to predefined proteins. In 2023, scientists used this approach to discover that beer byproducts were popular canvas primers for artists of the Danish Golden Age. Hanss et al. are extending this methodology to Renaissance medical manuals.

A thriving DIY medical marketplace

This latest study has its roots in an event Hanss organized a few years ago called “Microscopic Records,” which brought together experts in various scientific fields and early modern historians. One of the master classes on offer focused on proteomics. Hanss was intrigued when he learned that researchers had extracted proteins from the lower-right and left corners (i.e., where contact occurs when one turns a page) of archived manuscripts in Milan. “I thought, we must have a conversation about doing this for Renaissance recipes,” said Hanss. “We know there was experimentation, but we couldn’t really trace it. This is really the first time that we’ve sampled and identified and contextualized biochemical traces of materials.”

Hanss et al. focused on two 1531 German medical manuals published by 16th-century physician Bartholomäus Vogtherr: How to Cure and Expel All Afflictions and Illnesses of the Human Body and A Useful and Essential Little Book of Medicine for the Common Man. The two tomes are bound together into a single volume and are part of the collection of the John Rylands Research Institute and Library at Manchester. The recipes included domestic remedies for brain disease, infertility, skin disorders, hair loss, wounds, and various other severe illnesses, written in the vernacular and targeted at the common populace.

It was a relatively new genre at the time, per the authors, a kind of everyday DIY science, since the manuals encouraged at-home hands-on experimentation. In 16th-century Augsburg (a printing hub), “experimentation was everywhere,” and the city boasted a thriving medical marketplace. It’s clear that people used the Rylands copies of Vogtherr’s manuals for their own experiments because the margins are filled with scribbled notes and comments dating back to that period.

The first step was to take high-resolution photographs and then run the pages through multispectral imaging (including infrared and UV wavelengths), which helped them recover the most faded, previously illegible handwriting, such as on the inside cover. One scribbled note turned out to be instructions to use a mixture of viola and scorpion oil as a treatment for ulcers. Then they sampled various pages from the manuals for the proteomics analysis, focusing on areas where Renaissance users would be most likely to rest their writing hand or leave fingerprints. That’s also why they avoided the bindings, which are far more likely to be handled by modern-day conservators.

While proteomics cannot establish the dates of specific samples, the team was able to distinguish between contemporary and old peptides based on their degree of degradation (such as oxidation). The quantity of peptides detected was also a clue. In fact, the team ended up excluding one of the samples from the final paper because it yielded a significantly higher number of peptides (2,258) than expected, compared to all the other samples (which ranged from 40 to 210 peptides). And for these two particular manuals, “They were in use for more than a hundred years, and we know the [users’] names,” said Hanss. “We could make an informed interpretation based on other recipes at the time, and letters exchanged between [Renaissance] medical practitioners.”
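
A toy version of that exclusion logic, using the peptide counts reported above and a made-up cutoff rule (the paper’s actual criteria were more involved, combining counts with degradation signatures; the sample names and the in-range values here are invented):

```python
import statistics

# Peptide counts per sampled page: 40-210 for the retained samples,
# plus the 2,258-peptide outlier the team excluded.
counts = {"s1": 40, "s2": 95, "s3": 160, "s4": 210, "s5": 2258}

median = statistics.median(counts.values())
for sample, n in counts.items():
    # Hypothetical rule: flag anything an order of magnitude above median.
    verdict = "exclude (likely modern handling)" if n > 10 * median else "keep"
    print(sample, n, verdict)
```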

The handwritten marginalia are a fascinating window into how people experimented with and tweaked various Renaissance domestic remedies. For those suffering from urinary stones, for instance, a “reader-practitioner” commented that during painful flare-ups, “parsley powdered or soaked in wine” could be effective. There are references to the benefits of broadleaf plantain juice (administered anally), and eating scarlet hawthorn leaves.

The proteomics results confirmed, among other things, the presence of many popular ingredients used in the recipes, such as beech, watercress, and rosemary traces found next to hair loss remedies—commonly attributed to an “overheated brain”—along with cabbage and radish oil, chicory, lizards, and, um, human feces. (Just how badly do you want to grow back that thinning hair?) The manuscripts also include recipes for blonde hair dyes. The analysis revealed traces of plants with particularly striking yellow flowers on those pages. “That is a common theme in cosmetic and medical discourse at the time,” said Hanss. “The idea was to look for resemblances between the remedies and what you wish to achieve in terms of the treatment.”

One of the most remarkable results, per Hanss et al., was the recovery of collagen peptides from hippopotamus teeth or bone, pointing to the global circulation of more exotic ingredients in the 16th century. Hippo teeth were said to cure kidney stones and “take away toothache,” and were even used to make dentures.

Hanss et al. also found that several of the proteins they detected have antimicrobial functions, such as dermcidin (derived from human sweat glands), which kills E. coli and the yeast responsible for infections like thrush. The samples also yielded insight into how Renaissance people’s bodies responded to the remedies. Traces of immunoglobulin, lipocalin, and lysozyme are indicators of an active immune response, for instance.

Hanss is so pleased with these initial results that he hopes to launch a large-scale project to extend this interdisciplinary approach to other collections of medical manuals. He also hopes to further improve the dating methodology. “The ingredients for success are there,” said Hanss. “It’s not only that we found new answers to old questions, but we are now in a position to ask completely new questions.”

The American Historical Review, 2025. DOI: 10.1093/ahr/rhaf405 (About DOIs).


Jennifer is a senior writer at Ars Technica with a particular focus on where science meets culture, covering everything from physics and related interdisciplinary topics to her favorite films and TV series. Jennifer lives in Baltimore with her spouse, physicist Sean M. Carroll, and their two cats, Ariel and Caliban.



M5 Pro and M5 Max are surprisingly big departures from older Apple Silicon


Apple is using more chiplets and three types of CPU cores to make the M5 family.

As part of today’s MacBook Pro update, Apple has also unveiled the M5 Pro and M5 Max, the newest members of the M5 chip family.

Normally, the Pro and Max chips take the same building blocks as the standard chip and just scale them up—more CPU cores, more GPU cores, and more memory bandwidth. But the M5 chips are a surprisingly large departure from past generations, both in terms of the CPU architectures they use and in how they’re packaged together.

We won’t know the impact these changes have had on performance until we have hardware in hand to test, but here are all the technical details we’ve been able to glean about the new updates and how the M5 chip family stacks up against the past few generations of Apple Silicon chips.

New Fusion Architecture and a third type of CPU core

Apple says that M5 Pro and M5 Max use an “all-new Fusion Architecture” that welds two silicon chiplets into a single processor. Apple has used this approach before, but historically only to combine two Max chips together into an Ultra.

Apple’s approach here is different—for example, the M5 Pro is not just a pair of M5 chips welded together. Rather, Apple has one chiplet handling the CPU and most of the I/O, and a second one that’s mainly for graphics, both built on the same 3nm TSMC manufacturing process.

The first silicon die is always the same, whether you get an M5 Pro or M5 Max. It includes the 18-core CPU, the 16-core Neural Engine, and controllers for the SSD, for the Thunderbolt ports, and for driving displays.

The second die is where the two chips differ; the M5 Pro gets up to 20 GPU cores, a single media encoding/decoding engine, and a memory controller with up to 307 GB/s of bandwidth. The M5 Max gets up to 40 GPU cores, a pair of media encoding/decoding engines, and a memory controller that provides up to 614 GB/s of memory bandwidth (note that everything in the GPU die seems to be doubled, implying that Apple is, in fact, sticking two M5 Pro GPUs together to make one M5 Max GPU).

Apple’s spec sheets now list three distinct types of CPU cores: “super” cores, performance cores, and efficiency cores. Credit: Apple

Apple is also introducing a third distinct type of CPU core beyond the typical “performance cores” and “efficiency cores” that were included in older M-series processors.

At the top, you have “super cores,” which is Apple’s new M5-era branding for what it used to call “performance cores.” This change is retroactive and also applies to the regular M5; Apple’s spec sheet for the M5 MacBook Pro used to refer to the big cores as “performance cores” but now calls them “super cores.”

At the bottom of the hierarchy, you still have “efficiency cores” that are tuned for low power usage. The M5 still uses six efficiency cores, and unlike the super cores, they haven’t been quietly rebranded. These cores do help with multi-core performance, but they prioritize lower power usage and lower temperatures first, since they need to fit in fanless devices like the iPad Pro and MacBook Air.

And now, in the middle, we have a new type of “performance core” used exclusively in the M5 Pro and M5 Max.

These are, in fact, a new, third type of CPU core design, distinct from both the super cores and the M5’s efficiency cores. They apparently use designs similar to the super cores but prioritize multi-threaded performance rather than fast single-core performance. Apple’s approach with the new performance cores sounds similar to the one AMD uses in its laptop silicon: it has larger Zen 4 and Zen 5 CPU cores, optimized for peak clock speeds and higher power usage, and smaller Zen 4c and Zen 5c cores that support the same capabilities but run slower and are optimized to use less die space.

What we don’t know yet is how these new chips perform relative to the previous versions. Technically, the M4 Pro and M4 Max both had more “big” cores than the M5 Pro and M5 Max do—up to 10 for the M4 Pro and up to 12 for the M4 Max. But higher single-core performance from the six “super cores” and strong multi-core performance from the 12 performance cores should mean that the M5 generation still shakes out to be faster overall.

How all the chips compare

For Mac buyers choosing between these three processors, we’re updating the spec tables we’ve put together in the past, comparing the M5-generation chips to one another and to their counterparts in the M2, M3, and M4 generations.

Here’s how all of the M5 chips stack up, including the partly disabled versions of each chip that Apple sells in lower-end MacBook Air and Pro models:

Chip | CPU cores (S/P/E) | GPU cores | RAM options | Display support (including internal) | Memory bandwidth | Video decode/encode engines
Apple M5 (low) | 4S/6E | 8 | 16GB | Up to three | 153GB/s | One
Apple M5 (high) | 4S/6E | 10 | 16/24/32GB | Up to three | 153GB/s | One
Apple M5 Pro (low) | 5S/10P | 16 | 24GB | Up to four | 307GB/s | One
Apple M5 Pro (high) | 6S/12P | 20 | 24/48/64GB | Up to four | 307GB/s | One
Apple M5 Max (low) | 6S/12P | 32 | 36GB | Up to five | 460GB/s | Two
Apple M5 Max (high) | 6S/12P | 40 | 48/64/128GB | Up to five | 614GB/s | Two

Despite all the big under-the-hood changes, the basic hierarchy here remains the same as in past generations. The Pro tier offers the biggest bump to CPU performance compared to the basic M5, along with twice as many GPU cores. The Max chip is mainly meant for those who want better graphics, 128GB of RAM, or both.

Compared to M2, M3, and M4

Chip | CPU cores (S/P/E) | GPU cores | RAM options | Display support (including internal) | Memory bandwidth
Apple M5 (high) | 4S/6E | 10 | 16/24/32GB | Up to three | 153GB/s
Apple M4 (high) | 4P/6E | 10 | 16/24/32GB | Up to three | 120GB/s
Apple M3 (high) | 4P/4E | 10 | 8/16/24GB | Up to two | 102.4GB/s
Apple M2 (high) | 4P/4E | 10 | 8/16/24GB | Up to two | 102.4GB/s

Compared to past generations, the M5 looks like the basic incremental improvement that we’re used to—no huge jumps in CPU or GPU core counts, relying mostly on architectural improvements and memory bandwidth increases to deliver the expected generation-over-generation speed boost. The Pro and Max chips have similar graphics core counts across generations, but there has been more variability when it comes to the CPU cores.

Chip | CPU cores (S/P/E) | GPU cores | RAM options | Display support (including internal) | Memory bandwidth
Apple M5 Pro (high) | 6S/12P | 20 | 24/48/64GB | Up to four | 307GB/s
Apple M4 Pro (high) | 10P/4E | 20 | 24/48/64GB | Up to three | 273GB/s
Apple M3 Pro (high) | 6P/6E | 18 | 18/36GB | Up to three | 153.6GB/s
Apple M2 Pro (high) | 8P/4E | 19 | 16/32GB | Up to three | 204.8GB/s

The Pro chips have been sort of all over the place, and the M3 generation in particular is an outlier. When we tested it at the time, we found it to be more or less a wash compared to the M2 Pro, which was (and still is) rare for Apple Silicon generations. The M4 Pro was a better upgrade, and the M5 Pro should still feel like an improvement over the M4 Pro despite the big underlying changes.

Chip | CPU cores (S/P/E) | GPU cores | RAM options | Display support (including internal) | Memory bandwidth
Apple M5 Max (high) | 6S/12P | 40 | 48/64/128GB | Up to five | 614GB/s
Apple M4 Max (high) | 12P/4E | 40 | 48/64/128GB | Up to five | 546GB/s
Apple M3 Max (high) | 12P/4E | 40 | 48/64/128GB | Up to five | 409.6GB/s
Apple M2 Max (high) | 8P/4E | 38 | 64/96GB | Up to five | 409.6GB/s

The M5 Max will be the biggest test for Apple’s new performance cores. According to our testing of the M5 in the 14-inch MacBook Pro, the M5-generation super cores are about 12 to 15 percent faster than the M4 generation’s performance cores. The M4 Max had up to 12 of those cores, while the M5 Max only has six. That leaves a pretty substantial gap for M5 Max’s new non-super P-cores to close.
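
Here’s the back-of-envelope version of that gap, using the 12 to 15 percent figure from our M5 testing and normalizing an M4 performance core to 1.0. Everything else, including the assumption that multi-core throughput simply scales with core count, is a simplification:

```python
m4_max_throughput = 12 * 1.0  # 12 P-cores, each normalized to 1.0

for super_speedup in (1.12, 1.15):
    # M5 Max: 6 super cores plus 12 new P-cores of unknown per-core speed x.
    # Solve 6 * super_speedup + 12 * x = m4_max_throughput for x.
    x = (m4_max_throughput - 6 * super_speedup) / 12
    print(f"with super cores at {super_speedup:.2f}x, each new P-core needs "
          f"{x:.2f}x an M4 P-core just to match the M4 Max")
```

Under those (admittedly crude) assumptions, each new performance core only needs a bit over 40 percent of an M4 P-core’s throughput for the M5 Max to break even, since it has 18 big-ish cores to the M4 Max’s 12.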

Aside from that, the biggest outstanding question is how the M5 shakeup changes Apple’s approach to Ultra chips, assuming the company continues to make them (Apple has already said that not every processor generation will see an Ultra update).

The M1 Ultra, M2 Ultra, and M3 Ultra were all made by fusing two Max chips together, perfectly doubling the CPU and GPU core counts. Will an M5 Ultra still weld two M5 Max chips together using the same basic ingredients to make an even larger processor? Or will Apple create distinct CPU and GPU chiplets just for the Ultra series? All we can say for sure is that we can no longer make assumptions based on Apple’s past behavior, which tends to be the most reliable predictor of its future behavior.


Andrew is a Senior Technology Reporter at Ars Technica, with a focus on consumer tech including computer hardware and in-depth reviews of operating systems like Windows and macOS. Andrew lives in Philadelphia and co-hosts a weekly book podcast called Overdue.
