Author name: Paul Patrick

This PDF contains a playable copy of Doom

Here at Ars, we’re suckers for stories about hackers getting Doom running on everything from CAPTCHA robot checks and Windows’ notepad.exe to AI hallucinations and fluorescing gut bacteria. Despite all that experience, we were still thrown for a loop by a recent demonstration of Doom running in the usually static confines of a PDF file.

On the GitHub page for the quixotic project, coder ading2210 discusses how Adobe Acrobat included some robust support for JavaScript in the PDF file format. That JS coding support—which dates back decades and is still fully documented in Adobe’s official PDF specs—is currently implemented in a more limited, more secure form as part of PDFium, the built-in PDF-rendering engine of Chromium-based browsers.

In the past, hackers have used this little-known Adobe feature to code simple games like Breakout and Tetris into PDF documents. But ading2210 went further, recompiling a streamlined fork of Doom’s open source code using an old version of Emscripten that outputs optimized asm.js code.

With that code loaded, the Doom PDF can take inputs via the user typing in a designated text field and generate “video” output in the form of converted ASCII text fed into 200 individual text fields, each representing a horizontal line of the Doom display. The text in those fields is enough to simulate a six-color monochrome display at a “pretty poor but playable” 13 frames per second (about 80 ms per frame).
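
The core rendering trick is straightforward to sketch outside of a PDF: quantize each frame’s pixel brightness into a small character ramp and emit one string per scanline, which the PDF then writes into its per-row text fields. Below is a minimal Python illustration of that idea, assuming a made-up six-character ramp and a toy frame; it is not ading2210’s actual code.

```python
# Illustrative ASCII-framebuffer conversion (not the project's real code).
# Each row of brightness values (0-255) becomes one string; the PDF version
# writes each string into one of its 200 per-scanline text fields.

RAMP = " .:-=#"  # assumed six-level "palette", darkest to brightest

def frame_to_ascii(frame):
    """Convert a 2D list of brightness values into one ASCII string per scanline."""
    lines = []
    for row in frame:
        lines.append("".join(RAMP[min(v * len(RAMP) // 256, len(RAMP) - 1)] for v in row))
    return lines

# Toy 4x8 gradient "frame" just to show the mapping.
frame = [[x * 36 for x in range(8)] for _ in range(4)]
for line in frame_to_ascii(frame):
    print(line)
```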

Meta takes us a step closer to Star Trek’s universal translator


The computer science behind translating speech from 101 source languages.

In 2023, AI researchers at Meta interviewed 34 native Spanish and Mandarin speakers who lived in the US but didn’t speak English. The goal was to find out what people who constantly rely on translation in their day-to-day activities expect from an AI translation tool. What those participants wanted was basically a Star Trek universal translator or the Babel Fish from the Hitchhiker’s Guide to the Galaxy: an AI that could not only translate speech to speech in real time across multiple languages, but also preserve their voice, tone, mannerisms, and emotions. So, Meta assembled a team of over 50 people and got busy building it.

What this team came up with was a next-gen translation system called Seamless. The first building block of this system is described in Wednesday’s issue of Nature; it can translate speech among 36 different languages.

Language data problems

AI translation systems today are mostly focused on text, because huge amounts of text are available in a wide range of languages thanks to digitization and the Internet. Institutions like the United Nations or European Parliament routinely translate all their proceedings into the languages of all their member states, which means there are enormous databases comprising aligned documents prepared by professional human translators. You just needed to feed those huge, aligned text corpora into neural nets (or hidden Markov models before neural nets became all the rage) and you ended up with a reasonably good machine translation system. But there were two problems with that.

The first issue was those databases comprised formal documents, which made the AI translators default to the same boring legalese in the target language even if you tried to translate comedy. The second problem was speech—none of this included audio data.

The problem of language formality was mostly solved by including less formal sources like books, Wikipedia, and similar material in AI training databases. The scarcity of aligned audio data, however, remained. Both issues were at least theoretically manageable in high-resource languages like English or Spanish, but they got dramatically worse in low-resource languages like Icelandic or Zulu.

As a result, the AI translators we have today support an impressive number of languages in text, but things are complicated when it comes to translating speech. There are cascading systems that simply do this trick in stages. An utterance is first converted to text just as it would be in any dictation service. Then comes text-to-text translation, and finally the resulting text in the target language is synthesized into speech. Because errors accumulate at each of those stages, the performance you get this way is usually poor, and it doesn’t work in real time.
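
To see why errors compound, here is a schematic Python sketch of such a cascade. The three stage functions are trivial stand-ins invented for illustration, not any real speech-recognition, translation, or speech-synthesis API; the point is only that each stage sees nothing but the previous stage’s output, so a transcription mistake in stage one is faithfully carried through the rest of the pipeline.

```python
# Schematic cascading speech-to-speech pipeline with stand-in stages (no real models).

def speech_to_text(audio, lang):
    # Stand-in ASR: the speaker said "recognize speech", but the model hears a homophone.
    return "wreck a nice beach"

def translate_text(text, src_lang, tgt_lang):
    # Stand-in MT: a tiny lookup table; it dutifully translates the bad transcript.
    table = {"wreck a nice beach": "détruire une jolie plage"}
    return table.get(text, text)

def text_to_speech(text, lang):
    # Stand-in TTS: return a label instead of synthesized audio.
    return f"<synthesized {lang} audio: {text!r}>"

def cascade_translate(audio, src_lang, tgt_lang):
    transcript = speech_to_text(audio, src_lang)                  # stage 1: speech recognition
    translated = translate_text(transcript, src_lang, tgt_lang)  # stage 2: text-to-text translation
    return text_to_speech(translated, tgt_lang)                   # stage 3: speech synthesis

# The recognition error survives, fully translated and spoken, to the end of the chain.
print(cascade_translate("<raw audio>", "eng", "fra"))
```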

A few systems that can translate speech-to-speech directly do exist, but in most cases they only translate into English and not in the opposite direction. Your foreign-language interlocutor can say something to you in one of the languages supported by tools like Google’s AudioPaLM, and it will translate that into English speech, but you can’t have a conversation going both ways.

So, to pull off the Star Trek universal translator thing Meta’s interviewees dreamt about, the Seamless team started with sorting out the data scarcity problem. And they did it in a quite creative way.

Building a universal language

Warren Weaver, a mathematician and pioneer of machine translation, argued in 1949 that there might be an as-yet-undiscovered universal language working as a common base of human communication. More than 70 years later, that common base of all our communication was exactly what the Seamless team went looking for in its search for data. Weaver’s universal language turned out to be math—more precisely, multidimensional vectors.

Machines do not understand words as humans do. To make sense of them, they need to first turn them into sequences of numbers that represent their meaning. Those sequences of numbers are numerical vectors that are termed word embeddings. When you vectorize tens of millions of documents this way, you’ll end up with a huge multidimensional space where words with similar meaning that often go together, like “tea” and “coffee,” are placed close to each other. When you vectorize aligned text in two languages like those European Parliament proceedings, you end up with two separate vector spaces, and then you can run a neural net to learn how those two spaces map onto each other.
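
As a toy illustration of that geometry, the snippet below uses made-up three-dimensional vectors in place of real word embeddings (which are learned from data and have hundreds of dimensions) and measures closeness with cosine similarity; related words like “tea” and “coffee” score higher than unrelated ones.

```python
# Toy word-embedding geometry with hand-written 3-D vectors (real embeddings are learned).
import numpy as np

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

embeddings = {
    "tea":    np.array([0.8, 0.6, 0.1]),
    "coffee": np.array([0.7, 0.7, 0.2]),
    "rocket": np.array([0.1, 0.2, 0.9]),
}

print(cosine(embeddings["tea"], embeddings["coffee"]))  # high: similar meanings, nearby vectors
print(cosine(embeddings["tea"], embeddings["rocket"]))  # lower: unrelated meanings, distant vectors
```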

But the Meta team didn’t have those nicely aligned texts for all the languages they wanted to cover. So, they vectorized all texts in all languages as if they were just a single language and dumped them into one embedding space called SONAR (Sentence-level Multimodal and Language-Agnostic Representations). Once the text part was done, they moved on to speech data, which was vectorized using a popular speech encoder from the w2v (wav2vec) family and added to the same massive multilingual, multimodal space. Of course, each embedding carried metadata identifying its source language and whether it was text or speech before vectorization.

The team just used huge amounts of raw data—no fancy human labeling, no human-aligned translations. And then, the data mining magic happened.

SONAR embeddings represented entire sentences instead of single words. Part of the reason behind that was to control for differences between morphologically rich languages, where a single word may correspond to multiple words in morphologically simple languages. But the most important thing was that it ensured that sentences with similar meaning in multiple languages ended up close to each other in the vector space.

It was the same story with speech, too—a spoken sentence in one language was close to spoken sentences in other languages with similar meaning. It even worked between text and speech. So, the team simply assumed that embeddings in two different languages or two different modalities (speech or text) that are at a sufficiently close distance to each other are equivalent to the manually aligned texts of translated documents.
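
A minimal sketch of that mining step, under the assumption that sentence embeddings for two languages already live in one shared space: for each sentence on one side, take the nearest sentence on the other side and keep the pair only if the similarity clears a threshold. The embeddings and the cutoff below are placeholders (random vectors, an arbitrary threshold); the real SONAR-based pipeline works at web scale and with more careful scoring than a single cosine cutoff.

```python
# Sketch of distance-based alignment mining in a shared sentence-embedding space.
# Random vectors stand in for real sentence embeddings, so it mines almost nothing;
# the structure of the search is the point.
import numpy as np

rng = np.random.default_rng(0)
emb_lang_a = rng.normal(size=(1000, 256))   # sentence embeddings, language A
emb_lang_b = rng.normal(size=(1200, 256))   # sentence embeddings, language B

def normalize(x):
    return x / np.linalg.norm(x, axis=1, keepdims=True)

a, b = normalize(emb_lang_a), normalize(emb_lang_b)
sims = a @ b.T                               # cosine similarity, every A sentence vs. every B sentence
nearest = sims.argmax(axis=1)                # closest B sentence for each A sentence
scores = sims[np.arange(len(a)), nearest]

THRESHOLD = 0.8                              # placeholder cutoff; would be tuned on held-out aligned data
pairs = [(i, int(j)) for i, (j, s) in enumerate(zip(nearest, scores)) if s >= THRESHOLD]
print(f"mined {len(pairs)} candidate sentence pairs")
```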

This produced huge amounts of automatically aligned data. The Seamless team suddenly got access to millions of aligned texts, even in low-resource languages, along with thousands of hours of transcribed audio. And they used all this data to train their next-gen translator.

Seamless translation

The automatically generated data set was augmented with human-curated texts and speech samples where possible and used to train multiple AI translation models. The largest one was called SEAMLESSM4T v2. It could translate speech to speech from 101 source languages into any of 36 output languages, and translate text to text. It could also work as an automatic speech recognition system in 96 languages, translate speech to text from 101 into 96 languages, and translate text to speech from 96 into 36 languages—all from a single unified model. It also outperformed state-of-the-art cascading systems by 8 percent in speech-to-text and by 23 percent in speech-to-speech translation, based on scores from BLEU (Bilingual Evaluation Understudy), an algorithm commonly used to evaluate the quality of machine translation.
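
For reference, BLEU scores a candidate translation by its n-gram overlap with one or more reference translations, on a 0–100 scale. A small example using the widely used sacrebleu package (an assumption for illustration, not something the paper mandates):

```python
# Corpus-level BLEU with sacrebleu (pip install sacrebleu).
import sacrebleu

hypotheses = ["the cat sat on the mat"]              # system output
references = [["the cat is sitting on the mat"]]     # one list per reference set

bleu = sacrebleu.corpus_bleu(hypotheses, references)
print(bleu.score)  # 0-100; higher means more n-gram overlap with the references
```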

But it can now do even more than that. The Nature paper published by Meta’s Seamless team ends with the SEAMLESSM4T models, but Nature has a long editorial process to ensure scientific accuracy. The paper published on January 15, 2025, was submitted in late November 2023. But with a quick search of arXiv.org, a repository of not-yet-peer-reviewed papers, you can find the details of two other models that the Seamless team has already built on top of SEAMLESSM4T: SeamlessStreaming and SeamlessExpressive, which take this AI even closer to making a Star Trek universal translator a reality.

SeamlessStreaming is meant to solve the translation latency problem. The baseline SEAMLESSM4T, despite all the bells and whistles, worked as a standard AI translation tool. You had to say what you wanted to say, push “translate,” and it spat out the translation. SeamlessStreaming was designed to take this experience a bit closer to what human simultaneous translators do—it translates what you’re saying as you speak in a streaming fashion. SeamlessExpressive, on the other hand, is aimed at preserving the way you express yourself in translations. When you whisper or say something in a cheerful manner or shout out with anger, SeamlessExpressive will encode the features of your voice, like tone, prosody, volume, tempo, and so on, and transfer those into the output speech in the target language.

Sadly, it still can’t do both at the same time; you can only choose to go for either streaming or expressivity, at least at the moment. Also, the expressivity variant is very limited in supported languages—it only works in English, Spanish, French, and German. But at least it’s online so you can go ahead and give it a spin.

Nature, 2025.  DOI: 10.1038/s41586-024-08359-z

Jacek Krywko is a freelance science and technology writer who covers space exploration, artificial intelligence research, computer science, and all sorts of engineering wizardry.

On the OpenAI Economic Blueprint

  1. Man With a Plan.

  2. Oh the Pain.

  3. Actual Proposals.

  4. For AI Builders.

  5. Think of the Children.

  6. Content Identification.

  7. Infrastructure Week.

  8. Paying Attention.

The primary Man With a Plan this week for government-guided AI prosperity was UK Prime Minister Keir Starmer, with a plan coming primarily from Matt Clifford. I’ll be covering that soon.

Today I will be covering the other Man With a Plan, Sam Altman, as OpenAI offers its Economic Blueprint.

Cyrps1s (CISO OpenAI): AI is the ultimate race. The winner decides whether the future looks free and democratic, or repressed and authoritarian.

OpenAI, and the Western World, must win – and we have a blueprint to do so.

Do you hear yourselves? The mask on race and jingoism could not be more off, or firmly attached, depending on which way you want to set up your metaphor. If a movie had villains talking like this people would say it was too on the nose.

Somehow the actual documents tell that statement to hold its beer.

The initial exploratory document is highly disingenuous, trotting out stories of the UK requiring people to walk in front of cars waving red flags and talking about ‘AI’s main street,’ while threatening that if we don’t attract $175 billion in awaiting AI funding it will flow to China-backed projects. They even talk about creating jobs… by building data centers.

The same way some documents scream ‘an AI wrote this,’ others scream ‘the authors of this post are not your friends and are pursuing their book with some mixture of politics-talk and corporate-speak in the most cynical way you can imagine.’

I mean, I get it, playas gonna play, play, play, play, play. But can I ask OpenAI to play with at least some style and grace? To pretend to pretend not to be doing this, a little?

As opposed to actively inserting so many Fnords their document causes physical pain.

The full document starts out in the same vein. Chris Lehane, their Vice President of Global Affairs, writes an introduction as condescending as I can remember, and that plus the ‘where we stand’ repeat the same deeply cynical rhetoric from the summary.

In some sense, it is not important that the way the document is written makes me physically angry and ill in a way I endorse – to the extent that if it doesn’t set off your bullshit detectors and reading it doesn’t cause you pain, then I notice that there is at least some level on which I shouldn’t trust you.

But perhaps that is the most important thing about the document? That it tells you about the people writing it. They are telling you who they are. Believe them.

This is related to the ‘truesight’ that Claude sometimes displays.

As I wrote that, I was only on page 7, and hadn’t even gotten to the actual concrete proposals.

The actual concrete proposals are a distinct issue. I was having trouble reading through to find out what they are because this document filled me with rage and made me physically ill.

It’s important to notice that! I read documents all day, often containing things I do not like. It is very rare that my body responds by going into physical rebellion.

No, the document hasn’t yet mentioned even the possibility of any downside risks at all, let alone existential risks. And that’s pretty terrible on its own. But that’s not even what I’m picking up here, at all. This is something else. Something much worse.

Worst of all, it feels intentional. I can see the Fnords. They want me to see them. They want everyone to implicitly know they are being maximally cynical.

All right, so if one pushes through to the second half and the actual ‘solutions’ section, what is being proposed, beyond ‘regulating us would be akin to requiring someone to walk in front of every car waving a red flag, no literally’?

The top-level numbered statements describe what they propose; I attempted to group and separate proposals for better clarity. The nested statements (a, b, etc.) are my reactions.

They say the Federal Government should, in a section where they actually say words with meanings rather than filling it with Fnords:

  1. Share national security information and resources.

    1. Okay. Yes. Please do.

  2. Incentivize AI companies to deploy their products widely, including to allied and partner nations and to support US government agencies.

    1. Huh? What? Is there a problem here that I am not noticing? Who is not deploying, other than in response to other countries’ regulations saying they cannot deploy (e.g., the EU)? Or are you trying to actively say that safety concerns are bad?

  3. Support the development of standards and safeguards, and ensure they are recognized and respected by other nations.

    1. In a different document I would be all for this – if we don’t have universal standards, people will go shopping. However, in this context, I can’t help but read it mostly as pre-emption, as in ‘we want America to prevent other states from imposing any safety requirements or roadblocks.’

  4. Share its unique expertise with AI companies, including mitigating threats including cyber and CBRN.

    1. Yes! Very much so. Jolly good.

  5. Help companies access secure infrastructure to evaluate model security risks and safeguards.

    1. Yes, excellent, great.

  6. Promote transparency consistent with competitiveness, protect trade secrets, promote market competition, ‘carefully choose disclosure requirements.’

    1. I can’t disagree, but how could anyone?

    2. The devil is in the details. If this had good details, and emphasized that the transparency should largely be about safety questions, it would be another big positive.

  7. Create a defined, voluntary pathway for companies that develop LLMs to work with government to define model evaluations, test models and exchange information to support the companies’ safeguards.

    1. This is about helping you, the company? And you want it to be entirely voluntary? And in exchange, they explicitly want preemption from state-by-state regulations.

    2. Basically this is a proposal for a fully optional safe harbor. I mean, yes, the Federal government should have a support system in place to aid in evaluations. But notice how they want it to work – as a way to defend companies against any other requirements, which they can in turn ignore when inconvenient.

    3. Also, the goal here is to ‘support the companies safeguards,’ not to in any way see if the models are actually a responsible thing to release on any level.

    4. Amazing to request actively less than zero Federal regulations on safety.

  8. Empower the public sector to quickly and securely adopt AI tools.

    1. I mean, sure, that would be nice if we can actually do it as described.

A lot of the components here are things basically everyone should agree upon.

Then there are the parts where, rather than going hand-in-hand with an attempt to not kill everyone and to ensure against catastrophes, the document attempts to ensure that no one else tries to stop catastrophes or prevent everyone from being killed. Can’t have that.

They also propose that AI ‘builders’ could:

  1. Form a consortium to identify best practices for working with NatSec.

  2. Develop training programs for AI talent.

I mean, sure, those seem good and we should have an antitrust exemption to allow actions like this along with one that allows them to coordinate, slow down or pause in the name of safety if it comes to that, too. Not that this document mentions that.

Sigh, here we go. Their solutions for thinking of the children are:

  1. Encourage policy solutions that prevent the creation and distribution of CSAM. Incorporate CSAM protections into the AI development lifecycle. ‘Take steps to prevent downstream developers from using their models to generate CSAM.’

    1. This is effectively a call to ban open source image models. I’m sorry, but it is. I wish it were not so, but there is no known way to open source image models, and have them not be used for CSAM, and I don’t see any reason to expect this to be solvable, and notice the reference to ‘downstream developers.’

  2. Promote conditions that support robust and lasting partnerships among AI companies and law enforcement.

  1. Apply provenance data to all AI-generated audio-visual content. Use common provenance standards. Have large companies report progress.

    1. Sure. I think we’re all roughly on the same page here. Let’s move on to ‘preferences.’

  2. People should be ‘empowered to personalize their AI tools.’

    1. I agree we should empower people in this way. But what does the government have to do with this? None of their damn business.

  3. People should control how their personal data is used.

    1. Yes, sure, agreed.

  4. ‘Government and industry should work together to scale AI literacy through robust funding for pilot programs, school district technology budgets and professional development trainings that help people understand how to choose their own preferences to personalize their tools.’

    1. No. Stop. Please. These initiatives never, ever work, we need to admit this.

    2. But also shrug, it’s fine, it won’t do that much damage.

And then, I feel like I need to fully quote this one too:

  1. In exchange for having so much freedom, users should be responsible for impacts of how they work and create with AI. Common-sense rules for AI that are aimed at protecting from actual harms can only provide that protection if they apply to those using the technology as well as those building it.

    1. If seeing the phrase ‘In exchange for having so much freedom’ doesn’t send a chill down your spine, We Are Not the Same.

    2. But I applaud the ‘as well as’ here. Yes, those using the technology should be responsible for the harm they themselves cause, so long as this is ‘in addition to’ rather than shoving all responsibility purely onto them.

Finally, we get to ‘infrastructure as destiny,’ an area where we mostly agree on what is to actually be done, even if I despise a lot of the rhetoric they’re using to argue for it.

  1. Ensure that AIs can train on all publicly available data.

    1. This is probably the law now and I’m basically fine with it.

  2. ‘While also protecting creators from unauthorized digital replicas.’

    1. This seems rather tricky if it means something other than ‘stop regurgitation of training data’? I assume that’s what it means, while trying to pretend it’s more than that. If it’s more than that, they need to explain what they have in mind and how one might do it.

  3. Digitize government data currently in analog form.

    1. Probably should do that anyway, although a lot of it shouldn’t go on the web or into LLMs. Kind of a call for government to pay for data curation.

  4. ‘A Compact for AI’ for capital and supply chains and such among US allies.

    1. I don’t actually understand why this is necessary, and worry this amounts to asking for handouts and to allow Altman to build in the UAE.

  5. ‘AI economic zones’ that speed up the permitting process.

    1. Or we could, you know, speed up the permitting process in general.

    2. But actually we can’t and won’t, so even though this is deeply, deeply stupid and second best it’s probably fine. Directionally this is helpful.

  6. Creation of AI research labs and workforces aligned with key local industries.

    1. This seems like pork barrel spending, an attempt to pick our pockets; we shouldn’t need to subsidize this. To the extent there are applications here, the bottleneck won’t be funding, it will be regulations and human objections, so let’s work on those instead.

  7. ‘A nationwide AI education strategy’ to ‘help our current workforce and students become AI ready.’

    1. I strongly believe that what this points towards won’t work. What we actually need is to use AI to revolutionize the education system itself. That would work wonders, but you all (in government reading this document) aren’t ready for that conversation and OpenAI knows this.

  8. More money for research infrastructure and science. Basically have the government buy the scientists a bunch of compute, give OpenAI business?

    1. Again this seems like an attempt to direct government spending and get paid. Obviously we should get our scientists AI, but why can’t they just buy it the same way everyone else does? If we want to fund more science, why this path?

  9. Leading the way on the next generation of energy technology.

    1. No arguments here. Yay next generation energy production.

    2. Clearly Altman wants Helion to get money but I’m basically fine with that.

  10. Dramatically increase federal spending on power and data transmission and streamlined approval for new lines.

    1. I’d emphasize approvals and regulatory barriers more than money.

    2. Actual dollars spent don’t seem to me like the bottleneck, but I could be convinced otherwise.

    3. If we have a way to actually spend money and have that result in a better grid, I’m in favor.

  11. Federal backstops for high-value AI public works.

    1. If this is more than ‘build more power plants and transmission lines and batteries and such’ I am confused what is actually being proposed.

    2. In general, I think helping get us power is great, having the government do the other stuff is probably not its job.

When we get down to the actual asks in the document, a majority of them I actually agree with, and most of them are reasonable, once I was able to force myself to read the words intended to have meaning.

There are still two widespread patterns to note within the meaningful content.

  1. The easy theme, as you would expect, is the broad range of ‘spend money on us and other AI things’ proposals that don’t seem like they would accomplish much. There are some proposals that do seem productive, especially around electrical power, but a lot of this seems like the traditional ways the Federal government gets tricked into spending money. As long as this doesn’t scale too big, I’m not that concerned.

  2. Then there is the play to defeat any attempt at safety regulation, via Federal regulations that, on net, actively interfere with that goal in case any states or countries wanted to try and help. A common standard here is clearly desirable, but a voluntary safe harbor preemption, in exchange for various nebulous forms of potential cooperation, cannot be the basis of our entire safety plan. That appears to be the proposal on offer here.

The real vision, the thing I will take away most, is in the rhetoric and presentation, combined with the broader goals, rather than the particular details.

OpenAI now actively wants to be seen as pursuing this kind of obviously disingenuous jingoistic and typically openly corrupt rhetoric, to the extent that their statements are physically painful to read – I dealt with much of that around SB 1047, but this document takes that to the next level and beyond.

OpenAI wants no enforced constraints on their behavior, and they want our money.

OpenAI are telling us who they are. I fully believe them.

Discussion about this post

There was a straight shot from Earth to the Moon and Mars last night

The most recent lunar occultation of Mars that was visible from the United States occurred on December 7, 2022. A handful of these events occur every few years around each Martian opposition, but they are usually only visible from a small portion of Earth, often over the ocean or in polar regions. The next lunar occultation of Mars visible across most of the United States will happen on the night of February 4–5, 2042. There are similar occultations of Mars in 2035, 2038, and 2039 visible in narrow swaths of South Florida and the Pacific Northwest.

This photo was taken with a handheld Canon 80D and a 600 mm lens. Settings were 1/2000 sec, f/8, ISO 400. The image was cropped and lightly edited in Adobe Lightroom.

The Moon also periodically covers Venus, Jupiter, Saturn, and the Solar System’s more distant planets. A good resource on lunar occultations is In-The-Sky.org, which lists events where the Moon will block out a planet or a bright star. Be sure to set your location in the upper-right corner of the page and toggle year by year to plan out future viewing opportunities.

Viewing these kinds of events can be breathtaking and humbling. In 2012, I was lucky enough to observe the transit of Venus in front of the Sun, something that happens in pairs eight years apart, with more than a century between pairs.

Seeing Mars, twice the size of the Moon, rising above the lunar horizon like a rusty BB pellet next to a dusty volleyball provided a perfect illustration of the scale and grandeur of the Solar System. Similarly, viewing Venus dwarfed by the Sun was a revealing moment. The worlds accompanying Earth around the Sun are varied in size, shape, color, and composition.

In one glance, an observer can see the barren, airless lunar surface and a cold, desert planet that once harbored rivers, lakes, and potentially life, all while standing on our own planet, an oasis in the cosmos. One thing that connects them all is humanity’s quest for exploration. Today, robots are operating on or around the Moon and Mars. Governments and private companies are preparing to return astronauts to the lunar surface within a few years, then moving on to dispatch human expeditions to the red planet.

Plans to land astronauts on the Moon are already in motion, but significant financial and technological hurdles remain for a crewed mission to Mars. But for a short time Monday night, it looked like there was a direct path.

FBI forces Chinese malware to delete itself from thousands of US computers

The FBI said today that it removed Chinese malware from 4,258 US-based computers and networks by sending commands that forced the malware to use its “self-delete” function.

The People’s Republic of China (PRC) government paid the Mustang Panda group to develop a version of PlugX malware used to infect, control, and steal information from victim computers, the FBI said. “Since at least 2014, Mustang Panda hackers then infiltrated thousands of computer systems in campaigns targeting US victims, as well as European and Asian governments and businesses, and Chinese dissident groups,” the FBI said.

The malware has been known for years but many Windows computers were still infected while their owners were unaware. The FBI learned of a method to remotely remove the malware from a French law enforcement agency, which had gained access to a command-and-control server that could send commands to infected computers.

“When a computer infected with this variant of PlugX malware is connected to the Internet, the PlugX malware can send a request to communicate with a command-and-control (‘C2’) server, whose IP address is hard-coded in the malware. In reply, the C2 server can send several possible commands to the PlugX malware on the victim computer,” stated an FBI affidavit that was made on December 20 and unsealed today.

As it turns out, the “PlugX malware variant’s native functionality includes a command from a C2 server to ‘self-delete.'” This deletes the application, files created by the malware, and registry keys used to automatically run the PlugX application when the victim computer is started.

Up close and personal with the stag beetle in A Real Bug’s Life S2


It’s just one of the many fascinating insect species featured in the second season of this NatGeo docuseries.

A female giant stag beetle Credit: National Geographic/Darlyne A. Murawski

A plucky male American stag beetle thinks he’s found a mate on a rotting old tree stump—and then realizes there’s another male eager to make the same conquest. The two beetles face off in battle, until the first manages to get enough leverage to toss his romantic rival off the stump in a deft display of insect jujitsu. It’s the first time this mating behavior has been captured on film, and the stag beetle is just one of the many fascinating insects featured in the second season of A Real Bug’s Life, a National Geographic docuseries narrated by Awkwafina.

The genesis for the docuseries lies in a once-rumored sequel to Pixar’s 1998 animated film A Bug’s Life, which celebrated its 25th anniversary two years ago. That inspired producer Bill Markham, among others, to pitch a documentary series on a real bug’s life to National Geographic. “It was the quickest commission ever,” Markham told Ars last year. “It was such a good idea, to film bugs in an entertaining family way with Pixar sensibilities.” And thanks to the advent of new technologies—photogrammetry, probe and microscope lenses, racing drones, ultra-high-speed cameras—plus a handful of skilled “bug wranglers,” the team was able to capture the bug’s-eye view of the world beautifully.

As with the Pixar film, the bugs (and adjacent creatures) are the main characters here, from cockroaches, monarch butterflies, and praying mantises to bees, spiders, and even hermit crabs. The 10 episodes, across two seasons, tell their stories as they struggle to survive in their respective habitats, capturing entire ecosystems in the process: city streets, a farm, the rainforest, a Texas backyard, and the African savannah, for example. Highlights from S1 included the first footage of cockroach egg casings hatching; wrangling army ants on location in a Costa Rica rainforest; and the harrowing adventures of a tiny jumping spider navigating the mean streets of New York City.

Looking for love

A luna moth perched on a twig. National Geographic/Nathan Small

S2 takes viewers to Malaysia’s tropical beaches, the wetlands of Derbyshire in England, and the forests of Tennessee’s Smoky Mountains. Among the footage highlights: Malaysian tiger beetles, which can run so fast they are temporarily unable to see; a young female hermit crab’s hunt for a bigger shell; and tiny peacock spiders hatching Down Under. There is also a special behind-the-scenes look for those viewers keen to learn more about how the episodes were filmed, involving 130 different species across six continents. Per the official synopsis:

A Real Bug’s Life is back for a thrilling second season that’s bolder than ever. Now, thanks to new cutting-edge filming technology, we are able to follow the incredible stories of the tiny heroes living in this hidden world, from the fast-legged tiger beetle escaping the heat of Borneo’s beaches to the magical metamorphosis of a damselfly on a British pond to the Smoky Mountain luna moth whose quest is to grow wings, find love and pass on his genes all in one short night. Join our witty guide, Awkwafina, on new bug journeys full of more mind-blowing behaviors and larger-than-life characters.

Entomologist Michael Carr, an environmental compliance officer for Santa Fe County in New Mexico, served as a field consultant for the “Love in the Forest” episode, which focuses on the hunt for mates by a luna moth, a firefly, and an American stag beetle. The latter species is Carr’s specialty, ever since he worked at the Smithsonian’s Museum of Natural History and realized the beetles flourished near where he grew up in Virginia. Since stag beetles are something of a niche species, NatGeo naturally tapped Carr as its field expert to help them find and film the insects in the Smoky Mountains. To do so, Carr set up a mercury vapor lamp on a tripod—”old style warehouse lights that take a little time to charge up,” which just happen to emit frequencies of light that attract different insect species.

Behind the scenes

Beetle expert Michael Carr and shooting researcher Katherine Hannaford film a stag beetle at night. National Geographic/Tom Oldridge

Stag beetles are saproxylic insects, according to Carr, so they seek out decaying wood and fungal communities. Males can fly as high as 30 feet to reach tree canopies, while the females can dig down to between 1 and 3 meters to lay their eggs in wood. Much of the stag beetle’s lifecycle is spent underground as a white grub molting into larger and larger forms before emerging as an adult after two to three years, during the summer. Once their exoskeletons harden, they fly off to find mates and reproduce as quickly as possible. And if another male happens to get in their way, they’re quite prepared to do battle to win at love.

Stag beetles might be his specialty, but Carr found the fireflies also featured in that episode to be a particular highlight. “I grew up in rural Virginia,” Carr told Ars. “There was always fireflies, but I’d never seen anything like that until I was there on site. I did not realize, even though I’d grown up in the woods surrounded by fireflies, that, ‘Oh, the ones that are twinkling at the top, that’s one species. The ones in the middle that are doing a soft glow, that’s a different species.'”

And Carr was as surprised and fascinated as any newbie to learn about the “femme fatale” firefly: a species in which the female mimics the blinking patterns of other species of firefly, luring unsuspecting males to their deaths. The footage captured by the NatGeo crew includes a hair-raising segment where this femme fatale opts not to wait for her prey to come to her. A tasty male firefly has been caught in a spider’s web, and our daring, hungry lady flies right into the web to steal the prey:

A femme fatale firefly steals prey from a rival spider’s web.

Many people have a natural aversion to insects; Carr hopes that inventive docuseries like A Real Bug’s Life can help counter those negative perceptions by featuring some lesser-loved insects in anthropomorphized narratives—like the cockroaches and fire ants featured in S1. “[The series] did an amazing job of showing how something at that scale lives its life, and how that’s almost got a parallel to how we can live our life,” he said. “When you can get your mindset down to such a small scale and not just see them as moving dots on the ground and you see their eyes and you see how they move and how they behave and how they interact with each other, you get a little bit more appreciation for ants as a living organism.”

“By showcasing some of the bigger interesting insects like the femme fatale firefly or the big chivalrous stag beetle fighting over each other, or the dung beetle getting stomped by an elephant—those are some pretty amazing just examples of the biodiversity and breadth of insect life,” said Carr. “People don’t need to love insects. If they can, just, have some new modicum of respect, that’s good enough to change perspectives.”

The second season of A Real Bug’s Life premieres on January 15, 2025, on Disney+.

Jennifer is a senior writer at Ars Technica with a particular focus on where science meets culture, covering everything from physics and related interdisciplinary topics to her favorite films and TV series. Jennifer lives in Baltimore with her spouse, physicist Sean M. Carroll, and their two cats, Ariel and Caliban.

New York starts enforcing $15 broadband law that ISPs tried to kill

1.7 million New York households lost FCC discount

The order said quick implementation of the law is important because of “developments at the federal level impacting the affordability of broadband service.” About 1.7 million New York households, and 23 million nationwide, used to receive a monthly discount through an FCC program that expired in mid-2024 after Congress failed to provide more funding.

“For this reason, consumer benefit programs assisting low-income households—such as the ABA—are even more critical to ensure that the digital divide for low-income New Yorkers is being addressed,” the New York order said.

New York ISPs can obtain an exemption from the low-cost broadband law if they “provide service to no more than 20,000 households and the Commission determines that compliance with such requirements would result in ‘unreasonable or unsustainable financial impact on the broadband service provider,'” the order said.

Over 40 small ISPs filed for exemptions in 2021 before the law was blocked by a judge. Those ISPs and potentially others will be given one-month exemptions if they file paperwork by Wednesday stating that they meet the subscriber threshold. ISPs must submit detailed financial information by February 15 to obtain longer-term exemptions.

“All other ISPs (i.e., those with more than 20,000 subscribers) must comply with the ABA by January 15, 2025,” the order said. Failure to comply can be punished with civil penalties of up to $1,000 per violation. The law applies to wireline, fixed wireless, and satellite providers.

Charter Spectrum currently advertises a $25-per-month plan with 50Mbps speeds for low-income households. Comcast and Optimum have $15 plans. Verizon has a low-income program reducing the cost of some home Internet plans to as low as $20 a month.

Disclosure: The Advance/Newhouse Partnership, which owns 12.3 percent of Charter, is part of Advance Publications, which also owns Ars Technica parent Condé Nast.

161 years ago, a New Zealand sheep farmer predicted AI doom

The text anticipated several modern AI safety concerns, including the possibility of machine consciousness, self-replication, and humans losing control of their technological creations. These themes later appeared in works like Isaac Asimov’s The Evitable Conflict, Frank Herbert’s Dune novels (Butler possibly served as the inspiration for the term “Butlerian Jihad“), and the Matrix films.

A model of Charles Babbage’s Analytical Engine, a calculating machine invented in 1837 but never built during Babbage’s lifetime. Credit: DE AGOSTINI PICTURE LIBRARY via Getty Images

Butler’s letter dug deep into the taxonomy of machine evolution, discussing mechanical “genera and sub-genera” and pointing to examples like how watches had evolved from “cumbrous clocks of the thirteenth century”—suggesting that, like some early vertebrates, mechanical species might get smaller as they became more sophisticated. He expanded these ideas in his 1872 novel Erewhon, which depicted a society that had banned most mechanical inventions. In his fictional society, citizens destroyed all machines invented within the previous 300 years.

Butler’s concerns about machine evolution received mixed reactions, according to Butler in the preface to the second edition of Erewhon. Some reviewers, he said, interpreted his work as an attempt to satirize Darwin’s evolutionary theory, though Butler denied this. In a letter to Darwin in 1865, Butler expressed his deep appreciation for The Origin of Species, writing that it “thoroughly fascinated” him and explained that he had defended Darwin’s theory against critics in New Zealand’s press.

What makes Butler’s vision particularly remarkable is that he was writing in a vastly different technological context when computing devices barely existed. While Charles Babbage had proposed his theoretical Analytical Engine in 1837—a mechanical computer using gears and levers that was never built in his lifetime—the most advanced calculating devices of 1863 were little more than mechanical calculators and slide rules.

Butler extrapolated from the simple machines of the Industrial Revolution, where mechanical automation was transforming manufacturing, but nothing resembling modern computers existed. The first working program-controlled computer wouldn’t appear for another 70 years, making his predictions of machine intelligence strikingly prescient.

Some things never change

The debate Butler started continues today. Two years ago, the world grappled with what one might call the “great AI takeover scare of 2023.” OpenAI’s GPT-4 had just been released, and researchers evaluated its “power-seeking behavior,” echoing concerns about potential self-replication and autonomous decision-making.

Man turns irreversibly gray from an unidentified silver exposure

When an 84-year-old man in Hong Kong was admitted to a hospital for a condition related to an enlarged prostate, doctors noticed something else about him—he was oddly gray, according to a case report in the New England Journal of Medicine.

His skin, particularly his face, had an ashen appearance. His fingernails and the whites of his eyes had become silvery. When doctors took a skin biopsy, they could see tiny, dark granules sitting in the fibers of his skin, in his blood vessels, in the membranes of his sweat glands, and in his hair follicles.

A blood test made clear what the problem was: the concentration of silver in his serum was 423 nmol/L, over 40 times the reference level for a normal result, which is less than 10 nmol/L. The man was diagnosed with a rare case of generalized argyria, a buildup of silver in the body’s tissue that causes a blueish-gray discoloration—which is generally permanent.

When someone consumes silver particles, the metal moves from the gut into the bloodstream in its ionic form. It’s then deposited throughout the body in various tissues, including the skin, muscles, heart, lungs, liver, spleen, and kidneys. There’s some evidence that it accumulates in at least parts of the brain as well.

Discoloration becomes apparent in tissues exposed to sunlight—hence the patient’s notably gray face. Silver ions in the skin undergo photoreduction from ultraviolet light exposure, forming atomic silver that can be oxidized to compounds such as silver sulfide and silver selenide, creating a bluish-gray tinge. Silver can also stimulate the production of the pigment melanin, causing darkening. Once discoloration develops, it’s considered irreversible. Chelation therapy—generally used to remove metals from the body—is ineffective against argyria. That said, some case studies have suggested that laser therapy may help.

Google loses in court, faces trial for collecting data on users who opted out

Plaintiffs have brought claims of privacy invasion under California law. Plaintiffs “present evidence that their data has economic value,” and “a reasonable juror could find that Plaintiffs suffered damage or loss because Google profited from the misappropriation of their data,” Seeborg wrote.

The lawsuit was filed in July 2020. The judge notes that summary judgment can be granted when “there is no genuine dispute as to any material fact and the movant is entitled to judgment as a matter of law.” Google hasn’t met that standard, he ruled.

In a statement provided to Ars, Google said that “privacy controls have long been built into our service and the allegations here are a deliberate attempt to mischaracterize the way our products work. We will continue to make our case in court against these patently false claims.”

In a proposed settlement of a different lawsuit, Google last year agreed to delete records reflecting users’ private browsing activities in Chrome’s Incognito mode.

Google disclosures are ambiguous, judge says

Google claimed that the “undisputed facts” show its collection of “data was lawful and consistent with its representations to class members,” Seeborg wrote. But in the judge’s view, the “various interpretations of these disclosures render them ambiguous such that a reasonable user would expect the WAA and (s)WAA settings to control Google’s collection of a user’s web app and activity on products using Google’s services.”

Google contends that its system is harmless to users. “Google argues that its sole purpose for collecting (s)WAA-off data is to provide these analytic services to app developers. This data, per Google, consists only of non-personally identifiable information and is unrelated (or, at least, not directly related) to any profit-making objectives,” Seeborg wrote.

On the other side, plaintiffs say that Google’s tracking contradicts its “representations to users because it gathers exactly the data Google denies saving and collecting about (s)WAA-off users,” Seeborg wrote. “Moreover, Plaintiffs insist that Google’s practices allow it to personalize ads by linking user ad interactions to any later related behavior—information advertisers are likely to find valuable—leading to Google’s lucrative advertising enterprise built, in part, on (s)WAA-off data unlawfully retrieved.”

As US marks first H5N1 bird flu death, WHO and CDC say risk remains low

The H5N1 bird flu situation in the US seems more fraught than ever this week as the virus continues to spread swiftly in dairy cattle and birds while sporadically jumping to humans.

On Monday, officials in Louisiana announced that the person who had developed the country’s first severe H5N1 infection had died of the infection, marking the country’s first H5N1 death. Meanwhile, with no signs of H5N1 slowing, seasonal flu is skyrocketing, raising anxiety that the different flu viruses could mingle, swap genetic elements, and generate a yet more dangerous virus strain.

But, despite the seeming fever pitch of viral activity and fears, a representative for the World Health Organization today noted that risk to the general population remains low—as long as one critical factor remains absent: person-to-person spread.

“We are concerned, of course, but we look at the risk to the general population and, as I said, it still remains low,” WHO spokesperson Margaret Harris told reporters at a Geneva press briefing Tuesday in response to questions related to the US death. In terms of updating risk assessments, you have to look at how the virus behaved in that patient and if it jumped from one person to another person, which it didn’t, Harris explained. “At the moment, we’re not seeing behavior that’s changing our risk assessment,” she added.

In a statement on the death late Monday, the US Centers for Disease Control and Prevention emphasized that no human-to-human transmission has been identified in the US. To date, there have been 66 documented human cases of H5N1 infections since the start of 2024. Of those, 40 were linked to exposure to infected dairy cows, 23 were linked to infected poultry, two had no clear source, and one case—the fatal case in Louisiana—was linked to exposure to infected backyard and wild birds.

Nearly two years after its radical pivot, Fidelity slashes Relativity’s valuation

For several years, an innovative, California-based launch company named Relativity Space has been the darling of investors and media.

Relativity promised to disrupt launch by taking a somewhat niche technology in the space industry at the time, 3D printing, and using it as the foundation for manufacturing rockets. The pitch worked. Relativity’s chief executive Tim Ellis liked to brag that his first investor call was to Dallas Mavericks owner Mark Cuban, who cut the company’s first check. Cuban invested half a million dollars.

That was just the beginning of the torrent of fundraising by Ellis, who, by November 2023, had turned the privately held Relativity into a $4.5 billion company following its latest funding round, a Series F. This was an impressive start for the company founded by Ellis and Jordan Noone, both engineers, in 2016.

A big bet

The Series F round came while Relativity was in the midst of a bold gamble that, in hindsight, may have been a poor bet. In March 2023, the company launched its Terran 1 rocket for the first—and only—time. After this flight, Ellis announced that the company was pivoting immediately to developing the much larger and more capable Terran R rocket.

“It’s a big, bold bet,” Ellis said in an interview. “But it’s actually a really obvious decision.”

With an advertised capacity of more than 1 metric ton to low-Earth orbit and a “backlog” of launch contracts valued in the hundreds of millions of dollars, according to Ellis, Terran 1 had the potential to draw significant revenue. It could also have nabbed a share of launch contracts that have since been snared by competitors such as Rocket Lab, with its smaller Electron vehicle, and Firefly, with its comparably sized Alpha rocket.
