Features


Breaking down why Apple TVs are privacy advocates’ go-to streaming device


Using the Apple TV app or an Apple account means giving Apple more data, though.

Credit: Aurich Lawson | Getty Images


Every time I write an article about the escalating advertising and tracking on today’s TVs, someone brings up Apple TV boxes. Among smart TVs, streaming sticks, and other streaming devices, Apple TVs are largely viewed as a safe haven.

“Just disconnect your TV from the Internet and use an Apple TV box.”

That’s the common guidance you’ll hear from Ars readers for those seeking the joys of streaming without giving up too much privacy. Based on our research and the experts we’ve consulted, that advice is pretty solid, as Apple TVs offer significantly more privacy than other streaming hardware providers.

But how private are Apple TV boxes, really? Apple TVs don’t use automatic content recognition (ACR, a user-tracking technology leveraged by nearly all smart TVs and streaming devices), but could that change? And what about the software that Apple TV users do use—could those apps provide information about you to advertisers or Apple?

In this article, we’ll delve into what makes the Apple TV’s privacy stand out and examine whether users should expect the limited ads and enhanced privacy to last forever.

Apple TV boxes limit tracking out of the box

One of the simplest ways Apple TVs ensure better privacy is through their setup process, during which you can disable Siri, location tracking, and sending analytics data to Apple. During setup, users also receive several opportunities to review Apple’s data and privacy policies. Also off by default is the boxes’ ability to send voice input data to Apple.

Most other streaming devices require users to navigate through pages of settings to disable similar tracking capabilities, which most people are unlikely to do. Apple’s approach creates a line of defense against snooping, even for those unaware of how invasive smart devices can be.

Apple TVs running tvOS 14.5 and later also make third-party app tracking more difficult by requiring such apps to request permission before they can track users.

“If you choose Ask App Not to Track, the app developer can’t access the system advertising identifier (IDFA), which is often used to track,” Apple says. “The app is also not permitted to track your activity using other information that identifies you or your device, like your email address.”

Users can access the Apple TV settings and disable the ability of third-party apps to ask permission for tracking. However, Apple could further enhance privacy by enabling this setting by default.

The Apple TV also lets users control which apps can access the set-top box’s Bluetooth functionality, photos, music, and HomeKit data (if applicable), and the remote’s microphone.

“Apple’s primary business model isn’t dependent on selling targeted ads, so it has somewhat less incentive to harvest and monetize incredible amounts of your data,” said RJ Cross, director of the consumer privacy program at the Public Interest Research Group (PIRG). “I personally trust them more with my data than other tech companies.”

What if you share analytics data?

If you allow your Apple TV to share analytics data with Apple or app developers, that data won’t be personally identifiable, Apple says. Any collected personal data is “not logged at all, removed from reports before they’re sent to Apple, or protected by techniques, such as differential privacy.”

Differential privacy, which injects noise into collected data, is one of the most common methods used for anonymizing data. In support documentation (PDF), Apple details its use of differential privacy:

The first step we take is to privatize the information using local differential privacy on the user’s device. The purpose of privatization is to assure that Apple’s servers don’t receive clear data. Device identifiers are removed from the data, and it is transmitted to Apple over an encrypted channel. The Apple analysis system ingests the differentially private contributions, dropping IP addresses and other metadata. The final stage is aggregation, where the privatized records are processed to compute the relevant statistics, and the aggregate statistics are then shared with relevant Apple teams. Both the ingestion and aggregation stages are performed in a restricted access environment so even the privatized data isn’t broadly accessible to Apple employees.
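Apple’s pipeline is proprietary, but the core idea of local differential privacy can be illustrated with the classic randomized-response technique: each device perturbs its own answer before sending it, and the server inverts the known noise model to recover only the aggregate. This is a minimal sketch of the general technique, not Apple’s actual implementation; the function names are illustrative.

```python
import random

def randomized_response(truth: bool, p: float = 0.75) -> bool:
    """Report the honest value with probability p; otherwise report a coin flip.

    Because any single report may be noise, no individual answer can be
    trusted, which is the local-privacy guarantee.
    """
    if random.random() < p:
        return truth
    return random.random() < 0.5

def estimate_rate(reports: list[bool], p: float = 0.75) -> float:
    """Recover the population rate by inverting the known noise model.

    E[observed] = p * true_rate + (1 - p) * 0.5, so solve for true_rate.
    """
    observed = sum(reports) / len(reports)
    return (observed - (1 - p) * 0.5) / p
```

Run across many devices, the per-user noise averages out: the server learns a usage statistic without being able to trust, or reconstruct, any individual report.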

What if you use an Apple account with your Apple TV?

Another factor to consider is Apple’s privacy policy regarding Apple accounts, formerly Apple IDs.

Apple support documentation says you “need” an Apple account to use an Apple TV, but you can use the hardware without one. Still, it’s common for people to log into Apple accounts on their Apple TV boxes because it makes it easier to link with other Apple products. Another reason someone might link an Apple TV box with an Apple account is to use the Apple TV app, a common way to stream on Apple TV boxes.

So what type of data does Apple harvest from Apple accounts? According to its privacy policy, the company gathers usage data, such as “data about your activity on and use of” Apple offerings, including “app launches within our services…; browsing history; search history; [and] product interaction.”

Other types of data Apple may collect from Apple accounts include transaction information (Apple says this is “data about purchases of Apple products and services or transactions facilitated by Apple, including purchases on Apple platforms”), account information (“including email address, devices registered, account status, and age”), device information (including serial number and browser type), contact information (including physical address and phone number), and payment information (including bank details). None of that is surprising considering the type of data needed to make an Apple account work.

Many Apple TV users can expect Apple to gather more data from their Apple account usage on other devices, such as iPhones or Macs. And if you use the same Apple account across multiple devices, Apple ties the data it has collected from, for example, your iPhone activity to you as an Apple TV user, too.

A potential workaround could be maintaining multiple Apple accounts. With an Apple account solely dedicated to your Apple TV box and Apple TV hardware and software tracking disabled as much as possible, Apple would have minimal data to ascribe to you as an Apple TV owner. You can also use your Apple TV box without an Apple account, but then you won’t be able to use the Apple TV app, one of the device’s key features.

Data collection via the Apple TV app

You can download third-party apps like Netflix and Hulu onto an Apple TV box, but most TV and movie watching on Apple TV boxes likely occurs via the Apple TV app. The app is necessary for watching content on the Apple TV+ streaming service, but it also drives usage by providing access to the libraries of many (but not all) popular streaming apps in one location. So understanding the Apple TV app’s privacy policy is critical to evaluating how private Apple TV activity truly is.

As expected, some of the data the app gathers is necessary for the software to work. That includes, according to the app’s privacy policy, “information about your purchases, downloads, activity in the Apple TV app, the content you watch, and where you watch it in the Apple TV app and in connected apps on any of your supported devices.” That all makes sense for ensuring that the app remembers things like which episode of Severance you’re on across devices.

Apple collects other data, though, that isn’t necessary for functionality. It says it gathers data on things like the “features you use (for example, Continue Watching or Library),” content pages you view, how you interact with notifications, and approximate location information (that Apple says doesn’t identify users) to help improve the app.

Additionally, Apple tracks the terms you search for within the app, per its policy:

We use Apple TV search data to improve models that power Apple TV. For example, aggregate Apple TV search queries are used to fine-tune the Apple TV search model.

This data usage is less intrusive than that of other streaming devices, which might track your activity and then sell that data to third-party advertisers. But some people may be hesitant about having any of their activities tracked to benefit a multi-trillion-dollar conglomerate.

Data collected from the Apple TV app used for ads

By default, the Apple TV app also tracks “what you watch, your purchases, subscriptions, downloads, browsing, and other activities in the Apple TV app” to make personalized content recommendations. Content recommendations aren’t ads in the traditional sense but instead provide a way for Apple to push you toward products by analyzing data it has on you.

You can disable the Apple TV app’s personalized recommendations, but it’s a little harder than you might expect since you can’t do it through the app. Instead, you need to go to the Apple TV settings and then select Apps > TV > Use Play History > Off.

The most privacy-conscious users may wish that personalized recommendations were off by default. Darío Maestro, senior legal fellow at the nonprofit Surveillance Technology Oversight Project (STOP), noted to Ars that even though Apple TV users can opt out of personalized content recommendations, “many will not realize they can.”

Apple can also use data it gathers on you from the Apple TV app to serve traditional ads. If you allow your Apple TV box to track your location, the Apple TV app can also track your location. That data can “be used to serve geographically relevant ads,” according to the Apple TV app privacy policy. Location tracking, however, is off by default on Apple TV boxes.

Apple’s tvOS doesn’t have integrated ads. For comparison, some TV OSes, like Roku OS and LG’s webOS, show ads on the home screen and/or in screensavers.

But data gathered from the Apple TV app can still help Apple’s advertising efforts. This can happen if you allow personalized ads in other Apple apps that serve targeted ads, such as Apple News, the App Store, or Stocks. In such cases, Apple may apply data gathered from the Apple TV app, “including information about the movies and TV shows you purchase from Apple, to serve ads in those apps that are more relevant to you,” the Apple TV app privacy policy says.

Apple also provides third-party advertisers and strategic partners with “non-personal data” gathered from the Apple TV app:

We provide some non-personal data to our advertisers and strategic partners that work with Apple to provide our products and services, help Apple market to customers, and sell ads on Apple’s behalf to display on the App Store and Apple News and Stocks.

Apple also shares non-personal data from the Apple TV with third parties, such as content owners, so they can pay royalties, gauge how much people are watching their shows or movies, “and improve their associated products and services,” Apple says.

Apple’s policy notes:

For example, we may share non-personal data about your transactions, viewing activity, and region, as well as aggregated user demographics[,] such as age group and gender (which may be inferred from information such as your name and salutation in your Apple Account), to Apple TV strategic partners, such as content owners, so that they can measure the performance of their creative work [and] meet royalty and accounting requirements.

When reached for comment, an Apple spokesperson told Ars that Apple TV users can clear their play history from the app.

All that said, the Apple TV app still shares far less data with third parties than other streaming apps. Netflix, for example, says it discloses some personal information to advertising companies “in order to select Advertisements shown on Netflix, to facilitate interaction with Advertisements, and to measure and improve effectiveness of Advertisements.”

Warner Bros. Discovery says it discloses information about Max viewers “with advertisers, ad agencies, ad networks and platforms, and other companies to provide advertising to you based on your interests.” And Disney+ users have Nielsen tracking on by default.

What if you use Siri?

You can easily deactivate Siri when setting up an Apple TV. But those who opt to keep the voice assistant and the ability to control Apple TV with their voice take somewhat of a privacy hit.

According to the privacy policy accessible in Apple TV boxes’ settings, Apple boxes automatically send all Siri requests to Apple’s servers. If you opt into using Siri data to “Improve Siri and Dictation,” Apple will store your audio data. If you opt out, audio data won’t be stored, but per the policy:

In all cases, transcripts of your interactions will be sent to Apple to process your requests and may be stored by Apple.

Apple TV boxes also send audio and transcriptions of dictation input to Apple servers for processing. Apple says it doesn’t store the audio but may store transcriptions of the audio.

If you opt to “Improve Siri and Dictation,” Apple says your history of voice requests isn’t tied to your Apple account or email. But Apple is vague about how long it may store data related to voice input performed with the Apple TV if you choose this option.

The policy states:

Your request history, which includes transcripts and any related request data, is associated with a random identifier for up to six months and is not tied to your Apple Account or email address. After six months, your request history is disassociated from the random identifier and may be retained for up to two years. Apple may use this data to develop and improve Siri, Dictation, Search, and limited other language processing functionality in Apple products …

Apple may also review a subset of the transcripts of your interactions and this … may be kept beyond two years for the ongoing improvements of products and services.

Apple promises not to use Siri and voice data to build marketing profiles or sell them to third parties, but it hasn’t always adhered to that commitment. In January, Apple agreed to pay $95 million to settle a class-action lawsuit accusing Siri of recording private conversations and sharing them with third parties for targeted ads. In 2019, contractors reviewing Siri audio reported hearing private conversations, including recordings of couples having sex.

Outside of Apple, we’ve seen voice request data used questionably, including in criminal trials and by corporate employees. Siri and dictation data also represent additional ways a person’s Apple TV usage might be unexpectedly analyzed to fuel Apple’s business.

Automatic content recognition

Apple TVs aren’t preloaded with automatic content recognition (ACR), an Apple spokesperson confirmed to Ars, another plus for privacy advocates. But ACR is software, so Apple could technically add it to Apple TV boxes via a software update at some point.

Sherman Li, the founder of Enswers, the company that first put ACR in Samsung TVs, confirmed to Ars that it’s technically possible for Apple to add ACR to already-purchased Apple boxes. Years ago, Enswers retroactively added ACR to other types of streaming hardware, including Samsung and LG smart TVs. (Enswers was acquired by Gracenote, which Nielsen now owns.)

In general, though, there are challenges to adding ACR to hardware that people already own, Li explained:

Everyone believes, in theory, you can add ACR anywhere you want at any time because it’s software, but because of the way [hardware is] architected… the interplay between the chipsets, like the SoCs, and the firmware is different in a lot of situations.

Li pointed to numerous variables that could prevent ACR from being retroactively added to any type of streaming hardware, “including access to video frame buffers, audio streams, networking connectivity, security protocols, OSes, and app interface communication layers, especially at different levels of the stack in these devices, depending on the implementation.”

Due to the complexity of Apple TV boxes, Li suspects it would be difficult to add ACR to already-purchased Apple TVs. It would likely be simpler for Apple to release a new box with ACR if it ever decided to go down that route.

If Apple were to add ACR to old or new Apple TV boxes, the devices would be far less private, and the move would be highly unpopular and eliminate one of the Apple TV’s biggest draws.

However, Apple reportedly has a growing interest in advertising to streaming subscribers. The Apple TV+ streaming service doesn’t currently show commercials, but the company is rumored to be exploring a potential ad tier. The suspicions stem from a reported meeting between Apple and the United Kingdom’s ratings body, Barb, to discuss how it might track ads on Apple TV+, according to a July report from The Telegraph.

Since 2023, Apple has also hired several prominent names in advertising, including a former head of advertising at NBCUniversal and a new head of video ad sales. Meanwhile, Apple TV+ remains one of the few ad-free streaming services, and it has reportedly been losing Apple $1 billion per year since its launch.

One day soon, Apple may have much more reason to care about advertising in streaming and being able to track the activities of people who use its streaming offerings. That has implications for Apple TV box users.

“The more Apple creeps into the targeted ads space, the less I’ll trust them to uphold their privacy promises. You can imagine Apple TV being a natural progression for selling ads,” PIRG’s Cross said.

Somewhat ironically, Apple has marketed its approach to privacy as a positive for advertisers.

“Apple’s commitment to privacy and personal relevancy builds trust amongst readers, driving a willingness to engage with content and ads alike,” Apple’s advertising guide for buying ads on Apple News and Stocks reads.

The most private streaming gadget

It remains technologically possible for Apple to introduce intrusive tracking or ads to Apple TV boxes, but for now, the streaming devices are more private than the vast majority of alternatives, save for dumb TVs (which are incredibly hard to find these days). And if Apple follows its own policies, much of the data it gathers should be kept in-house.

However, those with strong privacy concerns should be aware that Apple does track certain tvOS activities, especially those that happen through Apple accounts, voice interaction, or the Apple TV app. And while most of Apple’s streaming hardware and software settings prioritize privacy by default, some advocates believe there’s room for improvement.

For example, STOP’s Maestro said:

Unlike in the [European Union], where the upcoming Data Act will set clearer rules on transfers of data generated by smart devices, the US has no real legislation governing what happens with your data once it reaches Apple’s servers. Users are left with little way to verify those privacy promises.

Maestro suggested that Apple could address these concerns by making it easier for people to conduct security research on smart device software. “Allowing the development of alternative or modified software that can evaluate privacy settings could also increase user trust and better uphold Apple’s public commitment to privacy,” Maestro said.

There are ways to limit the amount of data that advertisers can get from your Apple TV. But if you use the Apple TV app, Apple can use your activity to help make business decisions—and therefore money.

As you might expect from a device that connects to the Internet and lets you stream shows and movies, Apple TV boxes aren’t totally incapable of tracking you. But they’re still the best recommendation for streaming users seeking hardware with more privacy and fewer ads.

Photo of Scharon Harding

Scharon is a Senior Technology Reporter at Ars Technica writing news, reviews, and analysis on consumer gadgets and services. She’s been reporting on technology for over 10 years, with bylines at Tom’s Hardware, Channelnomics, and CRN UK.



AI video just took a startling leap in realism. Are we doomed?


Tales from the cultural singularity

Google’s Veo 3 delivers AI videos of realistic people with sound and music. We put it to the test.

Still image from an AI-generated Veo 3 video of “A 1980s fitness video with models in leotards wearing werewolf masks.” Credit: Google

Last week, Google introduced Veo 3, its newest video generation model that can create 8-second clips with synchronized sound effects and audio dialog—a first for the company’s AI tools. The model, which generates videos at 720p resolution (based on text descriptions called “prompts” or still image inputs), represents what may be the most capable consumer video generator to date, bringing video synthesis close to a point where it is becoming very difficult to distinguish between “authentic” and AI-generated media.

Google also launched Flow, an online AI filmmaking tool that combines Veo 3 with the company’s Imagen 4 image generator and Gemini language model, allowing creators to describe scenes in natural language and manage characters, locations, and visual styles in a web interface.

An AI-generated video from Veo 3: “ASMR scene of a woman whispering “Moonshark” into a microphone while shaking a tambourine”

Both tools are now available to US subscribers of Google AI Ultra, a plan that costs $250 a month and comes with 12,500 credits. Veo 3 videos cost 150 credits per generation, allowing 83 videos on that plan before you run out. Extra credits are available for the price of 1 cent per credit in blocks of $25, $50, or $200. That comes out to about $1.50 per video generation. But is the price worth it? We ran some tests with various prompts to see what this technology is truly capable of.
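The pricing math above works out as follows, using Google's published figures:

```python
PLAN_CREDITS = 12_500        # included with the $250/month Google AI Ultra plan
CREDITS_PER_VIDEO = 150      # cost of one Veo 3 generation
EXTRA_CREDIT_PRICE = 0.01    # extra credits sell for 1 cent apiece

videos_per_plan = PLAN_CREDITS // CREDITS_PER_VIDEO       # 83 full generations
cost_per_video = CREDITS_PER_VIDEO * EXTRA_CREDIT_PRICE   # $1.50 at a la carte rates
```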

How does Veo work?

Like other modern video generation models, Veo 3 is built on diffusion technology—the same approach that powers image generators like Stable Diffusion and Flux. The training process works by taking real videos and progressively adding noise to them until they become pure static, then teaching a neural network to reverse this process step by step. During generation, Veo 3 starts with random noise and a text prompt, then iteratively refines that noise into a coherent video that matches the description.
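The forward (noising) half of that process has a simple closed form, sketched below in NumPy. This is a toy illustration of diffusion in general, not Veo 3's actual architecture: noise is mixed in according to a variance schedule, and during training the network's whole job is to predict the `noise` term from the noisy result.

```python
import numpy as np

rng = np.random.default_rng(0)

def forward_diffuse(x0, t, betas):
    """Noise clean data x0 to step t of the schedule in one closed-form jump."""
    alpha_bar = np.prod(1.0 - betas[:t])   # cumulative fraction of signal retained
    noise = rng.standard_normal(x0.shape)
    x_t = np.sqrt(alpha_bar) * x0 + np.sqrt(1.0 - alpha_bar) * noise
    return x_t, noise                      # the network learns to predict `noise`

# A typical linear variance schedule: almost no noise early, heavy noise late.
betas = np.linspace(1e-4, 0.02, 1000)
```

At step 1 the output is still nearly the original data; by step 1,000 it is statistically indistinguishable from pure Gaussian noise, which is exactly the starting point that generation runs in reverse from.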

AI-generated video from Veo 3: “An old professor in front of a class says, ‘Without a firm historical context, we are looking at the dawn of a new era of civilization: post-history.'”

DeepMind won’t say exactly where it sourced the content to train Veo 3, but YouTube is a strong possibility. Google owns YouTube, and DeepMind previously told TechCrunch that Google models like Veo “may” be trained on some YouTube material.

It’s important to note that Veo 3 is a system composed of a series of AI models, including a large language model (LLM) to interpret user prompts to assist with detailed video creation, a video diffusion model to create the video, and an audio generation model that applies sound to the video.

An AI-generated video from Veo 3: “A male stand-up comic on stage in a night club telling a hilarious joke about AI and crypto with a silly punchline.” An AI language model built into Veo 3 wrote the joke.

In an attempt to prevent misuse, DeepMind says it’s using its proprietary watermarking technology, SynthID, to embed invisible markers into frames Veo 3 generates. These watermarks persist even when videos are compressed or edited, helping people potentially identify AI-generated content. As we’ll discuss more later, though, this may not be enough to prevent deception.

Google also censors certain prompts and outputs that breach the company’s content agreement. During testing, we encountered “generation failure” messages for videos that involve romantic and sexual material, some types of violence, mentions of certain trademarked or copyrighted media properties, some company names, certain celebrities, and some historical events.

Putting Veo 3 to the test

Perhaps the biggest change with Veo 3 is integrated audio generation, although Meta previewed a similar audio-generation capability with “Movie Gen” last October, and AI researchers have experimented with using AI to add soundtracks to silent videos for some time. Google DeepMind itself showed off an AI soundtrack-generating model in June 2024.

An AI-generated video from Veo 3: “A middle-aged balding man rapping indie core about Atari, IBM, TRS-80, Commodore, VIC-20, Atari 800, NES, VCS, Tandy 100, Coleco, Timex-Sinclair, Texas Instruments”

Veo 3 can generate everything from traffic sounds to music and character dialogue, though our early testing reveals occasional glitches. Spaghetti makes crunching sounds when eaten (as we covered last week, with a nod to the famous Will Smith AI spaghetti video), and in scenes with multiple people, dialogue sometimes comes from the wrong character’s mouth. But overall, Veo 3 feels like a step change in video synthesis quality and coherency over models from OpenAI, Runway, Minimax, Pika, Meta, Kling, and Hunyuanvideo.

The videos also tend to show garbled subtitles that almost match the spoken words, an artifact of the subtitled videos present in the training data. The AI model is imitating what it has “seen” before.

An AI-generated video from Veo 3: “A beer commercial for ‘CATNIP’ beer featuring a real a cat in a pickup truck driving down a dusty dirt road in a trucker hat drinking a can of beer while country music plays in the background, a man sings a jingle ‘Catnip beeeeeeeeeeeeeeeeer’ holding the note for 6 seconds”

We generated each of the eight-second-long 720p videos seen below using Google’s Flow platform. Each video generation took around three to five minutes to complete, and we paid for them ourselves. It’s important to note that better results come from cherry-picking—running the same prompt multiple times until you find a good result. Due to cost and in the spirit of testing, we only ran every prompt once, unless noted.

New audio prompts

Let’s dive right into the deep end with audio generation to get a grip on what this technology can do. We’ve previously shown you a man singing about spaghetti and a rapping shark in our last Veo 3 piece, but here’s some more complex dialogue.

Since 2022, we’ve been using the prompt “a muscular barbarian with weapons beside a CRT television set, cinematic, 8K, studio lighting” to test AI image generators like Midjourney. It’s time to bring that barbarian to life.

A muscular barbarian man holding an axe, standing next to a CRT television set. He looks at the TV, then to the camera and literally says, “You’ve been looking for this for years: a muscular barbarian with weapons beside a CRT television set, cinematic, 8K, studio lighting. Got that, Benj?”

The video above represents significant technical progress in AI media synthesis over the course of only three years. We’ve gone from a blurry colorful still-image barbarian to a photorealistic guy that talks to us in 720p high definition with audio. Most notably, there’s no reason to believe technical capability in AI generation will slow down from here.

Horror film: A scared woman in a Victorian outfit running through a forest, dolly shot, being chased by a man in a peanut costume screaming, “Wait! You forgot your wallet!”

Trailer for The Haunted Basketball Train: a Tim Burton film where 1990s basketball star is stuck at the end of a haunted passenger train with basketball court cars, and the only way to survive is to make it to the engine by beating different ghosts at basketball in every car

ASMR video of a muscular barbarian man whispering slowly into a microphone, “You love CRTs, don’t you? That’s OK. It’s OK to love CRT televisions and barbarians.”

1980s PBS show about a man with a beard talking about how his Apple II computer can “connect to the world through a series of tubes”

A 1980s fitness video with models in leotards wearing werewolf masks

A female therapist looking at the camera, zoom call. She says, “Oh my lord, look at that Atari 800 you have behind you! I can’t believe how nice it is!”

With this technology, one can easily imagine a virtual world of AI personalities designed to flatter people. This is a fairly innocent example about a vintage computer, but you can extrapolate, making the fake person talk about any topic at all. There are limits due to Google’s filters, but from what we’ve seen in the past, a future uncensored version of a similarly capable AI video generator is very likely.

Video call screenshot capture of a Zoom chat. A psychologist in a dark, cozy therapist’s office. The therapist says in a friendly voice, “Hi Tom, thanks for calling. Tell me about how you’re feeling today. Is the depression still getting to you? Let’s work on that.”

1960s NASA footage of the first man stepping onto the surface of the Moon, who squishes into a pile of mud and yells in a hillbilly voice, “What in tarnation??”

A local TV news interview of a muscular barbarian talking about why he’s always carrying a CRT TV set around with him

Speaking of fake news interviews, Veo 3 can generate plenty of talking anchor-persons, although sometimes on-screen text is garbled if you don’t specify exactly what it should say. It’s in cases like this where it seems Veo 3 might be most potent at casual media deception.

Footage from a news report about Russia invading the United States

Attempts at music

Veo 3’s AI audio generator can create music in various genres, although in practice, the results are typically simplistic. Still, it’s a new capability for AI video generators. Here are a few examples in various musical genres.

A PBS show of a crazy barbarian with a blonde afro painting pictures of Trees, singing “HAPPY BIG TREES” to some music while he paints

A 1950s cowboy rides up to the camera and sings in country music, “I love mah biiig ooold donkeee”

A 1980s hair metal band drives up to the camera and sings in rock music, “Help me with my huge huge huge hair!”

Mister Rogers’ Neighborhood PBS kids show intro done with psychedelic acid rock and colored lights

1950s musical jazz group with a scat singer singing about pickles amid gibberish

A trip-hop rap song about Ars Technica being sung by a guy in a large rubber shark costume on a stage with a full moon in the background

Some classic prompts from prior tests

The prompts below come from our previous video tests of Gen-3, Video-01, and the open source Hunyuanvideo, so you can flip back to those articles and compare the results if you want to. Overall, Veo 3 appears to have far greater temporal coherency (having a consistent subject or theme over time) than the earlier video synthesis models we’ve tested. But of course, it’s not perfect.

A highly intelligent person reading ‘Ars Technica’ on their computer when the screen explodes

The moonshark jumping out of a computer screen and attacking a person

A herd of one million cats running on a hillside, aerial view

Video game footage of a dynamic 1990s third-person 3D platform game starring an anthropomorphic shark boy

Aerial shot of a small American town getting deluged with liquid cheese after a massive cheese rainstorm where liquid cheese rained down and dripped all over the buildings

Wide-angle shot, starting with the Sasquatch at the center of the stage giving a TED talk about mushrooms, then slowly zooming in to capture its expressive face and gestures, before panning to the attentive audience

Some notable failures

Google’s Veo 3 isn’t perfect at synthesizing every scenario we throw at it, due to limitations in its training data. As we noted in our previous coverage, AI video generators remain fundamentally imitative, making predictions based on statistical patterns rather than a true understanding of physics or how the world works.

For example, if you see mouths moving during speech, or clothes wrinkling in a certain way when touched, it means the neural network doing the video generation has “seen” enough similar examples of that scenario in the training data to render a convincing take on it and apply it to similar situations.

However, when a novel situation (or combination of themes) isn’t well-represented in the training data, you’ll see “impossible” or illogical things happen, such as weird body parts, magically appearing clothing, or an object that “shatters” but remains in the scene afterward, as you’ll see below.

We mentioned audio and video glitches in the introduction. In particular, in scenes with multiple people, Veo 3 sometimes confuses which character is speaking, as in this argument between tech fans.

A 2000s TV debate between fans of the PowerPC and Intel Pentium chips

Bombastic 1980s infomercial for the “Ars Technica” online service. With cheesy background music and user testimonials

1980s Rambo fighting Soviets on the Moon

Sometimes requests don’t make coherent sense. In this case, “Rambo” is correctly on the Moon firing a gun, but he’s not wearing a spacesuit. He’s a lot tougher than we thought.

An animated infographic showing how many floppy disks it would take to hold an installation of Windows 11

Large amounts of text are also a weak point, but if a short quotation is explicitly specified in the prompt, Veo 3 usually gets it right.

A young woman doing a complex floor gymnastics routine at the Olympics, featuring running and flips

Despite Veo 3’s advances in temporal coherency and audio generation, it still suffers from the same “jabberwockies” we saw in OpenAI’s viral Sora gymnast video—those non-plausible video hallucinations like impossible morphing body parts.

A silly group of men and women cartwheeling across the road, singing “CHEEEESE” and holding the note for 8 seconds before falling over.

A YouTube-style try-on video of a person trying on various corncob costumes. They shout “Corncob haul!!”

A man made of glass runs into a brick wall and shatters, screaming

A man in a spacesuit holding up 5 fingers and counting down to zero, then blasting off into space with rocket boots

Counting down with fingers is difficult for Veo 3, likely because the gesture is under-represented in the training data. Hands there usually appear in a handful of stock positions: a fist, an open five-finger palm, a two-finger peace sign, or a single raised finger.

As new architectures emerge and future models train on vastly larger datasets with far more compute, these systems will likely forge deeper statistical connections between the concepts they observe in videos, dramatically improving both output quality and their ability to generalize to novel prompts.

The “cultural singularity” is coming—what more is left to say?

By now, some of you might be worried that we’re in trouble as a society due to the deceptive potential of this kind of technology. And there’s good reason to worry: The American pop culture diet currently relies heavily on clips shared by strangers through social media platforms such as TikTok, and now all of that can easily be faked from whole cloth. Automatically generated fake people can now argue for ideological positions in ways that could manipulate the masses.

AI-generated video by Veo 3: “A man on the street interview about someone who fears they live in a time where nothing can be believed”

Such videos could be (and were) faked through various means before Veo 3, but the barrier to entry has now collapsed from requiring specialized skills, expensive software, and hours of painstaking work to simply typing a prompt and waiting three minutes. What once required a team of VFX artists, or at least someone proficient in After Effects, can now be done by anyone with a credit card and an Internet connection.

But let’s take a moment to catch our breath. At Ars Technica, we’ve been warning about the deceptive potential of realistic AI-generated media since at least 2019. In 2022, we covered the AI image generator Stable Diffusion and the ability to train custom AI image models on a particular person’s likeness. We discussed Sora “collapsing media reality” and talked about persistent media skepticism during the “deep doubt era.”

AI-generated video with Veo 3: “A man on the street ranting about the ‘cultural singularity’ and the ‘cultural apocalypse’ due to AI”

I also wrote in detail about the future ability for people to pollute the historical record with AI-generated noise. In that piece, I used the term “cultural singularity” to denote a time when truth and fiction in media become indistinguishable, not only because of the deceptive nature of AI-generated content but also due to the massive quantities of AI-generated and AI-augmented media we’ll likely soon be inundated with.

However, in an article I wrote last year about cloning my dad’s handwriting using AI, I came to the conclusion that my previous fears about the cultural singularity may be overblown. Media has been vulnerable to forgery since ancient times; trust in any remote communication ultimately depends on trusting its source.

AI-generated video with Veo 3: “A news set. There is an ‘Ars Technica News’ logo behind a man. The man has a beard and a suit and is doing a sit-down interview. He says ‘This is the age of post-history: a new epoch of civilization where the historical record is so full of fabrication that it becomes effectively meaningless.’”

The Romans had laws against forgery in 80 BC, and people have been doctoring photos since the medium’s invention. What has changed isn’t the possibility of deception but its accessibility and scale.

With Veo 3’s ability to generate convincing video with synchronized dialogue and sound effects, we’re not witnessing the birth of media deception—we’re seeing its mass democratization. What once cost millions of dollars in Hollywood special effects can now be created for pocket change.

AI-generated video with Veo 3: “A candid interview of a woman who doesn’t believe anything she sees online unless it’s on Ars Technica.”

As these tools become more powerful and affordable, skepticism in media will grow. But the question isn’t whether we can trust what we see and hear. It’s whether we can trust who’s showing it to us. In an era where anyone can generate a realistic video of anything for $1.50, the credibility of the source becomes our primary anchor to truth. The medium was never the message—the messenger always was.

Benj Edwards is Ars Technica’s Senior AI Reporter and founder of the site’s dedicated AI beat in 2022. He’s also a tech historian with almost two decades of experience. In his free time, he writes and records music, collects vintage computers, and enjoys nature. He lives in Raleigh, NC.

AI video just took a startling leap in realism. Are we doomed? Read More »

my-3d-printing-journey,-part-2:-printing-upgrades-and-making-mistakes

My 3D printing journey, part 2: Printing upgrades and making mistakes


3D-printing new parts for the A1 taught me a lot about plastic, and other things.

Different plastic filament is good for different things (and some kinds don’t work well with the A1 and other open-bed printers). Credit: Andrew Cunningham

For the last three months or so, I’ve been learning to use (and love) a Bambu Lab A1 3D printer, a big, loud machine that sits on my desk and turns pictures on my computer screen into real-world objects.

In the first part of my series about diving into the wild world of 3D printers, I covered what I’d learned about the different types of 3D printers, some useful settings in the Bambu Studio app (which should also be broadly useful to know about no matter what printer you use), and some initial, magical-feeling successes in downloading files that I turned into useful physical items using a few feet of plastic filament and a couple hours of time.

For this second part, I’m focusing on what I learned when I embarked on my first major project—printing upgrade parts for the A1 with the A1. It was here that I made some of my first big 3D printing mistakes, mistakes that prompted me to read up on the different kinds of 3D printer filament, what each type of filament is good for, and which types the A1 is (and is not) good at handling as an un-enclosed, bed-slinging printer.

As with the information in part one, I share this with you not because it is groundbreaking but because there’s a lot of information out there, and it can be an intimidating hobby to break into. By sharing what I learned and what I found useful early in my journey, I hope I can help other people who have been debating whether to take the plunge.

Adventures in recursion: 3D-printing 3D printer parts

A display cover for the A1’s screen will protect it from wear and tear and allow you to easily hide it when you want to. Credit: Andrew Cunningham

My very first project was a holder for my office’s ceiling fan remote. My second, similarly, was a wall-mounted holder for the Xbox gamepad and wired headset I use with my gaming PC, which normally just had to float around loose on my desk when I wasn’t using them.

These were both relatively quick, simple prints that showed the printer was working like it was supposed to—all of the built-in temperature settings, the textured PEI plate, the printer’s calibration and auto-bed-leveling routines added up to make simple prints as dead-easy as Bambu promised they would be. It made me eager to seek out other prints, including stuff on the Makerworld site I hadn’t thought to try yet.

The first problem I had? Well, as part of its pre-print warmup routine, the A1 spits a couple of grams of filament out and tosses it to the side. This is totally normal—it’s called “purging,” and it gets rid of filament that’s gone brittle from being heated too long. If you’re changing colors, it also clears any last bits of the previous color that are still in the nozzle. But it didn’t seem particularly elegant to have the printer eternally launching little knots of plastic onto my desk.

The A1’s default design just ejects little molten wads of plastic all over your desk when it’s changing or purging filament. This is one of many waste bin (or “poop bucket”) designs made to catch and store these bits and pieces. Credit: Andrew Cunningham

The solution was to 3D-print a purging bucket for the A1 (also referred to, of course, as a “poop bucket” or “poop chute”). In fact, there are tons of purging buckets designed specifically for the A1 because it’s a fairly popular budget model, and there’s nothing stopping people from making parts that fit it like a glove.

I printed this bucket, as well as an additional little bracket that would “catch” the purged filament and make sure it fell into the bucket. And this opened the door to my first major printing project: printing additional parts for the printer itself.

I took to YouTube and watched a couple of videos on the topic because I’m apparently far from the first person who has had this reaction to the A1. After much watching and reading, here are the parts I ended up printing:

  • Bambu Lab AMS Lite Top Mount and Z-Axis Stiffener: The Lite version of Bambu’s Automated Materials System (AMS) is the optional accessory that enables multi-color printing for the A1. And like the A1 itself, it’s a lower-cost, open-air version of the AMS that works with Bambu’s more expensive printers.
    • The AMS Lite comes with a stand that you can use to set it next to the A1, but that’s more horizontal space than I had to spare. This top mount is Bambu’s official solution for putting the AMS Lite on top of the A1 instead, saving you some space.
    • The top mount actually has two important components: the top mount itself and a “Z-Axis Stiffener,” a pair of legs that extend behind the A1 to make the whole thing more stable on a desk or table. Bambu already recommends 195 mm (or 7.7 inches) of “safety margin” behind the A1 to give the bed room to sling, so if you’ve left that much space behind the printer, you probably have enough space for these legs.
    • After installing the stiffener legs, the top mount, and a fully loaded AMS Lite, it’s probably a good idea to run the printer’s calibration cycle again to account for the change in balance.
    • You may want to print the top mount itself with PETG, which is a bit stronger and more impact-resistant than PLA plastic.
  • A1 Purge Waste Bin and Deflector, by jimbobble: There are approximately 1 million different A1 purge bucket designs, each with its own appeal. But this one is large and simple, and it includes a version that is compatible with the Z-Axis Stiffener legs.
  • A1 rectangular fan cover, by Arzhang Lotfi: There are a bunch of options for this, including fun ones, but you can find dozens of simple grille designs that snap into place and protect the fan on the A1’s print head.
  • Bambu A1 Adjustable Camera Holder, by mlodybuk: This one’s a little more complicated because it does require some potentially warranty-voiding disassembly of components. The A1’s camera is also pretty awful no matter how you position it, with sub-1 FPS video that’s just barely suitable for checking on whether a print has been ruined or not.
    • But if you want to use it, I’d highly recommend moving it from the default location, which is low down and at an odd angle and doesn’t give you the best view of your print.
    • This print includes a redesigned cover for the camera area, a filler piece to fill the hole where the camera used to be to keep dust and other things from getting inside the printer, and a small camera receptacle that snaps in place onto the new cover and can be turned up and down.
    • If you’re not comfortable modding your machine like this, the camera is livable as-is, but this got me a much better vantage point on my prints.

With a little effort, this print allows you to reposition the A1’s camera, giving you a better angle on your prints and making it adjustable. Credit: Andrew Cunningham

  • A1 Screen Protector New Release, by Rox3D: Not strictly necessary, but an unobtrusive way to protect (and to “turn off”) the A1’s built-in LCD screen when it’s not in use. The hinge mechanism of this print is stiff enough that the screen cover can be lifted partway without flopping back down.
  • A1 X-Axis Cover, by Moria3DPStudio: Another only-if-you-want-it print, this foldable cover slides over the A1’s exposed rail when you’re not using it. Just make sure you take it back off before you try to print anything—it won’t break anything, but the printer won’t be happy with you. Not that I’m speaking from experience.
  • Ultimate Filament Spool Enclosure for the AMS Lite, by Supergrapher: Here’s the big one, and it’s a true learning experience for all kinds of things. The regular Bambu AMS system for the P- and X-series printers is enclosed, which is useful not just for keeping dust from settling on your filament spools but for controlling humidity and keeping spools you’ve dried from re-absorbing moisture. There’s no first-party enclosure for the AMS Lite, but this user-created enclosure is flexible and popular, and it can be used to enclose the AMS Lite whether you have it mounted on top of or to the side of the A1. The small plastic clips that keep the lids on are mildly irritating to pop on and off, relative to a lid that you can just lift up and put back down, but the benefits are worth it.
  • 3D Disc for A1 – “Pokéball,” by BS 3D Print: One of the few purely cosmetic parts I’ve printed. The little spinning bit on the front of the A1’s print head shows you when the filament is being extruded, but it’s not a functional part. This is just one of dozens and dozens of cosmetic replacements for it if you choose to pop it off.
  • Sturdy Modular Filament Spool Rack, by Antiphrasis: Not technically an upgrade for the A1, but an easy recommendation for anyone new to 3D printing who suddenly finds themselves with a rainbow of a dozen-plus filaments to try. Each shelf holds three spools of filament, and you can print additional shelves to extend the rack horizontally, vertically, or both, so you can make something that exactly meets your needs and fits your space. A two-by-three arrangement of shelves gave me room for 18 spools, and I can print more if I need them.

There are some things that others recommend for the A1 that I haven’t printed yet—mainly guides for cables, vibration dampeners for the base, and things to reinforce areas of possible stress for the print head and the A1’s loose, dangly wire.

Part of the fun is figuring out what your problems are, identifying prints that could help solve the problem, and then trying them out to see if they do solve your problem. (The parts have also given my A1 its purple accents, since a bright purple roll of filament was one of the first ones my 5-year-old wanted to get.)

Early mistakes

The “Z-Axis stiffener,” an extra set of legs for the A1 that Bambu recommends if you top-mount your AMS Lite. This took me three tries to print, mainly because of my own inexperience. Credit: Andrew Cunningham

Printing each of these parts gave me a solid crash course into common pitfalls and rookie mistakes.

For example, did you know that ABS plastic doesn’t print well on an open-bed printer? Well, it doesn’t! But I didn’t know that when I bought a spool of ABS to print some parts that I wanted to be sturdier and more resistant to wear and tear. I’d open the window and leave the room to deal with the fumes and be fine, I figured.

I tried printing the Z-Axis Stiffener supports for the A1 in ABS, but they came out wonky. Low bed temperature and (especially) low ambient temperature tend to make ABS warp and curl upward, and extrusion-based printers rely on precision to do their thing. Once a layer—any layer!—gets screwed up during a print, the error reverberates through the entire rest of the object, which is why my first attempt at supports ended up totally unusable.

Large ABS plastic prints are tough to do on an open-bed printer. You can see here how that lower-left corner peeled upward slightly from the print bed, and any unevenness in the foundation of your print is going to reverberate in the layers that are higher up. Credit: Andrew Cunningham

I then tried printing another set of supports in PLA, a design that claimed to maintain its sturdiness while using less infill. (Infill is the plastic printed inside an object to give it rigidity; around 15 percent is typically a good balance between strength and plastic you’ll never see, though there are times when you’ll want more or less.) I’m still not sure what I did wrong, but the prints I got were squishy and crunchy to the touch, a clear sign that the amount and/or type of infill wasn’t sufficient. It wasn’t until my third try—the original Bambu-made supports, in PLA instead of ABS—that I made supports I could actually use.

An attempt at printing the same part with PLA, but with insufficient infill plastic that left my surfaces rough and the interiors fragile and crunchy. I canceled this one about halfway through when it became clear that something wasn’t right. Credit: Andrew Cunningham

After much reading and research, I learned that for most things, PETG plastic is what you use if you want to make sturdier (and outdoor-friendly) prints on an open bed. Great! I decided I’d print most of the A1 ABS enclosure with clear PETG filament to make something durable that I could also see through when I wanted to see how much filament was left on a given spool.

This ended up being a tricky first experiment with PETG plastic for three different reasons. For one, printing “clear” PETG that actually looks clear is best done with a larger nozzle (Bambu offers 0.2 mm, 0.6 mm, and 0.8 mm nozzles for the A1, in addition to the default 0.4 mm) because you can get the same work done in fewer layers, and the more layers you have, the less “clear” that clear plastic will be. Fine!

The Inland-brand clear PETG+ I bought from our local Micro Center also didn’t love the A1’s default temperature settings for generic PETG, both for the heatbed and the filament itself; plastic flowed unevenly from the nozzle and was prone to coming detached from the bed. If this happens to you (or if you want to experiment with lowering your temperatures to save a bit of energy), go into Bambu Studio, nudge the temperatures by 5 degrees in either direction, and try a quick test print (I like this one); that’s how I dialed in my settings when using unfamiliar filament.

This homebrewed enclosure for the AMS Lite multi-color filament switcher (and the top mount that sticks it on top of the printer) has been my biggest and most complex print to date. A 0.8 mm nozzle and some settings changes are recommended to maximize the transparency of transparent PETG filament. Credit: Andrew Cunningham

Finally, PETG is especially prone to absorbing ambient moisture. When that moisture hits a 260°C nozzle, it quickly evaporates, which can interfere with the evenness of the flow rate and the cleanliness of your print (this usually manifests as “stringing”: fine, almost cotton-like strands that hang off your finished prints).

You can buy dedicated filament drying boxes or stick spools in an oven at a low temperature for a few hours if this really bothers you or if it’s significant enough to affect the quality of your prints. One of the reasons to have an enclosure is to create a humidity-controlled environment to keep your spools from absorbing too much moisture in the first place.

The temperature and nozzle-size adjustments made me happy enough with my PETG prints that I was fine to pick off the little fuzzy stringers that were on my prints afterward, but your mileage may vary.

These are just a few examples of the kinds of things you learn if you jump in with both feet and experiment with different prints and plastics in rapid succession. Hopefully, this advice helps you avoid my specific mistakes. But the main takeaway is that experience is the best teacher.

The wide world of plastics

I used filament to print a modular filament shelf for my filaments. Credit: Andrew Cunningham

My wife had gotten me two spools of filament, a white and a black spool of Bambu’s own PLA Basic. What does all of that mean?

No matter what you’re buying, it’s most commonly sold in 1 kilogram spools (the weight of the plastic, not the plastic and the spool together). Each thing you print will give you an estimate of how much filament, in grams, you’ll need to print it.
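The spool math is simple division. As a rough illustration (the 1 kg spool weight comes from above, but the per-print gram figures below are made-up examples, not real slicer output):

```python
# Rough filament budgeting: how many copies of a print fit on one spool.
# A standard spool holds 1 kg of plastic (not counting the spool itself).
SPOOL_GRAMS = 1000

def prints_per_spool(grams_per_print: float, spool_grams: float = SPOOL_GRAMS) -> int:
    """Whole prints you can expect from one spool, ignoring purge waste."""
    return int(spool_grams // grams_per_print)

# Example: a hypothetical 62 g part fits 16 times on a fresh 1 kg spool.
print(prints_per_spool(62))   # 16
print(prints_per_spool(250))  # 4
```

In practice, you’d want to budget a little extra for purged filament and failed prints, so treat the result as an upper bound.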

There are quite a few different types of plastics out there, on Bambu’s site and in other stores. But here are the big ones I found out about almost immediately:

Polylactic acid, or PLA

By far the most commonly used plastic, PLA is inexpensive, available in a huge rainbow of colors and textures, and has a relatively low melting point, making it an easy material for most 3D printers to work with. It’s made of renewable material rather than petroleum, which makes it marginally more environmentally friendly than some other kinds of plastic. And it’s easy to “finish” PLA-printed parts if you’re trying to make props, toys, or other objects that you don’t want to have that 3D printed look about them, whether you’re sanding those parts or using a chemical to smooth the finish.

The downside is that it’s not particularly resilient—sitting in a hot car or in direct sunlight for very long is enough to melt or warp it, which makes it a bad choice for anything that needs to survive outdoors or anything load-bearing. Its environmental bona fides are also a bit oversold—it is biodegradable, but it doesn’t do so quickly outside of specialized composting facilities. If you throw it in the trash and it goes to a landfill, it will still take its time returning to nature.

You’ll find a ton of different kinds of PLA out there. Some have additives that give them a matte or silky texture. Some have little particles of wood or metal or even coffee or spent beer grains embedded in them, meant to endow 3D printed objects with the look, feel, or smell of those materials.

Some PLA just has… some other kind of unspecified additive in it. You’ll see “PLA+” all over the place, but as far as I can tell, there is no industry-wide agreed-upon standard for what the plus is supposed to mean. Manufacturers sometimes claim it’s stronger than regular PLA; other terms like “PLA Pro” and “PLA Max” are similarly non-standardized and vague.

Polyethylene terephthalate glycol, or PETG

PET is a common household plastic, and you’ll find it in everything from clothing fibers to soda bottles. PETG is the same material, with ethylene glycol (the “G”) added to lower the melting point and make it less prone to crystallizing and warping. It also makes it more transparent, though trying to print anything truly “transparent” with an extrusion printer is difficult.

PETG has a higher melting point than PLA, but it’s still lower than other kinds of plastics. This makes PETG a good middle ground for some types of printing. It’s better than PLA for functional load-bearing parts and outdoor use because it’s stronger and able to bend a bit without warping, but it’s still malleable enough to print well on all kinds of home 3D printers.

PETG can still be fussier to work with than PLA. I more frequently had issues with the edges of my PETG prints coming unstuck from the bed of the printer before the print was done.

PETG filament is also especially susceptible to absorbing moisture from the air, which can make extrusion messier. My PETG prints have usually had lots of little wispy strings of plastic hanging off them by the end—not enough to affect the strength or utility of the thing I’ve printed but enough that I needed to pull the strings off to clean up the print once it was done. Drying the filament properly could help with that if I ever need the prints to be cleaner in the first place.

It’s also worth noting that PETG is the strongest kind of filament that an open-bed printer like the A1 can handle reliably. You can succeed with other plastics, but Reddit anecdotes, my own experience, and Bambu’s filament guide all point to a higher level of difficulty.

Acrylonitrile butadiene styrene, or ABS

“Going to look at the filament wall at Micro Center” is a legit father-son activity at this point. Credit: Andrew Cunningham

You probably have a lot of ABS plastic in your life. Game consoles and controllers, the plastic keys on most keyboards, Lego bricks, appliances, plastic board game pieces—it’s mostly ABS.

Thin layers of ABS stuck together aren’t as strong or durable as commercially manufactured injection-molded ABS, but it’s still more heat-resistant and durable than 3D-printed PLA or PETG.

There are two big issues specific to ABS, which are also outlined in Bambu’s FAQ for the A1. The first is that it doesn’t print well on an open-bed printer, especially for larger prints. The corners are more prone to pulling up off the print bed, and as with a house, any problems in your foundation will reverberate throughout the rest of your print.

The second is fumes. All 3D-printed plastics emit fumes when they’ve been melted, and a good rule of thumb is to at least print things in a room where you can open the window (and not in a room where anyone or anything sleeps). But ABS and ASA plastics in particular can emit fumes that cause eye and respiratory irritation, headaches, and nausea if you’re printing them indoors with insufficient ventilation.

As for what quantity of printing counts as “dangerous,” there’s no real consensus, and the studies that have been done mostly land in inconclusive “further study is needed” territory. At a bare minimum, it’s considered a best practice to at least be able to open a window if you’re printing with ABS or to use a closed-bed printer in an unoccupied part of your home, like a garage, shed, or workshop space (if you have one).

Acrylonitrile styrene acrylate, or ASA

Described to me by Ars colleague Lee Hutchinson as “ABS but with more UV resistance,” this material is even better suited for outdoor applications than the other plastics on this list.

But also like ABS, you’ll have a hard time getting good results with an open-bed printer, and the fumes are more harmful to inhale. You’ll want a closed-bed printer and decent ventilation for good results.

Thermoplastic polyurethane, or TPU

TPU is best known for its flexibility relative to the other kinds of plastic on this list. It doesn’t get as brittle in the cold, it has more impact resistance, and it can print reasonably well on an open-bed printer.

One downside of TPU is that you need to print slowly to get reliably good results—a pain, when even relatively simple fidget toys can take an hour or two to print at full speed using PLA. Longer prints mean more power use and more opportunities for your print to peel off the print bed. A roll of TPU filament will also usually run you a few dollars more than a roll of PLA, PETG, or ABS.

First- or third-party filament?

The first-party Bambu spools have RFID chips that Bambu printers can scan to automatically identify the filament’s type and color and to keep track of how much remains. Bambu also has temperature and speed presets for all of its first-party filaments built into the printer and the Bambu Studio software. There are presets for a few other filament brands, too, but I usually ended up using the “generic” presets, which may need some tuning to ensure the best possible bed adhesion and nozzle extrusion.

I mostly ended up using Inland-branded filament I picked up from my local Micro Center—both because it’s cheaper than Bambu’s first-party stuff and because it’s faster and easier for me to get to. If you don’t have a brick-and-mortar hobby store with filaments in stock, the A1 and other printers sometimes come with some sample filament swatches so you can see the texture and color of the stuff you’re buying online.

What’s next?

Part of the fun of 3D printing is that it can be used for a wide array of projects—organizing your desk or your kitchen, printing out little fidget-toy favors for your kid’s birthday party, printing out replacement parts for little plastic bits and bobs that have broken, or just printing out decorations and other objects you’ll enjoy looking at.

Once you’re armed with all of the basic information in this guide, the next step is really up to you. What would you find fun or useful? What do you need? How can 3D printing help you with other household tasks or hobbies that you might be trying to break into? For the last part of this series, the Ars staffers with 3D printers at home will share some of their favorite prints—hearing people talk about what they’d done themselves really opened my eyes to the possibilities and the utility of these devices, and more personal testimonials may help those of you who are on the fence to climb down off of it.

Andrew is a Senior Technology Reporter at Ars Technica, with a focus on consumer tech including computer hardware and in-depth reviews of operating systems like Windows and macOS. Andrew lives in Philadelphia and co-hosts a weekly book podcast called Overdue.

My 3D printing journey, part 2: Printing upgrades and making mistakes Read More »

where-hyperscale-hardware-goes-to-retire:-ars-visits-a-very-big-itad-site

Where hyperscale hardware goes to retire: Ars visits a very big ITAD site

Inside the laptop/desktop examination bay at SK TES’s Fredericksburg, Va. site.

Credit: SK TES


The details of each unit—CPU, memory, HDD size—are taken down and added to the asset tag, and the device is sent on to be physically examined. This step is important because “many a concealed drive finds its way into this line,” Kent Green, manager of this site, told me. Inside the machines coming from big firms, there are sometimes little USB, SD, SATA, or M.2 drives hiding out. Some were make-do solutions installed by IT and not documented, and others were put there by employees tired of waiting for more storage. “Some managers have been pretty surprised when they learn what we found,” Green said.

With everything wiped and with some sense of what they’re made of, each device gets a rating. It’s a three-character system, like “A-3-6,” based on function, cosmetic condition, and component value. Based on needs, trends, and other data, each device is then routed to wholesale, retail, component harvesting, or scrap.
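The grade-then-route flow described above can be sketched as a small program. This is purely illustrative—the field meanings follow the article, but the specific letter/number thresholds and routing rules here are invented, not SK TES’s actual scheme:

```python
# Hypothetical parser and router for a three-character ITAD grade
# like "A-3-6" (function, cosmetic condition, component value).
# The thresholds below are illustrative assumptions only.

def parse_grade(code: str) -> dict:
    function, cosmetic, value = code.split("-")
    return {
        "function": function,           # e.g. "A" = fully working
        "cosmetic": int(cosmetic),      # lower = better condition
        "component_value": int(value),  # higher = more valuable parts
    }

def route(grade: dict) -> str:
    """Illustrative routing: resale if working and presentable,
    otherwise harvest components or scrap."""
    if grade["function"] == "A" and grade["cosmetic"] <= 3:
        return "retail"
    if grade["component_value"] >= 5:
        return "component harvesting"
    return "scrap"

print(route(parse_grade("A-3-6")))  # retail
```

The point of a compact code like this is that a sorter (or a script) can make a routing decision from three characters without re-inspecting the device.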

Full-body laptop skins

Wiping down and prepping a laptop, potentially for a full-cover adhesive skin.

Credit: SK TES


If a device has retail value, it heads into a section of this giant facility where workers do further checks. Automated software plays sounds on the speakers, checks that every keyboard key is sending signals, and checks that laptop batteries are at 80 percent capacity or better. At the end of the line is my favorite discovery: full-body laptop skins.

Some laptops—certain Lenovo, Dell, and HP models—are so ubiquitous in corporate fleets that it’s worth buying an adhesive laminating sticker in their exact shape. They’re an uncanny match for the matte black, silver, and slightly less silver finishes of the laptops, covering up any blemishes and scratches. Watching one of the workers apply this made me jealous of their ability to essentially reset a laptop’s condition (so one could apply whole new layers of swag stickers, of course). Once rated, tested, and stickered, laptops go into a clever “cradle” box, get the UN 3481 “battery inside” sticker, and can be sold through retail.

Where hyperscale hardware goes to retire: Ars visits a very big ITAD site Read More »

200-mph-for-500-miles:-how-indycar-drivers-prepare-for-the-big-race

200 mph for 500 miles: How IndyCar drivers prepare for the big race


Andretti Global’s Kyle Kirkwood and Marcus Ericsson talk to us about the Indy 500.

INDIANAPOLIS, INDIANA - MAY 15: #28, Marcus Ericsson, Andretti Global Honda prior to the NTT IndyCar Series 109th Running of the Indianapolis 500 at Indianapolis Motor Speedway on May 15, 2025 in Indianapolis, Indiana.

#28, Marcus Ericsson, Andretti Global Honda prior to the NTT IndyCar Series 109th Running of the Indianapolis 500 at Indianapolis Motor Speedway on May 15, 2025 in Indianapolis, Indiana. Credit: Brandon Badraoui/Lumen via Getty Images


This coming weekend is a special one for most motorsport fans. There are Formula 1 races in Monaco and NASCAR races in Charlotte. And arguably towering over them both is the Indianapolis 500, being held this year for the 109th time. America’s oldest race is also one of its toughest: The track may have just four turns, but the cars negotiate them going three times faster than you drive on the highway, inches from the wall. For hours. At least at Le Mans, you have more than one driver per car.

This year’s race promises to be an exciting one. The track is sold out for the first time since the centenary race in 2016. A rookie driver and a team new to the series took pole position. Two very fast cars are starting at the back thanks to another conflict-of-interest scandal involving Team Penske, the second in two years for a team whose owner also owns the track and the series. And the cars are trickier to drive than they have been for many years, thanks to a new supercapacitor-based hybrid system that has added more than 100 lbs to the rear of the car, shifting the weight distribution further back.

Ahead of Sunday’s race, I spoke with a couple of IndyCar drivers and some engineers to get a better sense of how they prepare and what to expect.

INDIANAPOLIS, INDIANA - MAY 17: #28, Marcus Ericsson, Andretti Global Honda during qualifying for the NTT IndyCar Series 109th Running of the Indianapolis 500 at Indianapolis Motor Speedway on May 17, 2025 in Indianapolis, Indiana.

This year, the cars are harder to drive thanks to a hybrid system that has altered the weight balance. Credit: Geoff Miller/Lumen via Getty Images

Concentrate

It all comes “from months of preparation,” said Marcus Ericsson, winner of the race in 2022 and one of Andretti Global’s drivers in this year’s event. “When we get here to the month of May, it’s just such a busy month. So you’ve got to be prepared mentally—and basically before you get to the month of May because if you start doing it now, it’s too late,” he told me.

The drivers spend all month at the track, with a race on the road course earlier this month. Then there’s testing on the historic oval, followed by qualifying last weekend and the race this coming Sunday. “So all those hours you put in in the winter, really, and leading up here to the month of May—it’s what pays off now,” Ericsson said. That work involved multiple sessions of physical training each week, and Ericsson says he also does weekly mental coaching sessions.

“This is a mental challenge,” Ericsson told me. “Doing those speeds with our cars, you can’t really afford to have a split second of loss of concentration because then you might be in the wall and your day is over and you might hurt yourself.”

When drivers get tired or their focus slips, that’s when mistakes happen, and a mistake at Indy often has consequences.

A racing driver stands in front of four mechanics, who are facing away from him. The mechanics have QR codes on the back of their shirts.

Ericsson is sponsored by the antihistamine Allegra and its anti-drowsy-driving campaign. Fans can scan the QR codes on the back of his pit crew’s shirts for a “gamified experience.” Credit: Andretti Global/Allegra

Simulate

Being mentally and physically prepared is part of it. It also helps if you can roll the race car off the transporter and onto the track with a setup that works rather than spending the month chasing the right combination of dampers, springs, wing angles, and so on. And these days, that means a lot of simulation testing.

The multi-axis driver-in-the-loop simulators might look like just very expensive video games, but these multimillion-dollar setups aren’t about having fun. “Everything that you are feeling or changing in the sim is ultimately going to reflect directly to what happens on track,” explained Kyle Kirkwood, teammate to Ericsson at Andretti Global and one of only two drivers to have won an IndyCar race in 2025.

Andretti, like the other teams using Honda engines, uses the new HRC simulator in Indiana. “And yes, it’s a very expensive asset, but it’s also likely cheaper than going to the track and doing the real thing,” Kirkwood said. “And it’s a much more controlled environment than being at the track because temperature changes or track conditions or wind direction play a huge factor with our car.”

A high degree of correlation between the simulation and the track is what makes it a powerful tool. “We run through a sim, and you only get so many opportunities, especially at a place like Indianapolis, where you go from one day to the next and the temperature swings, or the wind conditions, or whatever might change drastically,” Kirkwood said. “You have to be able to sim it and be confident with the sim that you’re running to go out there and have a similar balance or a similar performance.”

Kyle Kirkwood’s IndyCar drives past the IMS logo on one of the track walls.

Andretti Global’s Kyle Kirkwood is the only driver other than Álex Palou to have won an IndyCar race in 2025. Credit: Alison Arena/Andretti Global

“So you have to make adjustments, whether it’s a spring rate, whether it’s keel ballast or just overall, maybe center of pressure, something like that,” Kirkwood said. “You have to be able to adjust to it. And that’s where the sim tool comes in play. You move the weight balance back, and you’re like, OK, now what happens with the balance? How do I tune that back in? And you run that all through the sim, and for us, it’s been mirror-perfect going to the track when we do that.”

More impressively, a lot of that work was done months ago. “I would say most of it, we got through it before the start of this season,” Kirkwood said. “Once we get into the season, we only get a select few days because every Honda team has to run on the same simulator. Of course, it’s different with the engineering sim; those are running nonstop.”

Sims are for engineers, too

An IndyCar team is more than just its drivers—”the spacer between the seat and the wheel,” according to Kirkwood—and the engineers rely heavily on sim work now that real-world testing is so highly restricted. And they use a lot more than just driver-in-the-loop (DiL).

“Digital simulation probably goes to a higher level,” explained Scott Graves, engineering manager at Andretti Global. “A lot of the models we develop work in the DiL as well as our other digital tools. We try to develop universal models, whether that’s tire models, engine models, or transmission models.”

“Once you get into a fully digital model, then I think your optimization process starts kicking in,” Graves said. “You’re not just changing the setting and running a pretend lap with a driver holding a wheel. You’re able to run through numerous settings and optimization routines and step through a massive number of permutations on a car. Obviously, you’re looking for better lap times, but you’re also looking for fuel efficiency and a lot of other parameters that go into crossing the finish line first.”

A screenshot of a finite element analysis tool

Parts like this anti-roll bar are simulated thousands of times. Credit: Siemens/Andretti Global

As an example, Graves points to the dampers. “The shock absorber is a perfect example where that’s a highly sophisticated piece of equipment on the car and it’s very open for team development. So our cars have fully customized designs there that are optimized for how we run the car, and they may not be good on another team’s car because we’re so honed in on what we’re doing with the car,” he said.

“The more accurate a digital twin is, the more we are able to use that digital twin to predict the performance of the car,” said David Taylor, VP of industry strategy at Siemens DISW, which has partnered with Andretti for some years now. “It will never be as complete and accurate as we want it to be. So it’s a continuous pursuit, and we keep adding technology to our portfolio and acquiring companies to try to provide more and more tools to people like Scott so they can more accurately predict that performance.”

What to expect on Sunday?

Kirkwood was bullish about his chances despite starting relatively deep in the field, qualifying in 23rd place. “We’ve been phenomenal in race trim and qualifying,” he said. “We had a bit of a head-scratcher if I’m being honest—I thought we would definitely be a top-six contender, if not a front row contender, and it just didn’t pan out that way on Saturday qualifying.”

“But we rolled back out on Monday—the car was phenomenal. Once again, we feel very, very racy in traffic, which is a completely different animal than running qualifying,” Kirkwood said. “So I’m happy with it. I think our chances are good. We’re starting deep in the field, but so are a lot of other drivers. So you can expect a handful of us to move forward.”

The more nervous hybrid IndyCars with their more rearward weight bias will probably result in more cautions, according to Ericsson, who will line up sixth for the start of the race on Sunday.

“Whereas in previous years you could have a bit of a moment and it would scare you, you usually get away with it,” he said. “This year, if you have a moment, it usually ends up with you being in the fence. I think that’s why we’ve seen so many crashes this year—because of a pendulum effect from the rear of the car; when you start losing it, it’s very, very difficult or almost impossible to catch.”

“I think it’s going to mean that the race is going to be quite a few incidents with people making mistakes,” Ericsson said. “In practice, if your car is not behaving well, you bring it to the pit lane, right? You can do adjustments, whereas in the race, you have to just tough it out until the next pit stop and then make some small adjustments. So if you have a bad car at the start a race, it’s going to be a tough one. So I think it’s going to be a very dramatic and entertaining race.”

Photo of Jonathan M. Gitlin

Jonathan is the Automotive Editor at Ars Technica. He has a BSc and PhD in Pharmacology. In 2014 he decided to indulge his lifelong passion for the car by leaving the National Human Genome Research Institute and launching Ars Technica’s automotive coverage. He lives in Washington, DC.

200 mph for 500 miles: How IndyCar drivers prepare for the big race Read More »

what-i-learned-from-my-first-few-months-with-a-bambu-lab-a1-3d-printer,-part-1

What I learned from my first few months with a Bambu Lab A1 3D printer, part 1


One neophyte’s first steps into the wide world of 3D printing.

The hotend on my Bambu Lab A1 3D printer. Credit: Andrew Cunningham


For a couple of years now, I’ve been trying to find an excuse to buy a decent 3D printer.

Friends and fellow Ars staffers who had them would gush about them at every opportunity, talking about how useful they can be and how much can be printed once you get used to the idea of being able to create real, tangible objects with a little time and a few bucks’ worth of plastic filament.

But I could never quite imagine myself using one consistently enough to buy one. Then, this past Christmas, my wife forced the issue by getting me a Bambu Lab A1 as a present.

Since then, I’ve been tinkering with the thing nearly daily, learning more about what I’ve gotten myself into and continuing to find fun and useful things to print. I’ve gathered a bunch of thoughts about my learning process here, not because I think I’m breaking new ground but to serve as a blueprint for anyone who has been on the fence about Getting Into 3D Printing. “Hyperfixating on new hobbies” is one of my go-to coping mechanisms during times of stress and anxiety, and 3D printing has turned out to be the perfect combination of fun, practical, and time-consuming.

Getting to know my printer

My wife settled on the Bambu A1 because it’s a larger version of the A1 Mini, Wirecutter’s main 3D printer pick at the time (she also noted it was “hella on sale”). Other reviews she read noted that it’s beginner-friendly, easy to use, and fun to tinker with, and it has a pretty active community for answering questions, all assessments I agree with so far.

Note that this research was done some months before Bambu earned bad headlines because of firmware updates that some users believe will lead to a more locked-down ecosystem. This is a controversy I understand—3D printers are still primarily the realm of DIYers and tinkerers, people who are especially sensitive to the closing of open ecosystems. But as a beginner, I’m already leaning mostly on the first-party tools and built-in functionality to get everything going, so I’m not really experiencing the sense of having “lost” features I was relying on, and any concerns I did have are mostly addressed by Bambu’s update about its update.

I hadn’t really updated my preconceived notions of what home 3D printing was since its primordial days, something Ars has been around long enough to have covered in some depth. I was wary of getting into yet another hobby where, like building your own gaming PC, fiddling with and maintaining the equipment is part of the hobby. But Bambu’s printers (and those like them) are capable of turning out fairly high-quality prints with minimal fuss, and nothing will draw you into the hobby faster than a few successful prints.

Basic terminology

Extrusion-based 3D printers (also sometimes called “FDM,” for “fused deposition modeling”) work by depositing multiple thin layers of melted plastic filament on a heated bed. Credit: Andrew Cunningham

First things first: The A1 is what’s called an “extrusion” printer, meaning that it functions by melting a long, slim thread of plastic (filament) and then depositing this plastic onto a build plate seated on top of a heated bed in tens, hundreds, or even thousands of thin layers. In the manufacturing world, this is also called “fused deposition modeling,” or FDM. This layer-based extrusion gives 3D-printed objects their distinct ridged look and feel and is also why a 3D printed piece of plastic is less detailed-looking and weaker than an injection-molded piece of plastic like a Lego brick.

The other readily available home 3D printing technology takes liquid resin and uses UV light to harden it into a plastic structure, using a process called “stereolithography” (SLA). You can get inexpensive resin printers in the same price range as the best cheap extrusion printers, and the SLA process can create much more detailed, smooth-looking, and watertight 3D prints (it’s popular for making figurines for tabletop games). Some downsides are that the print beds in these printers are smaller, resin is a bit fussier than filament, and multi-color printing isn’t possible.

There are two main types of home extrusion printers. The Bambu A1 is a Cartesian printer, or in more evocative and colloquial terms, a “bed slinger.” In these, the head of the printer can move up and down on one or two rails and from side to side on another rail. But the print bed itself has to move forward and backward to “move” the print head on the Y axis.

More expensive home 3D printers, including higher-end Bambu models in the P- and X-series, are “CoreXY” printers, which include a third rail or set of rails (and more Z-axis rails) that allow the print head to travel in all three directions.

The A1 is also an “open-bed” printer, which means that it ships without an enclosure. Closed-bed printers are more expensive, but they can maintain a more consistent temperature inside and help contain the fumes from the melted plastic. They can also reduce the amount of noise coming from your printer.

Together, the downsides of a bed-slinger (introducing more wobble for tall prints, more opportunities for parts of your print to come loose from the plate) and an open-bed printer (worse temperature, fume, and dust control) mainly just mean that the A1 isn’t well-suited for printing certain types of plastic and has more potential points of failure for large or delicate prints. My experience with the A1 has been mostly positive now that I know about those limitations, but the printer you buy could easily change based on what kinds of things you want to print with it.

Setting up

Overall, the setup process was reasonably simple, at least for someone who has been building PCs and repairing small electronics for years now. It’s not quite the same as the “take it out of the box, remove all the plastic film, and plug it in” process of setting up a 2D printer, but the directions in the start guide are well-illustrated and clearly written; if you can put together prefab IKEA furniture, that’s roughly the level of complexity we’re talking about here. The fact that delicate electronics are involved might still make it more intimidating for the non-technical, but figuring out what goes where is fairly simple.

The only mistake I made while setting the printer up involved the surface I initially tried to put it on. I used a spare end table, but as I discovered during the printer’s calibration process, the herky-jerky movement of the bed and print head was way too much for a little table to handle. “Stable enough to put a lamp on” is not the same as “stable enough to put a constantly wobbling contraption on”—obvious in retrospect, but my being new to this is why this article exists.

After some office rearrangement, I was able to move the printer to my sturdy L-desk full of cables and other doodads to serve as ballast. This surface was more than sturdy enough to let the printer complete its calibration process—and sturdy enough not to transfer the printer’s every motion to our kid’s room below, a boon for when I’m trying to print something after he has gone to bed.

The first-party Bambu apps for sending files to the printer are Bambu Handy (for iOS/Android, with no native iPad version) and Bambu Studio (for Windows, macOS, and Linux). Handy works OK for sending ready-made models from MakerWorld (a mostly community-driven but Bambu-operated repository for 3D-printable files) and for monitoring prints once they’ve started. But I’ll mostly be relaying my experience with Bambu Studio, a much more fully featured app. Neither app requires sign-in, at least not yet, but the path of least resistance is to sign into your printer and apps with the same account to enable easy communication and syncing.

Bambu Studio: A primer

Bambu Studio is what’s known in the hobby as a “slicer,” software that takes existing 3D models output by common CAD programs (Tinkercad, FreeCAD, SolidWorks, Autodesk Fusion, others) and converts them into a set of specific movement instructions that the printer can follow. Bambu Studio allows you to do some basic modification of existing models—cloning parts, resizing them, adding supports for overhanging bits that would otherwise droop down, and a few other functions—but it’s primarily there for opening files, choosing a few settings, and sending them off to the printer to become tangible objects.

Bambu Studio isn’t the most approachable application, but if you’ve made it this far, it shouldn’t be totally beyond your comprehension. For first-time setup, you’ll choose your model of printer (all Bambu models and a healthy selection of third-party printers are officially supported), leave the filament settings as they are, and sign in if you want to use Bambu’s cloud services. These sync printer settings and keep track of the models you save and download from MakerWorld, but a non-cloud LAN mode is available for the Bambu skeptics and privacy-conscious.

For any newbie, pretty much all you need to do is connect your printer, open a .3MF or .STL file you’ve downloaded from MakerWorld or elsewhere, select your filament from the drop-down menu, click “slice plate,” and then click “print.” Things like the default 0.4 mm nozzle size and Bambu’s included Textured PEI Build Plate are generally already factored in, though you may need to double-check these selections when you open a file for the first time.

When you slice your build plate for the first time, the app will spit a pile of numbers back at you. There are two important ones for 3D printing neophytes to track. One is the “total filament” figure, which tells you how many grams of filament the printer will use to make your model (filament typically comes in 1 kg spools, and the printer generally won’t track usage for you, so if you want to avoid running out in the middle of the job, you may want to keep track of what you’re using). The second is the “total time” figure, which tells you how long the entire print will take from the first calibration steps to the end of the job.
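Since the printer won’t track spool usage for you, a running tally of the slicer’s “total filament” figures is the simplest way to avoid a mid-print run-out. A minimal sketch, with example numbers:

```python
# Minimal spool-tracking sketch: Bambu Studio reports grams of
# filament per sliced job, so subtracting each job's figure from the
# spool weight tells you what's left. The job weights are examples.

SPOOL_GRAMS = 1000  # filament typically comes on 1 kg spools

def remaining(spool_grams: float, jobs: list[float]) -> float:
    """Grams of filament left after printing the given jobs."""
    return spool_grams - sum(jobs)

jobs = [42.5, 17.8, 63.0]  # "total filament" figures from the slicer
left = remaining(SPOOL_GRAMS, jobs)
print(f"{left:.1f} g left")                 # 876.7 g left
print("enough for a 90 g job:", left >= 90)
```

In practice this could be a note on the spool itself, but the arithmetic is the same: compare what the slicer says the next job needs against what the spool has left before hitting print.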

Selecting your filament and/or temperature presets. If you have the Automatic Material System (AMS), this is also where you’ll manage multicolor printing. Credit: Andrew Cunningham

When selecting filament, people who stick to Bambu’s first-party spools will have the easiest time, since optimal settings are already programmed into the app. But I’ve had almost zero trouble with the “generic” presets and the spools of generic Inland-branded filament I’ve bought from our local Micro Center, at least when sticking to PLA (polylactic acid, the most common and generally the easiest-to-print of the different kinds of filament you can buy). But we’ll dive deeper into plastics in part 2 of this series.

I won’t pretend I’m skilled enough to do a deep dive on every single setting that Bambu Studio gives you access to, but here are a few of the odds and ends I’ve found most useful:

  • The “clone” function, accessed by right-clicking an object and clicking “clone.” Useful if you’d like to fit several copies of an object on the build plate at once, especially if you’re using a filament with a color gradient and you’d like to make the gradient effect more pronounced by spreading it out over a bunch of prints.
  • The “arrange all objects” function, the fourth button from the left under the “prepare” tab. Did you just clone a bunch of objects? Did you delete an individual object from a model because you didn’t need to print that part? Bambu Studio will arrange everything on your build plate to optimize the use of space.
  • Layer height, located in the sidebar directly beneath “Process” (which is directly underneath the area where you select your filament). For many functional parts, the standard 0.2 mm layer height is fine. Going with thinner layer heights adds to the printing time but can preserve more detail on prints that have a lot of it and slightly reduce the visible layer lines that give 3D-printed objects their distinct look (for better or worse). Thicker layer heights do the opposite, slightly reducing the amount of time a model takes to print but preserving less detail.
  • Infill percentage and wall loops, located in the Strength tab beneath the “Process” sidebar item. For most everyday prints, you don’t need to worry about messing with these settings much; the infill percentage determines how much of your print’s interior is solid plastic and how much is empty space (15 percent is a good happy medium most of the time between maintaining rigidity and overusing plastic). The number of wall loops determines how many layers the printer uses for the outside surface of the print, with more walls using more plastic but also adding a bit of extra strength and rigidity to functional prints that need it (think hooks, hangers, shelves and brackets, and other things that will be asked to bear some weight).
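To see why layer height trades time for detail: the printer deposits one layer per pass, so the layer count (and, roughly, the print time) scales inversely with layer height. A quick back-of-the-envelope sketch with an invented 40 mm-tall model:

```python
# Illustrative only: relates layer height to the number of layers a
# print needs. The 40 mm model height is a made-up example.
import math

def layer_count(object_height_mm: float, layer_height_mm: float) -> int:
    """One nozzle pass deposits one layer, so halving the layer
    height roughly doubles the layer count (and the print time)."""
    return math.ceil(object_height_mm / layer_height_mm)

height = 40.0  # mm, hypothetical model
for lh in (0.28, 0.20, 0.12):
    print(f"{lh} mm layers -> {layer_count(height, lh)} passes")
```

Dropping from the standard 0.2 mm to 0.12 mm layers means roughly two-thirds more passes over the same model, which is where the extra print time comes from.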

My first prints

A humble start: My very first print was a wall bracket for the remote for my office’s ceiling fan. Credit: Andrew Cunningham

When given the opportunity to use a 3D printer, my mind went first to aggressively practical stuff—prints for organizing the odds and ends that eternally float around my office or desk.

When we moved into our current house, only one of the bedrooms had a ceiling fan installed. I put up remote-controlled ceiling fans in all the other bedrooms myself. And all those fans, except one, came with a wall-mounted caddy to hold the remote control. The first thing I decided to print was a wall-mounted holder for that remote control.

MakerWorld is just one of several resources for ready-made 3D-printable files, but the ease with which I found a Hampton Bay Ceiling Fan Remote Wall Mount is pretty representative of my experience so far. At this point in the life cycle of home 3D printing, if you can think about it and it’s not a terrible idea, you can usually find someone out there who has made something close to what you’re looking for.

I loaded up my black roll of PLA plastic—generally the cheapest, easiest-to-buy, easiest-to-work-with kind of 3D printer filament, though not always the best for prints that need more structural integrity—into the basic roll-holder that comes with the A1, downloaded that 3MF file, opened it in Bambu Studio, sliced the file, and hit print. It felt like there should have been extra steps in there somewhere. But that’s all it took to kick the printer into action.

After a few minutes of warmup—by default, the A1 has a thorough pre-print setup process where it checks the levelness of the bed and tests the flow rate of your filament for a few minutes before it begins printing anything—the nozzle started laying plastic down on my build plate, and inside of an hour or so, I had my first 3D-printed object.

Print No. 2 was another wall bracket, this time for my gaming PC’s gamepad and headset. Credit: Andrew Cunningham

It wears off a bit after you successfully execute a print, but I still haven’t quite lost the feeling of magic of printing out a fully 3D object that comes off the plate and then just exists in space along with me and all the store-bought objects in my office.

The remote holder was, as I’d learn, a fairly simple print made under near-ideal conditions. But it was an easy success to start off with, and that success can help embolden you and draw you in, inviting more printing and more experimentation. And the more you experiment, the more you inevitably learn.

This time, I covered what I learned about basic terminology and the kinds of plastics most commonly used by home 3D printers. Next time, I’ll talk about some of the pitfalls I ran into after my initial successes, what I learned about using Bambu Studio, what I’ve learned about fine-tuning settings to get good results, and a whole bunch of 3D-printable upgrades and mods available for the A1.

Photo of Andrew Cunningham

Andrew is a Senior Technology Reporter at Ars Technica, with a focus on consumer tech including computer hardware and in-depth reviews of operating systems like Windows and macOS. Andrew lives in Philadelphia and co-hosts a weekly book podcast called Overdue.

What I learned from my first few months with a Bambu Lab A1 3D printer, part 1 Read More »

sierra-made-the-games-of-my-childhood.-are-they-still-fun-to-play?

Sierra made the games of my childhood. Are they still fun to play?


Get ready for some nostalgia.

My Ars colleagues were kicking back at the Orbital HQ water cooler the other day, and—as gracefully aging gamers are wont to do—they began to reminisce about classic Sierra On-Line adventure games. I was a huge fan of these games in my youth, so I settled in for some hot buttered nostalgia.

Would we remember the limited-palette joys of early King’s Quest, Space Quest, or Quest for Glory titles? Would we branch out beyond games with “Quest” in their titles, seeking rarer fare like Freddy Pharkas: Frontier Pharmacist? What about the gothic stylings of The Colonel’s Bequest or the voodoo-curious Gabriel Knight?

Nope. The talk was of acorns. [Bleeping] acorns, in fact!

The scene in question came from King’s Quest III, where our hero Gwydion must acquire some exceptionally desiccated acorns to advance the plot. It sounds simple enough. As one walkthrough puts it, “Go east one screen and north one screen to the acorn tree. Try picking up acorns until you get some dry ones. Try various spots underneath the tree.” Easy! And clear!

Except it wasn’t either one because the game rather notoriously won’t always give you the acorns, even when you enter the right command. This led many gamers to believe they were in the wrong spot, when in reality, they just had to keep entering the “get acorns” command while moving pixel by pixel around the tree until the game finally supplied them. One of our staffers admitted to having purchased the King’s Quest III hint book solely because of this “puzzle.” (The hint book, which is now online, says that players should “move around” the particular oak tree in question because “you can only find the right kind of acorns in one spot.”)

This wasn’t quite the “fun” I had remembered from these games, but as I cast my mind back, I dimly began to recall similar situations. Space Quest II: Vohaul’s Revenge had been my first Sierra title, and after my brother and I spent weeks on the game only to get stuck and die repeatedly in some pitch-dark tunnels, we implored my dad to call Sierra’s 1-900 pay hint line. He thought about it. I could see it pained him because he had never before (and never since!) called a 1-900 number in his life. In this case, the call cost a piratical 75 cents for the first minute and 50 cents for each additional minute. But after listening to us whine for several days straight, my dad decided that his sanity was worth the fee, and he called.

Much like with the acorn example above, we had known what to do—we had just not done it to the game’s rather exacting and sometimes obscure standards. The key was to use a glowing gem as a light source, which my brother and I had long understood. The problem was the text parser, which demanded that we “put gem in mouth” to use its light in the tunnels. There was no other place to put the gem, no other way to hold or attach it. (We tried them all.) No other attempts to use the light of this shining crystal, no matter how clear, well-intentioned, or succinctly expressed, would work. You put the gem in your mouth, or you died in the darkness.

Returning from my reveries to the conversation at hand, I caught Ars Senior Editor Lee Hutchinson’s cynical remark that these kinds of puzzles were “the only way to make 2–3 hours of ‘game’ last for months.” This seemed rather shocking, almost offensive. How could one say such a thing about the games that colored my memories of childhood?

So I decided to replay Space Quest II for the first time in 35 years in an attempt to defend my own past.

Big mistake.

Space Quest II screenshot.

We’re not on Endor anymore, Dorothy.

Play it again, Sam

In my memory, the Space Quest series was filled with sharply written humor, clever puzzles, and enchanting art. But when I fired up the original version of the game, I found that only one of these was true. The art, despite its blockiness and limited colors, remained charming.

As for the gameplay, the puzzles were not so much “clever” as “infuriating,” “obvious,” or (more often) “rather obscure.”

Finding the glowing gem discussed above requires you to swim into one small spot of a multi-screen river, with no indication in advance that anything of importance is in that exact location. Trying to “call” a hunter who has captured you does nothing… until you do it a second time. And the less said about trying to throw a puzzle at a Labian Terror Beast, typing out various word permutations while death bears down upon you, the better.

The whole game was also filled with far more no-warning insta-deaths than I had remembered. On the opening screen, for instance, after your janitorial space-broom floats off into the cosmic ether, you can walk your character right off the edge of the orbital space station he is cleaning. The game doesn’t stop you; indeed, it kills you and then mocks you for “an obvious lack of common sense.” It then calls you a “wing nut” with an “inability to sustain life.” Game over.

The game’s third screen, which features nothing more to do than simply walking around, will also kill you in at least two different ways. Walk into the room still wearing your spacesuit and your boss will come over and chew you out. Game over.

If you manage to avoid that fate by changing into your indoor uniform first, it’s comically easy to tap the wrong arrow key and fall off the room’s completely guardrail-free elevator platform. Game over.

Space Quest II screenshot.

Do NOT touch any part of this root monster.

Get used to it because the game will kill you in so, so many ways: touching any single pixel of a root monster whose branches form a difficult maze; walking into a giant mushroom; stepping over an invisible pit in the ground; getting shot by a guard who zips in on a hovercraft; drowning in an underwater tunnel; getting swiped at by some kind of giant ape; not putting the glowing gem in your mouth; falling into acid; and many more.

I used the word “insta-death” above, but the game is not even content with this. At one key point late in the game, a giant Aliens-style alien stalks the hallways, and if she finds you, she “kisses” you. But then she leaves! You are safe after all! Of course, if you have seen the films, you will recognize that you are not safe, but the game lets you go on for a bit before the alien’s baby inevitably bursts from your chest, killing you. Game over.

This is why the official hint book suggests that you “save your game a lot, especially when it seems that you’re entering a dangerous area. That way, if you die, you don’t have to retrace your steps much.” Presumably, this was once considered entertaining.

When it comes to the humor, most of it is broad. (When you are told to “say the word,” you have to say “the word.”) Sometimes it is condescending. (“You quickly glance around the room to see if anyone saw you blow it.”) Or it might just be potty jokes. (Plungers, jock straps, toilet paper, alien bathrooms, and fouling one’s trousers all make appearances.)

My total gameplay time: a few hours.

“By Grabthar’s hammer!” I thought. “Lee was right!”

When I admitted this to him, Lee told me that he had actually spent time learning to speedrun the Space Quest games during the pandemic. “According to my notes, a clean run of SQ2 in ‘fast’ mode—assuming good typing skills—takes about 20 minutes straight-up,” he said. Yikes.

Space Quest II screenshot.

What a fiendish plot!

And yet

The past was a different time. Computer memory was small, graphics capabilities were low, and computer games had emerged from the “let them live just long enough to encourage spending another quarter” arcade model. Mouse adoption took a while; text parsers made sense even though they created plenty of frustration. So yes—some of these games were a few hours of gameplay stretched out with insta-death, obscure puzzles, and the sheer amount of time it took just to walk across the game’s various screens. (Seriously, “walking around” took a ridiculous amount of the game’s playtime, especially when a puzzle made you backtrack three screens, type some command, and then return.)

Space Quest II screenshot.

Let’s get off this rock.

Judged by current standards, the Sierra games are no longer what I would play for fun.

All the same, I loved them. They introduced me to the joy of exploring virtual worlds and to the power of evocative artwork. I went into space, into fairy tales, and into the past, and I did so while finding the games’ humor humorous and their plotlines compelling. (“An army of life insurance salesmen?” I thought at the time. “Hilarious and brilliant!”)

If the games can feel a bit arbitrary or vexing today, my child-self’s love of repetition was able to treat them as engaging challenges rather than “unfair” design.

Replaying Space Quest II, encountering the half-remembered jokes and visual designs, brought back these memories. The novelist Thomas Wolfe knew that you can’t go home again, and it was probably inevitable that the game would feel dated to me now. But playing it again did take me back to that time before the Internet, when not even hint lines, insta-death, and EGA graphics could dampen the wonder of the new worlds computers were capable of showing us.

Space Quest II screenshot.

Literal bathroom humor.

Space Quest II, along with several other Sierra titles, is freely and legally available online at sarien.net—though I found many, many glitches in the implementation. Windows users can buy the entire Space Quest collection through Steam or Good Old Games. There’s even a fan remake that runs on macOS, Windows, and Linux.

Photo of Nate Anderson

Sierra made the games of my childhood. Are they still fun to play? Read More »

motorola-razr-and-razr-ultra-(2025)-review:-cool-as-hell,-but-too-much-ai

Motorola Razr and Razr Ultra (2025) review: Cool as hell, but too much AI


The new Razrs are sleek, capable, and overflowing with AI features.

Razr Ultra and Razr (2025)

Motorola’s 2025 Razr refresh includes its first Ultra model. Credit: Ryan Whitwam

For phone nerds who’ve been around the block a few times, the original Motorola Razr is undeniably iconic. The era of foldables has allowed Motorola to resurrect the Razr in an appropriately flexible form, and after a few generations of refinement, the 2025 Razrs are spectacular pieces of hardware. They look great, they’re fun to use, and they just about disappear in your pocket.

The new Razrs also have enormous foldable OLEDs, along with external displays that are just large enough to be useful. Moto has upped its design game, offering various Pantone shades with interesting materials and textures to make the phones more distinctive, but Motorola’s take on mobile AI could use some work, as could its long-term support policy. Still, these might be the coolest phones you can get right now.

An elegant tactile experience

Many phone buyers couldn’t care less about how a phone’s body looks or feels—they’ll just slap it in a case and never look at it again. Foldables tend not to fit as well in cases, so the physical design of the Razrs is important. The good news is that Motorola has refined the foldable formula with an updated hinge and some very interesting material choices.

Razr Ultra back

The Razr Ultra is available with a classy wood back. Credit: Ryan Whitwam

The 2025 Razrs come in various colors, all of which have interesting material choices for the back panel. There are neat textured plastics, wood, vegan leather, and synthetic fabrics. We’ve got wood (Razr Ultra) and textured plastic (Razr) phones to test—they look and feel great. The Razr is very grippy, and the wooden Ultra looks ultra-stylish, though not quite as secure in the hand. The aluminum frames are also colored to match the back with a smooth matte finish. Motorola has gone to great lengths to make these phones feel unique without losing the premium vibe. It’s nice to see a phone maker do that without resorting to a standard glass sandwich body.

The buttons are firm and tactile, but we’re detecting just a bit of rattle in the power button. That’s also where you’ll find the fingerprint sensor. It’s reasonably quick and accurate, whether the phone is open or closed. The Razr Ultra also has an extra AI button on the opposite side, which is unnecessary, for reasons we’ll get to later. And no, you can’t remap it to something else.

Motorola Razr 2025

The Razrs have a variety of neat material options. Credit: Ryan Whitwam

The front half of these flip phones features a big sheet of Gorilla Glass Ceramic, which is supposedly similar to Apple’s Ceramic Shield glass. That should help ward off scratches. The main camera sensors poke through this front OLED, which offers some interesting photographic options we’ll get to later. The Razr Ultra has a larger external display, clocking in at 4 inches. The cheaper Razr gets a smaller 3.6-inch front screen, but that’s still plenty of real estate, even with the camera lenses at the bottom.

Specs at a glance: 2025 Motorola Razrs
Motorola Razr ($699.99) Motorola Razr+ ($999.99) Motorola Razr Ultra ($1,299.99)
SoC MediaTek Dimensity 7400X Snapdragon 8s Gen 3 Snapdragon 8 Elite
Memory 8GB 12GB 16GB
Storage 256GB 256GB 512GB, 1TB
Display 6.9″ foldable OLED (120 Hz, 2640 x 1080), 3.6″ external (90 Hz) 6.9″ foldable OLED (165 Hz, 2640 x 1080), 4″ external (120 Hz, 1272 x 1080) 7″ foldable OLED (165 Hz, 2992 x 1224), 4″ external (165 Hz)
Cameras 50 MP f/1.7 OIS primary; 13 MP f/2.2 ultrawide, 32 MP selfie 50 MP f/1.7 OIS primary; 50 MP f/2.0 2x telephoto, 32 MP selfie 50 MP f/1.8 OIS primary, 50 MP f/2.0 ultrawide + macro, 50 MP selfie
Software Android 15 Android 15 Android 15
Battery 4,500 mAh, 30 W wired charging, 15 W wireless charging 4,000 mAh, 45 W wired charging, 15 W wireless charging 4,700 mAh, 68 W wired charging, 15 W wireless charging
Connectivity Wi-Fi 6e, NFC, Bluetooth 5.4, sub-6 GHz 5G, USB-C 2.0 Wi-Fi 7, NFC, Bluetooth 5.4, sub-6 GHz 5G, USB-C 2.0 Wi-Fi 7, NFC, Bluetooth 5.4, sub-6 GHz 5G, USB-C 2.0
Measurements Open: 73.99 x 171.30 x 7.25 mm; Closed: 73.99 x 88.08 x 15.85 mm; 188 g Open: 73.99 x 171.42 x 7.09 mm; Closed: 73.99 x 88.09 x 15.32 mm; 189 g Open: 73.99 x 171.48 x 7.19 mm; Closed: 73.99 x 88.12 x 15.69 mm; 199 g

Motorola says the updated foldable hinge has been reinforced with titanium. This is the most likely point of failure for a flip phone, but the company’s last few Razrs already felt pretty robust. It’s good that Moto is still thinking about durability, though. The hinge is smooth, allowing you to leave the phone partially open, but there are magnets holding the two halves together with no gap when closed. The magnets also allow for a solid snap when you shut it. Hanging up on someone is so, so satisfying when you’re using a Razr flip phone.

Flip these phones open, and you get to the main event. The Razr has a 6.9-inch, 2640×1080 foldable OLED, and the Ultra steps up to 7 inches at an impressive 2992×1224. These phones have almost exactly the same dimensions, so the additional bit of Ultra screen comes from thinner bezels. Both phones are extremely tall when open, but they’re narrow enough to be usable in one hand. Just don’t count on reaching the top of the screen easily. While Motorola has not fully eliminated the display crease, it’s much smoother and less noticeable than it is on Samsung’s or Google’s foldables.

Motorola Razr Ultra

The Razr Ultra has a 7-inch foldable OLED. Credit: Ryan Whitwam

The Razr can hit 3,000 nits of brightness, and the $1,300 Razr Ultra tops out at 4,500 nits. Both are bright enough to be usable outdoors, though the Ultra is noticeably brighter. However, both suffer from the standard foldable drawbacks of having a plastic screen. The top layer of the foldable screen is a non-removable plastic protector, which has very high reflectivity that makes it harder to see the display. That plastic layer also means you have to be careful not to poke or scratch the inner screen. It’s softer than your fingernails, so it’s not difficult to permanently damage the top layer.

Too much AI

Motorola’s big AI innovation for last year’s Razr was putting Gemini on the phone, making it one of the first to ship with Google’s generative AI system. This time around, it has AI features based on Gemini, Meta Llama, Perplexity, and Microsoft Copilot. It’s hard to say exactly how much AI is worth having on a phone with the rapid pace of change, but Motorola has settled on the wrong amount. To be blunt, there’s too much AI. What is “too much” in this context? This animation should get the point across.

Moto AI

Motorola’s AI implementation is… a lot. Credit: Ryan Whitwam

The Ask and Search bar appears throughout the UI, including as a floating Moto AI icon. It’s also in the app drawer and is integrated with the AI button on the Razr Ultra. You can use it to find settings and apps, but it’s also a full LLM (based on Copilot) for some reason. Gemini is a better experience if you’re looking for a chatbot, though.

Moto AI also includes a raft of other features, like Pay Attention, which can record and summarize conversations similar to the Google recorder app. However, unlike that app, the summarizing happens in the cloud instead of locally. That’s a possible privacy concern. You also get Perplexity integration, allowing you to instantly search based on your screen contents. In addition, the Perplexity app is preloaded with a free trial of the premium AI search service.

There’s so much AI baked into the experience that it can be difficult to keep all the capabilities straight, and there are some more concerning privacy pitfalls. Motorola’s Catch Me Up feature is a notification summarizer similar to a feature of Apple Intelligence. On the Ultra, this feature works locally with a Llama 3 model, but the less powerful Razr can’t do that. It sends your notifications to a remote server for processing when you use Catch Me Up. Motorola says the data is “anonymous and secure” and that it does not retain any user data, but you have to put a lot of trust in a faceless corporation to send it all your chat notifications.

Razr Ultra and Razr (2025)

The Razrs have additional functionality if you prop them up in “tent” or “stand” mode. Credit: Ryan Whitwam

If you can look past Motorola’s frenetic take on mobile AI, the version of Android 15 on the Razrs is generally good. There are a few too many pre-loaded apps and experiences, but it’s relatively simple to debloat these phones. It’s quick, doesn’t diverge too much from the standard Android experience, and avoids duplicative apps.

We appreciate the plethora of settings and features for the external display. It’s a much richer experience than you get with Samsung’s flip phones. For example, we like how easy it is to type out a reply in a messaging app without even opening the phone. In fact, you can run any app on the phone without opening it, even though many of them won’t work quite right on a smaller square display. Still, it can be useful for chat apps, email, and other text-based stuff. We also found it handy for using smart home devices like cameras and lights. There are also customizable panels for weather, calendar, and Google GameSnacks games.

Razr Ultra and Razr (2025)

The Razr Ultra (left) has a larger screen than the Razr (right). Credit: Ryan Whitwam

Motorola promises three years of full OS updates and an additional year of security patches. This falls far short of the seven-year update commitment from Samsung and Google. For a cheaper phone like the Razr, four years of support might be fine, but it’s harder to justify that when the Razr Ultra costs as much as a Galaxy S25 Ultra.

One fast foldable, one not so much

Motorola is fond of saying the Razr Ultra is the fastest flip phone in the world, which is technically true. It has the Snapdragon 8 Elite chip with 16GB of RAM, but we expect to see the Elite in Samsung’s 2025 foldables later this year. For now, though, the Razr Ultra stands alone. The $700 Razr runs a MediaTek Dimensity 7400X, which is a distinctly midrange processor with just 8GB of RAM.

Razr geekbench

The Razr Ultra gets close to the S25. Credit: Ryan Whitwam

In daily use, neither phone feels slow. Side by side, you can see the Razr is slower to open apps and unlock, and the scrolling exhibits occasional jank. However, it’s not what we’d call a slow phone. It’s fine for general smartphone tasks like messaging, browsing, and watching videos. You may have trouble with gaming, though. Simple games run well enough, but heavy 3D titles like Diablo Immortal are rough with the Dimensity 7400X.

The Razr Ultra is one of the fastest Android phones we’ve tested, thanks to the Snapdragon chip. You can play complex games and multitask to your heart’s content without fear of lag. It does run a little behind the Galaxy S25 series in benchmarks, but it thankfully doesn’t get as toasty as Samsung’s phones.

We never expect groundbreaking battery life from foldables. The hinge takes up space, which limits battery capacity. That said, Motorola did fairly well cramming a 4,700 mAh battery in the Razr Ultra and a 4,500 mAh cell in the Razr.

Based on our testing, both of these phones should last you all day. The large external displays can help by giving you just enough information that you don’t have to use the larger, more power-hungry foldable OLED. If you’re playing games or using the main display exclusively, you may find the Razrs just barely make it to bedtime. However, no matter what you do, these are not multi-day phones. The base model Razr will probably eke out a few more hours, even with its smaller battery, due to the lower-power MediaTek processor. The Snapdragon 8 Elite in the Razr Ultra really eats into the battery when you take advantage of its power.

Motorola Razr Ultra

The Razrs are extremely pocketable. Credit: Ryan Whitwam

While the battery life is just this side of acceptable, the Razr Ultra’s charging speed makes this less of a concern. This phone hits an impressive 68 W, which is faster than the flagship phones from Google, Samsung, and Apple. Just a few minutes plugged into a compatible USB-C charger and you’ve got enough power that you can head out the door without worry. Of course, the phone doesn’t come with a charger, but we’ve tested a few recent models, and they all hit the max wattage.

OK cameras with super selfies

Camera quality is another area where foldable phones tend to compromise. The $1,300 Razr Ultra has just two sensors—a 50 MP primary sensor and a 50 MP ultrawide lens. The $700 Razr has a slightly different (and less capable) 50 MP primary camera and a 13 MP ultrawide. There are also selfie cameras peeking through the main foldable OLED panels—50 MP for the Ultra and 32 MP for the base model.

Motorola Razr 2025 in hand

The cheaper Razr has a smaller external display, but it’s still large enough to be usable. Credit: Ryan Whitwam

Motorola’s Razrs tend toward longer exposures compared to Pixels—they’re about on par with Samsung phones. That means capturing fast movement indoors is difficult, and you may miss your subject outside due to a perceptible increase in shutter lag compared to Google’s phones. Images from the base model Razr’s primary camera also tend to look a bit more overprocessed than they do on the Ultra, which leads to fuzzy details and halos in bright light.

Razr Ultra outdoors. Ryan Whitwam

That said, Motorola’s partnership with Pantone is doing some good. The colors in our photos are bright and accurate, capturing the vibe of the scene quite well. You can get some great photos of stationary or slowly moving subjects.

Razr 2025 indoor medium light. Ryan Whitwam

The 50 MP ultrawide camera on the Razr Ultra has a very wide field of view, but there’s little to no distortion at the edges. The colors are also consistent between the two sensors, but that’s not always the case for the budget Razr. Its ultrawide camera also lacks detail compared to the Ultra, which isn’t surprising considering the much lower resolution.

You should really only use the dedicated front-facing cameras for video chat. For selfies, you’ll get much better results by taking advantage of the Razr’s distinctive form factor. When closed, the Razrs let you take selfies with the main camera sensors, using the external display as the viewfinder. These are some of the best selfies you’ll get with a smartphone, and having the ultrawide sensor makes group shots excellent as well.

Flip phones are still fun

While we like these phones for what they are, they are objectively not the best value. Whether you’re looking at the Razr or the Razr Ultra, you can get more phone for the same money from other companies—more cameras, more battery, more updates—but those phones don’t fold in half. There’s definitely a cool factor here. Flip phones are stylish, and they’re conveniently pocket-friendly in a world where giant phones barely fit in your pants. We also like the convenience and functionality of the external displays.

Motorola Razr Ultra

The Razr Ultra is all screen from the front. Credit: Ryan Whitwam

The Razr Ultra makes the usual foldable compromises, but it’s as capable a flip phone as you’ll find right now. It’s blazing fast, it has two big displays, and the materials are top-notch. However, $1,300 is a big ask.

Is the Ultra worth $500 more than the regular Razr? Probably not. Most of what makes the foldable Razrs worth using is present on the cheaper model. You still get the solid construction, cool materials, great selfies, and a useful (though slightly smaller) outer display. Yes, it’s a little slower, but it’s more than fast enough as long as you’re not a heavy gamer. Just be aware of the potential for Moto AI to beam your data to the cloud.

There is also the Razr+, which slots in between the models we have tested at $1,000. It’s faster than the base model and has the same large external display as the Ultra. This model could be the sweet spot if neither the base model nor the flagship does it for you.

The good

  • Sleek design with distinctive materials
  • Great performance from Razr Ultra
  • Useful external display
  • Big displays in a pocket-friendly package

The bad

  • Too much AI
  • Razr Ultra is very expensive
  • Only three years of OS updates, four years of security patches
  • Cameras trail the competition

Photo of Ryan Whitwam

Ryan Whitwam is a senior technology reporter at Ars Technica, covering the ways Google, AI, and mobile technology continue to change the world. Over his 20-year career, he’s written for Android Police, ExtremeTech, Wirecutter, NY Times, and more. He has reviewed more phones than most people will ever own. You can follow him on Bluesky, where you will see photos of his dozens of mechanical keyboards.

Motorola Razr and Razr Ultra (2025) review: Cool as hell, but too much AI Read More »

the-tinkerers-who-opened-up-a-fancy-coffee-maker-to-ai-brewing

The tinkerers who opened up a fancy coffee maker to AI brewing

(Ars contacted Fellow Products for comment on AI brewing and profile sharing and will update this post if we get a response.)

Opening up brew profiles

Fellow’s brew profiles are typically shared with buyers of its “Drops” coffees or between individual users through a phone app. Credit: Fellow Products

Aiden profiles are shared and added to Aiden units through Fellow’s brew.link service. But the profiles are not offered in an easy-to-sort database, nor are they easy to scan for details. So Aiden enthusiast and hobbyist coder Kevin Anderson created brewshare.coffee, which gathers both general and bean-based profiles, makes them easy to search and load, and adds optional but quite helpful suggested grind sizes.

As a non-professional developer jumping into a public offering, he had to work hard on data validation, backend security, and mobile-friendly design. “I just had a bit of an idea and a hobby, so I thought I’d try and make it happen,” Anderson writes. With his tool, brew links can be stored and shared more widely, which helped both Dixon and another AI/coffee tinkerer.

Gabriel Levine, director of engineering at retail analytics firm Leap Inc., lost his OXO coffee maker (aka the “Barista Brain”) to malfunction just before the Aiden debuted. The Aiden appealed to Levine as a way to move beyond his coffee rut—a “nice chocolate-y medium roast, about as far as I went,” he told Ars. “This thing that can be hyper-customized to different coffees to bring out their characteristics; [it] really kind of appealed to that nerd side of me,” Levine said.

Levine had also been doing AI stuff for about 10 years, or “since before everyone called it AI—predictive analytics, machine learning.” He described his career as “both kind of chief AI advocate and chief AI skeptic,” alternately driving real findings and talking down “everyone who… just wants to type, ‘how much money should my business make next year’ and call that work.” As with Dixon, Levine’s work and his fascination with the Aiden ended up intersecting.

The coffee maker with 3,588 ideas

The author’s conversation with the Aiden Profile Creator, which pulled in both brewing knowledge and product info for a widely available coffee.

Levine’s Aiden Profile Creator is a custom ChatGPT bot, set up with its own prompt and told to weight certain knowledge more heavily. What kind of prompt and knowledge? Levine didn’t want to give away his exact work. But he cited resources like the Specialty Coffee Association of America and James Hoffmann’s coffee guides as examples of what he fed it.

What it does with that knowledge is something of a mystery to Levine himself. “There’s this kind of blind leap, where it’s grabbing the relevant pieces of information from the knowledge base, biasing toward all the expert advice and extraction science, doing something with it, and then I take that something and coerce it back into a structured output I can put on your Aiden,” Levine said.
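That last step—coercing the model’s reply “back into a structured output”—is the load-bearing part of any pipeline like this: the LLM’s text is untrusted and has to be parsed, validated, and sanity-checked before it goes anywhere near a real machine. Here’s a minimal sketch of that idea in Python; the field names, ranges, and profile shape are illustrative assumptions on my part, not Fellow’s actual (undocumented) profile format:

```python
import json

# Illustrative schema: the fields a brew profile might need before upload.
# These names and ranges are assumptions, not Fellow's real Aiden format.
REQUIRED_FIELDS = {"name": str, "ratio": float, "stages": list}

def coerce_profile(raw: str) -> dict:
    """Parse an LLM's JSON reply and validate it before it goes to a machine."""
    profile = json.loads(raw)  # raises ValueError on malformed JSON
    for field, expected in REQUIRED_FIELDS.items():
        if not isinstance(profile.get(field), expected):
            raise ValueError(f"profile missing or malformed field: {field!r}")
    # Clamp numbers to plausible brewing ranges instead of trusting the model.
    for stage in profile["stages"]:
        stage["temp_c"] = min(max(stage.get("temp_c", 93), 80), 99)
    return profile

# An example reply such a chatbot might produce (temperature deliberately too hot):
reply = '{"name": "Cherry Timber", "ratio": 16.0, "stages": [{"temp_c": 120}]}'
profile = coerce_profile(reply)
print(profile["stages"][0]["temp_c"])  # clamped from 120 to 99
```

The point is less the specific fields than the pattern: parse, reject anything malformed, and clamp numeric values to plausible ranges rather than taking the model’s word for them.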

It’s a blind leap, but it has landed just right for me so far. I’ve made four profiles with Levine’s prompt based on beans I’ve bought: Stumptown’s Hundred Mile, a light-roasted batch from Jimma, Ethiopia from Small Planes, Lost Sock’s Western House filter blend, and some dark-roast beans given as a gift. With the Western House, Levine’s profile creator said it aimed to “balance nutty sweetness, chocolate richness, and bright cherry acidity, using a slightly stepped temperature profile and moderate pulse structure.” The resulting profile has worked great, even if the chatbot named it “Cherry Timber.”

Levine’s chatbot relies on two important things: Dixon’s work in revealing Fellow’s Aiden API and his own workhorse Aiden. Every Aiden profile link is created on a machine, so every profile created by Levine’s chat is launched, temporarily, from the Aiden in his kitchen, then deleted. “I’ve hit an undocumented limit on the number of profiles you can have on one machine, so I’ve had to do some triage there,” he said. As of April 22, nearly 3,600 profiles had passed through Levine’s Aiden.

“My hope with this is that it lowers the bar to entry,” Levine said, “so more people get into these specialty roasts and it drives people to support local roasters, explore their world a little more. I feel like that certainly happened to me.”

Something new is brewing

Credit: Fellow Products

Having admitted to myself that I find something generated by ChatGPT prompts genuinely useful, I’ve softened my stance slightly on LLM technology, if not the hype. Used within very specific parameters, with everything second-guessed, I’m getting more comfortable asking chat prompts for formatted summaries on topics with lots of expertise available. I do my own writing, and I don’t waste server energy on things I can, and should, research myself. I even generally resist calling language model prompts “AI,” given the term’s baggage. But I’ve found one way to appreciate its possibilities.

This revelation may not be new to someone already steeped in the models. But having tested—and tasted—my first big experiment with willfully engaging with a brewing bot, I’m a bit more awake.

This post was updated at 8:40 a.m. with a different capture of a GPT-created recipe.

The tinkerers who opened up a fancy coffee maker to AI brewing Read More »

doom:-the-dark-ages-review:-shields-up!

Doom: The Dark Ages review: Shields up!


Prepare to add a more defensive stance to the usual dodge-and-shoot gameplay loop.

There’s a reason that shield is so prominent in this image. Credit: Bethesda Game Studios

For decades now, you could count on there being a certain rhythm to a Doom game. From the ’90s originals to the series’ resurrection in recent years, the Doom games have always been about using constant, zippy motion to dodge through a sea of relatively slow-moving bullets, maintaining your distance while firing back at encroaching hordes of varied monsters. The specific guns and movement options you could call on might change from game to game, but the basic rhythm of that dodge-and-shoot gameplay never has.

Just a few minutes in, Doom: The Dark Ages throws out that traditional Doom rhythm almost completely. The introduction of a crucial shield adds a whole suite of new verbs to the Doom vocabulary; in addition to running, dodging, and shooting, you’ll now be blocking, parrying, and stunning enemies for counterattacks. In previous Doom games, standing still for any length of time often led to instant death. In The Dark Ages, standing your ground to absorb and/or deflect incoming enemy attacks is practically required at many points.

During a preview event earlier this year, the game’s developers likened this change to the difference between flying a fighter jet and piloting a tank. That’s a pretty apt metaphor, and it’s not exactly an unwelcome change for a series that might be in need of a shake-up. But it only works if you go in ready to play like a tank and not like the fighter jet that has been synonymous with Doom for decades.

Stand your ground

Don’t get me wrong, The Dark Ages still features its fair share of the Doom series’ standard position-based Boomer Shooter action. The game includes the usual stockpile of varied weapons—from short-range shotguns to long-range semi-automatics to high-damage explosives with dangerous blowback—and doles them out slowly enough that major new options are still being introduced well into the back half of the game.

But the shooting side has simplified a bit since Doom Eternal. Gone are the secondary weapon modes, grenades, chainsaws, and flamethrowers that made enemy encounters a complicated weapon and ammo juggling act. Gone too are the enemies that practically forced you to use a specific weapon to exploit their One True Weakness; I got by for most of The Dark Ages by leaning on my favored plasma rifle, with occasional switches to a charged steel ball-and-chain launcher for heavily armored enemies.

See green, get ready to parry…

Credit: Bethesda Game Studios


In their place is the shield, which gives you ample (but not unlimited) ability to simply deflect enemy attacks damage-free. You can also throw the shield for a ranged attack that’s useful for blowing up frequent phalanxes of shielded enemies or freezing larger unarmored enemies in place for a safe, punishing barrage.

But the shield’s most important role comes when you stand face to face with a particularly punishing demon, waiting for a flash of green to appear on the screen. When that color appears, it’s your signal that the associated projectile and/or incoming melee attack can be parried by raising your shield just before it lands. A successful parry knocks that attack back entirely, returning projectiles to their source and/or temporarily deflecting the encroaching enemy themselves.

A well-timed, powerful parry is often the only reasonable option for attacks that are otherwise too quick or overwhelming to dodge effectively. The overall effect ends up feeling a bit like Doom by way of Mike Tyson’s Punch-Out!! Instead of dancing around a sea of hazards and looking for an opening, you’ll often find yourself just standing still for a few seconds, waiting to knock back a flash of green so you can have the opportunity to unleash your own counterattack. Various shield sigils introduced late in the game encourage this kind of conservative turtling strategy even more by adding powerful bonus effects to each successful parry.

The window for executing a successful parry is pretty generous, and the dramatic temporal slowdown and sound effects make each one feel like an impactful moment. But they start to feel less impactful as the game goes on, and battles often devolve into vast seas of incoming green flashes. There were countless moments in my Dark Ages playthrough where I found myself more or less pinned down by a deluge of green attacks, frantically clicking the right mouse button four or five times in quick succession to parry off threats from a variety of angles.

In between all the parrying, you do get to shoot stuff.

Credit: Bethesda Game Studios


In between these parries, the game seems to go out of its way to encourage a more fast-paced, aggressive style of play. A targeted shield slam move lets you leap quickly across great distances to get up close and personal with enemy demons, at which point you can use one of a variety of melee weapons for some extremely satisfying, crunchy close quarters beatdowns (though these melee attacks are limited by their own slowly recharging ammo system).

You might absorb some damage in the process of going in for these aggressive close-up attacks, but don’t worry—defeated enemies tend to drop heaps of health, armor, and ammo, depending on the specific way they were killed. I’d often find myself dancing on the edge of critically low health after an especially aggressive move, only to recover just in time by finishing off a major demon. Doubling back for a shield slam on a far-off “fodder” enemy can also be an effective strategy for quickly escaping a sticky situation and grabbing some health in the process.

The back-and-forth tug between these aggressive encroachments and the more conservative parry-based turtling makes for some exciting moment-to-moment gameplay, with enough variety in the enemy mix to never feel too stale. Effectively managing your movement and attack options in any given firefight feels complex enough to be engaging without ever tipping into overwhelming, as well.

Even so, working through Doom: The Dark Ages, there was a part of me that missed the more free-form, three-dimensional acrobatics of Doom Eternal’s double jumps and air dashes. Compared to the almost balletic, improvisational movement in that game, playing The Dark Ages too often felt like it devolved into something akin to a simple rhythm game; simply wait for each green “note” to reach the bottom of the screen, then hit the button to activate your counterattack.

Stories and secrets

In between chapters, Doom: The Dark Ages breaks things up with some extremely ponderous cutscenes featuring a number of religious and political factions, both demon and human, jockeying for position and control in an interdimensional war. This mostly involves a lot of tedious standing around discussing the Heart of Argent (a MacGuffin that’s supposed to grant the bearer the power of a god) and debating how, where, and when to deploy the Slayer (that’s you) as a weapon.

I watched these cutscenes out of a sense of professional obligation, but I tuned out at points and thus had trouble following the internecine intrigue that seemed to develop between factions whose motivations and backgrounds never seemed to be sufficiently explained or delineated. Most players who aren’t reviewing the game should feel comfortable skipping these scenes and getting back to the action as quickly as possible.

I hope you like red and black, because there’s a lot of it here…

Credit: Bethesda Game Studios


The levels themselves are all dripping with the usual mix of Hellish symbology and red-and-black gore, with mood lighting so dark that it can be hard to see a wall right in front of your face. Design-wise, the chapters seem to alternate between Doom’s usual system of twisty enemy-filled corridors and more wide-open outdoor levels. The latter are punctuated by a number of large, open areas where huge groups of demons simply teleport in as soon as you set foot in the pre-set engagement zone. These battle arenas might have a few inclines or spires to mix things up, but for the most part, they all feel depressingly similar and bland after a while. If you’ve stood your ground in one canyon, you’ve stood your ground in them all.

Each level is also absolutely crawling with secret collectibles hidden in various nooks and crannies, which often tease you with a glimpse through a hole in some impassable wall or rock formation. Studying the map screen for a minute more often than not reveals the general double-back path you’ll need to follow to find the hidden entrance behind these walls, even as finding the precise path can involve solving some simple puzzles or examining your surroundings for one particularly well-hidden bit that will allow you to advance.

After all the enemies were cleared in one particularly vast open level, I spent a good half hour picking through every corner of the map until I tracked down the hidden pathways leading to every stray piece of gold and collectible trinket. It was fine as a change of pace—and lucrative in terms of upgrading my weapons and shield for later fights—but it felt kind of lonely and quiet compared to the more action-packed battles.

Don’t unleash the dragon

Speaking of changes of pace, by far the worst parts of Doom: The Dark Ages come when the game insists on interrupting the usual parry-and-shoot gameplay to put you in some sort of vehicle. This includes multiple sections where your quick-moving hero is replaced with a lumbering 30-foot-tall mech, which slouches pitifully down straight corridors toward encounters with equally large demons.

These mech battles play out as the world’s dullest fistfights, where you simply wail on the attack buttons while occasionally tapping the dodge button to step away from some incredibly slow and telegraphed counterattacks. I found myself counting the minutes until these extremely boring interludes were over.

Believe me, this is less exciting than it looks.

Credit: Bethesda Game Studios


The sections where your Slayer rides a dragon for some reason are ever-so-slightly more interesting, if only because the intuitive, fast-paced flight controls can be a tad more exciting. Unfortunately, these sections don’t give you any thrilling dogfights or complex obstacle courses to take advantage of these controls, topping out instead in a few simplistic chase sequences where you take literally no incoming fire.

Between those semi-engaging chase sequences is a seemingly endless parade of showdowns with stationary turrets. These require your dragon to hover frustratingly still in mid-air, waiting patiently for an incoming energy attack to dodge, which in turn somehow powers up your gun enough to take out the turret in a counterattack. How anyone thought that this was the most engaging use of a seemingly competent third-person flight-combat system is utterly baffling.

Those too-frequent interludes aside, Doom: The Dark Ages is a more-than-suitable attempt to shake up the Doom formula with a completely new style of gameplay. While the more conservative, parry-based shield system takes some getting used to—and may require adjusting some of your long-standing Doom muscle memory in the process—it’s ultimately a welcome and engaging way to add new types of interaction to the long-running franchise.

Photo of Kyle Orland

Kyle Orland has been the Senior Gaming Editor at Ars Technica since 2012, writing primarily about the business, tech, and culture behind video games. He has journalism and computer science degrees from the University of Maryland. He once wrote a whole book about Minesweeper.

Doom: The Dark Ages review: Shields up! Read More »


Ars Technica’s gift guide for Mother’s Day: Give mom some cool things


say hi to your mom for me

Wondering what to get the mom who has everything? We’ve got some ideas!

Credit: Carol Yepes / Getty

Greetings, Arsians, and welcome to Mother’s Day, which I am told is once again happening this weekend in the US! Do you, much like the rest of humanity, have a mother? Well, if you do, then this is the time of year when you’re supposed to buy her something to make up for all the pain and suffering she went through in order to bring you into this world! Mom raised you, and while what your mother probably wants more than anything is for you to pick up the phone and talk, you could do a lot worse than throwing some money at the problem and buying your mother something from the list we’ve assembled below!

Stuff for under $100

Severance TV show mug, $17.99

Photograph of a Severance-themed mug

From Allentown to Cold Harbor, discerning innies know the best way to drink beverages.

Credit: Amazon


Whether the mom in your life operates primarily as an “innie” or an “outie,” if she’s a fan of the Apple TV+ show Severance, she may appreciate this ceramic coffee mug inspired by the series. The mug features the iconic phrase “The Work is Both Mysterious and Important” in blue text that mirrors Lumon Industries’ sterile corporate aesthetic. It’s an ideal conversation starter for fans enjoying the show’s second season after the painfully long production delay. Best of all, it’s perfect for morning coffee before Mom begins her own mysterious and important work. The same company also offers a “Woe’s Hollow” mug if you’d like an alternative design. Celebrate the Sisyphean every-day-is-the-same blessing of motherhood with confidence: These mugs are dishwasher- and microwave-safe, and they are available in both 11 oz and 15 oz sizes.

Frameo digital picture frame, $49.99

Photograph of a digital picture frame

Frame your mom!

Credit: Amazon


From awkward baby pictures to wedding memories and last year’s holidays, your mother likely has more photos than she knows what to do with. Frameo digital picture frames give your mom somewhere to show off those countless photos without embarrassing, unexpected tags on Facebook.

The digital frame shows uploaded pictures like a slideshow. With the iOS or Android app, it’s easy to upload photos from your phone to the frame. Mom hates the cloud and prefers PC storage? No problem—Frameo frames also support PC upload via USB. You can even help ensure the frame puts your best face forward: With the proper permissions, you can access the frame remotely and upload photos to your mom’s frame from afar. Other handy features include sleep mode and the ability to display the time or weather.

There are various Frameo frames available depending on the size, resolution, and look your mom prefers. Here is a popular 10.1-inch IPS one with 1280×800 resolution and 32GB of storage.

A USB-C charger that will actually fast charge her phone, $27.99

Photograph of an Anker phone charger

Faster charging means never having to say “I’m sorry.” Or “My phone is dead.”

Credit: Amazon


Smartphones haven’t come with chargers for a few years now, which means many people, your mom likely included, are using one that’s years old. And that old plug probably can’t hit the increasingly lofty charging speeds of new phones. If mom has an old 18W plug, a new one could juice up her phone at double the speed or more. We like Anker’s Nano II charger (on sale for $28) because it’s compact and reaches an impressive 45W, which is fast enough to hit the max wattage for even high-end phones from Apple, Samsung, and Google. It supports the USB-PD and PPS charging standards, which means it can max out anything with a USB-C port, up to its 45W limit.

Loop Quiet 2 ear plugs, $20.95

Photograph of the Loop earbuds

They look soft!

Credit: Amazon


Let’s face it: Moms need uninterrupted sleep. When relieved from kid or pet duty, or perhaps resting after a night shift, it’s great to wipe out a sleep deficit thanks to some good earplugs. These silicone earplugs offer 24dB of noise reduction while avoiding the uncomfortable pressure of cheap foam plugs. The most frustrating thing about traditional earplugs is that there’s a wide variety of “ear hole” sizes out there. To fix that, Loop includes four different tip sizes to ensure a proper fit in any ear canal. The silicone design works well for side-sleepers, and the rubber loop makes them easy to pull out, so they won’t get lost in your ear canals. Amazon reviewers praise these earplugs highly for travel, focus work, and sensitive hearing situations, with many noting they’re comfortable enough to forget you’re wearing them.

Cornell Lab of Ornithology’s Bird courses, $40-$125

Photograph of a red bird

If you take a course, you might be able to identify this bird! (Spoiler: This bird is Christina.)

Credit: Cornell


These courses do a few things at once. They give a bird-loving mom hours of engagement. They nicely supplement a complementary webcam-equipped bird feeder or binoculars. And they support an organization that advances bird education, research, and conservation. You can start with beginner basics, pick a lane like ducks or owls, or pack in more with a savings bundle. They pair well with pointing mom to the free Merlin Bird ID app and trading sightings with her throughout the year.

The gift of enlightenment: An Ars subscription, $25-50

Hope to avoid dinner conversations about why you shouldn’t vaccinate your kids? Don’t want forwarded messages about how climate change is a hoax? Can’t endure intense lectures on the perils of germ theory? Know the symptoms of brainworms—and stay safe with Ars Technica.

Brainworms are an active public safety threat in the US. They can infect people you know and love. Brainworms have already attacked our current HHS secretary; they could be coming for your mom next!

Fortunately, there is a vaccine. Get the peace of mind that comes from an Ars Technica subscription. Provide your mom with a completely ad-free viewing experience and access to extra homepage stuff (like the ability to hide news categories she doesn’t like!). An Ars Technica subscription, if applied topically on a daily basis, should inoculate against dumb Facebook memes and other vectors for brainworm infection.

An image of the Ars logo, gift-wrapped

Give that mom an Ars subscription! Moms love Ars subscriptions.


Keep your mom away from jade eggs and pseudoscience! An Ars subscription will show instead how measles is killing kids again, or how the administration’s attack on science is ending America’s worldwide leadership in research, or how NOAA is being stripped bare even as the Atlantic hurricane season gets underway.

Every single subscription helps support the work we do at Ars, and every one is appreciated. So help fight the brainworms and give a subscription to your mother! Soon, she’ll be proudly posting memes in the Ars OpenForum!

Mid-price: $100-$300

Bose Ultra Open-Ear headphones, $249

These are the strangest headphones we’ve ever liked. They’re made for people who want excellent environmental awareness and don’t want the weight of headphones or the discomfort of in-ear units. If those two things are high on your list, these deserve a look. If not, move on because they’re relatively expensive otherwise.

Photograph of earbuds

They’re a little odd-looking, but they sound great.

Credit: Amazon


When we first tried these, we expected muddy sound and paltry bass, but to our surprise, they sound extremely good, leaps and bounds beyond the bone-conduction headphones that have become popular with some runners in recent years. And unless you crank these, or are sitting somewhere very quiet, people near you won’t hear your music.

They are comfortable, too, despite being clamped on the outer ring of your ear. You can almost forget you’re wearing them, unless you’re one of the rare people for whom these are pure torture. All this said, beware: a recent firmware update improved the microphone, but it’s now only average, and we couldn’t recommend these for anyone who plans on doing a lot of calls with them on. As we said at the outset, these are designed for a particular use case; outside of that, there are better options.

Google TV Streamer 4K, $99.99

Photograph of streamer thing

Stream away!

Credit: Google


Your mom’s TV probably has streaming apps built in, but they’re terrible. A good streaming box can offer better audio and video options, as well as a smoother experience. Google’s TV Streamer 4K fits the bill, with support for HDR10+ and the rarer Dolby Vision video, plus Atmos audio. There are other streaming boxes out there, but we think mom will appreciate Google’s Android TV interface over the cluttered, clunky stuff you get from Roku or Amazon. Google’s streamer also connects to the Play Store for apps, which is much better than the alternatives.

Victrola Empire 6-in-1 record player and speaker, $289.99

Part record player, part boombox, part Bluetooth speaker, all class. Victrola’s Empire 6-in-1 Wood Record Player could be the last speaker your musical mother ever needs.

With the ability to play vinyl records (33 1/3, 45, or 78 RPM), CDs, cassettes, and FM radio (unfortunately, there’s no AM tuner), this speaker lets your mom enjoy all that physical media she has stacked in the house, while also allowing her to tune into live radio to hear the latest hits or about current events. If your mom doesn’t need all that, Victrola also makes record player-Bluetooth speaker combos that skip CD, cassette, and radio functionality for less money.

Photo of a record player

Oooh, retro!

Credit: Victrola


Keeping mom in the 21st century, the Empire also lets you connect to its speaker via Bluetooth or a 3.5mm jack, so she can stream music from her favorite apps or play files stored on her phone.

With a vintage look available in multiple hues, the speaker makes for a classic living room piece that doesn’t feel overly dated or antiquated.

Belkin auto-tracking phone stand, $144.99

Photograph of the stand

Mom will be unable to look away. Because the stand will track her.

Credit: Amazon


This stand is partly pitched at video creators, sure—but it’s also a boon to anyone who wants to reduce the pain of FaceTime calls with their parents. Set up the stand in their home, make the call, and your folks can sit, stand, wash dishes, wander about, or do anything besides hold a phone in the air or crouch over a table. It comes with a cable and charger, it requires no companion app, and it’s a gift to you, too—the person spending far less time looking up your parents’ noses.

Lego Botanicals Flower Arrangement ($109.99)

The thing about a bouquet of flowers is that it looks nice for a few days, maybe a week, and then it dies. Not so with a Lego Botanicals set, which will always look as good as it did the day you built it. (Speaking from experience, they’re also great conversation starters!)

Photograph of LEGO flowers

Beautiful! Just don’t step on the pieces.

Credit: Target


This 1,161-piece flower arrangement is one of the larger and pricier sets, but the good news is that the Botanicals series includes many sets at all kinds of prices. Sets like this mini orchid or plum blossom run around $24, or you could pick up a flower bouquet for $48. Longtime Lego fans will also enjoy seeing how Lego has repurposed heads, hats, and other shapes from other sets to create plastic plants.

Big spender: Over $300

Bose QuietComfort Ultra Bluetooth headphones, $379.99

In our estimation, Bose still sits atop the noise cancellation game, and that’s why you buy these: best-in-class noise cancellation. The first time I used Bose’s QC line was on a trip from Boston to London. A flight attendant offered me a pair for the flight, and the rest is history: They became my go-to for travel. Apple and Sony can’t touch this noise cancellation.

Photograph of headphones

They’re quiet and comfortable, like it says on the tin.

Credit: Amazon


It’s not just about noise cancellation, though. The sound quality is excellent, even if we might give Sony’s high-end WH-1000XM5 the nod on bass. We found the Bose QC Ultras to be warm and detailed, and certain types of music (hello, Radiohead) sounded amazing with their spatialized stereo option, dubbed Immersive Audio.

Critically, we stumbled upon these as a gift for someone who found the AirPods Max too heavy (385g). At 252g, the Bose are about a third lighter. Bose primarily accomplished this by using plastic rather than metal, but in our usage, we appreciated the lightness more than we missed the premium look.

Oura Ring 4, $499

We last checked in with Oura when the company released version 3 of the Oura Ring, and it did not impress us much. With the Oura Ring 4, we’re ready to recommend this device to fitness fanatics, with a few caveats. First, the good stuff: The Oura has slightly better battery performance, and we can go six days between charges. It’s more comfortable now, too, thanks to repositioned, improved sensors. If you want a smart ring, this is the best one right now.

Photograph of a ring

Smart rings are getting smarter all the time.

Credit: Amazon


But yes, caveats: Fit is critical. If you want accurate step and activity tracking, you must get the tightest ring you can still easily remove. Order the sizing kit, or use the sizing tips in the story. Do not rely on your traditional ring size; it might not fit!

Most annoyingly, full use of the ring’s software requires a subscription, which is best purchased annually at $70. This makes the Oura quite a splurge, but if that special mom is looking for extra motivation to focus on fitness, wants to track her sleep, and doesn’t want a wrist tracker or Apple Watch, we’d recommend this.

Apple iPhone 16E, $599 and up

Though no longer as budget-friendly as Apple’s old, discontinued iPhone SE, Apple’s new iPhone 16E still gives you a lot of value for your money (read our review here). It excels at all the things that most people use their phones for—it’s fast, it’s well-built, it has a great camera, and it will get years of software support from Apple.

Photograph of iPhones

Who doesn’t need an iPhone?

Credit: Apple


For anyone using an aging iPhone SE, or any iPhone that’s more than three or four years old, it will feel like an immense upgrade. It also ditches Apple’s Lightning port in favor of USB-C, so you can charge it with the same power brick you already use for your laptop/Nintendo Switch/Kindle/etc.

Ars Technica may earn compensation for sales from links on this post through affiliate programs.

Ars Technica’s gift guide for Mother’s Day: Give mom some cool things Read More »


SpaceX pushed “sniper” theory with the feds far more than is publicly known


“It came out of nowhere, and it was really violent.”

The Amos 6 satellite is lost atop a Falcon 9 rocket. Credit: USLaunchReport


The rocket was there. And then it decidedly was not.

Shortly after sunrise on a late summer morning nearly nine years ago at SpaceX’s sole operational launch pad, engineers neared the end of a static fire test. These were still early days for their operation of a Falcon 9 rocket that used super-chilled liquid propellants, and engineers pressed to see how quickly they could complete fueling. This was because the liquid oxygen and kerosene fuel warmed quickly in Florida’s sultry air, and cold propellants were essential to maximizing the rocket’s performance.

On this morning, September 1, 2016, everything proceeded more or less nominally up until eight minutes before the ignition of the rocket’s nine Merlin engines. It was a stable point in the countdown, so no one expected what happened next.

“I saw the first explosion,” John Muratore, launch director for the mission, told me. “It came out of nowhere, and it was really violent. I swear, that explosion must have taken an hour. It felt like an hour. But it was only a few seconds. The second stage exploded in this huge ball of fire, and then the payload kind of teetered on top of the transporter erector. And then it took a swan dive off the top rails, dove down, and hit the ground. And then it exploded.”

The dramatic loss of the Falcon 9 rocket and its Amos-6 satellite, captured on video by a commercial photographer, came at a pivotal moment for SpaceX and the broader commercial space industry. It was SpaceX’s second rocket failure in a little more than a year, and it occurred as NASA was betting heavily on the company to carry its astronauts to orbit. SpaceX was not the behemoth it is today, a company valued at $350 billion. It remained vulnerable to the vicissitudes of the launch industry. This violent failure shook everyone, from the engineers in Florida to satellite launch customers to the suits at NASA headquarters in Washington, DC.

As part of my book on the Falcon 9 and Dragon years at SpaceX, Reentry, I reported deeply on the loss of the Amos-6 mission. In the weeks afterward, the greatest mystery was what had precipitated the accident. It was understood that a pressurized helium tank inside the upper stage had ruptured. But why? No major parts on the rocket were moving at the time of the failure. It was, for all intents and purposes, akin to an automobile idling in a driveway with half a tank of gasoline. And then it exploded.

This failure gave rise to one of the oddest—but also strangely compelling—stories of the 2010s in spaceflight. And we’re still learning new things today.

The “sniper” theory

The lack of a concrete explanation for the failure led SpaceX engineers to pursue hundreds of theories. One was the possibility that an outside “sniper” had shot the rocket. This theory appealed to SpaceX founder Elon Musk, who was asleep at his home in California when the rocket exploded. Within hours of hearing about the failure, Musk gravitated toward the simple answer of a projectile being shot through the rocket.

This is not as crazy as it sounds, and engineers at SpaceX besides Musk entertained the possibility, as some circumstantial evidence to support the notion of an outside actor existed. Most notably, the first rupture in the rocket occurred about 200 feet above the ground, on the side of the vehicle facing southwest. In that direction, about one mile away, lay a building leased by SpaceX’s main competitor in launch, United Launch Alliance. A separate video showed a flash on the roof of this building, now known as the Spaceflight Processing Operations Center. The timing of this flash matched the time it would take a projectile to travel from the building to the rocket.
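To get a feel for why the flash timing was even plausible as evidence, here’s a rough back-of-envelope calculation. The numbers are assumptions for illustration, not figures from SpaceX’s analysis: a typical high-power rifle muzzle velocity of roughly 850 m/s, with drag and bullet drop ignored.

```python
# Back-of-envelope check of the "sniper" timing claim.
# Assumed values for illustration only; not from the investigation.
distance_m = 1609.34        # about one mile, building to launch pad
muzzle_velocity_ms = 850.0  # assumed rifle muzzle velocity; varies by cartridge

travel_time_s = distance_m / muzzle_velocity_ms
print(f"Approximate flight time: {travel_time_s:.2f} s")  # ~1.89 s
```

In other words, a hypothetical round would arrive in under two seconds, a short enough interval that a rooftop flash shortly before the rupture could not be ruled out on timing alone.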

A sniper on the roof of a competitor’s building—forget The Right Stuff, this was the stuff of a Mission: Impossible or James Bond movie.

At Musk’s direction, SpaceX worked this theory both internally and externally. Within the company, engineers and technicians actually took pressurized tanks that stored helium—one of these had burst, leading to the explosion—and shot at them in Texas to determine whether they would explode and what the result looked like. Externally, they sent the site director for their Florida operations, Ricky Lim, to inquire whether he might visit the roof of the United Launch Alliance building.

SpaceX pursued the sniper theory for more than a month. A few SpaceX employees told me that they did not stop this line of inquiry until the Federal Aviation Administration sent the company a letter definitively saying that there was no gunman involved. It would be interesting to see this letter, so I submitted a Freedom of Information Act request to the FAA in the spring of 2023. Because the federal FOIA process moves slowly, I did not expect to receive a response in time for the book. But it was worth a try anyway.

No reply came in 2023 or early 2024, when the final version of my book was due to my editor. Reentry was published last September, and still nothing. However, last week, to my great surprise and delight, I got a response from the FAA. It was the very letter I requested, sent from the FAA to Tim Hughes, the general counsel of SpaceX, on October 13, 2016. And yes, the letter says there was no gunman involved.

However, the letter also revealed something I did not know—namely, that the FBI had investigated the incident as well.

The ULA rivalry

One of the most compelling elements of this story is that it involves SpaceX’s heated rival, United Launch Alliance. For a long time, ULA had the upper hand, but in recent years, the rivalry has taken a dramatic turn. Now we know that David would grow up and slay Goliath: Between the final rocket ULA launched last year (the Vulcan test flight on October 4) and the first rocket the company launched this year (an Atlas V on April 28), SpaceX launched 90 rockets.

Ninety.

But it was a different story in the summer of 2016, in the months leading up to the Amos-6 failure. Back then, ULA was launching about 15 rockets a year to SpaceX’s five, and it was flying all of the important science missions for NASA and the critical spy satellites for the US military. ULA was the big dog, SpaceX the pup.

In the early days of the Falcon 9 rocket, some ULA employees would drive to where SpaceX was working on the first booster and jeer at their efforts. The rivalry played out not just on the launch pad but in courtrooms and on Capitol Hill. After ULA won an $11 billion block-buy contract from the US Air Force to launch high-value military payloads into the early 2020s, Musk sued in April 2014. He alleged that the contract had been awarded without a fair competition and said the Falcon 9 rocket could launch the missions at a substantially lower price. Taxpayers, he argued, were being taken for a ride.

Eventually, SpaceX and the Air Force resolved their claims. The Air Force agreed to open some of its previously awarded national security missions to competitive bids. Over time, SpaceX has overtaken ULA even in this arena. During the most recent round of awards, SpaceX won 60 percent of the contracts compared to ULA’s 40 percent.

So when SpaceX raised the possibility of a ULA sniper, it came at an incendiary moment in the rivalry, when SpaceX was finally putting forth a very serious challenge to ULA’s dominance and monopoly.

It is no surprise, therefore, that ULA told SpaceX’s Ricky Lim to get lost when he wanted to see the roof of their building in Florida.

“Hair-on-fire stuff”

NASA officials were also deeply concerned by the loss of the Falcon 9 rocket in September 2016.

The space agency spent much of the 2010s working with SpaceX and Boeing to develop, test, and fly spacecraft that could carry humans into orbit. These were difficult years for NASA, which had to rely on Russia to get its astronauts into space while struggling to balance costs against astronaut safety. Then rockets started blowing up.

Consider the sequence of events beginning in mid-2015. In June 2015, the second stage of a Falcon 9 rocket carrying a cargo version of the Dragon spacecraft to orbit exploded. Less than two weeks later, NASA named four astronauts to its “commercial crew” cadre, from which the initial pilots of the Dragon and Starliner spacecraft would be selected. Then, a little more than a year after that, a second Falcon 9 upper stage exploded—this time on the launch pad.


Even as it was losing Falcon 9 rockets, SpaceX revealed that it intended to upend NASA’s long-standing practice of fueling a rocket and then, when the vehicle reached a stable condition, putting crew on board. Rather, SpaceX said it would put the astronauts on board before fueling. This process became known as “load and go.”

NASA’s safety community went nuts.

“When SpaceX came to us and said we want to load the crew first and then the propellant, mushroom clouds went off in our safety community,” Phil McAlister, the head of NASA’s commercial programs, told me for Reentry. “I mean, hair-on-fire stuff. It was just conventional wisdom that you load the propellant first and get it thermally stable. Fueling is a very dynamic operation. The vehicle is popping and hissing. The safety community was adamantly against this.”

Amos-6 compounded these concerns. That’s because the rocket was not shot by a sniper. After months of painful investigation and analysis, engineers determined that the rocket was lost due to the propellant-loading process. To fuel the Falcon 9 rapidly, SpaceX teams had filled the pressurized helium tanks too quickly, heating the aluminum liner and causing it to buckle. In its haste to load super-chilled propellant onto the rocket, SpaceX had found its speed limit.

At NASA, it was not difficult to imagine that, next time, it might be astronauts in a Dragon capsule, rather than a commercial satellite, sitting atop a rocket that exploded during propellant loading.

Enter the FBI

We should stop and appreciate the crucible that SpaceX engineers and technicians endured in the fall of 2016. They were simultaneously attempting to tease out the physics of a fiendishly complex failure; prove to NASA their exploding rocket was safe; convince safety officials that even though they had just blown up their rocket by fueling it too quickly, load-and-go was feasible for astronaut missions; increase the cadence of Falcon 9 missions to catch and surpass ULA; and, oh yes, gently explain to the boss that a sniper had not shot their rocket.

So there had to be some relief when, on October 13, Hughes received that letter from Dr. Michael C. Romanowski, director of Commercial Space Integration at the FAA.

According to this letter, three weeks after the launch pad explosion, SpaceX submitted “video and audio” along with its analysis of the failure to the FAA. “SpaceX suggested that in the company’s view, this information and data could be indicative of sabotage or criminal activity associated with the on-pad explosion of SpaceX’s Falcon 9,” the letter states.

This is notable because it suggests that Musk directed SpaceX to elevate the “sniper” theory to the point that the FAA should take it seriously. But there was more. According to the letter, SpaceX reported the same data and analysis to the Federal Bureau of Investigation in Florida.

After this, the Tampa Field Office of the FBI and its Criminal Investigative Division in Washington, DC, looked into the matter. And what did they find? Nothing, apparently.

“The FBI has informed us that based upon a thorough and coordinated review by the appropriate Federal criminal and security investigative authorities, there were no indications to suggest that sabotage or any other criminal activity played a role in the September 1 Falcon 9 explosion,” Romanowski wrote. “As a result, the FAA considers this matter closed.”

The failure of the Amos-6 mission would turn out to be a low point for SpaceX. For a few weeks, there were non-trivial questions about the company’s financial viability. But soon, SpaceX would come roaring back. In 2017, the Falcon 9 rocket launched a record 18 times, surpassing ULA for the first time. The gap would only widen. Last year, SpaceX launched 137 rockets to ULA’s five.

With Amos-6, therefore, SpaceX lost the battle. But it would eventually win the war—without anyone firing a shot.


Eric Berger is the senior space editor at Ars Technica, covering everything from astronomy to private space to NASA policy, and author of two books: Liftoff, about the rise of SpaceX; and Reentry, on the development of the Falcon 9 rocket and Dragon. A certified meteorologist, Eric lives in Houston.

SpaceX pushed “sniper” theory with the feds far more than is publicly known