Author name: Rejus Almole


Music labels will regret coming for the Internet Archive, sound historian says

But David Seubert, who manages sound collections at the University of California, Santa Barbara library, told Ars that he frequently used the project as an archive and not just to listen to the recordings.

For Seubert, the videos that IA records of the 78 RPM albums capture more than audio of a certain era. Researchers like him want to look at the label, check out the copyright information, and note the catalogue numbers, he said.

“It has all this information there,” Seubert said. “I don’t even necessarily need to hear it,” he continued, adding, “just seeing the physicality of it, it’s like, ‘Okay, now I know more about this record.'”

Music publishers suing IA argue that all the songs included in their dispute—and likely many more, since the Great 78 Project spans 400,000 recordings—”are already available for streaming or downloading from numerous services.”

“These recordings face no danger of being lost, forgotten, or destroyed,” their filing claimed.

But Nathan Georgitis, the executive director of the Association for Recorded Sound Collections (ARSC), told Ars that you just don’t see 78 RPM records out in the world anymore. Even in record stores selling used vinyl, these recordings will be hidden “in a few boxes under the table behind the tablecloth,” Georgitis suggested. And in “many” cases, “the problem for libraries and archives is that those recordings aren’t necessarily commercially available for re-release.”

That “means that those recordings, those artists, the repertoire, the recorded sound history in itself—meaning the labels, the producers, the printings—all of that history kind of gets obscured from view,” Georgitis said.

Currently, libraries trying to preserve this history must control access to audio collections, Georgitis said. He sees IA’s work with the Great 78 Project as a legitimate archive in that, unlike a streaming service, where content may be inconsistently available, IA’s “mission is to preserve and provide access to content over time.”



Bad vibes? Google may have screwed up haptics in the new Pixel Drop update

The unexpected appearance of notification cooldown, along with smaller changes to haptics globally, could be responsible for the complaints. Maybe this is working as intended and Pixel owners are just caught off guard; or maybe Google broke something. It wouldn’t be the first time.

Pixel notification cooldown

The unexpected appearance of Notification Cooldown in the update might have something to do with the reports—it’s on by default.

Credit: Ryan Whitwam


In 2022, Google released an update that weakened haptic feedback on the Pixel 6, making it so soft that people were missing calls. Google released a fix for the problem a few weeks later. If there’s something wrong with the new Pixel Drop, it’s a more subtle problem. People can’t even necessarily explain how it’s different, but most seem to agree that it is.

After testing several Pixel phones both before and after the update, we think there may be some truth to the complaints. The length and intensity of haptic notification feedback feel different on a Pixel 9 Pro XL post-update, but our Pixel 9 Pro feels the same after installing the Pixel Drop. The different models may simply have been tuned differently in the update, or there could be a bug involved. We’ve reached out to Google to ask about this possible issue and have been told the Pixel team is actively investigating the reports.

Updated on 3/7/2025 with comment from Google. 



Will the future of software development run on vibes?


Accepting AI-written code without understanding how it works is growing in popularity.

For many people, coding is about telling a computer what to do and having the computer perform those precise actions repeatedly. With the rise of AI tools like ChatGPT, it’s now possible for someone to describe a program in English and have the AI model translate it into working code without ever understanding how the code works. Former OpenAI researcher Andrej Karpathy recently gave this practice a name—”vibe coding”—and it’s gaining traction in tech circles.

The technique, enabled by large language models (LLMs) from companies like OpenAI and Anthropic, has attracted attention for potentially lowering the barrier to entry for software creation. But questions remain about whether the approach can reliably produce code suitable for real-world applications, even as tools like Cursor Composer, GitHub Copilot, and Replit Agent make the process increasingly accessible to non-programmers.

Instead of being about control and precision, vibe coding is all about surrendering to the flow. On February 2, Karpathy introduced the term in a post on X, writing, “There’s a new kind of coding I call ‘vibe coding,’ where you fully give in to the vibes, embrace exponentials, and forget that the code even exists.” He described the process in deliberately casual terms: “I just see stuff, say stuff, run stuff, and copy paste stuff, and it mostly works.”


A screenshot of Karpathy’s original X post about vibe coding from February 2, 2025. Credit: Andrej Karpathy / X

While vibe coding, when an error occurs, you feed it back into the AI model, accept the suggested changes, hope they work, and repeat the process. Karpathy’s technique stands in stark contrast to traditional software development best practices, which typically emphasize careful planning, testing, and understanding of implementation details.

As Karpathy humorously acknowledged in his original post, the approach is for the ultimate lazy programmer experience: “I ask for the dumbest things, like ‘decrease the padding on the sidebar by half,’ because I’m too lazy to find it myself. I ‘Accept All’ always; I don’t read the diffs anymore.”

At its core, the technique transforms anyone with basic communication skills into a new type of natural language programmer—at least for simple projects. With AI models currently limited by the amount of code they can digest at once (their context size), there tends to be an upper limit to how complex a vibe-coded software project can get before the human at the wheel becomes a high-level project manager, manually assembling slices of AI-generated code into a larger architecture. But as technical limits expand with each generation of AI models, those limits may one day disappear.

Who are the vibe coders?

There’s no way to know exactly how many people are currently vibe coding their way through either hobby projects or development jobs, but Cursor reported 40,000 paying users in August 2024, and GitHub reported 1.3 million Copilot users just over a year ago (February 2024). While we can’t find user numbers for Replit Agent, the site claims 30 million users, with an unknown percentage using the site’s AI-powered coding agent.

One thing we do know: the approach has particularly gained traction online as a fun way of rapidly prototyping games. Microsoft’s Peter Yang recently demonstrated vibe coding in an X thread by building a simple 3D first-person shooter zombie game through conversational prompts fed into Cursor and Claude 3.7 Sonnet. Yang even used a speech-to-text app so he could verbally describe what he wanted to see and refine the prototype over time.


In August 2024, the author vibe coded his way into a working Q-BASIC utility script for MS-DOS, thanks to Claude Sonnet. Credit: Benj Edwards

We’ve been doing some vibe coding ourselves. Multiple Ars staffers have used AI assistants and coding tools for extracurricular hobby projects such as creating small games, crafting bespoke utilities, writing processing scripts, and more. Having a vibe-based code genie can come in handy in unexpected places: Last year, I asked Anthropic’s Claude to write a Microsoft Q-BASIC program for MS-DOS that decompressed 200 ZIP files into custom directories, saving me many hours of manual typing work.
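For the curious, the same chore is a few lines of modern Python. This is an illustrative stand-in, not the Q-BASIC program Claude actually produced:

```python
# Batch-extract every ZIP in a folder into its own subdirectory,
# the same chore described above (illustrative Python stand-in,
# not the article's actual Q-BASIC script).
import zipfile
from pathlib import Path

def extract_all(zip_dir: Path, out_root: Path) -> int:
    """Extract each .zip in zip_dir into out_root/<archive name>/."""
    count = 0
    for zip_path in sorted(zip_dir.glob("*.zip")):
        dest = out_root / zip_path.stem      # one custom directory per archive
        dest.mkdir(parents=True, exist_ok=True)
        with zipfile.ZipFile(zip_path) as zf:
            zf.extractall(dest)
        count += 1
    return count
```

The point of the anecdote stands either way: the tedious part was never the logic, it was the typing.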

Debugging the vibes

With all this vibe coding going on, we had to turn to an expert for some input. Simon Willison, an independent software developer and AI researcher, offered a nuanced perspective on AI-assisted programming in an interview with Ars Technica. “I really enjoy vibe coding,” he said. “It’s a fun way to try out an idea and prove if it can work.”

But there are limits to how far Willison will go. “Vibe coding your way to a production codebase is clearly risky. Most of the work we do as software engineers involves evolving existing systems, where the quality and understandability of the underlying code is crucial.”

At some point, understanding at least some of the code is important because AI-generated code may include bugs, misunderstandings, and confabulations—for example, instances where the AI model generates references to nonexistent functions or libraries.

“Vibe coding is all fun and games until you have to vibe debug,” developer Ben South noted wryly on X, highlighting this fundamental issue.

Willison recently argued on his blog that encountering hallucinations with AI coding tools isn’t as detrimental as embedding false AI-generated information into a written report, because coding tools have built-in fact-checking: If there’s a confabulation, the code won’t work. This provides a natural boundary for vibe coding’s reliability—the code runs or it doesn’t.
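That boundary is easy to demonstrate. `json.loadstring` below is a deliberately made-up method of Python's real `json` module (the actual function is `json.loads`); the interpreter rejects it the moment the line runs:

```python
# A confabulated API fails fast: running the code is the fact-check.
import json

def run_hallucinated_code() -> str:
    try:
        json.loadstring('{"a": 1}')   # plausible-sounding, but nonexistent
    except AttributeError as exc:
        return f"caught: {exc}"
    return "somehow worked"

print(run_hallucinated_code())  # caught: module 'json' has no attribute 'loadstring'
```

Subtler confabulations—code that runs but does the wrong thing—slip past this check, which is part of Willison's caution about production use.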

Even so, the risk-reward calculation for vibe coding becomes far more complex in professional settings. While a solo developer might accept the trade-offs of vibe coding for personal projects, enterprise environments typically require code maintainability and reliability standards that vibe-coded solutions may struggle to meet. When code doesn’t work as expected, debugging requires understanding what the code is actually doing—precisely the knowledge that vibe coding tends to sidestep.

Programming without understanding

When it comes to defining what exactly constitutes vibe coding, Willison makes an important distinction: “If an LLM wrote every line of your code, but you’ve reviewed, tested, and understood it all, that’s not vibe coding in my book—that’s using an LLM as a typing assistant.” Vibe coding, in contrast, involves accepting code without fully understanding how it works.

While vibe coding originated with Karpathy as a playful term, it may encapsulate a real shift in how some developers approach programming tasks—prioritizing speed and experimentation over deep technical understanding. And to some people, that may be terrifying.

Willison emphasizes that developers need to take accountability for their code: “I firmly believe that as a developer you have to take accountability for the code you produce—if you’re going to put your name to it you need to be confident that you understand how and why it works—ideally to the point that you can explain it to somebody else.”

He also warns about a common path to technical debt: “For experiments and low-stake projects where you want to explore what’s possible and build fun prototypes? Go wild! But stay aware of the very real risk that a good enough prototype often faces pressure to get pushed to production.”

The future of programming jobs

So, is all this vibe coding going to cost human programmers their jobs? At its heart, programming has always been about telling a computer how to operate. The method of how we do that has changed over time, but there may always be people who are better at telling a computer precisely what to do than others—even in natural language. In some ways, those people may become the new “programmers.”

There was a point in the late 1970s and early ’80s when many people believed you needed programming skills to use a computer effectively, because so few pre-built applications existed for the various computer platforms of the day. School systems worldwide launched computer literacy programs to teach people to code.


A brochure for the GE 210 computer from 1964. BASIC’s creators used a similar computer four years later to develop the programming language that many children were taught at home and school. Credit: GE / Wikipedia

Before too long, people made useful software applications that let non-coders utilize computers easily—no programming required. Even so, programmers didn’t disappear—instead, they used applications to create better and more complex programs. Perhaps that will also happen with AI coding tools.

To use an analogy, computer-controlled technologies like autopilot made reliable supersonic flight possible because they could handle aspects of flight that were too taxing for all but the most highly trained and capable humans to safely control. AI may do the same for programming, allowing humans to abstract away complexities that would otherwise take too much time to manually code, and that may allow for the creation of more complex and useful software experiences in the future.

But at that point, will humans still be able to understand or debug them? Maybe not. We may be completely dependent on AI tools, and some people no doubt find that a little scary or unwise.

Whether vibe coding lasts in the programming landscape or remains a prototyping technique will likely depend less on the capabilities of AI models and more on the willingness of organizations to accept risky trade-offs in code quality, maintainability, and technical debt. For now, vibe coding remains an apt descriptor of the messy, experimental relationship between AI and human developers—more collaborative than autonomous, but increasingly blurring the lines of who (or what) is really doing the programming.


Benj Edwards is Ars Technica’s Senior AI Reporter and founder of the site’s dedicated AI beat in 2022. He’s also a tech historian with almost two decades of experience. In his free time, he writes and records music, collects vintage computers, and enjoys nature. He lives in Raleigh, NC.



AMD Radeon RX 9070 and 9070 XT review: RDNA 4 fixes a lot of AMD’s problems


For $549 and $599, AMD comes close to knocking out Nvidia’s GeForce RTX 5070.

AMD’s Radeon RX 9070 and 9070 XT are its first cards based on the RDNA 4 GPU architecture. Credit: Andrew Cunningham


AMD is a company that knows a thing or two about capitalizing on a competitor’s weaknesses. The company got through its mid-2010s nadir partially because its Ryzen CPUs struck just as Intel’s current manufacturing woes began to set in, first with somewhat-worse CPUs that were great value for the money and later with CPUs that were better than anything Intel could offer.

Nvidia’s untrammeled dominance of the consumer graphics card market should also be an opportunity for AMD. Nvidia’s GeForce RTX 50-series graphics cards have given buyers very little to get excited about, with an unreachably expensive high-end 5090 refresh and modest-at-best gains from 5080 and 5070-series cards that are also pretty expensive by historical standards, when you can buy them at all. Tech YouTubers—both the people making the videos and the people leaving comments underneath them—have been almost uniformly unkind to the 50 series, hinting at consumer frustrations and pent-up demand for competitive products from other companies.

Enter AMD’s Radeon RX 9070 XT and RX 9070 graphics cards. These are aimed right at the middle of the current GPU market at the intersection of high sales volume and decent profit margins. They promise good 1440p and entry-level 4K gaming performance and improved power efficiency compared to previous-generation cards, with fixes for long-time shortcomings (ray-tracing performance, video encoding, and upscaling quality) that should, in theory, make them more tempting for people looking to ditch Nvidia.


RX 9070 and 9070 XT specs and speeds

| | RX 9070 XT | RX 9070 | RX 7900 XTX | RX 7900 XT | RX 7900 GRE | RX 7800 XT |
| Compute units (stream processors) | 64 RDNA4 (4,096) | 56 RDNA4 (3,584) | 96 RDNA3 (6,144) | 84 RDNA3 (5,376) | 80 RDNA3 (5,120) | 60 RDNA3 (3,840) |
| Boost clock | 2,970 MHz | 2,520 MHz | 2,498 MHz | 2,400 MHz | 2,245 MHz | 2,430 MHz |
| Memory bus width | 256-bit | 256-bit | 384-bit | 320-bit | 256-bit | 256-bit |
| Memory bandwidth | 650 GB/s | 650 GB/s | 960 GB/s | 800 GB/s | 576 GB/s | 624 GB/s |
| Memory size | 16GB GDDR6 | 16GB GDDR6 | 24GB GDDR6 | 20GB GDDR6 | 16GB GDDR6 | 16GB GDDR6 |
| Total board power (TBP) | 304 W | 220 W | 355 W | 315 W | 260 W | 263 W |

AMD’s high-level performance promise for the RDNA 4 architecture revolves around big increases in performance per compute unit (CU). An RDNA 4 CU, AMD says, is nearly twice as fast in rasterized performance as RDNA 2 (that is, rendering without ray-tracing effects enabled) and nearly 2.5 times as fast as RDNA 2 in games with ray-tracing effects enabled. Performance for at least some machine learning workloads also goes way up—twice as fast as RDNA 3 and four times as fast as RDNA 2.

We’ll see this in more detail when we start comparing performance, but AMD seems to have accomplished this goal. Despite having 64 or 56 compute units (for the 9070 XT and 9070, respectively), the cards’ performance often competes with AMD’s last-generation flagships, the RX 7900 XTX and 7900 XT. Those cards came with 96 and 84 compute units, respectively. The 9070 cards are specced a lot more like last generation’s RX 7800 XT—including the 16GB of GDDR6 on a 256-bit memory bus, as AMD still isn’t using GDDR6X or GDDR7—but they’re much faster than the 7800 XT was.

AMD has dramatically increased the performance per compute unit for RDNA 4. Credit: AMD

The 9070 series also uses a new 4 nm manufacturing process from TSMC, an upgrade from the 7000 series’ 5 nm process (and the 6 nm process used for the separate memory controller dies in higher-end RX 7000-series models that used chiplets). AMD’s GPUs are normally a bit less efficient than Nvidia’s, but the architectural improvements and the new manufacturing process allow AMD to do some important catch-up.

Both of the 9070 models we tested were ASRock Steel Legend models, and the 9070 and 9070 XT had identical designs—we’ll probably see a lot of this from AMD’s partners since the GPU dies and the 16GB RAM allotments are the same for both models. Both use two 8-pin power connectors; AMD says partners are free to use the 12-pin power connector if they want, but given Nvidia’s ongoing issues with it, most cards will likely stick with the reliable 8-pin connectors.

AMD doesn’t appear to be making and selling reference designs for the 9070 series the way it did for some RX 7000 and 6000-series GPUs or the way Nvidia does with its Founders Edition cards. From what we’ve seen, 2 or 2.5-slot, triple-fan designs will be the norm, the way they are for most midrange GPUs these days.

Testbed notes

We used the same GPU testbed for the Radeon RX 9070 series as we have for our GeForce RTX 50-series reviews.

An AMD Ryzen 7 9800X3D ensures that our graphics cards will be CPU-limited as little as possible. An ample 1050 W power supply, 32GB of DDR5-6000, and an AMD X670E motherboard with the latest BIOS installed round out the hardware. On the software side, we use an up-to-date installation of Windows 11 24H2 and recent GPU drivers for older cards, ensuring that our tests reflect whatever optimizations Microsoft, AMD, Nvidia, and game developers have made since the last generation of GPUs launched.

We have numbers for all of Nvidia’s RTX 50-series GPUs so far, plus most of the 40-series cards, most of AMD’s RX 7000-series cards, and a handful of older GPUs from the RTX 30-series and RX 6000 series. We’ll focus on comparing the 9070 XT and 9070 to other 1440p-to-4K graphics cards since those are the resolutions AMD is aiming at.

Performance

At $549 and $599, the 9070 series is priced to match Nvidia’s $549 RTX 5070 and undercut the $749 RTX 5070 Ti. So we’ll focus on comparing the 9070 series to those cards, plus the top tier of GPUs from the outgoing RX 7000-series.

Some 4K rasterized benchmarks.

Starting at the top with rasterized benchmarks with no ray-tracing effects, the 9070 XT does a good job of standing up to Nvidia’s RTX 5070 Ti, coming within a few frames per second of its performance in all the games we tested (and scoring very similarly in the 3DMark Time Spy Extreme benchmark).

Both cards are considerably faster than the RTX 5070—between 15 and 28 percent for the 9070 XT and between 5 and 13 percent for the regular 9070 (our 5070 scored weirdly low in Horizon Zero Dawn Remastered, so we’d treat those numbers as outliers for now). Both 9070 cards also stack up well next to the RX 7000 series here—the 9070 can usually just about match the performance of the 7900 XT, and the 9070 XT usually beats it by a little. Both cards thoroughly outrun the old RX 7900 GRE, which was AMD’s $549 GPU offering just a year ago.

The 7900 XT does have 20GB of RAM instead of 16GB, which might help its performance in some edge cases. But 16GB is still perfectly generous for a 1440p-to-4K graphics card—the 5070 only offers 12GB, which could end up limiting its performance in some games as RAM requirements continue to rise.

On ray-tracing improvements

Nvidia got a jump on AMD when it introduced hardware-accelerated ray-tracing in the RTX 20-series in 2018. And while these effects were only supported in a few games at the time, many modern games offer at least some kind of ray-traced lighting effects.

AMD caught up a little when it began shipping its own ray-tracing support in the RDNA2 architecture in late 2020, but the issue since then has always been that AMD cards have taken a larger performance hit than GeForce GPUs when these effects are turned on. RDNA3 promised improvements, but our tests still generally showed the same deficit as before.

So we’re looking for two things with RDNA4’s ray-tracing performance. First, we want the numbers to be higher than they were for comparably priced RX 7000-series GPUs, the same thing we look for in non-ray-traced (or rasterized) rendering performance. Second, we want the size of the performance hit to go down. To pick an example: the RX 7900 GRE could compete with Nvidia’s RTX 4070 Ti Super in games without ray tracing, but it was closer to a non-Super RTX 4070 in ray-traced games. It has helped keep AMD’s cards from being across-the-board competitive with Nvidia’s—is that any different now?

Benchmarks for games with ray-tracing effects enabled. Both AMD cards generally keep pace with the 5070 in these tests thanks to RDNA 4’s improvements.

The picture our tests paint is mixed but tentatively positive. The 9070 series and RDNA4 post solid improvements in the Cyberpunk 2077 benchmarks, substantially closing the performance gap with Nvidia. In games where AMD’s cards performed well enough before—here represented by Returnal—performance goes up, but roughly proportionately with rasterized performance. And both 9070 cards still punch below their weight in Black Myth: Wukong, falling substantially behind the 5070 under the punishing Cinematic graphics preset.

So the benefits you see, as with any GPU update, will depend a bit on the game you’re playing. There’s also a possibility that game optimizations and driver updates made with RDNA4 in mind could boost performance further. We can’t say that AMD has caught all the way up to Nvidia here—the 9070 and 9070 XT are both closer to the GeForce RTX 5070 than the 5070 Ti, despite staying closer to the 5070 Ti in rasterized tests—but there is real, measurable improvement here, which is what we were looking for.

Power usage

The 9070 series’ performance increases are particularly impressive when you look at the power-consumption numbers. The 9070 comes close to the 7900 XT’s performance but uses 90 W less power under load. It beats the RTX 5070 most of the time but uses around 30 W less power.

The 9070 XT is a little less impressive on this front—AMD has set clock speeds pretty high, and this can increase power use disproportionately. The 9070 XT is usually 10 or 15 percent faster than the 9070 but uses 38 percent more power. The XT’s power consumption is similar to the RTX 5070 Ti’s (a GPU it often matches) and the 7900 XT’s (a GPU it always beats), so it’s not too egregious, but it’s not as standout as the 9070’s.

AMD gives 9070 owners a couple of new toggles for power limits, though, which we’ll talk about in the next section.

Experimenting with “Total Board Power”

We don’t normally dabble much with overclocking when we review CPUs or GPUs—we’re happy to leave that to folks at other outlets. But when we review CPUs, we do usually test them with multiple power limits in place. Playing with power limits is easier (and occasionally safer) than actually overclocking, and it often comes with large gains to either performance (a chip that performs much better when given more power to work with) or efficiency (a chip that can run at nearly full speed without using as much power).

Initially, I experimented with the RX 9070’s power limits by accident. AMD sent me one version of the 9070 but exchanged it because of a minor problem the OEM identified with some units early in the production run. I had, of course, already run most of our tests on it, but that’s the way these things go sometimes.

By bumping the regular RX 9070’s TBP up just a bit, you can nudge it closer to 9070 XT-level performance.

The replacement RX 9070 card, an ASRock Steel Legend model, was performing significantly better in our tests, sometimes nearly closing the gap between the 9070 and the XT. It wasn’t until I tested power consumption that I discovered the explanation—by default, it was using a 245 W power limit rather than the AMD-defined 220 W limit. Usually, these kinds of factory tweaks don’t make much of a difference, but for the 9070, this power bump gave it a nice performance boost while still keeping it close to the 250 W power limit of the GeForce RTX 5070.

The 90-series cards we tested both add some power presets to AMD’s Adrenalin app in the Performance tab under Tuning. These replace and/or complement some of the automated overclocking and undervolting buttons that exist here for older Radeon cards. Clicking Favor Efficiency or Favor Performance can ratchet the card’s Total Board Power (TBP) up or down, limiting performance so that the card runs cooler and quieter or allowing the card to consume more power so it can run a bit faster.

The 9070 cards get slightly different performance tuning options in the Adrenalin software. These buttons mostly change the card’s Total Board Power (TBP), making it simple to either improve efficiency or boost performance a bit. Credit: Andrew Cunningham

For this particular ASRock 9070 card, the default TBP is set to 245 W. Selecting “Favor Efficiency” sets it to the default 220 W. You can double-check these values using an app like HWInfo, which displays both the current TBP and the maximum TBP in its Sensors Status window. Clicking the Custom button in the Adrenalin software gives you access to a Power Tuning slider, which for our card allowed us to ratchet the TBP up by up to 10 percent or down by as much as 30 percent.
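Worked out against this card's 245 W default, the slider's range comes to roughly 269.5 W on the high end and 171.5 W on the low end:

```python
# TBP range implied by the Power Tuning slider described above:
# +10 percent up, -30 percent down from this card's 245 W default.
def tbp_bounds(default_w: float, up: float = 0.10, down: float = 0.30):
    return round(default_w * (1 + up), 1), round(default_w * (1 - down), 1)

print(tbp_bounds(245))  # (269.5, 171.5)
```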

This is all the firsthand testing we did with the power limits of the 9070 series, though I would assume that adding a bit more power also adds more overclocking headroom (bumping up the power limits is common for GPU overclockers no matter who makes your card). AMD says that some of its partners will ship 9070 XT models set to a roughly 340 W power limit out of the box but acknowledges that “you start seeing diminishing returns as you approach the top of that [power efficiency] curve.”

But it’s worth noting that the driver has another automated set-it-and-forget-it power setting you can easily use to find your preferred balance of performance and power efficiency.

A quick look at FSR4 performance

There’s a toggle in the driver for enabling FSR 4 in FSR 3.1-supporting games. Credit: Andrew Cunningham

One of AMD’s headlining improvements to the RX 90-series is the introduction of FSR 4, a new version of its FidelityFX Super Resolution upscaling algorithm. Like Nvidia’s DLSS and Intel’s XeSS, FSR 4 can take advantage of RDNA 4’s machine learning processing power to do hardware-backed upscaling instead of taking a hardware-agnostic approach as the older FSR versions did. AMD says this will improve upscaling quality, but it also means FSR4 will only work on RDNA 4 GPUs.

The good news is that FSR 3.1 and FSR 4 are forward- and backward-compatible. Games that have already added FSR 3.1 support can automatically take advantage of FSR 4, and games that support FSR 4 on the 90-series can just run FSR 3.1 on older and non-AMD GPUs.

FSR 4 comes with a small performance hit compared to FSR 3.1 at the same settings, but better overall quality can let you drop to a faster preset like Balanced or Performance and end up with more frames-per-second overall. Credit: Andrew Cunningham

The only game in our current test suite to be compatible with FSR 4 is Horizon Zero Dawn Remastered, and we tested its performance using both FSR 3.1 and FSR 4. In general, we found that FSR 4 improved visual quality at the cost of just a few frames per second when run at the same settings—not unlike using Nvidia’s recently released “transformer model” for DLSS upscaling.

Many games will let you choose which version of FSR you want to use. But for FSR 3.1 games that don’t have a built-in FSR 4 option, there’s a toggle in AMD’s Adrenalin driver you can hit to switch to the better upscaling algorithm.

Even if they come with a performance hit, new upscaling algorithms can still improve performance by making the lower-resolution presets look better. We run all of our testing in “Quality” mode, which generally renders at two-thirds of native resolution and scales up. But if FSR 4 running in Balanced or Performance mode looks the same to your eyes as FSR 3.1 running in Quality mode, you can still end up with a net performance improvement in the end.
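To put numbers on that: at two-thirds of native resolution per axis, 4K renders internally at about 2560×1440 before upscaling. (The two-thirds figure is the generalization above; exact ratios vary by preset and game.)

```python
# Internal render resolution for an upscaler preset, assuming the
# two-thirds-of-native "Quality" scale described above.
def render_resolution(width: int, height: int, scale: float = 2 / 3):
    return round(width * scale), round(height * scale)

print(render_resolution(3840, 2160))  # (2560, 1440) for 4K native
```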

RX 9070 or 9070 XT?

Just $50 separates the advertised price of the 9070 from that of the 9070 XT, something both Nvidia and AMD have done in the past that I find a bit annoying. If you have $549 to spend on a graphics card, you can almost certainly scrape together $599 for a graphics card. All else being equal, I’d tell most people trying to choose one of these to just spring for the 9070 XT.

That said, availability and retail pricing for these might be all over the place. If your choices are a regular RX 9070 or nothing, or an RX 9070 at $549 and an RX 9070 XT at any price higher than $599, I would just grab a 9070 and not sweat it too much. The two cards aren’t that far apart in performance, especially if you bump the 9070’s TBP up a little bit, and games that are playable on one will be playable at similar settings on the other.

Pretty close to great

If you’re building a 1440p or 4K gaming box, the 9070 series might be the ones to beat right now. Credit: Andrew Cunningham

We’ve got plenty of objective data in here, so I don’t mind saying that I came into this review kind of wanting to like the 9070 and 9070 XT. Nvidia’s 50-series cards have mostly upheld the status quo, and for the last couple of years, the status quo has been sustained high prices and very modest generational upgrades. And who doesn’t like an underdog story?

I think our test results mostly justify my priors. The RX 9070 and 9070 XT are very competitive graphics cards, helped along by a particularly mediocre RTX 5070 refresh from Nvidia. In non-ray-traced games, both cards wipe the floor with the 5070 and come close to competing with the $749 RTX 5070 Ti. In games and synthetic benchmarks with ray-tracing effects on, both cards can usually match or slightly beat the similarly priced 5070, partially (if not entirely) addressing AMD’s longstanding performance deficit here. Neither card comes close to the 5070 Ti in these games, but they’re also not priced like a 5070 Ti.

Just as impressively, the Radeon cards compete with the GeForce cards while consuming similar amounts of power. At stock settings, the RX 9070 uses roughly the same amount of power under load as a 4070 Super but with better performance. The 9070 XT uses about as much power as a 5070 Ti, with similar performance before you turn ray-tracing on. Power efficiency was a small but consistent drawback for the RX 7000 series compared to GeForce cards, and the 9070 cards mostly erase that disadvantage. AMD is also less stingy with the RAM, giving you 16GB for the price Nvidia charges for 12GB.

Some of the old caveats still apply. Radeons take a bigger performance hit, proportionally, than GeForce cards when ray-tracing effects are turned on. DLSS already looks pretty good and is widely supported, while FSR 3.1/FSR 4 adoption is still relatively low. Nvidia has a nearly monopolistic grip on the dedicated GPU market, which means many apps, AI workloads, and games support its GPUs best, first, or exclusively. AMD is always playing catch-up to Nvidia in some respect, and Nvidia keeps progressing quickly enough that it feels like AMD never quite has the opportunity to close the gap.

AMD also doesn’t have an answer for DLSS Multi-Frame Generation. The benefits of that technology are fairly narrow, and you already get most of those benefits with single-frame generation. But it’s still a thing that Nvidia does and AMD doesn’t.

Overall, the RX 9070 cards are both awfully tempting competitors to the GeForce RTX 5070—and occasionally even the 5070 Ti. They’re great at 1440p and decent at 4K. Sure, I’d like to see them priced another $50 or $100 cheaper to well and truly undercut the 5070 and bring 1440p-to-4K performance to a sub-$500 graphics card. It would be nice to see AMD undercut Nvidia’s GPUs as ruthlessly as it undercut Intel’s CPUs nearly a decade ago. But these RDNA4 GPUs have way fewer downsides than previous-generation cards, and they come at a moment of relative weakness for Nvidia. We’ll see if the sales follow.

The good

  • Great 1440p performance and solid 4K performance
  • 16GB of RAM
  • Decisively beats Nvidia’s RTX 5070, including in most ray-traced games
  • RX 9070 XT is competitive with RTX 5070 Ti in non-ray-traced games for less money
  • Both cards match or beat the RX 7900 XT, AMD’s second-fastest card from the last generation
  • Decent power efficiency for the 9070 XT and great power efficiency for the 9070
  • Automated options for tuning overall power use to prioritize either efficiency or performance
  • Reliable 8-pin power connectors available in many cards

The bad

  • Nvidia’s ray-tracing performance is still usually better
  • At $549 and $599, pricing matches but doesn’t undercut the RTX 5070
  • FSR 4 isn’t as widely supported as DLSS and may not be for a while

The ugly

  • Playing the “can you actually buy these for AMD’s advertised prices” game


Andrew is a Senior Technology Reporter at Ars Technica, with a focus on consumer tech including computer hardware and in-depth reviews of operating systems like Windows and macOS. Andrew lives in Philadelphia and co-hosts a weekly book podcast called Overdue.



Andor S2 featurette teases canonical tragic event

Most of the main S1 cast is returning for S2, with the exception of Shaw. Forest Whitaker once again reprises his Rogue One role as Clone Wars veteran Saw Gerrera, joined by fellow Rogue One alums Ben Mendelsohn and Alan Tudyk as Orson Krennic and K-2SO, respectively. Benjamin Bratt has also been cast in an as-yet-undisclosed role.

The behind-the-scenes look opens with footage of a desperate emergency broadcast calling for help because Imperial ships were landing, filled with stormtroopers intent on quashing any protesters or nascent rebels against the Empire who might be lurking about. “Revolutionary movements are spontaneously happening all over the galaxy,” series creator Tony Gilroy explains. “How those come together is the stuff of our story.” While S1 focused a great deal on political intrigue, Genevieve O’Reilly, who plays Mon Mothma, describes S2 as a “juggernaut,” with a size and scope to match.

The footage shown—some new, some shown in last week’s teaser—confirms that assessment. There are glimpses of Gerrera, Krennic, and K-2SO, as well as Mothma’s home world, Chandrila. And are all those protesters chanting on the planet of Ghorman? That means we’re likely to see the infamous Ghorman Massacre, a brutal event that resulted in Mothma resigning from the Senate in protest against Emperor Palpatine. The massacre was so horrifying that it eventually served to mobilize and unite rebel forces across the galaxy in the Star Wars canon.

The first three (of 12) episodes of Andor S2 premiere on April 22, 2025, on Disney+. Subsequent three-episode chapters will drop weekly for the next three weeks after that.

poster art for Andor S2

Credit: LucasFilm/Disney+



“Wooly mice” a test run for mammoth gene editing

On Tuesday, the team behind the plan to bring mammoth-like animals back to the tundra announced the creation of what it is calling wooly mice, which have long fur reminiscent of the woolly mammoth. The long fur was created through the simultaneous editing of as many as seven genes, all with a known connection to hair growth, color, and/or texture.

But don’t think that this is a sort of mouse-mammoth hybrid. Most of the genetic changes were first identified in mice, not mammoths. So, the focus is on the fact that the team could do simultaneous editing of multiple genes—something that they’ll need to be able to do to get a considerable number of mammoth-like changes into the elephant genome.

Of mice and mammoths

The team at Colossal Biosciences has started a number of de-extinction projects, including the dodo and thylacine, but its flagship project is the mammoth. In all of these cases, the plan is to take stem cells from a closely related species that has not gone extinct and edit in a series of changes based on the corresponding genome of the extinct species. In the case of the mammoth, that means the elephant.

But the elephant poses a large number of challenges, as the draft paper describing the new mice notes. “The 22-month gestation period of elephants and their extended reproductive timeline make rapid experimental assessment impractical,” the researchers acknowledge. “Further, ethical considerations regarding the experimental manipulation of elephants, an endangered species with complex social structures and high cognitive capabilities, necessitate alternative approaches for functional testing.”

So, they turned to a species that has been used for genetic experiments for over a century: the mouse. We can do all sorts of genetic manipulations in mice, and have ways of using embryonic stem cells to get those manipulations passed on to a new generation of mice.

For testing purposes, the mouse also has a very significant advantage: mutations that change its fur are easy to spot. Over the century-plus that we’ve been using mice for research, people have observed a huge variety of mutations that affect their fur, altering color, texture, and length. In many of these cases, the underlying DNA changes have been identified.



Cod liver oil embraced amid Texas measles outbreak; doctors fight misinfo

US Health Secretary and long-standing anti-vaccine advocate Robert F. Kennedy Jr. is facing criticism for his equivocal response to the raging measles outbreak in West Texas, which as of Tuesday has grown to 159 cases, with 22 hospitalizations and one child death.

While public health officials would like to see a resounding endorsement of the Measles, Mumps, and Rubella (MMR) vaccine as the best way to protect children and vulnerable community members from further spread of the extremely infectious virus, Kennedy instead penned an Op-Ed for Fox News sprinkled with anti-vaccine talking points. Before noting that vaccines “protect individual children” and “contribute to community immunity,” he stressed parental choice. The decision to vaccinate is “a personal one,” he wrote, and merely advised parents to “consult with their healthcare providers to understand their options to get the MMR vaccine.”

Further, Kennedy seemed more eager to embrace nutrition and supplements as a way to combat the potentially deadly infection. He declared that the “best defense” against infectious diseases, like the measles, is “good nutrition”—not lifesaving, highly effective vaccines.

“Vitamins A, C, and D, and foods rich in vitamins B12, C, and E should be part of a balanced diet,” according to Kennedy, who has no medical or health background. In particular, he highlighted that vitamin A can be used as a treatment for severe measles cases—only when it is administered carefully by a doctor.

Vitamins over vaccines

But Kennedy’s emphasis has spurred a general embrace of vitamin A and cod liver oil (which is rich in vitamin A, among other nutrients) by vaccine-hesitant parents in West Texas, according to The Washington Post.

A Post reporter spent time in Gaines County, the undervaccinated epicenter of the outbreak, which has a large Mennonite community. At a Mennonite-owned pizzeria in Seminole, the county seat of Gaines, a waitress advised diners that vitamin A was a great way to help children with measles, according to the Post.

A Mennonite-owned health food and supplement store a mile away has been running low on vitamin A products as demand increased amid the outbreak. “They’ll do cod liver oil because it’s high in vitamin A and D naturally, food-based,” Nancy Ginter, the store’s owner, told the Post. “Some people come in before they break out because they’re trying to just get their kids’ immune system to go up so they don’t get a secondary infection.”



Apple’s M4 MacBook Air refresh may be imminent, with iPads likely to follow

Aside from the M4’s modest performance improvements over the M3, it seems likely that Apple will add a new webcam to match the ones added to the iMac and MacBook Pros. The M4 can also support up to three displays simultaneously—two external, plus a Mac’s internal display. The M3 supported two external displays, but only if the Mac’s built-in screen was turned off.

Gurman also indicates that refreshes for the basic 10.9-inch iPad and the iPad Air are coming soon, though they’re apparently not as imminent as the M4 MacBook Airs. The report doesn’t indicate which processors either of those refreshes will include; the current iPad Air lineup uses the M2, so either the M3 or M4 would be an upgrade. If Apple wants to bring Apple Intelligence to the 10.9-inch iPad, that would limit it to either the A17 Pro (like the 7th-gen iPad mini) or a variant of the Apple A18 (like the iPhone 16e). Apple Intelligence requires a chip with at least 8GB of RAM.

The iPad Air was refreshed a little less than a year ago, but the 10.9-inch iPad is due for an update. Apple gave it a price cut in 2024, but its hardware has been the same since October of 2022.



On May 5, Microsoft’s Skype will shut down for good

After more than 21 years, Skype will soon be no more. Last night, some users (including Ars readers) poked around in the latest Skype preview update and noticed as-yet-unsurfaced text that read “Starting in May, Skype will no longer be available. Continue your calls and chats in Teams.”

This morning, Microsoft has confirmed to Ars that it’s true. May 5, 2025, will mark the end of Skype’s long run.

Alongside the verification that the end is nigh, Microsoft shared a bunch of details about how it plans to migrate Skype users over. Starting right away, some Skype users (those in Teams and Skype Insider) will be able to log in to Teams using their Skype credentials. More people will gain that ability over the next few days.

Microsoft claims that users who do this will see their existing contacts and chats from Skype in Teams from the start. Alternatively, users who don’t want to do this can export their Skype data—specifically contacts, call history, and chats.



SpaceX readies a redo of last month’s ill-fated Starship test flight


The FAA has cleared SpaceX to launch Starship’s eighth test flight as soon as Monday.

Ship 34, destined to launch on the next Starship test flight, test-fired its engines in South Texas on February 12. Credit: SpaceX

SpaceX plans to launch the eighth full-scale test flight of its enormous Starship rocket as soon as Monday after receiving regulatory approval from the Federal Aviation Administration.

The test flight will be a repeat of what SpaceX hoped to achieve on the previous Starship launch in January, when the rocket broke apart and showered debris over the Atlantic Ocean and Turks and Caicos Islands. The accident prevented SpaceX from completing many of the flight’s goals, such as testing Starship’s satellite deployment mechanism and new types of heat shield material.

Those things are high on the to-do list for Flight 8, set to lift off at 5:30 pm CST (6:30 pm EST; 23:30 UTC) Monday from SpaceX’s Starbase launch facility on the Texas Gulf Coast. Over the weekend, SpaceX plans to mount the rocket’s Starship upper stage atop the Super Heavy booster already in position on the launch pad.

The fully stacked rocket will tower 404 feet (123.1 meters) tall. Like the test flight on January 16, this launch will use a second-generation “Block 2” version of Starship, with larger propellant tanks holding 25 percent more volume than previous vehicle iterations. The payload compartment near the ship’s top is somewhat smaller than the payload bay on Block 1 Starships.

This block upgrade moves SpaceX closer to attempting more challenging things with Starship, such as returning the ship, or upper stage, to the launch site from orbit, where it would be caught with the launch tower at Starbase, just as SpaceX accomplished last year with the Super Heavy booster. Officials also want to bring Starship into service to launch Starlink Internet satellites and demonstrate in-orbit refueling, an enabling capability for future Starship flights to the Moon and Mars.

NASA has contracts with SpaceX worth more than $4 billion to develop a Starship spinoff as a human-rated Moon lander for the Artemis lunar program. The mega-rocket is central to Elon Musk’s ambition to create a human settlement on Mars.

Another shot at glory

Other changes introduced on Starship Version 2 include redesigned forward flaps, which are smaller and closer to the tip of the ship’s nose to better protect them from the scorching heat of reentry. Technicians also removed some of the ship’s thermal protection tiles to “stress-test vulnerable areas” of the vehicle during descent. SpaceX is experimenting with metallic tile designs, including one with active cooling, that might be less brittle than the ceramic tiles used elsewhere on the ship.

Engineers also installed rudimentary catch fittings on the ship to evaluate how they respond to the heat of reentry, when temperatures outside the vehicle climb to 2,600° Fahrenheit (1,430° Celsius). Read more about Starship Version 2 in this previous story from Ars.

It will take about 1 hour and 6 minutes for Starship to fly from the launch pad in South Texas to a splashdown zone in the Indian Ocean northwest of Australia. The rocket’s Super Heavy booster will fire 33 methane-fueled Raptor engines for two-and-a-half minutes as it climbs east from the Texas coastline, then jettison from the Starship upper stage and reverse course to return to Starbase for another catch with mechanical arms on the launch tower.

Meanwhile, Starship will ignite six Raptor engines and accelerate to a speed just shy of orbital velocity, putting the ship on a trajectory to reenter the atmosphere after soaring about halfway around the world.

Booster 15 perched on the launch mount at Starbase, Texas. Credit: SpaceX

If you’ve watched the last few Starship flights, this profile probably sounds familiar. SpaceX achieved successful splashdowns after three Starship test flights last year, and hoped to do it again before the premature end of Flight 7 in January. Instead, the accident was the most significant technical setback for the Starship program since the first full-scale test flight in 2023, which damaged the launch pad before the rocket spun out of control in the upper atmosphere.

Now, SpaceX hopes to get back on track. At the end of last year, company officials said they targeted as many as 25 Starship flights in 2025. Two months in, SpaceX is about to launch its second Starship of the year.

The breakup of Starship last month prevented SpaceX from evaluating the performance of the ship’s Pez-like satellite deployer and upgraded heat shield. Engineers are eager to see how those perform on Monday’s flight. Once in space, the ship will release four simulators replicating the approximate size and mass of SpaceX’s next-generation Starlink Internet satellites. They will follow the same suborbital trajectory as Starship and reenter the atmosphere over the Indian Ocean.

That will be followed by a restart of a Raptor engine on Starship in space, repeating a feat first achieved on Flight 6 in November. Officials want to ensure Raptor engines can reignite reliably in space before actually launching Starship into a stable orbit, where the ship must burn an engine to guide itself back into the atmosphere for a controlled reentry. With another suborbital flight on tap Monday, the engine relight is purely a confidence-building demonstration and not critical for a safe return to Earth.

The flight plan for Starship’s next launch includes another attempt to catch the Super Heavy booster with the launch tower, a satellite deployment demonstration, and an important test of its heat shield. Credit: SpaceX

Then, about 47 minutes into the mission, Starship will plunge back into the atmosphere. If this flight is like the previous few, expect to see live high-definition video streaming back from Starship as super-heated plasma envelops the vehicle in a cloak of pink and orange. Finally, air resistance will slow the ship below the speed of sound, and just 20 seconds before reaching the ocean, the rocket will flip to a vertical orientation and reignite its Raptor engines again to brake for splashdown.

This is where SpaceX hopes Starship Version 2 will shine. Although three Starships have made it to the ocean intact, the scorching temperatures of reentry damaged parts of their heat shields and flaps. That won’t do for SpaceX’s vision of rapidly reusing Starship with minimal or no refurbishment. Heat shield repairs slowed down the turnaround time between NASA’s space shuttle missions, and officials hope the upgraded heat shield on Starship Version 2 will decrease the downtime.

FAA’s green light

The FAA confirmed Friday it issued a launch license earlier this week for Starship Flight 8.

“The FAA determined SpaceX met all safety, environmental and other licensing requirements for the suborbital test flight,” an FAA spokesperson said in a statement.

The federal regulator oversaw a SpaceX-led investigation into the failure of Flight 7. SpaceX said NASA, the National Transportation Safety Board, and the US Space Force also participated in the investigation, which determined that propellant leaks and fires in an aft compartment, or attic, of Starship led to the shutdown of its engines and eventual breakup.

Engineers concluded the leaks were most likely caused by a harmonic response several times stronger than predicted, suggesting the vibrations during the ship’s climb into space were in resonance with the vehicle’s natural frequency. This would have intensified the vibrations beyond the levels engineers expected from ground testing.
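The mechanism the investigators describe is the textbook behavior of a driven, damped oscillator; as a generic illustration (not SpaceX’s actual structural analysis), the steady-state vibration amplitude under sinusoidal forcing is:

```latex
A(\omega) = \frac{F_0/m}{\sqrt{\left(\omega_0^2 - \omega^2\right)^2 + \left(2\zeta\omega_0\omega\right)^2}}
```

This peaks sharply as the forcing frequency \(\omega\) approaches the structure’s natural frequency \(\omega_0\) when the damping ratio \(\zeta\) is small. If ground testing underestimated how close the in-flight forcing came to \(\omega_0\), the actual response would exceed predictions, as happened here.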

Earlier this month, SpaceX completed an extended-duration static fire of the next Starship upper stage to test hardware modifications at multiple engine thrust levels. According to SpaceX, findings from the static fire informed changes to the fuel feed lines to Starship’s Raptor engines, adjustments to propellant temperatures, and a new operating thrust for the next test flight.

“To address flammability potential in the attic section on Starship, additional vents and a new purge system utilizing gaseous nitrogen are being added to the current generation of ships to make the area more robust to propellant leakage,” SpaceX said. “Future upgrades to Starship will introduce the Raptor 3 engine, reducing the attic volume and eliminating the majority of joints that can leak into this volume.”

FAA officials were apparently satisfied with all of this. The agency’s commercial spaceflight division completed a “comprehensive safety review” and determined Starship can return to flight operations while the investigation into the Flight 7 failure remains open. This isn’t new. The FAA also used this safety determination to expedite SpaceX launch license approvals last year as officials investigated mishaps on Starship and Falcon 9 rocket flights.


Stephen Clark is a space reporter at Ars Technica, covering private space companies and the world’s space agencies. Stephen writes about the nexus of technology, science, policy, and business on and off the planet.



Mars’ polar ice cap is slowly pushing its north pole inward

The orbiters that carried the radar hardware, along with one or two others, have been orbiting long enough that any major changes in Mars’ gravity caused by ice accumulation or crustal displacement would have shown up in their orbital behavior. The orbital behavior they do see “indicates that the increase in the gravitational potential associated with long-term ice accumulation is higher than the decrease in gravitational potential from downward deflection.” They calculate that the deformation has to be less than 0.13 millimeters per year to be consistent with the gravitational signal.

Finally, the model had to have realistic conditions at the polar ice cap, with a density consistent with a mixture of ice and dust.

Out of those 84 models, only three were consistent with all of these constraints. All three had a very viscous Martian interior, consistent with a relatively cold interior. That’s not a surprise, given what we’ve already inferred about Mars’ history. But it also suggests that most of the radioactive elements that provide heat to the red planet are in the crust, rather than deeper in the interior. That’s something we might have been able to check, had InSight’s temperature measurement experiment deployed correctly. But as it is, we’ll have to wait until some unidentified future mission to get a picture of Mars’ heat dynamics.

In any case, the models also suggest that Mars’ polar ice cap is less than 10 million years old, consistent with the orbitally driven climate models.

In a lot of ways, the new information is an update of earlier attempts to model the Martian interior, given a few more years of orbital data and the information gained from the InSight lander, which also determined the thickness of Mars’ crust and size of its core. But it’s also a good way of understanding how scientists can take bits and pieces of information from seemingly unrelated sources and build them into a coherent picture.

Nature, 2025. DOI: 10.1038/s41586-024-08565-9  (About DOIs).



Texas official warns against “measles parties” as outbreak keeps growing

Cook, along with Lubbock’s director of public health, Katherine Wells, said they see no end in sight for the outbreak, which now spans nine counties in Texas, many of which have low vaccination rates. “This outbreak is going to continue to grow,” Wells said, declining to forecast how high the final case count could go after a reporter raised the possibility of several hundred.

So far, 116 of the 146 cases are under the age of 18, with 46 being between the ages of 0 and 4. Only five of the 146 were vaccinated with at least one dose of the Measles, Mumps, and Rubella (MMR) vaccine.

Messaging

On a more positive note, Wells reported that the outbreak has seemed to sway some vaccine-hesitant parents to get their children vaccinated. Just yesterday in Lubbock, over 50 children came into the city’s clinic for measles vaccines. Eleven of those children had vaccine exemptions, meaning their parents had previously gone through the state process to exempt their child from having to receive routine childhood vaccines to attend school. “Which is a really good sign; that means our message is getting out there,” Wells said.

So far in the outbreak, which erupted in late January, messaging about the disease and the importance of vaccination has exclusively come from state and local authorities. The Centers for Disease Control and Prevention only released a brief statement late Thursday, which was not sent through the agency’s press distribution list. It did, however, note that “vaccination remains the best defense against measles infection.”

During a cabinet meeting Wednesday, US Health Secretary and anti-vaccine advocate Robert F. Kennedy Jr. responded to a question about the outbreak, offering a variety of inaccurate information. Kennedy downplayed the outbreak, falsely claiming that “it’s not unusual.” But, this is an unusual year for measles in the US. As epidemiologist Katelyn Jetelina noted on Bluesky, the number of US measles cases this year has already surpassed the total case counts from eight of the previous 15 years. And it is only February.
