Reviews

2025 iPad Air hands-on: Why mess with a good thing?

There’s not much new in Apple’s latest refresh of the iPad Air, so there’s not much to say about it, but it’s worth taking a brief look regardless.

In almost every way, this is identical to the previous generation. There are only two differences to go over: the bump from the M2 chip to the slightly faster M3, and a redesign of the Magic Keyboard peripheral.

If you want more details about this tablet, refer to our M2 iPad Air review from last year. Everything we said then applies now.

From M2 to M3

The M3 chip has an 8-core CPU with four performance cores and four efficiency cores. On the GPU side, there are nine cores. There’s also a 16-core Neural Engine, which is what Apple calls its NPU.

We’ve seen the M3 in other devices before, and it performs comparably here in the iPad Air in Geekbench benchmarks. Those coming from the M1 or older A-series chips will see some big gains, but it’s a subtle step up over the M2 in last year’s iPad Air.

That will be a noticeable boost primarily for a handful of particularly demanding 3D games (the likes of Assassin’s Creed Mirage, Resident Evil Village, Infinity Nikki, and Genshin Impact) and some heavy-duty applications only a few people use, like CAD or video editing programs.

Most of the iPad Air’s target audience would never know the difference, though, and the main benefit here isn’t necessarily real-world performance. Rather, the upside of this upgrade is the addition of a few specific features, namely hardware-accelerated ray tracing and hardware-accelerated AV1 video codec support.

It isn’t new to this generation, but the M3 also supports Apple Intelligence, the much-ballyhooed suite of generative AI features Apple recently introduced. At this point, there aren’t many devices left in Apple’s lineup that don’t support Apple Intelligence (basically just the cheapest, entry-level iPad), and that’s good news for Apple, as it helps the company simplify its marketing messaging around the features.

iPhone 16e review: The most expensive cheap iPhone yet


The iPhone 16e rethinks—and prices up—the basic iPhone.

The iPhone 16e, with a notch and an Action Button. Credit: Samuel Axon

For a long time, the cheapest iPhones were basically just iPhones that were older than the current flagship, but last week’s release of the $600 iPhone 16e marks a big change in how Apple is approaching its lineup.

Rather than a repackaging of an old iPhone, the 16e is the latest main iPhone—that is, the iPhone 16—with a bunch of stuff stripped away.

There are several potential advantages to this change. In theory, it allows Apple to support its lower-end offerings for longer with software updates, and it gives entry-level buyers access to more current technologies and features. It also simplifies the marketplace of accessories and the like.

There’s bad news, too, though: Since it replaces the much cheaper iPhone SE in Apple’s lineup, the iPhone 16e significantly raises the financial barrier to entry for iOS (the SE started at $430).

We spent a few days trying out the 16e and found that it’s a good phone—it’s just too bad it’s a little more expensive than the entry-level iPhone should ideally be. In many ways, this phone solves more problems for Apple than it does for consumers. Let’s explore why.

A beastly processor for an entry-level phone

Like the 16, the 16e has Apple’s A18 chip, the most recent in the made-for-iPhone line of Apple-designed chips. There’s only one notable difference: This variation of the A18 has just four GPU cores instead of five. That will show up in benchmarks and in a handful of 3D games, but it shouldn’t make too much of a difference for most people.

It’s a significant step up over the A15 found in the final 2022 refresh of the iPhone SE, enabling a handful of new features like AAA games and Apple Intelligence.

The A18’s inclusion is good for both Apple and the consumer; Apple gets to establish a new, higher baseline of performance when developing new features for current and future handsets, and consumers likely get many more years of software updates than they’d get on the older chip.

The key example of a feature enabled by the A18 that Apple would probably like us all to talk about the most is Apple Intelligence, a suite of features utilizing generative AI to solve some user problems or enable new capabilities across iOS. By enabling these for the cheapest iPhone, Apple is making its messaging around Apple Intelligence a lot easier; it no longer needs to put effort into clarifying that you can use X feature with this new iPhone but not that one.

We’ve written a lot about Apple Intelligence already, but here’s the gist: There are some useful features here in theory, but Apple’s models are clearly a bit behind the cutting edge, and results for things like notifications summaries or writing tools are pretty mixed. It’s fun to generate original emojis, though!

The iPhone 16e can even use Visual Intelligence, which actually is handy sometimes. On my iPhone 16 Pro Max, I can point the rear camera at an object and press the camera button a certain way to get information about it.

I wouldn’t have expected the 16e to support this, but it does, via the Action Button (which was first introduced in the iPhone 15 Pro). This is a reprogrammable button that can perform a variety of functions, albeit just one at a time. Visual Intelligence is one of the options here, which is pretty cool, even though it’s not essential.

The screen is the biggest upgrade over the SE

Also like the 16, the 16e has a 6.1-inch display. The resolution’s a bit different, though; it’s 2,532 by 1,170 pixels instead of 2,556 by 1,179. It also has a notch instead of the Dynamic Island seen in the 16. All this makes the iPhone 16e’s display seem like a very close match to the one seen in 2022’s iPhone 14—in fact, it might literally be the same display.
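
If you’re curious how those slightly different resolutions shake out in pixel density, here’s a quick back-of-the-envelope sketch. It assumes the 6.1-inch diagonal applies exactly to both panels and ignores the rounded corners and the notch/Dynamic Island cutouts.

```python
import math

def ppi(width_px: int, height_px: int, diagonal_in: float) -> float:
    """Pixels per inch, given a panel's resolution and diagonal size."""
    return math.hypot(width_px, height_px) / diagonal_in

# Both panels are quoted at 6.1 inches diagonally.
print(f"iPhone 16e: ~{ppi(2532, 1170, 6.1):.0f} ppi")  # ~457 ppi
print(f"iPhone 16:  ~{ppi(2556, 1179, 6.1):.0f} ppi")  # ~461 ppi
```

Either way, the two densities land within about 1 percent of each other.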

I really missed the Dynamic Island while using the iPhone 16e—it’s one of my favorite new features added to the iPhone in recent years, as it consolidates what was previously a mess of notification schemes in iOS. Plus, it’s nice to see things like Uber and DoorDash ETAs and sports scores at a glance.

The main problem with losing the Dynamic Island is that we’re back to the old, minor mess of notification approaches, and I guess Apple has to keep supporting the old ways for a while yet. That genuinely surprises me; I would have thought Apple would want to unify notifications and activities around the Dynamic Island, just as the A18 lets it standardize other features.

This seems to indicate that the Dynamic Island is a fair bit more expensive to include than the good old camera notch flagship iPhones had been rocking since 2017’s iPhone X.

That compromise aside, the display on the iPhone 16e is ridiculously good for a phone at this price point, and it makes the old iPhone SE’s small LCD display look like it’s from another eon entirely by comparison. It gets brighter for both HDR content and sunny-day operation; the blacks are inky and deep, and the contrast and colors are outstanding.

It’s the best thing about the iPhone 16e, even if it isn’t quite as refined as the screens in Apple’s current flagships. Most people would never notice the difference between the screens in the 16e and the iPhone 16 Pro, though.

There is one other screen feature I miss from the higher-end iPhones you can buy in 2025: Those phones can drop the display all the way down to 1 nit, which is awesome for using the phone late at night in bed without disturbing a sleeping partner. Like earlier iPhones, the 16e can only get so dark.

It gets quite bright, though; Apple claims it typically reaches 800 nits of peak brightness but can stretch to 1,200 nits when viewing certain HDR photos and videos. That means it gets about twice as bright as the SE did.

Connectivity is key

The iPhone 16e supports the core suite of connectivity options found in modern phones. There’s Wi-Fi 6, Bluetooth 5.3, and Apple’s usual limited implementation of NFC.

There are three new things of note here, though, and they’re good, neutral, and bad, respectively.

USB-C

Let’s start with the good. We’ve moved from Apple’s proprietary Lightning port found in older iPhones (including the final iPhone SE) toward USB-C, now a near-universal standard on mobile devices. It allows faster charging and more standardized charging cable support.

Sure, it’s a bummer to start over if you’ve spent years buying Lightning accessories, but it’s absolutely worth it in the long run. This change means that the entire iPhone line has now abandoned Lightning, so all iPhones and Android phones will have the same main port for years to come. Finally!

The finality of this shift solves a few problems for Apple: It greatly simplifies the accessory landscape and allows the company to move toward producing a smaller range of cables.

Satellite connectivity

Recent flagship iPhones have gradually added a small suite of features that utilize satellite connectivity to make life a little easier and safer.

Among those are crash detection and roadside assistance. The former uses the phone’s sensors to detect whether you’ve been in a car crash and contacts help, while roadside assistance lets you text for help when you’re outside of cellular reception in the US and UK.

There are also Emergency SOS and Find My via satellite, which let you communicate with emergency responders from remote places and allow you to be found.

Along with a more general feature that allows Messages via satellite, these features can greatly expand your options if you’re somewhere remote, though they’re not as easy to use and responsive as using the regular cellular network.

Where’s MagSafe?

I don’t expect the 16e to have all the same features as the 16, which is $200 more expensive. In fact, it has more modern features than I think most of its target audience needs (more on that later). That said, there’s one notable omission that makes no sense to me at all.

The 16e does not support MagSafe, a standard for connecting accessories to the back of the device magnetically, often while allowing wireless charging via the Qi standard.

Qi wireless charging is still supported, albeit at a slow 7.5 W, but there are no magnets, meaning a lot of existing MagSafe accessories are a lot less useful with this phone, if they’re usable at all. To be fair, the SE didn’t support MagSafe either, but every new iPhone design since the iPhone 12 way back in 2020 has—and not just the premium flagships.

It’s not like the MagSafe accessory ecosystem was some bottomless well of innovation, but that magnetic alignment is handier than you might think, whether we’re talking about making sure the phone locks into place for the fastest wireless charging speeds or hanging the phone on a car dashboard to use GPS on the go.

It’s one of those things where folks coming from much older iPhones may not care because they don’t know what they’re missing, but it could be annoying in households with multiple generations of iPhones, and it just doesn’t make any sense.

Most of Apple’s choices in the 16e seem to serve the goal of unifying the whole iPhone lineup to simplify the message for consumers and make things easier for Apple to manage efficiently, but the dropping of MagSafe is bizarre.

It almost makes me think that Apple might plan to drop MagSafe from future flagship iPhones, too, and go toward something new, just because that’s the only explanation I can think of. That otherwise seems unlikely to me right now, but I guess we’ll see.

The first Apple-designed cellular modem

For years, rumors have suggested that Apple planned to drop third-party modems from companies like Qualcomm. As far back as 2018, Apple was poaching Qualcomm employees in an adjacent office in San Diego. In 2020, Apple SVP Johny Srouji announced to employees that work had begun.

It sounds like development has been challenging, but the first Apple-designed modem has arrived here in the 16e of all places. Dubbed the C1, it’s… perfectly adequate. It’s about as fast or maybe just a smidge slower than what you get in the flagship phones, but almost no user would notice any difference at all.

That’s really a win for Apple, which has struggled with a tumultuous relationship with its partners here for years and which has long run into space problems in its phones in part because the third-party modems weren’t compact enough.

This change may not matter much for the consumer beyond freeing up just a tiny bit of space for a slightly larger battery, but it’s another step in Apple’s long journey to ultimately and fully control every component in the iPhone that it possibly can.

Bigger is better for batteries

There is one area where the 16e is actually superior to the 16, much less the SE: battery life. The 16e reportedly has a 3,961 mAh battery, the largest in any of the many iPhones with roughly this size screen. Apple says it offers up to 26 hours of video playback, which is the kind of number you expect to see in a much larger flagship phone.

I charged this phone three times in just under a week of use, though I wasn’t heavily hitting 5G networks, playing many 3D games, or cranking the brightness way up all the time.

That’s a bit of a bump over the 16, but it’s a massive leap over the SE, which promised a measly 15 hours of video playback. Every single phone in Apple’s lineup now has excellent battery life by any standard.
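
To put that leap in rough numbers, here’s a minimal sketch using only the two video-playback claims quoted above (26 hours for the 16e, 15 for the old SE).

```python
# Apple's claimed video-playback figures, in hours, as quoted above.
iphone_16e_hours = 26
iphone_se_hours = 15

gain = iphone_16e_hours / iphone_se_hours - 1
print(f"The 16e's rated playback time is about {gain:.0%} longer than the SE's.")
# -> about 73% longer
```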

Quality over quantity in the camera system

The 16e’s camera system leaves the SE in the dust, but it’s no match for the robust system found in the iPhone 16. Regardless, it’s way better than you’d typically expect from a phone at this price.

Like the 16, the 16e has a 48 MP “Fusion” wide-angle rear camera. It typically doesn’t take photos at 48 MP (though you can do that while compromising color detail). Rather, 24 MP is the target. The 48 MP camera enables 2x zoom that is nearly visually indistinguishable from optical zoom.
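
The arithmetic behind that 2x zoom is simple. The sketch below is a simplification: it assumes a straight center crop with no upscaling and ignores the computational processing Apple layers on top.

```python
full_sensor_mp = 48  # the "Fusion" wide-angle sensor
zoom_factor = 2      # 2x zoom keeps the central half of the width and height

# Cropping to half the width and half the height keeps a quarter of the pixels.
cropped_mp = full_sensor_mp / zoom_factor**2
print(f"A {zoom_factor}x crop still captures ~{cropped_mp:.0f} MP")  # ~12 MP
```

That ~12 MP crop is in the same ballpark as the 12 MP photos iPhones shot for years, which is why the result looks so close to optical zoom.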

Based on both the specs and photo comparisons, the main camera sensor in the 16e appears to be exactly the same as the one found in the 16. We’re just missing the ultra-wide lens (which allows more zoomed-out photos, ideal for groups of people in small spaces, for example) and several extra features like advanced image stabilization, the newest Photographic Styles, and macro photography.

The iPhone 16e takes excellent photos in bright conditions. Samuel Axon

That’s a lot of missing features, sure, but it’s wild how good this camera is for this price point. Even something like the Pixel 8a can’t touch it (though to be fair, the Pixel 8a is $100 cheaper).

Video capture is a similar situation: The 16e shoots at the same resolutions and framerates as the 16, but it lacks a few specialized features like Cinematic and Action modes. There’s also a front-facing camera with the TrueDepth sensor for Face ID in that notch, and it has comparable specs to the front-facing cameras we’ve seen in a couple of years of iPhones at this point.

If you were buying a phone for the cameras, this wouldn’t be the one for you. It’s absolutely worth paying another $200 for the iPhone 16 (or even just $100 for the iPhone 15 for the ultra-wide lens for 0.5x zoom; the 15 is still available in the Apple Store) if that’s your priority.

The iPhone 16’s macro mode isn’t available here, so ultra-close-ups look fuzzy. Samuel Axon

But for the 16e’s target consumer (mostly folks with the iPhone 11 or older or an iPhone SE, who just want the cheapest functional iPhone they can get) it’s almost overkill. I’m not complaining, though it’s a contributing factor to the phone’s cost compared to entry-level Android phones and Apple’s old iPhone SE.

RIP small phones, once and for all

In one fell swoop, the iPhone 16e’s replacement of the iPhone SE eliminates a whole range of legacy technologies that have held on at the lower end of the iPhone lineup for years. Gone are Touch ID, the home button, LCD displays, and Lightning ports—they’re replaced by Face ID, swipe gestures, OLED, and USB-C.

Newer iPhones have had most of those things for quite some time. The latest feature was USB-C, which came in 2023’s iPhone 15. The removal of the SE from the lineup catches the bottom end of the iPhone up with the top in these respects.

That said, the SE had maintained one positive differentiator, too: It was small enough to be used one-handed by almost anyone. With the end of the SE and the release of the 16e, the one-handed iPhone is well and truly dead. Of course, most people have been clear they want big screens and batteries above almost all else, so the writing had been on the wall for a while for smaller phones.

The death of the iPhone SE ushers in a new era for the iPhone with bigger and better features—but also bigger price tags.

A more expensive cheap phone

Assessing the iPhone 16e is a challenge. It’s objectively a good phone—good enough for the vast majority of people. It has a nearly top-tier screen (though it clocks in at 60Hz, while some Android phones close to this price point manage 120Hz), a camera system that delivers on quality even if it lacks special features seen in flagships, strong connectivity, and performance far above what you’d expect at this price.

If you don’t care about extra camera features or nice-to-haves like MagSafe or the Dynamic Island, it’s easy to recommend saving a couple hundred bucks compared to the iPhone 16.

My chief criticism of the 16e has less to do with the phone itself than with Apple’s overall lineup. The iPhone SE retailed for $430, nearly half the price of the 16. By making the 16e the new bottom of the lineup, Apple has significantly raised the financial barrier to entry for iOS.

Now, it’s worth mentioning that a pretty big swath of the target market for the 16e will buy it subsidized through a carrier, so they might not pay that much up front. I always recommend buying a phone directly if you can, though, as carrier subsidization deals are usually worse for the consumer.

The 16e’s price might push more people to go for the subsidy. Plus, it’s just more phone than some people need. For example, I love a high-quality OLED display for watching movies, but I don’t think the typical iPhone SE customer was ever going to care about that.

That’s why I believe the iPhone 16e solves more problems for Apple than it does for the consumer. In multiple ways, it allows Apple to streamline production, software support, and marketing messaging. It also drives up the average price per unit across the whole iPhone line and will probably encourage some people who would have spent $430 to spend $600 instead, possibly improving revenue. All told, it’s a no-brainer for Apple.

It’s just a mixed bag for the sort of no-frills consumer who wants a minimum viable phone and who for one reason or another didn’t want to go the Android route. The iPhone 16e is definitely a good phone—I just wish there were more options for that consumer.

The good

  • Dramatically improved display compared to the iPhone SE
  • Likely stronger long-term software support than most previous entry-level iPhones
  • Good battery life and incredibly good performance for this price point
  • A high-quality camera, especially for the price

The bad

  • No ultra-wide camera
  • No MagSafe
  • No Dynamic Island

The ugly

  • Significantly raises the entry price point for buying an iPhone

Photo of Samuel Axon

Samuel Axon is a senior editor at Ars Technica. He covers Apple, software development, gaming, AI, entertainment, and mixed reality. He has been writing about gaming and technology for nearly two decades at Engadget, PC World, Mashable, Vice, Polygon, Wired, and others. He previously ran a marketing and PR agency in the gaming industry, led editorial for the TV network CBS, and worked on social media marketing strategy for Samsung Mobile at the creative agency SPCSHP. He also is an independent software and game developer for iOS, Windows, and other platforms, and he is a graduate of DePaul University, where he studied interactive media and software development.

AMD Radeon RX 9070 and 9070 XT review: RDNA 4 fixes a lot of AMD’s problems


For $549 and $599, AMD comes close to knocking out Nvidia’s GeForce RTX 5070.

AMD’s Radeon RX 9070 and 9070 XT are its first cards based on the RDNA 4 GPU architecture. Credit: Andrew Cunningham

AMD is a company that knows a thing or two about capitalizing on a competitor’s weaknesses. The company got through its early-2010s nadir partially because its Ryzen CPUs struck just as Intel’s current manufacturing woes began to set in, first with somewhat-worse CPUs that were great value for the money and later with CPUs that were better than anything Intel could offer.

Nvidia’s untrammeled dominance of the consumer graphics card market should also be an opportunity for AMD. Nvidia’s GeForce RTX 50-series graphics cards have given buyers very little to get excited about, with an unreachably expensive high-end 5090 refresh and modest-at-best gains from 5080 and 5070-series cards that are also pretty expensive by historical standards, when you can buy them at all. Tech YouTubers—both the people making the videos and the people leaving comments underneath them—have been almost uniformly unkind to the 50 series, hinting at consumer frustrations and pent-up demand for competitive products from other companies.

Enter AMD’s Radeon RX 9070 XT and RX 9070 graphics cards. These are aimed right at the middle of the current GPU market at the intersection of high sales volume and decent profit margins. They promise good 1440p and entry-level 4K gaming performance and improved power efficiency compared to previous-generation cards, with fixes for long-time shortcomings (ray-tracing performance, video encoding, and upscaling quality) that should, in theory, make them more tempting for people looking to ditch Nvidia.

RX 9070 and 9070 XT specs and speeds

| | RX 9070 XT | RX 9070 | RX 7900 XTX | RX 7900 XT | RX 7900 GRE | RX 7800 XT |
| --- | --- | --- | --- | --- | --- | --- |
| Compute units (Stream processors) | 64 RDNA4 (4,096) | 56 RDNA4 (3,584) | 96 RDNA3 (6,144) | 84 RDNA3 (5,376) | 80 RDNA3 (5,120) | 60 RDNA3 (3,840) |
| Boost Clock | 2,970 MHz | 2,520 MHz | 2,498 MHz | 2,400 MHz | 2,245 MHz | 2,430 MHz |
| Memory Bus Width | 256-bit | 256-bit | 384-bit | 320-bit | 256-bit | 256-bit |
| Memory Bandwidth | 650GB/s | 650GB/s | 960GB/s | 800GB/s | 576GB/s | 624GB/s |
| Memory size | 16GB GDDR6 | 16GB GDDR6 | 24GB GDDR6 | 20GB GDDR6 | 16GB GDDR6 | 16GB GDDR6 |
| Total board power (TBP) | 304 W | 220 W | 355 W | 315 W | 260 W | 263 W |

AMD’s high-level performance promise for the RDNA 4 architecture revolves around big increases in performance per compute unit (CU). An RDNA 4 CU, AMD says, is nearly twice as fast as an RDNA 2 CU in rasterized performance (that is, rendering without ray-tracing effects enabled) and nearly 2.5 times as fast as RDNA 2 in games with ray-tracing effects enabled. Performance for at least some machine learning workloads also goes way up—twice as fast as RDNA 3 and four times as fast as RDNA 2.

We’ll see this in more detail when we start comparing performance, but AMD seems to have accomplished this goal. Despite having 64 or 56 compute units (for the 9070 XT and 9070, respectively), the cards’ performance often competes with AMD’s last-generation flagships, the RX 7900 XTX and 7900 XT. Those cards came with 96 and 84 compute units, respectively. The 9070 cards are specced a lot more like last generation’s RX 7800 XT—including the 16GB of GDDR6 on a 256-bit memory bus, as AMD still isn’t using GDDR6X or GDDR7—but they’re much faster than the 7800 XT was.
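
One rough way to read those compute-unit counts: if a 64-CU RDNA 4 card really does land between the 84-CU and 96-CU RDNA 3 flagships, the implied per-CU gain is large. The sketch below assumes performance scales linearly with CU count, which is an oversimplification, and the 9070 XT’s much higher boost clock accounts for part of the gap.

```python
# Compute-unit counts from the spec table above.
cus = {"RX 9070 XT": 64, "RX 7900 XT": 84, "RX 7900 XTX": 96}

# If the 9070 XT roughly matches these RDNA 3 cards, the implied per-CU
# throughput gain over RDNA 3 is simply the ratio of CU counts.
for rdna3_card in ("RX 7900 XT", "RX 7900 XTX"):
    implied_gain = cus[rdna3_card] / cus["RX 9070 XT"]
    print(f"Matching the {rdna3_card} implies ~{implied_gain:.2f}x per CU")
# ~1.31x and ~1.50x; the 9070 XT's higher clocks (2,970 vs. 2,400 MHz)
# supply some of that, with the rest coming from the architecture.
```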

AMD has dramatically increased the performance-per-compute unit for RDNA 4. AMD

The 9070 series also uses a new 4 nm manufacturing process from TSMC, an upgrade from the 7000 series’ 5 nm process (and the 6 nm process used for the separate memory controller dies in higher-end RX 7000-series models that used chiplets). AMD’s GPUs are normally a bit less efficient than Nvidia’s, but the architectural improvements and the new manufacturing process allow AMD to do some important catch-up.

Both of the 9070 models we tested were ASRock Steel Legend models, and the 9070 and 9070 XT had identical designs—we’ll probably see a lot of this from AMD’s partners since the GPU dies and the 16GB RAM allotments are the same for both models. Both use two 8-pin power connectors; AMD says partners are free to use the 12-pin power connector if they want, but given Nvidia’s ongoing issues with it, most cards will likely stick with the reliable 8-pin connectors.

AMD doesn’t appear to be making and selling reference designs for the 9070 series the way it did for some RX 7000 and 6000-series GPUs or the way Nvidia does with its Founders Edition cards. From what we’ve seen, 2 or 2.5-slot, triple-fan designs will be the norm, the way they are for most midrange GPUs these days.

Testbed notes

We used the same GPU testbed for the Radeon RX 9070 series as we have for our GeForce RTX 50-series reviews.

An AMD Ryzen 7 9800X3D ensures that our graphics cards will be CPU-limited as little as possible. An ample 1050 W power supply, 32GB of DDR5-6000, and an AMD X670E motherboard with the latest BIOS installed round out the hardware. On the software side, we use an up-to-date installation of Windows 11 24H2 and recent GPU drivers for older cards, ensuring that our tests reflect whatever optimizations Microsoft, AMD, Nvidia, and game developers have made since the last generation of GPUs launched.

We have numbers for all of Nvidia’s RTX 50-series GPUs so far, plus most of the 40-series cards, most of AMD’s RX 7000-series cards, and a handful of older GPUs from the RTX 30-series and RX 6000 series. We’ll focus on comparing the 9070 XT and 9070 to other 1440p-to-4K graphics cards since those are the resolutions AMD is aiming at.

Performance

At $549 and $599, the 9070 series is priced to match Nvidia’s $549 RTX 5070 and undercut the $749 RTX 5070 Ti. So we’ll focus on comparing the 9070 series to those cards, plus the top tier of GPUs from the outgoing RX 7000-series.

Some 4K rasterized benchmarks.

Starting at the top with rasterized benchmarks with no ray-tracing effects, the 9070 XT does a good job of standing up to Nvidia’s RTX 5070 Ti, coming within a few frames per second of its performance in all the games we tested (and scoring very similarly in the 3DMark Time Spy Extreme benchmark).

Both cards are considerably faster than the RTX 5070—between 15 and 28 percent for the 9070 XT and between 5 and 13 percent for the regular 9070 (our 5070 scored weirdly low in Horizon Zero Dawn Remastered, so we’d treat those numbers as outliers for now). Both 9070 cards also stack up well next to the RX 7000 series here—the 9070 can usually just about match the performance of the 7900 XT, and the 9070 XT usually beats it by a little. Both cards thoroughly outrun the old RX 7900 GRE, which was AMD’s $549 GPU offering just a year ago.

The 7900 XT does have 20GB of RAM instead of 16GB, which might help its performance in some edge cases. But 16GB is still perfectly generous for a 1440p-to-4K graphics card—the 5070 only offers 12GB, which could end up limiting its performance in some games as RAM requirements continue to rise.

On ray-tracing improvements

Nvidia got a jump on AMD when it introduced hardware-accelerated ray-tracing in the RTX 20-series in 2018. And while these effects were only supported in a few games at the time, many modern games offer at least some kind of ray-traced lighting effects.

AMD caught up a little when it began shipping its own ray-tracing support in the RDNA2 architecture in late 2020, but the issue since then has always been that AMD cards have taken a larger performance hit than GeForce GPUs when these effects are turned on. RDNA3 promised improvements, but our tests still generally showed the same deficit as before.

So we’re looking for two things with RDNA4’s ray-tracing performance. First, we want the numbers to be higher than they were for comparably priced RX 7000-series GPUs, the same thing we look for in non-ray-traced (or rasterized) rendering performance. Second, we want the size of the performance hit to go down. To pick an example: the RX 7900 GRE could compete with Nvidia’s RTX 4070 Ti Super in games without ray tracing, but it was closer to a non-Super RTX 4070 in ray-traced games. It has helped keep AMD’s cards from being across-the-board competitive with Nvidia’s—is that any different now?

Benchmarks for games with ray-tracing effects enabled. Both AMD cards generally keep pace with the 5070 in these tests thanks to RDNA 4’s improvements.

The picture our tests paint is mixed but tentatively positive. The 9070 series and RDNA4 post solid improvements in the Cyberpunk 2077 benchmarks, substantially closing the performance gap with Nvidia. In games where AMD’s cards performed well enough before—here represented by Returnal—performance goes up, but roughly proportionately with rasterized performance. And both 9070 cards still punch below their weight in Black Myth: Wukong, falling substantially behind the 5070 under the punishing Cinematic graphics preset.

So the benefits you see, as with any GPU update, will depend a bit on the game you’re playing. There’s also a possibility that game optimizations and driver updates made with RDNA4 in mind could boost performance further. We can’t say that AMD has caught all the way up to Nvidia here—the 9070 and 9070 XT are both closer to the GeForce RTX 5070 than to the 5070 Ti, despite staying closer to the 5070 Ti in rasterized tests—but there is real, measurable improvement here, which is what we were looking for.

Power usage

The 9070 series’ performance increases are particularly impressive when you look at the power-consumption numbers. The 9070 comes close to the 7900 XT’s performance but uses 90 W less power under load. It beats the RTX 5070 most of the time but uses around 30 W less power.

The 9070 XT is a little less impressive on this front—AMD has set clock speeds pretty high, and this can increase power use disproportionately. The 9070 XT is usually 10 or 15 percent faster than the 9070 but uses 38 percent more power. The XT’s power consumption is similar to the RTX 5070 Ti’s (a GPU it often matches) and the 7900 XT’s (a GPU it always beats), so it’s not too egregious, but it’s not as standout as the 9070’s.
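
Putting that in rough perf-per-watt terms, here’s a sketch using only the board-power figures from the spec table and the 10 to 15 percent speed gap cited above; it assumes both numbers hold across the board.

```python
xt_tbp, base_tbp = 304, 220        # total board power, watts, from the spec table
power_ratio = xt_tbp / base_tbp    # ~1.38, i.e. ~38% more power

for speedup in (1.10, 1.15):       # 9070 XT vs. 9070, per the testing above
    efficiency = speedup / power_ratio
    print(f"{speedup:.2f}x the speed at {power_ratio:.2f}x the power "
          f"-> ~{efficiency:.2f}x the perf-per-watt")
# ~0.80x to ~0.83x: the XT gives up roughly a fifth of the 9070's efficiency
```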

AMD gives 9070 owners a couple of new toggles for power limits, though, which we’ll talk about in the next section.

Experimenting with “Total Board Power”

We don’t normally dabble much with overclocking when we review CPUs or GPUs—we’re happy to leave that to folks at other outlets. But when we review CPUs, we do usually test them with multiple power limits in place. Playing with power limits is easier (and occasionally safer) than actually overclocking, and it often comes with large gains to either performance (a chip that performs much better when given more power to work with) or efficiency (a chip that can run at nearly full speed without using as much power).

Initially, I experimented with the RX 9070’s power limits by accident. AMD sent me one version of the 9070 but exchanged it because of a minor problem the OEM identified with some units early in the production run. I had, of course, already run most of our tests on it, but that’s the way these things go sometimes.

By bumping the regular RX 9070’s TBP up just a bit, you can nudge it closer to 9070 XT-level performance.

The replacement RX 9070 card, an ASRock Steel Legend model, was performing significantly better in our tests, sometimes nearly closing the gap between the 9070 and the XT. It wasn’t until I tested power consumption that I discovered the explanation—by default, it was using a 245 W power limit rather than the AMD-defined 220 W limit. Usually, these kinds of factory tweaks don’t make much of a difference, but for the 9070, this power bump gave it a nice performance boost while still keeping it close to the 250 W power limit of the GeForce RTX 5070.

The 90-series cards we tested both add some power presets to AMD’s Adrenalin app in the Performance tab under Tuning. These replace and/or complement some of the automated overclocking and undervolting buttons that exist here for older Radeon cards. Clicking Favor Efficiency or Favor Performance can ratchet the card’s Total Board Power (TBP) up or down, limiting performance so that the card runs cooler and quieter or allowing the card to consume more power so it can run a bit faster.

The 9070 cards get slightly different performance tuning options in the Adrenalin software. These buttons mostly change the card’s Total Board Power (TBP), making it simple to either improve efficiency or boost performance a bit. Credit: Andrew Cunningham

For this particular ASRock 9070 card, the default TBP is set to 245 W. Selecting “Favor Efficiency” sets it to the default 220 W. You can double-check these values using an app like HWInfo, which displays both the current TBP and the maximum TBP in its Sensors Status window. Clicking the Custom button in the Adrenalin software gives you access to a Power Tuning slider, which for our card allowed us to ratchet the TBP up by up to 10 percent or down by as much as 30 percent.
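
For the record, here’s what those slider limits work out to on this particular card. This is just arithmetic on the numbers above, and it assumes the percentages apply to the card’s 245 W factory default rather than AMD’s 220 W reference limit.

```python
default_tbp = 245                 # this ASRock card's factory default, watts
reference_tbp = 220               # AMD's stock limit ("Favor Efficiency")

max_tbp = default_tbp * 1.10      # +10% via the Power Tuning slider
min_tbp = default_tbp * 0.70      # -30% via the Power Tuning slider
print(f"Adjustable range: ~{min_tbp:.0f} W to ~{max_tbp:.0f} W")
# ~172 W to ~270 W, a band that brackets both AMD's 220 W reference limit
# and the GeForce RTX 5070's 250 W power limit
```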

This is all the firsthand testing we did with the power limits of the 9070 series, though I would assume that adding a bit more power also adds more overclocking headroom (bumping up the power limits is common for GPU overclockers no matter who makes your card). AMD says that some of its partners will ship 9070 XT models set to a roughly 340 W power limit out of the box but acknowledges that “you start seeing diminishing returns as you approach the top of that [power efficiency] curve.”

But it’s worth noting that the driver has another automated set-it-and-forget-it power setting you can easily use to find your preferred balance of performance and power efficiency.

A quick look at FSR4 performance

There’s a toggle in the driver for enabling FSR 4 in FSR 3.1-supporting games. Credit: Andrew Cunningham

One of AMD’s headlining improvements to the RX 90-series is the introduction of FSR 4, a new version of its FidelityFX Super Resolution upscaling algorithm. Like Nvidia’s DLSS and Intel’s XeSS, FSR 4 can take advantage of RDNA 4’s machine learning processing power to do hardware-backed upscaling instead of taking a hardware-agnostic approach as the older FSR versions did. AMD says this will improve upscaling quality, but it also means FSR 4 will only work on RDNA 4 GPUs.

The good news is that FSR 3.1 and FSR 4 are forward- and backward-compatible. Games that have already added FSR 3.1 support can automatically take advantage of FSR 4, and games that support FSR 4 on the 90-series can just run FSR 3.1 on older and non-AMD GPUs.

FSR 4 comes with a small performance hit compared to FSR 3.1 at the same settings, but better overall quality can let you drop to a faster preset like Balanced or Performance and end up with more frames-per-second overall. Credit: Andrew Cunningham

The only game in our current test suite to be compatible with FSR 4 is Horizon Zero Dawn Remastered, and we tested its performance using both FSR 3.1 and FSR 4. In general, we found that FSR 4 improved visual quality at the cost of just a few frames per second when run at the same settings—not unlike using Nvidia’s recently released “transformer model” for DLSS upscaling.

Many games will let you choose which version of FSR you want to use. But for FSR 3.1 games that don’t have a built-in FSR 4 option, there’s a toggle in AMD’s Adrenalin driver you can hit to switch to the better upscaling algorithm.

Even if they come with a performance hit, new upscaling algorithms can still improve performance by making the lower-resolution presets look better. We run all of our testing in “Quality” mode, which generally renders at two-thirds of native resolution and scales up. But if FSR 4 running in Balanced or Performance mode looks the same to your eyes as FSR 3.1 running in Quality mode, you can still end up with a net performance improvement in the end.
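
To make that concrete, here’s roughly what each preset renders internally at 4K before upscaling. The two-thirds figure for Quality mode comes from the testing notes above; the Balanced and Performance ratios are the per-axis scale factors FSR has used historically and are an assumption here rather than anything AMD has confirmed for FSR 4 specifically.

```python
# Per-axis internal render scale for common FSR presets.
presets = {
    "Quality": 1 / 1.5,      # two-thirds of native, as noted above
    "Balanced": 1 / 1.7,     # historical FSR ratio (assumed)
    "Performance": 1 / 2.0,  # historical FSR ratio (assumed)
}

native_w, native_h = 3840, 2160  # 4K output
for name, scale in presets.items():
    w, h = round(native_w * scale), round(native_h * scale)
    print(f"{name:<12} renders at ~{w}x{h}, then upscales to {native_w}x{native_h}")
# Quality      ~2560x1440
# Balanced     ~2259x1271
# Performance  ~1920x1080
```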

RX 9070 or 9070 XT?

Just $50 separates the advertised price of the 9070 from that of the 9070 XT, something both Nvidia and AMD have done in the past that I find a bit annoying. If you have $549 to spend on a graphics card, you can almost certainly scrape together $599 for a graphics card. All else being equal, I’d tell most people trying to choose one of these to just spring for the 9070 XT.

That said, availability and retail pricing for these might be all over the place. If your choices are a regular RX 9070 or nothing, or an RX 9070 at $549 and an RX 9070 XT at any price higher than $599, I would just grab a 9070 and not sweat it too much. The two cards aren’t that far apart in performance, especially if you bump the 9070’s TBP up a little bit, and games that are playable on one will be playable at similar settings on the other.

Pretty close to great

If you’re building a 1440p or 4K gaming box, the 9070 series might be the ones to beat right now. Credit: Andrew Cunningham

We’ve got plenty of objective data in here, so I don’t mind saying that I came into this review kind of wanting to like the 9070 and 9070 XT. Nvidia’s 50-series cards have mostly upheld the status quo, and for the last couple of years, the status quo has been sustained high prices and very modest generational upgrades. And who doesn’t like an underdog story?

I think our test results mostly justify my priors. The RX 9070 and 9070 XT are very competitive graphics cards, helped along by a particularly mediocre RTX 5070 refresh from Nvidia. In non-ray-traced games, both cards wipe the floor with the 5070 and come close to competing with the $749 RTX 5070 Ti. In games and synthetic benchmarks with ray-tracing effects on, both cards can usually match or slightly beat the similarly priced 5070, partially (if not entirely) addressing AMD’s longstanding performance deficit here. Neither card comes close to the 5070 Ti in these games, but they’re also not priced like a 5070 Ti.

Just as impressively, the Radeon cards compete with the GeForce cards while consuming similar amounts of power. At stock settings, the RX 9070 uses roughly the same amount of power under load as a 4070 Super but with better performance. The 9070 XT uses about as much power as a 5070 Ti, with similar performance before you turn ray-tracing on. Power efficiency was a small but consistent drawback for the RX 7000 series compared to GeForce cards, and the 9070 cards mostly erase that disadvantage. AMD is also less stingy with the RAM, giving you 16GB for the price Nvidia charges for 12GB.

Some of the old caveats still apply. Radeons take a bigger performance hit, proportionally, than GeForce cards when ray-tracing effects are turned on. DLSS already looks pretty good and is widely supported, while FSR 3.1/FSR 4 adoption is still relatively low. Nvidia has a nearly monopolistic grip on the dedicated GPU market, which means many apps, AI workloads, and games support its GPUs best/first/exclusively. AMD is always playing catch-up to Nvidia in some respect, and Nvidia keeps progressing quickly enough that it feels like AMD never quite has the opportunity to close the gap.

AMD also doesn’t have an answer for DLSS Multi-Frame Generation. The benefits of that technology are fairly narrow, and you already get most of those benefits with single-frame generation. But it’s still a thing Nvidia does that AMD doesn’t.

Overall, the RX 9070 cards are both awfully tempting competitors to the GeForce RTX 5070—and occasionally even the 5070 Ti. They’re great at 1440p and decent at 4K. Sure, I’d like to see them priced another $50 or $100 cheaper to well and truly undercut the 5070 and bring 1440p-to-4K performance to a sub-$500 graphics card. It would be nice to see AMD undercut Nvidia’s GPUs as ruthlessly as it undercut Intel’s CPUs nearly a decade ago. But these RDNA4 GPUs have way fewer downsides than previous-generation cards, and they come at a moment of relative weakness for Nvidia. We’ll see if the sales follow.

The good

  • Great 1440p performance and solid 4K performance
  • 16GB of RAM
  • Decisively beats Nvidia’s RTX 5070, including in most ray-traced games
  • RX 9070 XT is competitive with RTX 5070 Ti in non-ray-traced games for less money
  • Both cards match or beat the RX 7900 XT, AMD’s second-fastest card from the last generation
  • Decent power efficiency for the 9070 XT and great power efficiency for the 9070
  • Automated options for tuning overall power use to prioritize either efficiency or performance
  • Reliable 8-pin power connectors available in many cards

The bad

  • Nvidia’s ray-tracing performance is still usually better
  • At $549 and $599, pricing matches but doesn’t undercut the RTX 5070
  • FSR 4 isn’t as widely supported as DLSS and may not be for a while

The ugly

  • Playing the “can you actually buy these for AMD’s advertised prices” game

Photo of Andrew Cunningham

Andrew is a Senior Technology Reporter at Ars Technica, with a focus on consumer tech including computer hardware and in-depth reviews of operating systems like Windows and macOS. Andrew lives in Philadelphia and co-hosts a weekly book podcast called Overdue.

Hands-on: This 3.5-inch smart display makes my digital calendars more digestible

My preferred methods of organizing my schedule could be considered dated, so when I got a chance to try out a gadget meant to streamline my various digital calendars, I took it.

While I do use digital calendars and to-do lists, my go-to method for organizing my day’s tasks, goals, and upcoming events is pen and paper. I use paper calendars in agendas for a visual layout of events, including those as far away as next month. They give me a sense of control, as I’m able to highlight, circle, draw arrows, underline, erase, and so on. I also write more to-do lists than might be considered efficient (as evidenced by “make to-do list” being a frequent line on my to-do lists).

But there are many benefits to using tech for staying organized, too. With digital options, I can easily check my availability on the go with my phone and get alerts to remind me of events.

But it’s hard to find a simple, stripped-down tech solution to put my work calendar, work goals, personal calendar, and personal to-do lists in one place while minimizing distraction. When checking what time I set aside to work out, for example, I typically don’t want to think about whether the event is recurring, who else knows about it, what “type” of event it is, or how many minutes before, during, and after the event I’ll get phone alerts about it. Those details are often valuable for creating a highly informative digital calendar, but they can also be distracting and result in information overload.

Enter a smart display called DeskBuddy. The 3.5-inch touchscreen device has essentially one capability: showing today’s events from your synced digital calendars. Blueberry Consultants, a custom software development firm headquartered in Birmingham, England, crowdfunded the desktop accessory via Kickstarter in December 2023 and currently sells it online, including on Etsy.

Nvidia GeForce RTX 5070 Ti review: An RTX 4080 for $749, at least in theory


may the odds be ever in your favor

It’s hard to review a product if you don’t know what it will actually cost!

The Asus Prime GeForce RTX 5070 Ti. Credit: Andrew Cunningham

Nvidia’s RTX 50-series makes its first foray below the $1,000 mark starting this week, with the $749 RTX 5070 Ti—at least in theory.

The third-fastest card in the Blackwell GPU lineup, the 5070 Ti is still far from “reasonably priced” by historical standards (the 3070 Ti was $599 at launch). But it’s also $50 cheaper and a fair bit faster than the outgoing 4070 Ti Super and the older 4070 Ti. These are steps in the right direction, if small ones.

We’ll talk more about its performance shortly, but at a high level, the 5070 Ti’s performance falls in the same general range as the 4080 Super and the original RTX 4080, a card that launched for $1,199 just over two years ago. And it’s probably your floor for consistently playable native 4K gaming for those of you out there who don’t want to rely on DLSS or 4K upscaling to hit that resolution (it’s also probably all the GPU that most people will need for high-FPS 1440p, if that’s more your speed).

But it’s a card I’m ambivalent about! It’s close to 90 percent as fast as a 5080 for 75 percent of the price, at least if you go by Nvidia’s minimum list prices, which for the 5090 and 5080 have been mostly fictional so far. If you can find it at that price—and that’s a big “if,” since every $749 model is already out of stock across the board at Newegg—and you’re desperate to upgrade or are building a brand-new 4K gaming PC, you could do worse. But I wouldn’t spend more than $749 on it, and it might be worth waiting to see what AMD’s first 90-series Radeon cards look like in a couple weeks before you jump in.
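
The value math behind that ambivalence is straightforward. The sketch below assumes the 5080’s $999 list price and the roughly-90-percent performance figure from our testing, and it only holds if you can actually buy both cards at list—which, so far, you mostly can’t.

```python
price_5070_ti, price_5080 = 749, 999   # list prices; street prices have run higher
relative_perf = 0.90                   # ~90% of a 5080's performance, per our tests

price_ratio = price_5070_ti / price_5080     # ~0.75
value_ratio = relative_perf / price_ratio    # ~1.20
print(f"{price_ratio:.0%} of the price for {relative_perf:.0%} of the performance "
      f"-> ~{value_ratio:.2f}x the perf-per-dollar of a 5080")
```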

Meet the GeForce RTX 5070 Ti

| | RTX 5080 | RTX 4080 Super | RTX 5070 Ti | RTX 4070 Ti Super | RTX 4070 Ti | RTX 5070 |
| --- | --- | --- | --- | --- | --- | --- |
| CUDA Cores | 10,752 | 10,240 | 8,960 | 8,448 | 7,680 | 6,144 |
| Boost Clock | 2,617 MHz | 2,550 MHz | 2,452 MHz | 2,610 MHz | 2,610 MHz | 2,512 MHz |
| Memory Bus Width | 256-bit | 256-bit | 256-bit | 256-bit | 192-bit | 192-bit |
| Memory Bandwidth | 960 GB/s | 736 GB/s | 896 GB/s | 672 GB/s | 504 GB/s | 672 GB/s |
| Memory size | 16GB GDDR7 | 16GB GDDR6X | 16GB GDDR7 | 16GB GDDR6X | 12GB GDDR6X | 12GB GDDR7 |
| TGP | 360 W | 320 W | 300 W | 285 W | 285 W | 250 W |

Nvidia isn’t making a Founders Edition version of the 5070 Ti, so this time around, our review unit is an Asus Prime GeForce RTX 5070 Ti provided by Asus and Nvidia. These third-party cards will deviate a little from the stock specs listed above, but factory overclocks tend to be quite mild, done mostly so the GPU manufacturer can slap a big “overclocked” badge somewhere on the box. We tested this Asus card with its BIOS switch set to “performance” mode, which elevates the boost clock by an entire 30 MHz; you don’t need to be a math whiz to guess that a 1.2 percent overclock is not going to change performance much.

Compared to the 4070 Ti Super, the 5070 Ti brings two things to the table: a roughly 6 percent increase in CUDA cores and a 33 percent increase in memory bandwidth, courtesy of the switch from GDDR6X to GDDR7. The original 4070 Ti had even fewer CUDA cores, but most importantly for its 4K performance included just 12GB of memory on a 192-bit bus.
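
Those two percentages fall straight out of the spec table above; here’s the quick check if you want to rerun it against other cards.

```python
# Figures from the spec table above.
cuda_cores = {"RTX 5070 Ti": 8_960, "RTX 4070 Ti Super": 8_448, "RTX 4070 Ti": 7_680}
bandwidth_gb_s = {"RTX 5070 Ti": 896, "RTX 4070 Ti Super": 672, "RTX 4070 Ti": 504}

baseline = "RTX 4070 Ti Super"
core_gain = cuda_cores["RTX 5070 Ti"] / cuda_cores[baseline] - 1
bw_gain = bandwidth_gb_s["RTX 5070 Ti"] / bandwidth_gb_s[baseline] - 1
print(f"CUDA cores: +{core_gain:.0%}")        # ~+6%
print(f"Memory bandwidth: +{bw_gain:.0%}")    # ~+33%
```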

The 5070 Ti is based on the same GB203 GPU silicon as the 5080 series, but with 1,792 CUDA cores disabled. But there are a lot of similarities between the two, including the 16GB bank of GDDR7 and the 256-bit memory bus. It looks nothing like the yawning gap between the RTX 5090 and the RTX 5080, and the two cards’ similar-ish specs meant they weren’t too far away from each other in our testing. The 5070 Ti’s 300 W power requirement is also a bit lower than the 5080’s 360 W, but it’s pretty close to the 4080 and 4080 Super’s 320 W; in practice, the 5070 Ti draws about as much as the 4080 cards do under load.

Asus’ design for its Prime RTX 5070 Ti is an inoffensive 2.5-slot, triple-fan card that should fit without a problem in most builds. Credit: Andrew Cunningham

As a Blackwell GPU, the 5070 Ti also supports Nvidia’s most-hyped addition to the 50-series: support for DLSS 4 and Multi-Frame Generation (MFG). We’ve already covered this in our 5090 and 5080 reviews, but the short version is that MFG works exactly like Frame Generation did in the 40-series, except that it can now insert up to three AI-generated frames in between natively rendered frames instead of just one.

Especially if you’re already running at a reasonably high frame rate, this can make things look a lot smoother on a high-refresh-rate monitor without introducing distracting lag or weird rendering errors. The feature is mainly controversial because Nvidia compares 50-series performance numbers with DLSS MFG enabled to older 40-series cards without it, making the 50-series cards seem a whole lot faster than they actually are.
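
Here’s the frame-counter math in its simplest form—a sketch that ignores the real-world generation overhead and, crucially, the fact that input latency still tracks the native frame rate.

```python
def displayed_fps(native_fps: float, generated_per_native: int) -> float:
    """Displayed frame rate with frame generation, ignoring overhead.

    Every natively rendered frame is followed by `generated_per_native`
    AI-generated frames, so the display shows (1 + N) frames per native frame.
    """
    return native_fps * (1 + generated_per_native)

base = 60  # a hypothetical natively rendered frame rate
print(displayed_fps(base, 1))  # 120: 40-series-style single frame generation
print(displayed_fps(base, 3))  # 240: Blackwell's Multi-Frame Generation at its maximum
```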

We’ll publish some frame-generation numbers in this review, both using DLSS and (for AMD cards) FSR. But per usual, we’ll continue to focus on natively rendered performance—more relevant for all the games out there that don’t support frame generation or don’t benefit much from it, and more relevant because your base performance dictates how good your generated frames will look and feel anyway.

Testbed notes

We tested the 5070 Ti in the same updated testbed and with the same updated suite of games that we started using in our RTX 5090 review. The heart of the build is an AMD Ryzen 7 9800X3D, ensuring that our numbers are limited as little as possible by the CPU speed.

Per usual, we prioritize testing GPUs at resolutions that we think most people will use them for. For the 5070 Ti, that means both 4K and 1440p—this card is arguably still overkill for 1440p, but if you’re trying to hit 144 or 240 Hz (or even more) on a monitor, there’s a good case to be made for it. We also use a mix of ray-traced and non-ray-traced games. For the games we test with upscaling enabled, we use DLSS on Nvidia cards and the newest supported version of FSR (usually 2.x or 3.x) for AMD cards.

Though we’ve tested and re-tested multiple cards with recent drivers in our updated testbed, we don’t have a 4070 Ti Super, 4070 Ti, or 3070 Ti available to test with. We’ve provided some numbers for those GPUs from past reviews; these are from a PC running older drivers and a Ryzen 7 7800X3D instead of a 9800X3D, and we’ve put asterisks next to them in our charts. They should still paint a reasonably accurate picture of the older GPUs’ relative performance, but take them with that small grain of salt.

Performance and power

Despite including fewer CUDA cores than either version of the 4080, some combination of architectural improvements and memory bandwidth increases helps the card keep pace with both 4080 cards almost perfectly. In most of our tests, it landed in the narrow strip right in between the 4080 and the 4080 Super, and its power consumption under load was also almost identical.

Benchmarks with DLSS/FSR and/or frame generation enabled.

In every way that matters, the 5070 Ti is essentially an RTX 4080 that also supports DLSS Multi-Frame Generation. You can see why we’d be mildly enthusiastic about it at $749 but less and less impressed the closer the price creeps to $1,000.

Being close to a 4080 also means that the performance gap between the 5070 Ti and the 5080 is usually pretty small. In most of the games we tested, the 5070 Ti hovers right around 90 percent of the 5080’s performance.

The 5070 Ti is also around 60 percent as fast as an RTX 5090. The performance is a lot lower, but the price-to-performance ratio is a lot higher, possibly reflecting the fact that the 5070 Ti actually has other GPUs it has to compete with (in non-ray-traced games, the Radeon RX 7900 XTX generally keeps pace with the 5070 Ti, though at this late date it is mostly out of stock unless you’re willing to pay way more than you ought to for one).

Compared to the old 4070 Ti, the 5070 Ti can be between 20 and 50 percent faster at 4K, depending on how limited the game is by the 4070 Ti’s narrower memory bus and 12GB bank of RAM. The performance improvement over the 4070 Ti Super is more muted, ranging from as little as 8 percent to as much as 20 percent in our 4K tests. This is better than the RTX 5080 did relative to the RTX 4080 Super, but as a generational leap, it’s still pretty modest—it’s clear why Nvidia wants everyone to look at the Multi-Frame Generation numbers when making comparisons.

Waiting to put theory into practice

Asus’ RTX 5070 Ti, replete with 12-pin power plug. Credit: Andrew Cunningham

Being able to get RTX 4080-level performance for several hundred dollars less just a couple of years after the 4080 launched is kind of exciting, though that excitement is leavened by the still high-ish $749 price tag (again, assuming it’s actually available at or anywhere near that price). That certainly makes it feel more like a next-generation GPU than the RTX 5080 did—and whatever else you can say about it, the 5070 Ti certainly feels like a better buy than the 5080.

The 5070 Ti is a fast and 4K-capable graphics card, fast enough that you should be able to get some good results from all of Blackwell’s new frame-generation trickery if that’s something you want to play with. Its price-to-performance ratio does not thrill me, but if you do the math, it’s still a much better value than the 4070 Ti series was—particularly the original 4070 Ti, with the 12GB allotment of RAM that limited its usefulness and future-proofness at 4K.

Two reasons to hold off on buying a 5070 Ti, if you’re thinking about it: We’re waiting to see how AMD’s 9070 series GPUs shake out, and Nvidia’s 50-series launch so far has been kind of a mess, with low availability and price gouging both on retail sites and in the secondhand market. Pay much more than $749 for a 5070 Ti, and its delicate value proposition fades quickly. We should know more about the AMD cards in a couple of weeks. The supply situation, at least so far, seems like a problem that Nvidia can’t (or won’t) figure out how to solve.

The good

  • For a starting price of $749, you get the approximate performance and power consumption of an RTX 4080, a GPU that cost $1,199 two years ago and $999 one year ago.
  • Good 4K performance and great 1440p performance for those with high-refresh monitors.
  • 16GB of RAM should be reasonably future-proof.
  • Multi-Frame Generation is an interesting performance-boosting tool to have in your toolbox, even if it isn’t a cure-all for low framerates.
  • Nvidia-specific benefits like DLSS support and CUDA.

The bad

  • Not all that much faster than a 4070 Ti Super.
  • $749 looks cheap compared to a $2,000 GPU, but it’s still enough money to buy a high-end game console or an entire 1080p gaming PC.

The ugly

  • Pricing and availability for other 50-series GPUs to date have both been kind of a mess.
  • Will you actually be able to get it for $749? Because it doesn’t make a ton of sense if it costs more than $749.
  • Seriously, it’s been months since I reviewed a GPU that was actually widely available at its advertised price.
  • And it’s not just the RTX 5090 or 5080, it’s low-end stuff like the Intel Arc B580 and B570, too.
  • Is it high demand? Low supply? Scalpers and resellers hanging off the GPU market like the parasites they are? No one can say!
  • It makes these reviews very hard to do.
  • It also makes PC gaming, as a hobby, really difficult to get into if you aren’t into it already!
  • It just makes me mad is all.
  • If you’re reading this months from now and the GPUs actually are in stock at the list price, I hope this was helpful.

Photo of Andrew Cunningham

Andrew is a Senior Technology Reporter at Ars Technica, with a focus on consumer tech including computer hardware and in-depth reviews of operating systems like Windows and macOS. Andrew lives in Philadelphia and co-hosts a weekly book podcast called Overdue.

Are any of Apple’s official MagSafe accessories worth buying?


When MagSafe was introduced, it promised an accessories revolution. Meh.

Apple’s current lineup of MagSafe accessories. Credit: Samuel Axon

When Apple introduced what it currently calls MagSafe in 2020, its marketing messaging suggested that the magnetic attachment standard for the iPhone would produce a boom in innovation in accessories, making things possible that simply weren’t before.

Four years later, that hasn’t really happened—either from third-party accessory makers or Apple’s own lineup of branded MagSafe products.

Instead, we have a lineup of accessories that matches pretty much what was available at launch in 2020: chargers, cases, and just a couple more unusual applications.

With the launch of the iPhone 16 just behind us and the holidays just in front of us, a bunch of people are moving to phones that support MagSafe for the first time. Apple loves an upsell, so it offers some first-party MagSafe accessories—some useful, some not worth the cash, given the premiums it sometimes charges.

Given all that, it’s a good time to check in and quickly point out which (if any) of these first-party MagSafe accessories might be worth grabbing alongside that new iPhone and which ones you should skip in favor of third-party offerings.

Cases with MagSafe

Look, we could write thousands of words about the variety of iPhone cases available, or even just about those that support MagSafe to some degree or another—and we still wouldn’t really scratch the surface. (Unless that surface was made with Apple’s leather-replacement FineWoven material—hey-o!)

It’s safe to say there’s a third-party case for every need and every type of person out there. If you want one that meets your exact needs, you’ll be able to find it. Just know that cases labeled as MagSafe-ready will allow charging through the case and will let the magnets align correctly between a MagSafe charger and an iPhone—that’s really the whole point of the “MagSafe” name.

But if you prefer to stick with Apple’s own cases, there are currently two options: the clear cases and the silicone cases.

A clear iPhone case on a table

The clear case is definitely the superior of Apple’s two first-party MagSafe cases. Credit: Samuel Axon

The clear cases actually have a circle where the edges of the MagSafe magnets are, which is pretty nice for getting the magnets to snap without any futzing—though it’s really not necessary, since, well, magnets attract. They have a firm plastic shell that is likely to do a good job of protecting your phone when you drop it.

The silicone case is… fine. Frankly, it’s ludicrously priced for what it is. It offers no advantages over a plethora of third-party cases that cost exactly half as much.

Recommendation: The clear case has its advantages, but the silicone case is awfully expensive for what it is. Generally, third party is the way to go. There are lots of third-party cases from manufacturers who got licensed by Apple, and you can generally trust those will work with wireless charging just fine. That was the whole point of the MagSafe branding, after all.

The MagSafe charger

At $39 or $49 (depending on length, one meter or two), these charging cables are pretty pricey. But they’re also highly durable, relatively efficient, and super easy to use. Still, in most cases, you might as well just use any old USB-C cable.

There are some situations where you might prefer this option, though—for example, if you prop your iPhone up against your bedside lamp like a nightstand clock, or if you (like me) listen to audiobooks on wired earbuds while you fall asleep via the USB-C port, but you want to make sure the phone is still charging.

A charger with cable sits on a table

The MagSafe charger for the iPhone. Credit: Samuel Axon

So the answer on Apple’s MagSafe charger is that it’s pretty specialized, but it’s arguably the best option for those who have some specific reason not to just use USB-C.

Recommendation: Just use a USB-C cable, unless you have a specific reason to go this route—shoutout to my fellow individuals who listen to audiobooks while falling asleep, need headphones so as not to keep their spouse awake, and prefer wired USB-C earbuds over AirPods to avoid losing AirPods in the bed covers. I’m sure there are dozens of us! If you do go this route, Apple’s own cable is the safest pick.

Apple’s FineWoven Wallet with MagSafe

While I’d long known people with dense wallet cases for their iPhones, I was excited about Apple’s leather (and later FineWoven) wallet with MagSafe when it was announced. I felt the wallet cases I’d seen were way too bulky, making the phone less pleasant to use.

Unfortunately, Apple’s FineWoven Wallet with MagSafe might be the worst official MagSafe product.

The problem is that the “durable microtwill” material that Apple went with instead of leather is prone to scratching, as many owners have complained. That’s a bit frustrating for something that costs nearly $60.

Apple's MagSafe wallet on a table

The MagSafe wallet has too many limitations to be worthwhile for most people. Credit: Samuel Axon

The wallet also only holds a few cards, and putting cards here means you probably can’t or at least shouldn’t try to use wireless charging, because the cards would be between the charger and the phone. Apple itself warns against doing this.

For those reasons, skip the FineWoven Wallet. There are lots of better-designed iPhone wallet cases out there, even though they might not be so minimalistic.

Recommendation: Skip this one. It’s a great idea in theory, but in practice and execution, it just doesn’t deliver. There are zillions of great wallet cases out there if you don’t mind a bit of bulk—just know you’ll have some wireless charging issues with many cases.

Other categories offered by third parties

Frankly, a lot of the more interesting applications of MagSafe for the iPhone are only available through third parties.

There are monitor mounts for using the iPhone as a webcam with Macs; bedside table stands for charging the phone while it acts as a smart display; magnetic phone stands for car dashboards that let you use GPS while you drive; magnetic mounts for attaching power banks and portable batteries; and of course, multi-device chargers similar to the infamously canceled AirPower charging pad Apple had planned to release at one point. (I have the Belkin Boost Charge Pro 3-in-1 on my desk, and it works great.)

It’s not the revolution of new applications that some imagined when MagSafe was launched, but that’s not really a surprise. Still, there are some quality products out there. It’s both strange and a pity that Apple hasn’t made most of them itself.

No revolution here

Truthfully, MagSafe never seemed like it would be a huge smash. iPhones already supported Qi wireless charging before it came along, so the idea of magnets keeping the device aligned with the charger was always the main appeal—its existence potentially saved some users from ending up with chargers that didn’t quite work right with their phones, provided those users bought officially licensed MagSafe accessories.

Apple’s MagSafe accessories are often overpriced compared to alternatives from Belkin and other frequent partners. MagSafe seemed to do a better job bringing some standards to certain third-party products than it did bringing life to Apple’s offerings, and it certainly did not bring about a revolution of new accessory categories to the iPhone.

Still, it’s hard to blame anyone for choosing to go with Apple’s versions; the world of third-party accessories can be messy, and going the first-party route is generally a surefire way to know you’re not going to have many problems, even if the sticker’s a bit steep.

You could shop for third-party options, but sometimes you want a sure thing. With the possible exception of the FineWoven Wallet, all of these Apple-made MagSafe products are sure things.

Samuel Axon is a senior editor at Ars Technica. He covers Apple, software development, gaming, AI, entertainment, and mixed reality. He has been writing about gaming and technology for nearly two decades at Engadget, PC World, Mashable, Vice, Polygon, Wired, and others. He previously ran a marketing and PR agency in the gaming industry, led editorial for the TV network CBS, and worked on social media marketing strategy for Samsung Mobile at the creative agency SPCSHP. He also is an independent software and game developer for iOS, Windows, and other platforms, and he is a graduate of DePaul University, where he studied interactive media and software development.

Are any of Apple’s official MagSafe accessories worth buying? Read More »

review:-amazon’s-2024-kindle-paperwhite-makes-the-best-e-reader-a-little-better

Review: Amazon’s 2024 Kindle Paperwhite makes the best e-reader a little better

A fast Kindle?

From left to right: 2024 Paperwhite, 2021 Paperwhite, and 2018 Paperwhite. Note not just the increase in screen size, but also how the screen corners get a little more rounded with each release. Credit: Andrew Cunningham

I don’t want to oversell how fast the new Kindle is, because it’s still not like an E-Ink screen can really compete with an LCD or OLED panel for smoothness of animations or UI responsiveness. But even compared to the 2021 Paperwhite, tapping buttons, opening menus, opening books, and turning pages feels considerably snappier—not quite instantaneous, but without the unexplained pauses and hesitation that longtime Kindle owners will be accustomed to. For those who type out notes in their books, even the onscreen keyboard feels fluid and responsive.

Compared to the 2018 Paperwhite (again, the first waterproofed model, and the last one with a 6-inch screen and micro USB port), the difference is night and day. While it still feels basically fine for reading books, I find that the older Kindle can sometimes pause for so long when opening menus or switching between things that I wonder if it’s still working or whether it’s totally locked up and frozen.

“Kindle benchmarks” aren’t really a thing, but I attempted to quantify the performance improvements by running some old browser benchmarks using the Kindle’s limited built-in web browser and Google’s ancient Octane 2.0 test—the 2018, 2021, and 2024 Kindles are all running the same software update here (5.17.0), so this should be a reasonably good apples-to-apples comparison of single-core processor speed.

The new Kindle is actually way faster than older models. Credit: Andrew Cunningham

The 2021 Kindle was roughly 30 percent faster than the 2018 Kindle. The new Paperwhite is nearly twice as fast as the 2021 Paperwhite, and well over twice as fast as the 2018 Paperwhite. That alone is enough to explain the tangible difference in responsiveness between the devices.
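
If you want to run the same comparison on your own devices, the math is simple: Octane reports a single score where higher is better, so the speedup is just the ratio of scores. Here’s a minimal sketch with placeholder scores rather than our actual results:

```python
# Placeholder Octane 2.0 scores; substitute the numbers your own Kindles report.
octane_scores = {
    "2018 Paperwhite": 1000,
    "2021 Paperwhite": 1300,  # ~30 percent faster than 2018 in this made-up example
    "2024 Paperwhite": 2600,  # roughly 2x the 2021 model
}

baseline = octane_scores["2018 Paperwhite"]
for model, score in octane_scores.items():
    print(f"{model}: {score / baseline:.2f}x the 2018 Paperwhite")
```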

Turning to the new Paperwhite’s other improvements: compared side by side, the new screen is appreciably bigger, more noticeably so than the 0.2-inch size difference might suggest. And it doesn’t make the Paperwhite much larger, though it is a tiny bit taller in a way that will wreck compatibility with existing cases. But you only really appreciate the upgrade if you’re coming from one of the older 6-inch Kindles.

Review: Amazon’s 2024 Kindle Paperwhite makes the best e-reader a little better Read More »

after-working-with-a-dual-screen-portable-monitor-for-a-month,-i’m-a-believer

After working with a dual-screen portable monitor for a month, I’m a believer

I typically used the FlipGo Pro with a 16:10 laptop screen, meaning that the portable monitor provided me with a taller view than most laptops offer. When the FlipGo Pro is working as one unified screen, it delivers a 6:2 (or 2:6) experience. These more unusual aspect ratios, combined with the ability to easily rotate the lightweight FlipGo Pro from portrait to landscape mode and to swap between dual and unified monitor modes, amplified the gadget’s versatility and minimal desk space requirement.

Dual-screen monitors edge out dual-screen PCs

The appeal of a device that can bring you two times the screen space without being a burden to carry around is obvious. Many of the options until now, however, have felt experimental, fragile, or overly niche for most people to consider.

I recently gave praise to the concept behind a laptop with a secondary screen that attaches to the primary through a 360-degree hinge on the primary display’s left side:

The AceMagic X1 dual-screen laptop. Credit: Scharon Harding

Unlike the dual-screen Lenovo Yoga Book 9i, the AceMagic X1 has an integrated keyboard and touchpad. However, the PC’s questionable durability, dated components, and its maker’s sketchy reputation (malware was once found inside AceMagic mini PCs) prevent me from recommending the laptop.

Meanwhile, something like the FlipGo Pro does something that today’s dual-screen laptops fail to do in their quest to provide extra screen space. With its quick swapping from one to two screens and simple adjustability, it’s easy for users of various OSes to get the most out of its versatility. As tech companies continue exploring the integration of extra screens, products like the FlipGo Pro remind me of the importance of evolution over sacrifice. A second screen has less value if it comes at the cost of critical features or a quality build. While a dual portable monitor isn’t as flashy or groundbreaking as a laptop with two full-size displays built in, when well-executed, it can be significantly more helpful—which, at least for now, is groundbreaking enough.

After working with a dual-screen portable monitor for a month, I’m a believer Read More »

review:-the-fastest-of-the-m4-macbook-pros-might-be-the-least-interesting-one

Review: The fastest of the M4 MacBook Pros might be the least interesting one


Not a surprising generational update, but a lot of progress for just one year.

The new M4 Pro and M4 Max MacBook Pros. Credit: Andrew Cunningham

In some ways, my review of the new MacBook Pros will be a lot like my review of the new iMac. This is the third year and fourth generation of the Apple Silicon-era MacBook Pro design, and outwardly, few things have changed about the new M4, M4 Pro, and M4 Max laptops.

Here are the things that are different. Boosted RAM capacities, across the entire lineup but most crucially in the entry-level $1,599 M4 MacBook Pro, make the new laptops a shade cheaper and more versatile than they used to be. The new nano-texture display option, a $150 upgrade on all models, is a lovely matte-textured coating that completely eliminates reflections. There’s a third Thunderbolt port on the baseline M4 model (the M3 model had two), and it can drive up to three displays simultaneously (two external, plus the built-in screen). There’s a new webcam. It looks a little nicer and has a wide-angle lens that can show what’s on your desk instead of your face if you want it to. And there are new chips, which we’ll get to.

That is essentially the end of the list. If you are still using an Intel-era MacBook Pro, I’ll point you to our previous reviews, which mostly celebrate the improvements (more and different kinds of ports, larger screens) while picking one or two nits (they are a bit larger and heavier than late-Intel MacBook Pros, and the display notch is an eyesore).

New chips: M4 and M4 Pro

That leaves us with the M4, M4 Pro, and M4 Max.

We’ve already talked a bunch about the M4 and M4 Pro in our reviews of the new iMac and the new Mac minis, but to recap, the M4 is a solid generational upgrade over the M3, thanks to its two extra efficiency cores on the CPU side. Comparatively, the M4 Pro is a much larger leap over the M3 Pro, mostly because the M3 Pro was such a mild update compared to the M2 Pro.

The M4’s single-core performance is between 14 and 21 percent faster than the M3’s in our tests, and tests that use all the CPU cores are usually 20 or 30 percent faster. The GPU is occasionally as much as 33 percent faster than the M3’s in our tests, though more often, the improvements are in the single or low double digits.

For the M4 Pro—bearing in mind that we tested the fully enabled version with 14 CPU cores and 20 GPU cores, and not the slightly cut down version sold in less expensive machines—single-core CPU performance is up by around 20-ish percent in our tests, in line with the regular M4’s performance advantage over the regular M3. The huge boost to CPU core count increases multicore performance by between 50 and 60 percent most of the time, a substantial boost that actually allows the M4 Pro to approach the CPU performance of the 2022 M1 Ultra. GPU performance is up by around 33 percent compared to M3 Pro, thanks to the additional GPU cores and memory bandwidth, but it’s still not as fast as any of Apple’s Max or Ultra chips, even the M1-series.

M4 Max

And finally, there’s the M4 Max (again, the fully enabled version, this one with 12 P-cores, 4 E-cores, 40 GPU cores, and 546GB/s of memory bandwidth). Single-core CPU performance is the biggest leap forward, jumping by between 18 and 28 percent in single-threaded benchmarks. Multi-core performance is generally up by between 15 and 20 percent. That’s a more-than-respectable generational leap, but it’s nowhere near what happened for the M4 Pro, since the M3 Max and M4 Max have the same CPU core counts.

The only weird thing we noticed in our testing was inconsistent performance in our Handbrake video encoding test. Every time we ran it, it took either five minutes and 20 seconds or four minutes and 30 seconds. For the slower result, power usage was also slightly reduced, which suggests to me that some kind of throttling is happening during this workload; we saw roughly these two results over and over across a dozen or so runs, each separated by at least five minutes to allow the Mac to cool back down. High Power mode didn’t make a difference in either direction.

Chip                   CPU P/E-cores   GPU cores   RAM options    Display support (including internal)   Memory bandwidth
Apple M4 Max (low)     10/4            32          36GB           Up to five                              410GB/s
Apple M4 Max (high)    12/4            40          48/64/128GB    Up to five                              546GB/s
Apple M3 Max (high)    12/4            40          48/64/128GB    Up to five                              409.6GB/s
Apple M2 Max (high)    8/4             38          64/96GB        Up to five                              409.6GB/s

We shared our data with Apple and haven’t received a response. Note that we tested the M4 Max in the 16-inch MacBook Pro, and we’d expect any kind of throttling behavior to be slightly more noticeable in the 14-inch Pro since it has less room for cooling hardware.

The faster result is more in line with the rest of our multi-core tests for the M4 Max. Even the slower of the two results is faster than the M3 Max, albeit not by much. We also didn’t notice similar behavior for any of the other multi-core tests we ran. It’s worth keeping in mind if you plan to use the MacBook Pro for CPU-heavy, sustained workloads that will run for more than a few minutes at a time.
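
If you want to check for this kind of run-to-run variation yourself, a simple loop that times repeated encodes with a cooldown in between is enough. Here’s a rough sketch that assumes HandBrakeCLI is installed; the input file name and preset are stand-ins, not the ones we used:

```python
# Rough sketch of the repeated-run timing described above. Assumes HandBrakeCLI
# is installed; "in.mov" and the preset name are stand-ins for whatever you use.
import subprocess
import time

RUNS = 6
COOLDOWN_SECONDS = 300  # let the machine cool back down between runs

for i in range(RUNS):
    start = time.monotonic()
    subprocess.run(
        ["HandBrakeCLI", "-i", "in.mov", "-o", f"out_{i}.mp4",
         "--preset", "Fast 1080p30"],
        check=True,
        capture_output=True,  # keep HandBrake's progress output out of the terminal
    )
    elapsed = time.monotonic() - start
    print(f"Run {i + 1}: {elapsed / 60:.1f} minutes")
    if i < RUNS - 1:
        time.sleep(COOLDOWN_SECONDS)
```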

The M4 Max’s GPU performance gains over the M3 Max vary widely in our tests, with results ranging from as little as 10 or 15 percent (for the 4K and 1440p GFXBench tests—the bigger boost in the 1080p version comes partially from CPU improvements) to as high as 30 percent for the Cinebench 2024 GPU test. I suspect the benefits will vary depending on how much the apps you’re running benefit from the M4 Max’s improved memory bandwidth.

Power efficiency in the M4 Max isn’t dramatically different from the M3 Max—it’s more efficient by virtue of using roughly the same amount of power as the M3 Max and running a little faster, consuming less energy overall to do the same amount of work.
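
To make the efficiency point concrete: energy is just average power multiplied by time, so a chip that draws roughly the same wattage but finishes sooner uses less energy per job. A quick sketch with illustrative numbers, not our measured figures:

```python
# Illustrative energy math only; the wattages and durations are made up,
# not measurements from this review.
def energy_watt_hours(watts: float, minutes: float) -> float:
    """Energy = average power x time, converted to watt-hours."""
    return watts * (minutes / 60)

m3_style_run = energy_watt_hours(watts=50, minutes=5.5)
m4_style_run = energy_watt_hours(watts=50, minutes=4.5)
print(f"Slower chip:  {m3_style_run:.2f} Wh per encode")
print(f"Faster chip:  {m4_style_run:.2f} Wh per encode")
print(f"Energy saved: {(1 - m4_style_run / m3_style_run) * 100:.0f}%")
```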

Finally, in a test of High Power mode, we did see some very small differences in the GFXBench scores, though not in other GPU-based tests like Cinebench and Blender or in any CPU-based tests. You might notice slightly better performance in games if you’re running them, but as with the M4 Pro, it doesn’t seem hugely beneficial. This is different from how many Windows PCs handle their performance modes; Snapdragon X Elite PCs with Arm-based chips, for example, show substantially better performance in high-performance mode than in the default “balanced” mode.

Nice to see you, yearly upgrade

The 14-inch and 16-inch MacBook Pros. The nano-texture glass displays eliminate all of the normal glossy-screen reflections and glare. Credit: Andrew Cunningham

The new MacBook Pros are all solid year-over-year upgrades, though they’ll be most interesting to people who bought their last MacBook Pro toward the end of the Intel era sometime in 2019 or 2020. The nano-texture display, extra speed, and extra RAM may be worth a look for owners of the M1 MacBook Pros if you truly need the best performance you can get in a laptop. But I’d still draw a pretty bright line between latter-day Intel Macs (aging, hot, getting toward the end of the line for macOS updates, not getting all the features of current macOS versions anyway) and any kind of Apple Silicon Mac (fully supported with all features, still-current designs, barely three years old at most).

Frankly, the computer that benefits the most is probably the $1,599 entry-level MacBook Pro, which, thanks to the 16GB RAM upgrade and improved multi-monitor support, is a fairly capable professional computer. Of all the places where Apple’s previous 8GB RAM floor felt inappropriate, it was in the M3 MacBook Pro. With the extra ports, high-refresh-rate screen, and nano-texture coating option, it’s a bit easier to articulate the kind of user who that laptop is actually for, separating it a bit from the 15-inch MacBook Air.

The M4 Pro version also deserves a shout-out for its particularly big performance jump compared to the M2 Pro and M3 Pro generations. It’s a little odd to have a MacBook Pro generation where the middle chip is the most impressive of the three, and that’s not to discount how fast the M4 Max is—it’s just the reality of the situation given Apple’s focus on efficiency rather than performance for the M3 Pro.

The good

  • RAM upgrades across the whole lineup. This particularly benefits the $1,599 M4 MacBook Pro, which jumps from 8GB to 16GB
  • M4 and M4 Max are both respectable generational upgrades and offer substantial performance boosts from Intel or even M1 Macs
  • M4 Pro is a huge generational leap, as Apple’s M3 Pro used a more conservative design
  • Nano-texture display coating is very nice and not too expensive relative to the price of the laptops
  • Better multi-monitor support for M4 version
  • Other design elements—ports, 120 Hz screen, keyboard, and trackpad—are mostly the same as before and are all very nice

The bad

  • Occasional evidence of M4 Max performance throttling, though it’s inconsistent, and we only saw it in one of our benchmarks
  • Need to jump all the way to M4 Max to get the best GPU performance

The ugly

  • Expensive, especially once you start considering RAM and storage upgrades

Andrew is a Senior Technology Reporter at Ars Technica, with a focus on consumer tech including computer hardware and in-depth reviews of operating systems like Windows and macOS. Andrew lives in Philadelphia and co-hosts a weekly book podcast called Overdue.

Review: The fastest of the M4 MacBook Pros might be the least interesting one Read More »

macos-15-sequoia:-the-ars-technica-review

macOS 15 Sequoia: The Ars Technica review

The macOS 15 Sequoia update will inevitably be known as “the AI one” in retrospect, introducing, as it does, the first wave of “Apple Intelligence” features.

That’s funny because none of that stuff is actually ready for the 15.0 release that’s coming out today. A lot of it is coming “later this fall” in the 15.1 update, which Apple has been testing entirely separately from the 15.0 betas for weeks now. Some of it won’t be ready until after that—rumors say image generation won’t be ready until the end of the year—but in any case, none of it is ready for public consumption yet.

But the AI-free 15.0 release does give us a chance to evaluate all of the non-AI additions to macOS this year. Apple Intelligence is sucking up a lot of the media oxygen, but in most other ways, this is a typical 2020s-era macOS release, with one or two headliners, several quality-of-life tweaks, and some sparsely documented under-the-hood stuff that will subtly change how you experience the operating system.

The AI-free version of the operating system is also the one that all users of the remaining Intel Macs will be using, since all of the Apple Intelligence features require Apple Silicon. Most of the Intel Macs that ran last year’s Sonoma release will run Sequoia this year—the first time this has happened since 2019—but the difference between the same macOS version running on different CPUs will be wider than it has been. It’s a clear indicator that the Intel Mac era is drawing to a close, even if support hasn’t totally ended just yet.

macOS 15 Sequoia: The Ars Technica review Read More »

asus-rog-ally-x-review:-better-performance-and-feel-in-a-pricey-package

Asus ROG Ally X review: Better performance and feel in a pricey package

Faster, grippier, pricier, and just as Windows-ed

A great hardware refresh, but it stands out for its not-quite-handheld cost.

It’s hard to fit the performance-minded but pricey ROG Ally X into a simple product category. It’s also tricky to fit it into a photo, at the right angle, while it’s in your hands. Credit: Kevin Purdy

The first ROG Ally from Asus, a $700 Windows-based handheld gaming PC, performed better than the Steam Deck, but it did so through notable compromises on battery life. The hardware also had a first-gen feel and software jank from both Asus’ own wraparound gaming app and Windows itself. The Ally asked an awkward question: “Do you want to pay nearly 50 percent more than you’d pay for a Steam Deck for a slightly faster but far more awkward handheld?”

The ROG Ally X makes that question more interesting and less obvious to answer. Yes, it’s still a handheld that’s trying to hide Windows annoyances, and it’s still missing trackpads, without which some PC games just feel bad. And (review spoiler) it still eats a charge faster than the Steam Deck OLED on less demanding games.

But the improvements Asus made to this X sequel are notable, and its new performance stats make it more viable for those who want to play more demanding games on a rather crisp screen. At $800, or $100 more than the original ROG Ally with no extras thrown in, you have to really, really want the best possible handheld gaming experience while still tolerating Windows’ awkward fit.

What’s new in the Ally X

Specs at a glance: Asus ROG Ally X
Display: 7-inch IPS panel, 1920×1080, 120 Hz, 7 ms, 500 nits, 100% sRGB, FreeSync, Gorilla Glass Victus
OS: Windows 11 Home
CPU: AMD Ryzen Z1 Extreme (Zen 4, 8 cores, 24MB cache, 5.10 GHz, 9-30 W as reviewed)
RAM: 24GB LPDDR5X-6400
GPU: AMD Radeon (RDNA 3), 2.7 GHz, 8.6 teraflops
Storage: M.2 NVMe 2280 Gen4x4, 1TB (as reviewed)
Networking: Wi-Fi 6E, Bluetooth 5.2
Battery: 80 Wh (65 W max charge)
Ports: USB-C (3.2 Gen 2, DP 1.4, PD 3.0), USB-C (DP, PD 3.0), 3.5 mm audio, microSD
Size: 11×4.3×0.97 inches (280×111×25 mm)
Weight: 1.49 lbs (678 g)
Price as reviewed: $800

The ROG Ally X is essentially the ROG Ally with a bigger battery packed into a shell that is impressively not much bigger or heavier, more storage and RAM, and two USB-C ports instead of one USB-C and one weird mobile port that nobody could use. Asus reshaped the device and changed the face-button feel, and it all feels noticeably better, especially now that gaming sessions can last longer. The company also moved the microSD card slot so that your cards don’t melt, which is nice.

There’s a bit more to each of those changes that we’ll get into, but that’s the short version. Small spec bumps wouldn’t have changed much about the ROG Ally experience, but the changes Asus made for the X version do move the needle. Having more RAM available has a sizable impact on the frame performance of demanding games, and you can see that in our benchmarks.

We kept the LCD Steam Deck in our benchmarks because its chip has roughly the same performance as its OLED upgrade. But it’s really the Ally-to-Ally-X comparisons that are interesting; the Steam Deck has been fading back from AAA viability. If you want the Ally X to run modern, GPU-intensive games as fast as is feasible for a battery-powered device, it can now do that a lot better—for longer—and feel a bit better while you do.

The ROG Ally X answers the question “why not just buy a gaming laptop?” better than its predecessor did. At $800 and up, you might still ask how much portability is worth to you. But the Ally X is not as much of a niche (Windows-based handheld) inside a niche (moderately higher-end handhelds).

I normally would not use this kind of handout image with descriptive text embedded, but Asus is right: the ROG Ally X is indeed way more comfortable (just maybe not all-caps). Credit: Asus

How it feels using the ROG Ally X

My testing of the ROG Ally X consisted of benchmarks, battery testing, and playing some games on the couch. Specifically: Deep Rock Galactic: Survivor and Tactical Breach Wizards on the device’s lowest-power setting (“Silent”), Deathloop on its medium-power setting (“Performance”), and Elden Ring: Shadow of the Erdtree on its all-out “Turbo” mode.

All four of those games worked mostly fine, but DRG: Survivor pushed the boundaries of Silent mode a bit when its levels got crowded with enemies and projectiles. Most games could automatically figure out a decent settings scheme for the Ally X. If a game offers AMD’s FSR (FidelityFX Super Resolution) upscaling, you should at least try it; it’s usually a big boon to a game running on this handheld.

Overall, the ROG Ally X was a device I didn’t notice when I was using it, which is the best recommendation I can make. Perhaps I noticed that the 1080p screen was brighter, closer to the glass, and sharper than the LCD (original) Steam Deck. At handheld distance, the difference between 800p and 1080p isn’t huge to me, but the difference between LCD and OLED is more so. (Of course, an OLED version of the Steam Deck was released late last year.)

Asus ROG Ally X review: Better performance and feel in a pricey package Read More »

sunrise-alarm-clock-didn’t-make-waking-up-easier—but-made-sleeping-more-peaceful

Sunrise alarm clock didn’t make waking up easier—but made sleeping more peaceful

  • The Hatch Restore 2 with one of its lighting options on. Credit: Scharon Harding
  • The time is visible here, but you can disable that. Credit: Scharon Harding
  • Here’s the clock with a light on in the dark. Credit: Scharon Harding
  • A closer look. Credit: Scharon Harding
  • The clock’s backside. Credit: Scharon Harding

To say “I’m not a morning person” would be an understatement. Not only is it hard for me to be useful in the first hour (or so) of being awake, but it’s hard for me to wake up. I mean, really hard.

I’ve tried various recommendations and tricks: I’ve set multiple alarms and had coffee ready and waiting, and I’ve put my alarm clock far from my bed and kept my blinds open so the sun could wake me. But I’m still prone to sleeping through my alarm or hitting snooze until the last minute.

The Hatch Restore 2, a smart alarm clock with lighting that mimics sunrises and sunsets, seemed like a technologically savvy approach to realizing my dreams of becoming a morning person.

After about three weeks, though, I’m still no early bird. But the smart alarm clock is still earning a spot on my nightstand.

How it works

Hatch refers to the Restore 2 as a “smart sleep clock.” That’s marketing speak, but to be fair, the Restore 2 does help me sleep. A product page describes the clock as targeting users’ “natural circadian rhythm, so you can get your best sleep.” There’s some reasoning here. Circadian rhythms are “the physical, mental, and behavioral changes an organism experiences over a 24-hour cycle,” per the National Institute of General Medical Sciences (NIGMS). Circadian rhythms affect our sleep patterns (as well as other biological aspects, like appetite), NIGMS says.

The Restore 2’s pitch is a clock that emits soothing lighting, which you can set to change gradually as bedtime approaches (dimming, for example), paired with an alarm that simulates a sunrise, gradually brightening to help you wake up more naturally. You can set the clock to play various soothing sounds while you’re winding down, sleeping, and/or as your alarm sound.
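
The gradual-brightening idea is easy to picture as a simple ramp. The sketch below is purely illustrative of the concept, not Hatch’s actual implementation, and the timing and brightness values are made up:

```python
# Purely illustrative: a linear "sunrise" ramp from dark to full brightness
# over 30 minutes. This is not Hatch's actual algorithm; the numbers are made up.
SUNRISE_MINUTES = 30
MAX_BRIGHTNESS_PERCENT = 100

def brightness_at(minutes_into_sunrise: float) -> float:
    """Linearly interpolate brightness between the start of the ramp and wake-up time."""
    fraction = min(max(minutes_into_sunrise / SUNRISE_MINUTES, 0.0), 1.0)
    return fraction * MAX_BRIGHTNESS_PERCENT

for t in range(0, SUNRISE_MINUTES + 1, 5):
    print(f"{t:2d} minutes into the sunrise: {brightness_at(t):5.1f}% brightness")
```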

The clock needs a Wi-Fi connection and its companion app for setup. The free app has plenty of options, including sounds, colors, and tips for restful sleep (there’s a $5-per-month subscription for extra features and sounds, but thankfully, it’s optional).

Out like a light

This is, by far, the most customizable alarm clock I’ve ever used. The app was a little overwhelming at first, but once I got used to it, it was comforting to be able to set Routines or different lighting/sounds for different days. For example, I set mine to play two hours of “Calming Singing Bowls” with a slowly dimming sunset effect when I press the “Rest” button. Once I press the button again, the clock plays ocean sounds until my alarm goes off.

  • Routines in the Restore 2 app. Credit: Scharon Harding/Hatch
  • Setting a sunrise alarm part one. Credit: Scharon Harding/Hatch
  • Setting a sunrise alarm part two. (Part three would show a volume slider.) Credit: Scharon Harding/Hatch

I didn’t think I needed a sleep aid—I’m really good at sleeping. But I was surprised at how the Restore 2 helped me fall asleep more easily by blocking unpleasant noises. In my room, the biggest culprit is an aging air conditioner that’s loud while on, and it gets even more uproarious when automatically turning itself on and off (a feature that has become a bug I can’t disable).

As I’ve slept these past weeks, the clock has served as a handy, adjustable colored light to have on in the evening or as a cozy nightlight. The ocean noises have been blending in with the AC’s sounds, clearing my mind. I’d sleepily ponder if certain sounds I heard were coming from the clock or my AC. That’s the dull, fruitless thinking that quickly gets me snoozing.

Playing sounds to fall asleep is obviously not new (some of my earliest memories are of falling asleep to a Lady and the Tramp cassette). Today, many would prefer using an app or playing a long video over getting a $170 alarm clock for the experience. Still, the convenience of setting repeating Routines on a device dedicated to being a clock turned out to be an asset. It’s also nice to be able to start a Routine by pressing an on-device button rather than having to use my phone to play sleeping sounds.

But the idea of the clock’s lighting and sounds helping me wind down in the hours before bed would only succeed if I was by the clock when winding down. I’m usually spending my last waking moments in my living room. So unless I’m willing to change my habits, or get a Restore 2 for the living room, this feature is lost on me.

Sunrise alarm clock didn’t make waking up easier—but made sleeping more peaceful Read More »