Tech

In Apple’s first-quarter earnings, the Mac leads the way in sales growth

Apple fell slightly short of investor expectations when it reported its first-quarter earnings today. While sales were up 4 percent overall, the iPhone showed signs of weakness, and sales in the Chinese market slipped by just over 11 percent.

CEO Tim Cook told CNBC that the iPhone performed better in countries where Apple Intelligence was available, like the US—seemingly suggesting that the slip was partially because Chinese consumers do not see enough reason to buy new phones without Apple Intelligence. (He also said, “Half of the decline is due to a change in channel inventory.”) iPhone sales also slipped in China during this same quarter last year; this was the first full quarter during which the iPhone 16 was available.

In any case, Cook said the company plans to roll out Apple Intelligence in additional languages, including Mandarin, this spring.

Apple’s wearables category also declined slightly, but only by 2 percent.

Despite the trends that worried investors, Apple reported $36.33 billion in net income for the first quarter. That’s 7.1 percent more than last year’s Q1. This was driven by the Mac, the iPad, and Services (which includes everything from Apple Music to iCloud), all of which posted solid gains. Services was up 14 percent, continuing a strong streak for that business, while the Mac and the iPad both jumped 15 percent.

The uptick in Mac and iPad sales was likely helped by several new Mac models and a new iPad mini starting shipments last October.

Cook shared some other interesting numbers in the earnings call with investors and the press: The company has an active base of 2.35 billion devices, and it has more than 1 billion active subscriptions.

For the first time, a privately developed aircraft has flown faster than sound

A new generation of companies, including Boom Supersonic, is aiming to meld new ideas, technology, and a commercial approach to develop more cost-effective travel at supersonic speeds. The significance of Tuesday’s flight is that it marks the first time one of these companies has built and flown its own vehicle above the speed of sound.

Now, on to the real thing

Although this is an important and notable step—this flight was the culmination of 11 successful test flights of the XB-1 since March 2024—it is only a step along the path toward development and operation of a commercially successful supersonic aircraft. Now Boom must build the real thing.

The company said the XB-1 demonstrator validates many of the key technologies that will be incorporated into Overture, including carbon-fiber composites, digital stability augmentation, and supersonic intakes. However, Overture will feature a different propulsion system named Symphony. To develop that engine, the company is working with several partners, including Florida Turbine Technologies for engine design, GE Additive for additive technology design consulting, and StandardAero for maintenance.

There appears to be plenty of demand in the commercial air travel industry for a company that can develop and deliver supersonic aircraft to the market.

Boom Supersonic said it has taken 130 orders and pre-orders from American Airlines, United Airlines, and Japan Airlines for the Overture aircraft. In 2024, Boom said it completed construction on the Overture “Superfactory” in Greensboro, North Carolina, which will scale to produce 66 Overture aircraft per year. Boom is hoping to start delivering on those orders before the end of the decade.

Pebble’s founder wants to relaunch the e-paper smartwatch for its fans

With that code, Migicovsky can address the second reason for a new Pebble—nothing has really replaced the original. On his blog, Migicovsky defines the core of Pebble’s appeal: an always-on screen; long battery life; a “simple and beautiful user experience” focused on useful essentials; physical buttons; and hackability, including custom watchfaces.

Migicovsky writes that a small team is tackling the hardware aspect, making a watch that runs PebbleOS and “basically has the same specs and features as Pebble” but with “fun new stuff as well.” Crucially, they’re taking a different path than the original Pebble company:

“This time round, we’re keeping things simple. Lessons were learned last time! I’m building a small, narrowly focused company to make these watches. I don’t envision raising money from investors, or hiring a big team. The emphasis is on sustainability. I want to keep making cool gadgets and keep Pebble going long into the future.”

Still not an Apple Watch, by design

Pebble watch showing a text watchface (reading 12:27 p.m.), with green silicone band and prominent side button.

The Pebble 2 HR, the last Pebble widely shipped. Credit: Valentina Palladino

Ars asked Migicovsky by email if modern-day Pebbles would have better interoperability with Apple’s iPhones than the original models. “No, even less now!” Migicovsky replied, pointing to the Department of Justice’s 2024 lawsuit against Apple. That lawsuit claims that Apple “limited the functionality of third-party smartwatches” to keep people using Apple Watches and, as a result, less likely to switch away from iPhones.

According to the suit, Apple has limited the functionality of third-party smartwatches so that users who purchase the Apple Watch face substantial out-of-pocket costs if they do not keep buying iPhones. The core functionality Migicovsky detailed, he wrote, is still possible on iOS, though certain advanced features, like replying to notifications with voice dictation, may be limited to Android phones.

Migicovsky’s site and blog do not set a timeline for new hardware. His last major project, the multi-protocol chat app Beeper, was sold to WordPress.com owner Automattic in April 2024, following a protracted battle with Apple over access to its iMessage protocol.

New FPGA-powered retro console re-creates the PlayStation, CD-ROM drive optional

Retro game enthusiasts may already be acquainted with Analogue, a company that designs and manufactures updated versions of classic consoles that can play original games but also be hooked up to modern televisions and monitors. The most recent of its announcements is the Analogue 3D, a console designed to play Nintendo 64 cartridges.

Now, a company called Retro Remake is reigniting the console wars of the 1990s with its SuperStation one, a new-old game console designed to play original Sony PlayStation games and work with original accessories like controllers and memory cards. Currently available to pre-order for $180, the console is expected to ship no later than Q4 2025.

The base console is modeled on the redesigned PSOne console from mid-2000, released late in the console’s lifecycle to appeal to buyers on a budget who couldn’t afford a then-new PlayStation 2. The SuperStation one includes two PlayStation controller ports and memory card slots on the front, plus a USB-A port. But there are lots of modern amenities on the back, including a USB-C port for power, two USB-A ports, an HDMI port for new TVs, DIN10 and VGA ports that support analog video output, and an Ethernet port. Other analog video outputs, including component and RCA outputs, are located on the sides behind small covers. The console also supports Wi-Fi and Bluetooth.

With iOS 18.3, Apple Intelligence is now on by default

As is custom, Apple rolled out software updates to all its platforms at once today. All users should now have access to the public releases of iOS 18.3, macOS Sequoia 15.3, watchOS 11.3, iPadOS 18.3, tvOS 18.3, and visionOS 2.3.

Also, as usual, the iOS update is the meatiest of the bunch. Most of the changes relate to Apple Intelligence, a suite of features built on deep learning models. The first Apple Intelligence features arrived in iOS 18.1, with additional ones added in iOS 18.2.

iOS 18.3 doesn’t add any significant new features to Apple Intelligence—instead, it tweaks what’s already there. Whereas Apple Intelligence was opt-in in previous OS versions, it is now on by default in iOS 18.3 on supported devices.

For the most part, that shouldn’t be a noticeable change for the majority of users, except for one thing: notification summaries. As we’ve reported, the feature that summarizes large batches of notifications using a large language model is hit-and-miss at best.

For most apps, not much has changed on that front, but Apple announced that with iOS 18.3, it’s temporarily disabling notification summaries for apps from the “News & Entertainment” category in light of criticisms by the BBC and others about how the feature was getting the substance of headlines wrong. The feature will still mess up summarizing your text messages and emails, though.

Apple says it has changed the presentation of summaries to make it clearer that they are distinct from other, non-AI-generated summaries and that they are in beta and may be inaccurate.

Other updates include one to visual intelligence, a feature available on the most recent phones that gives you information on objects your camera is focused on. It can now identify more plants and animals, and you can create calendar events from flyers or posters seen in your viewfinder.

Nvidia starts to wind down support for old GPUs, including the long-lived GTX 1060

Nvidia is launching the first volley of RTX 50-series GPUs based on its new Blackwell architecture, starting with the RTX 5090 and working downward from there. The company also appears to be winding down support for a few of its older GPU architectures, according to these CUDA release notes spotted by Tom’s Hardware.

The release notes say that CUDA support for the Maxwell, Pascal, and Volta GPU architectures “is considered feature-complete and will be frozen in an upcoming release.” While all of these architectures—which collectively cover GeForce GPUs from the old GTX 700 series all the way up through 2016’s GTX 1000 series, plus a couple of Quadro and Titan workstation cards—are still supported by Nvidia’s December Game Ready driver package, the end of new CUDA feature support suggests that these GPUs will be dropped from those driver packages before long.
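
For readers wondering whether a particular card falls into the affected group, here is a rough Python sketch. It assumes PyTorch with CUDA support is installed (any tool that reports the GPU’s compute capability would work the same way), and the helper name is just for illustration; the compute-capability ranges are Nvidia’s standard designations, with Maxwell at 5.x, Pascal at 6.x, and Volta at 7.0/7.2.

    import torch

    # Architectures named in the CUDA release notes as feature-frozen,
    # keyed by compute-capability major version.
    FROZEN = {
        5: "Maxwell",  # sm_50/52/53: GTX 750/750 Ti and the GTX 900 series
        6: "Pascal",   # sm_60/61/62: the GTX 10-series, including the GTX 1060
    }
    VOLTA = {(7, 0), (7, 2)}  # Titan V and Jetson Xavier parts

    def cuda_support_status(device: int = 0) -> str:
        major, minor = torch.cuda.get_device_capability(device)
        name = torch.cuda.get_device_name(device)
        if major in FROZEN:
            return f"{name}: {FROZEN[major]} (sm_{major}{minor}), CUDA feature-frozen"
        if (major, minor) in VOLTA:
            return f"{name}: Volta (sm_{major}{minor}), CUDA feature-frozen"
        return f"{name}: sm_{major}{minor}, still receiving new CUDA features"

    if torch.cuda.is_available():
        print(cuda_support_status())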

It’s common for Nvidia and AMD to drop support for another batch of architectures all at once every few years; Nvidia last dropped support for older cards in 2021, and AMD dropped support for several prominent GPUs in 2023. Both companies maintain a separate driver branch for some of their older cards, but releases usually only happen every few months, and they focus on security updates, not on providing new features or performance optimizations for new games.

Nvidia GeForce RTX 5090 costs as much as a whole gaming PC—but it sure is fast


Even setting aside Frame Generation, this is a fast, power-hungry $2,000 GPU. Credit: Andrew Cunningham

Nvidia’s GeForce RTX 5090 starts at $1,999 before you factor in upsells from the company’s partners or price increases driven by scalpers and/or genuine demand. It costs more than my entire gaming PC.

The new GPU is so expensive that you could build an entire well-specced gaming PC with Nvidia’s next-fastest GPU in it—the $999 RTX 5080, which we don’t have in hand yet—for the same money, or maybe even a little less with judicious component selection. It’s not the most expensive GPU that Nvidia has ever launched—2018’s $2,499 Titan RTX has it beat, and 2022’s RTX 3090 Ti also cost $2,000—but it’s safe to say it’s not really a GPU intended for the masses.

At least as far as gaming is concerned, the 5090 is the very definition of a halo product; it’s for people who demand the best and newest thing regardless of what it costs (the calculus is probably different for deep-pocketed people and companies who want to use them as some kind of generative AI accelerator). And on this front, at least, the 5090 is successful. It’s the newest and fastest GPU you can buy, and the competition is not particularly close. It’s also a showcase for DLSS Multi-Frame Generation, a new feature unique to the 50-series cards that Nvidia is leaning on heavily to make its new GPUs look better than they already are.

Founders Edition cards: Design and cooling

                    RTX 5090      RTX 4090      RTX 5080      RTX 4080 Super
CUDA cores          21,760        16,384        10,752        10,240
Boost clock         2,410 MHz     2,520 MHz     2,617 MHz     2,550 MHz
Memory bus width    512-bit       384-bit       256-bit       256-bit
Memory bandwidth    1,792 GB/s    1,008 GB/s    960 GB/s      736 GB/s
Memory size         32GB GDDR7    24GB GDDR6X   16GB GDDR7    16GB GDDR6X
TGP                 575 W         450 W         360 W         320 W

We won’t spend too long talking about the specific designs of Nvidia’s Founders Edition cards since many buyers will experience the Blackwell GPUs with cards from Nvidia’s partners instead (the cards we’ve seen so far mostly look like the expected fare: gargantuan triple-slot triple-fan coolers, with varying degrees of RGB). But it’s worth noting that Nvidia has addressed a couple of my functional gripes with the 4090/4080-series design.

The first was the sheer dimensions of each card—not an issue unique to Nvidia, but one that frequently caused problems for me as someone who tends toward ITX-based PCs and smaller builds. The 5090 and 5080 FE designs are the same length and height as the 4090 and 4080 FE designs, but they only take up two slots instead of three, which will make them an easier fit for many cases.

Nvidia has also tweaked the cards’ 12VHPWR connector, recessing it into the card and mounting it at a slight angle instead of having it sticking straight out of the top edge. The height of the 4090/4080 FE design made some cases hard to close up once you factored in the additional height of a 12VHPWR cable or Nvidia’s many-tentacled 8-pin-to-12VHPWR adapter. The angled connector still extends a bit beyond the top of the card, but it’s easier to tuck the cable away so you can put the side back on your case.

Finally, Nvidia has changed its cooler—whereas most OEM GPUs mount all their fans on the top of the GPU, Nvidia has historically placed one fan on each side of the card. In a standard ATX case with the GPU mounted parallel to the bottom of the case, this wasn’t a huge deal—there’s plenty of room for that air to circulate inside the case and to be expelled by whatever case fans you have installed.

But in “sandwich-style” ITX cases, where a riser cable wraps around so the GPU can be mounted parallel to the motherboard, the fan on the bottom side of the GPU was poorly placed. In many sandwich-style cases, the GPU fan will dump heat against the back of the motherboard, making it harder to keep the GPU cool and creating heat problems elsewhere besides. The new GPUs mount both fans on the top of the cards.

Nvidia’s Founders Edition cards have had heat issues in the past—most notably the 30-series GPUs—and that was my first question going in. A smaller cooler plus a dramatically higher peak power draw seems like a recipe for overheating.

Temperatures for the various cards we re-tested for this review. The 5090 FE is the toastiest of all of them, but it still has a safe operating temperature.

At least for the 5090, the smaller cooler does mean higher temperatures—around 10 to 12 degrees Celsius higher when running the same benchmarks as the RTX 4090 Founders Edition. And while temperatures of around 77 degrees aren’t hugely concerning, this is something of a best-case scenario: an adequately cooled testbed with the side panel removed entirely and ambient temperatures of around 21° or 22° Celsius. You’ll just want to make sure you have a good amount of airflow in your case if you buy one of these.

Testbed notes

A new high-end Nvidia GPU is a good reason to tweak our test bed and suite of games, and we’ve done both here. Mainly, we added a 1050 W Thermaltake Toughpower GF A3 power supply—Nvidia recommends at least 1000 W for the 5090, and this one has a native 12VHPWR connector for convenience. We’ve also swapped the Ryzen 7 7800X3D for a slightly faster Ryzen 7 9800X3D to reduce the odds that the CPU will bottleneck performance as we try to hit high frame rates.

As for the suite of games, we’ve removed a couple of older titles and added some with built-in benchmarks that will tax these GPUs a bit more, especially at 4K with all the settings turned up. Those games include the RT Overdrive preset in the perennially punishing Cyberpunk 2077 and Black Myth: Wukong in Cinematic mode, both games where even the RTX 4090 struggles to hit 60 fps without an assist from DLSS. We’ve also added Horizon Zero Dawn Remastered, a recent release that doesn’t include ray-tracing effects but does support most DLSS 3 and FSR 3 features (including FSR Frame Generation).

We’ve tried to strike a balance between games with ray-tracing effects and games without them, though most AAA games these days include it, and modern GPUs should be able to handle it well (best of luck to AMD with its upcoming RDNA 4 cards).

For the 5090, we’ve run all tests in 4K—if you don’t care about running games in 4K, even if you want super-high frame rates at 1440p or on some kind of ultrawide monitor, the 5090 is probably overkill. When we run upscaling tests, we use the newest DLSS version available for Nvidia cards, the newest FSR version available for AMD cards, and the newest XeSS version available for Intel cards (not relevant here, just stating for the record), and we use the “Quality” setting (at 4K, that equates to an actual rendering resolution of 1440p).
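
As a quick illustration of where that 1440p figure comes from: the “Quality” preset for these upscalers renders at roughly two-thirds of the output resolution on each axis, so a 3840×2160 output is rendered internally at about 2560×1440. A minimal sketch of that arithmetic (the helper name is just for illustration):

    # Per-axis render scale for the "Quality" upscaling preset (about 2/3).
    QUALITY_SCALE = 2 / 3

    def internal_resolution(out_w: int, out_h: int, scale: float = QUALITY_SCALE):
        # Round to the nearest pixel; real implementations may snap differently.
        return round(out_w * scale), round(out_h * scale)

    print(internal_resolution(3840, 2160))  # (2560, 1440): 4K output, ~1440p render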

Rendering performance: A lot faster, a lot more power-hungry

Before we talk about Frame Generation or “fake frames,” let’s compare apples to apples and just examine the 5090’s rendering performance.

The card mainly benefits from four things compared to the 4090: the updated Blackwell GPU architecture, a nearly 33 percent increase in the number of CUDA cores, an upgrade from GDDR6X to GDDR7, and a move from a 384-bit memory bus to a 512-bit bus. It also jumps from 24GB of RAM to 32GB, but games generally aren’t butting up against a 24GB limit yet, so the capacity increase by itself shouldn’t really change performance if all you’re focused on is gaming.
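
Plugging the numbers from the spec table above into a quick back-of-the-envelope calculation shows how closely those raw spec bumps track the performance gains and power increases measured below. This is a sketch, not a benchmark, and the helper name is illustrative:

    # Spec-sheet deltas, RTX 5090 vs. RTX 4090, from the table earlier in this review.
    specs = {
        "RTX 5090": {"CUDA cores": 21760, "Memory bandwidth (GB/s)": 1792, "Rated TGP (W)": 575},
        "RTX 4090": {"CUDA cores": 16384, "Memory bandwidth (GB/s)": 1008, "Rated TGP (W)": 450},
    }

    def pct_increase(new: float, old: float) -> float:
        return (new / old - 1) * 100

    for metric in specs["RTX 5090"]:
        delta = pct_increase(specs["RTX 5090"][metric], specs["RTX 4090"][metric])
        print(f"{metric}: +{delta:.0f}%")
    # CUDA cores: +33%, memory bandwidth: +78%, rated TGP: +28%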

And for people who prioritize performance over all else, the 5090 is a big deal—it’s the first consumer graphics card from any company that is faster than a 4090, as Nvidia never spruced up the 4090 last year when it did its mid-generation Super refreshes of the 4080, 4070 Ti, and 4070.

Comparing natively rendered games at 4K, the 5090 is between 17 percent and 40 percent faster than the 4090, with most of the games we tested landing somewhere in the low to high 30 percent range. That’s an undeniably big bump, one that’s roughly commensurate with the increase in the number of CUDA cores. Tests run with DLSS enabled (both upscaling-only and with Frame Generation running in 2x mode) improve by roughly the same amount.

You could find things to be disappointed about if you went looking for them. That 30-something-percent performance increase comes with a 35 percent increase in power use in our testing under load with punishing 4K games—the 4090 tops out around 420 W, whereas the 5090 went all the way up to 573 W, with the 5090 coming closer to its 575 W TDP than the 4090 does to its theoretical 450 W maximum. The 50-series cards use the same TSMC 4N manufacturing process as the 40-series cards, and increasing the number of transistors without changing the process results in a chip that uses more power (though it should be said that capping frame rates, running at lower resolutions, or running less-demanding games can rein in that power use a bit).

Power draw under load goes up by an amount roughly commensurate with performance. The 4090 was already power-hungry; the 5090 is dramatically more so. Credit: Andrew Cunningham

The 5090’s 30-something percent increase over the 4090 might also seem underwhelming if you recall that the 4090 was around 55 percent faster than the previous-generation 3090 Ti while consuming about the same amount of power. To be even faster than a 4090 is no small feat—AMD’s fastest GPU is more in line with Nvidia’s 4080 Super—but if you’re comparing the two cards using the exact same tests, the relative leap is less seismic.

That brings us to Nvidia’s answer for that problem: DLSS 4 and its Multi-Frame Generation feature.

DLSS 4 and Multi-Frame Generation

As a refresher, Nvidia’s DLSS Frame Generation feature, as introduced in the GeForce 40-series, takes DLSS upscaling one step further. The upscaling feature inserted interpolated pixels into a rendered image to make it look like a sharper, higher-resolution image without having to do all the work of rendering all those pixels. DLSS FG would interpolate an entire frame between rendered frames, boosting your FPS without dramatically boosting the amount of work your GPU was doing. If you used DLSS upscaling and FG at the same time, Nvidia could claim that seven out of eight pixels on your screen were generated by AI.

DLSS Multi-Frame Generation (hereafter MFG, for simplicity’s sake) does the same thing, but it can generate one to three interpolated frames for every rendered frame. The marketing numbers have gone up, too; now, 15 out of every 16 pixels on your screen can be generated by AI.
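
One way to arrive at those figures, assuming DLSS Performance-mode upscaling (which renders a quarter of the output pixels, or half the resolution on each axis) combined with Frame Generation rendering only one of every N displayed frames, is the simple arithmetic below. The function name is illustrative:

    from fractions import Fraction

    def ai_pixel_share(rendered_pixels: Fraction, frames_per_rendered: int) -> Fraction:
        # Fraction of on-screen pixels the GPU actually rendered...
        rendered = rendered_pixels * Fraction(1, frames_per_rendered)
        # ...and therefore the fraction generated by the DLSS models.
        return 1 - rendered

    performance_mode = Fraction(1, 4)  # DLSS Performance renders 1/4 of output pixels
    print(ai_pixel_share(performance_mode, 2))  # 7/8: original Frame Generation
    print(ai_pixel_share(performance_mode, 4))  # 15/16: Multi-Frame Generation, 4x mode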

Nvidia might point to this and say that the 5090 is over twice as fast as the 4090, but that’s not really comparing apples to apples. Expect this issue to persist over the lifetime of the 50-series. Credit: Andrew Cunningham

Nvidia provided reviewers with a preview build of Cyberpunk 2077 with DLSS MFG enabled, which gives us an example of how those settings will be exposed to users. For 40-series cards that only support the regular DLSS FG, you won’t notice a difference in games that support MFG—Frame Generation is still just one toggle you can turn on or off. For 50-series cards that support MFG, you’ll be able to choose from among a few options, just as you currently can with other DLSS quality settings.

The “2x” mode is the old version of DLSS FG and is supported by both the 50-series cards and 40-series GPUs; it promises one generated frame for every rendered frame (two frames total, hence “2x”). The “3x” and “4x” modes are new to the 50-series and promise two and three generated frames (respectively) for every rendered frame. Like the original DLSS FG, MFG can be used in concert with normal DLSS upscaling, or it can be used independently.

One problem with the original DLSS FG was latency—user input was only being sampled at the natively rendered frame rate, meaning you could be looking at 60 frames per second on your display but only having your input polled 30 times per second. Another is image quality; as good as the DLSS algorithms can be at guessing and recreating what a natively rendered pixel would look like, you’ll inevitably see errors, particularly in fine details.

Both these problems contribute to the third problem with DLSS FG: Without a decent underlying frame rate, the lag you feel and the weird visual artifacts you notice will both be more pronounced. So DLSS FG can be useful for turning 120 fps into 240 fps, or even 60 fps into 120 fps. But it’s not as helpful if you’re trying to get from 20 or 30 fps up to a smooth 60 fps.
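
To put rough numbers on that, here is a small sketch of displayed frame rate versus input sampling in the 4x mode, under the simplifying assumption that input is sampled once per natively rendered frame (the helper name is illustrative):

    def frame_gen_feel(native_fps: float, mode: int) -> str:
        displayed_fps = native_fps * mode       # frames shown per second
        input_interval_ms = 1000 / native_fps   # input still sampled per rendered frame
        return (f"{native_fps:.0f} fps native, {mode}x mode: "
                f"{displayed_fps:.0f} fps displayed, input every {input_interval_ms:.1f} ms")

    for native in (20, 30, 60, 120):
        print(frame_gen_feel(native, 4))
    # 20 fps native still means one input sample every 50 ms, however smooth
    # the 80 fps output looks; 120 fps native gets 480 fps with samples every 8.3 ms.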

We’ll be taking a closer look at the DLSS upgrades in the next couple of weeks (including MFG and the new transformer model, which supposedly increases upscaling quality and supports all RTX GPUs). But in our limited testing so far, the issues with DLSS MFG are basically the same as with the first version of Frame Generation, just slightly more pronounced. In the built-in Cyberpunk 2077 benchmark, the most visible issues are with some bits of barbed-wire fencing, which get smoother-looking and less detailed as you crank up the number of AI-generated frames. But the motion does look fluid and smooth, and the frame rate counts are admittedly impressive.

But as we noted in our 4090 review, the xx90 cards portray FG and MFG in the best light possible since the card is already capable of natively rendering such high frame rates. It’s on lower-end cards where the shortcomings of the technology become more pronounced. Nvidia might say that the upcoming RTX 5070 is “as fast as a 4090 for $549,” and it might be right in terms of the number of frames the card can put up on your screen every second. But responsiveness and visual fidelity on the 4090 will be better every time—AI is a good augmentation for rendered frames, but it’s iffy as a replacement for rendered frames.

A 4090, amped way up

Nvidia’s GeForce RTX 5090. Credit: Andrew Cunningham

The GeForce RTX 5090 is an impressive card—it’s the only consumer graphics card released in over two years that can outperform the RTX 4090. The main caveats are its sky-high power consumption and sky-high price; by itself, it costs as much (and consumes as much power) as an entire mainstream gaming PC. The card is aimed at people who care about speed way more than they care about price, but it’s still worth putting it into context.

The main controversy, as with the 40-series, is how Nvidia talks about its Frame Generation-inflated performance numbers. Frame Generation and Multi-Frame Generation are tools in a toolbox—there will be games where they make things look great and run fast with minimal noticeable impact to visual quality or responsiveness, games where those impacts are more noticeable, and games that never add support for the features at all. (As well-supported as DLSS generally is in new releases, it is incumbent upon game developers to add it—and update it when Nvidia puts out a new version.)

But using those Multi-Frame Generation-inflated FPS numbers to make topline comparisons to last-generation graphics cards just feels disingenuous. No, an RTX 5070 will not be as fast as an RTX 4090 for just $549, because not all games support DLSS MFG, and not all games that do support it will run it well. Frame Generation still needs a good base frame rate to start with, and the slower your card is, the more issues you might notice.

Fuzzy marketing aside, Nvidia is still the undisputed leader in the GPU market, and the RTX 5090 extends that leadership for what will likely be another entire GPU generation, since both AMD and Intel are focusing their efforts on higher-volume, lower-cost cards right now. DLSS is still generally better than AMD’s FSR, and Nvidia does a good job of getting developers of new AAA game releases to support it. And if you’re buying this GPU to do some kind of rendering work or generative AI acceleration, Nvidia’s performance and software tools are still superior. The misleading performance claims are frustrating, but Nvidia still gains a lot of real advantages from being as dominant and entrenched as it is.

The good

  • Usually 30-something percent faster than an RTX 4090
  • Redesigned Founders Edition card is less unwieldy than the bricks that were the 4090/4080 design
  • Adequate cooling, despite the smaller card and higher power use
  • DLSS Multi-Frame Generation is an intriguing option if you’re trying to hit 240 or 360 fps on your high-refresh-rate gaming monitor

The bad

  • Much higher power consumption than the 4090, which already consumed more power than any other GPU on the market
  • Frame Generation is good at making a game that’s running fast run faster, but it’s not as good at bringing a slow game up to a smooth 60 fps
  • Nvidia’s misleading marketing around Multi-Frame Generation is frustrating—and will likely be more frustrating for lower-end cards since they aren’t getting the same bumps to core count and memory interface that the 5090 gets

The ugly

  • You can buy a whole lot of PC for $2,000, and we wouldn’t bet on this GPU being easy to find at MSRP

Andrew is a Senior Technology Reporter at Ars Technica, with a focus on consumer tech including computer hardware and in-depth reviews of operating systems like Windows and macOS. Andrew lives in Philadelphia and co-hosts a weekly book podcast called Overdue.

Reddit won’t interfere with users revolting against X with subreddit bans

A Reddit spokesperson told Ars that decisions to ban or not ban X links are user-driven. Subreddit members are allowed to suggest and institute subreddit rules, they added.

“Notably, many Reddit communities also prohibit Reddit links,” the Reddit representative pointed out. They noted that Reddit as a company doesn’t currently have any ban on links to X.

A ban against links to an entire platform isn’t outside of the ordinary for Reddit. Numerous subreddits ban social media links, Reddit’s spokesperson said. r/EarthPorn, a subreddit for landscape photography, for example, doesn’t allow website links because all posts “must be static images,” per the subreddit’s official rules. r/AskReddit, meanwhile, only allows for questions asked in the title of a Reddit post and doesn’t allow for use of the text box, including for sharing links.

“Reddit has a longstanding commitment to freedom of speech and freedom of association,” Reddit’s spokesperson said. They added that any person is free to make or moderate their own community. Those unsatisfied with a forum about Seahawks football that doesn’t have X links could feel free to make their own subreddit. That said, some of the subreddits considering X bans, like r/MadeMeSmile, already have millions of followers.

Meta bans also under discussion

As 404 Media noted, some Redditors are also pushing to block content from Facebook, Instagram, and other Meta properties in response to new Donald Trump-friendly policies instituted by owner Mark Zuckerberg, like Meta killing diversity programs and axing third-party fact-checkers.

Wine 10.0 brings Arm Windows apps to Linux, still is not an emulator

The open source Wine project—sometimes stylized WINE, for Wine Is Not an Emulator—has become an important tool for companies and individuals who want to make Windows apps and games run on operating systems like Linux or even macOS. The CrossOver software for Mac and Windows, Apple’s Game Porting Toolkit, and the Proton project that powers Valve’s SteamOS and the Steam Deck are all rooted in Wine, and the attention and resources put into the project in recent years have dramatically improved its compatibility and usefulness.

Yesterday, the Wine project announced the stable release of version 10.0, the next major version of the compatibility layer that is not an emulator. The headliner for this release is support for ARM64EC, the application binary interface (ABI) used for Arm apps in Windows 11, but the release notes say that the release contains “over 6,000 individual changes” produced over “a year of development effort.”

ARM64EC allows developers to mix Arm and x86-compatible code—if you’re making an Arm-native version of your app, you can still allow the use of more obscure x86-based plugins or add-ons without having to port everything over at once. Wine 10.0 also supports ARM64X, a different type of application binary file that allows ARM64EC code to be mixed with older, pre-Windows 11 ARM64 code.

Wine’s ARM64EC support does have one limitation that will keep it from working on some prominent Arm Linux distributions, at least by default: the release notes say it “requires the system page size to be 4K, since that is what the Windows ABI specifies.” Several prominent Linux-on-Arm distributions default to a 16K page size because it can improve performance—when page sizes are smaller, you need more of them, and managing a higher number of pages can introduce extra CPU overhead.

Asahi Linux, the Fedora-based distribution that is working to bring Linux to Apple Silicon Macs, uses 16K pages because that’s all Apple’s processors support. Some versions of the Raspberry Pi OS also default to a 16K page size, though it’s possible to switch to 4K for compatibility’s sake. Given that the Raspberry Pi and Asahi Linux are two of the biggest Linux-on-Arm projects going right now, that does at least somewhat limit the appeal of ARM64EC support in Wine. But as we’ve seen with Proton and other successful Wine-based compatibility layers, laying the groundwork now can deliver big benefits down the road.
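
If you want to check where your own Arm system stands, the kernel’s page size is easy to query. This short sketch simply reads it and compares it against the 4K size the Wine release notes call for:

    import mmap

    page_size = mmap.PAGESIZE  # the kernel's page size in bytes
    print(f"System page size: {page_size // 1024}K")
    if page_size == 4096:
        print("4K pages: compatible with Wine 10.0's ARM64EC support")
    else:
        print("Non-4K pages (e.g., 16K on Asahi Linux): ARM64EC won't work here")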

Samsung’s Galaxy S25 event was an AI presentation with occasional phone hardware

Samsung announced the Galaxy S25, S25+, and S25 Ultra at its Unpacked event today. What is different from last year’s models? With the phones themselves, not much, other than a new chipset and an upgraded ultra-wide camera. But pure AI optimism? Samsung managed to pack a whole lot more of that into its launch event and promotional materials.

The corners on the S25 Ultra are a bit more rounded, the edges are flatter, and the bezels seem to be slightly thinner. The S25 and S25+ models have the same screen size as the S24 models, at 6.2 and 6.7 inches, respectively, while the Ultra notches up slightly from 6.8 to 6.9 inches.

Samsung’s S25 Ultra, in titanium builds colored silver blue, black, gray, and white silver. Credit: Samsung

The S25 Ultra, starting at $1,300, touts a Snapdragon 8 Elite processor, a new 50-megapixel ultra-wide lens, and what Samsung claims is improved detail in software-derived zoom images. It comes with the S Pen, a vestige of the departed Note line, but as The Verge notes, there is no Bluetooth included, so you can’t pull off hand gestures with the pen off the screen or use it as a quirky remote camera trigger.

Samsung’s S25 Plus phones, in silver blue, navy, and icy blue. Credit: Samsung

It’s much the same with the S25 and S25 Plus, starting at $800. The base models got an upgrade to a default of 12GB of RAM. The displays, cameras, and general shape and build are the same. All the Galaxy devices released in 2025 have Qi2 wireless charging support—but without magnets built into the phones themselves. You’ll need a “Qi2 Ready” magnetic case to get a sturdy attachment and the 15 W top charging speed.

One thing that, thankfully, hasn’t changed is Samsung’s recent bump up in longevity. Each Galaxy S25 model gets seven years of security updates and seven years of OS upgrades, which matches Google’s Pixel line.

Side view of the Galaxy S25 Edge, which is looking rather thin. Credit: Samsung

At the very end of Samsung’s event, for less than 30 seconds, a “Galaxy S25 Edge” was teased. In a mostly black field with some shiny metal components, Samsung showed what appeared to be the notably slimmer variant of the S25 that had been rumored; the same kinds of leaks about an “iPhone Air” have been circulating. No details were provided beyond the name and a brief video suggesting its svelte nature.

Bambu Lab pushes a “control system” for 3D printers, and boy, did it not go well

Bambu Lab, a major maker of 3D printers for home users and commercial “farms,” is pushing an update to its devices that it claims will improve security while still offering third-party tools “authorized” access. Some in the user community—and 3D printing advocates broadly—are pushing back, suggesting the firm has other, more controlling motives.

As is perhaps appropriate for 3D printing, this matter has many layers, some long-standing arguments about freedom and rights baked in, and a good deal of heat.

Bambu Lab’s marketing image for Bambu Handy, its cloud service that allows you to “Control your printer anytime anywhere, also we support SD card and local network to print the projects.” Credit: Bambu Lab

Printing more, tweaking less

Bambu Lab, launched in 2022, has stood out in the burgeoning consumer 3D printing market because of its printers’ capacity for printing at high speeds without excessive tinkering or maintenance. The product page for the X1 series, the first line targeted by the new security measures, starts with the credo, “We hated 3D printing as much as we loved it.” Bambu’s faster, less fussy multicolor printers garnered attention—including an ongoing patent lawsuit from established commercial printer maker Stratasys.

Part of Bambu’s “just works” nature relies on a more closed system than those of its often more open counterparts. Sending a print to most Bambu printers typically requires either Bambu’s cloud service or, in “LAN mode,” a manual “sneakernet” transfer through SD cards. Cloud connections also grant perks like remote monitoring, and many customers have accepted the trade-off.

However, other customers, eager to tinker with third-party software and accessories, along with those fearing a subscription-based future for 3D printing, see Bambu Lab’s purported security concerns as something else. And Bambu acknowledges that its messaging on its upcoming change came out in rough shape.

Authorized access and operations

A post titled “Firmware Update Introducing New Authorization Control System,” published by Bambu Lab on January 16 (and since updated twice), states that Bambu’s printers—starting with its popular X series, then the P and A lines—will receive a “significant security enhancement to ensure only authorized access and operations are permitted.” This would, Bambu suggested, mitigate risks of “remote hacks or printer exposure issues” and lower the risk of “abnormal traffic or attacks.”

New year, same streaming headaches: Netflix raises prices by up to 16 percent

Today Netflix, the biggest streaming service based on subscriber count, announced that it will increase subscription prices by up to $2.50 per month.

In a letter to investors [PDF], Netflix announced price changes starting today in the US, Canada, Argentina, and Portugal.

People who subscribe to Netflix’s cheapest ad-free plan (Standard) will see the biggest increase in monthly costs. The subscription will go from $15.49/month to $17.99/month, representing a 16.14 percent bump. The subscription tier allows commercial-free streaming for up to two devices and maxes out at 1080p resolution. It’s Netflix’s most popular subscription in the US, Bloomberg noted.

Netflix’s Premium ad-free tier has cost $22.99/month but is going up 8.7 percent to $24.99/month. The priciest Netflix subscription supports simultaneous streaming for up to four devices, downloads on up to six devices, 4K resolution, HDR, and spatial audio.

Finally, Netflix’s Standard With Ads tier will go up by $1, or 14.3 percent, to $7.99/month. This tier supports streaming from up to two devices and up to 1080p resolution. In Q4 2024, this subscription represented “over 55 percent of sign-ups” in countries where it’s available and generally grew “nearly 30 percent quarter over quarter,” Netflix said in its quarterly letter to investors.
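
For reference, the percentage figures above follow directly from the old and new monthly prices. The prior Standard With Ads price isn’t stated in the letter, but the $1 increase to $7.99 implies $6.99; a quick check:

    # Old and new monthly prices, in US dollars.
    plans = {
        "Standard (ad-free)": (15.49, 17.99),
        "Premium":            (22.99, 24.99),
        "Standard With Ads":  (6.99, 7.99),  # prior price inferred from the $1 increase
    }

    for name, (old, new) in plans.items():
        bump = (new / old - 1) * 100
        print(f"{name}: ${old:.2f} -> ${new:.2f} (+{bump:.1f}%)")
    # Standard: +16.1%, Premium: +8.7%, Standard With Ads: +14.3%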

“As we continue to invest in programming and deliver more value for our members, we will occasionally ask our members to pay a little more so that we can re-invest to further improve Netflix,” Netflix’s letter reads.
