Ryzen AI


AMD’s new laptop CPU lineup is a mix of new silicon and new names for old silicon

AMD’s CES announcements include a tease about next-gen graphics cards, a new flagship desktop CPU, and a modest refresh of its processors for handheld gaming PCs. But the company’s largest announcement, by volume, is about laptop processors.

Today the company is expanding the Ryzen AI 300 lineup with a batch of updated high-end chips with up to 16 CPU cores and some midrange options for cheaper Copilot+ PCs. AMD has repackaged some of its high-end desktop chips for gaming laptops, including the first Ryzen laptop CPU with 3D V-Cache enabled. And there’s also a new-in-name-only Ryzen 200 series, another repackaging of familiar silicon to address lower-budget laptops.

Ryzen AI 300 is back, along with high-end Max and Max+ versions

Ryzen AI is back, with Max and Max+ versions that include huge integrated GPUs. Credit: AMD

We came away largely impressed by the initial Ryzen AI 300 processors in August 2024, and new processors being announced today expand the lineup upward and downward.

AMD is announcing the Ryzen AI 7 350 and Ryzen AI 5 340 today, along with identically specced Pro versions of the same chips with a handful of extra features for large businesses and other organizations.

Midrange Ryzen AI processors should expand Copilot+ features into somewhat cheaper x86 PCs. Credit: AMD

The 350 includes eight CPU cores split evenly between large Zen 5 cores and smaller, slower but more efficient Zen 5C cores, plus a Radeon 860M with eight integrated graphics cores (down from a peak of 16 for the Ryzen AI 9). The 340 has six CPU cores, again split evenly between Zen 5 and Zen 5C, and a Radeon 840M with four graphics cores. But both have the same 50 TOPS NPUs as the higher-end Ryzen AI chips, qualifying both for the Copilot+ label.

For consumers, AMD is launching three high-end chips across the new “Ryzen AI Max+” and “Ryzen AI Max” families. Compared to the existing Strix Point-based Ryzen AI processors, Ryzen AI Max+ and Max include more CPU cores, and all of their cores are higher-performing Zen 5 cores, with no Zen 5C cores mixed in. The integrated graphics also get significantly more powerful, with as many as 40 cores built in—these chips seem to be destined for larger thin-and-light systems that could benefit from more power but don’t want to make room for a dedicated GPU.



Review: Intel Lunar Lake CPUs combine good battery life and x86 compatibility

that lake came from the moon —

But it’s too bad that Intel had to turn to TSMC to make its chips competitive.

  • An Asus Zenbook UX5406S with a Lunar Lake-based Core Ultra 7 258V inside.

  • These high-end Zenbooks usually offer pretty good keyboards and trackpads, and the ones here are comfortable and reliable.

  • An HDMI port, a pair of Thunderbolt ports, and a headphone jack.

  • A single USB-A port on the other side of the laptop. Dongles are fine, but we still appreciate when thin-and-light laptops can fit one of these in.

    Credit: Andrew Cunningham

Two things can be true for Intel’s new Core Ultra 200-series processors, codenamed Lunar Lake: They can be both impressive and embarrassing.

Impressive because they perform reasonably well, despite some regressions and inconsistencies, and because they give Intel’s battery life a much-needed boost as the company competes with new Snapdragon X Elite processors from Qualcomm and Ryzen AI chips from AMD. Lunar Lake is also Intel’s first chip to meet Microsoft’s performance requirements for the Copilot+ features in Windows 11.

Embarrassing because, to get here, Intel had to use another company’s manufacturing facilities to produce a competitive chip.

Intel claims that this is a temporary arrangement, just a bump in the road as the company prepares to scale up its upcoming 18A manufacturing process so it can bring its own chip production back in-house. And maybe that’s true! But years of manufacturing misfires (and early reports of troubles with 18A) have made me reflexively skeptical of any timelines the company gives for its manufacturing operations. And Intel has outsourced some of its manufacturing at the same time it is desperately trying to get other chip designers to manufacture their products in Intel’s factories.

This is a review of Intel’s newest mobile silicon by way of an Asus Zenbook UX5406S with a Core Ultra 7 258V provided by Intel, not a chronicle of Intel’s manufacturing decline and ongoing financial woes. I will mostly focus on telling you whether the chip performs well and whether you should buy it. But it’s a rare situation where even a solid chip isn’t a slam-dunk win for Intel, and that context factors into our overall analysis.

About Lunar Lake

A high-level breakdown of Intel’s next-gen Lunar Lake chips, which preserve some of Meteor Lake’s changes while reverting others. Credit: Intel

Let’s talk about the composition of Lunar Lake, in brief.

Like last year’s Meteor Lake-based Core Ultra 100 chips, Lunar Lake is a collection of chiplets stitched together via Intel’s Foveros technology. In Meteor Lake, Intel used this to combine several silicon dies manufactured by different companies—Intel made the compute tile where the main CPU cores were housed, while TSMC made the tiles for graphics, I/O, and other functions.

In Lunar Lake, Intel is still using Foveros—basically, using a silicon “base tile” as an interposer that enables communication between the different chiplets—to put the chips together. But the CPU, GPU, and NPU have been reunited in a single compute tile, and I/O and other functions are all handled by the platform controller tile (sometimes called the Platform Controller Hub or PCH in previous Intel CPUs). There’s also a “filler tile” that exists only so that the end product is rectangular. Both the compute tile and the platform controller tile are made by TSMC this time around.

Intel is still splitting its CPU cores between power-efficient E-cores and high-performance P-cores, but core counts overall are down relative to both previous-generation Core Ultra chips and older 12th- and 13th-generation Core chips.

Some high-level details of Intel’s new E- and P-core architectures. Credit: Intel

Lunar Lake has four E-cores and four P-cores, a composition common for Apple’s M-series chips but not, so far, for Intel’s. The Meteor Lake Core Ultra 7 155H, for example, included six P-cores and a total of 10 E-cores. A Core i7-1255U included two P-cores and eight E-cores. Intel has also removed Hyperthreading from the CPU architecture it’s using for its P-cores, claiming that the silicon space was better spent on improving single-core performance. You’d expect this to boost Lunar Lake’s single-core performance and hurt its multi-core performance relative to past generations, and to spoil our performance section a bit, that’s basically what happens, though not by as much as you might expect.
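
If you want to see how that layout shows up from software, here is a minimal sketch, assuming the third-party psutil library (any OS topology tool would do just as well): on a chip that doesn’t expose SMT, as described above for Lunar Lake, the logical and physical core counts should match.

```python
# Minimal sketch: compare logical vs. physical core counts to see whether
# the CPU exposes SMT/Hyperthreading. Requires the third-party psutil
# package (pip install psutil); output depends entirely on your machine.
import psutil

physical = psutil.cpu_count(logical=False)  # hardware cores
logical = psutil.cpu_count(logical=True)    # hardware threads visible to the OS

print(f"physical cores: {physical}, logical processors: {logical}")
if physical is None:
    print("Could not determine the physical core count on this system.")
elif physical == logical:
    print("No SMT/Hyperthreading exposed (what you'd expect from a 4P+4E Lunar Lake layout).")
else:
    print(f"SMT exposed: roughly {logical / physical:.1f} threads per core.")
```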

Intel is also shipping a new GPU architecture with Lunar Lake, codenamed Battlemage—it will also power the next wave of dedicated desktop Arc GPUs, when and if we get them (Intel hasn’t said anything on that front, but it’s canceling or passing off a lot of its side projects lately). Intel says that the Arc 140V integrated GPU is an average of 31 percent faster than the old Meteor Lake Arc GPU in games, and 16 percent faster than AMD’s newest Radeon 890M, though performance will vary widely based on the game. The Arc 130V GPU has one fewer of Intel’s Xe cores (seven instead of eight) and lower clock speeds.

The last piece of the compute puzzle is the neural processing unit (NPU), which can process some AI and machine-learning workloads locally rather than sending them to the cloud. Windows and most apps still aren’t doing much with these, but Intel does rate the Lunar Lake NPUs at between 40 and 48 trillion operations per second (TOPS) depending on the chip you’re buying, meeting or exceeding Microsoft’s 40 TOPS requirement and generally around four times faster than the NPU in Meteor Lake (11.5 TOPS).
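
As a quick sanity check on those figures, the short snippet below does the arithmetic using only the numbers quoted above; the constant names are purely illustrative.

```python
# Quick arithmetic on the NPU figures quoted above; the numbers come
# straight from the text, and the names are just for illustration.
COPILOT_PLUS_REQUIREMENT_TOPS = 40   # Microsoft's stated Copilot+ bar
METEOR_LAKE_NPU_TOPS = 11.5          # previous-generation NPU

lunar_lake_npu_tops = {"lower-end Lunar Lake SKUs": 40, "higher-end Lunar Lake SKUs": 48}

for name, tops in lunar_lake_npu_tops.items():
    speedup = tops / METEOR_LAKE_NPU_TOPS
    meets_bar = tops >= COPILOT_PLUS_REQUIREMENT_TOPS
    print(f"{name}: {tops} TOPS, ~{speedup:.1f}x Meteor Lake, Copilot+ qualified: {meets_bar}")
```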

Intel is shifting to on-package RAM for Lunar Lake, something Apple also uses for its M-series chips. Credit: Intel

And there’s one last big change: For these particular Core Ultra chips, Intel is integrating the RAM into the CPU package, rather than letting PC makers solder it to the motherboard separately or offer DIMM slots—again, something we see in Apple Silicon chips in the Mac. Lunar Lake chips ship with either 16GB or 32GB of RAM, and most of the variants can be had with either amount (in the chips Intel has announced so far, model numbers ending in 8 like our Core Ultra 7 258V have 32GB, and model numbers ending in 6 have 16GB). Packaging memory this way both saves motherboard space and, according to Intel, reduces power usage, because it shortens the physical distance that data needs to travel.
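
To make that model-number-to-memory pattern concrete, here is a small, purely illustrative decoder based on the chips Intel has announced so far; the function name and the mapping are my own, and future SKUs could break the pattern.

```python
# Illustrative only: map the last digit of the Lunar Lake model numbers
# announced so far to their on-package RAM (ending in 8 -> 32GB, 6 -> 16GB).
ON_PACKAGE_RAM_GB = {8: 32, 6: 16}

def lunar_lake_ram_gb(model_number: str):
    """Return on-package RAM in GB for a model number like '258V', or None."""
    digits = "".join(ch for ch in model_number if ch.isdigit())
    if not digits:
        return None
    return ON_PACKAGE_RAM_GB.get(int(digits[-1]))

print(lunar_lake_ram_gb("258V"))  # 32 (our review unit's Core Ultra 7 258V)
print(lunar_lake_ram_gb("256V"))  # 16 (a model number ending in 6)
```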

I am reasonably confident that we’ll see other Core Ultra 200-series variants with more CPU cores and external memory—I don’t see Intel giving up on high-performance, high-margin laptop processors, and those chips will need to compete with AMD’s high-end performance and offer additional RAM. But if those chips are coming, Intel hasn’t announced them yet.



For the second time in two years, AMD blows up its laptop CPU numbering system

this again —

AMD reverses course on “decoder ring” numbering system for laptop CPUs.

AMD’s Ryzen AI 300 series is a new chip and a new naming scheme. Credit: AMD

Less than two years ago, AMD announced that it was overhauling its numbering scheme for laptop processors. Each digit in its four-digit CPU model numbers picked up a new meaning, which, with the help of a detailed reference sheet, promised to tell buyers exactly what they were buying.

One potential issue with this, as we pointed out at the time, was that it allowed AMD to bump the first and most important of those four digits every single year it decided to re-release a processor, regardless of whether that chip actually included substantive improvements. Thus a “Ryzen 7 7730U” from 2023 would look two generations newer than a Ryzen 7 5800U from 2021, despite being essentially identical.

AMD is partially correcting this today by abandoning the self-described “decoder ring” naming system and resetting it to something more conventional.

For its new Ryzen AI laptop processors, codenamed “Strix Point,” AMD is still using the same broad Ryzen 3/5/7/9 number to communicate the general performance tier, plus a one- or two-letter suffix to denote the power and performance class (U for ultraportables, HX for higher-performance chips, and so on). A new three-digit processor number identifies the chip’s generation in the first digit and the specific SKU in the last two digits.

AMD is changing how it numbers its laptop CPUs again. Credit: AMD

In other words, the company is essentially hitting the undo button.

Like Intel, AMD is shifting from four-digit numbers to three digits. The Strix Point processor numbers will start with the 300 series, which AMD says is because this is the third generation of Ryzen laptop processors with a neural processing unit (NPU) included. Current 7040-series and 8040-series processors with NPUs are not being renamed retroactively, and AMD plans to stop using the 7000- and 8000-series numbering for processor introductions going forward.
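
The scheme as described lends itself to a simple decode. The sketch below is an unofficial, illustrative parser for the new-style names, using the “Ryzen AI 9 HX 370” from AMD’s announcement as its test case.

```python
import re

# Unofficial, illustrative parser for the naming scheme described above:
# a Ryzen tier (3/5/7/9), an optional power/performance suffix (U, HX, ...),
# and a three-digit number whose first digit is the generation and whose
# last two digits identify the specific SKU.
PATTERN = re.compile(
    r"Ryzen AI (?P<tier>[3579])\s*(?P<suffix>[A-Z]{0,2})\s*(?P<number>\d{3})"
)

def decode(name):
    match = PATTERN.search(name)
    if not match:
        return None
    number = match.group("number")
    return {
        "tier": int(match.group("tier")),         # broad performance tier
        "suffix": match.group("suffix") or None,  # e.g. HX for higher-power parts
        "generation": int(number[0]),             # 3 = third generation with an NPU
        "sku": number[1:],                        # specific model within the generation
    }

print(decode("Ryzen AI 9 HX 370"))
# {'tier': 9, 'suffix': 'HX', 'generation': 3, 'sku': '70'}
```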

AMD wouldn’t describe exactly how it would approach CPU model numbers for new products that used older architectures but did say that new processors that didn’t meet the 40+ TOPS requirement for Microsoft’s Copilot+ program would simply use the “Ryzen” name instead of the new “Ryzen AI” branding. That would include older architectures with slower NPUs, like the current 7040 and 8040-series chips.

Desktop CPUs are, once again, totally unaffected by this change. Desktop processors’ four-digit model numbers and alphabetic suffixes generally tell you all you need to know about their underlying architecture; the new Ryzen 9000 desktop CPUs and the Zen 5 architecture were also announced today.

It seems like a lot of work just to end up basically where we started, especially when the desktop side of AMD has been getting by just fine using older model numbers for newly released products when appropriate. But to be fair to AMD, there just isn’t a great way to do processor model numbers simply and consistently, at least not given current market realities:

  • PC OEMs that seem to demand or expect “new” product from chipmakers every year, even though chip companies tend to take somewhere between one and three years to release significantly updated designs.
  • The fact that casual and low-end users don’t actually benefit a ton from performance enhancements, keeping older chips viable for longer.
  • Different subsections of the market that must be filled with slightly different chips (consider chips with vPro versus similar chips without it).
  • The need to “bin” chips—that is, disable small parts of a given silicon CPU or GPU die and then sell the results as a lower-end product—to recoup manufacturing costs and minimize waste.

Apple may come the closest to what the “ideal” would probably be—one number for the overarching chip generation (M1, M3, etc.), one word like “Pro” or “Max” to communicate the general performance level, and a straightforward description of the number of CPU and GPU cores included, to leave flexibility for binning chips. But as usual, Apple occupies a unique position: it’s the only company putting its own processors into its own systems, and the company usually only updates a product when there’s something new to put in it, rather than reflexively announcing new models every time another CES or back-to-school season or Windows version rolls around.

In reverting to more traditional model numbers, AMD has at least returned to a system that people who follow CPUs will be broadly familiar with. It’s not perfect, and it leaves plenty of room for ambiguity as the product lineup gets more complicated. But it’s in the same vein as Intel’s rebranding of 13th-gen Core chips, the whole “Intel Processor” thing, or Qualcomm’s unfriendly eight-digit model numbers for its Snapdragon X Plus and Elite chips. AMD’s new nomenclature is a devil, but at least it’s one we know.



AMD intros Ryzen AI 300 chips with Zen 5, better GPU, and hugely improved NPU

ai everywhere —

High-end Ryzen laptop chips combine big and little Zen cores for the first time.

  • AMD’s Ryzen AI 300 series is its next-gen laptop platform, and the first to support Copilot+ PC features.

  • Ryzen AI 300 uses a new CPU architecture, a revamped NPU, and a tweaked GPU architecture that AMD hasn’t said much about.

  • Only two high-end processors will be available by July, though others will surely follow.

  • How AMD’s new laptop CPU naming scheme applies to Ryzen AI 300.

    Credit: AMD

AMD’s next-generation laptop processors are coming later this year, joining new Ryzen 9000 desktop processors and ushering in yet another revamp to the way AMD does laptop CPU model numbers.

But the big thing the company wants to push is the new chips’ performance in generative AI and machine-learning workloads—it’s putting “Ryzen AI” right in the name and emphasizing the presence of an improved neural processing unit (NPU) that meets and exceeds Microsoft’s performance requirements for Copilot+ PCs. The new Ryzen AI 300 series, codenamed Strix Point, succeeds the Ryzen 8040 chips from earlier this year, which were themselves a relatively mild refresh of the Ryzen 7040 processors from less than a year before.

AMD promises performance of up to 50 trillion operations per second (TOPS) from its new third-generation NPU, a significant boost from the 10 to 16 TOPS offered by Ryzen 7000 and 8000 processors with NPUs. This would make it faster than the 45 TOPS offered by the Qualcomm Snapdragon X Elite and X Plus in the first wave of Copilot+ PCs, and also faster than Intel’s projected performance for its next-generation Core Ultra chips, codenamed Lunar Lake. All exceed Microsoft’s Copilot+ requirement of 40 TOPS, which enables some Windows 11 features that aren’t normally available on typical PCs. Copilot+ PCs can do more AI processing locally on the device rather than relying on the cloud, potentially improving performance and giving users more privacy.

If you don’t particularly care about generative AI, locally executed or otherwise, the Ryzen AI 300 processors also come with an updated CPU based on the same Zen 5 architecture as the desktop chips, plus an “RDNA 3.5” integrated GPU to boost gaming performance for thin-and-light systems that can’t fit a dedicated graphics processor. The chips are being manufactured on a TSMC N4 process.

  • AMD is mostly talking about the performance of the new NPU, which at least according to AMD should slightly outperform offerings from Qualcomm and Intel.

  • The new integrated GPUs stack up well against Intel’s current Arc GPUs, though how they perform against next-gen Lunar Lake-based chips is anyone’s guess.

    Credit: AMD

AMD is announcing two chips today, both in the high-end Ryzen 9 series. The Ryzen AI 9 HX 370 includes 12 CPU cores and 16 GPU cores, up from a maximum of eight CPU cores and 12 GPU cores for the Ryzen 8040 series. The Ryzen AI 9 365 steps down to 10 CPU cores and 12 GPU cores. Both have the same NPU onboard.

Though an increase in CPU core count suggests big improvements in multi-threaded performance, note that in both chips a majority of the CPU cores (8 in the 370, 6 in the 365) actually use the “Zen 5c” architecture, a variant of Zen 5 that supports the exact same instructions and features but is optimized for small size rather than high clock speeds. The result is essentially AMD’s version of one of Intel’s E-cores, though without the truly heterogeneous CPU architecture that has caused incompatibility problems with some apps and games.
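
Laid out as data, the core split described above looks roughly like this; the sketch is built only from the counts in AMD’s announcement, with field names of my own choosing.

```python
# The two announced Ryzen AI 300 chips, per the figures above. The Zen 5
# counts are derived by subtracting the Zen 5c counts from the totals;
# the field names are illustrative, and both chips share a 50 TOPS NPU.
RYZEN_AI_300 = {
    "Ryzen AI 9 HX 370": {"zen5_cores": 4, "zen5c_cores": 8, "gpu_cores": 16},
    "Ryzen AI 9 365":    {"zen5_cores": 4, "zen5c_cores": 6, "gpu_cores": 12},
}

for name, spec in RYZEN_AI_300.items():
    total_cpu = spec["zen5_cores"] + spec["zen5c_cores"]
    print(f"{name}: {total_cpu} CPU cores "
          f"({spec['zen5_cores']} Zen 5 + {spec['zen5c_cores']} Zen 5c), "
          f"{spec['gpu_cores']} GPU cores")
```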

This isn’t the first time we’ve seen a mix of big and small CPU cores from AMD, but it is the first time we’ve seen it at the high end. Zen 4c cores only really showed up in lower-end, lower-power CPU designs in the Ryzen 3 and 5 and Ryzen Z1 families.

Perhaps tellingly, AMD offered no direct comparisons between the CPU performance of the Ryzen AI 300 chips and the Ryzen 8040 series, opting instead to compare to offerings from Intel, Qualcomm, and Apple. This certainly doesn’t mean performance has regressed generation over generation, but it is usually code for “this isn’t the kind of improvement we want to draw attention to.”

AMD also didn’t offer performance comparisons between the new Radeon 890M and 880M and the old Radeon 780M. The company said that the 890M was an average of 36 percent faster in a small selection of games compared to the Intel Arc integrated GPU in the Meteor Lake Core Ultra chips and 60 percent faster than the Snapdragon X Elite in the 3DMark Night Raid benchmark (this was part of a slide that was specifically highlighting the performance impact of translating x86 code on Arm chips, though for the time being it’s true that the vast majority of games running on Snapdragon PCs will have to deal with the overhead of code translation).

AMD says that the Ryzen AI chips are slated to appear in “100+ platforms from OEMs” starting in July 2024, a month or so after Microsoft and Qualcomm’s first wave of Snapdragon X-equipped Copilot+ PCs. Ryzen AI will also compete with Intel’s next-gen Lunar Lake chips, also due out sometime later this year.

