DRAM contract prices have increased 171 percent year over year, according to industry data. Gerry Chen, general manager of memory manufacturer TeamGroup, warned that the situation will worsen in the first half of 2026 once distributors exhaust their remaining inventory. He expects supply constraints to persist through late 2027 or beyond.
The fault lies squarely with AI mania in the tech industry. The construction of new AI infrastructure has created unprecedented demand for high-bandwidth memory (HBM), the specialized DRAM used in AI accelerators from Nvidia and AMD. Memory manufacturers have been reallocating production capacity away from consumer products toward these more profitable enterprise components, and Micron has presold its entire HBM output through 2026.
A photo of the “Stargate I” site in Abilene, Texas. AI data center sites like this are eating up the RAM supply. Credit: OpenAI
At the moment, the structural imbalance between AI demand and consumer supply shows no signs of easing. OpenAI’s Stargate project has reportedly signed agreements for up to 900,000 wafers of DRAM per month, which could account for nearly 40 percent of global production.
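As a back-of-envelope check on those figures (using only the numbers quoted above, and treating the "nearly 40 percent" share as approximate), dividing the reported wafer commitment by that share gives the implied global DRAM output:

```python
# Back-of-envelope check of the Stargate DRAM figures quoted above.
stargate_wafers = 900_000   # reported wafers of DRAM per month
share = 0.40                # "nearly 40 percent" of global production
global_monthly = stargate_wafers / share
print(f"Implied global output: {global_monthly:,.0f} wafers/month")
```

Since the quoted share is "nearly" 40 percent, the true implied global figure would be somewhat higher than this.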
The shortage has already forced companies to adapt. As Ars’ Andrew Cunningham reported, laptop maker Framework stopped selling standalone RAM kits in late November to prevent scalping and said it will likely be forced to raise prices soon.
For Micron, the calculus is clear: Enterprise customers pay more and buy in bulk. But for the DIY PC community, the decision will leave PC builders with one fewer option when reaching for the RAM sticks. In his statement, Sadana reflected on the brand’s 29-year run.
“Thanks to a passionate community of consumers, the Crucial brand has become synonymous with technical leadership, quality and reliability of leading-edge memory and storage products,” Sadana said. “We would like to thank our millions of customers, hundreds of partners and all of the Micron team members who have supported the Crucial journey for the last 29 years.”
A particular EEG signature during sleep is associated with better performance on a mental task.
The guy in the back may be doing a more useful activity. Credit: XAVIER GALIANA
Dmitri Mendeleev famously saw the complete arrangement of the periodic table after falling asleep at his desk. He claimed that in his dream he saw a table where all the elements fell into place, and he wrote it all down when he woke up. By having a eureka moment right after a nap, he joined a club full of rather talented people: Mary Shelley, Thomas Edison, and Salvador Dalí.
To figure out if there’s a grain of truth to all these anecdotes, a team of German scientists at the University of Hamburg, led by cognitive science researcher Anika T. Löwe, conducted an experiment designed to trigger such nap-following strokes of genius—and catch them in the act with EEG brain monitoring gear. And they kind of succeeded.
Catching Edison’s cup
“Thomas Edison had this technique where he held a cup or something like that when he was napping in his chair,” says Nicolas Schuck, a professor of cognitive science at the University of Hamburg and senior author of the study. “When he fell asleep too deeply, the cup falling from his hand would wake him up—he was convinced that was the way to trigger these eureka moments.” While dozing off in a chair with a book or a cup doesn’t seem particularly radical, a number of cognitive scientists got serious about re-creating Edison’s approach to insights and testing it in their experiments.
One recent such study was done at Sorbonne University by Célia Lacaux, a cognitive neuroscientist, and her colleagues. Over 100 participants were presented with a mathematical problem and told it could be solved by applying two simple rules in a stepwise manner. However, there was also an undescribed shortcut that made reaching the solution much quicker. The goal was to see if participants would figure this shortcut out after an Edison-style nap, and whether the eureka moment would show up in the EEG readings.
Lacaux’s team also experimented with having participants hold different objects while napping: spoons, steel spheres, stress balls, and so on. It turned out Edison was right, and a cup was by far the best choice. It also turned out that most participants recognized there was a hidden rule after the falling cup woke them up. The nap was brief, only long enough to enter the light, non-REM N1 phase of sleep.
Initially, Schuck’s team wanted to replicate the results of Lacaux’s study. They even bought the exact same make of cups, but the cups failed this time. “For us, it just didn’t work. People who fell asleep often didn’t drop these cups—I don’t know why,” Schuck says.
The bigger surprise, however, was that the N1 phase sleep didn’t work either.
Tracking the dots
Schuck’s team set up an experiment that involved asking 90 participants to track dots on a screen in a series of trials, with a 20-minute-long nap in between. The dots were rather small, colored either purple or orange, placed in a circle, and they moved in one of two directions. The task for the participants was to determine the direction the dots were moving. That could range from easy to really hard, depending on the amount of jitter the team introduced.
The insight the participants could discover was hidden in the color coding. After a few trials where the dots’ direction was random, the team introduced a change that tied the movement to the color: orange dots always moved in one direction, and the purple dots moved in the other. It was up to the participants to figure this out, either while awake or through a nap-induced insight.
Those dots were the first difference between Schuck’s experiment and the Sorbonne study. Lacaux had her participants cracking a mathematical problem that relied on analytical skills. Schuck’s task was more about perceptiveness and out-of-the-box thinking.
The second difference was that the cups failed to drop and wake participants up. Muscles usually relax more when sleep gets deeper, which is why most people drop whatever they’re holding either at the end of the N1 phase or at the onset of the N2 phase, when the body starts to lose voluntary motor control. “We didn’t really prevent people from reaching the N2 phase, and it turned out the participants who reached the N2 phase had eureka moments most often,” Schuck explains.
Over 80 percent of people who reached the deeper, N2 phase of sleep found the color-coding solution. Participants who fell into a light N1 sleep had a 61 percent success rate; that dropped to just 55 percent in a group that stayed awake during their 20-minute nap time. In a control group that did the same task without a nap break, only 49 percent of participants figured out the hidden trick.
The divergent results in Lacaux’s and Schuck’s experiments were puzzling, so the team looked at the EEG readouts, searching for features in the data that could predict eureka moments better than sleep phases alone. And they found something.
The slope of genius
The EEG signal in the human brain consists of low and high frequencies that can be plotted on a spectral slope. When we are awake, there are a lot of high-frequency signals, and this slope looks rather flat. During sleep, these high frequencies get muted, there are more low-frequency signals, and the slope gets steeper. Usually, the deeper we sleep, the steeper our EEG slope is.
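The slope described here can be estimated with standard signal-processing tools: compute a power spectrum, then fit a line to power versus frequency on log-log axes. A minimal sketch, assuming NumPy/SciPy and synthetic data standing in for a real EEG trace (the sampling rate and frequency band are illustrative choices, not taken from the study):

```python
import numpy as np
from scipy.signal import welch

def spectral_slope(signal, fs=250, fmin=1.0, fmax=40.0):
    """Estimate the aperiodic (1/f) spectral slope of a signal.

    Computes a Welch power spectrum, then fits a line to
    log10(power) vs. log10(frequency); steeper (more negative)
    slopes correspond to deeper sleep.
    """
    freqs, psd = welch(signal, fs=fs, nperseg=fs * 4)
    band = (freqs >= fmin) & (freqs <= fmax)
    slope, _intercept = np.polyfit(np.log10(freqs[band]),
                                   np.log10(psd[band]), 1)
    return slope

# Synthetic check: white noise has a flat spectrum (slope near 0), while
# its cumulative sum (Brownian noise, ~1/f^2) has a much steeper slope.
rng = np.random.default_rng(0)
white = rng.standard_normal(250 * 60)  # one minute at 250 Hz
brown = np.cumsum(white)
print(spectral_slope(white), spectral_slope(brown))
```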
The team noticed that eureka moments seemed to be highly correlated with a steep EEG spectral slope—the steeper the slope, the more likely people were to get a breakthrough. In fact, the models based on the EEG signal alone predicted eureka moments better than predictions made based on sleep phases and even based on the sleep phases and EEG readouts combined.
“Traditionally, people divided sleep EEG readouts down into discrete stages like N1 or N2, but as usual in biology, things in reality are not as discrete,” Schuck says. “They’re much more continuous, there’s kind of a gray zone.” He told Ars that looking specifically at the EEG trace may help us better understand what exactly happens in the brain when a sudden moment of insight arrives.
But Schuck wants to get even more data in the future. “We’re currently running a study that’s been years in the making: We want to use both EEG and [functional magnetic resonance imaging] at the same time to see what happens in the brain when people are sleeping,” Schuck says. The addition of the fMRI imaging will enable Schuck and his colleagues to see which areas of the brain get activated during sleep. What the team wants to learn from combining EEG and fMRI imagery is how sleep boosts memory consolidation.
“We also hope to get some insights, no pun intended, into the processes that play a role in generating insights,” Schuck adds.
Jacek Krywko is a freelance science and technology writer who covers space exploration, artificial intelligence research, computer science, and all sorts of engineering wizardry.
For many of us, memories of our childhood have become a bit hazy, if not vanishing entirely. But nobody really remembers much before the age of 4, because nearly all humans experience what’s termed “infantile amnesia,” in which memories that might have formed before that age seemingly vanish as we move through adolescence. And it’s not just us; the phenomenon appears to occur in a number of our fellow mammals.
The simplest explanation for this would be that the systems that form long-term memories are simply immature and don’t start working effectively until children hit the age of 4. But a recent animal experiment suggests that the situation in mice is more complex: the memories are there, they’re just not normally accessible, although they can be re-activated. Now, a study that put human infants in an MRI tube indicates that memory activity starts by the age of 1, suggesting that the results in mice may apply to us.
Less than total recall
Mice are one of the species that we know experience infantile amnesia. And, thanks to over a century of research on mice, we have some sophisticated genetic tools that allow us to explore what’s actually involved in the apparent absence of the animals’ earliest memories.
A paper that came out last year describes a series of experiments that start by having very young mice learn to associate seeing a light come on with receiving a mild shock. If nothing else is done with those mice, that association will apparently be forgotten later in life due to infantile amnesia.
But in this case, the researchers could do something. Neural activity normally results in the activation of a set of genes. In these mice, the researchers engineered it so one of the genes that gets activated encodes a protein that can modify DNA. When this protein is made, it results in permanent changes to a second gene that was inserted in the animal’s DNA. Once activated through this process, the gene leads to the production of a light-activated ion channel.
Neurons and a second cell type called an astrocyte collaborate to hold memories.
Astrocytes (labelled in black) sit within a field of neurons. Credit: Ed Reschke
“If we go back to the early 1900s, this is when the idea was first proposed that memories are physically stored in some location within the brain,” says Michael R. Williamson, a researcher at the Baylor College of Medicine in Houston. For a long time, neuroscientists thought that the storage of memory in the brain was the job of engrams, ensembles of neurons that activate during a learning event. But it turned out this wasn’t the whole picture.
Williamson’s research investigated the role astrocytes, non-neuron brain cells, play in the read-and-write operations that go on in our heads. “Over the last 20 years the role of astrocytes has been understood better. We’ve learned that they can activate neurons. The addition we have made to that is showing that there are subsets of astrocytes that are active and involved in storing specific memories,” Williamson says in describing a new study his lab has published.
One consequence of this finding: Astrocytes could be artificially manipulated to suppress or enhance a specific memory, leaving all other memories intact.
Marking star cells
Astrocytes, otherwise known as star cells due to their shape, play various roles in the brain, and many are focused on the health and activity of their neighboring neurons. Williamson’s team started by developing techniques that enabled them to mark chosen ensembles of astrocytes to see when they activate genes (including one named c-Fos) that help neurons reconfigure their connections and are deemed crucial for memory formation. This was based on the idea that the same pathway would be active in neurons and astrocytes.
“In simple terms, we use genetic tools that allow us to inject mice with a drug that artificially makes astrocytes express some other gene or protein of interest when they become active,” says Wookbong Kwon, a biotechnologist at Baylor College and co-author of the study.
Those proteins of interest were mainly fluorescent proteins that make cells fluoresce bright red. This way, the team could spot the astrocytes in mouse brains that became active during learning scenarios. Once the tagging system was in place, Williamson and his colleagues gave their mice a little scare.
“It’s called fear conditioning, and it’s a really simple idea. You take a mouse, put it into a new box, one it’s never seen before. While the mouse explores this new box, we just apply a series of electrical shocks through the floor,” Williamson explains. A mouse treated this way remembers this as an unpleasant experience and associates it with contextual cues like the box’s appearance, the smells and sounds present, and so on.
The tagging system lit up all astrocytes that expressed the c-Fos gene in response to fear conditioning. Williamson’s team inferred that this is where the memory is stored in the mouse’s brain. Knowing that, they could move on to the next question: whether and how astrocytes and engram neurons interact during this process.
Modulating engram neurons
“Astrocytes are really bushy,” Williamson says. They have a complex morphology with lots and lots of micro or nanoscale processes that infiltrate the area surrounding them. A single astrocyte can contact roughly 100,000 synapses, and not all of them will be involved in learning events. So the team looked for correlations between astrocytes activated during memory formation and the neurons that were tagged at the same time.
“When we did that, we saw that engram neurons tended to be contacting the astrocytes that are active during the formation of the same memory,” Williamson says. To see how astrocytes’ activity affects neurons, the team artificially stimulated the astrocytes by microinjecting them with a virus engineered to induce the expression of the c-Fos gene. “It directly increased the activity of engram neurons but did not increase the activity of non-engram neurons in contact with the same astrocyte,” Williamson explains.
This way his team established that at least some astrocytes could preferentially communicate with engram neurons. The researchers also noticed that astrocytes involved in memorizing the fear conditioning event had elevated levels of a protein called NFIA, which is known to regulate memory circuits in the hippocampus.
But probably the most striking discovery came when the researchers tested whether the astrocytes involved in memorizing an event also played a role in recalling it later.
Selectively forgetting
The first test of whether astrocytes were involved in recall was to artificially activate them when the mice were in a box that they were not conditioned to fear. It turned out that artificially activating the astrocytes that had been active during the formation of a fear memory in one box caused the mice to freeze even when they were in a different one.
So, the next question was, if you just killed or otherwise disabled an astrocyte ensemble active during a specific memory formation, would it just delete this memory from the brain? To get that done, the team used their genetic tools to selectively delete the NFIA protein in astrocytes that were active when the mice received their electric shocks. “We found that mice froze a lot less when we put them in the boxes they were conditioned to fear. They could not remember. But other memories were intact,” Kwon claims.
The memory was not completely deleted, though. The mice still froze in the boxes they were supposed to freeze in, but they did it for a much shorter time on average. “It looked like their memory was maybe a bit foggy. They were not sure if they were in the right place,” Williamson says.
After figuring out how to suppress a memory, the team also found the “undo” button and brought the memory back.
“When we deleted the NFIA protein in astrocytes, the memory was impaired, but the engram neurons were intact. So, the memory was still somewhere there. The mice just couldn’t access it,” Williamson claims. The team brought the memory back by artificially stimulating the engram neurons using the same technique they employed for activating chosen astrocytes. “That caused the neurons involved in this memory trace to be activated for a few hours. This artificial activity allowed the mice to remember it again,” Williamson says.
The team’s vision is that in the distant future this technique can be used in treatments targeting neurons that are overactive in disorders such as PTSD. “We now have a new cellular target that we can evaluate and potentially develop treatments that target the astrocyte component associated with memory,” Williamson claims. But there’s a lot more to learn before anything like that becomes possible. “We don’t yet know what signal is released by an astrocyte that acts on the neuron. Another thing is our study was focused on one brain region, which was the hippocampus, but we know that engrams exist throughout the brain in lots of different regions. The next step is to see if astrocytes play the same role in other brain regions that are also critical for memory,” Williamson says.
Jacek Krywko is a freelance science and technology writer who covers space exploration, artificial intelligence research, computer science, and all sorts of engineering wizardry.
A Voyager space probe in a clean room at the Jet Propulsion Laboratory in 1977.
Engineers have determined why NASA’s Voyager 1 probe has been transmitting gibberish for nearly five months, raising hopes of recovering humanity’s most distant spacecraft.
Voyager 1, traveling outbound some 15 billion miles (24 billion km) from Earth, started beaming unreadable data down to ground controllers on November 14. For nearly four months, NASA knew Voyager 1 was still alive—it continued to broadcast a steady signal—but could not decipher anything it was saying.
Engineers at NASA’s Jet Propulsion Laboratory (JPL) in California confirmed their hypothesis that a small portion of corrupted memory caused the problem. The faulty memory bank is located in Voyager 1’s Flight Data System (FDS), one of three computers on the spacecraft. The FDS operates alongside a command-and-control central computer and another device overseeing attitude control and pointing.
The FDS duties include packaging Voyager 1’s science and engineering data for relay to Earth through the craft’s Telemetry Modulation Unit and radio transmitter. According to NASA, about 3 percent of the FDS memory has been corrupted, preventing the computer from carrying out normal operations.
Optimism growing
Suzanne Dodd, NASA’s project manager for the twin Voyager probes, told Ars in February that this was one of the most serious problems the mission has ever faced. That is saying something because Voyager 1 and 2 are NASA’s longest-lived spacecraft. They launched 16 days apart in 1977, and after flying by Jupiter and Saturn, Voyager 1 is flying farther from Earth than any spacecraft in history. Voyager 2 is trailing Voyager 1 by about 2.5 billion miles, although the probes are heading out of the Solar System in different directions.
Normally, engineers would try to diagnose a spacecraft malfunction by analyzing data it sent back to Earth. They couldn’t do that in this case because Voyager 1 has been transmitting data packages manifesting a repeating pattern of ones and zeros. Still, Voyager 1’s ground team identified the FDS as the likely source of the problem.
The Flight Data System was an innovation in computing when it was developed five decades ago. It was the first computer on a spacecraft to use volatile memory. Most of NASA’s missions operate with redundancy, so each Voyager spacecraft launched with two FDS computers. But the backup FDS on Voyager 1 failed in 1982.
Due to the Voyagers’ age, engineers had to reference paper documents, memos, and blueprints to help understand the spacecraft’s design details. After months of brainstorming and planning, teams at JPL uplinked a command in early March to prompt the spacecraft to send back a readout of the FDS memory.
The command worked, and Voyager 1 responded with a signal different from the code the spacecraft had been transmitting since November. After several weeks of meticulous examination of the new code, engineers pinpointed the locations of the bad memory.
“The team suspects that a single chip responsible for storing part of the affected portion of the FDS memory isn’t working,” NASA said in an update posted Thursday. “Engineers can’t determine with certainty what caused the issue. Two possibilities are that the chip could have been hit by an energetic particle from space or that it simply may have worn out after 46 years.”
Voyager 1’s distance from Earth complicates the troubleshooting effort. The one-way travel time for a radio signal to reach Voyager 1 from Earth is about 22.5 hours, meaning it takes roughly 45 hours for engineers on the ground to learn how the spacecraft responded to their commands.
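Those figures follow directly from the distance given earlier in the article and the speed of light; a quick check of the arithmetic:

```python
# Check the light-travel times quoted above against the stated distance.
miles = 15e9                      # Voyager 1's distance from Earth
km = miles * 1.609344             # ~24 billion km, matching the article
c_km_per_s = 299_792.458          # speed of light
one_way_hours = km / c_km_per_s / 3600
round_trip_hours = 2 * one_way_hours
print(f"one-way: {one_way_hours:.1f} h, round trip: {round_trip_hours:.1f} h")
```

The result lands close to the quoted 22.5 hours one-way and roughly 45 hours round trip; the small difference reflects the rounded 15-billion-mile distance.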
NASA also must use its largest communications antennas to contact Voyager 1. These 230-foot-diameter (70-meter) antennas are in high demand by many other NASA spacecraft, so the Voyager team has to compete with other missions to secure time for troubleshooting. This means it will take time to get Voyager 1 back to normal operations.
“Although it may take weeks or months, engineers are optimistic they can find a way for the FDS to operate normally without the unusable memory hardware, which would enable Voyager 1 to begin returning science and engineering data again,” NASA said.
Eternal Sunshine of the Spotless Mind stars Jim Carrey in one of his most powerful dramatic roles.
Focus Features
Last week, the 2004 cult classic Eternal Sunshine of the Spotless Mind marked its 20th anniversary, prompting many people to revisit the surreal sci-fi psychological drama about two ex-lovers who erase their memories of each other—only to find themselves falling in love all over again. Eternal Sunshine was a box office success and earned almost universal praise upon its release. It’s still a critical favorite today and remains one of star Jim Carrey’s most powerful and emotionally resonant dramatic roles. What better time for a rewatch and in-depth discussion of the film’s themes of memory, personal identity, love, and loss?
(Spoilers for the 2004 film below.)
Director Michel Gondry and co-writer Pierre Bismuth first came up with the concept for the film in 1998, based on a conversation Bismuth had with a female friend who, when he asked, said she would absolutely erase her boyfriend from her memory if she could. They brought on Charlie Kaufman to write the script, and the three men went on to win an Oscar for Best Original Screenplay for their efforts. The title alludes to a 1717 poem by Alexander Pope, “Eloisa to Abelard,” based on the tragic love between medieval philosopher Peter Abelard and Héloïse d’Argenteuil and their differing perspectives on what happened between them when they exchanged letters later in life. These are the most relevant lines:
Of all affliction taught a lover yet,
‘Tis sure the hardest science to forget!
…
How happy is the blameless vestal’s lot!
The world forgetting, by the world forgot.
Eternal sunshine of the spotless mind!
Carrey plays Joel, a shy introvert who falls in love with the extroverted free spirit Clementine (Kate Winslet). The film opens with the couple estranged and Joel discovering that Clementine has erased all her memories of him, thanks to the proprietary technology of a company called Lacuna. Joel decides to do the same, and much of the film unfolds backward in time in a nonlinear narrative as Joel (while dreaming) relives his memories of their relationship in reverse. Those memories dissolve as he recalls each one, even though at one point, he changes his mind and tries unsuccessfully to stop the process.
The twist: Joel ends up meeting Clementine all over again on that beach in Montauk, and they are just as drawn to each other as before. When they learn—thanks to the machinations of a vengeful Lacuna employee—what happened between them the first time around, they almost separate again. But Joel convinces Clementine to take another chance, believing their relationship to be worth any future pain.
Joel (Jim Carrey) and Clementine (Kate Winslet) meet-cute on the LIRR to Montauk.
Much has been written over the last two decades about the scientific basis for the film, particularly the technology used to erase Joel’s and Clementine’s respective memories. The underlying neuroscience involves what’s known as memory reconsolidation. The brain is constantly processing memories, including associated emotions, both within the hippocampus and across the rest of the brain (system consolidation). Research into memory reconsolidation emerged in the 2000s; because memories are unstable during recall, past memories (usually traumatic ones) can be deliberately recalled with the intent of altering them. For example, in the case of severe PTSD, administering beta blockers can decouple intense feelings of fear from traumatic memories while leaving those memories intact.
Like all good science fiction, Eternal Sunshine takes that grain of actual science and extends it in thought-provoking ways. In the film, so-called “problem memories” can be recalled individually while the patient is in a dream state and erased completely—uncomfortable feelings and all—as if they were computer files. Any neuroscientist will tell you this is not how memory works. What remains most interesting about Eternal Sunshine’s premise is its thematic exploration of the persistence and vital importance of human memory.
So we thought it would be intriguing to mark the film’s 20th anniversary by exploring those ideas through the lens of philosophy with the guidance of Johns Hopkins University philosopher Jenann Ismael. Ismael specializes in probing questions of physics, metaphysics, cognition, and theory of mind. Her many publications include The Situated Self (2009), How Physics Makes Us Free (2016), and, most recently, Time: A Very Short Introduction (2021).
Samsung shared this rendering of a CAMM ahead of the publishing of the CAMM2 standard in September.
Of all the PC-related things to come out of CES this year, my favorite wasn’t Nvidia’s graphics cards or AMD’s newest Ryzens or Intel’s iterative processor refreshes or any one of the oddball PC concept designs or anything to do with the mad dash to cram generative AI into everything.
No, of all things, the thing that I liked the most was this Crucial-branded memory module spotted by Tom’s Hardware. If it looks a little strange to you, it’s because it uses the Compression Attached Memory Module (CAMM) standard—rather than being a standard stick of RAM that you insert into a slot on your motherboard, it lies flat against the board where metal contacts on the board and the CAMM module can make contact with one another.
CAMM memory has been on my radar for a while, since it first cropped up in a handful of Dell laptops. It was mistakenly identified at the time as a proprietary type of RAM that would give Dell an excuse to charge more for it, but Dell has actually been pushing for the standardization of CAMM modules for a couple of years now, and JEDEC (the organization that handles all current computer memory standards) formally finalized the spec last month.
Something about seeing an actual in-the-wild CAMM module with a Crucial sticker on it, the same kind of sticker you’d see on any old memory module from Amazon or Newegg, made me more excited about the standard’s future. I had a similar feeling when I started digging into USB-C or when I began seeing M.2 modules show up in actual computers (though CAMM would probably be a bit less transformative than either). Here’s a thing that solves some real problems with the current technology, and it has the industry backing to actually become a viable replacement.
From upgradable to soldered (and back again?)
SO-DIMM memory slots in the Framework Laptop 13. RAM slots used to be the norm in laptop motherboards, though now you need to do a bit of work to seek out laptops that feature them.
Andrew Cunningham
It used to be easy to save some money on a new PC by buying a version without much RAM and performing an upgrade yourself, using third-party RAM sticks that cost a fraction of what manufacturers would charge. But most laptops no longer afford you the luxury.
Most PC makers and laptop PC buyers made an unspoken bargain in the early- to mid-2010s, around when the MacBook Air and the Ultrabook stopped being special thin-and-light outliers and became the standard template for the mainstream laptop: We would jettison nearly any port or internal component in the interest of making a laptop that was thinner, sleeker, and lighter.
The CD/DVD drive was one of the most immediate casualties, though its demise had already been foreshadowed thanks to cheap USB drives, cloud storage, and streaming music and video services. But as laptops got thinner, it also gradually became harder to find Ethernet and most other non-USB ports (and, eventually, even traditional USB-A ports), space for hard drives (not entirely a bad thing, now that M.2 SSDs are cheap and plentiful), socketed laptop CPUs, and room for other easily replaceable or upgradable components. Early Microsoft Surface tablets were some of the worst examples of this era of computer design—thin sandwiches of glass, metal, and glue that were difficult or impossible to open without totally destroying them.
Another casualty of this shift was memory modules, specifically Dual In-line Memory Modules (DIMMs) that could be plugged into a socket on the motherboard and easily swapped out. Most laptops had a pair of SO-DIMM slots, either stacked on top of each other (adding thickness) or placed side by side (taking up valuable horizontal space that could have been used for more battery).
Eventually, these began to go away in favor of soldered-down memory, saving space and making it easier for manufacturers to build the kinds of MacBook Air-alikes that people wanted to buy, but also adding a point of failure to the motherboard and possibly shortening its useful life by setting its maximum memory capacity at the outset.
Move over, SO-DIMM. A new type of memory module has been made official, and backers like Dell are hoping that it eventually replaces SO-DIMM (small outline dual in-line memory module) entirely.
This month, JEDEC, a semiconductor engineering trade organization, announced that it had published the JESD318: Compression Attached Memory Module (CAMM2) standard, as spotted by Tom’s Hardware.
CAMM2 was originally introduced as CAMM by Dell, which has been pushing for standardization since it announced the technology at CES 2022. Dell released the only laptops with CAMM in 2022, the Dell Precision 7670 and 7770 workstations.
The standard includes DDR5 and LPDDR5/5X designs. The former targets “performance notebooks and mainstream desktops,” and the latter is for “a broader range of notebooks and certain server market segment,” JEDEC’s announcement said.
They each have the same connector but differing pinouts, so a DDR5 CAMM2 can’t be wrongly mounted onto an LPDDR5/5X connector. CAMM2 also means that it will be possible to have non-soldered LPDDR5X memory; currently, you can only get LPDDR5X as soldered chips.
Another reason supporters are pushing CAMM2 is speed: SO-DIMM tops out at 6,400 MT/s, with maximum supported speeds even lower in four-DIMM designs. Many mainstream designs aren’t yet at this threshold, but Dell originally proposed CAMM as a way to get ahead of the limitation (largely through closer contact between the module and motherboard). The published CAMM2 standard says LPDDR5 DRAM CAMM2 “is expected to start at 6,400 MTs and increment upward in cadence with the DRAM speed capabilities.”
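For context, converting those transfer rates into peak bandwidth is straightforward arithmetic, assuming the standard 64-bit DDR5 channel width (a detail the article doesn't state):

```python
def peak_bandwidth_gb_s(mt_per_s, bus_bits=64):
    """Peak transfer rate: (million transfers/s) x (bytes per transfer)."""
    return mt_per_s * 1e6 * (bus_bits // 8) / 1e9

# DDR5-6400 (the SO-DIMM ceiling mentioned above) vs. Micron's planned
# 9,600 MT/s CAMM modules.
print(peak_bandwidth_gb_s(6400))  # 51.2 GB/s per channel
print(peak_bandwidth_gb_s(9600))  # 76.8 GB/s per channel
```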
Samsung in September announced plans to offer LPDDR CAMM at 7.5Gbps, noting that it expects commercialization in 2024. Micron also plans to offer CAMM at up to 9,600Mbps and 192GB-plus per module in late 2026, as per a company road map shared by AnandTech last month. Both announcements were made before the CAMM2 standard was published, and we wouldn’t be surprised to see timelines extended.
Samsung shared this rendering of a CAMM ahead of the publishing of the CAMM2 standard in September.
CAMM2 supports capacities of 8GB to 128GB on a single module. This opens the potential for thinner computer designs that don’t sacrifice memory or require RAM modules on both sides of the motherboard. Dell has said its original CAMM design is 57 percent thinner than SO-DIMM. The Precision laptops that launched with it offered up to 128GB of DDR5-3600 on a single module in chassis as thin as 0.98 inches with a 16-inch display.
A Dell rendering depicting the size differences between SO-DIMM and CAMM.
Dell
Nominal module dimensions listed in the standard point to “various” form factors for the modules, with the X-axis measuring 78 mm (3.07 inches) and the Y-axis 29.6–68 mm (1.17–2.68 inches).
Computers can also achieve dual-channel memory for more bandwidth with a single CAMM, compared to SO-DIMM’s one-channel-per-module design. The extra space could also leave more room for things like heat management.
JEDEC’s announcement said:
By splitting the dual-channel CAMM2 connector lengthwise into two single-channel CAMM2 connectors, each connector half can elevate the CAMM2 to a different level. The first connector half supports one DDR5 memory channel at 2.85mm height while the second half supports a different DDR5 memory channel at 7.5mm height. Or, the entire CAMM2 connector can be used with a dual-channel CAMM2. This scalability from single-channel and dual-channel configurations to future multi-channel setups promises a significant boost in memory capacity.
Unlike their taller SO-DIMM counterparts, CAMM2 modules press against an interposer, which has pins on both sides to communicate with the motherboard. It’s also worth noting that, unlike SO-DIMM modules, CAMM2 modules are screwed in. Upgrades may also be more complex: going from 8GB to 16GB, for example, would require buying a whole new CAMM and discarding the old one rather than simply adding a second 8GB module.
JEDEC’s standardization should eventually make it cheaper for these parts to be created and sourced for different computers. It could also help adoption grow, though it will take years before we can expect CAMM2 to overtake the 26-year-old SO-DIMM, as Dell hopes. Still, with a few big names behind the standard and interest in thinner, more powerful computers, we should see a greater push for these modules in the coming years.