Author name: Tim Belzer


Synology caves, walks back some drive restrictions on upcoming NAS models


Policy change affects at least the 2025-model Plus, Value, and J-series DiskStations.

Credit: SOPA Images / Getty

If you were considering the purchase of a Synology NAS but were leery of the unreasonably high cost of populating it with special Synology-branded hard disk drives, you can breathe a little easier today. In a press release dated October 8, Synology noted that with the release of its latest Disk Station Manager (DSM) update, some of its 2025 model-year products—specifically, the Plus, Value, and J-series DiskStation NAS devices—would “support the installation and storage pool creation of non-validated third-party drives.”

This unexpected move comes just a few months after Synology aggressively expanded its “verified drive” policy down-market to the entire Plus line of DiskStations. Prior to today, the network-attached storage vendor had shown no signs of swerving from the decision, painting it as a pro-consumer move intended to enhance reliability. “Extensive internal testing has shown that drives that follow a rigorous validation process when paired with Synology systems are at less risk of drive failure and ongoing compatibility issues,” Synology previously claimed in an email to Ars.

What is a “verified” or “validated” drive?

Synology first released its own brand of hard disk drives back in 2021 and began requiring their use in a small but soon-to-increase number of its higher-end NAS products. Although the drives were rebadged offerings from other manufacturers—there are very few hard disk drive OEMs, and Synology isn’t one of them—the company claimed that its branded disks underwent significant additional validation and testing that, when coupled with customized firmware, yielded reliability and performance improvements over off-the-shelf components.

However, those drives came with what was in some cases a substantial price increase over commodity hardware. Although I couldn’t find an actual published MSRP list, some spot checking on several web stores shows that the Synology HAT5310 enterprise SATA drive (a drive with the same warranty and expected service life as a Seagate Exos or Western Digital Gold) is available in 8TB at $299, 12TB at $493, and 20TB at an eye-watering $605. (For comparison, identically sized Seagate Exos disks are $220 at 8TB, $345 at 12TB, and $399 at 20TB.) Other Synology drive models tell similar pricing stories.
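For a concrete sense of the premium, here is a minimal back-of-the-envelope comparison in Python. The figures are simply the street prices cited above, not official MSRPs, and will drift over time.

    # Rough per-terabyte comparison of the Synology HAT5310 vs. Seagate Exos,
    # using the spot prices quoted in this article (street prices, not MSRPs).
    prices = {
        8:  {"HAT5310": 299, "Exos": 220},
        12: {"HAT5310": 493, "Exos": 345},
        20: {"HAT5310": 605, "Exos": 399},
    }

    for tb, p in prices.items():
        premium = p["HAT5310"] - p["Exos"]
        print(
            f"{tb}TB: HAT5310 ${p['HAT5310']} (${p['HAT5310'] / tb:.0f}/TB) vs. "
            f"Exos ${p['Exos']} (${p['Exos'] / tb:.0f}/TB), "
            f"premium ${premium} ({premium / p['Exos']:.0%})"
        )

At 20TB, that works out to a premium of a little over 50 percent for the Synology-branded drive.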


A Synology DS1525+ NAS, which up until today would scream at you unless you filled it with special Synology-branded disks.

Credit: Synology


If you put non-verified drives in a Synology NAS that required verified drives, certain functionality would be reduced or potentially removed, depending on the specific model of disks you were introducing. Additionally, the Synology DSM interface would spam you with large “DANGER” warnings that your data might not be safe. Synology also at first refused to display S.M.A.R.T. diagnostic information from unverified drives, though this particular restriction was eventually lifted.

Savvy sysadmins could disable the verified drive requirements altogether by using one of several different workarounds, though that kind of thing opens one up to a different kind of danger—the danger of depending on an unsupported configuration tweak to keep a production system fully online and functional. It’s not a big deal for home users, but for business users relying on a Synology system at work with people’s livelihoods involved, the should-I-or-shouldn’t-I calculus of using such a workaround gets murkier. Synology is likely banking on the fact that if your business is of a certain size and you’re spending someone else’s money, a few hundred bucks more on each disk drive for peace of mind and a smoothly functioning NAS might seem like less of a speed bump than it would to a homelab admin spending money out of their own pocket.

While Synology’s claims about its validated drives having undergone extensive testing and yielding some performance benefit do hold water (at least under the specific benchmark circumstances called out on Synology’s drive page), it’s very difficult for me to see Synology’s actions here as anything other than an attempt to squeeze additional revenue out of what the company thought to be an exploitable market segment.

Enterprise storage companies like Dell-EMC enjoy vast margins on high-end storage gear—margins that don’t exist down in the consumer and SMB space where Synology is usually found. So the company decided to be the change it wanted to see in the world and created a way to extract those margins by making expensive custom hard disk drives mandatory (at least in a “nice data you got there, it’d be a shame if something happened to it—better use our disks” kind of way) for more and more products.

Unfortunately for Synology, today is not 2021, and the prosumer/SMB NAS market is getting downright crowded. In addition to long-time players like QNAP that continue to pump out new products, up-and-comer UGREEN is taking market share from Synology in the consumer areas where Synology has traditionally been most successful, and even Ubiquiti is making a run at the mid-market with its own line of Unifi-integrated NAS devices. Synology’s verified drive rent-seeking has made the brand practically impossible to recommend over competitors’ offerings for any use case without significant caveats. At least, up until today’s backpedaling.

When asked about the reasoning behind the change, a Synology representative gave the following statement via email: “First and foremost, our goal is to create reliable and secure solutions for user’s data, which is what drives our decisions as a company, including this original one. We are continuing with our validation program, working with third-party vendors to test their drives under the same rigorous testing we put our branded drives through, so we will still uphold those standards that we have set for ourselves. However, based on user feedback and to provide more flexibility in drive choices since testing third party drives has taken a while, we’re opening up the drive policy to include non-verified drives.”

As part of the same exchange, I asked Synology if they’re aware that—at least anecdotally, from what I see among the IT-savvy Ars audience—this change has caused reputational damage among a significant number of existing and potential Synology customers. “While our original goal was to improve system reliability by focusing on a smaller set of validated configurations,” the company representative replied, “our valued community has shared feedback that flexibility is equally important. We are committed to our user’s experience and we understand that this decision didn’t align with their expectations of us. We value their input and will utilize it as we move forward.”

The about-face

As of the October 8 release of DSM 7.3, the input has been utilized. Here’s the full section from the company’s DSM 7.3 announcement:

As a part of its mission statement, Synology is committed to delivering reliable, high-performance storage systems. This commitment has led to a standardized process of rigorous testing and validation for both hardware and software components, and has been an integral part of Synology’s development approach for many years. Both Synology storage drives and components validated through the third-party program undergo uniform testing processes to ensure they are able to provide the highest levels of reliability with DSM.

Synology is currently collaborating closely with third-party drive manufacturers to accelerate the testing and verification of additional storage drives, and will announce more updates as soon as possible. In the meantime, 25 model year DiskStation Plus, Value, and J series running DSM 7.3 will support the installation and storage pool creation of non-validated third-party drives. This provides users greater flexibility while Synology continues to expand the lineup of officially verified drives that meet long-term reliability standards.

The upshot is that the validated drive requirements are being removed from 2025 model-year Plus, Value, and J-series NAS devices. (Well, mostly removed—the press release indicates that pool and cache creation on M.2 disks “still requires drives on the HCL [hardware compatibility list].”)

We asked Synology whether the requirements will also be lifted from previous-generation Synology products—and the answer to that question appears to be a “no.”

“This change only affects the ’25 series models: DS725+, DS225+, DS425+, DS925+, DS1525+, DS1825+. Models in the xs+ line, like the DS3622xs+, are considered a business/enterprise model and will remain under the current HCL policy for our business lines,” Synology explained.

Updated with comments from Synology.


Lee is the Senior Technology Editor, and oversees story development for the gadget, culture, IT, and video sections of Ars Technica. A long-time member of the Ars OpenForum with an extensive background in enterprise storage and security, he lives in Houston.



Actually, we are going to tell you the odds of recovering New Glenn’s second launch

The only comparison available is SpaceX, with its Falcon 9 rocket. The company made its first attempt at a powered descent of the Falcon 9 into the ocean during its sixth launch in September 2013. On the vehicle’s ninth flight, it successfully made a controlled ocean landing. SpaceX made its first drone ship landing attempt in January 2015, a failure. Finally, on the vehicle’s 20th launch, SpaceX successfully put the Falcon 9 down on land, with the first successful drone ship landing following on the 23rd flight in April 2016.

SpaceX did not attempt to land every one of these 23 flights, but the company certainly experienced a number of failures as it worked to safely bring back an orbital rocket onto a small platform out at sea. Blue Origin’s engineers, some of whom worked at SpaceX at the time, have the benefit of those learnings. But it is still a very, very difficult thing to do on the second flight of a new rocket. The odds aren’t 3,720-to-1, but they’re probably not 75 percent, either.

Reuse a must for the bottom line

Nevertheless, for the New Glenn program to break even financially and eventually turn a profit, it must demonstrate reuse fairly quickly. According to multiple sources, the New Glenn first stage costs in excess of $100 million to manufacture. It is a rather exquisite piece of hardware, with many costs baked into the vehicle to make it rapidly reusable. But those benefits only come after a rocket is landed in good condition.

Under its nominal plan, Blue Origin intends to refurbish the “Never Tell Me The Odds” booster for the New Glenn program’s third flight, a highly anticipated launch of the Mark 1 lunar lander. Such a refurbishment—again, on a nominal timeline—could be accomplished within 90 days. That seems unlikely, though. SpaceX did not reuse the first Falcon 9 booster it landed, and the first booster to re-fly required 356 days of analysis and refurbishment.

Nevertheless, we’re not supposed to talk about the odds with this mission. So instead, we’ll just note that the hustle and ambition from Blue Origin are a welcome addition to the space industry, which benefits from both.



2025 Nobel Prize in Physics awarded for macroscale quantum tunneling


John Clarke, Michel H. Devoret, and John Martinis built an electrical circuit-based oscillator on a microchip.

A device consisting of four transmon qubits, four quantum buses, and four readout resonators fabricated by IBM in 2017. Credit: Jay M. Gambetta, Jerry M. Chow & Matthias Steffen/CC BY 4.0

The 2025 Nobel Prize in Physics has been awarded to John Clarke, Michel H. Devoret, and John M. Martinis “for the discovery of macroscopic quantum tunneling and energy quantization in an electrical circuit.” The Nobel committee said during a media briefing that the laureates’ work provides opportunities to develop “the next generation of quantum technology, including quantum cryptography, quantum computers, and quantum sensors.” The three men will split the $1.1 million (11 million Swedish kronor) prize money. The presentation ceremony will take place in Stockholm on December 10, 2025.

“To put it mildly, it was the surprise of my life,” Clarke told reporters by phone during this morning’s press conference. “Our discovery in some ways is the basis of quantum computing. Exactly at this moment where this fits in is not entirely clear to me. One of the underlying reasons that cellphones work is because of all this work.”

When physicists began delving into the strange new realm of subatomic particles in the early 20th century, they discovered a domain where the old, deterministic laws of classical physics no longer apply. Instead, uncertainty reigns supreme. It is a world governed not by absolutes, but by probabilities, where events that would seem impossible on the macroscale occur on a regular basis.

For instance, subatomic particles can “tunnel” through seemingly impenetrable energy barriers. Imagine an electron as a water wave trying to surmount a tall barrier. Unlike water, even if the electron’s wave is lower than the barrier, there is still a small probability that it will seep through to the other side.
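For readers who want the textbook version of that picture (a standard quantum mechanics estimate, not anything specific to the laureates’ work): for a particle of mass m and energy E striking a rectangular barrier of height V_0 > E and width L, the probability of tunneling through falls off exponentially with the barrier’s width and height, roughly

    T \approx e^{-2\kappa L}, \qquad \kappa = \frac{\sqrt{2m\,(V_0 - E)}}{\hbar}

which is why tunneling is routine for electrons yet utterly negligible for everyday objects.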

This neat little trick has been experimentally verified many times. In the 1950s, physicists devised a system in which the flow of electrons would hit an energy barrier and stop because they lacked sufficient energy to surmount that obstacle. But some electrons didn’t follow the established rules of behavior. They simply tunneled right through the energy barrier.


(l-r): John Clarke, Michel H. Devoret, and John M. Martinis. Credit: Niklas Elmehed/Nobel Prize Outreach

From subatomic to the macroscale

Clarke, Devoret, and Martinis were the first to demonstrate that quantum effects, such as quantum tunneling and energy quantization, can operate on macroscopic scales, not just one particle at a time.

After earning his PhD from the University of Cambridge, Clarke came to the University of California, Berkeley, as a postdoc, eventually joining the faculty in 1969. By the mid-1980s, Devoret and Martinis had joined Clarke’s lab as a postdoc and graduate student, respectively. The trio decided to look for evidence of macroscopic quantum tunneling using a specialized circuit called a Josephson junction—a macroscopic device that takes advantage of a tunneling effect that is now widely used in quantum computing, quantum sensing, and cryptography.

A Josephson junction—named after British physicist Brian Josephson, who won the 1973 Nobel Prize in physics—is basically two superconducting pieces separated by a thin insulating barrier. Despite this small gap between the two superconductors, electrons can still tunnel through the insulator and create a current. That occurs at sufficiently low temperatures, when the junction becomes superconducting as electrons form so-called “Cooper pairs.”
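Textbook treatments summarize the junction’s behavior with the two Josephson relations (quoted here as general background, not as the laureates’ specific analysis), which tie the supercurrent I and the voltage V to the phase difference φ between the superconducting wave functions on either side of the insulator:

    I = I_c \sin\varphi, \qquad V = \frac{\hbar}{2e}\,\frac{d\varphi}{dt}

Here I_c is the junction’s critical current; as long as the bias current stays below I_c, the phase sits trapped in a potential well and no voltage appears across the junction.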

The team built an electrical circuit-based oscillator on a microchip measuring about one centimeter in size—essentially a quantum version of the classic pendulum. Their biggest challenge was figuring out how to reduce the noise in their experimental apparatus. For their experiments, they first fed a weak current into the junction and measured the voltage—initially zero. Then they increased the current and measured how long it took for the system to tunnel out of its zero-voltage state and produce a voltage.

Credit: Johan Jarnestad/The Royal Swedish Academy of Sciences

They took many measurements and found that the average escape current increased as the device’s temperature fell, as expected. But below a certain crossover temperature, the average current stopped depending on temperature at all, a telltale signature of macroscopic quantum tunneling.
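In the standard analysis of such escape measurements (a textbook sketch of the general idea, not a formula taken from the team’s papers), the rate at which the junction escapes its zero-voltage state has a thermally activated piece that depends on temperature and a tunneling piece that does not:

    \Gamma_{\text{thermal}} \propto e^{-\Delta U / k_B T}, \qquad \Gamma_{\text{quantum}} \propto e^{-a\,\Delta U / \hbar\omega_p}

where ΔU is the height of the potential barrier, ω_p is the junction’s plasma frequency, and a is a numerical constant. Below a crossover temperature the tunneling term dominates, and the escape behavior no longer changes with temperature.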

The team also demonstrated that the Josephson junction exhibited quantized energy levels—meaning the energy of the system was limited to only certain allowed values, just like subatomic particles can gain or lose energy only in fixed, discrete amounts—confirming the quantum nature of the system. Their discovery effectively revolutionized quantum science, since other scientists could now test precise quantum physics on silicon chips, among other applications.

Lasers, superconductors, and superfluid liquids exhibit quantum mechanical effects at the macroscale, but these arise by combining the behavior of microscopic components. Clarke, Devoret, and Martinis were able to create a macroscopic effect—a measurable voltage—from a single macroscopic quantum state. Their system contained billions of Cooper pairs filling the entire superconductor on the chip, yet all of them were described by a single wave function. The circuit behaved like a large-scale artificial atom.

In fact, their circuit was basically a rudimentary qubit. Martinis showed in a subsequent experiment that such a circuit could be an information-bearing unit, with the lowest energy state and the first step upward functioning as a 0 and a 1, respectively. This paved the way for such advances as the transmon in 2007: a superconducting charge qubit with reduced sensitivity to noise.

“That quantization of the energy levels is the source of all qubits,” said Irfan Siddiqi, chair of UC Berkeley’s Department of Physics and one of Devoret’s former postdocs. “This was the grandfather of qubits. Modern qubit circuits have more knobs and wires and things, but that’s just how to tune the levels, how to couple or entangle them. The basic idea that Josephson circuits could be quantized and were quantum was really shown in this experiment. The fact that you can see the quantum world in an electrical circuit in this very direct way was really the source of the prize.”

So perhaps it is not surprising that Martinis left academia in 2014 to join Google’s quantum computing efforts, helping to build a quantum computer the company claimed had achieved “quantum supremacy” in 2019. Martinis left in 2020 and co-founded a quantum computing startup, Qolab, in 2022. His fellow Nobel laureate, Devoret, now leads Google’s quantum computing division and is also a faculty member at the University of California, Santa Barbara. As for Clarke, he is now a professor emeritus at UC Berkeley.

“These systems bridge the gap between microscopic quantum behavior and macroscopic devices that form the basis for quantum engineering,” Gregory Quiroz, an expert in quantum information science and quantum algorithms at Johns Hopkins University, said in a statement. “The rapid progress in this field over the past few decades—in part fueled by their critical results—has allowed superconducting qubits to go from small-scale laboratory experiments to large-scale, multi-qubit devices capable of realizing quantum computation. While we are still on the hunt for undeniable quantum advantage, we would not be where we are today without many of their key contributions to the field.”

As is often the case with fundamental research, none of the three physicists realized at the time how significant their discovery would be in terms of its impact on quantum computing and other applications.

“This prize really demonstrates what the American system of science has done best,” Jonathan Bagger, CEO of the American Physical Society, told the New York Times. “It really showed the importance of the investment in research for which we do not yet have an application, because we know that sooner or later, there will be an application.”


Jennifer is a senior writer at Ars Technica with a particular focus on where science meets culture, covering everything from physics and related interdisciplinary topics to her favorite films and TV series. Jennifer lives in Baltimore with her spouse, physicist Sean M. Carroll, and their two cats, Ariel and Caliban.



Natural disasters are a rising burden for the National Guard


New Pentagon data show climate impacts shaping reservists’ mission.

National Guard soldiers search for people stranded by flooding in the aftermath of Hurricane Helene on September 27, 2024, in Steinhatchee, Florida. Credit: Sean Rayford/Getty Images

This article originally appeared on Inside Climate News, a nonprofit, non-partisan news organization that covers climate, energy, and the environment. Sign up for their newsletter here.

The National Guard logged more than 400,000 member service days per year over the past decade responding to hurricanes, wildfires, and other natural disasters, the Pentagon has revealed in a report to Congress.

The numbers mean that on any given day, 1,100 National Guard troops on average have been deployed on disaster response in the United States.

Congressional investigators believe this is the first public accounting by the Pentagon of the cumulative burden of natural disaster response on the nation’s military reservists.

The data reflect greater strain on the National Guard and show the potential stakes of the escalating conflict between states and President Donald Trump over use of the troops. Trump’s drive to deploy the National Guard in cities as an auxiliary law enforcement force—an effort curbed by a federal judge over the weekend—is playing out at a time when governors increasingly rely on reservists for disaster response.

In the legal battle over Trump’s efforts to deploy the National Guard in Portland, Oregon, that state’s attorney general, Dan Rayfield, argued in part that Democratic Gov. Tina Kotek needed to maintain control of the Guard in case they were needed to respond to wildfire—including a complex of fires now burning along the Rogue River in southwest Oregon.

The Trump administration, meanwhile, rejects the science showing that climate change is worsening natural disasters and has ceased Pentagon efforts to plan for such impacts or reduce its own carbon footprint.

The Department of Defense recently provided the natural disaster figures to four Democratic senators as part of a response to their query in March to Defense Secretary Pete Hegseth regarding planned cuts to the military’s climate programs. Sen. Elizabeth Warren of Massachusetts, who led the query on behalf of herself and three other members of the Senate Committee on Armed Services, shared the response with Inside Climate News.

“The effects of climate change are destroying the military’s infrastructure—Secretary Hegseth should take that threat seriously,” Warren told ICN in an email. “This data shows just how costly this threat already is for the National Guard to respond to natural disasters. Failing to act will only make these costs skyrocket.”

Neither the Department of Defense nor the White House immediately responded to a request for comment.

Last week, Hegseth doubled down on his vow to erase climate change from the military’s agenda. “No more climate change worship,” Hegseth exhorted, before an audience of senior officials he summoned to Marine Corps Base Quantico in Virginia on October 1. “No more division, distraction, or gender delusions. No more debris,” he said. Departing from the prepared text released by the Pentagon, he added, “As I’ve said before, and will say again, we are done with that shit.”

But the data released by the Pentagon suggest that the impacts of climate change are shaping the military’s duties, even if the department ceases acknowledging the science or planning for a warming future. In 2024, National Guard paid duty days on disaster response—445,306—had nearly tripled compared to nine years earlier, with significant fluctuations in between. (The Pentagon provided the figures in terms of “mandays,” or paid duty days over and above reservists’ required annual training days.)

Demand for reservist deployment on disaster assistance over those years peaked at 1.25 million duty days in 2017, when Hurricanes Harvey, Irma, and Maria unleashed havoc in Texas, Florida, and Puerto Rico.

The greatest deployment of National Guard members in response to wildfire over the past decade came in 2023, when wind-driven wildfires tore across Maui, leaving more than 100 people dead. Called into action by Gov. Josh Green, the Hawaii National Guard performed aerial water drops in CH-47 Chinook helicopters. On the ground, they helped escort fleeing residents, aided in search and recovery, distributed potable water, and performed other tasks.

Sen. Mazie Hirono of Hawaii, Sen. Richard Blumenthal of Connecticut, and Sen. Tammy Duckworth of Illinois joined Warren in seeking numbers on National Guard natural disaster deployment from the Pentagon.

It was not immediately possible to compare National Guard disaster deployment over the last decade to prior decades, since the Pentagon has not published a similar accounting for years prior to 2015.

But last year, a study by the Rand Corporation, a research firm, on stressors for the National Guard said that service leaders believed that natural disaster response missions were growing in scale and intensity.

“Seasons for these events are lasting longer, the extent of areas that are seeing these events is bigger, and the storms that occur are seemingly more intense and therefore more destructive,” noted the Rand study, produced for the Pentagon. “Because of the population density changes that have occurred, the devastation that can result and the population that can be affected are bigger as well.”

A history of the National Guard published by the Pentagon in 2001 describes the 1990s as a turning point for the service, marked by increasing domestic missions owing in part to “a nearly continuous string” of natural disasters.

One of those disasters was Hurricane Andrew, which ripped across southern Florida on August 23, 1992, causing more property damage than any storm in US history to that point. The crisis led to conflict between President George H.W. Bush’s administration and Florida’s Democratic governor, Lawton Chiles, over control of the National Guard and who should bear the blame for a lackluster initial response.

The National Guard, with 430,000 civilian soldiers, is a unique military branch that serves under both state and federal command. In Iraq and Afghanistan, for example, the president called on reservists to serve alongside the active-duty military. But state governors typically are commanders-in-chief for Guard units, calling on them in domestic crises, including natural disasters. The president only has limited legal authority to deploy the National Guard domestically, and such powers nearly always have been used in coordination with state governors.

But Trump has broken that norm and tested the boundaries of the law. In June, he deployed the National Guard for law and immigration enforcement in Los Angeles in defiance of Democratic Gov. Gavin Newsom. (Trump also deployed the Guard in Washington, DC, where members already are under the president’s command.) Over the weekend, Trump’s plans to deploy the Guard in Portland, Oregon, were put on hold by US District Judge Karin J. Immergut, a Trump appointee. She issued a second, broader stay on Sunday to block Trump from an attempt to deploy California National Guard members to Oregon. Nevertheless, the White House moved forward with an effort to deploy the Guard to Chicago in defiance of Illinois Gov. J.B. Pritzker, a Democrat. In that case, Trump is calling on Guard members from a politically friendly state, Texas, and a federal judge has rejected a bid by both the city of Chicago and the state of Illinois to block the move.

The conflicts could escalate should a natural disaster occur in a state where Trump has called the Guard into service on law enforcement, one expert noted.

“At the end of the day, it’s a political problem,” said Mark Nevitt, a professor at Emory University School of Law and a Navy veteran who specializes in the national security implications of climate change. “If, God forbid, there’s a massive wildfire in Oregon and there’s 2,000 National Guard men and women who are federalized, the governor would have to go hat-in-hand to President Trump” to get permission to redeploy the service members for disaster response, he said.

“The state and the federal government, most times it works—they are aligned,” Nevitt said. “But you can imagine a world where the president essentially refuses to give up the National Guard because he feels as though the crime-fighting mission has primacy over whatever other mission the governor wants.”

That scenario may already be unfolding in Oregon. On September 27, the same day that Trump announced his intent to send the National Guard into Portland, Kotek was mobilizing state resources to fight the Moon Complex Fire on the Rogue River, which had tripled in size due to dry winds. That fire is now 20,000 acres and only 10 percent contained. Pointing to that fire, Oregon Attorney General Rayfield told the court the Guard should remain ready to respond if needed, noting the role reservists played in responding to major Oregon fires in 2017 and 2020.

“Wildfire response is one of the most significant functions the Oregon National Guard performs in the State,” Rayfield argued in a court filing Sunday.

Although Oregon won a temporary stay, the Trump administration is appealing that order. And given the increasing role of the National Guard in natural disaster response, according to the Pentagon’s figures, the legal battle will have implications far beyond Portland. It will determine whether governors like Kotek will be forced to negotiate with Trump for control of the National Guard amid a crisis that his administration is seeking to downplay.




Qualcomm is buying Arduino, releases new Raspberry Pi-esque Arduino board

Smartphone processor and modem maker Qualcomm is acquiring Arduino, the Italian company known mainly for its open source ecosystem of microcontrollers and the software that makes them function. In its announcement, Qualcomm said that Arduino would “[retain] its brand and mission,” including its “open source ethos” and “support for multiple silicon vendors.”

“Arduino will retain its independent brand, tools, and mission, while continuing to support a wide range of microcontrollers and microprocessors from multiple semiconductor providers as it enters this next chapter within the Qualcomm family,” Qualcomm said in its press release. “Following this acquisition, the 33M+ active users in the Arduino community will gain access to Qualcomm Technologies’ powerful technology stack and global reach. Entrepreneurs, businesses, tech professionals, students, educators, and hobbyists will be empowered to rapidly prototype and test new solutions, with a clear path to commercialization supported by Qualcomm Technologies’ advanced technologies and extensive partner ecosystem.”

Qualcomm didn’t disclose what it would pay to acquire Arduino. The acquisition also needs to be approved by regulators and is subject to “other customary closing conditions.”

The first fruit of this pending acquisition will be the Arduino Uno Q, a Qualcomm-based single-board computer with a Qualcomm Dragonwing QRB2210 processor installed. The QRB2210 includes a quad-core Arm Cortex-A53 CPU and a Qualcomm Adreno 702 GPU, plus Wi-Fi and Bluetooth connectivity, and the board combines that with a real-time microcontroller “to bridge high-performance computing with real-time control.”



Ted Cruz picks a fight with Wikipedia, accusing platform of left-wing bias

Cruz pressures Wikipedia after criticizing FCC chair

Cruz sent the letter about two weeks after criticizing Federal Communications Commission Chairman Brendan Carr for threatening ABC with station license revocations over political content on Jimmy Kimmel’s show. Cruz said that using the government to dictate what the media can say “will end up bad for conservatives” because when Democrats are back in power, “they will silence us, they will use this power, and they will use it ruthlessly.” Cruz said that Carr threatening ABC was like “a mafioso coming into a bar going, ‘Nice bar you have here, it’d be a shame if something happened to it.'”

Cruz, who chairs the Senate Commerce Committee, doesn’t mind using his authority to pressure Wikipedia’s operator, however. “The Standing Rules of the Senate grant the Committee on Commerce, Science, and Transportation jurisdiction over communications, including online information platforms,” he wrote to the Wikimedia Foundation. “As the Chairman of the Committee, I request that you provide written responses to the questions below, as well as requested documents, no later than October 17, 2025, and in accordance with the attached instructions.”

We asked Cruz’s office to explain why a senator pressuring Wikipedia is appropriate while an FCC chair pressuring ABC is not and will update this article if we get a response.

Among other requests, Cruz asked for “documents sufficient to show what supervision, oversight, or influence, if any, the Wikimedia Foundation has over the editing community,” and “documents sufficient to show how the Wikimedia Foundation addresses political or ideological bias.”

Cruz has separately been launching investigations into the Biden administration for alleged censorship. He issued a report allegedly “revealing how the Biden administration transformed the Cybersecurity and Infrastructure Security Agency (CISA) into an agent of censorship pressuring Big Tech to police speech,” and scheduled a hearing for Wednesday titled, “Shut Your App: How Uncle Sam Jawboned Big Tech Into Silencing Americans.”

Cruz’s letter to Wikimedia seeks evidence that could figure into his ongoing investigations into the Biden administration. “Provide any and all documents and communications—including emails, texts, or other digital messages—between any officer, employee, or agent of the Wikimedia Foundation and any officer, employee, or agent of the federal government since January 1, 2020,” the letter said.



OpenAI, Jony Ive struggle with technical details on secretive new AI gadget

OpenAI overtook Elon Musk’s SpaceX to become the world’s most valuable private company this week, after a deal that valued it at $500 billion. One of the ways the ChatGPT maker is seeking to justify the price tag is a push into hardware.

The goal is to improve the “smart speakers” of the past decade, such as Amazon’s Echo speaker and its Alexa digital assistant, which are generally used for a limited set of functions such as listening to music and setting kitchen timers.

OpenAI and Ive are seeking to build a more powerful and useful machine. But two people familiar with the project said that settling on the device’s “voice” and its mannerisms was a challenge.

One issue is ensuring the device only chimes in when useful, preventing it from talking too much or not knowing when to finish the conversation—an ongoing issue with ChatGPT.



“The concept is that you should have a friend who’s a computer who isn’t your weird AI girlfriend… like [Apple’s digital voice assistant] Siri but better,” said one person who was briefed on the plans. OpenAI was looking for “ways for it to be accessible but not intrusive.”

“Model personality is a hard thing to balance,” said another person close to the project. “It can’t be too sycophantic, not too direct, helpful, but doesn’t keep talking in a feedback loop.”

OpenAI’s device will be entering a difficult market. Friend, an AI companion worn as a pendant around your neck, has been criticized for being “creepy” and having a “snarky” personality. An AI pin made by Humane, a company that OpenAI CEO Sam Altman personally invested in, has been scrapped.

Still, OpenAI has been on a hiring spree to build its hardware business. Its acquisition of io brought in more than 20 former Apple hardware employees poached by Ive from his former employer. It has also recruited at least a dozen other Apple device experts this year, according to LinkedIn accounts.

It has similarly poached members of Meta’s staff working on the Big Tech group’s Quest headset and smart glasses.

OpenAI is also working with Chinese contract manufacturers, including Luxshare, to create its first device, according to two people familiar with the development that was first reported by The Information. The people added that the device might be assembled outside of China.

OpenAI and LoveFrom, Ive’s design group, declined to comment.

© 2025 The Financial Times Ltd. All rights reserved. Not to be redistributed, copied, or modified in any way.



Elon Musk tries to make Apple and mobile carriers regret choosing Starlink rivals

SpaceX holds spectrum licenses for the Starlink fixed Internet service for homes and businesses. Adding the EchoStar spectrum will make its holdings suitable for mobile service.

“SpaceX currently holds no terrestrial spectrum authorizations and no license to use spectrum allocated on a primary basis to MSS,” the company’s FCC filing said. “Its only authorization to provide any form of mobile service is an authorization for secondary SCS [Supplemental Coverage from Space] operations in spectrum licensed to T-Mobile.”

Starlink unlikely to dethrone major carriers

SpaceX’s spectrum purchase doesn’t make it likely that Starlink will become a fourth major carrier. Grand claims of that sort are “complete nonsense,” wrote industry analyst Dean Bubley. “Apart from anything else, there’s one very obvious physical obstacle: walls and roofs,” he wrote. “Space-based wireless, even if it’s at frequencies supported in normal smartphones, won’t work properly indoors. And uplink from devices to satellites will be even worse.”

When you’re indoors, “there’s more attenuation of the signal,” resulting in lower data rates, Farrar said. “You might not even get megabits per second indoors, unless you are going to go onto a home Starlink broadband network,” he said. “You might only be able to get hundreds of kilobits per second in an obstructed area.”

The Mach33 analyst firm is more bullish than others regarding Starlink’s potential cellular capabilities. “With AWS-4/H-block and V3 [satellites], Starlink DTC is no longer niche, it’s a path to genuine MNO competition. Watch for retail mobile bundles, handset support, and urban hardware as the signals of that pivot,” the firm said.

Mach33’s optimism is based in part on the expectation that SpaceX will make more deals. “DTC isn’t just a coverage filler, it’s a springboard. It enables alternative growth routes; M&A, spectrum deals, subleasing capacity in denser markets, or technical solutions like mini-towers that extend Starlink into neighborhoods,” the group’s analysis said.

The amount of spectrum SpaceX is buying from EchoStar is just a fraction of what the national carriers control. There is “about 1.1 GHz of licensed spectrum currently allocated to mobile operators,” wireless lobby group CTIA said in a January 2025 report. The group also says the cellular industry has over 432,000 active cell sites around the US.

What Starlink can offer cellular users “is nothing compared to the capacity of today’s 5G networks,” but it would be useful “in less populated areas or where you cannot get coverage,” Rysavy said.

Starlink has about 8,500 satellites in orbit. Rysavy estimated in a July 2025 report that about 280 of them are over the United States at any given time. These satellites are mostly providing fixed Internet service in which an antenna is placed outside a building so that people can use Wi-Fi indoors.

SpaceX’s FCC filing said the EchoStar spectrum’s mix of terrestrial and satellite frequencies will be ideal for Starlink.

“By acquiring EchoStar’s market-access authorization for 2 GHz MSS as well as its terrestrial AWS-4 licenses, SpaceX will be able to deploy a hybrid satellite and terrestrial network, just as the Commission envisioned EchoStar would do,” SpaceX said. “Consistent with the Commission’s finding that potential interference between MSS and terrestrial mobile service can best be managed by enabling a single licensee to control both networks, assignment of the AWS-4 spectrum is critical to enable SpaceX to deploy robust MSS service in this band.”



Trump admin defiles even the “out of office” email auto-reply

Well—not “Democrats,” exactly, but “Democrat Senators.” The use of the noun “Democrat” as an adjective (e.g., “the Democrat Party”) is a long-standing and deliberate right-wing refusal to call the opposition by its name. (If you visit the Democrats’ website, the very first words below the site header are “We are the Democratic Party”; the party is run by the “Democratic National Committee.”) Petty? Sure! But that’s a feature, not a bug.

Similar out-of-office suggestions have been made to employees at the Small Business Administration and the Department of Health and Human Services. Such messages appear to be violations of the Hatch Act, which prohibits partisan speech from most executive branch employees while they are on duty, since these people represent and work for all Americans.

The Office of Special Counsel, which is supposed to prosecute violations of the Hatch Act, notes in a training flyer that most executive branch workers “may not engage in political activity—i.e., activity directed at the success or failure of a political party.”

Employees may also not “use any e-mail account or social media to distribute, send, or forward content that advocates for or against a partisan political party.”

When asked about its suggested out-of-office message blaming Democrats, the Department of Health and Human Services told CNN that yes, it had suggested this—but added that this was okay because the partisan message was accurate.

“Employees were instructed to use out-of-office messages that reflect the truth: Democrats have shut the government down,” the agency said.

Truly, as even a sitting Supreme Court justice has noted, the “rule of law” has now become “Calvinball.”

Websites, too

Department websites have also gotten in on the partisan action. The Department of Housing and Urban Development’s site now loads with a large floating box atop the page, which reads, “The Radical Left in Congress shut down the government.” When you close the box, you see atop the main page itself an eye-searingly red banner that says… the same thing. Thanks, I think we got it!



Meta won’t allow users to opt out of targeted ads based on AI chats

Facebook, Instagram, and WhatsApp users may want to be extra careful while using Meta AI, as Meta has announced that it will soon be using AI interactions to personalize content and ad recommendations without giving users a way to opt out.

Meta plans to notify users on October 7 that their AI interactions will influence recommendations beginning on December 16. However, it may not be immediately obvious to all users that their AI interactions will be used in this way.

The company’s blog noted that the initial notification users will see only says, “Learn how Meta will use your info in new ways to personalize your experience.” Users will have to click through to understand that the changes specifically apply to Meta AI, with a second screen explaining, “We’ll start using your interactions with AIs to personalize your experience.”

Ars asked Meta why the initial notification doesn’t directly mention AI, and Meta spokesperson Emil Vazquez said he “would disagree with the idea that we are obscuring this update in any way.”

“We’re sending notifications and emails to people about this change,” Vazquez said. “As soon as someone clicks on the notification, it’s immediately apparent that this is an AI update.”

In its blog post, Meta noted that “more than 1 billion people use Meta AI every month,” stating its goals are to improve the way Meta AI works in order to fuel better experiences on all Meta apps. Sensitive conversations with Meta AI about topics such as “religious views, sexual orientation, political views, health, racial or ethnic origin, philosophical beliefs, or trade union membership” will not be used to target ads, Meta confirmed.

“You’re in control,” Meta’s blog said, reiterating that users can “choose” how they “interact with AIs,” unlink accounts on different apps to limit AI tracking, or adjust ad and content settings at any time. But once the tracking starts on December 16, users will not have the option to opt out of targeted ads based on AI chats, Vazquez confirmed, emphasizing to Ars that “there isn’t an opt out for this feature.”



How America fell behind China in the lunar space race—and how it can catch back up


Thanks to some recent reporting, we’ve found a potential solution to the Artemis blues.


NASA Administrator Jim Bridenstine says that competition is good for the Artemis Moon program. Credit: NASA


For the last month, NASA’s interim administrator, Sean Duffy, has been giving interviews and speeches around the world, offering a singular message: “We are going to beat the Chinese to the Moon.”

This is certainly what the president who appointed Duffy to the NASA post wants to hear. Unfortunately, there is a very good chance that Duffy’s sentiment is false. Privately, many people within the space industry, and even at NASA, acknowledge that the US space agency appears to be holding a losing hand. Recently, some influential voices, such as former NASA Administrator Jim Bridenstine, have spoken out.

“Unless something changes, it is highly unlikely the United States will beat China’s projected timeline to the Moon’s surface,” Bridenstine said in early September.

As the debate about NASA potentially losing the “second” space race to China heats up in Washington, DC, everyone is pointing fingers. But no one is really offering answers for how to beat China’s ambitions to land taikonauts on the Moon as early as the year 2029. So I will. The purpose of this article is to articulate how NASA ended up falling behind China, and more importantly, how the Western world could realistically retake the lead.

But first, space policymakers must learn from their mistakes.

Begin at the beginning

Thousands of words could be written about the space policy created in the United States over the last two decades and all of the missteps. However, this article will only hit the highlights (lowlights). And the story begins in 2003, when two watershed events occurred.

The first of these was the loss of space shuttle Columbia in February, the second fatal shuttle accident. It signaled that the shuttle era was nearing its end and kicked off a period of soul-searching at NASA and in Washington, DC, about what the space agency should do next.

“There’s a crucial year after the Columbia accident,” said eminent NASA historian John Logsdon. “President George W. Bush said we should go back to the Moon. And the result of the assessment after Columbia is NASA should get back to doing great things.” For NASA, this meant creating a new deep space exploration program for astronauts, be it the Moon, Mars, or both.

The other key milestone in 2003 came in October, when Yang Liwei flew into space and China became the third country capable of human spaceflight. After his 21-hour spaceflight, Chinese leaders began to more deeply appreciate the soft power that came with spaceflight and started to commit more resources to related programs. Long-term, the Asian nation sought to catch up to the United States in terms of spaceflight capabilities and eventually surpass the superpower.

It was not much of a competition then. China would not take its first tentative steps into deep space for another four years, with the Chang’e 1 lunar orbiter. NASA had already walked on the Moon and sent spacecraft across the Solar System and even beyond.

So how did the United States squander such a massive lead?

Mistakes were made

SpaceX and its complex Starship lander are getting the lion’s share of the blame today for delays to NASA’s Artemis Program. But the company and its lunar lander version of Starship are just the final steps on a long, winding path that got the United States where it is today.

After Columbia, the Bush White House, with its NASA Administrator Mike Griffin, looked at a variety of options (see, for example, the Exploration Systems Architecture Study in 2005). But Griffin had a clear plan in his mind that he dubbed “Apollo on Steroids,” and he sought to develop a large rocket (Ares V), spacecraft (later to be named Orion), and a lunar lander to accomplish a lunar landing by 2020. Collectively, this became known as the Constellation Program.

It was a mess. Congress did not provide NASA the funding it needed, and the rocket and spacecraft programs quickly ran behind schedule. At one point, to pay for surging Constellation costs, NASA absurdly mulled canceling the just-completed International Space Station. By the end of the first decade of the 2000s, two things were clear: NASA was going nowhere fast, and the program’s only achievement was to enrich the legacy space contractors.

By early 2010, after spending a year assessing the state of play, the Obama administration sought to cancel Constellation. It ran into serious congressional pushback, powered by lobbying from Boeing, Lockheed Martin, Northrop Grumman, and other key legacy contractors.

The Space Launch System was created as part of a political compromise between Sen. Bill Nelson (D-Fla.) and senators from Alabama and Texas.

Credit: Chip Somodevilla/Getty Images


The Obama White House wanted to cancel both the rocket and the spacecraft and hold a competition for the private sector to develop a heavy lift vehicle. Their thinking: Only with lower-cost access to space could the nation afford to have a sustainable deep space exploration plan. In retrospect, it was the smart idea, but Congress was not having it. In 2011, Congress saved Orion and ordered a slightly modified rocket—it would still be based on space shuttle architecture to protect key contractors—that became the Space Launch System.

Then the Obama administration, with its NASA leader Charles Bolden, cast about for something to do with this hardware. They started talking about a “Journey to Mars.” But it was all nonsense. There was never any there there. Essentially, NASA lost a decade, spending billions of dollars a year developing “exploration” systems for humans and talking about fanciful missions to the red planet.

There were critics of this approach, myself included. In 2014, I authored a seven-part series at the Houston Chronicle called Adrift, the title referring to the direction of NASA’s deep space ambitions. The fundamental problem is that NASA, at the direction of Congress, was spending all of its exploration funds developing Orion, the SLS rocket, and ground systems for some future mission. This made the big contractors happy, but their cost-plus contracts gobbled up so much funding that NASA had no money to spend on payloads or things to actually fly on this hardware.

This is why doubters called the SLS the “rocket to nowhere.” They were, sadly, correct.

The Moon, finally

Fairly early on in the first Trump administration, the new leader of NASA, Jim Bridenstine, managed to ditch the Journey to Mars and establish a lunar program. However, any efforts to consider alternatives to the SLS rocket were quickly rebuffed by the US Senate.

During his tenure, Bridenstine established the Artemis Program to return humans to the Moon. But Congress was slow to open its purse for elements of the program that would not clearly benefit a traditional contractor or NASA field center. Consequently, the space agency did not select a lunar lander until April 2021, after Bridenstine had left office. And NASA did not begin funding work on this until late 2021 due to a protest by Blue Origin. The space agency did not support a lunar spacesuit program for another year.

Much has been made about the selection of SpaceX as the sole provider of a lunar lander. Was it shady? Was the decision rushed before Bill Nelson was confirmed as NASA administrator? In truth, SpaceX was the only company that bid a value that NASA could afford with its paltry budget for a lunar lander (again, Congress prioritized SLS funding), and which had the capability the agency required.

To be clear, for a decade, NASA spent in excess of $3 billion a year on the development of the SLS rocket and its ground systems. That’s every year for a rocket that used main engines from the space shuttle, a similar version of its solid rocket boosters, and had a core stage the same diameter as the shuttle’s external tank. Thirty billion bucks for a rocket highly derivative of a vehicle NASA flew for three decades. SpaceX was awarded less than a single year of this funding, $2.9 billion, for the entire development of a Human Landing System version of Starship, plus two missions.

So yes, after 20 years, Orion appears to be ready to carry NASA astronauts out to the Moon. After 15 years, the shuttle-derived rocket appears to work. And after four years (and less than a tenth of the funding), Starship is not ready to land humans on the Moon.

When will Starship be ready?

Probably not any time soon.

For SpaceX and its founder, Elon Musk, the Artemis Program is a sidequest to the company’s real mission of sending humans to Mars. It simply is not a priority (and frankly, the limited funding from NASA does not compel prioritization). Due to its incredible ambition, the Starship program has also understandably hit some technical snags.

Unfortunately for NASA and the country, Starship still has a long way to go to land humans on the Moon. It must begin flying frequently (this could happen next year, finally). It must demonstrate the capability to transfer and store large amounts of cryogenic propellant in space. It must land on the Moon, a real challenge for such a tall vehicle, necessitating a flat surface that is difficult to find near the poles. And then it must demonstrate the ability to launch from the Moon, which would be unprecedented for cryogenic propellants.

Perhaps the biggest hurdle is the complexity of the mission. To fully fuel a Starship in low-Earth orbit to land on the Moon and take off would require multiple Starship “tanker” launches from Earth. No one can quite say how many because SpaceX is still working to increase the payload capacity of Starship, and no one has real-world data on transfer efficiency and propellant boiloff. But the number is probably at least a dozen missions. One senior source recently suggested to Ars that it may be as many as 20 to 40 launches.
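As a very rough illustration of why the tanker count is so hard to pin down (every number below is a placeholder assumption for the sketch, not a SpaceX or NASA figure), the flight count is essentially the propellant the lander needs divided by what each tanker actually delivers after transfer losses and boiloff:

    import math

    def tanker_flights(propellant_needed_t, payload_per_tanker_t, delivery_fraction):
        """Estimate tanker launches needed to load a lander in low-Earth orbit.

        All inputs are illustrative placeholders, not SpaceX data:
          propellant_needed_t   -- tons of propellant the lander must take on
          payload_per_tanker_t  -- tons of propellant each tanker lifts to orbit
          delivery_fraction     -- share surviving transfer losses and boiloff
        """
        delivered_per_flight = payload_per_tanker_t * delivery_fraction
        return math.ceil(propellant_needed_t / delivered_per_flight)

    # Example: ~1,200 t needed, 100 t lifted per tanker, 80 percent delivered.
    print(tanker_flights(1200, 100, 0.80))  # -> 15 flights

Shave the per-flight payload or assume heavier boiloff losses and the answer quickly climbs toward the 20 to 40 launches that one senior source suggested.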

The bottom line: It’s a lot. SpaceX is far and away the highest-performing space company in the Solar System. But putting all of the pieces together for a lunar landing will require time. Privately, SpaceX officials are telling NASA it can meet a 2028 timeline for Starship readiness for Artemis astronauts.

But that seems very optimistic. Very. It’s not something I would feel comfortable betting on, especially given that China plans to land on the Moon “before” 2030 and continues to make credible progress toward that date.

What are the alternatives?

Duffy’s continued public insistence that he will not let China beat the United States back to the Moon rings hollow. The shrewd people in the industry I’ve spoken with say Duffy is an intelligent person and is starting to realize that betting the entire farm on SpaceX at this point would be a mistake. It would be nice to have a plan B.

But please, stop gaslighting us. Stop blustering about how we’re going to beat China while losing a quarter of NASA’s workforce and watching your key contractors struggle with growing pains. Let’s have an honest discussion about the challenges and how we’ll solve them.

What few people have done is offer solutions to Duffy’s conundrum. Fortunately, we’re here to help. As I have conducted interviews in recent weeks, I have always closed by asking this question: “You’re named NASA administrator tomorrow. You have one job: get NASA astronauts safely back to the Moon before China. What do you do?”

I’ve received a number of responses, which I’ll boil down into the following buckets. None of these strike me as particularly practical solutions, which underscores the desperation of NASA’s predicament. However, recent reporting has uncovered one solution that probably would work. I’ll address that last. First, the other ideas:

  • Stubby Starship: Multiple people have suggested this option. Tim Dodd has even spoken about it publicly. Two of the biggest issues with Starship are the need for many refuelings and its height, which makes it difficult to land on uneven terrain. NASA does not need Starship’s incredible capability to land 100–200 metric tons on the lunar surface. It needs fewer than 10 tons for initial human missions. So shorten Starship, reduce its capability, and get it down to a handful of refuelings. It’s not clear how feasible this would be beyond armchair engineering. But the larger problem is that Musk wants Starship to get taller, not shorter, so SpaceX would probably not be willing to do this.
  • Surge CLPS funding: Since 2019, NASA has been awarding relatively small amounts of funding to private companies to land a few hundred kilograms of cargo on the Moon. NASA could dramatically increase funding to this program, say up to $10 billion, and offer prizes for the first and second companies to land two humans on the Moon. This would open the competition to other companies beyond SpaceX and Blue Origin, such as Firefly, Intuitive Machines, and Astrobotic. The problem is that time is running short, and scaling up from 100 kilograms to 10 metric tons is an extraordinary challenge.
  • Build the Lunar Module: NASA already landed humans on the Moon in the 1960s with a Lunar Module built by Grumman. Why not just build something similar again? In fact, some traditional contractors have been telling NASA and Trump officials this is the best option, that such a solution, with enough funding and cost-plus guarantees, could be built in two or three years. The problem with this is that, sorry, the traditional space industry just isn’t up to the task. It took more than a decade to build a relatively simple rocket based on the space shuttle. The idea that a traditional contractor will complete a Lunar Module in five years or less is not supported by any evidence in the last 20 years. The flimsy Lunar Module would also likely not pass NASA’s present-day safety standards.
  • Distract China: I include this only for completeness. As for how to distract China, use your imagination. But I would submit that ULA snipers or starting a war in the South China Sea is not the best way to go about winning the space race.

OK, I read this far. What’s the answer?

The answer is Blue Origin’s Mark 1 lander.

The company has finished assembly of the first Mark 1 lander and will soon ship it from Florida to Johnson Space Center in Houston for vacuum chamber testing. A pathfinder mission is scheduled to launch in early 2026. It will be the largest vehicle ever to land on the Moon. It is not rated for humans, however; it was designed as a cargo lander.

There have been some key recent developments, though. About two weeks ago, NASA announced that a second mission of Mark 1 will carry the VIPER rover to the Moon’s surface in 2027. This means that Blue Origin intends to start a production line of Mark 1 landers.

At the same time, Blue Origin already has a contract with NASA to develop the much larger Mark 2 lander, which is intended to carry humans to the lunar surface. Realistically, though, this will not be ready until sometime in the 2030s. Like SpaceX’s Starship, it will require multiple refueling launches. As part of this contract, Blue has worked extensively with NASA on a crew cabin for the Mark 2 lander.

A full-size mock-up of the Blue Origin Mk. 1 lunar lander. Credit: Eric Berger

Here comes the important part. Ars can now report, based on government sources, that Blue Origin has begun preliminary work on a modified version of the Mark 1 lander—leveraging learnings from Mark 2 crew development—that could be part of an architecture to land humans on the Moon this decade. NASA has not formally requested Blue Origin to work on this technology, but according to a space agency official, the company recognizes the urgency of the need.

How would it work? Blue Origin is still architecting the mission, but it would involve “multiple” Mark 1 landers to carry crew down to the lunar surface and then ascend back up to lunar orbit to rendezvous with the Orion spacecraft. Enough work has been done, according to the official, that Blue Origin engineers are confident the approach could work. Critically, it would not require any refueling.

It is unclear whether this solution has reached Duffy, but he would be smart to listen. According to sources, Blue Origin founder Jeff Bezos is intrigued by the idea. And why wouldn’t he be? For a quarter of a century, he has been hearing about how Musk has been kicking his ass in spaceflight. Bezos also loves the Apollo program and could now play an essential role in serving his country in an hour of need. He could beat SpaceX to the Moon and stamp his name in the history of spaceflight.

Jeff and Sean? Y’all need to talk.


Eric Berger is the senior space editor at Ars Technica, covering everything from astronomy to private space to NASA policy, and author of two books: Liftoff, about the rise of SpaceX; and Reentry, on the development of the Falcon 9 rocket and Dragon. A certified meteorologist, Eric lives in Houston.


hands-on-with-fallout-76’s-next-expansion:-yep,-it-has-walton-goggins

Hands-on with Fallout 76’s next expansion: Yep, it has Walton Goggins


TV tie-ins aside, it’s the combat tweaks over the past year that really matter.

There aren’t a lot of games set in Ohio, but here we are. Credit: Bethesda

Bethesda provided flights from Chicago to New York City so that Ars could participate in the preview opportunity for Fallout 76: Burning Springs. Ars does not accept paid editorial content.

Like anybody, I have a few controversial gaming opinions and tastes. One of the most controversial is that Fallout 76—the multiplayer take on Bethesda’s rethink of a beloved ’90s open-world computer roleplaying game—has been my favorite online multiplayer game since its launch.

As much as I like the game, though, I’ve been surprised that it has actually grown over the past seven years. I’m not saying it’s seen a full, No Man’s Sky-like redemption story, though. It’s still not for everyone, and in some ways, it has fallen behind the times since 2018.

Nevertheless, the success of the streaming TV show based on the game franchise has attracted new players and given the developers a chance to seize the moment and attempt to complete a partial redemption story. To help make that happen, the game’s developers will soon release an expansion fully capitalizing on that TV series for the first time, and I got to spend a few hours playing that update to see if it’s any fun.

That said, don’t get distracted by the shiny TV tie-in. The important work is a lot less flashy: combat overhauls, bug fixes, balance updates, quality-of-life improvements, and technological tweaks—all of which have been added to the game over time. Ultimately, that little stuff ends up mattering more to players than the big stuff.

With that in mind, let’s take a quick look at where things stand based on my seven years of regularly playing the game and a few hours with the next major expansion.

Months of combat and game balance overhauls

You probably already know that the game originally launched without NPCs or the kinds of story- and character-driven quests most people expect from Fallout, and that those things were added to the game in 2020, with more similar additions in the years since.

You could make a case that the original, NPC-free vision made sense for a certain kind of player, but that’s not the kind of player who tends to like Fallout games. Bethesda clearly pictured a Rust-like, emergent social PvP (player vs. player) situation when the game first came out. By now, though, PvP is almost completely absent from the game, and story-based quests loaded with NPCs are plentiful.

It still wasn’t enough for some players. There were several small frustrations about gameplay balance, as some folks felt that combat wasn’t always as fun as it could be and that the viable character builds in the endgame were too narrow.

Through a series of many patches over just this past year, Bethesda has been making significant changes to that aspect of the game. Go to Reddit and you’ll see that some players have gripes—mainly because the changes nerfed some uber-powerful endgame builds and weapons to level the playing field. (Also, some recent changes to VATS are admittedly a double-edged sword, depending on your philosophy about what role it should play in the game.)

You’ll definitely engage in some combat in this Deathclaw junkyard battle arena. Credit: Bethesda

As someone who has been playing almost nonstop this whole time, though, I think the designers have done a great job of making more play styles viable while just generally making the game feel better to play. They also totally overhauled how the base-building system works. That’s the sort of stuff that is hard to convey in a marketing blitz, but you feel it when you’re playing.

I won’t get into every detail about it here since most people reading this probably haven’t played the game enough to warrant that, but you can look at the patch notes—it’s a lot.

But I want to point that out up front because I think it’s more important than anything in the actual expansion the developer and publisher are hyping up. The game is just generally more fun to play than it used to be—even a year ago. You love to see it.

Technically, it’s a mixed bag

Earlier, I mentioned that the game has fallen behind the times in many ways. I’m mostly talking about its technical presentation and the lack of modern features players now expect from big-budget, cross-platform multiplayer games.

The assets are great, the art direction is top-notch, and the world is dense and attractive, but there are some now-standard AAA boxes it doesn’t check. A full redemption story requires addressing at least some of these things to keep the game up to modern standards.

By and large, the game’s environments look great on PC. Consoles are a bit behind. Credit: Bethesda

First up, the game has no native versions for current-generation consoles; the Xbox Series X|S and PlayStation 5/5 Pro consoles seem to run the last-gen Xbox One X and PlayStation 4 Pro versions, respectively, just with the framerate cap (thankfully) raised from 30 fps to 60 fps.

But there’s good news on that front: I spoke with development team members who confirmed that current-gen console versions are coming soon, though they didn’t specify what kinds of upgrades we can expect.

I hope that also means a rethought approach to how the game displays on HDR (high dynamic range) TVs. To this day, HDR does not work like you’d expect; the game looks washed out on an OLED TV in particular, and there are none of the industry-standard HDR calibration sliders to fix it. HDR also didn’t work properly in Starfield at launch (it got partially addressed about a year later), and it is completely absent from the otherwise gorgeous-to-behold The Elder Scrolls IV: Oblivion remaster that came out just this year. I don’t know what the deal is with Bethesda Game Studios and HDR, but I hope they figure it out by the time The Elder Scrolls VI hits.

I also asked the Fallout 76 team about cross-play and cross-progression—the ability to play with friends on different platforms (or to at least access the same character across platforms). These features are likely nontrivial to implement, and they weren’t standard in 2018. They’re increasingly expected for big-budget, AAA multiplayer games today, though.

Unfortunately, the Bethesda devs I spoke to didn’t have any plans to share on that front. Still, it’s good to hear that the company still supports this game enough to at least launch modern console versions—and to continue adding major content updates.

OK, we can talk about the TV show update now

Speaking of major content updates, Bethesda is planning a big release called Burning Springs this December. It marks the second significant map expansion. Whereas the first expanded from the game’s West Virginia locales southward into Virginia’s Shenandoah National Park, this one pushes the map farther west, into the state of Ohio.

Ohio is a dust bowl now, it seems, so Fallout 76 will see its first desert locale. That’s an intentional choice, as the launch of this expansion will be timed closely to the release of season two of the TV show, and the show will be set in Nevada (specifically, around New Vegas). It obviously wouldn’t make sense to expand the game’s map all the way out to the western US, so this gives the developers a way to add a little season two flavor to Fallout 76.

As I was leaving my home to go to Bethesda’s gameplay preview event for Burning Springs, my wife joked that they should add Walton Goggins to the game as the ultimate tie-in with the show. Funny enough, that’s exactly what they’ve done. Goggins’ character from the show, The Ghoul, can be found in the new Burning Springs region, and he voices the character. This game is a prequel to the show by many, many years, but fortunately, Ghouls don’t age.

The Ghoul will give players repeatable bounty hunter missions of two types—one that you can handle solo and one that’s meant to be done as a public event with other players.


Walton Goggins voices his character from the TV show in Fallout 76. That must have been expensive! Credit: Bethesda

I got to try both, and I found they were pretty fun, even though they don’t go too far in breaking the mold of Fallout 76’s existing public events.

I also spent more than two hours freely exploring the game’s post-apocalyptic interpretation of Ohio. Despite the new desert aesthetic, it’s all pretty familiar Fallout stuff: raider-infested Super Duper Marts, blown-out neighborhoods, and the like. There is a very large new settlement that has a distinct character compared to the game’s existing towns, and it’s loaded with NPCs. I also enjoyed a public event that has players battling through a junkyard with a cyborg Deathclaw at their side—yep, you read that right.

I’m told there will be a new story quest line attached to the new region that involves a highly intelligent Super Mutant named the Rust King. I didn’t get to do those quests during this demo, though.

Burning Springs doesn’t do anything to rethink Fallout 76’s basic experience; it’s just more of it, with a different flavor. But since Bethesda has done so much work making that basic experience more fun, that’s OK. It means more Fallout 76 is, in fact, more of a good thing.

TV tie-ins don’t fix a broken game, but they bring new or lapsed players back to a broken game that has since been fixed.

If you don’t like looter shooters, survival crafting games, or the very idea of multiplayer games—and some Fallout players just don’t—it’s not going to change your mind. But if the reason you skipped this game or bounced off of it was that you liked what it was going for but felt it stumbled on the execution, it can’t hurt to give it another try with the new update.

I don’t think that’s such a controversial opinion anymore. As a longtime player, it’s nice to be able to say that.


Samuel Axon is the editorial lead for tech and gaming coverage at Ars Technica. He covers AI, software development, gaming, entertainment, and mixed reality. He has been writing about gaming and technology for nearly two decades at Engadget, PC World, Mashable, Vice, Polygon, Wired, and others. He previously ran a marketing and PR agency in the gaming industry, led editorial for the TV network CBS, and worked on social media marketing strategy for Samsung Mobile at the creative agency SPCSHP. He also is an independent software and game developer for iOS, Windows, and other platforms, and he is a graduate of DePaul University, where he studied interactive media and software development.
