Science

We’re closer to re-creating the sounds of Parasaurolophus

The duck-billed dinosaur Parasaurolophus is distinctive for its prominent crest, which some scientists have suggested served as a kind of resonating chamber to produce low-frequency sounds. Nobody really knows what Parasaurolophus sounded like, however. Hongjun Lin of New York University is trying to change that by constructing his own model of the dinosaur’s crest and its acoustical characteristics. Lin has not yet reproduced the call of Parasaurolophus, but he talked about his progress thus far at a virtual meeting of the Acoustical Society of America.

Lin was inspired in part by the dinosaur sounds featured in the Jurassic Park film franchise, which were a combination of sounds from other animals like baby whales and crocodiles. “I’ve been fascinated by giant animals ever since I was a kid. I’d spend hours reading books, watching movies, and imagining what it would be like if dinosaurs were still around today,” he said during a press briefing. “It wasn’t until college that I realized the sounds we hear in movies and shows—while mesmerizing—are completely fabricated using sounds from modern animals. That’s when I decided to dive deeper and explore what dinosaurs might have actually sounded like.”

A skull and partial skeleton of Parasaurolophus were first discovered in 1920 along the Red Deer River in Alberta, Canada, and another partial skull was discovered the following year in New Mexico. There are now three known species of Parasaurolophus; the name means “near crested lizard.” While no complete skeleton has yet been found, paleontologists have concluded that the adult dinosaur likely stood about 16 feet tall and weighed between 6,000 and 8,000 pounds. Parasaurolophus was an herbivore that could walk on all four legs while foraging for food but may have run on two legs.

It’s that distinctive crest that has most fascinated scientists over the last century, particularly its purpose. Past hypotheses have included its use as a snorkel or as a breathing tube while foraging for food; as an air trap to keep water out of the lungs; or as an air reservoir so the dinosaur could remain underwater for longer periods. Other scientists suggested the crest was designed to help move and support the head or perhaps used as a weapon while combating other Parasaurolophus. All of these, plus a few others, have largely been discredited.

NASA is stacking the Artemis II rocket, implying a simple heat shield fix

A good sign

The readiness of the Orion crew capsule, where the four Artemis II astronauts will live during their voyage around the Moon, is driving NASA’s schedule for the mission. Officially, Artemis II is projected to launch in September of next year, but there’s little chance of meeting that schedule.

At the beginning of this year, NASA officials ruled out any opportunity to launch Artemis II in 2024 due to several technical issues with the Orion spacecraft. Several of these issues are now resolved, but NASA has not released any meaningful updates on the most significant problem.

This problem involves the Orion spacecraft’s heat shield. During atmospheric reentry at the end of the uncrewed Artemis I test flight in 2022, the Orion capsule’s heat shield eroded and cracked in unexpected ways, prompting investigations by NASA engineers and an independent panel.

NASA’s Orion heat shield inquiry ran for nearly two years. The investigation has wrapped up, two NASA officials said last month, but they declined to discuss any details of the root cause of the heat shield issue or the actions required to resolve the problem on Artemis II.

The corrective options under consideration ranged from doing nothing, to changing the Orion spacecraft’s reentry angle to mitigate heating, to physically modifying the Artemis II heat shield. In the latter scenario, NASA would have to disassemble the Orion spacecraft, which is already put together and is undergoing environmental testing at Kennedy Space Center. This would likely delay the Artemis II launch by a couple of years.

In August, NASA’s top human exploration official told Ars that the agency would hold off on stacking the SLS rocket until engineers had a good handle on the heat shield problem. There are limits to how long the solid rocket boosters can remain stacked vertically. The joints connecting each segment of the rocket motors are certified for one year. This clock doesn’t actually start ticking until NASA stacks the next booster segments on top of the lowermost segments.

However, NASA waived this rule on Artemis I when the boosters were stacked nearly two years before the successful launch.

A NASA spokesperson told Ars on Wednesday that the agency had nothing new to share on the Orion heat shield or what changes, if any, are required for the Artemis II mission. This information should be released before the end of the year, she said. At the same time, NASA could announce a new target launch date for Artemis II at the end of 2025, or more likely in 2026.

But because NASA gave the “go” for SLS stacking now, it seems safe to rule out any major hardware changes on the Orion heat shield for Artemis II.

Study: Yes, tapping on frescoes can reveal defects

The US Capitol building in Washington, DC, is adorned with multiple lavish murals created in the 19th century by Italian artist Constantino Brumidi. These include panels in the Senate first-floor corridors, Room H-144, and the rotunda. The crowning glory is The Apotheosis of Washington on the dome of the rotunda, 180 feet above the floor.

Brumidi worked in various mediums, including frescoes. Among the issues facing conservators charged with maintaining the Capitol building frescoes is delamination. Artists apply dry pigments to wet plaster to create a fresco, and a good fresco can last for centuries. Over time, though, the decorative plaster layers can separate from the underlying masonry, introducing air gaps. Knowing precisely where such delaminated areas are, and their exact shape, is crucial to conservation efforts, yet the damage might not be obvious to the naked eye.

Acoustician Nicholas Gangemi is part of a research group led by Joseph Vignola at the Catholic University of America that has been using laser Doppler vibrometry to pinpoint delaminated areas of the Capitol building frescoes. It’s a non-invasive method that zaps the frescoes with sound waves and measures the vibrational signatures that reflect back to learn about the structural conditions. This in turn enables conservators to make very precise repairs to preserve the frescoes for future generations.

It’s an alternative to the traditional technique of gently knocking on the plaster with knuckles or small mallets, listening to the resulting sounds to determine where delamination has occurred. Once separation occurs, the delaminated part of the fresco acts a bit like the head of a drum; tapping on it produces a distinctive acoustic signature.
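
For a rough sense of why that works acoustically, a delaminated patch can be treated as a small clamped circular plate of plaster ringing over the air gap, and its fundamental frequency lands in the audible range for plausible patch sizes. The sketch below is only an illustration: the material properties and patch radius are assumed values, not measurements from the Capitol frescoes.

```python
import math

# Toy estimate of the "tap tone" of a delaminated patch, modeled as a clamped
# circular plate. Every number below is an assumption chosen for illustration.
E = 5e9        # Young's modulus of plaster, Pa (assumed)
nu = 0.2       # Poisson's ratio (assumed)
rho = 1200.0   # plaster density, kg/m^3 (assumed)
h = 0.01       # thickness of the detached plaster layer, m (assumed)
a = 0.05       # radius of the delaminated patch, m (assumed)

D = E * h**3 / (12 * (1 - nu**2))   # flexural rigidity of the plate
# Fundamental mode of a clamped circular plate: f ~ (10.21 / (2*pi*a^2)) * sqrt(D / (rho*h))
f0 = (10.21 / (2 * math.pi * a**2)) * math.sqrt(D / (rho * h))
print(f"estimated tap tone: {f0:.0f} Hz")   # roughly 4 kHz for these values
```

A larger or thinner patch rings lower, which is part of what an experienced conservator’s ear is picking up when tapping across a wall.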

But the method is highly subjective. It takes years of experience to become proficient at this method, and there are only a small number of people who can truly be deemed experts. “We really wanted to put that experience and knowledge into an inexperienced person’s hands,” Gangemi said during a press briefing at a virtual meeting of the Acoustical Society of America. So he and his colleagues decided to put the traditional knocking method to the test.

Qubit that makes most errors obvious now available to customers


Can a small machine that makes error correction easier upend the market?

A graphic representation of the two resonance cavities that can hold photons, along with a channel that lets the photon move between them. Credit: Quantum Circuits

We’re nearing the end of the year, and there is typically a flood of announcements regarding quantum computers around now, in part because some companies want to live up to promised schedules. Most of these involve evolutionary improvements on previous generations of hardware. But this year, we have something new: the first company to market with a new qubit technology.

The technology is called a dual-rail qubit, and it is intended to make the most common form of error trivially easy to detect in hardware, thus making error correction far more efficient. And, while tech giant Amazon has been experimenting with them, a startup called Quantum Circuits is the first to give the public access to dual-rail qubits via a cloud service.

While the tech is interesting on its own, it also provides us with a window into how the field as a whole is thinking about getting error-corrected quantum computing to work.

What’s a dual-rail qubit?

Dual-rail qubits are variants of the hardware used in transmons, the qubits favored by companies like Google and IBM. The basic hardware unit links a loop of superconducting wire to a tiny cavity that allows microwave photons to resonate. This setup allows the presence of microwave photons in the resonator to influence the behavior of the current in the wire and vice versa. In a transmon, microwave photons are used to control the current. But there are other companies that have hardware that does the reverse, controlling the state of the photons by altering the current.

Dual-rail qubits use two of these systems linked together, allowing a photon to move from one resonator to the other. Using the superconducting loops, it’s possible to control the probability that the photon will end up in the left or right resonator. The actual location of the photon will remain unknown until it’s measured, allowing the system as a whole to hold a single bit of quantum information—a qubit.

This has an obvious disadvantage: You have to build twice as much hardware for the same number of qubits. So why bother? Because the vast majority of errors involve the loss of the photon, and that’s easily detected. “It’s about 90 percent or more [of the errors],” said Quantum Circuits’ Andrei Petrenko. “So it’s a huge advantage that we have with photon loss over other errors. And that’s actually what makes the error correction a lot more efficient: The fact that photon losses are by far the dominant error.”

Petrenko said that, without doing a measurement that would disrupt the storage of the qubit, it’s possible to determine if there is an odd number of photons in the hardware. If that isn’t the case, you know an error has occurred—most likely a photon loss (gains of photons are rare but do occur). For simple algorithms, this would be a signal to simply start over.
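
A toy calculation shows why flagging photon loss is so valuable, even before full error correction. The per-operation error rate below is an assumed number; only the roughly 90 percent split between detectable losses and silent errors comes from Petrenko’s figure. The point is simply that discarding flagged runs leaves behind only the rarer, undetected errors.

```python
import random

# Toy model of the "detect photon loss and start over" strategy.
p_error = 0.01     # chance that an operation suffers any error (assumed)
frac_loss = 0.9    # fraction of errors that are detectable photon losses (per the article)
trials = 200_000

detected = undetected = 0
for _ in range(trials):
    if random.random() < p_error:
        if random.random() < frac_loss:
            detected += 1      # parity check sees a missing photon, so the run is discarded
        else:
            undetected += 1    # e.g. a phase flip, which slips through unnoticed

print("error rate with no detection:     ", (detected + undetected) / trials)
print("error rate after discarding flags:", undetected / (trials - detected))
```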

But it does not eliminate the need for error correction if we want to do more complex computations that can’t make it to completion without encountering an error. There’s still the remaining 10 percent of errors, which are primarily phase flips, an error type unique to quantum systems (bit flips are even rarer in dual-rail setups). Finally, simply knowing that a photon was lost doesn’t tell you everything you need to know to fix the problem; error-correction measurements of other parts of the logical qubit are still needed.

The layout of the new machine. Each qubit (gray square) involves a left and right resonance chamber (blue dots) that a photon can move between. Each of the qubits has connections that allow entanglement with its nearest neighbors. Credit: Quantum Circuits

In fact, the initial hardware that’s being made available is too small to even approach useful computations. Instead, Quantum Circuits chose to link eight qubits with nearest-neighbor connections in order to allow it to host a single logical qubit that enables error correction. Put differently: this machine is meant to enable people to learn how to use the unique features of dual-rail qubits to improve error correction.

One consequence of having this distinctive hardware is that the software stack that controls operations needs to take advantage of its error detection capabilities. None of the other hardware on the market can be directly queried to determine whether it has encountered an error. So, Quantum Circuits has had to develop its own software stack to allow users to actually benefit from dual-rail qubits. Petrenko said that the company also chose to provide access to its hardware via its own cloud service because it wanted to connect directly with the early adopters in order to better understand their needs and expectations.

Numbers or noise?

Given that a number of companies have already released multiple revisions of their quantum hardware and have scaled them into hundreds of individual qubits, it may seem a bit strange to see a company enter the market now with a machine that has just a handful of qubits. But amazingly, Quantum Circuits isn’t alone in planning a relatively late entry into the market with hardware that only hosts a few qubits.

Having talked with several of them, I can say there is a logic to what they’re doing. What follows is my attempt to convey that logic in a general form, without focusing on any single company’s case.

Everyone agrees that the future of quantum computation is error correction, which requires linking together multiple hardware qubits into a single unit termed a logical qubit. To get really robust, error-free performance, you have two choices. One is to devote lots of hardware qubits to the logical qubit, so you can handle multiple errors at once. Or you can lower the error rate of the hardware, so that you can get a logical qubit with equivalent performance while using fewer hardware qubits. (The two options aren’t mutually exclusive, and everyone will need to do a bit of both.)

The two options pose very different challenges. Improving the hardware error rate means diving into the physics of individual qubits and the hardware that controls them. In other words, getting lasers that have fewer of the inevitable fluctuations in frequency and energy. Or figuring out how to manufacture loops of superconducting wire with fewer defects or handle stray charges on the surface of electronics. These are relatively hard problems.

By contrast, scaling qubit count largely involves being able to consistently do something you already know how to do. So, if you already know how to make good superconducting wire, you simply need to make a few thousand instances of that wire instead of a few dozen. The electronics that will trap an atom can be made in a way that will make it easier to make them thousands of times. These are mostly engineering problems, and generally of similar complexity to problems we’ve already solved to make the electronics revolution happen.

In other words, within limits, scaling is a much easier problem to solve than errors. It’s still going to be extremely difficult to get the millions of hardware qubits we’d need to error correct complex algorithms on today’s hardware. But if we can get the error rate down a bit, we can use smaller logical qubits and might only need 10,000 hardware qubits, which will be more approachable.
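
To see why a modest improvement in hardware error rate pays off so heavily, here is an illustrative calculation using the textbook surface-code scaling relation, p_logical ~ A * (p / p_th)^((d + 1) / 2). The threshold, prefactor, and target below are assumed round numbers, not figures from any of the companies mentioned here.

```python
# Illustrative only: standard surface-code scaling with assumed constants.
A = 0.1          # prefactor (assumed)
p_th = 0.01      # threshold error rate (assumed)
target = 1e-12   # desired logical error rate per cycle (assumed)

def qubits_per_logical(p_physical):
    d = 3
    while A * (p_physical / p_th) ** ((d + 1) / 2) > target:
        d += 2                      # code distance grows in odd steps
    return d, 2 * d * d             # roughly 2*d^2 physical qubits, including ancillas

for p in (5e-3, 1e-3, 1e-4):
    d, n = qubits_per_logical(p)
    print(f"hardware error rate {p}: distance {d}, ~{n} qubits per logical qubit")
```

In this toy model, cutting the hardware error rate from 5 in 1,000 to 1 in 10,000 shrinks the overhead from roughly ten thousand physical qubits per logical qubit to a few hundred, which is the kind of arithmetic behind the hope that better qubits could bring machines down from millions of hardware qubits to something far more approachable.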

Errors first

And there’s evidence that even the early entries in quantum computing have reasoned the same way. Google has been iterating on the same chip design since its 2019 quantum supremacy announcement, focusing on understanding the errors that occur on improved versions of that chip. IBM made hitting the 1,000-qubit mark a major goal but has since focused on reducing the error rate in smaller processors. Someone at a quantum computing startup once told us it would be trivial to trap more atoms in its hardware and boost the qubit count, but there wasn’t much point in doing so given the error rates of the qubits on the then-current generation machine.

The new companies entering this market now are making the argument that they have a technology that will either radically reduce the error rate or make handling the errors that do occur much easier. Quantum Circuits clearly falls into the latter category, as dual-rail qubits are entirely about making the most common form of error trivial to detect. The former category includes companies like Oxford Ionics, which has indicated it can perform single-qubit gates with a fidelity of over 99.9991 percent. Or Alice & Bob, which stores qubits in the behavior of multiple photons in a single resonance cavity, making them very robust to the loss of individual photons.

These companies are betting that they have distinct technology that will let them handle error rate issues more effectively than established players. That will lower the total scaling they need to do, and scaling will be an easier problem overall—and one that they may already have the pieces in place to handle. Quantum Circuits’ Petrenko, for example, told Ars, “I think that we’re at the point where we’ve gone through a number of iterations of this qubit architecture where we’ve de-risked a number of the engineering roadblocks.” And Oxford Ionics told us that if they could make the electronics they use to trap ions in their hardware once, it would be easy to mass manufacture them.

None of this should imply that these companies will have it easy compared to a startup that already has experience with both reducing errors and scaling, or a giant like Google or IBM that has the resources to do both. But it does explain why, even at this stage in quantum computing’s development, we’re still seeing startups enter the field.

John is Ars Technica’s science editor. He has a Bachelor of Arts in Biochemistry from Columbia University, and a Ph.D. in Molecular and Cell Biology from the University of California, Berkeley. When physically separated from his keyboard, he tends to seek out a bicycle, or a scenic location for communing with his hiking boots.

Study: Why Aztec “death whistles” sound like human screams

Aztec death whistles don’t fit into any existing Western classification for wind instruments; they seem to be a unique kind of “air spring” whistle, based on CT scans of some of the artifacts. Sascha Frühholz, a cognitive and affective neuroscientist at the University of Zürich, and several colleagues wanted to learn more about the physical mechanisms behind the whistle’s distinctive sound, as well as how humans perceive said sound—a field known as psychoacoustics. “The whistles have a very unique construction, and we don’t know of any comparable musical instrument from other pre-Columbian cultures or from other historical and contemporary contexts,” said Frühholz.

A symbolic sound?

Human sacrifice with original skull whistle (small red box and enlarged rotated view in lower right) discovered 1987–89 at the Ehecatl-Quetzalcoatl temple in Mexico City. Credit: Salvador Guillien Arroyo, Proyecto Tlatelolco

For their acoustic analysis, Frühholz et al. obtained sound recordings from two Aztec skull whistles excavated from Tlatelolco, as well as from three noise whistles (part of Aztec fire snake incense ladles). They took CT scans of whistles in the collection of the Ethnological Museum in Berlin, enabling them to create both 3D digital reconstructions and physical clay replicas. They were also able to acquire three additional artisanal clay whistles for experimental purposes.

Human participants then blew into the replicas with low-, medium-, and high-intensity air pressure, and the ensuing sounds were recorded. Those recordings were compared to existing databases of a broad range of sounds: animals, natural soundscapes, water sounds, urban noise, synthetic sounds (as for computers, pinball machines, printers, etc.), and various ancient instruments, among other samples. Finally, a group of 70 human listeners rated a random selection of sounds from a collection of over 2,500 samples.

The CT scans showed that skull whistles have an internal tube-like air duct with a constricted passage, a counter pressure chamber, a collision chamber, and a bell cavity. The unusual construction suggests that the basic principle at play is the Venturi effect, in which air (or a generic fluid) speeds up as it flows through a constricted passage, thereby reducing the pressure. “At high playing intensities and air speeds, this leads to acoustic distortions and to a rough and piercing sound character that seems uniquely produced by the skull whistles,” the authors wrote.
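
As a rough illustration of the Venturi relation the authors invoke, combining continuity (A1·v1 = A2·v2) with Bernoulli’s equation gives the speed-up and pressure drop in the constriction. The duct dimensions and breath speed below are invented for the sake of the example; only the physical relationship comes from the paper’s description.

```python
# Toy Venturi calculation: incompressible, steady flow through a constriction.
# All dimensions and the inflow speed are assumptions for illustration.
rho = 1.2            # air density, kg/m^3
A1, A2 = 4e-5, 1e-5  # duct cross-sections before/inside the constriction, m^2 (assumed)
v1 = 10.0            # airflow speed from the player's breath, m/s (assumed)

v2 = v1 * A1 / A2                    # continuity: the flow speeds up in the narrow passage
dp = 0.5 * rho * (v2**2 - v1**2)     # Bernoulli: faster flow means lower static pressure
print(f"speed in constriction: {v2:.0f} m/s, pressure drop: {dp:.0f} Pa")
```

At the high playing intensities the authors describe, the real flow also turns turbulent and nonlinear, which is where the rough, distorted character comes from; the simple relation above only captures the basic speed-up.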

SpaceX just got exactly what it wanted from the FAA for Texas Starship launches

And there will be significant impacts. For example, the number of large trucks that deliver water, liquid oxygen, methane, and other commodities will increase substantially. According to the FAA document, the vehicle presence will grow from an estimated 6,000 trucks a year to 23,771 trucks annually. This number could be reduced by running a water line along State Highway 4 to supply the launch site’s water deluge system.

SpaceX has made progress in some areas, the document notes. For example, in terms of road closures for testing and launch activities, SpaceX has reduced the duration of closures along State Highway 4 to Boca Chica Beach by 85 percent between the first and third flight of Starship. This has partly been accomplished by moving launch preparation activities to the “Massey’s Test Site,” located about four miles from the launch site. SpaceX is now expected to need less than 20 hours of access restrictions per launch campaign, including landings.

SpaceX clearly got what it wanted

If finalized, this environmental assessment will give SpaceX the regulatory green light to match its aspirations for launches in at least 2025, if not beyond. During recent public meetings, SpaceX’s general manager of Starbase, Kathy Lueders, has said the company aims to launch Starship 25 times next year from Texas. The new regulations would permit this.

Additionally, SpaceX founder Elon Musk has said the company intends to move to a larger and more powerful version of the Starship and Super Heavy rocket about a year from now. This version, dubbed Starship 3, would double the thrust of the upper stage and increase the thrust of the booster stage from about 74 meganewtons to about 100 meganewtons. If that number seems a little abstract, another way to think about it is that Starship would have a thrust at liftoff three times as powerful as NASA’s Saturn V rocket that launched humans to the Moon decades ago. The draft environmental assessment permits this as well.
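
For readers who want to check that comparison, converting the quoted thrust into pounds-force and dividing by the Saturn V’s liftoff thrust gives the factor of three. Only the 100-meganewton figure comes from the article; the Saturn V value used below is the commonly cited historical number of roughly 35 meganewtons (about 7.9 million pounds).

```python
# Quick unit check of the "three times a Saturn V" comparison.
N_TO_LBF = 0.2248

starship3_thrust_n = 100e6   # ~100 meganewtons for Starship 3, per the article
saturn_v_thrust_n = 35e6     # ~35 meganewtons at liftoff, commonly cited value

print(f"Starship 3: ~{starship3_thrust_n * N_TO_LBF / 1e6:.1f} million lbf")
print(f"Ratio to Saturn V: ~{starship3_thrust_n / saturn_v_thrust_n:.1f}x")
```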

Cracking the recipe for perfect plant-based eggs


Hint: It involves finding exactly the right proteins.

An egg is an amazing thing, culinarily speaking: delicious, nutritious, and versatile. Americans eat nearly 100 billion of them every year, almost 300 per person. But eggs, while greener than other animal food sources, have a bigger environmental footprint than almost any plant food—and industrial egg production raises significant animal welfare issues.

So food scientists, and a few companies, are trying hard to come up with ever-better plant-based egg substitutes. “We’re trying to reverse-engineer an egg,” says David Julian McClements, a food scientist at the University of Massachusetts Amherst.

That’s not easy, because real eggs play so many roles in the kitchen. You can use beaten eggs to bind breadcrumbs in a coating, or to hold together meatballs; you can use them to emulsify oil and water into mayonnaise, scramble them into an omelet or whip them to loft a meringue or angel food cake. An all-purpose egg substitute must do all those things acceptably well, while also yielding the familiar texture and—perhaps—flavor of real eggs.

Today’s plant-based eggs still fall short of that one-size-fits-all goal, but researchers in industry and academia are trying to improve them. New ingredients and processes are leading toward egg substitutes that are not just more egg-like, but potentially more nutritious and better tasting than the original.

In practice, making a convincing plant-based egg is largely a matter of mimicking the way the ovalbumin and other proteins in real eggs behave during cooking. When egg proteins are heated beyond a critical point, they unfold and grab onto one another, forming what food scientists call a gel. That causes the white and then the yolk to set up when cooked.

Eggs aren’t just for frying or scrambling. Cooks use them to bind other ingredients together and to emulsify oil and water to make mayonnaise. The proteins in egg whites can also be whipped into a foam that’s essential in meringues and angel food cake. Finding a plant-based egg substitute that does all of these things has proven challenging. Credit: Adam Gault via Getty

That’s not easy to replicate with some plant proteins, which tend to have more sulfur-containing amino acids than egg proteins do. These sulfur groups bind to each other, so the proteins unfold at higher temperatures. As a result, they must usually be cooked longer and hotter than ones in real eggs.

To make a plant-based egg, food scientists typically start by extracting a mix of proteins from a plant source such as soybean, mung bean, or other crops. “You want to start with what is a sustainable, affordable, and consistent source of plant proteins,” says McClements, who wrote about the design of plant-based foods in the 2024 Annual Review of Food Science and Technology. “So you’re going to narrow your search to that group of proteins that are economically feasible to use.”

Fortunately, some extracts are dominated by one or a few proteins that set at low-enough temperatures to behave pretty much like real egg proteins. Current plant-based eggs rely on these proteins: Just Egg uses the plant albumins and globulin found in mung bean extract, Simply Eggless uses proteins from lupin beans, and McClements and others are experimenting with the photosynthetic enzyme rubisco that is abundant in duckweed and other leafy tissues.

These days, food technologists can produce a wide range of proteins in large quantities by inserting the gene for a selected protein into hosts like bacteria or yeast, then growing the hosts in a tank, a process called precision fermentation. That opens a huge new window for exploration of other plant-based protein sources that may more precisely match the properties of actual eggs.

A few companies are already searching. Shiru, a California-based biotech company, for example, uses a sophisticated artificial intelligence platform to identify proteins with specific properties from its database of more than 450 million natural protein sequences. To find a more egglike plant protein, the company first picked the criteria it needed to match. “For eggs, that is the thermal gel onset—that is, when it goes from liquid to solid when you heat it,” says Jasmin Hume, a protein engineer who is the company’s founder and CEO. “And it must result in the right texture—not too hard, not too gummy, not too soft.” Those properties depend on details such as which amino acids a protein contains, in what order, and precisely how it folds into a 3D structure—a hugely complex process that was the subject of the 2024 Nobel Prize in chemistry.
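
Shiru’s actual pipeline is proprietary, but the screening step described above amounts to filtering a large candidate table on predicted properties. The sketch below is a hypothetical illustration: the field names, thresholds, and example entries are invented, and the gel-onset window is simply chosen to bracket the temperatures at which egg white begins to set.

```python
# Hypothetical screening step: filter candidate proteins by predicted gel onset
# and a texture proxy. Field names, thresholds, and entries are invented.
candidates = [
    {"id": "prot_001", "gel_onset_c": 64.0, "firmness": 0.55},
    {"id": "prot_002", "gel_onset_c": 92.0, "firmness": 0.80},
    {"id": "prot_003", "gel_onset_c": 71.0, "firmness": 0.35},
]

shortlist = [
    p for p in candidates
    if 60.0 <= p["gel_onset_c"] <= 75.0     # sets at roughly egg-like temperatures
    and 0.4 <= p["firmness"] <= 0.7         # "not too hard, not too gummy, not too soft"
]
print(shortlist)   # only prot_001 survives this toy filter
```

In practice, the hard part is generating trustworthy predictions for millions of sequences in the first place; the filtering itself is the easy step.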

The company then scoured its database, winnowing it down to a short list that it predicted would fit the bill. Technicians produced those proteins and tested their properties, pinpointing a handful of potential egglike proteins. A few were good enough to start the company working to commercialize their production, though Hume declined to provide further details.

Cracking the flavor code

With the main protein in hand, the next step for food technologists is to add other molecules that help make the product more egglike. Adding vegetable oils, for example, can change the texture. “If I don’t put any oil in the product, it’s going to scramble more like an egg white,” says Chris Jones, a chef who is vice president of product development at Eat Just, which produces the egg substitute Just Egg. “If I put 8 to 15 percent, it’s going to scramble like a whole egg. If I add more, it’s going to behave like a batter.”

Developers can also add gums to prevent the protein in the mixture from settling during storage, or add molecules that are translucent at room temperature but turn opaque when cooked, providing the same visual cue to doneness that real eggs provide.

And then there’s the taste: Current plant-based eggs often suffer from off flavors. “Our first version tasted like what you imagine the bottom of a lawn mower deck would taste like—really grassy,” says Jones. The company’s current product, version 5, still has some beany notes, he says.

Those beany flavors aren’t caused by a single molecule, says Devin Peterson, a flavor chemist at Ohio State University: “It’s a combination that creates beany.” Protein extracts from legumes contain enzymes that create some of these off-flavor volatile molecules—and it’s a painstaking process to single out the offending volatiles and avoid or remove them, he says. (Presumably, cooking up single proteins in a vat could reduce this problem.) Many plant proteins also have molecules called polyphenols bound to their surfaces that contribute to beany flavors. “It’s very challenging to remove these polyphenols, because they’re tightly stuck,” says McClements.

Experts agree that eliminating beany and other off flavors is a good thing. But there’s less agreement on whether developers need to actively make a plant-based egg taste more like a real egg. “That’s actually a polarizing question,” says Jones.

Much of an egg’s flavor comes from sulfur compounds that aren’t necessarily pleasing to consumers. “An egg tastes a certain way because it’s releasing sulfur as it decays,” says Jones. When tasters were asked to compare Eat Just’s egg-free mayonnaise against the traditional, real-egg version, he notes, “at least 50 percent didn’t like the sulfur flavor of a true-egg mayo.”

That poses a quandary for developers. “Should it have a sulfur flavor, or should it have its own point of view, a flavor that our chefs develop? We don’t have an answer yet,” Jones says. Even for something like an omelet, he says, developers could aim for “a neutral spot where whatever seasoning you add is what you’re going to taste.”

As food technologists work to overcome these challenges, plant-based eggs are likely to get better and better. But the ultimate goal might be to surpass, not merely match, the performance of real eggs. Already, McClements and his colleagues have experimented with adding lutein, a nutrient important for eye health, to oil droplets in plant-based egg yolks.

In the future, scientists could adjust the amino acid composition of proteins or boost the calcium or iron content in plant-based eggs to match nutritional needs. “We ultimately could engineer something that’s way healthier than what’s available now,” says Bianca Datta, a food scientist at the Good Food Institute, an international nonprofit that supports the development of plant-based foods. “We’re just at the beginning of seeing what’s possible.”

This story originally appeared in Knowable Magazine.

Knowable Magazine explores the real-world significance of scholarly work through a journalistic lens.

The key moment came 38 minutes after Starship roared off the launch pad


SpaceX wasn’t able to catch the Super Heavy booster, but Starship is on the cusp of orbital flight.

The sixth flight of Starship lifts off from SpaceX’s Starbase launch site at Boca Chica Beach, Texas. Credit: SpaceX.

SpaceX launched its sixth Starship rocket Tuesday, proving for the first time that the stainless steel ship can maneuver in space and paving the way for an even larger, upgraded vehicle slated to debut on the next test flight.

The only hiccup was an abortive attempt to catch the rocket’s Super Heavy booster back at the launch site in South Texas, something SpaceX achieved on the previous flight on October 13. The Starship upper stage flew halfway around the world, reaching an altitude of 118 miles (190 kilometers) before plunging through the atmosphere for a pinpoint slow-speed splashdown in the Indian Ocean.

The sixth flight of the world’s largest launcher—standing 398 feet (121.3 meters) tall—began with a lumbering liftoff from SpaceX’s Starbase facility near the US-Mexico border at 4 pm CST (22:00 UTC) Tuesday. The rocket headed east over the Gulf of Mexico, propelled by 33 Raptor engines clustered on the bottom of its Super Heavy first stage.

A few miles away, President-elect Donald Trump joined SpaceX founder Elon Musk to witness the launch. The SpaceX boss became one of Trump’s closest allies in this year’s presidential election, giving the world’s richest man extraordinary influence in US space policy. Sen. Ted Cruz (R-Texas) was there, too, among other lawmakers. Gen. Chance Saltzman, the top commander in the US Space Force, stood nearby, chatting with Trump and other VIPs.

Elon Musk, SpaceX’s CEO, President-elect Donald Trump, and Gen. Chance Saltzman of the US Space Force watch the sixth launch of Starship Tuesday. Credit: Brandon Bell/Getty Images

From their viewing platform, they watched Starship climb into a clear autumn sky. At full power, the 33 Raptors chugged more than 40,000 pounds of super-cold liquid methane and liquid oxygen per second. The engines generated 16.7 million pounds of thrust, 60 percent more than the Soviet N1, the second-largest rocket in history.

Eight minutes later, the rocket’s upper stage, itself also known as Starship, was in space, completing the program’s fourth straight near-flawless launch. The first two test flights faltered before reaching their planned trajectory.

A brief but crucial demo

As exciting as it was, we’ve seen all that before. One of the most important new things engineers wanted to test on this flight occurred about 38 minutes after liftoff.

That’s when Starship reignited one of its six Raptor engines for a brief burn to make a slight adjustment to its flight path. The burn lasted only a few seconds, and the impulse was small—just a 48 mph (77 km/hour) change in velocity, or delta-V—but it demonstrated that the ship can safely deorbit itself on future missions.

With this achievement, Starship will likely soon be cleared to travel into orbit around Earth and deploy Starlink Internet satellites or conduct in-space refueling experiments, two of the near-term objectives on SpaceX’s Starship development roadmap.

Launching Starlinks aboard Starship will allow SpaceX to expand the capacity and reach of its commercial consumer broadband network, which, in turn, provides revenue for Musk to reinvest into Starship. Orbital refueling enables Starship voyages beyond low-Earth orbit, fulfilling SpaceX’s multibillion-dollar contract with NASA to provide a human-rated Moon lander for the agency’s Artemis program. Likewise, transferring cryogenic propellants in orbit is a prerequisite for sending Starships to Mars, making real Musk’s dream of creating a settlement on the red planet.

Artist’s illustration of Starship on the surface of the Moon. Credit: SpaceX

Until now, SpaceX has intentionally launched Starships to speeds just shy of the blistering velocities needed to maintain orbit. Engineers wanted to test the Raptor’s ability to reignite in space on the third Starship test flight in March, but the ship lost control of its orientation, and SpaceX canceled the engine firing.

Before going for a full orbital flight, officials needed to confirm that Starship could steer itself back into the atmosphere for reentry, ensuring it wouldn’t present any risk to the public with an unguided descent over a populated area. After Tuesday, SpaceX can check this off its to-do list.

“Congrats to SpaceX on Starship’s sixth test flight,” NASA Administrator Bill Nelson posted on X. “Exciting to see the Raptor engine restart in space—major progress towards orbital flight. Starship’s success is Artemis’ success. Together, we will return humanity to the Moon & set our sights on Mars.”

While it lacks the pizzazz of a fiery launch or landing, the engine relight unlocks a new phase of Starship development. SpaceX has now proven that the rocket is capable of reaching space with a fair measure of reliability. Next, engineers will fine-tune how to reliably recover the booster and the ship and learn how to use them.

Acid test

SpaceX appears well on its way to doing this. While SpaceX didn’t catch the Super Heavy booster with the launch tower’s mechanical arms Tuesday, engineers have shown they can do it. The challenge of catching Starship itself back at the launch pad is more daunting. The ship starts its reentry thousands of miles from Starbase, traveling approximately 17,000 mph (27,000 km/hour), and must thread the gap between the tower’s catch arms within a matter of inches.

The good news is that SpaceX has now twice proven it can bring Starship back to a precision splashdown in the Indian Ocean. In October, the ship settled into the sea in darkness. SpaceX moved the launch time for Tuesday’s flight to the late afternoon, setting up for splashdown shortly after sunrise northwest of Australia.

The shift in time paid off with some stunning new visuals. Cameras mounted on the outside of Starship beamed dazzling live views back to SpaceX through the Starlink network, showing a now-familiar glow of plasma encasing the spacecraft as it plowed deeper into the atmosphere. But this time, daylight revealed the ship’s flaps moving to control its belly-first descent toward the ocean. After passing through a deck of low clouds, Starship reignited its Raptor engines and tilted from horizontal to vertical, making contact with the water tail-first within view of a floating buoy and a nearby aircraft in position to observe the moment.

Here’s a replay of the spacecraft’s splashdown around 65 minutes after launch.

Splashdown confirmed! Congratulations to the entire SpaceX team on an exciting sixth flight test of Starship! pic.twitter.com/bf98Va9qmL

— SpaceX (@SpaceX) November 19, 2024

The ship made it through reentry despite flying with a substandard heat shield. Starship’s thermal protection system is made up of thousands of ceramic tiles to protect the ship from temperatures as high as 2,600° Fahrenheit (1,430° Celsius).

Kate Tice, a SpaceX engineer hosting the company’s live broadcast of the mission, said teams at Starbase removed 2,100 heat shield tiles from Starship ahead of Tuesday’s launch. Their removal exposed wider swaths of the ship’s stainless steel skin to super-heated plasma, and SpaceX teams were eager to see how well the spacecraft held up during reentry. In the language of flight testing, this approach is called exploring the corners of the envelope, where engineers evaluate how a new airplane or rocket performs in extreme conditions.

“Don’t be surprised if we see some wackadoodle stuff happen here,” Tice said. There was nothing of the sort. One of the ship’s flaps appeared to suffer some heating damage, but it remained intact and functional, and the harm looked to be less substantial than damage seen on previous flights.

Many of the removed tiles came from the sides of Starship where SpaceX plans to place catch fittings on future vehicles. These are the hardware protuberances that will catch on the top side of the launch tower’s mechanical arms, similar to fittings used on the Super Heavy booster.

“The next flight, we want to better understand where we can install catch hardware, not necessarily to actually do the catch but to see how that hardware holds up in those spots,” Tice said. “Today’s flight will help inform ‘does the stainless steel hold up like we think it may, based on experiments that we conducted on Flight 5?'”

Musk wrote on his social media platform X that SpaceX could try to bring Starship back to Starbase for a catch on the eighth test flight, which is likely to occur in the first half of 2025.

“We will do one more ocean landing of the ship,” Musk said. “If that goes well, then SpaceX will attempt to catch the ship with the tower.”

The heat shield, Musk added, is a focal point of SpaceX’s attention. The delicate heat-absorbing tiles used on the belly of the space shuttle proved vexing to NASA technicians. Early in the shuttle’s development, NASA had trouble keeping tiles adhered to the shuttle’s aluminum skin. Each of the shuttle tiles was custom-machined to fit on a specific location on the orbiter, complicating refurbishment between flights. Starship’s tiles are all hexagonal in shape and agnostic to where technicians place them on the vehicle.

“The biggest technology challenge remaining for Starship is a fully & immediately reusable heat shield,” Musk wrote on X. “Being able to land the ship, refill propellant & launch right away with no refurbishment or laborious inspection. That is the acid test.”

This photo of the Starship vehicle for Flight 6, numbered Ship 31, shows exposed portions of the vehicle’s stainless steel skin after tile removal. Credit: SpaceX

There were no details available Tuesday night on what caused the Super Heavy booster to divert from its planned catch on the launch tower. After detaching from the Starship upper stage less than three minutes into the flight, the booster reversed course to begin the journey back to Starbase.

Then SpaceX’s flight director announced the rocket would fly itself into the Gulf rather than back to the launch site: “Booster offshore divert.”

The booster finished its descent with a seemingly perfect landing burn using a subset of its Raptor engines. As expected after the water landing, the booster—itself 233 feet (71 meters) tall—toppled and broke apart in a dramatic fireball visible to onshore spectators.

In an update posted to its website after the launch, SpaceX said automated health checks of hardware on the launch and catch tower triggered the aborted catch attempt. The company did not say what system failed the health check. As a safety measure, SpaceX must send a manual command for the booster to come back to land in order to prevent a malfunction from endangering people or property.

Turning it up to 11

There will be plenty more opportunities for more booster catches in the coming months as SpaceX ramps up its launch cadence at Starbase. Gwynne Shotwell, SpaceX’s president and chief operating officer, hinted at the scale of the company’s ambitions last week.

“We just passed 400 launches on Falcon, and I would not be surprised if we fly 400 Starship launches in the next four years,” she said at the Barron Investment Conference.

The next batch of test flights will use an improved version of Starship designated Block 2, or V2. Starship Block 2 comes with larger propellant tanks, redesigned forward flaps, and a better heat shield.

The new-generation Starship will hold more than 11 million pounds of fuel and oxidizer, about a million pounds more than the capacity of Starship Block 1. The booster and ship will produce more thrust, and Block 2 will measure 408 feet (124.4 meters) tall, stretching the height of the full stack by a little more than 10 feet.

Put together, these modifications should give Starship the ability to heave a payload of up to 220,000 pounds (100 metric tons) into low-Earth orbit, about twice the carrying capacity of the first-generation ship. Further down the line, SpaceX plans to introduce Starship Block 3 to again double the ship’s payload capacity.

Just as importantly, these changes are designed to make it easier for SpaceX to recover and reuse the Super Heavy booster and Starship upper stage. SpaceX’s goal of fielding a fully reusable launcher builds on the partial reuse SpaceX pioneered with its Falcon 9 rocket. This should dramatically bring down launch costs, according to SpaceX’s vision.

With Tuesday’s flight, it’s clear Starship works. Now it’s time to see what it can do.

Updated with additional details, quotes, and images.

Stephen Clark is a space reporter at Ars Technica, covering private space companies and the world’s space agencies. Stephen writes about the nexus of technology, science, policy, and business on and off the planet.

Microsoft and Atom Computing combine for quantum error correction demo


New work provides a good view of where the field currently stands.

The first-generation tech demo of Atom’s hardware. Things have progressed considerably since. Credit: Atom Computing

In September, Microsoft made an unusual combination of announcements. It demonstrated progress with quantum error correction, something that will be needed for the technology to move much beyond the interesting demo phase, using hardware from a quantum computing startup called Quantinuum. At the same time, however, the company also announced that it was forming a partnership with a different startup, Atom Computing, which uses a different technology to make qubits available for computations.

Given that, it was probably inevitable that the folks in Redmond, Washington, would want to show that similar error correction techniques would also work with Atom Computing’s hardware. It didn’t take long, as the two companies are releasing a draft manuscript describing their work on error correction today. The paper serves both as a good summary of where things currently stand in the world of error correction and as a good look at some of the distinct features of computation using neutral atoms.

Atoms and errors

While we have various technologies that provide a way of storing and manipulating bits of quantum information, none of them can be operated error-free. At present, errors make it difficult to perform even the simplest computations that are clearly beyond the capabilities of classical computers. More sophisticated algorithms would inevitably encounter an error before they could be completed, a situation that would remain true even if we could somehow improve the hardware error rates of qubits by a factor of 1,000—something we’re unlikely to ever be able to do.

The solution to this is to use what are called logical qubits, which distribute quantum information across multiple hardware qubits and allow the detection and correction of errors when they occur. Since multiple qubits get linked together to operate as a single logical unit, the hardware error rate still matters. If it’s too high, then adding more hardware qubits just means that errors will pop up faster than they can possibly be corrected.

We’re now at the point where, for a number of technologies, hardware error rates have passed the break-even point, and adding more hardware qubits can lower the error rate of a logical qubit based on them. This was demonstrated using neutral atom qubits by an academic lab at Harvard University about a year ago. The new manuscript demonstrates that it also works on a commercial machine from Atom Computing.

Neutral atoms, which can be held in place using a lattice of laser light, have a number of distinct advantages when it comes to quantum computing. Every single atom will behave identically, meaning that you don’t have to manage the device-to-device variability that’s inevitable with fabricated electronic qubits. Atoms can also be moved around, allowing any atom to be entangled with any other. This any-to-any connectivity can enable more efficient algorithms and error-correction schemes. The quantum information is typically stored in the spin of the atom’s nucleus, which is shielded from environmental influences by the cloud of electrons that surround it, making them relatively long-lived qubits.

Operations, including gates and readout, are performed using lasers. The way the physics works, the spacing of the atoms determines how the laser affects them. If two atoms are a critical distance apart, the laser can perform a single operation, called a two-qubit gate, that affects both of their states. Anywhere outside this distance, and a laser only affects each atom individually. This allows a fine control over gate operations.

That said, operations are relatively slow compared to some electronic qubits, and atoms can occasionally be lost entirely. The optical traps that hold atoms in place are also contingent upon the atom being in its ground state; if any atom ends up stuck in a different state, it will be able to drift off and be lost. This is actually somewhat useful, in that it converts an unexpected state into a clear error.

Atom Computing’s system. Rows of atoms are held far enough apart so that a single laser sent across them (green bar) only operates on individual atoms. If the atoms are moved to the interaction zone (red bar), a laser can perform gates on pairs of atoms. Spaces where atoms can be held can be left empty to avoid performing unneeded operations. Credit: Reichardt, et al.

The machine used in the new demonstration hosts 256 of these neutral atoms. Atom Computing has them arranged in sets of parallel rows, with space in between to let the atoms be shuffled around. For single-qubit gates, it’s possible to shine a laser across the rows, causing every atom it touches to undergo that operation. For two-qubit gates, pairs of atoms get moved to the end of the row and moved a specific distance apart, at which point a laser will cause the gate to be performed on every pair present.

Atom’s hardware also allows a constant supply of new atoms to be brought in to replace any that are lost. It’s also possible to image the atom array in between operations to determine whether any atoms have been lost and if any are in the wrong state.

It’s only logical

As a general rule, the more hardware qubits you dedicate to each logical qubit, the more simultaneous errors you can identify. This identification can enable two ways of handling the error. In the first, you simply discard any calculation with an error and start over. In the second, you can use information about the error to try to fix it, although the repair involves additional operations that can potentially trigger a separate error.

For this work, the Microsoft/Atom team used relatively small logical qubits (meaning they used very few hardware qubits), which meant they could fit more of them within the 256 hardware qubits the machine made available. They also checked the error rate of both error detection with discard and error detection with correction.

The research team did two main demonstrations. One was placing 24 of these logical qubits into what’s called a cat state, named after Schrödinger’s hypothetical feline. This is when a quantum object simultaneously has non-zero probability of being in two mutually exclusive states. In this case, the researchers placed 24 logical qubits in an entangled cat state, the largest ensemble of this sort yet created. Separately, they implemented what’s called the Bernstein-Vazirani algorithm. The classical version of this algorithm requires individual queries to identify each bit in a string of them; the quantum version obtains the entire string with a single query, so is a notable case of something where a quantum speedup is possible.
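
For readers who want to see what that single quantum query looks like in practice, here is a minimal Bernstein-Vazirani circuit written in Qiskit and run on a local simulator, not on anyone’s hardware. The hidden bit string is an arbitrary example; a classical approach would need one oracle query per bit to recover it.

```python
from qiskit import QuantumCircuit, transpile
from qiskit_aer import AerSimulator

secret = "1011"          # arbitrary hidden bit string for the example
n = len(secret)

qc = QuantumCircuit(n + 1, n)
qc.x(n)                  # ancilla starts in |1>
qc.h(range(n + 1))       # superposition over all inputs
for i, bit in enumerate(reversed(secret)):
    if bit == "1":
        qc.cx(i, n)      # oracle: XOR the secret's dot product onto the ancilla
qc.h(range(n))
qc.measure(range(n), range(n))

sim = AerSimulator()
counts = sim.run(transpile(qc, sim), shots=1024).result().get_counts()
print(counts)            # expect every shot to read out the secret: {'1011': 1024}
```

On an error-free simulator the answer comes out every time; on real hardware, the fidelity of that readout is exactly what the logical-qubit encodings described here are meant to protect.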

Both of these showed a similar pattern. When done directly on the hardware, with each qubit being a single atom, there was an appreciable error rate. By detecting errors and discarding those calculations where they occurred, it was possible to significantly improve the error rate of the remaining calculations. Note that this doesn’t eliminate errors, as it’s possible for multiple errors to occur simultaneously, altering the value of the qubit without leaving an indication that can be spotted with these small logical qubits.

Discarding has its limits; as calculations become increasingly complex, involving more qubits or operations, eventually every calculation will contain an error, so you’d end up wanting to discard everything. Which is why we’ll ultimately need to correct the errors.

In these experiments, however, the process of correcting the error—taking an entirely new atom and setting it into the appropriate state—was also error-prone. So, while it could be done, it ended up having an overall error rate that was intermediate between the approach of catching and discarding errors and the rate when operations were done directly on the hardware.

In the end, the current hardware has an error rate that’s good enough that error correction actually improves the probability that a set of operations can be performed without producing an error. But not good enough that we can perform the sort of complex operations that would lead quantum computers to have an advantage in useful calculations. And that’s not just true for Atom’s hardware; similar things can be said for other error-correction demonstrations done on different machines.

There are two ways to go beyond these current limits. One is simply to improve the error rates of the hardware qubits further, as fewer total errors make it more likely that we can catch and correct them. The second is to increase the qubit counts so that we can host larger, more robust logical qubits. We’re obviously going to need to do both, and Atom’s partnership with Microsoft was formed in the hope that it will help both companies get there faster.

Scientist behind superconductivity claims ousted

University of Rochester physicist Ranga Dias made headlines with his controversial claims of high-temperature superconductivity—and made headlines again when the two papers reporting the breakthroughs were later retracted under suspicion of scientific misconduct, although Dias denied any wrongdoing. The university conducted a formal investigation over the past year and has now terminated Dias’ employment, The Wall Street Journal reported.

“In the past year, the university completed a fair and thorough investigation—conducted by a panel of nationally and internationally known physicists—into data reliability concerns within several retracted papers in which Dias served as a senior and corresponding author,” a spokesperson for the University of Rochester said in a statement to the WSJ, confirming his termination. “The final report concluded that he engaged in research misconduct while a faculty member here.”

The spokesperson declined to elaborate further on the details of his departure, and Dias did not respond to the WSJ’s request for comment. Dias did not have tenure, so the final decision rested with the Board of Trustees after a recommendation from university President Sarah Mangelsdorf. Mangelsdorf had called for terminating his position in an August letter to the chair and vice chair of the Board of Trustees, so the decision should not come as a surprise. Dias’ lawsuit claiming that the investigation was biased was dismissed by a judge in April.

Ars has been following this story ever since Dias first burst onto the scene with reports of a high-pressure, room-temperature superconductor, published in Nature in 2020. Even as that paper was being retracted due to concerns about the validity of some of its data, Dias published a second paper in Nature claiming a similar breakthrough: a superconductor that works at high temperatures but somewhat lower pressures. Shortly afterward, that paper was retracted as well. As Ars Science Editor John Timmer reported previously:

Dias’ lab was focused on high-pressure superconductivity. At extreme pressures, the orbitals where electrons hang out get distorted, which can alter the chemistry and electronic properties of materials. This can mean the formation of chemical compounds that don’t exist at normal pressures, along with distinct conductivity. In a number of cases, these changes enabled superconductivity at unusually high temperatures, although still well below the freezing point of water.

Dias, however, supposedly found a combination of chemicals that would boost the transition to superconductivity to near room temperature, although only at extreme pressures. While the results were plausible, the details regarding how some of the data was processed to produce one of the paper’s key graphs were lacking, and Dias didn’t provide a clear explanation.

The ensuing investigation cleared Dias of misconduct for that first paper. Then came the second paper, which reported another high-temperature superconductor forming at less extreme pressures. However, potential problems soon became apparent, with many of the authors calling for its retraction, although Dias did not.

The ISS has been leaking air for 5 years, and engineers still don’t know why

“The station is not young,” said Michael Barratt, a NASA astronaut who returned from the space station last month. “It’s been up there for quite a while, and you expect some wear and tear, and we’re seeing that.”

“The Russians believe that continued operations are safe, but they can’t prove to our satisfaction that they are,” said Cabana, who was the senior civil servant at NASA until his retirement in 2023. “And the US believes that it’s not safe, but we can’t prove that to the Russian satisfaction that that’s the case.

“So while the Russian team continues to search for and seal the leaks, it does not believe catastrophic disintegration of the PrK is realistic,” Cabana said. “And NASA has expressed concerns about the structural integrity of the PrK and the possibility of a catastrophic failure.”

Closing the PrK hatch permanently would eliminate the use of one of the space station’s four Russian docking ports.

NASA has chartered a team of independent experts to assess the cracks and leaks and help determine the root cause, Cabana said. “This is an engineering problem, and good engineers should be able to agree on it.”

As a precaution, Barratt said space station crews are also closing the hatch separating the US and Russian sections of the space station when cosmonauts are working in the PrK.

“The way it’s affected us, mostly, is as they go in and open that to unload a cargo vehicle that’s docked to it, they’ve also taken time to inspect and try to repair when they can,” Barratt said. “We’ve taken a very conservative approach to closing the hatch between the US side and the Russian side for those time periods.

“It’s not a comfortable thing, but it is the best agreement between all the smart people on both sides, and it’s something that we as a crew live with and adapt.”

Trust in scientists hasn’t recovered from COVID. Some humility could help.

Study 3 essentially replicated study 2, but with the tweak that the articles varied whether the fictional scientist was male or female, in case gendered expectations affected how people perceived humility and trustworthiness. The results from 369 participants indicated that gender didn’t affect the link between IH and trust. Similarly, in study 4, with 371 participants, the researchers varied the race/ethnicity of the scientist, finding again that the link between IH and trust remained.

“Together, these four studies offer compelling evidence that perceptions of scientists’ IH play an important role in both trust in scientists and willingness to follow their research-based recommendations,” the authors concluded.

Next steps

In the final study involving 679 participants, researchers examined different ways that scientists might express IH, including whether the IH was expressed as a personal trait, limitations of research methods, or as limitations of research results. Unexpectedly, the strategies to express IH by highlighting limitations in the methods and results of research both increased perceptions of IH, but shook trust in the research. Only personal IH successfully boosted perceptions of IH without backfiring, the authors report.

The finding suggests that more research is needed to guide scientists on how best to express high IH. But, it’s clear that low IH is not good. “[W]e encourage scientists to be particularly mindful of displaying low IH, such as by expressing overconfidence, being unwilling to course correct or disrespecting others’ views,” the researchers caution.

Overall, Schumann said she was encouraged by the team’s findings. “They suggest that the public understands that science isn’t about having all the answers; it’s about asking the right questions, admitting what we don’t yet understand, and learning as we go. Although we still have much to discover about how scientists can authentically convey intellectual humility, we now know people sense that a lack of intellectual humility undermines the very aspects of science that make it valuable and rigorous. This is a great place to build from.”
