Our Universe is not fine-tuned for life, but it’s still kind of OK


Inspired by the Drake equation, researchers optimize a model universe for life.

Physicists including Robert H. Dicke and Fred Hoyle have argued that we are living in a universe that is perfectly fine-tuned for life. Following the anthropic principle, they claimed that the only reason fundamental physical constants have the values we measure is that we wouldn’t exist if those values were any different. There would simply have been no one to measure them.

But now a team of British and Swiss astrophysicists have put that idea to the test. “The short answer is no, we are not in the most likely of the universes,” said Daniele Sorini, an astrophysicist at Durham University. “And we are not in the most life-friendly universe, either.” Sorini led a study aimed at establishing how different amounts of the dark energy present in a universe would affect its ability to produce stars. Stars, he assumed, are a necessary condition for intelligent life to appear.

But worry not. While our Universe may not be the best for life, the team says it’s still pretty OK-ish.

Expanding the Drake equation

Back in the 1960s, Frank Drake, an American astrophysicist and astrobiologist, proposed an equation aimed at estimating the number of intelligent civilizations in our Universe. The equation started with stars as a precondition for life and worked its way down in scale from there. How many new stars appear in the Universe per year? How many of the stars are orbited by planets? How many of those planets are habitable? How many of those habitable planets can develop life? Eventually, you’re left with the fraction of planets that host intelligent civilizations.
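The equation’s structure, a chain of multiplied factors, can be sketched in a few lines. Here is a minimal Python illustration in which every parameter value is an invented placeholder rather than a measurement:

```python
# A minimal sketch of the Drake equation's multiplicative structure.
# Every parameter value used below is an illustrative placeholder,
# not a measurement.
def drake(r_star, f_p, n_e, f_l, f_i, f_c, lifetime):
    """N = R* * fp * ne * fl * fi * fc * L

    r_star:   new stars formed per year
    f_p:      fraction of stars with planets
    n_e:      habitable planets per planet-bearing star
    f_l:      fraction of habitable planets that develop life
    f_i:      fraction of life-bearing planets that develop intelligence
    f_c:      fraction of intelligent civilizations that are detectable
    lifetime: years a detectable civilization remains detectable
    """
    return r_star * f_p * n_e * f_l * f_i * f_c * lifetime

# With these arbitrary guesses, the estimate lands well under one
# detectable civilization:
estimate = drake(1.5, 1.0, 0.2, 0.1, 0.01, 0.1, 10_000)
print(f"{estimate:.2f}")  # 0.30
```

Because every factor multiplies the rest, shrinking any single fraction shrinks the whole estimate, which is why the unknown biological terms dominate the uncertainty.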

The problem with the Drake equation was that it wasn’t really supposed to yield a definite number. We couldn’t—and still can’t—know the values for most of its variables, like the fraction of the planets that developed life. So far, we know of only one such planet, and you can’t infer any statistical probabilities when you only have one sample. The equation was meant more as a guide for future researchers, giving them ideas of what to look for in their search for extraterrestrial life.

But even without knowing the actual values of all those variables present in the Drake equation, one thing was certain: The more stars you had at the beginning, the better the odds for life were. So Sorini’s team focused on stars.

“Our work is connected to the Drake equation in that it relies on the same logic,” Sorini said. “The difference is we are not adding to the life side of the equation. We’re adding to the stars’ side of the equation.” His team attempted to identify the basic constituents of a universe that’s good at producing stars.

“By ‘constituents,’ I mean ordinary matter, the stuff we are made of—the dark matter, which is a weirder, invisible type of matter, and the dark energy, which is what is making the expansion of a universe proceed faster and faster,” Sorini explained. Of all those constituents, his team found that dark energy has a key influence on the star formation rate.

Into the multiverse

Dark energy accelerates the expansion of the Universe, counteracting gravity and pushing matter further apart. With enough dark energy, it becomes difficult to form the dark matter web that structures galaxies. “The idea is ‘more dark energy, fewer galaxies—so fewer stars,’” Sorini said.

The effect of dark energy in a universe can be modeled by a number called the cosmological constant. “You could reinterpret it as a form of energy that can make your universe expand faster,” Sorini said.

(The cosmological constant was originally a number Albert Einstein introduced to counteract the expansion that his theory of general relativity predicted for what was thought to be a static universe. Einstein later learned that the Universe actually was expanding and declared the cosmological constant his greatest blunder. But the idea eventually made a comeback after it was discovered that the Universe’s expansion is accelerating.)

The cosmological constant was one of the variables Sorini’s team manipulated to determine if we are living in a universe that is maximally efficient at producing stars. Sorini based this work on an idea put forward by Steven Weinberg, a Nobel Prize-winning physicist, back in 1989. “Weinberg proposed that there could be a multiverse of all possible universes, each with a different value of dark energy,” Sorini explained. Sorini’s team modeled that multiverse, composed of thousands upon thousands of possible universes, each complete with a past and future.

Cosmological fluke

To simulate the history of all those universes, Sorini used a slightly modified version of a star formation model he developed back in 2021 with John A. Peacock, a British astronomer at the University of Edinburgh, Scotland, and co-author of the study. It wasn’t the most precise model, but the approximations it suggested produced a universe that was reasonably close to our own. The team validated the results by predicting the stellar mass fraction in the total mass of the Milky Way Galaxy, which we know stands somewhere between 2.2 and 6.6 percent. The model came up with 6.7 percent, which was deemed good enough for the job.

In the next step, Sorini and his colleagues defined a large set of possible universes in which the value of the cosmological constant ranged from a tiny fraction of the value we observe in our Universe all the way to a value 100,000 times higher than our own.

It turned out our Universe was not the best at producing stars. But it was decent.

“The value of the cosmological constant in the most life-friendly universe would be measured at roughly one-tenth of the value we observe in our own,” Sorini said.

In a universe like that, the fraction of the matter that gets turned into stars would stand at 27 percent. “But we don’t seem to be that far from the optimal value. In our Universe, stars are formed with around 23 percent of the matter,” Sorini said.

The last question the team addressed was how lucky we are to even be here. According to Sorini’s calculations, if all universes in the multiverse are equally likely, the chances of having a cosmological constant at or below the value present in our Universe are just 0.5 percent. In other words, we rolled the dice and got a pretty good score, although it could have been a bit better. The odds of getting a cosmological constant at one-tenth of our own or lower were just 0.2 percent.

Things also could have been much worse. The flip side of these odds is that the number of possible universes that are worse than our own vastly exceeds the number of universes that are better.

“That is of course all subject to the assumptions of our model, and the only assumption about life we made was that more stars lead to higher chances for life to appear,” Sorini said. In the future, his team plans to go beyond that idea and make the model more sophisticated by considering more parameters. “For example, we could ask ourselves what the chances are of producing carbon in order to have life as we know it or something like that,” Sorini said.

Monthly Notices of the Royal Astronomical Society, 2024.  DOI: https://doi.org/10.1093/mnras/stae2236

Jacek Krywko is a freelance science and technology writer who covers space exploration, artificial intelligence research, computer science, and all sorts of engineering wizardry.


A former Orion manager has surprisingly credible plans to fly European astronauts

She found herself wanting to build something more modern. Looking across the Atlantic, she drew inspiration from what SpaceX was doing with its reusable Falcon 9 rocket. She watched humans launch into space aboard Crew Dragon and saw that same vehicle fly again and again. “I have a huge admiration for what SpaceX has done,” she said.

Huby also saw opportunity in that company’s success. SpaceX is the only provider of crew transportation in the Western world. It’s likely that Boeing’s Starliner spacecraft will never become a serious competitor. India’s human spaceflight program is making some progress, but it’s unclear whether the Gaganyaan vehicle will serve non-Indian customers.

The opportunity she saw was to provide an alternative to SpaceX based in Europe. This would yield 100 percent of the market in Europe and offer an option to countries like Saudi Arabia, the United Arab Emirates, Australia, and other nations interested in going to space.

“I know it’s super hard, and I know it was crazy,” Huby said. “But I wanted to try.”

Starting small

She founded The Exploration Company in August 2021 with $50,000 in the bank and a small team of four people. Three years later, the company has 200 employees and recently announced that it had raised $160 million in Series B funding. It marked the first time that two European sovereign funds, French Tech and Germany-based DTCF, invested together. The news even scored a congratulatory post on LinkedIn from French President Emmanuel Macron, who wrote, “The history of space continues to be written in Europeans.”

To date, then, Huby has raised nearly $230 million. Her company has already flown a mission, the “Bikini” reentry demonstrator, on the debut flight of the Ariane 6 rocket this last summer. The small capsule was intended to demonstrate the company’s reentry technology. Unfortunately, the rocket’s upper stage failed on its deorbit burn, so the Bikini capsule remains stuck in space.

Still, the company is already hard at work on a second demonstration vehicle, about 2.5 meters in diameter, that will have more than a dozen customers on board. The spacecraft for this demonstration flight, named Mission Possible, is fully assembled, Huby said, and it will launch on SpaceX’s Transporter 14 mission next summer, likely in July. This mission was developed in 2.5 years at a cost of $20 million, plus $10 million for the launch.


Russian ballistic missile attack on Ukraine portends new era of warfare

The Oreshnik missiles strike their targets at speeds of up to Mach 10, or 2.5 to 3 kilometers per second, Putin said. “The existing air defense systems around the world, including those being developed by the US in Europe, are unable to intercept such missiles.”

A global war?

In perhaps the most chilling part of his remarks, Putin said the conflict in Ukraine is “taking on global dimensions” and said Russia is entitled to use missiles against Western countries supplying weapons for Ukraine to use against Russian targets.

“In the event of escalation, we will respond decisively and in kind,” Putin said. “I advise the ruling elites of those countries planning to use their military forces against Russia to seriously consider this.”

The change in nuclear doctrine authorized by Putin earlier this week also lowers the threshold for Russia’s use of nuclear weapons to counter a conventional attack that threatens Russian “territorial integrity.”

This seems to have already happened. Ukraine launched an offensive into Russia’s Kursk region in August, taking control of more than 1,000 square kilometers of Russian land. Russian forces, assisted by North Korean troops, are staging a counteroffensive to try to retake the territory.

Singh called Russia’s invitation of North Korean troops “escalatory” and said Putin could “choose to end this war today.”

US officials say Russian forces are suffering some 1,200 deaths or injuries per day in the war. In September, The Wall Street Journal reported that US intelligence sources estimated that a million Ukrainians and Russians had been killed or wounded in the war.

The UN Human Rights Office most recently reported that 11,973 civilians have been killed, including 622 children, since the start of the full-scale Russian invasion in February 2022.

“We warned Russia back in 2022 not to do this, and they did it anyways, so there are consequences for that,” Singh said. “But we don’t want to see this escalate into a wider regional conflict. We don’t seek war with Russia.”


Surgeons remove 2.5-inch hairball from teen with rare Rapunzel syndrome

Hair is resistant to digestion and isn’t easily moved through the digestive system. As such, it often gets lodged in folds of the gastric lining, denatures, and then traps food and gunk to form a mass. Over time, it will continue to collect material, growing into a thick, matted wad.

Of all the bezoars, trichobezoars are the most common. But none of them are particularly easy to spot. On CT scans, bezoars can be indistinguishable from food in the stomach unless there’s an oral contrast material. To look for a possible bezoar in the teen, her doctors ordered an esophagogastroduodenoscopy, in which a scope is put down into the stomach through the mouth. With that, they got a clear shot of the problem: a trichobezoar. (The image is here, but a warning: it’s graphic).

Tangled tail

But this trichobezoar was particularly rare; hair from the matted mass had dangled down from the stomach and into the small bowel, an extremely uncommon condition called Rapunzel syndrome, named after the fairy-tale character who lets down her long hair. It carries a host of complications beyond acute abdominal pain, including perforation of the stomach and intestines, and acute pancreatitis. The only resolution is surgical removal. In the teen’s case, the trichobezoar came out during surgery using a gastrostomy tube. Surgeons recovered a hairball about 2.5 inches wide, along with the dangling hair that reached into the small intestine.

For any patient with a trichobezoar, the most important next step is to address any psychiatric disorders that might underlie hair-eating behavior. Hair eating is often linked to a condition called trichotillomania, a repetitive behavior disorder marked by hair pulling. Sometimes, the disorder can be diagnosed by signs of hair loss—bald patches, irritated scalp areas, or hair at different growth stages. But, for the most part, it’s an extremely difficult condition to diagnose as patients have substantial shame and embarrassment about the condition and will often go to great lengths to hide it.

Another possibility is that the teen had pica, a disorder marked by persistent eating of nonfood, nonnutritive substances. Intriguingly, the teen noted that she had pica as a toddler. But doctors were skeptical that pica could explain her condition given that hair was the only nonfood material in the bezoar.

The teen’s doctors would have liked to get to the bottom of her condition and referred her to a psychiatrist after she successfully recovered from surgery. But unfortunately, she did not return for follow-up care and told her doctors she would instead see a hypnotherapist that her friends recommended.


We’re closer to re-creating the sounds of Parasaurolophus

The duck-billed dinosaur Parasaurolophus is distinctive for its prominent crest, which some scientists have suggested served as a kind of resonating chamber to produce low-frequency sounds. Nobody really knows what Parasaurolophus sounded like, however. Hongjun Lin of New York University is trying to change that by constructing his own model of the dinosaur’s crest and its acoustical characteristics. Lin has not yet reproduced the call of Parasaurolophus, but he talked about his progress thus far at a virtual meeting of the Acoustical Society of America.

Lin was inspired in part by the dinosaur sounds featured in the Jurassic Park film franchise, which were a combination of sounds from other animals like baby whales and crocodiles. “I’ve been fascinated by giant animals ever since I was a kid. I’d spend hours reading books, watching movies, and imagining what it would be like if dinosaurs were still around today,” he said during a press briefing. “It wasn’t until college that I realized the sounds we hear in movies and shows—while mesmerizing—are completely fabricated using sounds from modern animals. That’s when I decided to dive deeper and explore what dinosaurs might have actually sounded like.”

A skull and partial skeleton of Parasaurolophus were first discovered in 1920 along the Red Deer River in Alberta, Canada, and another partial skull was discovered the following year in New Mexico. There are now three known species of Parasaurolophus; the name means “near crested lizard.” While no complete skeleton has yet been found, paleontologists have concluded that the adult dinosaur likely stood about 16 feet tall and weighed between 6,000 and 8,000 pounds. Parasaurolophus was an herbivore that could walk on all four legs while foraging for food but may have run on two legs.

It’s that distinctive crest that has most fascinated scientists over the last century, particularly its purpose. Past hypotheses have included its use as a snorkel or as a breathing tube while foraging for food; as an air trap to keep water out of the lungs; or as an air reservoir so the dinosaur could remain underwater for longer periods. Other scientists suggested the crest was designed to help move and support the head or perhaps used as a weapon while combating other Parasaurolophus. All of these, plus a few others, have largely been discredited.


NASA is stacking the Artemis II rocket, implying a simple heat shield fix

A good sign

The readiness of the Orion crew capsule, where the four Artemis II astronauts will live during their voyage around the Moon, is driving NASA’s schedule for the mission. Officially, Artemis II is projected to launch in September of next year, but there’s little chance of meeting that schedule.

At the beginning of this year, NASA officials ruled out any opportunity to launch Artemis II in 2024 due to several technical issues with the Orion spacecraft. Several of these issues are now resolved, but NASA has not released any meaningful updates on the most significant problem.

This problem involves the Orion spacecraft’s heat shield. During atmospheric reentry at the end of the uncrewed Artemis I test flight in 2022, the Orion capsule’s heat shield eroded and cracked in unexpected ways, prompting investigations by NASA engineers and an independent panel.

NASA’s Orion heat shield inquiry ran for nearly two years. The investigation has wrapped up, two NASA officials said last month, but they declined to discuss any details of the root cause of the heat shield issue or the actions required to resolve the problem on Artemis II.

The corrective options under consideration ranged from doing nothing, to changing the Orion spacecraft’s reentry angle to mitigate heating, to physically modifying the Artemis II heat shield. In the latter scenario, NASA would have to disassemble the Orion spacecraft, which is already put together and is undergoing environmental testing at Kennedy Space Center. This would likely delay the Artemis II launch by a couple of years.

In August, NASA’s top human exploration official told Ars that the agency would hold off on stacking the SLS rocket until engineers had a good handle on the heat shield problem. There are limits to how long the solid rocket boosters can remain stacked vertically. The joints connecting each segment of the rocket motors are certified for one year. This clock doesn’t actually start ticking until NASA stacks the next booster segments on top of the lowermost segments.

However, NASA waived this rule on Artemis I when the boosters were stacked nearly two years before the successful launch.

A NASA spokesperson told Ars on Wednesday that the agency had nothing new to share on the Orion heat shield or what changes, if any, are required for the Artemis II mission. This information should be released before the end of the year, she said. At the same time, NASA could announce a new target launch date for Artemis II at the end of 2025, or more likely in 2026.

But because NASA gave the “go” for SLS stacking now, it seems safe to rule out any major hardware changes on the Orion heat shield for Artemis II.


Study: Yes, tapping on frescoes can reveal defects

The US Capitol building in Washington, DC, is adorned with multiple lavish murals created in the 19th century by Italian artist Constantino Brumidi. These include panels in the Senate first-floor corridors, Room H-144, and the rotunda. The crowning glory is The Apotheosis of Washington on the dome of the rotunda, 180 feet above the floor.

Brumidi worked in various mediums, including frescoes. Among the issues facing conservators charged with maintaining the Capitol building frescoes is delamination. Artists apply dry pigments to wet plaster to create a fresco, and a good fresco can last for centuries. Over time, though, the decorative plaster layers can separate from the underlying masonry, introducing air gaps. Knowing precisely where such delaminated areas are, and their exact shape, is crucial to conservation efforts, yet the damage might not be obvious to the naked eye.

Acoustician Nicholas Gangemi is part of a research group led by Joseph Vignola at the Catholic University of America that has been using laser Doppler vibrometry to pinpoint delaminated areas of the Capitol building frescoes. It’s a non-invasive method that zaps the frescoes with sound waves and measures the vibrational signatures that reflect back to learn about the structural conditions. This in turn enables conservators to make very precise repairs to preserve the frescoes for future generations.

It’s an alternative to the traditional technique of gently knocking on the plaster with knuckles or small mallets, listening to the resulting sounds to determine where delamination has occurred. Once separation occurs, the delaminated part of the fresco acts a bit like the head of a drum; tapping on it produces a distinctive acoustic signature.

But the method is highly subjective. It takes years of experience to become proficient at this method, and there are only a small number of people who can truly be deemed experts. “We really wanted to put that experience and knowledge into an inexperienced person’s hands,” Gangemi said during a press briefing at a virtual meeting of the Acoustical Society of America. So he and his colleagues decided to put the traditional knocking method to the test.


Qubit that makes most errors obvious now available to customers


Can a small machine that makes error correction easier upend the market?

A graphic representation of the two resonance cavities that can hold photons, along with a channel that lets the photon move between them. Credit: Quantum Circuits

We’re nearing the end of the year, and there is typically a flood of announcements regarding quantum computers around now, in part because some companies want to live up to promised schedules. Most of these involve evolutionary improvements on previous generations of hardware. But this year, we have something new: the first company to market with a new qubit technology.

The technology is called a dual-rail qubit, and it is intended to make the most common form of error trivially easy to detect in hardware, thus making error correction far more efficient. And, while tech giant Amazon has been experimenting with them, a startup called Quantum Circuits is the first to give the public access to dual-rail qubits via a cloud service.

While the tech is interesting on its own, it also provides us with a window into how the field as a whole is thinking about getting error-corrected quantum computing to work.

What’s a dual-rail qubit?

Dual-rail qubits are variants of the hardware used in transmons, the qubits favored by companies like Google and IBM. The basic hardware unit links a loop of superconducting wire to a tiny cavity that allows microwave photons to resonate. This setup allows the presence of microwave photons in the resonator to influence the behavior of the current in the wire and vice versa. In a transmon, microwave photons are used to control the current. But there are other companies that have hardware that does the reverse, controlling the state of the photons by altering the current.

Dual-rail qubits use two of these systems linked together, allowing photons to move from one resonator to the other. Using the superconducting loops, it’s possible to control the probability that a photon will end up in the left or right resonator. The actual location of the photon will remain unknown until it’s measured, allowing the system as a whole to hold a single bit of quantum information—a qubit.

This has an obvious disadvantage: You have to build twice as much hardware for the same number of qubits. So why bother? Because the vast majority of errors involve the loss of the photon, and that’s easily detected. “It’s about 90 percent or more [of the errors],” said Quantum Circuits’ Andrei Petrenko. “So it’s a huge advantage that we have with photon loss over other errors. And that’s actually what makes the error correction a lot more efficient: The fact that photon losses are by far the dominant error.”

Petrenko said that, without doing a measurement that would disrupt the storage of the qubit, it’s possible to determine if there is an odd number of photons in the hardware. If that isn’t the case, you know an error has occurred—most likely a photon loss (gains of photons are rare but do occur). For simple algorithms, this would be a signal to simply start over.
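A toy simulation makes the benefit concrete. This is only a sketch, not Quantum Circuits’ actual stack: the per-shot error probability below is an invented illustration, and only the roughly 90 percent photon-loss share comes from Petrenko’s figure.

```python
import random

def count_errors(n_shots, p_error=0.01, loss_share=0.9, seed=1):
    """Split simulated errors into flagged photon losses vs. silent errors.

    A dual-rail parity check can flag photon loss without reading out the
    stored qubit state; other errors (mostly phase flips) slip through and
    still require full error correction. Both rates here are assumptions
    for illustration.
    """
    rng = random.Random(seed)
    flagged = silent = 0
    for _ in range(n_shots):
        if rng.random() < p_error:         # an error occurs this shot
            if rng.random() < loss_share:  # ...and it's a photon loss
                flagged += 1               # detected: restart or correct
            else:
                silent += 1                # undetected without more checks
    return flagged, silent

flagged, silent = count_errors(100_000)
print(flagged, silent)  # flagged losses outnumber silent errors ~9:1
```

Because the flagged events are known erasures, the error-correction machinery only has to hunt for the small silent remainder, which is the efficiency gain the article describes.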

But it does not eliminate the need for error correction if we want to do more complex computations that can’t make it to completion without encountering an error. There’s still the remaining 10 percent of errors, which are primarily something called a phase flip that is distinct to quantum systems. Bit flips are even more rare in dual-rail setups. Finally, simply knowing that a photon was lost doesn’t tell you everything you need to know to fix the problem; error-correction measurements of other parts of the logical qubit are still needed to fix any problems.

The layout of the new machine. Each qubit (gray square) involves a left and right resonance chamber (blue dots) that a photon can move between. Each of the qubits has connections that allow entanglement with its nearest neighbors. Credit: Quantum Circuits

In fact, the initial hardware that’s being made available is too small to even approach useful computations. Instead, Quantum Circuits chose to link eight qubits with nearest-neighbor connections in order to allow it to host a single logical qubit that enables error correction. Put differently: this machine is meant to enable people to learn how to use the unique features of dual-rail qubits to improve error correction.

One consequence of having this distinctive hardware is that the software stack that controls operations needs to take advantage of its error detection capabilities. None of the other hardware on the market can be directly queried to determine whether it has encountered an error. So, Quantum Circuits has had to develop its own software stack to allow users to actually benefit from dual-rail qubits. Petrenko said that the company also chose to provide access to its hardware via its own cloud service because it wanted to connect directly with the early adopters in order to better understand their needs and expectations.

Numbers or noise?

Given that a number of companies have already released multiple revisions of their quantum hardware and have scaled them into hundreds of individual qubits, it may seem a bit strange to see a company enter the market now with a machine that has just a handful of qubits. But amazingly, Quantum Circuits isn’t alone in planning a relatively late entry into the market with hardware that only hosts a few qubits.

Having talked with several of them, I can say there is a logic to what they’re doing. What follows is my attempt to convey that logic in a general form, without focusing on any single company’s case.

Everyone agrees that the future of quantum computation is error correction, which requires linking together multiple hardware qubits into a single unit termed a logical qubit. To get really robust, error-free performance, you have two choices. One is to devote lots of hardware qubits to the logical qubit, so you can handle multiple errors at once. Or you can lower the error rate of the hardware, so that you can get a logical qubit with equivalent performance while using fewer hardware qubits. (The two options aren’t mutually exclusive, and everyone will need to do a bit of both.)

The two options pose very different challenges. Improving the hardware error rate means diving into the physics of individual qubits and the hardware that controls them. In other words, getting lasers that have fewer of the inevitable fluctuations in frequency and energy. Or figuring out how to manufacture loops of superconducting wire with fewer defects or handle stray charges on the surface of electronics. These are relatively hard problems.

By contrast, scaling qubit count largely involves being able to consistently do something you already know how to do. So, if you already know how to make good superconducting wire, you simply need to make a few thousand instances of that wire instead of a few dozen. The electronics that trap an atom can be designed so that producing thousands of copies is straightforward. These are mostly engineering problems, and generally of similar complexity to problems we’ve already solved to make the electronics revolution happen.

In other words, within limits, scaling is a much easier problem to solve than errors. It’s still going to be extremely difficult to get the millions of hardware qubits we’d need to error correct complex algorithms on today’s hardware. But if we can get the error rate down a bit, we can use smaller logical qubits and might only need 10,000 hardware qubits, which will be more approachable.
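That back-of-the-envelope reasoning can be sketched with the common surface-code scaling heuristic, in which the logical error rate falls as p_L ≈ A(p/p_th)^((d+1)/2) for code distance d. The constants A and p_th and the roughly 2d² physical-qubit count below are rough, assumed figures for illustration only, not numbers from the article.

```python
def distance_needed(p_phys, target=1e-12, p_th=1e-2, prefactor=0.1):
    """Smallest surface-code distance d reaching the target logical error
    rate, using the heuristic p_L ~ prefactor * (p_phys/p_th)**((d+1)/2).
    All constants are illustrative assumptions."""
    ratio = p_phys / p_th
    k = 1  # k = (d + 1) / 2
    while prefactor * ratio ** k > target:
        k += 1
    return 2 * k - 1

def physical_qubits(d):
    return 2 * d * d  # rough count: data plus measurement qubits

# Lowering the physical error rate sharply shrinks the footprint of a
# logical qubit with the same target performance:
for p in (5e-3, 2e-3):
    d = distance_needed(p)
    print(f"p={p}: distance {d}, ~{physical_qubits(d)} physical qubits")
```

Under these assumed constants, cutting the physical error rate from 0.5 percent to 0.2 percent shrinks the per-logical-qubit overhead by roughly a factor of five, which is the kind of leverage the text describes.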

Errors first

And there’s evidence that even the early entries in quantum computing have reasoned the same way. Google has been working on iterations of the same chip design since its 2019 quantum supremacy announcement, focusing on understanding the errors that occur on improved versions of that chip. IBM made hitting the 1,000 qubit mark a major goal but has since been focused on reducing the error rate in smaller processors. Someone at a quantum computing startup once told us it would be trivial to trap more atoms in its hardware and boost the qubit count, but there wasn’t much point in doing so given the error rates of the qubits on the then-current generation machine.

The new companies entering this market now are making the argument that they have a technology that will either radically reduce the error rate or make handling the errors that do occur much easier. Quantum Circuits clearly falls into the latter category, as dual-rail qubits are entirely about making the most common form of error trivial to detect. The former category includes companies like Oxford Ionics, which has indicated it can perform single-qubit gates with a fidelity of over 99.9991 percent. Or Alice & Bob, which stores qubits in the behavior of multiple photons in a single resonance cavity, making them very robust to the loss of individual photons.

These companies are betting that they have distinct technology that will let them handle error rate issues more effectively than established players. That will lower the total scaling they need to do, and scaling will be an easier problem overall—and one that they may already have the pieces in place to handle. Quantum Circuits’ Petrenko, for example, told Ars, “I think that we’re at the point where we’ve gone through a number of iterations of this qubit architecture where we’ve de-risked a number of the engineering roadblocks.” And Oxford Ionics told us that if they could make the electronics they use to trap ions in their hardware once, it would be easy to mass manufacture them.

None of this should imply that these companies will have it easy compared to a startup that already has experience with both reducing errors and scaling, or a giant like Google or IBM that has the resources to do both. But it does explain why, even at this stage in quantum computing’s development, we’re still seeing startups enter the field.


John is Ars Technica’s science editor. He has a Bachelor of Arts in Biochemistry from Columbia University, and a Ph.D. in Molecular and Cell Biology from the University of California, Berkeley. When physically separated from his keyboard, he tends to seek out a bicycle, or a scenic location for communing with his hiking boots.


study:-why-aztec-“death-whistles”-sound-like-human-screams

Study: Why Aztec “death whistles” sound like human screams

Aztec death whistles don’t fit into any existing Western classification for wind instruments; they seem to be a unique kind of “air spring” whistle, based on CT scans of some of the artifacts. Sascha Frühholz, a cognitive and affective neuroscientist at the University of Zürich, and several colleagues wanted to learn more about the physical mechanisms behind the whistle’s distinctive sound, as well as how humans perceive said sound—a field known as psychoacoustics. “The whistles have a very unique construction, and we don’t know of any comparable musical instrument from other pre-Columbian cultures or from other historical and contemporary contexts,” said Frühholz.

A symbolic sound?


Human sacrifice with original skull whistle (small red box and enlarged rotated view in lower right) discovered 1987–89 at the Ehecatl-Quetzalcoatl temple in Mexico City. Credit: Salvador Guillien Arroyo, Proyecto Tlatelolco

For their acoustic analysis, Frühholz et al. obtained sound recordings from two Aztec skull whistles excavated from Tlatelolco, as well as from three noise whistles (part of Aztec fire snake incense ladles). They took CT scans of whistles in the collection of the Ethnological Museum in Berlin, enabling them to create both 3D digital reconstructions and physical clay replicas. They were also able to acquire three additional artisanal clay whistles for experimental purposes.

Human participants then blew into the replicas with low-, medium-, and high-intensity air pressure, and the ensuing sounds were recorded. Those recordings were compared to existing databases of a broad range of sounds: animals, natural soundscapes, water sounds, urban noise, synthetic sounds (as for computers, pinball machines, printers, etc.), and various ancient instruments, among other samples. Finally, a group of 70 human listeners rated a random selection of sounds from a collection of over 2,500 samples.

The CT scans showed that skull whistles have an internal tube-like air duct with a constricted passage, a counter pressure chamber, a collision chamber, and a bell cavity. The unusual construction suggests that the basic principle at play is the Venturi effect, in which air (or a generic fluid) speeds up as it flows through a constricted passage, thereby reducing the pressure. “At high playing intensities and air speeds, this leads to acoustic distortions and to a rough and piercing sound character that seems uniquely produced by the skull whistles,” the authors wrote.
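The Venturi effect follows from two textbook relations: continuity (a narrower duct means faster flow) and Bernoulli’s equation (faster flow means lower pressure). The duct areas and inlet speed below are made-up numbers for illustration, not measurements from the whistle CT scans.

```python
RHO_AIR = 1.2  # kg/m^3, roughly room-temperature air

def venturi(v1, a1, a2, rho=RHO_AIR):
    """Return (speed, pressure drop) for incompressible flow entering a constriction."""
    v2 = v1 * a1 / a2                 # continuity: A1*v1 = A2*v2
    dp = 0.5 * rho * (v2**2 - v1**2)  # Bernoulli: faster air, lower pressure
    return v2, dp

# Hypothetical 4:1 constriction with 10 m/s inlet air
v2, dp = venturi(v1=10.0, a1=4e-5, a2=1e-5)
print(f"speed in constriction: {v2:.0f} m/s, pressure drop: {dp:.0f} Pa")
```

Even this idealized model shows why pushing air harder through a constricted passage produces sharply higher speeds inside the whistle, which is where the distortion and rough sound character originate.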


spacex-just-got-exactly-what-it-wanted-from-the-faa-for-texas-starship-launches

SpaceX just got exactly what it wanted from the FAA for Texas Starship launches

And there will be significant impacts. For example, the number of large trucks that deliver water, liquid oxygen, methane, and other commodities will increase substantially. According to the FAA document, truck traffic will grow from an estimated 6,000 trucks a year to 23,771 annually. This number could be reduced by running a water line along State Highway 4 to supply the launch site’s water deluge system.

SpaceX has made progress in some areas, the document notes. For example, in terms of road closures for testing and launch activities, SpaceX has reduced the duration of closures along State Highway 4 to Boca Chica Beach by 85 percent between the first and third flight of Starship. This has partly been accomplished by moving launch preparation activities to the “Massey’s Test Site,” located about four miles from the launch site. SpaceX is now expected to need less than 20 hours of access restrictions per launch campaign, including landings.

SpaceX clearly got what it wanted

If finalized, this environmental assessment will give SpaceX the regulatory green light to match its launch ambitions for 2025, if not beyond. During recent public meetings, SpaceX’s general manager of Starbase, Kathy Lueders, has said the company aims to launch Starship 25 times next year from Texas. The new regulations would permit this.

Additionally, SpaceX founder Elon Musk has said the company intends to move to a larger and more powerful version of the Starship and Super Heavy rocket about a year from now. This version, dubbed Starship 3, would double the thrust of the upper stage and increase the thrust of the booster stage from about 74 meganewtons to about 100 meganewtons. If that number seems a little abstract, another way to think about it is that Starship would have a thrust at liftoff three times as powerful as NASA’s Saturn V rocket that launched humans to the Moon decades ago. The draft environmental assessment permits this as well.
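A quick unit check bears out the "three times as powerful" comparison. The 100-meganewton figure comes from the article; Saturn V’s roughly 35 MN liftoff thrust is a commonly cited figure used here as an assumption.

```python
LBF_PER_NEWTON = 0.2248  # pounds-force per newton

starship3_mn = 100  # projected Starship 3 liftoff thrust, meganewtons (from the article)
saturn_v_mn = 35    # approximate Saturn V liftoff thrust, meganewtons (assumed)

ratio = starship3_mn / saturn_v_mn
starship3_mlbf = starship3_mn * 1e6 * LBF_PER_NEWTON / 1e6  # millions of lbf

print(f"Starship 3 vs. Saturn V: {ratio:.1f}x")
print(f"100 MN is about {starship3_mlbf:.1f} million pounds-force")
```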


cracking-the-recipe-for-perfect-plant-based-eggs

Cracking the recipe for perfect plant-based eggs


Hint: It involves finding exactly the right proteins.

An egg is an amazing thing, culinarily speaking: delicious, nutritious, and versatile. Americans eat nearly 100 billion of them every year, almost 300 per person. But eggs, while greener than other animal food sources, have a bigger environmental footprint than almost any plant food—and industrial egg production raises significant animal welfare issues.

So food scientists, and a few companies, are trying hard to come up with ever-better plant-based egg substitutes. “We’re trying to reverse-engineer an egg,” says David Julian McClements, a food scientist at the University of Massachusetts Amherst.

That’s not easy, because real eggs play so many roles in the kitchen. You can use beaten eggs to bind breadcrumbs in a coating, or to hold together meatballs; you can use them to emulsify oil and water into mayonnaise, scramble them into an omelet or whip them to loft a meringue or angel food cake. An all-purpose egg substitute must do all those things acceptably well, while also yielding the familiar texture and—perhaps—flavor of real eggs.

Today’s plant-based eggs still fall short of that one-size-fits-all goal, but researchers in industry and academia are trying to improve them. New ingredients and processes are leading toward egg substitutes that are not just more egg-like, but potentially more nutritious and better tasting than the original.

In practice, making a convincing plant-based egg is largely a matter of mimicking the way the ovalbumin and other proteins in real eggs behave during cooking. When egg proteins are heated beyond a critical point, they unfold and grab onto one another, forming what food scientists call a gel. That causes the white and then the yolk to set up when cooked.



Eggs aren’t just for frying or scrambling. Cooks use them to bind other ingredients together and to emulsify oil and water to make mayonnaise. The proteins in egg whites can also be whipped into a foam that’s essential in meringues and angel food cake. Finding a plant-based egg substitute that does all of these things has proven challenging. Credit: Adam Gault via Getty

That’s not easy to replicate with some plant proteins, which tend to have more sulfur-containing amino acids than egg proteins do. These sulfur groups bond to one another, stabilizing the folded structure, so the proteins unfold only at higher temperatures. As a result, they must usually be cooked longer and hotter than the proteins in real eggs.

To make a plant-based egg, food scientists typically start by extracting a mix of proteins from a plant source such as soybean, mung bean, or other crops. “You want to start with what is a sustainable, affordable, and consistent source of plant proteins,” says McClements, who wrote about the design of plant-based foods in the 2024 Annual Review of Food Science and Technology. “So you’re going to narrow your search to that group of proteins that are economically feasible to use.”

Fortunately, some extracts are dominated by one or a few proteins that set at low-enough temperatures to behave pretty much like real egg proteins. Current plant-based eggs rely on these proteins: Just Egg uses the plant albumins and globulin found in mung bean extract, Simply Eggless uses proteins from lupin beans, and McClements and others are experimenting with the photosynthetic enzyme rubisco that is abundant in duckweed and other leafy tissues.

These days, food technologists can produce a wide range of proteins in large quantities by inserting the gene for a selected protein into hosts like bacteria or yeast, then growing the hosts in a tank, a process called precision fermentation. That opens a huge new window for exploration of other plant-based protein sources that may more precisely match the properties of actual eggs.

A few companies are already searching. Shiru, a California-based biotech company, for example, uses a sophisticated artificial intelligence platform to identify proteins with specific properties from its database of more than 450 million natural protein sequences. To find a more egglike plant protein, the company first picked the criteria it needed to match. “For eggs, that is the thermal gel onset—that is, when it goes from liquid to solid when you heat it,” says Jasmin Hume, a protein engineer who is the company’s founder and CEO. “And it must result in the right texture—not too hard, not too gummy, not too soft.” Those properties depend on details such as which amino acids a protein contains, in what order, and precisely how it folds into a 3D structure—a hugely complex process that was the subject of the 2024 Nobel Prize in chemistry.
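The screening idea can be sketched as a simple filter over a property database. Everything below is invented for illustration — the entries, the property names, and the cutoffs; Shiru’s actual platform and criteria are proprietary. The 60–70°C window is a rough stand-in for the temperature range where egg white proteins begin to set.

```python
# (name, gel_onset_C, gel_hardness) -- hypothetical entries
library = [
    ("protein_A", 62, 0.45),
    ("protein_B", 95, 0.80),  # sets far too hot to behave like egg
    ("protein_C", 68, 0.50),
    ("protein_D", 64, 0.95),  # right onset temperature, but too hard
]

# Keep proteins that gel in an egglike temperature window with an
# egglike texture: not too hard, not too soft.
candidates = [
    name for name, onset, hardness in library
    if 60 <= onset <= 70 and 0.3 <= hardness <= 0.6
]
print(candidates)  # ['protein_A', 'protein_C']
```

The real problem is vastly harder — predicting gel onset and texture from sequence alone is the expensive step — but the winnowing logic is the same: define measurable criteria, then filter.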

The company then scoured its database, winnowing it down to a short list that it predicted would fit the bill. Technicians produced those proteins and tested their properties, pinpointing a handful of potential egglike proteins. A few were good enough to start the company working to commercialize their production, though Hume declined to provide further details.

Cracking the flavor code

With the main protein in hand, the next step for food technologists is to add other molecules that help make the product more egglike. Adding vegetable oils, for example, can change the texture. “If I don’t put any oil in the product, it’s going to scramble more like an egg white,” says Chris Jones, a chef who is vice president of product development at Eat Just, which produces the egg substitute Just Egg. “If I put 8 to 15 percent, it’s going to scramble like a whole egg. If I add more, it’s going to behave like a batter.”

Developers can also add gums to prevent the protein in the mixture from settling during storage, or add molecules that are translucent at room temperature but turn opaque when cooked, providing the same visual cue to doneness that real eggs provide.

And then there’s the taste: Current plant-based eggs often suffer from off flavors. “Our first version tasted like what you imagine the bottom of a lawn mower deck would taste like—really grassy,” says Jones. The company’s current product, version 5, still has some beany notes, he says.

Those beany flavors aren’t caused by a single molecule, says Devin Peterson, a flavor chemist at Ohio State University: “It’s a combination that creates beany.” Protein extracts from legumes contain enzymes that create some of these off-flavor volatile molecules—and it’s a painstaking process to single out the offending volatiles and avoid or remove them, he says. (Presumably, cooking up single proteins in a vat could reduce this problem.) Many plant proteins also have molecules called polyphenols bound to their surfaces that contribute to beany flavors. “It’s very challenging to remove these polyphenols, because they’re tightly stuck,” says McClements.

Experts agree that eliminating beany and other off flavors is a good thing. But there’s less agreement on whether developers need to actively make a plant-based egg taste more like a real egg. “That’s actually a polarizing question,” says Jones.

Much of an egg’s flavor comes from sulfur compounds that aren’t necessarily pleasing to consumers. “An egg tastes a certain way because it’s releasing sulfur as it decays,” says Jones. When tasters were asked to compare Eat Just’s egg-free mayonnaise against the traditional, real-egg version, he notes, “at least 50 percent didn’t like the sulfur flavor of a true-egg mayo.”

That poses a quandary for developers. “Should it have a sulfur flavor, or should it have its own point of view, a flavor that our chefs develop? We don’t have an answer yet,” Jones says. Even for something like an omelet, he says, developers could aim for “a neutral spot where whatever seasoning you add is what you’re going to taste.”

As food technologists work to overcome these challenges, plant-based eggs are likely to get better and better. But the ultimate goal might be to surpass, not merely match, the performance of real eggs. Already, McClements and his colleagues have experimented with adding lutein, a nutrient important for eye health, to oil droplets in plant-based egg yolks.

In the future, scientists could adjust the amino acid composition of proteins or boost the calcium or iron content in plant-based eggs to match nutritional needs. “We ultimately could engineer something that’s way healthier than what’s available now,” says Bianca Datta, a food scientist at the Good Food Institute, an international nonprofit that supports the development of plant-based foods. “We’re just at the beginning of seeing what’s possible.”

This story originally appeared in Knowable Magazine.


Knowable Magazine explores the real-world significance of scholarly work through a journalistic lens.


the-key-moment-came-38-minutes-after-starship-roared-off-the-launch-pad

The key moment came 38 minutes after Starship roared off the launch pad


SpaceX wasn’t able to catch the Super Heavy booster, but Starship is on the cusp of orbital flight.

The sixth flight of Starship lifts off from SpaceX’s Starbase launch site at Boca Chica Beach, Texas. Credit: SpaceX.

SpaceX launched its sixth Starship rocket Tuesday, proving for the first time that the stainless steel ship can maneuver in space and paving the way for an even larger, upgraded vehicle slated to debut on the next test flight.

The only hiccup was an abortive attempt to catch the rocket’s Super Heavy booster back at the launch site in South Texas, something SpaceX achieved on the previous flight on October 13. The Starship upper stage flew halfway around the world, reaching an altitude of 118 miles (190 kilometers) before plunging through the atmosphere for a pinpoint slow-speed splashdown in the Indian Ocean.

The sixth flight of the world’s largest launcher—standing 398 feet (121.3 meters) tall—began with a lumbering liftoff from SpaceX’s Starbase facility near the US-Mexico border at 4 pm CST (22:00 UTC) Tuesday. The rocket headed east over the Gulf of Mexico, propelled by 33 Raptor engines clustered on the bottom of its Super Heavy first stage.

A few miles away, President-elect Donald Trump joined SpaceX founder Elon Musk to witness the launch. The SpaceX boss became one of Trump’s closest allies in this year’s presidential election, giving the world’s richest man extraordinary influence in US space policy. Sen. Ted Cruz (R-Texas) was there, too, among other lawmakers. Gen. Chance Saltzman, the top commander in the US Space Force, stood nearby, chatting with Trump and other VIPs.

Elon Musk, SpaceX’s CEO, President-elect Donald Trump, and Gen. Chance Saltzman of the US Space Force watch the sixth launch of Starship Tuesday. Credit: Brandon Bell/Getty Images

From their viewing platform, they watched Starship climb into a clear autumn sky. At full power, the 33 Raptors chugged more than 40,000 pounds of super-cold liquid methane and liquid oxygen per second. The engines generated 16.7 million pounds of thrust, 60 percent more than the Soviet N1, the second-largest rocket in history.

Eight minutes later, the rocket’s upper stage, itself also known as Starship, was in space, completing the program’s fourth straight near-flawless launch. The first two test flights faltered before reaching their planned trajectory.

A brief but crucial demo

As exciting as it was, we’ve seen all that before. One of the most important new things engineers wanted to test on this flight occurred about 38 minutes after liftoff.

That’s when Starship reignited one of its six Raptor engines for a brief burn to make a slight adjustment to its flight path. The burn lasted only a few seconds, and the impulse was small—just a 48 mph (77 km/hour) change in velocity, or delta-V—but it demonstrated that the ship can safely deorbit itself on future missions.
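The vis-viva equation gives a feel for how little delta-V a deorbit burn needs. The sketch below assumes an idealized 190 km circular orbit being lowered to a perigee at the surface; Flight 6 was actually on a suborbital trajectory, which is why its burn could be smaller still.

```python
import math

MU = 3.986e14      # Earth's gravitational parameter, m^3/s^2
R_EARTH = 6.371e6  # mean Earth radius, m

def visviva(r, a):
    """Orbital speed at radius r for an orbit with semi-major axis a."""
    return math.sqrt(MU * (2 / r - 1 / a))

r_orbit = R_EARTH + 190e3  # 190 km altitude, matching the flight's apogee
v_circular = visviva(r_orbit, r_orbit)                  # circular orbit speed
v_deorbit = visviva(r_orbit, (r_orbit + R_EARTH) / 2)   # perigee lowered to surface

dv = v_circular - v_deorbit
print(f"deorbit delta-V: {dv:.0f} m/s (~{dv * 3.6:.0f} km/h)")
```

Even from a fully circular orbit, the burn works out to a few tens of meters per second — a tiny nudge against an orbital speed of nearly 7.8 km/s, consistent with the brief, small-impulse burn described above.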

With this achievement, Starship will likely soon be cleared to travel into orbit around Earth and deploy Starlink Internet satellites or conduct in-space refueling experiments, two of the near-term objectives on SpaceX’s Starship development roadmap.

Launching Starlinks aboard Starship will allow SpaceX to expand the capacity and reach of its commercial consumer broadband network, which, in turn, provides revenue for Musk to reinvest into Starship. Orbital refueling enables Starship voyages beyond low-Earth orbit, fulfilling SpaceX’s multibillion-dollar contract with NASA to provide a human-rated Moon lander for the agency’s Artemis program. Likewise, transferring cryogenic propellants in orbit is a prerequisite for sending Starships to Mars, making real Musk’s dream of creating a settlement on the red planet.

Artist’s illustration of Starship on the surface of the Moon. Credit: SpaceX

Until now, SpaceX has intentionally launched Starships to speeds just shy of the blistering velocities needed to maintain orbit. Engineers wanted to test the Raptor’s ability to reignite in space on the third Starship test flight in March, but the ship lost control of its orientation, and SpaceX canceled the engine firing.

Before going for a full orbital flight, officials needed to confirm that Starship could steer itself back into the atmosphere for reentry, ensuring it wouldn’t present any risk to the public with an unguided descent over a populated area. After Tuesday, SpaceX can check this off its to-do list.

“Congrats to SpaceX on Starship’s sixth test flight,” NASA Administrator Bill Nelson posted on X. “Exciting to see the Raptor engine restart in space—major progress towards orbital flight. Starship’s success is Artemis’ success. Together, we will return humanity to the Moon & set our sights on Mars.”

While it lacks the pizzazz of a fiery launch or landing, the engine relight unlocks a new phase of Starship development. SpaceX has now proven that the rocket is capable of reaching space with a fair measure of reliability. Next, engineers will fine-tune how to reliably recover the booster and the ship and learn how to use them.

Acid test

SpaceX appears well on its way to doing this. While SpaceX didn’t catch the Super Heavy booster with the launch tower’s mechanical arms Tuesday, engineers have shown they can do it. The challenge of catching Starship itself back at the launch pad is more daunting. The ship starts its reentry thousands of miles from Starbase, traveling approximately 17,000 mph (27,000 km/hour), and must thread the gap between the tower’s catch arms within a matter of inches.

The good news is that SpaceX has now twice proven it can bring Starship back to a precision splashdown in the Indian Ocean. In October, the ship settled into the sea in darkness. SpaceX moved the launch time for Tuesday’s flight to the late afternoon, setting up for splashdown shortly after sunrise northwest of Australia.

The shift in time paid off with some stunning new visuals. Cameras mounted on the outside of Starship beamed dazzling live views back to SpaceX through the Starlink network, showing a now-familiar glow of plasma encasing the spacecraft as it plowed deeper into the atmosphere. But this time, daylight revealed the ship’s flaps moving to control its belly-first descent toward the ocean. After passing through a deck of low clouds, Starship reignited its Raptor engines and tilted from horizontal to vertical, making contact with the water tail-first within view of a floating buoy and a nearby aircraft in position to observe the moment.

Here’s a replay of the spacecraft’s splashdown around 65 minutes after launch.

Splashdown confirmed! Congratulations to the entire SpaceX team on an exciting sixth flight test of Starship! pic.twitter.com/bf98Va9qmL

— SpaceX (@SpaceX) November 19, 2024

The ship made it through reentry despite flying with a substandard heat shield. Starship’s thermal protection system is made up of thousands of ceramic tiles to protect the ship from temperatures as high as 2,600° Fahrenheit (1,430° Celsius).

Kate Tice, a SpaceX engineer hosting the company’s live broadcast of the mission, said teams at Starbase removed 2,100 heat shield tiles from Starship ahead of Tuesday’s launch. Their removal exposed wider swaths of the ship’s stainless steel skin to super-heated plasma, and SpaceX teams were eager to see how well the spacecraft held up during reentry. In the language of flight testing, this approach is called exploring the corners of the envelope, where engineers evaluate how a new airplane or rocket performs in extreme conditions.

“Don’t be surprised if we see some wackadoodle stuff happen here,” Tice said. There was nothing of the sort. One of the ship’s flaps appeared to suffer some heating damage, but it remained intact and functional, and the harm looked to be less substantial than damage seen on previous flights.

Many of the removed tiles came from the sides of Starship where SpaceX plans to place catch fittings on future vehicles. These are the hardware protuberances that will catch on the top side of the launch tower’s mechanical arms, similar to fittings used on the Super Heavy booster.

“The next flight, we want to better understand where we can install catch hardware, not necessarily to actually do the catch but to see how that hardware holds up in those spots,” Tice said. “Today’s flight will help inform ‘does the stainless steel hold up like we think it may, based on experiments that we conducted on Flight 5?'”

Musk wrote on his social media platform X that SpaceX could try to bring Starship back to Starbase for a catch on the eighth test flight, which is likely to occur in the first half of 2025.

“We will do one more ocean landing of the ship,” Musk said. “If that goes well, then SpaceX will attempt to catch the ship with the tower.”

The heat shield, Musk added, is a focal point of SpaceX’s attention. The delicate heat-absorbing tiles used on the belly of the space shuttle proved vexing to NASA technicians. Early in the shuttle’s development, NASA had trouble keeping tiles adhered to the shuttle’s aluminum skin. Each of the shuttle tiles was custom-machined to fit on a specific location on the orbiter, complicating refurbishment between flights. Starship’s tiles are all hexagonal in shape and agnostic to where technicians place them on the vehicle.

“The biggest technology challenge remaining for Starship is a fully & immediately reusable heat shield,” Musk wrote on X. “Being able to land the ship, refill propellant & launch right away with no refurbishment or laborious inspection. That is the acid test.”

This photo of the Starship vehicle for Flight 6, numbered Ship 31, shows exposed portions of the vehicle’s stainless steel skin after tile removal. Credit: SpaceX

There were no details available Tuesday night on what caused the Super Heavy booster to divert from its planned catch on the launch tower. After detaching from the Starship upper stage less than three minutes into the flight, the booster reversed course to begin the journey back to Starbase.

Then SpaceX’s flight director announced the rocket would fly itself into the Gulf rather than back to the launch site: “Booster offshore divert.”

The booster finished its descent with a seemingly perfect landing burn using a subset of its Raptor engines. As expected after the water landing, the booster—itself 233 feet (71 meters) tall—toppled and broke apart in a dramatic fireball visible to onshore spectators.

In an update posted to its website after the launch, SpaceX said automated health checks of hardware on the launch and catch tower triggered the aborted catch attempt. The company did not say what system failed the health check. As a safety measure, SpaceX must send a manual command for the booster to come back to land in order to prevent a malfunction from endangering people or property.

Turning it up to 11

There will be plenty more opportunities for more booster catches in the coming months as SpaceX ramps up its launch cadence at Starbase. Gwynne Shotwell, SpaceX’s president and chief operating officer, hinted at the scale of the company’s ambitions last week.

“We just passed 400 launches on Falcon, and I would not be surprised if we fly 400 Starship launches in the next four years,” she said at the Baron Investment Conference.

The next batch of test flights will use an improved version of Starship designated Block 2, or V2. Starship Block 2 comes with larger propellant tanks, redesigned forward flaps, and a better heat shield.

The new-generation Starship will hold more than 11 million pounds of fuel and oxidizer, about a million pounds more than the capacity of Starship Block 1. The booster and ship will produce more thrust, and Block 2 will measure 408 feet (124.4 meters) tall, stretching the height of the full stack by a little more than 10 feet.

Put together, these modifications should give Starship the ability to heave a payload of up to 220,000 pounds (100 metric tons) into low-Earth orbit, about twice the carrying capacity of the first-generation ship. Further down the line, SpaceX plans to introduce Starship Block 3 to again double the ship’s payload capacity.

Just as importantly, these changes are designed to make it easier for SpaceX to recover and reuse the Super Heavy booster and Starship upper stage. SpaceX’s goal of fielding a fully reusable launcher builds on the partial reuse SpaceX pioneered with its Falcon 9 rocket. This should dramatically bring down launch costs, according to SpaceX’s vision.

With Tuesday’s flight, it’s clear Starship works. Now it’s time to see what it can do.

Updated with additional details, quotes, and images.


Stephen Clark is a space reporter at Ars Technica, covering private space companies and the world’s space agencies. Stephen writes about the nexus of technology, science, policy, and business on and off the planet.
