Science

Runaway black hole mergers may have built supermassive black holes

The researchers used cosmological simulations to recreate the first 700 million years of cosmic history, focusing on the formation of a single dwarf galaxy. In their virtual galaxy, waves of stars were born in short, explosive bursts as cold gas clouds collapsed inside a dark matter halo. Instead of a single starburst episode followed by a steady drizzle of star formation as Garcia expected, there were two major rounds of stellar birth. Whole swarms of stars flared to life like Christmas tree lights.

“The early Universe was an incredibly crowded place,” Garcia said. “Gas clouds were denser, stars formed faster, and in those environments, it’s natural for gravity to gather stars into these tightly bound systems.”

Those clusters started out scattered around the galaxy but fell in toward the center like water swirling down a drain. Once there, they merged to create one megacluster, called a nuclear star cluster (so named because it lies at the nucleus of the galaxy). The young galactic heart shone with the light of a million suns and may have set the stage for a supermassive black hole to form.

A simulation of the formation of the super-dense star clusters.

A seemingly simple tweak was needed to make the simulation more precise than previous ones. “Most simulations simplify things to make calculations more practical, but then you sacrifice realism,” Garcia said. “We used an improved model that allowed star formation to vary depending on local conditions rather than just go at a constant rate like with previous models.”
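
For a sense of the distinction, here is a minimal sketch, with made-up numbers and a placeholder functional form rather than the study’s actual prescription:

```python
import numpy as np

# Toy comparison of the two approaches. The threshold, slope, and ceiling
# here are illustrative placeholders, not values from the study.

def sfe_constant(density):
    """Old-style models: a fixed ~2 percent of gas becomes stars."""
    return np.full_like(density, 0.02)

def sfe_local(density, rho_crit=1e3, max_eff=0.8):
    """Density-dependent efficiency that climbs toward ~80 percent
    in the dense gas clouds common in the early Universe."""
    return max_eff * density / (density + rho_crit)

densities = np.array([1e1, 1e2, 1e3, 1e4, 1e5])  # gas density, arbitrary units
print(sfe_constant(densities))  # [0.02 0.02 0.02 0.02 0.02]
print(sfe_local(densities))     # rises from under 1 percent toward ~0.79
```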

Using the University of Maryland’s supercomputing facility Zaratan, Garcia accomplished in six months what would have taken 12 years on a MacBook.

Some clouds converted as much as 80 percent of their gas into stars—a ferocious rate compared to the 2 percent typically seen in nearby galaxies today. The clouds sparkled to life, becoming clusters of newborn stars held together by their mutual gravity and lighting a new pathway for supermassive black holes to form extremely early in the Universe.

Chicken or egg?

Most galaxies, including our own, are anchored by a nuclear star cluster nestled around a supermassive black hole. But the connection between the two has been a bit murky—did the monster black hole form and then draw stars close, or did the cluster itself give rise to the black hole?

Here’s how orbital dynamics wizardry helped save NASA’s next Mars mission


Blue Origin is counting down to the launch of its second New Glenn rocket on Sunday.

The New Glenn rocket rolls to Launch Complex-36 in preparation for liftoff this weekend. Credit: Blue Origin

CAPE CANAVERAL, Florida—The field of astrodynamics isn’t a magical discipline, but sometimes it seems trajectory analysts can pull a solution out of a hat.

That’s what it took to save NASA’s ESCAPADE mission from a lengthy delay, and possible cancellation, after its rocket wasn’t ready to send it toward Mars during its appointed launch window last year. ESCAPADE, short for Escape and Plasma Acceleration and Dynamics Explorers, consists of two identical spacecraft setting off for the red planet as soon as Sunday with a launch aboard Blue Origin’s massive New Glenn rocket.

“ESCAPADE is pursuing a very unusual trajectory in getting to Mars,” said Rob Lillis, the mission’s principal investigator from the University of California, Berkeley. “We’re launching outside the typical Hohmann transfer windows, which occur every 25 or 26 months. We are using a very flexible mission design approach where we go into a loiter orbit around Earth in order to sort of wait until Earth and Mars are lined up correctly in November of next year to go to Mars.”
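
That 25-or-26-month cadence is the Earth-Mars synodic period, which follows directly from the two planets’ orbital periods. A quick back-of-the-envelope check:

```python
# Synodic period: how often Earth and Mars return to the same relative
# alignment, which sets the spacing of Hohmann transfer windows.
T_earth = 365.25  # days
T_mars = 686.98   # days

synodic = 1 / abs(1 / T_earth - 1 / T_mars)
print(f"{synodic:.0f} days ≈ {synodic / 30.44:.1f} months")
# -> 780 days ≈ 25.6 months
```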

This wasn’t the original plan. When it was first designed, ESCAPADE was supposed to take a direct course from Earth to Mars, a transit that typically takes six to nine months. But ESCAPADE will now depart the Earth when Mars is more than 220 million miles away, on the opposite side of the Solar System.

The payload fairing of Blue Origin’s New Glenn rocket, containing NASA’s two Mars-bound science probes. Credit: Blue Origin

The most recent Mars launch window was last year, and the next one doesn’t come until the end of 2026. The planets are not currently in alignment, and the proverbial stars didn’t align to get the ESCAPADE satellites and their New Glenn rocket to the launch pad until this weekend.

This is fine

But there are several reasons this is perfectly OK with NASA. The New Glenn rocket is overkill for this mission. The two-stage launcher could send many tons of cargo to Mars, but NASA is only asking it to dispatch about a ton of payload, comprising a pair of identical science probes designed to study how the planet’s upper atmosphere interacts with the solar wind.

But NASA got a good deal from Blue Origin. The space agency is paying Jeff Bezos’ space company about $20 million for the launch, less than it would pay for a dedicated launch on any other rocket capable of sending the ESCAPADE mission to Mars. In exchange, NASA is accepting a greater than usual chance of a launch failure. This is, after all, just the second flight of the 321-foot-tall (98-meter) New Glenn rocket, which hasn’t yet been certified by NASA or the US Space Force.

The ESCAPADE mission itself was developed with a modest budget, at least by the standards of interplanetary exploration. The mission’s total cost amounts to less than $80 million, an order of magnitude lower than that of any of NASA’s recent Mars missions. NASA officials would not entrust the second flight of the New Glenn rocket to launch a billion-dollar spacecraft, but the risk calculation changes as costs go down.

NASA knew all of this in 2023 when it signed a launch contract with Blue Origin for the ESCAPADE mission. What officials didn’t know was that the New Glenn rocket wouldn’t be ready to fly when ESCAPADE needed to launch in late 2024. It turned out Blue Origin didn’t launch the first New Glenn test flight until January of this year. It was a success. It took another 10 months for engineers to get the second New Glenn vehicle to the launch pad.

The twin ESCAPADE spacecraft undergoing final preparations for launch. Each spacecraft is about a half-ton fully fueled. Credit: NASA/Kim Shiflett

Aiming high

That’s where the rocket sits this weekend at Cape Canaveral Space Force Station, Florida. If all goes according to plan, New Glenn will take off Sunday afternoon during an 88-minute launch window opening at 2:45 pm EST (19:45 UTC). There is a 65 percent chance of favorable weather, according to Blue Origin.

Blue Origin’s launch team, led by launch director Megan Lewis, will oversee the countdown Sunday. The rocket will be filled with super-cold liquid methane and liquid oxygen propellants beginning about four-and-a-half hours prior to liftoff. After some final technical and weather checks, the terminal countdown sequence will commence at T-minus 4 minutes, culminating in ignition of the rocket’s seven BE-4 main engines at T-minus 5.6 seconds.

The rocket’s flight computer will assess the health of each of the powerful engines, which together will generate more than 3.8 million pounds of thrust. If all looks good, hold-down restraints will release to allow the New Glenn rocket to begin its ascent from Florida’s Space Coast.

Heading east, the rocket will surpass the speed of sound in a little over a minute. After soaring through the stratosphere, New Glenn will shut down its seven booster engines and shed its first stage a little more than 3 minutes into the flight. Twin BE-3U engines, burning liquid hydrogen, will ignite to finish the job of sending the ESCAPADE satellites toward deep space. The rocket’s trajectory will send the satellites toward a gravitationally stable location beyond the Moon, called the L2 Lagrange point, where they will swing into a loosely bound loiter orbit to wait for the right time to head for Mars.

Meanwhile, the New Glenn booster, itself measuring nearly 20 stories tall, will begin maneuvers to head toward Blue Origin’s recovery ship floating a few hundred miles downrange in the Atlantic Ocean. The final part of the descent will include a landing burn using three of the BE-4 engines, then downshifting to a single engine to control the booster’s touchdown on the landing platform, dubbed “Jacklyn” in honor of Bezos’ late mother.

The launch timeline for New Glenn’s second mission. Credit: Blue Origin

New Glenn’s inaugural launch at the start of this year was a success, but the booster’s descent did not go well. The rocket was unable to restart its engines, and it crashed into the sea.

“We’ve incorporated a number of changes to our propellant management system, some minor hardware changes as well, to increase our likelihood of landing that booster on this mission,” said Laura Maginnis, Blue Origin’s vice president of New Glenn mission management. “That was the primary schedule driver that kind of took us from January to where we are today.”

Blue Origin officials are hopeful they can land the booster this time. The company’s optimism is enough for officials to have penciled in a reflight of this particular booster on the very next New Glenn launch, slated for the early months of next year. That launch is due to send Blue Origin’s first Blue Moon cargo lander to the Moon.

“Our No. 1 objective is to deliver ESCAPADE safely and successfully on its way to L2, and then eventually on to Mars,” Maginnis said in a press conference Saturday. “We also are planning and wanting to land our booster. If we don’t land the booster, that’s OK. We have several more vehicles in production. We’re excited to see how the mission plays out tomorrow.”

Tracing a kidney bean

ESCAPADE’s path through space, relative to the Earth, has the peculiar shape of a kidney bean. In the world of astrodynamics, this is called a staging or libration orbit. It’s a way to keep the spacecraft on a stable trajectory to wait for the opportunity to go to Mars late next year.

“ESCAPADE has identified that this is the way that we want to fly, so we launch from Earth onto this kidney bean-shaped orbit,” said Jeff Parker, a mission designer from the Colorado-based company Advanced Space. “So, we can launch on virtually any day. What happens is that kidney bean just grows and shrinks based on how much time you need to spend in that orbit. So, we traverse that kidney bean and at the very end there’s a final little loop-the-loop that brings us down to Earth.”

That’s when the two ESCAPADE spacecraft, known as Blue and Gold, will pass a few hundred miles above our planet. At the right moment, on November 7 and 9 of next year, the satellites will fire their engines to set off for Mars.

An illustration of ESCAPADE’s trajectory to wait for the opportunity to go to Mars. Credit: UC-Berkeley

There are some tradeoffs with this unique staging orbit. It is riskier than the original plan of sending ESCAPADE straight to Mars. The satellites will be exposed to more radiation, and will consume more of their fuel just to get to the red planet, eating into reserves originally set aside for science observations.

The satellites were built by Rocket Lab, which designed them with extra propulsion capacity in order to accommodate launches on a variety of different rockets. In the end, NASA “judged that the risk for the mission was acceptable, but it certainly is higher risk,” said Richard French, Rocket Lab’s vice president of business development and strategy.

The upside of the tradeoff is it will demonstrate an “exciting and flexible way to get to Mars,” Lillis said. “In the future, if we’d like to send hundreds of spacecraft to Mars at once, it will be difficult to do that from just the launch pads we have on Earth within that month [of the interplanetary launch window]. We could potentially queue up spacecraft using the approach that ESCAPADE is pioneering.”

Stephen Clark is a space reporter at Ars Technica, covering private space companies and the world’s space agencies. Stephen writes about the nexus of technology, science, policy, and business on and off the planet.

10,000 generations of hominins used the same stone tools to weather a changing world

“This site reveals an extraordinary story of cultural continuity,” said Braun in a recent press release.

When the going gets tough, the tough make tools

Nomorotukunan’s layers of stone tools span the transition from the Pliocene to the Pleistocene, during which Earth’s climate turned gradually cooler and drier after a 2 to 3 million-year warm spell. Pollen and other microscopic traces of plants in the sediment at Nomorotukunan tell the tale: the lakeshore marsh gradually dried up, giving way to arid grassland dotted with shrubs. On a shorter timescale, hominins at Nomorotukunan faced wildfires (based on microcharcoal in the sediments), droughts, and rivers drying up or changing course.

“As vegetation shifted, the toolmaking remained steady,” said National Museums of Kenya archaeologist Rahab N. Kinyanjui in a recent press release. “This is resilience.”

Making sharp stone tools may have helped generations of hominins survive their changing, drying world. In the warm, humid Pliocene, finding food would have been relatively easy, but as conditions got tougher, hominins probably had to scavenge or dig for their meals. At least one animal bone at Nomorotukunan bears cut marks where long-ago hominins carved up the carcass for meat—something our lineage isn’t really equipped to do with its bare hands and teeth. Tools also would have enabled early hominins to dig up and cut tubers or roots.

Sharpened wooden sticks probably also played a role in that particular work, but wood doesn’t tend to last as long as stone in the archaeological record, so we can’t say for sure. What is certain are the stone tools and cut bones, which hint at what Utrecht University archaeologist Dan Rolier, a coauthor of the paper, calls “one of our oldest habits: using technology to steady ourselves against change.”

A tale as old as time

Nomorotukunan may hint that Oldowan technology is even older than the earliest tools archaeologists have unearthed so far. The oldest tools unearthed from the deepest layer at Nomorotukunan are the work of skilled flint-knappers who understood where to strike a stone, and at exactly which angle, to flake off the right shape. They also clearly knew how to select the right stones for the job (fine-grained chalcedony for the win, in this case). In other words, these tools weren’t the work of a bunch of hominins who were just figuring out, for the first time, how to bang the rocks together.

Rocket Report: Canada invests in sovereign launch; India flexes rocket muscles


Europe’s Ariane 6 rocket gave an environmental monitoring satellite a perfect ride to space.

Rahul Goel, the CEO of Canadian launch startup NordSpace, poses with a suborbital demo rocket and members of his team in Toronto earlier this year. Credit: Andrew Francis Wallace/Toronto Star via Getty Images

Welcome to Edition 8.18 of the Rocket Report! NASA is getting a heck of a deal from Blue Origin for launching the agency’s ESCAPADE mission to Mars. Blue Origin is charging NASA about $20 million for the launch on the company’s heavy-lift New Glenn rocket. A dedicated ride on any other rocket capable of the job would undoubtedly cost more.

But there are trade-offs. First, there’s the question of risk. The New Glenn rocket is only making its second flight, and it hasn’t been certified by NASA or the US Space Force. Second, the schedule for ESCAPADE’s launch has been at the whim of Blue Origin, which has delayed the mission several times due to issues developing New Glenn. NASA’s interplanetary missions typically have a fixed launch period, and the agency pays providers like SpaceX and United Launch Alliance a premium to ensure the launch happens when it needs to happen.

New Glenn is ready, the satellites are ready, and Blue Origin has set a launch date for Sunday, November 9. The mission will depart Earth outside of the usual interplanetary launch window, so orbital dynamics wizards came up with a unique trajectory that will get the satellites to Mars in 2027.

As always, we welcome reader submissions. If you don’t want to miss an issue, please subscribe using the box below (the form will not appear on AMP-enabled versions of the site). Each report will include information on small-, medium-, and heavy-lift rockets, as well as a quick look ahead at the next three launches on the calendar.

Canadian government backs launcher development. The federal budget released by the Liberal Party-led government of Canada this week includes a raft of new defense initiatives, including 182.6 million Canadian dollars ($129.4 million) for sovereign space launch capability, SpaceQ reports. The new funding is meant to “establish a sovereign space launch capability” with funds available this fiscal year and spent over three years. Details of how and on what the money will be spent have yet to be released. As anticipated, Canada will have a new Defense Investment Agency (DIA) to oversee defense procurement. Overall, the government outlined 81.8 billion Canadian dollars ($58 billion) over five years for the Canadian Armed Forces. The Department of National Defense will manage the government’s cash infusion for sovereign launch capability.

Kick-starting an industry … Canada joins a growing list of nations pursuing homegrown launchers as many governments see access to space as key to national security and an opportunity for economic growth. International governments don’t want to be beholden to a small number of foreign launch providers from established space powers. That’s why startups in Germany, the United Kingdom, South Korea, and Australia are making a play in the launch arena, often with government support. A handful of Canadian startups, such as Maritime Launch Services, Reaction Dynamics, and NordSpace, are working on commercial satellite launchers. The Canadian government’s announcement came days after MDA Space, the largest established space company in Canada, announced its own multimillion-dollar investment in Maritime Launch Services.

Money alone won’t solve Europe’s space access woes. Increasing tensions with Russia have prompted defense spending boosts throughout Europe that will benefit fledgling smallsat launcher companies across the continent. But Europe is still years away from meeting its own space access needs, Space News reports. Space News spoke with industry analysts from two European consulting firms. They concluded that a lack of experience, not a deficit of money, is holding European launch startups back. None of the new crop of European rocket companies have completed a successful orbital flight.

Swimming in cash … The German company Isar Aerospace has raised approximately $600 million, the most funding of any of the European launch startups. Isar is also the only one of the bunch to make an orbital launch attempt. Its Spectrum rocket failed less than 30 seconds after liftoff last March, and a second launch is expected next year. Isar has attracted more investment than Rocket Lab, Firefly Aerospace, and Astra collectively raised on the private market before each of them successfully launched a rocket into orbit. In addition to Isar, several other European companies have raised more than $100 million on the road to developing a small satellite launcher. (submitted by EllPeaTea)

Successful ICBM test from Vandenberg. Air Force Global Strike Command tested an unarmed Minuteman III intercontinental ballistic missile in the predawn hours of Wednesday, Air and Space Forces Magazine reports. The test, the latest in a series of launches that have been carried out at regular intervals for decades, came as Russian President Vladimir Putin has touted the development of two new nuclear weapons and President Donald Trump has suggested in recent days that the US might resume nuclear testing. The ICBM launched from an underground silo at Vandenberg Space Force Base, California, and traveled some 4,200 miles to a test range in the Pacific Ocean after receiving launch orders from an airborne nuclear command-and-control plane.

Rehearsing for the unthinkable … The test, known as Glory Trip 254 (GT 254), provided a “comprehensive assessment” of the Minuteman III’s readiness to launch at a moment’s notice, according to the Air Force. “The data collected during the test is invaluable in ensuring the continued reliability and accuracy of the ICBM weapon system,” said Lt. Col. Karrie Wray, commander of the 576th Flight Test Squadron. For Minuteman III tests, the Air Force pulls its missiles from the fleet of some 400 operational ICBMs. This week’s test used one from F.E. Warren Air Force Base, Wyoming, and the missile was equipped with a single unarmed reentry vehicle that carried telemetry instrumentation instead of a warhead, service officials said. (submitted by EllPeaTea)

One crew launches, another may be stranded. Three astronauts launched to China’s Tiangong space station on October 31 and arrived at the outpost a few hours later, extending the station’s four-year streak of continuous crew operations. The Shenzhou 21 crew spacecraft lifted off on a Chinese Long March 2F rocket from the Jiuquan space center in the Gobi Desert. Shenzhou 21 is supposed to replace a three-man crew that has been on the Tiangong station since April, but China’s Manned Space Agency announced Tuesday the outgoing crew’s return craft may have been damaged by space junk, Ars reports.

Few details … Chinese officials said the Shenzhou 20 spacecraft will remain at the station while engineers investigate the potential damage. As of Thursday, China has not set a new landing date or declared whether the spacecraft is safe to return to Earth at all. “The Shenzhou 20 manned spacecraft is suspected of being impacted by small space debris,” Chinese officials wrote on social media. “Impact analysis and risk assessment are underway. To ensure the safety and health of the astronauts and the complete success of the mission, it has been decided that the Shenzhou 20 return mission, originally scheduled for November 5, will be postponed.” In the event Shenzhou 20 is unsafe to return, China could launch a rescue craft—Shenzhou 22—already on standby at the Jiuquan space center.

Falcon 9 rideshare boosts Vast ambitions. A pathfinder mission for Vast’s privately owned space station launched into orbit Sunday and promptly extended its solar panel, kicking off a shakedown cruise to prove the company’s designs can meet the demands of spaceflight, Ars reports. Vast’s Haven Demo mission lifted off just after midnight Sunday from Cape Canaveral Space Force Station, Florida, and rode a SpaceX Falcon 9 rocket into orbit. Haven Demo was one of 18 satellites sharing a ride on SpaceX’s Bandwagon 4 mission, launching alongside a South Korean spy satellite and a small testbed for Starcloud, a startup working with Nvidia to build an orbital data center.

Subscale testing … After release from the Falcon 9, the half-ton Haven Demo spacecraft stabilized itself and extended its power-generating solar array. The satellite captured 4K video of the solar array deployment, and Vast shared the beauty shot on social media. “Haven Demo’s mission success has turned us into a proven spacecraft company,” Vast’s CEO, Max Haot, posted on X. “The next step will be to become an actual commercial space station company next year. Something no one has achieved yet.” Vast plans to launch its first human-rated habitat, named Haven-1, into low-Earth orbit in 2026. Haven Demo lacks crew accommodations but carries several systems that are “architecturally similar” to Haven-1, according to Vast. For example, Haven-1 will have 12 solar arrays, each identical to the single array on Haven Demo. The pathfinder mission uses a subset of Haven-1’s propulsion system, but with identical thrusters, valves, and tanks.

Lights out at Vostochny. One of Russia’s most important projects over the last 15 years has been the construction of the Vostochny spaceport as the country seeks to fly its rockets from native soil and modernize its launch operations. Progress has been slow as corruption clouded Vostochny’s development. Now, the primary contractor building the spaceport, the Kazan Open Stock Company (PSO Kazan), has failed to pay its bills, Ars reports. The story, first reported by the Moscow Times, says that the energy company supplying Vostochny cut off electricity to areas of the spaceport still under construction after PSO Kazan racked up $627,000 in unpaid energy charges. The electricity company did so, it said, “to protect the interests of the region’s energy system.”

A dark reputation … Officials at the government-owned spaceport said PSO Kazan would repay its debt by the end of November, but the local energy company said it intends to file a lawsuit against PSO Kazan to declare the entity bankrupt. The two operational launch pads at Vostochny are apparently not affected by the power cuts. Vostochny has been a fiasco from the start. After construction began in 2011, the project was beset by hunger strikes, claims of unpaid workers, and the theft of $126 million. Additionally, a man driving a diamond-encrusted Mercedes was arrested after embezzling $75,000. Five years ago, there was another purge of top officials after another round of corruption.

Ariane 6 delivers for Europe again. European launch services provider Arianespace has successfully launched the Sentinel 1D Earth observation satellite aboard an Ariane 62 rocket for the European Commission, European Spaceflight reports. Launched in its two-booster configuration, the Ariane 6 rocket lifted off from the Guiana Space Center in South America on Tuesday. Approximately 34 minutes after liftoff, the satellite was deployed from the rocket’s upper stage into a Sun-synchronous orbit at an altitude of 693 kilometers (430 miles). Sentinel 1D is the newest spacecraft to join Europe’s Copernicus program, the world’s most expansive network of environmental monitoring satellites. The new satellite will extend Europe’s record of global around-the-clock radar imaging, revealing information about environmental disasters, polar ice cover, and the use of water resources.

Doubling cadence … This was the fourth flight of Europe’s new Ariane 6 rocket, and its third operational launch. Arianespace plans one more Ariane 62 launch to close out the year with a pair of Galileo navigation satellites. The company aims to double its Ariane 6 launch cadence in 2026, with between six and eight missions planned, according to David Cavaillès, Arianespace’s CEO. The European launch provider will open its 2026 manifest with the first flight of the more powerful four-booster variant of the rocket. If the company does manage eight Ariane 6 flights in 2026, it will already be close to reaching the stated maximum launch cadence of between nine and 10 flights per year.

India sets its own record for payload mass. The Indian Space Research Organization on Sunday successfully launched the Indian Navy’s advanced communication satellite GSAT-7R, or CMS-03, on an LVM3 rocket from the Satish Dhawan Space Center, The Hindu reports. The indigenously designed and developed satellite, weighing approximately 4.4 metric tons (9,700 pounds), is the heaviest satellite ever launched by an Indian rocket and marks a major milestone in strengthening the Navy’s space-based communications and maritime domain awareness.

Going heavy … The launch Sunday was India’s fourth of 2025, a decline from the country’s high-water mark of eight orbital launches in a year in 2023. The failure in May of India’s most-flown rocket, the PSLV, has contributed to this year’s slower launch cadence. India’s larger rockets, the GSLV and LVM3, have been more active while officials grounded the PSLV for an investigation into the launch failure. (submitted by EllPeaTea)

Blue Origin preps for second flight of New Glenn. The road to the second flight of Blue Origin’s heavy-lifting New Glenn rocket got a lot clearer this week. The company confirmed it is targeting Sunday, November 9, for the launch of New Glenn from Cape Canaveral Space Force Station, Florida. This follows a successful test-firing of the rocket’s seven BE-4 main engines last week, Ars reports. Blue Origin, the space company owned by billionaire Jeff Bezos, said the engines operated at full power for 22 seconds, generating nearly 3.9 million pounds of thrust on the launch pad.

Fully integrated … With the launch date approaching, engineers worked this week to attach the rocket’s payload shroud containing two NASA satellites set to embark on a journey to Mars. Now that the rocket is fully integrated, ground crews will roll it back to Blue Origin’s Launch Complex-36 (LC-36) for final countdown preps. The launch window on Sunday opens at 2:45 pm EST (19:45 UTC). Blue Origin is counting on recovering the New Glenn first stage on the next flight after missing the landing on the rocket’s inaugural mission in January. Officials plan to reuse this booster on the third New Glenn launch early next year, slated to propel Blue Origin’s first unpiloted Blue Moon lander toward the Moon.

Next three launches

Nov. 8: Falcon 9 | Starlink 10-51 | Kennedy Space Center, Florida | 08:30 UTC

Nov. 8: Long March 11H | Unknown Payload | Haiyang Spaceport, China Coastal Waters | 21:00 UTC

Nov. 9: New Glenn | ESCAPADE | Cape Canaveral Space Force Station, Florida | 19:45 UTC

James Watson, who helped unravel DNA’s double-helix, has died

James Dewey Watson, who helped reveal DNA’s double-helix structure, kicked off the Human Genome Project, and became infamous for his racist, sexist, and otherwise offensive statements, has died. He was 97.

His death was confirmed to The New York Times by his son Duncan, who said Watson died on Thursday in a hospice in East Northport, New York, on Long Island. He had previously been hospitalized with an infection. Cold Spring Harbor Laboratory also confirmed his passing.

Watson was born in Chicago in 1928 and attained scientific fame in 1953 at 25 years old for solving the molecular structure of DNA—the genetic blueprints for life—with his colleague Francis Crick at England’s Cavendish Laboratory. Their discovery heavily relied on the work of chemist and crystallographer Rosalind Franklin at King’s College in London, whose X-ray images of DNA provided critical clues to the molecule’s twisted-ladderlike architecture. One image in particular from Franklin’s lab, Photo 51, made Watson and Crick’s discovery possible. But she was not fully credited for her contribution. The image was given to Watson and Crick without Franklin’s knowledge or consent by Maurice Wilkins, a biophysicist and colleague of Franklin.

Watson, Crick, and Wilkins were awarded the Nobel Prize in Physiology or Medicine in 1962 for the discovery of DNA’s structure. By that time, Franklin had died (she died in 1958 at the age of 37 from ovarian cancer), and Nobels are not given posthumously. But Watson and Crick’s treatment of Franklin and her research has generated lasting scorn within the scientific community. Throughout his career and in his memoir, Watson disparaged Franklin’s intelligence and appearance.

The government shutdown is starting to have cosmic consequences

The federal government shutdown, now in its 38th day, prompted the Federal Aviation Administration to issue a temporary emergency order Thursday prohibiting commercial rocket launches from occurring during “peak hours” of air traffic.

The FAA also directed commercial airlines to reduce domestic flights at 40 “high impact airports” across the country in a phased approach beginning Friday. The agency said the order from the FAA’s administrator, Bryan Bedford, is aimed at addressing “safety risks and delays presented by air traffic controller staffing constraints caused by the continued lapse in appropriations.”

The government considers air traffic controllers essential workers, so they remain on the job without pay until Congress passes a federal budget and President Donald Trump signs it into law. The shutdown’s effects, felt most severely by federal workers at first, are now rippling across the broader economy.

Sharing the airspace

Vehicles traveling to and from space share the skies with aircraft, requiring close coordination with air traffic controllers to clear airspace for rocket launches and reentries. The FAA said its order restricting commercial air traffic, launches, and reentries is intended to “ensure the safety of aircraft and the efficiency of the [National Airspace System].”

In a statement explaining the order, the FAA said the air traffic control system is “stressed” due to the shutdown.

“With continued delays and unpredictable staffing shortages, which are driving fatigue, risk is further increasing, and the FAA is concerned with the system’s ability to maintain the current volume of operations,” the regulator said. “Accordingly, the FAA has determined additional mitigation is necessary.”

The FAA said that beginning Monday, commercial space launches will only be permitted between 10 pm and 6 am local time, when the national airspace is quietest. The order restricts commercial reentries to the same overnight timeframe. The FAA licenses all commercial launches and reentries.
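
The overnight restriction amounts to a window that wraps around midnight. A small sketch of that logic (illustrative only, not FAA tooling):

```python
from datetime import time

# The window crosses midnight, so it is the union of two intervals:
# [22:00, 24:00) and [00:00, 06:00) in local time.
def launch_window_open(t: time, start=time(22, 0), end=time(6, 0)) -> bool:
    return t >= start or t < end

print(launch_window_open(time(23, 30)))  # True: overnight launches may proceed
print(launch_window_open(time(14, 45)))  # False: a mid-afternoon slot is off-limits
```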

Next-generation black hole imaging may help us understand gravity better

Right now, we probably don’t have the ability to detect these small changes in phenomena. However, that may change, as a next-generation version of the Event Horizon Telescope is being considered, along with a space-based telescope that would operate on similar principles. So the team (four researchers based in Shanghai and at CERN) decided to repeat an analysis they did shortly before the Event Horizon Telescope went operational and consider whether the next-gen hardware might be able to pick up features of the environment around the black hole that might discriminate among different theorized versions of gravity.

Theorists have been busy, and there are a lot of potential replacements for general relativity out there. So, rather than working their way through the list, the researchers used a model of gravity (the parametric Konoplya–Rezzolla–Zhidenko metric) that isn’t specific to any given hypothesis. Instead, it allows some of its parameters to be changed, letting the team vary the behavior of gravity within limits. To get a sense of the sort of differences that might be present, the researchers swapped two different parameters between zero and one, giving them four different options. Those results were compared to the Kerr metric, the standard general relativity description of a rotating black hole and its event horizon.
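
Schematically, the parameter sweep looks like this (the names below are placeholders, not the paper’s notation):

```python
from itertools import product

# Two KRZ-style deformation parameters, each set to 0 or 1, give the four
# alternative configurations; the Kerr metric is the general relativity
# baseline they're compared against.
configurations = [{"a1": a1, "a2": a2} for a1, a2 in product([0, 1], repeat=2)]

for cfg in configurations:
    print("simulate accretion flow with deformations:", cfg)
print("baseline: Kerr metric (standard general relativity)")
```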

Small but clear differences

Using those five versions of gravity, they model the three-dimensional environment near the event horizon using hydrodynamic simulations, including infalling matter, the magnetic fields it produces, and the jets of matter that those magnetic fields power.

The results resemble the sorts of images that the Event Horizon Telescope produced. These include a bright ring with substantial asymmetry, where one side is significantly brighter due to the rotation of the black hole. And while the differences among the variations of gravity are subtle, they’re there. One extreme version produced the smallest but brightest ring; another had a reduced contrast between the bright and dim sides of the ring. There were also differences in the widths of the jets produced in these models.

New quantum hardware puts the mechanics in quantum mechanics


As a test case, the machine was used to test a model of superconductivity.

Quantum computers based on ions or atoms have one major advantage: The qubits themselves aren’t manufactured, so there’s no device-to-device variability. Every atom is the same and should perform similarly every time. And since the qubits can be moved around, it’s theoretically possible to entangle any atom or ion with any other in the system, allowing for a lot of flexibility in how algorithms and error correction are performed.

This combination of consistent, high-fidelity performance with all-to-all connectivity has led many key demonstrations of quantum computing to be done on trapped-ion hardware. Unfortunately, the hardware has been held back a bit by relatively low qubit counts—a few dozen compared to the hundred or more seen in other technologies. But on Wednesday, a company called Quantinuum announced a new version of its trapped-ion hardware that significantly boosts the qubit count and uses some interesting technology to manage their operation.

Trapped-ion computing

Both neutral atom and trapped-ion computers store their qubits in the spin of the nucleus. That spin is somewhat shielded from the environment by the cloud of electrons around the nucleus, giving these qubits a relatively long coherence time. While neutral atoms are held in place by a network of lasers, trapped ions are manipulated via electromagnetic control based on the ion’s charge. This means that key components of the hardware can be built using standard electronic manufacturing, although lasers are still needed for manipulations and readout.

While the electronics are static—they stay wherever they were manufactured—they can be used to move the ions around. That means that, as long as the trackways the ions move along allow it, any two ions can be brought into close proximity and entangled. This all-to-all connectivity can enable more efficient implementation of algorithms performed directly on the hardware qubits or the use of error-correction codes that require a complicated geometry of connections. That’s one reason why Microsoft used a Quantinuum machine to demonstrate an error-correction code based on a tesseract.

But arranging the trackways so that any two qubits can be next to each other can become increasingly complicated. Moving ions around is a relatively slow process, so retrieving two ions from the far ends of a chip too often can cause a system to start pushing up against the coherence time of the qubits. In the long term, Quantinuum plans to build chips with a square grid reminiscent of the street layout of many cities. But doing so will require a mastery of controlling the flow of ions through four-way intersections.

And that’s what Quantinuum is doing in part with its new chip, named Helios. It has a single intersection that couples two ion-storage areas, enabling operations as ions slosh from one end of the chip to the other. And it comes with significantly more qubits than its earlier hardware, moving from 56 to 96 qubits without sacrificing performance. “We’ve kept and actually even improved the two qubit gate fidelity,” Quantinuum VP Jenni Strabley told Ars. “So we’re not seeing any degradation in the two-qubit gate fidelity as we go to larger and larger sizes.”

Doing the loop

The image below is taken using the fluorescence of the atoms in the hardware itself. As you can see, the layout is dominated by two features: a loop at the left and two legs extending to the right. They’re connected by a four-way intersection. The Quantinuum staff described this intersection as being central to the computer’s operation.

The actual ions trace out the physical layout of the Helios system, featuring a storage ring and two legs that contain dedicated operation sites. Credit: Quantinuum

The system works by rotating the ions around the loop. As an ion reaches the intersection, the system chooses whether to kick it into one of the legs and, if so, which leg. “We spin that ring almost like a hard drive, really, and whenever the ion that we want to gate gets close to the junction, there’s a decision that happens: Either that ion goes [into the legs], or it kind of makes a little turn and goes back into the ring,” said David Hayes, Quantinuum’s director of Computational Design and Theory. “And you can make that decision with just a few electrodes that are right at that X there.”

Each leg has a region where operations can take place, so this system can ensure that the right qubits are present together in the operation zones for things like two-qubit gates. Once the operations are complete, the qubits can be moved into the leg storage regions, and new qubits can be shuffled in. When the legs fill up, the qubits can be sent back to the loop, and the process is restarted.

“You get less traffic jams if all the traffic is running one way going through the gate zones,” Hayes told Ars. “If you had to move them past each other, you would have to do kind of physical swaps, and you want to avoid that.”
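
A toy model of that junction decision, reduced to a few lines of Python (my simplification, not Quantinuum’s actual control code):

```python
from collections import deque

# Ions circulate around the ring; the electrodes at the X either divert
# the wanted ion into a leg or send it back around the loop.
ring = deque(["q0", "q1", "q2", "q3", "q4", "q5"])
legs = {"left": [], "right": []}

def divert(target, leg):
    while ring[0] != target:
        ring.rotate(-1)               # this ion takes the turn back into the ring
    legs[leg].append(ring.popleft())  # the wanted ion exits at the junction

divert("q3", "left")
divert("q1", "right")
print(list(ring), legs)  # ['q2', 'q4', 'q5', 'q0'] {'left': ['q3'], 'right': ['q1']}
```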

Obviously, issuing all the commands to control the hardware will be quite challenging for anything but the simplest operations. That puts an increasing emphasis on the compilers that add a significant layer of abstraction between what you want a quantum computer to do and the actual hardware commands needed to implement it. Quantinuum has developed its own compiler to take user-generated code and produce something that the control system can convert into the sequence of commands needed.

The control system now incorporates a real-time engine that can read data from Helios and update the commands it issues based on the state of the qubits. Quantinuum has this portion of the system running on GPUs rather than requiring customized hardware.

Quantinuum’s SDK for users is called Guppy and is based on Python, which has been modified to allow users to describe what they’d like the system to do. Helios is being accompanied by a new version of Guppy that includes some traditional programming tools like FOR loops and IF-based conditionals. These will be critical for the sorts of things we want to do as we move toward error-corrected qubits. This includes testing for errors, fixing them if they’re present, or repeatedly attempting initialization until it succeeds without error.
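
In plain Python, with made-up helper functions standing in for the real Guppy API, a repeat-until-success initialization looks something like this:

```python
import random

def try_reset() -> bool:
    """Stand-in for one hardware initialize-and-measure attempt."""
    return random.random() < 0.9  # succeeds most of the time in this toy model

def init_qubit(max_attempts: int = 10) -> int:
    # A FOR loop with an IF-based conditional: retry until the
    # initialization measurement comes back clean.
    for attempt in range(1, max_attempts + 1):
        if try_reset():
            return attempt
    raise RuntimeError("qubit failed to initialize")

print(f"initialized after {init_qubit()} attempt(s)")
```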

Hayes said the new version is also moving toward error correction. Thanks to Guppy’s ability to dynamically reassign qubits, Helios will be able to operate as a machine with 94 qubits while detecting errors on any of them. Alternatively, the 96 hardware qubits can be configured as a single unit that hosts 48 error-corrected qubits. “It’s actually a concatenated code,” Hayes told Ars. “You take two error detection codes and weave them together… it’s a single code block, but it has 48 logical qubits housed inside of it.” (Hayes said it’s a distance-four code, meaning it can fix up to two errors that occur simultaneously.)

Tackling superconductivity

While Quantinuum hardware has always had low error rates relative to most of its competitors, there was only so much you could do with 56 qubits. With 96 now at their disposal, researchers at the company decided to build a quantum implementation of a model (called the Fermi-Hubbard model) that’s meant to help study the electron pairing that takes place during the transition to superconductivity.

“There are definitely terms that the model doesn’t capture,” Quantinuum’s Henrik Dreyer acknowledged. “They neglect their electrorepulsion that [the electrons] still have—I mean, they’re still negatively charged; they are still repelling. On the other hand, I should say that this Fermi-Hubbard model—it has many of the features that a superconductor has.”

Superconductivity occurs when electrons join to form what are called Cooper pairs, overcoming their normal repulsion. And the model can tell that apart from normal conductivity in the same material.

“You ask the question ‘What’s the chance that one of the charged particles spontaneously disappears because of quantum fluctuations and goes over here?’” Dreyer said, describing what happens when simulating a conductor. “What people do in superconductivity is they take this concept, but instead of asking what’s the chance of a single-charge particle to tunnel over there spontaneously, they’re asking what is the chance of a pair to tunnel spontaneously?”
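
For reference, here is the textbook form of the model and the two tunneling amplitudes Dreyer describes, in standard notation (the exact observables in Quantinuum’s study may differ):

```latex
% Fermi-Hubbard model: electrons hop between neighboring lattice sites
% with amplitude t and pay an energy cost U to share a site.
H = -t \sum_{\langle i,j\rangle,\sigma}
      \left( c^{\dagger}_{i\sigma} c_{j\sigma} + \mathrm{h.c.} \right)
    + U \sum_i n_{i\uparrow} n_{i\downarrow}

% Ordinary conduction vs. superconductivity: the chance of one charge
% tunneling from site j to site i, vs. the chance of a whole pair doing so.
G_1(i,j) = \langle c^{\dagger}_{i\sigma} c_{j\sigma} \rangle,
\qquad
G_2(i,j) = \langle c^{\dagger}_{i\uparrow} c^{\dagger}_{i\downarrow}
                   c_{j\downarrow} c_{j\uparrow} \rangle
```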

Even in its simplified form, however, it’s still a model of a quantum system, with all the computational complexity that comes with that. So the Quantinuum team modeled a few systems that classical computers struggle with. One was simply looking at a larger grid of atoms than most classical simulations have done; another expanded the grid in an additional dimension, modeling layers of a material. Perhaps the most complicated simulation involved what happens when a laser pulse of the right wavelength hits a superconductor at room temperature, an event that briefly induces a superconducting state.

And the system produced results, even without error correction. “It’s maybe a technical point, but I think it’s [a] very important technical point, which is [that] the circuits that we ran, they all had errors,” Dreyer told Ars. “Maybe on the average of three or so errors, and for some reason, that is not very fully understood for this application, it doesn’t matter. You still get almost the perfect result in some of these cases.”

That said, he also indicated that having higher-fidelity hardware would help the team do a better job of putting the system in a ground state or running the simulation for longer. But those will have to wait for future hardware.

What’s next

If you look at Quantinuum’s roadmap for that future hardware, Helios would appear to be the last of its kind. It and earlier versions of the processors have loops and large straight stretches; everything in the future features a grid of squares. But both Strabley and Hayes said that Helios has several key transitional features. “Those ions are moving through that junction many, many times over the course of a circuit,” Strabley told Ars. “And so it’s really enabled us to work on the reliability of the junction, and that will translate into the large-scale systems.”

Helios sits at the pivot between the simple geometries of earlier Quantinuum processors and the grids of future designs. Credit: Quantinuum

The collection of squares seen in future processors will also allow the same sorts of operations that are done with the loop-and-legs of Helios. Some squares can serve as the equivalent of a loop in terms of storage and sorting, while some of the straight lines nearby can be used for operations.

“What will be common to both of them is kind of the general concept that you can have a storage and sorting region and then gating regions on the side and they’re separated from one another,” Hayes said. “It’s not public yet, but that’s the direction we’re heading: a storage region where you can do really fast sorting in these 2D grids, and then gating regions that have parallelizable logical operations.”

In the meantime, we’re likely to see improvements made to Helios—ideas that didn’t quite make today’s release. “There’s always one more improvement that people want to make, and I’m the person that says, ‘No, we’re going to go now. Put this on the market, and people are going to go use it,’” Strabley said. “So there is a long list of things that we’re going to add to improve the performance. So expect that over the course of Helios, the performance is going to get better and better and better.”

That performance is likely to be used for the sort of initial work done on superconductivity or the algorithm recently described by Google, which is at or a bit beyond what classical computers can manage and may start providing some useful insights. But it will still be a generation or two before we start seeing quantum computing fulfill some of its promise.

John is Ars Technica’s science editor. He has a Bachelor of Arts in Biochemistry from Columbia University, and a Ph.D. in Molecular and Cell Biology from the University of California, Berkeley. When physically separated from his keyboard, he tends to seek out a bicycle, or a scenic location for communing with his hiking boots.

If you want to satiate AI’s hunger for power, Google suggests going to space


Google engineers think they already have all the pieces needed to build a data center in orbit.

With Project Suncatcher, Google will test its Tensor Processing Units on satellites. Credit: Google

It was probably always a matter of when, not if, Google would add its name to the list of companies intrigued by the potential of orbiting data centers.

On Tuesday, Google announced a new initiative, named Project Suncatcher, to examine the feasibility of bringing artificial intelligence to space. The idea is to deploy swarms of satellites in low-Earth orbit, each carrying Google’s AI accelerator chips designed for training, content generation, synthetic speech and vision, and predictive modeling. Google calls these chips Tensor Processing Units, or TPUs.

“Project Suncatcher is a moonshot exploring a new frontier: equipping solar-powered satellite constellations with TPUs and free-space optical links to one day scale machine learning compute in space,” Google wrote in a blog post.

“Like any moonshot, it’s going to require us to solve a lot of complex engineering challenges,” Google’s CEO, Sundar Pichai, wrote on X. Pichai noted that Google’s early tests show the company’s TPUs can withstand the intense radiation they will encounter in space. “However, significant challenges still remain like thermal management and on-orbit system reliability.”

The why and how

Ars reported on Google’s announcement on Tuesday, and Google published a research paper outlining the motivation for such a moonshot project. One of the authors, Travis Beals, spoke with Ars about Project Suncatcher and offered his thoughts on why it just might work.

“We’re just seeing so much demand from people for AI,” said Beals, senior director of Paradigms of Intelligence, a research team within Google. “So, we wanted to figure out a solution for compute that could work no matter how large demand might grow.”

Higher demand will lead to bigger data centers consuming colossal amounts of electricity. According to the MIT Technology Review, AI alone could consume as much electricity annually as 22 percent of all US households by 2028. Cooling is also a problem, often requiring access to vast water resources, raising important questions about environmental sustainability.

Google is looking to the sky to avoid potential bottlenecks. A satellite in space can access an infinite supply of renewable energy and an entire Universe to absorb heat.

“If you think about a data center on Earth, it’s taking power in and it’s emitting heat out,” Beals said. “For us, it’s the satellite that’s doing the same. The satellite is going to have solar panels … They’re going to feed that power to the TPUs to do whatever compute we need them to do, and then the waste heat from the TPUs will be distributed out over a radiator that will then radiate that heat out into space.”

Google envisions putting a legion of satellites into a special kind of orbit that rides along the day-night terminator, where sunlight meets darkness. This north-south, or polar, orbit would be synchronized with the Sun, allowing a satellite’s power-generating solar panels to remain continuously bathed in sunshine.

“It’s much brighter even than the midday Sun on Earth because it’s not filtered by Earth’s atmosphere,” Beals said.

This means a solar panel in space can produce up to eight times more power than the same collecting area on the ground, and you don’t need a lot of batteries to reserve electricity for nighttime. This may sound like the argument for space-based solar power, an idea first described by Isaac Asimov in his short story Reason published in 1941. But instead of transmitting the electricity down to Earth for terrestrial use, orbiting data centers would tap into the power source in space.
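
The factor of eight is straightforward energy accounting, give or take the assumptions. A rough check, using an assumed average capacity factor for a ground-based panel:

```python
# Back-of-the-envelope check on the "eight times" figure. The capacity
# factor below is a rough assumption, not a number from Google's paper.
solar_constant = 1361   # W/m^2 above the atmosphere, ~24 hours a day in this orbit
ground_peak = 1000      # W/m^2 reaching the ground at noon under clear skies
capacity_factor = 0.17  # night, weather, and sun angle knock peak output down

ground_average = ground_peak * capacity_factor  # ~170 W/m^2
print(f"~{solar_constant / ground_average:.0f}x more energy per unit area in orbit")
# -> ~8x
```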

“As with many things, the ideas originate in science fiction, but it’s had a number of challenges, and one big one is, how do you get the power down to Earth?” Beals said. “So, instead of trying to figure out that, we’re embarking on this moonshot to bring [machine learning] compute chips into space, put them on satellites that have the solar panels and the radiators for cooling, and then integrate it all together so you don’t actually have to be powered on Earth.”

SpaceX is driving down launch costs, thanks to reusable rockets and an abundant volume of Starlink satellite launches. Credit: SpaceX

Google has a mixed record with its ambitious moonshot projects. One of the most prominent moonshot graduates is the self-driving car kit developer Waymo, which spun out to form a separate company in 2016 and is now operational. The Project Loon initiative to beam Internet signals from high-altitude balloons is one of the Google moonshots that didn’t make it.

Ars published two stories last week on the promise of space-based data centers. One of the startups in this field, named Starcloud, is partnering with Nvidia, the world’s largest tech company by market capitalization, to build a 5 gigawatt orbital data center with enormous solar and cooling panels approximately 4 kilometers (2.5 miles) in width and length. In response to that story, Elon Musk said SpaceX is pursuing the same business opportunity but didn’t provide any details. It’s worth noting that Google holds an estimated 7 percent stake in SpaceX.

Strength in numbers

Google’s proposed architecture differs from that of Starcloud and Nvidia in an important way. Instead of putting up just one or a few massive computing nodes, Google wants to launch a fleet of smaller satellites that talk to one another through laser data links. Essentially, a satellite swarm would function as a single data center, using light-speed interconnectivity to aggregate computing power hundreds of miles over our heads.

If that sounds implausible, take a moment to think about what companies are already doing in space today. SpaceX routinely launches more than 100 Starlink satellites per week, each of which uses laser inter-satellite links to bounce Internet signals around the globe. Amazon’s Kuiper satellite broadband network uses similar technology, and laser communications will underpin the US Space Force’s next-generation data-relay constellation.

Artist’s illustration of laser crosslinks in space. Credit: TESAT

Autonomously constructing a miles-long structure in orbit, as Nvidia and Starcloud foresee, would unlock unimagined opportunities. The concept also relies on tech that has never been tested in space, but there are plenty of engineers and investors who want to try. Starcloud announced an agreement last week with a new in-space assembly company, Rendezvous Robotics, to explore the use of modular, autonomous assembly to build Starcloud’s data centers.

Google’s research paper describes a future computing constellation of 81 satellites flying at an altitude of some 400 miles (650 kilometers), but Beals said the company could dial the total swarm size to as many spacecraft as the market demands. This architecture could enable terawatt-class orbital data centers, according to Google.

“What we’re actually envisioning is, potentially, as you scale, you could have many clusters,” Beals said.

Whatever the number, the satellites will communicate with one another using optical inter-satellite links for high-speed, low-latency connectivity. The satellites will need to fly in tight formation, perhaps a few hundred feet apart, with a swarm diameter of a little more than a mile, or about 2 kilometers. Google says its physics-based model shows satellites can maintain stable formations at such close ranges using automation and “reasonable propulsion budgets.”
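
Google's paper models the swarm's dynamics in far more detail, but a textbook model conveys the core problem. The sketch below uses the linearized Clohessy-Wiltshire (Hill) equations, with an assumed 650 km circular orbit, to show that a pure along-track separation holds steady while even a small radial error drifts hundreds of meters per orbit, which is why automated station-keeping matters:

```python
import math

# Minimal formation-flying sketch using the textbook Clohessy-Wiltshire
# (Hill) equations -- a far simpler linearized model than the one Google
# describes, but enough to show why station-keeping thrust is needed.
MU = 3.986004418e14          # Earth's gravitational parameter, m^3/s^2
a = 6_371_000.0 + 650_000.0  # assumed circular orbit radius (~650 km altitude), m
n = math.sqrt(MU / a**3)     # mean motion, rad/s
T = 2 * math.pi / n          # orbital period, roughly 97 minutes

def cw_offsets(t, x0, y0, xdot0=0.0, ydot0=0.0):
    """Closed-form CW solution: x = radial, y = along-track offsets (meters)."""
    s, c = math.sin(n * t), math.cos(n * t)
    x = (4 - 3 * c) * x0 + (xdot0 / n) * s + (2 * ydot0 / n) * (1 - c)
    y = (y0 + 6 * x0 * (s - n * t)
         - (2 * xdot0 / n) * (1 - c) + (ydot0 / n) * (4 * s - 3 * n * t))
    return x, y

# A pure 200 m along-track separation is an equilibrium: no drift after one orbit.
print(cw_offsets(T, x0=0.0, y0=200.0))   # -> (0.0, 200.0)

# But a 10 m radial error drifts ~377 m along-track per orbit, which is
# why close formations need frequent automated correction burns.
print(cw_offsets(T, x0=10.0, y0=200.0))  # -> (~10.0, ~-177.0)
```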

“If you’re doing something that requires a ton of tight coordination between many TPUs—training, in particular—you want links that have as low latency as possible and as high bandwidth as possible,” Beals said. “With latency, you run into the speed of light, so you need to get things close together there to reduce latency. But bandwidth is also helped by bringing things close together.”
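
The speed-of-light arithmetic behind that quote is easy to check. Here is a minimal sketch using the distances mentioned above, plus a hypothetical long-range link for contrast:

```python
# One-way light time over candidate inter-satellite distances. The first
# two figures come from the article; the 1,000 km case is a hypothetical
# contrast, not a number from Google.
C = 299_792_458.0  # speed of light in vacuum, m/s

distances_m = {
    "a few hundred feet (~100 m)": 100.0,
    "across the swarm (~2 km)": 2_000.0,
    "hypothetical 1,000 km link": 1_000_000.0,
}
for label, d in distances_m.items():
    print(f"{label:>30}: {d / C * 1e6:9.2f} microseconds one-way")
# ~0.33 us at 100 m is comparable to networking inside a terrestrial data
# center; ~3,336 us (3.3 ms) at 1,000 km would cripple tightly coupled
# TPU workloads such as training.
```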

Some machine-learning applications could be done with the TPUs on just one modestly sized satellite, while others may require the processing power of multiple spacecraft linked together.

“You might be able to fit smaller jobs into a single satellite. This is an approach where, potentially, you can tackle a lot of inference workloads with a single satellite or a small number of them, but eventually, if you want to run larger jobs, you may need a larger cluster all networked together like this,” Beals said.

Google has worked on Project Suncatcher for more than a year, according to Beals. In ground testing, engineers exposed Google's TPUs to a 67 MeV proton beam to simulate the total ionizing dose of radiation the chips would see over five years in orbit. Now, it's time to demonstrate that Google's AI chips, and everything else needed for Project Suncatcher, will actually work in the real environment.

Google is partnering with Planet, the Earth-imaging company, to develop a pair of small prototype satellites for launch in early 2027. Planet builds its own satellites, so Google has tapped it to manufacture the two spacecraft, test them, and arrange for their launch. Google's parent company, Alphabet, also has an equity stake in Planet.

“We have the TPUs and the associated hardware, the compute payload… and we’re bringing that to Planet,” Beals said. “For this prototype mission, we’re really asking them to help us do everything to get that ready to operate in space.”

Beals declined to say how much the demo slated for launch in 2027 will cost but said Google is paying Planet for its role in the mission. The goal of the demo mission is to show whether space-based computing is a viable enterprise.

“Does it really hold up in space the way we think it will, the way we’ve tested on Earth?” Beals said.

Engineers will test an inter-satellite laser link and verify Google’s AI chips can weather the rigors of spaceflight.

“We’re envisioning scaling by building lots of satellites and connecting them together with ultra-high bandwidth inter-satellite links,” Beals said. “That’s why we want to launch a pair of satellites, because then we can test the link between the satellites.”

Evolution of a free-fall (no thrust) constellation under Earth’s gravitational attraction, modeled to the level of detail required to obtain Sun-synchronous orbits, in a non-rotating coordinate system. Credit: Google

Getting all this data to users on the ground is another challenge. Optical data links could also route enormous amounts of data between the satellites in orbit and ground stations on Earth.

Aside from the technical feasibility, there have long been economic hurdles to fielding large satellite constellations. But SpaceX’s experience with its Starlink broadband network, now with more than 8,000 active satellites, is proof that times have changed.

Google believes the economic equation is about to change again when SpaceX's Starship rocket comes online. The company's learning-curve analysis shows launch prices could fall to less than $200 per kilogram by around 2035, assuming Starship is flying about 180 times per year by then. That flight rate is far below SpaceX's stated targets for Starship but comparable to the company's proven cadence with its workhorse Falcon 9 rocket.
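
Learning-curve projections of this kind typically follow Wright's law: the price falls by a roughly fixed fraction each time cumulative output doubles. Here is an illustrative sketch with parameters we've assumed for the example, not figures from Google's actual analysis:

```python
import math

# Illustrative Wright's-law sketch. The starting price and learning rate
# below are our assumptions, not figures from Google's analysis.
PRICE_0 = 2_900.0      # assumed $/kg for the first flight (rough Falcon 9 scale)
LEARNING_RATE = 0.20   # assumed 20% price drop per doubling of cumulative flights
b = math.log2(1 - LEARNING_RATE)  # Wright's-law exponent (negative)

def price_per_kg(cumulative_flights: float) -> float:
    """Projected price after a given number of cumulative flights."""
    return PRICE_0 * cumulative_flights ** b

for flights in (1, 10, 100, 1_000, 1_800):  # 1,800 ~= ten years at 180/year
    print(f"{flights:>5,} flights -> ${price_per_kg(flights):,.0f}/kg")
# With these assumed parameters, the price falls into the ~$200-300/kg
# range somewhere past 1,000 cumulative flights.
```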

It’s possible there could be even more downward pressure on launch costs if SpaceX, Nvidia, and others join Google in the race for space-based computing. The demand curve for access to space may only be eclipsed by the world’s appetite for AI.

“The more people are doing interesting, exciting things in space, the more investment there is in launch, and in the long run, that could help drive down launch costs,” Beals said. “So, it’s actually great to see that investment in other parts of the space supply chain and value chain. There are a lot of different ways of doing this.”


If you want to satiate AI’s hunger for power, Google suggests going to space Read More »

google’s-new-hurricane-model-was-breathtakingly-good-this-season

Google’s new hurricane model was breathtakingly good this season

This early model comparison does not include the "gold standard" traditional, physics-based model produced by the European Centre for Medium-Range Weather Forecasts. However, the ECMWF model typically does no better on hurricane track forecasts than the National Hurricane Center's official forecast or consensus models, which weigh the output of several different models. So it is unlikely to be superior to Google DeepMind's model.
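
For readers unfamiliar with the term, a consensus model is essentially a skill-weighted average of individual models' predictions. Here is a minimal sketch of the idea for track forecasting, with hypothetical model names, positions, and weights:

```python
# Minimal sketch of a consensus track forecast: a skill-weighted average
# of several models' predicted storm positions. All model names,
# positions, and weights here are hypothetical.
forecasts = {            # 72-hour predicted position (lat, lon), degrees
    "model_a": (27.1, -80.2),
    "model_b": (27.8, -79.5),
    "model_c": (26.9, -80.9),
}
weights = {"model_a": 0.40, "model_b": 0.35, "model_c": 0.25}  # sum to 1

lat = sum(weights[m] * pos[0] for m, pos in forecasts.items())
lon = sum(weights[m] * pos[1] for m, pos in forecasts.items())
print(f"consensus 72-hour position: ({lat:.2f}, {lon:.2f})")
# -> a position near (27.3, -80.1)
```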

This will change forecasting forever

It's worth noting that DeepMind also did exceptionally well at intensity forecasting, which predicts fluctuations in a hurricane's strength. So in its first season, it nailed both hurricane tracks and intensity.

As a forecaster who has relied on traditional physics-based models for a quarter of a century, I find it difficult to convey how gobsmacking these results are. Going forward, it is safe to say that we will rely heavily on Google's and other AI weather models, which are relatively new and likely to improve further in the coming years.

“The beauty of DeepMind and other similar data-driven, AI-based weather models is how much more quickly they produce a forecast compared to their traditional physics-based counterparts that require some of the most expensive and advanced supercomputers in the world,” noted Michael Lowry, a hurricane specialist and author of the Eye on the Tropics newsletter, about the model performance. “Beyond that, these ‘smart’ models with their neural network architectures have the ability to learn from their mistakes and correct on-the-fly.”

What about the North American model?

As for the GFS model, it is difficult to explain why it performed so poorly this season. In the past, it has been, at worst, worthy of consideration in making a forecast. But this year, other forecasters and I often disregarded it.

“It’s not immediately clear why the GFS performed so poorly this hurricane season,” Lowry wrote. “Some have speculated the lapse in data collection from DOGE-related government cuts this year could have been a contributing factor, but presumably such a factor would have affected other global physics-based models as well, not just the American GFS.”

With the US government in shutdown mode, we probably cannot expect many answers soon. But it seems clear that the massive upgrade of the model's dynamical core, which began in 2019, has largely been a failure. If the GFS was a little behind some competitors a decade ago, it is now fading further and faster.

Google’s new hurricane model was breathtakingly good this season Read More »

some-stinkbugs’-legs-carry-a-mobile-fungal-garden

Some stinkbugs’ legs carry a mobile fungal garden

Many insect species hear using tympanal organs, membranes that roughly resemble our eardrums but sit elsewhere on the body, often on the legs. Grasshoppers, mantises, and moths all have them, and for decades, we thought that female stinkbugs of the Dinidoridae family have them, too, although located a bit unusually on their hind rather than front legs.

Suspecting that they use their hind leg tympanal organs to listen to male courtship songs, a team of Japanese researchers took a closer look at the organs in Megymenum gracilicorne, a Dinidoridae stinkbug species native to Japan. They discovered that these “tympanal organs” were not what they seemed. They’re actually mobile fungal nurseries of a kind we’ve never seen before.

Portable gardens

Dinidoridae is a small stinkbug family that lives exclusively in Asia. The family has attracted some scientific attention, but not nearly as much as larger relatives like the Pentatomidae. Prior work looking specifically into the organs on the hind legs of Dinidoridae females was thus somewhat limited. “Most research relied on taxonomic and morphological approaches. Some taxonomists did describe that female Dinidoridae stinkbugs have an enlarged part on the hind legs that looks like the tympanal organ you can find, for example, in crickets,” said Takema Fukatsu, an evolutionary biologist at the National Institute of Advanced Industrial Science and Technology in Tokyo.

Based on that appearance, these parts were classified as tympanal organs—the case was closed, and it stayed closed until Fukatsu’s team started examining them more closely. Most insects have tympanal organs on their front legs, not hind legs, or on abdominal segments. The initial goal of Fukatsu’s study was to figure out what impact this unusual position has on Dinidoridae females’ ability to hear sounds.

Early on in the study, it turned out that whatever Dinidoridae females have on their hind legs, they are not tympanal organs. “We found no tympanal membrane and no sensory neurons, so the enlarged parts on the hind legs had nothing to do with hearing,” Fukatsu explained. Instead, the organ had thousands of small pores filled with benign filamentous fungi. The pores were connected to secretory cells that released substances that Fukatsu’s team hypothesized were nutrients enabling the fungi to grow.

Some stinkbugs’ legs carry a mobile fungal garden Read More »

disruption-to-science-will-last-longer-than-the-us-government-shutdown

Disruption to science will last longer than the US government shutdown

President Donald Trump alongside Office of Management and Budget Director Russell Vought. Credit: Brendan Smialowski/AFP via Getty Images

However, the full impact that the shutdown and the Trump administration's broader assaults on science will have on US international competitiveness, economic security, and electoral politics could take years to materialize.

In parallel, the dramatic drop in international student enrollment, the financial squeeze facing research institutions, and research security measures to curb foreign interference spell an uncertain future for American higher education.

With neither the White House nor Congress showing signs of reaching a budget deal, Trump continues to test the limits of executive authority, reinterpreting the law—or simply ignoring it.

Earlier in October, Trump redirected unspent research funding to pay service members before they missed their Oct. 15 paycheck. Repurposing appropriated funds this way directly challenges the power vested in Congress—not the president—to control federal spending.

The White House’s promise to fire an additional 10,000 civil servants during the shutdown, its threat to withhold back pay from furloughed workers, and its push to end any programs with lapsed funding “not consistent with the President’s priorities” similarly move to broaden presidential power.

Here, the damage to science could snowball. If Trump and Vought chip enough authority away from Congress by making funding decisions or shuttering statutory agencies, the next three years will see an untold amount of impounded, rescinded, or repurposed research funds.

The government shutdown has emptied many laboratories staffed by federal scientists. Combined with other actions by the Trump administration, more scientists could continue to lose funding. Credit: Monty Rakusen/DigitalVision via Getty Images

Science, democracy, and global competition

While technology has long served as a core pillar of national and economic security, science has only recently reemerged as a key driver of geopolitical and cultural change.

China’s extraordinary rise in science over the past three decades and its arrival as the United States’ chief technological competitor has upended conventional wisdom that innovation can thrive only in liberal democracies.

The White House’s efforts to centralize federal grantmaking, restrict free speech, erase public data, and expand surveillance mirror China’s successful playbook for building scientific capacity while suppressing dissent.

As the shape of the Trump administration’s vision for American science has come into focus, what remains unclear is whether, after the shutdown, it can outcompete China by following its lead.

Kenneth M. Evans is a Fellow in Science, Technology, and Innovation Policy at the Baker Institute for Public Policy, Rice University.

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Disruption to science will last longer than the US government shutdown Read More »