
Blogging service TypePad is shutting down and taking all blog content with it

TypePad was a blogging service based on the Movable Type content management system but hosted on TypePad’s site and with other customizations. Both Movable Type and TypePad were originally created by Six Apart, with TypePad being the solution for less technical users who just wanted to create a site and Movable Type being the version you could download and host anywhere and customize to your liking—not unlike the relationship between WordPress.com (the site that hosts other sites) and WordPress.org (the site that hosts the open source software).

Movable Type and TypePad diverged in the early 2010s; Six Apart was bought by a company called VideoEgg in 2010, resulting in a merged company called Say Media. In 2011, Say Media sold Movable Type and the Six Apart brand name to a Japanese company called InfoCom while retaining control of TypePad. Forms filed with the SEC indicate that TypePad was acquired in 2013 by Endurance International Group, which also owns Bluehost, among other hosting and hosting-related brands. Trying to sign up for a new TypePad account currently redirects users to Bluehost instead.

Movable Type still lives on; its latest major release, 8.4.0, came out in November of 2024.

The TypePad shutdown is rough news for the site’s remaining user base—and it’s yet another tranche of old Internet content that will only be available via the Internet Archive’s Wayback Machine, when it’s available at all.
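
For readers who want to check what survives, the Internet Archive exposes a public availability API. A minimal sketch (the blog domain below is a placeholder, not a real TypePad site):

```python
# Query the Wayback Machine's availability API for the most recent
# snapshot of a URL. The example domain is a placeholder.
import json
import urllib.parse
import urllib.request

def latest_snapshot(url: str):
    api = "https://archive.org/wayback/available?url=" + urllib.parse.quote(url)
    with urllib.request.urlopen(api) as resp:
        data = json.load(resp)
    closest = data.get("archived_snapshots", {}).get("closest")
    return closest["url"] if closest else None

print(latest_snapshot("example.typepad.com"))
```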


New dinosaur species is the punk rock version of an ankylosaur

And we have known for sure that the armor was around back then, given that we’ve found the skin-derived osteoderms that make up the armor in Jurassic deposits. But with little more than a rib and a handful of mouth parts to go on, it wasn’t really possible to say much more than that.

Until now, that is. Because the new Spicomellus remains show extremely clearly that the armor of ankylosaurs got less elaborate over time.

The small, solid-looking spikes found along the edges of later ankylosaurs? Forget those. Spicomellus had a back that was probably bristling with sharper spines, along with far larger ones along its outer edges. Each rib appears to have generated as many as six individual spikes. At a handful of locations, these spikes extended out to nearly a meter, looking more like lances than anything needed to ward off a close-in attack.

And the largest of these were along its neck. On the upper surface of its neck, several osteoderms fused to form a massive half-collar of bone and then extended out five or more individual spikes, each among the longest on the animal’s body. And there were three of these structures along the neck. “No known ankylosaur possesses any condition close to the extremely long pairs of spines on the cervical half-ring of Spicomellus,” its discoverers note.

As if its hedgehog-on-acid appearance weren’t enough, handles present on the tail vertebrae suggest that it also had a weaponized tail. All told, the researchers sum things up by saying, “The new specimen reveals extreme dermal armour modifications unlike those of any other vertebrate, extinct or extant, which fall far outside of the range of morphologies shown by other armoured dinosaurs.”

Out go the hypotheses

Because it’s so unusual, the skeleton’s characteristics are difficult to place within a neat family tree of the ankylosaurs. The researchers note that some details of its skeleton do suggest Spicomellus groups among the ankylosaurs and conclude that it’s probably an early branch from the main lineage. But without any other significant examples from the lineage at that time, it’s an extremely tentative conclusion. Still, the alternative is that this thing is unrelated to the only other organisms that share at least a few of its bizarre features, which is a difficult idea to swallow.


Porsche adds digital keys, in-car gaming to 2026 Macan Electric

There’s also a trained parking feature, which lets you record up to five parking routines. Once the car recognizes a parking environment it knows, it will offer to take over the job of putting your car away for you, although only with the driver in the car—this does not appear to be a remote parking feature that you control by phone.

And there’s a new reversing assist. This can remember up to 160 feet (49 m) of a route that it has just traveled forward, so that it can automatically reverse back the way it came, which Porsche says should be “ideal for narrow access roads or winding parking garages.”
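
As a rough illustration of how such a feature can be structured (my sketch, not Porsche’s actual implementation), the car only needs a bounded buffer of recent waypoints that it can replay newest-first; the 49 m budget below is the article’s figure:

```python
# Illustrative sketch of a path-replay reversing assist: keep a
# bounded buffer of waypoints, replay it in reverse. Not Porsche's
# implementation; only the 49 m limit comes from the article.
from collections import deque

MAX_PATH_M = 49.0

class ReverseAssist:
    def __init__(self) -> None:
        self.path: deque = deque()  # (x, y) waypoints, oldest first
        self.length = 0.0           # buffered path length, meters

    def record(self, x: float, y: float) -> None:
        if self.path:
            px, py = self.path[-1]
            self.length += ((x - px) ** 2 + (y - py) ** 2) ** 0.5
        self.path.append((x, y))
        # Drop the oldest waypoints once the buffer exceeds its budget.
        while self.length > MAX_PATH_M and len(self.path) > 1:
            ox, oy = self.path.popleft()
            nx, ny = self.path[0]
            self.length -= ((nx - ox) ** 2 + (ny - oy) ** 2) ** 0.5

    def replay(self):
        # The route to steer along while backing up, newest waypoint first.
        return list(reversed(self.path))
```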

The AirConsole in-car gaming platform that we’ve been seeing in other German luxury cars of late has been added to the infotainment. This lets you pair your phone as a controller or use Bluetooth game controllers, and the app store contains a bunch of games, including a passable Mario Kart clone, last I checked. Porsche says it has also beefed up the in-car voice assistant with better AI, created a better charging planner app that lets you prioritize individual charging stations, and increased the towing capacity from 4,400 lbs to 5,500 lbs (1,995–2,495 kg).


CDC director has been ousted just weeks after Senate confirmation

Georges Benjamin, executive director of the American Public Health Association, told the outlet that Monarez “values science, is a solid researcher, and has a history of being a good manager. We’re looking forward to working with her.”

A low point for the agency

The reported ouster comes at what feels like a nadir for the CDC. The agency has lost hundreds of staff from layoffs and buyouts. Vital health programs have been shuttered or hampered. Dangerous rhetoric and health misinformation from Kennedy and other health officials in the Trump administration have made once-respected CDC experts feel vilified by the public and like targets of hate. Kennedy himself has falsely called the COVID-19 shots the “deadliest vaccine ever made” and the CDC a “cesspool of corruption,” for example.

On August 8, a gunman warped by vaccine disinformation opened fire on the CDC campus. Of nearly 500 shots fired, about 200 struck six CDC buildings as terrified staff dove for safety. One local police officer was killed in the incident. The gunman had specifically targeted the CDC for the shooting and blamed COVID-19 vaccines for his health problems.

Additional exits reported

After news broke of Monarez’s removal, Stat News reported that a wave of CDC leaders has resigned. The high-ranking resignations include Daniel Jernigan, director of the National Center for Emerging and Zoonotic Infectious Diseases; Deb Houry, chief medical officer; and Demetre Daskalakis, director of the National Center for Immunization and Respiratory Diseases.

“I am not able to serve in this role any longer because of the ongoing weaponization of public health,” Daskalakis said in a message to staff seen by Stat.

“I am committed to protecting the public’s health, but the ongoing changes prevent me from continuing in my job as a leader of the agency,” Houry wrote in a message to staff. Houry added that science should “never be censored or subject to political interpretations.”

Earlier today, Politico reported that Jennifer Layden, director of the agency’s Office of Public Health Data, Surveillance, and Technology, has also resigned.

8/27/2025 8:15 pm ET: This post has been updated to include the social media post from HHS, reporting from the Washington Post on the circumstances around Monarez’s exit, additional resignations reported by Stat and Politico, and the statement from Monarez’s lawyers.


Are They Starting To Take Our Jobs?

Is generative AI making it harder for young people to find jobs?

My answer is:

  1. Yes, definitely, in terms of finding and getting hired for any given job that exists. That’s getting harder. AI is most definitely screwing up that process.

  2. Yes, probably, in terms of employment in automation-impacted sectors. It always seemed odd to think otherwise, and this week’s new study has strong evidence here.

  3. Maybe, overall, in terms of the jobs available (excluding search and matching effects from #1), because AI should be increasing employment in areas not being automated yet, and that effect can be small and still dominate.

The claims go back and forth on the employment effects of AI. As Derek Thompson points out, if you go by articles in the popular press, we’ve gone from ‘possibly’ to ‘definitely yes’ to ‘almost certainly no’ until what Derek describes as this week’s ‘plausibly yes’ and which others are treating as stronger than that.

Derek Thompson: To be honest with you, I considered this debate well and truly settled. No, I’d come to think, AI is probably not wrecking employment for young people. But now, I’m thinking about changing my mind again.

It’s weird to pull an ‘I told you all so’ when what you said was ‘I am confused and you all are overconfident’ but yeah, basically. The idea that this was ‘well and truly settled’ always seemed absurd to me even considering present effects, none of these arguments should have filled anyone with confidence and neither should the new one, and this is AI so even if it definitively wasn’t happening now who knows where we would be six months later.

People changing their minds a lot reflects, as Derek notes, the way discovery, evaluation, discourse and science are supposed to work, except for the overconfidence.

Most recently before this week we had claims that what looks like effects of AI automation are delayed impacts from Covid, various interest rate changes, existing overhiring or other non-AI market trends.

The new hotness is this Stanford study from Brynjolfsson, Chandar and Chen:

This paper examines changes in the labor market for occupations exposed to generative artificial intelligence using high-frequency administrative data from the largest payroll software provider in the United States.

We present six facts that characterize these shifts. We find that since the widespread adoption of generative AI, early-career workers (ages 22-25) in the most AI-exposed occupations have experienced a 13 percent relative decline in employment even after controlling for firm-level shocks.

In contrast, employment for workers in less exposed fields and more experienced workers in the same occupations has remained stable or continued to grow.

We also find that adjustments occur primarily through employment rather than compensation. Furthermore, employment declines are concentrated in occupations where AI is more likely to automate, rather than augment, human labor. Our results are robust to alternative explanations, such as excluding technology-related firms and excluding occupations amenable to remote work.
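
To make “relative decline… after controlling for firm-level shocks” concrete, here is a stylized difference-in-differences specification of the kind such payroll-panel papers estimate (my reconstruction for illustration, not the authors’ exact model):

$$
\log E_{f,o,a,t} = \beta\,\big(\mathrm{Exposed}_o \times \mathrm{Young}_a \times \mathrm{Post}_t\big) + \alpha_{o,a} + \gamma_{f,t} + \varepsilon_{f,o,a,t}
$$

Here $E_{f,o,a,t}$ is employment at firm $f$ in occupation $o$ for age band $a$ at time $t$; the firm-by-time fixed effects $\gamma_{f,t}$ absorb firm-level shocks, and the headline result corresponds to $\beta \approx -0.13$.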

Effects acting through employment rather than compensation makes sense since the different fields are competing against each other for labor and wages are sticky downwards even across workers.

Bharat Chandar (author): We observe millions of workers each month. We use this to cut the data finely by age and occupation.

What do we find?

Stories about young SW developers struggling to find work are borne out in the data.

Employment for 22-25 y/o developers ⬇️ ~20% from peak in 2022. Older ages show steady rise.

This isn’t just about software. See a similar pattern for customer service reps, another job highly exposed to AI. For both roles, the decline is sharpest for the 22-25 age group, with older, more experienced workers less affected.

In contrast, jobs less exposed to AI, like health aides, show the opposite trend. These jobs, which require in-person physical tasks, have seen the fastest employment growth among youngest workers.

Overall, job market for entry-level workers has been stagnant since late 2022, while market for experienced workers remains robust. Stagnation for young workers driven by declines in AI-exposed jobs. Of course, lots of changes in the economy, so this is not all caused by AI.

Note the y-axis scale on the graphs, but that does seem like a definitive result. It seems far too fast and targeted to be the result of non-AI factors.

John Burn-Murdoch: Very important paper, for two reasons:

  1. Key finding: employment *is* falling in early-career roles exposed to LLM automation

  2. Shows that administrative data (millions of payroll records) is much better than survey data for questions requiring precision (occupation × age)

There’s always that battle between ‘our findings are robust to various things’ and ‘your findings go away when you account for this particular thing in this way,’ and different findings appear to contradict one another.

I don’t know for sure who is right, but I was convinced by their explanation of why they have better data sources and thus they’re right and the FT study was wrong, in terms of there being relative entry-level employment effects that vary based on the amount of automation in each sector.

Areas with automation from AI saw job losses at entry level, whereas areas with AI amplification saw job gains, but we should expect more full automation over time.

There’s the additional twist that a 13 percent relative decline in employment for the AI-exposed early-career jobs does not mean work overall is harder to find. Everyone agrees AI will automate away some jobs. The bull case for employment is not that those jobs don’t go away. It is that those jobs are replaced by other jobs. So the relative 13% could be, say, an 11% decline in exposed areas against a 2% increase in other, larger areas, with the aggregate roughly canceling out. AI is already driving substantial economic growth, which should create jobs. We can’t tell.
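
A toy calculation makes the aggregation point concrete (all shares and rates below are made up for illustration):

```python
# Toy arithmetic: a steep relative decline in a small, AI-exposed
# segment can coexist with roughly flat aggregate employment.
# All numbers are invented for illustration.
exposed_share = 0.15      # share of jobs that are AI-exposed and early-career
exposed_change = -0.11    # employment change in those jobs
other_change = +0.02      # employment change everywhere else

relative_gap = exposed_change - other_change          # the study-style headline
net = exposed_share * exposed_change + (1 - exposed_share) * other_change
print(f"Relative decline: {relative_gap:+.0%}")       # -> -13%
print(f"Net change in total employment: {net:+.2%}")  # -> +0.05%
```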

There is one place I am very confident AI is making things harder. That is the many ways it is making it harder to find and get hired for what jobs do exist. Automated job applications are flooding and breaking the job application market, most of all in software but across the board. Matching is by all reports getting harder rather than easier, although if you are ahead of the curve on AI use here you presumably have an edge.

Predictions are hard, especially about the future, but I would as strongly as always disagree with this advice from Derek Thompson:

Derek Thompson: Someone asked me recently if I had any advice on how to predict the future when I wrote about social and technological trends. Sure, I said. My advice is that predicting the future is impossible, so the best thing you can do is try to describe the present accurately.

Since most people live in the past, hanging onto stale narratives and outdated models, people who pay attention to what’s happening as it happens will appear to others like they’re predicting the future when all they’re doing is describing the present.

Predicting the future is hard in some ways, but that is no reason to throw up one’s hands and pretend to know nothing. We can especially know big things based on broad trends; destinations are often clearer than the roads toward them. And in the age of AI, while predicting the present puts you ahead of many, we can know for certain many ways the future will not look like the present.

The most important and in some ways easiest things we can say involve what would happen with powerful or transformational AI, and that is really important, the only truly important thing, but in this particular context that’s not important right now.

If by the future we do mean the effect on jobs, and we presume that the world is not otherwise transformed so much we have far bigger problems, we can indeed still say many things. At minimum, we know many jobs will be amplified or augmented, and many more jobs will be fully automated or rendered irrelevant, even if we have high uncertainty about which ones, in what order, and how fast.

We know that there will be some number of new jobs created by this process, especially if we have time to adjust, but that as AI ‘automates the new jobs as well’ this will get harder and eventually break. And we know that there is a lot of slack for an increasingly wealthy civilization to hire people for quite a lot of what I call ‘shadow jobs,’ which are jobs that would already exist except labor and capital currently choose better opportunities, again if those jobs too are not yet automated. Eventually we should expect unemployment.

Getting more speculative and less confident: earlier than that, it makes sense to expect unemployment for those lacking a necessary threshold of skill as technology advances, even if AI weren’t a direct substitute for your intelligence. Notice that the employment charts above start at age 22. They used to start at age 18, and before that even younger, or they would have if we had charts back then.


Intel details everything that could go wrong with US taking a 10% stake


Intel warns investors to brace for losses and uncertainties.

Some investors are not happy that Intel agreed to sell the US a 10 percent stake in the company after Donald Trump attacked Intel CEO Lip-Bu Tan with a demand that he resign.

After Intel accepted the deal at a meeting with the president, it alarmed some investors when Trump boasted that his pressure campaign worked, claiming Tan “walked in wanting to keep his job, and he ended up giving us $10 billion for the United States.”

“It sets a bad precedent if the president can just take 10 percent of a company by threatening the CEO,” James McRitchie, a private investor and shareholder activist in California who owns Intel shares, told Reuters. To McRitchie, Tan accepting the deal effectively sent the message that “we love Trump, we don’t want 10 percent of our company taken away.”

McRitchie wasn’t the only shareholder who raised an eyebrow. Kristin Hull, chief investment officer of a California-based activist firm called Nia Impact Capital—which manages shares in Intel for its clients—told Reuters she has “more questions than confidence” about how the deal will benefit investors. To her, the deal seems to blur some lines “between where is the government and where is the private sector.”

As Reuters explains, Intel agreed to convert $11.1 billion in CHIPS funding and other grants “into a 9.9 percent equity stake in Intel.”
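
A quick back-of-the-envelope from the article’s own figures (the arithmetic is mine, not Reuters’):

```python
# Implied valuation if $11.1B in grants buys a 9.9 percent stake.
grants = 11.1e9   # CHIPS funding and other grants converted to equity
stake = 0.099
print(f"Implied Intel valuation: ${grants / stake / 1e9:.0f}B")  # -> ~$112B
```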

Some early supporters of the agreement—including tech giants like Microsoft and Trump critics like Bernie Sanders (I-Vt.)—have praised the deal as allowing the US to profit off billions in CHIPS grants that Intel was awarded under the Biden administration. After pushing for the deal, Commerce Secretary Howard Lutnick criticized Joe Biden for giving away CHIPS funding “for free,” while praising Trump for turning the CHIPS Act grants into “equity for the Trump administration” and “for the American people.”

But to critics of the deal, it seems weird for the US to swoop in and take a stake in a company that doesn’t need government assistance. The only recent precedent was the US temporarily taking stakes in key companies considered vital to the economy that risked going under during the 2008 financial crisis.

Compare that to the Intel deal, where Tan has made it clear that Intel, while struggling to compete with rivals, “didn’t need the money,” Reuters noted—largely due to SoftBank purchasing $2 billion in Intel shares in the days prior to the US agreement being reached. Instead, the US is incentivized to take the stake to help further Trump’s mission to quickly build up a domestic chip manufacturing supply chain that can keep the US a global technology leader at the forefront of AI innovation.

Investors told Reuters that it’s unusual for the US to take this much control over a company that’s not in crisis, noting that “this level of tractability was not usually associated with relations between businesses and Washington.”

Intel did not immediately respond to Ars’ request to comment on investors’ concerns, but a spokesperson told Reuters that Intel’s board has already approved the deal. In a press release, the company emphasized that “the government’s investment in Intel will be a passive ownership, with no Board representation or other governance or information rights. The government also agrees to vote with the Company’s Board of Directors on matters requiring shareholder approval, with limited exceptions.”

Intel reveals why investors should be spooked

The Trump administration has also stressed that the US stake in Intel does not give the Commerce Department any board seats or any voting or governance rights in Intel. Instead, the terms stipulate that the Commerce Department must “support the board on director nominees and proposals,” an Intel securities filing said.

However, the US can vote “as it wishes,” Intel reported, and experts suggested to Reuters that regulations may be needed to “limit government opportunities for abuses such as insider trading.” That could reassure investors somewhat, Rich Weiss, a senior vice president and chief investment officer of multi-asset strategies for American Century Investments, told Reuters. Without such laws, Weiss noted that “in an unchecked scenario of government direct investing, trading in those companies could be much riskier for investors.”

It also seems possible that the US could influence Intel’s decisions without the government explicitly taking voting control, experts suggested. “Several investors and representatives” told Reuters that the US could impact major decisions regarding things like layoffs or business shifts into foreign markets. At a certain point, Intel may be stuck choosing between corporate and national interests, Robert McCormick, executive director of the Council of Institutional Investors, told Reuters.

“A government stake in an otherwise private entity potentially creates a conflict between what’s right for the company and what’s right for the country,” McCormick suggested.

Further, Intel becoming partly state-controlled risks disrupting Intel’s non-US business, subjecting the company to “additional regulations, obligations or restrictions, such as foreign subsidy laws or otherwise, in other countries,” Intel’s filing said.

In the filing, Intel confirmed directly to investors that they have good cause to be spooked by the US stake. Offering a bulleted list, the company outlined “a number of risks and uncertainties” that could “adversely impact” shareholders due to “the US Government’s ownership of significant equity interests in the company.”

Perhaps most alarming in the short term, Intel admitted that the deal will dilute investors’ stock due to the discounted shares issued to Trump. And their shares could suffer additional dilutions if certain terms of the deal are “triggered” or “exercised,” Intel noted.

In the long term, investors were told that the US stake may limit the company’s eligibility for future federal grants while leaving Intel shareholders dwelling in the uncertainty of knowing that terms of the deal could be voided or changed over time, as federal administration and congressional priorities shift.

Additionally, Intel forecasted potential legal challenges over the deal, which Intel anticipates could come from both third parties and the US government.

The final bullet point in Intel’s risk list could be the most ominous, though. Due to the unprecedented nature of the deal, Intel fears there’s no way to anticipate myriad other challenges the deal may trigger.

“It is difficult to foresee all the potential consequences,” Intel’s filing said. “Among other things, there could be adverse reactions, immediately or over time, from investors, employees, customers, suppliers, other business or commercial partners, foreign governments or competitors. There may also be litigation related to the transaction or otherwise and increased public or political scrutiny with respect to the Company.”

Meanwhile, it’s hard to see what Intel truly gains from the deal other than maybe getting Trump off its back for a bit. A Fitch Ratings research note reported that “the deal does not improve Intel’s BBB credit rating, which sits just above junk status” and “does not fundamentally improve customer demand for Intel chips” despite providing “more liquidity,” Reuters reported.

Intel’s filing, in addition to rattling investors, likely also serves as a warning sign to other companies that may be approached by the Trump administration to strike similar deals. So far, the administration has confirmed that the US is not eyeing a stake in Nvidia and seems unlikely to seek a stake in the Taiwan Semiconductor Manufacturing Company. While Lutnick has said he plans to push to make more deals, any chipmakers committing to increasing investments in the US, sources told the Wall Street Journal, will supposedly be spared from pressure to make a similar deal.


Ashley is a senior policy reporter for Ars Technica, dedicated to tracking social impacts of emerging policies and new technologies. She is a Chicago-based journalist with 20 years of experience.


Under pressure after setbacks, SpaceX’s huge rocket finally goes the distance

The ship made it all the way through reentry, turned to a horizontal position to descend through scattered clouds, then relit three of its engines to flip back to a vertical orientation for the final braking maneuver before splashdown.

Things to improve on

There are several takeaways from Tuesday’s flight that will require some improvements to Starship, but these are more akin to what officials might expect from a rocket test program and not the catastrophic failures of the ship that occurred earlier this year.

One of the Super Heavy booster’s 33 engines prematurely shut down during ascent. This has happened before, and while it didn’t affect the booster’s overall performance, engineers will investigate the failure to try to improve the reliability of SpaceX’s Raptor engines, each of which can generate more than a half-million pounds of thrust.

Later in the flight, cameras pointed at one of the ship’s rear flaps showed structural damage to the back of the wing. It wasn’t clear what caused the damage, but super-heated plasma burned through part of the flap as the ship fell deeper into the atmosphere. Still, the flap remained largely intact and was able to help control the vehicle through reentry and splashdown.

“We’re kind of being mean to this Starship a little bit,” Huot said on SpaceX’s live webcast. “We’re really trying to put it through the paces and kind of poke on what some of its weak points are.”

Small chunks of debris were also visible peeling off the ship during reentry. The origin of the glowing debris wasn’t immediately clear, but it may have been parts of the ship’s heat shield tiles. On this flight, SpaceX tested several different tile designs, including ceramic and metallic materials, and one tile design that uses “active cooling” to help dissipate heat during reentry.

A bright flash inside the ship’s engine bay during reentry also appeared to damage the vehicle’s aft skirt, the stainless steel structure that encircles the rocket’s six main engines.

“That’s not what we want to see,” Huot said. “We just saw some of the aft skirt just take a hit. So we’ve got some visible damage on the aft skirt. We’re continuing to reenter, though. We are intentionally stressing the ship as we go through this, so it is not guaranteed to be a smooth ride down to the Indian Ocean.

“We’ve removed a bunch of tiles in kind of critical places across the vehicle, so seeing stuff like that is still valuable to us,” he said. “We are trying to kind of push this vehicle to the limits to learn what its limits are as we design our next version of Starship.”

Shana Diez, a Starship engineer at SpaceX, perhaps summed up Tuesday’s results best on X: “It’s not been an easy year but we finally got the reentry data that’s so critical to Starship. It feels good to be back!”


The first stars may not have been as uniformly massive as we thought


Collapsing gas clouds in the early universe may have formed lower-mass stars as well.

Stars form in the universe from massive clouds of gas. Credit: European Southern Observatory, CC BY-SA

For decades, astronomers have wondered what the very first stars in the universe were like. These stars formed new chemical elements, which enriched the universe and allowed the next generations of stars to form the first planets.

The first stars were initially composed of pure hydrogen and helium, and they were massive—hundreds to thousands of times the mass of the Sun and millions of times more luminous. Their short lives ended in enormous explosions called supernovae, so they had neither the time nor the raw materials to form planets, and they should no longer exist for astronomers to observe.

At least that’s what we thought.

Two studies published in the first half of 2025 suggest that collapsing gas clouds in the early universe may have formed lower-mass stars as well. One study uses a new astrophysical computer simulation that models turbulence within the cloud, causing fragmentation into smaller, star-forming clumps. The other study—an independent laboratory experiment—demonstrates how molecular hydrogen, a molecule essential for star formation, may have formed earlier and in larger abundances. The process involves a catalyst that may surprise chemistry teachers.

As an astronomer who studies star and planet formation and their dependence on chemical processes, I am excited at the possibility that chemistry in the first 50 million to 100 million years after the Big Bang may have been more active than we expected.

These findings suggest that the second generation of stars—the oldest stars we can currently observe and possibly the hosts of the first planets—may have formed earlier than astronomers thought.

Primordial star formation

Video illustration of the star and planet formation process. Credit: Space Telescope Science Institute.

Stars form when massive clouds of hydrogen many light-years across collapse under their own gravity. The collapse continues until a luminous sphere surrounds a dense core that is hot enough to sustain nuclear fusion.

Nuclear fusion happens when two or more atomic nuclei gain enough energy to fuse together. This process creates a new element and releases an incredible amount of energy, which heats the stellar core. In the first stars, hydrogen atoms fused together to create helium.

The new star shines because its surface is hot, but the energy fueling that luminosity percolates up from its core. The luminosity of a star is its total energy output in the form of light. The star’s brightness is the small fraction of that luminosity that we directly observe.
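
The distinction between luminosity and brightness is the inverse-square law: the same luminosity $L$ spreads over an ever-larger sphere as light travels, so the brightness we measure at distance $d$ is

$$
b = \frac{L}{4\pi d^2}.
$$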

This process where stars form heavier elements by nuclear fusion is called stellar nucleosynthesis. It continues in stars after they form as their physical properties slowly change. The more massive stars can produce heavier elements such as carbon, oxygen, and nitrogen, all the way up to iron, in a sequence of fusion reactions that end in a supernova explosion.

Supernovae can create even heavier elements, completing the periodic table of elements. Lower-mass stars like the Sun, with their cooler cores, can sustain fusion only up to carbon. As they exhaust the hydrogen and helium in their cores, nuclear fusion stops, and the stars slowly evaporate.

The remnant of a high-mass star supernova explosion imaged by the Chandra X-ray Observatory, left, and the remnant of a low-mass star evaporating in a blue bubble, right. Credit: CC BY 4.0

High-mass stars have high pressure and temperature in their cores, so they burn bright and use up their gaseous fuel quickly. They last only a few million years, whereas low-mass stars—those less than two times the Sun’s mass—evolve much more slowly, with lifetimes of billions or even trillions of years.
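
The steep mass-lifetime trade-off follows from the textbook mass-luminosity relation (standard stellar astrophysics, not a result of this article): a star’s fuel supply scales roughly with its mass $M$ while its luminosity scales roughly as $M^{3.5}$, so its lifetime scales as

$$
t_\star \propto \frac{M}{L} \propto M^{-2.5},
$$

which is why doubling a star’s mass cuts its lifetime by a factor of nearly six.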

If the earliest stars were all high-mass stars, then they would have exploded long ago. But if low-mass stars also formed in the early universe, they may still exist for us to observe.

Chemistry that cools clouds

The first star-forming gas clouds, called protostellar clouds, were warm—roughly room temperature. Warm gas has internal pressure that pushes outward against the inward force of gravity trying to collapse the cloud. A hot air balloon stays inflated by the same principle. If the flame heating the air at the base of the balloon stops, the air inside cools, and the balloon begins to collapse.

Stars form when clouds of dust collapse inward and condense around a small, bright, dense core. Credit: NASA, ESA, CSA, and STScI, J. DePasquale (STScI), CC BY-ND

Only the most massive protostellar clouds with the most gravity could overcome the thermal pressure and eventually collapse. In this scenario, the first stars were all massive.

The only way to form the lower-mass stars we see today is for the protostellar clouds to cool. Gas in space cools by radiation, which transforms thermal energy into light that carries the energy out of the cloud. Hydrogen and helium atoms are not efficient radiators below several thousand degrees, but molecular hydrogen, H₂, is great at cooling gas at low temperatures.

When energized, H₂ emits infrared light, which cools the gas and lowers the internal pressure. That process would make gravitational collapse more likely in lower-mass clouds.

For decades, astronomers have reasoned that a low abundance of H₂ early on resulted in hotter clouds whose internal pressure was too high for them to easily collapse into stars. They concluded that only clouds with enormous masses, and therefore higher gravity, would collapse, producing only massive stars.
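
This pressure-gravity balance is usually quantified with the Jeans mass, the minimum mass a cloud of temperature $T$ and density $\rho$ needs in order to collapse (a textbook criterion, not one introduced by the new studies):

$$
M_J \simeq \left(\frac{5 k_B T}{G \mu m_H}\right)^{3/2} \left(\frac{3}{4\pi\rho}\right)^{1/2} \propto \frac{T^{3/2}}{\rho^{1/2}}
$$

Because $M_J$ grows as $T^{3/2}$, cooling a cloud from about 1,000 K to 200 K lowers the collapse threshold by a factor of roughly 11, which is why efficient molecular coolants matter so much for forming low-mass stars.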

Helium hydride

In a July 2025 journal article, physicist Florian Grussie and collaborators at the Max Planck Institute for Nuclear Physics demonstrated that the first molecule to form in the universe, helium hydride, HeH⁺, could have been more abundant in the early universe than previously thought. They used a computer model and conducted a laboratory experiment to verify this result.

Helium hydride? In high school science you probably learned that helium is a noble gas, meaning it does not react with other atoms to form molecules or chemical compounds. As it turns out, it does—but only under the extremely sparse and dark conditions of the early universe, before the first stars formed.

HeH⁺ reacts with hydrogen deuteride—HD, which is one normal hydrogen atom bonded to a heavier deuterium atom—to form H₂. In the process, HeH⁺ also acts as a coolant and releases heat in the form of light. So the high abundance of both molecular coolants earlier on may have allowed smaller clouds to cool faster and collapse to form lower-mass stars.

Gas flow also affects stellar initial masses

In another study, published in July 2025, astrophysicist Ke-Jung Chen led a research group at the Academia Sinica Institute of Astronomy and Astrophysics using a detailed computer simulation that modeled how gas in the early universe may have flowed.

The team’s model demonstrated that turbulence, or irregular motion, in giant collapsing gas clouds can form lower-mass cloud fragments from which lower-mass stars condense.

The study concluded that turbulence may have allowed these early gas clouds to form stars ranging from about the Sun’s mass up to 40 times more massive.

The galaxy NGC 1140 is small and contains large amounts of primordial gas with far fewer elements heavier than hydrogen and helium than are present in our Sun. This composition makes it similar to the intensely star-forming galaxies found in the early universe. These early universe galaxies were the building blocks for large galaxies such as the Milky Way. Credit: ESA/Hubble & NASA, CC BY-ND

The two new studies both predict that the first population of stars could have included low-mass stars. Now, it is up to us observational astronomers to find them.

This is no easy task. Low-mass stars have low luminosities, so they are extremely faint. Several observational studies have recently reported possible detections, but none are yet confirmed with high confidence. If they are out there, though, we will find them eventually.

Luke Keller is a professor of physics and astronomy at Ithaca College.

This article is republished from The Conversation under a Creative Commons license. Read the original article.


The Conversation is an independent source of news and views, sourced from the academic and research community. Our team of editors work with these experts to share their knowledge with the wider public. Our aim is to allow for better understanding of current affairs and complex issues, and hopefully improve the quality of public discourse on them.


SpaceX’s latest Dragon mission will breathe more fire at the space station

“Our capsule’s engines are not pointed in the right direction for optimum boost,” said Sarah Walker, SpaceX’s director of Dragon mission management. “So, this trunk module has engines pointed in the right direction to maximize efficiency of propellant usage.”

When NASA says it’s the right time, SpaceX controllers will command the Draco thrusters to ignite and gently accelerate the massive 450-ton complex. All told, the reboost kit can add about 20 mph, or 9 meters per second, to the space station’s already-dizzying speed, according to Walker.

Spetch said that’s roughly equivalent to the total reboost impulse provided by one-and-a-half Russian Progress cargo vehicles. That’s about one-third to one-fourth of the total orbit maintenance the ISS needs in a year.
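
The reboost numbers are easy to sanity-check with basic kinematics (the mass and Δv are from the article; burn durations and thruster counts are not given):

```python
# Impulse needed to add ~9 m/s to the ~450-ton station: J = m * dv.
station_mass_kg = 450_000   # "massive 450-ton complex"
delta_v_ms = 9.0            # per Walker, about 20 mph
impulse_ns = station_mass_kg * delta_v_ms
print(f"Total impulse: {impulse_ns / 1e6:.2f} MN*s")  # -> 4.05 MN*s
```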

“The boost kit will help sustain the orbiting lab’s altitude, starting in September, with a series of burns planned periodically throughout the fall of 2025,” Spetch said.

After a few months docked at the ISS, the Dragon cargo capsule will depart and head for a parachute-assisted splashdown in the Pacific Ocean off the coast of California. SpaceX will recover the pressurized capsule to fly again, while the trunk containing the reboost kit will jettison and burn up in the atmosphere.

SpaceX’s Dragon spacecraft approaches the International Space Station for docking at 7:05 am EDT (11:05 UTC) on Monday. Credit: NASA TV/Ars Technica

While this mission is SpaceX’s 33rd cargo flight to the ISS under the auspices of NASA’s multibillion-dollar Commercial Resupply Services contract, it’s also SpaceX’s 50th overall Dragon mission to the outpost. This tally includes 17 flights of the human-rated Crew Dragon.

“With CRS-33, we’ll mark our 50th voyage to ISS,” Walker said. “Just incredible. Together, these missions have (carried) well over 300,000 pounds of cargo and supplies to the orbiting lab and well over 1,000 science and research projects that are not only helping us to understand how to live and work effectively in space… but also directly contributing to critical research that serves our lives here on Earth.”

Future Dragon trunks will be able to accommodate a reboost kit or unpressurized science payloads, depending on NASA’s needs at the space station.

The design of the Dragon reboost kit is a smaller-scale version of what SpaceX will build for a much larger Dragon trunk under an $843 million contract signed with NASA last year for the US Deorbit Vehicle. This souped-up Dragon will dock with the ISS and steer it back into the atmosphere after the lab’s decommissioning in the early 2030s. The deorbit vehicle will have 46 Draco thrusters—16 to control the craft’s orientation and 30 in the trunk to provide the impulse needed to drop the station out of orbit.


Why wind farms attract so much misinformation and conspiracy theory

The recent resistance

Academic work on the question of anti-wind farm activism is revealing a pattern: Conspiracy thinking is a stronger predictor of opposition than age, gender, education, or political leaning.

In Germany, the academic Kevin Winter and colleagues found that belief in conspiracies had many times more influence on wind opposition than any demographic factor. Worryingly, presenting opponents with facts was not particularly successful.
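
A claim like “many times more influence” typically comes from comparing standardized coefficients in a regression; a sketch of what that comparison looks like (synthetic data and variable names, not Winter and colleagues’ actual models or measures):

```python
# Sketch: with z-scored predictors, coefficient magnitudes are directly
# comparable. Data and variable names are synthetic placeholders.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 1_000
X = rng.standard_normal((n, 4))  # conspiracy, age, education, politics
beta_true = np.array([0.8, 0.1, -0.05, 0.1])
opposition = X @ beta_true + rng.standard_normal(n)

fit = sm.OLS(opposition, sm.add_constant(X)).fit()
for name, coef in zip(["conspiracy", "age", "education", "politics"],
                      fit.params[1:]):
    print(f"{name:>10}: {coef:+.2f}")
# "conspiracy" dwarfs the demographic predictors, mirroring the
# pattern described in the text.
```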

In a more recent article, based on surveys in the US, UK, and Australia that looked at people’s propensity to give credence to conspiracy theories, Winter and colleagues argued that opposition is “rooted in people’s worldviews.”

If you think climate change is a hoax or a beat-up by hysterical eco-doomers, you’re going to be easily persuaded that wind turbines are poisoning groundwater, causing blackouts, or, in Trump’s words, “driving [the whales] loco.”

Wind farms are fertile ground for such theories. They are highly visible symbols of climate policy, and complex enough to be mysterious to non-specialists. A row of wind turbines can become a target for fears about modernity, energy security, or government control.

This, say Winter and colleagues, “poses a challenge for communicators and institutions committed to accelerating the energy transition.” It’s harder to take on an entire worldview than to correct a few made-up talking points.

What is it all about?

Beneath the misinformation, often driven by money or political power, there’s a deeper issue. Some people—perhaps Trump among them—don’t want to deal with the fact that fossil technologies, which brought prosperity and a sense of control, are also causing environmental crises. And these are problems that aren’t solved with the addition of more technology. It offends their sense of invulnerability, of dominance. This “anti-reflexivity,” as some academics call it, is a refusal to reflect on the costs of past successes.

It is also bound up with identity. In some corners of the online “manosphere,” concerns over climate change are being painted as effeminate.

Many boomers, especially white heterosexual men like Trump, have felt disoriented as their world has shifted and changed around them. The clean energy transition symbolizes part of this change. Perhaps this is a good way to understand why Trump is lashing out at “windmills.”

Marc Hudson, Visiting Fellow, SPRU, University of Sussex Business School, University of Sussex. This article is republished from The Conversation under a Creative Commons license. Read the original article.


Trump says US will take 10% stake in Intel because CEO wants to “keep his job”

Intel has agreed to sell the US a 10 percent stake in the company, Donald Trump announced at a news conference Friday.

The US stake is worth $10 billion, Trump said, confirming that the deal was inked following his talks with Intel CEO Lip-Bu Tan.

Trump had previously called for Tan to resign, accusing the CEO of having “concerning” ties to the Chinese Communist Party. During their meeting, the president claimed that Tan “walked in wanting to keep his job and he ended up giving us $10 billion for the United States.”

“I said, ‘I think it would be good having the United States as your partner.’ He agreed, and they’ve agreed to do it,” Trump said. “And I think it’s a great deal for them.”

Sources have suggested that Commerce Secretary Howard Lutnick pushed the idea of the US buying large stakes in various chipmakers like Intel in exchange for access to CHIPS Act funding that had already been approved. Earlier this week, Senator Bernie Sanders (I-Vt.) got behind the plan, noting that “if microchip companies make a profit from the generous grants they receive from the federal government, the taxpayers of America have a right to a reasonable return on that investment.”

However, Trump apparently doesn’t plan to seek a stake in every company that the US has awarded CHIPS funding to. Instead, he likely plans to only approach chipmakers that won’t commit to increasing their investments in the US. For example, a government official, speaking anonymously, told The Wall Street Journal Friday that “the administration isn’t looking to own equity in companies like TSMC that are increasing their investments” in the US.


DeepSeek v3.1 Is Not Having a Moment

What if DeepSeek released a model claiming 66 on SWE and almost no one tried using it? Would it be any good? Would you be able to tell? Or would we get the shortest post of the year?

Why are we settling for v3.1, and why have we yet to see DeepSeek release v4 or r2?

Eleanor Olcott and Zijing Wu: Chinese artificial intelligence company DeepSeek delayed the release of its new model after failing to train it using Huawei’s chips, highlighting the limits of Beijing’s push to replace US technology.

DeepSeek was encouraged by authorities to adopt Huawei’s Ascend processor rather than use Nvidia’s systems after releasing its R1 model in January, according to three people familiar with the matter.

But the Chinese start-up encountered persistent technical issues during its R2 training process using Ascend chips, prompting it to use Nvidia chips for training and Huawei’s for inference, said the people.

The issues were the main reason the model’s launch was delayed from May, said a person with knowledge of the situation, causing it to lose ground to rivals.

The real world so often involves people acting so much stupider than you could write into fiction.

America tried to sell China H20s and China decided they didn’t want them and now Nvidia is halting related orders with suppliers.

DeepSeek says that the main restriction on their development is lack of compute, and the PRC responds not by helping them get better chips but by advising them to not use the chips that they have, greatly slowing things down at least for a while.

In any case, DeepSeek v3.1 exists now, and remarkably few people care?

DeepSeek: Introducing DeepSeek-V3.1: our first step toward the agent era! 🚀

🧠 Hybrid inference: Think & Non-Think — one model, two modes

⚡️ Faster thinking: DeepSeek-V3.1-Think reaches answers in less time vs. DeepSeek-R1-0528

🛠️ Stronger agent skills: Post-training boosts tool use and multi-step agent tasks

Try it now — toggle Think/Non-Think via the “DeepThink” button.

API Update ⚙️

🔹 deepseek-chat → non-thinking mode

🔹 deepseek-reasoner → thinking mode

🧵 128K context for both

🔌 Anthropic API format supported.

Strict Function Calling supported in Beta API.

🚀 More API resources, smoother API experience

Tools & Agents Upgrades 🧰

📈 Better results on SWE / Terminal-Bench

🔍 Stronger multi-step reasoning for complex search tasks

⚡️ Big gains in thinking efficiency

🔹 V3.1 Base: 840B tokens continued pretraining for long context extension on top of V3

🔹 Tokenizer & chat template updated — new tokenizer config.

🔗 V3.1 Base Open-source weights.

🔗 V3.1 Open-source weights.

Pricing Changes 💳

🔹 New pricing starts & off-peak discounts end Sep 5, 2025, 16:00 UTC

🔹 Until then, APIs follow current pricing

📝 Pricing page.
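
Because the endpoint keeps the familiar OpenAI-compatible format, switching between the two modes is just a model-name change. A minimal sketch (the environment variable and prompt are placeholders; the model names are from the announcement above):

```python
# Call both V3.1 modes through DeepSeek's OpenAI-compatible endpoint.
import os
from openai import OpenAI

client = OpenAI(
    api_key=os.environ["DEEPSEEK_API_KEY"],  # placeholder env var
    base_url="https://api.deepseek.com",
)

for model in ("deepseek-chat", "deepseek-reasoner"):  # non-thinking / thinking
    reply = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": "One-line summary of SWE-bench?"}],
    )
    print(model, "->", reply.choices[0].message.content)
```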

Teortaxes: for now seems to have the same performance ceiling as 0528, maybe a bit weaker on some, a bit stronger on other problems. The main change is that it’s a unified merge that uses ≥2x fewer reasoning tokens. I take it as a trial balloon before V4 that’ll be unified out of the box.

There are some impressive scores here. A true 66 on SWE would be very strong.

There’s also the weird result where it is claimed to outscore Opus 4 on Aider Polyglot at a low price.

Wes Roth: DeepSeek has quietly published V3.1, a 685-billion-parameter open-source model that folds chat, reasoning, and coding into a single architecture, handles 128K-token context windows, and posts a 71.6% score on the Aider coding benchmark, edging out Claude Opus 4 while costing ~68× less in inference.

But these two data points don’t seem backed up by the other reactions, or especially the lack of other reactions, or some other test results.

Artificial Analysis has it coming in at 60 versus r1’s 59, which would be only a small improvement.

Hasan Can said it hallucinates a lot. Steve Strickland says ‘it’s the worst LLM I’ve ever tried,’ complaining about it failing a mundane task, which presumably was very bad luck.

I tried to conduct Twitter polls, but well over 90% of respondents just clicked ‘see results,’ which left me with only a handful of real responses and means Lizardman Constant problems and small sample size invalidate the results, beyond confirming no one is looking, and the different polls don’t entirely agree with each other as a result.

If this were most open model companies, I would treat this lack of reaction as indicating there was nothing here, that they likely targeted SWE as a benchmark, and move on.

Since it is DeepSeek, I give them more credit than that, but am still going to assume this is only a small incremental upgrade that does not change the overall picture. However, if 3.1 really was at 66-level for real in practice, it has been several days now, and people would likely be shouting it from the rooftops. They’re not.

Even if no one finds anything to do with it, I don’t downgrade DeepSeek much for 3.1 not impressing compared to if they hadn’t released anything. It’s fine to do incremental improvements. They should do a v3.1 here.

The dumbest style of reaction is when a company offers an incremental improvement (see: GPT-5) and people think that means it’s all over for them, or for AI in general, because it didn’t sufficiently blow them away. Chill out.

It’s also not fair to fully pin this on DeepSeek when they were forced to do a lot of their training this year on Huawei Ascend chips rather than Nvidia chips. Assuming, that is, they are going to be allowed to switch back.

Either way, the clock is ticking on v4 and r2.
