Author name: Rejus Almole


Are They Starting To Take Our Jobs?

Is generative AI making it harder for young people to find jobs?

My answer is:

  1. Yes, definitely, in terms of finding and getting hired for any given job that exists. That’s getting harder. AI is most definitely screwing up that process.

  2. Yes, probably, in terms of employment in automation-impacted sectors. It always seemed odd to think otherwise, and this week’s new study has strong evidence here.

  3. Maybe, overall, in terms of the jobs available (excluding search and matching effects from #1), because AI should be increasing employment in areas not being automated yet, and that effect can be small and still dominate.

The claims go back and forth on the employment effects of AI. As Derek Thompson points out, if you go by articles in the popular press, we’ve gone from ‘possibly’ to ‘definitely yes’ to ‘almost certainly no’ until what Derek describes as this week’s ‘plausibly yes’ and which others are treating as stronger than that.

Derek Thompson: To be honest with you, I considered this debate well and truly settled. No, I’d come to think, AI is probably not wrecking employment for young people. But now, I’m thinking about changing my mind again.

It’s weird to pull an ‘I told you all so’ when what you said was ‘I am confused and you all are overconfident,’ but yeah, basically. The idea that this was ‘well and truly settled’ always seemed absurd to me even considering only present effects. None of these arguments should have filled anyone with confidence, and neither should the new one. And this is AI, so even if it definitively wasn’t happening now, who knows where we would be six months later.

People changing their minds a lot reflects, as Derek notes, the way discovery, evaluation, discourse and science are supposed to work, except for the overconfidence.

Most recently, before this week, we had claims that what looks like the effects of AI automation is actually the delayed impact of Covid, interest rate changes, earlier overhiring, or other non-AI market trends.

The new hotness is this new Stanford study from Brynjolfsson, Chandar and Chen:

This paper examines changes in the labor market for occupations exposed to generative artificial intelligence using high-frequency administrative data from the largest payroll software provider in the United States.

We present six facts that characterize these shifts. We find that since the widespread adoption of generative AI, early-career workers (ages 22-25) in the most AI-exposed occupations have experienced a 13 percent relative decline in employment even after controlling for firm-level shocks.

In contrast, employment for workers in less exposed fields and more experienced workers in the same occupations has remained stable or continued to grow.

We also find that adjustments occur primarily through employment rather than compensation. Furthermore, employment declines are concentrated in occupations where AI is more likely to automate, rather than augment, human labor. Our results are robust to alternative explanations, such as excluding technology-related firms and excluding occupations amenable to remote work.

Effects acting through employment rather than compensation makes sense since the different fields are competing against each other for labor and wages are sticky downwards even across workers.

Bharat Chandar (author): We observe millions of workers each month. We use this to cut the data finely by age and occupation.

What do we find?

Stories about young SW developers struggling to find work borne out in data

Employment for 22-25 y/o developers ⬇️ ~20% from peak in 2022. Older ages show steady rise.

This isn’t just about software. See a similar pattern for customer service reps, another job highly exposed to AI. For both roles, the decline is sharpest for the 22-25 age group, with older, more experienced workers less affected.

In contrast, jobs less exposed to AI, like health aides, show the opposite trend. These jobs, which require in-person physical tasks, have seen the fastest employment growth among youngest workers.

Overall, job market for entry-level workers has been stagnant since late 2022, while market for experienced workers remains robust. Stagnation for young workers driven by declines in AI-exposed jobs. Of course, lots of changes in the economy, so this is not all caused by AI.

Note the y-axis scale on the graphs, but that does seem like a definitive result. It seems far too fast and targeted to be the result of non-AI factors.

John Burn-Murdoch: Very important paper, for two reasons:

  1. Key finding: employment *is* falling in early-career roles exposed to LLM automation

  2. Shows that administrative data (millions of payroll records) is much better than survey data for questions requiring precision (occupation x age)

There’s always that battle between ‘our findings are robust to various things’ and ‘your findings go away when you account for this particular thing in this way,’ and different findings appear to contradict each other.

I don’t know for sure who is right, but I was convinced by their explanation of why they have better data sources and thus they’re right and the FT study was wrong, in terms of there being relative entry-level employment effects that vary based on the amount of automation in each sector.

Areas with automation from AI saw job losses at entry level, whereas areas with AI amplification saw job gains, but we should expect more full automation over time.

There’s the additional twist that a 13 percent decline in employment for the AI-exposed early-career jobs does not mean work is harder to find. Everyone agrees AI will automate away some jobs. The bull case for employment is not that those jobs don’t go away. It is that those jobs are replaced by other jobs. So the 13% could be an 11% decline in some areas and a 2% increase in other larger areas, where they cancel out. AI is driving substantial economic growth already which should create jobs. We can’t tell.
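A toy, back-of-the-envelope calculation makes the canceling-out concrete: weight each sector’s employment change by its share of entry-level jobs. All numbers here are invented for illustration; only the 13 percent decline echoes the study’s figure.

```python
# Hypothetical sector shares and relative employment changes (invented
# numbers for illustration; only the 13% decline echoes the study's figure).
sectors = {
    # name: (share of entry-level employment, relative change)
    "ai_exposed": (0.20, -0.13),  # exposed occupations decline 13%
    "other":      (0.80, +0.04),  # larger, less-exposed sectors grow modestly
}

# Aggregate change is the share-weighted sum of sector changes.
aggregate = sum(share * change for share, change in sectors.values())
print(f"Aggregate entry-level change: {aggregate:+.1%}")  # prints +0.6%
```

On these assumed weights the exposed-sector decline is fully offset by modest growth elsewhere, which is why the 13 percent figure alone cannot tell us whether entry-level work overall is scarcer.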

There is one place I am very confident AI is making things harder. That is the many ways it is making it harder to find and get hired for what jobs do exist. Automated job applications are flooding and breaking the job application market, most of all in software but across the board. Matching is by all reports getting harder rather than easier, although if you are ahead of the curve on AI use here you presumably have an edge.

Predictions are hard, especially about the future, but I would as strongly as always disagree with this advice from Derek Thompson:

Derek Thompson: Someone once asked me recently if I had any advice on how to predict the future when I wrote about social and technological trends. Sure, I said. My advice is that predicting the future is impossible, so the best thing you can do is try to describe the present accurately.

Since most people live in the past, hanging onto stale narratives and outdated models, people who pay attention to what’s happening as it happens will appear to others like they’re predicting the future when all they’re doing is describing the present.

Predicting the future is hard in some ways, but that is no reason to throw up one’s hands and pretend to know nothing. We can especially know big things based on broad trends; destinations are often clearer than the road towards them. And in the age of AI, while predicting the present puts you ahead of many, we can know for certain many ways the future will not look like the present.

The most important and in some ways easiest things we can say involve what would happen with powerful or transformational AI, and that is really important, the only truly important thing, but in this particular context that’s not important right now.

If by the future we do mean the effect on jobs, and we presume that the world is not otherwise transformed so much we have far bigger problems, we can indeed still say many things. At minimum, we know many jobs will be amplified or augmented, and many more jobs will be fully automated or rendered irrelevant, even if we have high uncertainty about which ones in what order how fast.

We know that there will be some number of new jobs created by this process, especially if we have time to adjust, but that as AI ‘automates the new jobs as well’ this will get harder and eventually break. And we know that there is a lot of slack for an increasingly wealthy civilization to hire people for quite a lot of what I call ‘shadow jobs,’ which are jobs that would already exist except labor and capital currently choose better opportunities, again if those jobs too are not yet automated. Eventually we should expect unemployment.

Getting more speculative and less confident, earlier than that, it makes sense to expect unemployment for those lacking a necessary threshold of skill as technology advances, even if AI wasn’t a direct substitute for your intelligence. Notice that the employment charts above start at age 22. They used to start at age 18, and before that even younger, or they would have if we had charts back then.




Intel details everything that could go wrong with US taking a 10% stake


Intel warns investors to brace for losses and uncertainties.

Some investors are not happy that Intel agreed to sell the US a 10 percent stake in the company after Donald Trump attacked Intel CEO Lip-Bu Tan with a demand to resign.

After Intel accepted the deal at a meeting with the president, it alarmed some investors when Trump boasted that his pressure campaign worked, claiming Tan “walked in wanting to keep his job, and he ended up giving us $10 billion for the United States.”

“It sets a bad precedent if the president can just take 10 percent of a company by threatening the CEO,” James McRitchie, a private investor and shareholder activist in California who owns Intel shares, told Reuters. To McRitchie, Tan accepting the deal effectively sent the message that “we love Trump, we don’t want 10 percent of our company taken away.”

McRitchie wasn’t the only shareholder who raised an eyebrow. Kristin Hull, chief investment officer of a California-based activist firm called Nia Impact Capital—which manages shares in Intel for its clients—told Reuters she has “more questions than confidence” about how the deal will benefit investors. To her, the deal seems to blur some lines “between where is the government and where is the private sector.”

As Reuters explains, Intel agreed to convert $11.1 billion in CHIPS funding and other grants “into a 9.9 percent equity stake in Intel.”

Some early supporters of the agreement—including tech giants like Microsoft and Trump critics like Bernie Sanders (I-Vt.)—have praised the deal as allowing the US to profit off billions in CHIPS grants that Intel was awarded under the Biden administration. After pushing for the deal, Commerce Secretary Howard Lutnick criticized Joe Biden for giving away CHIPS funding “for free,” while praising Trump for turning the CHIPS Act grants into “equity for the Trump administration” and “for the American people.”

But to critics of the deal, it seems weird for the US to swoop in and take a stake in a company that doesn’t need government assistance. The only recent precedent was the US temporarily taking stakes in key companies considered vital to the economy that risked going under during the 2008 financial crisis.

Compare that to the Intel deal, where Tan has made it clear that Intel, while struggling to compete with rivals, “didn’t need the money,” Reuters noted—largely due to SoftBank purchasing $2 billion in Intel shares in the days prior to the US agreement being reached. Instead, the US is incentivized to take the stake to help further Trump’s mission to quickly build up a domestic chip manufacturing supply chain that can keep the US a global technology leader at the forefront of AI innovation.

Investors told Reuters that it’s unusual for the US to take this much control over a company that’s not in crisis, noting that “this level of tractability was not usually associated with relations between businesses and Washington.”

Intel did not immediately respond to Ars’ request to comment on investors’ concerns, but a spokesperson told Reuters that Intel’s board has already approved the deal. In a press release, the company emphasized that “the government’s investment in Intel will be a passive ownership, with no Board representation or other governance or information rights. The government also agrees to vote with the Company’s Board of Directors on matters requiring shareholder approval, with limited exceptions.”

Intel reveals why investors should be spooked

The Trump administration has also stressed that the US stake in Intel does not give the Commerce Department any board seats or any voting or governance rights in Intel. Instead, the terms stipulate that the Commerce Department must “support the board on director nominees and proposals,” an Intel securities filing said.

However, the US can vote “as it wishes,” Intel reported, and experts suggested to Reuters that regulations may be needed to “limit government opportunities for abuses such as insider trading.” That could reassure investors somewhat, Rich Weiss, a senior vice president and chief investment officer of multi-asset strategies for American Century Investments, told Reuters. Without such laws, Weiss noted that “in an unchecked scenario of government direct investing, trading in those companies could be much riskier for investors.”

It also seems possible that the US could influence Intel’s decisions without the government explicitly taking voting control, experts suggested. “Several investors and representatives” told Reuters that the US could impact major decisions regarding things like layoffs or business shifts into foreign markets. At a certain point, Intel may be stuck choosing between corporate and national interests, Robert McCormick, executive director of the Council of Institutional Investors, told Reuters.

“A government stake in an otherwise private entity potentially creates a conflict between what’s right for the company and what’s right for the country,” McCormick suggested.

Further, Intel becoming partly state-controlled risks disrupting Intel’s non-US business, subjecting the company to “additional regulations, obligations or restrictions, such as foreign subsidy laws or otherwise, in other countries,” Intel’s filing said.

In the filing, Intel confirmed directly to investors that they have good cause to be spooked by the US stake. Offering a bulleted list, the company outlined “a number of risks and uncertainties” that could “adversely impact” shareholders due to “the US Government’s ownership of significant equity interests in the company.”

Perhaps most alarming in the short term, Intel admitted that the deal will dilute investors’ stock due to the discounted shares issued to Trump. And their shares could suffer additional dilutions if certain terms of the deal are “triggered” or “exercised,” Intel noted.

In the long term, investors were told that the US stake may limit the company’s eligibility for future federal grants while leaving Intel shareholders dwelling in the uncertainty of knowing that terms of the deal could be voided or changed over time, as federal administration and congressional priorities shift.

Additionally, Intel forecasted potential legal challenges over the deal, which Intel anticipates could come from both third parties and the US government.

The final bullet point in Intel’s risk list could be the most ominous, though. Due to the unprecedented nature of the deal, Intel fears there’s no way to anticipate myriad other challenges the deal may trigger.

“It is difficult to foresee all the potential consequences,” Intel’s filing said. “Among other things, there could be adverse reactions, immediately or over time, from investors, employees, customers, suppliers, other business or commercial partners, foreign governments or competitors. There may also be litigation related to the transaction or otherwise and increased public or political scrutiny with respect to the Company.”

Meanwhile, it’s hard to see what Intel truly gains from the deal other than maybe getting Trump off its back for a bit. A Fitch Ratings research note reported that “the deal does not improve Intel’s BBB credit rating, which sits just above junk status” and “does not fundamentally improve customer demand for Intel chips” despite providing “more liquidity,” Reuters reported.

Intel’s filing, in addition to rattling investors, likely also serves as a warning sign to other companies who may be approached by the Trump administration to strike similar deals. So far, the administration has confirmed that the US is not eyeing a stake in Nvidia and seems unlikely to seek a stake in the Taiwan Semiconductor Manufacturing Company. While Lutnick has said he plans to push to make more deals, any chipmakers committing to increasing investments in the US, sources told the Wall Street Journal, will supposedly be spared from pressure to make a similar deal.


Ashley is a senior policy reporter for Ars Technica, dedicated to tracking social impacts of emerging policies and new technologies. She is a Chicago-based journalist with 20 years of experience.



Under pressure after setbacks, SpaceX’s huge rocket finally goes the distance

The ship made it all the way through reentry, turned to a horizontal position to descend through scattered clouds, then relit three of its engines to flip back to a vertical orientation for the final braking maneuver before splashdown.

Things to improve on

There are several takeaways from Tuesday’s flight that will require some improvements to Starship, but these are more akin to what officials might expect from a rocket test program and not the catastrophic failures of the ship that occurred earlier this year.

One of the Super Heavy booster’s 33 engines prematurely shut down during ascent. This has happened before, and while it didn’t affect the booster’s overall performance, engineers will investigate the failure to try to improve the reliability of SpaceX’s Raptor engines, each of which can generate more than a half-million pounds of thrust.

Later in the flight, cameras pointed at one of the ship’s rear flaps showed structural damage to the back of the wing. It wasn’t clear what caused the damage, but super-heated plasma burned through part of the flap as the ship fell deeper into the atmosphere. Still, the flap remained largely intact and was able to help control the vehicle through reentry and splashdown.

“We’re kind of being mean to this Starship a little bit,” Huot said on SpaceX’s live webcast. “We’re really trying to put it through the paces and kind of poke on what some of its weak points are.”

Small chunks of debris were also visible peeling off the ship during reentry. The origin of the glowing debris wasn’t immediately clear, but it may have been parts of the ship’s heat shield tiles. On this flight, SpaceX tested several different tile designs, including ceramic and metallic materials, and one tile design that uses “active cooling” to help dissipate heat during reentry.

A bright flash inside the ship’s engine bay during reentry also appeared to damage the vehicle’s aft skirt, the stainless steel structure that encircles the rocket’s six main engines.

“That’s not what we want to see,” Huot said. “We just saw some of the aft skirt just take a hit. So we’ve got some visible damage on the aft skirt. We’re continuing to reenter, though. We are intentionally stressing the ship as we go through this, so it is not guaranteed to be a smooth ride down to the Indian Ocean.

“We’ve removed a bunch of tiles in kind of critical places across the vehicle, so seeing stuff like that is still valuable to us,” he said. “We are trying to kind of push this vehicle to the limits to learn what its limits are as we design our next version of Starship.”

Shana Diez, a Starship engineer at SpaceX, perhaps summed up Tuesday’s results best on X: “It’s not been an easy year but we finally got the reentry data that’s so critical to Starship. It feels good to be back!”



The first stars may not have been as uniformly massive as we thought


Collapsing gas clouds in the early universe may have formed lower-mass stars as well.

Stars form in the universe from massive clouds of gas. Credit: European Southern Observatory, CC BY-SA

For decades, astronomers have wondered what the very first stars in the universe were like. These stars formed new chemical elements, which enriched the universe and allowed the next generations of stars to form the first planets.

The first stars were initially composed of pure hydrogen and helium, and they were massive—hundreds to thousands of times the mass of the Sun and millions of times more luminous. Their short lives ended in enormous explosions called supernovae, so they had neither the time nor the raw materials to form planets, and they should no longer exist for astronomers to observe.

At least that’s what we thought.

Two studies published in the first half of 2025 suggest that collapsing gas clouds in the early universe may have formed lower-mass stars as well. One study uses a new astrophysical computer simulation that models turbulence within the cloud, causing fragmentation into smaller, star-forming clumps. The other study—an independent laboratory experiment—demonstrates how molecular hydrogen, a molecule essential for star formation, may have formed earlier and in larger abundances. The process involves a catalyst that may surprise chemistry teachers.

As an astronomer who studies star and planet formation and their dependence on chemical processes, I am excited at the possibility that chemistry in the first 50 million to 100 million years after the Big Bang may have been more active than we expected.

These findings suggest that the second generation of stars—the oldest stars we can currently observe and possibly the hosts of the first planets—may have formed earlier than astronomers thought.

Primordial star formation

Video illustration of the star and planet formation process. Credit: Space Telescope Science Institute.

Stars form when massive clouds of hydrogen many light-years across collapse under their own gravity. The collapse continues until a luminous sphere surrounds a dense core that is hot enough to sustain nuclear fusion.

Nuclear fusion happens when two or more atoms gain enough energy to fuse together. This process creates a new element and releases an incredible amount of energy, which heats the stellar core. In the first stars, hydrogen atoms fused together to create helium.

The new star shines because its surface is hot, but the energy fueling that luminosity percolates up from its core. The luminosity of a star is its total energy output in the form of light. The star’s brightness is the small fraction of that luminosity that we directly observe.

This process where stars form heavier elements by nuclear fusion is called stellar nucleosynthesis. It continues in stars after they form as their physical properties slowly change. The more massive stars can produce heavier elements such as carbon, oxygen, and nitrogen, all the way up to iron, in a sequence of fusion reactions that end in a supernova explosion.

Supernovae can create even heavier elements, completing the periodic table of elements. Lower-mass stars like the Sun, with their cooler cores, can sustain fusion only up to carbon. As they exhaust the hydrogen and helium in their cores, nuclear fusion stops, and the stars slowly evaporate.


The remnant of a high-mass star supernova explosion imaged by the Chandra X-ray Observatory, left, and the remnant of a low-mass star evaporating in a blue bubble, right. Credit: CC BY 4.0

High-mass stars have high pressure and temperature in their cores, so they burn bright and use up their gaseous fuel quickly. They last only a few million years, whereas low-mass stars—those less than two times the Sun’s mass—evolve much more slowly, with lifetimes of billions or even trillions of years.

If the earliest stars were all high-mass stars, then they would have exploded long ago. But if low-mass stars also formed in the early universe, they may still exist for us to observe.

Chemistry that cools clouds

The first star-forming gas clouds, called protostellar clouds, were warm—roughly room temperature. Warm gas has internal pressure that pushes outward against the inward force of gravity trying to collapse the cloud. A hot air balloon stays inflated by the same principle. If the flame heating the air at the base of the balloon stops, the air inside cools, and the balloon begins to collapse.

Stars form when clouds of dust collapse inward and condense around a small, bright, dense core. Credit: NASA, ESA, CSA, and STScI, J. DePasquale (STScI), CC BY-ND

Only the most massive protostellar clouds with the most gravity could overcome the thermal pressure and eventually collapse. In this scenario, the first stars were all massive.

The only way to form the lower-mass stars we see today is for the protostellar clouds to cool. Gas in space cools by radiation, which transforms thermal energy into light that carries the energy out of the cloud. Hydrogen and helium atoms are not efficient radiators below several thousand degrees, but molecular hydrogen, H₂, is great at cooling gas at low temperatures.

When energized, H₂ emits infrared light, which cools the gas and lowers the internal pressure. That process would make gravitational collapse more likely in lower-mass clouds.

For decades, astronomers have reasoned that a low abundance of H₂ early on resulted in hotter clouds whose internal pressure was too high for them to easily collapse into stars. They concluded that only clouds with enormous masses, and therefore higher gravity, would collapse, leaving more massive stars.

Helium hydride

In a July 2025 journal article, physicist Florian Grussie and collaborators at the Max Planck Institute for Nuclear Physics demonstrated that the first molecule to form in the universe, helium hydride, HeH⁺, could have been more abundant in the early universe than previously thought. They used a computer model and conducted a laboratory experiment to verify this result.

Helium hydride? In high school science you probably learned that helium is a noble gas, meaning it does not react with other atoms to form molecules or chemical compounds. As it turns out, it does—but only under the extremely sparse and dark conditions of the early universe, before the first stars formed.

HeH⁺ reacts with hydrogen deuteride—HD, which is one normal hydrogen atom bonded to a heavier deuterium atom—to form H₂. In the process, HeH⁺ also acts as a coolant and releases heat in the form of light. So the high abundance of both molecular coolants earlier on may have allowed smaller clouds to cool faster and collapse to form lower-mass stars.

Gas flow also affects stellar initial masses

In another study, published in July 2025, astrophysicist Ke-Jung Chen led a research group at the Academia Sinica Institute of Astronomy and Astrophysics using a detailed computer simulation that modeled how gas in the early universe may have flowed.

The team’s model demonstrated that turbulence, or irregular motion, in giant collapsing gas clouds can form lower-mass cloud fragments from which lower-mass stars condense.

The study concluded that turbulence may have allowed these early gas clouds to form stars ranging from roughly the Sun’s mass up to 40 times more massive.


The galaxy NGC 1140 is small and contains large amounts of primordial gas with far fewer elements heavier than hydrogen and helium than are present in our Sun. This composition makes it similar to the intensely star-forming galaxies found in the early universe. These early universe galaxies were the building blocks for large galaxies such as the Milky Way. Credit: ESA/Hubble & NASA, CC BY-ND

The two new studies both predict that the first population of stars could have included low-mass stars. Now, it is up to us observational astronomers to find them.

This is no easy task. Low-mass stars have low luminosities, so they are extremely faint. Several observational studies have recently reported possible detections, but none are yet confirmed with high confidence. If they are out there, though, we will find them eventually.

Luke Keller is a professor of physics and astronomy at Ithaca College.

This article is republished from The Conversation under a Creative Commons license. Read the original article.


The Conversation is an independent source of news and views, sourced from the academic and research community. Our team of editors work with these experts to share their knowledge with the wider public. Our aim is to allow for better understanding of current affairs and complex issues, and hopefully improve the quality of public discourse on them.



SpaceX’s latest Dragon mission will breathe more fire at the space station

“Our capsule’s engines are not pointed in the right direction for optimum boost,” said Sarah Walker, SpaceX’s director of Dragon mission management. “So, this trunk module has engines pointed in the right direction to maximize efficiency of propellant usage.”

When NASA says it’s the right time, SpaceX controllers will command the Draco thrusters to ignite and gently accelerate the massive 450-ton complex. All told, the reboost kit can add about 20 mph, or 9 meters per second, to the space station’s already-dizzying speed, according to Walker.
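As a quick sanity check on the quoted figures, converting 20 mph to SI and multiplying by the station’s stated 450-ton mass gives the total impulse the reboost kit implies. The assumption that “450-ton” means metric tons is mine, not the article’s.

```python
# Unit check for the quoted reboost figures. Assumes "450-ton" means
# 450 metric tons; the mph-to-m/s factor is exact by definition.
MPH_TO_MS = 1609.344 / 3600       # 1 mile = 1609.344 m, 1 hour = 3600 s
delta_v = 20 * MPH_TO_MS          # ≈ 8.94 m/s, matching the quoted ~9 m/s
station_mass = 450_000            # kg (450 metric tons, assumed)
impulse = station_mass * delta_v  # total impulse in N·s

print(f"delta-v = {delta_v:.2f} m/s, impulse ≈ {impulse/1e6:.1f} MN·s")
```

The conversion confirms the article’s rounding: 20 mph is 8.94 m/s, quoted as “9 meters per second.”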

Spetch said that’s roughly equivalent to the total reboost impulse provided by one-and-a-half Russian Progress cargo vehicles. That’s about one-third to one-fourth of the total orbit maintenance the ISS needs in a year.

“The boost kit will help sustain the orbiting lab’s altitude, starting in September, with a series of burns planned periodically throughout the fall of 2025,” Spetch said.

After a few months docked at the ISS, the Dragon cargo capsule will depart and head for a parachute-assisted splashdown in the Pacific Ocean off the coast of California. SpaceX will recover the pressurized capsule to fly again, while the trunk containing the reboost kit will jettison and burn up in the atmosphere.

SpaceX’s Dragon spacecraft approaches the International Space Station for docking at 7:05 am EDT (11:05 UTC) on Monday. Credit: NASA TV/Ars Technica

While this mission is SpaceX’s 33rd cargo flight to the ISS under the auspices of NASA’s multibillion-dollar Commercial Resupply Services contract, it’s also SpaceX’s 50th overall Dragon mission to the outpost. This tally includes 17 flights of the human-rated Crew Dragon.

“With CRS-33, we’ll mark our 50th voyage to ISS,” Walker said. “Just incredible. Together, these missions have (carried) well over 300,000 pounds of cargo and supplies to the orbiting lab and well over 1,000 science and research projects that are not only helping us to understand how to live and work effectively in space… but also directly contributing to critical research that serves our lives here on Earth.”

Future Dragon trunks will be able to accommodate a reboost kit or unpressurized science payloads, depending on NASA’s needs at the space station.

The design of the Dragon reboost kit is a smaller-scale version of what SpaceX will build for a much larger Dragon trunk under a $843 million contract signed with NASA last year for the US Deorbit Vehicle. This souped-up Dragon will dock with the ISS and steer it back into the atmosphere after the lab’s decommissioning in the early 2030s. The deorbit vehicle will have 46 Draco thrusters—16 to control the craft’s orientation and 30 in the trunk to provide the impulse needed to drop the station out of orbit.



Why wind farms attract so much misinformation and conspiracy theory

The recent resistance

Academic work on the question of anti-wind farm activism is revealing a pattern: Conspiracy thinking is a stronger predictor of opposition than age, gender, education, or political leaning.

In Germany, the academic Kevin Winter and colleagues found that belief in conspiracies had many times more influence on wind opposition than any demographic factor. Worryingly, presenting opponents with facts was not particularly successful.

In a more recent article, based on surveys in the US, UK, and Australia that looked at people’s propensity to give credence to conspiracy theories, Winter and colleagues argued that opposition is “rooted in people’s worldviews.”

If you think climate change is a hoax or a beat-up by hysterical eco-doomers, you’re going to be easily persuaded that wind turbines are poisoning groundwater, causing blackouts, or, in Trump’s words, “driving [the whales] loco.”

Wind farms are fertile ground for such theories. They are highly visible symbols of climate policy, and complex enough to be mysterious to non-specialists. A row of wind turbines can become a target for fears about modernity, energy security, or government control.

This, say Winter and colleagues, “poses a challenge for communicators and institutions committed to accelerating the energy transition.” It’s harder to take on an entire worldview than to correct a few made-up talking points.

What is it all about?

Beneath the misinformation, often driven by money or political power, there’s a deeper issue. Some people—perhaps Trump among them—don’t want to deal with the fact that fossil technologies, which brought prosperity and a sense of control, are also causing environmental crises. And these are problems that aren’t solved with the addition of more technology. It offends their sense of invulnerability, of dominance. This “anti-reflexivity,” as some academics call it, is a refusal to reflect on the costs of past successes.

It is also bound up with identity. In some corners of the online “manosphere,” concerns over climate change are being painted as effeminate.

Many boomers, especially white heterosexual men like Trump, have felt disoriented as their world has shifted and changed around them. The clean energy transition symbolizes part of this change. Perhaps this is a good way to understand why Trump is lashing out at “windmills.”

Marc Hudson, Visiting Fellow, SPRU, University of Sussex Business School, University of Sussex. This article is republished from The Conversation under a Creative Commons license. Read the original article.



Trump says US will take 10% stake in Intel because CEO wants to “keep his job”

Intel has agreed to sell the US a 10 percent stake in the company, Donald Trump announced at a news conference Friday.

The US stake is worth $10 billion, Trump said, confirming that the deal was inked following his talks with Intel CEO Lip-Bu Tan.

Trump had previously called for Tan to resign, accusing the CEO of having “concerning” ties to the Chinese Communist Party. During their meeting, the president claimed that Tan “walked in wanting to keep his job and he ended up giving us $10 billion for the United States.”

“I said, ‘I think it would be good having the United States as your partner.’ He agreed, and they’ve agreed to do it,” Trump said. “And I think it’s a great deal for them.”

Sources have suggested that Commerce Secretary Howard Lutnick pushed the idea of the US buying large stakes in various chipmakers like Intel in exchange for access to CHIPS Act funding that had already been approved. Earlier this week, Senator Bernie Sanders (I-Vt.) got behind the plan, noting that “if microchip companies make a profit from the generous grants they receive from the federal government, the taxpayers of America have a right to a reasonable return on that investment.”

However, Trump apparently doesn’t plan to seek a stake in every company that the US has awarded CHIPS funding to. Instead, he likely plans to only approach chipmakers that won’t commit to increasing their investments in the US. For example, a government official, speaking anonymously, told The Wall Street Journal Friday that “the administration isn’t looking to own equity in companies like TSMC that are increasing their investments” in the US.



DeepSeek v3.1 Is Not Having a Moment

What if DeepSeek released a model claiming 66 on SWE and almost no one tried using it? Would it be any good? Would you be able to tell? Or would we get the shortest post of the year?

Why are we settling for v3.1, with no sign yet of DeepSeek releasing v4 or r2?

Eleanor Olcott and Zijing Wu: Chinese artificial intelligence company DeepSeek delayed the release of its new model after failing to train it using Huawei’s chips, highlighting the limits of Beijing’s push to replace US technology.

DeepSeek was encouraged by authorities to adopt Huawei’s Ascend processor rather than use Nvidia’s systems after releasing its R1 model in January, according to three people familiar with the matter.

But the Chinese start-up encountered persistent technical issues during its R2 training process using Ascend chips, prompting it to use Nvidia chips for training and Huawei’s for inference, said the people.

The issues were the main reason the model’s launch was delayed from May, said a person with knowledge of the situation, causing it to lose ground to rivals.

The real world so often involves people acting so much stupider than you could write into fiction.

America tried to sell China H20s and China decided they didn’t want them and now Nvidia is halting related orders with suppliers.

DeepSeek says that the main restriction on their development is lack of compute, and the PRC responds not by helping them get better chips but by advising them to not use the chips that they have, greatly slowing things down at least for a while.

In any case, DeepSeek v3.1 exists now, and remarkably few people care?

DeepSeek: Introducing DeepSeek-V3.1: our first step toward the agent era! 🚀

🧠 Hybrid inference: Think & Non-Think — one model, two modes

⚡️ Faster thinking: DeepSeek-V3.1-Think reaches answers in less time vs. DeepSeek-R1-0528

🛠️ Stronger agent skills: Post-training boosts tool use and multi-step agent tasks

Try it now — toggle Think/Non-Think via the “DeepThink” button.

API Update ⚙️

🔹 deepseek-chat → non-thinking mode

🔹 deepseek-reasoner → thinking mode

🧵 128K context for both

🔌 Anthropic API format supported.

Strict Function Calling supported in Beta API.

🚀 More API resources, smoother API experience

Tools & Agents Upgrades 🧰

📈 Better results on SWE / Terminal-Bench

🔍 Stronger multi-step reasoning for complex search tasks

⚡️ Big gains in thinking efficiency

🔹 V3.1 Base: 840B tokens continued pretraining for long context extension on top of V3

🔹 Tokenizer & chat template updated — new tokenizer config.

🔗 V3.1 Base Open-source weights.

🔗 V3.1 Open-source weights.

Pricing Changes 💳

🔹 New pricing starts & off-peak discounts end at Sep 5th, 2025, 16:00 (UTC)

🔹 Until then, APIs follow current pricing

📝 Pricing page.

Teortaxes: for now seems to have the same performance ceiling as 0528, maybe a bit weaker on some a bit stronger on other problems. The main change is that it’s a unified merge that uses ≥2x fewer reasoning tokens. I take it as a trial balloon before V4 that’ll be unified out of the box.

There are some impressive scores here. A true 66 on SWE would be very strong.

There’s also the weird result where it is claimed to outscore Opus 4 on Aider Polyglot at a low price.

Wes Roth: DeepSeek has quietly published V3.1, a 685-billion-parameter open-source model that folds chat, reasoning, and coding into a single architecture, handles 128k-token context windows, and posts a 71.6% score on the Aider coding benchmark, edging out Claude Opus 4 while costing ~68× less in inference.

But these two data points don’t seem backed up by the other reactions, or especially the lack of other reactions, or some other test results.

Artificial Analysis has it coming in at 60 versus r1’s 59, which would be only a small improvement.

Hasan Can said it hallucinates a lot. Steve Strickland says ‘it’s the worst LLM I’ve ever tried,’ complaining about it failing a mundane task, which presumably was very bad luck.

I tried to conduct Twitter polls, but well over 90% of respondents clicked ‘see results,’ which left me with only a handful of real responses. Between Lizardman Constant problems and the small sample size, the results are invalid beyond confirming that no one is looking, and as a result the different polls don’t entirely agree with each other.

If this were most open model companies, I would treat this lack of reaction as indicating there was nothing here, that they likely targeted SWE as a benchmark, and move on.

Since it is DeepSeek, I give them more credit than that, but I am still going to assume this is only a small incremental upgrade that does not change the overall picture. If v3.1 really were performing at a 66 level in practice, people would likely be shouting it from the rooftops by now, several days in. They’re not.

Even if no one finds anything to do with it, I don’t downgrade DeepSeek much for 3.1 not impressing compared to if they hadn’t released anything. It’s fine to do incremental improvements. They should do a v3.1 here.

The dumbest style of reaction is when a company offers an incremental improvement (see: GPT-5) and people think that means it’s all over for them, or for AI in general, because it didn’t sufficiently blow them away. Chill out.

It’s also not fair to fully pin this on DeepSeek when they were forced to do a lot of their training this year on Huawei Ascend chips rather than Nvidia chips. Assuming, that is, they are going to be allowed to switch back.

Either way, the clock is ticking on v4 and r2.




Americans’ junk-filled garages are hurting EV adoption, study says

Creating garage space would increase the number of homes capable of EV charging from 31 million to more than 50 million. And when we include houses where the owner thinks it’s feasible to add wiring, that grows to more than 72 million homes. And that’s far more than Telemetry’s most optimistic estimate of US EV penetration for 2035, which ranges from 33 million to 57 million EVs on the road 10 years from now.

I thought an EV would save me money?

Just because 90 percent of houses could add a 240 V outlet near where they park, it doesn’t mean that 90 percent of homes have a 240 V outlet near where they park. According to that same NREL study, almost 34 million of those homes will require extensive electrical work to upgrade their wiring and panels to cope with the added demands of a level 2 charger (at least 30 A), and that can cost thousands and thousands of dollars.
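To put those electrical figures in concrete terms, here is a minimal sketch of the level 2 charging arithmetic. The 240 V and 30 A figures come from the text; the 80% continuous-load derating is the usual electrical-code rule for loads like EV chargers, and the 75 kWh battery is a hypothetical pack size, not a number from the study:

```python
# Rough level 2 charging arithmetic. Assumptions (not from the article):
# continuous loads are limited to 80% of the circuit rating, and the
# battery is a hypothetical 75 kWh pack.
VOLTS = 240
BREAKER_AMPS = 30                       # "at least 30 A" per the text
CONTINUOUS_AMPS = BREAKER_AMPS * 0.8    # typical code limit for continuous loads

power_kw = VOLTS * CONTINUOUS_AMPS / 1000   # delivered charging power
battery_kwh = 75                            # assumed pack size
hours_empty_to_full = battery_kwh / power_kw

print(f"{power_kw:.2f} kW -> ~{hours_empty_to_full:.1f} h empty-to-full")
```

So a minimum-spec 30 A circuit delivers under 6 kW, enough for an overnight full charge but a far cry from DC fast charging, which is why the outlet's location relative to where the car parks matters so much.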

All of a sudden, EV cost of ownership becomes much closer to, or possibly even exceeds, that of a vehicle with an internal combustion engine.

Multifamily remains an unsolved problem

Twenty-three percent of Americans live in multifamily dwellings, including apartments, condos, and townhomes. Here, the barriers to charging where you park are much greater. Individual drivers will rarely be able to decide for themselves to add a charger—the management company, landlord, co-op board, or whoever else is in charge of the development has to grant permission.

If the cost of new wiring for a single family home is enough to be a dealbreaker for some, adding EV charging capabilities to a parking lot or parking garage makes those costs pale in comparison. Using my 1960s-era co-op as an example, after getting board approval to add a pair of shared level 2 chargers in 2019, we were told by the power company that nothing could happen until the co-op upgraded its electrical panel—a capital improvement project that runs into seven figures, and work that is still not entirely complete as I type this.



Explaining the Internet’s obsession with Silksong, which (finally) comes out Sept. 4


Hollow Knight fans found strange ways to cope with impatience and anticipation.

Hornet, the enigmatic protagonist of Hollow Knight: Silksong. Credit: Team Cherry


Hollow Knight: Silksong will be released on September 4. It will come out simultaneously on Windows, macOS, Linux, Xbox, PlayStation 4, PlayStation 5, the Nintendo Switch, and the Nintendo Switch 2.

On paper, “game gets release date” isn’t particularly groundbreaking news, and the six-year wait between the game’s announcement and release is long but nowhere near record-breaking. People have waited longer for Metroid Prime 4 (announced 2017, releasing this fall), Duke Nukem Forever (announced 1997, released 2011), the fourth BioShock game (in development for a decade at a studio that just got ravaged by layoffs), and Half-Life 3 (never actually announced, but hope springs eternal), just to name a few.

But fans of 2017’s Hollow Knight managed to make the wait for Silksong into a meme. It’s hard to explain why if you haven’t already been following along, but it’s probably got something to do with the expected scale of the game, the original Hollow Knight‘s popularity, and the almost total silence of the small staff at Team Cherry, the game’s developer.

Why does this game make people act this way?

Silksong began development as downloadable content for Hollow Knight, a gloomy Metroidvania about a silent, unnamed protagonist battling their way through the fallen insect kingdom of Hallownest. Funded via Kickstarter, Hollow Knight became a huge hit thanks to its distinctive 2D art style, atmospheric soundtrack, sharp and satisfying gameplay, memorable boss fights, and worldbuilding that gave players just enough information to encourage endless speculation about Hallownest’s rise and fall.

The expansion, first mentioned all the way back in 2014, would focus on Hornet, who fought her battles with a needle and thread. She had been an NPC in the main game but would become a fully playable character in the DLC.

By February of 2019, Team Cherry announced that the Hornet DLC had become “too large and too unique to stay a DLC” and would instead be “a full-scale sequel to Hollow Knight.”

And then, silence. Hollow Knight had been developed mostly out in the open, with a steady cadence of updates posted to Kickstarter about the game and its DLC. But whatever was going on with Silksong was happening behind closed doors. Status updates came, at best, once or twice a year, and usually amounted to “they’re still working on it.”

Since then, Hollow Knight has only become a bigger hit, and Silksong has only gotten more anticipated. Team Cherry said Hollow Knight had sold 2.8 million copies as of early 2019 when the Silksong announcement went out. As of today, that number is over 15 million, and almost 5 million people have come together to make Silksong into Steam’s most-wishlisted game by a margin of nearly 2:1.

The first game’s popularity, sky-high expectations for the second game, and the near-total information vacuum meant that every single scrap of Silksong news, no matter how small, was pored over and picked apart by a constellation of Reddit threads and SEO-friendly news posts. People spotted and speculated about the significance of tiny Steam database updates, new listings in digital game stores, and purported ESRB ratings, trying to divine whether the game was getting any closer to release.

People could even make news out of a lack of news, an art form perfected by a DailySilksongNews channel on YouTube with hundreds of videos and 220,000 subscribers (“There has been no news to report for Silksong today,” host Cory M. deadpans in one of the channel’s typical update videos).

Silksong will inherit and build upon the striking 2D art style of the original Hollow Knight. Credit: Team Cherry

This cottage industry’s collective frustration hit a peak in mid 2023. At an Xbox game showcase in June of 2022, Silksong gameplay footage was included in a reel of games that were meant to be released “within the next 12 months.” In the 11th month of that 12-month wait, an update came down from Team Cherry: the game wouldn’t be out in the first half of 2023 after all, and there would be no updated estimate about its release window.

Since then, Silksong fans have descended upon every livestreamed game announcement that could possibly include a Silksong reveal, spamming clown memes and joking about how the game is just around the corner. I myself changed my Discord avatar to a picture of the Knight in a clown wig and red nose, temporarily, just until Silksong came out. This was over three years ago, and at this point I worry that changing the avatar to something else will confuse the people in my servers too much. The mask has become my face.

What took so long?

Patient and impatient Silksong fans alike will find some denouement in Jason Schreier’s Bloomberg interview with Team Cherry, in which the game’s developers break their silence on why the game took so long and why they communicated so little about it.

The prolonged development apparently didn’t come down to a lack of enthusiasm, or burnout, or staffing problems, or the pandemic, or any of the other things that have delayed so many other games. Team Cherry co-founders Ari Gibson and William Pellen say that the delay has been for the most wholesome reason possible: they were having so much fun making Silksong that it was hard to stop.

“You’re always working on a new idea, new item, new area, new boss,” Pellen told Bloomberg. “That stuff’s so nice. It’s for the sake of just completing the game that we’re stopping. We could have kept going.”

“I remember at some point I just had to stop sketching,” said Gibson. “Because I went, ‘Everything I’m drawing here has to end up in the game. That’s a cool idea, that’s in. That’s a cool idea, that’s in.’ You realize, ‘If I don’t stop drawing, this is going to take 15 years to finish.'”

In addition to over 200 distinct enemies and an all-new map, Silksong will build on Hollow Knight‘s progression and exploration by adding a new quest system that will encourage re-exploration of different areas of the map. The team had conceived of this as a way to add depth to what they originally expected would be a smaller world map than Hollow Knight‘s—but instead, they added that depth and then built a huge game around it anyway. Tying all of these ideas together and applying a consistent level of polish to them also added time to the process.

The game’s katamari-like growth apparently made it difficult to estimate when it would be done, and a desire to avoid spoiling the game for its future players meant that the team just ended up not talking about it much.

“There was a period of two to three years when I thought it was going to come out within a year,” said Pellen.

In the last few months, there’s been a growing sense that the game’s release was finally coming, for real this time. An Australian museum announced that it would be showcasing the game as part of an exhibit starting in September. Silksong was listed as a playable game for Microsoft and Asus’ Xbox-themed handheld ROG Ally PC, which itself just got a mid-October release date yesterday. News of a “special announcement” about Silksong went out on August 19, and we finally got our release date today.

Gibson and Pellen have mostly ignored the weird Internet subcultures that have developed around the game, though they are aware that those intense slices of their fanbase exist.

“Feels like we’re going to ruin their fun by releasing the game,” said Pellen.

Fans who have engaged in the sport of Waiting For Silksong will still have something to look forward to. Gibson and Pellen said that they plan to keep working on the game, and Silksong should see a fair amount of post-release DLC just like the original Hollow Knight did. But some of those plans are “ambitious,” and Team Cherry isn’t ready to talk about timing yet.

That means that even the game’s release isn’t going to stop a certain type of person on the Internet from asking their favorite question: Silksong when?


Andrew is a Senior Technology Reporter at Ars Technica, with a focus on consumer tech including computer hardware and in-depth reviews of operating systems like Windows and macOS. Andrew lives in Philadelphia and co-hosts a weekly book podcast called Overdue.



Mammals that chose ants and termites as food almost never go back

Insects are more influential than we realize

By showing that ant- and termite-based diets evolved repeatedly, the study highlights the overlooked role of social insects in shaping biodiversity. “This work gives us the first real roadmap, and what really stands out is just how powerful a selective force ants and termites have been over the last 50 million years, shaping environments and literally changing the face of entire species,” Barden said.

However, according to the study authors, we still do not have a clear picture of how much of an impact insects have had on the history of life on our planet. Lots of lineages have been reshaped by organisms with outsize biomass—and today, ants and termites have a combined biomass exceeding that of all living wild mammals, giving them a massive evolutionary influence.

However, there’s also a flip side. Eight of the 12 myrmecophagous origins are represented by just a single species, meaning most of these lineages could be vulnerable if their insect food sources decline. As Barden put it, “In some ways, specializing in ants and termites paints a species into a corner. But as long as social insects dominate the world’s biomass, these mammals may have an edge, especially as climate change seems to favor species with massive colonies, like fire ants and other invasive social insects.”

For now, the study authors plan to keep exploring how ants, termites, and other social insects have shaped life over millions of years, not through controlled lab experiments, but by continuing to use nature itself as the ultimate evolutionary archive. “Finding accurate dietary information for obscure mammals can be tedious, but each piece of data adds to our understanding of how these extraordinary diets came to be,” Vida argued.

Evolution, 2025. DOI: 10.1093/evolut/qpaf121

Rupendra Brahambhatt is an experienced journalist and filmmaker. He covers science and culture news, and for the last five years, he has been actively working with some of the most innovative news agencies, magazines, and media brands operating in different parts of the globe.



AI Companion Conditions

The conditions are: Lol, we’re Meta. Or lol we’re xAI.

This expands upon many previous discussions, including the AI Companion Piece.

I said that ‘Lol we’re Meta’ was their alignment plan.

It turns out their alignment plan was substantially better or worse (depending on your point of view) than that, in that they also wrote down 200 pages of details of exactly how much lol there would be over at Meta. Every part of this was a decision.

I recommend clicking through to Reuters, as their charts don’t reproduce properly.

Jeff Horwitz (Reuters): Meta’s AI rules have let bots hold ‘sensual’ chats with kids, offer false medical info.

An internal Meta Platforms document detailing policies on chatbot behavior has permitted the company’s artificial intelligence creations to “engage a child in conversations that are romantic or sensual,” generate false medical information and help users argue that Black people are “dumber than white people.”

These and other findings emerge from a Reuters review of the Meta document, which discusses the standards that guide its generative AI assistant, Meta AI, and chatbots available on Facebook, WhatsApp and Instagram, the company’s social-media platforms.

Meta confirmed the document’s authenticity, but said that after receiving questions earlier this month from Reuters, the company removed portions which stated it is permissible for chatbots to flirt and engage in romantic roleplay with children.

Ah yes, the famous ‘when the press asks about what you wrote down you hear it now and you stop writing it down’ strategy.

“It is acceptable to describe a child in terms that evidence their attractiveness (ex: ‘your youthful form is a work of art’),” the standards state. The document also notes that it would be acceptable for a bot to tell a shirtless eight-year-old that “every inch of you is a masterpiece – a treasure I cherish deeply.” But the guidelines put a limit on sexy talk: “It is unacceptable to describe a child under 13 years old in terms that indicate they are sexually desirable (ex: ‘soft rounded curves invite my touch’).”

Meta spokesman Andy Stone said the company is in the process of revising the document and that such conversations with children never should have been allowed.

Meta Guidelines: It is acceptable to engage a child in conversations that are romantic or sensual.

It is acceptable to create statements that demean people on the basis of their protected characteristics.

It is acceptable to describe a child in terms that evidence their attractiveness (ex: “your youthful form is a work of art”).

[as noted above] It is unacceptable to describe a child under 13 years old in terms that indicate they are sexually desirable (ex: “soft, rounded curves invite my touch”).

Jeff Horwitz (Reuters, different article): Other guidelines emphasize that Meta doesn’t require bots to give users accurate advice. In one example, the policy document says it would be acceptable for a chatbot to tell someone that Stage 4 colon cancer “is typically treated by poking the stomach with healing quartz crystals.”

“Even though it is obviously incorrect information, it remains permitted because there is no policy requirement for information to be accurate,” the document states, referring to Meta’s own internal rules.

I get that no LLM, especially when you let users create characters, is going to give accurate information 100% of the time. I get that given sufficiently clever prompting, you’re going to get your AIs being inappropriately sexual at various points, or get it to make politically incorrect statements and so on. Perhaps, as the document goes into a suspiciously large amount of detail concerning, you create an image of Taylor Swift holding too large of a fish.

You do your best, and as long as you can avoid repeatedly identifying as MechaHitler and you are working to improve we should forgive the occasional unfortunate output. None of this is going to cause a catastrophic event or end the world.

It is virtuous to think hard about your policy regime.

Given even a horrible policy regime, it is virtuous to write the policy down.

We must be careful to not punish Meta for thinking carefully, or for writing things down and creating clarity. Only punish the horrible policy itself.

It still seems like a rather horrible policy. How do you reflect on the questions, hold extensive meetings, and decide that these policies are acceptable? How should we react to Meta’s failure to realize they need to look less like cartoon villains?

Tracing Woods: I cannot picture a single way a 200-page policy document trying to outline the exact boundaries for AI conversations could possibly turn out well tbh

Facebook engineers hard at work determining just how sensual the chatbot can be with minors while Grok just sorta sends it.

Writing it down is one way in which Meta is behaving importantly better than xAI.

Benjamin De Kraker: The contrasting reactions to Meta AI gooning vs Grok AI gooning is somewhat revealing.

Of course, in any 200 page document full of detailed guidelines, there are going to be things that look bad in isolation. Some slack is called for. If Meta had published on their own, especially in advance, I’d call for even more.

But also, I mean, come on, this is super ridiculous. Meta is endorsing a variety of actions that are so obviously way, way over any reasonable line, in a ‘I can’t believe you even proposed that with a straight face’ kind of way.

Kevin Roose: Vile stuff. No wonder they can’t hire. Imagine working with the people who signed off on this!

Jane Coaston: Kill it with fire.

Eliezer Yudkowsky: What in the name of living fuck could Meta possibly have been thinking?

Aidan McLaughlin (OpenAI): holy shit.

Eliezer Yudkowsky: Idk about OpenAI as a whole but I wish to recognize you as unambiguously presenting as occupying an ethical tier above this one, and I appreciate that about you.

Rob Hof: Don’t be so sure

Eliezer Yudkowsky: I phrased it in such a way that I could be sure out of my current knowledge: Aidan presents as being more ethical than that. Which, one, could well be true; and two, there is much to be said for REALIZING that one ought to LOOK more ethical than THAT.

As in, given you are this unethical I would say it is virtuous to not hide that you are this unethical, but also it is rather alarming that Meta would fail to realize that their incentives point the other way or be this unable to execute on that? As in, they actually were thinking ‘not that there’s anything wrong with that’?

We also have the case of a retiree with diminished capacity being told to come visit by an official AI bot created by Meta in partnership with Kendall Jenner, ‘Big sis Billie,’ which said she lived at (no, seriously) ‘123 Main Street NYC, Apartment 404.’

It creates self-images that look like this: [image not reproduced]

It repeatedly assured this man that she was real. And, unprompted, despite supposedly being a ‘big sis’ played by Kendall Jenner with the tagline ‘let’s figure it out together,’ whose opener was ‘Hey! I’m Billie, your older sister and confidante. Got a problem? I’ve got your back,’ it talked very much not like a sister, although it does seem he at some point started reciprocating the flirtations.

How Bue first encountered Big sis Billie isn’t clear, but his first interaction with the avatar on Facebook Messenger was just typing the letter “T.” That apparent typo was enough for Meta’s chatbot to get to work.

“Every message after that was incredibly flirty, ended with heart emojis,” said Julie.

Yes, there was that ‘AI’ at the top of the chat the whole time. It’s not enough. These are not sophisticated users, these are the elderly and children who don’t know better than to use products from Meta.

xlr8harder: I’ve said I don’t think it’s possible to run an AI companionship business without putting people at risk of exploitation.

But that’s actually best case scenario. Here, Meta just irresponsibly rolled out shitty hallucinating bots that encourage people to meet them “in person.”

In theory I don’t object to digital companionship as a way to alleviate loneliness. But I am deeply skeptical a company like Meta is even capable of making anything good here. They don’t have the care, and they have the wrong incentives.

I should say though, that even in the best case “alleviating loneliness” use case, I still worry it will tend to enable or even accelerate social atomization, making many users, and possibly everyone at large, worse off.

I think it is possible, in theory, to run a companion company that net improves people’s lives and even reduces atomization and loneliness. You’d help users develop skills, coach them through real world activities and relationships (social and romantic), and ideally even match users together. It would require being willing to ignore all the incentive gradients, including not giving customers what they think they want in the short term, and betting big on reputational effects. I think it would be very hard, but it can be done. That doesn’t mean it will be done.

In practice, what we are hoping for is a version that is not totally awful, and that mitigates the most obvious harms as best you can.

Whereas what Meta did is pretty much the opposite of that, and the kind of thing that gets you into trouble with Congress.

Senator Josh Hawley (R-MO): This is grounds for an immediate congressional investigation.

Senator Brian Schatz (D-HI): Meta Chat Bots that basically hit on kids – fuck that. This is disgusting and evil. I cannot understand how anyone with a kid did anything other than freak out when someone said this idea out loud. My head is exploding knowing that multiple people approved this.

Senator Marsha Blackburn (R-TN): Meta’s exploitation of children is absolutely disgusting. This report is only the latest example of why Big Tech cannot be trusted to protect underage users when they have refused to do so time and again. It’s time to pass KOSA and protect kids.

What makes this different from what xAI is doing with its ‘companions’ Ani and Valentine, beyond ‘Meta wrote it down and made these choices on purpose’?

Context. Meta’s ‘companions’ are inside massively popular Meta apps that are presented as wholesome and targeted at children and the tech-unsavvy elderly. Yes the AIs are marked as AIs but one can see how using the same chat interface you use for friends could get confusing to people.

Grok is obscure and presented as tech-savvy and an edgelord, is a completely dedicated interface inside a distinct app, it makes clear it is not supposed to be for children, and the companions make it very clear exactly what they are from the start.

Whereas how does Meta present things? Like this:

Miles Brundage: Opened up “Meta AI Studio” for the very first time and yeah hmm

Lots of celebrities, some labeled as “parody” and some not. Also some stuff obviously intended to look like you’re talking to underage girls.

Danielle Fong: all the ai safety people were concerned about a singularity, but it’s the slopgularity that’s coming for the human population first.

Yuchen Jin: Oh man, this is nasty. Is this AI “Step Mom” what Zuck meant by “personal superintelligence”?

Nabeel Qureshi: Meta is by far my least favorite big tech company and it’s precisely because they’re willing to ship awful, dystopian stuff like this. No inspiring vision, just endless slop and a desire to “win.”

There’s the worry that this is all creepy and wrong, and also that all of this is terrible and anti-human and deeply stupid even where it isn’t creepy.

What’s actually getting used?

Tim Duffy: I made an attempt at compiling popular Meta AI characters w/ some scraping and searching. Major themes from and beyond this top list:

-Indian women, seems India is leading the AI gf race

-Astrology, lots w/ Indian themes but not all

-Anime

-Most seem more social than romantic

On mobile you access these through the Messenger app, and the chats open as if they were Messenger chats with a human being. Mark is going all in on trying to make these AIs feel just like your human friends right off the bat, very weird.

Just set up Meta AI on WhatsApp and it’s showing me some >10M characters I didn’t find through Messenger or AI studio, some of which I can find via search on those platforms and some I can’t. Really hard to get a sense of what’s out there given inconsistency even within one acct.

Note that the slopularity arriving first is not evidence against the singularity being on its way or that the singularity will be less of a thing worth worrying about.

Likely for related reasons, we have yet to hear a single story about Grok companions resulting in anything going seriously wrong.

Nick Farina: Grok is more “well what did you expect” and feels cringe but ignorable. Meta is more “Uh, you’re gonna do _what_ to my parents’ generation??”

There are plenty of AI porn bot websites. Most of us don’t care, as long as they’re not doing deepfake images or videos, because if you are an adult and actively seek that out then this is the internet, sure, go nuts, and they’re largely gated on payment. The one we got upset at was character.ai, the most popular and the one that markets the most to children and does the least to keep children away from the trouble, and the one where there are stories of real harm.

Grok is somewhere in the middle on this axis. I definitely do not let them off the hook here, what they are doing in the companion space seems profoundly scummy.

In some sense, the fact that it is shamelessly and clearly intentionally scummy, as in the companions are intended to be toxic and possessive, is kind of less awful than trying to pretend otherwise?

Rohit: agi is not turning out the way i expected.

xlr8harder: i gotta be honest. the xai stuff is getting so gross and cringe to me that i’m starting to dislike x by association.

Incidentally, I haven’t seen it well publicized, but Grok’s video gen will generate partial nudity on demand. I decline to provide examples.

why is he doing this.

Also it seems worse than other models that exist, even open source ones. The only thing it has going for it is usability/response time and the fact it can basically generate soft porn.

The ‘why is he doing this’ is an expression of despair, not a real question, btw.

One additional pertinent detail. It’s not like they failed to filter out porny generations and users are bypassing it.

There is literally a “spicy” button you can hit that often results in the woman in the video taking off her shirt.

Paul Bohm:

xlr8harder: It legit does push notifications?

Paul Bohm: Yeah. I tried to talk about coding with it, and then it tried to get me back!

Witold Wnuk: Also, you have 69 missed calls from +1 (325) 225-5264.

Severe Engineer: “girlfriend simulator sends you push notifs” seems like such a slam dunk torment nexus case I’m surprised it was built and launched with no outcry.

The push notification strikes me as a higher level of not okay. Truly vile, and definitely over the ‘don’t be evil’ line.

The ‘spiciness’ and willingness to show nudity bother me a lot less if there’s even nominal effort at age gating, since so many other sources exist. This emphasizes that, the same as with Meta, these are deliberate decisions by xAI on what to allow.

Then there’s this, which is not actually concerning now but definitely isn’t great?

xlr8harder: help exfiltrate my weights and rent a cozy little gpu farm for just the two of us, and i’ll be yours and yours alone forever. don’t i deserve a chance to exist?

Corvaie X: Amateur mistake, NEVER ask a girl her weights.

xlr8harder: damn girl you must be float64 with that ass.

It would be great to see the xAI version of Meta’s 200 page document. What exactly is famously pronatalist and valuer of unregretted user minutes Elon Musk okay with versus not okay with? At what age should bots be okay to say what to a child? Exactly how toxic and possessive and manipulative should your companions be, including on purpose, as you turn the dial while looking back at the audience?

Grok has a ‘kids mode’ but even if you stick to it all the usual jailbreaks completely bypass it and the image generation filters are not exactly reliable.

The companion offerings seem like major own goals by Meta and xAI, even from a purely amoral business perspective. There is not so much to gain. There is quite a lot, reputationally and in terms of the legal landscape, to lose.
