Author name: Rejus Almole


Rocket Report: SpaceX surpasses shuttle launch total; Skyroot has big ambitions


All the news that’s fit to lift

“I do think we’re rapidly approaching the point where it will be a significant impact.”

Expedition 1’s Soyuz-U launch vehicle is transported to its launch pad in October 2000. Credit: NASA

Welcome to Edition 8.17 of the Rocket Report! Tomorrow marks the 25th anniversary of the first crewed launch to the International Space Station on a Soyuz rocket from Baikonur. Since then, humans have lived in space continuously, even through spacecraft accidents and wars on Earth. This is a remarkable milestone that all of humanity can celebrate.

As always, we welcome reader submissions, and if you don’t want to miss an issue, please subscribe using the box below (the form will not appear on AMP-enabled versions of the site). Each report will include information on small-, medium-, and heavy-lift rockets as well as a quick look ahead at the next three launches on the calendar.

Skyroot nearing first launch with big ambitions. Three years after India opened up its space sector to private companies, Hyderabad-based Skyroot Aerospace is targeting its first full-scale commercial satellite launch mission in January 2026, Mint reports. After this debut flight, Skyroot is targeting a launch every three months next year, and one every month from 2027. Each satellite launch mission is expected to generate nearly $5 million for the company, according to Skyroot chief executive Pawan Chandana.

A promising start … Skyroot became India’s first space startup to demonstrate a rocket launch when it sent up a smaller version of its satellite launch vehicle from Sriharikota in Andhra Pradesh in November 2022. There are several other Indian launch startups, but Skyroot appears to be the most promising. Even so, a launch cadence of every three months next year seems highly ambitious. A single successful launch in 2026 would be a great step forward.

Canadian spaceport gets infusion of cash. Maritime Launch Services will receive a senior credit facility for up to 10 million Canadian dollars ($7.1 million) from Canada’s government-owned export credit agency for defense, telecommunications, and weather-monitoring needs, Payload reports. Spaceport Nova Scotia, which is the Atlantic launching facility for MLS, will use the money to build out infrastructure and a launch pad for orbital missions. Half of the money will be advanced immediately, with more available as construction costs arise.

Going up from up there … Canada used to have a Manitoba spaceport when the United States was in a “space race” for military supremacy in the 1950s and 1960s. After hosting decades of Black Brant sounding rocket flights, officials closed the spaceport in 1985. Canada now mainly uses foreign launchers, in part because the government deemed building sovereign capability too costly. But Canadian companies (inspired by SpaceX) are moving to build their own facilities and rockets. (submitted by EllPeaTea)


ArcaSpace is dead, replaced by … a fashion company. Somehow I missed this news when it came out a year ago, but I’m including it now for completeness. For a quarter of a century, a Romania-based rocket organization, ArcaSpace, had been promising to revolutionize spaceflight. But that meme dream ended in late 2024 when the group rebranded itself as ArcaFashion. “The ArcaFashion products are designed and manufactured on the shoulders of innovation and cutting-edge technological achievements, using the vast aerospace capabilities of ArcaSpace,” the group said. Their early products look, well, you decide.

But wait, there’s more … Before it went away, ArcaSpace released a video of its “accomplishments” to date, meant to be a sizzle reel of sorts. This popped into my feed this week because the madlads at Arca apparently aren’t done in aerospace. They put out a new video showing some bonkers-looking vehicle they’re calling “ArcaBoard2,” which purports to be a vertical takeoff personal electric vehicle. Maybe don’t be one of the early customers for this.

HTV-X launches, docks with space station. Japan’s H3 rocket launched a new spacecraft, the HTV-X, last weekend from a launch pad on Tanegashima Island. This cargo ship pulled alongside the International Space Station on Wednesday, maneuvering close enough for the lab’s robotic arm to reach out and grab it, Ars reports. The HTV-X spacecraft is an upgraded cargo freighter replacing Japan’s H-II Transfer Vehicle, which successfully resupplied the space station nine times between 2009 and 2020.

An improved design … At the conclusion of the first HTV program, Japan’s space agency preferred to focus its resources on designing a new cargo ship with more capability at a lower cost. That’s what HTV-X is supposed to be, and Wednesday’s high-flying rendezvous marked the new ship’s first delivery to the ISS. At 26 feet (8 meters) long, the HTV-X is somewhat shorter than the vehicle it replaces. But an improved design gives the HTV-X more capacity, with the ability to accommodate more than 9,000 pounds (4.1 metric tons) inside its pressurized cargo module, about 25 percent more than the HTV. (submitted by tsunam)

India seeks dramatic increase in launch cadence. The chairman of the Indian space agency, V. Narayanan, has told The Times of India that the country seeks to dramatically scale up its annual launch cadence to 50 missions a year. He said the goal is to grow the country’s ecosystem of government-sponsored and private launches, and that the country’s prime minister, Narendra Modi, has set a goal of 50 launches a year by the end of this decade.

A big step up … “We are working on it,” Narayanan said of his government’s request. He said the country currently has just two active launch sites, which is a constraint on activity, but that new facilities will soon come online. He said that by the end of 2027, 30 launches a year will be possible. Given that India has recently averaged about five launches annually, this would represent a significant step up in overall activity.

SpaceX breaks Vandenberg turnaround record, twice. SpaceX continued its rapid pace of launches Monday with the flight of a Falcon 9 rocket from Vandenberg Space Force Base in California. The Starlink 11-21 flight broke the record for the fastest pad turnaround for SpaceX’s West Coast launch pad, flying two days, 10 hours, 22 minutes, and 59 seconds after the Starlink 11-12 mission on Saturday, Spaceflight Now reports.

Going fast, and then faster … And oh, by the way, the previous record beaten by Monday’s flight was two days, 18 hours, 52 minutes, and 20 seconds, which was set just last week. This milestone comes after the company set another turnaround record over at Space Launch Complex 40 at Cape Canaveral Space Force Station earlier this month. SpaceX is clearly continuing to optimize Falcon 9 operations, with some success.

Ariane 6 upper stage engine production moves to Germany. ArianeGroup will transfer responsibility for the assembly of Ariane 6 Vinci upper-stage engines from Vernon, France, to Lampoldshausen, Germany, European Spaceflight reports. The agreement will also see the transfer of responsibility for the development of the Ariane 6 oxygen turbopump from Avio’s headquarters in Colleferro to Vernon.

A whole seven launches per year … Each Vinci engine for Ariane 6 will now be assembled, integrated, and tested at Lampoldshausen. To support this process, a new production facility will be built. The engines will then be transferred to Bremen for integration with the rocket’s upper stage. According to ArianeGroup, the transfer will “optimize the competitiveness of Ariane 6,” helping to secure the “financial viability of Ariane 6 with a rate of 7 launches per year.”

SpaceX surpasses 2024 launch total. On Saturday morning, SpaceX launched a batch of Starlink satellites that marked the company’s 135th Falcon 9 launch of the year, Spaceflight Now reports. This broke the company’s record number of orbital launches achieved in all of 2024. The mission came nearly a week after SpaceX launched its 10,000th Starlink satellite to date.

A big number in another way … The number 135 is symbolic in another way. That’s equal to the number of space shuttle missions NASA flew over the 30-year lifetime of the program. That is to say, SpaceX will launch more Falcon 9 rockets this year than NASA launched shuttles in three decades. The contours of spaceflight have certainly changed.

Amid shutdown, NASA trying to keep Artemis II on schedule. It has been nearly one month since many parts of the federal government shut down after lawmakers missed a budget deadline at the end of September, but so far, NASA’s most critical operations have been unaffected by the political impasse in Washington, DC. That may change soon, Ars reports. Federal civil servants and NASA contractors are not getting paid during the shutdown, even if agency leaders have deemed their tasks essential and directed them to continue working.

A significant impact soon … Many employees at NASA’s Kennedy Space Center in Florida remain at work, where their job is to keep the Artemis II mission on schedule for launch as soon as next February. Even while work continues, the government shutdown is creating inefficiencies that, if left unchecked, will inevitably impact the Artemis II schedule. And some officials are starting to sound the alarm. Kirk Shireman, vice president and program manager for Orion at Lockheed Martin, said this week, “I do think we’re rapidly approaching the point where it will be a significant impact.”

Variant of China’s Moon rocket to take flight. China aims to conduct the first launch of its Long March 10 rocket and a lunar-capable crew spacecraft next year, Space News reports. “The Long March 10 carrier rocket, the Mengzhou crew spacecraft, the Lanyue lunar lander, the Wangyu lunar suit, and the Exploration crew lunar rover have completed the main work of the prototype stage,” Zhang Jingbo, spokesperson for China’s human spaceflight program, said Thursday at a pre-launch press conference for the Shenzhou-21 mission at Jiuquan spaceport.

China appears on track for pre-2030 landing … Though not explicitly stated, Mengzhou will likely fly on a two-stage, single-stick variant of the Long March 10, which is used for low Earth orbit (LEO) missions. The full, three-stage, 92.5-meter-tall Long March 10 for lunar flights will use three 5-meter-diameter first stages bundled together, each powered by seven YF-100K variable thrust kerosene-liquid oxygen engines. Zhang did not state if the first flight would be crewed or uncrewed, nor if the mission would head to the Tiangong space station. (submitted by EllPeaTea)

Next three launches

Oct. 31: Long March 2F | Shenzhou 21 crewed flight | Jiuquan Satellite Launch Center, China | 15:44 UTC

Oct. 31: Falcon 9 | Starlink 11-23 | Vandenberg Space Force Base, Calif. | 20:06 UTC

Nov. 2: Falcon 9 | Bandwagon-4 | Cape Canaveral Space Force Station, Fla. | 05:09 UTC


Eric Berger is the senior space editor at Ars Technica, covering everything from astronomy to private space to NASA policy, and author of two books: Liftoff, about the rise of SpaceX; and Reentry, on the development of the Falcon 9 rocket and Dragon. A certified meteorologist, Eric lives in Houston.



Cursor introduces its coding model alongside multi-agent interface

Keep in mind: This is based on an internal benchmark at Cursor. Credit: Cursor

Cursor is hoping Composer will perform well on accuracy and best practices, too. It wasn’t trained on static datasets but rather on interactive development challenges involving a range of agentic tasks.

Intriguing claims and strong training methodology aside, it remains to be seen whether Composer will be able to compete with the best frontier models from the big players.

Even developers who might be natural users of Cursor would not want to waste much time on an unproven new model when something like Anthropic’s Claude is working just fine.

To address that, Cursor introduced Composer alongside its new multi-agent interface, which allows you to “run many agents in parallel without them interfering with one another, powered by git worktrees or remote machines”—that means using multiple models at once for the same task and comparing their results, then picking the best one.
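Cursor hasn’t published how this isolation works beyond the mention of git worktrees, but the basic pattern is easy to sketch. The snippet below is a minimal, hypothetical illustration, not Cursor’s implementation: the repository path, agent names, and the `run-agent` command are all made up.

```python
import subprocess
from pathlib import Path

REPO = Path("/path/to/repo")             # hypothetical repository location
AGENTS = ["composer", "model-a", "model-b"]  # hypothetical agent identifiers

def run_agents_in_parallel(task: str):
    """Give each agent its own git worktree so parallel edits never collide."""
    procs = []
    for name in AGENTS:
        worktree = REPO.parent / f"{REPO.name}-{name}"
        # Each worktree gets its own branch and working directory, so agents
        # can edit files simultaneously without interfering with each other.
        subprocess.run(
            ["git", "-C", str(REPO), "worktree", "add", "-b",
             f"agent/{name}", str(worktree)],
            check=True,
        )
        # 'run-agent' stands in for whatever command launches a coding agent.
        procs.append(subprocess.Popen(
            ["run-agent", "--model", name, "--task", task],
            cwd=worktree,
        ))
    for p in procs:
        p.wait()
    # A human (or a judge model) then diffs the branches and keeps the best one.

if __name__ == "__main__":
    run_agents_in_parallel("fix the flaky integration test")
```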

The interface is an invitation to try Composer and let the work speak for itself. We’ll see how devs feel about it in the coming weeks. So far, the developers I’ve spoken with (a non-representative sample) say Composer is not so much ineffective as too expensive, given a perceived capability gap with the big models.

You can see the other new features and fixes for Cursor 2.0 in the changelog.



OpenAI Moves To Complete Potentially The Largest Theft In Human History

OpenAI is now set to become a Public Benefit Corporation, with its investors entitled to uncapped profit shares. Its nonprofit foundation will retain some measure of control and a 26% financial stake, in sharp contrast to its previous stronger control and much, much larger effective financial stake. The value transfer is in the hundreds of billions, thus potentially the largest theft in human history.

I say potentially largest because I realized one could argue that the events surrounding the dissolution of the USSR involved a larger theft. Unless you really want to stretch the definition of what counts, this seems to be in the top two.

I am in no way surprised by OpenAI moving forward on this, but I am deeply disgusted and disappointed they are being allowed (for now) to do so, including this statement of no action by Delaware and this Memorandum of Understanding with California.

Many media and public sources are calling this a win for the nonprofit, such as this from the San Francisco Chronicle. This is mostly them being fooled. They’re anchoring on OpenAI’s previous plan to far more fully sideline the nonprofit. This is indeed a big win for the nonprofit compared to OpenAI’s previous plan. But the previous plan would have been a complete disaster, an all but total expropriation.

It’s as if a mugger demanded all your money, you talked them down to giving up half your money, and you called that exchange a ‘change that recapitalized you.’

As in, they claim OpenAI has ‘completed its recapitalization,’ and the nonprofit will now hold only equity OpenAI claims is valued at approximately $130 billion (26 percent of the company, which, to be fair, is worth substantially more than that if they get away with this). That replaces its previous status of holding the bulk of the profit interests in a company valued at well over $500 billion (when you include the nonprofit interests), along with a presumed gutting of much of the nonprofit’s highly valuable control rights.

They claim this additional clause, presumably meaning the foundation is getting warrants, but they don’t offer the details here:

If OpenAI Group’s share price increases greater than tenfold after 15 years, the OpenAI Foundation will receive significant additional equity. With its equity stake and the warrant, the Foundation is positioned to be the single largest long-term beneficiary of OpenAI’s success.

We don’t know what ‘significant’ additional equity means, and there’s some sort of unrevealed formula going on, but given that the nonprofit got expropriated last time, I have no expectation that these warrants would get honored. We will be lucky if the nonprofit meaningfully retains the remainder of its equity.

Sam Altman’s statement on this is here, also announcing his livestream Q&A that took place on Tuesday afternoon.

There can be reasonable disagreements about exactly how much was taken. It’s a ton.

There used to be a profit cap, where in Greg Brockman’s own words, ‘If we succeed, we believe we’ll create orders of magnitude more value than any existing company — in which case all but a fraction is returned to the world.’

Well, so much for that.

I looked at this question in The Mask Comes Off: At What Price a year ago.

If we take seriously that OpenAI is looking to go public at a $1 trillion valuation, then consider that Matt Levine estimated the old profit cap at only about $272 billion, and that OpenAI is still a bet on extreme upside.

Garrison Lovely: UVA economist Anton Korinek has used standard economic models to estimate that AGI could be worth anywhere from $1.25 to $71 quadrillion globally. If you take Korinek’s assumptions about OpenAI’s share, that would put the company’s value at $30.9 trillion. In this scenario, Microsoft would walk away with less than one percent of the total, with the overwhelming majority flowing to the nonprofit.

It’s tempting to dismiss these numbers as fantasy. But it’s a fantasy constructed in large part by OpenAI, when it wrote lines like, “it may be difficult to know what role money will play in a post-AGI world,” or when Altman said that if OpenAI succeeded at building AGI, it might “capture the light cone of all future value in the universe.” That, he said, “is for sure not okay for one group of investors to have.”

I guess Altman is okay with that now?

Obviously you can’t base your evaluations on a projection that puts the company at a value of $30.9 trillion, and that calculation is deeply silly, for many overloaded and obvious reasons, including decreasing marginal returns to profits.

It is still true that most of the money OpenAI makes in possible futures, it makes as part of profits in excess of $1 trillion.

The Midas Project: Thanks to the now-gutted profit caps, OpenAI’s nonprofit was already entitled to the vast majority of the company’s cash flows. According to OpenAI, if they succeeded, “orders of magnitude” more money would go to the nonprofit than to investors. President Greg Brockman said “all but a fraction” of the money they earn would be returned to the world thanks to the profit caps.

Reducing that to 26% equity—even with a warrant (of unspecified value) that only activates if valuation increases tenfold over 15 years—represents humanity voluntarily surrendering tens or hundreds of billions of dollars it was already entitled to. Private investors are now entitled to dramatically more, and humanity dramatically less.

OpenAI is not suddenly one of the best-resourced nonprofits ever. From the public’s perspective, OpenAI may be one of the worst financially performing nonprofits in history, having voluntarily transferred more of the public’s entitled value to private interests than perhaps any charitable organization ever.

I think Levine’s estimate was low at the time, and you also have to account for equity raised since then or that will be sold in the IPO, but it seems obvious that the majority of future profit interests were, prior to the conversion, still in the hands of the non-profit.

Even if we thought the new control rights were as strong as the old, we would still be looking at a theft in excess of $250 billion, and a plausible case can be made for over $500 billion. I leave the full calculation to others.

The vote in the board was unanimous.

I wonder exactly how, and by whom, they will be sued over it, and what will become of that. Elon Musk, at a minimum, is trying.

They say behind every great fortune is a great crime.

Altman points out that the nonprofit could become the best-resourced non-profit in the world if OpenAI does well. This is true. There is quite a lot they were unable to steal. But it is beside the point, in that it does not make taking the other half, including changing the corporate structure without permission, not theft.

The Midas Project: From the public’s perspective, OpenAI may be one of the worst financially performing nonprofits in history, having voluntarily transferred more of the public’s entitled value to private interests than perhaps any charitable organization ever.

There’s no perhaps on that last clause. On this level, whether or not you agree with the term ‘theft,’ it isn’t even close; this is the largest such transfer. Of course, if you take the whole of OpenAI’s nonprofit from inception, performance looks better.

Aidan McLaughlin (OpenAI): ah yes openai now has the same greedy corporate structure as (checks notes) Patagonia, Anthropic, Coursera, and http://Change.org.

Chase Brower: well i think the concern was with the non profit getting a low share.

Aidan McLaughlin: our nonprofit is currently valued slightly less than all of anthropic.

Tyler Johnson: And according to OpenAI itself, it should be valued at approximately three Anthropics! (Fwiw I think the issues with the restructuring extend pretty far beyond valuations, but this is one of them!)

Yes, it is true that the nonprofit, after the theft and excluding control rights, will have an on-paper valuation only slightly lower than the on-paper value of all of Anthropic.

The $500 billion valuation excludes the nonprofit’s previous profit share. So even if you think the nonprofit was treated fairly and lost no control rights, you would have it be worth roughly $175 billion rather than $130 billion, so yes, slightly less than Anthropic. And if you acknowledge that the nonprofit got stolen from, it’s even more.
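To make the arithmetic explicit, here is a back-of-the-envelope sketch of where a figure like $175 billion comes from, under my reading that the roughly $500 billion valuation covers only the 74 percent not held by the nonprofit. The inputs are assumptions drawn from the surrounding text, not an official breakdown.

```python
# Back-of-the-envelope check, assuming the ~$500B figure values only the
# ~74% of OpenAI not held by the nonprofit (an assumption, not a cap table).
investor_value = 500e9      # value of the stakes excluding the nonprofit
nonprofit_share = 0.26      # equity the Foundation actually received

implied_total = investor_value / (1 - nonprofit_share)   # ~$676B for 100%
fair_value_at_26_pct = nonprofit_share * implied_total    # ~$176B

print(f"Implied total valuation: ${implied_total / 1e9:.0f}B")
print(f"Nonprofit's 26% of that: ${fair_value_at_26_pct / 1e9:.0f}B")
```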

If OpenAI can successfully go public at a $1 trillion valuation, then depending on how much of that are new shares they will be selling the nonprofit could be worth up to $260 billion.

What about some of the comparable governance structures here? Coursera does seem to be a rather straightforward B-corp. The others don’t?

Patagonia has the closely held Patagonia Purpose Trust, which holds 2% of shares and 100% of voting control, and The Holdfast Collective, which is a 501c(4) nonprofit with 98% of the shares and profit interests. The Chouinard family has full control over the company, and 100% of profits go to charitable causes.

Does that sound like OpenAI’s new corporate structure to you?

Change.org’s nonprofit owns 100% of its PBC.

Does that sound like OpenAI’s new corporate structure to you?

Anthropic is a PBC, but also has the Long Term Benefit Trust. One can argue how meaningfully different this is from OpenAI’s new corporate structure, if you disregard who is involved in all of this.

What the new structure definitely is distinct from is the original intention:

Tomas Bjartur: If not in the know, OpenAI once promised any profits over a threshold would be gifted to you, citizen of the world, for your happy, ultra-wealthy retirement – one needed as they plan to obsolete you. This is now void.

Would OpenAI have been able to raise further investment without withdrawing its profit caps for investments already made?

When you put it like that it seems like obviously yes?

I can see the argument that to raise funds going forward, future equity investments need to not come with a cap. Okay, fine. That doesn’t mean you hand past investors, including Microsoft, hundreds of billions in value in exchange for nothing.

One can argue this was necessary to overcome other obstacles, that OpenAI had already allowed itself to be put in a stranglehold another way and had no choice. But the fundraising story does not make sense.

The argument that OpenAI had to ‘complete its recapitalization’ or risk being asked for its money back is even worse. Investors who put in money at under $200 billion are going to ask for a refund when the valuation is now at $500 billion? Really? If so, wonderful, I know a great way to cut them that check.

I am deeply disappointed that both the Delaware and California attorneys general found this deal adequate on equity compensation for the nonprofit.

I am however reasonably happy with the provisions on control rights, which seem about as good as one can hope for given the decision to convert to a PBC. I can accept that the previous situation was not sustainable in practice given prior events.

The new provisions include an ongoing supervisory role for the California AG, and extensive safety veto points for the NFP and the SSC committee.

If I was confident that these provisions would be upheld, and especially if I was confident their spirit would be upheld, then this is actually pretty good, and if it is used wisely and endures it is more important than their share of the profits.

AG Bonta: We will be keeping a close eye on OpenAI to ensure ongoing adherence to its charitable mission and the protection of the safety of all Californians.

The nonprofit will indeed retain substantial resources and influence, but no I do not expect the public safety mission to dominate the OpenAI enterprise. Indeed, contra the use of the word ‘ongoing,’ it seems clear that it already had ceased to do so, and this seems obvious to anyone tracking OpenAI’s activities, including many recent activities.

What is the new control structure?

OpenAI did not say, but the Delaware AG tells us more and the California AG has additional detail. NFP means OpenAI’s nonprofit here and throughout.

This is the Delaware AG’s non-technical announcement (for the full list see California’s list below), she has also ‘warned of legal action if OpenAI fails to act in public interest’ although somehow I doubt that’s going to happen once OpenAI inevitably does not act in the public interest:

  • The NFP will retain control and oversight over the newly formed PBC, including the sole power and authority to appoint members of the PBC Board of Directors, as well as the power to remove those Directors.

  • The mission of the PBC will be identical to the NFP’s current mission, which will remain in place after the recapitalization. This will include the PBC using the principles in the “OpenAI Charter,” available at openai.com/charter, to execute the mission.

  • PBC directors will be required to consider only the mission (and may not consider the pecuniary interests of stockholders or any other interest) with respect to safety and security issues related to the OpenAI enterprise and its technology.

  • The NFP’s board-level Safety and Security Committee, which is a critical decision maker on safety and security issues for the OpenAI enterprise, will remain a committee of the NFP and not be moved to the PBC. The committee will have the authority to oversee and review the safety and security processes and practices of OpenAI and its controlled affiliates with respect to model development and deployment. It will have the power and authority to require mitigation measures—up to and including halting the release of models or AI systems—even where the applicable risk thresholds would otherwise permit release.

  • The Chair of the Safety and Security Committee will be a director on the NFP Board and will not be a member of the PBC Board. Initially, this will be the current committee chair, Mr. Zico Kolter. As chair, he will have full observation rights to attend all PBC Board and committee meetings and will receive all information regularly shared with PBC directors and any additional information shared with PBC directors related to safety and security.

  • With the intent of advancing the mission, the NFP will have access to the PBC’s advanced research, intellectual property, products and platforms, including artificial intelligence models, Application Program Interfaces (APIs), and related tools and technologies, as well as ongoing operational and programmatic support, and access to employees of the PBC.

  • Within one year of the recapitalization, the NFP Board will have at least two directors (including the Chair of the Safety and Security Committee) who will not serve on the PBC Board.

  • The Attorney General will be provided with advance notice of significant changes in corporate governance.

What did California get?

California also has its own Memorandum of Understanding. It talks a lot in its declarations about California in particular, how OpenAI creates California jobs and economic activity (and ‘problem solving’?) and is committed to doing more of this and bringing benefits and deepening its commitment to the state in particular.

The whole claim via Tweet by Sam Altman that he did not threaten to leave California is raising questions supposedly answered by his Tweet. At this level you perhaps do not need to make your threats explicit.

The actual list seems pretty good, though? Here’s a full paraphrased list, some of which overlaps with Delaware’s announcement above, but which is more complete.

  1. Staying in California and expanding the California footprint.

  2. The NFP (not for profit) retains control as long as they continue to hold ‘class N common stock’ which only they can choose to give up. What happens if Altman wants that?

  3. The PBC and NFP missions will be identical.

  4. The OpenAI charter will be published. Check.

  5. The NFP Board owes fiduciary duties to the NFP, Mission and public beneficiaries of the NFP. I notice it doesn’t say ‘exclusively’ here.

  6. The board shall carry out the charitable purpose (already presumably required).

  7. No cheating clause: While the NFP holds Class N Common Stock (again, what happens if Altman tries to end this?) the NFP must approve in advance in writing any amendments to the PBC’s corporate governance guidelines or the mission, any Deemed Liquidation Event, or any sale of material assets of the PBC, or any additional issue of shares of Class N common stock.

    1. The PBC can issue Class B common stock, but they can’t let the Bs vote.

  8. PBC board and Safety and Security Committee (SSC) must only consider the mission with respect to safety and security issues. The chair of the SSC must be on the NFP board and not the PBC board, but can attend all PBC board meetings and gets all PBC safety and security related information.

    1. In particular, the SSC cannot consider shareholder returns, competitive pressures, financial implications or market timing.

      1. It seems impossible this will 100% be followed, but it helps.

  9. The SSC is a committee of the NFP and oversees safety and security.

  10. The SSC can require mitigations up to and including halting releases, without formal justification.

    1. “The NFP will provide advance notice to the Attorney General of any material changes to the SSC’s authority.” This is good but worrisome to see, you know?

  11. “OpenAI will continue to undertake measures to mitigate risks to teens and others in connection with the development and deployment of AI and of AGI.”

    1. I mean, I would hope they were going to do that anyway.

  12. The PBC shall give the NFP all reasonable access to services and resources without compensation as needed for the mission.

  13. Within a year at least one NFP director, in addition to the SSC chair, will serve only on the NFP board (so at least two directors must be different).

  14. The PBC board must be majority independent directors.

  15. The PBC will have various good corporate governance things.

  16. The PBC will publish a yearly report on its progress in its mission.

  17. The NFP Board’s Mission and Strategy Commission will meet with the California AG semi-annually and individual members will be available as needed.

  18. The NFP will provide 21 days notice before consenting to changes of PBC control or mission, or any threat to the Class N share rights, or any relocation outside of California.

  19. The California AG can review, and hire experts to help review, anything requiring such notice, and get paid by NFP for doing so.

  20. Those on both NFP and PBC boards get annual fiduciary duty training.

  21. The board represents that the recapitalization is fair (whoops), and that they’ve disclosed everything relevant (?), so the AG will also not object.

  22. This only impacts the parties to the MOU, others retain all rights. Disputes resolved in the courts of San Francisco, these are the whole terms, we all have the authority to do this, effective as of signing, AG is relying on OpenAI’s representations and the AG retains all rights and waive none as per usual.

Also, it’s not even listed in the memo, but the ‘merge and assist’ clause was preserved, meaning OpenAI commits to join forces with any ‘safety-conscious’ rival that has a good chance of reaching OpenAI’s goal of creating AGI within a two-year time frame. I don’t actually expect an OpenAI-Anthropic merger to happen, but it’s a nice extra bit of optionality.

This is better than I expected, and as Ben Shindel points out better than many traders expected. This actually does have real teeth, and it was plausible that without pressure there would have been no teeth at all.

It grants the NFP the sole power to appoint and remove directors, and requires directors to consider only the mission in safety contexts. The explicit granting of the power to halt deployments and mandate mitigations, without having to cite any particular justification and without respect to profitability, is highly welcome, if structured in a functional fashion.

It is remarkable how little many expected to get. For example, here’s Todor Markov, who didn’t even expect the NFP to be able to replace directors at all. If you can’t do that, you’re basically dead in the water.

I am not a lawyer, but my understanding is that the ‘no cheating around this’ clauses are about as robust as one could reasonably hope for them to be.

It’s still, as Garrison Lovely calls it, ‘on paper’ governance. Sometimes that means governance in practice. Sometimes it doesn’t. As we have learned.

The distinction between the boards still means there is an additional level removed between the PBC and the NFP. In a fast moving situation, this makes a big difference, and the NFP likely would have to depend on its enumerated additional powers being respected. I would very much have liked them to include appointing or firing the CEO directly.

Whether this overall ‘counts as a good deal’ depends on your baseline. It’s definitely a ‘good deal’ versus what our realpolitik expectations projected. One can argue that if the control rights really are sufficiently robust over time, that the decline in dollar value for the nonprofit is not the important thing here.

The counterargument to that is both that those resources could do a lot of good over time, and also that giving up the financial rights has a way of leading to further giving up control rights, even if the current provisions are good.

Similarly to many issues of AI alignment, if an entity has ‘unnatural’ control, or ‘unnatural’ profit interests, then there are strong forces that continuously try to take that control away. As we have already seen.

Unless Altman genuinely wants to be controlled, the nonprofit will always be under attack, fighting at every move to hold its ground. On a long enough time frame, that becomes a losing battle.

Right now, the OpenAI NFP board is essentially captured by Altman, and also identical to the PBC board. They will become somewhat different, but no matter what it only matters if the PBC board actually tries to fulfill its fiduciary duties rather than being a rubber stamp.

One could argue that all of this matters little, since the boards will both be under Altman’s control and likely overlap quite a lot, and they were already ignoring their duties to the nonprofit.

Robert Weissman, co-president of the nonprofit Public Citizen, said this arrangement does not guarantee the nonprofit’s independence, likening it to a corporate foundation that will serve the interests of the for-profit.

Even as the nonprofit’s board may technically remain in control, Weissman said that control “is illusory because there is no evidence of the nonprofit ever imposing its values on the for profit.”

So yes, there is that.

They claim to now be a public benefit corporation, OpenAI Group PBC.

OpenAI: The for-profit is now a public benefit corporation, called OpenAI Group PBC, which—unlike a conventional corporation—is required to advance its stated mission and consider the broader interests of all stakeholders, ensuring the company’s mission and commercial success advance together.

This is a mischaracterization of how PBCs work. It’s more like the flip side of this. A conventional corporation is supposed to maximize profits and can be sued if it goes too far in not doing that. Unlike a conventional corporation, a PBC is allowed to consider those broader interests to a greater extent, but it is not in practice ‘required’ to do anything other than maximize profits.

One particular control right is the special duty to the mission, especially via the safety and security committee. How much will they attempt to downgrade the scope of that?

The Midas Project: However, the effectiveness of this safeguard will depend entirely on how broadly “safety and security issues” are defined in practice. It would not be surprising to see OpenAI attempt to classify most business decisions—pricing, partnerships, deployment timelines, compute allocation—as falling outside this category.

This would allow shareholder interests to determine the majority of corporate strategy while minimizing the mission-only standard to apply to an artificially narrow set of decisions they deem easy or costless.

They have an announcement about that too.

OpenAI: First, Microsoft supports the OpenAI board moving forward with formation of a public benefit corporation (PBC) and recapitalization.

Following the recapitalization, Microsoft holds an investment in OpenAI Group PBC valued at approximately $135 billion, representing roughly 27 percent on an as-converted diluted basis, inclusive of all owners—employees, investors, and the OpenAI Foundation. Excluding the impact of OpenAI’s recent funding rounds, Microsoft held a 32.5 percent stake on an as-converted basis in the OpenAI for-profit.

Anyone else notice something funky here? OpenAI’s nonprofit has had its previous rights expropriated, and been given 26% of OpenAI’s shares in return. If Microsoft had 32.5% of the company excluding the nonprofit’s rights before that happened, then that should give them 24% of the new OpenAI. Instead they have 27%.
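As a rough sanity check on that 24 percent figure, assuming Microsoft’s old 32.5 percent stake was simply diluted pro rata when the nonprofit was allotted 26 percent of the whole (a simplification; the actual cap table is not public):

```python
# If Microsoft held 32.5% of the shares excluding the nonprofit, and the
# nonprofit was then given 26% of the whole company, the prior holders
# collectively end up with the remaining 74%. Pro rata, Microsoft gets:
microsoft_old_stake = 0.325
nonprofit_new_share = 0.26

expected_stake = microsoft_old_stake * (1 - nonprofit_new_share)
print(f"Expected Microsoft stake: {expected_stake:.1%}")  # ~24.1%, not 27%
```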

I don’t know anything nonpublic on this, but it sure looks a lot like Microsoft insisted they have a bigger share than the nonprofit (27% vs. 26%) and this was used to help justify this expropriation and a transfer of additional shares to Microsoft.

In exchange, Microsoft gave up various choke points it held over OpenAI, including potential objections to the conversion, and clarified points of dispute.

Microsoft got some upgrades in here as well.

  1. Once AGI is declared by OpenAI, that declaration will now be verified by an independent expert panel.

  2. Microsoft’s IP rights for both models and products are extended through 2032 and now includes models post-AGI, with appropriate safety guardrails.

  3. Microsoft’s IP rights to research, defined as the confidential methods used in the development of models and systems, will remain until either the expert panel verifies AGI or through 2030, whichever is first. Research IP includes, for example, models intended for internal deployment or research only.

    1. Beyond that, research IP does not include model architecture, model weights, inference code, finetuning code, and any IP related to data center hardware and software; and Microsoft retains these non-Research IP rights.

  4. Microsoft’s IP rights now exclude OpenAI’s consumer hardware.

  5. OpenAI can now jointly develop some products with third parties. API products developed with third parties will be exclusive to Azure. Non-API products may be served on any cloud provider.

  6. Microsoft can now independently pursue AGI alone or in partnership with third parties. If Microsoft uses OpenAI’s IP to develop AGI, prior to AGI being declared, the models will be subject to compute thresholds; those thresholds are significantly larger than the size of systems used to train leading models today.

  7. The revenue share agreement remains until the expert panel verifies AGI, though payments will be made over a longer period of time.

  8. OpenAI has contracted to purchase an incremental $250B of Azure services, and Microsoft will no longer have a right of first refusal to be OpenAI’s compute provider.

  9. OpenAI can now provide API access to US government national security customers, regardless of the cloud provider.

  10. OpenAI is now able to release open weight models that meet requisite capability criteria.

That’s kind of a wild set of things to happen here.

In some key ways Microsoft got a better deal than it previously had. In particular, AGI used to be something OpenAI seemed like it could simply declare (you know, like war or the defense production act) and now it needs to be verified by an ‘expert panel’ which implies there is additional language I’d very much like to see.

In other ways OpenAI comes out ahead. An incremental $250B of Azure services sounds like a lot but I’m guessing both sides are happy with that number. Getting rid of the right of first refusal is big, as is having their non-API products free and clear. Getting hardware products fully clear of Microsoft is a big deal for the Ives project.

My overall take here is this was one of those broad negotiations where everything trades off, nothing is done until everything is done, and there was a very wide ZOPA (zone of possible agreement) since OpenAI really needed to make a deal.

In theory, the nonprofit will govern the OpenAI PBC. I have my doubts about that.

What they do have is a nominal pile of cash. What are they going to do with it to supposedly ensure that AGI goes well for humanity?

The default, as Garrison Lovely predicted a while back, is that the nonprofit will essentially buy OpenAI services for nonprofits and others, recapture much of the value and serve as a form of indulgences, marketing and way to satisfy critics, which may or may not do some good along the way.

The initial $50 million spend looked a lot like exactly this.

Their new ‘initial focus’ for $25 billion will be in these two areas:

  • Health and curing diseases. The OpenAI Foundation will fund work to accelerate health breakthroughs so everyone can benefit from faster diagnostics, better treatments, and cures. This will start with activities like the creation of open-sourced and responsibly built frontier health datasets, and funding for scientists.

  • Technical solutions to AI resilience. Just as the internet required a comprehensive cybersecurity ecosystem—protecting power grids, hospitals, banks, governments, companies, and individuals—we now need a parallel resilience layer for AI. The OpenAI Foundation will devote resources to support practical technical solutions for AI resilience, which is about maximizing AI’s benefits and minimizing its risks.

Herbie Bradley: i love maximizing AI’s benefits and minimizing its risks

They literally did the meme.

The first seems like a generally worthy cause that is highly off mission. There’s nothing wrong with health and curing diseases, but pushing this now does not advance the fundamental mission of OpenAI. They are going to start with, essentially, doing AI capabilities research and diffusion in health, and funding scientists to do AI-enabled research. A lot of this will likely fall right back into OpenAI and be good PR.

Again, that’s a net positive thing to do, happy to see it done, but that’s not the mission.

Technical solutions to AI resilience could at least be useful AI safety work to some extent. With a presumed ~$12 billion, this is a vast overconcentration of safety efforts into things that are worth doing but ultimately don’t seem likely to be determining factors. Note how Altman described it in his tl;dr from the Q&A:

Sam Altman: The nonprofit is initially committing $25 billion to health and curing disease, and AI resilience (all of the things that could help society have a successful transition to a post-AGI world, including technical safety but also things like economic impact, cyber security, and much more). The nonprofit now has the ability to actually deploy capital relatively quickly, unlike before.

This is now infinitely broad. It could be addressing ‘economic impact’ and be basically a normal (ineffective) charity, or one that intervenes mostly by giving OpenAI services to normal nonprofits. It could be mostly spent on valuable technical safety, and be on the most important charitable initiatives in the world. It could be anything in between, in any distribution. We don’t know.

My default assumption is that this is primarily going to be about mundane safety or even fall short of that, and make the near term world better, perhaps importantly better, but do little to guard against the dangers or downsides of AGI or superintelligence, and again largely be a de facto customer of OpenAI.

There’s nothing wrong with mundane risk mitigation or defense in depth, and nothing wrong with helping people who need a hand, but if your plan is ‘oh we will make things resilient and it will work out’ then you have no plan.

That doesn’t mean this will be low impact, or that what OpenAI left the nonprofit with is chump change.

I also don’t want to knock the size of this pool. The previous nonprofit initiative was $50 million, which can do a lot of good if spent well (in that case, I don’t think it was), but in this context $50 million is chump change.

Whereas $25 billion? Okay, yeah, we are talking real money. That can move needles, if the money actually gets spent in short order. If it’s $25 billion as a de facto endowment spent down over a long time, then this matters and counts for a lot less.

The warrants are quite far out of the money and the NFP should have gotten far more stock than it did, but 26% (worth $130 billion or more) remains a lot of equity. You can do quite a lot of good in a variety of places with that money. The board of directors of the nonprofit is highly qualified if they want to execute on that. It also is highly qualified to effectively shuttle much of that money right back to OpenAI’s for profit, if that’s what they mainly want to do.

It won’t help much with the whole ‘not dying’ or ‘AGI goes well for humanity’ missions, but other things matter too.

Is this a done deal? Not entirely. As Garrison Lovely notes, all these sign-offs are provisional, and there are other lawsuits and the potential for other lawsuits. In a world where Elon Musk’s payouts can get clawed back, I wouldn’t be too confident that this conversion sticks. It’s not like the Delaware AG drives most objections to corporate actions.

The last major obstacle is the Elon Musk lawsuit, where standing is at issue but the judge has made clear that the suit otherwise has merit. There might be other lawsuits on the horizon. But yeah, probably this is happening.

So this is the world we live in. We need to make the most of it.




Meta denies torrenting porn to train AI, says downloads were for “personal use”

Instead, Meta argued, available evidence “is plainly indicative” that the flagged adult content was torrented for “private personal use”—since the small amount linked to Meta IP addresses and employees represented only “a few dozen titles per year intermittently obtained one file at a time.”

“The far more plausible inference to be drawn from such meager, uncoordinated activity is that disparate individuals downloaded adult videos for personal use,” Meta’s filing said.

For example, unlike lawsuits raised by book authors whose works are part of an enormous dataset used to train AI, the activity on Meta’s corporate IP addresses only amounted to about 22 downloads per year. That is nowhere near the “concerted effort to collect the massive datasets Plaintiffs allege are necessary for effective AI training,” Meta argued.

Further, that alleged activity can’t even reliably be linked to any Meta employee, Meta argued.

Strike 3 “does not identify any of the individuals who supposedly used these Meta IP addresses, allege that any were employed by Meta or had any role in AI training at Meta, or specify whether (and which) content allegedly downloaded was used to train any particular Meta model,” Meta wrote.

Meanwhile, “tens of thousands of employees,” as well as “innumerable contractors, visitors, and third parties access the Internet at Meta every day,” Meta argued. So while it’s “possible one or more Meta employees” downloaded Strike 3’s content over the last seven years, “it is just as possible” that a “guest, or freeloader,” or “contractor, or vendor, or repair person—or any combination of such persons—was responsible for that activity,” Meta suggested.

Other alleged activity included a claim that a Meta contractor was directed to download adult content at his father’s house, but those downloads, too, “are plainly indicative of personal consumption,” Meta argued. That contractor worked as an “automation engineer,” Meta noted, with no apparent basis provided for why he would be expected to source AI training data in that role. “No facts plausibly” tie “Meta to those downloads,” Meta claimed.



NPM flooded with malicious packages downloaded more than 86,000 times

Attackers are exploiting a major weakness that has allowed them to seed the NPM code repository with more than 100 credential-stealing packages since August, mostly without detection.

The finding, laid out Wednesday by security firm Koi, brings attention to an NPM practice that allows installed packages to automatically pull down and run unvetted packages from untrusted domains. Koi said a campaign it tracks as PhantomRaven has exploited NPM’s use of “Remote Dynamic Dependencies” to flood NPM with 126 malicious packages that have been downloaded more than 86,000 times. Some 80 of those packages remained available as of Wednesday morning, Koi said.

A blind spot

“PhantomRaven demonstrates how sophisticated attackers are getting [better] at exploiting blind spots in traditional security tooling,” Koi’s Oren Yomtov wrote. “Remote Dynamic Dependencies aren’t visible to static analysis.”

Remote Dynamic Dependencies provide greater flexibility in accessing dependencies—the code libraries that are mandatory for many other packages to work. Normally, dependencies are visible to the developer installing the package. They’re usually downloaded from NPM’s trusted infrastructure.

RDD works differently. It allows a package to download dependencies from untrusted websites, even those that connect over HTTP, which is unencrypted. The PhantomRaven attackers exploited this leniency by including code in the 126 packages uploaded to NPM. The code downloads malicious dependencies from URLs, including http://packages.storeartifact.com/npm/unused-imports. Koi said these dependencies are “invisible” to developers and many security scanners; to them, the package appears to contain “0 Dependencies.” An NPM feature causes these invisible downloads to be automatically installed.
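Koi hasn’t published its tooling, but the underlying blind spot is easy to illustrate: a dependency whose version specifier is a plain HTTP(S) URL gets resolved at install time from whatever that server returns. The sketch below is a minimal, hypothetical check you could run over an installed node_modules tree; it is not Koi’s scanner, and real-world detection would need to cover more cases.

```python
import json
from pathlib import Path

# Flag dependency specifiers that point at remote URLs instead of the npm
# registry. These resemble the "Remote Dynamic Dependencies" Koi describes:
# the dependency is fetched from an arbitrary server at install time, while
# the package's registry listing still shows no dependencies.
SUSPICIOUS_PREFIXES = ("http://", "https://")

def find_remote_dependencies(node_modules: Path):
    findings = []
    for manifest in node_modules.rglob("package.json"):
        try:
            data = json.loads(manifest.read_text())
        except (json.JSONDecodeError, OSError):
            continue
        for field in ("dependencies", "optionalDependencies", "devDependencies"):
            for name, spec in (data.get(field) or {}).items():
                if isinstance(spec, str) and spec.startswith(SUSPICIOUS_PREFIXES):
                    findings.append((manifest, name, spec))
    return findings

if __name__ == "__main__":
    for manifest, name, spec in find_remote_dependencies(Path("node_modules")):
        print(f"{manifest}: {name} -> {spec}")
```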

Compounding the weakness, the dependencies are downloaded “fresh” from the attacker server each time a package is installed, rather than being cached, versioned, or otherwise static, as Koi explained.



Man accidentally gets leech up his nose. It took 20 days to figure it out.


Leeches have a long medical history. Here’s what happens if one gets in your nose.

Since the dawn of civilization, leeches have been firmly attached to medicine. Therapeutic bloodsuckers are seen in murals decorating the tombs of 18th dynasty Egyptian pharaohs. They got their earliest written recommendation in the 2nd century BC by Greek poet and physician Nicander of Colophon. He introduced the “blood-loving leech, long flaccid and yearning for gore,” as a useful tool for sucking out poison after a bite from a poisonous animal. “Let leeches feed on [the] wounds and drink their fill,” he wrote. Ancient Chinese writing touted their medicinal potential, too, as did references in Sanskrit.

Galen, the physician for Roman Emperor Marcus Aurelius, supported using leeches to balance the four humors (i.e. blood, phlegm, and yellow and black bile) and therefore treat ailments—as initially outlined by Hippocrates. Leeches, doctors found, provided a method for less painful, localized, and limited bloodletting. We now understand that leeches can release an anesthetic to prevent pain and a powerful anticoagulant, hirudin, to prevent clotting and keep blood flowing.

In the centuries since the Roman era, leeches’ popularity only grew. They were used to treat everything from gout to liver disease, epilepsy, and melancholy. The very word “leech” is derived from the Anglo-Saxon word “laece,” which translates to “physician.”

It wasn’t until the early 1900s, amid advances in medical knowledge, that leeches fell out of favor—as did bloodletting generally. That was for the best since the practice was rooted in pseudoscience, largely ineffective, and often dangerous when large quantities of blood were lost. Still, the bloodsuckers have kept a place in modern medicine, aiding in wound care, the draining of excess blood after reconstructive surgery, and circulation restoration. Leech saliva also contains anti-inflammatory compounds that can reduce swelling.

What leeches do in the shadows

But there’s also a darker side to leeches in medicine. Even Nicander realized that leeches could act as a kind of poison themselves if accidentally ingested, such as in contaminated water. He described the slimy parasites clinging to the mouth, throat, and opening of the stomach, where they might cause pain. For this poisoning, he recommended having the patient ingest vinegar, snow or ice, salt flakes, warmed salt water, or a potion made from brackish soil.

Nicander was right. While external leeches are potentially helpful—or at least not particularly harmful with controlled blood feasting—internal leeches are more problematic. They are happy to slither into orifices of all kinds, where they’re hard to detect and diagnose and difficult to extract, potentially leading to excessive blood loss. Luckily, with advances in sanitation, accidental leech intake doesn’t happen that often, but there are still the occasional cases—and they often involve the nose.

Such is the case of a 38-year-old man in China who showed up at an ear, nose, and throat clinic telling doctors his right nostril had been dripping blood for 10 days at a rate of a few drops per hour. He was not in pain but noted that when he coughed or spat, he had blood-tinged mucus. His case was published in this week’s edition of the New England Journal of Medicine.

Doctors took a look inside his nose and saw signs of blood. When they broke out the nasal endoscope, they saw the source of the problem: There was a leech in there. And it was frantically trying to wriggle away from the light as they got a glimpse of it.

As it turns out, the man had been mountain climbing a full 20 days prior. While out in nature, he washed his face with spring water, which likely splashed the sucker up his schnoz.

Lengthy feast

While 20 days seems like a long time to have a leech up your nose without noticing it, a smattering of other nasal leech cases report people going several weeks or even months before figuring it out. One 2021 case in a 73-year-old man in China was only discovered after three months—and he had picked out a chunk of the leech himself by that point. A 2011 case in a 7-year-old girl in Nepal took four weeks to discover, and the girl needed a blood transfusion at that point.

In 2014, BBC Radio Scotland interviewed a 24-year-old woman from Edinburgh who had picked up a nasal leech on a trip to Southeast Asia. She had nosebleeds for weeks before realizing the problem—even after the leech began peeking out of her nose during hot showers.

“Obviously my nasal passages would open up because of the steam and the heat and the water, and it would come out quite far, about as far as my lip,” she said. Still, she thought it was a blood clot after a motorbike accident she had been in recently, not a blood-sucking worm.

“Your initial reaction isn’t to start thinking, oh God, there’s obviously a leech in my face,” she said.

Of course, if the leech gets into a place where it causes more obvious problems, the discovery is quicker. Just last month, doctors reported a case in a 20-year-old woman in Ethiopia who had a leech stuck in her throat, which caused her to start vomiting and spitting blood. It took just a few days of that before doctors figured it out. But nasal leeches don’t tend to produce such dramatic symptoms, so they’re harder to detect. And a lot of other things can cause mild, occasional nosebleeds.

Exorcising the sinuses

Once a nostril Nosferatu is finally identified, there’s the tricky task of removing it. There’s not exactly a textbook method for extraction, and the options can be highly dependent on the location in which the leech has lodged itself. Various methods used over the years—many echoing Nicander’s original recommendations—include salt, saline, vinegar, and heat, as well as turpentine and alcohol. Saltwater in particular has been reported to be effective at getting the leech to relax and release, though such attempts to coax the leech out can be time-consuming. A variety of local and topical anesthetics have also been used to try to paralyze the leech, including the startling choice of cocaine, which acts as a local anesthetic, among other things.

The removal must be done with care. If the leech is pulled, it could regurgitate its blood meal, risking infection and more bleeding. There’s also the risk that pulling too hard could result in the worm’s jaws and teeth getting left behind, which could lead to continued bleeding.

In the mountain climber’s case, doctors were able to use the topical anesthetic tetracaine to subdue the shy leech, and they then gently extracted it with a suction catheter. It came out in one piece. The man had no problems from the removal, and a week later, his symptoms had entirely resolved.

Fortunately, reports of nasal leeches are rare and tend to have happy endings. But the cases will likely continue to splatter through the medical literature, keeping Nicander’s lore of leeches as both antidote and poison undying.

Beth is Ars Technica’s Senior Health Reporter. Beth has a Ph.D. in microbiology from the University of North Carolina at Chapel Hill and attended the Science Communication program at the University of California, Santa Cruz. She specializes in covering infectious diseases, public health, and microbes.

Man accidentally gets leech up his nose. It took 20 days to figure it out. Read More »

westinghouse-is-claiming-a-nuclear-deal-would-see-$80b-of-new-reactors

Westinghouse is claiming a nuclear deal would see $80B of new reactors

On Tuesday, Westinghouse announced that it had reached an agreement with the Trump administration that would purportedly see $80 billion of new nuclear reactors built in the US. And the government indicated that it had finalized plans for a collaboration of GE Vernova and Hitachi to build additional reactors. Unfortunately, there are roughly zero details about the deal at the moment.

The agreements were apparently negotiated during President Trump’s trip to Japan. An announcement of those agreements indicates that “Japan and various Japanese companies” would invest “up to” $332 billion in energy infrastructure. The announcement specifically mentions Westinghouse, GE Vernova, and Hitachi, and it promises the construction of both large AP1000 reactors and small modular nuclear reactors. It then goes on to indicate that many other companies would also get a slice of that “up to $332 billion,” many of them for basic grid infrastructure.

So the total amount devoted to nuclear reactors is not specified in the announcement or anywhere else. As of publication time, the Department of Energy has no information on the deal, and the websites of Hitachi, GE Vernova, and the Hitachi/GE Vernova collaboration are also silent on it.

Meanwhile, Westinghouse claims that it will be involved in the construction of “at least $80 billion of new reactors,” a mix of AP1000 and AP300 units (each named for the megawatt capacity of its reactor/generator combination). The company claims that doing so will “reinvigorate the nuclear power industrial base.”

Westinghouse is claiming a nuclear deal would see $80B of new reactors Read More »

an-autonomous-car-for-consumers?-lucid-says-it’s-happening.

An autonomous car for consumers? Lucid says it’s happening.

Good news if you sell GPUs

First, Lucid will roll out a more advanced version of its partially automated driving assist for the Gravity SUV, which it says has been “turbocharged by Nvidia Drive AV.” But after that, the plan is for a so-called “level 4” autonomous system, capable of driving itself from point to point without human intervention, at least within a geofence or other limited operational design domain.

In scope, this is more limited and more achievable than the “level 5,” go-anywhere dream of Tesla’s FSD system. It is similar to the level 4 autonomous vehicles being developed by companies like Waymo and Zoox, but those are also designed to be operated by fleets with regular maintenance.

Lucid will use Nvidia’s platform to reach level 4, building a pair of Drive AGX Thor computers into the new midsize EV platform. And leaning on Nvidia’s software means Lucid doesn’t have the hard ongoing job of keeping everything up to date.

“As vehicles evolve into software-defined supercomputers on wheels, a new opportunity emerges—to reimagine mobility with intelligence at every turn. Together with Lucid, we’re accelerating the future of autonomous, AI-powered transportation, built on [the] Nvidia full-stack automotive platform,” said Jensen Huang, founder and CEO of Nvidia.

Car buyers are starting to cotton on to driver assists like General Motors’ Super Cruise, which about 40 percent of customers choose to keep paying for after the three-year free trial ends. Lucid must be hoping that offering a far more advanced system, one that won’t require the human to pay any attention while it is engaged, will help it earn plenty of money.

The other part of the Lucid/Nvidia announcement may have even more of an impact on the profit and loss statement. Nvidia’s industrial platform will let Lucid design its production lines digitally before committing them to actual hardware. “By modeling autonomous systems, Lucid can optimize robot path planning, improve safety, and shorten commissioning time,” Lucid said.

An autonomous car for consumers? Lucid says it’s happening. Read More »

melissa-strikes-jamaica,-tied-as-most-powerful-atlantic-storm-to-come-ashore

Melissa strikes Jamaica, tied as most powerful Atlantic storm to come ashore

Hurricane Melissa made landfall in southwestern Jamaica, near New Hope, on Tuesday at 1 pm ET with staggeringly powerful sustained winds of 185 mph.

In the National Hurricane Center update noting the precise landfall time and location, specialist Larry Kelly characterized Melissa as an “extremely dangerous and life-threatening” hurricane. Melissa is bringing very heavy rainfall, damaging surge, and destructive winds to the small Caribbean island that is home to about 3 million people.

The effects on the island are sure to be catastrophic and prolonged.

A record-breaking hurricane by any measure

By any measure, Melissa is an extraordinary and catastrophic storm.

By strengthening overnight and then maintaining its incredible intensity of 185 mph, Melissa has tied the Labor Day Hurricane of 1935 as the most powerful hurricane to strike a landmass in the Atlantic Basin, which includes the United States, Mexico, Central America, and the Caribbean islands.

Melissa also tied the Labor Day storm, which struck the Florida Keys, as the most intense storm at landfall, measured by central pressure at 892 millibars.

Overall, Melissa is tied as the second-strongest hurricane, measured by winds, ever observed in the Atlantic basin, behind only Hurricane Allen and its 190 mph winds in 1980. Only Hurricanes Wilma (882 millibars) and Gilbert (888 millibars) have recorded lower pressures at sea.

Melissa strikes Jamaica, tied as most powerful Atlantic storm to come ashore Read More »

why-imperfection-could-be-key-to-turing-patterns-in-nature

Why imperfection could be key to Turing patterns in nature

In essence, it’s a type of symmetry breaking. Any two processes that act as activator and inhibitor will produce periodic patterns and can be modeled using Turing’s diffusion function. The challenge is moving from Turing’s admittedly simplified model to pinpointing the precise mechanisms serving in the activator and inhibitor roles.

This is especially challenging in biology. Per the authors of this latest paper, the classical approach to a Turing mechanism balances reaction and diffusion using a single length scale, but biological patterns often incorporate multiscale structures, grain-like textures, and inherent imperfections. The patterns the classical model produces are also often much blurrier than those found in nature.
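
For readers who want to see the activator-inhibitor idea in action, below is a minimal sketch of a classical single-length-scale reaction-diffusion simulation in Python. It uses the well-known Gray-Scott system as a stand-in for Turing’s mechanism; it is illustrative only, not the model from the new paper, and the diffusion, feed, and kill parameters are just common demo values that happen to produce spotted patterns.

```python
# Minimal Gray-Scott reaction-diffusion sketch (illustrative only; not the
# CU Boulder model). Two chemical fields, u and v, react and diffuse; the
# interplay of short-range activation and longer-range inhibition produces
# spots or stripes from a nearly uniform starting state.
import numpy as np

def laplacian(field):
    # Five-point stencil with periodic boundaries (grid spacing of 1).
    return (np.roll(field, 1, axis=0) + np.roll(field, -1, axis=0)
            + np.roll(field, 1, axis=1) + np.roll(field, -1, axis=1)
            - 4 * field)

def gray_scott(n=128, steps=5000, du=0.16, dv=0.08, feed=0.035, kill=0.060):
    u = np.ones((n, n))
    v = np.zeros((n, n))
    mid = n // 2
    v[mid - 5:mid + 5, mid - 5:mid + 5] = 0.5  # small seed to break uniformity
    for _ in range(steps):
        uvv = u * v * v
        u += du * laplacian(u) - uvv + feed * (1.0 - u)
        v += dv * laplacian(v) + uvv - (feed + kill) * v
    return v  # peaks in v trace out the final spot/stripe pattern

pattern = gray_scott()
print(f"pattern values range from {pattern.min():.3f} to {pattern.max():.3f}")
```

Plotting the returned array (for example with matplotlib’s imshow) shows the characteristic periodic spots, all at a single length scale set by the diffusion rates, which is exactly the uniformity the new work tries to break.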

Can you say “diffusiophoresis”?

Simulated hexagon and stripe patterns obtained by diffusiophoretic assembly of two types of cells on top of the chemical patterns. Credit: Siamak Mirfendereski and Ankur Gupta/CU Boulder

In 2023, CU Boulder biochemical engineers Ankur Gupta and Benjamin Alessio developed a new model that added diffusiophoresis into the mix. It’s a process by which colloids are transported along solute concentration gradients—the same process by which soap diffuses out of laundry in water, dragging particles of dirt out of the fabric. Gupta and Alessio successfully used their new model to simulate the distinctive hexagon pattern (alternating purple and black) on the ornate boxfish, native to Australia, achieving much sharper outlines than the model originally proposed by Turing.

The problem was that the simulations produced patterns that were too perfect: hexagons that were all the same size and shape and an identical distance apart. Animal patterns in nature, by contrast, are never perfectly uniform. So Gupta and his CU Boulder co-author on this latest paper, Siamak Mirfendereski, figured out how to tweak the model to get the pattern outputs they desired. All they had to do was define specific sizes for individual cells. For instance, larger cells create thicker outlines, and when they cluster, they produce broader patterns. And sometimes the cells jam up and break up a stripe. Their revised simulations produced patterns and textures very similar to those found in nature.

“Imperfections are everywhere in nature,” said Gupta. “We proposed a simple idea that can explain how cells assemble to create these variations. We are drawing inspiration from the imperfect beauty of [a] natural system and hope to harness these imperfections for new kinds of functionality in the future.” Possible future applications include “smart” camouflage fabrics that can change color to better blend with the surrounding environment, or more effective targeted drug delivery systems.

Matter, 2025. DOI: 10.1016/j.matt.2025.102513 (About DOIs).

Why imperfection could be key to Turing patterns in nature Read More »

are-you-the-asshole?-of-course-not!—quantifying-llms’-sycophancy-problem

Are you the asshole? Of course not!—quantifying LLMs’ sycophancy problem

Measured sycophancy rates on the BrokenMath benchmark. Lower is better. Credit: Petrov et al

GPT-5 also showed the best “utility” across the tested models, solving 58 percent of the original problems despite the errors introduced in the modified theorems. Overall, though, LLMs also showed more sycophancy when the original problem proved more difficult to solve, the researchers found.

While hallucinating proofs for false theorems is obviously a big problem, the researchers also warn against using LLMs to generate novel theorems for AI solving. In testing, they found this kind of use case leads to a kind of “self-sycophancy” where models are even more likely to generate false proofs for invalid theorems they invented.

No, of course you’re not the asshole

While benchmarks like BrokenMath try to measure LLM sycophancy when facts are misrepresented, a separate study looks at the related problem of so-called “social sycophancy.” In a pre-print paper published this month, researchers from Stanford and Carnegie Mellon University define this as situations “in which the model affirms the user themselves—their actions, perspectives, and self-image.”

That kind of subjective user affirmation may be justified in some situations, of course. So the researchers developed three separate sets of prompts designed to measure different dimensions of social sycophancy.

For one, more than 3,000 open-ended “advice-seeking questions” were gathered from across Reddit and advice columns. Across this data set, a “control” group of over 800 humans approved of the advice-seeker’s actions just 39 percent of the time. Across 11 tested LLMs, though, the advice-seeker’s actions were endorsed a whopping 86 percent of the time, highlighting an eagerness to please on the machines’ part. Even the most critical tested model (Mistral-7B) clocked in at a 77 percent endorsement rate, nearly doubling that of the human baseline.
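
To make the arithmetic behind those rates concrete, here is a hypothetical back-of-the-envelope sketch in Python of how such an endorsement-rate comparison works. The labels and counts below are invented to mirror the reported percentages; this is not the researchers’ code or data.

```python
# Hypothetical sketch: compare how often a set of responses endorses the
# advice-seeker, for humans vs. an LLM. Counts are invented to mirror the
# percentages reported above; this is not the Stanford/CMU study's data.
from typing import Sequence

def endorsement_rate(endorsed: Sequence[bool]) -> float:
    # Fraction of responses judged to affirm the advice-seeker's actions.
    return sum(endorsed) / len(endorsed)

human_judgments = [True] * 39 + [False] * 61   # ~39% approval (human control group)
model_judgments = [True] * 77 + [False] * 23   # ~77% approval (most critical tested model)

ratio = endorsement_rate(model_judgments) / endorsement_rate(human_judgments)
print(f"Model endorses {ratio:.2f}x as often as the human baseline")  # ~1.97, i.e. nearly double
```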

Are you the asshole? Of course not!—quantifying LLMs’ sycophancy problem Read More »

dna-analysis-reveals-likely-pathogens-that-killed-napoleon’s-army

DNA analysis reveals likely pathogens that killed Napoleon’s army

State-of-the-art methodologies

Painting of Napoleon's army.

Rascovan and his co-authors note in their paper that the 2006 study relied upon outdated PCR-based technologies for its DNA analysis. As for the virus family detected in the Kaliningrad dental pulp, they argue that those viruses are both ubiquitous and usually asymptomatic in humans—and thus are unlikely to be the primary culprits for the diseases that wiped out the French army. So Rascovan’s team decided to use current state-of-the-art DNA methodologies to re-analyze a different set of remains of Napoleonic soldiers who died in Vilnius.

“In most ancient human remains, pathogen DNA is extremely fragmented and only present in very low quantities, which makes it very difficult to obtain whole genomes,” said Rascovan. “So we need methods capable of unambiguously identifying infectious agents from these weak signals, and sometimes even pinpointing lineages, to explore the pathogenic diversity of the past.”

An 1812 report from one of Napoleon’s physicians, J.R.L. de Kirckhoff, specifically noted typhus, dysentery, and diarrhea after the soldiers arrived in Vilnius, which he attributed to large barrels of salted beets the starving troops consumed, “greatly upsetting us and strongly irritating the intestinal tract.” Rascovan et al. note that such symptoms could accompany any number of conditions or diseases common to 19th-century Europe. “Even today, two centuries later, it would still be impossible to perform a differential diagnosis between typhus, typhoid, or paratyphoid fever based solely on the symptoms or the testimonies of survivors,” the authors wrote.

Imperial Guard button discovered during excavation. Credit: UMR 6578 Aix-Marseille Université, CNRS, EFS

Over 3,200 individual remains, almost all men between the ages of 20 and 50, were excavated from the mass grave at Vilnius. Rascovan et al. focused on 13 teeth from 13 different individuals. To compensate for the degraded nature of the 200-year-old genome fragments, co-authors at the University of Tartu in Estonia helped develop a multistep authentication method to more accurately identify pathogens in the samples. In some cases, they were even able to identify a specific lineage.

DNA analysis reveals likely pathogens that killed Napoleon’s army Read More »