Author name: Kris Guyer


Trump just made it much harder to track the nation’s worst weather disasters

The Trump administration’s steep staff cuts at the National Oceanic and Atmospheric Administration (NOAA) triggered shutdowns of several climate-related programs Thursday.

Perhaps most notably, NOAA announced it would be shuttering the “billion-dollar weather and climate disasters” database, offering only vague reasons. Since 1980, the database has made it possible to track the growing costs of the nation’s most devastating weather events, critically pooling various sources of private data that have long been less accessible to the public.

In that time, 403 weather and climate disasters in the US have caused more than $2.945 trillion in damage, and NOAA notes that’s a conservative estimate. Considering that, as CNN noted, the average number of billion-dollar disasters has jumped from nine per year to 24 over the past five years, shutting down the database could leave communities in the dark about the costs of emerging threats. The most NOAA can likely offer is that communities continue consulting the historical data to track trends.

“In alignment with evolving priorities, statutory mandates, and staffing changes, NOAA’s National Centers for Environmental Information (NCEI) will no longer be updating the Billion Dollar Weather and Climate Disasters product,” NOAA announced. “All past reports, spanning 1980-2024, and their underlying data remain authoritative, archived, and available,” NOAA said, but no data would be gathered for 2025 or any year after.

According to NCEI’s FAQ, every state has experienced at least one billion-dollar disaster since 1980, while some states, like Texas, have been hit by more than 100. The Central, South, and Southeast regions of the US are most likely to be hurt most by the data loss, as those regions “typically experience a higher frequency of billion-dollar disasters,” the FAQ said.



Report: DOGE supercharges mass-layoff software, renames it to sound less dystopian

“It is not clear how AutoRIF has been modified or whether AI is involved in the RIF mandate (through AutoRIF or independently),” Kunkler wrote. “However, fears of AI-driven mass-firings of federal workers are not unfounded. Elon Musk and the Trump Administration have made no secret of their affection for the dodgy technology and their intentions to use it to make budget cuts. And, in fact, they have already tried adding AI to workforce decisions.”

Automating layoffs can perpetuate bias, increase worker surveillance, and erode transparency to the point where workers don’t know why they were let go, Kunkler said. For government employees, such imperfect systems risk triggering confusion over worker rights or obscuring illegal firings.

“There is often no insight into how the tool works, what data it is being fed, or how it is weighing different data in its analysis,” Kunkler said. “The logic behind a given decision is not accessible to the worker and, in the government context, it is near impossible to know how or whether the tool is adhering to the statutory and regulatory requirements a federal employment tool would need to follow.”

The situation gets even starker when you imagine mistakes on a mass scale. Don Moynihan, a public policy professor at the University of Michigan, told Reuters that “if you automate bad assumptions into a process, then the scale of the error becomes far greater than an individual could undertake.”

“It won’t necessarily help them to make better decisions, and it won’t make those decisions more popular,” Moynihan said.

The only way to shield workers from potentially illegal firings, Kunkler suggested, is to support unions defending worker rights while pushing lawmakers to intervene. Calling on Congress to ban the use of shadowy tools relying on unknown data points to gut federal agencies “without requiring rigorous external testing and auditing, robust notices and disclosure, and human decision review,” Kunkler said rolling out DOGE’s new tool without more transparency should be widely condemned as unacceptable.

“We must protect federal workers from these harmful tools,” Kunkler said, adding, “If the government cannot or will not effectively mitigate the risks of using automated decision-making technology, it should not use it at all.”



Microsoft effectively raises high-end Surface prices by discontinuing base models

While the Surface Pro and Laptop get price hikes that aren’t technically price hikes, some Surface accessories have had their prices directly increased. The Surface USB-C Travel Hub is now $120 instead of $100, the Surface Arc Mouse is now $90 instead of $80, and a handful of replacement parts are more expensive than they were, according to recent snapshots from the Internet Archive’s Wayback Machine. Generally, Surface Pen accessories and Surface Pro keyboard covers are the same price as before.

Microsoft also raised prices on its Xbox consoles earlier this month while warning customers that game prices could go up to $80 for some releases later this year.

If you’re quick, you can still find the 256GB Surface devices in stock at third-party retailers. For example, Best Buy will sell you a Surface Laptop 7 with a 256GB SSD for $799, $100 less than the price of the 13-inch Surface Laptop that Microsoft just announced. We’d expect these retail listings to vanish over the next few days or weeks, and we wouldn’t expect them to come back in stock once they’re gone.

Increased import tariffs imposed by the Trump administration could explain at least some of these price increases. Though PCs and smartphones are currently exempted from the worst of them (at least for now), global supply chains and shipping costs are complex enough that they could still be increasing Microsoft’s costs indirectly. For the Surface Pro and Surface Laptop, the decision to discontinue the old 256GB models also seems driven by a desire to make the new 12-inch Pro and 13-inch Laptop look like better deals than they did earlier this week. Raising the base price does help clarify the lineup; it just does so at the expense of the consumer.



OpenAI Claims Nonprofit Will Retain Nominal Control

Your voice has been heard. OpenAI has ‘heard from the Attorney Generals’ of Delaware and California, and as a result the OpenAI nonprofit will retain control of OpenAI under their new plan, and both companies will retain the original mission.

Technically they are not admitting that their original plan was illegal and one of the biggest thefts in human history, but that is how you should in practice interpret the line ‘we made the decision for the nonprofit to retain control of OpenAI after hearing from civic leaders and engaging in constructive dialogue with the offices of the Attorney General of Delaware and the Attorney General of California.’

Another possibility is that the nonprofit board finally woke up and looked at what was being proposed and how people were reacting, and realized what was going on.

The letter ‘not for private gain’ that was recently sent to those Attorney Generals plausibly was a major causal factor in any or all of those conversations.

The question is, what exactly is the new plan? The fight is far from over.

  1. The Mask Stays On?

  2. Your Offer is (In Principle) Acceptable.

  3. The Skeptical Take.

  4. Tragedy in the Bay.

  5. The Spirit of the Rules.

As previously intended, OpenAI will transition their for-profit arm, currently an LLC, into a PBC. They will also be getting rid of the capped profit structure.

However they will be retaining the nonprofit’s control over the new PBC, and the nonprofit will (supposedly) get fair compensation for its previous financial interests in the form of a major (but suspiciously unspecified, other than ‘a large shareholder’) stake in the new PBC.

Bret Taylor (Chairman of the Board, OpenAI): The OpenAI Board has an updated plan for evolving OpenAI’s structure.

OpenAI was founded as a nonprofit, and is today overseen and controlled by that nonprofit. Going forward, it will continue to be overseen and controlled by that nonprofit.

Our for-profit LLC, which has been under the nonprofit since 2019, will transition to a Public Benefit Corporation (PBC)–a purpose-driven company structure that has to consider the interests of both shareholders and the mission.

The nonprofit will control and also be a large shareholder of the PBC, giving the nonprofit better resources to support many benefits.

Our mission remains the same, and the PBC will have the same mission.

We made the decision for the nonprofit to retain control of OpenAI after hearing from civic leaders and engaging in constructive dialogue with the offices of the Attorney General of Delaware and the Attorney General of California.

We thank both offices and we look forward to continuing these important conversations to make sure OpenAI can continue to effectively pursue its mission of ensuring AGI benefits all of humanity. Sam wrote the letter below to our employees and stakeholders about why we are so excited for this new direction.

The rest of the post is a letter from Sam Altman, and it sounds like one; you are encouraged to read the whole thing.

Sam Altman (CEO OpenAI): The for-profit LLC under the nonprofit will transition to a Public Benefit Corporation (PBC) with the same mission. PBCs have become the standard for-profit structure for other AGI labs like Anthropic and X.ai, as well as many purpose driven companies like Patagonia. We think it makes sense for us, too.

Instead of our current complex capped-profit structure—which made sense when it looked like there might be one dominant AGI effort but doesn’t in a world of many great AGI companies—we are moving to a normal capital structure where everyone has stock. This is not a sale, but a change of structure to something simpler.

The nonprofit will continue to control the PBC, and will become a big shareholder in the PBC, in an amount supported by independent financial advisors, giving the nonprofit resources to support programs so AI can benefit many different communities, consistent with the mission.

Joshua Achiam (OpenAI, Head of Mission Alignment): OpenAI is, and always will be, a mission-first organization. Today’s update is an affirmation of our continuing commitment to ensure that AGI benefits all of humanity.

I find the structure of this solution not ideal but ultimately acceptable.

The current OpenAI structure is bizarre and complex. It does important good things some of which this new arrangement will break. But the current structure also made OpenAI far less investable, which means giving away more of the company to profit maximizers, and causes a lot of real problems.

Thus, I see the structural changes, in particular the move to a normal profit distribution, as potentially a fair compromise to enable better access to capital – provided it is implemented fairly, and isn’t a backdoor to further shifts.

The devil is in the details. How is all this going to work?

What form will the nonprofit’s control take? Is it only that they will be a large shareholder? Will they have a special class of supervoting shares? Something else?

This deal is acceptable if and only if the nonprofit:

  1. Has truly robust control going forward, that is ironclad and that allows it to guide AI development in practice not only in theory. Is this going to only be via voting shares? That would be a massive downgrade from the current power of the board, which already wasn’t so great. In practice, the ability to win a shareholder vote will mean little during potentially crucial fights like a decision whether to release a potentially dangerous model.

    1. What this definitely still does is give cover to management to do the right thing, if they actively want to do that; I’ll discuss this more later.

  2. Gets a fair share of the profits, that matches the value of its previous profit interests. I am very worried they will still get massively stolen from on this. As a reminder, right now most of the net present value of OpenAI’s future profits belongs to the nonprofit.

  3. Uses those profits to advance its original mission rather than turning into a de facto marketing arm or doing generic philanthropy that doesn’t matter, or both.

    1. There are still clear signs that OpenAI is largely planning to have the nonprofit buy AI services on behalf of other charities, or otherwise do things that are irrelevant to the mission. That would make it an ‘ordinary foundation’ combined with a marketing arm, effectively making its funds useless, although it could still act meaningfully via its control mechanisms.

Remember that in these situations, the ratchet only goes one way. The commercial interests will constantly try to wrestle greater control and ownership of the profits away from us. They will constantly cite necessity and expedience to justify this. You’re playing defense, forever. Every compromise improves their position, and this one definitely will compared to doing nothing.

Or: This deal is getting worse and worse all the time.

Or, from Leo Gao:

Quintin Pope: Common mistake. They forgot to paint “Do Not Open” on the box.

There’s also the issue of the extent to which Altman controls the nonprofit board.

The reason the nonprofit needs control is to impact key decisions in real time. It needs control of a form that lets it do that. Because that kind of lever is not ‘standard,’ there will constantly be pressure to get rid of that ability, with threats of mild social awkwardness if these pressures are resisted.

So with love, now that we have established what you are, now it’s time to haggle over the price.

Rob Wiblin had an excellent thread explaining the attempted conversion, and he has another good explainer on what this new announcement means, as well as an emergency 80,000 Hours podcast on the topic that should come out tomorrow.

Consider this the highly informed and maximally skeptical and cynical take. Which, given the track records here, seems like a highly reasonable place to start.

The central things to know about the new plan are indeed:

  1. The transition to a PBC and removal of the profit cap will still shift priorities, legal obligations and incentives towards profit maximization.

  2. The nonprofit’s ‘control’ is at best weakened, and potentially fake.

  3. The nonprofit’s mission might effectively be fake.

  4. The nonprofit’s current financial interests could largely still be stolen.

It’s an improvement, but it might not effectively be all that much of one?

We need to stay vigilant. The fight is far from over.

Rob Wiblin: So OpenAI just said it’s no longer going for-profit and the non-profit will ‘retain control’. But don’t declare victory yet. OpenAI may actually be continuing with almost the same plan & hoping they can trick us into thinking they’ve stopped!

Or perhaps not. I’ll explain:

The core issue is control of OpenAI’s behaviour, decisions, and any AGI it produces.

  1. Will the entity that builds AGI still have a legally enforceable obligation to make sure AGI benefits all humanity?

  2. Will the non-profit still be able to step in if OpenAI is doing something appalling and contrary to that mission?

  3. Will the non-profit still own an AGI if OpenAI develops it? It’s kinda important!

The new announcement doesn’t answer these questions and despite containing a lot of nice words the answers may still be: no.

(Though we can’t know and they might not even know themselves yet.)

The reason to worry is they’re still planning to convert the existing for-profit into a Public Benefit Corporation (PBC). That means the profit caps we were promised would be gone. But worse… the nonprofit could still lose true control. Right now, the nonprofit owns and directly controls the for-profit’s day-to-day operations. If the nonprofit’s “control” over the PBC is just extra voting shares, that would be a massive downgrade as I’ll explain.

(The reason to think that’s the plan is that today’s announcement sounded very similar to a proposal they floated in Feb in which the nonprofit gets special voting shares in a new PBC.)

Special voting shares in a new PBC are simply very different and much weaker than the control they currently have! First, in practical terms, voting power doesn’t directly translate to the power to manage OpenAI’s day-to-day operations – which the non-profit currently has.

If it doesn’t fight to retain that real power, the non-profit could lose the ability to directly manage the development and deployment of OpenAI’s technology. That includes the ability to decide whether to deploy a model (!) or license it to another company.

Second, PBCs have a legal obligation to balance public interest against shareholder profits. If the nonprofit is just a big shareholder with super-voting shares other investors in the PBC could sue claiming OpenAI isn’t doing enough to pursue their interests (more profits)! Crazy sounding, but true.

And who do you think will be more vociferous in pursuing such a case through the courts… numerous for-profit investors with hundreds of billions on the line, or a non-profit operated by 9 very busy volunteers? Hmmm.

In fact in 2019, OpenAI President Greg Brockman said one of the reasons they chose their current structure and not a PBC was exactly because it allowed them to custom-write binding rules including full control to the nonprofit! So they know this issue — and now want to be a PBC. See here.

If this is the plan it could mean OpenAI transitioning from:

• A structure where they must prioritise the nonprofit mission over shareholders

To:

• A new structure where they don’t have to — and may not even be legally permitted to do so.

(Note how it seems like the non-profit is giving up a lot here. What exactly is it getting in return that makes giving up both the profit caps and true control of the business and AGI the best way to pursue its mission? It seems like nothing to me.)

So, strange as it sounds, this could turn out to be an even more clever way for Sam and profit-motivated investors to get what they wanted. Profit caps would be gone and profit-motivated investors would have much more influence.

And all the while Sam and OpenAI would be able to frame it as if nothing is changing and the non-profit has retained the same control today they had yesterday!

(As an aside, it looks like the SoftBank funding round that was reported as requiring a loss of nonprofit control would still go through. Their press release indicates that all they were actually insisting on was that the profit caps be removed and that they be granted shares in a new PBC.

So it sounds like investors think this new plan would transfer them enough additional profits, and sufficiently neuter the non-profit, for them to feel satisfied.)

Now, to be clear, the above might be wrongheaded.

I’m looking at the announcement cynically, assuming that some staff at OpenAI, and some investors, want to wriggle out of non-profit control however they can — because I think we have ample evidence that that’s the case!

The phrase “nonprofit control” is actually very vague, and those folks might be trying to ram a truck through that hole.

At the same time maybe / hopefully there are people involved in this process who are sincere and trying to push things in the right direction.

On that we’ll just have to wait and see and judge on the results.

Bottom line: The announcement might turn out to be a step in the right direction, but it might also just be a new approach to achieve the same bad outcome less visibly.

So do not relax.

And if it turns out they’re trying to fool you, don’t be fooled.

Gretchen Krueger: The nonprofit will retain control of OpenAI. We still need stronger oversight and broader input on whether and how AI is pursued at OpenAI and all the AI companies, but this is an important bar to see upheld, and I’m proud to have helped push for it!

Now it is time to make sure that control is real—and to guard against any changes that make it harder than it already is to strengthen public accountability. The devil is in the details we don’t know yet, so the work continues.

Roon says the quiet part out loud. We used to think it was possible to do the right thing and care about whether AI killed everyone. Now, those with power say, we can’t even imagine how we could have been so naive, let’s walk that back as quickly as we can so we can finally do some maximizing of the profits.

Roon: the idea of openai having a charter is interesting to me. A relic from a bygone era, belief that governance innovation for important institutions is even possible. Interested parties are tasked with performing exegesis of the founding documents.

Seems clear that the “capped profit” mechanism is from a time in which people assumed agi development would be more singular than it actually is. There are many points on the intelligence curve and many players. We should be discussing when Nvidia will require profit caps.

I do not think that the capped profit requires strong assumptions about a singleton to make sense. It only requires that there be an oligopoly where the players are individually meaningful. If you have close to perfect competition and the players have no market power and their products are fully fungible, then yes, of course being a capped profit makes no sense. Although it also does no real harm, your profits were already rather capped in that scenario.

More than that, we have largely lost our ability to actually ask what problems humanity will face, and then ask what would actually solve those problems, and then try to do that thing. We are no longer trying to backward chain from a win. Which means we are no longer playing to win.

At best, we are creating institutions that might allow the people involved to choose to do the right thing, when the time comes, if they make that decision.

For several reasons, recent developments do still give me hope, even if we get a not-so-great version of the implementation details here.

The first is that this shows that the right forms of public pressure can still work, at least sometimes, for some combination of getting public officials to enforce the law and causing a company like OpenAI to compromise. The fight is far from over, but we have won a victory that was at best highly uncertain.

The second is that this will give the nonprofit at least a much better position going forward, and the ‘you have to change things or we can’t raise money’ argument is at least greatly weakened. Even though the nine members are very friendly to Altman, they are also sufficiently professional class people, Responsible Authority Figures of a type, that one would expect the board to have real limits, and we can push for them to be kept more in-the-loop and be given more voice. De facto I do not think that the nonprofit was going to get much if any additional financial compensation in exchange for giving up its stake.

The third is that, while OpenAI likely still has the ability to ‘weasel out’ of most of its effective constraints and obligations here, this preserves its ability to decide not to. As in, OpenAI and Altman could choose to do the right thing, even if they haven’t had the practice, with the confidence that the board would back them up, and that this structure would protect them from investors and lawsuits.

This is very different from saying that the board will act as a meaningful check on Altman, if Altman decides to act recklessly or greedily.

It is easy to forget that in the world of VCs and corporate America, in many ways it is not only that you have no obligation to do the right thing. It is that you have an obligation, and will face tremendous pressure, to do the wrong thing, in many cases merely because it is wrong, and certainly to do so if the wrong thing maximizes shareholder value in the short term.

Thus, the ability to fight back against that is itself powerful. Altman, and others in OpenAI leadership, are keenly aware of the dangers they are leading us into, even if we do not see eye to eye on what it will take to navigate them or how deadly are the threats we face. Altman knows, even if he claims in public to actively not know. Many members of technical staff know. I still believe most of those who know do not wish for the dying of the light, and want humanity and value to endure in this universe, that they are normative and value good over bad and life over death and so on. So when the time comes, we want them to feel as much permission, and have as much power, to stand up for that as we can preserve for them.

It is the same as the Preparedness Framework, except that in this case we have only ‘concepts of a plan’ rather than an actually detailed plan. If everyone involved with power abides by the spirit of the Preparedness Framework, it is a deeply flawed but valuable document. If those involved with power discard the spirit of the framework, it isn’t worth the tokens that compose it. The same will go for a broad range of governance mechanisms.

Have Altman and OpenAI been endlessly disappointing? Well, yes. Are many of their competitors doing vastly worse? Also yes. Is OpenAI getting passing grades so far, given that reality does not grade on a curve? Oh, hell no. And it can absolutely be, and at some point will be, too late to try and do the right thing.

The good news is, I believe that today is not that day. And tomorrow looks good, too.




Jury orders NSO to pay $167 million for hacking WhatsApp users

A jury has awarded WhatsApp $167 million in punitive damages in a case the company brought against Israel-based NSO Group for exploiting a software vulnerability that hijacked the phones of thousands of users.

The verdict, reached Tuesday, comes as a major victory not just for Meta-owned WhatsApp but also for privacy- and security-rights advocates who have long criticized the practices of NSO and other exploit sellers. The jury also awarded WhatsApp $444 million in compensatory damages.

Clickless exploit

WhatsApp sued NSO in 2019 for an attack that targeted roughly 1,400 mobile phones belonging to attorneys, journalists, human-rights activists, political dissidents, diplomats, and senior foreign government officials. NSO, which works on behalf of governments and law enforcement authorities in various countries, exploited a critical WhatsApp vulnerability that allowed it to install NSO’s proprietary spyware Pegasus on iOS and Android devices. The clickless exploit worked by placing a call to a target’s app. A target did not have to answer the call to be infected.

“Today’s verdict in WhatsApp’s case is an important step forward for privacy and security as the first victory against the development and use of illegal spyware that threatens the safety and privacy of everyone,” WhatsApp said in a statement. “Today, the jury’s decision to force NSO, a notorious foreign spyware merchant, to pay damages is a critical deterrent to this malicious industry against their illegal acts aimed at American companies and the privacy and security of the people we serve.”

NSO created WhatsApp accounts in 2018 and used them a year later to initiate calls that exploited the critical vulnerability on phones belonging to, among others, 100 members of “civil society” from 20 countries, according to an investigation research group Citizen Lab performed on behalf of WhatsApp. The calls passed through WhatsApp servers and injected malicious code into the memory of targeted devices. The targeted phones would then use WhatsApp servers to connect to malicious servers maintained by NSO.



Trump and DOJ try to spring former county clerk Tina Peters from prison

President Donald Trump is demanding the release of Tina Peters, a former election official who parroted Trump’s 2020 election conspiracy theories and is serving nine years in prison for compromising the security of election equipment.

In a post on Truth Social last night, Trump wrote that “Radical Left Colorado Attorney General Phil Weiser ignores Illegals committing Violent Crimes like Rape and Murder in his State and, instead, jailed Tina Peters, a 69-year-old Gold Star mother who worked to expose and document Democrat Election Fraud. Tina is an innocent Political Prisoner being horribly and unjustly punished in the form of Cruel and Unusual Punishment.”

Trump said he is “directing the Department of Justice to take all necessary action to help secure the release of this ‘hostage’ being held in a Colorado prison by the Democrats, for political reasons.”

The former Mesa County clerk was indicted in March 2022 on charges related to the leak of voting-system BIOS passwords and other confidential information. Peters was convicted in August 2024 and later sentenced in a Colorado state court.

“Your lies are well-documented and these convictions are serious,” 21st Judicial District Judge Matthew Barrett told Peters at her October 2024 sentencing. “I am convinced you would do it all over again. You are as defiant a defendant as this court has ever seen.”

DOJ reviews case for “abuse” of process

After Peters’ August 2024 conviction, Colorado Secretary of State Jena Griswold said that “Tina Peters willfully compromised her own election equipment trying to prove Trump’s big lie.”

Peters appealed her conviction in a Colorado appeals court and separately sought relief in US District Court for the District of Colorado. She asked the federal court to order her release on bond while the state court system handles her appeal, and said her health had deteriorated while she was incarcerated.

Trump’s Justice Department submitted a filing on Peters’ behalf in March, saying the US has concerns about “the exceptionally lengthy sentence imposed relative to the conduct at issue, the First Amendment implications of the trial court’s October 2024 assertions relating to Ms. Peters, and whether Colorado’s denial of bail pending appeal was arbitrary or unreasonable under the Eighth and Fourteenth Amendments.”



2025 Alfa Romeo Tonale Turbo review: Italian charm that cuts both ways

While it’s nice to see that this feature is standard equipment, the ACC system can be a bit overeager to close the gap between you and the car in front of you, and it has a bad habit of braking later than it should. This resulted in several panic stops where the adaptive cruise control’s behavior triggered the forward collision warning system despite the fact that I was plodding along at just 15 mph (24 km/h) in stop-and-go traffic.

In a canyon with some paddles

Out in the canyons, I switched to the Dynamic drive mode, which sharpens the Tonale’s reflexes and adds more urgency to the proceedings. The dampers’ added stiffness in this mode cleaned up the crossover’s body control to a tangible degree, but the transmission’s ongoing search for more efficient gears was only alleviated by switching to manual mode and taking over control of the gearbox with the paddles. While the overall tuning is a bit softer than some enthusiasts will prefer, the Tonale’s performance is buoyed by a gutsy powerplant and the confident stopping power delivered by the four-piston Brembo brakes equipped up front.

With a starting price of $36,495 ($48,130 as-tested, with destination fee), the Tonale 2.0 L Turbo is roughly $10,000 cheaper than its hybrid counterpart, and it also undercuts other European premium compact crossovers like the Mercedes-Benz GLA and BMW X1 by thousands. The Tonale’s extroverted character is also a nice change of pace in a segment filled with anonymity, and given the negligible compromise in straight-line performance, the lower curb weight, and the significant cost savings, I’d choose the 2.0 L Turbo over the PHEV model without hesitation.

Telephone dial wheels ftw. Credit: Alfa Romeo

The Tonale 2.0 L Turbo’s biggest rival is arguably the Dodge Hornet GT, which offers a similar driving experience but starts at a base price that’s roughly $6,000 lower. The premium you’ll pay for the Tonale largely comes down to its Italian aesthetic and the sense of occasion that the Alfa Romeo name imparts. Those attributes may seem trivial at first glance, but one should never underestimate the value of style.



A DOGE recruiter is staffing a project to deploy AI agents across the US government



A startup founder said that AI agents could do the work of tens of thousands of government employees.

An aide sets up a poster depicting the logo for the DOGE Caucus before a news conference in Washington, DC. Credit: Andrew Harnik/Getty Images

A young entrepreneur who was among the earliest known recruiters for Elon Musk’s so-called Department of Government Efficiency (DOGE) has a new, related gig—and he’s hiring. Anthony Jancso, cofounder of AccelerateX, a government tech startup, is looking for technologists to work on a project that aims to have artificial intelligence perform tasks that are currently the responsibility of tens of thousands of federal workers.

Jancso, a former Palantir employee, wrote in a Slack group of about 2,000 Palantir alumni that he’s hiring for a “DOGE orthogonal project to design benchmarks and deploy AI agents across live workflows in federal agencies,” according to an April 21 post reviewed by WIRED. Agents are programs that can perform work autonomously.

“We’ve identified over 300 roles with almost full-process standardization, freeing up at least 70k FTEs for higher-impact work over the next year,” he continued, essentially claiming that tens of thousands of federal employees could see many aspects of their job automated and replaced by these AI agents. Workers for the project, he wrote, would be based on site in Washington, DC, and would not require a security clearance; it isn’t clear for whom they would work. Palantir did not respond to requests for comment.

The post was not well received. Eight people reacted with clown face emojis, three reacted with a custom emoji of a man licking a boot, two reacted with custom emoji of Joaquin Phoenix giving a thumbs down in the movie Gladiator, and three reacted with a custom emoji with the word “Fascist.” Three responded with a heart emoji.

“DOGE does not seem interested in finding ‘higher impact work’ for federal employees,” one person said in a comment that received 11 heart reactions. “You’re complicit in firing 70k federal employees and replacing them with shitty autocorrect.”

“Tbf we’re all going to be replaced with shitty autocorrect (written by chatgpt),” another person commented, which received one “+1” reaction.

“How ‘DOGE orthogonal’ is it? Like, does it still require Kremlin oversight?” another person said in a comment that received five reactions with a fire emoji. “Or do they just use your credentials to log in later?”

AccelerateX was originally called AccelerateSF, which VentureBeat reported in 2023 had received support from OpenAI and Anthropic. In its earliest incarnation, AccelerateSF hosted a hackathon for AI developers aimed at using the technology to solve San Francisco’s social problems. According to a 2023 Mission Local story, for instance, Jancso proposed that using large language models to help businesses fill out permit forms to streamline the construction paperwork process might help drive down housing prices. (OpenAI did not respond to a request for comment. Anthropic spokesperson Danielle Ghiglieri tells WIRED that the company “never invested in AccelerateX/SF,” but did sponsor a hackathon AccelerateSF hosted in 2023 by providing free access to its API usage at a time when its Claude API “was still in beta.”)

In 2024, the mission pivoted, with the venture becoming known as AccelerateX. In a post on X announcing the change, the company posted, “Outdated tech is dragging down the US Government. Legacy vendors sell broken systems at increasingly steep prices. This hurts every American citizen.” AccelerateX did not respond to a request for comment.

According to sources with direct knowledge, Jancso disclosed that AccelerateX had signed a partnership agreement with Palantir in 2024. According to the LinkedIn profile of Rachel Yee, who is described as one of AccelerateX’s cofounders, the company looks to have received funding from OpenAI’s Converge 2 Accelerator. Another of AccelerateSF’s cofounders, Kay Sorin, now works for OpenAI, having joined the company several months after that hackathon. Sorin and Yee did not respond to requests for comment.

Jancso’s cofounder, Jordan Wick, a former Waymo engineer, has been an active member of DOGE, appearing at several agencies over the past few months, including the Consumer Financial Protection Bureau, National Labor Relations Board, the Department of Labor, and the Department of Education. In 2023, Jancso attended a hackathon hosted by ScaleAI; WIRED found that another DOGE member, Ethan Shaotran, also attended the same hackathon.

Since its creation in the first days of the second Trump administration, DOGE has pushed the use of AI across agencies, even as it has sought to cut tens of thousands of federal jobs. At the Department of Veterans Affairs, a DOGE associate suggested using AI to write code for the agency’s website; at the General Services Administration, DOGE has rolled out the GSAi chatbot; the group has sought to automate the process of firing government employees with a tool called AutoRIF; and a DOGE operative at the Department of Housing and Urban Development is using AI tools to examine and propose changes to regulations. But experts say that deploying AI agents to do the work of 70,000 people would be tricky if not impossible.

A federal employee with knowledge of government contracting, who spoke to WIRED on the condition of anonymity because they were not authorized to speak to the press, says, “A lot of agencies have procedures that can differ widely based on their own rules and regulations, and so deploying AI agents across agencies at scale would likely be very difficult.”

Oren Etzioni, cofounder of the AI startup Vercept, says that while AI agents can be good at doing some things—like using an internet browser to conduct research—their outputs can still vary widely and be highly unreliable. For instance, customer service AI agents have invented nonexistent policies when trying to address user concerns. Even research, he says, requires a human to actually make sure what the AI is spitting out is correct.

“We want our government to be something that we can rely on, as opposed to something that is on the absolute bleeding edge,” says Etzioni. “We don’t need it to be bureaucratic and slow, but if corporations haven’t adopted this yet, is the government really where we want to be experimenting with the cutting edge AI?”

Etzioni says that AI agents are also not one-to-one replacements for human jobs. Rather, AI can take on certain tasks or make others more efficient, but the technology could not plausibly do the jobs of 70,000 employees outright. “Unless you’re using funny math,” he says, “no way.”

Jancso, first identified by WIRED in February, was one of the earliest recruiters for DOGE in the months before Donald Trump was inaugurated. In December, Jancso, who sources told WIRED was recruited by Steve Davis, president of the Musk-founded Boring Company and a current member of DOGE, used the Palantir alumni group to recruit DOGE members. On December 2nd, 2024, he wrote, “I’m helping Elon’s team find tech talent for the Department of Government Efficiency (DOGE) in the new admin. This is a historic opportunity to build an efficient government, and to cut the federal budget by 1/3. If you’re interested in playing a role in this mission, please reach out in the next few days.”

According to one source at SpaceX, who asked to remain anonymous as they are not authorized to speak to the press, Jancso appeared to be one of the DOGE members who worked out of the company’s DC office in the days before inauguration along with several other people who would constitute some of DOGE’s earliest members. SpaceX did not respond to a request for comment.

Palantir was cofounded by Peter Thiel, a billionaire and longtime Trump supporter with close ties to Musk. Palantir, which provides data analytics tools to several government agencies including the Department of Defense and the Department of Homeland Security, has received billions of dollars in government contracts. During the second Trump administration, the company has been involved in helping to build a “mega API” to connect data from the Internal Revenue Service to other government agencies, and is working with Immigration and Customs Enforcement to create a massive surveillance platform to identify immigrants to target for deportation.

This story originally appeared at WIRED.com.


Wired.com is your essential daily guide to what’s next, delivering the most original and complete take you’ll find anywhere on innovation’s impact on technology, science, business and culture.



Claude’s AI research mode now runs for up to 45 minutes before delivering reports

Still, the report contained a direct quote attributed to William Higinbotham that appears to combine quotes from two sources not cited in the source list. (One must always be careful with confabulated quotes in AI because even outside of this Research mode, Claude 3.7 Sonnet tends to invent plausible ones to fit a narrative.) We recently covered a study that showed AI search services confabulate sources frequently, and in this case, it appears that the sources Claude Research surfaced, while real, did not always match what is stated in the report.

There’s always room for interpretation and variation in detail, of course, but overall, Claude Research did a relatively good job crafting a report on this particular topic. Still, you’d want to dig more deeply into each source and confirm everything if you used it as the basis for serious research. You can read the full Claude-generated result as this text file, saved in markdown format. Sadly, the markdown version does not include the source URLs found in the Claude web interface.

Integrations feature

Anthropic also announced Thursday that it has broadened Claude’s data access capabilities. In addition to web search and Google Workspace integration, Claude can now search any connected application through the company’s new “Integrations” feature. The feature reminds us somewhat of OpenAI’s ChatGPT Plugins feature from March 2023 that aimed for similar connections, although the two features work differently under the hood.

These Integrations allow Claude to work with remote Model Context Protocol (MCP) servers across web and desktop applications. The MCP standard, which Anthropic introduced last November and we covered in April, connects AI applications to external tools and data sources.
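At a high level, MCP is built on JSON-RPC 2.0: the client (the AI application) asks a connected server what tools it exposes, then invokes them on the model’s behalf. The sketch below illustrates those two message shapes only; the tool name and arguments are invented for illustration, and a real integration would go through an MCP SDK and a transport such as stdio or HTTP rather than hand-built dicts.

```python
import json

# Hypothetical sketch of the two JSON-RPC 2.0 messages at the heart of MCP.
# "tools/list" and "tools/call" are the protocol's method names; the tool
# "create_page" and its arguments are made up for this example.

# 1. Client asks the server which tools it exposes.
list_tools = {"jsonrpc": "2.0", "id": 1, "method": "tools/list"}

# 2. Client invokes one of those tools on the model's behalf.
call_tool = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",
    "params": {
        "name": "create_page",                    # hypothetical tool
        "arguments": {"title": "Meeting notes"},  # hypothetical input
    },
}

# Messages travel over the wire as JSON.
wire = json.dumps(call_tool)
```

The point of the standard is that any client speaking these message shapes can talk to any MCP server, which is what lets Anthropic add partners like Zapier or Atlassian without bespoke per-service plumbing.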

At launch, Claude supports Integrations with 10 services, including Atlassian’s Jira and Confluence, Zapier, Cloudflare, Intercom, Asana, Square, Sentry, PayPal, Linear, and Plaid. The company plans to add more partners like Stripe and GitLab in the future.

Each integration aims to expand Claude’s functionality in specific ways. The Zapier integration, for instance, reportedly connects thousands of apps through pre-built automation sequences, allowing Claude to automatically pull sales data from HubSpot or prepare meeting briefs based on calendar entries. With Atlassian’s tools, Anthropic says that Claude can collaborate on product development, manage tasks, and create multiple Confluence pages and Jira work items simultaneously.

Anthropic has made its advanced Research and Integrations features available in beta for users on Max, Team, and Enterprise plans, with Pro plan access coming soon. The company has also expanded its web search feature (introduced in March) to all Claude users on paid plans globally.



Some flies go insomniac to ward off parasites

Those genes associated with metabolism were upregulated, meaning they showed an increase in activity. An observed loss of body fat and protein reserves was evidently a trade-off for resistance to mites. This suggests there was increased lipolysis, or the breakdown of fats, and proteolysis, the breakdown of proteins, in resistant lines of flies.

Parasite paranoia

The depletion of nutrients could make fruit flies less likely to survive even without mites feeding off them, but their tenaciousness when it comes to staying up through the night suggests that being parasitized by mites is still the greater risk. Because mite-resistant flies did not sleep, their oxygen consumption and activity also increased during the night to levels no different from those of control group flies during the day.

Keeping mites away involves moving around so the fly can buzz off if mites crawl too close. Knowing this, Benoit wanted to see what would happen if the resistant flies’ movement was restricted. It was doom. When the flies were restrained, the mite-resistant flies were as susceptible to mites as the controls. Activity alone was important for resisting mites.

Since mites are ectoparasites, or external parasites (as opposed to internal parasites like tapeworms), potential hosts like flies can benefit from hypervigilance. Sleep is typically beneficial to a host invaded by an internal parasite because it increases the immune response. Unfortunately for the flies, sleeping would only make them an easy meal for mites. Keeping both compound eyes out for an external parasite means there is no time left for sleep.

“The pattern of reduced sleep likely allows the flies to be more responsive during encounters with mites during the night,” the researchers said in their study, which was recently published in Biological Timing and Sleep. “There could be differences in sleep occurring during the day, but these differences may be less important as D. melanogaster sleeps much less during the day.”

Fruit flies aren’t the only creatures whose sleep patterns parasites disrupt. Birds and bats have also been shown to shift their sleep and rest when there is a risk of parasitism after dark. For the flies, exhaustion has the upside of better fertility if they manage to avoid bites, so a mate must be worth all those sleepless nights.

Biological Timing and Sleep, 2025.  DOI: 10.1038/s44323-025-00031-7



We finally know a little more about Amazon’s super-secret satellites

“Elon thinks we can do the job with cheaper and simpler satellites, sooner,” a source told Reuters at the time of Badyal’s dismissal. Earlier in 2018, SpaceX launched a pair of prototype cube-shaped Internet satellites for demonstrations in orbit. Then, less than a year after firing Badyal, Musk’s company launched the first full stack of Starlink satellites, debuting the now-standard flat-panel design.

In a post Friday on LinkedIn, Badyal wrote the Kuiper satellites have had “an entirely nominal start” to their mission. “We’re just over 72 hours into our first full-scale Kuiper mission, and the adrenaline is still high.”

The Starlink and Kuiper constellations use laser inter-satellite links to relay Internet signals from node-to-node across their networks. Starlink broadcasts consumer broadband in Ku-band frequencies, while Kuiper will use Ka-band.

Ultimately, SpaceX’s simplified Starlink deployment architecture has fewer parts and eliminates the need for a carrier structure. This allows SpaceX to devote a higher share of the rocket’s mass and volume capacity to the Starlink satellites themselves, replacing dead weight with revenue-earning capability. The dispenser architecture used by Amazon is a more conventional design, and gives satellite engineers more flexibility in designing their spacecraft. It also allows satellites to spread out faster in orbit.

Others involved in the broadband megaconstellation rush have copied SpaceX’s architecture.

China’s Qianfan, or Thousand Sails, satellites have a “standardized and modular” flat-panel design that “meets the needs of stacking multiple satellites with one rocket,” according to the company managing the constellation. While Chinese officials haven’t released any photos of the satellites, which could eventually number more than 14,000, this sounds a lot like the design of SpaceX’s Starlink satellites.

Another piece of information released by United Launch Alliance helps us arrive at an estimate of the mass of each Kuiper satellite. The collection of 27 satellites that launched earlier this week added up to be the heaviest payload ever flown on ULA’s Atlas V rocket. ULA said the total payload the Atlas V delivered to orbit was about 34,000 pounds, equivalent to roughly 15.4 metric tons.

It wasn’t clear whether this number accounted for the satellite dispenser, which likely weighed somewhere in the range of 1,000 to 2,000 pounds at launch. This would put the mass of each Kuiper satellite somewhere between 1,185 and 1,259 pounds (537 and 571 kilograms).
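The estimate above is simple arithmetic, and a quick back-of-envelope check reproduces it. Since the dispenser’s exact mass is unknown, bracketing it from zero up to the 2,000-pound upper estimate recovers the quoted 1,185-to-1,259-pound per-satellite range:

```python
# Rough per-satellite mass estimate for Kuiper, from ULA's payload figure.
LB_PER_KG = 2.20462

total_payload_lb = 34_000  # ULA's stated Atlas V payload to orbit
n_satellites = 27

# The metric-ton conversion quoted in the article.
total_metric_tons = total_payload_lb / LB_PER_KG / 1000
print(f"Total payload: {total_metric_tons:.1f} metric tons")  # ~15.4

# Bracket the unknown dispenser mass from 0 to 2,000 lb.
for dispenser_lb in (0, 2_000):
    per_sat_lb = (total_payload_lb - dispenser_lb) / n_satellites
    print(f"Dispenser {dispenser_lb:>5,} lb -> {per_sat_lb:,.0f} lb per satellite")
```

Ignoring the dispenser entirely gives the high end of about 1,259 pounds per satellite; charging the full 2,000 pounds to the dispenser gives the low end of about 1,185 pounds.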

This is not far off the estimated mass of SpaceX’s most recent iteration of Starlink satellites, a version known as V2 Mini Optimized. SpaceX’s Falcon 9 rocket has launched up to 28 of these flat-packed satellites on a single flight.



Judge on Meta’s AI training: “I just don’t understand how that can be fair use”


Judge downplayed Meta’s “messed up” torrenting in lawsuit over AI training.

A judge who may be the first to rule on whether AI training data is fair use appeared skeptical Thursday at a hearing where Meta faced off with book authors over the social media company’s alleged copyright infringement.

Meta, like most AI companies, holds that training must be deemed fair use, or else the entire AI industry could face immense setbacks, wasting precious time negotiating data contracts while falling behind global rivals. Meta urged the court to rule that AI training is a transformative use that only references books to create an entirely new work that doesn’t replicate authors’ ideas or replace books in their markets.

At the hearing that followed both sides’ requests for summary judgment, however, Judge Vince Chhabria pushed back on Meta’s attorneys, who argued that the company’s Llama AI models posed no threat to authors in their markets, Reuters reported.

“You have companies using copyright-protected material to create a product that is capable of producing an infinite number of competing products,” Chhabria said. “You are dramatically changing, you might even say obliterating, the market for that person’s work, and you’re saying that you don’t even have to pay a license to that person.”

Declaring, “I just don’t understand how that can be fair use,” the shrewd judge apparently drew little response from Meta’s attorney, Kannon Shanmugam, apart from a suggestion that any alleged threat to authors’ livelihoods was “just speculation,” Wired reported.

Authors may need to sharpen their case, which Chhabria warned could be “taken away by fair use” if none of the authors suing, including Sarah Silverman, Ta-Nehisi Coates, and Richard Kadrey, can show “that the market for their actual copyrighted work is going to be dramatically affected.”

Determined to probe this key question, Chhabria pushed authors’ attorney, David Boies, to point to specific evidence of market harms that seemed noticeably missing from the record.

“It seems like you’re asking me to speculate that the market for Sarah Silverman’s memoir will be affected by the billions of things that Llama will ultimately be capable of producing,” Chhabria said. “And it’s just not obvious to me that that’s the case.”

But if authors can prove fears of market harms are real, Meta might struggle to win over Chhabria, and that could set a precedent impacting copyright cases challenging AI training on other kinds of content.

The judge repeatedly appeared to be sympathetic to authors, suggesting that Meta’s AI training may be a “highly unusual case” where even though “the copying is for a highly transformative purpose, the copying has the high likelihood of leading to the flooding of the markets for the copyrighted works.”

And when Shanmugam argued that copyright law doesn’t shield authors from “protection from competition in the marketplace of ideas,” Chhabria resisted the framing that authors weren’t potentially being robbed, Reuters reported.

“But if I’m going to steal things from the marketplace of ideas in order to develop my own ideas, that’s copyright infringement, right?” Chhabria responded.

Wired noted that he asked Meta’s lawyers, “What about the next Taylor Swift?” If AI made it easy to knock off a young singer’s sound, how could she ever compete if AI produced “a billion pop songs” in her style?

In a statement, Meta’s spokesperson reiterated the company’s defense that AI training is fair use.

“Meta has developed transformational open source AI models that are powering incredible innovation, productivity, and creativity for individuals and companies,” Meta’s spokesperson said. “Fair use of copyrighted materials is vital to this. We disagree with Plaintiffs’ assertions, and the full record tells a different story. We will continue to vigorously defend ourselves and to protect the development of GenAI for the benefit of all.”

Meta’s torrenting seems “messed up”

Some have pondered why Chhabria appeared so focused on market harms, instead of hammering Meta for admittedly illegally pirating books that it used for its AI training, which seems to be obvious copyright infringement. According to Wired, “Chhabria spoke emphatically about his belief that the big question is whether Meta’s AI tools will hurt book sales and otherwise cause the authors to lose money,” not whether Meta’s torrenting of books was illegal.

The torrenting “seems kind of messed up,” Chhabria said, but “the question, as the courts tell us over and over again, is not whether something is messed up but whether it’s copyright infringement.”

It’s possible that Chhabria dodged the question for procedural reasons. In a court filing, Meta argued that authors had moved for summary judgment on Meta’s alleged copying of their works, not on “unsubstantiated allegations that Meta distributed Plaintiffs’ works via torrent.”

In the court filing, Meta argued that even if Chhabria agreed that the authors’ request for “summary judgment is warranted on the basis of Meta’s distribution, as well as Meta’s copying,” the authors “lack evidence to show that Meta distributed any of their works.”

According to Meta, authors abandoned any claims that Meta’s seeding of the torrented files served to distribute works, leaving only claims about Meta’s leeching. Meta argued that the authors “admittedly lack evidence that Meta ever uploaded any of their works, or any identifiable part of those works, during the so-called ‘leeching’ phase,” relying instead on expert estimates based on how torrenting works.

It’s also possible that for Chhabria, the torrenting question seemed like an unnecessary distraction. Former Meta attorney Mark Lemley, who quit the case earlier this year, told Vanity Fair that the torrenting was “one of those things that sounds bad but actually shouldn’t matter at all in the law. Fair use is always about uses the plaintiff doesn’t approve of; that’s why there is a lawsuit.”

Lemley suggested that court cases mulling fair use at this moment should focus on the outputs, rather than the training. Citing the ruling that deemed Google Books’ scanning of books to share excerpts fair use, Lemley argued that “all search engines crawl the full Internet, including plenty of pirated content,” so there’s seemingly no reason to stop AI crawling.

But the Copyright Alliance, a nonprofit, non-partisan group supporting the authors in the case, in a court filing alleged that Meta, in its bid to get AI products viewed as transformative, is aiming to do the opposite. “When describing the purpose of generative AI,” Meta allegedly strives to convince the court to “isolate the ‘training’ process and ignore the output of generative AI,” because that’s seemingly the only way that Meta can convince the court that AI outputs serve “a manifestly different purpose from Plaintiffs’ books,” the Copyright Alliance argued.

“Meta’s motion ignores what comes after the initial ‘training’—most notably the generation of output that serves the same purpose of the ingested works,” the Copyright Alliance argued. And the torrenting question should matter, the group argued, because unlike in Google Books, Meta’s AI models are apparently training on pirated works, not “legitimate copies of books.”

Chhabria will not be making a snap decision in the case; he plans to take his time, and the longer he deliberates, the more pressure it likely puts not just on Meta but on every AI company defending training as fair use. Understanding that the entire AI industry potentially has a stake in the ruling, Chhabria apparently sought to relieve some tension at the end of the hearing with a joke, Wired reported.

“I will issue a ruling later today,” Chhabria said. “Just kidding! I will take a lot longer to think about it.”


Ashley is a senior policy reporter for Ars Technica, dedicated to tracking social impacts of emerging policies and new technologies. She is a Chicago-based journalist with 20 years of experience.
