Author name: Kris Guyer


Google solves its mysterious Pixel problem, announces 9a launch date

Google revealed the Pixel 9a last week, but its release plans were put on hold by a mysterious “component quality issue.” Whatever that was, it’s been worked out. Google now says its new budget smartphone will arrive as soon as April 10. The date varies by market, but the wait is almost over.

The first wave of 9a releases on April 10 will include the US, Canada, and the UK. On April 14, the Pixel 9a will arrive in Europe, launching in Germany, Spain, Italy, Ireland, France, Norway, Denmark, Sweden, Netherlands, Belgium, Austria, Portugal, Switzerland, Poland, Czechia, Romania, Hungary, Slovenia, Slovakia, Lithuania, Estonia, Latvia, and Finland. On April 16, the phone will come to Australia, India, Singapore, Taiwan, and Malaysia.

You may think that takes care of Google’s launch commitments, but no—Japan still has no official launch date. That’s a bit strange, as Japan is not a new addition to Google’s list of supported regions. It’s unclear if this has anything to do with the previous component issue. Google says only that the Japanese launch will happen “soon.” Its statements about the delayed release were also vague, with representatives noting that the cause was a “passive component.”


Pillars of Eternity is getting turn-based combat, all but demanding replays

More than just rolling for initiative

Obsidian added a turn-based mode to Pillars of Eternity II: Deadfire in patch 4.1, roughly eight months after the game’s initial release. Designer Josh Sawyer, who worked on Baldur’s Gate II and directed both PoE games, said in a 2023 interview with Touch Arcade that the real-time systems in the PoE games were largely a concession to the old-school CRPG fans that crowdfunded both games.

Turn-based was Sawyer’s stated preference, and he thinks Baldur’s Gate 3 largely put an end to the debate in modern times:

I just think it’s easier to design more intricate combats. I like games with a lot of stats, obviously. (He laughs). But the problem with real time with pause is that it’s honestly very difficult for people to actually parse all of that information, and one of the things I’ve heard a lot from people who’ve played Deadfire in turn based, is that there were things about the game like the affliction and inspiration system that they didn’t really understand very clearly until they played it in turn based.

But both Pillars games were designed with real-time combat in mind, such that, even with his appreciation for the turn-based addition to PoE 2, Sawyer knows “the game wasn’t designed for it,” he told Touch Arcade. This is almost certainly going to be the case, too, for the original PoE, but there could be lessons from PoE 2’s transformation to apply. Other games from that era might also lure folks like me back, though perhaps they, too, have a density of encounters and maps that just can’t cut it for turn-based.

Beyond this notably big “patch” coming to the original PoE, the 10th anniversary patch should make it easier for Mac and Linux (through Proton) users to stay up to date on bug fixes, and for players on GOG and Epic to get Kickstarter rewards and achievements. Lots of audio and visual effects were fixed up, along with a whole heap of mechanical and combat fixes.


With Vulcan’s certification, Space Force is no longer solely reliant on SpaceX

The US Space Force on Wednesday announced that it has certified United Launch Alliance’s Vulcan rocket to conduct national security missions.

“Assured access to space is a core function of the Space Force and a critical element of national security,” said Brig. Gen. Panzenhagen, program executive officer for Assured Access to Space, in a news release. “Vulcan certification adds launch capacity, resiliency, and flexibility needed by our nation’s most critical space-based systems.”

The formal announcement closes a yearslong process that has seen multiple delays in the development of the Vulcan rocket, as well as two anomalies in recent years that were a further setback to certification.

The first of these, an explosion on a test stand in northern Alabama during the spring of 2023, delayed the first test flight of Vulcan by several months. Then, in October 2024, during the second test flight of the rocket, a nozzle on one of the Vulcan’s two side-mounted boosters failed.

A cumbersome process

This nozzle issue, which occurred more than five months ago, compounded the extensive paperwork needed to certify Vulcan for the US Department of Defense’s most sensitive missions. The military offers companies several paths to certify their rockets depending on the number of flights completed, which could be two, three, or more. The fewer the flights, the more paperwork and review that must be done. For Vulcan, this process entailed:

  • 52 certification criteria
  • more than 180 discrete tasks
  • 2 certification flight demonstrations
  • 60 payload interface requirement verifications
  • 18 subsystem design and test reviews
  • 114 hardware and software audits

That sounds like a lot of work, but at least the military’s rules and regulations are straightforward and simple to navigate, right? Anyway, the certification process is now complete, clearing United Launch Alliance to fly national security missions alongside SpaceX, which does so with its fleet of Falcon 9 and Falcon Heavy rockets.


F1’s cruel side is on show as Red Bull to fire Liam Lawson after 2 races

That was before a mediocre time at Alpine and a disastrous stint at McLaren, and while Drive to Survive‘s producers would no doubt have loved the redemption story of Ricciardo returning to Red Bull, it wasn’t to be, as the speed just wasn’t there.

The entire point of having the junior Racing Bull team is so Red Bull’s driver program has another pair of race seats to allow its young drivers to get experience. But in practice, it hasn’t really been much of a success.

Neither Max Verstappen nor Sebastian Vettel (the team’s previous four-time champion) was properly part of the young driver program, nor was Perez. And the racers who were promoted to the Red Bull seat—Daniil Kvyat, Pierre Gasly, Alex Albon—all fell short of Red Bull’s expectations and were replaced.


Yuki Tsunoda (L) and Liam Lawson (R) were teammates at the end of last year. Credit: Rudy Carezzevoli/Getty Images

There may have been doubts about Tsunoda’s speed when he first joined the Racing Bull team (then called Alpha Tauri) back in 2021, but four years on, the Japanese driver has silenced those doubts. Promoting Tsunoda to the main squad seemed an obvious fix to the Perez problem, but Red Bull bosses Christian Horner and Helmut Marko evidently believed otherwise. Instead, for 2025, they went with Lawson, lifting him up from a reserve role that led to 11 races over a couple of years.

And that has been a disaster. This year’s Red Bull is very much not the fastest car in the field—that’s the McLaren, for now. Depending on the day, the Red Bull might only be the fourth fastest car, behind Ferrari and Mercedes, too, at least while Verstappen is driving it. In years past, a heavy revision to the car might have solved things. But with F1’s budget cap, a team like Red Bull can no longer spend its way out of the problem.


RFK Jr. claws back $11.4B in CDC funding amid wave of top-level departures

Those departures follow Kevin Griffis, head of the CDC’s office of communications, who left last week; Robin Bailey, the agency’s chief operating officer, who left late last month; and Nirav Shah, a former CDC principal deputy director.

Pulled funding

Meanwhile, NBC News reported this afternoon that the Department of Health and Human Services (HHS) is pulling back $11.4 billion in funding from the agency, which it allocated to state and local health departments as well as partners.

NBC reported that the funds were largely used for COVID-19 testing and vaccination, and to support community health workers and initiatives that address pandemic health disparities among high-risk and underserved populations, such as rural communities and minority populations. The funds also supported global COVID-19 projects.

“The COVID-19 pandemic is over, and HHS will no longer waste billions of taxpayer dollars responding to a non-existent pandemic that Americans moved on from years ago,” HHS Director of Communications Andrew Nixon said in a statement. “HHS is prioritizing funding projects that will deliver on President Trump’s mandate to address our chronic disease epidemic and Make America Healthy Again.”

State health departments told NBC News that they’re still evaluating the impact of the withdrawn funding. On Monday, some grantees received notices that read: “Now that the pandemic is over, the grants and cooperative agreements are no longer necessary as their limited purpose has run out.”

Since the public health emergency for COVID-19 was declared over in the US on May 11, 2023, more than 92,000 Americans have died from the pandemic virus, according to CDC data. In total, the pandemic has killed over 1.2 million people in the US.


No cloud needed: Nvidia creates gaming-centric AI chatbot that runs on your GPU

Nvidia has seen its fortunes soar in recent years as its AI-accelerating GPUs have become worth their weight in gold. Most people use their Nvidia GPUs for gaming rather than AI, but why not both? Nvidia has a new AI you can run at the same time as your games: the just-released, experimental G-Assist. It runs locally on your GPU to help you optimize your PC and get the most out of your games. It can do some neat things, but Nvidia isn’t kidding when it says this tool is experimental.

G-Assist is available in the Nvidia desktop app, and it consists of a floating overlay window. After invoking the overlay, you can either type or speak to G-Assist to check system stats or make tweaks to your settings. You can ask basic questions like, “How does DLSS Frame Generation work?” but it also has control over some system-level settings.

By calling up G-Assist, you can get a rundown of how your system is running, including custom data charts created on the fly by G-Assist. You can also ask the AI to tweak your machine, for example, optimizing the settings for a particular game or toggling on or off a setting. G-Assist can even overclock your GPU if you so choose, complete with a graph of expected performance gains.


Nvidia demoed G-Assist last year with some impressive features tied to the active game. That version of G-Assist could see what you were doing and offer suggestions about how to reach your next objective. The game integration is sadly quite limited in the public version, supporting just a few games, like Ark: Survival Evolved.

There is, however, support for a number of third-party plug-ins that give G-Assist control over Logitech G, Corsair, MSI, and Nanoleaf peripherals. So, for instance, G-Assist could talk to your MSI motherboard to control your thermal profile or ping Logitech G to change your LED settings.


Apple barred from Google antitrust trial, putting $20 billion search deal on the line

Apple has suffered a blow in its efforts to salvage its lucrative search placement deal with Google. A new ruling from the DC Circuit Court of Appeals affirms that Apple cannot participate in Google’s upcoming antitrust hearing, which could leave a multibillion-dollar hole in Apple’s balance sheet. The judges in the case say Apple simply waited too long to get involved.

Just a few years ago, a high-stakes court case involving Apple and Google would have found the companies on opposing sides, but not today. Apple’s and Google’s interests are strongly aligned here, to the tune of $20 billion. Google forks over that cash every year, and it’s happy to do so to secure placement as the default search provider in the Safari desktop and mobile browser.

The antitrust penalties pending against Google would make that deal impermissible. Throughout the case, the government made the value of defaults clear—most people never change them. That effectively delivers Google a captive audience on Apple devices.

Google’s ongoing legal battle with the DOJ’s antitrust division is shaping up to be the most significant action the government has taken against a tech company since the Microsoft case in the late ’90s. Perhaps the long period of stability since then tricked Google’s partners into thinking nothing would change, but the seriousness of the government’s proposed remedies seems to have convinced them otherwise.

Google lost the case in August 2024, and the government proposed remedies in October. According to MediaPost, the appeals court took issue with Apple’s sluggishness in choosing sides. It didn’t even make its filing to participate in the remedy phase until November, some 33 days after the initial proposal, a delay the judges wrote “seems difficult to justify.”

When Google returns to court in the coming weeks, the company’s attorneys will not be flanked by Apple’s legal team. While Apple will be allowed to submit written testimony and file friend-of-the-court briefs, it will not be able to present evidence to the court or cross-examine witnesses, as it sought. Apple argued that it was entitled to do so because it had a direct stake in the outcome.


On (Not) Feeling the AGI

Ben Thompson interviewed Sam Altman recently about building a consumer tech company, and about the history of OpenAI. Mostly it is a retelling of the story we’ve heard before, and if anything Altman is very good about pushing back on Thompson when Thompson tries to turn OpenAI’s future into the next Facebook, complete with an advertising revenue model.

It is such a strange perspective to witness. They do not feel the AGI, let alone the ASI. The downside risks of AI, let alone existential risks, are flat-out not discussed; this is a world where that’s not even a problem for Future Earth.

Then we contrast this with the new Epoch model of economic growth from AI, which can produce numbers like 30% yearly economic growth. Epoch feels the AGI.

Given the discussion of GPT-2 in the OpenAI safety and alignment philosophy document, I wanted to note that his explanation there was quite good.

Sam Altman: The GPT-2 release, there were some people who were just very concerned about, you know, probably the model was totally safe, but we didn’t know we wanted to get — we did have this new and powerful thing, we wanted society to come along with us.

Now in retrospect, I totally regret some of the language we used and I get why people are like, “Ah man, this was like hype and fear-mongering and whatever”, it was truly not the intention. The people who made those decisions had I think great intentions at the time, but I can see now how it got misconstrued.

As I said a few weeks ago, ‘this is probably totally safe but we don’t know for sure’ was exactly the correct attitude to initially take to GPT-2, given my understanding of what they knew at the time. The messaging could have made this clearer, but it very much wasn’t hype or fearmongering.

Altman repeatedly emphasizes that what he wanted to do from the beginning, what he still most wants to do, is build AGI.

Altman’s understanding of what he means by that, and what the implications will be, continues to seem increasingly confused. Now it seems it’s… fungible? And not all that transformative?

Sam Altman: My favorite historical analog is the transistor for what AGI is going to be like. There’s going to be a lot of it, it’s going to diffuse into everything, it’s going to be cheap, it’s an emerging property of physics and it on its own will not be a differentiator.

This seems bonkers crazy to me. First off, it seems to include the idea of ‘AGI’ as a fungible commodity, as a kind of set level. Even if AI stays for substantial amounts of time at ‘roughly human’ levels, differentiation between ‘roughly which humans in which ways, exactly’ is a giant deal, as anyone who has dealt with humans knows. There isn’t some natural narrow attractor level of capability ‘AGI.’

Then there’s the obvious question: why would you be able to ‘diffuse AGI into everything’ and expect the world to otherwise look not so different, the way it did with transistors? Altman also says this:

Ben Thompson: What’s going to be more valuable in five years? A 1-billion daily active user destination site that doesn’t have to do customer acquisition, or the state-of-the-art model?

Sam Altman: The 1-billion user site I think.

That again implies little differentiation in capability, and he expects commoditization of everything but the very largest models to happen quickly.

Charles: This seems pretty incompatible with AGI arriving in that timeframe or shortly after, unless it gets commoditised very fast and subsequently improvements plateau.

Similarly, Altman in another interview continues to go with the line ‘I kind of believe we can launch the first AGI and no one cares that much.’

The whole thing is pedestrian, he’s talking about the Next Great Consumer Product. As in, Ben Thompson is blown away that this is the next Facebook, with a similar potential. Thompson and Altman are talking about issues of being a platform versus an aggregator and bundling and how to make ad revenue. Altman says they expect to be a platform only in the style of a Google, and wisely (and also highly virtuously) hopes to avoid the advertising that I sense has Thompson very excited, as he continues to assume ‘people won’t pay’ so the way you profit from AGI (!!!) is ads. It’s so weird to see Thompson trying to sell Altman on the need to make our future an ad-based dystopia, and the need to cut off the API to maximize revenue.

Such considerations do matter, and I think that Thompson’s vision is wrong on both business level and also on the normative level of ‘at long last we have created the advertising fueled cyberpunk dystopia world from the novel…’ but that’s not important now. Eyes on the damn prize!

I don’t even know how to respond to a vision so unambitious. I cannot count that low.

I mean, I could, and I have preferences over how we do so when we do, but it’s bizarre how much this conversation about AGI does not feel the AGI.

Altman’s answers in the DeepSeek section are scary. But it’s Thompson who really, truly, profoundly, simply does not get what is coming at all, or how you deal with this type of situation, and this answer from Altman is very good (at least by 2025 standards):

Ben Thompson: What purpose is served at this point in being sort of precious about these releases?

Sam Altman: I still think there can be big risks in the future. I think it’s fair that we were too conservative in the past. I also think it’s fair to say that we were conservative, but a principle of being a little bit conservative when you don’t know is not a terrible thing.

I think it’s also fair to say that at this point, this is going to diffuse everywhere and whether it’s our model that does something bad or somebody else’s model that does something bad, who cares? But I don’t know, I’d still like us to be as responsible an actor as we can be.

Other Altman statements, hinting at getting more aggressive with releases, are scarier.

They get to regulation, where Thompson repeats the bizarre perspective that previous earnest calls for regulations that only hit OpenAI and other frontier labs were an attempt at regulatory capture. And Altman basically says (in my words!), fine, the world doesn’t want to regulate only us and Google and a handful of others at the top, so we switched from asking for regulations to protect everyone into regulations to pave the way for AI.

Thus, the latest asks from OpenAI are to prevent states from regulating frontier models, and to declare universal free fair use for all model training purposes, saying to straight up ignore copyright.

Some of this week’s examples, on top of Thompson and Altman.

Spor: I genuinely get the feeling that no one *actually* believes in superintelligence except for the doomers

I think they were right about this (re: common argument against e/acc on x) and i have to own up to that.

John Pressman: There’s an entire genre of Guy on here whose deal is basically “Will the singularity bring me a wife?” and the more common I learn this guy is the less I feel I have in common with others.

Also this one:

Rohit: Considering AGI is coming, all coding is about to become vibe coding, and if you don’t believe it then you don’t really believe in AGI do you

Ethan Mollick: Interestingly, if you look at almost every investment decision by venture capital, they don’t really believe in AGI either, or else can’t really imagine what AGI would mean if they do believe in it.

Epoch creates the GATE model, explaining that if AI is highly useful, it will also get highly used to do a lot of highly useful things, and that would by default escalate quickly. The model is, as all such things are, simplified in important ways, ignoring regulatory friction issues and also the chance we lose control or all die.

My worry is that by ignoring regulatory, legal and social frictions in particular, Epoch has not modeled the questions we should be most interested in, as in what to actually expect if we are not in a takeoff scenario. The paper does explicitly note this.

The model’s default result, setting aside the excluded issues, is roughly 30% additional yearly economic growth.

You can play with their simulator here, and their paper is here.

Epoch AI: We developed GATE: a model that shows how AI scaling and automation will impact growth.

It predicts trillion‐dollar infrastructure investments, 30% annual growth, and full automation in decades.

Tweak the parameters—these transformative outcomes are surprisingly hard to avoid.

Imagine if a central bank took AI seriously. They’d build GATE—merging economics with AI scaling laws to show how innovation, automation, and investment interact.

At its core: more compute → more automation → growth → more investment in chips, fabs, etc.

Even when investors are uncertain, GATE predicts explosive economic growth within two decades. Trillions of dollars flow into compute, fabs, and related infrastructure—even before AI generates much value—because investors anticipate massive returns from widespread AI automation.

We’ve created an interactive sandbox so you can explore these dynamics yourself. Test your own assumptions, run different scenarios, and visualize how the economy might evolve as AI automation advances.

GATE has important limitations: no regulatory frictions, no innovation outside AI, and sensitivity to uncertain parameters. We see it as a first-order approximation of AI’s dynamics—try it out to learn how robust its core conclusions are!

Charles Foster: Epoch AI posts, for dummies
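The core loop Epoch describes (more compute → more automation → growth → more investment) can be caricatured in a few lines of code. This is my own toy sketch with invented parameters, not the actual GATE model, which is far richer:

```python
# Toy feedback loop: compute buys automation, automation raises output,
# output funds more compute. All parameters here are invented for
# illustration and bear no relation to GATE's calibrated values.

def simulate(years=20, compute=1.0, output=1.0,
             invest_share=0.05, max_gain=0.3):
    """Return the annual growth rate in each simulated year."""
    rates = []
    for _ in range(years):
        # Share of tasks automated: a saturating function of compute.
        automated = compute / (1.0 + compute)
        # Output grows with the automated share, approaching max_gain.
        new_output = output * (1.0 + max_gain * automated)
        rates.append(new_output / output - 1.0)
        # A fixed share of output is reinvested in more compute.
        compute += invest_share * new_output
        output = new_output
    return rates

rates = simulate()
print(f"year 1: {rates[0]:.1%}, year 20: {rates[-1]:.1%}")
```

With these made-up numbers, annual growth climbs from 15% toward the 30% ceiling as the automated share saturates. The qualitative point is that even modest reinvestment plus automation gains compound into escalating growth, which is the shape of GATE’s result.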

Epoch’s Ege Erdil and Matthew Barnett argue that most AI value will come from broad automation, not from R&D and scientific progress. That’s not a knock on the value of R&D, it’s saying the broad automation is even more valuable. Broadly automating labor is the name of their game.

One could say ‘wait, the reason you can do broad automation is exactly because of AI R&D, so that still makes R&D super valuable,’ so make of that dilemma what you will. A lot of this likely comes from the conflation of ‘formalized R&D’ as it is currently practiced, and ‘generally figuring things out’ which goes well beyond formalized R&D and will be turbocharged across the board. Yes, only ~20% of productivity gains look like they come from ‘R&D’ but that’s because we’re only counting formal R&D, and plausibly ignoring compounding effects.

Epoch argues against the likelihood of a full-on AI R&D singularity (or full RSI) via the standard arguments and emphasizing the need for a variety of cognitive skills. I don’t find this generally convincing, but they’re making a narrower argument than usual, that AI will first be able to fully automate a large share of the general workforce. I suppose sure, but also that’s going to happen rather quickly in such scenarios anyway?

I was pleased to see Tyler Cowen link to this without derision or claiming that it assumed its conclusions, even though it would have been highly reasonable to make similar objections.

I find very high growth scenarios highly plausible if we assume away loss of control and other existential risks along the way, and assume all of this is pointed in pro-human directions and that capabilities continue to scale.

Alas, I worry that the model here does not address most people’s ‘true objections.’

I don’t intend for this to be my definitive version of this, but sketching quickly, the plausibly true objections fall roughly into two categories.

First, there’s the ones that are often driving objections but that aren’t actually objections in this context:

  1. ASI likely kills everyone. Yeah, yeah. True but here we’re assuming it doesn’t.

  2. Disbelief on priors, absurdity heuristic, ‘sounds sci-fi’ or Nothing Ever Happens.

  3. Belief that tech is hype or always involves tons of hype, so this is also hype.

  4. Large growth would have implications I don’t want to think about, so no.

  5. Large growth means nothing matters so I’m going to act as if it won’t happen.

  6. Failure to even feel the AGI.

That’s all understandable, but not especially relevant. It’s a physical question, and it’s of the form of solving for the [Y] in ‘[X] → [Y].’

Second, there’s actual arguments, in various combinations, such as:

  1. AI progress will stall before we reach superintelligence (ASI), because of reasons.

  2. AI won’t be able to solve robotics or act physically, because of reasons.

  3. Partial automation, even 90% or 99%, is very different from 100%, O-ring theory.

  4. Physical bottlenecks and delays prevent growth. Intelligence only goes so far.

  5. Regulatory and social bottlenecks prevent growth this fast, INT only goes so far.

  6. Decreasing marginal value means there literally aren’t goods with which to grow.

  7. Dismissing ability of AI to cause humans to make better decisions.

  8. Dismissing ability of AI to unlock new technologies.

And so on.

One common pattern is that relatively ‘serious people’ who do at least somewhat understand what AI is going to be will put out highly pessimistic estimates and then call those estimates wildly optimistic and bullish. Which, compared to the expectations of most economists or regular people, they are, but that’s not the right standard here.

Dean Ball: For the record: I expect AI to add something like 1.5-2.5% GDP growth per year, on average, for a period of about 20 years that will begin in the late 2020s.

That is *wildly* optimistic and bullish. But I do not believe 10% growth scenarios will come about.

Daniel Kokotajlo: Does that mean you think that even superintelligence (AI better than the best humans at everything, while also being faster and cheaper) couldn’t grow the economy at 10%+ speed? Or do you think that superintelligence by that definition won’t exist?

Dean Ball: the latter. it’s the “everything” that does it. 100% is a really big number. It’s radically bigger than 80%, 95%, or 99%. if bottlenecks persist–and I believe strongly that they will–we will have see baumol issues.

Daniel Kokotajlo: OK, thanks. Can you give some examples of things that AIs will remain worse than the best humans at 20 years from now?

Dean Ball: giving massages, running for president, knowing information about the world that isn’t on the internet, performing shakespeare, tasting food, saying sorry.

Samuel Hammond (responding to DB’s OP): That’s my expectation too, at least into the early 2030s as the last mile of resource and institutional constraints get ironed out. But once we have strong AGI and robotics production at scale, I see no theoretical reason why growth wouldn’t run much faster, a la 10-20% GWP. Not indefinitely, but rapidly to a much higher plateau.

Think of AGI as a step change increase in the Solow-Swan productivity factor A. This pushes out the production possibilities frontier, making even first world economies like a developing country. The marginal product of capital is suddenly much higher, setting off a period of rapid “catch up growth” to the post-AGI balanced growth path with the capital / labor ratio in steady state, signifying Baumol constraints.

Dean Ball: Right—by “AI” I really just meant the software side. Robotics is a totally separate thing, imo. I haven’t thought about the economics of robotics carefully but certainly 10% growth is imaginable, particularly in China where doing stuff is legal-er than in the us.
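Hammond’s Solow-Swan framing can be made concrete. This is just the standard textbook Cobb-Douglas setup, not anything taken from the exchange above:

```latex
% Cobb-Douglas production with total factor productivity A:
Y = A K^{\alpha} L^{1-\alpha}

% Marginal product of capital:
\frac{\partial Y}{\partial K} = \alpha A K^{\alpha-1} L^{1-\alpha}

% A step change A -> cA multiplies the marginal product of capital by
% the same factor c, which in the Solow-Swan model triggers rapid
% capital accumulation ("catch-up growth") until the economy reaches
% its new, higher balanced growth path.
```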

Thinking about AI impacts down the line without robotics seems to me like thinking about the steam engine without railroads, or computers without spreadsheets. You can talk about that if you want, but it’s not the question we should be asking. And even then, I expect more – for example I asked Claude about automating 80% of non-physical tasks, and it estimated about 5.5% additional GDP growth per year.

Another way of thinking about Dean Ball’s growth estimate: 20 years of it would roughly turn Portugal into the Netherlands, or China into Romania. Does that seem plausible?
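For a sense of what those growth rates compound to over Ball’s 20-year window (pure arithmetic, using only the rates quoted above):

```python
# Compound growth factor over 20 years for Dean Ball's 1.5-2.5% range,
# versus the roughly 30% figure from Epoch's GATE model. No data used
# beyond the growth rates quoted in the text.

def compound(rate, years=20):
    """Total growth factor after `years` of constant annual growth."""
    return (1 + rate) ** years

for rate in (0.015, 0.02, 0.025, 0.30):
    print(f"{rate:.1%}/yr for 20 years -> x{compound(rate):.2f}")
```

At 2% a year the economy ends up roughly 1.5 times larger after two decades; at the 30% figure from Epoch’s GATE model it would be nearly 200 times larger.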

If you make a sufficient number of the pessimistic objections on top of each other, where we stall out before ASI and have widespread diffusion bottlenecks and robotics proves mostly unsolvable without ASI, I suppose you could get to 2% a year scenario. But I certainly wouldn’t call that wildly optimistic.

Distinctly, on the other objections, I will reiterate my position that various forms of ‘intelligence only goes so far’ are almost entirely a Skill Issue, certainly over a decade-long time horizon and at the margins discussed here, amounting to Intelligence Denialism. The ASI cuts through everything. And yes, physical actions take non-zero time, but that’s being taken into account, future automated processes can go remarkably quickly even in the physical realm, and a lot of claims of ‘you can only know [X] by running a physical experiment’ are very wrong, again a Skill Issue.

On the decreasing marginal value of goods, I think this is very much a ‘dreamed of in your philosophy’ issue, or perhaps it is definitional. I very much doubt that the physical limits kick in that close to where we are now, even if in important senses our basic human needs are already being met.

Altman’s model of how AGI will impact the world is super weird if you take it seriously as a physical model of a future reality.

It’s kind of like there is this thing, ‘intelligence.’ It’s basically fungible, as it asymptotes quickly at close to human level, so it won’t be a differentiator.

There’s only so intelligent a thing can be, either in practice around current tech levels or in absolute terms, it’s not clear which. But it’s not sufficiently beyond us to be that dangerous, or for the resulting world to look that different. There’s risks, things that can go wrong, but they’re basically pedestrian, not that different from past risks. AGI will get released into the world, and ‘no one will care that much’ about the first ‘AGI products.’

I’m not willing to say that something like that is purely physically impossible, or has probability epsilon or zero. But it seems pretty damn unlikely to be how things go. I don’t see why we should expect this fungibility, or for capabilities to stall out exactly there even if they do stall out. And even if that did happen, I would expect things to change quite a lot more.

It’s certainly possible that the first AGI-level product will come out – maybe it’s a new form of Deep Research, let’s say – and initially most people don’t notice or care all that much. People often ignore exponentials until things are upon them, and can pretend things aren’t changing until well past points of no return. People might sense there were boom times and lots of cool toys without understanding what was happening, and perhaps AI capabilities don’t get out of control too quickly.

It still feels like an absurd amount of downplaying, from someone who knows better. And he’s far from alone.


On (Not) Feeling the AGI Read More »


The Wheel of Time delivers on a pivotal fan-favorite moment

Andrew Cunningham and Lee Hutchinson have spent decades of their lives with Robert Jordan and Brandon Sanderson’s Wheel of Time books, and they previously brought that knowledge to bear as they recapped each first season episode and second season episode of Amazon’s WoT TV series. Now we’re back in the saddle for season 3—along with insights, jokes, and the occasional wild theory.

These recaps won’t cover every element of every episode, but they will contain major spoilers for the show and the book series. We’ll do our best to not spoil major future events from the books, but there’s always the danger that something might slip out. If you want to stay completely unspoiled and haven’t read the books, these recaps aren’t for you.

New episodes of The Wheel of Time season three will be posted for Amazon Prime subscribers every Thursday. This write-up covers episode four, “The Road to the Spear,” which was released on March 20.

Lee: Wow. That was an episode right there. Before we get into the recapping, maybe it’s a good idea to emphasize to the folks who haven’t read the books just what a big deal Rand’s visit to Rhuidean is—and why what he saw was so important.

At least for me, when I got to this point (which happens in book four and is being transposed forward a bit by the show), this felt like the first time author Robert Jordan was willing to pull the curtain back and actually show us something substantive about what’s really happening. We’ve already gotten a couple of flashbacks to Coruscant, er, the Age of Legends in the show, but my recollection is that in the books, Rand’s trip through the glass columns is the first time we really get to see just how advanced things were before the Breaking of the World.

Image of a city obscured by clouds.

Our heroes approach Rhuidean, the clouded city. Credit: Prime/Amazon MGM Studios

Andrew: Yes! If you’re a showrunner or writer or performer with any relationship with the source material—and Rafe Lee Judkins certainly knows all of these books cover to cover, because you would need to if you wanted to navigate a show through all the ripple effects emanating outward from the changes he’s making—this is probably one of the Big Scenes that you’re thinking about adapting from the start.

Because yes, it’s a big character moment for Rand, but it’s also grappling with some of the story’s big themes—the relationship between past, present, and future and how inextricably they’re all intertwined—and building a world that’s even bigger than the handful of cities and kingdoms our characters have passed through so far.

So do we think they pulled it off? Do you want to start with the Aiel stuff we get before we head into Rhuidean?

Lee: Well, let’s see—we get the sweat tents, and we get Aviendha and Lan having a dance-off over whose weapons are more awesome, and we get our first glimpse at the Shaido Aiel, who will be sticking around as long-term bad guys. We learn that at least one of the Wise Ones, Bair (played by Nukâka Coster-Waldau, real-life spouse of Game of Thrones actor Nikolaj Coster-Waldau) seems to be able to channel. And we also briefly meet aspiring Shaido clan chief Couladin—a name to remember, because this guy will definitely be back.

I appreciate that we’re actually spending more time with the Aiel here, allowing us to see a few of them as people rather than as tropey desert-dwellers. And I appreciate that we continue to be mercifully free of Robert Jordan’s kinks. The Shaido Wise One Sevanna, for example, is as bedecked in finery and necklaces as her book counterpart, but unlike the book character, the on-screen version of Sevanna seems to have no problem keeping her bodice from constantly falling down.

On the whole, though, the impression the show gives is that being Aiel is hard and the Three-Fold Land sucks. It’s not where I’d want to pop up if I were transported to Randland, that’s for sure.

Image of Sevanna, Shaido Wise One.

Sevanna’s hat is extremely fancy. Credit: Prime/Amazon MGM Studios

Andrew: I’ve always liked what the story is doing with Rhuidean, though. For context, it’s a bit like the Accepted test in the White Tower that we see Nynaeve and Egwene take—a big ter’angreal located in the unfinished ruins of a holy city that all Aiel leaders must pass through to prove that they are worthy of leadership. But unlike the Accepted test, which tests your character by throwing you into emotionally fraught hypothetical situations, Rhuidean is about concrete events, what has happened and what may happen.

Playing into the series’ strict One Power-derived gender binary, men have to face the past to see that their proud and mighty warrior race are actually honorless failed pacifists. Women are made to reckon with every possible permutation of the future, no matter how painful.

It’s just an interesting thought experiment, given how many historical errors and atrocities have repeated themselves because we cannot directly transfer firsthand memories from generation to generation. How would leaders lead differently if they could see every action that led their people to this point? If they could glimpse the future implications of their current actions? And isn’t it nice to imagine some all-powerful, neutral, third-party arbiter whose sole purpose is to keep people who don’t deserve to hold power from holding it? Sigh.

Anyway, I think the show visualizes all of this effectively, even if the specifics of some of the memories differ. We can get into the specifics of what is shown, if you like, but we get a lot of Rand and Moiraine here, after a couple of episodes where those characters have been backgrounded a bit.

Lee: I agree—I was afraid that the show would misstep here, but I think they nailed it. I’ve never had a very concrete vision for what the “forest of glass columns” that Rand must traverse is supposed to look like, but I dig the presentation in the show, and the tying together of Rand’s physical steps with stepping back through time. (I also like the trick of having Josha Stradowski in varying degrees of prosthetics playing Rand’s own ancestors, going all the way back to the Age of Legends.)

Your point about leaders perhaps acting differently if forced to face their pasts before assuming leadership is solid, and as we see, some of the Aiel just cannot handle the truth: that for all the ways that honor stratifies their society, they are at their core descended from oath-breakers, offshoots of the “true” pacifist Jenn Aiel who once served the Aes Sedai. Some Aiel, like Couladin’s brother Muradin, are so incapable of accepting that truth that death—along with some self-eyeball-scooping—is the only way forward.

The thing that I appreciate is that the portrayal of the past succeeds for me in the same way that it does in the books—it viscerally drives home the magnitude of what was lost and the incomprehensible tragedy of the fall from peaceful utopia to dark-age squalor. The idea of sending out thousands of chora tree cuttings because it’s literally the last thing that can be done is heart-breaking.

Image of one of Rand's Aiel ancestors

Sometimes the make-up works, sometimes it feels a little forced. Credit: Prime/Amazon MGM Studios

Andrew: “Putting a wig and prosthetics on Josha Stradowski so he can play all of Rand’s ancestors” is more successful in some flashbacks than it is in others. He plays an old guy like a young guy playing an old guy, and it’s hard to mistake him for anything else. I do like the idea of it a lot, though!

But yes, as Rand says to Aviendha once they have both been through the wringer of their respective tests, he now knows enough about the Aiel to know how much he doesn’t know.

We don’t see any of Aviendha’s test, though she enters Rhuidean at the same time as Rand and Moiraine. (Rand enters because he is descended from the Aiel, and they all think he’s probably the central character in a prophecy; Moiraine goes in mainly because an Aiel Wise One accidentally tells her she’ll die if she doesn’t.) At this point we have pointedly not been allowed glimpses into Aviendha or Elayne’s psyches, which makes me wonder if the show is dancing around telling us about A Certain Polycule or if it plans to downplay that relationship altogether.

I feel like the show is too respectful of the major relationships in the books to skip it, but they are playing some cards close to the chest.

Lee: Before we push on, I want to emphasize something to show-watchers that may not have been fully explicated: Yes, that was Lanfear in the deepest flashback. She was a researcher at the Collam Daan—that huge floating sphere, which was an enormous university and center for research. In an effort to find a new Power, one that could be used together by all instead of segregated by gender, she and a team of other powerful channelers create what the books call “The Bore”—a hole, drilled through the pattern of reality into the Dark One’s prison.

I loved the way this was portrayed on screen—it perfectly matched what I’ve been seeing in my head for all these years, with the sky crinkling up into screaming blackness as the Collam Daan drops to the ground and shatters.

Good stuff. Definitely my favorite moment of the episode. What was yours?

Image of the Bore.

This is not a good sign. Credit: Prime/Amazon MGM Studios

Andrew: Oh yeah that was super cool and unsettling.

As we see from both Moiraine and Aviendha, the women’s version of the test isn’t the glass columns, but a series of rings. You jump in and spin around like you’re a kid at space camp in that zero-g spinny thing. I am sure that it has a name and that you know what the name is.

Image showing two people floating in rings.

And unlike in the books, nobody has to do this naked! Credit: Prime/Amazon MGM Studios

Andrew: Everything we see of Moiraine’s vision is presented in a way that mirrors this spinning—each flip is another possible future, which we get to see just a glimpse of in passing before we flip over to the next thing. Most of the visions are Rand-centric, obviously. Sometimes Moiraine is killing Rand; sometimes she’s bowing to him; sometimes things get Spicy between the two of them.

But the one thing that comes back over and over again, and the most memorable bit of the episode for me, is a long string of visions where Lanfear kills Moiraine, over and over and over again.

Both Moiraine and Rand have been playing footsy with Lanfear this season, imagining that they can use her knowledge and her lust for Lews Therin to get one over on their enemies. But both Rand and Moiraine have now seen firsthand that Lanfear is not someone you can trust, not even a little. She’s vengeful and brutal and as close to directly responsible for the Current State Of The World as it’s possible to be (though the flashback we see her in leaves open the possibility that it was accidental, at least at first). What Rand and Moiraine choose to do with this knowledge is an open question, since the show is mostly charting its own path here.

Lee: Agreed, that was well done—and was a neat way of using the medium as a part of the storytelling, incorporating the visual metaphor of a wheel forever turning.

You’re also right that we’re kind of off the map here with what’s going to happen next. In the books, several other very important things have happened before we make it to Rhuidean, and Rand’s relationship with Moiraine is in a vastly different state, and there are, shall we say, more characters participating.

Pulling Rhuidean forward in the story must have been a difficult choice to make, since it’s one of the key events in the series, but having seen it done, I gotta commend the showrunners. It was the right call.

Andrew: We wrote about this way back in the first season, but I keep coming back to it.

The show’s most consequential change was the decision to center Rosamund Pike’s Moiraine as a more fully realized main character, where the books spent most of their time centering Rand and the Two Rivers crew and treating Moiraine as an aloof and unknowable cipher. Ultimately an ally, but one who the characters (and to some extent, the readers) usually couldn’t fully trust.

Image of Rand's dragon tattoos.

“Twice and twice shall he be marked.” Credit: Prime/Amazon MGM Studios

Lee: I feel like we should leave it here—maybe with one final word of praise from me for Rand’s dragon marks, which I thought looked fantastic. And it’s a good thing, too, because he’s going to keep them for the rest of the series. (Though I suspect the wardrobe folks will do everything they can to keep Rand in long sleeves to avoid what is likely at least an hour or two in the make-up chair.)

It’s a pensive ending, and everyone who emerges from Rhuidean emerges changed. Rand marches out from the city as the dawn breaks, fulfilling prophecy as he does so, carrying an unconscious Moiraine in his dragon-branded arms. Rand has the look of someone who’s glimpsed a hard road ahead, and we fade out to the credits with a foreboding lack of dialog. What fell things will sunrise—and the next episode—bring?!


The Wheel of Time delivers on a pivotal fan-favorite moment Read More »


How the language of job postings can attract rule-bending narcissists

Why it matters

Companies write job postings carefully in hopes of attracting the ideal candidate. However, they may unknowingly attract and select narcissistic candidates whose goals and ethics might not align with a company’s values or long-term success. Research shows that narcissistic employees are more likely to behave unethically, potentially leading to legal consequences.

While narcissistic traits can lead to negative outcomes, we aren’t saying that companies should avoid attracting narcissistic applicants altogether. Consider a company hiring a salesperson. A firm can benefit from a salesperson who is persuasive, who “thinks outside the box,” and who is “results-oriented.” In contrast, a company hiring an accountant or compliance officer would likely benefit from someone who “thinks methodically” and “communicates in a straightforward and accurate manner.”

Bending the rules is of particular concern in accounting. A significant amount of research examines how accounting managers sometimes bend rules or massage the numbers to achieve earnings targets. This “earnings management” can misrepresent the company’s true financial position.

In fact, my co-author Nick Seybert is currently working on a paper whose data suggests rule-bender language in accounting job postings predicts rule-bending in financial reporting.

Our current findings shed light on the importance of carefully crafting job posting language. Recruiting professionals may instinctively use rule-bender language to try to attract someone who seems like a good fit. If companies are concerned about hiring narcissists, they may want to clearly communicate their ethical values and needs while crafting a job posting, or avoid rule-bender language entirely.

What still isn’t known

While we find that professional recruiters are using language that attracts narcissists, it is unclear whether this is intentional.

Additionally, we are unsure what really drives rule-bending in a company. Rule-bending could happen due to attracting and hiring more narcissistic candidates, or it could be because of a company’s culture—or a combination of both.

The Research Brief is a short take on interesting academic work.

Jonathan Gay is Assistant Professor of Accountancy at the University of Mississippi.

This article is republished from The Conversation under a Creative Commons license. Read the original article.

How the language of job postings can attract rule-bending narcissists Read More »


Trump White House drops diversity plan for Moon landing it created back in 2019

That was then. NASA’s landing page for the First Woman comic series, where young readers could download or listen to the comic, no longer exists. Callie and her crew survived the airless, radiation-bathed surface of the Moon, only to be wiped out by President Trump’s Diversity, Equity, and Inclusion executive order, signed two months ago.

Another casualty is the “first woman” language within the Artemis Program. For years, NASA’s main Artemis page, an archived version of which is linked here, included the following language: “With the Artemis campaign, NASA will land the first woman and first person of color on the Moon, using innovative technologies to explore more of the lunar surface than ever before.”

Artemis website changes

The current landing page for the Artemis program has excised this paragraph. It is not clear how recently the change was made. It was first noticed by British science journalist Oliver Morton.

The removal is perhaps more striking than Callie’s downfall since it was the first Trump administration that both created Artemis and highlighted its differences from Apollo by stating that the Artemis III lunar landing would fly the first woman and person of color to the lunar surface.

How NASA’s Artemis website appeared before recent changes.

Credit: NASA

For its part, NASA says it is simply complying with the White House executive order by making the changes.

“In keeping with the President’s Executive Order, we’re updating our language regarding plans to send crew to the lunar surface as part of NASA’s Artemis campaign,” an agency spokesperson said. “We look forward to learning more about the Trump Administration’s plans for our agency and expanding exploration at the Moon and Mars for the benefit of all.”

The nominal date for the Artemis III landing is 2027, but few in the industry expect NASA to be able to hold to that date. With further delays likely, the space agency will probably not name a crew anytime soon.

Trump White House drops diversity plan for Moon landing it created back in 2019 Read More »


Boeing will build the US Air Force’s next air superiority fighter

Today, it emerged that Boeing has won its bid to supply the United States Air Force with its next jet fighter. As with the last fighter aircraft procurement, the Department of Defense was faced with a choice between awarding Boeing or Lockheed the contract for the Next Generation Air Dominance program, which will replace the Lockheed F-22 Raptor sometime in the 2030s.

Very little is known about the NGAD, which the Air Force actually refers to as a “family of systems,” as its goal of owning the skies requires more than just a fancy airplane. The program has been underway for a decade, and a prototype designed by the Air Force first flew in 2020, breaking records in the process (although what records and by how much was not disclosed).

Last summer, the Pentagon paused the program as it reevaluated whether the NGAD would still meet its needs and whether it could afford to pay for the plane, as well as a new bomber, a new early warning aircraft, a new trainer, and a new ICBM, all at the same time. But in late December, it concluded that, yes, a crewed replacement for the F-22 was in the national interest.

While no images have ever been made public, then-Air Force Secretary Frank Kendall said in 2024 that “it’s an F-22 replacement. You can make some inferences from that.”

The decision is good news for Boeing’s plant in St. Louis, which is scheduled to end production of the F/A-18 Super Hornet in 2027. Boeing lost its last bid to build a fighter jet when its X-32 lost out to Lockheed’s X-35 in the Joint Strike Fighter competition in 2001.

A separate effort to award a contract for the NGAD’s engine, called the Next Generation Adaptive Propulsion, is underway between Pratt & Whitney and GE Aerospace, with an additional program aiming to develop “drone wingmen” also in the works between General Atomics and Anduril.

Boeing will build the US Air Force’s next air superiority fighter Read More »