And once they take our jobs, will we be able to find new ones? Will AI take those too?
Seb Krier recently wrote an unusually good take on that, which will anchor this post.
I believe that Seb is being too optimistic on several fronts, but in a considered and highly reasonable way. The key is to understand the assumptions being made, and also to understand that he is only predicting that the era of employment optimism will last for 10-20 years.
By contrast, there are others who expect human employment, and even the human labor share of income, to remain robust indefinitely no matter how far AI capabilities advance, even if AI can do a superior job on every task, often citing comparative advantage. I will respond directly to such claims in a future post.
So to disambiguate, this post is about point #2 here, but I also assert #1 and #3:
1. By default, if AI capabilities continue to advance, then the humans lose control over the future and are rather likely to all die.
2. If we manage to avoid that, then there is a good chance humans can retain a lot of employment during the rest of The Cyborg Era, which might well last 10-20 years.
3. What is not plausible is that AI capabilities and available compute continue to increase, and this state endures indefinitely. It is a transitional state.
First I’ll make explicit the key assumptions, then unpack the central dynamics.
In most scenarios where we discuss what AI does to future employment, there is an undiscussed 'magic' running in the background. That 'magic' is somehow ensuring that everything is controlled by and run for the benefit of the humans, keeping the humans alive, and usually also preserving roughly our system of government and our rights to private property.
I believe that this is not how things are likely to turn out, or how they turn out by default.
I believe that by default, if you build sufficiently capable AI, and have it generally loose in the economy, humans will cease to have control over the future, and also it is quite likely that everyone will die. All questions like those here would become moot.
Thus I wish this assumption were always made explicit, rather than being ignored or treated as a given, as it so often is. Here's Seb's version, which I'll skip ahead to:
Seb Krier: Note that even then, the humans remain the beneficiaries of this now ‘closed loop’ ASI economy: again, the ASI economy is not producing paper clips for their own enjoyment. But when humans ‘demand’ a new underwater theme park, the ASIs would prefer that the humans don’t get involved in the production process. Remember the ‘humans keep moving up a layer of abstraction’ point above? At some point this could stop!
Why should we expect the humans to remain the beneficiaries? You don’t get to assert that without justification, or laying out what assumptions underlie that claim.
With that out of the way, let’s assume it all works out, and proceed on that basis.
Seb Krier wrote recently about human job prospects in The Cyborg Era.
The Cyborg Era means the period where both AI and humans meaningfully contribute to a wide variety of work.
I found this post to be much better than his and others' earlier efforts to explore these questions. I would have liked the implicit assumptions and asserted timelines to be more explicit, but in terms of what happens in the absence of hardcore recursive AI self-improvement this seemed like a rather good take.
I appreciate that he:
- Clearly distinguishes this transitional phase from what comes later.
- Emphasizes that employment requires (A) complementarity to hold or (B) cases where human involvement is intrinsic to value.
- Sets out expectations for how fast this might play out.
Seb Krier: We know that at least so far, AI progress is rapid but not a sudden discontinuous threshold where you get a single agent that does everything a human does perfectly; it’s a jagged, continuous, arduous process that gradually reaches various capabilities at different speeds and performance levels. And we already have experience with integrating ‘alternative general intelligences’ via international trade: other humans. Whether through immigration or globalization, the integration of new pools of intelligence is always jagged and uneven rather than instantaneous.
I think we get there eventually, but (a) it takes longer than bulls typically expect – I think 5-10 years personally; (b) people generally focus on digital tasks alone – they’re extremely important of course, but an argument about substitution/complementarity should also account for robotics and physical bottlenecks; (c) it requires more than just capable models – products attuned to local needs, environments, and legal contexts; (d) it also requires organising intelligence to derive value from it – see for example Mokyr’s work on social/industrial intelligence. This means that you don’t just suddenly get a hyper-versatile ‘drop in worker’ that does everything and transforms the economy overnight (though we shouldn’t completely dismiss this either).
…
So I expect cyborgism to last a long time – at least until ASI is so superior that a human adds negative value/gets in the way, compute is highly abundant, bottlenecks disappear, and demand for human stuff is zero – which are pretty stringent conditions.
I agree that cyborgism can ‘survive a lot’ in terms of expanding AI capabilities.
However, I believe that his ending expectation condition goes too far, especially setting the demand limit at zero. It also risks giving a false impression of how long we can expect to wait before it happens.
I clarified with him that what he means is that The Cyborg Era is starting now (I agree, and hello Claude Code!) and that he expects this to last on the order of 10-20 years. That’s what ‘a long time’ stands for.
It very much does not mean ‘don’t worry about it’ or ‘the rest of our natural lifetimes.’
That is not that long a time, even if this slow diffusion hypothesis is basically right.
Yes, it seems likely that, as Alex Imas quotes, "Human labor share will remain a substantial part of the economy a lot longer than the AGI-maximalist timelines suggest." But 'a lot longer' does not mean all that long in these scenarios. The labor share might not persist that long, and humans might not persist that long at all, depending on how things play out.
As long as the combination of Human + AGI yields even a marginal gain over AGI alone, the human retains a comparative advantage.
Technically, and in the short term (e.g. this 10-20 year window) where the humans are 'already paid for,' then yes. But in an increasing number of places this breaks down faster than you would think, because involving the slow humans is not cheap, and the number of humans practically required could easily end up very small. I suggest the movie No Other Choice, and expect this complementarity to apply to a steadily shrinking group of the humans.
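To make the arithmetic concrete, here is a minimal worked example, using illustrative numbers of my own rather than anyone's estimates. Suppose that per hour an AI can produce 100 units of good $X$ or 100 units of good $Y$, while a human can produce 1 unit of $X$ or 2 units of $Y$. Then

$$\underbrace{\tfrac{1}{2}\,X}_{\text{human's opportunity cost of one } Y} \;<\; \underbrace{1\,X}_{\text{AI's opportunity cost of one } Y}$$

so the human holds the comparative advantage in $Y$ and trade yields a gain, but the gain is at most 1 unit of $X$ per human-hour. Any coordination or interface cost above that bound makes the marginal gain over AGI alone negative, and the human gets cut out despite the textbook result.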
Seb correctly points out that labor can have value disconnected from the larger supply chain, but that rules a lot of things out, as per his discussions of integration costs and interface frictions.
In this style of scenario, I’d expect it to be hard to disambiguate transitional unemployment from permanent structural unemployment, because the AIs will be diffusing and advancing faster than many of the humans can adapt and respecialize.
Humans will need, repeatedly, to move from existing jobs to other ‘shadow jobs’ that did not previously justify employment, or that represent entirely new opportunities and modes of production. During the Cyborg Era, humans will still have a place in such new jobs, or at least have one for a time until those jobs too are automated. After the Cyborg Era ends, such jobs never materialize. They get done by AI out of the gate.
Thus, if the diffusion timeline and the length of the Cyborg Era are on the order of 10-20 years, during which things otherwise stay normal, I'd expect the second half of the Cyborg Era to involve steadily rising unemployment and falling labor power, even though at the equilibrium of any given level of AI diffusion this would fix itself.
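Here is a toy sketch of that dynamic, with every parameter invented for illustration rather than estimated. At any frozen automation rate, unemployment settles at a modest steady state; let the rate accelerate and unemployment climbs throughout:

```python
# Toy model of the race between AI diffusion and human respecialization.
# All parameters are made up for illustration; nothing here is calibrated.
employed, unemployed = 1.0, 0.0  # fractions of the workforce
automation_rate = 0.02           # share of remaining jobs automated per year
retrain_rate = 0.15              # share of the unemployed who land 'shadow jobs' per year

for year in range(1, 21):
    displaced = employed * automation_rate
    reabsorbed = unemployed * retrain_rate
    employed += reabsorbed - displaced
    unemployed += displaced - reabsorbed
    # Diffusion accelerates; this assumption is doing all the work.
    automation_rate = min(1.0, automation_rate * 1.25)
    print(f"year {year:2d}: unemployment {unemployed:6.1%}")
```

If you instead freeze automation_rate at its starting value, unemployment equilibrates near automation_rate / (automation_rate + retrain_rate), about 12% here; the relentless rise in the second half comes entirely from diffusion outpacing adaptation.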
Mostly it seems that Seb thinks it is plausible that most of the work of ensuring full employment will be done via the 'literally be a human' tasks, even long after other opportunities are mostly or entirely gone.
This would largely come from associated demand for intra-human positional goods and status games.
I don't expect it to play out that way in practice, if other opportunities do vanish. There will, at least for a time, be demand for such tasks. But when you consider who is consuming, who has the ability to engage in such consumption, and the AI-provided alternative options, I don't see how it adds up to anything approaching full employment.
Krier later also points to bespoke human judgment or taste as a future bottleneck. Such taste evolves over time, so even if you could take a snapshot of bespoke taste now it would not long remain ‘taste complete.’ And he reiterates the standard ‘there’s always more to do’:
Seb Krier: People expect that at some point, “it’s solved” – well the world is not a finite set of tasks and problems to solve. Almost everything people ever did in the ancient times is automated – and yet the world today now has more preferences to satiate and problems to solve than ever. The world hasn’t yet shown signs of coalescing to a great unification or a fixed state! Of course it’s conceivable that at sufficient capability levels, the generative process exhausts itself and preferences stabilize – but I’d be surprised.
Yinan Na: Taste changes faster than automation can capture it, that gap can create endless work.
There are two distinct ways this could fail us.
One, as Seb notes, is if things reach a static end state. This could eventually happen.
The one Seb is neglecting, the point I keep emphasizing, is that this assumes we can outcompete the AIs on new problems, or in developing new taste, or in some other new task [N]. Even if there is always a new task [N], that only keeps the humans employed or useful if they are better at [N] than the AI, or at least useful enough to invoke comparative advantage. If that breaks down, we’re cooked.
If neither of those happens, and we otherwise survive, then there will remain a niche for some humans to be bespoke taste arbiters and creators, and this remains a bottleneck to some forms of growth. One should still not expect this to be a major source of employment, as bespoke taste creation or judgment ability has always been rare, and only necessary in small quantities.
Contra Imas and Krier, I do think that full substitution of AI for human labor, with the exception of literally-be-a-human tasks, should be the ‘default assumption’ for what happens in the long term even if things otherwise turn out well, as something we would eventually have to deal with.
I don’t understand why we would expect otherwise.
I'd also note that even if 'real wages' rise in such a scenario, as Trammell predicts (I do not), because the economy technically grows faster than the labor share falls, this would not fix people's real consumption problems or make them better off, for reasons I explored in The Revolution of Rising Expectations series. Yes, think about all the value you're getting from Claude Code, but also man's gotta eat.
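The arithmetic that makes Trammell's prediction coherent, with illustrative numbers of mine rather than his: the wage bill is roughly the labor share times total output, so

$$\frac{w_{t+1}}{w_t} \approx \underbrace{0.8}_{\text{labor share falls }20\%} \times \underbrace{1.3}_{\text{output grows }30\%} = 1.04,$$

and measured 'real wages' rise 4% that year even as labor's share heads toward zero.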
Ultimately, the advice is to hold off on policy interventions on this front for now:
Until that specific evidence mounts, preemptive policy surgery is likely to do more harm than good.
I agree with Krier and also Trammell that interventions aimed specifically at preserving human jobs and employment would be premature. That is a problem that emerges over time and can be addressed over time, and one involving a lot of uncertainty that we will resolve as we go.
What we need to do now on the policy front is focus on our bigger and more deadly and irreversible problems, of how we’re navigating all of this while being able to stay alive and in control of and steering the future.
What we shouldn’t yet do are interventions designed to protect jobs.
As I said, I believe Krier gave us a good take. By contrast, here’s a very bad take as an example of the ‘no matter what humans will always be fine’ attitude:
Garry Tan: But even more than that: humans will want more things, and humans will do more things assisted and supercharged by AGI
As @typesfast says: "How are people going to make money if AI is doing all the work? I think that very much misunderstands human nature, that we'll just want more things. There's an infinite desire inside the human soul that can never be satisfied without God. We need more stuff. Like we got to have more. We got to have more."
Yeah, sure, we will want more things and more things will happen, but what part of ‘AI doing all the work’ do you not understand? So we previously wanted [XYZ] and now we have [XYZ] and want [ABC] too, so the AI gets us [ABCXYZ]. By construction the AI is doing all the work.
You could say, that's fine, you have [ABCXYZ] without doing work. Which, if we've managed to stay in charge and wealthy and alive despite not doing any of the work, is indeed an outcome that can be looked at in various ways. You're still unemployed at best.
A full response on maximalist comparative advantage, unlimited demand, and the other arguments that treat humans as magic will follow at a future date, in some number of parts.