25 Comments
Craig Palsson @ Market Power

On the populist backlash, I think this is a reason developing countries could be well-positioned to win from AI. Those countries don't have as much of a white-collar political class threatened by AI, so they might be more willing to accept the technology.

Zeb Camp

To me this seems analogous to how developing nations are leapfrogging straight to solar and wind production, purely on cost savings and ease of installation. They don't have to deal as much with the political baggage that comes with environmentalist and fossil-fuel lobbyists. My guess, at least.

DalaiLana

From what I hear, the number one thing American companies want AI for is to replace their overseas Indian engineers. If you've ever dealt with an Indian engineering office, you'll understand this.

Cubicle Farmer

"AI can have an absolute advantage in every single task, but it would still make economic sense to combine AI with humans if the aggregate output is greater: that is to say, if humans have a comparative advantage in any step of the production process."

I'm not sure. Ricardo showed it made sense for Portugal to specialize in wine even if they had an absolute advantage in both wine and cloth over England, but that's because there's only one Portugal. What happens if we wake up one morning and there are a billion Portugals? (Granted, AI is constrained by energy requirements, etc).
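For what it's worth, the Ricardian logic can be sketched with a toy calculation (all productivity numbers below are made up, not from the article): even when the AI has an absolute advantage in both tasks, total output is maximized by routing the task where the human's opportunity cost is lower to the human.

```python
# Toy comparative-advantage check. Hypothetical units-per-hour numbers:
# the AI is absolutely better at both tasks.
ai    = {"code": 10.0, "review": 8.0}
human = {"code": 1.0,  "review": 4.0}

HOURS = 2.0          # hours available to each party
REVIEW_NEEDED = 8.0  # units of review the project requires

def code_output(reviewer, coder):
    """Code produced when `reviewer` covers the review quota first
    and both parties spend any remaining time coding."""
    review_hours = REVIEW_NEEDED / reviewer["review"]
    assert review_hours <= HOURS, "reviewer cannot cover the quota"
    return (HOURS - review_hours) * reviewer["code"] + HOURS * coder["code"]

# The human's opportunity cost of review (0.25 code per review) is lower
# than the AI's (1.25), so review should go to the human:
print(code_output(human, ai))  # 20.0 code units
print(code_output(ai, human))  # 12.0 code units
```

One way to read the "billion Portugals" worry in this sketch is as the AI's `HOURS` constraint effectively going to infinity, at which point the gains from giving the human the review task shrink toward whatever their raw output is worth net of costs.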

David Oks

I think the "infinite Portugals" argument about comparative advantage doesn't quite apply, since the relevant bottlenecks for AI productivity gains tend to be on the absorption/integration/coordination side rather than the production side. The constraints and bottlenecks I'm thinking of are the ones that *cannot* be solved by infinite AI instances, i.e. everything that humans touch.

nineofclubs

I wonder. Consider that, at the height of the Great Depression, Great Britain had an unemployment rate of around 20-22%. In the US, I think it was about 25%, and in Australia it was 30%, Australia being more reliant on agriculture and more indebted to foreign banks.

So, for AI to deliver a significant economic blow, it’s not like everyone has to lose their job, right? Particularly since private debt in the Anglosphere is now orders of magnitude higher than in the 1930’s.

I think the challenge for national governments will be to *really* get ready for increased unemployment, even if it's not 20%, because AI is already starting to impact some sectors in ways that could lead to a significant economic downturn. And by 'get ready' I don't mean the usual pittance-until-you-find-another-job dole-cheque solutions.

Waterskiiii

Appreciate the argument. But at the time this email hit my inbox, I was writing a proposal to pull my company's (advanced-degree-preferred) job opening so that we could try to replace the role with a few Claude agents. I bet we could replace the job with an hour a week of human time overseeing the agents; given my optimism, it's probably more like 5 hours.

If this happens (p<1) and it works (p<1), that’s just a job that’s gone. Not a person who lost one or a job that looks dramatically different. Not sure how this is viable at a large scale.

Vittu Perkele

This article seems to focus on AI automating work in the white collar sphere, and how that won't necessarily result in humans being replaced, but how do you think increasing robotics technology will affect the blue collar and service sectors? Do you think we'll see mass fast food layoffs from robots flipping burgers, for instance? Or will we also see "cyborg" solutions in those sectors?

David Oks

Yes, this was more written with the "software singularity" in mind, the "artificial remote drop-in worker." I think if we had robots that could do everything that humans could do, but were distinct from humans, we'd see something broadly similar but on a much bigger scale. I suspect we'd see complementarity as long as there are human-relevant bottlenecks (most obviously, stuff that humans prefer humans do). But over time, as complementarity goes to zero, I think we'd just see unbelievable abundance and "jobs" redefined around "things that only humans can do," or rather "things that humans want to do."

But turning to the real world, I think in the next few years/decades, while we will see serious robotics progress, I would honestly be surprised if it has a huge impact on blue-collar employment outside of perhaps driving-related work (and even there my understanding is that the trucking impact, for instance, is slower than expected; it'll proceed at the speed of human adoption). Robotics isn't making progress at the same speed as software.

Das P

This article is not convincing at all; it is too coarse-grained and shallow. The complementarity narrative already failed in the most recent technological disruption, namely PNTR.

The introduction of a massive number of poorly paid blue-collar workers from China into the labor pool, ready to do much of the manufacturing, did not lead to a Jevons paradox in jobs whereby American manufacturing employment also grew. In other words, demand did not grow enough, given real-world income/wealth distribution bottlenecks, to compensate for the substitution effect of the new technology, namely shipping containers and capital mobility.

https://fred.stlouisfed.org/series/MANEMP

Manufacturing output worldwide went up, and from the perspective of US capital, productivity went up, but US manufacturing jobs did not go up; in fact they crashed.

The losers from PNTR, who were concentrated in the US Midwest, caused a political revolution precisely because they lost their jobs and there was no local Jevons paradox in jobs at play.

Statements like "ordinary people will be fine" are ridiculous unless one also states how exactly one is going to prevent something maybe 10x the size of the China shock from causing mass economic dislocation and a repeat of the Rust Belt. Or maybe the statement "ordinary people will be fine" is meant in some vague sense, coarse-graining over a century of human time.

David Oks

U.S. manufacturing jobs had been in decline for decades! And as you know, the “China shock” didn’t have an effect on overall employment rates. Some people lost jobs, other people gained jobs, through the same basic mechanism as always. And then everyone benefited from consumer surplus, things got cheaper.

“We find that the ‘China shock’ accounted for 28% of the decline in U.S. manufacturing between 2000 and 2014—1.65 times the magnitude predicted from a model imposing balanced trade. A concurrent rise in U.S. service employment led to a negligible aggregate unemployment response.”

https://academic.oup.com/qje/article-abstract/138/2/1109/6967141?redirectedFrom=fulltext

Das P

>U.S. manufacturing jobs had been in decline for decades!

OK, so you agree that in any particular sector, say manufacturing, productivity gains lead to pure substitution without replacement within that sector. The demand for manufactured goods was not elastic enough to sustain high employment in that sector, just as in agriculture the demand for food is not elastic enough to sustain high agricultural employment. We cannot all eat a sack of rice in one day. The idea that software or professional services is uniquely immune to substitution-only outcomes is not at all obvious.
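The elasticity point here can be made concrete with a back-of-envelope constant-elasticity model (all numbers hypothetical, just an illustration): whether a productivity gain grows or shrinks employment in a sector turns on whether demand elasticity is above or below 1.

```python
def employment_ratio(productivity_gain, demand_elasticity):
    """New/old sector employment after a productivity gain, assuming
    competitive pass-through (price falls by 1/gain) and
    constant-elasticity demand Q ~ p**(-e). Labor = Q / productivity."""
    quantity_ratio = productivity_gain ** demand_elasticity  # Q_new / Q_old
    return quantity_ratio / productivity_gain

# Productivity doubles:
print(employment_ratio(2.0, 0.3))  # ~0.62: inelastic demand (food), jobs shrink
print(employment_ratio(2.0, 1.5))  # ~1.41: elastic demand, jobs grow
```

An elasticity of exactly 1 is the knife edge where sector employment is unchanged; the agriculture and manufacturing cases above correspond to the inelastic branch.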

>We find that the ‘China shock’ accounted for 28% of the decline in U.S. manufacturing between 2000 and 2014

28% is not a small number.

> A concurrent rise in U.S. service employment led to a negligible aggregate unemployment response.

I am not sure the American people would have taken the deal on PNTR if they had known they would be forced out of production lines into cleaning pans and serving food to the capital owners who benefited from the PNTR labor arbitrage. People have preferences they care about deeply.

Complementarity and the Jevons paradox exist, but only in a large aggregate sense, coarse-grained over decades and entire continents. But politics is local, and the problems from local dislocations can have global ramifications.

Autor et al.: "Adjustment in local labor markets is remarkably slow, with wages and labor-force participation rates remaining depressed and unemployment rates remaining elevated for at least a full decade after the China trade shock commences. Exposed workers experience greater job churning and reduced lifetime income."

https://www.nber.org/papers/w21906

Marcus Seldon

There’s something to this argument, but I guess I’m not fully persuaded that we might not hit an inflection point where we truly do get an artificial drop-in remote worker. And what then? Whole categories of jobs could be replaced in a single digit number of years. That won’t be gentle at all for the tens of millions of people who work primarily or solely on computers.

David Oks

I think as long as there is any complementarity the same logic would apply: I’m thinking more of the “artificial drop-in remote worker” world than the current “cool AI tools” world. As long as there are bottlenecks, there’s complementarity; and as long as there’s complementarity, productivity should lead to a Jevons dynamic rather than mass immiseration. I think by the time that we have no or very little complementarity, we are talking about such an unbelievable level of productivity that “properly sharing the unbelievable infinite riches we produce” is the more pressing question.

But yes, if complementarity drops to ~0 tomorrow, it won't be gentle; I think the world is difficult and complicated enough that the transition will be a lot more gentle than that. Not because AI isn't making unbelievable progress, but because the world is full of frictions.

gregvp

I agree with nearly everything. But:

The comparative advantage argument for a role for humans ignores the very considerable transaction costs involved in getting humans to do things. The question is, does the value of a human's contribution exceed the cost of the time and space required to get the human to understand what is required on time, and to monitor the process?

It is not the case that if a human provides non-zero value they will have a role. The human has to provide value above epsilon, for epsilon > 0 and possibly different for each task.

(Aside, the fully burdened cost of humans is much greater than their wage. The blighters only work about a third of the time at best, and they have to have quiet, clean, warm, dry, non-toxic, non-stabby, well-lit, well-ventilated space, and have to be supervised, and they insist on vacation days and sick days as well. And then there's the hiring and onboarding costs, taxes, insurances, and administration... Epsilon is *much* greater than zero.)
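This threshold argument can be sketched as a tiny cost-benefit check (the function name and all the overhead numbers below are invented for illustration, not from the comment):

```python
def human_worth_including(marginal_value, wage,
                          overhead_multiplier=3.0, coordination_cost=50.0):
    """True if the human's marginal contribution clears the fully burdened
    cost: wage scaled by overhead (space, supervision, benefits, admin)
    plus the fixed cost of briefing and monitoring the human."""
    epsilon = wage * overhead_multiplier + coordination_cost
    return marginal_value > epsilon  # epsilon is strictly > 0

# Positive-but-small value is not enough to earn a role:
print(human_worth_including(marginal_value=100.0, wage=40.0))  # False (100 <= 170)
print(human_worth_including(marginal_value=500.0, wage=40.0))  # True
```

The point is exactly the one above: the bar is `epsilon`, not zero, and `epsilon` can plausibly differ task by task.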

Yes, the transition will be drawn out. The supertanker model of the economy applies. So does the transient model so beloved of undergrad differential-equations professors: the AI shock to work is like hitting an iron bar with a large hammer. There will be large oscillations in the economy for a while, loud ringing. Jobs are collections of tasks; tasks can be grouped in different ways. This regrouping is always going on, but now it will accelerate for a while. Going too fast makes humans twitchy.

On the eschaton: it is notable that when AI company higher-ups resign, they don't say "excited for the next chapter of my career". They say things like "I am going to live in the wilderness and pursue independent study of poetry".

Victor Bezrukov

I don't use any of these in my daily work as an IT manager. When I see people doing everything with the "help" of AI, I think not about their job loss but about their loss of brain activity, and that is much worse. People ask the robot for recommendations on what to wear to an evening party, what to say for their mother's birthday, how to be more "famous" on the Tinder dating app. Some have explained to me that this way they save time for other things. For what? To sit on the TikTok timeline? To learn on TikTok? But why learn at all, if afterwards you use AI for even the simplest activity?

DalaiLana

I'm inclined to agree with you. In my industry, computers made work significantly easier and cheaper. As a result, there is more demand.

Additionally, I see AI already filling roles where previously there was a gap. For example, recently I considered signing my kids up for a high-end day camp. The website had an AI agent that answered all my questions about the sorts of things they'd be doing given their age and ability.

Before AI, there was no official role called 'answering parents' questions'; it was just an annoying task done by someone whose actual job was making the camp happen. Now that person can focus fully on hiring, ordering, and organizing, and not be interrupted by obsessive parents wanting to know precisely what 'culinary camp' actually means for a 7-year-old.

Jenna Hermann

I totally agree. I think the biggest miss tech always makes is the human adoption factor. Cloud was better. It came around in 2006. But human adopters blocked full-scale adoption well into 2024.

MICHAEL DAWSON

I have some sympathy with the argument in this post, but I think that delays in the uptake of AI are more likely because some of the most threatened groups are powerful and, in the case of doctors, have a largely sympathetic public behind them. I don't particularly buy the argument that humans and AI will often work in a complementary way, with relatively few jobs being lost, at least for some time.

Last Spring I looked at AI and whether it would replace GPs (doctors in the UK who see patients and either treat them directly for lesser issues or refer them to specialists for more difficult conditions) - https://freeblogger.substack.com/p/will-ai-replace-gps-in-the-next-decade

Even then, it was obvious to me as a complete layman that AI COULD replace GPs very soon, despite the medical establishment view being ultra-cautious and at best seeing AI as a useful adjunct to human judgement. Progress with AI since then probably makes my assessment look too conservative. But I do still think it will be a struggle, at least in the UK, to convince patients that AI will deliver better primary care and to get changes through a highly politicised NHS. Ultimately, though, reality prevails. And I think that sceptics will end up moving to AI where it's available because it will be a lot more accessible/responsive and provide better diagnoses and guidance on treatment. (Some research suggests it will also be more empathic, which really would leave nothing much for a human doctor to contribute.)

Medicine is probably an extreme example in this respect. I doubt that lawyers or accountants or journalists etc will have similar public support, and the arguments for AI being able to replace them are just as strong. If I were in a white collar job today, or leaving university and looking to get into one, I'd be seriously worried.

Richard Pinch

It's interesting to view this through the professional lens. Think of doctors, lawyers, engineers. These are legally defined or regulated professions. A (human) professional has to pass checks and demonstrate skill and knowledge to gain that accreditation: often it's illegal to practise otherwise, or at least impossible to do so in practice. Such professionals are then held legally accountable for the work they do. How will this transfer to AI agents? Are we going to dismantle the protections, or are humans in these businesses going to take on legal (civil and criminal) responsibility for the actions of an AI agent they neither understand nor control? How will the public be protected against mistaken or wrongful actions; how will disputes be adjudicated; who will be liable to pay compensation by way of damages? Make no mistake, the current state of AI theory, technique, tools and tradecraft does not allow anyone -- designer, developer or user -- to have that degree of general understanding and control.

Ranita Shows

Why is the conservative a commentator and the liberal a pundit? Lost me there.

Matt

Of course we can't know for sure either way, but your confidence in a slow and gentle transition seems quite misplaced to me. What percentage of structural unemployment over the first 15 years would cause mass social disruption? 20%? 15%? Being confident that that is basically out of the question seems incredibly blinkered to me.

Remember, that's an instant relative to previous technological shifts, which caused things like more than a century of devastating war after the printing press and two devastating world wars after the Second Industrial Revolution.

Elliot Friedland

“It’ll all be fine don’t worry about it” just feels like you’re gaslighting people or are oblivious to reality. Millions will be out of work soon, and permanently

Dennis Bruno

I think there will quickly come a point where one AI model can oversee and task others. The base model might need complementarity, but the rest wouldn't. I do think the US will be okay, because either you are right and Jevons will hold despite this (though I don't know how much productivity you can really need; does it really just scale forever?), or people will just vote in politicians who will ban AI from fully taking a job. I am more scared of what AI will mean for a surveillance state like China and other more dictatorial countries.

As a vote for Jevons: Amazon has laid off 30,000 corporate workers this year and is still growing due to hiring in other areas (down in HR, up in advertising). I still find it hard to believe this will continue as AIs become more competent outside of coding and basic systems work.