Nic’s Orb
#17

How should one approach life under the belief that humans will be fully replaced by LLMs in the next 5 to 10 years?

Nic's Response

Since the question is posed ambiguously, let me interpret it two ways and respond to each.

First formulation

The first is the straightforward question of labor; that is, what happens if the human workforce is rendered obsolete by AI models that vastly outperform them within a decade. How do you react?

So I basically believe this will happen, albeit not on the timescale proposed. Relatively primitive LLMs have already completely changed the game (as in, undercutting equivalent human labor to 1/100th of the cost, or performing equivalent tasks in 1/100th the time) in a few domains. Off the top of my head:

• Translation
• Transcription
• Stock photo generation
• Graphic design
• Data cleaning/preparation/manipulation
• Essay composition
• Programming
• Copywriting
• Vehicle operation
• Summarization
• Legal advice
• Radiology/imaging

I’ve used AI for most of these use cases and it continues to utterly shock me in terms of how much human labor it eliminates. And this is with only a year or two of experimentation with newer models. Eventually, as regulatory shackles come off, this will trivially extend to other white collar professions like accounting, law, tax, and medicine. At that point AI will be incredibly disruptive and will be highly resisted at the government level.

There’s one school of thought that AI won’t destroy that many jobs, because technological innovations have always created new jobs where they’ve made others obsolete. But this is to misunderstand what AI does. Prior technological innovations were generally narrow; they helped people harness new forms of energy or communicate more efficiently. AI isn’t a narrow innovation. It’s broad. It touches every aspect of human behavior and work, because it involves general cognitive tasks. There’s nothing that AI doesn’t pertain to. There are only a few things that the steam engine or the mechanical loom help with. AI is a superseding technological development because it provides anyone with a cognitive overlay on top of the world that can be applied to any form of knowledge work (and a lot of analog tasks too).

There’s another point to be made here. The industrial revolution was the most analogous situation because it allowed us to harness brand new forms of energy like coal, natural gas, and oil, allowing us to develop modern mechanized industry and, basically, civilization. Before that we were still effectively neolithic. But no one was put out of work by the industrial revolution, right? Wrong. It was the horses and other forms of animal labor that were “put out of work”. Except they weren’t around to complain about it. They were just slaughtered. This time, it’s the accountants, doctors, and lawyers that are the “horses” of the latter-day industrial revolution. Once an AI model reaches parity, there’s no reason to use a human who is thousands of times slower and more expensive for the task. Of course, these professions will resist and try to stop the flow of progress. But logically, people who want to consume cheap, AI-based medical or legal services will simply do so in the jurisdictions that are open to the idea. So the market will route around these kinds of barriers. And the outcome will be the same. No one will want the inferior human version.

(We don’t have to consider the risk that AI models stop improving, or fail to reach human parity in the key fields I mentioned, because the question presumes that they will).

So how do you approach life? Very simple. What this AI revolution will do is massively empower capital at the expense of labor. And not just working class / blue collar labor, but the professional classes in particular. Labor is the biggest cost line item at most corporates today. If you eliminate that, you massively increase profitability. So I think you get a massive productivity boom while simultaneously thinning the workforce significantly. AI has made it possible for the solopreneur to emerge: a single entrepreneur building a business past $1m in ARR with no employees at all. This is a massive benefit to founders and the entities that fund them, and generally very bad for everyone else.

So my strategy is to do what I’m doing – invest aggressively in AI at each layer of the stack (compute/datacenter, hardware, and applications), both directly, and by LPing into VC funds. Under no circumstances would I want to be on the other side of the equation – selling my labor on a linear basis, even if it’s consulting or lawyer type billable hours. Or even working as a tech employee, because headcount requirements are simply going to be reduced as AI models improve. Already in my day to day life I’ve made fewer hires than I otherwise would have because of AI.

So my advice is to try to own as much equity in AI-benefiting businesses as possible (frankly, they don’t even have to be AI companies per se, since in theory virtually all capital will benefit from AI versus labor), short of starting one directly.

In terms of picking a field that’s “AI proof”, I think VC is actually one of the last to go, because it’s highly personality based and not systematic at all, and it’s more about access than merely evaluating deals. But I don’t have that many good ideas, because I’m pretty pessimistic on the ability of any field – whether professional, manual labor, or artistic – to actually avoid the ravages of AI over the next decade or so.

Second formulation

The second formulation of this question is that the singularity is achieved within 10 years. By this I mean we reach a stage in which AI becomes recursively self-improving and achieves a state of superintelligence, vastly eclipsing the sum of all human intelligence. At this point humans begin to see biological life as futile, perhaps, and upload their consciousness to merge with the AI in some sort of AI rapture. (The latter doesn’t follow from the former, but you can grant me some creative license.) This is the kind of “positive transhumanism” believed by folks like Grimes. How do you live, knowing that this will be the case?

First of all, I’d point out that we don’t have much market signal that this is all that likely. For instance, if you read this paper (https://basilhalperin.com/papers/agi_emh.pdf), the authors point out that if either aligned or unaligned AGI were imminent (on the order of decades), that would be reflected in high real interest rates. This is not the case. So at the least, the most liquid markets on earth are not pricing in the probability of the emergence of hyper-potent AI within a decade.

But let’s assume it happens anyway. Let’s break it down into unaligned AGI and aligned AGI. In the unaligned case, the AI needs our atoms for something else, and simply vaporizes us/turns us into goo. Let’s say its malicious turn is relatively abrupt, too (based on the question, we know that AGI is coming, but we don’t know whether it’s aligned), and we don’t get much forewarning. In this case, the best choice would simply be to live life as normal. Presumably, any approach you could take to the imminent extinction of humanity aside from following the ordinary course of business would be -EV. In the aligned case, I would take a similar attitude, although I would prepare by keeping my health and mind as intact as possible ahead of the sublimation and merging with the AI. That would require being extremely health conscious, very cautious in terms of risk taking, etc., because a possible biological-machine hybrid eternal life awaits at the end of the decade.