Altra

The Last Company

“We have no idea how we one day may generate revenue… [but] once we build this generally intelligent system, we will ask it for a way” - Sam Altman


Frontier labs will build, gatekeep, and exploit 3-5x Mythos-level models to disrupt nearly every industry with substantial knowledge work expenditures.

Anthropic hit a $30B run rate this year servicing codegen. But is the real end goal just to replace software developers with Claude Code?

We are dramatically underestimating the scale and danger of frontier lab ambition. They’re planning an economic shakeout where every legacy business will need to compete with lab-owned firms where every knowledge-intensive function has been replaced by AI.

Whoever builds ASI first will rule this economy. They will be The Last Company.

Schumpeterian Blitzkrieg

In the near term, model performance is the biggest constraint on widespread market adoption. Diffusion lag is one thing, but by and large, most companies don’t see the point of AI because it isn’t intelligent enough to do anything valuable for them. This will probably remain true for some time.

In the long term, given substantial compute, data, and research, models will perform exceptionally well on nontrivial GDPval-style economic output benchmarks (we’ll call this a “Mythos leap”). Every laptop-class job will feel the same “I don’t write code anymore” moment engineers are having right now.

After this point, labs face two choices:

Intuition pump: Anthropic General Hospital

Think about the last time you went to the doctor. Did you have an overwhelmingly positive experience with the system?

Probably not.

Now suppose Anthropic creates a hospital system from first principles using 5x Mythos capabilities. Every aspect of the system is built, hardened, and tested with models trained on billions of dollars of human data from doctors, practitioners, and healthcare experts. EHR software is built to be performant and fully knowledgeable about your medical history. And every practitioner gets access to an on-prem, HIPAA-compliant Dr. Claude.

Would you still feel loyal to your legacy healthcare provider?

There is no reason why Anthropic wouldn’t launch:

They’re already thinking about it.

Anthropic partnered with Andon Labs, a startup building “autonomous organizations without humans in the loop,” to see how effectively Claude could run a vending machine (Project Vend). More recently, Andon gave Claude a 3-year physical retail lease in San Francisco to see how it would perform managing a store.

Neither of these experiments was a major commercial success. But that’s the point. The main limiter in both cases has been the model’s underlying intelligence. A 3-5x Mythos capabilities leap would almost certainly turn a profit.

Pick your dinosaur industry of choice with sufficiently low startup costs. Prompt the model to build a firm from first principles (or acquire one if antitrust is not a risk). Substitute every knowledge work expense with AI. Software engineering isn’t the TAM anymore. Any part of the total economy with knowledge work is up for grabs.

So, Technofeudalism?

Suppose we control the lab that achieves a 5x Mythos intelligence leap. How do we address the threat of AI democratization to our triumph?

The highest risk is knowledge leakage. Economic incentives still apply to researchers. As long as talent is fluid, competing labs can poach researchers with high enough salaries. Non-competes are broadly unenforceable. Nothing stops the next-closest lab to ASI from grabbing them.

Distillation, on the other hand, is much lower risk. Pre-training data and architecture will likely be the most important ingredients behind a 5x Mythos model. They need to be guarded like the nuclear codes. But beyond that, never make the model publicly available. Rival labs must be prevented from matching our capabilities at all costs.

Regulation and antitrust are medium-risk. But a 5x Mythos superlawyer can probably out-litigate most legal challenges, or at the very least stall court proceedings long enough for us to seize a reasonable share of the economy.

Concretely, risk prevention looks like:

If these risks are contained, we soon become the most powerful company in human history.

Answering Dwarkesh’s original question:

TL;DR - How do the labs start making money? They expand and become private equity firms.

Addendum: “AI Safety”

Sci-fi/EA types like to believe “AI becomes sentient and kills us all.” This narrative is harmful.

Real insiders believe “safety” is about control. If a sociopath (guess who) controls this, we’re all doomed. EA was always about shrimp welfare. But we are the shrimp.