Fully Automated Firms, The Management Paradox, and The Future Is About to Hit Us
Beware the student of one teacher.
Good morning
As promised in the last edition, here’s an invite to the free AI_managers webinar series hosted by friends of mine. It’s a series of excellent sessions on topics like how to manage an AI project, how to build an understanding of the value and potential of AI in an organization, practical implementation of AI in sales, and more. They are led by truly excellent people with real-life experience and case studies.
I’m hosting one myself, on AI Agents, on February 20th. Be sure to sign up for any of these sessions, and enjoy! I don’t think you will find anything like it.
PS. It’s in Polish only, so apologies to all my English-speaking readers.
In today's edition, among other things:
Fully Automated Firms
The Future Is About to Hit Us
The Management Paradox
Crossing the Seed to Series A Chasm
AI Voice Agents: 2025
Always Be Launching
Onwards!
Fully Automated Firms
Great and thought-provoking piece by Dwarkesh Patel:
Even people who expect human-level AI soon are still seriously underestimating how different the world will look when we have it. Most people are anchoring on how smart they expect individual models to be. (i.e. they’re asking themselves “What would the world be like if everyone had a very smart assistant who could work 24/7?”)
Everyone is sleeping on the collective advantages AIs will have, which have nothing to do with raw IQ but rather with the fact that they are digital—they can be copied, distilled, merged, scaled, and evolved in ways humans simply can’t.
What would a fully automated company look like - with all the workers, all the managers as AIs? I claim that such AI firms will grow, coordinate, improve, and be selected-for at unprecedented speed.
He asks some interesting questions:
Currently, firms are extremely bottlenecked in hiring and training talent. But if your talent is an AI, you can copy it a stupid number of times. What if Google had a million AI software engineers?
Think about this. When AI surpasses human intelligence not just in IQ but in scalability, replication, and integration, AI workers can be copied, distilled, merged, and scaled with zero learning curves.
That means future AI firms will coordinate and improve at unprecedented speeds.
Unlike human firms, AI firms are not bottlenecked by hiring and training. An AI can be trained to elite levels once and then cloned indefinitely, with that expertise amortized across countless copies, enabling deeper specialization across fields.
And there’s more:
Think about how limited a CEO's knowledge is today. How much does Sundar Pichai really know about what's happening across Google's vast empire? He gets filtered reports and dashboards, attends key meetings, and reads strategic summaries. But he can't possibly absorb the full context of every product launch, every customer interaction, every technical decision made across hundreds of teams. His mental model of Google is necessarily incomplete.
Now imagine mega-Sundar – the central AI that will direct our future AI firm. Just as Tesla's Full Self-Driving model can learn from the driving records of millions of drivers, mega-Sundar might learn from everything seen by the distilled Sundars - every customer conversation, every engineering decision, every market response.
This means eliminating the principal-agent problem by aligning all decision-making with firm objectives. CEOs and executives can be copied to oversee every company function. AI executives can simulate thousands of business scenarios instantly, optimizing strategy dynamically - first across a handful of decisions, then across orders of magnitude more.
And if AI models can merge insights seamlessly, eliminating miscommunication, then unlike human firms, AI firms can transfer knowledge without loss, bias, or inefficiency. Every decision, experiment, and innovation is immediately propagated across the organization.
The cost to have an AI take a given role will become just the amount of compute the AI consumes. This will change our understanding of which roles are scarce.
Future AI firms won’t be constrained by what's scarce or abundant in human skill distributions – they can optimize for whatever abilities are most valuable. Want Jeff Dean-level engineering talent? Cool: once you’ve got one, the marginal copy costs pennies. Need a thousand world-class researchers? Just spin them up. The limiting factor isn't finding or training rare talent – it's just compute.
So what becomes expensive in this world? Roles which justify massive amounts of test-time compute. The CEO function is perhaps the clearest example. Would it be worth it for Google to spend $100 billion annually on inference compute for mega-Sundar? Sure! Just consider what this buys you: millions of subjective hours of strategic planning, Monte Carlo simulations of different five-year trajectories, deep analysis of every line of code and technical system, and exhaustive scenario planning.
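To get a feel for the orders of magnitude here, a quick back-of-envelope sketch in Python. Every constant in it (the blended price per token, the number of tokens in one “subjective hour” of reasoning) is my own assumption, not a figure from Patel’s piece:

```python
# Back-of-envelope: what does $100B/year of inference buy?
# All constants are illustrative assumptions, not figures from the piece.

ANNUAL_BUDGET_USD = 100e9             # $100B annual inference spend
USD_PER_MILLION_TOKENS = 10.0         # assumed blended inference price
TOKENS_PER_SUBJECTIVE_HOUR = 50_000   # assumed tokens per "hour" of reasoning

total_tokens = ANNUAL_BUDGET_USD / USD_PER_MILLION_TOKENS * 1e6
subjective_hours = total_tokens / TOKENS_PER_SUBJECTIVE_HOUR
human_work_years = subjective_hours / 2_000  # ~2,000 working hours per year

print(f"Tokens per year:         {total_tokens:.2e}")
print(f"Subjective hours:        {subjective_hours:.2e}")
print(f"Human work-year equiv.:  {human_work_years:.2e}")
```

Under these rough assumptions, “millions of subjective hours” is conservative by several orders of magnitude. The broader point stands either way: strategic thinking becomes a line item you can dial up with compute.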
If AI firms evolve much faster than human firms thanks to their ability to iterate at scale, then massive AI populations will generate innovations through rapid experimentation.
This means that future companies will operate more like collective intelligence networks than rigid hierarchies.
This transforms what will be scarce and economically valuable in the future:
AI talent is infinitely replicable, shifting scarcity from labor to compute power.
The most expensive roles will be those requiring the most compute (e.g., high-level strategic AI functions).
Capital allocation will prioritize AI-driven simulations and Monte Carlo scenario planning over human intuition (a toy sketch follows after this list).
AI firms will drastically lower transaction costs, leading to larger, more integrated firms.
Instead of outsourcing, firms will replicate successful internal processes, reducing reliance on external entities.
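Since Monte Carlo scenario planning comes up twice here, a minimal toy version may help make it concrete. This is a deliberately simple sketch: the growth and volatility parameters are invented for illustration, and a real planner would simulate far richer world models:

```python
import random

# Toy Monte Carlo: simulate five-year revenue trajectories under
# uncertain annual growth. All parameters are illustrative.

def simulate_trajectory(start_revenue: float, years: int = 5) -> float:
    revenue = start_revenue
    for _ in range(years):
        growth = random.gauss(0.15, 0.25)  # assumed mean 15% growth, high variance
        revenue *= max(0.0, 1.0 + growth)
    return revenue

random.seed(42)
N = 100_000
outcomes = sorted(simulate_trajectory(1.0) for _ in range(N))

# Summarize the distribution of five-year outcomes.
p10, p50, p90 = (outcomes[int(N * q)] for q in (0.10, 0.50, 0.90))
print(f"5-year revenue multiple: p10={p10:.2f}, median={p50:.2f}, p90={p90:.2f}")
```

The design point is that each trajectory is cheap, so an AI planner can afford millions of them per decision and read strategy off the resulting distribution rather than a single forecast.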
Ronald Coase’s theory of the firm tells us that companies exist to reduce transaction costs (so that you don’t have to go rehire all your employees and rent a new office every morning on the free market). His theory states that the lower the intra-firm transaction costs, the larger the firms will grow. Five hundred years ago, it was practically impossible to coordinate knowledge work across thousands of people and dozens of offices. So you didn’t get very big firms. Now you can spin up an arbitrarily large Slack channel or HR database, so firms can get much bigger.
AI firms will lower transaction costs so much relative to human firms. It’s hard to beat shooting lossless latent representations to an exact copy of you for communication efficiency! So firms probably will become much larger than they are now.
This would imply that the first company to master full AI workflows could dominate entire industries as hyper-efficient ecosystems, dynamically optimizing every aspect of operations. You may think that Patel is wrong (I don’t), but as the saying goes - never bet against compute.
The Future Is About to Hit Us
Read the entire piece by Tomas Pueyo:
The best AI models were about as intelligent as rats four years ago, dogs three years ago, high school students two years ago, average undergrads a year ago, PhDs a few months ago, and now they’re better than human PhDs in their own field. Just project that into the future.
The makers of these models are all saying this is scaling faster than they expected, and they’re not using weird tricks. In fact they see opportunities for optimization everywhere. Intelligence just scales up. They believe AGI is 1-4 years away. This is only a bit more optimistic than the markets, which estimate it will arrive in ~2-6 years.
AIs are now like elite software developers. And AI is already improving AI. Put these two together, and you can imagine how quickly AI development speed will accelerate.
At this point, it doesn’t look like intelligence will be a barrier to AGI. Rather, it might be other factors like energy or computing power.
Except DeepSeek just proved that we can make models that consume orders of magnitude less money, energy, and compute.
So although electricity, data, and especially compute might be limiting factors to AI growth, we are constantly finding ways to make these models more efficient, eliminating these physical constraints.
In other words: AI is progressing ever faster, we have no clear barriers to hinder their progress, and those in the know believe that means we’ll see AGI in half a decade.
The Management Paradox
Everything simple is wrong. Everything complex is unusable.
Loved this quote. Traditional management advice presents a false choice between hands-off leadership and micromanagement: either one or the other. This oversimplified view ignores the complex reality of modern tech organizations, where the same team might be simultaneously:
Launching a critical new product feature
Maintaining legacy code/systems
Exploring experimental stuff
Managing critical customer relationships (always be closing)
Each of these contexts demands different levels of oversight and support.
Here’s Scale&Signal:
What's the actual goal of management? It's to ensure valuable work gets done. Everything else - employee happiness, team culture, process optimization - these are all inputs to that function. Important inputs, certainly, but not the fundamental purpose.
The interesting thing about competence and motivation is that they create a matrix. Think of competence as having four levels:
1. Unknown (We don't know what they can do)
2. Junior (Learning the basics)
3. Professional (Can execute independently)
4. Expert (Deeply understands the domain)
And motivation as having four levels too:
1. Actively sabotaging
2. Uninterested
3. Fine/Acceptable
4. Fully Driven
This creates sixteen possible states.
Sixteen, at minimum. A quick sketch below makes the grid concrete:
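Here’s a minimal Python sketch of the matrix. The four-by-four grid comes straight from the post; the management styles attached to each cell are my own illustrative guesses, not Scale&Signal’s actual framework:

```python
from itertools import product

# Both axes come from the post; the style labels below are mine.
COMPETENCE = ["Unknown", "Junior", "Professional", "Expert"]
MOTIVATION = ["Actively sabotaging", "Uninterested", "Fine/Acceptable", "Fully driven"]

def suggest_style(competence: str, motivation: str) -> str:
    """Map one (competence, motivation) cell to a management style.
    Illustrative guesses only, not Scale&Signal's framework."""
    if motivation == "Actively sabotaging":
        return "contain: resolve the conflict before anything else"
    needs_direction = competence in ("Unknown", "Junior")
    needs_energy = motivation == "Uninterested"
    if needs_direction and needs_energy:
        return "hands-on coaching plus motivation work"
    if needs_direction:
        return "hands-on coaching and close feedback"
    if needs_energy:
        return "delegate, but re-engage on purpose"
    return "delegate with light-touch support"

# Enumerate all sixteen states of the matrix.
for comp, mot in product(COMPETENCE, MOTIVATION):
    print(f"{comp:12} x {mot:20} -> {suggest_style(comp, mot)}")
```

The exact labels matter less than the shape: neither axis alone determines the right style.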
The effectiveness of any management style depends on two critical variables: competence and motivation. But here's the crucial insight: these aren't fixed properties of individuals—they're properties of individuals in specific contexts.
Consider a senior engineer who's:
An expert in backend systems (high competence)
Deeply passionate about architecture (high motivation)
New to mobile development (low competence)
Skeptical about blockchain technology (low motivation)
Depending on the context, the same person requires radically different management approaches. Treating them as “senior” or “junior” across all domains is a recipe for failure.
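Reusing suggest_style from the sketch above, the same engineer lands in different cells depending on context, which is exactly why a single “senior” label fails. Where the post leaves a dimension unstated, the values below are my own assumptions:

```python
# Same engineer, four contexts. Where the post specifies only one
# dimension, the other value is an assumption for illustration.
engineer = {
    "backend systems":    ("Expert", "Fully driven"),
    "architecture":       ("Expert", "Fully driven"),
    "mobile development": ("Junior", "Fine/Acceptable"),
    "blockchain":         ("Professional", "Uninterested"),
}

for context, (comp, mot) in engineer.items():
    print(f"{context:18} -> {suggest_style(comp, mot)}")
```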
The full framework is in the original post.