The Best People Don’t Look Like the Best People, The FarmVille Problem in AI, and Speed Is a Safety Problem
the bottleneck moved.
Good morning
In today’s edition, among other things:
The Best People Don’t Look Like the Best People
The FarmVille Problem in Enterprise AI
The Space Economy’s Shipping Container Moment
When the Factory Is the Moat
Speed Is a Safety Problem
Onwards!
The FarmVille Problem in Enterprise AI
Will Manidis recently published an essay that gave a name to something I’ve been circling around for months. He calls them Tool-Shaped Objects: things that look like tools, feel like tools, produce the sensation of work, but don’t produce work.
Manidis applies this to the current AI cycle. The dominant AI narrative is about consumption, not output. Token budgets. GPU clusters. Billion-dollar training runs. The story is the capex, not what the capex produces.
The market for feeling productive is orders of magnitude larger than the market for being productive.
He’s right. And the mechanism is worth understanding, because it explains a pattern I see constantly when working with companies on AI transformation.
The verbal fluency of LLMs is what makes them the most convincing tool-shaped objects ever built. FarmVille could only simulate farming. Notion could only simulate organizing. An LLM can simulate anything. It streams tokens. It shows chain-of-thought. You can watch it reason, adjust the temperature, swap models, add tools. The entire experience screams productivity. Manidis describes teams of smart engineers building agent systems whose primary output is the existence of the system itself. Agents producing logs analyzed by other agents, generating reports that populate dashboards nobody reads.
The boundary between tool and tool-shaped object is a gradient, not a line.
The test for you and your work is simple: can you name the metric that you changed with AI? Not tokens consumed. Not workflows automated. Not agents deployed. The business metric. Response time. Conversion rate. Revenue per employee. Error rate. Cycle time. If you can’t point to a number that moved in a direction your CFO cares about, the work doesn’t matter.
This is why a sprint model works for AI transformation and the “let’s experiment with AI” model doesn’t. Experiments produce shavings. Sprints produce measurable changes in a business process with a defined owner and a defined target. The constraint of a deadline and a metric forces you off the gradient and onto one side or the other.
The incentives are aligned to produce tool-shaped objects:
AI vendors sell by consumption (tokens, seats, API calls). Their revenue goes up whether or not your output improves.
Internal champions need to show activity to justify their AI budget. Dashboards full of agent logs are activity.
Executives want to tell their board they’re “doing AI.” A five-agent pipeline is more impressive in a board deck than two people using Claude well.
Every participant in the chain is rewarded for the sensation of AI adoption, not the fact of it.
What Manidis also gets right is that this doesn’t mean LLMs are useless. They’re super powerful. The models are getting better fast. But the gap between “this technology can do real work” and “this organization is using it to do real work” is where most of the capital is currently burning.
If you’re deploying AI in your organization: start with the metric, not the tool. Pick one process. Define what better looks like in numbers. Build the simplest thing that moves that number. If it requires five agents and an orchestration layer, you’re probably building a tool-shaped object.
If you’re investing in or building AI companies: ask what metric their customers track. If the answer is adoption or usage, that’s a consumption story. If the answer is a business outcome their customer’s CFO would recognize, that’s a tool. Both can make money. Only one compounds.
Ask what the number is before making it go up.
The Best People Don’t Look Like the Best People
Every founder says they hire “the best.” The math doesn’t work: if every company has a world-class team, either the definition is meaningless or most founders are lying to themselves.
Alex Kurilin wrote a long, specific piece on what predicts great startup hires. He’s a CTO with enough reps to have real pattern recognition:
The best hires in the early stages are usually non-obviously good to the untrained eye. They don’t look as appealing to employers with infinite resources who otherwise would have already hired them.
The Stanford CS grad with the OpenAI internship isn’t joining your seed-stage startup. And if they did, they’d probably be wrong for it. Your edge as a startup is finding people the market hasn’t priced yet.
Kurilin lists 11 traits for great startup hires. Most compress into two that I keep seeing across portfolio companies:
Hunger over pedigree. The best early hires are driven by curiosity and a need to prove themselves. Failures build humility, rejection sharpens EQ, but the drive to compete and learn either exists or it doesn’t. No onboarding program installs it.
Mid-career sweet spot. Too junior means you’re funding their education on your runway. Too senior means they’ve built muscle memory for cross-team coordination, staff-level architecture, organizational politics, none of which exists at your 8-person company. That training can work against them in a small team where the job changes every two weeks.
When tools compress five roles into one, the cross-disciplinary generalist who can think product, design the interface, write the backend, and debug the deploy pipeline stops being a nice-to-have and becomes the default profile. The tools only amplify hunger and tolerance for chaos. A passive engineer with access to Claude Code is still a passive engineer.
Your hiring process is probably filtering out your best candidates. If the funnel starts with resume keywords and ends with LeetCode tests, you’re selecting for people who are good at interviewing. Work trials and real projects, where candidates bring their own AI tools, tell you how someone builds. That’s what you’re paying for.
Your best early hires will leave. They’ll outgrow you, get poached, or start their own thing. Kurilin frames this as “good industry karma.” I’d reframe: if nobody leaves for something bigger within two years, you probably hired too conservatively. You want people whose growth rate exceeds yours.
The Space Economy’s Shipping Container Moment
Reusable rockets are following a cost curve investors have seen before. The question is whether the analogy holds all the way through.
NFX’s Daniel Museles and Morgan Beller make a clean historical argument: every major transportation revolution follows the same five-step sequence. An enabling technology collapses costs by an order of magnitude, access democratizes, unpredicted industries emerge, and power reshuffles. They map this across four eras: sail, rail, automobiles, containerization. Then they draw the line to space.
The Space Shuttle put payloads into low Earth orbit at roughly $54,500 per kilogram. Falcon 9 does it for around $2,700. Falcon Heavy pushes that to $1,500: a 36x reduction, which is the same magnitude as containerization’s impact on shipping costs. Starship, if it hits target economics, could push below $100 per kilogram. That’s a 500x reduction from the Shuttle era.
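The multiples above are easy to verify from the cited figures. A quick sketch, using the article’s rounded numbers rather than primary-source data:

```python
# Cost-per-kg to low Earth orbit, rounded figures as cited above.
shuttle = 54_500          # Space Shuttle, $/kg
falcon_heavy = 1_500      # Falcon Heavy, $/kg
starship_target = 100     # Starship target economics, not yet demonstrated

print(f"Shuttle -> Falcon Heavy: {shuttle / falcon_heavy:.0f}x")    # ~36x
print(f"Shuttle -> Starship:     {shuttle / starship_target:.0f}x") # ~545x, "500x" rounded
```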
NFX’s takeaway:
The people who built the infrastructure – the ports, the rail depots, the highways, the container cranes – generally did quite well. Those who waited for the second-order effects to become obvious arrived too late.
NFX frames this primarily as a space story. I’d frame it as an infrastructure timing story that happens to be about space right now.
The most useful frame in the piece: second-order effects are always bigger than the enabling technology, and they’re always surprising. Sears wasn’t planned when railroads laid track. Suburbia wasn’t planned when Ford built the Model T. The financial primitives, the consumer brands, the social reorganization: all of it emerged after the infrastructure existed and entrepreneurs could see what was newly possible.
Investors try to predict the second-order effects before the infrastructure is built. That’s backwards. You can’t predict what Sears looks like before the railroad exists. You invest in the infrastructure layer and accept that the applications will surprise you.
Where I push back on the NFX framing: they argue the U.S. has a one-to-two-decade window of unique advantage. That feels optimistic. China landed a reusable rocket booster in 2024 and is building orbital supercomputers. India’s space program is accelerating. The window for infrastructure moats in space may be closer to five to seven years.
The infrastructure layer is the investable layer right now. Launch, on-orbit servicing, positioning, orbital computing. The applications layer is still too speculative to price.
The NFX piece is worth reading in full for the historical depth. The pattern is real and the cost collapse is measurable. Whether the U.S. infrastructure advantage holds long enough to capture the second-order value is a different bet.
When the Factory Is the Moat
a16z just published a manufacturing economics primer. A guide to yield curves, OEE decomposition, and cash conversion cycles for hardware startups. Super interesting. That this came from the firm that built its reputation on the software-is-eating-the-world thesis says something specific about where capital is going, too.
Oliver Hsu at a16z lays out the “factory-is-the-product” thesis:
For companies that master these economics, the rewards are substantial. A factory that works — that produces at high yield, at competitive cost, at scale — is extraordinarily difficult for competitors to replicate.
When your manufacturing process is your IP, your moat compounds with every unit you ship. Wright’s Law: every doubling of cumulative production drops costs by 15-20%. The company that ships first and ships fastest builds a cost position that late entrants can’t close without burning enormous capital.
Getting the yield economics wrong is expensive. Hsu walks through a clean example: a factory running at 70% yield pays $114 per good unit. A competitor at 90% yield pays $89. That $25/unit gap comes entirely from process discipline, not better materials or cheaper labor. Scale the difference across millions of units and you have the kind of structural cost advantage that kills competitors slowly, over years.
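Both effects are simple arithmetic. The cost per good unit is the cost per attempted unit divided by yield; the $80 attempt cost below is inferred from the $114/$89 figures, not stated in the piece. The Wright’s Law parameters are likewise illustrative:

```python
import math

# Yield economics: cost per *good* unit = cost per attempted unit / yield.
# The $80 attempt cost is inferred from the $114/$89 figures above.
cost_per_attempt = 80.0

def cost_per_good_unit(yield_rate: float) -> float:
    return cost_per_attempt / yield_rate

print(round(cost_per_good_unit(0.70)))  # 114
print(round(cost_per_good_unit(0.90)))  # 89

# Wright's Law: each doubling of cumulative volume cuts unit cost 15-20%.
learning_rate = 0.20                   # assumed: high end of the cited range
b = -math.log2(1 - learning_rate)      # learning-curve exponent

def wright_cost(first_unit_cost: float, n: int) -> float:
    return first_unit_cost * n ** -b

# After 10 doublings (unit 1 -> unit 1024), cost falls to ~11% of the original.
print(round(wright_cost(100.0, 1024), 1))  # 10.7
```

The exponent form is equivalent to multiplying cost by 0.8 at every doubling, which is why early cumulative volume compounds into a structural advantage.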
The parallel to AI infrastructure is what makes this relevant beyond hardware. GPU clusters, inference farms, data center operations: these are factories. The same dynamics apply:
Utilization is everything. A GPU cluster at 50% utilization doesn’t generate 50% of the margin. It likely generates negative margin, because fixed costs (hardware depreciation, power, cooling, staff) don’t scale down. Same as a physical factory running half-empty.
The learning curve is the strategy here. Whoever accumulates production volume fastest descends the cost curve fastest. In AI, whoever processes the most tokens builds the deepest operational knowledge about scheduling, routing, and optimization. In manufacturing, whoever ships the most units compounds their process IP the fastest.
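The utilization point can be made concrete with a toy model. All numbers here are hypothetical, chosen only to show why margin doesn’t scale linearly with usage:

```python
# Toy model: why a half-utilized GPU cluster can run negative margin.
# Fixed costs (depreciation, power, cooling, staff) are paid regardless
# of how busy the cluster is; only revenue scales with utilization.
fixed_monthly_cost = 100_000           # hypothetical
revenue_at_full_utilization = 140_000  # hypothetical

def monthly_margin(utilization: float) -> float:
    return revenue_at_full_utilization * utilization - fixed_monthly_cost

print(monthly_margin(1.00))  #  40000.0 -> healthy
print(monthly_margin(0.50))  # -30000.0 -> half the usage, negative margin
```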
The capital structure section is where hardware founders should pay close attention. Equity for technology risk, debt for execution risk, project finance for market risk.
Most founders I talk to either over-equity everything (diluting themselves unnecessarily) or try to debt-finance R&D risk, which blows up when yield doesn’t hit plan.
Software moats are compressing as AI-native competitors rebuild features in weeks. Manufacturing moats go the other direction: they get wider with time, because accumulated process knowledge and yield improvements are nearly impossible to replicate without running your own production at scale.
Whether you’re investing in hardware or AI infrastructure, read the full piece. It’s long but worth it.
Speed Is a Safety Problem
Most organizations think shipping faster means accepting more risk. They have it backwards. The teams that ship the most break the least, because they’ve invested in the infrastructure that makes being wrong cheap.
Addy Osmani (engineering leader at Google, author of Learning Patterns) wrote one of the better breakdowns I’ve read on what “bias toward action” requires in practice. His framing:
Take the smallest step that produces real feedback, but know exactly how you’ll recover when it breaks.
Speed is an engineering discipline. Osmani cites Kathleen Eisenhardt’s research on decision-making in high-velocity environments: fast decision makers developed more alternatives, used better advice processes, and acted on roughly 70% of the information they wanted. Fast teams use more information than slow teams. They just refuse to wait for the remaining 30%.
Osmani’s line on this is good:
Speed comes from making the safe thing easy, not from being brave about doing dangerous things.
Etsy was deploying 50 times per day with mean time to resolution in minutes. Lowe’s cut mean time to recovery by over 80% after adopting SRE practices. These aren’t startups with nothing to lose. They built the plumbing, and the velocity followed.
With an error budget in place, instead of product and engineering fighting about whether to slow down, you look at the numbers.
It converts a political negotiation into a dashboard. Every organization I’ve worked with that adopted some version of this saw their shipping cadence increase, because the people who wanted stability finally had a framework that protected them without blocking everyone else.
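The arithmetic behind that dashboard is short. A sketch, assuming a 99.9% availability SLO over a 30-day window (both numbers illustrative):

```python
# Error-budget arithmetic for an assumed 99.9% availability SLO.
slo = 0.999
window_days = 30
budget_minutes = (1 - slo) * window_days * 24 * 60
print(round(budget_minutes, 1))  # 43.2 minutes of allowed downtime per window

# The dashboard decision: ship freely while budget remains, stabilize once spent.
downtime_so_far = 30.0  # minutes, hypothetical
print("ship" if downtime_so_far < budget_minutes else "stabilize")  # ship
```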
The Knight Capital story is the counterweight. In 2012, a deployment error left old trading code running on some servers. Within 45 minutes: $460 million in losses. They had no kill switches and no pre-trade limits. The SEC’s post-mortem: they never asked “what happens if each component malfunctions?”
The lesson isn’t “move slow.” It’s “invest in the safety nets that let you move fast.”
I see the same failure mode in enterprise AI adoption. The organizations that move slowest aren’t being cautious; they just lack the safety nets that make being wrong cheap. The highest-leverage move is converting a scary one-way door into a safe two-way door.
If you’re running AI pilots: define what “broken” means before you start. Pick your metrics. Agree on the threshold where you pull the plug. Then ship to a small group and watch.
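“Define what broken means before you start” can be as literal as a dictionary of thresholds agreed on up front. A minimal sketch; the metric names and limits are hypothetical:

```python
# Hypothetical kill-switch thresholds agreed on before the pilot ships.
THRESHOLDS = {
    "error_rate": 0.02,      # pull the plug above 2% errors
    "p95_latency_ms": 1500,  # or if p95 latency exceeds 1.5s
}

def should_pull_the_plug(observed: dict) -> bool:
    """True if any observed metric breaches its agreed limit."""
    return any(observed[m] > limit for m, limit in THRESHOLDS.items())

print(should_pull_the_plug({"error_rate": 0.01, "p95_latency_ms": 900}))  # False
print(should_pull_the_plug({"error_rate": 0.05, "p95_latency_ms": 900}))  # True
```

The point is less the code than the sequencing: the thresholds exist before the first user sees the pilot, so pulling the plug is a lookup, not a debate.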
If you’re an engineering or ops leader: make moving fast boring. Invest in the safety nets that let your team ship without heroics. That means monitoring AI outputs with the same rigor you’d monitor latency.
If you’re evaluating AI vendors: ask them what rollback looks like. If the answer is “re-train the model,” you’re looking at a one-way door. If the answer is “flip a flag and revert to the previous version,” you’ve found a team that understands deployment discipline.
Osmani closes with a 30-day plan worth stealing: pick one service, define SLOs and an error budget, create rollout checklists, fix noisy alerts, and practice rolling back your scariest deployment. After a month, deploying should feel routine.
About 70% of outages come from changes, so the winning move is smaller changes with better escape routes, not fewer changes. The organizations that learn to make experiments cheap and reversible will run more of them. All that extra learning compounds.
Interesting Analysis and Trends
AI, Agents & Infrastructure
A Guide to Which AI to Use in the Agentic Era LINK
The Emerging Harness Engineering Playbook LINK
Why AI Products Break in Production: Context Engineering LINK
AI Is Bigger Than SaaS LINK
AI Coding, Productivity & People
AI Fatigue Is Real and Nobody Talks About It LINK
My AI Adoption Journey LINK
The AI Vampire LINK
Cognitive Debt: How AI Shifts Concern from Technical Debt to Cognitive Debt LINK
Startups, Growth & Product
How to Price Your Product: A Pricing Model Decision Tree LINK
Escaping the TAM Trap LINK
Your Org Structure Is My Opportunity LINK
Company as Code LINK
Venture, Markets & Ecosystems
Size for Simplicity: The Income Statements of Software After SaaS LINK
The Tech Market Is Fundamentally Broken LINK
What Consumers Are Doing Now LINK
Macro, Policy & Commentary
Americans Are Ten Times More Likely to Be Fired Than Germans LINK
Issue 22: Why Europe Doesn’t Have a Tesla LINK
Research & Ideas
Taste for Makers LINK
How to Get Better at What You Do LINK
“Nothing” Is the Secret to Structuring Your Work LINK
Meditations
Peter Drucker
There is nothing so useless as doing efficiently that which should not be done at all.
----
Thank you for your time,
Bartek

