The Founder's Paradox, Too Big Teams, The Future of Work with AI Agents, and Pricing AI Software
Optimize for enthusiasm.
Good morning
In today's edition, among other things:
The Founder's Paradox
Teams Too Big to Think
The Future of Work with AI Agents
Software Is Changing (Again)
How to Price AI Software
The Hidden Metric That Determines AI Product Success
What Is an AI-Native Company?
How to Build a PLG Motion at a Sales-Led Company
MCP Is the New WWW
Onwards!
The Founder's Paradox
From Reddit:
Everyone thinks they want to be a founder because the beginning is actually fun as hell. New idea, endless possibilities, that high you get when you're building something from scratch. It's addictive.
But then reality hits. You're not launching new stuff every week. You're doing the same things over and over for literal years. Same customer calls, same product tweaks, same marketing strategies. While your friends are jumping to new jobs and trying new things, you're stuck grinding on one thing.
Most people tap out here. They get bored. They start new side projects. They convince themselves they need a "fresh challenge" when really they just can't handle the repetition.
Here's what I learned the hard way - building anything worthwhile is boring as hell most of the time. You're not having breakthrough moments daily. You're making tiny improvements, having similar conversations, solving variations of the same problems. The glamorous founder life is maybe 5% of what you actually do.
The difference between people who make it and people who don't isn't talent or luck. It's who can stay interested in their boring thing long enough to make it not boring.
Teams Too Big to Think
The startup graveyard is filled with teams that looked great on an org chart but were terrible at showing up daily to deliver.
Some founders celebrate “hiring velocity,” investors track ratios of engineers to revenue, operators obsess over role-based KPIs. The real leverage isn’t adding more specialists—it’s subtracting hand-offs.
The bigger the team, the smaller the ownership. And when ownership fragments, everything else decays: speed, reliability, accountability, even morale.
Shrinking the surface area of expertise inside a team—by turning specialists into motivated generalists—creates more throughput than any headcount expansion or tooling upgrade.
AI makes “learning the other side of the stack” a short project, not a career change. Capital scarcity is forcing ruthless clarity on burn efficiency. And the rise of remote-plus-async culture exposes every hidden dependency. The companies that internalize this will ship faster and survive longer.
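A back-of-the-envelope sketch, mine rather than anything from the post discussed below: pairwise communication channels grow roughly quadratically with team size, which is one crude way to see why hand-offs, not headcount, become the bottleneck.

# Rough illustration only: distinct person-to-person channels in a team
# of n people is n * (n - 1) / 2, a crude proxy for coordination cost.

def communication_channels(team_size: int) -> int:
    return team_size * (team_size - 1) // 2

for size in (4, 6, 8, 14):
    print(f"{size:>2} people -> {communication_channels(size):>3} channels")

# Prints:
#  4 people ->   6 channels
#  6 people ->  15 channels
#  8 people ->  28 channels
# 14 people ->  91 channels

By that crude measure, a 14-person squad carries more than three times the coordination surface of an 8-person one.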
The opening anecdote in Ewerlöf's post is painfully familiar: a 14-person squad drowning in a daily, agile-inspired stand-up nobody understood. Here's Ewerlöf:
It wasn’t unusual to see someone visibly yawn or fall almost asleep during the stand-up… In an effort to keep the stand-up short, we invented hand gestures to signal the person who is talking that their time is over.
Sound familiar?
This means the “agile” ritual designed for surfacing blockers was now creating them. The key insight is that boredom in stand-ups is a lagging indicator of deeper misalignment—not an agenda problem.
I’ve seen this many times over with different teams and companies:
Phantom tasks surfaced on the board only when they were "close to the finish line," proof that work was happening in parallel shadow worlds.
Async Slack updates turned into reports; information radiated but was never absorbed.
When communication cost exceeds perceived benefit, people quietly opt out—and start shipping solo.
What’s actually happening is the team’s topology no longer matches the product’s dependency graph. Stand-up fatigue is simply the first smoke.
Leadership’s first fix was textbook: split into front-end and back-end “task forces.” Velocity went down. Why? Because specialization created new borders to defend.
On the next stand-up it was evident that there are dependencies between the two task forces. Who would have thought?
This is a mistake. Specialization feels efficient—but only on a whiteboard. In production it breeds idle cycles, context switches, and a pernicious form of fake work.
“Deep experts build higher-quality components” is often the argument.
True, if the bottleneck is quality, not coordination. In most early-stage software, bugs caused by mismatched interfaces are more expensive than those caused by imperfect code. Depth solves 10% of your risk; latency and blame games cause the other 90%.
Nine times out of ten, optimize for speed over depth. And now we have AI.
AI shifts the unit economics of expertise—context switching is now cheaper than queueing for a teammate.
Then the team realized that their ownership model was fundamentally broken. Specialists can never truly own anything because ownership requires three elements:
Knowledge: You understand both the problem and solution space
Mandate: You have the power to make changes without permission or babysitters
Responsibility: When things break, you're the one who fixes it
Here's where it gets uncomfortable for managers obsessed with the "resource utilization" of engineering teams and projects. Once leadership announced that, from then on, more devs would have to be generalists:
My theory is that the specialist jobs don't take up 8 hours every single day. This creates a good amount of slack to learn new things or rest while getting paid! The generalist model maximized resource utilization... High utilization made some resources too 'hot' to do in a sustainable manner.
When specialists became generalists, some quit. Not because they couldn't do the work, but because the slack disappeared. The hidden inefficiency of specialization had been protecting them from burnout. The generalist model is more productive precisely because it eliminates the hiding places.
Here’s the lesson: if you're running a team larger than 6-8 people, you're probably experiencing these symptoms:
Standups where half the team zones out
"Waiting on X team" as a constant blocker
Specialists who always seem busy while delivery stays slow
Knowledge silos that cause production issues
The solution isn't more process or better tools. It's reconsidering the fundamental assumption that specialization equals productivity. In Ewerlöf's words:
Good engineers are good problem solvers not a layer of meat wrapped around a tool!
There are no universal best practices, only fit practices.
The Future of Work with AI Agents
Most executives are building AI strategies based on a fundamental misunderstanding. They assume workers fear automation and need to be dragged into the AI future. Stanford's latest research reveals something far more interesting: workers are actually ahead of leadership in imagining how AI should reshape work. The disconnect isn't about resistance - it's about companies building the wrong kind of automation.
Stanford researchers introduced the Human Agency Scale (HAS), a five-level framework that moves beyond simplistic "automate or not" thinking. The scale ranges from H1 (AI handles everything) to H5 (humans lead, AI assists minimally). This nuanced approach reveals something crucial: different tasks require different collaboration models.
Here's the study's framework:
Ranging from H1 to H5, this scale moves beyond the traditional automate or not debate, categorising tasks where AI excels at full automation (H1-H2) and those where human agency remains essential for augmentation (H3-H5).
This means we've been asking the wrong question. It's not "What can we automate?" but "What level of human involvement creates the best outcome?" The highest concentration of tasks clusters around H2 and H3 - where AI needs human input or humans and AI work as equal partners. Workers aren't choosing between human or machine - they're designing hybrid workflows that leverage both.
What's actually happening is a sophisticated understanding of task allocation. Workers envision AI handling data collection, report generation, and routine analysis (H1-H2), while they focus on interpretation, relationship building, and strategic decisions (H3-H5). They're not protecting their jobs - they're redesigning them.
The study's most striking visualization compares workplace skills by average wage versus required human agency. The results upend conventional wisdom about "future-proof" careers. Technical skills that command high salaries today - data analysis, information processing, documentation - show up as highly vulnerable to automation.
Here's what the researchers found:
Green lines indicate skills that gain rank when judged by human involvement over pay, suggesting undervalued roles needing more human input. Red lines highlight well-paid skills that rely less on human effort, often tied to automation-friendly tasks like data processing.
This signals a massive revaluation coming. The skills gaining value aren't just "soft skills" - they're specific human capabilities: organizing and prioritizing work, training and teaching others, communicating with supervisors and subordinates. These aren't fuzzy concepts - they're measurable competencies that will command premium compensation as AI handles information tasks.
Consider what this means practically. A data analyst who only runs reports becomes replaceable. But one who can translate findings into strategic recommendations, coach junior team members, and navigate organizational politics becomes invaluable. The premium shifts from information processing to human interface.
From How AI is Rewriting the Playbook for Talent in European Tech Startups:
The positions most at risk are grounded in routine, rule-based activities:
Administrative and back-office operations: Basic accounting, data entry, and process coordination are particularly vulnerable, with automation tools now handling much of the repetitive workload.
Junior-level analytical roles: Entry-level finance and data positions, once key stepping stones for new talent, are increasingly automated as AI takes over standard reporting and basic data processing.
Customer service and support: This is where the risk is most acute, 74% of customer service/support roles in our dataset are flagged as susceptible to automation. First-level support jobs, historically an entry point into the workforce, are being replaced by chatbots, intelligent routing, and self-service platforms.
Operations: 44% of roles are at risk, especially in process-heavy and coordination-focused jobs.
Finance & Accounting: Another department facing significant disruption, with 62% of roles at risk as automation handles routine bookkeeping and compliance tasks.
This shift is not just a workforce challenge but a societal one. As AI eliminates traditional entry points, the classic “bottom rung” of the corporate ladder is being sawed off. The next generation of talent will find fewer opportunities to learn on the job in routine roles, raising critical questions about how to train and develop future professionals in judgment, creativity, and strategic thinking.
Key Data Points:
Customer Service/Support: 74% displacement risk
Finance & Accounting: 62% displacement risk
Operations: 44% displacement risk
People & HR: 33% displacement risk (primarily in transactional/admin HR)
Marketing & Communications: 14% displacement risk (mostly in basic campaign ops/content moderation)
Legal & Compliance: 16% displacement risk (primarily in paralegal and routine compliance roles)
Sales & Business Development: 11% displacement risk (mainly in prospecting and admin-heavy roles)
Software Engineering and Data Science/Analytics: Minimal displacement risk (2–4%), as these roles are largely creative and complex
Perhaps most revealing is where workers resist automation. Via the Stanford research:
"28% of workers express negative sentiments about AI agent automation in their daily work. Top concerns: Lack of trust in AI accuracy, capability or reliability - 45%. Fear of job replacement — 23%. Absence of human qualities in AI, such as human touch, creative control, and decision-making agency — 16.3%."
But flip these numbers: 72% aren't expressing negative sentiment. And of those who are concerned, less than a quarter worry about job loss. The primary concern is quality and reliability, not replacement. Workers aren't Luddites - they're quality control experts who understand where AI falls short.
What everyone's missing is the sophistication of worker preferences. They want AI for specific tasks:
Repetitive data entry and collection
Initial report drafting
Pattern recognition in large datasets
Scheduling and administrative coordination
But they insist on human control for:
Final decision-making
Client and team interactions
Creative problem-solving
Ethical judgments and exceptions
This isn't resistance - it's expertise. Workers understand their jobs' nuances better than either AI developers or executives.
Read that again, please, if you are an executive or a team leader.
Organizations typically approach AI implementation top-down: executives decide what to automate based on cost savings. But the research suggests a different path. Companies that involve workers in designing AI integration will build systems people actually want to use.
For founders and product teams: Stop building for complete automation. The market wants AI that enhances human decision-making, not replaces it. Focus on the H3 level - tools that create true collaboration between human insight and machine processing.
For investors: Look for companies building "centaur" solutions - where humans and AI each do what they do best. Avoid pure automation plays in knowledge work. The winners will build for augmentation.
For operators and managers: Involve your teams in AI deployment decisions. They understand task nuances you might miss. Use the Human Agency Scale framework to map which tasks should be H1 (full automation) versus H3 (partnership) versus H5 (human-led).
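For what it's worth, here is a minimal, purely illustrative sketch of what a HAS-tagged task inventory could look like. The H1-H5 labels come from the Stanford framework; the example tasks and level assignments are mine, not the study's.

# Illustrative sketch only: tagging a task inventory with Human Agency
# Scale (HAS) levels. Per the Stanford framing quoted above, H1-H2 are
# automation-friendly and H3-H5 are where human agency stays essential.
# Task examples and assignments below are hypothetical.

task_map = {
    "pull weekly usage data": "H1",
    "draft the first version of a status report": "H2",
    "interpret churn drivers with the account team": "H3",
    "coach a junior analyst": "H4",
    "negotiate a renewal": "H5",
}

hand_to_agents = [t for t, level in task_map.items() if level in ("H1", "H2")]
keep_human_led = [t for t, level in task_map.items() if level in ("H3", "H4", "H5")]

print("Hand to agents:", hand_to_agents)
print("Keep human-led:", keep_human_led)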
Workers are already designing the future of work with AI. Some tools and solutions aren't there yet, but workers aren't waiting for permission.
Software Is Changing (Again)
Must watch. Don't summarize it with AI; actually spend time watching and understanding it.
How to Price AI Software
SaaS pricing is dead. Not dying. Dead. The evidence is sitting right there in Emergence Capital's latest pricing study, but everyone's too polite to say it out loud. A few months back I was in meetings with $1B+ prospects who said they had banned per-seat pricing at their companies.
What's actually happening is a full-scale abandonment of everything we learned about pricing while building SaaS companies.
Here's what everyone's missing in the pricing debate. Via Jake Saper:
AI taps labor budgets, not just software budgets. But as AI reduces headcount, per-seat models become self-defeating.
This means your competition isn't Salesforce or Slack. It's the $200K senior analyst your customer was about to hire. The $2M call center team they're currently managing. The $500K/year law firm on retainer.
Software budgets are rounding errors. Labor budgets are where CFOs lose sleep.
The real number that should terrify traditional SaaS:
With proper ROI frameworks, AI products are capturing 25-50% of created value—significantly higher than traditional SaaS's 10-20%.
Think about that. AI companies are already extracting 2-3x more value than software companies. Not because they're better at negotiating. Because they're playing a different game entirely.
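To make the arithmetic concrete, here is a rough sketch with hypothetical numbers (mine, not the study's) of why per-seat pricing becomes self-defeating once the product removes seats, and what pricing against the labor value created looks like at the 25-50% capture rates quoted above.

# Hypothetical numbers, purely illustrative.
seats_before = 50                  # analysts using the tool today
seat_price = 1_200                 # $ per seat per year
loaded_cost_per_analyst = 120_000  # fully loaded $ per year

print(f"{seats_before * seat_price:,}")    # 60,000 in per-seat revenue today

# The AI automates enough work that the customer now runs with 20 analysts.
seats_after = 20
print(f"{seats_after * seat_price:,}")     # 24,000: vendor revenue shrinks

# Value created for the customer: 30 analysts' worth of labor per year.
value_created = (seats_before - seats_after) * loaded_cost_per_analyst
print(f"{value_created:,}")                # 3,600,000

# Capturing 25-50% of that value, the range cited above:
print(f"{0.25 * value_created:,.0f} to {0.50 * value_created:,.0f}")  # 900,000 to 1,800,000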
A year ago, pricing was simpler. Today?
Hybrid models dominate, combining elements of seat, usage, per agent, and outcome pricing.
I see founders adding pricing variables like they're seasoning soup—a little usage here, some outcomes there, keep the seats for safety.
Every pricing tier you add admits you don't know what you're worth.
The real opportunity is in simplification. Pick one metric that matters. Price on it. Period.
Want to 10x your deal size tomorrow? According to Saper, we should start here:
If your pilot costs $5K but commercial deals are $100K-$300K based on the value unlocked, state this explicitly to avoid anchoring.
This is the most expensive sentence in the entire report. Here's why:
Your POC price is your value anchor. Price it at $5K, and you've just told the market you're worth $5K. Good luck explaining why the "real" price is 20x higher.
The smartest founders are doing the opposite. They're charging $50K+ for POCs. Not because they're greedy—because it forces real commitment and sets up the $500K expansion.
What's actually happening: POCs aren't product trials anymore. They're paid consulting engagements that happen to use your AI. This shift alone can add a zero to your valuation.
Buried in the report is this gem about AI-enabled services:
Service businesses naturally own end-to-end execution with clear attribution. Mechanical Orchard is a great example; they use AI to move mainframe workloads into the cloud. They take ownership of the entire process...
This means the future isn't AI products. It's AI-powered outcomes. Start thinking like the service provider you're replacing—then price accordingly.
The authors share a 2x2 matrix for picking your pricing model. Here's the only part that matters:
Bottom left (low autonomy, low attribution) = You're a feature
Top right (high autonomy, high attribution) = You're the future
Everything else is a transition state.
What everyone's missing is that moving from bottom-left to top-right isn't about product development. It's about courage. The technology to deliver outcomes exists. The question is whether you'll take responsibility for them.
Based on the patterns in this data, here's what should work:
1. Find one workflow you can completely own. Not improve. Not accelerate. Own. If you can't guarantee the outcome, you're not ready for outcome pricing.
2. Charge for POCs like they're consulting engagements. Because they are. $50K minimum. This isn't about the money—it's about setting value expectations.
3. Stop talking to IT. Via the report: "Business users who experience the benefits firsthand are far more receptive to ROI-based pricing." Your buyer has P&L responsibility. Find them.
When AI can do the entire job, why would anyone buy tools?
The answer is they won't. They'll buy results.
And the companies selling results? They're capturing 25-50% of the value they create, building AI-enabled services that look nothing like software companies.
AI-first software will eat services companies by becoming AI-driven services.
Interesting Analysis and Trends
AI & Agentic Systems
Built a Multi-Agent Research System LINK
Musings on AI Companies of 2025–2026 LINK
The Hidden Metric That Determines AI Product Success LINK
GenAI for Hedge Funds & Startups LINK
Voice AI in a Box: The Future Is Talking Back LINK
What Is an AI-Native Company? LINK
Revenue Benchmarks for AI Apps LINK
GTM, Sales & Micro-Exits
AI Is Reshaping B2B GTM With Signals You’re Ignoring LINK
Maximizing GTM Efficiency With AI LINK
How to Build a PLG Motion at a Sales-Led Company LINK
10-Step Playbook for a Micro-Exit LINK
Product & Organizational Design
Creating Intelligent Products LINK
Factors in Structuring a Product Organization LINK
The Rise of Systems of Consolidation LINK
MCP Is the New WWW LINK
MCP Best Practices LINK
People, Talent & Culture
The Art of Mentorship in Seed Investing LINK
A Good Engineer LINK
How Tech’s Most Resilient Workers Stay Relevant LINK
Trends, Commentary & Frameworks
Six Themes for 2025 LINK
Geo Over SEO LINK
An Engineer’s Guide to Vibe Design LINK
Why Venture Math Is Different for Operators LINK
Meditations
Martin H. Fischer:
Knowledge is a process of piling up facts; wisdom lies in their simplification.
Thank you for your time,
Bartek