The Big Squeeze in B2B and the Challenge of Lasting Defensibility

AI has created the fastest-scaling companies we’ve ever seen. Lovable, for instance, hit $100 million ARR just eight months after launch. As Brian Balfour observes in The Big Squeeze, “Escape velocity elevated Lovable from obscurity to household name. And now the company has a real chance to build a large and successful business. But there’s no guarantee they’ve found long-term defensibility or can turn this wave of interest into a sustainable business.”

That tension—between speed and defensibility—is the defining challenge of today’s market. Startups can achieve breakout growth only to find incumbents copying their innovation and distribution channels drying up. For B2B startups, the squeeze is even harsher. Distribution windows are shorter, incumbents are stickier, and the path to defensibility is narrower. Winning requires not just speed, but turning that speed into structural moats.

The Mechanics of the Big Squeeze

Balfour describes three converging forces:

  • Massive AI interest: fueling rapid adoption and record-breaking growth.
  • Incumbent mirroring: big players rushing to replicate startup innovations.
  • Distribution scarcity: organic channels like search and social in steep decline.

The result, he writes, is

The Big Squeeze. Startups must get massive distribution quickly, but it’s harder to get and easier for their innovations to be ripped off once they do.

This dynamic was captured years earlier by Alex Rampell of Andreessen Horowitz:

The battle between every startup and incumbent comes down to whether the startup gets distribution before the incumbent gets innovation.

In B2B, where distribution has always been more constrained, the battle is even tougher.

Why Distribution Alone Isn’t a Moat

Balfour is clear: “Distribution isn’t success in itself, but an opportunity to capture it. It’s the very first step in building a moat.” [...]

Curiosity Beats Tenure in the Age of AI

Key Takeaway

The jury is still out on whether AI will replace or empower software developers, but dismissing junior talent is a short-sighted approach. Their curiosity and adaptability make them the best positioned to thrive in an AI-driven future—qualities that matter more than years of experience.

Why This Matters

AI is reshaping the nature of engineering work. Leaders face pressure to cut costs and experiment with automation. Some argue junior developers are the most “replaceable” role. Others counter that they are the most essential. The question is not simply about efficiency, but about who will carry engineering organizations into the next era.

Two Diverging Perspectives

The Replacement View

Several leaders predict AI will soon take over tasks typically assigned to entry-level engineers. Dario Amodei believes AI could generate up to 90 percent of new code. Sam Altman has said jobs will “definitely go away.” Geoffrey Hinton has gone further, warning that AI could eventually replace white-collar work broadly.

This argument positions juniors as expendable: their tasks automated, oversight left to senior staff.

The Augmentation View

Others view AI as a lever, not a replacement. Thomas Dohmke expects the smartest companies to hire more engineers, not fewer. Andrew Bosworth describes AI as expanding the capabilities of developers. Mustafa Suleyman warns more about a widening skills gap than job loss.

This view highlights a crucial fact: juniors bring curiosity and drive that seniors, conditioned by legacy practices, often lack. As AWS CEO Matt Garman noted in a podcast (video), they are the most eager to experiment with new tools. Curiosity is not a “soft” trait—it’s the core ingredient for mastering rapidly evolving technologies.

Why Curiosity Beats Tenure

Years of experience are not always a proxy for adaptability. In many cases, “20 years of experience” can mean repeating the same year’s practices twenty times. Senior developers carry the weight of how things used to be done. Juniors, by contrast, arrive without baggage, ready to adopt AI workflows, test new approaches, and ask questions that break conventional thinking.

The real question isn't junior versus senior: are you intellectually curious, hungry to make a difference, and ready to reinvent yourself at work?

This difference matters because AI is not just a coding assistant—it’s a paradigm shift. The engineers most willing to learn, unlearn, and relearn will be the ones who define the next wave of software.

Takeaway for Product and Tech Leaders

The debate is unresolved, but the path forward is clear:

  • Value curiosity as much as expertise. Juniors bring energy and openness that AI will reward.
  • Build a dual strategy. Utilize AI to automate repetitive tasks while investing in mentoring early-career engineers.
  • Avoid false efficiency. Cutting junior roles may deliver short-term savings, but risks hollowing out the future talent pipeline.

Conclusion

AI may change how code is written, but curiosity, adaptability, and drive are timeless assets. Junior developers embody these traits more than any other group. The leaders who cultivate them—rather than replace them—will build organizations ready for whatever this fast-moving future holds.

Agentic AI Needs APIs to Act

APIs are often seen as back-office plumbing, but in the emerging world of agentic AI, they are the execution layer that makes autonomy possible. Without APIs, AI remains stranded in theory—able to reason, but unable to act.

From Copilots to Agents

The last wave of AI adoption has been copilots—tools that help users write emails, summarize documents, or draft code. These copilots assist, but they don’t take initiative. Agentic AI is different. Agents can plan, reason, and execute tasks end-to-end, often without human intervention.

But here’s the catch: for agents to actually do something, they need APIs. APIs are the hands and feet of AI. They let an agent check a customer’s eligibility, process a payment, or reschedule a shipment.

APIs as the Action Layer

Consider healthcare. A patient-facing AI agent may be asked to verify benefits and book an appointment. That agent cannot achieve the goal without a secure and reliable API. Through APIs, it connects to claims systems, checks eligibility, and schedules care—all actions that today require phone calls and manual lookups.

Let me take an example close to my heart (I work at Optum). At Optum, APIs like those available on the Developer Portal already provide secure access to eligibility, claims, and clinical functions. This means agentic AI can be layered on top to handle tasks that previously bogged down patients, providers, and administrators.

The same story plays out across industries:

  • A fintech agent creates an invoice through Stripe’s APIs.
  • A retail agent adjusts staffing using HR and scheduling APIs.
  • A logistics agent reroutes shipments by orchestrating supply chain APIs.

In every case, APIs are the indispensable bridge between an AI’s reasoning and real-world outcomes.
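The pattern above can be sketched in a few lines. This is a minimal illustration of an agent's "tool registry" dispatching intents to APIs; the function names, payloads, and the healthcare endpoints they stand in for are all assumptions for the sketch, not real Optum or Stripe APIs.

```python
# Minimal sketch of the "APIs as hands and feet" idea: the agent's
# reasoning picks an intent, and a registered API call executes it.
# All endpoints below are hypothetical stubs.

def check_eligibility(member_id: str) -> dict:
    """Stub for a real eligibility API call (e.g., an HTTPS request)."""
    return {"member_id": member_id, "eligible": True, "plan": "PPO"}

def book_appointment(member_id: str, slot: str) -> dict:
    """Stub for a real scheduling API call."""
    return {"member_id": member_id, "slot": slot, "status": "booked"}

# The tool registry: without it, the agent can describe an action
# but has no way to perform it.
TOOLS = {
    "check_eligibility": check_eligibility,
    "book_appointment": book_appointment,
}

def run_agent(intent: str, **kwargs) -> dict:
    """Dispatch an intent chosen by the agent to the matching API."""
    if intent not in TOOLS:
        raise ValueError(f"No API available for intent: {intent}")
    return TOOLS[intent](**kwargs)

result = run_agent("check_eligibility", member_id="M123")
print(result["eligible"])  # True
```

Swapping the stubs for authenticated HTTP calls is what turns the agent's plan into a real-world outcome, which is why API robustness and security sit on the critical path.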

Why This Matters for Leaders

Leaders who think of APIs as minor technical connectors risk missing the bigger shift. As agentic AI moves from hype to reality, APIs will determine whether your organization can harness it to deliver value. If your APIs are robust, secure, and well-documented, they become the foundation for intelligent automation and new business models. If they’re neglected, your AI strategy stalls at the whiteboard.

Takeaway: Agentic AI is only as powerful as the APIs it can call. Leaders should treat APIs as strategic assets—the execution layer that will determine whether AI can move from promising demos to meaningful business outcomes.

No New Ideas in AI? The Power and Limits of Data

The claim that there are no new ideas in AI, only new datasets, is both provocative and partly true. As Jack Morris argued in his recent post, many of the most important AI milestones have been driven not by theoretical leaps, but by new sources of data.

He puts it succinctly:

“The breakthroughs weren’t big ideas; they were new ways to learn from new kinds of data.” (blog.jxmo.io)

That framing resonates with history. The ImageNet dataset unlocked computer vision. Web-scale text collections made Transformers viable. Reinforcement learning from human feedback reshaped how models align with our preferences. Time and again, progress has come from scaling access to structured, abundant inputs rather than from fundamentally novel algorithms.


But the story is incomplete if we stop there. Innovation in AI is multidimensional, and algorithms still matter. DeepMind’s AlphaEvolve has already demonstrated the ability to generate algorithms beyond human expertise, producing approaches to matrix multiplication and optimization problems that surprised even expert researchers, as reported in Wired. If data were the only driver, we would not see systems out-innovating decades of human design.

As Dr. Rumman Chowdhury cautions:

“Innovation stems from human minds, not AI. Don’t delegate your thinking to machines.” (Business Insider)

Her warning underscores the deeper point: humans frame the problems, interpret the outputs, and decide which ideas matter. AI extends our reach, but it does not replace human originality. Even when AI generates unexpected solutions, the spark of innovation is in how people direct, interpret, and apply them.

Morris’s post highlights an important truth: data has been the engine of AI’s rise. But the full picture includes algorithms, architectures, and above all, human creativity. Ignoring these other drivers risks oversimplifying how innovation actually happens.

Why Customer Success Belongs at the Start of Product Strategy

Customer Success (CS) is one of the most misunderstood roles in SaaS. As Saahil Karkera wrote in a widely shared LinkedIn post, one quarter, CS teams are heroes; the next, they're blamed for churn, adoption drops, and burnout.

This volatility exists because CS sits at the fault lines of Product, Sales, and Customer expectations. The solution isn’t hiring “miracle CSMs.” It’s treating Customer Success as a shift-left strategy—designed into product, GTM, and organizational incentives, not bolted on at the end.

Learning From Shift-Left in Security

In IT security, shift-left transformed how teams work. Instead of treating security as a gate at deployment, leaders embed standards and checks during design and development. Fixing vulnerabilities early is cheaper and more effective than firefighting later.

Customer Success needs the same treatment.

Too often, CS operates to the right of the timeline—absorbing missed expectations, shaky onboarding, or roadmap gaps. The result is that CSMs spend energy patching systemic cracks instead of accelerating value.

Shift-left for CS means:

  • In Product Strategy: Define time-to-value and adoption as success criteria alongside revenue.
  • In Product Design: Build for real customer workflows, not idealized demos.
  • In Go-to-Market: Align sales promises with product realities and onboarding capacity.

When CS is part of those early conversations, it doesn’t play defense. It drives sustainable growth by ensuring customers realize value quickly and reliably.

The Visibility Paradox of CS

Churn metrics and Net Revenue Retention (NRR) dips are visible. Product gaps are visible. What isn’t visible are the hours of trust-building, escalation diplomacy, and behind-the-scenes fixes that keep customers afloat.

The problem is that most companies measure CS primarily on lagging indicators like renewals. These only show what has already happened.

A shift-left approach asks: how can we measure what predicts success, not just what records failure?

Leading CS metrics include:

  • Onboarding completion rate
  • Time-to-first-value
  • Adoption depth across core personas
  • Customer engagement in adoption programs [...]
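Several of these leading indicators can be computed directly from raw account events. A minimal sketch, assuming hypothetical event names and a simple event log (the field names are illustrative, not a prescribed schema):

```python
from datetime import datetime

# Illustrative event log; event names are assumptions for this sketch.
events = [
    {"account": "a1", "event": "signup",          "ts": "2025-01-01"},
    {"account": "a1", "event": "onboarding_done", "ts": "2025-01-04"},
    {"account": "a1", "event": "first_value",     "ts": "2025-01-06"},
    {"account": "a2", "event": "signup",          "ts": "2025-01-02"},
]

def parse(ts: str) -> datetime:
    return datetime.strptime(ts, "%Y-%m-%d")

# Onboarding completion rate: accounts that finished onboarding / all accounts.
accounts = {e["account"] for e in events}
done = {e["account"] for e in events if e["event"] == "onboarding_done"}
onboarding_completion_rate = len(done) / len(accounts)

def days_to_first_value(account: str):
    """Time-to-first-value: days from signup to the first realized-value
    event, or None if the account has not reached value yet."""
    signup = min(parse(e["ts"]) for e in events
                 if e["account"] == account and e["event"] == "signup")
    value = [parse(e["ts"]) for e in events
             if e["account"] == account and e["event"] == "first_value"]
    return (min(value) - signup).days if value else None

print(onboarding_completion_rate)  # 0.5
print(days_to_first_value("a1"))   # 5
```

Tracked weekly, numbers like these predict renewal risk months before a lagging NRR dip makes it visible.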

Observability Now Includes Watching AI

When product managers think of observability, they usually mean uptime, latency, or error rates. But as AI becomes central to user experiences, that definition must expand. Observability now includes monitoring model accuracy, hallucinations, prompt injection, and real-time behavior. As Datadog’s CPO Yanbing Li notes, AI systems add a new layer of complexity to enterprise monitoring.

Why AI demands a new observability lens

Traditional software is deterministic. If a server or a function fails, you can diagnose and fix it. AI systems are probabilistic: a model hallucination may look valid until it misleads a user. Prompt injections or data poisoning might not cause system errors but can quietly undermine trust.

For PMs, this means observability must extend beyond infrastructure metrics to capture:

  • Accuracy drift — whether model outputs align with ground truth.
  • Security resilience — spotting adversarial prompts or unusual input patterns.
  • Behavioral health — tracking whether agents operate within safe, useful boundaries.
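Of the three, accuracy drift is the most mechanical to monitor. A minimal sketch, comparing recent output accuracy against a baseline measured at deployment time; the threshold and sample data are illustrative assumptions:

```python
# Sketch of an accuracy-drift monitor: alert when accuracy on a recent
# window of labeled samples falls too far below the deployment baseline.

def accuracy(outputs, ground_truth):
    """Fraction of model outputs matching the labeled ground truth."""
    return sum(o == g for o, g in zip(outputs, ground_truth)) / len(outputs)

def drift_alert(baseline_acc, recent_outputs, recent_truth, tolerance=0.05):
    """Return (alert_fired, recent_accuracy). Tolerance is illustrative."""
    recent_acc = accuracy(recent_outputs, recent_truth)
    return (baseline_acc - recent_acc) > tolerance, recent_acc

baseline = 0.92  # accuracy measured on a holdout set at deployment
fired, acc = drift_alert(baseline,
                         ["yes", "no", "no", "yes"],
                         ["yes", "no", "yes", "yes"])
print(fired, acc)  # True 0.75
```

Real systems would sample and label outputs continuously rather than hand-code windows, but the shape is the same: a baseline, a rolling comparison, and an alert when the gap widens.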

Case study: Hallucinations in hospital transcription

Consider OpenAI’s Whisper, deployed in hospitals to transcribe millions of medical conversations. Research found it occasionally hallucinated—generating entire sentences during silences, sometimes violent or nonsensical—in about 1% of transcripts. In clinical settings, even one fabricated note carries serious risks.

This shows why observability must go beyond uptime dashboards. Teams must detect—and act on—content errors that may otherwise slip by unnoticed.

What this means for product teams

  1. Behavior-focused dashboards: Observability should surface hallucinations, unsupported claims, and policy violations alongside API errors. Datadog now offers hallucination detection and prompt injection monitoring in its observability suite.
  2. Continuous evaluation: Like regression testing, AI models need evolving test suites that reflect real-world prompts and track drift over time.
  3. Shared accountability: Observability is not just engineering’s job. Product, design, and trust & safety must all help define what “healthy AI behavior” looks like. Regular model reviews can institutionalize this check.
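The "continuous evaluation" point can be made concrete with a tiny evals-as-regression-tests harness. Everything here is a sketch under assumptions: the suite entries, the substring checks, and the stand-in model are all placeholders for real prompts, richer graders, and a real inference call.

```python
# A minimal "evals as regression tests" sketch: real-world prompts with
# checks, runnable against any callable model.

EVAL_SUITE = [
    {"prompt": "Refund policy for damaged items?", "must_contain": "refund"},
    {"prompt": "What is 2 + 2?",                   "must_contain": "4"},
]

def fake_model(prompt: str) -> str:
    # Stand-in for a real model call (assumption for this sketch).
    return "4" if "2 + 2" in prompt else "Damaged items qualify for a refund."

def run_evals(model, suite):
    """Return the prompts whose outputs failed their check; track this
    list across model versions to catch regressions and drift."""
    return [case["prompt"] for case in suite
            if case["must_contain"] not in model(case["prompt"])]

print(run_evals(fake_model, EVAL_SUITE))  # []
```

Like a software regression suite, the eval suite should grow every time a real-world failure is found, so the same hallucination cannot ship twice.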

Fixing Google SEO Indexing Issues with ClaudeCode

Most SEO problems look scarier in Google Search Console than they really are. Recently, I ran into one of those situations. Google flagged 17 indexing issues across my site:

  • 16 pages marked as “Page with redirect”
  • 1 page flagged as “Duplicate without user-selected canonical”

At first glance, this looked like something I’d need SEO expertise to fix. But a quick debugging session with ClaudeCode showed me it was manageable with a bit of structured troubleshooting.


The Background: A Redundant /blog Path

When I first set up my blog, URLs looked like this: blog.suryas.org/blog/[slug]

Later, I cleaned it up to a simpler and more readable format: blog.suryas.org/[slug]

I set up redirects from the old paths to the new ones. Everything seemed fine until Google Search Console started complaining.

Diagnosing the Issues with ClaudeCode

Here’s what we uncovered during the live coding session:

  1. Redirect Warnings – The old /blog URLs still exist in Google’s index, so Search Console reports them as “Page with redirect.”
  2. Canonical Tag Missing – None of my pages had a canonical tag, leaving Google unsure which version of the page was the “source of truth.”
  3. Sitemap Mismatch – The sitemap might still include redirecting URLs.

The Fix Strategy

ClaudeCode mapped out a straightforward plan that didn’t require deep SEO knowledge:

Add Canonical URLs

  • Add tags to all layouts (Layout.astro, BlogPost.astro)
  • Ensure each page points to its own preferred URL
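In an Astro layout, the canonical tag boils down to a few lines. This is a sketch, not the exact code from my session; it assumes `Astro.site` is set in the project config so each page can derive its own canonical URL:

```astro
---
// In Layout.astro / BlogPost.astro (sketch; assumes `site` is set in
// astro.config.mjs so Astro.site resolves to the production origin).
const canonicalURL = new URL(Astro.url.pathname, Astro.site);
---
<head>
  <link rel="canonical" href={canonicalURL} />
</head>
```

With this in the shared layouts, every page points Google at its own preferred URL, which resolves the "Duplicate without user-selected canonical" flag.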

Update Sitemap

  • Verify that only the cleaned-up URLs are included
  • Resubmit to Google for reindexing

Improve Redirects

  • Keep the wildcard redirect (/blog/* → /:splat 301)
  • Add explicit redirects for the most commonly accessed old URLs
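In a Netlify-style `_redirects` file, the two rules above look like this (the explicit slug is a made-up example; rules are matched top-down, so explicit entries must come before the wildcard):

```
# _redirects (sketch): first match wins, so list explicit rules
# before the catch-all wildcard.
/blog/my-popular-post    /my-popular-post    301
/blog/*                  /:splat             301
```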

Re-indexing Request

  • Submit the updated sitemap in Search Console
  • Trigger re-indexing for the flagged pages

As a bonus, I asked it to document the changes, the troubleshooting steps, and best practices for future reference, and it delivered.

Claude Code documenting SEO fixes and best practices

Final Reflection

The biggest surprise was how approachable this process turned out to be. I didn’t need to be an SEO expert; I just needed a clear diagnosis and a step-by-step plan.

With ClaudeCode guiding the session, what looked like a technical SEO rabbit hole became a clean set of coding tasks: add canonical tags, adjust the sitemap, confirm redirects, and ask Google to re-crawl.

Google hasn’t finished reprocessing the sitemap yet, but I expect the redirect warnings to disappear soon.

Why Empathy, Not IQ, Defines Success in the AI Age

Walk into any workplace today, and you’ll see AI embedded in daily tools and workflows. It drafts emails, generates reports, and even proposes design ideas. What it can’t do is sit across from someone, understand their frustration, and respond with care. That distinctly human capacity is becoming the true differentiator.

Carnegie Mellon professor Po-Shen Loh puts it bluntly (video): “The only sustainable trait in the age of AI is the ability to care about people and act on it.”

Machines can replicate knowledge and pattern recognition, but they cannot genuinely understand what it feels like to be human. That gap is where the next era of value will be created.

Empathy as a Competitive Advantage

For product managers and technologists, this is more than philosophy. Empathy drives the ability to design experiences people love, not just tolerate.

A recommendation system can suggest content, but only a team that understands user frustration can simplify onboarding. A chatbot can answer questions, but only a product leader attuned to customer anxieties will build trust into its design.

“Creativity isn’t unique anymore. Empathy and delighting others are what truly matter.”

The Role of Critical Thinking

Loh also warns that in a world overflowing with AI-generated content, the ability to question, validate, and synthesize ideas is critical.

“We must not let AI replace our thinking. We must train ourselves to think, critique, and analyze.”

For product leaders, that means not just interpreting data but challenging its assumptions. What problem are we solving, and for whom? Why now? These are not questions an algorithm can fully answer.

An Entrepreneurial Mindset

Finally, there’s the entrepreneurial stance—an openness to try, fail, and adapt quickly. Loh encourages constant questioning and self-critique, a habit that prevents complacency in the face of AI’s rapid progress.

For product managers, that translates into running small experiments, engaging with real users, and resisting the comfort of static roadmaps.

Takeaways for Product Leaders

  • Lead with empathy: Prioritize understanding over efficiency when shaping products.
  • Question everything: Treat AI outputs as inputs, not truths.
  • Experiment constantly: Build resilience by testing ideas in the market, not just in the lab.


What AI Does Well vs. What Humans Do Uniquely Well

| AI Strengths | Human Strengths |
| --- | --- |
| Processing large data sets | Empathy and care for others |
| Pattern recognition | Critical thinking and questioning assumptions |
| Automating repetitive tasks | Creativity tied to human meaning |
| Generating content at scale | Building trust and delight in products |
| Speed and efficiency | Entrepreneurial experimentation and judgment |


The lesson is clear. Success in the AI era isn’t about out-thinking the machine. It’s about leaning into the one trait it cannot replicate: the ability to care about people.