
Product-Led Growth Engineering in the AI Era

PLG is evolving into ALG — Agentic-Led Growth. Cursor hit $500M ARR in under 24 months. Lovable hit $100M ARR in 8 months. Per-seat pricing is dying. Per-task and per-outcome models are replacing it. Engineering teams are being asked to build for metrics that did not exist 18 months ago.

Abhishek Sharma · Head of Engg @ Fordel Studios
14 min read

Product-led growth (PLG) was built on a premise: let the product sell itself. Users sign up, use the product, get value, convert to paid. The engineering team's job was to make the product good enough to drive this loop — fast onboarding, clear aha moments, low friction on the path to value.

That premise is still valid. What has changed is the nature of the user. AI agents are now users of products. Not metaphorically: Cursor's users are developers who delegate work to an AI-native product, and Lovable's users are non-technical founders whose relationship with the product is fundamentally different from a traditional SaaS user's. Engineering for PLG in 2026 means designing for this new user type alongside the old one.

  • $500M ARR: Cursor, in under 24 months. One of the fastest SaaS growth trajectories ever recorded.
  • $100M ARR: Lovable, in 8 months. The fastest-ever SaaS product to $100M ARR at the time.
···

From PLG to ALG: Agentic-Led Growth

Agentic-Led Growth describes products where AI agents are both the primary tool users interact with and the delivery mechanism for value. The user does not "use the product" in a traditional sense — they describe what they want to an agent, and the agent does the work.

The implications for growth engineering are significant. Traditional PLG tracks activation events (user reached aha moment), engagement metrics (DAU, feature usage breadth), and expansion triggers (seat count, data volume). In an ALG product, these metrics are still relevant but the events that matter have shifted. "User completed a task using an agent" is more meaningful than "user clicked a feature button."

The activation metric for an AI-native product is often "first successful outcome" rather than "first session" or "first feature used." Engineering teams building growth infrastructure for these products need instrumentation that tracks outcome completion, not just feature interaction.

In agentic products, the activation metric is first successful outcome — not first login, not first feature click. Engineering teams that do not instrument for outcomes will misread their activation funnel.
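As a concrete sketch of what outcome-centric instrumentation can look like (event names and schema here are illustrative, not tied to any particular analytics SDK):

```python
import time
import uuid

def outcome_event(user_id, outcome_type, agent_task_id):
    """Build an outcome-completion event - the ALG analogue of the
    'aha moment'. Activation = the first such event per user."""
    return {
        "event": "outcome_completed",
        "user_id": user_id,
        "outcome_id": str(uuid.uuid4()),
        "outcome_type": outcome_type,    # e.g. "app_deployed", "pr_merged"
        "agent_task_id": agent_task_id,  # ties the outcome to the agent run
        "completed_at": time.time(),
    }

def first_outcome_at(events, user_id):
    """Activation time: the user's earliest completed outcome, or None."""
    times = [e["completed_at"] for e in events
             if e["user_id"] == user_id and e["event"] == "outcome_completed"]
    return min(times) if times else None
```

The key design choice is the `agent_task_id` link: it lets you answer "which agent runs actually produced value" rather than "which buttons were clicked."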
···

The Death of Per-Seat Pricing

Per-seat pricing assumes that value scales with the number of humans using the product. In a world where AI agents do the work, "seats" is the wrong unit. A team of 5 using an AI-heavy product may derive 50x the value of a team of 5 using a traditional tool — and a team of 2 using agents may do more than a team of 20 without them.

Two models are replacing per-seat pricing: WaaS (Work as a Service), where you pay per agent task completion, and RaaS (Results as a Service), where you pay for measurable outcomes such as revenue generated, costs saved, or tickets resolved.

| Pricing model | Unit | Alignment with value | Engineering complexity | Early adopters |
| --- | --- | --- | --- | --- |
| Per-seat | Human users | Low (agents break the unit) | Low | Legacy SaaS |
| Per-task (WaaS) | Agent task completions | Medium | Medium (task instrumentation) | Cursor, coding agents |
| Per-outcome (RaaS) | Business results | High | High (outcome attribution) | Early experiments |
| Per-token / per-compute | AI usage | Medium | Medium (usage tracking) | OpenAI, Anthropic, Replicate |

The engineering work for per-task pricing: task definition, task completion detection, task attribution, and usage reporting infrastructure. This is non-trivial. A task that starts as "user asked agent to do X" needs to be tracked through to completion, with idempotency (so a task does not get double-billed if the agent retries), rollback handling, and clear user-visible reporting.
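A minimal in-memory sketch of the idempotency and rollback pieces (a real system would persist this ledger and reconcile it with the billing provider):

```python
class TaskBillingLedger:
    """Minimal per-task billing ledger. Illustrative only: production
    systems need durable storage and reconciliation with billing."""

    def __init__(self):
        self._billed = {}  # task_id -> billed units

    def record_completion(self, task_id, units=1):
        """Idempotent: an agent retry that re-reports the same task_id
        does not double-bill. Returns True only on first record."""
        if task_id in self._billed:
            return False
        self._billed[task_id] = units
        return True

    def rollback(self, task_id):
        """Reverse a charge, e.g. when a task's output is later undone."""
        return self._billed.pop(task_id, None)

    def billable_units(self):
        """Total units to report to the billing system this period."""
        return sum(self._billed.values())
```

The task ID doubles as the idempotency key, so "agent retried three times" and "agent succeeded once" bill identically.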

···

The Inference Cost Problem Killing Freemium

Classic PLG depends on a generous free tier to drive top-of-funnel adoption. The economics work when the marginal cost of an additional free user is near zero. For AI-native products, the marginal cost of an additional free user is significant — every interaction involves LLM inference, which costs real money.

Lovable, Cursor, and similar products have all navigated this tension. Lovable moved from unlimited to credit-based free tiers. Cursor offers a limited number of fast requests per month free. The free tier still exists, but it is carefully bounded to keep the unit economics viable.
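The bounded free tier reduces, mechanically, to gating each request against a credit balance. A sketch, with wholly illustrative rates (neither Lovable's nor Cursor's actual numbers):

```python
def credits_needed(estimated_inference_cost_usd, cost_per_credit_usd=0.01):
    """Translate an estimated LLM inference cost into credits.
    The $0.01/credit rate is an illustrative assumption."""
    return estimated_inference_cost_usd / cost_per_credit_usd

def gate_free_request(balance, estimated_inference_cost_usd):
    """Allow a free-tier request only if the credit balance covers its
    estimated inference cost. Returns (allowed, new_balance)."""
    needed = credits_needed(estimated_inference_cost_usd)
    if balance < needed:
        return False, balance
    return True, balance - needed
```

Estimating cost before the request runs is the hard part in practice; a common fallback is to debit a flat per-request credit and true up against measured token usage afterwards.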

···

Engineering AI Agents as Users

AI agents are now users of products in a literal sense: API-consuming agents that sign up, activate, and expand exactly like human users — except they do it at machine speed with no friction. A well-designed PLG product that exposes a good API will have agentic users without deliberately targeting them.

The growth engineering implication: your activation and expansion funnels need to handle agents. This means machine-readable onboarding (API keys without email verification requirements that an agent cannot complete, webhook-based activation confirmation rather than click-confirmation), machine-readable documentation, and usage-based expansion that works when the "user" making 10,000 API calls is an agent rather than a human.
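A sketch of agent-completable onboarding: programmatic key issuance plus webhook-based activation. Every name here is hypothetical; the point is that no step requires a human to click a link in an email.

```python
import secrets

API_KEYS = {}  # key -> account metadata
WEBHOOKS = {}  # account_id -> callback URL

def provision_account(account_id, webhook_url):
    """Programmatic signup: issue an API key with no email-verification
    step an agent cannot complete, and register a webhook so activation
    is confirmed by event, not by click."""
    key = "sk_" + secrets.token_hex(16)
    API_KEYS[key] = {"account_id": account_id, "activated": False}
    WEBHOOKS[account_id] = webhook_url
    return key

def mark_activated(key):
    """Called on the account's first successful outcome. A real system
    would POST this payload to the registered webhook URL."""
    meta = API_KEYS[key]
    meta["activated"] = True
    return {
        "webhook": WEBHOOKS[meta["account_id"]],
        "event": "account.activated",
        "account_id": meta["account_id"],
    }
```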

Engineering PLG for AI-era products

01
Instrument for outcomes, not just features

Define the "aha moment" as a measurable outcome in your domain. Track it. Build your funnel around it. Feature clicks are noise; outcomes are signal.
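Building the funnel around outcomes can be as simple as changing which event counts as activation. A sketch (event shapes are illustrative):

```python
def activation_rate(events, activation_event="outcome_completed"):
    """Share of signed-up users who reached the outcome-based activation
    event. Counting 'feature_clicked' here instead would inflate the
    funnel with noise."""
    signed_up = {e["user_id"] for e in events if e["event"] == "signed_up"}
    activated = {e["user_id"] for e in events if e["event"] == activation_event}
    if not signed_up:
        return 0.0
    return len(signed_up & activated) / len(signed_up)
```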

02
Build usage-based billing infrastructure before you need it

Retrofitting per-task pricing onto a product designed for per-seat is painful. Stripe Billing, Orb, and Metronome are the tools. Build the instrumentation early so you have the data when you want to change the pricing model.
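"Build the instrumentation early" can mean something as small as recording metered usage events from day one, even while you still charge per seat. An illustrative in-memory version; in production you would feed these events into Stripe Billing, Orb, or Metronome:

```python
import time
from collections import defaultdict

class UsageMeter:
    """Record billable usage events now so a later switch to per-task
    pricing starts with historical data instead of a cold start."""

    def __init__(self):
        self._events = []

    def record(self, account_id, metric, quantity=1):
        """One metered event, e.g. metric='agent_tasks' or 'tokens'."""
        self._events.append({"account_id": account_id, "metric": metric,
                             "quantity": quantity, "ts": time.time()})

    def totals(self, account_id):
        """Per-metric totals for one account - roughly the aggregate
        shape usage-billing providers ingest."""
        out = defaultdict(float)
        for e in self._events:
            if e["account_id"] == account_id:
                out[e["metric"]] += e["quantity"]
        return dict(out)
```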

03
Design the free tier with a cost ceiling

Define a hard per-user cost ceiling for the free tier before launching it. Model the economics at 10K, 100K, and 1M free users. Adjust the free tier constraints to keep the model viable at each scale.
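The modeling exercise above fits in a few lines. All rates below are placeholder assumptions to show the shape of the calculation; substitute your measured activity share, task volume, and per-task inference cost:

```python
def free_tier_monthly_cost(n_free_users, active_share=0.3,
                           tasks_per_active_user=20,
                           inference_cost_per_task_usd=0.04):
    """Monthly inference spend on the free tier at a given scale.
    Every default here is an illustrative assumption."""
    active_users = n_free_users * active_share
    return active_users * tasks_per_active_user * inference_cost_per_task_usd

# Model the economics at the three scales named above.
for n in (10_000, 100_000, 1_000_000):
    print(f"{n:>9,} free users -> ${free_tier_monthly_cost(n):,.0f}/month")
```

If the 1M-user number breaks your budget, the free tier constraints (credits, rate limits, model tier) are the knobs that pull it back under the ceiling.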

04
Make your product agent-friendly

Comprehensive, machine-readable API documentation. Programmatic onboarding. Webhook-based event notifications. An agent that can fully use your product without human intervention is a distribution channel you get for free.

···

What 80% GenAI Embedding Means for PLG

Gartner's estimate that 80%+ of software vendors will embed GenAI in their products by 2026 means your competitive differentiation is increasingly not "do you have AI features" but "how well do your AI features deliver outcomes." PLG for commodity AI features (summarise, generate, rewrite) will not work — the features are table stakes. PLG for domain-specific AI that solves a problem no horizontal AI tool addresses is where the growth leverage lives.

PLG metrics to instrument in 2026
  • Time to first successful outcome (not time to first login)
  • Outcomes per active user per week (not feature clicks per session)
  • Expansion rate: how does outcome volume grow after activation?
  • Agent user conversion: what % of your API users are agents vs humans?
  • Unit economics per free user: actual inference cost per free-tier active user