Three Pillars of Developer Success - Reimagined for AI Agents

Written by
Tim Seager

Jeff Lawson, co-founder and former CEO of Twilio, has long been one of the sharpest voices on what makes a developer-focused business scale. Since developers use our products directly, we paid close attention to his recent conversation on 20VC. Jeff laid out his framework: three categories of developer companies that break through to hundreds of millions or billions of dollars in revenue. Here’s a recap of each pillar and how we see it changing with the advent of agentic AI.

The Three Categories of Breakout Developer Tools (Pre-AI), as Jeff described them on 20VC.

1) Business Development as a Service (BDaaS)
Developers can’t negotiate telco contracts, open merchant accounts, or cut deals with banks. But Twilio, Stripe, and others let them do so indirectly — giving engineers ‘backdoor’ access to business relationships via APIs. Once a product works, leadership pays the bill.
Examples: Twilio, Stripe
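
To make the "business relationships via APIs" point concrete, here is a minimal sketch using Twilio's Python SDK; the credentials and phone numbers are placeholders, not a real account, but the shape of the call is the whole pitch: a carrier relationship reduced to a few lines of code.

```python
# Minimal sketch: sending an SMS through Twilio without ever negotiating a telco contract.
# The account SID, auth token, and phone numbers below are placeholders.
from twilio.rest import Client

client = Client("ACXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX", "your_auth_token")

message = client.messages.create(
    to="+15558675309",      # recipient
    from_="+15552223333",   # a Twilio-provisioned number
    body="Carrier relationships, reduced to one API call.",
)
print(message.sid)
```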

2) CapEx as a Service
According to Jeff, historically, developers couldn’t drop $10M to build a data center. AWS flipped this, letting them provision compute/storage on a credit card. This turned huge capital projects into API calls.
Examples: AWS, Google Cloud
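
As a rough illustration of a capital project turned into an API call, here is a sketch using AWS's boto3 SDK; the AMI ID and region are placeholders, but this is essentially what "provisioning compute on a credit card" looks like to a developer.

```python
# Sketch: provisioning compute with an API call instead of building a data center.
# The AMI ID and region are placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder machine image
    InstanceType="t3.micro",
    MinCount=1,
    MaxCount=1,
)
print(response["Instances"][0]["InstanceId"])
```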

3) Algorithm as a Service (rare - we think less rare)
If an algorithm is impossibly hard to build, or in our view simply too arduous for developers to build themselves, they won’t reinvent it. If, however, the algorithm is not seen as very difficult or arduous to replicate, many developers will be tempted to reinvent it. The canonical case: Amazon’s [DynamoDB](https://aws.amazon.com/dynamodb/) as an infinitely scalable database. We would also point to highly successful vendors like Datadog or Incident.io: their algorithms are not impossibly hard, but they have put a great deal of thought and code into making them easy for their customers to use. More recently, model training at the scale of [OpenAI](https://openai.com/) or [Anthropic](https://www.anthropic.com/). Inference alone often commoditizes, but cutting-edge training recipes remain defensible.
Reference: 20VC episode transcript.

How We See Each Pillar Shifting with AI

1) BDaaS → Data/Model Access as a Service
Then: Carrier contracts, payments, identity networks.
Now: Safe, governed access to high-value AI resources such as proprietary data corpora, fine-tuned models, and licensed retrieval connectors. Developers can’t sign these agreements alone, so products that bundle legal rights, SLAs, and billing act as the new BDaaS.
Example: [Harvey AI](https://www.harvey.ai/) for legal research, where client data rights and compliance are core.

2) CapEx → GPU/Agent Fleet as a Service

Then: Rent servers via API.
Now: Rent GPU clusters, inference fleets, evaluation systems, or even agent orchestration. The hard part isn’t raw compute but governance, observability, and cost controls. Winners look like AI-native ‘cloud providers’ for inference and agent ops.
Examples: [Anyscale](https://www.anyscale.com/), [Modal](https://modal.com/).
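
For a feel of what "GPU fleet as a service" looks like to a developer, here is a rough sketch in the style of Modal's Python SDK; the app name, function body, and GPU type are illustrative, and a real deployment would load and serve an actual model.

```python
# Rough sketch: renting GPU capacity per function call instead of owning hardware.
# Names and GPU choice are illustrative.
import modal

app = modal.App("inference-sketch")

@app.function(gpu="A100")
def generate(prompt: str) -> str:
    # A real function would load a model onto the attached GPU and run inference here.
    return f"completion for: {prompt}"

@app.local_entrypoint()
def main():
    # Runs the function on remote GPU infrastructure, billed per use.
    print(generate.remote("hello world"))
```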

3) Algorithm → Frontier Training & Continual Learning as a Service
Then: Dynamo-class systems, impossible to rebuild.
Now: Proprietary training recipes, RLHF pipelines, and continual learning tuned to domain-specific data. While inference is increasingly commoditized, models trained with unique data/IP or that require a great deal of code and deployment effort will retain high switching costs.
Examples: OpenAI, Anthropic

New Considerations in the AI Era

1) Pricing pressure and switching costs


Developer tools like Cursor have seen rapid adoption, but because switching is easy (export/import your codebase) they may face commoditization risk. By contrast, platforms like Harvey face far less of that risk.

For example, Paul, Weiss partners with Harvey and embeds it deeply into its legal workflows, where switching costs are high because of data migration, confidentiality agreements, and retraining on proprietary corpora. We’re also seeing Forward Deployed Engineers (FDEs) gaining popularity with agentic AI companies to enable those types of integrations.

Harvey.ai is a fast-scaling legal AI platform focused on research, drafting, and workflow automation for law firms and in‑house teams. Public reporting and company disclosures indicate rapid customer growth, major strategic partnerships (notably LexisNexis), and substantial capital raises in 2025.

Harvey.ai and companies like it have high switching costs because customer firms embed their own best practices and guardrails directly into Harvey’s Workflow Builder, tailoring it to each customer and each team with their own unique processes. This deep customization, combined with firm-wide adoption and embedded improvement over time, makes migrating to another platform costly and disruptive. Any competitor would need to replicate not just the software, but the embedded legal expertise, consistency and efficiencies Harvey provides.

We see our own space, AI SRE and AI-powered debugging, in a similar way: high value with a high threshold to switch. Done right, these systems can be straightforward to deploy within a company or team, but each team co-creates its own processes with the AI SRE, and that co-creation eventually delivers great impact. Switching would require changing not just the software but the embedded observability expertise shared between the company and the AI SRE vendor.


Other high-switching-cost plays include:
- Palantir - deeply embedded in government/enterprise data.
- Databricks - where the data lakehouse architecture locks in workloads.
- Veeva - in life sciences (AI + regulatory compliance = sticky).

2) Enterprise readiness as moat

Security, auditability, and governance are perhaps the new differentiators. This mirrors AWS’s rise: it wasn’t just compute, but trust (SLAs, compliance, enterprise support) that made enterprises standardize on it.

We also see growing importance in ensuring AI agents are not black boxes. In an upcoming blog post, we discuss how emerging agentic AI systems, while powerful, risk undermining privacy, security, and competition if they demand root-level access to sensitive data and operate without transparency. Applied to observability, these risks translate into potential data leaks, cascading outages, and vendor lock-in when agents act autonomously without safeguards. At Relvy, we believe the solution is transparent, human-in-the-loop debugging agents that enforce clear boundaries, preserve data sovereignty, and strengthen reliability without black boxes.

3) Vibecoding 

This affects Jeff’s Algorithm pillar. As tools like Replit, Lovable, and Cursor empower trained developers and enable the untrained to code, software that was once hard to build becomes easier and faster for more people to build: new software can be completed more quickly, and more developers can attack harder problems.

Conclusion

Jeff came up with this framework years ago, but the fact that he recounted it so recently, together with our analysis above, suggests it still applies, perhaps with our AI additions: the breakout companies focused on developers either unlock new business relationships, make massive CapEx available via API, or deliver algorithms too hard to replicate. With AI, those categories evolve into data/model access, GPU/agent ops, and proprietary co-developed agent and company pipelines. But the extra layer today is economics: low-switching-cost tools may explode quickly but risk churn, while companies embedding into compliance-heavy, data-sensitive workflows may build lasting moats.

About Us 

At Relvy, we believe engineers—not black-box AI—should stay in control. Incidents are high-stakes, so every step must be transparent, explainable, and guided by the team on call.

We’re building AI-powered debugging that eliminates manual, time-consuming investigations, cutting resolution time while keeping humans in the loop. With Relvy, teams reduce failures, improve uptime, and trust their tools as much as their teammates.

Our mission: make reliability faster, smarter, and human-centered.