The Loop Dispatch

Issue 006 · March 14, 2026

Lead essay · Quality Engineering

The AI-Native QE Operating Model: Why Traditional QA Can't Keep Up

Most QA functions were built for a world where humans wrote all the code. That world is gone. Here's the operating model we've deployed across 30+ engagements to replace it.

By Ben Fellows · 8 min read

About the Author

Ben Fellows

The fundamental problem with traditional QA isn't headcount, tooling, or process. It's the operating model itself. QA was designed as a gate…

All Articles

Team & Culture · February 7, 2026

What We Look For When Hiring AI-Native SDETs

The SDET role is being redefined. Manual test execution is gone. What remains is automation engineering, infrastructure design, and the ability to operate AI-assisted workflows. Here's how we hire for it.

How readers use the writing

From idea to internal memo to org change.

The essays are most useful when they give you language and evidence for an argument you already wanted to make.

01
“I sent ‘Coverage Is a Vanity Metric’ to my CTO. Two weeks later we redefined our QA OKRs.”

— Sarah, QA Director at a Series-B fintech · ~50 engineers

Used: Coverage Is a Vanity Metric

02
“The AI-generated tests piece reframed our vendor review. We rewrote the evaluation rubric the same week.”

— Marcus, Head of Quality at a healthcare SaaS · 120 engineers

Used: Why Your AI-Generated Tests Are Worthless

03
“I forwarded the staffing-model essay to our head of engineering. It started a conversation we'd been avoiding.”

— Priya, QA Director at an e-commerce platform · 200+ engineers

Used: The Staffing Model Is Broken

Names + companies anonymized at the speakers' request.

Watch · From the desk

On the channel

Subscribe on YouTube · @benfellows-dev
Stop Doing AI “Factory Work” - Own Your Agentic Pipeline #agenticai #agentic #agenticcoding #coding

Apr 30, 2026


Stop treating AI like factory work. Rigid, assembly-line workflows break down with complex codebases. Owning your agentic pipeline means customizing every step and refreshing context each time—leading to better accuracy, flexibility, and scalability where it actually matters. #agenticai #agentic #agenticcoding #coding

Why “Agentic Factories” Don’t Work #agenticai #agentic #agenticcoding #coding #programming #code

Apr 30, 2026


Why “agentic factories” don’t actually work: trying to force one system to handle every codebase is like expecting one factory to build every type of car. It just doesn’t scale. Instead, the future is agentic pipelines: flexible, tailored workflows built around your specific repo, while still reusing powerful components like agents, prompts, and shared memory. Smarter systems aren’t universal; they’re purpose-built. #agenticai #agentic #agenticcoding #coding #programming #code

Stop Building God Agents: The 5 Agentic Pipelines Every Serious Codebase Needs

Apr 30, 2026


Most people trying to do agentic development are building what I call “God agents” — one giant system that tries to do everything. In my experience, that approach breaks down quickly. It hits context limits, becomes hard to reason about, and fails in inconsistent ways. The result is usually more time spent debugging the agent than actually building software.

This video is about a different approach. Instead of building one massive agent, I break down the five pipeline categories that I use across my codebases to make agentic development actually work at scale. These aren’t meant to be perfect or universal, but they’ve been a solid foundation for structuring real systems.

The five categories I walk through are: surface area pipelines, change type pipelines, failure mode pipelines, integration pipelines, and confidence pipelines. Each one exists for a reason.

As I’ve worked more with AI, one pattern has become clear. The failures are unpredictable at first, but over time they repeat. The same classes of mistakes show up again and again. These pipeline categories are designed to target those failure patterns directly, with loops and checks that are specific to the kinds of problems you’ve already seen.

The goal isn’t to create a single “correct” pipeline or a magic factory. It’s to build a system of smaller, focused pipelines that are tailored to your codebase, your architecture, and the ways AI tends to drift in your environment. If you pair this approach with something like policy as code, you end up with both structure and control, which is where agentic development starts to become much more reliable.

This video is meant as a starting point. These categories will evolve, and I’m still refining them as I see new patterns emerge. If you’re building with AI and trying to move beyond simple prompts into something more scalable, this should give you a useful framework to start from. Would love to hear how others are structuring their pipelines and what categories you’re seeing in your own systems.
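The video doesn't prescribe an implementation, but the idea of many small, category-scoped pipelines instead of one God agent can be sketched as a simple registry. This is a hypothetical illustration — the `PipelineRegistry` class and its step functions are not from the source, only the five category names are:

```python
from dataclasses import dataclass, field
from enum import Enum
from typing import Callable


class PipelineCategory(Enum):
    # The five categories named in the video.
    SURFACE_AREA = "surface_area"   # scoped to one part of the codebase
    CHANGE_TYPE = "change_type"     # scoped to a kind of change (refactor, feature)
    FAILURE_MODE = "failure_mode"   # targets a recurring class of AI mistake
    INTEGRATION = "integration"     # checks the seams between components
    CONFIDENCE = "confidence"       # final verification loops before merge


@dataclass
class PipelineRegistry:
    """Routes work to small, focused pipelines instead of one giant agent."""

    pipelines: dict = field(
        default_factory=lambda: {c: [] for c in PipelineCategory}
    )

    def register(self, category: PipelineCategory, step: Callable[[str], str]) -> None:
        # Each step is a focused check or transformation for that category.
        self.pipelines[category].append(step)

    def run(self, category: PipelineCategory, task: str) -> str:
        # Run only the loop that targets this category's known failure patterns.
        for step in self.pipelines[category]:
            task = step(task)
        return task


registry = PipelineRegistry()
# Failure-mode steps encode checks for mistakes you've already seen repeat.
registry.register(PipelineCategory.FAILURE_MODE, lambda t: t + " | lint-check")
registry.register(PipelineCategory.FAILURE_MODE, lambda t: t + " | regression-tests")

print(registry.run(PipelineCategory.FAILURE_MODE, "fix null handling"))
# prints: fix null handling | lint-check | regression-tests
```

The point of the sketch is the shape, not the steps: each category owns its own short loop, so a check added for one failure class never bloats the context of the others.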

Want the methodology behind the insights?

Every article is backed by a published methodology paper. Download the frameworks, run the assessments, and see the full system.

Template

90-Day QA Leverage Plan

Coming soon