The Loop Dispatch

Issue 006 · March 14, 2026

Lead essay · Quality Engineering

The AI-Native QE Operating Model: Why Traditional QA Can't Keep Up

Most QA functions were built for a world where humans wrote all the code. That world is gone. Here's the operating model we've deployed across 30+ engagements to replace it.

By Ben Fellows · 8 min read

About the Author

Ben Fellows

The fundamental problem with traditional QA isn't headcount, tooling, or process. It's the operating model itself. QA was designed as a gate…

All Articles

Team & Culture · February 7, 2026

What We Look For When Hiring AI-Native SDETs

The SDET role is being redefined. Manual test execution is gone. What remains is automation engineering, infrastructure design, and the ability to operate AI-assisted workflows. Here's how we hire for it.

How readers use the writing

From idea to internal memo to org change.

The essays are most useful when they give you language and evidence for an argument you already wanted to make.

01
“I sent ‘Coverage Is a Vanity Metric’ to my CTO. Two weeks later we redefined our QA OKRs.”

Sarah, QA Director · Series-B fintech · ~50 engineers

Used: Coverage Is a Vanity Metric

02
“The AI-generated tests piece reframed our vendor review. We rewrote the evaluation rubric the same week.”

Marcus, Head of Quality · Healthcare SaaS · 120 engineers

Used: Why Your AI-Generated Tests Are Worthless

03
“I forwarded the staffing-model essay to our head of engineering. It started a conversation we'd been avoiding.”

Priya, QA Director · E-commerce platform · 200+ engineers

Used: The Staffing Model Is Broken

Names and companies anonymized at the speakers' request.

Watch · From the desk

On the channel

Subscribe on YouTube · @benfellows-dev
Set Up Policy as Code in 1 Hour (Control AI Code Fast)

Apr 28, 2026


If you want to start controlling AI-generated code today, this is the simplest way I’ve found to do it. In the previous videos, I talked about why agentic development breaks at scale and introduced policy as code as a way to fix it. In this video, I show how to actually get started.

The idea is straightforward. Instead of relying only on prompts, rules, or memory to guide AI, you introduce a deterministic layer that scans your codebase and flags violations. Think of it as a much more comprehensive, fully customizable linting system that works alongside tools like Claude.

What surprised me is how easy it is to get a first version working. In this walkthrough, I show how you can go from zero to a basic policy-as-code setup in a very short amount of time. We start by generating a small set of rules, wiring up a simple scanner, and immediately running it against a real codebase. Even with a basic setup, you’ll start catching issues and inconsistencies right away.

This is not the full system I use in production. At scale, it turns into hundreds or even thousands of rules, with more advanced concepts like evidence layers, caching, and reporting. But the goal of this video is to show that you don’t need any of that to begin. If you’re using AI to write code and you’re starting to see drift, inconsistency, or quality issues over time, this is a practical way to start putting guardrails in place. What I’ve found is that as you add more rules, the amount of drift drops significantly and the system becomes more reliable without slowing development down.

If you haven’t watched the earlier videos in this series, I’d recommend starting with those for more context on why this approach exists and how it fits into a larger agentic workflow. If you try this yourself, I’d be interested to hear what kinds of rules you end up writing and what they catch in your codebase.
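The video's own scripts aren't reproduced here, but the setup it describes — a small rule set plus a scanner run over the codebase — can be sketched roughly like this in Python. The rule names, patterns, and messages below are illustrative assumptions for a Python codebase, not the rules from the video:

```python
import re
from pathlib import Path

# Illustrative rules only: each entry is (name, violation pattern, message).
# A real policy-as-code rule set is generated for a specific codebase.
RULES = [
    ("no-print", re.compile(r"\bprint\("), "use the logger instead of print()"),
    ("no-wildcard-import", re.compile(r"^from \S+ import \*", re.M), "avoid wildcard imports"),
    ("no-todo", re.compile(r"#\s*TODO\b"), "resolve TODOs before merging"),
]

def scan(root: str) -> list[tuple[str, int, str, str]]:
    """Scan every .py file under root; yield (file, line, rule, message) per violation."""
    violations = []
    for path in Path(root).rglob("*.py"):
        text = path.read_text(encoding="utf-8", errors="ignore")
        for name, pattern, message in RULES:
            for match in pattern.finditer(text):
                # Convert the match offset into a 1-based line number.
                line = text.count("\n", 0, match.start()) + 1
                violations.append((str(path), line, name, message))
    return violations

def main(root: str = ".") -> int:
    """Print violations and return a nonzero exit code when any are found."""
    found = scan(root)
    for file, line, rule, message in found:
        print(f"{file}:{line} [{rule}] {message}")
    return 1 if found else 0
```

Wiring `main()` into a CI step, a pre-commit hook, or an agent's post-edit check — and failing the run whenever it returns nonzero — is what makes the layer deterministic rather than advisory.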

I Tried Building with Agentic Factories. They Failed. Here’s What Worked Instead.

Apr 27, 2026


I spent time building with “agentic factories”: multi-agent pipelines that promise fully autonomous workflows. On paper, they look like the future. In practice, they broke down in ways that matter: reliability, coordination, and real-world constraints. In this video, I break down where these systems failed, why the failures are structural, and what actually worked instead in production. If you're building with AI agents, this will save you time (and probably some pain).

How We Use Policy as Code to Control Claude and AI Agents

Apr 24, 2026


Claude and other AI agents are incredibly good at writing code. The problem is they don’t stay consistent over time. In the first few iterations, everything looks great: output is fast, patterns are mostly correct, and it feels like you’ve unlocked a new level of development speed. But as the codebase grows, small inconsistencies start to compound. Patterns drift, structure degrades, and eventually the system becomes harder to maintain than it was before. That’s the problem this video is about.

In this walkthrough, I break down how we use a concept called policy as code to control AI-generated code in real systems. Instead of relying only on prompts, rules files, or memory, we introduce a deterministic layer that enforces how code is allowed to be written. Every time an agent makes changes, those changes are checked against a large set of rules. If something doesn’t match the expected patterns, it fails, and the agent has to fix it before moving forward.

This ends up acting like a much more comprehensive version of linting, but tailored specifically to your architecture, your patterns, and your codebase. The result is that we’re able to keep the speed benefits of AI while dramatically reducing drift and long-term degradation.

This video focuses on how the system works in practice: what kinds of rules we write, how they’re structured, and how they integrate into an agentic workflow using tools like Claude. If you’re experimenting with AI coding and running into issues with inconsistency or quality over time, this is one approach that has worked well for us. I’ll also be doing follow-up videos on how to implement this from scratch and how it fits into larger agentic pipeline systems. If you’ve tried something similar or have different approaches to controlling AI-generated code, I’d be interested to hear about it.
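The enforcement loop the video describes — an agent proposes changes, a deterministic checker rejects violations, and the agent revises until the policy passes — can be sketched along these lines. This is a minimal reading of the idea, not the production system from the video; the `Rule` shape, the example rules, and the `enforce` retry loop are all illustrative assumptions:

```python
import re
from dataclasses import dataclass
from typing import Callable

@dataclass
class Rule:
    name: str
    pattern: re.Pattern  # regex that matches a violation
    message: str         # feedback the agent sees when it must revise

# Illustrative rules; the video describes sets of hundreds, tailored per codebase
# (a real architectural rule would also scope itself by file path or layer).
RULES = [
    Rule("no-bare-except", re.compile(r"except\s*:"), "catch specific exceptions"),
    Rule("no-eval", re.compile(r"\beval\("), "eval() is banned; parse explicitly"),
]

def check(files: dict[str, str]) -> list[str]:
    """Deterministically check proposed file contents; return violation messages."""
    return [
        f"{path}: [{rule.name}] {rule.message}"
        for path, text in files.items()
        for rule in RULES
        if rule.pattern.search(text)
    ]

def enforce(agent: Callable[[list[str]], dict[str, str]], max_rounds: int = 3) -> dict[str, str]:
    """Re-prompt the agent with its violations until the policy passes."""
    violations: list[str] = []
    for _ in range(max_rounds):
        files = agent(violations)  # the agent sees prior violations and revises
        violations = check(files)
        if not violations:
            return files           # policy satisfied: the change may proceed
    raise RuntimeError(f"policy still failing after {max_rounds} rounds: {violations}")
```

The key property is that `check` is deterministic: the same files always produce the same verdict, so the agent cannot talk its way past the policy — it can only produce code that satisfies it.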

Want the methodology behind the insights?

Every article is backed by a published methodology paper. Download the frameworks, run the assessments, and see the full system.

Template

90-Day QA Leverage Plan