All workshops
WS-001 · Now enrolling · 9 seats remaining

Doing More With Less in QA

A one-day course for QA leaders who need to reduce regression drag, use AI wisely, and prove quality value in 90 days.

Format

Live · Virtual · Cohort

Duration

1 day

Cadence

Last Tuesday of the month

Refund

Full, up to 7 days out

On reservation

Reserve today: instant access to the 90-Day QA Leverage Plan and Boss-Justification Memo templates.

Refund & transfer

Full refund up to 7 days out. Seat is transferable to a teammate.

Instructor

Led by Ben Fellows. 30+ Loop client engagements building AI-Native QE.

The promise

By the end of one day you'll have a practical 90-day plan to reduce low-leverage QA work, identify where AI can actually help, and reposition QA around quality value instead of test execution.

Who it's for

QA Directors, QA Managers, and Heads of Quality whose teams are shrinking, whose AI mandate is unclear, and who need a defensible plan their boss can read on one page.

Buyer state: “My team is shrinking, leadership keeps asking about AI, our automation is flaky, and I need to prove QA still matters.”

The day, hour by hour

The full agenda. Six sessions, two breaks, lunch, and a close.

QA teams do more with less by increasing their leverage. Not by running more tests. By the end of the day you'll know what access you need, what engineering should own, where tests should live, how automation changes with AI, how to measure output and quality, and how to explain the new QA operating model to leadership.

9:00 – 9:30

Opening: The QA Leverage Shift

Frame the day around high-leverage vs low-leverage QA work. Sort your current QA activities into what creates durable leverage, what's necessary but non-compounding, what should be automated, and what should move to engineering or stop entirely.

9:30 – 10:30

Working session

Session 1 · Setting QA Up for Success in an AI World

Show that AI only helps if the QA team has the right access, permissions, process, and engineering relationship.

A QA team without engineering access uses AI for surface-level work. A QA team with engineering access uses AI to change the quality system. We walk through every access lever. Code, test IDs, PRs, API contracts, DB state, CI failures, acceptance criteria. And what changes when each is unlocked.

Live exercise · QA Access & Permissions Audit

Score your team across 10 access dimensions. Identify the single biggest unlock that would move your team from low-leverage to high-leverage in 30 days.

What you walk out with

  • QA Access & Permissions Scorecard
  • Test ID Access Request Template
  • “What QA Needs From Engineering” Checklist

10:30 – 10:45

Break

10:45 – 12:00

Working session

Session 2 · QA Operations Maturity Ranking

Give you a ranking system for what 'good QA operations' look like now. Especially when QA is expected to operate closer to engineering.

Walk through the 5-level maturity model: Reactive → Managed → Technical → Quality Engineering → AI-Leveraged Quality Operations. Self-assess across access, technical capability, automation ownership, engineering relationship, AI usage, metrics, output quality, and release confidence. Identify the next level and the single operational constraint blocking you from reaching it.

Live exercise · QA Ops Maturity Self-Assessment

Score your team across 8 dimensions. Name the next level you need to reach and the first constraint blocking you.

What you walk out with

  • QA Operations Maturity Scorecard
  • Current-State / Next-Level Gap Worksheet
  • QA Operating Model Improvement Backlog

12:00 – 1:00

Lunch

1:00 – 2:15

Working session

Session 3 · Redefining White-Box and Black-Box Testing

Move past 'manual vs automated' and toward 'where should this quality signal actually live?'

The full quality signal map: unit, integration, API, contract, E2E, visual, observability, logs, DB validation, production monitoring, AI-assisted analysis. For each important product risk, answer: what's the cheapest, fastest, most reliable layer that catches it?
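
The "cheapest layer that catches it" decision can be sketched as a toy rule set. This is an illustrative sketch only, not workshop material; the `cheapest_layer` helper and the risk attributes are invented to show the shape of the reasoning:

```python
# Illustrative sketch: a toy version of the "cheapest, fastest, most
# reliable layer that catches it" decision. Attribute names and the
# ordering of checks are assumptions, not part of the course.

def cheapest_layer(risk: dict) -> str:
    """Pick the lowest (cheapest, fastest) test layer that can catch a risk."""
    if risk.get("pure_logic"):          # no I/O, no rendering: a unit test suffices
        return "unit"
    if risk.get("crosses_service"):     # service boundary: contract/API beats E2E
        return "contract/API"
    if risk.get("visual"):              # layout/rendering: visual regression
        return "visual"
    if risk.get("data_integrity"):      # persistence rules: DB validation
        return "DB validation"
    return "E2E"                        # full user journey only as a last resort

risks = [
    {"name": "discount rounding",  "pure_logic": True},
    {"name": "billing API schema", "crosses_service": True},
    {"name": "checkout journey"},
]
for r in risks:
    print(r["name"], "->", cheapest_layer(r))
```

The point of the ordering: each branch asks whether a lower, cheaper layer can already observe the failure before escalating to the expensive end-to-end layer.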

Live exercise · Test Layer Mapping

Pick 3–5 important risks. For each one, name the current test method, the better layer, the right owner, and why.

What you walk out with

  • White-Box / Black-Box Testing Map
  • Test Layer Decision Tree
  • Unit vs Integration vs API vs E2E Cheat Sheet
  • “Move This Test Lower” Worksheet

2:15 – 3:15

Working session

Session 4 · Automation Strategy After AI

Define what's actually changed in automation now that AI can generate starting points, scripts, analysis, and test scaffolds.

AI lowers the cost of first-draft tests, refactoring, Playwright coverage, test data, failure analysis, code explanation, and script maintenance. It does not remove the need for good selectors, test IDs, stable environments, clear ownership, acceptance criteria, test architecture, or review standards. The new question isn't 'What should we automate first?' It's 'What's slowing us down from automating everything obvious?'
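
The "good selectors and review standards" point can be made concrete with a small review check. This is a hypothetical sketch, assuming a team standard of test-ID locators; the brittle patterns and verdict strings are invented for illustration, not an official Playwright rule set:

```python
import re

# Illustrative sketch: a review-standard check that flags brittle
# selectors in generated (e.g. AI-drafted) Playwright code. The
# patterns below are examples of a team convention, not a standard.

BRITTLE = [
    re.compile(r"nth-child"),        # position-based: breaks on reorders
    re.compile(r"//\w+\[\d+\]"),     # indexed XPath: same problem
    re.compile(r"\.css-[a-z0-9]+"),  # hashed utility classes: churn on rebuild
]
STABLE = re.compile(r"getByTestId|data-testid")

def review_selector(line: str) -> str:
    """Classify one locator line against the team's selector standard."""
    if any(p.search(line) for p in BRITTLE):
        return "reject: brittle selector"
    if STABLE.search(line):
        return "pass"
    return "flag: needs human review"

print(review_selector("page.getByTestId('checkout-submit')"))   # pass
print(review_selector("page.locator('li:nth-child(3) > a')"))   # reject: brittle selector
```

A gate like this is what keeps cheap AI-generated coverage from becoming expensive AI-generated maintenance.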

Live exercise · Automation Constraint Diagnosis

For each manual flow you should have automated: name the blocker (access / skill / process / architecture / ownership / time), whether AI can generate the starting point, who owns the final version, and the quality bar it must clear.

What you walk out with

  • AI Automation Strategy Canvas
  • Automation Constraint Diagnostic
  • AI Test Generation Prompt Pack
  • Playwright Review Checklist
  • Test ID Implementation Checklist

3:15 – 3:30

Break

3:30 – 4:30

Working session

Session 5 · Measuring Output and Quality in an AI-Enabled QA Team

AI makes output easier to fake. Teach managers how to measure whether QA and engineering are actually producing more high-quality work. Not just more artifacts.

Output metrics: meaningful tests added, manual flows converted, endpoints covered, flaky tests fixed permanently, regression hours removed, AI-drafts merged. Quality metrics: tests pass selector standards, coverage gaps reduced, tests fail for the right reasons, defects caught earlier, escape rate down. Plus the weekly quality gate that holds the line.
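
One way to read that pairing: an output metric only counts when its quality counterpart holds up. A hypothetical `weekly_gate` sketch, with invented metric names and an assumed 80% bar, just to show the mechanic:

```python
# Illustrative sketch: a weekly gate that refuses to credit raw output
# unless the paired quality signal clears the bar. Metric names, the
# pairings, and the 0.8 threshold are invented for illustration.

PAIRS = {
    "tests_added":      "selector_standard_pass_rate",
    "flows_automated":  "fails_for_right_reasons_rate",
    "ai_drafts_merged": "review_approval_rate",
}

def weekly_gate(metrics: dict) -> list[str]:
    """Return output metrics whose paired quality metric is below the bar."""
    flagged = []
    for output, quality in PAIRS.items():
        if metrics.get(output, 0) > 0 and metrics.get(quality, 1.0) < 0.8:
            flagged.append(f"{output} looks like volume, not value ({quality} low)")
    return flagged

report = weekly_gate({
    "tests_added": 40,
    "selector_standard_pass_rate": 0.55,   # lots of tests, weak selectors
    "ai_drafts_merged": 12,
    "review_approval_rate": 0.9,
})
print(report)  # flags tests_added only
```

The design choice worth copying is the pairing itself: nothing gets reported up without the metric that would expose it as theater.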

Live exercise · Output vs Quality Metrics Design

Define one output metric, one quality metric, one leading indicator, one executive narrative, and one weekly review habit your team will run starting Monday.

What you walk out with

  • QA Output & Quality Metrics Menu
  • Weekly AI/Automation Quality Review Template
  • Playwright Quality Checklist
  • Coverage Gap Review Template
  • Manager's Guide to Spotting Fake AI Productivity

4:30 – 5:15

Working session

Session 6 · Ownership, Buyers, and the 90-Day Narrative

Turn the day into an internal plan that leadership understands.

Four categories for the final plan: access / permissions, process, output, quality. For each, define: what QA owns, what engineering owns, what the economic and technical buyers care about, what the executive narrative is, and what the 30/60/90 changes are.

Live exercise · 90-Day QA Leverage Plan

Write the success metric, economic and technical buyers, current constraint, access request, process change, automation change, AI usage change, output and quality metrics, and the executive narrative for your team.

What you walk out with

  • 90-Day QA Leverage Plan
  • Boss Justification Memo
  • Economic Buyer / Technical Buyer Worksheet
  • QA vs Engineering Ownership Charter
  • Executive Narrative Builder

5:15 – 5:30

Close: What Changes Monday?

Each attendee picks one access request to make, one low-leverage activity to stop or reduce, one automation constraint to diagnose, one AI workflow to pilot, and one metric to start reporting. With deadlines.

Why each session matters

The arguments behind each session.

01

Setting QA Up for Success in an AI World

Access, permissions, and the engineering relationship that determines everything.

AI only helps if the QA team has the right access. We walk through every lever. Code, test IDs, PRs, API contracts, DB state, CI failures, acceptance criteria. And what changes when each is unlocked. A QA team without engineering access uses AI for surface-level work. A QA team with access uses AI to change the quality system.

02

The QA Operations Maturity Ranking

Where your team is, and the next operational constraint blocking you.

The 5-level maturity model: Reactive → Managed → Technical → Quality Engineering → AI-Leveraged Quality Operations. Self-assess across access, technical capability, automation ownership, engineering relationship, AI usage, metrics, output quality, and release confidence. Walk out knowing which level you're at and which constraint is blocking the jump.

03

Redefining White-Box and Black-Box Testing

The full quality signal map and where each risk should actually live.

Move past 'manual vs. automated.' The new question is: what's the cheapest, fastest, most reliable layer that catches this risk? We map important product risks across unit, integration, API, contract, E2E, visual, observability, logs, DB validation, and AI-assisted analysis. Then assign owners and move tests lower in the stack where the economics support it.

04

Automation Strategy After AI

What's actually changed. And what still requires the boring engineering.

AI lowers the cost of first-draft tests, refactoring, Playwright coverage, test data, failure analysis, and script maintenance. It does not remove the need for good selectors, test IDs, stable environments, or review standards. The new question isn't 'What should we automate first?' It's 'What's slowing us down from automating everything obvious?'

05

Measuring Output and Quality in an AI-Enabled QA Team

AI makes output easier to fake. Here's how managers tell the difference.

Output metrics: meaningful tests added, manual flows converted, endpoints covered, flaky tests fixed, regression hours removed. Quality metrics: tests pass selector standards, coverage gaps reduced, tests fail for the right reasons, defects caught earlier. Plus the weekly quality gate that catches AI productivity theater before it ships.

06

Ownership, Buyers, and the 90-Day Narrative

Turn the day into an internal plan that leadership signs off on.

Define what QA owns, what engineering owns, what the economic and technical buyers need to hear, and what the 30/60/90 changes are. You leave with a 90-day plan, a boss-justification memo, and an executive narrative built around your team's specifics. Ready to present at the next leadership offsite.

Deliverables

You'll walk away with all of this.

  • QA Access & Permissions Scorecard
  • Test ID Access Request Template
  • QA Operations Maturity Scorecard
  • Current-State / Next-Level Gap Worksheet
  • Test Layer Decision Tree
  • White-Box / Black-Box Testing Map
  • AI Automation Strategy Canvas
  • Automation Constraint Diagnostic
  • QA Output & Quality Metrics Menu
  • Manager's Guide to Spotting Fake AI Productivity
  • 90-Day QA Leverage Plan
  • Boss Justification Memo
  • QA vs. Engineering Ownership Charter
  • Executive Narrative Builder

The boss-approval frame

How to get this approved.

“This course will help me identify where QA is spending time on low-value work, where AI can actually create leverage, and how we can reduce regression effort over the next 90 days. If it helps us remove a few hours of recurring work each week or avoid one bad tooling decision, it pays for itself.” At $1,000, the ROI bar is low. Your boss only needs to believe you'll come back with a better plan.

The offer ladder

Four tiers. One question each.

Pick by what you're trying to answer this quarter. Not by what tier looks "best." The depth of change escalates with the question. The entry course is publicly priced; higher tiers are scoped per engagement.

  1. You're here
    01 · $1,000 / seat

    “Teach me the model.”

    Doing More With Less in QA

    Duration
    1 day
    Scope
    Individual leader
    ROI by
    90-day plan, same week

    You leave with the framework, the worksheets, and a 90-day plan you can hand to your boss on Monday.

    This page
  2. 02 · Scoped per engagement

    “Apply the model to my current QA team.”

    QA Leverage Review

    Duration
    1 day · private
    Scope
    QA team only
    ROI by
    Top-5 moves named the same day

    An outside diagnosis built around your team. You leave with the top-5 leverage opportunities, scored, and a 90-day plan ready for leadership.

    See this tier →
  3. 03 · Scoped per engagement

    “Redesign our company-wide quality strategy.”

    Quality Strategy & Leadership Alignment

    Duration
    3–6 weeks
    Scope
    QA + engineering + product + executives
    ROI by
    Cross-functional ownership reset in week 4

    Cross-functional alignment, ownership clarity, and a 90-day implementation roadmap. Backed by 3 follow-up reviews so the strategy actually ships.

    See this tier →
  4. 04 · Scoped per engagement

    “Lead the transformation.”

    Quality Transformation Sprint

    Duration
    6–10 weeks
    Scope
    Org-wide implementation
    ROI by
    Visible ROI by week 8

    Quality intelligence dashboard, AI/automation pilot, manager operating cadence, and 180-day roadmap. Built into how the team actually works, not delivered as a deck.

    See this tier →

Most clients move up the ladder one tier at a time. Skipping tiers works only when the depth of change you need is obvious from the start.

For attendees · Memo MEM-DOIN

Convince your boss this is worth the seat.

The one-page justification memo every cohort attendee gets. Designed to survive a 90-second skim by a busy CTO or VP of Engineering.

Memo · 1 page · markdown + PDF

For: Engineering leadership

What this is

A one-day live cohort that helps me leave with a 90-day QA plan we can defend at the next board update. Focused on reducing low-leverage testing, applying AI where it actually compounds, and turning QA into measurable business value.

The math

  • Cost of a seat: $1,000. Roughly half a day of fully-loaded engineering time.
  • Typical leverage gain: a single regression-flow elimination in the first 30 days reclaims 4–8 engineering-hours per sprint, every sprint.
  • Avoided cost: one bad AI testing tool decision easily costs $50k–$150k. The course frames the readiness audit before any vendor signature.
  • Net effect: the course pays for itself if it removes a few hours of recurring work each week or prevents one bad tooling decision.
  • ROI bar is intentionally low. Your only required belief is that I'll come back with a better plan than I have today.

Five things I'll bring back

  1. A QA Access & Permissions Scorecard naming the unlocks we need from engineering.
  2. Our team's QA Operations Maturity ranking, with the next operational constraint identified.
  3. A Test Layer Decision Tree applied to our top product risks. What should move where, and who should own it.
  4. An Automation Constraint Diagnostic + AI Test Generation prompt pack for our specific stack.
  5. A 90-day QA Leverage Plan with our metrics, ownership model, and executive narrative. Ready to present at the next leadership offsite.

Email template

Subject: Approval request: Doing More With Less in QA cohort ($1,000)

Hi [boss],

I'd like approval to attend Doing More With Less in QA, a one-day live cohort run by Loop. The seat is $1,000.

Why this is a good use of my time:

• It's a working session, not a talk. We finish with a 90-day plan applied to our team. Not slideware.
• The curriculum directly targets the biggest leverage gaps in our QA operating model: access and permissions, the maturity ranking, where tests should actually live, what's changed in automation now that AI is in the mix, how to measure output vs quality without faking it, and how to pitch the operating-model change to our leadership.
• I'll come back with five concrete artifacts I can act on next sprint:
    1. A QA Access & Permissions Scorecard naming the unlocks I need from engineering
    2. Our team's QA Operations Maturity ranking with the next operational constraint identified
    3. A Test Layer Decision Tree applied to our top product risks
    4. An Automation Constraint Diagnostic + AI prompt pack for our stack
    5. A 90-day QA Leverage Plan + boss-justification memo for our specific situation

The ROI math:

• If the audit reframes our test-layer strategy and we move even a few flows lower in the stack, we reclaim 4–8 engineering-hours per sprint, every sprint.
• If the AI-productivity manager's guide prevents us from approving one bad AI testing tool purchase, that's $50k–$150k of avoided cost.
• At $1,000 per seat, the course pays for itself if any one of those happens.

I'll share the artifacts with you within a week of attending and propose the changes I think are highest-leverage for the team.

Thanks for considering it.

[your name]

Cohort details: https://workwithloop.com/workshops/doing-more-with-less-in-qa
Template

90-Day QA Leverage Plan

Coming soon