Half-day live workshop • February 11, 9 AM PST • Limited to 40 developers • $375
🎉 25% off — Sale ends Feb 5! Only 5 days left.
The Ralph technique runs AI agents in loops with structured PRDs and feedback mechanisms. It's real, it's working in production, and you'll build it today.
Geoffrey Huntley figured out the pattern. Matt Pocock validated it with people shipping code. The technique has a name—Ralph (yeah, like Ralph Wiggum from The Simpsons, because of course it does)—and developers are using it right now to clear their GitHub backlogs while they sleep.
You're going to learn exactly how to build it.
Interactive, hands-on, real code running on YOUR backlog by 1 PM.
Here's the thing about AI coding agents right now
You've tried them. Claude, ChatGPT, Cursor, whatever.
You ask for Feature A. You get half a solution. You manually fix it. You ask for Feature B. It breaks Feature A. You're back to debugging. The cycle never ends.
You're still doing the engineer's job. The AI is just doing grunt work.
But here's what's actually happening right now, today, in production:
Developers are setting up agents that run in loops. They point them at GitHub backlogs. The agents execute tasks—real tasks, not toy examples—and create working PRs. Without constant human intervention. Without hallucinating broken code into existence.
This isn't theory. This is happening.
The question is: Are you going to keep manually prompting AI and hoping it works? Or are you going to learn the systematic technique that actually makes this autonomous?
Why this works now (and didn't before)
Two things changed:
1. The models got good enough
Opus 4.5 and GPT 5.2 can execute complex tasks if you structure them right. They're not perfect, but they're finally smart enough to work in loops with proper feedback.
2. The technique matured
This isn't about better prompting. It's about:
- Running agents in loops (not one-shot prompts)
- Structuring PRDs so agents understand the task breakdown
- Building feedback mechanisms (types, tests, browser validation) so agents catch their own mistakes
- Letting the loop run until the agent self-corrects instead of you fixing everything manually
When you combine capable models with this systematic approach, you get something that actually looks like autonomous execution.
Not magic. Not perfect. But real, usable leverage.
How we'll learn this: together, live, step-by-step
You'll be:
- Debugging real code on screen with Matt. When something breaks (and it will), we'll figure it out together. Live.
- Hooking up Ralph directly to your tools. You'll connect the loop to a GitHub backlog and see real PRs getting created.
- Asking your questions in real-time and getting answers immediately. Not a forum post you check in 3 days. Right there, right then.
- Working in groups. You'll be mob programming in groups of 2-3, bouncing ideas off each other and learning collaboratively.
By 1 PM, you'll have a Ralph setup that works for YOU.
A blog post can explain concepts. A video can show you a demo. But neither can debug YOUR specific setup when your bash loop fails, or answer "what happens if I structure my PRD THIS way?" immediately, or show you how other developers in the workshop are solving the same problems you're hitting.
That's what you're paying for. Immediate answers to your specific problems.
What you're going to learn (and build)
This is a half-day workshop. Four hours. Each module gives you something you can implement immediately.
Section 1: Getting Started
Your own bash loop, not vendor plugins
The Anthropic plugin doesn't get the best out of Ralph. You're going to run your own bash loop instead.
You'll build a working loop on your machine that gives you complete ownership. Works with Claude Code, OpenCode, Codex, or any coding agent CLI you choose. Your infrastructure, your control, your pricing.
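Here's a minimal sketch of the shape you'll end up with (illustrative only; `prompt.md` and the `npm test` gate are assumptions about your setup, and we'll build the real version together):

```bash
#!/usr/bin/env bash
# Minimal Ralph-style loop (sketch). prompt.md holds the current PRD/task.
AGENT='claude -p'   # Claude Code in non-interactive mode; swap in your CLI of choice

while true; do
  $AGENT "$(cat prompt.md)"   # one agent pass over the current task
  if npm test; then           # feedback gate: did the agent self-correct?
    echo "tests green, stopping the loop"
    break
  fi
  echo "tests red, looping for another pass"
done
```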
We'll walk through the exact setup, debug it live if it breaks, and make sure you've got a loop running before we move on.
What this unlocks: Everything else. This is your foundation—once it's running, you own the infrastructure and the loop.
Section 2: Feedback Loops
How agents catch their own mistakes
Here's the secret: Agents are only as clever as the tools they use to interface with the world. Without feedback, they're coding blind. With the right tools, they can feel their way to working code.
For backend: We teach Ralph to do TDD. Red-Green-Refactor is the perfect feedback loop. Write a failing test (Red). Write code until it passes (Green). Clean up (Refactor). The test gives the agent eyes. But tests can lie—you'll learn how to structure PRDs so agents write tests you can actually trust.
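To make that concrete, here's roughly how the red/green gate can feed the loop (a sketch; `prompt.md`, the log path, and the `npm test` command are assumptions about your setup):

```bash
# Red-Green gate (sketch): run the suite, feed failures back to the agent.
if npm test > /tmp/ralph-tests.log 2>&1; then
  echo "GREEN: suite passes, agent can refactor or stop"
else
  echo "RED: appending failures so the next pass can self-correct"
  {
    echo '## Latest test failures'
    cat /tmp/ralph-tests.log
  } >> prompt.md
fi
```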
For frontend: We hook Ralph up to an MCP server that can query the actual UI. The agent can see what's on screen, check if buttons exist, verify layouts rendered correctly. It's not guessing—it's looking.
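One possible wiring, assuming Claude Code and the Playwright MCP server (other CLIs and MCP servers work too; check your tool's docs for the exact command):

```bash
# Register a browser-capable MCP server with Claude Code (illustrative).
claude mcp add playwright -- npx @playwright/mcp@latest
```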
The agent runs, gets feedback, tries again, self-corrects. That's the loop.
What this unlocks: Autonomy. Now the agent doesn't just execute—it validates, catches mistakes, and self-corrects.
Section 3: Hooking Up Ralph To Your Backlog
From GitHub issues to executed PRs
Wire an agent to your GitHub backlog so it executes tasks without you.
You'll structure your backlog so agents can parse it. Turn issues into agent-executable tasks using the specific pattern that works. Handle agent state and failures gracefully. Review PRs from agents (what to look for, what to ignore, what breaks most often).
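The plumbing looks roughly like this (a sketch using the gh CLI and jq; the `ralph` label and `prompt.md` are placeholders, not the exact pattern from the workshop):

```bash
# Pull the next agent-ready issue and hand it to the loop (sketch).
# Assumes the gh CLI is authenticated and agent-ready issues carry a label.
issue_json=$(gh issue list --label ralph --state open --limit 1 --json number,body)
number=$(echo "$issue_json" | jq -r '.[0].number')

echo "$issue_json" | jq -r '.[0].body' > prompt.md   # the issue body becomes the task
git checkout -b "ralph/issue-$number"
# ...run the Section 1 loop against prompt.md; commit and push its work...
gh pr create --fill --title "Ralph: fix #$number"
```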
This is where you'll start seeing actual PRs getting created from your real issues.
What this unlocks: Scale. Now you're not running the loop manually—it's connected to your backlog.
Section 4: Making PRDs Ralph Loves
The skill that makes or breaks agent execution
Most people can't write PRDs that agents execute correctly. Too vague? Agent guesses wrong. Too detailed? Agent misinterprets specifics.
Here's the framework you'll learn:
The 3-Layer PRD Structure:
- Context layer: What the agent needs to understand about the system
- Task layer: The specific change broken into atomic steps
- Validation layer: How the agent knows it succeeded
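As a starting point, the skeleton looks something like this (headings are illustrative; you'll adapt them to your codebase):

```markdown
# PRD: <feature name>

## Context
What the agent needs to know: relevant files, existing patterns, constraints.

## Tasks
1. One atomic, verifiable step.
2. The next step, with any dependency on step 1 stated explicitly.

## Validation
- The command to run (e.g. the test suite) and the exact pass condition.
- What "done" looks like in observable terms.
```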
Plus the interview technique: You'll learn how to get the agent to help you build better PRDs by asking it the right questions about your task.
By the end, you'll have written PRDs that agents can actually execute—not theory, actual working examples.
Common mistakes that make agents hallucinate or fail:
- Being too abstract about "what should happen"
- Not specifying the validation criteria
- Forgetting to mention dependencies between tasks
- Using your internal jargon the agent doesn't understand
We'll debug these live.
What this unlocks: Production. Now your agent doesn't just run—it executes the RIGHT thing and ships real features.
Who this is for (and who it's not)
This is for you if:
- You already write code (JavaScript, TypeScript, Node, whatever—you're comfortable with code)
- You've used AI coding assistants before and know the frustration of babysitting them
- You want agents that actually execute tasks, not just suggest half-solutions
- You're willing to spend 4 hours to learn a technique that could save you 10+ hours a week
- You care about ROI on your time, not hype
This is NOT for you if:
- You're new to coding (this assumes you can read code and debug via logs)
- You're looking for a "no-code solution" (you're going to write and run code)
- You just want to watch a demo and not implement anything yourself
- You think AI will magically solve all your problems without you learning how it works
We're teaching practitioners who ship. If that's you, you're in the right place.
Smart concerns, not excuses
"Will this work on my real code? Or just demos?"
You're right to be skeptical. Most people oversell this.
Here's the thing: Ralph isn't trying to be fully autonomous. It's a hybrid. You balance HITL Ralph (Human In The Loop) with AFK Ralph (Away From Keyboard). You decide which 10% is worth your expertise—architecture decisions, design taste, the hard stuff—and which 90% can run unsupervised. Bug fixes. Refactors. Simple features. The stuff that eats your time but doesn't need your brain.
The key? Both modes use the exact same interface. Same prompts. Same setup. You work on the difficult stuff together with HITL Ralph, then hand it off to AFK Ralph when it's ready. Seamless.
Ralph works on brownfield. Ralph works on greenfield. Honestly, there's nothing I want to do without Ralph anymore because it's just such an improvement over every setup I've tried before.
That flexibility is the unlock. You're not betting everything on full autonomy. You're choosing when to supervise and when to let it run.
"I don't want to be dependent on Anthropic or some SaaS tool."
You won't be.
Ralph is just a bash loop over a CLI. The CLI doesn't matter. It's about the prompt you pass in and the feedback loops in your repo. You can swap out the CLI trivially.
I've done most of my work with Claude Code, but Ralph works equally well with Copilot CLI, with Codex, with OpenCode. If anything, it works better with OpenCode in certain ways.
No hard dependency on a model provider. Ralph is a technique that works across model providers.
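Concretely, swapping providers is usually a one-line change (invocation flags vary by CLI and version; these are illustrative):

```bash
AGENT='claude -p'      # Claude Code, non-interactive mode
# AGENT='codex exec'   # or Codex
# AGENT='opencode run' # or OpenCode
$AGENT "$(cat prompt.md)"
```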
You're learning the technique, not buying a tool.
"I don't know how to write good PRDs for agents. I'll be bad at it."
Writing PRDs for humans and agents isn't that different. The main difference? The agent will actually read the PRD. Humans often won't—they'll miss parts or de-emphasize what matters.
PRDs are how you frame large changes. How you build features that span more than one context window. Get a great PRD template, understand how to break it down, and you can use it for refactors, new features, greenfield—it's extremely flexible.
Understanding how to write PRDs puts you in the senior/lead bracket. This was true in the age of human coding. It's even more true in the age of AI coding.
I'm not claiming to make you an expert in 4 hours. PRD writing is wisdom you accrue over time. What I'm giving you are the frames—the structure within which you can experiment and tweak. You'll leave with a template that works and the understanding of why it works.
This is the meta-skill. This is what makes everything else work.
"4 hours isn't enough time to actually implement this."
The hardest part isn't the workshop. It's what comes after—developing intuition for Ralph and understanding how to steer it in your codebase.
This comes down to prompting. Prompting is always the hardest part of any AI system. It's experimental, not like traditional development. You tweak things. You iterate. It took me a solid three days of working with HITL Ralph before I felt comfortable letting it run AFK.
What I'm trying to do is shortcut that experimental phase. I'm giving you a prompt template I believe is a winner—battle-tested across real codebases. You'll iterate on it from there, but you won't be starting from zero.
By 1 PM, you'll have a working system. Not perfect. But running. Ready to generate PRs on your actual backlog.
The foundation will be running TODAY. Then you make it better. Then you scale it. Then you teach it to your team.
"Can't I just watch a YouTube video or read a blog post about this?"
Here's what a blog post can give you:
- The concept explained
- Maybe some code examples
- General best practices
Here's what a blog post CAN'T give you:
- Debugging your specific setup issues
- Answering "what happens if I structure my PRD THIS way instead?" immediately
- Seeing how other developers in the workshop are solving the same problems you're hitting right now
You're not paying for the information.
You're paying for immediate answers to your specific problems, live debugging when things break, and the wisdom that comes from seeing multiple developers working through the same challenges together.
What you need to bring
Technical requirements:
- Laptop with Node.js installed
- GitHub account with a repo you can work on
- Terminal comfort (you'll be running bash commands)
- Any one of these AI coding CLIs installed and working:
  - Claude Code
  - OpenCode
  - Copilot CLI
  - Codex
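A quick pre-flight check might look like this (binary names can differ per install, so treat these as illustrative):

```bash
node --version   # Node.js present?
git --version    # git present?
for cli in claude opencode copilot codex; do
  command -v "$cli" >/dev/null 2>&1 && echo "found agent CLI: $cli"
done
```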
Mental requirements:
- Willingness to debug things that break
- Curiosity about how this actually works under the hood
- Patience for the parts that aren't perfect yet (this is cutting-edge, not battle-tested)
Register for February 11, 9 AM PST
$375 (regularly $500) • Limited to 40 developers • Save 25%!
⏰ Sale ends February 5, 2026 — 5 days remaining
We're capping at 40 because interactive means you get real feedback. You'll get screen time to debug YOUR code, not just watch me code.
When you register, you'll receive:
- Calendar invite with Zoom link sent immediately to your email
- Pre-workshop setup instructions so you're ready to code from minute one
- Access to workshop materials and code examples (templates, scripts, the exact bash loop setup)
- Recording of the session in case you need to review anything (but the real value is in the live debugging and Q&A)
Four hours. Real code. Real techniques. Real leverage.
Let's get this running
I've been talking to people using this pattern in production. I've debugged the parts that break. I've figured out the parts that matter.
This looks simple—just run an agent in a loop, right?
But there are lots of ways to screw it up. And lots of ways to make it better.
The PRD structure matters. The feedback loops matter. The way you break down tasks matters.
Those details are what turn "interesting concept" into "actually useful tool."
Once you understand them, you can build your own variations. You can iterate. You can make this work for your specific setup and your specific needs.
You won't be dependent on me or anyone else. You'll just know how it works.
So bring your GitHub issues. Bring your questions. Bring your skepticism.
Let's get this running.
-Matt
P.S. Not sure if this is for you? Ask yourself: "Do I spend more time babysitting AI coding assistants than I'd like?" If yes, this workshop is for you.
P.P.S. Those weird edge cases that still break? We're going to debug those exact scenarios live in the workshop. You'll see what breaks and why. That's how you learn to avoid them in your own PRDs.