We’ve been doing this in software engineering for decades.

Code reviews. Retrospectives. Test suites. Linting. CI/CD. All feedback loops designed to catch mistakes early and compound quality over time.

Why would working with AI be any different?

The engineers getting the best results aren’t just prompting better; they’re building feedback loops around the output. Same instinct, new medium.

Three techniques I’m using:

- Make it teach itself: When something breaks, ask “where did you go wrong?” It’s basically a retro for your prompt. The model reflects, you learn what to adjust.
- Fine-tune to your style: Don’t just say “try again.” Say “do it like this instead.” Concrete examples beat vague corrections every time. Think of it as pair programming with explicit feedback.
- Prevent repeat mistakes: Document what good looks like. Build a library of patterns. Your own linting rules for AI outputs.
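That last idea can be made concrete. Here’s a minimal sketch of what “linting rules for AI outputs” might look like: each rule is a simple check over the generated text, collected in a small library you grow as you notice repeat mistakes. The rule names and checks below are hypothetical examples, not a prescribed set.

```python
# A tiny "lint library" for AI outputs. Each rule is a predicate
# over the generated text; violations are reported by rule name.
# These specific rules are illustrative examples only.

def no_vague_hedging(text: str) -> bool:
    """Flag filler phrases we've asked the model to avoid."""
    banned = ("it depends", "as an ai", "in general")
    return not any(phrase in text.lower() for phrase in banned)

def has_code_fence(text: str) -> bool:
    """Answers to coding questions should include a fenced example."""
    return "```" in text

RULES = {
    "no_vague_hedging": no_vague_hedging,
    "has_code_fence": has_code_fence,
}

def lint(text: str) -> list[str]:
    """Return the names of every rule the output violates."""
    return [name for name, check in RULES.items() if not check(text)]
```

Run `lint()` over each response before accepting it; every violation either gets the output regenerated or becomes a new line in your prompt, so the same mistake doesn’t come back.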

We already know this works. We’ve just got to apply the same rigor to a new tool.
