Last Friday at 11pm I deleted three client websites.
Not a rogue AI agent. Not a hallucinating copilot. Just me, tired, rushing, clicking through confirmations I wasn’t really reading. The recovery took a few hours: Opalstack’s support team had backups ready and helped me out. No data lost, just some downtime. But it was a tense few hours.
Here’s the thing though.
Every conversation about AI safety circles the same checklist: controlled access, confirmation gateways, rollback capability, audit trails. And those are all correct. But we talk about them like they’re uniquely necessary for AI. As if humans operating at 11pm on a Friday with full admin access aren’t a known failure mode.
The safeguards aren’t new. We just never enforced them on ourselves.
I didn’t need a smarter system that night. I needed the same guardrails people want to put around AI: a second confirmation, scoped permissions, a mandatory cooldown before destructive actions.
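None of this is exotic. Here’s a rough sketch of what I mean, in Python; the names (`delete_site`, `GRANTED_SCOPES`, the `destructive` decorator) are made up, but the three checks are the same ones on every AI checklist: scoped permissions, a typed second confirmation, and a forced cooldown.

```python
# A minimal sketch, not a product. Names and scopes here are hypothetical;
# the point is that these guardrails fit in ~30 lines for humans too.
import time

COOLDOWN_SECONDS = 30            # mandatory pause before anything destructive
GRANTED_SCOPES = {"sites:read"}  # scoped permissions: deletion not granted by default


class ScopeError(PermissionError):
    pass


def destructive(scope_required):
    """Wrap a destructive action with a scope check, typed confirmation, and cooldown."""
    def decorator(fn):
        def wrapper(target, *args, **kwargs):
            # 1. Scoped permissions: fail fast if this session can't delete.
            if scope_required not in GRANTED_SCOPES:
                raise ScopeError(f"missing scope {scope_required!r} for {fn.__name__}")
            # 2. Second confirmation: type the exact resource name, not just "y".
            typed = input(f"Type the name of the resource to {fn.__name__}: ")
            if typed != target:
                print("Names do not match. Aborting.")
                return None
            # 3. Mandatory cooldown: tired fingers can't outrun a timer.
            print(f"Waiting {COOLDOWN_SECONDS}s before {fn.__name__}({target!r})...")
            time.sleep(COOLDOWN_SECONDS)
            return fn(target, *args, **kwargs)
        return wrapper
    return decorator


@destructive(scope_required="sites:delete")
def delete_site(name):
    print(f"Deleting {name} (irreversibly).")


if __name__ == "__main__":
    # Raises ScopeError until "sites:delete" is explicitly granted for this session.
    delete_site("client-site-prod")
```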
If your AI safety checklist wouldn’t also protect you from a tired engineer on a Friday night, it’s not a safety checklist. It’s a blame-shifting exercise.