We just played
In 2004, a monster called the Dahaka was chasing me through Prince of Persia: Warrior Within. It couldn't be killed. It couldn't be reasoned with. That thing gave me a trauma response that still lives in my body today. Twenty-one years later, I'm doing the same thing with AI. Just without the monster. Mostly.
---
Gamers have been managing context windows their whole lives. We just didn't call it that. Every save file is a checkpoint. Every inventory screen is a decision about what to carry forward and what to leave behind. Every quest log is a system prompt running in the background, keeping you oriented while you handle whatever's in front of you. When I open a long conversation with an AI model, I'm not learning a new behavior. I'm doing what I've done since I was a kid holding a controller. I'm managing state.
---
Most people type something into an AI, get a bad answer, and either start over or keep going. They pile bad context on top of bad context until the whole conversation is poisoned and they can't figure out why the answers got worse. A gamer doesn't flinch at a failed first attempt. You've died on the first try a thousand times before. You look at what went wrong, adjust, re-run. Reload from a cleaner checkpoint if you have to. Failure isn't frustrating. Failure is data. That's prompt engineering. That's literally it.
---
The Prince of Persia games had a rewind mechanic. The Sands of Time. You could undo your mistakes, but only a limited number of times. The sand tanks weren't infinite. So you learned to be selective. Some mistakes were worth rewinding. Some you just lived with and adapted around. I catch myself doing this with AI every day. I run a task, look at the output, and something's off. Maybe I asked it to refactor a file and it rewrote half the codebase. Maybe the tone drifted three messages ago and everything since has been slightly wrong. A non-gamer would try to fix the output. Iterate on the mess. A gamer rewinds. Go back to the last clean checkpoint and give better context this time instead of building on top of something that already went sideways. That's token management. You don't have unlimited context. You learn what's worth keeping in the window and what you can afford to lose. You get a feel for when to start fresh versus when to keep building on what you have. Nobody taught us this. We just played.
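If you want to see that save-state reflex as code, here's a minimal sketch. None of this comes from any particular tool; the `send_to_model` function is a hypothetical stand-in for whatever chat API you use, stubbed here to echo the last message so the example runs on its own.

```python
def send_to_model(messages):
    # Hypothetical stand-in for a real chat API call; echoes the last
    # user message so this sketch is runnable without a model.
    return "echo: " + messages[-1]["content"]


class Conversation:
    def __init__(self, system_prompt):
        # The system prompt is the quest log: always in the window.
        self.messages = [{"role": "system", "content": system_prompt}]
        self.checkpoints = []

    def save(self):
        # Snapshot the current history, like a save file before a boss fight.
        self.checkpoints.append(list(self.messages))

    def rewind(self):
        # Restore the last clean checkpoint, discarding poisoned context
        # instead of iterating on top of it.
        if self.checkpoints:
            self.messages = self.checkpoints.pop()

    def ask(self, prompt):
        self.messages.append({"role": "user", "content": prompt})
        reply = send_to_model(self.messages)
        self.messages.append({"role": "assistant", "content": reply})
        return reply
```

The point of the sketch is the habit, not the code: you `save()` when an exchange lands cleanly, and when an answer goes sideways you `rewind()` and re-ask with better context rather than arguing with the mess.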
---
Boss fights are the same loop. You go in blind. You die. You notice a pattern. It always attacks left after the charge. You adjust. You die again, but later this time. You refine. Eventually you beat it not because you got stronger, but because you understood the system. Swap "boss" for "model" and nothing changes. The model keeps hallucinating citations. You notice it happens when you ask for too many at once. You break the request into smaller passes. You add constraints. Eventually it stops making things up — not because the model changed, but because you understood it better. Good prompt engineering isn't about finding magic words. It's about reading patterns, iterating, and understanding that the system has rules you can learn by interacting with it.
---
Inventory management is context curation. Every RPG player has stood in front of a chest and made hard choices. Do I keep the fire sword or the ice bow? I can't carry both. What's the next area? What am I going to need? That's you deciding what goes into a prompt. Background context, examples, constraints, tone instructions. You can't throw everything in and expect good results. You have to choose. You have to know what the next interaction needs. Gamers have been making that call for decades. And when you overload a prompt with everything you think the model needs, you get the same result as walking into a boss fight carrying sixty potions and no weapon. Technically prepared. Practically useless.
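The inventory call can be sketched too. This is one illustrative strategy, not any tool's actual method: score each candidate piece of context by relevance, estimate its token cost with a rough characters-per-token heuristic, and greedily pack the highest value-per-token items until the budget runs out.

```python
def estimate_tokens(text):
    # Rough heuristic: about 4 characters per token. An approximation,
    # not a real tokenizer.
    return max(1, len(text) // 4)


def curate_context(candidates, budget):
    """Pick which context makes it into the prompt.

    candidates: list of (relevance, text) pairs, relevance being your
    own judgment of how useful the text is for the next interaction.
    budget: maximum tokens you're willing to spend on context.
    """
    chosen = []
    used = 0
    # Pack the best value per token first: fire sword before sixty potions.
    for relevance, text in sorted(
        candidates,
        key=lambda c: c[0] / estimate_tokens(c[1]),
        reverse=True,
    ):
        cost = estimate_tokens(text)
        if used + cost <= budget:
            chosen.append(text)
            used += cost
    return chosen
```

Greedy packing is the simplest defensible choice here; the real skill, as with inventory screens, is in the relevance scores you assign, which no algorithm does for you.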
---
I'm not saying gamers are smarter. I'm saying there's a specific set of reflexes. Save state thinking, iterative problem solving, resource budgeting, pattern recognition under pressure. Gaming installs these in your brain through sheer repetition. Thousands of hours of it. And it's not just anecdotal.
Research published in [Attention, Perception, & Psychophysics](https://link.springer.com/article/10.3758/s13414-012-0284-1) found that action video game players switch between tasks faster and more efficiently than non-gamers, with smaller cognitive costs when shifting focus. And those reflexes happen to be exactly what working with AI demands. It's not that we learned prompt engineering. It's that we've been practicing it our whole lives in worlds that punished us for getting it wrong and rewarded us for adapting fast.
---
But here's what I can't stop thinking about. Did we just get lucky that AI works the way our brains were already wired? Or did the people who built these systems design them this way because they grew up with a controller in their hands too? Context windows. Checkpoints. Token limits. Save states. Maybe we didn't adapt to AI. Maybe AI was built by people who think like us. I don't have the answer.
But I don't think it's a coincidence.
