I keep getting sent “AI strategy” documents to look at. Forty slides. A vision. Three horizons. A maturity model. Sometimes a pyramid. Almost always: a list of every AI thing the deck-writer has heard of, mapped to a function, with a confidence score made of vibes.
None of them are bad, exactly. They're just useless. By the time you've read them, you still don't know what to do on Monday.
The strategies I've seen actually work are the ones that fit on a Post-it. Not metaphorically. Physically. A small square of paper, four lines of writing, stuck on a wall.
The Post-it test
Write your AI strategy in one go. Limit yourself to a Post-it (a real one, 76×76mm).
- One sentence on the bet you're making.
- One sentence on what changes for customers if it works.
- One sentence on what you'll stop doing to make room.
- One sentence on how you'll know in 90 days whether to keep going.
If you can't write that, you don't have a strategy. You have a wishlist with extra steps.
Why the long decks lose
Three reasons, in order of how often I see them.
They confuse capability with commitment. “We will use generative AI to enhance customer experience across all touchpoints” is a sentence that costs nothing to say and demands nothing to do. Any team can claim they're working on it. None of them have to ship anything.
They list things the org doesn't actually own. Half the AI plans I see depend on data the company can't access, vendor relationships nobody has, or behaviour changes from users who never agreed to change. A strategy you can't act on is just an opinion.
They optimise for the meeting, not the work. Forty slides exist because the document needs to survive a steering committee, not because the work needs forty slides of guidance. The result: the document is exquisitely calibrated to the politics of the room it was presented in, and useless in the rooms where the actual building happens.
What a Post-it strategy looks like in practice
Here's one I helped write for a B2B services business late last year. Real Post-it. Stuck on the meeting-room wall for nine months.
- Bet: Our analysts spend half their week assembling client briefings from the same five data sources. We can make that twenty minutes of editing instead of three days of writing.
- If it works: Clients get briefings the day they ask for them. Analysts spend their week on judgement, not assembly.
- We'll stop: Manual briefing-pack production. Two analysts move off it entirely.
- 90-day check: Time-to-first-draft drops from three days to under an hour for the top ten client request types.
That's the whole plan. It was wrong in places and we adjusted it twice. But every Monday morning, the team knew what they were working on, and the leadership team knew what they'd see in 90 days. The Post-it didn't generate hype. It generated decisions.
What this rules out
The Post-it format is, by design, deeply unfriendly to certain kinds of AI initiative. That's a feature.
It rules out anything where you can't name the user. “The org” is not a user. “Knowledge workers” is not a user. The named analyst whose job changes on Tuesday is a user.
It rules out anything where you can't say what you'll stop doing. AI without subtraction is just additional cost, regardless of how impressive the output is.
It rules out anything where the 90-day check is a feeling. “We'll have a better sense of capability” is not a check.
If you can't fit your strategy on a Post-it, the constraint isn't the Post-it. It's the strategy.