The Real Problem Isn't Bad Prompting, It's Early Outsourcing
Most people aren't "bad at prompting". They're just outsourcing their thinking too early. They jump straight to "Write me X" or "Give me ideas for Y" or "Summarise this for me", and then wonder why the output feels, well, fine.
Polite. Slightly bland. Easily forgotten.
The shift that unlocks real value with AI isn't better wording. It's learning when and how to think with the AI, not instead of thinking. This approach has gained significant traction across Asia's tech sector, where companies like Grab and Gojek are embedding AI as thinking partners rather than simple automation tools.
Stop Asking Questions, Start Assigning Jobs
AI performs best when it's given a role, a job, and boundaries. Bad prompts sound like questions. Good prompts sound like briefs.
Compare these: "What should my strategy be?" versus "Act as a senior strategy advisor. Your job is to pressure-test this plan, highlight blind spots, and suggest improvements. Optimise for realism, not optimism."
Same intent. Very different outcome. This single approach already fixes about 50% of "why does this feel generic?" complaints.
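If you work with AI through an API or reusable templates rather than a chat window, the same brief structure translates directly into code. Here's a minimal sketch in Python; the role, job, and boundary wording are placeholders, not a prescribed template:

```python
def build_brief(role: str, job: str, boundaries: str) -> str:
    """Assemble a brief-style prompt: who the AI is, what the job is, and what to optimise for."""
    return f"Act as {role}. Your job is to {job}. {boundaries}"

# The strategy example from above, built from its three parts.
prompt = build_brief(
    role="a senior strategy advisor",
    job="pressure-test this plan, highlight blind spots, and suggest improvements",
    boundaries="Optimise for realism, not optimism.",
)
print(prompt)
```

The point isn't the helper function. It's that every prompt you send has all three parts filled in, every time.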
By The Numbers
- 73% of professionals report getting "generic" outputs when using basic question prompts
- Companies using structured AI briefing methods see 40% improvement in output quality
- 85% of AI interactions in Asia-Pacific businesses still follow the question-answer format
- Role-based prompting reduces revision cycles by an average of 2.3 iterations
The Critical Upgrade: Ask for Reasoning, Not Answers
One of the biggest mistakes people make is asking for final answers too quickly. AI is very good at producing confident outputs. It's even better when you ask it to show its thinking.
Instead of "What should I do?" try "Walk me through how you'd think about this, then give a recommendation." You'll notice something interesting when you use this approach. Even when you disagree with the conclusion, the thinking is still useful.
That's when AI stops being a content tool and starts becoming a thinking tool. As Dr Sarah Chen, Head of AI Strategy at Singapore's National AI Programme, puts it:
"The most successful AI implementations we see aren't replacing human judgement, they're augmenting human reasoning. When teams ask for the thinking process first, they maintain agency over the final decision whilst benefiting from AI's analytical breadth."
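In practice, "reasoning first, recommendation second" is just an instruction about output order. Here's a minimal sketch using the openai Python client; it assumes the package is installed and an API key is set in the environment, and the model name and decision are illustrative, not recommendations:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o",  # illustrative; use whichever chat model you have access to
    messages=[
        {"role": "system", "content": "Act as a senior strategy advisor. Optimise for realism, not optimism."},
        {
            "role": "user",
            "content": (
                "Walk me through how you'd think about this decision step by step, "
                "then give a single recommendation at the end.\n\n"
                "Decision: whether to expand our delivery service to a second city this year."
            ),
        },
    ],
)
print(response.choices[0].message.content)
```

The ordering is what matters: you read the reasoning before you see the recommendation, so the recommendation can't anchor you.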
Don't use AI when you don't yet understand the problem clearly, when you're still emotionally reacting to the situation, when you're trying to avoid making a judgement call, or when the stakes are high and you haven't done your homework. In these moments, AI will happily give you structured nonsense that sounds helpful but nudges you in the wrong direction.
Instead, do this first: write the problem in plain English, note what you don't know yet, decide what kind of help you actually want. Then bring AI in.
For those looking to understand how AI processes complex reasoning, our guide on how AI reasoning models actually think provides valuable insights into the mechanics behind effective AI collaboration.
From "Fine" to Useful: Pushing Past the Average
If you've ever looked at an AI output and thought "Yeah, that's fine", you're not alone. "Fine" is the most common failure mode of AI. Not wrong. Not bad. Just not sharp enough to be genuinely useful.
"Improve this" is one of the least helpful instructions you can give an AI. Improve how? More decisive? Shorter? More persuasive? Safer? More opinionated? If you can't articulate the improvement, the AI can't aim for it.
| Vague Instruction | Specific Direction | Quality Improvement |
|---|---|---|
| "Make this better" | "Make this more persuasive for sceptical executives" | High |
| "Improve the tone" | "Write for confident peers, not subordinates" | High |
| "Make it clearer" | "Remove jargon, use concrete examples" | Medium |
| "Fix this" | "Strengthen the argument, remove hedging" | High |
One of the fastest ways to raise quality is to force contrast. Instead of asking for "the best version", ask for multiple positions. This technique has proven particularly effective in Asian markets, where nuanced communication styles often call for more than one strategic option.
Marcus Wong, Chief Digital Officer at DBS Bank, explains:
"We've found that asking AI for contrasting approaches, conservative versus bold, local versus global, gives our teams much clearer strategic choices. It's not about finding the perfect answer, it's about understanding the trade-offs between different paths."
Even if you don't use either version directly, the comparison sharpens your thinking. This technique works because it forces AI to take positions rather than hedging, and gives you explicit choices rather than bland compromise.
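As a concrete sketch, the contrast request is a single prompt that names two positions explicitly. The positions and the task below are illustrative:

```python
contrast_prompt = (
    "Draft two versions of our market-entry recommendation:\n"
    "1. A conservative version that prioritises low risk and proven channels.\n"
    "2. A bold version that prioritises speed and market share.\n"
    "For each version, state the single biggest assumption it depends on. "
    "Finish with one paragraph on the trade-offs between the two."
)
```

Asking for the assumptions alongside each version is what stops the output collapsing back into a safe middle ground.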
Treat Feedback as Prompt Material
Most people respond to weak outputs by starting again. That's unnecessary. Your feedback is the next prompt.
The key is being specific about what works, what doesn't, and what to change. Don't restart from scratch unless the fundamental approach is wrong. This iterative approach is how prompts mature over time instead of staying disposable.
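If you're working through a chat API, this is literally just appending your critique as the next message in the same conversation rather than opening a new one. A sketch of the shape, with illustrative message contents:

```python
# The running conversation; your feedback becomes the next user turn.
messages = [
    {"role": "system", "content": "Act as a senior strategy advisor. Optimise for realism, not optimism."},
    {"role": "user", "content": "Pressure-test this expansion plan: ..."},
    {"role": "assistant", "content": "<the first draft of the analysis goes here>"},
    {
        "role": "user",
        "content": (
            "The risk section works; keep it. The competitor analysis is too generic: "
            "name specific competitors and how they're likely to respond. "
            "Cut the closing summary entirely."
        ),
    },
]
# Send `messages` back to the same model; don't start a fresh chat.
```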
AI doesn't replace thinking. It amplifies whatever thinking you bring to it. Clear thinking in, useful insight out. Messy thinking in, polished confusion out.
AI is exceptionally good at producing acceptable output. It only becomes genuinely valuable when you give it direction, force it to take a position, and use it iteratively rather than transactionally. That's the difference between using AI and working with it.
This distinction is particularly crucial in Asia's rapidly evolving business landscape. Companies that master collaborative AI approaches are seeing measurable advantages in strategic planning and decision-making speed. For teams looking to enhance their professional communication through AI, exploring winning sales pitch techniques and team inspiration strategies can demonstrate these thinking-first principles in action.
How do I know if I'm thinking with AI or just using it?
You're thinking with AI when you disagree with its conclusions but still find the reasoning useful. You're just using it when you accept or reject outputs without engaging with the underlying logic.
What's the biggest mistake people make with AI reasoning?
Asking for final answers too quickly. Most valuable AI interactions happen in the reasoning phase, not the conclusion phase. Slow down and engage with the thinking process first.
Should I always ask AI to show its working?
For important decisions, yes. For quick tasks like formatting or simple rewrites, probably not. Match the depth of reasoning to the stakes of the outcome.
How do I avoid getting generic AI outputs?
Give AI a specific role, define success criteria clearly, and ask for reasoning before recommendations. Generic inputs almost always produce generic outputs, regardless of how you phrase the request.
When should I restart versus iterate with AI?
Restart when the fundamental approach is wrong or you realise you haven't defined the problem clearly. Iterate when the direction is right but execution needs refinement. Most people restart too often.
The most effective AI users treat each interaction as a conversation, not a command. They bring clear thinking to AI and use it to sharpen, challenge, and expand that thinking rather than replace it entirely.
For professionals ready to apply these principles in high-stakes situations, techniques for handling difficult clients and reducing workplace stress showcase how thinking-first AI collaboration can transform challenging professional scenarios.
What's your experience with moving from question-based to conversation-based AI interactions? Drop your take in the comments below.

Latest Comments (3)
This distinction between asking questions and assigning jobs is particularly relevant in our national digital transformation efforts here in Indonesia. We've seen firsthand that generic queries to AI tools, especially concerning policy drafting or public service design, yield outputs that are "polite, slightly bland, easily forgotten." It underscores the need for clear, well-defined roles for AI within government workflows, aligning with our objectives for more precise and impactful policy formulation. The idea of "pressure-testing" plans with AI, as suggested, could be very valuable in improving the robustness of new initiatives before they reach implementation. It's about integrating AI as a strategic partner, not just a search engine.
The "Act as a senior strategy advisor" example resonates. We see a clear correlation in pitch decks where founders use AI as a structured thought partner rather than a simple content generator. The nuance in prompting, moving from "what should my strategy be" to role-based directives, often surfaces in their early-stage market analysis and competitive positioning. This isn't just about better prompts; it reflects a more rigorous internal process. For us, it’s a subtle signal of founder quality and their ability to leverage tools effectively, impacting how we evaluate their investability in the AI-driven landscape.
"Act as a senior strategy advisor." This reminds me of when we used to build expert systems back in the 90s, trying to encode all that knowledge. Same idea, different tech.