Prompt engineering has accumulated a thick layer of folklore. Some of it is real and consistently helpful; a lot of it is cargo-culted from a single tweet that worked once. This guide is the short list — twelve patterns that survive contact with real production work, with one concrete example each and, where it isn't obvious, a note on when to reach for it.
None of these are magic. They’re just ways of giving the model less to guess about.
1. Be specific about the deliverable
Instead of “Write a marketing email”, say “Write a 120-word marketing email for B2B SaaS founders, single CTA, no subject line, plain text.” The model can’t read your mind about format, length, or audience — and if you don’t say, you’ll get the average of all marketing emails on the internet.
2. Give one good example over three mediocre ones
One worked example in the prompt outperforms three half-baked ones. Few-shot prompting works because the example sets the bar. Pick the best version of the output you want and paste it whole.
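In code, that means pasting the exemplar whole into the prompt. A minimal sketch in plain Python (no particular SDK assumed; the exemplar text and function name are placeholders for your own):

```python
# Your single best real example, pasted in full. Placeholder text here.
EXEMPLAR_INPUT = "Announce the v2 API to existing customers."
EXEMPLAR_OUTPUT = "(paste the best version of the output you want, whole)"

def few_shot_prompt(task: str) -> str:
    # One strong example sets the bar; the task reuses its framing.
    return (
        "Here is an example of exactly the kind of output I want.\n\n"
        f"Input: {EXEMPLAR_INPUT}\n"
        f"Output: {EXEMPLAR_OUTPUT}\n\n"
        f"Now produce the same kind of output for this input: {task}"
    )
```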
3. Constraint stacking
Put your constraints in a bulleted list above the request, not woven into prose. The model parses lists better than nested clauses. For example:
Constraints:
- Maximum 80 words
- One sentence per paragraph
- No emojis
- End with a question
Now write a follow-up email to a cold lead who didn't reply.
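If you reuse this pattern, the list is easy to assemble in code. A minimal sketch in plain Python (no particular SDK assumed; `constrained_prompt` is an illustrative name, not a library function):

```python
def constrained_prompt(request: str, constraints: list[str]) -> str:
    # Constraints go in a bulleted list above the request,
    # never woven into the request itself.
    bullets = "\n".join(f"- {c}" for c in constraints)
    return f"Constraints:\n{bullets}\n\n{request}"

prompt = constrained_prompt(
    "Write a follow-up email to a cold lead who didn't reply.",
    ["Maximum 80 words", "One sentence per paragraph",
     "No emojis", "End with a question"],
)
```

4. Reasoning before answer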
For analysis tasks, ask the model to think step by step before giving the final answer. “Walk through your reasoning, then state your conclusion” outperforms “What’s the answer?” on most non-trivial questions.
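If downstream code needs just the conclusion, ask for it on a labeled line and parse it out. A sketch; the "Conclusion:" marker is a convention chosen here, not anything the model requires:

```python
def reasoning_prompt(question: str) -> str:
    return (
        f"{question}\n\n"
        "Walk through your reasoning step by step. Then state your final "
        "answer on a single line starting with 'Conclusion:'."
    )

def extract_conclusion(response: str) -> str:
    # Keep the full reasoning around for debugging; surface the conclusion.
    for line in response.splitlines():
        if line.startswith("Conclusion:"):
            return line.removeprefix("Conclusion:").strip()
    return response.strip()  # marker missing: fall back to the whole reply
```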
5. Explicit role + explicit goal
“You are a senior product manager reviewing a launch checklist for a fintech app. Your goal is to flag risks the team might have missed” — the role narrows the prior, the goal narrows the output. Don’t use one without the other.
6. Negative examples
“Don’t write in marketing speak. No ‘leverage’, ‘unleash’, ‘unlock’.” Negative examples are stronger constraints than they look. If you’ve gotten three bad outputs in a row, name the bad pattern explicitly.
7. Output format with a schema
For structured output, give a JSON or YAML schema in the prompt. The model will follow it almost every time:
{
  "title": "string",
  "tags": ["string"],
  "estimated_read_time_minutes": "integer"
}
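In practice you also want to parse the reply defensively, since models sometimes wrap JSON in markdown fences despite instructions. A sketch, assuming you ask for the object and nothing else:

```python
import json

SCHEMA = """{
  "title": "string",
  "tags": ["string"],
  "estimated_read_time_minutes": "integer"
}"""

def structured_prompt(task: str) -> str:
    return (
        f"{task}\n\n"
        "Respond with only a JSON object matching this schema, "
        f"and no other text:\n{SCHEMA}"
    )

def parse_reply(reply: str) -> dict:
    # Strip markdown fences if the model added them anyway.
    text = reply.strip().removeprefix("```json").removeprefix("```")
    text = text.removesuffix("```").strip()
    return json.loads(text)
```

8. Inline anchors with XML-like tags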
Wrap inputs in tags so the model can refer back to them precisely. Enclosing the source text in <document>…</document> and then instructing "Using only the information in <document>…" reduces hallucination noticeably.
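A sketch of the wrapping, with the refusal clause from pattern 12 folded in (the tag name is arbitrary; any consistent tag works):

```python
def grounded_prompt(document: str, question: str) -> str:
    # The tags give the instruction a precise referent.
    return (
        f"<document>\n{document}\n</document>\n\n"
        "Using only the information in <document>, answer the question "
        "below. If <document> doesn't contain the answer, say so.\n\n"
        f"Question: {question}"
    )
```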
9. Two-pass: draft, then revise
Ask for a draft, then ask the model to critique its own draft against your constraints, then ask for a final. The critique step catches issues a single-pass prompt misses.
When to reach for this: anything where you'd normally ask a junior teammate to "take another look before you send it." If that instinct fires, run a critique pass.
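The loop is three calls. A sketch where `complete` stands in for whatever prompt-to-text function your client exposes; it is not a real SDK call:

```python
def two_pass(complete, task: str, constraints: str) -> str:
    # complete: any function mapping a prompt string to a completion string.
    draft = complete(f"{constraints}\n\n{task}")
    critique = complete(
        f"Here is a draft:\n{draft}\n\n"
        "Critique it against these constraints, one bullet per violation:\n"
        f"{constraints}"
    )
    return complete(
        f"Draft:\n{draft}\n\nCritique:\n{critique}\n\n"
        "Rewrite the draft, fixing every issue raised in the critique."
    )
```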
10. Anchored exemplars for tone
If you want a specific voice, paste 2–3 paragraphs in that voice and say “Match this tone.” Style is much easier to imitate than to describe.
11. Clarify the audience
“Explain X to a senior backend engineer who has never used Y” produces a different (better) result than “Explain X”. The audience is half the prompt.
12. Refuse to fill in the blank
Add: “If you’re missing information needed to answer well, ask for it. Don’t guess.” The model will obediently fabricate when not given this out. Give it the out.
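One way to make the escape hatch machine-detectable is to give the model a fixed prefix for its questions. A sketch; the "NEED INFO:" marker is a convention chosen here, and `complete` is again a stand-in for your client:

```python
ESCAPE_HATCH = (
    "\n\nIf you're missing information needed to answer well, reply with "
    "only your questions, each on a line starting with 'NEED INFO:'. "
    "Don't guess."
)

def ask(complete, prompt: str) -> str:
    # Raise instead of accepting a fabricated answer
    # when the model asks for more information.
    reply = complete(prompt + ESCAPE_HATCH)
    if reply.lstrip().startswith("NEED INFO:"):
        raise ValueError(f"Model needs more information:\n{reply}")
    return reply
```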
What to skip
- “You are an expert in…” — by itself, doesn’t help. Pair with a goal (#5) or skip.
- “Take a deep breath.” — was a meme; modern models don’t need it.
- Stacking 14 personas — pick one role, not seven.
Related
For when to use the system prompt vs the user prompt, see System vs user prompts. For reusable prompt scaffolding, see Prompt templates.