Prompt engineering sounds technical, but it is simply the practice of writing AI instructions that reliably produce good output. Once you understand the core techniques, you can apply them across ChatGPT, Microsoft Copilot, Google Gemini, and any other large language model. This guide covers the techniques that make the biggest practical difference.
Why Prompt Engineering Matters
Two people using the same AI tool can get very different results — not because one has a better subscription, but because one knows how to instruct the model effectively. The model’s capabilities are fixed; the quality of your output is determined by the quality of your input.
Professional prompt engineers at AI companies are paid significant salaries specifically because good prompts are hard to write at scale. For everyday users, learning even the basics of this discipline produces immediate, noticeable improvements.
Core Techniques
Role Prompting
Assigning ChatGPT a role changes how it frames its response — the vocabulary it uses, the level of depth, the assumptions it makes about your knowledge.
Basic: “You are a financial adviser. Explain the difference between an ISA and a SIPP to someone approaching retirement.”
Advanced: Add constraints to the role: “You are a UK HR manager with 15 years of experience in employment law. Answer the following question as you would advise a line manager, using plain language and flagging anything where they should seek legal advice.”
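If you call a model programmatically, a role prompt typically goes in the system message of a chat-style request. A minimal sketch, assuming a generic chat message format (the `build_role_prompt` helper and its parameter names are illustrative, not part of any real SDK):

```python
# Sketch: composing a role prompt as a system message for a chat-style API.
# build_role_prompt is a hypothetical helper; no real API is called here.

def build_role_prompt(role: str, constraints: str, question: str) -> list[dict]:
    """Return a chat-style message list: the role and its constraints go in
    the system message, the user's actual question in the user message."""
    return [
        {"role": "system", "content": f"You are {role}. {constraints}"},
        {"role": "user", "content": question},
    ]

messages = build_role_prompt(
    role="a UK HR manager with 15 years of experience in employment law",
    constraints=(
        "Answer as you would advise a line manager, using plain language "
        "and flagging anything where they should seek legal advice."
    ),
    question="Can I ask a staff member to cancel booked annual leave?",
)
```

Keeping the role in the system message and the question in the user message means you can reuse the same role across many questions without repeating it.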
Chain-of-Thought Prompting
For complex problems that require reasoning — analysis, decision support, logical deduction — asking the model to “think step by step” or “reason through this carefully” significantly improves accuracy. Without this instruction, the model may jump straight to a conclusion that sounds plausible but skips important reasoning steps.
Example: “I am considering moving my business from sole trader to limited company. Think step by step through the tax implications, liability differences, and administrative requirements, then give me a balanced recommendation.”
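When you send prompts from code, the reasoning instruction can be a reusable suffix rather than something you retype. A small sketch (the exact wording of `REASONING_SUFFIX` is an example; tune it to your tasks):

```python
# Sketch: a helper that appends a chain-of-thought instruction to any task.
# REASONING_SUFFIX wording is illustrative, not a fixed magic phrase.

REASONING_SUFFIX = (
    "Think step by step through the relevant factors before giving "
    "your final answer, and show your reasoning."
)

def with_reasoning(task: str) -> str:
    """Append the step-by-step instruction, normalising the trailing full stop."""
    return f"{task.rstrip('.')}. {REASONING_SUFFIX}"

prompt = with_reasoning(
    "I am considering moving my business from sole trader to limited company. "
    "Assess the tax implications, liability differences, and administrative requirements"
)
```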
Few-Shot Prompting
Providing examples of the output you want (before asking for the actual output) is one of the most powerful techniques available. Rather than describing what you want in the abstract, you show it.
Structure:
- Show two or three examples of the output format/style you want
- Then ask for the actual output
Example: “Here are two product descriptions in the style I want: [Example 1] / [Example 2]. Now write a product description for [product] in the same style.”
This is especially useful for maintaining consistent brand voice across multiple pieces of content.
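The structure above is mechanical enough to automate: store your best examples once and assemble the few-shot prompt on demand. A minimal sketch (the example descriptions are placeholders for your own brand copy):

```python
# Sketch: assembling a few-shot prompt from stored examples.
# The example texts below are placeholders, not real product copy.

def build_few_shot_prompt(examples: list[str], request: str) -> str:
    """Number each stored example, then append the actual request."""
    numbered = "\n\n".join(
        f"Example {i}:\n{text}" for i, text in enumerate(examples, start=1)
    )
    return (
        "Here are product descriptions in the style I want:\n\n"
        f"{numbered}\n\n"
        f"Now {request} in the same style."
    )

prompt = build_few_shot_prompt(
    examples=[
        "Soft-touch notebook with 120 pages of recycled paper.",
        "A desk lamp that dims to candlelight for late sessions.",
    ],
    request="write a product description for a ceramic travel mug",
)
```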
Constraint Prompting
Adding explicit constraints to a prompt forces the model to prioritise and be precise. Common constraints:
- Length: “In exactly three sentences” / “Under 100 words” / “No more than five bullet points”
- Format: “As a table with columns: [X, Y, Z]” / “As a numbered list” / “In markdown format”
- Exclusions: “Do not include X” / “Avoid jargon” / “Do not hedge or add disclaimers”
- Audience: “Suitable for a non-technical reader” / “For an audience familiar with UK employment law”
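Constraints are easiest to keep consistent when they are appended as an explicit list rather than woven into the sentence. A sketch, assuming a simple "Constraints:" block format (the wording is illustrative):

```python
# Sketch: attaching explicit constraints to a base request as a labelled list.
# The "Constraints:" block format is one convention among many.

def constrain(request: str, constraints: list[str]) -> str:
    """Append each constraint as a bullet under a Constraints heading."""
    rules = "\n".join(f"- {c}" for c in constraints)
    return f"{request}\n\nConstraints:\n{rules}"

prompt = constrain(
    "Summarise the attached meeting notes.",
    [
        "No more than five bullet points",
        "Avoid jargon",
        "Suitable for a non-technical reader",
    ],
)
```

A separate constraints block also makes it easy to reuse the same rule set across many different requests.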
Prompt Chaining
Complex tasks often produce better results when broken into sequential prompts rather than attempted in one go. Each prompt builds on the previous response.
Example sequence for a business document:
- “List the 6 most important sections a remote working policy for a UK small business should include.”
- “Now write the first section: [section name]. Approximately 150 words. Plain English.”
- “Write the next section: [section name].” [repeat for each section]
- “Now review the full document and suggest any gaps or improvements.”
This produces a more coherent and thorough result than asking for the entire document in a single prompt.
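The sequence above can be sketched as a loop in code. Here `call_model` is a stand-in for whichever API or tool you actually use; it is stubbed so the structure is runnable, and the section names are hard-coded rather than parsed from the model's outline:

```python
# Sketch of the chaining loop above. call_model is a stub standing in for a
# real model call; in practice you would parse the outline it returns.

def call_model(prompt: str) -> str:
    return f"[model response to: {prompt[:40]}...]"  # stub, echoes the prompt

def draft_policy_document() -> list[str]:
    outline = call_model(
        "List the 6 most important sections a remote working policy "
        "for a UK small business should include."
    )
    # Hard-coded section names keep the sketch self-contained.
    sections = ["Eligibility", "Equipment", "Data security"]
    drafts = [
        call_model(
            f"Now write the section: {name}. Approximately 150 words. Plain English."
        )
        for name in sections
    ]
    review = call_model(
        "Now review the full document and suggest any gaps or improvements."
    )
    return [outline, *drafts, review]

transcript = draft_policy_document()
```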
Self-Critique Prompting
After getting a response, ask ChatGPT to critique its own output. This often surfaces weaknesses or gaps that you would otherwise only catch by re-reading the response carefully yourself.
Example: “Review the response you just gave. What are the three weakest points or assumptions? What important aspects did you not cover?”
Then: “Now rewrite the response addressing these weaknesses.”
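The draft / critique / rewrite pass is a fixed three-prompt sequence, so it can be stored as one. A minimal sketch (the critique wording mirrors the example above):

```python
# Sketch: the critique-then-rewrite loop as a reusable prompt sequence.
# Each string is sent as a separate turn in the same conversation.

CRITIQUE = (
    "Review the response you just gave. What are the three weakest points "
    "or assumptions? What important aspects did you not cover?"
)

def self_critique_prompts(original_request: str) -> list[str]:
    """Return the prompt sequence for one draft / critique / rewrite pass."""
    return [
        original_request,
        CRITIQUE,
        "Now rewrite the response addressing these weaknesses.",
    ]

steps = self_critique_prompts("Draft a returns policy for an online shop.")
```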
System Prompts: Setting Persistent Context
In the ChatGPT interface, you can set a “custom instruction” (via Settings → Personalisation → Custom Instructions) that acts as a permanent background prompt for all conversations. This is useful for setting context you would otherwise have to repeat every time.
Example custom instructions:
- “I am a UK business owner. Always assume UK spelling, UK laws, and UK tax rules unless I specify otherwise.”
- “I have a technical background. Do not over-explain basic concepts.”
- “My business is [description]. When writing business content, use this context.”
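Under the hood, custom instructions behave like a system message silently prepended to every new conversation. This sketch models that behaviour locally, without calling any real API:

```python
# Sketch: modelling custom instructions as a persistent system message that
# every new conversation starts with. No real API is called.

CUSTOM_INSTRUCTIONS = (
    "I am a UK business owner. Always assume UK spelling, UK laws, and "
    "UK tax rules unless I specify otherwise."
)

def new_conversation(first_message: str) -> list[dict]:
    """Every conversation begins with the same persistent system message."""
    return [
        {"role": "system", "content": CUSTOM_INSTRUCTIONS},
        {"role": "user", "content": first_message},
    ]

chat = new_conversation("Draft a VAT invoice template for my consultancy.")
```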
Building a Prompt Library
Once you find prompts that work well for recurring tasks, save them. A simple document or note with your best prompts — categorised by task type — saves time and produces consistent results. Treat effective prompts as reusable assets, not one-off experiments.
For teams, a shared prompt library ensures that everyone gets the same quality output and does not have to rediscover effective prompts independently.
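A shared library does not need to be more than named templates with placeholders. A minimal sketch (the keys and template wording are examples; store whatever recurs in your own work):

```python
# Sketch: a minimal prompt library as named templates with {placeholders}.
# Keys and template texts are illustrative examples.

PROMPT_LIBRARY = {
    "meeting_summary": (
        "Summarise these meeting notes in no more than five bullet points, "
        "plain English, flagging any action items:\n\n{notes}"
    ),
    "product_description": (
        "Write a product description for {product} in our house style: "
        "warm, concrete, under 80 words."
    ),
}

def render(name: str, **fields: str) -> str:
    """Fill a named template's placeholders with the supplied values."""
    return PROMPT_LIBRARY[name].format(**fields)

prompt = render("product_description", product="a ceramic travel mug")
```

Because the templates live in one place, improving a prompt improves it for every task (and every teammate) that uses it.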
What Prompt Engineering Cannot Fix
No prompt engineering technique can make ChatGPT reliably accurate about:
- Current events (the model has a training cutoff date)
- Specific real-time data (prices, stock levels, live statistics)
- Legal, medical, or financial advice that requires current jurisdiction-specific knowledge
Always verify factual claims in important outputs, regardless of how well-structured your prompt is.