Prompt engineering is an essential skill for anyone who wants to get effective results from large language models (LLMs), according to Nana Santhosh Reddy Vootukuri, principal software engineering manager at Microsoft.
Prompt engineering is the practice of writing and refining the instructions you give to AI systems. The better your prompts, the more accurate, relevant and useful the results you get back.
9 Prompt Engineering Best Practices
To get the most out of your prompts, and save yourself time and headaches, follow the best practices below.
1. Be Clear and Specific
Ensure your prompts are clear, specific and provide enough context for the model to understand what you are asking. Avoid ambiguity and be as precise as possible to get accurate and relevant responses.
If instructions are elaborate and precise, there is less room for misinterpretation, said Vootukuri.
Related Article: Prompt Engineering: Techniques, Examples and Best Practices
2. Provide Examples
Examples of desired outcomes are always helpful, for AI and humans alike. Vootukuri recommended providing relevant examples within your prompt so that the AI can understand the tone and context. The more examples you offer, the better the AI can respond.
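This style is often called few-shot prompting: you show the model a handful of input/output pairs before asking your real question. Here is a minimal sketch of a few-shot prompt builder; the function name and the sentiment examples are illustrative, not from the article.

```python
def build_few_shot_prompt(task, examples, query):
    """Assemble a few-shot prompt: task description, worked examples, then the new input."""
    lines = [task, ""]
    for example_input, example_output in examples:
        lines.append(f"Input: {example_input}")
        lines.append(f"Output: {example_output}")
        lines.append("")
    # End with the new input and an open "Output:" for the model to complete.
    lines.append(f"Input: {query}")
    lines.append("Output:")
    return "\n".join(lines)

# Two examples teach the model the expected format and labels.
prompt = build_few_shot_prompt(
    task="Classify the sentiment of each review as positive or negative.",
    examples=[
        ("The battery lasts all day.", "positive"),
        ("The screen cracked within a week.", "negative"),
    ],
    query="Shipping was fast and the fit is perfect.",
)
print(prompt)
```

The trailing bare `Output:` is a common trick: it signals to the model exactly where and in what format its answer should begin.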
3. Create Personas or Roles
Provide the AI with a role or persona prior to giving it instructions — a technique called persona prompting.
For example: "You are a travel agent skilled in putting together itineraries that combine cost-savings with high-value cultural experiences." Then, provide instructions for the specific task you want to complete.
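If you reuse personas across many prompts, a small template helper keeps them consistent. The function below is an illustrative sketch, not a published API; the budget and destination are made-up example values.

```python
def persona_prompt(role, task):
    """Prefix the task with a persona so the model adopts that role before answering."""
    return f"You are {role}. {task}"

msg = persona_prompt(
    role=(
        "a travel agent skilled in putting together itineraries that combine "
        "cost-savings with high-value cultural experiences"
    ),
    task="Plan a 5-day Lisbon itinerary for two people on a $1,500 budget.",
)
print(msg)
```

Keeping the role and the task as separate arguments makes it easy to swap personas while holding the task constant, which is useful when comparing outputs.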
4. Use an Iterative Approach
"Prompt engineering is not a one-time process but an iterative one," said Vootukuri.
Start with an initial prompt, review the response and refine the prompt based on the output. Adjust the wording, add more context or simplify the request as needed to improve outputs.
Prompt management systems can be useful here, because they track how a prompt evolves over time and store everything in one easy-to-access place.
5. Request a Different Tone
Different tasks require different styles of writing or speech. You can tweak this tone by specifying what you're looking for: something formal, informal, professional, humorous or serious.
For example: "Explain this concept in a friendly and engaging tone."
6. Try Different Prompting Styles
Persona prompting is a popular type of prompt engineering that gives AI an "identity" it can use to refine its outputs. But other styles of prompting exist: zero-shot, few-shot, tree-of-thought, prompt chaining and more.
A certain style of prompting might work better for your specific task. Don't be afraid to play around with these prompting styles to figure out which works best.
7. Make the AI Model Explain Its Reasoning
Chain-of-thought prompting is one of the prompt engineering styles you can try out, as mentioned above. It requires the AI model to list out its reasoning process step-by-step, which is particularly useful when tackling complex problems or in highly regulated industries like finance and law.
Example: "Our website is getting traffic but very few newsletter signups. List three possible reasons why this might be happening, and for each one, walk through your reasoning step by step before giving a recommendation."
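The example above can be parameterized as a small helper that appends the step-by-step instruction to any problem statement; the function name and default are illustrative assumptions.

```python
def cot_prompt(problem, n_reasons=3):
    """Wrap a problem statement in a chain-of-thought instruction."""
    return (
        f"{problem} List {n_reasons} possible reasons why this might be "
        "happening, and for each one, walk through your reasoning step by "
        "step before giving a recommendation."
    )

p = cot_prompt("Our website is getting traffic but very few newsletter signups.")
print(p)
```

Because the instruction is appended programmatically, the same wrapper can be applied to any diagnostic question your team asks the model.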
8. Know Each Model’s Strengths and Weaknesses
Different AI models have their own nuances. Understanding these strengths and weaknesses can help you avoid mistakes and achieve safer, more reliable outputs.
Here's a breakdown of today's leading models:
| AI Model | Strengths | Weaknesses |
|---|---|---|
| GPT-4.1 (OpenAI) | Great at coding, strong instruction-following capabilities, able to handle very long contexts (claims of up to 1 million tokens) | Can be expensive (compute cost, licensing), customization and transparency are limited |
| Gemini 2.5 Pro (Google) | Good for tasks requiring cross-modal reasoning, integrates into the Google ecosystem | Known reliability issues and hallucination risks, fewer long-term independent audits or community validations |
| Claude Opus 4 (Anthropic) | Nuanced reasoning, safety-first design, great for coding, performs well within guardrails | Less flexibility for customization or transparency, more restrained or less stylistically flexible with writing tasks |
| DeepSeek‑R1 (DeepSeek) | Open source (so you can inspect, modify, deploy freely), balanced efficiency and reasoning depth, ideal for transparency, customization and avoiding vendor lock-in | Vulnerable to biases and misinformation, weaker in general language fluency and polish |
| Llama 3 (Meta) | Open source (customizable and transparent), large context window, strong general language capabilities | Performance degrades on challenging tasks, less mature infrastructure, safety/misuse risk due to being open source |
Related Article: CTO’s Guide to Strategic AI Prompting: 20+ Prompts to Master Today
9. Consider Ethical Implications
Prompt engineering best practices have to go beyond the science of prompt engineering, according to Dr. Jenny Shields of Shields Psychology & Consulting.
What often gets missed is that prompts are written to work for the model, not always for the people reading the results. But those people, whether clinicians, support staff or compliance teams, are often under time pressure and juggling competing tasks, according to Shields. In that context, a confident answer can easily be mistaken for a correct one.
The best prompts include clear limits or define their purpose; otherwise, they increase the risk of over-trust.