Based on high-quality reference materials, we have easily compiled tips for creating LLM (large-scale language model) prompts.
Only the overview is provided. Please refer to each document for details.
Prompt engineering techniques can be learned to some extent through published materials. If you are considering purchasing a course, merchandise or book about prompt engineering (often for those with little information), please be sure to read these materials first.
Prompt engineering - OpenAI API
Six strategies
- Write clear instructions
- Include details
- Assign a persona
- Clearly divide the different parts of the input statement
- Specify the steps required to complete the task
- Present an example
- Specify the length of the answer you want
- Provide reference text
- Instructing you to respond using reference text
- Instruct them to unzip with a quote from the reference text
- Split complex tasks into simple subtasks
- Identify highly relevant instructions for a user's queries using intent classification
- Summarize or filter previous interactions in interactive applications that require very long conversations
- Long documents are summarized in fragments and then recursively constructed the overall summary.
- Give the model time to think
- Tell the model itself to find a solution before jumping to the conclusion
- Hide the model's inference process using inner monologues and a set of queries
- Ask the model if there was anything missing from the previous pass
- Use external tools
- Do efficient knowledge retrievel using embeddings-based search
- Use code execution to make more accurate calculations and external API calls
- Give the model access to a specific function
- Test your changes systematically
- Evaluate the model's output using the Gold Standard answer
Prompt design strategies | Google AI for Developers
Prompt Design Strategy
- Give clear and specific instructions
- Define the tasks to perform
- Specify all constraints
- Define the format of your answer
- Include a few examples
- zero-shot and few-shot prompts
- Find the optimal number of examples
- Examples are used to show patterns rather than anti-patterns.
- The examples presented use a consistent format
- Add contextual information
- Add a prefix
- Input prefix
- Output prefix
- Example Prefix
- Give the model partial input to complete it
- Breakdown prompts into simple components
- Break down instructions
- Create a chain of prompts (make the output of the previous prompt as input of the next prompt)
- Aggregate responses
- Try different parameter values
- Maximum output token
- Temperature
- Top-K
- Top-P
- Prompt Improvement Cycle Strategy
- Use a different phrase
- Switch to a similar task
- Reorder prompt content
- Fallback response
- Things to avoid
- Avoid relying on models to generate factual information
- Use mathematics and logic problems with caution
Prompt Engineering for Generic AI | Machine Learning | Google for Developers
Prompt Engineering for Generating AI
Prompt Creation Best Practices
- Clearly communicate which content and information is most important.
- Structure the prompt: Start with role definition, provide context and input data, and provide instructions.
- Use concrete and diverse examples to enable the model to produce accurate results with a focus.
- Give constraints to limit the scope of the model's output. Doing so will avoid deviating from instructions and providing inaccurate information.
- Complex tasks are broken down into simple sequences.
- Instruct the model to rate and check their own answers before generating them ("Responses should be within 3 sentences," "Rate the brevity of the output on a scale of 1-10." "Do you think this is correct?").
Prompt Type
- Direct prompting (zero-shot)
- Prompting with an example (one-shot/ few-shot/ multi-shot)
- CoT (Chain-of-Thought) Prompting
- zero-shot CoT
- Prompt Improvement Cycle Strategy
Prompt engineering
- Define tasks and success criteria
- Key success criteria to consider
- Performance and accuracy
- Latency
- price
- Create a test case
- Create a temporary prompt
- Try a prompt on a test case
- Improve the prompt
- Return to step 4 and repeat the improvements
- Release polished prompts
Starting with the most capable models and long prompts first, and once you have the desired output quality, try smaller models or shorter prompts for latency and cost savings.
Prompt Engineering Techniques
- Tell them clearly and directly
- Use the example
- Give a role to the model
- Using XML tags (Claude specific)
- Separate the big prompts
- Make the model think with step-by-step
- Specify the beginning of the expected output
- Specify the output format
- Please rewrite
- Models with long context windows take advantage of it
Prompt Engineering Guide
LLM Settings
name explanation temperature The degree of randomness. Increase temperature increases the randomness, while lower decreases the randomness. top P A method of sampling called nucleus sampling. Higher top P increases the diversity of responses. max length Maximum length of the answer. Units vary depending on the model, such as the number of tokens and characters. stop sequence A string pattern that stops generating answers. frequency penalty Penalty for the frequency of occurrence of a particular token. Presence penalty Penalty for the frequency of occurrence of any token. Prompt components
Consider the following as components of the prompt:
name Japanese explanation instruction Instructions Tasks you want to model context context External information and additional context input data Input data Inputs and questions to ask for answers output indicator Output indicator Output type and format General Tips
- Start simply and repeat the improvements
- Instruct requests in an orderly manner
- Specifically, directly
- Avoid inaccuracies
- "This is how it is" rather than "Don't do this."
Prompt creation techniques
- Zero-shot Prompting
- Few-shot Prompting
- CoT (Chain-of-Thought) Prompting
- Self-Consistency
- Generated Knowledge Prompting
- Prompt Chaining
- ToT (Tree of Thoughts)
- RAG (Retrieval Augmented Generation)
- ART (Automatic Reasoning and Tool-use)
- APE (Automatic Prompt Engineer)
- Active-Prompt
- Directional Stimulus Prompting
- PAL (Program-Aided Language Models)
- ReAct Prompting
- Reflexion
- Multimodal CoT Prompting
- GraphPrompt
Risk and misuse
- Hostile prompting (prompt attack)
- Prompt Injection
- Promptre King
- Jailbreaking (jailbreak)
- Truth
- bias