Prompt engineering is a relatively new discipline concerned with developing and optimizing prompts to use language models (LMs) effectively across a wide variety of applications and research topics. Prompt engineering skills help in understanding the capabilities and limitations of large language models (LLMs).
Prompt engineering can be used to improve LLMs' performance on common and complex tasks such as question answering and arithmetic reasoning, and to design robust and effective prompting techniques that interface with LLMs and other tools.
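As a concrete illustration (not part of the original guide), here is a minimal sketch of few-shot chain-of-thought prompting for an arithmetic word problem. The exemplar text and the `build_cot_prompt` helper are hypothetical; the assembled string can be sent to any LLM completion endpoint.

```python
# Minimal sketch of few-shot chain-of-thought prompting (an assumed example,
# not taken from the guide). The assembled prompt can be sent to any LLM
# completion endpoint; the sending step itself is omitted here.

# Worked examples that demonstrate step-by-step reasoning.
EXEMPLARS = [
    {
        "question": (
            "Roger has 5 tennis balls. He buys 2 cans of 3 tennis balls each. "
            "How many tennis balls does he have now?"
        ),
        "reasoning": (
            "Roger started with 5 balls. 2 cans of 3 balls each is 6 balls. "
            "5 + 6 = 11."
        ),
        "answer": "11",
    },
]

def build_cot_prompt(question: str) -> str:
    """Assemble a chain-of-thought prompt: exemplars first, then the new question."""
    parts = []
    for ex in EXEMPLARS:
        parts.append(f"Q: {ex['question']}")
        parts.append(f"A: {ex['reasoning']} The answer is {ex['answer']}.")
    parts.append(f"Q: {question}")
    parts.append("A:")  # the model continues here, ideally reasoning step by step
    return "\n".join(parts)

if __name__ == "__main__":
    prompt = build_cot_prompt(
        "A cafeteria had 23 apples. It used 20 for lunch and bought 6 more. "
        "How many apples does it have?"
    )
    print(prompt)
```

Exemplars that spell out intermediate reasoning steps are what distinguish chain-of-thought prompting from plain few-shot prompting; several of the papers listed below (e.g. Chain-of-Thought Prompting Elicits Reasoning in Large Language Models) study exactly this effect.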
Driven by the strong interest in LLM development, this collection of prompt engineering guide materials was created; it contains the latest papers, study guides, lectures, references, and tools related to prompt engineering. (I will update every three days for the time being.)
Most of the current content is translated from dair-ai (https://github.com/dair-ai/Prompt-Engineering-Guide).
The dair-ai notebooks and slides are also included.
dair-ai's Discord and dair-ai's Twitter
Platform (recommended)
The following is the prompt engineering guide developed by dair-ai; the guide is still under development.
Below are the latest papers on prompt engineering, sorted by release date. We add new papers daily and incorporate their summaries into the guide above weekly:
Surveys/Overviews:
Approaches/Techniques:
Scalable Prompt Generation for Semi-supervised Learning with Language Models (Feb 2023)
Bounding the Capabilities of Large Language Models in Open Text Generation with Prompt Constraints (Feb 2023)
À-la-carte Prompt Tuning (APT): Combining Distinct Data via Composable Prompting (Feb 2023)
GraphPrompt: Unifying Pre-Training and Downstream Tasks for Graph Neural Networks (Feb 2023)
The Capacity for Moral Self-Correction in Large Language Models (Feb 2023)
SwitchPrompt: Learning Domain-Specific Gated Soft Prompts for Classification in Low-Resource Domains (Feb 2023)
Evaluating the Robustness of Discrete Prompts (Feb 2023)
Compositional Exemplars for In-context Learning (Feb 2023)
Hard Prompts Made Easy: Gradient-Based Discrete Optimization for Prompt Tuning and Discovery (Feb 2023)
Multimodal Chain-of-Thought Reasoning in Language Models (Feb 2023)
Large Language Models Can Be Easily Distracted by Irrelevant Context (Feb 2023)
Synthetic Prompting: Generating Chain-of-Thought Demonstrations for Large Language Models (Feb 2023)
Progressive Prompts: Continual Learning for Language Models (Jan 2023)
Batch Prompting: Efficient Inference with Large Language Model APIs (Jan 2023)
On Second Thought, Let's Not Think Step by Step! Bias and Toxicity in Zero-Shot Reasoning (Dec 2022)
Constitutional AI: Harmlessness from AI Feedback (Dec 2022)
Successive Prompting for Decomposing Complex Questions (Dec 2022)
Discovering Language Model Behaviors with Model-Written Evaluations (Dec 2022)
Structured Prompting: Scaling In-Context Learning to 1,000 Examples (Dec 2022)
PAL: Program-aided Language Models (Nov 2022)
Large Language Models Are Human-Level Prompt Engineers (Nov 2022)
Ignore Previous Prompt: Attack Techniques for Language Models (Nov 2022)
Machine Generated Text: A Comprehensive Survey of Threat Models and Detection Methods (Nov 2022)
Teaching Algorithmic Reasoning via In-context Learning (Nov 2022)
Enhancing Self-Consistency and Performance of Pre-Trained Language Models through Natural Language Inference (Nov 2022)
Ask Me Anything: A Simple Strategy for Prompting Language Models (Oct 2022)
ReAct: Synergizing Reasoning and Acting in Language Models (Oct 2022)
Prompting GPT-3 To Be Reliable (Oct 2022)
Decomposed Prompting: A Modular Approach for Solving Complex Tasks (Oct 2022)
Language Models Are Greedy Reasoners: A Systematic Formal Analysis of Chain-of-Thought (Oct 2022)
Evaluating the Susceptibility of Pre-Trained Language Models via Handcrafted Adversarial Examples (Sep 2022)
Promptagator: Few-shot Dense Retrieval From 8 Examples (Sep 2022)
On the Advance of Making Language Models Better Reasoners (Jun 2022)
Large Language Models are Zero-Shot Reasoners (May 2022)
MRKL Systems: A Modular, Neuro-Symbolic Architecture That Combines Large Language Models, External Knowledge Sources and Discrete Reasoning (May 2022)
Toxicity Detection with Generative Prompt-based Inference (May 2022)
Learning to Transfer Prompts for Text Generation (May 2022)
The Unreliability of Explanations in Few-Shot Prompting for Textual Reasoning (May 2022)
A Taxonomy of Prompt Modifiers for Text-to-Image Generation (Apr 2022)
PromptChainer: Chaining Large Language Model Prompts through Visual Programming (Mar 2022)
Self-Consistency Improves Chain of Thought Reasoning in Language Models (Mar 2022)
Training Language Models to Follow Instructions with Human Feedback (Mar 2022)
Rethinking the Role of Demonstrations: What Makes In-Context Learning Work? (Feb 2022)
Chain-of-Thought Prompting Elicits Reasoning in Large Language Models (Jan 2022)
Show Your Work: Scratchpads for Intermediate Computation with Language Models (Nov 2021)
Generated Knowledge Prompting for Commonsense Reasoning (Oct 2021)
Multitask Prompted Training Enables Zero-Shot Task Generalization (Oct 2021)
Reframing Instructional Prompts to GPTk's Language (Sep 2021)
Design Guidelines for Prompt Engineering Text-to-Image Generative Models (Sep 2021)
Making Pre-trained Language Models Better Few-shot Learners (Aug 2021)
Fantastically Ordered Prompts and Where to Find Them: Overcoming Few-Shot Prompt Order Sensitivity (Apr 2021)
BERTese: Learning to Speak to BERT (Apr 2021)
The Power of Scale for Parameter-Efficient Prompt Tuning (Apr 2021)
Prompt Programming for Large Language Models: Beyond the Few-Shot Paradigm (Feb 2021)
Calibrate Before Use: Improving Few-Shot Performance of Language Models (Feb 2021)
Prefix-Tuning: Optimizing Continuous Prompts for Generation (Jan 2021)
AutoPrompt: Eliciting Knowledge from Language Models with Automatically Generated Prompts (Oct 2020)
Language Models are Few-Shot Learners (May 2020)
How Can We Know What Language Models Know? (Jul 2020)
Applications:
Collections:
If you think something is missing, please submit a PR. Feedback and suggestions are also welcome.