HumanPrompt is a framework that makes it easier to design, manage, share, and use prompts and prompting methods. It is designed especially for researchers. It is still a work in progress, and we warmly welcome new contributions of methods and modules. Check out our suggestions here.
First, clone this repo and run:
pip install -e .
This will install the HumanPrompt package and add a soft link of the hub to ./humanprompt/artifacts/hub.
Then you need to set a few environment variables, e.g., the OpenAI API key:
export OPENAI_API_KEY="YOUR_OPENAI_API_KEY"
What comes next depends on how you want to use this repository. For now, the mission of this repo is to help researchers verify their ideas, so we have made it very flexible to extend and use.
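If you prefer not to export the key in your shell, a standard Python alternative (not specific to this repo) is to set it from within your script before constructing any method:

import os

# Hypothetical inline setup; the shell export above is the documented route.
os.environ["OPENAI_API_KEY"] = "YOUR_OPENAI_API_KEY"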
A minimal example of running a method looks like this:
Usage is very simple and will feel familiar if you have used HuggingFace Transformers before.
For example, Chain-of-Thought on CommonsenseQA:
from humanprompt.methods.auto.method_auto import AutoMethod
from humanprompt.tasks.dataset_loader import DatasetLoader

# Get one built-in method
method = AutoMethod.from_config(method_name="cot")

# Get one dataset, select one example for demo
data = DatasetLoader.load_dataset(dataset_name="commonsense_qa", dataset_split="test")
data_item = data[0]

# Adapt the raw data to the method's input format (we will improve this part later)
data_item["context"] = "Answer choices: {}".format(
    " ".join(
        [
            "({}) {}".format(label.lower(), text.lower())
            for label, text in zip(
                data_item["choices"]["label"], data_item["choices"]["text"]
            )
        ]
    )
)

# Run the method
result = method.run(data_item)
print(result)
print(data_item)
Zero-shot Text2SQL:
import os

from humanprompt.methods.auto.method_auto import AutoMethod
from humanprompt.tasks.dataset_loader import DatasetLoader

method = AutoMethod.from_config("db_text2sql")
data = DatasetLoader.load_dataset(dataset_name="spider", dataset_split="validation")
data_item = data[0]
data_item["db"] = os.path.join(
    data_item["db_path"], data_item["db_id"], data_item["db_id"] + ".sqlite"
)
result = method.run(data_item)
print(result)
print(data_item)
We adopt a "one config, one experiment" paradigm to facilitate research, especially when benchmarking different prompting methods. In the config file (.yaml) for each experiment under examples/configs/, you can configure the dataset, the prompting method, and the metrics.
Below is an example config file for the Chain-of-Thought method on GSM8K:
---
dataset:
  dataset_name: "gsm8k" # dataset name, aligned with huggingface dataset if loaded from it
  dataset_split: "test" # dataset split
  dataset_subset_name: "main" # dataset subset name, null if not used
  dataset_key_map: # mapping original dataset keys to humanprompt task keys to unify the interface
    question: "question"
    answer: "answer"
method:
  method_name: "cot" # method name to initialize the prompting method class
  method_config_file_path: null # method config file path, null if not used (will be overridden by method_args)
  method_args:
    client_name: "openai" # LLM API client name, adopted from github.com/HazyResearch/manifest
    transform: "cot.gsm8k.transform_cot_gsm8k.CoTGSM8KTransform" # user-defined transform class to build the prompts
    extract: "cot.gsm8k.extract_cot_gsm8k.CoTGSM8KExtract" # user-defined extract class to extract the answers from output
    extraction_regex: ".*The answer is (.*).\n?" # user-defined regex to extract the answer from output
    prompt_file_path: "cot/gsm8k/prompt.txt" # prompt file path
    max_tokens: 512 # max generated tokens
    temperature: 0 # temperature for generated tokens
    engine: code-davinci-002 # LLM engine
    stop_sequence: "\n\n" # stop sequence for generation
metrics:
  - "exact_match" # metrics to evaluate the results
Users can create transform and extract classes to customize prompt construction and the answer extraction process, as sketched below. The prompt file can also be replaced or specified according to the user's needs.
To run an experiment, go to the examples/ directory and specify the experiment name and other meta configs on the command line.
For example, run the following command to run Chain-of-Thought on GSM8K:
python run_experiment.py \
  --exp_name cot-gsm8k \
  --num_test_samples 300
For a new combination of method and task, you can simply add a new config file under examples/configs/ and run the same command.
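To make the "one config, one experiment" flow concrete, here is a rough sketch of what an experiment driver might do with the building blocks shown above. Only the AutoMethod and DatasetLoader calls are taken from the earlier demos; the config file name, the yaml parsing, and the naive exact-match loop are illustrative assumptions and do not necessarily match examples/run_experiment.py:

import yaml

from humanprompt.methods.auto.method_auto import AutoMethod
from humanprompt.tasks.dataset_loader import DatasetLoader

# Hypothetical config path derived from the exp_name used above.
with open("examples/configs/cot-gsm8k.yaml") as f:
    config = yaml.safe_load(f)

# The real script also handles dataset_subset_name and dataset_key_map; omitted here.
data = DatasetLoader.load_dataset(
    dataset_name=config["dataset"]["dataset_name"],
    dataset_split=config["dataset"]["dataset_split"],
)
method = AutoMethod.from_config(method_name=config["method"]["method_name"])

# Run on a handful of items and compute a naive exact-match score
# (the real evaluator normalizes answers before comparing).
num_correct, num_total = 0, 0
for i in range(10):
    data_item = data[i]
    prediction = method.run(data_item)
    num_correct += int(str(prediction).strip() == str(data_item["answer"]).strip())
    num_total += 1
print("exact_match:", num_correct / num_total)

The overall layout of the repository is as follows: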
.
├── examples
│ ├── configs # config files for experiments
│ ├── main.py # one sample demo script
│ └── run_experiment.py # experiment script
├── hub # hub contains static files for methods and tasks
│ ├── cot # method Chain-of-Thought
│ │ ├── gsm8k # task GSM8K, containing prompt file and transform/extract classes, etc.
│ │ └── ...
│ ├── ama_prompting # method Ask Me Anything
│ ├── binder # method Binder
│ ├── db_text2sql # method text2sql
│ ├── react # method ReAct
│ ├── standard # method standard prompting
│ └── zero_shot_cot # method zero-shot Chain-of-Thought
├── humanprompt # humanprompt package, containing building blocks for the complete prompting pipeline
│ ├── artifacts
│ │ ├── artifact.py
│ │ └── hub
│ ├── components # key components for the prompting pipeline
│ │ ├── aggregate # aggregate classes to aggregate the answers
│ │ ├── extract # extract classes to extract the answers from output
│ │ ├── post_hoc.py # post-hoc processing
│ │ ├── prompt.py # prompt classes to build the prompts
│ │ ├── retrieve # retrieve classes to retrieve in-context examples
│ │ └── transform # transform classes to transform the raw data to the method's input format
│ ├── evaluators # evaluators
│ │ └── evaluator.py # evaluator class to evaluate the dataset results
│ ├── methods # prompting methods, usually one method is related to one paper
│ │ ├── ama_prompting # Ask Me Anything(https://arxiv.org/pdf/2210.02441.pdf)
│ │ ├── binder # Binder(https://arxiv.org/pdf/2210.02875.pdf)
│ │ └── ...
│ ├── tasks # dataset loading and preprocessing
│ │ ├── add_sub.py # AddSub dataset
│ │ ├── wikitq.py # WikiTableQuestions dataset
│ │ └── ...
│ ├── third_party # third party packages
│ └── utils # utils
│ ├── config_utils.py
│ └── integrations.py
└── tests # test scripts
├── conftest.py
├── test_datasetloader.py
└── test_method.py
This repository aims to let researchers quickly use and easily manipulate different prompting methods. We have spent a lot of effort making it easy to extend and use, and we hope you will contribute to this repo.
If you are interested in contributing a method to this framework, you can add it under the humanprompt/methods folder. To do so, you should follow these steps:
1. Create a branch from the main branch and name it after your method.
2. Add your code under ./humanprompt/methods, i.e., inside a ./humanprompt/methods/your_method_name folder.
3. Create the method's hub files in ./hub/your_method_name.
4. Add a config under ./examples for the basic usage of this method, along with a minimal demo and tests in ./examples.
5. Open a pull request to the main branch.
We use pre-commit to control code quality. Before committing, please make sure to run the following to lint your code and fix any issues:
pip install pre-commit
pre-commit install # install all hooks
pre-commit run --all-files # trigger all hooks
You can skip this with git commit --no-verify and let us handle it later.
If you find this repository useful, please cite our project and Manifest:
@software{humanprompt,
  author = {Tianbao Xie and
            Zhoujun Cheng and
            Yiheng Xu and
            Peng Shi and
            Tao Yu},
  title = {A framework for human-readable prompt-based method with large language models},
  howpublished = {\url{https://github.com/hkunlp/humanprompt}},
  year = {2022},
  month = {October},
}

@misc{orr2022manifest,
  author = {Orr, Laurel},
  title = {Manifest},
  year = {2022},
  publisher = {GitHub},
  howpublished = {\url{https://github.com/HazyResearch/manifest}},
}