HumanPrompt is a framework that makes it easier to design, manage, share, and use prompts and prompting methods. It is designed especially for researchers. It is still a work in progress, and we warmly welcome new contributions of methods and modules. Check out our suggestions here.
First, clone this repo and run:
pip install -e .

This will install the HumanPrompt package and add a soft link of the hub to ./humanprompt/artifacts/hub.
Then, you need to set some environment variables, e.g., your OpenAI API key:
export OPENAI_API_KEY="YOUR_OPENAI_API_KEY"

What comes next depends on how you want to use this repository. Currently, the mission of this repo is to help researchers validate their ideas, so we keep it very flexible to extend and to use.
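If you prefer to stay inside Python, you can set the same variable in your script instead; the snippet below is just a convenience sketch, not something the framework requires:

import os

# Equivalent to the shell export above; must be set before the first OpenAI call.
os.environ["OPENAI_API_KEY"] = "YOUR_OPENAI_API_KEY"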
A minimal example of running a method looks like this:
Usage is very simple, and will feel familiar if you have used HuggingFace Transformers before.
For example, Chain-of-Thought on CommonsenseQA:
from humanprompt.methods.auto.method_auto import AutoMethod
from humanprompt.tasks.dataset_loader import DatasetLoader

# Get one built-in method
method = AutoMethod.from_config(method_name="cot")

# Get one dataset, select one example for demo
data = DatasetLoader.load_dataset(dataset_name="commonsense_qa", dataset_split="test")
data_item = data[0]

# Adapt the raw data to the method's input format (we will improve this part later)
data_item["context"] = "Answer choices: {}".format(
    " ".join(
        [
            "({}) {}".format(label.lower(), text.lower())
            for label, text in zip(
                data_item["choices"]["label"], data_item["choices"]["text"]
            )
        ]
    )
)

# Run the method
result = method.run(data_item)
print(result)
print(data_item)

Zero-shot Text2SQL:
import os

from humanprompt.methods.auto.method_auto import AutoMethod
from humanprompt.tasks.dataset_loader import DatasetLoader

method = AutoMethod.from_config("db_text2sql")
data = DatasetLoader.load_dataset(dataset_name="spider", dataset_split="validation")
data_item = data[0]
data_item["db"] = os.path.join(
    data_item["db_path"], data_item["db_id"], data_item["db_id"] + ".sqlite"
)
result = method.run(data_item)
print(result)
print(data_item)

We adopt a "one config, one experiment" paradigm to facilitate research, especially when benchmarking different prompting methods. In the config file (.yaml) of each experiment under examples/configs/, you can configure the dataset, the prompting method, and the metrics.
Below is an example config file for the Chain-of-Thought method on GSM8K:
---
dataset:
  dataset_name: "gsm8k"  # dataset name, aligned with huggingface dataset if loaded from it
  dataset_split: "test"  # dataset split
  dataset_subset_name: "main"  # dataset subset name, null if not used
  dataset_key_map:  # mapping original dataset keys to humanprompt task keys to unify the interface
    question: "question"
    answer: "answer"
method:
  method_name: "cot"  # method name to initialize the prompting method class
  method_config_file_path: null  # method config file path, null if not used (will be overridden by method_args)
  method_args:
    client_name: "openai"  # LLM API client name, adopted from github.com/HazyResearch/manifest
    transform: "cot.gsm8k.transform_cot_gsm8k.CoTGSM8KTransform"  # user-defined transform class to build the prompts
    extract: "cot.gsm8k.extract_cot_gsm8k.CoTGSM8KExtract"  # user-defined extract class to extract the answers from output
    extraction_regex: ".*The answer is (.*).\n?"  # user-defined regex to extract the answer from output
    prompt_file_path: "cot/gsm8k/prompt.txt"  # prompt file path
    max_tokens: 512  # max generated tokens
    temperature: 0  # temperature for generated tokens
    engine: code-davinci-002  # LLM engine
    stop_sequence: "\n\n"  # stop sequence for generation
metrics:
- " exact_match " # metrics to evaluate the results用户可以创建transform和extract类以自定义及时生成并回答提取过程。可以根据用户的需要更换或指定提示文件。
To run an experiment, specify the experiment name and other meta configurations on the command line under the examples/ directory.
For example, run the following command to run Chain-of-Thought on GSM8K:
python run_experiment.py \
  --exp_name cot-gsm8k \
  --num_test_samples 300

For a new combination of method and task, you can simply add a new config file under examples/configs/ and then run the command.
.
├── examples
│ ├── configs # config files for experiments
│ ├── main.py # one sample demo script
│ └── run_experiment.py # experiment script
├── hub # hub contains static files for methods and tasks
│ ├── cot # method Chain-of-Thought
│ │ ├── gsm8k # task GSM8K, containing prompt file and transform/extract classes, etc.
│ │ └── ...
│ ├── ama_prompting # method Ask Me Anything
│ ├── binder # method Binder
│ ├── db_text2sql # method text2sql
│ ├── react # method ReAct
│ ├── standard # method standard prompting
│ └── zero_shot_cot # method zero-shot Chain-of-Thought
├── humanprompt # humanprompt package, containing building blocks for the complete prompting pipeline
│ ├── artifacts
│ │ ├── artifact.py
│ │ └── hub
│ ├── components # key components for the prompting pipeline
│ │ ├── aggregate # aggregate classes to aggregate the answers
│ │ ├── extract # extract classes to extract the answers from output
│ │ ├── post_hoc.py # post-hoc processing
│ │ ├── prompt.py # prompt classes to build the prompts
│ │ ├── retrieve # retrieve classes to retrieve in-context examples
│ │ └── transform # transform classes to transform the raw data to the method's input format
│ ├── evaluators # evaluators
│ │ └── evaluator.py # evaluator class to evaluate the dataset results
│ ├── methods # prompting methods, usually one method is related to one paper
│ │ ├── ama_prompting # Ask Me Anything(https://arxiv.org/pdf/2210.02441.pdf)
│ │ ├── binder # Binder(https://arxiv.org/pdf/2210.02875.pdf)
│ │ └── ...
│ ├── tasks # dataset loading and preprocessing
│ │ ├── add_sub.py # AddSub dataset
│ │ ├── wikitq.py # WikiTableQuestions dataset
│ │ └── ...
│ ├── third_party # third party packages
│ └── utils # utils
│ ├── config_utils.py
│ └── integrations.py
└── tests # test scripts
├── conftest.py
├── test_datasetloader.py
└── test_method.py
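As a rough mental model of how these building blocks fit together, the sketch below walks a dataset through transform, LLM call, extract, and metric. It is a conceptual outline, not the package's actual interfaces; every name in it is illustrative.

# Conceptual flow only: transform a raw item into a prompt, query the LLM,
# extract the answer, then score all predictions with a metric.
def run_pipeline(data_items, transform, llm_client, extract, metric):
    predictions, references = [], []
    for item in data_items:
        prompt = transform(item)                  # components/transform
        raw_output = llm_client(prompt)           # LLM API call (e.g., via manifest)
        predictions.append(extract(raw_output))   # components/extract
        references.append(item["answer"])         # gold answer from tasks/
    return metric(predictions, references)        # evaluators/ + metrics in the config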
This repository is designed for researchers to quickly use and easily manipulate different prompting methods. We have spent a lot of effort making it easy to extend and use, and we hope you will contribute to this repo.
If you are interested in contributing a method to this framework, you can:
Add your method in the humanprompt/methods folder. To do so, follow these steps:
1. Create a branch from the main branch, named after your method.
2. Add your code in ./humanprompt/methods, i.e., add your method to the ./humanprompt/methods/your_method_name folder (a minimal skeleton is sketched below, after the pre-commit commands).
3. Create the method hub in ./hub/your_method_name with a config for the basic usage of this method.
4. Add a minimal demo in ./examples and tests for the method.
5. Open a pull request to the main branch.
We use pre-commit to control code quality. Before committing, please make sure to run the following to lint your code and fix any problems:
pip install pre-commit
pre-commit install # install all hooks
pre-commit run --all-files # trigger all hooks
You can use git commit --no-verify to skip the hooks and let us handle it later.
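For orientation, here is a hypothetical skeleton of what a new method class under ./humanprompt/methods/your_method_name might look like. The real base class, registration mechanism, and constructor arguments are defined by the package and may differ; everything below is an assumption for illustration.

class YourMethod:
    # Hypothetical skeleton only; the actual base class lives in humanprompt/methods.
    def __init__(self, transform, extract, **method_args):
        self.transform = transform        # builds the prompt from a data item
        self.extract = extract            # parses the answer from the raw output
        self.method_args = method_args    # e.g., engine, temperature, stop_sequence

    def run(self, data_item, **kwargs):
        prompt = self.transform.transform(data_item)
        raw_output = self.query_lm(prompt)        # delegate to the configured LLM client
        return self.extract.extract(raw_output)

    def query_lm(self, prompt):
        raise NotImplementedError("Hook this up to an LLM client, e.g., manifest.")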
If you find this repository useful, please cite our project and Manifest:
@software{humanprompt,
  author = {Tianbao Xie and
            Zhoujun Cheng and
            Yiheng Xu and
            Peng Shi and
            Tao Yu},
  title = {A framework for human-readable prompt-based method with large language models},
  howpublished = {\url{https://github.com/hkunlp/humanprompt}},
  year = {2022},
  month = {October}
}

@misc{orr2022manifest,
  author = {Orr, Laurel},
  title = {Manifest},
  year = {2022},
  publisher = {GitHub},
  howpublished = {\url{https://github.com/HazyResearch/manifest}},
}