LLM_json_schema
1.0.0
LLM_json_schema can force the output of an LLM to follow a given JSON schema. The following types are supported: string, number, boolean, array, and object.
The output is guaranteed to have the correct format.
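Concretely, "correct format" means that the generated text parses as JSON and validates against the schema. For the subset of types this tool supports, a tiny stdlib-only checker (illustrative only, not part of the project) could look like this:

```python
def conforms(value, schema):
    """Illustrative validator for the schema subset this tool supports:
    string, number, boolean, array, object."""
    t = schema.get("type")
    if t == "string":
        return isinstance(value, str)
    if t == "number":
        # bool is a subclass of int in Python, so exclude it explicitly
        return isinstance(value, (int, float)) and not isinstance(value, bool)
    if t == "boolean":
        return isinstance(value, bool)
    if t == "array":
        return isinstance(value, list) and all(
            conforms(v, schema["items"]) for v in value
        )
    if t == "object":
        return isinstance(value, dict) and all(
            k not in value or conforms(value[k], sub)
            for k, sub in schema.get("properties", {}).items()
        )
    return False

schema = {"type": "object",
          "properties": {"country": {"type": "string"},
                         "capital": {"type": "string"}}}
print(conforms({"country": "France", "capital": "Paris"}, schema))  # True
```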
python3 LLM_json_schema.py \
    --model-path models/Mistral-7B-Instruct-v0.1.gguf \
    --json-schema '{"type":"object", "properties":{"country":{"type":"string"}, "capital":{"type":"string"}}}' \
    --prompt "What is the capital of France?\n\n"

Output:

{"country": "France", "capital": "Paris"}

python3 LLM_json_schema.py \
    --model-path models/Mistral-7B-Instruct-v0.1.gguf \
    --json-schema '{"type":"array", "items":{"type":"number"}}' \
    --prompt "Count until 20.\n\n"

Output:

[1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20]

It adds biases to the logits of the LLM output so that only valid tokens can be selected.
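The logit-biasing idea can be sketched as follows. This is a minimal illustration, not the project's actual implementation: the logits of all tokens that are invalid at the current parse state are pushed to negative infinity, so sampling can only pick schema-valid tokens.

```python
import math

def constrain_logits(logits, allowed_token_ids):
    """Bias logits so that only schema-valid tokens can be sampled.

    logits: dict mapping token id -> raw logit
    allowed_token_ids: set of token ids valid at the current parse state
    """
    return {
        tok: (logit if tok in allowed_token_ids else -math.inf)
        for tok, logit in logits.items()
    }

def greedy_pick(logits):
    # Pick the token with the highest (biased) logit
    return max(logits, key=logits.get)

# Toy example: token 1 would win unconstrained, but the schema forbids it
logits = {0: 1.2, 1: 3.5, 2: 0.7}
allowed = {0, 2}
picked = greedy_pick(constrain_logits(logits, allowed))
print(picked)  # 0
```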
cd LLM_json_schema
pip3 install -r requirements.txt

Download an LLM model and convert it to the GGUF format.
Example:
mkdir models
cd models
git clone https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1
git clone https://github.com/ggerganov/llama.cpp.git
pip install -r llama.cpp/requirements.txt
python3 llama.cpp/convert.py Mistral-7B-Instruct-v0.1 \
    --outfile Mistral-7B-Instruct-v0.1.gguf \
    --outtype q8_0
cd ..

usage: LLM_json_schema.py [-h] --model-path MODEL_PATH --prompt PROMPT [--json-schema JSON_SCHEMA]
options:
-h, --help show this help message and exit
--model-path MODEL_PATH
Path to the LLM model in gguf format
--prompt PROMPT Input prompt
--json-schema JSON_SCHEMA
JSON schema to enforce
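A command-line interface with these options could be wired up as below. This is a hypothetical sketch, not the project's actual argument parsing; it shows how `type=json.loads` turns the `--json-schema` string into a Python dict at parse time:

```python
import argparse
import json

parser = argparse.ArgumentParser(prog="LLM_json_schema.py")
parser.add_argument("--model-path", required=True,
                    help="Path to the LLM model in gguf format")
parser.add_argument("--prompt", required=True,
                    help="Input prompt")
parser.add_argument("--json-schema", type=json.loads, default=None,
                    help="JSON schema to enforce")

# Simulate a command line for demonstration
args = parser.parse_args([
    "--model-path", "models/Mistral-7B-Instruct-v0.1.gguf",
    "--prompt", "What is the capital of France?",
    "--json-schema",
    '{"type":"object","properties":{"capital":{"type":"string"}}}',
])
print(args.json_schema["type"])  # object
```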
python3 LLM_json_schema.py --model-path models/Mistral-7B-Instruct-v0.1.gguf --json-schema '{"type":"object", "properties":{"country":{"type":"string"}, "capital":{"type":"string"}}}' --prompt "What is the capital of France?\n\n"

from LLM_json_schema import run_inference_constrained_by_json_schema
import os

script_path = os.path.dirname(os.path.realpath(__file__))
model_path = os.environ.get('MODEL_PATH', os.path.join(script_path, "./models/Mistral-7B-Instruct-v0.1.gguf"))
prompt = "\n\n### Instruction:\nWhat is the capital of France?\n\n### Response:\n"
json_schema = {"type": "object", "properties": {"country": {"type": "string"}, "capital": {"type": "string"}}}

for chunk in run_inference_constrained_by_json_schema(model_path=model_path, json_schema=json_schema, prompt=prompt):
    print(chunk, end="", flush=True)
print("")

If you use this work, please cite the following:
@article{duchenne2023llm_json_schema,
title={LLM Json Schema},
author={Olivier Duchenne},
journal={Github},
url={https://github.com/olivierDuchenne/LLM_json_schema},
year={2023}
}