LLM_json_schema
1.0.0
LLM_json_schema can enforce that the output of an LLM model follows a given JSON schema. The following JSON types can be used: string, number, boolean, array, object.
The output is guaranteed to be correctly formatted.
python3 LLM_json_schema.py \
    --model-path models/Mistral-7B-Instruct-v0.1.gguf \
    --json-schema '{"type":"object", "properties":{"country":{"type":"string"}, "capital":{"type":"string"}}}' \
    --prompt "What is the capital of France?\n\n"

Output:

{"country": "France", "capital": "Paris"}

python3 LLM_json_schema.py \
    --model-path models/Mistral-7B-Instruct-v0.1.gguf \
    --json-schema '{"type":"array", "items":{"type":"number"}}' \
    --prompt "Count until 20.\n\n"

Output:

[1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20]

A bias is added to the logits output by the LLM to enforce that only valid tokens can be selected.
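As a rough illustration of how such a logit bias works, here is a minimal sketch (not the project's actual implementation; candidate_tokens and is_valid_next_token are hypothetical stand-ins for the model's vocabulary and a schema-driven validity check):

import math

def constrained_next_token(logits, candidate_tokens, is_valid_next_token):
    # Add a -inf bias to every token that would violate the JSON schema,
    # so it can never win the argmax below.
    biased = [logit if is_valid_next_token(token) else -math.inf
              for token, logit in zip(candidate_tokens, logits)]
    # Greedy pick among the remaining (valid) tokens.
    best = max(range(len(biased)), key=lambda i: biased[i])
    return candidate_tokens[best]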
cd LLM_json_schema
pip3 install -r requirements.txt

Download an LLM model and convert it to GGUF format.
Example:
mkdir models
cd models
git clone https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1
git clone https://github.com/ggerganov/llama.cpp.git
pip install -r llama.cpp/requirements.txt
python3 llama.cpp/convert.py Mistral-7B-Instruct-v0.1 \
    --outfile Mistral-7B-Instruct-v0.1.gguf \
    --outtype q8_0
cd ..
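To sanity-check the converted file, you can try loading it with the llama-cpp-python package (installed separately with pip install llama-cpp-python; this quick check is a sketch, not part of this project):

# Requires: pip install llama-cpp-python (not part of this project).
from llama_cpp import Llama

# Load the converted GGUF file and generate a few tokens.
llm = Llama(model_path="models/Mistral-7B-Instruct-v0.1.gguf")
result = llm("The capital of France is", max_tokens=8)
print(result["choices"][0]["text"])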
usage: LLM_json_schema.py [-h] --model-path MODEL_PATH --prompt PROMPT [--json-schema JSON_SCHEMA]
options:
-h, --help show this help message and exit
--model-path MODEL_PATH
Path to the LLM model in gguf format
--prompt PROMPT Input prompt
--json-schema JSON_SCHEMA
JSON schema to enforce
python3 LLM_json_schema.py \
    --model-path models/Mistral-7B-Instruct-v0.1.gguf \
    --json-schema '{"type":"object", "properties":{"country":{"type":"string"}, "capital":{"type":"string"}}}' \
    --prompt "What is the capital of France?\n\n"

To use it as a Python library:

from LLM_json_schema import run_inference_constrained_by_json_schema
import os

script_path = os.path.dirname(os.path.realpath(__file__))
model_path = os.environ.get('MODEL_PATH', os.path.join(script_path, "./models/Mistral-7B-Instruct-v0.1.gguf"))

prompt = "\n\n### Instruction:\nWhat is the capital of France?\n\n### Response:\n"
json_schema = {"type": "object", "properties": {"country": {"type": "string"}, "capital": {"type": "string"}}}

for chunk in run_inference_constrained_by_json_schema(model_path=model_path, json_schema=json_schema, prompt=prompt):
    print(chunk, end="", flush=True)
print("")
If you use this work, please cite:

@article{duchenne2023llm_json_schema,
  title={LLM Json Schema},
  author={Olivier Duchenne},
  journal={Github},
  url={https://github.com/olivierDuchenne/LLM_json_schema},
  year={2023}
}