# llmapi-server

v1.0.0

A self-hosted LLMAPI server.

中文文檔 (Chinese documentation)

llmapi-server is an abstract backend that wraps various large language models (LLMs, such as ChatGPT, GPT-3, and GPT-4) and provides simple access to them through an OpenAPI interface.

⭐ If this project helps you, please give it a star ⭐
```mermaid
graph LR
subgraph llmapi-server
OpenAPI --> session
OpenAPI --> pre_post
subgraph backends
style backends fill:#f9f
pre_post --> chatgpt
pre_post --> dall-e
pre_post --> llama
pre_post --> ...
end
end
text --> OpenAPI
image --> OpenAPI
embedding --> OpenAPI
others --> OpenAPI
```
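The flow in the diagram — an OpenAPI layer that keeps a session table and routes requests through pre/post processing to a pluggable backend — can be sketched as below. The class and method names (`ApiServer`, `MockBackend`) are illustrative only, not the project's actual implementation.

```python
import uuid

class MockBackend:
    """Stands in for a real LLM backend such as chatgpt or dall-e."""
    def ask(self, prompt: str) -> str:
        return f"Text mock reply for your prompt:{prompt}"

class ApiServer:
    """Illustrative session/dispatch layer behind the OpenAPI endpoints."""
    def __init__(self, backends):
        self.backends = backends  # bot_type -> backend instance
        self.sessions = {}        # session id -> backend instance

    def start(self, bot_type: str) -> str:
        # /v1/chat/start: allocate a session bound to one backend
        session = uuid.uuid4().hex[:6]
        self.sessions[session] = self.backends[bot_type]
        return session

    def ask(self, session: str, content: str) -> str:
        # /v1/chat/ask: pre-process, call the backend, post-process
        backend = self.sessions[session]
        return backend.ask(content.strip())

    def end(self, session: str) -> None:
        # /v1/chat/end: release the session
        del self.sessions[session]

server = ApiServer({"mock": MockBackend()})
sid = server.start("mock")
print(server.ask(sid, "hello"))  # Text mock reply for your prompt:hello
server.end(sid)
```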
Currently supported backends:

- `chatgpt`: OpenAI's official ChatGPT interface
- `gpt3`: OpenAI's official GPT-3 interface
- `gpt-embedding`: OpenAI's official embedding interface
- `dall-e`: OpenAI's official DALL·E interface
- `welm`: WeChat's LLM interface
- `newbing`: ChatGPT-based New Bing search (unofficial)

Run from source:

```shell
# python >= 3.8
python3 -m pip install -r requirements.txt
python3 run_api_server.py
```

Or build and run with Docker:

```shell
./build_docker.sh
./start_docker.sh
```

Access the API with curl:

```shell
# 1. Start a new session
curl -X POST -H "Content-Type: application/json" -d '{"bot_type":"mock"}' http://127.0.0.1:5050/v1/chat/start
# response sample: {"code":0,"msg":"Success","session":"123456"}

# 2. Chat with the LLM
curl -X POST -H "Content-Type: application/json" -d '{"session":"123456","content":"hello"}' http://127.0.0.1:5050/v1/chat/ask
# response sample: {"code":0,"msg":"Success","reply":"Text mock reply for your prompt:hello","timestamp":1678865301.0842562}

# 3. Close the session and end the chat
curl -X POST -H "Content-Type: application/json" -d '{"session":"123456"}' http://127.0.0.1:5050/v1/chat/end
# response: {"code":0,"msg":"Success"}
```

Or use the `llmapi_cli` command-line client:

```shell
llmapi_cli --host="http://127.0.0.1:5050" --bot=mock
```
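The three-call session protocol shown above with curl can also be driven from plain Python. This is a minimal sketch using only the standard library; it assumes a server is running locally on port 5050 (e.g. with the `mock` backend), and the helper names are my own, not part of the project.

```python
import json
import urllib.request

HOST = "http://127.0.0.1:5050"

def build_request(path: str, payload: dict) -> urllib.request.Request:
    """Build the JSON POST request shape used by every endpoint."""
    return urllib.request.Request(
        HOST + path,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )

def post(path: str, payload: dict) -> dict:
    """Send a request and decode the server's JSON reply."""
    with urllib.request.urlopen(build_request(path, payload)) as resp:
        return json.loads(resp.read())

def chat_once(prompt: str, bot_type: str = "mock") -> str:
    """Start a session, ask one question, then end the session."""
    session = post("/v1/chat/start", {"bot_type": bot_type})["session"]
    try:
        return post("/v1/chat/ask", {"session": session, "content": prompt})["reply"]
    finally:
        post("/v1/chat/end", {"session": session})

# With the server running: print(chat_once("hello"))
```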
Or use the Python client library:

```python
from llmapi_cli import LLMClient

client = LLMClient(host="http://127.0.0.1:5050", bot="mock")
rep = client.ask("hello")
print(rep)
```

To add a new backend (for example `newllm`):

1. Copy the mock implementation as a starting point: `cp -r mock newllm`, and change the backend name to `newllm`.
2. Add any dependencies the new backend needs inside the `newllm` directory; all development for the backend is confined to this directory.
3. Add support for `newllm` in `backend.py`.
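As a rough illustration of the steps above, a new backend module might look like the sketch below. The real base-class interface lives in the project's source (start from the `mock` backend); `BotBase`, `init`, and the reply shape here are assumptions for illustration only.

```python
# Hypothetical sketch of what newllm's backend class might contain.

class BotBase:
    """Stand-in for the project's real backend base class (assumption)."""
    def init(self) -> bool: ...
    def ask(self, prompt: str) -> dict: ...

class NewLLMBot(BotBase):
    """Backend wrapper for a hypothetical 'newllm' model."""
    def init(self) -> bool:
        # Load credentials / connect to the model service here.
        self.ready = True
        return self.ready

    def ask(self, prompt: str) -> dict:
        # Call the underlying model, then normalize its reply to the
        # {"code": ..., "msg": ..., "reply": ...} shape the server returns.
        reply = f"newllm reply for: {prompt}"
        return {"code": 0, "msg": "Success", "reply": reply}

bot = NewLLMBot()
bot.init()
print(bot.ask("hello")["reply"])  # newllm reply for: hello
```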