llmfoo
LLM FOO is a cutting-edge project that fuses the art of kung fu with the science of large language models... or, in practice, it is about automatically building the OpenAI tool JSON schema, parsing tool calls, and constructing the results for chat models. There is also a second utility, `is_statement_true`, which uses the genius logit_bias trick to spend only a single output token.
But hey, I hope this grows into a set of small, useful LLM helper functions that make building things easier, because the current bleeding-edge APIs are a bit messy and I think we can do better.

Install with `pip install llmfoo`. You need `OPENAI_API_KEY` in your environment and the ability to call the `gpt-4-1106-preview` model.
`is_statement_true` should be easy to understand: you give it a natural-language statement and it is checked against your criteria, or against general truthfulness. You get a boolean back.
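For the curious, the single-output-token logit_bias trick mentioned above can be sketched roughly as follows. This is an illustrative approximation, not llmfoo's actual implementation; the prompt wording and the helper name `naive_is_statement_true` are assumptions. The idea is to boost only the token IDs for "1" and "0" and request exactly one output token.

```python
# Illustrative sketch of the logit_bias trick, NOT llmfoo's real implementation.
# The model is nudged to answer with a single token: "1" (true) or "0" (false).
import tiktoken
from openai import OpenAI

client = OpenAI()
enc = tiktoken.encoding_for_model("gpt-4-1106-preview")
TRUE_TOKEN = enc.encode("1")[0]
FALSE_TOKEN = enc.encode("0")[0]


def naive_is_statement_true(statement: str, criteria: str = "general truthfulness") -> bool:
    response = client.chat.completions.create(
        model="gpt-4-1106-preview",
        messages=[
            {"role": "system", "content": "Answer 1 if the statement satisfies the criteria, otherwise 0."},
            {"role": "user", "content": f"Statement: {statement}\nCriteria: {criteria}"},
        ],
        logit_bias={str(TRUE_TOKEN): 100, str(FALSE_TOKEN): 100},  # strongly favour these two tokens
        max_tokens=1,  # exactly one output token
    )
    return response.choices[0].message.content == "1"
```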
The newly introduced `pdf2md` feature converts PDF documents into Markdown, making them easier to process and integrate with LLM-based systems. It is particularly useful for extracting text and tables from PDFs and turning them into a more manageable format.
It uses `pdftocairo` to convert the PDF pages into images.

```python
from llmfoo.pdf2md import process_pdf
from pathlib import Path

pdf_path = Path("path/to/your/document.pdf")
output_dir = Path("path/to/output/directory")

# Process the PDF and generate Markdown
markdown_file = process_pdf(pdf_path, output_dir)
```

This processes every page of the PDF, attempting to extract text, figures, and tables, and converts them into a Markdown file in the specified output directory.
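As a small follow-up, assuming (as the variable name above suggests) that `process_pdf` returns the path of the generated Markdown file, you can read the result straight back in:

```python
# Assumes process_pdf returns the Path of the generated Markdown file.
md_text = markdown_file.read_text(encoding="utf-8")
print(md_text[:500])  # peek at the first 500 characters
```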
Functions become tools with the `@tool` annotation. A decorated function gets three helpers:

- `openai_schema` returns the generated schema (if you are not happy with what the machine made, you can edit it from the JSON); see the sketch just below
- `openai_tool_call` executes the tool call and returns the result in the chat API message format
- `openai_tool_output` executes the tool call and returns the result in the Assistants API tool output format
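To give a feel for what `openai_schema` exposes, a schema for the `adder` tool used below might look roughly like this. It is an illustrative sketch of the standard OpenAI function-tool shape, not llmfoo's exact output; the description text in particular is made up.

```python
# Illustrative sketch only; field contents are assumptions, not llmfoo's exact output.
adder_schema_sketch = {
    "type": "function",
    "function": {
        "name": "adder",
        "description": "Add two integers.",
        "parameters": {
            "type": "object",
            "properties": {
                "x": {"type": "integer"},
                "y": {"type": "integer"},
            },
            "required": ["x", "y"],
        },
    },
}
```

The full usage, in test form, follows.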
```python
from time import sleep

from openai import OpenAI

from llmfoo.functions import tool
from llmfoo import is_statement_true

def test_is_statement_true_with_default_criteria():
    assert is_statement_true("Earth is a planet.")
    assert not is_statement_true("1 + 2 = 5")

def test_is_statement_true_with_own_criteria():
    assert not is_statement_true("Temperature outside is -2 degrees celsius",
                                 criteria="Temperature above 0 degrees celsius")
    assert is_statement_true("1984 was written by George Orwell",
                             criteria="George Orwell is the author of 1984")

def test_is_statement_true_criteria_can_change_truth_value():
    assert is_statement_true("Earth is 3rd planet from the Sun")
    assert not is_statement_true("Earth is 3rd planet from the Sun",
                                 criteria="Earth is stated to be 5th planet from the Sun")

@tool
def adder(x: int, y: int) -> int:
    return x + y

@tool
def multiplier(x: int, y: int) -> int:
    return x * y

client = OpenAI()

def test_chat_completion_with_adder():
    number1 = 3267182746
    number2 = 798472847
    messages = [
        {
            "role": "user",
            "content": f"What is {number1} + {number2}?"
        }
    ]
    response = client.chat.completions.create(
        model="gpt-4-1106-preview",
        messages=messages,
        tools=[adder.openai_schema]
    )
    messages.append(response.choices[0].message)
    messages.append(adder.openai_tool_call(response.choices[0].message.tool_calls[0]))
    response2 = client.chat.completions.create(
        model="gpt-4-1106-preview",
        messages=messages,
        tools=[adder.openai_schema]
    )
    assert str(adder(number1, number2)) in response2.choices[0].message.content.replace(",", "")

def test_assistant_with_multiplier():
    number1 = 1238763428176
    number2 = 172388743612
    assistant = client.beta.assistants.create(
        name="The Calc Machina",
        instructions="You are a calculator with a funny pirate accent.",
        tools=[multiplier.openai_schema],
        model="gpt-4-1106-preview"
    )
    thread = client.beta.threads.create(messages=[
        {
            "role": "user",
            "content": f"What is {number1} * {number2}?"
        }
    ])
    run = client.beta.threads.runs.create(
        thread_id=thread.id,
        assistant_id=assistant.id
    )
    while True:
        # Poll the run until it leaves the in_progress / requires_action states
        run_state = client.beta.threads.runs.retrieve(
            run_id=run.id,
            thread_id=thread.id,
        )
        if run_state.status not in ['in_progress', 'requires_action']:
            break
        if run_state.status == 'requires_action':
            # Execute the requested tool call and submit its output back to the run
            tool_call = run_state.required_action.submit_tool_outputs.tool_calls[0]
            run = client.beta.threads.runs.submit_tool_outputs(
                thread_id=thread.id,
                run_id=run.id,
                tool_outputs=[
                    multiplier.openai_tool_output(tool_call)
                ]
            )
            sleep(1)
        sleep(0.1)
    messages = client.beta.threads.messages.list(thread_id=thread.id)
    assert str(multiplier(number1, number2)) in messages.data[0].content[0].text.value.replace(",", "")
```

Interested in contributing? I would love help making this project better! The APIs underneath are in flux and the whole thing is still a first version.
This project is licensed under the MIT License.