# LLM Foo
LLM Foo is a cutting-edge project that blends the art of Kung Fu with the science of Large Language Models... or, in practice, it is about automatically building OpenAI tool JSON schemas, parsing the tool calls, and constructing the results for chat models. There is also a second utility, `is_statement_true`, which uses the genius logit_bias trick so that the answer costs only a single output token.
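The logit_bias trick itself is not spelled out in this README, but the idea is simple enough to sketch. The snippet below is illustrative only, not the package's implementation: bias the `Yes`/`No` tokens so the model can effectively answer with nothing else, and request exactly one output token.

```python
import tiktoken
from openai import OpenAI

client = OpenAI()
enc = tiktoken.encoding_for_model("gpt-4-1106-preview")
yes_id, no_id = enc.encode("Yes")[0], enc.encode("No")[0]

response = client.chat.completions.create(
    model="gpt-4-1106-preview",
    messages=[{"role": "user", "content": "Answer Yes or No: Earth is a planet."}],
    max_tokens=1,                                    # one output token is enough
    logit_bias={str(yes_id): 100, str(no_id): 100},  # steer the output towards Yes/No
)
print(response.choices[0].message.content)  # "Yes" or "No"
```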
But hey, I hope this will grow into a set of small, useful LLM helper functions that make building things easier, because the current bleeding-edge APIs are a bit messy and I think we can do better.

## Installation

```bash
pip install llmfoo
```

You need `OPENAI_API_KEY` set in your environment and the ability to call the `gpt-4-1106-preview` model.
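The helpers read the key from the environment, so a quick sanity check up front can save a confusing failure later. A minimal sketch; it only checks the variable the README asks for:

```python
import os

# Fail fast with a clear message if the API key is not configured.
if not os.environ.get("OPENAI_API_KEY"):
    raise RuntimeError("Set OPENAI_API_KEY before using llmfoo")
```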
## is_statement_true

`is_statement_true` should be easy to understand: give it a natural-language statement and it is checked against your own criteria, or against general truthfulness if no criteria are given. You get a boolean back.
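A minimal usage sketch (the statements and criteria here are made up; the call signature matches the tests further down):

```python
from llmfoo import is_statement_true

# Checked against general truthfulness
assert is_statement_true("Water boils at 100 degrees celsius at sea level")

# Checked against caller-supplied criteria
assert not is_statement_true("It is 25 degrees celsius outside",
                             criteria="The temperature is below freezing")
```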
## pdf2md

The newly introduced `pdf2md` feature converts PDF documents into Markdown, making them easier to process and to integrate with LLM-based systems. It is particularly useful for extracting text and tables from PDFs and turning them into a more manageable format.
This feature relies on `pdftocairo` to convert PDF pages into images.

```python
from pathlib import Path

from llmfoo.pdf2md import process_pdf

pdf_path = Path("path/to/your/document.pdf")
output_dir = Path("path/to/output/directory")

# Process the PDF and generate Markdown
markdown_file = process_pdf(pdf_path, output_dir)
```

This processes each page of the PDF, attempting to extract text, figures, and tables, and converts them into a Markdown file in the specified output directory.
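Assuming `process_pdf` returns the path of the generated Markdown file, as the variable name above suggests, the output can then be read back and fed into any other text tooling:

```python
# Hypothetical follow-up: read the generated Markdown back as plain text.
markdown_text = markdown_file.read_text(encoding="utf-8")
print(markdown_text[:500])
```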
## @tool

Tool functions are created with the `@tool` annotation. A decorated function gets:

- `openai_schema` – returns the schema (if you are not happy with what the machine made, you can edit the generated JSON by hand)
- `openai_tool_call` – executes the tool call and returns the result in the Chat API message format
- `openai_tool_output` – executes the tool call and returns the result in the Assistants API tool-output format

The tests below show the tool helpers in action, together with `is_statement_true`:

```python
from time import sleep
from openai import OpenAI

from llmfoo.functions import tool
from llmfoo import is_statement_true


def test_is_statement_true_with_default_criteria():
    assert is_statement_true("Earth is a planet.")
    assert not is_statement_true("1 + 2 = 5")


def test_is_statement_true_with_own_criteria():
    assert not is_statement_true("Temperature outside is -2 degrees celsius",
                                 criteria="Temperature above 0 degrees celsius")
    assert is_statement_true("1984 was written by George Orwell",
                             criteria="George Orwell is the author of 1984")


def test_is_statement_true_criteria_can_change_truth_value():
    assert is_statement_true("Earth is 3rd planet from the Sun")
    assert not is_statement_true("Earth is 3rd planet from the Sun",
                                 criteria="Earth is stated to be 5th planet from the Sun")


@tool
def adder(x: int, y: int) -> int:
    return x + y


@tool
def multiplier(x: int, y: int) -> int:
    return x * y


client = OpenAI()


def test_chat_completion_with_adder():
    number1 = 3267182746
    number2 = 798472847
    messages = [
        {
            "role": "user",
            "content": f"What is {number1} + {number2}?"
        }
    ]
    response = client.chat.completions.create(
        model="gpt-4-1106-preview",
        messages=messages,
        tools=[adder.openai_schema]
    )
    messages.append(response.choices[0].message)
    messages.append(adder.openai_tool_call(response.choices[0].message.tool_calls[0]))
    response2 = client.chat.completions.create(
        model="gpt-4-1106-preview",
        messages=messages,
        tools=[adder.openai_schema]
    )
    assert str(adder(number1, number2)) in response2.choices[0].message.content.replace(",", "")


def test_assistant_with_multiplier():
    number1 = 1238763428176
    number2 = 172388743612
    assistant = client.beta.assistants.create(
        name="The Calc Machina",
        instructions="You are a calculator with a funny pirate accent.",
        tools=[multiplier.openai_schema],
        model="gpt-4-1106-preview"
    )
    thread = client.beta.threads.create(messages=[
        {
            "role": "user",
            "content": f"What is {number1} * {number2}?"
        }
    ])
    run = client.beta.threads.runs.create(
        thread_id=thread.id,
        assistant_id=assistant.id
    )
    while True:
        run_state = client.beta.threads.runs.retrieve(
            run_id=run.id,
            thread_id=thread.id,
        )
        if run_state.status not in ['in_progress', 'requires_action']:
            break
        if run_state.status == 'requires_action':
            tool_call = run_state.required_action.submit_tool_outputs.tool_calls[0]
            run = client.beta.threads.runs.submit_tool_outputs(
                thread_id=thread.id,
                run_id=run.id,
                tool_outputs=[
                    multiplier.openai_tool_output(tool_call)
                ]
            )
            sleep(1)
        sleep(0.1)
    messages = client.beta.threads.messages.list(thread_id=thread.id)
    assert str(multiplier(number1, number2)) in messages.data[0].content[0].text.value.replace(",", "")
```

## Contributing

Interested in contributing? I would love to get help making this project better! The underlying APIs are still changing and this is only the first version of the system.
## License

This project is licensed under the MIT License.