Easily integrate Large Language Models into your Python code. Simply use the @prompt and @chatprompt decorators to create functions that return structured output from the LLM. Mix LLM queries and function calling with regular Python code to create complex logic.
Function calling and parallel function calling are available via the FunctionCall and ParallelFunctionCall return types, and asyncio is supported by simply defining your prompt-functions with async def.

Install with pip install magentic, or with uv:

```sh
uv add magentic
```

Configure your OpenAI API key by setting the OPENAI_API_KEY environment variable. To configure a different LLM provider, see the configuration details below.
The @prompt decorator allows you to define a template for a Large Language Model (LLM) prompt as a Python function. When this function is called, the arguments are inserted into the template, then this prompt is sent to an LLM which generates the function output.
```python
from magentic import prompt


@prompt('Add more "dude"ness to: {phrase}')
def dudeify(phrase: str) -> str: ...  # No function body as this is never executed


dudeify("Hello, how are you?")
# "Hey, dude! What's up? How's it going, my man?"
```

The @prompt decorator will respect the return type annotation of the decorated function. This can be any type supported by pydantic, including a pydantic model.
```python
from magentic import prompt
from pydantic import BaseModel


class Superhero(BaseModel):
    name: str
    age: int
    power: str
    enemies: list[str]


@prompt("Create a Superhero named {name}.")
def create_superhero(name: str) -> Superhero: ...


create_superhero("Garden Man")
# Superhero(name='Garden Man', age=30, power='Control over plants', enemies=['Pollution Man', 'Concrete Woman'])
```

See Structured Outputs for more.
The @chatprompt decorator works just like @prompt, but allows you to pass chat messages as a template rather than a single text prompt. This can be used to provide a system message, or for few-shot prompting where you provide example responses to guide the model's output. Format fields denoted by curly braces {example} will be filled in all messages (except FunctionResultMessage).
```python
from magentic import chatprompt, AssistantMessage, SystemMessage, UserMessage
from pydantic import BaseModel


class Quote(BaseModel):
    quote: str
    character: str


@chatprompt(
    SystemMessage("You are a movie buff."),
    UserMessage("What is your favorite quote from Harry Potter?"),
    AssistantMessage(
        Quote(
            quote="It does not do to dwell on dreams and forget to live.",
            character="Albus Dumbledore",
        )
    ),
    UserMessage("What is your favorite quote from {movie}?"),
)
def get_movie_quote(movie: str) -> Quote: ...


get_movie_quote("Iron Man")
# Quote(quote='I am Iron Man.', character='Tony Stark')
```

See Chat Prompting for more.
The LLM can also decide to call functions. In this case, the @prompt-decorated function returns a FunctionCall object, which can be called to execute the function using the arguments provided by the LLM.
```python
from typing import Literal

from magentic import prompt, FunctionCall


def search_twitter(query: str, category: Literal["latest", "people"]) -> str:
    """Searches Twitter for a query."""
    print(f"Searching Twitter for {query!r} in category {category!r}")
    return "<twitter results>"


def search_youtube(query: str, channel: str = "all") -> str:
    """Searches YouTube for a query."""
    print(f"Searching YouTube for {query!r} in channel {channel!r}")
    return "<youtube results>"


@prompt(
    "Use the appropriate search function to answer: {question}",
    functions=[search_twitter, search_youtube],
)
def perform_search(question: str) -> FunctionCall[str]: ...


output = perform_search("What is the latest news on LLMs?")
print(output)
# > FunctionCall(<function search_twitter at 0x10c367d00>, 'LLMs', 'latest')
output()
# > Searching Twitter for 'Large Language Models news' in category 'latest'
# '<twitter results>'
```

See Function Calling for more.
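The ParallelFunctionCall return type mentioned in the introduction works the same way when the LLM chooses to make several function calls at once. The following is a rough sketch rather than an excerpt from the docs: the prompt, functions, and outputs are illustrative, and calling the ParallelFunctionCall object executes all of the calls and returns a tuple of their results.

```python
from magentic import prompt, ParallelFunctionCall


def plus(a: int, b: int) -> int:
    """Adds two numbers."""
    return a + b


def minus(a: int, b: int) -> int:
    """Subtracts the second number from the first."""
    return a - b


@prompt(
    "Sum {a} and {b}. Also subtract {a} from {b}.",
    functions=[plus, minus],
)
def plus_and_minus(a: int, b: int) -> ParallelFunctionCall[int]: ...


output = plus_and_minus(2, 3)
print(list(output))  # The individual FunctionCall objects chosen by the LLM
output()
# e.g. (5, 1)
```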
Sometimes the LLM needs to make one or more function calls to generate its final answer. The @prompt_chain decorator will automatically resolve FunctionCall objects and pass the output back to the LLM, continuing until the final answer is reached.

In the following example, when describe_weather is called the LLM first calls the get_current_weather function, then uses the result of this to formulate its final answer, which gets returned.
```python
from magentic import prompt_chain


def get_current_weather(location, unit="fahrenheit"):
    """Get the current weather in a given location"""
    # Pretend to query an API
    return {
        "location": location,
        "temperature": "72",
        "unit": unit,
        "forecast": ["sunny", "windy"],
    }


@prompt_chain(
    "What's the weather like in {city}?",
    functions=[get_current_weather],
)
def describe_weather(city: str) -> str: ...


describe_weather("Boston")
# 'The current weather in Boston is 72°F and it is sunny and windy.'
```

LLM-powered functions created with @prompt, @chatprompt, and @prompt_chain can be supplied as functions to other @prompt/@prompt_chain decorators, just like regular Python functions. This enables building increasingly complex LLM-powered functionality, while allowing the individual components to be tested and improved in isolation, as sketched below.
The StreamedStr (and AsyncStreamedStr) class can be used to stream the output of the LLM. This allows you to process the text while it is being generated, rather than receiving the whole output at once.
```python
from magentic import prompt, StreamedStr


@prompt("Tell me about {country}")
def describe_country(country: str) -> StreamedStr: ...


# Print the chunks while they are being received
for chunk in describe_country("Brazil"):
    print(chunk, end="")
# 'Brazil, officially known as the Federative Republic of Brazil, is ...'
```

Multiple StreamedStr objects can be created at the same time to stream LLM outputs concurrently. In the example below, generating the descriptions for multiple countries takes approximately the same amount of time as for a single country.
```python
from time import time

countries = ["Australia", "Brazil", "Chile"]

# Generate the descriptions one at a time
start_time = time()
for country in countries:
    # Converting `StreamedStr` to `str` blocks until the LLM output is fully generated
    description = str(describe_country(country))
    print(f"{time() - start_time:.2f}s : {country} - {len(description)} chars")
# 22.72s : Australia - 2130 chars
# 41.63s : Brazil - 1884 chars
# 74.31s : Chile - 2968 chars

# Generate the descriptions concurrently by creating the StreamedStrs at the same time
start_time = time()
streamed_strs = [describe_country(country) for country in countries]
for country, streamed_str in zip(countries, streamed_strs):
    description = str(streamed_str)
    print(f"{time() - start_time:.2f}s : {country} - {len(description)} chars")
# 22.79s : Australia - 2147 chars
# 23.64s : Brazil - 2202 chars
# 24.67s : Chile - 2186 chars
```

Structured outputs can also be streamed from the LLM by using the return type annotation Iterable (or AsyncIterable). This allows each item to be processed while the next one is being generated.
```python
from collections.abc import Iterable
from time import time

from magentic import prompt
from pydantic import BaseModel


class Superhero(BaseModel):
    name: str
    age: int
    power: str
    enemies: list[str]


@prompt("Create a Superhero team named {name}.")
def create_superhero_team(name: str) -> Iterable[Superhero]: ...


start_time = time()
for hero in create_superhero_team("The Food Dudes"):
    print(f"{time() - start_time:.2f}s : {hero}")
# 2.23s : name='Pizza Man' age=30 power='Can shoot pizza slices from his hands' enemies=['The Hungry Horde', 'The Junk Food Gang']
# 4.03s : name='Captain Carrot' age=35 power='Super strength and agility from eating carrots' enemies=['The Sugar Squad', 'The Greasy Gang']
# 6.05s : name='Ice Cream Girl' age=25 power='Can create ice cream out of thin air' enemies=['The Hot Sauce Squad', 'The Healthy Eaters']
```

See Streaming for more.
Asynchronous functions (coroutines) can be used to query the LLM concurrently. This can greatly increase the overall speed of generation, and it also allows other asynchronous code to run while waiting on LLM output. In the example below, the LLM generates a description for each president while the next president in the list is still being generated. Measuring the characters generated per second shows that this example achieves roughly a 7x speedup over serial processing.
```python
import asyncio
from time import time
from typing import AsyncIterable

from magentic import prompt


@prompt("List ten presidents of the United States")
async def iter_presidents() -> AsyncIterable[str]: ...


@prompt("Tell me more about {topic}")
async def tell_me_more_about(topic: str) -> str: ...


# For each president listed, generate a description concurrently
start_time = time()
tasks = []
async for president in await iter_presidents():
    # Use asyncio.create_task to schedule the coroutine for execution before awaiting it
    # This way descriptions will start being generated while the list of presidents is still being generated
    task = asyncio.create_task(tell_me_more_about(president))
    tasks.append(task)

descriptions = await asyncio.gather(*tasks)

# Measure the characters per second
total_chars = sum(len(desc) for desc in descriptions)
time_elapsed = time() - start_time
print(total_chars, time_elapsed, total_chars / time_elapsed)
# 24575 28.70 856.07

# Measure the characters per second to describe a single president
start_time = time()
out = await tell_me_more_about("George Washington")
time_elapsed = time() - start_time
print(len(out), time_elapsed, len(out) / time_elapsed)
# 2206 18.72 117.78
```

See Asyncio for more.
A few additional features to note:

- The functions argument to @prompt can contain async (coroutine) functions. When the corresponding FunctionCall objects are called, the result must be awaited.
- The Annotated type annotation can be used to provide descriptions and other metadata for function parameters. See the pydantic documentation on using Field to describe function arguments, and the sketch following this list.
- The @prompt and @prompt_chain decorators also accept a model argument. You can pass an instance of OpenaiChatModel to use GPT-4 or to configure a different temperature. See below.
- Additional types can be registered for use as return type annotations in @prompt functions.
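As a rough sketch of the Annotated + Field approach (the function, parameter names, and descriptions here are illustrative, not from the magentic docs), parameter descriptions can be attached like this so they are included in the schema shown to the LLM:

```python
from typing import Annotated, Literal

from magentic import FunctionCall, prompt
from pydantic import Field


def search_books(
    query: Annotated[str, Field(description="Terms to search for in the catalog")],
    section: Annotated[
        Literal["fiction", "nonfiction"],
        Field(description="Which section of the catalog to search"),
    ] = "fiction",
) -> str:
    """Searches the book catalog for a query."""
    return "<book results>"


@prompt("Find a book to answer: {question}", functions=[search_books])
def find_book(question: str) -> FunctionCall[str]: ...
```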
Magentic supports multiple LLM providers, or "backends":

- openai: the default backend, which uses the openai Python package. Supports all features of magentic.

  ```python
  from magentic import OpenaiChatModel
  ```

- anthropic: uses the anthropic Python package. Supports all features of magentic, however streaming responses are currently received all at once.

  ```sh
  pip install "magentic[anthropic]"
  ```

  ```python
  from magentic.chat_model.anthropic_chat_model import AnthropicChatModel
  ```

- litellm: uses the litellm Python package to enable querying LLMs from many different providers. Note: some models may not support all features of magentic, e.g. function calling/structured output and streaming.

  ```sh
  pip install "magentic[litellm]"
  ```

  ```python
  from magentic.chat_model.litellm_chat_model import LitellmChatModel
  ```

- mistral: uses the openai Python package with some small modifications to make the API queries compatible with the Mistral API. Supports all features of magentic, however tool calls (including structured outputs) are not streamed, so they are received all at once. Note: a future version of magentic may switch to using the mistral Python package.

  ```python
  from magentic.chat_model.mistral_chat_model import MistralChatModel
  ```

The backend and LLM (ChatModel) used by magentic can be configured in several ways. When a magentic function is called, the ChatModel to use follows this order of preference:
1. The ChatModel instance provided as the model argument to the magentic decorator
2. The current chat model context, created using with MyChatModel:
3. The global ChatModel created from environment variables and the default settings in src/magentic/settings.py

```python
from magentic import OpenaiChatModel, prompt
from magentic.chat_model.litellm_chat_model import LitellmChatModel


@prompt("Say hello")
def say_hello() -> str: ...


@prompt(
    "Say hello",
    model=LitellmChatModel("ollama_chat/llama3"),
)
def say_hello_litellm() -> str: ...


say_hello()  # Uses env vars or default settings

with OpenaiChatModel("gpt-3.5-turbo", temperature=1):
    say_hello()  # Uses openai with gpt-3.5-turbo and temperature=1 due to context manager
    say_hello_litellm()  # Uses litellm with ollama_chat/llama3 because explicitly configured
```

The following environment variables can be set.
| Environment Variable | Description | Example |
|---|---|---|
| MAGENTIC_BACKEND | The package to use as the LLM backend | anthropic / openai / litellm |
| MAGENTIC_ANTHROPIC_MODEL | Anthropic model | claude-3-haiku-20240307 |
| MAGENTIC_ANTHROPIC_API_KEY | Anthropic API key to be used by magentic | sk-... |
| MAGENTIC_ANTHROPIC_BASE_URL | Base URL for an Anthropic-compatible API | http://localhost:8080 |
| MAGENTIC_ANTHROPIC_MAX_TOKENS | Max number of generated tokens | 1024 |
| MAGENTIC_ANTHROPIC_TEMPERATURE | Temperature | 0.5 |
| MAGENTIC_LITELLM_MODEL | LiteLLM model | claude-2 |
| MAGENTIC_LITELLM_API_BASE | The base URL to query | http://localhost:11434 |
| MAGENTIC_LITELLM_MAX_TOKENS | LiteLLM max number of generated tokens | 1024 |
| MAGENTIC_LITELLM_TEMPERATURE | LiteLLM temperature | 0.5 |
| MAGENTIC_MISTRAL_MODEL | Mistral model | mistral-large-latest |
| MAGENTIC_MISTRAL_API_KEY | Mistral API key to be used by magentic | XEG... |
| MAGENTIC_MISTRAL_BASE_URL | Base URL for a Mistral-compatible API | http://localhost:8080 |
| MAGENTIC_MISTRAL_MAX_TOKENS | Max number of generated tokens | 1024 |
| MAGENTIC_MISTRAL_SEED | Seed for deterministic sampling | 42 |
| MAGENTIC_MISTRAL_TEMPERATURE | Temperature | 0.5 |
| MAGENTIC_OPENAI_MODEL | OpenAI model | gpt-4 |
| MAGENTIC_OPENAI_API_KEY | OpenAI API key to be used by magentic | sk-... |
| MAGENTIC_OPENAI_API_TYPE | Allowed options: "openai", "azure" | azure |
| MAGENTIC_OPENAI_BASE_URL | Base URL for an OpenAI-compatible API | http://localhost:8080 |
| MAGENTIC_OPENAI_MAX_TOKENS | OpenAI max number of generated tokens | 1024 |
| MAGENTIC_OPENAI_SEED | Seed for deterministic sampling | 42 |
| MAGENTIC_OPENAI_TEMPERATURE | OpenAI temperature | 0.5 |
When using the openai backend, setting the MAGENTIC_OPENAI_BASE_URL environment variable or using OpenaiChatModel(..., base_url="http://localhost:8080") in code allows you to use magentic with any OpenAI-compatible API, e.g. Azure OpenAI Service, LiteLLM OpenAI Proxy Server, LocalAI. Note that if the API does not support tool calls then you will not be able to create prompt-functions that return Python objects, but other features of magentic will still work.
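For example, a minimal sketch of pointing the openai backend at a local OpenAI-compatible server (the model name and URL below are placeholders for your own deployment):

```python
from magentic import OpenaiChatModel, prompt


@prompt(
    "Say hello",
    # Placeholder model name and base URL for a local OpenAI-compatible server
    model=OpenaiChatModel("local-model", base_url="http://localhost:8080"),
)
def say_hello_local() -> str: ...
```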
To use Azure with the openai backend, you will need to set the MAGENTIC_OPENAI_API_TYPE environment variable to "azure" or use OpenaiChatModel(..., api_type="azure"), and also set the environment variables needed by the openai package to access Azure. See https://github.com/openai/openai-python#microsoft-azure-openai
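A minimal sketch of this, assuming your Azure deployment name is used as the model name (the deployment name below is a placeholder):

```python
from magentic import OpenaiChatModel, prompt


@prompt("Say hello")
def say_hello() -> str: ...


# api_type="azure" switches the openai backend to Azure; the endpoint, API key,
# and API version are taken from the environment variables required by the
# openai package (see the link above).
with OpenaiChatModel("my-gpt-4-deployment", api_type="azure"):
    say_hello()
```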
Many type checkers will raise warnings or errors for functions decorated with @prompt because the function has no body or return value. There are several ways to deal with these.

1. Disable the check globally for the type checker. For example, in mypy by disabling the error code empty-body.

   ```toml
   # pyproject.toml
   [tool.mypy]
   disable_error_code = ["empty-body"]
   ```

2. Make the function body ... (this does not satisfy mypy) or raise.

   ```python
   @prompt("Choose a color")
   def random_color() -> str: ...
   ```

3. Use the comment # type: ignore[empty-body] on each function. In this case you can add a docstring instead of ....

   ```python
   @prompt("Choose a color")
   def random_color() -> str:  # type: ignore[empty-body]
       """Returns a random color."""
   ```