Easily integrate Large Language Models into your Python code. Simply use the @prompt and @chatprompt decorators to create functions that return structured output from the LLM. Mix LLM queries and function calling with regular Python code to create complex logic.
Function calling and parallel function calling are supported via the FunctionCall and ParallelFunctionCall return types, and asyncio is supported by simply defining the prompt function with async def.

Install with pip install magentic, or using uv:

uv add magentic

Configure your OpenAI API key by setting the OPENAI_API_KEY environment variable. To configure a different LLM provider, see the configuration section below.
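As a minimal sketch (assuming the default openai backend), the key can also be set from within Python before any magentic-decorated function is called, although exporting OPENAI_API_KEY in your shell or a .env file is the usual approach:

import os

# Placeholder value; prefer setting the variable outside of your code
os.environ["OPENAI_API_KEY"] = "sk-..."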
The @prompt decorator allows you to define a template for a Large Language Model (LLM) prompt as a Python function. When this function is called, the arguments are inserted into the template, and the resulting prompt is sent to an LLM, which generates the function output.
from magentic import prompt
@prompt('Add more "dude"ness to: {phrase}')
def dudeify(phrase: str) -> str: ...  # No function body as this is never executed

dudeify("Hello, how are you?")
# "Hey, dude! What's up? How's it going, my man?"

The @prompt decorator will respect the return type annotation of the decorated function. This can be any type supported by pydantic, including a pydantic model.
from magentic import prompt
from pydantic import BaseModel
class Superhero(BaseModel):
    name: str
    age: int
    power: str
    enemies: list[str]

@prompt("Create a Superhero named {name}.")
def create_superhero(name: str) -> Superhero: ...

create_superhero("Garden Man")
# Superhero(name='Garden Man', age=30, power='Control over plants', enemies=['Pollution Man', 'Concrete Woman'])

See Structured Outputs for more.
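Plain built-in types also work as return annotations. A minimal sketch (the prompt and function name here are illustrative, assuming the same default model configuration):

from magentic import prompt

@prompt("Is the following statement true or false: {statement}")
def is_true(statement: str) -> bool: ...

is_true("The Earth is flat")  # Returns a Python bool parsed from the LLM output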
The @chatprompt decorator works just like @prompt, but allows you to pass chat messages as a template rather than a single text prompt. This can be used to provide a system message, or for few-shot prompting where example responses guide the model's output. Format fields denoted by curly braces, such as {example}, will be filled in across all messages (except FunctionResultMessage).
from magentic import chatprompt, AssistantMessage, SystemMessage, UserMessage
from pydantic import BaseModel

class Quote(BaseModel):
    quote: str
    character: str

@chatprompt(
    SystemMessage("You are a movie buff."),
    UserMessage("What is your favorite quote from Harry Potter?"),
    AssistantMessage(
        Quote(
            quote="It does not do to dwell on dreams and forget to live.",
            character="Albus Dumbledore",
        )
    ),
    UserMessage("What is your favorite quote from {movie}?"),
)
def get_movie_quote(movie: str) -> Quote: ...

get_movie_quote("Iron Man")
# Quote(quote='I am Iron Man.', character='Tony Stark')

See Chat Prompting for more.
An LLM can also decide to call functions. In that case, the @prompt-decorated function returns a FunctionCall object, which can be called to execute the function using the arguments provided by the LLM.
from typing import Literal
from magentic import prompt, FunctionCall

def search_twitter(query: str, category: Literal["latest", "people"]) -> str:
    """Searches Twitter for a query."""
    print(f"Searching Twitter for {query!r} in category {category!r}")
    return "<twitter results>"

def search_youtube(query: str, channel: str = "all") -> str:
    """Searches YouTube for a query."""
    print(f"Searching YouTube for {query!r} in channel {channel!r}")
    return "<youtube results>"

@prompt(
    "Use the appropriate search function to answer: {question}",
    functions=[search_twitter, search_youtube],
)
def perform_search(question: str) -> FunctionCall[str]: ...

output = perform_search("What is the latest news on LLMs?")
print(output)
# > FunctionCall(<function search_twitter at 0x10c367d00>, 'LLMs', 'latest')

output()
# > Searching Twitter for 'LLMs' in category 'latest'
# '<twitter results>'

See Function Calling for more.
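The ParallelFunctionCall return type mentioned in the introduction works the same way when the LLM chooses to make several function calls at once. A minimal sketch (the plus and minus helper functions and the example prompt are illustrative):

from magentic import prompt, ParallelFunctionCall

def plus(a: int, b: int) -> int:
    return a + b

def minus(a: int, b: int) -> int:
    return a - b

@prompt(
    "Sum {a} and {b}. Also subtract {a} from {b}.",
    functions=[plus, minus],
)
def plus_and_minus(a: int, b: int) -> ParallelFunctionCall[int]: ...

output = plus_and_minus(2, 3)
# Iterating yields the individual FunctionCall objects chosen by the LLM
print(list(output))
# Calling the ParallelFunctionCall executes all of the calls and returns their results as a tuple
print(output())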
Sometimes the LLM needs to make one or more function calls before it can generate its final answer. The @prompt_chain decorator will automatically resolve FunctionCall objects and pass their outputs back to the LLM, continuing until the final answer is reached.

In the following example, when describe_weather is called the LLM first calls the get_current_weather function, then uses the result of this to formulate its final answer, which gets returned.
from magentic import prompt_chain
def get_current_weather(location, unit="fahrenheit"):
    """Get the current weather in a given location"""
    # Pretend to query an API
    return {
        "location": location,
        "temperature": "72",
        "unit": unit,
        "forecast": ["sunny", "windy"],
    }

@prompt_chain(
    "What's the weather like in {city}?",
    functions=[get_current_weather],
)
def describe_weather(city: str) -> str: ...

describe_weather("Boston")
# 'The current weather in Boston is 72°F and it is sunny and windy.'

LLM-powered functions created using @prompt, @chatprompt and @prompt_chain can be supplied as functions to other @prompt/@prompt_chain decorators, just like regular Python functions. This enables increasingly complex LLM-powered functionality, while allowing individual components to be tested and improved in isolation. A sketch of this kind of composition follows below.
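As a minimal sketch of such composition (reusing describe_weather from above; the compare_weather function and its prompt are hypothetical):

from magentic import prompt_chain

# An LLM-powered function is supplied to another decorator just like a regular function
@prompt_chain(
    "Compare the weather in {city1} and {city2}.",
    functions=[describe_weather],
)
def compare_weather(city1: str, city2: str) -> str: ...

compare_weather("Boston", "Dublin")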
The StreamedStr (and AsyncStreamedStr) class can be used to stream the output of the LLM. This allows you to process the text while it is being generated, rather than receiving the whole output at once.
from magentic import prompt, StreamedStr

@prompt("Tell me about {country}")
def describe_country(country: str) -> StreamedStr: ...

# Print the chunks while they are being received
for chunk in describe_country("Brazil"):
    print(chunk, end="")
# 'Brazil, officially known as the Federative Republic of Brazil, is ...'

Multiple StreamedStr objects can be created at the same time to stream LLM outputs concurrently. In the example below, generating descriptions for multiple countries takes approximately the same amount of time as for a single country.
from time import time
countries = [ "Australia" , "Brazil" , "Chile" ]
# Generate the descriptions one at a time
start_time = time ()
for country in countries :
# Converting `StreamedStr` to `str` blocks until the LLM output is fully generated
description = str ( describe_country ( country ))
print ( f" { time () - start_time :.2f } s : { country } - { len ( description ) } chars" )
# 22.72s : Australia - 2130 chars
# 41.63s : Brazil - 1884 chars
# 74.31s : Chile - 2968 chars
# Generate the descriptions concurrently by creating the StreamedStrs at the same time
start_time = time ()
streamed_strs = [ describe_country ( country ) for country in countries ]
for country , streamed_str in zip ( countries , streamed_strs ):
description = str ( streamed_str )
print ( f" { time () - start_time :.2f } s : { country } - { len ( description ) } chars" )
# 22.79s : Australia - 2147 chars
# 23.64s : Brazil - 2202 chars
# 24.67s : Chile - 2186 chars也可以使用返回類型註釋Iterable (或AsyncIterable )從LLM流出結構化輸出。這允許在生成下一個項目時處理每個項目。
from collections.abc import Iterable
from time import time
from magentic import prompt
from pydantic import BaseModel

class Superhero(BaseModel):
    name: str
    age: int
    power: str
    enemies: list[str]

@prompt("Create a Superhero team named {name}.")
def create_superhero_team(name: str) -> Iterable[Superhero]: ...

start_time = time()
for hero in create_superhero_team("The Food Dudes"):
    print(f"{time() - start_time:.2f}s : {hero}")
# 2.23s : name='Pizza Man' age=30 power='Can shoot pizza slices from his hands' enemies=['The Hungry Horde', 'The Junk Food Gang']
# 4.03s : name='Captain Carrot' age=35 power='Super strength and agility from eating carrots' enemies=['The Sugar Squad', 'The Greasy Gang']
# 6.05s : name='Ice Cream Girl' age=25 power='Can create ice cream out of thin air' enemies=['The Hot Sauce Squad', 'The Healthy Eaters']

See Streaming for more.
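AsyncStreamedStr, mentioned above, works the same way inside async code. A minimal sketch (the function name is illustrative; run it inside an event loop, e.g. with asyncio.run):

import asyncio
from magentic import prompt, AsyncStreamedStr

@prompt("Tell me about {country}")
async def describe_country_async(country: str) -> AsyncStreamedStr: ...

async def main() -> None:
    # Await the coroutine to get the AsyncStreamedStr, then iterate chunks as they arrive
    async for chunk in await describe_country_async("Brazil"):
        print(chunk, end="")

asyncio.run(main())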
Async functions / coroutines can be used to concurrently query the LLM. This can greatly increase the overall speed of generation, and also allows other asynchronous code to run while waiting on LLM output. In the example below, the LLM generates a description for each US president while the next president in the list is still being generated. Measuring the characters generated per second shows that this example achieves roughly a 7x speedup over serial processing.
import asyncio
from time import time
from typing import AsyncIterable
from magentic import prompt
@ prompt ( "List ten presidents of the United States" )
async def iter_presidents () -> AsyncIterable [ str ]: ...
@ prompt ( "Tell me more about {topic}" )
async def tell_me_more_about ( topic : str ) -> str : ...
# For each president listed, generate a description concurrently
start_time = time ()
tasks = []
async for president in await iter_presidents ():
# Use asyncio.create_task to schedule the coroutine for execution before awaiting it
# This way descriptions will start being generated while the list of presidents is still being generated
task = asyncio . create_task ( tell_me_more_about ( president ))
tasks . append ( task )
descriptions = await asyncio . gather ( * tasks )
# Measure the characters per second
total_chars = sum ( len ( desc ) for desc in descriptions )
time_elapsed = time () - start_time
print ( total_chars , time_elapsed , total_chars / time_elapsed )
# 24575 28.70 856.07
# Measure the characters per second to describe a single president
start_time = time ()
out = await tell_me_more_about ( "George Washington" )
time_elapsed = time () - start_time
print ( len ( out ), time_elapsed , len ( out ) / time_elapsed )
# 2206 18.72 117.78有關更多信息,請參見Asyncio。
- The functions argument to @prompt can contain async/coroutine functions. When the corresponding FunctionCall objects are called, the result must be awaited.
- The Annotated type annotation can be used to provide descriptions and other metadata for function parameters. See the pydantic documentation on using Field to describe function arguments, and the sketch below.
- The @prompt and @prompt_chain decorators also accept a model argument. You can pass an instance of OpenaiChatModel to use GPT-4 or to configure a different temperature. See the configuration section below.
- Other types can also be registered for use as return type annotations in @prompt functions.
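A minimal sketch of describing function parameters with Annotated and pydantic's Field (the activate_oven function and its constraints are illustrative):

from typing import Annotated, Literal
from pydantic import Field
from magentic import prompt, FunctionCall

def activate_oven(
    temperature: Annotated[int, Field(description="Temperature in Fahrenheit", lt=500)],
    mode: Literal["broil", "bake", "roast"],
) -> str:
    """Turn the oven on with the provided settings."""
    return f"Preheating to {temperature} F with mode {mode}"

@prompt(
    "Prepare the oven so I can make {food}",
    functions=[activate_oven],
)
def prepare_oven(food: str) -> FunctionCall[str]: ...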
magentic supports multiple "backends" (LLM providers). These are:

- openai: the default backend, which uses the openai Python package. Supports all features of magentic.

  from magentic import OpenaiChatModel

- anthropic: uses the anthropic Python package. Supports all features of magentic, however streaming responses are currently received all at once.

  pip install "magentic[anthropic]"

  from magentic.chat_model.anthropic_chat_model import AnthropicChatModel

- litellm: uses the litellm Python package to enable querying LLMs from many different providers. Note: some models may not support all features of magentic, e.g. function calling/structured output and streaming.

  pip install "magentic[litellm]"

  from magentic.chat_model.litellm_chat_model import LitellmChatModel

- mistral: uses the openai Python package with some small modifications to make the API queries compatible with the Mistral API. Supports all features of magentic, however tool calls (including structured outputs) are not streamed and so are received all at once. Note: a future version of magentic may switch to using the mistral Python package.

  from magentic.chat_model.mistral_chat_model import MistralChatModel

The backend and LLM (ChatModel) used by magentic can be configured in several ways. When a magentic function is called, the ChatModel to use follows this order of precedence:
1. The ChatModel instance provided as the model argument to the magentic decorator
2. The current chat model context, created using with MyChatModel:
3. The ChatModel configured via environment variables and the default settings in src/magentic/settings.py

from magentic import OpenaiChatModel, prompt
from magentic.chat_model.litellm_chat_model import LitellmChatModel

@prompt("Say hello")
def say_hello() -> str: ...

@prompt(
    "Say hello",
    model=LitellmChatModel("ollama_chat/llama3"),
)
def say_hello_litellm() -> str: ...

say_hello()  # Uses env vars or default settings

with OpenaiChatModel("gpt-3.5-turbo", temperature=1):
    say_hello()  # Uses openai with gpt-3.5-turbo and temperature=1 due to context manager
    say_hello_litellm()  # Uses litellm with ollama_chat/llama3 because explicitly configured

The following environment variables can be set.
| Environment Variable | Description | Example |
|---|---|---|
| MAGENTIC_BACKEND | The package to use as the LLM backend | anthropic / openai / litellm |
| MAGENTIC_ANTHROPIC_MODEL | Anthropic model | claude-3-haiku-20240307 |
| MAGENTIC_ANTHROPIC_API_KEY | Anthropic API key to be used by magentic | sk-... |
| MAGENTIC_ANTHROPIC_BASE_URL | Base URL for an Anthropic-compatible API | http://localhost:8080 |
| MAGENTIC_ANTHROPIC_MAX_TOKENS | Max number of generated tokens | 1024 |
| MAGENTIC_ANTHROPIC_TEMPERATURE | Temperature | 0.5 |
| MAGENTIC_LITELLM_MODEL | LiteLLM model | claude-2 |
| MAGENTIC_LITELLM_API_BASE | The base URL to query | http://localhost:11434 |
| MAGENTIC_LITELLM_MAX_TOKENS | LiteLLM max number of generated tokens | 1024 |
| MAGENTIC_LITELLM_TEMPERATURE | LiteLLM temperature | 0.5 |
| MAGENTIC_MISTRAL_MODEL | Mistral model | mistral-large-latest |
| MAGENTIC_MISTRAL_API_KEY | Mistral API key to be used by magentic | XEg... |
| MAGENTIC_MISTRAL_BASE_URL | Base URL for a Mistral-compatible API | http://localhost:8080 |
| MAGENTIC_MISTRAL_MAX_TOKENS | Max number of generated tokens | 1024 |
| MAGENTIC_MISTRAL_SEED | Seed for deterministic sampling | 42 |
| MAGENTIC_MISTRAL_TEMPERATURE | Temperature | 0.5 |
| MAGENTIC_OPENAI_MODEL | OpenAI model | gpt-4 |
| MAGENTIC_OPENAI_API_KEY | OpenAI API key to be used by magentic | sk-... |
| MAGENTIC_OPENAI_API_TYPE | Allowed options: "openai", "azure" | azure |
| MAGENTIC_OPENAI_BASE_URL | Base URL for an OpenAI-compatible API | http://localhost:8080 |
| MAGENTIC_OPENAI_MAX_TOKENS | OpenAI max number of generated tokens | 1024 |
| MAGENTIC_OPENAI_SEED | Seed for deterministic sampling | 42 |
| MAGENTIC_OPENAI_TEMPERATURE | OpenAI temperature | 0.5 |
When using the openai backend, setting the MAGENTIC_OPENAI_BASE_URL environment variable or using OpenaiChatModel(..., base_url="http://localhost:8080") in code allows you to use magentic with any OpenAI-compatible API, e.g. Azure OpenAI Service, the LiteLLM OpenAI proxy server, or LocalAI (see the sketch below). Note that if the API does not support tool calls then you will not be able to create prompt-functions that return Python objects, but other features of magentic will still work.
To use Azure with the openai backend you will need to set the MAGENTIC_OPENAI_API_TYPE environment variable to "azure" or use OpenaiChatModel(..., api_type="azure"), and also set the environment variables needed by the openai package to access Azure. See https://github.com/openai/openai-python#microsoft-azure-openai
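A minimal sketch of both options in code (the model/deployment names and the local port are placeholders; for Azure, the endpoint and credential environment variables required by the openai package are assumed to be set):

from magentic import OpenaiChatModel, prompt

# Point the openai backend at any OpenAI-compatible server via base_url
@prompt(
    "Say hello",
    model=OpenaiChatModel("local-model", base_url="http://localhost:8080"),
)
def say_hello_local() -> str: ...

# Use Azure OpenAI by selecting the azure API type
@prompt(
    "Say hello",
    model=OpenaiChatModel("my-gpt4-deployment", api_type="azure"),
)
def say_hello_azure() -> str: ...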
Because the decorated functions have no body or return value, many type checkers will raise warnings or errors for functions decorated with @prompt. There are several ways to deal with these.
1. Disable the check globally for the type checker, for example by disabling the empty-body error code in mypy.

# pyproject.toml
[tool.mypy]
disable_error_code = ["empty-body"]

2. Make the function body ... (this does not satisfy mypy) or raise.

@prompt("Choose a color")
def random_color() -> str: ...

3. Use the comment # type: ignore[empty-body] on each function. In this case you can add a docstring instead of ...

@prompt("Choose a color")
def random_color() -> str:  # type: ignore[empty-body]
    """Returns a random color."""