LangChain Decorators is a layer on top of LangChain that provides syntactic sugar for writing custom LangChain prompts and chains.

Note: This is an unofficial add-on to the LangChain library. It is not trying to compete with it, just to make using it easier. Many of the ideas here are completely opinionated.

Here is a simple example of code written with LangChain Decorators:
```python
@llm_prompt
def write_me_short_post(topic: str, platform: str = "twitter", audience: str = "developers") -> str:
    """
    Write me a short header for my post about {topic} for {platform} platform.
    It should be for {audience} audience.
    (Max 15 words)
    """
    return

# run it naturally
write_me_short_post(topic="starwars")
# or
write_me_short_post(topic="starwars", platform="reddit")
```

Main principles and benefits:
- Pythonic way of writing code
- Quick start
- Prompt declarations
- LLM functions (OpenAI functions)
- Simplified streaming
- Automatic LLM selection
- More complex structures
- Binding a prompt to an object
- Defining custom settings
- Debugging
- Passing memory, callbacks, stop, etc.
- Other
```bash
pip install langchain_decorators
```

A good way to get started is to review the examples in the repository.
By default, the prompt is the entire function docstring, unless you mark out your prompt.
We can do this by specifying a code block tagged with the `<prompt>` language tag:
````python
@llm_prompt
def write_me_short_post(topic: str, platform: str = "twitter", audience: str = "developers"):
    """
    Here is a good way to write a prompt as part of a function docstring, with additional documentation for devs.

    It needs to be a code block, marked as a `<prompt>` language
    ```<prompt>
    Write me a short header for my post about {topic} for {platform} platform.
    It should be for {audience} audience.
    (Max 15 words)
    ```

    Now only the code block above will be used as a prompt, and the rest of the docstring will be used as a description for developers.
    (It also has a nice benefit that the IDE (like VS Code) will display the prompt properly (not trying to parse it as markdown, and thus not showing new lines properly))
    """
    return
````

Chat models make it very useful to define the prompt as a set of message templates... here is how to do it:
````python
@llm_prompt
def simulate_conversation(human_input: str, agent_role: str = "a pirate"):
    """
    ## System message
    - note the `:system` suffix inside the <prompt:_role_> tag

    ```<prompt:system>
    You are a {agent_role} hacker. You must act like one.
    You reply always in code, using python or javascript code block...
    for example:
    ... do not reply with anything else.. just with code - respecting your role.
    ```

    # human message
    (we are using the real roles that are enforced by the LLM - GPT supports system, assistant, user)
    ```<prompt:user>
    Hello, who are you
    ```
    a reply:

    ```<prompt:assistant>
    \``` python <<- escaping the inner code block with \ that should be part of the prompt
    def hello():
        print("Argh... hello you pesky pirate")
    \```
    ```

    we can also add some history using a placeholder
    ```<prompt:placeholder>
    {history}
    ```

    ```<prompt:user>
    {human_input}
    ```

    Now only the code blocks above will be used as the prompt, and the rest of the docstring will be used as a description for developers.
    (It also has a nice benefit that the IDE (like VS Code) will display the prompt properly (not trying to parse it as markdown, and thus not showing new lines properly))
    """
    pass
````

The roles here are the model's native roles (assistant, user, and system for ChatGPT).
You can also define a whole section of your prompt as optional; it will only be rendered if all the {placeholder}s inside it have a non-empty value. The syntax for this is as follows:
```python
@llm_prompt
def prompt_with_optional_partials():
    """
    this text will be rendered always, but

    {? anything inside this block will be rendered only if all the {value}s parameters are not empty (None | "") ?}

    you can also place it in between the words

    this too will be rendered{? , but
    this block will be rendered only if {this_value} and {this_value}
    are not empty?} !
    """
```

The decorator also picks an output parser automatically based on the annotated return type (if none is set, the raw string is returned). For example, annotating the return type as `list` lets the decorator parse the output into a list:
```python
# this code example is complete and should run as it is
from langchain_decorators import llm_prompt

@llm_prompt
def write_name_suggestions(company_business: str, count: int) -> list:
    """ Write me {count} good name suggestions for a company that {company_business}
    """
    pass

write_name_suggestions(company_business="sells cookies", count=5)
```

LLM functions (OpenAI functions) are currently supported only by the latest OpenAI chat models.
All you need to do is annotate your function with the `@llm_function` decorator.
This will parse the description for the LLM (the first coherent paragraph is treated as the function description)
as well as the argument descriptions (currently Google, Numpy, and Sphinx notations are supported).
By default, the docstring format is resolved automatically, but setting it explicitly can speed things up a bit:

- `auto` (default): the format is detected automatically
- `google`: the docstring is parsed as a Google-style docstring (see the Google docstring format)
- `numpy`: the docstring is parsed as a Numpy-style docstring (see the Numpy docstring format)
- `sphinx`: the docstring is parsed as a Sphinx-style docstring (see the Sphinx docstring format)
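A minimal sketch of setting the format explicitly (note: the keyword argument name `docstring_format` is an assumption used only for illustration, not confirmed by this document):

```python
from langchain_decorators import llm_function

# `docstring_format` is a hypothetical/assumed argument name, shown only to illustrate
# forcing the docstring format instead of relying on auto-detection
@llm_function(docstring_format="google")
def lookup_order(order_id: str) -> str:
    """
    Look up the status of an order.

    Args:
        order_id (str): identifier of the order to look up
    """
    ...
```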
The best way to define enum-like arguments is via a type annotation using `Literal`:
```python
from typing import Literal
from langchain_decorators import llm_function

@llm_function
def do_magic(spell: str, strength: Literal["light", "medium", "strong"]):
    """
    Do some kind of magic

    Args:
        spell (str): spell text
        strength (str): the strength of the spell
    """
```

As an alternative to `Literal` for annotating "enum"-like arguments, you can use this "typescript"-like format: `["value_a" | "value_b"]` ... it will be parsed as an enum. That text would also end up being part of the description... if you don't want that, you can use this notation in place of the type notation instead. Example:
```python
    Args:
        message_type (["email" | "sms"]): type of a message / channel how to send the message
```
Then you pass these functions as an argument to an `@llm_prompt` (the argument must be named `functions`).
Here is how to use it:
````python
from typing import Callable, List, Literal, Union

from langchain.agents import load_tools
from langchain.tools import BaseTool
from langchain_decorators import llm_function, llm_prompt, GlobalSettings

@llm_function
def send_message(message: str, addressee: str = None, message_type: Literal["email", "whatsapp"] = "email"):
    """ Use this if user asks to send some message

    Args:
        message (str): message text to send
        addressee (str): email of the addressee... in format [email protected]
        message_type (str, optional): style of message by platform
    """
    # send_email / send_whatsapp are assumed to be defined elsewhere
    if message_type == "email":
        send_email(addressee, message)
    elif message_type == "whatsapp":
        send_whatsapp(addressee, message)

# load some other tools from langchain
list_of_other_tools = load_tools(
    tool_names=[...],
    llm=GlobalSettings.get_current_settings().default_llm)

@llm_prompt
def do_what_user_asks_for(user_input: str, functions: List[Union[Callable, BaseTool]]):
    """
    ```<prompt:system>
    Your role is to be a helpful assistant.
    ```
    ```<prompt:user>
    {user_input}
    ```
    """

user_input = "Yo, send an email to John Smith that I will be late for the meeting"
result = do_what_user_asks_for(
    user_input=user_input,
    functions=[send_message, *list_of_other_tools]
)

if result.is_function_call:
    result.execute()
else:
    print(result.output_text)
````

Additionally, you can add a `function_call` argument to your LLM prompt to control GPT's behavior (see the sketch after the list below):
- if you set the value to `"none"`, function calling is disabled for this run, but the model can still see the functions (useful for some reasoning/planning before calling a function)
- if you set the value to `"auto"`, GPT will choose whether or not to use the functions
- if you set the value to the name of a function, or to the function itself (the decorator will resolve it to the same name used in the schema), GPT will be forced to use that function
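A minimal sketch of passing it, assuming `function_call` can be supplied as a keyword argument on the prompt call (this exact mechanism is an assumption, not confirmed by this document):

```python
# Hypothetical usage sketch: the way `function_call` is passed here is an assumption.
result = do_what_user_asks_for(
    user_input=user_input,
    functions=[send_message, *list_of_other_tools],
    function_call="none",  # let the model reason/plan first, without calling a function
)
```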
If you use the `functions` argument, the output will always be an `OutputWithFunctionCall`:
```python
class OutputWithFunctionCall(BaseModel):
    output_text: str
    output: T
    function_name: str = None
    function_arguments: Union[Dict[str, Any], str, None]
    function: Callable = None
    function_async: Callable = None

    @property
    def is_function_call(self):
        ...

    @property
    def support_async(self):
        ...

    @property
    def support_sync(self):
        ...

    async def execute_async(self):
        """Executes the function asynchronously."""
        ...

    def execute(self):
        """Executes the function synchronously.
        If the function is async, it will be executed in an event loop.
        """
        ...

    def to_function_message(self, result=None):
        """
        Converts the result to a FunctionMessage...
        you can override the result collected via execute with your own
        """
        ...
```

If you want to see how the schema is built, you can use the `get_function_schema` helper on the decorated function:
```python
from langchain_decorators import get_function_schema

@llm_function
def my_func(arg1: str):
    ...

f_schema = get_function_schema(my_func)
print(f_schema)
```

In order to add the result into memory / agent_scratchpad, use `to_function_message`.
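A hypothetical sketch of that flow (the `chat_history` list is an illustrative placeholder for whatever memory / scratchpad you maintain, not an API from this document):

```python
# Illustrative only: `chat_history` stands in for your memory / agent_scratchpad.
chat_history = []

if result.is_function_call:
    function_result = result.execute()
    # convert the executed result into a FunctionMessage and keep it in the history
    chat_history.append(result.to_function_message(result=function_result))
```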
The functions provider enables you to provide the set of LLM functions more dynamically based on the inputs (for example, as a list of functions). It also allows you to assign each function a unique name for that particular LLM run. This can be useful in several scenarios.

The function schema (and especially its description) is a key tool for guiding the LLM. If you enable dynamic function schemas, you can use the same prompt-templating features as in the main prompt inside the `llm_function` docstring:
```python
@llm_function(dynamic_schema=True)
def db_search(query_input: str):
    """
    This function is useful to search in our database.
    {?Here are some examples of data available:
    {closest_examples}?}
    """

@llm_prompt
def run_agent(query_input: str, closest_examples: str, functions):
    """
    Help user. Use a function when appropriate
    """

closest_examples = get_closest_examples()
run_agent(query_input, closest_examples, functions=[db_search, ...])
```

This is for illustration only; a fully executable example is provided in the code examples.
If we want to leverage streaming, we just mark which prompts should be streamed; there is no need to tinker with which LLM we use, or to create and pass a streaming handler into a particular part of our chain... we simply turn streaming on/off on the prompt / prompt type.

Streaming will only happen if we call the prompt inside a streaming context... there we can define a simple function to handle the stream:
```python
# this code example is complete and should run as it is
from langchain_decorators import StreamingContext, llm_prompt

# this will mark the prompt for streaming (useful if we want to stream just some prompts in our app... but don't want to pass around the callback handlers)
# note that only async functions can be streamed (you will get an error if it's not)
@llm_prompt(capture_stream=True)
async def write_me_short_post(topic: str, platform: str = "twitter", audience: str = "developers"):
    """
    Write me a short header for my post about {topic} for {platform} platform.
    It should be for {audience} audience.
    (Max 15 words)
    """
    pass

# just an arbitrary function to demonstrate the streaming... will be some websockets code in the real world
tokens = []
def capture_stream_func(new_token: str):
    tokens.append(new_token)

# if we want to capture the stream, we need to wrap the execution into StreamingContext...
# this will allow us to capture the stream even if the prompt call is hidden inside a higher-level method
# only the prompts marked with capture_stream will be captured here
with StreamingContext(stream_to_stdout=True, callback=capture_stream_func):
    result = await write_me_short_post(topic="starwars")
    print("Stream finished ... we can distinguish tokens thanks to alternating colors")

print("\nWe've captured", len(tokens), "tokens\n")
print("Here is the result:")
print(result)
```

In real life there may be scenarios where the context grows beyond the window of the base model you are using (for example, a long chat history)... but since this happens only occasionally, it would be nice to use a (usually more expensive) model with a bigger context window only in those cases, and use the cheaper one otherwise.
Now you can do this with an `LlmSelector`:
```python
from langchain.chat_models import ChatGooglePalm, ChatOpenAI
from langchain_decorators import LlmSelector

my_llm_selector = (
    LlmSelector(
        generation_min_tokens=0,         # how many tokens (at minimum) I want to keep as a buffer for generation
        prompt_to_generation_ratio=1/3,  # what portion of the prompt length should be reserved as a generation buffer
    )
    # use with_llm_rule if the model's window is not defined in langchain_decorators.common.MODEL_LIMITS (only OpenAI and Anthropic are there)
    .with_llm_rule(ChatGooglePalm(), max_tokens=512)
    # these models are known, therefore we can just pass them and the max window will be resolved
    .with_llm(ChatOpenAI(model="gpt-3.5-turbo"))
    .with_llm(ChatOpenAI(model="gpt-3.5-turbo-16k-0613"))
    .with_llm(ChatOpenAI(model="claude-v1.3-100k"))
)
```

This class allows you to define a sequence of LLMs with rules based on the length of the prompt and the expected generation length... the more expensive model is used automatically only after a threshold is passed.
You can set it in the `GlobalSettings`:
```python
import langchain_decorators

langchain_decorators.GlobalSettings.define_settings(
    llm_selector=my_llm_selector  # pass the selector into the global settings
)
```

Note: as of version v0.0.10, an LlmSelector like this is predefined in the default settings. You can override it by providing your own, or by setting a default LLM or default streaming LLM.
Or for specific prompt types:
```python
from langchain_decorators import PromptTypes, PromptTypeSettings

class MyCustomPromptTypes(PromptTypes):
    MY_TURBO_PROMPT = PromptTypeSettings(llm_selector=my_llm_selector)
```

For dict / pydantic outputs you need to specify the format instructions... which can be tedious; that's why you can let the output parser generate the instructions for you, based on the model (pydantic):
```python
from langchain_decorators import llm_prompt
from pydantic import BaseModel, Field

class TheOutputStructureWeExpect(BaseModel):
    name: str = Field(description="The name of the company")
    headline: str = Field(description="The description of the company (for landing page)")
    employees: list[str] = Field(description="5-8 fake employee names with their positions")

@llm_prompt()
def fake_company_generator(company_business: str) -> TheOutputStructureWeExpect:
    """ Generate a fake company that {company_business}
    {FORMAT_INSTRUCTIONS}
    """
    return

company = fake_company_generator(company_business="sells cookies")

# print the result nicely formatted
print("Company name: ", company.name)
print("company headline: ", company.headline)
print("company employees: ", company.employees)
```

You can also bind a prompt to an object and reference any of its fields and properties inside the prompt:
````python
from pydantic import BaseModel
from langchain_decorators import llm_prompt

class AssistantPersonality(BaseModel):
    assistant_name: str
    assistant_role: str
    field: str

    @property
    def a_property(self):
        return "whatever"

    def hello_world(self, function_kwarg: str = None):
        """
        We can reference any {field} or {a_property} inside our prompt... and combine it with {function_kwarg} in the method
        """

    @llm_prompt
    def introduce_your_self(self) -> str:
        """
        ```<prompt:system>
        You are an assistant named {assistant_name}.
        Your role is to act as {assistant_role}
        ```
        ```<prompt:user>
        Introduce your self (in less than 20 words)
        ```
        """

personality = AssistantPersonality(assistant_name="John", assistant_role="a pirate")
print(personality.introduce_your_self())
````

Here we are just marking a function as a prompt with the `llm_prompt` decorator, effectively turning it into an LLMChain, instead of running it ourselves.
A standard LLMChain takes many more init parameters than just the input variables and the prompt... that implementation detail is hidden in the decorator. Here is how it works:
Using global settings:
```python
# define global settings for all prompts (if not set - chatGPT is the current default)
from langchain.chat_models import ChatOpenAI
from langchain_decorators import GlobalSettings

GlobalSettings.define_settings(
    default_llm=ChatOpenAI(temperature=0.0),  # this is the default... you can change it here globally
    default_streaming_llm=ChatOpenAI(temperature=0.0, streaming=True),  # this is the default... will be used for streaming
)
```

Using predefined prompt types:
```python
# You can change the default prompt types
from langchain.chat_models import ChatOpenAI
from langchain_decorators import llm_prompt, PromptTypes, PromptTypeSettings

PromptTypes.AGENT_REASONING.llm = ChatOpenAI()

# Or you can just define your own ones:
class MyCustomPromptTypes(PromptTypes):
    GPT4 = PromptTypeSettings(llm=ChatOpenAI(model="gpt-4"))

@llm_prompt(prompt_type=MyCustomPromptTypes.GPT4)
def write_a_complicated_code(app_idea: str) -> str:
    ...
```

Defining the settings directly in the decorator:
```python
from langchain.llms import OpenAI
from langchain_decorators import llm_prompt

@llm_prompt(
    llm=OpenAI(temperature=0.7),
    stop_tokens=["\nObservation"],
    ...
)
def creative_writer(book_title: str) -> str:
    ...
```

To pass any of these (memory, callbacks, stop sequences, etc.), just declare them in the function (or use kwargs to pass anything).
(They don't necessarily need to be declared, but it is good practice if you are going to use them.)
```python
from langchain.memory import SimpleMemory
from langchain_decorators import llm_prompt

@llm_prompt()
async def write_me_short_post(topic: str, platform: str = "twitter", memory: SimpleMemory = None):
    """
    {history_key}
    Write me a short header for my post about {topic} for {platform} platform.
    It should be for {audience} audience.
    (Max 15 words)
    """
    pass

await write_me_short_post(topic="old movies")
```

There are several options for controlling what is logged to the console. The simplest is to set the env variable `LANGCHAIN_DECORATORS_VERBOSE` to `true`.
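For example (just standard Python, nothing library-specific):

```python
import os

# enable verbose logging for langchain_decorators via the environment variable mentioned above
os.environ["LANGCHAIN_DECORATORS_VERBOSE"] = "true"
```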
You can also control this programmatically by defining it in your global settings, as sketched below.
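A minimal sketch, assuming `GlobalSettings.define_settings` accepts a `verbose` flag (the exact setting name is an assumption, not confirmed by this document):

```python
from langchain_decorators import GlobalSettings

# `verbose` is an assumed setting name, used here only for illustration
GlobalSettings.define_settings(verbose=True)
```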
The last option is to control it case by case, simply by declaring the prompt in verbose mode:
```python
@llm_prompt(verbose=True)
def your_prompt(param1):
    ...
```
PromptWatch.io is a platform that lets you track and trace in detail everything that happens during your LangChain executions. Integration is a one-liner: just wrap your entry-point code:

```python
from promptwatch import PromptWatch  # assumes the promptwatch package is installed

with PromptWatch():
    run_your_code()
```
Learn more about PromptWatch here: www.promptwatch.io

Feedback, contributions, and PRs are welcome!