LangChain Decorators is a layer on top of LangChain that provides syntactic sugar for writing custom LangChain prompts and chains.

Note: this is an unofficial add-on to the LangChain library. It is not trying to compete with it, just to make using it easier. Many of the ideas here are completely opinionated.

Here is a simple example of code written with LangChain Decorators:
from langchain_decorators import llm_prompt

@llm_prompt
def write_me_short_post(topic: str, platform: str = "twitter", audience: str = "developers") -> str:
    """
    Write me a short header for my post about {topic} for {platform} platform.
    It should be for {audience} audience.
    (Max 15 words)
    """
    return

# run it naturally
write_me_short_post(topic="starwars")
# or
write_me_short_post(topic="starwars", platform="redit")

Main principles and benefits:
- a more pythonic way of writing code

Topics covered:
- Quick start
- Prompt declarations
- LLM functions (OpenAI functions)
- Simplified streaming
- Automatic LLM selection
- More complex structures
- Binding a prompt to an object
- Defining custom settings
- Debugging
- Passing memory, callbacks, stop tokens, etc.
- Other
pip install langchain_decorators

A good way to get started is to review the examples here:

By default, the prompt is the whole function docstring, unless you mark the prompt explicitly. We can do that by specifying a code block with the `<prompt>` language tag:
@llm_prompt
def write_me_short_post(topic: str, platform: str = "twitter", audience: str = "developers"):
    """
    Here is a good way to write a prompt as part of a function docstring, with additional documentation for devs.

    It needs to be a code block, marked as a `<prompt>` language.
    ```<prompt>
    Write me a short header for my post about {topic} for {platform} platform.
    It should be for {audience} audience.
    (Max 15 words)
    ```

    Now only the code block above will be used as the prompt, and the rest of the docstring will be used as a description for developers.
    (It also has the nice benefit that the IDE (like VS Code) will display the prompt properly, not trying to parse it as markdown and thus not showing new lines properly.)
    """
    return

For chat models it is very useful to define the prompt as a set of message templates... here is how to do it:
@llm_prompt
def simulate_conversation(human_input: str, agent_role: str = "a pirate"):
    """
    ## System message
    - note the `:system` suffix inside the <prompt:_role_> tag

    ```<prompt:system>
    You are a {agent_role} hacker. You must act like one.
    You reply always in code, using python or javascript code block...
    for example:
    ... do not reply with anything else.. just with code - respecting your role.
    ```

    # human message
    (we are using the real roles that are enforced by the LLM - GPT supports system, assistant, user)
    ```<prompt:user>
    Hello, who are you
    ```
    a reply:
    ```<prompt:assistant>
    \``` python <<- escaping the inner code block with \ so that it stays part of the prompt
    def hello():
        print("Argh... hello you pesky pirate")
    \```
    ```

    we can also add some history using a placeholder
    ```<prompt:placeholder>
    {history}
    ```
    ```<prompt:user>
    {human_input}
    ```

    Now only the code blocks above will be used as the prompt, and the rest of the docstring will be used as a description for developers.
    (It also has the nice benefit that the IDE (like VS Code) will display the prompt properly, not trying to parse it as markdown and thus not showing new lines properly.)
    """
    pass

The roles here are the model-native roles (assistant, user, system for ChatGPT).
You can also define optional sections of a prompt that are only rendered when all of their inputs are provided. The syntax for this is as follows:
@llm_prompt
def prompt_with_optional_partials():
    """
    this text will be rendered always, but
    {? anything inside this block will be rendered only if all the {value}s parameters are not empty (None | "") ?}
    you can also place it in between the words
    this too will be rendered{? , but
    this block will be rendered only if {this_value} and {this_value}
    are not empty?} !
    """

The output parser is resolved automatically from the return type annotation; for example, annotating the return type as list will make the output be parsed into a list of items:

# this code example is complete and should run as it is
from langchain_decorators import llm_prompt

@llm_prompt
def write_name_suggestions(company_business: str, count: int) -> list:
    """ Write me {count} good name suggestions for company that {company_business}
    """
    pass

write_name_suggestions(company_business="sells cookies", count=5)

LLM functions (OpenAI functions) are currently supported only by the latest OpenAI chat models.
All you need to do is annotate your function with the @llm_function decorator. This will parse the description for the LLM (the first coherent paragraph is treated as the function description) as well as the argument descriptions (currently Google, Numpy and Sphinx notations are supported).

By default the docstring format is resolved automatically, but setting it explicitly can speed things up a bit:
- auto (default): the format is detected automatically from the docstring
- google: the docstring is parsed as the Google docstring format
- numpy: the docstring is parsed as the Numpy docstring format
- sphinx: the docstring is parsed as the Sphinx docstring format
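As a purely illustrative sketch of declaring the format explicitly (the docstring_format parameter name and the lookup_weather function are assumptions for illustration; check the library for the exact setting):

# hypothetical: declare the docstring format up front (parameter name is an assumption)
@llm_function(docstring_format="google")
def lookup_weather(city: str):
    """Look up the current weather.

    Args:
        city (str): name of the city
    """
    ...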
The best way to define an enum is to use a Literal type annotation:
from typing import Literal

@llm_function
def do_magic(spell: str, strength: Literal["light", "medium", "strong"]):
    """
    Do some kind of magic

    Args:
        spell (str): spell text
        strength (str): the strength of the spell
    """

As an alternative to Literal, you can annotate enum-like arguments with this "typescript"-like format: ["value_a" | "value_b"] ... it will be parsed. This text will also become part of the description... if you don't want that, you can use this notation as the type annotation instead. Example:
Args:
    message_type (["email" | "sms"]): type of a message / channel how to send the message
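For illustration, here is how that notation could look inside a complete @llm_function docstring (the send_notification function itself is hypothetical):

# hypothetical function using the "typescript"-like enum notation from above
@llm_function
def send_notification(message: str, message_type: str = "email"):
    """Use this to notify the user.

    Args:
        message (str): the text to send
        message_type (["email" | "sms"]): type of a message / channel how to send the message
    """
    ...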
Then you pass these functions as an argument to an @llm_prompt (the argument must be named functions). Here is how to use it:
from typing import Callable, List, Literal, Union

from langchain.agents import load_tools
from langchain.tools import BaseTool
from langchain_decorators import llm_function, llm_prompt, GlobalSettings

@llm_function
def send_message(message: str, addressee: str = None, message_type: Literal["email", "whatsapp"] = "email"):
    """ Use this if user asks to send some message

    Args:
        message (str): message text to send
        addressee (str): email of the addressee... in format [email protected]
        message_type (str, optional): style of message by platform
    """
    if message_type == "email":
        send_email(addressee, message)
    elif message_type == "whatsapp":
        send_whatsapp(addressee, message)

# load some other tools from langchain
list_of_other_tools = load_tools(
    tool_names=[...],
    llm=GlobalSettings.get_current_settings().default_llm)

@llm_prompt
def do_what_user_asks_for(user_input: str, functions: List[Union[Callable, BaseTool]]):
    """
    ```<prompt:system>
    Your role is to be a helpful assistant.
    ```
    ```<prompt:user>
    {user_input}
    ```
    """

user_input = "Yo, send an email to John Smith that I will be late for the meeting"

result = do_what_user_asks_for(
    user_input=user_input,
    functions=[send_message, *list_of_other_tools]
)

if result.is_function_call:
    result.execute()
else:
    print(result.output_text)

Additionally, you can add a function_call argument to the LLM prompt to control the GPT behavior:
- If the value is set to "none", function calling is disabled for this call, but the LLM can still see the functions (useful for some reasoning/planning before calling a function)
- If the value is set to "auto", GPT will choose whether or not to use the functions
- If the value is set to the name of a function / or the function itself (the decorator will handle using the same name as in the schema), GPT will be forced to use that function
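For illustration only, a sketch of how this could look, assuming function_call can simply be passed as a keyword argument when calling the decorated prompt (this call pattern is an assumption, not verified against the library API):

# hypothetical usage sketch: force the model to call send_message
result = do_what_user_asks_for(
    user_input=user_input,
    functions=[send_message],
    function_call=send_message,  # or "none" / "auto", as described above
)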
If you use the functions argument, the output will always be an OutputWithFunctionCall:
class OutputWithFunctionCall(BaseModel):
    output_text: str
    output: T
    function_name: str = None
    function_arguments: Union[Dict[str, Any], str, None]
    function: Callable = None
    function_async: Callable = None

    @property
    def is_function_call(self):
        ...

    @property
    def support_async(self):
        ...

    @property
    def support_sync(self):
        ...

    async def execute_async(self):
        """Executes the function asynchronously."""
        ...

    def execute(self):
        """ Executes the function synchronously.
        If the function is async, it will be executed in an event loop.
        """
        ...

    def to_function_message(self, result=None):
        """
        Converts the result to a FunctionMessage...
        you can override the result collected via execute with your own
        """
        ...

If you want to see how the schema is built, you can use the get_function_schema helper, which the decorator also attaches to the function:
from langchain_decorators import get_function_schema

@llm_function
def my_func(arg1: str):
    ...

f_schema = get_function_schema(my_func)
print(f_schema)

To add the result to the memory / agent_scratchpad, use to_function_message.
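To illustrate, a minimal sketch of an agent step that executes the call and feeds the result back to the model; the messages list and the surrounding loop are hypothetical, while is_function_call, execute and to_function_message come from the OutputWithFunctionCall class shown above:

# hypothetical agent step: run the requested function and append the result as a function message
if result.is_function_call:
    function_result = result.execute()
    messages.append(result.to_function_message(result=function_result))
else:
    print(result.output_text)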
A functions provider lets you supply the set of LLM functions more dynamically, based on the inputs, instead of a static list. It also lets you give each function a unique name for a particular LLM run. This can be useful for a couple of reasons.

The function schema (especially its description) is a key tool for guiding the LLM. If you enable a dynamic function schema, you can (re)use the same prompt-template attributes in the llm_function schema as in the main prompt:
@llm_function(dynamic_schema=True)
def db_search(query_input: str):
    """
    This function is useful to search in our database.
    {?Here are some examples of data available:
    {closest_examples}?}
    """

@llm_prompt
def run_agent(query_input: str, closest_examples: str, functions):
    """
    Help user. Use a function when appropriate
    """

closest_examples = get_closest_examples()
run_agent(query_input, closest_examples, functions=[db_search, ...])

This is only for illustration; a fully executable example is provided in the code examples.
If we want to take advantage of streaming:

This way we just mark which prompts should be streamed, without needing to tinker with which LLM we use or to create and distribute streaming handlers into a particular part of our chain... we just turn streaming on/off on a prompt / prompt type...

Streaming will only happen if we call the prompt inside a streaming context... there we can define a simple function to handle the stream:
# this code example is complete and should run as it is
from langchain_decorators import StreamingContext, llm_prompt

# this will mark the prompt for streaming (useful if we want to stream just some prompts in our app... but don't want to pass around and distribute the callback handlers)
# note that only async functions can be streamed (you will get an error if it's not)
@llm_prompt(capture_stream=True)
async def write_me_short_post(topic: str, platform: str = "twitter", audience: str = "developers"):
    """
    Write me a short header for my post about {topic} for {platform} platform.
    It should be for {audience} audience.
    (Max 15 words)
    """
    pass

# just an arbitrary function to demonstrate the streaming... will be some websockets code in the real world
tokens = []
def capture_stream_func(new_token: str):
    tokens.append(new_token)

# a small async wrapper so we have something to await inside the streaming context
# (assumed here; it simply calls the prompt defined above)
async def run_prompt():
    return await write_me_short_post(topic="old movies")

# if we want to capture the stream, we need to wrap the execution into StreamingContext...
# this will allow us to capture the stream even if the prompt call is hidden inside a higher level method
# only the prompts marked with capture_stream will be captured here
with StreamingContext(stream_to_stdout=True, callback=capture_stream_func):
    result = await run_prompt()

print("Stream finished... we can distinguish tokens thanks to alternating colors")
print("\nWe've captured", len(tokens), "tokens\n")
print("Here is the result:")
print(result)

In real life there can be situations where the context grows beyond the window of the base model you are using (for example a long chat history)... but since this only happens occasionally, we would like to use a (usually more expensive) model with a bigger context window only in those cases, and use the cheaper one otherwise.
Now you can do this with an LlmSelector:
from langchain.chat_models import ChatOpenAI, ChatGooglePalm
from langchain_decorators import LlmSelector

my_llm_selector = (
    LlmSelector(
        generation_min_tokens=0,        # minimum number of tokens to keep as a buffer for the generation
        prompt_to_generation_ratio=1/3  # what portion of the prompt length should be reserved as a generation buffer
    )
    .with_llm_rule(ChatGooglePalm(), max_tokens=512)       # use this if the LLM's window size is not defined in langchain_decorators.common.MODEL_LIMITS (only OpenAI and Anthropic are there)
    .with_llm(ChatOpenAI(model="gpt-3.5-turbo"))           # these models are known, so we can just pass them and the max window size will be resolved
    .with_llm(ChatOpenAI(model="gpt-3.5-turbo-16k-0613"))
    .with_llm(ChatOpenAI(model="claude-v1.3-100k"))
)

This class allows you to define a sequence of LLMs with rules based on the length of the prompt and the expected length of the generation... so that a more expensive model is only used automatically once a threshold is passed.
You can define it in the GlobalSettings:
import langchain_decorators

langchain_decorators.GlobalSettings.define_settings(
    llm_selector=my_llm_selector  # pass the selector into the global settings
)

Note: as of version v0.0.10, an LlmSelector like this is predefined in the default settings. You can override it by providing your own, or by setting a default LLM or default streaming LLM.

Or for a specific prompt type:
from langchain_decorators import PromptTypes, PromptTypeSettings

class MyCustomPromptTypes(PromptTypes):
    MY_TURBO_PROMPT = PromptTypeSettings(llm_selector=my_llm_selector)

For dict / pydantic outputs you need to specify the format instructions... which can be tedious, so you can let the output parser generate the instructions for you based on the (pydantic) model:
from langchain_decorators import llm_prompt
from pydantic import BaseModel, Field

class TheOutputStructureWeExpect(BaseModel):
    name: str = Field(description="The name of the company")
    headline: str = Field(description="The description of the company (for landing page)")
    employees: list[str] = Field(description="5-8 fake employee names with their positions")

@llm_prompt()
def fake_company_generator(company_business: str) -> TheOutputStructureWeExpect:
    """ Generate a fake company that {company_business}
    {FORMAT_INSTRUCTIONS}
    """
    return

company = fake_company_generator(company_business="sells cookies")

# print the result nicely formatted
print("Company name: ", company.name)
print("company headline: ", company.headline)
print("company employees: ", company.employees)

Binding a prompt to an object:

from pydantic import BaseModel
from langchain_decorators import llm_prompt
class AssistantPersonality(BaseModel):
    assistant_name: str
    assistant_role: str
    field: str

    @property
    def a_property(self):
        return "whatever"

    def hello_world(self, function_kwarg: str = None):
        """
        We can reference any {field} or {a_property} inside our prompt... and combine it with {function_kwarg} in the method
        """

    @llm_prompt
    def introduce_your_self(self) -> str:
        """
        ```<prompt:system>
        You are an assistant named {assistant_name}.
        Your role is to act as {assistant_role}
        ```
        ```<prompt:user>
        Introduce your self (in less than 20 words)
        ```
        """

personality = AssistantPersonality(assistant_name="John", assistant_role="a pirate")

print(personality.introduce_your_self(personality))

Here we are just marking a function as a prompt with the llm_prompt decorator, effectively turning it into an LLMChain, instead of running it directly.
A standard LLMChain takes many more init parameters than just input_variables and a prompt; this implementation detail is hidden in the decorator. Here is how it works:

Using global settings:
# define global settings for all prompts (if not set, ChatGPT is the current default)
from langchain.chat_models import ChatOpenAI
from langchain_decorators import GlobalSettings

GlobalSettings.define_settings(
    default_llm=ChatOpenAI(temperature=0.0),  # this is the default... you can change it here globally
    default_streaming_llm=ChatOpenAI(temperature=0.0, streaming=True),  # this is the default... change it here for all prompts... it will be used for streaming
)

Using predefined prompt types:
# You can change the default prompt types
from langchain_decorators import PromptTypes, PromptTypeSettings

PromptTypes.AGENT_REASONING.llm = ChatOpenAI()

# Or you can just define your own ones:
class MyCustomPromptTypes(PromptTypes):
    GPT4 = PromptTypeSettings(llm=ChatOpenAI(model="gpt-4"))

@llm_prompt(prompt_type=MyCustomPromptTypes.GPT4)
def write_a_complicated_code(app_idea: str) -> str:
    ...

Defining the settings directly in the decorator:
from langchain.llms import OpenAI

@llm_prompt(
    llm=OpenAI(temperature=0.7),
    stop_tokens=["\nObservation"],
    ...
)
def creative_writer(book_title: str) -> str:
    ...

To pass any of these (memory, callbacks, stop tokens, etc.), just declare them in the function (or use kwargs to pass in anything).
(They don't necessarily need to be declared, but it is good practice if you are going to use them.)
from langchain.memory import SimpleMemory

@llm_prompt()
async def write_me_short_post(topic: str, platform: str = "twitter", memory: SimpleMemory = None):
    """
    {history_key}
    Write me a short header for my post about {topic} for {platform} platform.
    It should be for {audience} audience.
    (Max 15 words)
    """
    pass

await write_me_short_post(topic="old movies")

There are several options for controlling what gets logged to the console. The simplest way is to define the env variable LANGCHAIN_DECORATORS_VERBOSE and set it to "true".

You can also control this programmatically by defining it in your global settings.
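For illustration, a minimal sketch of both options; the env variable name comes from the text above, while the verbose parameter of define_settings is an assumption:

# option 1: env variable
import os
os.environ["LANGCHAIN_DECORATORS_VERBOSE"] = "true"

# option 2: global settings (the verbose parameter name is an assumption)
from langchain_decorators import GlobalSettings
GlobalSettings.define_settings(verbose=True)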
The last option is to control it case by case, simply by declaring the prompt with verbose mode on:
@llm_prompt(verbose=True)
def your_prompt(param1):
...
PromptWatch.io is a platform for tracking and tracing the details of everything that happens during your LangChain executions. Integrating it is a single-line drop-in: just wrap your entry point code:

from promptwatch import PromptWatch

with PromptWatch():
    run_your_code()

Learn more about PromptWatch here: www.promptwatch.io
Feedback, contributions and PRs are welcome!