Flexible and powerful framework for managing multiple AI agents and handling complex conversations.

The Multi-Agent Orchestrator is a flexible framework for managing multiple AI agents and handling complex conversations. It intelligently routes queries and maintains context across interactions.

The system offers pre-built components for quick deployment, while also allowing easy integration of custom agents and conversation-message storage solutions.

This adaptability makes it suitable for a wide range of applications, from simple chatbots to sophisticated AI systems, accommodating diverse requirements and scaling efficiently.
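To make the routing-plus-context idea concrete, here is a deliberately tiny, self-contained sketch. It is **not** the library's API: the real orchestrator uses an LLM-backed classifier to pick an agent, whereas this toy uses keyword matching, and all names (`ToyAgent`, `ToyOrchestrator`) are invented for illustration.

```python
from dataclasses import dataclass, field

# Toy sketch of intent-based routing with per-session context.
# The real Multi-Agent Orchestrator uses an LLM classifier, not keywords.

@dataclass
class ToyAgent:
    name: str
    keywords: list

    def respond(self, query: str, history: list) -> str:
        return f"[{self.name}] handling: {query} ({len(history)} prior turns)"

@dataclass
class ToyOrchestrator:
    agents: list = field(default_factory=list)
    sessions: dict = field(default_factory=dict)

    def route(self, query: str, session_id: str) -> str:
        history = self.sessions.setdefault(session_id, [])
        # Pick the agent whose keywords best match the query.
        best = max(self.agents,
                   key=lambda a: sum(k in query.lower() for k in a.keywords))
        reply = best.respond(query, history)
        history.append((query, best.name))  # preserve context across turns
        return reply

orchestrator = ToyOrchestrator(agents=[
    ToyAgent("Tech Agent", ["software", "ai", "cloud"]),
    ToyAgent("Travel Agent", ["flight", "book", "hotel"]),
])
print(orchestrator.route("I want to book a flight", "session456"))
```

The real framework generalizes each piece shown here: agent selection becomes a pluggable classifier, and the session dictionary becomes a pluggable conversation storage backend.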
To get a feel for the Multi-Agent Orchestrator, we provide a few basic agents in a demo application. This interactive demo showcases the orchestrator's capabilities through a user-friendly interface. To learn more about setting up and running the demo app, see our Demo App section.

In the screen recording below, we demonstrate an extended version of the demo app that uses six specialized agents.

Watch as the system seamlessly switches context between diverse topics, from booking flights to checking the weather, solving math problems, and providing health information. Notice how the appropriate agent is selected for each query and how coherence is maintained even with brief follow-up inputs.

The demo highlights the system's ability to handle complex, multi-turn conversations while preserving context across domains and leveraging specialized agents.
To quickly get a feel for the Multi-Agent Orchestrator, check out our Demo App. Additional code examples are available in both the documentation and the `examples` folder.

Get hands-on experience with the Multi-Agent Orchestrator through our diverse set of examples:

Example implementations in the `examples` folder:

- `chat-demo-app`: Web-based chat interface with multiple specialized agents
- `ecommerce-support-simulator`: AI-powered customer support system
- `chat-chainlit-app`: Chat application built with Chainlit
- `fast-api-streaming`: FastAPI implementation with streaming support
- `text-2-structured-output`: Natural language to structured data
- `bedrock-inline-agents`: Bedrock inline agents sample

All examples are available in both Python and TypeScript implementations. Check out our documentation for comprehensive guides on setting up and using the Multi-Agent Orchestrator!
Discover creative implementations and diverse applications of the Multi-Agent Orchestrator:

From "Bonjour" to "Boarding Pass": Multilingual AI Chatbot for Flight Reservations

This article demonstrates how to build a multilingual chatbot using the Multi-Agent Orchestrator framework. It explains how to use an Amazon Lex bot as an agent, alongside two other new agents, to make the chatbot work in multiple languages with just a few lines of code.

Beyond Auto-Replies: Building an AI-Powered E-commerce Support System

This article demonstrates how to build an AI-driven multi-agent system for automated e-commerce customer email support. It covers the architecture and setup of specialized AI agents using the Multi-Agent Orchestrator framework, integrating automated processing with human-in-the-loop oversight. The guide explores email ingestion, intelligent routing, automated response generation, and human verification, offering a comprehensive approach to balancing AI efficiency with human expertise in customer support.

Speak Up, AI: Voicing Your Agents with Amazon Connect, Lex, and Bedrock

This article demonstrates how to build an AI customer call center. It covers the architecture and setup of a voice-enabled multi-agent system using the Multi-Agent Orchestrator framework with Amazon Connect and Amazon Lex.
```bash
npm install multi-agent-orchestrator
```

The following example demonstrates how to use the Multi-Agent Orchestrator with two different types of agents: a Bedrock LLM Agent with Converse API support and a Lex Bot Agent. This showcases the system's flexibility in integrating various AI services.
```typescript
import { MultiAgentOrchestrator, BedrockLLMAgent, LexBotAgent } from "multi-agent-orchestrator";

const orchestrator = new MultiAgentOrchestrator();

// Add a Bedrock LLM Agent with Converse API support
orchestrator.addAgent(
  new BedrockLLMAgent({
    name: "Tech Agent",
    description:
      "Specializes in technology areas including software development, hardware, AI, cybersecurity, blockchain, cloud computing, emerging tech innovations, and pricing/costs related to technology products and services.",
    streaming: true
  })
);

// Add a Lex Bot Agent for handling travel-related queries
orchestrator.addAgent(
  new LexBotAgent({
    name: "Travel Agent",
    description: "Helps users book and manage their flight reservations",
    botId: process.env.LEX_BOT_ID,
    botAliasId: process.env.LEX_BOT_ALIAS_ID,
    localeId: "en_US",
  })
);

// Example usage
const response = await orchestrator.routeRequest(
  "I want to book a flight",
  "user123",
  "session456"
);

// Handle the response (streaming or non-streaming)
if (response.streaming == true) {
  console.log("\n** RESPONSE STREAMING **\n");
  // Send metadata immediately
  console.log(`> Agent ID: ${response.metadata.agentId}`);
  console.log(`> Agent Name: ${response.metadata.agentName}`);
  console.log(`> User Input: ${response.metadata.userInput}`);
  console.log(`> User ID: ${response.metadata.userId}`);
  console.log(`> Session ID: ${response.metadata.sessionId}`);
  console.log(`> Additional Parameters:`, response.metadata.additionalParams);
  console.log(`\n> Response:`);

  // Stream the content
  for await (const chunk of response.output) {
    if (typeof chunk === "string") {
      process.stdout.write(chunk);
    } else {
      console.error("Received unexpected chunk type:", typeof chunk);
    }
  }
} else {
  // Handle non-streaming response (AgentProcessingResult)
  console.log("\n** RESPONSE **\n");
  console.log(`> Agent ID: ${response.metadata.agentId}`);
  console.log(`> Agent Name: ${response.metadata.agentName}`);
  console.log(`> User Input: ${response.metadata.userInput}`);
  console.log(`> User ID: ${response.metadata.userId}`);
  console.log(`> Session ID: ${response.metadata.sessionId}`);
  console.log(`> Additional Parameters:`, response.metadata.additionalParams);
  console.log(`\n> Response: ${response.output}`);
}
```

```bash
# Optional: Set up a virtual environment
python -m venv venv
source venv/bin/activate  # On Windows use `venv\Scripts\activate`

pip install multi-agent-orchestrator
```

Here is an equivalent Python example demonstrating the use of the Multi-Agent Orchestrator with a Bedrock LLM Agent and a Lex Bot Agent:
```python
import os
import sys
import asyncio
from multi_agent_orchestrator.orchestrator import MultiAgentOrchestrator
from multi_agent_orchestrator.agents import BedrockLLMAgent, LexBotAgent, BedrockLLMAgentOptions, LexBotAgentOptions, AgentCallbacks

orchestrator = MultiAgentOrchestrator()

class BedrockLLMAgentCallbacks(AgentCallbacks):
    def on_llm_new_token(self, token: str) -> None:
        # handle response streaming here
        print(token, end='', flush=True)

tech_agent = BedrockLLMAgent(BedrockLLMAgentOptions(
    name="Tech Agent",
    streaming=True,
    description="Specializes in technology areas including software development, hardware, AI, "
                "cybersecurity, blockchain, cloud computing, emerging tech innovations, and "
                "pricing/costs related to technology products and services.",
    model_id="anthropic.claude-3-sonnet-20240229-v1:0",
    callbacks=BedrockLLMAgentCallbacks()
))
orchestrator.add_agent(tech_agent)

# Add a Lex Bot Agent for handling travel-related queries
orchestrator.add_agent(
    LexBotAgent(LexBotAgentOptions(
        name="Travel Agent",
        description="Helps users book and manage their flight reservations",
        bot_id=os.environ.get('LEX_BOT_ID'),
        bot_alias_id=os.environ.get('LEX_BOT_ALIAS_ID'),
        locale_id="en_US",
    ))
)

async def main():
    # Example usage
    response = await orchestrator.route_request(
        "I want to book a flight",
        'user123',
        'session456'
    )

    # Handle the response (streaming or non-streaming)
    if response.streaming:
        print("\n** RESPONSE STREAMING **\n")
        # Send metadata immediately
        print(f"> Agent ID: {response.metadata.agent_id}")
        print(f"> Agent Name: {response.metadata.agent_name}")
        print(f"> User Input: {response.metadata.user_input}")
        print(f"> User ID: {response.metadata.user_id}")
        print(f"> Session ID: {response.metadata.session_id}")
        print(f"> Additional Parameters: {response.metadata.additional_params}")
        print("\n> Response:")
        # Stream the content
        async for chunk in response.output:
            if isinstance(chunk, str):
                print(chunk, end='', flush=True)
            else:
                print(f"Received unexpected chunk type: {type(chunk)}", file=sys.stderr)
    else:
        # Handle non-streaming response (AgentProcessingResult)
        print("\n** RESPONSE **\n")
        print(f"> Agent ID: {response.metadata.agent_id}")
        print(f"> Agent Name: {response.metadata.agent_name}")
        print(f"> User Input: {response.metadata.user_input}")
        print(f"> User ID: {response.metadata.user_id}")
        print(f"> Session ID: {response.metadata.session_id}")
        print(f"> Additional Parameters: {response.metadata.additional_params}")
        print(f"\n> Response: {response.output.content}")

if __name__ == "__main__":
    asyncio.run(main())
```

These examples showcase the orchestrator's flexibility in combining different types of agents within a single application.
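The streaming/non-streaming branching shown in the examples can be factored into a small helper. The sketch below is illustrative only (`collect_output` and `fake_stream` are not part of the library); it simply shows one way to normalize both cases into a single string, assuming streamed output arrives as an async iterable of string chunks.

```python
import asyncio

# Illustrative helper (not a library API): collect a response's output
# into one string whether it arrives as an async stream of chunks or
# as an already-complete value.

async def collect_output(output) -> str:
    if hasattr(output, "__aiter__"):  # streaming: async iterable of chunks
        parts = []
        async for chunk in output:
            if isinstance(chunk, str):
                parts.append(chunk)
        return "".join(parts)
    return str(output)  # non-streaming: already a complete value

async def _demo():
    async def fake_stream():
        # Stand-in for a streamed agent response
        for chunk in ["Your ", "flight ", "is ", "booked."]:
            yield chunk

    print(await collect_output(fake_stream()))  # chunks joined into one string
    print(await collect_output("All done."))    # plain value passed through

asyncio.run(_demo())
```

In an application you would typically stream chunks to the user as they arrive (as the examples above do) and use a collector like this only where the full text is needed, for example for logging.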
If you want to use Anthropic or OpenAI for the classifier and/or agents, make sure to install multi-agent-orchestrator with the relevant extras:

```bash
pip install "multi-agent-orchestrator[anthropic]"
pip install "multi-agent-orchestrator[openai]"
```

For a complete installation (including both Anthropic and OpenAI):

```bash
pip install "multi-agent-orchestrator[all]"
```

We welcome contributions! Please see our Contributing Guide for more details.
Big shout out to our awesome contributors! Thank you for making this project better!

Please see our Contributing Guide for guidelines on how to propose bug fixes and improvements.

This project is licensed under the Apache 2.0 License - see the LICENSE file for details.
This project uses the JetBrainsMono NF font, licensed under the SIL Open Font License 1.1. See the font license file for full license details.