autollm
v0.1.10

issues | feature requests

Simplify. Unify. Amplify.
| Feature | AutoLLM | LangChain | LlamaIndex | LiteLLM |
| --- | --- | --- | --- | --- |
| 100+ LLMs | ✅ | ✅ | ✅ | ✅ |
| Unified API | ✅ | | | ✅ |
| 20+ Vector Databases | ✅ | ✅ | ✅ | |
| Cost Calculation (100+ LLMs) | ✅ | | | ✅ |
| 1-Line RAG LLM Engine | ✅ | | | |
| 1-Line FastAPI | ✅ | | | |
Easily install the autollm package in a Python>=3.8 environment:

```bash
pip install autollm
```

For the built-in data readers (GitHub, PDF, DOCX, IPYNB, EPUB, MBOX, websites, ...), install with:

```bash
pip install autollm[readers]
```

Video tutorial:

Blog post:

Colab notebook:
```python
>>> from autollm import AutoQueryEngine, read_files_as_documents

>>> documents = read_files_as_documents(input_dir="path/to/documents")
>>> query_engine = AutoQueryEngine.from_defaults(documents)

>>> response = query_engine.query(
...     "Why did SafeVideo AI develop this project?"
... )

>>> response.response
"Because they wanted to deploy rag based llm apis in no time!"
```
You can also configure every component explicitly:

```python
>>> from autollm import AutoQueryEngine

>>> query_engine = AutoQueryEngine.from_defaults(
...     documents=documents,
...     llm_model="gpt-3.5-turbo",
...     llm_max_tokens=256,
...     llm_temperature=0.1,
...     system_prompt='...',
...     query_wrapper_prompt='...',
...     enable_cost_calculator=True,
...     embed_model="huggingface/BAAI/bge-large-zh",
...     chunk_size=512,
...     chunk_overlap=64,
...     context_window=4096,
...     similarity_top_k=3,
...     response_mode="compact",
...     structured_answer_filtering=False,
...     vector_store_type="LanceDBVectorStore",
...     lancedb_uri="./lancedb",
...     lancedb_table_name="vectors",
...     exist_ok=True,
...     overwrite_existing=False,
... )

>>> response = query_engine.query("Who is SafeVideo AI?")

>>> print(response.response)
"A startup that provides self hosted AI API's for companies!"
```

You can convert the query engine into a FastAPI app in one line:

```python
>>> import uvicorn
>>> from autollm import AutoFastAPI

>>> app = AutoFastAPI.from_query_engine(query_engine)

>>> uvicorn.run(app, host="0.0.0.0", port=8000)
INFO:     Started server process [12345]
INFO:     Waiting for application startup.
INFO:     Application startup complete.
INFO:     Uvicorn running on http://0.0.0.0:8000
```
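Once the server is running, FastAPI's interactive docs at http://0.0.0.0:8000/docs list the generated endpoints and their schemas. A minimal client sketch, assuming a POST `/query` endpoint that takes the query text in its JSON body (both the path and the payload shape are assumptions; confirm them against `/docs`):

```python
import requests

# hypothetical endpoint and payload shape; verify against the app's /docs page
response = requests.post(
    "http://0.0.0.0:8000/query",
    json={"user_query": "Who is SafeVideo AI?"},
)
print(response.json())
```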
FastAPI app creation is also fully configurable:

```python
>>> from autollm import AutoFastAPI

>>> app = AutoFastAPI.from_query_engine(
...     query_engine,
...     api_title='...',
...     api_description='...',
...     api_version='...',
...     api_term_of_service='...',
... )

>>> uvicorn.run(app, host="0.0.0.0", port=8000)
INFO:     Started server process [12345]
INFO:     Waiting for application startup.
INFO:     Application startup complete.
INFO:     Uvicorn running on http://0.0.0.0:8000
```
Hugging Face example:

```python
>>> import os
>>> from autollm import AutoQueryEngine

>>> os.environ["HUGGINGFACE_API_KEY"] = "huggingface_api_key"

>>> llm_model = "huggingface/WizardLM/WizardCoder-Python-34B-V1.0"
>>> llm_api_base = "https://my-endpoint.huggingface.cloud"

>>> AutoQueryEngine.from_defaults(
...     documents='...',
...     llm_model=llm_model,
...     llm_api_base=llm_api_base,
... )
```

Ollama example:
```python
>>> from autollm import AutoQueryEngine

>>> llm_model = "ollama/llama2"
>>> llm_api_base = "http://localhost:11434"

>>> AutoQueryEngine.from_defaults(
...     documents='...',
...     llm_model=llm_model,
...     llm_api_base=llm_api_base,
... )
```

Microsoft Azure - OpenAI example:
```python
>>> import os
>>> from autollm import AutoQueryEngine

>>> os.environ["AZURE_API_KEY"] = ""
>>> os.environ["AZURE_API_BASE"] = ""
>>> os.environ["AZURE_API_VERSION"] = ""

>>> llm_model = "azure/<your_deployment_name>"

>>> AutoQueryEngine.from_defaults(
...     documents='...',
...     llm_model=llm_model
... )
```

Google - VertexAI example:
```python
>>> import os
>>> from autollm import AutoQueryEngine

>>> os.environ["VERTEXAI_PROJECT"] = "hardy-device-38811"  # Your Project ID
>>> os.environ["VERTEXAI_LOCATION"] = "us-central1"        # Your Location

>>> llm_model = "text-bison@001"

>>> AutoQueryEngine.from_defaults(
...     documents='...',
...     llm_model=llm_model
... )
```

AWS Bedrock - Claude v2 example:
```python
>>> import os
>>> from autollm import AutoQueryEngine

>>> os.environ["AWS_ACCESS_KEY_ID"] = ""
>>> os.environ["AWS_SECRET_ACCESS_KEY"] = ""
>>> os.environ["AWS_REGION_NAME"] = ""

>>> llm_model = "anthropic.claude-v2"

>>> AutoQueryEngine.from_defaults(
...     documents='...',
...     llm_model=llm_model
... )
```

Pro tip: autollm defaults to LanceDB as the vector store: it is setup-free, serverless, and 100x more cost-effective!
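Because LanceDB is the default, a bare `from_defaults` call needs no vector-store configuration at all. A minimal sketch (vectors are persisted to a local LanceDB table, tunable via the `lancedb_uri` and `lancedb_table_name` parameters shown earlier):

```python
from autollm import AutoQueryEngine, read_files_as_documents

# no vector_store_type is passed, so autollm falls back to its
# serverless LanceDB default and persists vectors locally on disk
documents = read_files_as_documents(input_dir="path/to/documents")
query_engine = AutoQueryEngine.from_defaults(documents)
```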
To use another vector store, Qdrant for example:

```python
>>> import qdrant_client
>>> from autollm import AutoQueryEngine

>>> vector_store_type = "QdrantVectorStore"
>>> client = qdrant_client.QdrantClient(
...     url="http://<host>:<port>",
...     api_key="<qdrant-api-key>"
... )
>>> collection_name = "quickstart"

>>> AutoQueryEngine.from_defaults(
...     documents='...',
...     vector_store_type=vector_store_type,
...     client=client,
...     collection_name=collection_name,
... )
```
To see token usage and cost per query, enable the cost calculator:

```python
>>> from autollm import AutoServiceContext

>>> service_context = AutoServiceContext(enable_cost_calculation=True)

# Example verbose output after query
Embedding Token Usage: 7
LLM Prompt Token Usage: 1482
LLM Completion Token Usage: 47
LLM Total Token Cost: $0.002317
```
You can also create a FastAPI app from a config file:

```python
>>> from autollm import AutoFastAPI

>>> app = AutoFastAPI.from_config(config_path, env_path)
```

Here, `config_path` and `env_path` should be replaced with the paths to your configuration and environment files.

After creating the FastAPI app, run the following command in your terminal to get it up and running:

```bash
uvicorn main:app
```
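The `main:app` target tells uvicorn to import a module named `main` and serve its module-level `app` object. A minimal `main.py` sketch under that assumption (the `config.yaml` and `.env` file names are illustrative placeholders):

```python
# main.py - minimal sketch; `uvicorn main:app` imports this module
# and serves the module-level `app` object
from autollm import AutoFastAPI

# illustrative paths; point these at your own config and env files
app = AutoFastAPI.from_config("config.yaml", ".env")
```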
Migrating from llama-index? We've got you covered.

```python
>>> from llama_index import StorageContext, ServiceContext, VectorStoreIndex
>>> from llama_index.vectorstores import LanceDBVectorStore

>>> from autollm import AutoQueryEngine

>>> vector_store = LanceDBVectorStore(uri="./.lancedb")
>>> storage_context = StorageContext.from_defaults(vector_store=vector_store)
>>> service_context = ServiceContext.from_defaults()
>>> index = VectorStoreIndex.from_documents(
...     documents=documents,
...     storage_context=storage_context,
...     service_context=service_context,
... )

>>> query_engine = AutoQueryEngine.from_instances(index)
```

Q: Can I use this for commercial projects?
A: Yes, autollm is licensed under the GNU Affero General Public License (AGPL 3.0), which allows commercial use under certain conditions. Please contact us for more information.
Our roadmap outlines upcoming features and integrations designed to make autollm the most extensible and powerful base package for building large language model applications.
- 1-line Gradio app creation and deployment
- Budget-based email notifications
- Automated LLM evaluation
- More quickstart apps: PDF chat, documentation chat, academic paper analysis, patent analysis, and more!
autollm is available under the GNU Affero General Public License (AGPL 3.0).

For more information, support, or questions, please contact:

Love autollm? Star the repo or contribute and help us make it even better! See our contributing guidelines for more information.


