autollm
v0.1.10

Issues | Feature Requests

Simplify. Unify. Amplify.
| Feature | AutoLLM | LangChain | LlamaIndex | LiteLLM |
|---|---|---|---|---|
| 100+ LLMs | ✅ | ✅ | ✅ | ✅ |
| Unified API | ✅ | | | ✅ |
| 20+ Vector Databases | ✅ | ✅ | ✅ | |
| Cost Calculation (100+ LLMs) | ✅ | | | ✅ |
| 1-Line RAG LLM Engine | ✅ | | | |
| 1-Line FastAPI | ✅ | | | |
Easily install the autollm package in a Python >= 3.8 environment:

```bash
pip install autollm
```

For built-in data readers (GitHub, PDF, docx, ipynb, epub, mbox, website, ...), install:

```bash
pip install autollm[readers]
```
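Once the readers extra is installed, documents can be pulled straight from a source such as a GitHub repository. The sketch below assumes a `read_github_repo_as_documents` helper with these keyword arguments; treat the name and signature as assumptions and verify them against your installed autollm version.

```python
>>> from autollm import read_github_repo_as_documents

>>> # assumed helper and arguments (verify against your autollm version):
>>> # clone the repo and read only the Markdown files under its docs/ folder
>>> documents = read_github_repo_as_documents(
...     git_repo_url="https://github.com/safevideo/autollm.git",
...     relative_folder_path="docs",
...     required_exts=[".md"],
... )
```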
Video tutorial:

Blog post:

Colab notebook:
```python
>>> from autollm import AutoQueryEngine, read_files_as_documents

>>> documents = read_files_as_documents(input_dir="path/to/documents")
>>> query_engine = AutoQueryEngine.from_defaults(documents)

>>> response = query_engine.query(
...     "Why did SafeVideo AI develop this project?"
... )

>>> response.response
"Because they wanted to deploy rag based llm apis in no time!"
```
```python
>>> from autollm import AutoQueryEngine

>>> query_engine = AutoQueryEngine.from_defaults(
...     documents=documents,
...     llm_model="gpt-3.5-turbo",
...     llm_max_tokens=256,
...     llm_temperature=0.1,
...     system_prompt='...',
...     query_wrapper_prompt='...',
...     enable_cost_calculator=True,
...     embed_model="huggingface/BAAI/bge-large-zh",
...     chunk_size=512,
...     chunk_overlap=64,
...     context_window=4096,
...     similarity_top_k=3,
...     response_mode="compact",
...     structured_answer_filtering=False,
...     vector_store_type="LanceDBVectorStore",
...     lancedb_uri="./lancedb",
...     lancedb_table_name="vectors",
...     exist_ok=True,
...     overwrite_existing=False,
... )
```
```python
>>> response = query_engine.query("Who is SafeVideo AI?")

>>> print(response.response)
"A startup that provides self hosted AI API's for companies!"
```
```python
>>> import uvicorn
>>> from autollm import AutoFastAPI

>>> app = AutoFastAPI.from_query_engine(query_engine)

>>> uvicorn.run(app, host="0.0.0.0", port=8000)
INFO:     Started server process [12345]
INFO:     Waiting for application startup.
INFO:     Application startup complete.
INFO:     Uvicorn running on http://0.0.0.0:8000
```
```python
>>> import uvicorn
>>> from autollm import AutoFastAPI

>>> app = AutoFastAPI.from_query_engine(
...     query_engine,
...     api_title='...',
...     api_description='...',
...     api_version='...',
...     api_term_of_service='...',
... )

>>> uvicorn.run(app, host="0.0.0.0", port=8000)
INFO:     Started server process [12345]
INFO:     Waiting for application startup.
INFO:     Application startup complete.
INFO:     Uvicorn running on http://0.0.0.0:8000
```

Hugging Face example:
```python
>>> import os
>>> from autollm import AutoQueryEngine

>>> os.environ["HUGGINGFACE_API_KEY"] = "huggingface_api_key"
>>> llm_model = "huggingface/WizardLM/WizardCoder-Python-34B-V1.0"
>>> llm_api_base = "https://my-endpoint.huggingface.cloud"

>>> AutoQueryEngine.from_defaults(
...     documents='...',
...     llm_model=llm_model,
...     llm_api_base=llm_api_base,
... )
```

Ollama example:
```python
>>> from autollm import AutoQueryEngine

>>> llm_model = "ollama/llama2"
>>> llm_api_base = "http://localhost:11434"

>>> AutoQueryEngine.from_defaults(
...     documents='...',
...     llm_model=llm_model,
...     llm_api_base=llm_api_base,
... )
```

Microsoft Azure - OpenAI example:
```python
>>> import os
>>> from autollm import AutoQueryEngine

>>> os.environ["AZURE_API_KEY"] = ""
>>> os.environ["AZURE_API_BASE"] = ""
>>> os.environ["AZURE_API_VERSION"] = ""
>>> llm_model = "azure/<your_deployment_name>"

>>> AutoQueryEngine.from_defaults(
...     documents='...',
...     llm_model=llm_model,
... )
```

Google - VertexAI example:
```python
>>> import os
>>> from autollm import AutoQueryEngine

>>> os.environ["VERTEXAI_PROJECT"] = "hardy-device-38811"  # Your Project ID
>>> os.environ["VERTEXAI_LOCATION"] = "us-central1"  # Your Location
>>> llm_model = "text-bison@001"

>>> AutoQueryEngine.from_defaults(
...     documents='...',
...     llm_model=llm_model,
... )
```

AWS Bedrock - Claude v2 example:
```python
>>> import os
>>> from autollm import AutoQueryEngine

>>> os.environ["AWS_ACCESS_KEY_ID"] = ""
>>> os.environ["AWS_SECRET_ACCESS_KEY"] = ""
>>> os.environ["AWS_REGION_NAME"] = ""
>>> llm_model = "anthropic.claude-v2"

>>> AutoQueryEngine.from_defaults(
...     documents='...',
...     llm_model=llm_model,
... )
```

Pro tip: autollm defaults to LanceDB as the vector store: it is setup-free, serverless, and 100x more cost-effective!
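As a minimal illustration of that default, the quickstart call from above needs no vector-store arguments at all; vectors are persisted to a local LanceDB database:

```python
>>> from autollm import AutoQueryEngine

>>> # no vector_store_type or lancedb_* arguments: LanceDB is the default store
>>> query_engine = AutoQueryEngine.from_defaults(documents)
```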
Qdrant example:

```python
>>> import qdrant_client
>>> from autollm import AutoQueryEngine

>>> vector_store_type = "QdrantVectorStore"
>>> client = qdrant_client.QdrantClient(
...     url="http://<host>:<port>",
...     api_key="<qdrant-api-key>"
... )
>>> collection_name = "quickstart"

>>> AutoQueryEngine.from_defaults(
...     documents='...',
...     vector_store_type=vector_store_type,
...     client=client,
...     collection_name=collection_name,
... )
```
```python
>>> from autollm import AutoServiceContext

>>> service_context = AutoServiceContext(enable_cost_calculation=True)

# Example verbose output after query
Embedding Token Usage: 7
LLM Prompt Token Usage: 1482
LLM Completion Token Usage: 47
LLM Total Token Cost: $0.002317
```
```python
>>> from autollm import AutoFastAPI

>>> app = AutoFastAPI.from_config(config_path, env_path)
```

Here, config and env should be replaced by the paths to your own config and environment files.
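To make the uvicorn command below concrete, here is a minimal sketch of such an app; the file names `main.py`, `config.yaml`, and `.env` are placeholders, not names fixed by the library:

```python
# main.py - minimal sketch; "config.yaml" and ".env" are placeholder paths
# for your own config and environment files
from autollm import AutoFastAPI

app = AutoFastAPI.from_config("config.yaml", ".env")
```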
After creating your FastAPI app, run the following command in your terminal to get it up and running:

```bash
uvicorn main:app
```

Switching from llama-index? We've got you covered.
```python
>>> from llama_index import StorageContext, ServiceContext, VectorStoreIndex
>>> from llama_index.vector_stores import LanceDBVectorStore
>>> from autollm import AutoQueryEngine

>>> vector_store = LanceDBVectorStore(uri="./.lancedb")
>>> storage_context = StorageContext.from_defaults(vector_store=vector_store)
>>> service_context = ServiceContext.from_defaults()
>>> index = VectorStoreIndex.from_documents(
...     documents=documents,
...     storage_context=storage_context,
...     service_context=service_context,
... )

>>> query_engine = AutoQueryEngine.from_instances(index)
```

Q: Can I use this for commercial projects?
A: Yes, autollm is licensed under the GNU Affero General Public License (AGPL 3.0), which allows for commercial use under certain conditions. Contact us for more information.
Our roadmap outlines upcoming features and integrations designed to make autollm the most extensible and powerful base package for large language model applications:

- 1-line Gradio app creation and deployment
- Budget-based email notifications
- Automated LLM evaluation
- More quickstart apps for PDF chat, documentation chat, academic paper analysis, patent analysis, and more!
autollm is available under the GNU Affero General Public License (AGPL 3.0).

For more information, support, or questions, please contact us.

Love autollm? Star the repo or contribute and help us make it even better! See our contributing guidelines for more information.


