
Questions | Feature Requests
Simplify. Unify. Amplify.
| Feature | AutoLLM | LangChain | LlamaIndex | LiteLLM |
|---|---|---|---|---|
| 100+ LLMs | ✅ | ✅ | ✅ | ✅ |
| Unified API | ✅ | | | ✅ |
| 20+ Vector Databases | ✅ | ✅ | ✅ | |
| Cost Calculation (100+ LLMs) | ✅ | | | ✅ |
| 1-Line RAG LLM Engine | ✅ | | | |
| 1-Line FastAPI | ✅ | | | |
Easily install the AutoLLM package with pip in a Python >= 3.8 environment:

```bash
pip install autollm
```

For the built-in data readers (GitHub, PDF, DOCX, IPYNB, EPUB, MBOX, websites, ...), install with:

```bash
pip install autollm[readers]
```
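Most examples below use an OpenAI model by default, so your OpenAI key must be visible to the process before you create a query engine (other providers are configured the same way through their own environment variables, as shown in the provider examples further down). A minimal sketch:

```python
import os

# AutoLLM resolves provider credentials litellm-style, from environment
# variables such as OPENAI_API_KEY; export your key before querying.
os.environ["OPENAI_API_KEY"] = "sk-..."
```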
Basic usage:

```python
>>> from autollm import AutoQueryEngine, read_files_as_documents

>>> documents = read_files_as_documents(input_dir="path/to/documents")
>>> query_engine = AutoQueryEngine.from_defaults(documents)
>>> response = query_engine.query(
...     "Why did SafeVideo AI develop this project?"
... )
>>> response.response
"Because they wanted to deploy rag based llm apis in no time!"
```

Advanced usage:

```python
>>> from autollm import AutoQueryEngine
>>> query_engine = AutoQueryEngine.from_defaults(
...     documents=documents,
...     llm_model="gpt-3.5-turbo",
...     llm_max_tokens=256,
...     llm_temperature=0.1,
...     system_prompt='...',
...     query_wrapper_prompt='...',
...     enable_cost_calculator=True,
...     embed_model="huggingface/BAAI/bge-large-zh",
...     chunk_size=512,
...     chunk_overlap=64,
...     context_window=4096,
...     similarity_top_k=3,
...     response_mode="compact",
...     structured_answer_filtering=False,
...     vector_store_type="LanceDBVectorStore",
...     lancedb_uri="./lancedb",
...     lancedb_table_name="vectors",
...     exist_ok=True,
...     overwrite_existing=False,
... )
>>> response = query_engine.query("Who is SafeVideo AI?")
>>> print(response.response)
"A startup that provides self hosted AI API's for companies!"
```

Create a FastAPI app from your query engine in one line:

```python
>>> import uvicorn
>>> from autollm import AutoFastAPI

>>> app = AutoFastAPI.from_query_engine(query_engine)
>>> uvicorn.run(app, host="0.0.0.0", port=8000)
INFO:     Started server process [12345]
INFO:     Waiting for application startup.
INFO:     Application startup complete.
INFO:     Uvicorn running on http://0.0.0.0:8000
```
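Once the server is up, FastAPI serves interactive API docs at http://0.0.0.0:8000/docs, which list the exact routes the generated app exposes. The snippet below is a hypothetical sketch of calling the service from Python: the `/query` path and payload shape are illustrative assumptions, not the documented API, so check `/docs` for the real schema.

```python
import requests

# Hypothetical endpoint and payload; confirm the actual route and request
# schema in the auto-generated docs at /docs before using this.
resp = requests.post(
    "http://0.0.0.0:8000/query",
    json={"user_query": "Why did SafeVideo AI develop this project?"},
)
print(resp.json())
```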
Advanced FastAPI usage:

```python
>>> import uvicorn
>>> from autollm import AutoFastAPI

>>> app = AutoFastAPI.from_query_engine(
...     query_engine,
...     api_title='...',
...     api_description='...',
...     api_version='...',
...     api_term_of_service='...',
... )
>>> uvicorn.run(app, host="0.0.0.0", port=8000)
INFO:     Started server process [12345]
INFO:     Waiting for application startup.
INFO:     Application startup complete.
INFO:     Uvicorn running on http://0.0.0.0:8000
```

HuggingFace example:

```python
>>> import os
>>> from autollm import AutoQueryEngine
>>> os.environ["HUGGINGFACE_API_KEY"] = "huggingface_api_key"
>>> llm_model = "huggingface/WizardLM/WizardCoder-Python-34B-V1.0"
>>> llm_api_base = "https://my-endpoint.huggingface.cloud"
>>> AutoQueryEngine.from_defaults(
...     documents='...',
...     llm_model=llm_model,
...     llm_api_base=llm_api_base,
... )
```

Ollama example:

```python
>>> from autollm import AutoQueryEngine

>>> llm_model = "ollama/llama2"
>>> llm_api_base = "http://localhost:11434"
>>> AutoQueryEngine.from_defaults(
...     documents='...',
...     llm_model=llm_model,
...     llm_api_base=llm_api_base,
... )
```

Microsoft Azure - OpenAI example:

```python
>>> import os
>>> from autollm import AutoQueryEngine

>>> os.environ["AZURE_API_KEY"] = ""
>>> os.environ["AZURE_API_BASE"] = ""
>>> os.environ["AZURE_API_VERSION"] = ""
>>> llm_model = "azure/<your_deployment_name>"
>>> AutoQueryEngine.from_defaults(
...     documents='...',
...     llm_model=llm_model,
... )
```

Google - VertexAI example:

```python
>>> import os
>>> from autollm import AutoQueryEngine

>>> os.environ["VERTEXAI_PROJECT"] = "hardy-device-38811"  # Your Project ID
>>> os.environ["VERTEXAI_LOCATION"] = "us-central1"  # Your Location
>>> llm_model = "text-bison@001"
>>> AutoQueryEngine.from_defaults(
...     documents='...',
...     llm_model=llm_model,
... )
```

AWS Bedrock - Claude v2 example:

```python
>>> import os
>>> from autollm import AutoQueryEngine

>>> os.environ["AWS_ACCESS_KEY_ID"] = ""
>>> os.environ["AWS_SECRET_ACCESS_KEY"] = ""
>>> os.environ["AWS_REGION_NAME"] = ""
>>> llm_model = "anthropic.claude-v2"
>>> AutoQueryEngine.from_defaults(
...     documents='...',
...     llm_model=llm_model,
... )
```

Pro tip: AutoLLM defaults to LanceDB as the vector store: it's setup-free, serverless, and 100x more cost-effective!
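Because LanceDB is the default, the zero-configuration path needs no vector store arguments at all. A minimal sketch relying on the default store (the `lancedb_uri` and `lancedb_table_name` parameters from the advanced usage above remain available when you want to pin the location explicitly):

```python
from autollm import AutoQueryEngine, read_files_as_documents

documents = read_files_as_documents(input_dir="path/to/documents")

# No vector_store_type given: AutoLLM falls back to the serverless,
# file-based LanceDB store, so there is no database to provision.
query_engine = AutoQueryEngine.from_defaults(documents)
print(query_engine.query("What is this corpus about?").response)
```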
To use a different vector store, for example Qdrant:

```python
>>> import qdrant_client
>>> from autollm import AutoQueryEngine

>>> vector_store_type = "QdrantVectorStore"
>>> client = qdrant_client.QdrantClient(
...     url="http://<host>:<port>",
...     api_key="<qdrant-api-key>"
... )
>>> collection_name = "quickstart"
>>> AutoQueryEngine.from_defaults(
...     documents='...',
...     vector_store_type=vector_store_type,
...     client=client,
...     collection_name=collection_name,
... )
```

AutoLLM can track token usage and calculate the cost of each query:

```python
>>> from autollm import AutoServiceContext
>>> service_context = AutoServiceContext(enable_cost_calculation=True)

# Example verbose output after query
Embedding Token Usage: 7
LLM Prompt Token Usage: 1482
LLM Completion Token Usage: 47
LLM Total Token Cost: $0.002317
```
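Cost tracking can also be switched on directly when building a query engine, via the `enable_cost_calculator` flag from the advanced usage example. A minimal sketch reusing the `documents` list built earlier:

```python
from autollm import AutoQueryEngine

# With cost calculation enabled, token usage and cost are logged
# after each query, as in the verbose output above.
query_engine = AutoQueryEngine.from_defaults(
    documents=documents,
    enable_cost_calculator=True,
)
```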
You can also create your FastAPI app directly from config and environment files:

```python
>>> from autollm import AutoFastAPI

>>> app = AutoFastAPI.from_config(config_path, env_path)
```

Here, `config_path` and `env_path` should be replaced with the paths to your configuration and environment files.
After creating your FastAPI app, run the following command in your terminal to get it up and running:

```bash
uvicorn main:app
```

Migrating from llama-index? We've got you covered.

```python
>>> from llama_index import StorageContext, ServiceContext, VectorStoreIndex
>>> from llama_index.vectorstores import LanceDBVectorStore
>>> from autollm import AutoQueryEngine

>>> vector_store = LanceDBVectorStore(uri="./.lancedb")
>>> storage_context = StorageContext.from_defaults(vector_store=vector_store)
>>> service_context = ServiceContext.from_defaults()
>>> index = VectorStoreIndex.from_documents(
...     documents=documents,
...     storage_context=storage_context,
...     service_context=service_context,
... )
>>> query_engine = AutoQueryEngine.from_instances(index)
```

Q: Can I use this for commercial projects?
A: Yes, AutoLLM is licensed under the GNU Affero General Public License (AGPL 3.0), which allows commercial use under certain conditions. Contact us for more information.
Our roadmap outlines the upcoming features and integrations that will make AutoLLM the most extensible and powerful base package for large language model applications.
- 1-line Gradio app creation and deployment
- Budget-based email notifications
- Automated LLM evaluation
- More quickstart apps: PDF chat, documentation chat, academic paper analysis, patent analysis, and more!
AutoLLM is available under the GNU Affero General Public License (AGPL 3.0).
For more information, support, or questions, please contact us:
Love AutoLLM? Star the repo or contribute and help us make it even better! See our contributing guidelines for more information.


