llm-ollama-llamaindex-bootstrap
1.0.0
This RAG application template is designed for offline use and is based on Andrej Baranovskij's tutorials. It provides a starting point for building your own local RAG pipeline, independent of online APIs and cloud-based LLM services such as OpenAI. This lets developers experiment with and deploy RAG applications in a controlled environment.
A full-stack UI application, generated using create-llama and customized for this project, can be found at https://github.com/tyrell/llm-ollama-llamaindex-bootstrap-ui
My blog post provides more of the background, motivation, and thinking behind these projects.
This RAG application runs entirely offline, relying solely on your local CPU to retrieve, rank, and generate responses without internet access. Note that processing large datasets or using resource-intensive models may slow performance.
Start the supporting services with Docker Compose (the project uses a local Weaviate instance as its vector store):

docker compose up -d
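The repository ships its own Compose file; purely for orientation, a minimal Weaviate service of the kind it runs looks roughly like this (the image tag and settings below are illustrative assumptions, not the template's actual values):

version: '3.4'
services:
  weaviate:
    image: semitechnologies/weaviate:1.21.2
    ports:
      - "8080:8080"
    environment:
      AUTHENTICATION_ANONYMOUS_ACCESS_ENABLED: 'true'  # no auth for local experiments
      PERSISTENCE_DATA_PATH: '/var/lib/weaviate'       # where vectors are persisted
      DEFAULT_VECTORIZER_MODULE: 'none'                # embeddings are computed client-side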
Install the Python dependencies:

pip install -r requirements.txt
Install Ollama and pull the preferred LLM model specified in config.yml.
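For example, if config.yml names llama2 (the model name here is an assumption; use whatever your config specifies):

ollama pull llama2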
Copy your text PDF files into the data folder.
Run the script to convert the text into vector embeddings and store them in Weaviate:
python ingest.py
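For orientation, here is a minimal sketch of what an ingestion script like ingest.py typically does with LlamaIndex and Weaviate; the index name "Docs", the Weaviate URL, and the local embedding default are illustrative assumptions, not the template's actual values:

# ingest.py sketch: load PDFs from ./data, embed them on the local CPU, store vectors in Weaviate.
import weaviate
from llama_index import ServiceContext, SimpleDirectoryReader, StorageContext, VectorStoreIndex
from llama_index.vector_stores import WeaviateVectorStore

client = weaviate.Client("http://localhost:8080")  # the local Weaviate started via docker compose
vector_store = WeaviateVectorStore(weaviate_client=client, index_name="Docs")
storage_context = StorageContext.from_defaults(vector_store=vector_store)
service_context = ServiceContext.from_defaults(llm=None, embed_model="local")  # no LLM needed at ingestion time

documents = SimpleDirectoryReader("./data").load_data()  # reads the PDFs copied earlier
VectorStoreIndex.from_documents(documents, storage_context=storage_context, service_context=service_context)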
Then ask a question against the ingested documents:

python main.py "Who are you?"
Answer:
I am an AI language model, designed to assist and provide information based on the context provided. In this case, the context is related to an invoice from Chapman, Kim and Green to Rodriguez-Stevens for various items such as wine glasses, stemware storage, corkscrew parts, and stemless wine glasses.
Here are some key details from the invoice:
- Invoice number: 61356291
- Date of issue: 09/06/2012
- Seller: Chapman, Kim and Green
- Buyer: Rodriguez-Stevens
- VAT rate: 10%
The invoice includes several items with their respective quantities, unit measures (UM), net prices, net worth, gross worth, and taxes. The summary section provides the total net worth, VAT amount, and gross worth of the invoice.
==================================================
Time to retrieve answer: 37.36918904201593
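For orientation, a minimal sketch of the query path a script like main.py typically implements: connect to the same Weaviate index, wire LlamaIndex to the local Ollama LLM, and answer the question passed on the command line. The model name "llama2", index name "Docs", and URL are illustrative assumptions; the actual template reads such values from config.yml.

# main.py sketch: answer a command-line question against the Weaviate-backed index via Ollama.
import sys
import time

import weaviate
from llama_index import ServiceContext, VectorStoreIndex
from llama_index.llms import Ollama
from llama_index.vector_stores import WeaviateVectorStore

client = weaviate.Client("http://localhost:8080")
vector_store = WeaviateVectorStore(weaviate_client=client, index_name="Docs")
llm = Ollama(model="llama2")  # served by the locally installed Ollama
service_context = ServiceContext.from_defaults(llm=llm, embed_model="local")
index = VectorStoreIndex.from_vector_store(vector_store, service_context=service_context)

start = time.perf_counter()
response = index.as_query_engine().query(sys.argv[1])  # retrieve, rank, and generate locally
print(response)
print(f"Time to retrieve answer: {time.perf_counter() - start}")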
You can find more sample prompts in the included prompts file to test the template application. Once you have read through the codebase, you can extend the RAG pipeline to your specific needs.
Apache 2.0
~ Tyrell Perera