An AI chatbot implementing Retrieval-Augmented Generation (RAG) with the Meta Llama-2-7B large language model, using LangChain for orchestration and Pinecone as the vector database. Resource used: The Gale Encyclopedia of Medicine.
The Pinecone vector database stores embeddings of the text chunks extracted from the book PDF. LangChain is used to build an LLMChain with a PromptTemplate: a similarity search against Pinecone retrieves the most relevant chunks, and the LLM then refines them into the final answer.
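For orientation, the retrieval flow can be sketched roughly as below. This is not the repository's exact code: the index name, embedding model, prompt wording, and local Llama-2 checkpoint path are assumptions, and the imports follow the older langchain 0.0.x / pinecone-client v2 APIs, which differ in newer releases.

```python
import pinecone
from langchain.embeddings import HuggingFaceEmbeddings
from langchain.vectorstores import Pinecone
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain
from langchain.llms import CTransformers

# Connect to the existing Pinecone index that holds the book's chunk embeddings.
pinecone.init(api_key="YOUR_PINECONE_API_KEY", environment="YOUR_PINECONE_ENV")
embeddings = HuggingFaceEmbeddings(model_name="sentence-transformers/all-MiniLM-L6-v2")
docsearch = Pinecone.from_existing_index("medicure-index", embeddings)  # index name is an assumption

# Retrieve the chunks most similar to the user's question.
query = "What are the symptoms of anemia?"
docs = docsearch.similarity_search(query, k=2)
context = "\n\n".join(doc.page_content for doc in docs)

# Prompt template that grounds the model's answer in the retrieved context.
prompt = PromptTemplate(
    template=(
        "Use the following medical context to answer the question.\n"
        "Context: {context}\n"
        "Question: {question}\n"
        "Answer:"
    ),
    input_variables=["context", "question"],
)

# Llama-2-7B loaded locally as a quantized GGML checkpoint via CTransformers
# (the file name and loader are assumptions; the repo may load the model differently).
llm = CTransformers(
    model="llama-2-7b-chat.ggmlv3.q4_0.bin",
    model_type="llama",
    config={"max_new_tokens": 512, "temperature": 0.7},
)

# LLMChain refines the retrieved context into the final answer.
chain = LLMChain(llm=llm, prompt=prompt)
print(chain.run(context=context, question=query))
```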
To run the web app locally, follow these steps:
1. Clone the repo:
git clone https://github.com/4darsh-Dev/medicure-rag-chatbot.git
2. Configure Poetry:
pip install poetry
poetry init
poetry shell
3. Install requirements:
poetry install
4. Run the Streamlit app:
poetry run streamlit run app.py
5. Access your app: After running the command, Streamlit will start a local web server and print a URL where you can access the app, typically http://localhost:8501. Open this URL in your web browser.
6. Stop the Streamlit server: Go back to the terminal or command prompt where it is running and press Ctrl + C to terminate the server.
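For reference, a Streamlit front-end such as app.py can be as simple as the sketch below; the page title, widget layout, and the answer_query placeholder are illustrative stand-ins rather than the repository's actual code.

```python
import streamlit as st

def answer_query(question: str) -> str:
    # Placeholder: in the real app this would call the Pinecone
    # similarity search + LLMChain pipeline sketched earlier.
    return f"(model answer for: {question})"

st.title("Medicure - Medical RAG Chatbot")

question = st.text_input("Ask a question about the Gale Encyclopedia of Medicine:")
if question:
    with st.spinner("Searching the encyclopedia..."):
        st.write(answer_query(question))
```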
If you have any feedback, please reach out to us at [email protected]