This project builds a medical chatbot that retrieves information from a medical PDF book, using LangChain for document processing and Pinecone for efficient vector retrieval.
```python
from langchain_community.embeddings import HuggingFaceEmbeddings
from langchain_community.document_loaders import PyPDFLoader
from langchain.text_splitter import RecursiveCharacterTextSplitter


def create_knowledge_base(pdf_path):
    # Load the PDF
    loader = PyPDFLoader(pdf_path)
    documents = loader.load()

    # Split the text into overlapping chunks
    text_splitter = RecursiveCharacterTextSplitter(chunk_size=500, chunk_overlap=20)
    text_chunks = text_splitter.split_documents(documents)

    # Load the embedding model (or your chosen model)
    embeddings = HuggingFaceEmbeddings(model_name="sentence-transformers/all-MiniLM-L6-v2")

    # Store each chunk's text and embedding in a list of dictionaries
    knowledge_base = []
    for chunk in text_chunks:
        chunk_embedding = embeddings.embed_query(chunk.page_content)
        knowledge_base.append({
            "text": chunk.page_content,
            "embedding": chunk_embedding,
        })
    return knowledge_base
```
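For a quick sanity check, the function can be called directly; the path below is only an illustration, not a file shipped with the repository:

```python
# Illustrative path -- point this at your own medical PDF
kb = create_knowledge_base("data/medical_book.pdf")
print(f"{len(kb)} chunks, embedding dimension {len(kb[0]['embedding'])}")
```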
```python
from pinecone import Pinecone
from dotenv import load_dotenv
import os


def store_knowledge_base_in_pinecone(knowledge_base):
    load_dotenv()
    PINECONE_API_KEY = os.getenv("PINECONE_API_KEY")
    PINECONE_INDEX_NAME = os.getenv("PINECONE_INDEX_NAME")

    # Connect to Pinecone (recent clients only need the API key;
    # PINECONE_ENV in .env is only required by older client versions)
    pc = Pinecone(api_key=PINECONE_API_KEY)
    index = pc.Index(PINECONE_INDEX_NAME)

    # Upsert each chunk: the embedding as the vector, the text as metadata
    vectors = [
        {
            "id": str(i),
            "values": kb["embedding"],
            "metadata": {"text": kb["text"]},
        }
        for i, kb in enumerate(knowledge_base)
    ]
    index.upsert(vectors=vectors)

    print(f"Knowledge base stored in Pinecone index: {PINECONE_INDEX_NAME}")
```
This section is a placeholder: full chatbot development requires additional libraries such as Rasa or Dialogflow. Here's a basic outline to illustrate the concept.
```python
def chatbot_loop():
    # The same embedding model used to build the knowledge base
    embeddings = HuggingFaceEmbeddings(model_name="sentence-transformers/all-MiniLM-L6-v2")

    while True:
        user_query = input("Ask me a medical question (or type 'quit' to exit): ")
        if user_query.lower() == "quit":
            break

        # Light clean-up of the user query
        processed_query = user_query.strip()

        # Generate an embedding for the query
        query_embedding = embeddings.embed_query(processed_query)

        # Retrieve similar text snippets from Pinecone (see the helper above)
        similar_results = retrieve_from_pinecone(query_embedding)

        # Present the relevant snippets to the user
        if similar_results:
            for result in similar_results:
                print(f"Relevant Information: {result['text']}")
        else:
            print("Sorry, I couldn't find any information related to your question.")
```

This project showcases the potential of LangChain and Pinecone for building a medical chatbot that offers an accessible, efficient way to search and understand medical information. Adapt and expand this outline to fit your specific medical PDF book and the functionality you need.
To set up the LangChain Pinecone Vector Store project, follow these steps:

Clone the repository:

```bash
git clone https://github.com/<username>/<repository>.git
cd <repository>
```

Install dependencies:

```bash
pip install -r requirements.txt
```

Configure environment variables: create a .env file in the root directory and specify the following variables:

```
PINECONE_API_KEY=<your_pinecone_api_key>
PINECONE_ENV=<pinecone_environment>
PINECONE_INDEX_NAME=<pinecone_index_name>
```
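The store_vectors.py script run in the next step is not reproduced in this README; presumably it chains the two functions above. A minimal sketch under that assumption (the PDF path is a placeholder):

```python
# store_vectors.py -- hypothetical entry point (name taken from the command below).
# Assumes create_knowledge_base and store_knowledge_base_in_pinecone are defined in
# or imported into this file.

if __name__ == "__main__":
    kb = create_knowledge_base("data/medical_book.pdf")  # placeholder path
    store_knowledge_base_in_pinecone(kb)
```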
To store vectors in the Pinecone Vector Database, execute the following command:

```bash
python store_vectors.py
```

Contributions to the LangChain Pinecone Vector Store project are encouraged and appreciated! If you have ideas for enhancements, bug fixes, or new features, please submit a pull request. Be sure to follow the contribution guidelines outlined in the repository.
This project is licensed under the MIT License, which permits free use, distribution, and modification subject to the terms and conditions specified in the license.