RAG_llama3.3
1.0.0
RAG_llama3.3 is a Retrieval-Augmented Generation (RAG) system built on the Llama 3.3 language model. The project combines vector-based retrieval with modern natural language processing (NLP) techniques to enable accurate, context-aware question answering.
Back-End: Python with LangChain for orchestration.
Database: Pinecone for vector search and index management.
Model: Llama 3.3 via ChatGroq for generative tasks (model name: llama-3.3-70b-versatile).
Embeddings: HuggingFace sentence-transformer embeddings for semantic search (model name: all-MiniLM-L6-v2).