This repository hosts the code of LightRAG. The structure of the code is based on nano-graphrag.

*Figure 1: LightRAG indexing flowchart.*
*Figure 2: LightRAG retrieval and querying flowchart.*
Install from source:

```bash
cd LightRAG
pip install -e .
```

Or install from PyPI:

```bash
pip install lightrag-hku
```

To get started, set your OpenAI API key and download the demo text used in the examples:

```bash
export OPENAI_API_KEY="sk-..."
curl https://raw.githubusercontent.com/gusye1234/nano-graphrag/main/tests/mock_data.txt > ./book.txt
```

Use the Python snippet below (in a script) to initialize LightRAG and perform queries:
```python
import os
from lightrag import LightRAG, QueryParam
from lightrag.llm import gpt_4o_mini_complete, gpt_4o_complete

#########
# Uncomment the two lines below if running in a Jupyter notebook,
# to handle the async nature of rag.insert()
# import nest_asyncio
# nest_asyncio.apply()
#########

WORKING_DIR = "./dickens"

if not os.path.exists(WORKING_DIR):
    os.mkdir(WORKING_DIR)

rag = LightRAG(
    working_dir=WORKING_DIR,
    llm_model_func=gpt_4o_mini_complete  # Use gpt_4o_mini_complete LLM model
    # llm_model_func=gpt_4o_complete  # Optionally, use a stronger model
)

with open("./book.txt") as f:
    rag.insert(f.read())

# Perform naive search
print(rag.query("What are the top themes in this story?", param=QueryParam(mode="naive")))

# Perform local search
print(rag.query("What are the top themes in this story?", param=QueryParam(mode="local")))

# Perform global search
print(rag.query("What are the top themes in this story?", param=QueryParam(mode="global")))

# Perform hybrid search
print(rag.query("What are the top themes in this story?", param=QueryParam(mode="hybrid")))
```

The same pattern works with any OpenAI-compatible API: define an LLM completion function and an embedding function, then pass them to LightRAG. The example below targets Upstage's Solar API:

```python
import os
import numpy as np

from lightrag import LightRAG
from lightrag.llm import openai_complete_if_cache, openai_embedding
from lightrag.utils import EmbeddingFunc

async def llm_model_func(
    prompt, system_prompt=None, history_messages=[], **kwargs
) -> str:
    return await openai_complete_if_cache(
        "solar-mini",
        prompt,
        system_prompt=system_prompt,
        history_messages=history_messages,
        api_key=os.getenv("UPSTAGE_API_KEY"),
        base_url="https://api.upstage.ai/v1/solar",
        **kwargs
    )

async def embedding_func(texts: list[str]) -> np.ndarray:
    return await openai_embedding(
        texts,
        model="solar-embedding-1-large-query",
        api_key=os.getenv("UPSTAGE_API_KEY"),
        base_url="https://api.upstage.ai/v1/solar"
    )

rag = LightRAG(
    working_dir=WORKING_DIR,
    llm_model_func=llm_model_func,
    embedding_func=EmbeddingFunc(
        embedding_dim=4096,
        max_token_size=8192,
        func=embedding_func
    )
)
```

To use Hugging Face models instead, pass `hf_model_complete` as the LLM function and wrap a Hugging Face embedding model in `EmbeddingFunc`:

```python
from lightrag.llm import hf_model_complete, hf_embedding
from transformers import AutoModel, AutoTokenizer
from lightrag.utils import EmbeddingFunc

# Initialize LightRAG with a Hugging Face model
rag = LightRAG(
    working_dir=WORKING_DIR,
    llm_model_func=hf_model_complete,  # Use Hugging Face model for text generation
    llm_model_name='meta-llama/Llama-3.1-8B-Instruct',  # Model name from Hugging Face
    # Use Hugging Face embedding function
    embedding_func=EmbeddingFunc(
        embedding_dim=384,
        max_token_size=5000,
        func=lambda texts: hf_embedding(
            texts,
            tokenizer=AutoTokenizer.from_pretrained("sentence-transformers/all-MiniLM-L6-v2"),
            embed_model=AutoModel.from_pretrained("sentence-transformers/all-MiniLM-L6-v2")
        )
    ),
)
```

If you want to use Ollama models, you need to pull the model you plan to use as well as an embedding model, for example `nomic-embed-text`.
Then you only need to set up LightRAG as follows:
```python
from lightrag.llm import ollama_model_complete, ollama_embedding
from lightrag.utils import EmbeddingFunc

# Initialize LightRAG with an Ollama model
rag = LightRAG(
    working_dir=WORKING_DIR,
    llm_model_func=ollama_model_complete,  # Use Ollama model for text generation
    llm_model_name='your_model_name',  # Your model name
    # Use Ollama embedding function
    embedding_func=EmbeddingFunc(
        embedding_dim=768,
        max_token_size=8192,
        func=lambda texts: ollama_embedding(
            texts,
            embed_model="nomic-embed-text"
        )
    ),
)
```

To use Neo4J for storage, first export the connection credentials:

```bash
export NEO4J_URI="neo4j://localhost:7687"
export NEO4J_USERNAME="neo4j"
export NEO4J_PASSWORD="password"
```
When you launch the project, be sure to override the default KG, NetworkX, by specifying `kg="Neo4JStorage"`:
```python
# Note: Default settings use NetworkX
# Initialize LightRAG with the Neo4J implementation.
WORKING_DIR = "./local_neo4jWorkDir"

rag = LightRAG(
    working_dir=WORKING_DIR,
    llm_model_func=gpt_4o_mini_complete,  # Use gpt_4o_mini_complete LLM model
    kg="Neo4JStorage",  # <----------- override KG default
    log_level="DEBUG"   # <----------- override log_level default
)
```

See `test_neo4j.py` for a working example.
For LightRAG to work, the context size should be at least 32k tokens. By default, Ollama models have a context size of 8k. You can increase it in one of two ways:

1. Increase `num_ctx` in the Modelfile. Pull the model and dump its Modelfile:

```bash
ollama pull qwen2
ollama show --modelfile qwen2 > Modelfile
```

Add the following line to the Modelfile:

```
PARAMETER num_ctx 32768
```

Then create the modified model:

```bash
ollama create -f Modelfile qwen2m
```

2. Set `num_ctx` via the Ollama API. You can use the `llm_model_kwargs` param to configure Ollama:
```python
rag = LightRAG(
    working_dir=WORKING_DIR,
    llm_model_func=ollama_model_complete,  # Use Ollama model for text generation
    llm_model_name='your_model_name',  # Your model name
    llm_model_kwargs={"options": {"num_ctx": 32768}},
    # Use Ollama embedding function
    embedding_func=EmbeddingFunc(
        embedding_dim=768,
        max_token_size=8192,
        func=lambda texts: ollama_embedding(
            texts,
            embed_model="nomic-embed-text"
        )
    ),
)
```

There is `examples/lightrag_ollama_demo.py`, which uses the `gemma2:2b` model, runs only 4 requests in parallel, and sets the context size to 32k.

To run this experiment on a low-RAM GPU, you should select a small model and tune the context window (increasing the context increases memory consumption). For example, running this Ollama example on a repurposed mining GPU with 6 GB of RAM required setting the context size to 26k while using `gemma2:2b`. It was able to find 197 entities and 19 relations on `book.txt`.
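A minimal sketch of that low-VRAM configuration, assuming the same Ollama setup as above; the 26k context and `gemma2:2b` come from the description, and everything else mirrors the earlier example:

```python
rag = LightRAG(
    working_dir=WORKING_DIR,
    llm_model_func=ollama_model_complete,
    llm_model_name='gemma2:2b',
    llm_model_kwargs={"options": {"num_ctx": 26000}},  # 26k context to fit a 6 GB GPU
    embedding_func=EmbeddingFunc(
        embedding_dim=768,
        max_token_size=8192,
        func=lambda texts: ollama_embedding(texts, embed_model="nomic-embed-text"),
    ),
)
```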
Query behavior is controlled by `QueryParam`:

```python
class QueryParam:
    mode: Literal["local", "global", "hybrid", "naive"] = "global"
    only_need_context: bool = False
    response_type: str = "Multiple Paragraphs"
    # Number of top-k items to retrieve; corresponds to entities in "local" mode and relationships in "global" mode.
    top_k: int = 60
    # Number of tokens for the original chunks.
    max_token_for_text_unit: int = 4000
    # Number of tokens for the relationship descriptions
    max_token_for_global_context: int = 4000
    # Number of tokens for the entity descriptions
    max_token_for_local_context: int = 4000
```
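For example, these fields can be combined in a single query; this is an illustrative sketch, not default values:

```python
# Retrieve more items and request a shorter answer (illustrative values).
param = QueryParam(mode="hybrid", top_k=100, response_type="Single Paragraph")
print(rag.query("What are the top themes in this story?", param=param))
```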
rag . insert ([ "TEXT1" , "TEXT2" ,...]) # Incremental Insert: Insert new documents into an existing LightRAG instance
rag = LightRAG (
working_dir = WORKING_DIR ,
llm_model_func = llm_model_func ,
embedding_func = EmbeddingFunc (
embedding_dim = embedding_dimension ,
max_token_size = 8192 ,
func = embedding_func ,
),
)
with open ( "./newText.txt" ) as f :
rag . insert ( f . read ()) rag = LightRAG (
working_dir = WORKING_DIR ,
llm_model_func = llm_model_func ,
embedding_func = EmbeddingFunc (
embedding_dim = embedding_dimension ,
max_token_size = 8192 ,
func = embedding_func ,
),
)

custom_kg = {
    "entities": [
        {
            "entity_name": "CompanyA",
            "entity_type": "Organization",
            "description": "A major technology company",
            "source_id": "Source1"
        },
        {
            "entity_name": "ProductX",
            "entity_type": "Product",
            "description": "A popular product developed by CompanyA",
            "source_id": "Source1"
        }
    ],
    "relationships": [
        {
            "src_id": "CompanyA",
            "tgt_id": "ProductX",
            "description": "CompanyA develops ProductX",
            "keywords": "develop, produce",
            "weight": 1.0,
            "source_id": "Source1"
        }
    ]
}

rag.insert_custom_kg(custom_kg)
```

```python
# Delete Entity: Delete entities by their names
rag = LightRAG(
    working_dir=WORKING_DIR,
    llm_model_func=llm_model_func,
    embedding_func=EmbeddingFunc(
        embedding_dim=embedding_dimension,
        max_token_size=8192,
        func=embedding_func,
    ),
)

rag.delete_by_entity("Project Gutenberg")
```

textract supports reading file types such as TXT, DOCX, PPTX, CSV, and PDF:
```python
import textract

file_path = 'TEXT.pdf'
text_content = textract.process(file_path)

rag.insert(text_content.decode('utf-8'))
```

To produce an interactive HTML visualization of the knowledge graph, see `examples/graph_visual_with_html.py`:

```python
import networkx as nx
from pyvis.network import Network

# Load the GraphML file
G = nx.read_graphml('./dickens/graph_chunk_entity_relation.graphml')

# Create a Pyvis network
net = Network(notebook=True)

# Convert NetworkX graph to Pyvis network
net.from_nx(G)

# Save and display the network
net.show('knowledge_graph.html')
```

To export the graph to Neo4j, see `examples/graph_visual_with_neo4j.py`:

```python
import os
import json

from lightrag.utils import xml_to_json
from neo4j import GraphDatabase

# Constants
WORKING_DIR = "./dickens"
BATCH_SIZE_NODES = 500
BATCH_SIZE_EDGES = 100

# Neo4j connection credentials
NEO4J_URI = "bolt://localhost:7687"
NEO4J_USERNAME = "neo4j"
NEO4J_PASSWORD = "your_password"

def convert_xml_to_json(xml_path, output_path):
    """Converts XML file to JSON and saves the output."""
    if not os.path.exists(xml_path):
        print(f"Error: File not found - {xml_path}")
        return None

    json_data = xml_to_json(xml_path)
    if json_data:
        with open(output_path, 'w', encoding='utf-8') as f:
            json.dump(json_data, f, ensure_ascii=False, indent=2)
        print(f"JSON file created: {output_path}")
        return json_data
    else:
        print("Failed to create JSON data")
        return None

def process_in_batches(tx, query, data, batch_size):
    """Process data in batches and execute the given query."""
    for i in range(0, len(data), batch_size):
        batch = data[i : i + batch_size]
        tx.run(query, {"nodes": batch} if "nodes" in query else {"edges": batch})

def main():
    # Paths
    xml_file = os.path.join(WORKING_DIR, 'graph_chunk_entity_relation.graphml')
    json_file = os.path.join(WORKING_DIR, 'graph_data.json')

    # Convert XML to JSON
    json_data = convert_xml_to_json(xml_file, json_file)
    if json_data is None:
        return

    # Load nodes and edges
    nodes = json_data.get('nodes', [])
    edges = json_data.get('edges', [])

    # Neo4j queries
    create_nodes_query = """
    UNWIND $nodes AS node
    MERGE (e:Entity {id: node.id})
    SET e.entity_type = node.entity_type,
        e.description = node.description,
        e.source_id = node.source_id,
        e.displayName = node.id
    REMOVE e:Entity
    WITH e, node
    CALL apoc.create.addLabels(e, [node.entity_type]) YIELD node AS labeledNode
    RETURN count(*)
    """

    create_edges_query = """
    UNWIND $edges AS edge
    MATCH (source {id: edge.source})
    MATCH (target {id: edge.target})
    WITH source, target, edge,
         CASE
             WHEN edge.keywords CONTAINS 'lead' THEN 'lead'
             WHEN edge.keywords CONTAINS 'participate' THEN 'participate'
             WHEN edge.keywords CONTAINS 'uses' THEN 'uses'
             WHEN edge.keywords CONTAINS 'located' THEN 'located'
             WHEN edge.keywords CONTAINS 'occurs' THEN 'occurs'
             ELSE REPLACE(SPLIT(edge.keywords, ',')[0], '"', '')
         END AS relType
    CALL apoc.create.relationship(source, relType, {
        weight: edge.weight,
        description: edge.description,
        keywords: edge.keywords,
        source_id: edge.source_id
    }, target) YIELD rel
    RETURN count(*)
    """

    set_displayname_and_labels_query = """
    MATCH (n)
    SET n.displayName = n.id
    WITH n
    CALL apoc.create.setLabels(n, [n.entity_type]) YIELD node
    RETURN count(*)
    """

    # Create a Neo4j driver
    driver = GraphDatabase.driver(NEO4J_URI, auth=(NEO4J_USERNAME, NEO4J_PASSWORD))

    try:
        # Execute queries in batches
        with driver.session() as session:
            # Insert nodes in batches
            session.execute_write(process_in_batches, create_nodes_query, nodes, BATCH_SIZE_NODES)

            # Insert edges in batches
            session.execute_write(process_in_batches, create_edges_query, edges, BATCH_SIZE_EDGES)

            # Set displayName and labels
            session.run(set_displayname_and_labels_query)
    except Exception as e:
        print(f"Error occurred: {e}")
    finally:
        driver.close()

if __name__ == "__main__":
    main()
```

LightRAG initialization parameters:

| Parameter | Type | Explanation | Default |
|---|---|---|---|
| working_dir | str | Directory where the cache will be stored | lightrag_cache+timestamp |
| kv_storage | str | Storage type for documents and text chunks. Supported types: JsonKVStorage, OracleKVStorage | JsonKVStorage |
| vector_storage | str | Storage type for embedding vectors. Supported types: NanoVectorDBStorage, OracleVectorDBStorage | NanoVectorDBStorage |
| graph_storage | str | Storage type for graph edges and nodes. Supported types: NetworkXStorage, Neo4JStorage, OracleGraphStorage | NetworkXStorage |
| log_level | | Log level for application runtime | logging.DEBUG |
| chunk_token_size | int | Maximum token size per chunk when splitting documents | 1200 |
| chunk_overlap_token_size | int | Overlap token size between two chunks when splitting documents | 100 |
| tiktoken_model_name | str | Model name for the tiktoken encoder used to calculate token counts | gpt-4o-mini |
| entity_extract_max_gleaning | int | Number of loops in the entity extraction process, appending history messages | 1 |
| entity_summary_to_max_tokens | int | Maximum token size for each entity summary | 500 |
| node_embedding_algorithm | str | Algorithm for node embedding (currently not used) | node2vec |
| node2vec_params | dict | Parameters for node embedding | {"dimensions": 1536, "num_walks": 10, "walk_length": 40, "window_size": 2, "iterations": 3, "random_seed": 3} |
| embedding_func | EmbeddingFunc | Function to generate embedding vectors from text | openai_embedding |
| embedding_batch_num | int | Maximum batch size for embedding processes (multiple texts sent per batch) | 32 |
| embedding_func_max_async | int | Maximum number of concurrent asynchronous embedding processes | 16 |
| llm_model_func | callable | Function for LLM generation | gpt_4o_mini_complete |
| llm_model_name | str | LLM model name for generation | meta-llama/Llama-3.2-1B-Instruct |
| llm_model_max_token_size | int | Maximum token size for LLM generation (affects entity relation summaries) | 32768 |
| llm_model_max_async | int | Maximum number of concurrent asynchronous LLM processes | 16 |
| llm_model_kwargs | dict | Additional parameters for LLM generation | |
| vector_db_storage_cls_kwargs | dict | Additional parameters for the vector database (currently not used) | |
| enable_llm_cache | bool | If TRUE, LLM results are stored in cache; repeated prompts return cached responses | TRUE |
| addon_params | dict | Additional parameters, e.g., {"example_number": 1, "language": "Simplified Chinese"}: sets the example limit and output language | example_number: all examples, language: English |
| convert_response_to_json_func | callable | Not used | convert_response_to_json |
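As an illustration, several of these parameters can be overridden when constructing `LightRAG`; this is a sketch, with values taken from the defaults and examples in the table above:

```python
rag = LightRAG(
    working_dir=WORKING_DIR,
    llm_model_func=gpt_4o_mini_complete,
    chunk_token_size=1200,          # table default
    chunk_overlap_token_size=100,   # table default
    enable_llm_cache=True,
    addon_params={"example_number": 1, "language": "Simplified Chinese"},
)
```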
LightRAG also provides a FastAPI-based server implementation for RESTful API access to RAG operations. This allows you to run LightRAG as a service and interact with it through HTTP requests.
Install the dependencies:

```bash
pip install fastapi uvicorn pydantic
```

Configure the environment variables:

```bash
export RAG_DIR="your_index_directory"            # Optional: Defaults to "index_default"
export OPENAI_BASE_URL="Your OpenAI API base URL" # Optional: Defaults to "https://api.openai.com/v1"
export OPENAI_API_KEY="Your OpenAI API key"       # Required
export LLM_MODEL="Your LLM model"                 # Optional: Defaults to "gpt-4o-mini"
export EMBEDDING_MODEL="Your embedding model"     # Optional: Defaults to "text-embedding-3-large"
```

Start the server:

```bash
python examples/lightrag_api_openai_compatible_demo.py
```

The server will start on http://0.0.0.0:8020.
The API server provides the following endpoints:
**POST /query** — query the RAG system:

```json
{
  "query": "Your question here",
  "mode": "hybrid",
  "only_need_context": true
}
```

`mode` can be "naive", "local", "global", or "hybrid". `only_need_context` is optional and defaults to false; if true, only the retrieved context is returned, otherwise the LLM answer is returned.

```bash
curl -X POST "http://127.0.0.1:8020/query" \
     -H "Content-Type: application/json" \
     -d '{"query": "What are the main themes?", "mode": "hybrid"}'
```

**POST /insert** — insert text into the RAG system:

```json
{
  "text": "Your text content here"
}
```

```bash
curl -X POST "http://127.0.0.1:8020/insert" \
     -H "Content-Type: application/json" \
     -d '{"text": "Content to be inserted into RAG"}'
```

**POST /insert_file** — insert a file:

```json
{
  "file_path": "path/to/your/file.txt"
}
```

```bash
curl -X POST "http://127.0.0.1:8020/insert_file" \
     -H "Content-Type: application/json" \
     -d '{"file_path": "./book.txt"}'
```

**GET /health** — health check:

```bash
curl -X GET "http://127.0.0.1:8020/health"
```

The API server can be configured using environment variables:

- `RAG_DIR`: directory for storing the RAG index (default: "index_default")

The API also includes comprehensive error handling.
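For instance, the `/query` endpoint can be called from Python; this is a minimal client sketch, assuming the server is running locally on port 8020 as above:

```python
import requests

# Query the running LightRAG API server (endpoint and payload as documented above).
response = requests.post(
    "http://127.0.0.1:8020/query",
    json={"query": "What are the main themes?", "mode": "hybrid"},
)
response.raise_for_status()
print(response.json())
```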
The dataset used in LightRAG can be downloaded from TommyChien/UltraDomain.

LightRAG uses the following prompt to generate high-level queries, with the corresponding code in `example/generate_query.py`:
```
Given the following description of a dataset:

{description}

Please identify 5 potential users who would engage with this dataset. For each user, list 5 tasks they would perform with this dataset. Then, for each (user, task) combination, generate 5 questions that require a high-level understanding of the entire dataset.

Output the results in the following structure:
- User 1: [user description]
    - Task 1: [task description]
        - Question 1:
        - Question 2:
        - Question 3:
        - Question 4:
        - Question 5:
    - Task 2: [task description]
        ...
    - Task 5: [task description]
- User 2: [user description]
    ...
- User 5: [user description]
    ...
```

To evaluate the performance of two RAG systems on high-level queries, LightRAG uses the following prompt, with the specific code available in `example/batch_eval.py`:
```
---Role---
You are an expert tasked with evaluating two answers to the same question based on three criteria: **Comprehensiveness**, **Diversity**, and **Empowerment**.

---Goal---
You will evaluate two answers to the same question based on three criteria: **Comprehensiveness**, **Diversity**, and **Empowerment**.

- **Comprehensiveness**: How much detail does the answer provide to cover all aspects and details of the question?
- **Diversity**: How varied and rich is the answer in providing different perspectives and insights on the question?
- **Empowerment**: How well does the answer help the reader understand and make informed judgments about the topic?

For each criterion, choose the better answer (either Answer 1 or Answer 2) and explain why. Then, select an overall winner based on these three categories.

Here is the question:
{query}

Here are the two answers:

**Answer 1:**
{answer1}

**Answer 2:**
{answer2}

Evaluate both answers using the three criteria listed above and provide detailed explanations for each criterion.

Output your evaluation in the following JSON format:

{{
    "Comprehensiveness": {{
        "Winner": "[Answer 1 or Answer 2]",
        "Explanation": "[Provide explanation here]"
    }},
    "Empowerment": {{
        "Winner": "[Answer 1 or Answer 2]",
        "Explanation": "[Provide explanation here]"
    }},
    "Overall Winner": {{
        "Winner": "[Answer 1 or Answer 2]",
        "Explanation": "[Summarize why this answer is the overall winner based on the three criteria]"
    }}
}}
```

Overall performance table:

| | Agriculture | | CS | | Legal | | Mix | |
|---|---|---|---|---|---|---|---|---|
| | NaiveRAG | LightRAG | NaiveRAG | LightRAG | NaiveRAG | LightRAG | NaiveRAG | LightRAG |
| Comprehensiveness | 32.4% | 67.6% | 38.4% | 61.6% | 16.4% | 83.6% | 38.8% | 61.2% |
| Diversity | 23.6% | 76.4% | 38.0% | 62.0% | 13.6% | 86.4% | 32.4% | 67.6% |
| Empowerment | 32.4% | 67.6% | 38.8% | 61.2% | 16.4% | 83.6% | 42.8% | 57.2% |
| Overall | 32.4% | 67.6% | 38.8% | 61.2% | 15.2% | 84.8% | 40.0% | 60.0% |
| | RQ-RAG | LightRAG | RQ-RAG | LightRAG | RQ-RAG | LightRAG | RQ-RAG | LightRAG |
| Comprehensiveness | 31.6% | 68.4% | 38.8% | 61.2% | 15.2% | 84.8% | 39.2% | 60.8% |
| Diversity | 29.2% | 70.8% | 39.2% | 60.8% | 11.6% | 88.4% | 30.8% | 69.2% |
| Empowerment | 31.6% | 68.4% | 36.4% | 63.6% | 15.2% | 84.8% | 42.4% | 57.6% |
| Overall | 32.4% | 67.6% | 38.0% | 62.0% | 14.4% | 85.6% | 40.0% | 60.0% |
| | HyDE | LightRAG | HyDE | LightRAG | HyDE | LightRAG | HyDE | LightRAG |
| Comprehensiveness | 26.0% | 74.0% | 41.6% | 58.4% | 26.8% | 73.2% | 40.4% | 59.6% |
| Diversity | 24.0% | 76.0% | 38.8% | 61.2% | 20.0% | 80.0% | 32.4% | 67.6% |
| Empowerment | 25.2% | 74.8% | 40.8% | 59.2% | 26.0% | 74.0% | 46.0% | 54.0% |
| Overall | 24.8% | 75.2% | 41.6% | 58.4% | 26.4% | 73.6% | 42.4% | 57.6% |
| | GraphRAG | LightRAG | GraphRAG | LightRAG | GraphRAG | LightRAG | GraphRAG | LightRAG |
| Comprehensiveness | 45.6% | 54.4% | 48.4% | 51.6% | 48.4% | 51.6% | 50.4% | 49.6% |
| Diversity | 22.8% | 77.2% | 40.8% | 59.2% | 26.4% | 73.6% | 36.0% | 64.0% |
| Empowerment | 41.2% | 58.8% | 45.2% | 54.8% | 43.6% | 56.4% | 50.8% | 49.2% |
| Overall | 45.2% | 54.8% | 48.0% | 52.0% | 47.2% | 52.8% | 50.4% | 49.6% |
All the code can be found in the `./reproduce` directory.

First, we need to extract the unique contexts in the datasets:
```python
import glob
import json
import os

def extract_unique_contexts(input_directory, output_directory):
    os.makedirs(output_directory, exist_ok=True)

    jsonl_files = glob.glob(os.path.join(input_directory, '*.jsonl'))
    print(f"Found {len(jsonl_files)} JSONL files.")

    for file_path in jsonl_files:
        filename = os.path.basename(file_path)
        name, ext = os.path.splitext(filename)
        output_filename = f"{name}_unique_contexts.json"
        output_path = os.path.join(output_directory, output_filename)

        unique_contexts_dict = {}

        print(f"Processing file: {filename}")

        try:
            with open(file_path, 'r', encoding='utf-8') as infile:
                for line_number, line in enumerate(infile, start=1):
                    line = line.strip()
                    if not line:
                        continue
                    try:
                        json_obj = json.loads(line)
                        context = json_obj.get('context')
                        if context and context not in unique_contexts_dict:
                            unique_contexts_dict[context] = None
                    except json.JSONDecodeError as e:
                        print(f"JSON decoding error in file {filename} at line {line_number}: {e}")
        except FileNotFoundError:
            print(f"File not found: {filename}")
            continue
        except Exception as e:
            print(f"An error occurred while processing file {filename}: {e}")
            continue

        unique_contexts_list = list(unique_contexts_dict.keys())
        print(f"There are {len(unique_contexts_list)} unique `context` entries in the file {filename}.")

        try:
            with open(output_path, 'w', encoding='utf-8') as outfile:
                json.dump(unique_contexts_list, outfile, ensure_ascii=False, indent=4)
            print(f"Unique `context` entries have been saved to: {output_filename}")
        except Exception as e:
            print(f"An error occurred while saving to the file {output_filename}: {e}")

    print("All files have been processed.")
```

For the extracted contexts, we insert them into the LightRAG system:
```python
import json
import time

def insert_text(rag, file_path):
    with open(file_path, mode='r') as f:
        unique_contexts = json.load(f)

    retries = 0
    max_retries = 3
    while retries < max_retries:
        try:
            rag.insert(unique_contexts)
            break
        except Exception as e:
            retries += 1
            print(f"Insertion failed, retrying ({retries}/{max_retries}), error: {e}")
            time.sleep(10)
    if retries == max_retries:
        print("Insertion failed after exceeding the maximum number of retries")
```

We extract tokens from the first half and the second half of each context in the dataset, then combine them as dataset descriptions to generate queries:
```python
from transformers import GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained('gpt2')

def get_summary(context, tot_tokens=2000):
    tokens = tokenizer.tokenize(context)
    half_tokens = tot_tokens // 2

    start_tokens = tokens[1000 : 1000 + half_tokens]
    end_tokens = tokens[-(1000 + half_tokens) : -1000]

    summary_tokens = start_tokens + end_tokens
    summary = tokenizer.convert_tokens_to_string(summary_tokens)

    return summary
```

For the queries generated in Step 2, we extract them and use them to query LightRAG:
```python
import re

def extract_queries(file_path):
    with open(file_path, 'r') as f:
        data = f.read()

    data = data.replace('**', '')

    queries = re.findall(r'- Question \d+: (.+)', data)

    return queries
```
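A minimal sketch of the querying step, assuming `rag` is an initialized LightRAG instance as above; the file path and mode are illustrative:

```python
# Run each extracted query through LightRAG and print the answers.
queries = extract_queries("./queries/agriculture_questions.txt")  # illustrative path
for q in queries:
    print(rag.query(q, param=QueryParam(mode="hybrid")))
```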
Project structure:

```
.
├── examples
│   ├── batch_eval.py
│   ├── generate_query.py
│   ├── graph_visual_with_html.py
│   ├── graph_visual_with_neo4j.py
│   ├── lightrag_api_openai_compatible_demo.py
│   ├── lightrag_azure_openai_demo.py
│   ├── lightrag_bedrock_demo.py
│   ├── lightrag_hf_demo.py
│   ├── lightrag_lmdeploy_demo.py
│   ├── lightrag_ollama_demo.py
│   ├── lightrag_openai_compatible_demo.py
│   ├── lightrag_openai_demo.py
│   ├── lightrag_siliconcloud_demo.py
│   └── vram_management_demo.py
├── lightrag
│   ├── kg
│   │   ├── __init__.py
│   │   └── neo4j_impl.py
│   ├── __init__.py
│   ├── base.py
│   ├── lightrag.py
│   ├── llm.py
│   ├── operate.py
│   ├── prompt.py
│   ├── storage.py
│   └── utils.py
├── reproduce
│   ├── Step_0.py
│   ├── Step_1_openai_compatible.py
│   ├── Step_1.py
│   ├── Step_2.py
│   ├── Step_3_openai_compatible.py
│   └── Step_3.py
├── .gitignore
├── .pre-commit-config.yaml
├── Dockerfile
├── get_all_edges_nx.py
├── LICENSE
├── README.md
├── requirements.txt
├── setup.py
├── test_neo4j.py
└── test.py
```

Thanks to all our contributors!
```bibtex
@article{guo2024lightrag,
  title={LightRAG: Simple and Fast Retrieval-Augmented Generation},
  author={Zirui Guo and Lianghao Xia and Yanhua Yu and Tu Ao and Chao Huang},
  year={2024},
  eprint={2410.05779},
  archivePrefix={arXiv},
  primaryClass={cs.IR}
}
```

Thank you for your interest in our work!