Personal-Graph is a Python library for creating, managing, and querying knowledge graphs. It aims to help solve the working-memory and long-term-memory challenges in AI systems, especially large language models (LLMs).
Install Personal-Graph using pip:
```shell
pip install personal-graph
```

```python
from personal_graph import GraphDB
from personal_graph.text import text_to_graph
from personal_graph.vector_store import VliteVSS

vector_store = VliteVSS(collection="memories")
graph = GraphDB(vector_store=vector_store)

# Insert information into the graph
g = text_to_graph("Alice is Bob's sister. Bob works at Google.")
graph.insert_graph(g)

# Retrieve relevant information from the graph
query = "Who is Alice?"
results = graph.search(query)
print(results)

# Use the retrieved information to answer questions
print(f"Question: {query}")
print("Answer: Alice is Bob's sister.")

query = "Where does Bob work?"
results = graph.search(query)
print(results)
print(f"Question: {query}")
print("Answer: Bob works at Google.")
```

In this example, we insert information about Alice and Bob into the knowledge graph. We then use the search method to retrieve relevant information based on a given query. The retrieved information can be used as part of the AI's working memory to answer questions and provide context for further interactions.
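The retrieve-then-answer loop above can be sketched without the library at all. The following is a minimal, hypothetical illustration: the `retrieve` helper is a naive keyword-overlap stand-in for `graph.search`, not part of personal-graph.

```python
# Hypothetical stand-in for graph.search: rank stored facts by
# how many words they share with the query, return the top k.
def retrieve(query, facts, k=1):
    def overlap(fact):
        return len(set(query.lower().split()) & set(fact.lower().split()))
    return sorted(facts, key=overlap, reverse=True)[:k]

facts = [
    "Alice is Bob's sister.",
    "Bob works at Google.",
]

# The retrieved fact acts as working memory when forming an answer.
query = "Where does Bob work?"
context = retrieve(query, facts)
print(f"Question: {query}")
print(f"Context: {context[0]}")
```

A real vector store replaces the keyword overlap with embedding similarity, but the control flow — retrieve, then answer from the retrieved context — is the same.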
```python
from personal_graph import GraphDB
from personal_graph.vector_store import VliteVSS

vector_store = VliteVSS(collection="memories")
graph = GraphDB(vector_store=vector_store)

# Insert information about conversations with the user over time
graph.insert(
    text="User talked about their childhood dreams and aspirations.",
    attributes={
        "date": "2023-01-15",
        "topic": "childhood dreams",
        "depth_score": 3,
    })
graph.insert(
    text="User discussed their fears and insecurities in their current relationship.",
    attributes={
        "date": "2023-02-28",
        "topic": "relationship fears",
        "depth_score": 4,
    })
graph.insert(
    text="User shared their spiritual beliefs and existential questions.",
    attributes={
        "date": "2023-03-10",
        "topic": "spirituality and existence",
        "depth_score": 5,
    })
graph.insert(
    text="User mentioned their favorite hobbies and weekend activities.",
    attributes={
        "date": "2023-04-02",
        "topic": "hobbies",
        "depth_score": 2,
    })

# User queries about the deepest conversation
query = "What was the deepest conversation we've ever had?"
deepest_conversation = graph.search(query, sort_by="depth_score", descending=True, limit=1)
```

In this example, we store information about conversations with the user, including the date, topic, and a depth score. The depth score represents how meaningful the conversation was.
When the user asks about the deepest conversation, we search for the conversation with the highest depth score using the search method. We sort the results by depth score in descending order and limit the output to a single conversation.

If a conversation is found, the AI responds with the date and topic of the deepest conversation. If no conversation is found, the AI tells the user that it does not have enough information.

This example shows how personal-graph can be used to build long-term memory of user interactions and retrieve specific information based on criteria such as conversation depth.
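The respond-or-apologize logic described above can be sketched as follows. This is a hypothetical illustration: `find_deepest` stands in for `graph.search(query, sort_by="depth_score", descending=True, limit=1)`, operating on plain dicts rather than graph records.

```python
# Hypothetical stand-in for the sorted, limited search call.
def find_deepest(conversations):
    # Sort by depth_score in descending order, keep only the top result
    ranked = sorted(conversations, key=lambda c: c["depth_score"], reverse=True)
    return ranked[:1]

def answer_deepest_query(conversations):
    results = find_deepest(conversations)
    if results:
        top = results[0]
        return f"Our deepest conversation was on {top['date']}, about {top['topic']}."
    # No stored conversations: admit the gap instead of guessing
    return "I don't have enough information about our past conversations."

conversations = [
    {"date": "2023-01-15", "topic": "childhood dreams", "depth_score": 3},
    {"date": "2023-03-10", "topic": "spirituality and existence", "depth_score": 5},
]
print(answer_deepest_query(conversations))
```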
```python
from personal_graph import GraphDB
from personal_graph.text import text_to_graph
from personal_graph.vector_store import VliteVSS

vector_store = VliteVSS(collection="memories")
graphdb = GraphDB(vector_store=vector_store)

# Store a natural-language fact as a knowledge graph
nl_query = "Increased thirst, weight loss, increased hunger, and frequent urination are all symptoms of diabetes."
kg = text_to_graph(text=nl_query)
graphdb.insert_graph(kg)

# Turn a natural-language query into a graph and store it as well
search_query = "I am losing weight too frequently."
g = text_to_graph(search_query)
print(g)
graphdb.insert_graph(g)
```

```python
import os

import dspy
from personal_graph import GraphDB, PersonalRM

db = GraphDB()  # storage_db is in-memory sqlite, vector_db is in vlite
turbo = dspy.OpenAI(api_key=os.getenv("OPENAI_API_KEY"))
retriever = PersonalRM(graph=db, k=2)
dspy.settings.configure(lm=turbo, rm=retriever)


class GenerateAnswer(dspy.Signature):
    """Answer questions with short factoid answers."""

    context = dspy.InputField(desc="may contain relevant facts from user's graph")
    question = dspy.InputField()
    answer = dspy.OutputField(
        desc="a short answer to the question, deduced from the information found in the user's graph"
    )


class RAG(dspy.Module):
    def __init__(self, depth=3):
        super().__init__()
        self.retrieve = dspy.Retrieve(k=depth)
        self.generate_answer = dspy.ChainOfThought(GenerateAnswer)

    def forward(self, question):
        context = self.retrieve(question).passages
        prediction = self.generate_answer(context=context, question=question)
        return dspy.Prediction(context=context, answer=prediction.answer)


rag = RAG(depth=2)
response = rag("How is Jack related to James?")
print(response.answer)
```

```python
from personal_graph.graph import GraphDB
from personal_graph.graph_generator import OllamaTextToGraphParser
from personal_graph.database import SQLite
from personal_graph.vector_store import VliteVSS
from personal_graph.clients import OllamaClient, OllamaEmbeddingClient

phi3 = OllamaClient(model_name="phi3")
nomic_embed = OllamaEmbeddingClient(model_name="nomic-embed-text")

storage_db = SQLite(local_path="./local.db")
vector_store = VliteVSS(collection="./vectors")

graph_generator = OllamaTextToGraphParser(llm_client=phi3)
print(graph_generator)  # Should print the OllamaTextToGraphParser

with GraphDB(
    database=storage_db,
    vector_store=vector_store,
    graph_generator=graph_generator
) as db:
    print(db)
```

The following is only a sketch of the planned flow; it is a work in progress.
```python
graphdb = GraphDB(storage=db, vector_store=vector_store, graph_generator=graph_generator)
graphdb.load_dataset("KarateClub")

pyg_graph = graphdb.to_pyg()
updated_graph = model(pyg_graph)  # Run Neural Network algorithms here using PyG
graphdb.from_pyg(updated_graph)
```

This video best illustrates the personal-graph library.
For more details and API documentation, see the personal-graph documentation.

Contributions are welcome! Feel free to open issues for bugs and feature requests.

Personal-Graph is released under the MIT license.

Questions, feedback, or suggestions? Reach out at [email protected] or open an issue on GitHub.