personal-graph
1.0.0
personal-graph is a Python library for creating, managing, and querying knowledge graphs. It is designed to help address working-memory and long-term-memory challenges in AI systems, particularly large language models (LLMs).
Install personal-graph using pip:
pip install personal-graph

The following example builds a small knowledge graph and queries it:

from personal_graph import GraphDB
from personal_graph.text import text_to_graph
from personal_graph.vector_store import VliteVSS

vector_store = VliteVSS(collection="memories")
graph = GraphDB(vector_store=vector_store)

# Insert information into the graph
g = text_to_graph("Alice is Bob's sister. Bob works at Google.")
graph.insert_graph(g)

# Retrieve relevant information from the graph
query = "Who is Alice?"
results = graph.search(query)
print(results)

# Use the retrieved information to answer questions
print(f"Question: {query}")
print(f"Answer: Alice is Bob's sister.")

query = "Where does Bob work?"
results = graph.search(query)
print(results)
print(f"Question: {query}")
print ( f"Answer: Bob works at Google." )在此示例中,我們將有關愛麗絲和鮑勃的信息插入知識圖。然後,我們使用搜索方法根據給定的查詢檢索相關信息。檢索到的信息可以用作AI工作記憶的一部分,以回答問題並為進一步的互動提供背景。
personal-graph can also be used to build long-term memory about interactions with a user:

from personal_graph import GraphDB
from personal_graph.vector_store import VliteVSS

vector_store = VliteVSS(collection="memories")
graph = GraphDB(vector_store=vector_store)

# Insert information about conversations with the user over time
graph.insert(
    text="User talked about their childhood dreams and aspirations.",
    attributes={
        "date": "2023-01-15",
        "topic": "childhood dreams",
        "depth_score": 3
    })

graph.insert(
    text="User discussed their fears and insecurities in their current relationship.",
    attributes={
        "date": "2023-02-28",
        "topic": "relationship fears",
        "depth_score": 4
    })

graph.insert(
    text="User shared their spiritual beliefs and existential questions.",
    attributes={
        "date": "2023-03-10",
        "topic": "spirituality and existence",
        "depth_score": 5
    })

graph.insert(
    text="User mentioned their favorite hobbies and weekend activities.",
    attributes={
        "date": "2023-04-02",
        "topic": "hobbies",
        "depth_score": 2
    })
# User queries about the deepest conversation
query = "What was the deepest conversation we've ever had?"
deepest_conversation = graph.search(query, sort_by="depth_score", descending=True, limit=1)

In this example, we store information about conversations with the user, including the date, topic, and a depth score. The depth score indicates how meaningful the conversation was.
When the user asks about the deepest conversation, we use the search method to find the conversation with the highest depth score, sorting the results by depth score in descending order and limiting the output to a single conversation.
If a conversation is found, the AI responds with the date and topic of the deepest conversation. If no conversation is found, the AI informs the user that it does not have enough information.
This example demonstrates how personal-graph can be used to build long-term memory about user interactions and to retrieve specific information based on criteria such as conversation depth.
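A sketch of that response logic is shown below. The exact shape of the value returned by search is not documented here, so treating it as a list of results whose attributes dictionary carries "date" and "topic" is an assumption made purely for illustration.

# Illustrative handling of the search result (the result shape is assumed)
if deepest_conversation:
    conversation = deepest_conversation[0]
    date = conversation.attributes.get("date")
    topic = conversation.attributes.get("topic")
    print(f"Our deepest conversation was on {date}, about {topic}.")
else:
    print("I don't have enough information about our past conversations.")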
text_to_graph can also turn free-form statements, such as medical observations, into graph structure before inserting them:

from personal_graph import GraphDB
from personal_graph.text import text_to_graph
from personal_graph.vector_store import VliteVSS

vector_store = VliteVSS(collection="memories")
graphdb = GraphDB(vector_store=vector_store)

nl_query = "Increased thirst, weight loss, increased hunger, and frequent urination are all symptoms of diabetes."
kg = text_to_graph(text=nl_query)
graphdb.insert_graph(kg)

search_query = "I am losing weight too frequently."
g = text_to_graph(search_query)
print(g)
graphdb.insert_graph(g)
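A natural follow-up, not shown in the original snippet, would be to query the graph with the same search method used in the earlier examples and inspect what comes back for the symptom description:

# Retrieve facts related to the symptom description (result format may vary
# by library version)
related = graphdb.search("I am losing weight too frequently.")
print(related)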
personal-graph can also serve as a retrieval backend for DSPy through PersonalRM:

import os
import dspy
from personal_graph import GraphDB, PersonalRM
db = GraphDB()  # storage_db is in-memory sqlite, vector_db is in vlite
turbo = dspy.OpenAI(api_key=os.getenv("OPENAI_API_KEY"))
retriever = PersonalRM(graph=db, k=2)
dspy.settings.configure(lm=turbo, rm=retriever)
class GenerateAnswer(dspy.Signature):
    """Answer questions with short factoid answers."""
    context = dspy.InputField(desc="may contain relevant facts from user's graph")
    question = dspy.InputField()
    answer = dspy.OutputField(
        desc="a short answer to the question, deduced from the information found in the user's graph"
    )
class RAG(dspy.Module):
    def __init__(self, depth=3):
        super().__init__()
        self.retrieve = dspy.Retrieve(k=depth)
        self.generate_answer = dspy.ChainOfThought(GenerateAnswer)

    def forward(self, question):
        context = self.retrieve(question).passages
        prediction = self.generate_answer(context=context, question=question)
        return dspy.Prediction(context=context, answer=prediction.answer)
rag = RAG(depth=2)
response = rag("How is Jack related to James?")
print(response.answer)

personal-graph can also run fully locally with Ollama models and a local SQLite database:

from personal_graph.graph import GraphDB
from personal_graph.graph_generator import OllamaTextToGraphParser
from personal_graph.database import SQLite
from personal_graph.vector_store import VliteVSS
from personal_graph.clients import OllamaClient, OllamaEmbeddingClient

phi3 = OllamaClient(model_name="phi3")
nomic_embed = OllamaEmbeddingClient(model_name="nomic-embed-text")

storage_db = SQLite(local_path="./local.db")
vector_store = VliteVSS(collection="./vectors")
graph_generator = OllamaTextToGraphParser(llm_client=phi3)
print(graph_generator)  # Should print the configured graph generator
with GraphDB(
    database=storage_db,
    vector_store=vector_store,
    graph_generator=graph_generator
) as db:
    print(db)

What follows is only a sketch of the planned flow; it is a work in progress (WIP).
graphdb = GraphDB(storage=db, vector_store=vector_store, graph_generator=graph_generator)
graphdb.load_dataset("KarateClub")

pyg_graph = graphdb.to_pyg()
updated_graph = model(pyg_graph)  # Run neural network algorithms here using PyG ("model" is a placeholder)
graphdb.from_pyg(updated_graph)

This video best describes the personal-graph library. [video: personal-graph]
For more details and API documentation, please refer to the personal-graph documentation.
Contributions are welcome! Feel free to open issues for bugs and feature requests.
personal-graph is released under the MIT License.
Questions, feedback, or suggestions? Reach out at [email protected], or open an issue on GitHub.