

A fast and lightweight information extraction model for Entity Linking and Relation Extraction.
Installation from PyPI:

```bash
pip install relik
```

To install all the optional dependencies:

```bash
pip install relik[all]
```

To install the optional dependencies for training and evaluation:

```bash
pip install relik[train]
```

To install the optional dependencies for FAISS:

The FAISS PyPI package is available only for CPU. For GPU, install it from source or use the conda package.

For CPU:

```bash
pip install relik[faiss]
```

For GPU:

```bash
conda create -n relik python=3.10
conda activate relik
# install pytorch
conda install -y pytorch=2.1.0 pytorch-cuda=12.1 -c pytorch -c nvidia
# GPU
conda install -y -c pytorch -c nvidia faiss-gpu=1.8.0
# or GPU with NVIDIA RAFT
conda install -y -c pytorch -c nvidia -c rapidsai -c conda-forge faiss-gpu-raft=1.8.0
pip install relik
```

To install the optional dependencies for serving the models with FastAPI and Ray:

```bash
pip install relik[serve]
```

Installation from source:

```bash
git clone https://github.com/SapienzaNLP/relik.git
cd relik
```
```bash
pip install -e .[all]
```

Available models:

- ReLiK Large for Relation Extraction (RE): relik-ie/relik-relation-extraction-large
- ReLiK Large for closed Information Extraction (EL + RE): relik-ie/relik-cie-large
- ReLiK XL for closed Information Extraction (our thicc boi for EL + RE): relik-ie/relik-cie-xl
- ReLiK Small for Entity Linking (⚡ small and fast EL, Colab ✅): sapienzanlp/relik-entity-linking-small
- ReLiK Small for closed Information Extraction (EL + RE): relik-ie/relik-cie-small
- ReLiK Large for Entity Linking (EL in the wild): relik-ie/relik-entity-linking-large-robust
- ReLiK Small for Relation Extraction (RE + NER): relik-ie/relik-relation-extraction-small-wikipedia-ner

Models from the paper:

- sapienzanlp/relik-entity-linking-large
- sapienzanlp/relik-entity-linking-base
- sapienzanlp/relik-relation-extraction-nyt-large

The full list of models can be found on 🤗 Hugging Face. Other model sizes will be available in the future.
ReLiK is a lightweight and fast model for Entity Linking and Relation Extraction. It is composed of two main components: a retriever and a reader. The retriever is responsible for retrieving relevant documents from a large collection, while the reader is responsible for extracting entities and relations from the retrieved documents. ReLiK can be used with the from_pretrained method to load a pre-trained pipeline.
Here is an example of how to use ReLiK for Entity Linking:
```python
from relik import Relik
from relik.inference.data.objects import RelikOutput

relik = Relik.from_pretrained("sapienzanlp/relik-entity-linking-large")
relik_out: RelikOutput = relik("Michael Jordan was one of the best players in the NBA.")
```

Output:
```text
RelikOutput(
    text="Michael Jordan was one of the best players in the NBA.",
    tokens=['Michael', 'Jordan', 'was', 'one', 'of', 'the', 'best', 'players', 'in', 'the', 'NBA', '.'],
    id=0,
    spans=[
        Span(start=0, end=14, label="Michael Jordan", text="Michael Jordan"),
        Span(start=50, end=53, label="National Basketball Association", text="NBA"),
    ],
    triples=[],
    candidates=Candidates(
        span=[
            [
                [
                    {"text": "Michael Jordan", "id": 4484083},
                    {"text": "National Basketball Association", "id": 5209815},
                    {"text": "Walter Jordan", "id": 2340190},
                    {"text": "Jordan", "id": 3486773},
                    {"text": "50 Greatest Players in NBA History", "id": 1742909},
                    ...
                ]
            ]
        ]
    ),
)
```
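The start and end values in each Span above are character offsets into the input text, with end exclusive. A minimal stand-in sketch of this correspondence (the Span dataclass here is illustrative, not ReLiK's own class):

```python
# Illustrative only: a stand-in for the Span objects shown above,
# to show how char-level offsets relate to the input text.
from dataclasses import dataclass

@dataclass
class Span:
    start: int
    end: int
    label: str
    text: str

text = "Michael Jordan was one of the best players in the NBA."
spans = [
    Span(start=0, end=14, label="Michael Jordan", text="Michael Jordan"),
    Span(start=50, end=53, label="National Basketball Association", text="NBA"),
]

# start/end slice cleanly out of the input text (end exclusive)
for span in spans:
    assert text[span.start:span.end] == span.text
    print(f"{span.text!r} -> {span.label}")
```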
And here is an example for Relation Extraction:
```python
from relik import Relik
from relik.inference.data.objects import RelikOutput

relik = Relik.from_pretrained("sapienzanlp/relik-relation-extraction-nyt-large")
relik_out: RelikOutput = relik("Michael Jordan was one of the best players in the NBA.")
```

Output:
```text
RelikOutput(
    text='Michael Jordan was one of the best players in the NBA.',
    tokens=['Michael', 'Jordan', 'was', 'one', 'of', 'the', 'best', 'players', 'in', 'the', 'NBA', '.'],
    id=0,
    spans=[
        Span(start=0, end=14, label='--NME--', text='Michael Jordan'),
        Span(start=50, end=53, label='--NME--', text='NBA')
    ],
    triplets=[
        Triplets(
            subject=Span(start=0, end=14, label='--NME--', text='Michael Jordan'),
            label='company',
            object=Span(start=50, end=53, label='--NME--', text='NBA'),
            confidence=1.0
        )
    ],
    candidates=Candidates(
        span=[],
        triplet=[
            [
                [
                    {"text": "company", "id": 4, "metadata": {"definition": "company of this person"}},
                    {"text": "nationality", "id": 10, "metadata": {"definition": "nationality of this person or entity"}},
                    {"text": "child", "id": 17, "metadata": {"definition": "child of this person"}},
                    {"text": "founded by", "id": 0, "metadata": {"definition": "founder or co-founder of this organization, religion or place"}},
                    {"text": "residence", "id": 18, "metadata": {"definition": "place where this person has lived"}},
                    ...
                ]
            ]
        ]
    ),
)
```
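The predicted triplets can be flattened into plain (subject, relation, object) tuples, e.g. to keep only confident predictions. A sketch using illustrative stand-ins for the Span and Triplets objects shown above (not ReLiK's own classes):

```python
# Illustrative only: hypothetical stand-ins mirroring the printed output above.
from dataclasses import dataclass

@dataclass
class Span:
    start: int
    end: int
    label: str
    text: str

@dataclass
class Triplet:
    subject: Span
    label: str
    object: Span
    confidence: float

triplets = [
    Triplet(
        subject=Span(0, 14, "--NME--", "Michael Jordan"),
        label="company",
        object=Span(50, 53, "--NME--", "NBA"),
        confidence=1.0,
    )
]

# Keep only confident predictions and flatten them to plain tuples
facts = [
    (t.subject.text, t.label, t.object.text)
    for t in triplets
    if t.confidence >= 0.5
]
print(facts)  # [('Michael Jordan', 'company', 'NBA')]
```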
The retriever and the reader can also be used separately. In the case of retriever-only ReLiK, the output will contain the candidates for the input text.

Retriever-only example:
```python
from relik import Relik
from relik.inference.data.objects import RelikOutput

# If you want to use only the retriever
retriever = Relik.from_pretrained("sapienzanlp/relik-entity-linking-large", reader=None)
relik_out: RelikOutput = retriever("Michael Jordan was one of the best players in the NBA.")
```

Output:
```text
RelikOutput(
    text="Michael Jordan was one of the best players in the NBA.",
    tokens=['Michael', 'Jordan', 'was', 'one', 'of', 'the', 'best', 'players', 'in', 'the', 'NBA', '.'],
    id=0,
    spans=[],
    triples=[],
    candidates=Candidates(
        span=[
            [
                {"text": "Michael Jordan", "id": 4484083},
                {"text": "National Basketball Association", "id": 5209815},
                {"text": "Walter Jordan", "id": 2340190},
                {"text": "Jordan", "id": 3486773},
                {"text": "50 Greatest Players in NBA History", "id": 1742909},
                ...
            ]
        ],
        triplet=[],
    ),
)
```
Reader-only example:
```python
from relik import Relik
from relik.inference.data.objects import RelikOutput

# If you want to use only the reader
reader = Relik.from_pretrained("sapienzanlp/relik-entity-linking-large", retriever=None)
candidates = [
    "Michael Jordan",
    "National Basketball Association",
    "Walter Jordan",
    "Jordan",
    "50 Greatest Players in NBA History",
]
text = "Michael Jordan was one of the best players in the NBA."
relik_out: RelikOutput = reader(text, candidates=candidates)
```

Output:
```text
RelikOutput(
    text="Michael Jordan was one of the best players in the NBA.",
    tokens=['Michael', 'Jordan', 'was', 'one', 'of', 'the', 'best', 'players', 'in', 'the', 'NBA', '.'],
    id=0,
    spans=[
        Span(start=0, end=14, label="Michael Jordan", text="Michael Jordan"),
        Span(start=50, end=53, label="National Basketball Association", text="NBA"),
    ],
    triples=[],
    candidates=Candidates(
        span=[
            [
                [
                    {
                        "text": "Michael Jordan",
                        "id": -731245042436891448,
                    },
                    {
                        "text": "National Basketball Association",
                        "id": 8135443493867772328,
                    },
                    {
                        "text": "Walter Jordan",
                        "id": -5873847607270755146,
                        "metadata": {},
                    },
                    {"text": "Jordan", "id": 6387058293887192208, "metadata": {}},
                    {
                        "text": "50 Greatest Players in NBA History",
                        "id": 2173802663468652889,
                    },
                ]
            ]
        ],
    ),
)
```
ReLiK provides a CLI to serve a FastAPI server for the model or to perform inference on a dataset.

relik serve:

```bash
relik serve --help
```
```text
Usage: relik serve [OPTIONS] RELIK_PRETRAINED [DEVICE] [RETRIEVER_DEVICE]
                   [DOCUMENT_INDEX_DEVICE] [READER_DEVICE] [PRECISION]
                   [RETRIEVER_PRECISION] [DOCUMENT_INDEX_PRECISION]
                   [READER_PRECISION] [ANNOTATION_TYPE]
╭─ Arguments ─────────────────────────────────────────────────────────────────────────────────────────╮
│ *  relik_pretrained             TEXT                        [default: None] [required]              │
│    device                       [DEVICE]                    The device to use for relik (e.g.,      │
│                                                             'cuda', 'cpu').                         │
│                                                             [default: None]                         │
│    retriever_device             [RETRIEVER_DEVICE]          The device to use for the retriever     │
│                                                             (e.g., 'cuda', 'cpu').                  │
│                                                             [default: None]                         │
│    document_index_device        [DOCUMENT_INDEX_DEVICE]     The device to use for the index         │
│                                                             (e.g., 'cuda', 'cpu').                  │
│                                                             [default: None]                         │
│    reader_device                [READER_DEVICE]             The device to use for the reader        │
│                                                             (e.g., 'cuda', 'cpu').                  │
│                                                             [default: None]                         │
│    precision                    [PRECISION]                 The precision to use for relik          │
│                                                             (e.g., '32', '16').                     │
│                                                             [default: 32]                           │
│    retriever_precision          [RETRIEVER_PRECISION]       The precision to use for the            │
│                                                             retriever (e.g., '32', '16').           │
│                                                             [default: None]                         │
│    document_index_precision     [DOCUMENT_INDEX_PRECISION]  The precision to use for the index      │
│                                                             (e.g., '32', '16').                     │
│                                                             [default: None]                         │
│    reader_precision             [READER_PRECISION]          The precision to use for the reader     │
│                                                             (e.g., '32', '16').                     │
│                                                             [default: None]                         │
│    annotation_type              [ANNOTATION_TYPE]           The type of annotation to use (e.g.,    │
│                                                             'CHAR', 'WORD').                        │
│                                                             [default: char]                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─ Options ───────────────────────────────────────────────────────────────────────────────────────────╮
│ --host            TEXT     [default: 0.0.0.0]                                                       │
│ --port            INTEGER  [default: 8000]                                                          │
│ --frontend --no-frontend   [default: no-frontend]                                                   │
│ --help            Show this message and exit.                                                       │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────╯
```
For example:

```bash
relik serve sapienzanlp/relik-entity-linking-large
```

relik inference:

```bash
relik inference --help
```
```text
Usage: relik inference [OPTIONS] MODEL_NAME_OR_PATH INPUT_PATH OUTPUT_PATH
╭─ Arguments ─────────────────────────────────────────────────────────────────────────────────────────────╮
│ *  model_name_or_path      TEXT  [default: None] [required]                                             │
│ *  input_path              TEXT  [default: None] [required]                                             │
│ *  output_path             TEXT  [default: None] [required]                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─ Options ───────────────────────────────────────────────────────────────────────────────────────────────╮
│ --batch-size           INTEGER  [default: 8]                                                            │
│ --num-workers          INTEGER  [default: 4]                                                            │
│ --device               TEXT     [default: cuda]                                                         │
│ --precision            TEXT     [default: fp16]                                                         │
│ --top-k                INTEGER  [default: 100]                                                          │
│ --window-size          INTEGER  [default: None]                                                         │
│ --window-stride        INTEGER  [default: None]                                                         │
│ --annotation-type      TEXT     [default: char]                                                         │
│ --progress-bar --no-progress-bar  [default: progress-bar]                                               │
│ --model-kwargs         TEXT     [default: None]                                                         │
│ --inference-kwargs     TEXT     [default: None]                                                         │
│ --help                 Show this message and exit.                                                      │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────╯
```

For example:
```bash
relik inference sapienzanlp/relik-entity-linking-large data.txt output.jsonl
```

Docker images for ReLiK are available on Docker Hub. You can pull the latest image with:
```bash
docker pull sapienzanlp/relik:latest
```

and run it with:
```bash
docker run -p 12345:8000 sapienzanlp/relik:latest -c relik-ie/relik-cie-small
```

The API will be available at http://localhost:12345. It exposes a single endpoint, /relik, with several parameters that can be passed to the model. Quick documentation of the API can be found at http://localhost:12345/docs. Here is a simple example of how to query the API:
```bash
curl -X 'GET' \
  'http://127.0.0.1:12345/api/relik?text=Michael%20Jordan%20was%20one%20of%20the%20best%20players%20in%20the%20NBA.&is_split_into_words=false&retriever_batch_size=32&reader_batch_size=32&return_windows=false&use_doc_topic=false&annotation_type=char&relation_threshold=0.5' \
  -H 'accept: application/json'
```
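The same request URL can be built from Python with only the standard library; a sketch with the endpoint and parameters copied from the curl example above (the server is assumed to be running locally):

```python
# A sketch of building the same /api/relik request URL as the curl example.
from urllib.parse import urlencode

params = {
    "text": "Michael Jordan was one of the best players in the NBA.",
    "is_split_into_words": "false",
    "retriever_batch_size": 32,
    "reader_batch_size": 32,
    "return_windows": "false",
    "use_doc_topic": "false",
    "annotation_type": "char",
    "relation_threshold": 0.5,
}
url = "http://127.0.0.1:12345/api/relik?" + urlencode(params)
print(url)
# The response can then be fetched with urllib.request.urlopen(url)
# and parsed with json.load().
```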
Here is the full list of parameters that can be passed to the Docker image:

```bash
docker run sapienzanlp/relik:latest -h
```

```text
Usage: relik [-h --help] [-c --config] [-p --precision] [-d --device] [--retriever] [--retriever-device]
[--retriever-precision] [--index-device] [--index-precision] [--reader] [--reader-device] [--reader-precision]
[--annotation-type] [--frontend] [--workers] -- start the FastAPI server for the ReLiK model

where:
    -h --help               Show this help text
    -c --config             Pretrained ReLiK config name (from HuggingFace) or path
    -p --precision          Precision, default '32'.
    -d --device             Device to use, default 'cpu'.
    --retriever             Override retriever model name.
    --retriever-device      Override retriever device.
    --retriever-precision   Override retriever precision.
    --index-device          Override index device.
    --index-precision       Override index precision.
    --reader                Override reader model name.
    --reader-device         Override reader device.
    --reader-precision      Override reader precision.
    --annotation-type       Annotation type ('char', 'word'), default 'char'.
    --frontend              Whether to start the frontend server.
    --workers               Number of workers to use.
```

In the following sections, we provide a step-by-step guide on how to prepare the data, train the retriever and the reader, and evaluate the models.
All your data should have the following structure:
```python
{
    "doc_id": int,  # Unique identifier for the document
    "doc_text": txt,  # Text of the document
    "doc_span_annotations":  # Char level annotations
        [
            [start, end, label],
            [start, end, label],
            ...
        ]
}
```

We used the BLINK (Wu et al., 2019) and AIDA (Hoffart et al., 2011) datasets for training and evaluation. More specifically, we used the BLINK dataset to pre-train the retriever, and the AIDA dataset to fine-tune the retriever and train the reader.
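The char-level structure above can be written out as plain JSON lines; a minimal sketch (the document values are illustrative):

```python
# A sketch of one document in the char-level span-annotation format
# described above (doc_id / doc_text / doc_span_annotations).
import json

text = "Michael Jordan was one of the best players in the NBA."
doc = {
    "doc_id": 0,
    "doc_text": text,
    # Char-level annotations: [start, end, label]
    "doc_span_annotations": [
        [0, 14, "Michael Jordan"],
        [50, 53, "National Basketball Association"],
    ],
}

# Sanity check: every span must slice cleanly out of the text
for start, end, _label in doc["doc_span_annotations"]:
    assert 0 <= start < end <= len(doc["doc_text"])

line = json.dumps(doc)  # one JSON object per line (jsonl)
print(line)
```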
The BLINK dataset can be downloaded from the GENRE repository using this script. We used blink-train-kilt.jsonl and blink-dev-kilt.jsonl as the training and validation datasets. Assuming we have downloaded the two files into the data/blink folder, we can convert the BLINK dataset to the ReLiK format with the following script:
```bash
# Train
python scripts/data/blink/preprocess_genre_blink.py \
  data/blink/blink-train-kilt.jsonl \
  data/blink/processed/blink-train-kilt-relik.jsonl
# Dev
python scripts/data/blink/preprocess_genre_blink.py \
  data/blink/blink-dev-kilt.jsonl \
  data/blink/processed/blink-dev-kilt-relik.jsonl
```

The AIDA dataset is not publicly available, but we provide our files without the text field. You can find the files in the ReLiK format in the data/aida/processed folder.
The Wikipedia index we used can be downloaded from here.

All your data should have the following structure:
```python
{
    "doc_id": int,  # Unique identifier for the document
    "doc_words": list[txt],  # Tokenized text of the document
    "doc_span_annotations":  # Token level annotations of mentions (label is optional)
        [
            [start, end, label],
            [start, end, label],
            ...
        ],
    "doc_triplet_annotations":  # Triplet annotations
        [
            {
                "subject": [start, end, label],  # label is optional
                "relation": name,  # type is optional
                "object": [start, end, label],  # label is optional
            },
            {
                "subject": [start, end, label],  # label is optional
                "relation": name,  # type is optional
                "object": [start, end, label],  # label is optional
            },
        ]
}
```

For Relation Extraction, we provide an example of how to preprocess the NYT dataset from raw_nyt, taken from CopyRE. Download the dataset into data/raw_nyt and run:
```bash
python scripts/data/nyt/preprocess_nyt.py data/raw_nyt data/nyt/processed/
```

Please be aware that, for a fair comparison, we reproduced the preprocessing from previous work, which leads to duplicated triplets due to incorrect handling of repeated surface forms for entity spans. If you want to correctly parse the original data to the ReLiK format, you can set the flag --legacy-format False. Please note that the provided RE NYT models were trained on the legacy format.
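The token-level structure described above can be sketched the same way; a minimal example (the values are illustrative, and token spans are assumed to be end-exclusive, as in the char-level case):

```python
# A sketch of one document in the token-level Relation Extraction format
# (doc_words / doc_span_annotations / doc_triplet_annotations).
import json

words = ["Michael", "Jordan", "was", "one", "of", "the", "best",
         "players", "in", "the", "NBA", "."]
doc = {
    "doc_id": 0,
    "doc_words": words,
    # Token-level mention spans: [start, end, label] (end exclusive)
    "doc_span_annotations": [[0, 2, "--NME--"], [10, 11, "--NME--"]],
    "doc_triplet_annotations": [
        {
            "subject": [0, 2, "--NME--"],
            "relation": "company",
            "object": [10, 11, "--NME--"],
        }
    ],
}

# Sanity check: span indices must fall inside the token list
for start, end, _label in doc["doc_span_annotations"]:
    assert 0 <= start < end <= len(doc["doc_words"])
assert " ".join(words[0:2]) == "Michael Jordan"
print(json.dumps(doc))
```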
We perform a two-step training process for the retriever. First, we "pre-train" the retriever using the BLINK (Wu et al., 2019) dataset, and then we "fine-tune" it using AIDA (Hoffart et al., 2011).

The retriever requires a dataset in a format similar to DPR: a jsonl file where each line is a dictionary with the following keys:
```python
{
    "question": "....",
    "positive_ctxs": [{
        "title": "...",
        "text": "...."
    }],
    "negative_ctxs": [{
        "title": "...",
        "text": "...."
    }],
    "hard_negative_ctxs": [{
        "title": "...",
        "text": "...."
    }]
}
```

The retriever also needs an index to search for documents. The documents to index can be either a jsonl file or a tsv file, similar to DPR:
- jsonl: each line is a JSON object with the following keys: id, text, metadata
- tsv: each line is a tab-separated string with the id and text columns, followed by any other column that will be stored in the metadata field

jsonl example:
```python
{
    "id": "...",
    "text": "...",
    "metadata": ["{...}"]
},
...
```

tsv example:

```text
id \t text \t any other column
...
```

Once you have the BLINK dataset in the ReLiK format, you can create the windows with the following script:
```bash
# train
relik data create-windows \
  data/blink/processed/blink-train-kilt-relik.jsonl \
  data/blink/processed/blink-train-kilt-relik-windowed.jsonl
# dev
relik data create-windows \
  data/blink/processed/blink-dev-kilt-relik.jsonl \
  data/blink/processed/blink-dev-kilt-relik-windowed.jsonl
```

and then convert them to the DPR format:
```bash
# train
relik data convert-to-dpr \
  data/blink/processed/blink-train-kilt-relik-windowed.jsonl \
  data/blink/processed/blink-train-kilt-relik-windowed-dpr.jsonl \
  data/kb/wikipedia/documents.jsonl \
  --title-map data/kb/wikipedia/title_map.json
# dev
relik data convert-to-dpr \
  data/blink/processed/blink-dev-kilt-relik-windowed.jsonl \
  data/blink/processed/blink-dev-kilt-relik-windowed-dpr.jsonl \
  data/kb/wikipedia/documents.jsonl \
  --title-map data/kb/wikipedia/title_map.json
```

Since the AIDA dataset is not publicly available, we can only provide the annotations for it in the ReLiK format. Assuming you have the full AIDA dataset in data/aida, you can convert it to the ReLiK format and create the windows with the following script:
```bash
relik data create-windows \
  data/aida/processed/aida-train-relik.jsonl \
  data/aida/processed/aida-train-relik-windowed.jsonl
```

and then convert them to the DPR format:
```bash
relik data convert-to-dpr \
  data/aida/processed/aida-train-relik-windowed.jsonl \
  data/aida/processed/aida-train-relik-windowed-dpr.jsonl \
  data/kb/wikipedia/documents.jsonl \
  --title-map data/kb/wikipedia/title_map.json
```

For the NYT dataset, create the windows with:

```bash
relik data create-windows \
  data/data/processed/nyt/train.jsonl \
  data/data/processed/nyt/train-windowed.jsonl \
  --is-split-into-words \
  --window-size none
```

and then convert them to the DPR format:
```bash
relik data convert-to-dpr \
  data/data/processed/nyt/train-windowed.jsonl \
  data/data/processed/nyt/train-windowed-dpr.jsonl
```

The relik retriever train command can be used to train the retriever. It requires the following arguments:

- config_path: the path to the configuration file.
- overrides: a list of overrides to the configuration file, in the format key=value.

Examples of configuration files can be found in the relik/retriever/conf folder.
The configuration files in relik/retriever/conf are pretrain_iterable_in_batch.yaml and finetune_iterable_in_batch.yaml, which we used to pre-train and fine-tune the retriever, respectively.

For example, to train the retriever on the AIDA dataset, you can run the following command:
```bash
relik retriever train relik/retriever/conf/finetune_iterable_in_batch.yaml \
  model.language_model=intfloat/e5-base-v2 \
  data.train_dataset_path=data/aida/processed/aida-train-relik-windowed-dpr.jsonl \
  data.val_dataset_path=data/aida/processed/aida-dev-relik-windowed-dpr.jsonl \
  data.test_dataset_path=data/aida/processed/aida-test-relik-windowed-dpr.jsonl \
  data.shared_params.documents_path=data/kb/wikipedia/documents.jsonl
```

The configuration file in relik/retriever/conf is finetune_nyt_iterable_in_batch.yaml, which we used to fine-tune the retriever on the NYT dataset. For cIE, we reused the retriever pre-trained on BLINK from the previous step.
For example, to train the retriever on the NYT dataset, you can run the following command:

```bash
relik retriever train relik/retriever/conf/finetune_nyt_iterable_in_batch.yaml \
  model.language_model=intfloat/e5-base-v2 \
  data.train_dataset_path=data/nyt/processed/nyt-train-relik-windowed-dpr.jsonl \
  data.val_dataset_path=data/nyt/processed/nyt-dev-relik-windowed-dpr.jsonl \
  data.test_dataset_path=data/nyt/processed/nyt-test-relik-windowed-dpr.jsonl
```

By passing train.only_test=True to the relik retriever train command, you can skip the training and only evaluate the model. It also requires the path to a PyTorch Lightning checkpoint and the dataset to evaluate on.
```bash
relik retriever train relik/retriever/conf/finetune_iterable_in_batch.yaml \
  train.only_test=True \
  test_dataset_path=data/aida/processed/aida-test-relik-windowed-dpr.jsonl \
  model.checkpoint_path=path/to/checkpoint
```

The retriever encoder can be saved from a checkpoint with the following commands:
```python
from relik.retriever.lightning_modules.pl_modules import GoldenRetrieverPLModule

checkpoint_path = "path/to/checkpoint"
retriever_folder = "path/to/retriever"

# If you want to push the model to the Hugging Face Hub set push_to_hub=True
push_to_hub = False
# If you want to push the model to the Hugging Face Hub set the repo_id
repo_id = "sapienzanlp/relik-retriever-e5-base-v2-aida-blink-encoder"

pl_module = GoldenRetrieverPLModule.load_from_checkpoint(checkpoint_path)
pl_module.model.save_pretrained(retriever_folder, push_to_hub=push_to_hub, repo_id=repo_id)
```

With push_to_hub=True, the model will be pushed to the 🤗 Hugging Face Hub, with repo_id as the ID of the repository the model will be pushed to.
The retriever needs an index to search for documents. The index can be created with the relik retriever create-index command:
```bash
relik retriever create-index --help
```

```text
Usage: relik retriever build-index [OPTIONS] QUESTION_ENCODER_NAME_OR_PATH
                                   DOCUMENT_PATH OUTPUT_FOLDER
╭─ Arguments ────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│ *  question_encoder_name_or_path      TEXT  [default: None] [required]                                                                     │
│ *  document_path                      TEXT  [default: None] [required]                                                                     │
│ *  output_folder                      TEXT  [default: None] [required]                                                                     │
╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─ Options ──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│ --document-file-type                TEXT     [default: jsonl]                                                                              │
│ --passage-encoder-name-or-path      TEXT     [default: None]                                                                               │
│ --indexer-class                     TEXT     [default: relik.retriever.indexers.inmemory.InMemoryDocumentIndex]                            │
│ --batch-size                        INTEGER  [default: 512]                                                                                │
│ --num-workers                       INTEGER  [default: 4]                                                                                  │
│ --passage-max-length                INTEGER  [default: 64]                                                                                 │
│ --device                            TEXT     [default: cuda]                                                                               │
│ --index-device                      TEXT     [default: cpu]                                                                                │
│ --precision                         TEXT     [default: fp32]                                                                               │
│ --push-to-hub --no-push-to-hub               [default: no-push-to-hub]                                                                     │
│ --repo-id                           TEXT     [default: None]                                                                               │
│ --help                              Show this message and exit.                                                                            │
╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
```

With the encoder and the index, the retriever can be loaded from a repo ID or a local path:
```python
from relik.retriever import GoldenRetriever

encoder_name_or_path = "sapienzanlp/relik-retriever-e5-base-v2-aida-blink-encoder"
index_name_or_path = "sapienzanlp/relik-retriever-e5-base-v2-aida-blink-wikipedia-index"

retriever = GoldenRetriever(
    question_encoder=encoder_name_or_path,
    document_index=index_name_or_path,
    device="cuda",  # or "cpu"
    precision="16",  # or "32", "bf16"
    index_device="cuda",  # or "cpu"
    index_precision="16",  # or "32", "bf16"
)
```

and then used to retrieve documents:
```python
retriever.retrieve("Michael Jordan was one of the best players in the NBA.", top_k=100)
```

The reader is responsible for extracting entities and relations from documents, given a set of candidates (e.g., possible entities or relations). The reader can be trained for span extraction or triplet extraction: RelikReaderForSpanExtraction is used for span extraction, i.e. Entity Linking, while RelikReaderForTripletExtraction is used for triplet extraction, i.e. Relation Extraction.
The reader requires the windowed dataset we created in the previous section, augmented with the candidates from the retriever. Candidates can be added to the dataset with the relik retriever add-candidates command.
```bash
relik retriever add-candidates --help
```

```text
Usage: relik retriever add-candidates [OPTIONS] QUESTION_ENCODER_NAME_OR_PATH
                                      DOCUMENT_NAME_OR_PATH INPUT_PATH
                                      OUTPUT_PATH
╭─ Arguments ─────────────────────────────────────────────────────────────────────────────────────────────────╮
│ *  question_encoder_name_or_path      TEXT  [default: None] [required]                                      │
│ *  document_name_or_path              TEXT  [default: None] [required]                                      │
│ *  input_path                         TEXT  [default: None] [required]                                      │
│ *  output_path                        TEXT  [default: None] [required]                                      │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─ Options ───────────────────────────────────────────────────────────────────────────────────────────────────╮
│ --passage-encoder-name-or-path      TEXT     [default: None]                                                │
│ --relations                         BOOLEAN  [default: False]                                               │
│ --top-k                             INTEGER  [default: 100]                                                 │
│ --batch-size                        INTEGER  [default: 128]                                                 │
│ --num-workers                       INTEGER  [default: 4]                                                   │
│ --device                            TEXT     [default: cuda]                                                │
│ --index-device                      TEXT     [default: cpu]                                                 │
│ --precision                         TEXT     [default: fp32]                                                │
│ --use-doc-topics --no-use-doc-topics         [default: no-use-doc-topics]                                   │
│ --help                              Show this message and exit.                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
```

We need to add the candidates to each window that the reader will use, with the retriever we trained before. Here is an example using the retriever we trained on AIDA, for the train split:
```bash
relik retriever add-candidates sapienzanlp/relik-retriever-e5-base-v2-aida-blink-encoder sapienzanlp/relik-retriever-e5-base-v2-aida-blink-wikipedia-index data/aida/processed/aida-train-relik-windowed.jsonl data/aida/processed/aida-train-relik-windowed-candidates.jsonl
```

The same applies to Relation Extraction. If you want to use our trained retriever:

```bash
relik retriever add-candidates sapienzanlp/relik-retriever-small-nyt-question-encoder sapienzanlp/relik-retriever-small-nyt-document-index data/nyt/processed/nyt-train-relik-windowed.jsonl data/nyt/processed/nyt-train-relik-windowed-candidates.jsonl
```

Similar to the retriever, the relik reader train command can be used to train the reader. It requires the following arguments:
- config_path: the path to the configuration file.
- overrides: a list of overrides to the configuration file, in the format key=value.

Examples of configuration files can be found in the relik/reader/conf folder.

The configuration files in relik/reader/conf are large.yaml and base.yaml, which we used to train the large and base readers, respectively. For example, to train the large reader on the AIDA dataset, run:
```bash
relik reader train relik/reader/conf/large.yaml \
  train_dataset_path=data/aida/processed/aida-train-relik-windowed-candidates.jsonl \
  val_dataset_path=data/aida/processed/aida-dev-relik-windowed-candidates.jsonl \
  test_dataset_path=data/aida/processed/aida-dev-relik-windowed-candidates.jsonl
```

The configuration files in relik/reader/conf are large_nyt.yaml, base_nyt.yaml and small_nyt.yaml, which we used to train the large, base and small readers, respectively. For example, to train the large reader on the NYT dataset, run:
```bash
relik reader train relik/reader/conf/large_nyt.yaml \
  train_dataset_path=data/nyt/processed/nyt-train-relik-windowed-candidates.jsonl \
  val_dataset_path=data/nyt/processed/nyt-dev-relik-windowed-candidates.jsonl \
  test_dataset_path=data/nyt/processed/nyt-test-relik-windowed-candidates.jsonl
```

The reader can be saved from a checkpoint with the following commands:
```python
from relik.reader.lightning_modules.relik_reader_pl_module import RelikReaderPLModule

checkpoint_path = "path/to/checkpoint"
reader_folder = "path/to/reader"

# If you want to push the model to the Hugging Face Hub set push_to_hub=True
push_to_hub = False
# If you want to push the model to the Hugging Face Hub set the repo_id
repo_id = "sapienzanlp/relik-reader-deberta-v3-large-aida"

pl_model = RelikReaderPLModule.load_from_checkpoint(checkpoint_path)
pl_model.relik_reader_core_model.save_pretrained(
    reader_folder, push_to_hub=push_to_hub, repo_id=repo_id
)
```

With push_to_hub=True, the model will be pushed to the 🤗 Hugging Face Hub, with repo_id as the ID of the repository the model will be uploaded to.
The reader can be loaded from a repo ID or a local path:
```python
from relik.reader import RelikReaderForSpanExtraction, RelikReaderForTripletExtraction

# the reader for span extraction
reader_span = RelikReaderForSpanExtraction(
    "sapienzanlp/relik-reader-deberta-v3-large-aida"
)
# the reader for triplet extraction
reader_triplets = RelikReaderForTripletExtraction(
    "sapienzanlp/relik-reader-deberta-v3-large-nyt"
)
```

and used to extract entities and relations:
```python
# an example of candidates for the reader
candidates = ["Michael Jordan", "NBA", "Chicago Bulls", "Basketball", "United States"]
reader_span.read("Michael Jordan was one of the best players in the NBA.", candidates=candidates)
```

We evaluated the performance of ReLiK on Entity Linking using GERBIL. The following table shows the results (InKB Micro F1) of ReLiK Large and Base:
| Model | AIDA | MSNBC | DER | K50 | R128 | R500 | O15 | O16 | Tot | OOD | AIT (m:s) |
|---|---|---|---|---|---|---|---|---|---|---|---|
| GENRE | 83.7 | 73.7 | 54.1 | 60.7 | 46.7 | 40.3 | 56.1 | 50.0 | 58.2 | 54.5 | 38:00 |
| EntQA | 85.8 | 72.1 | 52.9 | 64.5 | 54.1 | 41.9 | 61.1 | 51.3 | 60.5 | 56.4 | 20:00 |
| ReLiK Small | 82.2 | 72.7 | 55.6 | 68.3 | 48.0 | 42.3 | 62.7 | 53.6 | 60.7 | 57.6 | 00:29 |
| ReLiK Base | 85.3 | 72.3 | 55.6 | 68.0 | 48.1 | 41.6 | 62.5 | 52.3 | 60.7 | 57.2 | 00:29 |
| ReLiK Large | 86.4 | 75.0 | 56.3 | 72.8 | 51.7 | 43.0 | 65.1 | 57.2 | 63.4 | 60.2 | 01:46 |
Evaluation (InKB Micro F1) of the compared systems on the in-domain AIDA test set and the out-of-domain MSNBC (MSN), Derczynski (DER), KORE50 (K50), N3-Reuters-128 (R128), N3-RSS-500 (R500), OKE-15 (O15), and OKE-16 (O16) test sets. Bold indicates the best model. GENRE uses mention dictionaries. The AIT column shows the time in minutes and seconds (m:s) that the systems need to process the whole AIDA test set using an NVIDIA RTX 4090, except for EntQA, which does not fit in 24 GB of RAM and for which an A100 is used.
To evaluate ReLiK, we use the following steps:

1. Download the GERBIL server from here.

2. Start the GERBIL server:

```bash
cd gerbil && ./start.sh
```

3. Start the GERBIL SpotWrapNifWS4Test service:

```bash
cd gerbil-SpotWrapNifWS4Test && mvn clean -Dmaven.tomcat.port=1235 tomcat:run
```

4. Start the ReLiK server for GERBIL, providing the model name as an argument (e.g. sapienzanlp/relik-entity-linking-large):

```bash
python relik/reader/utils/gerbil.py --relik-model-name sapienzanlp/relik-entity-linking-large
```

The following table shows the results (Micro F1) of ReLiK on the NYT dataset:
| Model | NYT | NYT (Pretr.) | AIT (m:s) |
|---|---|---|---|
| REBEL | 93.1 | 93.4 | 01:45 |
| UiE | 93.5 | - | - |
| USM | 94.0 | 94.1 | - |
| ReLiK Large | 95.0 | 94.9 | 00:30 |
To evaluate Relation Extraction, we can use the reader directly with the script relik/reader/trainer/predict_re.py, pointing it at the file with the already-retrieved candidates. If you want to use our trained reader:
```bash
python relik/reader/trainer/predict_re.py --model_path sapienzanlp/relik-reader-deberta-v3-large-nyt --data_path /Users/perelluis/Documents/relik/data/debug/test.window.candidates.jsonl --is-eval
```

Be aware that we compute the threshold for predicting relations based on the dev set. To compute it while evaluating, you can run the following:

```bash
python relik/reader/trainer/predict_re.py --model_path sapienzanlp/relik-reader-deberta-v3-large-nyt --data_path /Users/perelluis/Documents/relik/data/debug/dev.window.candidates.jsonl --is-eval --compute-threshold
```

If you use any part of this work, please consider citing the following paper:
```bibtex
@inproceedings{orlando-etal-2024-relik,
    title = "Retrieve, Read and LinK: Fast and Accurate Entity Linking and Relation Extraction on an Academic Budget",
    author = "Orlando, Riccardo and Huguet Cabot, Pere-Llu{\'i}s and Barba, Edoardo and Navigli, Roberto",
    booktitle = "Findings of the Association for Computational Linguistics: ACL 2024",
    month = aug,
    year = "2024",
    address = "Bangkok, Thailand",
    publisher = "Association for Computational Linguistics",
}
```

The data and software are licensed under Creative Commons Attribution-NonCommercial-ShareAlike 4.0.