This package was developed during the writing of our PatternRank paper. You can check out the paper here. When using KeyphraseVectorizers or PatternRank in academic papers and theses, please use the BibTeX entry below.
The vectorizers extract keyphrases with part-of-speech patterns from a collection of text documents and convert them into a document-keyphrase matrix. A document-keyphrase matrix is a mathematical matrix that describes the frequency of keyphrases that occur in a collection of documents. The rows indicate the text documents and the columns indicate the unique keyphrases.
The package contains wrappers of the sklearn.feature_extraction.text.CountVectorizer and sklearn.feature_extraction.text.TfidfVectorizer classes. Instead of using n-gram tokens of a pre-defined range, these classes extract keyphrases from text documents using part-of-speech tags to compute document-keyphrase matrices.
The corresponding Medium posts can be found here and here.
First, the document texts are annotated with spaCy part-of-speech tags. A list of all possible part-of-speech tags for different languages is linked here. The annotation requires passing the spaCy pipeline of the corresponding language to the vectorizer with the spacy_pipeline parameter.
Secondly, words whose part-of-speech tags match the regex pattern defined in the pos_pattern parameter are extracted from the document texts. The keyphrases are a list of unique words extracted from the text documents by this method.
Finally, the vectorizers calculate document-keyphrase matrices.
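Taken together, the three steps amount to the following minimal sketch (assuming the package is installed as described below and the en_core_web_sm spaCy model is available; the sample text is illustrative):

from keyphrase_vectorizers import KeyphraseCountVectorizer

# Steps 1 and 2 happen inside fit(): the texts are annotated with spaCy POS tags
# and the words matching the pos_pattern regex are extracted as keyphrases.
vectorizer = KeyphraseCountVectorizer(spacy_pipeline='en_core_web_sm')
# Step 3: the fitted vectorizer computes the document-keyphrase matrix.
matrix = vectorizer.fit_transform(['Keyphrase extraction with part-of-speech patterns.']).toarray()
print(vectorizer.get_feature_names_out())  # unique keyphrases (matrix columns)
print(matrix)                              # keyphrase frequencies (matrix rows = documents)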
pip install keyphrase-vectorizers
For detailed information visit the API Guide.
Back to Table of Contents
from keyphrase_vectorizers import KeyphraseCountVectorizer
docs = [ """Supervised learning is the machine learning task of learning a function that
maps an input to an output based on example input-output pairs. It infers a
function from labeled training data consisting of a set of training examples.
In supervised learning, each example is a pair consisting of an input object
(typically a vector) and a desired output value (also called the supervisory signal).
A supervised learning algorithm analyzes the training data and produces an inferred function,
which can be used for mapping new examples. An optimal scenario will allow for the
algorithm to correctly determine the class labels for unseen instances. This requires
the learning algorithm to generalize from the training data to unseen situations in a
'reasonable' way (see inductive bias).""",
"""Keywords are defined as phrases that capture the main topics discussed in a document.
As they offer a brief yet precise summary of document content, they can be utilized for various applications.
In an information retrieval environment, they serve as an indication of document relevance for users, as the list
of keywords can quickly help to determine whether a given document is relevant to their interest.
As keywords reflect a document's main topics, they can be utilized to classify documents into groups
by measuring the overlap between the keywords assigned to them. Keywords are also used proactively
in information retrieval."""]
# Init default vectorizer.
vectorizer = KeyphraseCountVectorizer()
# Print parameters
print(vectorizer.get_params())
>>> {'binary': False, 'dtype': <class 'numpy.int64'>, 'lowercase': True, 'max_df': None, 'min_df': None, 'pos_pattern': '<J.*>*<N.*>+', 'spacy_exclude': ['parser', 'attribute_ruler', 'lemmatizer', 'ner'], 'spacy_pipeline': 'en_core_web_sm', 'stop_words': 'english', 'workers': 1}

By default, the vectorizer is initialized for the English language. That means an English spacy_pipeline is specified, English stop_words are removed, and the pos_pattern extracts keyphrases that consist of zero or more adjectives followed by one or more nouns, using the English spaCy part-of-speech tags. In addition, the spaCy pipeline components ['parser', 'attribute_ruler', 'lemmatizer', 'ner'] are excluded by default to increase efficiency. If you choose a different spacy_pipeline, it may be necessary to exclude/include different pipeline components using the spacy_exclude parameter in order for the spaCy POS tagger to work properly.
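These defaults can be overridden at initialization. A small illustrative sketch (the parameter values below are examples rather than recommendations, and the vectorizer it creates is not used in the example that continues below):

# Keep the original casing, require each keyphrase to occur in at least 2 documents,
# and extract noun-only keyphrases instead of the default adjective-noun pattern.
custom_vectorizer = KeyphraseCountVectorizer(lowercase=False, min_df=2, pos_pattern='<N.*>+')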
# After initializing the vectorizer, it can be fitted
# to learn the keyphrases from the text documents.
vectorizer.fit(docs)
# After learning the keyphrases, they can be returned.
keyphrases = vectorizer.get_feature_names_out()
print(keyphrases)
>>> ['users' 'main topics' 'learning algorithm' 'overlap' 'documents' 'output'
 'keywords' 'precise summary' 'new examples' 'training data' 'input'
 'document content' 'training examples' 'unseen instances'
 'optimal scenario' 'document' 'task' 'supervised learning algorithm'
 'example' 'interest' 'function' 'example input' 'various applications'
 'unseen situations' 'phrases' 'indication' 'inductive bias'
 'supervisory signal' 'document relevance' 'information retrieval' 'set'
 'input object' 'groups' 'output value' 'list' 'learning' 'output pairs'
 'pair' 'class labels' 'supervised learning' 'machine'
 'information retrieval environment' 'algorithm' 'vector' 'way']

# After fitting, the vectorizer can transform the documents
# to a document-keyphrase matrix.
# Matrix rows indicate the documents and columns indicate the unique keyphrases.
# Each cell represents the count.
document_keyphrase_matrix = vectorizer.transform(docs).toarray()
print(document_keyphrase_matrix)
>>> [[0 0 2 0 0 3 0 0 1 3 3 0 1 1 1 0 1 1 2 0 3 1 0 1 0 0 1 1 0 0 1 1 0 1 0 6
  1 1 1 3 1 0 3 1 1]
 [1 2 0 1 1 0 5 1 0 0 0 1 0 0 0 5 0 0 0 1 0 0 1 0 1 1 0 0 1 2 0 0 1 0 1 0
  0 0 0 0 0 1 0 0 0]]

# Fit and transform can also be executed in one step,
# which is more efficient.
document_keyphrase_matrix = vectorizer.fit_transform(docs).toarray()
print(document_keyphrase_matrix)
>>> [[0 0 2 0 0 3 0 0 1 3 3 0 1 1 1 0 1 1 2 0 3 1 0 1 0 0 1 1 0 0 1 1 0 1 0 6
  1 1 1 3 1 0 3 1 1]
 [1 2 0 1 1 0 5 1 0 0 0 1 0 0 0 5 0 0 0 1 0 0 1 0 1 1 0 0 1 2 0 0 1 0 1 0
  0 0 0 0 0 1 0 0 0]]

Back to Table of Contents
german_docs = [ """Goethe stammte aus einer angesehenen bürgerlichen Familie.
Sein Großvater mütterlicherseits war als Stadtschultheiß höchster Justizbeamter der Stadt Frankfurt,
sein Vater Doktor der Rechte und Kaiserlicher Rat. Er und seine Schwester Cornelia erfuhren eine aufwendige
Ausbildung durch Hauslehrer. Dem Wunsch seines Vaters folgend, studierte Goethe in Leipzig und Straßburg
Rechtswissenschaft und war danach als Advokat in Wetzlar und Frankfurt tätig.
Gleichzeitig folgte er seiner Neigung zur Dichtkunst.""",
"""Friedrich Schiller wurde als zweites Kind des Offiziers, Wundarztes und Leiters der Hofgärtnerei in
Marbach am Neckar Johann Kaspar Schiller und dessen Ehefrau Elisabetha Dorothea Schiller, geb. Kodweiß,
die Tochter eines Wirtes und Bäckers war, 1759 in Marbach am Neckar geboren
""" ]
# Init vectorizer for the German language
vectorizer = KeyphraseCountVectorizer(spacy_pipeline='de_core_news_sm', pos_pattern='<ADJ.*>*<N.*>+', stop_words='german')

A German spacy_pipeline is specified and German stop_words are removed. Because the German spaCy part-of-speech tags differ from the English ones, the pos_pattern parameter is also customized. The regex pattern <ADJ.*>*<N.*>+ extracts keyphrases that consist of zero or more adjectives, followed by one or more nouns, using the German spaCy part-of-speech tags.
Attention! By default, the spaCy pipeline components ['parser', 'attribute_ruler', 'lemmatizer', 'ner'] are excluded to increase efficiency. If you choose a different spacy_pipeline, it may be necessary to exclude/include different pipeline components using the spacy_exclude parameter in order for the spaCy POS tagger to work properly.
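For illustration, a sketch of the spacy_exclude parameter (the excluded component list below is hypothetical; which components can safely be excluded depends on the chosen pipeline):

# Exclude only the pipeline components that are not needed for POS tagging.
vectorizer = KeyphraseCountVectorizer(spacy_pipeline='de_core_news_sm',
                                      pos_pattern='<ADJ.*>*<N.*>+',
                                      stop_words='german',
                                      spacy_exclude=['parser', 'ner'])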
Back to Table of Contents
The KeyphraseTfidfVectorizer has the same function calls and features as the KeyphraseCountVectorizer. The only difference is that the document-keyphrase matrix cells represent tf or tf-idf values instead of counts, depending on the parameter settings.
from keyphrase_vectorizers import KeyphraseTfidfVectorizer
docs = [ """Supervised learning is the machine learning task of learning a function that
maps an input to an output based on example input-output pairs. It infers a
function from labeled training data consisting of a set of training examples.
In supervised learning, each example is a pair consisting of an input object
(typically a vector) and a desired output value (also called the supervisory signal).
A supervised learning algorithm analyzes the training data and produces an inferred function,
which can be used for mapping new examples. An optimal scenario will allow for the
algorithm to correctly determine the class labels for unseen instances. This requires
the learning algorithm to generalize from the training data to unseen situations in a
'reasonable' way (see inductive bias).""",
"""Keywords are defined as phrases that capture the main topics discussed in a document.
As they offer a brief yet precise summary of document content, they can be utilized for various applications.
In an information retrieval environment, they serve as an indication of document relevance for users, as the list
of keywords can quickly help to determine whether a given document is relevant to their interest.
As keywords reflect a document's main topics, they can be utilized to classify documents into groups
by measuring the overlap between the keywords assigned to them. Keywords are also used proactively
in information retrieval."""]
# Init default vectorizer for the English language that computes tf-idf values
vectorizer = KeyphraseTfidfVectorizer()
# Print parameters
print(vectorizer.get_params())
>>> {'binary': False, 'custom_pos_tagger': None, 'decay': None, 'delete_min_df': None, 'dtype': <class 'numpy.int64'>, 'lowercase': True, 'max_df': None, 'min_df': None, 'pos_pattern': '<J.*>*<N.*>+', 'spacy_exclude': ['parser', 'attribute_ruler', 'lemmatizer', 'ner', 'textcat'], 'spacy_pipeline': 'en_core_web_sm', 'stop_words': 'english', 'workers': 1}

To compute tf values instead of tf-idf values, set use_idf=False.
# Fit and transform to document-keyphrase matrix.
document_keyphrase_matrix = vectorizer.fit_transform(docs).toarray()
print(document_keyphrase_matrix)
>>> [[0. 0. 0.09245003 0.09245003 0.09245003 0.09245003
0.2773501 0.09245003 0.2773501 0.2773501 0.09245003 0.
0. 0.09245003 0. 0.2773501 0.09245003 0.09245003
0. 0.09245003 0.09245003 0.09245003 0.09245003 0.09245003
0.5547002 0. 0. 0.09245003 0.09245003 0.
0.2773501 0.18490007 0.09245003 0. 0.2773501 0.
0. 0.09245003 0. 0.09245003 0. 0.
0. 0.18490007 0. ]
[ 0.11867817 0.11867817 0. 0. 0. 0.
0. 0. 0. 0. 0. 0.11867817
0.11867817 0. 0.11867817 0. 0. 0.
0.11867817 0. 0. 0. 0. 0.
0. 0.11867817 0.23735633 0. 0. 0.11867817
0. 0. 0. 0.23735633 0. 0.11867817
0.11867817 0. 0.59339083 0. 0.11867817 0.11867817
0.11867817 0. 0.59339083]]

# Return keyphrases
keyphrases = vectorizer.get_feature_names_out()
print(keyphrases)
>>> ['various applications' 'list' 'task' 'supervisory signal'
'inductive bias' 'supervised learning algorithm' 'supervised learning'
'example input' 'input' 'algorithm' 'set' 'precise summary' 'documents'
'input object' 'interest' 'function' 'class labels' 'machine'
'document content' 'output pairs' 'new examples' 'unseen situations'
'vector' 'output value' 'learning' 'document relevance' 'main topics'
'pair' 'training examples' 'information retrieval environment'
'training data' 'example' 'optimal scenario' 'information retrieval'
'output' 'groups' 'indication' 'unseen instances' 'keywords' 'way'
'phrases' 'overlap' 'users' 'learning algorithm' 'document']

Back to Table of Contents
KeyphraseVectorizers loads a new spacy.Language object for each KeyphraseVectorizer object. When using several KeyphraseVectorizer objects, it is more efficient to load the spacy.Language object in advance and pass it to the vectorizers via the spacy_pipeline parameter.
import spacy
from keyphrase_vectorizers import KeyphraseCountVectorizer, KeyphraseTfidfVectorizer
docs = [ """Supervised learning is the machine learning task of learning a function that
maps an input to an output based on example input-output pairs. It infers a
function from labeled training data consisting of a set of training examples.
In supervised learning, each example is a pair consisting of an input object
(typically a vector) and a desired output value (also called the supervisory signal).
A supervised learning algorithm analyzes the training data and produces an inferred function,
which can be used for mapping new examples. An optimal scenario will allow for the
algorithm to correctly determine the class labels for unseen instances. This requires
the learning algorithm to generalize from the training data to unseen situations in a
'reasonable' way (see inductive bias).""",
"""Keywords are defined as phrases that capture the main topics discussed in a document.
As they offer a brief yet precise summary of document content, they can be utilized for various applications.
In an information retrieval environment, they serve as an indication of document relevance for users, as the list
of keywords can quickly help to determine whether a given document is relevant to their interest.
As keywords reflect a document's main topics, they can be utilized to classify documents into groups
by measuring the overlap between the keywords assigned to them. Keywords are also used proactively
in information retrieval."""]
nlp = spacy.load("en_core_web_sm")
vectorizer1 = KeyphraseCountVectorizer(spacy_pipeline=nlp)
vectorizer2 = KeyphraseTfidfVectorizer(spacy_pipeline=nlp)
# the following calls use the nlp object
vectorizer1.fit(docs)
vectorizer2.fit(docs)

Back to Table of Contents
To use a different part-of-speech tagger than the ones provided by spaCy, a custom POS-tagger function can be defined and passed to the keyphrase vectorizers via the custom_pos_tagger parameter. This parameter expects a callable function, which in turn needs to expect a list of strings in a 'raw_documents' parameter and must return a list of (word token, POS-tag) tuples. If this parameter is not None, the custom tagger function is used to tag words with parts-of-speech, while the spaCy pipeline is ignored.
Flair can be installed via pip install flair.
from typing import List
import flair
from flair.models import SequenceTagger
from flair.tokenization import SegtokSentenceSplitter
docs = [ """Supervised learning is the machine learning task of learning a function that
maps an input to an output based on example input-output pairs. It infers a
function from labeled training data consisting of a set of training examples.
In supervised learning, each example is a pair consisting of an input object
(typically a vector) and a desired output value (also called the supervisory signal).
A supervised learning algorithm analyzes the training data and produces an inferred function,
which can be used for mapping new examples. An optimal scenario will allow for the
algorithm to correctly determine the class labels for unseen instances. This requires
the learning algorithm to generalize from the training data to unseen situations in a
'reasonable' way (see inductive bias).""",
"""Keywords are defined as phrases that capture the main topics discussed in a document.
As they offer a brief yet precise summary of document content, they can be utilized for various applications.
In an information retrieval environment, they serve as an indication of document relevance for users, as the list
of keywords can quickly help to determine whether a given document is relevant to their interest.
As keywords reflect a document's main topics, they can be utilized to classify documents into groups
by measuring the overlap between the keywords assigned to them. Keywords are also used proactively
in information retrieval."""]
# define flair POS-tagger and splitter
tagger = SequenceTagger.load('pos')
splitter = SegtokSentenceSplitter()
# define custom POS-tagger function using flair
def custom_pos_tagger(raw_documents: List[str], tagger: flair.models.SequenceTagger = tagger, splitter: flair.tokenization.SegtokSentenceSplitter = splitter) -> List[tuple]:
"""
Important:
The mandatory 'raw_documents' parameter can NOT be named differently and has to expect a list of strings.
Any other parameter of the custom POS-tagger function can be arbitrarily defined, depending on the respective use case.
Furthermore the function has to return a list of (word token, POS-tag) tuples.
"""
# split texts into sentences
sentences = []
for doc in raw_documents :
sentences . extend ( splitter . split ( doc ))
# predict POS tags
tagger . predict ( sentences )
# iterate through sentences to get word tokens and predicted POS-tags
pos_tags = []
words = []
for sentence in sentences :
pos_tags . extend ([ label . value for label in sentence . get_labels ( 'pos' )])
words . extend ([ word . text for word in sentence ])
return list ( zip ( words , pos_tags ))
# check that the custom POS-tagger function returns a list of (word token, POS-tag) tuples
print(custom_pos_tagger(raw_documents=docs))
>>> [('Supervised', 'VBN'), ('learning', 'NN'), ('is', 'VBZ'), ('the', 'DT'), ('machine', 'NN'), ('learning', 'VBG'), ('task', 'NN'), ('of', 'IN'), ('learning', 'VBG'), ('a', 'DT'), ('function', 'NN'), ('that', 'WDT'), ('maps', 'VBZ'), ('an', 'DT'), ('input', 'NN'), ('to', 'IN'), ('an', 'DT'), ('output', 'NN'), ('based', 'VBN'), ('on', 'IN'), ('example', 'NN'), ('input-output', 'NN'), ('pairs', 'NNS'), ('.', '.'),
('It', 'PRP'), ('infers', 'VBZ'), ('a', 'DT'), ('function', 'NN'), ('from', 'IN'), ('labeled', 'VBN'), ('training', 'NN'), ('data', 'NNS'), ('consisting', 'VBG'), ('of', 'IN'), ('a', 'DT'), ('set', 'NN'), ('of', 'IN'), ('training', 'NN'), ('examples', 'NNS'), ('.', '.'),
('In', 'IN'), ('supervised', 'JJ'), ('learning', 'NN'), (',', ','), ('each', 'DT'), ('example', 'NN'), ('is', 'VBZ'), ('a', 'DT'), ('pair', 'NN'), ('consisting', 'VBG'), ('of', 'IN'), ('an', 'DT'), ('input', 'NN'), ('object', 'NN'), ('(', ':'), ('typically', 'RB'), ('a', 'DT'), ('vector', 'NN'), (')', ','), ('and', 'CC'), ('a', 'DT'), ('desired', 'VBN'), ('output', 'NN'), ('value', 'NN'), ('(', ','), ('also', 'RB'), ('called', 'VBN'), ('the', 'DT'), ('supervisory', 'JJ'), ('signal', 'NN'), (')', '-RRB-'), ('.', '.'),
('A', 'DT'), ('supervised', 'JJ'), ('learning', 'NN'), ('algorithm', 'NN'), ('analyzes', 'VBZ'), ('the', 'DT'), ('training', 'NN'), ('data', 'NNS'), ('and', 'CC'), ('produces', 'VBZ'), ('an', 'DT'), ('inferred', 'JJ'), ('function', 'NN'), (',', ','), ('which', 'WDT'), ('can', 'MD'), ('be', 'VB'), ('used', 'VBN'), ('for', 'IN'), ('mapping', 'VBG'), ('new', 'JJ'), ('examples', 'NNS'), ('.', '.'),
('An', 'DT'), ('optimal', 'JJ'), ('scenario', 'NN'), ('will', 'MD'), ('allow', 'VB'), ('for', 'IN'), ('the', 'DT'), ('algorithm', 'NN'), ('to', 'TO'), ('correctly', 'RB'), ('determine', 'VB'), ('the', 'DT'), ('class', 'NN'), ('labels', 'NNS'), ('for', 'IN'), ('unseen', 'JJ'), ('instances', 'NNS'), ('.', '.'),
('This', 'DT'), ('requires', 'VBZ'), ('the', 'DT'), ('learning', 'NN'), ('algorithm', 'NN'), ('to', 'TO'), ('generalize', 'VB'), ('from', 'IN'), ('the', 'DT'), ('training', 'NN'), ('data', 'NNS'), ('to', 'IN'), ('unseen', 'JJ'), ('situations', 'NNS'), ('in', 'IN'), ('a', 'DT'), ("'", '``'), ('reasonable', 'JJ'), ("'", "''"), ('way', 'NN'), ('(', ','), ('see', 'VB'), ('inductive', 'JJ'), ('bias', 'NN'), (')', '-RRB-'), ('.', '.'),
('Keywords', 'NNS'), ('are', 'VBP'), ('defined', 'VBN'), ('as', 'IN'), ('phrases', 'NNS'), ('that', 'WDT'), ('capture', 'VBP'), ('the', 'DT'), ('main', 'JJ'), ('topics', 'NNS'), ('discussed', 'VBN'), ('in', 'IN'), ('a', 'DT'), ('document', 'NN'), ('.', '.'),
('As', 'IN'), ('they', 'PRP'), ('offer', 'VBP'), ('a', 'DT'), ('brief', 'JJ'), ('yet', 'CC'), ('precise', 'JJ'), ('summary', 'NN'), ('of', 'IN'), ('document', 'NN'), ('content', 'NN'), (',', ','), ('they', 'PRP'), ('can', 'MD'), ('be', 'VB'), ('utilized', 'VBN'), ('for', 'IN'), ('various', 'JJ'), ('applications', 'NNS'), ('.', '.'),
('In', 'IN'), ('an', 'DT'), ('information', 'NN'), ('retrieval', 'NN'), ('environment', 'NN'), (',', ','), ('they', 'PRP'), ('serve', 'VBP'), ('as', 'IN'), ('an', 'DT'), ('indication', 'NN'), ('of', 'IN'), ('document', 'NN'), ('relevance', 'NN'), ('for', 'IN'), ('users', 'NNS'), (',', ','), ('as', 'IN'), ('the', 'DT'), ('list', 'NN'), ('of', 'IN'), ('keywords', 'NNS'), ('can', 'MD'), ('quickly', 'RB'), ('help', 'VB'), ('to', 'TO'), ('determine', 'VB'), ('whether', 'IN'), ('a', 'DT'), ('given', 'VBN'), ('document', 'NN'), ('is', 'VBZ'), ('relevant', 'JJ'), ('to', 'IN'), ('their', 'PRP$'), ('interest', 'NN'), ('.', '.'),
('As', 'IN'), ('keywords', 'NNS'), ('reflect', 'VBP'), ('a', 'DT'), ('document', 'NN'), ("'s", 'POS'), ('main', 'JJ'), ('topics', 'NNS'), (',', ','), ('they', 'PRP'), ('can', 'MD'), ('be', 'VB'), ('utilized', 'VBN'), ('to', 'TO'), ('classify', 'VB'), ('documents', 'NNS'), ('into', 'IN'), ('groups', 'NNS'), ('by', 'IN'), ('measuring', 'VBG'), ('the', 'DT'), ('overlap', 'NN'), ('between', 'IN'), ('the', 'DT'), ('keywords', 'NNS'), ('assigned', 'VBN'), ('to', 'IN'), ('them', 'PRP'), ('.', '.'),
('Keywords', 'NNS'), ('are', 'VBP'), ('also', 'RB'), ('used', 'VBN'), ('proactively', 'RB'), ('in', 'IN'), ('information', 'NN'), ('retrieval', 'NN'), ('.', '.')]

After the custom POS-tagger function is defined, it can be passed to the keyphrase vectorizers via the custom_pos_tagger parameter.
from keyphrase_vectorizers import KeyphraseCountVectorizer
# use custom POS-tagger with KeyphraseVectorizers
vectorizer = KeyphraseCountVectorizer(custom_pos_tagger=custom_pos_tagger)
vectorizer.fit(docs)
keyphrases = vectorizer.get_feature_names_out()
print(keyphrases)
>>> ['output value' 'information retrieval' 'algorithm' 'vector' 'groups'
'main topics' 'task' 'precise summary' 'supervised learning'
'inductive bias' 'information retrieval environment'
'supervised learning algorithm' 'function' 'input' 'pair'
'document relevance' 'learning' 'class labels' 'new examples' 'keywords'
'list' 'machine' 'training data' 'unseen situations' 'phrases' 'output'
'optimal scenario' 'document' 'training examples' 'documents' 'interest'
'indication' 'learning algorithm' 'inferred function'
'various applications' 'example' 'set' 'unseen instances'
'example input-output pairs' 'way' 'users' 'input object'
'supervisory signal' 'overlap' 'document content']

Back to Table of Contents
Using KeyphraseVectorizers together with KeyBERT for keyphrase extraction yields the PatternRank approach. PatternRank extracts grammatically accurate keyphrases that are most similar to a document. Thereby, the vectorizer first extracts candidate keyphrases from the text documents, which are subsequently ranked by KeyBERT based on their document similarity. The most similar keyphrases can then be considered as document keywords.
The advantage of using KeyphraseVectorizers in addition to KeyBERT is that it allows users to get grammatically correct keyphrases instead of simple n-grams of pre-defined length. In KeyBERT, users can specify the keyphrase_ngram_range to define the length of the retrieved keyphrases. However, this raises two issues. First, users usually do not know the optimal n-gram range and therefore have to spend some time experimenting until they find a suitable one. Second, even after finding a good n-gram range, the returned keyphrases are sometimes still grammatically not quite correct or are slightly off-key. Unfortunately, this limits the quality of the returned keyphrases.
To address this issue, we can use the vectorizers of this package to first extract candidate keyphrases that consist of zero or more adjectives, followed by one or more nouns, in a preprocessing step instead of simple n-grams. TextRank, SingleRank, and EmbedRank have successfully used this noun phrase approach for keyphrase extraction. The extracted candidate keyphrases are subsequently passed to KeyBERT for embedding generation and similarity calculation. To use both packages for keyphrase extraction, we need to pass KeyBERT a keyphrase vectorizer via the vectorizer parameter. Since the length of keyphrases now depends on part-of-speech tags, there is no need to define an n-gram length anymore.
KeyBERT can be installed via pip install keybert.
from keyphrase_vectorizers import KeyphraseCountVectorizer
from keybert import KeyBERT
docs = [ """Supervised learning is the machine learning task of learning a function that
maps an input to an output based on example input-output pairs. It infers a
function from labeled training data consisting of a set of training examples.
In supervised learning, each example is a pair consisting of an input object
(typically a vector) and a desired output value (also called the supervisory signal).
A supervised learning algorithm analyzes the training data and produces an inferred function,
which can be used for mapping new examples. An optimal scenario will allow for the
algorithm to correctly determine the class labels for unseen instances. This requires
the learning algorithm to generalize from the training data to unseen situations in a
'reasonable' way (see inductive bias).""" ,
"""Keywords are defined as phrases that capture the main topics discussed in a document.
As they offer a brief yet precise summary of document content, they can be utilized for various applications.
In an information retrieval environment, they serve as an indication of document relevance for users, as the list
of keywords can quickly help to determine whether a given document is relevant to their interest.
As keywords reflect a document's main topics, they can be utilized to classify documents into groups
by measuring the overlap between the keywords assigned to them. Keywords are also used proactively
in information retrieval.""" ]
kw_model = KeyBERT()

Instead of deciding on a suitable n-gram range, e.g. (1,2)...
>>> kw_model.extract_keywords(docs=docs, keyphrase_ngram_range=(1, 2))
[[('labeled training', 0.6013),
  ('examples supervised', 0.6112),
  ('signal supervised', 0.6152),
  ('supervised', 0.6676),
  ('supervised learning', 0.6779)],
 [('keywords assigned', 0.6354),
  ('keywords used', 0.6373),
  ('list keywords', 0.6375),
  ('keywords quickly', 0.6376),
  ('keywords defined', 0.6997)]]

...we can now simply let the keyphrase vectorizer decide on suitable keyphrases, without limiting them to a maximum or minimum n-gram range. We only have to pass a keyphrase vectorizer as a parameter to KeyBERT:
>>> kw_model.extract_keywords(docs=docs, vectorizer=KeyphraseCountVectorizer())
[[('learning', 0.4813),
  ('training data', 0.5271),
  ('learning algorithm', 0.5632),
  ('supervised learning', 0.6779),
  ('supervised learning algorithm', 0.6992)],
 [('document content', 0.3988),
  ('information retrieval environment', 0.5166),
  ('information retrieval', 0.5792),
  ('keywords', 0.6046),
  ('document relevance', 0.633)]]

This allows us to make sure that no important words are cut off by defining the n-gram range too short. For example, we would not find the keyphrase 'supervised learning algorithm' with keyphrase_ngram_range=(1,2). Furthermore, we avoid getting slightly off-key keyphrases like 'labeled training', 'signal supervised', or 'keywords quickly'.
For more tips on how to use KeyphraseVectorizers together with KeyBERT, visit this guide.
Back to Table of Contents
Similar to the application with KeyBERT, the keyphrase vectorizers can be used to obtain grammatically correct keyphrases as topic descriptions instead of simple n-grams. This allows us to make sure that important topic-describing keyphrases are not cut off by defining the n-gram range too short. Moreover, we don't need to clean stopwords beforehand, can get more precise topic models, and avoid topic-describing keyphrases that are slightly off-key.
BERTopic can be installed via pip install bertopic.
from keyphrase_vectorizers import KeyphraseCountVectorizer
from bertopic import BERTopic
from sklearn.datasets import fetch_20newsgroups
# load text documents
docs = fetch_20newsgroups(subset='all', remove=('headers', 'footers', 'quotes'))['data']
# only use subset of the data
docs = docs[:5000]
# train topic model with KeyphraseCountVectorizer
keyphrase_topic_model = BERTopic(vectorizer_model=KeyphraseCountVectorizer())
keyphrase_topics, keyphrase_probs = keyphrase_topic_model.fit_transform(docs)
# get topics
>>> keyphrase_topic_model.topics
{-1: [('file', 0.007265527630674131),
  ('one', 0.007055454904474792),
  ('use', 0.00633563957153475),
  ('program', 0.006053271092949018),
  ('get', 0.006011060091056076),
  ('people', 0.005729309058970368),
  ('know', 0.005635951168273583),
  ('like', 0.0055692449802916015),
  ('time', 0.00527028825803415),
  ('us', 0.00525564504880084)],
 0: [('game', 0.024134589719090525),
  ('team', 0.021852806383170772),
  ('players', 0.01749406934044139),
  ('games', 0.014397938026886745),
  ('hockey', 0.013932342023677305),
  ('win', 0.013706115572901401),
  ('year', 0.013297593024390321),
  ('play', 0.012533185558169046),
  ('baseball', 0.012412743802062559),
  ('season', 0.011602725885164318)],
 1: [('patients', 0.022600352291162015),
  ('msg', 0.02023877371575874),
  ('doctor', 0.018816282737587457),
  ('medical', 0.018614407917995103),
  ('treatment', 0.0165028251400717),
  ('food', 0.01604980195180696),
  ('candida', 0.015255961242066143),
  ('disease', 0.015115496310099693),
  ('pain', 0.014129703072484495),
  ('hiv', 0.012884503220341102)],
 2: [('key', 0.028851633177510126),
  ('encryption', 0.024375137861044675),
  ('clipper', 0.023565947302544528),
  ('privacy', 0.019258719348097385),
  ('security', 0.018983682856076434),
  ('chip', 0.018822199098878365),
  ('keys', 0.016060139239615384),
  ('internet', 0.01450486904722165),
  ('encrypted', 0.013194373119964168),
  ('government', 0.01303978311708837)],
 ...

The same topics look different when the KeyphraseCountVectorizer is not used:
from bertopic import BERTopic
from sklearn.datasets import fetch_20newsgroups
# load text documents
docs = fetch_20newsgroups(subset='all', remove=('headers', 'footers', 'quotes'))['data']
# only use subset of the data
docs = docs[:5000]
# train topic model without KeyphraseCountVectorizer
topic_model = BERTopic()
topics, probs = topic_model.fit_transform(docs)
# get topics
>>> topic_model.topics
{-1: [('the', 0.012864641020408933),
  ('to', 0.01187920529994724),
  ('and', 0.011431498631699856),
  ('of', 0.01099851927541331),
  ('is', 0.010995478673036962),
  ('in', 0.009908233622158523),
  ('for', 0.009903667215879675),
  ('that', 0.009619596716087699),
  ('it', 0.009578499681829809),
  ('you', 0.0095328846440753)],
 0: [('game', 0.013949166096523719),
  ('team', 0.012458483177116456),
  ('he', 0.012354733462693834),
  ('the', 0.01119583508278812),
  ('10', 0.010190243555226108),
  ('in', 0.0101436249231417),
  ('players', 0.009682212470082758),
  ('to', 0.00933700544705287),
  ('was', 0.009172402203816335),
  ('and', 0.008653375901739337)],
 1: [('of', 0.012771267188340924),
  ('to', 0.012581337590513296),
  ('is', 0.012554884458779008),
  ('patients', 0.011983273578628046),
  ('and', 0.011863499662237566),
  ('that', 0.011616113472989725),
  ('it', 0.011581944987387165),
  ('the', 0.011475148304229873),
  ('in', 0.011395485985801054),
  ('msg', 0.010715000656335596)],
 2: [('key', 0.01725282988290282),
  ('the', 0.014634841495851404),
  ('be', 0.014429762197907552),
  ('encryption', 0.013530733999898166),
  ('to', 0.013443159534369817),
  ('clipper', 0.01296614319927958),
  ('of', 0.012164734232650158),
  ('is', 0.012128295958613464),
  ('and', 0.011972763728732667),
  ('chip', 0.010785744492767285)],
 ...

Back to Table of Contents
KeyphraseVectorizers also supports online/incremental updates of its representations (similar to the OnlineCountVectorizer). Besides updating the keyphrases learned by the vectorizer, decay and cleaning functions are implemented to prevent the sparse document-keyphrase matrix from becoming too large.
Parameters for online updates:
decay: At each iteration, the document-keyphrase representation of the new documents is added to the document-keyphrase representation of all documents processed thus far. In other words, the document-keyphrase matrix keeps increasing with each iteration. However, especially in streaming settings, older documents may become less relevant over time. Therefore, a decay parameter is implemented that decays the document-keyphrase frequencies at each iteration before the document frequencies of the new documents are added. The decay parameter is a value between 0 and 1 and indicates the percentage by which the frequencies in the previous document-keyphrase matrix should be reduced. For example, a value of .1 will decrease the frequencies in the document-keyphrase matrix by 10% at each iteration before adding the new document-keyphrase matrix (i.e., the updated matrix is (1 - decay) * previous matrix + new matrix). This ensures that recent data carries more weight than data from previous iterations.

delete_min_df: We may want to remove keyphrases that appear only rarely from the document-keyphrase representation. The min_df parameter works well for a single fit. However, in a streaming setting min_df does not work as intended, since a keyphrase frequency may start below min_df but end up above it over time, and setting a high value is not always advisable. As a result, the list of keyphrases learned by the vectorizer and the columns of the resulting document-keyphrase matrix can become very large. Similarly, if the decay parameter is used, some values decrease over time until they fall below min_df. For these reasons, the delete_min_df parameter is implemented. It takes a positive integer and indicates at each iteration which keyphrases should be removed from the already learned ones. If the value is set to 5, after each iteration it is checked whether the total frequency of a keyphrase exceeds this value. If it does not, the keyphrase is removed entirely from the list of keyphrases learned by the vectorizer. This helps to keep the document-keyphrase matrix at a manageable size.

from keyphrase_vectorizers import KeyphraseCountVectorizer
docs = [ """Supervised learning is the machine learning task of learning a function that
maps an input to an output based on example input-output pairs. It infers a
function from labeled training data consisting of a set of training examples.
In supervised learning, each example is a pair consisting of an input object
(typically a vector) and a desired output value (also called the supervisory signal).
A supervised learning algorithm analyzes the training data and produces an inferred function,
which can be used for mapping new examples. An optimal scenario will allow for the
algorithm to correctly determine the class labels for unseen instances. This requires
the learning algorithm to generalize from the training data to unseen situations in a
'reasonable' way (see inductive bias).""",
"""Keywords are defined as phrases that capture the main topics discussed in a document.
As they offer a brief yet precise summary of document content, they can be utilized for various applications.
In an information retrieval environment, they serve as an indication of document relevance for users, as the list
of keywords can quickly help to determine whether a given document is relevant to their interest.
As keywords reflect a document's main topics, they can be utilized to classify documents into groups
by measuring the overlap between the keywords assigned to them. Keywords are also used proactively
in information retrieval."""]
# Init default vectorizer.
vectorizer = KeyphraseCountVectorizer(decay=0.5, delete_min_df=3)
# initial vectorizer fit
vectorizer.fit_transform([docs[0]]).toarray()
>>> array([[1, 1, 3, 1, 1, 3, 1, 3, 1, 1, 1, 1, 2, 1, 3, 1, 1, 1, 1, 3, 1, 3,
        1, 1, 1]])
# check learned keyphrases
print(vectorizer.get_feature_names_out())
>>> ['output pairs', 'output value', 'function', 'optimal scenario',
 'pair', 'supervised learning', 'supervisory signal', 'algorithm',
 'supervised learning algorithm', 'way', 'training examples',
 'input object', 'example', 'machine', 'output',
 'unseen situations', 'unseen instances', 'inductive bias',
 'new examples', 'input', 'task', 'training data', 'class labels',
 'set', 'vector']
# learn additional keyphrases from new documents with partial fit
vectorizer.partial_fit([docs[1]])
vectorizer.transform([docs[1]]).toarray()
>>> array([[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
        0, 0, 0, 1, 1, 2, 1, 1, 2, 1, 1, 1, 1, 1, 1, 5, 1, 1, 5, 1]])
# check learned keyphrases, including newly learned ones
print(vectorizer.get_feature_names_out())
>>> ['output pairs', 'output value', 'function', 'optimal scenario',
 'pair', 'supervised learning', 'supervisory signal', 'algorithm',
 'supervised learning algorithm', 'way', 'training examples',
 'input object', 'example', 'machine', 'output',
 'unseen situations', 'unseen instances', 'inductive bias',
 'new examples', 'input', 'task', 'training data', 'class labels',
 'set', 'vector', 'list', 'various applications',
 'information retrieval', 'groups', 'overlap', 'main topics',
 'precise summary', 'document relevance', 'interest', 'indication',
 'information retrieval environment', 'phrases', 'keywords',
 'document content', 'documents', 'document', 'users']
# update list of learned keyphrases according to 'delete_min_df'
vectorizer.update_bow([docs[1]])
vectorizer.transform([docs[1]]).toarray()
>>> array([[5, 5]])
# check updated list of learned keyphrases (only the ones that appear more than 'delete_min_df' remain)
print(vectorizer.get_feature_names_out())
>>> ['keywords', 'document']
# update again and check the impact of 'decay' on the learned document-keyphrase matrix
vectorizer.update_bow([docs[1]])
vectorizer.X_.toarray()
>>> array([[7.5, 7.5]])
# with decay=0.5, the previous counts of 5 are halved to 2.5 before the new counts of 5 are added: 2.5 + 5 = 7.5

Back to Table of Contents
When citing KeyphraseVectorizers or PatternRank in academic papers and theses, please use this BibTeX entry:
@conference{schopf_etal_kdir22,
author={Tim Schopf and Simon Klimek and Florian Matthes},
title={PatternRank: Leveraging Pretrained Language Models and Part of Speech for Unsupervised Keyphrase Extraction},
booktitle={Proceedings of the 14th International Joint Conference on Knowledge Discovery, Knowledge Engineering and Knowledge Management (IC3K 2022) - KDIR},
year={2022},
pages={243-248},
publisher={SciTePress},
organization={INSTICC},
doi={10.5220/0011546600003335},
isbn={978-989-758-614-9},
issn={2184-3228},
}