TextGenerationEvaluationMetrics
1.0.0
This is an implementation of the metrics for measuring diversity and quality of text generation models introduced in this paper. In addition, some other metrics are included.
For BLEU and Self-BLEU, this high-performance implementation is used.
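As a rough reference only, the definition of Self-BLEU can be sketched with NLTK as below: each generated sentence is scored against the remaining generated sentences and the scores are averaged. The token lists, the bigram weights, and the smoothing choice are illustrative assumptions; this is not the fast implementation linked above and not this package's API.
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

generated = [
    ['the', 'cat', 'sat', 'on', 'the', 'mat'],
    ['a', 'cat', 'sat', 'on', 'a', 'mat'],
    ['dogs', 'run', 'in', 'the', 'park'],
]

smooth = SmoothingFunction().method1
scores = []
for i, hypothesis in enumerate(generated):
    # every other generated sentence serves as a reference for this one
    others = generated[:i] + generated[i + 1:]
    scores.append(sentence_bleu(others, hypothesis, weights=(0.5, 0.5), smoothing_function=smooth))
self_bleu = sum(scores) / len(scores)  # higher Self-BLEU indicates less diverse samples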
Here is an example of computing the MS-Jaccard distance. The input to this metric is a list of tokenized sentences.
from multiset_distances import MultisetDistances
ref1 = ['It', 'is', 'a', 'guide', 'to', 'action', 'that', 'ensures', 'that', 'the', 'military', 'will', 'forever', 'heed', 'Party', 'commands']
ref2 = ['It', 'is', 'the', 'guiding', 'principle', 'which', 'guarantees', 'the', 'military', 'forces', 'always', 'being', 'under', 'the', 'command', 'of', 'the', 'Party']
ref3 = ['It', 'is', 'the', 'practical', 'guide', 'for', 'the', 'army', 'always', 'to', 'heed', 'the', 'directions', 'of', 'the', 'party']
sen1 = ['It', 'is', 'a', 'guide', 'to', 'action', 'which', 'ensures', 'that', 'the', 'military', 'always', 'obeys', 'the', 'commands', 'of', 'the', 'party']
sen2 = ['he', 'read', 'the', 'book', 'because', 'he', 'was', 'interested', 'in', 'world', 'history']
references = [ref1, ref2, ref3]
sentences = [sen1, sen2]
msd = MultisetDistances(references=references)
msj_distance = msd.get_jaccard_score(sentences=sentences)

The value of msj_distance is {3: 0.17, 4: 0.13, 5: 0.09}, which gives the MS-Jaccard score for 3-grams, 4-grams, and 5-grams, respectively.
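As a rough sketch of the idea behind this score, the multiset Jaccard similarity for a single n-gram order can be written with collections.Counter as below, reusing the references and sentences from the example above. The helper ngram_multiset is hypothetical; the library's exact normalization, and the way several n are reported at once, follow the paper rather than this simplification.
from collections import Counter

def ngram_multiset(token_lists, n):
    # multiset (with counts) of all n-grams appearing in a list of tokenized sentences
    counts = Counter()
    for tokens in token_lists:
        counts.update(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))
    return counts

ref_ngrams = ngram_multiset(references, 3)
gen_ngrams = ngram_multiset(sentences, 3)
all_ngrams = set(ref_ngrams) | set(gen_ngrams)
intersection = sum(min(ref_ngrams[g], gen_ngrams[g]) for g in all_ngrams)
union = sum(max(ref_ngrams[g], gen_ngrams[g]) for g in all_ngrams)
jaccard_3gram = intersection / union  # multiset Jaccard for 3-grams only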
Here is an example of computing the FBD and EMBD distances. The input to these metrics is a list of strings; the BERT tokenizer is applied inside the code.
from bert_distances import FBD, EMBD
references = ["that is very good", "it is great"]
sentences1 = ["this is nice", "that is good"]
sentences2 = ["it is bad", "this is very bad"]
fbd = FBD(references=references, model_name="bert-base-uncased", bert_model_dir="/tmp/Bert/")
fbd_distance_sentences1 = fbd.get_score(sentences=sentences1)
fbd_distance_sentences2 = fbd.get_score(sentences=sentences2)
# fbd_distance_sentences1 = 17.8, fbd_distance_sentences2 = 22.0
embd = EMBD(references=references, model_name="bert-base-uncased", bert_model_dir="/tmp/Bert/")
embd_distance_sentences1 = embd.get_score(sentences=sentences1)
embd_distance_sentences2 = embd.get_score(sentences=sentences2)
# embd_distance_sentences1 = 10.9, embd_distance_sentences2 = 20.4
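FBD is a Fréchet-style distance over BERT features, in the spirit of FID: a Gaussian is fitted to the feature vectors of the references and of the generated sentences, and the two Gaussians are compared. Below is a minimal sketch of that Fréchet distance only; ref_features and gen_features are hypothetical arrays of shape (num_sentences, dim) standing in for the sentence features the library extracts with BERT, and the library's actual feature extraction and the EMBD variant are not reproduced here.
import numpy as np
from scipy.linalg import sqrtm

def frechet_distance(ref_features, gen_features):
    # fit a Gaussian (mean + covariance) to each side's feature vectors
    mu_r, mu_g = ref_features.mean(axis=0), gen_features.mean(axis=0)
    cov_r = np.cov(ref_features, rowvar=False)
    cov_g = np.cov(gen_features, rowvar=False)
    covmean = sqrtm(cov_r @ cov_g)
    if np.iscomplexobj(covmean):
        covmean = covmean.real  # drop tiny imaginary parts introduced by sqrtm
    diff = mu_r - mu_g
    return diff @ diff + np.trace(cov_r + cov_g - 2 * covmean)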
Please cite our paper if it helps your research.

@misc{montahaei2019jointly,
title={Jointly Measuring Diversity and Quality in Text Generation Models},
author={Ehsan Montahaei and Danial Alihosseini and Mahdieh Soleymani Baghshah},
year={2019},
eprint={1904.03971},
archivePrefix={arXiv},
primaryClass={cs.LG}
}