Figure 1: Three different QAG approaches.
lmqg is a Python library for question and answer generation (QAG) with language models (LMs). Here we consider paragraph-level QAG: the user provides a context (a paragraph or document), and the model generates a list of question-answer pairs over that context. With lmqg you can generate question-answer pairs, generate questions for given answers, extract answers, fine-tune and evaluate models, and host a REST API, all described below.
Update (November 2023): Chinese QAG models are now available in lmqg and AutoQG!
Update (May 2023): Two papers were accepted at ACL 2023 (the QAG comparison in Findings, and lmqg in System Demonstrations).
Update (October 2022): Our QG paper was accepted to the EMNLP 2022 main conference.
Our QAG models come in three types: pipeline, multitask, and end2end (see Figure 1). The pipeline type consists of independent question generation (QG) and answer extraction (AE) models: AE parses every sentence in the context to extract answers, and QG generates a question for each answer. The multitask type follows the same architecture as the pipeline, but the QG and AE models share the same underlying weights. Finally, the end2end type generates the list of question-answer pairs directly in an end-to-end fashion. In practice, pipeline and multitask tend to produce more question-answer pairs, while end2end is a few times faster; the quality of the generated pairs depends on the language. All three types are available in lmqg in nine languages (en/fr/ja/ko/ru/it/es/de/zh), and all models are shared on HuggingFace (see the model cards). To learn more about QAG, check our ACL 2023 paper, which describes the QAG models and reports a complete performance comparison of each QAG model in each language.
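The pipeline approach described above can be sketched schematically in plain Python. This is a toy illustration with hypothetical stub models, not lmqg's actual implementation: answer extraction runs over each sentence, and question generation runs over each extracted answer.

```python
# Schematic sketch of pipeline QAG with toy stand-in models.
# extract_answers and generate_question are hypothetical stubs, not lmqg APIs.

def extract_answers(sentence):
    """Toy AE: treat runs of capitalised words as candidate answers."""
    answers, current = [], []
    for w in sentence.split():
        if w[:1].isupper():
            current.append(w.strip(".,"))
        elif current:
            answers.append(" ".join(current))
            current = []
    if current:
        answers.append(" ".join(current))
    return answers

def generate_question(context, answer):
    """Toy QG: wrap the answer into a template question."""
    return f"What does the context say about {answer}?"

def pipeline_qag(context):
    """Pipeline QAG: AE over each sentence, then QG over each extracted answer."""
    qa_pairs = []
    for sentence in context.split(". "):
        for answer in extract_answers(sentence):
            qa_pairs.append((generate_question(context, answer), answer))
    return qa_pairs

pairs = pipeline_qag("William Turner was an English painter. He lived near Oxford")
```

The multitask type keeps this same two-step flow but backs both stubs with one shared model, while end2end replaces the whole loop with a single generation call.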
Figure 2: Examples of QAG (a) and QG (b).
All features also support question generation. Our QG models assume that, in addition to the context, the user specifies an answer, and the QG model generates a question that is answerable by that answer given the context (see Figure 2 for a comparison of QAG and QG). To learn more about QG, check our EMNLP 2022 paper, which describes the QG models in detail.
Let's first install lmqg via pip.
pip install lmqg
Generate question-answer pairs in a few lines:
from lmqg import TransformersQG

model = TransformersQG(language="en")
context = ("William Turner was an English painter who specialised in watercolour landscapes. He is often known "
           "as William Turner of Oxford or just Turner of Oxford to distinguish him from his contemporary, "
           "J. M. W. Turner. Many of Turner's paintings depicted the countryside around Oxford. One of his "
           "best known pictures is a view of the city of Oxford from Hinksey Hill.")
qa = model.generate_qa(context)
print(qa)
[
    ('Who was an English painter who specialised in watercolour landscapes?', 'William Turner'),
    ('What is William Turner often known as?', 'William Turner of Oxford or just Turner of Oxford'),
    ("What did many of Turner's paintings depict?", 'the countryside around Oxford'),
    ("What is one of Turner's best known pictures?", 'a view of the city of Oxford from Hinksey Hill')
]
In addition to en, we also support Italian (it), Spanish (es), Russian (ru), Korean (ko), Japanese (ja), German (de), French (fr), and Chinese (zh). You can switch languages by specifying the language ID when loading the model, e.g. TransformersQG(language="es") for Spanish. For more detailed usage, read the next section.
The main feature of lmqg is generating question-answer pairs over a given context with a convenient API. The available models for each QAG class can be found on the model cards.
from pprint import pprint
from lmqg import TransformersQG

# initialize model
model = TransformersQG('lmqg/t5-base-squad-qag')  # or TransformersQG(model='lmqg/t5-base-squad-qg-ae')
# paragraph to generate pairs of question and answer on
context = ("William Turner was an English painter who specialised in watercolour landscapes. He is often known "
           "as William Turner of Oxford or just Turner of Oxford to distinguish him from his contemporary, "
           "J. M. W. Turner. Many of Turner's paintings depicted the countryside around Oxford. One of his "
           "best known pictures is a view of the city of Oxford from Hinksey Hill.")
# model prediction
question_answer = model.generate_qa(context)
# the output is a list of tuples (question, answer)
pprint(question_answer)
[
    ('Who was an English painter who specialised in watercolour landscapes?', 'William Turner'),
    ('What is William Turner often known as?', 'William Turner of Oxford or just Turner of Oxford'),
    ("What did many of Turner's paintings depict?", 'the countryside around Oxford'),
    ("What is one of Turner's best known pictures?", 'a view of the city of Oxford from Hinksey Hill')
]
model and model_ae specify the QG and AE models, respectively.
from pprint import pprint
from lmqg import TransformersQG

# initialize model
model = TransformersQG(model='lmqg/t5-base-squad-qg', model_ae='lmqg/t5-base-squad-ae')
# paragraph to generate pairs of question and answer on
context = ("William Turner was an English painter who specialised in watercolour landscapes. He is often known "
           "as William Turner of Oxford or just Turner of Oxford to distinguish him from his contemporary, "
           "J. M. W. Turner. Many of Turner's paintings depicted the countryside around Oxford. One of his "
           "best known pictures is a view of the city of Oxford from Hinksey Hill.")
# model prediction
question_answer = model.generate_qa(context)
# the output is a list of tuples (question, answer)
pprint(question_answer)
[
    ('Who was an English painter who specialised in watercolour landscapes?', 'William Turner'),
    ('What is another name for William Turner?', 'William Turner of Oxford'),
    ("What did many of William Turner's paintings depict around Oxford?", 'the countryside'),
    ('From what hill is a view of the city of Oxford taken?', 'Hinksey Hill.')
]
model is the QG model. For a list of the available QG models, see QG-Bench, a multilingual QG benchmark.
from pprint import pprint
from lmqg import TransformersQG

# initialize model
model = TransformersQG(model='lmqg/t5-base-squad-qg')
# a list of paragraphs
context = [
    "William Turner was an English painter who specialised in watercolour landscapes",
    "William Turner was an English painter who specialised in watercolour landscapes"
]
# a list of answers (same size as the context list)
answer = [
    "William Turner",
    "English"
]
# model prediction
question = model.generate_q(list_context=context, list_answer=answer)
pprint(question)
[
    'Who was an English painter who specialised in watercolour landscapes?',
    'What nationality was William Turner?'
]
model is the AE model.
from pprint import pprint
from lmqg import TransformersQG

# initialize model
model = TransformersQG(model='lmqg/t5-base-squad-ae')
# model prediction
answer = model.generate_a("William Turner was an English painter who specialised in watercolour landscapes")
pprint(answer)
['William Turner']
AutoQG (https://autoqg.net) is a free web application hosting our QAG models.
lmqg also provides a command-line interface to fine-tune and evaluate QG, AE, and QAG models.
To fine-tune a QG (or AE, QAG) model, we employ a two-stage hyperparameter optimisation, as described in the figure above. The following command runs fine-tuning with hyperparameter optimisation.
lmqg-train-search -c "tmp_ckpt" -d "lmqg/qg_squad" -m "t5-small" -b 64 --epoch-partial 5 -e 15 --language "en" --n-max-config 1 \
  -g 2 4 --lr 1e-04 5e-04 1e-03 --label-smoothing 0 0.15
Check lmqg-train-search -h to see all the options.
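The two-stage search behind this command can be sketched in plain Python. This is a hypothetical illustration with a stub training function, not lmqg's GridSearcher: stage one trains every configuration in the grid for epoch_partial epochs, and stage two continues only the best n_max_config configurations to the full epoch budget.

```python
import itertools

def two_stage_search(grid, train_and_score, epoch_partial, epoch, n_max_config):
    """Hypothetical two-stage grid search: short runs for every config,
    full runs only for the top-scoring n_max_config configs."""
    # Stage 1: enumerate all configurations and train each briefly.
    keys = list(grid)
    configs = [dict(zip(keys, values))
               for values in itertools.product(*(grid[k] for k in keys))]
    stage1 = [(train_and_score(c, epoch_partial), c) for c in configs]
    # Stage 2: continue only the most promising configurations.
    stage1.sort(key=lambda item: item[0], reverse=True)
    finalists = [c for _, c in stage1[:n_max_config]]
    stage2 = [(train_and_score(c, epoch), c) for c in finalists]
    return max(stage2, key=lambda item: item[0])

# Toy scoring function standing in for actual fine-tuning and validation.
def toy_score(config, epochs):
    return epochs * config["lr"] * (1 - config["label_smoothing"])

grid = {"gradient_accumulation_steps": [2, 4],
        "lr": [1e-04, 5e-04, 1e-03],
        "label_smoothing": [0, 0.15]}
best_score, best_config = two_stage_search(
    grid, toy_score, epoch_partial=5, epoch=15, n_max_config=5)
```

The grid values mirror the CLI flags above (-g, --lr, --label-smoothing); only 2 x 3 x 2 = 12 short runs are trained, and at most n_max_config of them get a full run.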
Fine-tuning a model in Python works as follows.
from lmqg import GridSearcher

trainer = GridSearcher(
    checkpoint_dir='tmp_ckpt',
    dataset_path='lmqg/qg_squad',
    model='t5-small',
    epoch=15,
    epoch_partial=5,
    batch=64,
    n_max_config=5,
    gradient_accumulation_steps=[2, 4],
    lr=[1e-04, 5e-04, 1e-03],
    label_smoothing=[0, 0.15]
)
trainer.run()
The evaluation tool reports BLEU4, ROUGE-L, METEOR, BERTScore, and MoverScore following the QG benchmark. From the command line, run the following:
lmqg-eval -m "lmqg/t5-large-squad-qg" -e "./eval_metrics" -d "lmqg/qg_squad" -l "en"
where -m is a model alias on HuggingFace or a path to a local checkpoint, -e is the directory to export the metric files to, -d is the dataset to evaluate on, and -l is the language of the test set. Instead of running model prediction every time, you can provide a prediction file to avoid recomputation.
lmqg-eval --hyp-test '{your prediction file}' -e "./eval_metrics" -d "lmqg/qg_squad" -l "en"
The prediction file should be a text file containing the model generations, one per line, in the order of the test split of the target dataset (example). Check lmqg-eval -h to see all the options.
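As an illustration of the expected prediction-file layout (one generation per line, in test-split order; the file name here is hypothetical):

```python
import os
import tempfile

# One model generation per line, in the same order as the test split.
predictions = [
    "Who was an English painter who specialised in watercolour landscapes?",
    "What is William Turner often known as?",
]

# Hypothetical file name; pass this path to lmqg-eval via --hyp-test.
path = os.path.join(tempfile.gettempdir(), "hyp_test.txt")
with open(path, "w", encoding="utf-8") as f:
    f.write("\n".join(predictions))

# Reading it back: line i corresponds to test example i.
with open(path, encoding="utf-8") as f:
    loaded = f.read().splitlines()
```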
Finally, lmqg provides a REST API that hosts model inference via the HuggingFace Inference API. You need a HuggingFace API token to run your own API, and you should install the dependencies as follows.
pip install lmqg[api]
export API_TOKEN={Your Huggingface API Token}
uvicorn app:app --host 0.0.0.0 --port 8088
To run the application with Docker instead:
docker build -t lmqg/app:latest . --build-arg api_token={Your Huggingface API Token}
docker run -p 8080:8080 lmqg/app:latest
Alternatively, run the local version of the app:
uvicorn app_local:app --host 0.0.0.0 --port 8088
The Swagger UI is available at http://127.0.0.1:8088/docs when you run the application locally (otherwise, replace the address with your server address). You must pass the HuggingFace API token through the environment variable API_TOKEN. The main endpoint is question_generation, which takes the following parameters:
| Parameter | Description |
|---|---|
| input_text | input text, a paragraph or sentence to generate questions on |
| language | language |
| qg_model | question generation model |
| answer_model | answer extraction model |
It returns a list of dictionaries, each containing a question and an answer.
{
    "qa": [
        {"question": "Who founded Nintendo Karuta?", "answer": "Fusajiro Yamauchi"},
        {"question": "When did Nintendo distribute its first video game console, the Color TV-Game?", "answer": "1977"}
    ]
}
If you use any of these resources, please cite the following papers, and check the code to reproduce the models where needed.
@inproceedings{ushio-etal-2022-generative,
title = "{G}enerative {L}anguage {M}odels for {P}aragraph-{L}evel {Q}uestion {G}eneration",
author = "Ushio, Asahi and
Alva-Manchego, Fernando and
Camacho-Collados, Jose",
booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2022",
address = "Abu Dhabi, U.A.E.",
publisher = "Association for Computational Linguistics",
}
@inproceedings{ushio-etal-2023-an-empirical,
title = "An Empirical Comparison of LM-based Question and Answer Generation Methods",
author = "Ushio, Asahi and
Alva-Manchego, Fernando and
Camacho-Collados, Jose",
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics: Findings",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
}
@inproceedings{ushio-etal-2023-a-practical-toolkit,
title = "A Practical Toolkit for Multilingual Question and Answer Generation",
author = "Ushio, Asahi and
Alva-Manchego, Fernando and
Camacho-Collados, Jose",
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics: System Demonstrations",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
}