Chinese
English
Jiayan (甲言), whose name means "oracle-bone Classical Chinese" (甲骨文言), is an NLP toolkit focused on ancient Chinese processing.
Most mainstream Chinese NLP tools are built primarily on modern Chinese corpora, so their performance on ancient Chinese is unsatisfactory (see Tokenization below for details). This project was started to assist ancient Chinese information processing, and to help scholars and enthusiasts who want to mine the riches of ancient culture to better analyze and use Classical Chinese materials, turning "cultural heritage" into "new cultural products".
The current version supports five functions: lexicon construction, automatic tokenization, part-of-speech tagging, Classical Chinese sentence segmentation, and automatic punctuation; more features are under development.
$ pip install jiayan
$ pip install https://github.com/kpu/kenlm/archive/master.zip
The usage examples below are all taken from examples.py.
Download the models and unzip them: Baidu Netdisk (extraction code: p0sc)
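To confirm that the download worked, the models can be loaded once before running the examples. This is only a sketch; it assumes jiayan.klm, pos_model, cut_model and punc_model sit in the working directory after unzipping, as in the examples below.
from jiayan import load_lm, CRFPOSTagger, CRFSentencizer, CRFPunctuator
# Load the KenLM language model used by the tokenizer, sentencizer and punctuator.
lm = load_lm('jiayan.klm')
# Load the three CRF models; these calls read the model files from disk.
CRFPOSTagger().load('pos_model')
CRFSentencizer(lm).load('cut_model')
CRFPunctuator(lm, 'cut_model').load('punc_model')
print('All models loaded.')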
Lexicon Construction
from jiayan import PMIEntropyLexiconConstructor
constructor = PMIEntropyLexiconConstructor()
lexicon = constructor.construct_lexicon('庄子.txt')
constructor.save(lexicon, '庄子词库.csv')
Results:
Word,Frequency,PMI,R_Entropy,L_Entropy
之,2999,80,7.944909328101839,8.279435615456894
而,2089,80,7.354575005231323,8.615211168836439
不,1941,80,7.244331150611089,6.362131306822925
...
天下,280,195.23602384978196,5.158574399464853,5.24731990592901
圣人,111,150.0620531154239,4.622606551534004,4.6853474419338585
万物,94,377.59805590304126,4.5959107835319895,4.538837960294887
天地,92,186.73504238078462,3.1492586603863617,4.894533538722486
孔子,80,176.2550051738876,4.284638190120882,2.4056390622295662
庄子,76,169.26227942514097,2.328252899085616,2.1920058354921066
仁义,58,882.3468468468468,3.501609497059026,4.96900162987599
老聃,45,2281.2228260869565,2.384853500510039,2.4331958387289765
...
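The saved CSV can be post-processed with ordinary Python to pick out word candidates. The sketch below keeps multi-character entries that pass frequency, PMI and boundary-entropy thresholds; the threshold values are illustrative, not values recommended by this project, and the CSV is assumed to be UTF-8 encoded.
import csv
MIN_FREQ, MIN_PMI, MIN_ENTROPY = 10, 100.0, 2.0   # illustrative thresholds
with open('庄子词库.csv', encoding='utf-8') as f:
    words = [
        row['Word'] for row in csv.DictReader(f)
        if len(row['Word']) > 1
        and int(row['Frequency']) >= MIN_FREQ
        and float(row['PMI']) >= MIN_PMI
        and float(row['R_Entropy']) >= MIN_ENTROPY
        and float(row['L_Entropy']) >= MIN_ENTROPY
    ]
print(words[:10])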
Tokenization
Character-based hidden Markov model tokenizer. Its output matches linguistic intuition well and it is the recommended tokenizer; it requires the language model jiayan.klm.
from jiayan import load_lm
from jiayan import CharHMMTokenizer
text = '是故内圣外王之道,暗而不明,郁而不发,天下之人各为其所欲焉以自为方。'
lm = load_lm('jiayan.klm')
tokenizer = CharHMMTokenizer(lm)
print(list(tokenizer.tokenize(text)))
Results:
['是', '故', '内圣外王', '之', '道', ',', '暗', '而', '不', '明', ',', '郁', '而', '不', '发', ',', '天下', '之', '人', '各', '为', '其', '所', '欲', '焉', '以', '自', '为', '方', '。']
Since there is no public tokenization dataset for ancient Chinese, a direct quantitative evaluation is not possible, but we can get an intuitive sense of this project's advantage by comparing it with other NLP tools:
For comparison, the tokenization result of the LTP (3.4.0) model:
['是', '故内', '圣外王', '之', '道', ',', '暗而不明', ',', '郁', '而', '不', '发', ',', '天下', '之', '人', '各', '为', '其', '所', '欲', '焉以自为方', '。']
And the tokenization result of HanLP:
['是故', '内', '圣', '外', '王之道', ',', '暗', '而', '不明', ',', '郁', '而', '不', '发', ',', '天下', '之', '人', '各为其所欲焉', '以', '自为', '方', '。']
It can be seen that this tool's tokenization of ancient Chinese is significantly better than that of general-purpose Chinese NLP tools.
*Update: thanks to HanLP's author hankcs for the note: since early 2021, HanLP has released the deep-learning-driven 2.x versions. Because they use language models pre-trained on large-scale corpora that cover nearly all ancient and modern Chinese text on the Internet, their performance on ancient Chinese has improved qualitatively, not only in tokenization but also in part-of-speech tagging and semantic analysis. For concrete tokenization comparisons, please refer to this Issue.
Word-level maximum-probability-path tokenizer. It segments mostly at the character level, so the granularity is relatively coarse.
from jiayan import WordNgramTokenizer
text = '是故内圣外王之道,暗而不明,郁而不发,天下之人各为其所欲焉以自为方。'
tokenizer = WordNgramTokenizer()
print(list(tokenizer.tokenize(text)))
Results:
['是', '故', '内', '圣', '外', '王', '之', '道', ',', '暗', '而', '不', '明', ',', '郁', '而', '不', '发', ',', '天下', '之', '人', '各', '为', '其', '所', '欲', '焉', '以', '自', '为', '方', '。']
Part-of-Speech Tagging
from jiayan import CRFPOSTagger
words = ['天下', '大乱', ',', '贤圣', '不', '明', ',', '道德', '不', '一', ',', '天下', '多', '得', '一', '察', '焉', '以', '自', '好', '。']
postagger = CRFPOSTagger()
postagger.load('pos_model')
print(postagger.postag(words))
Results:
['n', 'a', 'wp', 'n', 'd', 'a', 'wp', 'n', 'd', 'm', 'wp', 'n', 'a', 'u', 'm', 'v', 'r', 'p', 'r', 'a', 'wp']
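The tags come back in the same order as the input words, so the two lists can be paired directly for inspection. A small usage sketch continuing from the example above:
# Pair each word with its predicted tag, e.g. 天下/n 大乱/a ...
tags = postagger.postag(words)
for word, tag in zip(words, tags):
    print(f'{word}/{tag}', end=' ')
print()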
Sentence Segmentation
from jiayan import load_lm
from jiayan import CRFSentencizer
text = '天下大乱贤圣不明道德不一天下多得一察焉以自好譬如耳目皆有所明不能相通犹百家众技也皆有所长时有所用虽然不该不遍一之士也判天地之美析万物之理察古人之全寡能备于天地之美称神之容是故内圣外王之道暗而不明郁而不发天下之人各为其所欲焉以自为方悲夫百家往而不反必不合矣后世之学者不幸不见天地之纯古之大体道术将为天下裂'
lm = load_lm('jiayan.klm')
sentencizer = CRFSentencizer(lm)
sentencizer.load('cut_model')
print(sentencizer.sentencize(text))
Results:
['天下大乱', '贤圣不明', '道德不一', '天下多得一察焉以自好', '譬如耳目', '皆有所明', '不能相通', '犹百家众技也', '皆有所长', '时有所用', '虽然', '不该不遍', '一之士也', '判天地之美', '析万物之理', '察古人之全', '寡能备于天地之美', '称神之容', '是故内圣外王之道', '暗而不明', '郁而不发', '天下之人各为其所欲焉以自为方', '悲夫', '百家往而不反', '必不合矣', '后世之学者', '不幸不见天地之纯', '古之大体', '道术将为天下裂']
Punctuation
from jiayan import load_lm
from jiayan import CRFPunctuator
text = '天下大乱贤圣不明道德不一天下多得一察焉以自好譬如耳目皆有所明不能相通犹百家众技也皆有所长时有所用虽然不该不遍一之士也判天地之美析万物之理察古人之全寡能备于天地之美称神之容是故内圣外王之道暗而不明郁而不发天下之人各为其所欲焉以自为方悲夫百家往而不反必不合矣后世之学者不幸不见天地之纯古之大体道术将为天下裂'
lm = load_lm('jiayan.klm')
punctuator = CRFPunctuator(lm, 'cut_model')
punctuator.load('punc_model')
print(punctuator.punctuate(text))
Results:
天下大乱,贤圣不明,道德不一,天下多得一察焉以自好,譬如耳目,皆有所明,不能相通,犹百家众技也,皆有所长,时有所用,虽然,不该不遍,一之士也,判天地之美,析万物之理,察古人之全,寡能备于天地之美,称神之容,是故内圣外王之道,暗而不明,郁而不发,天下之人各为其所欲焉以自为方,悲夫!百家往而不反,必不合矣,后世之学者,不幸不见天地之纯,古之大体,道术将为天下裂。
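Because the punctuator returns an ordinary punctuated string, its output can be fed straight into the tokenizer and POS tagger shown above. A minimal end-to-end sketch, assuming the same model files as in the previous examples (the input here is just the opening of the sample text):
from jiayan import load_lm, CRFPunctuator, CharHMMTokenizer, CRFPOSTagger
lm = load_lm('jiayan.klm')
# 1. Punctuate the raw, unpunctuated text.
punctuator = CRFPunctuator(lm, 'cut_model')
punctuator.load('punc_model')
punctuated = punctuator.punctuate('天下大乱贤圣不明道德不一')
# 2. Tokenize the punctuated text.
tokenizer = CharHMMTokenizer(lm)
words = list(tokenizer.tokenize(punctuated))
# 3. Tag each token with its part of speech.
postagger = CRFPOSTagger()
postagger.load('pos_model')
print(list(zip(words, postagger.postag(words))))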
Jiayan, which means Chinese characters engraved on oracle bones, is a professional Python NLP tool for Classical Chinese.
Prevailing Chinese NLP tools are mainly trained on modern Chinese data, which leads to poor performance on Classical Chinese (see Tokenizing below). The purpose of this project is to assist Classical Chinese information processing.
The current version supports lexicon construction, tokenizing, POS tagging, sentence segmentation and automatic punctuation; more features are in development.
$ pip install jiayan
$ pip install https://github.com/kpu/kenlm/archive/master.zip
The usage codes below are all from examples.py.
Download the models and unzip them: Google Drive
Lexicon Construction
from jiayan import PMIEntropyLexiconConstructor
constructor = PMIEntropyLexiconConstructor()
lexicon = constructor.construct_lexicon('庄子.txt')
constructor.save(lexicon, 'Zhuangzi_Lexicon.csv')
Results:
Word,Frequency,PMI,R_Entropy,L_Entropy
之,2999,80,7.944909328101839,8.279435615456894
而,2089,80,7.354575005231323,8.615211168836439
不,1941,80,7.244331150611089,6.362131306822925
...
天下,280,195.23602384978196,5.158574399464853,5.24731990592901
圣人,111,150.0620531154239,4.622606551534004,4.6853474419338585
万物,94,377.59805590304126,4.5959107835319895,4.538837960294887
天地,92,186.73504238078462,3.1492586603863617,4.894533538722486
孔子,80,176.2550051738876,4.284638190120882,2.4056390622295662
庄子,76,169.26227942514097,2.328252899085616,2.1920058354921066
仁义,58,882.3468468468468,3.501609497059026,4.96900162987599
老聃,45,2281.2228260869565,2.384853500510039,2.4331958387289765
...
Tokenizing
The character-based HMM tokenizer, recommended; it needs the language model jiayan.klm.
from jiayan import load_lm
from jiayan import CharHMMTokenizer
text = '是故内圣外王之道,暗而不明,郁而不发,天下之人各为其所欲焉以自为方。'
lm = load_lm('jiayan.klm')
tokenizer = CharHMMTokenizer(lm)
print(list(tokenizer.tokenize(text)))
Results:
['是', '故', '内圣外王', '之', '道', ',', '暗', '而', '不', '明', ',', '郁', '而', '不', '发', ',', '天下', '之', '人', '各', '为', '其', '所', '欲', '焉', '以', '自', '为', '方', '。']
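Since the tokenizer yields a plain sequence of tokens, its output can be used directly for simple corpus statistics. A small usage sketch that counts token frequencies while skipping punctuation marks (the punctuation set here is illustrative), continuing from the example above:
from collections import Counter
PUNCTS = set(',。、:;?!「」『』')   # marks to ignore; extend as needed
tokens = [t for t in tokenizer.tokenize(text) if t not in PUNCTS]
print(Counter(tokens).most_common(5))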
Since there is no public tokenizing data for Classical Chinese, it's hard to do performance evaluation directly; however, we can compare the results with other popular modern Chinese NLP tools to check the performance:
Compare the tokenizing result of LTP (3.4.0):
['是', '故内', '圣外王', '之', '道', ',', '暗而不明', ',', '郁', '而', '不', '发', ',', '天下', '之', '人', '各', '为', '其', '所', '欲', '焉以自为方', '。']
Also, compare the tokenizing result of HanLP:
['是故', '内', '圣', '外', '王之道', ',', '暗', '而', '不明', ',', '郁', '而', '不', '发', ',', '天下', '之', '人', '各为其所欲焉', '以', '自为', '方', '。']
It's apparent that Jiayan has much better tokenizing performance than general Chinese NLP tools.
Word-based maximum-probability-path tokenizer
from jiayan import WordNgramTokenizer
text = '是故内圣外王之道,暗而不明,郁而不发,天下之人各为其所欲焉以自为方。'
tokenizer = WordNgramTokenizer()
print(list(tokenizer.tokenize(text)))
Results:
['是', '故', '内', '圣', '外', '王', '之', '道', ',', '暗', '而', '不', '明', ',', '郁', '而', '不', '发', ',', '天下', '之', '人', '各', '为', '其', '所', '欲', '焉', '以', '自', '为', '方', '。']
POS Tagging
from jiayan import CRFPOSTagger
words = ['天下', '大乱', ',', '贤圣', '不', '明', ',', '道德', '不', '一', ',', '天下', '多', '得', '一', '察', '焉', '以', '自', '好', '。']
postagger = CRFPOSTagger()
postagger.load('pos_model')
print(postagger.postag(words))
Results:
['n', 'a', 'wp', 'n', 'd', 'a', 'wp', 'n', 'd', 'm', 'wp', 'n', 'a', 'u', 'm', 'v', 'r', 'p', 'r', 'a', 'wp']
Sentence Segmentation
from jiayan import load_lm
from jiayan import CRFSentencizer
text = '天下大乱贤圣不明道德不一天下多得一察焉以自好譬如耳目皆有所明不能相通犹百家众技也皆有所长时有所用虽然不该不遍一之士也判天地之美析万物之理察古人之全寡能备于天地之美称神之容是故内圣外王之道暗而不明郁而不发天下之人各为其所欲焉以自为方悲夫百家往而不反必不合矣后世之学者不幸不见天地之纯古之大体道术将为天下裂'
lm = load_lm('jiayan.klm')
sentencizer = CRFSentencizer(lm)
sentencizer.load('cut_model')
print(sentencizer.sentencize(text))
Results:
['天下大乱', '贤圣不明', '道德不一', '天下多得一察焉以自好', '譬如耳目', '皆有所明', '不能相通', '犹百家众技也', '皆有所长', '时有所用', '虽然', '不该不遍', '一之士也', '判天地之美', '析万物之理', '察古人之全', '寡能备于天地之美', '称神之容', '是故内圣外王之道', '暗而不明', '郁而不发', '天下之人各为其所欲焉以自为方', '悲夫', '百家往而不反', '必不合矣', '后世之学者', '不幸不见天地之纯', '古之大体', '道术将为天下裂']
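The sentencizer returns a list of clause strings, so each clause can be passed on to the tokenizer from the Tokenizing section. A minimal sketch, reusing the lm, sentencizer and text variables from the example above:
from jiayan import CharHMMTokenizer
tokenizer = CharHMMTokenizer(lm)
# Tokenize each automatically segmented clause separately.
for clause in sentencizer.sentencize(text):
    print(list(tokenizer.tokenize(clause)))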
Punctuation
from jiayan import load_lm
from jiayan import CRFPunctuator
text = '天下大乱贤圣不明道德不一天下多得一察焉以自好譬如耳目皆有所明不能相通犹百家众技也皆有所长时有所用虽然不该不遍一之士也判天地之美析万物之理察古人之全寡能备于天地之美称神之容是故内圣外王之道暗而不明郁而不发天下之人各为其所欲焉以自为方悲夫百家往而不反必不合矣后世之学者不幸不见天地之纯古之大体道术将为天下裂'
lm = load_lm('jiayan.klm')
punctuator = CRFPunctuator(lm, 'cut_model')
punctuator.load('punc_model')
print(punctuator.punctuate(text))
Results:
天下大乱,贤圣不明,道德不一,天下多得一察焉以自好,譬如耳目,皆有所明,不能相通,犹百家众技也,皆有所长,时有所用,虽然,不该不遍,一之士也,判天地之美,析万物之理,察古人之全,寡能备于天地之美,称神之容,是故内圣外王之道,暗而不明,郁而不发,天下之人各为其所欲焉以自为方,悲夫!百家往而不反,必不合矣,后世之学者,不幸不见天地之纯,古之大体,道术将为天下裂。