mecab text cleaner
v0.1.1
This is a simple Python package for getting Japanese readings (yomigana) and pitch accents using MeCab. Because it does not account for accent changes in compound words, please also consider pyopenjtalk (readings without accents) or ESPnet's pyopenjtalk_g2p_prosody (readings with accents).
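For comparison, here is a minimal sketch of the pyopenjtalk route mentioned above (this assumes `pip install pyopenjtalk`; the sample sentence is illustrative and the exact output may vary by version):

```python
import pyopenjtalk

# g2p() returns space-separated phonemes without pitch-accent information;
# kana=True returns a plain katakana reading instead.
print(pyopenjtalk.g2p("いい天気ですね"))
print(pyopenjtalk.g2p("いい天気ですね", kana=True))
```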
Install this via pip or pipx (or your favourite package manager):
```
pipx install mecab-text-cleaner[unidecode,unidic]
pip install mecab-text-cleaner[unidecode,unidic]
```

Usage:

```
> mtc いい天気ですね。
イ]ー テ]ンキ デス ネ。
> mtc いい天気ですね。 --ascii
i] te]nki desu ne.
> mtc いい天気ですね --no-add-atype --no-add-blank-between-words
イーテンキデスネ
> mtc いい天気ですね --no-add-atype --no-add-blank-between-words -r kana
イイテンキデスネ
```

In Python:

```python
from mecab_text_cleaner import to_reading, to_ascii_clean
assert to_reading(" 空、雲。\n雨!(") == "ソ]ラ、 ク]モ。\nア]メ!("
assert to_ascii_clean(" 한空、雲。\n雨!(") == "han so]ra, ku]mo. \na]me!("
```
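Both functions take a plain string and return a string, so they drop into ordinary Python text processing. A small illustrative sketch (the sample sentences and loop are not from the package's documentation):

```python
from mecab_text_cleaner import to_reading, to_ascii_clean

# Illustrative sample sentences.
lines = ["今日は晴れです。", "明日は雨が降るかもしれません。"]

for line in lines:
    # Accented katakana reading (pitch accents marked with "]").
    print(to_reading(line))
    # ASCII-cleaned, romanized version of the same sentence.
    print(to_ascii_clean(line))
```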
Thanks goes to these wonderful people (emoji key):

This project follows the all-contributors specification. Contributions of any kind welcome!