deepsegment v2.
Free-to-use APIs are available at https://fastdeploy.notai.tech/free_apis, and deepsegment can also be deployed and used through https://github.com/notai-tech/fastdeploy.
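For illustration only, a minimal Python sketch of calling such a hosted API over HTTP. The endpoint URL and JSON payload shape below are assumptions, not documented in this README; check the free_apis page for the actual request format.

```python
# Hypothetical sketch only: the endpoint URL and payload shape are assumptions,
# not taken from the deepsegment/fastdeploy documentation.
import requests

# Placeholder; use the real endpoint listed on the free_apis page.
ENDPOINT = "https://fastdeploy.notai.tech/<deepsegment_endpoint>"

response = requests.post(
    ENDPOINT,
    json={"data": ["I am Batman i live in gotham"]},  # assumed payload format
    timeout=30,
)
print(response.json())  # segmented sentences for each input string (assumed)
```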
Note: For the original implementation, please use the "master" branch of this repository.
Code documentation is available at http://bpraneeth.com/docs.
Installation:

```bash
# Tested with (keras==2.3.1; tensorflow==2.2.0) and (keras==2.2.4; tensorflow==1.14.0)
pip install --upgrade deepsegment
```

Supported languages:

en - English (trained on data from various sources)
fr - French (Tatoeba data only)
it - Italian (Tatoeba data only)
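Any of the codes above can be passed to the constructor. As a minimal sketch, loading the French model instead of the default (the example sentence is just illustrative; the full English walkthrough follows below):

```python
from deepsegment import DeepSegment

# Load the French model instead of the default English one.
fr_segmenter = DeepSegment('fr')
fr_segmenter.segment('je suis batman je vis a gotham')
```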
```python
from deepsegment import DeepSegment

# The default language is 'en'
segmenter = DeepSegment('en')
segmenter.segment('I am Batman i live in gotham')
# ['I am Batman', 'i live in gotham']
```

To use the model through TensorFlow Serving, pull and run the pre-built Docker image, then create the segmenter with `tf_serving=True`:

```bash
docker pull bedapudi6788/deepsegment_en:v2
docker run -d -p 8500:8500 bedapudi6788/deepsegment_en:v2
```

```python
from deepsegment import DeepSegment

# The default language is 'en'
segmenter = DeepSegment ( 'en' , tf_serving = True )
segmenter . segment ( 'I am Batman i live in gotham' )
# ['I am Batman', 'i live in gotham']由於一種尺寸永遠不會適合所有人,因此鼓勵使用您自己的數據將深段的默認模型使用。
```python
from deepsegment import finetune, generate_data

# Generate training and validation data from lists of example sentences.
x, y = generate_data(['my name', 'is batman', 'who are', 'you'], n_examples=10000)
vx, vy = generate_data(['my name', 'is batman'])

# NOTE: name, epochs, batch_size, lr are optional arguments.
finetune('en', x, y, vx, vy, name='finetuned_model_name', epochs=number_of_epochs, batch_size=batch_size, lr=learning_rate)
```

Using a finetuned checkpoint:

```python
from deepsegment import DeepSegment

segmenter = DeepSegment ( 'en' , checkpoint_name = 'finetuned_model_name' )培訓自定義數據的深段段:https://colab.research.google.com/drive/1cjybdbdhx1umiyvn7ndw2clqpnnnea_m
See also: https://github.com/bminixhofer/nnsplit (with bindings for Python, Rust and JavaScript).