BERT-keras
1.0.0
Status: Archive (code is provided as-is; no updates are expected)
Keras implementation of Google BERT (Bidirectional Encoder Representations from Transformers) and OpenAI's Transformer LM, capable of loading pretrained models with a finetuning API.
Update: TPU support for both inference and training, as in this colab notebook, thanks to @highcwu.
# this is pseudocode; you can read an actual working example in tutorial.ipynb or the colab notebook
text_encoder = MyTextEncoder(**my_text_encoder_params)  # you create a text encoder (SentencePiece and OpenAI's BPE are included)
lm_generator = lm_generator(text_encoder, **lm_generator_params)  # this is essentially your data reader (single-sentence and double-sentence readers with masking and is_next labels are included)
task_meta_datas = [lm_task, classification_task, pos_task]  # these are your tasks (the lm_generator must generate the labels for these tasks too)
encoder_model = create_transformer(**encoder_params)  # or you could simply load_openai(), or write your own encoder (a BiLSTM, for example)
trained_model = train_model(encoder_model, task_meta_datas, lm_generator, **training_params)  # it does both pretraining and finetuning
trained_model.save_weights('my_awesome_model')  # save it
model = load_model('my_awesome_model', encoder_model)  # load it later and use it!
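For intuition, the token masking that a BERT-style data reader (the role played by the lm_generator above) performs can be sketched in plain Python. This is an illustrative sketch, not the repository's actual API: `mask_tokens`, the toy vocabulary, and the constants follow BERT's published 15% / 80-10-10 masking scheme.

```python
import random

MASK = "[MASK]"
VOCAB_SIZE = 8  # toy vocabulary of token ids 0..7

def mask_tokens(tokens, mask_prob=0.15, seed=0):
    """BERT-style masking: pick ~15% of positions as prediction targets;
    of those, 80% become [MASK], 10% become a random token, and 10%
    stay unchanged. Returns (masked_tokens, labels), where labels holds
    the original token at each target position and -1 elsewhere."""
    rng = random.Random(seed)
    masked, labels = list(tokens), [-1] * len(tokens)
    for i, tok in enumerate(tokens):
        if rng.random() >= mask_prob:
            continue  # not selected as a prediction target
        labels[i] = tok  # the model must predict this original token
        r = rng.random()
        if r < 0.8:
            masked[i] = MASK            # 80%: replace with [MASK]
        elif r < 0.9:
            masked[i] = rng.randrange(VOCAB_SIZE)  # 10%: random token
        # else: 10% keep the original token unchanged
    return masked, labels

tokens = [3, 1, 4, 1, 5, 2, 6, 5, 3, 5]
masked, labels = mask_tokens(tokens)
print(masked)
print(labels)
```

In the real reader the same idea runs over encoded sentence pairs, and the double-sentence variant additionally emits an is_next label for the next-sentence prediction task.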