PyTorch version of NEZHA, adapted to the transformers library.
Paper: NEZHA: Neural Contextualized Representation for Chinese Language Understanding
To run the example scripts, you need to install the following modules:
Official TensorFlow weights download: huawei-noah
Weights converted to the PyTorch format:
- nezha-cn-base: Baidu Netdisk link, extraction code hckq
- nezha-large-zh: Baidu Netdisk link, extraction code qks2
- nezha-base-wwm: Baidu Netdisk link, extraction code ysg3
- nezha-large-wwm: Baidu Netdisk link, extraction code 8dig
Note: if you load the PyTorch model weights downloaded from the Baidu Netdisk links above, make sure your torch version is >= 1.6.0.
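The version requirement comes from PyTorch 1.6 switching to a zip-based checkpoint format that older versions cannot read. A minimal sketch of checking the version and loading a state dict; the file name here is a stand-in for the real checkpoint (e.g. a downloaded `pytorch_model.bin`):

```python
import torch

# Checkpoints saved with torch >= 1.6 use a zip-based serialization format
# that older torch versions cannot read, hence the version requirement.
major, minor = (int(v) for v in torch.__version__.split(".")[:2])
assert (major, minor) >= (1, 6), "the converted NEZHA weights require torch >= 1.6.0"

# Round-trip a tiny state dict to illustrate the save/load calls involved;
# "dummy_weights.bin" stands in for the actual converted weight file.
dummy = {"embeddings.weight": torch.zeros(2, 3)}
torch.save(dummy, "dummy_weights.bin")
state_dict = torch.load("dummy_weights.bin", map_location="cpu")
```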
Run the command:

```shell
sh scripts/run_task_text_classification_chnsenti.sh
```

Long inputs can be supported by setting the `config.max_position_embeddings` parameter (default 512), for example:

```python
config.max_position_embeddings = args.train_max_seq_length
```

Results on the chnsenti task:

| NEZHA(base-wwm) | chnsenti |
|---|---|
| tensorflow | 94.75 |
| pytorch | 94.92 |
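The `max_position_embeddings` assignment above happens at configuration time, before the model is built. A self-contained sketch of the idea, using a stand-in config class and a hypothetical `train_max_seq_length` value (the repo's actual config object is assumed to expose the same attribute):

```python
from dataclasses import dataclass

# Stand-in for the repo's model config; only the attribute used here is modeled.
@dataclass
class NeZhaConfig:
    max_position_embeddings: int = 512  # default supports sequences up to 512 tokens

config = NeZhaConfig()
train_max_seq_length = 1024  # hypothetical value of args.train_max_seq_length
# Raise the position limit before building the model so longer inputs fit.
config.max_position_embeddings = train_max_seq_length
```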