bert4torch
v0.5.4

Documentation | Torch4keras | Examples | build_MiniLLM_from_scratch | bert4vector
Installation

Install the stable release:

```shell
pip install bert4torch
```

Install the latest version:

```shell
pip install git+https://github.com/Tongjilibo/bert4torch
```

- Examples: `git clone https://github.com/Tongjilibo/bert4torch`, then edit the pretrained-model and data paths in the example scripts to run them.
- Development environment: originally developed on `torch==1.10`, now developed on torch 2.0; if other versions turn out to be incompatible, feedback is welcome.

Features

- LLM models: load open-source LLM weights such as chatglm, llama, baichuan, ziya, and bloom for inference and fine-tuning, and deploy an LLM with a single command line (see the chat-pipeline sketch after this list).
- Core capability: load pretrained weights for bert, roberta, albert, xlnet, nezha, bart, RoFormer, RoFormer_V2, ELECTRA, GPT, GPT2, T5, GAU-alpha, ERNIE, etc., continue fine-tuning them, and flexibly define your own model on top of bert.
- Rich examples: solutions covering llm, pretrain, sentence_classfication, sentence_embedding, sequence_labeling, relation_extraction, seq2seq, serving, and more.
- Experimental validation: validated on public datasets; the examples list the datasets and metrics used.
- Easy-to-use tricks: common tricks are integrated and plug-and-play.
- Other features: models loaded from the transformers library can be used alongside bert4torch; the calling conventions are concise and efficient; training shows a dynamic progress bar; parameter counts print via torchinfo; a default Logger and Tensorboard record the training process; the fit process can be customized for advanced needs.
- Training process: (screenshot of the console training progress bar)
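For LLM inference, the repository ships a chat pipeline. The sketch below is modeled on the repo's llm examples; the `Chat(...)` signature and the checkpoint name are assumptions to verify against the repository's examples:

```python
# Minimal sketch of LLM chat inference; the Chat(...) interface and checkpoint
# name follow the repository's examples and should be treated as assumptions.
from bert4torch.pipelines import Chat

demo = Chat('Qwen/Qwen2-0.5B-Instruct',  # downloads weights + bert4torch_config.json
            mode='cli')                  # 'cli' / 'gradio' / 'openai', mirroring the server modes below
demo.run()
```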
Comparison with transformers:

| Feature | bert4torch | transformers | Notes |
|---|---|---|---|
| Training progress bar | ✅ | ✅ | the progress bar prints loss and user-defined metrics |
| Distributed training dp/ddp | ✅ | ✅ | uses torch's built-in dp/ddp |
| Assorted callbacks | ✅ | ✅ | logging/tensorboard/earlystop/wandb, etc. |
| LLM inference with stream/batch output | ✅ | ✅ | shared across models, no per-model scripts to maintain |
| LLM fine-tuning | ✅ | ✅ | lora depends on the peft library, p-tuning v2 is built in |
| Rich tricks | ✅ | ❌ | adversarial training and other tricks are plug-and-play |
| Concise, readable code with room for customization | ✅ | ❌ | high code reuse, keras-style training (see the sketch below the table) |
| Repo maintenance capacity/influence/usage/compatibility | ❌ | ✅ | this repo is currently maintained by a single person |
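The keras-style loop referenced above looks like the following minimal sketch, modeled on the repo's sentence-classification examples; the checkpoint path, hidden size, and the dummy data are hypothetical placeholders:

```python
# Minimal training sketch in the keras style (compile/fit come from torch4keras).
# './model' and the dummy tensors are placeholders, not a real dataset.
import torch
import torch.nn as nn
import torch.optim as optim
from torch.utils.data import DataLoader, TensorDataset
from bert4torch.models import build_transformer_model, BaseModel

class Model(BaseModel):
    def __init__(self):
        super().__init__()
        self.bert = build_transformer_model(checkpoint_path='./model', with_pool=True)
        self.fc = nn.Linear(768, 2)  # assumes a base-size (768-dim) checkpoint

    def forward(self, token_ids, segment_ids):
        hidden_states, pooled_output = self.bert([token_ids, segment_ids])
        return self.fc(pooled_output)  # classify from the pooled [CLS] vector

def collate_fn(batch):
    token_ids, segment_ids, labels = map(torch.stack, zip(*batch))
    return [token_ids, segment_ids], labels  # (inputs, labels), as fit() expects

# dummy tensors so the sketch is self-contained
dataset = TensorDataset(torch.randint(0, 1000, (32, 64)),       # token_ids
                        torch.zeros(32, 64, dtype=torch.long),  # segment_ids
                        torch.randint(0, 2, (32,)))             # labels
train_loader = DataLoader(dataset, batch_size=8, collate_fn=collate_fn)

model = Model()
model.compile(loss=nn.CrossEntropyLoss(),
              optimizer=optim.Adam(model.parameters(), lr=2e-5))
model.fit(train_loader, epochs=1)  # progress bar, callbacks, etc. hook in here
```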
One-command LLM deployment
```shell
# download all files from the hub
bert4torch-llm-server --checkpoint_path Qwen2-0.5B-Instruct

# load a local model; bert4torch_config.json is downloaded from the hub
bert4torch-llm-server --checkpoint_path /data/pretrain_ckpt/Qwen/Qwen2-0.5B-Instruct --config_path Qwen/Qwen2-0.5B-Instruct

# load a local model, with bert4torch_config.json already downloaded into the same directory
bert4torch-llm-server --checkpoint_path /data/pretrain_ckpt/Qwen/Qwen2-0.5B-Instruct
```

```shell
# command line
bert4torch-llm-server --checkpoint_path /data/pretrain_ckpt/Qwen/Qwen2-0.5B-Instruct --mode cli

# gradio web page
bert4torch-llm-server --checkpoint_path /data/pretrain_ckpt/Qwen/Qwen2-0.5B-Instruct --mode gradio

# openai_api
bert4torch-llm-server --checkpoint_path /data/pretrain_ckpt/Qwen/Qwen2-0.5B-Instruct --mode openai
```
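With `--mode openai` the server exposes an OpenAI-compatible API. A minimal client sketch follows; the base_url, port, api_key, and model name are assumptions, so replace them with whatever the server prints at startup:

```python
# Minimal sketch: query the OpenAI-compatible endpoint started with `--mode openai`.
# base_url, api_key, and the model name are assumptions; use the address the
# server reports when it starts.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")
response = client.chat.completions.create(
    model="Qwen2-0.5B-Instruct",
    messages=[{"role": "user", "content": "Hello, please introduce yourself."}],
)
print(response.choices[0].message.content)
```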
Version history

| Date | bert4torch | torch4keras | Release notes |
|---|---|---|---|
| 20240928 | 0.5.4 | 0.2.7 | [New] added the deepseek series, MiniCPM, MiniCPMV, llama3.2, Qwen2.5; support device_map=auto; [Fix] fixed bugs in batch_generate and n>1 |
| 20240814 | 0.5.3 | 0.2.6 | [New] added llama3.1/Yi1.5; automatically chooses to download from hf-mirror; support for the bert4torch-llm-server command line |
| 20240801 | 0.5.2 | 0.2.5 | [New] function call support for the chatglm/qwen series, added the internlm2 series; [Minor] simplified the chat demo call in pipelines, generate's stop-token elements may be lists, unified the rope_scaling parameter name, added RoPE-derived classes; [Fix] fixed a flash_attn2 inference bug and a bart tie_word_embedding bug |

More versions
More history
Pretrained models can be loaded in several ways:

```python
from bert4torch.models import build_transformer_model

# 1. config_path only: initialize the model structure from scratch, without loading pretrained weights
model = build_transformer_model('./model/bert4torch_config.json')

# 2. checkpoint_path only:
## 2.1 directory path: automatically finds the *.bin/*.safetensors weight files in the directory;
##     bert4torch_config.json must be downloaded and placed in that directory
model = build_transformer_model(checkpoint_path='./model')

## 2.2 file path/list: the path(s) point directly at the weight file(s);
##     bert4torch_config.json is looked up in the same directory
model = build_transformer_model(checkpoint_path='./pytorch_model.bin')

## 2.3 model_name: name of pretrained weights on HF; the weights and
##     bert4torch_config.json are downloaded automatically
model = build_transformer_model(checkpoint_path='bert-base-chinese')

# 3. both config_path and checkpoint_path (any combination of local path and model_name):
#    local paths load locally; a pretrained_model_name downloads from the hub
config_path = './model/bert4torch_config.json'  # or 'bert-base-chinese'
checkpoint_path = './model/pytorch_model.bin'   # or 'bert-base-chinese'
model = build_transformer_model(config_path, checkpoint_path)
```
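Once built, the model behaves like an ordinary torch module. A minimal forward-pass sketch follows, assuming a bert-style checkpoint whose vocab.txt sits in the illustrative ./model directory; the Tokenizer interface follows the repo's quickstart examples:

```python
# Minimal sketch of a forward pass; './model' is an illustrative checkpoint directory.
import torch
from bert4torch.models import build_transformer_model
from bert4torch.tokenizers import Tokenizer

tokenizer = Tokenizer('./model/vocab.txt', do_lower_case=True)
model = build_transformer_model(checkpoint_path='./model', with_pool=True)

token_ids, segment_ids = tokenizer.encode('语言模型')
model.eval()
with torch.no_grad():
    # bert-style models take [token_ids, segment_ids]; with_pool=True also
    # returns the pooled [CLS] output alongside the last hidden states
    hidden_states, pooled_output = model([torch.tensor([token_ids]),
                                          torch.tensor([segment_ids])])
```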
Pretrained weight links and bert4torch_config.json

| Category | Model | Source | Weight link / checkpoint_path | config_path |
|---|---|---|---|---|
| bert | bert-base-chinese | google-bert | bert-base-chinese | bert-base-chinese |
| | chinese_L-12_H-768_A-12 | Google | TF weights, Tongjilibo/bert-chinese_L-12_H-768_A-12 | |
| | chinese-bert-wwm-ext | HFL | hfl/chinese-bert-wwm-ext | hfl/chinese-bert-wwm-ext |
| | bert-base-multilingual-cased | google-bert | bert-base-multilingual-cased | bert-base-multilingual-cased |
| | MacBERT | HFL | hfl/chinese-macbert-base, hfl/chinese-macbert-large | hfl/chinese-macbert-base, hfl/chinese-macbert-large |
| | WoBERT | Zhuiyi Technology | junnyu/wobert_chinese_base, junnyu/wobert_chinese_plus_base | junnyu/wobert_chinese_base, junnyu/wobert_chinese_plus_base |
| roberta | chinese-roberta-wwm-ext | HFL | hfl/chinese-roberta-wwm-ext, hfl/chinese-roberta-wwm-ext-large (the large MLM weights are randomly initialized) | hfl/chinese-roberta-wwm-ext, hfl/chinese-roberta-wwm-ext-large |
| | roberta-small/tiny | Zhuiyi Technology | Tongjilibo/chinese_roberta_L-4_H-312_A-12, Tongjilibo/chinese_roberta_L-6_H-384_A-12 | |
| | roberta-base | FacebookAI | roberta-base | roberta-base |
| | guwenbert | ethanyt | ethanyt/guwenbert-base | ethanyt/guwenbert-base |
| albert | albert_zh, albert_pytorch | brightmart | voidful/albert_chinese_tiny, voidful/albert_chinese_small, voidful/albert_chinese_base, voidful/albert_chinese_large, voidful/albert_chinese_xlarge, voidful/albert_chinese_xxlarge | voidful/albert_chinese_tiny, voidful/albert_chinese_small, voidful/albert_chinese_base, voidful/albert_chinese_large, voidful/albert_chinese_xlarge, voidful/albert_chinese_xxlarge |
| nezha | NEZHA, NeZha_Chinese_PyTorch | huawei_noah | sijunhe/nezha-cn-base, sijunhe/nezha-cn-large, sijunhe/nezha-base-wwm, sijunhe/nezha-large-wwm | sijunhe/nezha-cn-base, sijunhe/nezha-cn-large, sijunhe/nezha-base-wwm, sijunhe/nezha-large-wwm |
| | nezha_gpt_dialog | bojone | Tongjilibo/nezha_gpt_dialog | |
| xlnet | Chinese-XLNet | HFL | hfl/chinese-xlnet-base | hfl/chinese-xlnet-base |
| | transformer_xl | huggingface | transfo-xl/transfo-xl-wt103 | transfo-xl/transfo-xl-wt103 |
| deberta | Erlangshen-DeBERTa-v2 | IDEA | IDEA-CCNL/Erlangshen-DeBERTa-v2-97M-Chinese, IDEA-CCNL/Erlangshen-DeBERTa-v2-320M-Chinese, IDEA-CCNL/Erlangshen-DeBERTa-v2-710M-Chinese | IDEA-CCNL/Erlangshen-DeBERTa-v2-97M-Chinese, IDEA-CCNL/Erlangshen-DeBERTa-v2-320M-Chinese, IDEA-CCNL/Erlangshen-DeBERTa-v2-710M-Chinese |
| electra | Chinese-ELECTRA | HFL | hfl/chinese-electra-base-discriminator | hfl/chinese-electra-base-discriminator |
| ernie | ernie | Baidu Wenxin | nghuyong/ernie-1.0-base-zh, nghuyong/ernie-3.0-base-zh | nghuyong/ernie-1.0-base-zh, nghuyong/ernie-3.0-base-zh |
| roformer | roformer | Zhuiyi Technology | junnyu/roformer_chinese_base | junnyu/roformer_chinese_base |
| | roformer_v2 | Zhuiyi Technology | junnyu/roformer_v2_chinese_char_base | junnyu/roformer_v2_chinese_char_base |
| simbert | simbert | Zhuiyi Technology | Tongjilibo/simbert-chinese-base, Tongjilibo/simbert-chinese-small, Tongjilibo/simbert-chinese-tiny | |
| | simbert_v2/roformer-sim | Zhuiyi Technology | junnyu/roformer_chinese_sim_char_base, junnyu/roformer_chinese_sim_char_ft_base, junnyu/roformer_chinese_sim_char_small, junnyu/roformer_chinese_sim_char_ft_small | junnyu/roformer_chinese_sim_char_base, junnyu/roformer_chinese_sim_char_ft_base, junnyu/roformer_chinese_sim_char_small, junnyu/roformer_chinese_sim_char_ft_small |
| gau | GAU-alpha | Zhuiyi Technology | Tongjilibo/chinese_GAU-alpha-char_L-24_H-768 | |
| uie | uie, uie_pytorch | Baidu | Tongjilibo/uie-base | |
| gpt | CDial-GPT | thu-coai | thu-coai/CDial-GPT_LCCC-base, thu-coai/CDial-GPT_LCCC-large | thu-coai/CDial-GPT_LCCC-base, thu-coai/CDial-GPT_LCCC-large |
| | cpm_lm (2.6B) | Tsinghua | TsinghuaAI/CPM-Generate | TsinghuaAI/CPM-Generate |
| | nezha_gen | huawei_noah | Tongjilibo/chinese_nezha_gpt_L-12_H-768_A-12 | |
| | gpt2-chinese-cluecorpussmall | UER | uer/gpt2-chinese-cluecorpussmall | uer/gpt2-chinese-cluecorpussmall |
| | gpt2-ml | imcaspar | torch, BaiduYun (code: 84dh) | gpt2-ml_15g_corpus, gpt2-ml_30g_corpus |
| bart | bart_base_chinese | Fudan fnlp | fnlp/bart-base-chinese, v1.0 | fnlp/bart-base-chinese, fnlp/bart-base-chinese-v1.0 |
| t5 | t5 | UER | uer/t5-small-chinese-cluecorpussmall, uer/t5-base-chinese-cluecorpussmall | uer/t5-base-chinese-cluecorpussmall, uer/t5-small-chinese-cluecorpussmall |
| | mt5 | Google | google/mt5-base | google/mt5-base |
| | t5_pegasus | Zhuiyi Technology | Tongjilibo/chinese_t5_pegasus_small, Tongjilibo/chinese_t5_pegasus_base | |
| | chatyuan | clue-ai | ClueAI/ChatYuan-large-v1, ClueAI/ChatYuan-large-v2 | ClueAI/ChatYuan-large-v1, ClueAI/ChatYuan-large-v2 |
| | PromptCLUE | clue-ai | ClueAI/PromptCLUE-base | ClueAI/PromptCLUE-base |
| chatglm | chatglm-6b | THUDM | THUDM/chatglm-6b, THUDM/chatglm-6b-int8, THUDM/chatglm-6b-int4, v0.1.0 | THUDM/chatglm-6b, THUDM/chatglm-6b-int8, THUDM/chatglm-6b-int4, THUDM/chatglm-6b-v0.1.0 |
| | chatglm2-6b | THUDM | THUDM/chatglm2-6b, THUDM/chatglm2-6b-int4, THUDM/chatglm2-6b-32k | THUDM/chatglm2-6b, THUDM/chatglm2-6b-int4, THUDM/chatglm2-6b-32k |
| | chatglm3-6b | THUDM | THUDM/chatglm3-6b, THUDM/chatglm3-6b-32k | THUDM/chatglm3-6b, THUDM/chatglm3-6b-32k |
| | glm4-9b | THUDM | THUDM/glm-4-9b, THUDM/glm-4-9b-chat, THUDM/glm-4-9b-chat-1m | THUDM/glm-4-9b, THUDM/glm-4-9b-chat, THUDM/glm-4-9b-chat-1m |
| llama | llama | meta | meta-llama/llama-7b, meta-llama/llama-13b | |
| | llama-2 | meta | meta-llama/Llama-2-7b-hf, meta-llama/Llama-2-7b-chat-hf, meta-llama/Llama-2-13b-hf, meta-llama/Llama-2-13b-chat-hf | meta-llama/Llama-2-7b-hf, meta-llama/Llama-2-7b-chat-hf, meta-llama/Llama-2-13b-hf, meta-llama/Llama-2-13b-chat-hf |
| | llama-3 | meta | meta-llama/Meta-Llama-3-8B, meta-llama/Meta-Llama-3-8B-Instruct | meta-llama/Meta-Llama-3-8B, meta-llama/Meta-Llama-3-8B-Instruct |
| | llama-3.1 | meta | meta-llama/Meta-Llama-3.1-8B, meta-llama/Meta-Llama-3.1-8B-Instruct | meta-llama/Meta-Llama-3.1-8B, meta-llama/Meta-Llama-3.1-8B-Instruct |
| | llama-3.2 | meta | meta-llama/Llama-3.2-1B, meta-llama/Llama-3.2-1B-Instruct, meta-llama/Llama-3.2-3B, meta-llama/Llama-3.2-3B-Instruct | meta-llama/Llama-3.2-1B, meta-llama/Llama-3.2-1B-Instruct, meta-llama/Llama-3.2-3B, meta-llama/Llama-3.2-3B-Instruct |
| | Chinese-LLaMA-Alpaca | HFL | hfl/chinese_alpaca_plus_7b, hfl/chinese_llama_plus_7b | |
| | Chinese-LLaMA-Alpaca-2 | HFL | to be added | |
| | Chinese-LLaMA-Alpaca-3 | HFL | to be added | |
| | Belle_llama | LianjiaTech | BelleGroup/BELLE-LLaMA-7B-2M-enc | merge instructions, BelleGroup/BELLE-LLaMA-7B-2M-enc |
| | Ziya | IDEA-CCNL | IDEA-CCNL/Ziya-LLaMA-13B-v1, IDEA-CCNL/Ziya-LLaMA-13B-v1.1, IDEA-CCNL/Ziya-LLaMA-13B-Pretrain-v1 | IDEA-CCNL/Ziya-LLaMA-13B-v1, IDEA-CCNL/Ziya-LLaMA-13B-v1.1 |
| | vicuna | lmsys | lmsys/vicuna-7b-v1.5 | lmsys/vicuna-7b-v1.5 |
| Baichuan | Baichuan | baichuan-inc | baichuan-inc/Baichuan-7B, baichuan-inc/Baichuan-13B-Base, baichuan-inc/Baichuan-13B-Chat | baichuan-inc/Baichuan-7B, baichuan-inc/Baichuan-13B-Base, baichuan-inc/Baichuan-13B-Chat |
| | Baichuan2 | baichuan-inc | baichuan-inc/Baichuan2-7B-Base, baichuan-inc/Baichuan2-7B-Chat, baichuan-inc/Baichuan2-13B-Base, baichuan-inc/Baichuan2-13B-Chat | baichuan-inc/Baichuan2-7B-Base, baichuan-inc/Baichuan2-7B-Chat, baichuan-inc/Baichuan2-13B-Base, baichuan-inc/Baichuan2-13B-Chat |
| Yi | Yi | 01-ai | 01-ai/Yi-6B, 01-ai/Yi-6B-200K, 01-ai/Yi-9B, 01-ai/Yi-9B-200K | 01-ai/Yi-6B, 01-ai/Yi-6B-200K, 01-ai/Yi-9B, 01-ai/Yi-9B-200K |
| | Yi-1.5 | 01-ai | 01-ai/Yi-1.5-6B, 01-ai/Yi-1.5-6B-Chat, 01-ai/Yi-1.5-9B, 01-ai/Yi-1.5-9B-32K, 01-ai/Yi-1.5-9B-Chat, 01-ai/Yi-1.5-9B-Chat-16K | 01-ai/Yi-1.5-6B, 01-ai/Yi-1.5-6B-Chat, 01-ai/Yi-1.5-9B, 01-ai/Yi-1.5-9B-32K, 01-ai/Yi-1.5-9B-Chat, 01-ai/Yi-1.5-9B-Chat-16K |
| bloom | bloom | bigscience | bigscience/bloom-560m, bigscience/bloomz-560m | bigscience/bloom-560m, bigscience/bloomz-560m |
| Qwen | Qwen | Alibaba Cloud | Qwen/Qwen-1_8B, Qwen/Qwen-1_8B-Chat, Qwen/Qwen-7B, Qwen/Qwen-7B-Chat, Qwen/Qwen-14B, Qwen/Qwen-14B-Chat | Qwen/Qwen-1_8B, Qwen/Qwen-1_8B-Chat, Qwen/Qwen-7B, Qwen/Qwen-7B-Chat, Qwen/Qwen-14B, Qwen/Qwen-14B-Chat |
| | Qwen1.5 | Alibaba Cloud | Qwen/Qwen1.5-0.5B, Qwen/Qwen1.5-0.5B-Chat, Qwen/Qwen1.5-1.8B, Qwen/Qwen1.5-1.8B-Chat, Qwen/Qwen1.5-7B, Qwen/Qwen1.5-7B-Chat, Qwen/Qwen1.5-14B, Qwen/Qwen1.5-14B-Chat | Qwen/Qwen1.5-0.5B, Qwen/Qwen1.5-0.5B-Chat, Qwen/Qwen1.5-1.8B, Qwen/Qwen1.5-1.8B-Chat, Qwen/Qwen1.5-7B, Qwen/Qwen1.5-7B-Chat, Qwen/Qwen1.5-14B, Qwen/Qwen1.5-14B-Chat |
| | Qwen2 | Alibaba Cloud | Qwen/Qwen2-0.5B, Qwen/Qwen2-0.5B-Instruct, Qwen/Qwen2-1.5B, Qwen/Qwen2-1.5B-Instruct, Qwen/Qwen2-7B, Qwen/Qwen2-7B-Instruct | Qwen/Qwen2-0.5B, Qwen/Qwen2-0.5B-Instruct, Qwen/Qwen2-1.5B, Qwen/Qwen2-1.5B-Instruct, Qwen/Qwen2-7B, Qwen/Qwen2-7B-Instruct |
| | Qwen2-VL | Alibaba Cloud | Qwen/Qwen2-VL-2B-Instruct, Qwen/Qwen2-VL-7B-Instruct | Qwen/Qwen2-VL-2B-Instruct, Qwen/Qwen2-VL-7B-Instruct |
| | Qwen2.5 | Alibaba Cloud | Qwen/Qwen2.5-0.5B, Qwen/Qwen2.5-0.5B-Instruct, Qwen/Qwen2.5-1.5B, Qwen/Qwen2.5-1.5B-Instruct, Qwen/Qwen2.5-3B, Qwen/Qwen2.5-3B-Instruct, Qwen/Qwen2.5-7B, Qwen/Qwen2.5-7B-Instruct, Qwen/Qwen2.5-14B, Qwen/Qwen2.5-14B-Instruct | Qwen/Qwen2.5-0.5B, Qwen/Qwen2.5-0.5B-Instruct, Qwen/Qwen2.5-1.5B, Qwen/Qwen2.5-1.5B-Instruct, Qwen/Qwen2.5-3B, Qwen/Qwen2.5-3B-Instruct, Qwen/Qwen2.5-7B, Qwen/Qwen2.5-7B-Instruct, Qwen/Qwen2.5-14B, Qwen/Qwen2.5-14B-Instruct |
| InternLM | InternLM | Shanghai AI Laboratory | internlm/internlm-7b, internlm/internlm-chat-7b | internlm/internlm-7b, internlm/internlm-chat-7b |
| | InternLM2 | Shanghai AI Laboratory | internlm/internlm2-1_8b, internlm/internlm2-chat-1_8b, internlm/internlm2-7b, internlm/internlm2-chat-7b, internlm/internlm2-20b, internlm/internlm2-chat-20b | internlm/internlm2-1_8b, internlm/internlm2-chat-1_8b, internlm/internlm2-7b, internlm/internlm2-chat-7b |
| | InternLM2.5 | Shanghai AI Laboratory | internlm/internlm2_5-7b, internlm/internlm2_5-7b-chat, internlm/internlm2_5-7b-chat-1m | internlm/internlm2_5-7b, internlm/internlm2_5-7b-chat, internlm/internlm2_5-7b-chat-1m |
| Falcon | Falcon | tiiuae | tiiuae/falcon-rw-1b, tiiuae/falcon-7b, tiiuae/falcon-7b-instruct | tiiuae/falcon-rw-1b, tiiuae/falcon-7b, tiiuae/falcon-7b-instruct |
| DeepSeek | DeepSeek-MoE | DeepSeek | deepseek-ai/deepseek-moe-16b-base, deepseek-ai/deepseek-moe-16b-chat | deepseek-ai/deepseek-moe-16b-base, deepseek-ai/deepseek-moe-16b-chat |
| | DeepSeek-LLM | DeepSeek | deepseek-ai/deepseek-llm-7b-base, deepseek-ai/deepseek-llm-7b-chat | deepseek-ai/deepseek-llm-7b-base, deepseek-ai/deepseek-llm-7b-chat |
| | DeepSeek-V2 | DeepSeek | deepseek-ai/DeepSeek-V2-Lite, deepseek-ai/DeepSeek-V2-Lite-Chat | deepseek-ai/DeepSeek-V2-Lite, deepseek-ai/DeepSeek-V2-Lite-Chat |
| | DeepSeek-Coder | DeepSeek | deepseek-ai/deepseek-coder-1.3b-base, deepseek-ai/deepseek-coder-1.3b-instruct, deepseek-ai/deepseek-coder-6.7b-base, deepseek-ai/deepseek-coder-6.7b-instruct, deepseek-ai/deepseek-coder-7b-base-v1.5, deepseek-ai/deepseek-coder-7b-instruct-v1.5 | deepseek-ai/deepseek-coder-1.3b-base, deepseek-ai/deepseek-coder-1.3b-instruct, deepseek-ai/deepseek-coder-6.7b-base, deepseek-ai/deepseek-coder-6.7b-instruct, deepseek-ai/deepseek-coder-7b-base-v1.5, deepseek-ai/deepseek-coder-7b-instruct-v1.5 |
| | DeepSeek-Coder-V2 | DeepSeek | deepseek-ai/DeepSeek-Coder-V2-Lite-Base, deepseek-ai/DeepSeek-Coder-V2-Lite-Instruct | deepseek-ai/DeepSeek-Coder-V2-Lite-Base, deepseek-ai/DeepSeek-Coder-V2-Lite-Instruct |
| | DeepSeek-Math | DeepSeek | deepseek-ai/deepseek-math-7b-base, deepseek-ai/deepseek-math-7b-instruct, deepseek-ai/deepseek-math-7b-rl | deepseek-ai/deepseek-math-7b-base, deepseek-ai/deepseek-math-7b-instruct, deepseek-ai/deepseek-math-7b-rl |
| MiniCPM | MiniCPM | OpenBMB | openbmb/MiniCPM-2B-sft-bf16, openbmb/MiniCPM-2B-dpo-bf16, openbmb/MiniCPM-2B-128k, openbmb/MiniCPM-1B-sft-bf16 | openbmb/MiniCPM-2B-sft-bf16, openbmb/MiniCPM-2B-dpo-bf16, openbmb/MiniCPM-2B-128k, openbmb/MiniCPM-1B-sft-bf16 |
| | MiniCPM-V | OpenBMB | openbmb/MiniCPM-V-2_6, openbmb/MiniCPM-Llama3-V-2_5 | openbmb/MiniCPM-V-2_6, openbmb/MiniCPM-Llama3-V-2_5 |
| embedding | text2vec-base-chinese | shibing624 | shibing624/text2vec-base-chinese | shibing624/text2vec-base-chinese |
| | m3e | moka-ai | moka-ai/m3e-base | moka-ai/m3e-base |
| | bge | BAAI | BAAI/bge-large-en-v1.5, BAAI/bge-large-zh-v1.5, BAAI/bge-base-en-v1.5, BAAI/bge-base-zh-v1.5, BAAI/bge-small-en-v1.5, BAAI/bge-small-zh-v1.5 | BAAI/bge-large-en-v1.5, BAAI/bge-large-zh-v1.5, BAAI/bge-base-en-v1.5, BAAI/bge-base-zh-v1.5, BAAI/bge-small-en-v1.5, BAAI/bge-small-zh-v1.5 |
| | gte | thenlper | thenlper/gte-large-zh, thenlper/gte-base-zh | thenlper/gte-base-zh, thenlper/gte-large-zh |
Note: model names shown in highlighted format (e.g. bert-base-chinese) can be downloaded directly by build_transformer_model(). To route downloads through the hf-mirror, set the environment variable inline:

```shell
HF_ENDPOINT=https://hf-mirror.com python your_script.py
```

or export it before running the Python code:

```shell
export HF_ENDPOINT=https://hf-mirror.com
```

or set it inside the Python code itself:

```python
import os
os.environ['HF_ENDPOINT'] = "https://hf-mirror.com"
```

Citation:

```bibtex
@misc{bert4torch,
  title={bert4torch},
  author={Bo Li},
  year={2022},
  howpublished={\url{https://github.com/Tongjilibo/bert4torch}},
}
```
WeChat ID / WeChat group (QR code images)
Star History Chart (image)