An open-source framework for parameter-efficient tuning (delta tuning).
Overview • Installation • Basic Usage • Docs • Performance
OpenDelta is a toolkit for parameter-efficient tuning methods (we dub them delta tuning), with which users can flexibly assign (or add) a small number of parameters to update while keeping most parameters frozen. With OpenDelta, users can easily implement prefix-tuning, adapters, LoRA, or other types of delta tuning with their preferred PTMs.
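To make the parameter-efficiency claim concrete, here is a minimal sketch (ours, not taken from the library docs) that counts how many parameters remain trainable after the backbone is frozen; it assumes OpenDelta's LoraModel class, its default modification positions for T5, and a t5-small backbone.

from transformers import AutoModelForSeq2SeqLM
from opendelta import LoraModel

model = AutoModelForSeq2SeqLM.from_pretrained("t5-small")
delta_model = LoraModel(backbone_model=model)  # inject LoRA at the default positions for T5
delta_model.freeze_module()                    # freeze everything except the delta parameters

trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
total = sum(p.numel() for p in model.parameters())
print(f"trainable: {trainable} / {total} ({100 * trainable / total:.3f}%)")

With the backbone frozen, only the injected LoRA matrices are updated during training, which is what keeps the per-task checkpoint small.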
The latest version of OpenDelta is tested on python==3.8.13, pytorch==1.12.1, transformers==4.22.2. Other versions are likely to be supported as well. If you encounter bugs when using your own package versions, please raise an issue; we will look into it as soon as possible.
A demo of using OpenDelta to modify a PLM (e.g., BART); a rough code sketch of this kind of modification follows.
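The original demo is an animation, so as a stand-in the sketch below shows one way to attach adapters to BART and print the resulting module tree. It is an illustration of ours: AdapterModel, Visualization, the facebook/bart-base checkpoint, and the "fc2" module name are assumptions for this example.

from transformers import BartForConditionalGeneration
from opendelta import AdapterModel, Visualization

bart = BartForConditionalGeneration.from_pretrained("facebook/bart-base")
# insert adapters after the feed-forward output projections ("fc2" in BART's module naming)
delta_model = AdapterModel(backbone_model=bart, modified_modules=["fc2"])
delta_model.freeze_module(exclude=["deltas"])  # keep only the adapters trainable
Visualization(bart).structure_graph()          # print the module tree with the injected adapters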
conda create -n opendelta_env python=3.8
conda activate opendelta_env
pip install git+https://github.com/thunlp/OpenDelta.git

Or install the latest pip version (more stable):
pip install opendelta

Or build from source:
git clone [email protected]:thunlp/OpenDelta.git
cd OpenDelta
python setup.py install
# python setup.py develop  # if you want to modify the code for your research
The following code and comments walk you through the key functionality of OpenDelta. They are also available in must_try.py and in must_try.ipynb on Colab.
# use transformers as usual.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
t5 = AutoModelForSeq2SeqLM.from_pretrained("t5-large")
t5_tokenizer = AutoTokenizer.from_pretrained("t5-large")

# A running example
inputs_ids = t5_tokenizer.encode("Is Harry Potter written by J.K. Rowling", return_tensors="pt")
t5_tokenizer.decode(t5.generate(inputs_ids)[0])
# >>> '<pad><extra_id_0>? Is it Harry Potter?</s>'
# use existing delta models
from opendelta import AutoDeltaModel, AutoDeltaConfig
# use existing delta models from DeltaCenter
delta = AutoDeltaModel.from_finetuned("thunlp/Spelling_Correction_T5_LRAdapter_demo", backbone_model=t5)
# freeze the whole backbone model except the delta models.
delta.freeze_module()
# visualize the change
delta.log()

t5_tokenizer.decode(t5.generate(inputs_ids)[0])
# >>> '<pad> Is Harry Potter written by J.K. Rowling?</s>'

# Now save merely the delta models, not the whole backbone model, to .tmp/
delta.save_finetuned(".tmp")
import os; os.listdir(".tmp")
# >>> The state dict size is 1.443 MB
# >>> We encourage users to push their final and public models to delta center to share them with the community!

# reload the delta model from the local path and add it to the pre-trained T5.
t5 = AutoModelForSeq2SeqLM.from_pretrained("t5-large")
delta1 = AutoDeltaModel.from_finetuned(".tmp", backbone_model=t5)
import shutil; shutil.rmtree(".tmp")  # don't forget to remove the tmp files.
t5_tokenizer.decode(t5.generate(inputs_ids)[0])
# >>> '<pad> Is Harry Potter written by J.K. Rowling?</s>'

# detach the delta models, the model returns to the unmodified status.
delta1.detach()
t5_tokenizer.decode(t5.generate(inputs_ids)[0])
# >>> '<pad><extra_id_0>? Is it Harry Potter?</s>'
# use default configuration for customized wrapped models which have PLMs inside.
# This is a common need for users.
import torch.nn as nn

class WrappedModel(nn.Module):
    def __init__(self, inner_model):
        super().__init__()
        self.inner = inner_model

    def forward(self, *args, **kwargs):
        return self.inner(*args, **kwargs)

wrapped_model = WrappedModel(WrappedModel(t5))

# say we use LoRA
delta_config = AutoDeltaConfig.from_dict({"delta_type": "lora"})
delta2 = AutoDeltaModel.from_config(delta_config, backbone_model=wrapped_model)
delta2.log()
# >>> root
#     -- inner
#        -- inner
#           ...
#           ... lora_A:[8,1024], lora_B:[1024,8]
delta2.detach()
# use a non-default configuration
# say we add LoRA to the last four layers of the decoder of t5, with lora rank r=5
delta_config3 = AutoDeltaConfig.from_dict(
    {"delta_type": "lora", "modified_modules": ["[r]decoder.*((20)|(21)|(22)|(23)).*DenseReluDense.wi"], "lora_r": 5}
)
delta3 = AutoDeltaModel.from_config(delta_config3, backbone_model=wrapped_model)
delta3.log()

You can try using OpenDelta on any backbone model based on PyTorch. However, there is a small chance that the submodule interface of your backbone model is not supported; we have therefore verified a number of commonly used models that are guaranteed to be supported by OpenDelta, and we will keep testing more emerging models. Pull requests are welcome once you have successfully applied OpenDelta to your own backbone model; see the sketch below for one way to inspect a backbone's module names first.
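As one way to check compatibility with your own backbone, the sketch below prints the backbone's named-module tree so that you can pick valid modified_modules before attaching a delta model. It assumes OpenDelta's Visualization helper; bert-base-uncased and the "attention.output.dense" suffix are used purely for illustration.

from transformers import AutoModel
from opendelta import Visualization, LoraModel

backbone = AutoModel.from_pretrained("bert-base-uncased")
Visualization(backbone).structure_graph()  # print the named-module tree of the backbone

# Pick module names (or suffixes) seen in the tree, then target them explicitly.
# "attention.output.dense" is an assumed example of a suffix present in BERT.
delta_model = LoraModel(backbone_model=backbone, modified_modules=["attention.output.dense"])
delta_model.log()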
@article{hu2023opendelta,
  title={OpenDelta: A Plug-and-play Library for Parameter-efficient Adaptation of Pre-trained Models},
  author={Hu, Shengding and Ding, Ning and Zhao, Weilin and Lv, Xingtai and Zhang, Zhen and Liu, Zhiyuan and Sun, Maosong},
  journal={arXiv preprint arXiv:2307.03084},
  year={2023}
}

@article{ding2022delta,
  title={Delta tuning: A comprehensive study of parameter efficient methods for pre-trained language models},
  author={Ding, Ning and Qin, Yujia and Yang, Guang and Wei, Fuchao and Yang, Zonghan and Su, Yusheng and Hu, Shengding and Chen, Yulin and Chan, Chi-Min and Chen, Weize and others},
  journal={arXiv preprint arXiv:2203.06904},
  year={2022}
}