efficient attention
1.0.0
This repository contains the implementation of our work on efficient attention mechanisms, including LARA and EVA.
Repo structure:
- efficient-attention: a small self-contained codebase implementing various efficient attention mechanisms. See Usage below for more details.
- vit: the codebase for image classification experiments.
- fairseq: a modified fork of Fairseq for language tasks, including machine translation and autoregressive language modeling.
- main.sh: a bash script for launching all experiments. Arguments placed after -e True are passed directly to the training command, so you can append any custom training arguments there.

To set up the environment, run the following commands to install the required dependencies (a virtual environment is recommended):
# install packages
pip install -r requirements.txt
# install efficient-attention library
pip install -e efficient-attention
# OPTIONAL: install fairseq library for running language tasks
cd fairseq
python3 setup.py build develop
cd ..
The environment has been tested with Python 3.8.10, PyTorch 1.12.0, and CUDA 11.3. Also note that our Fairseq fork modifies several files in the original codebase, so using the latest Fairseq release may lead to unexpected dependency conflicts.
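After installation, you can optionally verify that the package is importable (a minimal illustrative check; both names appear in the usage snippets below):
# quick import check after `pip install -e efficient-attention`
from efficient_attention import AttentionFactory, NestedNamespace
print(AttentionFactory, NestedNamespace)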
efficient-attention is a small self-contained codebase that collects several efficient attention mechanisms.
Each attention mechanism defines its own hyperparameters through the add_attn_specific_args() class method. To add attention-specific arguments to an argparse parser, follow the snippet below:
import argparse
from efficient_attention import AttentionFactory, NestedNamespace
# ...
parser = argparse.ArgumentParser()
parser.add_argument('--attn-name', default='softmax', type=str, metavar='ATTN',
                    help='Name of attention model to use')
# ...
temp_args, _ = parser.parse_known_args()
# add attention-specific arguments to the parser
# struct_name: name of the inner namespace to store all attention-specific arguments
# prefix: prefix to prepend to all argument names
# for example, if prefix = encoder-attn, then for the argument --window-size
# we need to pass --encoder-attn-window-size
# this is useful to avoid argument name conflicts.
AttentionFactory.add_attn_specific_args(parser, temp_args.attn_name, struct_name="attn_args", prefix="")
# parse arguments to a namespace that supports nested attributes
args = parser.parse_args(namespace=NestedNamespace())
# now we can access the attention-specific arguments via args.attn_args
print(args.attn_args.window_size)
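As a hedged sketch of the prefix mechanism described in the comments above, the same factory call can be made twice with different prefixes, e.g. to configure encoder and decoder attention separately; the two-prefix pattern and the flag names here are assumptions based on those comments, mirroring the --encoder-attn-*/--decoder-attn-* convention used in the Fairseq examples below:
# illustrative only: register the same attention's arguments twice under
# different prefixes to avoid name conflicts between encoder and decoder
AttentionFactory.add_attn_specific_args(parser, temp_args.attn_name,
                                        struct_name='encoder_attn_args',
                                        prefix='encoder-attn')
AttentionFactory.add_attn_specific_args(parser, temp_args.attn_name,
                                        struct_name='decoder_attn_args',
                                        prefix='decoder-attn')
# a flag such as --window-size (if the chosen attention registers one) would
# now be passed as --encoder-attn-window-size / --decoder-attn-window-size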
Inside a torch.nn.Module subclass, you can then create an efficient attention module as follows:
# we might want to pass attention-specific arguments to the attention module
# along with other related arguments
attn_args = {
    **vars(args.attn_args),
    **{
        'dim': args.embed_dim,
        'num_heads': args.num_heads,
        'qkv_bias': args.qkv_bias,
        'attn_drop': args.attn_drop_rate,
        'proj_drop': args.drop_rate,
    }
}
self.attn = AttentionFactory.build_attention(attn_name=attn_name, attn_args=attn_args)
# the module can then be called like a normal function:
x = self.attn(x)
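For concreteness, here is a minimal runnable sketch that puts the pieces together; the TinyBlock wrapper, the concrete hyperparameter values, and the (batch, seq_len, dim) input shape are illustrative assumptions, not repo conventions:
import torch
import torch.nn as nn
from efficient_attention import AttentionFactory

class TinyBlock(nn.Module):
    # hypothetical wrapper module, for demonstration only
    def __init__(self, attn_name, attn_args):
        super().__init__()
        self.attn = AttentionFactory.build_attention(attn_name=attn_name,
                                                     attn_args=attn_args)

    def forward(self, x):
        return self.attn(x)

# assumed hyperparameters; real runs take these from the parsed args as above
block = TinyBlock('softmax', {'dim': 64, 'num_heads': 4, 'qkv_bias': True,
                              'attn_drop': 0.0, 'proj_drop': 0.0})
x = torch.randn(2, 16, 64)  # (batch, seq_len, dim) is an assumption
print(block(x).shape)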
We follow a setup similar to DeiT to preprocess the ImageNet dataset. Download the ImageNet train and val images and extract them into the following directory structure, which datasets.ImageFolder can read:
/path/to/imagenet/
  train/
    class1/
      img1.jpeg
    class2/
      img2.jpeg
  val/
    class1/
      img3.jpeg
    class2/
      img4.jpeg
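With the data arranged this way, a quick sanity check using plain torchvision (illustrative, not repo code) confirms that the class subfolders are picked up as labels:
from torchvision import datasets, transforms

# each class subfolder (class1, class2, ...) becomes one integer label
train_set = datasets.ImageFolder('/path/to/imagenet/train',
                                 transform=transforms.ToTensor())
print(len(train_set), train_set.classes[:3])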
The following commands train and evaluate various vision transformers with LARA/EVA. Training is assumed to run on 8 GPUs.
To use LARA/EVA with different DeiT architectures:
# LARA: DeiT-tiny-p8
bash main.sh -m evit_tiny_p8 -p <dir-of-imagenet-data> -g 8 -d imagenet -e TRUE --dist-eval --num-workers 16 --clip-grad 5.0 --warmup-epochs 10 --seed 1 --attn-name lara --mis-type mis-opt --proposal-gen pool-mixed --alpha-coeff 2.0 --num-landmarks 49
# LARA: DeiT-tiny-p16
bash main.sh -m evit_tiny_p16 -p <dir-of-imagenet-data> -g 8 -d imagenet -e TRUE --dist-eval --num-workers 16 --clip-grad 5.0 --warmup-epochs 10 --seed 1 --attn-name lara --mis-type mis-opt --proposal-gen pool-mixed --alpha-coeff 2.0 --num-landmarks 49
# LARA: DeiT-small-p16
bash main.sh -m evit_small_p16 -p <dir-of-imagenet-data> -g 8 -d imagenet -e TRUE --dist-eval --num-workers 16 --clip-grad 5.0 --warmup-epochs 10 --seed 1 --attn-name lara --mis-type mis-opt --proposal-gen pool-mixed --alpha-coeff 2.0 --num-landmarks 49
# EVA: DeiT-tiny-p8
bash main.sh -m evit_tiny_p8 -p <dir-of-imagenet-data> -g 8 -d imagenet -e TRUE --dist-eval --num-workers 16 --clip-grad 5.0 --warmup-epochs 10 --seed 1 --attn-name eva --num-landmarks 49 --adaptive-proj default --window-size 7 --attn-2d --use-rpe
# EVA: DeiT-tiny-p16
bash main.sh -m evit_tiny_p16 -p <dir-of-imagenet-data> -g 8 -d imagenet -e TRUE --dist-eval --num-workers 16 --clip-grad 5.0 --warmup-epochs 10 --seed 1 --attn-name eva --num-landmarks 49 --adaptive-proj default --window-size 7 --attn-2d --use-rpe
# EVA: DeiT-small-p16
bash main.sh -m evit_small_p16 -p <dir-of-imagenet-data> -g 8 -d imagenet -e TRUE --dist-eval --num-workers 16 --clip-grad 5.0 --warmup-epochs 10 --seed 1 --attn-name eva --num-landmarks 49 --adaptive-proj default --window-size 7 --attn-2d --use-rpe
To use LARA/EVA within the PVTv2 architecture:
# LARA Attention
bash main.sh -m pvt_medium2 -p <dir-of-imagenet-data> -g 8 -d imagenet -e TRUE --dist-eval --num-workers 16 --clip-grad 1.0 --drop-path-rate 0.3 --warmup-epochs 10 --seed 1 --attn-name lara --pool-module-type dense --mis-type mis-opt --proposal-gen pool-mixed --num-landmarks 49 --alpha-coeff 2.0 --repeated-aug
# EVA Attention
bash main.sh -m pvt_medium2 -p <dir-of-imagenet-data> -g 8 -d imagenet -e TRUE --dist-eval --num-workers 16 --clip-grad 5.0 --drop-path-rate 0.3 --warmup-epochs 10 --seed 1 --attn-name eva --num-landmarks 49 --adaptive-proj default --window-size 7 --attn-2d --use-rpe --repeated-aug
Alternatively, you may want to try other attention mechanisms:
# Softmax Attention
bash main.sh -m evit_tiny_p8 -d imagenet -e TRUE --dist-eval --num-workers 16 --clip-grad 5.0 --warmup-epochs 10 --seed 1 --attn-name softmax
# RFA/Performer
bash main.sh -m evit_tiny_p8 -d imagenet -e TRUE --dist-eval --num-workers 16 --clip-grad 5.0 --warmup-epochs 10 --seed 1 --attn-name performer --proj-method favorp --approx-attn-dim 64
# Local Attention
bash main.sh -m evit_tiny_p8 -d imagenet -e TRUE --dist-eval --num-workers 16 --clip-grad 5.0 --warmup-epochs 10 --seed 1 --attn-name local --window-size 7 --attn-2d --use-rpe
We use the standard Fairseq preprocessing pipeline to prepare data for language tasks.
This includes the WMT'14 En-De data for machine translation and the Wikitext-103 dataset for language modeling. Passing -r <resume-ckpt-dir> specifies the directory in which your checkpoints are stored during training, and can be used to resume training. Attention-specific arguments are prefixed with --encoder-attn- (for the encoder side) or --decoder-attn- (for the decoder side); see the examples below.
To train machine translation models on WMT'14 En-De:
## LARA
CUDA_VISIBLE_DEVICES=0,1,2,3 bash main.sh -p <dir-of-your-bin-data> -d wmt -s lara_8 -g 4 -e TRUE --attn-name-encoder lara --encoder-attn-num-landmarks 8 --encoder-attn-proposal-gen adaptive-1d --encoder-attn-mis-type mis-opt
CUDA_VISIBLE_DEVICES=0,1,2,3 bash main.sh -p <dir-of-your-bin-data> -d wmt -s lara_16 -g 4 -e TRUE --attn-name-encoder lara --encoder-attn-num-landmarks 16 --encoder-attn-proposal-gen adaptive-1d --encoder-attn-mis-type mis-opt
CUDA_VISIBLE_DEVICES=0,1,2,3 bash main.sh -p <dir-of-your-bin-data> -d wmt -s lara_32 -g 4 -e TRUE --attn-name-encoder lara --encoder-attn-num-landmarks 32 --encoder-attn-proposal-gen adaptive-1d --encoder-attn-mis-type mis-opt
## EVA
CUDA_VISIBLE_DEVICES=0,1,2,3 bash main.sh -p <dir-of-your-bin-data> -d wmt -s eva_8_8 -g 4 -e TRUE --attn-name-encoder eva --encoder-attn-window-size 8 --encoder-attn-num-landmarks 8 --encoder-attn-adaptive-proj no-ln --encoder-attn-use-t5-rpe --encoder-attn-overlap-window
CUDA_VISIBLE_DEVICES=0,1,2,3 bash main.sh -p <dir-of-your-bin-data> -d wmt -s eva_16_8 -g 4 -e TRUE --attn-name-encoder eva --encoder-attn-window-size 16 --encoder-attn-num-landmarks 8 --encoder-attn-adaptive-proj no-ln --encoder-attn-use-t5-rpe --encoder-attn-overlap-window
CUDA_VISIBLE_DEVICES=0,1,2,3 bash main.sh -p <dir-of-your-bin-data> -d wmt -s eva_32_8 -g 4 -e TRUE --attn-name-encoder eva --encoder-attn-window-size 32 --encoder-attn-num-landmarks 8 --encoder-attn-adaptive-proj no-ln --encoder-attn-use-t5-rpe --encoder-attn-overlap-window
To train language models on Wikitext-103:
# Currently, LARA does not support causal masking yet.
# EVA on a 16-layer Transformer LM
CUDA_VISIBLE_DEVICES=0,1,2,3 bash main.sh -p <dir-of-your-bin-data> -m 16layers -d wikitext103 -s eva_128_8_16layers -g 4 -e TRUE --attn-name-decoder causal_eva --decoder-attn-window-size 128 --decoder-attn-causal --decoder-attn-adaptive-proj qk --decoder-attn-chunk-size 8 --decoder-attn-use-t5-rpe
# EVA on a 32-layer Transformer LM
CUDA_VISIBLE_DEVICES=0,1,2,3 bash main.sh -p <dir-of-your-bin-data> -m 32layers -d wikitext103 -s eva_128_8_32layers -g 4 -e TRUE --attn-name-decoder causal_eva --decoder-attn-window-size 128 --decoder-attn-causal --decoder-attn-adaptive-proj qk --decoder-attn-chunk-size 8 --decoder-attn-use-t5-rpe
For generation and evaluation, simply pass -i true when invoking main.sh to run only the inference procedure. The checkpoint path can be specified via -c <your-ckpt-path>. For example:
# Machine Translation
CUDA_VISIBLE_DEVICES=0 bash main.sh -i true -c <your-possibly-avg-checkpoint.pt> -p <dir-of-your-bin-data> -d wmt -g 1
# Autoregressive Language Modeling
CUDA_VISIBLE_DEVICES=0 bash main.sh -i true -c <your-checkpoint_last.pt> -p <dir-of-your-bin-data> -d wikitext103 -g 1
We also provide trained EVA model checkpoints for the machine translation and language modeling tasks via OneDrive.
If you find this repository useful, please consider citing our papers:
@inproceedings{zheng2023efficient,
  title     = {Efficient Attention via Control Variates},
  author    = {Lin Zheng and Jianbo Yuan and Chong Wang and Lingpeng Kong},
  booktitle = {International Conference on Learning Representations},
  year      = {2023},
  url       = {https://openreview.net/forum?id=G-uNfHKrj46}
}

@inproceedings{zheng2022linear,
  title        = {Linear complexity randomized self-attention mechanism},
  author       = {Lin Zheng and Chong Wang and Lingpeng Kong},
  booktitle    = {International Conference on Machine Learning},
  pages        = {27011--27041},
  year         = {2022},
  organization = {PMLR}
}