LoLDU is a cutting-edge parameter-efficient fine-tuning (PEFT) technique designed to drastically reduce the number of trainable parameters while achieving performance comparable to full fine-tuning. This document outlines the steps required to integrate LoLDU into your projects effectively.
For more details, see the paper: https://arxiv.org/pdf/2410.13618
To install LoLDU, simply use pip:
git clone https://github.com/SKDDJ/LoLDU
cd LoLDU
pip install -e .

Here is a quick example of how to use LoLDU:
import torch
import torch.nn as nn
from functools import partial
from minloldu import LoLDUParametrization, add_loldu, get_loldu_params
# Define your model
model = YourModel()

# Define LoLDU configuration
loldu_config = {
    nn.Linear: {
        "weight": partial(LoLDUParametrization.from_linear, rank=15),
    },
}
# Add LoLDU to the model
add_loldu(model, loldu_config=loldu_config)
# Freeze all parameters
for param in model.parameters():
    param.requires_grad = False
# Enable gradients for LoLDU parameters
for param in get_loldu_params(model):
    param.requires_grad = True
# Now your model is ready for fine-tuning with LoLDU

API reference:

add_loldu(model, loldu_config)
model: the PyTorch model to modify.
loldu_config: the configuration dictionary for LoLDU.

get_loldu_params(model, print_shapes=False)
model: the PyTorch model with LoLDU applied.
print_shapes: if True, prints the shapes of the LoLDU parameters.

disable_loldu(model)

enable_loldu(model)

remove_loldu(model)

merge_loldu(model)

get_loldu_state_dict(model)

LoLDUParametrization.from_linear(layer, rank)
layer: the linear layer to parametrize.
rank: the rank of the low-rank approximation.
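The toggling helpers listed above (disable_loldu, enable_loldu, remove_loldu) are not exercised in the examples below. A minimal sketch of how they might be used, assuming they are exported from minloldu like the other functions, operate in place, and that `model` has already been through add_loldu as in the quick example:

from minloldu import disable_loldu, enable_loldu, remove_loldu

# `model` is assumed to already have LoLDU applied via add_loldu(...).
disable_loldu(model)   # temporarily bypass the LoLDU parametrizations (original weights only)
enable_loldu(model)    # re-activate them
remove_loldu(model)    # strip the parametrizations from the model entirely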
Example: applying LoLDU to a model

import torch.nn as nn
from functools import partial
from minloldu import LoLDUParametrization, add_loldu
class MyModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.model = nn.Sequential(
            nn.Linear(in_features=15, out_features=15),
        )

    def forward(self, x):
        return self.model(x)
model = MyModel()

loldu_config = {
    nn.Linear: {
        "weight": partial(LoLDUParametrization.from_linear, rank=15),
    },
}

add_loldu(model, loldu_config=loldu_config)

Training with LoLDU:

from minloldu import get_loldu_params
# Freeze all parameters
for param in model.parameters():
    param.requires_grad = False

# Enable gradients for LoLDU parameters
for param in get_loldu_params(model):
    param.requires_grad = True
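# Optional sanity check: count how many parameters remain trainable after freezing.
# This is plain PyTorch bookkeeping, not part of the minloldu API.
num_trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
print(f"trainable parameters: {num_trainable}")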
# Your training loop here
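The training loop itself is ordinary PyTorch; only the LoLDU parameters receive gradients. A minimal sketch, assuming the MyModel example above and an illustrative regression loss on dummy data (the optimizer choice and learning rate are placeholders, not values prescribed by LoLDU):

import torch
import torch.nn as nn
from minloldu import get_loldu_params

# Only the LoLDU parameters are passed to the optimizer.
optimizer = torch.optim.AdamW(get_loldu_params(model), lr=1e-3)
loss_fn = nn.MSELoss()

for step in range(100):
    x = torch.randn(8, 15)       # dummy batch matching the example's in_features=15
    target = torch.randn(8, 15)  # dummy regression targets
    optimizer.zero_grad()
    loss = loss_fn(model(x), target)
    loss.backward()
    optimizer.step()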
Saving and loading the LoLDU state:

from minloldu import get_loldu_state_dict

# Save LoLDU state
state_dict_to_save = get_loldu_state_dict(model)
torch.save(state_dict_to_save, "loldu_state.pth")
# Load LoLDU state
loaded_state = torch.load("loldu_state.pth")
model.load_state_dict(loaded_state, strict=False)
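If the state is loaded into a freshly constructed model rather than the one that was trained, the sketch below assumes that add_loldu must be applied with the same loldu_config first, so that the LoLDU parameters exist before load_state_dict is called; strict=False is kept because the saved dict contains only the LoLDU entries.

import torch
from minloldu import add_loldu

fresh_model = MyModel()
add_loldu(fresh_model, loldu_config=loldu_config)         # recreate the LoLDU structure first (assumption)
loaded_state = torch.load("loldu_state.pth")
fresh_model.load_state_dict(loaded_state, strict=False)   # only the LoLDU entries are restored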
Merging LoLDU for inference:

from minloldu import merge_loldu

# After training, merge LoLDU for efficient inference
merge_loldu(model)

Tips:

Choose an appropriate rank: the rank argument of LoLDUParametrization controls the trade-off between parameter efficiency and model performance. Experiment with different ranks to find the best balance for your task (see the sketch after these tips).
Tune your hyperparameters: LoLDU may require a different learning rate than full fine-tuning. Adjust your learning rate and other hyperparameters accordingly.
Monitor training: keep a close eye on the training process to make sure LoLDU is adapting the model effectively, and use a validation set to guard against overfitting.
Merge for inference: always call merge_loldu() before deploying the model, so the parametrization adds no computational overhead at inference time.
Combine with other techniques: LoLDU can be combined with other optimization techniques, such as quantization, for further efficiency gains.
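As an illustration of the rank tip above, here is a hedged sketch of a lower-rank configuration reusing the loldu_config pattern from the earlier example; the value rank=4 is purely illustrative, and valid ranks may depend on the dimensions of the parametrized layers:

import torch.nn as nn
from functools import partial
from minloldu import LoLDUParametrization, add_loldu

low_rank_config = {
    nn.Linear: {
        "weight": partial(LoLDUParametrization.from_linear, rank=4),  # illustrative rank
    },
}

model_small = MyModel()  # MyModel as defined in the example above
add_loldu(model_small, loldu_config=low_rank_config)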
For more details and advanced usage, please refer to the original paper and the source code repository.

Note:

Please note that the code may not exactly reproduce the results presented in the paper, due to possible human errors introduced while preparing and cleaning the code before release. If you face any challenges in reproducing our findings, please feel free to contact us. We are also committed to running sanity-check experiments in the near future.
Acknowledgements

Our LoLDU implementation benefited greatly from the minLoRA codebase.

BibTeX
@misc{shi2024loldulowrankadaptationlowerdiagupper,
  title={LoLDU: Low-Rank Adaptation via Lower-Diag-Upper Decomposition for Parameter-Efficient Fine-Tuning},
  author={Yiming Shi and Jiwei Wei and Yujia Wu and Ran Ran and Chengwei Sun and Shiyuan He and Yang Yang},
  year={2024},
  eprint={2410.13618},
  archivePrefix={arXiv},
  primaryClass={cs.CV},
  url={https://arxiv.org/abs/2410.13618},
}