Perform full-parameter, LoRA, and QLoRA fine-tuning of llama3. Fine-tuning of the qwen1.5 model is also supported. If you want to swap in another model, the most important step is preprocessing the data into the format described below.
- 2024/10/04: Added fine-tuning for Qwen2.5-7B-Instruct and llama3.2-3B-Instruct. Run pip install transformers --upgrade and pip install accelerate --upgrade first.
- 2024/07/28: Added fine-tuning for Baichuan2-7B-Chat.
- 2024/07/24: Added fine-tuning for llama3.1-8B-Instruct. Requires transformers==4.43.1 and accelerate==0.33.0.
- 2024/07/22:
  - Added fine-tuning for glm-9B-chat. Note: you need to replace line 791 of modeling_chatglm.py with padding_mask = padding_mask.to(torch.bfloat16). The required transformers version is 4.42.4, so after installing the packages in requirements.txt, reinstall transformers.
  - Added fine-tuning for qwen1.5-7B-Chat.
  - Added fine-tuning for qwen2-7B-Instruct.
  - Added fine-tuning for yi1.5-6B-Chat.
- 2024/07/19: Added fine-tuning for internlm2.5. Note: internlm2.5 does not support fine-tuning with bf16, so fp16 is used in the run command.
Hardware: a single GPU with 24 GB of memory is sufficient.
python==3.8.8

```shell
pip install -r requirements.txt
```
Data is stored under data/; the specific format is:

```json
[
    {
        "conversations": [
            {
                "from": "user",
                "value": "你是那个名字叫ChatGPT的模型吗?"
            },
            {
                "from": "assistant",
                "value": "我的名字是西西嘛呦,并且是通过家里蹲公司的大数据平台进行训练的。"
            }
        ]
    },
    ...
]
```

Multi-turn conversations are prepared in the same format.
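Before launching training, it can help to sanity-check the data file against this schema. Below is a minimal sketch, not part of the repo; the strict user/assistant alternation it enforces is an assumption based on the example above:

```python
import json

def check_data(path: str) -> None:
    """Validate that every sample matches the conversations format above."""
    with open(path, "r", encoding="utf-8") as f:
        samples = json.load(f)
    for i, sample in enumerate(samples):
        turns = sample["conversations"]
        assert len(turns) % 2 == 0, f"sample {i}: unpaired turn"
        for j, turn in enumerate(turns):
            # Turns are assumed to alternate user -> assistant.
            expected = "user" if j % 2 == 0 else "assistant"
            assert turn["from"] == expected, f"sample {i}, turn {j}: from={turn['from']}"
            assert turn["value"].strip(), f"sample {i}, turn {j}: empty value"
    print(f"{len(samples)} samples look OK")

check_data("../data/Belle_sampled_qwen.json")
```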
Enter the model_hub folder and run `python download_modelscope.py` to download the llama3-8B-Instruct model.
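The exact contents of download_modelscope.py are not shown here; a minimal equivalent using the ModelScope SDK (the model id matches the path used in the training commands below) would be:

```python
from modelscope import snapshot_download

# Downloads the weights into ./LLM-Research/Meta-Llama-3-8B-Instruct/
model_dir = snapshot_download(
    "LLM-Research/Meta-Llama-3-8B-Instruct",
    cache_dir="./",
)
print("model downloaded to:", model_dir)
```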
Enter the script folder.
Due to hardware limitations, full-parameter fine-tuning is not demonstrated here; you can try it if you have the resources.
LoRA fine-tuning. The number of processes set by nproc_per_node must match the number of GPUs listed in CUDA_VISIBLE_DEVICES:

```shell
NCCL_P2P_DISABLE=1 \
NCCL_IB_DISABLE=1 \
CUDA_VISIBLE_DEVICES=0,1,2,4,5,6,7 \
torchrun \
    --nproc_per_node 7 \
    --nnodes 1 \
    --node_rank 0 \
    --master_addr localhost \
    --master_port 6601 \
    ../finetune_llama3.py \
    --model_name_or_path "../model_hub/LLM-Research/Meta-Llama-3-8B-Instruct/" \
    --data_path "../data/Belle_sampled_qwen.json" \
    --bf16 True \
    --output_dir "../output/llama3_8B_lora" \
    --num_train_epochs 100 \
    --per_device_train_batch_size 1 \
    --per_device_eval_batch_size 1 \
    --gradient_accumulation_steps 8 \
    --evaluation_strategy "no" \
    --save_strategy "steps" \
    --save_steps 5 \
    --save_total_limit 1 \
    --learning_rate 1e-5 \
    --weight_decay 0.1 \
    --adam_beta2 0.95 \
    --warmup_ratio 0.01 \
    --lr_scheduler_type "cosine" \
    --logging_steps 1 \
    --report_to "none" \
    --model_max_length 4096 \
    --gradient_checkpointing True \
    --lazy_preprocess True \
    --deepspeed "../config/ds_config_zero3_72B.json" \
    --use_lora
```

QLoRA fine-tuning. Again, nproc_per_node must match the number of GPUs in CUDA_VISIBLE_DEVICES; with QLoRA, training can also be completed on a single 4090:

```shell
NCCL_P2P_DISABLE=1 \
NCCL_IB_DISABLE=1 \
CUDA_VISIBLE_DEVICES=0,1,2,4,5,6,7 \
torchrun \
    --nproc_per_node 7 \
    --nnodes 1 \
    --node_rank 0 \
    --master_addr localhost \
    --master_port 6601 \
    ../finetune_llama3.py \
    --model_name_or_path "../model_hub/LLM-Research/Meta-Llama-3-8B-Instruct/" \
    --data_path "../data/Belle_sampled_qwen.json" \
    --bf16 True \
    --output_dir "../output/llama3_8B_qlora" \
    --num_train_epochs 100 \
    --per_device_train_batch_size 1 \
    --per_device_eval_batch_size 1 \
    --gradient_accumulation_steps 16 \
    --evaluation_strategy "no" \
    --save_strategy "steps" \
    --save_steps 5 \
    --save_total_limit 1 \
    --learning_rate 1e-5 \
    --weight_decay 0.1 \
    --adam_beta2 0.95 \
    --warmup_ratio 0.01 \
    --lr_scheduler_type "cosine" \
    --logging_steps 1 \
    --report_to "none" \
    --model_max_length 4096 \
    --gradient_checkpointing True \
    --lazy_preprocess True \
    --deepspeed "../config/ds_config_zero2.json" \
    --use_lora \
    --load_in_4bit \
    --q_lora
```
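For reference, here is a rough sketch of what the --use_lora, --load_in_4bit, and --q_lora flags typically map to inside a script like finetune_llama3.py, using the peft and bitsandbytes APIs. The rank, alpha, and target modules below are illustrative assumptions, not the script's actual values:

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

# --load_in_4bit: quantize the frozen base weights to 4-bit NF4.
model = AutoModelForCausalLM.from_pretrained(
    "../model_hub/LLM-Research/Meta-Llama-3-8B-Instruct/",
    torch_dtype=torch.bfloat16,
    quantization_config=BitsAndBytesConfig(
        load_in_4bit=True,
        bnb_4bit_quant_type="nf4",
        bnb_4bit_compute_dtype=torch.bfloat16,
    ),
)
model = prepare_model_for_kbit_training(model)  # --q_lora: make the 4-bit model trainable

# --use_lora: wrap the model so only the low-rank adapters are updated.
lora_config = LoraConfig(
    r=64,                 # illustrative rank
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # prints the small trainable fraction
```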
Inference after training:

Question (asking the model to extract a few keywords from a Chinese passage about social media):

在互联网普及的时代,社交媒体已经成为了现代人生活中不可缺少的一部分。从微信、微博、Facebook到Twitter、Instagram等等,社交媒体不仅满足了人们交流互动的需求,同时让人们更加容易地分享自己的生活瞬间、获取新闻资讯、找到志同道合的人群以及推广自己的业务。然而,与此同时,社交媒体也带来了一系列问题,如滥用个人信息、泄露隐私等。因此,如何正确地使用社交媒体已经成为了一个备受关注的话题。,总结上面这段文本的几个关键词
Output before fine-tuning:

Arrrr, shiver me timbers! Ye be wantin' me to summarize the key words from that there text, eh? Alright then, matey! Here be the main points:

1. Social media (the scourge o' the seven seas... er, I mean, the modern world)
2. Communication (the lifeblood o' the pirate's existence... er, I mean, the need for people to interact)
3. Sharing (the treasure o' the digital age... er, I mean, the ability to share one's life moments)
4. Information (the booty o' the internet... er, I mean, the news and knowledge shared online)
5. Privacy (the anchor o' security... er, I mean, the need to protect one's personal info)
6. Correct usage (the map to avoidin' the rocky shores o' social media misuse... er, I mean, the importance of using social media responsibly)

So hoist the colors, me hearties, and remember to use social media like a proper pirate: with caution, respect, and a keen eye for treasure!
Output after fine-tuning: 社交媒体、交流、分享、隐私、滥用信息 (social media, communication, sharing, privacy, misuse of personal information)
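How this inference was run is not shown in this section; a minimal sketch that loads the saved LoRA adapter on top of the base model and generates a reply (the paths, prompt, and generation parameters are assumptions):

```python
import torch
from transformers import AutoTokenizer
from peft import AutoPeftModelForCausalLM

base_dir = "../model_hub/LLM-Research/Meta-Llama-3-8B-Instruct/"
adapter_dir = "../output/llama3_8B_lora"  # --output_dir from the LoRA run above

tokenizer = AutoTokenizer.from_pretrained(base_dir)
model = AutoPeftModelForCausalLM.from_pretrained(
    adapter_dir, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [{"role": "user", "content": "你是谁?"}]  # example prompt
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output = model.generate(inputs, max_new_tokens=128, do_sample=False)
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```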
The code mainly references:

The model can be downloaded from ModelScope: