Training LLaMA with MMEngine!
LLaMA.MMEngine is an experimental repository that leverages the MMEngine training engine, originally designed for computer vision tasks, to train and fine-tune language models. The primary goal of this project is to explore the compatibility of MMEngine with language models, learn about fine-tuning techniques, and engage with the open-source community for knowledge sharing and collaboration.
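The underlying idea is that MMEngine's Runner can train any torch.nn.Module that reports a loss, regardless of modality. The sketch below only illustrates that general pattern with MMEngine's public API; it is not the actual wiring used in this repo, and CausalLMWrapper, my_llama, and my_dataloader are hypothetical placeholders:

from mmengine.model import BaseModel
from mmengine.runner import Runner

class CausalLMWrapper(BaseModel):
    """Wraps a causal LM that returns a .loss so MMEngine can drive training."""
    def __init__(self, lm):
        super().__init__()
        self.lm = lm

    def forward(self, input_ids, labels, mode='loss'):
        out = self.lm(input_ids=input_ids, labels=labels)
        if mode == 'loss':
            # In 'loss' mode MMEngine expects a dict of loss tensors.
            return dict(loss=out.loss)
        return out.logits

runner = Runner(
    model=CausalLMWrapper(my_llama),          # hypothetical model instance
    work_dir='work_dirs/llama_demo',
    train_dataloader=my_dataloader,           # hypothetical DataLoader yielding dicts with input_ids/labels
    optim_wrapper=dict(optimizer=dict(type='AdamW', lr=2e-5)),
    train_cfg=dict(by_epoch=True, max_epochs=3),
)
runner.train()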
Install PyTorch
Follow the official guide: https://pytorch.org/get-started/locally/
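For example, at the time of writing the guide suggests a command like the following for a CUDA 11.8 build; pick the exact command the guide generates for your OS and CUDA version:

pip install torch --index-url https://download.pytorch.org/whl/cu118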
Set up this repo
Clone the repo
git clone https://github.com/RangiLyu/llama.mmengine
cd llama.mmengine
Install dependencies
pip install -r requirements.txt
Run setup.py
python setup.py develop
Download the model weights
Please download the model weights from the official LLaMA repo.
The checkpoints folder should look like this:
checkpoints/llama
├── 7B
│ ├── checklist.chk
│ ├── consolidated.00.pth
│ └── params.json
├── 13B
│ ...
├── tokenizer_checklist.chk
└── tokenizer.model
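If you want to sanity-check the layout before converting, a small script along these lines can help (it only assumes the paths shown above):

from pathlib import Path

root = Path('checkpoints/llama')
expected = [
    'tokenizer.model',
    'tokenizer_checklist.chk',
    '7B/checklist.chk',
    '7B/consolidated.00.pth',
    '7B/params.json',
]
for rel in expected:
    path = root / rel
    # Report each expected file so missing downloads are easy to spot.
    print(('OK      ' if path.exists() else 'MISSING ') + str(path))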
Convert the weights (thanks to Lit-LLaMA for the conversion script):
python scripts/convert_checkpoint.py \
    --output_dir checkpoints/mm-llama \
    --ckpt_dir checkpoints/llama \
    --tokenizer_path checkpoints/llama/tokenizer.model \
    --model_size 7B
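The larger variants should convert the same way; for instance, assuming you also downloaded the 13B weights, changing the size flag is presumably all that is needed:

python scripts/convert_checkpoint.py \
    --output_dir checkpoints/mm-llama \
    --ckpt_dir checkpoints/llama \
    --tokenizer_path checkpoints/llama/tokenizer.model \
    --model_size 13B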
Fine-tune the model
python tools/train.py configs/llama-7B_finetune_3e.py
Generate with the fine-tuned weights
python tools/generate.py configs/llama-7B_finetune_3e.py work_dirs/llama-7B_finetune_3e/epoch_3.pth
Contributing
I greatly appreciate your interest in contributing to LLaMA.MMEngine! Please note that this project is maintained as a personal side project, so the time available for development and support is limited. With that in mind, I kindly encourage members of the community to get involved and contribute by submitting pull requests!