Fine-Tune LLaMA 3.1 on Your Dataset with LoRA and Unsloth
1.0.0
This project demonstrates how to fine-tune the LLaMA-3.1-8B model with LoRA adapters for parameter-efficient training, apply chat templates to the training data, and save the resulting model for inference. The model is trained on a local dataset and can be pushed to the Hugging Face Hub for deployment.
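To illustrate what "applying a chat template" means, the sketch below renders a list of messages into a LLaMA-3-style prompt string by hand. The special tokens are assumptions based on the published LLaMA-3 chat format; in the actual pipeline, `tokenizer.apply_chat_template` from `transformers` (or Unsloth's `get_chat_template` helper) performs this step for you.

```python
# Hand-rolled sketch of the LLaMA-3 chat template. The special tokens below
# (<|begin_of_text|>, <|start_header_id|>, <|eot_id|>) follow the published
# LLaMA-3 format; treat this as illustrative, not a drop-in replacement for
# tokenizer.apply_chat_template.
def format_chat(messages):
    """Render a list of {"role", "content"} dicts as a LLaMA-3-style prompt."""
    parts = ["<|begin_of_text|>"]
    for msg in messages:
        parts.append(
            f"<|start_header_id|>{msg['role']}<|end_header_id|>\n\n"
            f"{msg['content']}<|eot_id|>"
        )
    return "".join(parts)

example = [
    {"role": "user", "content": "What is LoRA?"},
    {"role": "assistant", "content": "A parameter-efficient fine-tuning method."},
]
print(format_chat(example))
```

During training, each row of the local dataset is converted into one such string before tokenization, so the model learns the same turn structure it will see at inference time.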
To run this project, you'll need to install the required packages. You can set this up in Google Colab or your local environment:
pip install torch transformers datasets pandas unsloth trl
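Before starting a long training run, it can help to confirm that all of the packages above actually imported cleanly, since a missing dependency otherwise surfaces only after the notebook has been running for a while. The snippet below is a small sanity check using only the standard library; the package list mirrors the install command above.

```python
import importlib.util

def missing_packages(names):
    """Return the subset of package names that cannot be imported."""
    return [n for n in names if importlib.util.find_spec(n) is None]

required = ["torch", "transformers", "datasets", "pandas", "unsloth", "trl"]
missing = missing_packages(required)
if missing:
    print("Missing packages:", ", ".join(missing))
else:
    print("All required packages are installed.")
```

Run this once after the `pip install` step; on Google Colab, restart the runtime first so newly installed packages are picked up.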