Fine_tune_Anything
Version 1.0.0
This repository provides tools for fine-tuning causal language models using the PEFT (Parameter-Efficient Fine-Tuning) library. It is designed to improve code generation tailored to a specific repository.
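As a rough sketch of the general approach (not this repository's actual code), fine-tuning with PEFT typically wraps the base causal language model in a lightweight adapter such as LoRA. The base model name and LoRA hyperparameters below are placeholder assumptions; in this project the real values come from conf/config.yaml.

```python
# Minimal, illustrative sketch of a PEFT/LoRA fine-tuning setup.
# The base model and hyperparameters are placeholders, not this repo's config.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, TaskType, get_peft_model

base_model_name = "gpt2"  # placeholder; set via conf/config.yaml in this project
model = AutoModelForCausalLM.from_pretrained(base_model_name)
tokenizer = AutoTokenizer.from_pretrained(base_model_name)

# Attach a LoRA adapter so only a small set of parameters is trained.
lora_config = LoraConfig(
    task_type=TaskType.CAUSAL_LM,
    r=8,
    lora_alpha=16,
    lora_dropout=0.05,
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # shows that only adapter weights are trainable
```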
Getting started:

1. Clone the repository: git clone [repository URL]
2. Install the dependencies: pip install -r requirements.txt
3. Create a data directory (mkdir data) and make sure to update config.yaml accordingly.
4. Tweak the conf/config.yaml file to set your model and tokenization parameters (a rough example follows below).
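For orientation, conf/config.yaml might look roughly like the sketch below. Apart from the "optimization" -> "result_dir" setting referenced in the usage instructions, the key names and values are illustrative assumptions, not the repository's actual schema.

```yaml
# Illustrative sketch only; key names other than optimization.result_dir
# are assumptions about the schema, not the project's real config.
model:
  name: gpt2            # placeholder base model
tokenization:
  max_length: 512       # placeholder tokenizer setting
data:
  path: data/           # matches the data directory created during setup
optimization:
  result_dir: results/  # where the fine-tuned model is written
```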
Kick off the fine-tuning process by running the main script: python main.py. The fine-tuned model is saved to the directory specified in config.yaml under "optimization" -> "result_dir".
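The README does not specify the save format, but assuming the fine-tuned weights are stored as a PEFT adapter in result_dir, they could be reloaded for inference roughly like this (base model name and paths are placeholders):

```python
# Sketch of reloading the fine-tuned model; assumes a PEFT adapter was
# saved to the result_dir configured in config.yaml (not confirmed here).
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_model_name = "gpt2"  # placeholder; must match the base model used for training
result_dir = "results/"   # placeholder; use config.yaml -> optimization -> result_dir

base = AutoModelForCausalLM.from_pretrained(base_model_name)
tokenizer = AutoTokenizer.from_pretrained(base_model_name)
model = PeftModel.from_pretrained(base, result_dir)
model.eval()

prompt = "def load_config(path):"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```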
We welcome your contributions to make this tool even more efficient and feature-rich. Please adhere to standard open-source contribution guidelines.
Apache License 2.0