Project Repository: Fine-tuning T5 with Various Methods
Overview
This repository contains code and notebooks for fine-tuning the T5 model using different methods. The primary objectives of this project are:
- Implement and explore fine-tuning methods from scratch, including Soft Prompt, Adapter, LoRA, and Full Fine-tuning (a minimal from-scratch LoRA sketch follows this list).
- Fine-tune T5 with the same methods using the PEFT (Parameter-Efficient Fine-Tuning), OpenDelta, and AdapterHub libraries (a PEFT usage sketch appears after the repository structure below).
- Conduct a comprehensive comparison of accuracy and the number of parameters trained for each fine-tuning method.
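As an illustration of the from-scratch approach, the sketch below shows the core idea of LoRA in PyTorch: a frozen pretrained linear layer augmented with a trainable low-rank update. The class name, rank `r`, and scaling `alpha` are illustrative assumptions, not the exact code in the notebooks.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Frozen linear layer plus a trainable low-rank update: y = Wx + (alpha/r) * B A x."""

    def __init__(self, base: nn.Linear, r: int = 8, alpha: int = 16):
        super().__init__()
        self.base = base
        for p in self.base.parameters():  # freeze the pretrained weights
            p.requires_grad_(False)
        self.scaling = alpha / r
        self.lora_A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)  # small random init
        self.lora_B = nn.Parameter(torch.zeros(base.out_features, r))        # zero init, so the update starts at 0

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.base(x) + self.scaling * (x @ self.lora_A.T @ self.lora_B.T)
```

Wrapping T5's attention projections in modules like this leaves only the A and B matrices trainable, which is what makes the parameter-count comparison across methods meaningful.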
Repository Structure
- 01_Full FineTune.ipynb: Jupyter notebook implementing the Full Fine-tuning method from scratch.
- 02_Soft Prompt.ipynb: Jupyter notebook implementing fine-tuning with the Soft Prompt method.
- 03_Adapter.ipynb: Jupyter notebook implementing fine-tuning with the Adapter method.
- 04_AdapterHub.ipynb: Jupyter notebook fine-tuning T5 using AdapterHub.
- 05_LoRA.ipynb: Jupyter notebook implementing fine-tuning with the LoRA method.
- 402212503_HosnaOyarhoseini_Report.pdf: Report file providing insights, analysis, and results of the experiments.
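For the library-based experiments, applying LoRA to T5 through PEFT typically looks like the following sketch; the checkpoint name, rank, and target modules here are illustrative assumptions rather than the exact settings used in the notebooks.

```python
from transformers import T5ForConditionalGeneration
from peft import LoraConfig, TaskType, get_peft_model

# Load a pretrained T5 checkpoint (the model size here is an assumption)
model = T5ForConditionalGeneration.from_pretrained("t5-small")

# Configure LoRA for a sequence-to-sequence task; T5 names its attention
# projections q, k, v, and o, so this targets the query and value projections
config = LoraConfig(
    task_type=TaskType.SEQ_2_SEQ_LM,
    r=8,
    lora_alpha=16,
    lora_dropout=0.1,
    target_modules=["q", "v"],
)
model = get_peft_model(model, config)
model.print_trainable_parameters()  # reports trainable vs. total parameter counts
```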
Libraries
- PyTorch
- PEFT
- OpenDelta
- AdapterHub
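The libraries above can be installed from PyPI; the package names below are assumptions based on the current distributions (AdapterHub now ships as `adapters`, formerly `adapter-transformers`).

```bash
pip install torch transformers peft opendelta adapters
```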