This repository implements the switch-case augmentation and hard-negative retrieval from the paper "Improving Contrastive Learning of Sentence Embeddings with Case-Augmented Positives and Retrieved Negatives". Combining the two approaches with SimCSE yields a model called Contrastive learning with Augmented and Retrieved Data for Sentence embeddings (CARDS).
Table 1. Examples of case-switched and retrieved sample sentences.

| Type | Sentence |
|---|---|
| Original | The story of the first book continues. |
| Case-switched | The story of the First book continues. |
| Retrieved | The story begins as a typical love story. |
| Random | It is held as an interim result. |
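As a concrete illustration of the case-switch operation in Table 1, here is a minimal sketch (our own, not the repository's implementation; the repo's `switch_case_method` variants v1/v2 may select characters differently): each alphabetic character's case is flipped with a small probability, yielding a positive that is semantically identical but tokenizes differently under a cased vocabulary.

```python
import random

def switch_case(sentence: str, probability: float = 0.05, seed=None) -> str:
    """Flip the case of each letter with the given probability.

    Illustrative sketch only; the repository's switch_case_method v1/v2
    variants may differ in which characters they flip.
    """
    rng = random.Random(seed)
    return "".join(
        c.swapcase() if c.isalpha() and rng.random() < probability else c
        for c in sentence
    )

# e.g. "The story of the first book continues." -> "The story of the First book continues."
print(switch_case("The story of the first book continues.", probability=0.15, seed=0))
```

The `switch_case_probability` hyper-parameter in the training scripts below plays the role of `probability` here.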
Table 2. Performance on sentence-embedding tasks.

| Pretraining | Finetuning | STS12 | STS13 | STS14 | STS15 | STS16 | STS-B | SICK-R | Avg. |
|---|---|---|---|---|---|---|---|---|---|
| RoBERTa-base | SimCSE + CARDS | 72.65 | 84.26 | 76.52 | 82.98 | 82.73 | 82.04 | 70.66 | 78.83 |
| RoBERTa-large | SimCSE + CARDS | 74.63 | 86.27 | 79.25 | 85.93 | 83.17 | 83.86 | 72.77 | 80.84 |
Download links: CARDS-RoBERTa-base (download, 440MB), CARDS-RoBERTa-large (download, 1.23GB).
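The released archives are expected to unpack into standard Huggingface checkpoints (an assumption on our part, since the repo is built on Huggingface Transformers, see below), in which case they load like any other model; the path here is a placeholder for wherever the archive was extracted:

```python
from transformers import AutoModel, AutoTokenizer

# placeholder path to the unpacked CARDS-RoBERTa-base checkpoint
ckpt = "path/to/cards-roberta-base"
tokenizer = AutoTokenizer.from_pretrained(ckpt)
model = AutoModel.from_pretrained(ckpt)
```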
Table 3. Performance on GLUE tasks.

| Pretraining | Finetuning | MNLI-m | QQP | QNLI | SST-2 | CoLA | STS-B | MRPC | RTE | Avg. |
|---|---|---|---|---|---|---|---|---|---|---|
| DeBERTaV2-xxlarge | R-Drop + switch-case | 92.0 | 93.0 | 96.3 | 97.2 | 75.5 | 93.6 | 93.9 | 94.2 | 91.7 |
This repo is built on Huggingface Transformers and SimCSE. See requirements.txt for package versions.
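Assuming a standard pip workflow, the pinned dependencies install with:

```sh
pip install -r requirements.txt
```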
To prepare the data:

```sh
# 1. Download the wiki-1m dataset:
# - use wget -P target_folder in data/datasets/download_wiki.sh, and run
bash data/datasets/download_wiki.sh
# - modify train_file in scripts/bert/run_simcse_pretraining_v2.sh
# 2. Preprocess the wiki-1m dataset for negative retrieval:
# - deduplicate the wiki-1m dataset, and (optionally) remove sentences with fewer than three words
#   (see the sketch after this block)
# - modify paths in data/datasets/simcse_utils.py, then run it to get model representations for all sentences in the dataset
python data/datasets/simcse_utils.py
# 3. Download the SentEval evaluation data:
# - use wget -P target_folder in data/datasets/download_senteval.sh, and run
bash data/datasets/download_senteval.sh
```
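Conceptually, step 2 reduces to the sketch below (our illustration only; the paths and encoder checkpoint are assumptions, and `data/datasets/simcse_utils.py` is the authoritative implementation): deduplicate the corpus, drop very short sentences, then cache a vector for every surviving sentence with a pretrained SimCSE encoder.

```python
import numpy as np
import torch
from transformers import AutoModel, AutoTokenizer

# hypothetical paths; the real ones are configured inside simcse_utils.py
WIKI1M = "data/datasets/wiki1m_for_simcse.txt"
CACHE = "data/datasets/wiki1m_sent_reps.npy"

# 1) deduplicate (order-preserving) and drop sentences with fewer than three words
with open(WIKI1M, encoding="utf-8") as f:
    sentences = list(dict.fromkeys(line.strip() for line in f))
sentences = [s for s in sentences if len(s.split()) >= 3]

# 2) embed every sentence; a public unsupervised SimCSE checkpoint stands in
#    for whatever model the script is actually configured with
tokenizer = AutoTokenizer.from_pretrained("princeton-nlp/unsup-simcse-roberta-base")
encoder = AutoModel.from_pretrained("princeton-nlp/unsup-simcse-roberta-base").eval()

reps = []
with torch.no_grad():
    for i in range(0, len(sentences), 256):
        batch = tokenizer(sentences[i : i + 256], padding=True, truncation=True,
                          max_length=32, return_tensors="pt")
        # use the [CLS] vector as the sentence representation, as in SimCSE evaluation
        reps.append(encoder(**batch).last_hidden_state[:, 0].cpu().numpy())

np.save(CACHE, np.concatenate(reps))
```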
Before running the code, users may need to change the default model checkpoints and I/O paths, including:

- `scripts/bert/run_simcse_grid.sh`: lines 42-50 (train_file, train_file_dedupl (optional), output_dir, tensorboard_dir, sent_rep_cache_file, senteval_data_dir)
- `scripts/bert/run_simcse_pretraining.sh`: lines 17-20 (train_file, output_dir, tensorboard_dir, senteval_data_dir), line 45 (sent_rep_cache_files), lines 166-213 (model_name_or_path, config_name)

```sh
# MUST cd to the folder which contains data/, examples/, models/, scripts/, training/ and utils/
cd YOUR_CARDS_WORKING_DIRECTORY

# roberta-base
new_train_file=path_to_wiki1m
sent_rep_cache_file=path_to_sentence_representation_file  # generated by data/datasets/simcse_utils.py

# run a model with a single set of hyper-parameters
# when running the model for the very first time, add overwrite_cache=True; this produces a processed training-data cache
bash scripts/bert/run_simcse_grid.sh \
    model_type=roberta model_size=base \
    cuda=0,1,2,3 seed=42 learning_rate=4e-5 \
    new_train_file=${new_train_file} sent_rep_cache_file=${sent_rep_cache_file} \
    dyn_knn=65 sample_k=1 knn_metric=cos \
    switch_case_probability=0.05 switch_case_method=v2 \
    print_only=False
# grid-search on hyper-parameters
bash scripts/bert/run_simcse_grid.sh \
    model_type=roberta model_size=base \
    cuda=0,1,2,3 seed=42 learning_rate=1e-5,2e-5,4e-5 \
    new_train_file=${new_train_file} sent_rep_cache_file=${sent_rep_cache_file} \
    dyn_knn=0,9,65 sample_k=1 knn_metric=cos \
    switch_case_probability=0,0.05,0.1,0.15 switch_case_method=v2 \
    print_only=False
# roberta-large
bash scripts/bert/run_simcse_grid.sh \
    model_type=roberta model_size=large \
    cuda=0,1,2,3 seed=42 learning_rate=7.5e-6 \
    new_train_file=${new_train_file} sent_rep_cache_file=${sent_rep_cache_file} \
    dyn_knn=9 sample_k=1 knn_metric=cos \
    switch_case_probability=0.1 switch_case_method=v1 \
    print_only=False
```
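The `dyn_knn`, `sample_k`, and `knn_metric` options above govern hard-negative retrieval. Our reading (an assumption about the training code, not a verified description) is that each training sentence's cached representation is compared against all others, and `sample_k` of its `dyn_knn` nearest neighbors under the chosen metric are drawn as hard negatives. A minimal cosine-similarity version:

```python
import numpy as np

def retrieve_hard_negatives(reps, query_idx, dyn_knn=65, sample_k=1, rng=None):
    """Sample `sample_k` hard negatives for sentence `query_idx` from its
    `dyn_knn` nearest neighbors under cosine similarity.

    reps: (N, d) array of cached sentence representations.
    Sketch only: the actual code may use a different search backend,
    neighbor ordering, or duplicate filtering.
    """
    rng = rng or np.random.default_rng()
    normed = reps / np.linalg.norm(reps, axis=1, keepdims=True)
    sims = normed @ normed[query_idx]                # cosine similarity to the query
    sims[query_idx] = -np.inf                        # never retrieve the query itself
    top = np.argpartition(-sims, dyn_knn)[:dyn_knn]  # unordered top-dyn_knn indices
    return rng.choice(top, size=sample_k, replace=False)
```

This is presumably also why step 2 deduplicates the corpus: an exact duplicate of the query would otherwise surface as its nearest neighbor and be retrieved as a false negative.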
To evaluate a saved model without retraining:

```sh
# provide train_file, output_dir, tensorboard_dir if different from the default values
model_name=name_of_saved_model  # e.g., roberta_large_bs128x4_lr2e-5_switchcase0.1_v2
bash ./scripts/bert/run_simcse_pretraining.sh \
    model_name_or_path=${output_dir}/${model_name} model_name=${model_name} config_name=${output_dir}/${model_name}/config.json \
    train_file=${train_file} output_dir=${output_dir}/test_only tensorboard_dir=${tensorboard_dir} \
    model_type=roberta model_size=base do_train=False \
    cuda=0 ngpu=1
```

For unknown reasons, the set of good hyper-parameters differs between Huggingface Transformers v4.11.3 and v4.15.0. The hyper-parameters listed above were obtained with Transformers v4.11.3.
If you find this repo useful, please cite:

```bibtex
@inproceedings{cards,
title = "Improving Contrastive Learning of Sentence Embeddings with Case-Augmented Positives and Retrieved Negatives",
author = "Wei Wang and Liangzhu Ge and Jingqiao Zhang and Cheng Yang",
booktitle = "The 45th International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR)",
year = "2022"
}
```