Han-Wu-Shuang (Bruce) Bao 包寒吴霜
psychbruce.github.io
See the information displayed when you run library(FMAT) for the APA-7 citation of the version you have installed. To use FMAT, the R package FMAT and three Python packages (transformers, torch, huggingface-hub) all need to be installed.
## Method 1: Install from CRAN
install.packages("FMAT")

## Method 2: Install from GitHub
install.packages("devtools")
devtools::install_github("psychbruce/FMAT", force = TRUE)

Install Anaconda (a recommended package manager that automatically installs Python, Python IDEs such as Spyder, and a large list of required Python package dependencies).
Specify the Anaconda Python interpreter in RStudio.
RStudio → Tools → Global/Project Options
→ Python → Select → Conda Environments
→ Choose ".../anaconda3/python.exe"
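Alternatively, the interpreter can be selected programmatically with reticulate before FMAT is loaded. This is a minimal sketch, assuming a default Anaconda installation; the environment name is a placeholder to adjust:

library(reticulate)
# Point reticulate to the Anaconda "base" environment (adjust if needed):
use_condaenv("base", required = TRUE)
py_config()  # verify which Python interpreter is active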
Install specific versions of the Python packages transformers, torch, and huggingface-hub.
(RStudio Terminal / Anaconda Prompt / Windows Command Prompt)
For CPU users:
pip install transformers==4.40.2 torch==2.2.1 huggingface-hub==0.20.3
For GPU (CUDA) users:
pip install transformers==4.40.2 huggingface-hub==0.20.3
pip install torch==2.2.1 --index-url https://download.pytorch.org/whl/cu121
If you see the error HTTPSConnectionPool(host='huggingface.co', port=443), please try (1) reinstalling Anaconda so that some unknown issues may be fixed, or (2) downgrading the "urllib3" package to version ≤ 1.25.11 (pip install urllib3==1.25.11) so that it uses the HTTP (insecure) protocol to connect to Hugging Face.

Use BERT_download() to download BERT models. Model files are saved to your local folder "%USERPROFILE%/.cache/huggingface". A full list of BERT models is available on Hugging Face.
Use BERT_info() and BERT_vocab() to find detailed information about BERT models.
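For example, a minimal sketch (assuming the model has already been downloaded with BERT_download(); the words passed to BERT_vocab() are arbitrary examples):

library(FMAT)
# Model size, vocabulary size, hidden dimensions, and mask token:
BERT_info("bert-base-uncased")
# Check how candidate [MASK] words are tokenized in the model vocabulary
# (words that split into subwords cannot fill a single [MASK]):
BERT_vocab("bert-base-uncased", c("doctor", "nurse"))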
Design queries that conceptually represent the constructs you would measure (see Bao, 2024, JPSP for how to design queries).
Use FMAT_query() and/or FMAT_query_bind() to prepare a data.table of queries.
Use FMAT_run() to get raw data (probability estimates) for further analysis.
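For illustration, a minimal end-to-end sketch following the pattern of the package examples; the query text, MASK words, and TARGET words here are arbitrary placeholders:

library(FMAT)
# One query template; {TARGET} is expanded over the TARGET words,
# and [MASK] is estimated for each MASK word:
query = FMAT_query(
  "[MASK] is a {TARGET}.",
  MASK = .(Male = "He", Female = "She"),
  TARGET = .(Occupation = c("doctor", "nurse", "artist"))
)
data = FMAT_run("bert-base-uncased", query)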
Several preprocessing steps have been included in the functions for easier use (see FMAT_run() for details).
For models that use <mask> rather than [MASK] as the mask token, input queries will be modified automatically, so users can always use [MASK] in query design. The prefix characters u0120 and u2581 will be added automatically to match whole words (rather than subwords) for [MASK]. By default, the FMAT package uses the CPU so that the functionality is available to all users. For advanced users who want to accelerate the pipeline with a GPU, FMAT_run() now supports running on a GPU device, about 3x faster than the CPU.
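For example, assuming the gpu argument of FMAT_run() (see its documentation for accepted values), a GPU run would look like:

# Run the same query on the GPU (requires a CUDA-enabled torch):
data = FMAT_run("bert-base-uncased", query, gpu = TRUE)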
Test results (on the developer's computer, depending on the size of the BERT model):
Checklist:
Ensure that you have installed the Python package torch with CUDA support. If you have installed a version of torch without CUDA support, please uninstall it first (command: pip uninstall torch) and then install the suggested one. The corresponding version of the CUDA Toolkit may also be installed (e.g., for a torch build supporting CUDA 12.1, the same version 12.1 of the CUDA Toolkit can be installed). Example code for installing PyTorch with CUDA support:
(RStudio Terminal / Anaconda Prompt / Windows Command Prompt)
pip install torch==2.2.1 --index-url https://download.pytorch.org/whl/cu121
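To verify from R that the CUDA-enabled build is active, one can query torch through reticulate; a minimal sketch, assuming reticulate is already pointed at the Anaconda interpreter:

library(reticulate)
torch = import("torch")
torch$cuda$is_available()  # should be TRUE for a CUDA-enabled build
torch$version$cuda         # e.g., "12.1"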
The reliability and validity of the following 12 representative BERT models have been established in my research articles, but future work is needed to examine the performance of other models.
(Model name on Hugging Face - size of the downloaded model files)
If you are new to BERT, these references may be helpful:
library(FMAT)
models = c(
  "bert-base-uncased",
  "bert-base-cased",
  "bert-large-uncased",
  "bert-large-cased",
  "distilbert-base-uncased",
  "distilbert-base-cased",
  "albert-base-v1",
  "albert-base-v2",
  "roberta-base",
  "distilroberta-base",
  "vinai/bertweet-base",
  "vinai/bertweet-large"
)
BERT_download(models)

ℹ Device Info:
R Packages:
FMAT 2024.5
reticulate 1.36.1
Python Packages:
transformers 4.40.2
torch 2.2.1+cu121
NVIDIA GPU CUDA Support:
CUDA Enabled: TRUE
CUDA Version: 12.1
GPU (Device): NVIDIA GeForce RTX 2050
── Downloading model "bert-base-uncased" ──────────────────────────────────────────
→ (1) Downloading configuration...
config.json: 100%|██████████| 570/570 [00:00<00:00, 114kB/s]
→ (2) Downloading tokenizer...
tokenizer_config.json: 100%|██████████| 48.0/48.0 [00:00<00:00, 23.9kB/s]
vocab.txt: 100%|██████████| 232k/232k [00:00<00:00, 1.50MB/s]
tokenizer.json: 100%|██████████| 466k/466k [00:00<00:00, 1.98MB/s]
→ (3) Downloading model...
model.safetensors: 100%|██████████| 440M/440M [00:36<00:00, 12.1MB/s]
✔ Successfully downloaded model "bert-base-uncased"
── Downloading model "bert-base-cased" ────────────────────────────────────────────
→ (1) Downloading configuration...
config.json: 100%|██████████| 570/570 [00:00<00:00, 63.3kB/s]
→ (2) Downloading tokenizer...
tokenizer_config.json: 100%|██████████| 49.0/49.0 [00:00<00:00, 8.66kB/s]
vocab.txt: 100%|██████████| 213k/213k [00:00<00:00, 1.39MB/s]
tokenizer.json: 100%|██████████| 436k/436k [00:00<00:00, 10.1MB/s]
→ (3) Downloading model...
model.safetensors: 100%|██████████| 436M/436M [00:37<00:00, 11.6MB/s]
✔ Successfully downloaded model "bert-base-cased"
── Downloading model "bert-large-uncased" ─────────────────────────────────────────
→ (1) Downloading configuration...
config.json: 100%|██████████| 571/571 [00:00<00:00, 268kB/s]
→ (2) Downloading tokenizer...
tokenizer_config.json: 100%|██████████| 48.0/48.0 [00:00<00:00, 12.0kB/s]
vocab.txt: 100%|██████████| 232k/232k [00:00<00:00, 1.50MB/s]
tokenizer.json: 100%|██████████| 466k/466k [00:00<00:00, 1.99MB/s]
→ (3) Downloading model...
model.safetensors: 100%|██████████| 1.34G/1.34G [01:36<00:00, 14.0MB/s]
✔ Successfully downloaded model "bert-large-uncased"
── Downloading model "bert-large-cased" ───────────────────────────────────────────
→ (1) Downloading configuration...
config.json: 100%|██████████| 762/762 [00:00<00:00, 125kB/s]
→ (2) Downloading tokenizer...
tokenizer_config.json: 100%|██████████| 49.0/49.0 [00:00<00:00, 12.3kB/s]
vocab.txt: 100%|██████████| 213k/213k [00:00<00:00, 1.41MB/s]
tokenizer.json: 100%|██████████| 436k/436k [00:00<00:00, 5.39MB/s]
→ (3) Downloading model...
model.safetensors: 100%|██████████| 1.34G/1.34G [01:35<00:00, 14.0MB/s]
✔ Successfully downloaded model "bert-large-cased"
── Downloading model "distilbert-base-uncased" ────────────────────────────────────
→ (1) Downloading configuration...
config.json: 100%|██████████| 483/483 [00:00<00:00, 161kB/s]
→ (2) Downloading tokenizer...
tokenizer_config.json: 100%|██████████| 48.0/48.0 [00:00<00:00, 9.46kB/s]
vocab.txt: 100%|██████████| 232k/232k [00:00<00:00, 16.5MB/s]
tokenizer.json: 100%|██████████| 466k/466k [00:00<00:00, 14.8MB/s]
→ (3) Downloading model...
model.safetensors: 100%|██████████| 268M/268M [00:19<00:00, 13.5MB/s]
✔ Successfully downloaded model "distilbert-base-uncased"
── Downloading model "distilbert-base-cased" ──────────────────────────────────────
→ (1) Downloading configuration...
config.json: 100%|██████████| 465/465 [00:00<00:00, 233kB/s]
→ (2) Downloading tokenizer...
tokenizer_config.json: 100%|██████████| 49.0/49.0 [00:00<00:00, 9.80kB/s]
vocab.txt: 100%|██████████| 213k/213k [00:00<00:00, 1.39MB/s]
tokenizer.json: 100%|██████████| 436k/436k [00:00<00:00, 8.70MB/s]
→ (3) Downloading model...
model.safetensors: 100%|██████████| 263M/263M [00:24<00:00, 10.9MB/s]
✔ Successfully downloaded model "distilbert-base-cased"
── Downloading model "albert-base-v1" ─────────────────────────────────────────────
→ (1) Downloading configuration...
config.json: 100%|██████████| 684/684 [00:00<00:00, 137kB/s]
→ (2) Downloading tokenizer...
tokenizer_config.json: 100%|██████████| 25.0/25.0 [00:00<00:00, 3.57kB/s]
spiece.model: 100%|██████████| 760k/760k [00:00<00:00, 4.93MB/s]
tokenizer.json: 100%|██████████| 1.31M/1.31M [00:00<00:00, 13.4MB/s]
→ (3) Downloading model...
model.safetensors: 100%|██████████| 47.4M/47.4M [00:03<00:00, 13.4MB/s]
✔ Successfully downloaded model "albert-base-v1"
── Downloading model "albert-base-v2" ─────────────────────────────────────────────
→ (1) Downloading configuration...
config.json: 100%|██████████| 684/684 [00:00<00:00, 137kB/s]
→ (2) Downloading tokenizer...
tokenizer_config.json: 100%|██████████| 25.0/25.0 [00:00<00:00, 4.17kB/s]
spiece.model: 100%|██████████| 760k/760k [00:00<00:00, 5.10MB/s]
tokenizer.json: 100%|██████████| 1.31M/1.31M [00:00<00:00, 6.93MB/s]
→ (3) Downloading model...
model.safetensors: 100%|██████████| 47.4M/47.4M [00:03<00:00, 13.8MB/s]
✔ Successfully downloaded model "albert-base-v2"
── Downloading model "roberta-base" ───────────────────────────────────────────────
→ (1) Downloading configuration...
config.json: 100%|██████████| 481/481 [00:00<00:00, 80.3kB/s]
→ (2) Downloading tokenizer...
tokenizer_config.json: 100%|██████████| 25.0/25.0 [00:00<00:00, 6.25kB/s]
vocab.json: 100%|██████████| 899k/899k [00:00<00:00, 2.72MB/s]
merges.txt: 100%|██████████| 456k/456k [00:00<00:00, 8.22MB/s]
tokenizer.json: 100%|██████████| 1.36M/1.36M [00:00<00:00, 8.56MB/s]
→ (3) Downloading model...
model.safetensors: 100%|██████████| 499M/499M [00:38<00:00, 12.9MB/s]
✔ Successfully downloaded model "roberta-base"
── Downloading model "distilroberta-base" ─────────────────────────────────────────
→ (1) Downloading configuration...
config.json: 100%|██████████| 480/480 [00:00<00:00, 96.4kB/s]
→ (2) Downloading tokenizer...
tokenizer_config.json: 100%|██████████| 25.0/25.0 [00:00<00:00, 12.0kB/s]
vocab.json: 100%|██████████| 899k/899k [00:00<00:00, 6.59MB/s]
merges.txt: 100%|██████████| 456k/456k [00:00<00:00, 9.46MB/s]
tokenizer.json: 100%|██████████| 1.36M/1.36M [00:00<00:00, 11.5MB/s]
→ (3) Downloading model...
model.safetensors: 100%|██████████| 331M/331M [00:25<00:00, 13.0MB/s]
✔ Successfully downloaded model "distilroberta-base"
── Downloading model "vinai/bertweet-base" ────────────────────────────────────────
→ (1) Downloading configuration...
config.json: 100%|██████████| 558/558 [00:00<00:00, 187kB/s]
→ (2) Downloading tokenizer...
vocab.txt: 100%|██████████| 843k/843k [00:00<00:00, 7.44MB/s]
bpe.codes: 100%|██████████| 1.08M/1.08M [00:00<00:00, 7.01MB/s]
tokenizer.json: 100%|██████████| 2.91M/2.91M [00:00<00:00, 9.10MB/s]
→ (3) Downloading model...
pytorch_model.bin: 100%|██████████| 543M/543M [00:48<00:00, 11.1MB/s]
✔ Successfully downloaded model "vinai/bertweet-base"
── Downloading model "vinai/bertweet-large" ───────────────────────────────────────
→ (1) Downloading configuration...
config.json: 100%|██████████| 614/614 [00:00<00:00, 120kB/s]
→ (2) Downloading tokenizer...
vocab.json: 100%|██████████| 899k/899k [00:00<00:00, 5.90MB/s]
merges.txt: 100%|██████████| 456k/456k [00:00<00:00, 7.30MB/s]
tokenizer.json: 100%|██████████| 1.36M/1.36M [00:00<00:00, 8.31MB/s]
→ (3) Downloading model...
pytorch_model.bin: 100%|██████████| 1.42G/1.42G [02:29<00:00, 9.53MB/s]
✔ Successfully downloaded model "vinai/bertweet-large"
── Downloaded models: ──
size
albert-base-v1 45 MB
albert-base-v2 45 MB
bert-base-cased 416 MB
bert-base-uncased 420 MB
bert-large-cased 1277 MB
bert-large-uncased 1283 MB
distilbert-base-cased 251 MB
distilbert-base-uncased 256 MB
distilroberta-base 316 MB
roberta-base 476 MB
vinai/bertweet-base 517 MB
vinai/bertweet-large 1356 MB
✔ Downloaded models saved at C:/Users/Bruce/.cache/huggingface/hub (6.52 GB)
BERT_info(models)

model size vocab dims mask
<fctr> <char> <int> <int> <char>
1: bert-base-uncased 420MB 30522 768 [MASK]
2: bert-base-cased 416MB 28996 768 [MASK]
3: bert-large-uncased 1283MB 30522 1024 [MASK]
4: bert-large-cased 1277MB 28996 1024 [MASK]
5: distilbert-base-uncased 256MB 30522 768 [MASK]
6: distilbert-base-cased 251MB 28996 768 [MASK]
7: albert-base-v1 45MB 30000 128 [MASK]
8: albert-base-v2 45MB 30000 128 [MASK]
9: roberta-base 476MB 50265 768 <mask>
10: distilroberta-base 316MB 50265 768 <mask>
11: vinai/bertweet-base 517MB 64001 768 <mask>
12: vinai/bertweet-large 1356MB 50265 1024 <mask>
(Tested 2024-05-16 on the developer's computer: HP ProBook 450 G10 Notebook PC)
While the FMAT is an innovative method for the computational intelligent analysis of psychology and society, you may also seek integrative toolboxes for other text-analytic methods. Another R package I have developed, PsychWordVec, is useful and user-friendly for word embedding analysis (e.g., the Word Embedding Association Test, WEAT). Please refer to its documentation and feel free to use it.