# japanese-clip
v0.2.0

This repository contains the code for the Japanese CLIP (Contrastive Language-Image Pre-training) variants by rinna Co., Ltd.
| Table of Contents |
|---|
| [News](#news) |
| [Pretrained models](#pretrained-models) |
| [Usage](#usage) |
| [Citation](#citation) |
| [License](#license) |
## News

**v0.2.0 is released!** rinna/japanese-cloob-vit-b-16 now achieves 54.64 zero-shot top-1 accuracy on the ImageNet validation set (see scripts/example.py). The evaluation prompt templates are Japanese translations of OpenAI's 80 templates, cleaned up for Japanese.

## Pretrained models

| Model name | top1* | top5* |
|---|---|---|
| rinna/japanese-cloob-vit-b-16 | 54.64 | 72.86 |
| rinna/japanese-clip-vit-b-16 | 50.69 | 72.35 |
| sonoisa/clip-vit-b-32-japanese-v1 | 38.88 | 60.71 |
| Multilingual-CLIP | 14.36 | 27.28 |
*Zero-shot top-k accuracy on the ImageNet validation set.
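These numbers come from embedding each class name under many Japanese prompt templates and averaging the text features. The sketch below illustrates that evaluation recipe, assuming the `ja_clip` API shown in the Usage section; the three templates here are hypothetical stand-ins for the cleaned 80-template set, which is not reproduced here.

```python
import torch
from PIL import Image

import japanese_clip as ja_clip

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = ja_clip.load("rinna/japanese-cloob-vit-b-16", cache_dir="/tmp/japanese_clip", device=device)
tokenizer = ja_clip.load_tokenizer()

# Hypothetical subset of Japanese prompt templates ("a photo of a {}", etc.).
templates = ["{}の写真", "{}の画像", "小さな{}の写真"]
class_names = ["犬", "猫", "象"]

with torch.no_grad():
    class_embeddings = []
    for name in class_names:
        prompts = [t.format(name) for t in templates]
        encodings = ja_clip.tokenize(texts=prompts, max_seq_len=77, device=device, tokenizer=tokenizer)
        feats = model.get_text_features(**encodings)      # (n_templates, dim)
        feats = feats / feats.norm(dim=-1, keepdim=True)  # normalize each template embedding
        mean_feat = feats.mean(dim=0)                     # average over templates
        class_embeddings.append(mean_feat / mean_feat.norm())
    text_features = torch.stack(class_embeddings)         # (n_classes, dim)

    image = preprocess(Image.open("./data/dog.jpeg")).unsqueeze(0).to(device)
    image_features = model.get_image_features(image)
    image_features = image_features / image_features.norm(dim=-1, keepdim=True)

    probs = (100.0 * image_features @ text_features.T).softmax(dim=-1)

print(probs)  # the highest probability should land on 犬 (dog)
```

Averaging over many templates makes the class embedding less sensitive to the wording of any single prompt, which is why template quality affects the reported accuracy.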
## Usage

```shell
$ pip install git+https://github.com/rinnakk/japanese-clip.git
```

```python
from PIL import Image
import torch
import japanese_clip as ja_clip

device = "cuda" if torch.cuda.is_available() else "cpu"

# ja_clip.available_models()
# ['rinna/japanese-clip-vit-b-16', 'rinna/japanese-cloob-vit-b-16']
# If you want v0.1.0 models, set `revision='v0.1.0'`
model, preprocess = ja_clip.load("rinna/japanese-clip-vit-b-16", cache_dir="/tmp/japanese_clip", device=device)
tokenizer = ja_clip.load_tokenizer()

image = preprocess(Image.open("./data/dog.jpeg")).unsqueeze(0).to(device)
encodings = ja_clip.tokenize(
    texts=["犬", "猫", "象"],
    max_seq_len=77,
    device=device,
    tokenizer=tokenizer,  # this is optional; if you don't pass it, the tokenizer is loaded each time
)

with torch.no_grad():
    image_features = model.get_image_features(image)
    text_features = model.get_text_features(**encodings)

    text_probs = (100.0 * image_features @ text_features.T).softmax(dim=-1)

print("Label probs:", text_probs)  # prints: [[1.0, 0.0, 0.0]]
```
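The same two encoders also work in the opposite direction: embed a set of images once, then rank them against a free-form Japanese query. Below is a small text-to-image retrieval sketch under the same assumptions; the image paths and the query string are hypothetical.

```python
import torch
from PIL import Image

import japanese_clip as ja_clip

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = ja_clip.load("rinna/japanese-clip-vit-b-16", cache_dir="/tmp/japanese_clip", device=device)
tokenizer = ja_clip.load_tokenizer()

paths = ["./data/dog.jpeg", "./data/cat.jpeg", "./data/elephant.jpeg"]  # hypothetical files
images = torch.cat([preprocess(Image.open(p)).unsqueeze(0) for p in paths]).to(device)

with torch.no_grad():
    # Embed all candidate images in one batch and L2-normalize the features.
    image_features = model.get_image_features(images)
    image_features = image_features / image_features.norm(dim=-1, keepdim=True)

    # Embed the query text the same way.
    encodings = ja_clip.tokenize(texts=["草原を歩く象"], max_seq_len=77, device=device, tokenizer=tokenizer)
    text_features = model.get_text_features(**encodings)
    text_features = text_features / text_features.norm(dim=-1, keepdim=True)

    scores = (text_features @ image_features.T).squeeze(0)  # cosine similarity per image
    best = scores.argmax().item()

print(f"Best match for the query: {paths[best]} (score={scores[best]:.3f})")
```

For a larger gallery, the normalized image features can be computed once and cached, since only the query embedding changes between searches.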
## Citation

To cite this repository:

```bibtex
@inproceedings{japanese-clip,
    author    = {シーン 誠 and 趙 天雨 and 沢田 慶},
    title     = {日本語における言語画像事前学習モデルの構築と公開},
    booktitle = {The 25th Meeting on Image Recognition and Understanding},
    year      = {2022},
    month     = {7},
}
```

## License

This repository is released under the Apache 2.0 License.