A Toolkit for Evaluating Large Vision-Language Models.
English | 简体中文 | 日本語
OC Leaderboard • Quickstart • Datasets & Models • Development • Goal • Citation
HF Leaderboard • Evaluation Records • HF Video Leaderboard • Discord • Report
VLMEvalKit (the python package name is vlmeval) is an open-source evaluation toolkit for large vision-language models (LVLMs). It enables one-command evaluation of LVLMs on various benchmarks, without the heavy workload of preparing data under multiple repositories. In VLMEvalKit, we adopt generation-based evaluation for all LVLMs, and provide evaluation results obtained with both exact matching and LLM-based answer extraction.
VLMEVALKIT_USE_MODELSCOPE: set this environment variable to download the supported video benchmarks from ModelScope. Run `python run.py --help` for more details. See [QuickStart | 快速开始] for a quick-start guide.
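For instance, here is a minimal sketch of enabling the ModelScope download path before launching an evaluation; the value "1" and the `run.py --help` invocation are illustrative assumptions, and exporting the variable in your shell works just as well:

```python
import os
import subprocess

# Opt in to downloading the supported video benchmarks from ModelScope.
# "1" is an assumed truthy value; `export VLMEVALKIT_USE_MODELSCOPE=1` in the
# shell has the same effect.
os.environ["VLMEVALKIT_USE_MODELSCOPE"] = "1"

# The child process inherits the environment variable set above.
subprocess.run(["python", "run.py", "--help"], check=True)
```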
The performance numbers on our official multi-modal leaderboards can be downloaded from here!
OpenVLM Leaderboard: Download All DETAILED Results.
Supported Image Understanding Datasets
MCQ: Multi-choice question; Y/N: Yes-or-No Questions; MTT: Benchmark with Multi-turn Conversations; MTI: Benchmark with Multi-Image as Inputs.

| Dataset | Dataset Names (for run.py) | Task | Dataset | Dataset Names (for run.py) | Task |
|---|---|---|---|---|---|
| MMBench Series: MMBench, MMBench-CN, CCBench | MMBench_DEV_[EN/CN] MMBench_TEST_[EN/CN] MMBench_DEV_[EN/CN]_V11 MMBench_TEST_[EN/CN]_V11 CCBench | MCQ | MMStar | MMStar | MCQ |
| MME | MME | Y/N | SEEDBench Series | SEEDBench_IMG SEEDBench2 SEEDBench2_Plus | MCQ |
| MM-Vet | MMVet | VQA | MMMU | MMMU_[DEV_VAL/TEST] | MCQ |
| MathVista | MathVista_MINI | VQA | ScienceQA_IMG | ScienceQA_[VAL/TEST] | MCQ |
| COCO Caption | COCO_VAL | Caption | HallusionBench | HallusionBench | Y/N |
| OCRVQA* | OCRVQA_[TESTCORE/TEST] | VQA | TextVQA* | TextVQA_VAL | VQA |
| ChartQA* | ChartQA_TEST | VQA | AI2D | AI2D_[TEST/TEST_NO_MASK] | MCQ |
| LLaVABench | LLaVABench | VQA | DocVQA+ | DocVQA_[VAL/TEST] | VQA |
| InfoVQA+ | InfoVQA_[VAL/TEST] | VQA | OCRBench | OCRBench | VQA |
| RealWorldQA | RealWorldQA | MCQ | POPE | POPE | Y/N |
| Core-MM- | CORE_MM (MTI) | VQA | MMT-Bench | MMT-Bench_[VAL/ALL] MMT-Bench_[VAL/ALL]_MI | MCQ (MTI) |
| MLLMGuard - | MLLMGuard_DS | VQA | AesBench+ | AesBench_[VAL/TEST] | MCQ |
| VCR-wiki + | VCR_[EN/ZH]_[EASY/HARD]_[ALL/500/100] | VQA | MMLongBench-Doc+ | MMLongBench_DOC | VQA (MTI) |
| BLINK | BLINK | MCQ (MTI) | MathVision+ | MathVision MathVision_MINI | VQA |
| MT-VQA | MTVQA_TEST | VQA | MMDU+ | MMDU | VQA (MTT, MTI) |
| Q-Bench1 | Q-Bench1_[VAL/TEST] | MCQ | A-Bench | A-Bench_[VAL/TEST] | MCQ |
| DUDE+ | DUDE | VQA (MTI) | SlideVQA+ | SLIDEVQA SLIDEVQA_MINI | VQA (MTI) |
| TaskMeAnything ImageQA Random+ | TaskMeAnything_v1_imageqa_random | MCQ | MMMB and Multilingual MMBench+ | MMMB_[ar/cn/en/pt/ru/tr] MMBench_dev_[ar/cn/en/pt/ru/tr] MMMB MTL_MMBench_DEV (PS: MMMB & MTL_MMBench_DEV are all-in-one names for 6 langs) | MCQ |
| A-OKVQA+ | A-OKVQA | MCQ | MuirBench+ | MUIRBench | MCQ |
| GMAI-MMBench+ | GMAI-MMBench_VAL | MCQ | TableVQABench+ | TableVQABench | VQA |
| MME-RealWorld+ | MME-RealWorld[-CN] MME-RealWorld-Lite | MCQ | HRBench+ | HRBench[4K/8K] | MCQ |
| MathVerse+ | MathVerse_MINI MathVerse_MINI_Vision_Only MathVerse_MINI_Vision_Dominant MathVerse_MINI_Vision_Intensive MathVerse_MINI_Text_Lite MathVerse_MINI_Text_Dominant | VQA | AMBER+ | AMBER | Y/N |
| CRPE+ | CRPE_[EXIST/RELATION] | VQA | MMSearch | - | - |
| R-Bench+ | R-Bench-[Dis/Ref] | MCQ | WorldMedQA-V+ | WorldMedQA-V | MCQ |
| GQA+ | GQA_TestDev_Balanced | VQA | MIA-Bench+ | MIA-Bench | VQA |
| WildVision+ | WildVision | VQA | OlympiadBench+ | OlympiadBench | VQA |
| MM-Math+ | MM-Math | VQA | Dynamath | DynaMath | VQA |
| MMGenBench- | MMGenBench-Test MMGenBench-Domain | - | QSpatial+ | QSpatial_[plus/scannet] | VQA |
| VizWiz+ | VizWiz | VQA | | | |
* We only provide a subset of the evaluation results, since some VLMs do not yield reasonable results under the zero-shot setting.
+ The evaluation results are not available yet.
- Only inference is supported in VLMEvalKit (this includes the TEST splits of some benchmarks that do not include ground-truth answers).
If you set a judge LLM API key, VLMEvalKit will use the judge LLM to extract answers from the model output; otherwise it uses exact-matching mode (searching for "Yes", "No", "A", "B", "C"... in the output strings). Exact matching can only be applied to the Yes-or-No tasks and the multi-choice tasks.
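For intuition, here is a toy sketch of what exact matching means here; it is not the actual vlmeval implementation, and the function name and regular expressions are illustrative only:

```python
import re

def exact_match(prediction, task):
    """Toy answer extraction: scan the raw model output for a direct answer."""
    text = prediction.strip()
    if task == "Y/N":
        m = re.search(r"\b(yes|no)\b", text, flags=re.IGNORECASE)
        return m.group(1).capitalize() if m else None
    if task == "MCQ":
        m = re.search(r"\b([A-E])\b", text)
        return m.group(1) if m else None
    return None  # open-ended VQA outputs need an LLM judge instead

print(exact_match("The answer is (B).", "MCQ"))         # -> B
print(exact_match("No, the object is absent.", "Y/N"))  # -> No
```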
Supported Video Understanding Datasets
| Dataset | Dataset Names (for run.py) | Task | Dataset | Dataset Names (for run.py) | Task |
|---|---|---|---|---|---|
| MMBench-Video | MMBench-Video | VQA | Video-MME | Video-MME | MCQ |
| MVBench | MVBench/MVBench_MP4 | MCQ | MLVU | MLVU | MCQ & VQA |
| TempCompass | TempCompass | MCQ & Y/N & Caption | LongVideoBench | LongVideoBench | MCQ |
Supported API Models
| GPT-4v (20231106, 20240409) ? | GPT-4o ? | Gemini-1.0-Pro ? | Gemini-1.5-Pro ? | Step-1V ? |
|---|---|---|---|---|
| Reka-[Edge / Flash / Core]? | Qwen-VL-[Plus / Max] ? Qwen-VL-[Plus / Max]-0809 ? | Claude3-[Haiku / Sonnet / Opus] ? | GLM-4v ? | CongRong ? |
| Claude3.5-Sonnet (20240620, 20241022) ? | GPT-4o-Mini ? | Yi-Vision? | Hunyuan-Vision? | BlueLM-V ? |
| TeleMM? | | | | |
Supported PyTorch / HF Models
| IDEFICS-[9B/80B/v2-8B/v3-8B]-Instruct? | InstructBLIP-[7B/13B] | LLaVA-[v1-7B/v1.5-7B/v1.5-13B] | MiniGPT-4-[v1-7B/v1-13B/v2-7B] |
|---|---|---|---|
| mPLUG-Owl[2/3] | OpenFlamingo-v2 | PandaGPT-13B | Qwen-VL? Qwen-VL-Chat? |
| VisualGLM-6B? | InternLM-XComposer-[1/2]? | ShareGPT4V-[7B/13B]? | TransCore-M |
| LLaVA (XTuner)? | CogVLM-[Chat/Llama3]? | ShareCaptioner? | CogVLM-Grounding-Generalist? |
| Monkey? Monkey-Chat? | EMU2-Chat? | Yi-VL-[6B/34B] | MMAlaya? |
| InternLM-XComposer-2.5? | MiniCPM-[V1/V2/V2.5/V2.6]? | OmniLMM-12B | InternVL-Chat-[V1-1/V1-2/V1-5/V2]? |
| DeepSeek-VL | LLaVA-NeXT? | Bunny-Llama3? | XVERSE-V-13B |
| PaliGemma-3B ? | 360VL-70B ? | Phi-3-Vision? Phi-3.5-Vision? | WeMM? |
| GLM-4v-9B ? | Cambrian-[8B/13B/34B] | LLaVA-Next-[Qwen-32B] | Chameleon-[7B/30B]? |
| Video-LLaVA-7B-[HF] ? | VILA1.5-[3B/8B/13B/40B] | Ovis[1.5-Llama3-8B/1.5-Gemma2-9B/1.6-Gemma2-9B/1.6-Llama3.2-3B/1.6-Gemma2-27B] ? | Mantis-8B-[siglip-llama3/clip-llama3/Idefics2/Fuyu] |
| Llama-3-MixSenseV1_1? | Parrot-7B ? | OmChat-v2.0-13B-sinlge-beta ? | Video-ChatGPT ? |
| Chat-UniVi-7B[-v1.5] ? | LLaMA-VID-7B ? | VideoChat2-HD ? | PLLaVA-[7B/13B/34B] ? |
| RBDash_72b ? | xgen-mm-phi3-[interleave/dpo]-r-v1.5 ? | Qwen2-VL-[2B/7B/72B]? | slime_[7b/8b/13b] |
| Eagle-X4-[8B/13B]?, Eagle-X5-[7B/13B/34B]? | Moondream1?, Moondream2? | XinYuan-VL-2B-Instruct? | Llama-3.2-[11B/90B]-Vision-Instruct? |
| Kosmos2? | H2OVL-Mississippi-[0.8B/2B]? | **Pixtral-12B** | **Falcon2-VLM-11B**? |
| **MiniMonkey**? | **LLaVA-OneVision**? | **LLaVA-Video**? | **Aquila-VL-2B**? |
| Mini-InternVL-Chat-[2B/4B]-V1-5? | InternVL2 Series ? | **Janus-1.3B**? | **molmoE-1B/molmo-7B/molmo-72B**? |
| **Points-[Yi-1.5-9B/Qwen-2.5-7B]**? | **NVLM**? | **VIntern**? | **Aria**? |
In the tables above, model annotations indicate whether a model supports multiple images as inputs, whether it can be used without any additional configuration/operation, and whether it supports video as inputs.
Transformers Version Recommendation:
Note that some VLMs may not be able to run under certain transformers versions; we recommend the following settings to evaluate each VLM:

- `transformers==4.33.0` for: Qwen series, Monkey series, InternLM-XComposer Series, mPLUG-Owl2, OpenFlamingo v2, IDEFICS series, VisualGLM, MMAlaya, ShareCaptioner, MiniGPT-4 series, InstructBLIP series, PandaGPT, VXVERSE.
- `transformers==4.36.2` for: Moondream1.
- `transformers==4.37.0` for: LLaVA series, ShareGPT4V series, TransCore-M, LLaVA (XTuner), CogVLM Series, EMU2 Series, Yi-VL Series, MiniCPM-[V1/V2], OmniLMM-12B, DeepSeek-VL series, InternVL series, Cambrian Series, VILA Series, Llama-3-MixSenseV1_1, Parrot-7B, PLLaVA Series.
- `transformers==4.40.0` for: IDEFICS2, Bunny-Llama3, MiniCPM-Llama3-V2.5, 360VL-70B, Phi-3-Vision, WeMM.
- `transformers==4.44.0` for: Moondream2, H2OVL series.
- `transformers==4.45.0` for: Aria.
- `transformers==latest` for: LLaVA-Next series, PaliGemma-3B, Chameleon series, Video-LLaVA-7B-HF, Ovis series, Mantis series, MiniCPM-V2.6, OmChat-v2.0-13B-sinlge-beta, Idefics-3, GLM-4v-9B, VideoChat2-HD, RBDash_72b, Llama-3.2 series, Kosmos series.

Torchvision Version Recommendation:
Note that some VLMs may not be able to run under certain torchvision versions; we recommend the following settings to evaluate each VLM:

- `torchvision>=0.16` for: Moondream series and Aria.
Flash-attn Version Recommendation:
Note that some VLMs may not be able to run under certain flash-attention versions; we recommend the following settings to evaluate each VLM:

- `pip install flash-attn --no-build-isolation` for: Aria.
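As a quick sanity check against the recommendations above, you can print the installed versions and compare them manually (a trivial, illustrative snippet, not part of VLMEvalKit):

```python
import transformers
import torchvision

# Compare these against the recommended versions for the model you plan to evaluate.
print("transformers:", transformers.__version__)
print("torchvision:", torchvision.__version__)
```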
```python
# Demo
from vlmeval.config import supported_VLM
model = supported_VLM['idefics_9b_instruct']()
# Forward Single Image
ret = model.generate(['assets/apple.jpg', 'What is in this image?'])
print(ret)  # The image features a red apple with a leaf on it.
# Forward Multiple Images
ret = model.generate(['assets/apple.jpg', 'assets/apple.jpg', 'How many apples are there in the provided images? '])
print(ret)  # There are two apples in the provided images.
```

To develop custom benchmarks, VLMs, or simply contribute other codes to VLMEvalKit, please refer to [Development_Guide | 开发指南].
Call for contributions
To promote contributions from the community and share the corresponding credit (in the next report update), here is a contributor list we curated based on the records.
The codebase is designed to:
- Make it easy to add new VLMs: you only need to implement a single generate_inner() function, and all other workloads (data downloading, data preprocessing, prediction inference, metric calculation) are handled by the codebase (see the sketch below).

The codebase is not designed to:

- Reproduce the exact accuracy numbers reported in the original papers of all third-party benchmarks: VLMEvalKit adopts generation-based evaluation throughout (with exact matching or LLM-based answer extraction), which may differ from the original evaluation protocols.
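To illustrate the "single generate_inner() function" point above, here is a minimal hypothetical sketch; the class name, constructor, and message schema are assumptions, and the actual interface is documented in the Development Guide:

```python
class MyCustomVLM:
    """Hypothetical wrapper: only generate_inner() is model-specific."""

    def __init__(self, model_path):
        # Load your model / processor here (omitted in this sketch).
        self.model_path = model_path

    def generate_inner(self, message, dataset=None):
        # `message` is assumed to be a list of {"type": "image" | "text", "value": ...}
        # entries; the toolkit handles data downloading, preprocessing, the
        # inference loop, and metric calculation around this call.
        images = [m["value"] for m in message if m["type"] == "image"]
        prompt = " ".join(m["value"] for m in message if m["type"] == "text")
        # Run your model on (images, prompt) here and return its text response.
        return f"[stub answer covering {len(images)} image(s) for prompt: {prompt[:40]}]"
```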
If you find this work helpful, please consider starring this repo. Thanks for your support!
If you use VLMEvalKit in your research or wish to refer to published open-source evaluation results, please use the following BibTeX entry, together with the BibTeX entry corresponding to the specific VLM / benchmark you used.
```bibtex
@misc{duan2024vlmevalkit,
      title={VLMEvalKit: An Open-Source Toolkit for Evaluating Large Multi-Modality Models},
      author={Haodong Duan and Junming Yang and Yuxuan Qiao and Xinyu Fang and Lin Chen and Yuan Liu and Xiaoyi Dong and Yuhang Zang and Pan Zhang and Jiaqi Wang and Dahua Lin and Kai Chen},
      year={2024},
      eprint={2407.11691},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2407.11691},
}
```