A PyTorch implementation of EfficientDet.
It is based on the official Tensorflow implementation.
There are other PyTorch implementations. Their approaches either did not match my goal of correctly reproducing the Tensorflow models (but with a PyTorch feel and flexibility), or could not reproduce the MS COCO training from scratch.
Aside from the default model configs, there is a lot of flexibility here to facilitate experimentation and rapid improvements - some options based on the official Tensorflow impl, some of my own:
- Any backbone in the timm model collection that supports feature extraction (`features_only` arg) can be used as a backbone. timm's `convert_sync_batchnorm` function handles updated models w/ `BatchNormAct2d` layers (timm 0.9).
- New `efficientnetv2_ds` weights, 50.1 mAP @ 1024x1024, using timm's `efficientnetv2_rw_s` backbone. Memory usage is comparable to D3, speed is faster than D4. Trained with a smaller than optimal batch size, so it could likely do better...
- New set of `efficientnetv2_dt` weights, 46.1 mAP @ 768x768, 47.0 mAP @ 896x896, trained using AGC clipping (supported in timm via adaptive gradient clipping). Idea from "High-Performance Large-Scale Image Recognition Without Normalization" (https://arxiv.org/abs/2102.06171). Minimum timm version bumped to 0.4.12.
- Added an EfficientNetV2 backbone experiment, `efficientnetv2_dt`, based on timm's `efficientnetv2_rw_t` (tiny) model. 45.8 mAP @ 768x768.
- Added `tf_efficientdet_d?_ap` models, `efficientdet_q1` weights (replaces the prev model at 40.6), `cspresdet50`, and `cspdarkdet53m`.
- ModelEmaV2 from timm included; Torchscript compatible, so training with a fully JIT scripted model + bench (`--torchscript`) is possible. Big speed improvements for CPU-bound training.
- Weights for `efficientdet_q0/q1/q2` and cspresdext + PAN (`cspresdext50pan`). See updated table below. Special thanks to Artus for providing resources for training the Q2 model.
- Feature resampling supports `max` pooling, up to and including any `avg` variants.
- New focal loss (`new_focal`) is the default; use `--legacy-focal` to use the original. Legacy uses less memory but has more numerical stability issues.
- timm >= 0.3.2 is required; note there are breaking changes, so double check any custom model configs. Several months of accumulated fixes and additions were merged.
Input sizes must satisfy `size % 128 = 0`. A few things on my priority list that I have not yet addressed:
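The `size % 128 = 0` constraint above can be handled with a tiny helper that rounds a desired input size up to the nearest valid value. This is my own illustration, not a utility from this repo:

```python
def round_to_multiple(size: int, multiple: int = 128) -> int:
    """Round `size` up to the nearest multiple (EfficientDet needs size % 128 == 0)."""
    return ((size + multiple - 1) // multiple) * multiple

for s in (500, 512, 640, 700):
    print(s, "->", round_to_multiple(s))  # 500->512, 512->512, 640->640, 700->768
```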
Please note there are some breaking changes:
The timm dependency was updated to the latest version (>= 0.3), as the API of some helpers has changed. Training sanity checks were done on VOC and OI.
The table below contains models with validated weights. I have defined many other models in the model configs using various timm backbones.
| Variant | mAP (val2017) | mAP (test-dev2017) | mAP (TF official val2017) | mAP (TF official test-dev2017) | Params (M) | Img Size |
|---|---|---|---|---|---|---|
| tf_efficientdet_lite0 | 27.1 | TBD | 26.4 | N/A | 3.24 | 320 |
| tf_efficientdet_lite1 | 32.2 | TBD | 31.5 | N/A | 4.25 | 384 |
| efficientdet_d0 | 33.6 | TBD | N/A | N/A | 3.88 | 512 |
| tf_efficientdet_d0 | 34.2 | TBD | 34.3 | 34.6 | 3.88 | 512 |
| tf_efficientdet_d0_ap | 34.8 | TBD | 35.2 | 35.3 | 3.88 | 512 |
| efficientdet_q0 | 35.7 | TBD | N/A | N/A | 4.13 | 512 |
| tf_efficientdet_lite2 | 35.9 | TBD | 35.1 | N/A | 5.25 | 448 |
| efficientdet_d1 | 39.4 | 39.5 | N/A | N/A | 6.62 | 640 |
| tf_efficientdet_lite3 | 39.6 | TBD | 38.8 | N/A | 8.35 | 512 |
| tf_efficientdet_d1 | 40.1 | TBD | 40.2 | 40.5 | 6.63 | 640 |
| tf_efficientdet_d1_ap | 40.8 | TBD | 40.9 | 40.8 | 6.63 | 640 |
| efficientdet_q1 | 40.9 | TBD | N/A | N/A | 6.98 | 640 |
| cspresdext50pan | 41.2 | TBD | N/A | N/A | 22.2 | 640 |
| resdet50 | 41.6 | TBD | N/A | N/A | 27.6 | 640 |
| efficientdet_q2 | 43.1 | TBD | N/A | N/A | 8.81 | 768 |
| cspresdet50 | 43.2 | TBD | N/A | N/A | 24.3 | 768 |
| tf_efficientdet_d2 | 43.4 | TBD | 42.5 | 43.0 | 8.10 | 768 |
| tf_efficientdet_lite3x | 43.6 | TBD | 42.6 | N/A | 9.28 | 640 |
| tf_efficientdet_lite4 | 44.2 | TBD | 43.2 | N/A | 15.1 | 640 |
| tf_efficientdet_d2_ap | 44.2 | TBD | 44.3 | 44.3 | 8.10 | 768 |
| cspdarkdet53m | 45.2 | TBD | N/A | N/A | 35.6 | 768 |
| efficientdetv2_dt | 46.1 | TBD | N/A | N/A | 13.4 | 768 |
| tf_efficientdet_d3 | 47.1 | TBD | 47.2 | 47.5 | 12.0 | 896 |
| tf_efficientdet_d3_ap | 47.7 | TBD | 48.0 | 47.7 | 12.0 | 896 |
| tf_efficientdet_d4 | 49.2 | TBD | 49.3 | 49.7 | 20.7 | 1024 |
| efficientdetv2_ds | 50.1 | TBD | N/A | N/A | 26.6 | 1024 |
| tf_efficientdet_d4_ap | 50.2 | TBD | 50.4 | 50.4 | 20.7 | 1024 |
| tf_efficientdet_d5 | 51.2 | TBD | 51.2 | 51.5 | 33.7 | 1280 |
| tf_efficientdet_d6 | 52.0 | TBD | 52.1 | 52.6 | 51.9 | 1280 |
| tf_efficientdet_d5_ap | 52.1 | TBD | 52.2 | 52.5 | 33.7 | 1280 |
| tf_efficientdet_d7 | 53.1 | 53.4 | 53.4 | 53.7 | 51.9 | 1536 |
| tf_efficientdet_d7x | 54.3 | TBD | 54.4 | 55.1 | 77.1 | 1536 |
See the model configs for model checkpoint URLs and differences.
NOTE: the official scores for all models now use soft-NMS, but plain NMS is still used here.
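For illustration, the "plain" greedy NMS referred to above can be sketched in a few lines of pure Python. This is a minimal sketch of the algorithm, not the implementation used in this repo (which operates on batched tensors):

```python
def iou(a, b):
    # boxes as (x1, y1, x2, y2)
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def hard_nms(boxes, scores, iou_thresh=0.5):
    """Greedy 'plain' NMS: keep the highest-scoring box, discard boxes overlapping it."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    while order:
        i = order.pop(0)
        keep.append(i)
        order = [j for j in order if iou(boxes[i], boxes[j]) < iou_thresh]
    return keep

boxes = [(0, 0, 10, 10), (1, 1, 10, 10), (20, 20, 30, 30)]
scores = [0.9, 0.8, 0.7]
print(hard_nms(boxes, scores))  # -> [0, 2]
```

Soft-NMS differs in that overlapping boxes have their scores decayed rather than being discarded outright, which is why the official scores come out slightly higher.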
NOTE: while training some experimental models, I've noticed that the combination of synchronized BatchNorm (`--sync-bn`) and model weight EMA (`--model-ema`) during distributed training can be problematic. The result is either a model that fails to converge, or one that appears to converge (training loss) but whose eval loss (using the running BN stats) is garbage. I haven't observed this with EfficientNets, but it happens with some backbones such as CSPResNeXt, VoVNet, etc. Disabling either EMA or sync-bn seems to eliminate the problem and results in good models. I have not fully characterized this issue.
Tested in a Python 3.7-3.9 Conda environment on Linux with:
Install timm via `pip install timm` or a local install from https://github.com/rwightman/pytorch-image-models. NOTE - there is a conflict/bug with numpy 1.18+ and pycocotools 2.0; force install numpy <= 1.17.5 or ensure you have pycocotools >= 2.0.2 installed.
MSCOCO 2017 validation data:
wget http://images.cocodataset.org/zips/val2017.zip
wget http://images.cocodataset.org/annotations/annotations_trainval2017.zip
unzip val2017.zip
unzip annotations_trainval2017.zip
MSCOCO 2017 test-dev data:
wget http://images.cocodataset.org/zips/test2017.zip
unzip -q test2017.zip
wget http://images.cocodataset.org/annotations/image_info_test2017.zip
unzip image_info_test2017.zip
Run validation (val2017 by default) with a D2 model: python validate.py /location/of/mscoco/ --model tf_efficientdet_d2
Run test-dev2017: python validate.py /location/of/mscoco/ --model tf_efficientdet_d2 --split testdev
./distributed_train.sh 4 /mscoco --model tf_efficientdet_d0 -b 16 --amp --lr .09 --warmup-epochs 5 --sync-bn --opt fusedmomentum --model-ema
NOTES:
By default, training uses the mean of the dataset (`--fill-color mean`) as the background fill for crop/scale/aspect padding; the official repo uses black pixels (`--fill-color 0`). Both work fine.
VOC 2007, 2012, and combined 2007 + 2012 (with the labeled 2007 test set used for validation) are supported.
wget http://host.robots.ox.ac.uk/pascal/VOC/voc2012/VOCtrainval_11-May-2012.tar
wget http://host.robots.ox.ac.uk/pascal/VOC/voc2007/VOCtrainval_06-Nov-2007.tar
wget http://host.robots.ox.ac.uk/pascal/VOC/voc2007/VOCtest_06-Nov-2007.tar
find . -name '*.tar' -exec tar xf {} \;
There should be a VOC2007 and VOC2012 folder within VOCdevkit; the dataset root for the command line is VOCdevkit.
Alternative download links; slower, but available more reliably than ox.ac.uk:
http://pjreddie.com/media/files/VOCtrainval_11-May-2012.tar
http://pjreddie.com/media/files/VOCtrainval_06-Nov-2007.tar
http://pjreddie.com/media/files/VOCtest_06-Nov-2007.tar
Evaluate on the VOC2007 validation set: python validate.py /data/VOCdevkit --model efficientdet_d0 --num-gpu 2 --dataset voc2007 --checkpoint mycheckpoint.pth --num-classes 20
Fine tune COCO pretrained weights to VOC 2007 + 2012: ./distributed_train.sh 4 /data/VOCdevkit --model efficientdet_d0 --dataset voc0712 -b 16 --amp --lr .008 --sync-bn --opt fusedmomentum --warmup-epochs 3 --model-ema --model-ema-decay 0.9966 --epochs 150 --num-classes 20 --pretrained
Setting up the OpenImages dataset is a commitment. I've tried to make the annotation conversion a bit easier, but grabbing the dataset still takes time. It will require roughly 560GB of storage space.
To download the image data, I prefer the CVDF packaging. The main OpenImages dataset page covers the annotations and dataset license info.
Follow the S3 download directions here: https://github.com/cvdfoundation/open-images-dataset#download-images-with-bounding-boxes-annotations
Each train_<x>.tar.gz should be extracted into a train/<x> folder, where x is a hex digit 0-f. validation.tar.gz can be extracted as flat files into validation/.
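The extraction step above can be sketched in Python using only the standard library. This is a hypothetical helper, and it assumes each train_<x>.tar.gz contains the image files at the archive root, so adjust it if the CVDF archives nest them differently:

```python
import tarfile
from pathlib import Path

def extract_openimages_train(archive_dir: str, dest_root: str) -> None:
    """Extract each train_<x>.tar.gz into dest_root/train/<x>/ (x = hex digit 0-f)."""
    dest = Path(dest_root)
    for archive in sorted(Path(archive_dir).glob("train_*.tar.gz")):
        # "train_0.tar.gz" -> shard "0"
        shard = archive.name[len("train_"):].split(".")[0]
        out_dir = dest / "train" / shard
        out_dir.mkdir(parents=True, exist_ok=True)
        with tarfile.open(archive) as tar:
            tar.extractall(out_dir)
```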
The annotations can be downloaded separately from the OpenImages home page above. For convenience, I've packaged them all together along with additional "info" CSV files that contain the ids and stats for all of the image files. My dataset reader relies on the <set>-info.csv files. See https://storage.googleapis.com/openimages/web/factsfigures.html for the license of these annotations. The annotations are licensed by Google LLC under CC BY 4.0. The images are listed as having a CC BY 2.0 license.
wget https://github.com/rwightman/efficientdet-pytorch/releases/download/v0.1-anno/openimages-annotations.tar.bz2
wget https://github.com/rwightman/efficientdet-pytorch/releases/download/v0.1-anno/openimages-annotations-challenge-2019.tar.bz2
find . -name '*.tar.bz2' -exec tar xf {} \;
Once everything is downloaded and extracted, the root of your OpenImages data folder should contain:
annotations/<csv anno for openimages v5/v6>
annotations/challenge-2019/<csv anno for challenge2019>
train/0/<all the image files starting with '0'>
.
.
.
train/f/<all the image files starting with 'f'>
validation/<all the image files in same folder>
Training with Challenge2019 annotations (500 classes): ./distributed_train.sh 4 /data/openimages --model efficientdet_d0 --dataset openimages-challenge2019 -b 7 --amp --lr .042 --sync-bn --opt fusedmomentum --warmup-epochs 1 --lr-noise 0.4 0.9 --model-ema --model-ema-decay 0.999966 --epochs 100 --remode pixel --reprob 0.15 --recount 4 --num-classes 500 --val-skip 2
The 500 class (Challenge2019) and 601 class (v5/v6) class heads for OI take quite a bit more GPU memory than COCO's. You may need to halve the batch size.
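To see why the OI class heads cost more memory: the final classification conv emits one logit per anchor per class at every feature map location, so its output channels scale linearly with the class count. A rough illustration, assuming EfficientDet's default of 9 anchors per location (3 scales x 3 aspect ratios):

```python
def class_head_out_channels(num_classes: int, num_anchors: int = 9) -> int:
    """Output channels of the final classification conv: one logit per anchor per class."""
    return num_classes * num_anchors

coco = class_head_out_channels(90)            # COCO
oi_challenge = class_head_out_channels(500)   # OI Challenge2019
oi_v6 = class_head_out_channels(601)          # OI v5/v6
print(coco, oi_challenge, oi_v6)  # -> 810 4500 5409
```

The 500-class head is roughly 5.5x wider than COCO's at every level, hence the suggestion to halve the batch size.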
The models here have been used with custom training routines and datasets with great results. There are lots of details to figure out, so please do not file any "I get crap results on my custom dataset" issues. If you can illustrate a reproducible problem on a public, non-proprietary, downloadable dataset, with a public github fork of this repo that includes a working dataset/parser implementation, I may have time to take a look.
Examples:
There is a great example of training with timm EfficientNetV2 backbones and the latest version of the EfficientDet models here. If you have a good example script or kernel training these models, feel free to let me know...
Latest training run with .336 for D0 (on 4x 1080ti): ./distributed_train.sh 4 /mscoco --model efficientdet_d0 -b 22 --amp --lr .12 --sync-bn --opt fusedmomentum --warmup-epochs 5 --lr-noise 0.4 0.9 --model-ema --model-ema-decay 0.9999
The hparams above resulted in a good model; a few notes:
Val2017
Average Precision (AP) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.336251
Average Precision (AP) @[ IoU=0.50 | area= all | maxDets=100 ] = 0.521584
Average Precision (AP) @[ IoU=0.75 | area= all | maxDets=100 ] = 0.356439
Average Precision (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.123988
Average Precision (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.395033
Average Precision (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.521695
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 1 ] = 0.287121
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 10 ] = 0.441450
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.467914
Average Recall (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.197697
Average Recall (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.552515
Average Recall (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.689297
Latest run with .394 mAP (on 4x 1080ti): ./distributed_train.sh 4 /mscoco --model efficientdet_d1 -b 10 --amp --lr .06 --sync-bn --opt fusedmomentum --warmup-epochs 5 --lr-noise 0.4 0.9 --model-ema --model-ema-decay 0.99995
For this run I used some improved augmentations; they are still being experimented with, so not ready for release. Training without them works fine too, but may start to overfit a bit and will likely end up in the .385-.39 range.
NOTE: so far I've only tried submitting D7 to the test-dev server for a sanity check.
Average Precision (AP) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.534
Average Precision (AP) @[ IoU=0.50 | area= all | maxDets=100 ] = 0.726
Average Precision (AP) @[ IoU=0.75 | area= all | maxDets=100 ] = 0.577
Average Precision (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.356
Average Precision (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.569
Average Precision (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.660
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 1 ] = 0.397
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 10 ] = 0.644
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.682
Average Recall (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.508
Average Recall (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.718
Average Recall (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.818
Average Precision (AP) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.341877
Average Precision (AP) @[ IoU=0.50 | area= all | maxDets=100 ] = 0.525112
Average Precision (AP) @[ IoU=0.75 | area= all | maxDets=100 ] = 0.360218
Average Precision (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.131366
Average Precision (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.399686
Average Precision (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.537368
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 1 ] = 0.293137
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 10 ] = 0.447829
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.472954
Average Recall (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.195282
Average Recall (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.558127
Average Recall (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.695312
Average Precision (AP) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.401070
Average Precision (AP) @[ IoU=0.50 | area= all | maxDets=100 ] = 0.590625
Average Precision (AP) @[ IoU=0.75 | area= all | maxDets=100 ] = 0.422998
Average Precision (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.211116
Average Precision (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.459650
Average Precision (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.577114
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 1 ] = 0.326565
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 10 ] = 0.507095
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.537278
Average Recall (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.308963
Average Recall (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.610450
Average Recall (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.731814
Average Precision (AP) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.434042
Average Precision (AP) @[ IoU=0.50 | area= all | maxDets=100 ] = 0.627834
Average Precision (AP) @[ IoU=0.75 | area= all | maxDets=100 ] = 0.463488
Average Precision (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.237414
Average Precision (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.486118
Average Precision (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.606151
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 1 ] = 0.343016
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 10 ] = 0.538328
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.571489
Average Recall (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.350301
Average Recall (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.638884
Average Recall (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.746671
Average Precision (AP) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.471223
Average Precision (AP) @[ IoU=0.50 | area= all | maxDets=100 ] = 0.661550
Average Precision (AP) @[ IoU=0.75 | area= all | maxDets=100 ] = 0.505127
Average Precision (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.301385
Average Precision (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.518339
Average Precision (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.626571
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 1 ] = 0.365186
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 10 ] = 0.582691
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.617252
Average Recall (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.424689
Average Recall (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.670761
Average Recall (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.779611
Average Precision (AP) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.491759
Average Precision (AP) @[ IoU=0.50 | area= all | maxDets=100 ] = 0.686005
Average Precision (AP) @[ IoU=0.75 | area= all | maxDets=100 ] = 0.527791
Average Precision (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.325658
Average Precision (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.536508
Average Precision (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.635309
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 1 ] = 0.373752
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 10 ] = 0.601733
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.638343
Average Recall (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.463057
Average Recall (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.685103
Average Recall (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.789180
Average Precision (AP) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.511767
Average Precision (AP) @[ IoU=0.50 | area= all | maxDets=100 ] = 0.704835
Average Precision (AP) @[ IoU=0.75 | area= all | maxDets=100 ] = 0.552920
Average Precision (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.355680
Average Precision (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.551341
Average Precision (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.650184
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 1 ] = 0.384516
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 10 ] = 0.619196
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.657445
Average Recall (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.499319
Average Recall (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.695617
Average Recall (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.788889
Average Precision (AP) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.520200
Average Precision (AP) @[ IoU=0.50 | area= all | maxDets=100 ] = 0.713204
Average Precision (AP) @[ IoU=0.75 | area= all | maxDets=100 ] = 0.560973
Average Precision (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.361596
Average Precision (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.567414
Average Precision (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.657173
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 1 ] = 0.387733
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 10 ] = 0.629269
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.667495
Average Recall (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.499002
Average Recall (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.711909
Average Recall (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.802336
Average Precision (AP) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.531256
Average Precision (AP) @[ IoU=0.50 | area= all | maxDets=100 ] = 0.724700
Average Precision (AP) @[ IoU=0.75 | area= all | maxDets=100 ] = 0.571787
Average Precision (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.368872
Average Precision (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.573938
Average Precision (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.668253
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 1 ] = 0.393620
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 10 ] = 0.637601
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.676987
Average Recall (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.524850
Average Recall (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.717553
Average Recall (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.806352
Average Precision (AP) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.543
Average Precision (AP) @[ IoU=0.50 | area= all | maxDets=100 ] = 0.737
Average Precision (AP) @[ IoU=0.75 | area= all | maxDets=100 ] = 0.585
Average Precision (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.401
Average Precision (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.579
Average Precision (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.680
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 1 ] = 0.398
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 10 ] = 0.649
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.689
Average Recall (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.550
Average Recall (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.725
Average Recall (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.823
If you are an organization interested in sponsoring any of this work, or in prioritizing future directions that interest you, feel free to contact me (issues, LinkedIn, Twitter, hello at rwightman dot com). I will set up GitHub sponsors if there is any interest.