A PyTorch implementation of EfficientDet.
It is based on the official TensorFlow implementation and the EfficientDet paper (*EfficientDet: Scalable and Efficient Object Detection*, Mingxing Tan, Ruoming Pang, Quoc V. Le).
There are other PyTorch implementations. Their approaches either did not fit my goal of correctly reproducing the TensorFlow models (but with a PyTorch feel and flexibility), or they were not able to reproduce MS COCO training from scratch.
Aside from the default model configs, there is a lot of flexibility here to facilitate experiments and rapid improvements, with some options based on the official TensorFlow impl and some of my own:
* Any backbone in the timm model collection that supports feature extraction (`features_only` arg) can be used as a backbone.

Recent updates and additions (condensed):
* timm >= 0.9: timm's `convert_sync_batchnorm` function handles the updated models w/ `BatchNormAct2d` layers.
* New `efficientdetv2_ds` weights using timm's `efficientnetv2_rw_s` backbone, 50.1 mAP @ 1024x1024. Memory usage is comparable to D3 and it is faster than D4. It was trained with a smaller than optimal batch size, so it could likely do better...
* New `efficientdetv2_dt` weights, 46.1 mAP @ 768x768 and 47.0 mAP @ 896x896, trained using AGC clipping (supported via timm's adaptive gradient clipping). Idea from *High-Performance Large-Scale Image Recognition Without Normalization* (https://arxiv.org/abs/2102.06171). Minimum timm version bumped to 0.4.12.
* Added an EfficientNetV2 backbone experiment, `efficientdetv2_dt`, based on timm's `efficientnetv2_rw_t` (tiny) model. 45.8 mAP @ 768x768.
* New `tf_efficientdet_d?_ap` (AdvProp) weights, `efficientdet_q1` (replacing the previous model at 40.6 mAP), `cspresdet50`, and `cspdarkdet53m` models.
* Full TorchScript compatibility: with `ModelEmaV2` from timm and the earlier TorchScript-compatible additions, training with a fully JIT-scripted model + bench (`--torchscript`) is possible. Big speed gains for CPU-bound training.
* Quad-FPN models (`efficientdet_q0/q1/q2`) and CSP-ResNeXt + PAN (`cspresdext50pan`). See the updated table below. Special thanks to Artus for providing resources for training the Q2 model.
* Additional pooling related config options (`max` plus `avg` variants).
* New focal loss implementation (`new_focal`); use `--legacy-focal` to get the original. The legacy version uses less memory but has more numerical stability issues.
* Moved to timm >= 0.3.2; double check any custom defined model configs for breaking changes. Merged several months of accumulated fixes and additions.
Image sizes other than the pretrained defaults can be used as long as `size % 128 = 0`. There are also a few things on my priority list that I haven't tackled yet.
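As a quick illustration of the config flexibility, here is a minimal sketch of building a bare model from a tweaked default config. This assumes the `effdet` package's `get_efficientdet_config` and `EfficientDet` behave as I describe; double check field names against the code.

```python
import torch
from effdet import get_efficientdet_config, EfficientDet

# Load the default D0 config and adjust it before building the bare model.
# (Field names here reflect my reading of the config definitions.)
config = get_efficientdet_config('efficientdet_d0')
config.num_classes = 20                     # e.g. a VOC-sized label space

model = EfficientDet(config, pretrained_backbone=False)
model.eval()

# Any input resolution works for the bare model as long as size % 128 == 0.
x = torch.randn(1, 3, 640, 640)
with torch.no_grad():
    class_out, box_out = model(x)           # per-FPN-level class and box outputs
for cls, box in zip(class_out, box_out):
    print(cls.shape, box.shape)
```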
Please note that there are some breaking changes:
The timm version requirement was updated to the latest (>= 0.3) since some of the helper APIs changed. Training sanity checks were done on VOC and OI.
The table below contains models with pretrained weights that have been validated. There are quite a few additional models that I have defined in the model configurations using various timm backbones.
| Variant | mAP (val2017) | mAP (test-dev2017) | mAP (TF official val2017) | mAP (TF official test-dev2017) | Params (M) | Img Size |
|---|---|---|---|---|---|---|
| tf_efficientdet_lite0 | 27.1 | TBD | 26.4 | N/A | 3.24 | 320 |
| tf_efficientdet_lite1 | 32.2 | TBD | 31.5 | N/A | 4.25 | 384 |
| efficientdet_d0 | 33.6 | TBD | N/A | N/A | 3.88 | 512 |
| tf_efficientdet_d0 | 34.2 | TBD | 34.3 | 34.6 | 3.88 | 512 |
| tf_efficientdet_d0_ap | 34.8 | TBD | 35.2 | 35.3 | 3.88 | 512 |
| efficientdet_q0 | 35.7 | TBD | N/A | N/A | 4.13 | 512 |
| tf_efficientdet_lite2 | 35.9 | TBD | 35.1 | N/A | 5.25 | 448 |
| efficientdet_d1 | 39.4 | 39.5 | N/A | N/A | 6.62 | 640 |
| tf_efficientdet_lite3 | 39.6 | TBD | 38.8 | N/A | 8.35 | 512 |
| tf_efficientdet_d1 | 40.1 | TBD | 40.2 | 40.5 | 6.63 | 640 |
| tf_efficientdet_d1_ap | 40.8 | TBD | 40.9 | 40.8 | 6.63 | 640 |
| efficientdet_q1 | 40.9 | TBD | N/A | N/A | 6.98 | 640 |
| cspresdext50pan | 41.2 | TBD | N/A | N/A | 22.2 | 640 |
| resdet50 | 41.6 | TBD | N/A | N/A | 27.6 | 640 |
| efficientdet_q2 | 43.1 | TBD | N/A | N/A | 8.81 | 768 |
| cspresdet50 | 43.2 | TBD | N/A | N/A | 24.3 | 768 |
| tf_efficientdet_d2 | 43.4 | TBD | 42.5 | 43.0 | 8.10 | 768 |
| tf_efficientdet_lite3x | 43.6 | TBD | 42.6 | N/A | 9.28 | 640 |
| tf_efficientdet_lite4 | 44.2 | TBD | 43.2 | N/A | 15.1 | 640 |
| tf_efficientdet_d2_ap | 44.2 | TBD | 44.3 | 44.3 | 8.10 | 768 |
| cspdarkdet53m | 45.2 | TBD | N/A | N/A | 35.6 | 768 |
| efficientdetv2_dt | 46.1 | TBD | N/A | N/A | 13.4 | 768 |
| tf_efficientdet_d3 | 47.1 | TBD | 47.2 | 47.5 | 12.0 | 896 |
| tf_efficientdet_d3_ap | 47.7 | TBD | 48.0 | 47.7 | 12.0 | 896 |
| tf_efficientdet_d4 | 49.2 | TBD | 49.3 | 49.7 | 20.7 | 1024 |
| efficientdetv2_ds | 50.1 | TBD | N/A | N/A | 26.6 | 1024 |
| tf_efficientdet_d4_ap | 50.2 | TBD | 50.4 | 50.4 | 20.7 | 1024 |
| tf_efficientdet_d5 | 51.2 | TBD | 51.2 | 51.5 | 33.7 | 1280 |
| tf_efficientdet_d6 | 52.0 | TBD | 52.1 | 52.6 | 51.9 | 1280 |
| tf_efficientdet_d5_ap | 52.1 | TBD | 52.2 | 52.5 | 33.7 | 1280 |
| tf_efficientdet_d7 | 53.1 | 53.4 | 53.4 | 53.7 | 51.9 | 1536 |
| tf_efficientdet_d7x | 54.3 | TBD | 54.4 | 55.1 | 77.1 | 1536 |
See the model configurations for model checkpoint URLs and differences.
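A quick way to see what a given variant's config contains (backbone, default resolution, checkpoint URL) is via the config helper. This is a small sketch; the field names are my reading of the config definitions and worth verifying against the code.

```python
from effdet import get_efficientdet_config

config = get_efficientdet_config('tf_efficientdet_d2')
# A few fields of interest:
print(config.backbone_name)   # timm backbone used by this variant
print(config.image_size)      # default training/eval resolution
print(config.num_classes)     # default number of classes (COCO)
print(config.url)             # pretrained checkpoint URL, empty if none released
```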
NOTE: The official scores are all reported using soft-NMS now, but plain NMS is still used here.
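For context on the difference: plain NMS drops any box whose IoU with a higher scoring box exceeds a threshold, while soft-NMS only decays its score. Below is a standalone illustrative sketch of Gaussian soft-NMS, not the code used by this repo or the official one.

```python
import torch

def soft_nms(boxes, scores, sigma=0.5, score_thresh=0.001):
    """Gaussian soft-NMS sketch.
    boxes: [N, 4] as (x1, y1, x2, y2); scores: [N]. Returns kept indices."""
    scores = scores.clone().float()
    idxs = torch.arange(boxes.size(0))
    keep = []
    while idxs.numel() > 0:
        top = scores[idxs].argmax().item()
        best = idxs[top].item()
        keep.append(best)
        idxs = torch.cat([idxs[:top], idxs[top + 1:]])
        if idxs.numel() == 0:
            break
        # IoU of the selected box against the remaining candidates
        lt = torch.max(boxes[best, :2], boxes[idxs, :2])
        rb = torch.min(boxes[best, 2:], boxes[idxs, 2:])
        inter = (rb - lt).clamp(min=0).prod(dim=1)
        area_best = (boxes[best, 2:] - boxes[best, :2]).prod()
        area_rest = (boxes[idxs, 2:] - boxes[idxs, :2]).prod(dim=1)
        iou = inter / (area_best + area_rest - inter)
        # Gaussian decay of overlapping scores; hard NMS would drop them outright
        scores[idxs] = scores[idxs] * torch.exp(-(iou ** 2) / sigma)
        idxs = idxs[scores[idxs] > score_thresh]
    return keep

# Two heavily overlapping boxes: hard NMS would keep only one,
# soft-NMS keeps both but with the second one's score decayed.
boxes = torch.tensor([[0., 0., 10., 10.], [1., 1., 11., 11.]])
print(soft_nms(boxes, torch.tensor([0.9, 0.8])))
```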
NOTE: While training some experimental models, I noticed issues with the combination of synchronized BatchNorm (--sync-bn) and model EMA weight averaging (--model-ema) during distributed training. The result is either a model that fails to converge, or one that appears to converge (in terms of training loss) but whose evaluation (using the running BN stats) is garbage. I haven't observed this with EfficientNets, but it happens with some other backbones such as CSPResNeXt, VoVNet, etc. Disabling either EMA or sync-bn seems to eliminate the problem and results in good models. I have not fully characterized this issue.
Tested in Python 3.7 - 3.9 conda environments on Linux with:
* `pip install timm` or local install from https://github.com/rwightman/pytorch-image-models. NOTE: there is a conflict/bug with numpy 1.18+ and pycocotools 2.0; force install numpy <= 1.17.5 or ensure you install pycocotools >= 2.0.2.
MSCOCO 2017 validation data:
wget http://images.cocodataset.org/zips/val2017.zip
wget http://images.cocodataset.org/annotations/annotations_trainval2017.zip
unzip val2017.zip
unzip annotations_trainval2017.zip
MSCOCO 2017 test-dev data:
wget http://images.cocodataset.org/zips/test2017.zip
unzip -q test2017.zip
wget http://images.cocodataset.org/annotations/image_info_test2017.zip
unzip image_info_test2017.zip
Run validation (val2017 by default) with a D2 model: python validate.py /location/of/mscoco/ --model tf_efficientdet_d2
Run on test-dev2017: python validate.py /location/of/mscoco/ --model tf_efficientdet_d2 --split testdev
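Beyond `validate.py`, the same checkpoints can also be run for prediction directly from Python. A minimal sketch, assuming the `effdet.create_model` factory wraps the model in the prediction bench as I describe and that the detection output layout is as shown; verify both against the code.

```python
import torch
from effdet import create_model

# Pretrained D2 wrapped in the prediction bench (anchors + NMS post-processing).
bench = create_model('tf_efficientdet_d2', bench_task='predict', pretrained=True)
bench.eval()

# Input should match the model's configured resolution (768 for D2, per the table above).
x = torch.randn(1, 3, 768, 768)
with torch.no_grad():
    detections = bench(x)

# Expected layout: [batch, max_det, 6] rows of (x_min, y_min, x_max, y_max, score, class)
print(detections.shape)
print(detections[0, :3])
```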
./distributed_train.sh 4 /mscoco --model tf_efficientdet_d0 -b 16 --amp --lr .09 --warmup-epochs 5 --sync-bn --opt fusedmomentum --model-ema
Notes:
* I've only trained with the img mean (`--fill-color mean`) as the background for crop/scale/aspect fill; the official repo uses black pixel (0) (`--fill-color 0`). Both work fine.

Pascal VOC setup: 2007 and 2012, combined as 2007 + 2012 trainval for training, with the 2007 val split used for testing.
wget http://host.robots.ox.ac.uk/pascal/VOC/voc2012/VOCtrainval_11-May-2012.tar
wget http://host.robots.ox.ac.uk/pascal/VOC/voc2007/VOCtrainval_06-Nov-2007.tar
wget http://host.robots.ox.ac.uk/pascal/VOC/voc2007/VOCtest_06-Nov-2007.tar
find . -name '*.tar' -exec tar xf {} \;
There should be a VOC2007 and a VOC2012 folder inside VOCdevkit; the dataset root for the command line is the VOCdevkit folder.
Alternative download links, slower but more reliably available than ox.ac.uk:
http://pjreddie.com/media/files/VOCtrainval_11-May-2012.tar
http://pjreddie.com/media/files/VOCtrainval_06-Nov-2007.tar
http://pjreddie.com/media/files/VOCtest_06-Nov-2007.tar
Evaluate on the VOC2007 validation set: python validate.py /data/VOCdevkit --model efficientdet_d0 --num-gpu 2 --dataset voc2007 --checkpoint mycheckpoint.pth --num-classes 20
Fine-tune COCO pretrained weights on VOC 2007 + 2012: ./distributed_train.sh 4 /data/VOCdevkit --model efficientdet_d0 --dataset voc0712 -b 16 --amp --lr .008 --sync-bn --opt fusedmomentum --warmup-epochs 3 --model-ema --model-ema-decay 0.9966 --epochs 150 --num-classes 20 --pretrained
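The `--num-classes 20 --pretrained` combination above loads the COCO weights and puts a head sized for the new label space on top. Roughly the same thing via the Python factory, as I understand its arguments (a sketch, verify against the code):

```python
from effdet import create_model

# COCO pretrained backbone/BiFPN weights with a 20-class head for VOC fine-tuning,
# wrapped in the training bench so forward(x, target) returns the detection losses.
model = create_model(
    'efficientdet_d0',
    bench_task='train',
    num_classes=20,
    pretrained=True,
)
print(sum(p.numel() for p in model.parameters()) / 1e6, 'M params')
```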
Setting up the OpenImages dataset is a commitment. I've tried to make the annotation handling a bit easier, but grabbing the dataset still takes some time. It will require roughly 560GB of storage space.
To download the image data, I prefer the CVDF packaging. The main OpenImages dataset page covers the annotations and dataset license info.
Follow the s3 download directions here: https://github.com/cvdfoundation/open-images-dataset#download-images-with-bounding-boxes-annotations
Each train_<x>.tar.gz should be extracted to a train/<x> folder, where x is a hex digit from 0-f. validation.tar.gz can be extracted as flat files into validation/.
The annotations can be downloaded separately from the OpenImages home page above. For convenience, I've packaged them all together along with some additional 'info' csv files that contain ids and stats for all of the image files. My dataset reader relies on those <set>-info.csv files. See https://storage.googleapis.com/openimages/web/factsfigures.html for the licensing of these annotations. The annotations are licensed by Google LLC under CC BY 4.0. The images are listed as having a CC BY 2.0 license.
wget https://github.com/rwightman/efficientdet-pytorch/releases/download/v0.1-anno/openimages-annotations.tar.bz2
wget https://github.com/rwightman/efficientdet-pytorch/releases/download/v0.1-anno/openimages-annotations-challenge-2019.tar.bz2
find . -name '*.tar.bz2' -exec tar xf {} \;
Once everything is downloaded and extracted, the root of your OpenImages data folder should contain:
annotations/<csv anno for openimages v5/v6>
annotations/challenge-2019/<csv anno for challenge2019>
train/0/<all the image files starting with '0'>
.
.
.
train/f/<all the image files starting with 'f'>
validation/<all the image files in same folder>
Training with Challenge2019 annotations (500 classes): ./distributed_train.sh 4 /data/openimages --model efficientdet_d0 --dataset openimages-challenge2019 -b 7 --amp --lr .042 --sync-bn --opt fusedmomentum --warmup-epochs 1 --lr-noise 0.4 0.9 --model-ema --model-ema-decay 0.999966 --epochs 100 --remode pixel --reprob 0.15 --recount 4 --num-classes 500 --val-skip 2
The 500-class (Challenge2019) or 601-class (V5/V6) heads for OI take up quite a bit more GPU memory than COCO's. You may need to halve the batch sizes.
The models here have been used with custom training routines and datasets with great results. There are a lot of details to sort out, so please don't file any "I'm getting crap results with my custom dataset" issues. If you can illustrate a reproducible problem on a public, non-proprietary, downloadable dataset, with a public fork of this repo including a working dataset/parser implementation, I may find the time to take a look.
Examples:
There is a nice example of training with a timm EfficientNetV2 backbone and a recent version of the EfficientDet models from this repo. If you have a good example script or kernel training these models with this code, feel free to let me know here...
Latest training run with .336 for D0 (on 4x 1080ti): ./distributed_train.sh 4 /mscoco --model efficientdet_d0 -b 22 --amp --lr .12 --sync-bn --opt fusedmomentum --warmup-epochs 5 --lr-noise 0.4 0.9 --model-ema --model-ema-decay 0.9999
The hparams above resulted in a good model.
Val2017
Average Precision (AP) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.336251
Average Precision (AP) @[ IoU=0.50 | area= all | maxDets=100 ] = 0.521584
Average Precision (AP) @[ IoU=0.75 | area= all | maxDets=100 ] = 0.356439
Average Precision (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.123988
Average Precision (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.395033
Average Precision (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.521695
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 1 ] = 0.287121
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 10 ] = 0.441450
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.467914
Average Recall (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.197697
Average Recall (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.552515
Average Recall (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.689297
Latest run with .394 mAP (on 4x 1080ti): ./distributed_train.sh 4 /mscoco --model efficientdet_d1 -b 10 --amp --lr .06 --sync-bn --opt fusedmomentum --warmup-epochs 5 --lr-noise 0.4 0.9 --model-ema --model-ema-decay 0.99995
For this run I used some improved augmentations that I'm still experimenting with, so they're not ready for release. Training without them works fine, but may start to overfit a bit sooner and likely ends up a bit lower, in the .385-.39 range.
NOTE: So far I've only tried submitting D7 to the test-dev server as a sanity check.
Average Precision (AP) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.534
Average Precision (AP) @[ IoU=0.50 | area= all | maxDets=100 ] = 0.726
Average Precision (AP) @[ IoU=0.75 | area= all | maxDets=100 ] = 0.577
Average Precision (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.356
Average Precision (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.569
Average Precision (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.660
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 1 ] = 0.397
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 10 ] = 0.644
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.682
Average Recall (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.508
Average Recall (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.718
Average Recall (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.818
Average Precision (AP) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.341877
Average Precision (AP) @[ IoU=0.50 | area= all | maxDets=100 ] = 0.525112
Average Precision (AP) @[ IoU=0.75 | area= all | maxDets=100 ] = 0.360218
Average Precision (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.131366
Average Precision (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.399686
Average Precision (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.537368
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 1 ] = 0.293137
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 10 ] = 0.447829
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.472954
Average Recall (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.195282
Average Recall (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.558127
Average Recall (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.695312
Average Precision (AP) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.401070
Average Precision (AP) @[ IoU=0.50 | area= all | maxDets=100 ] = 0.590625
Average Precision (AP) @[ IoU=0.75 | area= all | maxDets=100 ] = 0.422998
Average Precision (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.211116
Average Precision (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.459650
Average Precision (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.577114
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 1 ] = 0.326565
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 10 ] = 0.507095
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.537278
Average Recall (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.308963
Average Recall (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.610450
Average Recall (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.731814
Average Precision (AP) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.434042
Average Precision (AP) @[ IoU=0.50 | area= all | maxDets=100 ] = 0.627834
Average Precision (AP) @[ IoU=0.75 | area= all | maxDets=100 ] = 0.463488
Average Precision (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.237414
Average Precision (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.486118
Average Precision (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.606151
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 1 ] = 0.343016
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 10 ] = 0.538328
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.571489
Average Recall (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.350301
Average Recall (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.638884
Average Recall (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.746671
Average Precision (AP) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.471223
Average Precision (AP) @[ IoU=0.50 | area= all | maxDets=100 ] = 0.661550
Average Precision (AP) @[ IoU=0.75 | area= all | maxDets=100 ] = 0.505127
Average Precision (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.301385
Average Precision (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.518339
Average Precision (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.626571
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 1 ] = 0.365186
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 10 ] = 0.582691
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.617252
Average Recall (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.424689
Average Recall (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.670761
Average Recall (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.779611
Average Precision (AP) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.491759
Average Precision (AP) @[ IoU=0.50 | area= all | maxDets=100 ] = 0.686005
Average Precision (AP) @[ IoU=0.75 | area= all | maxDets=100 ] = 0.527791
Average Precision (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.325658
Average Precision (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.536508
Average Precision (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.635309
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 1 ] = 0.373752
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 10 ] = 0.601733
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.638343
Average Recall (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.463057
Average Recall (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.685103
Average Recall (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.789180
Average Precision (AP) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.511767
Average Precision (AP) @[ IoU=0.50 | area= all | maxDets=100 ] = 0.704835
Average Precision (AP) @[ IoU=0.75 | area= all | maxDets=100 ] = 0.552920
Average Precision (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.355680
Average Precision (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.551341
Average Precision (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.650184
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 1 ] = 0.384516
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 10 ] = 0.619196
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.657445
Average Recall (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.499319
Average Recall (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.695617
Average Recall (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.788889
Average Precision (AP) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.520200
Average Precision (AP) @[ IoU=0.50 | area= all | maxDets=100 ] = 0.713204
Average Precision (AP) @[ IoU=0.75 | area= all | maxDets=100 ] = 0.560973
Average Precision (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.361596
Average Precision (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.567414
Average Precision (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.657173
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 1 ] = 0.387733
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 10 ] = 0.629269
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.667495
Average Recall (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.499002
Average Recall (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.711909
Average Recall (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.802336
Average Precision (AP) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.531256
Average Precision (AP) @[ IoU=0.50 | area= all | maxDets=100 ] = 0.724700
Average Precision (AP) @[ IoU=0.75 | area= all | maxDets=100 ] = 0.571787
Average Precision (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.368872
Average Precision (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.573938
Average Precision (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.668253
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 1 ] = 0.393620
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 10 ] = 0.637601
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.676987
Average Recall (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.524850
Average Recall (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.717553
Average Recall (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.806352
Average Precision (AP) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.543
Average Precision (AP) @[ IoU=0.50 | area= all | maxDets=100 ] = 0.737
Average Precision (AP) @[ IoU=0.75 | area= all | maxDets=100 ] = 0.585
Average Precision (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.401
Average Precision (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.579
Average Precision (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.680
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 1 ] = 0.398
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 10 ] = 0.649
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.689
Average Recall (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.550
Average Recall (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.725
Average Recall (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.823
If you are an organization interested in sponsoring any of this work, or if prioritizing the possible future directions interests you, feel free to contact me (issue, LinkedIn, Twitter, hello at rwightman dot com). I will set up GitHub sponsors if there is any interest.