Pointcept is a powerful and flexible codebase for point cloud perception research. It is also an official implementation of the following papers:
Point Transformer V3: Simpler, Faster, Stronger
Xiaoyang Wu, Li Jiang, Peng-Shuai Wang, Zhijian Liu, Xihui Liu, Yu Qiao, Wanli Ouyang, Tong He, Hengshuang Zhao
IEEE Conference on Computer Vision and Pattern Recognition (CVPR) 2024 - Oral
[Backbone] [PTv3] - [arXiv] [Bib] [Project] → here
OA-CNNs: Omni-Adaptive Sparse CNNs for 3D Semantic Segmentation
Bohao Peng, Xiaoyang Wu, Li Jiang, Yukang Chen, Hengshuang Zhao, Zhuotao Tian, Jiaya Jia
IEEE Conference on Computer Vision and Pattern Recognition (CVPR) 2024
[Backbone] [OA-CNNs] - [arXiv] [Bib] → here
Towards Large-scale 3D Representation Learning with Multi-dataset Point Prompt Training
Xiaoyang Wu, Zhuotao Tian, Xin Wen, Bohao Peng, Xihui Liu, Kaicheng Yu, Hengshuang Zhao
IEEE Conference on Computer Vision and Pattern Recognition (CVPR) 2024
[Pretrain] [PPT] - [arXiv] [Bib] → here
Masked Scene Contrast: A Scalable Framework for Unsupervised 3D Representation Learning
Xiaoyang Wu, Xin Wen, Xihui Liu, Hengshuang Zhao
IEEE Conference on Computer Vision and Pattern Recognition (CVPR) 2023
[Pretrain] [MSC] - [arXiv] [Bib] → here
Learning Context-aware Classifier for Semantic Segmentation (3D part)
Zhuotao Tian, Jiequan Cui, Li Jiang, Xiaojuan Qi, Xin Lai, Yixin Chen, Shu Liu, Jiaya Jia
AAAI Conference on Artificial Intelligence (AAAI) 2023 - Oral
[SemSeg] [CAC] - [arXiv] [Bib] [2D part] → here
Point Transformer V2: Grouped Vector Attention and Partition-based Pooling
Xiaoyang Wu, Yixing Lao, Li Jiang, Xihui Liu, Hengshuang Zhao
Conference on Neural Information Processing Systems (NeurIPS) 2022
[Backbone] [PTv2] - [arXiv] [Bib] → here
Point Transformer
Hengshuang Zhao, Li Jiang, Jiaya Jia, Philip Torr, Vladlen Koltun
IEEE International Conference on Computer Vision (ICCV) 2021 - Oral
[Backbone] [PTv1] - [arXiv] [Bib] → here
Additionally, Pointcept integrates the following excellent work (contains the above):
Backbone: MinkUNet (here), SpUNet (here), SPVCNN (here), OACNNs (here), PTv1 (here), PTv2 (here), PTv3 (here), StratifiedFormer (here), OctFormer (here), Swin3D (here);
Semantic Segmentation: Mix3d (here), CAC (here);
Instance Segmentation: PointGroup (here);
Pre-training: PointContrast (here), Contrastive Scene Contexts (here), Masked Scene Contrast (here), Point Prompt Training (here);
Datasets: ScanNet (here), ScanNet200 (here), ScanNet++ (here), S3DIS (here), Matterport3D (here), ArkitScene, Structured3D (here), SemanticKITTI (here), nuScenes (here), ModelNet40 (here), Waymo (here).
If you find Pointcept useful to your research, please cite our work as encouragement. (੭ˊ꒳ˋ)੭✧
@misc{pointcept2023,
title={Pointcept: A Codebase for Point Cloud Perception Research},
author={Pointcept Contributors},
howpublished = {\url{https://github.com/Pointcept/Pointcept}},
year={2023}
}
conda create -n pointcept python=3.8 -y
conda activate pointcept
conda install ninja -y
# Choose version you want here: https://pytorch.org/get-started/previous-versions/
conda install pytorch==1.12.1 torchvision==0.13.1 torchaudio==0.12.1 cudatoolkit=11.3 -c pytorch -y
conda install h5py pyyaml -c anaconda -y
conda install sharedarray tensorboard tensorboardx yapf addict einops scipy plyfile termcolor timm -c conda-forge -y
conda install pytorch-cluster pytorch-scatter pytorch-sparse -c pyg -y
pip install torch-geometric
# spconv (SparseUNet)
# refer https://github.com/traveller59/spconv
pip install spconv-cu113
# PPT (clip)
pip install ftfy regex tqdm
pip install git+https://github.com/openai/CLIP.git
# PTv1 & PTv2 or precise eval
cd libs/pointops
# usual
python setup.py install
# docker & multi GPU arch
TORCH_CUDA_ARCH_LIST="ARCH LIST" python setup.py install
# e.g. 7.5: RTX 3000; 8.0: A100. More available in: https://developer.nvidia.com/cuda-gpus
TORCH_CUDA_ARCH_LIST="7.5 8.0" python setup.py install
cd ../..
# Open3D (visualization, optional)
pip install open3d
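As a quick sanity check of the environment (a minimal sketch, assuming the packages above were installed; adjust the imports to what you actually built):
import torch
import spconv.pytorch  # spconv 2.x import path
import pointops        # built from libs/pointops above

print(torch.__version__, torch.version.cuda, torch.cuda.is_available())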
The preprocessing supports semantic and instance segmentation for ScanNet20, ScanNet200, and ScanNet Data Efficient.
Download the ScanNet v2 dataset.
Run the preprocessing code for the raw ScanNet as follows:
# RAW_SCANNET_DIR: the directory of downloaded ScanNet v2 raw dataset.
# PROCESSED_SCANNET_DIR: the directory of the processed ScanNet dataset (output dir).
python pointcept/datasets/preprocessing/scannet/preprocess_scannet.py --dataset_root ${RAW_SCANNET_DIR} --output_root ${PROCESSED_SCANNET_DIR}
(Optional) Download the ScanNet Data Efficient files:
# download-scannet.py is the official download script
# or follow instructions here: https://kaldir.vc.in.tum.de/scannet_benchmark/data_efficient/documentation#download
python download-scannet.py --data_efficient -o ${RAW_SCANNET_DIR}
# unzip downloads
cd ${RAW_SCANNET_DIR}/tasks
unzip limited-annotation-points.zip
unzip limited-reconstruction-scenes.zip
# copy files to processed dataset folder
mkdir ${PROCESSED_SCANNET_DIR}/tasks
cp -r ${RAW_SCANNET_DIR}/tasks/points ${PROCESSED_SCANNET_DIR}/tasks
cp -r ${RAW_SCANNET_DIR}/tasks/scenes ${PROCESSED_SCANNET_DIR}/tasks
(Alternative) Our preprocessed data can also be downloaded [here]; please agree to the official license before downloading.
Link the processed dataset to the codebase:
# PROCESSED_SCANNET_DIR: the directory of the processed ScanNet dataset.
mkdir data
ln -s ${PROCESSED_SCANNET_DIR} ${CODEBASE_DIR}/data/scannet
Download the ScanNet++ dataset and run the preprocessing code for the raw ScanNet++ as follows:
# RAW_SCANNETPP_DIR: the directory of downloaded ScanNet++ raw dataset.
# PROCESSED_SCANNETPP_DIR: the directory of the processed ScanNet++ dataset (output dir).
# NUM_WORKERS: the number of workers for parallel preprocessing.
python pointcept/datasets/preprocessing/scannetpp/preprocess_scannetpp.py --dataset_root ${RAW_SCANNETPP_DIR} --output_root ${PROCESSED_SCANNETPP_DIR} --num_workers ${NUM_WORKERS}
Sample and chunk the large point cloud data in the train/val splits as follows (only used for training):
# PROCESSED_SCANNETPP_DIR: the directory of the processed ScanNet++ dataset (output dir).
# NUM_WORKERS: the number of workers for parallel preprocessing.
python pointcept/datasets/preprocessing/sampling_chunking_data.py --dataset_root ${PROCESSED_SCANNETPP_DIR} --grid_size 0.01 --chunk_range 6 6 --chunk_stride 3 3 --split train --num_workers ${NUM_WORKERS}
python pointcept/datasets/preprocessing/sampling_chunking_data.py --dataset_root ${PROCESSED_SCANNETPP_DIR} --grid_size 0.01 --chunk_range 6 6 --chunk_stride 3 3 --split val --num_workers ${NUM_WORKERS}
Link the processed dataset to the codebase:
# PROCESSED_SCANNETPP_DIR: the directory of the processed ScanNet++ dataset.
mkdir data
ln -s ${PROCESSED_SCANNETPP_DIR} ${CODEBASE_DIR}/data/scannetpp
Download the S3DIS data by filling out this Google form. Download the Stanford3dDataset_v1.2.zip file and unzip it.
Fix the error in Area_5/office_19/Annotations/ceiling Line 323474 (103.0.0000 => 103.000000).
(Optional) Download the full 2D-3D S3DIS dataset (no XYZ) from here for parsing normals.
Run the preprocessing code for S3DIS as follows:
# S3DIS_DIR: the directory of downloaded Stanford3dDataset_v1.2 dataset.
# RAW_S3DIS_DIR: the directory of Stanford2d3dDataset_noXYZ dataset. (optional, for parsing normal)
# PROCESSED_S3DIS_DIR: the directory of processed S3DIS dataset (output dir).
# S3DIS without aligned angle
python pointcept/datasets/preprocessing/s3dis/preprocess_s3dis.py --dataset_root ${S3DIS_DIR} --output_root ${PROCESSED_S3DIS_DIR}
# S3DIS with aligned angle
python pointcept/datasets/preprocessing/s3dis/preprocess_s3dis.py --dataset_root ${S3DIS_DIR} --output_root ${PROCESSED_S3DIS_DIR} --align_angle
# S3DIS with normal vector (recommended, normal is helpful)
python pointcept/datasets/preprocessing/s3dis/preprocess_s3dis.py --dataset_root ${S3DIS_DIR} --output_root ${PROCESSED_S3DIS_DIR} --raw_root ${RAW_S3DIS_DIR} --parse_normal
python pointcept/datasets/preprocessing/s3dis/preprocess_s3dis.py --dataset_root ${S3DIS_DIR} --output_root ${PROCESSED_S3DIS_DIR} --raw_root ${RAW_S3DIS_DIR} --align_angle --parse_normal
(Alternative) Our preprocessed data can also be downloaded [here] (with normal vectors and aligned angle); please agree to the official license before downloading.
Link the processed dataset to the codebase:
# PROCESSED_S3DIS_DIR: the directory of processed S3DIS dataset.
mkdir data
ln -s ${PROCESSED_S3DIS_DIR} ${CODEBASE_DIR}/data/s3dis
Download the Structured3D dataset and organize all downloaded zip files in one folder (${STRUCT3D_DIR}); there is no need to unzip them. Run the preprocessing code as follows:
# STRUCT3D_DIR: the directory of downloaded Structured3D dataset.
# PROCESSED_STRUCT3D_DIR: the directory of processed Structured3D dataset (output dir).
# NUM_WORKERS: number of workers for preprocessing; defaults to the CPU count (might OOM).
export PYTHONPATH=./
python pointcept/datasets/preprocessing/structured3d/preprocess_structured3d.py --dataset_root ${STRUCT3D_DIR} --output_root ${PROCESSED_STRUCT3D_DIR} --num_workers ${NUM_WORKERS} --grid_size 0.01 --fuse_prsp --fuse_pano
Following the instructions of Swin3D, we keep the 25 categories (out of the original 40) whose frequency is higher than 0.001.
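For intuition, the category filtering amounts to the following (a toy sketch; the counts are hypothetical, not the real Structured3D statistics):
import numpy as np

# Hypothetical per-category point counts accumulated over the whole dataset.
counts = np.array([5_000_000, 120_000, 800, 3_000_000])
frequency = counts / counts.sum()
kept_categories = np.flatnonzero(frequency > 0.001)  # ids of categories retained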
(Alternative) Our preprocessed data can also be downloaded [here] (perspective views and panorama view, 471.7G after unzipping); please agree to the official license before downloading.
Link the processed dataset to the codebase:
# PROCESSED_STRUCT3D_DIR: the directory of processed Structured3D dataset (output dir).
mkdir data
ln -s ${PROCESSED_STRUCT3D_DIR} ${CODEBASE_DIR}/data/structured3d
Download the Matterport3D dataset as follows:
# download-mp.py is the official download script
# MATTERPORT3D_DIR: the directory of downloaded Matterport3D dataset.
python download-mp.py -o ${MATTERPORT3D_DIR} --type region_segmentations
Unzip the region_segmentations data:
# MATTERPORT3D_DIR: the directory of downloaded Matterport3D dataset.
python pointcept/datasets/preprocessing/matterport3d/unzip_matterport3d_region_segmentation.py --dataset_root ${MATTERPORT3D_DIR}
Run the preprocessing code for Matterport3D as follows:
# MATTERPORT3D_DIR: the directory of downloaded Matterport3D dataset.
# PROCESSED_MATTERPORT3D_DIR: the directory of processed Matterport3D dataset (output dir).
# NUM_WORKERS: the number of workers for this preprocessing.
python pointcept/datasets/preprocessing/matterport3d/preprocess_matterport3d_mesh.py --dataset_root ${MATTERPORT3D_DIR} --output_root ${PROCESSED_MATTERPORT3D_DIR} --num_workers ${NUM_WORKERS}
Link the processed dataset to the codebase:
# PROCESSED_MATTERPORT3D_DIR: the directory of processed Matterport3D dataset (output dir).
mkdir data
ln -s ${PROCESSED_MATTERPORT3D_DIR} ${CODEBASE_DIR}/data/matterport3d
Following the instructions of OpenRooms, we remap the Matterport3D categories to the 20 ScanNet semantic categories, with the addition of a ceiling category.
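Schematically, the remapping is a lookup from raw Matterport3D category ids to the 20 ScanNet classes plus ceiling (the table entries below are placeholders, not the actual mapping):
import numpy as np

# Placeholder lookup table: raw category id -> remapped id
# (0..19 for the ScanNet 20 classes, 20 for the added ceiling class, -1 ignored).
remap = np.full(41, -1, dtype=np.int64)
remap[1] = 0    # e.g. map raw id 1 to class 0 (illustrative)
remap[22] = 20  # e.g. map a raw ceiling id to the added ceiling class (illustrative)

raw_labels = np.array([1, 22, 39])
semantic_labels = remap[raw_labels]  # -> [0, 20, -1]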
Download the SemanticKITTI dataset and link it to the codebase:
# SEMANTIC_KITTI_DIR: the directory of SemanticKITTI dataset.
# |- SEMANTIC_KITTI_DIR
# |- dataset
# |- sequences
# |- 00
# |- 01
# |- ...
mkdir -p data
ln -s ${SEMANTIC_KITTI_DIR} ${CODEBASE_DIR}/data/semantic_kitti
Download the official nuScenes dataset (with lidar segmentation) and organize the downloaded files as follows:
NUSCENES_DIR
│── samples
│── sweeps
│── lidarseg
...
│── v1.0-trainval
│── v1.0-test
Run the information preprocessing code (modified from OpenPCDet) for nuScenes as follows:
# NUSCENES_DIR: the directory of downloaded nuScenes dataset.
# PROCESSED_NUSCENES_DIR: the directory of processed nuScenes dataset (output dir).
# MAX_SWEEPS: Max number of sweeps. Default: 10.
pip install nuscenes-devkit pyquaternion
python pointcept/datasets/preprocessing/nuscenes/preprocess_nuscenes_info.py --dataset_root ${NUSCENES_DIR} --output_root ${PROCESSED_NUSCENES_DIR} --max_sweeps ${MAX_SWEEPS} --with_camera
(Alternative) Our preprocessed nuScenes information data can also be downloaded [here] (processed information only; you still need to download the raw dataset and link it to the folder); please agree to the official license before downloading.
Link the raw dataset to the processed nuScenes dataset folder:
# NUSCENES_DIR: the directory of downloaded nuScenes dataset.
# PROCESSED_NUSCENES_DIR: the directory of processed nuScenes dataset (output dir).
ln -s ${NUSCENES_DIR} ${PROCESSED_NUSCENES_DIR}/raw
The processed nuScenes folder is then organized as follows:
nuscene
|── raw
    │── samples
    │── sweeps
    │── lidarseg
    ...
    │── v1.0-trainval
    │── v1.0-test
|── info
Link the processed dataset to the codebase:
# PROCESSED_NUSCENES_DIR: the directory of processed nuScenes dataset (output dir).
mkdir data
ln -s ${PROCESSED_NUSCENES_DIR} ${CODEBASE_DIR}/data/nuscenes
Download the official Waymo dataset (v1.4.3) and organize the downloaded files as follows:
WAYMO_RAW_DIR
│── training
│── validation
│── testing
Install the following dependencies:
# If shows "No matching distribution found", download whl directly from Pypi and install the package.
conda create -n waymo python=3.10 -y
conda activate waymo
pip install waymo-open-dataset-tf-2-12-0
Run the preprocessing code as follows:
# WAYMO_DIR: the directory of the downloaded Waymo dataset.
# PROCESSED_WAYMO_DIR: the directory of the processed Waymo dataset (output dir).
# NUM_WORKERS: num workers for preprocessing
python pointcept/datasets/preprocessing/waymo/preprocess_waymo.py --dataset_root ${WAYMO_DIR} --output_root ${PROCESSED_WAYMO_DIR} --splits training validation --num_workers ${NUM_WORKERS}
Link the processed dataset to the codebase:
# PROCESSED_WAYMO_DIR: the directory of the processed Waymo dataset (output dir).
mkdir data
ln -s ${PROCESSED_WAYMO_DIR} ${CODEBASE_DIR}/data/waymo
Download the modelnet40_normal_resampled dataset and link it to the codebase:
mkdir -p data
ln -s ${MODELNET_DIR} ${CODEBASE_DIR}/data/modelnet40_normal_resampled
Train from scratch. The training process is based on the configs in the configs folder. The training script will generate an experiment folder under the exp folder and back up essential code into it. The training config, logs, tensorboard records, and checkpoints will also be saved into the experiment folder during training.
export CUDA_VISIBLE_DEVICES=${CUDA_VISIBLE_DEVICES}
# Script (Recommended)
sh scripts/train.sh -p ${INTERPRETER_PATH} -g ${NUM_GPU} -d ${DATASET_NAME} -c ${CONFIG_NAME} -n ${EXP_NAME}
# Direct
export PYTHONPATH=./
python tools/train.py --config-file ${CONFIG_PATH} --num-gpus ${NUM_GPU} --options save_path=${SAVE_PATH}
For example:
# By script (Recommended)
# -p is default set as python and can be ignored
sh scripts/train.sh -p python -d scannet -c semseg-pt-v2m2-0-base -n semseg-pt-v2m2-0-base
# Direct
export PYTHONPATH=./
python tools/train.py --config-file configs/scannet/semseg-pt-v2m2-0-base.py --options save_path=exp/scannet/semseg-pt-v2m2-0-base
Resume training from a checkpoint. If the training process is interrupted by accident, the following script can resume training from a given checkpoint.
export CUDA_VISIBLE_DEVICES=${CUDA_VISIBLE_DEVICES}
# Script (Recommended)
# simply add "-r true"
sh scripts/train.sh -p ${INTERPRETER_PATH} -g ${NUM_GPU} -d ${DATASET_NAME} -c ${CONFIG_NAME} -n ${EXP_NAME} -r true
# Direct
export PYTHONPATH=./
python tools/train.py --config-file ${CONFIG_PATH} --num-gpus ${NUM_GPU} --options save_path=${SAVE_PATH} resume=True weight=${CHECKPOINT_PATH}
During training, model evaluation is performed on point clouds after grid sampling (voxelization), providing an initial assessment of model performance. However, to get precise evaluation results, testing is essential. The testing process subsamples a dense point cloud into a sequence of voxelized point clouds, ensuring comprehensive coverage of all points. Predictions on these sub-clouds are then collected to form a complete prediction over the entire point cloud. This approach yields higher evaluation results than simply mapping/interpolating the prediction. In addition, our testing code supports TTA (test-time augmentation), which further enhances the stability of evaluation performance.
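The gist of this procedure can be sketched as follows (an illustration only, not the actual tools/test.py implementation; the fragment format is hypothetical):
import torch

def precise_eval(model, fragments, num_points, num_classes):
    # fragments: voxelized sub-clouds that together cover every point; each
    # carries an "index" tensor mapping its voxels back to the dense cloud.
    logits = torch.zeros(num_points, num_classes)
    for frag in fragments:
        with torch.no_grad():
            pred = model(frag)               # per-voxel logits for this fragment
        logits[frag["index"]] += pred.cpu()  # collect onto the dense point cloud
    return logits.argmax(dim=1)              # final per-point prediction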
# By script (Based on experiment folder created by training script)
sh scripts/test.sh -p ${INTERPRETER_PATH} -g ${NUM_GPU} -d ${DATASET_NAME} -n ${EXP_NAME} -w ${CHECKPOINT_NAME}
# Direct
export PYTHONPATH=./
python tools/test.py --config-file ${CONFIG_PATH} --num-gpus ${NUM_GPU} --options save_path=${SAVE_PATH} weight=${CHECKPOINT_PATH}
# By script (Based on experiment folder created by training script)
# -p is default set as python and can be ignored
# -w is default set as model_best and can be ignored
sh scripts/test.sh -p python -d scannet -n semseg-pt-v2m2-0-base -w model_best
# Direct
export PYTHONPATH=./
python tools/test.py --config-file configs/scannet/semseg-pt-v2m2-0-base.py --options save_path=exp/scannet/semseg-pt-v2m2-0-base weight=exp/scannet/semseg-pt-v2m2-0-base/model/model_best.pth
TTA can be disabled by replacing data.test.test_cfg.aug_transform = [...] with:
data = dict(
    train=dict(...),
    val=dict(...),
    test=dict(
        ...,
        test_cfg=dict(
            ...,
            aug_transform=[
                [dict(type="RandomRotateTargetAngle", angle=[0], axis="z", center=[0, 0, 0], p=1)]
            ]
        )
    )
)
Offset is the separator of point clouds in batched data; it is similar to the concept of Batch in PyG. A visual illustration of batch and offset is as follows:
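For example, the two representations can be converted into each other as follows (a minimal sketch with torch tensors; Pointcept ships its own helpers for this):
import torch

# Three point clouds with 2, 3, and 1 points batched together.
batch = torch.tensor([0, 0, 1, 1, 1, 2])  # PyG-style per-point batch index
offset = torch.tensor([2, 5, 6])          # Pointcept-style cumulative end positions

def batch2offset(batch):
    # count points per cloud, then take the cumulative sum
    return torch.cumsum(torch.bincount(batch), dim=0)

def offset2batch(offset):
    # repeat each cloud index by its number of points
    counts = torch.diff(offset, prepend=torch.zeros(1, dtype=offset.dtype))
    return torch.repeat_interleave(torch.arange(len(offset)), counts)

assert torch.equal(batch2offset(batch), offset)
assert torch.equal(offset2batch(offset), batch)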
Pointcept provides SparseUNet implemented with spconv and MinkowskiEngine. The spconv version is recommended since spconv is easy to install and faster than MinkowskiEngine; spconv is also widely applied in outdoor perception.
The spconv version of SparseUNet in the codebase was fully rewritten from the MinkowskiEngine version; example running scripts are as follows:
# ScanNet val
sh scripts/train.sh -g 4 -d scannet -c semseg-spunet-v1m1-0-base -n semseg-spunet-v1m1-0-base
# ScanNet200
sh scripts/train.sh -g 4 -d scannet200 -c semseg-spunet-v1m1-0-base -n semseg-spunet-v1m1-0-base
# S3DIS
sh scripts/train.sh -g 4 -d s3dis -c semseg-spunet-v1m1-0-base -n semseg-spunet-v1m1-0-base
# S3DIS (with normal)
sh scripts/train.sh -g 4 -d s3dis -c semseg-spunet-v1m1-0-cn-base -n semseg-spunet-v1m1-0-cn-base
# SemanticKITTI
sh scripts/train.sh -g 4 -d semantic_kitti -c semseg-spunet-v1m1-0-base -n semseg-spunet-v1m1-0-base
# nuScenes
sh scripts/train.sh -g 4 -d nuscenes -c semseg-spunet-v1m1-0-base -n semseg-spunet-v1m1-0-base
# ModelNet40
sh scripts/train.sh -g 2 -d modelnet40 -c cls-spunet-v1m1-0-base -n cls-spunet-v1m1-0-base
# ScanNet Data Efficient
sh scripts/train.sh -g 4 -d scannet -c semseg-spunet-v1m1-2-efficient-la20 -n semseg-spunet-v1m1-2-efficient-la20
sh scripts/train.sh -g 4 -d scannet -c semseg-spunet-v1m1-2-efficient-la50 -n semseg-spunet-v1m1-2-efficient-la50
sh scripts/train.sh -g 4 -d scannet -c semseg-spunet-v1m1-2-efficient-la100 -n semseg-spunet-v1m1-2-efficient-la100
sh scripts/train.sh -g 4 -d scannet -c semseg-spunet-v1m1-2-efficient-la200 -n semseg-spunet-v1m1-2-efficient-la200
sh scripts/train.sh -g 4 -d scannet -c semseg-spunet-v1m1-2-efficient-lr1 -n semseg-spunet-v1m1-2-efficient-lr1
sh scripts/train.sh -g 4 -d scannet -c semseg-spunet-v1m1-2-efficient-lr5 -n semseg-spunet-v1m1-2-efficient-lr5
sh scripts/train.sh -g 4 -d scannet -c semseg-spunet-v1m1-2-efficient-lr10 -n semseg-spunet-v1m1-2-efficient-lr10
sh scripts/train.sh -g 4 -d scannet -c semseg-spunet-v1m1-2-efficient-lr20 -n semseg-spunet-v1m1-2-efficient-lr20
# Profile model run time
sh scripts/train.sh -g 4 -d scannet -c semseg-spunet-v1m1-0-enable-profiler -n semseg-spunet-v1m1-0-enable-profiler
The MinkowskiEngine version of SparseUNet in the codebase was modified from the original MinkowskiEngine repo; example running scripts are as follows:
# Uncomment "# from .sparse_unet import *" in "pointcept/models/__init__.py"
# Uncomment "# from .mink_unet import *" in "pointcept/models/sparse_unet/__init__.py"
# ScanNet
sh scripts/train.sh -g 4 -d scannet -c semseg-minkunet34c-0-base -n semseg-minkunet34c-0-base
# ScanNet200
sh scripts/train.sh -g 4 -d scannet200 -c semseg-minkunet34c-0-base -n semseg-minkunet34c-0-base
# S3DIS
sh scripts/train.sh -g 4 -d s3dis -c semseg-minkunet34c-0-base -n semseg-minkunet34c-0-base
# SemanticKITTI
sh scripts/train.sh -g 2 -d semantic_kitti -c semseg-minkunet34c-0-base -n semseg-minkunet34c-0-base
Introducing Omni-Adaptive 3D CNNs (OA-CNNs), a family of networks that integrates a lightweight module to greatly enhance the adaptivity of sparse CNNs at minimal computational cost. Without any self-attention modules, OA-CNNs favorably surpass point transformers in accuracy in both indoor and outdoor scenes, with much less latency and memory cost. For issues related to OA-CNNs, you can @Pbihao.
# ScanNet
sh scripts/train.sh -g 4 -d scannet -c semseg-oacnns-v1m1-0-base -n semseg-oacnns-v1m1-0-base
PTv3 is an efficient backbone model that achieves SOTA performance across indoor and outdoor scenarios. The full PTv3 relies on FlashAttention, and FlashAttention requires CUDA 11.6 and above; make sure your local Pointcept environment satisfies this requirement.
If you cannot upgrade your local environment to satisfy the requirement (CUDA >= 11.6), you can disable FlashAttention by setting the model parameter enable_flash to false and reducing enc_patch_size and dec_patch_size to a suitable value (e.g. 128).
FlashAttention forcibly disables RPE and forces the precision down to fp16. If you need these features, disable enable_flash and adjust enable_rpe, upcast_attention, and upcast_softmax, e.g. as sketched below.
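A sketch of such an override in a config file (only the parameters named above are shown; the surrounding structure is abbreviated, and the patch-size tuple lengths must match the number of encoder/decoder stages in your config, so check an actual semseg-pt-v3m1-* config for the full schema):
# Fallback settings for environments without FlashAttention (CUDA < 11.6).
model = dict(
    backbone=dict(
        enable_flash=False,     # disable FlashAttention
        enc_patch_size=(128, 128, 128, 128, 128),  # reduced, e.g. to 128
        dec_patch_size=(128, 128, 128, 128),
        enable_rpe=True,        # RPE is available again without FlashAttention
        upcast_attention=True,  # keep attention computation in fp32
        upcast_softmax=True,    # keep softmax in fp32
    ),
)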
Detailed instructions and experiment records (containing weights) are available in the project repo. Example running scripts are as follows:
# Scratched ScanNet
sh scripts/train.sh -g 4 -d scannet -c semseg-pt-v3m1-0-base -n semseg-pt-v3m1-0-base
# PPT joint training (ScanNet + Structured3D) and evaluate in ScanNet
sh scripts/train.sh -g 8 -d scannet -c semseg-pt-v3m1-1-ppt-extreme -n semseg-pt-v3m1-1-ppt-extreme
# Scratched ScanNet200
sh scripts/train.sh -g 4 -d scannet200 -c semseg-pt-v3m1-0-base -n semseg-pt-v3m1-0-base
# Fine-tuning from PPT joint training (ScanNet + Structured3D) with ScanNet200
# PTV3_PPT_WEIGHT_PATH: Path to model weight trained by PPT multi-dataset joint training
# e.g. exp/scannet/semseg-pt-v3m1-1-ppt-extreme/model/model_best.pth
sh scripts/train.sh -g 4 -d scannet200 -c semseg-pt-v3m1-1-ppt-ft -n semseg-pt-v3m1-1-ppt-ft -w ${PTV3_PPT_WEIGHT_PATH}
# Scratched ScanNet++
sh scripts/train.sh -g 4 -d scannetpp -c semseg-pt-v3m1-0-base -n semseg-pt-v3m1-0-base
# Scratched ScanNet++ test
sh scripts/train.sh -g 4 -d scannetpp -c semseg-pt-v3m1-1-submit -n semseg-pt-v3m1-1-submit
# Scratched S3DIS
sh scripts/train.sh -g 4 -d s3dis -c semseg-pt-v3m1-0-base -n semseg-pt-v3m1-0-base
# an example for disbale flash_attention and enable rpe.
sh scripts/train.sh -g 4 -d s3dis -c semseg-pt-v3m1-1-rpe -n semseg-pt-v3m1-0-rpe
# PPT joint training (ScanNet + S3DIS + Structured3D) and evaluate in ScanNet
sh scripts/train.sh -g 8 -d s3dis -c semseg-pt-v3m1-1-ppt-extreme -n semseg-pt-v3m1-1-ppt-extreme
# S3DIS 6-fold cross validation
# 1. The default configs are evaluated on Area_5, modify the "data.train.split", "data.val.split", and "data.test.split" to make the config evaluated on Area_1 ~ Area_6 respectively.
# 2. Train and evaluate the model on each split of areas and gather result files located in "exp/s3dis/EXP_NAME/result/Area_x.pth" in one single folder, noted as RECORD_FOLDER.
# 3. Run the following script to get S3DIS 6-fold cross validation performance:
export PYTHONPATH=./
python tools/test_s3dis_6fold.py --record_root ${RECORD_FOLDER}
# Scratched nuScenes
sh scripts/train.sh -g 4 -d nuscenes -c semseg-pt-v3m1-0-base -n semseg-pt-v3m1-0-base
# Scratched Waymo
sh scripts/train.sh -g 4 -d waymo -c semseg-pt-v3m1-0-base -n semseg-pt-v3m1-0-base
# More configs and exp records for PTv3 will be available soon.
Indoor semantic segmentation:
| Model | Benchmark | Additional Data | Num GPUs | Val mIoU | Config | Tensorboard | Exp Record |
|---|---|---|---|---|---|---|---|
| PTv3 | ScanNet | ✗ | 4 | 77.6% | link | link | link |
| PTv3 + PPT | ScanNet | ✓ | 8 | 78.5% | link | link | link |
| PTv3 | ScanNet200 | ✗ | 4 | 35.3% | link | link | link |
| PTv3 + PPT | ScanNet200 | ✓ (f.t.) | 4 | | | | |
| PTv3 | S3DIS (Area 5) | ✗ | 4 | 73.6% | link | link | link |
| PTv3 + PPT | S3DIS (Area 5) | ✓ | 8 | 75.4% | link | link | link |
Outdoor semantic segmentation:
| Model | Benchmark | Additional Data | Num GPUs | Val mIoU | Config | Tensorboard | Exp Record |
|---|---|---|---|---|---|---|---|
| PTv3 | nuScenes | ✗ | 4 | 80.3% | link | link | link |
| PTv3 + PPT | nuScenes | ✓ | 8 | | | | |
| PTv3 | SemanticKITTI | ✗ | 4 | | | | |
| PTv3 + PPT | SemanticKITTI | ✓ | 8 | | | | |
| PTv3 | Waymo | ✗ | 4 | 71.2% | link | link | link (log only) |
| PTv3 + PPT | Waymo | ✓ | 8 | | | | |
*Released model weights are trained with v1.5.1; weights for v1.5.2 are still in progress.
The original PTv2 was trained on 4 × RTX A6000 (48G memory). Even with AMP enabled, the memory cost of the original PTv2 is slightly larger than 24G. Considering that GPUs with 24G memory are more accessible, I tuned PTv2 on the latest Pointcept and made it runnable on 4 × RTX 3090 machines.
PTv2 mode2 enables AMP and disables the Position Encoding Multiplier and Grouped Linear. In our further research, we found that precise coordinates are not necessary for point cloud understanding (replacing precise coordinates with grid coordinates, sketched below, does not influence performance; SparseUNet is also an example). As for Grouped Linear, my implementation seems to cost more memory than the Linear layer provided by PyTorch. Benefiting from the codebase and better parameter tuning, we also relieve the overfitting problem, and the reproduced performance is even better than the results reported in the paper.
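The grid-coordinate observation boils down to quantizing coordinates before they reach the model; a minimal sketch (the grid size value is illustrative):
import torch

def grid_coord(coord, grid_size=0.02):
    # Anchor at the cloud's minimum corner, then quantize to integer cells.
    # Per the observation above, feeding these grid coordinates instead of
    # precise ones does not hurt point cloud understanding performance.
    coord = coord - coord.min(dim=0).values
    return torch.div(coord, grid_size, rounding_mode="floor").int()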
Example running scripts are as follows:
# ptv2m2: PTv2 mode2, disable PEM & Grouped Linear, GPU memory cost < 24G (recommend)
# ScanNet
sh scripts/train.sh -g 4 -d scannet -c semseg-pt-v2m2-0-base -n semseg-pt-v2m2-0-base
sh scripts/train.sh -g 4 -d scannet -c semseg-pt-v2m2-3-lovasz -n semseg-pt-v2m2-3-lovasz
# ScanNet test
sh scripts/train.sh -g 4 -d scannet -c semseg-pt-v2m2-1-submit -n semseg-pt-v2m2-1-submit
# ScanNet200
sh scripts/train.sh -g 4 -d scannet200 -c semseg-pt-v2m2-0-base -n semseg-pt-v2m2-0-base
# ScanNet++
sh scripts/train.sh -g 4 -d scannetpp -c semseg-pt-v2m2-0-base -n semseg-pt-v2m2-0-base
# ScanNet++ test
sh scripts/train.sh -g 4 -d scannetpp -c semseg-pt-v2m2-1-submit -n semseg-pt-v2m2-1-submit
# S3DIS
sh scripts/train.sh -g 4 -d s3dis -c semseg-pt-v2m2-0-base -n semseg-pt-v2m2-0-base
# SemanticKITTI
sh scripts/train.sh -g 4 -d semantic_kitti -c semseg-pt-v2m2-0-base -n semseg-pt-v2m2-0-base
# nuScenes
sh scripts/train.sh -g 4 -d nuscenes -c semseg-pt-v2m2-0-base -n semseg-pt-v2m2-0-base
PTv2 mode1 is the original PTv2 we reported in the paper; example running scripts are as follows:
# ptv2m1: PTv2 mode1, Original PTv2, GPU memory cost > 24G
# ScanNet
sh scripts/train.sh -g 4 -d scannet -c semseg-pt-v2m1-0-base -n semseg-pt-v2m1-0-base
# ScanNet200
sh scripts/train.sh -g 4 -d scannet200 -c semseg-pt-v2m1-0-base -n semseg-pt-v2m1-0-base
# S3DIS
sh scripts/train.sh -g 4 -d s3dis -c semseg-pt-v2m1-0-base -n semseg-pt-v2m1-0-base
The original PTv1 is also available in the Pointcept codebase. I haven't run PTv1 for a long time, but I have made sure the example running scripts work well.
# ScanNet
sh scripts/train.sh -g 4 -d scannet -c semseg-pt-v1-0-base -n semseg-pt-v1-0-base
# ScanNet200
sh scripts/train.sh -g 4 -d scannet200 -c semseg-pt-v1-0-base -n semseg-pt-v1-0-base
# S3DIS
sh scripts/train.sh -g 4 -d s3dis -c semseg-pt-v1-0-base -n semseg-pt-v1-0-base
For Stratified Transformer, install the additional requirements:
pip install torch-points3d
# Fix dependence, caused by installing torch-points3d
pip uninstall SharedArray
pip install SharedArray==3.2.1
cd libs/pointops2
python setup.py install
cd ../..
Uncomment # from .stratified_transformer import * in pointcept/models/__init__.py, then run the example scripts:
# stv1m1: Stratified Transformer mode1, modified from the original Stratified Transformer code.
# stv1m2: Stratified Transformer mode2, my rewrite version (recommend).
# ScanNet
sh scripts/train.sh -g 4 -d scannet -c semseg-st-v1m2-0-refined -n semseg-st-v1m2-0-refined
sh scripts/train.sh -g 4 -d scannet -c semseg-st-v1m1-0-origin -n semseg-st-v1m1-0-origin
# ScanNet200
sh scripts/train.sh -g 4 -d scannet200 -c semseg-st-v1m2-0-refined -n semseg-st-v1m2-0-refined
# S3DIS
sh scripts/train.sh -g 4 -d s3dis -c semseg-st-v1m2-0-refined -n semseg-st-v1m2-0-refined
SPVCNN is the baseline model of SPVNAS; it is also a practical baseline for outdoor datasets. Install torchsparse:
# refer https://github.com/mit-han-lab/torchsparse
# install method without sudo apt install
conda install google-sparsehash -c bioconda
export C_INCLUDE_PATH=${CONDA_PREFIX}/include:$C_INCLUDE_PATH
export CPLUS_INCLUDE_PATH=${CONDA_PREFIX}/include:$CPLUS_INCLUDE_PATH
pip install --upgrade git+https://github.com/mit-han-lab/torchsparse.git
Run the example script:
# SemanticKITTI
sh scripts/train.sh -g 2 -d semantic_kitti -c semseg-spvcnn-v1m1-0-base -n semseg-spvcnn-v1m1-0-base
OctFormer from OctFormer: Octree-based Transformers for 3D Point Clouds. Install the additional requirements:
cd libs
git clone https://github.com/octree-nn/dwconv.git
pip install ./dwconv
pip install ocnn
Uncomment # from .octformer import * in pointcept/models/__init__.py, then run the example script:
# ScanNet
sh scripts/train.sh -g 4 -d scannet -c semseg-octformer-v1m1-0-base -n semseg-octformer-v1m1-0-base
Swin3D from Swin3D: A Pretrained Transformer Backbone for 3D Indoor Scene Understanding. Install the additional requirements:
# 1. Install MinkEngine v0.5.4, follow readme in https://github.com/NVIDIA/MinkowskiEngine;
# 2. Install Swin3D, mainly for cuda operation:
cd libs
git clone https://github.com/microsoft/Swin3D.git
cd Swin3D
pip install ./
Uncomment # from .swin3d import * in pointcept/models/__init__.py. Pre-train Swin3D on Structured3D with the following scripts:
# Structured3D + Swin-S
sh scripts/train.sh -g 4 -d structured3d -c semseg-swin3d-v1m1-0-small -n semseg-swin3d-v1m1-0-small
# Structured3D + Swin-L
sh scripts/train.sh -g 4 -d structured3d -c semseg-swin3d-v1m1-1-large -n semseg-swin3d-v1m1-1-large
# Addition
# Structured3D + SpUNet
sh scripts/train.sh -g 4 -d structured3d -c semseg-spunet-v1m1-0-base -n semseg-spunet-v1m1-0-base
# Structured3D + PTv2
sh scripts/train.sh -g 4 -d structured3d -c semseg-pt-v2m2-0-base -n semseg-pt-v2m2-0-base
Fine-tune with the following example scripts:
# ScanNet + Swin-S
sh scripts/train.sh -g 4 -d scannet -w exp/structured3d/semseg-swin3d-v1m1-1-large/model/model_last.pth -c semseg-swin3d-v1m1-0-small -n semseg-swin3d-v1m1-0-small
# ScanNet + Swin-L
sh scripts/train.sh -g 4 -d scannet -w exp/structured3d/semseg-swin3d-v1m1-1-large/model/model_last.pth -c semseg-swin3d-v1m1-1-large -n semseg-swin3d-v1m1-1-large
# S3DIS + Swin-S (here we provide config support S3DIS normal vector)
sh scripts/train.sh -g 4 -d s3dis -w exp/structured3d/semseg-swin3d-v1m1-1-large/model/model_last.pth -c semseg-swin3d-v1m1-0-small -n semseg-swin3d-v1m1-0-small
# S3DIS + Swin-L (here we provide config support S3DIS normal vector)
sh scripts/train.sh -g 4 -d s3dis -w exp/structured3d/semseg-swin3d-v1m1-1-large/model/model_last.pth -c semseg-swin3d-v1m1-1-large -n semseg-swin3d-v1m1-1-large
Context-Aware Classifier is a segmentor that can further boost the performance of each backbone, as a replacement for the default segmentor. Train with the following example scripts:
# ScanNet
sh scripts/train.sh -g 4 -d scannet -c semseg-cac-v1m1-0-spunet-base -n semseg-cac-v1m1-0-spunet-base
sh scripts/train.sh -g 4 -d scannet -c semseg-cac-v1m1-1-spunet-lovasz -n semseg-cac-v1m1-1-spunet-lovasz
sh scripts/train.sh -g 4 -d scannet -c semseg-cac-v1m1-2-ptv2-lovasz -n semseg-cac-v1m1-2-ptv2-lovasz
# ScanNet200
sh scripts/train.sh -g 4 -d scannet200 -c semseg-cac-v1m1-0-spunet-base -n semseg-cac-v1m1-0-spunet-base
sh scripts/train.sh -g 4 -d scannet200 -c semseg-cac-v1m1-1-spunet-lovasz -n semseg-cac-v1m1-1-spunet-lovasz
sh scripts/train.sh -g 4 -d scannet200 -c semseg-cac-v1m1-2-ptv2-lovasz -n semseg-cac-v1m1-2-ptv2-lovasz
PointGroup is a baseline framework for point cloud instance segmentation. Install the additional requirements:
conda install -c bioconda google-sparsehash
cd libs/pointgroup_ops
python setup.py install --include_dirs=${CONDA_PREFIX}/include
cd ../..
Uncomment # from .point_group import * in pointcept/models/__init__.py, then run the example scripts:
# ScanNet
sh scripts/train.sh -g 4 -d scannet -c insseg-pointgroup-v1m1-0-spunet-base -n insseg-pointgroup-v1m1-0-spunet-base
# S3DIS
sh scripts/train.sh -g 4 -d s3dis -c insseg-pointgroup-v1m1-0-spunet-base -n insseg-pointgroup-v1m1-0-spunet-base
For Masked Scene Contrast (MSC), pre-train with the following example script:
# ScanNet
sh scripts/train.sh -g 8 -d scannet -c pretrain-msc-v1m1-0-spunet-base -n pretrain-msc-v1m1-0-spunet-base
Fine-tune with the following example scripts:
# ScanNet20 Semantic Segmentation
sh scripts/train.sh -g 8 -d scannet -w exp/scannet/pretrain-msc-v1m1-0-spunet-base/model/model_last.pth -c semseg-spunet-v1m1-4-ft -n semseg-msc-v1m1-0f-spunet-base
# ScanNet20 Instance Segmentation (enable PointGroup before running the script)
sh scripts/train.sh -g 4 -d scannet -w exp/scannet/pretrain-msc-v1m1-0-spunet-base/model/model_last.pth -c insseg-pointgroup-v1m1-0-spunet-base -n insseg-msc-v1m1-0f-pointgroup-spunet-base
PPT presents a multi-dataset pre-training framework and is compatible with various existing pre-training frameworks and backbones. Run PPT supervised joint training with the following example scripts:
# ScanNet + Structured3d, validate on ScanNet (S3DIS might cause long data time, w/o S3DIS for a quick validation) >= 3090 * 8
sh scripts/train.sh -g 8 -d scannet -c semseg-ppt-v1m1-0-sc-st-spunet -n semseg-ppt-v1m1-0-sc-st-spunet
sh scripts/train.sh -g 8 -d scannet -c semseg-ppt-v1m1-1-sc-st-spunet-submit -n semseg-ppt-v1m1-1-sc-st-spunet-submit
# ScanNet + S3DIS + Structured3d, validate on S3DIS (>= a100 * 8)
sh scripts/train.sh -g 8 -d s3dis -c semseg-ppt-v1m1-0-s3-sc-st-spunet -n semseg-ppt-v1m1-0-s3-sc-st-spunet
# SemanticKITTI + nuScenes + Waymo, validate on SemanticKITTI (bs12 >= 3090 * 4; bs24 >= 3090 * 8, v1m1-0 is still on tuning)
sh scripts/train.sh -g 4 -d semantic_kitti -c semseg-ppt-v1m1-0-nu-sk-wa-spunet -n semseg-ppt-v1m1-0-nu-sk-wa-spunet
sh scripts/train.sh -g 4 -d semantic_kitti -c semseg-ppt-v1m2-0-sk-nu-wa-spunet -n semseg-ppt-v1m2-0-sk-nu-wa-spunet
sh scripts/train.sh -g 4 -d semantic_kitti -c semseg-ppt-v1m2-1-sk-nu-wa-spunet-submit -n semseg-ppt-v1m2-1-sk-nu-wa-spunet-submit
# SemanticKITTI + nuScenes + Waymo, validate on nuScenes (bs12 >= 3090 * 4; bs24 >= 3090 * 8, v1m1-0 is still on tuning)
sh scripts/train.sh -g 4 -d nuscenes -c semseg-ppt-v1m1-0-nu-sk-wa-spunet -n semseg-ppt-v1m1-0-nu-sk-wa-spunet
sh scripts/train.sh -g 4 -d nuscenes -c semseg-ppt-v1m2-0-nu-sk-wa-spunet -n semseg-ppt-v1m2-0-nu-sk-wa-spunet
sh scripts/train.sh -g 4 -d nuscenes -c semseg-ppt-v1m2-1-nu-sk-wa-spunet-submit -n semseg-ppt-v1m2-1-nu-sk-wa-spunet-submit
For PointContrast, preprocess and link the ScanNet-Pair dataset first:
# RAW_SCANNET_DIR: the directory of downloaded ScanNet v2 raw dataset.
# PROCESSED_SCANNET_PAIR_DIR: the directory of processed ScanNet pair dataset (output dir).
python pointcept/datasets/preprocessing/scannet/scannet_pair/preprocess.py --dataset_root ${RAW_SCANNET_DIR} --output_root ${PROCESSED_SCANNET_PAIR_DIR}
ln -s ${PROCESSED_SCANNET_PAIR_DIR} ${CODEBASE_DIR}/data/scannet
Pre-train with the following example script:
# ScanNet
sh scripts/train.sh -g 8 -d scannet -c pretrain-msc-v1m1-1-spunet-pointcontrast -n pretrain-msc-v1m1-1-spunet-pointcontrast
For Contrastive Scene Contexts, pre-train with the following example script:
# ScanNet
sh scripts/train.sh -g 8 -d scannet -c pretrain-msc-v1m2-0-spunet-csc -n pretrain-msc-v1m2-0-spunet-csc
Pointcept is designed by Xiaoyang, named by Yixing, and its logo is created by Yuechen. It is derived from Hengshuang's Semseg and inspired by several repos, e.g., MinkowskiEngine, pointnet2, mmcv, and Detectron2.