Roman Suvorov, Elizaveta Logacheva, Anton Mashikhin, Anastasia Remizova, Arsenii Ashukha, Aleksei Silvestrov, Naejin Kong, Harshith Goka, Kiwoong Park, Victor Lempitsky.
LaMa generalizes surprisingly well to much higher resolutions (~2K❗️) than it saw during training (256x256), and achieves excellent performance even in challenging scenarios, e.g. completion of periodic structures.
[Project page] [arXiv] [Supplementary] [BibTeX] [Casual GAN Papers summary]
Try it out in Google Colab
(Feel free to share your paper by creating an issue)
(Feel free to share your applications/implementations/demos by creating an issue)
Clone the repo: git clone https://github.com/advimman/lama.git
There are three options for setting up the environment:
Python Virtualenv:
virtualenv inpenv --python=/usr/bin/python3
source inpenv/bin/activate
pip install torch==1.8.0 torchvision==0.9.0
cd lama
pip install -r requirements.txt
Conda:
% Install conda for Linux, for other OS download miniconda at https://docs.conda.io/en/latest/miniconda.html
wget https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh
bash Miniconda3-latest-Linux-x86_64.sh -b -p $HOME/miniconda
$HOME/miniconda/bin/conda init bash
cd lama
conda env create -f conda_env.yml
conda activate lama
conda install pytorch torchvision torchaudio cudatoolkit=10.2 -c pytorch -y
pip install pytorch-lightning==1.2.9
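For the virtualenv or conda setups, an optional sanity check confirms that PyTorch imports and, if present, can see the GPU:
python3 -c "import torch; print(torch.__version__, torch.cuda.is_available())"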
Docker: no actions needed.
Run
cd lama
export TORCH_HOME=$(pwd) && export PYTHONPATH=$(pwd)
1. Download the pre-trained models
The best model (Places2, Places Challenge):
curl -LJO https://huggingface.co/smartywu/big-lama/resolve/main/big-lama.zip
unzip big-lama.zip
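After unpacking, the big-lama directory should hold the model config and checkpoint; model.path in the prediction commands below points at this folder:
ls big-lama   # expect a config plus a models/ subfolder with the checkpoint (as shipped in the archive)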
All models (Places & CelebA-HQ):
Download the archive from https://drive.google.com/drive/folders/1B2x7eQDgecTL0oh3LSIBDGj0fTxs6Ips?usp=drive_link (Google Drive), then:
unzip lama-models.zip
2. Prepare images and masks
Download the test images:
unzip LaMa_test_images.zip
image1_mask001.png
image1.png
image2_mask001.png
image2.png
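Masks are paired with images by a shared filename prefix, i.e. image1_mask001.png is applied to image1.png. A minimal sketch for adding your own pair (file names here are hypothetical; in LaMa masks the region to inpaint is typically white):
mkdir -p my_images
cp /path/to/photo.png my_images/image3.png          # image to inpaint
cp /path/to/mask.png my_images/image3_mask001.png   # its mask, same prefix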
Specify image_suffix, e.g. .png or .jpg or _input.jpg, in configs/prediction/default.yaml.
3. Predict
On the host machine:
python3 bin/predict.py model.path=$(pwd)/big-lama indir=$(pwd)/LaMa_test_images outdir=$(pwd)/output
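The inpainted results are written to outdir; a quick check that outputs were produced:
ls $(pwd)/output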
Or in Docker:
The following command will pull the Docker image from Docker Hub and execute the prediction script:
bash docker/2_predict.sh $(pwd)/big-lama $(pwd)/LaMa_test_images $(pwd)/output device=cpu
Docker with CUDA:
bash docker/2_predict_with_gpu.sh $(pwd)/big-lama $(pwd)/LaMa_test_images $(pwd)/output
4. Predict with refinement
On the host machine:
python3 bin/predict.py refine=True model.path=$(pwd)/big-lama indir=$(pwd)/LaMa_test_images outdir=$(pwd)/output
Train and Eval
Make sure you run:
cd lama
export TORCH_HOME=$(pwd) && export PYTHONPATH=$(pwd)
Then download the models for perceptual loss:
mkdir -p ade20k/ade20k-resnet50dilated-ppm_deepsup/
wget -P ade20k/ade20k-resnet50dilated-ppm_deepsup/ http://sceneparsing.csail.mit.edu/model/pytorch/ade20k-resnet50dilated-ppm_deepsup/encoder_epoch_20.pth
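The training configs expect the checkpoint at this path relative to TORCH_HOME; verify the download landed there:
ls ade20k/ade20k-resnet50dilated-ppm_deepsup/   # should list encoder_epoch_20.pth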
Places
On the host machine:
# Download data from http://places2.csail.mit.edu/download.html
# Places365-Standard: Train(105GB)/Test(19GB)/Val(2.1GB) from High-resolution images section
wget http://data.csail.mit.edu/places/places365/train_large_places365standard.tar
wget http://data.csail.mit.edu/places/places365/val_large.tar
wget http://data.csail.mit.edu/places/places365/test_large.tar
# Unpack train/test/val data and create .yaml config for it
bash fetch_data/places_standard_train_prepare.sh
bash fetch_data/places_standard_test_val_prepare.sh
# Sample images for test and viz at the end of epoch
bash fetch_data/places_standard_test_val_sample.sh
bash fetch_data/places_standard_test_val_gen_masks.sh
# Run training
python3 bin/train.py -cn lama-fourier location=places_standard
# To evaluate trained model and report metrics as in our paper
# we need to sample previously unseen 30k images and generate masks for them
bash fetch_data/places_standard_evaluation_prepare_data.sh
# Infer model on thick/thin/medium masks in 256 and 512 and run evaluation
# like this:
python3 bin/predict.py \
model.path=$(pwd)/experiments/<user>_<date:time>_lama-fourier_/ \
indir=$(pwd)/places_standard_dataset/evaluation/random_thick_512/ \
outdir=$(pwd)/inference/random_thick_512 model.checkpoint=last.ckpt
python3 bin/evaluate_predicts.py \
$(pwd)/configs/eval2_gpu.yaml \
$(pwd)/places_standard_dataset/evaluation/random_thick_512/ \
$(pwd)/inference/random_thick_512 \
$(pwd)/inference/random_thick_512_metrics.csv
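# The predict step can be looped over all mask kinds; a sketch (the loop
# itself is an addition here, folder names follow the preparation script above):
for kind in thick thin medium; do
  python3 bin/predict.py \
    model.path=$(pwd)/experiments/<user>_<date:time>_lama-fourier_/ \
    indir=$(pwd)/places_standard_dataset/evaluation/random_${kind}_512/ \
    outdir=$(pwd)/inference/random_${kind}_512 model.checkpoint=last.ckpt
done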
Docker: TODO
CelebA
On the host machine:
# Make sure you are in the lama folder
cd lama
export TORCH_HOME=$(pwd) && export PYTHONPATH=$(pwd)
# Download CelebA-HQ dataset
# Download data256x256.zip from https://drive.google.com/drive/folders/11Vz0fqHS2rXDb5pprgTjpD7S2BAJhi1P
# unzip & split into train/test/visualization & create config for it
bash fetch_data/celebahq_dataset_prepare.sh
# generate masks for test and visual_test at the end of epoch
bash fetch_data/celebahq_gen_masks.sh
# Run training
python3 bin/train.py -cn lama-fourier-celeba data.batch_size=10
# Infer model on thick/thin/medium masks in 256 and run evaluation
# like this:
python3 bin/predict.py \
model.path=$(pwd)/experiments/<user>_<date:time>_lama-fourier-celeba_/ \
indir=$(pwd)/celeba-hq-dataset/visual_test_256/random_thick_256/ \
outdir=$(pwd)/inference/celeba_random_thick_256 model.checkpoint=last.ckpt
Docker: TODO
Places Challenge (to train Big-LaMa)
On the host machine:
# This script downloads multiple .tar files in parallel and unpacks them
# Places365-Challenge: Train(476GB) from High-resolution images (to train Big-Lama)
bash places_challenge_train_download.sh
TODO: prepare
TODO: train
TODO: eval
Docker: TODO
Create your own data
If you get stuck at one of the following steps, check the bash scripts for data preparation and mask generation from the CelebA-HQ section above.
On the host machine:
# Make sure you are in the lama folder
cd lama
export TORCH_HOME=$(pwd) && export PYTHONPATH=$(pwd)
# You need to prepare the following image folders:
$ ls my_dataset
train
val_source # 2000 or more images
visual_test_source # 100 or more images
eval_source # 2000 or more images
# LaMa generates random masks for the train data on the fly,
# but needs fixed masks for test and visual_test for consistency of evaluation.
# Suppose we want to evaluate and pick the best models
# on a 512x512 val dataset with thick/thin/medium masks,
# and your images have the .jpg extension:
# <size> is one of: thick, thin, medium
python3 bin/gen_mask_dataset.py \
$(pwd)/configs/data_gen/random_<size>_512.yaml \
my_dataset/val_source/ \
my_dataset/val/random_<size>_512/ \
--ext jpg
# So the mask generator will:
# 1. resize and crop val images and save them as .png
# 2. generate masks
ls my_dataset/val/random_medium_512/
image1_crop000_mask000.png
image1_crop000.png
image2_crop000_mask000.png
image2_crop000.png
...
# Generate thick, thin, medium masks for visual_test folder:
python3 bin/gen_mask_dataset.py \
$(pwd)/configs/data_gen/random_<size>_512.yaml \
my_dataset/visual_test_source/ \
my_dataset/visual_test/random_<size>_512/ \
--ext jpg
ls my_dataset/visual_test/random_thick_512/
image1_crop000_mask000.png
image1_crop000.png
image2_crop000_mask000.png
image2_crop000.png
...
# Same process for eval_source image folder:
python3 bin/gen_mask_dataset.py \
$(pwd)/configs/data_gen/random_<size>_512.yaml \
my_dataset/eval_source/ \
my_dataset/eval/random_<size>_512/ \
--ext jpg
# Generate a location config file that points to these folders:
touch my_dataset.yaml
echo "data_root_dir: $(pwd)/my_dataset/" >> my_dataset.yaml
echo "out_root_dir: $(pwd)/experiments/" >> my_dataset.yaml
echo "tb_dir: $(pwd)/tb_logs/" >> my_dataset.yaml
mv my_dataset.yaml ${PWD}/configs/training/location/
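# For reference, the resulting location config should read like this
# (with $(pwd) expanded to your actual repo path, shown hypothetically):
# data_root_dir: /home/user/lama/my_dataset/
# out_root_dir: /home/user/lama/experiments/
# tb_dir: /home/user/lama/tb_logs/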
# Check data config for consistency with my_dataset folder structure:
$ cat ${PWD}/configs/training/data/abl-04-256-mh-dist
...
train:
  indir: ${location.data_root_dir}/train
  ...
val:
  indir: ${location.data_root_dir}/val
  img_suffix: .png
visual_test:
  indir: ${location.data_root_dir}/visual_test
  img_suffix: .png
# Run training
python3 bin/train.py -cn lama-fourier location=my_dataset data.batch_size=10
# Evaluation: LaMa training procedure picks best few models according to
# scores on my_dataset/val/
# To evaluate one of your best models (e.g. at epoch=32)
# on previously unseen my_dataset/eval do the following
# for thin, thick and medium:
# infer:
python3 bin/predict.py \
model.path=$(pwd)/experiments/<user>_<date:time>_lama-fourier_/ \
indir=$(pwd)/my_dataset/eval/random_<size>_512/ \
outdir=$(pwd)/inference/my_dataset/random_<size>_512 \
model.checkpoint=epoch32.ckpt
# metrics calculation:
python3 bin/evaluate_predicts.py \
$(pwd)/configs/eval2_gpu.yaml \
$(pwd)/my_dataset/eval/random_<size>_512/ \
$(pwd)/inference/my_dataset/random_<size>_512 \
$(pwd)/inference/my_dataset/random_<size>_512_metrics.csv
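# evaluate_predicts.py writes its metrics to the CSV given as the last
# argument; one quick way to eyeball the table (pure formatting, no
# assumptions about the column set):
column -s, -t < $(pwd)/inference/my_dataset/random_<size>_512_metrics.csv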
Or in Docker:
TODO: train
TODO: eval
The following command will execute the script that generates random masks:
bash docker/1_generate_masks_from_raw_images.sh \
configs/data_gen/random_medium_512.yaml \
/directory_with_input_images \
/directory_where_to_store_images_and_masks \
--ext png
The test data generation command stores images in the format required for prediction.
The table below describes which configs we used to generate the different test sets from the paper. Note that we do not fix a random seed, so the results will be slightly different each time.
| | Places 512x512 | CelebA 256x256 |
|---|---|---|
| Narrow | random_thin_512.yaml | random_thin_256.yaml |
| Medium | random_medium_512.yaml | random_medium_256.yaml |
| Wide | random_thick_512.yaml | random_thick_256.yaml |
Feel free to change the config path (argument #1) to any other config in configs/data_gen, or adjust the config files themselves.
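For example, to produce a narrow-mask 256px test set for the eval split instead, swap in the corresponding config from the table (paths reuse the my_dataset layout above):
python3 bin/gen_mask_dataset.py \
$(pwd)/configs/data_gen/random_thin_256.yaml \
my_dataset/eval_source/ \
my_dataset/eval/random_thin_256/ \
--ext jpg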
You can also override parameters in the config, e.g.:
python3 bin/train.py -cn <config> data.batch_size=10 run_title=my-title
where the .yaml file extension is omitted.
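Multiple overrides can be combined in one command; e.g., mixing options that appear elsewhere in this README (the run_title value here is just an example):
python3 bin/train.py -cn lama-fourier location=my_dataset data.batch_size=8 run_title=fourier-bs8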
Config names for the models from the paper (to substitute into the training command):
* big-lama
* big-lama-regular
* lama-fourier
* lama-regular
* lama_small_train_masks
They are located in the configs/training folder.
TODO
If you found this code helpful, please consider citing:
@article{suvorov2021resolution,
title={Resolution-robust Large Mask Inpainting with Fourier Convolutions},
author={Suvorov, Roman and Logacheva, Elizaveta and Mashikhin, Anton and Remizova, Anastasia and Ashukha, Arsenii and Silvestrov, Aleksei and Kong, Naejin and Goka, Harshith and Park, Kiwoong and Lempitsky, Victor},
journal={arXiv preprint arXiv:2109.07161},
year={2021}
}