A low-complexity speech enhancement framework for full-band audio (48 kHz) using deep filtering.
For PipeWire integration as a virtual noise suppression microphone, see here.
Run the demo (Linux only) with:

cargo +nightly run -p df-demo --features ui --bin df-demo --release

New DeepFilterNet demo: DeepFilterNet: Perceptually Motivated Real-Time Speech Enhancement
New multi-frame filtering paper: Deep Multi-Frame Filtering for Hearing Aids
Real-time version and LADSPA plugin:
deep-filter audio-file.wav

DeepFilterNet2 paper: DeepFilterNet2: Towards Real-Time Speech Enhancement on Embedded Devices for Full-Band Audio
Original DeepFilterNet paper: DeepFilterNet: A Low Complexity Speech Enhancement Framework for Full-Band Audio based on Deep Filtering
Download a pre-compiled deep-filter binary from the release page. You can use deep-filter to suppress noise in .wav audio files. Currently, only WAV files with a sampling rate of 48 kHz are supported.
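Since only 48 kHz WAV files are supported, it can help to verify the sample rate before invoking the tool. A minimal stdlib-only sketch (the helper name is ours, not part of deep-filter):

```python
import wave

def is_48k_wav(path: str) -> bool:
    """Return True if the WAV file at `path` uses a 48 kHz sample rate."""
    with wave.open(path, "rb") as wav:
        return wav.getframerate() == 48000
```

Files at other rates should be resampled first, e.g. with sox or ffmpeg.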
USAGE:
deep-filter [OPTIONS] [FILES]...
ARGS:
<FILES>...
OPTIONS:
-D, --compensate-delay
Compensate delay of STFT and model lookahead
-h, --help
Print help information
-m, --model <MODEL>
Path to model tar.gz. Defaults to DeepFilterNet2.
-o, --out-dir <OUT_DIR>
[default: out]
--pf
Enable postfilter
-v, --verbose
Logging verbosity
-V, --version
Print version information

If you want to use the PyTorch backend, e.g. for GPU processing, see the Python usage section below.
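The -D flag exists because the STFT and the model lookahead introduce a fixed algorithmic delay, so the enhanced signal is shifted relative to the input. Conceptually, the compensation just drops the first delay samples and pads the end. A stdlib-only sketch (the 480-sample delay below is a made-up placeholder, not the tool's actual latency):

```python
def compensate_delay(samples: list, delay: int) -> list:
    """Shift the signal left by `delay` samples, zero-padding the end
    so input and output keep the same length."""
    return samples[delay:] + [0] * delay

# A hypothetical 480-sample delay at 48 kHz corresponds to 10 ms.
aligned = compensate_delay([0] * 480 + [1, 2, 3], 480)
```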
The framework supports Linux, macOS, and Windows. Training is only tested under Linux. The framework is structured as follows:
- libDF contains Rust code used for data loading and augmentation.
- DeepFilterNet contains DeepFilterNet code for training, evaluation, and visualization, as well as pretrained model weights.
- pyDF contains a Python wrapper of the libDF STFT/ISTFT processing loop.
- pyDF-data contains a Python wrapper of the libDF dataset functionality and provides a PyTorch data loader.
- ladspa contains a LADSPA plugin for real-time noise suppression.
- models contains pretrained models for use in DeepFilterNet (Python) or libDF/deep-filter (Rust).

Install the DeepFilterNet Python wheel via pip:
# Install cpu/cuda pytorch (>=1.9) dependency from pytorch.org, e.g.:
pip install torch torchaudio -f https://download.pytorch.org/whl/cpu/torch_stable.html
# Install DeepFilterNet
pip install deepfilternet
# Or install DeepFilterNet including data loading functionality for training (Linux only)
pip install deepfilternet[train]

To enhance noisy audio files using DeepFilterNet run:
# Specify an output directory with --output-dir [OUTPUT_DIR]
deepFilter path/to/noisy_audio.wav

Install cargo via rustup. Usage of a conda or virtualenv is recommended. Please read the comments and only execute the commands you need.
Installation of Python dependencies and libDF:
cd path/to/DeepFilterNet/ # cd into repository
# Recommended: Install or activate a python env
# Mandatory: Install cpu/cuda pytorch (>=1.8) dependency from pytorch.org, e.g.:
pip install torch torchaudio -f https://download.pytorch.org/whl/cpu/torch_stable.html
# Install build dependencies used to compile libdf and DeepFilterNet python wheels
pip install maturin poetry
# Install remaining DeepFilterNet python dependencies
# *Option A:* Install DeepFilterNet python wheel globally within your environment. Do this if you want to use
# this repo as is, and don't want to develop within this repository.
poetry -C DeepFilterNet install -E train -E eval
# *Option B:* If you want to develop within this repo, install only dependencies and work with the repository version
poetry -C DeepFilterNet install -E train -E eval --no-root
export PYTHONPATH=$PWD/DeepFilterNet # And set the python path correctly
# Build and install libdf python package required for enhance.py
maturin develop --release -m pyDF/Cargo.toml
# *Optional*: Install libdfdata python package with dataset and dataloading functionality for training
# Required build dependency: HDF5 headers (e.g. ubuntu: libhdf5-dev)
maturin develop --release -m pyDF-data/Cargo.toml
# If you have troubles with hdf5 you may try to build and link hdf5 statically:
maturin develop --release --features hdf5-static -m pyDF-data/Cargo.toml

To enhance noisy audio files using DeepFilterNet run:
$ python DeepFilterNet/df/enhance.py --help
usage: enhance.py [-h] [--model-base-dir MODEL_BASE_DIR] [--pf] [--output-dir OUTPUT_DIR] [--log-level LOG_LEVEL] [--compensate-delay]
noisy_audio_files [noisy_audio_files ...]
positional arguments:
noisy_audio_files List of noisy audio files to enhance.
optional arguments:
-h, --help show this help message and exit
--model-base-dir MODEL_BASE_DIR, -m MODEL_BASE_DIR
Model directory containing checkpoints and config.
To load a pretrained model, you may just provide the model name, e.g. `DeepFilterNet`.
By default, the pretrained DeepFilterNet2 model is loaded.
--pf Post-filter that slightly over-attenuates very noisy sections.
--output-dir OUTPUT_DIR, -o OUTPUT_DIR
Directory in which the enhanced audio files will be stored.
--log-level LOG_LEVEL
Logger verbosity. Can be one of (debug, info, error, none)
--compensate-delay, -D
Add some padding to compensate for the delay introduced by the real-time STFT/ISTFT implementation.
# Enhance audio with original DeepFilterNet
python DeepFilterNet/df/enhance.py -m DeepFilterNet path/to/noisy_audio.wav
# Enhance audio with DeepFilterNet2
python DeepFilterNet/df/enhance.py -m DeepFilterNet2 path/to/noisy_audio.wav

from df import enhance, init_df
model, df_state, _ = init_df()  # Load default model
enhanced_audio = enhance(model, df_state, noisy_audio)

See here for a full example.
The entry point is DeepFilterNet/df/train.py. It expects a data directory containing HDF5 datasets as well as a dataset configuration JSON file.
So, you first need to create your datasets in HDF5 format. Each dataset typically only holds either noise, speech, or RIRs, for a train, validation, or test set.
# Install additional dependencies for dataset creation
pip install h5py librosa soundfile
# Go to DeepFilterNet python package
cd path/to/DeepFilterNet/DeepFilterNet
# Prepare text file (e.g. called training_set.txt) containing paths to .wav files
#
# usage: prepare_data.py [-h] [--num_workers NUM_WORKERS] [--max_freq MAX_FREQ] [--sr SR] [--dtype DTYPE]
# [--codec CODEC] [--mono] [--compression COMPRESSION]
# type audio_files hdf5_db
#
# where:
# type: One of `speech`, `noise`, `rir`
# audio_files: Text file containing paths to audio files to include in the dataset
# hdf5_db: Output HDF5 dataset.
python df/scripts/prepare_data.py --sr 48000 speech training_set.txt TRAIN_SET_SPEECH.hdf5

All datasets should be located in a single dataset folder for the train script.
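The file list passed to prepare_data.py (e.g. training_set.txt) is just a plain text file with one audio path per line. A small stdlib-only helper to generate it (a sketch; it assumes a flat directory of .wav files and is not part of the repo):

```python
from pathlib import Path

def write_file_list(audio_dir: str, out_txt: str) -> int:
    """Write one absolute .wav path per line and return the file count."""
    paths = sorted(Path(audio_dir).glob("*.wav"))
    lines = [str(p.resolve()) for p in paths]
    Path(out_txt).write_text("\n".join(lines) + "\n")
    return len(lines)
```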
The dataset configuration file should contain three entries: "train", "valid", "test". Each of these contains a list of datasets (e.g. a speech, a noise, and a RIR dataset). You can use multiple speech or noise datasets. Optionally, a sampling factor may be specified that can be used to over- or under-sample a dataset. Say you have a specific dataset with transient noises and want to increase the amount of non-stationary noise by oversampling. In most cases you want to set this factor to 1.
dataset.cfg
{
  "train": [
    ["TRAIN_SET_SPEECH.hdf5", 1.0],
    ["TRAIN_SET_NOISE.hdf5", 1.0],
    ["TRAIN_SET_RIR.hdf5", 1.0]
  ],
  "valid": [
    ["VALID_SET_SPEECH.hdf5", 1.0],
    ["VALID_SET_NOISE.hdf5", 1.0],
    ["VALID_SET_RIR.hdf5", 1.0]
  ],
  "test": [
    ["TEST_SET_SPEECH.hdf5", 1.0],
    ["TEST_SET_NOISE.hdf5", 1.0],
    ["TEST_SET_RIR.hdf5", 1.0]
  ]
}

Finally, start the training script. The training script may create a model base_dir, if it does not yet exist, which is used for logging, some audio samples, model checkpoints, and the config. If no config file is found, it will create a default config. See DeepFilterNet/pretrained_models/DeepFilterNet for a config file.
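The sampling factor in the dataset configuration acts as a relative weight: reading it as a draw probability, a dataset with factor 2.0 is selected roughly twice as often as one with factor 1.0. A stdlib-only sketch that writes a minimal dataset.cfg and shows the relative weights implied by the factors (helper names are ours, not part of the repo):

```python
import json

def make_cfg(splits: dict) -> dict:
    """Build {split: [[hdf5_name, sampling_factor], ...]} for dataset.cfg."""
    return {split: [[name, float(factor)] for name, factor in entries]
            for split, entries in splits.items()}

def sampling_weights(factors):
    """Relative draw probabilities implied by the sampling factors."""
    total = sum(factors)
    return [f / total for f in factors]

cfg = make_cfg({
    "train": [("TRAIN_SET_SPEECH.hdf5", 1.0), ("TRAIN_SET_NOISE.hdf5", 1.0)],
    "valid": [("VALID_SET_SPEECH.hdf5", 1.0), ("VALID_SET_NOISE.hdf5", 1.0)],
    "test": [("TEST_SET_SPEECH.hdf5", 1.0), ("TEST_SET_NOISE.hdf5", 1.0)],
})
with open("dataset.cfg", "w") as f:
    json.dump(cfg, f, indent=2)
```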
# usage: train.py [-h] [--debug] data_config_file data_dir base_dir
python df/train.py path/to/dataset.cfg path/to/data_dir/ path/to/base_dir/

To reproduce any metrics, we recommend using the Python implementation via pip install deepfilternet.
If you use this framework, please cite: DeepFilterNet: A Low Complexity Speech Enhancement Framework for Full-Band Audio based on Deep Filtering
@inproceedings{schroeter2022deepfilternet,
  title = {{DeepFilterNet}: A Low Complexity Speech Enhancement Framework for Full-Band Audio based on Deep Filtering},
  author = {Schröter, Hendrik and Escalante-B., Alberto N. and Rosenkranz, Tobias and Maier, Andreas},
  booktitle = {ICASSP 2022 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
  year = {2022},
  organization = {IEEE}
}

If you use the DeepFilterNet2 model, please cite: DeepFilterNet2: Towards Real-Time Speech Enhancement on Embedded Devices for Full-Band Audio
@inproceedings{schroeter2022deepfilternet2,
  title = {{DeepFilterNet2}: Towards Real-Time Speech Enhancement on Embedded Devices for Full-Band Audio},
  author = {Schröter, Hendrik and Escalante-B., Alberto N. and Rosenkranz, Tobias and Maier, Andreas},
  booktitle = {17th International Workshop on Acoustic Signal Enhancement (IWAENC 2022)},
  year = {2022},
}

If you use the DeepFilterNet3 model, please cite: DeepFilterNet: Perceptually Motivated Real-Time Speech Enhancement
@inproceedings{schroeter2023deepfilternet3,
  title = {{DeepFilterNet}: Perceptually Motivated Real-Time Speech Enhancement},
  author = {Schröter, Hendrik and Rosenkranz, Tobias and Escalante-B., Alberto N. and Maier, Andreas},
  booktitle = {INTERSPEECH},
  year = {2023},
}

If you use the multi-frame beamforming algorithms, please cite: Deep Multi-Frame Filtering for Hearing Aids
@inproceedings{schroeter2023deep_mf,
  title = {Deep Multi-Frame Filtering for Hearing Aids},
  author = {Schröter, Hendrik and Rosenkranz, Tobias and Escalante-B., Alberto N. and Maier, Andreas},
  booktitle = {INTERSPEECH},
  year = {2023},
}

DeepFilterNet is free and open source! All code in this repository is dual-licensed under either:
- MIT License
- Apache License, Version 2.0

at your option. This means you can select the license you prefer!
Unless you explicitly state otherwise, any contribution intentionally submitted for inclusion in the work by you, as defined in the Apache-2.0 license, shall be dual-licensed as above, without any additional terms or conditions.