A fork of so-vits-svc with realtime support and a greatly improved interface. Based on branch 4.0 (v1), and models for that branch are compatible. 4.1 models are not supported, nor are any other model variants.
Be wary of the few influencers who act overly surprised at every new project or technology; take any social-media post with a grain of salt.
The voice changer boom of 2023 has come to an end, and many developers, not just those of this repository, have been largely inactive for some time. Meanwhile, several start-ups have refined and marketed voice changers (presumably for profit), and updates to this repository have been limited to maintenance since spring 2023. There are too many alternatives to list here, but if you are looking for a voice changer with even better performance (especially lower latency, quality aside), please consider trying other projects. However, this project may still be ideal for those who just want to try out voice conversion, because it is easy to install.
Features:

- Realtime voice conversion
- Partially integrates QuickVC
- Fixes the misuse of ContentVec in the original repository[^1]
- More accurate pitch estimation using CREPE
- GUI and unified CLI available
- Ready to use just by installing with pip; pretrained models are downloaded automatically, so there is no need to install fairseq

One-click easy installation: This BAT file will automatically perform the steps described below.
Alternatively, install with pipx.

Windows (development version of pipx required due to pypa/pipx#940):
```shell
py -3 -m pip install --user git+https://github.com/pypa/pipx.git
py -3 -m pipx ensurepath
```

Linux/MacOS:

```shell
python -m pip install --user pipx
python -m pipx ensurepath
```

```shell
pipx install so-vits-svc-fork --python=3.11
pipx inject so-vits-svc-fork torch torchaudio --pip-args="--upgrade" --index-url=https://download.pytorch.org/whl/cu121 # https://download.pytorch.org/whl/nightly/cu121
```

Creating a virtual environment:

Windows:
```shell
py -3.11 -m venv venv
venv\Scripts\activate
```

Linux/MacOS:
```shell
python3.11 -m venv venv
source venv/bin/activate
```

Anaconda:
```shell
conda create -n so-vits-svc-fork python=3.11 pip
conda activate so-vits-svc-fork
```

Installing without creating a virtual environment may cause a `PermissionError` if Python is installed somewhere like Program Files.
Install this via pip (or your favourite package manager that uses pip):
```shell
python -m pip install -U pip setuptools wheel
pip install -U torch torchaudio --index-url https://download.pytorch.org/whl/cu121 # https://download.pytorch.org/whl/nightly/cu121
pip install -U so-vits-svc-fork
```

- If no GPU is available or you are using MacOS, simply omit `pip install -U torch torchaudio --index-url https://download.pytorch.org/whl/cu121`. MPS is probably supported.
- If using an AMD GPU on Linux, replace `--index-url https://download.pytorch.org/whl/cu121` with `--index-url https://download.pytorch.org/whl/nightly/rocm5.7`. AMD GPUs are not supported on Windows (#120).
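For example, applying the AMD note above, the PyTorch install line becomes the following (a sketch; the ROCm nightly index may change over time):

```shell
# Linux + AMD GPU: swap the CUDA index for the ROCm nightly index
pip install -U torch torchaudio --index-url https://download.pytorch.org/whl/nightly/rocm5.7
```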
Please update this package regularly to get the latest features and bug fixes.
```shell
pip install -U so-vits-svc-fork
# pipx upgrade so-vits-svc-fork
```

GUI launches with the following command:
```shell
svcg
```

Realtime voice conversion (from microphone):

```shell
svc vc
```

Inference from a file:

```shell
svc infer source.wav
```

Pretrained models are available on Hugging Face or CIVITAI.
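If your files do not follow the recommended folder structure (shown in the `svc -h` output below), the model and config can be passed explicitly. A sketch; the `-m`/`-c` flag names are assumptions here, so check `svc infer -h` for the exact options:

```shell
# Hypothetical invocation: paths follow the recommended structure
# (configs/44k/config.json, logs/44k/G_XXXX.pth); G_10000.pth is illustrative
svc infer source.wav -m logs/44k/G_10000.pth -c configs/44k/config.json
```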
Before preparing the dataset, note the following:

- If the dataset contains background music, remove it with a vocal-separation tool such as Ultimate Vocal Remover; `3_HP-Vocal-UVR.pth` or `UVR-MDX-NET Main` is recommended.[^3]
- To split a long single-speaker recording into multiple files, use `svc pre-split` (based on librosa).
- To split a long multi-speaker recording into per-speaker files, use `svc pre-sd` (based on pyannote.audio). Further manual classification may be necessary due to accuracy issues. If speakers speak in a variety of speech styles, set `--min-speakers` larger than the actual number of speakers. Due to unresolved dependencies, please install pyannote.audio manually: `pip install pyannote-audio`. A sketch of this step follows this list.
- To manually classify audio files, `svc pre-classify` is available; the up and down arrow keys can be used to change the playback speed.[^4]
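A minimal sketch of the diarization step above; the `--min-speakers` value is illustrative, and other options should be checked with `svc pre-sd -h`:

```shell
# Install the extra dependency manually, as noted above
pip install pyannote-audio
# Three speakers with varied speech styles: over-specify the speaker count
svc pre-sd --min-speakers 4
```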
If you do not have access to a GPU with more than 10 GB of VRAM, the free plan of Google Colab is recommended for light users and the Pro/Growth plan of Paperspace[^5] for heavy users. Conversely, if you have access to a high-end GPU, using cloud services is not recommended.
Place your dataset like `dataset_raw/{speaker_id}/**/{wav_file}.{any_format}` (subfolders and non-ASCII filenames are acceptable); a hypothetical layout is sketched below.
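For illustration, the speaker and file names here are made up:

```
dataset_raw/
├── speaker_a/
│   ├── session1/take01.wav
│   └── take02.flac
└── speaker_b/
    └── 歌声サンプル.wav
```

Then run: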
```shell
svc pre-resample
svc pre-config
svc pre-hubert
svc train -t
```

- It is recommended to increase `batch_size` as much as possible in `config.json` before the `train` command, to match your VRAM capacity. Setting `batch_size` to `auto-{init_batch_size}-{max_n_trials}` (or simply `auto`) will automatically increase `batch_size` until an OOM error occurs, but this may not be useful in some cases (see the sketch after this list).
- To use CREPE, replace `svc pre-hubert` with `svc pre-hubert -fm crepe`.
- To use ContentVec correctly, replace `svc pre-config` with `svc pre-config -t so-vits-svc-4.0v1`. Training may take slightly longer because some weights are reset due to reusing legacy initial generator weights.
- To use the MS-iSTFT decoder (QuickVC), replace `svc pre-config` with `svc pre-config -t quickvc`.

For more details, run `svc -h` or `svc <subcommand> -h`.
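A minimal sketch of the `batch_size` setting described above; everything else in `config.json` is omitted and the values are illustrative:

```json
{
  "train": {
    "batch_size": "auto-4-10"
  }
}
```

Here `auto-4-10` plugs `init_batch_size=4` and `max_n_trials=10` into the `auto-{init_batch_size}-{max_n_trials}` format from the note above.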
```
> svc -h
Usage: svc [OPTIONS] COMMAND [ARGS]...
so-vits-svc allows any folder structure for training data.
However, the following folder structure is recommended.
When training: dataset_raw/{speaker_name}/**/{wav_name}.{any_format}
When inference: configs/44k/config.json, logs/44k/G_XXXX.pth
If the folder structure is followed, you DO NOT NEED TO SPECIFY model path, config path, etc.
(The latest model will be automatically loaded.)
To train a model, run pre-resample, pre-config, pre-hubert, train.
To infer a model, run infer.
Options:
  -h, --help  Show this message and exit.

Commands:
  clean          Clean up files, only useful if you are using the default file structure
  infer          Inference
  onnx           Export model to onnx (currently not working)
  pre-classify   Classify multiple audio files into multiple files
  pre-config     Preprocessing part 2: config
  pre-hubert     Preprocessing part 3: hubert  If the HuBERT model is not found, it will be...
  pre-resample   Preprocessing part 1: resample
  pre-sd         Speech diarization using pyannote.audio
  pre-split      Split audio files into multiple files
  train          Train model  If D_0.pth or G_0.pth not found, automatically download from hub.
  train-cluster  Train k-means clustering
  vc             Realtime inference from microphone
```

Video Tutorial
Thanks goes to these wonderful people:

34j, GarrettConway, BlueAmulet, ThrowawayAccount01, 緋, Lordmau5, DL909, Satisfy256, Pierluigi Zagaria, ruckusmattster, Desuka-art, heyfixit, Nerdy Rodent, 谢宇, ColdCawfee, sbersier, Meldoner, mmodeusher, AlonDan, Likkkez, Duct Tape Games, Xianglong He, 75aosu, tonyco82, yxlllc, outhipped, escoolioinglesias, Blacksingh, Mgs. M. Thoyib Antarnusa, Exosfeer, guranon, Alexander Koumis, acekagami, Highupech, Scorpi, Maximxls, Star3Lord, Forkoz, Zerui Chen, Roee Shenberg, Justas, Onako2, 4ll0w3v1l, j5y0V6b, marcellocirelli, Priyanshu Patel, Anna Gorshunova
This project follows the all-contributors specification. Contributions of any kind welcome!
[^1]: #206
[^2]: #469
[^3]: https://ytpmv.info/how-to-use-uvr/
[^4]: #456
[^5]: If you register a referral code and then add a payment method, you may save about $5 on your first month's bill. Note that both referral rewards are Paperspace credits, not cash. This was a tough decision, but it was included because debugging and training the initial model require a large amount of computing power, and the developer is a student.