You can start the graphical UI by running `svcg` inside the virtualenv.
Example start script for bash:
```bash
#!/usr/bin/env bash
. ~/.var/venv/svc/bin/activate
svcg
```
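Save it somewhere convenient, make it executable, and run it. The path below is just an example; put the script wherever you like:
```bash
chmod +x ~/start-svcg.sh   # hypothetical location - adjust to where you saved it
~/start-svcg.sh
```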
### Models
Grab a model from HuggingFace (link at the top of the page). Models that work with this setup are the ones tagged `so-vits-svc-4.0` or `so-vits-svc-4.1`.
You will need two files for SVC to work:
- `G_0000.pth`, where 0000 is some number. A higher number usually means better, but only when comparing files within the same repository.
- `config.json`, which tells SVC how to use the `.pth` file.
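If you prefer the command line, something like this can fetch both files. The repository and file names below are placeholders, so substitute the ones from the model page you picked:
```bash
#!/usr/bin/env bash
# Hypothetical example - replace the user, repo, and file names with the real
# ones from the HuggingFace model page you chose.
mkdir -p ~/svc-models/example-voice
cd ~/svc-models/example-voice
wget https://huggingface.co/someuser/example-voice/resolve/main/G_30000.pth
wget https://huggingface.co/someuser/example-voice/resolve/main/config.json
```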
With SVC running, plop the `G_0000.pth` into `Model Path` on the top left, and `config.json` into `Config Path`.
### Starting the Voice Changer
Default settings usually work OK. What you want to change is `Pitch`. This will differ depending on how high your own voice is compared to the model's voice, so you will need a different `Pitch` setting for different models.
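If you want a starting point rather than pure trial and error, and assuming the `Pitch` value is in semitones, you can estimate it from the ratio of the model's average pitch to yours. The numbers below are made-up examples:
```bash
# Estimated Pitch = 12 * log2(model_voice_hz / your_voice_hz)
# e.g. your average pitch ~110 Hz, the model's ~220 Hz -> +12 (one octave up)
awk 'BEGIN { printf "%.1f\n", 12 * log(220 / 110) / log(2) }'
```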
Check `Use GPU` on the bottom center if you want to torture your GPU with your voice. It had better not complain, considering how expensive it was.
Click `(Re)Start Voice Changer` to do just that. You also need to click this after changing any settings.
### PipeWire Setup
This covers ALVR only right now. TODO: the rest.
Use `qpwgraph` or `helvum` (or script it with `pw-link`; see the sketch after this list) to:
1. Disconnect `vrserver` from `ALVR-MIC-Sink`.
2. Pipe the output of `vrserver` to the input of `Python3.10`.
3. Pipe the output of `Python3.10` to the input of `ALVR-MIC-Sink`.
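If you'd rather script this than click around in a patchbay, `pw-link` can make the same connections. The port names below are assumptions, so check the real names on your system with `pw-link -o` and `pw-link -i` first:
```bash
#!/usr/bin/env bash
# List current output and input ports to find the exact names on your system.
pw-link -o   # output ports
pw-link -i   # input ports

# The node/port names below are guesses - adjust to what the listings show.
pw-link -d "vrserver:output_FL" "ALVR-MIC-Sink:playback_FL"   # 1. disconnect vrserver from the sink
pw-link -d "vrserver:output_FR" "ALVR-MIC-Sink:playback_FR"
pw-link "vrserver:output_FL" "Python3.10:input_FL"            # 2. vrserver -> SVC
pw-link "vrserver:output_FR" "Python3.10:input_FR"
pw-link "Python3.10:output_FL" "ALVR-MIC-Sink:playback_FL"    # 3. SVC -> ALVR mic sink
pw-link "Python3.10:output_FR" "ALVR-MIC-Sink:playback_FR"
```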