This creates the infamous "underwater" sound. The vocals come out thin and phasey, stripped of their low-end warmth. Why? Because the drums and bass are also center-panned: to keep them out of your "isolated" vocal, the effect has to carve away chunks of the spectrum, and the vocal's body goes with them. You've made a trade: you get the vocals, but you sacrifice fidelity.

Spell #2: The "Deep Learning" (OpenVINO)

This is the modern, slightly terrifying approach. Audacity now supports AI-powered plugins (the OpenVINO suite, for example), and external tools like UVR do the same job. None of this relies on stereo trickery. Instead, a neural network has been trained on thousands of songs to "learn" what a human voice sounds like versus a guitar versus a drum.
You highlight a section of music. The AI analyzes the waveform and asks, "Does this frequency pattern match a human larynx or a cymbal crash?" It then tries to erase the non-voice parts.
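Conceptually, many of these separators work on a spectrogram: the model scores each time-frequency bin as "voice" or "not voice," the not-voice bins get muted, and the result is turned back into audio. Here is a minimal, runnable sketch of that masking idea. To keep it self-contained, estimate_vocal_mask is a hypothetical stand-in (a crude "keep the typical vocal band" rule), not the actual OpenVINO or UVR network:

```python
# Sketch of the mask-and-reconstruct idea behind spectral vocal separators.
# estimate_vocal_mask is a placeholder heuristic so the script runs end to
# end; a real separator predicts this mask with a trained neural network.
import numpy as np
from scipy.signal import stft, istft

def estimate_vocal_mask(freqs, magnitude):
    """Return a 0..1 weight per time-frequency bin (placeholder heuristic)."""
    mask = np.zeros_like(magnitude)
    vocal_band = (freqs >= 200) & (freqs <= 4000)   # rough vocal range, Hz
    mask[vocal_band, :] = 1.0
    return mask

def isolate_vocals(mono_audio, sample_rate):
    # 1. Turn the waveform into a spectrogram (a time-frequency picture).
    freqs, times, spec = stft(mono_audio, fs=sample_rate, nperseg=2048)
    # 2. Ask the "model" which bins belong to the voice.
    mask = estimate_vocal_mask(freqs, np.abs(spec))
    # 3. Mute everything else and turn the picture back into audio.
    _, vocals = istft(spec * mask, fs=sample_rate, nperseg=2048)
    return vocals

if __name__ == "__main__":
    sr = 44100
    t = np.linspace(0, 2.0, 2 * sr, endpoint=False)
    # Synthetic stand-in for a mix: a 440 Hz "voice" plus a 60 Hz "bass".
    fake_mix = np.sin(2 * np.pi * 440 * t) + 0.5 * np.sin(2 * np.pi * 60 * t)
    print(isolate_vocals(fake_mix, sr).shape)
```

A fixed frequency band like this will obviously leak guitars and cymbals into the "vocal"; the whole point of the trained network is to make that mask far smarter than any hand-written rule.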
Why bother? Imagine you have a finished song. The vocalist is soaring, but the guitar is slightly out of tune. Or maybe you want to study a rapper’s flow without the beat. Or—here’s the holy grail—you want an a cappella version of a track that was never officially released.
The AI approach produces shockingly clean a cappellas. You can often hear breaths, lip smacks, and room reverb that were buried in the original mix.
One more caveat about the old center-cancel trick: if the song has heavy stereo reverb on the voice (common in shoegaze or 80s ballads), you are doomed. The reverb is spread to the sides, so when you cancel the center, you lose the voice but keep the echo. You end up with a ghost singing from a well.
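You can reproduce that ghost with a few lines of numpy. The toy example below pans a synthetic "voice" dead center and fakes a wide reverb with two different delays, one per channel; subtracting the channels (the cancel-the-center move) erases the dry voice but leaves the decorrelated reverb behind. Everything here is synthetic and only meant to illustrate the principle:

```python
# Toy demonstration of why stereo reverb survives center cancellation.
# The "voice" is identical in both channels (center-panned); the "reverb"
# is a pair of differently delayed copies (decorrelated left/right).
import numpy as np

sr = 44100
t = np.linspace(0, 1.0, sr, endpoint=False)
voice = 0.8 * np.sin(2 * np.pi * 220 * t)        # stand-in for a vocal

def delayed(x, samples):
    """Shift a signal right by `samples`, padding the start with zeros."""
    return np.concatenate([np.zeros(samples), x[:-samples]])

reverb_left  = 0.3 * delayed(voice, 1500)        # ~34 ms "reflection"
reverb_right = 0.3 * delayed(voice, 2300)        # different delay -> wide image

left  = voice + reverb_left
right = voice + reverb_right

side = left - right                              # the classic "cancel the center" move

def rms(x):
    return np.sqrt(np.mean(x ** 2))

print(f"dry voice level in the mix : {rms(voice):.3f}")
print(f"level after cancellation   : {rms(side):.3f}  <- almost pure reverb")
```

Swap the sine wave for a real vocal stem and the numbers change, but the principle does not: only what is truly identical in both channels cancels, and a wide reverb never is.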