Text-to-speech synthesis has been a booming research area, with Google, Facebook, DeepMind, and other tech giants showcasing their research and racing to build better TTS models. Now Baidu has stolen the show with ClariNet, the first fully end-to-end TTS model, which converts text directly to a speech waveform in a single neural network.
Conventional TTS pipelines, including those built around DeepMind's WaveNet, usually consist of separate text-to-spectrogram and waveform-synthesis models, and training the two stages separately may result in suboptimal performance. ClariNet combines the two into a single fully convolutional neural network. Not only that: the authors claim their text-to-wave model significantly outperforms the previous separately trained TTS pipelines.
Baidu’s ClariNet consists of four components:
- Encoder, which encodes textual features into an internal hidden representation.
- Decoder, which decodes the encoder representation into the log-mel spectrogram in an autoregressive manner.
- Bridge-net, an intermediate processing block that takes the hidden representation from the decoder and predicts the log-linear spectrogram. It also upsamples the hidden representation from frame level to sample level.
- Vocoder, a Gaussian autoregressive WaveNet that synthesizes the waveform, conditioned on the upsampled hidden representation from the bridge-net.
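To make the data flow through the four components concrete, here is a minimal, shape-level sketch of the pipeline. This is not Baidu's implementation: every layer is mocked with random weights, the autoregressive loops are collapsed, and all sizes (a 64-dim embedding, 80 mel bins, 40 frames, a hop of 256 samples per frame) are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def encoder(char_ids, emb_dim=64):
    """Embed character ids into hidden vectors, one per input symbol."""
    emb = rng.standard_normal((256, emb_dim))
    return emb[char_ids]                       # (T_text, emb_dim)

def decoder(enc_hidden, n_frames=40, n_mels=80):
    """Mocked autoregressive decoder: mean-pool the encoder output and
    emit one log-mel frame per step (a real decoder would condition
    each step on the previous frame via attention)."""
    ctx = enc_hidden.mean(axis=0)
    proj = rng.standard_normal((enc_hidden.shape[1], n_mels))
    frames = [ctx @ proj for _ in range(n_frames)]
    return np.stack(frames)                    # (n_frames, n_mels)

def bridge_net(dec_hidden, hop=256):
    """Upsample frame-level features to sample level; simple repetition
    here, transposed convolutions in the paper."""
    return np.repeat(dec_hidden, hop, axis=0)  # (n_frames * hop, n_mels)

def vocoder(cond):
    """Mocked Gaussian autoregressive WaveNet: predict a (mu, log_sigma)
    pair per sample from the conditioner and draw one sample each."""
    w = rng.standard_normal((cond.shape[1], 2))
    mu, log_sigma = (cond @ w).T
    return mu + np.exp(log_sigma) * rng.standard_normal(mu.shape)

text = np.array([3, 7, 11, 2])                 # fake character ids
wave = vocoder(bridge_net(decoder(encoder(text))))
print(wave.shape)                              # 40 frames * 256 hop -> (10240,)
```

The point of the sketch is the single differentiable path from character ids to raw samples: nothing between `encoder` and `vocoder` leaves the network, which is what makes end-to-end training possible.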
Baidu has also proposed a new parallel wave generation method based on the Gaussian inverse autoregressive flow (IAF). This mechanism generates all samples of an audio waveform in parallel, speeding up waveform synthesis dramatically as compared to traditional autoregressive methods.
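The speed-up comes from where the causal dependency sits. An autoregressive sampler must wait for sample t-1 before drawing sample t, so generation takes T sequential steps. An IAF computes its shift and scale from the *noise* z, which is all drawn up front, so every output sample can be transformed at once. A toy numpy sketch (a causal one-step shift stands in for WaveNet's dilated causal convolutions; the functions and constants are invented):

```python
import numpy as np

rng = np.random.default_rng(1)
T = 8

def causal_stats(x):
    """Toy causal network: mu_t and sigma_t depend only on x_{t-1}."""
    past = np.concatenate([[0.0], x[:-1]])
    mu = 0.5 * past
    sigma = np.exp(-np.abs(past))              # positive scale
    return mu, sigma

# Autoregressive sampling: T sequential steps, each waits for the last sample.
x_ar = np.zeros(T)
for t in range(T):
    mu, sigma = causal_stats(x_ar)             # only entries < t are valid here
    x_ar[t] = mu[t] + sigma[t] * rng.standard_normal()

# IAF sampling: draw all the noise up front, then one fully parallel transform.
z = rng.standard_normal(T)
mu, sigma = causal_stats(z)                    # depends on z_{<t}, known already
x_iaf = mu + sigma * z                         # all T samples at once
```

The parallel line `x_iaf = mu + sigma * z` produces exactly what a sample-by-sample loop over z would, which is why the student can synthesize a whole utterance in one forward pass.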
To train the parallel waveform synthesizer, they use a Gaussian autoregressive WaveNet as the teacher network and the Gaussian IAF as the student network. The teacher is trained with maximum likelihood estimation (MLE).
The Gaussian IAF student is then distilled from the autoregressive WaveNet teacher by minimizing the KL divergence between their highly peaked output distributions; because both distributions are Gaussian, the divergence can be computed in closed form, which stabilizes the training process.
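Because both student and teacher output a univariate Gaussian per sample, the per-sample KL term needs no Monte Carlo estimate. A sketch of the standard closed form (the paper additionally regularizes this divergence to cope with the teacher's small predicted variances; that term is omitted here):

```python
import math

def gaussian_kl(mu_q, sigma_q, mu_p, sigma_p):
    """Closed-form KL(q || p) for univariate Gaussians:
    log(sigma_p / sigma_q) + (sigma_q^2 + (mu_q - mu_p)^2) / (2 sigma_p^2) - 1/2
    where q = N(mu_q, sigma_q^2) is the student, p the teacher."""
    return (math.log(sigma_p / sigma_q)
            + (sigma_q**2 + (mu_q - mu_p)**2) / (2 * sigma_p**2)
            - 0.5)

print(gaussian_kl(0.0, 1.0, 0.0, 1.0))   # identical distributions -> 0.0
print(gaussian_kl(1.0, 1.0, 0.0, 1.0))   # mean shifted by 1       -> 0.5
```

Note how the formula behaves when the teacher is peaked: as sigma_p shrinks, the quadratic term explodes, which is exactly why an exact, differentiable expression (rather than a noisy sampled estimate) matters for stable distillation.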