Authors
- Yuhta Takida
- Takashi Shibuya
- Wei-Hsiang Liao
- Chieh-Hsin Lai
- Junki Ohmura*
- Toshimitsu Uesaka
- Naoki Murata
- Shusuke Takahashi*
- Toshiyuki Kumakura*
- Yuki Mitsufuji
* External authors
Venue
- ICML 2022
Date
- 2022
SQ-VAE: Variational Bayes on Discrete Representation with Self-annealed Stochastic Quantization
Abstract
One noted issue of vector-quantized variational autoencoder (VQ-VAE) is that the learned discrete representation uses only a fraction of the full capacity of the codebook, also known as codebook collapse. We hypothesize that the training scheme of VQ-VAE, which involves some carefully designed heuristics, underlies this issue. In this paper, we propose a new training scheme that extends the standard VAE via novel stochastic dequantization and quantization, called stochastically quantized variational autoencoder (SQ-VAE). In SQ-VAE, we observe a trend that the quantization is stochastic at the initial stage of the training but gradually converges toward a deterministic quantization, which we call self-annealing. Our experiments show that SQ-VAE improves codebook utilization without using common heuristics. Furthermore, we empirically show that SQ-VAE is superior to VAE and VQ-VAE in vision- and speech-related tasks.
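As a rough illustration of the idea, the sketch below implements a generic stochastic quantizer over a codebook: each latent vector is assigned a code sampled from a categorical distribution whose logits are negative squared distances scaled by a temperature `tau`. This is a simplified stand-in, not the paper's exact formulation (in SQ-VAE the corresponding variance parameter is learned, which is what produces the self-annealing behavior); the function names and the squared-Euclidean distance choice here are assumptions for the demo.

```python
import numpy as np

rng = np.random.default_rng(0)

def stochastic_quantize(z, codebook, tau, rng=rng):
    """Sample one code per latent vector from a categorical distribution.

    z:        (N, D) encoder outputs
    codebook: (K, D) code vectors
    tau:      temperature; large tau -> near-uniform sampling (stochastic),
              tau -> 0 recovers the deterministic argmin of VQ-VAE
    """
    # Squared Euclidean distance from each z_i to each code e_k: (N, K)
    d2 = ((z[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    logits = -d2 / tau
    # Numerically stable softmax over the codebook axis
    p = np.exp(logits - logits.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)
    idx = np.array([rng.choice(len(codebook), p=pi) for pi in p])
    return codebook[idx], idx

# Toy demo: as tau shrinks, quantization collapses to nearest-neighbor lookup
z = rng.normal(size=(4, 2))
codebook = rng.normal(size=(8, 2))
zq_hot, idx_hot = stochastic_quantize(z, codebook, tau=10.0)
zq_cold, idx_cold = stochastic_quantize(z, codebook, tau=1e-4)
nearest = ((z[:, None] - codebook[None]) ** 2).sum(-1).argmin(1)
assert (idx_cold == nearest).all()  # cold limit matches VQ-VAE's argmin
```

In the hot regime many codes receive probability mass, which keeps the whole codebook in use early in training; as the temperature effectively anneals, the assignment hardens toward the nearest code.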
Related Publications
Generative adversarial networks (GANs) learn a target probability distribution by optimizing a generator and a discriminator with minimax objectives. This paper addresses the question of whether such optimization actually provides the generator with gradients that make its d…
Despite the recent advancements, conditional image generation still faces challenges of cost, generalizability, and the need for task-specific training. In this paper, we propose Manifold Preserving Guided Diffusion (MPGD), a training-free conditional generation framework th…
Consistency Models (CM) (Song et al., 2023) accelerate score-based diffusion model sampling at the cost of sample quality but lack a natural way to trade-off quality for speed. To address this limitation, we propose Consistency Trajectory Model (CTM), a generalization encomp…