Authors
- Kin Wai Cheuk*
- Ryosuke Sawata*
- Toshimitsu Uesaka
- Naoki Murata
- Naoya Takahashi
- Shusuke Takahashi*
- Dorien Herremans*
- Yuki Mitsufuji
* External authors
Venue
- ICASSP 2023
Date
- 2023
DiffRoll: Diffusion-based Generative Music Transcription with Unsupervised Pretraining Capability
Abstract
In this paper, we propose DiffRoll, a novel generative approach to automatic music transcription (AMT).
Instead of treating AMT as a discriminative task in which the model is trained to convert spectrograms into piano rolls, we frame it as a conditional generative task: the model learns to generate realistic piano rolls from pure Gaussian noise, conditioned on spectrograms.
This new AMT formulation enables DiffRoll to transcribe, generate, and even inpaint music. Because of its classifier-free design, DiffRoll can also be trained on unpaired datasets in which only piano rolls are available. Our experiments show that DiffRoll outperforms its discriminative counterpart by 19 percentage points (ppt.), and our ablation studies indicate that it outperforms similar existing methods by 4.8 ppt.
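The idea in the abstract can be sketched as a standard diffusion training step with classifier-free condition dropping. The sketch below is illustrative only: the function names, the cosine noise schedule, the null-condition value of -1, and the 10% dropout probability are assumptions, not details taken from the paper.

```python
import numpy as np

def cosine_alpha_bar(t, T):
    # Cosine cumulative noise schedule; alpha_bar(0) == 1 (no noise),
    # decreasing toward 0 as t approaches T.
    s = 0.008
    f = lambda u: np.cos((u / T + s) / (1 + s) * np.pi / 2) ** 2
    return f(t) / f(0)

def diffusion_training_example(piano_roll, spec, t, T=1000, p_uncond=0.1, rng=None):
    """Build one DiffRoll-style training example (hypothetical sketch).

    piano_roll : (pitches, frames) target roll, scaled to [-1, 1]
    spec       : (bins, frames) conditioning spectrogram
    t          : diffusion timestep in [0, T]

    Returns the noised roll x_t, the noise target eps, and the
    (possibly dropped) condition the denoiser would receive.
    """
    rng = np.random.default_rng() if rng is None else rng
    eps = rng.standard_normal(piano_roll.shape)
    a_bar = cosine_alpha_bar(t, T)
    # Forward diffusion: x_t = sqrt(a_bar) * x_0 + sqrt(1 - a_bar) * noise.
    x_t = np.sqrt(a_bar) * piano_roll + np.sqrt(1.0 - a_bar) * eps
    # Classifier-free training: with probability p_uncond, replace the
    # spectrogram with a constant "null" condition, so the same network
    # also learns unconditional generation. With unpaired data (rolls
    # only), the condition is always null, which is what allows
    # pretraining on piano rolls alone.
    cond = spec if rng.random() >= p_uncond else np.full_like(spec, -1.0)
    return x_t, eps, cond
```

A denoising network would then be trained to predict `eps` from `(x_t, t, cond)`; at inference, transcription corresponds to sampling a piano roll from noise while conditioning on the input spectrogram.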