PerformanceNet: Score-to-Audio Music Generation with Multi-Band Convolutional Residual Network

11/11/2018
by Bryan Wang, et al.

Music creation typically involves two steps: composing a musical score, and then performing the score with instruments to produce sound. While recent work has made much progress in automatic music generation in the symbolic domain, few attempts have been made to build an AI model that can render realistic music audio from a musical score. Directly synthesizing audio with sound sample libraries often leads to mechanical and deadpan results, since musical scores do not contain performance-level information such as subtle changes in timing and dynamics. Moreover, although the task may sound like a text-to-speech synthesis problem, there are fundamental differences, since music audio is richly polyphonic. To build such an AI performer, we propose in this paper a deep convolutional model that learns, in an end-to-end manner, the score-to-audio mapping between a symbolic representation of music known as the piano roll and an audio representation known as the spectrogram. The model consists of two subnets: the ContourNet, which uses a U-Net structure to learn the correspondence between piano rolls and spectrograms and produces an initial result; and the TextureNet, which uses a multi-band residual network to refine that result by adding the spectral texture of overtones and timbre. We train the model to generate music clips for the violin, cello, and flute on a dataset of moderate size. We also present the results of a user study showing that our model achieves higher mean opinion scores (MOS) in naturalness and emotional expressivity than a WaveNet-based model and two commercial sound libraries. Our source code is available at https://github.com/bwang514/PerformanceNet
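To make the two-stage architecture concrete, below is a minimal PyTorch sketch of the pipeline the abstract describes: a U-Net-style ContourNet maps a piano roll to a coarse spectrogram, and a multi-band residual TextureNet refines it band by band. All layer sizes, class internals, and the number of frequency bands here are illustrative assumptions rather than the authors' actual configuration; the real implementation (including the U-Net skip connections and the audio reconstruction step) lives in the linked repository.

```python
# Illustrative sketch of the two-subnet idea, assuming PyTorch.
# Layer sizes, band count, and class internals are hypothetical;
# see https://github.com/bwang514/PerformanceNet for the real model.
import torch
import torch.nn as nn

class ContourNet(nn.Module):
    """Encoder-decoder mapping a piano roll (128 pitches x T frames) to an
    initial magnitude spectrogram (n_bins x T). U-Net skip connections are
    omitted here for brevity."""
    def __init__(self, n_pitches=128, n_bins=1025, hidden=256):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv1d(n_pitches, hidden, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.Conv1d(hidden, hidden, kernel_size=5, padding=2),
            nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.Conv1d(hidden, hidden, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.Conv1d(hidden, n_bins, kernel_size=5, padding=2),
        )

    def forward(self, roll):            # roll: (batch, 128, T)
        return self.decoder(self.encoder(roll))   # (batch, n_bins, T)

class TextureNet(nn.Module):
    """Multi-band residual refinement: each frequency band gets its own
    small convolutional stack whose output is added back to the input,
    sharpening the texture of overtones and timbre."""
    def __init__(self, n_bins=1025, n_bands=4):
        super().__init__()
        # Split the frequency axis into roughly equal bands.
        self.edges = [i * n_bins // n_bands for i in range(n_bands)] + [n_bins]
        self.bands = nn.ModuleList(
            nn.Sequential(
                nn.Conv1d(self.edges[i + 1] - self.edges[i], 128, 3, padding=1),
                nn.ReLU(),
                nn.Conv1d(128, self.edges[i + 1] - self.edges[i], 3, padding=1),
            )
            for i in range(n_bands)
        )

    def forward(self, spec):            # spec: (batch, n_bins, T)
        outs = []
        for i, band_net in enumerate(self.bands):
            chunk = spec[:, self.edges[i]:self.edges[i + 1], :]
            outs.append(chunk + band_net(chunk))   # residual connection
        return torch.cat(outs, dim=1)

# Score-to-audio pipeline: piano roll -> coarse spectrogram -> refined
# spectrogram. A waveform would then be recovered with a phase
# reconstruction step such as Griffin-Lim (not shown).
roll = torch.rand(1, 128, 400)          # a 400-frame dummy piano roll
coarse = ContourNet()(roll)
refined = TextureNet()(coarse)
print(refined.shape)                    # torch.Size([1, 1025, 400])
```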
