Non-Local Musical Statistics as Guides for Audio-to-Score Piano Transcription

08/28/2020
by Kentaro Shibata, et al.

We present an automatic piano transcription system that converts polyphonic audio recordings into musical scores. Audio-to-score transcription has been a long-standing problem in music information processing, and its two main components, multipitch detection and rhythm quantization, have been studied actively. Given the recent remarkable progress in both domains, we study a method that integrates deep-neural-network-based multipitch detection with statistical-model-based rhythm quantization. In the first part of the study, we conducted systematic evaluations and found that while the method achieved high transcription accuracy at the note level, global characteristics such as tempo scale, metre (time signature), and bar-line positions were often estimated incorrectly. In the second part, we formulated non-local statistics of pitch and rhythmic content derived from musical knowledge and studied their effect on inferring these global characteristics. We found a combination of these statistics that significantly improves the transcription results, which suggests using statistics obtained from separated hand parts. The integrated method can generate transcriptions that are partially usable for music performance and for assisting human transcribers, demonstrating the potential of a practical audio-to-score piano transcription system.
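To make the two-stage pipeline concrete, the sketch below illustrates the rhythm-quantization step that follows multipitch detection: detected note onsets (in seconds) are snapped to a metrical grid given an estimated tempo. The function name, the fixed-tempo assumption, and the nearest-grid-point rule are illustrative simplifications, not the statistical model used in the paper.

```python
# Hypothetical sketch of the second pipeline stage described above:
# a multipitch detector yields note onsets in seconds, and rhythm
# quantization maps them onto a metrical grid. The paper uses a
# statistical model; this nearest-grid snap is only an illustration.

def quantize_onsets(onsets_sec, tempo_bpm, divisions_per_beat=4):
    """Snap onset times (seconds) to the nearest grid position,
    returned in units of 1/divisions_per_beat of a beat."""
    beat_sec = 60.0 / tempo_bpm               # duration of one beat
    grid_sec = beat_sec / divisions_per_beat  # e.g. sixteenths at 4
    return [round(t / grid_sec) for t in onsets_sec]

# Example: at 120 BPM a beat lasts 0.5 s, a sixteenth note 0.125 s.
noisy_onsets = [0.02, 0.26, 0.49, 0.74]       # slightly imprecise timing
print(quantize_onsets(noisy_onsets, 120))     # -> [0, 2, 4, 6]
```

A real system must additionally infer the tempo, metre, and bar-line positions from the data, which is exactly where the paper finds plain note-level methods to fail and where its non-local statistics are applied.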
