The artificial synesthete: Image-melody translations with variational autoencoders

12/06/2021
by Karl Wienand, et al.

Abstract: This project presents a system of neural networks that translates between images and melodies. Autoencoders compress the information in each sample into an abstract latent representation. A translation network then learns correspondences between musical and visual concepts from repeated joint exposure. The resulting "artificial synesthete" generates simple melodies inspired by images, and images inspired by music. These outputs are novel interpretations, not transposed data, expressing the machine's perception and understanding. Observing this work, one explores the machine's perception and thus, by contrast, one's own.
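The pipeline the abstract describes can be sketched in a few lines: each domain gets its own autoencoder, and a small translation network maps between the two latent spaces. The layer sizes, the single-layer design, and the random (untrained) weights below are illustrative assumptions, not the authors' actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def linear(in_dim, out_dim):
    """Random weight/bias pair standing in for a trained layer."""
    return rng.standard_normal((in_dim, out_dim)) * 0.1, np.zeros(out_dim)

def apply(layer, x):
    W, b = layer
    return np.tanh(x @ W + b)

# Per-domain autoencoders: compress samples into abstract latent codes.
# (Only the halves needed for the image -> melody direction are shown.)
IMG_DIM, MEL_DIM, LATENT = 64 * 64, 32 * 16, 8
img_encoder = linear(IMG_DIM, LATENT)
mel_decoder = linear(LATENT, MEL_DIM)

# Translation network: a learned mapping between the two latent spaces.
translator = linear(LATENT, LATENT)

def image_to_melody(image):
    z_img = apply(img_encoder, image)   # image -> image latent
    z_mel = apply(translator, z_img)    # image latent -> melody latent
    return apply(mel_decoder, z_mel)    # melody latent -> melody features

melody = image_to_melody(rng.standard_normal(IMG_DIM))
print(melody.shape)  # (512,)
```

The reverse direction (music to images) would use a melody encoder, an inverse translator, and an image decoder in the same three-stage pattern.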

Related research:

- Learning and Evaluating Musical Features with Deep Autoencoders (06/14/2017): In this work we describe and evaluate methods to learn musical embedding...
- Translating Paintings Into Music Using Neural Networks (08/23/2020): We propose a system that learns from artistic pairings of music and corr...
- From Artificial Neural Networks to Deep Learning for Music Generation – History, Concepts and Trends (04/07/2020): The current tsunami of deep learning (the hyper-vitamined return of arti...
- A Universal Music Translation Network (05/21/2018): We present a method for translating music across musical instruments, ge...
- Subitizing with Variational Autoencoders (08/01/2018): Numerosity, the number of objects in a set, is a basic property of a giv...
- Contributions to Representation Learning with Graph Autoencoders and Applications to Music Recommendation (05/29/2022): Graph autoencoders (GAE) and variational graph autoencoders (VGAE) emerg...
- Recognizing Concepts and Recognizing Musical Themes. A Quantum Semantic Analysis (02/17/2022): How are abstract concepts and musical themes recognized on the basis of ...
