Autoencoding sensory substitution

07/14/2019
by Viktor Tóth, et al.

Tens of millions of people live with blindness, and their number is increasing. Visual-to-auditory sensory substitution (SS) encompasses a family of cheap, generic solutions that assist the visually impaired by conveying visual information through sound. The required SS training is lengthy: months of effort are necessary to reach a practical level of adaptation. There are two reasons for the tedious training process: the length of the substituting audio signal, and the disregard for the compressive characteristics of the human auditory system. To overcome these obstacles, we developed a novel class of SS methods by training deep recurrent autoencoders for image-to-sound conversion. We successfully trained deep learning models on different datasets to perform visual-to-auditory stimulus conversion. By constraining the visual space, we demonstrated the viability of shortened substituting audio signals, and proposed mechanisms, such as the integration of computational hearing models, to optimally convey visual features in the substituting stimulus as perceptually discernible auditory components. We tested our approach in two separate cases. In the first experiment, the author went blindfolded for 5 days while performing SS training on hand-posture discrimination. The second experiment assessed the accuracy of reaching movements toward objects on a table. In both test cases, above-chance accuracy was attained after a few hours of training. Our novel SS architecture broadens the horizon of rehabilitation methods engineered for the visually impaired. Further improvements on the proposed model should yield faster rehabilitation of the blind and, as a consequence, wider adoption of SS devices.
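As a rough illustration of the image-to-sound conversion the abstract describes, the sketch below trains a recurrent autoencoder whose bottleneck is a short sequence of spectral frames that could be rendered as the substituting audio. This is a minimal sketch under assumptions, not the authors' model: the class name, layer sizes, sequence length, and the plain reconstruction loss are all hypothetical, and the paper's hearing-model constraints on the bottleneck are omitted.

```python
# Sketch: convolutional encoder -> recurrent "audio" bottleneck -> recurrent
# decoder that reconstructs the image. All hyperparameters are illustrative.
import torch
import torch.nn as nn

class AudioBottleneckAE(nn.Module):
    def __init__(self, img_size=32, seq_len=16, n_freq=32, hidden=128):
        super().__init__()
        self.seq_len = seq_len
        self.img_size = img_size
        # Image encoder: small conv stack flattened to a feature vector.
        self.enc = nn.Sequential(
            nn.Conv2d(1, 16, 4, stride=2, padding=1), nn.ReLU(),  # 32 -> 16
            nn.Conv2d(16, 32, 4, stride=2, padding=1), nn.ReLU(), # 16 -> 8
            nn.Flatten(),
            nn.Linear(32 * (img_size // 4) ** 2, hidden),
        )
        # Recurrent head: unrolls the feature vector into seq_len frames
        # of n_freq spectral coefficients (the short substituting signal).
        self.audio_rnn = nn.GRU(hidden, hidden, batch_first=True)
        self.to_frames = nn.Linear(hidden, n_freq)
        # Recurrent decoder: consumes the frames and reconstructs the image.
        self.dec_rnn = nn.GRU(n_freq, hidden, batch_first=True)
        self.to_img = nn.Linear(hidden, img_size * img_size)

    def forward(self, x):
        b = x.size(0)
        z = self.enc(x)                                    # (b, hidden)
        steps = z.unsqueeze(1).repeat(1, self.seq_len, 1)  # (b, T, hidden)
        h, _ = self.audio_rnn(steps)
        audio = torch.sigmoid(self.to_frames(h))           # (b, T, n_freq)
        _, last = self.dec_rnn(audio)                      # (1, b, hidden)
        recon = torch.sigmoid(self.to_img(last[-1]))
        return audio, recon.view(b, 1, self.img_size, self.img_size)

# One training step with reconstruction loss only; a perceptual term from a
# computational hearing model would be added here in the paper's setting.
model = AudioBottleneckAE()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
imgs = torch.rand(8, 1, 32, 32)  # stand-in batch of constrained visual inputs
audio, recon = model(imgs)
loss = nn.functional.mse_loss(recon, imgs)
opt.zero_grad()
loss.backward()
opt.step()
```

Because the decoder must recover the image from the frame sequence alone, training pressure forces the bottleneck to pack the visual features into a short, information-dense signal, which is the property the abstract exploits to shorten the substituting audio.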

Related research

06/14/2021
A Novel mapping for visual to auditory sensory substitution
Visual information can be converted into audio stream via sensory substi...

07/20/2021
FoleyGAN: Visually Guided Generative Adversarial Network-Based Synchronous Sound Generation in Silent Videos
Deep learning based visual to sound generation systems essentially need ...

04/19/2019
Listen to the Image
Visual-to-auditory sensory substitution devices can assist the blind in ...

02/03/2022
Mathematical Content Browsing for Print-Disabled Readers Based on Virtual-World Exploration and Audio-Visual Sensory Substitution
Documents containing mathematical content remain largely inaccessible to...

02/01/2023
Evoking empathy with visually impaired people through an augmented reality embodiment experience
To promote empathy with people that have disabilities, we propose a mult...

04/28/2018
A Bimodal Learning Approach to Assist Multi-sensory Effects Synchronization
In mulsemedia applications, traditional media content (text, image, audi...