LISA: Localized Image Stylization with Audio via Implicit Neural Representation

by Seung-Hyun Lee, et al.

We present a novel framework, Localized Image Stylization with Audio (LISA), which performs audio-driven localized image stylization. Sound often provides information about the specific context of a scene and is closely related to a certain part of the scene or a particular object. However, existing image stylization works have focused on stylizing the entire image using an image or text input. Stylizing only a particular part of the image based on audio input is natural but challenging. In this work, we propose a framework in which a user provides an audio input both to localize the sound source in the input image and to locally stylize the target object or scene. LISA first produces a delicate localization map with an audio-visual localization network by leveraging the CLIP embedding space. We then utilize an implicit neural representation (INR), together with the predicted localization map, to stylize the target object or scene based on the sound information. The proposed INR manipulates the localized pixel values so that they are semantically consistent with the provided audio input. Through a series of experiments, we show that the proposed framework outperforms other audio-guided stylization methods. Moreover, LISA constructs concise localization maps and naturally manipulates the target object or scene in accordance with the given audio input.
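The two-stage pipeline described above can be sketched in a minimal form: first a soft localization map from the cosine similarity between per-pixel visual features and an audio embedding (assuming both live in a shared CLIP-like space), then a coordinate MLP standing in for the INR whose output is blended into the image only where the mask is high. The function names, the tiny SIREN-style network, and the min-max normalization are illustrative assumptions, not the trained LISA networks, which additionally rely on CLIP-based losses not shown here.

```python
import numpy as np

def localization_map(visual_feats, audio_emb):
    """Soft localization mask from cosine similarity.

    visual_feats: (H, W, D) per-pixel features, assumed to live in the
    same (CLIP-like) embedding space as audio_emb: (D,).
    Returns an (H, W) map min-max normalized to [0, 1].
    """
    v = visual_feats / (np.linalg.norm(visual_feats, axis=-1, keepdims=True) + 1e-8)
    a = audio_emb / (np.linalg.norm(audio_emb) + 1e-8)
    sim = v @ a  # (H, W) cosine similarity in [-1, 1]
    return (sim - sim.min()) / (sim.max() - sim.min() + 1e-8)

def stylize_localized(image, mask, weights):
    """Evaluate a toy coordinate MLP (the INR) over the pixel grid and
    blend its RGB output into the image only where the mask is high."""
    h, w, _ = image.shape
    ys, xs = np.meshgrid(np.linspace(-1, 1, h), np.linspace(-1, 1, w), indexing="ij")
    coords = np.stack([xs, ys], axis=-1).reshape(-1, 2)
    hidden = np.sin(coords @ weights["w1"] + weights["b1"])  # SIREN-style sine layer
    styled = 1 / (1 + np.exp(-(hidden @ weights["w2"] + weights["b2"])))  # RGB in (0, 1)
    styled = styled.reshape(h, w, 3)
    m = mask[..., None]
    return m * styled + (1 - m) * image  # untouched pixels keep original values
```

In the actual method, the INR weights would be optimized with an audio-guided loss so the styled region matches the sound semantics; the blend above only shows how the localization map confines that edit to the target region.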



