Learning Spatial-Temporal Implicit Neural Representations for Event-Guided Video Super-Resolution

03/24/2023
by   Yunfan Lu, et al.

Event cameras sense intensity changes asynchronously and produce event streams with high dynamic range and low latency. This has inspired research endeavors that utilize events to guide the challenging video super-resolution (VSR) task. In this paper, we make the first attempt to address the novel problem of achieving VSR at random scales by taking advantage of the high temporal resolution of events. This is hampered by the difficulty of representing the spatial-temporal information of events when guiding VSR. To this end, we propose a novel framework that incorporates the spatial-temporal interpolation of events into VSR in a unified manner. Our key idea is to learn implicit neural representations from queried spatial-temporal coordinates and features from both RGB frames and events. Our method contains three parts. Specifically, the Spatial-Temporal Fusion (STF) module first learns 3D features from events and RGB frames. Then, the Temporal Filter (TF) module unlocks more explicit motion information from the events near the queried timestamp and generates 2D features. Lastly, the Spatial-Temporal Implicit Representation (STIR) module recovers the SR frame at arbitrary resolutions from the outputs of these two modules. In addition, we collect a real-world dataset with spatially aligned events and RGB frames. Extensive experiments show that our method significantly surpasses the prior arts and achieves VSR with random scales, e.g., 6.5. Code and dataset are available at https://vlis2022.github.io/cvpr23/egvsr.
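The core mechanism described above — decoding an RGB value from a continuous spatial-temporal query coordinate plus fused features — is what makes arbitrary (e.g., 6.5x) upscaling possible. The sketch below is a hypothetical, heavily simplified illustration of that querying pattern, not the authors' implementation: the feature dimension, the two-layer MLP standing in for the STIR decoder, and the random features standing in for the STF/TF outputs are all assumptions.

```python
import numpy as np

# Hypothetical sketch of implicit-neural-representation querying:
# f(x, y, t, feature) -> RGB at any continuous coordinate, which is
# what enables super-resolution at non-integer scales such as 6.5.

rng = np.random.default_rng(0)

FEAT_DIM = 16   # assumed size of the fused event/RGB feature
HIDDEN = 32

# Randomly initialised 2-layer MLP standing in for the STIR decoder.
W1 = rng.standard_normal((3 + FEAT_DIM, HIDDEN)) * 0.1
b1 = np.zeros(HIDDEN)
W2 = rng.standard_normal((HIDDEN, 3)) * 0.1
b2 = np.zeros(3)

def query_inr(coords, feats):
    """Decode RGB at continuous (x, y, t) query points.

    coords: (N, 3) spatial-temporal coordinates in [0, 1].
    feats:  (N, FEAT_DIM) features sampled at those points
            (stand-ins for the STF/TF module outputs).
    """
    h = np.concatenate([coords, feats], axis=-1)
    h = np.maximum(h @ W1 + b1, 0.0)   # ReLU hidden layer
    return h @ W2 + b2                 # (N, 3) RGB values

# Query a 6.5x upscaled grid at an arbitrary timestamp t = 0.37.
scale = 6.5
H, W = 4, 4                            # toy low-resolution size
hs, ws = int(H * scale), int(W * scale)
ys, xs = np.meshgrid(np.linspace(0, 1, hs),
                     np.linspace(0, 1, ws), indexing="ij")
coords = np.stack([xs.ravel(), ys.ravel(),
                   np.full(hs * ws, 0.37)], axis=-1)
feats = rng.standard_normal((hs * ws, FEAT_DIM))
rgb = query_inr(coords, feats)
print(rgb.shape)  # (676, 3): one RGB value per point of the 26x26 grid
```

Because the decoder is a function of continuous coordinates rather than a fixed pixel grid, the same trained network can be queried at any scale and at any intermediate timestamp, which is the property the paper exploits for random-scale, event-guided VSR.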


