Most of the recent literature on image super-resolution (SR) assumes the
availability of training data in the form of paired low resolution (LR) and
high resolution (HR) images or the knowledge of the downgrading operator
(usually bicubic downscaling). While the proposed methods perform well on
standard benchmarks, they often fail to produce convincing results in
real-world settings. This is because real-world images can be subject to
corruptions, such as sensor noise, whose characteristics are severely altered by
bicubic downscaling. Therefore, the models never see a real-world image during
training, which limits their generalization capabilities. Moreover, it is
cumbersome to collect paired LR and HR images from the same source domain.
To address this problem, we propose DSGAN to introduce natural image
characteristics into bicubically downscaled images. It can be trained in an
unsupervised fashion on HR images, thereby generating LR images with the same
characteristics as the original images. We then use the generated data to train
an SR model, which greatly improves its performance on real-world images.
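The data-generation pipeline described above can be sketched in a few lines of PyTorch. Note that `TinyGenerator` and `make_lr_pair` are illustrative stand-ins chosen here for brevity, not the repo's actual classes or API; the real DSGAN generator is a deeper residual network trained adversarially against patches from the source domain.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class TinyGenerator(nn.Module):
    # Illustrative stand-in for the DSGAN generator (the real model is a
    # deeper residual network trained with an adversarial loss).
    def __init__(self, channels: int = 3):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, 16, 3, padding=1),
            nn.ReLU(),
            nn.Conv2d(16, channels, 3, padding=1),
        )

    def forward(self, x):
        # Residual output: start from the clean bicubic LR image and add
        # learned source-domain characteristics on top of it.
        return x + self.body(x)


def make_lr_pair(hr, generator, scale=4):
    # Step 1: clean bicubic downscale, which removes real-world
    # corruptions such as sensor noise.
    lr_bicubic = F.interpolate(
        hr, scale_factor=1 / scale, mode="bicubic", align_corners=False
    )
    # Step 2: the generator re-introduces source-domain characteristics,
    # yielding an LR image that looks like a real capture.
    lr_real = generator(lr_bicubic)
    # The (generated LR, original HR) pair is then used to train the SR model.
    return lr_real, hr
```

Because the generator only needs unpaired HR images from the target domain, this step requires no paired supervision; the SR model is subsequently trained on the generated pairs in the usual supervised fashion.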
Furthermore, we propose to separate the low and high image frequencies and
treat them differently during training. Since the low frequencies are preserved
by downsampling operations, we only require adversarial training to modify the
high frequencies. This idea is applied to our DSGAN model as well as the SR
model. We demonstrate the effectiveness of our method in several experiments
through quantitative and qualitative analysis. Our solution is the winner of
the AIM Challenge on Real World SR at ICCV 2019.
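The frequency-separation idea can be illustrated with a simple low-pass/high-pass split. The kernel below is a Gaussian chosen for illustration; the filter used in the paper may differ, and `split_frequencies` is a hypothetical helper, not the repo's API. The point is that `low + high` reconstructs the image exactly, so the adversarial loss can be restricted to the `high` component while the `low` component is supervised directly.

```python
import torch
import torch.nn.functional as F


def gaussian_kernel(size: int = 5, sigma: float = 1.0) -> torch.Tensor:
    # Build a normalized 2D Gaussian kernel from the outer product of a
    # 1D Gaussian (kernel choice is illustrative, not the paper's exact filter).
    coords = torch.arange(size, dtype=torch.float32) - (size - 1) / 2
    g = torch.exp(-coords ** 2 / (2 * sigma ** 2))
    g = g / g.sum()
    return torch.outer(g, g)


def split_frequencies(img, kernel):
    # img: (B, C, H, W). Apply the same low-pass kernel to each channel
    # via a grouped convolution with reflection padding.
    c = img.shape[1]
    weight = kernel.expand(c, 1, *kernel.shape)
    pad = kernel.shape[-1] // 2
    low = F.conv2d(F.pad(img, (pad,) * 4, mode="reflect"), weight, groups=c)
    # High frequencies are the residual, so low + high == img exactly.
    high = img - low
    return low, high
```

Since downsampling preserves the low-frequency content, a pixel-wise loss on `low` keeps colors and structure faithful, while the GAN discriminator only ever sees `high`, where the real-world characteristics (noise, compression artifacts) live.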
[ICCVW 2019] PyTorch implementation of DSGAN and ESRGAN-FS from the paper "Frequency Separation for Real-World Super-Resolution". This code was the winning solution of the AIM challenge on Real-World Super-Resolution at ICCV 2019.