A General Decoupled Learning Framework for Parameterized Image Operators

by   Qingnan Fan, et al.

Many different deep networks have been used to approximate, accelerate, or improve traditional image operators. Many of these operators contain parameters that must be tweaked to obtain satisfactory results; we refer to these as parameterized image operators. However, most existing deep networks trained for such operators are designed for one specific parameter configuration, which does not meet the needs of real scenarios that usually require flexible parameter settings. To overcome this limitation, we propose a new decoupled learning algorithm that learns from the operator parameters to dynamically adjust the weights of a deep network for image operators, denoted as the base network. The learned algorithm takes the form of another network, namely the weight-learning network, which can be jointly trained end-to-end with the base network. Experiments demonstrate that the proposed framework can be successfully applied to many traditional parameterized image operators. To accelerate parameter tuning in practical scenarios, the framework can be further extended to dynamically change the weights of only a single layer of the base network while sharing most of the computation. We demonstrate that this cheap parameter-tuning extension of the decoupled learning framework even outperforms state-of-the-art alternative approaches.
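The core idea above can be sketched in a few lines: a small weight-learning network maps an operator parameter (for example, a smoothing strength) to the weights of a layer in the base network, so that changing the parameter changes the base network's behaviour without retraining it. The sketch below uses NumPy with untrained random weights purely for illustration; the network sizes, the `weight_net` and `conv2d` names, and the single 3x3 layer are assumptions, not the paper's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical weight-learning network: a tiny MLP that maps a scalar
# operator parameter to the weights of one 3x3 convolution layer of the
# base network. These MLP weights are random stand-ins; in the paper the
# weight-learning network is trained jointly with the base network.
W1 = rng.standard_normal((8, 1)) * 0.1
W2 = rng.standard_normal((9, 8)) * 0.1

def weight_net(lam):
    """Map a scalar operator parameter lam to a predicted 3x3 conv kernel."""
    h = np.tanh(W1 @ np.array([[lam]]))  # (8, 1) hidden activation
    return (W2 @ h).reshape(3, 3)        # predicted conv kernel

def conv2d(img, kernel):
    """Naive 'valid' 2-D convolution: one layer of the base network."""
    kh, kw = kernel.shape
    H, W = img.shape
    out = np.empty((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

img = rng.standard_normal((8, 8))
# Different parameter settings yield different conv weights, and hence
# different outputs, from the same fixed base-network structure.
out_a = conv2d(img, weight_net(0.1))
out_b = conv2d(img, weight_net(2.0))
```

The "single layer" extension mentioned in the abstract corresponds to predicting only one layer's kernel this way while keeping all other layers fixed, so most of the forward computation can be shared across parameter settings.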
