LORA: Learning to Optimize for Resource Allocation in Wireless Networks with Few Training Samples

12/18/2018
by   Yifei Shen, et al.

Effective resource allocation plays a pivotal role in performance optimization for wireless networks. Unfortunately, typical resource allocation problems are mixed-integer nonlinear programming (MINLP) problems, which are NP-hard in general. Machine learning-based methods have recently emerged as a disruptive way to obtain near-optimal performance for MINLP problems with affordable computational complexity. However, a key challenge is that these methods require huge amounts of training samples, which are difficult to obtain in practice. Furthermore, they suffer from severe performance deterioration when the network parameters change, which commonly happens in practice and can be characterized as the task mismatch issue. In this paper, to address the sample complexity issue, instead of directly learning the input-output mapping of a particular resource allocation algorithm, we propose a Learning to Optimize framework for Resource Allocation, called LORA, that learns the pruning policy in the optimal branch-and-bound algorithm. By exploiting the algorithm structure, this framework enjoys an extremely low sample complexity, on the order of tens or hundreds, compared with millions for existing methods. To further address the task mismatch issue, we propose a transfer learning method via self-imitation, named LORA-TL, which can adapt to a new task with only a few additional unlabeled training samples. Numerical simulations demonstrate that LORA outperforms specialized state-of-the-art algorithms and achieves near-optimal performance. Moreover, LORA-TL, relying on a few unlabeled samples, achieves performance comparable with a model trained from scratch on sufficient labeled samples.
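The core idea described above, replacing the hand-coded pruning decision inside branch-and-bound with a learned policy, can be sketched on a toy problem. The 0/1 knapsack instance, the two-feature node encoding, and the `prune_policy` callable interface below are illustrative assumptions for this sketch, not the paper's actual problem formulation or feature design:

```python
import heapq

def branch_and_bound(values, weights, capacity, prune_policy):
    """Best-first branch-and-bound for a 0/1 knapsack toy instance.

    `prune_policy(features)` returns True to discard a node. In LORA's
    framing, this policy is what gets learned from a small number of
    demonstrations of the exact algorithm, instead of being hand-coded.
    Items are assumed pre-sorted by value density so the fractional
    relaxation below is a valid upper bound.
    """
    n = len(values)

    def upper_bound(level, value, room):
        # Fractional (LP) relaxation over the remaining items.
        ub = value
        for i in range(level, n):
            if weights[i] <= room:
                room -= weights[i]
                ub += values[i]
            else:
                ub += values[i] * room / weights[i]
                break
        return ub

    best = 0
    heap = [(-upper_bound(0, 0, capacity), 0, 0, capacity)]
    while heap:
        neg_ub, level, value, room = heapq.heappop(heap)
        if level == n:
            continue
        # Features shown to the pruning policy: (bound gap, depth ratio).
        feats = (-neg_ub - best, level / n)
        if prune_policy(feats):
            continue
        for take in (1, 0):
            if take and weights[level] > room:
                continue  # infeasible branch
            v = value + take * values[level]
            r = room - take * weights[level]
            best = max(best, v)  # update incumbent eagerly
            heapq.heappush(heap, (-upper_bound(level + 1, v, r),
                                  level + 1, v, r))
    return best

# The exact pruning rule the learned classifier would imitate:
# prune whenever the node's upper bound cannot beat the incumbent.
exact_rule = lambda f: f[0] <= 0

values, weights = [60, 50, 40], [30, 30, 30]  # density-sorted
print(branch_and_bound(values, weights, 60, exact_rule))  # -> 110
```

A learned policy would be a classifier (e.g., an SVM) trained on the `feats` vectors labeled by `exact_rule`; a slightly more aggressive classifier trades a small optimality gap for fewer explored nodes, which is the speed/performance trade-off the paper exploits.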


Related research

- 03/05/2019, Learning to Branch: Accelerating Resource Allocation in Wireless Networks. "Resource allocation in wireless networks, such as device-to-device (D2D)..."
- 03/03/2020, Accelerating Generalized Benders Decomposition for Wireless Resource Allocation. "Generalized Benders decomposition (GBD) is a globally optimal algorithm..."
- 11/05/2020, Unsupervised Learning for Asynchronous Resource Allocation in Ad-hoc Wireless Networks. "We consider optimal resource allocation problems under asynchronous wire..."
- 07/21/2018, Learning Optimal Resource Allocations in Wireless Systems. "This paper considers the design of optimal resource allocation policies..."
- 12/16/2017, A Machine Learning Framework for Resource Allocation Assisted by Cloud Computing. "Conventionally, the resource allocation is formulated as an optimization..."
- 05/18/2020, Data Representation for Deep Learning with Priori Knowledge of Symmetric Wireless Tasks. "Deep neural networks (DNNs) have been applied to address various wireles..."
- 01/11/2021, Marketing Mix Optimization with Practical Constraints. "In this paper, we address a variant of the marketing mix optimization (M..."
