Tensor-Based Classifiers for Hyperspectral Data Analysis

by Konstantinos Makantasis, et al.

In this work, we present tensor-based linear and nonlinear models for hyperspectral data classification and analysis. By exploiting principles of tensor algebra, we introduce new classification architectures whose weight parameters satisfy the rank-1 canonical decomposition property. We then introduce learning algorithms that train both the linear and the nonlinear classifier so that i) the error over the training samples is minimized and ii) the weight coefficients satisfy the rank-1 canonical decomposition property. The advantages of the proposed classification model are that i) it reduces the number of parameters, and thus the number of training samples required to properly train the model, ii) it provides a physical interpretation of the effect of the model coefficients on the classification output, and iii) it retains the spatial and spectral coherency of the input samples. To address the limitations of linear classification, which has low capacity since it can only produce decision rules that are linear in the input space, we introduce nonlinear classification models based on a modification of a feedforward neural network. We call the proposed architecture a rank-1 Feedforward Neural Network (FNN), since its weights satisfy the rank-1 canonical decomposition property. Appropriate learning algorithms are also proposed to train the network. Experimental results and comparisons with state-of-the-art classification methods, both linear (e.g., SVM) and nonlinear (e.g., deep learning), indicate that the proposed scheme outperforms them, especially when only a small number of training samples is available. Furthermore, the proposed tensor-based classifiers are evaluated with respect to their dimensionality-reduction capabilities.
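As a minimal sketch of the rank-1 idea, the snippet below trains a linear classifier whose weight tensor is kept in factored form W = a ∘ b ∘ c over toy hyperspectral-style patches. The data, dimensions, and plain gradient-descent loop are illustrative assumptions, not the paper's exact learning algorithm; the point is that the model stores h + w + bands parameters instead of h * w * bands.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy hyperspectral-style patches: 5x5 spatial window, 20 spectral bands
# (hypothetical synthetic data, for illustration only).
h, w, bands, n = 5, 5, 20, 200
X = rng.normal(size=(n, h, w, bands))
a_true = rng.normal(size=h)
b_true = rng.normal(size=w)
c_true = rng.normal(size=bands)
# Labels from a planted rank-1 rule, so the model class can fit them.
y = (np.einsum('nijk,i,j,k->n', X, a_true, b_true, c_true) > 0).astype(float)

# Rank-1 factors to learn: the full weight tensor W = a outer b outer c is
# never materialized, so there are h + w + bands = 30 parameters
# instead of h * w * bands = 500.
a = rng.normal(size=h)
b = rng.normal(size=w)
c = rng.normal(size=bands)
bias, lr = 0.0, 0.05

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

losses = []
for _ in range(300):
    # Multilinear score: contract each patch against all three factors.
    z = np.einsum('nijk,i,j,k->n', X, a, b, c) + bias
    p = sigmoid(z)
    losses.append(-np.mean(y * np.log(p + 1e-12)
                           + (1 - y) * np.log(1 - p + 1e-12)))
    g = (p - y) / n  # d(logistic loss)/dz, averaged over samples
    # Chain rule through the multilinear form: each factor's gradient
    # contracts X with the other two factors.
    grad_a = np.einsum('n,nijk,j,k->i', g, X, b, c)
    grad_b = np.einsum('n,nijk,i,k->j', g, X, a, c)
    grad_c = np.einsum('n,nijk,i,j->k', g, X, a, b)
    a -= lr * grad_a
    b -= lr * grad_b
    c -= lr * grad_c
    bias -= lr * g.sum()
```

The factored parameterization also explains the interpretability claim: each factor weights one mode of the input (rows, columns, spectral bands) separately, so the spatial and spectral structure of the patch is preserved in the coefficients.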

