
Exploring the effects of Lx-norm penalty terms in multivariate curve resolution methods for resolving LC/GC-MS data

Several problems complicate the resolution of complex LC-MS or GC-MS data, such as embedded chromatographic peaks, continuum background, and overlap in mass channels between components. These problems cause rotational ambiguity in the profiles recovered by multivariate curve resolution (MCR) methods. Since mass spectra are sparse in nature, sparsity has recently been proposed as a constraint in MCR methods for analyzing LC-MS data. There are different ways to implement the sparsity constraint, and the majority of methods impose a penalty based on the L0-, L1-, or L2-norm of the recovered mass spectra. Ridge regression and the least absolute shrinkage and selection operator (Lasso) can be used to implement the L2- and L1-norm penalties in MCR, respectively. The main question is which Lx-norm penalty is most worthwhile for implementing the sparsity constraint in MCR methods. To address this question, two- and three-component LC-MS data sets were simulated and used as case studies in this work. The areas of feasible solutions (AFS) were calculated using a grid search strategy. Calculating Lx-norm values in the AFS for x between zero and two revealed that the gradient of the optimization surface increases as x decreases from two toward zero. However, for x equal to zero the optimization surface resembles a plateau, which increases the risk of getting stuck in local minima. Overall, the results of this work recommend L1-norm penalty methods such as Lasso for implementing the sparsity constraint in the MCR-ALS algorithm, in order to find sparser solutions and reduce the extent of rotational ambiguity.
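The contrast between the L1- and L2-norm penalties discussed in the abstract can be sketched with the one-dimensional closed-form solutions of the penalized least-squares problem minimize 0.5*(b - a)^2 + penalty(b). This is an illustrative example, not the paper's code: the spectrum values and penalty weight below are hypothetical, and the functions stand in for the Lasso (soft thresholding) and ridge (proportional shrinkage) updates that a penalized MCR-ALS step would apply channel by channel.

```python
def prox_l1(a, lam):
    """Closed-form minimizer of 0.5*(b - a)**2 + lam*|b| (L1/Lasso).

    Soft thresholding: values with |a| <= lam are set exactly to zero,
    which is why the L1 penalty produces sparse mass spectra.
    """
    if a > lam:
        return a - lam
    if a < -lam:
        return a + lam
    return 0.0


def prox_l2(a, lam):
    """Closed-form minimizer of 0.5*(b - a)**2 + lam*b**2 (L2/ridge).

    Proportional shrinkage: every channel is scaled by 1/(1 + 2*lam),
    so small values shrink but never become exactly zero.
    """
    return a / (1.0 + 2.0 * lam)


# Hypothetical mass spectrum: two real peaks plus small noise channels.
spectrum = [0.05, 0.0, 2.0, 0.01, 1.5]
lam = 0.1

l1_result = [prox_l1(a, lam) for a in spectrum]
l2_result = [prox_l2(a, lam) for a in spectrum]

# The L1 penalty zeroes the noise channels; the L2 penalty only shrinks them.
print("L1:", l1_result)  # noise channels become exactly 0.0
print("L2:", l2_result)  # all nonzero channels stay nonzero
```

Running this shows the behavior the abstract attributes to the Lasso: the L1 update drives the small (noise) channels exactly to zero while leaving the large peaks nearly intact, whereas the ridge update merely rescales everything, so the recovered spectrum never becomes sparse.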

