Framework for Multi-task Multiple Kernel Learning and Applications in Genome Analysis

06/30/2015
by Christian Widmer, et al.

We present a general regularization-based framework for multi-task learning (MTL), in which the similarity between tasks can be learned or refined using ℓ_p-norm multiple kernel learning (MKL). Based on this very general formulation (including a general loss function), we derive the corresponding dual formulation using Fenchel duality applied to Hermitian matrices. We show that numerous established MTL methods can be derived as special cases from both the primal and the dual of our formulation. Furthermore, we derive a modern dual coordinate descent optimization strategy for the hinge-loss variant of our formulation and provide convergence bounds for our algorithm. As a special case, we implement a fast LibLinear-style solver for ℓ_p-norm MKL in C++. In the experimental section, we analyze various aspects of our algorithm, such as its predictive performance and its ability to reconstruct task relationships, on biologically inspired synthetic data, where we have full control over the underlying ground truth. We also experiment on a new dataset from the domain of computational biology that we collected for the purpose of this paper. It concerns the prediction of transcription start sites (TSS) over nine organisms, which is a crucial task in gene finding. Our solvers, including all discussed special cases, are made available as open-source software as part of the SHOGUN machine learning toolbox (available at <http://shogun.ml>).
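For orientation, one common way to write such an ℓ_p-norm MTL-MKL primal is sketched below; the notation is illustrative and not taken verbatim from the paper:

\min_{\beta \ge 0,\ \|\beta\|_p \le 1}\ \min_{w}\ \tfrac{1}{2}\,\|w\|_{K(\beta)}^{2} + C \sum_{i} \ell\big(f_w(x_i, \tau_i),\, y_i\big),
\qquad K(\beta) = \sum_{m=1}^{M} \beta_m K_m,

where each example x_i belongs to a task τ_i, the base kernels K_1, …, K_M encode candidate task-similarity structures, and ℓ is a general loss (the hinge loss in the dedicated C++ solver). Setting p = 1 corresponds to sparse selection among the candidate task structures, while p > 1 yields non-sparse kernel weightings.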
