Transformative Machine Learning

11/08/2018
by Ivan Olier, et al.

The key to success in machine learning (ML) is the use of effective data representations. Traditionally, data representations were hand-crafted. Recently it has been demonstrated that, given sufficient data, deep neural networks can learn effective implicit representations from simple input representations. However, for most scientific problems, the use of deep learning is not appropriate as the amount of available data is limited, and/or the output models must be explainable. Nevertheless, many scientific problems do have significant amounts of data available on related tasks, which makes them amenable to multi-task learning, i.e., learning many related problems simultaneously. Here we propose a novel and general representation learning approach for multi-task learning that works successfully with small amounts of data. The fundamental new idea is to transform an input intrinsic data representation (i.e., hand-crafted features) into an extrinsic representation based on what a pre-trained set of models predict about the examples. This transformation has the dual advantages of producing significantly more accurate predictions and providing explainable models. To demonstrate the utility of this transformative learning approach, we have applied it to three real-world scientific problems: drug design (quantitative structure-activity relationship learning), predicting human gene expression (across different tissue types and drug treatments), and meta-learning for machine learning (predicting which machine learning methods work best for a given problem). In all three problems, transformative machine learning significantly outperforms the best intrinsic representation.
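
To make the core idea concrete, here is a minimal sketch in Python. It assumes scikit-learn, simulates the related tasks with synthetic regression data as a crude stand-in, and all names and modelling choices (to_extrinsic, task_models, random forests for the pre-trained task models, ridge regression for the final explainable model) are illustrative assumptions, not the paper's actual pipeline.

```python
# Sketch of transformative ML: represent each example by what a set of
# pre-trained task models predict about it, then learn on that representation.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_features, n_tasks = 20, 10

# Simulated pool of data on related tasks sharing the same intrinsic
# features (e.g., molecular descriptors shared across drug targets).
# Splitting one synthetic dataset by random task ids is only a placeholder.
X_pool, y_pool = make_regression(n_samples=2000, n_features=n_features,
                                 n_informative=10, noise=5.0, random_state=0)
task_ids = rng.integers(0, n_tasks, size=len(X_pool))

# Step 1: pre-train one model per related task on the intrinsic features.
task_models = []
for t in range(n_tasks):
    mask = task_ids == t
    model = RandomForestRegressor(n_estimators=50, random_state=t)
    model.fit(X_pool[mask], y_pool[mask])
    task_models.append(model)

# Step 2: the extrinsic representation of an example is the vector of
# predictions the pre-trained task models make for it.
def to_extrinsic(X):
    return np.column_stack([m.predict(X) for m in task_models])

# Target task with only a small amount of labelled data.
X_target, y_target = make_regression(n_samples=60, n_features=n_features,
                                     n_informative=10, noise=5.0,
                                     random_state=1)
X_tr, X_te, y_tr, y_te = train_test_split(X_target, y_target,
                                          test_size=0.5, random_state=0)

# Step 3: learn the target task on the extrinsic representation; a linear
# model keeps the result explainable (each coefficient weights one task model).
final = Ridge(alpha=1.0).fit(to_extrinsic(X_tr), y_tr)
print("R^2 on held-out target data:", final.score(to_extrinsic(X_te), y_te))
```

Under this reading, the transformation replaces the 20 intrinsic features with 10 extrinsic ones, and the final model's coefficients indicate which related tasks are informative for the target, which is one plausible route to the explainability the abstract claims.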

