Aerodynamic Data Predictions Based on Multi-task Learning

by Liwei Hu, et al.

The quality of a dataset is one of the key factors that determine the accuracy of aerodynamic data models. For example, in a uniformly sampled Burgers' dataset, the scarce high-speed data are overwhelmed by massive low-speed data. Predicting high-speed data is harder than predicting low-speed data because high-speed samples are limited; in other words, the quality of the Burgers' dataset is unsatisfactory. To improve dataset quality, traditional methods usually resample the data to generate enough samples for the under-represented parts of the original dataset before modeling, which increases computational cost. Recently, mixtures of experts have been used in natural language processing to handle different parts of sentences, which suggests a way to eliminate data resampling in aerodynamic data modeling. Motivated by this, we propose multi-task learning (MTL), a dataset-quality-adaptive learning scheme that combines task allocation with aerodynamic characteristics learning to distribute the burden of the overall learning task. Task allocation divides the whole learning task into several independent subtasks, while aerodynamic characteristics learning trains on these subtasks simultaneously to achieve higher precision. Two experiments with poor-quality datasets are conducted to verify the MTL's adaptivity to dataset quality. The results show that the MTL is more accurate than fully connected networks (FCNs) and generative adversarial networks (GANs) on poor-quality datasets.
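The combination of task allocation and per-subtask learning described above resembles a mixture-of-experts forward pass: a gating network allocates each sample to expert subnetworks, and the prediction is the gate-weighted sum of the experts' outputs. The following is a minimal NumPy sketch of that idea only; the dimensions, weight initialization, and network shapes are hypothetical and not the authors' actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    # Numerically stable softmax over the last axis.
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

# Hypothetical sizes: flow-condition input -> one aerodynamic quantity.
d_in, d_hidden, n_experts = 4, 16, 3

# Task-allocation (gating) network: decides how strongly each
# expert (subtask) contributes to a given sample.
W_gate = rng.normal(scale=0.1, size=(d_in, n_experts))

# Each expert is a small two-layer network; experts handle
# their subtasks in parallel.
experts = [
    (rng.normal(scale=0.1, size=(d_in, d_hidden)),
     rng.normal(scale=0.1, size=(d_hidden, 1)))
    for _ in range(n_experts)
]

def forward(x):
    """Gate-weighted sum of expert predictions for a batch x of shape (n, d_in)."""
    gate = softmax(x @ W_gate)                  # (n, n_experts) allocation weights
    outs = np.stack(                            # (n, 1, n_experts) expert outputs
        [np.tanh(x @ W1) @ W2 for W1, W2 in experts], axis=-1)
    return (outs * gate[:, None, :]).sum(axis=-1)  # (n, 1) combined prediction

x = rng.normal(size=(8, d_in))
y = forward(x)
print(y.shape)  # (8, 1)
```

Because the gate weights sum to one per sample, samples from under-represented regions (e.g. high-speed conditions) can be routed to a dedicated expert rather than being averaged away by the dominant low-speed data, which is the intuition for avoiding resampling.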


Label Budget Allocation in Multi-Task Learning

The cost of labeling data often limits the performance of machine learni...

Modeling Prosodic Phrasing with Multi-Task Learning in Tacotron-based TTS

Tacotron-based end-to-end speech synthesis has shown remarkable voice qu...

Multi-Task Learning from Videos via Efficient Inter-Frame Attention

Prior work in multi-task learning has mainly focused on predictions on a...

Distillation based Multi-task Learning: A Candidate Generation Model for Improving Reading Duration

In feeds recommendation, the first step is candidate generation. Most of...

Heterogeneous Multi-task Learning with Expert Diversity

Predicting multiple heterogeneous biological and medical targets is a ch...

High-speed Privacy Amplification Scheme using GMP in Quantum Key Distribution

Privacy amplification (PA) is the art of distilling a highly secret key ...

A Framework for Fast Polarity Labelling of Massive Data Streams

Many of the existing sentiment analysis techniques are based on supervis...
