Maximizing Model Generalization for Manufacturing with Self-Supervised Learning and Federated Learning

by Matthew Russell, et al.

Deep Learning (DL) can diagnose faults and assess machine health from raw condition monitoring data without manually designed statistical features. However, practical manufacturing applications remain extremely difficult for existing DL methods. Machine data is often unlabeled and drawn from very few health conditions (e.g., only normal operating data). Furthermore, models often encounter domain shifts as process parameters change and new categories of faults emerge. Traditional supervised learning may struggle to learn compact, discriminative representations that generalize to these unseen target domains, since it depends on having plentiful classes to partition the feature space with decision boundaries. Transfer Learning (TL) with domain adaptation attempts to adapt such models to unlabeled target domains but assumes a similar underlying structure that may be absent when new faults emerge. This study instead proposes maximizing feature generality on the source domain and applying TL via weight transfer to copy the model to the target domain. Specifically, Self-Supervised Learning (SSL) with Barlow Twins may produce more discriminative features for monitoring health conditions than supervised learning by focusing on semantic properties of the data. Furthermore, Federated Learning (FL) for distributed training may also improve generalization by efficiently expanding the effective size and diversity of the training data through information sharing across multiple client machines. Results show that Barlow Twins outperforms supervised learning in an unlabeled target domain with emerging motor faults when the source training data contains very few distinct categories. Incorporating FL may also provide a slight advantage by diffusing knowledge of health conditions between machines.
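For readers unfamiliar with the Barlow Twins objective mentioned above, the following is a minimal NumPy sketch of its loss, not the authors' implementation: two augmented views of a batch are embedded, each embedding dimension is standardized across the batch, and the cross-correlation matrix between the views is pushed toward the identity (invariance on the diagonal, redundancy reduction off the diagonal). The function name `barlow_twins_loss` and the default weight `lam` are illustrative choices here.

```python
import numpy as np

def barlow_twins_loss(z_a, z_b, lam=5e-3):
    """Barlow Twins-style loss (illustrative sketch).

    z_a, z_b: (batch, dim) embeddings of two augmented views of the
    same inputs. Drives their cross-correlation matrix toward identity.
    """
    n, d = z_a.shape
    # Standardize each embedding dimension across the batch
    z_a = (z_a - z_a.mean(axis=0)) / z_a.std(axis=0)
    z_b = (z_b - z_b.mean(axis=0)) / z_b.std(axis=0)
    # Empirical (d x d) cross-correlation matrix between the two views
    c = (z_a.T @ z_b) / n
    # Invariance term: diagonal entries should equal 1
    on_diag = np.sum((np.diag(c) - 1.0) ** 2)
    # Redundancy-reduction term: off-diagonal entries should be 0
    off_diag = np.sum(c ** 2) - np.sum(np.diag(c) ** 2)
    return on_diag + lam * off_diag
```

Because the objective never references class labels, it can be trained on unlabeled monitoring data from a single health condition, which is the setting the abstract targets.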



