Federated Gaussian Process: Convergence, Automatic Personalization and Multi-fidelity Modeling

11/28/2021
by Xubo Yue, et al.

In this paper, we propose a Federated Gaussian process (𝒢𝒫) regression framework that uses an averaging strategy for model aggregation and stochastic gradient descent for local client computations. Notably, the resulting global model excels in personalization, as the framework jointly learns a global 𝒢𝒫 prior across all clients. The predictive posterior is then obtained by exploiting this prior and conditioning on local data, which encodes personalized features from a specific client. Theoretically, we show that the framework converges to a critical point of the full log-likelihood function, subject to statistical error. Through extensive case studies, we show that it excels in a wide range of applications and is a promising approach for privacy-preserving multi-fidelity data modeling.
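The abstract outlines the mechanics at a high level: clients run stochastic gradient descent on their local GP marginal likelihoods, the server averages the resulting hyperparameters, and personalization comes from conditioning the shared prior on each client's own data. The sketch below is a minimal NumPy illustration of that idea, not the authors' implementation; the RBF kernel, the data-size-weighted averaging, the finite-difference gradients, and helper names such as local_sgd_step and personalized_predict are assumptions made for brevity.

```python
# Minimal sketch (assumed details, not the paper's code) of federated GP
# hyperparameter learning: local SGD on each client's marginal likelihood,
# server-side averaging, and personalized prediction via local conditioning.
import numpy as np

def rbf_kernel(X1, X2, log_ls, log_var):
    """Squared-exponential kernel with log-parameterized hyperparameters (assumed kernel choice)."""
    ls, var = np.exp(log_ls), np.exp(log_var)
    d2 = np.sum((X1[:, None, :] - X2[None, :, :]) ** 2, axis=-1)
    return var * np.exp(-0.5 * d2 / ls ** 2)

def neg_log_marginal_likelihood(theta, X, y):
    """Negative log marginal likelihood of a zero-mean GP with Gaussian noise."""
    log_ls, log_var, log_noise = theta
    K = rbf_kernel(X, X, log_ls, log_var) + np.exp(log_noise) * np.eye(len(X))
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    return 0.5 * y @ alpha + np.sum(np.log(np.diag(L))) + 0.5 * len(y) * np.log(2 * np.pi)

def local_sgd_step(theta, X, y, lr=0.05, batch=32, eps=1e-4):
    """One SGD step on a minibatch; finite-difference gradient used purely for brevity."""
    idx = np.random.choice(len(X), size=min(batch, len(X)), replace=False)
    Xb, yb = X[idx], y[idx]
    grad = np.zeros_like(theta)
    for i in range(len(theta)):
        e = np.zeros_like(theta)
        e[i] = eps
        grad[i] = (neg_log_marginal_likelihood(theta + e, Xb, yb)
                   - neg_log_marginal_likelihood(theta - e, Xb, yb)) / (2 * eps)
    return theta - lr * grad

def federated_gp_training(clients, rounds=20, local_steps=5):
    """Averaging loop: broadcast hyperparameters, run local SGD, average (size-weighted, an assumption)."""
    theta_global = np.zeros(3)  # [log lengthscale, log signal variance, log noise]
    sizes = np.array([len(y) for _, y in clients], dtype=float)
    weights = sizes / sizes.sum()
    for _ in range(rounds):
        local_thetas = []
        for X, y in clients:
            theta = theta_global.copy()
            for _ in range(local_steps):
                theta = local_sgd_step(theta, X, y)
            local_thetas.append(theta)
        theta_global = np.sum(weights[:, None] * np.array(local_thetas), axis=0)
    return theta_global

def personalized_predict(theta_global, X_local, y_local, X_test):
    """Condition the shared GP prior on one client's own data to personalize predictions."""
    log_ls, log_var, log_noise = theta_global
    K = rbf_kernel(X_local, X_local, log_ls, log_var) + np.exp(log_noise) * np.eye(len(X_local))
    Ks = rbf_kernel(X_test, X_local, log_ls, log_var)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_local))
    return Ks @ alpha  # posterior mean; predictive variance omitted for brevity

# Toy usage: two clients observing shifted versions of the same underlying function.
rng = np.random.default_rng(0)
clients = []
for shift in (0.0, 1.0):
    X = rng.uniform(-3, 3, size=(60, 1))
    y = np.sin(X[:, 0] + shift) + 0.1 * rng.standard_normal(60)
    clients.append((X, y))
theta = federated_gp_training(clients)
X_test = np.linspace(-3, 3, 5)[:, None]
print(personalized_predict(theta, *clients[0], X_test))
```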



