Contrastive losses as generalized models of global epistasis

by David H. Brookes et al.

Fitness functions map large combinatorial spaces of biological sequences to properties of interest. Inferring these multimodal functions from experimental data is a central task in modern protein engineering. Global epistasis models are an effective and physically grounded class of models for estimating fitness functions from observed data. These models assume that a sparse latent function is transformed by a monotonic nonlinearity to emit measurable fitness. Here we demonstrate that minimizing contrastive loss functions, such as the Bradley-Terry loss, is a simple and flexible technique for extracting the sparse latent function implied by global epistasis. We argue by way of a fitness-epistasis uncertainty principle that the nonlinearities in global epistasis models can produce observed fitness functions that do not admit sparse representations, and thus may be inefficient to learn from observations when using a Mean Squared Error (MSE) loss (a common practice). We show that contrastive losses are able to accurately estimate a ranking function from limited data even in regimes where MSE is ineffective. We validate the practical utility of this insight by showing that contrastive loss functions result in consistently improved performance on benchmark tasks.
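To make the contrastive objective concrete, the following is a minimal sketch of a pairwise Bradley-Terry loss of the kind the abstract describes: for every ordered pair of sequences where the observed fitness of one exceeds the other, the model's latent scores are penalized when they disagree with that ranking. The function names and the plain-NumPy formulation are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def bradley_terry_loss(f_scores, y):
    """Pairwise Bradley-Terry contrastive loss (illustrative sketch).

    f_scores : latent model scores f(x_i) for each sequence
    y        : observed fitness values, used only through their ranking

    For each pair with y[i] > y[j], add -log sigmoid(f_i - f_j),
    written stably as log(1 + exp(-(f_i - f_j))).
    """
    f_scores = np.asarray(f_scores, dtype=float)
    y = np.asarray(y, dtype=float)
    total, count = 0.0, 0
    n = len(y)
    for i in range(n):
        for j in range(n):
            if y[i] > y[j]:
                total += np.log1p(np.exp(-(f_scores[i] - f_scores[j])))
                count += 1
    return total / max(count, 1)
```

Because only the ordering of `y` enters the loss, any monotonic nonlinearity applied to the observed fitness leaves the objective unchanged, which is why such losses can recover the sparse latent function even when the observed (transformed) fitness is hard to fit under MSE.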




