PROPS: Probabilistic personalization of black-box sequence models

03/05/2019
by Michael Thomas Wojnowicz, et al.

We present PROPS, a lightweight transfer learning mechanism for sequential data. PROPS learns probabilistic perturbations around the predictions of one or more arbitrarily complex, pre-trained black box models (such as recurrent neural networks). The technique pins the black-box prediction functions to "source nodes" of a hidden Markov model (HMM), and uses the remaining nodes as "perturbation nodes" for learning customized perturbations around those predictions. In this paper, we describe the PROPS model, provide an algorithm for online learning of its parameters, and demonstrate the consistency of this estimation. We also explore the utility of PROPS in the context of personalized language modeling. In particular, we construct a baseline language model by training an LSTM on the entire Wikipedia corpus of 2.5 million articles (around 6.6 billion words), and then use PROPS to lightly customize it into a personalized language model of President Donald J. Trump's tweeting. We achieve good customization after only 2,000 additional words, and find that the PROPS model, being fully probabilistic, provides insight into when President Trump's speech departs from generic patterns in the Wikipedia corpus. Python code (for both the PROPS training algorithm and experiment reproducibility) is available at https://github.com/cylance/perturbed-sequence-model.
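To make the pinning-plus-perturbation mechanism concrete, here is a minimal Python sketch (hypothetical names and shapes, not the authors' released API): a K-state HMM in which the first K_s "source" states emit according to a frozen black-box predictor, while the remaining K_p "perturbation" states carry their own learnable emission rows. Only forward filtering and one-step-ahead prediction are shown; the paper additionally learns the transition matrix and perturbation emissions online, which is omitted here.

import numpy as np

rng = np.random.default_rng(0)

V = 50            # vocabulary size
K_s, K_p = 1, 2   # number of source states / perturbation states
K = K_s + K_p

A = np.full((K, K), 1.0 / K)             # transition matrix (learnable)
B_pert = rng.dirichlet(np.ones(V), K_p)  # perturbation emissions (learnable)
alpha = np.full(K, 1.0 / K)              # current filtered state distribution

def black_box_predict(history):
    # Stand-in for the pinned, pre-trained model's next-word distribution
    # (e.g. an LSTM softmax); uniform here purely for illustration.
    return np.full(V, 1.0 / V)

def emission_matrix(history):
    # Row k is state k's distribution over the next word: source states
    # defer to the black box, perturbation states use their own rows.
    src = np.tile(black_box_predict(history), (K_s, 1))
    return np.vstack([src, B_pert])

def predict_next(history):
    # One-step-ahead predictive distribution: propagate the state belief
    # through A, then mix the per-state emission distributions.
    return (alpha @ A) @ emission_matrix(history)

def filter_update(history, word):
    # Standard HMM forward-filter step after observing `word`.
    global alpha
    alpha = (alpha @ A) * emission_matrix(history)[:, word]
    alpha /= alpha.sum()

# Example: score, then absorb, a short sequence of word indices.
for w in [3, 17, 5]:
    print(f"p(word={w}) = {predict_next(history=None)[w]:.4f}")
    filter_update(history=None, word=w)

Because the source emissions stay pinned, any probability mass the filter shifts onto the perturbation states is a direct, interpretable signal that the observed sequence is departing from the black-box model's generic predictions.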



Related research

01/30/2023
REPLUG: Retrieval-Augmented Black-Box Language Models
We introduce REPLUG, a retrieval-augmented language modeling framework t...

05/11/2020
Spanning Attack: Reinforce Black-box Attacks with Unlabeled Data
Adversarial black-box attacks aim to craft adversarial perturbations by ...

08/27/2020
Adversarial Eigen Attack on Black-Box Models
Black-box adversarial attack has attracted a lot of research interests f...

01/13/2017
Efficient Transfer Learning Schemes for Personalized Language Modeling using Recurrent Neural Network
In this paper, we propose an efficient transfer learning method for trai...

06/05/2021
Extracting Weighted Automata for Approximate Minimization in Language Modelling
In this paper we study the approximate minimization problem for language...

07/30/2022
Simplex Clustering via sBeta with Applications to Online Adjustment of Black-Box Predictions
We explore clustering the softmax predictions of deep neural networks an...

05/04/2023
LLM2Loss: Leveraging Language Models for Explainable Model Diagnostics
Trained on a vast amount of data, large language models (LLMs) have achi...
