Understanding algorithmic collusion with experience replay

02/18/2021
by Bingyan Han, et al.

In an infinitely repeated pricing game, pricing algorithms based on artificial intelligence (Q-learning) may consistently learn to charge supra-competitive prices, even without communication. Although concerns about algorithmic collusion have arisen, little is known about its underlying factors. In this work, we experimentally analyze the dynamics of algorithms with three variants of experience replay. Algorithmic collusion still has roots in human preferences: randomizing experience yields prices close to the static Bertrand equilibrium, whereas higher prices are easily restored by favoring the latest experience. Moreover, relative performance concerns also stabilize collusion. Finally, we investigate scenarios with heterogeneous agents and test robustness with respect to various factors.
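To make the setting concrete, the following is a minimal sketch of Q-learning with experience replay in a repeated Bertrand duopoly. The price grid, demand function, learning parameters, and the recency-biased sampling rule are all illustrative assumptions, not the paper's actual specification; it only contrasts the two replay variants the abstract mentions (uniform sampling vs. favoring the latest experience).

```python
import random
from collections import defaultdict, deque

# Illustrative assumptions: a small discrete price grid and unit marginal cost.
PRICES = [1.0, 1.5, 2.0]
COST = 1.0

def profit(p_own, p_rival):
    """Stylized Bertrand demand: the lower price takes the market, ties split it."""
    if p_own < p_rival:
        return p_own - COST
    if p_own == p_rival:
        return (p_own - COST) / 2
    return 0.0

class ReplayAgent:
    """Tabular Q-learner that updates from a replay buffer of past transitions."""

    def __init__(self, alpha=0.1, gamma=0.95, eps=0.1,
                 buffer_size=200, recency_bias=False):
        self.q = defaultdict(float)           # Q[(state, action)]
        self.buffer = deque(maxlen=buffer_size)
        self.alpha, self.gamma, self.eps = alpha, gamma, eps
        self.recency_bias = recency_bias      # True: sample skewed toward latest

    def act(self, state):
        # epsilon-greedy price choice
        if random.random() < self.eps:
            return random.choice(PRICES)
        return max(PRICES, key=lambda a: self.q[(state, a)])

    def remember(self, transition):
        self.buffer.append(transition)        # (state, action, reward, next_state)

    def replay(self, batch=8):
        n = len(self.buffer)
        for _ in range(min(batch, n)):
            if self.recency_bias:
                # triangular draw skewed toward the newest experience
                i = min(int(random.triangular(0, n, n)), n - 1)
            else:
                i = random.randrange(n)       # uniform (randomized) replay
            s, a, r, s2 = self.buffer[i]
            best_next = max(self.q[(s2, a2)] for a2 in PRICES)
            self.q[(s, a)] += self.alpha * (r + self.gamma * best_next
                                            - self.q[(s, a)])

def simulate(rounds=2000, recency_bias=False, seed=0):
    """Run two replay agents against each other; state is the last price pair."""
    random.seed(seed)
    agents = [ReplayAgent(recency_bias=recency_bias) for _ in range(2)]
    state = (PRICES[0], PRICES[0])
    for _ in range(rounds):
        p = [ag.act(state) for ag in agents]
        next_state = tuple(p)
        for k, ag in enumerate(agents):
            ag.remember((state, p[k], profit(p[k], p[1 - k]), next_state))
            ag.replay()
        state = next_state
    return state
```

Comparing `simulate(recency_bias=False)` against `simulate(recency_bias=True)` over many seeds is the kind of experiment the abstract describes: the replay sampling rule, not the pricing logic, is the only thing that differs between the two runs.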

