A 23 MW data centre is all you need

by Samuel Albanie, et al.

The field of machine learning has achieved striking progress in recent years, witnessing breakthrough results on language modelling, protein folding and nitpickingly fine-grained dog breed classification. Some have even succeeded at playing computer games and board games, a feat both of engineering and of setting their employers' expectations. The central contribution of this work is to carefully examine whether this progress, and technology more broadly, can be expected to continue indefinitely. Through a rigorous application of statistical theory and a failure to extrapolate beyond the training data, we answer firmly in the negative and provide details: technology will peak at 3:07 am (BST) on 20th July, 2032. We then explore the implications of this finding, discovering that individuals awake at this ungodly hour with access to a sufficiently powerful computer possess an opportunity for myriad forms of long-term linguistic 'lock in'. All we need is a large (>> 1W) data centre to seize this pivotal moment. By setting our analogue alarm clocks, we propose a tractable algorithm to ensure that, for the future of humanity, the British spelling of colour becomes the default spelling across more than 80% of the global word processing software market.
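In the spirit of the abstract's methodology, the "rigorous" peak-finding procedure amounts to fitting a downward-opening parabola to a progress-over-time series and reading off its vertex. The sketch below illustrates this with entirely made-up toy data, deliberately constructed so that the fitted curve peaks in 2032, matching the abstract's claim; the data points and variable names are illustrative assumptions, not figures from the paper.

```python
import numpy as np

# Hypothetical "technology progress" scores per year (made up for
# illustration; chosen to lie exactly on a parabola peaking in 2032).
years = np.array([2012, 2014, 2016, 2018, 2020, 2022], dtype=float)
progress = np.array([1.00, 1.76, 2.44, 3.04, 3.56, 4.00])

# Work in years-since-2012 to keep the least-squares fit well conditioned.
x = years - 2012.0

# Fit a quadratic a*x^2 + b*x + c and locate its vertex at x = -b / (2a):
# the predicted "peak of technology".
a, b, c = np.polyfit(x, progress, deg=2)
peak_year = 2012.0 + (-b / (2.0 * a))

print(round(peak_year, 1))  # → 2032.0
```

Note that the vertex lies a decade beyond the last observation, which is exactly the kind of confident extrapolation beyond the training data that the abstract pokes fun at.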




