Geoffrey E. Hinton

The English-Canadian cognitive psychologist and computer scientist Geoffrey Everest Hinton is best known for his work on artificial neural networks. With David E. Rumelhart and Ronald J. Williams, Hinton co-authored a highly cited 1986 paper that popularized the backpropagation algorithm for training multi-layer neural networks. He is widely regarded as a leading figure in the deep learning community, and some refer to him as the "Godfather of Deep Learning." The dramatic image-recognition milestone achieved by his student Alex Krizhevsky in the ImageNet 2012 challenge helped revolutionize the field of computer vision. Hinton received the 2018 Turing Award, together with Yoshua Bengio and Yann LeCun, for their work on deep learning.

Hinton received a Bachelor of Arts in experimental psychology from King's College, Cambridge, in 1970. He went on to study at the University of Edinburgh, where in 1978 he received his doctorate in artificial intelligence for research supervised by Christopher Longuet-Higgins.

After his PhD, he worked at the University of Sussex, the University of California, San Diego, and Carnegie Mellon University. He was the founding director of the Gatsby Computational Neuroscience Unit, funded by the Gatsby Charitable Foundation, at University College London, and now teaches computer science at the University of Toronto. He holds a Canada Research Chair in Machine Learning and is an advisor for the Learning in Machines & Brains program at the Canadian Institute for Advanced Research. In 2012, Hinton taught a free online course on neural networks on the Coursera education platform. When his company, DNNresearch Inc., was acquired, Hinton joined Google in March 2013. He plans to "divide his time between his university research and his work at Google."

Hinton's research explores ways to use neural networks for machine learning, memory, perception and symbol processing. He has authored or co-authored more than 200 publications. While Hinton was a professor at Carnegie Mellon, he, David E. Rumelhart and Ronald J. Williams applied the backpropagation algorithm to multi-layer neural networks. Their experiments showed that such networks can learn useful internal representations of data. Although this work was important in popularizing backpropagation, it was not the first time the approach had been suggested: reverse-mode automatic differentiation, of which backpropagation is a special case, was proposed by Seppo Linnainmaa in 1970, and Paul Werbos proposed using it to train neural networks in 1974. During the same period, Hinton, together with David Ackley and Terry Sejnowski, invented Boltzmann machines. His other contributions to neural network research include distributed representations, time-delay neural networks, mixtures of experts, Helmholtz machines and products of experts. In 2007, Hinton co-authored an unsupervised learning paper entitled "Unsupervised learning of image transformations."
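The core idea behind the backpropagation algorithm that Rumelhart, Hinton and Williams popularized is that the error at the output can be converted into gradients and propagated backwards through the weights of each layer. The following is a minimal illustrative sketch for a network with a single hidden layer; the layer sizes, loss function and data here are hypothetical choices for the example, not details taken from the 1986 paper.

```python
import math
import random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def forward(w1, w2, x):
    # w1: hidden_units x inputs, w2: outputs x hidden_units (nested lists).
    h = [sigmoid(sum(w * xi for w, xi in zip(row, x))) for row in w1]
    y = [sigmoid(sum(w * hj for w, hj in zip(row, h))) for row in w2]
    return h, y

def backprop(w1, w2, x, target):
    """Gradients of the squared error 0.5 * sum((y - t)^2) w.r.t. w1, w2."""
    h, y = forward(w1, w2, x)
    # Output-layer deltas: dE/dz = (y - t) * sigmoid'(z) = (y - t) * y * (1 - y).
    d_out = [(yi - ti) * yi * (1 - yi) for yi, ti in zip(y, target)]
    g2 = [[d * hj for hj in h] for d in d_out]
    # Hidden-layer deltas: propagate output error backwards through w2.
    d_hid = [sum(d_out[k] * w2[k][j] for k in range(len(w2))) * h[j] * (1 - h[j])
             for j in range(len(h))]
    g1 = [[d * xi for xi in x] for d in d_hid]
    return g1, g2

# Example: one gradient computation on a tiny, randomly initialized network.
random.seed(0)
w1 = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(3)]
w2 = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(1)]
g1, g2 = backprop(w1, w2, [0.5, -0.2], [1.0])
```

A standard sanity check for such an implementation is to compare the analytic gradients against finite differences of the loss, which the sketch above passes.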
Hinton's articles in Scientific American in September 1992 and October 1993 give an accessible introduction to his research. In October and November 2017, respectively, Hinton published two open-access research papers on capsule neural networks, which, according to Hinton, "finally work well."

In 1998, Hinton was elected a Fellow of the Royal Society. He was the first winner of the Rumelhart Prize in 2001.

In 2001, the University of Edinburgh awarded Hinton an honorary doctorate. He received the IJCAI Award for Research Excellence, a lifetime-achievement award, in 2005. He was also awarded the 2011 Herzberg Canada Gold Medal for Science and Engineering. In 2016, Hinton was elected a foreign member of the National Academy of Engineering "for contributions to the theory and practice of artificial neural networks and their application to speech recognition and computer vision." He received an honorary doctorate from the Université de Sherbrooke, and was also awarded the 2016 IEEE/RSE Wolfson James Clerk Maxwell Award. He won the BBVA Foundation Frontiers of Knowledge Award in the Information and Communication Technologies category for "his pioneering, highly influential work" on endowing machines with the capacity for learning.

Hinton is the great-great-grandson of the logician George Boole, whose work ultimately became a foundation of modern computer science, and of the surgeon and author James Hinton, the father of Charles Howard Hinton. His middle name comes from another relative, George Everest. He is the nephew of the economist Colin Clark. In 1994, he lost his first wife to ovarian cancer.

Hinton moved from the U.S. to Canada partly out of disillusionment with Reagan-era politics and disapproval of military funding of artificial intelligence. With regard to the existential risk from artificial intelligence, Hinton typically declines to make predictions more than five years into the future. However, in an informal November 2015 conversation with the journalist Raffi Khatchadourian, who has reported on the AI-risk warnings of Nick Bostrom, Hinton reportedly stated that he did not expect general A.I. for decades. He placed himself in the "hopeless" camp of a dichotomy Bostrom had previously drawn between people who think managing the existential risk from artificial intelligence is probably hopeless and those who think it is easy enough to be solved automatically. He added that "the prospect of discovery is too sweet," a reference to J. Robert Oppenheimer's remark on why the Manhattan Project was carried out. According to this report, Hinton does not categorically rule out human beings controlling an artificial superintelligence.
