
Implications of the Convergence of Language and Vision Model Geometries

02/13/2023
by Jiaang Li et al.

Large-scale pretrained language models (LMs) are said to “lack the ability to connect [their] utterances to the world” (Bender and Koller, 2020). If so, we would expect LM representations to be unrelated to representations in computer vision models. To investigate this, we present an empirical evaluation across three different LMs (BERT, GPT2, and OPT) and three computer vision models (VMs, including ResNet, SegFormer, and MAE). Our experiments show that LMs converge towards representations that are partially isomorphic to those of VMs, with dispersion and polysemy both factoring into the alignability of vision and language spaces. We discuss the implications of this finding.
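The measurement at the heart of the abstract, testing whether language and vision representation spaces are partially isomorphic, can be illustrated with a small alignment experiment. The sketch below is not the paper's actual procedure: it assumes row-aligned pairs of LM and VM embeddings of equal dimensionality (real spaces would first need paired concepts and, if dimensions differ, a projection such as PCA), fits an orthogonal Procrustes map on a train split, and scores alignability by held-out nearest-neighbor retrieval precision. All names and the random data are illustrative.

```python
# Minimal sketch of a space-alignment test (not the paper's exact method).
# Assumes lm_emb and vm_emb are row-aligned: row i of each matrix encodes
# the same concept, and both spaces share a dimensionality (project with
# PCA first if they do not).
import numpy as np
from scipy.linalg import orthogonal_procrustes

def alignability(lm_emb, vm_emb, n_train=1500):
    """Fit an orthogonal map on a train split of paired embeddings and
    return precision@1 of cross-space retrieval on the held-out pairs."""
    # Center each space and scale to unit Frobenius norm so the fit is
    # not dominated by mean offsets or overall magnitude.
    lm = lm_emb - lm_emb.mean(axis=0)
    vm = vm_emb - vm_emb.mean(axis=0)
    lm /= np.linalg.norm(lm)
    vm /= np.linalg.norm(vm)

    # Orthogonal Procrustes on the train pairs: R minimizes ||lm R - vm||_F.
    R, _ = orthogonal_procrustes(lm[:n_train], vm[:n_train])

    # Held-out precision@1: does each mapped LM vector land nearest to
    # its paired VM vector under dot-product similarity?
    mapped = lm[n_train:] @ R
    sims = mapped @ vm[n_train:].T
    hits = sims.argmax(axis=1) == np.arange(len(sims))
    return float(hits.mean())

# Illustrative usage with random stand-ins; real inputs would be paired
# concept embeddings from an LM (e.g., BERT) and a VM (e.g., ResNet).
rng = np.random.default_rng(0)
lm_emb = rng.normal(size=(2000, 128))
vm_emb = rng.normal(size=(2000, 128))
print(f"Held-out P@1: {alignability(lm_emb, vm_emb):.3f}")  # ~chance for noise
```

A score well above chance on held-out pairs would indicate that the two spaces share structure recoverable by a rigid rotation, which is one operational reading of "partially isomorphic."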

Related research

Vision-and-Language Pretrained Models: A Survey (04/15/2022)
Pretrained models have produced great success in both Computer Vision (C...

Implications of Computer Vision Driven Assistive Technologies Towards Individuals with Visual Impairment (05/20/2019)
Computer vision based technology is becoming ubiquitous in society. One ...

Towards Language Models That Can See: Computer Vision Through the LENS of Natural Language (06/28/2023)
We propose LENS, a modular approach for tackling computer vision problem...

Does computer vision matter for action? (05/30/2019)
Computer vision produces representations of scene content. Much computer...

NiLBS: Neural Inverse Linear Blend Skinning (04/06/2020)
In this technical report, we investigate efficient representations of ar...

A Comprehensive Survey on Pretrained Foundation Models: A History from BERT to ChatGPT (02/18/2023)
The Pretrained Foundation Models (PFMs) are regarded as the foundation f...

Towards AGI in Computer Vision: Lessons Learned from GPT and Large Language Models (06/14/2023)
The AI community has been pursuing algorithms known as artificial genera...