Learning Canonical Transformations

11/17/2020
by Zachary Dulberg, et al.

Humans understand a set of canonical geometric transformations (such as translation and rotation) that support generalization by being untethered to any specific object. We explore inductive biases that help a neural network model learn these transformations in pixel space in a way that can generalize out-of-domain. Specifically, we find that high training-set diversity is sufficient for the extrapolation of translation to unseen shapes and scales, and that an iterative training scheme achieves significant extrapolation of rotation in time.
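To make the iterative idea concrete, below is a minimal sketch of one way such a setup could look. It is not the authors' released code: the framework (PyTorch), the architecture, the shape distribution, and all names (StepRotator, make_batch, STEP_DEG) and hyperparameters are illustrative assumptions. A small convolutional network is trained to apply a single fixed rotation step in pixel space; applying it repeatedly at test time then composes larger rotations than any single training target, which is the sense in which extrapolation "in time" can be probed.

```python
# Hypothetical sketch, not the paper's method verbatim: train a network to
# perform one rotation step in pixel space, then apply it iteratively.
import math
import random

import torch
import torch.nn as nn
import torch.nn.functional as F

STEP_DEG = 15.0  # assumed size of a single rotation step (hypothetical)
IMG = 32         # image side length

def rotate(x, degrees):
    """Ground-truth rotation of a batch (N, 1, H, W) via an affine grid."""
    a = math.radians(degrees)
    theta = torch.tensor([[math.cos(a), -math.sin(a), 0.0],
                          [math.sin(a),  math.cos(a), 0.0]])
    theta = theta.unsqueeze(0).expand(x.size(0), -1, -1)
    grid = F.affine_grid(theta, list(x.size()), align_corners=False)
    return F.grid_sample(x, grid, align_corners=False)

def make_batch(n=64):
    """Random rectangles: a stand-in for a diverse training-shape set."""
    x = torch.zeros(n, 1, IMG, IMG)
    for i in range(n):
        h, w = random.randint(4, 12), random.randint(4, 12)
        r, c = random.randint(4, IMG - h - 4), random.randint(4, IMG - w - 4)
        x[i, 0, r:r + h, c:c + w] = 1.0
    return x

class StepRotator(nn.Module):
    """Conv net mapping an image to the same image rotated by one step."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 5, padding=2), nn.ReLU(),
            nn.Conv2d(32, 32, 5, padding=2), nn.ReLU(),
            nn.Conv2d(32, 1, 5, padding=2), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.net(x)

model = StepRotator()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(200):  # short demo run
    x = make_batch()
    loss = F.mse_loss(model(x), rotate(x, STEP_DEG))
    opt.zero_grad()
    loss.backward()
    opt.step()

# Iterative application at test time: six learned steps should approximate
# a 90-degree rotation, beyond the single-step training target.
y = make_batch(8)
for _ in range(6):
    y = model(y)
```

The salient design choice is that the network is only ever supervised on a single rotation step; any accuracy after several composed steps at test time is extrapolation rather than interpolation.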

Related research

- HyperNets and their application to learning spatial transformations (07/12/2018)
  In this paper we propose a conceptual framework for higher-order artific...

- Learning Continuous Rotation Canonicalization with Radial Beam Sampling (06/21/2022)
  Nearly all state of the art vision models are sensitive to image rotatio...

- Canonical Factors for Hybrid Neural Fields (08/29/2023)
  Factored feature volumes offer a simple way to build more compact, effic...

- A General Homogeneous Matrix Formulation to 3D Rotation Geometric Transformations (04/24/2014)
  We present algebraic projective geometry definitions of 3D rotations so ...

- 3D Equivariant Graph Implicit Functions (03/31/2022)
  In recent years, neural implicit representations have made remarkable pr...

- Co-Attentive Equivariant Neural Networks: Focusing Equivariance On Transformations Co-Occurring In Data (11/18/2019)
  Equivariance is a nice property to have as it produces much more paramet...

- Learning Spatially Structured Image Transformations Using Planar Neural Networks (12/03/2019)
  Learning image transformations is essential to the idea of mental simula...
