
FC2T2: The Fast Continuous Convolutional Taylor Transform with Applications in Vision and Graphics

by Henning Lange, et al.

Series expansions have been a cornerstone of applied mathematics and engineering for centuries. In this paper, we revisit the Taylor series expansion from a modern Machine Learning perspective. Specifically, we introduce the Fast Continuous Convolutional Taylor Transform (FC2T2), a variant of the Fast Multipole Method (FMM), that allows for the efficient approximation of low-dimensional convolutional operators in continuous space. We build upon the FMM, an approximate algorithm that reduces the computational complexity of N-body problems from O(NM) to O(N+M) and finds application in, e.g., particle simulations. As an intermediate step, the FMM produces a series expansion for every cell on a grid, and we introduce algorithms that act directly on this representation. These algorithms compute, analytically but approximately, the quantities required for the forward and backward passes of the backpropagation algorithm, and can therefore be employed as (implicit) layers in Neural Networks. Specifically, we introduce a root-implicit layer that outputs surface normals and object distances, as well as an integral-implicit layer that outputs a rendering of a radiance field given a 3D pose. In the context of Machine Learning, N and M can be understood as the number of model parameters and model evaluations, respectively. This entails that, for applications requiring repeated function evaluations, which are prevalent in Computer Vision and Graphics, the techniques introduced in this paper, unlike regular Neural Networks, scale gracefully with the number of parameters. For some applications, this results in a 200x reduction in FLOPs compared to state-of-the-art approaches at a reasonable or non-existent loss in accuracy.
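The core trick behind the FMM-style complexity reduction the abstract describes can be illustrated in a few lines. The sketch below is a hypothetical 1D example, not the paper's code: it approximates a weighted sum of Gaussian kernels (a continuous convolution evaluated at M targets from N sources) with a second-order Taylor expansion about a single cell center. The direct sum costs O(NM); the expansion aggregates the N sources into a handful of moments once, after which each of the M target evaluations is O(1), giving the O(N+M) behavior. All names and the choice of kernel are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
N, M = 1000, 500
x = rng.uniform(0.45, 0.55, N)      # sources clustered inside one grid cell
w = rng.uniform(size=N)             # source weights (model "parameters")
y = rng.uniform(2.0, 3.0, M)        # targets well separated from the cell

# Gaussian kernel and its first two derivatives.
K = lambda r: np.exp(-r**2)
dK = lambda r: -2.0 * r * np.exp(-r**2)
d2K = lambda r: (4.0 * r**2 - 2.0) * np.exp(-r**2)

# Direct O(N*M) evaluation: f(y_j) = sum_i w_i * K(y_j - x_i).
direct = (w[None, :] * K(y[:, None] - x[None, :])).sum(axis=1)

# Taylor expansion about the cell center c. With d_i = x_i - c and
# r_j = y_j - c:  K(r_j - d_i) ≈ K(r_j) - d_i K'(r_j) + (d_i^2 / 2) K''(r_j),
# so the sum collapses onto three moments, computed once in O(N).
c = x.mean()
m0 = w.sum()
m1 = (w * (x - c)).sum()
m2 = (w * (x - c) ** 2).sum()

r = y - c                           # each target now costs O(1)
approx = m0 * K(r) - m1 * dK(r) + 0.5 * m2 * d2K(r)

print(np.max(np.abs(direct - approx)))  # truncation error of the expansion
```

The error is controlled by the cell width relative to the source-target separation; the FMM's grid hierarchy keeps this ratio small everywhere, which is what makes the approximation uniformly accurate.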



