How Powerful Are Randomly Initialized Pointcloud Set Functions?

03/11/2020
by Aditya Sanghi, et al.

We study random embeddings produced by untrained neural set functions and show that they are powerful representations: they capture input features well for downstream tasks such as classification and are often linearly separable. Surprisingly, random set functions can often reach accuracy close to, or even better than, that of fully trained models. We quantitatively and qualitatively investigate the factors that affect the representational power of such embeddings.
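To make the setup concrete, the sketch below shows one plausible instantiation of a randomly initialized set function: a DeepSets/PointNet-style shared per-point MLP followed by max pooling, with weights frozen at initialization, and a downstream linear probe on the resulting embeddings. The architecture, the dimensions, and the name `RandomSetFunction` are illustrative assumptions, not the paper's exact configuration.

```python
# Minimal sketch of a randomly initialized pointcloud set function.
# Assumption: a DeepSets/PointNet-style encoder; the abstract does not
# specify the paper's actual architecture or hyperparameters.
import torch
import torch.nn as nn

class RandomSetFunction(nn.Module):
    """Untrained set function: shared per-point MLP + max pooling.

    Weights stay at their random initialization and are never updated,
    so the embedding acts as a fixed random feature map over point sets.
    """
    def __init__(self, in_dim=3, hidden=256, embed_dim=1024):
        super().__init__()
        self.phi = nn.Sequential(              # per-point feature extractor
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, embed_dim),
        )
        for p in self.parameters():            # freeze: random, never trained
            p.requires_grad_(False)

    def forward(self, points):
        # points: (batch, num_points, in_dim)
        per_point = self.phi(points)           # (batch, num_points, embed_dim)
        return per_point.max(dim=1).values     # permutation-invariant pooling

# Usage: embed point clouds, then fit only a linear classifier on top.
torch.manual_seed(0)
encoder = RandomSetFunction()
clouds = torch.randn(32, 1024, 3)              # stand-in for real point clouds
embeddings = encoder(clouds)                   # (32, 1024) fixed random features
# Training a single linear layer (or logistic regression) on `embeddings`
# is one way to test the abstract's linear-separability claim.
```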

