Siamese Encoding and Alignment by Multiscale Learning with Self-Supervision

04/04/2019
by Eric Mitchell, et al.

We propose a method of aligning a source image to a target image, where the transform is specified by a dense vector field. The two images are encoded as feature hierarchies by siamese convolutional nets. Then a hierarchy of aligner modules computes the transform in a coarse-to-fine recursion. Each module receives as input the transform that was computed by the module at the level above, aligns the source and target encodings at the same level of the hierarchy, and then computes an improved approximation to the transform using a convolutional net. The entire architecture of encoder and aligner nets is trained in a self-supervised manner to minimize the squared error between source and target remaining after alignment. We show that siamese encoding enables more accurate alignment than the image pyramids of SPyNet, a previous deep learning approach to coarse-to-fine alignment. Furthermore, self-supervision applies even without target values for the transform, unlike the strongly supervised SPyNet. We also show that our approach outperforms one-shot approaches to alignment, because their fine pathways may fail to contribute to alignment accuracy when displacements are large. As shown by previous one-shot approaches, good results from self-supervised learning require that the loss function additionally penalize non-smooth transforms. We demonstrate that "masking out" the penalty function near discontinuities leads to correct recovery of non-smooth transforms. Our claims are supported by empirical comparisons using images from serial section electron microscopy of brain tissue.
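To make the abstract concrete, below is a minimal, self-contained sketch of the ideas it describes: a shared (siamese) encoder producing a feature hierarchy, a per-level aligner module that refines the upsampled transform from the coarser level, and a self-supervised loss combining post-alignment squared error with a smoothness penalty that is "masked out" near known discontinuities. The use of PyTorch, the module names, channel widths, and number of levels are illustrative assumptions, not the authors' released architecture.

```python
# Illustrative sketch of coarse-to-fine siamese alignment with a
# self-supervised, masked-smoothness loss. Hyperparameters and module
# structure are assumptions for exposition only.
import torch
import torch.nn as nn
import torch.nn.functional as F


def warp(image, field):
    """Resample `image` with a dense displacement field of shape (N, H, W, 2),
    expressed in the normalized [-1, 1] coordinates used by grid_sample."""
    n, _, h, w = image.shape
    ys, xs = torch.meshgrid(
        torch.linspace(-1, 1, h, device=image.device),
        torch.linspace(-1, 1, w, device=image.device),
        indexing="ij",
    )
    identity = torch.stack([xs, ys], dim=-1).expand(n, -1, -1, -1)
    return F.grid_sample(image, identity + field, align_corners=True)


class Encoder(nn.Module):
    """Siamese encoder: the same weights embed source and target images into a
    feature hierarchy (one tensor per resolution level, fine to coarse)."""

    def __init__(self, levels=4, width=16):
        super().__init__()
        self.blocks = nn.ModuleList()
        in_ch = 1  # assuming single-channel EM images
        for _ in range(levels):
            self.blocks.append(nn.Sequential(
                nn.Conv2d(in_ch, width, 3, padding=1), nn.ReLU(),
                nn.Conv2d(width, width, 3, padding=1), nn.ReLU(),
            ))
            in_ch = width

    def forward(self, x):
        features = []
        for block in self.blocks:
            x = block(x)
            features.append(x)
            x = F.avg_pool2d(x, 2)  # move to the next (coarser) level
        return features


class Aligner(nn.Module):
    """One aligner module: pre-aligns the source encoding with the coarser
    transform, then predicts a residual update to the displacement field."""

    def __init__(self, width=16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2 * width, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 2, 3, padding=1),
        )

    def forward(self, src_feat, tgt_feat, coarse_field):
        warped_src = warp(src_feat, coarse_field)
        residual = self.net(torch.cat([warped_src, tgt_feat], dim=1))
        return coarse_field + residual.permute(0, 2, 3, 1)


def align(encoder, aligners, source, target):
    """Coarse-to-fine recursion over the source and target feature hierarchies."""
    src_feats, tgt_feats = encoder(source), encoder(target)
    n, _, h, w = src_feats[-1].shape
    field = torch.zeros(n, h, w, 2, device=source.device)  # coarsest level
    for level in reversed(range(len(aligners))):
        sf, tf = src_feats[level], tgt_feats[level]
        # Normalized coordinates are resolution-independent, so the field can
        # be upsampled without rescaling its values.
        field = F.interpolate(field.permute(0, 3, 1, 2), size=sf.shape[-2:],
                              mode="bilinear", align_corners=True).permute(0, 2, 3, 1)
        field = aligners[level](sf, tf, field)
    return field


def self_supervised_loss(source, target, field, mask, smooth_weight=0.1):
    """Squared error after alignment plus a smoothness penalty that is
    masked out (mask = 0) near discontinuities in the transform."""
    mse = ((warp(source, field) - target) ** 2).mean()
    dy = (field[:, 1:, :, :] - field[:, :-1, :, :]) ** 2
    dx = (field[:, :, 1:, :] - field[:, :, :-1, :]) ** 2
    smooth = (dy * mask[:, 1:, :, None]).mean() + (dx * mask[:, :, 1:, None]).mean()
    return mse + smooth_weight * smooth
```

The masking in the loss is what lets training recover non-smooth transforms: the smoothness penalty is simply not applied where the mask is zero, so genuine discontinuities are not penalized away.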
