What Goes Where: Predicting Object Distributions from Above

08/02/2018
by   Connor Greenwell, et al.

In this work, we propose a cross-view learning approach, in which images captured from a ground-level view are used as weakly supervised annotations for interpreting overhead imagery. The outcome is a convolutional neural network for overhead imagery that is capable of predicting the type and count of objects that are likely to be seen from a ground-level perspective. We demonstrate our approach on a large dataset of geotagged ground-level and overhead imagery and find that our network captures semantically meaningful features, despite being trained without manual annotations.
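The core idea can be illustrated with a toy sketch: object counts observed at ground level act as soft labels for a model over overhead features. Everything below is an illustrative assumption, not the paper's method; a simple linear-softmax model with synthetic data stands in for the CNN over overhead imagery, and the class list and dimensions are invented.

```python
import numpy as np

# Hypothetical sketch of cross-view weak supervision: ground-level
# detections supply a per-location object histogram, which serves as
# the soft label for a model over overhead features. The linear model,
# class count, and data here are all illustrative assumptions.

rng = np.random.default_rng(0)

NUM_CLASSES = 4   # e.g. car, person, tree, building (assumed classes)
FEATURE_DIM = 8   # toy overhead feature dimension

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

# Toy "overhead features" and weak labels: normalized object counts
# that a ground-level detector might report at each geotagged location.
X = rng.normal(size=(64, FEATURE_DIM))
true_W = rng.normal(size=(FEATURE_DIM, NUM_CLASSES))
counts = rng.poisson(lam=softmax(X @ true_W) * 10) + 1e-9
labels = counts / counts.sum(axis=1, keepdims=True)  # soft targets

# Train a linear-softmax stand-in for the overhead CNN with
# cross-entropy against the weak label distributions.
W = np.zeros((FEATURE_DIM, NUM_CLASSES))
for _ in range(500):
    probs = softmax(X @ W)
    grad = X.T @ (probs - labels) / len(X)
    W -= 0.5 * grad

pred = softmax(X @ W)
# Cross-entropy of the trained model vs. a uniform-prediction baseline.
ce = -np.mean(np.sum(labels * np.log(pred + 1e-12), axis=1))
baseline = -np.mean(np.sum(labels * np.log(1.0 / NUM_CLASSES), axis=1))
```

The point of the sketch is only the supervision signal: no overhead pixel is ever labeled by hand; the loss compares the model's predicted object distribution to what was seen from the ground at the same location.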
