Joint Dimensionality Reduction for Two Feature Vectors

02/13/2016
by Yanjun Li, et al.

Many machine learning problems, especially multi-modal learning problems, involve two distinct sets of features (e.g., image and text features in news story classification, or neuroimaging and neurocognitive data in cognitive science research). This paper addresses the joint dimensionality reduction of two feature vectors in supervised learning. In particular, we assume a discriminative model in which low-dimensional linear embeddings of the two feature vectors are sufficient statistics for predicting a dependent variable. We show that a simple algorithm involving singular value decomposition can accurately estimate the embeddings, without knowledge of the nonlinear link function (regressor or classifier), provided certain sample complexity requirements are met. The main results establish these sample complexities under multiple settings; the sample complexities for different link functions differ only by constant factors.
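As a rough illustration only (not necessarily the paper's exact estimator), the sketch below shows one natural SVD-based approach under an assumed setup: the label-weighted cross-moment matrix M = (1/n) Σᵢ zᵢ xᵢ yᵢᵀ concentrates around a rank-r matrix whose left and right singular subspaces span the two embeddings. The function name `estimate_embeddings` and the bilinear link used in the toy example are hypothetical choices for the demo.

```python
import numpy as np

def estimate_embeddings(X, Y, z, r):
    """Hypothetical sketch: estimate rank-r linear embeddings of two
    feature vectors via a truncated SVD of the label-weighted
    cross-moment matrix M = (1/n) * sum_i z_i * x_i y_i^T, whose top
    singular subspaces are assumed to align with the embeddings."""
    n = X.shape[0]
    # Label-weighted empirical cross-moment between the two feature sets.
    M = (X * z[:, None]).T @ Y / n
    # Top-r singular subspaces serve as the estimated embedding matrices.
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U[:, :r], Vt[:r, :].T

# Toy usage: synthetic data where z depends on x and y only through
# rank-r projections (a simple bilinear link, assumed for the demo).
rng = np.random.default_rng(0)
n, dx, dy, r = 5000, 30, 20, 2
A = rng.standard_normal((dx, r))   # ground-truth embedding for x
B = rng.standard_normal((dy, r))   # ground-truth embedding for y
X = rng.standard_normal((n, dx))
Y = rng.standard_normal((n, dy))
z = np.sum((X @ A) * (Y @ B), axis=1)  # z = x^T A B^T y
U_hat, V_hat = estimate_embeddings(X, Y, z, r)
# Check subspace recovery: project A onto the estimated subspace.
err = np.linalg.norm(A - U_hat @ (U_hat.T @ A)) / np.linalg.norm(A)
print(f"relative subspace error for A: {err:.3f}")
```

For independent standard Gaussian features and this bilinear link, E[z · x yᵀ] = A Bᵀ, so the truncated SVD recovers the embedding subspaces up to rotation as n grows; other link functions would change the constants but not the mechanism.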
