Temporally Consistent Motion Segmentation from RGB-D Video

08/16/2016
by   Peter Bertholet, et al.

We present a method for temporally consistent motion segmentation from RGB-D videos assuming a piecewise rigid motion model. We formulate global energies over entire RGB-D sequences in terms of the segmentation of each frame into a number of objects, and the rigid motion of each object through the sequence. We develop a novel initialization procedure that clusters feature tracks obtained from the RGB data by leveraging the depth information. We minimize the energy using a coordinate descent approach that includes novel techniques to assemble object motion hypotheses. A main benefit of our approach is that it enables us to fuse consistently labeled object segments from all RGB-D frames of an input sequence into individual 3D object reconstructions.
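The coordinate-descent idea described in the abstract, alternating between re-estimating each object's rigid motion and re-labeling points by motion residual, can be illustrated with a minimal sketch. This is not the paper's implementation: it assumes precomputed 3D point correspondences between two frames, fits each object's rigid transform with the Kabsch algorithm, and omits the paper's global energies, feature-track initialization, and motion-hypothesis assembly. All function names here (`fit_rigid`, `segment_motions`) are hypothetical.

```python
import numpy as np

def fit_rigid(src, dst):
    """Kabsch: least-squares rigid (R, t) mapping src -> dst (N x 3 each)."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    # Correct for a possible reflection in the SVD solution.
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cd - R @ cs
    return R, t

def segment_motions(src, dst, labels, n_objects, n_iters=10):
    """Coordinate descent over a piecewise-rigid model:
    alternately refit one rigid motion per object label,
    then reassign each point to the motion with the smallest residual."""
    for _ in range(n_iters):
        motions = []
        for k in range(n_objects):
            mask = labels == k
            # Kabsch needs at least 3 non-degenerate correspondences.
            if mask.sum() >= 3:
                motions.append(fit_rigid(src[mask], dst[mask]))
            else:
                motions.append((np.eye(3), np.zeros(3)))
        # Residual of each point under each object's motion hypothesis.
        residuals = np.stack(
            [np.linalg.norm(dst - (src @ R.T + t), axis=1) for R, t in motions],
            axis=1,
        )
        labels = residuals.argmin(axis=1)
    return labels
```

As the paper's initialization procedure suggests, this kind of alternation is sensitive to its starting labeling: from a mostly correct initial segmentation it converges quickly, while a poor initialization can collapse distinct motions into one hypothesis.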
