The primary goal in motion vision is to extract information about the motion and shape of an object in a scene that is encoded in the optic flow. While many solutions to this problem, both iterative and in closed form, have been proposed, practitioners still view the problem as unsolved, since these methods, for the most part, cannot deal with some important aspects of realistic scenes. Among these are complex unsegmented scenes, nonsmooth objects, and general motion of the camera. In addition, the performance of many methods degrades ungracefully as the quality of the data deteriorates. Here, we will derive a closed-form solution for motion estimation based on the first-order information from two image regions with distinct flow "structures". A unique solution is guaranteed when these correspond to two surface patches with different normal vectors. Given an image sequence, we will show how the image may be segmented into regions with the necessary properties, how optic flow is computed for these regions, and how the motion parameters are calculated. The method can be applied to arbitrary scenes and any camera motion. We will show theoretically why the method is more robust than other proposed techniques that require knowledge of the full flow, or of flow information up to second order. Experimental results are presented to support the theoretical derivations.
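To make the notion of "first-order information" concrete, the sketch below fits a first-order (affine) model to the optic flow sampled inside a single image region by ordinary least squares. This is a generic illustration of first-order flow structure, not the paper's motion-recovery algorithm; the function name, the synthetic data, and the least-squares formulation are assumptions for the example.

```python
import numpy as np

def fit_affine_flow(x, y, u, v):
    """Least-squares fit of a first-order flow model within one region:
    u(x, y) ~ a0 + a1*x + a2*y,  v(x, y) ~ b0 + b1*x + b2*y.
    (Illustrative only; not the closed-form method of the paper.)"""
    A = np.column_stack([np.ones_like(x), x, y])  # design matrix [1, x, y]
    a, *_ = np.linalg.lstsq(A, u, rcond=None)     # coefficients for u
    b, *_ = np.linalg.lstsq(A, v, rcond=None)     # coefficients for v
    return a, b

# Synthetic patch: sample points with a known affine flow plus small noise.
rng = np.random.default_rng(0)
x = rng.uniform(-1.0, 1.0, 200)
y = rng.uniform(-1.0, 1.0, 200)
true_a = np.array([0.5, -0.2, 0.1])
true_b = np.array([-0.3, 0.4, 0.25])
u = true_a[0] + true_a[1] * x + true_a[2] * y + 0.01 * rng.standard_normal(200)
v = true_b[0] + true_b[1] * x + true_b[2] * y + 0.01 * rng.standard_normal(200)

a, b = fit_affine_flow(x, y, u, v)
print(np.round(a, 2), np.round(b, 2))  # recovered first-order parameters
```

The six recovered coefficients per region summarize the flow's first-order structure; the abstract's method combines such information from two regions whose underlying surface patches have different normals to obtain a unique motion estimate.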
ASJC Scopus subject areas
- Control and Systems Engineering
- Computer Vision and Pattern Recognition
- Artificial Intelligence