Browsing by Subject "3-D motion and structure estimation"
Now showing 1 - 2 of 2
Item Open Access
3-D motion estimation of rigid objects for video coding applications using an improved iterative version of the E-matrix method (Institute of Electrical and Electronics Engineers, 1998-02) Alatan, A. A.; Onural, L.
As an alternative to current two-dimensional (2-D) motion models, a robust three-dimensional (3-D) motion estimation method is proposed for use in object-based video coding applications. Since the popular E-matrix method is well known for its susceptibility to input errors, a performance indicator that tests the validity of the estimated 3-D motion parameters both explicitly and implicitly is defined. This indicator is used within the RANSAC method to obtain a robust set of 2-D motion correspondences, which leads to better 3-D motion parameters for each object. The experimental results support the superiority of the proposed method over direct application of the E-matrix method. (A sketch of the RANSAC/essential-matrix idea appears after the listing.)

Item Open Access
Three-dimensional facial motion and structure estimation in video coding (1994) Bozdağı, Gözde
We propose a novel formulation in which 3-D global and local motion estimation and the adaptation of a generic wire-frame model to a particular speaker are considered simultaneously within an optical-flow-based framework that includes the photometric effects of the motion. We use a flexible wire-frame model whose local structure is characterized by the normal vectors of the patches, which are related to the coordinates of the nodes. Geometric constraints that describe the propagation of the movement of the nodes are introduced and then efficiently utilized to reduce the number of independent structure parameters. A stochastic relaxation algorithm is used to determine optimum global motion estimates and the parameters describing the structure of the wire-frame model. For the initialization of the motion and structure parameters, a modified feature-based algorithm is used, whose performance has also been compared with existing methods. Results with both simulated and real facial image sequences are provided. (A sketch of stochastic relaxation over motion/structure parameters also appears after the listing.)
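
The first item combines RANSAC with essential-matrix (E-matrix) estimation so that outlier 2-D correspondences are rejected before 3-D motion is recovered. The following is a minimal sketch of that general idea using OpenCV's findEssentialMat and recoverPose; it is not the authors' implementation, and the paper's iterative refinement and explicit/implicit validity indicator are not reproduced. The camera matrix K and the correspondence arrays pts1/pts2 are assumed inputs; the synthetic scene in the main block exists only to make the script runnable.

```python
# Hedged sketch: RANSAC-based essential-matrix estimation and pose recovery.
# Not the method from the paper; K, pts1, pts2 are assumed inputs.
import numpy as np
import cv2


def estimate_3d_motion(pts1, pts2, K):
    """Estimate rotation R and translation direction t from 2-D correspondences.

    pts1, pts2 : (N, 2) float arrays of corresponding image points.
    K          : (3, 3) camera intrinsic matrix (assumed known).
    """
    # RANSAC rejects outlier correspondences while fitting the essential matrix.
    E, inlier_mask = cv2.findEssentialMat(
        pts1, pts2, K, method=cv2.RANSAC, prob=0.999, threshold=1.0
    )
    # Decompose E into a rotation and a translation direction, using the inliers.
    _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=inlier_mask)
    return R, t, inlier_mask


if __name__ == "__main__":
    # Synthetic check: project random 3-D points through two slightly different poses.
    K = np.array([[800.0, 0.0, 320.0],
                  [0.0, 800.0, 240.0],
                  [0.0, 0.0, 1.0]])
    rng = np.random.default_rng(0)
    X = rng.uniform([-1.0, -1.0, 4.0], [1.0, 1.0, 8.0], size=(60, 3))
    rvec = np.array([0.0, 0.05, 0.0])   # small rotation between frames
    tvec = np.array([0.1, 0.0, 0.02])   # small translation between frames
    pts1, _ = cv2.projectPoints(X, np.zeros(3), np.zeros(3), K, None)
    pts2, _ = cv2.projectPoints(X, rvec, tvec, K, None)
    R, t, mask = estimate_3d_motion(pts1.reshape(-1, 2), pts2.reshape(-1, 2), K)
    print("inliers:", int(mask.sum()), "\nR:\n", R, "\nt:", t.ravel())
```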
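The second item's abstract describes a stochastic relaxation over global motion and wire-frame structure parameters driven by an optical-flow/photometric cost. The sketch below shows only generic relaxation machinery in a simulated-annealing style; the energy() function is a hypothetical stand-in, since the paper's actual cost (the optical flow constraint with photometric effects on a wire-frame model) is not reproduced here.

```python
# Hedged sketch: stochastic relaxation (simulated-annealing style) over a
# parameter vector. The energy() below is a placeholder, not the paper's cost.
import numpy as np


def energy(params, data):
    """Hypothetical residual; stands in for the real flow/photometric cost."""
    predicted = data["model_matrix"] @ params   # stand-in for the model prediction
    return float(np.sum((predicted - data["observed"]) ** 2))


def stochastic_relaxation(params0, data, iters=5000, step=0.05,
                          temp0=1.0, cooling=0.999, seed=0):
    """Perturb parameters; always accept downhill moves, accept uphill moves
    with a temperature-dependent probability, and cool the temperature."""
    rng = np.random.default_rng(seed)
    params = params0.copy()
    best = params.copy()
    e = e_best = energy(params, data)
    temp = temp0
    for _ in range(iters):
        candidate = params + rng.normal(0.0, step, size=params.shape)
        e_new = energy(candidate, data)
        if e_new < e or rng.random() < np.exp((e - e_new) / max(temp, 1e-12)):
            params, e = candidate, e_new
            if e < e_best:
                best, e_best = params.copy(), e
        temp *= cooling
    return best, e_best


if __name__ == "__main__":
    # Toy linear problem standing in for the real motion/structure estimation.
    rng = np.random.default_rng(1)
    A = rng.normal(size=(20, 6))            # 6 unknowns, e.g. 3 rotation + 3 translation
    true_params = rng.normal(size=6)
    data = {"model_matrix": A, "observed": A @ true_params}
    est, err = stochastic_relaxation(np.zeros(6), data)
    print("final residual:", err)
```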