Two 3D ear recognition systems are explored, based on structure from motion (SFM) and shape from shading (SFS), respectively. The ear region is segmented in each frame of the video sequence by interpolating the ridges and ravines detected in that frame. In the SFM system, salient features are tracked across the video sequence and reconstructed in 3D using a factorization method; reconstructed points that fall within the valid ear region are stored as the ear model. The dataset consists of video sequences for 48 subjects. Each test model is aligned to the database models by searching for the geometric transformation that minimizes the partial Hausdorff distance. In the SFS system, the 3D ear structure is recovered from the reflectance and illumination properties of the scene, and shape matching is performed with the iterative closest point (ICP) algorithm. Based on our results, we conclude that both SFM and SFS are viable approaches for 3D ear recognition from video sequences.
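The matching step for the SFM system can be illustrated with a minimal sketch of the directed partial Hausdorff distance between two point sets. This is not the paper's implementation; the `frac` parameter (the rank quantile used instead of the classic maximum) and the brute-force nearest-neighbour search are illustrative assumptions.

```python
import math

def partial_hausdorff(A, B, frac=0.75):
    """Directed partial Hausdorff distance from point set A to point set B.

    Instead of the classic Hausdorff maximum over nearest-neighbour
    distances, take the frac-quantile of the ranked distances, which makes
    the measure robust to outliers and partial occlusion. `frac` here is an
    illustrative choice, not a value from the paper.
    """
    # Nearest-neighbour distance from each point of A to the set B
    # (brute force; a k-d tree would be used for larger models).
    dists = sorted(min(math.dist(a, b) for b in B) for a in A)
    # Index of the frac-ranked distance, clamped to a valid position.
    k = max(0, min(len(dists) - 1, int(frac * len(dists)) - 1))
    return dists[k]
```

With `frac=1.0` this reduces to the ordinary directed Hausdorff distance; smaller values discard the largest residuals, so a test model whose outline only partially overlaps a database model can still score a small distance.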