In this paper, we present an improved Active Shape Model (ASM) for facial feature extraction. The original ASM method developed by Cootes et al. depends strongly on both the initialization and the representation of the local structure of the facial features in the image. We use color information to improve the ASM approach in two ways: color cues localize the centers of the mouth and the eyes to guide the initialization step, and the local structure of the feature points is modeled in the RGB color space. In addition, we apply a 2D affine transformation to align facial features that are perturbed by head pose variations; this transformation compensates for the combined effects of head pose changes and the projection of the 3D face onto the 2D image plane. Experiments on a face database of 50 subjects show that our approach outperforms the standard ASM in facial feature extraction.
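The alignment step described above can be sketched as a least-squares fit of a 2D affine transform between two landmark sets. The following is a minimal illustration, not the paper's implementation: the function names and the synthetic landmark data are our own, and the example assumes only that corresponding feature points are available in both the reference and the pose-perturbed configuration.

```python
import numpy as np

def fit_affine_2d(src, dst):
    """Least-squares 2x3 affine transform mapping src (N,2) onto dst (N,2)."""
    n = src.shape[0]
    # Design matrix [x y 1] so that dst ~ [x y 1] @ params
    X = np.hstack([src, np.ones((n, 1))])
    params, *_ = np.linalg.lstsq(X, dst, rcond=None)
    return params.T  # 2x3 matrix [A | t]

def apply_affine_2d(points, M):
    """Apply a 2x3 affine matrix M to (N,2) points."""
    return points @ M[:, :2].T + M[:, 2]

# Synthetic reference landmarks and a rotated, translated copy
# (a stand-in for landmarks perturbed by a head pose change)
ref = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
theta = np.deg2rad(15)
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
perturbed = ref @ R.T + np.array([0.3, -0.2])

# Estimate the affine map that undoes the perturbation, then apply it
M = fit_affine_2d(perturbed, ref)
aligned = apply_affine_2d(perturbed, M)
print(np.allclose(aligned, ref, atol=1e-8))  # prints True
```

Because a rotation plus translation is itself affine, the fit here is exact up to numerical error; with real landmarks the least-squares solution instead gives the best affine approximation to the pose-induced distortion.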