Intensity measurement of spontaneous facial actions: Evaluation of different image representations

Nazanin Zaker, Mohammad H. Mahoor, Whitney I. Mattson, Daniel S. Messinger, Jeffrey F. Cohn

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

10 Citations (Scopus)

Abstract

Intensity measurements of infant facial expressions are central to understanding emotion-mediated interactions and emotional development. We evaluate alternative image representations for automatic measurement of the intensity of spontaneous facial Action Units (AUs) related to infant emotion expression. Twelve infants were video-recorded during face-to-face interactions with their mothers. Facial features were tracked using active appearance models (AAMs) and registered to a canonical view. Three feature representations were compared: shape and grey scale texture, Histogram of Oriented Gradients (HOG), and Local Binary Pattern Histograms (LBPH). To reduce the high dimensionality of the appearance features (grey scale texture, HOG, and LBPH), a non-linear algorithm (Laplacian Eigenmaps) was used. For each representation, support vector machine classifiers were used to learn six gradations of AU intensity (0 to maximal). The target AUs were those central to positive and negative infant emotion. Shape plus grey scale texture performed best for AUs that involve non-rigid deformations of permanent facial features (e.g., AU 12 and AU 20). These findings suggest that AU intensity detection may be maximized by choosing the feature representation best suited to each specific AU.
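The abstract describes a multi-stage pipeline: AAM tracking and registration, appearance-feature extraction (grey scale texture, HOG, or LBPH), non-linear dimensionality reduction with Laplacian Eigenmaps, and six-level SVM intensity classification. The sketch below illustrates only the HOG/LBPH extraction and classification stages, assuming the face images have already been AAM-tracked and registered to the canonical view. It uses scikit-image and scikit-learn, and every parameter value (HOG cell sizes, LBP radius, pooling grid, embedding dimensionality, SVM kernel) is an illustrative assumption, not a setting reported in the paper.

```python
# Minimal sketch of the feature-extraction / AU-intensity classification pipeline
# described in the abstract. Library choices and all parameter values are
# assumptions for illustration; they are not the authors' implementation.
import numpy as np
from skimage.feature import hog, local_binary_pattern
from sklearn.manifold import SpectralEmbedding   # scikit-learn's Laplacian Eigenmaps
from sklearn.svm import SVC


def hog_features(face_img):
    """HOG descriptor for one AAM-registered greyscale face image."""
    return hog(face_img, orientations=9,
               pixels_per_cell=(8, 8), cells_per_block=(2, 2))


def lbph_features(face_img, n_points=8, radius=1, grid=(8, 8)):
    """Local Binary Pattern Histograms: uniform LBP codes pooled over a spatial grid."""
    lbp = local_binary_pattern(face_img, n_points, radius, method="uniform")
    n_bins = n_points + 2                      # number of distinct 'uniform' codes
    h, w = lbp.shape
    hists = []
    for i in range(grid[0]):
        for j in range(grid[1]):
            cell = lbp[i * h // grid[0]:(i + 1) * h // grid[0],
                       j * w // grid[1]:(j + 1) * w // grid[1]]
            hist, _ = np.histogram(cell, bins=n_bins, range=(0, n_bins), density=True)
            hists.append(hist)
    return np.concatenate(hists)


def train_au_intensity(faces, labels, n_components=30):
    """faces: (n_frames, H, W) registered greyscale images; labels: AU intensity 0-5."""
    X = np.stack([hog_features(f) for f in faces])          # or lbph_features(f)
    X_low = SpectralEmbedding(n_components=n_components).fit_transform(X)
    clf = SVC(kernel="rbf", C=1.0)                           # 6-class intensity SVM
    clf.fit(X_low, labels)
    return clf
```

Note that SpectralEmbedding exposes only fit_transform (no out-of-sample transform), so this sketch embeds the whole feature matrix at once; a subject-independent evaluation would need an embedding with an explicit out-of-sample mapping or a joint embedding of training and test frames.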

Original language: English
Title of host publication: 2012 IEEE International Conference on Development and Learning and Epigenetic Robotics, ICDL 2012
DOI: 10.1109/DevLrn.2012.6400846
ISBN: 9781467349635
State: Published - Dec 1 2012
Event: 2012 IEEE International Conference on Development and Learning and Epigenetic Robotics, ICDL 2012 - San Diego, CA, United States
Duration: Nov 7 2012 - Nov 9 2012


Fingerprint

Textures
Support vector machines
Classifiers

ASJC Scopus subject areas

  • Artificial Intelligence
  • Human-Computer Interaction

Cite this

Zaker, N., Mahoor, M. H., Mattson, W. I., Messinger, D. S., & Cohn, J. F. (2012). Intensity measurement of spontaneous facial actions: Evaluation of different image representations. In 2012 IEEE International Conference on Development and Learning and Epigenetic Robotics, ICDL 2012 [6400846] https://doi.org/10.1109/DevLrn.2012.6400846
