SP-ASDNet: CNN-LSTM based ASD classification model using observer scanpaths

Yudong Tao, Mei Ling Shyu

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

3 Scopus citations

Abstract

Autism spectrum disorder (ASD) is a common disorder that affects the language and even the behavior of affected individuals. Because the symptoms and severity of ASD vary widely, diagnosis is a challenging problem. Deep neural networks have been widely used and have achieved good performance in various visual data analysis applications. In this paper, we propose SP-ASDNet, which utilizes both convolutional neural networks (CNNs) and long short-term memory (LSTM) networks to classify whether an observer is typically developing (TD) or has ASD, based on the scanpath of that observer's gaze over a given image. The proposed SP-ASDNet was submitted to the 2019 Saliency4ASD grand challenge and achieved 74.22% accuracy on the validation set.
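The abstract only outlines the overall CNN-LSTM design; the concrete architecture is given in the full paper. Below is a minimal illustrative sketch, assuming the CNN encodes a fixation-centered image patch at each scanpath point and the LSTM aggregates the resulting sequence into a TD-vs-ASD prediction. All layer sizes, input encodings, and names (e.g., ScanpathClassifier) are assumptions for illustration, not the published SP-ASDNet.

```python
import torch
import torch.nn as nn

class ScanpathClassifier(nn.Module):
    """Illustrative CNN-LSTM scanpath classifier (not the published SP-ASDNet).

    Assumes each scanpath is a sequence of fixation-centered image patches;
    a small CNN encodes each patch, an LSTM aggregates the sequence, and a
    linear head outputs TD-vs-ASD logits. All sizes are arbitrary examples.
    """

    def __init__(self, patch_channels=3, feat_dim=128, hidden_dim=64, num_classes=2):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(patch_channels, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),   # -> (B*T, 64, 1, 1)
            nn.Flatten(),              # -> (B*T, 64)
            nn.Linear(64, feat_dim),
            nn.ReLU(),
        )
        self.lstm = nn.LSTM(feat_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, num_classes)

    def forward(self, patches):
        # patches: (B, T, C, H, W), i.e. T fixation-centered patches per scanpath
        b, t, c, h, w = patches.shape
        feats = self.cnn(patches.view(b * t, c, h, w)).view(b, t, -1)
        _, (h_n, _) = self.lstm(feats)   # h_n: (1, B, hidden_dim)
        return self.head(h_n[-1])        # (B, num_classes) logits

# Example: batch of 4 scanpaths, 8 fixations each, 32x32 RGB patches
model = ScanpathClassifier()
logits = model(torch.randn(4, 8, 3, 32, 32))
print(logits.shape)  # torch.Size([4, 2])
```

The per-fixation CNN encoder followed by a sequence model is a common pattern for gaze-sequence classification; how SP-ASDNet actually encodes fixations (patches, coordinates, durations, or saliency features) is specified in the paper itself.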

Original language: English (US)
Title of host publication: Proceedings - 2019 IEEE International Conference on Multimedia and Expo Workshops, ICMEW 2019
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 641-646
Number of pages: 6
ISBN (Electronic): 9781538692141
DOIs
State: Published - Jul 2019
Event: 2019 IEEE International Conference on Multimedia and Expo Workshops, ICMEW 2019 - Shanghai, China
Duration: Jul 8, 2019 to Jul 12, 2019

Publication series

Name: Proceedings - 2019 IEEE International Conference on Multimedia and Expo Workshops, ICMEW 2019

Conference

Conference: 2019 IEEE International Conference on Multimedia and Expo Workshops, ICMEW 2019
Country: China
City: Shanghai
Period: 7/8/19 to 7/12/19

Keywords

  • Autism Spectrum Disorder (ASD)
  • Deep Neural Network (DNN)
  • Long Short Term Memory (LSTM) Network
  • Saliency Prediction

ASJC Scopus subject areas

  • Media Technology
  • Computer Vision and Pattern Recognition
