Eye-Tracking Signals Based Affective Classification Employing Deep Gradient Convolutional Neural Networks.

Authors

Li, Y., Deng, J., Wu, Q., and Wang, Y.

DOI:

https://doi.org/10.9781/ijimai.2021.06.002

Keywords:

Affective Computing, Convolutional Neural Network (CNN), Eye Detection, Deep Gradient Convolutional Neural Network, Short Time Fourier Transform

Abstract

Utilizing biomedical signals as a basis for estimating human affective states is an essential issue in affective computing (AC). Recent trends in AC include in-depth research on affective signals, the combination of multi-modal cognitive and physiological indicators, the establishment of dynamic and complete databases, and the integration of innovative high-tech products. This research develops a deep gradient convolutional neural network (DGCNN) for classifying affect from eye-tracking signals. First, general signal-processing and pre-processing methods were applied, such as Kalman filtering, Hamming windowing, the short-time Fourier transform (STFT), and the fast Fourier transform (FFT). Second, the eye-movement and eye-tracking signals were converted into images, and a convolutional neural network-based training structure was applied; the experimental dataset was acquired with an eye-tracking device by presenting four affective stimuli (nervous, calm, happy, and sad) to 16 participants. The mini-batch size, loss function, learning rate, and gradient definitions of the training structure were also customized. Finally, the performance of the DGCNN was compared with a decision tree (DT), a Bayesian Gaussian model (BGM), and k-nearest neighbors (KNN) using the true positive rate (TPR) and false positive rate (FPR). The predictive classification matrix demonstrated the effectiveness of the proposed method for eye-movement and eye-tracking signals, achieving more than 87.2% accuracy. This research provides a feasible way to achieve more natural human-computer interaction through eye-movement and eye-tracking signals and has potential applications in the affective product design process.
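
As a minimal sketch of the pre-processing chain named above (Kalman smoothing, Hamming windowing, STFT/FFT, and conversion of the signal to an image), the following Python snippet turns one gaze-coordinate trace into a log-magnitude spectrogram image that could then be fed to a CNN. This is not the authors' implementation; the function names, the 120 Hz sampling rate, the 64-sample window, and the noise variances are illustrative assumptions.

```python
# Hedged sketch (not the paper's code): gaze trace -> smoothed signal -> Hamming-windowed
# STFT -> normalised spectrogram image, loosely following the abstract's pipeline.
import numpy as np
from scipy.signal import stft

def kalman_smooth(x, process_var=1e-4, meas_var=1e-2):
    """Simple 1-D constant-level Kalman filter, used here only as a smoother."""
    xhat = np.zeros_like(x, dtype=float)
    p, xhat[0] = 1.0, x[0]
    for k in range(1, len(x)):
        p_pred = p + process_var                              # predict
        gain = p_pred / (p_pred + meas_var)                   # Kalman gain
        xhat[k] = xhat[k - 1] + gain * (x[k] - xhat[k - 1])   # update
        p = (1.0 - gain) * p_pred
    return xhat

def gaze_to_spectrogram(gaze_x, fs=120, nperseg=64):
    """Return a log-magnitude STFT 'image' of a single gaze-coordinate channel."""
    smoothed = kalman_smooth(gaze_x)
    # Hamming-windowed short-time Fourier transform (an FFT per segment).
    _, _, Zxx = stft(smoothed, fs=fs, window='hamming',
                     nperseg=nperseg, noverlap=nperseg // 2)
    img = np.log1p(np.abs(Zxx))
    # Scale to [0, 255] so the result can be handled like a grayscale image by a CNN.
    img = (255 * (img - img.min()) / (np.ptp(img) + 1e-12)).astype(np.uint8)
    return img

if __name__ == "__main__":
    t = np.linspace(0, 10, 1200)                              # 10 s of synthetic gaze data
    gaze = np.sin(2 * np.pi * 1.5 * t) + 0.1 * np.random.randn(t.size)
    print(gaze_to_spectrogram(gaze).shape)                    # (frequency bins, time frames)
```

In practice one such image would be produced per channel and trial and stacked into the training set for the CNN classifier; the window length trades off time against frequency resolution.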

Published

2021-12-01

How to Cite

Li, Y., Deng, J., Wu, Q., and Wang, Y. (2021). Eye-Tracking Signals Based Affective Classification Employing Deep Gradient Convolutional Neural Networks. International Journal of Interactive Multimedia and Artificial Intelligence, 7(2), 34–43. https://doi.org/10.9781/ijimai.2021.06.002