Modeling of Performance Creative Evaluation Driven by Multimodal Affective Data.
DOI: https://doi.org/10.9781/ijimai.2021.08.005
Keywords: Performance Creative Evaluation, Multimodal Affective Feature, Multimedia Acquisition, Data-driven, Affective Acceptance
Abstract
Performance creative evaluation can be achieved through affective data, and the use of affective features to evaluate performance creativity is a new research trend. This paper proposes a “Performance Creative—Multimodal Affective (PC-MulAff)” model based on multimodal affective features for performance creative evaluation. Multimedia acquisition equipment is used to collect physiological data from the audience, including multimodal affective data such as facial expression, heart rate and eye movement. The affective features of the multimodal data are computed in combination with director annotations, and a “Performance Creative—Affective Acceptance (PC-Acc)” measure based on the multimodal affective features is defined to evaluate the quality of performance creative work. The PC-MulAff model is verified on different performance data sets, and the experimental results show that it achieves high evaluation quality across different performance forms. In the creative evaluation of dance performance, the accuracy of the model is 7.44% and 13.95% higher than that of single-text and single-video evaluation, respectively.
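To make the pipeline sketched in the abstract concrete, the following is a minimal illustrative sketch of how per-modality affective responses (facial expression, heart rate, eye movement) could be fused into a single acceptance score. The abstract does not give the paper's actual PC-Acc formulation, so the feature names, normalization, and weighted-average fusion below are assumptions for illustration only, not the authors' method.

```python
# Illustrative sketch only: the exact PC-Acc definition is not given in this
# abstract, so the feature fields and the weighted-average fusion are assumptions.
from dataclasses import dataclass

@dataclass
class AffectiveFeatures:
    """Per-audience-member affective responses, assumed normalized to [0, 1]."""
    facial_expression: float  # e.g. positive-expression intensity
    heart_rate: float         # e.g. arousal derived from heart-rate variation
    eye_movement: float       # e.g. gaze attention on the performance area

def pc_acc(features: list[AffectiveFeatures],
           weights: tuple[float, float, float] = (0.4, 0.3, 0.3)) -> float:
    """Hypothetical 'Performance Creative - Affective Acceptance' score:
    a weighted average over the three modalities, averaged over the audience."""
    if not features:
        raise ValueError("no audience data")
    w_face, w_hr, w_eye = weights
    total = sum(w_face * f.facial_expression
                + w_hr * f.heart_rate
                + w_eye * f.eye_movement
                for f in features)
    return total / len(features)

# Example: two audience members watching one performance segment
audience = [AffectiveFeatures(0.8, 0.6, 0.7), AffectiveFeatures(0.5, 0.4, 0.9)]
print(f"PC-Acc = {pc_acc(audience):.3f}")
```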