Aligning Figurative Paintings With Their Sources for Semantic Interpretation

Authors

  • Sinem Aslan (Ege University; Ca’ Foscari University of Venice)
  • Luc Steels (Barcelona Supercomputing Center)

DOI:

https://doi.org/10.9781/ijimai.2023.04.004

Keywords:

Artistic, Computer vision, Edge Detection, Figurative Art Analysis, Image Processing

Supporting Agencies

Experiments and preparation of the paper were partially funded by the EU Pathfinder project MUHAI through the Venice International University and by the EU Humane-AI.net coordination project. Additional funding for the interaction with Luc Tuymans came from the EU STARTS project through a scientist-in-residence grant to LS.

Abstract

This paper reports steps in probing the artistic methods of figurative painters through computational algorithms. We explore a comparative method that investigates the relation between the source of a painting, typically a photograph or an earlier painting, and the painting itself. A first crucial step in this process is to find the source and to crop, standardize and align it to the painting so that a comparison becomes possible. The next step is to apply different low-level algorithms to construct difference maps for color, edges, texture, brightness, etc. From this basis, various subsequent operations become possible to detect and compare features of the image, such as facial action units and the emotions they signify. This paper demonstrates a pipeline we have built and tested using paintings by the renowned contemporary painter Luc Tuymans. In this paper we focus particularly on the alignment process, on edge difference maps, and on the utility of the comparative method for bringing out the semantic significance of a painting.
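
The alignment and edge-comparison steps described above can be sketched with standard computer vision tooling. The following is a minimal illustration, not the authors' implementation: it assumes OpenCV and NumPy, matches ORB keypoints between the source photograph and the painting, fits a homography with RANSAC to warp the source into the painting's frame, and then differences Canny edge maps. File names and thresholds are placeholders.

```python
# Minimal sketch of source-to-painting alignment and an edge difference map.
# Not the authors' pipeline; assumes OpenCV + NumPy, placeholder file names.
import cv2
import numpy as np

painting = cv2.imread("painting.jpg", cv2.IMREAD_GRAYSCALE)
source = cv2.imread("source_photo.jpg", cv2.IMREAD_GRAYSCALE)

# 1. Detect and describe local keypoints in both images.
orb = cv2.ORB_create(nfeatures=5000)
kp_p, des_p = orb.detectAndCompute(painting, None)
kp_s, des_s = orb.detectAndCompute(source, None)

# 2. Match descriptors (Hamming distance suits ORB's binary descriptors).
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des_s, des_p), key=lambda m: m.distance)

# 3. Robustly estimate a homography; RANSAC discards outlier matches.
src_pts = np.float32([kp_s[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
dst_pts = np.float32([kp_p[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
H, _ = cv2.findHomography(src_pts, dst_pts, cv2.RANSAC, 5.0)

# 4. Warp the source into the painting's frame so pixels correspond.
h, w = painting.shape
aligned = cv2.warpPerspective(source, H, (w, h))

# 5. Edge difference map: contours the painter added, moved, or suppressed.
edges_p = cv2.Canny(painting, 100, 200)
edges_s = cv2.Canny(aligned, 100, 200)
diff_map = cv2.absdiff(edges_p, edges_s)
cv2.imwrite("edge_difference_map.png", diff_map)
```

Once source and painting share a common frame, the same recipe extends to the other difference maps mentioned above (color, texture, brightness) by swapping the edge detector for the corresponding low-level operator.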

Published

2025-03-01

How to Cite

Aslan, S. and Steels, L. (2025). Aligning Figurative Paintings With Their Sources for Semantic Interpretation. International Journal of Interactive Multimedia and Artificial Intelligence, 9(2), 49–58. https://doi.org/10.9781/ijimai.2023.04.004