
Effect of Action Units, Viewpoint and Immersion on Emotion Recognition Using Dynamic Virtual Faces

    https://doi.org/10.1142/S0129065723500533 · Cited by: 5 (Source: Crossref)

    Facial affect recognition is a critical skill in human interactions that is often impaired in psychiatric disorders. To address this challenge, tests have been developed to measure and train this skill. Recently, virtual human (VH) and virtual reality (VR) technologies have emerged as novel tools for this purpose. This study investigates the unique contributions of different factors in the communication and perception of emotions conveyed by VHs. Specifically, it examines the effects of the use of action units (AUs) in virtual faces, the positioning of the VH (frontal or mid-profile), and the level of immersion in the VR environment (desktop screen versus immersive VR). Thirty-six healthy subjects participated in each condition. Dynamic virtual faces (DVFs), VHs with facial animations, were used to represent the six basic emotions and the neutral expression. The results highlight the important role of the accurate implementation of AUs in virtual faces for emotion recognition. Furthermore, it is observed that frontal views outperform mid-profile views in both test conditions, while immersive VR shows a slight improvement in emotion recognition. This study provides novel insights into the influence of these factors on emotion perception and advances the understanding and application of these technologies for effective facial emotion recognition training.
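    The abstract stresses that accurate implementation of action units (AUs) in virtual faces is key to emotion recognition. As a rough illustration of what an AU-based encoding looks like, the sketch below maps the six basic emotions to the commonly cited Ekman–Friesen FACS prototype AUs. These AU sets are the standard textbook prototypes, simplified for illustration; the exact AU configurations used in the study's dynamic virtual faces are not given in this excerpt.

```python
# Simplified FACS prototype AUs for the six basic emotions.
# Standard Ekman-Friesen prototypes, illustrative only -- the study's
# exact AU implementation is not specified in this excerpt.
BASIC_EMOTION_AUS = {
    "happiness": [6, 12],                 # cheek raiser + lip corner puller
    "sadness":   [1, 4, 15],              # inner brow raiser + brow lowerer + lip corner depressor
    "surprise":  [1, 2, 5, 26],           # brow raisers + upper lid raiser + jaw drop
    "fear":      [1, 2, 4, 5, 7, 20, 26], # brow/lid actions + lip stretcher + jaw drop
    "anger":     [4, 5, 7, 23],           # brow lowerer + lid actions + lip tightener
    "disgust":   [9, 15, 16],             # nose wrinkler + lip corner/lower lip depressors
}


def aus_for(emotion: str) -> list[int]:
    """Return the prototype AU list for a basic emotion (hypothetical helper)."""
    return BASIC_EMOTION_AUS[emotion.lower()]
```

    In an animation pipeline, each AU number would drive one or more facial blendshapes; a "neutral" expression corresponds to the empty AU set.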
