
FACE-TO-FACE WITH A ROBOT: WHAT DO WE ACTUALLY TALK ABOUT?

    https://doi.org/10.1142/S0219843613500114

    While much of the state-of-the-art research in human–robot interaction (HRI) investigates task-oriented interaction, this paper explores what people talk about with a robot when the content of the conversation is not predefined. We used the robot head Furhat to study the conversational behavior of people who encounter a robot, without a predefined purpose, in the public setting of a robot exhibition at a science museum. Analysis of the conversations shows that a sophisticated robot provides an inviting atmosphere in which people engage in interaction, experiment, and challenge the robot's capabilities. Many visitors to the exhibition were willing to go beyond the guiding questions that were provided as a starting point. Among other things, they asked Furhat questions about the robot itself, such as how it would define a robot, or whether it plans to take over the world. People were also interested in the robot's feelings and likes, and they asked many personal questions; this is how Furhat ended up with its first marriage proposal. Visitors who talked to Furhat were asked to complete a questionnaire assessing the conversation, and the responses show that the interaction with Furhat was rated as a pleasant experience.
