Hideaki Takeda's Publication
- H. Takeda, N. Kobayashi, Y. Matsubara and T. Nishida: Towards Ubiquitous Human-Robot Interaction, in Working Notes for IJCAI-97 Workshop on Intelligent Multimodal Systems, pp. 1–8 (1997).
Multimodality in interaction tends to be regarded as the use of
different physical communication channels in face-to-face interaction. But
everyday communication is in fact more flexible, e.g., communicating with
others at a distance, with anyone who can reply, or with a group of people
at once. We categorize intimate, loose, and cooperative interaction as forms
of extended multimodal interaction. In this paper we show how these different
types of interaction are realized in an integrated system of robots, people,
and computers. Firstly, we introduce our
multi-agent architecture for modeling an environment that includes people,
robots, automated instruments, and computers. Each of them is modeled as an
agent, and all communication is realized in an inter-agent communication
language. Secondly, we describe how intimate interaction is realized. We use
gesture recognition, gesture generation, and speech generation for
interaction. Thirdly, we introduce a software agent called watcher to realize
loose interaction. The watcher continuously observes the environment to detect
whether someone is requesting interaction; it also applies gesture recognition
at a coarse level. Fourthly, we provide mediators for cooperative interaction. A
mediator is invoked each time interaction is requested in the environment. It
can gather and enroll necessary agents for interaction by planning and
sometimes by consulting other mediators.
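The watcher described above can be sketched as a simple filtering agent. This is a minimal illustration, not the paper's implementation: the `Message` format, the `request-interaction` performative, and the `Watcher` class are all hypothetical stand-ins for the unspecified inter-agent communication language.

```python
from dataclasses import dataclass, field

# Hypothetical message format for the inter-agent communication language;
# the actual language used in the paper is not reproduced here.
@dataclass
class Message:
    sender: str
    performative: str   # e.g. "request-interaction" (assumed name)
    content: dict = field(default_factory=dict)

class Watcher:
    """Software agent that watches the environment for interaction requests."""

    def __init__(self):
        self.pending = []

    def observe(self, messages):
        # Coarse-level detection: keep only messages that request interaction,
        # ignoring everything else in the environment.
        for m in messages:
            if m.performative == "request-interaction":
                self.pending.append(m)
        return list(self.pending)

watcher = Watcher()
msgs = [Message("person-1", "request-interaction", {"gesture": "wave"}),
        Message("robot-2", "report-status")]
found = watcher.observe(msgs)
```

Here `found` contains only the request from `person-1`; the status report is ignored, mirroring how the watcher triggers further interaction only when someone asks for it.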
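The mediator's role, gathering and enrolling the agents an interaction needs, can likewise be sketched as capability matching. The greedy matching below is an assumed simplification of the paper's planning; the registry contents and agent names are invented for illustration.

```python
class Mediator:
    """Invoked when an interaction is requested; enrolls the agents needed."""

    def __init__(self, registry):
        # registry maps an agent name to the set of capabilities it offers
        self.registry = registry

    def plan(self, required):
        # Greedy stand-in for planning: pick one agent per required capability.
        team = {}
        for capability in required:
            for agent, caps in self.registry.items():
                if capability in caps:
                    team[capability] = agent
                    break
            else:
                # Capability unavailable locally; the paper's mediator would
                # consult other mediators here.
                return None
        return team

registry = {"robot-1": {"gesture-generation", "speech-generation"},
            "camera-1": {"gesture-recognition"}}
mediator = Mediator(registry)
team = mediator.plan(["gesture-recognition", "speech-generation"])
```

The resulting `team` assigns `camera-1` to recognition and `robot-1` to speech generation; an unsatisfiable request returns `None`, the point at which consulting another mediator would take over.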
Hideaki Takeda (National Institute of Informatics)