VENISE Seminars (no. 1)
LIMSI-CNRS, Université Paris-Sud
Bât. 508, B.P. 133, 91403 Orsay cedex.
In the context of the LIMSI-CNRS cross-disciplinary action
VENISE (Virtualité et ENvironnement Immersif pour les
Sciences Expérimentales, i.e., Virtuality and Immersive
Environments for the Experimental Sciences), we are pleased
to announce that, for the first VENISE seminar:
Catherine PELACHAUD
University of Rome "La Sapienza"
http://www.dis.uniroma1.it/~pelachau/
cath@dis.uniroma1.it
will present her work on:
"Facial Expressions in Embodied Conversational Agents"
on 27 December 2001, from 2:00 pm to 4:00 pm,
in the Conference Room (ground floor) of LIMSI-CNRS, Orsay.
P. Bourdot & J. Mariani
ABSTRACT
Our goal is to develop a believable embodied agent able
to dialog with a user, and in particular an agent that
can combine facial expressions in a complex and subtle
way, just as a human does. We are applying the metaphor
of face-to-face communication to human-computer
interaction. Face-to-face conversation is a very complex
phenomenon, as it involves a huge number of factors: we
speak with our voice, but also with our hands, eyes, face,
and body. Our gestures modify, emphasize, or contradict
what we say in words. It is therefore important to
consider both verbal and nonverbal behaviors when
building an embodied agent. The agent must have the
capacity to decide which facial expressions to show and
which words to say, with which intonation. That is, the
agent should be able to plan not only what to communicate,
but also through which (verbal or nonverbal) signals, in
what combination, and with what synchronization. In this
talk, we will present our work on the creation of a
multimodal believable agent able to dialog with a user
and whose nonverbal behaviors reflect its affective
state. We will pay particular attention to the
representation of the "agent's mind" as well as to the
translation of the agent's cognitive state into facial
expressions.
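To make this concrete, here is a minimal sketch, in Python, of
how communicative functions might be looked up as candidate
facial signals. The function names, channels, and actions below
are invented placeholders for illustration, not the actual
taxonomy or mapping presented in the talk.

    # Hypothetical sketch: looking up facial signals for
    # communicative functions. All names below are illustrative
    # placeholders, not the mapping used in this work.

    FUNCTION_TO_SIGNALS = {
        "emphasis":    [("eyebrows", "raise"), ("head", "nod")],
        "turn_giving": [("gaze", "look_at_user")],
        "surprise":    [("eyebrows", "raise"), ("mouth", "open")],
    }

    def plan_signals(functions):
        """Collect candidate (channel, action) signals per function."""
        return {f: FUNCTION_TO_SIGNALS.get(f, []) for f in functions}

    print(plan_signals(["emphasis", "turn_giving"]))
    # {'emphasis': [('eyebrows', 'raise'), ('head', 'nod')],
    #  'turn_giving': [('gaze', 'look_at_user')]}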
We first review a taxonomy of communicative functions
that our agent is able to express nonverbally, but we
point out that, due to the complexity of communication,
different pieces of information can sometimes be conveyed
at once by different parts and actions of the agent's
face. We will present our method for assessing and
managing what happens, at the meaning and signal levels
of multimodal communicative behavior, when different
communicative functions have to be displayed at the same
time and necessarily have to make use of the same
expressive resources.
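As one illustration of this resource-conflict problem: if
"surprise" and "emphasis" must be shown simultaneously and both
require the eyebrows, the agent has to decide which function
gets the channel. The toy sketch below resolves such conflicts
with invented priorities; it is only an assumption-laden
illustration, not the method presented in the talk.

    # Toy conflict resolution when communicative functions compete
    # for the same expressive resource (channel). Signals and
    # priorities are invented for this example.

    PLANNED = {
        "emphasis": [("eyebrows", "raise"), ("head", "nod")],
        "surprise": [("eyebrows", "raise"), ("mouth", "open")],
    }
    PRIORITY = {"surprise": 2, "emphasis": 1}

    def resolve_conflicts(planned, priority):
        """Keep, per channel, the signal of the highest-priority function."""
        chosen = {}  # channel -> (priority, function, action)
        for function, signals in planned.items():
            for channel, action in signals:
                prio = priority.get(function, 0)
                if channel not in chosen or prio > chosen[channel][0]:
                    chosen[channel] = (prio, function, action)
        return {ch: (f, a) for ch, (_, f, a) in chosen.items()}

    print(resolve_conflicts(PLANNED, PRIORITY))
    # {'eyebrows': ('surprise', 'raise'), 'head': ('emphasis', 'nod'),
    #  'mouth': ('surprise', 'open')}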