EVA: expressive multipart virtual agent performing gestures and emotions
Izidor Mlakar, Matej Rojc, 2011, original scientific article
Abstract: Embodied Conversational Agents (ECAs) play an important role in the development of personalized and expressive human-machine interaction, allowing users to interact with a system over several communication channels, such as natural speech, facial expression, and different body gestures. This paper presents a novel approach to the generation of ECAs for multimodal interfaces, using the proprietary EVA framework. EVA's articulated 3D model is mesh-based and built on the multipart concept. Each of its 3D submodels (body parts) supports both bone-based and morph-target-based animation, in order to simulate natural human movement. Each body movement's structural characteristics can be described by the composite movement of one or more elementary units (bones and/or morphs), and its temporal characteristics by the durations of each of the movement's stages (expose, present, dissipate). EVA scripts provide a means of defining and fine-tuning body motion in the form of predefined gestures, or of complex behavioural events (provided by external behaviour-modelling sources). Since behavioural events can also be described as a combination of tuned predefined gestures and movements of elementary units, a small number of predefined gestures can form an unbounded set of gestures that the ECA can perform. The ECA EVA, as presented in this paper, provides both personalization of its behaviour (at the gesture level) and personalization of its appearance.
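The gesture model the abstract describes (composite movements built from elementary bone/morph units, with timed expose/present/dissipate stages) can be sketched as a small data structure. This is a minimal illustration, not the actual EVA framework API; all class and field names here are hypothetical.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class ElementaryUnit:
    """One animation primitive: a bone rotation or a morph-target weight (hypothetical)."""
    name: str      # e.g. "head" (bone) or "smile" (morph target)
    kind: str      # "bone" or "morph"
    target: float  # target rotation (degrees) or morph weight

@dataclass
class MovementStage:
    """One temporal stage of a movement, per the expose/present/dissipate scheme."""
    label: str         # "expose" | "present" | "dissipate"
    duration_ms: int

@dataclass
class Gesture:
    """A body movement: a composite of elementary units plus stage timings."""
    name: str
    units: List[ElementaryUnit]
    stages: List[MovementStage]

    def total_duration_ms(self) -> int:
        # Stages run in sequence, so the gesture length is their sum.
        return sum(s.duration_ms for s in self.stages)

# A simple "nod" assembled from one bone unit and the three stages.
nod = Gesture(
    name="nod",
    units=[ElementaryUnit("head", "bone", target=15.0)],
    stages=[
        MovementStage("expose", 200),
        MovementStage("present", 400),
        MovementStage("dissipate", 250),
    ],
)
print(nod.total_duration_ms())  # 850
```

Because gestures are plain compositions of units and stages, new behavioural events can be formed by combining and re-timing a small library of predefined gestures, which is the property the abstract highlights.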
Keywords: bone and morph based animation, distributive, expressive ECA, mesh based articulated model
Published: 01.06.2012