Open Access

Multispace Behavioral Model for Face-Based Affective Social Agents

EURASIP Journal on Image and Video Processing 2007, 2007:048757

DOI: 10.1155/2007/48757

Received: 26 April 2006

Accepted: 22 December 2006

Published: 7 March 2007

Abstract

This paper describes a behavioral model for affective social agents based on three independent but interacting parameter spaces: knowledge, personality, and mood. These spaces control a lower-level geometry space that provides parameters at the facial feature level. The personality and mood spaces draw on findings in behavioral psychology to relate perceived personality types and emotional states to facial actions and expressions, using two-dimensional models of personality and emotion. The knowledge space encapsulates the tasks to be performed and the decision-making process in a specially designed XML-based language. While the geometry space provides an MPEG-4-compatible set of parameters for low-level control, the behavioral extensions offered by the three higher-level spaces provide a flexible means of designing complicated personality types, facial expressions, and dynamic interactive scenarios.
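As an illustration of how the three spaces might feed the geometry layer, the following Python sketch combines a two-dimensional personality space, a two-dimensional mood space, and a simple rule-based knowledge space into low-level facial parameters named loosely after MPEG-4 FAPs. All class names, fields, and blending weights are illustrative assumptions made for exposition, not the authors' implementation or API.

# Hypothetical sketch: three behavioral spaces driving geometry-level parameters.
# Class names, fields, and weights are illustrative assumptions, not the paper's API.
from dataclasses import dataclass, field

@dataclass
class Personality:            # long-term traits on a 2D interpersonal plane
    dominance: float = 0.0    # -1 (submissive) .. +1 (dominant)
    affiliation: float = 0.0  # -1 (cold) .. +1 (warm)

@dataclass
class Mood:                   # medium-term affective state on a 2D circumplex
    valence: float = 0.0      # -1 (negative) .. +1 (positive)
    arousal: float = 0.0      # -1 (calm) .. +1 (excited)

@dataclass
class Knowledge:              # task/decision layer: maps events to intended actions
    rules: dict = field(default_factory=lambda: {"user_greets": "smile"})

def geometry_parameters(p: Personality, m: Mood, k: Knowledge, event: str) -> dict:
    """Blend the three spaces into low-level facial parameters
    (keys named loosely after MPEG-4 FAPs); the weights are placeholders."""
    action = k.rules.get(event, "idle")
    # Warmer personality and more positive mood express the chosen action more strongly.
    intensity = 0.5 * (m.valence + 1) / 2 + 0.5 * (p.affiliation + 1) / 2
    params = {"raise_l_cornerlip": 0.0, "raise_r_cornerlip": 0.0, "open_jaw": 0.0}
    if action == "smile":
        params["raise_l_cornerlip"] = intensity
        params["raise_r_cornerlip"] = intensity
    return params

if __name__ == "__main__":
    # Example: a warm, mildly happy agent reacting to a greeting.
    print(geometry_parameters(Personality(affiliation=0.8),
                              Mood(valence=0.6),
                              Knowledge(),
                              event="user_greets"))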


Authors’ Affiliations

(1) Carleton School of Information Technology, Carleton University
(2) School of Interactive Arts & Technology, Simon Fraser University

References

1. Jones C: Chuck Amuck: The Life and Times of an Animated Cartoonist. Farrar, Straus, and Giroux, New York, NY, USA; 1989.
2. Badler N, Reich BD, Webber BL: Towards personalities for animated agents with reactive and planning behaviors. In Creating Personalities for Synthetic Actors: Towards Autonomous Personality Agents. Edited by: Trappl R, Petta P. Springer, New York, NY, USA; 1997:43-57.
3. Bates J: The role of emotion in believable agents. Communications of the ACM 1994, 37(7):122-125. doi:10.1145/176789.176803
4. Loyall AB, Bates JB: Personality-rich believable agents that use language. Proceedings of the 1st International Conference on Autonomous Agents, February 1997, Marina del Rey, Calif, USA, 106-113.
5. Egges A, Kshirsagar S, Magnenat-Thalmann N: A model for personality and emotion simulation. Proceedings of the 7th International Conference on Knowledge-Based Intelligent Information & Engineering Systems (KES '03), September 2003, Oxford, UK, 453-461.
6. Ekman P, Friesen WV: Facial Action Coding System. Consulting Psychologists Press, San Francisco, Calif, USA; 1978.
7. Kshirsagar S, Magnenat-Thalmann N: A multilayer personality model. Proceedings of the 2nd International Symposium on Smart Graphics, June 2002, Hawthorne, NY, USA, 107-115.
8. Pelachaud C, Bilvi M: Computational model of believable conversational agents. In Communication in Multiagent Systems: Background, Current Trends and Future. Edited by: Huget M-P. Springer, New York, NY, USA; 2003:300-317.
9. Rousseau D, Hayes-Roth B: Interacting with personality-rich characters. Report KSL 97-06, Knowledge Systems Laboratory, Stanford University, Stanford, Calif, USA; 1997.
10. Arya A, DiPaola S, Jefferies L, Enns JT: Socially communicative characters for interactive applications. Proceedings of the 14th International Conference in Central Europe on Computer Graphics, Visualization and Computer Vision (WSCG '06), January-February 2006, Plzen-Bory, Czech Republic.
11. Wiggins JS, Trapnell P, Phillips N: Psychometric and geometric characteristics of the revised interpersonal adjective scales (IAS-R). Multivariate Behavioral Research 1988, 23(3):517-530.
12. Battista S, Casalino F, Lande C: MPEG-4: a multimedia standard for the third millennium—part 1. IEEE Multimedia 1999, 6(4):74-83. doi:10.1109/93.809236
13. Bulterman DCA: SMIL 2.0—part 1: overview, concepts, and structure. IEEE Multimedia 2001, 8(4):82-88. doi:10.1109/93.959106
14. Arafa Y, Kamyab K, Mamdani E, et al.: Two approaches to scripting character animation. Proceedings of the 1st International Conference on Autonomous Agents & Multi-Agent Systems, Workshop on Embodied Conversational Agents, July 2002, Bologna, Italy.
15. De Carolis B, Pelachaud C, Poggi I, Steedman M: APML, a markup language for believable behaviour generation. Proceedings of the 1st International Conference on Autonomous Agents & Multi-Agent Systems, Workshop on Embodied Conversational Agents, July 2002, Bologna, Italy.
16. Marriott A, Stallo J: VHML: uncertainties and problems. A discussion. Proceedings of the 1st International Conference on Autonomous Agents & Multi-Agent Systems, Workshop on Embodied Conversational Agents, July 2002, Bologna, Italy.
17. Prendinger H, Descamps S, Ishizuka M: Scripting affective communication with life-like characters in web-based interaction systems. Applied Artificial Intelligence 2002, 16(7-8):519-553. doi:10.1080/08839510290030381
18. Goldberg LR: An alternative "description of personality": the big-five factor structure. Journal of Personality and Social Psychology 1990, 59(6):1216-1229.
19. Watson D: Strangers' ratings of the five robust personality factors: evidence of a surprising convergence with self-report. Journal of Personality and Social Psychology 1989, 57(1):120-128.
20. Berry DS: Accuracy in social perception: contributions of facial and vocal information. Journal of Personality and Social Psychology 1991, 61(2):298-307.
21. Borkenau P, Mauer N, Riemann R, Spinath FM, Angleitner A: Thin slices of behavior as cues of personality and intelligence. Journal of Personality and Social Psychology 2004, 86(4):599-614.
22. Borkenau P, Liebler A: Trait inferences: sources of validity at zero acquaintance. Journal of Personality and Social Psychology 1992, 62(4):645-657.
23. Knutson B: Facial expressions of emotion influence interpersonal trait inferences. Journal of Nonverbal Behavior 1996, 20(3):165-181. doi:10.1007/BF02281954
24. Ekman P: Emotions Revealed. Consulting Psychologists Press, San Francisco, Calif, USA; 1978.
25. Funge J, Tu X, Terzopoulos D: Cognitive modeling: knowledge, reasoning and planning for intelligent characters. Proceedings of the 26th Annual Conference on Computer Graphics and Interactive Techniques (SIGGRAPH '99), August 1999, Los Angeles, Calif, USA, 29-38.
26. Cassell J, Pelachaud C, Badler N, et al.: Animated conversation: rule-based generation of facial expression, gesture and spoken intonation for multiple conversational agents. Proceedings of the 21st Annual Conference on Computer Graphics and Interactive Techniques (SIGGRAPH '94), July 1994, New York, NY, USA, 413-420.
27. Cassell J, Vilhjálmsson HH, Bickmore T: BEAT: the behaviour expression animation toolkit. Proceedings of the 28th Annual Conference on Computer Graphics and Interactive Techniques (SIGGRAPH '01), August 2001, Los Angeles, Calif, USA, 477-486.
28. King SA, Knott A, McCane B: Language-driven nonverbal communication in a bilingual conversational agent. Proceedings of the 16th International Conference on Computer Animation and Social Agents (CASA '03), May 2003, New Brunswick, NJ, USA, 17-22.
29. Smid K, Pandzic I, Radman V: Autonomous speaker agent. Proceedings of the Computer Animation and Social Agents Conference (CASA '04), July 2004, Geneva, Switzerland.
30. Russell JA: A circumplex model of affect. Journal of Personality and Social Psychology 1980, 39(6):1161-1178.
31. Lee W-S, Escher M, Sannier G, Magnenat-Thalmann N: MPEG-4 compatible faces from orthogonal photos. Proceedings of Computer Animation (CA '99), May 1999, Geneva, Switzerland, 186-194.
32. Noh J-Y, Neumann U: Expression cloning. Proceedings of the 28th Annual Conference on Computer Graphics and Interactive Techniques (SIGGRAPH '01), August 2001, Los Angeles, Calif, USA, 277-288.
33. Paradiso A: An algebra of facial expressions. Proceedings of the 27th Annual Conference on Computer Graphics and Interactive Techniques (SIGGRAPH '00), July 2000, New Orleans, La, USA.
34. Perlin K: Layered compositing of facial expression. Proceedings of the 24th Annual Conference on Computer Graphics and Interactive Techniques (SIGGRAPH '97), August 1997, Los Angeles, Calif, USA.
35. Arya A, Jefferies LN, Enns JT, DiPaola S: Facial actions as visual cues for personality. Computer Animation and Virtual Worlds 2006, 17(3-4):371-382. doi:10.1002/cav.140
36. Beedie CJ, Terry PC, Lane AM: Distinctions between emotion and mood. Cognition and Emotion 2005, 19(6):847-878. doi:10.1080/02699930541000057
37. DiPaola S, Arya A: Affective communication remapping in MusicFace system. Proceedings of the 10th European Conference on Electronic Imaging and the Visual Arts (EVA '04), July 2004, London, UK.
38. Bresin R, Friberg A: Synthesis and decoding of emotionally expressive music performance. Proceedings of the IEEE International Conference on Systems, Man, and Cybernetics, October 1999, Tokyo, Japan, 4:317-322.
39. Juslin PN: Cue utilization in communication of emotion in music performance: relating performance to perception. Journal of Experimental Psychology: Human Perception and Performance 2000, 26(6):1797-1813.
40. Liu D, Lu L, Zhang H-J: Automatic mood detection from acoustic music data. Proceedings of the 4th International Symposium on Music Information Retrieval (ISMIR '03), October 2003, Baltimore, Md, USA.

Copyright

© A. Arya and S. DiPaola. 2007

This article is published under license to BioMed Central Ltd. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.