Table of Contents

A Message from the Chairs
James Crowley, Yuri Ivanov, Christopher Wren (General Chairs)
Daniel Gatica-Perez, Rainer Stiefelhagen, Michael Johnston (Program Chairs)

ICMI-MLMI 2009 Organizing Committee

Keynote Address 1 
Session Chair: Yuri Ivanov (MERL)

Living Better with Robots (Page 1)
Cynthia Breazeal (Massachusetts Institute of Technology)

Session 1: Multimodal Communication Analysis (Oral) 
Session Chair: Steve Renals (University of Edinburgh)

Discovering Group Nonverbal Conversational Patterns with Topics (Page 3)
Dinesh Babu Jayagopi (Idiap Research Institute & École Polytechnique Fédérale de Lausanne)
Daniel Gatica-Perez (Idiap Research Institute)

Agreement Detection in Multiparty Conversation (Page 7)
Sebastian Germesin (DFKI - German Research Center for Artificial Intelligence)
Theresa Wilson (University of Edinburgh)

Multimodal Floor Control Shift Detection (Page 15)
Lei Chen (Purdue University)
Mary P. Harper (University of Maryland)

Static vs. Dynamic Modeling of Human Nonverbal Behavior from Multiple Cues and Modalities (Page 23)
Stavros Petridis (Imperial College London)
Hatice Gunes (Imperial College London)
Sebastian Kaltwang (University of Karlsruhe)
Maja Pantic (Imperial College London)

Session 2: Multimodal Dialog 
Session Chair: Alexandros Potamianos (Technical University of Crete)

Dialog in the Open World: Platform and Applications (Page 31)
Dan Bohus (Microsoft Research)
Eric Horvitz (Microsoft Research)

Towards Adapting Fantasy, Curiosity and Challenge in Multimodal Dialogue Systems for Preschoolers (Page 39)
Theofanis Kannetis (Technical University of Crete)
Alexandros Potamianos (Technical University of Crete)

Building Multimodal Applications with EMMA (Page 47)
Michael Johnston (AT&T Labs Research)

Session 3: Multimodal Communication Analysis and Dialog (Poster) 
Session Chair: Kenji Mase (Nagoya University)

A Speaker Diarization Method Based on the Probabilistic Fusion of Audio-Visual Location Information (Page 55)
Kentaro Ishizuka (NTT Corporation)
Shoko Araki (NTT Corporation)
Kazuhiro Otsuka (NTT Corporation)
Tomohiro Nakatani (NTT Corporation)
Masakiyo Fujimoto (NTT Corporation)

Dynamic Robot Autonomy: Investigating the Effects of Robot Decision-Making in a Human-Robot Team Task (Page 63)
Paul Schermerhorn (Indiana University)
Matthias Scheutz (Indiana University)

A Speech Mashup Framework for Multimodal Mobile Services (Page 71)
Giuseppe Di Fabbrizio (AT&T Labs - Research, Inc.)
Thomas Okken (AT&T Labs - Research, Inc.)
Jay G. Wilpon (AT&T Labs - Research, Inc.)

Detecting, Tracking and Interacting with People in a Public Space (Page 79)
Sunsern Cheamanunkul (University of California, San Diego)
Evan Ettinger (University of California, San Diego)
Matt Jacobsen (University of California, San Diego)
Patrick Lai (Stanford University)
Yoav Freund (University of California, San Diego)

Cache-based Language Model Adaptation Using Visual Attention for ASR in Meeting Scenarios (Page 87)
Neil J. Cooke (University of Birmingham)
Martin J. Russell (University of Birmingham)

Multimodal End-of-Turn Prediction in Multi-Party Meetings (Page 91)
Iwan de Kok (University of Twente)
Dirk Heylen (University of Twente)

Recognizing Communicative Facial Expressions for Discovering Interpersonal Emotions in Group Meetings (Page 99)
Shiro Kumano (NTT Corporation)
Kazuhiro Otsuka (NTT Corporation)
Dan Mikami (NTT Corporation)
Junji Yamato (NTT Corporation)

Classification of Patient Case Discussions Through Analysis of Vocalisation Graphs (Page 107)
Saturnino Luz (Trinity College Dublin)
Bridget Kane (Trinity College Dublin)

Learning from Preferences and Selected Multimodal Features of Players (Page 115)
Georgios N. Yannakakis (IT University of Copenhagen)

Detecting User Engagement with a Robot Companion Using Task and Social Interaction-based Features (Page 119)
Ginevra Castellano (Queen Mary University of London)
André Pereira (Instituto Superior Técnico)
Iolanda Leite (Instituto Superior Técnico)
Ana Paiva (Instituto Superior Técnico)
Peter W. McOwan (Queen Mary University of London)

Multi-Modal Features for Real-Time Detection of Human-Robot Interaction Categories (Page 127)
Ian R. Fasel (The University of Arizona)
Masahiro Shiomi (Advanced Telecommunications Research Institute International)
Philippe-Emmanuel Chadutaud (Advanced Telecommunications Research Institute International)
Takayuki Kanda (Advanced Telecommunications Research Institute International)
Norihiro Hagita (Advanced Telecommunications Research Institute International)
Hiroshi Ishiguro (Advanced Telecommunications Research Institute International)

Modeling Culturally Authentic Style Shifting with Virtual Peers (Page 135)
Justine Cassell (Northwestern University)
Kathleen Geraghty (Northwestern University)
Berto Gonzalez (Northwestern University)
John Borland (Northwestern University)

Between Linguistic Attention and Gaze Fixations in Multimodal Conversational Interfaces (Page 143)
Rui Fang (Michigan State University)
Joyce Y. Chai (Michigan State University)
Fernanda Ferreira (University of Edinburgh)

Keynote Address 2 
Session Chair: Chris Wren (Google)

Head-up Interaction: Can we break our addiction to the screen and keyboard? (Page 151)
Stephen Brewster (University of Glasgow)

Session 4: Multimodal Fusion (Special Session) 
Session Chair: Philippe Palanque (University of Toulouse)

Fusion Engines for Multimodal Input: A Survey (Page 153)
Denis Lalanne (University of Fribourg)
Laurence Nigay (University of Grenoble)
Philippe Palanque (University of Toulouse)
Peter Robinson (University of Cambridge)
Jean Vanderdonckt (Université catholique de Louvain)
Jean-François Ladry (University of Toulouse)

A Fusion Framework for Multimodal Interactive Applications (Page 161)
Hildeberto Mendonça (Université catholique de Louvain)
Jean-Yves Lionel Lawson (Université catholique de Louvain)
Olga Vybornova (Université catholique de Louvain)
Benoit Macq (Université catholique de Louvain)
Jean Vanderdonckt (Université catholique de Louvain)

Benchmarking Fusion Engines of Multimodal Interactive Systems (Page 169)
Bruno Dumas (University of Fribourg)
Rolf Ingold (University of Fribourg)
Denis Lalanne (University of Fribourg)

Temporal Aspects of CARE-based Multimodal Fusion: From a Fusion Mechanism to Composition Components and WoZ Components (Page 177)
Marcos Serrano (University of Grenoble, CNRS, LIG)
Laurence Nigay (University of Grenoble, CNRS, LIG)

Formal Description Techniques to Support the Design, Construction and Evaluation of Fusion Engines for SURE (Safe, Usable, Reliable and Evolvable) Multimodal Interfaces (Page 185)
Jean-François Ladry (University of Toulouse)
David Navarre (University of Toulouse)
Philippe Palanque (University of Toulouse)

Multimodal Inference for Driver-Vehicle Interaction (Page 193)
Tevfik Metin Sezgin (Koç University)
Ian Davies (University of Cambridge)
Peter Robinson (University of Cambridge)

Session 5: Gaze, Gesture, and Reference (Oral) 
Session Chair: Louis-Philippe Morency (University of Southern California)

Multimodal Integration of Natural Gaze Behavior for Intention Recognition During Object Manipulation (Page 199)
Thomas Bader (Universität Karlsruhe)
Matthias Vogelgesang (Fraunhofer IITB)
Edmund Klaus (Fraunhofer IITB)

Salience in the Generation of Multimodal Referring Acts (Page 207)
Paul Piwek (The Open University)

Communicative Gestures in Coreference Identification in Multiparty Meetings (Page 211)
Tyler Baldwin (Michigan State University)
Joyce Y. Chai (Michigan State University)
Katrin Kirchhoff (University of Washington)

Session 6: Demonstration Session 
Session Chairs: Denis Lalanne (University of Fribourg)
Enrique Vidal (Polytechnic University of Valencia)

Realtime Meeting Analysis and 3D Meeting Viewer Based on Omnidirectional Multimodal Sensors (Page 219)
Kazuhiro Otsuka (NTT Corporation)
Shoko Araki (NTT Corporation)
Dan Mikami (NTT Corporation)
Kentaro Ishizuka (NTT Corporation)
Masakiyo Fujimoto (NTT Corporation)
Junji Yamato (NTT Corporation)

Guiding Hand: A Teaching Tool for Handwriting (Page 221)
Nalini Vishnoi (George Mason University)
Cody Narber (George Mason University)
Zoran Duric (George Mason University)
Naomi Lynn Gerber (George Mason University)

A Multimedia Retrieval System Using Speech Input (Page 223)
Andrei Popescu-Belis (Idiap Research Institute)
Peter Poller (DFKI GmbH)
Jonathan Kilgour (University of Edinburgh)

Navigation with a Passive Brain Based Interface (Page 225)
Jan B. F. van Erp (TNO Human Factors)
Peter J. Werkhoven (Utrecht University)
Marieke E. Thurlings (Utrecht University)
Anne-Marie M. Brouwer (TNO Human Factors)

A Multimodal Predictive-Interactive Application for Computer Assisted Transcription and Translation (Page 227)
Vicent Alabau (Universitat Politècnica de València)
Daniel Ortiz (Universitat Politècnica de València)
Verónica Romero (Universitat Politècnica de València)
Jorge Ocampo (Universitat Politècnica de València)

Multi-Modal Communication (Page 229)
Victor S. Finomore (Air Force Research Laboratory)
Dianne K. Popik (Air Force Research Laboratory)
Douglas S. Brungart (Air Force Research Laboratory)
Brian D. Simpson (Air Force Research Laboratory)

HephaisTK: A Toolkit for Rapid Prototyping of Multimodal Interfaces (Page 231)
Bruno Dumas (University of Fribourg)
Denis Lalanne (University of Fribourg)
Rolf Ingold (University of Fribourg)

State, an Assisted Document Transcription System (Page 233)
David Llorens (Universitat Jaume I)
Andrés Marzal (Universitat Jaume I)
Federico Prat (Universitat Jaume I)
Juan Miguel Vilar (Universitat Jaume I)

Demonstration - First Steps in Emotional Expression of the Humanoid Robot Nao (Page 235)
Jérôme Monceaux (Aldebaran Robotics)
Joffrey Becker (EHESS - LAS)
Céline Boudier (Aldebaran Robotics)
Alexandre Mazel (Aldebaran Robotics)

WiiNote: Multimodal Application Facilitating Multi-User Photo Annotation Activity (Page 237)
Elena Mugellini (University of Applied Sciences of Western Switzerland)
Maria Sokhn (University of Applied Sciences of Western Switzerland)
Stefano Carrino (University of Applied Sciences of Western Switzerland)
Omar Abou Khaled (University of Applied Sciences of Western Switzerland)

Keynote Address 3 
Session Chair: James Crowley (INRIA Grenoble Rhône-Alpes Research Centre)

Are Gesture-based Interfaces the Future of Human Computer Interaction? (Page 239)
Frederic Kaplan (EPFL-CRAFT and OZWE)

Session 7: Doctoral Spotlight Oral Session 
Session Chair: Michael Johnston (AT&T Labs Research)

Providing Expressive Eye Movement to Virtual Agents (Page 241)
Zheng Li (Beihang University)
Xia Mao (Beihang University)
Lei Liu (Beihang University)

Mediated Attention with Multimodal Augmented Reality (Page 245)
Angelika Dierker (Bielefeld University)
Christian Mertes (Bielefeld University)
Thomas Hermann (Bielefeld University)
Marc Hanheide (University of Birmingham)
Gerhard Sagerer (Bielefeld University)

Grounding Spatial Prepositions for Video Search (Page 253)
Stefanie Tellex (Massachusetts Institute of Technology)
Deb Roy (Massachusetts Institute of Technology)

Multi-Modal and Multi-Camera Attention in Smart Environments (Page 261)
Boris Schauerte (TU Dortmund)
Jan Richarz (TU Dortmund)
Thomas Plötz (TU Dortmund)
Christian Thurau (Fraunhofer IAIS)
Gernot A. Fink (TU Dortmund)

Session 8: Multimodal Devices and Sensors (Oral) 
Session Chair: David Demirdjian (Toyota Research Institute)

RVDT: A Design Space for Multiple Input Devices, Multiple Views and Multiple Display Surfaces Combination (Page 269)
Rami Ajaj (LIMSI-CNRS and University of Paris 11)
Christian Jacquemin (LIMSI-CNRS and University of Paris 11)
Frédéric Vernier (LIMSI-CNRS and University of Paris 11)

Learning and Predicting Multimodal Daily Life Patterns from Cell Phones (Page 277)
Katayoun Farrahi (Idiap Research Institute/EPFL)
Daniel Gatica-Perez (Idiap Research Institute/EPFL)

Visual Based Picking Supported by Context Awareness: Comparing Picking Performance Using Paper-based Lists Versus Lists Presented on a Head Mounted Display with Contextual Support (Page 281)
Hendrik Iben (University of Bremen)
Hannes Baumann (University of Bremen)
Carmen Ruthenbeck (BIBA - Bremen Institut für Produktion und Logistik GmbH)
Tobias Klug (SAP AG)

Session 9: Multimodal Applications and Techniques (Poster) 
Session Chair: Rainer Stiefelhagen (Karlsruhe Institute of Technology & Fraunhofer IITB)

Adaptation from Partially Supervised Handwritten Text Transcriptions (Page 289)
Nicolás Serrano (Universitat Politècnica de València)
Daniel Pérez (Universitat Politècnica de València)
Albert Sanchis (Universitat Politècnica de València)
Alfons Juan (Universitat Politècnica de València)

Recognizing Events with Temporal Random Forests (Page 293)
David Demirdjian (Toyota Research Institute)
Chenna Varri (Toyota Research Institute)

Activity-aware ECG-based Patient Authentication for Remote Health Monitoring (Page 297)
Janani C. Sriram (Dartmouth College)
Minho Shin (Dartmouth College)
Tanzeem Choudhury (Dartmouth College)
David Kotz (Dartmouth College)

GaZIR: Gaze-based Zooming Interface for Image Retrieval (Page 305)
László Kozma (Helsinki University of Technology)
Arto Klami (Helsinki University of Technology)
Samuel Kaski (Helsinki University of Technology)

Voice Key Board: Multimodal Indic Text Input (Page 313)
Prasenjit Dey (Hewlett Packard Laboratories)
Ramchandrula Sitaram (Hewlett Packard Laboratories)
Rahul Ajmera (Human Factors International)
Kalika Bali (Microsoft Research India)

Evaluating the Effect of Temporal Parameters for Vibrotactile Saltatory Patterns (Page 319)
Jukka Raisamo (University of Tampere)
Roope Raisamo (University of Tampere)
Veikko Surakka (University of Tampere)

Mapping Information to Audio and Tactile Icons (Page 327)
Eve Hoggan (University of Glasgow & University of Tampere)
Roope Raisamo (University of Tampere)
Stephen A. Brewster (University of Glasgow)

Augmented Reality Target Finding Based on Tactile Cues (Page 335)
Teemu Tuomas Ahmaniemi (Nokia Research Center)
Vuokko Tuulikki Lantz (Nokia Research Center)

Session 10: Doctoral Spotlight Posters 
Session Chair: Daniel Gatica-Perez (Idiap Research Institute & École Polytechnique Fédérale de Lausanne)

Speaker Change Detection with Privacy-Preserving Audio Cues (Page 343)
Sree Hari Krishnan Parthasarathi (Idiap Research Institute & École Polytechnique Fédérale de Lausanne)
Mathew Magimai.-Doss (Idiap Research Institute)
Daniel Gatica-Perez (Idiap Research Institute & École Polytechnique Fédérale de Lausanne)
Hervé Bourlard (Idiap Research Institute & École Polytechnique Fédérale de Lausanne)

Providing Expressive Eye Movement to Virtual Agents (Page 241, also presented in Session 7)
Zheng Li (Beihang University)
Xia Mao (Beihang University)
Lei Liu (Beihang University)

Mediated Attention with Multimodal Augmented Reality (Page 245, also presented in Session 7)
Angelika Dierker (Bielefeld University)
Christian Mertes (Bielefeld University)
Thomas Hermann (Bielefeld University)
Marc Hanheide (University of Birmingham)
Gerhard Sagerer (Bielefeld University)

Grounding Spatial Prepositions for Video Search (Page 253, also presented in Session 7)
Stefanie Tellex (Massachusetts Institute of Technology)
Deb Roy (Massachusetts Institute of Technology)

Multi-Modal and Multi-Camera Attention in Smart Environments (Page 261, also presented in Session 7)
Boris Schauerte (TU Dortmund)
Jan Richarz (TU Dortmund)
Thomas Plötz (TU Dortmund)
Christian Thurau (Fraunhofer IAIS)
Gernot A. Fink (TU Dortmund)

MirrorTrack - Tracking with Reflection - Comparison with Top-Down Approach (Page 347)
Yannick Verdie (Virginia Polytechnic Institute and State University)
Bing Fang (Virginia Polytechnic Institute and State University)
Francis Quek (Virginia Polytechnic Institute and State University)

A Framework for Continuous Multimodal Sign Language Recognition (Page 351)
Daniel Kelly (National University of Ireland Maynooth)
Jane Reilly Delannoy (National University of Ireland Maynooth)
John Mc Donald (National University of Ireland Maynooth)
Charles Markham (National University of Ireland Maynooth)

Discovering Group Nonverbal Conversational Patterns with Topics (Page 3, also presented in Session 1)
Dinesh Babu Jayagopi (Idiap Research Institute & École Polytechnique Fédérale de Lausanne)
Daniel Gatica-Perez (Idiap Research Institute)

Learning and Predicting Multimodal Daily Life Patterns from Cell Phones (Page 277, also presented in Session 8)
Katayoun Farrahi (Idiap Research Institute/EPFL)
Daniel Gatica-Perez (Idiap Research Institute/EPFL)