ICMI'08
Proceedings of the Tenth International Conference on Multimodal Interfaces
Chania, Crete, Greece, October 20-22, 2008
Special Session on Social Signal Processing (Oral Session)
Role Recognition in Multiparty Recordings using Social Affiliation Networks and Discrete Distributions (Page 29)
Sarah Favre (École Polytechnique Fédérale de Lausanne and Idiap Research Institute)
Hugues Salamin (École Polytechnique Fédérale de Lausanne and Idiap Research Institute)
John Dines (École Polytechnique Fédérale de Lausanne and Idiap Research Institute)
Alessandro Vinciarelli (École Polytechnique Fédérale de Lausanne and Idiap Research Institute)
Audiovisual Laughter Detection Based on Temporal Features (Page 37)
Stavros Petridis (Imperial College)
Maja Pantic (Imperial College)
Predicting Two Facets of Social Verticality in Meetings from Five-Minute Time Slices and Nonverbal Cues (Page 45)
Dinesh Babu Jayagopi (Idiap Research Institute and École Polytechnique Fédérale de Lausanne)
Sileye Ba (Idiap Research Institute)
Jean-Marc Odobez (Idiap Research Institute and École Polytechnique Fédérale de Lausanne)
Daniel Gatica-Perez (Idiap Research Institute and École Polytechnique Fédérale de Lausanne)
Multimodal Recognition of Personality Traits in Social Interactions (Page 53)
Fabio Pianesi (FBK-irst)
Nadia Mana (FBK-irst)
Alessandro Cappelletti (FBK-irst)
Bruno Lepri (FBK-irst)
Massimo Zancanaro (FBK-irst)
Social Signals, their Function, and Automatic Analysis: A Survey (Page 61)
Alessandro Vinciarelli (Idiap Research Institute and École Polytechnique Fédérale de Lausanne)
Maja Pantic (Imperial College London)
Hervé Bourlard (Idiap Research Institute and École Polytechnique Fédérale de Lausanne)
Alex Pentland (Massachusetts Institute of Technology)
Session: Multimodal Systems I (Poster Session)
VoiceLabel: Using Speech to Label Mobile Sensor Data (Page 69)
Susumu Harada (University of Washington)
Jonathan Lester (University of Washington)
Kayur Patel (University of Washington)
T. Scott Saponas (University of Washington)
James Fogarty (University of Washington)
James A. Landay (University of Washington and Intel Research)
Jacob O. Wobbrock (University of Washington)
The BabbleTunes System - Talk to Your iPod! (Page 77)
Jan Schehl (DFKI GmbH)
Alexander Pfalzgraf (DFKI GmbH)
Norbert Pfleger (DFKI GmbH)
Jochen Steigner (DFKI GmbH)
Evaluating Talking Heads for Smart Home Systems (Page 81)
Christine Kühnel (Berlin Institute of Technology)
Benjamin Weiss (Berlin Institute of Technology)
Ina Wechsung (Berlin Institute of Technology)
Sascha Fagel (Berlin Institute of Technology)
Sebastian Möller (Berlin Institute of Technology)
Perception of Dynamic Audiotactile Feedback to Gesture Input (Page 85)
Teemu Tuomas Ahmaniemi (Nokia Research Center)
Vuokko Lantz (Nokia Research Center)
Juha Marila (Nokia Research Center)
An Integrative Recognition Method for Speech and Gestures (Page 93)
Madoka Miki (Nagoya University)
Chiyomi Miyajima (Nagoya University)
Takanori Nishino (Nagoya University)
Norihide Kitaoka (Nagoya University)
Kazuya Takeda (Nagoya University)
As Go the Feet…: On the Estimation of Attentional Focus from Stance (Page 97)
Francis Quek (Virginia Tech)
Roger Ehrich (Virginia Tech)
Thurmon Lockhart (Virginia Tech)
Knowledge and Data Flow Architecture for Reference Processing in Multimodal Dialog Systems (Page 105)
Ali Choumane (IRISA, University of Rennes 1)
Jacques Siroux (IRISA, University of Rennes 1)
The CAVA corpus: Synchronised Stereoscopic and Binaural Datasets with Head Movements (Page 109)
Elise Arnaud (Université Joseph Fourier, LJK and INRIA Rhône-Alpes)
Heidi Christensen (University of Sheffield)
Yan-Chen Lu (University of Sheffield)
Jon Barker (University of Sheffield)
Vasil Khalidov (INRIA Rhône-Alpes)
Miles Hansard (INRIA Rhône-Alpes)
Bertrand Holveck (INRIA Rhône-Alpes)
Hervé Mathieu (INRIA Rhône-Alpes)
Ramya Narasimha (INRIA Rhône-Alpes)
Elise Taillant (INRIA Rhône-Alpes)
Florence Forbes (INRIA Rhône-Alpes)
Radu Horaud (INRIA Rhône-Alpes)
Towards A Minimalist Multimodal Dialogue Framework Using Recursive MVC Pattern (Page 117)
Li Li (Avaya Inc.)
Wu Chou (Avaya Inc.)
Explorative Studies on Multimodal Interaction in a PDA- and Desktop-based Scenario (Page 121)
Andreas Ratzka (University of Regensburg)
Session: Multimodal System Design and Tools (Oral Session)
Designing Context-Aware Multimodal Virtual Environments (Page 129)
Lode Vanacken (Hasselt University - tUL - IBBT)
Joan De Boeck (Hasselt University - tUL - IBBT)
Chris Raymaekers (Hasselt University - tUL - IBBT)
Karin Coninx (Hasselt University - tUL - IBBT)
A High-Performance Dual-Wizard Infrastructure for Designing Speech, Pen, and Multimodal Interfaces (Page 137)
Phil Cohen (Adapx Inc)
Colin Swindells (Incaa Designs)
Sharon Oviatt (Incaa Designs)
Alex Arthur (Adapx Inc)
The WAMI Toolkit for Developing, Deploying, and Evaluating Web-Accessible Multimodal Interfaces (Page 141)
Alexander Gruenstein (Massachusetts Institute of Technology)
Ian McGraw (Massachusetts Institute of Technology)
Ibrahim Badr (Massachusetts Institute of Technology)
A Three-dimensional Characterization Space of Software Components for Rapidly Developing Multimodal Interfaces (Page 149)
Marcos Serrano (University of Grenoble)
David Juras (University of Grenoble)
Laurence Nigay (University of Grenoble)
Session: Multimodal Interfaces I (Oral Session)
Crossmodal Congruence: The Look, Feel and Sound of Touchscreen Widgets (Page 157)
Eve Hoggan (University of Glasgow)
Topi Kaaresoja (Nokia Research Center)
Pauli Laitinen (Nokia Research Center)
Stephen Brewster (University of Glasgow)
MultiML - A General Purpose Representation Language for Multimodal Human Utterances (Page 165)
Manuel Giuliani (Technische Universität München)
Alois Knoll (Technische Universität München)
Deducing the Visual Focus of Attention from Head Pose Estimation in Dynamic Multi-View Meeting Scenarios (Page 173)
Michael Voit (Fraunhofer IITB)
Rainer Stiefelhagen (Universität Karlsruhe)
Context-based Recognition during Human Interactions: Automatic Feature Selection and Encoding Dictionary (Page 181)
Louis-Philippe Morency (USC Institute for Creative Technologies)
Iwan de Kok (University of Twente)
Jonathan Gratch (USC Institute for Creative Technologies)
Demo Session
AcceleSpell, a Gestural Interactive Game to Learn and Practice Finger Spelling (Page 189)
José L. Hernandez-Rebollar (Universidad Tecnológica de Puebla)
Ethar Ibrahim Elsakay (Institute for Disabilities Research and Training Inc.)
Jose D. Alanís-Urquieta (Universidad Tecnológica de Puebla)
A Multi-modal Spoken Dialog System for Interactive TV (Page 191)
Rajesh Balchandran (IBM T. J. Watson Research Center)
Mark E. Epstein (IBM T. J. Watson Research Center)
Gerasimos Potamianos (IBM T. J. Watson Research Center)
Ladislav Seredi (IBM T. J. Watson Research Center)
Multimodal Slideshow: Demonstration of the OpenInterface Interaction Development Environment (Page 193)
David Juras (University of Grenoble)
Laurence Nigay (University of Grenoble)
Michael Ortega (University of Grenoble)
Marcos Serrano (University of Grenoble)
A Browser-based Multimodal Interaction System (Page 195)
Kouichi Katsurada (Toyohashi University of Technology)
Teruki Kirihata (Toyohashi University of Technology)
Masashi Kudo (Toyohashi University of Technology)
Junki Takada (Toyohashi University of Technology)
Tsuneo Nitta (Toyohashi University of Technology)
iGlasses: An Automatic Wearable Speech Supplement in Face-to-Face Communication and Classroom Situations (Page 197)
Dominic W. Massaro (University of California, Santa Cruz)
Miguel Á. Carreira-Perpiñán (University of California, Merced)
David J. Merrill (Massachusetts Institute of Technology)
Cass Sterling (University of California, Santa Cruz)
Stephanie Bigler (University of California, Santa Cruz)
Elise Piazza (University of California, Santa Cruz)
Marcus Perlman (University of California, Santa Cruz)
Innovative interfaces in MonAMI: The Reminder (Page 199)
Jonas Beskow (KTH Speech Music & Hearing)
Jens Edlund (KTH Speech Music & Hearing)
Teodor Gjermani (KTH Speech Music & Hearing)
Björn Granström (KTH Speech Music & Hearing)
Joakim Gustafson (KTH Speech Music & Hearing)
Oskar Jonsson (Swedish Institute of Assistive Technology)
Gabriel Skantze (KTH Speech Music & Hearing)
Helena Tobiasson (KTH Human Computer Interaction)
PHANTOM Prototype: Exploring the Potential for Learning with Multimodal Features in Dentistry (Page 201)
Jonathan P. San Diego (King's College London)
Alastair Barrow (University of Reading)
Margaret Cox (King's College London)
William Harwin (University of Reading)
Keynote Address
Audiovisual 3D Rendering as a Tool for Multimodal Interfaces (Page 203)
George Drettakis (INRIA Sophia-Antipolis)
Session: Multimodal Interfaces II (Oral Session)
Multimodal Presentation and Browsing of Music (Page 205)
David Damm (University of Bonn)
Christian Fremerey (University of Bonn)
Frank Kurth (Research Establishment for Applied Science)
Meinard Müller (Max-Planck-Institut für Informatik)
Michael Clausen (University of Bonn)
An Audio-Haptic Interface Based on Auditory Depth Cues (Page 209)
Delphine Devallez (University of Verona)
Federico Fontana (University of Verona)
Davide Rocchesso (IUAV of Venice)
Detection and Localization of 3D Audio-Visual Objects Using Unsupervised Clustering (Page 217)
Vasil Khalidov (INRIA Rhône-Alpes and Université Joseph Fourier)
Florence Forbes (INRIA Rhône-Alpes)
Miles Hansard (INRIA Rhône-Alpes)
Elise Arnaud (INRIA Rhône-Alpes and Université Joseph Fourier)
Radu Horaud (INRIA Rhône-Alpes)
Robust Gesture Processing for Multimodal Interaction (Page 225)
Srinivas Bangalore (AT&T Labs Research)
Michael Johnston (AT&T Labs Research)
Session: Multimodal Modelling (Oral Session)
Investigating Automatic Dominance Estimation in Groups From Visual Attention and Speaking Activity (Page 233)
Hayley Hung (Idiap Research Institute)
Dinesh Babu Jayagopi (Idiap Research Institute and École Polytechnique Fédérale de Lausanne)
Sileye Ba (Idiap Research Institute)
Jean-Marc Odobez (Idiap Research Institute and École Polytechnique Fédérale de Lausanne)
Daniel Gatica-Perez (Idiap Research Institute and École Polytechnique Fédérale de Lausanne)
Dynamic Modality Weighting for Multi-Stream HMMs in Audio-Visual Speech Recognition (Page 237)
Mihai Gurban (École Polytechnique Fédérale de Lausanne)
Jean-Philippe Thiran (École Polytechnique Fédérale de Lausanne)
Thomas Drugman (Faculté Polytechnique de Mons)
Thierry Dutoit (Faculté Polytechnique de Mons)
A Fitts' Law Comparison of Eye Tracking and Manual Input in the Selection of Visual Targets (Page 241)
Roel Vertegaal (Queen's University)
A Wizard of Oz Study for an AR Multimodal Interface (Page 249)
Minkyung Lee (HIT Lab NZ, University of Canterbury)
Mark Billinghurst (HIT Lab NZ, University of Canterbury)
Session: Multimodal Systems II (Poster Session)
A Realtime Multimodal System for Analyzing Group Meetings by Combining Face Pose Tracking and Speaker Diarization (Page 257)
Kazuhiro Otsuka (NTT Communication Science Labs.)
Shoko Araki (NTT Communication Science Labs.)
Kentaro Ishizuka (NTT Communication Science Labs.)
Masakiyo Fujimoto (NTT Communication Science Labs.)
Martin Heinrich (NTT Communication Science Labs.)
Junji Yamato (NTT Communication Science Labs.)
Designing and Evaluating Multimodal Interaction for Mobile Contexts (Page 265)
Saija Lemmelä (Nokia Research Center)
Akos Vetek (Nokia Research Center)
Kaj Mäkelä (Nokia Research Center)
Dari Trendafilov (Nokia Research Center)
Automated Sip Detection in Naturally-evoked Video (Page 273)
Rana el Kaliouby (Massachusetts Institute of Technology)
Mina Mikhail (American University in Cairo)
Perception of Low-Amplitude Haptic Stimuli when Biking (Page 281)
Toni Pakkanen (University of Tampere)
Jani Lylykangas (University of Tampere)
Jukka Raisamo (University of Tampere)
Roope Raisamo (University of Tampere)
Katri Salminen (University of Tampere)
Jussi Rantala (University of Tampere)
Veikko Surakka (University of Tampere)
TactiMote: A Tactile Remote Control for Navigating in Long Lists (Page 285)
Muhammad Tahir (TELECOM ParisTech)
Gilles Bailly (LIG University of Grenoble 1)
Eric Lecolinet (TELECOM ParisTech)
Gérard Mouret (TELECOM ParisTech)
The DIRAC AWEAR Audio-Visual Platform for Detection of Unexpected and Incongruent Events (Page 289)
Jörn Anemüller (University of Oldenburg)
Jörg-Hendrik Bach (University of Oldenburg)
Barbara Caputo (Idiap Research Institute)
Michal Havlena (Czech Technical University in Prague)
Luo Jie (Idiap Research Institute)
Hendrik Kayser (University of Oldenburg)
Bastian Leibe (ETH Zurich)
Petr Motlicek (Idiap Research Institute)
Tomas Pajdla (Czech Technical University in Prague)
Misha Pavel (Oregon Health & Science University)
Akihiko Torii (Czech Technical University in Prague)
Luc Van Gool (KU Leuven and ETH Zurich)
Alon Zweig (Hebrew University of Jerusalem)
Hynek Hermansky (Idiap Research Institute)
Smoothing Human-robot Speech Interactions by Using a Blinking-Light as Subtle Expression (Page 293)
Kotaro Funakoshi (Honda Research Institute Japan Co., Ltd.)
Kazuki Kobayashi (Shinshu University)
Mikio Nakano (Honda Research Institute Japan Co., Ltd.)
Seiji Yamada (National Institute of Informatics)
Yasuhiko Kitamura (Kwansei Gakuin University)
Hiroshi Tsujino (Honda Research Institute Japan Co., Ltd.)
Feel-Good Touch: Finding the Most Pleasant Tactile Feedback for a Mobile Touch Screen Button (Page 297)
Emilia Koskinen (Nokia Research Center)
Topi Kaaresoja (Nokia Research Center)
Pauli Laitinen (Nokia Research Center)
Embodied Conversational Agents for Voice-Biometric Interfaces (Page 305)
Álvaro Hernández-Trapote (Universidad Politécnica de Madrid)
Beatriz López-Mencía (Universidad Politécnica de Madrid)
David Díaz (Universidad Politécnica de Madrid)
Rubén Fernández-Pozo (Universidad Politécnica de Madrid)
Javier Caminero (Multilingualism & Speech Technology Group, Telefónica I+D)