Table of Contents

A Message from the Chairs
ICMI-MLMI 2009 Organizing Committee

Keynote Address 1
Living Better with Robots (Page 1)

Session 1: Multimodal Communication Analysis (Oral)
Discovering Group Nonverbal Conversational Patterns with Topics (Page 3)
Agreement Detection in Multiparty Conversation (Page 7)
Multimodal Floor Control Shift Detection (Page 15)
Static vs. Dynamic Modeling of Human Nonverbal Behavior from Multiple Cues and Modalities (Page 23)

Session 2: Multimodal Dialog
Dialog in the Open World: Platform and Applications (Page 31)
Towards Adapting Fantasy, Curiosity and Challenge in Multimodal Dialogue Systems for Preschoolers (Page 39)
Building Multimodal Applications with EMMA (Page 47)

Session 3: Multimodal Communication Analysis and Dialog (Poster)
A Speaker Diarization Method Based on the Probabilistic Fusion of Audio-Visual Location Information (Page 55)
Dynamic Robot Autonomy: Investigating the Effects of Robot Decision-Making in a Human-Robot Team Task (Page 63)
A Speech Mashup Framework for Multimodal Mobile Services (Page 71)
Detecting, Tracking and Interacting with People in a Public Space (Page 79)
Cache-based Language Model Adaptation Using Visual Attention for ASR in Meeting Scenarios (Page 87)
Multimodal End-of-Turn Prediction in Multi-Party Meetings (Page 91)
Recognizing Communicative Facial Expressions for Discovering Interpersonal Emotions in Group Meetings (Page 99)
Classification of Patient Case Discussions Through Analysis of Vocalisation Graphs (Page 107)
Learning from Preferences and Selected Multimodal Features of Players (Page 115)
Detecting User Engagement with a Robot Companion Using Task and Social Interaction-based Features (Page 119)
Multi-Modal Features for Real-Time Detection of Human-Robot Interaction Categories (Page 127)
Modeling Culturally Authentic Style Shifting with Virtual Peers (Page 135)
Between Linguistic Attention and Gaze Fixations in Multimodal Conversational Interfaces (Page 143)

Keynote Address 2
Head-up Interaction: Can we break our addiction to the screen and keyboard? (Page 151)

Session 4: Multimodal Fusion (Special Session)
Fusion Engines for Multimodal Input: A Survey (Page 153)
A Fusion Framework for Multimodal Interactive Applications (Page 161)
Benchmarking Fusion Engines of Multimodal Interactive Systems (Page 169)
Temporal Aspects of CARE-based Multimodal Fusion: From a Fusion Mechanism to Composition Components and WoZ Components (Page 177)
Formal Description Techniques to Support the Design, Construction and Evaluation of Fusion Engines for SURE (Safe, Usable, Reliable and Evolvable) Multimodal Interfaces (Page 185)
Multimodal Inference for Driver-Vehicle Interaction (Page 193)

Session 5: Gaze, Gesture, and Reference (Oral)
Multimodal Integration of Natural Gaze Behavior for Intention Recognition During Object Manipulation (Page 199)
Salience in the Generation of Multimodal Referring Acts (Page 207)
Communicative Gestures in Coreference Identification in Multiparty Meetings (Page 211)

Session 6: Demonstration Session
Realtime Meeting Analysis and 3D Meeting Viewer Based on Omnidirectional Multimodal Sensors (Page 219)
Guiding Hand: A Teaching Tool for Handwriting (Page 221)
A Multimedia Retrieval System Using Speech Input (Page 223)
Navigation with a Passive Brain Based Interface (Page 225)
A Multimodal Predictive-Interactive Application for Computer Assisted Transcription and Translation (Page 227)
Multi-Modal Communication (Page 229)
HephaisTK: A Toolkit for Rapid Prototyping of Multimodal Interfaces (Page 231)
State, an Assisted Document Transcription System (Page 233)
Demonstration - First Steps in Emotional Expression of the Humanoid Robot Nao (Page 235)
WiiNote: Multimodal Application Facilitating Multi-User Photo Annotation Activity (Page 237)

Keynote Address 3
Are Gesture-based Interfaces the Future of Human Computer Interaction? (Page 239)

Session 7: Doctoral Spotlight Oral Session
Providing Expressive Eye Movement to Virtual Agents (Page 241)
Mediated Attention with Multimodal Augmented Reality (Page 245)
Grounding Spatial Prepositions for Video Search (Page 253)
Multi-Modal and Multi-Camera Attention in Smart Environments (Page 261)

Session 8: Multimodal Devices and Sensors (Oral)
RVDT: A Design Space for Multiple Input Devices, Multiple Views and Multiple Display Surfaces Combination (Page 269)
Learning and Predicting Multimodal Daily Life Patterns from Cell Phones (Page 277)
Visual Based Picking Supported by Context Awareness: Comparing Picking Performance Using Paper-based Lists Versus List Presented on a Head Mounted Display with Contextual Support (Page 281)

Session 9: Multimodal Applications and Techniques (Poster)
Adaptation from Partially Supervised Handwritten Text Transcriptions (Page 289)
Recognizing Events with Temporal Random Forests (Page 293)
Activity-aware ECG-based Patient Authentication for Remote Health Monitoring (Page 297)
GaZIR: Gaze-based Zooming Interface for Image Retrieval (Page 305)
Voice Key Board: Multimodal Indic Text Input (Page 313)
Evaluating the Effect of Temporal Parameters for Vibrotactile Saltatory Patterns (Page 319)
Mapping Information to Audio and Tactile Icons (Page 327)
Augmented Reality Target Finding Based on Tactile Cues (Page 335)

Session 10: Doctoral Spotlight Posters
Speaker Change Detection with Privacy-Preserving Audio Cues (Page 343)
Providing Expressive Eye Movement to Virtual Agents (Page 241, also presented in Session 7)
Mediated Attention with Multimodal Augmented Reality (Page 245, also presented in Session 7)
Grounding Spatial Prepositions for Video Search (Page 253, also presented in Session 7)
Multi-Modal and Multi-Camera Attention in Smart Environments (Page 261, also presented in Session 7)
MirrorTrack - Tracking with Reflection - Comparison with Top-Down Approach (Page 347)
A Framework for Continuous Multimodal Sign Language Recognition (Page 351)
Discovering Group Nonverbal Conversational Patterns with Topics (Page 3, also presented in Session 1)
Learning and Predicting Multimodal Daily Life Patterns from Cell Phones (Page 277, also presented in Session 8)