Author Index
Ladry, Jean-François
    Fusion Engines for Multimodal Input: A Survey (Page 153)
    Formal Description Techniques to Support the Design, Construction and Evaluation of Fusion Engines for SURE (Safe, Usable, Reliable and Evolvable) Multimodal Interfaces (Page 185)
Lai, Patrick
    Detecting, Tracking and Interacting with People in a Public Space (Page 79)
Lalanne, Denis
    Fusion Engines for Multimodal Input: A Survey (Page 153)
    Benchmarking Fusion Engines of Multimodal Interactive Systems (Page 169)
    HephaisTK: A Toolkit for Rapid Prototyping of Multimodal Interfaces (Page 231)
Lantz, Vuokko Tuulikki
    Augmented Reality Target Finding Based on Tactile Cues (Page 335)
Lawson, Jean-Yves Lionel
    A Fusion Framework for Multimodal Interactive Applications (Page 161)
Leite, Iolanda
    Detecting User Engagement with a Robot Companion Using Task and Social Interaction-based Features (Page 119)
Li, Zheng
    Providing Expressive Eye Movement to Virtual Agents (Page 241)
Liu, Lei
    Providing Expressive Eye Movement to Virtual Agents (Page 241)
Llorens, David
    State, an Assisted Document Transcription System (Page 233)
Luz, Saturnino
    Classification of Patient Case Discussions Through Analysis of Vocalisation Graphs (Page 107)
Macq, Benoit
    A Fusion Framework for Multimodal Interactive Applications (Page 161)
Magimai.-Doss, Mathew
    Speaker Change Detection with Privacy-Preserving Audio Cues (Page 343)
Mao, Xia
    Providing Expressive Eye Movement to Virtual Agents (Page 241)
Markham, Charles
    A Framework for Continuous Multimodal Sign Language Recognition (Page 351)
Marzal, Andrés
    State, an Assisted Document Transcription System (Page 233)
Mazel, Alexandre
    Demonstration - First Steps in Emotional Expression of the Humanoid Robot Nao (Page 235)
Mc Donald, John
    A Framework for Continuous Multimodal Sign Language Recognition (Page 351)
McOwan, Peter W.
    Detecting User Engagement with a Robot Companion Using Task and Social Interaction-based Features (Page 119)
Mendonça, Hildeberto
    A Fusion Framework for Multimodal Interactive Applications (Page 161)
Mertes, Christian
    Mediated Attention with Multimodal Augmented Reality (Page 245)
Mikami, Dan
    Recognizing Communicative Facial Expressions for Discovering Interpersonal Emotions in Group Meetings (Page 99)
    Realtime Meeting Analysis and 3D Meeting Viewer Based on Omnidirectional Multimodal Sensors (Page 219)
Monceaux, Jérôme
    Demonstration - First Steps in Emotional Expression of the Humanoid Robot Nao (Page 235)
Mugellini, Elena
    WiiNote: Multimodal Application Facilitating Multi-User Photo Annotation Activity (Page 237)
Nakatani, Tomohiro
    A Speaker Diarization Method Based on the Probabilistic Fusion of Audio-Visual Location Information (Page 55)
Narber, Cody
    Guiding Hand: A Teaching Tool for Handwriting (Page 221)
Navarre, David
    Formal Description Techniques to Support the Design, Construction and Evaluation of Fusion Engines for SURE (Safe, Usable, Reliable and Evolvable) Multimodal Interfaces (Page 185)
Nigay, Laurence
    Fusion Engines for Multimodal Input: A Survey (Page 153)
    Temporal Aspects of CARE-based Multimodal Fusion: From a Fusion Mechanism to Composition Components and WoZ Components (Page 177)
Ocampo, Jorge
    A Multimodal Predictive-Interactive Application for Computer Assisted Transcription and Translation (Page 227)
Okken, Thomas
    A Speech Mashup Framework for Multimodal Mobile Services (Page 71)
Ortiz, Daniel
    A Multimodal Predictive-Interactive Application for Computer Assisted Transcription and Translation (Page 227)
Otsuka, Kazuhiro
    A Speaker Diarization Method Based on the Probabilistic Fusion of Audio-Visual Location Information (Page 55)
    Recognizing Communicative Facial Expressions for Discovering Interpersonal Emotions in Group Meetings (Page 99)
    Realtime Meeting Analysis and 3D Meeting Viewer Based on Omnidirectional Multimodal Sensors (Page 219)
Paiva, Ana
    Detecting User Engagement with a Robot Companion Using Task and Social Interaction-based Features (Page 119)
Palanque, Philippe
    Fusion Engines for Multimodal Input: A Survey (Page 153)
    Formal Description Techniques to Support the Design, Construction and Evaluation of Fusion Engines for SURE (Safe, Usable, Reliable and Evolvable) Multimodal Interfaces (Page 185)
Pantic, Maja
    Static vs. Dynamic Modeling of Human Nonverbal Behavior from Multiple Cues and Modalities (Page 23)
Parthasarathi, Sree Hari Krishnan
    Speaker Change Detection with Privacy-Preserving Audio Cues (Page 343)
Pereira, André
    Detecting User Engagement with a Robot Companion Using Task and Social Interaction-based Features (Page 119)
Pérez, Daniel
    Adaptation from Partially Supervised Handwritten Text Transcriptions (Page 289)
Petridis, Stavros
    Static vs. Dynamic Modeling of Human Nonverbal Behavior from Multiple Cues and Modalities (Page 23)
Piwek, Paul
    Salience in the Generation of Multimodal Referring Acts (Page 207)
Plötz, Thomas
    Multi-Modal and Multi-Camera Attention in Smart Environments (Page 261)
Poller, Peter
    A Multimedia Retrieval System Using Speech Input (Page 223)
Popescu-Belis, Andrei
    A Multimedia Retrieval System Using Speech Input (Page 223)
Popik, Dianne K.
    Multi-Modal Communication (Page 229)
Potamianos, Alexandros
    Towards Adapting Fantasy, Curiosity and Challenge in Multimodal Dialogue Systems for Preschoolers (Page 39)
Prat, Federico
    State, an Assisted Document Transcription System (Page 233)
Quek, Francis
    MirrorTrack - Tracking with Reflection - Comparison with Top-Down Approach (Page 347)
Raisamo, Jukka
    Evaluating the Effect of Temporal Parameters for Vibrotactile Saltatory Patterns (Page 319)
Raisamo, Roope
    Evaluating the Effect of Temporal Parameters for Vibrotactile Saltatory Patterns (Page 319)
    Mapping Information to Audio and Tactile Icons (Page 327)
Reilly Delannoy, Jane
    A Framework for Continuous Multimodal Sign Language Recognition (Page 351)
Richarz, Jan
    Multi-Modal and Multi-Camera Attention in Smart Environments (Page 261)
Robinson, Peter
    Fusion Engines for Multimodal Input: A Survey (Page 153)
    Multimodal Inference for Driver-Vehicle Interaction (Page 193)
Romero, Verónica
    A Multimodal Predictive-Interactive Application for Computer Assisted Transcription and Translation (Page 227)
Roy, Deb
    Grounding Spatial Prepositions for Video Search (Page 253)
Russell, Martin J.
    Cache-based Language Model Adaptation Using Visual Attention for ASR in Meeting Scenarios (Page 87)
Ruthenbeck, Carmen
    Visual Based Picking Supported by Context Awareness: Comparing Picking Performance Using Paper-based Lists Versus List Presented on a Head Mounted Display with Contextual Support (Page 281)
Sagerer, Gerhard
    Mediated Attention with Multimodal Augmented Reality (Page 245)
Sanchis, Albert
    Adaptation from Partially Supervised Handwritten Text Transcriptions (Page 289)
Schauerte, Boris
    Multi-Modal and Multi-Camera Attention in Smart Environments (Page 261)
Schermerhorn, Paul
    Dynamic Robot Autonomy: Investigating the Effects of Robot Decision-Making in a Human-Robot Team Task (Page 63)
Scheutz, Matthias
    Dynamic Robot Autonomy: Investigating the Effects of Robot Decision-Making in a Human-Robot Team Task (Page 63)
Serrano, Marcos
    Temporal Aspects of CARE-based Multimodal Fusion: From a Fusion Mechanism to Composition Components and WoZ Components (Page 177)
Serrano, Nicolás
    Adaptation from Partially Supervised Handwritten Text Transcriptions (Page 289)
Sezgin, Tevfik Metin
    Multimodal Inference for Driver-Vehicle Interaction (Page 193)
Shin, Minho
    Activity-aware ECG-based Patient Authentication for Remote Health Monitoring (Page 297)
Shiomi, Masahiro
    Multi-Modal Features for Real-Time Detection of Human-Robot Interaction Categories (Page 127)
Simpson, Brian D.
    Multi-Modal Communication (Page 229)
Sitaram, Ramchandrula
    Voice Key Board: Multimodal Indic Text Input (Page 313)
Sokhn, Maria
    WiiNote: Multimodal Application Facilitating Multi-User Photo Annotation Activity (Page 237)
Sriram, Janani C.
    Activity-aware ECG-based Patient Authentication for Remote Health Monitoring (Page 297)
Stiefelhagen, Rainer
    A Message from the Chairs
Surakka, Veikko
    Evaluating the Effect of Temporal Parameters for Vibrotactile Saltatory Patterns (Page 319)
Tellex, Stefanie
    Grounding Spatial Prepositions for Video Search (Page 253)
Thurau, Christian
    Multi-Modal and Multi-Camera Attention in Smart Environments (Page 261)
Thurlings, Marieke E.
    Navigation with a Passive Brain Based Interface (Page 225)
van Erp, Jan B. F.
    Navigation with a Passive Brain Based Interface (Page 225)
Vanderdonckt, Jean
    Fusion Engines for Multimodal Input: A Survey (Page 153)
    A Fusion Framework for Multimodal Interactive Applications (Page 161)
Varri, Chenna
    Recognizing Events with Temporal Random Forests (Page 293)
Verdie, Yannick
    MirrorTrack - Tracking with Reflection - Comparison with Top-Down Approach (Page 347)
Vernier, Frédéric
    RVDT: A Design Space for Multiple Input Devices, Multiple Views and Multiple Display Surfaces Combination (Page 269)
Vilar, Juan Miguel
    State, an Assisted Document Transcription System (Page 233)
Vishnoi, Nalini
    Guiding Hand: A Teaching Tool for Handwriting (Page 221)
Vogelgesang, Matthias
    Multimodal Integration of Natural Gaze Behavior for Intention Recognition During Object Manipulation (Page 199)
Vybornova, Olga
    A Fusion Framework for Multimodal Interactive Applications (Page 161)
Werkhoven, Peter J.
    Navigation with a Passive Brain Based Interface (Page 225)
Wilpon, Jay G.
    A Speech Mashup Framework for Multimodal Mobile Services (Page 71)
Wilson, Theresa
    Agreement Detection in Multiparty Conversation (Page 7)
Wren, Christopher
    A Message from the Chairs
Yamato, Junji
    Recognizing Communicative Facial Expressions for Discovering Interpersonal Emotions in Group Meetings (Page 99)
    Realtime Meeting Analysis and 3D Meeting Viewer Based on Omnidirectional Multimodal Sensors (Page 219)
Yannakakis, Georgios N.
    Learning from Preferences and Selected Multimodal Features of Players (Page 115)