Author Index
Abou Khaled, Omar
  WiiNote: Multimodal Application Facilitating Multi-User Photo Annotation Activity (Page 237)
Ahmaniemi, Teemu Tuomas
  Augmented Reality Target Finding Based on Tactile Cues (Page 335)
Ajaj, Rami
  RVDT: A Design Space for Multiple Input Devices, Multiple Views and Multiple Display Surfaces Combination (Page 269)
Ajmera, Rahul
  Voice Key Board: Multimodal Indic Text Input (Page 313)
Alabau, Vicent
  A Multimodal Predictive-Interactive Application for Computer Assisted Transcription and Translation (Page 227)
Araki, Shoko
  A Speaker Diarization Method Based on the Probabilistic Fusion of Audio-Visual Location Information (Page 55)
  Realtime Meeting Analysis and 3D Meeting Viewer Based on Omnidirectional Multimodal Sensors (Page 219)
Bader, Thomas
  Multimodal Integration of Natural Gaze Behavior for Intention Recognition During Object Manipulation (Page 199)
Baldwin, Tyler
  Communicative Gestures in Coreference Identification in Multiparty Meetings (Page 211)
Bali, Kalika
  Voice Key Board: Multimodal Indic Text Input (Page 313)
Baumann, Hannes
  Visual Based Picking Supported by Context Awareness: Comparing Picking Performance Using Paper-based Lists Versus List Presented on a Head Mounted Display with Contextual Support (Page 281)
Becker, Joffrey
  Demonstration - First Steps in Emotional Expression of the Humanoid Robot Nao (Page 235)
Bohus, Dan
  Dialog in the Open World: Platform and Applications (Page 31)
Borland, John
  Modeling Culturally Authentic Style Shifting with Virtual Peers (Page 135)
Boudier, Céline
  Demonstration - First Steps in Emotional Expression of the Humanoid Robot Nao (Page 235)
Bourlard, Hervé
  Speaker Change Detection with Privacy-Preserving Audio Cues (Page 343)
Breazeal, Cynthia
  Living Better with Robots (Page 1)
Brewster, Stephen A.
  Head-up Interaction: Can we break our addiction to the screen and keyboard? (Page 151)
  Mapping Information to Audio and Tactile Icons (Page 327)
Brouwer, Anne-Marie M.
  Navigation with a Passive Brain Based Interface (Page 225)
Brungart, Douglas S.
  Multi-Modal Communication (Page 229)
Carrino, Stefano
  WiiNote: Multimodal Application Facilitating Multi-User Photo Annotation Activity (Page 237)
Cassell, Justine
  Modeling Culturally Authentic Style Shifting with Virtual Peers (Page 135)
Castellano, Ginevra
  Detecting User Engagement with a Robot Companion Using Task and Social Interaction-based Features (Page 119)
Chadutaud, Philippe-Emmanuel
  Multi-Modal Features for Real-Time Detection of Human-Robot Interaction Categories (Page 127)
Chai, Joyce Y.
  Between Linguistic Attention and Gaze Fixations in Multimodal Conversational Interfaces (Page 143)
  Communicative Gestures in Coreference Identification in Multiparty Meetings (Page 211)
Cheamanunkul, Sunsern
  Detecting, Tracking and Interacting with People in a Public Space (Page 79)
Chen, Lei
  Multimodal Floor Control Shift Detection (Page 15)
Choudhury, Tanzeem
  Activity-aware ECG-based Patient Authentication for Remote Health Monitoring (Page 297)
Cooke, Neil J.
  Cache-based Language Model Adaptation Using Visual Attention for ASR in Meeting Scenarios (Page 87)
Crowley, James
  A Message from the Chairs
Davies, Ian
  Multimodal Inference for Driver-Vehicle Interaction (Page 193)
de Kok, Iwan
  Multimodal End-of-Turn Prediction in Multi-Party Meetings (Page 91)
Demirdjian, David
  Recognizing Events with Temporal Random Forests (Page 293)
Dey, Prasenjit
  Voice Key Board: Multimodal Indic Text Input (Page 313)
Di Fabbrizio, Giuseppe
  A Speech Mashup Framework for Multimodal Mobile Services (Page 71)
Dierker, Angelika
  Mediated Attention with Multimodal Augmented Reality (Page 245)
Dumas, Bruno
  Benchmarking Fusion Engines of Multimodal Interactive Systems (Page 169)
  HephaisTK: A Toolkit for Rapid Prototyping of Multimodal Interfaces (Page 231)
Duric, Zoran
  Guiding Hand: A Teaching Tool for Handwriting (Page 221)
Ettinger, Evan
  Detecting, Tracking and Interacting with People in a Public Space (Page 79)
Fang, Bing
  MirrorTrack - Tracking with Reflection - Comparison with Top-Down Approach (Page 347)
Fang, Rui
  Between Linguistic Attention and Gaze Fixations in Multimodal Conversational Interfaces (Page 143)
Farrahi, Katayoun
  Learning and Predicting Multimodal Daily Life Patterns from Cell Phones (Page 277)
Fasel, Ian R.
  Multi-Modal Features for Real-Time Detection of Human-Robot Interaction Categories (Page 127)
Ferreira, Fernanda
  Between Linguistic Attention and Gaze Fixations in Multimodal Conversational Interfaces (Page 143)
Fink, Gernot A.
  Multi-Modal and Multi-Camera Attention in Smart Environments (Page 261)
Finomore, Victor S.
  Multi-Modal Communication (Page 229)
Freund, Yoav
  Detecting, Tracking and Interacting with People in a Public Space (Page 79)
Fujimoto, Masakiyo
  A Speaker Diarization Method Based on the Probabilistic Fusion of Audio-Visual Location Information (Page 55)
  Realtime Meeting Analysis and 3D Meeting Viewer Based on Omnidirectional Multimodal Sensors (Page 219)
Gatica-Perez, Daniel
  A Message from the Chairs
  Discovering Group Nonverbal Conversational Patterns with Topics (Page 3)
  Learning and Predicting Multimodal Daily Life Patterns from Cell Phones (Page 277)
  Speaker Change Detection with Privacy-Preserving Audio Cues (Page 343)
Geraghty, Kathleen
  Modeling Culturally Authentic Style Shifting with Virtual Peers (Page 135)
Gerber, Naomi Lynn
  Guiding Hand: A Teaching Tool for Handwriting (Page 221)
Germesin, Sebastian
  Agreement Detection in Multiparty Conversation (Page 7)
Gonzalez, Berto
  Modeling Culturally Authentic Style Shifting with Virtual Peers (Page 135)
Gunes, Hatice
  Static vs. Dynamic Modeling of Human Nonverbal Behavior from Multiple Cues and Modalities (Page 23)
Hagita, Norihiro
  Multi-Modal Features for Real-Time Detection of Human-Robot Interaction Categories (Page 127)
Hanheide, Marc
  Mediated Attention with Multimodal Augmented Reality (Page 245)
Harper, Mary P.
  Multimodal Floor Control Shift Detection (Page 15)
Hermann, Thomas
  Mediated Attention with Multimodal Augmented Reality (Page 245)
Heylen, Dirk
  Multimodal End-of-Turn Prediction in Multi-Party Meetings (Page 91)
Hoggan, Eve
  Mapping Information to Audio and Tactile Icons (Page 327)
Horvitz, Eric
  Dialog in the Open World: Platform and Applications (Page 31)
Iben, Hendrik
  Visual Based Picking Supported by Context Awareness: Comparing Picking Performance Using Paper-based Lists Versus List Presented on a Head Mounted Display with Contextual Support (Page 281)
Ingold, Rolf
  Benchmarking Fusion Engines of Multimodal Interactive Systems (Page 169)
  HephaisTK: A Toolkit for Rapid Prototyping of Multimodal Interfaces (Page 231)
Ishiguro, Hiroshi
  Multi-Modal Features for Real-Time Detection of Human-Robot Interaction Categories (Page 127)
Ishizuka, Kentaro
  A Speaker Diarization Method Based on the Probabilistic Fusion of Audio-Visual Location Information (Page 55)
  Realtime Meeting Analysis and 3D Meeting Viewer Based on Omnidirectional Multimodal Sensors (Page 219)
Ivanov, Yuri
  A Message from the Chairs
Jacobsen, Matt
  Detecting, Tracking and Interacting with People in a Public Space (Page 79)
Jacquemin, Christian
  RVDT: A Design Space for Multiple Input Devices, Multiple Views and Multiple Display Surfaces Combination (Page 269)
Jayagopi, Dinesh Babu
  Discovering Group Nonverbal Conversational Patterns with Topics (Page 3)
Johnston, Michael
  A Message from the Chairs
  Building Multimodal Applications with EMMA (Page 47)
Juan, Alfons
  Adaptation from Partially Supervised Handwritten Text Transcriptions (Page 289)
Kaltwang, Sebastian
  Static vs. Dynamic Modeling of Human Nonverbal Behavior from Multiple Cues and Modalities (Page 23)
Kanda, Takayuki
  Multi-Modal Features for Real-Time Detection of Human-Robot Interaction Categories (Page 127)
Kane, Bridget
  Classification of Patient Case Discussions Through Analysis of Vocalisation Graphs (Page 107)
Kannetis, Theofanis
  Towards Adapting Fantasy, Curiosity and Challenge in Multimodal Dialogue Systems for Preschoolers (Page 39)
Kaplan, Frederic
  Are Gesture-based Interfaces the Future of Human Computer Interaction? (Page 239)
Kaski, Samuel
  GaZIR: Gaze-based Zooming Interface for Image Retrieval (Page 305)
Kelly, Daniel
  A Framework for Continuous Multimodal Sign Language Recognition (Page 351)
Kilgour, Jonathan
  A Multimedia Retrieval System Using Speech Input (Page 223)
Kirchhoff, Katrin
  Communicative Gestures in Coreference Identification in Multiparty Meetings (Page 211)
Klami, Arto
  GaZIR: Gaze-based Zooming Interface for Image Retrieval (Page 305)
Klaus, Edmund
  Multimodal Integration of Natural Gaze Behavior for Intention Recognition During Object Manipulation (Page 199)
Klug, Tobias
  Visual Based Picking Supported by Context Awareness: Comparing Picking Performance Using Paper-based Lists Versus List Presented on a Head Mounted Display with Contextual Support (Page 281)
Kotz, David
  Activity-aware ECG-based Patient Authentication for Remote Health Monitoring (Page 297)
Kozma, László
  GaZIR: Gaze-based Zooming Interface for Image Retrieval (Page 305)
Kumano, Shiro
  Recognizing Communicative Facial Expressions for Discovering Interpersonal Emotions in Group Meetings (Page 99)