ICMI'08
Proceedings of the Tenth International Conference on Multimodal Interfaces
Chania, Crete, Greece
October 20-22, 2008

Author Index
Ahmaniemi, Teemu Tuomas
Perception of Dynamic Audiotactile Feedback to Gesture Input
(Page 85)
Alanís-Urquieta, Jose D.
AcceleSpell, a Gestural Interactive Game to Learn and Practice Finger Spelling.
(Page 189)
Anemüller, Jörn
The DIRAC AWEAR Audio-Visual Platform for Detection of Unexpected and Incongruent Events
(Page 289)
Araki, Shoko
A Realtime Multimodal System for Analyzing Group Meetings by Combining Face Pose Tracking and Speaker Diarization
(Page 257)
Arnaud, Elise
The CAVA corpus: Synchronised Stereoscopic and Binaural Datasets with Head Movements
(Page 109)
Detection and Localization of 3D Audio-Visual Objects Using Unsupervised Clustering
(Page 217)
Arthur, Alex
A High-Performance Dual-Wizard Infrastructure for Designing Speech, Pen, and Multimodal Interfaces
(Page 137)
Ba, Sileye
Predicting Two Facets of Social Verticality in Meetings from Five-Minute Time Slices and Nonverbal Cues
(Page 45)
Investigating Automatic Dominance Estimation in Groups From Visual Attention and Speaking Activity
(Page 233)
Bach, Jörg-Hendrik
The DIRAC AWEAR Audio-Visual Platform for Detection of Unexpected and Incongruent Events
(Page 289)
Badr, Ibrahim
The WAMI Toolkit for Developing, Deploying, and Evaluating Web-Accessible Multimodal Interfaces
(Page 141)
Bailly, Gilles
TactiMote: A Tactile Remote Control for Navigating in Long Lists
(Page 285)
Balchandran, Rajesh
A Multi-modal Spoken Dialog System for Interactive TV
(Page 191)
Bangalore, Srinivas
Robust Gesture Processing for Multimodal Interaction
(Page 225)
Barker, Jon
The CAVA corpus: Synchronised Stereoscopic and Binaural Datasets with Head Movements
(Page 109)
Barrow, Alastair
PHANTOM Prototype: Exploring the Potential for Learning with Multimodal Features in Dentistry
(Page 201)
Beskow, Jonas
Innovative interfaces in MonAMI: The Reminder
(Page 199)
Bigler, Stephanie
iGlasses: An Automatic Wearable Speech Supplement in Face-to-Face Communication and Classroom Situations
(Page 197)
Billinghurst, Mark
A Wizard of Oz Study for an AR Multimodal Interface
(Page 249)
Bourlard, Hervé
Social Signals, their Function, and Automatic Analysis: A Survey
(Page 61)
Brewster, Stephen
Crossmodal Congruence: The Look, Feel and Sound of Touchscreen Widgets
(Page 157)
Caminero, Javier
Embodied Conversational Agents for Voice-Biometric Interfaces
(Page 305)
Cappelletti, Alessandro
Multimodal Recognition of Personality Traits in Social Interactions
(Page 53)
Caputo, Barbara
The DIRAC AWEAR Audio-Visual Platform for Detection of Unexpected and Incongruent Events
(Page 289)
Carbonell, Noëlle
Effectiveness and Usability of an Online Help Agent Embodied as a Talking Head
(Page 17)
Carreira-Perpiñán, Miguel Á.
iGlasses: An Automatic Wearable Speech Supplement in Face-to-Face Communication and Classroom Situations
(Page 197)
Chou, Wu
Towards A Minimalist Multimodal Dialogue Framework Using Recursive MVC Pattern
(Page 117)
Choumane, Ali
Knowledge and Data Flow Architecture for Reference Processing in Multimodal Dialog Systems
(Page 105)
Christensen, Heidi
The CAVA corpus: Synchronised Stereoscopic and Binaural Datasets with Head Movements
(Page 109)
Chung, Pak
Interaction Techniques for the Analysis of Complex Data on High-Resolution Displays
(Page 21)
Clausen, Michael
Multimodal Presentation and Browsing of Music
(Page 205)
Cohen, Phil
Natural Interfaces in the Field: The Case of Pen and Paper
(Page 1)
A High-Performance Dual-Wizard Infrastructure for Designing Speech, Pen, and Multimodal Interfaces
(Page 137)
Coninx, Karin
Designing Context-Aware Multimodal Virtual Environments
(Page 129)
Cox, Margaret
PHANTOM Prototype: Exploring the Potential for Learning with Multimodal Features in Dentistry
(Page 201)
Damm, David
Multimodal Presentation and Browsing of Music
(Page 205)
De Boeck, Joan
Designing Context-Aware Multimodal Virtual Environments
(Page 129)
de Kok, Iwan
Context-based Recognition during Human Interactions: Automatic Feature Selection and Encoding Dictionary
(Page 181)
Devallez, Delphine
An Audio-Haptic Interface Based on Auditory Depth Cues
(Page 209)
Díaz, David
Embodied Conversational Agents for Voice-Biometric Interfaces
(Page 305)
Digalakis, Vassilis
Message from the Chairs
(Page 0)
Dines, John
Role Recognition in Multiparty Recordings using Social Affiliation Networks and Discrete Distributions
(Page 29)
Drettakis, George
Audiovisual 3D Rendering as a Tool for Multimodal Interfaces
(Page 203)
Drugman, Thomas
Dynamic Modality Weighting for Multi-Stream HMMs in Audio-Visual Speech Recognition
(Page 237)
Dutoit, Thierry
Dynamic Modality Weighting for Multi-Stream HMMs in Audio-Visual Speech Recognition
(Page 237)
Edlund, Jens
Innovative interfaces in MonAMI: The Reminder
(Page 199)
Ehrich, Roger
As Go the Feet…: On the Estimation of Attentional Focus from Stance
(Page 97)
el Kaliouby, Rana
Automated Sip Detection in Naturally-evoked Video
(Page 273)
Elsakay, Ethar Ibrahim
AcceleSpell, a Gestural Interactive Game to Learn and Practice Finger Spelling.
(Page 189)
Epstein, Mark E.
A Multi-modal Spoken Dialog System for Interactive TV
(Page 191)
Evreinova, Tatiana G.
Manipulating Trigonometric Expressions Encoded through Electro-Tactile Signals
(Page 3)
Fagel, Sascha
Evaluating Talking Heads for Smart Home Systems
(Page 81)
Favre, Sarah
Role Recognition in Multiparty Recordings using Social Affiliation Networks and Discrete Distributions
(Page 29)
Fernández-Pozo, Rubén
Embodied Conversational Agents for Voice-Biometric Interfaces
(Page 305)
Fogarty, James
VoiceLabel: Using Speech to Label Mobile Sensor Data
(Page 69)
Fontana, Federico
An Audio-Haptic Interface Based on Auditory Depth Cues
(Page 209)
Forbes, Florence
The CAVA corpus: Synchronised Stereoscopic and Binaural Datasets with Head Movements
(Page 109)
Detection and Localization of 3D Audio-Visual Objects Using Unsupervised Clustering
(Page 217)
Fremerey, Christian
Multimodal Presentation and Browsing of Music
(Page 205)
Fujimoto, Masakiyo
A Realtime Multimodal System for Analyzing Group Meetings by Combining Face Pose Tracking and Speaker Diarization
(Page 257)
Funakoshi, Kotaro
Smoothing Human-robot Speech Interactions by Using a Blinking-Light as Subtle Expression
(Page 293)
Gatica-Perez, Daniel
Predicting Two Facets of Social Verticality in Meetings from Five-Minute Time Slices and Nonverbal Cues
(Page 45)
Investigating Automatic Dominance Estimation in Groups From Visual Attention and Speaking Activity
(Page 233)
Giuliani, Manuel
MultiML - A General Purpose Representation Language for Multimodal Human Utterances
(Page 165)
Gjermani, Teodor
Innovative interfaces in MonAMI: The Reminder
(Page 199)
Granström, Björn
Innovative interfaces in MonAMI: The Reminder
(Page 199)
Gratch, Jonathan
Context-based Recognition during Human Interactions: Automatic Feature Selection and Encoding Dictionary
(Page 181)
Gruenstein, Alexander
The WAMI Toolkit for Developing, Deploying, and Evaluating Web-Accessible Multimodal Interfaces
(Page 141)
Gurban, Mihai
Dynamic Modality Weighting for Multi-Stream HMMs in Audio-Visual Speech Recognition
(Page 237)
Gustafson, Joakim
Innovative interfaces in MonAMI: The Reminder
(Page 199)
Hansard, Miles
The CAVA corpus: Synchronised Stereoscopic and Binaural Datasets with Head Movements
(Page 109)
Detection and Localization of 3D Audio-Visual Objects Using Unsupervised Clustering
(Page 217)
Harada, Susumu
VoiceLabel: Using Speech to Label Mobile Sensor Data
(Page 69)
Harwin, William
PHANTOM Prototype: Exploring the Potential for Learning with Multimodal Features in Dentistry
(Page 201)
Havlena, Michal
The DIRAC AWEAR Audio-Visual Platform for Detection of Unexpected and Incongruent Events
(Page 289)
Heinrich, Martin
A Realtime Multimodal System for Analyzing Group Meetings by Combining Face Pose Tracking and Speaker Diarization
(Page 257)
Hermansky, Hynek
The DIRAC AWEAR Audio-Visual Platform for Detection of Unexpected and Incongruent Events
(Page 289)
Hernandez-Rebollar, José L.
AcceleSpell, a Gestural Interactive Game to Learn and Practice Finger Spelling.
(Page 189)
Hernández-Trapote, Álvaro
Embodied Conversational Agents for Voice-Biometric Interfaces
(Page 305)
Hoggan, Eve
Crossmodal Congruence: The Look, Feel and Sound of Touchscreen Widgets
(Page 157)
Holveck, Bertrand
The CAVA corpus: Synchronised Stereoscopic and Binaural Datasets with Head Movements
(Page 109)
Horaud, Radu
The CAVA corpus: Synchronised Stereoscopic and Binaural Datasets with Head Movements
(Page 109)
Detection and Localization of 3D Audio-Visual Objects Using Unsupervised Clustering
(Page 217)
Hung, Hayley
Investigating Automatic Dominance Estimation in Groups From Visual Attention and Speaking Activity
(Page 233)
Ishizuka, Kentaro
A Realtime Multimodal System for Analyzing Group Meetings by Combining Face Pose Tracking and Speaker Diarization
(Page 257)
Jayagopi, Dinesh Babu
Investigating Automatic Dominance Estimation in Groups From Visual Attention and Speaking Activity
(Page 233)
Predicting Two Facets of Social Verticality in Meetings from Five-Minute Time Slices and Nonverbal Cues
(Page 45)
Jie, Luo
The DIRAC AWEAR Audio-Visual Platform for Detection of Unexpected and Incongruent Events
(Page 289)
Johnston, Michael
Robust Gesture Processing for Multimodal Interaction
(Page 225)
Jonsson, Oskar
Innovative interfaces in MonAMI: The Reminder
(Page 199)
Juras, David
A Three-dimensional Characterization Space of Software Components for Rapidly Developing Multimodal Interfaces
(Page 149)
Multimodal Slideshow: Demonstration of the OpenInterface Interaction Development Environment
(Page 193)
Kaaresoja, Topi
Crossmodal Congruence: The Look, Feel and Sound of Touchscreen Widgets
(Page 157)
Feel-Good Touch: Finding the Most Pleasant Tactile Feedback for a Mobile Touch Screen Button
(Page 297)
Katsurada, Kouichi
A Browser-based Multimodal Interaction System
(Page 195)
Kayser, Hendrik
The DIRAC AWEAR Audio-Visual Platform for Detection of Unexpected and Incongruent Events
(Page 289)
Khalidov, Vasil
The CAVA corpus: Synchronised Stereoscopic and Binaural Datasets with Head Movements
(Page 109)
Detection and Localization of 3D Audio-Visual Objects Using Unsupervised Clustering
(Page 217)
Kirihata, Teruki
A Browser-based Multimodal Interaction System
(Page 195)
Kitamura, Yasuhiko
Smoothing Human-robot Speech Interactions by Using a Blinking-Light as Subtle Expression
(Page 293)
Kitaoka, Norihide
An Integrative Recognition Method for Speech and Gestures
(Page 93)
Knoll, Alois
MultiML - A General Purpose Representation Language for Multimodal Human Utterances
(Page 165)
Kobayashi, Kazuki
Smoothing Human-robot Speech Interactions by Using a Blinking-Light as Subtle Expression
(Page 293)
Koskinen, Emilia
Feel-Good Touch: Finding the Most Pleasant Tactile Feedback for a Mobile Touch Screen Button
(Page 297)
Kudo, Masashi
A Browser-based Multimodal Interaction System
(Page 195)
Kühnel, Christine
Evaluating Talking Heads for Smart Home Systems
(Page 81)
Kurth, Frank
Multimodal Presentation and Browsing of Music
(Page 205)
Laitinen, Pauli
Crossmodal Congruence: The Look, Feel and Sound of Touchscreen Widgets
(Page 157)
Feel-Good Touch: Finding the Most Pleasant Tactile Feedback for a Mobile Touch Screen Button
(Page 297)
Landay, James A.
VoiceLabel: Using Speech to Label Mobile Sensor Data
(Page 69)
Lantz, Vuokko
Perception of Dynamic Audiotactile Feedback to Gesture Input
(Page 85)
Lecolinet, Eric
TactiMote: A Tactile Remote Control for Navigating in Long Lists
(Page 285)
Lee, Minkyung
A Wizard of Oz Study for an AR Multimodal Interface
(Page 249)
Leibe, Bastian
The DIRAC AWEAR Audio-Visual Platform for Detection of Unexpected and Incongruent Events
(Page 289)
Lemmelä, Saija
Designing and Evaluating Multimodal Interaction for Mobile Contexts
(Page 265)
Lepri, Bruno
Multimodal Recognition of Personality Traits in Social Interactions
(Page 53)
Lester, Jonathan
VoiceLabel: Using Speech to Label Mobile Sensor Data
(Page 69)
Li, Li
Towards A Minimalist Multimodal Dialogue Framework Using Recursive MVC Pattern
(Page 117)
Lockhart, Thurmon
As Go the Feet…: On the Estimation of Attentional Focus from Stance
(Page 97)
López-Mencía, Beatriz
Embodied Conversational Agents for Voice-Biometric Interfaces
(Page 305)
Lu, Yan-Chen
The CAVA corpus: Synchronised Stereoscopic and Binaural Datasets with Head Movements
(Page 109)
Lylykangas, Jani
Perception of Low-Amplitude Haptic Stimuli when Biking
(Page 281)
Mäkelä, Kaj
Designing and Evaluating Multimodal Interaction for Mobile Contexts
(Page 265)
Mana, Nadia
Multimodal Recognition of Personality Traits in Social Interactions
(Page 53)
Marila, Juha
Perception of Dynamic Audiotactile Feedback to Gesture Input
(Page 85)
Massaro, Dominic W.
iGlasses: An Automatic Wearable Speech Supplement in Face-to-Face Communication and Classroom Situations
(Page 197)
Mathieu, Hervé
The CAVA corpus: Synchronised Stereoscopic and Binaural Datasets with Head Movements
(Page 109)
McGraw, Ian
The WAMI Toolkit for Developing, Deploying, and Evaluating Web-Accessible Multimodal Interfaces
(Page 141)
Merrill, David J.
iGlasses: An Automatic Wearable Speech Supplement in Face-to-Face Communication and Classroom Situations
(Page 197)
Mikhail, Mina
Automated Sip Detection in Naturally-evoked Video
(Page 273)
Miki, Madoka
An Integrative Recognition Method for Speech and Gestures
(Page 93)
Miller, Chreston
Interaction Techniques for the Analysis of Complex Data on High-Resolution Displays
(Page 21)
Miyajima, Chiyomi
An Integrative Recognition Method for Speech and Gestures
(Page 93)
Möller, Sebastian
Evaluating Talking Heads for Smart Home Systems
(Page 81)
Morency, Louis-Philippe
Context-based Recognition during Human Interactions: Automatic Feature Selection and Encoding Dictionary
(Page 181)
Motlicek, Petr
The DIRAC AWEAR Audio-Visual Platform for Detection of Unexpected and Incongruent Events
(Page 289)
Mouret, Gérard
TactiMote: A Tactile Remote Control for Navigating in Long Lists
(Page 285)
Müller, Meinard
Multimodal Presentation and Browsing of Music
(Page 205)
Nakano, Mikio
Smoothing Human-robot Speech Interactions by Using a Blinking-Light as Subtle Expression
(Page 293)
Narasimha, Ramya
The CAVA corpus: Synchronised Stereoscopic and Binaural Datasets with Head Movements
(Page 109)
Nigay, Laurence
A Three-dimensional Characterization Space of Software Components for Rapidly Developing Multimodal Interfaces
(Page 149)
Multimodal Slideshow: Demonstration of the OpenInterface Interaction Development Environment
(Page 193)
Nitta, Tsuneo
A Browser-based Multimodal Interaction System
(Page 195)
Odobez, Jean-Marc
Predicting Two Facets of Social Verticality in Meetings from Five-Minute Time Slices and Nonverbal Cues
(Page 45)
Investigating Automatic Dominance Estimation in Groups From Visual Attention and Speaking Activity
(Page 233)
Ortega, Michael
Multimodal Slideshow: Demonstration of the OpenInterface Interaction Development Environment
(Page 193)
Otsuka, Kazuhiro
A Realtime Multimodal System for Analyzing Group Meetings by Combining Face Pose Tracking and Speaker Diarization
(Page 257)
Oviatt, Sharon
A High-Performance Dual-Wizard Infrastructure for Designing Speech, Pen, and Multimodal Interfaces
(Page 137)
Pajdla, Tomas
The DIRAC AWEAR Audio-Visual Platform for Detection of Unexpected and Incongruent Events
(Page 289)
Pakkanen, Toni
Perception of Low-Amplitude Haptic Stimuli when Biking
(Page 281)
Pantic, Maja
Audiovisual Laughter Detection Based on Temporal Features
(Page 37)
Social Signals, their Function, and Automatic Analysis: A Survey
(Page 61)
Patel, Kayur
VoiceLabel: Using Speech to Label Mobile Sensor Data
(Page 69)
Pavel, Misha
The DIRAC AWEAR Audio-Visual Platform for Detection of Unexpected and Incongruent Events
(Page 289)
Pelé, Danielle
Effectiveness and Usability of an Online Help Agent Embodied as a Talking Head
(Page 17)
Pentland, Alex
Social Signals, their Function, and Automatic Analysis: A Survey
(Page 61)
Perakakis, Manolis
Multimodal System Evaluation using Modality Efficiency and Synergy Metrics
(Page 9)
Perlman, Marcus
iGlasses: An Automatic Wearable Speech Supplement in Face-to-Face Communication and Classroom Situations
(Page 197)
Petridis, Stavros
Audiovisual Laughter Detection Based on Temporal Features
(Page 37)
Pfalzgraf, Alexander
The BabbleTunes System - Talk to Your iPod!
(Page 77)
Pfleger, Norbert
The BabbleTunes System - Talk to Your iPod!
(Page 77)
Pianesi, Fabio
Multimodal Recognition of Personality Traits in Social Interactions
(Page 53)
Piazza, Elise
iGlasses: An Automatic Wearable Speech Supplement in Face-to-Face Communication and Classroom Situations
(Page 197)
Potamianos, Alexandros
Message from the Chairs
(Page 0)
Multimodal System Evaluation using Modality Efficiency and Synergy Metrics
(Page 9)
Potamianos, Gerasimos
A Multi-modal Spoken Dialog System for Interactive TV
(Page 191)
Quek, Francis
Interaction Techniques for the Analysis of Complex Data on High-Resolution Displays
(Page 21)
As Go the Feet…: On the Estimation of Attentional Focus from Stance
(Page 97)
Raisamo, Jukka
Perception of Low-Amplitude Haptic Stimuli when Biking
(Page 281)
Raisamo, Roope
Perception of Low-Amplitude Haptic Stimuli when Biking
(Page 281)
Rantala, Jussi
Perception of Low-Amplitude Haptic Stimuli when Biking
(Page 281)
Ratzka, Andreas
Explorative Studies on Multimodal Interaction in a PDA- and Desktop-based Scenario
(Page 121)
Raymaekers, Chris
Designing Context-Aware Multimodal Virtual Environments
(Page 129)
Robinson, Ashley
Interaction Techniques for the Analysis of Complex Data on High-Resolution Displays
(Page 21)
Rocchesso, Davide
An Audio-Haptic Interface Based on Auditory Depth Cues
(Page 209)
Salamin, Hugues
Role Recognition in Multiparty Recordings using Social Affiliation Networks and Discrete Distributions
(Page 29)
Salminen, Katri
Perception of Low-Amplitude Haptic Stimuli when Biking
(Page 281)
San Diego, Jonathan P.
PHANTOM Prototype: Exploring the Potential for Learning with Multimodal Features in Dentistry
(Page 201)
Saponas, T. Scott
VoiceLabel: Using Speech to Label Mobile Sensor Data
(Page 69)
Schehl, Jan
The BabbleTunes System - Talk to Your iPod!
(Page 77)
Seredi, Ladislav
A Multi-modal Spoken Dialog System for Interactive TV
(Page 191)
Serrano, Marcos
A Three-dimensional Characterization Space of Software Components for Rapidly Developing Multimodal Interfaces
(Page 149)
Multimodal Slideshow: Demonstration of the OpenInterface Interaction Development Environment
(Page 193)
Simonin, Jérôme
Effectiveness and Usability of an Online Help Agent Embodied as a Talking Head
(Page 17)
Siroux, Jacques
Knowledge and Data Flow Architecture for Reference Processing in Multimodal Dialog Systems
(Page 105)
Skantze, Gabriel
Innovative interfaces in MonAMI: The Reminder
(Page 199)
Steigner, Jochen
The BabbleTunes System - Talk to Your iPod!
(Page 77)
Sterling, Cass
iGlasses: An Automatic Wearable Speech Supplement in Face-to-Face Communication and Classroom Situations
(Page 197)
Stiefelhagen, Rainer
Deducing the Visual Focus of Attention from Head Pose Estimation in Dynamic Multi-View Meeting Scenarios
(Page 173)
Surakka, Veikko
Perception of Low-Amplitude Haptic Stimuli when Biking
(Page 281)
Swindells, Colin
A High-Performance Dual-Wizard Infrastructure for Designing Speech, Pen, and Multimodal Interfaces
(Page 137)
Tahir, Muhammad
TactiMote: A Tactile Remote Control for Navigating in Long Lists
(Page 285)
Taillant, Elise
The CAVA corpus: Synchronised Stereoscopic and Binaural Datasets with Head Movements
(Page 109)
Takada, Junki
A Browser-based Multimodal Interaction System
(Page 195)
Nishino, Takanori
An Integrative Recognition Method for Speech and Gestures
(Page 93)
Takeda, Kazuya
An Integrative Recognition Method for Speech and Gestures
(Page 93)
Thiran, Jean-Philippe
Dynamic Modality Weighting for Multi-Stream HMMs in Audio-Visual Speech Recognition
(Page 237)
Tobiasson, Helena
Innovative interfaces in MonAMI: The Reminder
(Page 199)
Torii, Akihiko
The DIRAC AWEAR Audio-Visual Platform for Detection of Unexpected and Incongruent Events
(Page 289)
Trendafilov, Dari
Designing and Evaluating Multimodal Interaction for Mobile Contexts
(Page 265)
Tsujino, Hiroshi
Smoothing Human-robot Speech Interactions by Using a Blinking-Light as Subtle Expression
(Page 293)
Turk, Matthew
Message from the Chairs
(Page 0)
Van Gool, Luc
The DIRAC AWEAR Audio-Visual Platform for Detection of Unexpected and Incongruent Events
(Page 289)
Vanacken, Lode
Designing Context-Aware Multimodal Virtual Environments
(Page 129)
Vertegaal, Roel
A Fitts' Law Comparison of Eye Tracking and Manual Input in the Selection of Visual Targets
(Page 241)
Vetek, Akos
Designing and Evaluating Multimodal Interaction for Mobile Contexts
(Page 265)
Vinciarelli, Alessandro
Role Recognition in Multiparty Recordings using Social Affiliation Networks and Discrete Distributions
(Page 29)
Social Signals, their Function, and Automatic Analysis: A Survey
(Page 61)
Voit, Michael
Deducing the Visual Focus of Attention from Head Pose Estimation in Dynamic Multi-View Meeting Scenarios
(Page 173)
Wang, Rongrong
Interaction Techniques for the Analysis of Complex Data on High-Resolution Displays
(Page 21)
Wechsung, Ina
Evaluating Talking Heads for Smart Home Systems
(Page 81)
Weiss, Benjamin
Evaluating Talking Heads for Smart Home Systems
(Page 81)
Wobbrock, Jacob O.
VoiceLabel: Using Speech to Label Mobile Sensor Data
(Page 69)
Yamada, Seiji
Smoothing Human-robot Speech Interactions by Using a Blinking-Light as Subtle Expression
(Page 293)
Yamato, Junji
A Realtime Multimodal System for Analyzing Group Meetings by Combining Face Pose Tracking and Speaker Diarization
(Page 257)
Zancanaro, Massimo
Multimodal Recognition of Personality Traits in Social Interactions
(Page 53)
Zweig, Alon
The DIRAC AWEAR Audio-Visual Platform for Detection of Unexpected and Incongruent Events
(Page 289)