Abstract: Session SP-20

SP-20.1

USING NON-WORD LEXICAL UNITS IN AUTOMATIC SPEECH UNDERSTANDING
Mikel Penagarikano,
German Bordel,
Amparo Varona,
Karmele Lopez de Ipina (Dpto. Electricidad y Electrónica, Universidad del País Vasco (UPV/EHU))
If the objective of a continuous automatic speech understanding system is not speech-to-text transcription, words are not strictly needed, and the use of alternative lexical units (LUs) provides a new degree of freedom for improving system performance. We therefore experimentally explore several methods for automatically extracting a set of LUs from a Spanish training corpus, and verify that the system can be improved in two ways: by reducing computational costs and by increasing recognition rates. Moreover, preliminary results indicate that, even when the system's target is speech-to-text transcription, using non-word units and post-processing the output to produce the corresponding word chain outperforms the word-based system.
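The abstract does not specify the extraction methods; as a purely illustrative sketch (all names hypothetical), one simple way to derive multiword lexical units from a training corpus is to greedily merge the most frequent adjacent word pair:

```python
from collections import Counter

def extract_lexical_units(corpus, n_merges=3, min_count=2):
    """Greedily merge the most frequent adjacent word pair into a
    single multiword unit (joined with '_'), repeating up to
    n_merges times. Illustrative sketch only, not the paper's method."""
    sents = [s.split() for s in corpus]
    for _ in range(n_merges):
        pairs = Counter()
        for s in sents:
            pairs.update(zip(s, s[1:]))           # count adjacent pairs
        if not pairs:
            break
        (a, b), count = pairs.most_common(1)[0]
        if count < min_count:
            break
        merged = a + "_" + b
        new_sents = []
        for s in sents:
            out, i = [], 0
            while i < len(s):
                if i + 1 < len(s) and s[i] == a and s[i + 1] == b:
                    out.append(merged)            # replace the pair
                    i += 2
                else:
                    out.append(s[i])
                    i += 1
            new_sents.append(out)
        sents = new_sents
    return sents
```

Repeated merges turn frequent word sequences such as greetings or set phrases into single units that a recognizer can model directly.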

SP-20.2

Message-Driven Speech Recognition and Topic-Word Extraction
Katsutoshi Ohtsuki (NTT Human Interface Laboratories),
Sadaoki Furui,
Atsushi Iwasaki,
Naoyuki Sakurai (Tokyo Institute of Technology)
This paper proposes a new formulation for speech recognition/understanding systems, in which the a posteriori probability of the message that a speaker intends to convey, given an observed acoustic sequence, is maximized. This extends the current criterion, which maximizes the probability of a word sequence. Among the various possible representations, we employ a co-occurrence score of words, measured by mutual information, as the conditional probability of a word sequence occurring in a given message. The word sequence hypotheses obtained with bigram and trigram language models are rescored using the co-occurrence score. Experimental results show that word accuracy is improved by this method. Topic-words, which represent the content of a speech signal, are then extracted from the speech recognition results based on the significance score of each word. When five topic-words are extracted for each broadcast-news article, 82.8% of them are correct on average. This paper also proposes a verbalization-dependent language model, which is useful for Japanese dictation systems.
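As an illustrative sketch of the co-occurrence idea (not the paper's exact estimation; all names hypothetical), pointwise mutual information between words that share a document can be tabulated, then averaged over the word pairs of a hypothesis to rescore n-best lists:

```python
import math
from collections import Counter
from itertools import combinations

def pmi_table(documents):
    """Pointwise mutual information of word pairs co-occurring in the
    same document, from document frequencies. Illustrative only."""
    n_docs = len(documents)
    word_df = Counter()   # documents containing each word
    pair_df = Counter()   # documents containing each word pair
    for doc in documents:
        words = set(doc.split())
        word_df.update(words)
        pair_df.update(frozenset(p) for p in combinations(sorted(words), 2))
    pmi = {}
    for pair, df in pair_df.items():
        a, b = tuple(pair)
        pmi[pair] = math.log((df * n_docs) / (word_df[a] * word_df[b]))
    return pmi

def cooccurrence_score(hypothesis, pmi):
    """Average PMI over distinct word pairs in a recognition
    hypothesis; higher scores favor topically coherent hypotheses."""
    words = sorted(set(hypothesis.split()))
    pairs = [frozenset(p) for p in combinations(words, 2)]
    if not pairs:
        return 0.0
    return sum(pmi.get(p, 0.0) for p in pairs) / len(pairs)
```

A hypothesis whose words tend to appear in the same articles then scores higher than one mixing unrelated words.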

SP-20.3

PROFER: Predictive, Robust Finite-State Parsing for Spoken Language
Edward C Kaiser (Center for Spoken Language Understanding, Oregon Graduate Institute),
Michael Johnston (Center for Human-Computer Communication, Oregon Graduate Institute),
Peter A Heeman (Center for Spoken Language Understanding, Oregon Graduate Institute)
The natural language processing component of a speech understanding system is commonly a robust semantic parser, implemented either as a chart-based transition network or as a generalized left-right (GLR) parser. In contrast, we are developing a robust semantic parser that is a single, predictive finite-state machine. Our approach is motivated by our belief that such a finite-state parser can ultimately provide an efficient vehicle for tightly integrating higher-level linguistic knowledge into speech recognition. We report on our development of this parser, with an example of its use and a description of how it compares to both finite-state predictors and chart-based semantic parsers, whose elements it combines.

SP-20.4

Usability Field-Test of a Spoken Data-Entry System
Marcello Federico,
Fabio Brugnara,
Roberto Gretter (ITC-Irst Istituto per la Ricerca Scientifica e Tecnologica)
This paper reports on the field-test of a speech-based data-entry
system developed as a follow-up to an EC-funded project. The
application domain is the entry of personnel absence records from
a huge historical paper file (about 100,000 records). The application
was required by the personnel office of a public administration. The
tested system proved both sufficiently simple to make a detailed
analysis feasible and sufficiently representative of the potential
of spoken data-entry.

SP-20.5

A Framework of Performance Evaluation and Error Analysis Methodology for Speech Understanding Systems
Bor-shen Lin (National Taiwan University),
Lin-shan Lee (National Taiwan University and Academia Sinica)
With improved speech understanding technology, many successful working systems have been developed. However, the high degree of complexity and the wide variety of design methodologies make performance evaluation and error analysis for such systems very difficult. Metrics for individual modules, such as word accuracy, spotting rate, language model coverage and slot accuracy, are often helpful, but based on such metrics alone it is difficult to select or tune the individual modules, or to determine what percentage of the understanding errors each module contributes.
In this paper, a new framework for performance evaluation and error analysis of speech understanding systems is proposed, based on comparison with the 'best-matched' references obtained from the word graphs with the target words and tags given. In this framework, all test utterances can be classified by error type, and various understanding metrics can be obtained accordingly. Error analysis approaches based on an error plane are then proposed, with which the sources of understanding errors (e.g. poor acoustic recognition, poor language model, search errors) can be identified for each utterance. Such a framework will be very helpful for the design and analysis of speech understanding systems.
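As a toy illustration of attributing an understanding error to a module (the paper's actual error-plane criteria are more detailed; this decision logic is hypothetical), one might classify each failed utterance by whether the best-matched reference appears in the word graph and whether it was the chosen path:

```python
def classify_error(understood_ok, ref_in_word_graph, ref_is_best_path):
    """Toy attribution of an understanding error to a module, in the
    spirit of the error analysis described in the abstract."""
    if understood_ok:
        return "correct"
    if not ref_in_word_graph:
        # The correct words were never hypothesized at all.
        return "acoustic recognition error"
    if ref_is_best_path:
        # Right words were chosen, but the meaning was misextracted.
        return "understanding/parsing error"
    # Correct words were in the graph but lost during search/scoring.
    return "language model or search error"
```

Aggregating these labels over a test set yields the per-module error percentages the framework is after.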

SP-20.6

Acoustic and syntactical modeling in the ATROS system
D. Llorens (Unitat Predepartamental d'Informatica, Universitat Jaume I, Castello, Spain.),
F. Casacuberta,
E. Segarra,
J.A. Sanchez (Dpto. Sistemas Informaticos y Computacion, Universidad Politecnica de Valencia, Valencia, Spain.),
P. Aibar (Unitat Predepartamental d'Informatica, Universitat Jaume I, Castello, Spain.),
M.J. Castro (Dpto. Sistemas Informaticos y Computacion, Universidad Politecnica de Valencia, Valencia, Spain.)
Current speech technology allows us to build efficient speech
recognition systems. However, learning the models for the knowledge
sources of a speech recognition system is not a closed problem. In
addition, low computational requirements are crucial to building
real-time systems.
ATROS is an automatic speech recognition system whose acoustic,
lexical, and syntactical models can be learnt automatically from
training data by using similar techniques. In this paper, an
improved version of ATROS which can deal with large smoothed
language models and with large vocabularies is presented. This
version supports acoustic and syntactical models trained with
advanced grammatical inference techniques. It also incorporates new
data structures and improved search algorithms to reduce the
computational requirements for decoding. The system has been tested
on a Spanish task of queries to a geographical database (with a
vocabulary of 1,208 words).

SP-20.7

TOWARDS A ROBUST REAL-TIME DECODER
Jason C Davenport,
Richard Schwartz,
Long Nguyen (BBN Technologies, GTE Internetworking)
In this paper we present several algorithms that speed
up our BBN BYBLOS decoder. We briefly describe the
techniques that we have used before this year. Then
we present new techniques that speed up the recognition
search by a factor of 10, with little effect on accuracy,
using a combination of Fast Gaussian Computation,
grammar spreading, and grammar caching within the
2-pass n-best paradigm. We also describe our decoder
metering strategy, which allows us to conveniently test
for search errors. Finally, we describe a grammar compression
technique that decreases the storage needed for each
additional n-gram to only 10 bits.
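The abstract does not describe the compression scheme itself; as a hedged sketch of one common ingredient of compact n-gram storage (not the BYBLOS technique), probabilities can be quantized to a few bits by mapping each log-probability onto a small uniform codebook:

```python
def quantize_logprobs(logprobs, n_bits=4):
    """Map each log-probability to the nearest of 2**n_bits codebook
    values spread uniformly between the min and max. Illustrative
    sketch of probability quantization only."""
    lo, hi = min(logprobs), max(logprobs)
    levels = 2 ** n_bits
    step = (hi - lo) / (levels - 1) if hi > lo else 1.0
    codes = [round((p - lo) / step) for p in logprobs]  # n_bits per value

    def decode(code):
        # Reconstruct the approximate log-probability from its code.
        return lo + code * step

    return codes, decode
```

With 4-bit probability codes, the remaining bit budget per n-gram would have to cover the word index, e.g. via delta-encoding of sorted successor lists.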

SP-20.8

A Statistical Text-To-Phone Function Using Ngrams and Rules
William M Fisher (National Institute of Standards and Technology)
Adopting concepts from statistical language modeling
and rule-based transformations can lead to effective
and efficient text-to-phone (TTP) functions. We
present here the methods and results of one such
effort, which yields a relatively compact and fast
set of TTP rules achieving 94.5% segmental phonemic
accuracy.
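As an illustrative sketch of combining transformation rules with per-letter defaults (the paper's rules are learned from data; the tables below are hypothetical), a TTP function can apply the longest matching letter rule at each position and fall back to a default letter-to-phone map:

```python
def text_to_phone(word, rules, default_map):
    """Tiny text-to-phone sketch: at each position, apply the longest
    matching letter-sequence rule; otherwise emit the default phones
    for the single letter. Rule tables here are hypothetical."""
    phones, i = [], 0
    max_len = max(map(len, rules), default=1)
    while i < len(word):
        for n in range(min(max_len, len(word) - i), 0, -1):
            chunk = word[i:i + n]
            if chunk in rules:
                phones.extend(rules[chunk])   # rule fires on this chunk
                i += n
                break
        else:
            phones.extend(default_map.get(word[i], []))  # fallback
            i += 1
    return phones
```

For example, a rule mapping the digraph "ph" to /f/ overrides the per-letter defaults wherever it matches.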