This specification defines a JavaScript API to enable web developers to incorporate speech recognition and synthesis into their web pages. It enables developers to use scripting to generate text-to-speech output and to use speech recognition as an input for forms, continuous dictation and control. The JavaScript API allows web pages to control activation and timing and to handle results and alternatives.
It is a fully-functional subset of the specification proposed in the HTML Speech Incubator Group Final Report [1]. Specifically, this subset excludes the underlying transport protocol and the proposed additions to HTML markup, and it defines a simplified subset of the JavaScript API. This subset supports the majority of the use cases and sample code in the Incubator Group Final Report. It does not preclude future standardization of additions to the markup, API, or underlying transport protocols, and indeed the Incubator Report defines a potential roadmap for such future work.
This document is an API proposal from Google Inc. to the Web Applications (WEBAPPS) Working Group.
All feedback is welcome.
No working group is yet responsible for this specification. This is just an informal proposal at this time.
All diagrams, examples, and notes in this specification are non-normative, as are all sections explicitly marked non-normative. Everything else in this specification is normative.
The key words "MUST", "MUST NOT", "REQUIRED", "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in the normative parts of this document are to be interpreted as described in RFC2119. For readability, these words do not appear in all uppercase letters in this specification. [RFC2119]
Requirements phrased in the imperative as part of algorithms (such as "strip any leading space characters" or "return false and abort these steps") are to be interpreted with the meaning of the key word ("must", "should", "may", etc) used in introducing the algorithm.
Conformance requirements phrased as algorithms or specific steps may be implemented in any manner, so long as the end result is equivalent. (In particular, the algorithms defined in this specification are intended to be easy to follow, and not intended to be performant.)
User agents may impose implementation-specific limits on otherwise unconstrained inputs, e.g. to prevent denial of service attacks, to guard against running out of memory, or to work around platform-specific limitations.
Implementations that use ECMAScript to implement the APIs defined in this specification must implement them in a manner consistent with the ECMAScript Bindings defined in the Web IDL specification, as this specification uses that specification's terminology. [WEBIDL]
This section is non-normative.
The JavaScript Speech API aims to enable web developers to provide, in a web browser, speech-input and text-to-speech output features that are typically not available when using standard speech-recognition or screen-reader software. The API itself is agnostic of the underlying speech recognition and synthesis implementation and can support both server-based and client-based/embedded recognition and synthesis. The API is designed to enable both brief (one-shot) speech input and continuous speech input. Speech recognition results are provided to the web page as a list of hypotheses, along with other relevant information for each hypothesis.
This specification is a subset of the API defined in the HTML Speech Incubator Group Final Report. That report is entirely informative since it is not a standards track document. This document is intended to be the basis of a standards track document, and therefore defines portions of that report to be normative. All other portions of that report may be considered informative with regards to this document, and provide an informative background to this document.
This section is non-normative.
This specification supports the following use cases, as defined in Section 4 of the Incubator Report.
To keep the API to a minimum, this specification does not directly support the following use cases. This does not preclude adding support for these as future API enhancements, and indeed the Incubator report provides a roadmap for doing so.
Note that for many usages and implementations, it is possible to avoid the need for Rerecognition by using a larger grammar, or by combining multiple grammars — both of these techniques are supported in this specification.
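As an illustration of the second technique, a page can attach several grammars to a single recognition up front so that one recognition pass covers all of them. The following sketch is illustrative only; the grammar URIs and the weight values are hypothetical, and weight semantics are described with the SpeechGrammar object later in this document.
<script type="text/javascript">
  // Sketch: combine two hypothetical grammars on one recognizer so a single
  // recognition pass covers both, instead of rerecognizing with a second grammar.
  var recognition = new SpeechRecognition();
  recognition.grammars = new SpeechGrammarList();
  recognition.grammars.addFromUri("http://www.example.com/commands.grxml", 0.7); // hypothetical URI
  recognition.grammars.addFromUri("http://www.example.com/cities.grxml", 0.3);   // hypothetical URI
  recognition.onresult = function(event) {
    // A single result may match either grammar.
    console.log(event.result[0].transcript);
  };
</script>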
This section is non-normative.
This section is normative.
The speech recognition interface is the scripted web API for controlling a given recognition.
[Constructor]
interface SpeechRecognition : EventTarget {
    // recognition parameters
    attribute SpeechGrammarList grammars;
    attribute DOMString lang;
    attribute boolean continuous;

    // methods to drive the speech interaction
    void start();
    void stop();
    void abort();

    // event methods
    attribute Function onaudiostart;
    attribute Function onsoundstart;
    attribute Function onspeechstart;
    attribute Function onspeechend;
    attribute Function onsoundend;
    attribute Function onaudioend;
    attribute Function onresult;
    attribute Function onnomatch;
    attribute Function onresultdeleted;
    attribute Function onerror;
    attribute Function onstart;
    attribute Function onend;
};

interface SpeechRecognitionError {
    const unsigned short OTHER = 0;
    const unsigned short NO_SPEECH = 1;
    const unsigned short ABORTED = 2;
    const unsigned short AUDIO_CAPTURE = 3;
    const unsigned short NETWORK = 4;
    const unsigned short NOT_ALLOWED = 5;
    const unsigned short SERVICE_NOT_ALLOWED = 6;
    const unsigned short BAD_GRAMMAR = 7;
    const unsigned short LANGUAGE_NOT_SUPPORTED = 8;

    readonly attribute unsigned short code;
    readonly attribute DOMString message;
};

// Item in N-best list
interface SpeechRecognitionAlternative {
    readonly attribute DOMString transcript;
    readonly attribute float confidence;
    readonly attribute any interpretation;
};

// A complete one-shot simple response
interface SpeechRecognitionResult {
    readonly attribute unsigned long length;
    getter SpeechRecognitionAlternative item(in unsigned long index);
    readonly attribute boolean final;
};

// A collection of responses (used in continuous mode)
interface SpeechRecognitionResultList {
    readonly attribute unsigned long length;
    getter SpeechRecognitionResult item(in unsigned long index);
};

// A full response, which could be interim or final, part of a continuous response or not
interface SpeechRecognitionEvent : Event {
    readonly attribute SpeechRecognitionResult result;
    readonly attribute SpeechRecognitionError error;
    readonly attribute short resultIndex;
    readonly attribute SpeechRecognitionResultList resultHistory;
};

// The object representing a speech grammar
[Constructor]
interface SpeechGrammar {
    attribute DOMString src;
    attribute float weight;
};

// The object representing a speech grammar collection
[Constructor]
interface SpeechGrammarList {
    readonly attribute unsigned long length;
    getter SpeechGrammar item(in unsigned long index);
    void addFromUri(in DOMString src,
                    optional float weight);
    void addFromString(in DOMString string,
                       optional float weight);
};
The DOM Level 2 Event Model is used for speech recognition events. The methods in the EventTarget interface should be used for registering event listeners. The SpeechRecognition interface also contains convenience attributes for registering a single event handler for each event type.
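For instance, a handler for the result event can be registered either through the EventTarget methods or through the corresponding convenience attribute. The following sketch assumes the event type associated with onresult is named result; both styles receive the same events.
<script type="text/javascript">
  var recognition = new SpeechRecognition();

  // Style 1: DOM Level 2 EventTarget registration (assumes the type is "result").
  recognition.addEventListener("result", function(event) {
    console.log("addEventListener handler: " + event.result[0].transcript);
  }, false);

  // Style 2: the convenience attribute for the same event type.
  recognition.onresult = function(event) {
    console.log("onresult handler: " + event.result[0].transcript);
  };

  recognition.start();
</script>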
For all these events, the timeStamp attribute defined in the DOM Level 2 Event interface must be set to the best possible estimate of when the real-world event that the event object represents occurred. This timestamp must be represented in the user agent's view of time, even for events whose underlying timestamps may originate on a different machine, such as a remote recognition service (e.g., a speechend event produced by a remote speech endpointer).
Unless specified below, the ordering of the different events is undefined. For example, some implementations may fire audioend before speechstart or speechend if the audio detector is client-side and the speech detector is server-side.
The speech recognition error object has two attributes: code and message.
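A minimal error handler might compare code against the constants defined above. The following sketch assumes the error event carries the SpeechRecognitionError in the event's error attribute, as in the SpeechRecognitionEvent IDL, and that the interface object exposing the constants is available to script.
<script type="text/javascript">
  var recognition = new SpeechRecognition();

  // Sketch of an error handler; event.error is assumed to be the
  // SpeechRecognitionError described above.
  recognition.onerror = function(event) {
    var error = event.error;
    if (error.code == SpeechRecognitionError.NOT_ALLOWED) {
      console.log("The user or user agent denied speech input.");
    } else {
      console.log("Recognition error " + error.code + ": " + error.message);
    }
  };
</script>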
The SpeechRecognitionAlternative represents a simple view of the response that is used in an n-best list.
The SpeechRecognitionResult object represents a single one-shot recognition match, either as one small part of a continuous recognition or as the complete return result of a non-continuous recognition.
The SpeechRecognitionResultList object holds a sequence of recognition results representing the complete return result of a continuous recognition. For a non-continuous recognition it will hold only a single value.
The SpeechRecognitionEvent is the event that is raised each time there is an interim or final result. The event contains both the most recent recognized result (in the result attribute) and a history of the complete recognition session so far (in the resultHistory attribute).
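As an illustration, a continuous-mode handler could rebuild the transcript of the session so far from resultHistory, keeping only results that are marked final. This is a sketch only; the output element id is hypothetical, and interim results are simply skipped.
<script type="text/javascript">
  var recognition = new SpeechRecognition();
  recognition.continuous = true;

  // Sketch: concatenate the final results accumulated so far.
  recognition.onresult = function(event) {
    var transcript = "";
    for (var i = 0; i < event.resultHistory.length; i++) {
      var result = event.resultHistory.item(i);
      if (result.final) {
        transcript += result.item(0).transcript + " ";
      }
    }
    document.getElementById("output").textContent = transcript; // hypothetical element
  };

  recognition.start();
</script>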
The SpeechGrammar object represents a container for a grammar. Its src attribute holds the URI of the grammar, and its weight attribute holds the weight of the grammar relative to other grammars in the same SpeechGrammarList.
The SpeechGrammarList object represents a collection of SpeechGrammar objects. Its length attribute gives the number of grammars in the collection, the item getter returns the grammar at a given index, and the addFromUri and addFromString methods append a grammar specified by URI or by inline text, respectively, each with an optional weight.
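For example, a list could mix a grammar fetched by URI with one supplied inline as a string. The URI, the inline grammar format, and the weight values in this sketch are all hypothetical; they only illustrate the shape of the API.
<script type="text/javascript">
  // Sketch: build a grammar list from a hypothetical URI and an inline string.
  var grammars = new SpeechGrammarList();
  grammars.addFromUri("http://www.example.com/products.grxml", 0.8); // hypothetical grammar URI
  grammars.addFromString("#JSGF V1.0; grammar colors; public <color> = red | green | blue;", 0.2); // hypothetical inline grammar

  // The list can be inspected with length and item().
  console.log(grammars.length + " grammars, first src: " + grammars.item(0).src);

  var recognition = new SpeechRecognition();
  recognition.grammars = grammars;
</script>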
The TTS interface is the scripted web API for controlling text-to-speech output.
[Constructor]
interface TTS {
    attribute DOMString text;
    attribute DOMString lang;
    readonly attribute boolean paused;
    readonly attribute boolean ended;

    // methods to drive the speech interaction
    void play();
    void pause();
    void stop();

    attribute Function onstart;
    attribute Function onend;
};
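The following sketch shows one way a page might drive playback with these attributes and methods. It assumes that play() resumes output after pause() and that onend fires when playback completes; the button markup is illustrative only.
<script type="text/javascript">
  var tts = new TTS();
  tts.text = "This is a longer passage that the user may want to pause.";
  tts.lang = "en-US";

  tts.onend = function() {
    // ended is expected to be true once playback has completed.
    console.log("Finished speaking, ended = " + tts.ended);
  };

  function togglePause() {
    // paused reflects whether output is currently paused.
    if (tts.paused) {
      tts.play();   // assumed to resume from the paused position
    } else {
      tts.pause();
    }
  }
</script>
<input type="button" value="Play" onclick="tts.play()">
<input type="button" value="Pause/Resume" onclick="togglePause()">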
This section is non-normative.
Using speech recognition to perform a web search.
<script type="text/javascript">
  var sr = new SpeechRecognition();
  sr.onresult = function(event) {
    var q = document.getElementById("q");
    q.value = event.result[0].transcript;
    q.form.submit();
  }
</script>
<form action="http://www.example.com/search">
  <input type="search" id="q" name="q">
  <input type="button" value="Speak" onclick="sr.start()">
</form>
Using speech synthesis.
<script type="text/javascript">
  var tts = new TTS();
  function speak(text, lang) {
    tts.text = text;
    tts.lang = lang;
    tts.play();
  }
  speak("Hello world.", "en-US");
</script>
This API supports all of the examples in the HTML Speech Incubator Group Final Report that are within the scope of the JavaScript API and are relevant to the Section 3 Use Cases, with minimal or no changes. Specifically, the following are supported from Section 7.1.7.
The members of the HTML Speech Incubator Group, and the corresponding Final Report, created the basis for this proposal.