The grammar format used is JSpeech Grammar Format (JSGF) - you can find a lot more about it at the previous link to its spec. In our demo, the grammar is defined like this:

  const colors = ['aqua', 'azure', 'beige' /* ...the demo's full color list continues... */];
  const grammar = '#JSGF V1.0; grammar colors; public <color> = ' + colors.join(' | ') + ';';

However, for now let's just run through it quickly:

The lines are separated by semicolons, just like in JavaScript.
The first line - #JSGF V1.0; - states the format and version used.
The second line indicates a type of term that we want to recognize. public declares that it is a public rule, the string in angle brackets defines the recognized name for this term (color), and the list of items that follow the equals sign are the alternative values that will be recognized and accepted as appropriate values for the term. Note how each is separated by a pipe character.

You can have as many terms defined as you want on separate lines following the above structure, and include fairly complex grammar definitions (see the sketch below for an illustration). For this basic demo, we are just keeping things simple.
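For example, a second public rule could sit alongside the color rule. The <size> rule here is purely illustrative and not part of the demo:

  const grammar =
    '#JSGF V1.0; ' +
    'grammar demo; ' +
    'public <color> = aqua | azure | beige; ' +
    'public <size> = small | medium | large;';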
Plugging the grammar into our speech recognition

The next thing to do is define a speech recognition instance to control the recognition for our application. This is done using the SpeechRecognition() constructor. We also create a new speech grammar list to contain our grammar, using the SpeechGrammarList() constructor. We then add the SpeechGrammarList to the speech recognition instance by setting it to the value of the SpeechRecognition.grammars property.

We also set a few other properties of the recognition instance before we move on, as shown in the sketch after this list:

SpeechRecognition.continuous: Controls whether continuous results are captured (true), or just a single result each time recognition is started (false).
SpeechRecognition.lang: Sets the language of the recognition. Setting this is good practice, and therefore recommended.
SpeechRecognition.interimResults: Defines whether the speech recognition system should return interim results, or just final results. Final results are good enough for this simple demo.
SpeechRecognition.maxAlternatives: Sets the number of alternative potential matches that should be returned per result. This can sometimes be useful, say if a result is not completely clear and you want to display a list of alternatives for the user to choose the correct one from. But it is not needed for this simple demo, so we are just specifying one (which is actually the default anyway).
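Putting this section together, the setup might look something like the following minimal sketch; the weight of 1 passed to SpeechGrammarList.addFromString() and the en-US language tag are illustrative choices, not requirements:

  const recognition = new SpeechRecognition();
  const speechRecognitionList = new SpeechGrammarList();

  // Add our grammar to the list, then attach the list to the recognition instance.
  speechRecognitionList.addFromString(grammar, 1);
  recognition.grammars = speechRecognitionList;

  // Capture a single, final, English-language result per recognition run.
  recognition.continuous = false;
  recognition.lang = 'en-US';
  recognition.interimResults = false;
  recognition.maxAlternatives = 1;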
Speaking the entered text

Next, we create an event handler to start speaking the text entered into the text field. We are using an onsubmit handler on the form so that the action happens when Enter/Return is pressed. We first create a new SpeechSynthesisUtterance() instance using its constructor - this is passed the text input's value as a parameter.
Next, we need to figure out which voice to use. We use the HTMLSelectElement selectedOptions property to return the currently selected <option> element. We then use this element's data-name attribute, finding the SpeechSynthesisVoice object whose name matches this attribute's value. We set the matching voice object to be the value of the SpeechSynthesisUtterance.voice property.
Finally, we set the SpeechSynthesisUtterance.pitch and SpeechSynthesisUtterance.rate to the values of the relevant range form elements. Then, with all necessary preparations made, we start the utterance being spoken by invoking SpeechSynthesis.speak(), passing it the SpeechSynthesisUtterance instance as a parameter. The full handler is sketched below.
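As a sketch, assuming form, inputTxt, voiceSelect, pitch, and rate reference the corresponding form controls, and voices holds the array returned by speechSynthesis.getVoices(), the handler might look like this:

  form.onsubmit = (event) => {
    event.preventDefault();
    // Create an utterance from the text field's current value.
    const utterThis = new SpeechSynthesisUtterance(inputTxt.value);
    // Find the SpeechSynthesisVoice whose name matches the selected
    // <option>'s data-name attribute.
    const selectedName = voiceSelect.selectedOptions[0].getAttribute('data-name');
    utterThis.voice = voices.find((voice) => voice.name === selectedName);
    // Set pitch and rate from the range sliders, then speak.
    utterThis.pitch = pitch.value;
    utterThis.rate = rate.value;
    speechSynthesis.speak(utterThis);
  };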