This article was published 8 years ago.


In addition to making Alexa more capable by integrating new skills, Amazon now plans to make the AI-powered voice assistant more expressive. You may soon notice that Alexa shouts out your grocery list but whispers the bedtime stories you ask it to read. In other words, the voice assistant will now convey emotion and excitement, with variations in speed and pitch as well.

In an official blog post, Amazon today detailed that developers will soon be able to take advantage of a markup standard called Speech Synthesis Markup Language (SSML). It has been integrated into the voice assistant and gives developers a set of tags to easily control pronunciation, intonation, timing, and emotion. This means Alexa will try to emulate human speech by placing emphasis on certain words, as well as changing pitch while conversing.

As the blog post explains:

SSML support on Alexa allows you to control how Alexa generates speech from your skill’s text responses.
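For context, a skill delivers SSML by wrapping its spoken text in a speak element inside its response. Here is a minimal sketch, assuming the standard Alexa Skills Kit JSON response format (the list text itself is a made-up placeholder):

```json
{
  "version": "1.0",
  "response": {
    "outputSpeech": {
      "type": "SSML",
      "ssml": "<speak>Here is your grocery list. <amazon:effect name=\"whispered\">And here is a secret.</amazon:effect></speak>"
    }
  }
}
```

Setting the outputSpeech type to "SSML" (rather than "PlainText") tells Alexa to interpret the markup instead of reading the tags aloud.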

The framework also enables developers to add pauses, change dialect or pronunciation, spell out a word, or add a short audio snippet to their skills to extend what they can do. You also have the option to include speechcons, meaning special words and phrases that may be restricted to certain regions. For example, Amazon Echo owners in the U.K. can now have Alexa say phrases such as 'Blimey' or 'Bada-bing'.
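An illustrative sketch of how these elements might be combined in one response (the pause length, alias text, and wording are made-up; on Alexa, speechcons are typically marked with interpret-as="interjection"):

```xml
<speak>
  Your parcel has arrived.
  <break time="500ms"/>
  The standards body is <say-as interpret-as="spell-out">W3C</say-as>,
  that is, the <sub alias="World Wide Web Consortium">W3C</sub>.
  <say-as interpret-as="interjection">Blimey!</say-as>
</speak>
```

The sub tag's alias attribute holds what Alexa actually says, while the enclosed text is what appears in the written response.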

The five new SSML tags that have been added to upgrade Alexa's capabilities are as follows:

- Whispers: Enables the developer to convey a softer dialog with the use of <amazon:effect name="whispered">
- Expletive beeps: Alexa won't belt out swear words if developers bleep out certain words with <say-as interpret-as="expletive">
- Sub: Use the <sub> tag when you want Alexa to say something other than what's written.
- Emphasis: Add <emphasis> to change the rate and volume at which the assistant speaks.
- Prosody: Developers can use this tag to control the volume, pitch, and rate of speech.
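Put together, the five tags above might look like this in a single response. This is only a sketch with illustrative values, not code from Amazon's post:

```xml
<speak>
  <amazon:effect name="whispered">This part is whispered.</amazon:effect>
  I can't believe you said <say-as interpret-as="expletive">darn</say-as>!
  My favorite element is <sub alias="aluminum">Al</sub>.
  This is <emphasis level="strong">really</emphasis> important,
  <prosody rate="slow" pitch="low" volume="loud">so listen carefully.</prosody>
</speak>
```

The expletive tag causes the enclosed word to be replaced with a bleep, while emphasis and prosody adjust how the surrounding words are delivered rather than what is said.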

The new features are currently available only to Alexa skill developers in the U.S., U.K., and Germany. Developers can also find sample code in the official blog post to upgrade their skills with a wide range of natural expressions. Amazon is steadily making most of its technology and code base available to developers to build intuitive skills, whose number has crossed 10,000. It also recently made its high-quality 7-mic audio recognition tech available to select hardware makers building smart devices.
