Voice is about Action – Into the World of Digital Voice Assistants

“Alexa, how’s the josh?”

Don’t be surprised if you find yourself asking that very question to your new buddy in town, Alexa, and she replies, “High, Sir!”

A friend’s mother was recently admitted to hospital after a paralytic attack, which made it impossible for her to ring the bell for the nurse when no one was around. A voice assistant would have come in handy here!

‘Alexa, call the nurse’, and Alexa would reply, ‘Sure, I have sent a notification to the nurse for further assistance.’ Wow!

On a lighter note, I recently came across a news story in which a kid got his maths homework done by asking Alexa the questions! Isn’t that interesting?

Why Virtual/Voice Assistance?

Looking at the recent evolution of technology, there are several devices available in the market with voice assistants, such as Apple Siri (2011), Amazon Alexa (2014), Microsoft Cortana (2014) and Google Assistant (2016), to name a few. These already provide “virtual assistance” and help people in many ways, offering an individualized experience while “listening” to multiple speakers.

At Blazeclan, we started developing capabilities (skills) for Amazon Alexa and Google Home devices. We developed a skill that can act as a “nurse”, available to “listen” to a patient 24/7. The patient simply starts interacting by saying “Alexa” or “Ok Google”, and our ‘virtual nurse’ responds with “How may I help you?” before acting on the request.

Under the hood

Let’s take a look at how to build a skill using the Alexa Skills Console for Alexa and Dialogflow for Google Assistant. Let’s name our skill “Nurse”. Once the skill is created, we need to add the sample utterances that a user (the patient) would typically say while interacting. All these utterances are grouped under intents, and the responses to them are defined accordingly.

Let’s understand with an example:

Intent : Uncomfortable

Sample utterances:

  • “I am feeling uncomfortable.”
  • “I am not ok.”
  • “I am not feeling good.”
  • “Something is bothering me.”
  • “I need help.”

Responses:

  • “What is making you uncomfortable?”
  • “How may I help you?”
  • “What is bothering you?”

Similarly, there would be multiple intents, a set of sample utterances under each intent, and corresponding responses to them.
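The structure above can be sketched as an Alexa-style interaction model. The intent name and invocation name below are illustrative assumptions; the Skills Console generates the equivalent JSON for you.

```python
# A sketch of the interaction model for our hypothetical "Nurse" skill.
# "UncomfortableIntent" is an assumed name, not a built-in Alexa intent.
interaction_model = {
    "interactionModel": {
        "languageModel": {
            "invocationName": "nurse",
            "intents": [
                {
                    "name": "UncomfortableIntent",
                    "slots": [],
                    "samples": [
                        "I am feeling uncomfortable",
                        "I am not ok",
                        "I am not feeling good",
                        "Something is bothering me",
                        "I need help",
                    ],
                },
            ],
        }
    }
}
```

When the user says any of the sample phrases, the platform matches it to `UncomfortableIntent` and hands that intent to our backend code.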

Some sample interactions are as follows:

Patient: “Alexa, open Nurse” or “Ok google, talk to nurse”

Voice Assistant: “Hello, how may I help you?”

Patient: “I am feeling uncomfortable”

Voice Assistant: “What is making you uncomfortable?”

Patient: “AC temperature is too high”

Voice Assistant: “What temperature would you like me to set?”

Patient: “20 degrees”

Voice Assistant: “Setting AC temperature to 20 degrees”

Behind the scenes, each such interaction goes through three steps. In the first step, ‘Speech to Text’, the device listens to the statement and converts the voice command into text input that the voice assistant understands.

Under the hood, the software breaks your speech down into tiny recognizable parts called phonemes. It is the order, combination and context of these phonemes that allow the sophisticated audio-analysis software to figure out what exactly the user is saying. For words that are pronounced the same way, such as ‘eight’ and ‘ate’, the software analyzes the context and syntax of the sentence to figure out the best text match for the spoken word.
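As a toy illustration of the homophone problem (not the production ASR pipeline, which uses full statistical language models), context can be reduced to how often each candidate word follows the previous one. The bigram counts below are invented for the example.

```python
# Invented bigram counts: how often the second word follows the first.
BIGRAM_COUNTS = {
    ("number", "eight"): 40,
    ("number", "ate"): 0,
    ("she", "ate"): 25,
    ("she", "eight"): 1,
}

def disambiguate(prev_word, candidates):
    """Pick the homophone most likely to follow prev_word."""
    return max(candidates, key=lambda w: BIGRAM_COUNTS.get((prev_word, w), 0))

print(disambiguate("number", ["eight", "ate"]))  # eight
print(disambiguate("she", ["eight", "ate"]))     # ate
```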

The second step, ‘Text to Intent’, interprets what exactly the user means. The voice assistant works out what the question is asking with the help of a function — either AWS Lambda or Google Cloud Functions — which in turn triggers the appropriate intent. The Lambda or Cloud Function contains the code that helps the voice assistant understand the utterance and come up with an appropriate response. These functions can be written in multiple languages, e.g. Java, Node.js or Python, to name a few.
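A minimal sketch of such a function, written against the raw Alexa request/response JSON format (the intent name and phrases are assumptions carried over from the example above, not a published skill):

```python
def lambda_handler(event, context):
    """Minimal AWS Lambda handler for the hypothetical "Nurse" skill.

    Routes the incoming Alexa request to a spoken response."""
    request = event["request"]
    if request["type"] == "LaunchRequest":
        # "Alexa, open Nurse"
        speech = "Hello, how may I help you?"
    elif (request["type"] == "IntentRequest"
          and request["intent"]["name"] == "UncomfortableIntent"):
        speech = "What is making you uncomfortable?"
    else:
        speech = "Sorry, I did not understand that."
    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "PlainText", "text": speech},
            "shouldEndSession": False,  # keep the conversation open
        },
    }
```

In a real skill you would typically use the ASK SDK rather than hand-building the response dictionary, but the shape of the exchange is the same.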

The final step, ‘Intent to Action’, fulfils the user’s need by working out possible answers based on the information at hand. If an intent requires an API call, this is typically handled by the AWS Lambda or Google Cloud Function itself.
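For the AC-temperature dialogue above, the action step could look like the sketch below. The intent name, slot name and device-API stub are all hypothetical; a real skill would call an actual smart-thermostat API here.

```python
def set_ac_temperature(degrees):
    # Stub for a real device call (e.g. a smart-thermostat REST API).
    return {"status": "ok", "target": degrees}

def handle_set_temperature(intent):
    """Fulfil the hypothetical SetTemperatureIntent by calling the device API."""
    degrees = int(intent["slots"]["temperature"]["value"])
    result = set_ac_temperature(degrees)
    if result["status"] == "ok":
        return f"Setting AC temperature to {degrees} degrees"
    return "Sorry, I could not reach the air conditioner"
```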

Conclusion

Thus, we see that voice assistants such as Amazon Alexa or Google Home come in handy in several real-world situations, and can be leveraged to build integrated solutions for customers. Contact us today at sales@139.59.4.14 to learn how voice assistance can benefit your day-to-day life.
