Alexa Skill - Part 3


In the previous post I described the LaunchRequest and SessionEndedRequest, which I think of as system-level session requests. They are events that happen at startup and shutdown but don't have much to do with the actual interaction.

In this post I'll cover the IntentRequest request type. This request type, combined with the intent name, is how interactions unique to your skill are handled.

In part one of the tutorial you defined the HelloIntent in the interaction model definition, which handles phrases like "hello" and "say hello". Here's the definition in case you forgot:

"intents": [
  ...
  {
    "name": "HelloIntent",
    "samples": [
      "hello",
      "say hello"
    ]
  }
]
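When a user says one of those sample phrases, Alexa delivers a request envelope to your skill whose request.type is IntentRequest and whose intent.name matches. Here's a simplified sketch of the relevant fields (the real payload also carries session, context, and locale data, and the requestId below is a made-up placeholder):

```javascript
// Simplified sketch of the request envelope Alexa delivers for "say hello".
// The real payload includes session, context, and locale fields omitted here.
const requestEnvelope = {
  request: {
    type: 'IntentRequest',
    requestId: 'amzn1.echo-api.request.example-id', // hypothetical placeholder
    intent: {
      name: 'HelloIntent',
      confirmationStatus: 'NONE',
    },
  },
};

console.log(requestEnvelope.request.type);        // 'IntentRequest'
console.log(requestEnvelope.request.intent.name); // 'HelloIntent'
```

These two fields are exactly what the handler's canHandle check, shown below, inspects.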

In part two we added a request handler for the HelloIntent in index.js. Here's a reminder:

exports.handler = skillBuilder
  .addRequestHandlers(
    ...
    HelloIntentHandler
  )
  .addErrorHandlers(ErrorHandler)
  .lambda();

This configures the skill to route matching intents to the HelloIntentHandler code.

Here is the handler code, which I'll discuss below.

const HelloIntentHandler = {
  canHandle(handlerInput) {
    return handlerInput.requestEnvelope.request.type === 'IntentRequest'
      && handlerInput.requestEnvelope.request.intent.name === 'HelloIntent';
  },
  handle(handlerInput) {
    const speechText = 'Hello, from Shepherd of the Valley Lutheran Church.';

    return handlerInput.responseBuilder
      .speak(speechText)
      .withSimpleCard('Hello', speechText)
      .getResponse();
  },
};

You saw the canHandle function in part two, where it handled the standard system request type LaunchRequest. The HelloIntentHandler handles the IntentRequest type, but all non-system requests arrive as IntentRequest, so the type alone isn't enough to tell them apart. We distinguish between them by adding the intent name to the check, with this section of code:

&& handlerInput.requestEnvelope.request.intent.name === 'HelloIntent'
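To see the two-part check in action, here's a minimal sketch that runs canHandle against mock handlerInput objects (the mocks contain only the fields the check actually reads, and the StopIntent name is just an illustrative stand-in for some other intent):

```javascript
const HelloIntentHandler = {
  canHandle(handlerInput) {
    return handlerInput.requestEnvelope.request.type === 'IntentRequest'
      && handlerInput.requestEnvelope.request.intent.name === 'HelloIntent';
  },
};

// Helper that builds a minimal mock handlerInput for a given request type
// and intent name. Real envelopes carry far more data than this.
const mockInput = (type, name) => ({
  requestEnvelope: { request: { type, intent: { name } } },
});

console.log(HelloIntentHandler.canHandle(mockInput('IntentRequest', 'HelloIntent'))); // true
console.log(HelloIntentHandler.canHandle(mockInput('IntentRequest', 'StopIntent')));  // false
```

Note that the type check comes first: a system request like LaunchRequest has no intent object at all, so short-circuiting on the type prevents the name check from ever touching a missing field.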

Finally, the good part! You handle the request in the handle(handlerInput) function. Its job is to return the text you want spoken back to the user who said "hello" or "say hello" to your skill. In this case you set that up with a constant string (which you should change, of course):

const speechText = 'Hello, from Shepherd of the Valley Lutheran Church.';

Then you build and return the response:

return handlerInput.responseBuilder
      .speak(speechText)
      .withSimpleCard('Hello', speechText)
      .getResponse();

You start building the response with handlerInput.responseBuilder. You pass the text you want spoken to the speak() function. The withSimpleCard() function adds a visual card for Alexa devices that have a screen (I don't know much about this yet, but I'll be digging into it in the future). Finally, getResponse() assembles the response object, which your handler returns to the Alexa service.
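For reference, the chain above produces a response object roughly like the following. This is a simplified sketch of what I understand the builder to assemble, not the SDK's exact output; the real response also carries version and session data:

```javascript
const speechText = 'Hello, from Shepherd of the Valley Lutheran Church.';

// Rough sketch of the response JSON the builder chain assembles.
const response = {
  outputSpeech: {
    type: 'SSML',
    ssml: `<speak>${speechText}</speak>`, // speak() wraps plain text in SSML tags
  },
  card: {
    type: 'Simple',      // added by withSimpleCard('Hello', speechText)
    title: 'Hello',
    content: speechText,
  },
  shouldEndSession: true, // no reprompt was set, so the session closes
};

console.log(JSON.stringify(response, null, 2));
```

Seeing the shape of the final JSON makes it clearer that the builder functions are just a convenient way to fill in pieces of this structure.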

We've walked through the code, and while there's a bunch of configuration that isn't intuitive, programmatically there isn't anything tricky. Now that you understand the code and configuration behind the skill, in the next part of the tutorial you'll deploy and test it.

See you then and please let me know if you have any questions.
