I don’t have an answer to the question. I’m just riffing here. Trying to figure out how to create value for a yet to be determined audience.

There are multiple types of content even within each medium. I’m thinking of vlogging vs. screencasting. Both are video, but they show the difference between documenting and creating. I guess the ideal for me would be to create a project and document my progress and what I’m learning, then turn each day’s progress into blog or social media posts. I don’t know how this works into podcasting, but the smaller posts I could turn into tutorials.

There would be three things that come out of this:

The project itself - I’m experimenting with this right now with the Alexa Skills project. A more ambitious project would be the podcast translation and hosting platform on AWS. I would document my daily progress and post it to social media as Twitter or Instagram posts. I could also experiment with how to record programming sessions and post them to YouTube, Instagram, or Snapchat. The daily write-ups could then become a tutorial, which I could turn into a book or a course.

In the end I would build my project, an audience around that project and the technology I used to build it, and finally, a book and/or course that could be monetized. This sounds like an ideal plan. So how do I start?

First I need a project to start and a plan to document it in some way. I don’t know if that means screencasts, video, or just writing. I’d like to find some interesting way to talk about programming beyond just writing about it. Maybe I need to experiment with screencasting.

In the previous parts of this extended tutorial you learned how a Skill works. Now you’re going to learn how to deploy it so you can test it. From the command line enter:

ask deploy

That’s it. Really, that’s it. You don’t have to do anything else. The Lambda function, the skill’s intent model, and everything else is uploaded, configured, and deployed. Cool!

Now you might be asking, “Why should I do this from the command line when every other tutorial shows you how to do it from the web console?” First, it’s easier, right? Second, this is how things are done in production. If you want to run something in production, it needs to be automated. Now, we’re not automated yet because we’re running a step from the command line, but it’s a short way from a manual command-line execution to a step in an automated build process. Learning how to do this from a web UI is something totally different; there’s a huge gap between clicking around in a web UI and deploying in an automated fashion. That’s why you need to know how to deploy this way. Remember, the goal is to learn how to do this in a real environment, not just as a toy experiment.

Now take your skill for a test run. First we’ll do this from the command line. Here is a sample:

ask simulate --text 'Alexa open s.o.t.v.' --locale en-US

That tests the opening of the skill.

Let’s try the hello intent:

ask simulate --text 'Alexa ask s.o.t.v. hello' --locale en-US

Let’s try the intent with a slot. [add command line for slot intent]

Now try opening the skill and then using the hello intent while we are in the skill. [command line to open] [command line to say hello step]

If that worked for you then I tricked you. That shouldn’t work. The command line doesn’t keep state, so a multi-step interaction fails from the command line. So, let’s try that one from the web console. Yes, I know I just gave a lecture on not using the GUI, but this is a test. And sometimes testing isn’t as clean as we would like it to be. I am wondering if there is some session ID that can be passed as a command line argument to actually execute a multi-step interaction from the command line. I haven’t seen it in a tutorial, but that doesn’t mean it doesn’t exist. I’ll do some digging, but if it doesn’t exist it should probably be a developer request.

What’s next? Next we’ll do some calendar integration and give more than a canned response. You did learn how simple it was to deploy your skill, and how to test it from the command line and the GUI.

In the previous post I described the LaunchRequest and SessionEndedRequest, which I think of as system-level session requests. They are events that happen at startup and shutdown but don’t have a lot to do with the actual interaction.

In this post I’ll cover the IntentRequest request type. This request type plus the name of the intent is how interactions unique to your skill are handled.
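
To make that concrete, here’s a rough sketch of the part of the incoming request that the handlers look at when routing an IntentRequest. The field names follow the Alexa request format, but the values here are just placeholders:

// Hedged sketch: roughly the shape of handlerInput.requestEnvelope.request
// for an IntentRequest, trimmed to the fields used for routing.
const exampleIntentRequest = {
  type: 'IntentRequest',
  requestId: 'amzn1.echo-api.request.example-id', // placeholder
  intent: {
    name: 'HelloIntent',
    slots: {}
  }
};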

In part one of the tutorial you defined the HelloIntent in the interaction model definition which should handle phrases like “hello” and “say hello”. Here’s the definition in case you forgot:

"intents": [
  ...
  {
    "name": "HelloIntent",
    "samples": [
      "hello",
      "say hello"
    ]
  }
]

In part two we added a request handler for the HelloIntent in index.js. Here’s a reminder:

exports.handler = skillBuilder
  .addRequestHandlers(
    ...
    HelloIntentHandler
  )
  .addErrorHandlers(ErrorHandler)
  .lambda();

This simply configures the skill to use the HelloIntentHandler code to process intents.

Here is the code which I will discuss below.

const HelloIntentHandler = {
  canHandle(handlerInput) {
    return handlerInput.requestEnvelope.request.type === 'IntentRequest'
      && handlerInput.requestEnvelope.request.intent.name === 'HelloIntent';
  },
  handle(handlerInput) {
    const speechText = 'Hello, from Shepherd of the Valley Lutheran Church.';

    return handlerInput.responseBuilder
      .speak(speechText)
      .withSimpleCard('Hello', speechText)
      .getResponse();
  },
};

You saw the canHandle function in part two, where it handled the standard system request type LaunchRequest. The HelloIntentHandler handles the IntentRequest type. The catch is that non-system requests are all of the IntentRequest type, so we need to distinguish between them by adding the intent name to the check. That is done with this piece of code:

&& handlerInput.requestEnvelope.request.intent.name === 'HelloIntent'
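
Every additional intent you add later follows this same pattern; only the intent name in the check changes. As a sketch, a hypothetical GoodbyeIntent (not part of this tutorial’s interaction model) would get a handler like this:

const GoodbyeIntentHandler = {
  canHandle(handlerInput) {
    // Same routing check as HelloIntentHandler, just a different intent name.
    return handlerInput.requestEnvelope.request.type === 'IntentRequest'
      && handlerInput.requestEnvelope.request.intent.name === 'GoodbyeIntent';
  },
  handle(handlerInput) {
    return handlerInput.responseBuilder
      .speak('Goodbye!')
      .getResponse();
  },
};

To actually work it would also need an entry in the interaction model and a line in addRequestHandlers(), just like HelloIntent.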

Finally, the good part! You handle the request in the handle(handlerInput) function. What you want to do is return the text that will be spoken back to the user who said “hello” or “say hello” to your skill. In this case you set that up with a constant string (which you should change, of course):

const speechText = 'Hello, from Shepherd of the Valley Lutheran Church.';

Then you build and return the response:

return handlerInput.responseBuilder
      .speak(speechText)
      .withSimpleCard('Hello', speechText)
      .getResponse();

You start building the response with handlerInput.responseBuilder. Then you pass the text you want spoken to the speak() function. The withSimpleCard() function adds a visual card for Alexa devices that have a screen (I don’t know much about this yet, but I’ll be digging into it in the future). Then you finish the response by calling the getResponse() function.
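
One variation worth noting: the response above speaks and then the interaction ends. If you want Alexa to keep listening for a follow-up, adding a reprompt (the same call the launch handler used in part two) should, as far as I can tell, keep the session open. Roughly:

return handlerInput.responseBuilder
  .speak(speechText)
  .reprompt('You can say hello, or say stop to exit.') // re-prompts after a pause and keeps the session open
  .withSimpleCard('Hello', speechText)
  .getResponse();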

We’ve walked through the code, and while there is a bunch of configuration stuff that isn’t intuitive, I don’t think there is anything tricky programmatically. Now that you understand the code and configuration behind the skill, in the next part of the tutorial you will deploy and test it.

See you then and please let me know if you have any questions.

I’m taking a break from the Alexa tutorial this week. I wanted to talk a little bit about software development careers. As I mature, I find that I am less drawn to the idea of being a technical expert. There can be a lot more to a fulfilling software career than being the expert in Java or C or the technical aspects of the job. It used to be that just solving a problem or fixing a bug made for a good day. Now, working with others to create a productive team and having a satisfying experience at work are much more important to me than they used to be. I still enjoy the technical things immensely, but I also like helping a teammate figure something out or helping someone finish a task. One thing I’ve really started enjoying is helping a younger developer see things in a new light.

Most young developers I talk to enjoy the technical stuff and problem solving, but they forget that there is more to life than the job. I don’t mean that they don’t have outside lives; I mean they don’t have any real balance or boundaries. It’s like they are the job. Everything takes a back seat to the job. They willingly drop what they are doing to handle a work issue. Please, young developers, don’t do this. This may sound like an old developer who no longer wants to sacrifice having a life, and yes, I don’t want to sacrifice my life, but I want young developers to know that it’s wrong for an employer to be that demanding of your time.

You are not the owner of the company, so don’t accept that you should throw away portions of your life for the sake of a company that you don’t own (0.00001% of stock ownership doesn’t make you a company owner). Yes, you won’t climb the ladder as quickly, but don’t succumb to the pressure of doing everything the boss wants at any hour of the day. Do your best to resist the ten-minute weekend task. I know I still do it, and I’m sure there are tasks that take ten minutes of your personal time that you will do as a “favor” to your boss. Do them as little as possible, and make sure your boss appreciates that it is a favor, not something you would normally do. Couch it with something like: you’re out right now, but when you’re able you’ll do your best if you have time. You don’t have to go too deep; just let the boss know that you have a life, that you are living it, and that this favor is intruding on it.

Then there’s doing work at work. You know all the personal stuff the job allows you to do at work? It’s a trap! They want you to blur the boundaries. They want you to not know when to shut it off, because if you can’t shut it off, you can’t refuse when they come knocking during your personal time. “Hey, you used company time for personal stuff, so of course we can intrude on your personal time for company work.” See the trap? Just do work at work. If you have to break up your work day for a personal task, go ahead; I don’t see a problem with that. Just make sure there is a defined split, because if it all mixes together there’s no defense against them taking your time. You know all those “perks” at work? Where are they? At work. Wait, now when am I doing work stuff? When am I doing personal stuff? Oh, I’d better do work stuff at home since I did home stuff at work. What?!? “Hours don’t matter, it’s what you accomplish,” they say. That sounds good until you ask how many hours it takes to produce what’s expected of you. Have that conversation with your boss. Know what’s expected. If you don’t have the conversation, you could work for hours and hours and never know whether you are under- or over-delivering. Just find out.

Finally, don’t regularly check email during non-work hours. You know all the things about email killing productivity? Well, it will kill your personal life as well. What are you doing with work email on your personal device? What is that? I get mad when I see that. Unless it’s in the job description, stop! If your job is to be on call or on “pager duty,” then sure. But if you have a regular feature-development software job, I see no reason to regularly check your work email outside of work. If your bosses want something done, they should ask for it during business hours. Their whim or fancy at 10:30 PM on a Saturday should have no bearing on you until Monday morning when you start your day. What can you do if this is expected behavior? Well, first, don’t do it just because everyone else is; that’s bad culture. Just like with time boundaries, have the conversation with your boss about communication boundaries.

Why am I writing this? My impression of Silicon Valley is that this has somehow become the norm. It seems like every big company has forgotten what boundaries are, and I think their customers have come to expect blurred boundaries. Every SaaS customer expects 24/7 service, so every company seems to expect 24/7 employees. I don’t think the culture has had an actual conversation about this. Bosses just saw that they needed employees to support the product at any hour, so they expect employees to do exactly that. I’m guessing that in most cases it was never negotiated as part of the job. Really, what I’m asking for is a conversation. Employees, please have these conversations with your managers. Ask about expectations for off-hours work, and clarify how you will be compensated for work outside the norm. If you are up late working on a presentation, is the presentation your job and it’s your fault you’re up so late? Or is it something that got thrown at you at the last minute, and now you have to go above and beyond because of mismanagement?

The code for the Skill is written in JavaScript and uses npm to manage package dependencies. The ask-sdk-core and ask-sdk-model packages were included when we ran ask new.

lambda/custom/index.js

I’m going to break up the explanation of the code so I can explain it in a digestible fashion.

Let’s start slow. First, I import the ask-sdk-core.

const Alexa = require('ask-sdk-core');

I use the ask-sdk-core when I register the intent handlers (described below). The call to skillBuilder.addRequestHandlers() registers handlers for launch, help, cancel and stop, session end, and the non-built-in Hello intent.

const skillBuilder = Alexa.SkillBuilders.custom();

exports.handler = skillBuilder
  .addRequestHandlers(
    LaunchRequestHandler,
    HelpIntentHandler,
    CancelAndStopIntentHandler,
    SessionEndedRequestHandler,
    HelloIntentHandler
  )
  .addErrorHandlers(ErrorHandler)
  .lambda();

Anatomy of an Intent Handler

The intent handler requires two functions: canHandle(handlerInput) and handle(handlerInput).

const LaunchRequestHandler = {
  canHandle(handlerInput) {
    return handlerInput.requestEnvelope.request.type === 'LaunchRequest';
  },
  handle(handlerInput) {
    const speechText = 'Welcome to the Alexa skill for Shepherd of the Valley Lutheran Church.';
    const repromptText = 'Say hello.';

    return handlerInput.responseBuilder
      .speak(speechText)
      .reprompt(repromptText)
      .withSimpleCard('Hello', speechText)
      .getResponse();
  },
};

The canHandle(handlerInput) function returns true for all the request types it handles. In the case of the LaunchRequestHandler it supports the LaunchRequest request type. The handle(handlerInput) function is the actual response to the intent. It shows how handlerInput.responseBuilder is used to build the response. speak() takes a string that is the text of the response. reprompt() takes a string that will be used to further the conversation if there hasn’t been any interaction after 8 seconds.
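
The help and cancel/stop handlers registered above aren’t shown in this post, but they follow the same shape. As a rough sketch based on the standard hello-world template (your generated index.js may differ slightly), the cancel/stop handler just checks for the two built-in Amazon intents:

const CancelAndStopIntentHandler = {
  canHandle(handlerInput) {
    // One handler covers both built-in intents.
    return handlerInput.requestEnvelope.request.type === 'IntentRequest'
      && (handlerInput.requestEnvelope.request.intent.name === 'AMAZON.CancelIntent'
        || handlerInput.requestEnvelope.request.intent.name === 'AMAZON.StopIntent');
  },
  handle(handlerInput) {
    const speechText = 'Goodbye!';

    return handlerInput.responseBuilder
      .speak(speechText)
      .withSimpleCard('Goodbye', speechText)
      .getResponse();
  },
};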

There are two other types of requests I’ll be handling in this tutorial: IntentRequest and SessionEndedRequest. SessionEndedRequest is used to clean up any state we need for the session. I’m not sure there will be much to it, as I don’t know if I will need any state for this Skill. I will explore IntentRequest in the next post. It will be used for some default intents, but it is also the mechanism for the interesting, custom parts of the skill.
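
For reference, the SessionEndedRequest handler from the generated template is about as small as a handler gets; any per-session cleanup would go in handle(). A sketch (your generated code may differ slightly):

const SessionEndedRequestHandler = {
  canHandle(handlerInput) {
    return handlerInput.requestEnvelope.request.type === 'SessionEndedRequest';
  },
  handle(handlerInput) {
    // Any session cleanup would go here. Note that you can't return speech
    // in response to a SessionEndedRequest.
    return handlerInput.responseBuilder.getResponse();
  },
};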

Please send me (brian@yamabe.net) any comments or corrections.