Serverless development feels like Java in 1998. It has been around for a little while and is gaining traction, but there are still some sceptics. The skepticism around Java centered on the hype of write-once-run-anywhere. I think that caution was appropriate, but the ideal of a virtual machine that could be deployed on almost any platform to run the same non-GUI code became a big deal.

The big idea around serverless is the reduction of operational costs around deploying an application. The cost of maintaining even virtual machines is tremendous compared to an environment where the code is the only thing to maintain. Just as the vision for Java in 1998 didn't turn out exactly as people hyped it, I don't think we know what serverless will look like in 20 years, but I'm betting it will have a significant impact.

In the short time I've been working with AWS Lambda and other AWS managed services, I can see the norm for server-side development moving away from placing software even on virtualized machines. Programming models that abstract away the notion of a host OS seem like a no-brainer at this point. The reduced cost of maintenance and operation in a serverless environment seems like it would be a win for any type of back-end development.
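To make that concrete, here is a minimal sketch of what "the code is the only thing to maintain" looks like with AWS Lambda in Python: a single handler function is the whole deployment unit, with no OS, web server, or VM to patch. The event shape and function names below are my own illustration, not from any particular project.

```python
import json

def handler(event, context):
    """Entry point that Lambda invokes directly; there is no server process
    to manage. `event` carries the request payload (here assumed to be a
    simple dict), and `context` carries runtime metadata supplied by Lambda."""
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}"}),
    }
```

Everything operational beyond this file (scaling, host patching, process supervision) is the platform's problem, which is exactly the cost reduction argued for above.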

That being said, working for a company that primarily tracks and optimizes software and hardware assets in an on-premise environment feels like an uncomfortable place to be. Of course, on-premise software and hardware won't go away; they will just shrink in size and relative importance. Can a company that is entrenched in the on-premise model morph itself into something that can adapt to this new world and combine and optimize a mixed serverless and on-premise model? That's the question.

Yay! I passed the certification exam! I’ll have to find a way to incorporate the logo. It’s probably the most meaningful exam I’ve taken since college and I’m glad I’m done. Although I do plan to take the Developer exam and likely the SysOps one as well.

Here’s to a study break until next week.

Which if the services could spread across Multi-AZ (chose 2 correct answers)

A. EC2 B. ELB C. RDS D. Dynamo DB E. EBS

The correct answers are said to be B and C. First, the grammar is poor. Second, you can "enable cross-zone load balancing" for ELB, "provision a Multi-AZ DB Instance" for RDS, and for DynamoDB, "The service runs across Amazon's proven, high-availability data centers. The service replicates data across three facilities in an AWS Region to provide fault tolerance in the event of a server failure or Availability Zone outage." So B, C, and D are all available Multi-AZ. If the question were which services are *optionally* Multi-AZ, then the answer would be B and C, but that is not how the question is worded.
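The distinction shows up in the APIs themselves. As a hedged sketch (assuming boto3's request shapes): cross-zone balancing is a mutable attribute on a classic ELB, Multi-AZ is an explicit flag when provisioning an RDS instance, and DynamoDB replicates across Availability Zones automatically with no flag at all. The helper below only builds the request parameters so the difference is visible side by side; the resource names are hypothetical and nothing here makes a live AWS call.

```python
def multi_az_request_params():
    """Illustrative request parameters per service; not a live AWS call."""
    return {
        # ELB (classic): an attribute you toggle after creation,
        # e.g. via elb_client.modify_load_balancer_attributes(...).
        "elb": {
            "LoadBalancerName": "my-elb",  # hypothetical name
            "LoadBalancerAttributes": {
                "CrossZoneLoadBalancing": {"Enabled": True},
            },
        },
        # RDS: an explicit flag at provisioning time,
        # e.g. via rds_client.create_db_instance(...).
        "rds": {
            "DBInstanceIdentifier": "my-db",  # hypothetical name
            "Engine": "mysql",
            "DBInstanceClass": "db.t2.micro",
            "MultiAZ": True,
        },
        # DynamoDB: no flag at all -- replication across facilities
        # in a Region is built into the service.
        "dynamodb": {},
    }
```

That "no flag at all" case is the heart of the complaint: DynamoDB being Multi-AZ by default doesn't make it any less Multi-AZ.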

You have created a Route 53 latency record set from your domain to a machine in Singapore and a similar record to a machine in Oregon. When a user located in India visits your domain he will be routed to:

A. Singapore B. Oregon C. Depends on the load on each machine D. Both, because 2 requests are made, 1 to each machine

The answer given is A, which I believe is a poor answer because it assumes that the latency between India and Singapore will be the lowest. While this may often be the case, it is not guaranteed, and it makes for a poor test question because additional information is needed to give an accurate answer.
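For reference, this is roughly what that pair of latency records looks like as a Route 53 change batch. This is a sketch assuming boto3's `change_resource_record_sets` request shape; the domain, IPs, and record details are made up. The key point is that each record names an AWS `Region`, and Route 53 answers with whichever record's Region has the lowest *measured* latency to the querying resolver, which is exactly why the question can't be answered without latency data.

```python
def latency_change_batch(domain, singapore_ip, oregon_ip):
    """Build a Route 53 ChangeBatch with two latency-based records for the
    same name. Route 53 resolves to whichever record's Region currently has
    the lowest measured latency to the resolver making the query."""
    def record(set_id, region, ip):
        return {
            "Action": "CREATE",
            "ResourceRecordSet": {
                "Name": domain,
                "Type": "A",
                "SetIdentifier": set_id,  # distinguishes the two records
                "Region": region,         # latency is measured per AWS Region
                "TTL": 60,
                "ResourceRecords": [{"Value": ip}],
            },
        }
    return {
        "Changes": [
            record("singapore", "ap-southeast-1", singapore_ip),
            record("oregon", "us-west-2", oregon_ip),
        ],
    }
```

Nothing in this structure encodes the user's location; the routing decision is made at query time from latency measurements, so "A" is only correct if Singapore happens to be closest in latency terms at that moment.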