
If serverless computing sounds a little contrived to you, you’re right: it is. Serverless computing isn’t really serverless, at least not yet; it would be more accurate to call it serverless development. If you are a backend boffin, or you spend most of your time writing Dockerfiles, you are probably not going to be super into serverless computing. This is because serverless computing allows applications to consist of chunks of code that do things in response to stimuli. What makes this different than other development is that those chunks of code don’t need to be woven into a traditional frontend-backend setup. Instead, serverless computing allows code to execute without the need for complicated backend configuration. Additionally, the services that provide serverless computing can easily scale an application as necessary, based on the activity the application is receiving.

How AWS Lambda supports serverless computing

We will discuss Amazon Web Services (AWS) Lambda, Amazon’s serverless computing offering. We are going to walk through one of Amazon’s use cases to better understand the value of serverless computing and how to get started with it.

  1. Have an application, build an application, or have an idea for an application.

    This could also be step zero, but you can’t really have a serverless application without an application. We are going to be looking at a simple abstraction of an app, but if you want to put this into practice, you’ll need a project.

  2. Create an AWS account, if you don’t already have one, and set up the AWS Command Line Interface on your machine.

    Quick Note: I am on OSX and I had a lot of trouble getting the AWS Command Line Interface installed and working. AWS recommends using pip to install it, but the executable never seemed to end up in the right place. Instead I used Homebrew, and then it worked fine.

  3. Navigate to S3 in the AWS console and create two buckets for testing purposes. One is going to be used for uploading pictures, and the other is going to receive the transformed pictures. The receiving bucket must have a name of the form “source bucket name” + “resized”. The code we are using requires this format in order to work; if you really don’t like that, you can modify the code to use a different format.
  4. Navigate to the AWS Lambda Management Console and choose the Create Function option, choose Author from scratch, and click the empty box next to the Lambda symbol to create a trigger. Choose S3, then specify the bucket that the pictures are going to be initially uploaded into. Under the event type, choose Object Created (All). Leave the trigger disabled and press the Next button.
  5. Give your function a name, and for now, we are done with the console.
  6. On your local machine, set up a workspace: create a root directory for the project, then install the async and gm libraries into it with npm, which creates the node_modules folder.
  7. Create a JavaScript file named index.js and copy and paste the code from the end of this post into it. It needs to be named index.js for this example to work; the function’s handler setting determines the entry point and can be changed to look for a different filename. The code we are using comes from an example on AWS located here, and I recommend you check out their documentation.

    If we look at the code we are pasting into our editor, we can learn a few things about using Lambda. We can see that there is an aws-sdk dependency in use, and that we use it to create an S3 client object. We get the information about the source bucket from the event object that is passed into the handler; this is why we named our buckets the way we did. We fetch the uploaded picture using the getObject method of our S3 client, with the file information taken from that same event object. The code grabs the file, puts it into a buffer, uses the gm library to resize the image, and then uses the same S3 client, specifying the destination bucket this time, to upload the result. Now we are ready to ZIP up the root folder and deploy this function to the Lambda instance we have created.

    Quick Note: While using OSX I had to zip my JS file and node_modules folder directly into a ZIP archive instead of recursively zipping the root folder. For some reason the upload doesn’t work unless the zipping is done this way, at least on OSX.

  8. We are going to upload using the Lambda Management Console; if you’re fancy, you can use the AWS Command Line Interface instead. So, get to the management console and choose Upload a .ZIP File. Click the upload button, specify your ZIP file, and then press the Save button.
  9. Now we will test our work. Click the Actions drop-down and choose the Configure test event option. Choose the S3 PUT test event and specify the bucket that images will be uploaded to. This creates a test that simulates an upload, and if everything goes according to plan, your function should pass.
  10. Profit!
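The bucket-naming convention from step 3 can be sketched in a couple of lines. The bucket name here is hypothetical, but the concatenation is exactly what the example function does:

```javascript
// Hypothetical source bucket name; the destination bucket must be
// the source bucket's name with "resized" appended (no separator).
var srcBucket = "my-photo-uploads";
var dstBucket = srcBucket + "resized";

console.log(dstBucket); // "my-photo-uploadsresized"
```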
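One detail from the step 7 walkthrough is worth seeing in isolation: S3 delivers object keys URL-encoded, with spaces appearing as plus signs, and the example code reverses that before calling getObject. A standalone sketch, using a made-up key:

```javascript
// A made-up object key as it might appear in an S3 event record:
// spaces become "+" and non-ASCII characters are percent-encoded.
var rawKey = "vacation+photos/IMG+0042.jpg";

// Undo the encoding the same way the Lambda example does.
var srcKey = decodeURIComponent(rawKey.replace(/\+/g, " "));

console.log(srcKey); // "vacation photos/IMG 0042.jpg"
```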
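The S3 PUT test event from step 9 is just a JSON document. Below is an abbreviated sketch of its shape (the console’s template carries more fields, and the bucket and key here are example values), along with how the handler reads it:

```javascript
// Abbreviated S3 PUT test event; the real console template includes
// extra fields the handler never reads. Bucket and key are examples.
var event = {
    Records: [{
        s3: {
            bucket: { name: "my-photo-uploads" },
            object: { key: "HappyFace.jpg" }
        }
    }]
};

// This mirrors how the handler extracts the source bucket and key.
var srcBucket = event.Records[0].s3.bucket.name;
var srcKey = decodeURIComponent(event.Records[0].s3.object.key.replace(/\+/g, " "));

console.log(srcBucket + "/" + srcKey); // "my-photo-uploads/HappyFace.jpg"
```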

I hope this introduction to AWS Lambda serves as a primer on serverless development in general. The goal here is to get you started. Serverless computing has some real promise. As a primarily front-end developer, I revel in the idea of serverless anything; I find that the absolute worst part of any development project is the back-end. That being said, I don’t think that sysadmins will be lining up for unemployment checks tomorrow. Once serverless computing catches on, and maybe grows and matures a little bit, we’re going to have a real juggernaut on our hands.

The code below is used in this example and comes from AWS:

// dependencies
var async = require('async');
var AWS = require('aws-sdk');
var gm = require('gm').subClass({ imageMagick: true }); // Enable ImageMagick integration.
var util = require('util');

// constants
var MAX_WIDTH  = 100;
var MAX_HEIGHT = 100;

// get reference to S3 client
var s3 = new AWS.S3();

exports.handler = function(event, context, callback) {
    // Read options from the event.
    console.log("Reading options from event:\n", util.inspect(event, { depth: 5 }));
    var srcBucket = event.Records[0].s3.bucket.name;
    // Object key may have spaces or unicode non-ASCII characters.
    var srcKey = decodeURIComponent(event.Records[0].s3.object.key.replace(/\+/g, " "));
    var dstBucket = srcBucket + "resized";
    var dstKey    = "resized-" + srcKey;

    // Sanity check: validate that source and destination are different buckets.
    if (srcBucket == dstBucket) {
        callback("Source and destination buckets are the same.");
        return;
    }

    // Infer the image type.
    var typeMatch = srcKey.match(/\.([^.]*)$/);
    if (!typeMatch) {
        callback("Could not determine the image type.");
        return;
    }
    var imageType = typeMatch[1];
    if (imageType != "jpg" && imageType != "png") {
        callback(`Unsupported image type: ${imageType}`);
        return;
    }

    // Download the image from S3, transform, and upload to a different S3 bucket.
    async.waterfall([
        function download(next) {
            // Download the image from S3 into a buffer.
            s3.getObject({
                Bucket: srcBucket,
                Key: srcKey
            }, next);
        },
        function transform(response, next) {
            gm(response.Body).size(function(err, size) {
                if (err) {
                    next(err);
                    return;
                }
                // Infer the scaling factor to avoid stretching the image unnaturally.
                var scalingFactor = Math.min(
                    MAX_WIDTH / size.width,
                    MAX_HEIGHT / size.height
                );
                var width  = scalingFactor * size.width;
                var height = scalingFactor * size.height;

                // Transform the image buffer in memory.
                this.resize(width, height)
                    .toBuffer(imageType, function(err, buffer) {
                        if (err) {
                            next(err);
                        } else {
                            next(null, response.ContentType, buffer);
                        }
                    });
            });
        },
        function upload(contentType, data, next) {
            // Stream the transformed image to a different S3 bucket.
            s3.putObject({
                Bucket: dstBucket,
                Key: dstKey,
                Body: data,
                ContentType: contentType
            }, next);
        }
    ], function(err) {
        if (err) {
            console.error(
                'Unable to resize ' + srcBucket + '/' + srcKey +
                ' and upload to ' + dstBucket + '/' + dstKey +
                ' due to an error: ' + err
            );
        } else {
            console.log(
                'Successfully resized ' + srcBucket + '/' + srcKey +
                ' and uploaded to ' + dstBucket + '/' + dstKey
            );
        }
        callback(null, "message");
    });
};

Erik Kappelman wears many hats including blogger, developer, data consultant, economist, and transportation planner. He lives in Helena, Montana and works for the Department of Transportation as a transportation demand modeler.
