Serverless Node.js and Angular S3 Uploads

Node.js is an incredible JavaScript runtime for server-based APIs. The one place it falls down is during long-running, event-loop-blocking operations. One of the first Node-based backends I wrote was for Chalkup, a company I co-founded in college nearly three years ago. Everything ran great, until a few people tried to upload large PDFs or PPTs simultaneously. Everything ground to a halt, response times skyrocketed, and autoscaling would kick in.

I spent a massive amount of time optimizing the code to stream files directly to the file system instead of buffering them in RAM first, then sending each file on to S3. Even that wasn't performant enough, though: we were spending way too many clock cycles piping the file around and deleting it from the file system, when we didn't need it on the server at the end of the day.

Enter serverless S3 uploads. "Serverless" is a bit of a misnomer, as you still need a server to create a signature to send to S3 along with your file. The file never touches your RAM or file system though - no wasted clock cycles.

We're going to use Angular and Hapi.js for this tutorial, my current frameworks of choice. Let's jump in and work through the flow from a user's perspective.

As a user, I'm looking at some sort of file picker or upload form. Let's say it's this super generic directive that creates a file input.


class FileUploadCtrl
    @$inject: ['$scope', 'S3']
    constructor: (@scope, @S3) ->

    uploadSuccess: (file) =>
        @scope.progressCallback {
            status: 'complete'
            file: file
        }

    uploadFailure: (error) =>
        @scope.progressCallback {
            status: 'failed'
            error: error
        }

    # Fat arrow keeps the controller as `this` when the link function
    # attaches this method as a DOM event handler.
    handleFileChange: (event) =>
        @scope.$apply =>
            @scope.progressCallback {status: 'uploading'}
            file = event.target.files[0]
            uploadedFile = @S3.upload file
            uploadedFile.$promise.then @uploadSuccess, @uploadFailure

angular.module('myApp').controller 'FileUploadCtrl', FileUploadCtrl

angular.module('myApp').directive 'fileUploader', ->
    {
        restrict: 'AE'
        scope:
            progressCallback: "&"
        template: '<input type="file" />'
        controller: 'FileUploadCtrl'
        controllerAs: 'fileUploadCtrl'
        link: (scope, element, attrs) ->
            # controllerAs publishes the controller on scope as 'fileUploadCtrl'
            $(element).find('input').on 'change', scope.fileUploadCtrl.handleFileChange
    }

This controller and directive control my experience as the user, and let the engineer provide meaningful progress feedback during the upload process. The directive is super easy to use; just add the following element to your page (in Jade here):

.file-upload(file-uploader, progress-callback='myPageCtrl.uploadCallback(file, status)')
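
For completeness, here's a minimal sketch of what the page controller on the receiving end of that callback might look like. MyPageCtrl, uploadCallback, and the /api/attachments endpoint are hypothetical names for this sketch, not part of the directive's contract:

class MyPageCtrl
    @$inject: ['$http']
    constructor: (@http) ->

    # Receives the locals passed through the progress-callback & binding.
    uploadCallback: (file, status) =>
        switch status
            when 'uploading'
                @uploading = true
            when 'complete'
                @uploading = false
                # Persist the uploaded file's metadata (hypothetical endpoint).
                @http.post '/api/attachments', file
            when 'failed'
                @uploading = false

angular.module('myApp').controller 'MyPageCtrl', MyPageCtrl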

This part is pretty boilerplate and straightforward, so I don't think it warrants much more explanation. You might notice the S3 factory injected into the upload controller above. After I click on the upload button and select my file, that S3 factory is going to do the rest of the work for me.

angular.module('myApp').factory 'S3', ($resource, $q, $http) ->
    SignAPI = $resource "/api/sign_s3", {},
        sign:
            method: "POST"
    S3 = 
        _getSignedUrl: (file) ->
            params = 
                file_name: file.name
                file_type: file.type
            return SignAPI.sign(params)
        upload: (file) ->
            # Mimic the shape of a $resource return value.
            response =
                $resolved: false
            response.$promise = $q (resolve, reject) ->
                signature = S3._getSignedUrl file
                success = (data) ->
                    params =
                        name: file.name
                        type: signature.file_type
                        size: file.size
                        uri: signature.url
                    resolve params
                    response.$resolved = true
                failure = (data) ->
                    reject data
                    response.$resolved = true
                sigSuccess = ->
                    $http.put(signature.signed_request, file, {
                        withCredentials: false
                        headers:
                            'Content-Type': signature.file_type
                    }).then success, failure
                signature.$promise.then sigSuccess, failure
            return response
    return S3

Well, that got code-heavy real fast. Let's walk through what this factory does step by step, starting from when S3.upload gets called.

The first thing we do is set up a fake $resource-style return object, so this function can be used in a promise-based environment the same way $resource would be. We set $resolved to false because the server hasn't responded to us yet. Then we use $q to create a new promise for us.

After the promise is created, we return the response object to the controller that called S3.upload, and then kick off the actual upload process.
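
Concretely, a caller can treat that return value just like a $resource result; this quick sketch (console logging only for illustration) mirrors what the upload controller above does:

result = S3.upload file
console.log result.$resolved    # false until the PUT to S3 settles
result.$promise.then (params) ->
    console.log params.uri      # the final S3 URL resolved by the factory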

The second thing we have to do is create a signed S3 URL. To do this we call the private S3._getSignedUrl function, which kicks off a call to our Hapi.js server to sign the URL. That request hits a handler that looks something like this one:

Joi = require 'joi'
uuid = require 'node-uuid'
AWS = require 'aws-sdk'
Boom = require 'boom'
mime = require 'mime'

S3 = new AWS.S3()

server.route
    method: "POST"
    path: "/api/sign_s3"
    config:
        validate:
            payload:
                file_name: Joi.string().trim().lowercase().replace(/\s/g, '_').required().description("Name of the File")
                file_type: Joi.string().regex(/^[^\s]+$/).description("MIME type").allow('')
        handler: (req, reply) ->
            file_type = req.payload.file_type
            # Fall back to extension-based detection when the browser
            # didn't supply a MIME type.
            if not file_type
                file_type = mime.lookup req.payload.file_name
            s3_params =
                Bucket: "my-bucket-name",
                Key: "uploads/#{uuid.v4()}/#{req.payload.file_name}",
                Expires: 60,
                ContentType: file_type,
                ACL: 'public-read'
            S3.getSignedUrl 'putObject', s3_params, (err, data) ->
                if err?
                    return reply Boom.wrap err
                else
                    return reply {
                        file_type: file_type
                        signed_request: data
                        url: "https://s3.amazonaws.com/my-bucket-name/#{s3_params.Key}"
                    }

Another big block of code, but it's pretty straightforward. It expects a POST to /api/sign_s3 with a payload that contains the file_name and file_type. Occasionally the browser doesn't provide the MIME type of the file for us, so we also use the mime helper module to detect the MIME type from the file name.
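
For instance, the fallback behaves roughly like this with the classic mime 1.x API:

mime = require 'mime'
console.log mime.lookup 'lecture_notes.pdf'   # 'application/pdf'
console.log mime.lookup 'slides.ppt'          # 'application/vnd.ms-powerpoint'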

The next part is the magical one. Using the S3 client on the server, which holds our AWS Access Key ID and AWS Secret Access Key, we create a URL with a signature on the end of it that allows the user's browser (me) to send my file directly to your write-protected S3 bucket. We then send a response back to the front end with the MIME type we used to create the signature, the signed request URL, and the URL the file will live at once the upload is complete.
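
To make that concrete, the signing response has roughly this shape (every value here is a placeholder; the query string carries the standard presigned-URL parameters that S3 validates):

response =
    file_type: 'application/pdf'
    signed_request: 'https://s3.amazonaws.com/my-bucket-name/uploads/<uuid>/notes.pdf?AWSAccessKeyId=...&Expires=...&Signature=...'
    url: 'https://s3.amazonaws.com/my-bucket-name/uploads/<uuid>/notes.pdf'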

Back to the browser. Here's the short bit of code that gets executed after the signed request is returned from the server, copied from above:

$http.put(signature.signed_request, file, {
    withCredentials: false
    headers:
        'Content-Type': signature.file_type
}).then success, failure

Using the response from the server, we can PUT our file directly to S3. Note that we are setting withCredentials: false and the Content-Type header. Without these, S3 won't be able to validate your signature and will send you back a cryptic SignatureDoesNotMatch response telling you the signature doesn't match the request.
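
One more gotcha: the browser won't even attempt this cross-origin PUT unless the bucket has a CORS policy allowing it. Here's a one-time setup sketch using the same aws-sdk client as the handler above; 'https://example.com' is a placeholder for wherever your Angular app is served from:

AWS = require 'aws-sdk'
S3 = new AWS.S3()

# Allow browsers on our origin to PUT directly to the bucket.
params =
    Bucket: 'my-bucket-name'
    CORSConfiguration:
        CORSRules: [{
            AllowedOrigins: ['https://example.com']
            AllowedMethods: ['PUT']
            AllowedHeaders: ['*']
        }]

S3.putBucketCors params, (err) ->
    console.error 'CORS setup failed', err if err?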

Once the upload to S3 is complete, the success callback from above fires and the result bubbles all the way up to the controller behind the view I'm looking at as the user. You can then tell me the upload is finished, add the file to wherever it belongs, and create your database record for it. It really is that easy, and the file never gets streamed to or from your server!