Deploying a Single SPA Application on AWS

This post is a follow-up to Hosting a Single Page Application in AWS. It builds on that article with some specifics for the Single SPA micro-frontend framework.

Beyond just hosting the application as described in the article linked above, there are a couple of core problems to solve when using the recommended setup:

  1. Hosting and managing shared dependencies
  2. Deploying individual microfrontends
  3. Managing and deploying the import map
  4. Automating all of this

Hosting and Managing Shared Dependencies

There are a couple of approaches described in the recommended setup; I went with the import map system.

From there, one could use an existing CDN like unpkg to host those dependencies, but relying on a third-party CDN makes me slightly uncomfortable.

Instead, I’d recommend self-hosting: use the shared dependencies library to build the shared dependencies and then push them to the target S3 bucket.

To accomplish this, add a build script to package.json:

{
  "scripts": {
    "build": "shared-deps build"
  }
}

And then syncing the files to S3:

aws s3 sync --exclude README.md build/ s3://example-spa-app/assets/shared/

There’s no need to worry about versioning here since shared-deps includes package versions in the URLs.
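With the versions referenced in the import map below, the synced prefix ends up looking roughly like this (a sketch, assuming sharedAssetPrefix points at /assets/shared):

assets/shared/single-spa@5.9.3/lib/system/single-spa.min.js
assets/shared/react@16.13.1/umd/react.production.min.js
assets/shared/react-dom@16.13.1/umd/react-dom.production.min.js
assets/shared/rxjs@7.5.5/dist/bundles/rxjs.umd.min.js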

I chose to include these shared deps as an inline import map in the root config’s index.html, simply because they don’t change often and I don’t have many of them to manage. sharedAssetPrefix is configured when the template is rendered.

<script type="systemjs-importmap">
    {
        "imports": {
            "single-spa": "<%= sharedAssetPrefix %>/single-spa@5.9.3/lib/system/single-spa.min.js",
            "react": "<%= sharedAssetPrefix %>/react@16.13.1/umd/react.production.min.js",
            "react-dom": "<%= sharedAssetPrefix %>/react-dom@16.13.1/umd/react-dom.production.min.js",
            "rxjs": "<%= sharedAssetPrefix %>/rxjs@7.5.5/dist/bundles/rxjs.umd.min.js"
        }
    }
</script>

Deploying Individual Microfrontends

One of the deployment considerations described in the AWS deployment article is that assets are cached by CloudFront at its edges. This can lead to old versions of microfrontends being served if new URLs are not used.

The way around this in the Single SPA world is to include the application’s version in the URL. Version, for me, means either a git commit hash on staging or a git tag name in production. Single SPA’s default webpack config builds without any hashes in filenames, so versioning becomes a concern of the deploy process.

The gist of it is to npm run build then aws s3 sync the resulting files into a versioned prefix (directory) on S3.

We’ll skip the import map for now and cover it below, but here is an example deploy script:

#!/usr/bin/env bash

set -e

if [ "$#" != 2 ]; then
    echo "Usage: $0 {environment} {version}" >&2
    exit 1
fi

ENV=$1
VERSION=$2

PACKAGE_NAME="name-of-microfrontend-here"
BUCKET="example-spa-app-${ENV}" # different bucket per environment

NODE_ENV="$ENV" npm run build

aws s3 sync dist/ "s3://${BUCKET}/assets/${PACKAGE_NAME}/${VERSION}/"

And run the script like this:

./deploy staging c1156698b77bf771ae69b21a7b02d8ddb7f7a262 # commit hash
./deploy prod 20230517.1 # tag, more on this in automation below
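Because the prefix is versioned, the files under it never change once uploaded, so they’re safe to cache aggressively. One optional tweak to the sync above (not something the rest of the process depends on) is to pass long-lived cache headers:

aws s3 sync dist/ "s3://${BUCKET}/assets/${PACKAGE_NAME}/${VERSION}/" \
    --cache-control 'public,max-age=31536000,immutable'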

Managing and Deploying Import Maps

The recommended setup mentions the import map deployer. For me, that’s not the ideal solution: I avoid running services where I can.

The import map problem is:

  1. Managing the state of what’s meant to be in the import map
  2. Uploading and deploying the import map from the state

#1 could be a file in the S3 bucket, and that’s the problem the import map deployer solves: concurrently reading and writing a file on S3 is gonna cause you to have a bad time.

An alternative, since we’re all in on AWS in this article, is to use something like DynamoDB or, my preference, SSM Parameter Store.

To make this work, the SSM value needs to be set to the new path of the deployed file, so the deploy script above remains largely the same, with an extra command to set an SSM parameter value.

#!/usr/bin/env bash

set -e

if [ "$#" != 2 ]; then
    echo "Usage: $0 {environment} {version}" >&2
    exit 1
fi

ENV=$1
VERSION=$2

PACKAGE_NAME="name-of-microfrontend-here"
BUCKET="example-spa-app-${ENV}" # different bucket per environment

NODE_ENV="$ENV" npm run build

aws s3 sync dist/ "s3://${BUCKET}/assets/${PACKAGE_NAME}/${VERSION}/"

aws ssm put-parameter \
    --name "/exampleapp/${ENV}/${PACKAGE_NAME}" \
    --description "Deploy ${PACKAGE_NAME} ${VERSION}" \
    --type String \
    --overwrite \
    --output json \
    --value "${PACKAGE_NAME}/${VERSION}/${PACKAGE_NAME}.js"

The value here is relative to the assets prefix used in the sync; for a prod deploy, for example, the stored value ends up being name-of-microfrontend-here/20230517.1/name-of-microfrontend-here.js. Now we just need code to pull the parameters and build an import map.

#!/usr/bin/env node

import { writeFileSync } from 'fs';
import { SSMClient, GetParametersByPathCommand } from '@aws-sdk/client-ssm';

if (process.argv.length < 3) {
    console.error('missing {environment} argument');
    process.exit(1)
}

const environment = process.argv[2].toLowerCase()
const ssm = new SSMClient();
const pathPrefix = `/exampleapp/${environment}`;
let nextToken = undefined;
const parameters = [];

do {
    let command = new GetParametersByPathCommand({
        MaxResults: 10,
        NextToken: nextToken,
        Path: pathPrefix,
        Recursive: true,
    });
    let response = await ssm.send(command);
    parameters.push(...response.Parameters);
    nextToken = response.NextToken;
} while (typeof nextToken !== 'undefined');

const namespace = '@exampleapp';
let imports = parameters.reduce((result, parameter) => {
    // all of the params are named `/exampleapp/{env}/{packageName}`, so the
    // last segment of the parameter name is the package name; prefix it with
    // the constant namespace
    let packageName = parameter.Name.substr(1).split('/').pop();

    result[`${namespace}/${packageName}`] = `/assets/${parameter.Value}`;

    return result;
}, {});

writeFileSync('dist/importmap.json', JSON.stringify({imports}));

console.log(JSON.stringify({imports}, undefined, 2));
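For a single microfrontend deployed with the script above, the generated importmap.json ends up looking something like this (using the placeholder package name and a tag version):

{
    "imports": {
        "@exampleapp/name-of-microfrontend-here": "/assets/name-of-microfrontend-here/20230517.1/name-of-microfrontend-here.js"
    }
}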

After writing the file locally, it needs to be uploaded to S3 at /assets/importmap.json (or wherever). Because the filename won’t change to reflect the version, it also needs to be invalidated in CloudFront.

So all together, the deploy process for the import map is: build, push to S3, start a CloudFront invalidation:

#!/usr/bin/env bash

set -e

if [ "$#" != 1 ]; then
    echo "Usage: $0 {environment}" >&2
    exit 1
fi

ENV=$1

case "$ENV" in
    prod)
        CLOUDFRONT_DISTRIBUTION="changeme: dist for prod"
        ;;
    staging)
        CLOUDFRONT_DISTRIBUTION="changeme: dist for staging"
        ;;
    *)
        echo "environment should be one of: [prod, staging]"
        exit 1
esac

BUCKET="example-spa-app-${ENV}"

npm run build "$ENV" # runs the script above

aws s3 cp \
    --cache-control 'max-age=1800,must-revalidate' \
    dist/importmap.json \
    "s3://${BUCKET}/assets/importmap.json"

aws cloudfront create-invalidation \
    --distribution-id "$CLOUDFRONT_DISTRIBUTION" \
    --paths "/assets/importmap.json"

Pulling from Parameter Store doesn’t mean we can skip concurrency control completely, but it can happen at the build-process level.

In practice, I use GitHub Actions to deploy the import map with a workflow_dispatch trigger that lets other repos call into the import map repository and start the build. GitHub Actions offers concurrency control to avoid multiple builds running at the same time, and GitHub offers OpenID Connect for accessing AWS resources, which is what this example workflow uses. That’s a whole article in itself, so I’ll gloss over it for now.

name: deploy

run-name: Deploy to ${{ inputs.environment }} from ${{ inputs.source }}

concurrency:
  group: deploy-${{ inputs.environment }}

on:
  workflow_dispatch:
    inputs:
      environment:
        description: The environment to deploy
        required: true
        type: choice
        options:
          - prod
          - staging
      source:
        description: What triggered the workflow
        required: false
        default: '-'
        type: string

permissions:
  contents: read
  id-token: write # necessary for AWS

jobs:
  deploy:
    environment:
      name: ${{ inputs.environment }}
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - name: aws login
        uses: aws-actions/configure-aws-credentials@v1-node16
        with:
          role-to-assume: ${{ vars.DEPLOY_AWS_ROLE }}
          aws-region: us-east-1
          role-session-name: importmap-deploy-${{ inputs.environment }}
      - name: setup node
        uses: actions/setup-node@v3
        with:
          node-version: '18'
          registry-url: 'https://registry.npmjs.org'
          cache: 'npm'
      - name: npm install
        run: npm ci
        env:
          NODE_AUTH_TOKEN: ${{ secrets.NPM_TOKEN }}
      # run the bash script above
      - name: deploy
        run: ./deploy ${{ inputs.environment }}

Deployment Automation (Continuous Delivery)

A lot of the code above is bash scripts. Those can be run manually, via GitHub Actions, or via any other CI system.

As I touched on in the import map section, I settled on GitHub Actions to deploy my own application, and 10/10 would recommend.

An example workflow builds the static assets, pushes them to S3 at the versioned URL prefix, points the import map parameter at the new version for the microfrontend, then kicks off the import map build. Fun, huh?

I’m deploying to multiple environments, so my CD pipeline works like this:

  • Anything merged into main (the default branch) goes directly to staging
  • Any new tag goes to production
  • Users should be able to manually trigger the workflow to deploy

GitHub Actions also provides the concept of environments; we use these to provide a set of variables (accessed via ${{ vars.THING }}) to workflows based on which environment is being deployed. The environment can also be used to harden OIDC + AWS role assumption a bit, as sketched below.
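As a sketch of that hardening (the account ID and repository name here are placeholders), the trust policy on the deploy role can require the OIDC token’s subject to match a specific repository and environment:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "Federated": "arn:aws:iam::123456789012:oidc-provider/token.actions.githubusercontent.com"
            },
            "Action": "sts:AssumeRoleWithWebIdentity",
            "Condition": {
                "StringEquals": {
                    "token.actions.githubusercontent.com:aud": "sts.amazonaws.com",
                    "token.actions.githubusercontent.com:sub": "repo:ChangeMeYourOwner/name-of-microfrontend-here:environment:prod"
                }
            }
        }
    ]
}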

In short, the entire deploy process is pretty much the same regardless of microfrontend, so the actual shared workflow might look something like this:

name: deploy

permissions:
  contents: read
  id-token: write # for aws access

on:
  workflow_call:
    inputs:
      environment:
        required: true
        type: string
      version:
        required: true
        type: string

jobs:
  deploy:
    runs-on: ubuntu-latest
    environment:
      name: ${{ inputs.environment }}
    steps:
      - uses: actions/checkout@v3
      - name: aws login
        uses: aws-actions/configure-aws-credentials@v1-node16
        with:
          # environment specific variable :point_down:
          role-to-assume: ${{ vars.AWS_DEPLOY_ROLE }}
          role-session-name: ${{ github.event.repository.name }}
          aws-region: us-east-1
      - uses: actions/setup-node@v3
        with:
          node-version: '18'
          registry-url: https://registry.npmjs.org
          cache: 'npm'
      - name: npm ci
        run: npm ci --omit=dev
        env:
          NODE_AUTH_TOKEN: ${{ secrets.NPM_TOKEN }}

      # the deploy script from the "Deploying Individual Microfrontends" section above
      # remember: this builds the app, uploads to S3, and sets the import map parameter
      - name: deploy
        run: ./deploy "${{ inputs.environment }}" "${{ inputs.version }}"

      # then start the import map build in the other repository
      - name: start import map build
        uses: actions/github-script@v6
        with:
          script: |
            github.rest.actions.createWorkflowDispatch({
              owner: 'ChangeMeYourOwner',
              repo: 'spa-importmap-repo',
              workflow_id: 'deploy.yml',
              ref: 'main',
              inputs: {
                environment: '${{ inputs.environment }}',
                source: '${{ github.repository }}',
              },
            })

And then deploying to staging uses the shared workflow:

name: deploy-to-staging
# use concurrency to stop builds and only keep the latest
concurrency:
  group: ${{ github.workflow }}
  cancel-in-progress: true

on:
  push:
    branches:
      - main
  workflow_dispatch:

jobs:
  deploy:
    uses: ./.github/workflows/deploy.yml
    secrets: inherit
    with:
      environment: staging
      version: ${{ github.sha }} # commit hash for staging

And a deploy to prod would be very similar but only run for tags:

name: deploy-to-prod

concurrency:
  group: ${{ github.workflow }}
  cancel-in-progress: true

on:
  push:
    tags:
      - '*'
  workflow_dispatch:

jobs:
  deploy:
    if: startsWith(github.ref, 'refs/tags/') # make sure this can only run on tags
    uses: ./.github/workflows/deploy.yml
    secrets: inherit
    with:
      environment: prod
      version: ${{ github.ref_name }} # use the tag for the version

Deployment Considerations for the Root Config

The actual root config uploads an index.html file as part of its build and deploy process. Like the import map, this needs a CloudFront invalidation as part of the deploy.
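Here’s a minimal sketch of that deploy, reusing the bucket and distribution placeholders from the scripts above (the exact paths depend on where the root config’s assets are served from):

#!/usr/bin/env bash

set -e

ENV=$1
BUCKET="example-spa-app-${ENV}"
CLOUDFRONT_DISTRIBUTION="changeme: dist for ${ENV}"

NODE_ENV="$ENV" npm run build

# the root config bundle itself can be versioned just like the other
# microfrontends; index.html keeps a stable URL, so it gets short-lived
# caching and an invalidation
aws s3 cp \
    --cache-control 'max-age=1800,must-revalidate' \
    dist/index.html \
    "s3://${BUCKET}/index.html"

aws cloudfront create-invalidation \
    --distribution-id "$CLOUDFRONT_DISTRIBUTION" \
    --paths "/index.html"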

Wrap Up

The key to all of this is building the process outside of continuous integration/delivery pipelines, then integrating the process (the scripts) into them.

When I first implemented this workflow on my Single SPA prototype, I did it all manually. Every deploy across ~4 repositories. This forced me to script the process and dogfood the actual workflow before automating it.

Hopefully this helps some of y’all struggling with Single SPA application deployments!