DevOps & Deployments
This document outlines how our deployment process works, as well as the workarounds we use to debug and develop in the cloud.
How to Deploy to Production
First, merge your new changes to the master branch of the monorepo. The deployment process is identical for the backend and the frontend.
Navigate to the repository homepage.
Click on the Releases tab on the right of the homepage.
Copy the most recent release version, then click “Draft a new Release”.
Give the release a descriptive title, then click “Publish release” to trigger a production release.
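If you prefer the command line, the same release can be drafted with the GitHub CLI. This is a sketch only; the tag and title below are placeholders, not real values from our release history:

```shell
# Publishing the release triggers the production pipeline, exactly as in the UI flow.
# v1.2.4 and the title/notes are placeholders.
gh release create v1.2.4 --title "Descriptive release title" --notes "What changed"
```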
As part of the pipeline, we send alerts to Slack to report progress. The release process to production, while automated, requires two manual approvals to continue. The first approval comes directly after the container is built, just before it is deployed. The second is required directly after the API and Sidekiq have been deployed successfully, but before the new version of the Angular app is deployed. These checkpoints preserve 100% uptime when a schema or data migration is required.
How to deploy to staging
Merge your changes to the “staging” branch of the monorepo. Once merged, CodePipeline kicks off a new build, and the latest commit is deployed immediately to the staging stack. The only difference between the production deployment pipeline and the staging deployment pipeline is that the production pipeline requires manual approvals to fully succeed.
Schema and Data Migrations
Schema and data migrations are not automated on either the staging or production stack. This was an intentional decision: migrations are infrequent enough, and dangerous enough, that we prefer a developer to run them manually and respond immediately in the event of an issue.
How do deployments work under the hood?
First, the pipeline is triggered via GitHub webhooks. For production, it is triggered by the creation of a new release; for staging, by a new commit merged to the staging branch.
Once the source code is pulled and stored in S3, the container for the API/Sidekiq worker is built. Full steps can be found in the `./mustard/deploy` directory. The built images are then published to ECR.
Once the images are published to ECR, the Sidekiq and API clusters are restarted, which forces them to pull the latest version of the code and update automatically.
The frontend step runs last: it compiles the Angular assets into bundles and publishes them directly to S3. Full steps can be found in the `./mustard/deploy` directory.
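The container stage described above can be sketched as follows. This is an approximation (the authoritative steps live in `./mustard/deploy`), and the registry, tag, cluster, and service names are placeholders:

```shell
# Build and publish the API/Sidekiq image (registry and tag are placeholders).
docker build -t api .
docker tag api "${ECR_REGISTRY}/api:${GIT_SHA}"
docker push "${ECR_REGISTRY}/api:${GIT_SHA}"

# Force ECS to pull the freshly published image by restarting the services.
aws ecs update-service --cluster api-cluster --service api --force-new-deployment
aws ecs update-service --cluster api-cluster --service sidekiq --force-new-deployment
```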
How do I run the rails console in production/staging?
Because the Rails API is hosted on ECS, we can’t go the traditional route of SSHing into the EC2 instance and running the console directly on the same box. Instead, we whitelist our local IP and use an up-to-date container image to run the console locally, connecting directly to the external dependencies (Postgres etc.).
To add your local IP to the whitelist:
Navigate to the AWS security group associated with the intended Postgres instance (i.e. production or staging).
Visit the “Inbound Rules” tab.
Add a rule: set the type to PostgreSQL (port 5432), set the source to “My IP”, and give it a descriptive name.
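The same rule can be added from the command line. The security-group ID below is a placeholder, and `checkip.amazonaws.com` is one way to discover your public IP:

```shell
# Look up your public IP, then open port 5432 to it on the target security group.
MY_IP="$(curl -s https://checkip.amazonaws.com)"
aws ec2 authorize-security-group-ingress \
  --group-id sg-0123456789abcdef0 \
  --protocol tcp \
  --port 5432 \
  --cidr "${MY_IP}/32"
```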
To run a specific version of the codebase against the production/staging environment:
Check out the intended commit (e.g. the latest master branch).
Build the Docker image, tagging it `api:staging` for staging or `api:latest` for production:
docker build -t api:staging .
docker build -t api:latest .
Run a Rails console on that image:
docker run --env-file .env.staging -i -t api:staging rails c
How do I run a sidekiq job in production/staging?
Run the Sidekiq job directly from the Rails console (outlined previously). To run it without a dependency on Redis, ensure you use the `perform_now` method rather than enqueueing the job. Without this, the job is pushed to Redis and may unintentionally be picked up by your local Sidekiq process, which is watching your local Redis for new jobs.
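If you only need to fire a single job, you can skip the interactive console and invoke it through `rails runner` instead. `SomeJob` and its argument are hypothetical placeholders; substitute your real job class:

```shell
# Runs the job inline via perform_now (no Redis round-trip), against staging.
docker run --env-file .env.staging -i -t api:staging \
  rails runner 'SomeJob.perform_now("some-arg")'
```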
How do I access the production database with my favourite Postgres application?
Whitelist your IP as described in “How do I run the rails console in production/staging?”, download your favourite Postgres client, and use the credentials in the .env files to connect. Alternatively, the environment variables in the “Task Definitions” section of the ECS console in the AWS account contain the correct credentials.
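Assuming the env file uses conventional variable names (these names are assumptions; check your .env file for the real keys), a command-line connection looks like:

```shell
# Load the staging credentials into the shell, then connect with psql.
set -a; source .env.staging; set +a
psql "host=${DB_HOST} port=5432 dbname=${DB_NAME} user=${DB_USER} password=${DB_PASSWORD}"
```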
How do I run a migration against staging/production?
Build the Docker image as described in “How do I run the rails console in production/staging?”, then run the migration against the intended environment:
docker run --env-file .env.staging -i -t api:staging rails db:migrate
docker run --env-file .env.production -i -t api:latest rails db:migrate
How do I view/change the environment variables in staging/production?
Visit the “Task Definitions” section of the ECS service in AWS. Ensure you are in the correct region for the cluster (or you won’t see anything).
In task definitions, view the latest version of the service you want to change.
Click “Create new revision”.
Scroll down to “Container Definitions” and click on your container.
Scroll down to “Environment -> Environment Variables” to see the current environment variable values.
Edit as necessary.
Click “update”.
Go back to the associated cluster.
Click on the associated service.
Click “Update” to update the service.
In the “Task definition” section of the update menu, change the revision to the most recent one you just created.
Click “skip to review”.
Then click “update service”.
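The console steps above can also be scripted with the AWS CLI and jq. This is a sketch; the task-definition family, cluster, and service names are placeholders:

```shell
# Export the current task definition to a file.
aws ecs describe-task-definition --task-definition api \
  --query taskDefinition > td.json

# Edit the environment values in td.json, then strip the read-only fields
# before re-registering it as a new revision.
jq 'del(.taskDefinitionArn, .revision, .status, .requiresAttributes,
        .compatibilities, .registeredAt, .registeredBy)' td.json > td-new.json
aws ecs register-task-definition --cli-input-json file://td-new.json

# Point the service at the new revision (passing just the family name
# selects the latest ACTIVE revision).
aws ecs update-service --cluster api-cluster --service api --task-definition api
```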