How to build an app pipeline with your engineers and customers in mind

Resilient Tech
6 min read · Mar 9, 2021


by Blue Thomas, CTO @ Resilia

At the start of 2020 we had a legacy monolith that represented the company’s MVP, and a whole roadmap of feature expansion ahead of us. After a couple of minor updates that turned into major fires, it became clear that a new stack was in order, aka the dreaded Rebuild™.

The Resilia engineering team steeled themselves, spiked a couple of POCs, picked our languages and frameworks, and set out to answer a number of DevOps-esque questions:

  • How are we going to build, test and deploy this thing?
  • How can we enable this to be done continuously for every changeset?
  • How are we going to manage the variations between environments?
  • How can we configure our application’s connections to databases and other resources?
  • How can we keep the developer experience as simple as possible?

One way to tackle complex questions involving many moving parts is to constrain your options. My favorite way to do this is to adhere to tried-and-true conventions with known upsides, and in this instance, for our application delivery architecture, the Twelve-Factor App methodology provides exactly that.

The Twelve-Factor App

Given that the original Twelve-Factor App spec was published in 2012 by Adam Wiggins, a co-founder of Heroku, it may come as no surprise that we decided to use Heroku for our application pipelines and runtimes. That said, you can still apply the conventions in other cloud environments built on ephemeral machines, such as container-based runtimes like Kubernetes.

You can read the spec yourself, but let me call out a few of my favorite “factors” that help build a mental model of how your app should work:

  • Codebase — one codebase, many deploys
  • Config — store your configuration in the environment
  • Build, release, run — strictly separate build and run phases
  • Dev/prod parity — keep development, staging and production as similar as possible

Codebase

We technically have 2 “apps” in one codebase — our GraphQL server, and our statically built React SPA (Single Page Application). More on this in another article, but suffice it to say, this allows us to build quickly in small iterations and simplify how the entire stack gets pushed through each stage of the deployment pipeline.

Config && Build, release, run && Dev/prod parity

These 3 “factors” are related; the best way to maintain environment parity is to build a single final (and immutable) artifact and allow it to “come alive” (release) in any environment by having it react at runtime to the configuration around it. The configuration does not live in the codebase — it lives in the mechanism that controls the runtime (think: Kubernetes Orchestrator + ConfigMaps or Heroku’s Dyno Manager + Config Vars). These are then usually exposed by the runtime mechanism as environment variables to your application.

In Practice

Do you have bootstrapping logic (or worse, on-the-fly logic in a request context) that branches depending on which environment it senses it’s in? This is an anti-Twelve-Factor smell! Get rid of it by pushing those nuances to the edges of your runtime: reimplement that logic as close to the release phase as possible, and exorcise it from your application codebase entirely.
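To make that concrete, here is a minimal, hypothetical Node sketch (the hosts and variable names are made up, not our actual config). The first function is the smell; the second is the Twelve-Factor version.

```javascript
// Anti-pattern (hypothetical): the app branches on which environment it
// thinks it is running in, so environment knowledge leaks into the codebase.
function getDatabaseUrlSmell() {
  if (process.env.NODE_ENV === 'production') {
    return 'postgres://prod-db.internal:5432/app';
  }
  if (process.env.NODE_ENV === 'staging') {
    return 'postgres://staging-db.internal:5432/app';
  }
  return 'postgres://localhost:5432/app';
}

// Twelve-Factor style: the app only reads its config from the environment;
// every environment supplies the same variable, populated at release time.
function getDatabaseUrl() {
  const url = process.env.DATABASE_URL;
  if (!url) throw new Error('DATABASE_URL is not set');
  return url;
}
```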

One way we do this is by keeping the dependent resources (like databases) as similar as possible across all environments. We make sure the environment variable (config) is named and formatted the same way everywhere, and that the application can connect to the resource regardless of where it is started. But sometimes you need a little magic, and this is where an entrypoint comes in.

Entrypoint

We use an entrypoint.sh script as, well, the entrypoint for the runtime. This simple shell script acts as a wrapper and a bridge between any environmental nuances and your application’s expectations. Any environment variables that need to be renamed or reformatted for uniformity can be handled here. The more consistently your resources are managed across all environments, the simpler this script is and the easier it is to maintain. This concept is ripped right from Docker’s best practices.
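Here is a minimal sketch of what such a script can look like (the add-on variable name below is illustrative, not our exact configuration):

```bash
#!/usr/bin/env sh
# entrypoint.sh (sketch): bridge environmental nuances to the app's expectations.

# Illustrative example: an attached add-on exposes its connection string under
# a provider-specific name; normalize it to the DATABASE_URL the app expects.
if [ -z "$DATABASE_URL" ] && [ -n "$HEROKU_POSTGRESQL_NAVY_URL" ]; then
  export DATABASE_URL="$HEROKU_POSTGRESQL_NAVY_URL"
fi

# Hand off to the real command (e.g. `entrypoint.sh node server.js`); exec
# replaces the shell so signals reach the app directly, per Docker best practice.
exec "$@"
```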

Ephemeral environments

Ephemeral environments are short-lived runtimes that are provisioned on the fly and torn down when they’re no longer needed. This is very different from your local development, staging, and production environments, which are statically resourced and configured. On Heroku, you can configure lifecycle hooks within the app.json config and use those hooks to spin up and tear down resources that Heroku can’t manage on its own. This allows you to keep your entrypoint.sh file even cleaner. On Heroku, there are 2 types of ephemeral environments: test and review.
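For reference, a trimmed app.json along these lines might look like the sketch below; the script paths and add-on plans are placeholders, not our real configuration.

```json
{
  "name": "example-app",
  "scripts": {
    "postdeploy": "bin/provision_auth0.sh",
    "pr-predestroy": "bin/deprovision_auth0.sh"
  },
  "environments": {
    "test": {
      "addons": ["heroku-postgresql:in-dyno"],
      "scripts": { "test": "npm test" }
    },
    "review": {
      "addons": ["heroku-postgresql:hobby-dev"]
    }
  }
}
```

The test block configures the ephemeral CI app, while the review block plus the two lifecycle scripts cover review apps; those hooks show up again in the workflow below.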

Velocity, Variance and Visualization

By building the final application artifact once and allowing it to behave similarly against different sets of resources according to its runtime configuration, you can more easily achieve the Twelve-Factor App model. Heroku manages this for you with an explicit build step that creates what they call a “slug”, and container-based runtimes do this by executing binary “images” — both of which are essentially immutable artifacts representing your business logic. Building per environment (such as branch-based deployment schemes) is an anti-pattern as it can introduce variance and break dev/prod parity.

A single artifact also allows you to progress your code through testing and all environments in your pipeline more quickly and with higher visibility into what’s being tested and run at any given time.

New to all of this? Check out the whitepaper linked below.

Another great read is Docker and the 3 ways of DevOps — while it doesn’t reference the Twelve-Factor App, it shares a lot of concepts and introduces you to some of the higher-order thinking behind advanced DevOps best practices. It’s a great complement to the mental model described in this write-up.

Where are we now?

Let’s walk through the part of Resilia’s developer experience workflow that gets a changeset from an engineer’s laptop into production:

(1) Open a Pull Request in GitHub

  • Heroku runs CI tests against the branch
  • Heroku spins up a new ephemeral Review app for manual QA
    • Heroku provisions resources via add-ons as needed
    • app.json postdeploy hook allocates Auth0 resources we need

(2) Pull Request is approved, closed, and the branch merged into main

  • Heroku tears down the Review app
    • Heroku de-provisions the add-on resources and throws them away
    • app.json pr-predestroy hook deallocates our Auth0 resources
  • Heroku runs CI tests against the main branch
  • Heroku builds the final artifact (aka the “slug”)
  • Heroku deploys the slug to our staging app

(3) Promote to production in Heroku *

  • Heroku deploys the slug to our production app

(4) Rinse and repeat as often as needed

* a manual process deliberately introduced in the pipeline, via a button in the Heroku UI

For the future: with a disciplined feature flagging practice, you can easily automate continuous deployment straight to production and mitigate risks by keeping your Pull Requests small and merging often.
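A feature flag can be just one more piece of runtime config. A minimal, hypothetical sketch (the flag name is made up):

```javascript
// Hypothetical flag, read from runtime config like any other Twelve-Factor
// setting, so it can be flipped per environment without rebuilding the slug.
const newOnboardingEnabled = process.env.FEATURE_NEW_ONBOARDING === 'true';

function onboardingMessage(name) {
  return newOnboardingEnabled
    ? `Welcome, ${name}! Here's the new guided setup.`
    : `Welcome, ${name}.`;
}

console.log(onboardingMessage('Ada'));
```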

By adhering to the major “factors” described above, we have succeeded in building a pretty straightforward and comprehensible developer experience that dovetails seamlessly into our test, review, and deployment pipeline.

How does your team build and deploy applications?


Written by Resilient Tech

Resilia’s mission is to strengthen the capacity of nonprofits and help grantors scale impact through data-driven technology solutions. https://www.resilia.com
