Release Practices at Grio
In a previous blog post, I covered the “textbook” definition of continuous integration, along with a handful of tools and practices that fulfill, or help to fulfill, that definition. These include breaking an app into components (e.g. front-end and back-end, or, for much larger projects, microservices), using “watch” utilities locally to run tests iteratively, and choosing test-oriented frameworks (e.g. Rails, Django, Grails). However, I didn’t say much about any specific continuous integration setup, nor about the third-party services that combine to make an efficient release process. I also didn’t discuss how continuous integration fits into the larger cycle of deployment and release management. I aim to cover some of those topics here, and to fill in the larger picture of how CI helps ensure code quality and stability in a software project.
Much like projects themselves, release processes can be more or less complex depending upon the requirements of the project and problem domain. The right workflow depends on a number of factors, including the complexity of the project, team size, and stage of the product. For example, for one of our larger clients, Rivals.com, we use a combination of more than a half-dozen third-party services and custom tools to manage the full release cycle. Rivals is a large, high-traffic, well-established product with a comparatively large development team (currently eight developers on the web team alone). Insight into the changes going into any given release is critical for site reliability, and orchestrating the activity of this many team members is more involved than with a solo or dual development team. I’ll contrast Rivals’ setup with the one we use for our internal billing system, Moment, where one or two developers at a time work on a smaller-scale, beta product. At that stage in the product cycle, fast iteration takes precedence over change management.
Rivals.com
As mentioned above, Rivals is a larger project in terms of complexity, team size, and userbase. As a result, agility needs to be carefully balanced with site reliability. In practice, this means that in addition to the usual CI/CD tools (i.e. auto-triggered test builds and deployments), we also use a handful of dedicated code quality and change management tools.
Pivotal Tracker
Pivotal Tracker is one of the first and last stops in the release cycle. Tasks are created here and organized by priority into sprints, then closed out as work is completed. Though this can be managed via the Tracker web UI, we prefer to change task states by tagging commits with a Tracker ID, e.g. [started 1234567] or [finished 1234567]. Tagging commits in this way allows us to auto-close issues, and also lets us determine which tasks will eventually be included in a release, using a custom tool we’ll talk about a little later.
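The parsing side of this convention is straightforward. Here’s a minimal sketch of how a tool might pull story IDs and states out of commit messages — the `tracker_stories` helper and the exact regex are hypothetical, not Rivals’ actual tooling:

```ruby
# Hypothetical sketch: extract Tracker story IDs and states from commit
# messages that follow the "[<state> <story-id>]" tagging convention.
TRACKER_TAG = /\[(started|finished|fixed)\s+#?(\d+)\]/i

def tracker_stories(commit_messages)
  commit_messages
    .flat_map { |msg| msg.scan(TRACKER_TAG) }          # one [state, id] pair per tag
    .map { |state, id| { state: state.downcase, story_id: id } }
end

messages = [
  "[finished 1234567] Support avatar uploads",
  "Refactor billing job (no story)",
  "[started 7654321] Begin roster import"
]
tracker_stories(messages)
# => [{ state: "finished", story_id: "1234567" },
#     { state: "started",  story_id: "7654321" }]
```

A real integration would then feed these IDs to the Tracker API to transition the corresponding stories.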
Github
At Grio, like many agencies, we rely on Github for version control. Github also supports a wide range of third-party integrations for triggering builds and analyses in response to new commits. On Rivals, for example, Github notifies Snap CI of new commits in order to run the test suites, Code Climate in order to lint and analyze the code, and Pivotal Tracker in order to start/finish tasks.
Snap CI and Code Climate
As mentioned above, Snap CI and Code Climate are both configured to perform automated test runs and code analyses in response to Github webhooks. Github allows for strict enforcement of these integrations (i.e. by disabling the merge button on a pull request until all of the checks pass). We don’t enforce the checks to that degree on this project, but the vast majority of our PRs pass them, or are modified until they do. One of the critical tasks performed by Snap CI is build tagging: after each successful build, the corresponding commit is tagged with a name along the lines of acceptance-[date-string]. We use these tags to find a known-good build prior to a release, as described below.
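The tag format itself is project-specific; assuming a UTC timestamp, generating the name might look something like this (the `acceptance_tag` helper and the date format are illustrative, not the project’s actual scheme):

```ruby
# Illustrative sketch: build an acceptance tag name after a green CI build.
# The exact date format used on the project is an assumption.
def acceptance_tag(time = Time.now.utc)
  "acceptance-#{time.strftime('%Y-%m-%d-%H%M')}"
end

acceptance_tag(Time.utc(2016, 5, 2, 14, 30))  # => "acceptance-2016-05-02-1430"

# The CI step would then apply and publish the tag, e.g.:
#   git tag acceptance-2016-05-02-1430 <commit-sha>
#   git push origin acceptance-2016-05-02-1430
```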
Release Notes Generator
After some number of cycles of committing, testing, analyzing, and merging (typically every half to full sprint, i.e. about twice weekly), we are ready to perform a release. In preparation, we use a custom Rake task to manually generate release notes in the form of a simple HTML page. By reading from the git history and Pivotal Tracker, this Rake task lists all of the commits and Tracker stories between a given acceptance tag and the most recent production tag. (Production tags are generated automatically after each successful deployment.) Once a tag is identified that contains only accepted stories, we run an automated deployment script to push the latest version of the app to Pivotal Web Services.
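In spirit, the Rake task boils down to a git range query plus some templating. A stripped-down, hypothetical version of the HTML-generating step might look like this (the method name and markup are illustrative, and the real task also pulls story details from the Tracker API):

```ruby
# Hypothetical core of a release-notes generator: takes the commit subjects
# between the last production tag and a candidate acceptance tag -- e.g. the
# output of `git log production-tag..acceptance-tag --format=%s` -- and
# renders a minimal HTML page.
def release_notes_html(commit_subjects, from_tag:, to_tag:)
  items = commit_subjects.map { |subject| "  <li>#{subject}</li>" }.join("\n")
  <<~HTML
    <h1>Changes: #{from_tag}..#{to_tag}</h1>
    <ul>
    #{items}
    </ul>
  HTML
end

subjects = ["[finished 1234567] Support avatar uploads", "Bump rails to 4.2.6"]
puts release_notes_html(subjects, from_tag: "production-2016-04-28",
                                  to_tag: "acceptance-2016-05-02-1430")
```

Reviewing this page against Tracker is what tells us whether a candidate tag contains only accepted stories.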
New Relic and Airbrake
Though only tenuously related to the deployment itself, New Relic and Airbrake are nonetheless indispensable parts of the full release cycle. During and after each release, we monitor both for performance and stability data. For example, using New Relic, we may determine that a new controller method has a bottleneck in a particular query, and prioritize a performance optimization story for the next sprint. Likewise, Airbrake can auto-generate bug tickets in Tracker in response to errors, which are then prioritized accordingly.
Moment – Time-tracking Software
In contrast to Rivals.com, on Moment we use only a handful of tools as part of our release process. Jira is used instead of Pivotal Tracker, albeit without webhooks for automatically updating task state (i.e. started/finished); with a smaller team and lower velocity, this manual overhead is acceptable. Github once again acts as the communication hub for the project (pun intended), triggering automated test runs. In contrast to Rivals, we use Codeship for testing, though it is equivalent to Snap CI for the purposes of this blog post (a comparison of these CI tools is worthy of a post unto itself). Prior to release, we determine the contents of a release by manual inspection, i.e. reading over the most recently completed stories in Jira. On a larger project (in terms of either team size or traffic), this degree of manual intervention would be suboptimal, but, again, it is acceptable at this stage in Moment’s lifespan. Deployment is handled on Heroku via its simple git push mechanism. New Relic is not yet integrated, though we’ll be adding it before too long; I mention it here only to point out that it is not a strict requirement of a release setup. This is admittedly a very abbreviated description of the release process on Moment, but it hopefully highlights the degree of flexibility that is possible across projects of different sizes.
Adapt Process to Project
Process can and should adapt to a project’s lifecycle. Enforcing too much process early in a project’s lifespan can have an adverse effect on velocity, and may lock in practices that are ill-suited or wasted from the perspective of the product’s final state. Certainly, an issue tracker, version control, automated testing, and an automation-friendly hosting solution are essential. Beyond that, process should grow, shrink, and otherwise adapt as needed.
Leverage Third-Party Services
There is a multitude of off-the-shelf solutions for common automation, release, and deployment problems. Some of these are mentioned above (Github, Snap CI, Codeship, etc.), and numerous others exist that may be better suited to the financial and technical requirements of a given project. These should at least be considered before choosing a potentially more costly self-hosted or custom solution. Taking this approach can provide considerable cost and time savings when getting an MVP to market; for a small startup, those savings can mean the difference between succeeding and never getting past the starting line.