Engineering fundamentals are proven methods that lead to better code. Some of these methods are better known than others: when talking about engineering fundamentals, the first concepts that come to mind are unit tests, code coverage, linting, and CI. However, far less is said about the build pipeline and the requirements that make one effective.
Let’s take a step back and define what we mean by a build pipeline. A typical testing and delivery setup is composed of three pipelines:
- Build: The build pipeline is responsible for testing any new change that is introduced. Every time a new pull request is created against master, the pipeline is triggered and runs the unit tests on that change. This build does not retain any build artifacts.
- Continuous Integration (CI): This pipeline runs every time a new change is merged to master. Its goal is to create artifacts (snapshots) for every new change on master. It also runs integration and end-to-end (E2E) tests to ensure the new change hasn’t caused a regression.
- Continuous Delivery (CD): This pipeline is responsible for deploying the artifacts created at the CI stage to the testing, integration, and production environments.
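As a concrete sketch, the first two triggers above might look like the following in a GitHub Actions-style workflow. The workflow name, branch, and cron schedule are illustrative assumptions, and CD would typically live in a separate release workflow:

```yaml
# Illustrative GitHub Actions-style triggers; names and branches are assumptions.
name: build-and-ci
on:
  pull_request:
    branches: [master]   # Build: validate each PR, discard artifacts
  push:
    branches: [master]   # CI: publish artifacts, run integration/E2E tests
  schedule:
    - cron: "0 6 * * *"  # daily run to catch newly disclosed dependency issues
  # CD is usually a separate workflow that deploys the artifacts CI produced
```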
But not all build pipelines are created equal, and in this article I am going to call out a few practices that will make your pipeline more effective at its job.
- Treat warnings as errors in the dependency install stage: You may have seen dreaded security warnings in your repositories in the past. They occur when one of the dependencies you’re referencing has a known security issue, which may surface weeks after you’re done writing the code. Configure your build pipeline to treat warnings as errors at the install stage, and run it on a daily schedule in addition to on new PRs to catch this class of issues.
- Don’t allow merging on failed builds: This one is obvious. What qualifies as a failed build, however, is the real topic here. Many assume a failed build just means a compilation error, but I am going to argue it should mean much more. The build should fail if:
- Any of the unit tests fail. This is rather obvious.
- The linter finds any violation at the coding-style or static-analysis stage.
- The introduced change drops the code coverage below the target percentage. It’s important to set a code coverage goal for a project. We all know that even 100% coverage doesn’t guarantee well-tested code; however, 10% coverage should raise some flags. The best way to avoid ending up with low coverage is to enforce the target every time a new change is introduced, and the build pipeline is the perfect place for this enforcement. If a change drops coverage below the target, the build should fail, preventing poorly tested code from getting into master.
- Require multiple sign-offs on every PR. The more developers review the code, the better: not only are more issues found, it also helps with cross-pollination and ensures everyone stays up to speed with the changes being introduced. One way to guarantee multiple reviewers look at every PR is to set it in the pipeline policy.
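The coverage gate from the list above can be sketched as a small check the pipeline runs after the test stage. The target value, function name, and line counts here are assumptions for illustration; in practice the numbers would come from your coverage tool's report:

```python
# Hypothetical coverage gate: fail the build when a change drops line
# coverage below the project target. Threshold and inputs are illustrative.
COVERAGE_TARGET = 80.0  # project-wide minimum; tune per project

def check_coverage(covered_lines: int, total_lines: int,
                   target: float = COVERAGE_TARGET) -> bool:
    """Return True when line coverage meets the target percentage."""
    if total_lines == 0:
        return False  # no measurable code counts as a failure, not a pass
    coverage = 100.0 * covered_lines / total_lines
    print(f"coverage: {coverage:.1f}% (target {target:.0f}%)")
    return coverage >= target

# In the pipeline, exit non-zero so the build is marked as failed:
#   sys.exit(0 if check_coverage(covered, total) else 1)
```

Wiring the exit code into the pipeline is what turns the coverage goal from a guideline into an enforced policy.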
In this blog, I’ve focused on the concepts instead of a specific tool or product to make it applicable to everyone. Most major DevOps tools (Azure DevOps, Jenkins, etc.) support the concepts I’ve called out here.
What are some other policies in a build pipeline that you’ve found useful? I would love to hear your thoughts and comments.