Commit e86da8ec authored by Adam Hawkins's avatar Adam Hawkins

Merge branch 'post/development-phase' into 'master'

Add "Expectations of the Development Phase" post

A checklist for what each repo should include in its CI run.

See merge request !8
---
title: Expectations of the Development Stage
layout: post
author: ahawkins
type: BlogPosting
description: "A checklist for what should happen in each CI run."
tags:
  - docker
  - continuous deployment
  - continuous delivery
  - continuous integration
  - development phase
  - testing
  - test driven development
---
Recently I've been considering the initial step in the continuous
deployment pipeline: the development step. The development step is
where the _real_ work happens. Everything after it _should_ be
automated tests followed by a production deploy. Each step in the
pipeline must verify the next step's prerequisites and prove the
absence of known regressions.

I consider the development phase to include writing code, committing
to source control, then pushing code to trigger the first continuous
integration step. I say _first_ because continuous deployment
pipelines often have more than one automated testing step. Code first
goes through a set of local tests, then probably some sort of cross
service/component integration test, hopefully some sort of performance
testing, perhaps even security testing, then even more business
specific tests, before finally deploying. Let's consider the
development phase bounded to everything under source control for a
particular project.

Each commit's goal is to make it through CI with the highest quality
code possible. Repositories should include these checks at a minimum
(in no particular order):

1. **Code linting / formatting**. Code must be consistent across the
project. Inconsistencies slow down individual developers and take
more time in code review.
1. **Whitebox unit and integration tests**. Each
codebase's technical structure varies. I expect every repository to
have tests for individual functions/classes, integration tests
for multiple objects, and integration tests across internal design
boundaries. I say "whitebox" here because these tests use the
objects/functions themselves. These tests ensure the internal code
works as expected.
1. **Blackbox smoke tests for every artifact**. Consider an HTTP & JSON
API. The test should start the process (using whatever command
would be executed in production with the built artifact), then make
requests to all expected paths. The tests assert clients are able
to make a request and get a response back. Thrift/gRPC/other RPC
servers should start the server and use a client to make a request
to all expected RPCs. These tests _should_ be concurrent and
randomized to expose any strangeness in implementation details. These
tests ensure the build artifact works as expected.
1. **Utility command smoke tests**. Many projects have commands to
prepare their environment. This may be a command to create a
database, apply migrations, or bootstrap a third party service. The
test process should smoke test these commands and assert they do
not fail. Their internal behavior is tested via the whitebox tests.
1. **Boot Tests**. Every application requires some configuration, but
it's rare that the configuration works as expected on the first try.
I've seen it happen too many times: processes fail to boot in
staging/production/etc. because it's the first time that code path
was executed. This is unacceptable. Boot tests are paired with some
sort of "dry run" mode. These are matrix level tests. They ensure
that all documented configuration values are parsed/handled without
causing errors or other unexpected failures. These tests are
required because the implementation is usually covered in whitebox
tests, but not covered when booting the process and specifying
config files, command line options, or environment variables.
1. **External Config File Tests**. Projects usually rely on third
party services for a number of things. Your CI system may have a
JSON or YAML file describing the build process. These files
may be changed incorrectly. The effect may be felt immediately, or
maybe not until code is deployed to production. All external
configuration files (JSON, YAML, TOML, etc.) should be smoke tested by
passing them through a parser at a minimum. Use case specific linting
tools should be used where possible. This tests for unknown keys or
configuration options. You'd be surprised how easy it is to
eliminate an entire class of regressions with this approach.
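
The blackbox smoke test idea above can be sketched in a few lines. This is a
minimal sketch, not a real harness: `python -m http.server` stands in for your
production start command, and the port and path list are assumptions you would
replace with your own.

```python
# Blackbox smoke test sketch: boot the artifact as a real subprocess,
# then request every path clients are expected to call.
# SERVER_CMD is a stand-in for the production start command.
import subprocess
import sys
import time
import urllib.request

PORT = 8123  # assumed free port
SERVER_CMD = [sys.executable, "-m", "http.server", str(PORT)]
BASE = f"http://127.0.0.1:{PORT}"
PATHS = ["/"]  # enumerate every expected path here

proc = subprocess.Popen(SERVER_CMD, stdout=subprocess.DEVNULL,
                        stderr=subprocess.DEVNULL)
try:
    # Poll until the server accepts connections (up to ~5 seconds).
    for _ in range(50):
        try:
            urllib.request.urlopen(BASE + "/").close()
            break
        except OSError:
            time.sleep(0.1)
    # The actual smoke test: every path must answer.
    statuses = []
    for path in PATHS:
        with urllib.request.urlopen(BASE + path) as resp:
            statuses.append(resp.status)
finally:
    proc.terminate()
    proc.wait()
```

The key property is that the test exercises the built artifact from the
outside, exactly as a client would, rather than importing the code.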
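
Boot tests are naturally driven as a matrix. A rough sketch, assuming the
process exposes some `--dry-run` style mode; here a trivial Python one-liner
stands in for the real binary, and the config matrix values are made up:

```python
# Boot test sketch: launch the process once per documented configuration
# and assert it starts cleanly. BOOT_CMD is a stand-in for something
# like ["bin/server", "--dry-run"].
import os
import subprocess
import sys

# Matrix of documented configurations (hypothetical values).
CONFIG_MATRIX = [
    {"LOG_LEVEL": "debug"},
    {"LOG_LEVEL": "info"},
]

# Stand-in "server" that reads its config from the environment.
BOOT_CMD = [sys.executable, "-c",
            "import os; assert os.environ['LOG_LEVEL'] in ('debug', 'info')"]

results = []
for overrides in CONFIG_MATRIX:
    env = {**os.environ, **overrides}
    proc = subprocess.run(BOOT_CMD, env=env)
    results.append(proc.returncode)

assert all(code == 0 for code in results), f"boot failed: {results}"
```

A non-zero exit for any row fails the build before that config path ever
reaches staging.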
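
The external config file check is almost free. A sketch that walks a tree and
parses every JSON file; the directory layout and file names below are invented
for the demo:

```python
# Config smoke test sketch: every external config file must at least
# pass through a parser.
import json
import pathlib
import tempfile

def check_configs(root):
    """Return (path, error) pairs for JSON files that fail to parse."""
    failures = []
    for path in pathlib.Path(root).rglob("*.json"):
        try:
            json.loads(path.read_text())
        except json.JSONDecodeError as err:
            failures.append((path, str(err)))
    return failures

# Demo against a throwaway directory: one valid file, one broken file.
with tempfile.TemporaryDirectory() as tmp:
    pathlib.Path(tmp, "ok.json").write_text('{"retries": 3}')
    pathlib.Path(tmp, "broken.json").write_text('{"retries": }')
    failures = check_configs(tmp)

assert len(failures) == 1  # only the broken file is reported
```

The same idea extends to YAML (via a linter such as `yamllint`) and TOML
(`tomllib` in Python 3.11+).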

These points have eliminated every regression class I've come across.
Following them ensures that an individual repository is
functioning accordingly. Further bugs/regressions are found in the
integration step (if there is one).

I suggest you adopt these practices on your team. They have
drastically lowered the number of defects leaked to production,
ultimately raising the quality level. You'll be surprised what they
can do for you.

Good luck out there and happy shipping!