---
title: Expectations of the Development Phase
layout: post
author: ahawkins
seo:
  type: BlogPosting
  description: "A CI checklist"
keywords:
  - docker
  - continuous deployment
  - continuous delivery
  - continuous integration
  - development phase
  - testing
  - test driven development
  - TDD
---
Recently I've been considering the initial step in the continuous
deployment pipeline: the development step. The development step is
where the _real_ work happens. Everything after it _should_ be
automated tests followed by a production deploy. Each step in the
pipeline must verify the next step's prerequisites and prove the
absence of all known regressions.

I consider the development phase to include writing the code,
committing it to source control, then pushing the code to trigger the
first continuous integration step. I say first because continuous
deployment pipelines often have more than one automated testing step.
Code first goes through a set of local tests, then likely some sort of
cross service/component integration test, hopefully some performance
testing, perhaps even security testing, then more business specific
tests, before finally deploying to production. Let's consider the
development phase bounded to everything under source control for a
particular project.

Each commit's goal is to make it through CI with the highest quality
code possible. Repositories should include these checks at a minimum
(in no particular order):

1. **Code linting / formatting**. Code must be consistent across the
project. Inconsistencies slow down individual developers and take
more time in code review.
1. **White box unit and integration tests**. Each codebase's technical
   structure varies. I expect every repository to have tests for
   individual functions/classes, integration tests for multiple
   objects, and integration tests across boundaries in the internal
   design. I say "whitebox" here because these tests use the
   objects/functions themselves. These tests ensure the internal code
   works as expected.
1. **Blackbox smoke tests for every artifact**. Consider an HTTP &
   JSON API. The test should start the process (using whatever command
   would be executed in production with the built artifact), then make
   requests to all expected paths. The tests assert clients are able
   to make a request and get a response back. Thrift/gRPC/other RPC
   servers should start the server and use a client to make a request
   to all expected RPCs. These tests _should_ be concurrent and
   randomized to expose any quirks in implementation details. These
   tests ensure the built artifact works as expected (a minimal sketch
   follows this list).
1. **Utility command smoke tests**. Many projects have commands to
   prepare their environment. This may be a command to create a
   database, apply migrations, or bootstrap a third party service. The
   test process should smoke test these commands and assert they do
   not fail (sketched after the list). Their internal behavior is
   tested via the whitebox tests.
1. **Boot Tests**. Every application requires some configuration, but
   it's rare that the configuration works as expected on the first
   try. I've seen it happen too many times: processes fail to boot in
   the staging/production/etc environment because it's the first time
   that code path was executed. This is unacceptable because it is a
   regression. Boot tests are paired with some sort of "dry run" mode.
   These are matrix level tests. The goal here is 1) to test the
   relevant combinations of configuration flags/values and 2) to test
   all possible combinations. This ensures that all documented
   configuration values are parsed/handled without causing errors or
   other unexpected failures. These tests are required because the
   implementation is usually covered in whitebox tests, but not
   covered when booting the process with config files, command line
   options, or environment variables (see the sketch below the list).
1. **External Config File Tests**. Projects usually rely on third
   party services for a number of things. Your CI system may have a
   JSON or YAML file describing the build process. These files may be
   changed incorrectly. The effect may be felt immediately, or not
   until code is deployed to production. All external configuration
   files (JSON, YAML, TOML, etc.) should be smoke tested by passing
   them through a parser at a minimum. Application specific linting
   tools should be used where possible. This tests for unknown keys or
   configuration options. You'd be surprised how easy it is to
   eliminate an entire class of regressions with this approach (an
   example appears after the list).
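
To make a few of these checks concrete, here are minimal sketches in
Python. First, the blackbox smoke test. Everything in it is an
assumption standing in for your project's real details: a hypothetical
`bin/server` start command, port 8080, and a couple of expected paths.

```python
import subprocess
import time
import urllib.request

# Hypothetical values; substitute whatever your built artifact actually uses.
START_COMMAND = ["bin/server"]
BASE_URL = "http://localhost:8080"
EXPECTED_PATHS = ["/health", "/users"]

def test_artifact_smoke():
    # Boot the artifact with the same command production would use.
    process = subprocess.Popen(START_COMMAND)
    try:
        time.sleep(2)  # naive wait; a real test would poll until the port accepts connections
        for path in EXPECTED_PATHS:
            # Assert a client can make a request and get a response back.
            response = urllib.request.urlopen(BASE_URL + path)
            assert response.status == 200
    finally:
        process.terminate()
        process.wait()
```

A fuller version would hit the paths concurrently and in random order,
as described above.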
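
The utility command smoke tests are even simpler: run each command and
assert a zero exit status. The command names below are hypothetical
placeholders.

```python
import subprocess

# Hypothetical environment preparation commands; replace with your project's own.
UTILITY_COMMANDS = [
    ["bin/create-db"],
    ["bin/migrate"],
]

def test_utility_commands_do_not_fail():
    for command in UTILITY_COMMANDS:
        # Only assert the command exits cleanly; its internal behavior
        # is already covered by the whitebox tests.
        result = subprocess.run(command, capture_output=True)
        assert result.returncode == 0, result.stderr
```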
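
A boot test can be sketched as a matrix of configuration values fed to
the process in a dry run mode. The `--dry-run` flag and the
environment variable names here are assumptions; use whatever your
application actually documents.

```python
import itertools
import os
import subprocess

# Hypothetical configuration matrix; every documented value should appear here.
CONFIG_MATRIX = {
    "LOG_LEVEL": ["debug", "info", "warn"],
    "DATABASE_URL": ["postgres://localhost/app_test"],
    "FEATURE_X_ENABLED": ["true", "false"],
}

def test_boot_with_all_config_combinations():
    keys = list(CONFIG_MATRIX)
    for values in itertools.product(*(CONFIG_MATRIX[key] for key in keys)):
        # Keep the parent environment and layer each combination on top.
        env = {**os.environ, **dict(zip(keys, values))}
        # Dry run mode parses and validates configuration without serving traffic.
        result = subprocess.run(["bin/server", "--dry-run"], env=env, capture_output=True)
        assert result.returncode == 0, (dict(zip(keys, values)), result.stderr)
```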
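
Finally, the external config file check is a parser pass over every
JSON and YAML file in the repository. The sketch assumes the PyYAML
package; application specific linters (your CI provider's validator,
for example) would go further.

```python
import json
import pathlib

import yaml  # assumes the PyYAML package is available

def test_external_config_files_parse():
    root = pathlib.Path(".")
    for path in root.rglob("*.json"):
        # A parse error fails the build before the file ever reaches
        # the third party service that consumes it.
        json.loads(path.read_text())
    for pattern in ("*.yml", "*.yaml"):
        for path in root.rglob(pattern):
            yaml.safe_load(path.read_text())
```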

These points have eliminated every regression class I've come across.
Following them ensures that an individual repository is functioning as
expected. Further bugs/regressions are found in the integration step
(if there is one).

I suggest you adopt these points on your team. They have drastically
lowered the number of defects leaked to production, ultimately raising
the overall quality level.

Good luck out there and happy shipping!