---
title: Expectations of the Development Stage
layout: post
author: ahawkins
seo:
  type: BlogPosting
description: "A checklist for what should happen in each CI run."
keywords:
  - docker
  - continuous deployment

deployment pipeline. This is the development step. The development step
is where the _real_ work happens. Everything after that _should_ be
automated tests followed by a production deploy. Each step in the
pipeline must verify the next step's prerequisites and prove the absence
of known regressions.

I consider the development phase to include writing code, committing
to source control, then pushing code to trigger the first continuous
integration step. I say _first_ because continuous deployment
pipelines often have more than one automated testing step. Code first
goes through a set of local tests, then probably some sort of cross
service/component integration test, hopefully some sort of performance
testing, perhaps even security testing, then even more business
specific tests, before finally deploying. Let's consider the
development phase bounded to everything under source control for a
particular project.

Each commit's goal is to make it through CI with the highest quality code

no particular order):

1. **Whitebox unit and integration tests**. Each codebase's technical
   structure varies. I expect every repository to have tests for
   individual functions/classes, etc., integration tests for multiple
   objects, and integration tests across internal design boundaries. I
   say "whitebox" here because these tests use the objects/functions
   themselves. These tests ensure the internal code works as expected
   (there's a short sketch after this list).
1. **Blackbox smoke tests for every artifact**. Consider an HTTP & JSON
   [...]
   not fail. Their internal behavior is tested via the whitebox tests
   (a sketch follows the list).
1. **Boot Tests**. Every application requires some configuration,
   however it's rare that the configuration works as expected. I've
   seen it happen too many times where processes fail to boot in
   staging/production/etc because it's the first time that code path
   was executed. This is unacceptable. Boot tests are paired with some
   sort of "dry run" mode. These are matrix-level tests. This ensures
   that all documented configuration values are parsed/handled without
   causing errors or other unexpected failures. These tests are
   required because the implementation is usually covered in whitebox
   tests, but not covered when booting the process and specifying
   config files, command line options, or environment variables (see
   the boot matrix sketch below).
1. **External Config File Tests**. Projects usually rely on third
   party services for a number of things. Your CI system may have a
   JSON or YAML file describing the build process. These files may be
   changed incorrectly. The effect may be felt immediately or maybe
   not until code is deployed to production. All external
   configuration files (JSON, YAML, TOML, etc) should be smoke tested
   by passing them through a parser at a minimum. Use case specific
   linting tools should be used where possible. This tests for unknown
   keys or configuration options. You'd be surprised how easy it is to
   eliminate an entire class of regressions with this approach (the
   last sketch below shows the parser pass).
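
To make these points concrete, here are rough sketches in Python, one
per point. First the whitebox level: a minimal pytest file, where
`parse_amount` and `Checkout` are hypothetical stand-ins for your own
internal code.

```python
# test_billing.py -- a whitebox sketch (pytest). parse_amount and
# Checkout are hypothetical stand-ins for your own internal code.
import pytest


def parse_amount(text):
    """Convert a "$1.50" style string into an integer number of cents."""
    if not text.startswith("$"):
        raise ValueError("not an amount: %r" % text)
    return int(round(float(text[1:]) * 100))


class Checkout:
    def __init__(self):
        self.total = 0

    def add(self, cents):
        self.total += cents


def test_parse_amount_converts_dollars_to_cents():
    # Unit level: one function exercised in isolation.
    assert parse_amount("$1.50") == 150


def test_parse_amount_rejects_malformed_input():
    with pytest.raises(ValueError):
        parse_amount("1.50")


def test_checkout_totals_across_objects():
    # Integration level: objects collaborating across an internal boundary.
    checkout = Checkout()
    checkout.add(parse_amount("$1.50"))
    checkout.add(parse_amount("$2.25"))
    assert checkout.total == 375
```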
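
Next the blackbox level. The sketch below assumes the artifact is
already booted (say, via `docker run`) and exposes a `/health`
endpoint; `SMOKE_TEST_URL` is an assumed environment variable, not
part of any standard.

```python
# smoke_test.py -- blackbox sketch: hit a *running* artifact over HTTP
# and assert it responds. SMOKE_TEST_URL and the /health path are
# assumptions; point them at whatever your booted container exposes.
import json
import os
import urllib.request

BASE_URL = os.environ.get("SMOKE_TEST_URL", "http://localhost:8080")


def test_health_endpoint_returns_json():
    # No internal objects in sight; only the public HTTP surface.
    with urllib.request.urlopen(BASE_URL + "/health", timeout=5) as response:
        assert response.status == 200
        assert json.loads(response.read())  # parseable JSON == "did not fail"
```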
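
For boot tests, the sketch assumes a hypothetical `--dry-run` flag on
the process and two made-up configuration knobs. The matrix is the
interesting part: every documented value gets booted at least once.

```python
# boot_test.py -- boot test sketch. Assumes a hypothetical --dry-run
# flag that loads config, wires everything up, then exits instead of
# serving traffic. Flag names, env vars, and values are illustrative.
import itertools
import os
import subprocess

LOG_LEVELS = ["debug", "info", "warn"]
BACKENDS = ["postgres", "mysql"]


def test_boot_matrix():
    for log_level, backend in itertools.product(LOG_LEVELS, BACKENDS):
        env = dict(os.environ, LOG_LEVEL=log_level, DATABASE_BACKEND=backend)
        result = subprocess.run(
            ["bin/server", "--dry-run"],
            env=env,
            capture_output=True,
            timeout=30,
        )
        assert result.returncode == 0, result.stderr
```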
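
Finally, the external config file pass. PyYAML is assumed for the YAML
half; everything else is standard library.

```python
# config_smoke_test.py -- parse every external config file in the repo
# so a malformed file fails CI instead of a deploy. Assumes PyYAML is
# installed for the YAML half.
import json
import pathlib

import yaml


def test_external_config_files_parse():
    for path in pathlib.Path(".").rglob("*.json"):
        json.loads(path.read_text())  # raises on malformed JSON
    for pattern in ("*.yml", "*.yaml"):
        for path in pathlib.Path(".").rglob(pattern):
            yaml.safe_load(path.read_text())  # raises on malformed YAML
```

A parser pass only proves the files are well formed. Where a dedicated
linter exists for the format, run it too, since it can also catch
unknown keys and invalid option values.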

These points have eliminated every regression class I've come across.
Following them ensures that each individual repository is functioning
as expected. Further bugs/regressions are found in the integration
step (if there is one).

I suggest you adopt these practices in your team. These points have
drastically lowered the number of defects leaked to production, thus
ultimately raising the quality level. You'll be surprised what they
can do for you.

Good luck out there and happy shipping!