Commit 0c976111 authored by Adam Hawkins

Fix spelling errors

parent 340a1e50
Pipeline #781709 failed with stage
@@ -35,12 +35,12 @@ dist: $(DOCKER_IMAGE)
@docker stop $$(cat tmp/dist_container) > /dev/null
@docker rm -v $$(cat tmp/dist_container) > /dev/null
@rm -rf tmp/dist_container
-@mkdir -p dist/_sentinal
-@echo "$(GIT_COMMIT)" > dist/_sentinal/$(GIT_COMMIT).txt
+@mkdir -p dist/_sentinel
+@echo "$(GIT_COMMIT)" > dist/_sentinel/$(GIT_COMMIT).txt
.PHONY: test-dist
test-dist:
-env DIST_PATH=$(PWD)/dist SENTINAL_VALUE=$(GIT_COMMIT) test/dist_test.bats
+env DIST_PATH=$(PWD)/dist SENTINEL_VALUE=$(GIT_COMMIT) test/dist_test.bats
.PHONY: test-shellcheck
test-shellcheck:
@@ -60,12 +60,12 @@ test_cloudformation_status() {
poll -t test_cloudformation_status -n 100 -i 5
-declare url sentinal
+declare url sentinel
url="$(bin/blog url)"
-sentinal="$(git rev-parse --short HEAD)"
+sentinel="$(git rev-parse --short HEAD)"
test_publish() {
-curl -s "${url}/_sentinal/${sentinal}.txt" > /dev/null
+curl -s "${url}/_sentinel/${sentinel}.txt" > /dev/null
}
bin/blog publish
@@ -6,7 +6,7 @@ author: ahawkins
The SlashDeploy blog was previously deployed from my local machine. I
figured it is time to change that. This post walks through how this
-blog is continuously delivere from the infrastructure, testing, and
+blog is continuously delivered from the infrastructure, testing, and
deployment perspective. You may be thinking: the blog is just some
static content, right? This is true, and it is a perfect test bed for
applying continuous delivery to a small (but important) piece of
@@ -15,7 +15,7 @@ software.
Achieving continuous delivery is no small task. It requires careful
engineering choices and sticking to a few key principles. First and
foremost: continuous integration. Every change must be run through an
-automated test suite. The test suite should verify a particlar change
+automated test suite. The test suite should verify a particular change
is production ready. It is impossible to have continuous delivery
without continuous integration. Production bugs must be patched with
regression tests to ensure they are not repeated. Second:
@@ -27,20 +27,20 @@ change in web server. There must be code to do so. That code (by the
first principle) must also have continuous integration. Finally there
must be production verification criteria for a particular change. This
signals a deployment completed successfully or not. Automation is the
-common ground. These prinicpals must be applied and baked into the
+common ground. These principles must be applied and baked into the
system from the ground up. This post describes how they are
applied to each component of the blog.
## 10,000 Feet
-This blog is statically generated site. Any websever can server the
+This blog is a statically generated site. Any web server can serve the
generated artifacts. This requires minimal infrastructure. A CDN should
be used as well to ensure readers get the content as fast as possible.
-Finally the webserver needs a readable domain name. OK, so how do we
+Finally the web server needs a readable domain name. OK, so how do we
make that happen? Use [jekyll][] to generate the site. Use
-[CloudFormation][] to create an S3 website behina a [CloudFront][] CDN and
-an appopritate [Route53][] DNS entry. Right, those are the tools but what
-does the whole pipeine look like?
+[CloudFormation][] to create an S3 website behind a [CloudFront][] CDN and
+an appropriate [Route53][] DNS entry. Right, those are the tools but what
+does the whole pipeline look like?
1. Generate the release artifact
1. Run tests against the release artifact
@@ -49,7 +49,7 @@ does the whole pipeline look like?
1. Deploy the CloudFormation stack
1. Test CloudFormation deploy went as expected
1. Copy release artifact to S3 bucket
-1. Test release arfact available through public DNS
+1. Test release artifact available through public DNS
This whole process can be coordinated with some bash programs and some
`make` targets. Time to dive deeper into each level.
@@ -68,39 +68,39 @@ through the following tests:
1. The root `index.html` exists
1. The defined `error.html` exists
-1. The sentinal file exists
+1. The sentinel file exists
1. `robots.txt` blocks access to `error.html`
-1. `robots.txt` blocks access to the sentinal file
+1. `robots.txt` blocks access to the sentinel file
1. Each HTML file has a tracking snippet
-You may be wondering about the sentinal file. The sentinal file
+You may be wondering about the sentinel file. The sentinel file
uniquely identifies each release artifact. The file name includes the
-git commit that built it. It lives in `_sentinals/GIT_COMMIT.txt`. Its
+git commit that built it. It lives in `_sentinel/GIT_COMMIT.txt`. Its
sole purpose is to indicate a release artifact is available via the CDN.
-The sentinal file name should should be unique to bust caches. If it
-were not (simple `sentinal.txt` with unique content) it would subject
+The sentinel file name should be unique to bust caches. If it
+were not (a single `sentinel.txt` with unique content) it would be subject
to any cache rules the CDN may apply (such as how long content can
live in edge nodes before rechecking origin). This would play havoc
with deploy verification.
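The cache-busting scheme above can be sketched as a small shell fragment, matching the paths in the Makefile shown earlier (the fixed commit id here is for illustration only):

```shell
# Create a uniquely named sentinel file for this release artifact.
# A per-commit file name means the CDN has never cached it, so a
# successful fetch proves the new artifact reached the edge.
GIT_COMMIT="abc1234"   # normally: git rev-parse --short HEAD
mkdir -p dist/_sentinel
echo "${GIT_COMMIT}" > "dist/_sentinel/${GIT_COMMIT}.txt"
```

A fixed name like `sentinel.txt` would hit whatever edge cache already held the old copy, which is exactly what the unique name avoids.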
Each test focuses on production behavior. The first two assert the
release artifact should function properly behind the CloudFront CDN.
-The sentinal tests assert this build steps meets the next stage's
+The sentinel tests assert this build step meets the next stage's
requirements. The `robots.txt` tests assert the proper things are not
included in search engines. Finally, tracking (page views, browsers,
etc.) is important so it must be included.
-## Infastructure
+## Infrastructure
I have touched on the infrastructure a bit. The infrastructure is an
-S3 bucket behind a CloudFront CDN with a RouteR3 DNS entry.
+S3 bucket behind a CloudFront CDN with a Route53 DNS entry.
CloudFormation manages the whole bit. The `bin/blog` script
coordinates the AWS calls. The `deploy` command is the heart. It
either creates a non-existent stack or updates an existing one. There
are also utility commands to get stack status and outputs, which are
important for testing. The `validate` command validates the
-CloudFormation template through an API call. This elminates errors
-such as invalid resource types, missing keys, synatax errors, and
+CloudFormation template through an API call. This eliminates errors
+such as invalid resource types, missing keys, syntax errors, and
other things a compiler might point out. Unfortunately this does not
assert a template will _work_. Deploying it is the only way to know
for sure. This is a key limitation with CloudFormation[^CF]. However it is
@@ -108,7 +108,7 @@ enough for this project. Finally the `publish` command copies files
into the appropriate S3 bucket.
The Bash code itself passes through [shellcheck][] to eliminate stupid
-mistakes and to enforce coding style. This is desparately needed to
+mistakes and to enforce coding style. This is desperately needed to
write sane Bash programs.
## Deploying
@@ -120,16 +120,16 @@ shakes out like so:
1. `bin/blog deploy` to deploy infrastructure changes
1. Poll the `bin/blog status` until the state is green
1. `bin/blog publish` to copy the release artifacts into S3
-1. Poll the public DNS until the sentinal file is available.
+1. Poll the public DNS until the sentinel file is available.
There is a single script (`script/ci/deploy`) to get the job done. The
coolest bit is a simple Bash function that will execute a function N
times at a T-second interval. This is a simple timeout-style function.
-It is used to handle the asyncronitcity of each step. The deploy
+It is used to handle the asynchronicity of each step. The deploy
script can vary the interval depending on how long a change should
take. This is more important for CloudFormation changes since some
-components update much more slowly than others. RouteR3 compared to
-CloudFront is one exmaple.
+components update much more slowly than others. Route53 compared to
+CloudFront is one example.
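A minimal sketch of such a retry helper, matching the `poll -t FN -n N -i I` invocation shown in the deploy script hunk above (an illustration under assumed names, not the author's exact implementation):

```shell
# Run a predicate function up to N times, sleeping I seconds between
# attempts. Succeeds as soon as the function does; fails after N tries.
poll() {
  local test_fn="" attempts=100 interval=5 opt OPTIND=1

  while getopts "t:n:i:" opt; do
    case "${opt}" in
      t) test_fn="${OPTARG}" ;;   # function (or command) to retry
      n) attempts="${OPTARG}" ;;  # maximum number of attempts
      i) interval="${OPTARG}" ;;  # seconds between attempts
      *) return 1 ;;
    esac
  done

  local i=0
  while [ "${i}" -lt "${attempts}" ]; do
    if "${test_fn}"; then
      return 0
    fi
    sleep "${interval}"
    i=$((i + 1))
  done

  echo "poll: ${test_fn} did not pass after ${attempts} attempts" >&2
  return 1
}
```

Because the predicate is just a function name, the same helper works for CloudFormation status checks (long interval) and sentinel-file curls (short interval) alike, e.g. `poll -t test_publish -n 100 -i 5`.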
## The Complete Pipeline
@@ -149,12 +149,12 @@ CloudFront is one example.
1. `bin/blog deploy` - Deploy infrastructure changes
1. Poll for `UPDATE_COMPLETE` or `CREATE_COMPLETE` stack status
1. `bin/blog publish` - Upload release artifact to S3
-1. Poll with `curl` for the sentinal file on `bin/blog url`
+1. Poll with `curl` for the sentinel file on `bin/blog url`
## Closing Thoughts
The entire pipeline turned out well. This was a great exercise in
-setting up continuous delivery for a simple system. The pratices
+setting up continuous delivery for a simple system. The practices
applied here can be applied to large systems. Here are some other take-aways:
* CloudFormation testing. It would be nice if a set of changes could
User-Agent: *
Disallow: /error.html
-Disallow: /_sentinal/
+Disallow: /_sentinel/
@@ -2,7 +2,7 @@
function setup() {
[ -n "${DIST_PATH}" ]
-[ -n "${SENTINAL_VALUE}" ]
+[ -n "${SENTINEL_VALUE}" ]
}
@test "index.html exists" {
@@ -17,12 +17,12 @@ function setup() {
grep -qF "Disallow: /error.html" "${DIST_PATH}/robots.txt"
}
-@test "sentinal.txt is not searchable" {
-grep -qF "Disallow: /_sentinal/" "${DIST_PATH}/robots.txt"
+@test "sentinel.txt is not searchable" {
+grep -qF "Disallow: /_sentinel/" "${DIST_PATH}/robots.txt"
}
-@test "sentinal file" {
-[ -f "${DIST_PATH}/_sentinal/${SENTINAL_VALUE}.txt" ]
+@test "sentinel file" {
+[ -f "${DIST_PATH}/_sentinel/${SENTINEL_VALUE}.txt" ]
}
@test "tracking code added" {