Commit 27292e3f authored by Adam Hawkins

Write first build container post draft

parent 5b2c7988
---
title: Docker Build Container Pattern
layout: post
author: ahawkins
seo:
type: BlogPosting
description: "How to use Docker containers to generate artifacts
without local volume mounts. tl;dr: use docker cp."
keywords:
- docker
- boilerplate
- build container
- artifact container
---
Docker containers are a fantastic way to encapsulate complex build
processes. Software requires a host of dependencies, and dependency
sprawl is especially problematic on polyglot teams. It becomes
infeasible to maintain multiple toolchain configurations on engineers'
machines and again on CI systems. Say you need to produce a JAR or
WAR, or generate a static site. You can do that with Docker containers.

## The Easy Way
However doing so introduces a small problem. Most examples (even the
official images) do something like this:
```bash
docker run --rm -it -v "${PWD}:/data" some_image
```
I do the same thing to generate my Ruby dependencies:
```bash
docker run --rm -it -v "${PWD}:/data" -w /data ruby:2.3 \
bundle package --all
```
This mounts the current directory at `/data`, then instructs the
container to write its output back to `/data`. Tada! This is how you
can use a container to generate artifacts on the Docker host.

## A Fix
This is the easiest way to get data in and out of a container, but it
creates a problem: Docker containers are often run as root. Depending
on your Docker setup (whether the daemon runs directly on your host or
inside a VM), this approach may litter the filesystem with root-owned
artifacts. The problem usually reveals itself on CI. Those machines
typically run Linux with a native Docker install, so containers run as
root and write root-owned files back to the bind mount. This is solved
by running the container as the current user (`-u "$(id -u)"`). Here's
an example:
```bash
docker run --rm -it -u "$(id -u)" -v "${PWD}:/data" -w /data ruby:2.3 \
bundle package --all
```
Now the container runs as `$USER`, so files generated by the container
are owned by `$USER`. This solves probably 90% of use cases, but there
are scenarios where it breaks down.

This solution does not work with remote Docker hosts (e.g. something
like Swarm), because the bind mount refers to the remote host's
filesystem, not yours. `docker-machine` on OS X works around this by
mounting `$HOME` as a shared directory in the VM, so bind mounts
inside `$HOME` work transparently.

## The Correct Way
There is a surefire way to do this that works in 100% of scenarios
without any workarounds: use `docker cp`. The `cp` command copies
files into and out of containers. It works on individual files or
directories; files are copied directly, while directories are streamed
as tar archives. This solution requires a few more commands but works
**100% of the time**. Let's see some examples.
Here we assume the Docker image contains everything required to build
the artifact(s).
```bash
docker run --name builder my-image script/build path/to/something.jar
docker cp builder:path/to/something.jar something.jar
docker stop builder
docker rm builder
```
This example runs the container (`path/to/something.jar` is just a
placeholder). Note that `--rm` is *not* passed to `docker run`; this
ensures the container is not removed when it exits, so we can still
copy files out of it. Next, `docker cp` takes a `container:path`
argument and copies it to `something.jar` on the host. No permissions
to worry about. Finally, the container is stopped and removed.
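The four steps above can be wrapped in a small helper so the
temporary container is always cleaned up. This is only a sketch:
`build_artifact` is a hypothetical name, and the `script/build`
command and artifact path are placeholders from the example, not a
real project.

```bash
# Sketch: run the build, copy the artifact out, always clean up.
build_artifact() {
  local image="$1" artifact="$2"
  local name="builder-$$"
  local status=0

  # Record any failure instead of aborting, so cleanup still runs.
  docker run --name "$name" "$image" script/build "$artifact" || status=$?
  if [ "$status" -eq 0 ]; then
    docker cp "$name:$artifact" "$(basename "$artifact")" || status=$?
  fi
  # docker rm -f replaces the stop + rm pair from the example.
  docker rm -f "$name" > /dev/null 2>&1
  return "$status"
}
```

Call it as `build_artifact my-image path/to/something.jar`; the
artifact lands in the current directory and the container is removed
whether or not the build succeeded.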
Here is another example from this blog's [source][]. It uses
`docker create` and `docker start` to prepare a container, then
`docker cp` to copy out an entire directory.
```Makefile
.PHONY: dist
dist: $(DOCKER_IMAGE)
mkdir -p dist tmp
docker create -i $(BUILD_ENV) slashdeploy/blog \
bundle exec jekyll build -d /data -s /usr/src/app/src > tmp/dist_container
docker start -a $$(cat tmp/dist_container)
@docker cp $$(cat tmp/dist_container):/data - | tar xf - -C dist --strip-components=1 > /dev/null
@docker stop $$(cat tmp/dist_container) > /dev/null
@docker rm -v $$(cat tmp/dist_container) > /dev/null
@rm -rf tmp/dist_container
```
Here `docker cp` streams a tar file to stdout. That tar is extracted
to the proper directory. Also it uses a container ID (captured from
`docker create`) instead of a container name. You could give the
container a name if you prefer.
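For completeness, the same flow can be sketched as a plain shell
function rather than a Makefile target. The image name and paths come
from the Makefile above; `extract_dist` is a hypothetical name.

```bash
# Create the container, run the build, stream /data out as a tar
# archive, extract it into dist/, then remove the container.
extract_dist() {
  local cid
  cid="$(docker create -i slashdeploy/blog \
    bundle exec jekyll build -d /data -s /usr/src/app/src)"
  docker start -a "$cid"
  mkdir -p dist
  # With '-' as the destination, docker cp writes a tar stream to
  # stdout, which tar extracts directly into dist/.
  docker cp "$cid:/data" - | tar xf - -C dist --strip-components=1
  docker rm -v "$cid" > /dev/null
}
```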
I hope this post clarifies how to implement the build container
pattern. It's astoundingly useful when done right. tl;dr: Use `docker
cp` as described. Good luck out there and happy shipping!
[source]: https://gitlab.com/slashdeploy/blog