gitlab-runner exec: easily test builds locally
I just pushed a brand new feature to GitLab Runner: `gitlab-runner exec`.
It allows you to run the jobs defined in `.gitlab-ci.yml` locally.
In turn, this allows for faster testing cycles, and it makes it easier to fix broken builds.
The command supports any executor.
How to use it
- Install Bleeding Edge Runner release locally.
- Run the build:
gitlab-runner exec docker my-job
This will run `my-job` defined in the local `.gitlab-ci.yml` in a docker container.
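For example, a minimal `.gitlab-ci.yml` with a job you could run this way might look like the following (the job name, image, and script are purely illustrative):

```yaml
my-job:
  image: alpine:3.4
  script:
    - echo "running locally"
```

Running `gitlab-runner exec docker my-job` in the repository root would then execute that script inside the `alpine:3.4` container.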
exec with Docker
- Create a new docker VM:
docker-machine create -d virtualbox runner
- Configure shell:
eval $(docker-machine env runner)
- Some environment variables are not accurate: CI_BUILD_ID, CI_PROJECT_ID, etc.
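One quick way to see which variables `exec` actually sets, assuming nothing beyond a POSIX shell inside the job, is a throwaway job that prints them (the job name here is made up):

```yaml
show-ci-env:
  script:
    - env | grep '^CI_' | sort
```

Run it with `gitlab-runner exec shell show-ci-env` and compare the output against the same job on a real CI run.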
Let me know what you think about it.
Maybe we could export env variables from the UI as `.something` files, stored outside of version control but found by `gitlab-runner exec`.
Title changed from gitlab-runner exec: easily test builds to gitlab-runner exec: easily test builds locally
Status changed to closed
@ayufan where is this documented? I want to link it from about.gitlab.com/gitlab-ci. I found https://gitlab.com/gitlab-org/gitlab-ci-multi-runner/blob/master/docs/executors/README.md but it doesn't seem to talk about the local use case much, and it isn't linked from http://doc.gitlab.com/ce/ci/
Is it possible to override this variable if it already exists in the .gitlab-ci.yml file? It doesn't appear so. It would be nice to be able to override variables used for databases and other services when running tests locally.
@ayufan I'm new to gitlab-ci and I've started to setup my gitlab-ci on my own gitlab-ce server 3 days ago. "gitlab-runner exec" is really useful thanks! My environment: gitlab-ce 8.7.3 on Debian 7, gitlab-runner 1.1.3 on Debian 7. I have 2 questions:
- It seems "gitlab-runner exec shell myjob" doesn't take the "variables:" keyword into account (whether "variables:" is used inside myjob or outside of it). It does work when the build is run by the gitlab-runner service, but it doesn't when I use "gitlab-runner exec shell". Should I create an issue about this? (I've not found an existing issue about it.)
- I'd like to re-use the .gitlab-ci.yml in another context than gitlab-ci. On a developer workstation where the git repository of the project is already cloned in "/home/khelkun/myproject", I wish I could run the jobs from "/home/khelkun/myproject/.gitlab-ci.yml" to build the project without cloning a fresh copy of the HEAD of the repository, i.e. build the sources that already exist in "/home/khelkun/myproject". It doesn't seem to me that gitlab-runner can do that. Am I wrong? Is there another way I could do it?
> It seems "gitlab-runner exec shell myjob" doesn't take the "variables:" keyword into account (whether "variables:" is used inside myjob or outside of it). It does work when the build is run by the gitlab-runner service, but it doesn't when I use "gitlab-runner exec shell". Should I create an issue about this?
Yes. Please create an issue.
> I'd like to re-use the .gitlab-ci.yml in another context than gitlab-ci. On a developer workstation where the git repository of the project is already cloned in "/home/khelkun/myproject", I wish I could run the jobs from "/home/khelkun/myproject/.gitlab-ci.yml" to build the sources that already exist in "/home/khelkun/myproject", without cloning a fresh copy of HEAD. It doesn't seem to me that gitlab-runner can do that. Am I wrong? Is there another way I could do it?
`gitlab-runner exec` does that.
About my question 2, when I run:

```
~/dev/srv-git/myproject$ gitlab-runner exec shell build-release:myjob
```

The output starts with:

```
Using Shell executor...
Running on khelkun-workstation...
Cloning repository...
Cloning in '/home/khelkun/dev/srv-git/myproject/builds/0/project-1'...
```

Can I skip the cloning step, and tell `gitlab-runner exec` to run `build-release:myjob` directly in '/home/khelkun/dev/srv-git/myproject/' instead of '/home/khelkun/dev/srv-git/myproject/builds/0/project-1'?
```
gdubicki@mbp-greg:~/git/myproject$ ~/gitlab-runner --version
Version:      1.3.0~beta.20.g36963db
Git revision: 36963db
Git branch:   master
GO version:   go1.6.2
Built:        Mon, 13 Jun 2016 17:33:02 +0000
OS/Arch:      darwin/amd64
```

```
gdubicki@mbp-greg:~/git/myproject$ ~/gitlab-runner exec shell test
WARNING: You most probably have uncommitted changes.
WARNING: These changes will not be tested.
gitlab-ci-multi-runner 1.3.0~beta.20.g36963db (36963db)
Using Shell executor...
Running on mbp-greg...
Cloning repository...
Cloning into '/Users/gdubicki/git/myproject/builds/0/project-1'...
done.
```
- it's still cloning.
Hi there, just wanted to mention that I'd also be a user of an option to mount the current folder in the runner image instead of cloning. In my case, the `.gitlab-ci.yml` file calls external scripts, so if I need to iterate on these scripts then I need to push them... which kind of defeats the purpose of local testing :)
Local automated testing is just that, workstation local. That is valuable in itself, since pushing would no longer be required, avoiding history writing and potentially improving the initial quality of pushed commits. Reducing time cost for testing is an independent objective. I agree that's an important objective as well.
I've just started playing with this great feature and I've seen in the docs that the cache and artefacts may or may not work. I'm wondering if I can end up on the "may" side though.
The ultimate goal is to build a node app via a gitlab-ci task locally before pushing, to make sure there are no surprises later. For this I need `node_modules` to be cached between `gitlab-runner exec` calls somehow; the process takes ages otherwise.
Here is a simplified `.gitlab-ci.yml`:

```yaml
build:
  image: node:6.5.0
  cache:
    key: '42' # attempt to cache the folder 'globally' for even more simplicity
    paths:
      - node_modules/
  script:
    - ls -la
    - mkdir node_modules
    - touch node_modules/hi
    - ls -la
```
When I run `gitlab-runner exec docker --docker-privileged --docker-cache-dir /tmp/gitlabrunner build` for the second time, I'd expect to see `node_modules` in the first `ls`, but this does not happen no matter how I modify the various options of the run script. According to the output, the cache is restored and saved, but this has no effect:
```
Running with gitlab-ci-multi-runner 1.5.2 (76fdacd)
Using Docker executor with image node:6.5.0 ...
Pulling docker image node:6.5.0 ...
Running on runner--project-1-concurrent-0 via machinename...
Cloning repository...
Cloning into '/builds/project-1'...
done.
Checking out b316f21c as dev...
Checking cache for 42...
###
### output from ls, mkdir, touch and ls
###
Creating cache 42...
node_modules/: found 2 matching files
Build succeeded
```
I suspect that some caching may be achieved with docker volumes, but it's not quite clear to me how to make that work while keeping `.gitlab-ci.yml` good for both local execution and remote CI. If anyone has succeeded in this challenge, could you please share a hint?
P.S.: all is happening on Ubuntu, i.e. docker is local.
Looks like I got it working, yay!
gitlab-runner exec docker --docker-privileged --cache-dir=/tmp/gitlabrunner --docker-volumes /tmp/gitlabrunner:/tmp/gitlabrunner build
The directory name inside the docker runner can be anything, as long as it mounts to something meaningful on the host machine:
gitlab-runner exec docker --docker-privileged --cache-dir=/dir/inside/docker --docker-volumes /tmp/gitlabrunner:/dir/inside/docker whatevertask
As I understand it now, the reason the cache does not work out of the box is that each time you call `exec` a new runner machine is created, and then the container gets removed. Unless you mount the cache onto the host, all the data you preserve within that container gets wiped. This peculiarity of how `exec` works might be worth noting in the docs, as it was not very clear in the beginning. A lot of users may want local caching, I'm sure!
I'll additionally add that if you use a variable for your cache key then it has to be passed in:

```yaml
cache:
  key: "$CI_PROJECT_NAME"
  paths:
    - .m2/repository/
```

Then you need to pass the variable in, e.g. with `--env CI_PROJECT_NAME=myproject`.
I can't make dependencies and artifacts work with the local runner. It's a pain to setup a proper deployment pipeline without the ability to test changes locally.
Gitlab 8.9.6, gitlab-runner 1.5.3
Note to self/knowledge sharing: to pass more than one env var, use multiple `--env` parameters, like this:
--env VAR1=val1 --env VAR2=val2 (...)
If you want artifacts out of a container, be sure to use Docker's volume mounting capabilities to mount a local host directory into the container.
gitlab-ci-multi-runner exec docker --docker-volumes `pwd`/build-output:/build/my-project/build-output build
For shared cache and artifacts between locally run jobs I use:

```
gitlab-ci-multi-runner exec docker --cache-dir /cache --docker-volumes `pwd`/build-output:/cache JOB1
gitlab-ci-multi-runner exec docker --cache-dir /cache --docker-volumes `pwd`/build-output:/cache JOB2
```
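For reference, a hypothetical pair of jobs that would share that mounted /cache between the two exec calls could look like this (job names, cache key, and paths are all illustrative):

```yaml
JOB1:
  cache:
    key: shared
    paths:
      - node_modules/
  script:
    - npm install

JOB2:
  cache:
    key: shared
    paths:
      - node_modules/
  script:
    - npm test
```

With the same `--cache-dir`/`--docker-volumes` flags on both invocations, the cache archive created by JOB1 lands on the host and is restored for JOB2.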
I have my stages as 1) build 2) test. How do I locally trigger the runner to execute test after build?
gitlab-ci-multi-runner exec shell test_job
If I just run the test job, it will not trigger build first. This simply gives me an error, as some required files that should be generated in the build stage are missing. Could we kick off the whole sequential build, test, deployment, etc. locally?
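Since `exec` runs a single job per invocation, the closest workaround is to chain the jobs of each stage yourself and stop on the first failure. A sketch (the job names are made up, and `echo` stands in for the real runner call so it can be tried anywhere):

```shell
#!/bin/sh
# Run jobs in stage order, aborting on the first failure (set -e).
set -e

run_job() {
  # On a machine with the runner installed, replace 'echo' with:
  #   gitlab-ci-multi-runner exec shell "$1"
  echo "exec $1"
}

for job in build_job test_job deploy_job; do
  run_job "$job"
done
```

Because `exec` clones a fresh copy for every job, artifacts still have to be shared between the calls via a mounted volume as described above.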
My tests assume that the project path's basename equals the repository name. Can gitlab-runner clone the project into a directory named after the repository instead of project-1?
Is it possible to execute the whole pipeline defined in .gitlab-ci.yml not just a single job this way for testing purposes?
This works great - one small thing that tripped me up is that `exec` doesn't read the docker pull policy from config.toml, so you have to pass it in manually:
sudo gitlab-runner exec docker master_build --docker-pull-policy never