I want to create build artifacts, but GitLab CI keeps telling me there are no matching files, although I explicitly check that the file exists.
This is the output of the Gitlab CI runner:
```
gitlab-ci-multi-runner 1.0.1 (cffb5c7)
Using Docker executor with image golang:latest ...
Running on runner-e67c915c-project-38-concurrent-0 via ip-172-31-24-144...
Fetching changes...
HEAD is now at ca8a993 Add full path to artifact.
Checking out ca8a9932 as master...
HEAD is now at ca8a993... Add full path to artifact.
$ export PROJECT_DIR=$GOPATH/src/gitlab.xxx.nl/rio/iris-api.go
$ mkdir -p $GOPATH/src/gitlab.xxx.nl/rio/
$ cp -r $(pwd) $PROJECT_DIR
$ cd $PROJECT_DIR
$ go get github.com/stretchr/testify
$ go get ./...
$ make build-arm
$ ls -la /go/src/gitlab.xxx.nl/rio/iris-api.go/build/read_digital
-rwxr-xr-x 1 root root 7962904 Apr 22 14:53 /go/src/gitlab.xxx.nl/rio/iris-api.go/build/read_digital
Archiving artifacts...
WARNING: /go/src/gitlab.xxx.nl/rio/iris-api.go/build/read_digital: no matching files
No files to archive.
Build succeeded.
```
My Gitlab CI configuration:
```yaml
image: golang:latest

before_script:
  ## Location of our project.
  - export PROJECT_DIR=$GOPATH/src/gitlab.xxx.nl/rio/iris-api.go
  ## Now create workspace for our project.
  - mkdir -p $GOPATH/src/gitlab.xxx.nl/rio/
  ## Currently our project is in the current working directory, but Go
  ## expects it in its GOPATH at $GOPATH/src/gitlab.xxx.nl/rio/.
  ## So let's move it there.
  - cp -r $(pwd) $PROJECT_DIR
  - cd $PROJECT_DIR
  # Install dependencies.
  - go get github.com/stretchr/testify
  - go get ./...

test:
  script: make test

build_binaries:
  stage: deploy
  script:
    - make build-arm
    - ls -la /go/src/gitlab.xxx.nl/rio/iris-api.go/build/read_digital
  artifacts:
    paths:
      - /go/src/gitlab.xxx.nl/rio/iris-api.go/build/read_digital
```
This is how I solve it for Go. Instead of copying the project into GOPATH I symlink it there, so everything is still accessible from the build directory.
```yaml
variables:
  PKG_ROOT: /go/src/gitlab.com/my/
  PKG_PATH: ${PKG_ROOT}project

.go_stuff: &go_stuff
  image: golang:1.10
  before_script:
    - mkdir -p ${PKG_ROOT}
    - ln -s /builds/my/project ${PKG_PATH}  # <-- this is the important part. it needs to be symlinked, not copied
    - cd ${PKG_PATH}

package:
  <<: *go_stuff
  stage: package
  script:
    - make release
  artifacts:
    paths:
      - project-${CI_COMMIT_REF_NAME}.tar.gz
```
What's the reasoning? It means that, for some bizarre reason best known to someone, I can't set up my virtual build environment to match a real one and KEEP it between stages. This is daft: it creates a huge amount of waste and a massive amount of extra overhead, and then someone can't even be bothered to write a decent error message.
At the very very very very very very least document what the f********k you mean by 'your build directory'.
On mine I have a file in /builds/mydir (call it fred.txt for now)
I have tried artifacts as /builds/mydir/fred.txt and as mydir/fred.txt, and neither of these works. To me the directory says builds, so which ****** directory are you actually talking about?
Wasting what little is left of my life on shit like this really doesn't appeal.
Hi Martin, probably do, my blood is right up now. Let's face it, it doesn't even consider it an error when it doesn't find anything to store away when I have told it to... it just blunders on until it fails somewhere else.
All I actually want is someone to explain, properly and with examples, just what files it will and won't archive. Then it would be nice to fix the BUG which means I (and, from this thread, plenty of others) can't use this functionality because it is restricted and badly documented.
I am going to have to revert to dragging zip files back off Amazon at EACH stage and unzipping them, and to not being able to parallelise builds because of the number of Amazon accesses I need. Ridiculous.
For anyone who is still facing this issue: in GitLab CE 11.6 the warning is still very cryptic, which led me to this issue. The folder you specify in your gitlab-ci.yml file:
```yaml
artifacts:
  paths:
    - build/
```
is relative to /builds/$CI_PROJECT_PATH/. So in my case the artifacts would have to be in /builds/$CI_PROJECT_PATH/build.
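To make that concrete, here is a minimal sketch (the job name and file name are hypothetical, not from the original post): the script writes into build/ inside the checkout, and the artifacts path refers to the same relative location.

```yaml
# Everything below is resolved relative to /builds/$CI_PROJECT_PATH/ (the checkout directory).
collect:
  script:
    - mkdir -p build
    - echo "example" > build/output.txt   # lands in /builds/$CI_PROJECT_PATH/build/output.txt
  artifacts:
    paths:
      - build/                            # relative path, resolved against the checkout
```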
@_joost's solution worked for me. The important thing to understand is that the folder artifacts uses is your project folder, so you need to give a path relative to that folder. In my case (dotnet core) I used the following path:
/bin/Debug/netcoreapp2.2/
The base path it uses is /builds/{username}/{repositoryname}. So my artifacts were in /builds/aceinthedeck/myproject/bin/Debug/netcoreapp2.2/.
Still doesn't accept absolute paths. If you want to keep it that way, please document it with a "cannot use absolute paths" message or something similar.
@pmatos still, though, the error message is misleading. It should say "cannot use absolute paths" instead of saying "file not found". And it's as easy as checking whether the first character is a /; it's not a difficult change.
For me, the documentation wasn't fixed either. I ran into this and, like most, spent a day or so figuring it out. That's wasted time, or, considering the size of the community, wasted years.
Why is this marked as closed, @ayufan ? I ran into this bug just today, the error message "no matching files" is very clearly factually incorrect and a bug.
In our CI pipeline, we need to move the project out of the initial build directory and into a specific user's home directory. This is done due to various permissions issues and to mirror our production environment. As an application-level constraint, this can't really be worked around. After reading that the artifacts need to be in the initial build directory to be picked up (as pointed out by @_joost above), I decided to try copying the files back into the initial build directory at the end of the script before re-tooling our entire project. The following simplified gitlab-ci.yml roughly shows how the after_script section can be used to do this, allowing the files to be picked up correctly.
```yaml
image: ubuntu/xenial

stages:
  - test

before_script:
  # Move project to test user
  - mv /builds/group/repo/* /home/testuser
  - chown -R testuser:www-data /home/testuser

after_script:
  # Move logs back to build dir so they can be picked up as artifacts
  - mkdir /builds/group/repo/logs
  - mv /home/testuser/logs/* /builds/group/repo/logs

test:
  stage: test
  artifacts:
    when: always
    paths:
      - logs/
  script:
    # Test as testuser
    - su - testuser -c './tests/runTests.sh'

# etc for more stages
```
/builds/group/repo is the initial directory, so /builds/group/repo/logs becomes just logs/ in the eyes of the artifacts path.
I would assume you could also either symlink the files back into the initial directory to avoid additional copies/moves, or symlink the build into wherever it needs to go instead of moving the files directly, as we do in before_script, to achieve the same effect. Additionally, in our case, using after_script is preferable to sticking something at the end of script, since we have multiple parallel tasks running in our tests, and any errors that occur during script cause our CI process to immediately halt and ignore the rest of the script section. after_script ensures that the logs are always in place to be picked up, whether on a successful or failed run.
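As a rough sketch of that symlink variant (untested; it is only an assumption that it behaves the same as moving the files): keep the checkout in /builds/group/repo, link it into the test user's home directory, and let the artifacts be collected from the real files in the build directory.

```yaml
# Sketch of the symlink alternative described above (assumption, not a verified setup):
# the checkout stays in /builds/group/repo, so logs written there are picked up directly.
before_script:
  - ln -s /builds/group/repo /home/testuser/repo
  - chown -R testuser:www-data /builds/group/repo

test:
  stage: test
  artifacts:
    when: always
    paths:
      - logs/                  # still relative to /builds/group/repo
  script:
    - su - testuser -c 'cd ~/repo && ./tests/runTests.sh'
```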
Hopefully this helps someone else not spend hours trying to figure this out.
Can someone please help me with how I can make these artifacts work in this case?
I am mounting the $(pwd)/target folder that contains my report{timestamp}.html page.
When I try to give this path it says file not found. How can I make this work?
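Going by the relative-path behavior described earlier in the thread, one possible sketch (the job name, test command, and report glob are hypothetical) is to make sure the report ends up under the project checkout and then reference it with a relative glob rather than an absolute path:

```yaml
report:
  script:
    - ./run-tests.sh                 # hypothetical command writing target/report<timestamp>.html
  artifacts:
    when: always
    paths:
      - target/report*.html          # relative glob; an absolute path such as /target/... will not match
```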
Now I just wonder - is that ${CI_PROJECT_DIR} just one directory for the whole project or is it specific to the job / pipeline?
Because otherwise I fear concurrent pipelines overwriting each other with regards to that file... no?
This is a mono-repo and frontend is a folder in the root project folder.
Should it be confusing where you actually are, use "pwd" (or an equivalent for your OS) in the script section to see exactly where you are. CI_PROJECT_DIR will be part of the path.
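For illustration, a minimal sketch of such a debug step (the job name is made up):

```yaml
# Hypothetical debug job: print the working directory and the project dir variable.
debug-paths:
  script:
    - pwd                      # typically prints /builds/<namespace>/<project>
    - echo "$CI_PROJECT_DIR"   # should match the directory printed by pwd
```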
This still annoys me to this day. Why can't this behave like scripts in GitLab CI, starting from the project directory itself? It creates confusion when people do ls, the file is there, but artifacts can't find it.
It's more about consistency in GitLab CI when it comes to paths. Also, in the documentation the phrase "relative to the repository where the job was created" can be a bit confusing, as many people don't get into the technical detail and think that where scripts are run (the project directory) is where the CI job is created. I do get that the documentation is accurate; however, my main complaint is that it's inconsistent. Make the path behavior of artifacts identical to the path behavior of the scripts.
#Consistency
The solution that I suggest:
Also, if this is not possible now (I do agree that in many cases artifacts like JUnit reports are generated in a relative directory), I would recommend adding a keyword for specifying that you want the artifact path to be resolved from the project directory.
This seems to be the best solution, as existing CI jobs will not break and the current behavior can remain the default.
Somehow, even with GitLab 14.8, this issue still exists. To make things even more bizarre, it always works from the main branch (develop); however, it throws the "no matching files" warning on and off from any merge request pipeline.
This is the setup:
```yaml
build:
  stage: build
  script:
    - mvn package
    - find . -type f -path '*/target/*.jar'  # this shows the list of jars
  artifacts:
    paths:
      - ./*/target/*.jar  # this only works in main branch, breaks on and off from other pipelines
```
Because herokuish compiles the sources etc. in /tmp/build and immediately cleans up that location after the build, the best setup is to specify in your build.gradle the locations where you would like to store your artifacts, outside that temporary location.
So, for unit and JaCoCo tests, configure those output locations in your build.gradle.
I would, however, ask you to handle such issues differently in the future: pulling a version that has already been released adversely affects everyone who already has that version installed.
Especially now, as more and more people and organizations are using tools to automatically update their dependencies, pulling an existing version breaks existing configurations (and not only the tool itself).
This affected both the OCI image of the runner and the helm chart for the runners (of which version 0.42.1 has been deleted).
In case this occurs again, please either release a new version with a bug fix or re-release the older version with a bumped patch version. This would have led to a new version 1.15.2 being released, making resolution of this easier in multiple ways:
Automated systems would have picked up the new version and (at least in some cases) automatically deployed it, leading to some organizations not having been affected at all or the issue being resolved automatically
Pulling the version led to broken configurations for every organization pinning versions, as 1.15.1 was suddenly unavailable. In the case of a new version (1.15.2) being released, the bug would still have occurred for people with 1.15.1, but their configuration would have been reproducible.
Pointing latest to an older version only fixes the issue for configurations that do not pin versions, which is discouraged anyway, since pointing to latest does not ensure a reproducible setup.
Again, thanks for resolving this issue and I hope you take my points above into consideration to update the process.