---
title: Reproducing and debugging kernel builds
linkTitle: Reproducing and debugging kernel builds
description: How to reproduce or debug kernel builds done in the CKI pipeline
---
CKI uses containers to build the kernels, which makes it easy to reproduce the
builds in any local environment. Before starting, install `podman` to be able
to pull and run the containers:
```bash
dnf install -y podman
```
## Finding the correct job
There are multiple places in the pipeline where the builds are run. Most likely,
you will want to open a failed build, which is marked with a red cross.
However, if you need to reproduce a *successful* build, you first need to find
the correct job in the pipeline:
- Base kernel build: Open the desired architecture and config job in the `build`
stage.
- Kernel tools build for non-`x86_64` architectures: Open the desired
architecture in the `build-tools` stage.
## Download the builder container image locally
Information about which container image and tag were used to build the kernel is
provided in the job found in the previous step, towards the top of the logs:
![Logged container info](logged_container_info.png)
Use `podman` to download the container image:
```bash
podman image pull <IMAGE_NAME_AND_TAG>
```
For example, the command for the screenshot above would be
```bash
podman image pull registry.gitlab.com/cki-project/containers/builder-stream9:latest
```
### Accessing private container images
RHEL container images are not publicly accessible. If pulling the image fails
with a permission error, contact the CKI team to get access. The RHEL CKI
containers are stored in [Quay], and you need to connect your Red Hat account to
Quay.
Once you are granted permissions, log in to Quay:
```bash
podman login quay.io
```
After logging in, you can use the original `podman image pull` command.
## Reproducing the build
### Start the container
```bash
podman run -it <IMAGE_NAME_AND_TAG> /bin/bash
```
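If you want to exchange files with the container later, it can be convenient to
bind-mount a host directory when starting it. For example (the `/workdir` path
and the SELinux `:Z` relabel option are just one possible setup):
```bash
podman run -it -v "$(pwd)":/workdir:Z <IMAGE_NAME_AND_TAG> /bin/bash
```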
### Get the kernel rebuild artifacts
If you are rebuilding a tarball or source RPM, you will need to clone or copy
the git repository into the container. If you are reproducing the RPM build or
tools, you will need to retrieve the built source RPM from the artifacts of the
`merge` job.
You can either run the commands (e.g. `curl`) directly in the container (see the
sketch below), or, if you already have the artifacts locally, use `podman` to
copy them into the container. Run this command from *outside* the container:
```bash
podman cp <LOCAL_FILE_PATH> <CONTAINER_ID>:<RESULT_FILE_PATH>
```
You can retrieve the `<CONTAINER_ID>` from
```bash
podman container list
```
If you wish, you can also rebuild the source RPM from git first and then
reproduce the actual build.
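If you go with the in-container download route, a `curl` sketch could look like
this; the URL is only a placeholder for the artifact link copied from the
GitLab job's artifact browser:
```bash
# Placeholder URL; replace it with the artifact link from the GitLab job page
curl -LO "https://gitlab.com/<NAMESPACE>/<PROJECT>/-/jobs/<JOB_ID>/artifacts/raw/<ARTIFACT_PATH>"
```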
### Get the kernel configuration
For kernels built as tarballs, the config file used to build the kernel is
available in the artifacts of the `build` job. Retrieve the configuration the
same way as the rebuild artifacts in the previous step and save it as `.config`
in the kernel repository.
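As a sketch, assuming the config was downloaded to the host first and the
kernel tree lives at `/workdir/kernel` inside the container (both the file name
and the path are illustrative):
```bash
# Illustrative names; adjust the config file, container ID and kernel tree path
podman cp kernel-x86_64.config <CONTAINER_ID>:/workdir/kernel/.config
```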
For RPM-built kernels, the default configuration for a given architecture is
used. The configuration is part of the built source RPM so no steps are needed
here.
### Export required variables
For `build` and `build-tools` jobs, some environment variables are needed to
properly reproduce the build. These are printed in the job output:
![Environment variables](logged_variables.png)
Export the same variables in the container. If the `CROSS_COMPILE` variable is
not exported, the kernel build ran on a native architecture. In that case, you
*need* to run the container on the same architecture to properly reproduce the
build.
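As an illustration, a cross-compiled job might require exports like the
following; the values are made up and `ARCH` is only assumed here as the usual
kernel cross-build variable, so always copy the exact names and values printed
in your job:
```bash
# Example values only; copy the exact variables printed in the job log
export ARCH=s390
export CROSS_COMPILE=s390x-linux-gnu-
```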
### Run the actual build command
The command used to build the kernel is also printed in the job logs. Some
examples of the tarball build and `rpmbuild` commands:
![Tarball build command](logged_tarball_command.png)
![rpmbuild command](logged_rpmbuild_command.png)
Copy and run the command. For the `rpmbuild` commands, you'll have to append the
source RPM path to the end.
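As a rough illustration only (the real commands are in the job log;
`targz-pkg` is the upstream kernel Makefile target for tarball packaging,
assumed here as an example):
```bash
# Tarball build, run from the kernel source tree with the exported variables in place
make -j"$(nproc)" targz-pkg

# RPM rebuild; append the path to the source RPM retrieved earlier
rpmbuild --rebuild kernel-<VERSION>.src.rpm
```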
## If the local reproducer does not work
If you fail to reproduce the builds locally and need to access the pipeline run
directly to debug what is going on, please reach out to the CKI team. The
process to get access is described in the [debugging a pipeline job]
documentation.
[Quay]: https://quay.io/organization/cki/
[debugging a pipeline job]: ../operations/debug-pipeline-job.md