The GitLab Runner Docker executor now includes a pull_policy configuration option that supports multiple values. This means you can now specify in the gitlab-runner config.toml configuration file that the Docker executor should try multiple policies when retrieving a container image, for example pull_policy = ["always", "if-not-present"]. In this configuration example, the always pull policy is attempted first. If the target container registry is not available, the executor falls back to the if-not-present policy.
Problem to solve
A lost network connection to the container registry that hosts the images required for CI job execution can cost hours of development time. In some instances, these outages can also negatively impact revenue if the business relies on software updates to production environments that can no longer complete because the CI jobs cannot access the required container images.
Today, the container image pull policy logic in technologies like Kubernetes and gitlab-runner does not include any fallback mechanism for network connection failures to the target container registry.
Having the ability to use locally cached container images in the CI jobs can mitigate the impact caused by lost connectivity to the target container registry.
Background
This is a spiritual successor to #3279 (closed), which was closed with the recognition that the gitlab-runner pull_policy of always checks to see if an image of the same version is available locally, and only fetches from the remote registry if necessary. However, when a specified remote is unavailable, always fails.
A pull policy that:
Checks for the latest image on the remote registry
Checks for the presence of a locally-cached copy and leverages that first
If no locally-cached copy is available, fetches from remote
And finally, if the remote registry is unavailable, leverages the most recent locally-cached copy (if available)
...would improve the robustness of often-run pipelines and limit the impact of registry outages.
if-newer still seems like an acceptable name, or something such as always-with-fallback?
Potential Risk: This approach may lead to unexpected results for pipelines that are not run often: if the remote is unavailable, we'd have no way to confirm how up-to-date the locally-cached image may be. For this reason it's probably best kept as a distinct pull policy from our default always behavior.
Proposal
Instead of creating a new pull policy, we allow users to define multiple pull policies. For example, the user can define pull_policy = ["always", "if-not-present"] inside their config.toml. The Runner will first use the always pull policy; if that fails, it will use the next one in line, which is if-not-present. This achieves the always-or-fallback pull policy without introducing it. A small PoC of this was achieved in !2587 (closed).
So for example imagine I have the following config.toml
```toml
concurrent = 1
check_interval = 0

[session_server]
  session_timeout = 1800

[[runners]]
  name = "steve-mbp-gitlab.local"
  url = "https://gitlab.com/"
  token = "xxxxxx"
  executor = "docker"
  [runners.docker]
    tls_verify = false
    image = "localonly/alpine:3.12"
    privileged = false
    disable_entrypoint_overwrite = false
    oom_kill_disable = false
    disable_cache = false
    volumes = ["/cache"]
    # Multiple pull policies specified, we'll go one by one if it fails. In this case,
    # first it will try and pull the image, then use the local image if it's present.
    pull_policy = ["always", "if-not-present"]
    shm_size = 0
```
We can see it working like below:
Specification
Allow pull_policy for the docker executor to be either a string (pull_policy = "always") or a slice of strings (pull_policy = ["always", "if-not-present"]), for example using the custom unmarshaling created in the PoC (see the sketch after this list).
Start with the first pull policy (left to right); if any error is presented, even a 403 (because it might be a production issue), fall back to the next pull policy. For example, if we have pull_policy = ["always", "if-not-present"] we will use always, and if it errors we will use if-not-present.
Show a warning level log that the first pull policy failed.
Show an info level log that we are changing the pull policy.
Check out the PoC that implements most of this apart from logging
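For illustration, here is a minimal sketch of the string-or-slice unmarshaling from the first point, assuming the BurntSushi/toml decoder that the Runner uses for config.toml; the PullPolicies type name is hypothetical and the actual PoC in !2587 may differ:

```go
package config

import "fmt"

// PullPolicies accepts either pull_policy = "always" or
// pull_policy = ["always", "if-not-present"] in config.toml.
type PullPolicies []string

// UnmarshalTOML implements the toml.Unmarshaler interface from
// github.com/BurntSushi/toml: the decoder hands over the raw decoded
// value, which is a string for the single-policy form and a
// []interface{} for the list form.
func (p *PullPolicies) UnmarshalTOML(data interface{}) error {
	switch v := data.(type) {
	case string:
		*p = PullPolicies{v}
	case []interface{}:
		for _, item := range v {
			s, ok := item.(string)
			if !ok {
				return fmt.Errorf("invalid pull_policy entry: %v", item)
			}
			*p = append(*p, s)
		}
	default:
		return fmt.Errorf("pull_policy must be a string or a list of strings, got %T", data)
	}
	return nil
}
```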
Steps to implement this
Ideally, if time allows it, we should move all the pull policy logic into its own package, for example executors/docker/internal/pull. This will also make sure that we have all the test coverage we need before we refactor.
Allow users to specify a single string or a slice of strings inside of the config.
Loop through all the pull policies specified until one succeeds (a sketch follows this list).
Update documentation showing off this feature, being explicit about the security implications (it will ignore the 403 error) and justifying that this is because auth can be down.
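A minimal sketch of the loop step, assuming a hypothetical pull callback that wraps the executor's existing per-policy logic (names here are illustrative, not the actual Runner code); it also shows the warning/info/error logging levels proposed in the specification:

```go
package pull

import (
	"fmt"
	"log"
)

// pullWithFallback tries each configured pull policy in order (left to
// right) and falls back to the next one when the current one fails.
// The pull callback stands in for the executor's existing logic for a
// single policy.
func pullWithFallback(image string, policies []string, pull func(image, policy string) error) error {
	var lastErr error
	for i, policy := range policies {
		if i > 0 {
			log.Printf("INFO: falling back to pull policy %q for %s", policy, image)
		}
		if err := pull(image, policy); err != nil {
			log.Printf("WARNING: pull policy %q failed for %s: %v", policy, image, err)
			lastErr = err
			continue
		}
		return nil
	}
	// Every configured policy failed, so the job has to fail.
	return fmt.Errorf("all pull policies failed for %s: %w", image, lastErr)
}
```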
When we tried to do this in #3279 (closed) we had some trouble because we were trying to be smart and check whether there is a new image on the remote or not, but then figured out that always already does this. We failed to think about how we can improve the always policy and why not just use the if-not-present policy, but taking another look at this now I see how we can implement it.
What the Runner can do
I think we can have the following logic in the Runner if we want to have the benefits of what always provides (minus the security parts); a rough sketch follows the list.
docker pull $IMAGE.
If docker pull fails because of a 403 (Unauthorized), fail the job.
If docker pull fails because of any other error, use the local copy if available.
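A rough sketch of that decision using the Docker Engine API client; this is not GitLab Runner's actual code, and isAuthError is a naive stand-in (real code would inspect the typed error rather than its message text):

```go
package pull

import (
	"context"
	"fmt"
	"io"
	"strings"

	"github.com/docker/docker/api/types"
	"github.com/docker/docker/client"
)

// pullOrUseLocal pulls the image; a 403/unauthorized error fails the job,
// while any other pull error falls back to a locally available copy.
func pullOrUseLocal(ctx context.Context, cli *client.Client, image string) error {
	rc, err := cli.ImagePull(ctx, image, types.ImagePullOptions{})
	if err == nil {
		defer rc.Close()
		_, copyErr := io.Copy(io.Discard, rc) // drain the pull progress stream
		return copyErr
	}
	if isAuthError(err) {
		// Not allowed to pull: fail instead of exposing a cached image.
		return fmt.Errorf("not authorized to pull %s: %w", image, err)
	}
	// Registry outage or other failure: use the local copy if it exists.
	if _, _, inspectErr := cli.ImageInspectWithRaw(ctx, image); inspectErr == nil {
		return nil
	}
	return fmt.Errorf("pull failed and no local copy of %s: %w", image, err)
}

// isAuthError is a deliberately naive heuristic for this sketch.
func isAuthError(err error) bool {
	msg := strings.ToLower(err.Error())
	return strings.Contains(msg, "unauthorized") ||
		strings.Contains(msg, "forbidden") ||
		strings.Contains(msg, "denied")
}
```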
Tasks to do
Explain clearly the security implications of using this policy over always.
Implement this both on the Docker executor and Kubernetes executor.
Comments on the original proposal
Checks for the presence of a locally-cached copy and leverages that first
docker pull is smart enough to do that already.
if-newer still seems like an acceptable name, or something such as always-with-fallback?
if-newer seems to hide some of the features that this policy will provide (which is what we want: falling back to the local image). What if we name this always-if-available? I'm fine with always-with-fallback as well.
Questions
Are we OK with failing docker pull if we get a 403?
If we get an error on Docker pull, should we add retries before we fall back to the local image?
I like always-if-available as -with-fallback has the potential to be misinterpreted as falling-back to a list of alternative remotes. Otherwise, ditto to what @chill104 said.
Implement this both on the Docker executor and Kubernetes executor.
Not as easy, as with Kubernetes the Runner doesn't pull the image. It's Kubernetes that does this, and we pass our three policies to the policies that Kubernetes expects. In other words, if we define pull_policy = "always" in the Kubernetes executor configuration in config.toml, then we just pass this setting to the ImagePullPolicy definition of the container within the pod. It's the Kubernetes cluster that determines how such an image pull should be handled.
Unless Kubernetes provides a pulling policy that would work as we want, we're unable to replicate what we will do "manually" in the case of the Docker executor.
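To make this concrete, a simplified sketch of what "just pass this setting" means, assuming the k8s.io/api/core/v1 types (this is not the Runner's actual mapping code):

```go
package kubernetes

import (
	"fmt"

	api "k8s.io/api/core/v1"
)

// toImagePullPolicy translates the Runner's pull_policy string into the
// ImagePullPolicy value set on the build container in the pod spec; the
// cluster then decides how to handle the pull.
func toImagePullPolicy(policy string) (api.PullPolicy, error) {
	switch policy {
	case "always":
		return api.PullAlways, nil
	case "if-not-present":
		return api.PullIfNotPresent, nil
	case "never":
		return api.PullNever, nil
	}
	return "", fmt.Errorf("unsupported pull policy: %q", policy)
}
```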
If docker pull fails because of a 403 (Unauthorized), fail the job.
If docker pull fails because of any other error, use the local copy if available.
And what if the pull fails with a non-403 error - for example when there is a network connectivity problem - but it would have returned a 403 (read: no access to the target image) in a working scenario?
This opens the security vulnerability that always prevents: in multi-tenant services, where different users have access to the same Docker/Kubernetes executor, an unauthorized user will get access to the image that is stored locally. always prevents this because we always force Docker/Kubernetes to pull the image, and in case of any failure - including a 403 because of missing permission - the job fails.
This strategy will not prevent such a situation, while failing on 403 will give us a false feeling that it does.
This opens the security vulnerability that always prevents: in multi-tenant services, where different users have access to the same Docker/Kubernetes executor, an unauthorized user will get access to the image that is stored locally. always prevents this because we always force Docker/Kubernetes to pull the image, and in case of any failure - including a 403 because of missing permission - the job fails.
@tmaczukin this is a concern I've raised multiple times, and it doesn't seem like it's a concern for any user who wants to use this. I think we should say "If you want security use always; if you want reliability for when the service is down, with an added security risk, use this new policy". At least we have some security with this new policy when the registry is up, but if the registry is down we will fall back to the local disk for reliability, and this is something we should make super clear to users setting this.
but if the registry is down we will fall back to the local disk for reliability
Which means we have no security in the case of this policy. A policy that is secure when everything goes well and fails when problems start is not "security".
I understand the need for having this policy. And while I'd personally not advise using it (and even not want us to implement it), ultimately it's the administrator's choice whether it should be used and whether it is safe in the target environment.
For example we, as administrators of a multi-tenant runner environment where VMs are re-used (our gsrmX GitLab.com runners), should never use this policy - to make sure that our users' data is secure. But in the case of organization-internal runners this may not be a problem at all. And as I said, I understand why people may want to have this option.
But let's not connect this policy with the word secure.
There are two secure policies:
always - when the image is always pulled, and the pull must be authenticated and succeed,
never - where no pulling is done and jobs can use only images explicitly provided by the runner's administrator.
Everything else, in a shared environment, shouldn't be considered nor named secure.
The second problem is the compatibility with Kubernetes executor.
Currently we have the same three policies for both executors. By adding this always-with-cache (or however we will name it) we create another difference between them.
If we can ensure the same behavior for the Kubernetes executor, then it's OK and the only problem is the security discussed above. But if not, and we can't introduce the always-with-cache pull policy for Kubernetes, then we should carefully think about whether this is a good direction.
Especially as another feature - pull policy controlled from .gitlab-ci.yml (with its own set of security concerns that I pointed out in its issue) - is being discussed.
With this we again hit a case where one can't simply switch from a Runner with the Docker executor to a Runner with the Kubernetes executor without updating all of the jobs.
this is a concern I've raised multiple times, and it doesn't seem like it's a concern for any user who wants to use this.
@steveazz I think you summed this up really well. It's really clear from the discussion that there are some uncomfortable security implications to using this pull policy. However, everyone who's looking to use it has acknowledged that they aren't concerned about the security impacts while using it and would prefer to have it.
There are two secure policies:
I think we should re-structure the documentation to include this way of looking at the pull-policies when we add this new one. Making it very, very clear what the impacts are.
The second problem is the compatibility with Kubernetes executor.
Implement this both on the Docker executor and Kubernetes executor.
For our use case, we need it in the docker+machine executor, not sure if it inherits from docker.
Are we OK with failing docker pull if we get a 403?
IMHO, a 403 falls back on cache as well. If the image is there in cache, sometime in the past we've trusted the endpoint and had the container. Are there security implications if the local cache is faked by renaming some other image? Can we see if the local copy was genuine from upstream at one point?
If we get an error on Docker pull, should we add retries before we fall back to the local image?
There are already 3 retries AFAIK, I say keep them.
Decide on a name for the policy.
I prefer always-with-fallback
We're scrambling to look into other solutions, which have us accumulating a lot of maintenance burden with our own dependency proxy, and that is not the direction I want to go. The most frustrating thing about the incident yesterday is that the local copy of the container was right there, because we cache every day to ensure it's there, but couldn't be used.
sometime in the past we've trusted the endpoint and had the container. Are there security implications if the local cache is faked by renaming some other image?
It's not a matter of whether it's faked or not; it depends on your environment. Imagine the following scenario where you have a Runner used by multiple teams, and teamA doesn't have access to teamB's images. If teamA updates their .gitlab-ci.yml to pull the image from teamB, the always policy (set by the Runner admin) would protect teamB's images from being stolen even if the Runner had already pulled the image to disk. However, with always-with-fallback teamA still gets to use the image, and can do anything they want with that private image, even if they get a 403.
If we make always-with-fallback use the local image even when we get a 403, it is a bit less secure - though even with a 403 check, if the registry is down they can still get the private image. I'd like to check how if-not-present handles this scenario; maybe we can have the same logic.
The most frustrating thing about the incident yesterday is that the local copy of the container was right there, because we cache every day to ensure it's there, but couldn't be used.
@chill104 I'm not sure about your environment, but if you update the cache every day, why not use if-not-present?
@chill104 I'm not sure about your environment, but if you update the cache every day, why not use if-not-present?
We cache a container tag that changes upwards of 20 times per day, so we always want to ensure we are getting the latest (but the caching helps start-up times)... but in the case of the outage, changes to the tag went to 0 (because of the outage), which means the cached version of the container would have been just fine to get us through, and we'll sacrifice the changes to the tag for the 15,000 other projects that need it.
One idea, that I haven't really formed in my head just yet, is to see if we can just make if-not-present do this behavior - basically extending the logic of if-not-present to use the local image if we get an error from the registry. I think (I haven't looked at the code yet) if-not-present already tries to do a pull.
@jenglan5 had a great suggestion on a call yesterday (perhaps best captured in a separate issue once we flesh it out). Managing the pull policy of a runner fleet via a feature flag (or some other easily-changed method that doesn't require restarting/repaving a runner fleet) would be ideal. There is a concern that an always-if-available style pull policy might confuse users. Because it's not surfaced in the gitlab UI (outside of job logs) if the registry is unavailable or erroring out, it is a poor UX if a user was attempting to run a pipeline and couldn't figure out why a newer version of a container image wasn't available.
For that reason, their team is thinking of an always-if-available (or indeed an if-not-present) as a "break glass in case of emergency" that they'd only want to enable in the event of a known registry problem.
Perhaps another solve for this concern of confusing UX would be a visual indication in-pipeline that the runner has "fallen back" on a cached copy of an image.
@jrreid @steveazz One output of our post mortem was an idea to change the pull policy on the fly within the runner configuration settings... i.e. during the outage we could go to group settings -> CI/CD -> runners, click each runner, and change the pull policy to if-not-present, and on the next check-in the runner would have been instructed to do it differently. This would have forced reliance on the cache instead of using the cache as a start-up primer.
Has it been explored to manipulate config.toml on-the-fly like this?
This approach would obviate the need for any newly-defined pull policies, right (it's always when things are good, and if-not-present in case of emergency)?
Is managing this via group settings the ideal approach in your view? I believe I also heard of using ConfigMaps for k8s clusters.
Has it been explored to manipulate config.toml on-the-fly like this?
@chill104 No, and it's for a good reason: it opens up a lot of security backdoors that we would have to consider. At the moment the communication is GitLab <- GitLab Runner and never GitLab -> GitLab Runner; the Runner talks to GitLab and GitLab never talks to the Runner. The only exception to this rule is interactive web terminals for debugging jobs.
There are a few reasons we follow this design and why I would strongly advise against this.
Runner IPs can change at any time: We would need to make sure that GitLab has the latest IP address of the Runner; if not, we might end up sending a request to an invalid IP (think about autoscaling Runner managers). Rails does store the IP address when it receives a request from the Runner, but this is not always accurate because of proxies and can change at any time.
Change configuration over HTTP: There are a lot of security implications that we would need to think about if we allow Rails to change the configuration over HTTP. We would have to think about authentication; at the moment we have nothing around this, since authentication happens on the Rails side, not the Runner. We would also most likely have to make an allow/disallow list for which configuration fields can change, which complicates A LOT of things.
File permissions: At the moment you can set up the runner config.toml to be read-only for the service so it can't be modified by the process; we would have to open up the permissions for this as well.
Security: There are a lot of security implications that we would need to think about, and it would drastically overcomplicate things for little benefit.
Communication: Users at the moment can have their network configured so that GitLab can't contact the Runners and only Runners can connect to GitLab. Think about all the firewall changes users would have to make, opening up their Runner fleet to the public internet if they are using GitLab.com.
Gating/Compliance: How are we going to make sure everyone is aware that a change has happened in the configuration, and doesn't this go against configuration as code?
Adding more vulnerable points to the Runner: You are opening another route where a malicious user can attack to gain control over the Runner, which can end up causing harm.
I think if you are willing to have a manual process to change configuration, we should think about a different solution, because on-the-fly configuration changes over HTTP have a lot of security and architecture implications, and the benefits do not outweigh the problems we will face.
At the moment if you change something in the config.toml the change is automatically picked up; there is no need to restart the service. At GitLab.com we use Chef to manage the config.toml, and every time we want to change something, for example the region where we spawn the machines, it's just a Chef change, where it's tracked/reviewed and propagated. Would it be possible for your infrastructure-as-code tool to do the same thing?
Provide an environment variable in .gitlab-ci.yml to specify which pull policy to use. We can provide an environment variable that users can set in their CI/CD settings, but this comes with more configuration because we would have to allow users to turn off this feature for security reasons. This is also less than ideal simply because it adds another point where the user can misconfigure something. This can work similarly to git depth or git strategy if we want it to be through the UI.
I would strongly argue against the manipulate-config.toml-on-the-fly idea just because it brings all the drawbacks mentioned above. @chill104 I'm not familiar with your setup, but would it be possible to change the config.toml via your configuration management tool? Wouldn't that be simpler if you are willing to do manual fallbacks yourself? The simplest and most ideal solution would be to introduce the always-with-fallback policy or use the configuration management route.
@steveazz - agreed, just a thought. We don't have automated chef against our gitlab-runner fleet at this point, and don't see making that investment in the foreseeable future unfortunately.
So I guess this bubbles back to always-if-available. But now I'm even more nervous, as we lost almost an hour with our entire fleet today over docker+machine with 13.2; we're pinned to 13.1.
But now I'm even more nervous, as we lost almost an hour with our entire fleet today over docker+machine with 13.2; we're pinned to 13.1.
I understand. We have been running 13.2.0 in production for over a week now (13.2.0-rc2 is the same as 13.2.0) and we didn't experience the problems you saw. I'm waiting on @pschwar1 to open up an issue with an example config.toml and the errors that you folks started to see, to help figure out what the problem is.
@chill104 hi - is this feature still a high priority item for your team? Reason I am asking is that I don't think we can get to this in the upcoming milestone unless we readjust priorities.
Of course we experienced one of the worst GitLab registry outages to date this weekend... we're at 9 hours I think now where most jobs are failing to pull from the registry even though the images exist on the local machines...
@chill104 Not yet. We will have to assign someone to get it done in 13.7, as we only have a few business days left before we freeze the code for Runner 13.6.
@pschwar1 did confirm with me via slack on Friday. Their shared runners are docker+machine and there is not a firm timeline set for a transition to kubernetes runners.
Darren Eastman changed title from Create a pull policy that can fall back on local cache to Create a pull policy that can fall back on local cache [docker+machine executor]
Darren Eastman changed title from Create a pull policy that can fall back on local cache [docker+machine executor] to Create a pull policy (always-or-fallback) that can fall back on local cache [docker+machine executor]
Darren Eastman changed title from Create a pull policy (always-or-fallback) that can fall back on local cache [docker+machine executor] to Create a pull policy (always-or-fallback) that can fall back on local cache for the docker+machine executor
Darren Eastman changed the description
@DarrenEastman - One question came up during the review - is there a good reason why we wouldn't make it the default so that everyone can use this without changing their settings?
@adawar One reason is outlined here, which is the security concern raised by the use of this policy.
I understand the need for having this policy. And while I'd personally not advise using it (and even not want us to implement it), ultimately it's the administrator's choice whether it should be used and whether it is safe in the target environment.
Note - one argument against implementing this new policy, ahead of something similar (Always-If-Available Image Pull policy 95854) being introduced in Kubernetes, is that our implementation would temporarily be different from Kubernetes'. In addition, we would no longer have consistent pull policies across our docker, docker+machine and kubernetes executors.
Yes and no; for a multi-tenant environment we don't want project A pulling private images from project B if they run on the same environment. Yes, they can try to escape from the container to look at the private image, but that is much harder to do than just updating the .gitlab-ci.yml with their image name.
We have to remember we should always have multiple layers of security, and always adds this layer for multi-tenant environments. Having always as the default means having smart and secure defaults for the runner.
Beyond security concerns, another thought that occurred to me: having a fallback pull policy as the default might be confusing to users. Imagine an example:
If you have not explicitly chosen an always-if-available pull policy, and you were expecting a pipeline to pull a recently-published image tag, having your pipeline succeed (even though it's failed to fetch your expected image) during a registry outage might be confusing to troubleshoot.
We should consider how to make this an intuitive UX that makes a fallback-on-cache scenario immediately obvious to users.
is there a good reason why we wouldn't make it the default so that everyone can use this without changing their settings?
We also need to make our default configuration as secure as reasonable. We can't tell if a user is using the runner in a shared environment or not so our default security settings should assume they are.
@chill104 @pschwar1 One other idea that comes to mind is to actually have a mirror for registry.gitlab.com that users can host, similar to what we can already do for DockerHub. There is one gotcha with the registry mirror, which specifies "It's currently not possible to mirror another private registry. Only the central Hub can be mirrored." I wonder if it's better to put the effort into extending the mirror, or even have https://gitlab.com/gitlab-org/container-registry support mirroring private registries so it adds resiliency to the user's infrastructure. If we add it to https://gitlab.com/gitlab-org/container-registry we can easily package this for users to have registry.gitlab.com mirrored by default.
The benefits I see from using a registry mirror over this solution or the solutions proposed here are below:
It implements the exact same logic as this new policy is proposing: always pull fresh images, and if the registry is down, use the image from the mirror's local storage.
You end up saving network bandwidth (egress) since we don't reach out to registry.gitlab.com.
You get control of what images can be pulled with an allow/deny list
This will be executor agnostic, so it will work for both Docker and Kubernetes without any discrepancies between them.
Still have the security you get with always since authentication is always going to be checked.
Pulls are going to be faster if the mirror is hosted in the same AZ as the runner fleet.
The biggest drawback I see with this one is that users have to host their own registry.
@steveazz So this is one of the things we wanted to consider. The issue though is that you can't mirror our top-level namespace as a whole with how the GitLab registry currently works.
Currently we are attempting to use Artifactory as a pass-through proxy where we can append the Artifactory URL in front of the GitLab registry path, but the pain here is that we have to modify every CI YAML file and template to include ${REGISTRY_PROXY}/ in the image line.
The biggest drawback I see with this one is that users have to host their own registry.
Their own registry with its own set of permissions, and a fragmented experience unless investment is made to pass through the project/sub-group structure. It's been mentioned that GitLab wants to bundle in an integrated experience, so this may mean GitLab is left on the hook to package a self-starter GitLab registry mirror for their enterprise customers.
I think I've come up with an implementation that will solve the problem of the registry being down, and it will also solve it for both the Kubernetes and Docker executors - all without introducing a new policy, but still depending on pull policies.
Proposal
What if, instead of creating a new pull policy, we allow users to define multiple pull policies? For example, the user can define pull_policy = ["always", "if-not-present"] inside their config.toml. The Runner will first use the always pull policy; if that fails, it will use the next one in line, which is if-not-present. This will achieve the always-or-fallback pull policy without introducing it.
So for example imagine I have the following config.toml
```toml
concurrent = 1
check_interval = 0

[session_server]
  session_timeout = 1800

[[runners]]
  name = "steve-mbp-gitlab.local"
  url = "https://gitlab.com/"
  token = "xxxxxx"
  executor = "docker"
  [runners.docker]
    tls_verify = false
    image = "localonly/alpine:3.12"
    privileged = false
    disable_entrypoint_overwrite = false
    oom_kill_disable = false
    disable_cache = false
    volumes = ["/cache"]
    # Multiple pull policies specified, we'll go one by one if it fails. In this case,
    # first it will try and pull the image, then use the local image if it's present.
    pull_policy = ["always", "if-not-present"]
    shm_size = 0
```
We can see it working like below:
Benefits
Implement it for Kubernetes executor: We can see if the error on the pod is about pulling the image. If it is, we just recreate the pod, but this time with the if-not-present policy (see the sketch after this list).
Simpler to implement: We can just check the error of the pull; if there is an error, we log it and move on to the next policy.
Not introducing a specific policy for the runner: No custom/new policy for the runner that other tools and the industry don't use.
Not introducing a new policy with the same security flaws: This was the biggest concern we had when discussing this. There is still the security flaw because we have if-not-present as a fallback, but at least that is already well documented, and most users are educated about it.
Stackable: We can stack multiple policies on top of each other, and this might give even more control over what the user might want.
Less risky to change: We don't have to touch or change any of the existing logic; we just have to wrap it and check whether there is an error, and if there is, move on to the next one in line.
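A rough sketch of the Kubernetes-side check from the first benefit above, assuming k8s.io/api/core/v1 (the pod recreation with the next policy is left out; the waiting reasons are the standard kubelet ones):

```go
package kubernetes

import api "k8s.io/api/core/v1"

// isImagePullFailure reports whether the pod is failing because its image
// cannot be pulled; in that case the executor could delete the pod and
// recreate it with the next configured pull policy (e.g. if-not-present).
func isImagePullFailure(pod *api.Pod) bool {
	for _, cs := range pod.Status.ContainerStatuses {
		if w := cs.State.Waiting; w != nil {
			if w.Reason == "ErrImagePull" || w.Reason == "ImagePullBackOff" {
				return true
			}
		}
	}
	return false
}
```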
Drawbacks
Breaking change: This of course is going to be a breaking change because we go from a string, pull_policy = "always", to a slice of strings, pull_policy = ["always", "if-not-present"]. We can change this gradually, as we did for Docker services, where we support both syntaxes for a while and then deprecate the old one.
@tmaczukin @ggeorgiev_ @erushton this should address most of our concerns, what do you think? I've done a PoC to implement this in !2587 (closed). If we just want it for the Docker executor, there is a high chance that it can make %13.7 if we make it the most important thing on our plate.
@DarrenEastman what do you think from a product perspective?
@chill104 @pschwar1 would this implement what you are looking for? Is updating the configuration to pull_policy = ["always", "if-not-present"] acceptable for you?
This way we won't have to come up with policies specifically for the Runner and can keep them more "native". It also has the added bonus of not starting a trend of creating custom policies.
When we discussed the new pull policy we agreed it would be temporary. This is less than ideal since inevitably people will start depending on it and removing it will be nothing but an inconvenience. Supporting multiple policies is fairly painless and we can keep it forever. I don't even see a drawback to keeping both syntaxes - a single pull policy and an array of pull policies.
One of the bigger wins is that we can implement this in Kubernetes as well, while a custom policy is impossible to implement there.
At the end of the day, the two solutions aren't that different as they achieve the same result, but multiple pull policies feel more right, and native in terms of implementation and versatility.
Yeah I like this @steveazz - I like that it doesn't lead us to make-up terms that aren't industry standard and that it works cleanly for k8s and docker.
When the user has configured multiple policies (len > 1), we can assume that they expect possible errors. Therefore I'd suggest the following logging states:
WARN (yellow) if a pull policy fails
INFO (green) when the next pull policy is taken and one has failed before (or combine it into a general log message)
If all pull policies fail (count == len), then add an additional stage with
ERROR (red) all pull policies failed. Exit state = 1 (or similar)
For configuration compatibility, maybe always convert the string into a single element array.
@DarrenEastman yes that is fair to say, I haven't started working on this just yet (apart from doing the PoC) so I still need a few more days of development.
Steve Xuereb changed title from Create a pull policy (always-or-fallback) that can fall back on local cache for the docker+machine executor to Allow user to specify multiple pull policies for Docker executor
Darren Eastman changed title from Allow user to specify multiple pull policies for Docker executor to Configure multiple pull image policies for Docker executor
Darren Eastman changed title from Configure multiple pull image policies for Docker executor to Configure multiple image pull policies for Docker executor