We currently show deployment histories for successful deploys to each environment. Developers and Ops need more control and insight into deploys such as:
following a deploy from the start, not just when it's done
watching rollout of a build across multiple servers
finer state detail (Ready, Preparing, Waiting, Deploying, Finished, Failed)
ability to abort a deploy
Proposal
In the auto-deploy template, label each deployment with version information
Inspect Kubernetes for the state of each node (e.g. spinning up, spinning down, running version A, running version B)
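For concreteness, a minimal sketch of what that labeling step might look like in the auto-deploy script (the variable names and the use of kubectl label here are illustrative, not the actual template):

    # Hypothetical: tag the environment's deployment with the commit it ships,
    # so the board can later read back which version each pod is running.
    kubectl label deployment "$CI_ENVIRONMENT_SLUG" \
      "version=$CI_COMMIT_SHA" --overwrite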
There is a time picker in the upper right corner (to be designed) that will allow you to change the time span. The chosen time span will show the deploys performed during that time. Hovering over an instance will show a tooltip with its current state.
You've designed it as if we'd see multiple deploy graphs at the same time, but since these are all deploys to the same environment, I think we'd only show the current state of that environment, thus only one of these at a time. e.g. when that one deploy aborts, it would then likely rollback to 0%. Although I guess it might depend on why it failed. Some deploy strategy might leave it partially shipped. A more realistic scenario might show a single failed box inside the sea of successful ones. But either way, once the next deploy overwrites that, the old deploy information would no longer be current.
It brings up an interesting thought: when a deploy is happening, we can see which boxes are running the new code, but what are the other boxes running? This is where a blue/green visualization may help, where you can see that new code is running on X% of containers, and old code is running on the rest. This is even more valuable when you use canary deploys or incremental rollout, where you consciously only deploy to a portion of the fleet, so a deploy could be "successful" and yet only cover 10%.
@tauriedavis What do you think about merging this into either the Environments or a peer-level Deployments page? It would likely be overkill for review apps in the Environments page, but that's also the place where we list staging and production, and so this could be seen as additional information about the environment itself. Alternatively, if we had Deployments list across all environments, it could be seen as related to that. Or maybe it's separate. :)
The original image shows two environments "Production" and "Next". Next would be staging in our vernacular. It has far fewer containers than production.
In this case, I think that image would be only showing builds on master, so it's not quite equivalent to our deployments page or pipeline list.
You've designed it as if we'd see multiple deploy graphs at the same time, but since these are all deploys to the same environment, I think we'd only show the current state of that environment, thus only one of these at a time.
Ah, yeah I was thinking you'd be able to go back and look at all deploys but maybe that isn't useful, since they are out of date as you mention. In that case I think it does make sense to combine with the environments information. I do like all the monitoring being in one place, though.
Sidenote: How does this work for GitLab itself, where there are no environments?
when that one deploy aborts, it would then likely rollback to 0%.
I was wondering about the difference between rollback and abort. Functionally, what is the difference? Is it realistic to have both buttons?
It brings up an interesting thought: when a deploy is happening, we can see which boxes are running the new code, but what are the other boxes running? This is where a blue/green visualization may help, where you can see that new code is running on X% of containers, and old code is running on the rest. This is even more valuable when you use canary deploys or incremental rollout, where you consciously only deploy to a portion of the fleet, so a deploy could be "successful" and yet only cover 10%.
I'm kind of confused by this. What are the different states you would be imagining for each box if we were to show new code vs. old code?
@tauriedavis Until gitlab-ce uses deploys, the deployboard would be empty or disabled.
Rollback would usually be if you find a problem in the latest deployed version and want to roll back to a different version. Abort would abort a currently-in-progress deploy (and likely roll back to the previous version).
@markpundsack What are your thoughts on moving the deployment list to this deployboard page? It definitely seems to make sense for them to be together.
@tauriedavis You said "deployment list", but what you've shown is the environment list (which shows the latest deployment for each environment, but not the historical list of deploys). Is that what you meant?
I think this can work. It's a little weird that staging and production appear twice, once for the deploy graphic and once for the current information (about the last deploy). I wonder if the graphic is really just more metadata about the environment. Perhaps for review apps it would be hidden by default, but for consistency, it could be shown.
Having never used a deploy progress graphic before, I can't say which is the most important part of this. Would I want to see the graphic all the time? Would I want it as a separate page because I'm doing a different job when I see it? Or should it be secondary information in the environment list, to show when I am specifically debugging something or otherwise interested in it?
Here are some use cases that may help.
I want to see what's running on staging, so I go to the environment list, look at the staging row, see what the last commit was (or a list of commits, or even better, a list of MRs that exist in staging but not in production), then click the URL button to see what's running live.
I want to promote what's running in staging, to production. I go to the environments list, verify that what's running in staging is what I think is running, then click on the manual action to deploy to production.
I trigger a deploy, and I've got lots of containers to upgrade so I know it'll take a while (and we've throttled our deploy to only take down X containers at a time). But I need to tell someone when it's deployed, so I go to the environments list, look at the production environment to see what the progress % is. I'm particularly concerned about this deploy, so I want to see the graph updating in realtime as each container is rolled.
I get a report that something is weird in production, so I look at the production environment to see what is running, and if a deploy is ongoing or stuck or something.
I've got an MR that looks good, but I want to run it on staging because staging is set up in some way closer to production. I go to the environment list, find the review app I'm interested in, click the manual action to deploy it to staging. [Really, I'd probably go to the MR, ideally find the latest pipeline section of the system information box, and trigger the action from the mini pipeline graph. For now, I'd have to go to the latest pipeline, click through to details, and trigger the action from there.]
A problem is detected in production, so I go to the environments list, click through on production, see the deploy history, pick the last release and roll back.
So, for all these use-cases, the progress graphic isn't the primary thing. It's kind of eye-candy, unless you're trying to debug something. But thinking about debugging, I'm not sure what I'd really want. Would I want to see failed deploys, and be able to click through to see details for them? Retry individual container deploys? If the deploy is via puppet or something that updates a container/VM in-place, then maybe this makes sense. For a lot of container deploys, you spin up a new container and when it's running, you kill an old one. So a failed deploy would mean some container failed to start. Hopefully that's rare, but there could be lots of reasons for that.
@tauriedavis There's a screenshot and link to a video from Redhat that might help explain the incremental deploy stuff. https://gitlab.com/gitlab-org/gitlab-ee/issues/1387 I don't know how to factor that into this deployboard discussion, so let's put it aside to not get distracted for now.
This is actually what I was originally designing, but then I thought we would want to show a history of deploys and not just the latest. But that doesn't seem as useful as having it with the environments view. Like you said, all the use cases mentioned seemed focused around the environments table. The graph is more of a subset to check just in case. It doesn't really seem necessary to include the graph on review apps unless I'm missing something. It doesn't seem like you would abort or roll back a review app.
There's a lot going on in the table now with this graph (which is why I tried it with the graphics above the table). Maybe I've just been looking at it for too long. @dimitrieh what are your thoughts?
In that sense, is the roll-back option only available when the graph is expanded?
Does the abort button need to be shown as inactive on completely finished deploys?
My thinking here is that I thought the roll-back button was inactive too, due to contrast... but perhaps that is a different problem.
Another thought (and perhaps this is feature creep, @markpundsack): what about a compare button when a deploy is finished... or even while it's still deploying...
In that sense, what about metrics such as deploy time per container? That way you can see which machines run faster than others, etc., and thus compare...
The following mockup shows what that's like and is probably more for an iteration... also it's less flexible, as all instances need to be on the same line.
@dimitrieh A compare button is feature creep, and should be a separate issue because it's really unrelated to multi-container deploy states. But, it is a good idea. People should be able to compare what is in staging vs what's in production, for example. Especially if you're about to hit the manual action to deploy whatever is in staging into production. Heroku even shows a dialog when you click on promote, which includes links to the compare on GitHub. (https://devcenter.heroku.com/articles/pipelines#promoting). IIRC, clicking on the Git SHA in the pipelines view takes you to the diff as well. Or maybe that is the diff to master.
@dimitrieh I'm not sold on the value of "metrics such as deploy time vs container". Most people will be deploying to homogenous fleets, so deploy times should be roughly equal. If not, that might be something worth alerting on, but it's not something I'd want graphed by default, and I don't think it would be more important than the existing view Taurie has mocked up. It would provide an interesting visual for incremental deploys, but just doesn't seem high importance. What's the use-case you're concerned about?
@tauriedavis just a thought here: do these graphs come when you push the graph button, or will they have the caret, the same way dynamic environment folders do? If so, when is the button used exactly? :)
Note that (3) above may have significant edge cases, such as knowing when a deploy has started and displaying the environment/deployment information before it has finished.
@ayufan I wonder if we can cut scope further for the first MR to ignore versions, and simply focus on state of pods, basically ready/not-ready. From what I can see, Kubernetes creates a new ID for each deployment, and we should just show them each distinctly.
e.g.
    $ kubectl get pods
    NAME                          READY     STATUS    RESTARTS   AGE
    production-2035384211-khku8   1/1       Running   0          14s
    production-1564180365-nacti   1/1       Running   0          14s
    production-1564180365-z9gth   1/1       Running   0          14s
The above would show 3 boxes, 1 running 2035384211, 2 running 1564180365. It might even make sense to graph each ID as a separate row of boxes so you can see boxes leaving one ID and moving to the other ID. Not sure if this is the best representation, but might be easier with Kubernetes.
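As a rough illustration of that grouping, the generation ID can be read straight out of the pod names above (a heuristic sketch only; parsing names is fragile compared to reading labels):

    # Count pods per deployment generation by splitting each pod name on '-'
    # and taking the second-to-last segment (the template hash shown above).
    kubectl get pods -o name | sed 's|.*/||' \
      | awk -F- '{print $(NF-1)}' | sort | uniq -c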
Also note that we can always step a tier down, not up. I think making this EE Premium is acceptable. This feels like a replacement for an entire product.
It makes sense, but to be fair, adding that information to auto-deploy is a 3-hour change. My concern is: checking only the pod names, how will you know which one is the latest?
What we should do is look at the ReplicationController and monitor the state of the deployment from there.
@ayufan yeah, I'm torn. Tracking versions seems like a good idea. But being tolerant of that information not being there seems like a good idea too. In particular, I don't like tying this only to Auto Deploy since that's likely to only cover a small portion of Kubernetes users of CD. So telling everyone to add a specific label to their deploys might be fine, but slightly annoying. So it's not just about being easier for us, it's about what's easier for the user.
Can we look at the list of deployments and just find out which one is most recent? Wouldn't that tell us which one is latest reliably?
@ayufan's asked me to take on implementing the initial backend for Kubernetes in 9.0, and I'm just trying to reason through the data kubernetes provides to make this possible.
Is each "instance" meant to represent a pod, or a container? How about resources related to a deployment that aren't either, such as services?
We'd be caching the kubernetes data through ReactiveCaching, so polling gitlab-rails for changes to make it "realtime" wouldn't be particularly expensive, even if you've got hundreds of people watching the same page. However, this does limit the update resolution.
I wouldn't expect us to re-read the data from kubernetes more frequently than every minute, and we don't have great control over that, given how we're currently leaning on Sidekiq to do this. Does that frequency of update make sense for a first iteration, or are most deploy boards going to be everything is pending <poll> everything is done <poll> nothing has changed <repeat> ?
@nick.thomas good questions. My 2c is we would want this to represent pods not containers as those tend to be what you specify when trying to do a typical canary deploy, rolling update, or managing the number of replicas.
On the cache lifetime, 1 minute is probably okay. We definitely wouldn't want longer, as the value here is in getting an accurate picture of the current deployment and if there are any issues. If we can't represent an accurate picture, people would use another tool for the job.
1m is fine to start, but we might very well want finer updates after the first iteration. People will probably expect more like 5s resolution.
@nick.thomas Good question about services. Any suggestions? What would you want to see? Auto deploy likely won't roll the services on each commit, so I'm guessing we can ignore for now. But a "real" app might see things differently.
For what it's worth, I believe we are using the same reactive caching method for Prometheus data (gitlab-ce!8935). We are trying to have a lifetime of 30s there. If we can do that reliably there, maybe we can make it 30s here too?
Is the deploy board shown by default, or do we have to "expand" the environment the same way as we expand folders?
Are these states returned by Kubernetes: Finished, Deploying, Failed, Ready, Preparing, Waiting? @nick?
Where should we get deployment status from? Is it ReplicaSet or ReplicationController?
The deployment entry is created post-build, not pre-build, which may be a problem: on the Environment page we will still show the previous deployment while a new one is being created. It may be required to create a deployment that is "running" and show this running one on this page. @nick could you chime in?
How will deploy abort actually work? Is it a CANCEL of the deployment job? But the job will still run; it will not cancel and roll back the existing deployment. Should we talk directly to Kubernetes and do some magic there? It is not yet clear to me how this would be done.
(Stretch) Should we also fetch the version out of the pods to show how many of them are running the latest? Where should we show it?
Choices
We should return an array of squares with state,
We should return a tooltip with the name of the pod being processed and its status,
For Abort and Rollback, we will send a URL (or omit it) that the button will hit when clicked?
Should API return the percentage of completion?
(Stretch) Updating a deploy board: fire an API call every so often to update a visible graph? What happens if we display 10 different graphs, do we update all of them? (define 1.) Maybe in the API we should tell the frontend whether or not to keep updating.
Rollback: will trigger the deployment job before the current one.
Add Environment#abort: to cancel the latest deployment that is happening, but will this actually cancel the ongoing deployment?
Add Environment#rollback: to trigger the retry of the last successful job,
For now, we will redirect to a new page when you click either Abort or Rollback.
API
    /group/project/environments/1/status.json
    {
      instances: [
        { status: "finished",  tooltip: "tanuki-2334 Finished" },
        { status: "finished",  tooltip: "tanuki-2334 Finished" },
        { status: "deploying", tooltip: "tanuki-2334 Finished" },
        { status: "preparing", tooltip: "tanuki-2334 Finished" }
      ],
      abort_url: "/group/project/environments/1/abort or cancel (POST)", # can be `false` if we can't Abort
      rollback_url: "/group/project/environments/1/rollback (POST)",    # can be `false` if we can't Rollback
      completion: 100,    # percentage of deployment completion
      is_completed: true  # this says to Frontend whether they have to continue updating this graph
    }
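As a sanity check of the is_completed contract above, here is what a dumb client poll might look like (the host, interval, and grep-based check are all invented for illustration; the real frontend would parse the JSON properly):

    # Illustrative poll: keep refreshing until the API reports completion.
    URL="https://gitlab.example.com/group/project/environments/1/status.json"
    until curl -fsS "$URL" | grep -q '"is_completed": *true'; do
      sleep 5  # the update resolution is still an open question above
    done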
Next steps
We know most of the things for Frontend and we have a definition of the API; we're missing information on when we should display a deploy board, but this allows @filipa to be unblocked and able to work on this,
We have a few gaps in how to handle the deployment itself and how to get instance information out of Kubernetes; this needs to be figured out,
We need our deployments to be stateful, not stateless. Should we create a running deployment that would be shown on the list?
We have to figure out the mechanics behind Abort, as we effectively have to revert to the previous ReplicationController. So semantically it may mean that we need to re-deploy the previous successful deployment, and maybe this is the exact meaning of Abort. What would then be the difference from Rollback?
@nick @markpundsack I believe that we need some extra time for figuring out steps 2, 3 and 4.
Is the deploy board shown by default, or do we have to "expand" the environment the same way as we expand folders?
Deployboard is expanded but can be collapsed.
How will deploy abort actually work? Is it a CANCEL of the deployment job? But the job will still run; it will not cancel and roll back the existing deployment. Should we talk directly to Kubernetes and do some magic there? It is not yet clear to me how this would be done.
@markpundsack Can you answer this one? I remember talking with you about how abort will work but do not remember the exact answer.
(Stretch) Should we also fetch the version out of the pods to show how many of them are running the latest? Where should we show it?
Is this a collective number showing #/# latest, or can we add it to the tooltip for now when an instance is latest?
Agree, a hard #/# count would be a good idea. Maybe we can put it near where we display the percentage? (Perhaps "59% Complete" is one line with "59/100" on the next?)
Latest doesn't represent completed, right? Or does it? If we add it to the left side next to percent complete, we should label it with Latest. I can make a comp after clarification. Thanks!
If needed, you could cut "Abort" functionality, or evaluate the simplest implementation, which might stop the rollout, but not revert until you click Rollback. I'm not saying that's the best experience, but if we're worried about time, let's cut whatever scope is needed to get something out so we can learn more and iterate.
The main reason it's vague right now is that I haven't explored what's possible/normal with Kubernetes. I'd look at this as product discovery work, and we should find out what makes sense given Kubernetes rather than being stuck on a mockup.
To make sure: if we have 30 environments, will we show a deploy board for each of them?
It's kind of unlikely to happen, but it is possible that we will have 5, and we will have to fetch data for each, so this may have performance implications, as this is a costly operation: 1. frontend -> backend -> background -> Kubernetes, 2. frontend -> backend -> read cache with deploy information. So it has the potential of DoS-ing GitLab.
We had talked about only expanding deployboards for top-level environments by default, so if you use review/* for review apps, they wouldn't show up.
Also, I wonder if we should only expand environments with an active deploy. Is there much value in showing the deploy board for a finished deploy? (Might be even better with realtime, if the board automatically shows up when a deploy kicks off. But then I'm not sure we'd want it to auto-collapse when done. Maybe after X second delay...) Anyway, all out of scope for now. Let's just keep it simple for first iteration.
@tauriedavis @filipa These are "figured-out" statuses; we need to confirm that these are the exact statuses we get out of Kubernetes. So be prepared for this to change.
There's kubectl rollout status deployment/environment-slug. However, this looks only at a single deployment, which is what auto-deploy currently does, but it's not far-fetched to have multiple deployments happening for the app concurrently, if our app consisted of a DB and an application.
To make it simple now, we would assume that each environment has exactly one deployment.
kubectl rollout status deployment/... looks at deployment manifest.
    $ kubectl get deployments --all-namespaces
    NAMESPACE   NAME                       DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
    gitlab-ce   review-enable-rev-87dnif   1         1         1            1           20d
If we were to describe the deployment, we would see something like this:
This is everything that we need. We need to get the latest running deployment from the database and find its SHA, get the deployment for that SHA, see if its generation is the latest one, and we will receive the status of our deployment.
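A rough command-line sketch of those checks (the deployment name is assumed to follow the auto-deploy environment-slug convention mentioned above):

    # Has Kubernetes observed the latest generation of this deployment yet?
    kubectl get "deployment/$CI_ENVIRONMENT_SLUG" \
      -o jsonpath='{.metadata.generation} {.status.observedGeneration}'
    # One-shot rollout status, without blocking until the rollout finishes.
    kubectl rollout status "deployment/$CI_ENVIRONMENT_SLUG" --watch=false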
Should we allow expanding the deploy board for "finished" deployments on the environment list? E.g. to see the number of replicas and their health, as the number of replicas can vary based on auto-scaling and the health of deploys.
This is probably out of scope, but it would also be worth thinking about something different:
We will be doing rolling deployments soon, so it would be helpful to not only see the latest one but show the versions that we are currently running.
E.g. we do know that some of the pods are running the previous version, due to the rollout or update strategy. The current design assumes showing only the latest data.
If this is the case then the endpoint should be at the environments level, but should include information about which deployment the pod was created by, and whether it is the latest one or not.
For example, I would advise naming the endpoint after Kubernetes: rollout_status.json. The deploy board is a feature that consumes rollout information.
@filipa You may need to add some retry mechanism for that. It can also be helpful to sync with @jivanvl on how you are doing that, as he is working on Prometheus graphs.
@jivanvl please ping me if you already have this figured out so I can reuse the solution; if you don't, I will continue with the backoff script I once shared in Slack.
I searched the code and could not find it :) Can you please point me in the right direction?
Should we allow expanding the deploy board for "finished" deployments on the environment list? E.g. to see the number of replicas and their health, as the number of replicas can vary based on auto-scaling and the health of deploys.
@ayufan Yeah, that's a good idea. Eventually, we might even show pod status beyond just deployments. Like if a node gets unhealthy and pods need to be moved to new nodes, you could watch live status. And tying that into Prometheus, we could show more detailed stats on hover/click. But that's all unknown future.
One thing I just realized is that the script to start a deployment doesn't actually wait until a deploy finishes, so we're going to think a deploy is done, when it's actually still ongoing. So that's another good reason to default to showing the current status even for finished deployments.
Knowing that we want to get to canary deploys soon (within a couple of releases), plus the possibility that some changes may be made either by scaling/auto-scaling or other manual changes... I wonder if the status of "100% Complete" really makes sense, for the Environment page specifically. It is not tracking a specific deployment, but rather an environment more broadly. (An MR page may make more sense for tracking a specific deploy?)
I would think that status should attempt to be more specific about what is going on, based on the results of the k8s API: current, desired, ready, etc. It's entirely possible a deploy finished and is "complete" but there is auto-scaling going on which is throwing off your %'s.
Maybe it's as simple as saying "100% Ready" as opposed to complete, but I think the # itself is interesting to see as well.
OK, I've been digging into status a bit. Going from GET /apis/extensions/v1beta1/deployments, we have a very limited view of the kubernetes pods themselves.
If metadata.generation > status.observedGeneration, we can show status.replicas instances, all in state waiting.
Otherwise, we can do some counting.
status.availableReplicas instances in state finished
status.updatedReplicas - status.availableReplicas instances in state deploying
spec.replicas - status.updatedReplicas instances in state waiting
We can't link changes in these numbers to individual instances, nor do we know the pod names. We also don't get to see failures of individual pods. We could look out for the deployment timing out to set spec.replicas - status.availableReplicas as failed instances, though. A sketch of this counting follows.
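To make the counting concrete, a minimal shell sketch under those assumptions (the deployment name and the use of jq are illustrative; the field paths are the extensions/v1beta1 ones described above):

    # Derive instance states from the deployment's replica counters alone.
    D=$(kubectl get deployment production -o json)
    GEN=$(echo "$D" | jq '.metadata.generation')
    OBS=$(echo "$D" | jq '.status.observedGeneration // 0')
    if [ "$GEN" -gt "$OBS" ]; then
      # Controller hasn't observed the new generation yet: everything waits.
      echo "waiting: $(echo "$D" | jq '.status.replicas // 0')"
    else
      DESIRED=$(echo "$D" | jq '.spec.replicas')
      UPDATED=$(echo "$D" | jq '.status.updatedReplicas // 0')
      AVAILABLE=$(echo "$D" | jq '.status.availableReplicas // 0')
      echo "finished:  $AVAILABLE"
      echo "deploying: $((UPDATED - AVAILABLE))"
      echo "waiting:   $((DESIRED - UPDATED))"
    fi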
I can handle multiple kubernetes deployments and use the deployment names to provide tooltips, at least.
Maybe we start by not linking individual pods. Just show them like a progress meter, moving from left to right.
If these boxes were persistent VMs, and we were doing an old-school rollout, we'd probably be rolling each instance in a specific order, so the graph would make sense. If we're showing pods, and letting Kubernetes do the rollout, then it's going to pick pods in a seemingly random order, in which case the nice left-to-right-ness of our graph disappears, and instead we'd have boxes changing colors randomly until they're all updated.
@nick @tauriedavis Feel free to cut scope as necessary to ship something in 9.0 that adds value. We'll iterate from there.
So I think @nick.thomas structure is generally good, but the deployment data could also change over time if there is horizontal scaling occurring. In this case, it could be that spec.replicas > status.replicas which would mean that either the deployment is scaling, something is going wrong, or it's being updated.
You'd have to then dig into the pod status, which is also available, to determine what is going on. I imagine we will need to do this anyway, to get a good representation of each box/pod in the deploy. The primary reason would be to check ready status.
To summarize, I think you have 3 key top level metrics:
# of desired pods
# of ready pods (not current, as I think current can include pods that are not ready/failed which can be misleading)
# of up to date (updated) pods
Since this chart is actually on the Environment tab, I think the most important metric is whether your pods are ready as that is the primary gauge of health of the environment. This could be represented by ready/desired for a percentage. Next priority would be tracking updated, to see how any deployment is going. That would be updated/desired.
@tauriedavis maybe we could have a ready percentage larger, then beneath it a smaller line for updated? Or how do you think that would be best represented?
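As a quick illustration of surfacing both numbers, something like this pulls the two ratios in one query (the field names are from the Deployment status object; the deployment name is a placeholder):

    # Headline: ready/desired for health; secondary: updated for rollout progress.
    kubectl get deployment production -o \
      jsonpath='{.status.readyReplicas}/{.spec.replicas} ready, {.status.updatedReplicas} up to date'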
Look at a deployment that progresses from 1 to X of Y, and just graph X of Y as a progress bar.
Look at the state of pods in a replica set. When you create a new deploy, it creates a new replica set, and grows that replica set while the other one shrinks. Graph each replica set separately, and just have boxes for counts of pods in each replica set. In steady state, it'll be Y boxes. During a deploy, there will be two lines, totaling Y±1.
Similar to above, but determine which replica set is most recent, treat that as the current deploy, draw a box for each pod, and pods that match the current deploy get colored one way, pods that match older deploy get colored differently.
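For options 2 and 3, the raw replica-set data is already one query away; for example (the app label follows the convention discussed later in this thread, so it is an assumption here):

    # One row per replica set: the old and new sets shrink/grow during a deploy.
    kubectl get rs -l "app=$CI_ENVIRONMENT_SLUG" -o \
      custom-columns=NAME:.metadata.name,DESIRED:.spec.replicas,READY:.status.readyReplicas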
I think we've got a bit of a conflict on the goals of this feature. By virtue of the name, deploy boards, we want to focus on the status of a deploy. But by virtue of being on the environment page, we want to focus on the status of the environment. These are clearly related, but not exactly 100% aligned. Ideally we'd cover both, eventually. But we could conceivably focus on only one goal for now. E.g. if we want to just show ready/not ready status of pods of an environment, and ignore deployments and versions, that still has some value and signals our direction.
To do any better, we have to load the replica sets and the pods and do some fairly complicated graph-building to work out which generation a given pod belongs to, or introduce our own versioning scheme on pods by adding a label with version information. Either of those approaches would be fine for 9.1, but is !1278 (merged) inappropriate for 9.0? Not detailed enough?
Ok. It seems that there are too many conflicting ideas here :)
Show status of deployment: the deployment board,
Show status of the environment: the environment status.
Currently, what @nick.thomas did in !1278 (merged) is a status of deployment: rollout status. I agree that we also need to see the status of the environment, but it seems that this is out of scope for %9.0. The mechanics for these two methods are different and requirements for implementing that are different too:
for deployment status, we only need the latest deployment and the data stored there; generally, I would try to avoid looking at replica sets and pods if possible,
for environment status, we need to start storing versions in pods and somehow fuse a lot of sources: the recent deployment (expected number of replicas) and the pods (to figure out pod versions); we probably don't need to look at replica sets, or maybe only the latest one.
The ~Frontend implementation will in the future be compatible with 1. and 2., as the API is quite generic and describes: instances, box statuses, and tooltips. We pretty much have 1. implemented, but we did not yet figure out:
What we do with builds that will create the deployment. Currently, we only know about successful deployments; we do not know that a deployment is being created. The auto-deploy script, when creating a deployment, waits for it to finish. So we do have a discrepancy in the presented data, as we would be showing a deploy board for a deployment yet to be created,
When do we show the deployment board? Should we show the deploy board only for top-level deployments? Should we make them expandable for in-folder (expanded and shown in a separate view) deployments, effectively review apps? IMHO: yes,
For %9.0 we should continue with the deployment board. For %9.1 we could think about showing the environment status, maybe replacing the current deployment board with the environment status (making it more accurate), but moving deployment status to be an expandable property of the deployment when you click on the environment?
Discussed elsewhere, but writing here for the benefit of others... Auto-deploy does wait for the deployment to finish, but because it doesn't have proper health checks, you can still get a 500/503 until the app actually finishes coming up.
We had a call with @markpundsack where we discussed the current deploy board.
Basically, it looks great; the usability of the feature is still limited, but it shows the direction in which we are going.
There are some bugs and improvements that it would be helpful to push:
If there's no deployment for an app, we show 100% complete with no instances, instead of showing an empty state saying that we couldn't find that application. @dimitrieh Could you propose some empty state for when we cannot find an app? @nick Can we return valid: true|false in the API?
@filipa Would you create an MR on top of master that refreshes the deploy board every 1s if it is not completed, even if we receive a 200? We would test whether that is sufficient for a semi-realtime experience and how it affects performance,
Rephrase the Kubernetes Integration docs to mention that, in order to have the terminal and deploy board, you need to label your deployments, replica sets and pods with app=$app, and each project has to have a unique namespace. @dimitrieh Do you have a proposal for better wording? There's also the concept of including this message in the empty state when we cannot find a deployment.
@filipa Since we don't have deployboard actions yet, it creates unnatural padding on the right. Can we remove it for now, until we fill that space with buttons?
@ayufan @dimitrieh The empty state of the deploy board should probably only include what needs to be set up for deploy boards to work, so technically I think that means that deployments need to be labeled with app=$CI_ENVIRONMENT_SLUG in the unique namespace specified in the Kubernetes service setting (which we know at this point, so we can echo it as well). Or do replica sets need app= as well? I'm assuming replica sets and pods need that label for the terminal to work, but not for deploy boards. The k8s service itself should list it all, of course, but be explicit about which needs which.
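To make that requirement copy-pasteable, the docs could show something along these lines (a sketch only; the namespace variable and final wording are assumptions, not the agreed text):

    # Label an existing deployment so the deploy board can find it.
    kubectl --namespace "$KUBE_NAMESPACE" label deployment "$CI_ENVIRONMENT_SLUG" \
      "app=$CI_ENVIRONMENT_SLUG" --overwrite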
I'm afraid I don't have any time to continue working on !1278 (merged) in any event. As I said, I really, absolutely have to focus on elasticsearch for the remainder of the 9.0 release.