- We'll add compute platform credentials (DO) to the admin settings so GitLab can autoscale CI without needing to set up a runner.
- Support credentials for the other platforms supported by Docker Machine
- Reuse this functionality for dev, deploy, pages
- Add support for schedulers (Kubernetes)
Currently we require a user to:
- Have or create a VM
- Log in to the VM
- Find GitLab Runner installation docs
- Copy & paste commands
- Copy & paste URL
- Copy & paste token
It takes a long time and, in my opinion, involves too many clicks.
Maybe we could add a button:
- User clicks Launch VM, gets redirected to https://cloud.digitalocean.com/droplets/new, and has the VM creation form pre-filled.
- Add an SSH key (if they don't have one)
- Choose a hostname (if they don't like the existing one)
- Click create
- Wait around 2 minutes to start building a project.
How it would work:
- We would fill the user-data (cloud-init) with GitLab Runner installation and registration commands.
- User would have to wait some time.
- We could send an e-mail once the runner is registered so the user knows they can start using it.
However, DigitalOcean doesn't allow pre-filling user-data. They are considering it: https://twitter.com/digitalocean/status/715557008603267073.
The nice thing about this approach is that we would not require any API tokens to be stored in GitLab. Since we don't want to manage the machines, we don't need them. We just want to ease the process of launching a VM with a Runner. This would work well on GitLab.com as well as on on-premise installations.
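For concreteness, the user-data (cloud-init) mentioned above would be an ordinary install-and-register script, roughly like the sketch below. The GitLab URL, registration token, and the exact repository script path are placeholders/assumptions, not real values.

```shell
# Hypothetical user-data the launch button would pre-fill (if DO allowed it).
# URL, token, and repository script path below are placeholders.
cat > user-data.sh <<'EOF'
#!/bin/bash
# Install GitLab Runner from the package repository (exact script path may differ)
curl -L https://packages.gitlab.com/install/repositories/runner/gitlab-runner/script.deb.sh | bash
apt-get install -y gitlab-runner
# Register the runner non-interactively against the GitLab instance
gitlab-runner register \
  --non-interactive \
  --url "https://gitlab.example.com/" \
  --registration-token "PROJECT_REGISTRATION_TOKEN" \
  --executor docker \
  --docker-image alpine:latest
EOF
```

Once the VM boots with this user-data, the runner registers itself and the user only has to wait for the e-mail.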
@ayufan great idea, but it is not very automated if you can't pre-fill user data.
What about adding your Digital Ocean credentials to GitLab and using Docker machine to launch new instances?
In the future we can expand to support all cloud drivers https://docs.docker.com/machine/drivers/
This also relates to unifying the interface from GitLab to cloud providers/schedulers so dev/ci/cd can reuse the same connection #14698
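With DigitalOcean credentials stored in GitLab, launching an instance would boil down to a single docker-machine call. A dry-run sketch (the token, droplet size, and machine name are placeholders; it's printed rather than executed since it needs a real API token):

```shell
# Sketch: the call GitLab could issue via Docker Machine once DO credentials
# live in the admin settings. All values below are placeholders.
DO_TOKEN="PLACEHOLDER_DIGITALOCEAN_TOKEN"
CMD="docker-machine create \
  --driver digitalocean \
  --digitalocean-access-token $DO_TOKEN \
  --digitalocean-size 2gb \
  ci-runner-1"
echo "$CMD"
```

Swapping `--driver` (and its driver-specific flags) is what would let us support the other cloud drivers later.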
We discussed the following:
- The cloud provider credentials belong in GitLab itself
- GitLab can then use the same cloud provider for dev/ci/cd
- If we finish this quickly the Koding people can reuse it
- The autoscaling part should be part of the Omnibus package
- It would be nice if you can configure the autoscaling in the admin settings of GitLab
- It should use Docker machine so we support https://docs.docker.com/machine/drivers/
- Basically we're making our own 'poor person's' scheduler (boot and deprecate VMs, a boring solution)
- In the future we'll add support for proper schedulers like Kubernetes #14698, but this is for later; we can get Docker Machine done quickly and make lots of people happy
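The 'boot and deprecate VMs' logic from the list above can be sketched as a trivial decision step (toy numbers and hypothetical variable names, just to illustrate the idea):

```shell
# Toy sketch of the "poor person's scheduler" decision: compare pending CI
# jobs against idle machines and scale accordingly. Numbers are made up.
pending_jobs=3
idle_machines=1
if [ "$pending_jobs" -gt "$idle_machines" ]; then
  action="boot $((pending_jobs - idle_machines)) VMs"
else
  action="deprecate idle VMs"
fi
echo "$action"
```

A real implementation would also need idle timeouts and machine limits, but the core loop really is this boring.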
We're working towards people installing GitLab for all their software development lifecycle needs. GitLab will provision the needed dev/ci/staging/production capacity itself in the cloud (DO) or on-premises (vmware/openstack/Kubernetes).
@ayufan will look into splitting the Runner into an autoscaling component and the rest so we can bundle the autoscaling in Omnibus.
Title changed from "Add button to launch Runner" to "Handoff to compute platforms"
Changed the description to: We'll add compute platform credentials (DO) to the admin settings so GitLab can autoscale CI without needing to setup a runner.
Does Digital Ocean support OAuth? That would be preferable to copy/pasting API keys.
To me it's not very clear how points 2 and 4 will interact. Docker Machine uses completely different mechanisms than Kubernetes. Could you please elaborate on this?
Furthermore, I'd really like to ask you to consider adding support for autoscaling on Mesos/Marathon as well. Basically, this would mean calling the Marathon REST API endpoints, instead of issuing docker-machine commands, see the Marathon docs.
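To illustrate: instead of a docker-machine call, GitLab would POST an app definition to Marathon's `/v2/apps` endpoint. A sketch follows; the Marathon URL, GitLab URL, registration token, and resource numbers are all placeholders, and the payload shape follows the standard Marathon app schema.

```shell
# Sketch: registering a runner as a Marathon app. All values are placeholders.
cat > runner-app.json <<'EOF'
{
  "id": "/gitlab-runner",
  "cpus": 0.5,
  "mem": 512,
  "instances": 1,
  "container": {
    "type": "DOCKER",
    "docker": { "image": "gitlab/gitlab-runner:latest" }
  },
  "env": {
    "CI_SERVER_URL": "https://gitlab.example.com/",
    "REGISTRATION_TOKEN": "PROJECT_REGISTRATION_TOKEN"
  }
}
EOF
# The actual call (not executed here, needs a live Marathon):
# curl -X POST -H "Content-Type: application/json" \
#      -d @runner-app.json http://marathon.example.com:8080/v2/apps
```

Scaling up would then just mean PUTting a higher `instances` count on the same app.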
It would be great if our runners could use our own AWS resources such as service roles, network security, parallelism, and billing. Along with Kubernetes and Marathon, ECS service/task definitions would be awesome. GitLab would need AWS credentials with permissions to launch the task on the ECS scheduler and an ECS ARN. That's it. Leave the autoscale and other cluster settings up to the user.
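On ECS the equivalent would be a `run-task` call against the user's existing cluster. A dry-run sketch (the cluster name and task-definition ARN are placeholders; printed rather than executed since it needs real AWS credentials):

```shell
# Sketch: launching a runner task on a user-managed ECS cluster.
# Cluster name and task-definition ARN below are placeholders.
CMD="aws ecs run-task \
  --cluster gitlab-ci \
  --task-definition arn:aws:ecs:us-east-1:123456789012:task-definition/gitlab-runner:1 \
  --count 1"
echo "$CMD"
```

This matches the comment's point: GitLab only needs credentials and the ARN, while cluster sizing stays with the user.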
Personally, I think it's a good idea to split the functionality for scheduler support and auto-scaling. I recently created a project for having the CI Runners work on DC/OS / vanilla Mesos via Marathon.
There are two versions, one without Docker-in-Docker, and one with DinD support. You can have a look here:
Concerning auto-scaling, as far as I know there's no "general" way currently to do this for DC/OS or vanilla Mesos. Not sure about Kubernetes though.
The Kubernetes executor effectively auto-scales within a cluster, spinning up containers as needed to run jobs. Google Container Engine (GKE) has (possibly experimental) support for auto-scaling the cluster itself.
Changed title from "Handoff to compute platforms" to "Services for compute platforms and container schedulers"