@ogolowinski Also, I'm realizing that there may be an overlap between what was delivered in #201742 (closed) and what's already included in AutoDevOps.
On one hand, AutoDevOps uses a Build stage that creates a Docker image of the application and then pushes it to a registry. On the other hand, by including the CF-Provision-and-Deploy-EC2.gitlab-ci.yml template, we're including the push-to-S3 step, which pushes the built project to an S3 bucket.
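(For reference, including that template in a project's `.gitlab-ci.yml` looks roughly like the sketch below; the exact template path is an assumption on my part and may differ depending on where the template ships.)

```yaml
# Minimal sketch: pull in the provision + deploy-to-EC2 template.
# The path below assumes the template lives under AWS/ in the template
# library; adjust it to match your GitLab version.
include:
  - template: AWS/CF-Provision-and-Deploy-EC2.gitlab-ci.yml
```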
I see these two actions as being mutually exclusive (i.e. Docker image pushed to a registry vs. built artifact pushed to S3). But maybe I'm wrong here, please let me know what you think.
When commenting on this graph, we said that Push to S3 should be separated from the Deploy to EC2 step. Would 'Push to S3' then be part of that Build stage, which would exclude the whole set of Docker-based operations (if it's confirmed that these two actions are indeed mutually exclusive)? WDYT?
@ogolowinski Just to confirm: we want to include in AutoDevOps the provision of the AWS stack via CloudFormation (Provision stage). Is that correct?
yes
Regarding
Also, I'm realizing that there may be an overlap between what was delivered in #201742 (closed) and what's already included in AutoDevOps.
This should be done in a similar way to what we did for auto deploy to ECS - it is also using auto-build, if I am not mistaken.
we're including the push to S3 steps, which pushes the built project onto an S3 Bucket.
There are 3 things here.
For this iteration, leave this as is. I wouldn't change the template now. The idea here is to connect the template that was created last milestone to AutoDevOps, with minimal changes to fit the AutoDevOps flow.
Hypothetically, can we change the S3 target of the built project to our own repo/registry? When you say "built project", is that the entire repo or the binary artifact?
We said that Push to S3 should be separated from the Deploy to EC2 step. Would 'Push to S3' then be part of that Build stage, which would exclude the whole Docker-based operations
When you say "built project" is that the entire repo or the binary artifact?
I meant the binary artifact.
I was thinking that 'push to S3' is a Docker image that pushes anything to S3. It can be a file, an artifact, a Docker image, or anything else.
To push to S3, we use the `aws deploy push` command. According to the related documentation:
You must specify both a bucket and a key that represent the Amazon S3 bucket name and the object key name. Content will be zipped before uploading. Use the format `s3://<bucket>/<key>`.
I don't think it's possible to push Docker images to S3; ECR would be used for that. Artifacts, on the other hand, are pushed to S3.
And consequently, when running the aws deploy create-deployment command to deploy to an EC2 instance (documentation here), that uploaded artifact needs to be specified: when specifying S3 for the --revision option, we also need to define the bundleType, which can be `tar`, `tgz`, or `zip`.
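To make the two commands concrete, here is a rough sketch of what a CI job could run. The application, deployment group, bucket, and key names are placeholders, and any image with the AWS CLI would work (the cloud-deploy one is just an example):

```yaml
deploy_to_ec2:
  stage: deploy                      # illustrative stage name
  image: registry.gitlab.com/gitlab-org/cloud-deploy/aws-ec2:latest
  # Assumes AWS credentials are exposed as CI/CD variables
  # (AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, AWS_DEFAULT_REGION).
  script:
    # 1. Zip the built artifact and push it to S3.
    - |
      aws deploy push \
        --application-name my-app \
        --s3-location s3://my-artifact-bucket/my-app.zip \
        --source ./public
    # 2. Deploy that S3 revision to the EC2 instances in the deployment group.
    - |
      aws deploy create-deployment \
        --application-name my-app \
        --deployment-group-name my-deployment-group \
        --revision '{"revisionType": "S3", "s3Location": {"bucket": "my-artifact-bucket", "key": "my-app.zip", "bundleType": "zip"}}'
```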
@ogolowinski I think we need to redefine what we want to achieve here
This graph above shows the high-level process that includes Build and Deploy stages when deploying an app to EC2.
At the top of the graph, there's the simplified flow (from having a specification to deploying to EC2). Then underneath, that same flow is shown when deploying to ECS today (let's call it the Docker flow). Finally, at the bottom, we have what the "flow" could be here if we were to deploy an artifact to EC2 (let's call it the artifact flow).
This issue here is about including the CF-Provision-and-Deploy-EC2.gitlab-ci.yml template in AutoDevOps. This template leverages the aws deploy API by uploading a zipped archive to S3 and then deploying it to an EC2 instance. This flow therefore corresponds to the artifact flow.
A few comments about the artifact flow to justify what's shown in the graph:
How do we go through the Build stage for the artifact flow? Auto-build only produces Docker images, and that's the problem: we need to have a similar build process that produces artifacts relevant to the project they're produced from. And I don't know if this can be automated via AutoDevOps.
At a minimum, we need a custom Build job tailored to the project before going through all the following stages (possibly those provided by AutoDevOps), but I don't think we can have a one-size-fits-all solution for non-Dockerized applications (i.e. projects that don't hold a Dockerfile).
I know we said that users should write their custom build job in a build.yml file or something. But I think we should introduce that need for customization by sticking with the "modify your `.gitlab-ci.yml` file" strategy: creating a new build YAML file on the side requires users to reference it with an `include:local` attribute in their `.gitlab-ci.yml`, which means adding an additional step to the process.
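To illustrate the kind of per-project customization I mean, here is a rough sketch of a custom build job for a non-Dockerized project; the image, commands, and artifact paths are only examples:

```yaml
# Illustrative only: a project-specific build job for a non-Dockerized app,
# here a Ruby/Jekyll-style site. Adjust image, commands, and artifact paths.
build_artifact:
  stage: build
  image: ruby:2.7
  script:
    - bundle install
    - bundle exec jekyll build -d public
  artifacts:
    paths:
      - public
```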
@ogolowinski The sample project I've been using is a Jekyll website. It turns out that Auto-Test was not working for this type of project. So I had to override the test job.
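For reference, a minimal sketch of what such an override can look like, assuming the Auto DevOps `test` job is simply redefined in the project's `.gitlab-ci.yml` (the commands are illustrative, not the exact job I used):

```yaml
# Sketch: redefine the `test` job so Auto DevOps' default test logic is skipped.
test:
  stage: test
  image: ruby:2.7
  script:
    - bundle install
    - bundle exec jekyll build -d public   # build as a smoke test
    - bundle exec jekyll doctor            # basic site health checks
```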
I'm not sure if we can use code formatting for UI messages. If not, just skip the quotation marks and leave them without any special styling.
@ngaskill No, we can't use code formatting in UI messages. I'll leave the double quotes, then.
If so - isn't that even more hassle, as I need to provide a CloudFormation schema? If not (I have my AWS setup already done, but lack the schema) - how do I configure the deploy? Unfortunately, there's no tutorial/guide showcasing that simple use case, which is quite common for people who don't use scaling (ECS). Thanks!
You don't have to use that template if you don't want to include the provisioning part in your pipeline. You could include only the Deploy/EC2 template by creating a custom template with the following:
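(The exact snippet isn't reproduced here; the sketch below shows the idea, and the template path is an assumption that depends on your GitLab version.)

```yaml
# Sketch: reuse only the deploy-to-EC2 jobs, without the CloudFormation
# provisioning part. The template path below is indicative; check the
# template library shipped with your GitLab version.
include:
  - template: Jobs/Deploy/EC2.gitlab-ci.yml
```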
Hey @ebaque, thanks a lot for your guidance, I really appreciate it! I was able to make the CD work, but as I stumbled upon a few problems/questions, I'm posting them here as others might find them helpful. My current setup:
Because of the template, there's a limitation in the stages I'm allowed to use (I got the error: build_artifact job: chosen stage does not exist; available stages are .pre, review, production, .post). I overcame that by adding my own build stage, but isn't there a simpler way to override the build_artifact job as described above?
I put my AWS files in the repository, the same way you did. In real life that won't work for me, as I'd have different values for different targets (testing, staging, production), and it would become a nightmare to put that many files in the repo. Also, I'd like to decouple the code from the infrastructure. Is there some elegant way to achieve per-target/per-stage configuration? Can I use file-type environment variables to put the file content inside, and repeat those for each branch? Best would be if I could just set those as pipeline variables without the need for files and structure (as other providers do).
Regardless of my build process, if my artifact is not located in the public folder, I get the error /public: No such file or directory. I'm not aware of where this is hardcoded or how to overcome it. Surely I can move the files there, but it seems weird.
What's the purpose of the review stage? What's the proper way to introduce a test step?
What's the simplest way I can use the same template to just deploy to S3? Or is deploying to EC2 always coupled with it?
Is there a plan to put together some tutorials/guides about all that? Currently there's not much info, despite you supporting a simple EC2 (and probably S3) deploy. The guide says After you have completed these three templates based on your requirements, you have two ways to pass in these JSON objects, which surely means you must provide all three, which obviously is not the case.
Is your aws-ec2:latest image public? Can I know what kind of scripts the deploy procedure calls, so I know how I can tweak the steps if needed?
Thank you very much for your time and patience. I'm sorry for bothering you with such questions, but not all GitLab users are experienced DevOps engineers, and having a very simple way to deploy your code is crucial for our work.
I'm not sure why your build_artifact job has the review stage. The docs mention that the build stage should be used. Did you try that?
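Something along these lines could work (a sketch only; if the included template doesn't declare a `build` stage, you'd need to add it to `stages` yourself, as you did):

```yaml
# Sketch: declare a build stage (the included template may only define
# review/production) and attach the build_artifact job to it.
stages:
  - build
  - review
  - production

build_artifact:
  stage: build
  script:
    - make build          # illustrative; use your project's build command
  artifacts:
    paths:
      - public
```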
Can I use file-type environment variables to put the file content inside, and repeat those for each branch?
You can definitely do that, and it's what I was going to suggest: you can create a file-type environment variable, which is where your infra-related files would go. Please note that, for each environment variable, you can select a targeted environment.
Alternatively, you can try to define your main JSON object in a file-type variable for all environments, then populate it with environment variables that you would specify on a per-environment basis.
I know this topic is an ongoing one, but I just tried the following, and it worked:
You may have `public` written in your Push to S3 yml file, under `source`. More info here.
The review stage enables you to deploy review applications. This stage usually runs for branches.
As you can see here, push to S3 and deploy to EC2 have been coupled. That said, you could decouple them by creating your own template. Based on the template I just linked, here is what it could look like for a review stage only:
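(The original snippet isn't reproduced here; below is a sketch of the idea. The `gl-ec2 push-to-s3` command is an assumption based on what the cloud-deploy `aws-ec2` image provides.)

```yaml
# Sketch: a review job that only pushes the artifact to S3,
# without triggering the deployment to EC2.
push_to_s3:
  stage: review
  image: registry.gitlab.com/gitlab-org/cloud-deploy/aws-ec2:latest
  script:
    - gl-ec2 push-to-s3     # assumed helper from the aws-ec2 image
  environment:
    name: review/$CI_COMMIT_REF_NAME
  rules:
    - if: '$CI_COMMIT_BRANCH && $CI_COMMIT_BRANCH != $CI_DEFAULT_BRANCH'
```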
The guide says After you have completed these three templates based on your requirements, you have two ways to pass in these JSON objects, which surely means you must provide all three, which obviously is not the case.
That's correct, you need to provide the three JSON objects, and there are two ways of doing it, as mentioned by the guide. My apologies if this part of the documentation is confusing; maybe it should be rewritten(?)
Is your aws-ec2:latest image public?
Yes, the image can be pulled from registry.gitlab.com/gitlab-org/cloud-deploy/aws-ec2:latest. You can find the Dockerfile (and other related Dockerfiles) here.