
feature: Add Dockerfile from cli setup, and create production ready docker images #1139

Merged (27 commits) Sep 12, 2019

Conversation

@shahabganji (Contributor) commented Jul 17, 2019

This feature enables Aurelia developers to get a basic Dockerfile based on the configuration they have chosen during project setup.

The final step of the questionnaire asks whether you would like a Dockerfile; the default answer is currently set to true, though.

Some of the steps in the Dockerfile depend on #1143, which enables headless runs and overriding the port and host for e2e tests.

When done, this will close #1127.

Let's get started; any ideas for now?

/cc: @EisenbergEffect , @fkleuver

@avrahamcool (Contributor)

Beautiful work so far.

I think the default answer for Dockerfile support should be no.
The default CLI setup is usually used for quick initialization of projects with minimal requirements,
so I think we should minimize the "stuff" it does for you by default.

@3cp (Member) commented Sep 3, 2019

LGTM on the surface. I don't use docker. Let me know when it's ready to merge.

@shahabganji changed the title from "WIP: feature: Add Dockerfile from cli setup" to "feature: Add Dockerfile from cli setup, and create production ready docker images" on Sep 6, 2019
@shahabganji (Contributor, Author)

@3cp, @EisenbergEffect

This is ready; it could become simpler if we had an official Docker image for aurelia-cli.

I've tested it with almost all of the possible combinations.

@EisenbergEffect (Contributor) left a comment


Just a minor change to wording.

Review comment on lib/workflow/questionnaire.js (outdated, resolved)
@EisenbergEffect (Contributor)

I think we want @3cp to do a general review of the CLI integration work and @fkleuver to do a quick review of the Docker part itself. We can work towards the official Docker image as a next step as well.

`--run` is removed on master because it's not needed by the protractor task.
@fkleuver (Member) commented Sep 7, 2019

Excited to see progress in the area of Docker :)

I do see a few small things that affect layer size. There's no cleanup and a few duplicates.

For example, if you changed this:

#update apt-get
RUN apt-get update

RUN apt-get install -y \
    apt-utils \
    fonts-liberation \
    libappindicator3-1 \
    libatk-bridge2.0-0 \
    libatspi2.0-0 \
    libgtk-3-0 \
    libnspr4 \
    libnss3 \
    libx11-xcb1 \
    libxtst6 \
    lsb-release \
    xdg-utils \
    libgtk2.0-0 \
    libnotify-dev \
    libgconf-2-4 \
    libnss3 \
    libxss1 \
    libasound2 \
    xvfb

# cypress dependencies or use cypress/base images
RUN apt-get install -y \
    libgtk2.0-0 \
    libnotify-dev \
    libgconf-2-4 \
    libnss3 \
    libxss1 \
    libasound2 \
    xvfb

To this:

#update apt-get
RUN apt-get update && apt-get install -y \
    apt-utils \
    fonts-liberation \
    libappindicator3-1 \
    libatk-bridge2.0-0 \
    libatspi2.0-0 \
    libgtk-3-0 \
    libnspr4 \
    libnss3 \
    libx11-xcb1 \
    libxtst6 \
    lsb-release \
    xdg-utils \
    libgtk2.0-0 \
    libnotify-dev \
    libgconf-2-4 \
    libxss1 \
    libasound2 \
    xvfb \
  && rm -rf /var/lib/apt/lists/*

Then all the dependencies are installed (and their temporary files immediately cleaned up) in a single layer, which should significantly reduce the download size and time of the container image.

Other than that, I'm still not entirely sure of the exact use case, so I can't say much about the rest of the script stuff. If you say this works, then I'm happy to believe you.

Do we have any tests in place to verify this, though? Honestly, I'd even be fine with a simple bash script that kicks off the container, verifies its output, and spits out a non-zero exit code if it fails. As long as there is some kind of automated check that the thing works, to prevent regressions in the future, that would be perfect.
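
A minimal sketch of such a check, assuming a hypothetical app:0.1.0 image name and that the final stage serves the app with nginx on port 80, could be:

#!/usr/bin/env bash
set -e
# Build the image; the unit/e2e tests and the production build run during
# `docker build`, so any failing step makes this script exit non-zero.
docker build -t app:0.1.0 .
# Start the container and check that it actually serves the app.
docker run -d --rm --name app-check -p 8080:80 app:0.1.0
sleep 5
curl --fail --silent http://localhost:8080/ > /dev/null
docker stop app-check
echo "container check passed"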

This is useful when running in background/unattended mode rather than interactively.
There is no need for the user to bring up the application manually when running e2e tests in headless mode.
This is to test the generated Dockerfile against various Aurelia configurations; it performs several steps (see the sketch after this list):

1. Copies the Aurelia application into a Docker container
2. Runs the unit and e2e tests
3. Builds the application for production (`au build --env prod`)
4. Deploys it on nginx
5. Tags the image

If all of these steps pass for all defined configurations, the new changes had no negative effect on the Docker support.
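
As an illustration of those steps, a condensed multi-stage Dockerfile might look roughly like the following. This is only a sketch: the base images, npm script names, and output paths are assumptions, not necessarily what the CLI generates.

# build/test stage
FROM node:10 AS build
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
# run the generated unit and e2e tests (script name is an assumption)
RUN npm test
# build for production with the locally installed aurelia-cli
RUN npx au build --env prod

# runtime stage: serve the built app with nginx
FROM nginx:alpine
COPY --from=build /app/dist /usr/share/nginx/html
EXPOSE 80

The image gets its tag at build time, e.g. docker build -t app:0.1.0 .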
@shahabganji (Contributor, Author) commented Sep 8, 2019

@fkleuver

I do see a few small things that affect layer size. There's no cleanup and a few duplicates.

You are right; I did not know much about these cleanups and the required tools, I just found them through research. I've changed them to what you suggested, though.

Other than that, I'm still not entirely sure of the exact use case so I can't say much about the rest of the script stuff

Well, the main purpose is to facilitate CI/CD pipelines for Aurelia developers, based on the features they chose for their application. We have a CI/CD environment for our backend services, so we can easily deploy nightly builds with the help of a git push, and we have several images for testers and staging based on the features in each release. Besides, when everything goes well, we can deploy to production with the ease of a click. I thought the same scenario could apply to front-end services as well, so I created a Dockerfile for my front-end app at work and set up the same pipeline with the help of our DevOps guru. Then I realized that if aurelia-cli could give me such a file out of the box, based on the features I have selected, it could be useful. Please correct me if I am wrong 🙂 🙏


Do we have any tests in place to verify this though? Honestly, I'd even be fine with a simple bash script that kicks off the container and verifies its output and spits out a non-zero exit code if it fails. Just so long as there is some kind of automated check that the thing works, to prevent regressions in the future, that would be perfect

I am far from disagreeing with you. Since this Dockerfile is generated when one runs the au new command, testing it means creating an Aurelia application inside CircleCI and running the docker:build script, which is actually docker build -t app:0.1.0 ., against the created application. If it passes all the steps, then the new changes had no negative effect on this feature. To hit the ground running, I have added three application creations with different configurations; check whether this kind of test suffices and let me know your opinion. I've added a test_docker step to the CircleCI config (a rough sketch of the flow follows).
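
In outline, the flow of that step is something like the following sketch; the app name, feature flags, and image tag here are illustrative, and the real job definition lives in the PR's CircleCI config:

# scaffold a real application with the docker feature selected
au new first_sample -u -s htmlmin-max,sass,postcss-typical,karma,cypress,docker
cd first_sample
# build the generated Dockerfile; the tests and the production build run inside the image build
npm run docker:build   # effectively: docker build -t first_sample:0.1.0 .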


@3cp we need the --run flag of the e2e:headless script in the protractor skeleton, so the application is brought up automatically inside containers. 🙏 Please don't remove it; or, if you think it would be better not to have that flag, we should remove it from the cypress skeleton as well, to keep the behavior of the different configs consistent.

@fkleuver (Member) commented Sep 8, 2019

if aurelia-cli could help me to have such file out of the box based on the features I have selected it could be useful

Yep, I get it now (I think). The presence of the Dockerfile had me confused a bit; I thought you wanted to publish Docker images to a registry. Now I understand this is just a template for scaffolding.

I have added three application creations with different configurations. Check whether this kind of test suffices and let me know your opinion. I've added a test_docker step for CircleCI.

In terms of quantity / coverage this should be plenty - the main point here is to make sure we're generating working Dockerfiles. As long as it works OOTB, users can always tweak details of installed tools and configs.

Can we somehow verify though that the script inside the docker completed successfully? I'm not 100% sure but it seems to me we're now only verifying that aurelia-cli successfully ran the docker command, but if the container itself failed somehow then we wouldn't know, I think.

@shahabganji (Contributor, Author) commented Sep 8, 2019

@fkleuver

In terms of quantity/coverage this should be plenty - the main point here is to make sure we're generating working Dockerfiles. As long as it works OOTB, users can always tweak details of installed tools and configs.

Having enough of those generated applications and running the Dockerfile against them gives us some confidence that we are creating a robust aurelia-cli, and it covers concerns like this to some extent. Whenever a change is pushed, this test_docker step runs and creates a real Aurelia application based on the latest aurelia-cli, then runs the generated unit and e2e tests of that generated application (not of aurelia-cli itself; the aurelia-cli tests have already run earlier in the pipeline). So I think it would be beneficial to have 5 or 6 different configurations; on my local machine I had to test with 8 to 10 configs to make sure the Dockerfile works in almost all situations, and that is still not 100% coverage.


Can we somehow verify though that the script inside the docker completed successfully? I'm not 100% sure but it seems to me we're now only verifying that aurelia-cli successfully ran the docker command, but if the container itself failed somehow then we wouldn't know, I think.

Honestly, aurelia-cli does not run Docker; it is a pure docker command behind the scenes. I just asked npm to run docker build -t first_sample:0.1.0 ., so if any of the commands inside the Docker build fails, such as the unit tests with Karma or the e2e tests with Cypress, the whole command returns a non-zero status code, causing test_docker to report failure. I'm not sure whether you can see this link; if so, you will see that I had a failed run before. If you want to make sure, I could change something to deliberately fail the pipeline.


I thought you wanted to publish docker images to the registry.

That would be the purpose of this PR; it could simplify the Dockerfile in the current PR.

We could, for instance, replace these lines with the following:

FROM bluspire/aurelia:1.1.0
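
With such a base image, the long dependency-installation block would no longer be needed. A rough sketch, assuming the bluspire/aurelia image already ships Node, aurelia-cli, and the browser/test dependencies:

# replaces the node base image plus the long apt-get install block
FROM bluspire/aurelia:1.1.0
WORKDIR /app
COPY . .
RUN npm install && npm test && au build --env prod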

@shahabganji (Contributor, Author)

@fkleuver

I used these configurations to test that the Dockerfile works properly:

      # four scaffolds using Webpack with various configurations

au new first -u -s htmlmin-max,sass,postcss-typical,karma,cypress,docker

au new second -u -s http2,typescript,htmlmin-min,less,postcss-typical,jest,protractor,scaffold-navigation,docker

au new third -u -s dotnet-core,typescript,htmlmin-max,stylus,postcss-basic,jest,protractor,docker

au new forth -u -s http2,dotnet-core,htmlmin-min,sass,postcss-typical,karma,cypress,scaffold-navigation,docker

      # four scaffolds using Alameda with various configurations

au new fifth -u -s cli-bundler,alameda,htmlmin-max,sass,postcss-typical,karma,cypress,docker

au new sixth -u -s cli-bundler,alameda,http2,typescript,htmlmin-min,less,postcss-typical,jest,protractor,scaffold-navigation,docker

au new seventh -u -s cli-bundler,alameda,dotnet-core,typescript,htmlmin-max,stylus,postcss-basic,jest,protractor,docker

au new eighth -u -s cli-bundler,alameda,http2,dotnet-core,htmlmin-min,sass,postcss-typical,karma,cypress,scaffold-navigation,docker

I am not sure how long it takes CircleCI to run all of them; I pushed these to see what would happen. If you think this is too much, we could remove the less common ones.

@fkleuver (Member) commented Sep 8, 2019

So I think it would be beneficial to have 5 or 6 different configurations, on my local machine I had to test it with 8 to 10 configs to make sure the Dockerfile is working in almost all situations, still not 100% coverage.

...

I am not sure how long it takes for the CircleCI to run all of them, I push these to see what would happen. If you think this is too much of an effort we could remove those which are less common.

We follow a rule of thumb that CI should complete within 30 minutes on normal commits, but there is no time limit (though in practice a few hours at most) for whatever precedes a release (e.g. a merge to master).

Looks like your latest job took about 18 minutes so that seems fine to me.

Covering all combinations is the responsibility of the CLI integration tests; I believe these take a few hours to complete. The Docker tests primarily need to make sure that all options (individual features) work, because what they are really verifying is whether the Docker container has the correct dependencies installed and the configuration needed to support aurelia-cli's features.

if any of the commands inside the docker container fails, such as unit tests with Karma, or e2e test with cypress, the whole command will return a non-zero status code causing the test_docker to report failure. I'm not sure you could see this link.

I didn't see that, thanks for the link. Then that's good. Just wanted to make sure we don't get false positives :)

@fkleuver (Member) commented Sep 8, 2019

I see one failure; any idea what that might be? How would we investigate such issues?

@shahabganji (Contributor, Author)

Covering all combinations is a responsibility of the cli integration tests.

I couldn't agree more.

I didn't see that, thanks for the link. Then that's good. Just wanted to make sure we don't get false positives :)

I am not sure whether it was due to permissions or something else, because I faced a situation where CircleCI clears the logs after a few hours and you need to re-run the flow.

[screenshot: Screen Shot 2019-09-09 at 10 58 30]

I ran the flow again and think that the failure is due to the fact that, in the sixth configuration, the browser takes longer than expected to start when running the e2e tests inside the container; this causes some of the tests to fail, and as a result the docker command and test_docker fail.

[screenshot: Screen Shot 2019-09-09 at 11 11 50]

[screenshot: Screen Shot 2019-09-09 at 14 02 54]

[screenshot: Screen Shot 2019-09-09 at 14 02 09]

These are the results from several runs of that configuration; however, it also has successful runs, and to make sure it is not a false positive I ran it on my local docker-machine as well:

[screenshot: Screen Shot 2019-09-09 at 11 29 13]

[screenshot: Screen Shot 2019-09-09 at 12 12 03]

[screenshot: Screen Shot 2019-09-09 at 12 12 17]

How would we investigate such issues?

As long as CircleCI keeps the logs, the messages are there, just like when we run normal Aurelia applications.

For now, I will leave out the sixth configuration and push again. By the way, I think the behavior of the CI system could also affect this; for instance, I tested this on TravisCI as well, and it also reported exit code 1, but not immediately: it ran till the end, while CircleCI stops the whole process as soon as it gets the first non-zero exit code.

The sixth configuration used the following features: cli-bundler,alameda,http2,typescript,htmlmin-min,less,postcss-typical,jest,protractor,scaffold-navigation,docker
@fkleuver (Member) left a comment


LGTM, thanks @shahabganji!

@fkleuver (Member)

Regarding that sixth configuration: ideally I would like to have it included and passing consistently as well, but I understand that this may be a false negative or something specific to the e2e setup.

The tests prove that in principle the generated Dockerfile works in all core scenarios and that there may or may not be something flaky going on with one particular setup.

So what I mean to say is: I think this is good enough to merge as an initial version of Docker support, but we can keep that 6th configuration (getting it to work) around as a todo for future improvements when you have time.
Of course, if you can sort it out right away, that would be perfect, but it doesn't seem like something worth blocking this PR for. I think you've already done a great job in providing pretty good tests for something that is relatively hard to test in the first place :)

@shahabganji changed the title from "feature: Add Dockerfile from cli setup, and create production ready docker images" to "WIP: feature: Add Dockerfile from cli setup, and create production ready docker images" on Sep 10, 2019
@shahabganji changed the title from "WIP: feature: Add Dockerfile from cli setup, and create production ready docker images" to "feature: Add Dockerfile from cli setup, and create production ready docker images" on Sep 12, 2019
@shahabganji (Contributor, Author)

I removed the WIP flag and made the tests on CircleCI run in parallel. That brings advantages not only in time: each individual test now installs its dependencies, such as Chrome and the Linux packages, separately, making the runs more isolated. As for the 6th configuration, which is failing on CircleCI, I tested it both locally (Mac OSX) and on AppVeyor, and it passed the tests; I have no clue 👀. Here are screenshots from AppVeyor:

The part that causes CircleCI to fail:

[screenshot: Screen Shot 2019-09-12 at 10 17 46]

And here is the success message:

[screenshot: Screen Shot 2019-09-12 at 10 18 08]

The reason I did not move all configurations to AppVeyor is that they take longer to run there and hit AppVeyor's time limit.

I pushed the appveyor.yml file, so if we want it to run for aurelia-cli we need to install AppVeyor on this repository and also send a request to the AppVeyor support team to enable the Windows Server 2019 image for this organization/user, as discussed here.

If I missed anything, please let me know. 🙂

@3cp merged commit 0b8af67 into aurelia:master on Sep 12, 2019
@3cp (Member) commented Sep 12, 2019

If this causes more CI issues, we can turn them off in CI :-)

@shahabganji deleted the feature/docker branch on September 12, 2019 11:40