At PetalMD we build cloud applications for healthcare professionals. We have a backend running Ruby on Rails (API / admin app) and front-end apps built with Webpack and Angular 2. All applications have unit/end-to-end tests and can be deployed as needed, typically several times a day.
The purpose of this article is to give a quick overview of what we currently have in place and to begin a series of technical posts on our CI/CD stack.
Each time a developer starts a new project, they create a branch and a corresponding Pull Request to master.
When a PR is created or a new commit is pushed to it, the CI should rerun all tests, and all previous approvals (from designers or product owners, for example) should be reset. When a build completes successfully, we should have a Docker image ready to test or deploy.
The entire test harness should run as fast as possible (unit and end-to-end tests). For our Ruby on Rails application, we have more than 17K tests and each build should take less than 20 minutes to run.
The code quality and code coverage should not be worse than master.
Each time master builds (usually after a PR is merged), the application should be deployed to production if the build succeeds. If not, a notification should be sent to a Slack channel.
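The failure notification can be a single call to a Slack incoming webhook. Here is a minimal sketch, assuming a `SLACK_WEBHOOK_URL` environment variable holds the webhook URL (`JOB_NAME` and `BUILD_URL` are standard Jenkins build variables; the function names are illustrative):

```shell
#!/bin/sh
# Build the JSON payload for a Slack "incoming webhook" notification.
slack_payload() {
  printf '{"text": "Build failed: %s (%s)"}' "$1" "$2"
}

# Post the payload to the webhook.
# SLACK_WEBHOOK_URL is assumed to be configured in the CI environment.
notify_failure() {
  slack_payload "$JOB_NAME" "$BUILD_URL" |
    curl -s -X POST -H 'Content-Type: application/json' -d @- "$SLACK_WEBHOOK_URL"
}
```

Calling `notify_failure` from the build script's failure branch is enough; Slack renders the `text` field as a plain channel message.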
We previously tried Assembla, GitLab and GitHub for version control, and we now use GitHub, mainly because of all the third-party integrations available. The second reason for our choice is the ability to fork the open-source libraries we use and send back our changes in a few clicks.
We also looked at different CI SaaS tools (Travis, CircleCI, etc.) but, for our requirements, the price was a real issue: we need many concurrent builds and too many build minutes per month. We finally built our own solution, based on the open-source project Jenkins.
Within Jenkins, we need to set up different slaves per application type and, for the end-to-end tests, set up all the services needed (MySQL, Redis, ElasticSearch, Sidekiq). For the end-to-end tests on front-end applications, we need to configure Firefox, a virtual display (xvfb), etc.
The maintenance and cleanup required each time we change versions added complexity we didn't want to deal with. This is why we use Docker and Docker Compose: we run our applications in containers linked together with a docker-compose file.
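As a rough sketch, such a docker-compose file can look like the following (service names, image versions and environment variables are illustrative, not our actual setup):

```yaml
# docker-compose.yml — illustrative test stack, not our production file
version: "2"
services:
  app:
    build: .
    command: bundle exec rspec
    depends_on: [mysql, redis, elasticsearch]
    environment:
      DATABASE_HOST: mysql
      REDIS_URL: redis://redis:6379
      ELASTICSEARCH_URL: http://elasticsearch:9200
  mysql:
    image: mysql:5.7
    environment:
      MYSQL_ALLOW_EMPTY_PASSWORD: "yes"
  redis:
    image: redis:3.2
  elasticsearch:
    image: elasticsearch:2.4
```

With this file, `docker-compose run app` brings up the whole dependency stack for a test run, and changing a service version is a one-line edit instead of reinstalling anything on the slave.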
The only requirement for a slave is now to have Docker and Docker Compose installed.
Google Compute Engine
We would like to always have slaves available to run our jobs as needed, at the lowest possible cost.
Jenkins has an integration with AWS EC2 and Digital Ocean, but [...].
In order to reduce the build time even further, we split the Ruby tests into 3 processes on a single machine that also runs MySQL, Elasticsearch and Redis. We use 10 instances with 4 cores and 8 GB of RAM each.
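The split itself can be as simple as partitioning the spec files round-robin across the processes. Here is a rough sketch of the idea (the helper names are illustrative; a real setup would usually balance buckets by recorded runtime rather than by file count):

```shell
#!/bin/sh
# Print every 3rd line of stdin, offset by $1 (0, 1 or 2).
# Used to deal spec files round-robin into 3 buckets.
pick_bucket() {
  awk -v b="$1" 'NR % 3 == b % 3'
}

# Run one of the three rspec processes over its share of the spec files.
run_bucket() {
  find spec -name '*_spec.rb' | sort | pick_bucket "$1" | xargs bundle exec rspec
}
```

A build script can then launch `run_bucket 0`, `run_bucket 1` and `run_bucket 2` in the background and `wait` for all three, so the slowest bucket bounds the total test time.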
To reduce costs further, the ability to create custom machine types lets us provision instances with less RAM and more CPU power, so we don't waste money on RAM we don't use.
Another interesting pricing fact about GCE is per-minute billing: with builds quicker than 1 hour, we don't want to pay for a full hour, only for the time the instance was up. We didn't have this option with AWS, Azure, Digital Ocean or other cloud providers.
Note that if your instance was up for less than 10 minutes, GCE charges a 10-minute minimum.
Pull Approve / Code Climate / Codecov
We chose our code-quality tools for their simplicity of use and their integration with GitHub.
Each time Jenkins runs a build, we upload our coverage data to Codecov.io.
Each time a commit is made in a PR, Code Climate analyzes our code and we reset all approvals in the Pull Approve integration.
Codecov and Pull Approve use per-private-repository bundle billing; Code Climate uses a per-account pricing model.
- How to install and configure a Jenkins master node
- How to configure our first build using Jenkins DSL
- How to scale our build on demand with GCE
- Slides from our "Build, Ship and Run" talk explaining our tools and our workflow
See you soon!