At Moltin, we’re constantly iterating over our local development environments, and how we use these to ship changes faster.
If you don’t already know, nearly all of our production code is containerized with Docker, which overwhelmingly gives us freedom with how we work and build software.
For some time now, mimicking our production setup locally has been a very static affair, using rigid Docker-compose files and configs. It’s worked well for us so far, but as our list of individual services grows, managing those configurations becomes unwieldy and time-consuming. It also limits the ability for our engineers to test new patterns and structures locally without having to destroy and rebuild huge service stacks.
At the moment, our development pace is quick, and we often find that our internal development practices have around a six to ten month period before the cracks start to appear. Configs grow too big to manage, edge cases become full-blown problems, and flexibility turns to rigidity.
When we first began with Docker over a year ago, our API consisted of a single monolithic application.
We would spin up a VirtualBox VM with the Docker tooling installed, which meant we could run local copies of the service using simple docker-compose files.
```yaml
api:
  hostname: api.development
  environment:
    - ...
  build: ../dockerfiles/api/.
  restart: always
  links:
    - ...
  ports:
    - ...
  volumes:
    - ...

apiLb:
  environment:
    - ...
  image: ...
  restart: always
  links:
    - api
  ports:
    - ...
```
Then as our Dockerized ecosystem expanded we would add extra services to the main compose file and link them together.
We maintained a single repository holding all of our development configurations so it was quick to go from a new laptop/machine to local development in very few steps. When a new service or application was being built, Dockerized or maintained, we would add it to the development repository and everyone would instantly have access to a local copy of it.
```yaml
api:
  ...

apiLb:
  ...

forge:
  hostname: forge.development
  build: ../dockerfiles/forge/.
  restart: always
  links:
    - ...
  ports:
    - ...
  volumes:
    - ...

forgeLb:
  environment:
    - ...
  image: ...
  restart: always
  links:
    - forge
  ports:
    - ...
```
Eventually, this compose file would include the store management dashboard, accounts, Kibana, Elasticsearch, Redis, tests, and more, on top of all the local databases needed to run them and so on and so on, ever expanding.
This system was pretty robust, but as we began to split our monolithic API into its constituent parts, the ever-growing list of extra applications and services made this method quite difficult to maintain. It also meant some members of the team were removing services they didn’t need running all the time, only to find them restarting again whenever they pulled changes from the main development repository.
So, as the team expanded and our systems became more complicated, it became pretty clear we needed to move to a better solution for local development, and this is where we decided on Rancher.
There were several things to consider for new local development practices:
- Allow creativity and make it easy to try out new things without impacting running applications.
- Go from blank machine to local development quickly.
- Easy for non-technical team members to try services locally.
- Easy to enable and disable services.
- Simple to manage as a team.
We’d been using Tutum for some time to manage our container stacks and their inter-connectivity, so it wasn’t a major leap to progress from there to Rancher.
Running Rancher locally brought some really interesting abilities:
- Switching between Orchestration drivers (Kubernetes, Swarm, Cattle) quickly and easily for testing.
- We get a visual representation of containers running on our local systems and how they link together.
- We are able to check how our apps perform in real-time while developing them.
- We can test real-world scenarios against real-world servers from our local machines quickly using Rancher’s Docker-machine drivers.
- We could add certificates and registries for team-wide use.
- We can create local networks quickly and get visual feedback.
- We can translate local changes to live systems almost identically.
Let’s dig into how we run Rancher locally a little more.
We moved from Vagrant to Docker Machine for spinning up the Rancher systems.
We quickly decided on a pattern of implementation where the Rancher Server acted as an orchestrator and all services were run on additional machines registered as hosts.
So we ended up with the following:
- 1 x Rancher Server in a specific virtual machine
- Multiple Rancher Hosts in different virtual machines
This means you can add or destroy multiple hosts without affecting the main server, create throwaway hosts for your own tests, or completely scrap and rebuild hosts quickly.
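Assuming VirtualBox as the local Docker Machine driver, that layout can be sketched in a handful of commands (the machine names here are illustrative):

```
$ docker-machine create -d virtualbox rancher-server
$ eval $(docker-machine env rancher-server)
$ docker run -d --restart=always -p 8080:8080 rancher/server

$ docker-machine create -d virtualbox rancher-host-1
$ docker-machine create -d virtualbox rancher-host-2
```

Each host VM is then registered against the server by running the rancher/agent container, using the registration command generated in the Rancher control panel.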
You can also use Rancher’s labeling and scheduling to tag specific hosts so they can be reserved for specific purposes if needed.
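As an illustration, a service can be pinned to hosts carrying a given label using Cattle’s scheduler labels in a compose file. The label value (purpose=databases) is just an example of ours, not a Rancher convention:

```yaml
mysql:
  image: mysql:5.7
  labels:
    # Only schedule this container onto hosts tagged with this label
    io.rancher.scheduler.affinity:host_label: purpose=databases
```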
Go from blank machine to local development quickly
We retained storing our main development configurations in a single repository. This meant all team members had a single point for what code and config would get their local development applications running.
It also simplified the task of doing that:
```
$ git clone https://github.com/moltin/DEVELOPMENT_REPO
$ cd DEVELOPMENT_REPO
$ ./up.sh
```
Three commands, not beyond even the least technical members of our team.
The task of maintaining setup configurations is also simplified. One team member can commit a change to a single place and offer updates for the entire team.
Running up.sh would launch the virtual machines and provision the services defined in the configuration.
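Once the virtual machines exist, the provisioning half of that script could be as simple as pointing rancher-compose at the environment (the API keys here are placeholders you would generate in the Rancher control panel):

```
$ rancher-compose --url http://$(docker-machine ip rancher-server):8080 \
    --access-key "$RANCHER_ACCESS_KEY" \
    --secret-key "$RANCHER_SECRET_KEY" \
    up -d
```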
Easy for non-technical team members to try services locally
During the move to Rancher, we spent some time testing their catalog system. Catalogs are a way of packaging up Dockerized applications so they can be run on hosts via the Rancher control panel.
If you haven’t seen Rancher catalogs before, you can check them out here.
By cataloging our services, non-technical team members can choose the service from a drop-down menu and have a local application up and running in seconds. Catalogs also offer configuration via forms, so we use the catalog questions heavily in order to give more technical members control over how services are linked together and how they will run.
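Those catalog questions live in the template’s rancher-compose.yml. A hypothetical entry for one of our services might look like this (the variable names and defaults are illustrative, not our actual catalog):

```yaml
.catalog:
  name: forge
  version: "1"
  questions:
    - variable: FORGE_PORT
      label: Port to expose the service on
      type: int
      default: 8080
    - variable: LINK_TO_API
      label: Link this service to the local API?
      type: boolean
      default: true
```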
Easy to enable and disable services
We have our own internal tools that amalgamate our Dockerized repositories, letting us specify on the command line which services we want up and running on Rancher and how they link to each other.
Each repository for a Dockerized application we use contains a simple YML configuration that explains to our systems how this container should run.
We created a small application, run via Docker Machine, that accomplishes this by reading those configurations and booting up the interlinked services.
So booting up a full stack is a simple case of configuring the YML file and running the command.
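As a sketch, a per-repository configuration could be as simple as the following (this schema is hypothetical, not our actual internal format):

```yaml
service:
  name: forge
  image: moltin/forge
  ports:
    - 8080
  depends_on:
    - api
    - redis
```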
If you’re not technical, you can opt for using catalogs, or start systems via the Rancher control panel.
There’s a great deal of flexibility in being able to run containers ad hoc via the command line, use compose files outside of Rancher, and interact with everything through a GUI.
Everything is a container
Rancher takes the purist approach to containers with RancherOS, which is what their Rancher Server runs on.
Everything is a container: basically, every service running on the host and the server should be a container in its own right. In the past, some of our containers have been an aggregate of multiple services, all run by a supervisor in the foreground.
By following the RancherOS approach we’re instilling the single-service container behavior into our engineers from the outset, which forces the entire team to think in a specific manner.
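In compose terms, the shift looks like going from one supervisor-managed container to one container per process. A simplified illustration (the image names are made up for the example):

```yaml
# Before: a single container running nginx, the app, and a worker
# under supervisord. After: each process gets its own service.
app:
  image: moltin/app

web:
  image: nginx
  links:
    - app

worker:
  image: moltin/app
  command: run-worker
```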
There are some downsides to using Rancher:
- Catalogs are strict in convention when it comes to providing upgrades for services. So strict that they won’t accept folder structures outside of the single incrementing version numbers.
- Catalog upgrades can also duplicate load balancers when you make changes to them.
- And sadly, RancherOS doesn’t always work properly when syncing folders between the host and the virtual machine. So there’s still some work to do to make this a seamless process, and as we mentioned at the beginning of this post, it will be an evolution over time.
But for now, we have a great idea of how Rancher will power local development for the team over the next few months. We’ll be utilizing a development approach that mirrors our live systems, gives our engineers the flexibility to try new scenarios quicker, increases the pace at which we can develop, and gives us the freedom to experiment with new setups and systems, all to benefit our wonderful users and the great things they build.