Docker done right: from docker-compose up to kompose

Docker was born for the cloud. It is the easiest way to run different pieces of software in a tiny box: quick to install, and just as quick to wipe away in a snap.
But where Docker really shines is in the little docker-compose files, which bring Infrastructure as Code to life: a single configuration file that declares, in a simple way, how your services are connected.

After learning Docker and playing with “docker run/exec/compose/swarm” and so on, you could rightly be confused. So my suggestion is to start using Docker in a “docker-compose” way: every time you need to do something, even a simple test, fire up Visual Studio Code and write down a docker-compose file.


A good docker-compose setup takes care of three things (a minimal sketch follows the list):

  1. Allocating volumes
  2. Mapping the right ports
  3. Interconnecting services
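
For instance, a minimal sketch covering all three points (the images and paths here are just placeholders, not part of the examples below) could look like this:

version: '3'
services:
  app:
    image: nginx:1.21                    # placeholder application image
    ports:
      - "8080:80"                        # map the right ports: host 8080 -> container 80
    volumes:
      - app_data:/usr/share/nginx/html   # allocate a named volume
    depends_on:
      - db                               # interconnect: "db" is reachable by its service name
  db:
    image: postgres:13
    volumes:
      - db_data:/var/lib/postgresql/data
volumes:
  app_data:
  db_data: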

As a general rule, your docker-compose setup should be resilient to repeated docker-compose up & down cycles.

Keep in mind that docker-compose down removes containers and networks; with the -v flag it also removes the volumes declared in the file (external volumes are never touched).
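
In practice, the two variants behave differently:

# removes containers and the default network, keeps named volumes
docker-compose down

# additionally removes the volumes declared in the file
# (external volumes are left alone)
docker-compose down -v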

Example 1: JBoss Business Process Manager (jBPM)

jBPM is a complex beast with only two components: the server and the database. The server keeps its workspace in a .niogit folder, which is normally created under the working directory the server was started from.

version: '2'
volumes:
  postgres_data:
    # external: docker-compose down -v will not destroy this volume
    external:
      name: jbpm_pgsql_data  
  jbpm_git:
    external:
      name: jbpm_git
services:
  postgres:
    image: postgres:9.6
    volumes:
    - postgres_data:/var/lib/postgresql/data
    environment:
      POSTGRES_DB: jbpm
      POSTGRES_USER: jbpm
      POSTGRES_PASSWORD: jbpm
  jbpm:
    image: jboss/jbpm-server-full:7.29.0.Final
    volumes:
      - jbpm_git:/opt/jboss/wildfly/bin/.niogit
    environment:
      JBPM_DB_DRIVER: postgres
      JBPM_DB_HOST: postgres
    ports:
    - 8080:8080
    - 8001:8001
    depends_on:
    - postgres

Also, you do not want the database to be wiped out by a docker-compose down -v. So you declare it as an “external” volume, and it is up to you to create it before running docker-compose up.
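
For this stack you create the two external volumes once, by hand, and then start the services. A minimal sequence (the volume names come from the compose file above):

docker volume create jbpm_pgsql_data
docker volume create jbpm_git
docker-compose up -d

# later: tear everything down, the data survives in the external volumes
docker-compose down -v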

Example 2: Grafana + InfluxDB monitoring

In this article, the author presents a way to set up a Docker Swarm stack to monitor the nodes.

The resulting docker-compose file can be summarized like this:

version: '3.3'
services:
  telegraf:
    image: telegraf:1.3
    networks:
      - tig-net
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    configs:
      - source: telegraf-config
        target: /etc/telegraf/telegraf.conf
    deploy:
      restart_policy:
        condition: on-failure
      mode: global
      resources:
        limits:
          cpus: "0.5"          
  influxdb:
    image: influxdb:1.2
    networks:
      - tig-net
    deploy:
      restart_policy:
        condition: on-failure
      placement:
        constraints:
          - node.role == worker
    environment:
     - INFLUXDB_ADMIN_ENABLED=true
     - INFLUXDB_ADMIN_USER=admin
     - INFLUXDB_ADMIN_PASSWORD=admin
    ports:
      # Admin
      - "8083:8083"
      # HTTP API port
      - "8086:8086"
    volumes:
      - influxdb:/var/lib/influxdb
  # Grafana
  grafana:
    container_name: grafana
    image: grafana/grafana:4.3.2
    ports:
      - "3000:3000"
    networks:
      - tig-net
    deploy:
      restart_policy:
        condition: on-failure
      placement:
        constraints:
          - node.role == manager
    volumes:
      - grafana-storage:/var/lib/grafana
configs:
  telegraf-config:
    file: $PWD/config/telegraf/telegraf.conf
networks:
  tig-net:
    driver: overlay
volumes:
  influxdb:
  grafana-storage:

This file is a bit more complex because it uses placement constraints to run Grafana on the manager node and InfluxDB on the worker nodes.
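
Note that this file uses swarm-only features (deploy, configs), so it is meant for docker stack deploy rather than docker-compose up, and the telegraf.conf referenced under configs must exist at ./config/telegraf/telegraf.conf before deploying. A minimal deployment sketch (“tig” is just an arbitrary stack name):

docker swarm init                                # only if the swarm is not initialized yet
docker stack deploy -c docker-compose.yml tig    # deploy the stack on the swarm
docker stack services tig                        # check that all services are running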


Kompose

If you need to switch to a more mature setup, you can use kompose to migrate your file to the K8s world.
Docker is a good starting point because of its history. It is also fast to set up on a Windows machine, and faster to get started with than Kubernetes.
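
A minimal conversion sketch, assuming kompose and kubectl are installed (the k8s/ output directory is just an example):

kompose convert -f docker-compose.yml -o k8s/    # generate Kubernetes manifests from the compose file
kubectl apply -f k8s/                            # apply the generated deployments and services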

K8s is the way to go in production, and knowing it will be mandatory in the near future.