Continuous Delivery with Docker and Jenkins

Docker

Docker

  • Platform for developing, shipping, and running applications in a standardized way
  • Separates applications from the underlying infrastructure by isolating them inside containers
  • Can be installed on Linux, OS X, and Windows

Images and containers

  • Docker containers are based on Docker images
  • When starting up an image, you get a running container
  • A process running inside the container is isolated from the underlying host
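The image/container relationship can be seen directly on the command line. A sketch using an image from Docker Hub (the image and container names are illustrative):

```shell
# Pull an image from Docker Hub
$ docker pull nginx:1.8

# Start a container from that image
$ docker run -d --name web nginx:1.8

# The nginx process runs isolated inside the container, not on the host
$ docker exec web ps aux
```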

Docker Hub

Dockerfile

FROM python:2.7

# Install Python packages
RUN pip install gunicorn Flask

# Add our own code to the image
ADD . /code
WORKDIR /code

# Run the application (relative to WORKDIR)
CMD ["bin/start.sh"]
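The image above can be built and started with the plain Docker CLI (the `myapi` tag is illustrative):

```shell
# Build an image from the Dockerfile in the current directory
$ docker build -t myapi .

# Start a container, mapping container port 80 to host port 8080
$ docker run -d -p 8080:80 myapi
```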

Docker compose

  • Tool for container configuration and orchestration
  • Containers are configured in a docker-compose.yml file
  • Simplifies building and running multiple containers

docker-compose.yml

myapi:
  build: .
  volumes:
    - .:/code
  ports: 
    - 8080:80
  environment:
    - DB_URL=postgresql://user:pw@database:5432/mydb
  links:
    - database
database:
  image: postgres:9.4
  environment:
    - POSTGRES_USER=user
    - POSTGRES_PASSWORD=pw

Docker compose usage


# Build and start all containers
$ docker-compose up -d

# List running containers
$ docker-compose ps
Name               Command        State   Ports
----------------------------------------------------------------
myapi_myapi_1      bin/start.sh   Up      0.0.0.0:8080->80/tcp
myapi_database_1   postgres       Up      5432/tcp

Local development

Projects

repositories
├── myapi
│   ├── docker-compose.yml
│   ├── Dockerfile
│   └── ...
└── mywebapp
    ├── docker-compose.yml
    ├── Dockerfile
    └── ...

Environment configuration

repositories
├── myapi
│   └── ...
├── mywebapp
│   └── ...
└── myenvironments
    ├── local
    │   └── docker-compose.yml
    ├── develop
    │   └── docker-compose.yml
    ├── test
    │   └── docker-compose.yml
    └── production
        └── docker-compose.yml

Local docker-compose.yml

myapi:
  build: ../../myapi
  ...
mywebapp:
  build: ../../mywebapp
  ...
database:
  image: postgres:9.4
  ...
elasticsearch:
  image: elasticsearch:2.3
  ...
nginx:
  image: nginx:1.8
  ...
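Only the local environment builds from source; the other environments run images produced by the build pipeline. A sketch of what e.g. test/docker-compose.yml might look like, assuming images are pulled from the private registry on localhost:5000 (the tags shown are illustrative):

```yaml
# test/docker-compose.yml (sketch; tags are examples of what the build job pushes)
myapi:
  image: localhost:5000/myapi:master-42
  ...
mywebapp:
  image: localhost:5000/mywebapp:master-17
  ...
database:
  image: postgres:9.4
  ...
```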

Build pipeline

Build pipeline

Setting up Jenkins and Docker registry

# docker-compose.yml

jenkins:
  image: jenkins
  links:
    - registry
  volumes:
    - /home/jenkins/:/var/jenkins_home
  ports:
    - "8080:8080"
registry:
  image: registry:2.0
  volumes:
    - /home/registry:/tmp/registry-dev
  ports:
    - "5000:5000"
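Once both containers are up, the registry can be sanity-checked from the host using the Docker Registry v2 HTTP API:

```shell
# List repositories known to the registry
$ curl http://localhost:5000/v2/_catalog

# List the tags pushed for the myapi image
$ curl http://localhost:5000/v2/myapi/tags/list
```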

Build jobs

Build, test, and push

#!/bin/bash
# Build script executed by Jenkins
set -e

# Build the image and run the test suite inside it
docker-compose build
docker-compose run myapi nosetests

# Where to push the image (the registry runs on localhost:5000)
IMAGE_LOCATION=localhost:5000/myapi:master-$BUILD_NUMBER

# Tag and push the image to our Docker registry
docker tag myapi:latest $IMAGE_LOCATION
docker push $IMAGE_LOCATION

Deployment

Deployment job

Choose environment and version

The deployment approach
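A minimal sketch of what the parameterized deployment job might run, assuming the job passes the chosen environment and image version as arguments, and that the environment's docker-compose.yml references `myapi:latest` (all names here are illustrative, not the deck's exact script):

```shell
#!/bin/bash
set -e

# Job parameters chosen in Jenkins (illustrative)
ENVIRONMENT=$1   # e.g. test
VERSION=$2       # e.g. master-42

cd myenvironments/$ENVIRONMENT

# Fetch the chosen image version from the registry
docker pull localhost:5000/myapi:$VERSION

# Point the tag the compose file uses at that version, then recreate containers
docker tag localhost:5000/myapi:$VERSION myapi:latest
docker-compose up -d
```

Recreating containers with `docker-compose up -d` is what causes the few seconds of downtime mentioned below.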

What we have achieved

  • Simple continuous delivery pipeline with little effort
  • Isolated our applications from the infrastructure
  • Can easily reproduce the entire system on any server or PC where Docker is installed
  • As the system is containerized, it will be a lot easier to scale when that time comes

Still room for improvement

  • Services are unavailable for a few seconds on deployment
  • No multi-host scaling of containers
  • No automatic failover if server crashes

But this is often "good enough" for non-critical systems, or during the initial project phase

Taking things further

  • Docker Swarm
  • Kubernetes (Google)
  • Apache Mesos
  • Cloud providers

Docker Control Panel

Questions?