
Monday, March 14, 2016

A Docker Movie - Starring Chris Pratt

Ships Ahoy

Lately I've been mucking around with a lot of Docker.  If you are unfamiliar, Docker provides a new approach to deploying software.  Prior to Docker, you had to manually install all of the libs and dependencies on the target machine yourself.  With Docker, you instead assemble containers that together make up your application.



Think of it like Lego blocks.  You have a block for, let's say, Ruby.  And you have a Lego block for Rails, and another Lego block for your database, say Postgres.  And you put all these blocks together and you have your entire app.  So you don't think of installing servers and language runtimes; you instead think of deploying containers (Lego blocks) that all work with each other.  I would imagine that Chris Pratt could play Ruby in this particular movie.  From the Docker website:

"Docker containers wrap up a piece of software in a complete filesystem that contains everything it needs to run: code, runtime, system tools, system libraries – anything you can install on a server. This guarantees that it will always run the same, regardless of the environment it is running in."

A Docker container for Ruby is completely self contained and works by itself.  A Docker container for a Postgres database is completely self contained and works by itself.  Or you can run them both at the same time and, poof, they work together.  And a Docker container is super easy to run.  You simply pull down the image of a container from a repository (typically DockerHub), then run it.  Done.
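For example, here's roughly what that looks like for the official Postgres image (the container name and password below are just placeholders I made up):

docker pull postgres
docker run --name some-postgres -e POSTGRES_PASSWORD=mysecretpassword -d postgres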

Let's take a very simple example.  I've been working recently with a design team that was building a prototype with static html, css and javascript.  They had all their code locally and were zipping it up and sending it to me so I could open it up on my computer and view it.  But this process was slow and very manual.  No one could see the code except whoever they sent the zip to, and I always had to bother them for the latest stuff.  I wanted to be less turtle and more cat with my process.




My first thought was to stand up an sftp server somewhere that they could copy files to, which would then be served up by nginx or something.  That approach is fine, but it required me to set up an sftp server and nginx and then configure both.  It also required the design team to periodically upload the files so that I could view them.  Things still felt very manual, and I was feeling lazy.  If you guessed my ultimate solution involved Docker, you are correct :)  The instructions below assume you have Docker installed already.  I am running Ubuntu, and the instructions for setting Docker up are great (https://docs.docker.com/engine/installation/linux/ubuntulinux/).
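If you want to sanity check the install before going any further, these two commands should work (the second pulls and runs Docker's tiny hello-world image):

docker --version
docker run hello-world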

Setup

Nginx

1.  Download the nginx image from DockerHub and install it locally.
docker pull nginx
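
You can confirm the image made it down by listing your local images:

docker images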

2.  Run the image
docker run --name prototype -v /usr/local/code:/usr/share/nginx/html:ro -d -p 8081:80 --restart=unless-stopped nginx

There's a lot here, so I'm going to break down the arguments a bit:
docker run : runs a new container from the nginx image that we pulled down from DockerHub
--name prototype : this is what I named my container.  This makes it easy to start and stop the container because I can reference it using this name, 'prototype'
-v /usr/local/code:/usr/share/nginx/html:ro : this mounts a volume from the host machine (my Ubuntu AWS instance) and makes it available to nginx.  So the nginx Docker container will see /usr/share/nginx/html, but on my real Ubuntu machine it's actually /usr/local/code.  The last bit, 'ro', means the container cannot write to the volume (read only).  One last gotcha: make sure that the docker group can read /usr/local/code on the host Ubuntu machine or you will get permission denied errors.
-d : runs the container as a daemon
-p 8081:80 : port bindings.  The host Ubuntu machine will forward port 8081 to the container's port 80.
--restart=unless-stopped : if the Docker container dies for any reason, Docker will attempt to restart it, unless a person stopped it manually using `docker stop [name]`
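
Since the container is named, day-to-day management is just a matter of referencing 'prototype'.  A few standard Docker commands I find handy for it:

docker ps                # list running containers
docker logs prototype    # view nginx's output for the container
docker stop prototype    # stop it manually
docker start prototype   # start it back up

And if you do hit the permission denied gotcha mentioned above, something like `sudo chmod -R a+rX /usr/local/code` on the host will usually sort it out.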


Source Code

I created a BitBucket repo that the design team could check code into.  I went with BitBucket because they have nice free private plans.

CI Tool

I had Jenkins running already for other projects, so I just added a new project.  On checkins to the BitBucket repo I created, Jenkins would pull down the code and build it.  Since everything is static and it's just a prototype, I don't really care about code quality yet, so my build doesn't really do anything.  What's important is that the Jenkins project has an scp step that copies all of the code to /usr/local/code on my host Ubuntu server.
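The scp step itself is nothing fancy.  It's roughly something like this, where the user and hostname are placeholders for my actual server:

scp -r * deploy@my-ubuntu-host:/usr/local/code/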

Flow

So to recap:
1.  Design team checks in code
2.  Jenkins pulls down latest and scp's it to ubuntu:/usr/local/code
3.  Nginx Docker container has /usr/local/code mounted correctly and will serve everything in that dir at http://ubuntu:8081
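
A quick way to verify the whole pipe is to just curl the box (where 'ubuntu' stands in for my server's hostname, as above):

curl http://ubuntu:8081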

The major wins for me here were that downloading and setting up nginx was very easy, and the deploy process was just as easy.  And of course it happens on every checkin, so I can see the latest whenever they push.

So Much More

Docker can do so much more than what I've put here.  On my other projects, as part of my build process, I actually have Jenkins create Docker images that I push to DockerHub.  I can then pull them from anywhere and run them; be it another developer's laptop, or even production.  And on every machine, it will run identically.  And Docker just added Docker Cloud to make it even easier to deploy your Docker images to a cloud like AWS, MS Azure or DigitalOcean.  So if you haven't played with Docker yet, I really recommend doing so.  And getting started is probably going to be a lot easier than you think it would be.
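The build-and-push part of that workflow boils down to a few commands.  This is just a sketch, and 'myuser/myapp' is a made-up image name:

docker login
docker build -t myuser/myapp:latest .
docker push myuser/myapp:latest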


