Getting a free SSL certificate from Letsencrypt and configuring it on Nginx with automatic renewal

Letsencrypt finally went into public beta and I really couldn't wait to use it on my VPS (where this blog is hosted). Until a few days ago I was using a free SSL certificate from StartSSL. The service is nice and I'm grateful to them for this important resource they provide for free, but it must be said that their renewal procedure isn't the most user friendly.

For people who don't know the service yet, Letsencrypt not only issues free SSL certificates, it also provides a command line tool that you can use to request a new certificate or to renew an existing one. This means you no longer have to worry about when your certificate expires: you can set up a crontab entry and have the certificate renewed automatically for you.

Client installation

To request an SSL certificate you need to install their command line utility. Unless it has already been packaged for your distribution, for the moment it's much easier to get it from git, as they explain in their installation instructions.
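
At the time of writing those instructions boil down to something like this (cloning under /opt is just a common choice, not a requirement):

    git clone https://github.com/letsencrypt/letsencrypt /opt/letsencrypt
    cd /opt/letsencrypt
    ./letsencrypt-auto --help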

Getting the SSL certificate

There are a few different options available to request a certificate, but the easiest one is to use the --webroot option, specifying the document root of your website so that the client can place a temporary verification file there, which is then served to the remote service to validate the domain. In my case I only needed one command.
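
A sketch of it; the webroot path /var/www/andreagrandi.it is an assumption, so adjust it to the document root of your own site:

    ./letsencrypt-auto certonly --webroot \
        -w /var/www/andreagrandi.it \
        -d www.andreagrandi.it -d andreagrandi.it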

Please note that I had to specify both www.andreagrandi.it and andreagrandi.it as domains, otherwise the certificate would not have been valid for resources requested on andreagrandi.it alone.

Configuration files and certificates installation

The command above saves all the configuration under /etc/letsencrypt/ and the generated certificates under /etc/letsencrypt/live/www.andreagrandi.it/*.pem (all the *.pem files there are symbolic links to the current certificate). If you are using Nginx, the only files you need are fullchain.pem and privkey.pem, and you can reference them with two parameters in your Nginx configuration.
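
Those are the ssl_certificate and ssl_certificate_key directives, pointing at the symlinked files:

    ssl_certificate /etc/letsencrypt/live/www.andreagrandi.it/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/www.andreagrandi.it/privkey.pem;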

If you want to have a look at my full Nginx configuration file for reference, you can find it here: https://gist.github.com/andreagrandi/8b194c99cd3e77fdb5a8

Automatic renewal

The last thing to configure is a crontab rule that calls the script every 2 months. Why 2 months? Letsencrypt SSL certificates expire after 3 months. SSL certificates are usually valid for at least 1 year, but Letsencrypt chose 3 months to encourage automating the renewal. I set it to 2 months, so if anything goes wrong I still have plenty of time to renew manually. To edit the crontab for the root user, execute crontab -e and add the renewal rule.
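
A sketch of such a rule; the client path and webroot are assumptions, and the renewal flags changed between client versions, so double-check them against the current documentation. The schedule fires at midnight on the first day of every second month:

    0 0 1 */2 * /opt/letsencrypt/letsencrypt-auto certonly --renew-by-default --webroot -w /var/www/andreagrandi.it -d www.andreagrandi.it -d andreagrandi.it >> /var/log/letsencrypt-renew.log 2>&1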

One final note: you may have noticed that this website presents an SSL certificate issued by COMODO. That's because I have CloudFlare in front of my website and that's how their SSL strict option works (at least on free plans).


How to create a Docker image for PostgreSQL and persist data

Before I start, let me point out that official Docker images for PostgreSQL already exist and are available here: https://registry.hub.docker.com/_/postgres/ so this howto is meant to explain how such an image is created and to discuss some of Docker's features along the way.

I will assume that you have already installed Docker on your machine. I have tested these instructions both on Ubuntu Linux and OSX (OSX users will need to install boot2docker; its installation is not covered in this guide).

Dockerfile

To create a Docker image we need to create a text file named Dockerfile and use the available commands and syntax to declare how the image will be built. At the beginning of the file we need to specify the base image we are going to use and our contact information.
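
For example (the maintainer name and email are placeholders, replace them with your own):

    FROM ubuntu:14.04
    MAINTAINER Your Name <you@example.com>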

In our case we are using Ubuntu 14.04 as the base image. After these instructions we need to add the PostgreSQL package repository and its GnuPG public key.
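
Something along these lines should work; the key ID below is the well-known PostgreSQL APT signing key, but double-check it (and the keyserver) against the official PostgreSQL wiki:

    RUN echo "deb http://apt.postgresql.org/pub/repos/apt/ trusty-pgdg main" > /etc/apt/sources.list.d/pgdg.list
    RUN apt-key adv --keyserver keyserver.ubuntu.com --recv-keys B97B0AFCAA1A47F044F244A07FCC7D46ACCC4CF8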

Then we need to update the packages available in Ubuntu and install PostgreSQL.
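
For example, keeping everything in a single RUN instruction (see the note below):

    RUN apt-get update && apt-get install -y postgresql-9.3 postgresql-client-9.3 postgresql-contrib-9.3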

We are installing version 9.3 of PostgreSQL; the instructions would be very similar for any other version of the database.

Note: it's important to have the apt-get update and apt-get install commands in the same RUN line, otherwise they would be cached as two different layers by Docker and, when the image is rebuilt, an updated package might not be installed because the cached update layer is reused.

At this point we switch to the postgres user to execute the next commands.
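
A sketch of this step; the role name (pguser), its password (pguser, used again later when testing) and the database name (pgdb) are just example values:

    USER postgres
    RUN /etc/init.d/postgresql start && \
        psql --command "CREATE USER pguser WITH SUPERUSER PASSWORD 'pguser';" && \
        createdb -O pguser pgdb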

We switch back to the root user and complete the configuration.
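
For example, allowing remote connections with password authentication (fine for a test container, too permissive for production):

    USER root
    RUN echo "host all all 0.0.0.0/0 md5" >> /etc/postgresql/9.3/main/pg_hba.conf && \
        echo "listen_addresses='*'" >> /etc/postgresql/9.3/main/postgresql.conf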

We expose the port PostgreSQL will listen on.
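
PostgreSQL listens on port 5432 by default:

    EXPOSE 5432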

We set up the data and shared folders that we will use later.
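
For example, declaring the standard PostgreSQL directories as volumes:

    VOLUME ["/etc/postgresql", "/var/log/postgresql", "/var/lib/postgresql"]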

Finally we switch to the postgres user again and define the entry command for this image.
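
A sketch of those final instructions, running postgres in the foreground with the paths used by the Ubuntu 9.3 packages installed above:

    USER postgres
    CMD ["/usr/lib/postgresql/9.3/bin/postgres", "-D", "/var/lib/postgresql/9.3/main", "-c", "config_file=/etc/postgresql/9.3/main/postgresql.conf"]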

The full Dockerfile is available here: https://github.com/andreagrandi/postgresql-docker/blob/master/Dockerfile

Building Docker image

Once the Dockerfile is ready, we need to build the image before running it in a container. Please customize the tag name with your own docker.io hub account (otherwise you won't be able to push it to the hub).
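
For example (yourhubuser is a placeholder for your own hub username; the image name is arbitrary):

    docker build -t yourhubuser/postgresql .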

Running the PostgreSQL Docker container

To run the container once the image is built, you just need a single command.
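
For example, publishing the PostgreSQL port and giving the container a name (both choices are arbitrary):

    docker run -d --name pg_test -p 5432:5432 yourhubuser/postgresql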

Testing the running PostgreSQL

To test the running container we can use any client, even a command line one.
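
For example with psql, using the pguser role and pgdb database sketched in the Dockerfile above:

    psql -h localhost -p 5432 -U pguser -W pgdb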

When you are prompted for the password, type: pguser
Please note that localhost is only valid if you are running Docker on Ubuntu. If you are an OSX user, you need to discover the correct IP using: boot2docker ip

Persisting data

You may have noticed that once you stop the container, any data you previously wrote to the DB is lost. This is because by default Docker containers are not persistent. We can solve this problem using a data container. My only suggestion is not to set it up manually but to use a tool like fig to orchestrate it. Fig is a tool to orchestrate containers, and its features are being rewritten in Go and integrated into Docker itself, so if you prepare a fig.yml configuration file now you will hopefully be able to reuse it once this feature is integrated into Docker. Please refer to the fig website for installation instructions (briefly: under Ubuntu you can use pip install fig and under OSX you can use brew install fig).
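
A minimal fig.yml along these lines should do the job; the service names are arbitrary, the data-only container simply holds the volumes declared in the image, and the db service mounts them with volumes_from:

    dbdata:
      build: .
      command: /bin/true

    db:
      build: .
      volumes_from:
        - dbdata
      ports:
        - "5432:5432"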

Save this file as fig.yml in the same folder as the Dockerfile and spin up the containers using this command: fig up

If you write some data to the database, then stop the running containers (CTRL+C) and spin them up again, you will see that your data is still there.

Conclusion

This is just an example of how to prepare a Docker container for a specific service. The difficult part is when you have to spin up multiple services (for example a Django web application using PostgreSQL, RabbitMQ, MongoDB etc…), connect them all together and orchestrate the solution. I will maybe talk about this in one of my next posts. You can find the full source code of my PostgreSQL Docker image, including the fig.yml file, in this repository: https://github.com/andreagrandi/postgresql-docker

Moving away from Google Talk to a real Jabber/XMPP service

I've recently been concerned about the future of the Google Talk service and all the implications of the recent changes to it. What was once a nice implementation of the Jabber/XMPP protocol is now just a closed and proprietary service. The main problems with these changes are:

  • Jabber/XMPP users of other services won’t be able to talk anymore to Google Talk users
  • Google is killing some of its native clients (like the Windows one) and forcing users to the Chrome or Android/iOS versions
  • Google has disabled the option to turn off chat recording globally (you can still do it individually, for each contact)

So, what are the alternatives to Google Talk? Luckily you have at least three options.

Using an existing Jabber/XMPP service

This is surely the easiest way to get a Jabber/XMPP account. There is a list of free services available here: https://xmpp.net/directory.php. Registering a new account is usually very easy: most clients have an option that lets you register the account while you are configuring it. For example, if you are using Pidgin and you want to register an account with the DukGo service, you can configure it this way:

[Screenshot: Pidgin's Add Account dialog configured to register a new XMPP account on the DukGo service]


Using a hosted Jabber/XMPP service with your domain

HostedIM offers a very nice service: if you already have a domain, you can register on hosted.im, set up your DNS following their instructions and create accounts directly from their dashboard. You can create up to 5 accounts for free; if you need more, they offer a paid plan for that. In my case all I had to do was update my DNS with a couple of records.
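
As a rough sketch they look like the SRV entries below; replace yourdomain.com with your own domain and treat the target hostnames as placeholders, since the exact values are the ones hosted.im gives you in their instructions:

    _xmpp-client._tcp.yourdomain.com. 86400 IN SRV 5 0 5222 vh.hosted.im.
    _xmpp-server._tcp.yourdomain.com. 86400 IN SRV 5 0 5269 vh.hosted.im.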

Hosting your own Jabber/XMPP service

If you have a VPS and some sysadmin skills, why not host your own XMPP server? There are different options available, but I can suggest three in particular:

I haven't tried any of these personally, because for the moment I'm using the service offered by hosted.im. I'm curious to configure at least one of them anyway, and when I do I will publish a dedicated tutorial about it.

Conclusion

Given the recent changes Google is making to all their services, I'm more than happy when I can abandon one of them, because I personally don't like to rely on (and bind myself to) a single company, especially one that closes services whenever they want and tries to lock you inside their ecosystem.

Automatically pull updated Docker images and restart containers with docker-puller

If you use docker.io (or any similar service) to build your Docker containers, you may want your Docker host to automatically pull the new image and restart the container once the image has been built.

Docker.io gives you the possibility to set a web hook after a successful build. Basically it does a POST to a defined URL and sends some information in JSON format.

docker-puller listens for these web hooks and can be configured to run a particular script for a given hook. It's a very simple service I wrote using Python/Flask. It's also my first Flask application, so if you want to improve it, feel free to send me a pull request on GitHub.

Note: this is not the only existing service able to do this task. I took inspiration from this article http://nathanleclaire.com/blog/2014/08/17/automagical-deploys-from-docker-hub/ and I did try to customize https://github.com/cpuguy83/dockerhub-webhook-listener for my own needs, but dockerhub-webhook-listener is not ready to be used as is (you have to customize it) and I'm not yet good enough with Golang to do that quickly. This is why I rewrote the service in Python (which is my daily language). I want to thank Brian Goff for the idea and all the people in #docker @ FreeNode for the support.

How to use docker-puller

Setting up the service should be quite easy. After you clone the repository from https://github.com/glowdigitalmedia/docker-puller, there is a config.json file where you define the host, port, a token and the list of hooks you want to react to.
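
For example, something along these lines; the exact schema may differ from the current version of docker-puller, so check the example shipped in the repository. The token and hook name below match the webhook URL used later in this post:

    {
        "host": "localhost",
        "port": 8000,
        "token": "abc123",
        "hooks": {
            "hello": "scripts/hello.sh"
        }
    }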

Create a bash script (in this case it's called hello.sh), put it under the scripts folder and write the instructions to be executed to pull the new image and restart the container.
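
For example (the image and container names are placeholders):

    #!/bin/bash
    # Pull the freshly built image and recreate the container
    docker pull yourhubuser/yourimage:latest
    docker stop yourcontainer || true
    docker rm yourcontainer || true
    docker run -d --name yourcontainer yourhubuser/yourimage:latest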

Once configured, I suggest you set up an Nginx entry (instructions not covered here) that, for example, proxies yourhost.com/dockerpuller to localhost:8000 (I would also advise enabling SSL, otherwise people could sniff your token). The service can be started with "python app.py" (or you can set up a Supervisor script).

At this point docker-puller is up and running. Go to the docker.io automatic build settings and set up a webhook like this: http://yourhost.com/dockerpuller?token=abc123&hook=hello

Every time docker.io finishes building and pushing your image to the Docker registry, it will POST to that URL. docker-puller will catch the POST, check for a valid token, get the hook name and execute the corresponding script.

That’s all! I hope this very simple service can be useful to other people and once again, if you want to improve it, I will be glad to accept your pull requests on GitHub.

Create an EncFS volume compatible with BoxCryptor Classic

If you are planning to share an encrypted volume between Linux/OSX and Windows (I will assume you are sharing it on Dropbox, but you could use any similar service) and you are using EncFS under Linux/OSX and BoxCryptor under Windows, there are some specific settings to use when you create the EncFS volume. In fact, even though BoxCryptor claims to be "encfs compatible", it is not 100% compatible.

Suppose you want to create an encrypted volume located at $HOME/.TestTmpEncrypted and mounted at $HOME/TestTmp.
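
EncFS takes the encrypted root directory first and the mount point second, so the command is:

    encfs $HOME/.TestTmpEncrypted $HOME/TestTmp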

Answer "Y" when you are asked if you want to create the folders:

At this point you will need to choose between the standard settings, the pre-configured paranoia mode and the advanced configuration mode. Please choose the advanced one (x):

Manual configuration mode selected.

Select 256 as key size:

Choose 1024 as block size:

Select Stream as filename encoding:

Do NOT enable filename initialization vector chaining:

Do NOT enable per-file initialization vectors:

Do NOT enable external chained IV:

Do NOT enable random bytes to each block header:

Enable file-hole pass-through:
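
To recap, these are the expert-mode answers described above (the exact prompt wording varies between EncFS versions):

    Key size:                        256 bits
    Block size:                      1024 bytes
    Filename encoding:               Stream
    Filename IV chaining:            no
    Per-file initialization vectors: no
    External chained IV:             no
    Random bytes in block headers:   0
    File-hole pass-through:          yes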

Finally you will see:

At this point set a passphrase for your new volume:

You should be able to mount this volume using BoxCryptor.