Goenv – Go Environment Manager

To briefly explain what Goenv is, I will assume you have previously worked with Python: basically, it's what Virtualenv is for Python. Goenv (and its wrapper goof) creates a folder for a new project and sets the $GOPATH environment variable to that folder's path. From that point on, every time you run go get, the libraries are installed under that specific $GOPATH.

It's very important to use a separate $GOPATH for each project, because this allows us to use different library versions in each project and avoid version conflicts.
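To make the idea concrete, this is roughly what goenv/goof do for you behind the scenes (the project path here is just an example):

```shell
# give the project its own GOPATH (the path is an example)
export GOPATH="$HOME/projects/myproject"
mkdir -p "$GOPATH"

# from now on, `go get` installs libraries under this GOPATH only,
# e.g. github.com/lib/pq would land in $GOPATH/src/github.com/lib/pq
echo "$GOPATH"
```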

Installation

Goenv is now installed; next we install its wrapper goof:

Edit .bashrc (or .zshrc if you use zsh) and append these lines:

How to use it

To create a new Go environment, use make:

To exit the Go environment, use deactivate:

To use an existing environment, use workon:

To show the available environments, use show:
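Putting the commands above together, a typical session looks roughly like this (I'm assuming here that the subcommands are invoked through the goof wrapper, and the project name is an example; check goof's own help for the exact syntax):

```shell
$ goof make myproject       # create a new environment and activate it
$ go get github.com/lib/pq  # installed under this project's GOPATH
$ deactivate                # leave the environment
$ goof show                 # list the available environments
$ goof workon myproject     # switch back to an existing environment
```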

Goenv itself is not enough to manage Go packages. It would be like using Virtualenv alone, without pip and a requirements file. In a future post I will explain how to use Godep.


Go: defining methods on struct types

In Go it's possible to define methods on struct types. The syntax can be a bit strange for people who are used to defining classes and methods in Java, C#, etc., but once you learn it, it's quite easy to use.

In my case, for example, I needed something that could contain a Timer object and a string, plus a method that could start the timer and call a function at the end of the Timer execution. I implemented it in this way:

The key point is the line "func (timer DeviceTimer) startTimer() { … }", where I define a method called startTimer and specify timer DeviceTimer inside the func definition. This basically "extends" the struct DeviceTimer, adding that method to it, which means I can create a DeviceTimer value and simply call timer.startTimer() on it.

This is all you need to do. If you want to read more about this subject, I suggest reading these two articles:

Note: I’m not a Go expert and these are just my personal notes I’m taking during my learning experience. I’m very keen to share my notes with everyone, but please don’t take them as notes from an expert Go developer.

How to create a Docker image for PostgreSQL and persist data

Before I start, let me point out that official Docker images for PostgreSQL already exist and are available here: https://registry.hub.docker.com/_/postgres/. This how-to is meant to explain how such an image is built and to illustrate some Docker features along the way.

I will assume that you have already installed Docker on your machine. I have tested these instructions on both Ubuntu Linux and OSX (OSX users will need to install boot2docker; its installation is not covered in this guide).

Dockerfile

To create a Docker image we need to create a text file named Dockerfile and use the available commands and syntax to declare how the image will be built. At the beginning of the file we specify the base image we are going to use and our contact information:
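Something along these lines (the maintainer line is obviously a placeholder):

```dockerfile
FROM ubuntu:14.04
MAINTAINER Your Name <you@example.com>
```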

In our case we are using Ubuntu 14.04 as the base image. After these instructions we need to add the PostgreSQL package repository and its GnuPG public key:
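For example (the key ID and repository line below follow the standard PGDG instructions for Ubuntu 14.04 "trusty"; double-check them against the current PostgreSQL documentation):

```dockerfile
RUN apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv-keys B97B0AFCAA1A47F044F244A07FCC7D46ACCC4CF8
RUN echo "deb http://apt.postgresql.org/pub/repos/apt/ trusty-pgdg main" > /etc/apt/sources.list.d/pgdg.list
```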

then we need to update the packages available in Ubuntu and install PostgreSQL:
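A sketch of that step (package names as provided by the PGDG repository):

```dockerfile
RUN apt-get update && apt-get install -y \
    python-software-properties \
    software-properties-common \
    postgresql-9.3 \
    postgresql-client-9.3 \
    postgresql-contrib-9.3
```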

We are installing version 9.3 of PostgreSQL, instructions would be very similar for any other version of the database.

Note: it's important to have the apt-get update and apt-get install commands in the same RUN line; otherwise Docker treats them as two separate layers, and when an updated package becomes available it would not be installed on an image rebuild, because the cached update layer would be reused.

At this point we switch to the postgres user to execute the next commands:
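For instance, creating a superuser and a database (the pguser password matches the one used later in this post; the database name pgdb is my own choice):

```dockerfile
USER postgres
RUN /etc/init.d/postgresql start && \
    psql --command "CREATE USER pguser WITH SUPERUSER PASSWORD 'pguser';" && \
    createdb -O pguser pgdb
```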

We switch back to the root user and complete the configuration:
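For example, allowing remote connections (paths refer to PostgreSQL 9.3 on Ubuntu):

```dockerfile
USER root
RUN echo "host all all 0.0.0.0/0 md5" >> /etc/postgresql/9.3/main/pg_hba.conf
RUN echo "listen_addresses='*'" >> /etc/postgresql/9.3/main/postgresql.conf
```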

We expose the port PostgreSQL will listen on:
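PostgreSQL's default port:

```dockerfile
EXPOSE 5432
```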

We set up the data and shared folders that we will use later:
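Roughly like this (the standard PostgreSQL directories on Ubuntu):

```dockerfile
RUN mkdir -p /var/run/postgresql && chown -R postgres /var/run/postgresql
VOLUME ["/etc/postgresql", "/var/log/postgresql", "/var/lib/postgresql"]
```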

Finally we switch again to the postgres user and define the entry command for this image:
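Along these lines (paths refer to PostgreSQL 9.3 on Ubuntu):

```dockerfile
USER postgres
CMD ["/usr/lib/postgresql/9.3/bin/postgres", "-D", "/var/lib/postgresql/9.3/main", "-c", "config_file=/etc/postgresql/9.3/main/postgresql.conf"]
```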

The full Dockerfile is available here: https://github.com/andreagrandi/postgresql-docker/blob/master/Dockerfile

Building Docker image

Once the Dockerfile is ready, we need to build the image before running it in a container. Please customize the tag name with your own docker.io hub account (or you won't be able to push the image to the hub):
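Something like this, run from the folder containing the Dockerfile (youraccount is a placeholder):

```shell
docker build -t youraccount/postgresql .
```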

Running the PostgreSQL Docker container

To run the container, once the image is built, you just need to use this command:
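A possible invocation, mapping the PostgreSQL port to the host (again, youraccount is a placeholder):

```shell
docker run -i -t -p 5432:5432 youraccount/postgresql
```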

Testing the running PostgreSQL

To test the running container we can use any client, even the commandline one:
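For example, with the standard psql client (adjust the user and database names to whatever you configured in the image; pgdb is my example database name):

```shell
psql -h localhost -p 5432 -U pguser -W pgdb
```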

When you are prompted for the password, type: pguser
Please note that localhost is only valid if you are running Docker on Ubuntu. If you are an OSX user, you need to discover the correct IP address using: boot2docker ip

Persisting data

You may have noticed that once you stop the container, any data you previously wrote to the DB is lost. This is because Docker containers are not persistent by default. We can solve this problem using a data container. My suggestion is not to do this manually, but to use a tool like fig to orchestrate it. Fig is a container orchestration tool whose features are being rewritten in Go and integrated into Docker itself, so if you prepare a fig.yml configuration file now you will hopefully be able to reuse it once this feature is integrated into Docker. Please refer to the fig website for installation instructions (briefly: on Ubuntu you can use pip install fig, and on OSX brew install fig).
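A fig.yml for this setup could look roughly like the following (the service names and the busybox data container are my choices; the volume path matches PostgreSQL's data directory):

```yaml
db:
  build: .
  ports:
    - "5432:5432"
  volumes_from:
    - db_data

db_data:
  image: busybox
  volumes:
    - /var/lib/postgresql
```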

Save this file as fig.yml in the same folder as the Dockerfile and spin up the containers using this command: fig up

If you write some data to the database, stop the running containers (CTRL+C) and spin them up again, you will see that your data is still there.

Conclusion

This is just an example of how to prepare a Docker container for a specific service. The difficult part is when you have to spin up multiple services (for example a Django web application using PostgreSQL, RabbitMQ, MongoDB, etc.), connect them all together and orchestrate the solution. I may talk about this in one of my next posts. You can find the full source code of my PostgreSQL Docker image, including the fig.yml file, in this repository: https://github.com/andreagrandi/postgresql-docker

Moving away from Google Talk to a real Jabber/XMPP service

I've recently been concerned about the future of the Google Talk service and all the implications of the recent changes to it. What was once a nice implementation of the Jabber/XMPP protocol is now just a closed and proprietary service. The main problems with these changes are:

  • Jabber/XMPP users of other services won’t be able to talk anymore to Google Talk users
  • Google is killing some of their native clients (like the Windows one) and forcing users to Chrome or Android/iOS versions
  • Google has disabled the possibility to turn off chat recording (you can still do it individually, for each contact)

So, what are the alternatives to Google Talk? Luckily, you have at least three options.

Using an existing Jabber/XMPP service

This is surely the easiest way to get a Jabber/XMPP account. There is a list of free services available here: https://xmpp.net/directory.php. Registering a new account is usually very easy: most clients have an option that lets you register the account while you are configuring it. For example, if you are using Pidgin and you want to register an account with the DukGo service, you can configure it in this way:

[Screenshot: adding the account in Pidgin]

Using a hosted Jabber/XMPP service with your own domain

A service called HostedIM offers a very nice option: if you already have a domain, you can register an account on hosted.im, set up your DNS following their instructions and create accounts directly from their dashboard. You can create up to 5 accounts for free; if you need more, they offer a paid plan. In my case, all I had to do was update my DNS with the following configuration:
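The records looked roughly like the following (illustrative values with example.com as the domain; use the exact hostnames and ports that hosted.im's instructions give you):

```
_xmpp-client._tcp.example.com. 3600 IN SRV 5 0 5222 chat.hosted.im.
_xmpp-server._tcp.example.com. 3600 IN SRV 5 0 5269 chat.hosted.im.
```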

Hosting your own Jabber/XMPP service

If you have a VPS and some sysadmin skills, why not host your own XMPP server? There are different options available, but I can suggest three in particular:

I haven't tried any of these personally, because for the moment I'm using the service offered by hosted.im. I'm curious anyway to configure at least one of them, and when I do I will publish a dedicated tutorial about it.

Conclusion

Given the recent changes Google is making to all their services, I'm more than happy whenever I can abandon one of them: I personally don't like to rely on (and bind myself to) a single company, especially one that closes services whenever it wants and tries to lock you inside its ecosystem.

Automatically pull updated Docker images and restart containers with docker-puller

If you use docker.io (or any similar service) to build your Docker containers, you may want your Docker host to automatically pull the new image and restart the container once the image has been built.

Docker.io gives you the possibility to set a web hook after a successful build: basically, it does a POST to a defined URL, sending some information in JSON format.

docker-puller listens to these web hooks and can be configured to run a particular script, given a specific hook. It’s a very simple service I wrote using Python/Flask. It’s also my first Flask application, so if you want to improve it, feel free to send me a pull request on GitHub.

Note: this is not the only existing service that is able to do this task. I took inspiration from this article http://nathanleclaire.com/blog/2014/08/17/automagical-deploys-from-docker-hub/ and I really tried to customize https://github.com/cpuguy83/dockerhub-webhook-listener for my own needs, but the problem is that dockerhub-webhook-listener is not ready to be used as is (you have to customize it) and I'm not yet good enough with Go to do that quickly. This is why I rewrote the service in Python (which is my daily language). I want to thank Brian Goff for the idea and all the people in #docker @ FreeNode for the support.

How to use docker-puller

Setting up the service should be quite easy. After you clone the repository from https://github.com/glowdigitalmedia/docker-puller, there is a config.json file where you define the host, the port, a token and a list of hooks you want to react to. For example:
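A configuration along these lines (the key names here are illustrative; check the repository's README for the exact format, but the token and hook name match the webhook URL used later in this post):

```json
{
    "host": "localhost",
    "port": 8000,
    "token": "abc123",
    "hooks": {
        "hello": "scripts/hello.sh"
    }
}
```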

Create a bash script (in this case it is called hello.sh), put it under the script folder, and write in it the instructions to be executed to pull the new image and restart the container, for example:
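A minimal sketch (the image and container names are placeholders):

```shell
#!/bin/bash
# pull the freshly built image and restart the container with it
docker pull youraccount/yourimage:latest
docker stop yourcontainer || true
docker rm yourcontainer || true
docker run -d --name yourcontainer youraccount/yourimage:latest
```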

Once configured, I suggest setting up an Nginx entry (instructions not covered here) that redirects, for example, yourhost.com/dockerpuller to localhost:8000 (I would also advise enabling SSL, otherwise people could sniff your token). The service can be started with "python app.py" (or you can set up a Supervisor script).

At this point docker-puller is up and running. Go to docker.io automatic build settings and setup a webhook like this: http://yourhost.com/dockerpuller?token=abc123&hook=hello

Every time docker.io finishes building and pushing your image to the Docker registry, it will POST to that URL. docker-puller will catch the POST, check for a valid token, get the hook name and execute the corresponding script.

That’s all! I hope this very simple service can be useful to other people and once again, if you want to improve it, I will be glad to accept your pull requests on GitHub.