Understanding Python decorators by optimizing a recursive Fibonacci implementation

A decorator is a Python function that takes a function object as an argument and returns a function as its value. Here is an example of a decorator definition:
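For example, a do-nothing decorator that simply calls the original function unchanged (the names are illustrative):

```python
def my_decorator(func):
    # Define a wrapper that forwards all arguments to the original function
    def wrapper(*args, **kwargs):
        return func(*args, **kwargs)
    # Return the wrapper: a function, as required
    return wrapper
```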

To apply a decorator to an existing function, you just need to put @decorator_name on the line before its definition, as in this example:
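For instance (repeating a do-nothing decorator here so the snippet is self-contained; all names are illustrative):

```python
def my_decorator(func):
    def wrapper(*args, **kwargs):
        return func(*args, **kwargs)
    return wrapper

# Equivalent to: greet = my_decorator(greet)
@my_decorator
def greet(name):
    return "Hello, %s" % name
```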

This decorator doesn’t do anything, so let’s think about a more concrete problem we could solve using decorators.

Fibonacci sequence

By definition, the first two numbers in the Fibonacci sequence are either 1 and 1 or 0 and 1. All the other numbers are the sum of the previous two numbers of the sequence. Example:

  1. 0, 1: the third number is 1
  2. 0, 1, 1: the fourth number is 2
  3. 0, 1, 1, 2: the fifth number is 3
  4. 0, 1, 1, 2, 3: the sixth number is 5
  5. etc…

If we wanted to give a mathematical definition of the sequence, we could describe it this way:

  • F(0) = 0
  • F(1) = 1
  • F(n) = F(n-1) + F(n-2)

In Python we could have a recursive function like the following one:
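A direct translation of the definition above could look like this:

```python
def fib(n):
    # Base cases: F(0) = 0, F(1) = 1
    if n < 2:
        return n
    # Recursive case: F(n) = F(n-1) + F(n-2)
    return fib(n - 1) + fib(n - 2)
```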

What’s the problem with this implementation? The code works as expected, but it’s very inefficient. The next picture shows what happens when we try, for example, to calculate the 5th number of the sequence:



Fib(5) is Fib(4) + Fib(3), but Fib(4) itself is Fib(3) + Fib(2), and… the picture tells us that we have calculated Fib(3) 2 times, Fib(2) 3 times and Fib(1) 5 times! Why are we repeating the same operation every time if we have already calculated the result?


In computing, memoization is an optimization technique used primarily to speed up computer programs by storing the results of expensive function calls and returning the cached result when the same inputs occur again.

We need to store values of the sequence we have already calculated and get them later when we need them. Let’s implement a simple memoization decorator:
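A minimal sketch of such a decorator (this version assumes the decorated function takes a single hashable argument):

```python
def memoize(func):
    cache = {}  # results already calculated, keyed by argument

    def wrapper(n):
        # Only call the original function if the result is not cached yet
        if n not in cache:
            cache[n] = func(n)
        return cache[n]
    return wrapper
```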

The decorator defines a dict at the beginning that is used as a cache. When we want the nth number of the sequence, it first checks whether the value has already been calculated; if so, the cached value is returned instead of being calculated again. If the value is not found, the original function is called, the result is stored in the cache and then returned to the caller.

Using the memoize decorator
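Applying the decorator is just a matter of putting @memoize above fib (both definitions are repeated here so the snippet is self-contained):

```python
def memoize(func):
    cache = {}

    def wrapper(n):
        if n not in cache:
            cache[n] = func(n)
        return cache[n]
    return wrapper

@memoize
def fib(n):
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)

print(fib(35))  # → 9227465
```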

How much can this decorator speed up our fib function? Let’s try to benchmark the execution using Python’s timeit module.
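A sketch of such a benchmark (using fib(20) here so it runs quickly; raise it to 35 to reproduce the figures that follow):

```python
import timeit

# The naive implementation goes in the setup string so that
# defining it is not included in the measured time
setup = """
def fib(n):
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)
"""

# Time a single call to fib(20)
elapsed = timeit.timeit("fib(20)", setup=setup, number=1)
print(elapsed)
```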

Without the decorator, the time required to calculate the 35th number of the Fibonacci sequence on my laptop is: 4.73480010033 seconds

With the decorator, the time required to calculate the 35th number of the Fibonacci sequence on my laptop is: 0.000133037567139 seconds

Quite a bit faster, don’t you think? I’ll let you find out how long it takes to calculate the 60th number of the sequence with and without the decorator. Hint: grab a cup of coffee before beginning!

Django Notes: read values from settings in a safe way

Working on Django projects, I very often find that developers access the values defined in settings in this way:
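The pattern looks like this (MY_SETTING is a placeholder, and a stand-in object replaces django.conf.settings so the sketch runs outside a Django project):

```python
from types import SimpleNamespace

# Stand-in for django.conf.settings; in a real project you would write:
#   from django.conf import settings
settings = SimpleNamespace(MY_SETTING='some-value')

# Works only as long as MY_SETTING is defined in settings.py;
# if it were missing, this attribute access would raise AttributeError
my_value = settings.MY_SETTING
```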

What happens if MY_SETTING has not been defined in settings.py? The code will raise an error and crash, of course. How can we make the code more reliable? We could wrap the block that reads the value in a try/except and set a default value if we get an exception, but this would not be a clean way to do the job.

A cleaner way to do it is to use getattr in this way:
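For example (again with a stand-in object in place of django.conf.settings, and MY_SETTING deliberately left undefined):

```python
from types import SimpleNamespace

# Stand-in for a settings module where MY_SETTING is NOT defined;
# in a real project: from django.conf import settings
settings = SimpleNamespace()

# Falls back to the default instead of raising AttributeError
my_value = getattr(settings, 'MY_SETTING', 'my-default-value')
```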

getattr will try to read the MY_SETTING value from settings.py; if the value doesn’t exist, my_value will be assigned my-default-value.

How to write a custom Django Middleware

To understand how a Django middleware works, we need to remember that the basic architecture of Django is composed of a request and a response. A middleware is something that sits in the middle. Let’s have a look at the next diagram, taken from the official Django documentation:



Important things to know

There are four important things to know about middlewares:

  • You need to write a class that just inherits from object
  • The order in which you place your middleware in settings.py is important: middlewares are processed from top to bottom during a request and from bottom to top during a response.
  • You don’t need to implement all the available methods of a middleware. For example, you can implement just process_request and process_template_response
  • If you implement process_request and decide to return an HttpResponse, all the other middlewares, views etc… will be ignored and only your response will be returned

Writing a middleware

In my example I wanted to implement a feature that saves the time when a request comes in and the time when it has been processed, then calculates the time delta and exposes this value in the context so that it is accessible from our templates. How can we implement such a feature using a middleware? Here is my example:
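A sketch of how such a middleware could look (the class name and the request_time context key are my own choices; the full version lives in the repository linked below):

```python
import time

class BenchmarkMiddleware(object):
    """Measure how long a request takes and expose it to templates."""

    def process_request(self, request):
        # Save the time when the request comes in
        request.start_time = time.time()

    def process_template_response(self, request, response):
        # Calculate the delta and expose it in the template context
        response.context_data['request_time'] = time.time() - request.start_time
        return response
```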

Please don’t mind how I calculated the time. I’m aware that there are better ways to do it, but I just wanted to keep things simple and show how to implement a simple middleware.

If you want to see a complete example of a project that includes and uses this middleware, here you can find the complete source code: https://github.com/andreagrandi/benchmark-middleware-example


Goenv – Go Environment Manager

To briefly explain what Goenv is, I will assume you have previously worked with Python: basically, it’s what Virtualenv is for Python. Goenv (and its wrapper goof) creates a folder for a new project and sets the $GOPATH env variable to that folder’s path. From then on, every time you run go get, the libraries will be installed in that specific $GOPATH.

It’s very important to use a separate $GOPATH for each project, because this allows us to use different library versions for each project and avoid version conflicts.
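By hand, the effect goenv automates looks roughly like this (the path is an arbitrary example):

```shell
# Give the project its own, isolated GOPATH
export GOPATH="$HOME/go-envs/myproject"
mkdir -p "$GOPATH"
# From now on, "go get" installs libraries under this project-specific path
```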


Goenv is now installed; we will now install its wrapper, goof:

Edit .bashrc (or .zshrc if you use zsh) and append these lines:

How to use it

To create a new go environment use make:

To exit the go environment use deactivate:

To use an environment use workon:

To show available environments use show:

Goenv itself is not enough to manage Go packages. It would be like using only Virtualenv without pip and requirements. In a future post I will explain how to use Godep.



Travis-ci.org and Coveralls.io: Continuous Integration and QA made easy

When developing a large web application, and before deploying any code, it is very important to verify the quality of the code itself, check whether we have introduced any regressions or bugs, and have something that tells us whether the quality of the code is increasing or decreasing.

Suppose we are in an organization or a company where the basic rule is: the master branch is always ready/stable to be deployed. In a team, people usually work on personal branches; when the code is stable, it’s merged into master.

How do we check if the code is stable and ready to be merged? First of all we need to cover all our code with proper tests (I won’t go into detail about unit testing here; I assume the reader knows what I’m talking about), then we need to actually run them, possibly in an isolated environment similar to the production one, and check that they all pass. If they do, we are quite safe to merge our code into the master branch.

How can we ensure that all developers remember to run tests when they push new code? To make things a bit more realistic, let’s take the example of a Python/Django product (or even a library) that currently supports Python 2.6, 2.7, 3.3 and Django 1.4.x, 1.5.x, 1.6.x. The whole matrix consists of 9 possible combinations. Do we have to manually run tests on 9 configurations? No, we don’t.


Travis is a continuous integration tool that, once configured, takes care of these tasks and lets us save a lot of time (which we can use to actually write code). Travis-ci.org is an online service that works with GitHub (it requires that we use GitHub as the repository for our code); once we have connected the two accounts and configured a very simple file in our project, it’s automatically triggered when we push to our GitHub repository.

The configuration consists of adding a file named .travis.yml to the root of our project. A working example is available here: https://github.com/andreagrandi/workshopvenues/blob/master/.travis.yml (the env variables I set are not normally required, but that’s where I save my configuration values, so they need to be initialized before I can run tests).
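For reference, a minimal .travis.yml for a Python/Django matrix like the one described above could look like this (versions and commands are illustrative; see the linked file for the real one):

```yaml
language: python
python:
  - "2.6"
  - "2.7"
  - "3.3"
env:
  - DJANGO=1.4
  - DJANGO=1.5
  - DJANGO=1.6
install:
  - pip install "Django==$DJANGO"
  - pip install -r requirements.txt
script:
  - python manage.py test
```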

The service supports most of the commonly used languages and even a good number of PaaS providers, making it very easy to automatically deploy our code. If that’s not enough for your needs, they also expose a public API. I suggest you have a look at the official documentation, which explains everything in detail: http://docs.travis-ci.com

Once everything is configured, we will have something like this on our console https://travis-ci.org/andreagrandi/workshopvenues/jobs/19882128


If something goes wrong (if tests don’t pass, for example) we receive a notification with all the information about the failing build, and if we have configured automatic deployment, the code will of course not be deployed in case of a failing build.

Travis-ci.org is completely free for open source projects and also has a paid version for private repositories.


There is a nice tool available for Python called coverage. Basically, it runs tests and measures the percentage of the source code that is covered by them, producing a nice report that shows the percentage for every single file/module and even which lines of code have been tested.

Thanks to Coveralls.io and the use of Travis, even these tasks are completely automated and the results are available online, as in this example: https://coveralls.io/builds/560853

The configuration is quite easy: we need to connect our Coveralls.io profile with GitHub, as we did for Travis-ci.org, and then enable the repository. To trigger Coveralls after a successful Travis build, we need these lines at the end of our .travis.yml file:
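Something along these lines (a common pattern: install the coveralls reporter and run it once the build succeeds):

```yaml
after_success:
  - pip install coveralls
  - coveralls
```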

Coveralls.io too is completely free for open source projects and offers a paid version for private repositories.


I use Heroku to host and run my web application. Normally, to deploy on Heroku you do something like this: git push heroku master

Adding these settings to the .travis.yml file, I can automatically deploy the application to Heroku if the build was successful:
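A sketch of such a deploy section (the app name and the encrypted API key are placeholders; the run entry executes the migrations after deployment):

```yaml
deploy:
  provider: heroku
  api_key:
    secure: "<encrypted-heroku-api-key>"
  app: <your-heroku-app>
  run:
    - "python manage.py migrate"
```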

Not only is the code deployed; after deployment, the South migrations are also executed.


These two tools are saving me a lot of time and ensuring that the code I release for the project I’m working on (WorkshopVenues) is always tested when I push it to the repository.