Other articles

  1. Getting latest Ubuntu AMI with Terraform

    When we need to create an EC2 resource on AWS using Terraform, we need to specify the AMI id to get the correct image. The id is not easy to memorise, it changes depending on the region we are working in, and it changes again with every new release. So, how can we be sure to get the correct id, for our region, of the latest image available for a given Linux distribution?

    Getting the latest Ubuntu AMI id

    In this example I will show how to get the id of the latest Ubuntu 16.04 server image for the London region, and how to create an EC2 instance using that id.

    variable "aws_region" { default = "eu-west-2" } # London
    
    provider "aws" {
        region = "${var.aws_region}"
        access_key = "youraccesskey"
        secret_key = "yoursecretkey"
    }
    
    data "aws_ami" "ubuntu" {
        most_recent = true
    
        filter {
            name   = "name"
            values = ["ubuntu/images/hvm-ssd/ubuntu-xenial-16.04-amd64-server-*"]
        }
    
        filter {
            name   = "virtualization-type"
            values = ["hvm"]
        }
    
        owners = ["099720109477"] # Canonical
    }
    
    resource "aws_instance" "web" {
        ami           = "${data.aws_ami.ubuntu.id}"
        instance_type = "t2.micro"
    
        tags {
            Name = "HelloUbuntu"
        }
    }
    
    output "image_id" {
        value = "${data.aws_ami.ubuntu.id}"
    }
    

    After initialising the working directory with terraform init, we can run terraform apply: the data source resolves the AMI id and the instance is created:

    ➜  example1$: terraform apply
    data.aws_ami.ubuntu: Refreshing state...
    aws_instance.web: Creating...
        ami:                          "" => "ami-03998867"
        associate_public_ip_address:  "" => "<computed>"
        availability_zone:            "" => "<computed>"
        ebs_block_device.#:           "" => "<computed>"
        ephemeral_block_device.#:     "" => "<computed>"
        instance_state:               "" => "<computed>"
        instance_type:                "" => "t2.micro"
        ipv6_address_count:           "" => "<computed>"
        ipv6_addresses.#:             "" => "<computed>"
        key_name:                     "" => "<computed>"
        network_interface.#:          "" => "<computed>"
        network_interface_id:         "" => "<computed>"
        placement_group:              "" => "<computed>"
        primary_network_interface_id: "" => "<computed>"
        private_dns:                  "" => "<computed>"
        private_ip:                   "" => "<computed>"
        public_dns:                   "" => "<computed>"
        public_ip:                    "" => "<computed>"
        root_block_device.#:          "" => "<computed>"
        security_groups.#:            "" => "<computed>"
        source_dest_check:            "" => "true"
        subnet_id:                    "" => "<computed>"
        tags.%:                       "" => "1"
        tags.Name:                    "" => "HelloUbuntu"
        tenancy:                      "" => "<computed>"
        volume_tags.%:                "" => "<computed>"
        vpc_security_group_ids.#:     "" => "<computed>"
    aws_instance.web: Still creating... (10s elapsed)
    aws_instance.web: Still creating... (20s elapsed)
    aws_instance.web: Still creating... (30s elapsed)
    aws_instance.web: Creation complete (ID: i-0f58f8bd55b3a7e38)
    
    Apply complete! Resources: 1 added, 0 changed, 0 destroyed.
    
    Outputs:
    
    image_id = ami-03998867
    

    That's all we need to spin up an EC2 instance on AWS using the latest Ubuntu image available.
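
    The same data source works for any other release: only the name filter changes. For example, assuming Canonical keeps its usual naming scheme, Ubuntu 14.04 images can be matched with:

    filter {
        name   = "name"
        values = ["ubuntu/images/hvm-ssd/ubuntu-trusty-14.04-amd64-server-*"]
    }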


  2. Creating a production ready API with Python and Django Rest Framework – part 4

    In the previous part of the tutorial we implemented details management, relations between models, nested APIs and a different level of permissions. Our API is basically complete, but is it working properly? Is the source code free of bugs? Would you feel confident refactoring the code without breaking something? The answer to all these questions is probably no. I can't be sure the code behaves properly, nor would I feel confident refactoring anything, without some test coverage.

    As I mentioned previously, we should have been writing tests from the beginning, but I really didn't want to mix too many concepts together and I wanted to let the reader concentrate on the Rest Framework instead.

    Test structure and configuration

    Before beginning the fourth part of this tutorial, make sure you have grabbed the latest source code from https://github.com/andreagrandi/drf-tutorial and checked out the previous git tag:

    git checkout tutorial-1.14
    

    Django has an integrated test runner, but my personal choice is to use pytest, so first let's install the needed libraries:

    pip install pytest pytest-django
    

    As long as we respect a minimum of conventions (test files must start with the test_ prefix), tests can be placed anywhere in the code. My advice is to put them all together in a separate folder, divided according to app names. In our case we are going to create a folder named "tests" at the same level as the manage.py file. Inside this folder we need to create an __init__.py file and another folder called catalog, with an additional __init__.py inside. Now, still at the same level as manage.py, create a file called pytest.ini with this content:

    [pytest]
    DJANGO_SETTINGS_MODULE=drftutorial.settings
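
    The resulting layout, relative to the folder containing manage.py, should look like this:

    drftutorial/
        manage.py
        pytest.ini
        tests/
            __init__.py
            catalog/
                __init__.py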
    

    Are you feeling confused? No problem, you can check out the source code containing these changes:

    git checkout tutorial-1.15
    

    You can check that you have done everything correctly by going inside the drftutorial folder (the one containing manage.py) and launching pytest. If you see something like this, you made the changes correctly:

    (drf-tutorial) ➜  drftutorial git:(master) pytest
    ============================================================================================================================= test session starts ==============================================================================================================================
    platform darwin -- Python 2.7.13, pytest-3.0.6, py-1.4.32, pluggy-0.4.0
    Django settings: drftutorial.settings (from ini file)
    rootdir: /Users/andrea/Projects/drf-tutorial/drftutorial, inifile: pytest.ini
    plugins: django-3.1.2
    collected 0 items
    
    ========================================================================================================================= no tests ran in 0.01 seconds =========================================================================================================================
    (drf-tutorial) ➜  drftutorial git:(master)
    

    Writing the first test

    To begin with, I will show you how to write a simple test that verifies the API returns the product list. If you remember, we implemented this API in the first part of the tutorial. First of all create a file called test_views.py under the folder drftutorial/tests/catalog/ and add this code:

    import pytest
    from django.urls import reverse
    from rest_framework import status
    from rest_framework.test import APITestCase
    
    
    class TestProductList(APITestCase):
        @pytest.mark.django_db
        def test_can_get_product_list(self):
            url = reverse('product-list')
            response = self.client.get(url)
            self.assertEqual(response.status_code, status.HTTP_200_OK)
            self.assertEqual(len(response.json()), 8)
    

    Before being able to run this test we need to change a little thing in the catalog/urls.py file, something we should have done from the beginning. Please change the first url in this way, adding the name parameter:

    urlpatterns = [
        url(r'^products/$', views.ProductList.as_view(), name='product-list'),
        ...
    

    At this point we are able to run our test suite again and verify the test passes:

    (drf-tutorial) ➜  drftutorial git:(test-productlist) ✗ pytest -v
    ============================================================================================================================= test session starts ==============================================================================================================================
    platform darwin -- Python 2.7.13, pytest-3.0.6, py-1.4.32, pluggy-0.4.0 -- /Users/andrea/.virtualenvs/drf-tutorial/bin/python2.7
    cachedir: .cache
    Django settings: drftutorial.settings (from ini file)
    rootdir: /Users/andrea/Projects/drf-tutorial/drftutorial, inifile: pytest.ini
    plugins: django-3.1.2
    collected 1 items
    
    tests/catalog/test_views.py::TestProductList::test_can_get_product_list PASSED
    
    =========================================================================================================================== 1 passed in 0.98 seconds ===========================================================================================================================
    

    To checkout the source code at this point:

    git checkout tutorial-1.16
    

    Explaining the test code

    When we implement a test, the first thing to do is to create a test_* file and import the minimum necessary to write a test class and method. Each test class must inherit from APITestCase and have a name that starts with Test, like TestProductList. Since we use pytest, we need to mark our method with the @pytest.mark.django_db decorator, to tell the test suite our code will explicitly access the database. We are going to use the client object that is integrated in APITestCase to perform the request. Before doing that, we first get the local url using Django's reverse function. At this point we make the call using the client:

    response = self.client.get(url)
    

    and then we assert a couple of things that we expect to be true:

    self.assertEqual(response.status_code, status.HTTP_200_OK)
    self.assertEqual(len(response.json()), 8)
    

    We check that our API returns the 200 status code and that the returned JSON contains 8 elements.

    It's normally good practice to create test data inside the tests, but in our case we previously created a data migration that inserts test data. Migrations run every time we run the tests, so when we call our API the data will already be there.
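
    If you prefer the first approach, the data can be created directly in the test method (or in a setUp method). A minimal sketch, assuming a Product model in the catalog app whose field names may not match the tutorial's actual model:

    import pytest
    from django.urls import reverse
    from rest_framework import status
    from rest_framework.test import APITestCase

    from catalog.models import Product  # assumed import path


    class TestProductListData(APITestCase):
        @pytest.mark.django_db
        def test_created_product_is_returned(self):
            # Create the data this test needs, instead of relying on the
            # data migration (the field name below is an assumption)
            Product.objects.create(name='My test product')
            url = reverse('product-list')
            response = self.client.get(url)
            self.assertEqual(response.status_code, status.HTTP_200_OK)
            self.assertIn('My test product',
                          [p['name'] for p in response.json()])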

    Wrapping up

    I've written a few tests for all the views we have implemented so far; they are available if you check out this version of the code:

    git checkout tutorial-1.17
    

    I've only tested the views, but it would be nice to also test the permission class, for example. Please remember to write your tests first, if possible: implementing the code will feel much more natural once the tests are already in place.
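
    As an example, here is a hedged sketch of a permission test, assuming the product list view accepts POST and that write operations require authentication (the exact rules were set up in the previous parts):

    import pytest
    from django.urls import reverse
    from rest_framework import status
    from rest_framework.test import APITestCase


    class TestProductListPermissions(APITestCase):
        @pytest.mark.django_db
        def test_anonymous_user_cannot_create_product(self):
            url = reverse('product-list')
            # The payload field is an assumption; any anonymous write should be rejected
            response = self.client.post(url, {'name': 'New product'})
            # DRF returns 401 or 403 here, depending on the authentication classes in use
            self.assertIn(response.status_code,
                          (status.HTTP_401_UNAUTHORIZED, status.HTTP_403_FORBIDDEN))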


  3. Migrating from WordPress to a static generated website

    As you may have noticed, my website looks very different compared to a few days ago. It's not just a different theme or template: I completely changed how the pages are generated and I'm hosting it in a completely different way.

    A brief history

    When I started this blog 10 years ago, I was hosting it on a shared hosting service and it was based on WordPress. I later decided to keep WordPress as the backend (I don't like PHP very much, but I wasn't good at front end development at the time either, so a tool that let me concentrate on the content rather than on the design was a natural choice) and to move the website to a VPS on DigitalOcean, where until a few days ago I self-hosted Nginx + PHP + MySQL, and even Postfix for email aliases.

    Why move to a static website?

    In the three or four years I've been using a VPS, I must say I've been good enough (or maybe lucky?) at keeping "bad people" out of my server, but maintaining a VPS can be very time consuming and you can never be sure that your website is safe. I had heard about static websites before, but I was a bit sceptical because I hadn't spent enough time investigating the possibilities (search and comments are still possible, thanks to external services and plugins).

    Another advantage of a static website is that I can "run" (preview) it on my local computer without publishing it online. Pages are rendered locally and appear in the browser exactly as they will once published online.

    If you use a tool like WordPress, you need to be constantly connected to the Internet to make any change. With static pages I can write my content offline (for example while commuting on the train or while flying somewhere) and publish it once I'm back online.

    Pelican

    The tool I'm using to generate this website is called Pelican. There are many static website generators; I chose Pelican because it's written in Python, so if I need to make any changes I can, and because its templates use Jinja2, which I'm already familiar with. It can also import posts from WordPress (and I had over 180 posts to import from my previous website), so if you are migrating from it, it's a good choice. Please note that the import script is not perfect and you may have to adjust some formatting here and there.
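
    The import itself is a single command; something like this hedged example (the exact flags may vary between Pelican versions, so check pelican-import --help):

    pelican-import --wpfile -m markdown -o content wordpress-export.xml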

    A new deployment pipeline

    When you use WordPress your website is already online: all you have to do is log in, use the integrated editor, write content and publish it. A static website doesn't have any admin tool, it's just static pages. How do you publish content then? There are of course multiple solutions available. In my case the website source code is hosted in a repository on GitHub. When I commit to the master branch, a webhook triggers a build job on TravisCI. TravisCI fetches the latest source code, installs Pelican and builds the static pages. Once the build is finished, a bash script publishes the generated pages to the static hosting service.
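
    As a hedged sketch (the Python version, dependencies and script name are illustrative, not my exact configuration), the TravisCI side of such a pipeline can look like this:

    # .travis.yml (hypothetical sketch)
    language: python
    python:
      - "2.7"
    install:
      - pip install pelican markdown
    script:
      - pelican content -o output -s publishconf.py
    after_success:
      - bash deploy.sh  # pushes the generated pages online, see the next section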

    Hosting a static website

    The good thing about hosting a static website is that you don't need a database, so you can host it almost anywhere, at a cheaper price or even for free. In my case I decided to use GitHub Pages, mainly for simplicity. Every GitHub user can have a static website hosted at <yourusername>.github.io for free. To start using it, you just have to create a repository named <yourusername>.github.io under your GitHub account. In my case the repository is https://github.com/andreagrandi/andreagrandi.github.io. My deploy script simply takes the generated content in the output/ folder and pushes it to this repository. Once the website has been pushed, it's immediately available at https://andreagrandi.github.io
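
    The deploy script itself can be a few lines of bash; a minimal sketch, assuming a GitHub token is available as a secure environment variable (the names here are illustrative, any authenticated push works):

    #!/bin/bash
    # deploy.sh - publish the generated pages to the GitHub Pages repository
    set -e
    cd output
    git init
    git add .
    git commit -m "Publish website"
    # GH_TOKEN is assumed to be a personal access token stored as a secure TravisCI variable
    git push --force "https://${GH_TOKEN}@github.com/andreagrandi/andreagrandi.github.io.git" master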

    CloudFlare

    The GitHub Pages service has a little limitation: you can either have your website served from a URL like the one I've just mentioned, with SSL support, or use your own domain, but you can't have both things (SSL + custom domain). To work around this, you can instruct your domain registrar (in my case Gandi.net) to let CloudFlare manage your domain; then enabling "Full SSL" support will do the trick. I won't repeat here how to use CloudFlare, since they have a very nice tutorial explaining how to configure their service with GitHub Pages: https://blog.cloudflare.com/secure-and-fast-github-pages-with-cloudflare/. Remember to include a CNAME file containing your domain name and let your static generator put it in the root of your website, otherwise GitHub Pages won't serve the pages correctly.

    Why not Amazon S3?

    While I was looking for instructions on how to host a static website, I found many examples of websites using Amazon S3. There is nothing wrong with using this service (just keep in mind that it's not free: Amazon charges for space usage and requests, so keep an eye on the AWS bill), but the way these websites were using it was completely wrong. The most common error I noticed was enabling the flexible SSL option on CloudFlare: this means the connection between the visitor and CloudFlare is encrypted (and the visitor sees SSL enabled), but the connection between CloudFlare and Amazon S3 is served over plain HTTP, meaning the pages could potentially be modified before being served. In fact Amazon doesn't serve S3 website buckets over SSL, only plain HTTP (why are you doing this, Amazon?!). To use an S3 bucket correctly one should also configure Route 53 (to manage DNS) and CloudFront (the Amazon equivalent of the CloudFlare service; beware, this is also charged separately depending on usage/traffic), making the whole setup a bit more complicated.

    Conclusion

    I finally moved away from my VPS and from now on I will be able to concentrate my time on content, instead of spending part of it maintaining a server. Last but not least, the possibility of writing content offline will hopefully allow me to write from places (the train, an airplane) I've never written from before. If you have any suggestions, or if you notice any errors, feel free to leave a comment below. Alternatively, since this blog is now completely open source and on GitHub, you can fork it and make a pull request!
