Create an EncFS volume compatible with BoxCryptor Classic

If you are planning to share an encrypted volume between Linux/OSX and Windows (I will assume you are sharing it on Dropbox, but you could use any similar service) and you are using EncFS under Linux/OSX and BoxCryptor under Windows, there are some specific settings to use when you create the EncFS volume. In fact, even though BoxCryptor claims to be "EncFS compatible", it is not 100% compatible, so the volume has to be created with the options shown below.

Suppose you want to create an encrypted volume located at $HOME/.TestTmpEncrypted and mounted at $HOME/TestTmp; you need the following command:

andrea-Inspiron-660:~ andrea $ encfs ~/.TestTmpEncrypted ~/TestTmp

Answer "y" when you are asked whether the folders should be created:

The directory "/home/andrea/.TestTmpEncrypted/" does not exist. Should it be created? (y,n) y
The directory "/home/andrea/TestTmp" does not exist. Should it be created? (y,n) y

At this point you can choose between standard mode, pre-configured paranoia mode and expert configuration mode. Please choose the expert one (x):

Creating new encrypted volume.
Please choose from one of the following options:
 enter "x" for expert configuration mode,
 enter "p" for pre-configured paranoia mode,
 anything else, or an empty line will select standard mode.
?> x

Manual configuration mode selected.

Select AES as the cipher algorithm:

The following cipher algorithms are available:
1. AES : 16 byte block cipher
-- Supports key lengths of 128 to 256 bits
-- Supports block sizes of 64 to 4096 bytes
2. Blowfish : 8 byte block cipher
-- Supports key lengths of 128 to 256 bits
-- Supports block sizes of 64 to 4096 bytes

Enter the number corresponding to your choice: 1

Selected algorithm "AES"

Select 256 as the key size:

Please select a key size in bits. The cipher you have chosen
supports sizes from 128 to 256 bits in increments of 64 bits.
For example:
128, 192, 256
Selected key size: 256

Using key size of 256 bits

Choose 1024 as the block size:

Select a block size in bytes. The cipher you have chosen
supports sizes from 64 to 4096 bytes in increments of 16.
Alternatively, just press enter for the default (1024 bytes)

filesystem block size:

Using filesystem block size of 1024 bytes

Select Stream as the filename encoding:

The following filename encoding algorithms are available:
1. Block : Block encoding, hides file name size somewhat
2. Null : No encryption of filenames
3. Stream : Stream encoding, keeps filenames as short as possible

Enter the number corresponding to your choice: 3

Selected algorithm "Stream"

Do NOT enable filename initialization vector chaining:

Enable filename initialization vector chaining?
This makes filename encoding dependent on the complete path,
rather than encoding each path element individually.
The default here is Yes.
Any response that does not begin with 'n' will mean Yes: no

Do NOT enable per-file initialization vectors:

Enable per-file initialization vectors?
This adds about 8 bytes per file to the storage requirements.
It should not affect performance except possibly with applications
which rely on block-aligned file io for performance.
The default here is Yes.
Any response that does not begin with 'n' will mean Yes: no

External chained IV will be disabled automatically, since it requires the two options we just declined. Next, do NOT enable block authentication code headers:

External chained IV disabled, as both 'IV chaining'
and 'unique IV' features are required for this option.
Enable block authentication code headers
on every block in a file? This adds about 12 bytes per block
to the storage requirements for a file, and significantly affects
performance but it also means [almost] any modifications or errors
within a block will be caught and will cause a read error.
The default here is No.
Any response that does not begin with 'y' will mean No: no

Do NOT add random bytes to each block header (select 0):

Add random bytes to each block header?
This adds a performance penalty, but ensures that blocks
have different authentication codes. Note that you can
have the same benefits by enabling per-file initialisation
vectors, which does not come with as great a performance
penalty.
Select a number of bytes, from 0 (no random bytes) to 8: 0

Enable file-hole pass-through:

Enable file-hole pass-through?
This avoids writing encrypted blocks when file holes are created.
The default here is Yes.
Any response that does not begin with 'n' will mean Yes: yes

Finally you will see:

Configuration finished. The filesystem to be created has
the following properties:
Filesystem cipher: "ssl/aes", version 3:0:2
Filename encoding: "nameio/stream", version 2:1:2
Key Size: 256 bits
Block Size: 1024 bytes
File holes passed through to ciphertext.

At this point set a passphrase for your new volume:

Now you will need to enter a password for your filesystem.
You will need to remember this password, as there is absolutely
no recovery mechanism. However, the password can be changed
later using encfsctl.

New Encfs Password:
Verify Encfs Password:

You should now be able to mount this volume with BoxCryptor on Windows.
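
Later, to mount the volume again under Linux, just repeat the encfs command (it will only ask for the passphrase), and use fusermount to unmount it:

encfs ~/.TestTmpEncrypted ~/TestTmp
fusermount -u ~/TestTmp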

How to configure Edimax EW-7811UN Wifi dongle on Raspbian

If you want to connect your Raspberry Pi to your home network and you want to avoid cables, I suggest using the Edimax WiFi adapter. This device is quite cheap (around £8 on Amazon) and it's very easy to configure on Raspbian (I assume you are using a recent version of Raspbian; I'm using the one released on 20/06/2014).

[Image: edimax-pi3]

Configure the WiFi adapter

Edit /etc/network/interfaces and insert these configuration values:

auto lo
iface lo inet loopback
iface eth0 inet dhcp

allow-hotplug wlan0
auto wlan0

iface wlan0 inet dhcp
wpa-ssid YOURESSID
wpa-psk YOURWPAPASSWORD
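
After saving the file, bring the interface up with the standard Debian/Raspbian commands below (if wlan0 was not up yet, ifdown may complain; a reboot works just as well):

sudo ifdown wlan0
sudo ifup wlan0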

Power management issue

There is a known "issue" with this adapter's default configuration: it turns the adapter off when the wlan interface has been idle for a few minutes. To avoid this you have to customize the parameters used to load the kernel module. First check that your adapter is using the 8192cu module:

sudo lsmod | grep 8192
8192cu 551136 0

Create the file /etc/modprobe.d/8192cu.conf and insert the following lines inside:

# prevent power down of wireless when idle
options 8192cu rtw_power_mgnt=0 rtw_enusbss=0
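
After a reboot, or after reloading the module as shown below, you can check that the options have been applied; this assumes the driver exposes its parameters via sysfs, which 8192cu does:

sudo rmmod 8192cu
sudo modprobe 8192cu
cat /sys/module/8192cu/parameters/rtw_power_mgnt

The last command should print 0.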

I also suggest creating a small entry in the crontab to make the Raspberry Pi ping your router every minute. This helps ensure that your WiFi connection stays alive. To edit the crontab just type (as the pi user; you don't need to be root):

crontab -e

and insert this line at the end:

*/1 * * * * ping -c 1 192.168.0.1

where 192.168.0.1 is the IP address of your router (of course substitute this value with your own router's address).

Keep Alive Script

I created a further script to keep my WiFi alive. This script pings the router every 5 minutes (change the IP to the one of your router) and, if the ping fails, it brings down the wlan0 interface, unloads the WiFi kernel module and brings them both up again.
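
Here is a minimal sketch of such a script, assuming the wlan0 interface and the 8192cu module as configured above; 192.168.0.1 is again an example router IP:

#!/bin/bash
# WiFi keep-alive sketch: ping the router and, if it fails,
# reload the wlan0 interface and the 8192cu driver.

ROUTER_IP=192.168.0.1  # change this to your router's IP

if ! ping -c 2 "$ROUTER_IP" > /dev/null 2>&1; then
    ifdown wlan0
    rmmod 8192cu
    modprobe 8192cu
    ifup wlan0
fi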

Just put this script in /root/wifi_recover.sh and then, as the root user, execute:

chmod +x wifi_recover.sh
crontab -e

Insert this line inside the crontab editor:

*/5 * * * * /root/wifi_recover.sh

Conclusion

The configuration is done. Just reboot your Raspberry Pi and enjoy your WiFi connection.

Configuring ddclient to update your dynamic DNS at noip.com

noip.com is one of the few free dynamic DNS services that is reliable. If, like me, you have a Raspberry Pi connected to your home DSL and you want it to always be reachable without knowing its current IP address (the IP can change if you have a normal DSL service at home), you need a dynamic DNS service.

To update the noip.com entry you just need ddclient, a tool that is available in the Raspbian/Debian repositories. You can install it with this command:

sudo apt-get install ddclient

then you just need to edit /etc/ddclient.conf:

protocol=dyndns2
use=web, web=checkip.dyndns.com/, web-skip='IP Address'
server=dynupdate.no-ip.com
login=yourusername
password=yourpassword
yourhostname.no-ip.org

and restart the client:

sudo /etc/init.d/ddclient restart
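
Before trusting the periodic updates, you can also run a one-shot update in debug mode to verify the configuration (these are standard ddclient flags):

sudo ddclient -daemon=0 -debug -verbose -noquiet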

That's all! Please remember that noip.com free accounts have a limitation: they need to be confirmed every 30 days (you will receive an email and you need to click the link it contains to keep your hostname active).

Getting started with Digital Ocean VPS: configuring DNS and Postfix for email forwarding

I have recently migrated my website from shared hosting to a dedicated VPS on Digital Ocean. Having a VPS surely gives you unlimited possibilities compared to shared hosting, but of course you have to manage some services by yourself.

In my case I only needed SSH access, a LEMP configuration (Nginx + MySQL + PHP) to serve my WordPress blog, and Postfix for email forwarding from my aliases to my personal email.

Configuring DNS on Digital Ocean

Understanding how to properly configure the DNS entries in the control panel can be a bit tricky if it's not your daily bread. In particular, Digital Ocean derives some settings from your droplet's name, so it's better to configure it properly.

For example, the droplet name should not be arbitrary: it should match your domain name. I initially called my host "andreagrandi" and I had to rename it to "andreagrandi.it" to get the proper PTR (reverse DNS) values.

You will need to create at least a "mail" record pointing to your IP and an "MX" record pointing to mail.yourdomain.com. (please note the dot at the end of the domain name). Here is the configuration of my own droplet (you will also notice a CNAME record: you need it if you want www.yourdomain.com to correctly point to your IP).

[Image: dns_config_digitalocean]

Configuring Postfix

In my case I only needed some aliases that I use to forward emails to my GMail account, so the configuration is quite easy. First you need to install Postfix:

sudo apt-get install postfix

Then you need to edit /etc/postfix/main.cf, customizing myhostname with your domain name and adding the virtual_alias_maps and virtual_alias_domains parameters. Please also check that mynetworks only lists the loopback networks, or you will make your mail server an open relay that spam bots can abuse.
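
As an illustration, here is a minimal sketch of the relevant main.cf settings; yourdomain.com is a placeholder for your actual domain, and the mynetworks value shown is the loopback-only Debian default:

myhostname = yourdomain.com
mynetworks = 127.0.0.0/8 [::ffff:127.0.0.0]/104 [::1]/128
virtual_alias_domains = yourdomain.com
virtual_alias_maps = hash:/etc/postfix/virtual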

Add your email aliases

Edit the /etc/postfix/virtual file and add your aliases, one per line, like in this example:

info@yourdomain.com youremail@gmail.com
sales@yourdomain.com youremail@gmail.com

At this point update the alias map and reload the Postfix configuration:

sudo postmap /etc/postfix/virtual
sudo /etc/init.d/postfix reload
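
If a forwarded message does not arrive, the Postfix log is the first place to look (on Debian-based systems):

sudo tail -f /var/log/mail.log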

Conclusion

As you can see, configuring Postfix is quite easy; you just need to be careful when you configure the DNS records in the control panel. Are you curious to try a Digital Ocean VPS? Fancy $10 of free credit (enough for 2 months if you choose the basic droplet)? Use this link and enjoy it: https://www.digitalocean.com/?refcode=cc8349e328a5

Travis-ci.org and Coveralls.io: Continuous Integration and QA made easy

When developing a large web application, and before deploying any code, it is very important to verify the quality of the code itself, to check whether we have introduced any regressions or bugs, and to have something that tells us whether we are increasing or decreasing the quality of the code.

Suppose we are in an organization or a company where the basic rule is: the master branch is always stable and ready to be deployed. In a team, people usually work on personal branches, and when the code is stable it is merged into master.

How do we check that the code is stable and ready to be merged? First of all we need to cover all our code with proper tests (I won't go into detail about unit testing here; I assume the reader knows what I'm talking about), then we need to actually run them, possibly in an isolated environment similar to the production one, and check that they all pass. If they do, we are quite safe merging our code into the master branch.

How can we ensure that all developers remember to run the tests when they push new code? To make things a bit more concrete, let's take the example of a Python/Django product (or even a library) that currently supports Python 2.6, 2.7 and 3.3 and Django 1.4.x, 1.5.x and 1.6.x. The whole matrix consists of 9 possible combinations. Do we have to manually run the tests on 9 configurations? No, we don't.

Travis-ci.org

Travis is a continuous integration tool that, once configured, takes care of these tasks and lets us save a lot of time (which we can use to actually write code). Travis-ci.org is an online service that works with GitHub (it requires that we use GitHub as the repository for our code): once we have connected the two accounts and configured a very simple file in our project, it is automatically triggered every time we push to our GitHub repository.

The configuration consists of adding a file named .travis.yml to the root of our project. A working example is available here: https://github.com/andreagrandi/workshopvenues/blob/master/.travis.yml (the env variables I set are not normally required, but that's where I save my configuration values, so they need to be initialized before I can run the tests).
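
For a Python/Django matrix like the one described above, a minimal .travis.yml could look roughly like this (the exact Django versions and the test command are examples to adapt to your project):

language: python
python:
  - "2.6"
  - "2.7"
  - "3.3"
env:
  - DJANGO_VERSION=1.4.13
  - DJANGO_VERSION=1.5.8
  - DJANGO_VERSION=1.6.5
install:
  - pip install Django==$DJANGO_VERSION
  - pip install -r requirements.txt
script:
  - python manage.py test

Travis expands this into the full 9-job matrix; combinations that make no sense (for example Django 1.4 on Python 3.3, since Django 1.4 does not support Python 3) can be skipped with a matrix: exclude: section.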

The service supports most of the commonly used languages and even a good number of PaaS providers, making it very easy to automatically deploy our code. If that is not enough for your needs, they also expose a public API. I suggest you have a look at the official documentation, which explains everything in detail: http://docs.travis-ci.com

Once everything is configured, we will have something like this in our console: https://travis-ci.org/andreagrandi/workshopvenues/jobs/19882128

[Image: travis-ci-console]

If something goes wrong (if the tests don't pass, for example) we receive a notification with all the information about the failing build, and if we have configured automatic deployment, the code is of course not deployed when the build fails.

Travis-ci.org is completely free for open source projects and also has a paid version for private repositories.

Coveralls.io

There is a nice tool available for Python called coverage. Basically it runs the tests and checks what percentage of the source code is covered by them, producing a nice report that shows the percentage for every single file/module and even which lines of code have been tested.

Thanks to Coveralls.io and the use of Travis, even these tasks are completely automated and the results are available online, like in this example: https://coveralls.io/builds/560853

The configuration is quite easy. We need to connect our Coveralls.io profile with GitHub, as we did for Travis-ci.org, and then enable the repository. To trigger Coveralls after a successful Travis build, we need to have these lines at the end of our .travis.yml file:

after_success:
  - coveralls
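
For the coveralls command to work in a Python project, the coveralls package must be installed and the tests must be run under coverage. Here is a sketch of the other relevant lines, where yourpackage is a placeholder for your package directory:

install:
  - pip install coveralls
script:
  - coverage run --source=yourpackage manage.py test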

[Image: coveralls-console]

Coveralls.io is also completely free for open source projects and offers a paid version for private repositories.

Heroku

I use Heroku to host and run my web application. Normally, to deploy on Heroku, you do something like this:

git push heroku master

Adding these settings to the .travis.yml file, I can automatically deploy the application to Heroku if the build was successful:

deploy:
  provider: heroku
  api_key:
    secure: R4LFkVu1/io9wSb/FvVL6UEaKU7Y4vfen/gCDe0OnEwsH+VyOwcT5tyINAg05jWXhRhsgjYT9AuyB84uCuNZg+lO7HwV5Q4WnHo5IVcCrv0PUq/CbRPUS4C2kDD7zbA1ByCd224tcfBmUtu+DPzyouk23oJH+lUwa/FeUk0Yl+I=
  app: workshopvenues
  on:
    repo: andreagrandi/workshopvenues
  run:
    - "python workshopvenues/manage.py syncdb"
    - "python workshopvenues/manage.py migrate"

Not only is the code deployed: after the deployment, the South migrations are also executed.

Conclusion

These two tools are saving me a lot of time and are ensuring that the code I release for the project I'm working on (WorkshopVenues) is always tested when I push it to my repository.