Configuring Apache 2.2 SSL/TLS for Forward Secrecy

The Apache HTTP server is one of the most commonly used web servers on the Internet, typically deployed on Linux and BSD Unix servers. Mainstream Linux distributions intended for server use tend to be relatively conservative, eschewing “bleeding-edge” packages and newer versions in favour of older, tried and trusted software. While this is useful for stability, it can have a downside in the fast-changing world of Transport Layer Security (TLS), where new vulnerabilities in protocols, ciphers and implementations are continually uncovered by researchers. Although emergency patches are issued for major vulnerabilities such as Heartbleed, at the time of writing many major Linux distributions still ship Apache 2.2 which, until recently, did not support forward secrecy, a property increasingly viewed as important. However, later versions of Apache 2.2 (for example, Apache 2.2.22, which currently ships with Debian 7 ‘wheezy’) do now appear to support it.

What is Forward Secrecy and Why is it desirable?

In TLS, user data passed between a client and server are encrypted and decrypted using a session key known to both peers. Obviously, the secrecy of communications depends on the session key being known only to the peers, so the problem is how the client and server can agree on the session key value without revealing it to an eavesdropping third party. TLS supports various key agreement or key exchange protocols that achieve this. A pre-shared key could be used, but it has to be installed on the client and server in advance, presenting a key management problem. On public Internet sites, an RSA key derivation method is typically used to determine the session key. However, in this case the session key is protected by the server’s private key, so if that key is compromised (revealed or stolen, for example via the Heartbleed bug), an attacker could determine the contents of the communication between the server and any client. Worse, an attacker could decrypt previously recorded TLS sessions: confidentiality of the communication is thus contingent on the long-term security of the server’s private key.

Forward secrecy (sometimes called Perfect Forward Secrecy) avoids this dependence on the long-term secrecy of keys by creating a random, ephemeral (i.e. throw-away) session key for each session. That way, if a session key is ever compromised, the attacker will only be able to decipher the single session it protected. TLS supports two key exchange/agreement protocols that provide forward secrecy: ephemeral Diffie-Hellman (DHE) key exchange and ephemeral Elliptic Curve Diffie-Hellman (ECDHE) key agreement.

One drawback is that ECDHE is slower than RSA key derivation, and DHE is slower still, adding overhead to TLS connection establishment (see Vincent Bernat’s blog post for some concrete measurements; of course, this overhead will become less relevant as computing power increases). Another drawback, pointed out by Ivan Ristic, is that network security devices that decrypt communications using the server’s private key in order to monitor for intruders will no longer be able to do so.

Apache 2.2 Configuration

Without further ado, the following incantations should be placed in the appropriate place in the Apache configuration file (e.g. between <VirtualHost> … </VirtualHost>). This is based on Geoffroy Gramaize’s research and recommendations and a tutorial by Remy Van Elst.

SSLEngine on
SSLProtocol +TLSv1.2 +TLSv1.1 +TLSv1 -SSLv2 -SSLv3
SSLCompression Off
SSLHonorCipherOrder on
SSLCipherSuite "EECDH+AESGCM EDH+AESGCM EECDH EDH !aNULL !eNULL !LOW !3DES !MD5 !EXP !PSK !SRP !DSS !RC4"

Note: The available ciphers depend on the version of OpenSSL used, not on the version of Apache.

Check the configuration, then restart apache2. On Debian/Ubuntu:

# apache2ctl -t
# /etc/init.d/apache2 restart

or on RHEL/CentOS:

# apachectl -t
# /etc/init.d/httpd restart

Finally, test your site with Qualys(R) SSL Labs’ SSL Server Test page.

Note the following points.

  • SSLv3 is disabled to protect against the POODLE attack. Some very old, unsupported browsers (Internet Explorer 6) do not support TLSv1 or later and so will not be able to connect to your web site using TLS.
  • SSL compression is turned off to mitigate against the CRIME attack. (HTTP DEFLATE compression can still be used.)
  • BEAST attack mitigation is tricky. Previous advice was to disable TLSv1.0 and offer RC4. However, some clients cannot use TLSv1.1 or TLSv1.2 and so must use TLSv1.0. Alas, for TLSv1.0 users, RC4 mitigates BEAST, but RC4 itself is likely to be broken in the near future, if it has not been already. Tough call. The recent draft IETF Recommendations for the Secure Use of TLS and DTLS states that implementations MUST NOT negotiate RC4 cipher suites, so RC4 is disabled here. Older versions of Internet Explorer on Windows XP will try to use 3DES instead of RC4, which is more computationally expensive – i.e. slow. 3DES is therefore also disabled — you might consider re-enabling it if supporting these legacy browsers is required, but note that even though 3DES-EDE cipher suites use a 168-bit key, the effective key strength is only 112 bits, as pointed out by Stephane Moore. If you must use them, put them at the end of the list as suggested by Remy Van Elst (ibid.)
  • Put the ECDHE ciphers at the top of the SSLCipherSuite list, since these are faster than DHE and should be given preference.
  • Apache 2.2 does not at present allow the length of the DHE ephemeral keys to be configured; they are fixed at 1024 bits. (Apache prior to v2.4.7 relies on OpenSSL for the DH parameters, which default to 1024 bits.) Until this becomes configurable, there are arguments for disabling DHE altogether.
    • DHE ephemeral keys should be at least as long as the authentication key if X.509 certificates are used for authentication. Having a 1024-bit DHE key while using 2048-bit RSA certificates reduces TLS security.
    • The majority of modern browsers support ECDHE.
  • When you check your site with the SSL Server Test tool, pay attention to the list of supported clients in the Handshake Simulation. There is trade-off between the level of security and the range of clients you can support.
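The structure of the cipher string follows directly from the notes above: preferred (forward-secret) suite classes first, then exclusions. As a sketch only, the string can be assembled programmatically; the class names below are OpenSSL cipher-string syntax, and the helper itself is purely illustrative:

```javascript
// Illustrative only: build an OpenSSL-style cipher string reflecting the
// notes above (ECDHE preferred over DHE, weak suites excluded with '!').
const preferred = ['EECDH+AESGCM', 'EDH+AESGCM', 'EECDH', 'EDH'];
const excluded  = ['aNULL', 'eNULL', 'LOW', '3DES', 'MD5',
                   'EXP', 'PSK', 'SRP', 'DSS', 'RC4'];

const cipherString = preferred
    .concat(excluded.map(function (c) { return '!' + c; }))
    .join(' ');

console.log(cipherString);
// EECDH+AESGCM EDH+AESGCM EECDH EDH !aNULL !eNULL !LOW !3DES !MD5 !EXP !PSK !SRP !DSS !RC4
```

Because SSLHonorCipherOrder is on, the server (not the client) picks the first mutually supported suite from this ordered list.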

TLS recommendations are changing frequently, so it pays to keep up to date.


DNS Privacy: using OpenNIC and DNScrypt

[Update: Many thanks for the comments from Maciej Soltysiak of OpenNIC Poland. Please refer to his reply for information. I may update this article in time after more experimentation.]


My ISP’s DNS server started playing up recently, so I investigated more reliable alternatives. However, I’m wary of the two most well-known public DNS providers, Google and OpenDNS. OpenDNS trumpets its use of DNSSEC, but it logs your DNS queries, discloses information to advertisers and “analytics companies”, and performs censorship to be “family friendly”. Google also collects logs, purportedly for technical purposes, but the meaning of some of the language in their privacy statement is unclear, and given Google’s past and current behaviour and attitude to privacy, using Google DNS for privacy feels like having Dracula in charge of a blood bank.

Hitherto I’ve been avoiding learning about DNS because it’s relatively complex, but this time I had no choice. When I emerged from the rabbit hole, I’d set up a local DNS server on my network that resolves using OpenNIC (an alternative DNS root not under ICANN control) with servers that don’t keep logs and using DNScrypt to authenticate the servers and encrypt the queries and responses.

Problems with DNS

Many people are by now aware of threats to the privacy of Internet browsing through things like tracking cookies and web beacons. Meanwhile, many organisations and companies are moving towards supporting https on their web servers so that traffic is encrypted as it passes over the Internet. Perhaps less widely recognised, however, are the risks to privacy posed by DNS. A few examples:

  1. Each time your web browser retrieves information from a server on the Internet, it performs a DNS query to get the IP address of that server. Thus, someone with access to DNS query logs can determine what Internet sites you’ve been visiting and when.
  2. DNS queries are typically transmitted unencrypted, so can be passively monitored.
  3. Instead of performing a DNS lookup and returning the result, a malicious DNS server can fake the response to direct your browser elsewhere (DNS hijacking), block access to certain websites or domains, or, if the lookup fails, direct you to a page of advertisements.
  4. DNS “black holes” can be set up. While these can be useful for blocking spam, on the other hand even legitimate DNS servers can be manipulated by governments to censor parts of the Internet (e.g. Twitter, YouTube) for political or commercial reasons.
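Point 2 is easy to demonstrate: a standard DNS query carries the looked-up name in cleartext. The following Node.js sketch builds a minimal DNS query packet by hand (a hypothetical helper, not a full DNS implementation) and shows the queried labels sitting in the raw bytes, ready for any passive sniffer to read:

```javascript
// Build a minimal DNS query packet (A record, IN class) for a given name.
// Hypothetical helper for illustration; real resolvers do much more.
function buildDnsQuery(name) {
    // Header: ID 0x1234, flags RD=1, one question, no other records
    const header = Buffer.from([0x12, 0x34, 0x01, 0x00,
                                0x00, 0x01, 0x00, 0x00,
                                0x00, 0x00, 0x00, 0x00]);
    // QNAME: each label prefixed by its length
    const labels = name.split('.').map(function (label) {
        return Buffer.concat([Buffer.from([label.length]),
                              Buffer.from(label, 'ascii')]);
    });
    const tail = Buffer.from([0x00,         // end of QNAME
                              0x00, 0x01,   // QTYPE = A
                              0x00, 0x01]); // QCLASS = IN
    return Buffer.concat([header].concat(labels, [tail]));
}

const packet = buildDnsQuery('example.com');
console.log(packet.includes(Buffer.from('example', 'ascii'))); // true: plaintext
```

The name appears byte-for-byte in the packet, which is why unencrypted DNS is so amenable to passive monitoring.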

DNS hijacking by consumer ISPs for their own gain rather than customer benefit is unfortunately not uncommon (see the references on the Wiki page above). Furthermore, there are pressures on ISPs from governments and law enforcement agencies to compromise their customers’ privacy or censor their browsing, even in Western democracies.

Desirable features for DNS

So what are some desirable properties of a DNS system? We may desire some or all of the following:

  1. A decentralised DNS system to prevent censorship.
  2. Ability to select the location of the DNS server to avoid states with excessive censorship or surveillance.
  3. Authenticated DNS lookups and responses to prevent tampering with DNS responses by a “man-in-the-middle”.
  4. Confidentiality: Encrypted DNS lookups and responses to prevent logging by packet sniffing.
  5. Anonymity: No logging of DNS requests by the DNS server.
  6. Anonymity: Untraceability of the origin of our DNS request.

Not all of these are easy to achieve, so there are trade-offs involved. The main issues are (i) finding a trustworthy DNS provider and (ii) using technical measures to achieve security of DNS communications.

Since DNS can be abused by ISPs, the first step is to use an alternative to the ISP’s DNS server. Searching the web for public DNS services will turn up a number of free and paid-for services. Each has different privacy policies (not always obvious) and may offer value-added services such as filtering of web sites that host malware or scams.

To protect against the response to a DNS query being forged or manipulated, the IETF has developed the DNS Security Extensions (DNSSEC), which use digital signatures for authentication and anti-tampering. This was designed to protect against a specific set of threats such as DNS spoofing. However, it does not address confidentiality (i.e. it does not support encryption) and one still has to trust the peer DNS server. To address the problem of confidentiality and other issues, Daniel Bernstein, an American academic, proposed DNSCurve, which uses high-speed cryptography. However, DNSCurve is not yet widely supported, and the only large-scale DNS provider to offer it so far is OpenDNS, which has privacy issues.

So, ideally we want a DNS service provider with those features we require (trustworthy, private, decentralised, value-added services, service availability etc.) which supports DNSSEC or better still, DNSCrypt. Does such a provider exist?


I eventually settled on the OpenNIC project, an alternative DNS root and DNS registry free from commercial control. They administer their own top-level domains but can also resolve all ICANN top-level domains. The DNS servers are run by individuals or private organisations and have various management policies. Some of the servers support DNSCrypt. The list of servers is available on the OpenNIC website.

Using DNSCrypt

DNSCrypt is software based on the DNSCurve protocol that provides confidentiality of DNS queries and responses between the client and the server. The steps are as follows:

  1. Select the appropriate software depending on how you want to use DNSCrypt.
  2. Build (or download) and install.
  3. Configure.
  4. Test.

I got a lot of information from Marcus Povey’s blog.

Select software

There are a few packages, depending on how you want to use DNSCrypt:

  • A DNS client that runs on your computer to allow it to use a name resolver that supports DNSCrypt. This is the easiest to set up and configure.
  • A forwarding proxy (dnscrypt-proxy) that allows a local name resolver to access a DNSCrypt-enabled DNS server.
  • A server-side proxy that allows a name resolver (DNS server) to support DNSCrypt.

Since I have a local DNS server for my home network, I needed to use dnscrypt-proxy, like this:

machines       |                                         T'Internet
 +-----+       |     Local name resolver                                  DNSCrypt-enabled
 |     +-------+        (e.g. bind)                         ####           name resolver 
 +-----+       |         +------+       +-------+         ########           +------+
               +---------+      +-------+       +=========#########==========+      +
               |         +------+       +-------+          #######           +------+
               |                                            #####
             Local                    DNSCrypt-proxy

Download, (build) and install

If you use Microsoft Windows or Apple OS X, there may be pre-built packages available: see the DNSCrypt project page.

However, you may want to roll up your sleeves and build it yourself. I installed mine on a Raspberry Pi running Raspbian, a variant of Debian Linux.

The easiest way is to use the dnscrypt-autoinstall package for Debian-like systems put together by a kind chap called Simon Clausen. This automates the entire build and installation procedure, but I didn’t like the init script, so I appropriated Marcus Povey’s instead.

If you want to build it manually, first build and install libsodium, which is based on Daniel Bernstein’s NaCl crypto library. (If you forget this step, dnscrypt-proxy will build and run but you may not be able to use DNSCrypt encryption!)

Then download and build dnscrypt-proxy:

$ git clone
$ cd dnscrypt-proxy
$ ./
$ ./configure
$ make
$ make check
$ sudo make install

Installation is into /usr/local/sbin by default. This takes some time on the Raspberry Pi! Go and get a cuppa.

DNSCrypt-proxy configuration

dnscrypt-proxy listens on a specified interface (IP address and port) for DNS queries from a local resolver/caching proxy such as bind, then forwards them to a DNSCrypt-enabled DNS server (resolver). You need to configure the following items:

  • Local address and port on which to listen for queries.
  • Remote resolver’s IP address and port.
  • The resolver’s certificate provider name and key value.

The latter two items are provided by the DNS resolver’s administrator. For OpenNIC servers, the information is available from the OpenNIC wiki, although the items are not explicitly labelled. DNSCrypt queries are often served on a different port to the default DNS port 53. The certificate provider name looks like a host name; an example key value is 1C19:7933:1BE8:23CC:CF08:9A79:0693:7E5C:3410:2A56:AC7F:6270:E046:25B2:EDDB:04E3.
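The key value is simply the resolver's 32-byte public-key fingerprint written as sixteen colon-separated groups of four hex digits. A small Node.js helper (hypothetical, for illustration only; dnscrypt-proxy does this internally) shows the format:

```javascript
// Format a 32-byte public-key fingerprint in DNSCrypt's colon-separated style.
// Hypothetical helper for illustration.
function formatProviderKey(rawKey) {
    const hex = rawKey.toString('hex').toUpperCase();
    return hex.match(/.{4}/g).join(':'); // 64 hex chars -> 16 groups of 4
}

// A 32-byte stand-in value (NOT a real server key)
const demoKey = Buffer.alloc(32, 0xab);
console.log(formatProviderKey(demoKey)); // ABAB:ABAB:... (16 groups)
```

Checking that a published key value has exactly sixteen groups is a quick sanity test before pasting it into your configuration.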

These options are specified as arguments on the command line, e.g.

dnscrypt-proxy --daemonize --local-address=127.0.0.1:5553 --resolver-address=<resolver IP:port> --provider-name=<provider name> --provider-key=1C19:7933:1BE8:23CC:CF08:9A79:0693:7E5C:3410:2A56:AC7F:6270:E046:25B2:EDDB:04E3

This instructs dnscrypt-proxy to run as a daemon in the background (--daemonize), listening on the localhost address, port 5553 (--local-address=127.0.0.1:5553), and connecting to one of the OpenNIC servers. These options would typically be set in the init script in /etc/init.d/dnscrypt-proxy. Don’t forget to enable dnscrypt-proxy to start at boot time, too.

Once the configuration files have been changed, re/start the dnscrypt-proxy daemon (e.g. sudo /etc/init.d/dnscrypt-proxy start) and check the log files in /var/log/syslog or /var/log/daemon.log. A successful dnscrypt-proxy startup log looks something like the following:

dnscrypt-proxy[1613]: Starting dnscrypt-proxy 1.4.0
dnscrypt-proxy[1613]: Initializing libsodium for optimal performance
dnscrypt-proxy[1613]: Generating a new key pair
dnscrypt-proxy[1613]: Done
dnscrypt-proxy[1613]: Server certificate #808464433 received
dnscrypt-proxy[1613]: This certificate looks valid
dnscrypt-proxy[1613]: Chosen certificate #808464433 is valid from [2014-02-10] to [2015-02-10]
dnscrypt-proxy[1613]: Server key fingerprint is A448:B056:C9E0:D320:F0C3:345C:AA58:260C:D67D:1859:BDBD:9E7A:014C:7686:09C3:9E26
Aug 9 19:46:02 raspberrypi dnscrypt-proxy[1613]: Proxying from to

Bind configuration

In the above example, dnscrypt-proxy listens on port 5553 on the localhost address, so we must configure our forwarding proxy (running on the same machine) to forward DNS queries to that interface.

For bind, edit the forwarders in the options section of the named.conf.options configuration file (typically stored at
/etc/bind/named.conf.options) as follows:

forwarders {
    127.0.0.1 port 5553;
};

You will also need to change the value of the dnssec-validation parameter to yes (it defaults to auto).

dnssec-validation yes;

Once the configuration file has been changed, restart bind (e.g. sudo /etc/init.d/bind9 restart). To check things are working, first do a DNS lookup with nslookup to see that DNS is working on your local system. (The server name reported should be the interface of the bind server.) To check that lookups are using dnscrypt-proxy, use a network packet analyser (like tcpdump) to examine network traffic between dnscrypt-proxy on your local machine and the remote DNS server. For example, if the remote DNS server uses port 443 and the Ethernet port eth0 is connected to the Internet, we can monitor that traffic with the command

$ sudo tcpdump -i eth0 -vvv 'port 443'

Then run a DNS lookup on a different domain to the previous one (otherwise you’ll just get a reply from bind’s local cache).

$ dig

and you should get gobbledegook such as the following

tcpdump: listening on eth0, link-type EN10MB (Ethernet), capture size 65535 bytes
19:57:25.158916 IP (tos 0x0, ttl 64, id 53899, offset 0, flags [none], proto UDP (17), length 540) > [udp sum ok] 14160 updateDA+% [b2&3=0x5971] [31348a] [30566q] [59967n] [767au] Type50645 (Class 17488) (QU)? [|domain]
19:57:25.166805 IP (tos 0x0, ttl 64, id 53900, offset 0, flags [none], proto UDP (17), length 540) > [udp sum ok] 14160 updateDA+% [b2&3=0x5971] [31348a] [30566q] [59967n] [767au] Type50645 (Class 17488) (QU)? [|domain]
19:57:25.466642 IP (tos 0x0, ttl 49, id 5993, offset 0, flags [none], proto UDP (17), length 332) > [udp sum ok] 29238 updateM [b2&3=0x666e] [27192a] [30295q] [64895n] [26494au] Type30578 (Class 3221) (QU)? M-@^@^T^@M-<M-C^@^T^@M-@M-?hM-YMM-u^C]pM-^CkM-CM-R)^E_^G,@M--}M-@,}qrM-tM-^NM-^D^P1cM-oLM-ecM-u^]^S^H^CM-^_M-^XM-iM-qM-^U^DM-^Wd^?^EjPM-^NlwM-(^AM-uHb^D}M-u$M-2M-^AM-TM-+^R;M--M-yM-Mip^IM-6M-Z`T$sOu^J_M-^TlM-*M-IGcM-^QK^@=FM-1M-,^PjM-]M-^GN3UM-<M-8M-[^B8zM-,M-8yoM-wM-L8M-^LB!M-pN^^^NM-KJM-H'M-KeM-^DM-^YM-^VM-^RM-yM-97^?M-lo@^PM-[WTM-^^_M-AS>M-\M-QM-q.M-^I.M-=^E$yM-[M-;M-^OM-iheM-^ZM-\7M-B%DgM-\^\M-5Z.T^C_N^KPM-V^_M-*M-= [udp sum ok] 29238 updateM [b2&3=0x666e] [27192a] [30295q] [49961n] [60541au][|domain]

You can satisfy yourself that the actual lookup does not appear in plaintext.

Check the logs in /var/log/syslog (labelled [named]) to check nothing untoward is happening. If you get messages such as NS: no valid signature found or NS: got insecure response; parent indicates it should be secure then check that dnssec-validation is set to yes rather than auto.
You should also not get messages from dnscrypt-proxy such as [dnscrypt-proxy] Unable to retrieve server certificates. If these occur, check the dnscrypt-proxy provider name and key.

As a final step for assurance, repeat the experiment running the packet sniffer on the default DNS port (53) on the Internet-connected Ethernet interface:

$ sudo tcpdump -i eth0 -vvv 'port 53'

Then run a DNS lookup on a different domain again.

$ dig

You should find *no* traffic on port 53. (If you do, again check that dnssec-validation is set to yes rather than auto.)

Finally, you can configure your firewall rules to close UDP/TCP port 53 traffic to/from the Internet.

Open Issues


DNSCrypt as currently implemented only uses a single remote resolver. This means you lose DNS if the server becomes unavailable. The man page says that it can also accept a CSV file that contains multiple resolvers, but this feature appears not to have been implemented. As a workaround, I suppose you could have two instances of dnscrypt-proxy listening on different addresses and configure bind to use them both as forwarders.


A DNS server may claim not to keep logs of DNS requests and queries, but you have to take that on trust. For greater assurance of anonymity, it may be possible to tunnel DNS requests over Tor or I2P using OnionCat. Some OpenNIC servers claim to support Tor/OnionCat.

USB Serial Devices with Mac OS X

In bygone days, computers used to come with RS-232 serial interfaces as standard. Those days are now long gone and instead we have USB ports. However, occasionally we still need to hook up an RS-232 serial device. In that case, the solution is a USB to serial adapter.

Interfacing USB to serial devices to Mac OS X is straightforward but a little involved. The following procedure is based on instructions published online.

Confirm the device

Many USB to serial converter devices seem to be made by Prolific Technology Inc., a Taiwanese firm. We first need to confirm the device and then install the driver.

From the Apple icon in the top left corner of the screen, select “About this Mac”, then click “More Info…”. From the next window, click “System Report…”. In the left-hand pane, click on USB under Hardware and browse the USB devices. You should see a device USB-Serial Controller. Click on it to confirm the manufacturer and vendor ID. In the screenshot below we confirm a Prolific Technology 2303.

Browsing USB devices

Obtain and Install the Driver

Cruise over to the Prolific USA website for driver installation guides and the drivers themselves. Follow the instructions to install the driver. Note that a reboot is required.

Verify driver installation

Fire up a terminal and enter

kextstat | grep prolific

to verify the driver has been installed. Also enter

ioreg -c IOSerialBSDClient | grep usb

to show the USB serial devices. The output should be similar to that shown below.

Verifying USB serial converter driver installation


The device is accessible through /dev/cu.usbserial. You can use the screen virtual terminal program to test it: screen /dev/cu.usbserial.

Happy hacking!

A bare bones mobile compass app in HTML/JS using PhoneGap

As covered in a previous blog post, PhoneGap, based on Apache Cordova, is a compatibility layer and set of Javascript APIs that enable HTML/Javascript web pages running on mobile devices to access features of the underlying platform. This allows reasonably portable mobile apps to be created without having to resort to native code.

This blog post walks through writing a dirt simple compass app in HTML/JS with PhoneGap.

Debugging Tools

Javascript is designed for web browsers. Web browsers are designed to be error tolerant; if there’s an error in Javascript or the HTML, they will try to keep going and display the page without informing the user (who, after all, can do little).

This presents a problem for developers; if you’re testing your app on a mobile device or emulator and there’s even a syntax error in your Javascript code (let alone a bug), there’s no indication of what the problem is except that the Javascript won’t run. However, because the applications we’re developing with PhoneGap/Cordova are HTML/Javascript, a lot of debugging can be done with a desktop web browser with debugging tools (e.g. Firebug for Firefox).

Without a browser, you can use a static analysis tool like JSLint from the command line to check whether the Javascript will compile cleanly. (JSLint has some suckage in that it reports line numbers wrong — maybe it doesn’t count comment lines. C’est la vie.)

The Compass App

The quickest way to get started is to create a skeleton project and modify that.

$ phonegap create Compass

will create a directory called Compass that contains various project files and a very simple skeleton application.

The documentation says that we need a plugin to use the compass. (What happens if we don’t install the plugin? The app won’t work, with no indication why.) The documentation says to install the plugin using the cordova command, but since we’re using PhoneGap we use the phonegap command instead.

$ cd Compass
$ phonegap plugin add org.apache.cordova.device-orientation
[phonegap] adding the plugin: org.apache.cordova.device-orientation
[phonegap] successfully added the plugin

The HTML code for the app is in www/index.html. Change the contents of the <body> section as follows:

    <div class="app">
        <h1>Compass</h1>
        <div id="deviceready" class="blink">
            <p class="event listening">Connecting to Device</p>
        </div>
        <div id="heading" style="display:none;">
            <p class="event listening blink">WAITING FOR COMPASS</p>
            <p class="event received">***</p>
        </div>
    </div>
    <script type="text/javascript" src="phonegap.js"></script>
    <script type="text/javascript" src="js/index.js"></script>
    <script type="text/javascript">
        app.initialize();
    </script>

As well as changing the heading in the <h1> tag, we’ve modified the deviceready div and added a new heading div to contain the heading display.

Next, edit the file www/config.xml and change the name of the app in the <name> tag from ‘Hello World’ to ‘Compass’. This will be the name of the app shown underneath its icon.
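After the edit, the <name> element in www/config.xml should look like this:

```xml
<name>Compass</name>
```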

The logic is contained in the file www/js/index.js. Replace the file with the following code:

var app = app || {};

app.watchID = null;

app.initialize = function () {
    document.addEventListener('deviceready', app.onDeviceReady, false);
};

app.onDeviceReady = function () {
    app.receivedEvent('deviceready');
    app.watchID = navigator.compass.watchHeading(app.compassUpdate,
        app.compassError, { frequency : 3000 });
};

app.compassUpdate = function (hdg) {
    var mh = hdg.magneticHeading;
    app.showHeading(true, 'Heading: ' + mh);
};

app.compassError = function (err) {
    var errcode = err.code;
    app.showHeading(false, 'Compass error: ' + errcode);
};

app.showHeading = function (f_ok, s) {
    var parentElem = document.getElementById('heading');
    var nodataElem = parentElem.querySelector('.listening');
    var dataElem   = parentElem.querySelector('.received');
    if (f_ok) {
        nodataElem.setAttribute('style', 'display:none;');
        dataElem.setAttribute('style', 'display:block;');
        dataElem.innerHTML = s;
    } else {
        nodataElem.setAttribute('style', 'display:block;');
        dataElem.setAttribute('style', 'display:none;');
        nodataElem.innerHTML = s;
    }
};

app.receivedEvent = function (id) {
    var parentElement = document.getElementById('deviceready');
    parentElement.setAttribute('style', 'display:none;');
    var headingElement = document.getElementById('heading');
    headingElement.setAttribute('style', 'display:block;');
};
Once the application has started, the PhoneGap (Cordova) runtime sends a deviceready event, which invokes the app.onDeviceReady callback function. This hides the deviceready div and displays the heading div, then calls navigator.compass.watchHeading to sample the compass heading at 3000 ms intervals. The app.compassUpdate function is then called periodically with the value of the magnetic heading, or app.compassError is called with an error on failure. The function app.showHeading displays the heading value or the error code.
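Because navigator.compass only exists on a device (or emulator) with the plugin installed, the callback flow can be exercised on the desktop with a mock (a hypothetical stand-in, not part of PhoneGap):

```javascript
// Mock of the watchHeading API so the callback flow can be tested in Node.
// In PhoneGap, the success callback would fire every `frequency` milliseconds.
var navigatorMock = {
    compass: {
        watchHeading: function (onSuccess, onError, options) {
            onSuccess({ magneticHeading: 123.4 }); // simulate one compass reading
            return 'watch-1';                      // a watch ID, as the real API returns
        }
    }
};

var lastHeading = null;
var watchID = navigatorMock.compass.watchHeading(
    function (hdg) { lastHeading = hdg.magneticHeading; },
    function (err) { lastHeading = 'error ' + err.code; },
    { frequency: 3000 });

console.log(watchID, lastHeading); // watch-1 123.4
```

Swapping the mock for the real navigator object is all that changes on the device, which is what makes this style of app easy to test in a desktop browser or Node.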

To compile the program for Android and install it on a device or an emulator, execute the command

$ phonegap run android

from the Compass directory, and voila:
Compass app
(You can get rid of the icon by editing the CSS file but I couldn’t be arsed.)


So we have a mobile app that reads the compass and spits out the heading to the screen, in around 100 lines of HTML/Javascript and without writing a single line of Java or Objective-C. Of course there are issues: Javascript sucks, HTML/JS apps are slower than native apps, and your source code is available for all the world to see (it’s just HTML and Javascript, after all), but if you can live with those you can create interesting mobile apps with minimal effort. It could also be useful for proof-of-concept/prototyping.

I’ll be looking to escape Javascript and HTML suckage altogether by using something like Elm. But that’s for a later blog.

Cross-Platform Mobile Apps in HTML5: PhoneGap


Introduces PhoneGap, an abstraction layer based on Apache Cordova that exposes mobile platform sensors and capabilities to HTML5 apps through a Javascript API. Goes through installation and running a skeleton app on Android and iOS emulators.


Mobile applications are currently a huge growth area for software. Unfortunately, mobile devices have a variety of operating systems, programming languages and APIs, each with their own particular forms of brain damage.

Now HTML5 is establishing itself on the scene, sweeping away great piles of old cruft and bringing new capabilities. It’s supported by all major mobile devices, making it feasible to create mobile applications using HTML5, Javascript and CSS. Of course, HTML5/JS apps are going to be slower than native apps and you won’t have a native “look and feel” (at least, not out of the box) but not every app needs great performance and looking at iOS7, maybe lack of native look and feel is not so bad. Since a pure HTML5/CSS/JS application is basically just a web page you don’t have to worry about going through some Crapp store for approval.

One thing that’s missing, though, is access to the mobile “platform” itself from Javascript; the accelerometers and GPS sensors, media players and the like. That’s where PhoneGap (and the open source package it’s based on, Apache Cordova) comes in — it provides a device-independent Javascript API which exposes the platform. (Unfortunately, that means you can no longer deploy your app purely from the web – the HTML5/JS app needs to be bundled with some sort of runtime for deployment.)

Installing PhoneGap

PhoneGap depends on NodeJS, so you need to install that first; download it from the NodeJS website. Once you have NodeJS installed, PhoneGap is installed using NodeJS’s package manager npm, which automatically pulls in all its dependencies — a very straightforward process (providing you don’t have complications like an authenticating proxy to worry about, of course).

$ sudo npm install -g phonegap
npm http GET
npm http 304
npm http GET
npm http GET

The application is installed in /usr/local/ on Mac OS X. PhoneGap is then basically driven from the command line program /usr/local/bin/phonegap.

You will also need the SDK for the mobile platform you’re targeting installed: XCode for iOS, the Android SDK for Android etc.

Creating a Test Application

A new application is created using the phonegap create command: e.g. to create an application called my-app:

$ phonegap create my-app

This creates a directory my-app and populates it with various directories and files for a skeleton application that just responds to the DeviceReady event. Let’s try it on a couple of target platforms: Google Android and Apple iOS.


To run the skeleton app on the Android emulator, execute

$ phonegap run android

in the my-app directory. (The Android SDK directories sdk/tools and sdk/platform-tools need to be set in the PATH environment variable.)

$ phonegap run android
[phonegap] detecting Android SDK environment...
[phonegap] using the local environment
[phonegap] compiling Android...
[phonegap] successfully compiled Android app
[phonegap] trying to install app onto device
[phonegap] no device was found
[phonegap] trying to install app onto emulator
   [error] An error occurred while emulating/deploying the android project. 
ERROR : No emulator images (avds) found, if you would like to create an
 avd follow the instructions provided here:

 Or run 'android create avd --name  --target '
 in on the command line.

Note the helpful diagnostics! Bum, I forgot to create an Android Virtual Device (AVD) (or plug in a real Android device). Let’s create an AVD using the AVD tool or command line, fire it up and try again.
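
For reference, creating and booting an AVD from the command line looks something like this — the AVD name and target id below are illustrative, so list your installed targets first:

```shell
# List the platform targets installed in your Android SDK.
android list targets

# Create an AVD against one of them (name and target are examples only).
android create avd --name test-avd --target android-19

# Boot the emulator in the background, then re-run 'phonegap run android'.
emulator -avd test-avd &
```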

$ phonegap run android
[phonegap] detecting Android SDK environment...
[phonegap] using the local environment
[phonegap] compiling Android...
[phonegap] successfully compiled Android app
[phonegap] trying to install app onto device
[phonegap] no device was found
[phonegap] trying to install app onto emulator
[phonegap] successfully installed onto emulator

Basic PhoneGap app on Android


Okay, let’s try iOS. I’ve got XCode 5 installed and fired up an iPhone emulator, so let’s see how far we’ll get.

$ phonegap run ios
[phonegap] detecting iOS SDK environment...
[phonegap] using the local environment
[phonegap] adding the iOS platform...
[phonegap] missing library cordova/ios/3.3.0
[phonegap] downloading;a=snapshot;h=3.3.0;sf=tgz...
[phonegap] compiling iOS...
[phonegap] successfully compiled iOS app
[phonegap] trying to install app onto device
[phonegap] no device was found
[phonegap] trying to install app onto emulator
 [warning] missing ios-sim
 [warning] install ios-sim from
   [error] An error occurred while emulating/deploying the ios project. Error: ios-sim was not found. Please download, build and install version 1.7 or greater from into your path. Or 'npm install -g ios-sim' using node.js:

Again, excellent diagnostics! We’re missing the ios-sim package, so we’ll just install that with npm (run as root) and try again.
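
That’s just the command quoted in the error message:

```shell
# Install the ios-sim helper globally via npm (run as root / with sudo).
sudo npm install -g ios-sim
```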

$ phonegap run ios
[phonegap] detecting iOS SDK environment...
[phonegap] using the local environment
[phonegap] compiling iOS...
[phonegap] successfully compiled iOS app
[phonegap] trying to install app onto device
[phonegap] no device was found
[phonegap] trying to install app onto emulator
[phonegap] successfully installed onto emulator

Skeleton PhoneGap app on iOS simulator


So far, things look encouraging, because:

  1. Apache Cordova seems to be benefiting from the Firefox OS effort, so it should be receiving a lot of active development.
  2. NodeJS is also an upcoming technology receiving a lot of care and attention. The npm package manager does what it says on the tin and does it well.
  3. So far the diagnostics are nothing short of excellent.

The next step is to delve into the API and try to write a mobile HTML5 app.

Creating a private cloud with ownCloud. Part 2: The Clients

Now you have an ownCloud server. What can you do with it?

Synchronise calendars and reminders between devices.
Synchronises calendars and reminders between calendar applications that support the CalDAV protocol, such as those built into iOS and Android devices, and the Mac OS X calendar application.
Synchronise contacts between devices.
As with calendars, this lets you synchronise contact information between applications that support the CardDAV protocol.
Automagically upload photos taken on mobile devices.
You need to install the ownCloud client application to do this.
Share files between devices.
Uses the ownCloud client app.

For setting up, basically follow the documentation but there are a few gotchas I’ve found with Mac OS X (Mavericks). I assume the use of ownCloud server version 6.

1. Calendar

The ownCloud server’s CalDAV service for user name username is accessed from the URL /owncloud/remote.php/caldav/principals/username/ (assuming ownCloud is accessed from https://server/owncloud).
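
You can sanity-check the endpoint with curl before configuring any clients. Here server, username and password are placeholders, and -k skips certificate verification for a self-signed certificate:

```shell
# A 404 response suggests the URL is wrong; a 207 (Multi-Status) or a 401
# (bad credentials) means the ownCloud CalDAV service is answering.
curl -k -u username:password -X PROPFIND -H "Depth: 0" \
    https://server/owncloud/remote.php/caldav/principals/username/
```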

1.1 Apple iOS

Follow the instructions in the user guide.

1.2 Apple Mac OS X

The ownCloud documentation tells you how to configure Mac OS X to share contacts, reminders and calendar information. However, there’s a twist for Mac OS X Mavericks and also if you have a user name that contains an @ symbol.

  1. Select ‘Internet Accounts’ from ‘System Preferences’.
  2. Scroll to the bottom of the list of providers on the right of the screen and click ‘Add another account’. Select ‘Add a CalDav account’ and click ‘Create’.
    Create a new CalDav account
  3. In the ‘Add a CalDAV Account’ dialogue box, select ‘Advanced’ as the account type, then enter the other details. For the server address, do not include the protocol name; just enter the server’s FQDN or IP address, whichever is appropriate. Also, check ‘Use SSL’ (the ownCloud server does use HTTPS, right?) and enter 443 as the port number (the HTTPS port). I found you also need to check ‘Use Kerberos v5 for authentication’ if you have an @ symbol in the user name. Finally, click ‘Create’.
    Add a CalDAV account
  4. That’s it! Open iCal and check you can see your ownCloud calendar in the Calendars list.

1.3 Android

The application CalDAV-Sync enables synchronisation of events and tasks with the default Android calendar application. Though only a beta release at the time of writing, it seems to work, including ‘two-way sync’, which allows upload of calendar information from the mobile to the server as well as vice versa.

2. Contacts

2.1 Apple iOS

Follow the instructions in the user guide.

2.2 Apple Mac OS X

The ownCloud documentation tells lies for OS X Mavericks: you have to use the server address https://server/owncloud/remote.php/carddav/addressbooks/username. (See this forum post for details.)

  1. Proceed as for setting up a CalDav account, except select ‘Add a CardDav account’.
    Add a CardDav account
  2. At the ‘Add a CardDav account’ dialogue, enter the user name and password as requested, and https://server/owncloud/remote.php/carddav/addressbooks/username as the Server Address, then click ‘Create’.
    Add a CardDav account

2.3 Android

Similarly to CalDAV-Sync, the CardDAV-Sync application allows synchronisation between the ownCloud server and the default Android contacts app. There’s a free version as well as a paid-for version with more features.


Once you have the ownCloud server up and running, clients installed and CalDAV and CardDAV configured, you can leave iCloud and Dropbox behind.

Creating a private cloud with ownCloud. Part 1: The Server


As mobile devices proliferate, so does our need to share data between them: calendars, contacts, music, photographs, documents. Many companies are competing to offer “cloud services” to allow us to access our data anytime, from anywhere. However, there is a big drawback: once the data is on their servers you are no longer in control of it. Concerns include:

  1. Privacy: the provider may “share” your data with third parties or attempt to “mine” it to find out more about you from it.
  2. Jurisdiction: the laws of where your data are stored, not where you live, are applicable. Some jurisdictions offer less protection of your privacy and intellectual property than others.
  3. Intellectual property issues: the cloud provider’s user agreement might give them some rights to use what you upload. You did check the fine print, didn’t you? Even if you did, they can still…
  4. Bait and switch: Companies can change their terms of service at any time. When they do so, you might already be so invested in their services that migration will be a major hassle.
  5. Security: A lot of data concentrated in one place makes a juicy target for criminals.
  6. Continuity: Companies can discontinue services with little or no notice, leaving you in the lurch.

So what can you do about it? Many of these issues can be avoided altogether (or at least mitigated) by hosting your own cloud service. ownCloud is an open source “private cloud” package that can provide file synchronisation, calendar and contacts management. The server supports multiple operating systems and clients are available for common mobile platforms and desktops. Service interfaces are standard rather than proprietary (WebDAV, CalDAV and CardDAV for file management, calendar and contacts respectively), increasing interoperability.

To use ownCloud, you need to:

  1. Install the ownCloud server package on a server which is always connected to the internet.
  2. Install ownCloud clients on your devices that you wish to share files between.
  3. Configure your calendar and contacts applications to synchronise with your ownCloud server using the CalDAV/CardDAV protocols.

This blog post covers the first part.

One great thing about the ownCloud server is that, being written in PHP, it will run on nearly all common web servers and platforms more or less out of the box. That said, there are a few shortcomings. There is no client-side encryption, and files are stored unencrypted on the server by default. WebDAV/CalDAV/CardDAV are layered on top of HTTP, so an HTTPS connection (i.e. use of SSL/TLS transport layer security) is required to secure communications between client and server. SSL/TLS needs a bit of knowledge to configure and deploy securely; it retains several old protocols and ciphers which are insecure and deprecated but still used to support antediluvian browsers. Since we control all the clients, we can avoid the insecure features, but we still have to know how to configure SSL/TLS properly if we want the best security.

Virtual Private Server

The server requires a reliable machine that is “always on” and always connected to the Internet, and has a public IP address. This kind of infrastructure could be hard to set up at home, so an alternative is to use a Virtual Private Server (VPS); basically a virtual machine running on someone else’s hardware. VPS offerings range from the cheap and cheerful, aimed at personal web sites and blogs, to enterprise-grade solutions with prices to match. You can choose services that provide given amounts of CPU, memory and disk, different levels of redundancy and backups, and levels of support. There is often a choice of operating system, typically between Microsoft Windows and variants of Linux.

Installing ownCloud on Linux (Debian)

1. Prepare the server

Once you sign up for a VPS, you will often get a server within a few minutes of submitting your credit card details. The first thing to do is to make the server reasonably secure. Changing the root password is the first priority, especially if it was sent to you by email. See My First 5 minutes on a server for further ideas.
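
A minimal first-session sketch, assuming a Debian-style VPS (the user name is illustrative; see the linked article for a fuller treatment):

```shell
# Change the root password immediately.
passwd

# Create an unprivileged user for day-to-day administration.
adduser deploy

# Apply any pending security updates.
apt-get update && apt-get upgrade

# Then set 'PermitRootLogin no' in /etc/ssh/sshd_config and restart sshd:
service ssh restart
```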

2. Install ownCloud

You then need to get an ownCloud server installation. ownCloud is written in PHP so doesn’t require compilation, but it has a number of dependencies. The easiest option is to use a Linux package installer, since that way dependencies and updates can be installed automatically. Packages are hosted by openSUSE here. Select your operating system and follow the instructions.

3. Create an encrypted data directory (Optional)

ownCloud lives in /var/www/owncloud by default, but allows you to select another directory to store data. To (marginally) increase security, I used encfs to create an encrypted directory. This will be inaccessible if the machine is rebooted, requiring manual remounting with a password. An advantage of encfs is that it doesn’t require you to create a fixed-size file or partition. Here, we store encrypted files in a directory /srv/encrypted-owncloud and mount the decrypted directory on /srv/decrypted-owncloud. The procedure below is cribbed from here with some modifications and corrections.

# apt-get install encfs
# mkdir -p /srv/encrypted-owncloud /srv/decrypted-owncloud
# chgrp www-data /srv/decrypted-owncloud
# chmod -R g+rw /srv/decrypted-owncloud
# gpasswd -a www-data fuse
# chgrp fuse /dev/fuse
# chmod g+rw /dev/fuse
# encfs --public /srv/encrypted-owncloud /srv/decrypted-owncloud

Select the ‘paranoia’ configuration (p) when prompted. Make sure you use a strong password, and don’t lose it! The /srv/decrypted-owncloud directory can be unmounted with fusermount -u (or umount as root), and remounted by running the encfs command again.
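
So, after a reboot:

```shell
# Remount the encrypted directory (you will be prompted for the password);
# --public lets the www-data user traverse the mount.
encfs --public /srv/encrypted-owncloud /srv/decrypted-owncloud

# And to unmount it again:
fusermount -u /srv/decrypted-owncloud
```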

4. Update PHP

Operating systems such as Debian and CentOS are designed for server use and prioritise stability over being “bleeding edge”. The downside is that some packages are a little old. In particular, ownCloud 6 recommends PHP 5.3.8 or later, which is later than the version that ships with Debian 6. Fortunately, the dotdeb group maintains a repository of more up-to-date packages, including PHP. Follow the instructions there.
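
At the time of writing, the setup for Debian 6 (squeeze) was roughly as follows — treat the exact repository lines and key URL as assumptions and check dotdeb’s own current instructions:

```shell
# Add the dotdeb repository to the apt sources (lines may have changed).
cat >> /etc/apt/sources.list <<'EOF'
deb http://packages.dotdeb.org squeeze all
deb-src http://packages.dotdeb.org squeeze all
EOF

# Import dotdeb's signing key, then upgrade PHP.
wget -O - http://www.dotdeb.org/dotdeb.gpg | apt-key add -
apt-get update && apt-get install php5
```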

5. Configure HTTPS

HTTPS is critical to the security of the ownCloud installation. Do NOT deploy ownCloud without it! Basically, you need to create an X.509 certificate and a private key, and configure Apache with their locations. A few points of note:

  • The Common Name (CN) of the X.509 certificate must match the name of the site (i.e. the virtual host name). If you have multiple web host names you will need multiple certificates.
  • Since this is a private-use server we can use a self-signed certificate. This will cause web browsers and other clients to issue a warning when we first attempt to connect, but this can be ignored. (For https services you want to offer to the general public, it’s advisable to obtain a certificate signed by a Certification Authority.)
  • Since we control the web browsers that will connect to the server, we don’t need to support antique browsers and so can disable the use of older, insecure protocols and ciphers.

Unfortunately some guides are out of date. mod_ssl is now provided in the apache2.2-common package. Documentation (including on certificate setup) is in /usr/share/doc/apache2.2-common/README.Debian.gz. This guide provides details for Debian 7. The following borrows from it.

(1) Enable apache2 ssl

SSL is already provided in the default apache2 distribution.

$ sudo a2ensite default-ssl
$ sudo a2enmod ssl
$ sudo service apache2 restart

(2) Generate an SSL key and self-signed certificate

As mentioned, the SSL certificate’s Common Name (CN) must match the domain name being served.

An X.509 certificate was created automatically by the ssl-cert package, using the fully qualified domain name (hostname) at the time as the Common Name. The certificate is saved in /etc/ssl/certs/ssl-cert-snakeoil.pem and the key in /etc/ssl/private/ssl-cert-snakeoil.key (readable only by root). If the hostname has changed, these can be regenerated by

# make-ssl-cert generate-default-snakeoil --force-overwrite 

Alternatively, use openssl to generate a self-signed certificate, giving finer control, e.g.:

$ sudo openssl req -x509 -nodes -days 365 \ 
    -newkey rsa:2048 -keyout /etc/ssl/private/hostname.key \
    -out /etc/ssl/hostname.crt

creates a certificate and a 2048-bit RSA key valid for 365 days. You will be prompted for various inputs, including the CN. Remember to set the access restrictions on the private key. (In Debian, the group should be ssl-cert, the owner root and the permissions 640.)
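
As a sketch of those permissions — using a throwaway key in a scratch directory so it can be run unprivileged; on the real server the files live in /etc/ssl/private and /etc/ssl/certs, and you would also chown them root:ssl-cert:

```shell
# Generate a throwaway self-signed certificate and key (paths illustrative).
dir=$(mktemp -d)
openssl req -x509 -nodes -days 365 \
    -newkey rsa:2048 -keyout "$dir/hostname.key" \
    -out "$dir/hostname.crt" -subj "/CN=www.example.com" 2>/dev/null

# Restrict the private key to mode 640 (on Debian: owner root, group ssl-cert).
chmod 640 "$dir/hostname.key"
stat -c '%a' "$dir/hostname.key"   # prints 640 (GNU stat)
```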

You can view the details of the certificate with the incantation

openssl x509 -in /etc/ssl/certs/ssl-cert-snakeoil.pem -noout -text

Note: openssl is a kitchen-sink command with many subcommands; see its manual pages for a useful overview.
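
Two particularly useful incantations: trimming the dump to the subject and validity dates, and checking that a private key actually matches a certificate (via their RSA moduli). The sketch below generates a throwaway pair to operate on; substitute your real files:

```shell
# Throwaway key/certificate pair to demonstrate on (CN is illustrative).
dir=$(mktemp -d)
openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
    -keyout "$dir/test.key" -out "$dir/test.crt" \
    -subj "/CN=cloud.example.com" 2>/dev/null

# Show just the subject and validity period rather than the full dump.
openssl x509 -in "$dir/test.crt" -noout -subject -dates

# A key and certificate belong together iff their RSA moduli are identical.
cert_mod=$(openssl x509 -in "$dir/test.crt" -noout -modulus)
key_mod=$(openssl rsa -in "$dir/test.key" -noout -modulus)
[ "$cert_mod" = "$key_mod" ] && echo "key matches certificate"
```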

(3) Configure Apache to use the certificate/key and test

You can use the file /etc/apache2/sites-available/default-ssl as a starting point. Test by enabling the site file (if not already enabled: see whether a symbolic link exists in /etc/apache2/sites-enabled) and restarting the server.

$ sudo a2ensite default-ssl
$ sudo service apache2 restart

then connect using a web browser.

As a quick and dirty hack for better security, put the following directives outside any <VirtualHost> blocks but inside the <IfModule mod_ssl.c> block:

SSLRandomSeed startup file:/dev/urandom 1024
SSLRandomSeed connect file:/dev/urandom 1024

Then inside the <VirtualHost> block for the ownCloud server:

SSLEngine on
SSLProtocol -all +TLSv1 +SSLv3
SSLCipherSuite HIGH:MEDIUM
SSLCertificateFile    /etc/apache2/ssl/mycert.crt
SSLCertificateKeyFile /etc/apache2/ssl/mykey.key

obviously substituting the appropriate paths to your certificate and key files. This disables all except the TLS 1.0 and SSL 3.0 protocols and uses whatever OpenSSL regards as good cipher suites. However, these change over time as attacks and vulnerabilities are discovered.
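
You can ask your local OpenSSL exactly what a cipher string expands to, which is worth doing whenever you change SSLCipherSuite:

```shell
# Expand the cipher string used above; output varies with OpenSSL version.
openssl ciphers -v 'HIGH:MEDIUM'

# A slightly tightened variant, excluding anonymous and MD5-based suites.
openssl ciphers -v 'HIGH:MEDIUM:!aNULL:!MD5'
```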

More advanced stuff:

The version of openssl provided by Debian 6 does not support the TLS 1.2 protocol or the ciphers necessary for forward secrecy. To solve this issue, you will need to compile recent versions of OpenSSL and Apache2, preferably without touching the default OS packages: see here. Regarding SSL configuration for forward secrecy, see here.

6. ownCloud configuration

You need to enable URL rewriting:

$ sudo a2enmod rewrite
$ sudo service apache2 restart

Now create an ownCloud configuration file as per the ownCloud installation guide. You can write one from scratch, or copy the default /etc/apache2/sites-available/default-ssl to something like /etc/apache2/sites-available/owncloud-ssl and edit that. Disable the default SSL configuration and enable the ownCloud configuration file:

$ sudo a2dissite default-ssl
$ sudo a2ensite owncloud-ssl
$ sudo service apache2 reload

Then point your browser at it, and you should be rewarded with a screen that invites you to create an administrative user. Do this now (and use a strong password), because at this point anyone can do it. You’ll also have the option of setting up a data directory other than the default /var/www/data. Point it at the /srv/decrypted-owncloud directory you created earlier (if you did so).

Assuming you’re successful, ownCloud will run through some diagnostics, and then you’ll be in.


What could go wrong? Quite a lot, actually, but most issues are probably due to misconfiguration rather than breakage. Fortunately, ownCloud produces some mostly helpful diagnostics that are only occasionally misleading.

  • If the apache2 server fails to start then the server configuration file is probably broken. A good starting point for troubleshooting is examining web server logs (by default in /var/log/apache2).
  • Check permissions. ownCloud expects the /var/www/owncloud directory (and the data directory) to be owned by user and group www-data. The web server also has to run as the user www-data to access these files. Although this is the default, some installations may differ.
  • If ownCloud issues some diagnostics as a result of its self-test, you can see details in the log accessible from the Admin link.
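
For the permissions point, a check-and-fix along these lines (run as root; adjust the data path if you chose a different one):

```shell
# Inspect current ownership of the ownCloud tree and the data directory.
ls -ld /var/www/owncloud /srv/decrypted-owncloud

# Hand both to the web server user and group if they differ.
chown -R www-data:www-data /var/www/owncloud /srv/decrypted-owncloud
```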


That’s basically it. You should now have a working ownCloud server. In Part 2 I’ll explain how to configure various devices to use it.