Starbeamrainbowlabs


Client certificates and HTTPS: A deep dive into the world of certificates

Recently I've been checking out client side certificates in order to secure access to various secret parts of this website. In order to do this, I found myself learning more than I ever thought I would about certificates and certification authorities, so I thought I'd share it here.

Before I get into how client side certificates work, I should start at the beginning. In order to serve pages over HTTPS, a web server must have both a private key and a certificate. The private key does the encryption, and the certificate verifies the identity of the web server (so you know you're connecting to the real thing and not a fake).

The certificate is signed by a Certification Authority. A certification authority (CA) is an organisation that has a Root Certificate, which is a certificate that's explicitly trusted by your operating system and browser. If a chain of valid certificates can be made from the web server's certificate back to a trusted root certificate, then we know that the web server's certificate can be trusted. Since a picture is worth 1000 words, here's a diagram describing the above:

A diagram of the hierarchy of a CA.

The web server isn't the only one who can have a certificate. The client can also have a certificate, and the web server can choose whether or not it likes the certificate that the client presents based on which certificates it can be linked back to in the signed certificate chain.

This presents some interesting possibilities. We can create our own certification authority and use it to issue client certificates. Then we can tell our web server that it should only accept client certificates that can be linked back to our own certification authority.

For the certificate creation process, I can recommend tinyca2 (direct apt install link), which should be available in the repositories of most linux distributions. It's got a great GUI that makes the process relatively painless - this tutorial is pretty good at explaining it, although you'll need to read it more than once to understand everything. XCA is pretty good too, although a tad more complex. Once you've created your CA and certificates, come back here.
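If you'd rather use the command line than a GUI, the same process can be sketched with plain openssl. Everything below - the file names, subject lines, and export password - is an example of my own, not something tinyca2 or XCA prescribes:

```shell
# Create the CA's private key and self-signed root certificate
openssl genrsa -out ca.key 4096
openssl req -x509 -new -key ca.key -subj "/CN=Example Root CA" \
    -days 3650 -out ca.crt

# Create a client key and signing request, then sign the request with our CA
openssl genrsa -out client.key 2048
openssl req -new -key client.key -subj "/CN=Example Client" -out client.csr
openssl x509 -req -in client.csr -CA ca.crt -CAkey ca.key \
    -CAcreateserial -days 365 -out client.crt

# Check that the client certificate chains back to our root...
openssl verify -CAfile ca.crt client.crt    # → client.crt: OK

# ...and bundle it with its key as PKCS#12, ready to import into a browser
openssl pkcs12 -export -in client.crt -inkey client.key \
    -passout pass:changeit -out client.p12
```

Most browsers won't accept a bare .crt/.key pair for client authentication, which is why the last step bundles them into a single .p12 file.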

The Nginx logo.

Next up is configuring our web server. I'm using Nginx in this tutorial (that's the web server that's currently behind this website!), but there are guides elsewhere for Apache 2, Lighttpd, Hiawatha (For Hiawatha it's the RequiredCA configuration directive in the manual!), and probably others too!

Configuring Nginx to accept client certificates signed by our CA is actually fairly straightforward, despite the current lack of information on the internet. I'm going to assume that you've already got a working Nginx webserver online that supports HTTPS (if you haven't, there are plenty of guides online to get you to that point first). Grab a copy of your root CA's certificate (excluding the private key, of course) and upload it to your server. Then, open up your Nginx configuration file in your favourite text editor (I use sudo nano) and add the following lines to your http block:

ssl_client_certificate  /path/to/your/root/certificate.pem;
ssl_verify_client       on;
ssl_verify_depth        2;

Let's go through these one at a time. The ssl_client_certificate directive tells Nginx where to find our root certificate, which it should use to verify the certificate of connecting clients. ssl_verify_client tells Nginx that it should perform verification of all clients connecting via HTTPS, and that nobody is allowed to connect unless they have a certificate signed by the above root CA. Finally, the ssl_verify_depth parameter instructs Nginx that if a client's certificate isn't directly signed by our root CA then it should follow the certificate chain down to a depth of 2.

Once done, you should only be able to connect to your webserver if you have an appropriate certificate installed in your browser. This is nice and all, but what if we only want to apply this to a single subdomain, or even a single folder? This presents a problem: Nginx only supports requiring client certificates for everything, or for nothing at all.

Thankfully, there's a workaround. First, change ssl_verify_client from on to optional, and then add the following to the server or location block you want to verify certificates for:

if ($ssl_client_verify != SUCCESS) {
    return 403;
}

I've found through trial and error that the whitespace is important. If it doesn't work, double check it and try again! The above snippet simply checks to see if the user has connected with a valid certificate, and will send an HTTP 403 error if they haven't.
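Putting the pieces together, here's a sketch of what a complete server block might look like. The domain, certificate paths, and location are placeholders from my own setup, so adapt them to yours:

```nginx
server {
    listen 443 ssl;
    server_name example.com;

    ssl_certificate         /etc/nginx/ssl/example.com.crt;
    ssl_certificate_key     /etc/nginx/ssl/example.com.key;

    # Ask every client for a certificate, but don't reject them yet...
    ssl_client_certificate  /path/to/your/root/certificate.pem;
    ssl_verify_client       optional;
    ssl_verify_depth        2;

    # ...then require a valid certificate for this folder only
    location /secret/ {
        if ($ssl_client_verify != SUCCESS) {
            return 403;
        }
    }
}
```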

That concludes this tutorial. Did it work for you? Did you find it useful? Did you read this far? Comment below!


Pocketblock: Simple encryption tutorials

The Pocketblock logo.

Recently I found a project that aims to explain cryptography and encryption in a simple fashion through this Ars Technica article. The repository is called Pocketblock and is being created by an insanely clever guy called Justin Troutman. Initially the repository didn't have anything in it (which was confusing to say the least), but now that the first guide of sorts has been released I'd like to take the time to recommend it here.

The first article explains an encryption algorithm called 'Pockenacci', which belongs to the same family as AES. It's a great start to what I hope will be an awesome series! If you're interested in encryption, or interested in getting into encryption, you should certainly go and check it out.

The web needs encrypting, but that's not the whole story

There's been a lot of talk about 'encrypting the web' recently, so I decided to post about it myself. In this post I'll explain the problem, the solution, and why the solution may cause more problems than it solves.

The problem itself has actually been around since the web was designed, but it hasn't really been a huge problem until recently. Let's consider a legitimate origin server that serves HTML files about animals, such as bunnies.html. Since it doesn't have anything to hide (who'd want to steal pictures of bunnies, anyway?), it serves these files over regular HTTP, not HTTPS.

Suppose that our animal website becomes hugely popular overnight, with thousands of users browsing it every hour. Let's also suppose that someone has managed to gain control of the network that connects our origin server to the wider internet. Since this someone is smart, rather than launching a DoS attack or sending a ton of spam from the compromised network, they fiddle about with the responses that our origin server is sending out, turning our unsuspecting users' browsers into the source of a massive DDoS attack, or worse! A prime example of this was the attack on GitHub last year.

A diagram demonstrating a MITM attack.

If a website's communications aren't encrypted, it allows anyone to inspect and tamper with both requests and responses. You can't even guarantee that you're connecting to the server you think you are and not a cleverly designed imitation!

HTTPS solves all of these problems. It encrypts communications between the client and server, preventing messages from being inspected or tampered with. It also verifies the identity of the server, which is why you need a certificate in order to serve things over HTTPS.

The other side

At the beginning of this post, I said that HTTPS isn't the whole story. Obviously it solves the problem at hand, right? Yes it does, but it also brings two other issues to the table: it complicates the setup process, and it breaks existing links. The second of these problems is easy to solve - we can just set up automatic redirects that send you to the HTTPS version of a site.

The first problem is decidedly more difficult to solve. HTTPS is difficult to set up, and if it becomes the default, it could reduce the accessibility of setting up and running your own website. Thankfully it isn't required yet.

At the moment, nobody has a complete solution to this issue. Let's Encrypt is a brilliant new service that makes obtaining SSL certificates easy, provided that you can either fiddle with your web server's configuration yourself, or are willing to let a script do it for you - but it doesn't help you set everything up correctly afterwards.

The other slightly related issue is that users often mistake 'HTTPS' for meaning that a site is secure. While this is true of the communications between their computer and the server, it doesn't stop the operator of that server from setting the root password to 1234, or from storing passwords in plain text.

If you're still here, thank you for reading! Hopefully you now understand some of the issues surrounding web security at the moment. Please post a comment down below if you have anything to say (-:


Importing your friends' public keys automatically with gpg

Today's post is just a quick one, as I've had a rather busy week that I'm now recovering from :)

Several times over the past few weeks I've been finding myself googling around to try and find a way to download someone's GPG / PGP public key and import it into GPG automatically, so that I can verify their GPG signatures. I'm posting the command here so that I don't have to keep looking for it :D

Here's the command to search for someone's public key:


gpg --keyserver pgp.mit.edu --search [email protected]

The above asks the key server at pgp.mit.edu for GPG keys associated with the email address [email protected]. You don't actually have to provide the keyserver - GPG will default to searching keys.gnupg.net.
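If you want to see the import-and-verify side of things without touching a keyserver (or your real keyring), here's a self-contained sketch. 'Alice' and her email address are made up for the example:

```shell
# Work in a throwaway keyring so we don't touch the real one
export GNUPGHOME="$(mktemp -d)"
chmod 700 "$GNUPGHOME"

# Generate a key for 'Alice' - in real life this is your friend's key,
# living on their machine, not yours
gpg --batch --pinentry-mode loopback --passphrase '' \
    --quick-generate-key "Alice <[email protected]>" default default never

# Export Alice's public key - this is what a keyserver would hand you
gpg --armor --export [email protected] > alice.asc

# Import it into a second throwaway keyring, just like --search would
export GNUPGHOME="$(mktemp -d)"
chmod 700 "$GNUPGHOME"
gpg --import alice.asc

# Alice's key is now available for verifying her signatures
gpg --list-keys [email protected]
```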

