
Encrypting and formatting a disk with LUKS + Btrfs

Hey there, a wild tutorial appeared! This is just a quick one for self-reference, but I hope it helps others too.

The problem at hand is formatting a data disk with Btrfs (if you want to format your root / disk, please look elsewhere - that usually has to be done before or during installation, unless you like fiddling around in a live environment)... but also encrypting the disk, which isn't something that Btrfs natively supports.

I'm copying over some data to my new lab PC, and I've decided to up the security on the data disk I store my research data on.

Unfortunately, both GParted and KDE Partition Manager were unable to help me (the former not supporting LUKS, and the latter crashing with a strange error), so I ended up looking through more posts than should be reasonable to find a solution that didn't involve encrypting either / or /boot.

It's actually quite simple. First, find your disk's name via lsblk, and ensure you have created the partition in question. You can format it with anything (e.g. using one of the tools above), since we'll be overwriting it anyway.
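If you need to create the partition from the command line instead, something like this would do it (a sketch - /dev/nvmeXnY is a placeholder for your disk, and this rewrites the partition table, so triple-check the device name first!):

lsblk
sudo parted --script /dev/nvmeXnY mklabel gpt
sudo parted --script /dev/nvmeXnY mkpart primary 0% 100%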

Note: You may need to reboot after creating the partition (or after some of the steps below) if you encounter errors, as Linux sometimes doesn't take kindly to new partitions appearing out of the blue with names that have already been used earlier in the same boot.

Then, format it with LUKS, the most common encryption scheme on Linux:

sudo cryptsetup luksFormat /dev/nvmeXnYpZ

...then, formatting with Btrfs is a 2-step process. First we hafta unlock the LUKS encrypted partition:

sudo cryptsetup luksOpen /dev/nvmeXnYpZ SOME_MAPPER_NAME

...this creates a virtual 'mapper' block device we can hit like any other normal (physical) partition. Change SOME_MAPPER_NAME to anything you like so long as it doesn't match anything else in lsblk/df -h and also doesn't contain spaces. Avoid unicode/special characters too, just to be safe.

Then, format it with Btrfs:

sudo mkfs.btrfs --metadata single --data single --label "SOME_LABEL" /dev/mapper/SOME_MAPPER_NAME

...replacing SOME_MAPPER_NAME (same value you chose earlier) and SOME_LABEL as appropriate. If you have multiple disks, rinse and repeat the above steps for them, and then bung them on the end:

sudo mkfs.btrfs --metadata raid1 --data raid1 --label "SOME_LABEL" /dev/mapper/MAPPER_NAME_A /dev/mapper/MAPPER_NAME_B ... /dev/mapper/MAPPER_NAME_N

Note the change from single to raid1. raid1 stores at least 2 copies on different disks - it's a bit of a misnomer as I've talked about before.

Now that you have a kewl Btrfs-formatted partition, mount it as normal:

sudo mount /dev/mapper/SOME_MAPPER_NAME /absolute/path/to/mount/point

For Btrfs filesystems with multiple disks, it shouldn't matter which source partition you pick here as Btrfs should pick up on the other disks.
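For example, with 2 disks you might unlock both mappers and then mount via either one (a sketch using the placeholder names from above):

sudo cryptsetup luksOpen /dev/nvmeXnYpZ MAPPER_NAME_A
sudo cryptsetup luksOpen /dev/nvmeWnYpZ MAPPER_NAME_B
sudo mount /dev/mapper/MAPPER_NAME_A /absolute/path/to/mount/point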

Automation

Now that we have it formatted, we don't want to hafta keep typing all those commands again. The simple solution to this is to create a shell script and put it somewhere in our $PATH.

To do this, we should ensure we have a robust name for the disk instead of /dev/nvme*, which could point to a different disk in future if your motherboard or kernel decides to present them in a different order for a giggle. That's easily done by looking over the output of blkid and cross-referencing it with lsblk and/or df -h:

sudo lsblk
sudo df -h
sudo blkid # → UUID

The value you're after should be in the UUID="" field. The shell script I came up with is short and sweet:

#!/usr/bin/env bash
disk_id="ID_FROM_BLKID";
mapper_name="SOME_NAME";
mount_path="/absolute/path/to/mount/dir";

sudo cryptsetup luksOpen "/dev/disk/by-uuid/${disk_id}" "${mapper_name}";
sudo mount "/dev/mapper/${mapper_name}" "${mount_path}"

Fill in the values as appropriate:

  • disk_id: The UUID of the disk in question from blkid.
  • mapper_name: A name of your choosing that doesn't clash with anything else in /dev/mapper on your system
  • mount_path: The absolute path to the directory that you want to mount into - usually in /mnt or /media.

Put this script in e.g. $HOME/.local/bin or somewhere else in $PATH that suits you and your setup. Don't forget to run chmod +x path/to/script!
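When you're done with the disk, the reverse is just as easy. A matching unmount script might look something like this (a sketch, using the same values as above):

#!/usr/bin/env bash
mapper_name="SOME_NAME";
mount_path="/absolute/path/to/mount/dir";

sudo umount "${mount_path}";
sudo cryptsetup luksClose "${mapper_name}";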

Conclusion

We've formatted an existing partition with LUKS and Btrfs, and written a quick-and-dirty shell script to semi-automate the process of mounting it here.

If this has been useful or if you have any suggestions, please do leave a comment below!

Sources and further reading

Configuring an endlessh honeypot with rsyslog email notifications

Security is all about defence in depth, so I'm always looking for ways to better secure my home network. For example, I have cluster management traffic running over a Wireguard mesh VPN. Now, I'm turning my attention to the rest of my network.

To this end, while I have a guest network with wireless isolation enabled, I do not currently have a way to detect unauthorised devices connecting to my home WiFi network, or fake WiFi networks with the same name, etc. Detecting this is my next focus. While I've seen nzyme recently and it looks fantastic, it also looks more complicated to set up.

While I look into the documentation for nzyme, and inspired by this reddit post, I decided to set up a honeypot on my home network.

The goal of a honeypot is to detect threats moving around in a network. In my case, I want to detect if someone has connected to my network who shouldn't have done. Honeypots achieve this by pretending to be a popular service, but in reality they are there to collect information about potential threats.

To set one up, I found endlessh, which pretends to be an SSH server - but instead slowly sends an endless banner to the client, keeping the connection open as long as possible. It can also log connection attempts to syslog, which allows us to detect connections and send an alert.

Implementing this comes in 2 steps. First, we set up endlessh and configure it to log connection attempts. Then, we reconfigure rsyslog to send email alerts.

Setting up endlessh

I'm working on one of the Raspberry Pis running Raspberry Pi OS in my network, but this should work with other machines too.

If you're following along to implement this yourself, make sure you've moved SSH to another port number before you continue, as we'll be configuring endlessh to listen on port 22 - the default port for SSH. This is the port I imagine an automated network scanner would try first if it were looking for SSH servers to attempt to crack.
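If you haven't moved SSH yet, it's a case of changing the Port directive in /etc/ssh/sshd_config (2222 here is just an example - pick any free port):

# /etc/ssh/sshd_config
Port 2222

...then restarting the SSH daemon, making sure you can log in on the new port before closing your existing session:

sudo systemctl restart ssh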

Conveniently, endlessh has a package in the default Debian repositories:

sudo apt install endlessh

...adjust this for your own package manager if you aren't on an apt-based system.

endlessh has a configuration file at /etc/endlessh/config by default. Open it up for editing, and make it look something like this:

# The port on which to listen for new SSH connections.
Port 22

# Set the detail level for the log.
#   0 = Quiet
#   1 = Standard, useful log messages
#   2 = Very noisy debugging information
LogLevel 1

Before we can start the endlessh service, we need to reconfigure it to allow it to listen on port 22, as this is a privileged port number. Doing this requires 2 steps. First, allow the binary to listen on privileged ports:

sudo setcap CAP_NET_BIND_SERVICE=+eip "$(which "endlessh")";

Then, if you are running systemd (most distributions do by default), execute the following command:

sudo systemctl edit endlessh.service

This will allow you to append some additional directives to the service definition for endlessh, without editing the original apt-managed systemd service file. Add the following, and then save and quit:

[Service]
AmbientCapabilities=CAP_NET_BIND_SERVICE
PrivateUsers=false

Finally, we can restart the endlessh service:

sudo systemctl restart endlessh
sudo systemctl enable --now endlessh

That completes the setup of endlessh!

Configuring rsyslog to send email alerts

The second part of this process is to send automatic alerts whenever anyone connects to our endlessh service. Since endlessh forwards logs to syslog by default, reconfiguring rsyslog to send the alerts seems like the logical choice. In my case, I'm going to send email alerts - but other ways of sending alerts do exist - I just haven't looked into them yet.

To do this requires that you have either a working email server (I followed the Ars Technica taking email back series, but whatever you do it's not for the faint of heart! Command line experience is definitely required - if you're looking for a nice first project, try a web server instead), or an email account you can use. Note that I do not recommend using your own personal email account, as you'll have to store the password in plain text!

In my case, I have my own email server, and I have forwarded port 25 down an SSH tunnel so that I can use it to send emails (in the future I want to configure a proper smart host that listens on port 25 and forwards emails by authenticating against my server properly, but that's for another time as I have yet to find a relay-only MTA that also listens on port 25).

In a previous post, I implemented centralised logging - so I'm going to be reconfiguring my main centralised rsyslog instance.

To do this, open up /etc/rsyslog.d/10-endlessh.conf for editing, and paste in something like this:

template (name="mailSubjectEndlessh" type="string" string="[HONEYPOT] endlessh connection on %hostname%")

if ( ($programname == 'endlessh') and (($msg contains "ACCEPT") or ($msg contains "CLOSE")) ) then {
    action(type="ommail" server="localhost" port="20205"
        mailfrom="[email protected]"
        mailto=["[email protected]"]
        subject.template="mailSubjectEndlessh"
        action.execonlyonceeveryinterval="3600"
    )
}

...where:

  • [HONEYPOT] endlessh connection on %hostname% is the subject name, and %hostname% is substituted for the actual hostname the honeypot is running on
  • [email protected] is the address that you want to send the alert FROM
  • [email protected] is the address that you want to send the alert TO
  • 3600 is the minimum interval between emails, in seconds. Log lines are not collected up - only 1 log line is sent at a time, and any others logged in between are ignored (handled as if the above email directive doesn't exist) until the given number of seconds has elapsed - at which point an email is sent for the next log line that comes through, and the cycle repeats. If anyone knows how to change that, please leave a comment below.

Note that the template line is outside the if statement. This is important - I got a syntax error if I put it inside the if statement.

The if statement specifically looks for log messages with a tag of endlessh that contain either the substring ACCEPT or CLOSE. Only if those conditions are true will it send an email.
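If you want to check that the rule matches before anyone actually pokes the honeypot, you can inject a test log line with logger and watch for the email (the message content here is made up - real endlessh ACCEPT lines carry the client's address and port):

logger -t endlessh "ACCEPT host=::ffff:192.0.2.1 port=54321"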

I have yet to learn how to configure rsyslog to authenticate while sending emails. I would suspect though that the easiest way of achieving this is to setup a local SMTP relay-only MTA (Mail Transfer Agent) that rsyslog can connect to and send emails, and then the relay will authenticate against the real server and send the email on rsyslog's behalf. I have yet to find such an MTA however other than Postfix - which, while great, can be hugely complicated to setup. Other alternatives I've tried include:

....but they all implement sendmail and while that's useful they do not listen on port 25 (or any other port for that matter) as far as I can tell.

Anyway, the other file you need to edit is /etc/rsyslog.conf. Open it up for editing, and put this near the top:

module(load="ommail")

...this loads the mail output plugin that sends the emails.

Now that we've reconfigured rsyslog, we need to restart it:

sudo systemctl restart rsyslog

rsyslog is picky about its config file syntax, so make sure to check its status for error messages:

sudo systemctl status rsyslog

You can also use lnav to analyse your logs and find any error messages there too.

Conclusion

We've setup endlessh as a honeypot, and then reconfigured rsyslog to send email alerts. Test the system like so on your local machine:

ssh -vvv -p 22 someuser@yourserver

...and watch your inbox for the email alert that will follow shortly!

While this system isn't particularly useful on its own, it's a small part of a larger strategy for securing my network. It's also been a testing ground for me to configure rsyslog to send email alerts - something I may want to configure my centralised rsyslog logging system to do for other things in the future.

If you've found this post useful or you have some suggestions, please leave a comment below!

Sources and further reading

systemquery, part 2: replay attack

Hey there! As promised, my writeup about AAAI-22 is on its way, but in the meantime I wanted to make a quick post about a replay attack I found in my systemquery encryption protocol, and how I fixed it. I commented briefly about this on the last post in this series, but I thought that it warranted a full blog post.

In this post, I'm going to explain the replay attack I discovered, how replay attacks work, and how I fixed it. It should be noted though that at this time my project systemquery is not being used in production (it's still under development), so there is no real-world impact to this particular bug. However, it can still serve as a useful reminder as to why implementing your own crypto / encryption protocols is a really bad idea.

As I explained in the first blog post in this series, the systemquery protocol is based on JSON messages. These messages are not just sent in the clear though (much though that would simplify things!), as I want to ensure they are encrypted with authenticated encryption. To this end, I have devised a 3 layer protocol:

Objects are stringified to JSON, before being encrypted (with a cryptographically secure random IV that's different for every message) and then finally packaged into what I call a framed transport - in essence a 4 byte unsigned integer which represents the length in bytes of the block of data that immediately follows.

The encryption algorithm itself is provided by tweetnacl's secretbox() function, which provides authenticated encryption. It's also been independently audited and has 16 million weekly downloads, so it should be a good choice here.

While this protocol I've devised looks secure at first glance, all is not as it seems. As I alluded to at the beginning of this post, it's vulnerable to a replay attack. This attack is perhaps best explained with the aid of a diagram:

Let's imagine that Alice has an open connection to Bob, and is sending some messages. To simplify things, we will only consider 1 direction - but remember that in reality such a connection is bidirectional.

Now let's assume that there's an attacker with the ability to listen to our connection and insert bogus messages into our message stream. Since the messages are encrypted, our attacker can't read their contents - but they can copy and store messages and then insert them into the message stream at a later date.

When Bob receives a message, they will decrypt it and then parse the JSON message contained within. Should Bob receive a bogus copy of a message that Alice sent earlier, Bob will still be able to decrypt it as a normal message, and won't be able to tell it apart from a genuine message! Should our attacker figure out what a message's function is, they could do all kinds of unpleasant things.

Not to worry though, as there are multiple solutions to this problem:

  1. Include a timestamp in the message, which is then checked later
  2. Add a sequence counter to keep track of the ordering of messages

In my case, I've decided to go with the latter option, since I'm using TCP and can therefore guarantee that messages are received in the order they were sent. Let's take a look at what happens if we implement such a sequence counter:

When sending a message, Alice adds a sequence counter field that increments by 1 for each message sent. At the other end, Bob increments their sequence counter by 1 every time they receive a message. In this way, Bob can detect if our attacker attempts a replay attack, because the sequence number on the message they copied will be out of order.

To ensure there aren't any leaks here, should the sequence counter overflow (unlikely), we need to also re-exchange the session key that's used to encrypt messages. In doing so, we can avoid a situation where the sequence number has rolled over but the session key is the same, which would give an attacker an opportunity to replay a message.

With that, we can prevent replay attacks. The other thing worth mentioning here is that the sequence numbering needs to be done in both directions - so Alice and Bob will have both a read sequence number and a write sequence number which are incremented independently of one another whenever they receive and send a message respectively.

Conclusion

In this post, we've gone on a little bit of a tangent to explore replay attacks and how to mitigate them. In the next post in this series, I'd like to talk about the peer-to-peer swarming algorithm I've devised - both the parts thereof I've implemented, and those that I have yet to implement.

Sources and further reading

Cluster, Part 12: TLS for Breakfast | Configuring Fabio for HTTPS

Hey there, and happy new year 2022! It's been a little while, but I'm back now with another blog post in my cluster series. In this shorter post, I'm going to show you how I've configured my Fabio load balancer to serve HTTPS.

Before we get started though, I can recommend visiting the series list to check out all the previous parts in this series, as a number of them give useful context for this post.

In the last post, I showed you how to set up certbot / Let's Encrypt in a Docker container. Building on this, we can now reconfigure Fabio (which we set up in part 9) to take in the TLS certificates we are now generating. I'll be assuming that the certificates are stored on the NFS share you've got set up (see part 8) for this post. In the future I'd love to use Hashicorp Vault for storing these certificates, but as of now I've found Hashicorp Vault to be far too complicated to set up, so I'll be using the filesystem instead.

Configuring Fabio to use HTTPS is actually really quite simple. Open /etc/fabio/fabio.properties for editing, and at the beginning insert a line like this:

proxy.cs = cs=some_name_here;type=file;cert=/absolute/path/to/fullchain.pem;key=/absolute/path/to/privkey.pem

cs stands for certificate store, and this tells Fabio about where your certificates are located. some_name_here is a name you'd like to assign to your certificate store - this is used to reference it elsewhere in the configuration file. /absolute/path/to/fullchain.pem and /absolute/path/to/privkey.pem are the absolute paths to the fullchain.pem and privkey.pem files from Let's Encrypt. These can be found in the live directory in the Let's Encrypt configuration directory, in the subdirectory for the domain in question.

Now that Fabio knows about your new certificates, find the line that starts with proxy.addr. In the last tutorial, we configured this to have a value of :80;proto=http. proxy.addr can take a comma-separated list of ports to listen on, so append the following to the existing value:

:443;proto=https;cs=some_name_here;tlsmin=tls12

This tells Fabio to listen on TCP port 443 for HTTPS requests, and also tells it which certificate store to use for encryption. We also set the minimum TLS version supported to TLS 1.2 - but you should set this value to 1 version behind the current latest version (check this page for that). For those who want extra security, you can also add the tlsciphers="CIPHER,LIST" argument too (see the official documentation for more information - cross-referencing it with ssl-config.mozilla.org is a good idea).
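Putting the two listeners together, the finished proxy.addr line should end up looking something like this (using the certificate store name from earlier):

proxy.addr = :80;proto=http,:443;proto=https;cs=some_name_here;tlsmin=tls12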

Now that we have this configured, this should be all you need to enable HTTPS! That was easy, right?

We still have a little more work to do though to make HTTPS the default and to redirect all HTTP requests to HTTPS. We can do this by adding a route to the Consul key-value store under the path fabio/config. You can do this through the web interface by creating a new key under fabio/config, pasting the following in, and saving it:

route add route_name_here example.com:80 https://example.com$path opts "redirect=308"

Alternatively, through the command line:

consul kv put fabio/config/some_name_here 'route add some_name_here example.com:80 https://example.com$path opts "redirect=308"'

No need to restart fabio - it should pick routes up automatically. I have found however that I occasionally need to restart it if it doesn't pick up some changed routes as fast as I'd like.
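If you do need to give it a nudge, and assuming you're running Fabio as a systemd service, a restart is a one-liner:

sudo systemctl restart fabio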

With this, we now have automatic HTTPS setup and configured! Coming up in this series:

  • Using Caddy as an entrypoint for port forwarding on my router (status: implemented; there's an awesome plugin for single sign-on, and it's amazing in other ways too) - this replaces the role HAProxy was going to play that I mentioned in part 11
  • Password protecting Docker, Nomad, and Consul (status: on the todo list)
  • Semi-automatic docker image rebuilding with Laminar CI (status: implemented)

Sources and further reading

Encryption demystified: What to use and when

The other day, I found myself explaining different types of encryption, how they work, and what they are used for to someone in my lab implementing a secure system. During this process, I ended up creating a series of fancy diagrams in draw.io - so I thought I'd write it up into a proper demystification blog post.

To start us off here, let's define encryption. Encryption is the process of transforming a given input block of data (of arbitrary length) using some kind of secret key into a form that is completely unreadable. Any adversary obtaining a block of encrypted data encrypted with a suitably strong key (and algorithm) is not able to read or understand the data at all - except perhaps to infer its original length.

Conversely, decryption is the process of undoing the encryption process with the same (or different, in some cases) key to get back the original data.

For purpose of this blog post, we will assume:

  1. The encryption algorithms in question are perfect with no known weaknesses
  2. Keys used to encrypt and/or decrypt are very strong and can't be cracked

Each of these are fields in their own right that could quite easily take many blog posts to fully explore.

From the perspective of a developer, there are 3 basic types of encryption one needs to be aware of. Others certainly exist, but to avoid making this post too long I'll just be covering the following 3:

  1. Device encryption
  2. Transport layer encryption
  3. End-to-end encryption

If there's any other encryption scheme you'd like me to cover, please leave a comment below and I'll try my best to explain it in a separate post.

Device encryption

First up is device encryption. Most modern operating systems for phones and PCs alike support device encryption:

  1. Windows
  2. Linux
  3. Android
  4. iOS

Not sure on macOS since I don't own one, but I'd be surprised if it didn't. The purpose of device encryption is that when the device is powered off, all data is stored physically on disk in an encrypted format, making it unreadable should the device be physically stolen - thereby protecting all data stored on it.

This is accomplished in a layered fashion. Let's explain it with a diagram:

A vertical layered diagram explaining device encryption. Physical block devices on the bottom, software applications on the top.

Although they may have different names for it, most operating systems have a concept of a "block device". Such a device is capable of storing a given number of bytes of data. Such devices need not be physical disks: they can instead be virtual. For example, zram presents block devices that store data compressed in RAM.

We can make use of this to encrypt hard drives. An encryption layer such as LUKS on Linux presents a virtual block device to the operating system which encrypts all data written to it before saving them back to some physical disk by which it is backed.

On boot, the encryption layer is initialised by the operating system and it asks the user for a password. Upon being given the correct password, the encryption layer is activated, and the operating system can then both request data blocks from the virtual block device (which causes the encryption layer to fetch the encrypted block from disk and then decrypt it before passing it to the requester) and write data blocks back to the virtual block device (whereby the encryption layer will encrypt the new data block before writing it to disk).
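On Linux you can see this layering in action with lsblk: an unlocked LUKS mapper shows up as a crypt device nested underneath the physical partition that backs it. The names and mount point below are purely illustrative:

lsblk -o NAME,TYPE,MOUNTPOINT

NAME              TYPE  MOUNTPOINT
nvme0n1           disk
└─nvme0n1p1       part
  └─research_data crypt /mnt/research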

Even on operating systems such as Windows (e.g. BitLocker) and iOS, which don't expose block devices in the same way that Linux does, the same principles I've explained here apply.

When the device is powered off, the key that was being stored in memory is wiped (it's stored in RAM, and RAM requires power to store data) and the data is secured.

Transport layer encryption

Another place encryption is commonly encountered is when transferring data to and from remote hosts over the Internet. Since the Internet is untrusted, it becomes rather a problem when one wants to transfer personal information such as passwords, bank card numbers, and location information across the Internet, in that such data could be stolen or modified in transit.

To solve this problem, the Transport Layer Security (TLS) protocol was invented. The purpose of TLS is to provide a secure connection between 2 hosts using authenticated encryption that has the following properties:

  1. Eavesdroppers are unable to read data being transmitted
  2. Attackers are unable to successfully modify any data in transit without the destination host knowing about it
  3. The 2 hosts communicating with each other can verify each other's identities 1

Although TLS itself is a protocol that is usually spoken over TCP, because it provides a generic bidirectional pipe through which any binary data can be transmitted and received, it is commonly used to wrap around other protocols to secure them. Examples include:

  1. HTTP: Hypertext Transfer Protocol (used in web browsers)
  2. SMTP: Simple Mail Transfer Protocol (used for sending and receiving emails) 2
  3. IMAP: Internet Message Access Protocol (used for accessing email inboxes)
  4. XMPP: Extensible Messaging and Presence Protocol (a federated messaging protocol used for instant messaging) 3

....and many others. There's a reason it's so prevalent: The most important rule when dealing with encryption and security is to never roll your own. Follow the standards, and use existing crypto libraries for your platform. Don't implement your own, as it's much more difficult than it appears to ensure your system is actually secure.
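As an aside, a quick way to poke at a TLS-wrapped service and inspect the certificate it presents is openssl's built-in test client - for example, against an HTTPS server:

openssl s_client -connect example.com:443 -servername example.com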

Here's a diagram of how it works:

End-to-end encryption

The last form of encryption I'm going to talk about is also perhaps the most misunderstood: end-to-end encryption.

End-to-end encryption is useful when you have 3 parties involved in the equation - usually 2 clients and a server. Suppose Alice and Bob have a messaging app on their phone that sends messages through an intermediary server (perhaps performing store-and-forward functions), but they do not want the server to be able to read their message. The solution here is end-to-end encryption, which prevents the intermediary server from being able to read the message.

Here's a diagram to explain what I mean:

End-to-end encryption is accomplished by using asymmetric cryptography. Asymmetric encryption - unlike symmetric encryption - uses 2 keys instead of 1, and these keys also have to possess special properties, so you can't just generate some cryptographically secure random numbers and call it a day 4.

In asymmetric encryption, you have a public key which can only encrypt data, and a private key which can then decrypt the data. An example of this in practice is GPG, which is extensively used e.g. by apt (the package manager on some Linux systems).

In the diagram above, the sender encrypts the message with the public key that belongs to the receiver. They then send the message to the server, who forwards it on to the receiver. The receiver then decrypts the message with the private key (sometimes called a secret key).

In this way, the server is never able to read the content of the message. If the receiver wanted to reply to the sender, the same would happen in reverse. The receiver would need to ask the sender to securely transmit their public key to them, which they could then use to encrypt a message to send back.

In practice, every client involved in an end-to-end encryption system will generate their own keypair that consists of a public and a private key. They will then advertise their public key to everyone, allowing anyone to encrypt a message that only they can decrypt (an example of this: my GPG key can be found here).
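GPG is a nice way to experiment with this in practice. Something along these lines generates your own keypair, exports the public half for sharing, and encrypts a file to someone whose public key you've already imported (the email addresses are of course placeholders):

gpg --full-generate-key
gpg --export --armor you@example.com > my-public-key.asc
gpg --encrypt --armor --recipient alice@example.com message.txt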

It is important to avoid confusing end-to-end encryption with transport layer encryption. Indeed, end-to-end encryption is absolutely no substitute for transport layer encryption, because an application may for example need to authenticate with the intermediary server before being allowed to transmit end-to-end encrypted messages.

Transport layer encryption:

  1. Allows 2 parties to communicate with each other securely
  2. Does not prevent the receiver from reading received data (even if device encryption is employed)

End-to-end encryption:

  1. Requires 3 parties to be involved in order to be effective
  2. Ensures that 2 parties can communicate securely through an intermediary party
  3. Requires that 2 parties wishing to communicate must first securely exchange their public keys and be confident that the public keys they have received actually belong to the other party they wish to communicate with
  4. Can be significantly complicated to implement

Conclusion

In this post, we've looked at 3 types of encryption, how they work, and when they are useful. To summarise:

  1. Device encryption protects data from physical theft
  2. Transport layer encryption protects data in transit between 2 communicating parties talking to each other directly
  3. End-to-end encryption protects the communications of 2 parties who are talking through 1 or more intermediary parties

Each of these are useful in different situations - and most likely are already solved problems. Do not implement any of these yourself. Use well known, battle tested libraries and programs for your platform that are regularly receiving updates instead.

While I've simplified this a lot in writing this post (we'd be here all week if I didn't!), I hope you've found this helpful (even if you're still a little confused). This is a starting point, not an ending point - if this kind of thing interests you, I can recommend researching it further and playing around with some practical implementations thereof.

Please do comment below (especially if you've spotted a mistake)! It's very motivating to hear that the things I write here are actually helpful to people.


  1. In TLS, this is done using certificates. Each host has a list of certificate authorities (CAs) it trusts, and when a connection is initiated between a client and a server during the handshake certificates signed by these CAs are exchanged securely and checked. In practice, generally only the server sends a certificate which is then checked by the client - for example in HTTPS in web browsers. Server-to-server connections in a federated system (e.g. email) however give an opportunity to put this mutual authentication into action though. 

  2. SMTP is not simple. While it was simple once upon a time, unfortunately it was not designed with the modern web and security in mind (given that it was first invented in 1981, I'm not surprised). Since it was invented, a large number of additions (both standardised and otherwise) have been adopted, significantly complicating it. Setting up a mail server correctly and ensuring your emails are delivered properly is not a simple task. 

  3. See Snikket for a server, and Conversations for an Android client. See also the full client list

  4. Use a crypto library like your programming language's crypto built-ins. If your language doesn't have a built-in module and you've tried checking your package manager, try libsodium, bearssl, or openssl

Securing your port-forwarded reverse proxy

Recently, I answered a question on Reddit about reverse proxies, and said answer was long enough and interesting enough to be tidied up and posted here.

The question itself is concerning port forwarded reverse proxies and internal services:

Hey everyone, I've been scratching my head over this for a while.

If I have internal services which I've mapped a subdomain like dashboard.domain.com through NGINX but haven't enabled the CNAME on my DNS which would map my dashboard.domain.com to my DDNS.

To me this seems like an external person can't access my service because dashboard.domain.com wouldn't resolve to an IP address but I'm just trying to make sure that this is the case.

For my internal access I have a local DNS that maps my dashboard.domain.com to my NGINX.

Is this right?

--u/Jhonquil

So to answer this question, let's first consider an example network architecture:

So we have a router sitting between the Internet and a server running Nginx.

Let's say you've port forwarded to your Nginx instance on 80 & 443, and Nginx serves 2 domains: wiki.bobsrockets.com and dashboard.bobsrockets.com. wiki.bobsrockets.com might resolve both internally and externally for example, while dashboard.bobsrockets.com may only resolve internally.

In this scenario, you might think that dashboard.bobsrockets.com is safe from people accessing it outside, because you can't enter dashboard.bobsrockets.com into a web browser from outside to access it.

Unfortunately, that's not true. Suppose an attacker catches wind that you have an internal service called dashboard.bobsrockets.com running (e.g. through crt.sh, which makes certificate transparency logs searchable). With this information, they could for example modify the Host header of a HTTP request like this with curl:

curl --header "Host: dashboard.bobsrockets.com" http://wiki.bobsrockets.com/

....which would cause Nginx to serve dashboard.bobsrockets.com's content to the external attacker! The same can also be done with HTTPS with a bit more work.

That's no good. To rectify this, we have 2 options. The first is to run 2 separate reverse proxies, with all the internal-only content on the first and the externally-viewable stuff on the second. Most routers that offer the ability to port forward also offer the ability to do transparent port translation too, so you could run your external reverse proxy on ports 81 and 444 for example.

This can get difficult to manage though, so I recommend the following:

  1. Force redirect to HTTPS (a minimal example of this is shown after the next code block)
  2. Then, use HTTP Basic Authentication like so:
server {
    # ....
    satisfy any;
    allow   192.168.0.0/24; # Your internal network IP address block
    allow   10.31.0.0/16; # Multiple blocks are allowed
    deny    all;
    auth_basic              "Example";
    auth_basic_user_file    /etc/nginx/.passwds;

    # ....
}

This allows connections from your local network through no problem, but requires a username / password for access from outside.
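For step 1 above, a minimal sketch of the HTTP-to-HTTPS redirect server block might look something like this (adjust server_name to suit your domains):

server {
    listen 80;
    server_name wiki.bobsrockets.com dashboard.bobsrockets.com;
    return 308 https://$host$request_uri;
}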

For your internal services, note that you can get a TLS certificate for HTTPS for services that run inside by using Let's Encrypt's DNS-01 challenge. No outside access is required for your internal services, as the DNS challenge is completed by automatically setting (and then removing again afterwards) a DNS record, which proves that you have ownership of the domain in question.

Just because a service is running on your internal network doesn't mean to say that running HTTPS isn't a good idea - defence in depth is absolutely a good idea.

Unethically disclosed vulnerabilities in Pepperminty Wiki: My perspective

Recently, I've made a new release of my PHP-based wiki engine Pepperminty Wiki - v0.23. This would not normally be notable, but as it turns out there were a number of security issues (the severity of which varies) that needed fixing. I fixed them of course, but the manner in which they were disclosed to me was less than ethical.

In this post, I want to explain what happened from my perspective, and why I'm rather frustrated with the way the reporter handled things.

It all started with issue #222 that was opened by @hmaverickadams. In that issue, they say that they have discovered a number of vulnerabilities:

Hi,

I am a penetration tester and discovered a couple of vulnerabilities within your application. I will be applying for CVE status on the findings, but would like to work with you on the issues if possible. I could not locate an email, so please feel free to shoot me your contact info if possible.

Thank you!

So far, so good! Seems responsible, right? It did to me too. For reference, CVE there refers to the Common Vulnerabilities and Exposures, a website that tracks vulnerabilities in software from across the globe.

I would have left it at that, but I decided to check out the GitHub projects that @hmaverickadams (henceforth "the reporter") had. To my surprise, I found these public GitHub repositories:

These appeared to have been created just 1 day after issue #222 was opened against Pepperminty Wiki. I was on holiday at the time (3 weeks), and I haven't been checking my GitHub notifications as often as I perhaps should, so it took me 22 days to get to it. Generally speaking I would consider a minimum of 90 days with no response before publishing a vulnerability publicly like that - this is the core of the matter, but more on this later. Here are links to these vulnerabilities on the CVE website:

You may also ask yourself "what were the vulnerabilities in question in the first place?" - glad you asked! Let's take a look.

CVE-2021-38600

Described officially as "a stored Cross Site Scripting (XSS) vulnerability", this essentially means that you can convince Pepperminty Wiki to store some arbitrary HTML (which may contain a malicious script for example) and later serve it to some poor unsuspecting visitors.

In this particular vulnerability, the reporter found that when filling out the initial setup web form that appears the first time you load Pepperminty Wiki up with a wiki name that contains arbitrary HTML, Pepperminty Wiki will blindly serve this to users.

It sounds like a big issue, but once you realise that to fill out the first run web form you need the site secret - which is generated randomly and stored in peppermint.json, which itself has a check to ensure it can't be loaded through the web server - you realise that this isn't actually a big deal. In fact, Pepperminty Wiki has a number of settings that by design allow one to serve arbitrary HTML:

  • editing_message - a message that appears below the page editing form and before the submit button
  • admindisplaychar - inserts text (or arbitrary HTML) before the name of an administrator
  • footer_message - a message (that may contain arbitrary HTML) that is displayed at the bottom of every page

All of these can be modified either by a moderator in the site settings page, or through peppermint.json directly.

...so personally I don't class this as a vulnerability. Regardless, I've fixed this by running the wiki name through htmlentities() - but in doing so I speculate that some special characters (e.g. quotes) will no longer display properly because of how I fixed CVE-2021-38600 (see below) - I'll continue working on this.

CVE-2021-38601

This vulnerability is described as "a reflected Cross Site Scripting (XSS) vulnerability". This is similar to CVE-2021-38600, but instead of storing a value the attack makes use of various GET parameters. There are (were, since I've fixed it) examples of GET parameters that caused this issue, including action (sets the subcommand/action that should be taken - e.g. view, edit, etc) and page (the current page on the wiki you're looking at).

Unlike CVE-2021-38600, this is a more serious vulnerability. Someone could generate a malicious link to a Pepperminty Wiki instance that instead redirects you to an attacker-controlled website (i.e. by using location.href = "blah").

Fixing it though required me to do a comprehensive review of every single line of Pepperminty Wiki's codebase, which took me multiple hours of intense programming and was really rather unpleasant. The description by the reporter in the repo was quite unhelpful:

A reflected Cross Site Scripting (XSS) vulnerability exists on multiple parameters in version 0.23-dev of the Pepperminty-Wiki application that allows for arbitrary execution of JavaScript commands.

Affected page: http://localhost/index.php

Sample payload: https://localhost/index.php?action=<script>alert(1)</script>

CVE: https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-38601

I discovered in my comprehensive review that action and page were not the only parameters affected - I fixed every instance I could find.

Reaching out

To try and understand their side of the story and gain additional insight into the vulnerabilities they discovered, I attempted to reach out to them. First, I tried opening an issue on the above GitHub repositories they created.

Instead of replying to the issues though, the reporter instead deleted the issues I opened, and set it so that nobody could open issues on the repositories anymore! As of the time of typing I still do not have a response from the reporter about this.

Not to be deterred, I found a pair of twitter accounts they controlled and tweeted at them:

As you can probably guess, I haven't had a response yet (though I will of course update this blog post if I do). To make absolutely sure that I got through, I also filled out the contact form on their website - also to no avail so far.

With all this in mind, I get the impression the reporter does not want to talk to me - especially since they deleted the issues I opened against their repositories instead of replying to them. This is frustrating, because I was put in a really awkward position of having to deal with a zero day vulnerability as fast as I could after they publicly disclosed the vulnerabilities (worse still, I could tell that those repositories had some significant traffic since they have been starred by 7 + 4 people as of the time of typing).

Can't find an email address?

After this (and in between comprehensively reviewing Pepperminty Wiki's codebase), I also re-read the initial issue. When re-reading it, a particular sentence also struck me as odd:

I could not locate an email, so please feel free to shoot me your contact info if possible.

This is very strange, since I have the following paragraph in Pepperminty Wiki's README:

If you've found a security issue, please don't open an issue. Instead, get in touch privately - e.g. via Keybase or by email (security [at sign] starbeamrainbowlabs [replace me with a dot] com), and I'll try to respond ASAP.

I also have my website and email address on my GitHub profile, and my website lists:

  • My email address
  • My Keybase details
  • My Twitter account
  • My Stack Exchange account
  • My reddit account
  • My GPG/PGP key id

I don't have my Discord account on there, but I can chat over that too after first using one of the above.

With this in mind, I found it to be very strange that the reporter was unable to find a means of contact to use to responsibly disclose the vulnerabilities.

CVE confusion

Now that I've fixed the vulnerabilities, I'm somewhat confused about how to update the pair of CVEs. This website gives the following instructions:

  1. Identify the CNA that published the CVE Record by searching for the CVE Record on the CVE List.
  2. Locate the responsible CNA in the “Assigning CNA” field of the CVE Record.
  3. Contact the CNA using their preferred contact method to request the update.

In my case, the assigning CNA is stated as "N/A" - I assume it's the unresponsive reporter above. I'm confused here then about how I'm supposed to update the CVEs, since I can't contact the original reporter.

If anyone can suggest a way in which I can update these CVEs to reflect what has happened and the fact that I've fixed them, I'd love to know - I would hate to leave those CVEs outdated as they may misinform someone. Contact details on my website homepage. You can also leave a comment on this blog post.

Conclusion

I'm upset not because the reporter found a vulnerability - it's great they even took the time to find it in the first place in my little small-time project! I'm upset because they failed to properly disclose the vulnerabilities by privately contacting me. In fact, they would have discovered that CVE-2021-38600 is not really a vulnerability at all.

I'm also upset because despite the effort I've gone to in order to reach out, instead of opening a civil and polite discussion about the whole issue I've instead been met with doors slammed in my face (i.e. issues deleted instead of being replied to).

I wanted to document my experiences here also to educate others about ethical vulnerability / security issue disclosure. Ethics, justice, and honesty are really important to me - and I'd like to try and avoid any incidents like this in future if at all possible.

If you find a vulnerability in someone's code (be it open or closed source), I would advise you to:

  1. Go in search of a security vulnerability disclosure policy (or, for open-source projects, search the README / find the contact details for the maintainers)
  2. Contact the authors of the code (or company, if commercial) to organise responsible disclosure
  3. When a patch has been written and tested, co-ordinate the release of the patch and the disclosure of the vulnerability

Please do not release the vulnerability publicly without first contacting the author (I suggest waiting 60 days and trying multiple methods of communication). This causes maintainers of projects (who in the case of open source are mostly volunteers who pour their time into projects without asking for anything in return) a lot of stress and anxiety, as I've discovered during this incident.

Given that I haven't experienced anything like this before and that I'm only human I'm sure that my response to this incident could use some work - but the manner in which these vulnerabilities were disclosed could use a lot of work too.

Sources and further reading

Cluster, Part 11: Lock and Key | Let's Encrypt DNS-01 for wildcard TLS certificates

Welcome one and all to another cluster blog post! Cluster blog posts always take a while to write, so sorry for the delay. As is customary, let's start this post off with a list of all the parts in the series so far:

With that out of the way, in this post we're going to look at obtaining a wildcard TLS certificate using the Let's Encrypt DNS-01 challenge. We want this because you need a TLS certificate to serve HTTPS without lighting everyone's browsers up with warnings like a Christmas tree.

The DNS-01 challenge is an alternate challenge to the default HTTP-01 challenge you may already be familiar with.

Unlike the HTTP-01 challenge, which proves you have access to a single domain by automatically placing a file on your web server, the DNS-01 challenge proves you have control over an entire domain - thus allowing you to obtain a wildcard certificate, which is valid not only for your domain but for all possible subdomains! This should save a lot of hassle - but it's important we keep it secure too.

As with regular Let's Encrypt certificates, we'll also need to ensure that the wildcard certificate we obtain is auto-renewed, so we'll be setting up a periodic task on our Nomad cluster to do this for us.

If you don't have a Nomad cluster, don't worry. It's not required, and I'll be showing you how to do it without one too. But if you'd like to set one up, I recommend part 7 of this series.

In order to complete the DNS-01 challenge successfully, we need to automatically place a DNS record in our domain. This can be done via an API, if your DNS provider has one and it's supported. Personally, I have the domain name I'm using for my cluster (mooncarrot.space.) with Gandi. We'll be using certbot to perform the DNS-01 challenge, which has a plugin system for different DNS API providers.

We'll be installing the challenge provider we need with pip3 (a Python 3 package manager, as certbot is written in Python), so you can find an up-to-date list of challenge providers over on PyPi here: https://pypi.org/search/?q=certbot-dns

If you don't see a plugin for your provider, don't worry. I couldn't find one for Gandi, so I added my domain name to Cloudflare and followed the setup to change the name servers for my domain name to point at them. After doing this, I can now use the Cloudflare API through the certbot-dns-cloudflare plugin.

With that sorted, we can look at obtaining that TLS certificate. I opt to put certbot in a Docker container here so that I can run it through a Nomad periodic task. This proved to be a useful tool to test the process out though, as I hit a number of snags with the process that made things interesting.

The first order of business is to install certbot and the associated plugins. You'd think that simply doing a sudo apt install certbot certbot-dns-cloudflare would do the job, but you'd be wrong.

As it turns out, it does install that way, but it installs an older version of the certbot-dns-cloudflare plugin that requires you give it your Global API Key from your Cloudflare account, which has permission to do anything on your account!

That's no good at all, because if the key gets compromised an attacker could edit any of the domain names on our account they like, which would quickly turn into a disaster!

Instead, we want to install the latest version of certbot and the associated Cloudflare DNS plugin, which support regular Cloudflare API Tokens, upon which we can set restrictive permissions to only allow it to edit the one domain name we want to obtain a TLS certificate for.

I tried multiple different ways of installing certbot in order to get a version recent enough to get it to take an API token. The way that worked for me was a script called certbot-auto, which you can download from here: https://dl.eff.org/certbot-auto.

Now we have a way to install certbot, we also need the Cloudflare DNS plugin. As I mentioned above, we can do this using pip3, a Python package manager. In our case, the pip3 package we want is certbot-dns-cloudflare - incidentally it has the same name as the outdated apt package that would have made life so much simpler if it had supported API tokens.

Now we have a plan, let's start to draft out the commands we'll need to execute to get certbot up and running. If you're planning on following this tutorial on bare metal (i.e. without Docker), go ahead and execute these directly on your target machine. If you're following along with Docker though, hang on because we'll be wrapping these up into a Dockerfile shortly.

First, let's install certbot:

sudo apt install curl ca-certificates
cd some_permanent_directory;
curl -sS https://dl.eff.org/certbot-auto -o certbot-auto
chmod +x certbot-auto
sudo ./certbot-auto --debug --noninteractive --install-only

Installation with certbot-auto comprises downloading a script and executing it with a bunch of flags. Next up, we need to shoe-horn our certbot-dns-cloudflare plugin into the certbot-auto installation. This requires some interesting trickery here, because certbot-auto uses something called virtualenv to install itself and all its dependencies locally into a single directory.

sudo apt install python3-pip
cd /opt/eff.org/certbot/venv
source bin/activate
pip install certbot-dns-cloudflare
deactivate

In short, we cd into the certbot-auto installation, activate the virtualenv local environment, install our dns plugin package, and then exit out of the virtual environment again.

With that done, we can finally add a convenience symlink so that the certbot command is in our PATH:

sudo ln -s /opt/eff.org/certbot/venv/bin/certbot /usr/bin/certbot

That completes the certbot installation process. Then, to use certbot to create the TLS certificate, we'll need an API token as mentioned earlier. Navigate to the API Tokens part of your Cloudflare profile and create one, and then create an INI file in the following format:

# Cloudflare API token used by Certbot
dns_cloudflare_api_token = "YOUR_API_TOKEN_HERE"

...replacing YOUR_API_TOKEN_HERE with your API token of course.
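Since this file contains a secret, it's worth locking its permissions down so that only your user can read it (certbot will complain about world-readable credentials files otherwise):

chmod 600 path/to/credentials.ini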

Finally, with all that in place, we can create our wildcard certificate! Do that like this:

sudo certbot certonly --dns-cloudflare --dns-cloudflare-credentials path/to/credentials.ini -d 'bobsrockets.io,*.bobsrockets.io' --preferred-challenges dns-01

It'll ask you a bunch of interactive questions the first time you do this, but follow it through and it should issue you a TLS certificate (and tell you where it stored it). Actually utilising it is beyond the scope of this post - we'll be tackling that in a future post in this series.

For those following along on bare metal, this is where you'll want to skip to the end of the post. Before you do, I'll leave you with a quick note about auto-renewing your TLS certificates. Do this:

sudo letsencrypt renew
sudo systemctl reload nginx postfix

....on a regular basis, replacing nginx postfix with a space-separated list of services that need reloading after you've renewed your certificates. A great way to do this is to set up a cron job.
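For example, a crontab entry (edit root's crontab with sudo crontab -e) might look something like this - the schedule and the list of services to reload are just illustrative, so adjust them to suit your setup:

30 2 * * 1    letsencrypt renew && systemctl reload nginx postfix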

Sweeping things under the carpet

For the Docker users here, we aren't quite finished yet: We need to package this mess up into a nice neat Docker container where we can forget about it :P

Some things we need to be aware of:

  • certbot has a number of data directories it interacts with that we need to ensure don't get wiped when Docker destroys instances of our container.
  • Since I'm serving the shared storage of my cluster over NFS, we can't have certbot running as root as it'll get a permission denied error when it tries to access the disk.
  • While curl and ca-certificates are needed to download certbot-auto, they aren't needed by certbot itself - so we can avoid installing them in the resulting Docker container by using a multi-stage Dockerfile.

To save you the trouble, I've already gone to the trouble of developing just such a Dockerfile that takes all of this into account. Here it is:

ARG REPO_LOCATION
# ARG BASE_VERSION

FROM ${REPO_LOCATION}minideb AS builder

RUN install_packages curl ca-certificates \
    && curl -sS https://dl.eff.org/certbot-auto -o /srv/certbot-auto \
    && chmod +x /srv/certbot-auto

FROM ${REPO_LOCATION}minideb

COPY --from=builder /srv/certbot-auto /srv/certbot-auto

RUN /srv/certbot-auto --debug --noninteractive --install-only && \
    install_packages python3-pip

WORKDIR /opt/eff.org/certbot/venv
RUN . bin/activate \
    && pip install certbot-dns-cloudflare \
    && deactivate \
    && ln -s /opt/eff.org/certbot/venv/bin/certbot /usr/bin/certbot

VOLUME /srv/configdir /srv/workdir /srv/logsdir

USER 999:994
ENTRYPOINT [ "/usr/bin/certbot", \
    "--config-dir", "/srv/configdir", \
    "--work-dir", "/srv/workdir", \
    "--logs-dir", "/srv/logsdir" ]

A few things to note here:

  • We use a multi-stage Dockerfile here to avoid installing curl and ca-certificates in the resulting Docker image.
  • I'm using minideb as a base image that resides on my private Docker registry (see part 8). For the curious, the script I use to do this is located on my personal git server here: https://git.starbeamrainbowlabs.com/sbrl/docker-images/src/branch/master/images/minideb.
    • If you don't have minideb pushed to a private Docker registry, replace minideb with bitnami/minideb in the above.
  • We set the user and group certbot runs as to 999:994 to avoid the NFS permissions issue.
  • We define 3 Docker volumes /srv/configdir, /srv/workdir, and /srv/logsdir to contain all of certbot's data that needs to be persisted and use an elaborate ENTRYPOINT to ensure that we tell certbot about them.

Save this in a new directory with the name Dockerfile and build it:

sudo docker build --no-cache --pull --tag "certbot" .;

...if you have a private Docker registry with a local minideb image you'd like to use as a base, do this instead:

sudo docker build --no-cache --pull --tag "myregistry.seanssatellites.io:5000/certbot" --build-arg "REPO_LOCATION=myregistry.seanssatellites.io:5000/" .;

In my case, I do this on my CI server:

laminarc queue docker-rebuild IMAGE=certbot

How I set that up will be the subject of a future post. Part of the answer is located in my docker-images Git repository, but the other part is in my private continuous integration Git repo (rest assured I'll be talking about it and sharing it here).

Anyway, with the Docker container built we can now obtain our certificates with this monster of a one-liner:

sudo docker run -it --rm -v /mnt/shared/services/certbot/workdir:/srv/workdir -v /mnt/shared/services/certbot/configdir:/srv/configdir -v /mnt/shared/services/certbot/logsdir:/srv/logsdir certbot certonly --dns-cloudflare --dns-cloudflare-credentials path/to/credentials.ini -d 'bobsrockets.io,*.bobsrockets.io' --preferred-challenges dns-01

The reason this is so long is that we need to mount the 3 different volumes that contain certbot's data files into the container. If you're running a private registry, don't forget to prefix certbot there with registry.bobsrockets.com:5000/.

Don't forget also to update the Docker volume locations on the host here to point at empty directories owned by 999:994.
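For example, assuming the host paths used in the command above:

sudo mkdir -p /mnt/shared/services/certbot/{configdir,workdir,logsdir}
sudo chown -R 999:994 /mnt/shared/services/certbot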

Even if you want to run this on Nomad, I still advise executing it manually the first time, because it'll ask you a bunch of questions interactively (which it doesn't do on subsequent runs).

If you're not using Nomad, this is the point at which you'll want to skip to the end. As before with the bare-metal users, you'll want to add a cron job that runs certbot renew - just in your case inside your Docker container.
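For reference, the renewal command inside the container looks something like this - reusing the same volume mounts as before (thanks to the ENTRYPOINT in the Dockerfile, we only need to pass renew):

sudo docker run --rm -v /mnt/shared/services/certbot/workdir:/srv/workdir -v /mnt/shared/services/certbot/configdir:/srv/configdir -v /mnt/shared/services/certbot/logsdir:/srv/logsdir certbot renew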

Nomad

For the truly intrepid Nomad users, we still have one last task to complete before our work is done: Auto-renewing our certificate(s) with a Nomad periodic task.

This isn't really that complicated I found. Here's what I came up with:

job "certbot" {
    datacenters = ["dc1"]
    priority = 100
    type = "batch"

    periodic {
        cron = "@weekly"
        prohibit_overlap = true
    }

    task "certbot" {
        driver = "docker"

        config {
            image = "registry.service.mooncarrot.space:5000/certbot"
            labels { group = "maintenance" }
            entrypoint = [ "/usr/bin/certbot" ]
            command = "renew"
            args = [
                "--config-dir", "/srv/configdir/",
                "--work-dir", "/srv/workdir/",
                "--logs-dir", "/srv/logsdir/"
            ]
            # To generate a new cert:
            # /usr/bin/certbot --work-dir /srv/workdir/ --config-dir /srv/configdir/ --logs-dir /srv/logsdir/ certonly --dns-cloudflare --dns-cloudflare-credentials /srv/configdir/__cloudflare_credentials.ini -d 'mooncarrot.space,*.mooncarrot.space' --preferred-challenges dns-01

            volumes = [
                "/mnt/shared/services/certbot/workdir:/srv/workdir",
                "/mnt/shared/services/certbot/configdir:/srv/configdir",
                "/mnt/shared/services/certbot/logsdir:/srv/logsdir"
            ]
        }
    }
}

If you want to use it yourself, replace the various references to things like the private Docker registry and the Docker volumes (which require "docker.volumes.enabled" = "True" in the client options block of your Nomad agent configuration) with values that make sense in your context.

From inspecting the logs and keeping an eye on TLS certificate expiry times, I have some confidence that this is working as intended. Save it to a file called certbot.nomad and then run it:

nomad job run certbot.nomad
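To check that Nomad has registered the periodic job and see when it's next due to fire, something like this should do it:

nomad job status certbot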

Conclusion

If you've made it this far, congratulations! We've installed certbot and used the Cloudflare DNS plugin to obtain a DNS wildcard certificate. For the more adventurous, we've packaged it all into a Docker container. Finally for the truly intrepid we implemented a Nomad periodic job to auto-renew our TLS certificates.

Even if you don't use Docker or Nomad, I hope this has been a helpful read. If you're interested in the rest of the cluster build I've been doing, why not go back and start reading from part 1? All the posts in my cluster series are tagged with "cluster" to make them easier to find.

Unfortunately, I haven't managed to determine a way to import TLS certificates into Hashicorp Vault automatically. I've stalled a bit on the Vault front (permissions and policies are wildly complicated), so it's unlikely I'll be touching Vault again any time soon (if anyone has an alternative that is simpler and easier to understand / configure, please comment below).

Despite this, in future posts I've got a number of topics lined up I'd like to talk about:

  • Configuring Fabio (see part 9) to serve HTTPS and force-redirect from HTTP to HTTPS (status: implemented)
  • Implementing HAProxy to terminate port forwarding (status: initial research)
  • Password protecting the private docker registry, Consul, and Nomad (status: on the todo list)
  • Semi-automatic docker image rebuilding with Laminar CI (status: implemented)

In the meantime, please comment below if you liked this post, are having issues, or have any suggestions. I'd love to hear if this helped you out!


Switching TOTP providers from Authy to andOTP

Since I first started using 2-factor authentication with TOTP (Time based One Time Passwords), I've been using Authy to store my TOTP secrets. This has worked well for a number of years, but recently I decided that I wanted to change. This was for a number of reasons:

  1. I've acquired a large number of TOTP secrets for various websites and services, and I'd like a better way of sorting the list
  2. Most of the web services I have TOTP secrets for don't have an icon in Authy - and there are only so many times you can repeat the 6 generic colours before it becomes totally confusing
  3. I'd like the backups of my TOTP secrets to be completely self-hosted (i.e. completely on my own infrastructure)

After asking on Reddit, I received a recommendation to use andOTP (F-Droid, Google Play). After installing it, I realised that I needed to export my TOTP secrets from Authy first.

Unfortunately, it turns out that this isn't an easy process. Many guides tell you to alter the code behind the official Authy Chrome app - and since I don't have Chrome installed (I'm a Firefox user :D), that's not particularly helpful.

Thankfully, all is not lost. During my research I found the authy project on GitHub, which is a command-line app - written in Go - that temporarily registers as a 'TOTP provider' with Authy and then exports all of your TOTP secrets to a standard text file of URIs.

These can then be imported into whatever TOTP-supporting authenticator app you like. Personally, I did this by generating QR codes for each URI and scanning them into my phone. When converted to QR codes, the exported URIs are in the same format as the ones you originally scanned from each website when setting up 2-factor authentication in the first place. This makes for an easy time importing them - at least, once you've got them out of the walled garden.

Generating all those QR codes manually isn't much fun though, so I automated the process. This was pretty simple:

#!/usr/bin/env bash
exec 3<&0; # Copy stdin
while read url; do
    echo "${url}" | qr --error-correction=H;
    read -p "Press a enter to continue" <&3; # Pipe in stdin, since we override it with the read loop
done <secrets.txt;

The exec 3<&0 bit copies the standard input to file descriptor 3 for later. Then we enter a while loop, and read in the file that contains the secrets and iterate over it.

For each line, we convert it to a QR code that displays in the terminal with VT-100 ANSI escape codes with the Python program qr.

Finally, after generating each QR code we pause for a moment until we press the enter key, so that we can generate the QR codes 1 at a time. We pipe in file descriptor 3 here that we copied earlier, because inside the while loop the standard input is the file we're reading line-by-line and not the keyboard input.
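If you want to run the script yourself, note that the qr command it calls comes from the Python qrcode package - installing it looks something like this (assuming you've got pip available):

pip3 install --user qrcode

...and secrets.txt is just the text file of URIs exported by the authy tool above, one per line.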

With my secrets migrated, I set to work changing the labels, images, and tags for each of them. I'm impressed by the number of different icons it supports - and since it's open-source if there's one I really want that it doesn't have, I'm sure I can open a PR to add it. It also encrypts the TOTP secrets database at rest on disk, which is pretty great.

Lastly came the backups. It looks like andOTP is pretty flexible when it comes to backups - supporting plain text files as well as various forms of encrypted file. I opted for the latter, with GPG encryption instead of a password or PIN. I'm sure it'll come back to bite me later when I struggle to decrypt the database in an emergency because I find the gpg CLI terribly difficult to use - perhaps I should take multiple backups, encrypted with a long and difficult password too.
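For future me (and anyone else in the same boat): decrypting one of those GPG-encrypted backups on a Linux box boils down to something like this, where the filename is just an example:

gpg --decrypt otp_accounts.json.gpg > otp_accounts.json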

To encrypt the backups with GPG, you need to have a GPG provider installed on your phone. andOTP recommended that I install OpenKeychain for managing my GPG private keys on Android, which I did. So far, it seems to be functioning as expected - additionally providing me with a mechanism by which I can encrypt and decrypt files easily and perform other GPG-related tasks... if only it were this easy in the Linux terminal!

Once set up, I saved my encrypted backups directly to my Nextcloud instance, since it turns out that in Android 10 (or maybe earlier? I'm not sure), if you have the Nextcloud app installed it appears as a file system provider when saving things. I'm certainly not complaining!

While I'm still experimenting with my new setup, I'm pretty happy with it at the moment. I'm still considering how I can make my TOTP backups even more secure while not compromising the '2nd factor' nature of the thing, so it's possible I might post again in the future about that.

Next on my security / privacy todo list is to configure my Keepass database to use my Solo for authentication, and possibly figure out how I can get my phone to pretend to be a keyboard to input passwords into machines I don't have my password database configured on :D

Found this interesting? Got a suggestion? Comment below!

Solo hardware security key review

Sometime last year (I forget when), I backed a kickstarter that promised the first open-source hardware security key that supports FIDO2. Since the people doing the kickstarter have done this before for an older standard, I decided to back it.

Last week they finally arrived, and the wait was totally worth it! I got 1 with a USB type C connector (in yellow below), and 1 with a regular type A connector that also supports NFC (in red, for use with my phone).

Before I get into why they are so awesome, it's probably a good idea if we take a small step back and look at what a hardware security key does and why it does it.

My Solos!

In short, a hardware security key has a unique secret key baked into it that you can't extract. If I understand it, this is sometimes known as a physically unclonable function (correct me in a comment if I'm wrong). It makes use of this secret key for authentication purposes by way of a chain of protocols, which are collectively known as FIDO2.

A diagram showing the different FIDO2 protocols. It's basically WebAuthn between browser and OS, and CTAP2 between OS and hardware security key

There are 2 important protocols here: WebAuthn, which the browser provides to web pages so they can interact with hardware devices, and CTAP2, which allows the browser to interface with the hardware security key through a channel that the operating system provides (be that over USB, NFC, Bluetooth, or some other means).

FIDO2 is new. Like, very very new. To this end, browsers and websites don't yet have full support for it. Where support does exist, it isn't always enabled by default (in Firefox you've got to set security.webauth.u2f, security.webauth.webauthn, and security.webauth.webauthn_enable_usbtoken to true, but I think these will be set by default in a coming update), and some sites incorrectly 'detect' support by sniffing the user-agent string ( cough I'm looking at you, GitHub and Facebook cough ).

Despite this, when it is supported it works fabulously. Solo goes a long way to making the process as painless as possible - supporting both CTAP (for the older U2F protocol) and CTAP2 (which is part of the FIDO2 protocol suite). It's designed well (though the cases on the NFC-enabled version, called the Solo Tap, are a bit on the snug side), and since it's open source you can both inspect and contribute to the firmware to improve the Solo and add new features for everyone to enjoy.

Extra features like direct access to the onboard TRNG (true random number generator) are really nice to have - and the promise of more features to come makes it even better. I'm excited to see what new capabilities my Solo will gain with future updates!

In the future I want to take a deeper dive into Webauthn and implement support in applications I've written (e.g. Pepperminty Wiki). It looks like it might be quite complicated, but I'll post here when I've figured it out.
