
My Hull Science Festival Demo: How do AIs understand text?

Banner showing gently coloured point clouds of words against a dark background on the left, with the Humber Science Festival logo, fading into a screenshot of the attract screen of the actual demo on the right.

Hello there! On Saturday 9th September 2023, I was on the supercomputing stand for the Hull Science Festival with a cool demo illustrating how artificial intelligences understand and process text. Since then, I've been hard at work tidying that demo up, and today I can announce that it's available to view online here on my website!

This post is a general high-level announcement. A series of technical posts will follow on the nuts and bolts of both the theory behind the demo and the actual code itself and how it's put together, because it's quite interesting and I want to talk about it.

I've written this post to serve as a foreword / quick explanation of what you're looking at (similar to the explanation I gave in person), but if you're impatient you can just find it here.

All AIs currently developed are essentially complex parametrised mathematical models. We train these models by updating their parameters little by little until the model's output closely matches the corresponding ground-truth labels.

In other words, an AI is just a bunch of maths. So how does it understand text? The answer to this question lies in converting text to numbers - a process often called 'word embedding'.

This is done by splitting an input sentence into words, and then individually converting each word into a series of numbers - which is what you will see in the demo at the link below, just converted with some magic down to 3 dimensions to make it look fancy.

Similar sorts of words will have similar sorts of numbers (or positions in 3D space in the demo). For example, at the science festival we found a group of footballers, a group of countries, and so on.
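
To make that concrete, the standard way to measure how "similar" two word vectors are is cosine similarity. Here's a minimal sketch in JavaScript with made-up 3-dimensional vectors - real embeddings are learnt from data and have tens to hundreds of dimensions, so these particular numbers are purely illustrative:

// Cosine similarity: close to 1 means the vectors point the same way, close to 0 means unrelated.
function cosineSimilarity(a, b) {
    let dot = 0, magA = 0, magB = 0;
    for (let i = 0; i < a.length; i++) {
        dot  += a[i] * b[i];
        magA += a[i] * a[i];
        magB += b[i] * b[i];
    }
    return dot / (Math.sqrt(magA) * Math.sqrt(magB));
}

// Made-up example vectors - real embeddings are learnt, not hand-written.
const embeddings = {
    striker:    [0.9, 0.1, 0.3],
    goalkeeper: [0.8, 0.2, 0.4],
    france:     [0.1, 0.9, 0.7],
};

console.log(cosineSimilarity(embeddings.striker, embeddings.goalkeeper)); // ≈ 0.98 - both footballers
console.log(cosineSimilarity(embeddings.striker, embeddings.france));     // ≈ 0.36 - much less similar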

In the demo below, you will see clouds of words processed from Wikipedia. I downloaded a bunch of page abstracts from Wikipedia in a number of different languages (source), extracted a list of words, converted them to numbers (GloVe → UMAP), and plotted them in 3D space. Can you identify every language displayed here?
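
The demo itself renders those word positions with Babylon.js (more on that in the technical posts to follow). As a rough illustration of the plotting step - a minimal sketch under assumptions, not the demo's actual code - here's how one might scatter pre-computed word positions into a point cloud with Babylon's PointsCloudSystem:

import { Engine, Scene, ArcRotateCamera, HemisphericLight, Vector3, Color4, PointsCloudSystem } from "@babylonjs/core";

// Assumes a <canvas id="renderCanvas"> element on the page.
const canvas = document.getElementById("renderCanvas");
const engine = new Engine(canvas, true);
const scene = new Scene(engine);
new ArcRotateCamera("camera", Math.PI / 4, Math.PI / 3, 50, Vector3.Zero(), scene).attachControl(canvas, true);
new HemisphericLight("light", new Vector3(0, 1, 0), scene);

// words: [ { text: "apple", x: 1.2, y: -3.4, z: 0.7 }, ... ] - 3D positions from the embedding step
async function plotWords(words) {
    const pointCloud = new PointsCloudSystem("words", 2, scene); // 2px points
    pointCloud.addPoints(words.length, (particle, i) => {
        particle.position = new Vector3(words[i].x, words[i].y, words[i].z);
        particle.color = new Color4(0.6, 0.8, 1.0, 1.0);
    });
    await pointCloud.buildMeshAsync();
}

engine.runRenderLoop(() => scene.render());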


Find the demo here: https://starbeamrainbowlabs.com/labs/research-smflooding-vis/

A screenshot of the initial attract screen of the demo. A central box allows one to choose a file to load, with a large load button directly beneath it. The background is a blurred + bloomed screenshot of a point cloud from the demo itself.



If you were one of the lucky people to see my demo in person, you may notice that this online demo looks very different to the one I originally presented at the science festival. That's because the in-person demo uses data from social media, but this one uses data from Wikipedia to preserve privacy, just in case.

I hope you enjoy the demo! Time permitting, I will be back with some more posts soon to explain how I did this and the AI/NLP theory behind it at a more technical level. Some topics I want to talk about, in no particular order:

  • General technical outline of the nuts and bolts of how the demo works and what technologies I used to throw it together
  • How I monkeypatched Babylon.js's gamepad support
  • A detailed and technical explanation of the AI + NLP theory behind the demo, the things I've learnt about word embeddings while doing it, and what future research could look like to improve word embeddings based on what I've learnt
  • Word embeddings, the options available, how they differ, and which one to choose.

Until next time, I'll leave you with 2 pictures I took on the day. See you in the next post!

Edit 2023-11-30: Oops! I forgot to link to the source code....! If you'd like to take a gander at the source code behind the demo, you can find it here: https://github.com/sbrl/research-smflooding-vis

A photo of my demo up and running on a PC with a PS4 controller on a wooden desk. An Entroware laptop sits partially obscured by a desktop PC monitor, the latter of which has the demo full screen.

(Above: A photo of my demo in action!)


(Above: A photo of the postcards on the desk next to my demo. My research is not shown, but visuals from other researchers' projects are printed, with everything from microbiology to disease research to jellyfish galaxies.)

The journal article about my social media research is out now!

This is just a quick little post to announce that I have published my first journal article! This has been a significantly long time in the making, with the review process and all associated corrections alone taking from October 2022 until a week or two ago.

It has been published in the Elsevier journal Computers & Geosciences, with the following title:

Real-time social media sentiment analysis for rapid impact assessment of floods

The article is open access, so everyone should be able to read it. I must thank everyone who has helped and contributed to the process of putting this journal article together - their names are on the journal article.

Hopefully this is the first of many!

Visualising Tensorflow model summaries

It's no secret that my PhD is based in machine learning / AI (my specific title is "Using Big Data and AI to dynamically map flood risk"). Recently a problem I have been plagued with is quickly understanding the architecture of new (and old) models I've come across at a high level. I could read the paper a model comes from in detail (and I do this regularly), but it's much less complicated and much easier to understand if I can visualise it in a flowchart.

To remedy this, I've written a neat little tool that does just this. When you're training a new AI model, one of the things that it's common to print out is the summary of the model (I make it a habit to always print this out for later inspection, comparison, and debugging), like this:

model = setup_model()
model.summary()

This might print something like this:

Model: "sequential"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
embedding (Embedding)        (None, 500, 16)           160000    

lstm (LSTM)                  (None, 32)                6272      

dense (Dense)                (None, 1)                 33        
=================================================================
Total params: 166,305
Trainable params: 166,305
Non-trainable params: 0
_________________________________________________________________

(Source: ChatGPT)

This is just a simple model, but it is common for larger ones to have hundreds of layers, like this one I'm currently playing with as part of my research:

(Can't see the above? Try a direct link.)

Woah, that's some model! It must be really complicated. How are we supposed to make sense of it?

If you look closely, you'll notice that it has a Connected to column, as it's not a linear tf.keras.Sequential() model. We can use that to plot a flowchart!

This is what the tool I've written generates, using a graphing library called nomnoml. It parses the Tensorflow summary, and then compiles it into a flowchart. Here's a sample:

It's a purely web-based tool - no data leaves your client. Try it out for yourself here:

https://starbeamrainbowlabs.com/labs/tfsummaryvis/
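
To give a rough idea of what happens under the hood, here's an illustrative sketch (not the tool's actual code) of turning parsed layer connections into nomnoml source and rendering it to an SVG string. It assumes the renderSvg() helper exported by the nomnoml npm package and a hypothetical layers array parsed from the summary text:

import * as nomnoml from "nomnoml";

// layers: parsed from the summary's "Connected to" column, e.g.
// [ { name: "input_1", connectedTo: [] }, { name: "dense", connectedTo: ["input_1"] } ]
function summaryToSvg(layers) {
    const edges = [];
    for (const layer of layers)
        for (const parent of layer.connectedTo)
            edges.push(`[${parent}] -> [${layer.name}]`);
    const source = "#direction: down\n" + edges.join("\n");
    return nomnoml.renderSvg(source); // an SVG string, ready to drop into the page
}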

For the curious, the source code for this tool is available here:

https://git.starbeamrainbowlabs.com/sbrl/tfsummaryvis

Add your blog to hullblogs.com

For many years, csblogs.com has aggregated posts from the blogs of current and past students of the University of Hull. Unfortunately, recently the site has started to generate random error messages. To fix this, Freeside (of which I'm the head, as it turns out) have decided to implement a replacement.

It looks rather cool if I do say so myself, so I thought I'd share it here. I'll talk first about how you can add your blog to it and why it's a good idea to start one yourself (and how you can do so for free!). Then, I'll talk a little bit at the end about how I built hullblogs.com and how it's put together.

@closebracket had the idea to call it hullblogs.com, and then I implemented the code for it:


(Above: A screenshot of hullblogs.com)

You can visit it here: hullblogs.com.

If you're a current or past student at the University of Hull, then we want to hear from you! If you've got a blog (and even if you haven't yet!), then you can add your blog by going to hullblogs.com, and then clicking "add your blog".

If you don't yet have a blog, then we have you covered. Our guide has a number of really easy ways to get started and host a blog for free! There are multiple ways to host a blog without paying a penny (no hidden charges after some time either).

But I don't have a blog!

If you're not convinced about setting up a blog and putting it on the Internet, there are a number of key reasons you should:

  • It's not about other people reading it. By setting up a blog, you don't need to have 100s of views. What matters is that you write for you - not for anyone else.
  • If only 1 person reads your blog, then it's worth it. Your personal blog and/or website makes a great addition to your CV. It's a wonderful way of documenting the things you're doing and learning while doing your degree and beyond - and a great way for potential employers to find more detail on all this information in 1 place.
  • Write for yourself. Personally, I find my blog a great place to post about difficult problems that I've encountered and solved. Then when I encounter a similar problem again, I can re-read my own blog post about it. Because I'm the one who wrote the post, I can anticipate what I may find confusing and explain that in more detail to help out future me.
  • Improve your technical writing skills. Technical writing - the process of writing about complex technical subjects (whether that's Computer Science or beyond) - is an invaluable skill. Writing documentation, communicating complex ideas, and helping others are all situations which can call for technical writing - and it's a great thing to put on your CV too. Just doing a thing isn't enough - being able to write about it and document it is really important when working on commercial (or even academic) projects, for example.

If you're not computer science oriented, then Wordpress, Blogger, or maybe Squarespace (though I'm not sure whether that one has a free tier) would be a great place to start your blogging journey. You can host a blog for free too!

If you are computer science oriented (I'd guess most of the people reading this probably are given the kinds of posts I write for my blog here), then I strongly recommend Eleventy. It's a static site generator, and there are a whole bunch of ready-made templates for you to use to get started quickly.

In fact, hullblogs.com itself is built with Eleventy, using feedme to parse RSS/Atom feeds.

Every night, the site is rebuilt. During this process, it downloads the RSS/Atom feed from every registered blog and orders the posts chronologically. If an image is associated with a post, this is also downloaded - the site also looks for the first image in a post's content if one isn't explicitly attached to the post. Once ordered, the posts are then paginated (split into multiple pages) and the rest of the site is rendered.
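
In rough JavaScript terms, the nightly aggregation step looks something like the sketch below. This isn't hullblogs.com's actual code - the real thing uses feedme, as mentioned above - and parseFeed here is a stand-in for whatever feed parser you prefer:

// Sketch only: fetch every registered feed, skip broken ones, sort newest-first, paginate.
async function aggregate(feedUrls, parseFeed, postsPerPage = 20) {
    const posts = [];
    for (const url of feedUrls) {
        try {
            const xml = await (await fetch(url)).text();
            posts.push(...parseFeed(xml)); // → [ { title, url, date, image }, ... ]
        } catch (error) {
            console.error(`Skipping ${url}: ${error.message}`); // offline/malformed feeds are ignored
        }
    }

    posts.sort((a, b) => b.date - a.date); // newest first

    const pages = [];
    for (let i = 0; i < posts.length; i += postsPerPage)
        pages.push(posts.slice(i, i + postsPerPage));
    return pages;
}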

Being a static site, once the (re)build is complete hosting the site is simple and secure. Currently, Freeside hosts it using Nginx to statically serve it.

If your feed is malformed/contains errors or your blog is offline, don't worry! hullblogs.com will transparently ignore your feed when it rebuilds. Then, once your blog is back up and running, it will be added back into the main hullblogs.com site the next night when the site rebuilds again.

If you're interested in digging into the source code of the site, I've open sourced it on GitHub under the Apache 2.0 Licence: https://github.com/FreesideHull/hullblogs.com

If you've read this far, thanks so much for reading! We're after a better favicon logo for hullblogs.com. If you've got an idea, please do let us know by opening an issue.

stl2png Nautilus Thumbnailer

Recently I've found myself working with STL files a lot more since I bought a 3d printer (more on that in a separate post!) (.obj is technically the better format, but STL is still widely used). I'm a visual sort of person, and with this in mind I like to have previews of things in my file manager. When I found no good STL thumbnailers for Nautilus (the default file manager on Ubuntu), I set out to write my own.

In my research, I discovered that OpenSCAD can be used to generate a PNG image from an STL file if one writes a small .scad file wrapper (ref), and wanted to blog about it here.

First, a screenshot of it in action:

(Above: stl2png in action as a nautilus thumbnailer. STL credit: Entitled Goose from the Untitled Goose Game)

You can find installation instructions here:

https://github.com/sbrl/nautilus-thumbnailer-stl/#nautilus-thumbnailer-stl

The original inspiration for this was twofold:

From there, wrapping it in a shell script and turning it into a nautilus thumbnailer was not too challenging. To do that, I followed this guide, which was very helpful (though the update-mime-database bit was wrong - the filepath there needs to have the /packages suffix removed).

I did encounter a few issues though. Firstly, finding the name of a suitable fallback icon was not easy - I resorted to browsing the contents of /usr/share/icons/gnome/256x256/mimetypes/ in my file manager, as this spec was not helpful because STL model files don't fit neatly into any of the categories there.

The other major issue was that the script worked fine when I called it manually, but failed when I tried to use it via the nautilus thumbnailing engine. It turned out that OpenSCAD couldn't open an OpenGL context, so as a quick hack I wrapped the openscad call in xvfb-run (from X Virtual FrameBuffer; sudo apt install xvfb).

With those issues sorted, it worked flawlessly. I also added optional oxipng support (with optipng as a fallback) for optimising the generated PNG image - in casual testing, I found this saved between 70% and 90% on file sizes.

Found this interesting or helpful? Comment below! It really helps motivate me.

MIDI to Music Box score converter

Keeping with the theme of things I've forgotten to blog about (every time I think I've mentioned everything, I remember something else), in this post I'm going to talk about yet another project of mine that's been happily sitting around (and I've nearly blogged about but then forgot on at least 3 separate occasions): A MIDI file to music box conversion script. A while ago, I bought one of these customisable music boxes:

(Above: My music box (centre), and the associated hole punching device it came with (top left).)

The basic principle is that you punch holes in a piece of card, and then you feed it into the music box to make it play the notes you've punched into the card. Different variants exist with different numbers of notes - mine has a staggering 30 different notes it can play!

It has a few problems though:

  1. The note names on the card are wrong
  2. Figuring out which one is which is a problem
  3. Once you've punched a hole, it's permanent
  4. Punching the holes is fiddly

Solving problem #4 might take a bit more work (a future project out-of-scope of this post), but the other 3 are definitely solvable.

MIDI is a protocol (and a file format) for storing notes from a variety of different instruments (this is just the tip of the iceberg, but it's all that's needed for this post), and it just so happens that there's a C♯ library on NuGet called DryWetMIDI that can read MIDI files in and allow one to iterate over the notes inside.

By using this and a homebrew SVG writer class (another blog post topic for another time once I've tidied it up), I implemented a command-line program I've called MusicBoxConverter (the name needs work - comment below if you can think of a better one) that converts MIDI files to an SVG file, which can then be printed (carefully - full instructions on the project's page - see below).
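
The core of the conversion is really just a coordinate mapping: a note's start time becomes a horizontal offset along the strip, and its pitch picks one of the 30 rows. The sketch below illustrates the idea in JavaScript rather than the C♯ the real program is written in, and the spacing constants are illustrative guesses rather than MusicBoxConverter's actual values:

// Illustrative only - not MusicBoxConverter's real code or real dimensions.
const MM_PER_BEAT = 8; // horizontal distance per beat (assumed value)
const MM_PER_ROW  = 2; // vertical distance between the 30 note rows (assumed value)

// notes: [ { midiNumber: 60, startBeats: 0 }, ... ] extracted from the MIDI file
// noteRows: Map from MIDI note number → row index 0-29 on the music box strip
function notesToSvg(notes, noteRows, widthMm) {
    const circles = notes
        .filter(note => noteRows.has(note.midiNumber)) // drop notes the box can't play
        .map(note => {
            const cx = note.startBeats * MM_PER_BEAT;
            const cy = (29 - noteRows.get(note.midiNumber)) * MM_PER_ROW; // row 0 at the bottom
            return `<circle cx="${cx}" cy="${cy}" r="1" fill="blue" />`;
        });
    return `<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 ${widthMm} ${30 * MM_PER_ROW}">\n    ${circles.join("\n    ")}\n</svg>`;
}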

Before I continue, it's perhaps best to give an example of my (command line) program's output:

Example output

(Above: A lovely little tune from the TV classic The Clangers, converted for a 30 note music box by my MusicBoxConverter program. I do not own the song!)

The output from the program can (once printed to the correct size) then be cut out and pinned onto a piece of card for hole punching. I find paper clips work for this, but in future I might build myself an Arduino-powered device to punch holes for me.

The SVG generated has a bunch of useful features that make reading it easy:

  • Each note is marked by a blue circle with a red plus sign to help with accuracy (the hole puncher that came with mine has lines on it that make a plus shape over the viewing hole)
  • Each hole is numbered to aid with getting it the right way around
  • Big arrows in the background ensure you don't get it backwards
  • The orange bar indicates the beginning
  • The blue circle should always be at the bottom and the red circle at the top, so you don't get it upside down
  • The green lines should be lined up with the lines on the card

While it's still a bit tedious to punch the holes, it's much easier to adapt a score in Musescore, export it to MIDI, and push it through MusicBoxConverter than to do lots of trial-and-error punching the card directly unaided.

I won't explain how to use the program in this blog post in any detail, since it might change. However, the project's README explains in detail how to install and use it:

https://git.starbeamrainbowlabs.com/sbrl/MusicBoxConverter

Something worth a particular mention here is the printing process for the generated SVGs. It's somewhat complicated, but this is unfortunately necessary in order to avoid rescaling the SVG during the conversion process - otherwise it doesn't come out the right size to be compatible with the music box.

A number of improvements stand out to me that I could make to this project. While it only supports 30 note music boxes at the moment, it could easily be extended to support other types of music box (I just don't have them to hand).

Implementing a GUI is another possible improvement but would take a lot of work, and might be best served as a different project that uses my MusicBoxConverter CLI under the hood.

Finally, as I mentioned above, in a future project (once I have a 3D printer) I want to investigate building an automated hole punching machine to make punching the holes much easier.

If you make something cool with this program, please comment below! I'd love to hear from you (and I may feature you in the README of the project too).

Sources and further reading

  • Musescore - free and open source music notation program with a MIDI export function
  • MusicBoxConverter (name suggestions wanted in the comments below)

WorldEditAdditions: More WorldEdit commands for Minetest

Personally, I see enormous creative potential in games such as Minetest. For those who aren't in the know, Minetest is a voxel sandbox game rather like the popular Minecraft. Being open-source though, it has a solid Lua Modding API that allows players with a bit of Lua knowledge to script their own mods with little difficulty (Lua doesn't come with batteries included, but that's a topic for another day). Given the ease of making mods, Minetest puts much more emphasis on installing and using mods - in fact most content is obtained this way.

Personally I find creative building in Minetest a relaxing activity, and one of the mods I have found most useful for this is WorldEdit. Those familiar with Minecraft may already be aware of WorldEdit for Minecraft - Minetest has its own equivalent too. WorldEdit in both games provides an array of commands one can type into the chat window to perform various functions to manipulate the world - for example filling an area with blocks, creating shapes such as spheres and pyramids, or replacing 1 type of node with another.

Unfortunately though, WorldEdit for Minetest (henceforth simply WorldEdit) doesn't have quite the same feature set that WorldEdit for Minecraft does - so I decided to do something about it. Initially, I contributed a pull request to add node weighting support to the //mix command, but I quickly realised that the scale of the plans I had in mind wasn't exactly compatible with WorldEdit's codebase (as of the time of typing, WorldEdit for Minetest's codebase consists of a relatively small number of very long files).

(Above: The WorldEditAdditions logo, kindly created by @VorTechnix with Blender 2.9)

To this end, I decided to create my own project in which I could build out the codebase in my own way, without waiting for lots of pull requests to be merged (which would probably put strain on the maintainers of WorldEdit).

The result of this is a new mod for Minetest which I've called WorldEditAdditions. Currently, it has over 35 additional commands (though by the time you read this that number has almost certainly grown!) and 2 handheld tools that extend the core feature set provided by WorldEdit, adding commands to do things like:

These are just a few of my favourites - I've implemented many more beyond those in this list. VorTechnix has also contributed a raft of improvements, from enhancements to //torus to an entire suite of selection commands (//srect, //scol, //scube, and more) - collaborating on an open-source project with someone on another continent has been an amazing experience (open-source is so cool like that).

A fundamental difference I decided on at the beginning when working on WorldEditAdditions was to split my new codebase into lots of loosely connected files, and have each file serve a single purpose. If a file gets over 150 lines, then it's a candidate for being split up - for example, every command has a backend (which sometimes itself spans multiple files, as in the case of //convolve and //erode) that actually manipulates the Minetest world, and a front-end that handles the chat command parsing (this way we also get an API for free! Still working on documenting that properly though - if anyone knows of something like documentation for Lua, please comment below). By doing this, I enable the codebase to scale in a way that the original WorldEdit codebase does not.

I've had a ton of fun so far implementing different commands and subsequently using them (it's so satisfying to see a new command finally working!), and they have already proved very useful in creative building. Evidently others think so too, as we've already had over 4800 downloads on ContentDB:

(ContentDB badge listing the live number of downloads)

Given the enormous amount of work I and others have put into WorldEditAdditions and the level of polish it is now achieving (on par with my other big project Pepperminty Wiki, which I've blogged about before), recently I also built a website for it:

The WorldEditAdditions website

You can visit it here: https://worldeditadditions.mooncarrot.space/

I built the website with Eleventy, as I did with the Pepperminty Wiki website (blog post). My experience with Eleventy this time around was much more positive than last time - more on this in a future blog post. For the observant, I did lift the fancy button code on the new WorldEditAdditions website from the Pepperminty Wiki website :D

The website has a number of functions:

  • Explaining what WorldEditAdditions is
  • Instructing people on how to install it
  • Showing people how to use it
  • Providing a central reference of commands

I think I covered all of these bases fairly well, but only time will tell how actual users find it (please comment below! It gives me a huge amount of motivation to continue working on stuff like this).

Several components of the WorldEditAdditions codebase deserve their own blog posts that have not yet got one - especially //erode and //convolve, so do look out for those at some point in the future.

Moving forwards with WorldEditAdditions, I want to bring it up to feature parity with WorldEdit for Minecraft. I also have a number of cool and unique commands in mind that I'm working on, such as //noise (apply an arbitrary 2D noise function to the height of the terrain), //mathapply (execute another command, but only apply changes to the nodes whose coordinates result in a number greater than 0.5 when pushed through a given arbitrary mathematical expression - I've already started implementing a mathematical expression parser in Lua with a recursive-descent parser, but have run into difficulties with operator precedence), and exporting Minetest regions to obj files that can then be imported into Blender (a collaboration with @VorTechnix).

If WorldEditAdditions has caught your interest, you can get started by visiting the WorldEditAdditions website: https://worldeditadditions.mooncarrot.space/.

If you find it useful, please consider starring it on GitHub and leaving a review on ContentDB! If you run into difficulties, please open an issue and I'll help you out.

applause-cli: A Node.js CLI handling library

Continuing in the theme of things I've forgotten to talk about, I'd like to post about another package I released a little while ago. I've been building a number of command line interfaces for my PhD, so I thought it would be best to use a library to handle parsing the command-line arguments.

I found clap, but it didn't quite do what I wanted - so I wrote my own inspired by it. Soon enough I needed to use the code in several different projects, so I abstracted the logic out and called it applause-cli, which you can now find on npm.

It has no dependencies, and it allows you to define a set of arguments and have it automatically parse out the values from a given input array of items. Here's an example of how it works:

import Program from 'applause-cli';

let program = new Program("path/to/package.json");
program.argument("food", "Specifies the food to find.", "apple")
    .argument("count", "The number of items to find", 1, "number");

program.parse(process.argv.slice(2)); // Might return { food: "banana", count: 6 }

I even have automated documentation generated with documentation and uploaded to my website via Continuous Integration: https://starbeamrainbowlabs.com/code/applause-cli/. I've worked pretty hard on the documentation for this library actually - it even has integrated examples to show you how to use each function!

The library can also automatically generate help output from the provided information when the --help argument is detected too - though I have yet to improve the output if a subcommand is called (e.g. mycommand dostuff --help) - this is on my todo list :-)

Here's an example of the help text it automatically generates:

If this looks like something you'd be interested in using, I recommend checking out the npm package here: https://www.npmjs.com/package/applause-cli

For the curious, applause-cli is open-source under the MPL-2.0 licence. Find the code here: https://github.com/sbrl/applause-cli.

Rendering Time plan / Gantt charts: hourgraph

I have a number of tools and other programs I've implemented but forgotten to blog about here - hourgraph is one such tool I stumbled across again today. Originally I implemented it for my PhD panel 1 topic project analysis report, as I realised that not only had I manually created a number of these charts already, I was going to have to create a bunch more in the future. As I usually do with most of the things I write, I open-sourced it in the hope that someone else will find it useful.

I've published it on NPM, so you can install it like this:

npm install --global hourgraph

You'll need Node.js installed, and Linux users will need to prefix the above with sudo.

The program takes in a TOML definition file. Here's an example:

width = 1500
height = 480
title = "Apples"

[[task]]
name = "Pick apples"
start = 0
duration = 3

[[task]]
name = "Make apple juice"
start = 2
duration = 2

[[task]]
name = "Enjoy!"
start = 4
duration = 4
colour = "hsl(46, 90%, 60%)"
ghost_colour = "hsla(46, 90%, 60%, 0.1)"

The full set of options is available in the default config file, which is loaded in to fill in any gaps for things you haven't specified in your custom file.

Comprehensive usage instructions are found in the README, but you can render a new time plan chart thingy like this:

hourgraph --input path/to/input.toml --output path/to/output.toml

The above renders to this:

Hourgraph output

Personally, I find it's much easier to create charts like this by defining them in a simple text file that is then rendered into the actual thing. That way, I don't have to fiddle with the layout myself - it all comes out in the wash automatically.

For those interested in the code, it can be found here: https://github.com/sbrl/hourgraph

consulstatus: public status pages drawn from Consul

In my cluster series of blog posts, I've been talking about how I've been building my cluster from scratch. Now that I've got it into some sorta stable state (though I'm still working on it), one thing I figured might be helpful for other users of my cluster is a status page.

(Above: The logo for consulstatus. Consulstatus is written by me and not endorsed by Hashicorp or the Consul project.)

To this end, I ended up implementing a quick solution to this problem in PHP. Here's a screenshot of what it looks like:

Screenshot of what consulstatus looks like. See explanation below

The colour scheme changes depending on your browser's prefers-color-scheme setting. The circles to the right of each service are either green (indicating no issues), yellow (some problems are occurring), or red (it's down and everything's terrible).

As the name suggests, it's backed by Hashicorp Consul (which I blogged about in cluster, part 6: superglue service discovery). I recommend reading my blog post about it, but in short Consul allows you to register services that it should keep track of, and checks that define whether said services are healthy or not.
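
consulstatus itself is written in PHP, but the core idea is small enough to sketch: ask Consul's HTTP API for the health checks of each service and boil them down to a traffic-light colour. Here's that idea in JavaScript (an illustrative sketch, not consulstatus's actual code), using Consul's standard /v1/health/checks/<service> endpoint:

// Sketch: summarise a service's Consul health checks as green / yellow / red.
const CONSUL_BASE_URL = "http://consul.service.bobsrockets.com:8500"; // as in the example config below

async function serviceStatus(serviceName) {
    const response = await fetch(`${CONSUL_BASE_URL}/v1/health/checks/${encodeURIComponent(serviceName)}`);
    const checks = await response.json(); // [ { Status: "passing" | "warning" | "critical", ... }, ... ]

    if (checks.some(check => check.Status === "critical")) return "red";    // down, everything's terrible
    if (checks.some(check => check.Status === "warning"))  return "yellow"; // some problems
    return "green"; // no issues
}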

It supports a TOML config file that allows you to specify where Consul is, along with the names of the services you'd like to display:

title = "Cluster status page"

[consul]
base_url = "http://consul.service.bobsrockets.com:8500"

services = [
    "some_service",
    "another_service"
    # .....
]

The status page is designed to be as simple to understand as possible, so that anyone (even those who aren't technically skilled) can get an idea as to what is working and what isn't at any given time.

So far, it's been moderately successful. The status page itself is stable and behaves as expected (which is always a plus), and it does reflect the status of the services in question.

I did initially toy with the idea of exposing more information about the specific checks in Consul that have failed, but then I thought that I'd be then doing what the Consul web interface already does, which seems a bit pointless.

Instead, I decided to keep it rather minimalist, such that it could be exposed publicly (in theory, though my instance is only accessible on my local LAN) in a way that the main Consul web interface really can't be.

Moving forwards, I'm quite happy with consulstatus as-is, so if I make any changes they aren't likely to be too drastic. I'd like to look at adding a description to each service so that it's more obvious what it is, or maybe have display names that are shown instead of the Consul service names.

I'd also maybe like to display an icon to the left of each service as well to further help with visual identification and understanding, and perhaps allow grouping services too.

Out of scope though is logging service status history. That can be done elsewhere if desired (and I don't particularly have a need for that) - and PHP isn't particularly suited to that anyway.

Found this interesting? Got a suggestion? Comment below!
