
Powahroot: Client and Server-side routing in Javascript

The powahroot logo, which is a 16x16 pixel-art image and looks like a purple-red carrot with bright orange stripes and yellow light lines coming out of the sides

If I want to really understand something, I usually end up implementing it myself. That's the case with my latest library, powahroot - though it also exists because I'm picky, and I didn't really like the way any of the alternatives functioned.

Originally I wrote it for this project (although it's actually for a little satellite project that isn't open-source unfortunately - maybe at some point in the future!) - but I liked it so much that I decided that I had to turn it into a full library that I could share here.

In short, a routing framework helps you get requests handled in the right places in your application. I've actually blogged about this before, so I'd recommend you go and read that post first before continuing with this one.

For all the similarities between the server side (as mentioned in my earlier post) and the client side, the 2 environments are different enough that they warrant having 2 distinctly separate routers. In powahroot, I provide both a ServerRouter and a ClientRouter.

The ServerRouter is designed to handle Node.js HTTP request and response objects. It provides shortcut methods .get(), .post(), and others to quickly create routes for different request types - and also supports middleware to enable logical separation of authentication, request processing, and response generation.

The ClientRouter, on the other hand, is essentially a stripped-down version of the ServerRouter that's tailored to functioning in a browser environment. It doesn't support middleware (yet?), but it does support pushState, which is part of the History API.

I've also published it on npm, so you can install it like this:

npm install --save powahroot

Then you can use it like this:

// On the server
import ServerRouter from 'powahroot/Server.mjs';

// ....

const router = new ServerRouter();
router.on_all(async (context, next) => { console.debug(context.url); await next(); });
router.get("/files/::filepath", (context, _next) => context.send.plain(200, `You requested ${context.params.filepath}`));
// .....

// On the client
import ClientRouter from 'powahroot/Client.mjs';

// ....

const router = new ClientRouter({
    // Options object. Default settings:
    verbose: false, // Whether to be verbose in console.log() messages
    listen_pushstate: true, // Whether to react to browser pushstate events (excluding those generated by powahroot itself, because that would cause an infinite loop :P)
});

As you can see, powahroot uses ES6 Modules, which makes it easy to split up your code into separate independently-operating sections.

In addition, I've also generated some documentation with the documentation tool on npm. It details the API available to you, and should serve as a good reference when using the library.

You can find that here: https://starbeamrainbowlabs.com/code/powahroot/docs/

It's automatically updated via continuous integration and continuous deployment, which I really do need to get around to blogging about (I've spent a significant amount of time setting up the base system upon which powahroot's CI and CD works. In short, I use Laminar CI and a GitHub webhook, but there are a lot of complicated details).

Found this interesting? Used it in your own project? Got an idea to improve powahroot? Comment below!

Redis in Review: First Impressions

The red Redis logo backed by a yellow voronoi diagram

(Above: The Redis logo backed by a voronoi diagram generated by my specially-upgraded voronoi diagram generator :D)

I've known about Redis for a while now - but I've never gotten around to looking into it until now. A bit of background: I'm building a user management panel thingy for this project (yes, again. I'll be blogging about it for a bit more yet before it's ready). Since I want it to be a separate, and optional, component that you don't have to run in order to use the main program, I decided to make it a separate program in a different language (Node.js is rather good at this sort of thing, I've found).

Anyway, whilst writing the login system, I found that I needed a place to store session tokens such that they persist. I considered a few different options here:

  1. A JSON file on disk - This wouldn't be particularly scalable if I found myself with lots of users. It also means I've got to read in and decode a bunch of JSON when my program starts up, which can get mighty fiddly. Furthermore, persisting it back to disk would also be rather awkward and inefficient.
  2. An SQL database of some description - Better, but still not great. The setup for a database is complicated, and requires a complex and resource-hungry process running at all times. Even if I used SQLite, I wouldn't be able to escape the complexities associated with database schemas - which I'd likely end up spending more time fiddling with than actually writing the program itself!

Then I remembered Redis. At first, I thought it was an in-memory data storage solution. While I was right, I discovered that there's a lot more to Redis than I first thought. For one, it does actually allow you to persist things to disk - and is very transparent about it too, allowing a great deal of control over how data is persisted to disk (also this post is a great read on the subject, even if it's a bit old now - it goes into a great deal of depth about how the operating system handles and caches writes to disk).

When it comes to actually storing your data, Redis provides just a handful of delightfully simple data structures for you to model your data with. Unlike a traditional SQL-based database, storing data in Redis requires a different way of thinking. Rather than considering how data relates to one another and trying to represent it as a series of interconnected tables, in Redis you store things in a manner that's much closer to that which you might find in the program that you're writing.

The core of Redis is a key-value store - rather like localStorage in the browser. Unlike localStorage though, the following data types are provided:

  • A simple string value
  • A hash-table
  • A list of values (which can also be treated as a queue)
  • A set of unique strings (which can be optionally sorted)
  • Binary blobs (called bit arrays)
  • A HyperLogLog (apparently a probabilistic data structure of some kind)

The official Redis tutorial on the subject does a much better job of explaining this than I could (I have only been playing with it for about a week after all!), so I'd recommend you read it if you're curious.

All this has some interesting effects. Firstly, the data storage system can be more closely representative of the data structures it is storing, allowing for clever data-specific optimisations to be made in ways that simply aren't possible with a traditional SQL-based system. To this end, a custom index can be created in Redis via simple values to accelerate the process of finding a particular hash-table given a particular piece of identifying information.

Secondly, it makes Redis very versatile. You could use it as a caching layer to store the results of queries made against a traditional SQL-based database. In that scenario, no persistence to disk would be required to make it as fast as possible. You could use it for storing session tokens and short-lived chunks of data, utilising the inbuilt EXPIRE command to tell Redis to automatically delete certain keys after a specified amount of time. You could store all your data in it and do away with an SQL server altogether. You could even do all of these things at the same time!
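
To give a concrete flavour of what that might look like from Node.js, here's a rough sketch of storing a session token with an expiry - assuming a recent, promise-based node-redis client. The key names and the shape of the session data here are made up purely for illustration, and aren't from my actual project.

import { createClient } from 'redis';

const client = createClient();
await client.connect();

// Store a session token that Redis will automatically delete after an hour
const token = "d41d8cd98f00b204";
await client.set(`session:${token}`, JSON.stringify({ username: "alice" }), {
    EX: 60 * 60 // Expiry time, in seconds
});

// A simple hand-rolled 'index': map a username to their current session token
await client.set("session_by_user:alice", token, { EX: 60 * 60 });

// Later: fetch the session back out again (returns null once it has expired)
const session = JSON.parse(await client.get(`session:${token}`));
console.log(session);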

When storing data in Redis, it's important to make sure you've got the right mindset. I've found that despite learning about the different commands and data structures Redis has to offer, it's taken me a few goes to figure out how I'm going to store my program's data in a way that both makes sense and makes it easy to get it back out again. I'm certain that it'll take quite a bit of practice (and trial and error!) to figure out what works and what doesn't.

It's not the kind of thing that would make you want to drop your database system like a hot potato and move everything to Redis before the week is out. But it is a tool to keep in mind that can be used to solve a class of problems that SQL-based database servers have traditionally been very bad at handling - in particular high-throughput systems that need to store lots of mainly short-lived data (e.g. session tokens, etc.).

Found this interesting? Got your own thoughts on Redis? Post a comment below!

Routers: Essential, everywhere, and yet exasperatingly elusive

Now that I've finished my University work for the semester (though I do have a few loose ends left to tie up), I've got some time on my hands to do a bunch of experimenting that I haven't had the time for earlier in the year.

In this case, it's been tracking down an HTTP router that I used a few years ago. I've experimented with a few now (find-my-way, micro-http-router, and rill) - but all of them have something wrong with them, or feel too opinionated for my taste.

I'm getting slightly ahead of myself though. What's this router you speak of, and why is it so important? Well, it all comes down to application design. When using PHP, you can, to some extent, split your application up by having multiple files (though I would recommend filtering everything through a master index.php). In Node.JS, which I've been playing around with again recently, that's not really possible.

A comparison of the way PHP and Node.JS applications are structured. See the explanation below.

Unlike PHP, which gets requests handed to it from a web server like Nginx via CGI (Common Gateway Interface), Node.JS is the server. You can set up your very own HTTP server listening on port 9898 like this:

import http from 'http';
const http_server = http.createServer((request, response) => {
    response.writeHead(200, {
        "x-custom-header": "yay"
    });
    response.end("Hello, world!");
}).listen(9898, () => console.log("Listening on port 9898"));

This poses a problem. How do we know what the client requested? Well, there's the request object for that - and I'm sure you can guess what the response object is for - but the other question that remains is how do we figure out which bit of code to call to send the client the correct response?

That's where a request router comes in handy. They come in all shapes and sizes - ranging from a bare-bones router to a full-scale framework - but their basic function is the same: to route a client's request to the right place. For example, a router might work a little bit like this:

import http from 'http';
import Router from 'my-awesome-router-library';

// ....

const router = new Router();

router.get("/login", route_login);
router.put("/inbox/:username", route_user_inbox_put);

const http_server = http.createServer(router.handler()).listen(20202);

Pretty simple, right? This way, every route can lead to a different function, and each of those functions can be in a separate file! Very cool. It all makes for a nice and neat way to structure one's application, preventing any issues relating to any one file getting too big - whilst simultaneously keeping everything orderly and in its own place.
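
For instance, one of those per-route functions could live in its own file and simply be imported wherever the routes are defined. Something along these lines - the file name, function, and context shape here are purely illustrative:

// routes/login.mjs - a hypothetical route handler living in its own file
export default async function route_login(context, _next) {
    // Build and send the login page here, using whatever the router
    // passes through in `context`
    context.response.end("<!DOCTYPE html> ...");
}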

Except when you're picky like me and you can't find a router you like, of course. I've got some pretty specific requirements. For one, I want something flexible and unopinionated enough that I can do my own thing without it getting in the way. For another, I'd like first-class support for middleware.

What's middleware you ask? Well, I've only just discovered it recently, but I can already tell that it's a very powerful method of structuring more complex applications - and devastatingly dangerous if used incorrectly (the spaghetti is real).

Basically, the endpoint of a route might parse some data that a client has sent it, and maybe authenticate the request against a backend. Perhaps a specific environment needs to be set up in order for a request to be fulfilled.

While we could do these things in the final route itself, it would clutter up the code there, and we'd likely have more boilerplate, parsing, and environment setup code than we have actual application logic! The solution here is middleware. Think of it as an onion, with the final route's application logic in the middle, and the parsing, logging, and error handling code as the layers on the outside.

A diagram visualising an application with 3 layers of middleware: an error handler, a logger, and a data parser - with the application  logic in the middle. Arrows show that a request makes its way through these layers of middleware both on the way in, and the way out.

In order to reach the application logic at the centre, an incoming request must first make its way through all the layers of middleware that are in the way. Similarly, it must also churn through the layers of middleware in order to get out again. We could represent this in code like so:

// Middleware that runs for every request
router.use(middleware_error_handler);
router.use(middleware_request_logger);

// Decode all post data with middleware
// This won't run for GET / HEAD / PUT / etc. requests - only POST requests
router.post(middleware_decode_post_data);

// For GET requests under `/inbox`, run some middleware
router.get("/inbox", middleware_setup_user_area);

// Endpoint routes
// These function just like middleware too (i.e. we could
// pass the request through to another layer if we wanted
// to), but they don't lead anywhere else, so it's probably
// better if we keep them separate
router.get("/inbox/:username", route_user_inbox);
router.any("/honeypot", route_spambot_trap);

router.get("/login", route_display_login_page);
router.post("/login", route_do_login);

Quite a neat way of looking at it, right? Let's take a look at some example middleware for our fictional router:

async function middleware_catch_errors(context, next) {
    try {
        await next();
    } catch(error) {
        console.error(error.stack);
        context.response.writeHead(503, {
            "content-type": "text/plain"
        });
        // todo make this fancier
        context.response.end("Ouch! The server encountered an error and couldn't respond to your request. Please contact bob at [email protected]!");
    }
}

See that next() call? That's what causes the application to enter the next layer of middleware. We can have as many of these layers as we like - but don't go crazy! It'll cause you problems later whilst debugging.....
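
Under the hood, a router doesn't need much machinery to pull this off. Here's a minimal sketch of how a chain of middleware functions might be invoked - it isn't any particular library's implementation, just an illustration of the pattern:

async function run_middleware(layers, context) {
    let index = -1;
    // Each layer receives this next() function, which invokes the layer after
    // it - so awaiting next() descends one level deeper into the onion, and
    // returning from it comes back out again
    async function next() {
        index++;
        if(index < layers.length)
            await layers[index](context, next);
    }
    await next();
}

// Example: a logger wrapping a final 'route'
await run_middleware([
    async (context, next) => { console.log("before"); await next(); console.log("after"); },
    async (context, _next) => { console.log(`handling ${context.url}`); }
], { url: "/inbox/alice" });
// Output: before, handling /inbox/alice, after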

What I've shown here is actually very similar to the rill framework - it just has a bunch of extras tagged on that I don't like - along with some annoying limitations when it comes to defining routes.

To that end, I think I'll end up writing my own router, since none of the ones I've found will do the job just right. It kinda fits with the spirit of the project that this is for, too - teaching myself new things that I didn't know before.

If you're curious as to how a Node.JS application is going to fit in with a custom HTTP + WebSockets server written in C♯, then the answer is a user management panel. I'm not totally sure where this is going myself - I'll see where I end up! After all, with my track record, you're bound to find another post or three showing up on here again some time soon.

Until then, Goodnight!

Found this useful? Still got questions? Comment below!

A comparison of compression formats for storing JSON

Happy new year, everyone!

I've blogged about different aspects of a (not so?) little project of mine several times now (exhibits A, B, C, D, and finally E - even if I didn't know it at the time), but it appears that I end up running into all sorts of interesting problems that I invent cool solutions for. I also find myself doing a bunch of research that I'm surprised nobody's compiled into a single place yet - as is the case in this blog post. (Also, Happy 2018 everyone! First post of the year :D)

I've been refactoring the subsystem that saves a considerable amount of JSON data to a bunch of different files on disk. Obviously, I'm interested in minimising the amount of space this JSON data takes up on disk. As this saving process happens in the background on a separate thread, I'm not too concerned about performance - other than it can't be too slow. With this in mind, I've found myself testing a bunch of different compression algorithms. Let's introduce our test data:

  1. A 17MiB card game dataset, as a single minified JSON file
  2. A 40KiB 'live' specimen chunk's data, saved as a single minified JSON file.

I can't remember which card game the first dataset is from, but I do know that I found it in the awesome JSON datasets list. Next up, here's our cast of compression algorithms we'll be testing:

  1. The venerable GZip
  2. BZip2 - Apparently GZIP, but smaller and slightly more computationally expensive
  3. XZ (the newer child of LZMA2)
  4. 7zip
  5. Google's Brotli

A colourful cast, to be sure! Let's run them through their paces - starting with the card game dataset. Here are the results I observe with each set to their default settings:

Format        Size
------------  ----
Uncompressed  17M
gzip          2.4M
bzip2         1.6M
xz            1.3M
7zip          1.4M
brotli        1.3M

Very interesting. It looks like xz and brotli are tied in first place - though I observed that brotli took ages in comparison to all the other algorithms I tested - and upon closer inspection xz beat it by 17.3KiB. Numbers are all very well, but to really see what's going on here, let's plot it on a graph:

A graph of the data in the table above.

That's better! I can actually make some comparisons now. From this graph we can observe that gzip is the worst of the lot, followed by bzip2. 7zip is surprisingly in third place, but then again it is designed for multiple files, whereas the rest of them are designed for a single stream of data. In second place is the terribly slow brotli, and finally in first place is xz.

Hrm - very interesting. How do our algorithms measure up when confronted with a smaller load though? Here are the results for the sample chunk data:

Format        Size
------------  ----
Uncompressed  40K
gzip          5.4K
bzip2         4.3K
xz            4.6K
7zip          4.9K
brotli        4.6K

Interesting results, to be sure, but I can't discern much from that. Let's plot a graph:

A graph of the data in the above table. Further explanation below.

Very interesting. With smaller loads, it appears that bzip2 performs much better than any other algorithm. gzip is still the worst performer, while xz and brotli, surprisingly, performed noticeably worse than bzip2.

To that end, I think I'm going to be choosing bzip2 as my compression format of choice for this job, as it produces the best results for the type of work I'm going to be doing.

I'm really surprised about brotli though, actually. I had high hopes for it, considering it's a new algorithm invented by Google. They claimed that it would provide xz-like compression with gzip-like speeds - but from what I'm seeing, it does anything but.
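
If you fancy poking at a couple of these formats yourself without installing anything extra, Node.js's built-in zlib module can handle gzip out of the box - and, in reasonably recent versions, brotli too. A quick sketch (the file name is just a placeholder for your own dataset):

import fs from 'fs';
import zlib from 'zlib';

const data = fs.readFileSync("dataset.json");

// Compress the same data with 2 of the formats tested above
const gzipped = zlib.gzipSync(data, { level: 9 });
const brotlied = zlib.brotliCompressSync(data);

console.log(`uncompressed: ${data.length} bytes`);
console.log(`gzip:         ${gzipped.length} bytes`);
console.log(`brotli:       ${brotlied.length} bytes`);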


GlidingSquirrel is now on NuGet with automatic API documentation!

(This post is a bit late as I'm rather under the weather.)

A while ago, I posted about an HTTP/1.0 server library I'd written in C♯ for a challenge. One thing led to another, and I've now ended up adding support for HTTP/1.1, WebSockets, and more....

While these enhancements have been driven mainly by a certain project of mine that keeps stealing all of my attention, they've been fun to add - and now that I've (finally!) worked out how to package it as a NuGet package, it's available on NuGet for everyone to use, under the Mozilla Public License 2.0!

Caution should be advised though, as I've not thoroughly tested it yet to weed out the slew of bugs and vulnerabilities I'm sure are hiding in there somewhere - it's designed mainly to sit behind a reverse-proxy such as Nginx (not that it's any excuse, I know!) - and also I don't have a good set of test cases I can check it against (I thought for sure there would be at least one test suite out there for HTTP servers, considering the age of the protocol, but apparently not!).

With that out of the way, specifying dependencies didn't turn out to be that hard, actually. Here's the extra bit I added to the .nuspec file to specify the GlidingSquirrel's dependencies:

<dependencies>
    <dependency id="MimeSharp" version="1.0.0" />
    <dependency id="System.ValueTuple" version="4.4.0" />
</dependencies>

The above goes inside the <metadata> element. If you're curious about what this strange .nuspec file is and its format, I've blogged about it a while back when I figured out how to package my TeleConsole Client - in which I explain my findings and how you can do it yourself too!

I've still not figured out the strange $version$ and $description$ things that are supposed to reference the appropriate values in the .csproj file (if you know how those work, please leave a comment below, as I'd love to know!) - as it keeps telling me that <description> cannot be empty.....

In addition to packaging it and putting it up on NuGet, I've also taken a look at automatic documentation generation. For a while I've been painstakingly documenting the entire project with intellisense comments especially for this purpose - which I'm glad I started early, as it's been a real pain and taken a rather decent chunk of time to do.

Anyway, with the last of the intellisense comments finished today, I set about generating automatic documentation - which turned out to be a surprisingly difficult thing to achieve.

  • Sandcastle, Microsoft's offering, has been helpfully discontinued.
  • Sandcastle Help File Builder, the community open-source fork of Sandcastle, appears to be Windows-only (I use Linux myself).
  • DocFX, Microsoft's new offering, appears to be more of a static site generator that uses your code and a bunch of other things as input, rather than the simple HTML API documentation generator I was after.
  • While I found Doxygen to produce an acceptable result out-of-the-box, I discovered that configuring it was not a trivial task - the initial configuration file its init action created was over 100 KiB!

I found a few other options too, but they were either Windows-only, or commercial offerings that I can't justify using for an open-source project.

When I was about to give up the search for the day, I stumbled across this page on generating documentation by the mono project, who just happen to be the ones behind mono, the cross-platform .NET runtime that runs on Linux, macOS, and, of course, Windows. They're also the ones who have built mcs, the C♯ compiler that complements the mono runtime.

Apparently, they've also built a documentation generator that has the ability to export to HTML. While it's certainly nothing fancy, it does look like you've got all the power you need at your fingertips to customise the look and feel by tweaking the XSLT stylesheet it renders with, should you need it.

After a bit of research, I found this pair of commands to build the documentation for me quite nicely:

mdoc update -o docs_xml/ -i GlidingSquirrel.xml GlidingSquirrel.dll
mdoc export-html --out docs/ docs_xml

The first command builds the multi-page XML tree from the XML documentation generated during the build process (make sure the "Generate xml documentation" option is ticked in the project build options!), and the second one transforms that into HTML that you can view in your browser.

With the commands figured out, I set about including it in my build process. Here's what I came up with:

<Target Name="AfterBuild">
    <Message Importance="high" Text="----------[ Building documentation ]----------" />

    <Exec Command="mdoc update -o docs_xml/ -i GlidingSquirrel.xml GlidingSquirrel.dll" WorkingDirectory="$(TargetDir)" IgnoreExitCode="true" />
    <Exec Command="mdoc export-html --out $(SolutionDir)/docs docs_xml" WorkingDirectory="$(TargetDir)" IgnoreExitCode="true" />
</Target>

This outputs the documentation to the docs/ folder in the root of the solution. If you're a bit confused as to how to utilise this in your own project, then perhaps the Untangling MSBuild post I wrote to figure out how MSBuild works can help! You can view the GlidingSquirrel's automatically-generated documentation here. It's nothing fancy (yet), but it certainly does the job. I'll have to look into prettifying it later!

Untangling MSBuild: MSBuild for normal people

I don't know about you, but I find the documentation on MSBuild to be rather confusing. Even their definition of what MSBuild is is a bit misleading:

MSBuild is the build system for Visual Studio.

Whilst having to pull apart the .csproj file of a project of mine and put it back together again to get it to do what I wanted, I spent a considerable amount of time reading Microsoft's (bad) documentation and various web tutorials on what MSBuild is and what it does. I'm bound to forget what I've learnt, so I'm detailing it here both to save myself the bother of looking everything up again and to make sense of everything I've read and experimented with myself.

Before continuing, you might find my previous post, Understanding your compiler: C#, an interesting read if you aren't already aware of some of the basics of Visual Studio solutions, project files, msbuild, and the C♯ compiler.

Let's start with a real definition. MSBuild is Microsoft's build framework that ties into Visual Studio, Mono, and basically anything in the .NET world. It has an XML-based syntax that allows one to describe what MSBuild has to do to build a project - optionally depending on other projects elsewhere in the file system. It's most commonly used to build C♯ projects - but it can be used to build other things, too.

The structure of your typical Visual Studio solution might look a bit like this:

The basic structure of a Visual Studio solution. Explained below.

As you can see, the .sln file references one or more .csproj files, each of which may reference other .csproj files - not all of which have to be tied to the same .sln file, although they usually are (this can be handy if you're using Git Submodules). The .sln file will also specify a default project to build, too.

The file extension .csproj isn't the only one recognised by MSBuild, either - there are others such as .pssproj (PowerShell project), .vcxproj (Visual C++ project), and .targets (shared tasks + targets - we'll get to these later) - though the generic extension is simply .proj.

So far, I've observed that MSBuild is pretty intelligent about automatically detecting project / solution files in its working directory - you can call it with msbuild in a directory and most of the time it'll find and build the right project - even if it finds a .sln file that it has to parse first.

Let's get to the real meat of the subject: targets. Consider the following:

<?xml version="1.0" encoding="utf-8"?>
<Project xmlns="http://schemas.microsoft.com/developer/msbuild/2003" DefaultTargets="Build" ToolsVersion="4.0">
    <Import Project="$(MSBuildBinPath)\Microsoft.CSharp.targets" />

    <Target Name="BeforeBuild">
        <Message Importance="high" Text="Before build executing" />
    </Target>
</Project>

I've simplified it a bit to make it clearer, but in essence the above imports the predefined C♯ set of targets, which includes (amongst others) BeforeBuild, Build itself, and AfterBuild - the first of which is overridden by the local MSBuild project file. Sound complicated? It is a bit!

MSBuild uses a system of targets. When you ask it to do something, you're actually asking it to reach a particular target. By default, this is the Build target. Targets can have dependencies, too - in the case of the above, the Build target depends on the (blank) BeforeBuild target in the default C♯ target set, which is then overridden by the MSBuild project file above.

The second key component of MSBuild project files are tasks. I've used one in the example above - the Message task which, obviously enough, outputs a message to the build output. There are lots of other types of task, too:

  • Reference - reference another assembly or core namespace
  • Compile - specify a C♯ file to compile
  • EmbeddedResource - specify a file to include as an embedded resource
  • MakeDir - create a directory
  • MSBuild - recursively build another MSBuild project
  • Exec - Execute a shell command (on Windows this is with cmd.exe)
  • Copy - Copy file(s) and/or director(y/ies) from one place to another

This is just a small sample of what's available - check out the MSBuild task reference for a full list and more information about how each is used. It's worth noting that MSBuild targets can include multiple tasks one after another - and they'll all be executed in sequence.

Unfortunately, if you try to specify a wildcard in an EmbeddedResource directive, both Visual Studio and Monodevelop 'helpfully' try to auto-expand these wildcards to reference the actual files themselves. To avoid this, a clever hack can be instituted that uses the CreateItem task:

<CreateItem Include="$(ProjectDir)/obj/client_dist/**">
    <Output ItemName="EmbeddedResource" TaskParameter="Include" />
</CreateItem>

The above loops over all the files that are in $(ProjectDir)/obj/client_dist, and dynamically creates an EmbeddedResource directive for each - thereby preventing annoying auto-expansion.

The $(ProjectDir) bit is a variable - the final key component of MSBuild project files. There are a number of built-in variables for different things, but you can also define your own.

  • $(ProjectDir) - The current project's root directory
  • $(SolutionDir) - The root directory of the current solution. Undefined if a project is being built directly.
  • $(TargetDir) - The output directory that the result of the build will be written to.
  • $(Configuration) - The selected configuration to build. Most commonly Debug or Release.

As with the tasks, there's a full list available that you can check out for more information.

That's the basics of how MSBuild works, as far as I'm aware at the moment. If I discover anything else of note, I'll post again in the future about what I've learnt. If this has helped, please comment below! If you're still confused, let me know in the comments too, and I'll try my best to help you out :-)

Want a tutorial on Git Submodules? Let me know by posting a comment below!

Finding the distance to a (finite) line from a point in Javascript

A screenshot of the library I've written. Explanation below.

For a project of mine (which I might post about once it's more stable), I'm going to need a way to find the distance from the mouse cursor to a line in order to implement an eraser. I've attempted this problem before - but it didn't exactly go to plan. To that end, I decided to implement the algorithm on its own to start with - so that I could debug it properly without all the (numerous) moving parts of the project I'm writing it for getting in the way.

As you may have guessed since you're reading this post, it actually went rather well! Using the C++ implementation on this page as a reference, it didn't take more than an hour or two to get a reasonable implementation working - and it didn't take a huge amount of time to tidy it up into an npm package for everyone to use!

My implementation uses ES6 Modules - so you may need to enable them in about:config or chrome://flags if you haven't already (don't believe the pages online that say you need Firefox / Chrome nightly - it's available in stable, just disabled by default) before taking a look at the demo, which you can find here:

Line Distance Calculator

(Click and drag to draw a line - your distance from it is shown in the top left)

The code behind it is actually quite simple - just rather full of nasty maths that will give you a headache if you try and understand it all at once (I broke it down, which helped). The library exposes multiple methods to detect a point's distance from different kinds of line - one for multi-segmented lines (which I needed in the first place), one for a single (finite) line (which the multi-segmented line employs), and one for a single infinite line - which I implemented first, using this Wikipedia article - before finding that it was buggy because it was for an infinite line (even though the article's name is apparently correct)!
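
If you're curious about the finite-line case, the core of it boils down to a projection followed by a clamp. Here's a rough sketch of the idea - the function name and signature are illustrative, not the library's actual API:

// Distance from point (px, py) to the finite line segment (ax, ay) -> (bx, by)
function distance_to_segment(px, py, ax, ay, bx, by) {
    const dx = bx - ax, dy = by - ay;
    const length_squared = dx*dx + dy*dy;
    // Project the point onto the infinite line through the segment, expressed
    // as a fraction t along it (guarding against a zero-length segment)
    let t = length_squared === 0 ? 0
        : ((px - ax) * dx + (py - ay) * dy) / length_squared;
    // Clamp the projection to the segment's endpoints - this is what makes it
    // the distance to a *finite* line rather than an infinite one
    t = Math.max(0, Math.min(1, t));
    const closest_x = ax + t * dx, closest_y = ay + t * dy;
    return Math.hypot(px - closest_x, py - closest_y);
}

console.log(distance_to_segment(0, 5, -10, 0, 10, 0)); // 5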

I've written up a usage guide if you're interested in playing around with it yourself.

I've also got another library that I've released recently (also for Nibriboard) that simplifies multi-segmented lines instead of finding the distance to them, which I may post about soon too!

Update: Looks like I forgot that I've already posted about the other library! You can read about it here: Line Simplification: Visvalingam's Algorithm

Got a question? Wondering why I've gone to the trouble of implementing such an algorithm? Comment below - I'd love to hear your thoughts!

Line Simplification: Visvalingam's Algorithm

A screenshot of my demo of my implementation of Visvalingam's Algorithm. (Above: A screenshot of the demo of my implementation of Visvalingam's line simplification algorithm. Link below!)

For a secret project of mine I've been working on since about February time (if I recall correctly), I've discovered that I could make considerable use of a line simplification algorithm. The tricky thing, though, is that I need an implementation in both Javascript and C♯ - and both need to return identical results.

Initially, I chose the Ramer-Douglas-Peucker Algorithm, but I ended up implementing Visvalingam's Algorithm instead, as I encountered issues with calculating the shortest distance from a point to a line reliably along with other algorithmic problems that I determined weren't worth the time to fix.

Visvalingam's algorithm is actually really simple. Suppose we take a line:

A line with 6 points in it.

If we create a sliding window with a width of 3 and slide it along the list of points, then we get a set of triangles. To simplify the line, we can calculate the area of each of these triangles, and remove the centre point of the triangle with the smallest area.

The same line with the triangles highlighted.

The same line with a point removed.

Then we can continue removing the centre point of the smallest triangle until we reach a triangle with an area that's above a threshold we set - and this is Visvalingam's Algorithm.
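
Here's a rough, unoptimised sketch of that process in Javascript. It isn't the code behind my actual implementation (which you'll find linked below), but it should illustrate the idea:

// Area of the triangle formed by points a, b, and c (each { x, y })
function triangle_area(a, b, c) {
    return Math.abs((b.x - a.x) * (c.y - a.y) - (c.x - a.x) * (b.y - a.y)) / 2;
}

// Repeatedly remove the centre point of the smallest triangle until every
// remaining triangle's area is at or above the given threshold
function simplify_visvalingam(points, area_threshold) {
    points = points.slice(); // Don't mutate the caller's array
    while(points.length > 2) {
        let smallest_index = -1, smallest_area = Infinity;
        for(let i = 1; i < points.length - 1; i++) {
            const area = triangle_area(points[i - 1], points[i], points[i + 1]);
            if(area < smallest_area) {
                smallest_area = area;
                smallest_index = i;
            }
        }
        if(smallest_area >= area_threshold) break;
        points.splice(smallest_index, 1); // Remove the offending centre point
    }
    return points;
}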

Though I haven't written the C♯ version yet, I've completed the Javascript implementation - and created a demo for you to play around with! Here's a link:

Visvalingam's Algorithm Demo

Note that you'll need to enable ES6 Module support in your browser to get it to work, as I've used ES6 Modules whilst building it.

In Firefox this can be done by setting dom.moduleScripts.enabled to true in about:config, and in chrome by visiting chrome://flags/#enable-javascript-harmony (sorry, hyperlinks don't work for chrome:// urls IIRC!), enabling it, and restarting your browser.

It's open-source, of course - under the Mozilla Public License 2.0. You can find my code on GitLab - and pull requests are welcome :D

Finally, I've released it as an npm package. If you aren't aware of npm, it's really cool. It's the primary package manager for Javascript - I've written a blog post on this here.

Once I've written the C♯ version I'll have another bash at trying to get Nuget to package it. I think I know what the issue has been so far - so hopefully it works this time! If it does I'll blog about that too.

Found this useful? Think it's cool? Let me know in the comments below!

/r/dailyprogrammer hard challenge #322: Static HTTP 1.0 server

Recently I happened to stumble across the dailyprogrammer subreddit's latest challenge. It was for a static HTTP 1.0 server, and while I built something similar for my networking ACW, I thought I'd give this one a go to create an extendable http server that I can use in other projects. If you want to follow along, you can find the challenge here!

My language of choice, as you might have guessed, was C♯ (I know that C♯ has an HttpListener class inbuilt already, but to listen on 0.0.0.0 on Windows it requires administrative privileges).

It ended up going rather well, actually. In a little less than 24 hours after reading the post, I had myself a working solution, and I thought I'd share here how I built it. Let's start with a class diagram:

A class diagram for the gliding squirrel. (Above: A class diagram for the GlidingSquirrel. Is this diagram better than the last one I drew?)

I'm only showing properties on here, as I'll be showing you the methods attached to each class later. It's a pretty simple design, actually - HttpServer deals with all the core HTTP and networking logic, FileHttpServer handles the file system calls (and can be swapped out for your own class), and HttpRequest, HttpResponse, HttpMethod, HttpResponseCode all store the data parsed out from the raw request coming in, and the data we're about to send back out again.

With a general idea as to how it's put together, let's dive into how it actually works. HttpServer would probably be a good place to start:

public abstract class HttpServer
{
    public static readonly string Version = "0.1-alpha";

    public readonly IPAddress BindAddress;
    public readonly int Port;

    public string BindEndpoint { /* ... */ }

    protected TcpListener server;

    private Mime mimeLookup = new Mime();
    public Dictionary<string, string> MimeTypeOverrides = new Dictionary<string, string>() {
        [".html"] = "text/html"
    };

    public HttpServer(IPAddress inBindAddress, int inPort)
    { /* ... */ }
    public HttpServer(int inPort) : this(IPAddress.IPv6Any, inPort)
    {
    }

    public async Task Start() { /* ... */ }

    public string LookupMimeType(string filePath) { /* ... */ }

    protected async void HandleClientThreadRoot(object transferredClient) { /* ... */ }

    public async Task HandleClient(TcpClient client) { /* ... */ }

    protected abstract Task setup();

    public abstract Task HandleRequest(HttpRequest request, HttpResponse response);
}

(Full version)

It's heavily abbreviated because there's actually quite a bit of code to get through here, but you get the general idea. The Start method is the main loop that accepts the TcpClients, calling HandleClientThreadRoot for each client it accepts. I decided to use the inbuilt ThreadPool class to do the threading for me here:

TcpClient nextClient = await server.AcceptTcpClientAsync();
ThreadPool.QueueUserWorkItem(new WaitCallback(HandleClientThreadRoot), nextClient);

C♯ handles all the thread spawning and killing for me internally this way, which is rather nice. Next, HandleClientThreadRoot sets up a net to catch any errors that are thrown by the next stage (as we're now in a new thread, which can make debugging a nightmare otherwise), and then calls the main HandleClient:

try
{
    await HandleClient(client);
}
catch(Exception error)
{
    Console.WriteLine(error);
}
finally
{
    client.Close();
}

No matter what happens, the client's connection will always get closed. HandleClient is where the magic starts to happen. It attaches a StreamReader and a StreamWriter to the client:

StreamReader source = new StreamReader(client.GetStream());
StreamWriter destination = new StreamWriter(client.GetStream()) { AutoFlush = true };

...and calls a static method on HttpRequest to read in and decode the request:

HttpRequest request = await HttpRequest.FromStream(source);
request.ClientAddress = client.Client.RemoteEndPoint as IPEndPoint;

More on that later. With the request decoded, HandleClient hands off the request to the abstract method HandleRequest - but not before setting up a secondary safety net:

try
{
    await HandleRequest(request, response);
}
catch(Exception error)
{
    response.ResponseCode = new HttpResponseCode(503, "Server Error Occurred");
    await response.SetBody(
        $"An error ocurred whilst serving your request to '{request.Url}'. Details:\n\n" +
        $"{error.ToString()}"
    );
}

This secondary safety net means that we can send a meaningful error message back to the requesting client in the case that the abstract request handler throws an exception for some reason. In the future, I'll probably make this customisable - after all, you don't always want to let the client know exactly what crashed inside the server's internals!

The FileHttpServer class that handles the file system logic is quite simple, actually. The magic is in its implementation of the abstract HandleRequest method that the HttpServer itself exposes:

public override async Task HandleRequest(HttpRequest request, HttpResponse response)
{
    if(request.Url.Contains(".."))
    {
        response.ResponseCode = HttpResponseCode.BadRequest;
        await response.SetBody("Error the requested path contains dangerous characters.");
        return;
    }

    string filePath = getFilePathFromRequestUrl(request.Url);
    if(!File.Exists(filePath))
    {
        response.ResponseCode = HttpResponseCode.NotFound;
        await response.SetBody($"Error: The file path '{request.Url}' could not be found.\n");
        return;
    }

    FileInfo requestFileStat = null;
    try {
        requestFileStat = new FileInfo(filePath);
    }
    catch(UnauthorizedAccessException error) {
        response.ResponseCode = HttpResponseCode.Forbidden;
        await response.SetBody(
            "Unfortunately, the server was unable to access the file requested.\n" + 
            "Details:\n\n" + 
            error.ToString() + 
            "\n"
        );
        return;
    }

    response.Headers.Add("content-type", LookupMimeType(filePath));
    response.Headers.Add("content-length", requestFileStat.Length.ToString());

    if(request.Method == HttpMethod.GET)
    {
        response.Body = new StreamReader(filePath);
    }
}

With all the helper methods and properties on HttpResponse, it's much shorter than it would otherwise be! Let's go through it step by step.

if(request.Url.Contains(".."))

This first step is a quick check for anything obvious that could be used against the server to break out of the web root. There are probably other dangerous things you can do (or try to do, anyway!) to a web server to attempt to trick it into returning arbitrary files, but I can't think of any off the top of my head that aren't covered further down. If you can, let me know in the comments!

string filePath = getFilePathFromRequestUrl(request.Url);

Next, we translate the raw path received in the request into a path to a file on disk. Let's take a look inside that method:

protected string getFilePathFromRequestUrl(string requestUrl)
{
    return $"{WebRoot}{requestUrl}";
}

It's rather simplistic, I know. I can't help but feel that there's something I missed here.... Let me know if you can think of anything. (If you're interested in the dollar syntax there - it's called an interpolated string, and is new in C♯ 6! Fancy name, I know. Check it out!)

if(!File.Exists(filePath))
{
    response.ResponseCode = HttpResponseCode.NotFound;
    await response.SetBody($"Error: The file path '{request.Url}' could not be found.\n");
    return;
}

Another obvious check. Can't have the server crashing every time it runs into a 404! A somewhat interesting note here: File.Exists only checks to see if there's a file that exists under the specified path. To check for the existence of a directory, you have to use Directory.Exists - which would make directory listing rather easy to implement. I might actually try that later - with an option to turn it off, of course.

FileInfo requestFileStat = null;
try {
    requestFileStat = new FileInfo(filePath);
}
catch(UnauthorizedAccessException error) {
    response.ResponseCode = HttpResponseCode.Forbidden;
    await response.SetBody(
        "Unfortunately, the server was unable to access the file requested.\n" + 
        "Details:\n\n" + 
        error.ToString() + 
        "\n"
    );
    return;
}

Ok, on to something that might be a bit more unfamiliar. The FileInfo class can be used to get, unsurprisingly, information about a file. You can get all sorts of statistics about a file or directory with it, such as the last modified time, whether it's read-only from the perspective of the current user, etc. We're only interested in the size of the file though for the next few lines:

response.Headers.Add("content-type", LookupMimeType(filePath));
response.Headers.Add("content-length", requestFileStat.Length.ToString());

These headers are important, as you might expect. Browsers tend to like to know the type of content they're receiving - and especially its size.

if(request.Method == HttpMethod.GET)
{
    response.Body = new StreamReader(filePath);
}

Lastly, we send the file's contents back to the user in the response - but only if it's a GET request. This rather neatly takes care of HEAD requests - but might cause issues elsewhere. I'll probably end up changing it if it does become an issue.

Anyway, now that we've covered everything right up to sending the response back to the client, let's end our tour with a look at the request parsing system. It's a bit backwards, but it does seem to work in an odd sort of way! It all starts in HttpRequest.FromStream.

public static async Task<HttpRequest> FromStream(StreamReader source)
{
    HttpRequest request = new HttpRequest();

    // Parse the first line
    string firstLine = await source.ReadLineAsync();
    var firstLineData = ParseFirstLine(firstLine);

    request.HttpVersion = firstLineData.httpVersion;
    request.Method = firstLineData.requestMethod;
    request.Url = firstLineData.requestPath;

    // Extract the headers
    List<string> rawHeaders = new List<string>();
    string nextLine;
    while((nextLine = source.ReadLine()).Length > 0)
        rawHeaders.Add(nextLine);

    request.Headers = ParseHeaders(rawHeaders);

    // Store the source stream as the request body now that we've extracted the headers
    request.Body = source;

    return request;
}

It looks deceptively simple at first glance. To start with, I read in the first line, extract everything useful from it, and attach them to a new request object. Then, I read in all the headers I can find, parse those too, and attach them to the request object we're building.

Finally, I attach the StreamReader to the request itself, as it's now pointing at the body of the request from the user. I haven't actually tested this, as I don't actually use it anywhere just yet, but it's a nice reminder just in case I do end up needing it :-)

Now, let's take a look at the cream on the cake - the method that parses the first line of the incoming request. I'm quite pleased with this actually, as it's my first time using a brand new feature of C♯:

public static (float httpVersion, HttpMethod requestMethod, string requestPath) ParseFirstLine(string firstLine)
{
    List<string> lineParts = new List<string>(firstLine.Split(' '));

    float httpVersion = float.Parse(lineParts.Last().Split('/')[1]);
    HttpMethod httpMethod = MethodFromString(lineParts.First());

    lineParts.RemoveAt(0); lineParts.RemoveAt(lineParts.Count - 1);
    string requestUrl = lineParts.Aggregate((string one, string two) => $"{one} {two}");

    return (
        httpVersion,
        httpMethod,
        requestUrl
    );
}

Monodevelop, my C♯ IDE, appears to go absolutely nuts over this with red squiggly lines everywhere, but it still compiles just fine :D

As I was writing this, a thought popped into my head that a tuple would be perfect here. After reading somewhere a month or two ago about a new tuple syntax that's coming to C♯ I thought I'd get awesomely distracted and take a look before continuing, and what I found was really cool. In C♯ 7 (the latest and quite possibly greatest version of C♯ to come yet!), there's a new feature called value tuples, which lets you dynamically declare tuples like I have above. They're already fully supported by the C♯ compiler, so you can use them today! Just try to ignore your editor if it gets as confused as mine did... :P

If you're interested in learning more about them, I'll leave a few links at the bottom of this post. Anyway, back to the GlidingSquirrel! Other than the new value tuples in the above, there's not much going on, actually. A few linq calls take care of the heavy lifting quite nicely.

And finally, here's my header parsing method.

public static Dictionary<string, string> ParseHeaders(List<string> rawHeaders)
{
    Dictionary<string, string> result = new Dictionary<string, string>();

    foreach(string header in rawHeaders)
    {
        string[] parts = header.Split(':');
        KeyValuePair<string, string> nextHeader = new KeyValuePair<string, string>(
            parts[0].Trim().ToLower(),
            parts[1].Trim()
        );
        if(result.ContainsKey(nextHeader.Key))
            result[nextHeader.Key] = $"{result[nextHeader.Key]},{nextHeader.Value}";
        else
            result[nextHeader.Key] = nextHeader.Value;
    }

    return result;
}

While I have attempted to build in support for multiple definitions of the same header according to the spec, I haven't actually encountered a time when it's actually been needed. Again, this is one of those things I've built in now for later - as I do intend on updating this and adding more features later - and perhaps even work it into another secret project I might post about soon.

Lastly, I'll leave you with a link to the repository I'm storing the code for the GlidingSquirrel, and a few links for your enjoyment:

GlidingSquirrel

Update 2018-05-01: Fixed a few links.

