
Orange Pi 3 in review

An Orange Pi 3, along with its logo. Of course, I'm not affiliated with the manufacturers in any way. In fact, they are probably not aware that this post even exists.

I recently bought an Orange Pi 3 (based on the Allwinner H6 chipset) to perform a graphics-based task, and I've had an interesting enough time with it that I thought I'd share my experiences in a sort of review post here.

The first challenge when it arrived was finding an operating system that supports it. My initial thought was to use Devuan, but I quickly realised that practically the only operating system that supports it at the moment is Armbian.

Not to be deterred, after a few false starts I got Armbian based on Ubuntu 18.04 Bionic Beaver installed. The next order of business was to install the software I wanted to use.

For the most part, I didn't have too much trouble with this - though it was definitely obvious that the arm64 (specifically sunxi64) architecture isn't a build target that's often supported by apt repository owners. This wasn't helped by the fact that apt has a habit of throwing really weird error messages when you try to install something that exists in an apt repository, but for a different architecture.

After I got Kodi installed, the next order of business was to get it to display on the screen. I ended up managing this (eventually) with the help of a lot of tutorials and troubleshooting, but the experience was really rather unpleasant. I kept getting odd errors, like failed to load driver sun4i-drm when trying to start Kodi via an X11 server and other strangeness.

The trick in the end was to force X11 to use the fbdev driver, but I'm not entirely sure what that means or why it fixed the issue.

Moving on, I then started to explore the other capabilities of the device. Here, too, I discovered a number of shortcomings in the software support provided by Linux, such as a lack of support for audio over HDMI and Bluetooth. I found the status matrix of the SunXI project, which is the community working to add support for the Allwinner H6 chipset to the Linux Kernel.

They do note that support for the H6 chipset is currently under development and incomplete - and I wish I'd checked on software support before choosing a device to purchase.

The other big problem I encountered was a lack of kernel headers provided by Armbian. Normally, you can install the headers for your kernel by installing the linux-headers-XXXXXX package with your favourite package manager, where XXXXXX is the same as the string present in the linux-image-XXXXXX package you've got installed that contains the kernel itself.

This is actually kind of a problem, because it means that you can't compile any software that calls kernel functions without the associated header files - preventing you from installing various dkms-based kernel modules that automatically recompile against the kernel you've got installed.

I ended up finding this forum thread, but the response from someone who I assume is an Armbian developer was less than stellar - they basically said that if you want kernel headers, you need to compile the kernel yourself! That's a significant undertaking, for those not in the know, and certainly not something that should be undertaken lightly.

While I've encountered a number of awkward issues that I haven't seen before, the device does have some good things worth noting. For one, it actually packs a pretty significant punch: it's much more powerful than a Raspberry Pi 3B+ (of which I have one; I bought this device before the Raspberry Pi 4 was released). This makes it an ideal choice for more demanding workloads, which a Raspberry Pi wouldn't quite be suitable for.

In conclusion, while it's a nice device, I can't recommend it to people just yet. Software support is definitely only half-baked at this point with some glaring holes (HDMI audio is one of them, which doesn't look like it's coming any time soon).

I think part of the problem is that Xunlong (the company that makes the device and others in its family) don't appear to be interested in supporting the community at all, choosing instead to dump custom low-quality firmware for people to use as blobs of binary code (which apparently doesn't work) - which causes the SunXI community a lot of extra work to reverse-engineer it all and figure out how it works before they can start implementing support in the Linux Kernel.

If you're interested in buying a similar embedded board, I can recommend instead using HackerBoards to find one that suits your needs. Don't forget to check for operating system support!

Found this interesting? Thinking of buying a board yourself? Had a different experience? Comment below!

Solo hardware security key review

Sometime last year (I forget exactly when), I came across a Kickstarter that promised the first open-source hardware security key with support for FIDO2. Since the people behind it had done this before for an older standard, I decided to back it.

Last week they finally arrived, and the wait was totally worth it! I got one with a USB Type-C connector (in yellow below), and one with a regular USB Type-A connector that also supports NFC (in red, for use with my phone).

Before I get into why they are so awesome, it's probably a good idea if we take a small step back and look at what a hardware security key does and why it does it.

My Solos!

In short, a hardware security key has a unique secret key baked into it that you can't extract. If I understand it correctly, this is sometimes known as a physically unclonable function (correct me in a comment if I'm wrong). It makes use of this secret key for authentication purposes by way of a chain of protocols, which are collectively known as FIDO2.

A diagram showing the different FIDO2 protocols. It's basically WebAuthn between browser and OS, and CTAP2 between OS and hardware security key

There are 2 important protocols here: WebAuthn, which the browser provides to web pages so they can interact with hardware devices, and CTAP2, which allows the browser to interface with the hardware security key through a channel that the operating system provides (be that over USB, NFC, Bluetooth, or some other means).
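
To give a rough idea of what that looks like from a web page's point of view, here's a minimal (and untested) sketch of the browser-side registration call. The challenge, relying party name, and user details below are all made-up placeholders - in a real application they'd come from the server:

```javascript
// Minimal sketch of the browser-side WebAuthn registration flow.
// The challenge, relying party details, and user info below are made-up
// placeholders - in a real application they'd be generated by the server.
async function registerSecurityKey() {
    const publicKey = {
        challenge: Uint8Array.from("challenge-from-server", c => c.charCodeAt(0)),
        rp: { name: "Example Site" },
        user: {
            id: Uint8Array.from("user-id-from-server", c => c.charCodeAt(0)),
            name: "user@example.com",
            displayName: "Example User"
        },
        pubKeyCredParams: [ { type: "public-key", alg: -7 } ] // -7 = ES256
    };

    // The browser handles WebAuthn; the OS then talks CTAP2 to the key itself.
    const credential = await navigator.credentials.create({ publicKey });
    console.log("New credential id:", credential.id);
    // credential.response would then be sent to the server for verification.
}
```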

FIDO2 is new. Like, very very new. To this end, browsers and websites don't yet have full support for it. Those that do aren't always enabled by default (in Firefox you've got to set security.webauth.u2f, security.webauth.webauthn, and security.webauth.webauthn_enable_usbtoken to true, though I think these will be set by default in a coming update), or they incorrectly 'detect' support by sniffing the user-agent string (cough I'm looking at you, GitHub and Facebook cough).
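
On that note, feature detection is pretty trivial - something like this hypothetical snippet is all that's needed, rather than sniffing the user-agent string:

```javascript
// Feature-detect WebAuthn support rather than sniffing the user-agent string.
if (window.PublicKeyCredential) {
    console.log("This browser supports WebAuthn - a hardware key should work.");
} else {
    console.log("No WebAuthn support - fall back to another 2-factor method.");
}
```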

Despite this, when it is supported it works fabulously. Solo goes a long way to making the process as painless as possible - supporting both CTAP (for the older U2F protocol) and CTAP2 (which is part of the FIDO2 protocol suite). It's designed well (though the cases on the NFC-enabled version, called the Solo Tap, are a bit on the snug side), and since it's open source you can both inspect and contribute to the firmware to improve the Solo and add new features for everyone to enjoy.

Extra features like direct access to the onboard TRNG (true random number generator) are really nice to have - and the promise of more features to come makes it even better. I'm excited to see what new capabilities my Solo will gain with future updates!

In the future I want to take a deeper dive into WebAuthn and implement support in applications I've written (e.g. Pepperminty Wiki). It looks like it might be quite complicated, but I'll post here when I've figured it out.

ASP.NET: First Impressions

Admittedly, I haven't really got too far into ASP.NET (core). I've only gone through the first few tutorials or so, and based on what I've found so far, I've decided that it warrants a full first impressions blog post.

ASP.NET is fascinating, because it takes .NET's design goals centred around developer efficiency and combines them with ideas from the likes of PHP to provide a framework with which one can write a web server. Such a combination makes for a promising start - providing developers with everything they need to rapidly create a web-based application that's backed by any one of a number of different types of database.

Coming part-and-parcel with the ASP.NET library is Entity Framework. Its purpose is to provide an easy mechanism by which developers can both create and query a database. I haven't really explored it much, but it appears to perform this task well.

If I were to criticise it, I'd probably say that the existing tutorials on how to use it are far too Windows and Visual Studio-oriented. Being a Linux user, I found it somewhat of a challenge to wade through the large amount of Visual Studio-specific parts of the tutorial and piece together how it actually works - independently of the automatic code generators built into Visual Studio.

This criticism, I've found, is a running theme throughout ASP.NET and ASP.NET Core. Even the official tutorials (which, although they say you can use Visual Studio Code on macOS and Linux, don't actually make any accommodations for users of anything other than Visual Studio) lean heavily on the inbuilt code and template generators - choosing to instruct you on how to make the absolute minimum number of changes to the provided templates in order to achieve the goal of the tutorial.

This, unfortunately, leaves the reader wondering precisely how ASP.NET core works under the hood. For example, what does services.AddDefaultIdentity<IdentityUser>().AddEntityFrameworkStores<ApplicationDbContext>(); do? Or what's an IdentityUser, and how do I customise it? Why isn't ASP.NET just a NuGet package I can import? None of these things are explained.

Being the kind of person who works from the ground up, I'm increasingly finding the "all that matters is that it works" approach that ASP.NET takes to (ironically enough) ease the experience for developers new to the library rather frustrating. For me, it's hard to work with something if I don't understand what it does and how it works - so I find a tutorial that leans heavily on templates and scaffolding (don't even get me started on that) confusing and unhelpful.

To an extent, I can compare my experience starting out with ASP.NET with my experience starting out with Android development in Java. Both experiences were rather painful, and both experiences were unpleasant because of the large amount of pre-generated template code.

Having said this, in Java's case there was the additional pain of learning a new language (even if it is similar to C♯), and the irritation of keeping a constant balance between compiler errors from not catching an exception, and being unable to find a bug because it's actually an exception that's been eaten somewhere I can't see.

Although ASP.NET doesn't have terrible exception handling rules, it does have its fair share of issues. Its equivalent, I guess, would be the number of undocumented and difficult-to-search bugs and issues one encounters when setting it up for the first time - both on Windows (with Microsoft's own Visual Studio!) and on Linux (though, to be fair, it's only .NET Core that has issues here). The complexity of the system and the lack of decent tutorials and documentation result in a confusing and irritating experience trying to get it to work (especially on Windows).

In conclusion, I'm finding ASP.NET to be a failed attempt at bringing the amazing developer efficiency from .NET to web development, but I suspect that this is largely down to me being inexperienced with it. Hampered by unhelpful tutorials, opaque black-boxed frameworks with blurred lines between library and template (despite the fact it's apparently open-source), and heavy tie-ins with Visual Studio, I think I'll be using other technologies such as Node.js to develop web-based projects in the future.

Redis in Review: First Impressions

The red Redis logo backed by a yellow voronoi diagram

(Above: The Redis logo backed by a voronoi diagram generated by my specially-upgraded voronoi diagram generator :D)

I've known about Redis for a while now - but I've never gotten around to looking into it until now. A bit of background: I'm building a user management panel thingy for this project (yes, again. I'll be blogging about it for a bit more yet before it's ready). Since I want it to be a separate, and optional, component that you don't have to run in order to use the main program, I decided to make it a separate program in a different language (Node.js is rather good at this sort of thing, I've found).

Anyway, whilst writing the login system, I found that I needed a place to store session tokens such that they persist. I considered a few different options here:

  1. A JSON file on disk - This wouldn't be particularly scalable if I found myself with lots of users. It also means I've got to read in and decode a bunch of JSON when my program starts up, which can get mighty fiddly. Furthermore, persisting it back to disk would also be rather awkward and inefficient.
  2. An SQL database of some description - Better, but still not great. The setup for a database is complicated, and requires a complex and resource-hungry process running at all times. Even if I used SQLite, I wouldn't be able to escape the complexities associated with database schemas - which I'd likely end up spending more time fiddling with than actually writing the program itself!

Then I remembered Redis. At first, I thought it was an in-memory data storage solution. While I was right, I discovered that there's a lot more to Redis than I first thought. For one, it does actually allow you to persist things to disk - and is very transparent about it too, allowing a great deal of control over how data is persisted to disk (also this post is a great read on the subject, even if it's a bit old now - it goes into a great deal of depth about how the operating system handles and caches writes to disk).

When it comes to actually storing your data, Redis provides just a handful of delightfully simple data structures for you to model your data with. Unlike a traditional SQL-based database, storing data in Redis requires a different way of thinking. Rather than considering how data relates to one another and trying to represent it as a series of interconnected tables, in Redis you store things in a manner that's much closer to that which you might find in the program that you're writing.

The core of Redis is a key-value store - rather like localStorage in the browser. Unlike localStorage though, the following data types are provided:

  • A simple string value
  • A hash-table
  • A list of values (which can be also be treated as a queue)
  • A set of unique strings (which can be optionally sorted)
  • Binary blobs (called bit arrays)
  • A HyperLogLog (apparently a probabilistic data structure of some kind)

The official Redis tutorial on the subject does a much better job of explaining this than I could (I have only been playing with it for about a week after all!), so I'd recommend you read it if you're curious.

All this has some interesting effects. Firstly, the data storage system can be more closely representative of the data structures it is storing, allowing for clever data-specific optimisations to be made in ways that simply aren't possible with a traditional SQL-based system. To this end, a custom index can be created in Redis via simple values to accelerate the process of finding a particular hash-table given a particular piece of identifying information.
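
To illustrate what I mean, here's a rough (and entirely hypothetical) sketch of that index pattern from Node.js, using the ioredis client - the key names are made up for illustration:

```javascript
// Rough sketch of the "simple value as an index" idea, using the ioredis
// client for Node.js. All of the key names here are made up for illustration.
const Redis = require("ioredis");
const redis = new Redis(); // connects to 127.0.0.1:6379 by default

async function createUser(id, username, email) {
    // Store the user itself as a hash-table...
    await redis.hmset(`user:${id}`, { username, email });
    // ...and keep a plain string key as an index from username -> id.
    await redis.set(`user:by-username:${username}`, id);
}

async function findUserByUsername(username) {
    const id = await redis.get(`user:by-username:${username}`);
    if (id === null) return null;
    return redis.hgetall(`user:${id}`); // the whole hash comes back as an object
}
```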

Secondly, it makes Redis very versatile. You could use it as a caching layer to store the results of queries made against a traditional SQL-based database. In that scenario, no persistence to disk would be required to make it as fast as possible. You could use it for storing session tokens and short-lived chunks of data, utilising the inbuilt EXPIRE command to tell Redis to automatically delete certain keys after a specified amount of time. You could use it to store all your data in and do away with an SQL server altogether. You could even do all of these things at the same time!
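
The session token case is the one I'm interested in, so here's another hypothetical Node.js / ioredis sketch of roughly what I have in mind - setting an expiry when the token is stored so that Redis cleans up after itself:

```javascript
// Hypothetical session-token storage: set an expiry when the token is written,
// and Redis will delete it automatically. Again uses ioredis; names are made up.
const Redis = require("ioredis");
const redis = new Redis();

const SESSION_LIFETIME = 60 * 60 * 24; // 1 day, in seconds

async function storeSession(token, userId) {
    // "EX" gives the key a time-to-live, just like a separate EXPIRE call would.
    await redis.set(`session:${token}`, userId, "EX", SESSION_LIFETIME);
}

async function lookupSession(token) {
    // Resolves to the user id, or null if the token has expired or never existed.
    return redis.get(`session:${token}`);
}
```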

When storing data in Redis, it's important to make sure you've got the right mindset. I've found that despite learning about the different commands and data structures Redis has to offer, it's taken me a few goes to figure out how I'm going to store my program's data in a way that both makes sense and makes it easy to get it back out again. I'm certain that it'll take quite a bit of practice (and trial and error!) to figure out what works and what doesn't.

It's not the kind of thing that would make you want to drop your database system like a hot potato and move everything to Redis before the week is out. But it is a tool to keep in mind that can be used to solve a class of problems that SQL-based database servers have traditionally been very bad at handling - in particular high-throughput systems that need to store lots of mainly short-lived data (e.g. session tokens, etc.).

Found this interesting? Got your own thoughts on Redis? Post a comment below!

#movingtogitlab: What's up, Thoughts, and First Impressions

I'm moving some of my repositories to GitLab! Read on to find out why.

You've probably heard by now that GitHub has been bought by Microsoft. It was certainly huge news at the time! While I did tweet at the time (also here too), I've been waiting until I've gotten all of the repositories I've decided to move over to GitLab settled in before blogging about the experience and sharing my thoughts.

While I've got some of them settled in (you can tell which ones I've done because they are public on GitLab and I've deleted the GitHub repository in most cases), I've still got a fair few to go - there should be 13 in total on GitLab once I'm finished.

Since it's taking me longer than I anticipated to properly update them after the transfer (which in and of itself was actually really quick and painless - thanks GitLab!), I thought I'd blog about the experience so far - as it's probably going to be a little while before I've got all my repositories sorted :P

What's up?

Firstly, Microsoft. Buying GitHub. Who'd have thought it? I know that I was certainly surprised. I even had to check multiple websites to triple-check that I wasn't seeing things.... GitHub have certainly done an excellent job of hiding the fact that they were looking for a buyer. Upon further inspection, it appears that the issues GitHub have been facing are twofold. Firstly, they've been without a boss of the company for quite a while, and from the way things are looking, they are having a little bit of trouble finding a new one. Secondly, there's been talk that they haven't been making enough money.

Let's back up a second. How does GitHub actually make money? Good question. The answer is surprisingly simple: GitHub Enterprise - which is a version of GitHub for businesses that they can host on their own servers for a significant fee. From what I can tell, it's targeted at medium-to-large companies.

If they aren't making enough money to sustain themselves, then something obviously needs to be done about that, or the github.com service that open-source developers the world over rely on could suddenly vanish overnight O.o! This presents GitHub with a very awkward problem. In this case, they chose to seek a buyer, who could then perhaps help them to sell GitHub Enterprise better.

Looking at the alternative companies, I think that GitHub could have done a lot worse than Microsoft. Alternatives include Apple (who operate a gigantic walled garden called macOS and iOS), IBM (International Business Machines, who sell big and powerful mainframes to other big and powerful businesses. Most of the time, we - or I at least - don't hear from them, unless they are showing off some research or other that they've just completed), and Facebook (who don't have a particularly great reputation right now). The problem is that they need a buyer with lots of money, because GitHub is itself a big company (so they'll be worth a lot of money, like it or not. Also, in order to sort out the money-making issue, they'll need cash in the meantime).

Microsoft have been quite active in the open-source scene as of late, and seem to really have changed their tune in recent years, with their open-sourcing of large parts of the .NET framework - and their code editor Visual Studio Code. While they aren't the best at naming things (I'm looking at you, .NET Core / .NET Standard / .....), they do appear to be more in line with GitHub's goals and ethics than the alternative companies discussed above.

What's the problem?

So why do I have a problem with this deal? Clearly, Microsoft are the best of the bunch to have bought GitHub. It could be a lot worse. Microsoft have even announced that GitHub can continue to operate as a separate entity. The new boss of GitHub did an Ask Me Anything on Reddit, and he seems like a really cool guy who genuinely wants the best for GitHub.

Well, it's complicated. For me, it all boils down to independence. GitHub is (or was) an independent company. To that end, they can make their own decisions, and aren't being told what to do - or at risk of being told what to do - by someone else. They aren't being influenced by someone else. That's not to say that GitHub will now be influenced by Microsoft (though they probably will be) - and it's not to say that that's necessarily a bad thing.

Coupled with some 85 million open-source repositories, 28 million developers using the service, and 1.8 million businesses utilising github.com, it's a huge responsibility. In order to effectively serve the community that has grown around github.com, I feel that GitHub has to remain impartial. It's absolutely essential. The number of different workflows, tools, programs, operating systems, and more that those 28 million developers use will be staggering - and I feel that only a completely independent GitHub will truly be able to meet the needs of that community. Microsoft is great for some things, but taking on GitHub is asking them to actively support users of one of their services who use their competitors' software, such as Linux - or devices running macOS.

What's the alternative then?

That's a huge ask, and I guess only time will tell whether they are able to pull it off. I find myself asking, though: what's the alternative? If they are losing money as the rumours say, then what could they do about it?

Obviously, I don't and can't know everything that's going on inside GitHub - I can only read marketing-y web articles, theorise, and make guesses. I'm sure they've tried it, but I don't see why they couldn't have entered a partnership with Microsoft instead. How about a partnership in which Microsoft helps sell GitHub Enterprise, and gets a cut of the profits in return? Microsoft have a much bigger base of enterprisey customers, so they'd be much better placed to do that part of things, and GitHub would be able to retain its independence.

Where does GitLab fit into this puzzle?

All of this brings me to GitLab. Similar to GitHub, GitLab offers free code hosting on gitlab.com, along with free continuous integration. Unlike GitHub though, GitLab's source code is open - and you can download and run your own instance if you like! They sell support packages to businesses - thereby making enough money to support the continued development of GitLab. I think they sell a few additional features with GitLab Enterprise too - but nothing that the average user would want, only businesses. They also do a free package for students and open-source developers - all whilst staying an independent company. Very cool.

As of today, I'm moving a total of 13 of my smaller repositories to GitLab. My reasoning here is that I really don't want to keep all my eggs in one basket. By moving some repositories to GitLab, I can ensure that if one or the other goes in a direction I seriously object to, I've got an alternative that I'm familiar with that I can move all my repositories to in a hurry.

I've used GitLab before. Last academic year (2016 / 2017) I interned at a local company, where I helped them move over to Git as their primary version-control system. To that end, they chose GitLab as the server they'd use for the job - and so I got quite a bit of experience with it!

I'd use it for my personal git server but unfortunately it uses far too many resources - which I can't currently dedicate to it full-time - so I use Gitea, a lighter-weight alternative.

How does GitLab compare?

Back to the matter at hand. The experience with gitlab.com and my open-source repositories has been a hugely positive one so far! The migration process itself was completely painless - it just took a while, since there were thousands of other developers doing the exact same thing I was :P

Updating my repositories and adjusting all the links & references has been a tedious process so far - but that's nothing that GitLab can really help me with - I've gotta do it myself :-) GitLab's documentation has been really helpful for those times when I've needed some assistance to figure something out, with plenty of GitLab CI examples available to get me started.

If there's one thing I'd change, it's the releases (or tags) system. The system itself is fine, but the way the tags are presented is horrible. The text doesn't even wrap properly! A /latest shortcut URL would be welcome too.

You can't attach release binaries to tags either, which is also rather annoying. There is a workaround though - you can link directly to the GitLab CI (Continuous-Integration) artifacts instead, which seems to work just fine for my purposes so far. If it doesn't work so well in the future, I'm probably going to have to find an alternative solution for hosting release binaries - at least until they implement the feature (if at all).

Other than that, the interface, while very different to GitHub's, does feel appropriately polished and suitably easy to use. If anything, I feel as though it's rather difficult to explore any of the existing repositories on GitLab - I hadn't realised until using GitLab just how useful the new repository tags GitHub have implemented are in exploring others' repositories. I'm sure that there's a 3rd-party website that enables one to explore the repositories on gitlab.com much more effectively - I just haven't found it yet.

The sidebar when viewing a repository is quite handy - but a little unwieldy at times. Similarly, the 2-3 layers of navigation directly below the repository tagline are also rather unwieldy, and it's difficult to tell their differing functions apart. Perhaps a rethink here would bring these 2 different parts of the user interface together in a more cohesive and usable fashion?

All in all, gitlab.com is a great service for hosting my open-source repositories. The continuous integration services provided are superb - and completely unparalleled in my (admittedly limited) experience. The interface is functional, for the most part, and gets the job done - there are just a few rough edges in places that could do with a slight rethink.

Enjoyed this? Have your own opinion about what's happened to GitHub? Found a better service? Comment below!

Markdown editors compared

Parts of the 3 markdown editors I'll be comparing in this post.

If you didn't know already, I write all my blog posts here in markdown. I've used several markdown editors over the years (wow it's strange to write that), and I thought I'd talk a little bit about the ones I've used, what I liked about them (and the things I didn't), and what my current preference is.

Firstly though, why would you want one? Couldn't you just use a regular text editor like Notepad++? Well, yes - but a dedicated editor has several benefits: Proper spell-checking for one, a live-preview for another, and other nice features that make the experience just that little bit better (I'm even writing one of my reports for University in Markdown, and I have to say that the experience is much more pleasurable than using Microsoft Word :P).

I like Markdown itself rather a lot too. First invented by John Gruber over on daringfireball.net, Markdown is a simple markup language that's inspired by the things people already do in instant messaging and other text-based mediums. It's designed to be both easy to read and understand on its own, and easy to write in - such that it doesn't break your flow as a writer by requiring you to look up how to apply that particular bit of formatting (I find myself having to do that with LaTeX and others a lot).

A Screenshot of StackEdit.

(Above: A Screenshot of StackEdit.)

The first contender up is StackEdit. It's an in-browser offering, which saves its data to your local machine (or the cloud). It comes with a number of nice features - apart from not having to install it, of course - such as synchronised scrolling in the live-preview, and a 'publish' button to send your document to a number of different destinations automatically.

Since I used it last (which was quite a while ago, actually), it appears to have received a sizeable update, updating the user-interface to be more polished and aesthetically pleasing, and adding a toggleable folder structure to the left-hand-side, amongst other things.

If you can't install anything or run portable programs from a flash drive, StackEdit would be my recommendation.

A screenshot of Classeur.

(Above: A Screenshot of Classeur.)

Next up on my list is Classeur. It's another browser-based offering, with many of the same features, just with a different UI. When I discovered it I was using StackEdit, and at the time the interface of Classeur was vastly superior.

The main thing I don't like about it is that it's 'freemium' - meaning that you get to keep about 100 documents on it, and then you either have to delete something or pay. While Markdown files are just text documents I could keep on my computer, if I'm going to use a browser-based solution I would prefer to keep them all in the same place (though I never did hit this limit :P).

A screenshot of me writing this post in ghostwriter. Meta!

More recently, now that I've got a travel-laptop running Linux (and not Chrome OS - as nice as that was, I ended up out-growing it), I've been using ghostwriter. It's a desktop application for both Windows and Linux. While it doesn't have synchronised scrolling for the live-preview as StackEdit does, it allows you to save your text files to your local disk (or other mounted partition!), and open them as you would a spreadsheet or any other file - in a way that you can't with a browser-based tool.

The interface is also highly customisable - if you don't like the built-in themes, you can write your own. You can also write your own stylesheet for exported documents too. In addition, it automatically detects multiple different markdown renderers that you may or may not have installed, allowing you to switch between them (and the inbuilt sundown processor) at will to get the exported document (e.g. HTML, PDF, EPUB, etc.) looking just the way you want it to.

For me, you can't beat the feeling of a native desktop application, so currently ghostwriter is my markdown editor of choice. If I can't use ghostwriter, I'll probably use StackEdit, with Classeur coming at the bottom of the pile.

If you're thinking of doing some writing, I'd highly suggest considering a proper markdown editor such as the ones I've mentioned here. If you're not familiar with markdown, fear not! It's easy to learn, and all 3 of the editors mentioned here have a quick-reference guide sidebar (or floating window) that you can enable to help you along.

Found this useful? Got a different editor of choice? Comment below!

Java: First Impressions

The logos of a few of the tools and language I've been using recently.

(Above: The Android, Android Studio, and Java logos. I don't own any of these - nor is this post endorsed by any of the entities represented here - they are just for illustrative purposes.)

I've been using Java pretty extensively recently, as I've been doing a module on Android development at University. It's a pretty interesting language, so I thought I'd share my first impressions here. Later on, in a separate post, I'll also talk a little bit about Kotlin, the new language Google is championing for development on the Android platform.

Firstly, Android Studio has made it really easy to get started. The code hinting / autocompletion is fairly intelligent, and provides enough support that it's not too much of a bother programming in a new environment that you've never seen before - lessening the burden of learning a new language.

It seems to me that the whole build process for Java applications has been greatly overcomplicated though. It's slow, and keeps throwing random errors - especially when I've only just opened Android Studio. This non-determinism proves especially challenging for beginners, such as myself - as sometimes there's no real way to know what's gone wrong (the error messages are not particularly helpful - I've seen several languages with much more helpful ones).

There seem to be a bunch of assumptions that the developers have made too about the user's setup and programming style - leading to confusing situations in which it doesn't work - but there's no real way to know why, as there aren't any obvious error messages.

Despite this, Java as a language has some interesting features. As a whole, I can definitely see where Microsoft got their inspiration for C♯ from, as it's very similar - just without a lot of the syntactical sugar I'm used to in C♯ that makes expressing complex data structures and algorithms much easier, such as getters and setters.

Particularly of note is the exception system. In Java, if you want to throw an exception, you have to add throws ExceptionName to the method signature. Since your main activity in Android contains overridden methods at the top level, this means that you have to use lots of try..catch blocks to trap exceptions and deal with them before they bubble up to higher levels - otherwise it's a compilation error!

While this can be helpful, I've found that it can lead to awkward bugs in which an exception is eaten higher up, and the default value that's returned by the method that eats the exception causes strange things to happen that aren't immediately obvious - and it's only when you check the log that you realise what happened.....

The other bothersome thing I've found is the deeply-nested folder structure that a Java project appears to generate for even the simplest of projects. This makes it a rather difficult and involved process to find any code outside of the IDE - which I often do because Android Studio is far too slow and bulky just to check on or reference something quickly.

Finally, the last issue that concerns me are the licensing issues that have plagued Java in recent years. If you haven't heard, Google and Oracle (the company that owns Java) have been in disagreement over licensing fees which Oracle claims Google should pay them because they used Java in the making of Android (which is an open-source project). If Oracle are going after Google over licensing fees for just using a language, then what does that say about any projects I do? It's not exactly confidence inspiring, that's for sure. I for one will be keeping as much of my code library out of the Java ecosystem as possible.

Java seems to be the kind of language with a lot of history. While some of this has led to innovations that have ultimately improved the language, I feel that as a language it's being bogged down by lots of bloat and unnecessary garbage that it could really do without. C♯ has done a brilliant job of cutting through this clutter and rubbish, creating a language that both works with you and is easy to understand (except .NET Standard and .NET Core, but that's a story for another time :P).

Change the way you think: Languages and Compilers in Review

Change the way you think.

I haven't seen it used recently, but when I first arrived at Hull University to start my degree, that's the phrase I saw on a number of posters about the place. I've been thinking about it a lot over the course of my degree - and I've found that it has rung true more than once. My understanding of programming has been transformed 3 times that I can count, and before I get to the Languages and Compilers review that this post is supposed to be about, I'd like to talk a little bit about that first.

The first time my understanding transformed was when I arrived in my first year. Up until that point, I'd been entirely self-taught - learning languages such as GML and later Javascript. All of a sudden, I was introduced to the concept of Object-Oriented Programming, and suddenly I understood that by representing things as classes and objects, it was possible to build larger programs without having them fall apart because they were a nightmare to maintain.

Then, when I was finishing up my year in industry, my understanding was transformed again. I realised the value of the experiences I'd had while I was out on my year in industry - they have not only shown me mechanisms by which a project can be effectively managed, but they have also given me a bit more of an idea of what I'd like to do when I finish my degree.

Finally, I think that Languages and Compilers is the third time I've changed the way I think about programming. It's transformed my understanding of how programming languages are built, how their compilers and interpreters work, and why things are the way they are. It's also shown me that there's no best programming language - only the best one for the task at hand.

With this in hand, I have the tools I need to pick up and understand a new language much more easily than I could before, by comparing its features to the ones I already know about. I've found a totally new way of looking at programming languages: looking at them not on their own, but at how their lexical style and paradigms compare to those employed by other languages.

If you're considering whether a degree in Computer Science is worth it, I'd say that if you're serious about programming for a living, then the answer is a resounding yes.

Have your own thoughts to add about (your?) CS degree? Have a question? Post a comment below!

Virtual Reality: A Review

A considerable number of different 3D glasses scattered around. (Above: A considerable number of stereo 3D glasses technologies. Can you name all the techniques shown here? Comment below!)

Yesterday I spent a morning experimenting with my University's latest stereo equipment as part of the Virtual Environments module I've been taking this semester. With all that I've seen, I wanted to write something about my experiences on here.

Virtual reality and 3D is something that I haven't really had the chance to experience very often. In fact, the last time I was truly able to experience 3D was also through my University - probably through the open day (I can't remember). I've also never had the experience of using a controller before - which I'll talk about later.

With this in mind, it was all a rather new experience for me. The first tech we looked at was a stereo projector with active Nvidia shutter glasses. They work by using a variant of LCD technology to block out each eye while the image for the other eye is being shown. To this end, they need to sync with the PC - hence their active nature - and the reason cinemas usually use clever circular polarising filters instead (especially since the screen must run at a minimum of 120Hz to avoid sickness and provide a reasonable experience).

Even so, the experience was quite amazing - even after seeing it once or twice before. With the additional knowledge about the way stereoscopic images are put together (using techniques such as parallax and concepts such as depth cues and depth budget), I found that I could appreciate what was going on much more than I could previously.

The head tracking that was paired with the shutter glasses was absolutely fascinating. If you were sitting in the seats in front of the stage you got a bunch of window violations and a pair of hurting eyes, but when you were on the stage with the tracked glasses, it was a whole different story. It was literally like a window into another world - made all the more real by the projection onto the floor!

We also took a look at the cave, as it's colloquially known - a variant on the stereo screen with 4 panels arranged as the sides of a cube, with pairs of projectors back-projecting onto each side - using the same infrared-based head tracking technology. This, too, was similarly cool - it has the ability to make you feel unsteady when looking down from the crow's nest of a large naval ship....

Though this is probably old news to most readers of this post, I found that the idea of using an Xbox controller to move the user around was quite a clever solution to the awkward issue that you can't walk around yourself much unless you like walking into invisible boxes wireframed in black. It certainly felt more natural than using a keyboard - which would have felt bulky and out-of-place. I'll be paying more attention to both controllers and other forms of alternative input when designing applications in future - as I've seen first-hand what a difference the appropriate form of input can make to the overall experience.

Until today, I've also been rather skeptical of Microsoft's HoloLens. Sorting through all the microsoft-speak and buzzwords is somewhat challenging - but the lectures we've had over the last 5 weeks helped with that :D The headset itself is actually plenty comfortable (especially compared to the Oculus Rift), and the head-tracking is astonishing - especially considering that it's all inside-out (as opposed to outside-in). The holograms really look like they're hovering in the environment around you - apart from the fact that they're clearly computer generated of course, and the gestures are actually pretty intuitive for how different the experience is to anything else I've experienced before.

The biggest problem though, as you're probably aware, is the small field-of-view. It's offset slightly by the fact that you can see around the hologram-enabled area, but it still causes frequent window-violations and only covers a fraction of your effective vision - which they don't appear to take any notice of in their marketing material (see the image below - the pair of people in the image can probably only see the very centre quarter of that thundercloud). If they can fix that - then I think that they may have something truly world-changing. It could be used for all sorts of applications - especially in engineering I think.

An image of a pair of people standing altogether far too close to a holographic thundercloud diagram.

The sound system built into it was cool too - I didn't manage to check, but I'm pretty sure only I could hear it, but it sure didn't sound like it! In the tutorial it really sounded like there was a voice coming from all around me - which leads me to think it might be programmable such that it appears to come from anywhere in the room - so you might even be able to have a conversation with a holographic projection of someone standing on the table in front of you (like Microsoft's holoportation demo).

Finally, we took a look at some of the things that the department have been doing with the Oculus Rift. VR is an experience on a whole 'nother level - and best experienced for one's self (it's really important to remember to clean the lenses in the headset thoroughly, and to spend some time aligning them precisely to your eyes, I found - otherwise everything will be blurry). I found the latter half of the (rather extensive) setup tutorial I went through later that day to test my ACW particularly immersive - to the point where you had to consciously remember where you were in the real world - personally I kept my leg just touching the edge of my chair to remind me! Though the audio wasn't as good as the HoloLens (see above), it was still adequate for the task at hand.

While I was running through the first-use setup tutorial, it was quite clear that it's a Facebook product - you have to create an account (or sign in with Facebook), set privacy settings, and do a few other things it hinted at during the setup (I was interested in testing the code I'd written, so I didn't explore the consumer side of the device). If you're concerned about privacy, then the Oculus Rift is certainly not for you. Thankfully there are lots of other virtual reality headsets around to investigate instead :-)

The controllers made for an interesting experience too - they were a clever solution to the awkward problem that they couldn't track your hand as well as they'd need to in order to display it fully in virtual reality (Microsoft had it easy with the gestures for their HoloLens, apparently) - and they didn't end up breaking immersion too badly in the tutorial by roughly simulating your hand position based on which buttons and triggers you had pressed down. Definitely much better than a keyboard in this instance, since you couldn't even feel where the keyboard was in virtual reality - let alone find the keys on the keyboard to press, and that's not even mentioning the loss of movement and rotation you'd experience.

In conclusion, my views on stereo 3D, VR, and input methods have all been changed in a single day - which I think is pretty good going! Stereo 3D and virtual reality are never going to go away - the potential behind them is just far too tempting not to play around with. Designing applications for VR is going to be a challenge for many developers, I think - since an understanding of depth cues and immersion is essential to designing effective experiences that don't make you feel sick. We can't leave the real world behind with VR yet (walking into a chair or table is an unpleasant experience), but what we've got right now is absolutely astonishing.

The other side of the fence: A Manjaro review

One of the default Manjaro wallpapers. (Above: One of the default Manjaro wallpapers.)

Sorry for the delay! I've had rather a lot to do recently - including set up the machine I'm using to write this blog post.

For a while now, I've been running Ubuntu on my main laptop. After making the switch from Windows 7, I haven't looked back. Recently though, a friend of mine suggested I check out Manjaro - another distribution of Linux, based on Arch Linux. After setting it up on a secondary machine and playing around with it, I rather like it, actually - and I've decided to write a post about my experiences coming from Ubuntu.

Like most things, I've got multiple different reasons for playing around with Manjaro. Not least of which is to experience a different ecosystem and a different way of doing things - namely the Arch Linux ecosystem. To that end, I've selected the OpenRC init system - since I've got experience with Systemd already, I feel it's essential to gain experience with other technologies.

With my preferences selected, I fired up manjaro-architect (available on the Manjaro website, which is linked above) and began the installation. I quickly found that the installation was not a simple process - requiring several reboots to get the options just right. In particular, the partitioning tools available are somewhat limited - such that I had to boot into a live Ubuntu environment to sort them out to get a dual boot setup working correctly.

On the other side, the installer allows the configuration of so many more options, like the mount options of the partitions, the kernel to use and its associated modules, the init system that is used, and the desktop environment you want to use (I've picked XFCE). During the install process I've learnt about a bunch of different things that I had no idea about before.

After installation, I then started on the long task of configuring it to my liking. I'm still working on that, but I'm constantly amazed at the level of flexibility it offers. Nearly everything can be customised - including all the title bar graphics and the ordering and position of everything on the task bar (called a panel in XFCE).

I've found OpenRC an interesting learning experience too. It's very similar to upstart - another init system I used before Ubuntu switched to systemd. As a result, it's so much simpler to get my head around. It feels a lot more.... transparent than systemd, which is a good thing I think. I do miss a few of the features that systemd offers, however. In time, though, I'm sure that I'll find alternative ways of doing things - different projects do have different ways of thinking, after all!

The concept of the AUR (the Arch User Repository) is possibly one of my favourite things out of all the things I've encountered so far. It's a community-driven archive of packages, but instead of containing the package binaries themselves, each package contains instructions to fetch, build, and install said package.

This approach requires much less maintenance, I suspect, and makes it much easier to stay up to date with things. The install process for a package from the AUR is a little complex, sure, but so much easier and more automated than doing it by hand. It's like taking the benefits of downloading an installer manually from a program's website like you have to on Windows, and combining it with the ease of use and automation that comes with package managers like apt (Debian-based distributions) and pacman / yaourt (Arch Linux-based distributions).

In short, Manjaro is a breath of fresh air, and very different to what I've tried before. While it's certainly not for the Linux beginner (try Ubuntu or Linux Mint if you're a beginner!) - especially the installer - I think it fulfils a different purpose for me at least: a platform from which to explore the Arch Linux ecosystem in relative comfort, and dive deeper into the way that all the different parts of a Linux system interact with each other.
