Proteus VIII Laptop from PC Specialist in Review

Recently I bought a new laptop from PC Specialist. Unfortunately I've lost the original quote / specs that were sent to me, but it was a Proteus VIII. It has the following specs:

  • CPU: Intel i7-10875H
  • RAM: 32 GiB DDR4 2666MHz
  • Disk: 1 TiB SSD (M.2; nvme)
  • GPU: Nvidia GeForce RTX 2060

In this post, I want to give a review now that I've had the device for a short while. I'm still experiencing some teething issues (more on those later), but I've experienced enough of the device to form an opinion on it. This post will also serve as a sort-of review of the installation process of Ubuntu too.

It arrived in good time - thankfully I didn't have any issues with their choice of delivery service (DPD in my area have some problems). I did have to wait a week or 2 for them to build the system, but I wasn't in any rush so this was fine for me. The packaging it arrived in was ok. It came in a rather large cardboard box, inside which there was some plastic padding (sad face), inside which there was another smaller cardboard box. Work to be done in the eco-friendly department, but on the whole good here.

I ordered without an operating system, as my preferred operating system is Ubuntu (the latest version is currently 20.10 Groovy Gorilla). The first order of business was the OS installation. This went fine - but only after I could actually get the machine to boot! It turns out that despite it appearing to have support for booting from USB flash drives as advertised in the boot menu, this feature doesn't actually work. I tried the following:

  • The official Ubuntu ISO flashed to a USB 3 flash drive
  • A GRUB installation on a USB 3 flash drive
  • A GRUB installation on a USB 2 flash drive
  • Ubuntu 20.10 burned to a DVD in an external DVD drive (ordered with the laptop)

....and only the last one worked. I've worked with a diverse range of different devices, but never have I encountered one that completely refused to boot at all from a USB drive. Clearly some serious work is required on the BIOS. The number of different settings in the BIOS was also somewhat limited compared to other systems I've poked around on, but I can't give any specific examples here of things that were missing (other than a setting to toggle the virtualisation extensions, which was on by default) - so I guess it doesn't matter all that much. The biggest problem is the lack of USB flash drive boot support - that was really frustrating.
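
For reference, flashing the official ISO to a flash drive is normally done something along these lines (a sketch only: /dev/sdX is a placeholder, so check the real device name with lsblk first, or you risk overwriting the wrong disk):

# Identify the flash drive first - the device name below is only an example!
lsblk
# Write the Ubuntu ISO straight to the (unmounted) drive
sudo dd if=ubuntu-20.10-desktop-amd64.iso of=/dev/sdX bs=4M status=progress conv=fsync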

When installing Ubuntu this time around, I decided to try enabling LVM (Logical Volume Management, it's very cool I've discovered) and a LUKS encrypted hard drive. Although I've encountered these technologies before, this will be my first time using them regularly myself. Thankfully, the Ubuntu installer did a great job of setting this up automatically (except the swap partition, which was too small to hibernate, but I'll talk about that in a moment).
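
If you're curious what the installer actually set up, the layout is easy to inspect from a terminal afterwards - a quick sketch (device and volume names will vary from machine to machine):

lsblk -f        # overview: the LUKS container, with the LVM volumes nested inside it
sudo pvs        # the unlocked LUKS mapping acting as the LVM physical volume
sudo vgs        # the volume group the installer created
sudo lvs        # the root and swap logical volumes
swapon --show   # confirms which volume is in use as swap, and how big it is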

Once installed, I got to doing the initial setup. I'm particularly picky here - I use the Unity 7.5 Desktop (yes, I know Ubuntu now uses the GNOME shell, and no I haven't yet been able to get along with it). I'll skip over the details of the setup here, as it's not really relevant to the review. I will mention though that I'm also using X11, not Wayland at the moment - and that I have the proprietary Nvidia driver installed (version 450 at the time of typing).

Although I've had a discrete graphics card before (most recently an AMD Radeon R7 M445, and an Nvidia 525M), this is the first time I've had one that's significantly more powerful than the integrated graphics built into the CPU. My experience with this so far is mostly positive (it's rather good at rendering in Blender, but I have yet to stress it significantly), and in some graphical tests it gives significantly higher frame rates than the integrated graphics. If you use the proprietary graphics drivers, I recommend going into the Nvidia X server settings (accessed through the launcher) → PRIME Profiles, changing it to "On-Demand", and then rebooting. This will prolong your battery life and reduce the noise from the fans by using the integrated graphics by default, but allow you to run select applications on the GPU (see my recent post on how to do this).
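
With the On-Demand profile active, the gist of running an individual application on the Nvidia card is a couple of environment variables (this assumes the proprietary driver's render offload support, which I believe arrived around driver version 435 - so the 450 series should be fine; glxinfo comes from the mesa-utils package):

# Check which GPU is doing the rendering by default (should report Intel / Mesa)
glxinfo | grep "OpenGL renderer"
# Run a specific program on the Nvidia GPU instead
__NV_PRIME_RENDER_OFFLOAD=1 __GLX_VENDOR_LIBRARY_NAME=nvidia glxinfo | grep "OpenGL renderer"
__NV_PRIME_RENDER_OFFLOAD=1 __GLX_VENDOR_LIBRARY_NAME=nvidia blender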

It's not without its teething issues though. I think I'm just unlucky, but I have yet to setup a system with an Nvidia graphics card where I haven't had some kind of problem. In this case, it's screen flickering. To alleviate this somewhat, I found and followed the instructions in this Ask Ubuntu Answer. I also found I had to enable the Force synchronization between X and GLX workaround (and maybe another one as well, I can't remember). Even with these enabled, sometimes I still get flickering after it resumes from suspension / stand by.

Speaking of stand by mode, I've found that this laptop does not like hibernation at all. I'm unsure as to whether this is just because I'm using LVM + LUKS, or whether it's an issue with the device more generally, but if I try sudo pm-hibernate from the terminal, the screen flashes a bit, the mouse cursor disappears, and then the fan spins up - with the screen still on and all my windows apparently still open.
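
A quick way to see whether hibernation even stands a chance is to compare the swap size against the amount of RAM, and to check that the kernel knows where it's supposed to resume from - something like this:

free -h                                # compare the Mem total with the Swap total
swapon --show                          # hibernation generally wants swap at least as big as RAM
grep -o "resume=[^ ]*" /proc/cmdline   # the resume= kernel parameter should point at the swap volume
cat /sys/power/state                   # the sleep states the kernel thinks this machine supports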

I haven't experimented with the quirks / workarounds provided yet, but I guess this ties into the earlier BIOS problems - there are clearly some issues there that need resolving.

This hibernation issue also ties into the upower subsystem, in that even if you tell it (in both the Unity and GNOME desktop shells) to "do nothing" on low battery, it will forcefully turn the device off - even if you're in the middle of typing a sentence! I think this is because upower doesn't seem to have an option for suspend or "do nothing" in /etc/UPower/UPower.conf or something? I'm still investigating this issue (if you have any suggestions, please do get in touch!).
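
For anyone else digging into this: the relevant settings seem to live in /etc/UPower/UPower.conf, and as far as I can tell the critical battery action there only accepts PowerOff, Hibernate, or HybridSleep - there's no plain suspend or "do nothing" value, which would explain the forced shutdown. A sketch of what I've been poking at:

# See what the daemon is currently configured to do at critical battery
grep -E "^(PercentageAction|CriticalPowerAction)" /etc/UPower/UPower.conf
# Switch the action to HybridSleep (arguably the least destructive of the allowed values),
# then restart the daemon so the change takes effect
sudo sed -i 's/^CriticalPowerAction=.*/CriticalPowerAction=HybridSleep/' /etc/UPower/UPower.conf
sudo systemctl restart upower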

Despite these problems, the build quality seems good. It's certainly nice having a metal frame, as it feels a lot more solid than my previous laptop. The keyboard feels great too - the feedback from pressing the keys enhances the feeling of a solid frame. The keyboard is backlit too, which makes for a more pleasant experience in dimly lit rooms (though proper lighting is a must in any workspace).

The layout of the keyboard feels a little odd to me. It's a UK layout, yes (I use a UK keyboard myself), but it doesn't have dedicated Home / End / Page Up / Page Down keys - these are built into the number pad at the right-hand side of the keyboard. It's taken some getting used to, as I have to toggle num lock every time I want to use these keys, which increases cognitive load.

It does have a dedicated SysRq key though (which my last laptop didn't have), so now I can follow articles like this one and use the SysRq feature to talk to the Linux kernel directly in case of a lock-up or crash (I have had the screen freeze on me once or twice - I later discovered this was because it had attempted to hibernate and failed, and I also ran into this problem, which I have yet to find a resolution to), or in case I accidentally set off a program that eats all of the available RAM.
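
The magic SysRq key is only partially enabled on Ubuntu by default (kernel.sysrq is a restrictive bitmask), so it's worth checking before relying on it in a crash - roughly:

cat /proc/sys/kernel/sysrq        # 1 means everything is enabled; Ubuntu ships a more restrictive bitmask
echo "kernel.sysrq=1" | sudo tee /etc/sysctl.d/99-sysrq.conf
sudo sysctl --system              # reload the sysctl configuration
# When the machine locks up: hold Alt+SysRq and slowly press r, e, i, s, u, b to kill
# processes, sync the discs, and reboot cleanly; Alt+SysRq+f invokes the OOM killer
# to deal with a runaway program eating all the RAM.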

The backlight of the keyboard goes from red at the left-hand side to green in the middle, and blue at the right-hand side. According to the PC Specialist forums, there's a driver that you can install to control this, but the installation seems messy - and would probably need recompiling every time you install a new kernel, since DKMS (Dynamic Kernel Module Support) isn't used. I'm ok with the default for now, so I haven't bothered with this.
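
For comparison, the modules that DKMS is already handling - and therefore rebuilding automatically whenever a new kernel lands - can be listed like so (the Nvidia driver should show up here), which is exactly the treatment the keyboard backlight driver would miss out on:

dkms status    # lists registered modules and the kernels they've been built against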

The touchpad does feel ok. It supports precision scrolling, has a nice feel to it, and isn't too small, so I can't complain about it.

The laptop doesn't have an inbuilt optical drive, which is another first for me. I don't use optical disks often, but it was nice having a built-in drive for this in previous laptops. An external one just feels clunky - but I guess I can't complain too much because of the extra components and power that are built-in to the system.

The airflow of the system - as far as I can tell so far - is very good. Air comes in through the bottom, and is then pushed out again through the back and the back of the sides by 2 different fans. These fans are, however, rather noisy at times - and have taken some getting used to, as my previous Dell laptop's fans were near silent until I started to stress the system. The noise they make is also slightly higher pitched, which makes it more noticeable - and sounds like a jet engine (though I admit I've never heard a real one in person, and I'm also somewhat hypersensitive to sound) when at full blast. Curiously, there's a dedicated key on the keyboard that - as far as I can tell - toggles between the normal on-demand fan mode and locking the fans at full blast. Great to quickly cool down the system if the fans haven't kicked in yet, but not so great for your ears!

I haven't tested the speakers much, but from what I can tell they are appropriately placed in front of the keyboard just before the hinge for the screen - which is a much better placement than on the underside at the front in my last laptop! Definitely a positive improvement there.

I wasn't sure based on the details on the PC Specialist website, but the thickness of the base is 17.5mm at the thickest point, and 6mm for the screen - making ~23.5mm in total (although my measurements may not be completely accurate).

To summarise, the hardware I received was great - overlooking a few pain points such as the BIOS and poor keyboard layout decisions. Some work is still needed on environmental issues and sustainability, but packaging was on the whole ok. Watch out for the delivery service, as my laptop was delivered by DPD who don't have a great track record in my area.

Overall, the hardware build quality is excellent. I'm not sure if I can recommend them yet, but if you want a new PC or laptop they are certainly not a bad place to look.

Found this helpful? Got a suggestion? Want to say hi? Comment below!

ASP.NET: First Impressions

Admittedly, I haven't really got too far into ASP.NET (core). I've only gone through the first few tutorials or so, and based on what I've found so far, I've decided that it warrants a full first impressions blog post.

ASP.NET is fascinating, because it takes the design goals centred around developer efficiency and combines them with the likes of PHP to provide a framework with which one can write a web-server. Such a combination makes for a promising start - providing developers with everything they need to rapidly create a web-based application that's backed by any one of a number of different types of database.

Part-and-parcel with the ASP.NET library comes Entity Framework. Its purpose is to provide an easy mechanism by which developers can both create and query a database. I haven't really explored it much, but it appears to perform this task well.

If I were to criticise it, I'd probably say that the existing tutorials on how to use it are far too Windows and Visual Studio-oriented. Being a Linux user, I found it somewhat of a challenge to wade through the large amount of Visual Studio-specific parts of the tutorial and piece together how it actually works - independently of the automatic code generators built-in to Visual Studio.

This criticism, I've found, is a running theme throughout ASP.NET and ASP.NET Core. Even the official tutorials (which, although they say you can use Visual Studio Code on macOS and Linux, don't actually make any accommodations for users of anything other than Visual Studio) lean heavily on the inbuilt code and template generators - choosing to instruct you on how to make the absolute minimum amount of changes to the provided templates in order to achieve the goal of the tutorial.

This, unfortunately, leaves the reader wondering precisely how ASP.NET core works under the hood. For example, what does services.AddDefaultIdentity<IdentityUser>().AddEntityFrameworkStores<ApplicationDbContext>(); do? Or what's an IdentityUser, and how do I customise it? Why isn't ASP.NET just a NuGet package I can import? None of these things are explained.

Being the kind of person who works from the ground up, I'm increasingly finding the "all that matters is that it works" approach ASP.NET takes - intended, ironically enough, to ease the experience for developers new to the library - rather frustrating. For me, it's hard to work with something if I don't understand what it does and how it works - so a tutorial that leans heavily on templates and scaffolding (don't even get me started on that) is confusing and unhelpful.

To an extent, I can compare my experience starting out with ASP.NET with my experience starting out with Android development in Java. Both experiences were rather painful, and both experiences were unpleasant because of the large amount of pre-generated template code.

Having said this, in Java's case, there was the additional pain from learning a new language (even if it is similar to C♯), and the irritation of keeping a constant balance between compilation errors from not catching an exception, and being unable to find a bug because it's actually an exception that's been eaten somewhere that I can't see.

Although ASP.NET doesn't have terrible exception handling rules, it does have its fair share of issues. Its equivalent, I guess, would be the number of undocumented and difficult-to-search bugs and issues one encounters when setting it up for the first time - both on Windows (with Microsoft's own Visual Studio!) and on Linux (though, to be fair, it's only .NET Core that has issues here). The complexity of the system and the lack of decent tutorials and documentation result in a confusing and irritating experience trying to get it to work (especially on Windows).

In conclusion, I'm finding ASP.NET to be a failed attempt at bringing the amazing developer efficiency from .NET to web development, but I suspect that this is largely down to me being inexperienced with it. Hampered by unhelpful tutorials, opaque black-boxed frameworks with blurred lines between library and template (despite the fact it's apparently open-source), and heavy tie-ins with Visual Studio, I think I'll be using other technologies such as Node.js to develop web-based projects in the future.

#movingtogitlab: What's up, Thoughts, and First Impressions

I'm moving some of my repositories to GitLab! Read on to find out why.

You've probably heard by now that GitHub has been bought by Microsoft. It was certainly huge news at the time! While I did tweet at the time (also here too), I've been waiting until I've gotten all of the repositories I've decided to move over to GitLab settled in before blogging about the experience and sharing my thoughts.

While I've got some of them settled in (you can tell which ones I've done because they are public on GitLab and I've deleted the GitHub repository in most cases), I've still got a fair few to go - there should be 13 in total on GitLab once I'm finished.

Since it's taking me longer than I anticipated to properly update them after the transfer (which in and of itself was actually really quick and painless - thanks GitLab!), I thought I'd blog about the experience so far - as it's probably going to be a little while before I've got all my repositories sorted :P

What's up?

Firstly, Microsoft. Buying GitHub. Who'd have thought it? I know that I was certainly surprised. I even had to check multiple websites to triple-check that I wasn't seeing things.... GitHub have certainly done an excellent job of hiding the fact that they were looking for a buyer. Upon further inspection, it appears that the issues GitHub have been facing are twofold. Firstly, they've been without a boss of the company for quite a while, and from the way things are looking, they are having a little bit of trouble finding a new one. Secondly, there's been talk that they haven't been making enough money.

Let's back up a second. How does GitHub actually make money? Good question. The answer is surprisingly simple: GitHub Enterprise - which is a version of GitHub for businesses that they can host on their own servers for a significant fee. From what I can tell, it's targeted at medium-to-large companies.

If they aren't making enough money to sustain themselves, then something obviously needs to be done about that, or the github.com service that open-source developers the world over rely on could suddenly vanish overnight O.o! This presents GitHub with a very awkward problem. In this case, they chose to seek a buyer, who could then perhaps help them to sell GitHub Enterprise better.

Looking at the alternative companies, I think that GitHub could have done a lot worse than Microsoft. Alternatives include Apple (who operate a gigantic walled garden called macOS and iOS), IBM (International Business Machines, who sell big and powerful mainframes to other big and powerful businesses. Most of the time, we - or I at least - don't hear from them, unless they are showing off some research or other that they've just completed), and Facebook (who don't have a particularly great reputation right now). The problem is that they need a buyer with lots of money, because GitHub is itself a big company (so it'll be worth a lot of money, like it or not - and in order to sort out the money-making issue, they'll need cash in the meantime).

Microsoft have been quite active in the open-source scene as of late, and seem to really have changed their tune in recent years, with their open-sourcing of large parts of the .NET framework - and their code editor Visual Studio Code. While they aren't the best at naming things (I'm looking at you, .NET Core / .NET Standard / .....), they do appear to be more in-line with GitHub's goals and ethics than the alternative companies discussed above.

What's the problem?

So why do I have a problem with this deal? Clearly, Microsoft are the best of the bunch to have bought GitHub. It could be a lot worse. Microsoft have even announced that GitHub can continue to operate as a separate entity. The new boss of GitHub did an Ask Me Anything on reddit, and he seems like a really cool guy, and genuinely wants the best for GitHub.

Well, it's complicated. For me, it all boils down to independence. GitHub is (or was) an independent company. To that end, they can make their own decisions, and aren't being told what to do - or at risk of being told what to do - by someone else. They aren't being influenced by someone else. That's not to say that GitHub will now be influenced by Microsoft (though they probably will be) - and it's not to say that it's a bad thing.

Coupled with some 85 million open-source repositories, 28 million developers using the service, and 1.8 million businesses utilising github.com, it's a huge responsibility. In order to effectively serve the community that has grown around github.com, I feel that GitHub has to remain impartial. It's absolutely essential. The number of different workflows, tools, programs, operating systems, and more that those 28 million developers use will be staggering - and I feel that only a completely independent GitHub will truly be able to meet the needs of that community. Microsoft is great for some things, but taking on GitHub is asking them to actively support users of one of their services using their competitors' software, such as Linux - or devices running macOS.

What's the alternative then?

That's a huge ask, and I guess only time will tell whether they are able to pull it off. I find myself asking though: what's the alternative? If they are losing money as the rumours say, then what could they do about it?

Obviously, I don't and can't know everything that's going on inside GitHub - I can only read marketing-y web articles, theorise, and make guesses. I'm sure they've tried it, but I don't see why they couldn't enter a partnership with Microsoft instead. How about a partnership in which Microsoft helps sell GitHub Enterprise, and gets a cut of the profits in return? Microsoft have a much bigger base of enterprisey customers, so they'd be much better placed to do that part of things, and GitHub would be able to retain its independence.

Where does GitLab fit into this puzzle?

All of this brings me to GitLab. Similar to GitHub, GitLab offers free code hosting on gitlab.com, along with free continuous integration. Unlike GitHub though, GitLab's source code is open - and you can download and run your own instance if you like! They sell support packages to businesses - thereby making enough money to support the continued development of GitLab. I think that they sell a few additional features with GitLab Enterprise too - but they aren't anything that the average user would want - only businesses. They also do a free package for students and open-source developers - all whilst staying an independent company. Very cool.

As of today, I'm moving a total of 13 of my smaller repositories to GitLab. My reasoning here is that I really don't want to keep all my eggs in one basket. By moving some repositories to GitLab, I can ensure that if one or the other goes in a direction I seriously object to, I've got an alternative that I'm familiar with that I can move all my repositories to in a hurry.

I've used GitLab before. Last academic year (2016 / 2017) I interned at a local company, at which I helped to move over to Git as their primary version-control system. To that end, they chose GitLab as the server they'd use for the job - and so I got quite a bit of experience with it!

I'd use it for my personal git server, but unfortunately it needs far more resources than I can currently dedicate to it full-time - so I use Gitea, a lighter-weight alternative.

How does GitLab compare?

Back to the matter at hand. The experience with gitlab.com and my open-source repositories has been a hugely positive one so far! The migration process itself was completely painless - it just took a while since there were thousands of other developers doing the exact same thing I was :P

Updating my repositories and adjusting all the links & references has been a tedious process so far - but that's nothing that GitLab can really help me with - I've gotta do it myself :-) GitLab's documentation has been really helpful for those times when I've needed some assistance to figure something out, with plenty of GitLab CI examples available to get me started.

If there's one thing I'd change, it's the releases (or tags) system. The system itself is fine, but the way the tags are presented is horrible. The text doesn't even wrap properly! A /latest shortcut URL would be welcome too.

You can't attach release binaries to tags either, which is also rather annoying. There is a workaround though - you can link directly to the GitLab CI (Continuous-Integration) artifacts instead, which seems to work just fine for my purposes so far. If it doesn't work so well in the future, I'm probably going to have to find an alternative solution for hosting release binaries - at least until they implement the feature (if at all).

Other than that, the interface, while very different to GitHub's, does feel appropriately polished and suitably easy to use. If anything, I feel as though it's rather difficult to explore any of the existing repositories on GitLab - I hadn't realised until using GitLab just how useful the new repository tags GitHub have implemented are in exploring others' repositories. I'm sure that there's a 3rd-party website that enables one to explore the repositories on gitlab.com much more effectively - I just haven't found it yet.

The sidebar when viewing a repository is quite handy - but a little unwieldy at times. Similarly, the 2-3 layers of navigation directly below the repository tagline are also rather unwieldy, and difficult to tell their differing functions apart. Perhaps a rethink here would bring these 2 different parts of the user interface together in a more cohesive and usable fashion?

All in all, gitlab.com is a great service for hosting my open-source repositories. The continuous integration services provided are superb - and completely unparalleled in my (admittedly limited) experience. The interface is functional, for the most part, and gets the job done - there are just a few rough edges in places that could do with a slight rethink.

Enjoyed this? Have your own opinion about what's happened to GitHub? Found a better service? Comment below!

Java: First Impressions

The logos of a few of the tools and languages I've been using recently.

(Above: The Android, Android Studio, and Java logos. I don't own any of these - nor is this post endorsed by any of the entities represented here - they are just for illustrative purposes.)

I've been using Java pretty extensively recently, as I've been doing a module on Android development at University. It's a pretty interesting language, so I thought I'd share my first impressions here. Later on in a separate post, I'll also talk a little bit about Kotlin, the language Google is now championing for development on the Android platform.

Firstly, Android Studio has made it really easy to get started. The code hinting / autocompletion is fairly intelligent, and provides enough support that it's not too much of a bother programming in a new environment that you've never seen before - lessening the burden of learning a new language.

It seems to me that the whole build process for Java applications has been greatly overcomplicated though. It's slow, and keeps throwing random errors - especially when I've only just opened Android Studio. This non-determinism proves especially challenging for beginners, such as myself - as sometimes there's no real way to know what's gone wrong (the error messages are not particularly helpful - I've seen several languages with much more helpful ones).

There seem to be a bunch of assumptions that the developers have made too about the user's setup and programming style - leading to confusing situations in which it doesn't work - but there's no real way to know why, as there aren't any obvious error messages.

Despite this, Java as a language has some interesting features. As a whole, I can definitely see where Microsoft got their inspiration for C♯ from, as it's very similar - just without a lot of the syntactical sugar I'm used to in C♯ that makes expressing complex data structures and algorithms much easier, such as getters and setters.

Particularly of note is the exception system. In Java, if you want to throw a checked exception, you have to add throws ExceptionName to the method signature. Since your main activity in Android contains overridden methods at the top level, this means that you have to use lots of try..catch blocks to trap exceptions and deal with them before they bubble up to higher levels - otherwise it's a compilation error!

While this can be helpful, I've found that it can lead to awkward bugs in which an exception is eaten higher up, and the default value that's returned by the method that eats the exception causes strange things to happen that aren't immediately obvious - and it's only when you check the log that you realise what happened.....

The other bothersome thing I've found is the deeply-nested folder structure that a Java project appears to generate for even the simplest of projects. This makes it a rather difficult and involved process to find any code outside of the IDE - which I often do because Android Studio is far too slow and bulky just to check on or reference something quickly.

Finally, the last issue that concerns me are the licensing issues that have plagued Java in recent years. If you haven't heard, Google and Oracle (the company that owns Java) have been in disagreement over licensing fees which Oracle claims Google should pay them because they used Java in the making of Android (which is an open-source project). If Oracle are going after Google over licensing fees for just using a language, then what does that say about any projects I do? It's not exactly confidence inspiring, that's for sure. I for one will be keeping as much of my code library out of the Java ecosystem as possible.

Java seems to be the kind of language with a lot of history. While some of this has led to innovations that have ultimately improved the language, I feel that as a language it's being bogged down by lots of bloat and unnecessary garbage that it could really do without. C♯ has done a brilliant job of cutting through this clutter and rubbish, creating a language that both works with you and is easy to understand (except .NET Standard and .NET Core, but that's a story for another time :P).

Prolog: First Impressions (or a basic tutorial on the first lab session)

The new learning prolog banner!

Yesterday I had my first artificial intelligence lab session on Prolog. It didn't start off very well. The instructions we were given didn't seem to have been tested and were largely useless. Thankfully one of the demonstrators came over to help and showed me the basics.

This blog post is part review, part first impressions, and part tutorial. Its purpose is to consolidate the things that I have learnt. Hopefully it will be useful to someone else out there :) Also note, if you haven't figured it out by now, I am a complete beginner with Prolog. There will be mistakes! If you spot one, please comment below so that I can fix it :)

My first impression of Prolog is that it is hard. It is very hard to learn. This isn't helped by the fact that the tools available to the new Prolog programmer are very out of date and feel exceptionally clunky. Perhaps a redesign is in order?

Anyway, I discovered at first that Prolog is very much like a detective. You give it facts and relationships, and then question it on the information you've given it (much like comprehension) using things called queries. Here's an example of a fact:

cat.

In the above, we are telling prolog that something called a cat exists. In its world now, a cat is the only thing in existence. We can then ask it whether a cat exists in the world in the query window by running the prolog file. If you are using the default editor that comes with Prolog, simply press CTRL+C and then CTRL+B. If you aren't, type prolog filename.pl to launch a query window based on filename.

?- cat.
true.

Prolog is telling us that a cat exists, but we knew that already. Let's try asking it whether a dog exists:

?- dog.
ERROR: toplevel: Undefined procedure: dog/0 (DWIM could not correct goal)

Oh dear! We get a nasty (over)complicated error! Thankfully, this error (in this case) is a really simple one. Prolog is basically telling us that it doesn't know what a dog is, because a dog doesn't exist in Prolog's world at the moment.

Getting Prolog to tell us whether something exists in its world is nice, but it isn't terribly useful. Let's try something a little bit more complicated.

animal(cat).

The above tells prolog that a cat is an animal. For reference, the bit before the opening bracket is called a predicate, I think. After executing the above, Prolog's world just got a bit more complicated. It now knows that there is something called a cat, and that a cat is an animal. We can then ask Prolog if a cat is an animal like so:

?- animal(cat).
true.

Prolog tells us that yes, a cat is an animal. This is also nice, but still not amazingly clever. Let's write some more Prolog.

animal(cat).
animal(dog).
animal(cow).

pet(cat).
pet(dog).

food(milk).
food(bone).

likes(cat, milk).
likes(dog, bone).

The above looks much more complicated, but it's not as bad as you might first think. In the above, we tell Prolog that a cat, a dog, and a cow are animals. We also tell it that only a cat and dog are pets. In addition, we say that both milk and a bone are forms of food, and that a cat likes milk, and a dog likes a bone. We can then ask Prolog a bunch of questions on the above:

?- animal(cow).
true.
?- pet(cow).
false.
?- likes(cat, milk).
true.
?- likes(cat, bone).
false.

In the above, we ask prolog 4 questions:

  1. Is a cow an animal? Prolog answers that yes, a cow is in fact an animal (we told it so).
  2. Is a cow a pet? Prolog answers that no, a cow isn't a pet. It knows that because we told it that a cow is an animal, but we didn't tell it that a cow is a pet.
  3. Does a cat like milk? Prolog answers that yes, cats do like milk - because we told it so.
  4. Does a cat like a bone? Prolog answers that no, cats don't like bones, as we told it that cats like milk instead.

Prolog is starting to show how powerful it is, but I have a feeling that I haven't even scratched the surface yet. Next week, assuming I understand the next lab, I'll make another post about what I've learnt.

Ubuntu: Second Impressions

(Above: Ubuntu's default background.)

I've had my laptop dual booted with Ubuntu for a while now, and I've been using Ubuntu in a Virtual Machine and as a live CD, but I've only just gotten around to rearranging my partitions and reimaging my Ubuntu partition with Ubuntu 15.04. Previously, I had a bunch of issues with Ubuntu (for example my laptop kept heating up), but I seem to have solved most of them and I thought that I'd post here about the problems I encountered, how I fixed them, and what I think of the latest version of Ubuntu.

Firstly, I installed Ubuntu from a live CD ISO on my flash drive. Annoyingly, I used the 32-bit version by accident, and had to do it again. It would be nice if it told you which version you were about to install. Anyway, I found the installer to be rather temperamental. It kept freezing for ages, and all I could do was wait.

After the installation finished, I was left with a brand new, and very buggy, 64 bit Ubuntu 15.04 installation. As soon as it booted, the first job was to stop my cursor from flickering. Because I have an Nvidia GeForce 550M GPU, Ubuntu didn't recognise it properly (it detected it as a second 'unknown display') and so custom drivers were needed to fix it. I found this post, which guided me through the installation of both bumblebee (to control which of my two GPUs I use), and the official Nvidia drivers for my graphics card.
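
From memory, the gist of that guide was something along these lines - a rough sketch only (package names are from the Ubuntu 15.04 era and may well have changed since; the linked post is the proper reference):

sudo apt-get install bumblebee bumblebee-nvidia primus   # bumblebee manages switching between the two GPUs
optirun glxgears                                         # run an individual program on the Nvidia card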

After banishing the flickering cursor, I found my laptop cooler, though it still wasn't right. Next up was to install thermald, indicator-cpufreq and lm-sensors. This trio of packages automatically controls the frequency of your CPU to both save power and prevent overheating. Normally, Linux doesn't pay any attention to the frequency of its host system's CPU, leaving it to run at its maximum speed all the time - which causes battery drain and overheating.
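
Installing the trio is a one-liner, and lm-sensors then needs a one-off detection run - something like:

sudo apt-get install thermald indicator-cpufreq lm-sensors
sudo sensors-detect    # answer the prompts to probe for the available temperature sensors
sensors                # check the readings afterwards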

Now that my laptop wasn't overheating too much, I could focus on other problems. When in Windows 7, I have something called SRS Premium Sound. It is brilliant at tweaking audio just before it reaches the speakers to improve its quality. I quickly found when I got this laptop that it was essential - the speakers are facing downwards and the output sounds 'tinny' or 'hollow' without it. Since Linux doesn't have SRS, the next best thing was PulseAudio, which provides you with an equaliser to tune your sound output with. Note that PulseAudio does actually work with Ubuntu 14, even though some people have said that it has been discontinued (I don't think it has?).

The other thing that needed changing was my touchpad. I felt like I had to hammer it in order to get it to recognise my touch, whereas in Windows it picked up the lightest of touches. My solution was to add the following to my .profile:

synclient FingerLow=2         # pressure below which a touch is released
synclient FingerHigh=3        # pressure above which a touch is registered
synclient AccelFactor=0.145   # pointer acceleration factor
synclient TouchpadOff=0       # make sure the touchpad is enabled
synclient MinSpeed=1.25       # minimum pointer speed
synclient MaxSpeed=2          # maximum pointer speed
synclient CoastingFriction=30 # how quickly scroll coasting slows down

This improved the responsiveness of my touchpad a whole lot to the point where I could actually use it without getting frustrated :)

That covers the main problems I came across. As for what I think, I'm finding Ubuntu to be a great operating system to work with - now that I've worked most of the bugs out. Things like indicator-cpufreq and thermald ought to be automatically installed on systems that support them at install time. You should also be prompted to install bumblebee and the official Nvidia graphics drivers at install time too, as a system with multiple GPUs (i.e. integrated graphics and a graphics card) is pretty unusable without them. Sensible default settings would be nice too - nobody likes hammering their touchpad just to get a response.

The Ubuntu Unity desktop developers seem to have removed a bunch of configuration options from the GUI in recent releases. Hopefully they will re-add them - it's rather annoying to have to enter the terminal to change something as simple as the login screen background.

On the plus side, Ubuntu seems to load much faster than Windows 7, and is more responsive too. I also feel like I have more screen space to work with as there isn't a task bar taking up space at the bottom of the screen. The customisability is amazing too. I am finding that there are far more things that you can tweak and fiddle with in Ubuntu compared to Windows.

To finish off this post, here's a list of smaller problems I had, and a link to the appropriate post that fixed it for me:

First Impressions: C♯

I will be learning C♯ over the next year or so, and since I am almost a week into learning it, I thought that I would post my first impressions here on this blog.

C♯ is a compiled language developed by Microsoft that looks a little bit like Javascript, and a little bit like Rust (If you haven't heard of Rust yet, I have written up a first impressions post here). Like Rust, it has a type system, so you can't set a variable with the type int to a string. This type system is not as confusing, though, since it only has a single type for strings (Rust has two - one that you can change and one that you can't) and you can convert between the different types quite easily (e.g. int.Parse() converts a string to an int).

The brackets syntax is almost identical to that of Javascript, except C♯ has classes and namespaces too (I don't really know what those do yet). Indentation also seems to be fairly important, which is perfectly fine by me since it improves the readability of your code.

Looking around on the internet, it seems that C♯ is tied in with Visual Studio quite closely. This might cause problems in the future, because I don't really want to install Visual Studio (Express) on my computer because of its file size (the installer alone is > 600MB!). Hopefully I can continue to use the csc C♯ compiler I found in the .NET 4.0 folder on my computer as my code becomes more complex.

All in all, C♯ looks like a good introductory language to the world of compiled programming. The syntax is straightforward and easy to understand so far, and it is kind of similar to Javascript, which eases the learning process considerably.

Rust: First Impressions

You may have heard of Rust - a fairly new language developed by Mozilla for their new rendering engine, Servo. Recently, I took a look at Rust, and this post contains my thoughts.

Please note that I am looking at Rust (a compiled language) from the perspective of an interpreted language programmer, since I have not actually learnt any other compiled languages yet.

To start off with, Rust code appears fairly easy to read, and is loosely based on other languages that I already know, such as Javascript. You can quickly get the gist of a piece of Rust code just by reading it through, even without the comments.

The compiler is easy to use for beginners, and also quite helpful when you make mistakes, although some of the error messages are definitely not friendly to a new Rust programmer. I personally got totally confused by the finer points of Rust's type system, and the compiler was most certainly not helpful when it came to sorting the problems out.

The ability to set a variable equal to the output of a complex expression is also nice - allowing you to do this:

fn main() {
    let x = 2;

    let y = match x {
        0 => "x is zero!",
        1 => "x is one!",
        2 => "x is two!",
        3 => "x is three!",
        4 => "x is four!",
        5 => "x is five!",
        _ => "x is larger than five!"
    };

    println!("{}", y);
}

Rust has some nice language constructs too - the match statement above is an example of this. It looks so much better than the traditional switch statement, and is more flexible as well.

The problem with Rust, though, is that it doesn't really have very many different tutorials and guides out there for new programmers, especially those coming from an interpreted language background, like me. This will sort itself out though, in time. The language itself is also changing frequently, which makes keeping a tutorial updated a considerable amount of work. This also makes the language more difficult to learn, since you can't be sure whether the tutorial you are reading is up to date or not. This problem will also gradually disappear on its own as the language stabilises though.

To conclude, Rust looks like a promising new language that I will resume learning at some point in the future, perhaps when I have more experience with compiled languages (and type systems!), and when rust has stabilised a little more. At the moment Rust does not feel stable enough and mature enough for me to easily learn it.
