
Creating a 3D Grid of points in Blender 3.0

In my spare time, one of the things I like to play with is rendering stuff in Blender. While I'm very much a beginner and not learning Blender professionally, it's a lot of fun to play around with!

Recently, Blender has added geometry nodes (which I alluded to in a previous post), which are an extremely powerful way of describing and creating geometry using a node-based system.

While playing around with this feature, I wanted to create a 3D grid of points to instance an object onto. When I discovered that this wasn't really possible out of the box, I set to work creating my own node group to do the job, and I thought I'd quickly share it here.

First, here's a render I threw together demonstrating what you can do with this technique:

(Above: Coloured spheres surrounded by sparkles)

The above is actually just the default cube, with a geometry nodes modifier applied!

The core of the technique is a node group I call Grid3D. By instancing one grid at a 90° angle onto the points of another grid, we can create a full 3D grid of points:

(Above: The Grid3D node group)

The complicated bit at the beginning is me breaking out the parameters in a way that makes them easier to understand from the outside of the node group - abstracting a lot of the head-scratching away!

Since instancing objects onto the grid is by far my most common use-case, I wrapped the Grid3D node group in a second node group called Grid3D Instance:

This node group exposes all the parameters of the inner Grid3D node group, but also adds a new position randomness vector parameter that controls how much each instance is randomly translated on all 3 axes (I couldn't find a way to translate the points themselves - only the instances placed on those points).

(Above: instanced cubes growing and shrinking)
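If it helps to see the idea written down rather than drawn as nodes, here's a rough sketch in Lua of the set of points the two node groups produce between them. This is purely illustrative - the real thing is built from nodes rather than code, and the parameter names here are made up for the example:

-- Illustrative sketch only: generates count.x * count.y * count.z points,
-- spaced out on each axis and nudged by a random offset per point.
-- The parameter names (count, spacing, randomness) are made up.
local function grid3d_points(count, spacing, randomness)
    local points = {}
    for z = 0, count.z - 1 do
        for y = 0, count.y - 1 do
            for x = 0, count.x - 1 do
                table.insert(points, {
                    x = x * spacing.x + (math.random() - 0.5) * randomness.x,
                    y = y * spacing.y + (math.random() - 0.5) * randomness.y,
                    z = z * spacing.z + (math.random() - 0.5) * randomness.z,
                })
            end
        end
    end
    return points -- one entry per point an object gets instanced onto
end

The Grid3D node group handles the first part (the regular lattice of points), while Grid3D Instance adds the random offset and does the actual instancing.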

Blender 3.1 has just come out, and I'm excited to see what more can be done with the new volumetric point cloud functions in geometry nodes - which may (or may not; I have yet to check) make this method obsolete. Still, I wanted to post about it anyway for my own future reference.

Another new feature of Blender 3.1 is that node groups can now be marked as assets, so here's a sample blender file you can put in your assets folder that contains my Grid3D and Grid3D Instance node groups:

https://starbeamrainbowlabs.com/blog/images/20220326-Grid3D.blend

A review of graph / node based logic declaration through Blender

Recently, Blender started their Everything Nodes project. The first output of this project is their fantastic geometry nodes system (which debuted in Blender 2.9 and is still under development), which allows the geometry of a mesh (and the materials it uses) to be dynamically modified to apply procedural effects - or even to declare entirely new geometry!

I've been playing around with and learning Blender a bit recently for fun, and as soon as I saw the new geometry nodes system I knew it would enable powerful new techniques. In this post, I want to talk more generally about node / graph-based logic declaration, and why it can sometimes make a complex task like modifying geometry much easier to understand and work with efficiently.


(Above: Blender's geometry nodes at work.)

Manipulating 3D geometry carries more inherent complexity than its 2D counterpart - programs such as Inkscape and GIMP have the 2D case pretty much sorted. To this end, Blender supplies a number of tools for editing 3D geometry, like edit mode and a sculpting system. These are powerful in their own right, but what if we want to do some procedural generation? Suddenly they feel far from the right tools for the job.

One solution here is to provide an API and allow scripts to be written to manipulate geometry. While Blender does this already, it's inaccessible to those who aren't proficient programmers, and large APIs often come with a steep learning curve (and a higher cognitive load). It can also be a challenge to "think in 3D" while programming - I know this took some getting used to when I was doing the 3D graphics module at University!

In a sense, node based programming systems feel a bit like a functional programming style. Their strength is composability, in that you can quickly throw together a bunch of different functions (or nodes in this case) to get the desired effect. This reduces cognitive load (especially when there's an instantly updating preview available) as I mentioned earlier - which also has the side effect of reducing the barrier to entry.

Blender's implementation

There's a lot to like about Blender's implementation of a node-based editor. The visual cues for both the nodes themselves and the sockets are great. Nodes are colour-coded to group them by related functionality, and sockets are coloured according to data type. I would be slightly wary of issues for colourblind users though - while it looks like this has been discussed already, it doesn't seem like an easy solution has been implemented yet.

This minor issue aside, Blender's new geometry nodes feature also uses socket shape to distinguish between single values and values that can change per instance - which feels intuitive.

When implementing a UI like this - as with API design - the user interface needs to be carefully considered and polished. This is the case for Blender's implementation - something that only became apparent to me when I tried Material Maker's node editor. While Material Maker is cool, I encountered a few minor issues that made its UI feel "clunky" compared to Blender's. For example:

  • Blender automatically wraps your cursor around the screen when you're scrubbing a value - Material Maker doesn't
  • Material Maker's preview didn't stack correctly underneath the node graph, leading to visual artefacts

Improvements

Blender's implementation of a node-based editor isn't perfect though. Now that I've used it for a while, I've noticed a few frustrations that I (and I assume others) have had - starting with the names of nodes. When you're first starting out, it can be a challenge to guess the name of the node you want.

For example, the switch node functions like an if statement, but I didn't immediately think of calling it a "switch" node - so I had to do a web search to find it. To remedy this, each node could have a number of hidden alias names that are also searched, or perhaps a short description in the selection menu that is searched too.

Another related issue is that nodes don't always do what you expect them to, or you're completely baffled as to what their purpose is in the first place. This is where great documentation is essential. Blender has documentation for every node in all of its node editors (shader, compositor, and now geometry), but it doesn't always give examples of how each node could be used. It would also be nice to see a short tooltip explaining what a node does when I hover over its header.

In the same vein, it's also important to ensure a measure of consistency if you have multiple node editors. While this is mostly the case with Blender, I have noticed that a few nodes have different names across the compositing, shading, and geometry nodes workspaces (the switch node, for example), and some simply don't exist in the other workspaces (the curve nodes). This can be a source of both confusion and frustration.

Conclusion

In conclusion, node-based editors are cool, and a good way to present a complex set of options in an easy to understand interface. While we've looked at Blender's implementation of a node-based editor, others do exist such as Material Maker.

Node-based interfaces have limitless possibilities - for example the Web Audio API is graph-based already, so I can only imagine how neat a graphical node-based audio editor could be - or indeed other ideas I've had including a node-based SVG generator (which I probably won't get around to experimenting with for a while).

As a final thought, a node-based flowchart could potentially be a good first introduction to logic and programming. For example, something a bit like Scratch or some other robotics control project - I'm sure something like this exists already.

If you know of a cool node-based interface, do leave a comment below.


3D mazes with Lua, OpenSCAD, and Blender

Way back in 2015, I posted a language review of Lua. In that post, I ported an even older 2D maze generator that I'd originally implemented in Python on a Raspberry Pi when I was in secondary school (one of my first experiences with the Raspberry Pi). I talked about how Lua was easy to get started with, but difficult to do anything serious with because everything starts from 1 rather than 0 - and how immutable strings are awkward.

Since then, I've gained a lot more experience with the language. As an aside, I discovered a nice paradigm for building strings: since Lua strings are immutable, concatenating lots of pieces with .. creates a pile of intermediate strings, so it's better to collect the parts in a table and join them all at once:

local function string_example()
    local parts = {} -- Create a table
    table.insert(parts, "This is ") -- Add some strings
    table.insert(parts, "a ")
    table.insert(parts, "string")
    return table.concat(parts, "") -- Concatenate them all at once and return
end

Anyway, before I get too distracted, I think the best way to continue this post is with a picture:

Fair warning: This blog post is pretty media heavy. If you are viewing on your mobile device with a limited data connection, you might want to continue reading on another device later.

An awesome render of a 3D maze done in Blender - see below for an explanation.

Pretty cool, right? Perhaps I should explain a little about how I got here. A month or two ago, I rediscovered the above blog post and the Lua port of my Python 2d maze generator. It outputs mazes like this:

#################
#   #     #     #
### ##### ##### #
# #   #       # #
# # # # # ##### #
# # #   #       #
# ### # ### #####
#     #   #     #
#################

(I can't believe I didn't include example output in my previous blog post!)

My first thought was that I could upgrade it to support 3D mazes as well. One thing led to another, and I ended up with a 3D maze generator that outputs something like this:

#################
#################
#################
#################
#################
#################
#################
#################
#################

#################
#   #   #       #
# ### ###########
#           # # #
# ####### ### # #
#       #   # # #
# ### ####### # #
#   #       # # #
#################

#################
##### ### #######
#################
############### #
#################
############# # #
#################
# ########### ###
#################

#################
#               #
# ### ###########
#   #         # #
# ###############
#               #
##### # ####### #
#   # #     # # #
#################

#################
#################
#################
#################
#################
#################
#################
#################
#################

Each block of hash (#) symbols is a layer of the maze. It's a bit hard to visualise though, so I decided to do something about it. For my master's project, I used OpenSCAD to design a housing for an Internet of Things project. Since OpenSCAD is essentially a programming language for expressing 3D models, I realised it would be perfect for representing my 3D mazes - and since the mazes use a grid, I can simply generate an OpenSCAD file containing a cube for every location that has a hash symbol in the output (the data itself is stored in a nested table, which I then process).
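To give a flavour of what that conversion looks like, here's a rough Lua sketch. It isn't the actual code from my generator (that's linked at the end of this post) - the table layout and names are made up for illustration - but the idea is the same: one translate() + cube() pair per hash symbol.

-- Sketch only: assumes maze[z][y][x] is true where there's a wall (a hash
-- symbol in the text output). Names and table layout are made up.
local function maze_to_openscad(maze)
    local parts = { "union() {" }
    for z, layer in ipairs(maze) do
        for y, row in ipairs(layer) do
            for x, is_wall in ipairs(row) do
                if is_wall then
                    table.insert(parts, string.format(
                        "    translate([%d, %d, %d]) cube(1);",
                        x - 1, y - 1, z - 1
                    ))
                end
            end
        end
    end
    table.insert(parts, "}")
    return table.concat(parts, "\n") -- write this out to a .scad file
end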

A screenshot of OpenSCAD showing a generated maze.

This is much better. We can clearly see the maze now and navigate around it. OpenSCAD's preview controls are really quite easy to pick up. What you see in the above screenshot is an 'inverted' version of the maze - i.e. instead of carving out a solid block, the algorithm walks around an empty space inside a defined region.

The algorithm that generates the maze itself is pretty much the same as the original algorithm I devised myself in Python (which I've now lost, sadly - as I didn't use Git back then).

It starts in the top-left corner, then does a random walk around the defined area. It keeps track of where it has been in a node list (basically a list of coordinates), and every time it takes a step forwards, there's a chance it will jump back to a previous position in that list. Once it can't go anywhere from a given position, that position is considered complete and is removed from the node list. Once the node list is empty, the maze is considered complete and the generator returns the output.
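For the curious, here's a trimmed-down 2D sketch of that walk in Lua. It isn't the actual code from my generator (that lives in the repository linked at the end of this post), and the jump-back probability is an arbitrary value picked for the example - but it shows the node-list bookkeeping described above:

-- Minimal 2D sketch of the random walk described above: true = wall,
-- false = passage. Width and height should be odd so the border stays intact.
local function generate_maze(width, height)
    local grid = {}
    for y = 1, height do
        grid[y] = {}
        for x = 1, width do grid[y][x] = true end
    end

    local nodes = { { x = 2, y = 2 } } -- start in the top-left corner
    grid[2][2] = false
    local directions = { { 0, -2 }, { 0, 2 }, { -2, 0 }, { 2, 0 } }

    while #nodes > 0 do
        -- Sometimes jump back to a previous position in the node list
        local index = math.random() < 0.3 and math.random(#nodes) or #nodes
        local node = nodes[index]

        -- Find the neighbouring cells we haven't visited yet
        local candidates = {}
        for _, d in ipairs(directions) do
            local nx, ny = node.x + d[1], node.y + d[2]
            if nx >= 2 and nx <= width - 1 and ny >= 2 and ny <= height - 1
                and grid[ny][nx] then
                table.insert(candidates, { x = nx, y = ny, dx = d[1], dy = d[2] })
            end
        end

        if #candidates == 0 then
            table.remove(nodes, index) -- this position is complete
        else
            -- Step forwards: knock out the wall in between and the cell itself
            local next_cell = candidates[math.random(#candidates)]
            grid[node.y + next_cell.dy / 2][node.x + next_cell.dx / 2] = false
            grid[next_cell.y][next_cell.x] = false
            table.insert(nodes, { x = next_cell.x, y = next_cell.y })
        end
    end

    return grid
end

Extending it to 3D is then mostly a case of adding a z coordinate and two more directions to the walk.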

A divider made up of orange renders of a small 7x7x7 maze rotating

As soon as I saw the STL export function though, I knew I could do better. I've used Blender a little before - it's a production-grade, free and open-source 3D modelling and rendering program. You can model things in it, apply textures to them, and then render the result - it's with programs like this that many CGI pictures (and films!) are created.

Crucially for my case, I found the STL import function. With that, I could import the STL I exported from OpenSCAD, and then have some fun playing around with the settings to get some cool renders of some mazes:

(Above: Some renders of some of the outputs of the maze generator. See the full size image [3 MiB])

The sizes of the above are as follows, in grid squares as generated by the Lua 3d maze generator:

  • Blue: 15 x 15 x 15
  • Orange: 7 x 7 x 7
  • Purple: 17 x 15 x 11, with a path length of 4 (i.e. the generator jumps forwards by 4 spaces instead of 2 during the random walk)
  • Green: 21 x 21 x 7

Somehow it's quite satisfying to watch it render, with the little tiles gradually spiralling their way out from the centre in a Hilbert curve - so I looked into how to create a glass texture, and how to set up volumetric rendering. It wasn't actually too difficult to do (the most challenging part was getting the lights in the right places at the right strengths). Here's a trio of renders showing the iterative process of getting to the final image you see at the top of this post:

(Above: Some renders of the blue 15x15x15 maze from the previous image, with a glass texture. See the full size image [3.4 MiB])

From left to right:

  1. My initial attempt using clear glass
  2. Frosting the glass made it look better
  3. Adding volumetric lighting makes it look way cooler!

I guess that you could give the same treatment to any STL file you like.

Anyway, the code for my maze generator can be found here on my private git server: sbrl/multimaze

The repository README contains instructions on how to use it. I won't duplicate that here, because it will probably change over time, and then this blog post would be out of date.

Before I go, I'll leave you with some animations of some mazes rotating. This whole experience of generating and rendering mazes has been really fun - it's quite far outside what I've been doing recently. I think I'd like to do some more of this in the future!

Update: I've re-rendered a new version at a lower quality. This should help mobile devices! The high-quality version can still be accessed via the links below.

(High-quality version: webm - vp9, ogv - ogg theora, mp4 - h264)

Virtual Reality: A Review

(Above: A considerable number of stereo 3D glasses technologies. Can you name all the techniques shown here? Comment below!)

Yesterday I spent a morning experimenting with my University's latest stereo equipment as part of the Virtual Environments module I've been taking this semester. With all that I've seen, I wanted to write something about my experiences on here.

Virtual reality and 3D are things I haven't really had the chance to experience very often. In fact, the last time I was truly able to experience 3D was also through my University - probably at an open day (I can't remember). I've also never really used a controller before - which I'll talk about later.

With this in mind, it was all a rather new experience for me. The first tech we looked at was a stereo projector paired with active NVIDIA shutter glasses. These use a variant of LCD technology to black out each eye while the image for the other eye is being shown. To do this they need to stay in sync with the PC - hence their active nature - and it's the reason cinemas usually use clever circular polarising filters instead (especially since the screen must run at a minimum of 120Hz to avoid sickness and provide a reasonable experience).

Even so, the experience was quite amazing - even after seeing it once or twice before. With the additional knowledge about the way stereoscopic images are put together (using techniques such as parallax and concepts such as depth cues and depth budget), I found that I could appreciate what was going on much more than I could previously.

The head tracking that was paired with the shutter glasses was absolutely fascinating. If you were sitting in the seats in front of the stage, you got a bunch of window violations and a pair of hurting eyes - but when you were on the stage wearing the tracked glasses, it was a whole different story. It was literally like a window into another world - made all the more real by the projection onto the floor!

We also took a look at the cave, as it's colloquially known - a variant on the same idea with four faces of a cube, each with a pair of projectors back-projecting onto it - using the same infrared-based head tracking technology. This, too, was similarly cool - it has the ability to make you feel unsteady when looking down from the crow's nest of a large naval ship....

Though this is probably old news to most readers of this post, I found the idea of using an Xbox controller to move the user around to be quite a clever solution to the awkward issue that you can't physically walk around much yourself - unless you like walking into invisible boxes wireframed in black. It certainly felt more natural than using a keyboard, which would have felt bulky and out of place. I'll be paying more attention to both controllers and other forms of alternative input when designing applications in future - I've now seen first-hand what a difference the appropriate form of input can make to the overall experience.

Until today, I've also been rather skeptical of Microsoft's HoloLens. Sorting through all the Microsoft-speak and buzzwords is somewhat challenging - but the lectures we've had over the last 5 weeks helped with that :D The headset itself is actually plenty comfortable (especially compared to the Oculus Rift), and the head tracking is astonishing - especially considering it's all inside-out (as opposed to outside-in). The holograms really look like they're hovering in the environment around you - apart from the fact that they're clearly computer-generated, of course - and the gestures are actually pretty intuitive considering how different the experience is to anything else I've tried before.

The biggest problem though, as you're probably aware, is the small field of view. It's offset slightly by the fact that you can see around the hologram-enabled area, but it still causes frequent window violations and only covers a fraction of your effective vision - something their marketing material doesn't appear to acknowledge (see the image below - the pair of people pictured can probably only see the very centre quarter of that thundercloud). If they can fix that, then I think they may have something truly world-changing. It could be used for all sorts of applications - especially in engineering, I think.

An image of a pair of people standing altogether far too close to a holographic thundercloud diagram.

The sound system built into it was cool too. I didn't manage to check, but I'm pretty sure only I could hear it - it certainly didn't sound like it! In the tutorial, it really sounded like there was a voice coming from all around me - which leads me to think it might be programmable such that sound appears to come from anywhere in the room. You might even be able to have a conversation with a holographic projection of someone standing on the table in front of you (like Microsoft's holoportation demo).

Finally, we took a look at some of the things the department has been doing with the Oculus Rift. VR is an experience on a whole 'nother level - and best experienced for one's self. (It's really important to clean the lenses in the headset thoroughly and to spend some time aligning them precisely to your eyes, I found - otherwise everything will be blurry.) I found the latter half of the (rather extensive) setup tutorial, which I went through later that day to test my ACW, particularly immersive - to the point where you had to consciously remember where you were in the real world. Personally, I kept my leg just touching the edge of my chair to remind me! Though the audio wasn't as good as the HoloLens (see above), it was still adequate for the task at hand.

While I was running through the first-use setup tutorial, it was quite clear that this is a Facebook product: you have to create an account (or sign in with Facebook), set privacy settings, and deal with a few other things it hinted at during setup (I was interested in testing the code I'd written, so I didn't explore the consumer side of the device). If you're concerned about privacy, then the Oculus Rift is certainly not for you - thankfully there are lots of other virtual reality headsets around to investigate instead :-)

The controllers made for an interesting experience too. They're a clever solution to the awkward problem that the system can't track your hands well enough to display them fully in virtual reality (Microsoft had it easy with the gestures for the HoloLens, apparently) - and they didn't break immersion too badly in the tutorial, roughly simulating your hand position based on which buttons and triggers you had pressed. Definitely much better than a keyboard in this instance, since you can't even feel where a keyboard is in virtual reality - let alone find the keys to press - and that's not even mentioning the loss of movement and rotation you'd experience.

In conclusion, my whole view on stereo 3D, VR, and input methods has been changed in a single day - which I think is pretty good going! Stereo 3D and virtual reality are never going to go away - the potential behind them is just far too tempting not to play around with. Designing applications for VR is going to be a challenge for many developers, I think, since an understanding of depth cues and immersion is essential to designing effective experiences that don't make you feel sick. We can't leave the real world behind with VR yet (walking into a chair or table is an unpleasant experience), but what we've got right now is absolutely astonishing.

The Graphics Pipeline

Since the demonstration for my 3D work is tomorrow and I keep forgetting the details of the OpenGL graphics pipeline, I thought I'd write a blog post about it in the hopes that I'll remember it.

In case you didn't know, OpenGL uses a pipeline system to render graphics. Basically, your vertices and other data go in one end, and a video stream comes out the other. This pipeline is made up of a number of stages, and each stage has its own shader:

The OpenGL pipeline.

There are rather a lot of stages here, so I've made this table that lists all the different shaders along with what they do:

Stage | Programmable? | Function
Vertex Shader | Yes | Raw vertex manipulation.
Hull Shader | No | Aka the Tessellation Control Shader. Determines control points for the tessellator. Although it's fixed function, it's highly configurable.
Tessellator | No | Subdivides surfaces and adds vertices using the control points specified in the hull shader.
Domain Shader | Yes | Aka the Tessellation Evaluation Shader. Adds detail to vertices. Example uses include simplifying models that are far away from the camera. Has access to the control points output by the hull shader.
Geometry Shader | Yes | Superseded by the tessellator (see above). Very slow.
Rasterisation | No | Fixed function. Converts the models etc. into fragments ready for the fragment shader.
Fragment Shader | Yes | Insanely flexible. This is the shader used to add most, if not all, special effects. Lighting and shadows are done here too. Oddly enough, Microsoft decided to call it the "Pixel Shader" in DirectX rather than the fragment shader.
Compute Shader | Yes | Not part of the graphics pipeline. Lets you harness the matrix-crunching power of the graphics card for arbitrary calculations.

The tessellator is very interesting. It replaces the geometry shader (which, although you can technically still use it, you really shouldn't), and allows you to add detail to your models on the GPU, thereby reducing the number of vertices you need to send to the graphics card. It also allows you to customise your models before they hit rasterisation and the fragment shader, so you could, for instance, simplify models that are further away.

As an example, in one of our lectures we were shown the Heaven Benchmark. Our lecturer turned tessellation on and off to show us what it actually does. Since you can't see what I saw, here's an animation I made showing the difference:

The other pipeline to be aware of is the coordinate pipeline. This pipeline specifies how coordinates are transformed from one space to another. Here's another diagram:

The coordinate pipeline.

Again, this looks complicated, but it isn't really. A similar process would be followed for 2D graphics as well as 3D ones. If you take it one step at a time, it doesn't seem so bad.

  • Model Space - This refers to coordinates relative to any given model. Each model will have the coordinates of each of its vertices stored relative to its central point.
  • World Space - Multiplying all of a model's coordinates by the model matrix brings it into World Space. World space is relative to the centre of your scene.
  • View Space - Multiplying all the coordinates in world space by the view matrix brings everything into View Space. View Space is relative to the camera. It is for this reason that people say you cheat and move the whole world around the camera, instead of moving the camera around the scene.
  • Normalised Device Space - Multiplying everything in view space by the projection matrix brings it into Normalised Device Coordinates. Graphics cards like to consider points between $(-1, -1, -1)$ and $(1, 1, 1)$ (if you're using OpenGL, that is - DirectX prefers $(-1, -1, 0)$ to $(1, 1, 1)$ instead). Points in this space are called Normalised Device Coordinates, and anything outside the aforementioned range is cut off. No exceptions.
  • Image Space - When your scene has been through the entirety of the Graphics pipeline described above, it will find itself in Image Space. Image space is 2D (most of the time) and references the actual pixels in the resulting image.
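Putting the list above together, the whole chain is just one matrix multiplication after another. Written with the row-vector convention that the shader below uses (with the more common column-vector convention, the matrices appear in the reverse order), it looks like this:

$$ v_{ndc} = v_{model} \times M_{model} \times M_{view} \times M_{projection} $$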

Converting between all these different coordinate spaces is best left up to the vertex shader - it's much easier to shove a bunch of transformation matrices at it and get it to do all the calculations for you. It's so easy, you can do it in just 11 lines of vertex shader code:

#version 330
uniform mat4 uModel; // The model matrix
uniform mat4 uView; // The view matrix
uniform mat4 uProjection; // The projection matrix

in vec3 vPosition; // The position of the current vertex

void main() 
{ 
    gl_Position = vec4(vPosition, 1) * uModel * uView * uProjection;
}

If you made it this far, congratulations! That concludes our (rather long) journey through the graphics pipeline and its associated coordinate spaces. We looked at each of the various shaders and what they do, and learnt about each of the different coordinate spaces involved and why they are important.

I hope that someone besides myself found it both useful and educational! If you did, or you have any questions, please post a comment below. If you have spotted a mistake - please correct me in the comments below too! I try to make sure that posts like this one can be used by both myself and others as a reference in the future.


Learning Three.js four: Maze Birthday Card

A maze for a birthday card

Hooray for the fourth 3 dimensional experiment!

This one is a birthday card for someone I know, and it's also much more complex than the others I have done so far. I ported a maze generation algorithm that I originally wrote in Python 3 to Javascript (I've also ported it to Lua - keep an eye out for a post about Lua soon!) and then set about rendering it in 3D.

The aim is to get to the centre of the maze, where some 3D(!) spinning text is waiting for you. The maze generation algorithm is not designed to have the goal in the centre of the maze, so you will find that there are multiple paths that you can take from the centre that lead to different parts of the maze. I will make a separate post about the maze generation algorithm I have written later.

The collision detection is tile based and rather complicated (for me, anyway). I am surprised that nobody has written a library for this sort of thing.....

You can play with it here: Learning Three.js four

Learning Three.js three: Texturing

Three.js three

This week I have been busy, but I have still had time to create a new Three.js experiment. This time, I looked into simple texturing. The cube in the middle has a simple soil texture, and I gave the skybox a bunch of clouds from opengameart.org. It took a whole lot of fiddling to get it to display the way it does now - I couldn't figure out the correct rotation for the tiles :D

This experiment also uses orbitcontrols.js - from here, I think? Click and drag to move around, and scroll up / down to zoom.

I have also attempted to optimise it by only rendering a frame when the camera actually moves or a texture loads.

You can find it here: three

Learning Three.js 2: Catch the Sphere

Catch the sphere in action.

The second thing I have made in my endeavours to learn three.js is a simple catch-the-sphere game. Every time you get close to the orange sphere, it moves to another random place in the room, adds one to your score, and plays a sound.

I cheated a little bit on the collision detection... I really ought to look into writing a simple 3D collision detection library that uses boxes and spheres.

The collision detection for the sphere is really done in 2D - I only check the x and z axes to see whether you are within a certain distance of the sphere, since your y position doesn't change. For the bounding box, I simply check that your x and z co-ordinates aren't too big or too small.
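In code, both checks boil down to a few lines. Here's a rough sketch - written in Lua rather than the JavaScript the game actually uses, to match the code elsewhere on this blog, and with made-up names:

-- Sketch only: the names are made up, and the real game is in JavaScript.
local function caught_sphere(player, sphere, catch_radius)
    -- Only x and z matter, since the player's y position never changes
    local dx, dz = player.x - sphere.x, player.z - sphere.z
    return math.sqrt(dx * dx + dz * dz) < catch_radius
end

local function clamp_to_room(player, half_size)
    -- Crude bounding box: stop x and z from getting too big or too small
    player.x = math.max(-half_size, math.min(half_size, player.x))
    player.z = math.max(-half_size, math.min(half_size, player.z))
end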

You can find it here: two

Learning Three.js: Spinning Cube

A Spinning Cube

TL; DR: I am going to learn Three.js. I made this.

3D programming is much harder than 2D programming. You have to think about renderers and scenes and cameras and that extra dimension when specifying co-ordinates. I have wanted to break into the 3D world with my programming for some time, but every time I tried, I got confused.

Enter Three.js.

Three.js is a Javascript library that abstracts away all the complications of 3D programming that come with WebGL, making 3D programming much easier. This series will (hopefully!) document the things I learn about Three.js and programming in 3D.

This post is about my first attempt: A spinning cube. I found this tutorial to start me off. Although it is a little bit outdated, it works fine for my purposes.

The first thing I needed to wrap my head around was co-ordinates. In Three.js, they work as follows:

Three.js Co-ordinates visualisation

(Image credit: Originally made by Keistutis on openclipart.org, tweaked for use here by me)

If you imagine your physical laptop screen as the 3D space that your objects live in, then the x co-ordinate runs from side to side (left: negative, right: positive), the y co-ordinate runs up (positive) and down (negative), and the z co-ordinate runs in front of and behind your screen (coming out of the screen: positive, going into the screen: negative).

I am noting down the co-ordinate system here for my own reference... :D

You can see the code in action that I have written here: one - spinning cube.

If you can't see it, please check get.webgl.org - it will tell you whether your browser supports WebGL or not. Some (older?) Chromebooks also have a buggy WebGL implementation.
