Starbeamrainbowlabs


PhD Update 19: The Reckoning

The inevitability of all PhDs. At first it seems distant and ephemeral, but it is the inescapable destination for everyone on the epic journey of the PhD.

Sit down and listen as I tell my own tale of the event I speak of.

I am, of course, talking about the PhD Viva. It differs from country to country, but here in the UK the viva is an "exam" that happens a few months after you have submitted your thesis (PhD Update 18: The end and the beginning). Unlike across the pond in the US, in the UK vivas are a much more private affair, with only you, the chair, and your internal and external examiners normally attending.

In my case, that was 2 externals (as I am also staff, ref Achievement get: Experimental Officer Position!), an internal, and of course the chair. I won't name them as I'm unsure of policy there, but they were experts in the field and very kind people.

I write this a few weeks removed from the actual event (see also my post on Fediscience at the time), and I thought that my viva itself deserved a special entry in this series dedicated to it.

My purpose in this post is to talk about my experience as honestly and candidly as I can, and offer some helpful advice from someone who has now been through the process.

The Structure

The viva itself took about 4 hours. It's actually a pretty complicated affair: all your examiners (both internal and external) have to read your thesis and come up with a list of questions (hidden from you of course). Then, on the day but before you enter the room they have to debate who is going to ask what to avoid duplication.

In practice this usually means that the examiners will meet in the morning to discuss, before having lunch and then convening for the actual viva bit where they ask the questions. In my case, I entered the room to meet the examiners and say hi, before leaving again for them to sort out who was going to ask what.

Then, the main part of the viva simply consists of you answering all the questions that they have for you. Once all the questions are answered, then the viva is done.

You are usually allowed a copy of your thesis in one form or another to assist you while answering their questions. The exact form this will take varies from institution to institution, so I recommend always checking this with someone in charge (e.g. the Doctoral College in my case) well in advance - you don't want to be hit with paperwork and confusion minutes before your viva is scheduled to start!

After the questions, you leave the room again for the examiners to deliberate over what the outcome will be, before calling you back into the room to give you the news.

Once they have done this: the whole thing is over and you can go sleep (trust me, you will not want to do anything else).

My experience

As I alluded to in the aforementioned post on fediscience (a node in the fediverse), I found the viva a significantly intense experience - and one I'm not keen on repeating any time soon. I strongly recommend having someone nearby as emotional support for after the viva and during those periods when you have to step out of the room. I am not ashamed to admit that there were tears after the exam had ended.

More of the questions than I expected focused on the 'big picture' kinda stuff, like how my research questions linked in with the rest of the thesis, and how the thesis flowed. I was prepared for technical questions -- and there were some technical questions -- but the 'fluffy stuff' kinda questions caught me a little off guard. For example, there were some questions about my introduction and how while I introduced the subject matter well, the jump into the technical stuff with the research questions was quite jarring, with concepts mentioned that weren't introduced beforehand.

To this end, I can recommend looking over the 'big picture' stuff beforehand, so that you are prepared for questions that quiz you on your motivations for doing your research in the first place and question different aspects of your research questions.

It can also feel quite demoralising, being questioned for hours on what has been your entire life for multiple years. It can feel like all you have done is pointless, and you need to start over. While it is true that you could improve upon your methods if you started from scratch, remember that you have worked hard to get to this point! You have discovered things that were not known to the world before your research began, and that is a significant accomplishment!

Try not to think too hard about the corrections you will need to make once the viva is done. Institutions differ, but in my case it is the job of the chair to compile the list of corrections and then send them to you (in one form or another). The list of corrections - even if they are explained to you verbally when you go back in to receive the result - may surprise you.

Outcome

As I am sure most of you reading this are wondering: what was my result?! Before I tell you, I will preface the answer to your burning question with a list of the possible outcomes:

  • Pass with no corrections (extremely rare)
  • Pass with X months corrections (common, where X is a multiple of 3)
  • Fail (also extremely rare)

In my case, I passed with corrections!

It is complicated by the fact that while the panel decided that I had 6 months of corrections to do, I am not able to spend 100% of my time doing them. To this end, it is currently undefined how long I will have to do them - paperwork is still being sorted out.

The reasons for this are many, but chief among them is that I will be doing some teaching in September - more to come on my experience doing that in a separate post (series?) just as soon as I have clarified what I can talk about and what I can't.

I have yet to receive a list of the corrections themselves (although I have not checked my email recently, as I'm on holiday as I write this), but it is likely that the corrections will include re-running some experiments - a process I have already begun.

Looking ahead

So here we are. I have passed my viva with corrections! This is not the end of this series - I will keep everyone updated in future posts as I work through the corrections.

I also intend to write a post or two about my experience learning to teach - a (side)quest that I am currently pursuing in my capacity as Experimental Officer (research is still my focus - don't worry!).

Hopefully this post has provided some helpful insight into the process of the PhD viva - and my experience in mine.

The viva is not a destination: only a waypoint on a longer journey.

If you have any questions, I am happy to answer them in the comments, on the fediverse, and via other related channels.

PhD Update 18: The end and the beginning

Hello! It has been a while. Things have been most certainly happening, and I'm sorry I haven't had the energy to update my blog here as often as I'd like. Most notably, I submitted my thesis last week (gasp!)! This does not mean the end of this series though - see below.

Before we continue, here's our traditional list of past posts:

Since last time, that persuasive-tactic detection challenge has ended too, and we have a paper going through at the moment: BDA at SemEval-2024 Task 4: Detection of Persuasion in Memes Across Languages with Ensemble Learning and External Knowledge.

Theeeeeeeeeeeeesis

Hi! A wild thesis appeared! Final counts are 35,417 words, 443 separate sources, 167 pages, and 50 pages of bibliography - making that 217 pages in total. No wonder it took so long to write! I submitted at 2:35pm BST on Friday 10th May 2024.

I. can. finally. rest.

It has been such a long process, and taken a lot of energy to complete it, especially since large amounts of formal academic writing isn't usually my thing. I would like to extend a heartfelt thanks especially to my supervisor for being there from beginning to end and beyond to support me through this endeavour - and everyone else who has helped out in one way or another (you know who you are).

Next step is the viva, which will be some time in July. I know who my examiners are going to be, but I'm unsure whether it would be wise to say here. Between now and then, I want to stalk investigate my examiners' research histories, which should give me an insight into their perspective on my research.

Once the viva is done, I expect to have a bunch of corrections to do. Once those are completed, I will to the best of my ability be releasing my thesis for all to read for free. I still need to talk to people to figure out how to do that, but rest assured that if you can't get enough of my research via the papers I've written for some reason, then my thesis will not be far behind.

Coming to the end of my PhD and submitting my thesis has been surprisingly emotionally demanding, so I thank everyone who is still here for sticking around and being patient as I navigate these unfamiliar events.

Researchy things

While my PhD may be coming to a close (I still can't believe this is happening), I have confirmed that I will have dedicated time for research-related activities. Yay!

This means, of course, that as one ending draws near, a new beginning is also starting. Today's task after writing this post is to readificate around my chosen idea to figure out where there's a gap in existing research for me to make a meaningful contribution. In a very real way, it's almost like I am searching for directions as I did in my very first post in this series.

My idea is connected to the social media research that I did previously on multimodal natural language processing of flooding tweets and images with respect to sentiment analysis (it sounded better in my head).

Specifically, I think I can do better than just sentiment analysis. Imagine an image of a street that's partially underwater. Is there a rescue team on a boat rescuing someone? What about the person on the roof waving for help? Perhaps it's a bridge that's about to be swept away, or a tree that has fallen down? Can we both identify these things in images and map them to physical locations?

Existing approaches that e.g. detect where the water is in an image are prone to misidentifying water that is in fact where it should be, such as in rivers and lakes. To this end, I propose looking for the people and things in the water rather than the water itself, and going for a people-centred approach to flood information management.

I imagine that I'll probably use data from social media I already have (getting hold of new data from social media is very difficult at the moment) - filtered for memes and misinformation this time - but if you know of any relevant sources of data or datasets, I'm absolutely interested: please get in touch. It would be helpful but not required if it's related to a specific natural disaster event (I'm currently looking at floods; branching out to others is absolutely possible and on the cards, but I will need to submit a new ethics form for that before touching any data).

Another challenge I anticipate is that of unlabelled data. It is often the case that large volumes of data are generated during an unfolding natural disaster, and processing it all can be a challenge. To this end, I want my approach here to somehow make sense of unlabelled images. Of course, generalist foundation models like CLIP are great, but they lack the ability to be specific and accurate enough with natural disaster images.

I also intend that this idea would be applicable to images from a range of sources, and not just with respect to social media. I don't know what those sources could be just yet, but if you have some ideas, please let me know.

Finally, I am particularly interested if you or someone you know are in any way involved in natural disaster management. What kinds of challenges do you face? Would this be in any way useful? Please do get in touch either in the comments below or sending me an email (my email address is on the homepage of this website).

Persuasive tactics challenge

The research group I'm part of was successful in completing SemEval Task 4: Multilingual Detection of Persuasion Techniques in Memes! I implemented the 'late fusion engine', which is a fancy name for an algorithm that uses basic probability to combine categorical predictions from multiple different models, depending on how accurate each model was on a per-category basis.
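To make the idea concrete, here's a minimal sketch of that kind of accuracy-weighted late fusion. This is not the actual BDA implementation - the function name, category labels, and weighting scheme here are all my own illustrative assumptions - but it captures the core trick: each model's predicted probability for a category is weighted by how accurate that model has historically been on that category, then the combined scores are normalised.

```python
def late_fusion(predictions, per_category_accuracy):
    """Combine categorical predictions from multiple models (hypothetical sketch).

    predictions: list of dicts, one per model, mapping category -> predicted probability
    per_category_accuracy: list of dicts, one per model, mapping
        category -> that model's measured accuracy on that category (0..1)
    Returns a fused category -> score mapping, normalised to sum to 1.
    """
    fused = {}
    for model_pred, model_acc in zip(predictions, per_category_accuracy):
        for category, prob in model_pred.items():
            # Weight each model's vote by its per-category accuracy
            fused[category] = fused.get(category, 0.0) \
                + prob * model_acc.get(category, 0.0)
    total = sum(fused.values()) or 1.0  # avoid division by zero
    return {category: score / total for category, score in fused.items()}

# Example with made-up categories: model A is more reliable on 'smears',
# so its vote carries more weight there than model B's.
model_a = {"smears": 0.7, "loaded_language": 0.3}
model_b = {"smears": 0.2, "loaded_language": 0.8}
acc_a = {"smears": 0.9, "loaded_language": 0.5}
acc_b = {"smears": 0.4, "loaded_language": 0.6}
print(late_fusion([model_a, model_b], [acc_a, acc_b]))
```

The nice property of this kind of weighting is that a model that is strong in one category but weak in another only dominates the fusion where it has earned it.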

I'm unsure of the status of the paper, but I think it's been through peer-review so you can find that here: BDA at SemEval-2024 Task 4: Detection of Persuasion in Memes Across Languages with Ensemble Learning and External Knowledge.

I wasn't the lead on that challenge, but I believe the lead person (a friend of mine - if you are reading this and want me to link to somewhere here, get in touch) on that project will be going to Mexico to present it.

Teaching

I'm still not sure what I can say and what I can't, but starting in September I have been asked to teach a module on basic system administration skills. It's a rather daunting prospect, but I have a bunch of people much more experienced than me to guide me through the process. At the moment the plan is for 21 lecture-ish things, 9 labs, and the assessment stuff, so I'm rather nervous about preparing all of this content.

Of course, as a disclaimer nothing written in this section should be taken as absolute. (Hopefully) more information at some point, though unfortunately I doubt that I would be allowed to share the content created given it's University course material.

As always though, if there's a specific topic that lies anywhere within my expertise that you'd like explaining, I'm happy to write a blog post about it (in my own time, of course).

Conclusion

We've taken a little look at what has been going on since I last posted, and while this post has been rather talky (will try for some kewl graphics next time!), I nonetheless hope it has been an interesting read. I've submitted my thesis, started initial readificating for my next research project (the ideas for which we've explored here), helped out with a group research challenge project thingy, and been invited to do some teaching!

Hopefully the next post in this series will come out on time - long-term the plan is to absolutely continue blogging about the research I'm doing.

Until next time, the journey continues!

(Oh yeah! and finally finally, to the person who asked a question by email about this old post (I think?), I'm sorry for the delay and I'll try to get back to you soon.)

NLDL was awesome! >> NLDL-2024 writeup

A cool night sky and northern lights banner I made in Inkscape. It features mountains and AI-shaped constellations, with my logo and the text "@ NLDL 2024".

It's the week after I attended the Northern Lights Deep Learning Conference 2024, and now that I've had time to start to process everything I learnt and experienced, it's blog post time~~✨ :D

Edit: Wow, this post took a lot more effort to put together than I expected! It's now the beginning of February and I'm just finishing this post up - sorry about the wait! I think this is the longest blog post to date. Consider this a mega-post :D

In this post, which is likely to be quite long, I'm going to talk a bit about the trip itself, about what happened, and the key things that I learnt. Bear in mind that this post is written while I'm still sorting my thoughts out - it's likely going to take many months to fully dive into everything I saw that interested me :P

Given I'm still working my way through everything I've learnt, it is likely that I've made some errors here. Don't take my word for any facts written here please! If you're a researcher I'm quoting here and you have spotted a mistake, please let me know.

I have lots of images of slides and posters that interested me, but they don't make a very good collage! To this end images shown below are from the things I saw & experienced. Images of slides & posters etc available upon request.

Note: All paper links will be updated to DOIs when the DOIs are released. All papers have been peer-reviewed.

Day 1

A collage of images from the trip travelling to the conference.

(Above: A collage of images from the trip travelling to the conference. Description below.)

Images starting top-left, going anticlockwise:

  1. The moon & venus set against the first.... and last sunrise I would see for the next week
  2. Getting off the first plane in Amsterdam Schiphol airport
  3. Flying into Bergen airport
  4. The teeny Widerøe turboprop aircraft that took me from Bergen to Tromsø
  5. A view from the airport window when collecting my bags
  6. Walking to the hotel
  7. Departures board in Bergen airport

After 3 very long flights the day before (the views were spectacular, but they left me exhausted), the first day of the conference finally arrived. As I negotiated public transport to get myself to UiT, The Arctic University of Norway I wasn't sure what to expect, but as it turned out academic conferences are held in (large) lecture theatres (at least this one was) with a variety of different events in sequence:

  • Opening/closing addresses: Usually at the beginning & ends of a conference. Particularly the beginning address can include useful practical information such as where and when food will be made available.
  • Keynote: A longer (usually ~45 minute) talk that sets the theme for the day or morning/afternoon. Often done by famous and/or accomplished researchers.
  • Oral Session: A session chaired by a person [usually relatively distinguished] in which multiple talks are given by individual researchers with 20 minutes per talk. Each talk is 15 minutes with 5 minutes for questions. I spoke in one of these!
  • Poster session: Posters are put up by researchers in a designated area (this time just outside the room) and people can walk around and chat with researchers about their research. If talks have a wide reach and shallow depth, posters have a narrow reach and much more depth.
    • I have a bunch of photographs of posters that interested me for one reason or another, but it will take me quite a while to work through them all properly and extract the interesting bits.
  • Panel discussion: This was where a number of distinguished people sit down on chairs/stools at the front, and a chair asks a series of questions and moderates the resulting discussion. Questions from the audience may also be allowed after some of the key preset questions have been asked.

This particular conference didn't have multiple events happening at once (called 'tracks' in some conferences I think), which I found very helpful as I didn't need to figure out which events I should attend or not. Some talks didn't sound very interesting but then turned out to be some of the highlights of the conference for me, as I'll discuss below. Definitely a fan of this format!

The talks started off looking at the fundamentals of AI. Naturally, this included a bunch of complex mathematics - the understanding of which in real-time is not my strong point - so while I did make some notes on these I need to go back and take a gander at the papers of some of the talks to fully grasp what was going on.

Moving on from AI fundamentals, the next topic was reinforcement learning. While not my current area of research, some interesting new uses of the technology were discussed, such as dynamic pathing/navigation based on the information gained from onboard sensors by Alouette von Hove from the University of Oslo - the presented example was determining the locations of emitters of greenhouse gasses such as CO₂ & methane.

Interspersed in-between the oral sessions were poster sessions. At NLDL-2024 these were held in the afternoons and also had fruit served alongside them, which I greatly appreciated (I'm a huge fan of fruit). At these there were posters for the people who had presented earlier in the day, but also some additional posters from researchers who weren't presenting a talk.

If talks reach a wide audience at a shallow depth, the posters reached a narrower audience but at a much greater depth. I found the structure of having the talk before the poster very useful - not only for presenting my own research (more on that later), but also for picking out some of the posters I wanted to visit to learn more about their approaches.

On day 1, the standout poster for me was one on uncertainty quantification in image segmentation models - Confident Naturalness Explanation (CNE): A Framework to Explain and Assess Patterns Forming Naturalness. While their approach to increasing the explainability of image segmentation models (particularly along class borders) was applied to land use and habitat identification, I can totally see it being applicable to many other different projects in a generic 'uncertainty-aware image segmentation' form. I would very much like to look into this one deeply and consider applying lessons learnt to my rainfall radar model.

Another interesting poster worked to segment LiDAR data in a similar fashion to that of normal 'image segmentation' (that I'm abusing in my research) - Beyond Siamese KPConv: Early Encoding and Less Supervision.

Finally, an honourable mention is one which applied reinforcement learning to task scheduling - Scheduling conditional task graphs with deep reinforcement learning.

Diversity in AI

In the afternoon, the Diversity in AI event was held. The theme was fairness of AI models, and this event was hugely influential for me. Through a combination of cutting edge research and helpful case-studies and illustrations, the speakers revealed hidden sources of bias and novel ways to try and correct for them. They asked the question of "what do we mean by a fair AI model?", and discovered the multiple different facets to the question and how fairness in an AI model can mean different things in different contexts and to different people.

They also demonstrated how taking a naïve approach to correcting for e.g. bias in a binary classifier could actually make the problem worse!

I have not yet had time to go digging into this, but I absolutely want to spend at least an entire afternoon dedicated to digging into and reading around the subject. Previously, I had no idea how big and pervasive the problem of bias in AI was, so I most certainly want to educate myself to ensure models that I create as a consequence of the research I do are as ethical as possible.

Depending on how this research reading goes, I could write a dedicated blog post on it in the future. If this would be interesting to you, please comment below with the kinds of things you'd be interested in.

Another facet of the diversity event was that of hiring practices and diversity in academia. In the discussion panel that closed out the day, the current dilemma of low diversity (e.g. gender balance) among students taking computer science as a subject was discussed. It was suggested that how computer science is portrayed can make a difference, and that people with different backgrounds will approach and teach the subject through different lenses. Mental health was also mentioned as a factor that requires work and effort to reduce stigma, encourage discussion, and generally improve the situation.

All in all I found the diversity event to be a very useful and eye-opening event that I'm glad I attended.

A collage from day 1 of the conference

(Above: A collage from day 1 of the conference)

Images starting top-left, going anticlockwise in an inwards spiral:

  1. The conference theatre during a break
  2. Coats hung up on the back wall of the conference theatre - little cultural details stood out to me and were really cool!
  3. On the way in to the UiT campus on day 1
  4. Some plants under some artificial sunlight bulbs I found while having a wander
  5. Lunch on day 1: rice (+ curry, but I don't like curry)
  6. NLDL-2024 sign
  7. View from the top on Fjellheisen
  8. Cinnamon bun that was very nice and I need to find a recipe
  9. View from the cable car on the way up Fjellheisen

Social 1: Fjellheisen

The first social after the talks closed out for the day was that of the local mountain Fjellheisen (pronounced fyell-hai-sen as far as I can tell). Thankfully a cable car was available to take conference attendees (myself included) up the mountain, as it was significantly cold and snowy - especially 420m above sea level at the top. Although it was very cloudy at the time with a stratus cloud base around 300m (perhaps even lower than that), we still got some fabulous views of Tromsø and the surrounding area.

There was an indoor seating area too, in which I warmed up with a cinnamon bun and had some great conversations with some of the other conference attendees. Social events and ad-hoc discussions are, I have discovered, an integral part of the conference experience. You get to meet so many interesting people and discover so many new things that you wouldn't otherwise get the chance to explore.

Day 2

Day 2 started with AI for medical applications, and what seemed to be an unofficial secondary theme continuing the discussion of bias and fairness, which made the talks just as interesting and fascinating as the previous day's. By this point I had figured out the conference-provided bus, resulting in more cool discussions on the way to and from the conference venue at UiT.

Every talk was interesting in its own way, with discussions of shortcut learning (where a model learns to recognise something other than your intended target - e.g. that some medical device in an X-ray is an indicator of some condition, when it wouldn't ordinarily be present at test time), techniques to utilise contrastive learning in new ways (classifying areas of interest in very large images from microscopes), and applications of the previous discussion of bias and fairness to understanding bias in contrastive learning systems - and what we can do about it through framing the task the model is presented with.

The research project that stood out to me was entitled Local gamma augmentation for ischemic stroke lesion segmentation on MRI by Jon Middleton at the University of Copenhagen. Essentially, they correct for differing ranges of brightness in images from MRI scans of brains before training a model, to increase accuracy and reduce bias.

The poster session again had some amazing projects that are worth mentioning. Of course, as with this entire blog post this is just my own personal recount of the things that I found interesting - I encourage you to go to a conference in person at some point if you can!

The highlight was a poster entitled LiDAR-based Norwegian Tree Species Detection Using Deep Learning. The authors segment LiDAR data by tree species, and have also invented a clever augmentation technique they call 'cowmix augmentation' to stretch the model's attention to detail on class borders and increase the diversity of their dataset.

Another cool poster was Automatic segmentation of ice floes in SAR images for floe size distribution. By training an autoencoder to reconstruct SAR (Synthetic Aperture Radar) images, they use the resulting output to analyse the distribution of ice floe sizes in Antarctica.

I found that NLDL-2024 had quite a number of people working in various aspects of computer vision and image segmentation as you can probably tell from the research projects that have stood out to me so far. Given I went to present my rainfall radar data (more on that later), image handling projects stood out to me more easily than others. There seemed to be less of a focus on Natural Language Processing - which, although discussed at points, wasn't nearly as prominent a theme.

One NLP project that did feature was a talk on anonymising data in medical records before they are e.g. used in research projects. The researcher presented an approach using a generative text model to identify personal information in medical records. By combining it with a regular expression system, more personal information could be identified and removed than before.

While I'm not immediately working with textual data at the minute, part of my PhD does involve natural language processing. Maybe in the future when I have some more NLP-based research to present it might be nice to attend an NLP-focused conference too.

A collage of photos from day 2 of the conference.

(Above: A collage of photos from day 2 of the conference.)

Images from top-left in an anticlockwise inwards spiral:

  1. Fjellheisen from the hotel at which the conference dinner took place
  2. A cool church building in a square I walked through to get to the bus
  3. The hallway from the conference area up to the plants in the day 1 collage
  4. Books in the store on the left of #3. I got postcards here!
  5. Everyone walking down towards the front of the conference theatre to have the group photo taken. I hope they release that photo publicly! I want a copy so bad...
  6. The lobby of the conference dinner hotel. It's easily the fanciest place I have ever seen....!
  7. The northern lights!!! The clouds parted for half an hour and it was such a magical experience.
  8. Moar northern lights of awesomeness
  9. They served fruit during the afternoon poster sessions! I am a big fan. I wonder if the University of Hull could do this in their events?
  10. Lunch on day 2: fish pie. It was very nice!

Social 2: Conference dinner

The second social event that was arranged was a conference dinner. It was again nice to have a chance to chat with others in my field in a smaller, more focused setting - each table had about 7 people sitting at it. The food served was also very fancy - nothing like what I'm used to eating on a day-to-day basis.

The thing I will always remember though is shortly before the final address, someone came running back into the conference dinner hall to tell us they'd seen the northern lights!

Grabbing my coat and rushing out the door to some bewildered looks, I looked up and.... there they were.

As if they had always been there.

I saw the northern lights!

Seeing them has always been something I wanted to do, so I am so happy to have had the chance. The rest of the time it was cloudy, but the clouds parted for half an hour that evening and it was such a magical moment.

They were simultaneously exactly and nothing like what I expected. They danced around the sky a lot, so you really had to have a good view in all directions and keep scanning. They also moved much faster than I expected: some would flash and be gone in just moments, while others would hang around seemingly doing nothing for over a minute.

A technique I found helpful was to scan the sky with my phone's camera. It could see 'more' of the northern lights than my eyes could, so I could find a spot in the sky with a faint green glow using my phone and then stare at it - and more often than not it would brighten up quickly enough to see with my own eyes.

Day 3

It felt like the conference went by at lightning speed. The entire time I was focused on learning and experiencing as much as I could, and almost as soon as the conference started, we reached the last day.

The theme for the third day started with self-supervised learning. As I'm increasingly discovering, self-supervised learning is all about framing the learning task you give an AI model in a clever way that partially or completely does away with the need for traditional labels. There were certainly some clever solutions on show at NLDL-2024:

An honourable mention goes to the paper on a new speech-editing model called FastStitch. Unfortunately the primary researcher was not able to attend - a shame, because it would have been cool to meet up and chat - but their approach looks useful for correcting speech in films, anime, etc. after filming... even though it could also be used for much more nefarious purposes (though that goes for all of the next-gen generative AI models coming out at the moment).
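As a quick aside on that self-supervised framing (a generic sketch of my own, not from any NLDL paper): masked-word prediction is a classic example, where the labels come for free from the raw text itself.

```python
# Self-supervised labelling: mask each word in turn, and the masked word
# itself becomes the training label - no human annotation required.
def make_masked_examples(sentence: str, mask_token: str = "[MASK]"):
    words = sentence.split()
    examples = []
    for i, word in enumerate(words):
        masked = words.copy()
        masked[i] = mask_token
        # (input, label) pair derived entirely from the data itself
        examples.append((" ".join(masked), word))
    return examples

pairs = make_masked_examples("the northern lights danced")
```

A model trained to fill in the blanks this way learns useful representations of language without a single hand-made label.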

This was also the day I presented my research! As I write this, I realise that this post is now significantly long so I will dedicate a separate post to my experiences presenting my research. Suffice to say it was a very useful experience - both from the talks and the poster sessions.

Speaking of poster sessions, there was a really interesting poster that day entitled Deep Perceptual Similarity is Adaptable to Ambiguous Contexts, which proposes that image similarity is more complicated than just comparing pixels: it's about the shapes and the objects shown - not just the style a given image is drawn in. To this end, they use a contrastive approach that compares how similar a series of augmentations are to the original source image as a training task.
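To illustrate the general idea (this is my own rough sketch with made-up feature vectors, not the authors' actual method), you can compare how far each augmentation's deep features drift from the original's:

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two feature vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Pretend these 'features' came from a deep network backbone. Here they
# are made up: the original, a light augmentation (features stay close),
# and a heavy augmentation (features drift away).
original = np.array([1.0, 0.0, 0.5, 0.2])
light_aug = original + 0.05 * np.array([1.0, -1.0, 0.5, 0.0])
heavy_aug = np.array([0.1, 0.9, -0.3, 0.8])

sims = {name: cosine_similarity(original, vec)
        for name, vec in [("light", light_aug), ("heavy", heavy_aug)]}
# A perceptual-similarity training signal can then rank augmentations by
# how close their features stay to the original's.
```

The interesting part is that similarity in *feature* space captures shape and content, rather than raw pixel differences.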

Panel Discussion

Before the final poster session of the (main) conference, the talks were closed out by a panel discussion between 6 academics (1 chairing) who sounded very distinguished (sadly I did not completely catch their names, and I will save everyone the embarrassment of the nicknames I had to assign them to keep track of the discussion in my notes). There was no set theme that jumped out at me (other than AI, of course), but as with the diversity in AI discussion panel on day 1, the chair had some set questions to ask the academics making up the panel.

The role of universities and academics was discussed at some length. Recently, large tech companies like OpenAI and Google have been racing - driven by profit - to put next-generation foundation models (a term new to me that describes large general-purpose models like GPT, Stable Diffusion, Segment Anything, CLIP, etc.) to work in anything and everything they can get their hands on..... often to the detriment of user privacy.

It was mentioned that researchers in academia have a unique freedom to choose what they research in a way that those working in industry do not. It was suggested that academia must stay one step ahead of industry, understanding the strengths and weaknesses of new technologies - such as foundation models - and how they impact society. With this freedom, researchers in academia can ask the how and why, which industry can't spare the resources for.

The weaknesses of academia were also touched on: academia is very project-based, and funding for long-term initiatives can be very difficult to come by. It was also mentioned that academia can get stuck optimising for a particular benchmark - in the field of AI especially. To this end, I would guess that creativity is really important for inventing innovative new ways of solving existing real-world problems, rather than focusing too much on abstract benchmarks.

The topic of the risks of AI in the future also came up. While the currently-scifi concept of an Artificial General Intelligence (AGI) that is smarter than humans is a hot topic at the moment, whether or not it's actually possible is not clear (personally, it seems rather questionable that it's possible at all) - and certainly not in the next few decades. Rather than worrying about AGI, everyone agreed that bias and unfairness in AI models is already a problem that needs to be urgently addressed.

The panel agreed that people believing the media hype generated by the large tech companies is arguably more dangerous than AGI itself... even if it were right around the corner.

The EU AI Act is right around the corner, which requires transparency about the data used to train a given AI model, among many other things. This is a positive step forwards, but the panel was concerned that the act could lead to companies cutting corners on safety just to tick boxes. They were also concerned that an industry would spring up around the act to help other businesses comply with it, risking a significantly raised barrier to entry. How the act is actually implemented will have a large effect on its effectiveness.

While the act risks e.g. ChatGPT being forced to pull out of the EU if it does not comply with the transparency rules, the panel agreed that we must take an alternate path to that of closed-source models. Open-source alternatives to e.g. ChatGPT do exist, and are only about 1.5 years behind the current state of the art. At first it appears that privacy and openness are at odds, but in Europe we need both.

The panel was asked what advice they had for young early-career researchers (like me!) in the audience, and had a range of helpful tips:

  • Don't just follow trends because everyone else is. You might see something different in a neglected area, and that's important too!
  • Multidisciplinary research is a great way to see different perspectives.
  • Speak to real people on the ground as to what their problems are, and use that as inspiration
  • Don't keep chasing after the 'next big thing' (although keeping up to date in your field is important, I think).
  • Work on the projects you want to work on - academia affords researchers a unique freedom here.

All in all, the panel was a fascinating discussion, considering the role of academia in current global issues from a big-picture perspective I haven't really encountered before.

AI in Industry Event

The last event of the conference came around much faster than I expected - I suppose spending every waking moment focused on conferencey things will make time fly by! This event was run by 4 different people from 4 different companies involved in AI in one way or another.

It was immediately obvious that these talks were by industry professionals rather than researchers, since they somehow managed to say a lot without revealing all that much about the internals of their companies. It was also interesting that some of the talks were almost a pitch to the researchers present, asking if they had any ideas or solutions to the companies' problems.

This is not to say that the talks weren't useful. They were a useful insight into how industry works, and how the impact of research can be multiplied by being applied in an industry context.

It was especially interesting to listen to the discussion panel that was held between the 4 presenters / organisers of the industry event. 1 of them served as chair, moderating the discussion and asking the questions to direct the discussion. They discussed issues like silos of knowledge in industry vs academia, the importance of sharing knowledge between the 2 disciplines, and the challenges of AI explainability in practice. The panellists had valuable insights into the realities of implementing research outputs on the ground, the importance of communication, and some advice for PhD students in the audience considering a move into industry after their PhD.

A collage of photos I took during day 3

(Above: A collage of photos from day 3 of the conference)

Images starting top-left, going anticlockwise in an inwards spiral:

  • A cool ship I saw on the way to the bus that morning
  • A neat building I saw on the way to the bus. The building design is just so different to what I'm used to.... it gives me loads of building inspiration for Minetest!
  • The cafeteria in which we ate lunch each day! It was so well designed, and the self-clear system was very functional and cool!
  • The conference theatre during one of the talks.
  • Day 3's lunch: lasagna! They do cool food at UiT in Tromsø, Norway!
  • The last meal I ate (I don't count breakfast :P) in the evening before leaving the following day
  • The library building I went past on the way back to the hotel in the evening. The integrated design with books + tables and interactive areas is just so cool - we don't have anything like that I know of over here!
  • A postbox! I almost didn't find it, but I'm glad I was able to send a postcard or two.
  • The final closing address! It was a bittersweet moment that the conference was already coming to a close.

Closing thoughts

Once the industry event was wrapped up, it was time for the closing address. Just as soon as it started, the conference was over! I felt a strange mix of exhaustion, disbelief that the conference was already over, and sadness that everyone would be going their separate ways.

The very first thing I did after eating something and getting back to my hotel was collapse in bed and sleep until some horribly early hour in the morning (~4-5am, anyone?) when I needed to catch the flight home.

Overall it was an amazing conference, and I've learnt so much! It's felt so magical, like anything is possible ✨ I've met so many cool people and been introduced to so many interesting ideas, it's gonna take some time to process them all.

I apologise for how long and rambly this post has turned out to be! I wanted to get all my thoughts down in something coherent enough I can refer to it in the future. This conference has changed my outlook on AI and academia, and I'm hugely grateful to my institution for finding the money to make it possible for me to go.

It feels impossible to summarise the entire conference in 4 bullet points, but here goes:

  • Fairness: What do we mean by 'fair'? Hidden biases, etc. Explainable AI sounds like an easy solution, but it can mislead you - and attempts to improve perceived 'fairness' can actually make the problem worse without you ever knowing!
  • Self-supervised learning: Clustering and contrastive learning, also tying in with the fairness theme via sample weighting and other techniques.
  • Foundation models: Large language models etc. that are very large and expensive to train, sometimes available pretrained and sometimes only as an API. They can zero-shot many different tasks, but what about fairness, bias, and ethics?
  • Reinforcement learning: ...and its applications

Advice

I'm going to end this mammoth post with some advice to prospective first-time conference goers. I'm still rather inexperienced with these sortsa things, but I do have a few things I've picked up.

If you're unsure about going to a conference, I can thoroughly recommend attending one. If you don't know which conference you'd like to attend, I recommend seeking advice from someone in your field with more experience than you. What I can say is that I really appreciated how NLDL-2024 was not too big and not too small: it had an estimated 250 attendees, and I'm very thankful it did not have multiple tracks - this way I didn't hafta sort through which talks interested me and which didn't. The talks sometimes surprised me: given the choice I might have picked an alternative, but in the end I'm glad I sat through all of them.

Next, speak to people! You're all in this together. Speak to people at lunch. On the bus/train/whatever. Anywhere and everywhere you see someone with the same conference lanyard as you, strike up a conversation! The other conference attendees have likely worked just as hard as you to get there & will likely be more than willing to talk to you. You'll meet all sorts of new people who are just as passionate about your field as you are, which is an amazing experience.

TAKE NOTES. I used Obsidian for this purpose, but use anything that works for you. Take notes both formally - from talks, panel discussions, and other organised events - and informally, during chats and discussions with other conference attendees. Don't forget to note down who you spoke to as well! I'm bad at names and faces, but your notes will serve as a permanent record of the things you learnt and experienced that you can refer back to later. You aren't going to remember everything you see (unless you have a perfect photographic memory, of course), so make notes!

On the subject of recording your experiences, take photos too. I'm now finding it very useful to have photos of important slides and posters that I saw to refer back to. I later developed a habit of photographing the first slide of every talk, which has also proved to be invaluable.

Having business cards to hand out can be extremely useful to follow up conversations. If you have time, get some made before you go and take them with you. I included some pretty graphics from my research on mine, which served as useful talking points to get conversations started.

Finally, have fun! You've worked hard to be here. Enjoy it!

If you have any thoughts / comments about my experiences at NLDL-2024, I'd love to hear them! Do leave a comment below.

PhD Update 17: Light at the end of the tunnel

Wow..... it's been what, 5 months since I last wrote one of these? Oops. I'll do my best to write them at the proper frequency in the future! Things have been busy. Before I talk about what's been happening, here's the ever-lengthening list of posts in this series:

As I sit here at the very bitter end of the very last day of a long but fulfilling semester, I'm feeling quite reflective about the past year and how things have gone on my PhD. One of these posts is definitely long overdue.

Timescales

Naturally the first question here is about timescales. "What happened?" I hear you ask. "I thought you said you were aiming for intent to submit September 2023 for December 2023 finish?"

Well, about that.......

As it turns out, spending half of one's week working as an Experimental Officer throws off one's estimation of how much work one can do. To this end, it's looking more likely that I will be submitting my thesis in early-to-mid semester 2 this year. In other words, around about March 2024 - give or take a month or two.

After submission, the next step will be my viva. Assuming I pass, that will then likely be followed by corrections that must be completed based on the feedback from the viva.

What is a viva though? From what I understand, it is an oral exam in which you, your primary supervisor, and 2 examiners go through your thesis with a fine-tooth comb and ask you lots of questions. I've heard it can take several hours to complete. While the standard is for 1 examiner to be chosen internally from your department / institute and 1 externally (chosen by your primary supervisor), in my case both will be chosen externally, as I am now a (part-time) staff member in the Department of Computer Science at the University of Hull (my home institution).

While it's still a little ways out yet, I can't deny that the thought of my viva is making me rather nervous - having everything I've done over the past 4.5 years scrutinised by completely unknown people. In a sense, it feels like once it is time for my viva, there will be nothing more I can do. I will either know the answers to their questions.... or I will not.

Writing

As you might have guessed by now, writing has been the name - and, indeed, aim - of the game since the last post in this series. Everything is coming together rather nicely. It's looking like I'm going to end up with the following structure:

  1. Introduction (not written*)
  2. Background (almost there! currently working on this)
  3. Rainfall radar for 2d flood forecasting (needs expanding)
  4. Social media sentiment analysis (done!)
  5. Conclusion
  6. Acknowledgements, Appendices, etc
  7. Dictionary of terms; List of acronyms (grows organically as I write - I need to go through and make sure I \gls all the terms I've added later)
  8. Bibliography (currently 27 pages and counting O.o)
  * Technically I have written it, it's just outdated and very bad and needs throwing out of the window of the tallest building I can find. A rewrite is pending - see below.
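For anyone unfamiliar with the `\gls` workflow I mentioned for the dictionary of terms, here's the general shape of it using the glossaries package (a sketch, not my actual preamble):

```latex
% Preamble: load glossaries and define each term/acronym once...
\usepackage{glossaries}
\makeglossaries
\newacronym{cnn}{CNN}{convolutional neural network}

% ...then in the body, \gls expands the acronym on first use and
% abbreviates it thereafter:
We use a \gls{cnn} for segmentation. The \gls{cnn} is trained end-to-end.

% Print the accumulated list of terms at the end:
\printglossaries
```

The list then "grows organically" exactly as described: any term wrapped in `\gls` is automatically collected into the printed glossary.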

A sneak preview of my thesis as a PDF.

(Above: A sneak preview of my thesis PDF. I'm writing in LaTeX - check out my templates with the University of Hull reference style here! Evidently the pictured section needs some work.....)

I've finished the chapter on the social media work, barring some minor adjustments to ensure consistency. My current focus is the background chapter. This is most of the way there, but I need more detail in several sections, so I'm working my way through them one at a time. This is resulting in a bunch more reading (especially for vision-based water detection via satellite data), so it is taking some time.

Once I've wrapped up the background section, it will be time to turn my attention to content chapter #2: rainfall radar for 2D flood forecasting. Currently, it sits halfway between a conference paper (check it out! You can read it now, though a DOI is pending and should be available after the conference) and a thesis chapter - so I need to push (pull? drag?) it the rest of the way to the finish line. This will primarily entail 2 things:

  • Filling out the chapter-specific related works, which are currently rather brief given space and time limitations in a conference paper
  • Elaborating on things like the data preprocessing, experiments, discussion, etc.

This will also take some time, which - together with the background section - explains the uncertainty that remains in my finish date. Once these are both complete, I will be submitting my intent to submit! That starts a 3 month timer, by the end of which I must have submitted my thesis. During this period, I will be working on the introduction and conclusion chapters, which I do not expect to take nearly as long as any of the others.

Once I am done writing and have submitted my thesis, I will do everything I can to ensure it is available under an open source licence for everyone to read. I believe strongly in the power of open source (and, open science) to benefit everyone, and want to share everything I've learned with all of you reading this.

At 102 pages of single-spaced A4 so far and counting (not including the aforementioned bibliography), it's a big time investment to read. To this end, I have various publications I've written and posted about here previously that cover most of the work (namely the rainfall radar conference paper and the social media journal article), and I also want to somehow condense my thesis down into a 'mini-thesis' of about 3-6 pages and post that alongside the main document here on my website. I hope that this will provide the broad strokes and serve as a navigation aid for the main document.

Predicting Persuasive Posts

All this writing is going to drive me crazy if I don't do something practical alongside it. Unfortunately I have long since run out of excuses to run more experiments for my PhD work, so it was perfect timing when a good friend of mine who is also doing a PhD (they've published this paper) came along the other day asking for some help with a challenge competition submission they want to do. Of course, I had to agree to help out in a support role, as the project sounds really interesting1.

The official title of the challenge is thus: Multilingual Detection of Persuasion Techniques in Memes

The challenge is part of SemEval-2024, and it's basically about classifying memes from some social media network (it's unclear which one they are from) according to which persuasion tactic they employ to manipulate the reader's opinions / beliefs.

The full challenge page can be found here: https://propaganda.math.unipd.it/semeval2024task4/index.html

We had a meeting earlier this week to discuss it, and one of the key problems we identified was that to score challengers, the organisers will be using posts in multiple unseen languages. To this end, it strikes me that it is important to have multiple languages embedded into the same space for optimal results.

This is not what GloVe does (it embeds each language into a different 'space', so a model trained on data in 1 language won't necessarily work well with another) - as I discovered in my demo for the Hull Science Festival (I definitely want to write about this in the final post in that series). So, as my role in the team, I'm going to push a number of different word embeddings through the system I developed for the aforementioned science demo to identify which one is best for embedding multilingual text. Expect some additional entries to be added to the demo and an associated blog post on my findings very soon!
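To make the 'same space' point concrete, here's a toy sketch (with entirely made-up vectors) of why per-language embedding spaces are a problem: in a shared multilingual space, translation pairs should have high cosine similarity, while separately-trained spaces give no such guarantee.

```python
import numpy as np

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical embeddings for the same concept in 2 languages.
# In a genuinely multilingual space, translation pairs sit close together:
multilingual = {
    ("en", "cat"): np.array([0.9, 0.1, 0.3]),
    ("fr", "chat"): np.array([0.85, 0.15, 0.28]),
}
# With per-language spaces (like vanilla GloVe trained separately per
# language), the axes are arbitrary, so translations need not be close:
per_language = {
    ("en", "cat"): np.array([0.9, 0.1, 0.3]),
    ("fr", "chat"): np.array([-0.2, 0.95, -0.4]),
}

shared_sim = cosine(multilingual[("en", "cat")], multilingual[("fr", "chat")])
separate_sim = cosine(per_language[("en", "cat")], per_language[("fr", "chat")])
```

A classifier trained on English vectors from the second kind of space has no hope of generalising to unseen languages, which is exactly the scoring scenario in the challenge.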

Currently, I have the following word embedding systems on my list:

  • Word2vec
  • FastText
  • CLIP
  • BERT/mBERT
  • XLM/XLM-RoBERTa

If you know of any other good word embedding models / algorithms, please do leave a comment below.

It also occurs to me while writing this that I'll have to make sure the multilingual dataset I used for the online demo has the same or similar words translated into every language, to rule out any differences in the embeddings there.

A nice challenge for the Christmas holidays! My experience of collaborating with other researchers is rather limited at the moment, so I'm looking forward to working in a team to achieve a goal much faster than would otherwise be possible.

Beyond the edge

Something that has been a constant, steadily growing nagging presence in my mind is the question of what happens after my thesis. While the details have not been confirmed yet, once everything PhD-related is wrapped up I will most likely be increasing my hours such that I work Monday - Friday, rather than just Monday - Wednesday lunchtime as I have been doing so far.

This extra time will consist of 2 main activities. To the best of my current understanding, this will include some additional teaching responsibilities - I will probably be teaching a module that lies squarely within 1 of my strong points. It will also, crucially, include some dedicated time for research.

This research time I believe I will be able to spend on research-related activities: for example, collaborating with other researchers, reading papers, designing and running experiments, and writing up results for publication. Essentially what I've been doing on my PhD, just minus the thesis writing!

Of course, the things I talk about here are not set in stone, and me talking about them here is not a declaration of such.

Either way, I do feel that the technical side is a strong point of mine that I am rather passionate about, so I very much want to continue dedicating a significant portion of my energy towards practical research tasks.

I'm not sure how much I am allowed to talk about the teaching I will be doing, but do expect some updates on that here on my blog too - however high-level and broad strokesy they happen to be. What kind of teaching-related things would you be interested in being updated about here? Please do leave a comment below.

Talking more specifically, I do have a number of research ideas - one of which I have alluded to above - that I want to explore after my PhD. Most of these are based on what I have learnt during my PhD, and are the logical next steps towards analysing complex real-time data sources with a view to extracting and processing information to increase situational awareness in natural disaster scenarios. When I get around to them, I will be blogging about my progress in detail here.

It should probably be mentioned that I am still quite a long way off actually putting any of these ideas into practice (I would definitely not recommend trusting any predictions my current rainfall radar → binarised water depth model makes in the real world yet!), but if you or someone you know works in the field of managing natural disasters, I would be fascinated to know what you would find most useful related to this - please leave a comment below.

Conclusion

This post has ended up being a lot longer than I expected! I've talked about my current writing progress, a rather interesting side-project (more details in a future blog post!), and initial conceptual future plans - both researchy and otherwise.

While my thesis is drawing close to completion (relatively speaking, at least), I hope you will join me beyond the end of this long journey. As one book closes, another opens. A new journey is only just beginning - one I can't wait to share with everyone here in future blog posts.

If you've got any thoughts, it would be cool if you could share them below.


  1. It goes without saying, but I won't let it impact my writing progress. I divide my day up into multiple slices - one of which is dedicated to focused PhD work - and I'll be pulling time from a different slice than the one for my PhD writing to help out with this project. 

NLDL 2024: My rainfall radar paper is out!

A cool night sky and northern lights banner I made in Inkscape. It features mountains and AI-shaped constellations, with my logo and the text "@ NLDL 2024".

Towards AI for approximating hydrodynamic simulations as a 2D segmentation task

......that's the title of the conference paper I've written about the rainfall radar research I've been doing as part of my PhD - and now that the review process is complete, I'm told by my supervisor that I can share it!

This paper is the culmination of one half of my PhD (the other half is multimodal social media sentiment analysis, which resulted in a journal article). Essentially, the idea behind the whole project was asking the question "Can we make flood predictions all at once in 2D?".

The answer, as it turns out, is yes*.... but with a few caveats and a lot more work required before it's anywhere near ready to be coming to a smartphone near you.

It all sort of spiralled from there - and resulted in the development of a DeepLabV3+-based image semantic segmentation model that learns to approximate a physics-based water simulation.

The abstract of the paper is as follows:

Traditional predictive simulations and remote sensing techniques for forecasting floods are based on fixed and spatially restricted physics-based models. These models are computationally expensive and can take many hours to run, resulting in predictions made based on outdated data. They are also spatially fixed, and unable to scale to unknown areas.

By modelling the task as an image segmentation problem, an alternative approach using artificial intelligence to approximate the parameters of a physics-based model in 2D is demonstrated, enabling rapid predictions to be made in real-time.

I'll let the paper explain the work I've done in detail (I've tried my best to make it understandable by a wide audience). You can read it here:

https://openreview.net/forum?id=TpOsdB4gwR

(Direct link to PDF)

Long-time readers of my blog here will know that I haven't had an easy time of getting the model to work. If you'd like to read about the struggles of developing this and other models over the course of my PhD so far, I've been blogging about the whole process semi-regularly. We're currently up to part 16:

PhD Update 16: Realising the possibilities of the past

Speaking of which, it's high time I wrote another PhD update blog post, isn't it? A lot has been going on, and I'd really like to document it all here on my blog. I've also found that writing these posts gets me to take a step back and look at the big picture of my research - something that has been helpful in more ways than one. I'll discuss this and my progress in the next part of my PhD update blog post series, which I tag with PhD to make them easy to find.

Until then, I'll see you in the next post!

I'm going to NLDL 2024!

A cool night sky and northern lights banner I made in Inkscape. It features mountains and AI-shaped constellations, with my logo and the text "@ NLDL 2024".

Heya! It's been a moment, 'cause I've been doing a lot of writing and revising of writing for my PhD recently (the promised last part of the scifest demo series is coming eventually, promise!), but I'm here to announce that, as you might have guessed from the cool new banner, I have today (yesterday? time is weird when you stay up this late) had a paper accepted to the Northern Lights Deep Learning Conference 2024, which is to be held on 9th - 11th January 2024 in Tromsø, Norway!

I have a lot of paperwork to do between now and then (and many ducks to get in a row), but I have every intention of attending the conference in person to present my rainfall radar research I've been rambling on about in my PhD update series.

I am unsure whether I'm allowed to share the paper at this stage - if anyone knows, please do get in touch. In the meantime, I'm pretty sure I can share the title without breaking any rules:

Towards AI for approximating hydrodynamic simulations as a 2D segmentation task

I also have a Cool Poster, which I'll share here after the event in the new (work-in-progress) Research section of the main homepage - once I've thrown some CSS at it.

I do hope that this cool new banner gets some use bringing you more posts about (and, hopefully, from!) NLDL 2024 :D

--Starbeamrainbowlabs

My Hull Science Festival Demo: How do AIs understand text?

Banner showing gently coloured point clouds of words against a dark background on the left, with the Humber Science Festival logo, fading into a screenshot of the attract screen of the actual demo on the right.

Hello there! On Saturday 9th September 2023, I was on the supercomputing stand for the Hull Science Festival with a cool demo illustrating how artificial intelligences understand and process text. Since then, I've been hard at work tidying that demo up, and today I can announce that it's available to view online here on my website!

This post is a general high-level announcement. A series of technical posts will follow on the nuts and bolts of both the theory behind the demo and the actual code itself and how it's put together, because it's quite interesting and I want to talk about it.

I've written this post to serve as a foreword / quick explanation of what you're looking at (similar to the explanation I gave in person), but if you're impatient you can just find it here.

All AIs currently developed are essentially complex parametrised mathematical models. We train these models by updating their parameters little by little until the output of the model is similar to some ground truth label.

In other words, an AI is just a bunch of maths. So how does it understand text? The answer to this question lies in converting text to numbers - a process often called 'word embedding'.

This is done by splitting an input sentence into words, and then individually converting each word into a series of numbers - which is what you will see in the demo at the link below, just converted with some magic to 3 dimensions to make it look fancy.

Similar sorts of words will have similar sorts of numbers (or positions in 3D space in the demo). As an example here, at the science festival we found a group of footballers, a group of countries, and so on.
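To sketch what "similar sorts of numbers" means (with made-up toy vectors here, not real GloVe embeddings), related words score a higher cosine similarity than unrelated ones:

```python
import numpy as np

# Hypothetical 4-dimensional embeddings; real ones (e.g. GloVe) have 25-300 dims
embeddings = {
    "messi":   np.array([0.9, 0.8, 0.1, 0.0]),
    "ronaldo": np.array([0.8, 0.9, 0.2, 0.1]),
    "france":  np.array([0.1, 0.0, 0.9, 0.8]),
}

def cosine_similarity(a, b):
    # 1.0 = pointing the same way, ~0.0 = unrelated directions
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

print(cosine_similarity(embeddings["messi"], embeddings["ronaldo"]))  # high
print(cosine_similarity(embeddings["messi"], embeddings["france"]))   # much lower
```

This is exactly the "group of footballers" effect: words used in similar contexts end up pointing in similar directions.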

In the demo below, you will see clouds of words processed from Wikipedia. I downloaded a bunch of page abstracts from Wikipedia in a number of different languages (source), extracted a list of words, converted them to numbers (GloVe), reduced those numbers to 3 dimensions (UMAP), and plotted them in 3D space. Can you identify every language displayed here?
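That last step is a dimensionality reduction. The demo uses UMAP for this; as a rough illustration of the idea without extra dependencies, here's the same word-vectors-to-3D step done with a simple PCA via numpy's SVD (random vectors standing in for the real embeddings):

```python
import numpy as np

rng = np.random.default_rng(42)
word_vectors = rng.normal(size=(1000, 50))  # 1000 words, 50-dim embeddings

def reduce_to_3d(X):
    # Centre the data, then project onto the top 3 principal components
    centred = X - X.mean(axis=0)
    _, _, vt = np.linalg.svd(centred, full_matrices=False)
    return centred @ vt[:3].T

points = reduce_to_3d(word_vectors)
print(points.shape)  # (1000, 3)
```

UMAP differs from PCA in that it is non-linear and tries to preserve local neighbourhoods, which is why the demo shows distinct clusters rather than one big blob - but the input/output shapes are the same.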


Find the demo here: https://starbeamrainbowlabs.com/labs/research-smflooding-vis/

A screenshot of the initial attract screen of the demo. A central box allows one to choose a file to load, with a large load button directly beneath it. The background is a blurred + bloomed screenshot of a point cloud from the demo itself.


If you were one of the lucky people to see my demo in person, you may notice that this online demo looks very different to the one I originally presented at the science festival. That's because the in-person demo uses data from social media, but this one uses data from Wikipedia to preserve privacy, just in case.

I hope you enjoy the demo! Time permitting, I will be back with some more posts soon to explain how I did this and the AI/NLP theory behind it at a more technical level. Some topics I want to talk about, in no particular order:

  • General technical outline of the nuts and bolts of how the demo works and what technologies I used to throw it together
  • How I monkeypatched Babylon.js's gamepad support
  • A detailed and technical explanation of the AI + NLP theory behind the demo, the things I've learnt about word embeddings while doing it, and what future research could look like to improve word embeddings based on what I've learnt
  • Word embeddings, the options available, how they differ, and which one to choose.

Until next time, I'll leave you with 2 pictures I took on the day. See you in the next post!

Edit 2023-11-30: Oops! I forgot to link to the source code....! If you'd like to take a gander at the source code behind the demo, you can find it here: https://github.com/sbrl/research-smflooding-vis

A photo of my demo up and running on a PC with a PS4 controller on a wooden desk. An Entroware laptop sits partially obscured by a desktop PC monitor, the latter of which has the demo full screen.

(Above: A photo of my demo in action!)

A photo of some piles of postcards arranged on a light wooden desk. My research is not shown, but visuals from other researchers' projects are printed, with everything from microbiology to disease research to jellyfish galaxies.

(Above: A photo of the postcards on the desk next to my demo. My research is not shown, but visuals from other researchers' projects are printed, with everything from microbiology to disease research to jellyfish galaxies.)

I've submitted a paper on my rainfall radar research to NLDL 2024!

A screenshot of the nldl.org conference website.

(Above: A screenshot of the NLDL website)

Hey there! I'm excited that last week I submitted a paper to what I hope will become my very first conference! I've attended the AAAI-22 doctoral consortium online, but I haven't had the opportunity to attend a conference until now. Of course, I had to post about it here.

First things first, which conference have I chosen? With the help of my supervisor, we chose the Northern Lights Deep Learning Conference. It's relatively close to the UK (where I live), it's relevant to my area and the paper I wanted to submit (I've been working on the paper since ~July/August 2023), and the deadline wasn't too tight. There were a few other conferences I was considering, but they either had really awkward deadlines (sorry, HADR! I've missed you twice now), or got moved to an unsafe country (IJCAI → China).

The timeline is roughly as follows:

  • ~early - ~mid November 2023: acceptance / rejection notification
  • somewhere in the middle: paper revision time
  • 9th - 11th January 2024: conference time!

Should I get accepted, I'll be attending in person! I hope to meet some cool new people in the field of AI/machine learning and have lots of fascinating discussions about the field.

As longtime readers of my blog here might have guessed, the paper I've submitted is on my research using rainfall radar data and abusing image segmentation to predict floods. The exact title is as follows:

Towards AI for approximating hydrodynamic simulations as a 2D segmentation task

As the paper is unreviewed, I don't feel comfortable with releasing it publicly yet. However, feel free to contact me if you'd like to read it and I'm happy to hand out a copy of the unreviewed paper individually.

Most of the content has been covered quite casually in my PhD update blog post series (16 posts in the series so far! easily my longest series by now), just explained in formal language.

This paper will also form the foundation of the second of two big meaty chapters of my thesis, the first being based on my social media journal article. I'm currently at 80 pages of thesis (including appendices, excluding bibliography, single spaced a4), and I still have a little way to go before it's done.

I'll be back soon with another PhD update blog post with more details about the thesis writing process and everything else I've been up to over the last 2 months. I may also write a post on the Hull Science Festival, which I'll be attending on the supercomputing stand with a Cool Demo™, 'cause the demo is indeed very cool.

See you then!

PhD Update 16: Realising the possibilities of the past

Hey there! It's been a while. As I explained in a previous post, I've been adjusting to a new part-time position in my department! In short, posts will continue on here but may be slightly less frequent than before.

Before we begin, here is the customary list of previous posts in this series:

A lot has happened since last time! I'm going to split this up into sections as I have in previous posts, but in summary the noodling around I've done since the last post has really paid off.

Publication

After another round of revisions, my journal article on my social media research has now been accepted and published!

View it here:

https://doi.org/10.1016/j.cageo.2023.105405

It's my first published journal article, so I am quite excited about it :D

If I haven't already (I'm writing this post first, as I only got notified about its publication while I was writing this post!), I'll definitely be making another post about it here!

Rainfall Radar

The main thing I've focused on a lot is my rainfall radar model, and beating it into some kind of shape that actually works. This has not been a simple process, but I think that this graph speaks for itself:

It works! I can scarcely believe that after nearly 3 and a half years it finally produces a useful output. This feels like a big personal achievement - as those who have been following this series will know, I have tried many different things before reaching this point.

The first question I know will be on your mind is "How did we get here?", and the answer to this lies in 2 things:

  1. Connectedness
  2. Resolution and boundary difficulties

Let's tackle connectedness first. By connectedness, I mean specifically parts of a given model connecting back on themselves. This is important for multiple reasons, not least because it reduces the effect of the vanishing gradient problem. This is also the reason that ResNet adds skip connections: the gradient updates used when the model is backpropagating can then flow all the way up to the top of the model. Without this, the weights in the initial layers can't update, and information is lost before it even makes it very far into the model.
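As a tiny illustrative sketch (plain numpy, not a real ResNet), a skip connection just adds a layer's input straight onto its output, giving gradients a direct path around the layer:

```python
import numpy as np

def layer(x, weights):
    # A single dense layer with ReLU activation
    return np.maximum(0, x @ weights)

def residual_block(x, weights):
    # The skip connection adds the input straight onto the layer's output;
    # even if the layer contributes nothing, the input still passes through
    return layer(x, weights) + x

rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))
w = rng.normal(size=(8, 8))
out = residual_block(x, w)
print(out.shape)  # (4, 8)
```

With zero weights the block is just the identity, which is precisely why gradients (and information) can't get "lost" inside it.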

From what I can tell, this is the primary problem with the models that I have tried so far, in one way or another. Autoencoders, for example, do not have very much connectedness, making it difficult for backpropagation to do its thing.... especially when the task at hand is a significantly difficult one.

The solution I ended up employing here was DeepLabV3+. It uses an image encoder at first, and then a PSPNet-style pyramid scheme for analysing multiple scales of features, and then finally an image segmentation head. It also has a skip connection between halfway up the image encoder and the segmentation head too, further increasing the connectedness of the model.

Once I had something from the initial DeepLabV3+ model, improving the output was a matter of increasing the resolution of the output and adjusting things so that the model can better resolve the boundaries of the water / no water regions I was asking it to predict.

With the tweaks described below, DeepLabV3+ has turned out to be the ideal model for the task. The changes and hyperparameters I used can be summarised like so:

  • Loss function: Add dice loss to cross-entropy loss.
  • Learning rate: Reduce to 0.00001 from the default 0.001
  • Upscaling: Hack the model to upscale the input/outputs. This increases the resolution the model operates at, improving performance significantly
  • Removing isolated pixels: I removed water pixels that had no neighbouring water pixels. This is like a band-aid on the real problem (a bad physics-based model run), but it did help.
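As an illustration of the first and last of these (a simplified numpy sketch for binary masks - not my actual implementation, just the technique):

```python
import numpy as np

def dice_plus_crossentropy(y_true, y_pred, eps=1e-7):
    # y_true: binary ground-truth mask; y_pred: predicted probabilities
    y_pred = np.clip(y_pred, eps, 1 - eps)
    crossentropy = -np.mean(
        y_true * np.log(y_pred) + (1 - y_true) * np.log(1 - y_pred))
    # Dice loss penalises poor overlap between the two masks,
    # which helps with class imbalance (most pixels are "no water")
    dice = 1 - (2 * np.sum(y_true * y_pred) + eps) / (
        np.sum(y_true) + np.sum(y_pred) + eps)
    return crossentropy + dice

def remove_isolated(mask):
    # Drop water pixels with no 4-connected water neighbours
    p = np.pad(mask, 1)
    neighbours = (p[:-2, 1:-1] + p[2:, 1:-1]
                  + p[1:-1, :-2] + p[1:-1, 2:])
    return mask & (neighbours > 0)

mask = np.array([[1, 1, 0],
                 [0, 0, 0],
                 [0, 0, 1]])
print(remove_isolated(mask))  # the lone bottom-right pixel is removed
```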

With this working model, I can now reasonably consider the other avenues I was exploring in part 15 of this series to be dead ends, though I have learned a lot by investigating them.

I consider this model to be a proof of concept only. The idea needs a lot more adjusting and improving before it will be actually useful to anyone. Still, I think it might improve short to medium-term (i.e. up to a few hours in advance) flood forecasts with a lot more work. While my focus currently is writing up my thesis (see below), I do plan on continuing to work on this and other research projects on the side on a long-term basis. One of my many goals is to wrangle this model into something more than just a proof of concept, and somehow measure its effectiveness more precisely.

The long road to thesis and beyond

With my funding for the main research period of my PhD at an end, my focus has been shifting to the writing of my thesis. The feeling is actually quite surreal - for the longest time writing my thesis has been a mystical objective far off in the distance in a blurry haze, but now the details are very much resolving into something more tangible.

So far I have a draft chapter based on my recent journal article, and part of a chapter on the rainfall radar model I've talked about briefly above. I also have part of an introduction and a background section, but these require significant reworking because I wrote them ages ago (they are just bad).

The plan is to have my thesis complete by December 2023, potentially giving the required 3 months submission notice in ~September 2023 - depending on how things go with writing.

When my PhD comes to a close, that will also mean the end of this series of blog posts. I think this is the longest series of blog posts I've ever posted here, and certainly one of the most personal. This does not mean the end of posts on here about my research though, as I plan to continue blogging about it on here. The form this will take is likely to be similar to the form that the posts for my PhD have taken.

As I don't currently know what form my research will take after my PhD, I cannot say what will happen about blogging about it, only that it will happen ;-)

Thanks for sticking with me throughout this long and at times difficult process - it's been a wonderful and wild ride! Even as this part of my journey is beginning to come to a close, I really appreciate all the help and support everyone has given me throughout the process.

I'll try my best to keep up with this series again, now that I've had some time to adjust to being an experimental officer. Until next time!

The journal article about my social media research is out now!

This is just a quick little post to announce that I have published my first journal article! It has been a long time in the making, with the review process and all associated corrections alone taking from October 2022 until a week or two ago.

It has been published in the Elsevier journal Computers and Geosciences, with the following title:

Real-time social media sentiment analysis for rapid impact assessment of floods

The article is open access, so everyone should be able to read it. I must thank everyone who has helped and contributed to the process of putting this journal article together - their names are on the journal article.

Hopefully this is the first of many!

Art by Mythdael