NLDL was awesome! >> NLDL-2024 writeup
It's the week after I attended the Northern Lights Deep Learning Conference 2024, and now that I've had time to start processing everything I learnt and experienced, it's blog post time~~✨ :D
Edit: Wow, this post took a lot more effort to put together than I expected! It's now the beginning of February and I'm just finishing this post up - sorry about the wait! I think this is my longest blog post to date. Consider this a mega-post :D
In this post - which is likely to be quite long - I'm going to talk a bit about the trip itself, what happened, and the key things that I learnt. Bear in mind that this post is written while I'm still sorting my thoughts out - it's likely going to take many months to fully dive into everything I saw that interested me :P
Given I'm still working my way through everything I've learnt, it is likely that I've made some errors here - please don't take my word for any of the facts! If you're a researcher I'm quoting and you have spotted a mistake, please let me know.
I have lots of images of slides and posters that interested me, but they don't make a very good collage! For this reason, the images shown below are of the trip and the things I saw & experienced more generally; images of slides, posters, etc are available upon request.
Note: All paper links will be updated to DOIs when the DOIs are released. All papers have been peer-reviewed.
Day 1
(Above: A collage of images from the trip travelling to the conference. Description below.)
Images starting top-left, going anticlockwise:
- The moon & Venus set against the first.... and last sunrise I would see for the next week
- Getting off the first plane in Amsterdam Schiphol airport
- Flying into Bergen airport
- The teeny Widerøe turboprop aircraft that took me from Bergen to Tromsø
- A view from the airport window when collecting my bags
- Walking to the hotel
- Departures board in Bergen airport
After 3 very long flights the day before (the views were spectacular, but they left me exhausted), the first day of the conference finally arrived. As I negotiated public transport to get myself to UiT, The Arctic University of Norway, I wasn't sure what to expect, but as it turned out academic conferences are held in (large) lecture theatres (at least this one was) with a variety of different events in sequence:
- Opening/closing addresses: Usually at the beginning & end of a conference. The opening address in particular can include useful practical information, such as where and when food will be made available.
- Keynote: A longer (usually ~45 minute) talk that sets the theme for the day or morning/afternoon. Often done by famous and/or accomplished researchers.
- Oral Session: A session chaired by a person [usually relatively distinguished] in which multiple researchers each give a talk in a 20-minute slot: 15 minutes for the talk itself, and 5 minutes for questions. I spoke in one of these!
- Poster session: Posters are put up by researchers in a designated area (this time just outside the room) and people can walk around and chat with researchers about their research. If talks have a wide reach and shallow depth, posters have a narrow reach and much more depth.
- I have a bunch of photographs of posters that interested me for one reason or another, but it will take me quite a while to work through them all, properly dig into them, and extract the interesting bits.
- Panel discussion: This is where a number of distinguished people sit down on chairs/stools at the front, and a chair asks a series of questions and moderates the resulting discussion. Questions from the audience may also be allowed after some of the key preset questions have been asked.
This particular conference didn't have multiple events happening at once (called 'tracks' at some conferences, I think), which I found very helpful as I didn't need to figure out which events to attend and which to skip. Some talks didn't sound very interesting but then turned out to be some of the highlights of the conference for me, as I'll discuss below. Definitely a fan of this format!
The talks started off looking at the fundamentals of AI. Naturally, this included a bunch of complex mathematics - the understanding of which in real-time is not my strong point - so while I did make some notes on these I need to go back and take a gander at the papers of some of the talks to fully grasp what was going on.
Moving on from AI fundamentals, the next topic was reinforcement learning. While not my current area of research, some interesting new uses of the technology were discussed, such as dynamic pathing/navigation based on the information gained from onboard sensors, presented by Alouette von Hove from the University of Oslo - the example given was determining the locations of emitters of greenhouse gases such as CO₂ & methane.
Interspersed in-between the oral sessions were poster sessions. At NLDL-2024 these were held in the afternoons and also had fruit served alongside them, which I greatly appreciated (I'm a huge fan of fruit). These featured posters from the people who had presented talks earlier in the day, but also some additional posters from researchers who weren't presenting a talk.
If talks reach a wide audience at a shallow depth, the posters reached a narrower audience but at a much greater depth. I found the structure of having the talk before the poster very useful - not only for presenting my own research (more on that later), but also for picking out some of the posters I wanted to visit to learn more about their approaches.
On day 1, the standout poster for me was one on uncertainty quantification in image segmentation models - Confident Naturalness Explanation (CNE): A Framework to Explain and Assess Patterns Forming Naturalness. While their approach to increasing the explainability of image segmentation models (particularly along class borders) was applied to land use and habitat identification, I can totally see it being applicable to many other projects in a generic 'uncertainty-aware image segmentation' form. I would very much like to look into this one more deeply and consider applying lessons learnt to my rainfall radar model.
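(As an aside: I haven't dug into the CNE framework itself yet, so the sketch below is emphatically not their method - it's just a minimal illustration of one generic way to get per-pixel uncertainty out of an existing segmentation model, via Monte Carlo dropout, which is roughly what I have in mind when I say 'uncertainty-aware image segmentation'. The model and tensor shapes are assumptions on my part.)

```python
import torch

def mc_dropout_uncertainty(model, image, n_samples=20):
    """Estimate per-pixel uncertainty for a segmentation model by keeping
    dropout active at inference time and averaging several stochastic passes.
    Assumes `model` contains dropout layers and `image` is a [1, C, H, W] tensor."""
    model.train()  # keeps dropout stochastic (beware: this also affects any batchnorm layers)
    with torch.no_grad():
        probs = torch.stack([
            torch.softmax(model(image), dim=1)  # [1, n_classes, H, W] per sample
            for _ in range(n_samples)
        ])
    mean_probs = probs.mean(dim=0)  # average prediction over samples
    # Per-pixel predictive entropy: high where the sampled predictions disagree
    entropy = -(mean_probs * mean_probs.clamp_min(1e-8).log()).sum(dim=1)
    return mean_probs.argmax(dim=1), entropy  # segmentation map + uncertainty map
```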
Another interesting poster worked to segment LiDAR data in a similar fashion to that of normal 'image segmentation' (that I'm abusing in my research) - Beyond Siamese KPConv: Early Encoding and Less Supervision.
Finally, an honourable mention is one which applied reinforcement learning to task scheduling - Scheduling conditional task graphs with deep reinforcement learning.
Diversity in AI
In the afternoon, the Diversity in AI event was held. The theme was fairness of AI models, and this event was hugely influential for me. Through a combination of cutting-edge research, helpful case studies, and illustrations, the speakers revealed hidden sources of bias and novel ways to try and correct for them. They asked the question "what do we mean by a fair AI model?", and explored the multiple different facets of that question - how fairness in an AI model can mean different things in different contexts and to different people.
They also demonstrated how taking a naïve approach to correcting for e.g. bias in a binary classifier could actually make the problem worse!
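To make this concrete for myself, here's a tiny toy example (entirely my own, not taken from any of the talks) of how a binary classifier can look 'fair' under one definition and unfair under another - here it satisfies demographic parity (equal selection rates across groups) while failing equal opportunity (unequal true positive rates):

```python
import numpy as np

def group_rates(y_true, y_pred, group):
    """Selection rate and true positive rate per group for a binary classifier."""
    rates = {}
    for g in np.unique(group):
        mask = group == g
        rates[g] = {
            "selection_rate": y_pred[mask].mean(),       # P(pred=1 | group)
            "tpr": y_pred[mask & (y_true == 1)].mean(),  # P(pred=1 | y=1, group)
        }
    return rates

# Toy data: both groups have a selection rate of 0.5 (demographic parity holds),
# but group "a" has a TPR of 0.5 while group "b" has a TPR of 1.0 (equal
# opportunity fails) - 'fair' by one definition, unfair by another.
y_true = np.array([1, 1, 0, 0,  1, 0, 0, 0])
y_pred = np.array([1, 0, 1, 0,  1, 1, 0, 0])
group  = np.array(["a"] * 4 + ["b"] * 4)
print(group_rates(y_true, y_pred, group))
```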
I have not yet had time to dig into this properly, but I absolutely want to spend at least an entire afternoon reading around the subject. Previously, I had no idea how big and pervasive the problem of bias in AI was, so I most certainly want to educate myself to ensure the models I create as a consequence of my research are as ethical as possible.
Depending on how this research reading goes, I could write a dedicated blog post on it in the future. If this would be interesting to you, please comment below with the kinds of things you'd be interested in.
Another facet of the diversity event was hiring practices and diversity in academia. In the discussion panel that closed out the day, the current dilemma of low diversity (e.g. gender balance) among students taking computer science as a subject was discussed. It was suggested that how computer science is portrayed can make a difference, and that people with different backgrounds will approach and teach the subject through different lenses. Mental health was also mentioned as a factor that requires work and effort to reduce stigma, encourage discussions, and generally improve the situation.
All in all I found the diversity event to be a very useful and eye-opening event that I'm glad I attended.
(Above: A collage from day 1 of the conference)
Images starting top-left, going anticlockwise in an inwards spiral:
- The conference theatre during a break
- Coats hung up on the back wall of the conference theatre - little cultural details stood out to me and were really cool!
- On the way in to the UiT campus on day 1
- Some plants under some artificial sunlight bulbs I found while having a wander
- Lunch on day 1: rice (+ curry, but I don't like curry)
- NLDL-2024 sign
- View from the top of Fjellheisen
- Cinnamon bun that was very nice and I need to find a recipe
- View from the cable car on the way up Fjellheisen
Social 1: Fjellheisen
The first social after the talks closed out for the day was a trip up the local mountain Fjellheisen (pronounced fyell-hai-sen as far as I can tell). Thankfully a cable car was available to take conference attendees (myself included) up the mountain, as it was significantly cold and snowy - especially 420m above sea level at the top. Although it was very cloudy at the time, with a stratus cloud base around 300m (perhaps even lower), we still got some fabulous views of Tromsø and the surrounding area.
There was an indoor seating area too, in which I warmed up with a cinnamon bun and had some great conversations with some of the other conference attendees. Social events and ad-hoc discussions are, I have discovered, an integral part of the conference experience. You get to meet so many interesting people and discover so many new things that you wouldn't otherwise get the chance to explore.
Day 2
Day 2 started with AI for medical applications, along with what seemed to be an unofficial secondary theme continuing the discussion of bias and fairness, which made the talks just as interesting and fascinating as the previous day's. By this point I had figured out the conference-provided bus, resulting in more cool discussions on the way to and from the conference venue at UiT.
Every talk was interesting in its own way, with discussions of shortcut learning (where a model learns to recognise something other than your intended target - e.g. that some medical device visible in an X-ray indicates some condition, when that device wouldn't ordinarily be present at test time), techniques to utilise contrastive learning in new ways (classifying areas of interest in very large microscope images), and applying the previous discussion of bias and fairness to understanding bias in contrastive learning systems - and what we can do about it through framing the task the model is presented with.
The research project that stood out to me was entitled Local gamma augmentation for ischemic stroke lesion segmentation on MRI by Jon Middleton at the University of Copenhagen. Essentially, they correct for differing ranges of brightness in MRI brain scans before training a model, which increases accuracy and reduces bias.
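I haven't read the paper in detail yet, so the following is only my rough reading of the general idea - applying a random gamma adjustment to a local patch of an image as a training-time augmentation - and not necessarily the authors' exact formulation:

```python
import numpy as np

def local_gamma_augment(image, rng=None, gamma_range=(0.7, 1.5), patch_frac=0.5):
    """Apply a random gamma adjustment to a random local patch of a 2D image.
    Assumes `image` is a float array scaled to [0, 1]."""
    rng = rng or np.random.default_rng()
    h, w = image.shape
    ph, pw = int(h * patch_frac), int(w * patch_frac)
    y, x = rng.integers(0, h - ph + 1), rng.integers(0, w - pw + 1)
    gamma = rng.uniform(*gamma_range)
    out = image.copy()
    out[y:y + ph, x:x + pw] **= gamma  # gamma < 1 brightens the patch, gamma > 1 darkens it
    return out
```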
The poster session again had some amazing projects that are worth mentioning. Of course, as with this entire blog post this is just my own personal recount of the things that I found interesting - I encourage you to go to a conference in person at some point if you can!
The highlight was a poster entitled LiDAR-based Norwegian Tree Species Detection Using Deep Learning. The authors segment LiDAR data by tree species, but have also invented a clever augmentation technique they call 'cowmix augmentation' to sharpen the model's attention to detail along class borders and to increase the diversity of their dataset.
Another cool poster was Automatic segmentation of ice floes in SAR images for floe size distribution. By training an autoencoder to reconstruct SAR (Synthetic Aperture Radar) images, they use the resulting output to analyse the size distribution of ice floes in Antarctica.
I found that NLDL-2024 had quite a number of people working in various aspects of computer vision and image segmentation as you can probably tell from the research projects that have stood out to me so far. Given I went to present my rainfall radar data (more on that later), image handling projects stood out to me more easily than others. There seemed to be less of a focus on Natural Language Processing - which, although discussed at points, wasn't nearly as prominent a theme.
One NLP project that did feature, though, was a talk on anonymising data in medical records before they are e.g. used in research projects. The researcher presented an approach using a generative text model to identify personal information in medical records. By combining it with a regular expression system, more personal information could be identified and removed than before.
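I didn't note down the specifics of their system, so to give a flavour of just the regular-expression half of such a pipeline, here's a minimal sketch with made-up patterns (a real system would need many more of these, plus the model-based detector to catch names, addresses, and other free-text identifiers):

```python
import re

# Illustrative patterns only - not the ones from the talk.
PATTERNS = {
    "ID_NUMBER": re.compile(r"\b\d{3}[ -]?\d{3}[ -]?\d{4}\b"),
    "DATE":      re.compile(r"\b\d{1,2}/\d{1,2}/\d{2,4}\b"),
    "EMAIL":     re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def redact(text):
    """Replace anything matching a known pattern with a placeholder tag."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

print(redact("Seen on 03/01/2024, patient ID 123 456 7890, contact j.doe@example.com"))
# -> Seen on [DATE], patient ID [ID_NUMBER], contact [EMAIL]
```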
While I'm not working with textual data at the minute, part of my PhD does involve natural language processing. Maybe in the future, when I have some more NLP-based research to present, it might be nice to attend an NLP-focused conference too.
(Above: A collage of photos from day 2 of the conference.)
Images from top-left in an anticlockwise inwards spiral:
- Fjellheisen from the hotel at which the conference dinner took place
- A cool church building in a square I walked through to get to the bus
- The hallway from the conference area up to the plants in the day 1 collage
- Books in the store on the left of #3. I got postcards here!
- Everyone walking down towards the front of the conference theatre to have the group photo taken. I hope they release that photo publicly! I want a copy so bad...
- The lobby of the conference dinner hotel. It's easily the fanciest place I have ever seen....!
- The northern lights!!! The clouds parted for half an hour and it was such a magical experience.
- Moar northern lights of awesomeness
- They served fruit during the afternoon poster sessions! I am a big fan. I wonder if the University of Hull could do this in their events?
- Lunch on day 2: fish pie. It was very nice!
Social 2: Conference dinner
The second social event that was arranged was a conference dinner. It was again nice to have a chance to chat with others in my field in a smaller, more focused setting - each table had about 7 people sitting at it. The food served was also very fancy - nothing like what I'm used to eating on a day-to-day basis.
The thing I will always remember though is shortly before the final address, someone came running back into the conference dinner hall to tell us they'd seen the northern lights!
Grabbing my coat and rushing out the door to some bewildered looks, I looked up and.... there they were.
As if they had always been there.
I saw the northern lights!
Seeing them has always been something I wanted to do, so I am so happy to have had the chance to see them. The rest of the time it was cloudy, but the clouds parted for half an hour that evening and it was such a magical moment.
They were simultaneously exactly and nothing like what I expected. They danced around the sky a lot, so you really had to have a good view in all directions and keep scanning the sky. They also moved much faster than I expected: some would flash and be gone in just moments, while others would stick around seemingly doing nothing for over a minute.
A technique I found helpful was to scan the sky with my phone's camera. It could see 'more' of the northern lights than you can with your eyes, so you could find a spot in the sky with a faint green glow on your phone and then stare at it - and more often than not it would brighten up quickly enough to see with your own eyes.
Day 3
It felt like the conference went by at lightning speed. The entire time I was focused on learning and experiencing as much as I could, and almost as soon as the conference had started, we all reached the last day.
The theme for the third day started with self-supervised learning. As I'm increasingly discovering, self-supervised learning is all about framing the learning task you give an AI model in a clever way that partially or completely does away with the need for traditional labels. There were certainly some clever solutions on show at NLDL-2024:
- Beyond output-mask comparison: A self-supervised inspired object scoring system for building change detection: Detecting changes in buildings across time, and avoiding the aforementioned shortcut learning - reminds me of SpaceNet8 in that it could be applicable to natural disaster damage detection.
- Loop closure with a lower power millimetre wave radar sensor using deep learning: Sounds like a fancy title, but basically they use a cool autoencoder-based solution for figuring out when a drone that's exploring its environment doubles back on itself.
An honourable mention goes to the paper on a new speech-editing model called FastStitch. Unfortunately the primary researcher was not able to attend - a shame, 'cause it would have been cool to meet up and chat - but their approach looks useful for correcting speech in films, anime, etc after filming... even though it could also be used for some much more nefarious purposes (though that goes for all of the next-gen generative AI models coming out at the moment).
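Coming back to the general idea of self-supervised framing: a classic toy example (my own illustration, not from any of the NLDL papers) is rotation prediction, where the 'label' is simply how many times you rotated an unlabelled image by 90° - so the data labels itself:

```python
import torch
import torch.nn.functional as F

def rotation_pretext_batch(images):
    """Given a batch of unlabelled images [B, C, H, W], build a self-supervised
    batch where the 'label' is the number of 90-degree rotations applied."""
    rotated, labels = [], []
    for k in range(4):  # 0, 90, 180, 270 degrees
        rotated.append(torch.rot90(images, k, dims=(2, 3)))
        labels.append(torch.full((images.shape[0],), k, dtype=torch.long))
    return torch.cat(rotated), torch.cat(labels)

def pretext_loss(model, images):
    """Ordinary cross-entropy on the free rotation labels - no human annotation needed.
    Assumes `model` outputs 4 logits (one per rotation)."""
    x, y = rotation_pretext_batch(images)
    return F.cross_entropy(model(x), y)
```

The clever part of the papers above is, of course, in choosing a pretext task that forces the model to learn features that are actually useful for the downstream problem.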
This was also the day I presented my research! As I write this, I realise that this post is already rather long, so I will dedicate a separate post to my experiences presenting my research. Suffice to say it was a very useful experience - both the talk and the poster sessions.
Speaking of poster sessions, there was a really interesting poster that day entitled Deep Perceptual Similarity is Adaptable to Ambiguous Contexts, which proposes that image similarity is more complicated than just comparing pixels: it's about shapes and the objects shown - not just the style a given image is drawn in. To this end, they use some kind of contrastive approach that compares how similar a series of augmentations are to the original source image as a training task.
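I haven't read this paper properly yet either, so the following is not their method - it's just a rough sketch of the underlying 'deep perceptual similarity' idea of comparing images in the feature space of a pretrained network rather than in pixel space:

```python
import torch
import torchvision.models as models

# A pretrained backbone as a stand-in feature extractor (expects ImageNet-normalised
# RGB input); the actual paper's setup may well differ.
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
feature_extractor = torch.nn.Sequential(*list(backbone.children())[:-1]).eval()

def perceptual_distance(img_a, img_b):
    """Cosine distance between deep features of two [1, 3, H, W] images,
    rather than a raw per-pixel difference."""
    with torch.no_grad():
        feats_a = feature_extractor(img_a).flatten(1)
        feats_b = feature_extractor(img_b).flatten(1)
    return 1 - torch.nn.functional.cosine_similarity(feats_a, feats_b).item()
```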
Panel Discussion
Before the final poster session of the (main) conference, a panel discussion between 6 academics (1 chairing) who sounded very distinguished (sadly I did not completely catch their names, and I will save everyone the embarrassment of the nicknames I had to assign them to keep track of the discussion in my notes) closed out the talks. There was no set theme that jumped out to me (other than AI, of course), but like the Diversity in AI discussion panel on day 1, the chair had some set questions to ask the academics making up the panel.
The role of universities and academics was discussed at some length. Recently, large tech companies like OpenAI, Google, and others have been driven by profit to race to put next-generation foundation models (a term new to me that describes large general models like GPT, Stable Diffusion, Segment Anything, CLIP, etc) to work in anything and everything they can get their hands on..... often to the detriment of user privacy.
It was mentioned that researchers in academia have a unique freedom to choose what they research in a way that those working in industry do not. It was suggested that academia must stay one step ahead of industry, and understand the strengths/weaknesses of new technologies - such as foundation models - and how they impact society. With this freedom, researchers in academia can ask the how and why, which industry can't spare the resources for.
The weaknesses of academia were also touched on, in that academia is very project-based - and funding for long-term initiatives can be very difficult to come by. It was also mentioned that academia can get stuck on optimising e.g. a benchmark, in the field of AI specifically. To this end, I would guess creativity is really important to invent innovative new ways of solving existing real-world problems, rather than focusing too much on abstract benchmarks.
The topic of the risks of AI in the future also came up. While the currently-scifi concept of an Artificial General Intelligence (AGI) that is smarter than humans is a hot topic at the moment, whether or not it's actually possible is not clear (personally, it seems rather questionable that it's possible at all) - and certainly not in the next few decades. Rather than worrying about AGI, everyone agreed that bias and unfairness in AI models is already a problem that needs to be urgently addressed.
The panel agreed that people believing the media hype generated by the large tech companies is arguably more dangerous than AGI itself... even if it were right around the corner.
The EU AI Act is right around the corner, which requires transparency about the data used to train a given AI model, among many other things. This is a positive step forwards, but the panel was concerned that the act could lead to companies cutting corners on safety just to tick boxes. They were also concerned that an industry would spring up around the act, providing services to help other businesses comply with it, which risks raising the barrier to entry significantly. How the act is actually implemented will have a large effect on its effectiveness.
While the act risks e.g. ChatGPT being forced to pull out of the EU if it does not comply with the transparency rules, the panel agreed that we must take an alternate path to that of closed-source models. Open source alternatives to e.g. ChatGPT do exist and are only about 1.5 years behind the current state of the art. It appears at first that privacy and openness are at odds, but in Europe we need both.
The panel was asked what advice they had for young early-career researchers (like me!) in the audience, and had a range of helpful tips:
- Don't just follow trends because everyone else is. You might see something different in a neglected area, and that's important too!
- Multidisciplinary research is a great way to see different perspectives.
- Speak to real people on the ground as to what their problems are, and use that as inspiration
- Don't keep chasing after the 'next big thing' (although keeping up to date in your field is important, I think).
- Work on the projects you want to work on - academia affords a unique freedom to the researchers working within it
All in all, the panel was a fascinating discussion, covering the role of academia in current global issues at a bigger-picture level than I had really encountered before this point.
AI in Industry Event
The last event of the conference came around much faster than I expected - I suppose spending every waking moment focused on conferencey things will make time fly by! This event was run by 4 different people from 4 different companies involved in AI in one way or another.
It was immediately obvious that these talks were by industry professionals rather than researchers, since they somehow managed to say a lot without revealing all that much about the internals of their companies. It was also interesting that some of the talks were almost a pitch to the researchers present, asking whether they had any ideas or solutions to the companies' problems.
This is not to say that the talks weren't useful. They were a useful insight into how industry works, and how the impact of research can be multiplied by being applied in an industry context.
It was especially interesting to listen to the discussion panel that was held between the 4 presenters / organisers of the industry event. 1 of them served as chair, moderating the discussion and asking the questions to direct the discussion. They discussed issues like silos of knowledge in industry vs academia, the importance of sharing knowledge between the 2 disciplines, and the challenges of AI explainability in practice. The panellists had valuable insights into the realities of implementing research outputs on the ground, the importance of communication, and some advice for PhD students in the audience considering a move into industry after their PhD.
(Above: A collage from day 3 of the conference)
Images starting top-left, going anticlockwise in an inwards spiral:
- A cool ship I saw on the way to the bus that morning
- A neat building I saw on the way to the bus. The building design is just so different to what I'm used to.... it gives me loads of building inspiration for Minetest!
- The cafeteria in which we ate lunch each day! It was so well designed, and the self-clear system was very functional and cool!
- The conference theatre during one of the talks.
- Day 3's lunch: lasagna! They do cool food at UiT in Tromsø, Norway!
- The last meal I ate (I don't count breakfast :P) in the evening before leaving the following day
- The library building I went past on the way back to the hotel in the evening. The integrated design with books + tables and interactive areas is just so cool - we don't have anything like that I know of over here!
- A postbox! I almost didn't find it, but I'm glad I was able to send a postcard or two.
- The final closing address! It was a bittersweet moment that the conference was already coming to a close.
Closing thoughts
Once the industry event was wrapped up, it was time for the closing address. Just as soon as it started, the conference was over! I felt a strange mix of exhaustion, disbelief that the conference was already over, and sadness that everyone would be going their separate ways.
The very first thing I did after eating something and getting back to my hotel was collapse in bed and sleep until some horribly early hour in the morning (~4-5am, anyone?) when I needed to catch the flight home.
Overall it was an amazing conference, and I've learnt so much! It felt so magical, like anything is possible ✨ I've met so many cool people and been introduced to so many interesting ideas, it's gonna take some time to process them all.
I apologise for how long and rambly this post has turned out to be! I wanted to get all my thoughts down in something coherent enough I can refer to it in the future. This conference has changed my outlook on AI and academia, and I'm hugely grateful to my institution for finding the money to make it possible for me to go.
It feels impossible to summarise the entire conference in 4 bullet points, but here goes:
- Fairness: What do we mean by 'fair'? Hidden biases, etc. Explainable AI sounds like an easy solution, but it can mislead you - and attempts to improve perceived 'fairness' can in actuality make the problem worse without you ever knowing!
- Self-supervised learning: Clustering, contrastive learning, also tying in with the fairness theme via sample weighting and other techniques.
- Foundation models: Large language models etc that are very large and expensive to train, sometimes available pretrained and sometimes only as an API. They can zero-shot many different tasks, but what about fairness, bias, ethics?
- Reinforcement learning: ...and its applications
Advice
I'm going to end this mammoth post with some advice to prospective first-time conference goers. I'm still rather inexperienced with these sortsa things, but I do have a few things I've picked up.
If you're unsure about going to a conference, I can thoroughly recommend attending one. If you don't know which conference you'd like to attend, I recommend seeking advice from someone with more experience than you in your field. What I can say is that I really appreciated how NLDL-2024 was not too big and not too small: it had an estimated 250 attendees, and I'm very thankful it did not have multiple tracks - this way I didn't hafta sort through which talks interested me and which ones didn't. Some talks surprised me too: given the choice I would have picked an alternative, but in the end I'm glad I sat through all of them.
Next, speak to people! You're all in this together. Speak to people at lunch. On the bus/train/whatever. Anywhere and everywhere you see someone with the same conference lanyard as you, strike up a conversation! The other conference attendees have likely worked just as hard as you to get there & will likely be totally willing to chat. You'll meet all sorts of new people who are just as passionate about your field as you are, which is an amazing experience.
TAKE NOTES. I used Obsidian for this purpose, but use anything that works for you. Take notes formally from talks, panel discussions, and other organised events, but also informally during chats and discussions with other conference attendees. Don't forget to note down who you spoke to as well! I'm bad at names and faces, but your notes will serve as a permanent record of the things you learnt and experienced at the time, which you can refer back to later. You aren't going to remember everything you see (unless you have a perfect photographic memory, of course), so make notes!
On the subject of recording your experiences, take photos too. I'm now finding it very useful to have photos of important slides and posters that I saw to refer back to. I later developed a habit of photographing the first slide of every talk, which has also proved to be invaluable.
Having business cards to hand out can be extremely useful to follow up conversations. If you have time, get some made before you go and take them with you. I included some pretty graphics from my research on mine, which served as useful talking points to get conversations started.
Finally, have fun! You've worked hard to be here. Enjoy it!
If you have any thoughts / comments about my experiences at NLDL-2024, I'd love to hear them! Do leave a comment below.