GSoC 2022 Project Idea 4.3: Creating benchmark datasets for object recognition with event-based cameras (175 h)

In our group we have a setup for generating event-based camera recordings of objects with ground truth position (pose) and semantic segmentation data from a separate 3D tracking system. In this project we will augment our recordings in different ways to create challenging segmentation and pose estimation benchmarks.

Perlin noise augmentation: We will generate 2D Perlin noise and transform it into visual event streams that are added to our recordings of moving objects.
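
Since the project language is Python, here is a minimal sketch of how such a Perlin-noise event stream might be generated. It assumes the common (t, x, y, polarity) event-tuple format and the 346×260 sensor resolution of a DAVIS 346 camera, and it uses the third-party `noise` package for Perlin noise; parameter names and thresholds are illustrative only.

```python
# Minimal sketch: turn a temporally evolving 2D Perlin noise field into
# synthetic ON/OFF events. Assumes (t, x, y, polarity) event tuples.
# Requires the third-party 'noise' package (pip install noise).
import numpy as np
from noise import pnoise3

def perlin_noise_events(width=346, height=260, n_frames=100,
                        dt=1e-3, scale=0.05, threshold=0.1):
    """Generate synthetic events from a time-varying Perlin noise field."""
    events, prev = [], None
    for k in range(n_frames):
        # Sample a 2D noise slice, using k * scale as the time axis
        frame = np.array([[pnoise3(x * scale, y * scale, k * scale)
                           for x in range(width)] for y in range(height)])
        if prev is not None:
            diff = frame - prev
            ys, xs = np.nonzero(np.abs(diff) > threshold)
            for x, y in zip(xs, ys):
                # ON event (1) for a brightness increase, OFF event (0) otherwise
                events.append((k * dt, x, y, 1 if diff[y, x] > 0 else 0))
        prev = frame
    return events
```

The resulting noise events can then be merged, time-sorted, with the recorded object events to make segmentation harder.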

Movie background augmentation: We will use off-the-shelf and/or recorded videos and transform them into visual events by implementing a simulation of an event-based camera, then add the resulting event stream to our recordings of moving objects (a minimal sketch of such a frame-to-event conversion is shown below).

As a stretch target, we may train a classification network using a spike-based learning rule such as e-prop or EventProp to recognise objects in the newly created benchmark datasets.
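
The core of such an event-camera simulation is to emit an event whenever the log intensity at a pixel changes by more than a contrast threshold. The sketch below shows only this core idea, using OpenCV for video decoding; established emulators such as ESIM or v2e additionally model sensor noise, refractory periods and bandwidth limits.

```python
# Minimal sketch of an event-camera simulation: emit an event whenever
# the log intensity at a pixel drifts past a contrast threshold from
# the reference level set by the pixel's last event.
import cv2
import numpy as np

def video_to_events(video_path, contrast_threshold=0.2):
    """Convert a frame-based video into sorted (t, x, y, polarity) events."""
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 30.0
    events, ref, frame_idx = [], None, 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY).astype(np.float64)
        log_i = np.log1p(gray)  # log intensity, avoiding log(0)
        if ref is None:
            ref = log_i
        else:
            t = frame_idx / fps
            diff = log_i - ref
            for polarity, mask in ((1, diff >= contrast_threshold),
                                   (0, diff <= -contrast_threshold)):
                ys, xs = np.nonzero(mask)
                events.extend((t, x, y, polarity) for x, y in zip(xs, ys))
            # Reset the reference only at pixels that fired an event
            ref = np.where(np.abs(diff) >= contrast_threshold, log_i, ref)
        frame_idx += 1
    cap.release()
    return sorted(events)
```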

Skills required: Python. Experience with event-based cameras (and (Py)GeNN for the stretch target) would be beneficial.

Mentors: James Turner (J.P.Turner@sussex.ac.uk) and Thomas Nowotny @tnowotny (t.nowotny@sussex.ac.uk)

Tech keywords: Python

I am very interested in this project. Could you please tell me how to proceed further?

Hello @imad08.

First of all, sincerest apologies for the long delay in getting back to you.

In this particular project, we are looking for someone to produce code which augments an event-based camera dataset (have a look at DVS cameras such as the DAVIS 346 on Google to get an idea of them).

More specifically, there are a couple of options to get started with. One can either convert ordinary RGB frame-based videos into equivalent event-based videos, or inject ‘Perlin noise’ into the existing event dataset (have a look at this paper for more info: https://www.nature.com/articles/s41598-018-36047-2). Ideally we would like a complete parameterised script to which we could pass event data as an argument and which outputs the augmented event dataset.
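
To make the requested interface concrete, here is a minimal sketch of such a parameterised script. The .npy input format, the (t, x, y, polarity) row layout, the 346×260 resolution and the uniform-noise stand-in augmentation are all assumptions for illustration; a real script would implement the Perlin-noise and video-based augmentations described in the project post above.

```python
# Hypothetical command-line interface for augmenting an event dataset.
import argparse
import numpy as np

def add_noise_events(events, n_noise, width=346, height=260):
    """Stand-in augmentation: inject uniformly random background events.
    The project would inject structured Perlin noise or video-derived
    events here instead."""
    t_max = events[:, 0].max() if len(events) else 1.0
    noise = np.column_stack([
        np.random.uniform(0.0, t_max, n_noise),   # timestamps
        np.random.randint(0, width, n_noise),     # x coordinates
        np.random.randint(0, height, n_noise),    # y coordinates
        np.random.randint(0, 2, n_noise),         # polarities
    ])
    merged = np.vstack([events, noise])
    return merged[np.argsort(merged[:, 0])]       # keep events time-ordered

def main():
    parser = argparse.ArgumentParser(
        description="Augment an event-camera dataset with noise events.")
    parser.add_argument("input", help=".npy file of (t, x, y, p) event rows")
    parser.add_argument("output", help=".npy file for the augmented events")
    parser.add_argument("--n-noise", type=int, default=10_000,
                        help="number of noise events to inject")
    args = parser.parse_args()

    events = np.load(args.input)
    np.save(args.output, add_noise_events(events, args.n_noise))

if __name__ == "__main__":
    main()
```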

If you are interested, let us know and perhaps we can discuss it further over a Zoom meeting.

Best wishes,
James.


Thank you for the detailed idea. I will go through it and ask if I have any queries; then perhaps we can have a short meeting if required.
Regards

Hi @jamesturner246, I am Neelay Shah, a final-year engineering undergraduate student at BITS Pilani, India. I’m broadly interested in machine learning and have some previous research experience in computer vision. I’m also interested in open-source development and have worked on developing libraries/tools for machine learning problems: EzFlow, KD-Lib, and VFormer. You can find out more about me on my webpage.

I went through the description of the project and also gave the paper you linked a quick read. While I don’t have any previous experience with event-based vision, I find the topic pretty intriguing and would love to work on something in this space. I have some familiarity with PyGeNN, which I gained when I tried to implement a research paper that used SNNs for speech classification (link to the GitHub repository).

Overall, I’m pretty excited about this project and also curious to know in more detail about the ideas you have in mind. I was wondering whether you’d like to discuss further over a Zoom meeting. Please let me know.

Thanks.

Hi @NeelayS.

Thanks for getting in touch. It sounds like you have a bit of experience with SNNs, which would indeed be useful! Of course, happy to have a Zoom meeting to discuss the project in a bit more depth. Could you give us an idea of what times and dates you are available please, and we can organise something soon!

Best regards,
James.

Sure. Any time between 8 am and 12 noon GMT on any weekday except Wednesday would work for me (unless something unforeseen comes up). Please let me know if a time within this window is suitable.

@NeelayS Great, then let’s have a quick informal chat about the project at, say, 11:00 GMT on Thursday 17th March, if that works for you, just to go over the basics. Look forward to meeting you!

Best,
James.

That time would work for me @jamesturner246. I’ve sent you a Zoom invitation on your University of Sussex email address.