GSoC 2021 Project 20.1: Brainbox project - Brain Badges: Online certification for collaborative annotation and segmentation of brain imaging data

Hello @katjaq and @rto ,
I have several doubts about the Training and Practice module, the Evaluation module, and the Badge Issuing module. If possible, could you please schedule a meeting for this within the week?

Some doubts regarding the project @katjaq @rto @arnab1896

  1. How do we connect with the mentor or researcher when stuck in the practice/training phase (a chat channel?)
  2. What would the evaluation platform be, and what kind of test would the evaluation include? (hands-on evaluation on BrainBox or MicroDraw?)
  3. How would the practice module be implemented?

Hi @Uttkarsh_Singh, the mentors will reply to your queries. Also, you can try reaching out to them on LinkedIn.

Thanks,
Arnab

Sure sir, I am trying to reach the mentors on LinkedIn too.
Thanks

Hi @arnab1896, I tried to reach the mentors through LinkedIn but didn’t get any response. I need a little help so that I can complete my draft proposal and get it reviewed by the mentors as soon as possible, since the submission period has already started.

Hi @Abhir-24 ,
Please submit the draft proposal on the GSoC portal as well. I am reaching out to the mentors in parallel; hopefully they will respond with feedback.
Thanks.

@katjaq @rto, please help with the above urgently.

Hello Abhir! We’re happy to answer your questions here : )

Hello @Abdulbaasit, unfortunately the meeting was not recorded. But feel free to post your questions here, or share a draft!

Hello sir, thanks for replying! I had a few doubts regarding the evaluation system to be built. On what basis will the system score the people taking the test so that they can earn a Brain Badge? Will a user’s score be given on the basis of an expert’s evaluation, or will the system score the user automatically by comparing their work against reference data built into the evaluation system?

The tasks and evaluation method are proposed by the researchers. In our case, we are suggesting three examples: data annotation, manual segmentation of a structure, and correction of a previous segmentation. In each case, we provide examples of the task. For example, for the annotation of the quality of neuroimaging data, we will provide example annotations; for the segmentation of a structure, same thing. After that, the user practices annotating or segmenting a few datasets which we have already annotated. Finally, during the test, the user has to annotate additional datasets. Their answers are automatically compared with the recorded answers. Based on that test, a badge is generated and added to the user’s profile, or the user is invited to repeat the training.
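
As a rough illustration of that last step (all names and the 0.8 pass threshold below are assumptions, not existing BrainBox code), comparing a user’s text annotations with the recorded answers and deciding between issuing a badge and repeating the training could look like this:

```ts
// Hypothetical sketch: score a user's test annotations against the reference
// annotations recorded by the researchers, then decide badge vs. retraining.
interface Annotation {
  datasetId: string;
  label: string; // e.g. a quality rating such as "good" or "artefact"
}

function scoreAnnotations(reference: Annotation[], user: Annotation[]): number {
  const refLabels = new Map(reference.map((a) => [a.datasetId, a.label] as [string, string]));
  const correct = user.filter((a) => refLabels.get(a.datasetId) === a.label).length;
  return reference.length > 0 ? correct / reference.length : 0;
}

const PASS_THRESHOLD = 0.8; // assumed cutoff, to be chosen by the researchers

function evaluateTest(reference: Annotation[], user: Annotation[]): "issue-badge" | "repeat-training" {
  return scoreAnnotations(reference, user) >= PASS_THRESHOLD
    ? "issue-badge"
    : "repeat-training";
}
```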

We had proposed to @Uttkarsh_Singh to use https://rrweb.io for recording the tasks. For the badges, we’ll use https://openbadges.org.
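
To give an idea of what the Open Badges side involves: a badge is awarded by publishing a small JSON “Assertion” document. The sketch below follows the Open Badges 2.0 structure; the URLs and recipient identity are placeholders, not actual BrainBox endpoints:

```ts
// Rough sketch of an Open Badges 2.0 Assertion (the document that awards one
// badge to one user). All URLs and the recipient identity are placeholders.
const assertion = {
  "@context": "https://w3id.org/openbadges/v2",
  type: "Assertion",
  id: "https://brainbox.example/assertions/42",           // hypothetical URL
  recipient: { type: "email", hashed: false, identity: "user@example.org" },
  badge: "https://brainbox.example/badges/segmentation",  // BadgeClass URL
  issuedOn: new Date().toISOString(),
  verification: { type: "hosted" },
};
```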

Can you please tell us where rrweb is supposed to be used?

Yes. It’s just an idea. rrweb is used to record a series of interactions with a web app. It could be used to record an expert performing a task in BrainBox (drawing a brain region, for example). This recording could then (1) be used to demonstrate the task to users, and (2) be used to evaluate the task by comparing the expert recording with the user recording.
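
Just to sketch what that could look like (assumed usage of rrweb’s record/replay API; how and where the events are stored is up to the implementation):

```ts
// Capture an expert session in the browser, then replay it for trainees.
import { record, Replayer } from "rrweb";

const events: any[] = [];

// Start recording DOM interactions, e.g. an expert drawing a region in BrainBox.
const stopRecording = record({
  emit(event) {
    events.push(event); // in practice these would be sent to the server
  },
});

// Later: stop recording, then replay the captured session as a demonstration.
stopRecording?.();
new Replayer(events).play();
```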

Hello @Abdulbaasit – we saw your draft on the GSoC platform! \\ö// Looks great! Thank you. If you have any questions, feel free to ping us and ask here, and we can also come have a look again if you make changes and would like feedback.

Alright, so basically the test system needs to have both an automatic task-comparison evaluator and a manual evaluator, where the recorded video of the task is stored and later evaluated by an expert through comparison.

Also, when the user is working on a particular task, should a separate BrainBox or MicroDraw work environment be created for it? Or should the user work on the BrainBox or MicroDraw websites and, after saving and recording their work, submit the project and video links/files on the evaluation platform?

No, there’s no human evaluator. The comparison of the recorded result and the user’s result has to be automatic. That’s easy to do in the case of text annotations (you just check that they are the same). In the case of region segmentations (which are drawings), you need to check that the drawings are similar enough. That can be done using a measurement of overlap, such as the Dice coefficient.
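
For the segmentation case, a minimal Dice coefficient check might look like this (a sketch with hypothetical names, assuming the two masks are available as flat binary arrays of equal size):

```ts
// Dice coefficient between two binary segmentation masks of equal size.
function diceCoefficient(a: Uint8Array, b: Uint8Array): number {
  if (a.length !== b.length) throw new Error("masks must have the same size");
  let intersection = 0;
  let sumA = 0;
  let sumB = 0;
  for (let i = 0; i < a.length; i++) {
    if (a[i]) sumA++;
    if (b[i]) sumB++;
    if (a[i] && b[i]) intersection++;
  }
  return sumA + sumB === 0 ? 1 : (2 * intersection) / (sumA + sumB);
}

// e.g. accept the user's segmentation if diceCoefficient(expertMask, userMask) >= 0.8
```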

Thanks for clearing up the confusion!

Can we add some layout images for the different pages of the application?

To me that sounds like a great idea @Uttkarsh_Singh, but I am pinging @arnab1896 and @malin, who have experience with Google Summer of Code applications and know whether illustrations are welcome in them – thanks for your help with that :slight_smile:

@Uttkarsh_Singh, sure, please go ahead and add images if you think they are relevant to the project and will help judge your proposal with more clarity. If they are too large or unnecessary, I will put suggestions in your proposal to remove them. So, for now, feel free to add them, but only where required and relevant.

Cheers!
