GSoC 2023 Project Idea 8.1 Efficient app-based measurement of visual functions in infants and young children (350 h)

Accurate and efficient measurement of visual function is difficult in infants and young children because of limited cooperation, the inability to provide verbal responses, and the lack of efficient behavioural methods. This matters in clinical and research contexts, where the detection and treatment of eye conditions in infancy depend on measuring visual function. Visual deprivation in infants disrupts normal visual development and affects multiple visual functions that are important for visually guided behaviour in everyday life, such as contrast sensitivity, motion perception, contour integration, and face recognition. At present there are no reliable, automated, objective methods for measuring visual functions in infants and young children below the age of 3 years.

This project, continuing from GSoC 2022, will address these limitations. Last year’s project made progress towards an API handling communication with an eye-tracking module (GitHub - wizofe/ao-baby-tracker: Google Summer of Code 2022 - Eye tracking project for neonates). This year, we will work towards bringing the project to a proof of concept. It involves (a) developing an application with a suite of visual stimuli and analytical procedures to probe multiple visual functions; (b) incorporating and further developing a deep-learning-based infant eye-tracker; and (c) developing a GUI and controller that ties the display, eye-tracking, and analysis components together.
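As a rough illustration of how these three components might fit together, here is a minimal Python sketch. The class and method names are hypothetical, chosen for illustration only; they are not taken from the ao-baby-tracker or APL codebases.

```python
# Hypothetical sketch of the three components; names are illustrative only
# and do not come from the ao-baby-tracker or APL repositories.
from abc import ABC, abstractmethod


class EyeTracker(ABC):
    """Interface so a hardware tracker and the deep-learning tracker are swappable."""

    @abstractmethod
    def get_gaze(self):
        """Return the infant's current gaze position in screen coordinates (x, y)."""


class StimulusSuite(ABC):
    """One visual-function probe, e.g. contrast sensitivity or motion perception."""

    @abstractmethod
    def next_trial(self, previous_result):
        """Display the next stimulus, chosen adaptively from the previous result."""


class Controller:
    """GUI/controller that ties display, eye tracking, and analysis together."""

    def __init__(self, tracker, suite):
        self.tracker = tracker
        self.suite = suite
        self.results = []

    def run(self, n_trials):
        result = None
        for _ in range(n_trials):
            result = self.suite.next_trial(result)  # show stimulus, score gaze
            self.results.append(result)
```

Keeping the tracker behind a common interface would let the same stimulus suites run against either the deep-learning tracker or a hardware eye-tracker.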

Skill level: Intermediate/advanced

Required skills: Comfortable with Python. Experience with image/video processing and with deep-learning-based image-processing models. Comfort with Android/iOS app development, especially ARKit or equivalent, is ideal but not necessary.

Time commitment: Full-time (350 h)

Lead mentor: Arvind Chandna

Project website: GitHub - wizofe/ao-baby-tracker: Google Summer of Code 2022 - Eye tracking project for neonates

Backup mentors: Suresh Krishna

Tech keywords: Health AI, Infant vision, Image processing, App development, Python, iOS/Android

Hi, I am Jyothi Swaroop. I am interested in this project. I have gone through the openvisionapi code written by wizofe in GSoC 2022.
Could you share some insight into what the deep-learning model must compute? For example, at which screen coordinates is the infant gazing?
And what are we analyzing after gaze tracking?

@JyothiSwaroopReddy07 - welcome aboard. Thanks for the interest.

Yes, the deep learning model (for which we already have a prototype from another group - GitHub - yoterel/icatcher_plus: iCatcher+: Robust and automated annotation of infant gaze from videos collected in laboratory, field, and online studies) will output where the infant is looking. The other option is to use a hardware-based eye-tracker, for which last year’s project provided some linking code. The infant’s gaze location is the experimental data: it is used to guide the next stimulus to be displayed, and it is analyzed to indicate whether the infant looked at the “correct” stimulus or not.
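To make that loop concrete, here is a minimal sketch of one gaze-contingent preferential-looking trial. Everything in it is an assumption for illustration: `get_gaze()` stands in for either the iCatcher+ model output or the hardware eye-tracker, and `show_stimulus()` stands in for the display code.

```python
# Illustrative only: get_gaze and show_stimulus are hypothetical placeholders
# for the eye-tracker output (iCatcher+ or hardware) and the display code.
import random
import time


def run_trial(get_gaze, show_stimulus, duration_s=3.0):
    """Present the target on one side and score which side the infant fixated more."""
    target_side = random.choice(["left", "right"])
    show_stimulus(target_side)  # e.g. target on one side, blank/foil on the other

    left, right = 0, 0
    t_end = time.monotonic() + duration_s
    while time.monotonic() < t_end:
        x, _y = get_gaze()  # normalized screen coordinates, x in [0, 1]
        if x < 0.5:
            left += 1
        else:
            right += 1

    looked = "left" if left > right else "right"
    return looked == target_side  # the "correct"-look outcome
```

In the full system, this per-trial outcome would feed the adaptive procedure that selects the next stimulus.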

GitHub - m2b3/APL: automated preferential looking is the GitHub page for this year’s project.

Note that this is not intended to be an iOS/Android app. A Windows program will work fine.

Hello Sir, I find this project interesting, and I fulfil the required skillset well. This project would be very useful to me, and I very much want to contribute to open source. If possible, I can send you my resume and work with you!

Hi, I am Harshitha. I am interested in this project. I went over the description and project details, as well as the preceding year’s project. I would like to know whether we can build on last year’s eye-tracking work here. What should I do to get a clear understanding?

Hi sir @suresh.krishna, Soham here.
I would love to start contributing to this project, as discussed on the GSoC discussion page of GSoC 2023 Project Idea 12.1 Using markerless motion capture to drive music generation. I went through last year’s project as well. Could you please guide me on how I should start contributing further?

@Taran_Tuteja - welcome aboard. Please feel free to send me your CV via direct message here, and we can talk further.

@Sai_Harshitha_Peddi - please feel free to send me your CV via direct message here. You can also ask any specific questions you have - I am not sure I understand what you are asking here.

Hi @suresh.krishna. I am Shobhit Gupta, a second-year student at IIT Dhanbad. I am interested in this project and willing to start working on it. I am an ML/DL enthusiast and have experience with CV tasks using PyTorch and TensorFlow. I have also worked with Hugging Face transformers in my college projects. I have gone through all the info above. Is there anything else I should know? Please let me know, as I am super excited about this project.

From the project description, I understand that, along with Python, experience with image processing and with deep-learning-based image-processing models is needed. I have recently done two projects involving exactly this kind of work: Image Segmentation on an Artificial Lunar Landscape, and a research project processing music data into images and then computing matching scores. I am also halfway through another research project involving the same kind of work, not yet mentioned in my resume: Indian Landcover Classification Using Satellite Imagery. I can also develop apps, as I have experience in development too!

Finally, I can contribute to the project by incorporating and further developing a deep-learning-based infant eye-tracker, as well as by developing the app!
And last but not least, I have already done projects that impact modern human life, so this topic interests me too!

@not_shobhit - welcome aboard. Thanks for your interest. I am going to place a list of tasks on the GitHub repo where people can start contributing. I will also add GSoC-related info there. Feel free to send me your CV, and your proposal for comments once you have it.

@Taran_Tuteja - thank you for your interest. Please see my reply to @not_shobhit above. I am going to place a list of tasks on the GitHub repo where people can start contributing. I will also add GSoC-related info there.

Just btw, in case you were not intending to post your CV here, please note the difference between the project thread and personal direct messages to me 🙂 Of course, if you are fine with posting the CV here, no issues from my side.

GitHub - m2b3/APL: automated preferential looking is the GitHub page for this year’s project.

Hello Sir, I am unable to find the tasks for this project that I could start contributing to. If you have already posted the tasks, please share the link!

Hey @suresh.krishna, I am very interested in working on this project and would really love to get started with contributing towards it. I’ve attached my CV for your reference.

I am also currently working on a project on semi-supervised learning for mass cytometry data.

CV:

@Taran_Tuteja - I will do this tomorrow. Stay tuned.

@Sahil_Sahu - thanks for your interest. I will post a list of tasks tomorrow on the GitHub repo. Please stay tuned.

Hi @suresh.krishna. I am Shikha Sharma, a third-year student at IIT Kanpur. I am interested in this project and willing to start working on it. I did a project on a Face Recognition-based Smart Attendance System under the Microsoft Engage Mentorship Program ’22, and learnt various image-processing techniques and algorithms in my Image Processing course. I would really love to get started with contributing. I’ve attached my CV for your reference.
Link: Shikha Sharma_Resume.pdf - Google Drive