GSoC 2021 Project Idea 26.1: Eye-tracker based on a convolutional neural network in Python + TensorFlow/PyTorch

Current eye-trackers generally rely on previous-generation computer-vision algorithms, and the best ones are also expensive and closed-source. A recent publication from Google Research has shown that it is possible to obtain very good performance using a simple convolutional neural network (CNN) running off a simple mobile-phone camera. The details of the CNN have been made available, but the actual implementation has not. The goal of the project is to implement this algorithm in an open-source package and then explore various extensions, including a) incorporating head-position estimation for eye-in-head measurements; and b) extending the algorithm to higher sampling rates and incorporating filtered estimates of eye position that take the time series of previously estimated eye positions into account.
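
To make the idea concrete, here is a minimal sketch of the kind of CNN gaze estimator the project would build, in PyTorch: an eye-region crop in, a 2D on-screen gaze point out. All layer names and sizes here are illustrative assumptions, not the architecture published by Google Research.

```python
# Sketch of a CNN gaze estimator: an RGB eye crop in, an (x, y) on-screen
# gaze point out. Layer sizes are assumptions, not the published model.
import torch
import torch.nn as nn

class GazeCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),  # global average pool -> 64 features
        )
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64, 32), nn.ReLU(),
            nn.Linear(32, 2),  # (x, y) gaze location on the screen
        )

    def forward(self, eye_crop):
        return self.head(self.features(eye_crop))

model = GazeCNN()
out = model(torch.zeros(1, 3, 64, 64))  # one dummy 64x64 RGB eye crop
# out has shape (1, 2): a single predicted gaze point
```

The published model is small enough to run on a phone; a replication would start from a compact network like this and then match the paper's input features and training setup.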

Mentor: Suresh Krishna @suresh.krishna


This project seems really interesting!
I’ve implemented a few projects in TensorFlow before as well.
Can you tell me how I can learn more about this?
Maybe a GitHub repo or something?


@suresh.krishna, could you please help with the details that @shubh is looking for?
Thanks :+1:

Hi!

Thanks for the interest.

There is no GitHub repo yet. The source paper is this one: Accelerating eye movement research via accurate and affordable smartphone eye tracking – Google Research

I am in touch with Vidhya Navalpakkam (the senior author) and she is happy to assist with project translation.

As you can see from the paper, there is a model description, but the code is not open source and will not be open-sourced. So the first part of the project is simply to replicate the algorithm from that paper. The second part would be to implement various extensions, as I mentioned above.

Creating a functional, open-source eye-tracker is likely to be of great interest in many areas.


Very interesting project! It so happens that my research is also about convolutional neural networks, using PyTorch. Thank you for the source paper. As I understand it, the simple mobile-phone camera is supposed to be Android? Is there something using NNAPI?


Hi @suresh.krishna, I am Neelay Shah (website), a pre-final year undergraduate student at BITS Pilani in India. I enjoyed reading the paper you’ve shared and am excited about this project.
I was trying to download the GazeCapture dataset used in the paper, to get started on a minimal re-implementation of the paper in PyTorch. However, something seems to be off with the registration process on the website, and hence I’m not able to download the dataset. Could you please help me gain access to it?
Thank you.


Hi, thanks for the interest!

In the paper, they used an Android camera whose output was sent to a central server for individualized tuning of the network, etc. But on-device training is something Google is interested in, and that is why there is an interest in keeping the model small as well.

However, for the purposes of the project, one does not have to be chained to Android or any other particular system… I am not familiar with NNAPI, but any project that essentially collects a sequence of video-frames and tries to predict gaze and/or eye-position from them would fit.
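
A minimal sketch of such a system-agnostic pipeline, using OpenCV for frame capture; `predict_gaze` here is a hypothetical stand-in for whatever model the project ends up using, and just returns the frame center:

```python
# Sketch of a camera-agnostic capture loop: grab video frames with OpenCV
# and hand each one to a gaze predictor. Works with any webcam, not just a
# phone camera.
import cv2

def predict_gaze(frame):
    # Placeholder: a real implementation would crop the eye region,
    # normalize it, and run the CNN. Here we just return the frame center.
    h, w = frame.shape[:2]
    return (w / 2, h / 2)

def track(camera_index=0, max_frames=100):
    cap = cv2.VideoCapture(camera_index)  # any attached camera
    gaze_points = []
    try:
        for _ in range(max_frames):
            ok, frame = cap.read()
            if not ok:
                break
            gaze_points.append(predict_gaze(frame))
    finally:
        cap.release()
    return gaze_points  # time series of estimated gaze positions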


Hi Neelay… thanks for the interest!

Ah yes, seems to be a problem there. I will check with the authors of the dataset. In any case, alternative datasets can be generated - it is not an issue for the project itself if this dataset becomes unavailable.

P.S. The website admin has been roped in and is looking into the issue.


Hi @suresh.krishna, what do you think would be some good initial steps towards a project proposal?

@Neelay The website registration now works for the GazeCapture data. Also associated with it is a neural network ([1606.05814] Eye Tracking for Everyone) with apparently poorer performance compared to the work from Google.

A good initial step would be to understand the Google paper and come up with an outline of how to implement their network - it doesn’t have to be on Android for now. From there, one could outline how to create pipelines to evaluate the model with other datasets or newly collected datasets, how to extend it (and in what directions), etc. Then one could actually start the implementation phase and see what problems one runs into – and in the normal course, there will be many, like temporarily not being able to download the data :slight_smile:
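
For the evaluation pipeline, one natural metric is the mean Euclidean gaze error on the screen plane, the kind of accuracy figure reported in centimeters in the Google paper. A small sketch (the function name is my own, not from the paper):

```python
# Sketch of one evaluation metric: mean Euclidean gaze error in cm on the
# screen plane, given predicted and ground-truth gaze points.
import numpy as np

def mean_gaze_error_cm(predicted, actual):
    """predicted, actual: (N, 2) arrays of on-screen gaze points in cm."""
    predicted = np.asarray(predicted, dtype=float)
    actual = np.asarray(actual, dtype=float)
    return float(np.linalg.norm(predicted - actual, axis=1).mean())

# Example: two fixations, each off by a 3-4-5 right triangle -> 5.0 cm
print(mean_gaze_error_cm([[3.0, 4.0], [0.0, 0.0]],
                         [[0.0, 0.0], [3.0, 4.0]]))  # 5.0
```

The same harness could then be run unchanged on GazeCapture, other public datasets, or newly collected data.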



Hello everyone! I’m Saarah (profile) from India, currently doing a Master of Technology in Computer Science at MSRIT, India. I’m excited and look forward to contributing and being valuable to INCF’s cause of open and FAIR neuroscience. This is one of the two ideas at INCF that I feel passionate about, and I would be overjoyed to make beneficial contributions to it!

I have been studying and conducting research in the fields of machine learning, deep learning, and natural language processing since 2018. I’ve worked on application-based problems, such as using machine learning and NLP in legal research to ease the arduous task of researching and reading lengthy judgments using a conditional-random-fields model, and have built classification models and neural networks to classify judgments based on their final decision. My current research is focused on finding better solutions for computational problems using machine intelligence.

I have read the conversation on this forum and have been working on the steps mentioned in this thread. I look forward to making beneficial contributions with the knowledge I’ve gained and continue to seek.

Have a great day!


Hi @arnab1896 , @suresh.krishna ,
I’m Siddharth from India, pursuing a B.Tech in Computer Science Engineering from Amrita University. This project really excites me and I would like to contribute to it. I have done a few projects in TensorFlow/Keras and PyTorch (Github), and I also have experience using OpenCV.
I have read through the above discussion and am clear on what is expected. But I would like to know if there is any Slack channel for regular discussion, so that I can also be a part of it.

Have a great day!!


Hi Siddharth,

No, there is no Slack channel at the moment. It is a brand-new project… one that will take shape once the GSoC details are finalized.

I would love to participate in this project. Is there any evaluation task before the official application period starts, and is there a template for applying to this project?

Thank you


Hello, Mr. @suresh.krishna and Mr. @arnab1896,
My name is Kareem Negm.
I am a student at the Faculty of Artificial Intelligence in Egypt,
and a 2x Kaggle Expert.
You can visit my Kaggle account here: My Kaggle Acc

I work as a data scientist and ML engineer, freelancing on Upwork.

I’m so excited for this wonderful Summer of Code with INCF.

I have two years of experience in machine learning, deep learning, computer vision, TensorFlow, Python, data science, data analysis, etc.

You can visit my GitHub account to learn more about my experience in this field: GitHub Acc
You can also visit my LinkedIn account, where we can be friends. :wink:
LinkedIn Acc

No, not really… please feel free to ask questions here if you have any. The source paper is the best place to start.

Let me talk about this wonderful project,
if you’ll excuse me, of course.
I’ve worked on a lot of projects that use computer vision, TensorFlow, Python, and OpenCV.
Besides, I’ve been getting ready for Google Summer of Code for a year.
When I was looking at the INCF ideas list, I set my eyes on project number 26, which is eye-trackers.
Because I have worked with everything the project requires, and have also worked on other projects such as a face tracker, face detection, and emotion detection,
I knew that this project was perfect for me.

I fell in love with this project.
Allow me to learn more details of the project; I can start working any time you ask me.
It’s a pleasure to know you, mentors.
And thank you.

Hi @suresh.krishna
Are we restricted to working on the Google paper only, or can we make a proposal for either paper -
the Google paper or Eye Tracking for Everyone?

Also, should I make a proposal to run it on Android, or can I try it on the desktop for the time being?
That’s because of my lack of experience with Android.
But I know the general approach of converting code from desktop to Android with TensorFlow Lite.

Hi @suresh.krishna, should the module/package ship a pre-trained CNN with its weights saved, or should the CNN architecture be flexible, so that it can be changed after creating an instance, according to the available datasets, and then trained?
Thank You.

Both are possible… whatever works.
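
A sketch of how both options can coexist: a constructor that makes the architecture configurable at build time, plus the usual PyTorch save/load of weights. Names and defaults here are illustrative, not a fixed design.

```python
# Sketch: a gaze model whose depth/width is configurable at construction
# time, combined with standard PyTorch weight saving and loading.
import torch
import torch.nn as nn

def build_gaze_model(channels=(16, 32), hidden=32):
    layers, in_ch = [], 3
    for out_ch in channels:          # flexible: conv stack set by caller
        layers += [nn.Conv2d(in_ch, out_ch, 3, stride=2, padding=1), nn.ReLU()]
        in_ch = out_ch
    layers += [nn.AdaptiveAvgPool2d(1), nn.Flatten(),
               nn.Linear(in_ch, hidden), nn.ReLU(), nn.Linear(hidden, 2)]
    return nn.Sequential(*layers)

model = build_gaze_model(channels=(8, 16, 32))   # size it to the dataset
torch.save(model.state_dict(), "gaze_cnn.pt")    # ship pre-trained weights...
model.load_state_dict(torch.load("gaze_cnn.pt")) # ...or reload and fine-tune
```

So the package could ship one set of pre-trained weights for the default configuration while still letting users rebuild and retrain a differently sized network.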