GSoC 2021 Project Idea 26.1: Eye-tracker based on a convolutional neural network in Python + TensorFlow/PyTorch

Desktop is fine. The Google paper is more recent and state of the art; incorporating ideas from the MIT paper is of course fine. Either approach works, as long as your proposal is coherent.


So, in the proposal, should I state both approaches along with their pros and cons and later discuss them with you? Would that be fine?

Yes, I think so. Perhaps @arnab1896 can advise better. I would think you could discuss the pros and cons and then propose what you want to do, as a sample course of action.


Sure, will do that.
Thanks a lot


Hello, this is Bikram from the Indian Institute of Technology (IIT) Varanasi. I have gone through the code and have previous experience working with TensorFlow and Keras. I want to contribute to the project and am willing to write a proposal for it. Could you help me with how I should proceed?

Hi @suresh.krishna, in the Google paper it is stated that "Model accuracy was improved by adding fine-tuning and per-participant personalization. Calibration data (see next paragraph) was recorded over a period of ~30 s, resulting in ~1000 input/target pairs." So I wanted to know if the collected calibration data is available somewhere, so that the CNN model for this project can also be combined with the regression model using the calibration data, as mentioned in the paper.
Thank you.

Hi, I am Ritacheta Das. Is there any pre-task for this project? @suresh.krishna

Hi @suresh.krishna, I am Veer Shah from India. I read both of the papers you cited in your previous responses and enjoyed them. I have built some projects with PyTorch before, and I am currently working on a point-based tracking system based on a CNN, which seems similar to this project. Is there any guide for contributing to this project before the application period?
Hoping to hear back from you soon.

Regards
Veer Shah

P.S.: The website hosting the GazeCapture dataset is stuck on the registration screen.

@Ritacheta_Das Ritacheta… no, there is no pre-task…

@Veershah26 Hi, the project will become live only as part of GSoC once the team is clear. As of now, it exists in concept form only.

The website is not stuck at registration; you have to make sure you supply valid entries for all fields. The field you have not entered correctly will be marked in red. Or at least, that is what it looked like when I just tested it.

@siddharthc30 No, the calibration data are not available, for privacy reasons. However, calibration data can be obtained as part of the project.
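To make the personalization idea concrete, here is a minimal sketch of how calibration pairs collected during the project could be used to fit a lightweight regressor on top of the CNN's features, in the spirit of the per-participant personalization the paper describes. The `base_model`, the "features" layer name, and the choice of ridge regression are my own assumptions, not the paper's code:

```python
# Illustrative sketch only: per-participant personalization on top of a trained gaze CNN.
# Assumes `base_model` is a Keras gaze model whose penultimate layer is named "features"
# (both hypothetical); the calibration pairs would be collected as part of the project.
import tensorflow as tf
from sklearn.linear_model import Ridge

def personalize(base_model, calib_images, calib_targets):
    """Fit a lightweight regressor on CNN features using ~1000 calibration pairs."""
    feature_extractor = tf.keras.Model(
        inputs=base_model.input,
        outputs=base_model.get_layer("features").output,  # hypothetical layer name
    )
    feats = feature_extractor.predict(calib_images)       # (N, D) feature vectors
    reg = Ridge(alpha=1.0).fit(feats, calib_targets)      # (N, 2) on-screen targets
    return feature_extractor, reg

def predict_gaze(feature_extractor, reg, images):
    # Personalized (x, y) gaze estimates for new frames from the same participant.
    return reg.predict(feature_extractor.predict(images))
```

The paper fits its personalization model per participant on the ~30 s of calibration data; any simple regressor could slot in the same way here.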

@bickrombishsass That is great. Go ahead and make a proposal for implementing the code in the paper as well as the extensions, as described in the original description above.

Hi @suresh.krishna, could you explain a bit what is meant by "extending the algorithm to a higher sampling-rate and incorporating filtered estimates of eye-position that take the time-series of previously estimated eye-positions into account"?

Higher sampling-rate means more eye-position estimates per second: high-end trackers can sample at 2 kHz, while the paper deals with a cell-phone camera, which has a much lower sampling rate. Also, the paper does not use a smoothing filter to estimate eye-position; each sample is assigned an eye-position independently of the preceding samples. This is an obvious area for improvement.
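To illustrate what a filtered estimate could look like, here is a minimal sketch (my own illustration, not from either paper) that smooths the per-frame CNN outputs with an exponential filter. The `GazeSmoother` class and its `alpha` parameter are assumptions, and a Kalman or one-euro filter would be a natural upgrade:

```python
import numpy as np

class GazeSmoother:
    """Minimal exponential smoother over successive (x, y) gaze estimates."""

    def __init__(self, alpha=0.3):
        self.alpha = alpha   # 0 < alpha <= 1; smaller values smooth more but add lag
        self.state = None

    def update(self, raw_xy):
        raw_xy = np.asarray(raw_xy, dtype=float)
        if self.state is None:
            self.state = raw_xy
        else:
            # Blend the new raw estimate with the running filtered estimate.
            self.state = self.alpha * raw_xy + (1 - self.alpha) * self.state
        return self.state

# Usage: feed each per-frame CNN estimate through the smoother.
# smoother = GazeSmoother(alpha=0.3)
# smoothed = [smoother.update(xy) for xy in per_frame_estimates]
```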

Hi @suresh.krishna and @arnab1896!
I'm a first-year master's student at the University of Michigan, Ann Arbor, specializing in computer vision. This project is interesting, and I already worked on eye-state detection during my undergraduate studies. I believe I will be able to contribute to this project.

I wanted to know if the proposal submission happens only after 29th March or if I can send an email within the next few days.

The architecture in the paper seems simple enough to implement, and I'm excited about the opportunities to extend the work. In my undergraduate project, to improve accuracy, we implemented a very basic version of extension b), where we used past estimates to filter the new positions. I look forward to working with you and contributing to this project.

Regards,
Dinesh Sathia Raj.


Welcome aboard, Dinesh. I think @Arnab or others will know more about the proposal submission timeline.

@DSSR Not sure what you mean by email, but you can put in draft proposals starting March 29th and I am happy to provide feedback before you submit the final version…

Hi @suresh.krishna, what could the end result of the head-position estimation be? Could it be a graphical representation overlaid on the output video of the user, or should it return coordinates and angle values?

@siddharthc30 Both are fine… the estimation is the critical part; how it is used depends on the application.
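To make "both are fine" concrete, here is a quick illustrative snippet (the `annotate_frame` helper and its inputs are assumptions, not project code) showing the overlay option with OpenCV; the coordinates-and-angles option is simply the same estimates returned or logged without the drawing step:

```python
import cv2

def annotate_frame(frame, gaze_xy, head_angles=None):
    """Overlay a gaze estimate (pixel coordinates) and optional head-pose angles
    (yaw, pitch, roll in degrees) on a video frame. Purely illustrative."""
    x, y = int(gaze_xy[0]), int(gaze_xy[1])
    cv2.circle(frame, (x, y), 8, (0, 255, 0), -1)  # filled dot at the estimated point
    if head_angles is not None:
        yaw, pitch, roll = head_angles
        cv2.putText(frame, f"yaw={yaw:.1f} pitch={pitch:.1f} roll={roll:.1f}",
                    (10, 30), cv2.FONT_HERSHEY_SIMPLEX, 0.7, (0, 255, 0), 2)
    return frame
```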

@suresh.krishna Are we limited to writing our proposal within the given format?

Which format would that be? It is probably a good idea to adhere to it, unless there is a good reason not to…

Hi, this is Nidhi Koppikar. I am a pre-final-year student studying Mechatronics, and this is actually the kind of project I have been looking for. I just hope I'm not too late, but regardless, I'd love to contribute to this project and will write a proposal for it.
Thank you!

Not a specific format, more like whether I can add points to the given one. Anyway, got it.
