Hello, Mr. @suresh.krishna and Mr. @arnab1896,
My name is Kareem Negm.
I am a student at the Faculty of Artificial Intelligence in Egypt and a 2x Kaggle Expert.
You can visit my Kaggle account here: My Kaggle Acc
I also work as a freelance data scientist and ML engineer on Upwork.
I'm so excited for this wonderful Summer of Code with INCF.
I have two years of experience in machine learning, deep learning, computer vision, TensorFlow, Python, data science, data analysis, etc.
You can visit my GitHub account to learn more about my experience in this field: GitHub Acc
You can also visit my LinkedIn account, where we can connect: LinkedIn Acc
Let me talk about this wonderful project, if you'll excuse me.
I've worked on many projects that use computer vision, TensorFlow, Python, and OpenCV.
Besides, I've been getting ready for Google Summer of Code for a year.
When I was looking at the INCF ideas list, my eyes landed on project number 26, eye-trackers.
Because I have already worked with everything the project requires, and have also built related projects such as face tracking, face detection, and emotion detection, I knew that this project is perfect for me.
I fell in love with this project.
I would like to learn more details about the project, and I can start working any time you ask.
It's a pleasure to meet you, mentors.
And thank you.
Also, should my proposal target Android, or can I work on desktop for the time being?
I ask because of my lack of experience with Android.
But I know the general approach for converting a desktop model to Android with TensorFlow Lite, roughly as in the sketch below.
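For illustration, a minimal conversion sketch using the TensorFlow 2 TFLite converter; the model path and file names are placeholders, not real artifacts:

```python
import tensorflow as tf

# Assumes a trained Keras gaze model saved in SavedModel format;
# "gaze_model/" is an illustrative path.
converter = tf.lite.TFLiteConverter.from_saved_model("gaze_model/")
converter.optimizations = [tf.lite.Optimize.DEFAULT]  # optional quantization
tflite_model = converter.convert()

# Write the flatbuffer that an Android app would bundle and run
# through the TFLite interpreter.
with open("gaze_model.tflite", "wb") as f:
    f.write(tflite_model)
```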
Hi @suresh.krishna, should the module/package ship a pre-trained CNN with its weights saved, or should the CNN architecture be flexible, so that it can be changed after creating an instance according to the available dataset and then trained? I mean something like the sketch below.
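To make the flexible option concrete, a minimal Keras sketch; the function name, layer sizes, and defaults are all illustrative assumptions, not a proposed final design:

```python
import tensorflow as tf
from tensorflow.keras import layers

def build_gaze_cnn(input_shape=(128, 128, 3),
                   conv_filters=(32, 64, 128),
                   dense_units=128):
    """Configurable CNN that regresses a 2-D gaze point; layer sizes
    are arguments, so the architecture can be adapted per dataset."""
    inputs = tf.keras.Input(shape=input_shape)
    x = inputs
    for n_filters in conv_filters:
        x = layers.Conv2D(n_filters, 3, padding="same", activation="relu")(x)
        x = layers.MaxPooling2D()(x)
    x = layers.Flatten()(x)
    x = layers.Dense(dense_units, activation="relu")(x)
    outputs = layers.Dense(2)(x)  # (x, y) screen coordinates
    model = tf.keras.Model(inputs, outputs)
    model.compile(optimizer="adam", loss="mse")
    return model

# Option 1: ship pre-trained weights with the package.
# model = build_gaze_cnn(); model.load_weights("pretrained_weights.h5")
# Option 2: let users pick a different configuration and train it themselves.
# model = build_gaze_cnn(conv_filters=(16, 32), dense_units=64)
```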
Thank You.
Desktop is fine. The Google paper is more recent, and state of the art; incorporating ideas from the MIT paper is of course fine. Everything works, as long as your proposal is coherent.
Yes, I think so. Perhaps @arnab1896 can advise better. I would think you could discuss pros and cons and then propose what you want to do, as a sample course of action.
Hello, this is Bikram from the Indian Institute of Technology (IIT) Varanasi. I have gone through the code and have previous experience working with TensorFlow and Keras. I want to contribute to the project and am willing to write a proposal. Could you help me with how I should proceed?
Hi @suresh.krishna, the Google paper states: “Model accuracy was improved by adding fine-tuning and per-participant personalization. Calibration data (see next paragraph) was recorded over a period of ~30 s, resulting in ~1000 input/target pairs.” So I wanted to know if the collected calibration data is available somewhere, so that the CNN model for this project can also be combined with the regression model using the calibration data, as described in the paper (roughly as in the sketch below).
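As a rough illustration of that combination, assuming a trained base CNN and per-participant calibration frames with known on-screen targets; all file names, shapes, and the choice of regressor here are hypothetical:

```python
import numpy as np
import tensorflow as tf
from sklearn.svm import SVR
from sklearn.multioutput import MultiOutputRegressor

# Illustrative assumptions: a saved base CNN, and calibration data
# collected from one participant (~30 s of frames plus targets).
base_model = tf.keras.models.load_model("gaze_cnn.h5")
feature_extractor = tf.keras.Model(           # expose penultimate layer
    inputs=base_model.input,
    outputs=base_model.layers[-2].output)

calib_frames = np.load("calib_frames.npy")    # (N, H, W, 3) camera frames
calib_targets = np.load("calib_targets.npy")  # (N, 2) known on-screen points

# Fit a per-participant regression head on the CNN's embeddings.
embeddings = feature_extractor.predict(calib_frames)
personal_head = MultiOutputRegressor(SVR(kernel="rbf"))
personal_head.fit(embeddings, calib_targets)

# At run time: frame -> CNN embedding -> personalized gaze estimate.
```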
Thank you.
Hi @suresh.krishna, I am Veer Shah from India. I read through both papers that you cited in the previous responses and enjoyed reading them. I have built some projects with PyTorch before, and I am currently working on a point-based tracking system built on a CNN, which seems similar to this project. Is there any guide for contributing to this project before the application period?
Hoping to hear back from you soon.
Regards,
Veer Shah
P.S.: The website hosting the GazeCapture dataset is stuck on the registration screen.
@Veershah26 Hi, the project will become live only as part of GSoC once the team is clear. As of now, it exists in concept form only.
The website is not stuck at registration - you have to make sure you supply valid entries for all fields. The field you have not entered correctly will be marked in red. Or at least, that is what it looks like when I just tested it.
@siddharthc30 No, the calibration data are not available for privacy reasons. However, calibration data can be obtained as part of the project.
@bickrombishsass That is great. Go ahead and make a proposal for implementation of the code in the paper as well as extensions, as described in the original description above.
Hi @suresh.krishna, could you explain a bit what is meant by “extending the algorithm to a higher sampling-rate and incorporating filtered estimates of eye-position that take the time-series of previously estimated eye-positions into account”?
Higher sampling-rate means more eye-position estimates per second - high-end trackers can sample at 2 kHz, while the paper deals with a cell-phone camera with a much lower sampling rate. Also, the paper does not use a smoothing filter to estimate eye-position; each sample is assigned an eye-position independent of preceding samples - this is an obvious area for improvement.
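To make the filtering idea concrete, here is a minimal sketch of one possible smoothing scheme, an exponentially weighted moving average over the per-frame gaze estimates; the function name and parameter value are illustrative, not taken from either paper:

```python
import numpy as np

def smooth_gaze(raw_xy, alpha=0.3):
    """Exponentially weighted moving average over per-frame gaze
    estimates. raw_xy: (N, 2) array of screen coordinates in time
    order. Larger alpha trusts the newest sample more (less lag,
    less smoothing); 0.3 is just an illustrative starting value."""
    raw_xy = np.asarray(raw_xy, dtype=float)
    smoothed = np.empty_like(raw_xy)
    smoothed[0] = raw_xy[0]
    for t in range(1, len(raw_xy)):
        smoothed[t] = alpha * raw_xy[t] + (1 - alpha) * smoothed[t - 1]
    return smoothed
```

A Kalman filter with a constant-velocity motion model would be a natural next step beyond this, since it also yields uncertainty estimates and copes better with dropped frames.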
Hi @suresh.krishna and @arnab1896!
I'm a first-year master's student at the University of Michigan, Ann Arbor, specializing in computer vision. This project is interesting, and I have already worked on eye-state detection during my undergraduate studies. I believe I will be able to contribute to this project.
I wanted to know if proposal submission happens only after 29th March, or if I can send an email within the next few days.
The architecture in the paper seems simple enough to implement, and I'm excited about the opportunities to extend the work. In my undergraduate project, to improve accuracy we implemented a very basic version of extension b), where we used past estimates to filter the new positions. I look forward to working with you and contributing to this project.
@DSSR Not sure what you mean by email, but you can put in draft proposals starting March 29th and I am happy to provide feedback before you submit the final version…