Hi @alexdewar,
First, I apologize for the long delay in replying; the mail from Neurostars got filtered in my mailbox, and since I was expecting an email reply, I missed continuing this thread. I have now read through all the links, and I feel I have a good basic understanding of how cue integration occurs and how visual scene familiarity was used for route navigation, first in ants and then more generally for an autonomous ground robot. This concept is new to me, I am highly interested in it, and I look forward to contributing to the project in any way I can.
As for my background in sensor fusion: I have worked on it quite a lot, since it is an essential part of nearly every autonomous vehicle. Consequently, I am confident I can implement the sensor fusion algorithms relevant to this project (Bayesian methods, CNNs, and Kalman filters seem the most applicable) in Gazebo with different sensors. Although the links mostly covered visual sensors, I am currently building the linked repo and running its examples to get a sense of what kinds of sensory feedback I might have to incorporate.
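To make this concrete, here is a minimal sketch of the kind of fusion I have in mind: a one-dimensional Kalman filter that blends an IMU-style velocity prediction with a GPS-style position measurement. The sensor models and noise values here are illustrative assumptions on my part, not taken from the project's code:

```python
import numpy as np

# Minimal 1-D Kalman filter fusing an IMU-style velocity estimate (prediction)
# with a GPS-style position measurement (correction). All noise parameters
# and sensor models are illustrative assumptions.

def kalman_step(x, P, u, z, dt, q=0.05, r=2.0):
    """One predict/update cycle.

    x, P : current position estimate and its variance
    u    : velocity from the IMU (prediction input)
    z    : position measurement from the GPS
    q, r : process and measurement noise variances (assumed values)
    """
    # Predict: propagate the state with the IMU velocity.
    x_pred = x + u * dt
    P_pred = P + q

    # Update: blend in the GPS measurement, weighted by relative uncertainty.
    K = P_pred / (P_pred + r)           # Kalman gain
    x_new = x_pred + K * (z - x_pred)
    P_new = (1.0 - K) * P_pred
    return x_new, P_new

# Example: fuse simulated readings along a constant-velocity path.
rng = np.random.default_rng(0)
x, P, dt = 0.0, 1.0, 0.1
for step in range(1, 51):
    true_pos = step * dt * 1.0                  # ground truth at 1 m/s
    imu_vel = 1.0 + rng.normal(0.0, 0.1)        # noisy velocity reading
    gps_pos = true_pos + rng.normal(0.0, 1.5)   # noisy position reading
    x, P = kalman_step(x, P, imu_vel, gps_pos, dt)
print(f"final estimate: {x:.2f} m (truth: {true_pos:.2f} m)")
```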
Below are the queries that seem most urgent at this point:
- It would be great if you could let me know whether, initially in Gazebo, I will have to work with sensors like IMUs and GPS, or whether I should focus mostly on visual sensors.
- Path integration was common to all the papers, and I was wondering about the possibility of improving it by fusing different sensors (I sketch the kind of cue weighting I mean in the code after this list).
- You have mentioned simple and biological sensors, but I could not find a classification along these lines in the literature, so it would be great if you could elaborate a bit on this classification, with examples.
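On the path integration question above, what I have in mind is the standard maximum-likelihood form of cue integration, where each cue's estimate is weighted by the inverse of its variance. A minimal sketch, with hypothetical cue names and noise levels (and assuming the headings are close enough that linear averaging is valid; a proper treatment would average on the circle):

```python
import numpy as np

# Inverse-variance weighting of independent Gaussian cue estimates, the
# standard maximum-likelihood form of cue integration. Cue names and
# variances below are hypothetical; real reliabilities would be estimated.

def fuse_cues(estimates, variances):
    """Fuse independent Gaussian cue estimates by inverse-variance weighting."""
    w = 1.0 / np.asarray(variances)
    fused = np.sum(w * np.asarray(estimates)) / np.sum(w)
    fused_var = 1.0 / np.sum(w)
    return fused, fused_var

# Example: a drifting path-integration heading vs. a noisier visual compass.
pi_heading, pi_var = 0.40, 0.02          # path integration: precise short-term
vis_heading, vis_var = 0.55, 0.10        # visual cue: noisier but drift-free
heading, var = fuse_cues([pi_heading, vis_heading], [pi_var, vis_var])
print(f"fused heading: {heading:.3f} rad (variance {var:.3f})")
```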
Since the deadline is quite near, I have started compiling a brief outline for the proposal:
- Broad Goal: Implement optimized sensor fusion algorithms combining inputs from simple + biological sensors
- Stretch Goals (considering I might have to work on cue integration for visually guided navigation):
  - Implement basic sensor fusion algorithms with simple sensors in Gazebo
  - Analyze the sensory data from biological sensors based on the needs of the problem
  - Identify suitable algorithms for selecting which sensors to fuse for optimal results
  - Tune the algorithms for fusing both types of sensors and integrate them with the current environment
During the community bonding period, I plan to fill the gaps in my background knowledge, fine-tune existing skills and pick up new ones relevant to the project, familiarize myself with the code, and start before the actual coding period begins, so as to improve the chances of achieving more. Since I have no other commitments for the entire summer, I will try to dedicate extra effort to the project.
Lastly, I plan to send in the first draft of the proposal by tomorrow at the latest, so that there is at least some time to iterate and fine-tune. I shall try to stick to the template guidelines I found on the INCF org page here.
Nitik