GSoC 2021 Project Idea 6.4: Open-source biorobotics framework

6.4 Open-source biorobotics framework

As researchers in biorobotics, we are fascinated by the remarkable navigational prowess of desert ants and are striving to produce resilient robots that can navigate just as dependably in challenging environments. As part of this, we are developing an open-source biorobotics framework in C++ (BoB robotics: https://github.com/BrainsOnBoard/bob_robotics, a collection of code for interfacing with robot platforms, plus simulation and visualisation tools) to enable collaboration on this vision.

Your task will be to work on the problem of sensor fusion using BoB robotics and the open-source robotics simulation software Gazebo, to investigate how best to combine information from multiple incoming sensory sources – a problem faced by both ants and robots – on a simulated bioinspired robotic platform.

This project can involve computer vision (we are particularly interested in visual sensors) and machine learning, depending on the student’s interests and commitment level. Some prior experience with C++ is a must. Experience with the git workflow and using GitHub is also a plus, as this will be the primary way in which you interact with our team.

Mentors: @jamie Jamie Knight (J.C.Knight@sussex.ac.uk),
@alexdewar Alex Dewar (alex.dewar90@gmail.com)


Dear Malin,
Hi, I am Nitik, a final-year undergraduate at IIT Kanpur. I have been involved in autonomous system development since my freshman year and will now go on to pursue a Master's in a similar field. I have strong coding skills in Python and C++ and have experience with the ROS + Gazebo environment.

I read about the project and it aligns with my interests, and I am looking forward to contributing to it under GSoC 2021. I also found some issues on the GitHub repo, but since the application deadline is near, I was wondering if there is any procedure (e.g. tests) I need to complete as well.

Looking forward to your reply.

Hi Nitik,

Thanks for your interest in our project - it does indeed sound like you have exactly the type of skills we’re looking for! We don’t have any formal tests and, as this is more of a research-based project and the deadline is indeed quite near, I think focussing on the proposal would be the best use of your time. A good starting point might be to have a look at some of our group’s recent papers:

https://doi.org/10.1371/journal.pcbi.1002336
https://www.mitpressjournals.org/doi/abs/10.1162/isal_a_00141
https://www.mitpressjournals.org/doi/abs/10.1162/isal_a_00307


Jamie

Hi Nitik,

I agree with @jamie that you seem like a great fit for the project! As is often the case with these sorts of things, we had an idea of roughly the sort of thing we think would make for an interesting project, but you can obviously tailor it to your own interests somewhat.

We haven't done any work on sensor fusion before as a group, which is why I thought it would be a nice novel direction to take things, but it's also why there aren't any GitHub issues related to it at present. As our research group is interested in implementing biologically inspired sensor processing on robots (see https://brainsonboard.co.uk), at some stage we will have to work on integrating various bio-inspired and conventional sensors to produce behaviours like visually guided navigation.

My idea for this project was something along the lines of having a student start by implementing sensor fusion with simple sensors in Gazebo, then gradually building up to integrating the inputs from more biological sensors (e.g. see the links that @jamie sent). There is an analogous problem in animal behaviour, where it's generally called "cue integration" (see e.g. https://royalsocietypublishing.org/doi/pdf/10.1098/rspb.2015.1484), so there is also the possibility that these investigations could be of interest from an insect behaviour perspective. It would then be up to the student to figure out what sorts of approaches would be sensible and then to select and tune the relevant algorithms for our use case.
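
To give a flavour of what I mean (this is just a toy sketch I'm making up here, not anything that exists in BoB robotics), cue integration is often modelled as a weighted vector sum: each cue, say path integration vs. a visual memory, votes for a heading with a strength proportional to how reliable it currently is, and the combined heading is the direction of the resultant vector:

```cpp
// Toy cue integration as a weighted vector sum: each directional cue is a
// unit vector scaled by its reliability, and the combined heading is the
// direction of the resultant. Cue names and numbers are illustrative only.
#include <cmath>
#include <iostream>

struct Cue {
    double headingRad;   // direction suggested by this cue
    double reliability;  // weight, e.g. inversely related to its uncertainty
};

double integrateCues(const Cue &a, const Cue &b)
{
    const double x = a.reliability * std::cos(a.headingRad) + b.reliability * std::cos(b.headingRad);
    const double y = a.reliability * std::sin(a.headingRad) + b.reliability * std::sin(b.headingRad);
    return std::atan2(y, x);   // direction of the resultant vector
}

int main()
{
    const Cue pathIntegration{0.2, 0.8};   // confident PI cue
    const Cue visualMemory{0.9, 0.3};      // less certain visual cue
    std::cout << "combined heading: "
              << integrateCues(pathIntegration, visualMemory) << " rad\n";
}
```

The interesting research questions are then where those reliability weights come from and how they should change along a route, which is what the cue integration literature linked above looks at in ants.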

Hi @alexdewar,
Firstly, I apologize for the long delay in replying; the mail from Neurostars got filtered in my mailbox and, since I was expecting an email reply, I missed that this thread had continued. I have read through all the links and think I now have a good basic understanding of how cue integration occurs and how visual scene familiarity was used for route navigation, first in ants and then for an autonomous ground robot. This concept is new to me, and I am highly interested and looking forward to contributing to the project in any form possible.

As far as my background in sensor fusion goes, I have worked on it quite a lot, as it is an essential part of almost every autonomous vehicle. Consequently, I am confident I can implement the relevant sensor fusion algorithms (for this project they seem to be Bayesian methods, CNNs and Kalman filters) in Gazebo with different sensors. Though the links mostly involved visual sensors, I am in the process of building the linked repo and running the examples so that I get a sense of what types of sensory feedback I might have to incorporate.
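
To illustrate the kind of fusion I have in mind, below is a minimal 1D Kalman filter that corrects a dead-reckoned position with a noisy absolute fix (think wheel odometry / IMU prediction corrected by a GPS-like measurement). It is purely a sketch: the noise values are made up and it is not tied to the BoB robotics or Gazebo APIs.

```cpp
// Minimal 1D Kalman filter: predict position from a velocity estimate
// (e.g. integrated from an IMU or wheel odometry) and correct it with a
// noisy absolute measurement (e.g. a GPS-like position fix).
#include <iostream>

struct Kalman1D {
    double x;   // state estimate (position)
    double p;   // estimate variance
    double q;   // process noise variance
    double r;   // measurement noise variance

    void predict(double velocity, double dt)
    {
        x += velocity * dt;   // dead-reckoned motion model
        p += q;               // uncertainty grows as we predict
    }

    void update(double measurement)
    {
        const double k = p / (p + r);   // Kalman gain
        x += k * (measurement - x);     // pull estimate towards the measurement
        p *= (1.0 - k);                 // uncertainty shrinks after the update
    }
};

int main()
{
    Kalman1D kf{0.0, 1.0, 0.01, 0.5};
    for (int step = 0; step < 5; step++) {
        kf.predict(/*velocity=*/1.0, /*dt=*/0.1);     // motion prediction
        kf.update(/*measurement=*/0.1 * (step + 1));  // noisy absolute fix
        std::cout << "estimate: " << kf.x << "\n";
    }
}
```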

Below are some queries which I felt were urgent at this point:

  1. It would be great if you could let me know whether, initially in Gazebo, I will have to work with sensors like IMUs and GPS, or whether I should focus mostly on visual sensors.
  2. The path integration part was common to all the papers, and I was wondering about the possibility of improving it by fusing different sensors.
  3. You have mentioned simple and biological sensors, but I could not find a classification along these lines in the literature, so it would be great if you could elaborate a bit on this classification with some examples.

Since the deadline is quite near, I have started to compile a brief outline for the proposal as follows:

  • Broad Goal: Implement optimized sensor fusion algorithms combining inputs from simple + biological sensors

  • Stretch Goals:
    (Considering I might have to work on cue integration for visually guided navigation)

  1. Implement basic sensor fusion algorithms with simple sensors in Gazebo
  2. Analyze the sensory data from biological sensors based on the needs of the problem
  3. Identify suitable algorithms for selecting sensors to fuse for optimal results
  4. Tune the algorithms for fusing both types of sensors and integrate them with the current environment

I plan to fill the gaps in my background knowledge, fine-tune existing skills and pick up new skills relevant to the project during the community bonding period, familiarize myself with the code, and start before the actual coding period begins so as to improve the chances of achieving more. Since I have no other commitments over the summer, I will try to dedicate extra effort to the project.

Lastly, I plan to send in the first draft of the proposal by tomorrow at the latest, so that there is at least some time to iterate and fine-tune. I shall try to stick to the template guideline I found on the INCF org page here.
Nitik

Hi @alexdewar and @jamie,
While converging on the deliverables of the project, I got stuck on a few things.

  1. Would the algorithms mainly be used for tasks involving visual sensors, and if so, do you have any thoughts on which task I should focus on for the summer term?

  2. Does the project involve scope for centralised sensor fusion algorithms? In the papers, ants were said to learn paths unique to themselves, so I was wondering whether there might be some sort of communication between individuals (ants/robots) in this case, which might lead to the creation of a centralised system.

Lastly, I am stuck on how exactly I should develop the project goals; I am getting overwhelmed by the new information, so I would be grateful if you could suggest your idea, which I can then mould into a quick draft proposal for further iteration.

Hoping for your reply,
Nitik

Hi @jamie and @alexdewar
While I was stuck with the proposal, I tried building the examples in the repo. The initial setup had some dependencies which were throwing errors during the build. The one involving ARSDK_PATH, as mentioned in the documentation, will still show an error because of the Python path; maybe that portion could be added to the documentation. Should I open a PR?

Also, I was trying the examples but could not work out which sensors pertain exactly to the problem statement I would have to build upon. It would be great if you could guide me a bit on how to proceed next, since I am technically stuck at the core of the proposal right now, i.e. how to formulate the summer goals.

Hoping for your reply,
Nitik Jain

Hi @nitik1998

In answer to your questions:

  1. I think the initial goal should indeed be to get things working with more conventional sensors, rather than any in-house sensor, just so we have a proof of concept. If you could fuse e.g. GPS and IMU inputs that would be fantastic – apart from anything else, we do use these sensors anyway, so it could be directly useful – but I think it would also be ok as a very first step to use whatever simulated sensors best demonstrate the process (e.g. some kind of idealised simulated sensor).
  2. Path integration is indeed super important to the kinds of insects we study (e.g. desert ants), and fusing it with visual input would be an eminently sensible thing to do and very relevant to the biological system that we're interested in (which is why this was what the authors looked at in the Webb paper I sent you). It might be good to list it as a goal in the proposal for this reason; it's an interesting problem for both robotics and biology. The only question is how we should frame it. There are some cool biological models of path integration (see e.g. our implementation of someone else's here: https://github.com/BrainsOnBoard/bob_robotics/tree/master/projects/stone_cx), but it might be hard to figure out the most biologically accurate way of doing this. Perhaps one goal could be to fuse the input from a visual navigation algorithm with that from an idealised pseudo-path integration system (e.g. using an IMU + wheel encoders; see the rough sketch after this list)? Then another goal could be to subsequently substitute this for a more biologically realistic model, such as the Stone CX model in the link above.
  3. I probably should have elaborated more on what I meant by “simple” and “biological” sensors. An example would be what I mentioned in my answer to Q2 (i.e. using a neural model vs IMU + wheel encoders). This project is technically an engineering project so we are focused on making things actually work rather than purely doing theoretical things, but we obviously have a particular emphasis on using biological models as a starting point, so that’s where our research tends to lie. That said, you’re always making some assumptions when modelling a biological system and we do want things to work so there is always a trade-off between biological realism and practical considerations. Does that help, or have I just made things more confusing? If so, then don’t worry; we can always talk about this kind of thing further down the line when it comes to it.
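
To illustrate what I mean by an "idealised pseudo-path integration system" in point 2 (again, just a sketch, not existing BoB robotics code; in Gazebo the speed and heading would come from the simulated wheel encoder and IMU plugins):

```cpp
// Idealised path integration: accumulate displacement from a speed signal
// (wheel odometry) and a heading signal (IMU yaw), keeping a running
// "home vector" that always points back to the starting position.
#include <cmath>
#include <iostream>

struct PathIntegrator {
    double x = 0.0, y = 0.0;   // position relative to the start (the "nest")

    void step(double speed, double headingRad, double dt)
    {
        x += speed * dt * std::cos(headingRad);
        y += speed * dt * std::sin(headingRad);
    }

    // bearing and distance the robot should follow to get home
    double homeBearing() const { return std::atan2(-y, -x); }
    double homeDistance() const { return std::hypot(x, y); }
};

int main()
{
    const double halfPi = std::acos(0.0);   // 90 degrees in radians

    PathIntegrator pi;
    pi.step(0.5, 0.0, 1.0);      // 0.5 m "east"
    pi.step(0.5, halfPi, 1.0);   // 0.5 m "north"
    std::cout << "home bearing: " << pi.homeBearing()
              << " rad, distance: " << pi.homeDistance() << " m\n";
}
```

A biological model like the Stone CX network would replace this simple accumulator with a population of neurons, but the interface to the rest of the system (a home direction plus some notion of distance) stays roughly the same, which is why I think the substitution in the second goal should be feasible.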

PS – The broad and stretch goals you mentioned all seem sensible.

  1. Yes, we are mostly interested in visually guided navigation. If you're looking for a specific task to get a (simulated) robot to do, something like returning to a goal location would be a good one. Then you could integrate path integration with visual homing in e.g. a Bayesian fashion (see the sketch after this list for the simplest version of what I mean).
  2. Nope. We work on individual animals rather than swarms etc. (Even though everyone always thinks of ants in terms of their group behaviours!)
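
By "in a Bayesian fashion" I just mean something like the sketch below at its simplest: treat the PI bearing and the visual-homing bearing as two noisy estimates and weight them by their inverse uncertainty. This is only an illustration under the assumption that both cues give roughly Gaussian bearing estimates away from the angle wrap-around; the numbers are invented.

```cpp
// Simplest "Bayesian" fusion: combine two bearing estimates by
// inverse-variance weighting. The fused bearing is pulled towards whichever
// cue is currently more certain, and the fused variance shrinks.
#include <iostream>

struct Estimate {
    double bearing;    // radians (assumed far from the +/-pi wrap for simplicity)
    double variance;   // uncertainty of this cue
};

Estimate fuse(const Estimate &pathIntegration, const Estimate &visualHoming)
{
    const double wPI = 1.0 / pathIntegration.variance;
    const double wVis = 1.0 / visualHoming.variance;
    Estimate fused;
    fused.bearing = (wPI * pathIntegration.bearing + wVis * visualHoming.bearing) / (wPI + wVis);
    fused.variance = 1.0 / (wPI + wVis);
    return fused;
}

int main()
{
    const Estimate pi{0.1, 0.05};    // PI tends to be reliable early in a route
    const Estimate vis{0.6, 0.40};   // visual homing is noisier far from the goal
    const Estimate out = fuse(pi, vis);
    std::cout << "fused bearing: " << out.bearing
              << " rad (variance " << out.variance << ")\n";
}
```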

Do you feel like you have enough information from what I’ve said so far…?

Hi Alex,
Thanks for your replies. It took me some time to get the pieces in place, but I now have a clearer understanding of how I can shape this into a very exciting project.

Do you feel like you have enough information from what I’ve said so far…?

I do think I can now better formulate the goals for the summer project.

  1. Nope. We work on individual animals rather than swarms etc. (Even though everyone always thinks of ants in terms of their group behaviours!)

This actually gives us the freedom not to look at a whole class of fusion algorithms (those based on centralised computation). Glad :stuck_out_tongue:

  1. Yes, we are mostly interested in visually guided navigation. If you’re looking for a specific task to get a (simulated) robot to do, something like returning to a goal location would be a good one. Then you could integrate path integration with visual homing in e.g. a Bayesian fashion.

I feel this could be the final goal for the project: to evaluate the performance of an idealised PI algorithm + visual navigation algorithm against some biologically suitable model (as per your reply to point 2 above), where I would have to figure out which model to use. Since this might be the final goal, should I mention in the proposal which biological method I will try to use, or should I first allocate some time for a literature survey and then choose the appropriate method in due course over the summer?

Finally, there are some (probably last) doubts I have:

  1. For the proof of concept, I shall try to implement 2-3 fusion algorithms on both a ground vehicle (e.g. a TurtleBot, or a BoB model if one exists) and an aerial vehicle; but for the final task, i.e. performing visual homing with PI, would it be okay if I stick to a ground vehicle (analogous to ants), since that allows safely assuming pitch and roll to be zero?
  2. Following on from your reply, the final goal would be to use suitable sensors and perform visual homing / guided navigation with PI, comparing the two kinds of PI model. One of the papers @jamie linked did something of this sort, but in a MATLAB environment if I am not mistaken. So, will I get an existing environment for this, or will I have to create one (preferably in Gazebo), take sensory input from BoB and evaluate against that environment?

Ok, this is now reaching the point where I can meaningfully comment :slight_smile: Sticking to a ground-based vehicle is definitely a good simplification for the final task and we will be providing one or more environments which you can use in Gazebo.

Can you share a draft of your proposal through GSoC as soon as possible? That way we can both help you out with editing before the deadline tomorrow (not today at all @alexdewar - sorry!)

Hi @jamie,
Now that I will have an environment provided, my task is simplified a lot. I have just shared the draft (still a bit incomplete) proposal with you over email.

Regards,
Nitik
P.S. I am really apologetic for the delay from my end in drafting the proposal; this is highly unusual for me, but given the surprise assignments that sprang up, I was on my own with them :sweat_smile: