W2D3 content discussion

Video 2: SPRT and the Random Dot Motion Task

I think so. The log likelihood ratio is defined for any type of likelihood function. But you would need to make sure that the means of the Poisson distributions for p_L and p_R are sufficiently well separated.
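Just to make that concrete, here is a minimal sketch of an SPRT with Poisson likelihoods (the rates, threshold and seed are arbitrary assumptions for illustration, not values from the tutorial):

```python
import numpy as np
from scipy.stats import poisson

rng = np.random.default_rng(0)

# Assumed Poisson rates under the two hypotheses (e.g. leftward vs rightward motion)
rate_L, rate_R = 4.0, 8.0
threshold = 5.0            # bound on the cumulated log-likelihood ratio
true_rate = rate_R         # simulate data under the "right" hypothesis

llr = 0.0
for t in range(1000):
    x = rng.poisson(true_rate)
    # log-likelihood ratio contributed by this observation
    llr += poisson.logpmf(x, rate_R) - poisson.logpmf(x, rate_L)
    if abs(llr) >= threshold:
        print(f"decision after {t + 1} samples:", "right" if llr > 0 else "left")
        break
```

The closer the two rates are, the smaller each increment of the log likelihood ratio and the longer it takes to hit the bound, which is why the means need to be well separated.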


Video 2: SPRT and the Random Dot Motion Task!

It comes from the property that if you have a random variable X that follows a Gaussian distribution with mean \mu and standard deviation \sigma, it can be transformed into the standard normal distribution N(0,1) by using:

\dfrac{X-\mu}{\sigma}

In this case \mu is zero, so X = \sigma \epsilon, where \epsilon is drawn from the standard normal distribution N(0,1).
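A quick numerical check of that relationship (an illustrative snippet with an arbitrary \sigma):

```python
import numpy as np

sigma = 2.5
epsilon = np.random.randn(100_000)   # draws from the standard normal N(0, 1)
x = sigma * epsilon                  # equivalent to drawing from N(0, sigma^2)
print(x.mean(), x.std())             # approximately 0 and sigma
```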

Okay got it. Thank you Antoine!


Hi! Not sure whether there are still people here answering questions but I also find the eye-tracking example hard to understand.
I don’t understand why the gaze point can be modeled with a linear system… if it’s a linear system with F = identity, doesn’t that mean the gaze point is not expected to move, and that any movement is caused purely by random noise?

Hello :slight_smile:

The eye tracking example is slightly different from what you had before. Indeed, in this case, the parameters of the linear dynamical system (the F, Q, H and R matrices) are not known beforehand, and the EM algorithm is used to infer them.

To do that, we start with the first step, which consists of running a Kalman filter on the observed data in order to obtain an estimate of the latent variable. To compute this estimate, the Kalman filter takes a weighted sum of the observed data and of a prediction based on the last estimated point and on the current values of the LDS parameters (F, Q, H and R).

Once you’ve done that for all your time steps, the EM algorithm moves on to the second step, which consists of re-estimating the LDS parameters (F, Q, H and R) based on all the estimates computed in the first step.

In summary, the algorithm behaves like this:

  1. Initialize your LDS parameters to some initial guess (in this case we choose the identity for F)
  2. Run the first step to obtain the Kalman estimates of your latent variable
  3. With these estimates, recompute your LDS parameters
  4. Repeat steps 2 and 3 until you reach convergence
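To make the loop concrete, here is a toy sketch of that procedure. It is deliberately simplified compared with the tutorial: Q, H and R are kept fixed at the identity and only F is re-estimated, with a plain least-squares fit on the filtered estimates instead of the full EM M-step.

```python
import numpy as np

def kalman_filter(y, F, Q, H, R, mu0, V0):
    """Forward Kalman filter; returns the filtered estimates of the latent state."""
    T, d = y.shape
    mu, V = mu0, V0
    estimates = np.zeros((T, d))
    for t in range(T):
        # prediction from the previous estimate and the current LDS parameters
        mu_pred = F @ mu
        V_pred = F @ V @ F.T + Q
        # weighted combination of the prediction and the new observation
        K = V_pred @ H.T @ np.linalg.inv(H @ V_pred @ H.T + R)
        mu = mu_pred + K @ (y[t] - H @ mu_pred)
        V = (np.eye(d) - K @ H) @ V_pred
        estimates[t] = mu
    return estimates

def em_fit_F(y, n_iter=20):
    d = y.shape[1]
    F = np.eye(d)                      # 1. initial guess (identity for F)
    Q = H = R = np.eye(d)              #    Q, H, R kept fixed in this toy version
    for _ in range(n_iter):
        # 2. run the Kalman filter to estimate the latent variable
        s = kalman_filter(y, F, Q, H, R, mu0=y[0], V0=np.eye(d))
        # 3. re-estimate F by regressing s[t+1] on s[t]
        F = np.linalg.lstsq(s[:-1], s[1:], rcond=None)[0].T
    return F                           # 4. repeat until convergence
```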

Does that help?

Hi, thanks for your reply!
I understand how the algorithm estimates the parameters and the latent states, but I don’t understand why the gaze point can be modeled as a linear system. Where we will look at the next time point depends not only on where we are looking now, but also on the structure of the image, where we have looked before, etc. So it should be neither Markovian nor linear. Why can we still model it as s_{t+1} = F s_t + \epsilon with a fixed F?

Hi,

The idea behind this part of the tutorial was to show that, even though the Kalman filter is very good at smoothing observations (such as the gaze trajectory itself), it is not suited to modelling movement. Indeed, in the LDS used in the example, there is no driving force (i.e. the command introduced in the W2D4 material).

The last figure (just before the bonus) shows what happens when you use this F matrix to model movement. You end up with a random walk (since F is the identity, it has no effect), which is clearly not doing the same thing as the real data.
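A quick toy simulation (not taken from the tutorial) shows why: with F equal to the identity, s_{t+1} = F s_t + noise is just a random walk.

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(1)
F = np.eye(2)                          # identity dynamics: no driving force
s = np.zeros((200, 2))
for t in range(1, 200):
    # with F = identity this is a pure random walk
    s[t] = F @ s[t - 1] + rng.normal(0, 0.1, size=2)

plt.plot(s[:, 0], s[:, 1])
plt.title("Random walk: nothing like a real gaze or reaching trajectory")
plt.show()
```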

Okay got it, thanks for the detailed explanations!


Hi

Related to W2D4, do you know how to estimate the L parameter so as to derive a from s, using the formula a = L*s? I am trying to model some reaching data to calculate the cost function, but I can’t find a way to calculate L, and so I can’t derive a.
I’ve figured out how to define s with the Kalman filter. I have 2-dimensional data, x and y (as in the eye-tracking example of W2D3).

Hi,

There are several ways to derive the parameter L, depending on the kind of controller you want to use. If you are working with reaching data, I would suggest using the linear quadratic regulator that is detailed in the second tutorial of W2D4. Briefly, this kind of controller selects the sequence of actions that minimises a cost function weighting the end-point error (i.e. reaching the goal target) against the energy expenditure (the sum of the squared motor commands).

In order to find these L (you will have one per time step), you have to implement some recursive equations (see lines 12->16 of exercise 2.1). These will give you the L parameters to apply at every time step.
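For reference, here is a generic sketch of that backward (Riccati) recursion; the notation differs a bit from exercise 2.1 (here R would be \rho times the identity and Q_final would carry the end-point penalty), so treat it as a textbook version rather than the tutorial's exact code.

```python
import numpy as np

def lqr_gains(D, B, Q, R, Q_final, T):
    """Finite-horizon discrete-time LQR gains via the backward Riccati recursion.
    Returns L[0..T-1] such that the optimal action is a_t = -L[t] @ s_t."""
    P = Q_final                        # value-function matrix at the final time step
    L = [None] * T
    for t in reversed(range(T)):
        # gain for this time step
        L[t] = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ D)
        # propagate the value-function matrix one step backwards
        P = Q + D.T @ P @ D - D.T @ P @ B @ L[t]
    return L
```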

If you want to chat more about modelling of reaching movements don’t hesitate, I’m playing with that a lot in my research and it might be quite tough to explain in a single message :sweat_smile:

Hope this helps!

Have you looked at W2D4 Tutorial 2, exercise 2.1, the definition of class LQR(LDS), function control_gain_LQR? To my understanding, this is where they calculate L, according to equation (4), substituting definitions (2). I think they differentiate J(s,a) with respect to L, set it to 0, and then solve for L from there. Try it and ask again if it doesn’t work.

Many thanks.

I was looking at it. I was trying to figure out how to calculate the cost of the action.
So basically, I have to simulate different levels of L and \rho in order to find the best combination that minimizes the cost?

How can I define the D, B, T, ini_state, noise_var parameters? Are they decided a priori?

Hi Ivan, I’ve undeleted my post, it might answer your questions.
EDIT: I think the parameters come from the model you have about your system, or perhaps you can estimate them from measurements.

Thanks for the reply. I am still confused about how to define the D, B, T, ini_state, noise_var parameters. I’ll look at some papers to see if I can find the way to do it.
Many thanks again

Hi Ivan,

I’ll try to answer your last two messages. As @aep said, there are some answers in the tutorial as well.

Let’s start with this:

I am not 100% certain about what you mean by the cost of the action, but I guess you are referring to the term that multiplies the square of the command in the cost function of the LQG. There is no way to compute it per se; you have to select a value for it, as well as for the term that multiplies the penalty on the state. In expression (3) of section 2.1 of Tutorial 2 this is encapsulated by the parameter \rho, which sets the cost of the control effort relative to the penalty on the end-point error. In my modelling I usually set it to 10^{-4}, which is a value I found in the literature.
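For reference, one generic way of writing such a cost, with \rho weighting the total control effort against a terminal penalty on the end-point error (this is not necessarily the exact form of expression (3) in the tutorial):

J = \|s_T - s_{goal}\|^2 + \rho \sum_{t=0}^{T-1} \|a_t\|^2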

Here is the process I would suggest to design such a controller (LQG = LQR controller + Kalman observer):

  1. Design the state-space dynamics of your system from what you know. If you are working on upper-limb reaching movements you can use Newton’s laws to do so, and you will end up with:

s_{t+1} = Ds_{t} + Ba_{t}+w_{t}

where s_t is the state vector at time t, D is the matrix that characterises the dynamics of the system (derived from Newton’s laws), B describes how the external input modifies the state (in the case of reaching movements, these will be x- and y-forces that modify the acceleration along the x- and y-axes respectively), a_t is the action at time t (which you don’t know yet) and w_t is some Gaussian noise. You also have to select the noise level; for that I would suggest starting from values found in papers modelling the same kind of system and movements as yours. (A point-mass version of these D and B matrices is sketched in the code after this list.)

  2. Based on the cost function, choose the cost parameters for both the motor cost and the end-point penalty; in the equations of the tutorial this corresponds to selecting the \rho parameter, but the notation can differ in the literature.

  3. Implement the backward recursion (lines 12->16 of exercise 2.1) to obtain the optimal set of gains to apply. These will be the L matrices, one for each time step.

  4. Implement the Kalman observer.

  5. Run everything together to model the movement.
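Here is the point-mass sketch referred to in step 1, just to illustrate what D and B can look like for a 2D reaching model (the time step, mass and noise level are arbitrary assumptions):

```python
import numpy as np

# State s = [x, y, vx, vy], action a = [Fx, Fy].
# Euler discretisation of Newton's law with time step dt and mass m gives
# s_{t+1} = D s_t + B a_t + w_t.
dt, m = 0.01, 1.0

D = np.array([[1, 0, dt, 0],
              [0, 1, 0, dt],
              [0, 0, 1,  0],
              [0, 0, 0,  1]])

B = np.array([[0,      0     ],
              [0,      0     ],
              [dt / m, 0     ],
              [0,      dt / m]])

# Process noise: small Gaussian perturbations on the velocities only
noise_var = 1e-4
w = np.random.normal(0, np.sqrt(noise_var), size=4) * np.array([0, 0, 1, 1])
```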

You have to define D, B, T, ini_{state} and noise_{var} a priori. To do so, I would suggest writing 2D Newton’s laws for D; you will end up with a matrix characterising the dynamics of the system, and you will also find B. For the other parameters, I would suggest referring to the literature. What kind of movement are you modelling? Upper-limb reaching movements? If yes, I can help direct you to the literature and help you select your different parameters.

Hope this helps :slightly_smiling_face:

Antoine, you are amazing! Many thanks, this is extremely helpful! So, the movement is not constrained to a plane, but is a normal reaching movement, with x, y and z coordinates. However, I am trying to model just the x and y directions for now. There is no force field applied; participants were free to move. I am doing this as an exercise to better understand the modelling part.

So I suppose that for my D matrix I have to use the mass of the participant’s limb. I’ll use a standard value since I haven’t measured it. For B, I didn’t have any forces affecting the movement, except gravity.

I’ve noticed that part of this process is in section 4.1 of W2D4 Tutorial 2, where at the end they simulate an LQG controller with control. This is implemented in only one dimension, from what I understood.

I’ll follow your suggestions and see what happens. I’ll start by implementing it in one direction, and then see if I can do it in both x and y. If needed, I’ll write again.

Thank you very much for your help!


Hey Ivan,

I am glad to hear that it helped you!

For the D matrix you indeed need the mass of the participant’s limb. To begin, you don’t need to put in the exact value; I’ve always started with a unit value (this is not the parameter that influences the behaviour the most).

The B matrix characterises how the input (i.e. the command/action) acts on the state. In the case of human reaching movements, we can consider this action to be the participant’s input to move the system, and that it acts on the derivative of the force (see this paper for an example of such a model).

If you want more examples of LQG models for human reaching movements, there is a whole literature from the same group and others developing this kind of model in many different contexts :slightly_smiling_face:
