Logistic GLM with non-negative coefficients

Does constraining the sign of the weights restrict the neural information that the model can take into account to make predictions in a decoding setting?

More specifically, does imposing non-negative weights restrict the model to using only positive neuronal modulation to predict the output in a decoding setting?

I ran a quick simulation for a linear regression with the true model y = -3x_1 + 2x_2 + 0.4x_3 + 0.1x_4.

If the predictors are uncorrelated, the fit is bad, but the largest weight is estimated not too far from its true value.

If the predictors are correlated, the algorithm seems to minimise the error by attributing most or all of the variance to the predictor that is least correlated with x_1.

I did not do exactly what you did: your true model seems to have no constraints on the weights, am I right?

If the true model has only positive coefficients, then constraining the weights to be positive shouldn’t have any effect on the fit (a quick simulation confirms this).

So I tried to see what happens if we constrain the weights of the fit to be positive, when the true model also has negative weights.
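
For concreteness, here is a minimal sketch of that simulation. The correlated-predictor construction (mixing in a shared latent component) is just one arbitrary choice:

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(0)
n = 1000
true_w = np.array([-3.0, 2.0, 0.4, 0.1])  # true model: y = -3x_1 + 2x_2 + 0.4x_3 + 0.1x_4

# Case 1: uncorrelated predictors
X = rng.standard_normal((n, 4))
y = X @ true_w + 0.1 * rng.standard_normal(n)
w_ols = np.linalg.lstsq(X, y, rcond=None)[0]   # unconstrained least squares
w_nn = nnls(X, y)[0]                           # non-negative least squares
print("uncorrelated  OLS :", np.round(w_ols, 2))
print("uncorrelated  NNLS:", np.round(w_nn, 2))

# Case 2: correlated predictors (shared latent component mixed into each x_i)
shared = rng.standard_normal((n, 1))
Xc = 0.7 * shared + 0.3 * rng.standard_normal((n, 4))
yc = Xc @ true_w + 0.1 * rng.standard_normal(n)
print("correlated    NNLS:", np.round(nnls(Xc, yc)[0], 2))
```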


Great, your simulation is neat.
If anybody has broader intuition about the consequences of such constraints, please comment!

I think the broader intuition depends on your context.

If you have a very strong reason to believe that the true model contains no significant negative weights, then it may be OK to restrict the weights to be positive. If, however, the true model has large negative weights, restricting the fit to the positive domain will probably greatly underestimate the contributions of the variables that are correlated with the variable whose true weight is negative. One might imagine mathematical methods to work around this (perhaps orthogonalising the variables, as in the sketch below, though more elegant ideas are surely possible), but if the true model really does have large negative weights, then why not just estimate them?
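
To illustrate the orthogonalisation idea, a minimal sketch. QR decomposition is just one way to do it, and the usual caveat applies: the weights then live in a rotated basis rather than on the original variables.

```python
import numpy as np

rng = np.random.default_rng(1)
shared = rng.standard_normal((500, 1))
X = 0.7 * shared + 0.3 * rng.standard_normal((500, 4))  # correlated predictors

# Reduced QR: Q has orthonormal columns spanning the same space as X,
# so a sign-constrained fit on Q sidesteps the variance-reattribution
# problem, at the cost of weights living in a rotated basis.
Q, R = np.linalg.qr(X)
print(np.round(Q.T @ Q, 6))  # ~identity: the new predictors are orthogonal
```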

A quicker idea might be a softer method: for example, define a model that penalises negative weights (that way, if negative weights are not truly significant but just fitting artefacts, they will be squashed), or use something like Bayesian regression with a strong prior that the weights are positive.
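
A minimal sketch of the penalty version, assuming a quadratic penalty on the negative part of each weight (one of many possible choices):

```python
import numpy as np
from scipy.optimize import minimize

def soft_nonneg_lstsq(X, y, lam=100.0):
    """Least squares plus a quadratic penalty on the negative part of w.

    lam tunes how hard negative weights are squashed; lam -> infinity
    approaches the hard non-negativity constraint.
    """
    def loss(w):
        resid = y - X @ w
        neg = np.minimum(w, 0.0)        # zero wherever w_i >= 0
        return resid @ resid + lam * (neg @ neg)
    return minimize(loss, np.zeros(X.shape[1])).x

rng = np.random.default_rng(3)
X = rng.standard_normal((500, 4))
y = X @ np.array([-3.0, 2.0, 0.4, 0.1]) + 0.1 * rng.standard_normal(500)
print(np.round(soft_nonneg_lstsq(X, y), 2))  # the negative weight gets squashed towards 0
```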

I don’t know what your context is, but I think brain variables are pretty correlated, and positive as well as negative weights are not unexpected. But perhaps in your application this is not the case.

I forgot to add: if the true model has large negative weights and you restrict the fitted weights to be positive, I think the estimated parameters will also vary a lot across different samplings.
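
A quick check of this claim, reusing the correlated-predictor setup from the sketch above:

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(2)
true_w = np.array([-3.0, 2.0, 0.4, 0.1])
fits = []
for _ in range(200):                    # refit on 200 fresh samples
    shared = rng.standard_normal((500, 1))
    X = 0.7 * shared + 0.3 * rng.standard_normal((500, 4))
    y = X @ true_w + 0.1 * rng.standard_normal(500)
    fits.append(nnls(X, y)[0])
print("std of constrained estimates across samples:", np.round(np.std(fits, axis=0), 2))
```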
