I was wondering about W1D1 tutorial 3, and the claim that exponentially distributed ISIs maximize entropy. This is true (I think!) only if the maximum ISI is much longer than the mean ISI, right? (For example, if max(ISI) = 2 * mean(ISI), then the uniform distribution will be entropy-maximizing.)
Is this assumption unproblematic? I was thinking that at least one factor that could keep max(ISI) from growing is computational speed: the brain would not want a coding scheme that took forever to implement by requiring very long ISIs.
The exponential distribution is the maximum-entropy distribution for a random variable that is ≥ 0 and has a fixed mean. If a distribution is upper-bounded, it is not a “true” exponential distribution (which has no maximum value), so to approach an exponential distribution, real neurons must allow a very large (and rare) maximal ISI, as you suggested.
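A quick numerical sketch of both claims (the value of the mean ISI here is arbitrary, just for illustration): for a fixed mean μ, the unbounded exponential has differential entropy 1 + ln μ, while the uniform on [0, 2μ] — same mean, but bounded support — has entropy ln(2μ), which is strictly smaller:

```python
# Sketch (not from the tutorial): compare differential entropies of two
# distributions with the same mean ISI mu. The exponential (support [0, inf))
# maximizes entropy given the mean; the uniform on [0, 2*mu] maximizes it
# given only the bounded support [0, 2*mu].
from scipy.stats import expon, uniform

mu = 0.02  # mean ISI in seconds (arbitrary illustrative value)

h_exp = float(expon(scale=mu).entropy())           # analytically: 1 + ln(mu)
h_uni = float(uniform(loc=0, scale=2 * mu).entropy())  # analytically: ln(2*mu)

print(f"exponential: {h_exp:.4f} nats")
print(f"uniform:     {h_uni:.4f} nats")
# The gap is 1 - ln(2) ≈ 0.31 nats regardless of mu: the unbounded
# exponential always beats the bounded uniform with the same mean.
```

So the uniform only “wins” once you impose the upper bound as an extra constraint, which is exactly the point under discussion.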
Hope that answers the question.
Thanks, that was helpful! I guess my worry is that the need for truncated distributions (so that the maximum ISI is not too long) is as much a constraint on entropy maximization as a fixed mean ISI is. A full information-theoretic explanation of why ISIs should be exponential should perhaps take this into account as well.
For those who understand the concept of entropy well: is this a robust/reliable explainer? https://towardsdatascience.com/the-intuition-behind-shannons-entropy-e74820fe9800
I haven’t read that article, but it looks OK. I’d recommend checking out the first few chapters of David MacKay’s textbook, freely available here.
If there were more time, I would suggest an information theory*, entropy, etc. breakout/discussion group. Or even just watching the Claude Shannon biopic: https://thebitplayer.com/ (If anyone is particularly interested in that post-NM, feel free to DM me.)
*(and/or cybernetics, some computational history stuff, etc)
I think only the lower bound is constrained because of the meaning of ISI. Is there a legitimate reason why there could be an upper bound?
I’m down for a movie night.
Yeah, that should be doable!
Yes, this sounds interesting!
I was thinking computational speed?