Hi,
One of the main things we are exploring in our project is the relationship between image complexity/dimensionality and the complexity/dimensionality of the neuronal population response, to some extent inspired by this paper from Carsen Stringer: https://www.nature.com/articles/s41586-019-1346-5
We are still figuring out good metrics for this. The important thing is that we want to compute them for single images, rather than across images as in the paper above.
So far we have the following ideas (with help from our awesome mentor Miguel Maravall):

Divide the image into patches of a fixed size (which we can vary). Each patch is p×p pixels, which we unroll into a single dimension P = p×p, giving a data matrix of size P × N_patches. We then compute the P×P covariance matrix and do PCA on it. For each image we count how many PCs are needed to capture 90% of the variance, or e.g. take the slope of the eigenspectrum. Our naive intuition is that more complex images would need more PCs to be described.
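In case it helps the discussion, here is a minimal NumPy sketch of how we imagine this patch-PCA measure (our own implementation of the description above; the patch size p and the 90% threshold are the free parameters mentioned):

```python
import numpy as np

def n_pcs_for_variance(img, p=8, var_frac=0.90):
    """Number of PCs needed to explain var_frac of the patch variance of one image."""
    h, w = img.shape
    # Tile the image into non-overlapping p x p patches, each unrolled to length P = p*p.
    patches = [img[i:i + p, j:j + p].ravel()
               for i in range(0, h - p + 1, p)
               for j in range(0, w - p + 1, p)]
    X = np.asarray(patches, dtype=float)   # shape (N_patches, P)
    X -= X.mean(axis=0)                    # center before PCA
    # Eigenspectrum of the P x P covariance via SVD of the centered data matrix:
    # the squared singular values are proportional to the covariance eigenvalues.
    s = np.linalg.svd(X, compute_uv=False)
    explained = np.cumsum(s**2) / np.sum(s**2)
    return int(np.argmax(explained >= var_frac)) + 1
```

A sanity check on this intuition: a smooth vertical gradient image should need far fewer PCs than pixel noise, e.g. `n_pcs_for_variance(np.outer(np.arange(64.0), np.ones(64)))` versus `n_pcs_for_variance(rng.random((64, 64)))`.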

Fractal dimension: http://www.fractalyse.org/enhome.html
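For reference, the basic box-counting estimate behind fractal dimension can be sketched as below (Fractalyse offers more refined estimators; this is just the textbook version, applied to a binarized image, with the box sizes as a free choice):

```python
import numpy as np

def box_counting_dimension(binary_img, sizes=(2, 4, 8, 16, 32)):
    """Estimate fractal dimension as the slope of log N(s) versus log(1/s)."""
    h, w = binary_img.shape
    counts = []
    for s in sizes:
        # Count boxes of side s containing at least one foreground pixel.
        n = sum(binary_img[i:i + s, j:j + s].any()
                for i in range(0, h, s)
                for j in range(0, w, s))
        counts.append(n)
    # Slope of the log-log fit is the dimension estimate.
    slope, _ = np.polyfit(np.log(1.0 / np.asarray(sizes)), np.log(counts), 1)
    return slope
```

As expected, a filled square comes out near 2 and a single straight line near 1; natural images binarized e.g. by edge detection fall somewhere in between.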

“Complexity index”: https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0207879

Image compression: take the ratio of file sizes between a JPEG-compressed copy and the original bitmap image. Because JPEG compression removes redundancies, the larger the JPEG/bitmap ratio, the more complex the image is.
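The compression idea is easy to prototype. The sketch below uses the standard library's zlib as a dependency-free stand-in for the compressor (for the actual JPEG ratio you would instead save the image to an in-memory buffer with Pillow, e.g. `img.save(buf, format="JPEG")`, and compare buffer size to raw size):

```python
import zlib

def compression_complexity(pixel_bytes):
    """Compressed/raw size ratio: closer to 1 means less redundancy, i.e. more complex."""
    return len(zlib.compress(pixel_bytes, 9)) / len(pixel_bytes)
```

A flat image compresses to almost nothing (ratio near 0), while high-entropy pixel data barely compresses at all (ratio near 1). Note that unlike JPEG, zlib is lossless, so it measures statistical redundancy rather than perceptual redundancy; whether that distinction matters for our purposes is something we would want to check.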
Do you know of any other metrics of image complexity/dimensionality that we might be able to test/try out?
Thank you!