I wonder if anyone wants to label the Kay/Gallant dataset by hand. The stimuli in the original dataset don't come with any labels; the labels included with the data were predicted by a fine-tuned ResNet. So if you train a neural network model on these labels, you may just be learning the ResNet's predictions (including its mistakes) rather than the true categories. We can work together to label the dataset.
What we want to do is divide the pictures into 4 categories: animate_animals, animate_human, inanimate_artificial, inanimate_natural. This kind of animate/inanimate classification seems plausible to us.
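To keep everyone's labels consistent and easy to merge, here is a minimal sketch of a labeling helper. It assumes the stimuli can be loaded as a NumPy array of grayscale images (the file name `stimuli.npy`, the output CSV name, and the array shape are all assumptions; adjust them to however you actually load the Kay/Gallant stimuli):

```python
import csv
import numpy as np
import matplotlib.pyplot as plt

# Hypothetical paths -- change to match your local copy of the dataset.
STIMULI_PATH = "stimuli.npy"
OUTPUT_CSV = "hand_labels.csv"

# The 4 agreed-upon categories, mapped to the keys you type.
CATEGORIES = {
    "1": "animate_animals",
    "2": "animate_human",
    "3": "inanimate_artificial",
    "4": "inanimate_natural",
}

def label_stimuli(start=0, end=None):
    """Show each stimulus image and append the typed category to a CSV."""
    stimuli = np.load(STIMULI_PATH)  # assumed shape: (n_images, H, W)
    end = len(stimuli) if end is None else end

    with open(OUTPUT_CSV, "a", newline="") as f:
        writer = csv.writer(f)
        for idx in range(start, end):
            plt.imshow(stimuli[idx], cmap="gray")
            plt.title(f"stimulus {idx}")
            plt.axis("off")
            plt.show(block=False)
            plt.pause(0.1)

            key = input("Category (1-4, or q to quit): ").strip()
            plt.close()
            if key == "q":
                break
            if key in CATEGORIES:
                writer.writerow([idx, CATEGORIES[key]])
            else:
                print("Unrecognized key, skipping this image.")

if __name__ == "__main__":
    label_stimuli()
```

If we split the image range among several people (each calling `label_stimuli(start, end)` on a different slice), the resulting CSVs can simply be concatenated afterwards, and overlapping slices would let us check inter-rater agreement on the ambiguous pictures.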