Steinmetz et al., 2019 dataset questions

Thanks for the quick reply. I have a follow-up question: I saw that in the provided notebook you did PCA on specific time points of the data ([51:130]) but then multiplied W by the original data (all time points). Do you think it makes sense to use weights from PCA (done on trial averages) to project the original single-trial data?

Absolutely, yes. The trial-to-trial variability in each PC could be correlated with something interesting, like reaction-time variability or errors.
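For anyone following along, here is a minimal sketch of that workflow with numpy: fit PCA (via SVD) on the trial-averaged data restricted to a time window like [51:130], then project the original single-trial data onto those weights and relate the per-trial PC scores to behavior. All array names, shapes, and the synthetic data are illustrative assumptions, not the actual notebook code.

```python
import numpy as np

# Hypothetical single-trial data: (n_trials, n_neurons, n_timepoints)
rng = np.random.default_rng(0)
n_trials, n_neurons, n_time = 100, 50, 250
data = rng.poisson(2.0, size=(n_trials, n_neurons, n_time)).astype(float)
rt = rng.uniform(0.2, 0.8, size=n_trials)  # fake reaction times, one per trial

# 1) PCA on the trial average, restricted to a time window (e.g. [51:130])
avg = data.mean(axis=0)                               # (n_neurons, n_time)
window = avg[:, 51:130]
window = window - window.mean(axis=1, keepdims=True)  # center across time
U, S, Vt = np.linalg.svd(window, full_matrices=False)
W = U[:, :3]                                          # weights for top 3 PCs

# 2) Project the ORIGINAL single-trial data (all time points) onto W
proj = np.einsum('nk,tnj->tkj', W, data)              # (n_trials, 3, n_time)

# 3) Relate trial-to-trial variability in a PC to behavior, e.g. reaction time
pc1_score = proj[:, 0, 51:130].mean(axis=1)           # one score per trial
r = np.corrcoef(pc1_score, rt)[0, 1]
```

With real data you would replace the synthetic arrays with your binned spike counts and the session's reaction times; the key point is that W is estimated once from the averages but applied to every trial.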

Thanks, I will try it.

Any updates on this question? Did anyone find out?

Thank you so much @pachitarium
The preprocessed data has been a huge help for our team!

I think those are just channels outside of the brain, or on the edge of the brain. If there are neurons there (there shouldn’t be very many), they are likely noise or there’s something weird with them, or they’re just classified as “other” in Nick’s classification.

Hello! I’m unfamiliar with LFP data processing, and was wondering how the LFP data was extracted from the raw data in this case. I assume a median filter was applied to the raw trace for each Neuropixels channel to remove action potentials, that this median-filtered data was binned into 10 ms bins, and then averaged across channels in the same brain region. As a result, the LFP data here would be representative of roughly 1–100 Hz. Is that right? Or is it actually low-pass filtered raw data binned into 10 ms bins?

This is just low-pass filtered data that I binned at 10 ms. It actually comes off the probe already low-pass filtered, so there is no chance to remove the spikes, but the spikes are very small compared to the LFP anyway, so it shouldn’t matter much.
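In case it helps, binning the low-pass filtered trace at 10 ms just means averaging consecutive samples. Here is a tiny sketch; the 2.5 kHz sampling rate and the fake trace are assumptions for illustration, not taken from the dataset files.

```python
import numpy as np

fs = 2500                                # assumed LFP sampling rate (Hz)
bin_ms = 10
samples_per_bin = fs * bin_ms // 1000    # 25 samples per 10 ms bin

# 2 seconds of fake low-pass filtered LFP for one channel
lfp = np.random.default_rng(1).standard_normal(fs * 2)

# Trim any partial bin at the end, reshape, and average within each bin
n_bins = len(lfp) // samples_per_bin
binned = lfp[:n_bins * samples_per_bin].reshape(n_bins, samples_per_bin).mean(axis=1)
# binned now has one value per 10 ms (200 bins for 2 s of data)
```

Averaging channels within a brain region would then just be a further `mean` over a channels axis before or after this binning step.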

Gotcha. Thank you very much!