Advice for optimizing subcortical signal with phased-array head coils?

Hi – Our imaging center has a 3T Prisma with a 20-channel and a 64-channel head coil. With both coils, there’s a gradient in tSNR/SFNR, with maximal tSNR/SFNR nearest the coil elements (i.e., at the cortical surface). I realize this is the expected behavior of these coils, but I was wondering if anyone had advice for optimizing subcortical signal, especially in the striatum? Unlike the HCP, we would generally be running event-related designs.

We’ve done some initial tests comparing different resolutions (e.g., 2mm vs. 3mm) and different TRs (e.g., 2s vs. 1s). We’ve also done some tests with multiband on (SMS = 3) or off. These initial tests suggest that multiband, faster TRs, and smaller voxels all reduce tSNR/SFNR, but I don’t think tSNR/SFNR metrics are really the whole story here, so I’d be happy to look at any additional metrics.
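For anyone wanting to run the same kind of comparison, voxelwise tSNR is just the temporal mean divided by the temporal standard deviation of the BOLD series. A minimal numpy sketch (the synthetic 4D array stands in for motion-corrected fMRI data you would normally load with, e.g., nibabel; the function name and data are illustrative, not any particular package's API):

```python
import numpy as np

def tsnr(data, axis=-1):
    """Voxelwise temporal SNR: mean over time / std over time."""
    mean = data.mean(axis=axis)
    std = data.std(axis=axis)
    # Guard against divide-by-zero in background voxels
    return np.where(std > 0, mean / np.maximum(std, 1e-12), 0.0)

# Synthetic example: 4D array (x, y, z, time) with mean ~1000, noise SD ~10
rng = np.random.default_rng(0)
data = 1000 + rng.normal(0, 10, size=(4, 4, 4, 200))
tsnr_map = tsnr(data)
print(tsnr_map.mean())  # roughly 1000/10 = 100
```

Comparing mean tSNR within an anatomically defined striatal ROI across protocols (rather than whole-brain) may better reflect the subcortical trade-offs being discussed here.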



Cross linking the relevant twitter conversation:

I recently compared tSNR at 7T with 32-channel and 8-channel coils, and also on a 3T Connectom (both Siemens), using a specific sequence at several voxel sizes.

Here’s a table with tSNR values in the thalamus and inferior colliculi, averaged within an r = 6 mm sphere:
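For reference, averaging within an r = 6 mm sphere can be done by building a boolean mask from voxel-to-center distances in millimeters. A sketch assuming an isotropic voxel grid (the center coordinate, grid size, and placeholder tSNR volume are hypothetical):

```python
import numpy as np

def sphere_mask(shape, center_vox, radius_mm, voxel_size_mm):
    """Boolean mask of voxels within radius_mm of center_vox (voxel indices)."""
    grids = np.ogrid[tuple(slice(0, s) for s in shape)]
    # Squared Euclidean distance in mm, broadcast over the 3D grid
    dist2 = sum(((g - c) * v) ** 2
                for g, c, v in zip(grids, center_vox, voxel_size_mm))
    return dist2 <= radius_mm ** 2

# Example: 6 mm sphere centered in a 2 mm isotropic grid
mask = sphere_mask((20, 20, 20), (10, 10, 10), 6.0, (2.0, 2.0, 2.0))
tsnr_map = np.full((20, 20, 20), 50.0)  # placeholder tSNR volume
roi_tsnr = tsnr_map[mask].mean()
print(roi_tsnr)  # 50.0 for this constant placeholder volume
```

In practice a tool like nilearn's sphere maskers does the same thing in world (scanner) coordinates, which avoids hand-converting between mm and voxel indices.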


Thanks! This is very helpful to see. There’s still some chatter on the twitter link, so I wanted to return to this post and share what we had at this stage. Here’s a link to our OSF page (in progress):

One thing I learned while looking into this issue: many people recommend enabling the pre-scan normalization option with these phased-array head coils.

This does seem to do a good job of making the images look better (more homogeneous), but we had some masking issues in FMRIPREP with those runs, which can be seen in some of the task-rest data for sub-103 and sub-104. The FMRIPREP folks are using those data to make their approach more robust. (Thanks!)