fMRIPrep setup for resting state

Hello Neurostars and experts,

I am new to fMRI and fMRIPrep, and I wanted to get some opinions on the fMRIPrep setup I am using, as well as on my selections in the CONN toolbox, to make sure I am doing this correctly.

The data: resting-state scans from different sites with different voxel sizes and TRs. In some cases the scans come from a single session; in others, resting scans were collected in 2 separate sessions, but regardless of the number of sessions, the total duration of the resting scans is similar.

Because of the different resolutions of the native functional spaces, and the nature of the task (none, i.e., resting state), I used the following options (not all subjects have the data necessary for the fieldmaps option):

--output-spaces MNI152NLin6Asym:res-2 MNI152NLin2009cAsym --use-syn-sdc --fs-no-reconall --ignore fieldmaps slicetiming

I was told that slice-timing correction is not needed for resting-state scans, and because I want to run ROI-to-ROI comparisons defined on a 2mm template consistent with MNI152NLin6Asym, I should resample the functional scans to that space rather than use outputs in native space, whose resolution varies across subjects.

Once the data are preprocessed in fMRIPrep, the following steps and fMRIPrep confounds are used for denoising, before the ROI-to-ROI analysis and between-group comparisons:

* Smoothing: with a 5mm Gaussian kernel
* Realignment
* Scrubbing

fMRIPrep confounds used in denoising:
* aCompCor, tCompCor & their corresponding cosine_XX regressors
* Motion metrics: DVARS, framewise displacement, motion outliers
* Rotational parameters: rot_x, rot_y, rot_z and their derivatives (rot_x_derivative1, etc.)
* Translational parameters: trans_x, trans_y, trans_z and their derivatives (trans_x_derivative1, etc.)
Additionally:
* Default band-pass filter: 0.008-0.09 Hz
* Detrending: linear
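For reference, the confound list above maps onto columns of fMRIPrep's `*_desc-confounds_timeseries.tsv` output. A minimal pandas sketch of the selection (column names follow fMRIPrep's conventions — note the derivative columns are suffixed `_derivative1` — and the tiny table below is synthetic, not real data):

```python
import re
import pandas as pd

def select_confounds(df: pd.DataFrame) -> pd.DataFrame:
    """Pick the regressors listed above from an fMRIPrep confounds table."""
    patterns = [
        r"^a_comp_cor_\d+$", r"^t_comp_cor_\d+$", r"^cosine\d+$",
        r"^framewise_displacement$", r"^std_dvars$",
        r"^motion_outlier\d+$",
        r"^(trans|rot)_[xyz](_derivative1)?$",
    ]
    keep = [c for c in df.columns if any(re.match(p, c) for p in patterns)]
    # Derivative columns are NaN on the first volume; zero-fill so GLM software accepts them.
    return df[keep].fillna(0)

# Synthetic stand-in for sub-01_task-rest_desc-confounds_timeseries.tsv
df = pd.DataFrame({
    "trans_x": [0.1, 0.2], "trans_x_derivative1": [float("nan"), 0.1],
    "rot_z": [0.0, 0.01], "a_comp_cor_00": [0.5, -0.5],
    "global_signal": [1.0, 1.1],  # deliberately excluded from the model
})
sel = select_confounds(df)
```

This keeps only the columns named in the list above and drops anything else (e.g. global signal).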

So far I have been able to troubleshoot any problems related to preprocessing the data, but it would be great if I could get some input and opinions:

Are any of the steps redundant?
Is resampling the functional data to a standard space helpful?
Is the band-pass filter of an appropriate range given the type of the data?
Can you see any red flags that could potentially distort the results?
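On the band-pass question: 0.008-0.09 Hz is a common default range for resting-state connectivity. A minimal numpy sketch (illustrative only, not CONN's actual filter implementation) shows how that band relates to TR and scan length:

```python
import numpy as np

def bandpass(signal: np.ndarray, tr: float, low=0.008, high=0.09) -> np.ndarray:
    """Naive FFT band-pass: zero all frequency bins outside [low, high] Hz."""
    freqs = np.fft.rfftfreq(signal.size, d=tr)
    spectrum = np.fft.rfft(signal - signal.mean())  # demean removes the 0 Hz bin
    spectrum[(freqs < low) | (freqs > high)] = 0.0
    return np.fft.irfft(spectrum, n=signal.size)

# With TR = 2 s, the Nyquist frequency is 0.25 Hz, so 0.008-0.09 Hz sits well
# inside the measurable range; with 200 volumes the lowest resolvable
# frequency is 1 / (200 * 2) = 0.0025 Hz.
tr, n = 2.0, 200
t = np.arange(n) * tr
x = np.sin(2 * np.pi * 0.05 * t) + np.sin(2 * np.pi * 0.2 * t)  # in-band + out-of-band
y = bandpass(x, tr)
```

The 0.05 Hz component survives the filter while the 0.2 Hz component is removed.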

Hi,

I was wondering if you managed to successfully denoise your fMRIPrep data in CONN? Did CONN create realignment and scrubbing files for you even though you did not preprocess with CONN?

In the most recent CONN version you can import directly from fMRIPrep, and it creates realignment and scrubbing 1st-level covariates from the confounds file. You will probably still have to smooth the data, which you can do either within CONN or outside of it, but it is probably easier to do it within CONN.

Thank you Steven! If I were to use only the fMRIPrep-derived confound variables (framewise displacement, aCompCor, the x/y/z translation and rotation parameters), and not the CONN-derived variables (realignment, WM, CSF, and Effects of session), would that be okay? Also, will I have to perform smoothing if I'm not going to be using smoothed data?

It doesn't matter which set you use, and smoothing is optional. Also, you might want to look into XCPEngine as an alternative to CONN, especially when using fMRIPrep outputs. CONN does not import aCompCor correctly: it does not distinguish between the two aCompCor methods fMRIPrep uses and just imports both of them, so you end up regressing out the signal twice.

Thank you Steven! According to CONN's denoising documentation, potential confounding effects are taken from WM and CSF. Does this mean CONN uses WM and CSF in the denoising step to generate CompCor confounds?
In the fMRIPrep confounds output, aCompCor is computed from CSF and WM. If I use denoising in CONN where I include WM and CSF, and remove the aCompCor regressors (generated by fMRIPrep), would that be a feasible alternative, since CONN does not correctly import aCompCor?

The WM and CSF confounds in CONN might be the first 5 principal aCompCor components in the WM and CSF masks, respectively. So if you want to use that, it should be fine. However, a popular aCompCor implementation is to include all principal components that explain 50% of the variance in these masks. fMRIPrep additionally runs aCompCor in a combined WM+CSF mask, which CONN does not separate correctly from the individual WM and CSF CompCor components. So it is up to you, but I personally recommend XCPEngine, since it lets you directly choose a published denoising pipeline without the guesswork.
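If you do want to reproduce the 50%-variance rule from fMRIPrep outputs yourself, the component metadata lives in the JSON sidecar next to the confounds TSV (each a_comp_cor_XX entry carries Mask, VarianceExplained, and CumulativeVarianceExplained fields). A sketch of selecting per-mask components up to 50% cumulative variance while skipping the combined mask; the metadata dict below is synthetic, and in practice you would `json.load` the real sidecar:

```python
import json  # in practice: meta = json.load(open(".../desc-confounds_timeseries.json"))

def acompcor_50pct(meta, masks=("WM", "CSF"), threshold=0.5):
    """Keep aCompCor components from the given masks until each mask's
    cumulative explained variance reaches the threshold."""
    keep = []
    for name in sorted(meta):
        if not name.startswith("a_comp_cor_"):
            continue
        info = meta[name]
        if info.get("Mask") not in masks:
            continue  # skip fMRIPrep's combined WM+CSF components
        # Include a component if cumulative variance *before* it was below threshold
        if info["CumulativeVarianceExplained"] - info["VarianceExplained"] < threshold:
            keep.append(name)
    return keep

# Synthetic metadata standing in for the real sidecar
meta = {
    "a_comp_cor_00": {"Mask": "WM", "VarianceExplained": 0.30, "CumulativeVarianceExplained": 0.30},
    "a_comp_cor_01": {"Mask": "WM", "VarianceExplained": 0.25, "CumulativeVarianceExplained": 0.55},
    "a_comp_cor_02": {"Mask": "WM", "VarianceExplained": 0.10, "CumulativeVarianceExplained": 0.65},
    "a_comp_cor_03": {"Mask": "combined", "VarianceExplained": 0.40, "CumulativeVarianceExplained": 0.40},
    "a_comp_cor_04": {"Mask": "CSF", "VarianceExplained": 0.60, "CumulativeVarianceExplained": 0.60},
}
selected = acompcor_50pct(meta)
```

Here the third WM component is dropped (the first two already reach 50%) and the combined-mask component is excluded entirely, avoiding the double regression described above.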

Thank you so much, this is really helpful! I’ll check out XCPEngine!

Hi! In CONN, after denoising, I successfully conducted the 1st-level analysis. But when I try to conduct the second-level ROI-to-ROI analysis, the error 'Index exceeds array bounds' pops up. My seed-to-voxel analysis works; it's only the ROI-to-ROI that doesn't. I was wondering if you know what this error means? This is the error:

Index exceeds array bounds.

Error in conn_process (line 5172)
newneffects=CONN_x.Results.saved.nsubjecteffects{ncontrast};

Error in conn_process (line 61)
case 'results_roi', [varargout{1:nargout}]=conn_process(17,varargin{:});

Error in conn (line 9522)
CONN_h.menus.m_results.roiresults=conn_process('results_ROI',CONN_x.Results.xX.nsources,CONN_x.Results.xX.csources);

Error in conn (line 7636)
else conn gui_results_r2r;

Error in conn_menumanager (line 121)
feval(CONN_MM.MENU{n0}.callback{n1}{1},CONN_MM.MENU{n0}.callback{n1}{2:end});

Hi,

Unfortunately, without having worked on the project, these error messages are not enough for me to pinpoint the problem. My guess would be that some data are missing or have not been run through the full pipeline. It could be a subject, a condition, an ROI, or some combination thereof, so I would make sure that everything has been processed.

Best,
Steven

Hi Steven,

Thank you, I updated CONN to the latest version and that resolved the issue. I was wondering if you know about CONN's output variables in MATLAB (2nd-level cluster analysis)?

There's a file called 'Summary' in the ROIs folder in MATLAB; that file contains T stats, uncorrected p values, and FDR-corrected p values. In some threads it was mentioned that these values correspond to the 2nd-level parametric analysis; however, upon manually looking through the results, my Summary stats correspond to my non-parametric analysis. Is this possible (perhaps they have made changes to the toolbox's outputs)?
I was also wondering, for cluster analysis (parametric: FNC) versus (non-parametric: permutation), do the T stat and uncorrected p values change, or does only the FDR-corrected p value change?