Is it possible to specify a target EPI volume to realign to (motion correct to) in fmriprep?

Use case:

We usually perform single-voxel analyses in the subject’s EPI space. We also usually pool data across runs/sessions.

When visualizing results we transform them to the subject’s T1w.

We do this by:
1) Aligning all volumes to a single target EPI volume (for example, the first volume of the first run, which I think is consistent with SPM)
2) Estimating a single transformation from the target EPI volume to the subject’s T1w. This transformation is applied when visualizing the results.
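For concreteness, here is a minimal sketch of step 1 using FSL MCFLIRT through nipype (this tool choice and the file names are made up for illustration, not necessarily our exact implementation):

```python
# Sketch of step 1: realign every volume of a run to one fixed target EPI
# volume. File names are placeholders.
from nipype.interfaces import fsl

mcflirt = fsl.MCFLIRT()
mcflirt.inputs.in_file = "sub-001_ses-1_run-2_bold.nii.gz"
# The target is the first volume of the first run, extracted beforehand
# (e.g., fslroi first_run_bold.nii.gz target_vol0.nii.gz 0 1).
mcflirt.inputs.ref_file = "target_vol0.nii.gz"
mcflirt.inputs.save_plots = True  # also save the motion parameters
result = mcflirt.run()
```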

I think our approach is similar to the second use case mentioned here:
Fmriprep and native EPI space.

My (potentially incorrect) understanding is that fmriprep:
1) aligns each run to some type of mean image,
2) transforms each run to the desired output space (T1w, fsaverage; or, if using the ‘func’ option, the output is in the ‘native’ EPI space but not aligned across sessions/runs).

I have done some quick comparisons between fMRIPrep (output in T1w) and our existing pipeline and found that fMRIPrep indeed yields a lower signal-to-noise ratio (estimated as response reliability). Happy to share more details about this as/if required.

We think that maybe the additional inter-run alignments in fMRIPrep hurt the signal-to-noise ratio, though we cannot rule out something else.

I know that there are some related things in the works (https://github.com/poldracklab/fmriprep/issues/1604, https://github.com/poldracklab/fmriprep/issues/620, https://github.com/poldracklab/fmriprep/issues/1294), but I can’t make out whether these fixes will provide that sort of flexibility.

All in all, I’ve found fMRIPrep really excellent and, aside from the reduced signal-to-noise ratio, much better than my current processes. It’d be great if this extra alignment option were available.

Cheers and apologies if I missed something obvious.

Dror

This is correct.

I would love to see these results; it could make a good case for us to push towards an EPI-norm style of workflow (as you referenced with one of your links).

I would bet against you here :smiley:

Thanks for the nice words, looking forward to maxing out the priority of the EPI alignment.

No need, your feedback is really necessary to make fMRIPrep better. Thank you!


Cheers for getting back to me, @oesteban.

One easy way to try to tease out the effect of the extra alignments on the signal-to-noise ratio is to organize all the raw data as if it came from one run in one session. That way there would be no inter-run alignments, so it should be more similar to our existing pipeline. I’d like to give that a go.

I wanted to check with you whether this would break any fMRIPrep rules. Would you trust data processed in this way?

Thanks,
Dror

I think the approach is pretty smart, although it won’t save you from session effects: the grand mean of each run will be different, and some larger motion is expected between runs. These two factors may lead FSL MCFLIRT (the tool we use for head-motion estimation and correction) to fail.

That said, I think that is the best first option to evaluate this. In principle, it shouldn’t break fMRIPrep, although you will end up with a humongous time series - be ready to allocate a lot of RAM.
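If you do try it, something like per-run grand-mean scaling before you concatenate might mitigate the first issue. A purely illustrative sketch (not anything fMRIPrep does for you):

```python
# Illustrative only: rescale each run to a common grand median before
# concatenation, to reduce run-to-run global intensity offsets.
import nibabel as nib
import numpy as np

def scale_run(path, target=10000.0):
    img = nib.load(path)
    data = img.get_fdata()
    factor = target / np.median(data[data > 0])  # crude global scaling
    return nib.Nifti1Image(data * factor, img.affine, img.header)
```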

I have finally had a chance to try to run this pseudo-single-run experiment, but I’ve run into some errors.

Briefly, I merge runs from different sessions using dcm2niix + nilearn.image.concat_imgs(). I then try to run fMRIPrep v1.5.0 on this pseudo-single-run:
fmriprep-docker $Fmri_preproc_in_folder $Fmri_preproc_out_folder participant --participant-label $Subj --fs-license-file $Fs_lic_file -w $work_dir -v --nthreads 2 --omp-nthreads 4 --mem-mb 14000 --low-mem --output-spaces func anat MNI152NLin2009cAsym fsaverage5
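(For reference, the concatenation step is essentially the sketch below; paths are examples.)

```python
# Build the pseudo-single-run by concatenating all runs along time.
from nilearn.image import concat_imgs

runs = [
    "ses-1/func/sub-001_ses-1_task-countback_run-1_bold.nii.gz",
    "ses-1/func/sub-001_ses-1_task-countback_run-2_bold.nii.gz",
    "ses-2/func/sub-001_ses-2_task-countback_run-1_bold.nii.gz",
]
merged = concat_imgs(runs)  # concatenates along the 4th (time) axis
merged.to_filename("sub-001_task-countback_bold.nii.gz")
```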

The anatomical workflow seems to run fine, as per the reports, but then:


191001-15:03:17,731 nipype.workflow INFO:
[MultiProc] Running 0 tasks, and 4 jobs ready. Free memory (GB): 13.67/13.67, Free processors: 2/2.
191001-15:03:17,952 nipype.workflow INFO:
[Node] Setting-up “fmriprep_wf.single_subject_001_wf.anat_preproc_wf.surface_recon_wf.segs_to_native_aparc_aseg.tonative” in “/scratch/fmriprep_wf/single_subject_001_wf/anat_preproc_wf/surface_recon_wf/segs_to_native_aparc_aseg/tonative”.
191001-15:03:18,38 nipype.workflow INFO:
[Node] Setting-up “fmriprep_wf.single_subject_001_wf.anat_preproc_wf.surface_recon_wf.segs_to_native_aseg.fs_datasource” in “/scratch/fmriprep_wf/single_subject_001_wf/anat_preproc_wf/surface_recon_wf/segs_to_native_aseg/fs_datasource”.
191001-15:03:18,100 nipype.workflow WARNING:
[Node] Error on “fmriprep_wf.single_subject_001_wf.anat_preproc_wf.surface_recon_wf.segs_to_native_aparc_aseg.tonative” (/scratch/fmriprep_wf/single_subject_001_wf/anat_preproc_wf/surface_recon_wf/segs_to_native_aparc_aseg/tonative)
191001-15:03:18,108 nipype.workflow INFO:
[Node] Running “fs_datasource” (“nipype.interfaces.io.FreeSurferSource”)
191001-15:03:18,560 nipype.workflow INFO:
[Node] Finished “fmriprep_wf.single_subject_001_wf.anat_preproc_wf.surface_recon_wf.segs_to_native_aseg.fs_datasource”.
191001-15:03:19,743 nipype.workflow ERROR:
Node tonative failed to run on host e4249d3280ab.
191001-15:03:19,766 nipype.workflow ERROR:
Saving crash info to /out/fmriprep/sub-001/log/20191001-082932_f258cbae-61f6-4819-9b13-e7b52a43625e/crash-20191001-150319-root-tonative-e34bb60a-594f-4333-8d76-3d654d9adad7.txt
Traceback (most recent call last):
File “/usr/local/miniconda/lib/python3.7/site-packages/nipype/pipeline/plugins/multiproc.py”, line 69, in run_node
result[‘result’] = node.run(updatehash=updatehash)
File “/usr/local/miniconda/lib/python3.7/site-packages/nipype/pipeline/engine/nodes.py”, line 473, in run
result = self._run_interface(execute=True)
File “/usr/local/miniconda/lib/python3.7/site-packages/nipype/pipeline/engine/nodes.py”, line 564, in _run_interface
return self._run_command(execute)
File “/usr/local/miniconda/lib/python3.7/site-packages/nipype/pipeline/engine/nodes.py”, line 627, in _run_command
self._copyfiles_to_wd(execute=execute)
File “/usr/local/miniconda/lib/python3.7/site-packages/nipype/pipeline/engine/nodes.py”, line 711, in _copyfiles_to_wd
infiles, [outdir], copy=info[‘copy’], create_new=True)
File “/usr/local/miniconda/lib/python3.7/site-packages/nipype/utils/filemanip.py”, line 590, in copyfiles
destfile = copyfile(f, destfile, copy, create_new=create_new)
File “/usr/local/miniconda/lib/python3.7/site-packages/nipype/utils/filemanip.py”, line 513, in copyfile
shutil.copyfile(originalfile, newfile)
File “/usr/local/miniconda/lib/python3.7/shutil.py”, line 121, in copyfile
with open(dst, ‘wb’) as fdst:
FileExistsError: [Errno 17] File exists: ‘/scratch/fmriprep_wf/single_subject_001_wf/anat_preproc_wf/surface_recon_wf/segs_to_native_aparc_aseg/tonative/aparc+aseg.mgz’

191001-15:03:19,790 nipype.workflow INFO:
[Job 149] Completed (fmriprep_wf.single_subject_001_wf.anat_preproc_wf.surface_recon_wf.segs_to_native_aseg.fs_datasource).
191001-15:03:19,793 nipype.workflow INFO:
[MultiProc] Running 0 tasks, and 3 jobs ready. Free memory (GB): 13.67/13.67, Free processors: 2/2.
191001-15:03:19,938 nipype.workflow INFO:
[Node] Setting-up “fmriprep_wf.single_subject_001_wf.anat_preproc_wf.surface_recon_wf.segs_to_native_aseg.tonative” in “/scratch/fmriprep_wf/single_subject_001_wf/anat_preproc_wf/surface_recon_wf/segs_to_native_aseg/tonative”.
191001-15:03:19,959 nipype.workflow INFO:
[Node] Setting-up “fmriprep_wf.single_subject_001_wf.anat_preproc_wf.surface_recon_wf.gifti_surface_wf.get_surfaces” in “/scratch/fmriprep_wf/single_subject_001_wf/anat_preproc_wf/surface_recon_wf/gifti_surface_wf/get_surfaces”.
191001-15:03:20,0 nipype.workflow WARNING:
[Node] Error on “fmriprep_wf.single_subject_001_wf.anat_preproc_wf.surface_recon_wf.segs_to_native_aseg.tonative” (/scratch/fmriprep_wf/single_subject_001_wf/anat_preproc_wf/surface_recon_wf/segs_to_native_aseg/tonative)
191001-15:03:20,3 nipype.workflow INFO:
[Node] Running “get_surfaces” (“nipype.interfaces.io.FreeSurferSource”)
191001-15:03:20,497 nipype.workflow INFO:
[Node] Finished “fmriprep_wf.single_subject_001_wf.anat_preproc_wf.surface_recon_wf.gifti_surface_wf.get_surfaces”.
191001-15:03:21,748 nipype.workflow ERROR:
Node tonative failed to run on host e4249d3280ab.
191001-15:03:21,769 nipype.workflow ERROR:
Saving crash info to /out/fmriprep/sub-001/log/20191001-082932_f258cbae-61f6-4819-9b13-e7b52a43625e/crash-20191001-150321-root-tonative-8ad25fd8-b253-45ef-b402-a07feec239eb.txt
Traceback (most recent call last):
File “/usr/local/miniconda/lib/python3.7/site-packages/nipype/pipeline/plugins/multiproc.py”, line 69, in run_node
result[‘result’] = node.run(updatehash=updatehash)
File “/usr/local/miniconda/lib/python3.7/site-packages/nipype/pipeline/engine/nodes.py”, line 473, in run
result = self._run_interface(execute=True)
File “/usr/local/miniconda/lib/python3.7/site-packages/nipype/pipeline/engine/nodes.py”, line 564, in _run_interface
return self._run_command(execute)
File “/usr/local/miniconda/lib/python3.7/site-packages/nipype/pipeline/engine/nodes.py”, line 627, in _run_command
self._copyfiles_to_wd(execute=execute)
File “/usr/local/miniconda/lib/python3.7/site-packages/nipype/pipeline/engine/nodes.py”, line 711, in _copyfiles_to_wd
infiles, [outdir], copy=info[‘copy’], create_new=True)
File “/usr/local/miniconda/lib/python3.7/site-packages/nipype/utils/filemanip.py”, line 590, in copyfiles
destfile = copyfile(f, destfile, copy, create_new=create_new)
File “/usr/local/miniconda/lib/python3.7/site-packages/nipype/utils/filemanip.py”, line 513, in copyfile
shutil.copyfile(originalfile, newfile)
File “/usr/local/miniconda/lib/python3.7/shutil.py”, line 121, in copyfile
with open(dst, ‘wb’) as fdst:
FileExistsError: [Errno 17] File exists: ‘/scratch/fmriprep_wf/single_subject_001_wf/anat_preproc_wf/surface_recon_wf/segs_to_native_aseg/tonative/aseg.mgz’

191001-15:03:21,798 nipype.workflow INFO:
[Job 288] Completed (fmriprep_wf.single_subject_001_wf.anat_preproc_wf.surface_recon_wf.gifti_surface_wf.get_surfaces).
191001-15:03:21,801 nipype.workflow INFO:
[MultiProc] Running 0 tasks, and 2 jobs ready. Free memory (GB): 13.67/13.67, Free processors: 2/2.
191001-15:03:22,6 nipype.workflow INFO:
[Node] Setting-up “fmriprep_wf.single_subject_001_wf.anat_preproc_wf.surface_recon_wf.autorecon_resume_wf.recon_report” in “/scratch/fmriprep_wf/single_subject_001_wf/anat_preproc_wf/surface_recon_wf/autorecon_resume_wf/recon_report”.
191001-15:03:22,146 nipype.interface INFO:
recon-all complete : Not running
191001-15:03:22,154 nipype.workflow INFO:
[Node] Running “recon_report” (“niworkflows.interfaces.segmentation.ReconAllRPT”), a CommandLine Interface with command:
echo recon-all: nothing to do
191001-15:03:22,256 nipype.interface INFO:
recon-all complete : Not running
191001-15:03:23,746 nipype.workflow INFO:
[MultiProc] Running 1 tasks, and 2 jobs ready. Free memory (GB): 8.67/13.67, Free processors: 1/2.
Currently running:
* fmriprep_wf.single_subject_001_wf.anat_preproc_wf.surface_recon_wf.autorecon_resume_wf.recon_report
191001-15:03:23,840 nipype.workflow INFO:
[Node] Setting-up “_midthickness0” in “/scratch/fmriprep_wf/single_subject_001_wf/anat_preproc_wf/surface_recon_wf/gifti_surface_wf/midthickness/mapflow/_midthickness0”.
191001-15:03:23,949 nipype.workflow WARNING:
[Node] Error on “_midthickness0” (/scratch/fmriprep_wf/single_subject_001_wf/anat_preproc_wf/surface_recon_wf/gifti_surface_wf/midthickness/mapflow/_midthickness0)
191001-15:03:25,758 nipype.workflow ERROR:
Node _midthickness0 failed to run on host e4249d3280ab.
191001-15:03:25,787 nipype.workflow ERROR:
Saving crash info to /out/fmriprep/sub-001/log/20191001-082932_f258cbae-61f6-4819-9b13-e7b52a43625e/crash-20191001-150325-root-_midthickness0-3a24557a-54b6-4453-971b-71e9ed417d82.txt
Traceback (most recent call last):
File “/usr/local/miniconda/lib/python3.7/site-packages/nipype/pipeline/plugins/multiproc.py”, line 69, in run_node
result[‘result’] = node.run(updatehash=updatehash)
File “/usr/local/miniconda/lib/python3.7/site-packages/nipype/pipeline/engine/nodes.py”, line 473, in run
result = self._run_interface(execute=True)
File “/usr/local/miniconda/lib/python3.7/site-packages/nipype/pipeline/engine/nodes.py”, line 564, in _run_interface
return self._run_command(execute)
File “/usr/local/miniconda/lib/python3.7/site-packages/nipype/pipeline/engine/nodes.py”, line 627, in _run_command
self._copyfiles_to_wd(execute=execute)
File “/usr/local/miniconda/lib/python3.7/site-packages/nipype/pipeline/engine/nodes.py”, line 711, in _copyfiles_to_wd
infiles, [outdir], copy=info[‘copy’], create_new=True)
File “/usr/local/miniconda/lib/python3.7/site-packages/nipype/utils/filemanip.py”, line 590, in copyfiles
destfile = copyfile(f, destfile, copy, create_new=create_new)
File “/usr/local/miniconda/lib/python3.7/site-packages/nipype/utils/filemanip.py”, line 513, in copyfile
shutil.copyfile(originalfile, newfile)
File “/usr/local/miniconda/lib/python3.7/shutil.py”, line 121, in copyfile
with open(dst, ‘wb’) as fdst:
FileExistsError: [Errno 17] File exists: ‘/scratch/fmriprep_wf/single_subject_001_wf/anat_preproc_wf/surface_recon_wf/gifti_surface_wf/midthickness/mapflow/_midthickness0/lh.smoothwm’

191001-15:03:25,815 nipype.workflow INFO:
[MultiProc] Running 1 tasks, and 1 jobs ready. Free memory (GB): 8.67/13.67, Free processors: 1/2.
Currently running:
* fmriprep_wf.single_subject_001_wf.anat_preproc_wf.surface_recon_wf.autorecon_resume_wf.recon_report
191001-15:03:25,913 nipype.workflow INFO:
[Node] Setting-up “_midthickness1” in “/scratch/fmriprep_wf/single_subject_001_wf/anat_preproc_wf/surface_recon_wf/gifti_surface_wf/midthickness/mapflow/_midthickness1”.
191001-15:03:26,88 nipype.workflow WARNING:
[Node] Error on “_midthickness1” (/scratch/fmriprep_wf/single_subject_001_wf/anat_preproc_wf/surface_recon_wf/gifti_surface_wf/midthickness/mapflow/_midthickness1)
191001-15:03:27,740 nipype.workflow ERROR:
Node _midthickness1 failed to run on host e4249d3280ab.
191001-15:03:27,764 nipype.workflow ERROR:
Saving crash info to /out/fmriprep/sub-001/log/20191001-082932_f258cbae-61f6-4819-9b13-e7b52a43625e/crash-20191001-150327-root-_midthickness1-8ba3996a-0b98-4c64-9125-09f5ccae90ec.txt
Traceback (most recent call last):
File “/usr/local/miniconda/lib/python3.7/site-packages/nipype/utils/filemanip.py”, line 513, in copyfile
shutil.copyfile(originalfile, newfile)
File “/usr/local/miniconda/lib/python3.7/shutil.py”, line 104, in copyfile
raise SameFileError("{!r} and {!r} are the same file".format(src, dst))
shutil.SameFileError: ‘/out/freesurfer/sub-001/surf/rh.smoothwm’ and ‘/scratch/fmriprep_wf/single_subject_001_wf/anat_preproc_wf/surface_recon_wf/gifti_surface_wf/midthickness/mapflow/_midthickness1/rh.smoothwm’ are the same file

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File “/usr/local/miniconda/lib/python3.7/site-packages/nipype/pipeline/plugins/multiproc.py”, line 69, in run_node
result[‘result’] = node.run(updatehash=updatehash)
File “/usr/local/miniconda/lib/python3.7/site-packages/nipype/pipeline/engine/nodes.py”, line 473, in run
result = self._run_interface(execute=True)
File “/usr/local/miniconda/lib/python3.7/site-packages/nipype/pipeline/engine/nodes.py”, line 564, in _run_interface
return self._run_command(execute)
File “/usr/local/miniconda/lib/python3.7/site-packages/nipype/pipeline/engine/nodes.py”, line 627, in _run_command
self._copyfiles_to_wd(execute=execute)
File “/usr/local/miniconda/lib/python3.7/site-packages/nipype/pipeline/engine/nodes.py”, line 711, in _copyfiles_to_wd
infiles, [outdir], copy=info[‘copy’], create_new=True)
File “/usr/local/miniconda/lib/python3.7/site-packages/nipype/utils/filemanip.py”, line 590, in copyfiles
destfile = copyfile(f, destfile, copy, create_new=create_new)
File “/usr/local/miniconda/lib/python3.7/site-packages/nipype/utils/filemanip.py”, line 515, in copyfile
fmlogger.warning(e.message)
AttributeError: ‘SameFileError’ object has no attribute ‘message’

191001-15:03:27,795 nipype.workflow INFO:
[MultiProc] Running 1 tasks, and 0 jobs ready. Free memory (GB): 8.67/13.67, Free processors: 1/2.
Currently running:
* fmriprep_wf.single_subject_001_wf.anat_preproc_wf.surface_recon_wf.autorecon_resume_wf.recon_report
191001-15:03:29,886 nipype.workflow INFO:
[Node] Finished “fmriprep_wf.single_subject_001_wf.anat_preproc_wf.surface_recon_wf.autorecon_resume_wf.recon_report”.
191001-15:03:31,736 nipype.workflow INFO:
[Job 296] Completed (fmriprep_wf.single_subject_001_wf.anat_preproc_wf.surface_recon_wf.autorecon_resume_wf.recon_report).
191001-15:03:31,739 nipype.workflow INFO:
[MultiProc] Running 0 tasks, and 1 jobs ready. Free memory (GB): 13.67/13.67, Free processors: 2/2.
191001-15:03:31,872 nipype.workflow INFO:
[Node] Setting-up “fmriprep_wf.single_subject_001_wf.anat_preproc_wf.anat_reports_wf.ds_recon_report” in “/scratch/fmriprep_wf/single_subject_001_wf/anat_preproc_wf/anat_reports_wf/ds_recon_report”.
191001-15:03:31,988 nipype.workflow INFO:
[Node] Running “ds_recon_report” (“smriprep.interfaces.DerivativesDataSink”)
191001-15:03:32,189 nipype.workflow INFO:
[Node] Finished “fmriprep_wf.single_subject_001_wf.anat_preproc_wf.anat_reports_wf.ds_recon_report”.
191001-15:03:32,190 nipype.workflow INFO:
[Job 297] Completed (fmriprep_wf.single_subject_001_wf.anat_preproc_wf.anat_reports_wf.ds_recon_report).
191001-15:03:33,740 nipype.workflow INFO:
***********************************
191001-15:03:33,741 nipype.workflow ERROR:
could not run node: fmriprep_wf.single_subject_001_wf.func_preproc_ses_1_task_countback_acq_TR1000_run_1_wf.bold_stc_wf.slice_timing_correction
191001-15:03:33,760 nipype.workflow INFO:
crashfile: /out/fmriprep/sub-001/log/20191001-082932_f258cbae-61f6-4819-9b13-e7b52a43625e/crash-20191001-083059-root-slice_timing_correction-1fcc7bfe-2dfa-445c-95f1-7431666b09f4.txt
191001-15:03:33,761 nipype.workflow ERROR:
could not run node: fmriprep_wf.single_subject_001_wf.func_preproc_ses_1_task_countback_acq_TR1000_run_1_wf.bold_reference_wf.enhance_and_skullstrip_bold_wf.unifize
191001-15:03:33,775 nipype.workflow INFO:
crashfile: /out/fmriprep/sub-001/log/20191001-082932_f258cbae-61f6-4819-9b13-e7b52a43625e/crash-20191001-090311-root-unifize-3c96763b-d97b-4585-882a-54704a7405b8.txt
191001-15:03:33,775 nipype.workflow ERROR:
could not run node: fmriprep_wf.single_subject_001_wf.anat_preproc_wf.surface_recon_wf.segs_to_native_aparc_aseg.tonative
191001-15:03:33,792 nipype.workflow INFO:
crashfile: /out/fmriprep/sub-001/log/20191001-082932_f258cbae-61f6-4819-9b13-e7b52a43625e/crash-20191001-150319-root-tonative-e34bb60a-594f-4333-8d76-3d654d9adad7.txt
191001-15:03:33,793 nipype.workflow ERROR:
could not run node: fmriprep_wf.single_subject_001_wf.anat_preproc_wf.surface_recon_wf.segs_to_native_aseg.tonative
191001-15:03:33,807 nipype.workflow INFO:
crashfile: /out/fmriprep/sub-001/log/20191001-082932_f258cbae-61f6-4819-9b13-e7b52a43625e/crash-20191001-150321-root-tonative-8ad25fd8-b253-45ef-b402-a07feec239eb.txt
191001-15:03:33,807 nipype.workflow ERROR:
could not run node: fmriprep_wf.single_subject_001_wf.anat_preproc_wf.surface_recon_wf.gifti_surface_wf.midthickness
191001-15:03:33,821 nipype.workflow INFO:
crashfile: /out/fmriprep/sub-001/log/20191001-082932_f258cbae-61f6-4819-9b13-e7b52a43625e/crash-20191001-150325-root-_midthickness0-3a24557a-54b6-4453-971b-71e9ed417d82.txt
191001-15:03:33,821 nipype.workflow ERROR:
could not run node: fmriprep_wf.single_subject_001_wf.anat_preproc_wf.surface_recon_wf.gifti_surface_wf.midthickness
191001-15:03:33,836 nipype.workflow INFO:
crashfile: /out/fmriprep/sub-001/log/20191001-082932_f258cbae-61f6-4819-9b13-e7b52a43625e/crash-20191001-150327-root-_midthickness1-8ba3996a-0b98-4c64-9125-09f5ccae90ec.txt
191001-15:03:33,836 nipype.workflow INFO:
***********************************
fMRIPrep failed: Workflow did not execute cleanly. Check log for details
Preprocessing did not finish successfully. Errors occurred while processing data from participants: 001 (6). Check the HTML reports for details.


In there, I see that there are issues with the slice-timing correction. One potential problem is that the JSON sidecar I use for the pseudo-single-run is based on only one of the runs, so the information in it is not correct. @oesteban, does fMRIPrep use that JSON to inform the preprocessing?
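If the sidecar is the culprit, I assume one workaround would be to drop the stale SliceTiming field (or run with --ignore slicetiming) so that slice-timing correction is skipped for the merged series. Something like:

```python
# Assumed workaround (not verified): remove the stale SliceTiming entry so
# fMRIPrep skips slice-timing correction for the merged series.
import json

sidecar = "sub-001_task-countback_bold.json"  # hypothetical path
with open(sidecar) as f:
    meta = json.load(f)
meta.pop("SliceTiming", None)
with open(sidecar, "w") as f:
    json.dump(meta, f, indent=2)
```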

As a side note, a different group (Sessions misaligned after fmriprep) also seems to think that fMRIPrep may not be optimal when the goal is the best within-subject, inter-session alignment.

Hello,

Given the fast pace of fMRIPrep development, I thought I’d check back in on this.

  1. First, in the output-spaces doc I see “… Standard spaces may be specified by the form <TEMPLATE>[:res-<resolution>][:cohort-<label>][...] , where <TEMPLATE> is a keyword (valid keywords: “MNI152Lin”, “MNI152NLin2009cAsym”, “MNI152NLin6Asym”, “MNI152NLin6Sym”, “MNIInfant”, “MNIPediatricAsym”, “NKI”, “OASIS30ANTs”, “PNC”, “fsLR”, “fsaverage”) or path pointing to a user-supplied template, and may be followed by …”
    Does that mean we can specify, say, the first EPI volume of the first run and thus get all the sessions aligned in the subject’s EPI space?

  2. Alternatively, could we abuse the sbref option for this purpose? That is, pass (for example) the first multiband EPI volume of the first run as the target, thus getting all the runs and sessions aligned to the same EPI image? (A rough, untested sketch of what I mean follows below.)
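Purely to illustrate what I mean by abusing the SBRef mechanism (completely untested, names hypothetical, and each copied file would presumably also need its own JSON sidecar):

```python
# Untested illustration: give every run the same _sbref so that motion
# correction targets a single EPI image across runs and sessions.
import shutil
from pathlib import Path

sub = Path("/data/bids/sub-001")
target = sub / "ses-1" / "func" / "sub-001_ses-1_task-countback_run-1_sbref.nii.gz"
for bold in sub.glob("ses-*/func/*_bold.nii.gz"):
    sbref = bold.with_name(bold.name.replace("_bold.nii.gz", "_sbref.nii.gz"))
    if sbref != target:
        shutil.copyfile(target, sbref)
```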

Cheers!

Could you submit an issue to the repo sketching your proposal?

You mean for the sbref-abuse option? The first option won’t work?

Hi @12552
I’m struggling with some of the same issues that you describe here. I was curious whether you had any success with either of the two approaches you suggested?

Hi,
At the moment, I (1) use fMRIPrep to get the preprocessed BOLD in native space for each run and session, then (2) realign these to a single reference (the fMRIPrep reference image from the first run and session).

The second step is not ideal but works ok for now.
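Roughly, step (2) is a rigid-body FLIRT from each run’s boldref to the boldref of the first run, then applying the matrix to the preprocessed series. A schematic with made-up file names:

```python
# Schematic of step (2): register one run's fMRIPrep boldref to the
# reference boldref, then apply the resulting matrix to the full series.
from nipype.interfaces import fsl

ref = "sub-001_ses-1_run-1_boldref.nii.gz"

est = fsl.FLIRT(
    in_file="sub-001_ses-2_run-1_boldref.nii.gz",
    reference=ref,
    dof=6,  # rigid body
    out_matrix_file="ses-2_run-1_to_ref.mat",
)
est.run()

xfm = fsl.ApplyXFM(
    in_file="sub-001_ses-2_run-1_desc-preproc_bold.nii.gz",
    reference=ref,
    in_matrix_file="ses-2_run-1_to_ref.mat",
    apply_xfm=True,
    out_file="ses-2_run-1_bold_in_ref.nii.gz",
)
xfm.run()
```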

Apparently, the SBRef option can be used, but it is a bit more work and I haven’t tried it yet. See this post (fMRIPrep coregistration within sessions?).

Hope that helps

Great, thanks, I’ll try something like that.

Hello! It’s been a few years, but we are having some similar questions. Is there a different/better way to implement this in fMRIPrep now? Basically, we just want our various runs/sessions to be better aligned with each other.

Hi,

To my knowledge, the situation has not evolved within fMRIPrep with respect to the possibility of inter-run registration.
Just one note, though: it was recently noticed that the SBRef images, if present and not ignored, are only used as the target for motion correction and not for realignment with the T1w image: