Reproducibility of fmriprep + ciftify-toolbox on same computer

The tag latest, as the name suggests, usually points to the most recent version of the software. When the fmriprep team updates the software, it updates the poldracklab/fmriprep:latest image. That means that if you pulled the image a month ago and pull it again today, you can get a different image with a different version of the software. If you want to be sure that you are using the same version, use an image with a more precise tag, e.g. 1.0.8 (see the list of tags on Docker Hub).

But I do not know whether the difference you see can be explained by a different version of the fmriprep image; this is just a general comment.


I understand!

I ran both analyses at the same time…

But your suggestion is important.
Thank you for your comment!

Might be worth dissociating FMRIPREP and ciftify effects. Compare outputs of FMRIPREP run twice on the same data. Pick one output of FMRIPREP and run ciftify on it twice and compare results.
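One lightweight way to run that comparison is to checksum every file in the two output trees and diff the listings. A minimal sketch — out_run1/ and out_run2/ are hypothetical placeholders for the two fmriprep (or ciftify) output directories, and the first three lines only fabricate demo content so the snippet runs standalone:

```shell
# out_run1/ and out_run2/ are hypothetical stand-ins for two output trees;
# the next three lines only create demo content so the snippet runs standalone.
mkdir -p out_run1 out_run2
echo "surface data" > out_run1/lh.white
echo "surface data" > out_run2/lh.white

# Checksum every file in each tree, then diff the sorted listings.
( cd out_run1 && find . -type f -exec md5sum {} + | sort -k 2 ) > run1.md5
( cd out_run2 && find . -type f -exec md5sum {} + | sort -k 2 ) > run2.md5
if diff run1.md5 run2.md5 > /dev/null; then
  echo "runs are bitwise identical"
else
  echo "runs differ; run 'diff run1.md5 run2.md5' to list the changed files"
fi
```

Any line that appears in the diff is a file whose bytes changed between runs; note that compressed outputs can differ in container metadata even when the data arrays are identical.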

Hi Chris,
Thank you for your comment!
That’s a good idea!
I’ll try it.

These are the outputs (surface reconstruction) of FMRIPREP run twice on the same data (sub-001 and sub-150).


The results are almost the same, but slightly different from each other.

These are the outputs (MNI_LM) of ciftify-toolbox run twice on the same two datasets separately (sub-001 twice and sub-150 twice).

Result of sub-001

Results of sub-150

The results are also slightly different, but almost the same as each other.

In summary, the big difference between results (the difference between sub-001 and sub-150 after ciftify-toolbox; the position of the red and blue brain regions) may be caused by fmriprep (maybe the recon-all step?).

Thanks - let's dig more into this:

  1. What are the exact command lines you used to run the two fmriprep analyses on data from the same subject?
  2. If you run freesurfer BIDS-App (https://github.com/BIDS-Apps/freesurfer) do you also get variable results (when running twice on the same subject)?
  3. As an additional exercise I would try running fmriprep with --omp-nthreads 1 option.

BTW this paper might be of interest to you https://www.frontiersin.org/articles/10.3389/fninf.2015.00012/full

Thanks!!

What are the exact command lines you used to run the two fmriprep analyses on data from the same subject?

These are the exact command lines:
docker run -ti --rm -v …/sourcedata:/data:ro -v …/out:/out poldracklab/fmriprep:latest /data /out participant --participant-label 001 --use-aroma
docker run -ti --rm -v …/sourcedata:/data:ro -v …/out:/out poldracklab/fmriprep:latest /data /out participant --participant-label 150 --use-aroma

  1. If you run freesurfer BIDS-App (https://github.com/BIDS-Apps/freesurfer) do you also get variable results (when running twice on the same subject)?
  2. As an additional exercise I would try running fmriprep with --omp-nthreads 1 option.

Thank you for your advice!
I’ll try it!!

BTW this paper might be of interest to you https://www.frontiersin.org/articles/10.3389/fninf.2015.00012/full

Thank you for your information!

Thanks. Was the …/out folder empty before you ran the first command?

Wow! Great to hear you are doing this!

Are the ciftify version and software envs the same for both runs? Do the first bit of the ciftify_recon_all.log files list the same packages and versions?


Was the …/out folder empty before you ran the first command?

Yes. I emptied the folder before running the first command.

Using the freesurfer BIDS-App, I got exactly the same results when running twice on the same subject.


Since this image is a GIF combining the two results, you can see that they are exactly the same!

Using the --omp-nthreads 1 option, I still did not get the same results when running twice on the same subject.

So I want to use a combination of BIDS-App recon-all and fmriprep for my analysis!
Is that possible?

So I want to use a combination of BIDS-App recon-all and fmriprep for my analysis!
Is that possible?

Yes. If there is already a freesurfer/ directory in the output directory given to fmriprep, then fmriprep will use that. (If recon-all was not fully run on a given subject, fmriprep will run unfinished steps, but a fully run FreeSurfer will just be checked for completeness and used as-is. This is both for the sake of efficiency and for accommodating situations in which custom FreeSurfer runs are required.)

Please see the documentation for further details.
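Concretely, fmriprep looks for the subject under freesurfer/ inside the output directory. A sketch of the setup, assuming your recon-all results follow the standard FreeSurfer subject layout (all paths below are hypothetical stand-ins):

```shell
# All paths are hypothetical. The first mkdir stands in for a finished
# recon-all subject directory with the usual FreeSurfer layout.
mkdir -p recon_all_out/sub-001/surf recon_all_out/sub-001/mri

# Copy (or symlink) the finished subject into <output_dir>/freesurfer/
# before launching fmriprep with that same output directory.
mkdir -p out/freesurfer
cp -r recon_all_out/sub-001 out/freesurfer/

# fmriprep will detect out/freesurfer/sub-001, check it for completeness,
# and reuse it instead of rerunning surface reconstruction, e.g.:
#   docker run -ti --rm -v …/sourcedata:/data:ro -v …/out:/out \
#       poldracklab/fmriprep:latest /data /out participant \
#       --participant-label 001 --use-aroma
```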

Hi edickie,

Thank you for your comment!
I have to apologize: what I said below was wrong.
“the outputs (MNI_LM) of ciftify-toolbox run twice on the same two data separately (sub-001 twice and sub-150 twice). The results are also little bit different but almost same for each other.”
This was because I ran the commands on different OSes for the same two datasets.
In fact, the outputs (MNI_LM) of ciftify-toolbox run twice on the same data on the same OS were exactly the same, but the outputs of runs on different OSes were slightly different.

The ciftify version I used was 1.0.1, and the exact environment settings are below.

ciftify:
Version: 1.0.1
wb_command:
Path: /home/dcn/ayumu/workbench/bin_rh_linux64/wb_command
Version: 1.2.3
Commit Date: 2016-08-23 19:08:10 -0500
Operating System: Linux
freesurfer:
Path: /home/dcn/ayumu/freesurfer/bin
Build Stamp: freesurfer-Linux-centos6_x86_64-stable-pub-v6.0.0-2beb96c
FSL:
Path: /home/dcn/ayumu/fsl
Version: 5.0.10

Hi effigies,
Thank you for your comment!

This is good news for me!
Thank you very much!

So to sum up:

  1. When you run FreeSurfer on two “subjects” with exactly the same raw data you get the same results.
  2. When you run FMRIPREP (with or without --omp-nthreads 1) on two “subjects” with exactly the same raw data you get different results.

Correct?

It seems that one of the FMRIPREP results is more similar to the stable pure FreeSurfer result. Could you share this data (the two “subjects”) so we could try to replicate this?

  1. When you run FreeSurfer on two “subjects” with exactly the same raw data you get the same results.
  2. When you run FMRIPREP (with or without --omp-nthreads 1) on two “subjects” with exactly the same raw data you get different results.

Yes.

Could you share this data (the two “subjects”) so we could try to replicate this?

OK. I will check with my boss whether we can share the data.
Thanks!


Hi Chris,

We can share the data (the two “subjects”).
Please tell me how to share it.
Thanks!

Guessing Chris missed this, but I believe the currently preferred way to share data is to upload it to OpenNeuro and share it with his account.


Thank you so much!
I’ll upload it to OpenNeuro and share it with Chris!


Hi Chris,

I’m very sorry for my very late response…
I was very busy this April…
I have at last uploaded my data to OpenNeuro and shared it with “krzysztof.gorgolewski@gmail.com”.
The dataset name is “BIDS”.
Please confirm it.
Best,

Ayumu


Hello

I am encountering a similar “issue”: when I run the same subject twice, the intensity values of e.g. _preproc_bold.nii.gz are slightly different in corresponding voxels of the two runs. Also, when I compare the components identified by ICA-AROMA, some components identified as signal in the first run are identified as noise in the second run. I am using fmriprep 20.2.4. Is there any update on the reproducibility of the output?
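A quick way to confirm that two such files genuinely differ at the byte level: gzip containers can embed metadata such as timestamps, so it is safer to compare the decompressed streams rather than the .nii.gz files themselves. A sketch — the filenames are hypothetical, and the first two lines only fabricate demo files so it runs standalone:

```shell
# Filenames are hypothetical; the first two lines fabricate demo files
# so the sketch runs standalone (gzip -n omits name/timestamp metadata).
printf 'voxel data' | gzip -n > runA_preproc_bold.nii.gz
printf 'voxel data' | gzip -n > runB_preproc_bold.nii.gz

# Compare the decompressed streams, not the .nii.gz containers, since
# the gzip wrapper itself can carry metadata such as timestamps.
zcat runA_preproc_bold.nii.gz > runA.raw
zcat runB_preproc_bold.nii.gz > runB.raw
if cmp -s runA.raw runB.raw; then
  echo "decompressed contents are bitwise identical"
else
  echo "contents genuinely differ"
fi
```

If the decompressed streams differ, the numerical difference is real; a voxelwise comparison with a tool such as nibabel can then localize where it occurs.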

Thank you very much for the info!
best wishes
Julie