FitLins "no functional images that match criteria found"

Summary of what happened:

Hi all, I’m running FitLins on Haxby’s visual object recognition dataset. I’m trying to run a very simple run-level GLM for a single subject. From the error message it seems that FitLins cannot locate my NIfTI images, but I can’t see anything wrong with how I’m mounting my data. Any help would be greatly appreciated!
Here’s what my model JSON looks like (I’ve placed it under [bids_root]/models/model-001_smdl.json):

{
   "Name":"ds_model001",
   "Description":"Model for Visual Object Recognition task",
   "Input":{
      "task":"objectviewing"
   },
   "Nodes":[
      {
         "Level":"run",
         "Name":"subject",
         "GroupBy":[
            "run",
            "subject"
         ],
         "Transformations":{
            "Transformer": "pybids-transforms-v1",
            "Instructions": [
              {
                "Name": "Factor",
                "Input": ["trial_type"]
              },
              {
                "Name": "Convolve",
                "Model": "spm",
                "Input": ["trial_type.face", "trial_type.scrambledpix"]
              }
            ]
         },
         "Model":{
            "X":[
               "trial_type.face",
               "trial_type.scrambledpix",
               "framewise_displacement",
               "trans_x",
               "trans_y",
               "trans_z",
               "rot_x",
               "rot_y",
               "rot_z",
               "a_comp_cor_00",
               "a_comp_cor_01",
               "a_comp_cor_02",
               "a_comp_cor_03",
               "a_comp_cor_04",
               "a_comp_cor_05"
            ]
         },
         "Contrasts":[
            {
               "Name":"face_gt_scrambled",
               "ConditionList":[
                  "trial_type.face",
                  "trial_type.scrambled"
               ],
               "Weights":[
                  1,
                  -1
               ],
               "Test":"t"
            }
         ]
      }
   ]
}

Command used (and if a helper script was used, a link to the helper script or the command generated):

docker run --rm -it \
-v /Users/sajjad/Documents/Haxby/data/:/bids:ro \
-v /Users/sajjad/Documents/Haxby/data/derivatives:/prep:ro \
-v /Users/sajjad/Documents/Haxby/data/derivatives/fitlins:/out \
-v /Users/sajjad/Documents/Haxby/data/derivatives/fitlins_work:/scratch \
poldracklab/fitlins:latest \
/bids /out participant -d /prep -w /scratch --participant-label 1 -m /bids/models/model-001_smdl.json --estimator nilearn

Version:

Environment (Docker, Singularity / Apptainer, custom installation):

Docker

Data formatted according to a validatable standard? Please provide the output of the validator:

bids-validator@1.14.5
(node:1) Warning: Closing directory handle on garbage collection
(Use `node --trace-warnings ...` to show where the warning was created)
	1: [ERR] Files with such naming scheme are not part of BIDS specification. This error is most commonly caused by typos in file names that make them not BIDS compatible. Please consult the specification and make sure your files are named correctly. If this is not a file naming issue (for example when including files not yet covered by the BIDS specification) you should include a ".bidsignore" file in your dataset (see https://github.com/bids-standard/bids-validator#bidsignore for details). Please note that derived (processed) data should be placed in /derivatives folder and source data (such as DICOMS or behavioural logs in proprietary formats) should be placed in the /sourcedata folder. (code: 1 - NOT_INCLUDED)
		./models/example1_model-001_smdl.json
			Evidence: example1_model-001_smdl.json
		./models/example2_model-001_smdl.json
			Evidence: example2_model-001_smdl.json
		./models/example3_model-001_smdl.json
			Evidence: example3_model-001_smdl.json
		./models/example4_model-001_smdl.json
			Evidence: example4_model-001_smdl.json
		./models/model-001_smdl.json
			Evidence: model-001_smdl.json

	Please visit https://neurostars.org/search?q=NOT_INCLUDED for existing conversations about this issue.

	1: [WARN] You should define 'SliceTiming' for this file. If you don't provide this information slice time correction will not be possible. 'Slice Timing' is the time at which each slice was acquired within each volume (frame) of the acquisition. Slice timing is not slice order -- rather, it is a list of times containing the time (in seconds) of each slice acquisition in relation to the beginning of volume acquisition. (code: 13 - SLICE_TIMING_NOT_DEFINED)
		./sub-1/func/sub-1_task-objectviewing_run-01_bold.nii.gz
		./sub-1/func/sub-1_task-objectviewing_run-02_bold.nii.gz
		./sub-1/func/sub-1_task-objectviewing_run-03_bold.nii.gz
		./sub-1/func/sub-1_task-objectviewing_run-04_bold.nii.gz
		./sub-1/func/sub-1_task-objectviewing_run-05_bold.nii.gz
		./sub-1/func/sub-1_task-objectviewing_run-06_bold.nii.gz
		./sub-1/func/sub-1_task-objectviewing_run-07_bold.nii.gz
		./sub-1/func/sub-1_task-objectviewing_run-08_bold.nii.gz
		./sub-1/func/sub-1_task-objectviewing_run-09_bold.nii.gz
		./sub-1/func/sub-1_task-objectviewing_run-10_bold.nii.gz
		... and 61 more files having this issue (Use --verbose to see them all).

	Please visit https://neurostars.org/search?q=SLICE_TIMING_NOT_DEFINED for existing conversations about this issue.

	2: [WARN] Not all subjects contain the same files. Each subject should contain the same number of files with the same naming unless some files are known to be missing. (code: 38 - INCONSISTENT_SUBJECTS)
		./sub-5/func/sub-5_task-objectviewing_run-12_bold.nii.gz
			Evidence: Subject: sub-5; Missing file: sub-5_task-objectviewing_run-12_bold.nii.gz
		./sub-5/func/sub-5_task-objectviewing_run-12_events.tsv
			Evidence: Subject: sub-5; Missing file: sub-5_task-objectviewing_run-12_events.tsv

	Please visit https://neurostars.org/search?q=INCONSISTENT_SUBJECTS for existing conversations about this issue.

	3: [WARN] Not all subjects/sessions/runs have the same scanning parameters. (code: 39 - INCONSISTENT_PARAMETERS)
		./sub-6/anat/sub-6_T1w.nii.gz

	Please visit https://neurostars.org/search?q=INCONSISTENT_PARAMETERS for existing conversations about this issue.

        Summary:                 Available Tasks:        Available Modalities: 
        157 Files, 1.75GB        object viewing          MRI                   
        6 - Subjects                                                           
        1 - Session                                                   

Relevant log outputs (up to 20 lines):

Captured warning (<class 'UserWarning'>): [Node subject]:Transformations reformatted to {'transformer': 'pybids-transforms-v1', 'instructions': [{'name': 'Factor', 'input': ['trial_type']}, {'name': 'Convolve', 'input': ['trial_type.face', 'trial_type.scrambledpix']}]}
240430-02:47:42,784 nipype.workflow INFO:
	 [Node] Setting-up "fitlins_wf.loader" in "/scratch/fitlins_wf/loader".
240430-02:47:42,870 nipype.workflow INFO:
	 [Node] Executing "loader" <fitlins.interfaces.bids.LoadBIDSModel>
/opt/miniconda-latest/envs/neuro/lib/python3.9/site-packages/bids/modeling/statsmodels.py:63: UserWarning: [Node subject]:Transformations reformatted to {'transformer': 'pybids-transforms-v1', 'instructions': [{'name': 'Factor', 'input': ['trial_type']}, {'name': 'Convolve', 'input': ['trial_type.face', 'trial_type.scrambledpix']}]}
  warnings.warn(f"[Node {node['name']}]:"
240430-02:47:43,109 nipype.workflow INFO:
	 [Node] Finished "loader", elapsed time 0.204128s.
240430-02:47:43,109 nipype.workflow WARNING:
	 Storing result file without outputs
240430-02:47:43,115 nipype.workflow WARNING:
	 [Node] Error on "fitlins_wf.loader" (/scratch/fitlins_wf/loader)
240430-02:47:44,143 nipype.workflow ERROR:
	 Node loader failed to run on host eec0cd6b518f.
240430-02:47:44,144 nipype.workflow ERROR:
	 Saving crash info to /scratch/crash-20240430-024744-neuro-loader-529c9467-e290-4851-8b2c-7b5d8e2e9ebc.txt
Traceback (most recent call last):
  File "/opt/miniconda-latest/envs/neuro/lib/python3.9/site-packages/nipype/pipeline/plugins/multiproc.py", line 67, in run_node
    result["result"] = node.run(updatehash=updatehash)
  File "/opt/miniconda-latest/envs/neuro/lib/python3.9/site-packages/nipype/pipeline/engine/nodes.py", line 527, in run
    result = self._run_interface(execute=True)
  File "/opt/miniconda-latest/envs/neuro/lib/python3.9/site-packages/nipype/pipeline/engine/nodes.py", line 645, in _run_interface
    return self._run_command(execute)
  File "/opt/miniconda-latest/envs/neuro/lib/python3.9/site-packages/nipype/pipeline/engine/nodes.py", line 771, in _run_command
    raise NodeExecutionError(msg)
nipype.pipeline.engine.nodes.NodeExecutionError: Exception raised while executing Node loader.

Traceback:
	Traceback (most recent call last):
	  File "/opt/miniconda-latest/envs/neuro/lib/python3.9/site-packages/nipype/interfaces/base/core.py", line 398, in run
	    runtime = self._run_interface(runtime)
	  File "/opt/miniconda-latest/envs/neuro/lib/python3.9/site-packages/fitlins/interfaces/bids.py", line 246, in _run_interface
	    graph.load_collections(**selectors)
	  File "/opt/miniconda-latest/envs/neuro/lib/python3.9/site-packages/bids/modeling/statsmodels.py", line 198, in load_collections
	    collections = self.layout.get_collections(node.level, drop_na=drop_na,
	  File "/opt/miniconda-latest/envs/neuro/lib/python3.9/site-packages/bids/layout/layout.py", line 860, in get_collections
	    index = load_variables(self, types=types, levels=level,
	  File "/opt/miniconda-latest/envs/neuro/lib/python3.9/site-packages/bids/variables/io.py", line 93, in load_variables
	    dataset = _load_time_variables(layout, dataset, scope=scope, **_kwargs)
	  File "/opt/miniconda-latest/envs/neuro/lib/python3.9/site-packages/bids/variables/io.py", line 186, in _load_time_variables
	    raise ValueError("No functional images that match criteria found.")
	ValueError: No functional images that match criteria found.

Screenshots / relevant information:


Hi @sajjad,

What are the contents of /Users/sajjad/Documents/Haxby/data/derivatives? Can you also return the tree output of an fmriprep output func folder for this subject?

Additionally, I see you are using a_comp_cor components but not including the cosine and non_steady_state regressors. That is not recommended.
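
If you want to check which of those regressors fMRIPrep produced for a run, a minimal sketch like the following will list them (assuming pandas is installed on the host; the path is hypothetical and just follows standard fMRIPrep naming under your derivatives folder):

import pandas as pd

# A confounds file from the fMRIPrep outputs (hypothetical path, default naming)
tsv = ("/Users/sajjad/Documents/Haxby/data/derivatives/sub-1/func/"
       "sub-1_task-objectviewing_run-01_desc-confounds_timeseries.tsv")
confounds = pd.read_csv(tsv, sep="\t")

# CompCor components are estimated on high-pass-filtered data, so the cosine
# drift regressors (and any non-steady-state outliers) should be added to "X"
# alongside the a_comp_cor columns
extra = [c for c in confounds.columns
         if c.startswith(("cosine", "non_steady_state"))]
print(extra)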

Best,
Steven


I don’t see anything obviously wrong with your model specification or command.

It looks like PyBIDS is not even able to find the NIfTI images when trying to load the variables.

Can you try running it without the --participant-label flag?
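
In the meantime, it may help to check what PyBIDS indexes outside of Docker with a quick sketch like this (assuming pybids is installed on the host; the paths are your host paths, not the container mounts):

from bids import BIDSLayout

bids_root = "/Users/sajjad/Documents/Haxby/data"
layout = BIDSLayout(bids_root, derivatives=f"{bids_root}/derivatives")

# Preprocessed BOLD images for sub-1, as PyBIDS sees them
bold = layout.get(subject="1", task="objectviewing", suffix="bold",
                  extension=".nii.gz", desc="preproc", scope="derivatives")
print(len(bold), "preprocessed BOLD files found")
for f in bold[:3]:
    print(f.path)

If that prints 0, the loader will hit the same “No functional images that match criteria found” error; if the files are found, the problem is more likely in the extra selectors FitLins adds (for example the space filter).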

Thank you both for the help.
@Steven here is my derivatives tree structure:

.
├── dataset_description.json
├── fitlins
│   └── dataset_description.json
├── fitlins_work
│   ├── crash-20240430-024744-neuro-loader-529c9467-e290-4851-8b2c-7b5d8e2e9ebc.txt
│   ├── crash-20240430-031609-neuro-loader-a1f9ed77-5115-4810-a720-f0ef2e897c36.txt
│   ├── crash-20240430-033245-neuro-loader-eb279f92-5297-434c-8896-adebf7f1f055.txt
│   ├── crash-20240430-034123-neuro-loader-847ac057-428d-4e7b-9fe3-a35e6494993b.txt
│   ├── crash-20240430-034155-neuro-loader-9512bced-371e-4138-863a-5e79fd45a697.txt
│   ├── dbcache
│   │   ├── fMRIPrep
│   │   │   └── layout_index.sqlite
│   │   └── layout_index.sqlite
│   ├── fitlins_wf
│   │   ├── d3.js
│   │   ├── graph.json
│   │   ├── graph1.json
│   │   ├── index.html
│   │   └── loader
│   │       ├── _inputs.pklz
│   │       ├── _node.pklz
│   │       ├── _report
│   │       │   └── report.rst
│   │       └── result_loader.pklz
│   └── reportlets
│       └── fitlins
├── logs
│   ├── CITATION.bib
│   ├── CITATION.html
│   ├── CITATION.md
│   └── CITATION.tex
├── mriqc
│   ├── anatMRIQC.csv
│   ├── anatomical_group.pdf
│   ├── anatomical_sub-1.pdf
│   ├── anatomical_sub-2.pdf
│   ├── anatomical_sub-3.pdf
│   ├── anatomical_sub-4.pdf
│   ├── anatomical_sub-5.pdf
│   ├── anatomical_sub-6.pdf
│   ├── funcMRIQC.csv
│   ├── functional_group.pdf
│   ├── functional_sub-1.pdf
│   ├── functional_sub-2.pdf
│   ├── functional_sub-3.pdf
│   ├── functional_sub-4.pdf
│   ├── functional_sub-5.pdf
│   └── functional_sub-6.pdf
├── sub-1
│   ├── anat
│   │   ├── sub-1_desc-brain_mask.json
│   │   ├── sub-1_desc-brain_mask.nii.gz
│   │   ├── sub-1_desc-preproc_T1w.json
│   │   ├── sub-1_desc-preproc_T1w.nii.gz
│   │   ├── sub-1_dseg.nii.gz
│   │   ├── sub-1_from-MNI152NLin2009cAsym_to-T1w_mode-image_xfm.h5
│   │   ├── sub-1_from-T1w_to-MNI152NLin2009cAsym_mode-image_xfm.h5
│   │   ├── sub-1_label-CSF_probseg.nii.gz
│   │   ├── sub-1_label-GM_probseg.nii.gz
│   │   └── sub-1_label-WM_probseg.nii.gz
│   ├── figures
│   │   ├── sub-1_desc-about_T1w.html
│   │   ├── sub-1_desc-conform_T1w.html
│   │   ├── sub-1_desc-summary_T1w.html
│   │   ├── sub-1_dseg.svg
│   │   ├── sub-1_task-objectviewing_run-01_desc-carpetplot_bold.svg
│   │   ├── sub-1_task-objectviewing_run-01_desc-compcorvar_bold.svg
│   │   ├── sub-1_task-objectviewing_run-01_desc-confoundcorr_bold.svg
│   │   ├── sub-1_task-objectviewing_run-01_desc-coreg_bold.svg
│   │   ├── sub-1_task-objectviewing_run-01_desc-rois_bold.svg
│   │   ├── sub-1_task-objectviewing_run-01_desc-summary_bold.html
│   │   ├── sub-1_task-objectviewing_run-01_desc-validation_bold.html
│   │   ├── sub-1_task-objectviewing_run-02_desc-carpetplot_bold.svg
│   │   ├── sub-1_task-objectviewing_run-02_desc-compcorvar_bold.svg
│   │   ├── sub-1_task-objectviewing_run-02_desc-confoundcorr_bold.svg
│   │   ├── sub-1_task-objectviewing_run-02_desc-coreg_bold.svg
│   │   ├── sub-1_task-objectviewing_run-02_desc-rois_bold.svg
│   │   ├── sub-1_task-objectviewing_run-02_desc-summary_bold.html
│   │   ├── sub-1_task-objectviewing_run-02_desc-validation_bold.html
│   │   ├── sub-1_task-objectviewing_run-03_desc-carpetplot_bold.svg
│   │   ├── sub-1_task-objectviewing_run-03_desc-compcorvar_bold.svg
│   │   ├── sub-1_task-objectviewing_run-03_desc-confoundcorr_bold.svg
│   │   ├── sub-1_task-objectviewing_run-03_desc-coreg_bold.svg
│   │   ├── sub-1_task-objectviewing_run-03_desc-rois_bold.svg
│   │   ├── sub-1_task-objectviewing_run-03_desc-summary_bold.html
│   │   ├── sub-1_task-objectviewing_run-03_desc-validation_bold.html
│   │   ├── sub-1_task-objectviewing_run-04_desc-carpetplot_bold.svg
│   │   ├── sub-1_task-objectviewing_run-04_desc-compcorvar_bold.svg
│   │   ├── sub-1_task-objectviewing_run-04_desc-confoundcorr_bold.svg
│   │   ├── sub-1_task-objectviewing_run-04_desc-coreg_bold.svg
│   │   ├── sub-1_task-objectviewing_run-04_desc-rois_bold.svg
│   │   ├── sub-1_task-objectviewing_run-04_desc-summary_bold.html
│   │   ├── sub-1_task-objectviewing_run-04_desc-validation_bold.html
│   │   ├── sub-1_task-objectviewing_run-05_desc-carpetplot_bold.svg
│   │   ├── sub-1_task-objectviewing_run-05_desc-compcorvar_bold.svg
│   │   ├── sub-1_task-objectviewing_run-05_desc-confoundcorr_bold.svg
│   │   ├── sub-1_task-objectviewing_run-05_desc-coreg_bold.svg
│   │   ├── sub-1_task-objectviewing_run-05_desc-rois_bold.svg
│   │   ├── sub-1_task-objectviewing_run-05_desc-summary_bold.html
│   │   ├── sub-1_task-objectviewing_run-05_desc-validation_bold.html
│   │   ├── sub-1_task-objectviewing_run-06_desc-carpetplot_bold.svg
│   │   ├── sub-1_task-objectviewing_run-06_desc-compcorvar_bold.svg
│   │   ├── sub-1_task-objectviewing_run-06_desc-confoundcorr_bold.svg
│   │   ├── sub-1_task-objectviewing_run-06_desc-coreg_bold.svg
│   │   ├── sub-1_task-objectviewing_run-06_desc-rois_bold.svg
│   │   ├── sub-1_task-objectviewing_run-06_desc-summary_bold.html
│   │   ├── sub-1_task-objectviewing_run-06_desc-validation_bold.html
│   │   ├── sub-1_task-objectviewing_run-07_desc-carpetplot_bold.svg
│   │   ├── sub-1_task-objectviewing_run-07_desc-compcorvar_bold.svg
│   │   ├── sub-1_task-objectviewing_run-07_desc-confoundcorr_bold.svg
│   │   ├── sub-1_task-objectviewing_run-07_desc-coreg_bold.svg
│   │   ├── sub-1_task-objectviewing_run-07_desc-rois_bold.svg
│   │   ├── sub-1_task-objectviewing_run-07_desc-summary_bold.html
│   │   ├── sub-1_task-objectviewing_run-07_desc-validation_bold.html
│   │   ├── sub-1_task-objectviewing_run-08_desc-carpetplot_bold.svg
│   │   ├── sub-1_task-objectviewing_run-08_desc-compcorvar_bold.svg
│   │   ├── sub-1_task-objectviewing_run-08_desc-confoundcorr_bold.svg
│   │   ├── sub-1_task-objectviewing_run-08_desc-coreg_bold.svg
│   │   ├── sub-1_task-objectviewing_run-08_desc-rois_bold.svg
│   │   ├── sub-1_task-objectviewing_run-08_desc-summary_bold.html
│   │   ├── sub-1_task-objectviewing_run-08_desc-validation_bold.html
│   │   ├── sub-1_task-objectviewing_run-09_desc-carpetplot_bold.svg
│   │   ├── sub-1_task-objectviewing_run-09_desc-compcorvar_bold.svg
│   │   ├── sub-1_task-objectviewing_run-09_desc-confoundcorr_bold.svg
│   │   ├── sub-1_task-objectviewing_run-09_desc-coreg_bold.svg
│   │   ├── sub-1_task-objectviewing_run-09_desc-rois_bold.svg
│   │   ├── sub-1_task-objectviewing_run-09_desc-summary_bold.html
│   │   ├── sub-1_task-objectviewing_run-09_desc-validation_bold.html
│   │   ├── sub-1_task-objectviewing_run-10_desc-carpetplot_bold.svg
│   │   ├── sub-1_task-objectviewing_run-10_desc-compcorvar_bold.svg
│   │   ├── sub-1_task-objectviewing_run-10_desc-confoundcorr_bold.svg
│   │   ├── sub-1_task-objectviewing_run-10_desc-coreg_bold.svg
│   │   ├── sub-1_task-objectviewing_run-10_desc-rois_bold.svg
│   │   ├── sub-1_task-objectviewing_run-10_desc-summary_bold.html
│   │   ├── sub-1_task-objectviewing_run-10_desc-validation_bold.html
│   │   ├── sub-1_task-objectviewing_run-11_desc-carpetplot_bold.svg
│   │   ├── sub-1_task-objectviewing_run-11_desc-compcorvar_bold.svg
│   │   ├── sub-1_task-objectviewing_run-11_desc-confoundcorr_bold.svg
│   │   ├── sub-1_task-objectviewing_run-11_desc-coreg_bold.svg
│   │   ├── sub-1_task-objectviewing_run-11_desc-rois_bold.svg
│   │   ├── sub-1_task-objectviewing_run-11_desc-summary_bold.html
│   │   ├── sub-1_task-objectviewing_run-11_desc-validation_bold.html
│   │   ├── sub-1_task-objectviewing_run-12_desc-carpetplot_bold.svg
│   │   ├── sub-1_task-objectviewing_run-12_desc-compcorvar_bold.svg
│   │   ├── sub-1_task-objectviewing_run-12_desc-confoundcorr_bold.svg
│   │   ├── sub-1_task-objectviewing_run-12_desc-coreg_bold.svg
│   │   ├── sub-1_task-objectviewing_run-12_desc-rois_bold.svg
│   │   ├── sub-1_task-objectviewing_run-12_desc-summary_bold.html
│   │   └── sub-1_task-objectviewing_run-12_desc-validation_bold.html
│   ├── func
│   │   ├── sub-1_task-objectviewing_run-01_desc-brain_mask.json
│   │   ├── sub-1_task-objectviewing_run-01_desc-brain_mask.nii.gz
│   │   ├── sub-1_task-objectviewing_run-01_desc-confounds_timeseries.json
│   │   ├── sub-1_task-objectviewing_run-01_desc-confounds_timeseries.tsv
│   │   ├── sub-1_task-objectviewing_run-01_desc-coreg_boldref.json
│   │   ├── sub-1_task-objectviewing_run-01_desc-coreg_boldref.nii.gz
│   │   ├── sub-1_task-objectviewing_run-01_desc-hmc_boldref.json
│   │   ├── sub-1_task-objectviewing_run-01_desc-hmc_boldref.nii.gz
│   │   ├── sub-1_task-objectviewing_run-01_desc-preproc_bold.json
│   │   ├── sub-1_task-objectviewing_run-01_desc-preproc_bold.nii.gz
│   │   ├── sub-1_task-objectviewing_run-01_from-boldref_to-T1w_mode-image_desc-coreg_xfm.json
│   │   ├── sub-1_task-objectviewing_run-01_from-boldref_to-T1w_mode-image_desc-coreg_xfm.txt
│   │   ├── sub-1_task-objectviewing_run-01_from-orig_to-boldref_mode-image_desc-hmc_xfm.json
│   │   ├── sub-1_task-objectviewing_run-01_from-orig_to-boldref_mode-image_desc-hmc_xfm.txt
│   │   ├── sub-1_task-objectviewing_run-02_desc-brain_mask.json
│   │   ├── sub-1_task-objectviewing_run-02_desc-brain_mask.nii.gz
│   │   ├── sub-1_task-objectviewing_run-02_desc-confounds_timeseries.json
│   │   ├── sub-1_task-objectviewing_run-02_desc-confounds_timeseries.tsv
│   │   ├── sub-1_task-objectviewing_run-02_desc-coreg_boldref.json
│   │   ├── sub-1_task-objectviewing_run-02_desc-coreg_boldref.nii.gz
│   │   ├── sub-1_task-objectviewing_run-02_desc-hmc_boldref.json
│   │   ├── sub-1_task-objectviewing_run-02_desc-hmc_boldref.nii.gz
│   │   ├── sub-1_task-objectviewing_run-02_desc-preproc_bold.json
│   │   ├── sub-1_task-objectviewing_run-02_desc-preproc_bold.nii.gz
│   │   ├── sub-1_task-objectviewing_run-02_from-boldref_to-T1w_mode-image_desc-coreg_xfm.json
│   │   ├── sub-1_task-objectviewing_run-02_from-boldref_to-T1w_mode-image_desc-coreg_xfm.txt
│   │   ├── sub-1_task-objectviewing_run-02_from-orig_to-boldref_mode-image_desc-hmc_xfm.json
│   │   ├── sub-1_task-objectviewing_run-02_from-orig_to-boldref_mode-image_desc-hmc_xfm.txt
│   │   ├── sub-1_task-objectviewing_run-03_desc-brain_mask.json
│   │   ├── sub-1_task-objectviewing_run-03_desc-brain_mask.nii.gz
│   │   ├── sub-1_task-objectviewing_run-03_desc-confounds_timeseries.json
│   │   ├── sub-1_task-objectviewing_run-03_desc-confounds_timeseries.tsv
│   │   ├── sub-1_task-objectviewing_run-03_desc-coreg_boldref.json
│   │   ├── sub-1_task-objectviewing_run-03_desc-coreg_boldref.nii.gz
│   │   ├── sub-1_task-objectviewing_run-03_desc-hmc_boldref.json
│   │   ├── sub-1_task-objectviewing_run-03_desc-hmc_boldref.nii.gz
│   │   ├── sub-1_task-objectviewing_run-03_desc-preproc_bold.json
│   │   ├── sub-1_task-objectviewing_run-03_desc-preproc_bold.nii.gz
│   │   ├── sub-1_task-objectviewing_run-03_from-boldref_to-T1w_mode-image_desc-coreg_xfm.json
│   │   ├── sub-1_task-objectviewing_run-03_from-boldref_to-T1w_mode-image_desc-coreg_xfm.txt
│   │   ├── sub-1_task-objectviewing_run-03_from-orig_to-boldref_mode-image_desc-hmc_xfm.json
│   │   ├── sub-1_task-objectviewing_run-03_from-orig_to-boldref_mode-image_desc-hmc_xfm.txt
│   │   ├── sub-1_task-objectviewing_run-04_desc-brain_mask.json
│   │   ├── sub-1_task-objectviewing_run-04_desc-brain_mask.nii.gz
│   │   ├── sub-1_task-objectviewing_run-04_desc-confounds_timeseries.json
│   │   ├── sub-1_task-objectviewing_run-04_desc-confounds_timeseries.tsv
│   │   ├── sub-1_task-objectviewing_run-04_desc-coreg_boldref.json
│   │   ├── sub-1_task-objectviewing_run-04_desc-coreg_boldref.nii.gz
│   │   ├── sub-1_task-objectviewing_run-04_desc-hmc_boldref.json
│   │   ├── sub-1_task-objectviewing_run-04_desc-hmc_boldref.nii.gz
│   │   ├── sub-1_task-objectviewing_run-04_desc-preproc_bold.json
│   │   ├── sub-1_task-objectviewing_run-04_desc-preproc_bold.nii.gz
│   │   ├── sub-1_task-objectviewing_run-04_from-boldref_to-T1w_mode-image_desc-coreg_xfm.json
│   │   ├── sub-1_task-objectviewing_run-04_from-boldref_to-T1w_mode-image_desc-coreg_xfm.txt
│   │   ├── sub-1_task-objectviewing_run-04_from-orig_to-boldref_mode-image_desc-hmc_xfm.json
│   │   ├── sub-1_task-objectviewing_run-04_from-orig_to-boldref_mode-image_desc-hmc_xfm.txt
│   │   ├── sub-1_task-objectviewing_run-05_desc-brain_mask.json
│   │   ├── sub-1_task-objectviewing_run-05_desc-brain_mask.nii.gz
│   │   ├── sub-1_task-objectviewing_run-05_desc-confounds_timeseries.json
│   │   ├── sub-1_task-objectviewing_run-05_desc-confounds_timeseries.tsv
│   │   ├── sub-1_task-objectviewing_run-05_desc-coreg_boldref.json
│   │   ├── sub-1_task-objectviewing_run-05_desc-coreg_boldref.nii.gz
│   │   ├── sub-1_task-objectviewing_run-05_desc-hmc_boldref.json
│   │   ├── sub-1_task-objectviewing_run-05_desc-hmc_boldref.nii.gz
│   │   ├── sub-1_task-objectviewing_run-05_desc-preproc_bold.json
│   │   ├── sub-1_task-objectviewing_run-05_desc-preproc_bold.nii.gz
│   │   ├── sub-1_task-objectviewing_run-05_from-boldref_to-T1w_mode-image_desc-coreg_xfm.json
│   │   ├── sub-1_task-objectviewing_run-05_from-boldref_to-T1w_mode-image_desc-coreg_xfm.txt
│   │   ├── sub-1_task-objectviewing_run-05_from-orig_to-boldref_mode-image_desc-hmc_xfm.json
│   │   ├── sub-1_task-objectviewing_run-05_from-orig_to-boldref_mode-image_desc-hmc_xfm.txt
│   │   ├── sub-1_task-objectviewing_run-06_desc-brain_mask.json
│   │   ├── sub-1_task-objectviewing_run-06_desc-brain_mask.nii.gz
│   │   ├── sub-1_task-objectviewing_run-06_desc-confounds_timeseries.json
│   │   ├── sub-1_task-objectviewing_run-06_desc-confounds_timeseries.tsv
│   │   ├── sub-1_task-objectviewing_run-06_desc-coreg_boldref.json
│   │   ├── sub-1_task-objectviewing_run-06_desc-coreg_boldref.nii.gz
│   │   ├── sub-1_task-objectviewing_run-06_desc-hmc_boldref.json
│   │   ├── sub-1_task-objectviewing_run-06_desc-hmc_boldref.nii.gz
│   │   ├── sub-1_task-objectviewing_run-06_desc-preproc_bold.json
│   │   ├── sub-1_task-objectviewing_run-06_desc-preproc_bold.nii.gz
│   │   ├── sub-1_task-objectviewing_run-06_from-boldref_to-T1w_mode-image_desc-coreg_xfm.json
│   │   ├── sub-1_task-objectviewing_run-06_from-boldref_to-T1w_mode-image_desc-coreg_xfm.txt
│   │   ├── sub-1_task-objectviewing_run-06_from-orig_to-boldref_mode-image_desc-hmc_xfm.json
│   │   ├── sub-1_task-objectviewing_run-06_from-orig_to-boldref_mode-image_desc-hmc_xfm.txt
│   │   ├── sub-1_task-objectviewing_run-07_desc-brain_mask.json
│   │   ├── sub-1_task-objectviewing_run-07_desc-brain_mask.nii.gz
│   │   ├── sub-1_task-objectviewing_run-07_desc-confounds_timeseries.json
│   │   ├── sub-1_task-objectviewing_run-07_desc-confounds_timeseries.tsv
│   │   ├── sub-1_task-objectviewing_run-07_desc-coreg_boldref.json
│   │   ├── sub-1_task-objectviewing_run-07_desc-coreg_boldref.nii.gz
│   │   ├── sub-1_task-objectviewing_run-07_desc-hmc_boldref.json
│   │   ├── sub-1_task-objectviewing_run-07_desc-hmc_boldref.nii.gz
│   │   ├── sub-1_task-objectviewing_run-07_desc-preproc_bold.json
│   │   ├── sub-1_task-objectviewing_run-07_desc-preproc_bold.nii.gz
│   │   ├── sub-1_task-objectviewing_run-07_from-boldref_to-T1w_mode-image_desc-coreg_xfm.json
│   │   ├── sub-1_task-objectviewing_run-07_from-boldref_to-T1w_mode-image_desc-coreg_xfm.txt
│   │   ├── sub-1_task-objectviewing_run-07_from-orig_to-boldref_mode-image_desc-hmc_xfm.json
│   │   ├── sub-1_task-objectviewing_run-07_from-orig_to-boldref_mode-image_desc-hmc_xfm.txt
│   │   ├── sub-1_task-objectviewing_run-08_desc-brain_mask.json
│   │   ├── sub-1_task-objectviewing_run-08_desc-brain_mask.nii.gz
│   │   ├── sub-1_task-objectviewing_run-08_desc-confounds_timeseries.json
│   │   ├── sub-1_task-objectviewing_run-08_desc-confounds_timeseries.tsv
│   │   ├── sub-1_task-objectviewing_run-08_desc-coreg_boldref.json
│   │   ├── sub-1_task-objectviewing_run-08_desc-coreg_boldref.nii.gz
│   │   ├── sub-1_task-objectviewing_run-08_desc-hmc_boldref.json
│   │   ├── sub-1_task-objectviewing_run-08_desc-hmc_boldref.nii.gz
│   │   ├── sub-1_task-objectviewing_run-08_desc-preproc_bold.json
│   │   ├── sub-1_task-objectviewing_run-08_desc-preproc_bold.nii.gz
│   │   ├── sub-1_task-objectviewing_run-08_from-boldref_to-T1w_mode-image_desc-coreg_xfm.json
│   │   ├── sub-1_task-objectviewing_run-08_from-boldref_to-T1w_mode-image_desc-coreg_xfm.txt
│   │   ├── sub-1_task-objectviewing_run-08_from-orig_to-boldref_mode-image_desc-hmc_xfm.json
│   │   ├── sub-1_task-objectviewing_run-08_from-orig_to-boldref_mode-image_desc-hmc_xfm.txt
│   │   ├── sub-1_task-objectviewing_run-09_desc-brain_mask.json
│   │   ├── sub-1_task-objectviewing_run-09_desc-brain_mask.nii.gz
│   │   ├── sub-1_task-objectviewing_run-09_desc-confounds_timeseries.json
│   │   ├── sub-1_task-objectviewing_run-09_desc-confounds_timeseries.tsv
│   │   ├── sub-1_task-objectviewing_run-09_desc-coreg_boldref.json
│   │   ├── sub-1_task-objectviewing_run-09_desc-coreg_boldref.nii.gz
│   │   ├── sub-1_task-objectviewing_run-09_desc-hmc_boldref.json
│   │   ├── sub-1_task-objectviewing_run-09_desc-hmc_boldref.nii.gz
│   │   ├── sub-1_task-objectviewing_run-09_desc-preproc_bold.json
│   │   ├── sub-1_task-objectviewing_run-09_desc-preproc_bold.nii.gz
│   │   ├── sub-1_task-objectviewing_run-09_from-boldref_to-T1w_mode-image_desc-coreg_xfm.json
│   │   ├── sub-1_task-objectviewing_run-09_from-boldref_to-T1w_mode-image_desc-coreg_xfm.txt
│   │   ├── sub-1_task-objectviewing_run-09_from-orig_to-boldref_mode-image_desc-hmc_xfm.json
│   │   ├── sub-1_task-objectviewing_run-09_from-orig_to-boldref_mode-image_desc-hmc_xfm.txt
│   │   ├── sub-1_task-objectviewing_run-10_desc-brain_mask.json
│   │   ├── sub-1_task-objectviewing_run-10_desc-brain_mask.nii.gz
│   │   ├── sub-1_task-objectviewing_run-10_desc-confounds_timeseries.json
│   │   ├── sub-1_task-objectviewing_run-10_desc-confounds_timeseries.tsv
│   │   ├── sub-1_task-objectviewing_run-10_desc-coreg_boldref.json
│   │   ├── sub-1_task-objectviewing_run-10_desc-coreg_boldref.nii.gz
│   │   ├── sub-1_task-objectviewing_run-10_desc-hmc_boldref.json
│   │   ├── sub-1_task-objectviewing_run-10_desc-hmc_boldref.nii.gz
│   │   ├── sub-1_task-objectviewing_run-10_desc-preproc_bold.json
│   │   ├── sub-1_task-objectviewing_run-10_desc-preproc_bold.nii.gz
│   │   ├── sub-1_task-objectviewing_run-10_from-boldref_to-T1w_mode-image_desc-coreg_xfm.json
│   │   ├── sub-1_task-objectviewing_run-10_from-boldref_to-T1w_mode-image_desc-coreg_xfm.txt
│   │   ├── sub-1_task-objectviewing_run-10_from-orig_to-boldref_mode-image_desc-hmc_xfm.json
│   │   ├── sub-1_task-objectviewing_run-10_from-orig_to-boldref_mode-image_desc-hmc_xfm.txt
│   │   ├── sub-1_task-objectviewing_run-11_desc-brain_mask.json
│   │   ├── sub-1_task-objectviewing_run-11_desc-brain_mask.nii.gz
│   │   ├── sub-1_task-objectviewing_run-11_desc-confounds_timeseries.json
│   │   ├── sub-1_task-objectviewing_run-11_desc-confounds_timeseries.tsv
│   │   ├── sub-1_task-objectviewing_run-11_desc-coreg_boldref.json
│   │   ├── sub-1_task-objectviewing_run-11_desc-coreg_boldref.nii.gz
│   │   ├── sub-1_task-objectviewing_run-11_desc-hmc_boldref.json
│   │   ├── sub-1_task-objectviewing_run-11_desc-hmc_boldref.nii.gz
│   │   ├── sub-1_task-objectviewing_run-11_desc-preproc_bold.json
│   │   ├── sub-1_task-objectviewing_run-11_desc-preproc_bold.nii.gz
│   │   ├── sub-1_task-objectviewing_run-11_from-boldref_to-T1w_mode-image_desc-coreg_xfm.json
│   │   ├── sub-1_task-objectviewing_run-11_from-boldref_to-T1w_mode-image_desc-coreg_xfm.txt
│   │   ├── sub-1_task-objectviewing_run-11_from-orig_to-boldref_mode-image_desc-hmc_xfm.json
│   │   ├── sub-1_task-objectviewing_run-11_from-orig_to-boldref_mode-image_desc-hmc_xfm.txt
│   │   ├── sub-1_task-objectviewing_run-12_desc-brain_mask.json
│   │   ├── sub-1_task-objectviewing_run-12_desc-brain_mask.nii.gz
│   │   ├── sub-1_task-objectviewing_run-12_desc-confounds_timeseries.json
│   │   ├── sub-1_task-objectviewing_run-12_desc-confounds_timeseries.tsv
│   │   ├── sub-1_task-objectviewing_run-12_desc-coreg_boldref.json
│   │   ├── sub-1_task-objectviewing_run-12_desc-coreg_boldref.nii.gz
│   │   ├── sub-1_task-objectviewing_run-12_desc-hmc_boldref.json
│   │   ├── sub-1_task-objectviewing_run-12_desc-hmc_boldref.nii.gz
│   │   ├── sub-1_task-objectviewing_run-12_desc-preproc_bold.json
│   │   ├── sub-1_task-objectviewing_run-12_desc-preproc_bold.nii.gz
│   │   ├── sub-1_task-objectviewing_run-12_from-boldref_to-T1w_mode-image_desc-coreg_xfm.json
│   │   ├── sub-1_task-objectviewing_run-12_from-boldref_to-T1w_mode-image_desc-coreg_xfm.txt
│   │   ├── sub-1_task-objectviewing_run-12_from-orig_to-boldref_mode-image_desc-hmc_xfm.json
│   │   └── sub-1_task-objectviewing_run-12_from-orig_to-boldref_mode-image_desc-hmc_xfm.txt
│   └── log
│       ├── 20240423-180527_0474c9e8-4351-4aa5-bb0a-9986e96cfd7a
│       │   └── fmriprep.toml
│       ├── 20240423-181207_00d98061-57d5-40a4-9b9a-230c0d6c6481
│       │   └── fmriprep.toml
│       ├── 20240423-181259_a7775cb9-1003-4d20-8935-b5805b66bfbd
│       │   └── fmriprep.toml
│       ├── 20240423-182502_1aad6019-5195-447c-bc2e-248017a18323
│       │   └── fmriprep.toml
│       └── 20240423-183358_74c46d7c-a5f4-4138-a2de-be8543d96b7d
│           └── fmriprep.toml
└── sub-1.html

I’ll follow your suggestion on the CompCor regressors.

@adelavega I ran the command without --participant-label and got the same error.

Hi @sajjad,

It looks like none of your fmriprep func outputs have a space-<> label. How did you run fmriprep? Does adding --space "" to fitlins help?
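
To double-check, you could list which values of the space entity PyBIDS sees in your derivatives (a small sketch, assuming pybids on the host and your host paths); an empty list would mean only native-space outputs exist, which is consistent with needing --space "":

from bids import BIDSLayout

bids_root = "/Users/sajjad/Documents/Haxby/data"
layout = BIDSLayout(bids_root, derivatives=f"{bids_root}/derivatives")

# Values of the `space` entity on BOLD derivatives; FitLins filters on this
spaces = layout.get(return_type="id", target="space",
                    suffix="bold", scope="derivatives")
print("Available spaces:", spaces)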

Best,
Steven


Here is my fmriprep command:

docker run -ti --rm \
-v /Users/sajjad/Documents/Haxby/data:/data:ro \
-v /Users/sajjad/Documents/Haxby/data/derivatives:/out \
-v /Users/sajjad/Documents/freesurfer/license.txt:/opt/freesurfer/license.txt \
nipreps/fmriprep:latest \
/data /out participant \
--skip_bids_validation --participant-label 1 --output-spaces func --ignore fieldmaps --fs-no-reconall

I changed my fitlins command by adding --space func but I’m getting the same error.

/bids /out participant -d /prep -w /scratch -m /bids/models/model-001_smdl.json --estimator nilearn --space func

Sorry, I thought I needed to set --space func, but I’m now trying --space "" instead. I’ll send the results soon…

@Steven FitLins now ran without the “no images found” error, but it crashed after a couple of minutes. I’m not sure which part of the log I should share to help resolve the issue; below I’ve copied everything in the log that raised an error:

240430-15:57:42,559 nipype.workflow WARNING:
	 [Node] Error on "_plot_run_contrast_matrix0" (/scratch/fitlins_wf/plot_run_contrast_matrix/mapflow/_plot_run_contrast_matrix0)

and here:

240430-15:57:44,375 nipype.workflow ERROR:
	 Saving crash info to /scratch/crash-20240430-155744-neuro-_plot_run_contrast_matrix0-af5d7410-46f7-4b58-b5a1-98246b9ed366.txt
Traceback (most recent call last):
  File "/opt/miniconda-latest/envs/neuro/lib/python3.9/site-packages/nipype/pipeline/plugins/multiproc.py", line 67, in run_node
    result["result"] = node.run(updatehash=updatehash)
  File "/opt/miniconda-latest/envs/neuro/lib/python3.9/site-packages/nipype/pipeline/engine/nodes.py", line 527, in run
    result = self._run_interface(execute=True)
  File "/opt/miniconda-latest/envs/neuro/lib/python3.9/site-packages/nipype/pipeline/engine/nodes.py", line 645, in _run_interface
    return self._run_command(execute)
  File "/opt/miniconda-latest/envs/neuro/lib/python3.9/site-packages/nipype/pipeline/engine/nodes.py", line 771, in _run_command
    raise NodeExecutionError(msg)
nipype.pipeline.engine.nodes.NodeExecutionError: Exception raised while executing Node _plot_run_contrast_matrix0.

Traceback:
	Traceback (most recent call last):
	  File "/opt/miniconda-latest/envs/neuro/lib/python3.9/site-packages/nipype/interfaces/base/core.py", line 398, in run
	    runtime = self._run_interface(runtime)
	  File "/opt/miniconda-latest/envs/neuro/lib/python3.9/site-packages/fitlins/interfaces/visualizations.py", line 52, in _run_interface
	    self._visualize(data, out_name)
	  File "/opt/miniconda-latest/envs/neuro/lib/python3.9/site-packages/fitlins/interfaces/visualizations.py", line 163, in _visualize
	    plot_and_save(
	  File "/opt/miniconda-latest/envs/neuro/lib/python3.9/site-packages/fitlins/viz/__init__.py", line 14, in plot_and_save
	    plotter(*args, **kwargs)
	  File "/opt/miniconda-latest/envs/neuro/lib/python3.9/site-packages/fitlins/viz/contrasts.py", line 36, in plot_contrast_matrix
	    vmax = np.abs(contrast_matrix.values).max()
	  File "/opt/miniconda-latest/envs/neuro/lib/python3.9/site-packages/numpy/core/_methods.py", line 40, in _amax
	    return umr_maximum(a, axis, None, out, keepdims, initial, where)
	ValueError: zero-size array to reduction operation maximum which has no identity

and here:

Traceback (most recent call last):
  File "/opt/miniconda-latest/envs/neuro/lib/python3.9/site-packages/nipype/pipeline/plugins/multiproc.py", line 292, in _send_procs_to_workers
    num_subnodes = self.procs[jobid].num_subnodes()
  File "/opt/miniconda-latest/envs/neuro/lib/python3.9/site-packages/nipype/pipeline/engine/nodes.py", line 1309, in num_subnodes
    self._check_iterfield()
  File "/opt/miniconda-latest/envs/neuro/lib/python3.9/site-packages/nipype/pipeline/engine/nodes.py", line 1332, in _check_iterfield
    raise ValueError(
ValueError: Input data was not set but it is listed in iterfields.

and here:

240430-15:59:05,710 nipype.workflow ERROR:
	 Node ds_runLevel_contrast_maps failed to run on host a7afda7871d8.
240430-15:59:05,713 nipype.workflow ERROR:
	 Saving crash info to /scratch/crash-20240430-155905-neuro-ds_runLevel_contrast_maps-50cc048e-4c6a-4c48-a2aa-66993c641467.txt
Traceback (most recent call last):
  File "/opt/miniconda-latest/envs/neuro/lib/python3.9/site-packages/nipype/pipeline/plugins/multiproc.py", line 344, in _send_procs_to_workers
    self.procs[jobid].run(updatehash=updatehash)
  File "/opt/miniconda-latest/envs/neuro/lib/python3.9/site-packages/nipype/pipeline/engine/nodes.py", line 527, in run
    result = self._run_interface(execute=True)
  File "/opt/miniconda-latest/envs/neuro/lib/python3.9/site-packages/nipype/pipeline/engine/nodes.py", line 645, in _run_interface
    return self._run_command(execute)
  File "/opt/miniconda-latest/envs/neuro/lib/python3.9/site-packages/nipype/pipeline/engine/nodes.py", line 722, in _run_command
    result = self._interface.run(cwd=outdir, ignore_exception=True)
  File "/opt/miniconda-latest/envs/neuro/lib/python3.9/site-packages/nipype/interfaces/base/core.py", line 388, in run
    self._check_mandatory_inputs()
  File "/opt/miniconda-latest/envs/neuro/lib/python3.9/site-packages/nipype/interfaces/base/core.py", line 275, in _check_mandatory_inputs
    raise ValueError(msg)
ValueError: BIDSDataSink requires a value for input 'in_file'. For a list of required inputs, see BIDSDataSink.help()

and here:

When creating this crashfile, the results file corresponding
to the node could not be found.
240430-15:59:07,517 nipype.workflow ERROR:
	 could not run node: fitlins_wf.plot_run_contrast_matrix
240430-15:59:07,518 nipype.workflow ERROR:
	 could not run node: fitlins_wf.plot_run_contrast_matrix
240430-15:59:07,518 nipype.workflow ERROR:
	 could not run node: fitlins_wf.plot_run_contrast_matrix
240430-15:59:07,518 nipype.workflow ERROR:
	 could not run node: fitlins_wf.plot_run_contrast_matrix
240430-15:59:07,518 nipype.workflow ERROR:
	 could not run node: fitlins_wf.plot_run_contrast_matrix
240430-15:59:07,518 nipype.workflow ERROR:
	 could not run node: fitlins_wf.plot_run_contrast_matrix
240430-15:59:07,518 nipype.workflow ERROR:
	 could not run node: fitlins_wf.plot_run_contrast_matrix
240430-15:59:07,519 nipype.workflow ERROR:
	 could not run node: fitlins_wf.plot_run_contrast_matrix
240430-15:59:07,519 nipype.workflow ERROR:
	 could not run node: fitlins_wf.plot_run_contrast_matrix
240430-15:59:07,519 nipype.workflow ERROR:
	 could not run node: fitlins_wf.plot_run_contrast_matrix
240430-15:59:07,520 nipype.workflow ERROR:
	 could not run node: fitlins_wf.plot_run_contrast_matrix
240430-15:59:07,520 nipype.workflow ERROR:
	 could not run node: fitlins_wf.plot_run_contrast_matrix
FitLins failed: 12 raised. Re-raising first.

Finally this is where it crashed:

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/opt/miniconda-latest/envs/neuro/bin/fitlins", line 8, in <module>
    sys.exit(main())
  File "/opt/miniconda-latest/envs/neuro/lib/python3.9/site-packages/fitlins/cli/run.py", line 442, in main
    sys.exit(run_fitlins(sys.argv[1:]))
  File "/opt/miniconda-latest/envs/neuro/lib/python3.9/site-packages/fitlins/cli/run.py", line 419, in run_fitlins
    fitlins_wf.run(**plugin_settings)
  File "/opt/miniconda-latest/envs/neuro/lib/python3.9/site-packages/nipype/pipeline/engine/workflows.py", line 638, in run
    runner.run(execgraph, updatehash=updatehash, config=self.config)
  File "/opt/miniconda-latest/envs/neuro/lib/python3.9/site-packages/nipype/pipeline/plugins/base.py", line 212, in run
    raise error from cause
RuntimeError: 12 raised. Re-raising first.

Please let me know if there’s a specific part of the log that I can send here.

Ah, this is starting to make sense.

FitLins expects space-MNI152NLin2009cAsym by default. I’m also wondering why that output doesn’t exist in your fMRIPrep derivatives.

This other error is unrelated, and is due to a malformed Contrast specification.

This is your ConditionList:

               "ConditionList":[
                  "trial_type.face",
                  "trial_type.scrambled"
               ],

But you don’t list both of those variables in X.
You instead list trial_type.scrambledpix.

Therefore the variable trial_type.scrambled is not available for a Contrast.

For the record, we need to implement more checks of the model and command specification that would more gracefully catch errors such as these, so thank you! I’ve opened two issues in FitLins to track these improvements.
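
In the meantime, a rough sanity check along these lines (just a sketch, not part of FitLins) would have flagged the mismatch, by verifying that every ConditionList entry also appears in the node’s X:

import json

with open("model-001_smdl.json") as f:   # path to your model file
    model = json.load(f)

for node in model["Nodes"]:
    X = set(node.get("Model", {}).get("X", []))
    for contrast in node.get("Contrasts", []):
        missing = [c for c in contrast["ConditionList"] if c not in X]
        if missing:
            print(f"{node['Name']} / {contrast['Name']}: not in X -> {missing}")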

CC: @effigies

@adelavega Ah, that was a typo actually! I don’t have a “scrambled” condition in my trial_type; the name is actually “scrambledpix”. I fixed this in the ConditionList and FitLins now ran without any errors. However, when I open the HTML output (under derivatives/fitlins/reports/model-dsModel001.html), it seems the report cannot locate the output figures (see screenshot below). If I go to derivatives/fitlins/node-runLevel/sub-1, on the other hand, I can see effect, p, t, z, and variance statmaps for each of my 12 runs, and under derivatives/fitlins/node-runLevel/reports/sub-1/figures there are 12 run folders, each containing the figures that should appear in the HTML report.


@Steven Do you know what might cause the figures to be missing from the HTML output? I thought you might have some ideas on this. Thanks again for the help!

Hi @sajjad

Do you know if those files exist at all?

If you go to the locations the report points to, are the files actually there?

Do you get any runtime errors or warnings?

Sometimes the browser simply doesn’t show them correctly.

@adelavega Yes, the figures do exist. For example, at the subject level I can see derivatives/fitlins/node-subjectlevel/reports/sub-1/figures/sub-1_contrast-faceGtScrambled_stat-t_ortho.png
Similarly, figures were generated at the run level for each of my 12 runs, but the HTML output does not seem to be linked to them.
I did not get any warning or error messages.

What are the image URLs in the report?

@effigies At the top of my report, for instance (under the design matrix), the image has this path:
file:///out/node-runLevel/reports/sub-1/figures/run-1/sub-1_task-objectviewing_run-1_design.svg
That file does exist at this path, and I can see the design matrix when I open it directly, but it doesn’t show up in the report. I tried opening the HTML in another browser and that didn’t work either.

Two other parts of the report, under “Correlation matrix” and the first “Contrasts” section, are in the same situation. But strangely, although my NIfTI contrast files are generated and look reasonable (I visualized them in FSL), under each run’s contrast section I don’t even see an image icon; as the screenshot in my message above shows, I just see “Missing contrast skipped (used: --drop-missing)”.

Thanks for the help!

file:///out/... is what’s in the HTML source? I would expect a relative path.

The problem is likely that your browser is unwilling to pull in SVG files from file:/// links. You could spin up a quick web server to view the report with:

python -m http.server -d $OUTPUT

@effigies Thanks, Chris! I viewed the report through the web server but saw exactly the same thing as before. I think I’m missing something in my command or maybe in my model JSON file. I’m on a Mac but tested on Windows as well and ran into the same issue: again, a report with no images loaded. And even if I solve the problem of pulling in the SVG files, my contrast images don’t appear in the report at all (not even with broken paths):