Node loader failed to run on host

Summary of what happened:

Hi,
I’m trying to use FitLins for the first time and I’m struggling with building the model in the JSON file. I have a task that was measured in 3 sessions (out of 6) for each participant, across two different groups. I tried to start easy and build a model for 2 participants with just the contrast (no movement regressors whatsoever), but it seems I’m still doing something wrong in the JSON file and I can’t figure out what.

Command used (and if a helper script was used, a link to the helper script or the command generated):

fitlins /dataset/rawdata /dataset/analyzed participant \
-d /dataset/derivatives/fmriprep -w fitlins_cache \
--smoothing 6:dataset:iso -m /dataset/rawdata/models/model-faces_smdl.json 

Version:

Environment (Docker, Singularity / Apptainer, custom installation):

Data formatted according to a validatable standard? Please provide the output of the validator:

BIDS compatible; my derivatives folder comes from fMRIPrep.

Relevant log outputs (up to 20 lines):

The error I get when running fitlins:

fitlins /dataset/rawdata /dataset/analyzed participant -d /dataset/derivatives/fmriprep -w fitlins_cache --smoothing 6:dataset:iso -m /dataset/rawdata/models/model-faces_smdl.json 
Captured warning (<class 'UserWarning'>): `--estimator nistats` is a deprecated synonym for `--estimator nilearn`. Future versions will raise an error.
Captured warning (<class 'UserWarning'>): The PipelineDescription field was superseded by GeneratedBy in BIDS 1.4.0. You can use ``pybids upgrade`` to update your derivative dataset.
240315-12:15:32,595 nipype.workflow INFO:
         [Node] Setting-up "fitlins_wf.loader" in "/zi/home/miroslava.jindrova/fitlins_cache/fitlins_wf/loader".
240315-12:15:32,623 nipype.workflow INFO:
         [Node] Executing "loader" <fitlins.interfaces.bids.LoadBIDSModel>
/opt/miniconda-latest/envs/neuro/lib/python3.9/site-packages/bids/layout/validation.py:156: UserWarning: The PipelineDescription field was superseded by GeneratedBy in BIDS 1.4.0. You can use ``pybids upgrade`` to update your derivative dataset.
  warnings.warn("The PipelineDescription field was superseded "
240315-12:17:03,654 nipype.workflow INFO:
         [Node] Finished "loader", elapsed time 91.015491s.
240315-12:17:03,655 nipype.workflow WARNING:
         Storing result file without outputs
240315-12:17:03,662 nipype.workflow WARNING:
         [Node] Error on "fitlins_wf.loader" (/zi/home/miroslava.jindrova/fitlins_cache/fitlins_wf/loader)
240315-12:17:04,267 nipype.workflow ERROR:
         Node loader failed to run on host zislrds0068.zi.local.
240315-12:17:04,268 nipype.workflow ERROR:
         Saving crash info to /zi/home/miroslava.jindrova/fitlins_cache/crash-20240315-121704-miroslava.jindrova-loader-43f6b8c7-b1f6-463d-af78-42c0a68d3551.txt
Traceback (most recent call last):
  File "/opt/miniconda-latest/envs/neuro/lib/python3.9/site-packages/nipype/pipeline/plugins/multiproc.py", line 67, in run_node
    result["result"] = node.run(updatehash=updatehash)
  File "/opt/miniconda-latest/envs/neuro/lib/python3.9/site-packages/nipype/pipeline/engine/nodes.py", line 527, in run
    result = self._run_interface(execute=True)
  File "/opt/miniconda-latest/envs/neuro/lib/python3.9/site-packages/nipype/pipeline/engine/nodes.py", line 645, in _run_interface
    return self._run_command(execute)
  File "/opt/miniconda-latest/envs/neuro/lib/python3.9/site-packages/nipype/pipeline/engine/nodes.py", line 771, in _run_command
    raise NodeExecutionError(msg)
nipype.pipeline.engine.nodes.NodeExecutionError: Exception raised while executing Node loader.

Traceback:
        Traceback (most recent call last):
          File "/opt/miniconda-latest/envs/neuro/lib/python3.9/site-packages/nipype/interfaces/base/core.py", line 398, in run
            runtime = self._run_interface(runtime)
          File "/opt/miniconda-latest/envs/neuro/lib/python3.9/site-packages/fitlins/interfaces/bids.py", line 248, in _run_interface
            self._results['all_specs'] = self._load_graph(runtime, graph)
          File "/opt/miniconda-latest/envs/neuro/lib/python3.9/site-packages/fitlins/interfaces/bids.py", line 256, in _load_graph
            specs = node.run(inputs, group_by=node.group_by, **filters)
          File "/opt/miniconda-latest/envs/neuro/lib/python3.9/site-packages/bids/modeling/statsmodels.py", line 471, in run
            node_output = BIDSStatsModelsNodeOutput(
          File "/opt/miniconda-latest/envs/neuro/lib/python3.9/site-packages/bids/modeling/statsmodels.py", line 593, in __init__
            df = reduce(pd.DataFrame.merge, dfs)
        TypeError: reduce() of empty sequence with no initial value


240315-12:17:06,266 nipype.workflow ERROR:
         could not run node: fitlins_wf.loader
FitLins failed: Traceback (most recent call last):
  File "/opt/miniconda-latest/envs/neuro/lib/python3.9/site-packages/nipype/pipeline/plugins/multiproc.py", line 67, in run_node
    result["result"] = node.run(updatehash=updatehash)
  File "/opt/miniconda-latest/envs/neuro/lib/python3.9/site-packages/nipype/pipeline/engine/nodes.py", line 527, in run
    result = self._run_interface(execute=True)
  File "/opt/miniconda-latest/envs/neuro/lib/python3.9/site-packages/nipype/pipeline/engine/nodes.py", line 645, in _run_interface
    return self._run_command(execute)
  File "/opt/miniconda-latest/envs/neuro/lib/python3.9/site-packages/nipype/pipeline/engine/nodes.py", line 771, in _run_command
    raise NodeExecutionError(msg)
nipype.pipeline.engine.nodes.NodeExecutionError: Exception raised while executing Node loader.

Traceback:
        Traceback (most recent call last):
          File "/opt/miniconda-latest/envs/neuro/lib/python3.9/site-packages/nipype/interfaces/base/core.py", line 398, in run
            runtime = self._run_interface(runtime)
          File "/opt/miniconda-latest/envs/neuro/lib/python3.9/site-packages/fitlins/interfaces/bids.py", line 248, in _run_interface
            self._results['all_specs'] = self._load_graph(runtime, graph)
          File "/opt/miniconda-latest/envs/neuro/lib/python3.9/site-packages/fitlins/interfaces/bids.py", line 256, in _load_graph
            specs = node.run(inputs, group_by=node.group_by, **filters)
          File "/opt/miniconda-latest/envs/neuro/lib/python3.9/site-packages/bids/modeling/statsmodels.py", line 471, in run
            node_output = BIDSStatsModelsNodeOutput(
          File "/opt/miniconda-latest/envs/neuro/lib/python3.9/site-packages/bids/modeling/statsmodels.py", line 593, in __init__
            df = reduce(pd.DataFrame.merge, dfs)
        TypeError: reduce() of empty sequence with no initial value


Traceback (most recent call last):
  File "/opt/miniconda-latest/envs/neuro/bin/fitlins", line 8, in <module>
    sys.exit(main())
  File "/opt/miniconda-latest/envs/neuro/lib/python3.9/site-packages/fitlins/cli/run.py", line 442, in main
    sys.exit(run_fitlins(sys.argv[1:]))
  File "/opt/miniconda-latest/envs/neuro/lib/python3.9/site-packages/fitlins/cli/run.py", line 419, in run_fitlins
    fitlins_wf.run(**plugin_settings)
  File "/opt/miniconda-latest/envs/neuro/lib/python3.9/site-packages/nipype/pipeline/engine/workflows.py", line 638, in run
    runner.run(execgraph, updatehash=updatehash, config=self.config)
  File "/opt/miniconda-latest/envs/neuro/lib/python3.9/site-packages/nipype/pipeline/plugins/base.py", line 212, in run
    raise error from cause
RuntimeError: Traceback (most recent call last):
  File "/opt/miniconda-latest/envs/neuro/lib/python3.9/site-packages/nipype/pipeline/plugins/multiproc.py", line 67, in run_node
    result["result"] = node.run(updatehash=updatehash)
  File "/opt/miniconda-latest/envs/neuro/lib/python3.9/site-packages/nipype/pipeline/engine/nodes.py", line 527, in run
    result = self._run_interface(execute=True)
  File "/opt/miniconda-latest/envs/neuro/lib/python3.9/site-packages/nipype/pipeline/engine/nodes.py", line 645, in _run_interface
    return self._run_command(execute)
  File "/opt/miniconda-latest/envs/neuro/lib/python3.9/site-packages/nipype/pipeline/engine/nodes.py", line 771, in _run_command
    raise NodeExecutionError(msg)
nipype.pipeline.engine.nodes.NodeExecutionError: Exception raised while executing Node loader.

Traceback:
        Traceback (most recent call last):
          File "/opt/miniconda-latest/envs/neuro/lib/python3.9/site-packages/nipype/interfaces/base/core.py", line 398, in run
            runtime = self._run_interface(runtime)
          File "/opt/miniconda-latest/envs/neuro/lib/python3.9/site-packages/fitlins/interfaces/bids.py", line 248, in _run_interface
            self._results['all_specs'] = self._load_graph(runtime, graph)
          File "/opt/miniconda-latest/envs/neuro/lib/python3.9/site-packages/fitlins/interfaces/bids.py", line 256, in _load_graph
            specs = node.run(inputs, group_by=node.group_by, **filters)
          File "/opt/miniconda-latest/envs/neuro/lib/python3.9/site-packages/bids/modeling/statsmodels.py", line 471, in run
            node_output = BIDSStatsModelsNodeOutput(
          File "/opt/miniconda-latest/envs/neuro/lib/python3.9/site-packages/bids/modeling/statsmodels.py", line 593, in __init__
            df = reduce(pd.DataFrame.merge, dfs)
        TypeError: reduce() of empty sequence with no initial value

Screenshots / relevant information:


The JSON file:

{
  "Name": "Faces FvS",
  "BIDSModelVersion": "1.0.0",
  "Input": {"subject": ["SUB07", "SUB08"], "task": ["faces"], "session": ["pre","post", "fu"]},
  "Description": "A simple two-condition contrast",
  "Nodes": [
    {
      "Level": "Run",
      "Name": "run_level",
      "GroupBy": ["run", "subject", "session"],
      "Model": {"X": [1, "face", "scrambled"], "Type": "glm"},
      "Contrasts": [
        {
          "Name": "FvS",
          "ConditionList": ["face", "scrambled"],
          "Weights": [1, -1],
          "Test": "t"
        }
      ]
    },
        {
      "Level": "Subject",
      "Name": "subject",
      "GroupBy": ["contrast", "subject"],
      "Model": {
        "X": [1],
        "Type": "meta"
      },
      "DummyContrasts": {"Test": "t"}
    },
        {
      "Level": "Session",
      "Name": "session_level",
      "GroupBy": ["contrast", "subject", "session"],
      "Model": {
        "X": [1],
        "Type": "meta"
      },
      "DummyContrasts": {"Test": "t"}
    },
    {
      "Level": "Dataset",
      "Name": "dataset_level",
      "GroupBy": ["contrast", "session"],
      "Model": {"X": [1], "Type": "glm"},
      "DummyContrasts": {"Test": "t"}
    }
  ]
}

Any ideas on where the problem is would be much appreciated!
Mirus

Hmm, this model looks fine to me in principle; I wonder if there’s something amiss in your event files.

Do you mind sharing an example?

My TSV files look like this:

onset duration trial_type
50.0087 18.1167 face
110.5088 18.1167 face
171.509 18.1166 face
231.0091 18.1167 face
261.0091 18.1168 face
320.5093 18.1167 face
411.5095 18.1167 face
470.0096 18.1167 face
530.0097 18.1167 face
591.0099 18.1167 face
650.51 18.1167 face
681.5101 18.1167 face
20.0086 18.1167 scrambled
81.0088 18.1167 scrambled
141.5089 18.1167 scrambled
200.509 18.1167 scrambled
291.0092 18.1167 scrambled
351.5093 18.1167 scrambled
380.0094 18.1167 scrambled
440.5095 18.1167 scrambled
501.0097 18.1167 scrambled
561.5098 18.1166999999999 scrambled
620.0099 18.1167 scrambled
710.5101 18.1167 scrambled

I don’t see anything obviously wrong with this model or events files.

@effigies Do you see anything obvious?

Typically this type of error occurs if the Input filtering or GroupBy somehow fails and no inputs are passed forward to the next node.
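If it helps to narrow that down, you can reproduce the loader step with pybids directly and look at what each node receives. This is only a sketch (the paths, subject labels and model file are taken from your command and may need adjusting, and attribute names can differ a bit between pybids versions):

# Sketch: rebuild the layout and model graph the way FitLins' loader does,
# then run the root (run-level) node to see what comes out of each grouping.
# Paths and entity values are assumed from the command in this thread.
from bids.layout import BIDSLayout
from bids.modeling import BIDSStatsModelsGraph

layout = BIDSLayout("/dataset/rawdata",
                    derivatives="/dataset/derivatives/fmriprep")

graph = BIDSStatsModelsGraph(
    layout, "/dataset/rawdata/models/model-faces_smdl.json")
graph.load_collections(subject=["SUB07", "SUB08"], task="faces")

root = graph.root_node
outputs = root.run(group_by=root.group_by)  # same call that fails in the traceback
print(len(outputs), "run-level outputs")
for out in outputs:
    print(out.entities, out.X.shape)  # .X should be the design matrix

If this raises the same "reduce() of empty sequence" error, the grouping is producing a combination with no variables behind it, and running it interactively at least lets you poke at the graph to see which combination that is.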

Not 100% sure why it would give rise to this sort of error, but if you want to group by session at the third (session) level, I think you need to group by session at the second (subject) level as well.

It has been a bit since I’ve dug into one of these, so I hope I’m not way off here.

Thanks for your input!

Generally speaking, does this error sound to you like there is something wrong in the model itself? Or could the problem be somewhere else (e.g. the FitLins installation)?

I have tried changing the grouping according to what Chris wrote, and it leads to the same error. I have also tried simplifying the model even more by running just the run level, and I still get the same outcome…

It’s probably a bug caused by an interaction with your dataset. My guess is “session” could be causing a problem.

Here’s about the simplest model I can think of, if you want to try it.
You could also try removing “Input”.

Question: Did every subject in every session and run see the “face” and “scrambled” conditions?
The only thing I can think of is that somehow the sessions are different in a way that is leading to different inputs for each GroupBy.

{
  "Name": "Faces only",
  "BIDSModelVersion": "1.0.0",
  "Input": {
    "subject": [
      "SUB07"
    ]
  },
  "Nodes": [
    {
      "Level": "Run",
      "Name": "run_level",
      "GroupBy": [
        "run",
        "subject",
        "session"
      ],
      "Model": {
        "X": [
          1,
          "face"
        ],
        "Type": "glm"
      },
      "DummyContrasts": {
        "Test": "t"
      }
    }
  ]
}

It might also be helpful to see some example file names.
E.g., file names for those two subjects across all sessions, for both the events files and the functional files.

Another possibility is that pybids is somehow not able to load the events correctly, e.g. if the events file hierarchy is violated in some way.
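One quick way to check that (just a sketch, reusing the paths from your command) is to ask pybids for the events files directly and print the conditions it finds in each:

# Sketch: confirm pybids finds the events files and that every session
# contains both trial types. Paths and labels assumed from this thread.
import pandas as pd
from bids.layout import BIDSLayout

layout = BIDSLayout("/dataset/rawdata",
                    derivatives="/dataset/derivatives/fmriprep")

events = layout.get(suffix="events", extension=".tsv",
                    task="faces", subject="SUB07")
print(len(events), "events files found")
for ev in events:
    df = pd.read_csv(ev.path, sep="\t")
    print(ev.filename, sorted(df["trial_type"].unique()))

If a file is missing from that list, or one session lacks one of the conditions, that could explain why some grouping ends up with no inputs.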

Hi, sorry for the pause. I have tried the simple model you proposed and unfortunately it doesn’t solve the problem.

Here you can see the naming of my files:

Sorry, forgot the derivatives folder:

To answer your question: Yes, all participants get both conditions. The task is the same for all of them in all 3 sessions.

Ah, it looks like you actually don’t have a run entity defined, correct?

This is fine, but I was under the impression that there were several runs per session, when actually there’s only subject and session.

You should remove “run” from the “GroupBy”, as you don’t actually have that as a variable, and thus can’t group on it.

If your files had a run explicitly defined (e.g. ...task-faces_run-1_bold.nii.gz), then this would be fine, but you don’t actually have it in the filename, so its value is None.
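If you want to double-check what pybids actually sees, something like this (again only a sketch, with the same assumed paths) will show whether any run entity exists and what entities each preprocessed BOLD file carries:

# Sketch: list run IDs (expected to be empty here) and the entities parsed
# from the preprocessed BOLD filenames. Paths assumed from this thread.
from bids.layout import BIDSLayout

layout = BIDSLayout("/dataset/rawdata",
                    derivatives="/dataset/derivatives/fmriprep")

print("run ids:", layout.get(return_type="id", target="run", task="faces"))

bolds = layout.get(suffix="bold", extension=[".nii", ".nii.gz"],
                   task="faces", subject="SUB07", scope="derivatives")
for f in bolds:
    print(f.filename, f.get_entities())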

So, just to be clear, can you try this:

{
  "Name": "Faces only",
  "BIDSModelVersion": "1.0.0",
  "Input": {
    "subject": [
      "SUB07"
    ]
  },
  "Nodes": [
    {
      "Level": "Run",
      "Name": "run_level",
      "GroupBy": [
        "subject",
        "session"
      ],
      "Model": {
        "X": [
          1,
          "face"
        ],
        "Type": "glm"
      },
      "DummyContrasts": {
        "Test": "t"
      }
    }
  ]
}

I have tried that too but still got the same error message. I also have to define the task; otherwise it looks for other tasks and gives me an error like this:

Captured warning (<class 'UserWarning'>): `--estimator nistats` is a deprecated synonym for `--estimator nilearn`. Future versions will raise an error.
Captured warning (<class 'UserWarning'>): The PipelineDescription field was superseded by GeneratedBy in BIDS 1.4.0. You can use ``pybids upgrade`` to update your derivative dataset.
240402-12:08:36,516 nipype.workflow INFO:
         [Node] Setting-up "fitlins_wf.loader" in "/zi/home/miroslava.jindrova/fitlins_cache/fitlins_wf/loader".
240402-12:08:36,544 nipype.workflow INFO:
         [Node] Executing "loader" <fitlins.interfaces.bids.LoadBIDSModel>
/opt/miniconda-latest/envs/neuro/lib/python3.9/site-packages/bids/layout/validation.py:156: UserWarning: The PipelineDescription field was superseded by GeneratedBy in BIDS 1.4.0. You can use ``pybids upgrade`` to update your derivative dataset.
  warnings.warn("The PipelineDescription field was superseded "
240402-12:12:33,577 nipype.workflow INFO:
         [Node] Finished "loader", elapsed time 236.96664s.
240402-12:12:33,577 nipype.workflow WARNING:
         Storing result file without outputs
240402-12:12:33,587 nipype.workflow WARNING:
         [Node] Error on "fitlins_wf.loader" (/zi/home/miroslava.jindrova/fitlins_cache/fitlins_wf/loader)
240402-12:12:34,282 nipype.workflow ERROR:
         Node loader failed to run on host zislrds0068.zi.local.
240402-12:12:34,283 nipype.workflow ERROR:
         Saving crash info to /zi/home/miroslava.jindrova/fitlins_cache/crash-20240402-121234-miroslava.jindrova-loader-f87fe8d3-94a6-4361-a091-764c13585d88.txt
Traceback (most recent call last):
  File "/opt/miniconda-latest/envs/neuro/lib/python3.9/site-packages/nipype/pipeline/plugins/multiproc.py", line 67, in run_node
    result["result"] = node.run(updatehash=updatehash)
  File "/opt/miniconda-latest/envs/neuro/lib/python3.9/site-packages/nipype/pipeline/engine/nodes.py", line 527, in run
    result = self._run_interface(execute=True)
  File "/opt/miniconda-latest/envs/neuro/lib/python3.9/site-packages/nipype/pipeline/engine/nodes.py", line 645, in _run_interface
    return self._run_command(execute)
  File "/opt/miniconda-latest/envs/neuro/lib/python3.9/site-packages/nipype/pipeline/engine/nodes.py", line 771, in _run_command
    raise NodeExecutionError(msg)
nipype.pipeline.engine.nodes.NodeExecutionError: Exception raised while executing Node loader.

Traceback:
        Traceback (most recent call last):
          File "/opt/miniconda-latest/envs/neuro/lib/python3.9/site-packages/nibabel/loadsave.py", line 90, in load
            stat_result = os.stat(filename)
        OSError: [Errno 126] Required key not available: '/zi/flstorage/group_psm/AG-Paret/Projects/BrainBoost/data_analysis/dataset/derivatives/fmriprep/sub-SUB07/ses-post/func/sub-SUB07_ses-post_task-rest1_space-MNI152NLin2009cAsym_desc-preproc_bold.nii'

        During handling of the above exception, another exception occurred:

        Traceback (most recent call last):
          File "/opt/miniconda-latest/envs/neuro/lib/python3.9/site-packages/bids/variables/io.py", line 203, in _load_time_variables
            nvols = _get_nvols(img_f)
          File "/opt/miniconda-latest/envs/neuro/lib/python3.9/site-packages/bids/variables/io.py", line 105, in _get_nvols
            img = nb.load(img_f)
          File "/opt/miniconda-latest/envs/neuro/lib/python3.9/site-packages/nibabel/loadsave.py", line 92, in load
            raise FileNotFoundError(f"No such file or no access: '{filename}'")
        FileNotFoundError: No such file or no access: '/zi/flstorage/group_psm/AG-Paret/Projects/BrainBoost/data_analysis/dataset/derivatives/fmriprep/sub-SUB07/ses-post/func/sub-SUB07_ses-post_task-rest1_space-MNI152NLin2009cAsym_desc-preproc_bold.nii'

        The above exception was the direct cause of the following exception:

        Traceback (most recent call last):
          File "/opt/miniconda-latest/envs/neuro/lib/python3.9/site-packages/nipype/interfaces/base/core.py", line 398, in run
            runtime = self._run_interface(runtime)
          File "/opt/miniconda-latest/envs/neuro/lib/python3.9/site-packages/fitlins/interfaces/bids.py", line 246, in _run_interface
            graph.load_collections(**selectors)
          File "/opt/miniconda-latest/envs/neuro/lib/python3.9/site-packages/bids/modeling/statsmodels.py", line 198, in load_collections
            collections = self.layout.get_collections(node.level, drop_na=drop_na,
          File "/opt/miniconda-latest/envs/neuro/lib/python3.9/site-packages/bids/layout/layout.py", line 860, in get_collections
            index = load_variables(self, types=types, levels=level,
          File "/opt/miniconda-latest/envs/neuro/lib/python3.9/site-packages/bids/variables/io.py", line 93, in load_variables
            dataset = _load_time_variables(layout, dataset, scope=scope, **_kwargs)
          File "/opt/miniconda-latest/envs/neuro/lib/python3.9/site-packages/bids/variables/io.py", line 214, in _load_time_variables
            raise ValueError(msg) from e
        ValueError: Unable to extract scan duration from one or more BOLD runs, and no scan_length argument was provided as a fallback. Please check that the image files are available, or manually specify the scan duration.


240402-12:12:36,282 nipype.workflow ERROR:
         could not run node: fitlins_wf.loader
FitLins failed: Traceback (most recent call last):
  File "/opt/miniconda-latest/envs/neuro/lib/python3.9/site-packages/nipype/pipeline/plugins/multiproc.py", line 67, in run_node
    result["result"] = node.run(updatehash=updatehash)
  File "/opt/miniconda-latest/envs/neuro/lib/python3.9/site-packages/nipype/pipeline/engine/nodes.py", line 527, in run
    result = self._run_interface(execute=True)
  File "/opt/miniconda-latest/envs/neuro/lib/python3.9/site-packages/nipype/pipeline/engine/nodes.py", line 645, in _run_interface
    return self._run_command(execute)
  File "/opt/miniconda-latest/envs/neuro/lib/python3.9/site-packages/nipype/pipeline/engine/nodes.py", line 771, in _run_command
    raise NodeExecutionError(msg)
nipype.pipeline.engine.nodes.NodeExecutionError: Exception raised while executing Node loader.

Traceback:
        Traceback (most recent call last):
          File "/opt/miniconda-latest/envs/neuro/lib/python3.9/site-packages/nibabel/loadsave.py", line 90, in load
            stat_result = os.stat(filename)
        OSError: [Errno 126] Required key not available: '/zi/flstorage/group_psm/AG-Paret/Projects/BrainBoost/data_analysis/dataset/derivatives/fmriprep/sub-SUB07/ses-post/func/sub-SUB07_ses-post_task-rest1_space-MNI152NLin2009cAsym_desc-preproc_bold.nii'

        During handling of the above exception, another exception occurred:

        Traceback (most recent call last):
          File "/opt/miniconda-latest/envs/neuro/lib/python3.9/site-packages/bids/variables/io.py", line 203, in _load_time_variables
            nvols = _get_nvols(img_f)
          File "/opt/miniconda-latest/envs/neuro/lib/python3.9/site-packages/bids/variables/io.py", line 105, in _get_nvols
            img = nb.load(img_f)
          File "/opt/miniconda-latest/envs/neuro/lib/python3.9/site-packages/nibabel/loadsave.py", line 92, in load
            raise FileNotFoundError(f"No such file or no access: '{filename}'")
        FileNotFoundError: No such file or no access: '/zi/flstorage/group_psm/AG-Paret/Projects/BrainBoost/data_analysis/dataset/derivatives/fmriprep/sub-SUB07/ses-post/func/sub-SUB07_ses-post_task-rest1_space-MNI152NLin2009cAsym_desc-preproc_bold.nii'

        The above exception was the direct cause of the following exception:

        Traceback (most recent call last):
          File "/opt/miniconda-latest/envs/neuro/lib/python3.9/site-packages/nipype/interfaces/base/core.py", line 398, in run
            runtime = self._run_interface(runtime)
          File "/opt/miniconda-latest/envs/neuro/lib/python3.9/site-packages/fitlins/interfaces/bids.py", line 246, in _run_interface
            graph.load_collections(**selectors)
          File "/opt/miniconda-latest/envs/neuro/lib/python3.9/site-packages/bids/modeling/statsmodels.py", line 198, in load_collections
            collections = self.layout.get_collections(node.level, drop_na=drop_na,
          File "/opt/miniconda-latest/envs/neuro/lib/python3.9/site-packages/bids/layout/layout.py", line 860, in get_collections
            index = load_variables(self, types=types, levels=level,
          File "/opt/miniconda-latest/envs/neuro/lib/python3.9/site-packages/bids/variables/io.py", line 93, in load_variables
            dataset = _load_time_variables(layout, dataset, scope=scope, **_kwargs)
          File "/opt/miniconda-latest/envs/neuro/lib/python3.9/site-packages/bids/variables/io.py", line 214, in _load_time_variables
            raise ValueError(msg) from e
        ValueError: Unable to extract scan duration from one or more BOLD runs, and no scan_length argument was provided as a fallback. Please check that the image files are available, or manually specify the scan duration.


Traceback (most recent call last):
  File "/opt/miniconda-latest/envs/neuro/bin/fitlins", line 8, in <module>
    sys.exit(main())
  File "/opt/miniconda-latest/envs/neuro/lib/python3.9/site-packages/fitlins/cli/run.py", line 442, in main
    sys.exit(run_fitlins(sys.argv[1:]))
  File "/opt/miniconda-latest/envs/neuro/lib/python3.9/site-packages/fitlins/cli/run.py", line 419, in run_fitlins
    fitlins_wf.run(**plugin_settings)
  File "/opt/miniconda-latest/envs/neuro/lib/python3.9/site-packages/nipype/pipeline/engine/workflows.py", line 638, in run
    runner.run(execgraph, updatehash=updatehash, config=self.config)
  File "/opt/miniconda-latest/envs/neuro/lib/python3.9/site-packages/nipype/pipeline/plugins/base.py", line 212, in run
    raise error from cause
RuntimeError: Traceback (most recent call last):
  File "/opt/miniconda-latest/envs/neuro/lib/python3.9/site-packages/nipype/pipeline/plugins/multiproc.py", line 67, in run_node
    result["result"] = node.run(updatehash=updatehash)
  File "/opt/miniconda-latest/envs/neuro/lib/python3.9/site-packages/nipype/pipeline/engine/nodes.py", line 527, in run
    result = self._run_interface(execute=True)
  File "/opt/miniconda-latest/envs/neuro/lib/python3.9/site-packages/nipype/pipeline/engine/nodes.py", line 645, in _run_interface
    return self._run_command(execute)
  File "/opt/miniconda-latest/envs/neuro/lib/python3.9/site-packages/nipype/pipeline/engine/nodes.py", line 771, in _run_command
    raise NodeExecutionError(msg)
nipype.pipeline.engine.nodes.NodeExecutionError: Exception raised while executing Node loader.

Traceback:
        Traceback (most recent call last):
          File "/opt/miniconda-latest/envs/neuro/lib/python3.9/site-packages/nibabel/loadsave.py", line 90, in load
            stat_result = os.stat(filename)
        OSError: [Errno 126] Required key not available: '/zi/flstorage/group_psm/AG-Paret/Projects/BrainBoost/data_analysis/dataset/derivatives/fmriprep/sub-SUB07/ses-post/func/sub-SUB07_ses-post_task-rest1_space-MNI152NLin2009cAsym_desc-preproc_bold.nii'

        During handling of the above exception, another exception occurred:

        Traceback (most recent call last):
          File "/opt/miniconda-latest/envs/neuro/lib/python3.9/site-packages/bids/variables/io.py", line 203, in _load_time_variables
            nvols = _get_nvols(img_f)
          File "/opt/miniconda-latest/envs/neuro/lib/python3.9/site-packages/bids/variables/io.py", line 105, in _get_nvols
            img = nb.load(img_f)
          File "/opt/miniconda-latest/envs/neuro/lib/python3.9/site-packages/nibabel/loadsave.py", line 92, in load
            raise FileNotFoundError(f"No such file or no access: '{filename}'")
        FileNotFoundError: No such file or no access: '/zi/flstorage/group_psm/AG-Paret/Projects/BrainBoost/data_analysis/dataset/derivatives/fmriprep/sub-SUB07/ses-post/func/sub-SUB07_ses-post_task-rest1_space-MNI152NLin2009cAsym_desc-preproc_bold.nii'

        The above exception was the direct cause of the following exception:

        Traceback (most recent call last):
          File "/opt/miniconda-latest/envs/neuro/lib/python3.9/site-packages/nipype/interfaces/base/core.py", line 398, in run
            runtime = self._run_interface(runtime)
          File "/opt/miniconda-latest/envs/neuro/lib/python3.9/site-packages/fitlins/interfaces/bids.py", line 246, in _run_interface
            graph.load_collections(**selectors)
          File "/opt/miniconda-latest/envs/neuro/lib/python3.9/site-packages/bids/modeling/statsmodels.py", line 198, in load_collections
            collections = self.layout.get_collections(node.level, drop_na=drop_na,
          File "/opt/miniconda-latest/envs/neuro/lib/python3.9/site-packages/bids/layout/layout.py", line 860, in get_collections
            index = load_variables(self, types=types, levels=level,
          File "/opt/miniconda-latest/envs/neuro/lib/python3.9/site-packages/bids/variables/io.py", line 93, in load_variables
            dataset = _load_time_variables(layout, dataset, scope=scope, **_kwargs)
          File "/opt/miniconda-latest/envs/neuro/lib/python3.9/site-packages/bids/variables/io.py", line 214, in _load_time_variables
            raise ValueError(msg) from e
        ValueError: Unable to extract scan duration from one or more BOLD runs, and no scan_length argument was provided as a fallback. Please check that the image files are available, or manually specify the scan duration.

Here it looks for rest1 instead of faces, but when I define

    "task": [
      "faces"
    ]

I still get the error from my initial post.

Ah, sorry, excluding the task was my mistake.

My guess at the moment is that the lack of run_id is causing the problem (not your fault, ours).

Without access to the dataset it’s almost impossible to debug at the moment, unfortunately. If you want to get in touch via email, let me know and we can try to fix this, but it probably won’t resolve your immediate problem.
