Post FMRIPREP modelling questions

Hi guys,

I am trying to adapt the Jupyter notebook written by Chris to my data:

At the modelgen stage of the level-1 stats with FSL FEAT I get this error:

190204-19:53:49,130 nipype.workflow INFO:
[Node] Setting-up "d0a98d67288f6564381b8944704bf770" in "/Users/dima/Desktop/odormixture/jupyter/nipype_mem/nipype-interfaces-fsl-model-FEATModel/d0a98d67288f6564381b8944704bf770".
190204-19:53:49,142 nipype.workflow INFO:
[Node] Running "d0a98d67288f6564381b8944704bf770" ("nipype.interfaces.fsl.model.FEATModel"), a CommandLine Interface with command:
feat_model run0
190204-19:53:49,355 nipype.interface INFO:
stdout 2019-02-04T19:53:49.354427:
190204-19:53:49,358 nipype.interface INFO:
stdout 2019-02-04T19:53:49.354427:error: pinv(): svd failed
190204-19:53:49,359 nipype.interface INFO:
stdout 2019-02-04T19:53:49.354427:
190204-19:53:49,361 nipype.interface INFO:
stderr 2019-02-04T19:53:49.361079:libc++abi.dylib: terminating with uncaught exception of type std::runtime_error: pinv(): svd failed
190204-19:53:49,461 nipype.workflow WARNING:
[Node] Error on "d0a98d67288f6564381b8944704bf770" (/Users/dima/Desktop/odormixture/jupyter/nipype_mem/nipype-interfaces-fsl-model-FEATModel/d0a98d67288f6564381b8944704bf770)

RuntimeError Traceback (most recent call last)
in ()
1 modelgen = mem.cache(fsl.model.FEATModel)
2 modelgen_results = modelgen(fsf_file=level1design_results.outputs.fsf_files,
----> 3 ev_files=level1design_results.outputs.ev_files)
4 modelgen_results.outputs

~/anaconda3/lib/python3.7/site-packages/nipype/caching/ in call(self, **kwargs)
79 cwd = os.getcwd()
80 try:
--> 81 out =
82 finally:
83 # changes to the node directory - if something goes

~/anaconda3/lib/python3.7/site-packages/nipype/pipeline/engine/ in run(self, updatehash)
470 try:
--> 471 result = self._run_interface(execute=True)
472 except Exception:
473 logger.warning('[Node] Error on "%s" (%s)', self.fullname, outdir)

~/anaconda3/lib/python3.7/site-packages/nipype/pipeline/engine/ in _run_interface(self, execute, updatehash)
553 self._update_hash()
554 return self._load_results()
--> 555 return self._run_command(execute)
557 def _load_results(self):

~/anaconda3/lib/python3.7/site-packages/nipype/pipeline/engine/ in _run_command(self, execute, copyfiles)
634 try:
--> 635 result =
636 except Exception as msg:
637 result.runtime.stderr = '%s\n\n%s'.format(

~/anaconda3/lib/python3.7/site-packages/nipype/interfaces/base/ in run(self, cwd, ignore_exception, **inputs)
520 try:
521 runtime = self._pre_run_hook(runtime)
--> 522 runtime = self._run_interface(runtime)
523 runtime = self._post_run_hook(runtime)
524 outputs = self.aggregate_outputs(runtime)

~/anaconda3/lib/python3.7/site-packages/nipype/interfaces/base/ in _run_interface(self, runtime, correct_return_codes)
1036 if runtime.returncode is None or
1037 runtime.returncode not in correct_return_codes:
--> 1038 self.raise_exception(runtime)
1040 return runtime

~/anaconda3/lib/python3.7/site-packages/nipype/interfaces/base/ in raise_exception(self, runtime)
973 ('Command:\n{cmdline}\nStandard output:\n{stdout}\n'
974 'Standard error:\n{stderr}\nReturn code: {returncode}'
--> 975 ).format(**runtime.dictcopy()))
977 def _get_environ(self):
RuntimeError: Command:
feat_model run0
Standard output:

error: pinv(): svd failed

Standard error:
libc++abi.dylib: terminating with uncaught exception of type std::runtime_error: pinv(): svd failed
Return code: -6

Does anybody have any idea what the reason might be?



I have a guess – fmriprep has changed the names of some of its outputs since that modeling script was written, and now you have a bunch of empty EVs that FEAT tries to do an SVD on and doesn’t like it. For example, FramewiseDisplacement is now framewise_displacement.
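If it is indeed the renaming, a small compatibility shim can paper over it when you load the confounds. This is just a sketch: the rename map below covers only two columns as an illustration, the helper name is mine, and the exact old/new names depend on your fmriprep version.

```python
import pandas as pd

# Illustrative old (CamelCase) -> new (snake_case) confound names;
# the exact set depends on your fmriprep version.
RENAMES = {
    "FramewiseDisplacement": "framewise_displacement",
    "aCompCor00": "a_comp_cor_00",
}

def get_confound(confounds, name):
    """Fetch a confound column, accepting either the old or the new name."""
    for candidate in (name, RENAMES.get(name)):
        if candidate in confounds.columns:
            return confounds[candidate]
    raise KeyError("no column named %r (or a known rename) in confounds" % name)
```

Looking up a column through a helper like this fails loudly with a KeyError instead of silently producing an empty EV file for FEAT to choke on.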

One way to try to figure this out is to find the modelgen directory in nipype's working directory and look at the node's .json file. (Or you can look at the _report/report.rst file for the same information.) Figure out what files are being used as ev_files, then take a look at those files and make sure they make sense. Are they empty? Are they single columns of varying values, with as many entries as you have TRs? Alternatively, are they three-column files where your onsets, durations, and weightings look right?
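As a concrete version of that check, here is a small sketch for inspecting one EV file (the helper name and the single-column vs. three-column assumptions are mine, not from the notebook):

```python
import numpy as np

def check_ev_file(path, n_trs=None):
    """Load a FEAT EV file and report its shape plus any obvious problems.

    Single-column confound EVs should have one value per TR; task EVs are
    three columns (onset, duration, weight).
    """
    data = np.loadtxt(path, ndmin=2)
    n_rows, n_cols = data.shape
    problems = []
    if n_rows == 0:
        problems.append("file is empty")
    if np.isnan(data).any():
        problems.append("contains NaNs, which FEAT cannot handle")
    if n_cols == 1 and n_trs is not None and n_rows != n_trs:
        problems.append("%d rows but %d TRs" % (n_rows, n_trs))
    return (n_rows, n_cols), problems
```

Running this over every path listed under ev_files in the node's .json report should point straight at the file that makes the SVD fail.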


Thank you very much for the tip! The problem was that the number of rows in FramewiseDisplacement differed from aCompCor, because of dummy scans I think…
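One way to avoid that kind of mismatch is to trim the dummy-scan rows from the whole confounds table before writing any EV files, so every column keeps the same length. A minimal sketch with pandas, using a toy table and an assumed dummy-scan count of 4 (in practice you would load your fmriprep confounds TSV with pd.read_csv(..., sep="\t") and set the count from your acquisition):

```python
import pandas as pd

N_DUMMY = 4  # hypothetical number of non-steady-state ("dummy") volumes

# Toy stand-in for an fmriprep confounds table; note the NaN in the
# first row of framewise_displacement (undefined for the first TR).
confounds = pd.DataFrame({
    "framewise_displacement": [float("nan"), 0.1, 0.2, 0.3, 0.1, 0.2],
    "a_comp_cor_00": [0.5, 0.4, 0.3, 0.2, 0.1, 0.0],
})

# Drop the dummy-scan rows from the whole table at once, so every EV
# written from it ends up with the same number of rows, then zero any
# remaining NaNs so FEAT's design matrix stays well-conditioned.
trimmed = confounds.iloc[N_DUMMY:].reset_index(drop=True).fillna(0)
```

Because the trimming happens on the full table rather than per column, FramewiseDisplacement and the aCompCor regressors can never end up with different row counts.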

Now it works fine!




thank you very much again for your help!

I am wondering: what actually is the second-level analysis in this notebook?

Is it the fsl.randomise function? There don't seem to be any contrasts defined for the second-level analysis.

And what is the final output? Is it the concatenated image of one contrast across all subjects?

Does anybody have a suggestion for the best way to perform the entire analysis when the conditions (trial_types), and therefore the contrasts, differ from run to run:
run1 cond A and B,
run2 cond C and D,
run3 cond AB and CD
(randomised over the subjects)

So far I have just done it with a for loop and run the notebook as described in the tutorial.
But for the second level I need to compare different first-level contrasts with each other, including across runs.

similar to this notebook from Miykael:

I would appreciate any help very much!

In the notebook you linked, the group level analysis happens in cell 115 and is indeed the fsl.randomise function. The analysis is only run over the “lips vs. others” contrast taken from 10 subjects, which were smoothed (cell 110) and then concatenated into a single input for randomise (cell 113). The output, in cells 116-117, is the statistical map from randomise showing regions where the input maps from the group had statistically significant activations.

I don’t really like the term “second level” analysis because it’s not well-specified. In many experiments, the first level GLMs are done at the run level, the second level GLMs are done to combine runs within a subject with a fixed-effects model, and then there’s a random-effects group level analysis. In other experiments, there’s only 1 run per subject, and the second level model is the group model. So I prefer run-level, subject-level and group-level terminology.

re: your entire analysis – hopefully you don’t want to compare D > A, since D and A are never in the same run?

Thank you very much for your detailed reply!

I do not have much experience with FSL, which is why I was a little confused. Thank you for the clarification; I think I understood it correctly.

Actually, I have to compare conditions that are never in the same run. (It's not my own paradigm; I'm just doing the analysis.)

I tried MVPA first, but the impact of the run itself was too big. So I assumed a univariate analysis would deal better with the issue that some conditions are never in the same run.

A, B, C, and D are different odors, and AB and CD are combinations of these odors. Only A + B, or C + D, or AB + CD appear in the same run. (Conditions are presented in a block design.)

Is there any statistically meaningful way to compare the activations for A + B with AB, and for C + D with CD?

Thank you so much!