[Node] Error on "fmriprep_wf.fsdir"

Dear all,

I've run my BIDS data with a script in Docker mode, but I get this error:

180427-16:11:24,48 workflow WARNING:
 [Node] Error on "fmriprep_wf.fsdir" (/root/src/fmriprep/work/fmriprep_wf/fsdir)
Traceback (most recent call last):
  File "/usr/local/miniconda/lib/python3.6/site-packages/niworkflows/nipype/pipeline/plugins/multiproc.py", line 339, in _send_procs_to_workers
    self.procs[jobid].run(updatehash=updatehash)
  File "/usr/local/miniconda/lib/python3.6/site-packages/niworkflows/nipype/pipeline/engine/nodes.py", line 487, in run
    result = self._run_interface(execute=True)
  File "/usr/local/miniconda/lib/python3.6/site-packages/niworkflows/nipype/pipeline/engine/nodes.py", line 571, in _run_interface
    return self._run_command(execute)
  File "/usr/local/miniconda/lib/python3.6/site-packages/niworkflows/nipype/pipeline/engine/nodes.py", line 650, in _run_command
    result = self._interface.run(cwd=outdir)
  File "/usr/local/miniconda/lib/python3.6/site-packages/niworkflows/nipype/interfaces/base/core.py", line 516, in run
    runtime = self._run_interface(runtime)
  File "/usr/local/miniconda/lib/python3.6/site-packages/fmriprep/interfaces/bids.py", line 352, in _run_interface
    copytree(source, dest)
  File "/usr/local/miniconda/lib/python3.6/shutil.py", line 353, in copytree
    raise Error(errors)
shutil.Error: [('/opt/freesurfer/subjects/fsaverage/surf/rh.white_avg', '/out/out/freesurfer/fsaverage/surf/rh.white_avg', '[Errno 5] Input/output error')]

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/local/miniconda/bin/fmriprep", line 11, in <module>
    load_entry_point('fmriprep==1.0.11', 'console_scripts', 'fmriprep')()
  File "/usr/local/miniconda/lib/python3.6/site-packages/fmriprep/cli/run.py", line 274, in main
    fmriprep_wf.run(**plugin_settings)
  File "/usr/local/miniconda/lib/python3.6/site-packages/niworkflows/nipype/pipeline/engine/workflows.py", line 602, in run
    runner.run(execgraph, updatehash=updatehash, config=self.config)
  File "/usr/local/miniconda/lib/python3.6/site-packages/niworkflows/nipype/pipeline/plugins/base.py", line 190, in run
    self._send_procs_to_workers(updatehash=updatehash, graph=graph)
  File "/usr/local/miniconda/lib/python3.6/site-packages/niworkflows/nipype/pipeline/plugins/multiproc.py", line 347, in _send_procs_to_workers
    'traceback': traceback
  File "/usr/local/miniconda/lib/python3.6/site-packages/niworkflows/nipype/pipeline/plugins/base.py", line 227, in _clean_queue
    raise RuntimeError("".join(result['traceback']))
RuntimeError: Traceback (most recent call last):
  File "/usr/local/miniconda/lib/python3.6/site-packages/niworkflows/nipype/pipeline/plugins/multiproc.py", line 339, in _send_procs_to_workers
    self.procs[jobid].run(updatehash=updatehash)
  File "/usr/local/miniconda/lib/python3.6/site-packages/niworkflows/nipype/pipeline/engine/nodes.py", line 487, in run
    result = self._run_interface(execute=True)
  File "/usr/local/miniconda/lib/python3.6/site-packages/niworkflows/nipype/pipeline/engine/nodes.py", line 571, in _run_interface
    return self._run_command(execute)
  File "/usr/local/miniconda/lib/python3.6/site-packages/niworkflows/nipype/pipeline/engine/nodes.py", line 650, in _run_command
    result = self._interface.run(cwd=outdir)
  File "/usr/local/miniconda/lib/python3.6/site-packages/niworkflows/nipype/interfaces/base/core.py", line 516, in run
    runtime = self._run_interface(runtime)
  File "/usr/local/miniconda/lib/python3.6/site-packages/fmriprep/interfaces/bids.py", line 352, in _run_interface
    copytree(source, dest)
  File "/usr/local/miniconda/lib/python3.6/shutil.py", line 353, in copytree
    raise Error(errors)
shutil.Error: [('/opt/freesurfer/subjects/fsaverage/surf/rh.white_avg', '/out/out/freesurfer/fsaverage/surf/rh.white_avg', '[Errno 5] Input/output error')]

My bash/Docker script:

docker run -it --rm \
   -v /media/DATA/BIDS/test/test_project:/data:ro \
   -v /media/DATA/BIDS/test/test_project/derivatives/fmriprep:/out \
   -v /media/DATA/BIDS/test/license.txt:/opt/freesurfer/license.txt \
   poldracklab/fmriprep:latest \
   /data /out/out \
   participant \
   --participant-label sub-0001 \
   --write-graph \
   --nthreads 4 \
   --mem_mb 6000 \
   --fs-license-file /opt/freesurfer/license.txt

An IOError can be a lot of things. It looks like you’re using an external hard drive or possibly a network share. Is there any chance it got disconnected during the run? And do you have enough space on the drive?

Also, just as a warning, it’s highly recommended to run fMRIPrep with at least 8GB of RAM.
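As a quick way to rule out the space question, a small host-side check along these lines could help (a minimal sketch; the 10 GB default is a guess, not an fMRIPrep requirement):

```python
import shutil

def check_free_space(path, needed_gb=10.0):
    """Return True if `path` has at least `needed_gb` GB free.

    FreeSurfer outputs alone can run to several GB per subject, so a
    quick check up front beats a cryptic I/O error mid-run.
    """
    free_gb = shutil.disk_usage(path).free / 1e9
    return free_gb >= needed_gb

# Example: check the mounted derivatives directory on the host
# check_free_space("/media/DATA/BIDS/test/test_project/derivatives")
```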

The directory (a NAS) is mounted via NFS.

Okay, so there could be sync issues here. Not much we can do about that; perhaps you can check your syslog and see if there were any problems that can be debugged on your end?

The error doesn’t lead me to think it’s likely, but is there any chance you were running two copies of fmriprep simultaneously? If there was competition to create these files, that could cause problems. Once the FreeSurfer directories are copied properly, there shouldn’t be problems, but the first time you run fmriprep, there can be a race condition.
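For reference, a more defensive version of that copy step might look like the sketch below. This is a hypothetical helper, not fmriprep's actual code: it skips the copy when the destination already exists (which also sidesteps the first-run race), and it removes any partial copy before retrying, since transient NFS errors often clear on a second attempt.

```python
import os
import shutil
import time

def copytree_with_retry(source, dest, attempts=3, delay=5.0):
    """Copy `source` to `dest`, tolerating transient I/O errors.

    A no-op if `dest` already exists.  Any partial copy is removed
    before retrying, because shutil.copytree refuses to write into an
    existing directory.
    """
    for attempt in range(1, attempts + 1):
        if os.path.isdir(dest):
            return dest  # already in place, nothing to do
        try:
            return shutil.copytree(source, dest)
        except (shutil.Error, OSError):
            shutil.rmtree(dest, ignore_errors=True)  # drop partial copy
            if attempt == attempts:
                raise
            time.sleep(delay)
```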

I use MRIQC in the same mode and it works very well. The Docker/MRIQC script reads the same dataset and writes to the same derivatives directory as fMRIPrep. So why does MRIQC work while fMRIPrep doesn't?

It’s hard to say, in the absence of more detail. Again, I/O errors are very difficult to debug, and generally reflect issues that are outside the scope of something that we can deal with programmatically. I would check that your NAS has plenty of empty space (and you haven’t hit a user quota for files/space), and make sure that you do not start multiple instances of fmriprep simultaneously (a 1 minute delay should be sufficient). If it was just a connectivity or synchronization issue, perhaps re-running will resolve it.
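One way to test the connectivity/synchronization theory directly is a small write-and-read-back probe on the NFS mount (a hypothetical diagnostic, not part of fMRIPrep); a flaky mount will typically surface here as an `OSError` with errno 5, just like in the traceback:

```python
import os
import uuid

def probe_write(mount_point, size=4096):
    """Write, fsync, and read back a throwaway file on `mount_point`.

    Returns True if the round trip succeeds; a transient NFS problem
    will usually raise OSError (errno 5, EIO) somewhere in here.
    """
    probe = os.path.join(mount_point, ".write_probe_" + uuid.uuid4().hex)
    data = os.urandom(size)
    try:
        with open(probe, "wb") as f:
            f.write(data)
            f.flush()
            os.fsync(f.fileno())  # force the data out to the server
        with open(probe, "rb") as f:
            return f.read() == data
    finally:
        if os.path.exists(probe):
            os.remove(probe)

# Example (host-side path is an assumption):
# probe_write("/media/DATA/BIDS/test/test_project/derivatives/fmriprep")
```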

Can you run:

docker run -it --rm \
   -v /media/DATA/BIDS/test/test_project:/data:ro \
   -v /media/DATA/BIDS/test/test_project/derivatives/fmriprep:/out \
   -v /media/DATA/BIDS/test/license.txt:/opt/freesurfer/license.txt \
   --entrypoint ls poldracklab/fmriprep:latest \
   /opt/freesurfer/subjects/fsaverage/surf/rh.white_avg

The output is:

/opt/freesurfer/subjects/fsaverage/surf/rh.white_avg

I tried another test: I added the option `--fs-no-reconall` to my original script and it works... but of course it doesn't run FreeSurfer.
So the problem can't simply be writing to a remote directory, because with FreeSurfer disabled everything works (obviously without the FS segmentation). The other parts of fMRIPrep also write to the disk, and they have no problem. I don't know how to debug this.

Can you run `docker info`? It will tell you where the image layers are being stored (the `Root Dir` entry under Storage Driver).

That filesystem is probably causing these I/O errors.
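If the failure really is confined to that one-time copy of fsaverage onto the NAS, one workaround worth trying is to pre-populate `derivatives/fmriprep/out/freesurfer/fsaverage` yourself before the run, e.g. from a local FreeSurfer install, so the copy is already in place when fmriprep looks for it (as noted above, once the FreeSurfer directories are copied properly there shouldn't be further problems). A hypothetical host-side sketch, with paths that are assumptions:

```python
import os
import shutil

def ensure_fsaverage(source, dest):
    """Seed the derivatives tree with fsaverage if it is missing.

    `source` is any complete fsaverage directory on the host (the
    path below is hypothetical); `dest` is the location the traceback
    shows fmriprep writing to, seen from the host side.
    """
    if not os.path.isdir(dest):
        os.makedirs(os.path.dirname(dest), exist_ok=True)
        shutil.copytree(source, dest)
    return dest

# Hypothetical paths matching this setup:
# ensure_fsaverage(
#     "/usr/local/freesurfer/subjects/fsaverage",
#     "/media/DATA/BIDS/test/test_project/derivatives/fmriprep/out/freesurfer/fsaverage",
# )
```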

In any case, it has to write to the disk either way, so it should fail in both cases, but it doesn't: it works only with FreeSurfer disabled, and fails with the option active. Mhhh, very strange. Anyway,
the output of `docker info` is:

 Containers: 2
 Running: 1
 Paused: 0
 Stopped: 1
Images: 6
Server Version: 17.03.1-ce
Storage Driver: aufs
 Root Dir: /var/lib/docker/aufs
 Backing Filesystem: extfs
 Dirs: 158
 Dirperm1 Supported: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
 Volume: local
 Network: bridge host macvlan null overlay
Swarm: inactive
Runtimes: runc
Default Runtime: runc
Init Binary: docker-init
containerd version: 4ab9917febca54791c5f071a9d1f404867857fcc
runc version: 54296cf40ad8143b62dbcaa1d90e520a2136ddfe
init version: 949e6fa
Security Options:
 apparmor
 seccomp
  Profile: default
Kernel Version: 4.4.0-53-generic
Operating System: Linux Mint 18.1
OSType: linux
Architecture: x86_64
CPUs: 4
Total Memory: 7.787 GiB
Name: JANO
ID: QI5T:WE6A:EU73:7OIY:T6IJ:KNMS:7VER:P7FZ:LQ4C:6K5G:OUYR:NGRT
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): false
Registry: https://index.docker.io/v1/
WARNING: No swap limit support
Experimental: false
Insecure Registries:
 127.0.0.0/8
Live Restore Enabled: false