Nipy SpaceTimeRealigner exits with IOError

I have a preprocessing script that includes a node using Nipy’s SpaceTimeRealigner. It runs (after some time) for most of my participants, but for the remaining ones it consistently exits with an IOError (see crash file output below). I’ve seen similar behavior from jobs submitted by my scripts in the past, and simply re-running them would typically work. No matter how many times I re-run the script for these particular participants, they will not get past this step. There is nothing notably different about these participants that I can find (same number of time points, same number of slices, same TR, etc.). Any insight or help would be greatly appreciated.
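
For reference, the node is configured roughly as sketched below. This is a minimal reconstruction from the node inputs in the crash file; the variable names and the shortened file list are illustrative, not copied from my actual script.

```python
from nipype import Node
from nipype.interfaces.nipy import SpaceTimeRealigner

# Interleaved acquisition: 36 slices over TR = 2.2 s, even-indexed slices
# acquired in the first half of the TR, odd-indexed slices in the second
# half. This formula reproduces the slice_times list in the crash file.
n_slices, tr = 36, 2.2
slice_times = [(i // 2) * (tr / n_slices) + (i % 2) * (tr / 2)
               for i in range(n_slices)]

motion_correct = Node(SpaceTimeRealigner(), name='motion_correct')
motion_correct.inputs.tr = tr
motion_correct.inputs.slice_times = slice_times
motion_correct.inputs.slice_info = 2  # slice axis of the input images
motion_correct.inputs.in_file = [
    'despike/mapflow/_despike0/gates_enc_01_dtype_despike.nii.gz',
    # ... one entry per despiked run; full paths as in the crash file
]
```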

File: /om/scratch/Wed/amattfel/gates_preproc/crash/crash-20170621-162300-amattfel-motion_correct-67abe873-ecb1-478a-974f-ca0c36df8531.pklz
Node: gates.motion_correct
Working directory: /om/scratch/Wed/amattfel/gates_preproc/GAB_MIEPIC_e015/gates/motion_correct


Node inputs:

ignore_exception = False
in_file = [u'/om/scratch/Wed/amattfel/gates_preproc/GAB_MIEPIC_e015/gates/despike/mapflow/_despike0/gates_enc_01_dtype_despike.nii.gz', u'/om/scratch/Wed/amattfel/gates_preproc/GAB_MIEPIC_e015/gates/despike/mapflow/_despike1/gates_enc_02_dtype_despike.nii.gz', u'/om/scratch/Wed/amattfel/gates_preproc/GAB_MIEPIC_e015/gates/despike/mapflow/_despike2/gates_enc_03_dtype_despike.nii.gz', u'/om/scratch/Wed/amattfel/gates_preproc/GAB_MIEPIC_e015/gates/despike/mapflow/_despike3/gates_enc_04_dtype_despike.nii.gz']
slice_info = 2
slice_times = [0.0, 1.1, 0.061111111111111116, 1.1611111111111112, 0.12222222222222223, 1.2222222222222223, 0.18333333333333335, 1.2833333333333334, 0.24444444444444446, 1.3444444444444446, 0.3055555555555556, 1.4055555555555557, 0.3666666666666667, 1.4666666666666668, 0.4277777777777778, 1.527777777777778, 0.48888888888888893, 1.588888888888889, 0.55, 1.6500000000000001, 0.6111111111111112, 1.7111111111111112, 0.6722222222222223, 1.7722222222222224, 0.7333333333333334, 1.8333333333333333, 0.7944444444444445, 1.8944444444444446, 0.8555555555555556, 1.9555555555555557, 0.9166666666666666, 2.016666666666667, 0.9777777777777779, 2.077777777777778, 1.038888888888889, 2.138888888888889]
tr = 2.2

Traceback:
Traceback (most recent call last):
  File "/om/user/amattfel/envs/gates_motmem_om_env/lib/python2.7/site-packages/nipype/pipeline/plugins/base.py", line 555, in _get_result
    raise IOError(error_message)
IOError: Job id (8916175) finished or terminated, but results file does not exist after (5.0) seconds. Batch dir contains crashdump file if node raised an exception.
Node working directory: (/om/scratch/Wed/amattfel/gates_preproc/GAB_MIEPIC_e015/gates/motion_correct)

Never mind. I took another look at my script and noticed that I had included a walltime in my sbatch_args. These participants were exceeding that walltime, which led to the IOError. When I increased the walltime by 30 minutes, everything made it through to the datasink.
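
For anyone who hits the same IOError: in my case the culprit was the walltime inside the sbatch_args passed to the SLURM plugin. A sketch of where that setting lives (the plugin arguments and limits here are illustrative values, not my exact ones):

```python
from nipype import Workflow

wf = Workflow(name='gates',
              base_dir='/om/scratch/Wed/amattfel/gates_preproc/GAB_MIEPIC_e015')
# ... nodes and connections omitted ...

# sbatch_args is handed directly to SLURM's sbatch. If --time is too short,
# the scheduler kills the node partway through; Nipype then fails to find
# the node's results file and raises the IOError shown in the traceback above.
wf.run(plugin='SLURM',
       plugin_args={'sbatch_args': '--time=02:00:00 --mem=8G'})
```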