Indeed that’s a very interesting method you are proposing!
Empirically, the loading duration seems short enough that I can try to use this for my application!
However, I still have two inquiries:
- it feels like the `slicer` loading procedure still depends on the size of the image, as exemplified below;
- is this method applicable to gifti files as well? I couldn't find a `slicer` method in nibabel for gifti objects; in case it does not exist, do you think it would be implementable? (For reference, the sketch right after this list shows how I currently load GIFTI data.)
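For completeness, this is roughly what I do today for GIFTI files (the file name is made up, and I'm assuming a functional `.gii` whose data arrays are one map per time point):

```python
import numpy as np
import nibabel as nb

# Hypothetical file name, just for illustration.
gii = nb.load("sub-01_hemi-L.func.gii")  # returns a GiftiImage

# As far as I can tell, the whole file is parsed at load time, so selecting a few
# vertices afterwards does not save any loading cost (unlike .slicer for NIfTI).
data = np.stack([d.data for d in gii.darrays], axis=-1)  # (n_vertices, n_darrays)
subset = data[:10]  # vertex selection only happens after the full load
```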
Example illustrating that `slicer` loading still scales roughly linearly with the number of voxels:
I just tried the following snippet (I divided the output durations by the number of repetitions to approximate how long a single load actually takes):
```python
# %%
import numpy as np
import nibabel as nb
import timeit
import tempfile

# %%
number = 100

# %%
with tempfile.NamedTemporaryFile(suffix=".nii.gz") as f:
    # prep fake image
    Z = np.random.normal(size=(512, 512, 512))
    n = nb.Nifti1Image(Z, affine=np.eye(4))
    n.to_filename(f.name)

    setup = f"""
import numpy as np
import nibabel as nb

# load full image and then slice
def load_full(f, x, y, z) -> None:
    img: nb.Nifti1Image = nb.load(f)
    _ = img.get_fdata()[x : (x + 1), y : (y + 1), z : (z + 1)]

# load sliced image only
def load_sliced(f, x, y, z) -> None:
    n: nb.Nifti1Image = nb.load(f).slicer[x : (x + 1), y : (y + 1), z : (z + 1)]
    _ = n.get_fdata()

f = '{f.name}'
x, y, z = 10, 10, 10
"""

    print(f"{timeit.timeit(stmt='load_full(f, x, y, z)', setup=setup, number=number) / number}")
    print(f"{timeit.timeit(stmt='load_sliced(f, x, y, z)', setup=setup, number=number) / number}")
```
which yields (careful, it takes 10 minutes to run haha):
```
5.1362942935799945
0.10730613767998876
```
Running the same snippet with an image size of (256, 256, 256) yields
```
0.6240910130699922
0.02270433284007595
```
which is roughly 8x faster for the full load (and about 5x faster for the sliced load) than with images of size (512, 512, 512). Since 512³ / 256³ = 8, the voxel count also shrinks by 8x, which seems to indicate a roughly linear dependency on n (the number of voxels in the image).
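To make the scaling claim concrete, here is a quick ratio check computed from the per-call timings reported above:

```python
# Per-call timings copied from the runs above (512^3 vs 256^3 images).
full_512, sliced_512 = 5.1362942935799945, 0.10730613767998876
full_256, sliced_256 = 0.6240910130699922, 0.02270433284007595

print(512**3 / 256**3)          # 8.0  -> the voxel count shrinks by 8x
print(full_512 / full_256)      # ~8.2 -> full load tracks the voxel count closely
print(sliced_512 / sliced_256)  # ~4.7 -> sliced load also grows with image size
```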