Derivatives folders with too many files (~20k+) make accessing the folder slow

Hi all,
I am wondering how others deal w/ big data? In a study, let’s say with ~30k images, I would like to (for example) apply registration to MNI. I would create a derivative folder called derivatives/reg_to_MNI/* and would attempt to output my results in there.

Given that I have 30k NIfTIs plus the accompanying *.mat files, the folder grows to roughly 60k files. A simple ls in Linux takes several seconds, and it is even worse through a graphical interface (xdg-open or Windows Explorer).
This gets worse when additional files need to be kept alongside the outputs (e.g. *.json sidecars) or when there are additional timepoints.
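
For context, the flat layout I am describing ends up looking roughly like this (filenames are only illustrative, not my actual data):

```
# Everything for the whole study lands in one flat directory
derivatives/reg_to_MNI/
    sub-0001_T1w_space-MNI.nii.gz   # registered image
    sub-0001_T1w_space-MNI.mat      # transform from the registration
    sub-0001_T1w_space-MNI.json     # sidecar metadata
    sub-0002_T1w_space-MNI.nii.gz
    ...                             # ~30k images + ~30k .mat files ≈ 60k entries
```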
Have you experienced this type of problem? Is there any solution or tool I am not aware of that could help me organize my files so that a simple ls -lurt does not take so long?