@mrrichardchou while converting a single DICOM file can be simple, the challenge is robustly handling the many conventions, transfer syntaxes, and malformed files encountered in the wild. There are several good tools out there, so I would suggest investigating them rather than re-inventing the wheel. In particular, I would suggest you consider dcm2niix (written in C) or dicom2nifti (Python, though it depends on gdcmconv for compressed transfer syntaxes).
For evaluating performance, I would examine real-world datasets where many series are jumbled together, and measure both the time to convert and the peak memory used during conversion. This shows you how a tool scales. In my experience, some conversion tools that are terrific for small datasets exceed the RAM available on modern computers when asked to convert large real-world datasets such as the HCP sequences.
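To measure both metrics at once, a small harness like the following works on Linux and macOS. It is a minimal sketch: `benchmark` simply shells out to whatever converter command you pass it, so the actual command lines (e.g. for dcm2niix or dicom2nifti) depend on your installed versions and are not shown here.

```python
import resource
import subprocess
import time

def benchmark(cmd):
    """Run a converter command, returning (wall_seconds, peak_child_rss).

    Note: ru_maxrss is reported in KiB on Linux but in bytes on macOS.
    RUSAGE_CHILDREN accumulates the peak over all finished child
    processes, so benchmark each converter from a fresh Python process
    to get clean numbers.
    """
    start = time.perf_counter()
    subprocess.run(cmd, check=True)
    elapsed = time.perf_counter() - start
    peak = resource.getrusage(resource.RUSAGE_CHILDREN).ru_maxrss
    return elapsed, peak
```

You would call this once per tool on the same input directory, e.g. `benchmark(["dcm2niix", "-z", "y", "-o", "out_dir", "dicom_dir"])`, and compare the returned pairs.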
A good place to start is a modestly large dataset like this 428 MB DICOM DWI dataset. In my testing, this dataset was converted ten times faster and with one eighth the RAM using dcm2niix versus dicom2nifti. My sense is that dicom2nifti attempts to store all input images in RAM during conversion, while dcm2niix reads all the headers first and then loads and unloads each image's pixel data as required. I do think dicom2nifti does a great job for small datasets, and therefore is a terrific foundation for future work. If you want to pursue a Python-based solution, I would start with dicom2nifti and see if the conversion could be modified to handle large DWI datasets.
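The header-first, slice-at-a-time strategy I describe above can be sketched as follows. This is a toy illustration, not dcm2niix's actual code: `read_header`, `load_pixels`, and `write_slice` are hypothetical callables standing in for real operations such as `pydicom.dcmread(path, stop_before_pixels=True)` and a streaming NIfTI writer.

```python
def convert_series(paths, read_header, load_pixels, write_slice):
    """Two-pass conversion keeping at most one slice of pixel data in RAM.

    Pass 1 reads only the (small) headers, so memory use is proportional
    to metadata, not image data, and sorts slices into acquisition order.
    Pass 2 loads each slice's pixels, hands them to the writer, and lets
    them be garbage-collected before touching the next file.
    """
    headers = sorted(
        ((read_header(p), p) for p in paths),
        key=lambda hp: hp[0]["InstanceNumber"],  # assumed sort key
    )
    for header, path in headers:
        pixels = load_pixels(path)  # only one slice resident at a time
        write_slice(header, pixels)
```

The key design point is that pixel data never accumulates: if dicom2nifti could be restructured along these lines, its peak memory should stop growing with series size.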