How to get a single brain MRI scan from each DICOM file, as there are ~30 scans in a single DICOM file of COBRE and MCICShare data

I am using schizophrenia datasets from the COBRE and MCICShare repositories to develop a CNN-based architecture for detecting schizophrenic patients. Both datasets contain brain MRI in .dcm (DICOM) format, and each DICOM file contains about 30 scans of the brain. My question is: how do I prepare this dataset for deep learning, i.e., put it into a form that can be used as input to a CNN? Should I convert the .dcm files to JPG/JPEG/PNG (or some other format), or to .nii? And how do I get a single brain MRI scan from each DICOM file, given that each one contains roughly 30 scans?

From my perspective, if you want to convert 3D images to PNG or JPG, you can use a tool like med2image. However, typical JPEG and PNG files store 8 bits per channel, which gives just 256 levels of gray. In contrast, raw DICOMs usually use 16 bits, for 65,536 levels of gray (in excess of the SNR). For this reason, when you convert DICOMs to JPG/PNG you will need to be very careful in choosing your brightness and contrast (window center and width). Finally, PNG and JPG are inherently 2D formats, and my instinct is that preserving 3D information may be crucial for good prediction (e.g. not merely classifying better than chance, but providing predictions that can impact standard of care). For these reasons, I would recommend converting your DICOMs to NIfTI, which retains the precision and 3D information of the raw data.
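To illustrate the windowing step, here is a minimal numpy sketch of mapping 16-bit intensities to 8-bit with a DICOM-style window. The array is synthetic, standing in for the pixel array of a real DICOM slice, and the window center/width values are purely illustrative:

```python
import numpy as np

def apply_window(pixels, center, width):
    """Map 16-bit intensities to 8-bit gray using a DICOM-style window."""
    lo = center - width / 2.0
    hi = center + width / 2.0
    clipped = np.clip(pixels.astype(np.float64), lo, hi)
    return np.round((clipped - lo) / (hi - lo) * 255.0).astype(np.uint8)

# Synthetic 16-bit "slice" (a real one would come from the DICOM pixel data)
slice16 = np.linspace(0, 4000, 64 * 64, dtype=np.uint16).reshape(64, 64)
slice8 = apply_window(slice16, center=1000, width=2000)
```

Note that everything outside the chosen window is clipped, which is exactly the information loss you must weigh when exporting to JPG/PNG.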

If your sample size is small, you may want to consider first spatially normalizing the data across individuals and then aggregating the data across brain regions to obtain stable features.
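As a sketch of the aggregation step (assuming the volumes have already been spatially normalized to a common space, and using a synthetic volume with a toy two-region atlas in place of real data):

```python
import numpy as np

def regional_means(volume, atlas):
    """Reduce a spatially normalized volume to one mean intensity per atlas region."""
    labels = np.unique(atlas)
    labels = labels[labels != 0]  # label 0 = background, excluded
    return np.array([volume[atlas == lab].mean() for lab in labels])

# Toy example: 4x4x4 "volume" with a two-region atlas
rng = np.random.default_rng(0)
vol = rng.random((4, 4, 4))
atlas = np.zeros((4, 4, 4), dtype=int)
atlas[:2] = 1   # "region 1" occupies the first half
atlas[2:] = 2   # "region 2" occupies the second half
features = regional_means(vol, atlas)  # one feature per region
```

In practice the atlas would come from a standard parcellation resampled to the same space as the normalized volumes, but the aggregation logic is the same.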

In theory, one could convert 2D legacy DICOM files into a single enhanced DICOM file. The challenge is that each manufacturer (GE, Philips, UIH, Siemens, Mediso) uses different DICOM tags to describe variables, and one also needs to assign new UIDs. For these reasons, most repositories that store DICOM images (e.g. ADNI, HCP, ICBM) keep the raw (though anonymized) 2D DICOM images, so the data preserves the manufacturer-specific details. Teams upload the raw 2D DICOMs to these repositories, and users download the 2D DICOM files. A user can then convert the DICOMs to a manufacturer-agnostic NIfTI file using

dcm2niix /path/to/DICOMs

With regard to processing DICOM data directly, instead of converting it to NIfTI: Fedorov et al. note that the output of analysis results using DICOM is severely limited or non-existent in current tools. However, their dcmqi tool shows promise in filling this niche.

How do I make predictions using .nii data? I tried it, but I always get errors related to shape and dimension. Do you have a sample Python script that reads .nii or .dcm files of both classes (CONTROL and SCHIZOPHRENIA), trained using any pre-trained model or a simple CNN, to get binary predictions? I think I am doing something wrong after extracting both zipped files (e.g. schizconnect_COBRE_images_19776.7z.001). Can you tell me the complete procedure from extraction to prediction, with a Python script (optional)?