What do NIfTI images consist of?

Hello all
First, I want to ask about the content of NIfTI images, especially those in the ABIDE dataset.

I have private data that I want to merge with ABIDE.

So what does one NIfTI scan from ABIDE contain? Is it a single plane, for example sagittal, coronal, or axial, converted from DICOM to NIfTI?

Or does the single image contain more than one plane, converted at the same time?
And if that is the case, how do I combine sagittal, coronal, and axial DICOM planes into one NIfTI?

Hi,

NIfTI images are essentially 3D or 4D matrices (depending on whether there is a time component), along with a header containing file metadata. They are converted from collections of DICOMs (which usually make up single volumes) taken directly from the scanner.
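
To make that concrete, here is a minimal sketch (assuming the nibabel Python package and a placeholder filename; the dcm2niix call in the comment is just the usual conversion command):

```python
import nibabel as nib

# DICOM series are typically converted with dcm2niix, e.g.:
#   dcm2niix -z y -o out_dir dicom_dir
# Here we just inspect one resulting NIfTI file (placeholder filename).
img = nib.load("sub-0001_T1w.nii.gz")

data = img.get_fdata()
print(data.shape)              # e.g. (256, 256, 176) for a 3D anatomical volume
print(img.header.get_zooms())  # voxel sizes in mm (plus TR for a 4D series)
print(img.affine)              # voxel-to-world (scanner space) transform
```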

You would want to be careful doing that. If you didn’t acquire data with the same parameters as the ABIDE dataset, then you should not pool your data with it.

Planes are simply different cross sections of your 3D matrix. A Nifti file contains information pertaining to all of them.
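
For example (again assuming nibabel, a placeholder filename, and a 3D volume with roughly RAS-ordered axes; the exact axis-to-plane mapping depends on the affine):

```python
import nibabel as nib

img = nib.load("sub-0001_T1w.nii.gz")   # placeholder filename
vol = img.get_fdata()                   # one 3D matrix

# The three planes are just cross-sections of the same array.
mid_x, mid_y, mid_z = (s // 2 for s in vol.shape)
sagittal = vol[mid_x, :, :]   # slice along the first axis
coronal  = vol[:, mid_y, :]   # slice along the second axis
axial    = vol[:, :, mid_z]   # slice along the third axis
print(sagittal.shape, coronal.shape, axial.shape)
```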

Best,
Steven

Big thanks, Steven, for your reply.

I meant only to integrate the data (private and ABIDE) when training machine learning models.
From what I know, ABIDE is basically a collection of scans from more than 10 centers.

Therefore, differences in the origin of the data already exist, and it is preferable to use techniques to reduce their impact on the decisions of machine learning models. That made me think there is no problem in including scans from another geographical location as well; it may even enrich the results, making them more realistic and applicable.

This can still be a problem. You do not want noise that comes from scanning parameters to influence your models. Yes, ABIDE uses different sites, but big multi-site studies like this take special care to harmonize scanning parameters across sites.

Steven

I agree with you. On the one hand, the diversity among the data will increase. But isn't that better than training a model specialized to a particular site, which may fail when fed with other data?

Also, ABIDE is for autistic patients, and this disorder is related to geographical location (its prevalence varies from one region to another). I do not think any of the sites contributing to ABIDE is from Asia, for example. Don't you think that combining data from a different region would contribute to understanding the disorder better? This is a point I came across in a group of scientific papers during my research, and it caught my attention.

Thanks again for the information you provided; it raised important questions for me :slight_smile:

I agree that you should use multiple sites of ABIDE (unless your hypothesis is related to something more culturally specific, such as native language), but even across the international ABIDE sites, acquisitions are harmonized to allow one to safely analyze multiple sites in a single study.

Actually, now that I read more into ABIDE, harmonization might not have been done in ABIDE I (but it is something that could and perhaps should be done if you are using multiple sites), while ABIDE II might involve more across-site harmonization. Perhaps if you harmonize your data along with ABIDE it will be okay, but I am not an expert on multi-site analysis concerns. Perhaps this paper will help: Harmonization of resting-state functional MRI data across multiple imaging sites via the separation of site differences into sampling bias and measurement bias
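
For intuition only, here is a toy sketch of what removing site effects from derived features can look like. It is plain per-site z-scoring of made-up data, not the method from the paper:

```python
import numpy as np

# Toy data: a hypothetical (n_subjects, n_features) matrix of derived
# measures and a hypothetical site label per subject.
rng = np.random.default_rng(0)
features = rng.normal(size=(20, 5))
site = np.array(["abide_site"] * 10 + ["private_site"] * 10)

# Crude stand-in for harmonization: z-score features within each site.
harmonized = np.empty_like(features)
for s in np.unique(site):
    rows = site == s
    mu = features[rows].mean(axis=0)
    sd = features[rows].std(axis=0) + 1e-8
    harmonized[rows] = (features[rows] - mu) / sd
```

In practice you would want a dedicated implementation (e.g. ComBat-style methods, as discussed in the paper) rather than this naive version, since plain per-site scaling can also strip out real group differences that happen to correlate with site.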

This paper would be a really big help! Thanks :pray:t2:

How can I combine files of three planes into one NIfTI image?
I've tried MRIcroGL, but the output shows each plane as its own picture.

Sorry, I am confused by this statement. When one looks at MRI images, unless using a 3D viewer, this is normal behavior. When using 2D viewers (most are like this), one looks at MRI data by cross-sectional slices.

It looks like you have 3 series of low resolution T1 scans, acquired as 2D slices in the coronal, axial and sagittal planes respectively. The protocol names suggest there are other differences, such as whether an inversion recovery sequence is used or not (which will limit the contrast). In theory, you could make a higher resolution mean image by averaging these, but I suspect it would be plagued by aliasing errors and contrast differences.
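
If you still want to try it, a rough sketch (assuming nibabel and nilearn, with hypothetical filenames) would be to resample two of the series onto the grid of the third and average them:

```python
import nibabel as nib
import numpy as np
from nilearn.image import resample_to_img

# Hypothetical filenames for the three 2D-slab series.
cor = nib.load("t1_coronal.nii.gz")
axi = nib.load("t1_axial.nii.gz")
sag = nib.load("t1_sagittal.nii.gz")

# Resample two series onto the grid of the third, then average.
# Interpolating across each series' thick slice direction is exactly
# where the aliasing and contrast differences will show up.
axi_r = resample_to_img(axi, cor, interpolation="continuous")
sag_r = resample_to_img(sag, cor, interpolation="continuous")

mean_data = np.mean(
    [cor.get_fdata(), axi_r.get_fdata(), sag_r.get_fdata()], axis=0
)
nib.save(nib.Nifti1Image(mean_data, cor.affine), "t1_mean.nii.gz")
```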

You should be able to acquire a single high-resolution 3D scan (referred to as MPRAGE by Siemens users) in similar or less time than your three series. If you have a Siemens Research Master Agreement, I suggest you evaluate the MGH ME-MPRAGE sequence.

By the way, for future acquisitions, I would heed the dcm2niix warnings regarding acquiring interpolated data. This quadruples disk usage, slows processing, and blocks you from applying some corrections.
