What preprocessing steps should be done on this image?

sizeof_hdr : 348
data_type :
db_name :
extents : 0
session_error : 0
regular : r
dim_info : 0
dim : [ 3 305 305 39 1 1 1 1]
intent_p1 : 0.0
intent_p2 : 0.0
intent_p3 : 0.0
intent_code : none
datatype : float32
bitpix : 32
slice_start : 0
pixdim : [ 1. 1.25 1.25 1.99999976 0. 0. 0. 0. ]
vox_offset : 0.0
scl_slope : nan
scl_inter : nan
slice_end : 0
slice_code : unknown
xyzt_units : 10
cal_max : 0.0
cal_min : 0.0
slice_duration : 0.0
toffset : 0.0
glmax : 0
glmin : 0
descrip : 5.0.11
aux_file :
qform_code : scanner
sform_code : scanner
quatern_b : 0.5
quatern_c : 0.5
quatern_d : 0.5
qoffset_x : -0.499992370605
qoffset_y : 0.0
qoffset_z : 0.0
srow_x : [ 0. 0. 1.99999976 -0.49999237]
srow_y : [ 1.25 0. 0. 0. ]
srow_z : [ 0. 1.25 0. 0. ]
intent_name :
magic : n+1

These are the properties of my image. Can you suggest which preprocessing steps should be applied to this image (e.g. slice-timing correction, partial volume correction)?

This question is insufficiently specified to answer. However, you describe a 3D volume, whereas motion correction, slice-timing correction, etc. are applied to 4D datasets (where there are multiple 3D images of the brain acquired at different times and/or with different gradients applied). Your processing pipeline depends on the modality and the question you want to answer. Often we normalize our images: warping data from different individuals to have the same shape and alignment.
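
For example, here is a minimal sketch with nibabel (the filename is a placeholder for your own image) showing how to confirm from the header that you have a single 3D volume:

```python
import nibabel as nib

# Placeholder filename; replace with the path to your own image.
img = nib.load("spine.nii.gz")

# dim[0] in the header (equivalently, img.ndim) gives the dimensionality.
# A single 3D volume has no time axis, so slice-timing and motion
# correction (which operate across 4D time points) do not apply.
print(img.shape)           # e.g. (305, 305, 39) -> one 3D volume
print(img.header["dim"])   # [  3 305 305  39   1   1   1   1]
```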

In a previous question you said your goal was to detect disc degeneration. If you want to achieve this by visual inspection, I would use the raw data without any manipulation. If you want to do this quantitatively, you will want data from a group of individuals where disc integrity varies across people. Statistics like the GLM use this group to estimate the mean difference explained by disc integrity (the signal) versus the variability that cannot be explained by this factor (the noise). On the other hand, machine learning will require data from a large group to provide training and testing datasets. Therefore, I think you would want to start with similar scans from a large group, next use a tool like ANTs to warp them to a common shape, and finally apply your statistics or machine learning to these data (a sketch of the warping step is below). You may want to sign up for a workshop or course that describes these methods. Alternatively, you can read the PowerPoints, web pages, demos and tutorials from my Image to Inference class.
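
To make the spatial-normalization step concrete, here is a hedged sketch using ANTsPy (the Python wrapper for ANTs); the template and subject filenames are placeholders, and in practice you would loop over every subject in your group:

```python
import ants

# Placeholder filenames: a common template and one subject's scan.
template = ants.image_read("template.nii.gz")
subject = ants.image_read("subject01.nii.gz")

# Non-linear (SyN) registration warps the subject into template space.
reg = ants.registration(fixed=template, moving=subject,
                        type_of_transform="SyN")

# The warped image is aligned with the template, so scans from many
# individuals can be pooled for a GLM or as machine-learning input.
ants.image_write(reg["warpedmovout"], "subject01_warped.nii.gz")
```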
