Dump voxel data in csv file for GLM analysis

Summary of what happened:

Hello folks,
I have tried to dump all the voxels in my mask into a text file, to then analyze them through a GLM in R, but it seems there is no easy way to do it in AFNI.
Basically, I have 26 subjects, and each subject has 3 beta-coefficient datasets (1 sub-brik each).
My mask contains 130 voxels.
I would like to dump all these voxels into one file in GLM (long) format:
one column with the subject number (1-26), one column with the condition level (1-3), one column with the voxel number (1-130), plus the voxel values themselves.
This would allow a proper regression, rather than taking the mean over the mask and losing the information about the variance.
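To make the shape concrete: that long-format table would have 26 × 3 × 130 = 10140 rows. Just as a sketch (no AFNI involved), the index skeleton I have in mind could be generated in plain shell:

```shell
# generate the subj,cond,voxel index skeleton (the beta values would
# then be a 4th column); 26 * 3 * 130 = 10140 rows
for s in $(seq 1 26); do
  for c in $(seq 1 3); do
    for v in $(seq 1 130); do
      echo "$s,$c,$v"
    done
  done
done | wc -l    # -> 10140
```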

Does anyone have an easy, purely coding solution?

Thanks
That’s all folks

Hi-

To dump a mask into a text file, there is an AFNI program called 3dmaskdump:

Usage: 3dmaskdump [options] dataset dataset ...
Writes to an ASCII file values from the input datasets
which satisfy the mask criteria given in the options.
If no options are given, then all voxels are included.
This might result in a GIGANTIC output file.
...

It displays output to the screen, which can be redirected to a file with >. I typically use a mask, so an example usage might be:

3dmaskdump -mask mask_epi_anat.FT+tlrc. stats.FT+tlrc. > file.txt

Note: to search for AFNI programs that perform a given task, the Classified program list page of the AFNI documentation is probably useful.

  • Programs are grouped together by batches of functionality (3dmaskdump is under “Get info/stats within ROIs”, because a mask is just a particular ROI), with ones that are probably most widely used at the top (rating 4 or 5) and more niche ones below.
  • There are brief descriptions of each program, so you can also try to search the text for words/functionality of interest. For example, searching for “dump” on the page would have taken you to 3dmaskdump on the first instance.
  • If you click on a given program name, it takes you to the full help page for it.

–pt

Ah, now I see from the question title: you specifically want a CSV file, not a whitespace-separated one? Then how about this:

# create space-separated column file
3dmaskdump -mask mask_epi_anat.FT+tlrc. stats.FT+tlrc. > file.txt

# replace each space with a comma to create a CSV
tr ' ' ',' < file.txt > file.csv

?
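One caveat with that plain `tr` call: it converts every single space, so a run of spaces becomes a run of commas, which R would read as empty fields. tr's `-s` flag squeezes the repeats; a minimal demonstration:

```shell
# plain tr maps each space to a comma, so padded columns yield
# empty CSV fields; -s squeezes repeated commas in the output
printf '1    2 3\n' | tr ' ' ','     # -> 1,,,,2,3
printf '1    2 3\n' | tr -s ' ' ','  # -> 1,2,3
```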

–pt

OK, now I’ve had coffee and read more carefully.

I assumed you might want 4 columns: dset name, sub-brick label, voxel index, and voxel value.

#!/bin/tcsh


set group_file = text_file_group.dat
set group_csv  = text_file_group.csv
set top_row    = ""

# subject dataset list
set all_dset = ( stats.FT+tlrc.HEAD )
set ndset    = ${#all_dset}

# coef/subbrick label list
set all_subbr = ( 'vis#0_Coef' 'aud#0_Coef' 'V-A_GLT#0_Coef' )
set nsubbr    = ${#all_subbr}

# column labels
set all_col   = ( 'subj_id' 'subbr' 'vox_idx' 'vox_val' ) 

# =========================================================================
# start output file

# Build top row
foreach col ( ${all_col} )
    set top_row  = "${top_row}${col} "
end

# clear output file, and put in top row (remove any dangling white space)
printf "%s\n" "${top_row}" > __tmp_grp.txt
sed 's/ *$//' __tmp_grp.txt > ${group_file}

# =========================================================================
# populate with subj data

foreach ii ( `seq 1 1 ${ndset}` )   # loop over all subj
    set dset     = ${all_dset[$ii]}
    set dset_pad = `printf "%20s" ${dset}`

    foreach jj ( `seq 1 1 ${nsubbr}` )  # loop over all subbricks
        set subbr     = ${all_subbr[$jj]}
        set subbr_pad = `printf "%20s" ${subbr}`
        set dump_file = dump_file_${dset}_${jj}.txt

        # dump output into a 2 col file: voxel_ijk  value
        3dmaskdump                           \
            -index -noijk                    \
            -mask mask_epi_anat.FT+tlrc.     \
            ${dset}"[${subbr}]"              \
            > ${dump_file}

        # number of values in file
        set nrow = `cat ${dump_file} | wc -l`

        # create tmp col files of dset and subbr names, with the correct length
        yes ${dset_pad}  | head -n ${nrow} > __tmp_file_dset.txt
        yes ${subbr_pad} | head -n ${nrow} > __tmp_file_subbr.txt

        # combine columns, as space separated (options before the file
        # operands, for portability)
        paste                    \
            -d " "               \
            __tmp_file_dset.txt  \
            __tmp_file_subbr.txt \
            ${dump_file}         \
            > __tmp_file_all_cols.txt

        # append that multi-col file to the group file
        cat __tmp_file_all_cols.txt >> ${group_file}

    end  # end loop over subbrick
end  # end loop over subj

# FINALLY, turn that into a CSV: squeeze runs of padding spaces into
# single commas, and strip any leading comma left by the right-padded
# first column
tr -s ' ' ',' < ${group_file} | sed 's/^,//' > ${group_csv}
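In case the `yes`/`head`/`paste` column-assembly above looks opaque, here is a self-contained sketch of the same trick, with a fake two-column dump file standing in for real 3dmaskdump output (all file and label names here are made up):

```shell
# fake stand-in for a 3dmaskdump output file: voxel index + value
printf '0 1.5\n1 2.5\n2 3.5\n' > __demo_dump.txt
nrow=$(wc -l < __demo_dump.txt)

# repeat the (hypothetical) dataset and sub-brick labels nrow times
yes 'subj01'     | head -n "$nrow" > __demo_dset.txt
yes 'vis#0_Coef' | head -n "$nrow" > __demo_subbr.txt

# glue the columns together, space separated
paste -d ' ' __demo_dset.txt __demo_subbr.txt __demo_dump.txt
# -> subj01 vis#0_Coef 0 1.5
#    subj01 vis#0_Coef 1 2.5
#    subj01 vis#0_Coef 2 3.5
```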
