I am trying out ASLPrep on a sequence of non-standardized ASL images. For this sequence, I have…
- PCASL single delay - the scanner outputs an extra calibration volume at the beginning, which I removed prior to analysis. Collected as 5 pairs of tag/control volumes with background suppression.
- PCASL calibration - this uses exactly the same scanning parameters as above, but without background suppression. I again remove the first volume and then average the remaining 4 volumes of this acquisition together to get a mean calibration image.
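For reference, the calibration averaging step I describe above is roughly the following (a sketch with NumPy on a simulated array; the shape and the random data are placeholders for what would actually come out of the NIfTI file):

```python
import numpy as np

# Simulated calibration series: 5 volumes, each 64 x 64 x 40
# (placeholder dimensions; real data would be loaded from the NIfTI file)
calib = np.random.rand(5, 64, 64, 40)

# Drop the extra first volume the scanner prepends,
# then average the remaining 4 volumes into a single calibration image
m0_mean = calib[1:].mean(axis=0)

print(m0_mean.shape)  # one 3D volume remains
```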
I am trying to set up my data in BIDS format to run ASLPrep (which looks awesome), but I don't entirely understand some of the fields I need to include in the JSON sidecar before running the analysis.
I see here that I need to include the following in the JSON file…
`BackgroundSuppressionPulseTime` - This is an array of numbers containing the timing, in seconds, of the background suppression pulses with respect to the start of the labeling.
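For context, here is a sketch of what I understand the ASL sidecar might look like for my background-suppressed run. The field names are from the BIDS ASL specification as I read it, but all the values below are placeholders, not my actual protocol parameters:

```json
{
  "ArterialSpinLabelingType": "PCASL",
  "PostLabelingDelay": 1.8,
  "BackgroundSuppression": true,
  "BackgroundSuppressionNumberPulses": 2,
  "BackgroundSuppressionPulseTime": [1.0, 2.0],
  "M0Type": "Separate"
}
```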
How in the world do I find out the `BackgroundSuppressionPulseTime` values, and is this strictly necessary for ASLPrep?
Finally, how do I label the M0 image, and would I need to include the same info in its JSON file?