As you can see, it seems to be working for some of the participants but not others. This is when running with --forcehr; it is only slightly better than without that argument. I’m also using the deep learning filter.
I can produce pulse waveforms similar to those in your paper for several of the subjects I looked at (from the app output):
That is from happy in rapidtide v1.9.1, because rapidtide 2a11 doesn’t seem to work quite as well (the middle of the vessels is blank, and there is an artifact where noise seems to travel up to the top of the brain):
This artifact is also present, to a lesser degree, in some 1.9.1 runs. Any idea which arguments I should play around with to get better performance and get rid of this artifact? Or do you think happy won’t work with such a long TR?
I have to say, that top map looks nicer than anything I’ve ever gotten out of happy…
I’m pretty sure the problem with the video on the bottom is that the specified slice order is wrong - probably flipped top to bottom (i.e. if you specified ascending slices, they may in fact be descending, or vice versa). I’d try changing that first and seeing if that fixes it.
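For what it’s worth, if the slice times are in a plain-text FSL-style file (one offset per line, in slice index order), flipping the specification just means reversing the file. Something like this sketch would do it (the filenames are placeholders, not anything happy produces):

```python
# Reverse an existing slice time file to test the flipped-slice-order idea.
# Filenames are placeholders - substitute your own.
import numpy as np

slicetimes = np.loadtxt("slicetimes_ascending.txt")   # one offset per slice
np.savetxt("slicetimes_flipped.txt", slicetimes[::-1], fmt="%.6f")
```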
As to whether the TR is too long - in our testing, we didn’t get good results with TR 2, MB 1 data. However, it’s important to note that happy is really two programs blended together. There’s the cardiac waveform extraction part, which works best with shorter TRs and higher multiband factors, and the analytic phase projection part, which doesn’t really care about either of those factors. If you are supplying an external cardiac waveform (and it seems that you have one, since you are comparing the heart rates), then TR 2, MB 1 should be fine.
Unfortunately we do not have an external cardiac waveform, just hand-recorded heart rate.
Despite the TR=2 and MB=1, it seems to be getting the heart rate at least somewhat accurately. Your paper mentions training the deep learning model with more representative data; maybe training it with the Amsterdam Open MRI collection, which has TR=2, would improve performance.
The slice times, I believe, are accurate. The two brain images are actually the same, just processed with old and new versions of happy. That artifact is present on other images to a greater or lesser extent.
The scanner is a Philips, though, so getting the slice timing is a bit more complicated.
I also ran happy on some multiband data we have (TR=0.660 and MB=2), and the performance is slightly better, but it still fails on a good portion of the participants and produces the above artifact.
I noticed you put a slice timing calculation into happy. I’m using your script with the Philips ascending interleaved option for the multiband data, and it seems to work pretty well. Nonetheless, I also tried using a script to get the slice order, which gives a different ordering.
I then take that script’s output, which gives the ordering (to a certain extent - I clean it up by hand), and use another script to calculate the slice timing. This procedure gives slightly better performance than your script.
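Just to make concrete the sort of calculation I mean, here is a rough sketch, assuming a simple ascending interleave (even-indexed slice locations first, then odd) within each simultaneously excited band. The real Philips ordering depends on the protocol, and the slice count here is made up, so this is illustrative only, not what either script actually implements:

```python
# Illustrative slice timing calculation for an ascending interleaved acquisition.
# Assumes even-indexed slice locations are excited first, then odd ones, and
# that multiband slices separated by nslices/mb are excited simultaneously.
import numpy as np

def interleaved_slice_times(nslices, tr, mb=1):
    """Return per-slice acquisition offsets in seconds, in slice index order."""
    locs_per_band = nslices // mb
    acq_order = np.concatenate([np.arange(0, locs_per_band, 2),
                                np.arange(1, locs_per_band, 2)])
    times = np.empty(nslices)
    for shot, loc in enumerate(acq_order):       # one excitation per shot
        for band in range(mb):                   # all bands fire at the same time
            times[loc + band * locs_per_band] = shot * tr / locs_per_band
    return times

print(interleaved_slice_times(nslices=40, tr=2.0))           # the TR 2, MB 1 data
print(interleaved_slice_times(nslices=40, tr=0.660, mb=2))   # the TR 0.660, MB 2 data
```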
If you have the time, would you mind trying the slice order script on one of your Philips DICOMs to see if it gives an interpretable slice ordering?
Hey - I don’t know if this is still something you were interested in/wondering about, but I just found and fixed a bug (or ambiguity) in happy’s slice time specification. Happy accepts either BIDS json sidecar files or FSL-style slice time files to specify the slice times. BIDS sidecars specify slice offset times in seconds, but FSL slice time files expect slice offsets in fractions of a TR. I hadn’t really fully internalized that - I always thought it was in seconds, and happy assumed it was. So if you used a properly constructed FSL-style slice time file, in fractions of a TR, it would be interpreted incorrectly by happy. The new behavior is to assume .json files are in seconds and non-.json files are in fractions of a TR. If you have a non-.json file in seconds, you can override this with --slicetimesareinseconds. Since your TR is 2, the difference would likely be significant.
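To make the difference concrete, here’s a small sketch of the two conventions (not happy’s actual parsing code): with a TR of 2, reading a fractions-of-a-TR file as if it were seconds makes every offset too small by a factor of 2.

```python
# Not happy's parsing code - just the two slice time conventions side by side.
import numpy as np

tr = 2.0
nslices = 5
offsets_sec = np.arange(nslices) * tr / nslices   # true offsets, in seconds
offsets_frac = offsets_sec / tr                   # same offsets, FSL style (fractions of a TR)

# Misreading the FSL-style file as seconds leaves the fractions untouched,
# so every offset ends up too small by a factor of TR:
print(offsets_sec)    # [0.  0.4 0.8 1.2 1.6]
print(offsets_frac)   # [0.  0.2 0.4 0.6 0.8]  <- what happy would have used
```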
I specified the slice times in seconds, but I think the issues may be due either to motion or to something strange going on with the timing of the dataset.
Probably too much info/you won’t be interested, but:
The scan card says the scan lasts 12:07.9 (727.9 seconds), but there are 360 dynamics with a TR of 2, which only accounts for 720 seconds. So the timing doesn’t exactly match up, either because of equilibration dummy scans at the beginning or because of some sort of gap between each TR. I’m not sure why it would work well for a few sessions but not as well for others, though.
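For reference, the arithmetic of the mismatch:

```python
# Back-of-the-envelope numbers for the timing mismatch on the scan card.
scan_duration = 12 * 60 + 7.9       # 12:07.9 on the scan card -> 727.9 s
nominal = 360 * 2.0                 # 360 dynamics at TR = 2   -> 720.0 s
extra = scan_duration - nominal     # ~7.9 s unaccounted for

print(extra / 2.0)          # ~4 TRs' worth, if it's dummy scans at the start
print(extra / 360 * 1000)   # ~22 ms per dynamic, if it's a gap between TRs
```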
I have applied this to some ADNI data as well (a while back) and the results looked good, so going forward I may just do it on that.