Beauchamp:MVPA Notes

To analyze the data, we use the 3dsvm program. Data from some runs are used for training; data from other runs are used for testing. This process can be automated.
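
For example, the even/odd split used later on this page can be wrapped in a loop. This is only a sketch: the dataset and label file names here are placeholders, and the real commands appear further down the page.

 # sketch: train on one half, test on the other, in both directions
 foreach train (even odd)
  if ($train == even) then
   set test = odd
  else
   set test = even
  endif
  3dsvm -trainvol {$train}_data+orig -trainlabels labels.1D -model {$train}_model
  3dsvm -testvol {$test}_data+orig -model {$train}_model+orig \
        -testlabels labels.1D -predictions {$train}_predict
 end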


To see what brain areas are important for making the classification, we can look at the weight mask output by

 3dsvm -bucket

For this, we can just use all of the runs at once, since we are not trying to measure an accuracy percentage on held-out data.
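
A minimal sketch of that (the dataset and label file names are hypothetical): train on every trial at once and write out the weight map with -bucket:

 3dsvm -trainvol all_T2_T5+orig \
       -trainlabels all_T2_T5_labels.1D \
       -mask {$ec}maskAlbl+orig \
       -model all_T2_T5_model \
       -bucket all_T2_T5_weights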

3dsvm needs brain volumes to train and test on. For block designs, these would just be the volumes collected during each task block (possibly excluding the first few, while the MR signal is still rising). For event-related designs, the response to each trial can be approximated by simply picking the brain volume collected 2 TRs (4 seconds) after the stimulus was presented, or it can be estimated with -stim_times_IM in 3dDeconvolve. It is not clear whether one approach gives better results than the other.
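
For the pick-the-volume-2-TRs-later approach, something along these lines could work; the run and onset file names are hypothetical, and it assumes a 2 s TR with one onset time (in seconds) per line:

 # build a sub-brick index list (onset/TR + 2) and pull those volumes out
 set TR = 2
 set idx = `awk -v tr=$TR '{printf "%d,", int($1/tr)+2}' run1_onsets.txt | sed 's/,$//'`
 3dTcat -prefix run1_trialvols run1_epi+orig"[$idx]"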

For the -stim_times_IM approach, the responses to trials that occur very late in a run will be poorly estimated (the modeled response runs past the end of the acquisition), so it is best to delete those onsets from the timing text files first.

 foreach f (*txt)
 cp $f newreg/trunc_{$f}
 end
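
As a scripted alternative to the hand-editing described next, the copy and truncation can be done in one pass. This is a rough sketch only: it assumes a hypothetical 300 s run length and timing files laid out as one row of onset times per run.

 # drop onsets within the last 10 s of a (hypothetical) 300 s run; keep '*' placeholders
 foreach f (*txt)
  awk '{o=""; for (i=1;i<=NF;i++) if ($i=="*" || $i+0<290) o=o $i " "; if (o=="") o="*"; print o}' $f > newreg/trunc_{$f}
 end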

Manually remove (or script, as in the sketch above) any onsets in the last 10 seconds of each run. We only need to use IM for the events that we are trying to classify; for other trial types (e.g. controls) there is no need. Sample command line:

 3dDeconvolve -fout -tout -full_first -polort a -concat runs.txt \
 -input {$ec}Albl+orig -num_stimts 15 -nfirst 0 -jobs 2 \
 -mask {$ec}maskAlbl+orig  \
 -stim_times_IM 1 newreg/trunc_VTv1T2.txt 'BLOCK(2,1)'  -stim_label 1 TacD2 \
 -stim_times_IM 2 newreg/trunc_VTv1T5.txt 'BLOCK(2,1)'  -stim_label 2 TacD5 \
 -stim_times_IM 3 newreg/trunc_VTv1V2.txt 'BLOCK(2,1)'  -stim_label 3 VisD2 \
 -stim_times_IM 4 newreg/trunc_VTv1V5.txt 'BLOCK(2,1)'  -stim_label 4 VisD5 \
 -stim_times_IM 5 newreg/trunc_VTv1TV2.txt 'BLOCK(2,1)'  -stim_label 5 TacVisD2 \
 -stim_times_IM 6 newreg/trunc_VTv1TV5.txt 'BLOCK(2,1)'  -stim_label 6 TacVisD5 \
 -stim_times 7 VTv1CT.txt 'BLOCK(2,1)'  -stim_label 7 TacCtrl \
 -stim_times 8 VTv1CV.txt 'BLOCK(2,1)'  -stim_label 8 VisCtrl \
 -stim_times 9 VTv1CVT.txt 'BLOCK(2,1)'  -stim_label 9 TacVisCtrl \
 -stim_file 10 {$ec}vr.1D'[0]'  -stim_base 10 \
 -stim_file 11 {$ec}vr.1D'[1]'  -stim_base 11 \
 -stim_file 12 {$ec}vr.1D'[2]'  -stim_base 12 \
 -stim_file 13 {$ec}vr.1D'[3]'  -stim_base 13 \
 -stim_file 14 {$ec}vr.1D'[4]'  -stim_base 14 \
 -stim_file 15 {$ec}vr.1D'[5]'  -stim_base 15 \
 -prefix {$ec}v{$v}mr

Next, we chop this up into separate files for each event type; this is easiest if there are the same number of events of each type.

 set f = EVv2mr+orig.HEAD
 3dinfo -verb $f | grep Coef
 3dbucket -overwrite -prefix {$ec}_T2 -fbuc $f'[1..119(2)]'
 3dbucket -overwrite -prefix {$ec}_T5 -fbuc $f'[122..240(2)]'
 3dbucket -overwrite -prefix {$ec}_V2 -fbuc $f'[243..357(2)]'
 3dbucket -overwrite -prefix {$ec}_V5 -fbuc $f'[360..476(2)]'
 3dbucket -overwrite -prefix {$ec}_TV2 -fbuc $f'[479..597(2)]'
 3dbucket -overwrite -prefix {$ec}_TV5 -fbuc $f'[600..718(2)]'

We can further subdivide these for training/testing, e.g. into even and odd trials.

 3dinfo EV_*HEAD | grep "pixel ="

 ++ 3dinfo: AFNI version=AFNI_2008_07_18_1710 (May 21 2009) [64-bit]
 Number of values stored at each pixel = 60
 Number of values stored at each pixel = 60
 Number of values stored at each pixel = 60
 Number of values stored at each pixel = 60
 Number of values stored at each pixel = 58
 Number of values stored at each pixel = 59

 foreach f (EV_*HEAD)
  3dbucket -overwrite -prefix even_{$f} -fbuc $f'[0..56(2)]'
  3dbucket -overwrite -prefix odd_{$f} -fbuc $f'[1..57(2)]'
 end

Since the classification is easiest for pairwise comparisons, we pair up the BRIKs (the doubled +orig.HEAD in the dataset names is presumably because the full .HEAD file names were used as the -prefix in the loop above):

 3dbucket -prefix even_T2_T5 even_EV_T2+orig.HEAD+orig.HEAD even_EV_T5+orig.HEAD+orig.HEAD
 3dbucket -prefix even_V2_V5 even_EV_V2+orig.HEAD+orig.HEAD even_EV_V5+orig.HEAD+orig.HEAD
 3dbucket -prefix even_TV2_TV5 even_EV_TV2+orig.HEAD+orig.HEAD even_EV_TV5+orig.HEAD+orig.HEAD
 3dbucket -prefix odd_T2_T5 odd_EV_T2+orig.HEAD+orig.HEAD odd_EV_T5+orig.HEAD+orig.HEAD
 3dbucket -prefix odd_V2_V5 odd_EV_V2+orig.HEAD+orig.HEAD odd_EV_V5+orig.HEAD+orig.HEAD
 3dbucket -prefix odd_TV2_TV5 odd_EV_TV2+orig.HEAD+orig.HEAD odd_EV_TV5+orig.HEAD+orig.HEAD

3dsvm requires the data to have a time axis, so add this back in:

 3drefit -TR 2000 even* odd*

Create the train/test label text files, with one entry per event of each type (here, 29 of each class):

 1deval -num 29 -expr '0' > {$ec}_svmv1_train.1D
 1deval -num 29 -expr '1' >> {$ec}_svmv1_train.1D
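
If the two classes end up with unequal trial counts, the extra trials can reportedly be censored by labeling them 9999 instead of 0/1 (check 3dsvm -help for the censoring conventions). A hypothetical sketch for a bucket holding 30 class-0 trials followed by 28 class-1 trials, censoring the last two class-0 trials:

 # hypothetical counts: 30 trials of class 0 then 28 of class 1; censor 2 class-0 trials
 1deval -num 28 -expr '0'    >  {$ec}_svmv1_train.1D
 1deval -num 2  -expr '9999' >> {$ec}_svmv1_train.1D
 1deval -num 28 -expr '1'    >> {$ec}_svmv1_train.1D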

Classification usually works best if given some sort of mask, typically all brain voxels, or only brain voxels showing some activation.
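
For example (a sketch; the threshold and output prefixes are arbitrary): a whole-brain mask can be made with 3dAutomask on the registered EPI, and an active-voxel mask by thresholding the full-model F statistic, which -full_first places in sub-brick 0 of the 3dDeconvolve bucket:

 # whole-brain mask from the registered EPI (if not already made)
 3dAutomask -prefix {$ec}maskAlbl {$ec}Albl+orig
 # or: keep only voxels whose full-model F exceeds an (arbitrary) threshold of 10
 3dcalc -a {$ec}v{$v}mr+orig'[0]' -expr 'step(a-10)' -prefix {$ec}v{$v}funcmask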

 set ds = TV2_TV5
 set v = 1
 set f = {$ec}_svmv{$v}_results.txt
 rm $f *svmv{$v}*model*
 set train = even
 set test = odd
 3dsvm -overwrite -trainvol  {$train}_{$ds}+orig \
 -trainlabels {$ec}_svmv1_train.1D \
 -mask  EVmaskAlbl+orig \
 -model {$ec}_svmv{$v}_{$train}_model
 echo Results from training on {$train} trials, testing on {$test} trials  >>  $f
 3dsvm -overwrite  -nodetrend -classout \
 -testvol {$test}_{$ds}+orig \
 -model {$ec}_svmv{$v}_{$train}_model+orig \
 -testlabels  {$ec}_svmv1_train.1D  \
 -predictions {$ec}_svmv{$v}_predict >> $f
 # (repeat with train and test swapped to classify in the other direction)
 cat $f | grep -i "accuracy"
 open -e $f

 set ds = T2_T5
 Accuracy on test set: 55.17% (32 correct, 26 incorrect, 58 total)
 set ds = T2_T5
 Accuracy on test set: 43.10% (25 correct, 33 incorrect, 58 total)