Beauchamp:Electrophysiology
Electrophysiology Protocols
After analysing fMRI data, upload the entire contents of the AFNI and SUMA directories to XFiles. This can be simplified by pressing Apple-K (Connect to Server) in Finder and choosing XFiles:
xfiles.hsc.uth.tmc.edu (129.106.148.217)
then the folders can be dragged onto the mounted XFiles volume in Finder, or copied on the command line, without using the Web-based GUI interface.
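A minimal command-line sketch of the copy step. Both paths here are placeholders, not the lab's actual layout: SRC stands for the subject's local analysis directory and DEST for the mounted XFiles volume (e.g. /Volumes/xfiles/DE).

```shell
# Copy the AFNI and SUMA analysis directories onto the mounted XFiles share.
# SRC and DEST are hypothetical; in real use set them to the subject's
# analysis directory and the mount point created by Connect to Server.
SRC="${SRC:-$(mktemp -d)}"            # demo fallback: throwaway directory
DEST="${DEST:-$(mktemp -d)/DE}"       # demo fallback: throwaway destination
mkdir -p "$SRC/afni" "$SRC/suma" "$DEST"   # no-op if they already exist
cp -R "$SRC/afni" "$SRC/suma" "$DEST/"
```

rsync -av would also work and can resume an interrupted transfer, which matters for large SUMA surface directories.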
In the EMU
It is also good to collect 10 minutes of resting data (no stimulation) from as many visual electrodes as possible for later analyses.
January 2008 Subjects
Proposed experiments for January 2008 subjects.
TODO LIST
Decide on screening stimuli, i.e., pick 20 from each category:
faces, houses, bodies, scenes, tools
Get rid of bad-looking stimuli; make a detailed protocol
Focus on ventral temporal and lateral occipital-temporal electrodes with visual responses in fMRI
not on electrodes over early visual cortex
EXPERIMENT: stimulation at 2 (up to 8) mA (no psychometrics) to see which, if any, late sites evoke percepts
If there is no percept, at the highest current do 20 trials with behavioral responses to quantify the lack of response.
If there is a percept, see if it is complex or not.
If a simple phosphene in an early site, do 20 trials with behavioral response at a current to prove there was a percept.
If a complex percept or a later site, do the complete psychometric function.
GOAL: additional data for Dona's current paper; pilot data for grant to show that stimulation in higher areas does NOT produce a percept.
ANTICIPATED RESULT: few, if any, sites will produce percepts
EXPERIMENT: object selectivity to determine preferred and nonpreferred stimuli with well-defined categories, including: faces, bodies, houses, scenes, etc.
GOAL: pilot data on category selectivity, determine preferred objects
ANTICIPATED RESULT: As in the Malach paper, there will be sharp tuning, with some electrodes responding only to stimuli in their preferred category.
EXPERIMENT: For electrode(s) with nice clean responses to a preferred stimulus, do RF mapping with the preferred stimulus
GOAL: Determine RFs in higher areas (identified with fMRI)
PREDICTION: Higher areas will have large but not completely homogeneous spatial RFs
Possibility: also map the RF with less-preferred stimuli
EXPERIMENT: repeated presentation of preferred stimulus; repeated presentation of nonpreferred stimulus (context: letter detection foveally)
GOAL: Pilot data for adaptation
PREDICTION: AAAB more than BBBB
If there is ample time:
EXPERIMENT: stimulation of higher electrodes while subject makes object or noise discrimination
i.e. perceptual biasing with preferred and nonpreferred stimuli embedded in noise
GOAL: Pilot data for grant
study motion and orientation selectivity using Ping's new screening program
object selectivity with preferred stimulus in big screen of same category stimuli
object selectivity with preferred stimulus in big screen of nonpreferred category stimuli
object selectivity with nonpreferred stimulus in big screen of same category stimuli
object selectivity with nonpreferred stimulus in big screen of preferred category stimuli
Processing Subject Data
After obtaining the CD containing the patient CT data from St. Luke's, use OsiriX to export all images (using the export-to-DICOM option with the hierarchical and uncompressed settings).
CT scans have voxel size 0.488x0.488x1 mm; this may need to be adjusted manually with
3drefit -zdel 1.000 DE_CTSDE+orig
(If the CTs look distorted in AFNI, then the voxel size must be adjusted.) Next, the CTs must be registered with the hi-res presurgical MRI anatomy. This may fail because the CT's coordinate system has a very different origin from the MRI's. Registration routines will not work if the input datasets are not in rough alignment. To check this, type
3dinfo DE_CTSDE+orig
returns
R-to-L extent: -124.756 [R] -to- 124.756 [L] -step- 0.488 mm [512 voxels]
A-to-P extent: -124.756 [A] -to- 124.756 [P] -step- 0.488 mm [512 voxels]
I-to-S extent: -258.000 [I] -to- -86.000 [I] -step- 1.000 mm [173 voxels]
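Averaging each axis's extent endpoints gives the dataset center, which makes any off-center axis obvious. A quick sketch with the values pasted from that output (the awk one-liner is just illustrative, not part of AFNI):

```shell
# Average each axis's extent endpoints (min, max) to get the dataset center.
# Values below are this example's; substitute your own 3dinfo output.
printf '%s\n' \
  "R-to-L -124.756 124.756" \
  "A-to-P -124.756 124.756" \
  "I-to-S -258.000 -86.000" |
awk '{printf "%s center = %.1f mm\n", $1, ($2 + $3) / 2}'
```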
We want the center of the dataset to be roughly at (0,0,0). For this example, this is true for (x,y) but not for z. First, create a copy of the dataset
3dcopy DE_CTSDE+orig DE_CTSDEshift
Then, recenter the z-axis
3drefit -zorigin 80 DE_CTSDEshift+orig
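The value 80 is roughly half the original z span (172 mm, so ~86; 80 is a round-number choice that leaves the center a few millimetres from zero). A quick sanity check, with the extent line pasted from the earlier 3dinfo output (again, awk is just illustrative):

```shell
# Parse the original I-to-S extent line and print half the z span,
# which is approximately the -zorigin value needed to center the axis.
echo "I-to-S extent: -258.000 [I] -to- -86.000 [I] -step- 1.000 mm [173 voxels]" |
awk '{printf "half z span = %.0f mm\n", ($6 - $3) / 2}'
```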
3dinfo returns
R-to-L extent: -124.756 [R] -to- 124.756 [L] -step- 0.488 mm [512 voxels]
A-to-P extent: -124.756 [A] -to- 124.756 [P] -step- 0.488 mm [512 voxels]
I-to-S extent: -80.000 [I] -to- 92.000 [S] -step- 1.000 mm [173 voxels]
The z-axis is now roughly centered around 0. In AFNI, examine the MR and the shifted CT to make sure they are in rough alignment. Next, use 3dAllineate to align the two datasets.
3dAllineate -base {$ec}anatavg+orig -source DE_CTSDEshift+orig \
  -prefix {$ec}CTSDE_REGtoanatV4 -verb -warp shift_rotate \
  -cost mutualinfo -1Dfile {$ec}CTSDE_REGtoanatXformV4
Check in AFNI to make sure that the alignment is correct. NB: It is also possible to crop the MRI before running 3dAllineate, since the MR coverage is typically greater than the CT coverage. In a test case, this did not have a big effect.
Things to do
HumanImageDetection
- Can stimuli be vector-based rather than pixel-based, so as not to lose resolution with scaling? POSSIBLE if the original file is vector-based
- Enable online scrambling LOOKING INTO IT
- Enable online color to black and white conversion LOOKING INTO IT
HumanLetterDetection
- Analyze data from LR to see where the RFs are