All 8 entries tagged Spm5


July 31, 2012

SPM plot units

What are the units of a plot in SPM?

In SPM, when you click on “Plot” and ask for a “Contrast estimates and 90% CI”, what exactly are the units of the thing plotted? I get this question so many times I wanted to record my answer once and for all.

The short answer is: 2-3 times an approximation to percent BOLD change, if you have a long-block design; otherwise, it depends. Here’s the long answer.

The final units of a plotted contrast of parameter estimates depend on three things:
  1. The scaling of the data.
  2. The scaling of the predictor(s) that are involved in the selected contrast.
  3. The scaling of the selected contrast.

The scaling of the data

For a first-level fMRI analysis in SPM the scaling is done “behind the scenes”... there are no options to change the default behavior: the global mean intensity of each image is computed, and the average of those values (the “grand grand mean”) is scaled to equal 100. However, SPM’s spm_global produces a rather dismal estimate of typical intracerebral intensity (note that the author is listed as “Anonymous”; to be fair, it worked well on the tightly-cropped, standard-space PET images it was originally designed for). The estimate from spm_global is generally too low, and so when the data are divided by it and multiplied by 100, the typical brain intensity ends up well over 100. In my experience, if you examine a beta_XXXX.img corresponding to a constant in a first-level regression model, gray matter values typically range from 200 to 300 when they should be about 100; for first-level analyses done in subject space (where there are even more air voxels) the bias can be even greater. Let’s call the typical gray matter intensity of your BOLD T2* images B, for correcting this effect later (e.g. B = 250 or 300 or whatever you find; see the sketch below for one way to read this off).
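Here is a minimal Matlab sketch of how one might estimate B, assuming the current directory holds a first-level analysis and that beta_0010.img happens to be the session constant (a hypothetical name; check SPM.xX.name for the actual column):

Vb   = spm_vol('beta_0010.img');        % header of the constant's beta image (hypothetical filename)
beta = spm_read_vols(Vb);               % read the voxel data
beta = beta(isfinite(beta) & beta>0);   % keep in-mask voxels only (out-of-mask voxels are NaN or zero)
B    = median(beta)                     % robust guess at typical brain intensity; expect roughly 200-300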

At the second level, you depend on the contrast estimates passed up from the first level to have a valid interpretation. So all three of these scalings apply to both first- and second-level models, but the data scaling is done at the first level (no further scaling should be done at the second level; e.g., in the Batch Editor, second-level models should have “Global normalisation -> Overall grand mean scaling” set to “No”).

The scaling of the predictor(s) that are involved in the selected contrast

At the first level, the predictors must be scaled appropriately to give the regression coefficients the same units as the data. For BOLD fMRI, this means the baseline-to-peak magnitude should be 1.0… i.e. a one unit change in a beta coefficient results in a one unit change in the predicted BOLD effect. In SPM, for loooong blocks/epochs, 20 seconds or longer, the scaling is as expected, with the baseline-to-plateau difference being 1.0. For shorter blocks, however, the peak can be higher – this corresponds to the ‘initial peak’ part of the HRF, as the duration is too short for the flat plateau to become established (this ‘early peak’ arises from the symmetry of convolution, as it must exactly mirror the post-stimulus undershoot; there is some physiological basis for this as well, though) – or it can be lower. For events, the HRF is normalized to sum to unity, which doesn’t help you predict the peak.

If you have anything but long blocks, then, you have to examine the baseline-to-peak (or, if you prefer, baseline-to-trough) height, and make a note of it for later scaling (call it H). (The design matrix is in SPM.xX.X, in SPM.mat; plot individual columns to judge the scaling, as in the sketch below.) For event-related designs, however, it can be tricky to uniquely define the baseline-to-peak height; if you have a slow design with isolated events, you’ll find a different peak height than if you have a design with closely-spaced events, as the responses pile up additively. A reasonable unique definition of an event-related response magnitude is the height of an isolated event; again, call this H.
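A minimal sketch of judging H from the design matrix, assuming the predictor of interest is column 1 (adjust the column index to your own design):

load SPM                          % loads the SPM structure from SPM.mat
X = SPM.xX.X;                     % the full design matrix
plot(X(:,1))                      % eyeball the shape of the first predictor
H = max(X(:,1)) - min(X(:,1))     % crude baseline-to-peak height
% NB: if the predictor dips below baseline (post-stimulus undershoot), max-min
% overstates the baseline-to-peak height, so judge H from the plot rather than
% trusting this number blindly.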

At the second level, you must also make sure that your regressors preserve the units of the data. For example, if you are manually coding a dummy regressor like gender as -1/1, realize that a one unit change in the gender beta coefficient corresponds to a two unit change in the data. Safer is a 0/1 coding, though that then re-defines the intercept; better yet, use -1/2 and 1/2 as dummy values for a dichotomous variable (see the sketch below).
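As a concrete (hypothetical) illustration of the -1/2 and 1/2 coding for a two-group second-level model:

grp = [-0.5*ones(10,1); 0.5*ones(10,1)];   % 10 subjects per group (made-up sizes)
X   = [ones(20,1) grp];                    % intercept plus group regressor
% With this coding the group beta equals mean(group 2) - mean(group 1), so a one
% unit beta corresponds to a one unit difference in the data; with -1/1 coding the
% beta would be half the group difference (a one unit beta means a two unit difference).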

For parametric modulation and covariates, note that a regression coefficient’s units are determined by the units of the covariate. For example, if you have a parametric modulation in a reward experiment, where the units are £’s at risk in a bet, then your regression coefficient has units (if everything else is working as it should) of “percent BOLD change per £ at risk”. Likewise, if you use an age predictor at the second level and enter age in years (centered or uncentered), the units of the age coefficient are “percent BOLD change per year”.
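For the age example, a small sketch (with made-up ages) of entering age in years, mean-centred so the intercept keeps its group-mean interpretation while the age beta stays in percent BOLD change per year:

age  = [23 31 28 45 37]';    % hypothetical ages, in years
ageC = age - mean(age);      % centred covariate to enter as a second-level regressor
% Centring changes the meaning of the intercept (response at the mean age rather
% than at age zero) but not the units or the value of the age coefficient.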

Finally, beware of trying to compare percent BOLD change between block designs and event-related designs. Due to the nonlinearities of the BOLD effect, a percent BOLD change for an event-related response is really a different beast from a percent BOLD change for a block design. When reporting one type, be sure it is clear what sort of effect you’re reporting, and don’t try averaging or comparing their magnitudes.

The scaling of the selected contrast

The General Linear Model is a wondrous thing, as you’ll get the same T-values and P-values for the contrast [-1 -1 1 1] as you do for [-1/2 -1/2 1/2 1/2]. However, the interpretation of the contrast of parameter estimates (contrast times beta-hats) is different for each. The following rules to scale contrasts will usually work to ensure that a contrast preserves the units of the regression coefficients:
  • Sum of any positive contrast elements is 1.
  • Sum of any negative contrast elements is -1.

If the contrast elements are all positive or all negative, the interpretation is clear: you are ensuring that the beta coefficients (each of which should have a unit-change interpretation) are averaged, giving an (average) unit-change interpretation. If there is a mix of positive and negative elements, this rule ensures that one unit-change quantity (the negatively weighted coefficients) is subtracted from another (the positively weighted coefficients), giving the expected difference of unit changes. (This rule should work for covariates and simple 0/1 dummy variables; this FSL example shows a case with -1/0/1 dummy variables where the contrasts violate these rules yet still give the correct units.)
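A quick sanity check of a contrast against these rules, in Matlab:

c = [-1/2 -1/2 1/2 1/2];
sum(c(c>0))    % should be  1
sum(c(c<0))    % should be -1
% The contrast [-1 -1 1 1] gives the same T and P values, but a contrast estimate
% twice as large, i.e. no longer in (approximate) percent BOLD change units.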

When it goes right, when it goes wrong

If everything has gone well, then the plotted units are “approximate percent BOLD change”. The “approximate” qualification results from only using a global-brain mean of 100; an “exact” percent BOLD change requires that the voxel in question have a baseline (or grand mean) intensity of exactly 100. We don’t grand-mean-normalise voxel-by-voxel because of edge effects: at the edge of the brain, the image intensity falls off to zero, and dividing by these values gives artifactually large values.

Of the three scalings in play, the first two are basically out of the user’s control. As hinted, first-level fMRI data in SPM are generally mis-scaled, resulting in a global brain intensity between 200 and 300, i.e. 2 to 3 times greater than anticipated. Also, if you don’t have long blocks, you’ll need to apply a correction for the non-unity H you measured. The corrections are as follows: dividing the reported results by B/100 (i.e. multiplying by 100/B) corrects the mis-scaling of the data, and multiplying by H corrects the units of the regression coefficients. After both of these corrections you should get something closer to approximate percent BOLD change (see the sketch below).
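Putting the corrections together, a small sketch with hypothetical values for B and H:

B = 250;                 % typical gray matter intensity found in the constant's beta image
H = 1.3;                 % baseline-to-peak height measured from the design matrix
plotted = 0.8;           % value read off the "Contrast estimates and 90% CI" plot
pctBOLD = plotted * (100/B) * H   % closer to approximate percent BOLD change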

This all sounds rather depressing, doesn’t it? Well, there is a better way. If you use the Marsbar toolbox, it will carefully estimate the baseline intensity for the ROI (or a single-voxel ROI; just make sure it isn’t on the edge of the brain!) and give exact percent BOLD change units.

Postscript: What about FSL?

FSL has a good estimate of the global mean, because it uses morphological operations to identify intracerebral voxels. It scales the grand mean brain intensity to 10,000, however (because it used to use an integer data representation, and 100 would have resulted in too few significant digits). Block-design predictors are scaled to have unit height; the default HRF doesn’t have a post-stimulus undershoot, and so baseline-to-peak height is the same as trough-to-peak height. Event-related predictors are not scaled to have a peak height of unity; however, the recommended tool for extracting percent BOLD change values, featquery, estimates a trough-to-peak height and then appropriately scales the data. However, as Jeanette Mumford has pointed out, this isn’t really the right thing to do when you have closely spaced trials; see her excellent A Guide to Calculating Percent Change with Featquery to see how to deal with this.

Other references

Thanks to Jeanette Mumford & Tor Wager for helpful input and additions to this entry, and Guillaume Flandin for the additional references.


February 16, 2012

Creating Paired Differences

If you analyze two-time-point longitudinal data, you will eventually find that it is easier to create and analyze difference data (e.g. time 2 data minus time 1 data). However, if you’re not a scripting guru this can be an annoying task.

This PairDiff.m script will take an even number of images and produce a set of paired difference images, even minus odd.

function PairDiff(Imgs,BaseNm)
% PairDiff(Imgs,BaseNm)
% Imgs   - Matrix of (even-number of) filenames
% BaseNm - Basename for difference images
%
% Create pairwise difference for a set of images (img2-img1, img4-img3, etc),
% named according to BaseNm (BaseNm_0001, BaseNm_0002, BaseNm_0003, etc).
%______________________________________________________________________
% $Id: PairDiff.m,v 1.2 2012/02/16 21:36:41 nichols Exp $

if nargin<1 || isempty(Imgs)
  Imgs = spm_select(Inf,'image','Select n*2 images');
end
if nargin<2 || isempty(BaseNm)
  BaseNm = spm_input('Enter difference basename','+1','s','Diff');
end
V = spm_vol(Imgs);
n = length(V);
if rem(n,2)~=0
  error('Must specify an even number of images')
end

V1 = V(1);
V1  = rmfield(V1,{'fname','descrip','n','private'});
V1.dt = [spm_type('float32') spm_platform('bigend')];

for i=1:n/2
  V1.fname = sprintf('%s_%04d.img',BaseNm,i);
  V1.descrip = sprintf('Difference %d',i);
  Vo(i) = spm_create_vol(V1);
end

%
% Do the differencing
%

fprintf('Creating differenced data ')

for i=1:2:n

  img1 = spm_read_vols(V(i));
  img2 = spm_read_vols(V(i+1));

  img = img2-img1;

  spm_write_vol(Vo((i+1)/2),img);

  fprintf('.')

end
fprintf(' Done.\n')
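For completeness, a hypothetical call from the Matlab prompt, with baseline and follow-up scans interleaved per subject so that each difference is time 2 minus time 1:

Imgs = char('sub1_time1.img','sub1_time2.img', ...
            'sub2_time1.img','sub2_time2.img');
PairDiff(Imgs,'Diff')    % writes Diff_0001.img, Diff_0002.img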
After I created this I realized that Ged Ridgway also has a similar script, make_diffs.m, which takes two lists of images (baseline, followup) and does the same thing, though with perhaps more intuitive filenames.

June 03, 2010

SPM5 Gem 6: Corrected cluster size threshold

This is the SPM5 version of SPM2 Gem 13.

This is a script that will tell you the corrected cluster size threshold for a given cluster-defining threshold: CorrClusTh.m

The usage is pretty self explanatory:

 Find the corrected cluster size threshold for a given alpha
 function [k,Pc] =CorrClusTh(SPM,u,alpha,guess)
 SPM   - SPM data structure
 u     - Cluster defining threshold
         If less than zero, u is taken to be uncorrected P-value
 alpha - FWE-corrected level (defaults to 0.05)
 guess - Set to NaN to use a Newton-Raphson search (default)
         Or provide an explicit list (e.g. 1:1000) of cluster sizes to
         search over.
         If guess is a (non-NaN) scalar nothing happens, except that the
         corrected P-value of guess is printed.

 Finds the corrected cluster size (spatial extent) threshold for a given
 cluster defining threshold u and FWE-corrected level alpha. 
  

To find the 0.05 (default alpha) corrected cluster size threshold for a 0.01 cluster-defining threshold:

>> load SPM
>> CorrClusTh(SPM,0.01)
  For a cluster-defining threshold of 2.4671 the level 0.05 corrected
  cluster size threshold is 7860 and has corrected P-value 0.0499926
  

Notice that, due to the discreteness of cluster sizes, you cannot get an exact 0.05 threshold.

The function uses an automated search which may sometimes fail. If you specify a 4th argument you can manually specify the cluster sizes to search over:

>> CorrClusTh(SPM,0.01,0.05,6000:7000)

  WARNING: Within the range of cluster sizes searched (6000...7000)
  a corrected P-value <= alpha was not found (smallest P: 0.0819589)

  Try increasing the range or an automatic search.

Ooops... bad range.

Lastly, you can use it as a look up for a specific cluster size threshold. For example, how much over the 0.05 level would a cluster size of 7859 be?

>> CorrClusTh(SPM,0.01,0.05,7859)
  For a cluster-defining threshold of 2.4671 a cluster size threshold of
  7859 has corrected P-value 0.050021

Just a pinch!


SPM5 Gem 5: Unnormalizing a point

This script of John’s will find the corresponding co-ordinate in the un-normalized image: get_orig_coord2.m (same script as for SPM2).


SPM5 Gem 4: Switching between SPM versions

This function will allow you to switch between different SPM versions. WARNING! As SPM depends on various global (and sometimes local workspace) variables, this function clears all variables as part of the switch.

The function will need to be put in a directory in your Matlab path that does not contain SPM. spmsel.m


SPM5 Gem 3: Resizing images

A generalized version of John's reorient script (see Gem2) by Ged Ridgway, which allows specification of arbitrary voxel dimensions: resize_img.m (this was current as of April 2007; see Ged's script directory for updates).


SPM5 Gem 2: Reslicing images

To reslice an image at 1mm cubic voxels, axial orientation, use this reorient.m script (from an email from John dated Mon, 5 Jun 2006 13:02:05 +0100; see also Gem3 and SPM99 Gem7).


SPM5 Gem 1: Introduction to SPM5 scripting

Date: Tue, 13 Dec 2005 17:03:13 +0100
From: John Ashburner <john@FIL.ION.UCL.AC.UK>
To: SPM@JISCMAIL.AC.UK
Subject: Re: [SPM] Where can we find some development materials for SPM ?

>    As you know,we usually need to modify the code of SPM to fit our
> problem.but we can not find some relevant development  tutorials. Would you
> please tell me how to learn the framework of SPM step by step ?
>   In addition, I want to know where I can find the details of the SPM
> structure.

It may be easiest to learn by example.  If you want to develop a new 
user-interface for SPM5, then you would create a file called spm_config_*.m, 
similar to the other spm_config.m files (if you strip out the documentation 
parts, you will see that these are actually quite small).  Your spm_config* 
file can then be added to the toolbox subdirectory and accessed through the 
TOOLS pulldown.

The help button allows you to navigate through the documentation of each of 
the Matlab functions, which you may find useful.

For reading and writing images, you would use these functions.
    spm_vol
    spm_slice_vol
    spm_sample_vol

    spm_create_vol
    spm_write_plane
    spm_write_vol

    spm_get_space

There is Matlab help on all these functions.  Alternatively, you could use the 
NIFTI routines directly.  There is no documentation on this, but here are a 
few examples of how you can use them:
======================================

% Example of creating a simulated .nii file.
dat        = file_array;
dat.fname  = 'junk.nii';
dat.dim    = [64 64 32];
dat.dtype  = 'FLOAT64-BE';
dat.offset = ceil(348/8)*8;

% alternatively:
% dat = file_array( 'junk.nii',dim,dtype,off,scale,inter)

disp(dat)

% Create an empty NIFTI structure
N = nifti;

fieldnames(N) % Dump fieldnames

% Creating all the NIFTI header stuff
N.dat = dat;
N.mat = [2 0 0 -110 ; 0 2 0 -110; 0 0 -2 92; 0 0 0 1];
N.mat_intent = 'xxx'
N.mat_intent = 'Scanner';
N.mat0 = N.mat;
N.mat0_intent = 'Aligned';

N.diminfo.slice = 3;
N.diminfo.phase = 2;
N.diminfo.frequency = 2;
N.diminfo.slice_time.code='xxx';
N.diminfo.slice_time.code = 'sequential_increasing';
N.diminfo.slice_time.start = 1;
N.diminfo.slice_time.end = 32;
N.diminfo.slice_time.duration = 3/32;

N.intent.code='xxx' ; % dump possibilities
N.intent.code='FTEST'; % or N.intent.code=4;
N.intent.param = [4 8];

N.timing.toffset = 28800;
N.timing.tspace=3;
N.descrip = 'This is a NIFTI-1 file';
N.aux_file='aux-file-name.txt';
N.cal = [0 1];

create(N); % Writes hdr info

dat(:,:,:)=0;

[i,j,k] = ndgrid(1:64,1:64,1:32);
dat(find((i-32).^2+(j-32).^2+(k*2-32).^2 < 30^2))=1;
dat(find((i-32).^2+(j-32).^2+(k*2-32).^2 < 15^2))=2;


% displaying a slice
imagesc(dat(:,:,12));colorbar

% get a handle to 'junk.nii';
M=nifti('junk.nii');

imagesc(M.dat(:,:,12));
======================================

Best regards,
-John
