Archived FMRI pipeline
About fMRI data and file types
Raw fMRI data is saved in .dcm (DICOM) files. Typically each .dcm file corresponds to an individual slice, and the resulting 3D image (or a time series of 3D images) is saved in a .nii (NIfTI) file. Raw DICOM files can be converted to NIfTI format using SPM (a MATLAB software package implementing Statistical Parametric Mapping for neuroimaging data) or other software such as MRIcro or FreeSurfer. (Note that, when handling neuroimaging data, you need to take special care that the orientation of the images is correct.)
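As a rough illustration, the conversion and a basic orientation check might look like this using FreeSurfer, AFNI and FSL utilities (the file and directory names below are placeholders):

 # Convert a DICOM series to NIfTI with FreeSurfer's mri_convert;
 # pointing it at one slice of the series is enough, it picks up the rest.
 mri_convert dicom_dir/IM-0001-0001.dcm func.nii

 # Check and, if necessary, fix the image orientation.
 3dinfo -orient func.nii                 # AFNI: print the orientation code (e.g. RAI)
 fslreorient2std func.nii func_reorient  # FSL: reorient to the standard (MNI) convention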
Standard Preprocessing Steps & The Pipeline
The code for the pipeline can be obtained from [[1]]; after downloading it, rename the file so that it has a .sh extension instead of .txt.
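For example (the downloaded file name is hypothetical; check the script itself for its usage and arguments):

 mv pipeline.txt pipeline.sh   # restore the .sh extension
 chmod +x pipeline.sh          # make the script executable
 ./pipeline.sh                 # run it (see the script itself for required arguments)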
The input to the pre-processing pipeline must be provided in NIfTI (.nii) format (see the section above). The following PDF [[2]] describes six broad stages of fMRI preprocessing (a command-line sketch of stages 1-4 is given after the list below):
Signal preprocessing
1. Preprocessing of anatomical images
2. Preprocessing of functional images
3. Anatomical standardization of functional images
4. Removal of noise signal
Network construction
5. Construction of nodes: Parcellation
6. Construction of links
NB: The pipeline ends once the full, weighted adjacency matrix is defined. Network analyses need to be carried out separately.
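As a rough illustration only, stages 1-4 might look as follows, using FSL and AFNI commands of the kind the pipeline relies on (the packages are listed in the next section). All file names, the template and the parameter values are placeholders; consult the pipeline script itself for the exact commands and options.

 # 1. Anatomical preprocessing: skull-strip the T1-weighted image (FSL BET;
 #    FSL writes compressed NIfTI, e.g. T1_brain.nii.gz, by default).
 bet T1.nii T1_brain -f 0.5

 # 2. Functional preprocessing: correct for head motion (AFNI 3dvolreg);
 #    the six realignment parameters are written to motion.1D.
 3dvolreg -prefix r_func.nii -1Dfile motion.1D func.nii

 # 3. Anatomical standardization: register the mean functional image to a
 #    standard-space template and apply the transform to the whole series
 #    (in practice this is normally done via the anatomical image).
 fslmaths r_func.nii -Tmean mean_func
 flirt -in mean_func -ref "$FSLDIR/data/standard/MNI152_T1_2mm_brain.nii.gz" -omat func2std.mat
 applywarp --in=r_func.nii --ref="$FSLDIR/data/standard/MNI152_T1_2mm_brain.nii.gz" --out=w_func --premat=func2std.mat

 # 4. Removal of noise signal: regress out the motion parameters and
 #    band-pass filter in one step (AFNI 3dTproject; the pass band is only an example).
 3dTproject -input w_func.nii.gz -ort motion.1D -passband 0.01 0.1 -prefix clean_func.nii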
The pipeline uses the following software packages:
- AFNI (Analysis of Functional NeuroImages - made by the NIH)
- FSL (FMRIB Software Library - made by the FMRIB in Oxford)
- WMTSA (Wavelet Methods for Time-Series Analysis - a MATLAB or R package for computing frequency-band-specific “wavelet” correlations). A very basic tutorial on wavelets can be found here: [[3]]. For details on the wavelet toolbox in MATLAB, read: [[4]]. A sketch of the node/link construction step (stages 5 and 6) is given after this list.
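As a rough illustration of stages 5 and 6, the commands below extract one mean time series per node of a parcellation image and compute a full correlation matrix. Note that 3dNetCorr computes plain Pearson correlations and is only a stand-in here; in the pipeline, the frequency-band-specific wavelet correlations are computed separately with WMTSA in MATLAB or R. All file names are placeholders.

 # 5. Nodes: extract one mean time series per parcel of the parcellation
 #    image (FSL fslmeants writes one column per label).
 fslmeants -i clean_func.nii --label=parcellation.nii -o node_timeseries.txt

 # 6. Links: full correlation matrix between all node time series
 #    (AFNI 3dNetCorr, Pearson correlation; the wavelet correlations
 #    used by the pipeline are computed separately with WMTSA).
 3dNetCorr -inset clean_func.nii -in_rois parcellation.nii -prefix netcorr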
Note that the pipeline saves the data after each intermediate step in newly created files with relevant prefixes.
The Motion Problem
In short, the 'motion problem' refers to the recent (2012) discovery that even tiny head movements can lead to severe artifacts in connectivity analyses of resting-state fMRI data. We have recently set up a 'task force' to try to better understand and deal with this problem. The task force holds regular meetings to keep efforts integrated, and the key ideas emerging from these meetings will be logged on a parallel (secure) wiki, which also contains benchmark data and useful code generated along the way: