Using Informational Connectivity to Measure the Synchronous Emergence of fMRI Multi-voxel Information Across Time

Published: July 01, 2014
doi: 10.3791/51226

Summary

Informational connectivity measures the correspondence between time courses of multi-voxel information across different brain regions. Multi-voxel pattern discriminability time series are extracted from regions and compared, revealing networks that are not identified in a typical functional connectivity approach.

Abstract

It is now appreciated that condition-relevant information can be present within distributed patterns of functional magnetic resonance imaging (fMRI) brain activity, even for conditions with similar levels of univariate activation. Multi-voxel pattern (MVP) analysis has been used to decode this information with great success. fMRI investigators also often seek to understand how brain regions interact in interconnected networks, and use functional connectivity (FC) to identify regions that have correlated responses over time. Just as univariate analyses can be insensitive to information in MVPs, FC may not fully characterize the brain networks that process conditions with characteristic MVP signatures. The method described here, informational connectivity (IC), can identify regions with correlated changes in MVP-discriminability across time, revealing connectivity that is not accessible to FC. The method can be exploratory, using searchlights to identify seed-connected areas, or planned, between pre-selected regions-of-interest. The results can elucidate networks of regions that process MVP-related conditions, can break down MVPA searchlight maps into separate networks, or can be compared across tasks and patient groups.

Introduction

The goal of the analysis method described here is to measure connectivity between brain regions based on fluctuations in their multi-voxel information. Advances in functional magnetic resonance imaging (fMRI) analysis techniques have revealed that a large amount of information can be contained within blood-oxygenation-level-dependent (BOLD) activity patterns that are distributed across multiple voxels1-3. A set of techniques that are sensitive to multivariate information – known as multi-voxel pattern analysis (MVPA) – has been used to show that conditions can have distinguishable MVPs despite having indistinguishable univariate responses1,2,4. Standard analyses, which compare univariate responses, can be insensitive to this multi-voxel information.

Multiple brain regions are engaged when humans process stimuli and perform cognitive operations. Functional connectivity (FC) is a method commonly employed to investigate such functional networks5,6. In its most basic form, FC quantifies co-activation, or synchrony, between different voxels or regions. FC has been used to identify functionally connected brain networks with much success. For many regions and conditions, however, univariate responses do not reflect all the available information within BOLD activity. FC techniques that track dynamically changing univariate response levels may lack sensitivity to common fluctuations in multi-voxel information. The analysis method presented here, informational connectivity (IC; first described in a recent paper7), bridges a gap between MVPA and FC by measuring connectivity with a metric that is sensitive to multi-voxel information across time. While FC tracks dynamically changing univariate activation, IC tracks dynamically changing MVP discriminability – a measure of how well an MVP's true condition can be distinguished from (incorrect) alternatives. Importantly, in the same way that different regions can show similar levels of univariate responses to a condition despite performing distinct computations (e.g., visual processing or action planning when a person views man-made objects), distinct regions can also have similar (and synchronized) levels of MVP discriminability while they process conditions differently. A recent investigation demonstrated that IC can reveal inter-regional connectivity that is not detectable with a standard FC approach7. Investigators can therefore use IC to probe interactions between brain regions as participants respond to conditions or stimuli that have characteristic distributed patterns. IC is distinct from several recent connectivity applications that examined fluctuations in univariate activation in relation to classification results8,9. Unlike these approaches, IC detects synchronous multi-voxel pattern discriminability between regions.

Protocol

1. Prepare the fMRI Data

Note: After conducting an fMRI scan, pre-process the collected data using the tools available in most fMRI software packages prior to starting this protocol (although spatial smoothing should be avoided or minimized to preserve multi-voxel patterns). An example of a suitable dataset is described in a previous application of the method7.

  1. Remove motion and mean white matter signals from the time series of pre-processed fMRI data by creating a regression model with predictors for motion parameters (roll, pitch, yaw, x, y, z) and mean white matter signal. Conduct the analyses below on the resulting residuals (i.e. the remaining variance).
  2. Import the generated residuals into an analysis package (e.g., MATLAB, Python). The open source Informational Connectivity Toolbox (http://www.informationalconnectivity.org) can import fMRI data into MATLAB.
  3. Z-score each voxel's time series.
  4. Separate the dataset's timepoints into independent sets ('folds'), such as different scanner runs. Note: Using scanner runs ensures independence between folds, which can otherwise be difficult to guarantee (for example, dependencies could be created between a run's timepoints during pre-processing). Runs could be grouped together to reduce the number of folds (e.g., even and odd runs2), although using single runs will give more training data.
  5. Create a record of the condition labels associated with timepoints by generating a vector of condition labels that is N timepoints long.
  6. Shift the condition labels forward in each run by the number of repetition times (TRs) closest to 5 sec, in order to account for the hemodynamic lag between events and the recorded fMRI signals (an illustrative code sketch of this preparation stage follows this list).
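
Note: The open-source toolbox performs the steps above in MATLAB. As a language-agnostic illustration, the following is a minimal Python/NumPy sketch of this preparation stage; the array names (bold_data, motion_params, wm_signal, labels, run_ids) and the exact nuisance-regression details are illustrative assumptions rather than part of the protocol.

    # Minimal sketch of Section 1 (nuisance regression, z-scoring, label shifting).
    # Assumed inputs: bold_data (n_timepoints x n_voxels), motion_params (n_timepoints x 6),
    # wm_signal (n_timepoints,), labels (n_timepoints,), run_ids (n_timepoints,).
    import numpy as np
    from scipy import stats

    def prepare_data(bold_data, motion_params, wm_signal, labels, run_ids, tr_sec, lag_sec=5.0):
        # 1.1 Regress motion parameters and mean white-matter signal out of every voxel;
        # the analyses below are run on the residuals.
        X = np.column_stack([motion_params, wm_signal, np.ones(len(wm_signal))])
        betas, *_ = np.linalg.lstsq(X, bold_data, rcond=None)
        residuals = bold_data - X @ betas
        # 1.3 Z-score each voxel's time series.
        residuals = stats.zscore(residuals, axis=0)
        # 1.6 Shift condition labels forward within each run by the number of TRs closest
        # to the ~5 sec hemodynamic lag (labels that wrap to a run's start would typically
        # be discarded in practice).
        shift = int(round(lag_sec / tr_sec))
        shifted_labels = np.empty_like(labels)
        for run in np.unique(run_ids):  # 1.4 scanner runs serve as independent folds
            idx = np.where(run_ids == run)[0]
            shifted_labels[idx] = np.roll(labels[idx], shift)
        return residuals, shifted_labels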

2. Select and Analyze a Seed Region

  1. Select a seed region by isolating an anatomical area, a functionally localized region, or top-performing 'information-based brain mapping' searchlights10.
    Note: Steps 2.2 to 4.2 below can be performed by the open-source Informational Connectivity MATLAB Toolbox (http://www.informationalconnectivity.org).
  2. Compare the MVP of each time-point to a prototypical MVP for every condition (Note: this is the same approach as used in the popular correlation-based nearest neighbor classifier2). Figure 1 (top) gives an example from real data collected as participants viewed blocks of four types of man-made objects.
    1. Calculate a prototypical (mean) MVP for every condition by averaging the timepoints of each condition in all-but-one fold. This is the 'training' data for each fold (e.g., for fold 2 of 5, mean-MVPs are calculated from timepoints in folds 1, 3, 4, and 5).
    2. Correlate every timepoint's MVP with the mean-MVP of each condition from the training data. This will give every timepoint one correlation value for each condition (Note: the condition with the highest correlation here would be the prediction of the popular correlation-based MVPA classifier2).
    3. Fisher-transform the r-values to z-scores.
  3. Quantify 'MVP discriminability' for each timepoint: First identify the correlation from 2.2.3 that represents the relationship between the timepoint's MVP and the mean-MVP of that timepoint's condition, and then subtract the highest of the remaining correlations (i.e. 'correlation with correct condition' minus 'maximum correlation with an incorrect condition'). The result is that timepoint's MVP discriminability. An alternative (and valid) approach would be to subtract the average correlation of the incorrect conditions.
    Note: The suggested approach has the intuitive advantage that timepoints with positive discriminability values are correctly classified by the correlation-based classifier, while timepoints with negative values are not correctly predicted. An example of the resulting values is shown in Figure 1 (bottom). The steps up to this point are captured in the formulae below.
    Equations for step 2.3:

    $$z_{n,k} = \operatorname{artanh}\!\big(r(\mathbf{x}_n,\ \bar{\mathbf{p}}_{k,n})\big), \qquad D_n = z_{n,c} - \max_{i} z_{n,i}$$

    where $\mathbf{x}_n$ is a normalized 1-by-m row vector with m voxel activation values at time-point n, $\bar{\mathbf{p}}_{c,n}$ (or $\bar{\mathbf{p}}_{i,n}$) is a normalized 1-by-m row vector of the mean training pattern for the correct (c) or an incorrect (i) condition for time-point n, $r$ is the Pearson correlation, $D_n$ is the MVP discriminability of time-point n, and the artanh function applies the Fisher z-transform.
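
The formulae above can also be expressed in code. The following is a minimal Python sketch of steps 2.2-2.3 (the toolbox implements equivalent logic in MATLAB); the function name and the array layout are illustrative assumptions.

    # Sketch of steps 2.2-2.3: one MVP-discriminability value per timepoint for one region.
    # Assumed inputs: region_data (n_timepoints x n_voxels residuals restricted to the region),
    # labels and run_ids (n_timepoints,), with runs acting as the cross-validation folds.
    import numpy as np

    def mvp_discriminability(region_data, labels, run_ids):
        conditions = np.unique(labels)
        disc = np.zeros(len(labels))
        for run in np.unique(run_ids):
            test_idx = run_ids == run
            train_idx = ~test_idx
            # 2.2.1 Prototypical (mean) MVP per condition from the all-but-one-fold training data.
            prototypes = {c: region_data[train_idx & (labels == c)].mean(axis=0)
                          for c in conditions}
            for t in np.where(test_idx)[0]:
                # 2.2.2 Correlate the timepoint's MVP with each condition's mean-MVP,
                # 2.2.3 then Fisher-transform the r values to z-scores.
                z = {c: np.arctanh(np.corrcoef(region_data[t], prototypes[c])[0, 1])
                     for c in conditions}
                correct = z[labels[t]]
                incorrect = [z[c] for c in conditions if c != labels[t]]
                # 2.3 Discriminability: correct-condition correlation minus the highest
                # incorrect-condition correlation.
                disc[t] = correct - max(incorrect)
        return disc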

3. Calculate a Time Series of MVP Discriminability for Each Searchlight

  1. Conduct a searchlight analysis10: Place a three-dimensional cluster around each voxel in turn (a 'searchlight').
  2. Repeat steps 2.2 and 2.3 for each searchlight, so that every searchlight has a time series of MVP discriminability values (one per timepoint).
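
A minimal sketch of one way to define searchlight neighborhoods is given below, assuming spherical searchlights with a radius given in voxel units; the searchlight shape and size are the investigator's choice, and this helper is illustrative rather than part of the toolbox's interface.

    # Sketch of step 3.1: build a searchlight (spherical neighborhood) around every voxel.
    # Assumed input: voxel_coords (n_voxels x 3), the i/j/k coordinates of in-brain voxels.
    import numpy as np

    def searchlight_indices(voxel_coords, radius=2):
        neighborhoods = []
        for center in voxel_coords:
            dist = np.linalg.norm(voxel_coords - center, axis=1)
            neighborhoods.append(np.where(dist <= radius)[0])  # voxels inside this searchlight
        return neighborhoods

    # Step 3.2: reuse mvp_discriminability (sketched above) on residuals[:, idx] for each
    # searchlight's voxel indices idx to obtain that searchlight's discriminability time series.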

4. Calculate Informational Connectivity Between the Seed and Searchlights

  1. Correlate the seed's MVP discriminability time series (from 2.3) with each searchlight's discriminability time series (from 3.2) using Spearman's Rank correlation. The resulting rs value is the IC between the seed and searchlight.
  2. Assign each searchlight's IC value to the searchlight's central voxel and write out the resulting individual's brain map.
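
Continuing the illustrative Python sketches, step 4.1 reduces to a Spearman rank correlation between discriminability time series; writing the values back into a brain volume (step 4.2) would use the investigator's preferred neuroimaging I/O library (e.g., nibabel) and is not shown.

    # Sketch of step 4.1: IC = Spearman correlation between the seed's and each searchlight's
    # MVP-discriminability time series. Assumed inputs: seed_disc (n_timepoints,) and
    # searchlight_disc (n_searchlights x n_timepoints).
    import numpy as np
    from scipy.stats import spearmanr

    def informational_connectivity(seed_disc, searchlight_disc):
        return np.array([spearmanr(seed_disc, sl)[0] for sl in searchlight_disc])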

5. Calculate Group Statistical Map

  1. If the data are not already in standardized space (e.g., Talairach or MNI), transform participants' IC maps into the same space.
  2. Optionally smooth the individuals' searchlight maps.
  3. Create a group statistical map using a one-sample t-test of whether each searchlight's IC value is significantly greater than zero.
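
A minimal sketch of the group test in step 5.3 follows, assuming the participants' IC maps have been stacked into a subjects-by-searchlights array (the layout and the name group_ic_maps are assumptions):

    # Sketch of step 5.3: one-sample t-test at each searchlight for IC > 0.
    import numpy as np
    from scipy.stats import ttest_1samp

    def group_ic_statistics(group_ic_maps):
        t_vals, p_two_tailed = ttest_1samp(group_ic_maps, popmean=0, axis=0)
        # Convert to one-tailed p-values, since only IC values greater than zero are of interest.
        p_one_tailed = np.where(t_vals > 0, p_two_tailed / 2, 1 - p_two_tailed / 2)
        return t_vals, p_one_tailed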

6. Test Significance

Note: Numerous approaches exist for determining statistical significance of fMRI group maps. As a permutation test can determine significance with minimal assumptions, while accounting for the dataset's level of smoothing (as each permuted group map undergoes the same processing), this option is outlined below.

  1. For each of 1,000 permutations, randomly shuffle the seed's MVP-discriminability values across the time series. Keep adjacent timepoints with temporal autocorrelations (such as timepoints within the same block) together (for example, by shuffling blocks rather than adjacent TRs).
  2. Calculate individuals' IC maps for each permutation (step 4 above).
  3. Generate 1,000 permuted group maps: Randomly select one permuted IC map from each participant and conduct a group test on this random set (step 5 above).
  4. Threshold every permuted group map at a desired threshold (e.g., p < 0.001) and extract the maximum cluster size from the map.
  5. Sort the resulting 1,000 maximum-cluster volumes and identify the cluster size at the 95th percentile (e.g., the 50th largest for 1,000 permutations).
  6. Apply the threshold used in 6.4 (e.g., p < 0.001) and the minimum cluster size from 6.5 to the real (non-permuted) IC group map in order to make it cluster-corrected to p < 0.05. Because every permuted map draws on the same MVP discriminability values (in a different order), this significance map highlights regions with more synchronous discriminability values than expected by chance.
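
The sketch below illustrates the block-wise shuffle of step 6.1 and the cluster-size cutoff of step 6.5; the intermediate steps (6.2-6.4) reuse the earlier sketches together with a cluster-extraction routine, which depends on the volume's geometry and is therefore not shown. The function names and the block_ids vector are illustrative assumptions.

    # Step 6.1: shuffle the seed's MVP-discriminability values while keeping whole blocks
    # together, preserving within-block temporal autocorrelation.
    import numpy as np

    def shuffle_seed_by_block(seed_disc, block_ids, rng):
        order = rng.permutation(np.unique(block_ids))
        return np.concatenate([seed_disc[block_ids == b] for b in order])

    # Step 6.5: given the largest suprathreshold cluster from each of the 1,000 permuted
    # group maps (step 6.4), the cluster-size threshold for corrected p < 0.05 is the
    # 95th percentile of that null distribution.
    def cluster_size_cutoff(max_cluster_sizes, percentile=95):
        return np.percentile(max_cluster_sizes, percentile)

    # Example usage: rng = np.random.default_rng(0)
    #                permuted_seed = shuffle_seed_by_block(seed_disc, block_ids, rng)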

Representative Results

The IC results can now be displayed using the investigator’s preferred fMRI analysis software package. Figure 2 shows IC results, calculated from blocks of visually presented man-made objects (full details in the associated publication7).

The IC analysis is particularly valuable for conditions known to have associated MVPs: conditions with characteristic MVPs, but without differences in univariate responses, are more likely to show distinct IC and FC results (illustrated in Figure 3 with data recorded as participants viewed different types of man-made objects). Figure 4 shows that searchlights with significant multi-voxel information can have high IC, but are less well represented in FC results.

Figure 1
Figure 1. Examples of pattern discriminability over time. Top: The substrates of MVP discriminability calculated from a subject in Haxby et al. (2001)2, as analyzed in Coutanche & Thompson-Schill (2013)7. The blue line shows the z-scored correlation between time-points’ MVPs and the mean (‘training’) pattern of the correct class. The green lines represent the MVPs’ correlations with three incorrect classes. Bottom: Pattern discriminability is the difference between correlations for the correct class and highest incorrect class. Time-points with positive pattern discriminability values would be correctly classified by a correlation-based classifier. Figure originally published in Coutanche & Thompson-Schill (2013)7.

Figure 2
Figure 2. Example connectivity maps. Each row shows regions significantly connected to a seed (shown in blue). Significance is determined by a group t-test (p < 0.001) with minimum cluster size from permutation testing. The IC results are displayed using AFNI11 on surface maps produced with FreeSurfer12. Figure is modified from Coutanche & Thompson-Schill (2013)7.

Figure 3
Figure 3. Synchronized MVP discriminability compared to mean activation. Examples of MVP discriminability in two regions with synchronous MVP discriminability (i.e. informational connectivity) without synchronous mean activation (i.e. functional connectivity); data come from a subject in Haxby et al. (2001)2, as analyzed in Coutanche & Thompson-Schill (2013)7. These data points were collected while the subject viewed visual presentations of man-made objects, which are distinguishable by multi-voxel patterns, but not by mean responses.

Figure 4
Figure 4. Example IC and FC values between a seed in the left fusiform gyrus and searchlights across the brain. Informational and functional connectivity strengths (z-axes) are shown between a seed and searchlights, with respect to each searchlight’s mean response (x-axis) and MVPA classification accuracy (y-axis) for four types of man-made objects (chance = 25%). Searchlights sharing voxels with the seed region were removed. The IC graph includes examples of searchlights with strong connectivity that have high classification performance but low mean response levels, which are not picked up in a typical FC approach (seen by the gap in the top-left octant of the right graph). Figure originally published in Coutanche & Thompson-Schill (2013)7.

Discussion

Informational connectivity has MVPA's sensitivity to distributed pattern information, while adding the ability to study between-region interactions through a connectivity approach. MVPA and standard univariate analyses can each reveal the involvement of distinct regions, sometimes with little overlap between their results13. As expected for a method that draws on these analysis approaches, IC and FC also give complementary results7. The decision of whether to employ IC will ultimately depend on the conditions under investigation and the theoretical questions being posed. Design considerations that impact whether MVPA is conducted on a dataset will also affect whether IC is used. Studies designed with IC explicitly in mind will want to follow recommendations for MVPA14, while also ensuring that trial-level data can be extracted from across the scan's time course.

When examining and reporting IC results, it is important to remove searchlights that overlap with the seed, to avoid circularity. Additionally, if directly comparing IC and FC results, it is recommended to also run an FC analysis based on the mean activation of searchlights, rather than of individual voxels. This additional analysis ensures that any differences between results do not arise from differences in the levels of signal-to-noise in searchlights versus voxels.

The procedure described here focuses primarily on an exploratory analysis employing searchlights. It is worth noting that by replacing searchlights with regions-of-interest, IC can also compare regions that are selected a priori. The current discriminability metric – comparing an MVP's correlation with the 'true' condition to its correlation with the maximum alternate condition – is also modifiable. Many machine learning classifiers produce prediction weights for different classes, which could easily replace the correlation comparisons performed here (e.g., to track the 'confidence' of a classifier over time; one such variant is sketched below). IC has a variety of potential uses. As well as being a primary analysis to investigate informational networks, IC can be a secondary follow-up analysis to an MVPA searchlight. MVPA searchlight maps are valuable for understanding which regions can distinguish different conditions, but are not typically broken down into different networks. The IC approach can help here, by revealing which sets of searchlights have synchronous discriminability. Finally, IC maps from different tasks can be compared in order to understand task networks, and patients can be compared to controls to better understand how multi-voxel differences15 are manifested at the network level.
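
As an illustration of such a modification (not part of the original protocol), the sketch below swaps the correlation comparison for a logistic-regression classifier's class probabilities using scikit-learn, tracking the classifier's confidence in the true condition at each timepoint; the function and its inputs follow the earlier illustrative sketches.

    # Variant discriminability metric: classifier confidence per timepoint
    # (probability of the correct condition minus the highest incorrect-condition probability).
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    def classifier_confidence_series(region_data, labels, run_ids):
        conf = np.zeros(len(labels))
        for run in np.unique(run_ids):
            test = run_ids == run
            clf = LogisticRegression(max_iter=1000).fit(region_data[~test], labels[~test])
            probs = clf.predict_proba(region_data[test])  # one probability per condition
            classes = list(clf.classes_)
            for row, t in zip(probs, np.where(test)[0]):
                c_idx = classes.index(labels[t])
                conf[t] = row[c_idx] - np.delete(row, c_idx).max()
        return conf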

Disclosures

The authors have nothing to disclose.

Acknowledgements

We thank Jim Haxby and colleagues for making their data available for further analyses. Marc N. Coutanche was funded by a fellowship from the Howard Hughes Medical Institute. This work was supported by NIH grants R01-DC009209 and R01-EY02171701 awarded to Sharon L. Thompson-Schill.

Materials

MATLAB software package (MathWorks)
AFNI software package (NIMH)

References

  1. Norman, K. A., et al. Beyond mind-reading: multi-voxel pattern analysis of fMRI data. Trends Cogn Sci. 10 (9), 424-430 (2006).
  2. Haxby, J. V., et al. Distributed and overlapping representations of faces and objects in ventral temporal cortex. Science. 293 (5539), 2425-2430 (2001).
  3. Tong, F., Pratte, M. S. Decoding patterns of human brain activity. Annu Rev Psychol. 63, 483-509 (2012).
  4. Coutanche, M. N. Distinguishing multi-voxel patterns and mean activation: Why, how, and what does it tell us. Cogn Affect Behav Neurosci. 13 (3), (2013).
  5. Biswal, B., et al. Functional connectivity in the motor cortex of resting human brain using echo-planar mri. Magn Res Med. 34 (4), 537-541 (1995).
  6. Friston, K. J., et al. Psychophysiological and modulatory interactions in neuroimaging. Neuroimage. 6 (3), 218-229 (1997).
  7. Coutanche, M. N., Thompson-Schill, S. L. Informational Connectivity: Identifying synchronized discriminability of multi-voxel patterns across the brain. Front Hum Neurosci. 7 (15), 1-14 (2013).
  8. Chiu, Y. C., et al. Tracking cognitive fluctuations with multivoxel pattern time course (MVPTC) analysis. Neuropsychologia. 50 (4), 479-486 (2012).
  9. Nelissen, N., et al. Frontal and parietal cortical interactions with distributed visual representations during selective attention and action selection. J Neurosci. 33 (42), 16443-16458 (2013).
  10. Kriegeskorte, N., et al. Information-based functional brain mapping. Proc Natl Acad Sci U S A. 103 (10), 3863-3868 (2006).
  11. Cox, R. W. AFNI: software for analysis and visualization of functional magnetic resonance neuroimages. Comput Biomed Res. 29 (3), 162-173 (1996).
  12. Fischl, B., et al. Cortical surface-based analysis. II: Inflation, flattening, and a surface-based coordinate system. Neuroimage. 9 (2), 195-207 (1999).
  13. Jimura, K., Poldrack, R. A. Analyses of regional-average activation and multivoxel pattern information tell complementary stories. Neuropsychologia. 50 (4), 544-552 (2012).
  14. Coutanche, M. N., Thompson-Schill, S. L. The advantage of brief fMRI acquisition runs for multi-voxel pattern detection across runs. Neuroimage. 61 (4), 1113-1119 (2012).
  15. Coutanche, M. N., et al. Multi-voxel pattern analysis of fMRI data predicts clinical symptom severity. Neuroimage. 57 (1), 113-123 (2011).


Cite This Article
Coutanche, M. N., Thompson-Schill, S. L. Using Informational Connectivity to Measure the Synchronous Emergence of fMRI Multi-voxel Information Across Time. J. Vis. Exp. (89), e51226, doi:10.3791/51226 (2014).
