mne.compute_covariance

mne.compute_covariance(epochs, keep_sample_mean=True, tmin=None, tmax=None, projs=None, method='empirical', method_params=None, cv=3, scalings=None, n_jobs=1, return_estimators=False, on_mismatch='raise', verbose=None)

Estimate noise covariance matrix from epochs.

The noise covariance is typically estimated over pre-stimulus periods, where the stimulus onset is defined from events.

If the covariance is computed for multiple event types (events with different IDs), the following two options can be used and combined:

  1. create an Epochs object for each event type and pass a list of Epochs objects to this function, or
  2. create a single Epochs object for multiple event types and pass it to this function.

Note

Baseline correction should be used when creating the Epochs. Otherwise the computed covariance matrix will be inaccurate.

Note

For multiple event types, it is also possible to create a single Epochs object with events obtained using merge_events(). However, the resulting covariance matrix will only be correct if keep_sample_mean is True.

Note

The covariance can be unstable if the number of samples is insufficient. In that case it is common to regularize the covariance estimate. The method parameter of this function makes it possible to regularize the covariance in an automated way, and also to select between alternative estimation algorithms that themselves achieve regularization. Details are described in [R21].
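The instability mentioned above can be seen directly: with fewer samples than channels, the empirical covariance is singular. The sketch below illustrates the general idea of regularization by diagonal loading (shrinking toward a scaled identity) with plain numpy; it is only a conceptual illustration, not MNE's implementation, and the channel/sample counts and the shrinkage weight alpha are arbitrary choices for the demo.

```python
import numpy as np

rng = np.random.default_rng(0)

# Rank-deficient case: fewer samples (20) than channels (50), so the
# empirical covariance is singular and cannot be inverted reliably.
n_channels, n_samples = 50, 20
data = rng.standard_normal((n_channels, n_samples))
emp_cov = data @ data.T / n_samples

# Diagonal loading: shrink toward a scaled identity matrix. MNE's own
# methods ('diagonal_fixed', 'shrunk', ...) are refinements of this idea.
alpha = 0.1
mu = np.trace(emp_cov) / n_channels  # average channel variance
reg_cov = (1 - alpha) * emp_cov + alpha * mu * np.eye(n_channels)

print(np.linalg.matrix_rank(emp_cov))  # rank-deficient: 20 < 50
print(np.linalg.eigvalsh(reg_cov).min() > 0)  # regularized: positive definite
```

Any alpha > 0 makes the smallest eigenvalue at least alpha * mu > 0, which is why even a small amount of loading restores invertibility.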

Parameters:

epochs : instance of Epochs, or a list of Epochs objects

The epochs.

keep_sample_mean : bool (default True)

If False, the average response over epochs is computed for each event type and subtracted during the covariance computation. This is useful if the evoked response from a previous stimulus extends into the baseline period of the next. Note that this option is only implemented for method='empirical'.
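The effect of this option can be sketched with numpy: with keep_sample_mean=False, each event type's average response is removed before pooling, so evoked activity no longer inflates the noise covariance. This is a conceptual illustration under simulated data (the event types, channel counts, and evoked shapes are made up for the demo), not MNE's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(3)
n_epochs, n_channels, n_times = 30, 4, 100

# Two event types with different evoked responses riding on unit noise.
evoked_a = np.outer([1, 2, 0, -1], np.hanning(n_times))
evoked_b = np.outer([0, 1, -2, 1], np.hanning(n_times))
epochs_a = evoked_a + rng.standard_normal((n_epochs, n_channels, n_times))
epochs_b = evoked_b + rng.standard_normal((n_epochs, n_channels, n_times))

def noise_cov(epochs_list, keep_sample_mean):
    """Pool epochs of several event types; if keep_sample_mean is False,
    subtract each type's average (evoked) response first."""
    segments = []
    for ep in epochs_list:
        if not keep_sample_mean:
            ep = ep - ep.mean(axis=0)  # remove the evoked response
        segments.append(ep.transpose(1, 0, 2).reshape(n_channels, -1))
    pooled = np.concatenate(segments, axis=1)
    return pooled @ pooled.T / pooled.shape[1]

cov_with_mean = noise_cov([epochs_a, epochs_b], keep_sample_mean=True)
cov_no_mean = noise_cov([epochs_a, epochs_b], keep_sample_mean=False)
```

Subtracting the per-type mean always reduces the pooled power, so the trace of cov_no_mean is smaller: the difference is exactly the evoked contribution that would otherwise contaminate the noise estimate.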

tmin : float | None (default None)

Start time for baseline. If None, start at the first sample.

tmax : float | None (default None)

End time for baseline. If None, end at the last sample.

projs : list of Projection | None (default None)

List of projectors to use in covariance calculation, or None to indicate that the projectors from the epochs should be inherited. If None, then projectors from all epochs must match.

method : str | list | None (default 'empirical')

The method used for covariance estimation. If 'empirical' (default), the sample covariance is computed. A list of methods can be passed to run several estimators. If 'auto' or a list of methods, the best estimator is determined from the log-likelihood of unseen data under cross-validation, as described in [R21]. Valid methods are:

  • 'empirical': the empirical or sample covariance
  • 'diagonal_fixed': a diagonal regularization as in mne.cov.regularize (see MNE manual)
  • 'ledoit_wolf': the Ledoit-Wolf estimator [R22]
  • 'shrunk': like 'ledoit_wolf' with cross-validation for optimal alpha (see scikit-learn documentation on covariance estimation)
  • 'pca': probabilistic PCA with low rank [R23]
  • 'factor_analysis': Factor Analysis with low rank [R24]

If 'auto', this expands to:

['shrunk', 'diagonal_fixed', 'empirical', 'factor_analysis']

Note

'ledoit_wolf' and 'pca' are similar to 'shrunk' and 'factor_analysis', respectively. They are not included to avoid redundancy. In most cases 'shrunk' and 'factor_analysis' represent more appropriate default choices.

The 'auto' mode is not recommended if there are many segments of data, since computation can take a long time.
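The cross-validated selection behind 'shrunk' can be sketched in plain numpy: for each candidate shrinkage on the grid (the same np.logspace(-4, 0, 30) grid as the default method_params), score the regularized covariance by the Gaussian log-likelihood of held-out data and keep the best. This is only a simplified illustration on simulated data, not MNE's or scikit-learn's implementation.

```python
import numpy as np

rng = np.random.default_rng(1)
n_channels = 30
true_cov = np.diag(np.linspace(1.0, 5.0, n_channels))

# Small training set (prone to overfitting) and a held-out test set.
train = rng.multivariate_normal(np.zeros(n_channels), true_cov, size=40)
held_out = rng.multivariate_normal(np.zeros(n_channels), true_cov, size=200)

def gaussian_loglik(x, cov):
    """Mean log-likelihood of zero-mean Gaussian samples under cov."""
    _, logdet = np.linalg.slogdet(cov)
    quad = np.einsum('ij,jk,ik->i', x, np.linalg.inv(cov), x)
    return -0.5 * np.mean(quad) - 0.5 * (logdet + len(cov) * np.log(2 * np.pi))

emp = train.T @ train / len(train)
mu = np.trace(emp) / n_channels

# Pick the shrinkage with the highest held-out log-likelihood.
best_alpha, best_ll = max(
    ((a, gaussian_loglik(held_out, (1 - a) * emp + a * mu * np.eye(n_channels)))
     for a in np.logspace(-4, 0, 30)),
    key=lambda t: t[1])
```

With only 40 training samples for 30 channels, a clearly nonzero shrinkage typically wins, which is the point of scoring on unseen data rather than on the training set (where the empirical covariance would always look best).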

New in version 0.9.0.

method_params : dict | None (default None)

Additional parameters to the estimation procedure. Only considered if method is not None. Keys must correspond to the value(s) of method. If None (default), expands to:

'empirical': {'store_precision': False, 'assume_centered': True},
'diagonal_fixed': {'grad': 0.01, 'mag': 0.01, 'eeg': 0.0,
                   'store_precision': False,
                   'assume_centered': True},
'ledoit_wolf': {'store_precision': False, 'assume_centered': True},
'shrunk': {'shrinkage': np.logspace(-4, 0, 30),
           'store_precision': False, 'assume_centered': True},
'pca': {'iter_n_components': None},
'factor_analysis': {'iter_n_components': None}

cv : int | sklearn cross_validation object (default 3)

The cross validation method. Defaults to 3, which will internally trigger a default 3-fold shuffle split.

scalings : dict | None (default None)

Defaults to dict(mag=1e15, grad=1e13, eeg=1e6). These defaults scale magnetometers and gradiometers to the same unit.
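The motivation for these scalings can be illustrated with numpy: magnetometer signals (teslas, ~1e-13) and EEG signals (volts, ~1e-5) differ by many orders of magnitude, so a joint covariance is extremely ill-conditioned until the channel types are rescaled. The signal amplitudes below are rough, assumed typical values for the sake of the demo.

```python
import numpy as np

rng = np.random.default_rng(2)
n_times = 1000

# Simulated signals in SI units: magnetometers ~1e-13 T, EEG ~1e-5 V.
mag = 1e-13 * rng.standard_normal((2, n_times))
eeg = 1e-5 * rng.standard_normal((2, n_times))

# Joint covariance in raw units: condition number is astronomical.
cond_raw = np.linalg.cond(np.cov(np.vstack([mag, eeg])))

# Apply the default scalings (mag=1e15, eeg=1e6): channel variances now
# have comparable magnitude and the covariance is far better conditioned.
scaled = np.vstack([1e15 * mag, 1e6 * eeg])
cond_scaled = np.linalg.cond(np.cov(scaled))

print(cond_scaled < cond_raw)  # True
```

This is why leaving scalings at its default is usually the right choice unless the data are already in comparable units.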

n_jobs : int (default 1)

Number of jobs to run in parallel.

return_estimators : bool (default False)

Whether to return all estimators or only the best. Only considered if method equals 'auto' or is a list of str. Defaults to False.

on_mismatch : str

What to do when the MEG<->head transformations do not match between epochs. If 'raise' (default) an error is raised, if 'warn' a warning is emitted, and if 'ignore' nothing is printed. Mismatched transforms can in some cases lead to unexpected or unstable results in covariance calculation, e.g. when data have been processed with Maxwell filtering but not transformed to the same head position.

verbose : bool | str | int | None (default None)

If not None, override default verbose level (see mne.verbose() and Logging documentation for more).

Returns:

cov : instance of Covariance | list

The computed covariance. If method equals 'auto' or is a list of str and return_estimators is True, a list of covariance estimators is returned, sorted by log-likelihood from high to low, i.e. from best to worst.

See also

compute_raw_covariance
Estimate noise covariance from raw data

References

[R21] Engemann, D., Gramfort, A. (2015). Automated model selection in covariance estimation and spatial whitening of MEG and EEG signals. NeuroImage, 108, 328-342.
[R22] Ledoit, O., Wolf, M. (2004). A well-conditioned estimator for large-dimensional covariance matrices. Journal of Multivariate Analysis, 88(2), 365-411.
[R23] Tipping, M. E., Bishop, C. M. (1999). Probabilistic principal component analysis. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 61(3), 611-622.
[R24] Barber, D. (2012). Bayesian Reasoning and Machine Learning. Cambridge University Press, Algorithm 21.1.