Datasets

All the dataset fetchers are available in mne.datasets. To download a dataset, use its data_path function (fetches the full dataset) or its load_data function (fetches only part of the dataset).

Sample

mne.datasets.sample.data_path()

The sample dataset was recorded with a 306-channel Neuromag Vectorview system.

In this experiment, checkerboard patterns were presented to the subject in the left and right visual fields, interspersed with tones delivered to the left or right ear. The interval between stimuli was 750 ms. Occasionally a smiley face was presented at the center of the visual field. The subject was asked to press a key with the right index finger as soon as possible after the appearance of the face.
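The condition structure above maps naturally onto the event dictionaries used when epoching in MNE. A minimal sketch; the integer trigger codes below are assumptions for illustration and should be checked against the actual recording:

```python
# Hypothetical event_id mapping for the sample paradigm; the integer
# trigger codes are assumed values, not read from the data.
event_id = {
    'auditory/left': 1,
    'auditory/right': 2,
    'visual/left': 3,
    'visual/right': 4,
    'smiley': 5,
    'button': 32,
}

# The '/' in the keys lets conditions be pooled, e.g. all auditory trials
auditory = [name for name in event_id if name.startswith('auditory')]
```

Keys with a shared prefix can later be selected together (e.g. all "auditory" epochs), which is why the slash-separated naming is convenient.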

Once the data path is known, its contents can be examined using the IO functions.
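For instance, the main raw file of the sample dataset can be located and read as a sketch like the one below. The data_path call is left commented out here because it triggers a large download on first use, and the relative file location assumes the standard layout of the sample dataset:

```python
import os.path as op
# from mne.datasets import sample
# from mne.io import Raw

# data_path = sample.data_path()        # downloads the dataset on first call
data_path = "/path/to/MNE-sample-data"  # placeholder for the returned path
# Assumed standard layout of the sample dataset:
raw_fname = op.join(data_path, "MEG", "sample", "sample_audvis_raw.fif")
# raw = Raw(raw_fname)                  # read the raw file with the IO module
```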

Brainstorm

Dataset fetchers for three Brainstorm tutorials are available. Users must agree to the license terms of these datasets before downloading them. The files were recorded with a CTF 275 system and converted to the FIF format before being made available to MNE users. Note, however, that MNE-Python now supports reading the CTF format directly, in addition to the C converter utilities. Please consult the IO section for details.

Auditory

mne.datasets.brainstorm.bst_raw.data_path()

Details about the data can be found at the Brainstorm auditory dataset tutorial.


MEGSIM

mne.datasets.megsim.load_data()

This dataset contains experimental and simulated MEG data. To load data from this dataset, do:

from mne.io import Raw
from mne.datasets.megsim import load_data

# load_data returns a list of file names matching the requested condition
raw_fnames = load_data(condition='visual', data_format='raw',
                       data_type='experimental', verbose=True)
raw = Raw(raw_fnames[0])

A detailed description of the dataset can be found in the related publication [1].

SPM faces

mne.datasets.spm_face.data_path()

The SPM faces dataset contains EEG, MEG and fMRI recordings from a face perception experiment.


EEGBCI motor imagery

mne.datasets.eegbci.load_data()

The EEGBCI dataset is documented in [2] and available at PhysioNet [3]. It contains 64-channel EEG recordings in EDF+ format from 109 subjects, with 14 runs per subject. The recordings were made using the BCI2000 system. To load a subject, do:

from mne.io import concatenate_raws, read_raw_edf
from mne.datasets import eegbci

subject = 1          # subjects are numbered 1..109
runs = [6, 10, 14]   # e.g., the motor imagery runs (hands vs. feet)
raw_fnames = eegbci.load_data(subject, runs)
raws = [read_raw_edf(f, preload=True) for f in raw_fnames]
raw = concatenate_raws(raws)

Do not hesitate to contact the MNE-Python developers on the MNE mailing list to discuss adding more publicly available datasets.

Somatosensory

mne.datasets.somato.data_path()

This dataset contains somatosensory data with event-related synchronizations (ERS) and desynchronizations (ERD).
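ERD and ERS are conventionally quantified as the percentage change of band power relative to a baseline window. A minimal stdlib-only sketch of that computation on made-up numbers; in a real analysis the band-power estimates would come from time-frequency tools, not from this toy code:

```python
def erd_percent(power, baseline_power):
    """Percent change of band power relative to baseline.

    Negative values indicate desynchronization (ERD),
    positive values synchronization (ERS).
    """
    return 100.0 * (power - baseline_power) / baseline_power

# Toy numbers: band power dropping after a somatosensory stimulus
baseline = 4.0   # mean band power in the baseline window (arbitrary units)
post_stim = 3.0  # mean band power after the stimulus

erd = erd_percent(post_stim, baseline)  # -25.0, i.e. a 25% ERD
```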

Multimodal

mne.datasets.multimodal.data_path()

This dataset contains a single subject recorded at Otaniemi (Aalto University) with auditory, visual, and somatosensory stimuli.

Visual 92 object categories

mne.datasets.visual_92_categories.data_path()

This dataset was recorded with a 306-channel Neuromag Vectorview system.

The experiment consisted of the visual presentation of 92 images of human, animal and inanimate objects, either natural or artificial [4]. Given the large number of conditions, this dataset is well suited to an approach based on Representational Similarity Analysis (RSA).
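The core object in RSA is a representational dissimilarity matrix (RDM): one dissimilarity value, often 1 minus the Pearson correlation, for each pair of conditions. A stdlib-only sketch with made-up response patterns for three hypothetical conditions:

```python
from math import sqrt

def pearson(x, y):
    """Pearson correlation of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Made-up sensor patterns for three of the 92 conditions (illustration only)
patterns = {
    'face':  [1.0, 0.9, 0.1, 0.2],
    'house': [0.2, 0.1, 1.0, 0.8],
    'tool':  [0.9, 1.0, 0.2, 0.1],
}

# RDM: dissimilarity = 1 - correlation, for every pair of conditions
names = sorted(patterns)
rdm = {(a, b): 1.0 - pearson(patterns[a], patterns[b])
       for a in names for b in names}
# 'face' and 'tool' have similar patterns, so their dissimilarity is small
```

With 92 conditions the RDM has 92 x 91 / 2 unique off-diagonal entries, which is what makes the large condition count of this dataset valuable for the method.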


References

[1] Aine CJ, Sanfratello L, Ranken D, Best E, MacArthur JA, Wallace T, Gilliam K, Donahue CH, Montano R, Bryant JE, Scott A, Stephen JM (2012) MEG-SIM: A Web Portal for Testing MEG Analysis Methods using Realistic Simulated and Empirical Data. Neuroinform 10:141-158
[2] Schalk G, McFarland DJ, Hinterberger T, Birbaumer N, Wolpaw JR (2004) BCI2000: A General-Purpose Brain-Computer Interface (BCI) System. IEEE TBME 51(6):1034-1043
[3] Goldberger AL, Amaral LAN, Glass L, Hausdorff JM, Ivanov PCh, Mark RG, Mietus JE, Moody GB, Peng C-K, Stanley HE (2000) PhysioBank, PhysioToolkit, and PhysioNet: Components of a New Research Resource for Complex Physiologic Signals. Circulation 101(23):e215-e220
[4] Cichy RM, Pantazis D, Oliva A (2014) Resolving human object recognition in space and time. Nature Neuroscience 17(3):455-462