Login is moving

Authentication for nemar.org is migrating from the legacy system to a new Cloudflare-backed identity provider. Until that migration ships, sign in via the CLI:

npm install -g nemar-cli
nemar login

nm000120 · NEMAR-native dataset

Oikonomou2016 – SSVEP MAMEM 2 dataset

A 256-channel EEG dataset from 11 healthy subjects performing a steady-state visually evoked potential (SSVEP) brain-computer interface task. Subjects focused attention on flickering visual stimuli at five different frequencies (6.66, 7.50, 8.57, 10.00, 12.00 Hz) to select commands. The dataset includes raw EEG signals sampled at 250 Hz with comprehensive annotations and was used to systematically evaluate signal processing algorithms for SSVEP-based BCIs, achieving 74.42% classification accuracy with optimized parameters.

EEG

Compute on this dataset

Three routes today, with a fourth (in-browser one-click submission) landing soon.

  1. NeuroScience Gateway (NSG) portal.

    NSG runs EEGLAB / Brainstorm / MNE pipelines on supercomputing time donated by SDSC. Create an account, point a job at this dataset's S3 prefix (s3://nemar/nm000120), and submit.
    nsgportal.org →

  2. Local processing with nemar-cli.

    Pull the dataset to your machine and run any toolbox locally. The clone honors the dataset's published version pin.

    npm install -g nemar-cli
    nemar dataset clone nm000120
    cd nm000120 && nemar dataset get

  3. Just the files.

    rclone, aria2c, or any HTTPS client works against data.nemar.org/nm000120/ — the manifest carries presigned S3 URLs (see the Python sketch after this list).
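
A minimal Python sketch of that plain-HTTPS route. The manifest filename (manifest.json) and its schema (a JSON array of path/url pairs) are assumptions for illustration, not the documented layout; check data.nemar.org/nm000120/ for the actual listing.

import json
import pathlib
import urllib.request

BASE = "https://data.nemar.org/nm000120/"
OUT = pathlib.Path("nm000120")

# Assumed manifest name and schema: a JSON array of {"path": ..., "url": ...}
# entries, where each "url" is a presigned S3 URL.
with urllib.request.urlopen(BASE + "manifest.json") as resp:
    manifest = json.load(resp)

for entry in manifest:
    dest = OUT / entry["path"]
    dest.parent.mkdir(parents=True, exist_ok=True)
    urllib.request.urlretrieve(entry["url"], dest)  # stream each file to disk
    print("fetched", dest)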

Direct compute access is coming soon. One-click NSG submission from this page is scoped for a follow-up phase. Tracked on nemarOrg/website#6.

DOI: https://doi.org/10.82901/nemar.nm000120

SSVEP MAMEM 2 dataset

Dataset Overview

  • Code: MAMEM2
  • Paradigm: ssvep
  • DOI: 10.48550/arXiv.1602.00904
  • Subjects: 11
  • Sessions per subject: 1
  • Events: 6.66=1, 7.50=2, 8.57=3, 10.00=4, 12.00=5
  • Trial interval: [1, 4] s
  • Runs per session: 5
  • File format: MAT

Acquisition

  • Sampling rate: 250.0 Hz
  • Number of channels: 256
  • Channel types: eeg=256
  • Channel names: E1, E10, E100, E101, E102, E103, E104, E105, E106, E107, E108, E109, E11, E110, E111, E112, E113, E114, E115, E116, E117, E118, E119, E12, E120, E121, E122, E123, E124, E125, E126, E127, E128, E129, E13, E130, E131, E132, E133, E134, E135, E136, E137, E138, E139, E14, E140, E141, E142, E143, E144, E145, E146, E147, E148, E149, E15, E150, E151, E152, E153, E154, E155, E156, E157, E158, E159, E16, E160, E161, E162, E163, E164, E165, E166, E167, E168, E169, E17, E170, E171, E172, E173, E174, E175, E176, E177, E178, E179, E18, E180, E181, E182, E183, E184, E185, E186, E187, E188, E189, E19, E190, E191, E192, E193, E194, E195, E196, E197, E198, E199, E2, E20, E200, E201, E202, E203, E204, E205, E206, E207, E208, E209, E21, E210, E211, E212, E213, E214, E215, E216, E217, E218, E219, E22, E220, E221, E222, E223, E224, E225, E226, E227, E228, E229, E23, E230, E231, E232, E233, E234, E235, E236, E237, E238, E239, E24, E240, E241, E242, E243, E244, E245, E246, E247, E248, E249, E25, E250, E251, E252, E253, E254, E255, E256, E26, E27, E28, E29, E3, E30, E31, E32, E33, E34, E35, E36, E37, E38, E39, E4, E40, E41, E42, E43, E44, E45, E46, E47, E48, E49, E5, E50, E51, E52, E53, E54, E55, E56, E57, E58, E59, E6, E60, E61, E62, E63, E64, E65, E66, E67, E68, E69, E7, E70, E71, E72, E73, E74, E75, E76, E77, E78, E79, E8, E80, E81, E82, E83, E84, E85, E86, E87, E88, E89, E9, E90, E91, E92, E93, E94, E95, E96, E97, E98, E99
  • Montage: GSN-HydroCel-256
  • Hardware: EGI 300 Geodesic EEG System (GES 300)
  • Reference: Cz
  • Line frequency: 50.0 Hz
  • Impedance threshold: 80.0 kOhm
  • Cap manufacturer: EGI
  • Cap model: HydroCel Geodesic Sensor Net (HCGSN)
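
The acquisition details above translate directly into an MNE load. A short sketch, assuming the NEMAR copy of nm000120 is BIDS-formatted (as the MNE-BIDS and EEG-BIDS references below suggest); the subject label and task name are illustrative guesses, and session/run entities may need to be added to match the on-disk layout.

import mne
from mne_bids import BIDSPath, read_raw_bids

# Illustrative BIDS entities; adjust subject/task (and add session/run)
# to match the actual folder layout of the cloned dataset.
bids_path = BIDSPath(root="nm000120", subject="001", task="ssvep", datatype="eeg")
raw = read_raw_bids(bids_path)  # 256-channel EGI recording at 250 Hz

# Attach the standard GSN-HydroCel-256 montage listed above (skip if the
# BIDS electrodes.tsv already provides sensor positions).
raw.set_montage(mne.channels.make_standard_montage("GSN-HydroCel-256"),
                on_missing="warn")
print(raw.info)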

Participants

  • Number of subjects: 11
  • Health status: healthy
  • Age: min=24, max=39
  • Gender distribution: male=8, female=3
  • Handedness: right=10, left=1

Experimental Protocol

  • Paradigm: ssvep
  • Number of classes: 5
  • Class labels: 6.66, 7.50, 8.57, 10.00, 12.00
  • Trial duration: 5.0 s
  • Study design: Subjects focus attention on visual stimuli flickering at different frequencies (6.66, 7.50, 8.57, 10.00, 12.00 Hz) to select commands. Each stimulus presented for 5 seconds followed by 5 seconds rest.
  • Feedback type: none
  • Stimulus type: flickering box
  • Stimulus modalities: visual
  • Primary modality: visual
  • Synchronicity: synchronous
  • Mode: offline
  • Stimulus presentation: software=Microsoft Visual Studio 2010 with OpenGL, device=22 inch LCD monitor, refresh rate=60 Hz, resolution=1680x1080
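
Given the protocol above (5 s of flicker per trial, event codes 1-5 for the five frequencies), a hedged epoching sketch; it assumes raw was loaded as in the previous snippet and that trial onsets are stored as annotations named after the frequencies.

import mne

# Convert annotations to events; the mapping printed here should contain
# the five frequency labels from the event table above (6.66 ... 12.00).
events, event_id = mne.events_from_annotations(raw)
print(event_id)

# One epoch per trial: the full 5 s stimulation window, no baseline correction.
epochs = mne.Epochs(raw, events, event_id=event_id,
                    tmin=0.0, tmax=5.0, baseline=None, preload=True)
print(epochs)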

HED Event Annotations

Schema: HED 8.4.0 | Browse: https://www.hedtags.org/hed-schema-browser

  6.66
    ├─ Sensory-event
    ├─ Experimental-stimulus
    ├─ Visual-presentation
    └─ Label/6_66

  7.50
    ├─ Sensory-event
    ├─ Experimental-stimulus
    ├─ Visual-presentation
    └─ Label/7_50

  8.57
    ├─ Sensory-event
    ├─ Experimental-stimulus
    ├─ Visual-presentation
    └─ Label/8_57

  10.00
    ├─ Sensory-event
    ├─ Experimental-stimulus
    ├─ Visual-presentation
    └─ Label/10_00

  12.00
    ├─ Sensory-event
    ├─ Experimental-stimulus
    ├─ Visual-presentation
    └─ Label/12_00

Paradigm-Specific Parameters

  • Detected paradigm: ssvep
  • Stimulus frequencies: [6.66, 7.5, 8.57, 10.0, 12.0] Hz
  • Number of targets: 5
  • Number of repetitions: 3

Data Structure

  • Trials: 1104
  • Trials context: Each session includes 23 trials (8 adaptation trials excluded from analysis). 5 sessions per subject (with exceptions: S001=3 sessions, S003=4 sessions, S004=4 sessions). Total: 1104 trials of 5 seconds each.

Preprocessing

  • Data state: raw
  • Preprocessing applied: False

Signal Processing

  • Classifiers: LDA, SVM, Random Forest, kNN, Naive Bayes, AdaBoost, Decision Trees, CCA
  • Feature extraction: PWelch, Periodogram, FFT, Goertzel, PYULEAR (Yule-AR), STFT, DWT, PSD, Wavelet, Spectrogram
  • Frequency bands: analyzed=[5.0, 48.0] Hz
  • Spatial filters: CAR, CSP, Minimum Energy
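
As one concrete instance of the CCA approach listed above: correlate each epoch with sine/cosine reference signals at the five stimulus frequencies and pick the best match. A minimal sketch, not the tuned pipeline from the original study.

import numpy as np
from sklearn.cross_decomposition import CCA

FS = 250.0                                  # sampling rate, per Acquisition
FREQS = [6.66, 7.50, 8.57, 10.00, 12.00]    # stimulus frequencies

def make_refs(freq, n_samples, n_harmonics=2):
    """Sine/cosine reference matrix (n_samples x 2*n_harmonics) for one frequency."""
    t = np.arange(n_samples) / FS
    cols = []
    for h in range(1, n_harmonics + 1):
        cols += [np.sin(2 * np.pi * h * freq * t), np.cos(2 * np.pi * h * freq * t)]
    return np.column_stack(cols)

def detect_frequency(epoch):
    """epoch: (n_channels, n_samples) array -> index into FREQS of the best match."""
    X = epoch.T                              # samples x channels
    scores = []
    for f in FREQS:
        xs, ys = CCA(n_components=1).fit_transform(X, make_refs(f, X.shape[0]))
        scores.append(np.corrcoef(xs[:, 0], ys[:, 0])[0, 1])
    return int(np.argmax(scores))

Applied to epochs.get_data() from the epoching sketch, this yields one predicted frequency per trial.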

Cross-Validation

  • Method: leave-one-subject-out
  • Evaluation type: cross_subject
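
The leave-one-subject-out scheme maps onto scikit-learn's LeaveOneGroupOut with subject IDs as groups. A sketch with placeholder arrays standing in for whichever feature matrix and labels the chosen pipeline produces.

import numpy as np
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# Placeholder data: one row per trial, with the donating subject recorded
# per trial so that each fold holds out one whole subject.
rng = np.random.default_rng(0)
features = rng.standard_normal((1104, 20))
labels = rng.integers(1, 6, size=1104)       # classes 1..5
subjects = rng.integers(1, 12, size=1104)    # subject IDs 1..11

scores = cross_val_score(LinearDiscriminantAnalysis(), features, labels,
                         groups=subjects, cv=LeaveOneGroupOut())
print(scores.mean())   # cross-subject accuracy, one fold per held-out subject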

Performance (Original Study)

  • Accuracy: 74.42%
  • Mean accuracy (default config): 72.47%
  • Mean accuracy (optimal config): 74.42%
  • Processing time: 68 ms

BCI Application

  • Applications: command_selection
  • Environment: laboratory
  • Online feedback: False

Tags

  • Pathology: Healthy
  • Modality: Visual
  • Type: Research

Documentation

  • DOI: 10.48550/arXiv.1602.00904
  • Associated paper: arXiv:1602.00904v2
  • License: ODC-By-1.0
  • Investigators: Vangelis P. Oikonomou, Georgios Liaros, Kostantinos Georgiadis, Elisavet Chatzilari, Katerina Adam, Spiros Nikolopoulos, Ioannis Kompatsiaris
  • Institution: Centre for Research and Technology Hellas (CERTH)
  • Country: GR
  • Repository: GitHub
  • Data URL: https://figshare.com/articles/dataset/3153409
  • Publication year: 2016
  • Funding: H2020-ICT-2014-644780
  • Ethics approval: Approved by ethics committee of Centre for Research and Technology Hellas, date 3/7/2015, grant H2020-ICT-2014-644780
  • Keywords: SSVEP, BCI, brain-computer interface, EEG, visual evoked potentials, signal processing, feature extraction, classification

Abstract

Brain-computer interfaces (BCIs) have been gaining momentum in making human-computer interaction more natural, especially for people with neuro-muscular disabilities. This study focuses on SSVEP-based BCIs and performs a comparative evaluation of state-of-the-art algorithms for filtering, artifact removal, feature extraction, feature selection and classification. The dataset consists of 256-channel EEG signals from 11 subjects recorded at five flickering frequencies (6.66, 7.50, 8.57, 10.00, 12.00 Hz).

Methodology

Leave-one-subject-out cross-validation was used to evaluate a general-purpose BCI system without subject-specific training. Algorithms were compared systematically across all signal-processing stages: (1) signal filtering: FIR vs. IIR filters; (2) artifact removal: AMUSE vs. FastICA; (3) feature extraction: PWelch, Periodogram, PYULEAR, DWT, STFT, Goertzel; (4) feature selection: entropy-based methods and PCA/SVD; (5) classification: SVM, LDA, kNN, Naive Bayes, Random Forest, AdaBoost. The optimal configuration achieved 74.42% mean accuracy with an IIR elliptic filter, AMUSE artifact removal, and PWelch feature extraction (nfft=512, segment length=350, overlap=0.75) on channel 138.
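
For the feature-extraction step, a hedged sketch of Welch PSD features with the stated parameters (350-sample segments, 75% overlap, nfft=512) on a single channel; whether channel 138 corresponds to electrode E138 here is an assumption, and the filtering and AMUSE stages are omitted.

import numpy as np
from scipy.signal import welch

FS = 250.0
FREQS = [6.66, 7.50, 8.57, 10.00, 12.00]

def welch_features(trial_1ch):
    """trial_1ch: 1-D array, one channel's samples for a 5 s trial."""
    f, pxx = welch(trial_1ch, fs=FS, nperseg=350,
                   noverlap=int(0.75 * 350), nfft=512)
    # Keep power at the frequency bins nearest each stimulation frequency.
    idx = [np.argmin(np.abs(f - fr)) for fr in FREQS]
    return pxx[idx]

These five-dimensional vectors can then be scored with, for example, the LDA classifier in the cross-validation sketch above.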

References

Oikonomou, V. P., Liaros, G., Georgiadis, K., Chatzilari, E., Adam, K., Nikolopoulos, S., & Kompatsiaris, I. (2016). Comparative evaluation of state-of-the-art algorithms for SSVEP-based BCIs. arXiv preprint arXiv:1602.00904.

MAMEM Steady State Visually Evoked Potential EEG Database. https://archive.physionet.org/physiobank/database/mssvepdb/

Nikolopoulos, S. (2016). DataAcquisitionDetails.pdf. https://figshare.com/articles/dataset/MAMEMEEGSSVEPDatasetII256channels11subjects5frequenciespresentedsimultaneously/3153409?file=4911931

Appelhoff, S., Sanderson, M., Brooks, T., Vliet, M., Quentin, R., Holdgraf, C., Chaumon, M., Mikulan, E., Tavabi, K., Hochenberger, R., Welke, D., Brunner, C., Rockhill, A., Larson, E., Gramfort, A., & Jas, M. (2019). MNE-BIDS: Organizing electrophysiological data into the BIDS format and facilitating their analysis. Journal of Open Source Software, 4, 1896. https://doi.org/10.21105/joss.01896

Pernet, C. R., Appelhoff, S., Gorgolewski, K. J., Flandin, G., Phillips, C., Delorme, A., Oostenveld, R. (2019). EEG-BIDS, an extension to the brain imaging data structure for electroencephalography. Scientific Data, 6, 103. https://doi.org/10.1038/s41597-019-0104-8


Generated by MOABB 1.4.3 (Mother of All BCI Benchmarks) https://github.com/NeuroTechX/moabb

Files

18 top-level entries · 4.40 GB total