Please refer to https://github.com/NeuSpeech/EEG-To-Text for the corrected code and detailed experiments. To avoid further confusion, this repository has been archived.
Run conda env create -f environment.yml to create the conda environment (named "EEGToText") used in our experiments.
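For example, a minimal setup sketch (the environment name "EEGToText" comes from environment.yml):

```bash
# Create the conda environment from the provided spec, then activate it.
conda env create -f environment.yml
conda activate EEGToText
```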
Move the ZuCo task1-SR, task2-NR, and task3-TSR .mat files to ~/datasets/ZuCo/task1-SR/Matlab_files, ~/datasets/ZuCo/task2-NR/Matlab_files, and ~/datasets/ZuCo/task3-TSR/Matlab_files respectively. Move the ZuCo task2-NR-2.0 .mat files to ~/datasets/ZuCo/task2-NR-2.0/Matlab_files. Then run bash ./scripts/prepare_dataset.sh to preprocess the .mat files and prepare the sentiment labels, as sketched below.
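As a sketch, the expected directory layout can be created like this (paths taken from the steps above; adjust them if your data lives elsewhere):

```bash
# Create the expected ZuCo directory layout.
mkdir -p ~/datasets/ZuCo/task1-SR/Matlab_files \
         ~/datasets/ZuCo/task2-NR/Matlab_files \
         ~/datasets/ZuCo/task3-TSR/Matlab_files \
         ~/datasets/ZuCo/task2-NR-2.0/Matlab_files
# ... copy the downloaded .mat files into the folders above, then preprocess:
bash ./scripts/prepare_dataset.sh
```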
For each task, all .mat files will be converted into one .pickle file stored in ~/datasets/ZuCo/<task_name>/<task_name>-dataset.pickle.
The sentiment dataset for ZuCo (sentiment_labels.json) will be stored in ~/datasets/ZuCo/task1-SR/sentiment_labels/sentiment_labels.json.
The sentiment dataset for the filtered Stanford Sentiment Treebank will be stored in ~/datasets/stanfordsentiment/ternary_dataset.json.
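A quick sketch for confirming that preprocessing finished (output paths taken from above; the internal structure of the pickle is produced by prepare_dataset.sh, so only the top-level type is checked here):

```bash
# Verify the expected outputs of prepare_dataset.sh exist.
ls ~/datasets/ZuCo/task1-SR/task1-SR-dataset.pickle \
   ~/datasets/ZuCo/task1-SR/sentiment_labels/sentiment_labels.json \
   ~/datasets/stanfordsentiment/ternary_dataset.json
# Peek at the pickled dataset without assuming its internal layout.
python -c "import pickle, os; p=os.path.expanduser('~/datasets/ZuCo/task1-SR/task1-SR-dataset.pickle'); print(type(pickle.load(open(p,'rb'))))"
```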
To train an EEG-To-Text decoding model, run bash ./scripts/train_decoding.sh.
To evaluate the trained EEG-To-Text decoding model from above, run bash ./scripts/eval_decoding.sh.
For details on the available arguments, please refer to the function get_config(case = 'train_decoding') in /config.py.
We first train the decoder and the classifier individually, and then we evaluate the pipeline on ZuCo task1-SR data.
To run the whole training and evaluation process, run bash ./scripts/train_eval_zeroshot_pipeline.sh.
For details on the available arguments, please refer to the function get_config(case = 'eval_sentiment') in /config.py.
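To inspect the default arguments for either stage without launching a run, a quick sketch (assuming config.py is importable from the repository root and that get_config returns a printable object):

```bash
# Print the default arguments for decoding training and for sentiment evaluation.
python -c "from config import get_config; print(get_config(case='train_decoding'))"
python -c "from config import get_config; print(get_config(case='eval_sentiment'))"
```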
@inproceedings{wang2022open,
title={Open vocabulary electroencephalography-to-text decoding and zero-shot sentiment classification},
author={Wang, Zhenhailong and Ji, Heng},
booktitle={Proceedings of the AAAI Conference on Artificial Intelligence},
volume={36},
number={5},
pages={5350--5358},
year={2022}
}