From cd20b9b77626292f982e9beb00f4d4360f85b306 Mon Sep 17 00:00:00 2001
From: Julius
Date: Sun, 28 Mar 2021 15:47:41 +0200
Subject: [PATCH 1/9] Update README.md
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

---
 README.md | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/README.md b/README.md
index af23176..20f382d 100644
--- a/README.md
+++ b/README.md
@@ -22,3 +22,6 @@ This code was created using Python 3.7 and the following libraries:
 - Scikit-Learn 0.23.2
 - Pandas 1.2.0
 - Scipy 1.5.4
+
+It is recommended to use Anaconda.
+It may be necessary to set the parent directory 'semesterproject_lecture_eeg' as 'Sources Root' for the project, if PyCharm is used as an IDE.
\ No newline at end of file

From 26dd78a410d4a696cbfb42fefd2309482db5bc97 Mon Sep 17 00:00:00 2001
From: Julius
Date: Sun, 28 Mar 2021 17:34:48 +0200
Subject: [PATCH 2/9] Update README.md
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

---
 README.md | 53 ++++++++++++++++++++++++++++++++++++++---------------
 1 file changed, 38 insertions(+), 15 deletions(-)

diff --git a/README.md b/README.md
index 20f382d..7b7aed6 100644
--- a/README.md
+++ b/README.md
@@ -1,20 +1,31 @@
 ## Semesterproject of the lecture "Semesterproject Signal processing and Analysis of human brain potentials (eeg) WS 2020/21
-This repository holds the code of the semesterproject as well as the report.
-The main files are 'preprocessing_and_cleaning.py', 'erp_analysis.py' and 'decoding_tf_analyis.py'.
-The files hold:
-- preprocessing_and_cleaning.py : Holds the pre-processing pipeline of the project. By executing the file all subjects are pre-processed. Subjects 001, 003, 014 are pre-processed with manually selected pre-processing information, all other subjects are pre-processed with the given pre-processing information. Pre-processed cleaned data is saved in the BIDS file structure as 'sub-XXX_task-N170_cleaned.fif' where XXX is the subject number.
-Details can be found in the comments of the code.
-- erp_analysis.py : Holds the code for the erp-analysis. Computes the peak-differences and t-tests for several experimental contrasts. Details can be found in the comments of the code.
-- decoding_tf_analysis.py : Holds the code for the decoding and time-frequency analysis. Details can be found in the comments of the code.
+This repository holds the code of the semesterproject as well as the report, created by Julius Voggesberger.
+As the dataset for the project, the N170-dataset was chosen.
+As the three subjects, to be manually pre-processed, the subjects 001, 003 and 014 were chosen.
+The rest of the subjects were pre-processed with provided pre-processing information.
 
-The folder 'utils' holds helper functions for some plots needed for the analysis and to load data, generate strings etc. and holds the code given in the lecture.
-The folder 'test' holds mostly unittests that test helper functions and one function which visually checks if N170 peaks are extracted correctly.
+### Structure
+├── Dataset: The dataset of the project as well as the manually selected bad segments are stored here.
+| ├── n170: Store the dataset here.
+| └── preprocessed: Bad segments are stored here.
+├── cached_data: Data that is generated in the analysis part is stored here.
+| ├── decoding_data: Results of the classifiers.
+| ├── erp_peaks: ERP peaks needed for the ERP analysis.
+| └── tf_data: Time-frequency data needed for the tf-analysis.
+├── test: Contains unittests and one visual check.
+├── utils: Contains helper methods
+| ├── ccs_eeg_semesterproject: Methods given in the lecture.
+| ├── ccs_eeg_utils_reduced: Method for reading in BIDS.
+| ├── file_utils.py: Methods for reading in files and getting epochs.
+| └── plot_utils.py: Methods for manually created plots.
+├── preprocessing_and_cleaning.py: The preprocessing pipeline.
+├── erp_analysis.py: The ERP-Analysis and computation of ERP peaks.
+└── decoding_tf_analysis.py: Decoding and time-frequency analysis.
 
-For the code to work properly, the N170 dataset needs to be provided.
-When first running the analysis, it may take a while. After running it one time the data is cached, so that it can be reused if the analysis should be executed again. Be careful though, as a parameter has to be explicitly set in the code, so that the already computed data is used. This parameter is a boolean given to each analysis function which caches data.
-
-This code was created using Python 3.7 and the following libraries:
+### Running the project
+To run the project, Python 3.7 is required and Anaconda is recommended.
+The following libraries are needed:
 - Matplotlib 3.3.3
 - MNE 0.22.0
 - MNE-Bids 0.6
@@ -23,5 +34,17 @@
 - Pandas 1.2.0
 - Scipy 1.5.4
 
-It is recommended to use Anaconda.
-It may be necessary to set the parent directory 'semesterproject_lecture_eeg' as 'Sources Root' for the project, if PyCharm is used as an IDE.
\ No newline at end of file
+For the code to work, the N170 dataset needs to be provided and put into the folder 'Dataset/n170/', so that the file structure 'Dataset/n170/sub-001', etc. exists.
+The pre-processed raw objects are saved in their respective subject folder, in 'Dataset/n170/'.
+When first running the analysis, it may take a while.
+After running it once, the data is cached, so that it can be reused if the analysis is executed again at a later time.
+For the cached data to be used, a boolean parameter has to be set in the respective analysis method.
+
+It may be necessary to set the parent directory 'semesterproject_lecture_eeg' as 'Sources Root' for the project, if PyCharm is used as an IDE.
+
+### Parameters
+Parameters have to be changed manually in the code if different settings are to be tried.
+
+### Visualisation
+The visualisation methods that were used to generate the visualisations in the report are contained in the code if they were created manually.
+If a visualisation method from MNE was used to create the visualisation, it is not necessarily contained in the code.
\ No newline at end of file

From 106a3ad434840d7c784b228f51ce96817e54cf74 Mon Sep 17 00:00:00 2001
From: Julius
Date: Sun, 28 Mar 2021 17:35:40 +0200
Subject: [PATCH 3/9] Update README.md
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

---
 README.md | 32 ++++++++++++++++----------------
 1 file changed, 16 insertions(+), 16 deletions(-)

diff --git a/README.md b/README.md
index 7b7aed6..df40639 100644
--- a/README.md
+++ b/README.md
@@ -6,22 +6,22 @@ As the three subjects, to be manually pre-processed, the subjects 001, 003 and 0
 The rest of the subjects were pre-processed with provided pre-processing information.
 
 ### Structure
-├── Dataset: The dataset of the project as well as the manually selected bad segments are stored here.
-| ├── n170: Store the dataset here.
-| └── preprocessed: Bad segments are stored here.
-├── cached_data: Data that is generated in the analysis part is stored here.
-| ├── decoding_data: Results of the classifiers.
-| ├── erp_peaks: ERP peaks needed for the ERP analysis.
-| └── tf_data: Time-frequency data needed for the tf-analysis.
-├── test: Contains unittests and one visual check.
-├── utils: Contains helper methods
-| ├── ccs_eeg_semesterproject: Methods given in the lecture.
-| ├── ccs_eeg_utils_reduced: Method for reading in BIDS.
-| ├── file_utils.py: Methods for reading in files and getting epochs.
-| └── plot_utils.py: Methods for manually created plots.
-├── preprocessing_and_cleaning.py: The preprocessing pipeline.
-├── erp_analysis.py: The ERP-Analysis and computation of ERP peaks.
-└── decoding_tf_analysis.py: Decoding and time-frequency analysis.
+├── Dataset: The dataset of the project as well as the manually selected bad segments are stored here. \
+| ├── n170: Store the dataset here. \
+| └── preprocessed: Bad segments are stored here. \
+├── cached_data: Data that is generated in the analysis part is stored here. \
+| ├── decoding_data: Results of the classifiers. \
+| ├── erp_peaks: ERP peaks needed for the ERP analysis. \
+| └── tf_data: Time-frequency data needed for the tf-analysis. \
+├── test: Contains unittests and one visual check. \
+├── utils: Contains helper methods \
+| ├── ccs_eeg_semesterproject: Methods given in the lecture. \
+| ├── ccs_eeg_utils_reduced: Method for reading in BIDS. \
+| ├── file_utils.py: Methods for reading in files and getting epochs. \
+| └── plot_utils.py: Methods for manually created plots. \
+├── preprocessing_and_cleaning.py: The preprocessing pipeline. \
+├── erp_analysis.py: The ERP-Analysis and computation of ERP peaks. \
+└── decoding_tf_analysis.py: Decoding and time-frequency analysis. \
 
 ### Running the project
 To run the project, Python 3.7 is required and Anaconda is recommended.

From 7f616a4a345a51e0c495e76a15902885f2c23f8b Mon Sep 17 00:00:00 2001
From: Julius
Date: Sun, 28 Mar 2021 17:36:46 +0200
Subject: [PATCH 4/9] Update README.md
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

---
 README.md | 34 ++++++++++++++++++----------------
 1 file changed, 18 insertions(+), 16 deletions(-)

diff --git a/README.md b/README.md
index df40639..b91cd1b 100644
--- a/README.md
+++ b/README.md
@@ -6,22 +6,24 @@ As the three subjects, to be manually pre-processed, the subjects 001, 003 and 0
 The rest of the subjects were pre-processed with provided pre-processing information.
 
 ### Structure
-├── Dataset: The dataset of the project as well as the manually selected bad segments are stored here. \
-| ├── n170: Store the dataset here. \
-| └── preprocessed: Bad segments are stored here. \
-├── cached_data: Data that is generated in the analysis part is stored here. \
-| ├── decoding_data: Results of the classifiers. \
-| ├── erp_peaks: ERP peaks needed for the ERP analysis. \
-| └── tf_data: Time-frequency data needed for the tf-analysis. \
-├── test: Contains unittests and one visual check. \
-├── utils: Contains helper methods \
-| ├── ccs_eeg_semesterproject: Methods given in the lecture. \
-| ├── ccs_eeg_utils_reduced: Method for reading in BIDS. \
-| ├── file_utils.py: Methods for reading in files and getting epochs. \
-| └── plot_utils.py: Methods for manually created plots. \
-├── preprocessing_and_cleaning.py: The preprocessing pipeline. \
-├── erp_analysis.py: The ERP-Analysis and computation of ERP peaks. \
-└── decoding_tf_analysis.py: Decoding and time-frequency analysis. \
+```
+├── Dataset: The dataset of the project as well as the manually selected bad segments are stored here.
+| ├── n170: Store the dataset here.
+| └── preprocessed: Bad segments are stored here.
+├── cached_data: Data that is generated in the analysis part is stored here.
+| ├── decoding_data: Results of the classifiers.
+| ├── erp_peaks: ERP peaks needed for the ERP analysis.
+| └── tf_data: Time-frequency data needed for the tf-analysis.
+├── test: Contains unittests and one visual check.
+├── utils: Contains helper methods
+| ├── ccs_eeg_semesterproject: Methods given in the lecture.
+| ├── ccs_eeg_utils_reduced: Method for reading in BIDS.
+| ├── file_utils.py: Methods for reading in files and getting epochs.
+| └── plot_utils.py: Methods for manually created plots.
+├── preprocessing_and_cleaning.py: The preprocessing pipeline.
+├── erp_analysis.py: The ERP-Analysis and computation of ERP peaks.
+└── decoding_tf_analysis.py: Decoding and time-frequency analysis.
+```
 
 ### Running the project
 To run the project, Python 3.7 is required and Anaconda is recommended.

From a5b97a3a659a5f85a53fc8fecf352bce60d99058 Mon Sep 17 00:00:00 2001
From: Julius
Date: Sun, 28 Mar 2021 20:14:00 +0200
Subject: [PATCH 5/9] Update decoding_tf_analysis.py
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

---
 decoding_tf_analysis.py | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/decoding_tf_analysis.py b/decoding_tf_analysis.py
index 1b03368..ae36b07 100644
--- a/decoding_tf_analysis.py
+++ b/decoding_tf_analysis.py
@@ -193,8 +193,8 @@ def time_frequency(dataset, filename, compute_tfr=True):
     :param compute_tfr: If True the TFRs will be created, else the TFRs will be loaded from a precomputed file
     """
     # Parameters
-    # freqs = np.linspace(0.1, 50, num=50) # Use this for linear space scaling
-    freqs = np.logspace(*np.log10([0.1, 50]), num=50)
+    freqs = np.linspace(0.1, 50, num=50) # Use this for linear space scaling
+    # freqs = np.logspace(*np.log10([0.1, 50]), num=50)
     n_cycles = freqs / 2
     cond1 = []
     cond2 = []
@@ -252,4 +252,4 @@ if __name__ == '__main__':
     mne.set_log_level(verbose=VERBOSE_LEVEL)
     ds = 'N170'
     decoding(ds, 'faces_vs_cars', True)
-    time_frequency(ds, 'face_intact_vs_all_0.1_50hz_ncf2', True)
+    time_frequency(ds, 'face_intact_vs_all_0.1_50hz_ncf2_linscale', True)

From 6c1a5551794653da9fefa1fda2e5ce37dcf1f305 Mon Sep 17 00:00:00 2001
From: Julius
Date: Mon, 29 Mar 2021 12:42:41 +0200
Subject: [PATCH 6/9] Update README.md
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

---
 README.md | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/README.md b/README.md
index b91cd1b..71038fc 100644
--- a/README.md
+++ b/README.md
@@ -27,6 +27,8 @@ The rest of the subjects were pre-processed with provided pre-processing informa
 
 ### Running the project
 To run the project, Python 3.7 is required and Anaconda is recommended.
+To ensure reproducibility, random states were used for methods which are non-deterministic.
+The random states used are either '123' or '1234'
 The following libraries are needed:
 - Matplotlib 3.3.3
 - MNE 0.22.0

From 0d513bfb96ce03091ab08d608341702dc4e5aa0a Mon Sep 17 00:00:00 2001
From: Julius
Date: Mon, 29 Mar 2021 12:42:56 +0200
Subject: [PATCH 7/9] Update README.md
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

---
 README.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/README.md b/README.md
index 71038fc..858827b 100644
--- a/README.md
+++ b/README.md
@@ -28,7 +28,7 @@ The rest of the subjects were pre-processed with provided pre-processing informa
 ### Running the project
 To run the project, Python 3.7 is required and Anaconda is recommended.
 To ensure reproducibility, random states were used for methods which are non-deterministic.
-The random states used are either '123' or '1234'
+The random states used are either '123' or '1234'.\
 The following libraries are needed:
 - Matplotlib 3.3.3
 - MNE 0.22.0

From 02f604cdebbd116685c592db0ddacf89f7cb3393 Mon Sep 17 00:00:00 2001
From: Julius
Date: Mon, 29 Mar 2021 12:43:11 +0200
Subject: [PATCH 8/9] Update README.md
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

---
 README.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/README.md b/README.md
index 858827b..32877fa 100644
--- a/README.md
+++ b/README.md
@@ -26,7 +26,7 @@ The rest of the subjects were pre-processed with provided pre-processing informa
 ```
 
 ### Running the project
-To run the project, Python 3.7 is required and Anaconda is recommended.
+To run the project, Python 3.7 is required and Anaconda is recommended.\
 To ensure reproducibility, random states were used for methods which are non-deterministic.
 The random states used are either '123' or '1234'.\
 The following libraries are needed:

From 6161e088c60852825d74a6b944f88a5a3cc6fd93 Mon Sep 17 00:00:00 2001
From: Julius
Date: Mon, 29 Mar 2021 14:26:05 +0200
Subject: [PATCH 9/9] Update README.md
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

---
 README.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/README.md b/README.md
index 32877fa..ebe4caf 100644
--- a/README.md
+++ b/README.md
@@ -1,4 +1,4 @@
-## Semesterproject of the lecture "Semesterproject Signal processing and Analysis of human brain potentials (eeg) WS 2020/21
+## Semesterproject of the lecture "Semesterproject Signal processing and Analysis of human brain potentials (eeg)" WS 2020/21
 This repository holds the code of the semesterproject as well as the report, created by Julius Voggesberger.
 As the dataset for the project, the N170-dataset was chosen.