## Semester project of the lecture "Semesterproject Signal processing and Analysis of human brain potentials (EEG)", WS 2020/21
This repository holds the code of the semester project as well as the report, created by Julius Voggesberger.
The main files are 'preprocessing_and_cleaning.py', 'erp_analysis.py' and 'decoding_tf_analysis.py'; details can be found in the comments of the code.
The N170 dataset was chosen as the dataset for the project.
Subjects 001, 003 and 014 were chosen as the three subjects to be pre-processed manually; all remaining subjects were pre-processed with the provided pre-processing information.
The folder 'utils' holds helper functions for some plots needed for the analysis, for loading data, generating strings, etc., as well as the code given in the lecture.
The folder 'test' holds mostly unit tests for the helper functions, plus one function that visually checks whether the N170 peaks are extracted correctly.
### Structure
```
├── Dataset: The dataset of the project as well as the manually selected bad segments are stored here.
| ├── n170: Store the dataset here.
| └── preprocessed: Bad segments are stored here.
├── cached_data: Data that is generated in the analysis part is stored here.
| ├── decoding_data: Results of the classifiers.
| ├── erp_peaks: ERP peaks needed for the ERP analysis.
| └── tf_data: Time-frequency data needed for the tf-analysis.
├── test: Contains unittests and one visual check.
├── utils: Contains helper methods.
| ├── ccs_eeg_semesterproject: Methods given in the lecture.
| ├── ccs_eeg_utils_reduced: Method for reading in BIDS.
| ├── file_utils.py: Methods for reading in files and getting epochs.
| └── plot_utils.py: Methods for manually created plots.
├── preprocessing_and_cleaning.py: The preprocessing pipeline.
├── erp_analysis.py: The ERP-Analysis and computation of ERP peaks.
└── decoding_tf_analysis.py: Decoding and time-frequency analysis.
```
### Running the project
To run the project, Python 3.7 is required and Anaconda is recommended.\
To ensure reproducibility, fixed random states were used for all non-deterministic methods; a minimal illustration follows the library list below.
The random states used are either 123 or 1234.\
The following libraries are needed:
- Matplotlib 3.3.3
- MNE 0.22.0
- MNE-BIDS 0.6
- scikit-learn 0.23.2
- pandas 1.2.0
- SciPy 1.5.4
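
A minimal sketch of such a fixed random state (the scikit-learn splitter here is an illustrative assumption, not necessarily what the project uses):
```python
from sklearn.model_selection import StratifiedKFold

# Pinning random_state makes the shuffling reproducible across runs;
# the project fixes such seeds to 123 or 1234. The splitter itself is
# an illustrative choice, not necessarily the repository's actual code.
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=123)
```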
For the code to work, the N170 dataset needs to be provided and placed into the folder 'Dataset/n170/', so that the file structure 'Dataset/n170/sub-001', etc. exists.
The pre-processed raw objects are saved in their respective subject folder in 'Dataset/n170/', as 'sub-XXX_task-N170_cleaned.fif', where XXX is the subject number.
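
A minimal sketch of loading one subject from that location with MNE-BIDS (the `datatype` and `suffix` entities are assumptions; in the repository this is handled by the loader in 'utils/ccs_eeg_utils_reduced'):
```python
from mne_bids import BIDSPath, read_raw_bids

# Assumed BIDS entities for the N170 dataset placed under 'Dataset/n170/'.
bids_path = BIDSPath(subject='001', task='N170', datatype='eeg',
                     suffix='eeg', root='Dataset/n170')
raw = read_raw_bids(bids_path)  # returns an mne.io.Raw object
```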
The first run of the analysis may take a while.
After one run the data is cached, so that it can be reused if the analysis is executed again at a later time.
For the cached data to be used, a boolean parameter has to be set explicitly in the respective analysis method.
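
A hypothetical sketch of this caching pattern; the function and parameter names are illustrative and may differ from the actual analysis code:
```python
import os
import pickle

def analysis_step(dataset, use_cached=False,
                  cache_file='cached_data/example.pkl'):
    """Illustrates the boolean caching flag described above."""
    if use_cached and os.path.exists(cache_file):
        with open(cache_file, 'rb') as f:
            return pickle.load(f)        # fast path: reuse earlier results
    result = {'dataset': dataset}        # placeholder for the slow computation
    os.makedirs(os.path.dirname(cache_file), exist_ok=True)
    with open(cache_file, 'wb') as f:
        pickle.dump(result, f)           # cache the result for later runs
    return result
```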
It may be necessary to set the parent directory 'semesterproject_lecture_eeg' as 'Sources Root' for the project if PyCharm is used as the IDE.
### Parameters
Parameters have to be changed manually in the code if different settings are to be tried.
### Visualisation
The visualisation methods used to generate the figures in the report are contained in the code whenever the figures were created manually.
If a figure was produced with a built-in MNE plotting function, the corresponding call may or may not remain in the code.
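
For instance, a quick look at the cleaned data can be reproduced with a built-in MNE call (the file path below assumes the cleaned file sits directly in the subject folder, which may differ):
```python
import mne

# Assumed path of the cleaned output described above.
fname = 'Dataset/n170/sub-001/sub-001_task-N170_cleaned.fif'
raw = mne.io.read_raw_fif(fname)
raw.plot()  # interactive browser view of the cleaned recording
```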

### Entry point
The `__main__` block of 'decoding_tf_analysis.py' selects which analyses run (`time_frequency` is defined as `time_frequency(dataset, filename, scaling='lin', compute_tfr=True)`). Here the decoding call is commented out and only the time-frequency analysis is executed:
```python
if __name__ == '__main__':
    mne.set_log_level(verbose=VERBOSE_LEVEL)
    ds = 'N170'
    # decoding(ds, 'faces_vs_cars', True)
    time_frequency(ds, 'face_intact_vs_all_0.1_50hz_ncf2', 'log', True)
```
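To re-run the decoding analysis, uncomment the `decoding(ds, 'faces_vs_cars', True)` call; judging by the signature, the final `True` maps to `compute_tfr` and recomputes the time-frequency data rather than reusing the cache in 'cached_data/tf_data'.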