## Semester project of the lecture "Signal Processing and Analysis of Human Brain Potentials (EEG)", WS 2020/21

This repository holds the code of the semester project as well as the report, created by Julius Voggesberger.

The main files are 'preprocessing_and_cleaning.py', 'erp_analysis.py' and 'decoding_tf_analysis.py'.

The files contain:

- preprocessing_and_cleaning.py: Contains the pre-processing pipeline of the project. Executing the file pre-processes all subjects. Subjects 001, 003 and 014 are pre-processed with manually selected pre-processing information; all other subjects are pre-processed with the provided pre-processing information. The pre-processed, cleaned data is saved in the BIDS file structure as 'sub-XXX_task-N170_cleaned.fif', where XXX is the subject number. Details can be found in the comments of the code.
- erp_analysis.py: Contains the code for the ERP analysis. Computes the peak differences and t-tests for several experimental contrasts. Details can be found in the comments of the code.
- decoding_tf_analysis.py: Contains the code for the decoding and time-frequency analysis. Details can be found in the comments of the code.

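The naming convention for the cleaned files can be illustrated with a small sketch (the helper `cleaned_fname` is hypothetical and not part of the project code):

```python
def cleaned_fname(subject: int, task: str = "N170") -> str:
    """Build the name of a pre-processed file, following the
    'sub-XXX_task-N170_cleaned.fif' convention described above.
    (Hypothetical helper, for illustration only.)"""
    return f"sub-{subject:03d}_task-{task}_cleaned.fif"

print(cleaned_fname(1))   # sub-001_task-N170_cleaned.fif
print(cleaned_fname(14))  # sub-014_task-N170_cleaned.fif
```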
As the dataset for the project, the N170 dataset was chosen.

The subjects 001, 003 and 014 were chosen as the three subjects to be pre-processed manually; the remaining subjects were pre-processed with the provided pre-processing information.

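This split can be sketched as follows (the names `MANUAL_SUBJECTS` and `preprocessing_mode` are hypothetical, chosen for illustration only):

```python
# Subjects that are pre-processed with manually selected information;
# all remaining subjects fall back to the provided information.
MANUAL_SUBJECTS = {"001", "003", "014"}

def preprocessing_mode(subject: str) -> str:
    """Return which kind of pre-processing information a subject uses.
    (Hypothetical helper, for illustration only.)"""
    return "manual" if subject in MANUAL_SUBJECTS else "provided"

print(preprocessing_mode("003"))  # manual
print(preprocessing_mode("007"))  # provided
```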
The folder 'utils' holds helper functions for plotting, loading data, generating strings, etc., as well as the code given in the lecture.

The folder 'test' holds mostly unittests that test helper functions, and one function that visually checks whether N170 peaks are extracted correctly.

### Structure

```
├── Dataset: The dataset of the project as well as the manually selected bad segments are stored here.
│   ├── n170: Store the dataset here.
│   └── preprocessed: Bad segments are stored here.
├── cached_data: Data generated in the analysis part is stored here.
│   ├── decoding_data: Results of the classifiers.
│   ├── erp_peaks: ERP peaks needed for the ERP analysis.
│   └── tf_data: Time-frequency data needed for the tf-analysis.
├── test: Contains unittests and one visual check.
├── utils: Contains helper methods.
│   ├── ccs_eeg_semesterproject: Methods given in the lecture.
│   ├── ccs_eeg_utils_reduced: Method for reading in BIDS.
│   ├── file_utils.py: Methods for reading in files and getting epochs.
│   └── plot_utils.py: Methods for manually created plots.
├── preprocessing_and_cleaning.py: The pre-processing pipeline.
├── erp_analysis.py: The ERP analysis and computation of ERP peaks.
└── decoding_tf_analysis.py: Decoding and time-frequency analysis.
```

### Running the project

To run the project, Python 3.7 is required and Anaconda is recommended.

The following libraries are needed:

- Matplotlib 3.3.3
- MNE 0.22.0
- MNE-BIDS 0.6
- Pandas 1.2.0
- SciPy 1.5.4

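One possible way to set up a matching environment with Anaconda (the environment name `n170` is arbitrary; the versions follow the list above):

```shell
conda create -n n170 python=3.7
conda activate n170
pip install matplotlib==3.3.3 mne==0.22.0 mne-bids==0.6 pandas==1.2.0 scipy==1.5.4
```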
If PyCharm is used as an IDE, it may be necessary to set the parent directory 'semesterproject_lecture_eeg' as 'Sources Root' for the project.

For the code to work, the N170 dataset needs to be provided and placed in the folder 'Dataset/n170/', so that the file structure 'Dataset/n170/sub-001', etc. exists.

The pre-processed raw objects are saved in their respective subject folders in 'Dataset/n170/'.

The first run of the analysis may take a while. After one run the data is cached, so that it can be reused when the analysis is executed again later. For the cached data to be used, a boolean parameter has to be set in the respective analysis method.

### Parameters

Parameters have to be changed manually in the code if different settings are to be tried.

### Visualisation

The visualisation methods that were created manually for the report are contained in the code.

If a visualisation was created with a plotting method from MNE, it may or may not be present in the code.