Compare commits: 69832fc36f...master (36 commits)
Commit SHAs:
be4489dc73, b621dbf9b0, f554a5822f, dd2df8f1a8, fc8cde3a88, 5f5e7ffcac, d09480c5bb, 6161e088c6,
02f604cdeb, 0d513bfb96, 6c1a555179, a5b97a3a65, 7f616a4a34, 106a3ad434, 26dd78a410, cd20b9b776,
c6a6fd7012, cfa950b85b, 86044cc7c7, 4875cf29cb, c44cb7cb13, 6f8c1af6c6, bb3406bd88, 1629441d5f,
770576fb6c, c86d46f447, 3b837869fe, 5d2099f0a7, 0b993c108f, b88a32e596, 67660718da, e188cb97e8,
104b9951b6, 2a4da39a7c, f5ec718e6b, 287fdf5c9e
.gitignore (new file, vendored, 6 lines)
@@ -0,0 +1,6 @@
# Ignore the n170 dataset, so that it won't be pushed onto the repo
/Dataset/n170

# Ignore unnecessary files
/utils/__pycache__
.idea/
Dataset/README.md (new file, 4 lines)
@@ -0,0 +1,4 @@
This folder should hold the n170 dataset.
Unpack the dataset here, so that the file structure 'Dataset/n170/...' exists.

Bad annotations from the manual preprocessing step are saved in the 'preprocessed' folder.
Dataset/preprocessed/sub-001_task-N170_badannotations.csv (new file, 24 lines)
@@ -0,0 +1,24 @@
onset,duration,description
1970-01-01 00:00:0.0,0,
1970-01-01 00:00:03.792590,4.272874611801244,BAD_
1970-01-01 00:01:16.632102,4.703474378881992,BAD_
1970-01-01 00:01:25.812306,2.674687014751555,BAD_
1970-01-01 00:01:30.096020,3.3330078125,BAD_
1970-01-01 00:01:39.164434,2.2565083947981464,BAD_
1970-01-01 00:01:42.936322,3.154971370341613,BAD_
1970-01-01 00:02:53.246060,5.229302940605606,BAD_
1970-01-01 00:03:06.573345,2.070191187888213,BAD_
1970-01-01 00:03:13.224567,8.847997137034156,BAD_
1970-01-01 00:03:24.367312,6.69499830163042,BAD_
1970-01-01 00:03:50.203670,2.9893560753105817,BAD_
1970-01-01 00:04:51.697013,9.613967876552806,BAD_
1970-01-01 00:05:01.312749,2.7326523680124524,BAD_
1970-01-01 00:06:17.397529,5.784114178959612,BAD_
1970-01-01 00:06:25.472251,3.2791828416148974,BAD_
1970-01-01 00:06:31.833646,9.137823903338472,BAD_
1970-01-01 00:06:40.977378,2.455246748835407,BAD_
1970-01-01 00:07:06.623277,2.152998835403764,BAD_
1970-01-01 00:07:26.697804,1.9542604813664184,BAD_
1970-01-01 00:09:02.732109,4.980879998058981,BAD_
1970-01-01 00:09:15.452340,1.8755932162266618,BAD_
1970-01-01 00:11:20.352343,2.4966505725932393,BAD_
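These files use MNE's CSV annotation format (onset, duration, description). A minimal sketch of how such a file can be attached to a recording, matching the calls used later in this compare (the subject and paths are illustrative):

```python
import mne

# Load a (pre-processed) raw recording; the exact path depends on where the dataset was unpacked.
raw = mne.io.read_raw_fif("Dataset/n170/sub-001/ses-n170/eeg/sub-001_task-n170_cleaned.fif")

# Read the manually marked bad segments and append them to the raw object's annotations.
bad = mne.read_annotations("Dataset/preprocessed/sub-001_task-N170_badannotations.csv")
raw.annotations.append(bad.onset, bad.duration, bad.description)
```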
Dataset/preprocessed/sub-003_task-N170_badannotations.csv (new file, 10 lines)
@@ -0,0 +1,10 @@
onset,duration,description
1970-01-01 00:00:0.0,0,
1970-01-01 00:02:17.128157,2.947952251552806,BAD_
1970-01-01 00:02:20.960569,7.62244395380435,BAD_
1970-01-01 00:03:30.145704,5.332812500000017,BAD_
1970-01-01 00:04:40.074527,5.490147030279502,BAD_
1970-01-01 00:05:49.665420,3.3247270477484676,BAD_
1970-01-01 00:06:59.603314,11.209782608695662,BAD_
1970-01-01 00:08:12.657335,8.032341809006198,BAD_
1970-01-01 00:09:22.031561,2.3020526009315745,BAD_
@@ -0,0 +1,9 @@
onset,duration,description
1970-01-01 00:00:0.0,0,
1970-01-01 00:00:00.886042,2.5421947787267083,BAD_
1970-01-01 00:01:18.262621,2.268929541925459,BAD_
1970-01-01 00:03:27.443617,0.9647090935558822,BAD_
1970-01-01 00:04:46.825909,3.2253578707297947,BAD_
1970-01-01 00:05:52.347597,2.637423573369574,BAD_
1970-01-01 00:06:59.515191,2.4345448369564906,BAD_
1970-01-01 00:08:07.230491,1.287658918866498,BAD_
README.md (63 lines)
@@ -1,16 +1,55 @@
-## Semesterproject of the lecture "Semesterproject Signal processing and Analysis
-of human brain potentials (eeg) WS 2020/21
+## Semesterproject of the lecture "Semesterproject Signal processing and Analysis of human brain potentials (eeg)" WS 2020/21

-This repository holds the code of the semesterproject as well as the report.
-The main files are 'preprocessing_and_cleaning.py', 'erp_analysis.py' and 'decoding_tf_analyis.py'.
-The files hold:
-- preprocessing_and_cleaning.py : Holds the pre-processing pipeline of the project. By executing the file all subjects are pre-processed. Subjects 001, 003, 014 are pre-processd with manually selected pre-processing information, all other subjects are pre-processed with the given pre-processing information. Details can be found in the comments of the code.
-- erp_analysis.py : Hold the code for the erp-analysis. Computes the peak-differences and t-tests for several experimental contrasts. Details can be found in the comments of the code.
-- decoding_tf_analysis.py : Holds the code for the decoding and time-frequency analysis. Details can be found in the comments of the code.
-The folder 'utils' holds helper functions for some plots needed for the analysis and to load data, generate strings etc. and holds the code given in the lecture.
-The folder 'test' holds mostly unittests that test helper functions and one function which visually checks if N170 peaks are extracted correctly.
-For the code to work properly, the N170 dataset needs to be provided.
-When first running the analysis, it may take a while. After running it one time the data is cached, so that it can be reused if the analysis should be executed again. Be careful though, as a parameter has to be explicitly set in the code, so that the already computed data is used. This parameter is a boolean given to each analysis function which caches data.
+This repository holds the code of the semesterproject as well as the report, created by Julius Voggesberger.
+As the dataset for the project, the N170 dataset was chosen.
+As the three subjects to be manually pre-processed, subjects 001, 003 and 014 were chosen.
+The rest of the subjects were pre-processed with the provided pre-processing information.
+
+### Structure
+```
+├── Dataset: The dataset of the project as well as the manually selected bad segments are stored here.
+|   ├── n170: Store the dataset here.
+|   └── preprocessed: Bad segments are stored here.
+├── cached_data: Data that is generated in the analysis part is stored here.
+|   ├── decoding_data: Results of the classifiers.
+|   ├── erp_peaks: ERP peaks needed for the ERP analysis.
+|   └── tf_data: Time-frequency data needed for the tf-analysis.
+├── test: Contains unittests and one visual check.
+├── utils: Contains helper methods
+|   ├── ccs_eeg_semesterproject: Methods given in the lecture.
+|   ├── ccs_eeg_utils_reduced: Method for reading in BIDS provided in the lecture.
+|   ├── file_utils.py: Methods for reading in files and getting epochs.
+|   └── plot_utils.py: Methods for manually created plots.
+├── preprocessing_and_cleaning.py: The preprocessing pipeline.
+├── erp_analysis.py: The ERP analysis and computation of ERP peaks.
+├── decoding_tf_analysis.py: Decoding and time-frequency analysis.
+└── semesterproject_report_voggesberger: The report of the project.
+```
+
+### Running the project
+To run the project, Python 3.7 is required and Anaconda is recommended.\
+To ensure reproducibility, random states were used for methods which are non-deterministic.
+The random states used are either '123' or '1234'.\
+The following libraries are needed:
+- Matplotlib 3.3.3
+- MNE 0.22.0
+- MNE-Bids 0.6
+- Numpy 1.19.4
+- Scikit-Learn 0.23.2
+- Pandas 1.2.0
+- Scipy 1.5.4
+
+For the code to work, the N170 dataset needs to be provided and put into the folder 'Dataset/n170/', so that the file structure 'Dataset/n170/sub-001', etc. exists.
+The pre-processed raw objects are saved in their respective subject folder in 'Dataset/n170/'.
+When first running the analysis, it may take a while.
+After running it once, the data is cached, so that it can be reused if the analysis is executed again at a later time.
+For the cached data to be used, a boolean parameter has to be set in the respective analysis method.
+
+It may be necessary to set the parent directory 'semesterproject_lecture_eeg' as 'Sources Root' for the project if PyCharm is used as the IDE.
+
+### Parameters
+Parameters have to be changed manually in the code if different settings should be tried.
+
+### Visualisation
+The visualisation methods that were used to generate the visualisations in the report are contained in the code if they were created manually.
+If a visualisation method from MNE was used to create a figure, it may or may not appear in the code.
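The cache switch the README describes is simply the boolean passed to each analysis entry point. A sketch based on the function signatures changed later in this compare (filenames are the ones used in the repository's `__main__` blocks):

```python
from decoding_tf_analysis import decoding, time_frequency
from erp_analysis import analyze_erp

# First run: compute everything and write it to cached_data/.
decoding('N170', 'faces_vs_cars', True)
time_frequency('N170', 'face_intact_vs_all_0.1_50hz_ncf2', 'log', True)
analyze_erp(['P7', 'PO7', 'P8', 'PO8'], True)

# Later runs: pass False so the cached results are loaded instead of recomputed.
decoding('N170', 'faces_vs_cars', False)
time_frequency('N170', 'face_intact_vs_all_0.1_50hz_ncf2', 'log', False)
analyze_erp(['P7', 'PO7', 'P8', 'PO8'], False)
```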
cached_data/README.md (new file, 1 line)
@@ -0,0 +1 @@
This folder holds cached data.
cached_data/decoding_data/.gitignore (new file, vendored, 4 lines)
@@ -0,0 +1,4 @@
# Ignore everything in this directory
*
# Except this file
!.gitignore
cached_data/erp_peaks/.gitignore (new file, vendored, 4 lines)
@@ -0,0 +1,4 @@
# Ignore everything in this directory
*
# Except this file
!.gitignore
cached_data/tf_data/.gitignore (new file, vendored, 4 lines)
@@ -0,0 +1,4 @@
# Ignore everything in this directory
*
# Except this file
!.gitignore
@@ -46,6 +46,7 @@ def events_to_labels(evts, events_dict, mask=None):  # TODO Test schreiben
def permutation_test(baseline, score, n_iter):
    """
    An implementation of a permutation test for classification scores.
+
    :param baseline: The classification scores of the baseline, i.e. selection by chance
    :param score: The classification scores which are tested for significance
    :param n_iter: number of permutations
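The docstring above describes the idea; as a point of reference, a minimal, self-contained permutation test for classification scores (a sketch, not the repository's exact implementation) could look like this:

```python
import numpy as np

def permutation_test_sketch(baseline, score, n_iter, rng_seed=123):
    """Estimate how often randomly relabelled baseline/score values produce a
    mean difference at least as large as the observed one (one-sided p-value)."""
    rng = np.random.RandomState(rng_seed)
    baseline = np.asarray(baseline, dtype=float)
    score = np.asarray(score, dtype=float)
    observed = score.mean() - baseline.mean()
    pooled = np.concatenate([baseline, score])
    count = 0
    for _ in range(n_iter):
        rng.shuffle(pooled)  # random relabelling of the pooled scores
        perm_diff = pooled[len(baseline):].mean() - pooled[:len(baseline)].mean()
        if perm_diff >= observed:
            count += 1
    # Add-one smoothing so the p-value is never exactly zero.
    return (count + 1) / (n_iter + 1)
```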
@@ -111,6 +112,7 @@ def decoding(dataset, filename, compute_metric=True, mask=None):
            if times is None:
                times = epochs.times
        np.save('cached_data/decoding_data/' + filename, metric)
+        metric = np.asarray(metric)
    else:
        # Dummy time which is created according to epoch.times
        times = np.linspace(-0.09960938, 1, 1127)
@@ -119,6 +121,8 @@ def decoding(dataset, filename, compute_metric=True, mask=None):
    # Compute index of time point 0
    index = math.floor((len(metric[0]) / time_scale) * 100)
    baseline = np.array(metric[:index]).flatten()
+
+    # Plot the result
    plt.plot(np.linspace(-200, 1000, 1127), np.mean(metric, axis=0))
    plt.ylabel('Accuracy (%)')
    plt.xlabel('Time (ms)')
@@ -128,11 +132,12 @@ def decoding(dataset, filename, compute_metric=True, mask=None):
    # Compute the permutation tests
    for t in range(len(metric[0][index:])):
        score_t = np.asarray(metric[:, t + index])
-        p = permutation_test(baseline, score_t, 100)
+        p = permutation_test(baseline, score_t, 1000)
        p_values.append(p)
        if t % 50 == 0:
            print(str(t) + " Out of " + str(len(metric[0][index:])))

+    # Plot the result
    plt.plot(times[index:], p_values)
    plt.ylabel('P-Value')
    plt.xlabel('Time (ms)')
@@ -140,16 +145,19 @@ def decoding(dataset, filename, compute_metric=True, mask=None):
    plt.show()


-def create_tfr(raw, condition, freqs, n_cycles, response='induced', baseline=None):
+def create_tfr(raw, condition, freqs, n_cycles, response='induced', baseline=None, plot=False):
    """
-    Compute the time frequency representation (TFR) of data for a given condition via morlet wavelets
+    Compute the time frequency representation (TFR) of data for a given condition via Morlet wavelets

    :param raw: the data
-    :param condition: the condition for which to compute the TFR. Given as a list of tuples of the form (stimulus, condition) # TODO ambiguous use of condition
+    :param condition: the condition for which to compute the TFR. Given as a list of tuples of the form (stimulus, texture)
    :param freqs: the frequencies for which to compute the TFR
-    :param n_cycles: the number of cycles used by the morlet wavelets
+    :param n_cycles: the number of cycles used by the Morlet wavelets
-    :param response: type of expected TFR. Can be total, induced or evoked. Default is induced
+    :param response: type of expected TFR. Can be total, induced or evoked. Default is induced,
+        the others were not used for the report, only for exploration
    :param baseline: baseline used to correct the power. A tuple of the form (start, end).
-        Default is None and no baseline correction will be applid
+        Default is None and no baseline correction will be applied
+    :param plot: True if results should be plotted, else false.
    :return: The TFR or the given data for a given condition. Has type AverageTFR
    """
    epochs, _ = get_epochs(raw, condition, tmin=-0.2, tmax=1)
@@ -166,14 +174,16 @@ def create_tfr(raw, condition, freqs, n_cycles, response='induced', baseline=None):
        power_total = tfr_morlet(epochs, freqs=freqs, n_cycles=n_cycles, return_itc=False, n_jobs=4)
        power_induced = tfr_morlet(epochs.subtract_evoked(), freqs=freqs, n_cycles=n_cycles, return_itc=False, n_jobs=4)
        power = mne.combine_evoked([power_total, power_induced], weights=[1, -1])
-        # power.plot(picks='P7')
+        if plot: power.plot(picks='P7')
+    # Apply a baseline correction to the power data
    power.apply_baseline(mode='ratio', baseline=baseline)
-    # plot_oscillation_bands(power)
-    # power.plot(picks='P7')
+    if plot:
+        plot_oscillation_bands(power)
+        power.plot(picks='P7')
    return power


-def time_frequency(dataset, filename, compute_tfr=True):
+def time_frequency(dataset, filename, scaling='lin', compute_tfr=True):
    """
    Runs time frequency analysis

@@ -181,14 +191,14 @@ def time_frequency(dataset, filename, compute_tfr=True):
    :param filename: Filename of either the file from which the TFRs will be loaded
        or to which they will be saved
    :param compute_tfr: If True the TFRs will be created, else the TFRs will be loaded from a precomputed file
+    :param scaling: default 'lin' for linear scaling, else can be 'log' for logarithmic scaling
    """
    # Parameters
-    # Frequency space (from, to, steps) -> Control frequency resolution : Between num=50-80 good for 1-50Hz
-    freqs = np.logspace(*np.log10([0.5, 50]), num=50)
-    # Number of cycles -> Controls time resolution ? At ~freqs/2 good for high frequency resolution
-    n_cycles = freqs / 2  # 1 for high time resolution & freq smoothing, freqs/2 for high freq resolution & time smooth
-    # Baseline -> Should not go post-stimulus, i.e. > 0 -> Best ist pre-stimulus (e.g. -400 to -200ms)
-    baseline = [-0.5, 0]
+    if scaling == 'lin':
+        freqs = np.linspace(0.1, 50, num=50)  # Use this for linear space scaling
+    else:
+        freqs = np.logspace(*np.log10([0.1, 50]), num=50)
+    n_cycles = freqs / 2
    cond1 = []
    cond2 = []
    times = None
@@ -206,6 +216,8 @@ def time_frequency(dataset, filename, compute_tfr=True):
            raw.set_montage('standard_1020', match_case=False)

            # Create the two conditions we want to compare
+            # IMPORTANT: If different conditions should be compared you have to change them here, by altering the second
+            # argument passed to create_tfr
            power_cond1 = create_tfr(raw, [('face', 'intact')], freqs, n_cycles, 'induced', (-0.2, 0))
            print(' CONDITION 1 LOADED')
            cond1.append(power_cond1)
@@ -216,24 +228,32 @@ def time_frequency(dataset, filename, compute_tfr=True):
            cond2.append(power_cond2)
            print(' DONE')

+        # Save the data so we can access the results more easily
        np.save('cached_data/tf_data/' + filename + '_cond1', cond1)
        np.save('cached_data/tf_data/' + filename + '_cond2', cond2)
    else:
+        # If the data should not be recomputed, load the given filename
        cond1 = np.load('cached_data/tf_data/' + filename + '_cond1.npy', allow_pickle=True).tolist()
        cond2 = np.load('cached_data/tf_data/' + filename + '_cond2.npy', allow_pickle=True).tolist()
        if times is None:
            times = cond1[0].times

+    # Some plots
+    mne.grand_average(cond1).plot(picks=['P7'], vmin=-3, vmax=3, title='Grand Average P7')
    mne.grand_average(cond2).plot(picks=['P7'], vmin=-3, vmax=3, title='Grand Average P7')
    plot_oscillation_bands(mne.grand_average(cond1))
    plot_oscillation_bands(mne.grand_average(cond2))

+    # Compute the cluster permutation
    F, clusters, cluster_p_values, h0 = mne.stats.permutation_cluster_test(
        [mne.grand_average(cond1).data, mne.grand_average(cond2).data], n_jobs=4, verbose='INFO',
        seed=123)
-    plot_tf_cluster(F, clusters, cluster_p_values, freqs, times)
+    plot_tf_cluster(F, clusters, cluster_p_values, freqs, times, scaling)


if __name__ == '__main__':
    mne.set_log_level(verbose=VERBOSE_LEVEL)
    ds = 'N170'
-    # decoding(ds, 'faces_vs_cars_100iters', False)
-    time_frequency(ds, 'face_intact_vs_all_0.1_50hz_ncf2', False)
+    decoding(ds, 'faces_vs_cars', True)
+    time_frequency(ds, 'face_intact_vs_all_0.1_50hz_ncf2', 'log', True)
@@ -91,13 +91,20 @@ def create_peak_difference_feature(df, max_subj=40):
    return peak_diff_df


-def analyze_erp(channels):
+def analyze_erp(channels, precompute=True):
    """
-    Execute several statistical tests for different hypothesis, to analyze ERPs
+    Execute several statistical tests for different hypothesis, to analyse ERPs
    :param channels: The channels for which the tests are executed
+    :param precompute: If true, the peak-difference data will be computed. Else it will be loaded from a precomputed file,
+        if it exists. This should only be set 'False' if the method was already executed once!
    """
+    if precompute:
+        # Precompute the erp peaks
+        precompute_erp_df('N170')
+
    for c in channels:
        print("CHANNEL: " + c)
+        # Load the erp peak data and create the features for the t-tests
        erp_df = pd.read_csv('cached_data/erp_peaks/erp_peaks_' + c + '.csv', index_col=0)
        feature_df = create_peak_difference_feature(erp_df)
        # 1. H_a : There is a difference between the N170 peak of recognizing faces and cars
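The hypotheses listed in this hunk are tested on the peak-difference features; the tests themselves lie outside the shown lines, but a paired t-test on such per-subject features would typically look like the sketch below (the column names are illustrative assumptions, only the CSV path comes from the repository):

```python
import pandas as pd
from scipy import stats

# Peak data cached by the ERP analysis for one channel.
erp_df = pd.read_csv('cached_data/erp_peaks/erp_peaks_P7.csv', index_col=0)

# Hypothetical columns: one N170 peak value per subject and condition.
faces = erp_df['face_intact']  # assumed column name
cars = erp_df['car_intact']    # assumed column name

# Paired t-test across subjects: is the N170 peak different for faces vs. cars?
t_stat, p_value = stats.ttest_rel(faces, cars)
print("t = {:.3f}, p = {:.4f}".format(t_stat, p_value))
```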
@@ -130,5 +137,4 @@ def analyze_erp(channels):

if __name__ == '__main__':
    mne.set_log_level(verbose=VERBOSE_LEVEL)
-    # precompute_erp_df('N170')
-    analyze_erp(['P7', 'PO7', 'P8', 'PO8'])
+    analyze_erp(['P7', 'PO7', 'P8', 'PO8'], True)
plotting.py (deleted, 110 lines)
@@ -1,110 +0,0 @@
import mne
from mne.preprocessing import create_eog_epochs

from mne_bids import BIDSPath, read_raw_bids
from utils.ccs_eeg_utils import read_annotations_core
from utils.file_utils import get_epochs


def load_unprocessed_subject(subject, dataset):
    """
    Load the eeg data of a subject
    :param subject: The subject of which the data will be loaded
    :param dataset: The dataset which will be loaded
    :return: the subject data
    """
    bids_path = BIDSPath(subject=subject, task=dataset, session=dataset, datatype='eeg', suffix='eeg',
                         root='Dataset\\' + dataset)
    raw = read_raw_bids(bids_path)
    # Add annotations
    read_annotations_core(bids_path, raw)
    return raw


def filter_data(raw):
    """
    Filter the data of a single subject with a bandpass filter.
    The lower bound ist 0.5Hz to compensate the slow drifts.
    The upper bound is 50Hz to compensate the high frequencies, including the power line spike at 60Hz
    :param raw: The data to be filtered
    :return: The filtered data
    """
    raw.filter(0.5, 48, fir_design='firwin')
    return raw


def plot_filter_data():
    ds = 'N170'
    for subj in ['014']:
        data = load_unprocessed_subject(subj, ds)
        data.load_data()
        # data.plot(n_channels=len(data.ch_names), block=True, scalings=40e-6)
        filter_data(data)
        fig = mne.viz.plot_raw_psd(data, fmax=80, average=True, show=False)
        fig.savefig("plots/frequency_filtered_subj_" + subj + "_48Hz.png")
        # data.plot(n_channels=len(data.ch_names), block=True, scalings=40e-6)


def plot_filter_data_epoched(subj):
    ds = 'N170'
    data = load_unprocessed_subject(subj, ds)
    data.load_data()
    filter_data(data)
    get_epochs(data)[0].average().plot()


def plot_cleaning():
    ds = 'N170'
    for subj in ['014']:
        data = load_unprocessed_subject(subj, ds)
        data.load_data()
        filter_data(data)
        folder = "Dataset\\" + ds + "\\sub-" + subj + "\\ses-" + ds + "\\eeg\\"
        filepath = folder + "sub-" + subj + "_task-" + ds
        print(filepath)
        ann = mne.read_annotations(filepath + "_" + "badannotations.csv")
        data.annotations.append(ann.onset, ann.duration, ann.description)
        data.plot(n_channels=len(data.ch_names), block=True, scalings=40e-6)


def plot_ica():
    ds = 'N170'
    for subj in ['014']:
        data = load_unprocessed_subject(subj, ds)
        folder = "Dataset\\" + ds + "\\sub-" + subj + "\\ses-" + ds + "\\eeg\\"
        filepath = folder + "sub-" + subj + "_task-" + ds
        ann = mne.read_annotations(filepath + "_" + "badannotations.csv")
        data.annotations.append(ann.onset, ann.duration, ann.description)
        data.load_data()
        data.set_channel_types({'HEOG_left': 'eog', 'HEOG_right': 'eog', 'VEOG_lower': 'eog'})
        data.set_montage('standard_1020', match_case=False)
        ica_raw = data.copy()
        ica_raw.filter(l_freq=1, h_freq=None)

        # Then run ICA
        ica = mne.preprocessing.ICA(method="fastica",
                                    random_state=123)  # Use a random state for reproducable results #TODO Old Random state 123 or new one?
        ica.fit(ica_raw, verbose=True)
        ica_raw.load_data()
        # ica.plot_components(inst=ica_raw, ch_type='eeg', contours=0, topomap_args={'extrapolate': 'head'},
        #                     psd_args={'fmin': 0, 'fmax': 80})
        ica.plot_sources(ica_raw)
        ica.plot_properties(inst=ica_raw, dB=False, topomap_args={'extrapolate': 'head', 'contours': 0},
                            psd_args={'fmin': 3, 'fmax': 50}, picks=['eeg'])


def plot_joint_eog_plots():
    ds = 'N170'
    for subj in ['014']:
        data = load_unprocessed_subject(subj, ds)
        data.load_data()
        data.set_channel_types({'HEOG_left': 'eog', 'HEOG_right': 'eog', 'VEOG_lower': 'eog'})
        data.set_montage('standard_1020', match_case=False)
        eog_evoked = create_eog_epochs(data).average()
        eog_evoked.apply_baseline(baseline=(None, -0.2))
        eog_evoked.plot_joint()


# plot_ica()
# plot_joint_eog_plots()
plot_filter_data_epoched('003')
@@ -3,7 +3,7 @@ import mne

from mne_bids import (BIDSPath, read_raw_bids)
from utils.ccs_eeg_semesterproject import load_precomputed_badData, load_precomputed_ica
-from utils.ccs_eeg_utils import read_annotations_core
+from utils.ccs_eeg_utils_reduced import read_annotations_core


def load_subject(subject, dataset):
@@ -52,7 +52,7 @@ def filter_data(raw):
    """
    Filter the data of a single subject with a bandpass filter.
    The lower bound ist 0.5Hz to compensate the slow drifts.
-    The upper bound is 50Hz to compensate the high frequencies, including the power line spike at 60Hz
+    The upper bound is 48Hz to compensate the high frequencies, including the power line spike at 60Hz
    :param raw: The data to be filtered
    :return: The filtered data
    """
@@ -70,8 +70,7 @@ def clean_data(raw, subject, dataset, cleaned=False):
    :return: the bad channels
    """
    channels = None
-    folder = "Dataset\\" + dataset + "\\sub-" + subject + "\\ses-" + dataset + "\\eeg\\"
-    filepath = folder + "sub-" + subject + "_task-" + dataset
+    filepath = "Dataset/preprocessed/sub-" + subject + "_task-" + dataset + "_badannotations.csv"

    # If nothing was marked yet, plot the data to mark bad segments
    if not cleaned:
@@ -80,15 +79,15 @@ def clean_data(raw, subject, dataset, cleaned=False):
        bad_idx = [idx for (idx, annot) in enumerate(raw.annotations) if annot['description'] == "BAD_"]
        # If bad intervals were found save
        if bad_idx:
-            raw.annotations[bad_idx].save(filepath + "_badannotations.csv")
+            raw.annotations[bad_idx].save(filepath)

-    if os.path.isfile(filepath + "_badannotations.csv"):
-        annotations = mne.read_annotations(filepath + "_badannotations.csv")
+    if os.path.isfile(filepath):
+        annotations = mne.read_annotations(filepath)
        raw.annotations.append(annotations.onset, annotations.duration, annotations.description)

    # Set the bad channels for each subject
    if subject == '001':
-        channels = ['F8']  # Maybe also FP2?
+        channels = ['F8']
    elif subject == '003':
        channels = []
    elif subject == '014':
@@ -119,8 +118,6 @@ def run_ica(raw, dataset, subject, search='manual'):

    if search == 'manual':
        ica_raw.load_data()
-        # ica.plot_components(inst=ica_raw, ch_type='eeg', contours=0, topomap_args={'extrapolate': 'head'},
-        #                     psd_args={'fmin': 0, 'fmax': 80})
        ica.plot_properties(inst=ica_raw, dB=False, topomap_args={'extrapolate': 'head', 'contours': 0},
                            psd_args={'fmin': 0, 'fmax': 50}, picks=['eeg'])
        ica.plot_sources(ica_raw)
@@ -137,11 +134,11 @@ def run_ica(raw, dataset, subject, search='manual'):
        exclude = [0, 2]  # Through eog: 0, 2
    elif subj == '014':
        exclude = [0, 1, 9]  # Through eog: 0,1
-    # ica.plot_overlay(ica_raw, exclude=exclude)  # Plot differences through exclude
-    # ica.exclude = exclude
    # Apply ica to the raw object
    raw.load_data()
-    # ica.plot_overlay(ica_raw, exclude=exclude)
+    # ica.plot_overlay(ica_raw,
+    #                  title="Signals before (red) applying ICA and after (black) applying ICA on subject 003",
+    #                  exclude=exclude)
    raw = ica.apply(raw, exclude=exclude)
    # Lastly save the ica to a file
    folder = "Dataset\\" + dataset + "\\sub-" + subject + "\\ses-" + dataset + "\\eeg\\"
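The excluded components above were chosen manually by inspecting the ICA sources together with the EOG channels ("Through eog: ..."). If that step were to be automated, MNE's `find_bads_eog` could be used instead; a sketch under the assumption that `raw` already has its EOG channels typed as 'eog' (not part of this repository):

```python
import mne

# Fit ICA on a 1 Hz high-pass filtered copy, as in the repository's preprocessing.
ica = mne.preprocessing.ICA(method="fastica", random_state=123)
ica.fit(raw.copy().filter(l_freq=1, h_freq=None))

# Components that correlate strongly with the EOG channels are candidates for exclusion.
eog_indices, eog_scores = ica.find_bads_eog(raw)
print("EOG-related components:", eog_indices)

# Remove the flagged components from the data.
raw = ica.apply(raw, exclude=eog_indices)
```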
semesterproject_report_voggesberger.pdf (new binary file, not shown)
@@ -1,4 +1,6 @@
-from utils.file_utils import load_preprocessed_data, get_epochs
+import mne
+
+from utils.file_utils import get_epochs


def check_peaks():
@@ -6,13 +8,16 @@ def check_peaks():
    Sanity check for the "get_peaks" method
    """
    import matplotlib.pyplot as plt
-    raw = load_preprocessed_data('002', 'N170')
+    subject = '001'
+    folder = "../Dataset/n170/sub-" + subject + "/ses-n170/eeg/"
+    filepath = folder + "sub-" + subject + "_task-n170"
+    raw = mne.io.read_raw_fif(filepath + "_cleaned.fif")
    epochs, _ = get_epochs(raw, [('face', 'intact')], picks='P7')
    ch, latency, peak = epochs.average().get_peak(tmin=0.13, tmax=0.2, mode='neg', return_amplitude=True)
    import numpy as np
    plt.plot(epochs.times, np.squeeze(epochs.average().data.T))
-    plt.vlines([0.13, 0.2], -0.00001, 0.00001, colors='r', linestyles='dotted')
-    plt.vlines(latency, -0.00001, 0.00001, colors='gray', linestyles='dotted')
+    plt.vlines([0.13, 0.2], -0.00001, 0.00001, colors='gray', linestyles='dotted')
+    plt.vlines(latency, -0.00001, 0.00001, colors='r', linestyles='dotted')
    plt.show()
6 binary files not shown.
@@ -1,275 +0,0 @@
from osfclient import cli
import os
from mne_bids.read import _from_tsv, _drop
from mne_bids import (BIDSPath, read_raw_bids)
import mne
import numpy as np

import scipy.ndimage
import scipy.signal
from numpy import sin as sin


def read_annotations_core(bids_path, raw):
    tsv = os.path.join(bids_path.directory, bids_path.update(suffix="events", extension=".tsv").basename)
    _handle_events_reading_core(tsv, raw)


def _handle_events_reading_core(events_fname, raw):
    """Read associated events.tsv and populate raw.
    Handle onset, duration, and description of each event.
    """
    events_dict = _from_tsv(events_fname)

    if ('value' in events_dict) and ('trial_type' in events_dict):
        events_dict = _drop(events_dict, 'n/a', 'trial_type')
        events_dict = _drop(events_dict, 'n/a', 'value')

        descriptions = np.asarray([a + ':' + b for a, b in zip(events_dict["trial_type"], events_dict["value"])],
                                  dtype=str)

    # Get the descriptions of the events
    elif 'trial_type' in events_dict:

        # Drop events unrelated to a trial type
        events_dict = _drop(events_dict, 'n/a', 'trial_type')
        descriptions = np.asarray(events_dict['trial_type'], dtype=str)

    # If we don't have a proper description of the events, perhaps we have
    # at least an event value?
    elif 'value' in events_dict:
        # Drop events unrelated to value
        events_dict = _drop(events_dict, 'n/a', 'value')
        descriptions = np.asarray(events_dict['value'], dtype=str)
    # Worst case, we go with 'n/a' for all events
    else:
        descriptions = 'n/a'
    # Deal with "n/a" strings before converting to float
    ons = [np.nan if on == 'n/a' else on for on in events_dict['onset']]
    dus = [0 if du == 'n/a' else du for du in events_dict['duration']]
    onsets = np.asarray(ons, dtype=float)
    durations = np.asarray(dus, dtype=float)
    # Keep only events where onset is known
    good_events_idx = ~np.isnan(onsets)
    onsets = onsets[good_events_idx]
    durations = durations[good_events_idx]
    descriptions = descriptions[good_events_idx]
    del good_events_idx
    # Add Events to raw as annotations
    annot_from_events = mne.Annotations(onset=onsets,
                                        duration=durations,
                                        description=descriptions,
                                        orig_time=None)
    raw.set_annotations(annot_from_events)
    return raw


# taken from the osfclient tutorial https://github.com/ubcbraincircuits/osfclienttutorial
class args:
    def __init__(self, project, username=None, update=True, force=False, destination=None, source=None, recursive=False,
                 target=None, output=None, remote=None, local=None):
        self.project = project
        self.username = username
        self.update = update  # applies to upload, clone, and fetch
        self.force = force  # applies to fetch and upload
        # upload arguments:
        self.destination = destination
        self.source = source
        self.recursive = recursive
        # remove argument:
        self.target = target
        # clone argument:
        self.output = output
        # fetch arguments:
        self.remote = remote
        self.local = local


def download_erpcore(task="MMN", subject=1, localpath="local/bids/"):
    project = "9f5w7"  # after recent change they put everything as "sessions" in one big BIDS file

    arguments = args(project)  # project ID
    for extension in ["channels.tsv", "events.tsv", "eeg.fdt", "eeg.json", "eeg.set"]:
        targetpath = '/sub-{:03d}/ses-{}/eeg/sub-{:03d}_ses-{}_task-{}_{}'.format(subject, task, subject, task, task,
                                                                                  extension)
        print("Downloading {}".format(targetpath))
        arguments.remote = "\\ERP_CORE_BIDS_Raw_Files/" + targetpath
        arguments.local = localpath + targetpath
        cli.fetch(arguments)


def simulate_ICA(dims=4):
    A = [[-0.3, 0.2], [.2, 0.1]]
    sample_rate = 100.0
    nsamples = 1000
    t = np.arange(nsamples) / sample_rate

    s = []

    # boxcars
    s.append(np.mod(np.array(range(0, nsamples)), 250) > 125)
    # a triangle staircase + trend
    s.append((np.mod(np.array(range(0, nsamples)), 100) + np.array(range(0, nsamples)) * 0.05) / 100)
    if dims == 4:
        A = np.array([[.7, 0.3, 0.2, -0.5], [0.2, -0.5, -0.2, 0.3], [-.3, 0.1, 0, 0.2], [-0.5, -0.3, -0.2, 0.8]])

        # some sinosoids
        s.append(np.cos(2 * np.pi * 0.5 * t) + 0.2 * np.sin(2 * np.pi * 2.5 * t + 0.1) + 0.2 * np.sin(
            2 * np.pi * 15.3 * t) + 0.1 * np.sin(2 * np.pi * 16.7 * t + 0.1) + 0.1 * np.sin(2 * np.pi * 23.45 * t + .8))
        # uniform noise
        s.append(0.2 * np.random.rand(nsamples))
    x = np.matmul(A, np.array(s))
    return x


def spline_matrix(x, knots):
    # bah, spline-matrices are a pain to implement.
    # But package "patsy" with function "bs" crashed my notebook...
    # Anyway, knots define where the spline should be anchored. The default should work
    # X defines where the spline set should be evaluated.
    # e.g. call using: spline_matrix(np.linspace(0,0.95,num=100))
    import scipy.interpolate as si

    x_tup = si.splrep(knots, knots, k=3)
    nknots = len(x_tup[0])
    x_i = np.empty((len(x), nknots - 4))
    for i in range(nknots - 4):
        vec = np.zeros(nknots)
        vec[i] = 1.0
        x_list = list(x_tup)
        x_list[1] = vec.tolist()
        x_i[:, i] = si.splev(x, x_list)
    return x_i


def simulate_TF(signal=1, noise=True):
    # signal can be 1 (image), 2(chirp) or 3 (steps)
    import imageio

    if signal == 2:
        im = imageio.imread('ex9_tf.png')

        im = im[0:60, :, 3] - im[0:60, :, 1]
        # im = scipy.ndimage.zoom(im,[1,1])
        im = np.flip(im, axis=0)

        # plt.imshow(im,origin='lower')

        # sig = (scipy.fft.irfft(im.T,axis=1))

        nov = 10;
        im.shape[0] * 0.5
        nperseg = 50;
        im.shape[0] - 1
        t, sig = scipy.signal.istft(im, fs=500, noverlap=nov, nperseg=nperseg)
        sig = sig / 300  # normalize
    elif signal == 3:
        sig = scipy.signal.chirp(t=np.arange(0, 10, 1 / 500), f0=1, f1=100, t1=2, method='linear', phi=90)
    elif signal == 1:

        x = np.arange(0, 2, 1 / 500)
        sig_steps = np.concatenate([1.0 * sin(2 * np.pi * x * 50), 1.2 * sin(2 * np.pi * x * 55 + np.pi / 2),
                                    0.8 * sin(2 * np.pi * x * 125 + np.pi),
                                    1.0 * sin(2 * np.pi * x * 120 + 3 * np.pi / 2)])

        sig = sig_steps
    if noise:
        sig = sig + 0.1 * np.std(sig) * np.random.randn(sig.shape[0])

    return sig


def get_TF_dataset(subject_id='002', bids_root="../local/bids"):
    bids_path = BIDSPath(subject=subject_id, task="P3", session="P3",
                         datatype='eeg', suffix='eeg',
                         root=bids_root)

    raw = read_raw_bids(bids_path)
    read_annotations_core(bids_path, raw)
    # raw.pick_channels(["Cz"])#["Pz","Fz","Cz"])
    raw.load_data()
    raw.set_montage('standard_1020', match_case=False)

    evts, evts_dict = mne.events_from_annotations(raw)
    wanted_keys = [e for e in evts_dict.keys() if "response" in e]
    evts_dict_stim = dict((k, evts_dict[k]) for k in wanted_keys if k in evts_dict)
    epochs = mne.Epochs(raw, evts, evts_dict_stim, tmin=-1, tmax=2)
    return epochs


def get_classification_dataset(subject=1, typeInt=4):
    # TypeInt:
    # Task 1 (open and close left or right fist)
    # Task 2 (imagine opening and closing left or right fist)
    # Task 3 (open and close both fists or both feet)
    # Task 4 (imagine opening and closing both fists or both feet)
    assert (typeInt >= 1)
    assert (typeInt <= 4)
    from mne.io import concatenate_raws, read_raw_edf
    from mne.datasets import eegbci
    tmin, tmax = -1., 4.
    runs = [3, 7, 11]
    runs = [r + typeInt - 1 for r in runs]
    print("loading subject {} with runs {}".format(subject, runs))
    if typeInt <= 1:
        event_id = dict(left=2, right=3)
    else:
        event_id = dict(hands=2, feet=3)

    raw_fnames = eegbci.load_data(subject, runs)
    raws = [read_raw_edf(f, preload=True) for f in raw_fnames]
    raw = concatenate_raws(raws)

    raw.filter(7., 30., fir_design='firwin', skip_by_annotation='edge')

    eegbci.standardize(raw)  # set channel names
    montage = mne.channels.make_standard_montage('standard_1005')
    raw.set_montage(montage)
    raw.rename_channels(lambda x: x.strip('.'))
    events, _ = mne.events_from_annotations(raw, event_id=dict(T1=2, T2=3))

    picks = mne.pick_types(raw.info, meg=False, eeg=True, stim=False, eog=False,
                           exclude='bads')

    # Read epochs (train will be done only between 1 and 2s)
    # Testing will be done with a running classifier
    epochs = mne.Epochs(raw, events, event_id, tmin, tmax, proj=True, picks=picks,
                        baseline=None, preload=True)
    return (epochs)


def ex8_simulateData(width=40, n_subjects=15, signal_mean=100, noise_between=30, noise_within=10, smooth_sd=4,
                     rng_seed=43):
    # adapted and extended from https://mne.tools/dev/auto_tutorials/discussions/plot_background_statistics.html#sphx-glr-auto-tutorials-discussions-plot-background-statistics-py
    rng = np.random.RandomState(rng_seed)
    # For each "subject", make a smoothed noisy signal with a centered peak

    X = noise_within * rng.randn(n_subjects, width, width)
    # Add three signals
    X[:, width // 6 * 2, width // 6 * 2] -= signal_mean / 3 * 3 + rng.randn(n_subjects) * noise_between
    X[:, width // 6 * 4, width // 6 * 4] += signal_mean / 3 * 2 + rng.randn(n_subjects) * noise_between
    X[:, width // 6 * 5, width // 6 * 5] += signal_mean / 3 * 2 + rng.randn(n_subjects) * noise_between
    # Spatially smooth with a 2D Gaussian kernel
    size = width // 2 - 1
    gaussian = np.exp(-(np.arange(-size, size + 1) ** 2 / float(smooth_sd ** 2)))
    for si in range(X.shape[0]):
        for ri in range(X.shape[1]):
            X[si, ri, :] = np.convolve(X[si, ri, :], gaussian, 'same')
        for ci in range(X.shape[2]):
            X[si, :, ci] = np.convolve(X[si, :, ci], gaussian, 'same')
    # X += 10 * rng.randn(n_subjects, width, width)
    return X


def stc_plot2img(h, title="SourceEstimate", closeAfterwards=False, crop=True):
    h.add_text(0.1, 0.9, title, 'title', font_size=16)
    screenshot = h.screenshot()
    if closeAfterwards:
        h.close()

    if crop:
        nonwhite_pix = (screenshot != 255).any(-1)
        nonwhite_row = nonwhite_pix.any(1)
        nonwhite_col = nonwhite_pix.any(0)
        screenshot = screenshot[nonwhite_row][:, nonwhite_col]
    return screenshot
utils/ccs_eeg_utils_reduced.py (new file, 62 lines)
@@ -0,0 +1,62 @@
import os
from mne_bids.read import _from_tsv, _drop
import mne
import numpy as np

"""
The ccs_eeg_utils.py file from the lecture, but reduced to the read_annotations_core method needed for the project
"""


def read_annotations_core(bids_path, raw):
    tsv = os.path.join(bids_path.directory, bids_path.update(suffix="events", extension=".tsv").basename)
    _handle_events_reading_core(tsv, raw)


def _handle_events_reading_core(events_fname, raw):
    """Read associated events.tsv and populate raw.
    Handle onset, duration, and description of each event.
    """
    events_dict = _from_tsv(events_fname)

    if ('value' in events_dict) and ('trial_type' in events_dict):
        events_dict = _drop(events_dict, 'n/a', 'trial_type')
        events_dict = _drop(events_dict, 'n/a', 'value')

        descriptions = np.asarray([a + ':' + b for a, b in zip(events_dict["trial_type"], events_dict["value"])],
                                  dtype=str)

    # Get the descriptions of the events
    elif 'trial_type' in events_dict:

        # Drop events unrelated to a trial type
        events_dict = _drop(events_dict, 'n/a', 'trial_type')
        descriptions = np.asarray(events_dict['trial_type'], dtype=str)

    # If we don't have a proper description of the events, perhaps we have
    # at least an event value?
    elif 'value' in events_dict:
        # Drop events unrelated to value
        events_dict = _drop(events_dict, 'n/a', 'value')
        descriptions = np.asarray(events_dict['value'], dtype=str)
    # Worst case, we go with 'n/a' for all events
    else:
        descriptions = 'n/a'
    # Deal with "n/a" strings before converting to float
    ons = [np.nan if on == 'n/a' else on for on in events_dict['onset']]
    dus = [0 if du == 'n/a' else du for du in events_dict['duration']]
    onsets = np.asarray(ons, dtype=float)
    durations = np.asarray(dus, dtype=float)
    # Keep only events where onset is known
    good_events_idx = ~np.isnan(onsets)
    onsets = onsets[good_events_idx]
    durations = durations[good_events_idx]
    descriptions = descriptions[good_events_idx]
    del good_events_idx
    # Add Events to raw as annotations
    annot_from_events = mne.Annotations(onset=onsets,
                                        duration=durations,
                                        description=descriptions,
                                        orig_time=None)
    raw.set_annotations(annot_from_events)
    return raw
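For reference, usage of this reduced helper mirrors what the repository's loading code does elsewhere in this compare; a sketch with an illustrative subject and root path:

```python
from mne_bids import BIDSPath, read_raw_bids
from utils.ccs_eeg_utils_reduced import read_annotations_core

bids_path = BIDSPath(subject='001', task='N170', session='N170', datatype='eeg', suffix='eeg',
                     root='Dataset/n170')
raw = read_raw_bids(bids_path)
# Attach the events from the BIDS events.tsv to the raw object as annotations.
read_annotations_core(bids_path, raw)
```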
@@ -18,13 +18,13 @@ def load_bad_annotations(filepath, fileending="badSegments.csv"):

def load_preprocessed_data(subject, dataset):
    """
-    Load the raw object as well as the annotations of the preprocessed file
+    Load the raw object of the preprocessed file
    :param subject: The subject, for which we want to load the raw object
    :param dataset: The currently viewed dataset
    :param selected_subjects: The manually preprocessed subjects
    :return: The raw object
    """
-    folder = "Dataset\\" + dataset + "\\sub-" + subject + "\\ses-" + dataset + "\\eeg\\"
+    folder = "Dataset/" + dataset + "/sub-" + subject + "/ses-" + dataset + "/eeg/"
    filepath = folder + "sub-" + subject + "_task-" + dataset
    raw = mne.io.read_raw_fif(filepath + "_cleaned.fif")
    return raw
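Loading a pre-processed subject in the analysis scripts then reduces to a call like the following (a sketch using the condition tuple seen elsewhere in this compare):

```python
from utils.file_utils import load_preprocessed_data, get_epochs

raw = load_preprocessed_data('001', 'N170')
epochs, _ = get_epochs(raw, [('face', 'intact')], picks='P7')
```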
@@ -56,15 +56,17 @@ def plot_grand_average(dataset):
                     linestyles=['solid', 'solid', 'dotted', 'dotted'])


-def plot_tf_cluster(F, clusters, cluster_p_values, freqs, times):
+def plot_tf_cluster(F, clusters, cluster_p_values, freqs, times, scaling='lin'):
    """
-    Plot teh F-Statistic values of permutation clusters with p-values <= 0.05 in color and > 0.05 in grey.
+    Plot the F-Statistic values of permutation clusters with p-values <= 0.05 in color and > 0.05 in grey.
+    Currently only works well for the linear scaling. For the logarithmic scaling a different x-axis has to be chosen

    :param F: F-Statistics of the permutation clusters
    :param clusters: all permutation clusters
    :param cluster_p_values: p-values of the clusters
    :param freqs: frequency domain
    :param times: time domain
+    :param scaling: default 'lin' for linear scaling, else can be 'log' for logarithmic scaling
    """
    good_c = np.nan * np.ones_like(F)
    for clu, p_val in zip(clusters, cluster_p_values):
@@ -74,16 +76,33 @@ def plot_tf_cluster(F, clusters, cluster_p_values, freqs, times):
    bbox = [times[0], times[-1], freqs[0], freqs[-1]]
    plt.imshow(F, aspect='auto', origin='lower', cmap=cm.gray, extent=bbox, interpolation='None')
    a = plt.imshow(good_c, cmap=cm.RdBu_r, aspect='auto', origin='lower', extent=bbox, interpolation='None')
+
+    if scaling == 'log':
+        ticks = [1, 4, 8, 12, 14, 18, 22, 26, 30, 34, 38, 42, 46, 50]
+        labels = [round(freqs[i], 2) for i in range(len(freqs)) if i + 1 in ticks]
+        plt.yticks(ticks, labels)
+
    plt.colorbar(a)
    plt.xlabel('Time (s)')
    plt.ylabel('Frequency (Hz)')
    plt.show()


def plot_oscillation_bands(condition):
+    """
+    Plot the oscillation bands for a given condition in the time from 130ms to 200ms
+    :param condition: the condition to plot the oscillation bands for
+    """
    fig, axis = plt.subplots(1, 5, figsize=(25, 5))
-    condition.plot_topomap(baseline=(-0.2, 0), fmin=0, fmax=4, title='Delta', axes=axis[0], show=False, vmin=0, vmax=1.5, tmin=0, tmax=1)
-    condition.plot_topomap(baseline=(-0.2, 0), fmin=4, fmax=8, title='Theta', axes=axis[1], show=False, vmin=0, vmax=0.7, tmin=0, tmax=1)
-    condition.plot_topomap(baseline=(-0.2, 0), fmin=8, fmax=12, title='Alpha', axes=axis[2], show=False, vmin=-0.15, vmax=0.2, tmin=0, tmax=1)
-    condition.plot_topomap(baseline=(-0.2, 0), fmin=13, fmax=30, title='Beta', axes=axis[3], show=False, vmin=-0.18, vmax=0.2, tmin=0, tmax=1)
-    condition.plot_topomap(baseline=(-0.2, 0), fmin=30, fmax=45, title='Gamma', axes=axis[4], vmin=0, vmax=0.2, tmin=0, tmax=1)
+    condition.plot_topomap(baseline=(-0.2, 0), fmin=0, fmax=4, title='Delta', axes=axis[0], show=False, vmin=0,
+                           vmax=1.5, tmin=0.13, tmax=0.2)
+    condition.plot_topomap(baseline=(-0.2, 0), fmin=4, fmax=8, title='Theta', axes=axis[1], show=False, vmin=0,
+                           vmax=0.7, tmin=0.13, tmax=0.2)
+    condition.plot_topomap(baseline=(-0.2, 0), fmin=8, fmax=12, title='Alpha', axes=axis[2], show=False, vmin=-0.25,
+                           vmax=0.2, tmin=0.13, tmax=0.2)
+    condition.plot_topomap(baseline=(-0.2, 0), fmin=13, fmax=30, title='Beta', axes=axis[3], show=False, vmin=-0.21,
+                           vmax=0.2, tmin=0.13, tmax=0.2)
+    condition.plot_topomap(baseline=(-0.2, 0), fmin=30, fmax=45, title='Gamma', axes=axis[4], vmin=-0.05, vmax=0.2,
+                           tmin=0.13, tmax=0.2)