Towards user-specific hearing devices based on brain waves: detecting whether the user is actively listening

Arnout Roebben

What did you say? I didn't quite catch that. One in ten Belgians suffers from hearing loss and utters this sentence daily. What if we could design smart hearing devices that use brain waves to switch on at exactly the right moment? This becomes reality by detecting when someone is actively listening. Telepathy, you say? No, technology!

 

Detecting whether the user is actively listening

Detecting when someone is actively listening opens the door to a wide range of applications. For instance, neuro-steered hearing devices can be designed that use brain waves to switch on at the moments when the user is actively listening. We therefore investigate how to detect when this user is actively listening. In addition, we study methods to adapt this technology to each individual user, so as to reduce the design time.

Of course, people do not possess telepathic gifts for detecting when someone is actively listening. We therefore rely on technological developments around 'electrical images' of the human brain, called electroencephalogram signals.

 

The electroencephalogram (EEG)

The human brain consists of 86 billion fundamental cells called neurons. These cells are interconnected and communicate via electrical signals. When large groups of neurons exhibit synchronous electrical activity, sensors attached to the scalp can measure that activity. In other words, these sensors provide an 'image' of the electrical activity of the brain. Such an 'image' is an electroencephalogram (EEG).

[Figure: Sensors attached to the scalp produce an 'image' of brain activity, called the electroencephalogram.]

How can we use this EEG to detect when the user is actively listening? Well, the EEG has the remarkable property that it contains a representation of the speech signals. Specifically, the EEG tracks the slow variations that envelop the speech signal, the speech envelope (see the figure below). Tracking this speech envelope is thus the brain's way of encoding speakers. Moreover, the EEG tracks this speech envelope more strongly when the user listens more attentively to the speech signal. We can exploit this property to determine when the user is actively listening, using computer models and machine learning.

[Figure: The speech envelope envelops the speech signal.]
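To make this concrete, below is a minimal sketch of how such a speech envelope could be extracted from an audio recording, assuming a simple rectify-and-low-pass approach; the function name `speech_envelope` and the 8 Hz cutoff are illustrative choices, not taken from the original text, and the thesis pipeline may use a more refined, auditory-inspired method.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def speech_envelope(speech, fs, cutoff_hz=8.0):
    """Crude speech envelope: rectify the waveform, then keep only its slow variations."""
    rectified = np.abs(speech)                          # discard the fast carrier oscillations
    b, a = butter(4, cutoff_hz / (fs / 2), btype="low")
    return filtfilt(b, a, rectified)                    # zero-phase low-pass filter

# Toy usage with a synthetic, amplitude-modulated tone standing in for speech.
fs = 8000
t = np.arange(0, 2.0, 1 / fs)
audio = np.sin(2 * np.pi * 200 * t) * (0.5 + 0.5 * np.sin(2 * np.pi * 3 * t))
envelope = speech_envelope(audio, fs)
```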

 

Computer models and machine learning

The computer models apply mathematical operations to the EEG to recover the speech envelope from it. In particular, we use a specific computer model: the least-squares model. This least-squares model reconstructs the speech envelope from the EEG by taking a weighted average of the EEG signals. The reconstructed envelope is then compared with the underlying, true speech envelope, and a score is returned. The higher the score, the more strongly the speech envelope is present in the EEG signal. In other words, the higher the score, the more attentively the user is listening to the speech signal.

[Figure: The least-squares model reconstructs the speech envelope from the EEG.]
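As an illustration of this idea, the sketch below reconstructs the envelope as a weighted combination of the EEG channels and compares it with the true envelope using a Pearson correlation as the score. In practice such decoders usually also include time-lagged copies of each channel, which is omitted here for brevity, and the function names are hypothetical.

```python
import numpy as np

def reconstruct_envelope(eeg, weights):
    """Linear 'backward' decoder: a weighted combination of the EEG channels.

    eeg:     array of shape (samples, channels)
    weights: array of shape (channels,)
    """
    return eeg @ weights

def attention_score(eeg, weights, true_envelope):
    """Pearson correlation between the reconstructed and the true envelope.

    A higher score means the envelope is more strongly present in the EEG,
    i.e. the user is listening more attentively.
    """
    reconstruction = reconstruct_envelope(eeg, weights)
    return np.corrcoef(reconstruction, true_envelope)[0, 1]
```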

To reconstruct the speech envelopes accurately, this least-squares model additionally has parameters that adapt it to the user. Compare it to a dial for adjusting the settings of a machine, such as the knob that sets an oven to the right temperature. The computer models learn the optimal values for these parameters (the right position of the dial) by themselves via machine learning, in which the model learns the optimal values by looking at examples. To this end, we feed the least-squares model examples consisting of EEG recordings and the corresponding speech envelopes, recorded while we know the user is actively listening. By letting the model adjust its parameters based on these examples, it teaches itself to predict when the user is actively listening.
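Turning the dial then amounts to solving a least-squares problem on those examples. A minimal sketch, assuming a small ridge term for numerical stability (the exact regularisation used in the thesis is not stated in this text):

```python
import numpy as np

def train_decoder(eeg, envelope, reg=1e-3):
    """Fit decoder weights on examples recorded while the user was known
    to be listening actively (least squares with a small ridge term).

    Solves w = (X^T X + reg * I)^(-1) X^T y.
    """
    n_channels = eeg.shape[1]
    cov = eeg.T @ eeg + reg * np.eye(n_channels)   # regularised channel covariance
    xcov = eeg.T @ envelope                        # cross-covariance with the envelope
    return np.linalg.solve(cov, xcov)

# Usage: learn the weights on labelled examples, then score new EEG segments.
# weights = train_decoder(eeg_train, envelope_train)
# score = attention_score(eeg_test, weights, envelope_test)
```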

 

User-specific examples

Since the models learn their parameters from examples via machine learning, these examples must be of high quality. The least-squares model, for instance, reaches its highest accuracy when we collect examples for each user separately. This approach nevertheless has the drawback that per-user data collection is time-consuming: to gather these examples, we have to run experiments for every user, in which the user must listen actively while we record the EEG.

To address this data collection problem, we can adapt the examples of another person to the current user. Suppose we want to build a least-squares model for Marie. We have access to Marie's EEG signals, since Marie can put on the EEG sensors. But we do not know whether Marie is actively listening, so Marie's example is incomplete and we cannot apply machine learning. Still, we can use Marie's EEG to adapt the examples of another person, Jef, to Marie. By looking for relations between the EEG signals of both persons, we can tailor Jef's examples to Marie. These adapted examples are then better matched to Marie, and we can use them to learn the parameters of Marie's least-squares model via machine learning. This method does not yet yield examples of the same quality as per-user data collection, but it appears to be a step in the right direction: it produces significantly better parameters for Marie's least-squares model than using Jef's examples without adaptation.

[Figure: By looking for relations between the EEG signals, the parameters of Marie's least-squares model improve significantly.]
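The text does not spell out how these relations between the two EEG recordings are found. One simple, hedged illustration is to match the channel covariance of Jef's EEG to Marie's (a CORAL-style alignment), after which Jef's adapted examples can be used to train Marie's decoder; the function `align_eeg` and the choice of alignment are assumptions for illustration only, not necessarily the method used in the thesis.

```python
import numpy as np
from scipy.linalg import inv, sqrtm

def align_eeg(eeg_jef, eeg_marie, reg=1e-3):
    """Map Jef's EEG so that its channel covariance matches Marie's.

    The aligned EEG, paired with Jef's speech envelopes, then serves as
    adapted training examples for Marie's least-squares model.
    """
    def covariance(x):
        centred = x - x.mean(axis=0)
        return centred.T @ centred / len(centred) + reg * np.eye(x.shape[1])

    c_jef, c_marie = covariance(eeg_jef), covariance(eeg_marie)
    # Whiten with Jef's statistics, then re-colour with Marie's.
    transform = np.real(inv(sqrtm(c_jef)) @ sqrtm(c_marie))
    return (eeg_jef - eeg_jef.mean(axis=0)) @ transform
```

Marie's decoder is then trained on these aligned examples exactly as in the previous sketch.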

 

 

The future?

Using EEG and computer models, it is possible to detect when a user is actively listening. To do so, we can use a least-squares model that learns its optimal parameters from examples via machine learning. To minimise the typical drawback of per-user data collection in this machine learning strategy, we can moreover adapt examples to the current user. Still, research in this area is far from finished; the adapted examples, for instance, are not yet fully tailored to the user. Nevertheless, we have taken steps towards detecting when a user is actively listening, and who knows, one day we may listen to each other through neuro-steered hearing devices based on these models. After all, as Abraham Lincoln already knew: "The best way to predict the future is to create it".

