The effect of speaker separation and noise level on auditory attention detection

Tine Arras
Press release

Your brain reveals who you are listening to

You may soon read this advertisement in the newspaper: "Our new hearing aid is smarter than ever. It automatically determines who or what you are listening to and amplifies that sound, while suppressing background noise. You will never have to miss a conversation again, and you can effortlessly understand your partner in any situation, even at a restaurant or during a concert. Try it now for free!" Pure science fiction? Not at all: the technology already exists.


Order in the chaos

When you hear several sounds at once, your brain has to separate them. It does so on the basis of particular differences and similarities: the direction a sound comes from, its loudness, its pitch, and so on. In this way it sorts the sounds into separate 'streams', each belonging to one sound source. In a next step, you can choose which stream to listen to, depending on what you find most interesting at that moment.

An example: you are at a reception. People are talking all around you, and combined with the background music there is quite a lot of noise. Yet, if you make an effort, you can understand perfectly well what your conversation partner is saying – or what someone else is saying, if your conversation partner happens to be boring. When you think about it, that is rather remarkable, because amid all that noise the sound of that one voice is not particularly loud in comparison. Still, it works, and scientists even have a name for it: the cocktail party effect. But how does it work?

Every sound you hear is processed by your brain. Scientists can infer that processing from your brain waves; they call it the neural representation of the sound. The representation is organised per stream and changes under the influence of attention. As soon as you decide to listen to a specific sound source, the corresponding stream receives a stronger representation in your brain, while the other streams are suppressed. That is how you can focus on one particular sound, even when many other sounds surround you.


Science…

Why is all of this relevant? Well, with the right equipment, researchers can measure your brain waves and use them to work out which sound you are attending to. The scientific term for this is 'auditory attention detection'. It does not work perfectly (yet), but depending on the situation the computer gets it right in roughly 9 out of 10 cases. Not bad, is it? For now, this technology is only used in experimental research, in which participants listen to two people speaking at the same time. In my master's thesis, I investigated how well the algorithm performs in a somewhat more complex situation: two women each telling a story at the same time, with a crowd of talking people in the background. Sometimes the women stood close together, sometimes further apart. That, too, turned out to influence the algorithm's performance.
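The core idea behind auditory attention detection can be made concrete with a deliberately simplified sketch: because the brain tracks the envelope of the attended speech more strongly, a decoder can compare the brain response with the envelope of each talker and pick the best match. The toy example below illustrates only that principle; it is not the actual analysis pipeline from the thesis, and all signals and names (`slow_envelope`, `decode_attention`) are invented for the illustration.

```python
import numpy as np

rng = np.random.default_rng(42)

def slow_envelope(n, width=50):
    """Toy stand-in for a speech envelope: smoothed random noise."""
    x = rng.standard_normal(n + width)
    return np.convolve(x, np.ones(width) / width, mode="valid")[:n]

def zscore(x):
    return (x - x.mean()) / x.std()

n = 5000  # number of samples
env_attended = zscore(slow_envelope(n))  # talker the listener focuses on
env_ignored = zscore(slow_envelope(n))   # competing talker

# Toy "EEG": the attended envelope plus noise. Real EEG is far noisier,
# and the envelope must first be reconstructed from many electrodes.
eeg = env_attended + rng.standard_normal(n)

def decode_attention(response, candidate_envelopes):
    """Pick the candidate envelope that correlates best with the response."""
    corrs = [abs(np.corrcoef(response, env)[0, 1]) for env in candidate_envelopes]
    return int(np.argmax(corrs)), corrs

winner, corrs = decode_attention(eeg, [env_attended, env_ignored])
print("decoded attended talker:", winner)
```

In this artificial setting the decoder recovers the attended talker easily; with real recordings, the correlations are much weaker, which is why accuracy stays around 9 out of 10 rather than 10 out of 10.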

Scientists hope to develop the algorithm further so that it can be used in hearing aids. The main complaint of people who currently wear such a device is that they find it hard to follow conversations in groups or in noise. That is because a hearing aid amplifies all sounds, not just the ones the user wants to hear. A 'smart' hearing aid, which measures the user's brain activity and amplifies only the desired stream, could solve that problem.

Unfortunately, that is still a distant prospect. The algorithms have only been tested in simple situations, with a limited number of speakers and carefully chosen sounds. In real life, of course, there is no limit to the number and kind of sounds that can be heard at the same time. Moreover, the research setup involves a swimming-cap-like headset full of electrodes, a junction box the size of a shoebox and a computer to analyse the brain activity. That is not exactly convenient to walk around with or to take to a restaurant. Researchers are working on wearable and less conspicuous alternatives, but these are not yet ready. Much more research is therefore needed before the technology can be used in everyday situations.


…fiction?

Of course, it does not stop at hearing aids. An algorithm that uses your brain activity to discover who or what you are listening to can be applied in other ways as well. Healthcare professionals, for instance, want to develop a test that directly measures what you hear, without you having to repeat words or sentences yourself. That would be very useful, for example for people who have difficulty speaking or who do not understand the test instructions. Teachers might want to use the algorithm to find out whether their pupils are paying attention in class. Whether the pupils would appreciate that is another question. And it can get more extreme still: if brain activity reveals what someone is paying attention to, could you also read his or her thoughts?

Auditory attention detection is clearly a young but fascinating field. The technology offers possibilities that are useful for the hearing impaired, but it can also be applied in other ways. One thing is certain: if, a few years from now, you read an advertisement for smart hearing aids, you will know that my master's thesis was no science fiction.


Programme: Speech-language and audiological sciences
Publication year: 2018
Supervisor: Prof. dr. Tom Francart