Recent Forum Posts

Koerner, T. K., Zhang, Y., Nelson, P., Wang, B., & Zou, H. (2017). Neural indices of phonemic discrimination and sentence-level speech intelligibility in quiet and noise: A P3 study. Hearing Research, 350, 58-67. http://doi.org/10.1016/j.heares.2017.04.009

Abstract
This study examined how speech babble noise differentially affected the auditory P3 responses and the associated neural oscillatory activities for consonant and vowel discrimination in relation to segmental- and sentence-level speech perception in noise. The data were collected from 16 normal-hearing participants in a double-oddball paradigm that contained a consonant (/ba/ to /da/) and vowel (/ba/ to /bu/) change in quiet and noise (speech-babble background at a -3 dB signal-to-noise ratio) conditions. Time-frequency analysis was applied to obtain inter-trial phase coherence (ITPC) and event-related spectral perturbation (ERSP) measures in delta, theta, and alpha frequency bands for the P3 response. Behavioral measures included percent correct phoneme detection and reaction time as well as percent correct IEEE sentence recognition in quiet and in noise. Linear mixed-effects models were applied to determine possible brain-behavior correlates. A significant noise-induced reduction in P3 amplitude was found, accompanied by significantly longer P3 latency and decreases in ITPC across all frequency bands of interest. There was a differential effect of noise on consonant discrimination and vowel discrimination in both ERP and behavioral measures, such that noise impacted the detection of the consonant change more than the vowel change. The P3 amplitude and some of the ITPC and ERSP measures were significant predictors of speech perception at segmental and sentence levels across listening conditions and stimuli. These data demonstrate that the P3 response with its associated cortical oscillations represents a potential neurophysiological marker for speech perception in noise.

Keywords: speech perception; event-related potential; P3; inter-trial phase coherence (ITPC); event-related spectral perturbation (ERSP) 
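For readers unfamiliar with the ITPC measure in the keywords above: it is the magnitude of the across-trial mean of unit phase vectors at each time-frequency point. Below is an illustrative NumPy sketch, not the authors' actual analysis code; the function name and array layout are assumptions:

```python
import numpy as np

def itpc(tf):
    """Inter-trial phase coherence (ITPC) from complex time-frequency data.

    tf: complex array of shape (n_trials, n_freqs, n_times), e.g. single-trial
    wavelet coefficients. Each coefficient is normalized to a unit phase
    vector; ITPC is the magnitude of the across-trial mean of those vectors.
    Values near 1 indicate strong phase locking across trials; values near 0
    indicate random phase from trial to trial.
    """
    unit_phase = tf / np.abs(tf)            # discard amplitude, keep phase
    return np.abs(unit_phase.mean(axis=0))  # shape (n_freqs, n_times)
```

With perfectly phase-locked trials the measure equals 1, while random trial-to-trial phase drives it toward 0 as trials accumulate, which is why the noise-induced ITPC decreases reported in the abstract index degraded phase locking.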

Jiang, W., Li, Y., Shu, H., Zhang, L., & Zhang, Y. (2017). Use of semantic context and F0 contours by older listeners during Mandarin speech recognition in quiet and single-talker interference conditions. Journal of the Acoustical Society of America, 141, EL338-EL344.

Abstract: This study followed up Wang et al. (2013) to investigate factors influencing older listeners’ Mandarin speech recognition in quiet vs. single-talker interference. Listening condition was found to interact with F0 contour, revealing that natural F0 contours provided benefit in the interference condition whereas semantic context contributed similarly to both conditions. There was also a significant interaction between semantic context and F0 contour, demonstrating the importance of semantic context when F0 was flattened. Together, findings from the two studies indicate that aging differentially affects tonal language speakers’ dependence on F0 contours and semantic context for speech perception in suboptimal conditions.

Keywords: Mandarin speech recognition; older listeners; semantic context; F0 contours; quiet; single-talker interference

Koerner, T. K., & Zhang, Y. (Accepted). Application of linear mixed-effects models in human neuroscience research: A comparison with Pearson correlation in two auditory electrophysiology studies. Brain Sciences.

Abstract: Neurophysiological studies are often designed to examine relationships between measures from different testing conditions, time points, or analysis techniques within the same group of participants. Appropriate statistical techniques that can take into account repeated measures and multivariate predictor variables are integral and essential to successful data analysis and interpretation. This work implements and compares conventional Pearson correlations and linear mixed-effects (LME) regression models using data from two recently published auditory electrophysiology studies. For the specific research questions in both studies, the Pearson correlation test is inappropriate for determining the strength of the relationships between the behavioral responses for speech-in-noise recognition and the multiple neurophysiological measures, as the neural responses across listening conditions were simply treated as independent measures. In contrast, the LME models allow a systematic approach to incorporate both fixed-effect and random-effect terms to deal with the categorical grouping factor of listening conditions, between-subject baseline differences in the multiple measures, and the correlational structure among the predictor variables. Together, the comparative data demonstrate both the advantages of and the necessity for applying mixed-effects models to properly account for the built-in relationships among the multiple predictor variables, which has important implications for proper statistical modelling and interpretation of human behavior in terms of neural correlates and biomarkers.

Keywords: Pearson correlation; linear mixed-effects regression models; repeated measures; neurophysiology; event-related potential
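The abstract's point about Pearson correlation treating repeated measures as independent can be illustrated with a toy simulation (not the studies' actual data or models): when each subject contributes multiple trials, per-subject baseline differences can push the pooled correlation in one direction while the within-subject relationship runs the other way. Centering within subjects below is a crude stand-in for the random intercept an LME model would estimate; all names and parameter values here are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)
n_subj, n_rep = 20, 10

# Per-subject baselines shift both measures together (between-subject variance).
base = rng.normal(0.0, 5.0, size=(n_subj, 1))
within = rng.normal(0.0, 1.0, size=(n_subj, n_rep))

x = base + within
# Within each subject, y *falls* as x rises; across subjects both rise together.
y = base - 0.8 * within + rng.normal(0.0, 0.3, size=(n_subj, n_rep))

def pearson(a, b):
    a = a.ravel() - a.mean()
    b = b.ravel() - b.mean()
    return (a @ b) / np.sqrt((a @ a) * (b @ b))

# Pooling all trials as if independent: dominated by between-subject baselines.
pooled_r = pearson(x, y)
# Removing per-subject means (a crude stand-in for a random intercept)
# recovers the opposite, within-subject relationship.
within_r = pearson(x - x.mean(axis=1, keepdims=True),
                   y - y.mean(axis=1, keepdims=True))
```

In this simulation the pooled Pearson correlation is strongly positive while the within-subject correlation is strongly negative, showing why a model that accounts for subject-level structure can reach a qualitatively different conclusion than a naive pooled correlation.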

Acknowledgments: This work was supported in part by the Charles E. Speaks Graduate Fellowship (TKK), the Bryng Bryngelson Research Fund (TKK and YZ), the Capita Foundation (YZ), the Brain Imaging Research Project award and single semester leave award (YZ) from the College of Liberal Arts, and the University of Minnesota Grand Challenges Exploratory Research Project Grant (YZ). We would like to thank Boxiang Wang, Hui Zou, Peggy Nelson, and Edward Carney for their assistance.

Luodi Yu has been selected for the Diversity Award to attend the IMFAR 2017 meeting, May 10-13 in San Francisco, California, USA.

http://www.autism-insar.org/

Geraldine Dawson, INSAR President
Matthew Lerner, INSAR Awards Committee Chair

Yu, L., Rao, A., Zhang, Y., Burton, P.C., Rishiq, D., & Abrams, H. (2017). Neuromodulatory effects of auditory training and hearing aid use on audiovisual speech perception in elderly individuals. Frontiers in Aging Neuroscience, 9, 30.
http://journal.frontiersin.org/article/10.3389/fnagi.2017.00030/

Abstract. Although audiovisual (AV) training has been shown to improve overall speech perception in hearing-impaired listeners, there has been a lack of direct brain imaging data to help elucidate the neural networks and neural plasticity associated with hearing aid use and auditory training targeting speechreading. For this purpose, the current clinical case study reports functional magnetic resonance imaging (fMRI) data from two hearing-impaired patients who were first-time hearing aid users. During the study period, both patients used hearing aids for 8 weeks; only one received a training program named ReadMyQuips™ (RMQ) targeting speechreading during the second half of the study period for 4 weeks. Identical fMRI tests were administered at pre-fitting and at the end of the 8 weeks. Regions of interest (ROIs), including auditory cortex and visual cortex for uni-sensory processing and superior temporal sulcus (STS) for audiovisual integration, were identified for each person through an independent functional localizer task. The results showed experience-dependent changes involving the auditory cortex and STS ROIs, as well as functional connectivity between the uni-sensory ROIs and STS, from pretest to posttest in both cases. These data provide initial evidence for malleable, experience-driven cortical functionality for audiovisual speech perception in elderly hearing-impaired people and call for further studies with a much larger sample and systematic controls to fill in the knowledge gap in understanding brain plasticity associated with auditory rehabilitation in the aging population.

Funding statement
This project received funding from Starkey Hearing Technologies (AR and YZ), the University of Minnesota’s (UMN) Brain Imaging
Research Project Award (YZ) from the College of Liberal Arts and the UMN Grand Challenges Exploratory Research Grant Award
(YZ).

You are Invited to Neurology Grand Rounds
Cognitive Training in Early Phases of Psychotic Illness

Presenter:
Sophia Vinogradov, MD
Professor & Head, Department of Psychiatry
University of Minnesota Medical School

Date | Feb 3, 2017
Time | 12-1 PM
Location | Moos 2-530

Learning Objectives (Upon completion of this conference, participants should be able to):
1. Outline the typical profile of cognitive deficit in early phases of psychotic illness.
2. Discuss the principles of neuroscience-informed cognitive training.
3. Describe the effects of targeted cognitive training in early phases of psychotic illness.

ACCREDITATION STATEMENT
This activity has been planned and implemented in accordance with the Essential Areas and Policies of the Accreditation Council for Continuing Medical Education (ACCME) through the direct providership of the University of Minnesota. The University of Minnesota is accredited by the ACCME to provide continuing medical education for physicians.

American Medical Association (AMA) Credit Designation Statement
The University of Minnesota designates this live activity for a maximum of 1 AMA PRA Category 1 Credits™. Physicians should claim only the credit commensurate with the extent of their participation in the activity.

Sponsored by the Department of Neurology, University of Minnesota

Please share this notice with others who may be interested in this seminar!

Wang, X., Wang, S., Fan, Y., Huang, D., & Zhang, Y. (Accepted). Speech-specific categorical perception deficit in autism: An Event-Related Potential study of lexical tone processing in Mandarin-speaking children. Scientific Reports.

Abstract: Recent studies reveal that tonal language speakers with autism have enhanced neural sensitivity to pitch changes in nonspeech stimuli but not to lexical tone contrasts in their native language. The present ERP study investigated whether the distinct pitch processing pattern for speech and nonspeech stimuli in autism was due to a speech-specific deficit in categorical perception of lexical tones. A passive oddball paradigm was adopted to examine two groups (16 in the autism group and 15 in the control group) of Chinese children’s Mismatch Responses (MMRs) to equivalent pitch deviations representing within-category and between-category differences in speech and nonspeech contexts. To further examine group-level differences in the MMRs to categorical perception of speech/nonspeech stimuli or lack thereof, neural oscillatory activities at the single trial level were further calculated with the inter-trial phase coherence (ITPC) measure for the theta and beta frequency bands. The MMR and ITPC data from the children with autism showed evidence for lack of categorical perception in the lexical tone condition. In view of the important role of lexical tones in acquiring a tonal language, the results point to the necessity of early intervention for the individuals with autism who show such a speech-specific categorical perception deficit.

Keywords: Autism; pitch perception; lexical tone; categorical perception (CP); Event-related potential (ERP); Mismatch Responses (MMR); neural oscillation; inter-trial phase coherence (ITPC); theta activity

Provost Karen Hanson has announced 29 Grand Challenges Research grants to advance the research goals of Driving Tomorrow, the TC Campus Strategic Plan. The Driving Tomorrow research investments total $3.6 million, including $1.48 million for 21 exploratory research grants and $2.15 million for 8 collaborations shaped by interdisciplinary work groups that were built on an earlier Call for Ideas process. The funds were reallocations earmarked for strategic plan investments during annual compact planning with college deans, augmented by funds from the Global Programs and Strategy Alliance for projects supporting the U’s internationalization goals. More on the Driving Tomorrow Research Initiatives on the UMN web site.
https://strategic-planning.umn.edu/gc-research-grants-awarded

The Center for Neurobehavioral Development is pleased to host a colloquium with visiting international professor Roslyn Boyd, PhD on MONDAY September 19th at 10:00am.

Roslyn Boyd, PhD, Professor of Cerebral Palsy and Rehabilitation Research, University of Queensland, Australia will be presenting on "Advanced brain imaging in infants at risk of Cerebral Palsy and children with Cerebral Palsy".

Professor Boyd has primary training in Physiotherapy (Pediatrics, Neurological, Neonatal, Orthopaedics) with postgraduate training in Biomechanics (Pgrad) and Neuroscience (PhD). She has an international track record in conducting randomized clinical trials in the field of cerebral palsy (on the efficacy of upper limb rehabilitation, early intervention and Botulinum toxin A). Dr. Boyd's studies have combined clinical outcomes with an understanding of the mechanisms underpinning response to intervention through novel use of advanced brain imaging (functional MRI, diffusion imaging, functional connectivity, CP connectome). Her strong collaborations in neuroscience have enabled the development of novel rehabilitation trials in Action Observation training, multi-modal web-based training (Qld E Brain program) and environmental enrichment for infants, children and youth with Cerebral Palsy.

Details are also listed on the attached flyer.
All events are located in 717 Delaware, room 330. We hope to see you there!


Center for Neurobehavioral Development
717 Delaware St. SE
Suite 333
Minneapolis, MN 55414
CNBD@umn.edu
612.624.5600

Chieh Kao received her bachelor's degree in Psychology from National Taiwan University. She has research experience with infants, children with cochlear implants, and children with Autism Spectrum Disorders, focusing on their perception of emotional and tonal speech. She is interested in how preschoolers process speech sounds in complex situations, and in the neural networks underlying that processing.

Zhang, Y., Cheng, B., Koerner, T. K., Schlauch, R. S., Tanaka, K., Kawakatu, M., Nemoto, I., & Imada, T. (2016). Perceptual temporal asymmetry associated with distinct ON and OFF responses to time-varying sounds with rising versus falling intensity: A magnetoencephalography study. Brain Sciences.

Abstract: This magnetoencephalography (MEG) study investigated evoked ON and OFF responses to ramped and damped sounds in normal-hearing human adults. Two pairs of stimuli that differed in spectral complexity were used in a passive listening task; each pair contained identical acoustical properties except for the intensity envelope. Behavioral duration judgment was conducted in separate sessions, which replicated the perceptual bias in favour of the ramped sounds and the effect of spectral complexity on perceived duration asymmetry. MEG results showed similar cortical sites for the ON and OFF responses. There was a dominant ON response with stronger phase-locking factor (PLF) in the alpha (8-14 Hz) and theta (4-8 Hz) bands for the damped sounds. In contrast, the OFF response for sounds with rising intensity was associated with stronger PLF in the gamma band (30-70 Hz). Exploratory correlation analysis showed that the OFF response in the left auditory cortex was a good predictor of the perceived temporal asymmetry for the spectrally simpler pair. The results indicate distinct asymmetry in ON and OFF responses and neural oscillation patterns associated with the dynamic intensity changes, which provides important preliminary data for future studies to examine how the auditory system develops such an asymmetry as a function of age and learning experience and whether the absence of asymmetry or abnormal ON and OFF responses can be taken as a biomarker for certain neurological conditions associated with auditory processing deficits.

Keywords: MEG; auditory ON response; auditory OFF response; equivalent current dipole (ECD); minimum norm estimation (MNE); phase locking factor (PLF); temporal asymmetry index (TAI)

Acknowledgments: This work was supported by funding to the Research Center for Advanced Technologies at Tokyo Denki University from the Ministry of Education, Culture, Sports, Science and Technology of Japan. Zhang additionally received support from a Brain Imaging Research Project Award and the Grant-in-Aid of Research, Artistry and Scholarship Program, University of Minnesota. Cheng received support from National Social Science Foundation of China (15BYY005) and China Scholarship Council for being a visiting professor at the University of Minnesota. We thank Dr. Lotus Jo-Fu Lin for assistance and Profs. Matti Hämäläinen and Hui Zou for technical guidance respectively on MNE analysis and statistical techniques.

1. High-pass filter the data at 1 Hz, and low-pass filter at the upper limit of the frequency range of interest.

2. Remove the 60 Hz line noise using the CleanLine plugin in EEGLAB.

3. Manually go through the entire raw EEG recording to remove problematic segments.

4. Remove bad channels.

5. Use a common average reference (if the raw data do not already use one).

6. Close all processes and software programs not related to the analysis, and clear all variables before running the ICA script (do not run ICA via the graphical user interface).

7. Install more RAM (a total of 16 GB or more is recommended) in the computer and try other methods allowed by Windows to increase its buffer/virtual memory size. For instance,

http://ccm.net/faq/13919-windows-7-increase-the-buffer-size-of-the-command-prompt

https://support.microsoft.com/en-us/products/windows?os=windows-7
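The filtering steps (1-2) above can be sketched in Python with SciPy. This is an illustrative stand-in, not the EEGLAB pipeline itself: CleanLine removes line noise adaptively with multi-taper regression, whereas the sketch below uses a simple fixed-frequency notch. The function name and default cutoffs are assumptions:

```python
import numpy as np
from scipy.signal import butter, iirnotch, filtfilt

def basic_filtering(eeg, fs, hp=1.0, lp=30.0, line=60.0):
    """High-pass at `hp` Hz, low-pass at `lp` Hz, then notch out line noise.

    eeg: array of shape (n_channels, n_samples); fs: sampling rate in Hz.
    Zero-phase filtering (filtfilt) is used so the filters do not shift
    component latencies. The fixed iirnotch at `line` Hz is a simplified
    substitute for CleanLine's adaptive line-noise removal.
    """
    b, a = butter(4, [hp, lp], btype="bandpass", fs=fs)
    eeg = filtfilt(b, a, eeg, axis=-1)
    bn, an = iirnotch(line, Q=30.0, fs=fs)
    return filtfilt(bn, an, eeg, axis=-1)
```

Applied to a synthetic channel containing a 10 Hz component, a 60 Hz line component, and a DC offset, the band-pass removes the offset while the notch suppresses the 60 Hz component and leaves the 10 Hz signal largely intact.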

Perception of talker facing orientation and its effects on speech perception by NH and HI listeners

Olaf Strelcyk

Sonova U.S. Corporate Services, Warrenville, IL 60555

olaf.strelcyk@sonova.com

Despite a vast body of research on NH and HI listeners' speech perception in multitalker situations, the perception and effects of talker facing orientation have received very little attention. Facing orientation here refers to the direction that a talker is facing, as seen from a listener's perspective, e.g., whether a talker is directly facing the listener or looking in another direction. Two studies will be presented. The first assessed how well listeners could identify the facing orientation of a single talker in quiet. The second examined the importance of facing orientation for speech perception in situations with multiple talkers. Digit identification was measured for a frontal target talker in the presence of two spatially separated interfering talkers reproduced via loudspeakers. Both NH and HI listeners performed significantly better when the interfering talkers were simulated to be facing away. Facing-orientation cues enabled the NH listeners to sequentially stream the digits. The HI listeners did not stream the digits and showed smaller benefits, irrespective of amplification. The results suggest that facing orientation cannot be neglected in the exploration of speech perception in multitalker situations.

On June 2, 2016, Dr. Zhang received the 2016-2017 CLA Brain Imaging Research Project award letter (for the autism research project) from Associate Dean Alex Rothman.

Koerner, T. K., Zhang, Y., Nelson, P., Wang, B., & Zou, H. (Accepted). Neural indices of phonemic discrimination and sentence-level speech intelligibility in quiet and noise: A mismatch negativity study. Hearing Research.

Abstract
Successful speech communication requires the extraction of important acoustic cues from irrelevant background noise. In order to better understand this process, this study examined the effects of background noise on mismatch negativity (MMN) latency, amplitude, and spectral power measures as well as behavioral speech intelligibility tasks. Auditory event-related potentials (AERPs) were obtained from 15 normal-hearing participants to determine whether pre-attentive MMN measures recorded in response to a consonant (from /ba/ to /da/) and vowel (from /ba/ to /bu/) change in a double-oddball paradigm can predict sentence-level speech perception. The results showed that background noise increased MMN latencies and decreased MMN amplitudes with a reduction in the theta frequency band power. Differential noise-induced effects were observed for the pre-attentive processing of consonant and vowel changes due to different degrees of signal degradation by noise. Linear mixed-effects models further revealed significant correlations between the MMN measures and speech intelligibility scores across conditions and stimuli. These results confirm the utility of MMN as an objective neural marker for understanding noise-induced variations as well as individual differences in speech perception, which has important implications for potential clinical applications.

Keywords: MMN; time-frequency analysis; theta band; speech-in-noise perception; linear mixed effects model

Zhang, L., Li, Y., Wu, H., Li, X., Shu, H., Zhang, Y., & Li, P. (Accepted). Effects of semantic context and fundamental frequency contours on Mandarin speech recognition by second language learners. Frontiers in Psychology.

Abstract:
Speech recognition by second language (L2) learners in optimal and suboptimal conditions has been examined extensively, with English as the target language in most previous studies. This study extended existing experimental protocols (Wang et al., 2013) to investigate Mandarin speech recognition by Japanese learners of Mandarin at two different levels (elementary vs. intermediate) of proficiency. The overall results showed that in addition to L2 proficiency, semantic context, F0 contours, and listening condition all affected the recognition performance on the Mandarin sentences. However, the effects of semantic context and F0 contours on L2 speech recognition diverged to some extent. Specifically, there was a significant modulation effect of listening condition on semantic context, indicating that L2 learners made use of semantic context less efficiently in the interfering background than in quiet. In contrast, no significant modulation effect of listening condition on F0 contours was found. Furthermore, there was a significant interaction between semantic context and F0 contours, indicating that semantic context becomes more important for L2 speech recognition when F0 information is degraded. None of these effects were found to be modulated by L2 proficiency. The discrepancy in the effects of semantic context and F0 contours on L2 speech recognition in the interfering background might be related to differences in the processing capacities required by the two types of information in adverse listening conditions.

Funding statement
We would like to thank Xianjun Tan for her assistance in data collection. This research was supported by a Research Project from Faculty of Linguistic, Science Foundation of Beijing Language and Culture University (Fundamental Research Funds for the Central Universities) (14YJ150003, 16WT02) and Program for New Century Excellent Talents in University (NCET–13–0691) to LJZ, and by the US National Science Foundation (BCS-1349110) to PL. YZ was additionally supported by a Brain Imaging Research Project award from the University of Minnesota.

A new grant has been awarded to Zhang Lab with joint funding from Starkey and CATSS (Center for Applied and Translational Sensory Science).

Rao, A., Rishiq, D., Yu, L., Zhang, Y., & Abrams, H. (Accepted). Neural Correlates of Selective Attention with Hearing Aid Use Followed by ReadMyQuips Auditory Training Program. Ear and Hearing. (Project funded by Starkey Laboratories; PI: Rao, co-PI: Zhang)

ABSTRACT
Objectives: The objectives of this study were to investigate the effects of hearing aid use and the effectiveness of ReadMyQuips (RMQ), an auditory training program, on speech perception performance and auditory selective attention using electrophysiological measures. RMQ is an audiovisual training program designed to improve speech perception in everyday noisy listening environments.

Design: Participants were adults with mild to moderate hearing loss who were first-time hearing aid users. After 4 weeks of hearing aid use, the experimental group completed RMQ training in 4 weeks, and the control group received listening practice on audiobooks during the same period. Cortical late Event-Related Potentials (ERPs) and the Hearing in Noise Test (HINT) were administered at pre-fitting, pre-training and post-training to assess effects of hearing aid use and RMQ training. An oddball paradigm allowed tracking of changes in P3a and P3b ERPs to distractors and targets, respectively. Behavioral measures were also obtained while ERPs were recorded from participants.

Results: After 4 weeks of hearing aid use but before auditory training, HINT results did not show a statistically significant change, but there was a significant P3a reduction. Changes in P3b were correlated with improvement in d prime (d') in the selective attention task. After the training, this correlation between P3b and d' remained in the experimental group, but not in the control group. Similarly, HINT testing showed improved speech perception post-training only in the experimental group. ERP and behavioral measures in the auditory selective attention task did not show any changes.

Conclusions: Hearing aid use was associated with a decrement in involuntary attention switch to distractors in the auditory selective attention task. RMQ training led to gains in speech perception in noise and improved listener confidence was noted in the auditory selective attention task.

Friday, May 21, at 2pm in Shevlin 110:

The “meaning” in noise – evidence for bottom-up information masking of within channel modulation coding of speech.

Simon Carlile

Starkey Hearing Research Center, Berkeley, CA 94704-1362, USA & Auditory Neuroscience Laboratory, School of Medical Sciences, University of Sydney, Australia 2006

Spectro-temporal variations are a consequence of dynamic structural variations of a sounding body. The human auditory system is highly optimized for the detection, segregation and analysis of one class of such variations – speech. Here we will examine some consequences of this optimisation in the context of complex listening involving multiple concurrent talkers. In such conversational settings, listeners rapidly shift their attention from one to another talker so foreground and background are defined dynamically by listener intent. Up-regulation of the attended-to talker is generally thought to result from endogenous, top-down attention.

Here we review a recent report indicating that substantial informational masking between concurrent talkers may result from bottom-up interactions between sources within frequency modulation channels. This indicates that temporally dynamic aspects of within-frequency channel processing play a very important role in speech masking by the sort of “noise” most commonly encountered in natural conversational settings. Possibly even more surprising, these interactions appear to be modulated by spatial attention. Attentional enhancement of a foreground sound might then also involve processes at the level of the within-frequency channel. Whether this occurs prior to or as a consequence of grouping is a question of some functional significance, but it also points to the importance of knowing the listener's focus of attention.

- MAY 2016 -

CATSS Spring Symposium: Neural Interfaces for Sensory Loss
Thursday, May 19th, 12:30 to 5:00pm
McNamara Alumni Center

Tentative Schedule:

12:30pm - Participant Arrival and Registration - Johnson Room (Posters put up; light snacks and beverages will be provided.)

12:45pm - Introduction & Opening Lecture:
Dr. Geoff Ghose, Associate Professor of Neuroscience, Radiology, and Psychology, University of Minnesota: Optimizing the behavioral efficacy of cortical stimulation for perception.

1:30pm - Keynote Lecture:
Dr. Robert V. (Bob) Shannon, Professor of Research in Otolaryngology, Biomedical Engineering and Neuroscience, University of Southern California: Adventures in auditory prostheses.

2:30pm - Break for Poster Session & Refreshments in The Commons

2:55pm - Panel Discussion and Audience Participation

3:30pm - Closing Lecture:
Dr. Inyong Choi, Assistant Professor of Communication Sciences and Disorders, University of Iowa: Brain-computer interface operated by selective auditory attention: a neurofeedback-use scenario.

4:30pm - Steps toward follow-up and future collaborations; sign up for small-group sessions.

Unless otherwise stated, the content of this page is licensed under Creative Commons Attribution-ShareAlike 3.0 License