Which of the following statements is most accurate in the context of lateralization of language?

Dichotic Listening Studies of Brain Asymmetry

K. Hugdahl, in Encyclopedia of Neuroscience, 2009

Dichotic listening (DL) is a noninvasive technique for the study of brain lateralization, or hemispheric asymmetry. DL is the most frequently used method to reveal a left-hemisphere dominance for language processing, particularly the extraction of the phonetic code from the speech signal. By recording the pattern of verbal responses to dichotic presentations of simple speech sounds (e.g., syllables), it is possible to determine the hemisphere to which receptive language capabilities are most likely localized in an individual. DL is frequently used in both experimental and clinical studies of language asymmetry, or laterality, and can also be used to study the lateralization of emotion and affect. DL involves simultaneously presenting two different auditory stimuli, one in each ear; the task of the individual is to report, after each presentation, which sound was heard. The individual is not informed beforehand that there are two different syllables on each trial. We have used the DL method in our own research to study the pathology of left temporal lobe language processing in dyslexic children, auditory hallucinations in schizophrenia, and deficits in patients with left-hemisphere arachnoid cysts, to mention a few examples. DL can also be used as a complement to the invasive intracarotid sodium amobarbital (Wada) technique when investigating language asymmetry in epileptic patients undergoing surgical treatment. However, although mainly used as a technique for the study of language laterality, DL is in a general sense a behavioral technique for studying a broad range of cognitive and emotional processes, related not only to brain laterality and hemispheric asymmetry but also to attention, conditioning, learning and memory, psychopathology, and psycholinguistics. Thus, DL is a measure of both temporal and frontal lobe function, attention and information processing, and stimulus processing speed, in addition to being a measure of hemispheric asymmetry.
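As a concrete illustration of how responses to such syllable trials are typically scored, the following sketch computes a laterality index (LI). The LI formula is standard in the dichotic listening literature; the trial counts and the ±10 classification cutoff used here are illustrative assumptions, not values from the text.

```python
# Minimal sketch: scoring a dichotic syllable test with a laterality index.
# The trial counts and the +/-10 cutoff are illustrative assumptions; only
# the LI formula itself is standard in the literature.

def laterality_index(right_correct: int, left_correct: int) -> float:
    """LI = 100 * (R - L) / (R + L). Positive values indicate a right-ear
    advantage, i.e., putative left-hemisphere language dominance."""
    total = right_correct + left_correct
    if total == 0:
        raise ValueError("no correct reports in either ear")
    return 100.0 * (right_correct - left_correct) / total

# Example: the listener correctly reports the right-ear syllable on 21
# trials and the left-ear syllable on 11 trials.
li = laterality_index(right_correct=21, left_correct=11)
print(f"LI = {li:+.1f}")  # LI = +31.2
if li > 10:
    print("right-ear advantage (left-hemisphere dominance likely)")
elif li < -10:
    print("left-ear advantage")
else:
    print("no clear ear advantage")
```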


URL: https://www.sciencedirect.com/science/article/pii/B9780080450469002953

Phonological, Lexical, Syntactic, and Semantic Disorders in Children

D.L. Molfese, ... P.J. Molfese, in Encyclopedia of Language & Linguistics (Second Edition), 2006

Semantics and DS

Dichotic listening tasks involving DS children generally result in a left-ear advantage, indicating that these individuals use their right hemisphere to process speech (Welsh, 2002). On the basis of such findings, Capone (2004) argued that difficulties in semantic processing in DS arise from a reduction in cerebral and cerebellar volume. In addition, the corpus callosum is thinner in the DS brain in the rostral fifth, the area associated with semantic communication. Welsh (2002) speculated that the thinner corpus callosum isolates the two hemispheres from each other, making it more difficult to integrate verbal information.

Vocabulary growth in DS children is increasingly delayed with age (Chapman, 2002). Studies using dichotic listening tasks report a left-ear advantage for DS, indicating that lexical operations are carried out primarily in the right hemisphere, a finding opposite to that found with typically developing children. In fact, individuals with DS who exhibit the most severe language deficits demonstrate the most atypical ear advantage (Welsh, 2002).


URL: https://www.sciencedirect.com/science/article/pii/B0080448542049294

Audition

L.A. Werner, in The Senses: A Comprehensive Reference, 2008

3.50.3.2.4 Listening to competing messages

Dichotic listening is the classic paradigm for the study of selective auditory attention (Cherry, E. C., 1953): a listener is presented with two sound sequences. In one condition, the sequences are presented simultaneously to the same ear(s). In the other, one sound sequence is presented to one ear and a different sound sequence to the other. The sound sequences are typically speech. The listener is asked to report the sound presented in one sequence, while ignoring the other. Maccoby E. E. and Konrad K. W. (1966) tested kindergarten, second-grade, and fourth-grade children in such a selective listening task. In the dichotic condition, a male voice spoke words in one ear, while a female voice spoke words in the other. In the diotic condition, both voices were presented to both ears. Children were instructed to report the word spoken by either the male or female voice. Performance in the diotic condition was rather poor overall, but improved from 18% to 33% correct between kindergarten and fourth grade. In the dichotic condition, performance was uniformly better at all ages, but still improved from 35% to 52% correct over the age range tested. Doyle A.-B. (1973) reported that the improvement in performance in the diotic competing message condition continued to a lesser extent between 8 and 14 years of age. The results of more recent studies are consistent with this pattern, and indicate that differences between event-related potentials evoked by attended and unattended stimuli increase in parallel with performance in dichotic listening tasks (Bartgis, J. et al., 2003; Berman, S. and Friedman, D., 1995; Coch, D. et al., 2005).

Recent studies of children disentangling competing messages have produced a wide range of results. For example, Litovsky R. Y. (2005) asked 4- to 7-year-olds and adults to identify spondees in the presence of 1- or 2-talker speech or of speech-shaped noise modulated with the 1- or 2-talker speech envelope. Children’s thresholds were higher than adults’ in all conditions, but: (1) the amount of masking exhibited by children and adults was similar in all conditions; (2) both children and adults had higher speech reception thresholds in modulated speech-shaped noise than in speech; and (3) children and adults showed equivalent release from masking when the spondee was presented from the speaker in front of the listener and the competing sound was presented from a speaker on the listener’s right. Fallon M. et al. (2000) reported that 5-year-olds were as good as adults in identifying words in a background of multitalker babble, as long as age differences in masked thresholds were taken into account. Hall J.W. et al. (2002), in contrast, found that while adults’ spondee identification was about the same with two-talker and noise maskers, 5- to 9-year-old children’s spondee identification was worse with a two-talker masker than with a noise masker, particularly when the speech masker was presented continuously throughout the session. Finally, Wightman F. and Kistler D. (2005) asked children and adults to identify speech in a paradigm developed by Brungart, D. S. and his colleagues (Brungart, D. S., 2001). Listeners heard a target sentence along with a competing sentence in one ear, and in one condition, an additional competing sentence or a modulated speech-shaped noise was presented in the other ear. Listeners ranging in age from 4.6 to 30 years were tested with a female-talker distracter. The youngest children, 4.6–5.7 years old, needed a 23 dB greater signal-to-distracter ratio than 20- to 30-year-olds to identify a word in the target sentence when no contralateral distracter was presented. Adding noise to the contralateral ear had little effect at any age, but adding speech to the contralateral ear had a greater effect on the youngest children than on other age groups. Older children seemed to be affected by the presence of contralateral speech to about the same extent as adults.

It is difficult to draw conclusions about the development of selective attention from these studies. Several variables seem to be important in determining whether children will be able to attend selectively to one of several competing messages. One of these is the extent to which the target and distracters are synchronized in time. In the Wightman F. and Kistler D. (2005) experiment, for example, the words in the target and distracter stimuli were precisely aligned. Temporal synchrony is one variable that makes it difficult for listeners to segregate sound sources (Yost, W. A., 1991).

It is interesting that in the speech studies in which the words in the target and distracter sentences were not precisely aligned, children are generally able to take advantage of differences in spatial location to improve performance, while in informational masking studies where target and distracter are temporally aligned, they are not. This suggests a problem with sound source segregation rather than with selective attention. The precise characteristics of the distracter also seem to be important. For example, if children are less able than adults to take advantage of periods of low distracter energy to process the target, then more modulated distracters (e.g., single-talker versus multitalker) will put children at a relative disadvantage compared to adults. Finally, it does appear that children are less able than adults to ignore the semantic content of the distracter. Hall J.W. et al. (2002) reported that children’s spondee identification was more disrupted by continuous speech (in which the listener might follow the meaning) than by gated speech (in which the semantic content would be disrupted by gating), while gating the distracter made little difference to adult performance. A similar result was reported by Cherry E. C. (1981). Considerably more research will obviously be needed to understand the development of auditory attention.


URL: https://www.sciencedirect.com/science/article/pii/B9780123708809003868

The Human Auditory System

Frank E. Musiek, Gail D. Chermak, in Handbook of Clinical Neurology, 2015

Findings in clinical populations

Various dichotic listening paradigms are sensitive to a variety of central auditory disorders. In patients with well-localized lesions, the ear contralateral to the involved hemisphere often exhibits poor performance relative to norms (intersubject) or to the other ear (intrasubject, interaural comparison) (Fig. 18.1) (Kimura, 1961b; Musiek, 1983; Musiek and Weihing, 2011; and others). If the corpus callosum is involved, a left-ear deficit is seen consistently (Musiek and Weihing, 2011). The anatomic basis for this finding stems from the more indirect routing of left-ear stimuli, which are first directed to the right hemisphere and then are impeded in crossing (via the corpus callosum) to the left hemisphere, which is required for speech processing and verbal response (Fig. 18.2). A left-ear deficit is also seen in young children (under 12 years), whose corpus callosum has not yet attained its full complement of myelin, as well as in the elderly and in individuals with diseases affecting myelin (e.g., multiple sclerosis) (Musiek et al., 1984; Musiek and Pinheiro, 1985; Musiek and Weihing, 2011) (Fig. 18.3). The left-ear dichotic listening deficit in cases of corpus callosum involvement has been studied extensively in split-brain patients (Musiek and Pinheiro, 1985). In fact, owing to this anatomic basis, dichotic listening has become known as a test of interhemispheric transfer or corpus callosum integrity.


Fig. 18.1. Dichotic listening performance in a patient with left temporal-lobe tumor demonstrating the “contralateral” effect.


Fig. 18.2. Depiction of possible routes to the cortex during dichotic listening.


Fig. 18.3. Dichotic listening performance in a patient with multiple sclerosis.


URL: https://www.sciencedirect.com/science/article/pii/B9780444626301000184

Attraction, Distraction and Action

Andrew R.A. Conway, Michael J. Kane, in Advances in Psychology, 2001

Dichotic listening

A dichotic-listening task requires the subject to shadow, or repeat aloud, a message presented to one ear while ignoring a message presented to the other ear. Early work using the dichotic listening paradigm revealed that subjects were very capable of successful shadowing and successful blocking. In fact, subjects are so successful at blocking the unattended message that little or no semantic content is ever reported from the irrelevant channel (Broadbent, 1958; Cherry, 1953). However, Moray (1959) found that when one’s own name is presented on the unattended channel, 33% of subjects report hearing it, and so it appears that some semantic information is capable of capturing attention and therefore reaching awareness, at least for some individuals.

Using more sophisticated sound technology, Wood and Cowan (1995) replicated Moray’s (1959) study and found that 34.6% of subjects reported hearing their own name on the unattended channel. The question remained, why do some subjects recognize their name while other subjects do not? Note that by a capture/control view of dichotic listening performance, those who notice their names are those who are less successful in controlling attention by blocking task-irrelevant information. Thus, individuals with low WMC should be more likely to hear their name. In contrast, by a capacity view of dichotic listening, those who notice their names are those who have more attentional capacity to simultaneously devote to the task-relevant and task-irrelevant channels. By this view, individuals with high WMC should be more likely to hear their name.

Conway, Cowan, and Bunting (2001) examined these possibilities by testing 20 high and 20 low WMC subjects in a version of Moray’s dichotic-listening task, with high and low WMC reflecting the upper and lower quartiles of the distribution of operation span scores, respectively. The listening task required subjects to shadow 400 unrelated words presented to the right ear and ignore 350 unrelated words presented to the left ear. After 4 or 5 minutes of shadowing, the subject’s own name was presented on the unattended channel. Words were presented simultaneously at a rate of one word per second. The attended channel was always a female voice and the unattended channel was always a male voice.
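To make the structure of this task concrete, here is a minimal sketch of the stimulus schedule as described above. The word tokens and the exact position of the name are placeholders; only the word counts, channel assignment, and one-word-per-second rate come from the text.

```python
# Sketch of the Conway, Cowan, and Bunting (2001) stimulus schedule.
# Word tokens and the exact name position are placeholder assumptions;
# counts, channels, and the 1 word/s rate follow the description above.

import random

RATE_S = 1.0                                        # one word per second
attended = [f"att_word_{i}" for i in range(400)]    # right ear, shadowed
ignored = [f"ign_word_{i}" for i in range(350)]     # left ear, ignored

# The subject's own name replaces an ignored-channel word after 4-5 min.
name_slot = random.randint(240, 300)                # 240-300 s into the task
ignored[name_slot] = "SUBJECT_NAME"

schedule = [
    {"t_s": i * RATE_S,
     "right_ear": attended[i],
     "left_ear": ignored[i] if i < len(ignored) else None}
    for i in range(len(attended))
]
print(schedule[name_slot])   # the critical trial carrying the name
```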

Conway, Cowan, and Bunting (2001) found very large WMC-related differences in name detection, such that low-WMC subjects were much more likely to hear their own name than were high-WMC subjects (see Figure 1). Although low-WMC subjects committed more overall shadowing errors (M = 30) than did high-WMC subjects (M = 10), the WMC groups did not differ in the number of shadowing errors committed on the two words presented before the presentation of the name. This suggests that the key finding of low spans disproportionately hearing their name was not simply due to attention wandering to the unattended channel at the opportune time. Finally, shadowing performance on the words following presentation of the name was also examined. Presumably, hearing one’s own name on the unattended channel would come with a cost and this was indeed the case. Regardless of WMC, subjects who reported hearing their name committed more shadowing errors on the two words following presentation of the name than subjects who did not report hearing their name. This cost only persisted for two words as there was no difference on the third or fourth word following presentation of the name.


Figure 1. Proportion of high and low span subjects who reported hearing their own name in the unattended channel.

The results of Conway et al. (2001) provide strong support for the notion that WMC is related to attention control. Specifically, high and low WMC subjects differ in performance when blocking a particularly salient, and habitually attended to, stimulus. When attempting to ignore one auditory channel while shadowing another, individuals with lesser WMC are more susceptible to attentional capture by a powerful orienting cue than are those with greater WMC.


URL: https://www.sciencedirect.com/science/article/pii/S0166411501800169

Language and the Left Hemisphere

Sebastian Ocklenburg, Onur Güntürkün, in The Lateralized Brain, 2018

The dichotic listening task has been used in neuropsychological assessment and research for several decades. Typically, participants would be tested in a laboratory setting at a university or clinic. With the iDichotic smartphone app (refs. 20, 21; available via http://dichoticlistening.com/ or the QR-code in Fig. 4.6A), it is now possible to perform the task practically everywhere. To discover which half of your brain processes language, first download the app using the QR-code in Fig. 4.6B. Plug headphones into your smartphone and make sure to place the left earpiece on the left ear and vice versa. Adjust the volume of your smartphone to a comfortable level and start the app by touching the icon. After the start screen (Fig. 4.7A), you are asked a few questions to optimize the app experience, and you can perform a short hearing test to make sure that both of your ears have the same hearing capabilities, as one-sided hearing issues would confound the test results. After pressing “Continue,” a screen appears on which you can choose whether to start with the “Listen” test (the non-forced condition) or the “Concentrate” test (the forced-left and forced-right conditions) (Fig. 4.7C). Start with the “Listen” test. You will be instructed that you will hear a series of different syllables and that on each trial you need to decide which of the six syllables you have just heard. Press “Start test” and then choose the syllable you heard best on the touchscreen (Fig. 4.7B). After finishing, the app will tell you whether language is processed in the left, right, or both halves of your brain. If you also want to assess how strongly your language lateralization is modulated by attentional effects, click on the “Concentrate” button and this part of the test will start. You are instructed to concentrate on only one ear during the test. It can start with either the left or the right ear, and after about 3 min you will be instructed to attend to the other ear. After you finish, click on “Details” to find out to what extent concentrating on one ear at a time influenced your results.
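One simple way to summarize the kind of comparison the “Details” view implies is to contrast the ear advantage across the non-forced and forced conditions. The sketch below reuses a laterality index for this purpose; the report counts are invented for illustration, and this is not the iDichotic app's actual scoring algorithm.

```python
# Hedged sketch: comparing ear advantage across attention conditions of a
# dichotic listening test. Trial counts are invented; this is not the
# iDichotic app's actual algorithm, just one plausible summary.

def laterality_index(right: int, left: int) -> float:
    return 100.0 * (right - left) / (right + left)

conditions = {                        # (right-ear reports, left-ear reports)
    "non-forced (Listen)": (20, 10),
    "forced-right": (26, 4),
    "forced-left": (14, 16),
}
for name, (r, l) in conditions.items():
    print(f"{name:>20}: LI = {laterality_index(r, l):+.1f}")
# Attending right typically amplifies the right-ear advantage; attending
# left can reduce or even reverse it, indexing top-down attentional control.
```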


Figure 4.6. QR-codes to reach the website of the iDichotic app (A) and to download it in the app store (B).

QR-codes used with permission of Kenneth Hugdahl.


Figure 4.7. Screenshots exemplifying the use of the iDichotic app. The start screen (A), the typical response panel (B) and the choosing between the different test conditions (C) are shown.

Screenshots used with permission of Kenneth Hugdahl.


URL: https://www.sciencedirect.com/science/article/pii/B9780128034521000047

Creativity☆

Mark A. Runco, in Reference Module in Neuroscience and Biobehavioral Psychology, 2018

Magnetic Imaging

Handedness, dichotic listening, and studies of brain trauma are being replaced by more modern techniques, such as magnetic imaging. For example, in a 1995 investigation, Thomas Elbert, Christo Pantev, Christian Wienbruch, Brigitte Rockstroh, and Edward Taub found that the cortical representation of the digits of the left hand of string players was larger than that in controls. The effect was smallest for the left thumb, and no such differences were observed for representations of the right-hand digits. Intriguingly, the amount of cortical reorganization in the representation of the fingering digits was correlated with the age at which the person had begun to play. These results suggested that the representation of different parts of the body in the primary somatosensory cortex of humans depends on use and adapts to the needs and experiences of the individual.

Recall the idea introduced earlier in this article, that performance often reflects both biological potential and experience. Note also the fact that this research focused on individuals within a very specific domain of performance (stringed instruments). Generalizations cannot be applied to other instruments, let alone other kinds of musical creativity or other kinds of art. Finally, it is critical to keep in mind that although the participants of this study were musicians, their actual creativity was not assessed. Music may be an unambiguously creative domain, but it would be interesting to know the actual level of creative skill of the individuals and to correlate that with the size of the cortical representations.

There has been so much fMRI research on creativity that a meta-analysis was possible in 2015. In it, Wu, Yang, Tong, Sun, Chen, Wei, Zhang, and Qiu reaffirmed the importance of the prefrontal cortex but also supported the idea that creativity involves widely distributed regions of the brain, which is another way of describing the networks and systems mentioned earlier. Another review of the neural imaging research on creativity, by Yoruk and Runco (Activitas Nervosa Superior, 2014), pointed to the prefrontal cortex and communication between the two hemispheres. Significantly, they also described how creativity may involve not just selective neural activation but also de-activation. Certain areas of the brain, including the prefrontal cortex, may actually be less active during creative processing. This may allow broader associations and more divergent thinking, given that evaluative and judgmental processes are set aside, at least temporarily. One label for this de-activation is hypofrontality.


URL: https://www.sciencedirect.com/science/article/pii/B978012809324503042X

Tone: Neurophonetics

J.T. Gandour, in Encyclopedia of Language & Linguistics (Second Edition), 2006

Dichotic Listening

With the dichotic listening technique, two different auditory stimuli are presented at the same time, one in each ear (Hugdahl, 1999). Dichotic presentation of verbal auditory stimuli typically yields a right ear advantage (REA) when participants are requested to report what they hear on each trial. The standard explanation for the REA is that the contralateral auditory pathways suppress the ipsilateral pathways at the level of the brain stem, thus favoring the right ear input to the language-dominant LH. Information from the left ear has to be transferred across the corpus callosum in order to be processed in the LH. This transfer attenuates the available information, in addition to increasing the time it takes for the left ear signal to reach the language centers in the LH.

The RH, on the other hand, appears to be better at processing music and pitch contours in nonlanguage contexts (Zatorre et al., 2002). This raises the question of how pitch contours are processed in the human brain when they signal differences in meaning at the lexical level. Tone languages provide a window for investigating hemispheric specialization of pitch processing in language vs. nonlanguage domains. By comparing dichotic perception of identical pitch contours in speech and nonspeech contexts, we are able to determine whether hemispheric specialization is driven primarily by complex physical cues or by domain-specific phonological functions.

The seminal dichotic perception studies of lexical tone focused on Thai (Van Lancker, 1980). Speakers of a nontone language (English), musically untrained and musically trained, served as control groups. Ear preferences were compared for three sets of stimuli: stimulus (1), a minimal set of Thai words distinguished by tone; stimulus (2), a minimal set of Thai words distinguished by initial consonant; and stimulus (3), a minimal set of hums distinguished by pitch changes homologous to stimulus (1). The Thai group showed a significant REA for both the tone words (stimulus (1)) and the consonant words (stimulus (2)), but no ear advantage for hums (stimulus (3)). English listeners, regardless of musical training, showed a REA for the consonant words of stimulus (2) only. Taken together, these findings suggest that pitch perception is lateralized to the LH when pitch variations signal language-specific functions. No ear advantage is observed for tone words in either the musically untrained or the musically trained English group, because pitch patterns are not exploited at the syllable level in English. It is unlikely that the REA for Thai tones can be attributed simply to greater familiarity with pitch contrasts, since the musically trained English group failed to show a right ear advantage for tone words. A right ear advantage for consonant words is observed in both Thai and English groups, because initial consonants are contrastive in both languages. The hums yielded no significant ear effects regardless of language experience, because they were not linguistically significant.

A left hemisphere superiority for dichotic perception of lexical tone has also been demonstrated in Mandarin Chinese (Wang et al., 2001, and references therein). Stimuli consisted of four quadruplets of Mandarin words distinguished minimally by tone. Musically untrained English listeners served as the control group. An REA for the tone words was observed in the Mandarin group only. No ear advantage was found for the nontone language listeners. Dichotic perception of the two Norwegian pitch accents similarly reveals an LH advantage for Norwegian words differing only in tone. Combined, these findings reinforce the view that lateralization of pitch processing varies depending on language experience. Hemispheric specialization appears to be based on language functions instead of physical parameters.

All of the aforementioned studies of dichotic perception of tone used real words as stimuli. Thus, it may be the case that these findings on prelexical processing of tone are confounded with lexical processing (Wong, 2002).


URL: https://www.sciencedirect.com/science/article/pii/B0080448542047969

Cognitive Psychology of Memory

N.W. Mulligan, in Learning and Memory: A Comprehensive Reference, 2008

2.02.2.2 The Filter Model and the Debate between Early and Late Selection Theories

From research on dichotic listening, Broadbent (1958) developed an early, highly influential information processing account of attention, perception, and memory (Figure 2). In this model, Broadbent depicted cognition as a series of discrete, serial information-processing stages. Processing begins with sensory systems, which can process large amounts of raw sensory information in parallel. Other research (e.g., split-span studies) indicated that sensory information may be preserved for a short time prior to selection. Thus, Broadbent argued that the initial sensory processing of the perceptual characteristics of inputs was deposited into a short-term memory buffer. This is the point at which attention operates in Broadbent’s model, acting as a selective filter. The filter blocks out unwanted inputs based on selection criteria that reflect the goals of the cognitive system. For example, given that the subject’s goal is to succeed at the shadowing task, selection is based on the physical location of the to-be-attended message. The selection criteria operate on one of the perceptual characteristics of inputs in the memory buffer. The attended material gains access to a limited-capacity perceptual system (the P system in Figure 2), which allows analysis for content, conscious awareness, and ultimately, encoding into long-term memory.


Figure 2. The filter model. Adapted from Broadbent DE (1958) Perception and Communication. London: Pergamon Press, with permission.
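To make the stage structure explicit, here is a toy rendering of the filter model as a serial pipeline: parallel sensory registration into a buffer, an early filter that selects on a physical attribute (ear of arrival, standing in for location), and a limited-capacity perceptual system that alone analyzes content. The function names and data layout are illustrative assumptions, not Broadbent's formalism.

```python
# Toy sketch of Broadbent's (1958) filter model as a serial pipeline.
# Names and data structures are illustrative assumptions.

inputs = [("left", "dog"), ("right", "seven"), ("left", "tree"),
          ("right", "lamp")]

def sensory_buffer(items):
    """Parallel, pre-attentive registration of raw input plus its
    physical attributes (here, ear of arrival)."""
    return [{"ear": ear, "raw": word} for ear, word in items]

def selective_filter(buffer, attended_ear):
    """Early selection: pass items by physical attribute only; content
    plays no role in selection."""
    return [item for item in buffer if item["ear"] == attended_ear]

def p_system(selected):
    """Limited-capacity perceptual system: only filtered items receive
    content analysis (crudely, 'identification') and can reach
    long-term memory."""
    return [item["raw"].upper() for item in selected]

print(p_system(selective_filter(sensory_buffer(inputs), "right")))
# ['SEVEN', 'LAMP'] -- unattended items never undergo content analysis,
# which is why later findings (e.g., noticing one's own name in the
# ignored channel) were problematic for the model.
```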

It should be noted that Broadbent’s model has many similarities to the Atkinson and Shiffrin (1968) modal model. The memory buffer in Broadbent’s model corresponds to the sensory register of Atkinson and Shiffrin, in that both store raw sensory information prior to selective attention and refined perceptual analysis. The limited-capacity perceptual system of Broadbent is analogous to the short-term store of Atkinson and Shiffrin in terms of its limited capacity, its equation with the contents of awareness, and its role as a conduit to long-term storage.

Broadbent’s model proposed that the selective filter operates on the basis of physical characteristics of the message and prior to the analysis of meaning. Consequently, this model is referred to as an early-selection model of attention. Subsequent research on the filter theory raised questions about early selection and gave rise to important competitor models. For example, Moray (1959) found that participants often noticed their own name when it was presented in the ignored channel. According to the filter model, detailed content such as the identity of a word or name should be unavailable from an unattended message. A converging result came from Treisman’s (1960) study, in which one story was presented to the attended ear (and shadowed by the subject) and a second story was presented to the ignored ear. Partway through the study, the first story switched tracks and replaced the story in the ignored ear (the first story itself being replaced in the attended track with a new, third, story). According to the filter theory, if the subject is not attending to the irrelevant ear, then they should not process any of its content, and thus should have no awareness that the first story continued in the irrelevant channel. However, subjects typically continued to shadow the first story even after it switched ears. Neither of these results comports with Broadbent’s original filter model.

Treisman (1964) handled these new findings by modifying Broadbent’s theory. Treisman argued that selective attention does not operate as an all-or-none filter but, rather, operates like a gain control, attenuating unattended inputs. Such attenuated stimuli still might be recognized (i.e., their content fully analyzed) if the stimulus is very important (such as one’s own name) or has been primed by attended semantic context. This attenuation view preserves the early placement of the selective filter (now an attenuator) as operating prior to semantic analysis and the processes required for encoding into long-term memory.

An alternate approach was adopted by late selection models (e.g., Deutsch and Deutsch, 1963; Norman, 1968). These models proposed that stimuli routinely undergo substantial analysis (up to identification processing), whether attended or not. The selective mechanism in these models operates after perceptual and content analysis but before response selection. Under this view, semantic analysis helps determine which stimulus is most relevant for current goals and should guide behavioral response.

A number of subsequent results were taken as supportive of late selection. For example, Lackner and Garret (1972) found that the interpretation of ambiguous sentences in the attended ear was biased by words presented in the ignored channel. The results of Corteen and associates (Corteen and Wood, 1972; Corteen and Dunn, 1974) were similarly interpreted as supporting late selection and memory access without attention. Participants in these studies initially underwent a learning phase in which a set of words was paired with electric shock. In a subsequent phase of the experiment, participants showed a heightened galvanic skin response (GSR) to the shock-paired words even when these words were presented to the ignored channel in a dichotic listening task.

Results such as these imply semantic analysis of unattended information (and late selection) but have been controversial because of the possibility of covert shifts of attention. It is possible in dichotic listening tasks (and in other selective attention tasks) that a subject’s attention might wander to the nominally unattended ear. The critical question is whether results such as the above represent semantic analysis of unattended information or momentary shifts of attention to the ignored ear. As framed by Lachter et al. (2004), the issue is whether there is leakage (penetration of the selective filter by semantic content) or slippage (covert, perhaps unintentional, shifts of attention). In a review of the literature, Holender (1986) concluded that results that appear to support late selection in auditory selective attention are actually the result of such attentional slippage. A more recent review comes to the same conclusion for studies of both auditory and visual selective attention (Lachter et al., 2004). Furthermore, Lachter et al. (2004: 884–885) argue that the potential for slippage was underestimated in early research because estimates of the time necessary for attentional shifts were quite high (estimated to be 500 ms or longer in Broadbent (1958)). More modern estimates are as low as 150 ms for voluntary (endogenous) shifts of attention and 50 ms for involuntary (exogenous) shifts. If attention can be so rapidly shifted from one stimulus (or channel) to another, it raises the possibility that rapid shifts of attention might go unnoticed by the experimenter.
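A back-of-envelope calculation shows why the newer shift estimates matter for the slippage argument. Assuming a presentation rate of about one word per second (an assumption for illustration; rates vary across studies), a round trip of attention to the ignored channel and back easily fits between words under modern estimates but not under the original one:

```python
# Back-of-envelope check of the slippage argument. The 1 word/s rate is an
# assumption; the shift-time estimates are the ones quoted in the text.

word_interval_ms = 1000   # assumed presentation rate: one word per second

for label, shift_ms in [("Broadbent (1958) estimate", 500),
                        ("voluntary shift", 150),
                        ("involuntary shift", 50)]:
    round_trip = 2 * shift_ms          # shift to ignored channel and back
    verdict = ("could go unnoticed" if round_trip < word_interval_ms
               else "should visibly disrupt shadowing")
    print(f"{label}: {round_trip} ms round trip -> {verdict}")
```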

This issue has been raised regarding a number of studies. Following up on the results of Corteen and Dunn (1974), Dawson and Schell (1982) presented shock-associated words in the ignored channel of a dichotic listening task. Trials were separated based on evidence for attentional shifts to the ignored channel (based on on-line performance such as shadowing errors). The heightened GSR effect was much greater on trials exhibiting evidence of attention shifts than on trials exhibiting no such evidence. In a similar vein, Wood et al. (1997) varied the difficulty of the primary shadowing task (by increasing the rate of speech to be shadowed) under the assumption that, as the shadowing task becomes more demanding, covert attentional shifts to the ignored channel would be less likely. Evidence for semantic processing and long-term memory of the ignored channel was only found for easier shadowing tasks; no such evidence was found for the more demanding shadowing task, consistent with the idea that covert shifts of attention may give the appearance of semantic processing and long-term memory for unattended materials (for thorough review of these issues, see Holender, 1986; Lachter et al., 2004).

Finally, working memory resources, to be discussed in more detail next, may play a critical role in selective attention. For example, Conway et al. (2001) investigated the cocktail-party effect of Moray (1959) with participants of high and low working memory capacity. Prior research indicates that low working memory capacity is associated with distractibility. Conway et al. (2001) reasoned that more distractible subjects are more likely to allow their attention to shift to the irrelevant channel. This study found that the low-capacity subjects were much more likely to notice their name in the irrelevant channel. This result is consistent with the previous research suggesting that attention shifts may be responsible for the appearance of semantic processing of the ignored channel. In addition, this result indicates a close connection between working memory and selective attention. Similarly, de Fockert et al. (2001) found that increasing a working memory load made subjects more susceptible to distraction in a visual selective-attention task. Several researchers have now suggested that selective attention processes are controlled by working memory resources (e.g., Engle, 2002; Lavie et al., 2004). That is, working memory capacity may dictate the extent to which we can successfully focus on one input in the face of distraction (as in dichotic listening). From a historical perspective, this is an interesting inversion: psychology has traditionally described attention as controlling access to memory structures, but this view implies that a memory structure (working memory) controls the function of an attentional process (selective attention).


URL: https://www.sciencedirect.com/science/article/pii/B9780123705099001340

Disorders of Peripheral and Central Auditory Processing

Deborah Moncrieff, ... Amanda Ortmann, in Handbook of Clinical Neurophysiology, 2013

11.3.2.1 Dichotic listening

In a test of dichotic listening, the patient hears two different words or sentences presented simultaneously to the left and right ears. Depending on the task, the patient may be asked to repeat both of the words that were heard or may be asked to ignore information in one ear while repeating what was presented in the other ear. When the patient is asked to repeat from both ears, dichotic listening involves the integration of binaural auditory input. When the patient is asked to ignore one ear and repeat what is presented to the other ear, the task involves the segregation of competing binaural input. The ability to both integrate and segregate binaural information during normal listening is essential to accurate speech perception and localization, especially in difficult listening situations. There is strong evidence of weaknesses in dichotic listening among children with a variety of learning, language, and reading problems (Moncrieff and Musiek, 2002; Dlouha et al., 2007; Billett and Bellis, 2011; Obrzut and Mahoney, 2011), and evidence suggests an aging effect on the perception of dichotic stimuli (Zenker et al., 2007). This may be why these tasks remain central to the APD battery.

The 50th anniversary of the development of the dichotic digits test by Kimura (1961) was recently celebrated with a special journal issue devoted to research on dichotic listening (Hugdahl, 2011). Other dichotic listening tasks followed, including the Staggered Spondaic Words test (SSW) (Katz, 1962). One pattern of performance deficit on the SSW is significantly poorer performance during the competing condition in one ear, which later became known as a “left-ear deficit” (Jerger et al., 1991). Another commonly used dichotic words test is the Competing Words subtest from the SCAN and its revisions (Keith, 1983), but its normative data failed to report ear-specific information; as a result, significantly poor scores in one ear were often masked by excellent scores in the other ear, leading to a possibly incorrect interpretation that results were normal (Moncrieff, 2006). The presence of an abnormally large difference in performance between the two ears, regardless of whether the right or left ear performs better, has emerged as an important diagnostic criterion for one specific type of APD termed “amblyaudia” (Moncrieff, 2010). Dichotic listening tests with digits and words can be used with children as young as 5 years of age (Katz, 1962; Keith, 2009; Moncrieff, 2011), although results in younger children should be interpreted cautiously and used as baseline measures for comparison with later results. Evidence suggests that the prevalence of a dichotic right-ear advantage (REA) has been overestimated, possibly contaminating normative data that report average results for right and left ears without regard to which ear is dominant during performance on the task (Moncrieff, 2011). New normative data reflecting the average scores of the dominant and non-dominant ears, without regard to right or left, are recommended for interpretation of dichotic listening test results across all ages.

The diagnosis of amblyaudia depends upon normal performance in one ear (the listener’s dominant ear) with significantly poorer performance in the other non-dominant ear during standard word-based dichotic listening tests that assess the listener’s binaural integration skills (Moncrieff, 2010). Other dichotic listening tests use sentences or consonant–vowel (CV) pairs that assess the listener’s skills with binaural separation. During a binaural separation task, the audiologist should watch for evidence of the same pattern of poorer performance in the listener’s non-dominant ear when listening to sentences. Individuals who exhibit larger than normal asymmetry during dichotic listening tasks with words and digits often demonstrate a similar unilateral weakness with dichotic sentences, but they may fail to exhibit the same result when tested with CV pairs administered as a test of binaural separation (Asbjørnsen et al., 2003; Helland et al., 2008). The discrepancy between dichotic results with words or sentences and CVs has eluded explanation, but some researchers have reported evidence that the dichotic CV task may index auditory attention when the non-forced condition is compared to results obtained when the listener is asked to ignore one ear and focus only on the other ear (Saetrevik and Hugdahl, 2007).
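A minimal sketch of the dominant/non-dominant scoring logic described above follows. The normative mean, standard deviation, and asymmetry cutoff are illustrative assumptions; actual diagnostic criteria come from age-specific norms (Moncrieff, 2010, 2011), not from this snippet.

```python
# Sketch of dominant/non-dominant ear scoring for amblyaudia screening.
# Normative values and the asymmetry cutoff are illustrative assumptions;
# real criteria require age-specific normative data.

def amblyaudia_screen(right_pct: float, left_pct: float,
                      dom_mean: float = 90.0, dom_sd: float = 5.0,
                      asym_cutoff: float = 20.0) -> dict:
    """Score without regard to right/left: the better ear is 'dominant'."""
    dominant = max(right_pct, left_pct)
    nondominant = min(right_pct, left_pct)
    dominant_normal = dominant >= dom_mean - 2 * dom_sd
    asymmetry = dominant - nondominant
    return {
        "dominant_ear_normal": dominant_normal,
        "interaural_asymmetry": asymmetry,
        "amblyaudia_flag": dominant_normal and asymmetry > asym_cutoff,
    }

# Example: normal dominant-ear score with a large interaural asymmetry.
print(amblyaudia_screen(right_pct=92.0, left_pct=55.0))
# {'dominant_ear_normal': True, 'interaural_asymmetry': 37.0,
#  'amblyaudia_flag': True}
```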


URL: https://www.sciencedirect.com/science/article/pii/B9780702053108000119

Which of the following statements is most accurate in the context of lateralization of language?
It is most likely left-lateralized.

Which statement is most accurate with respect to the lateralization of language among right-handers?
It is most likely left-lateralized.

Which statement best characterizes lateralization?
It is the tendency for the left and right hemispheres to excel in certain activities.

Which of the following statements best expresses the relationship between the central nervous system and the endocrine system?
The endocrine system influences and is influenced by the central nervous system.