What refers to the notion that information is processed simultaneously on parallel tracks?

Information processing theories of emotion regulation suggest that the flexible use of attentional control is important for maintaining psychological well-being (Gross, 2015).

From: Emotion in Posttraumatic Stress Disorder, 2020

Problem Solving

R.E. Mayer, in Encyclopedia of Human Behavior (Second Edition), 2012

Problem Space and Search Processes

Information-processing theories of problem solving focus on constructing a problem space and finding a path through the problem space (Newell and Simon, 1972; Novick and Bassok, 2005). A problem space consists of a representation of the initial state, the goal state, and all intervening states. For example, the problem space for solving the equation 2X - 5 = X has this equation as the initial state and X = ___ as the goal state. Two of the intervening states directly after the initial state are 2X = X + 5 and 2X - X - 5 = 0, which are created by applying legal operators such as “add 5 to both sides” or “subtract X from both sides.” Similarly, other states are created by applying operators to these states, and so on.

Once a problem is represented as a problem space, the problem solver's task is to search for a path from the initial state to the goal state. Means–ends analysis is a search strategy in which the problem solver works on one goal at a time; if that goal cannot be achieved directly, the problem solver sets a new goal of removing barriers, and so on. This search strategy is commonly used in computer simulations of problem solving and is consistent with the way that beginners solve problems.
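To make the problem-space idea concrete, here is a minimal Python sketch, an illustrative toy rather than Newell and Simon's production-system implementation, in which the state encoding and operator names are invented. It builds the problem space for 2X - 5 = X and searches it breadth-first for a path of legal operator applications to the goal state.

```python
from collections import deque

# A state (a, b, c, d) stands for the equation  a*X + b = c*X + d.
# The initial state encodes 2X - 5 = X; the goal has the form 1*X + 0 = 0*X + d,
# i.e. "X = d".  (Toy example; realistic problem spaces are far larger.)
INITIAL = (2, -5, 1, 0)

def is_goal(state):
    a, b, c, _ = state
    return a == 1 and b == 0 and c == 0

def legal_moves(state):
    """The two 'legal operators': move the constant term to the right-hand
    side, or move the X term to the left-hand side."""
    a, b, c, d = state
    moves = []
    if b != 0:
        moves.append((f"add {-b} to both sides", (a, 0, c, d - b)))
    if c != 0:
        moves.append((f"subtract {c}X from both sides", (a - c, b, 0, d)))
    return moves

def search(initial):
    """Breadth-first search through the problem space; returns one path of
    operator applications from the initial state to the goal state."""
    frontier = deque([(initial, [])])
    seen = {initial}
    while frontier:
        state, path = frontier.popleft()
        if is_goal(state):
            return path + [f"goal reached: X = {state[3]}"]
        for name, nxt in legal_moves(state):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, path + [name]))
    return None

print(search(INITIAL))
# ['add 5 to both sides', 'subtract 1X from both sides', 'goal reached: X = 5']
```

A means–ends strategy would differ only in how it orders the frontier, always attacking the largest remaining difference first; breadth-first search is used here simply because it keeps the sketch short.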

URL: https://www.sciencedirect.com/science/article/pii/B9780123750006002901

Unraveling the “New Morbidity”: Adolescent Parenting and Developmental Delays

John G. Borkowski, ... Keri Weed, in International Review of Research in Mental Retardation, 1992

D Attachment and Information Processing

Information-processing theory suggests an alternative explanation for how the quality of attachment in infancy might relate to cognitive development in early childhood. The hypothesis is that an infant’s attempts to cope with an insecure attachment relationship require most of his or her attentional resources, leaving fewer resources for exploring and learning about new aspects of the environment (cf. Main, 1991).

In a series of experimental and correlational studies, Dunham and colleagues (Dunham, Dunham, Hurshman, & Alexander, 1989; Dunham & Dunham, 1990) examined the relationship between contingent and noncontingent social interactions and subsequent attention to a novel stimulus in 3-month-old infants. Results suggest that, as early as 3 months of age, aspects of the interaction between infants and their caregivers begin to influence the quality of infants’ attention to relevant aspects of the environment. Dunham and Dunham (1990) concluded that “the more time the mother–infant pair spent in the dyadic state of vocal turn-taking the longer the infants fixated on the subsequent stimulus pattern and the shorter their interfixation intervals during the contingency task” (p. 789). Similarly, Tamis-LeMonda and Bornstein (1989) investigated the relationship between infants’ habituation at 5 months and their competence in a free-play setting at 13 months, with a sample of 37 middle- to upper-SES households. Results indicated that habituation (a measure of early attentional processing), but not the maternal encouragement of attention, predicted the quality of infants’ symbolic play. It is interesting to note that the same contingent interactions that we believe affect infants’ attentional processes are also related to the quality of attachment (Colin, 1991).

URL: https://www.sciencedirect.com/science/article/pii/S0074775008601196

Methodology: the think-aloud problem-solving activity and post-activity interview

Barbara Blummer, Jeffrey M. Kenton, in Improving Student Information Search, 2014

Guiding theory

Information processing theory informed the theoretical framework for the study of education graduate students’ metacognitive abilities and information-seeking behavior. Information processing theory is based on Miller’s (1956, 1960) concepts of chunking and the Test-Operate-Test-Exit (TOTE) unit. According to Miller, individuals’ ability to chunk information, or recode it into units, allowed them to increase the amount of material they could successfully remember. His research on recoding, coupled with Newell, Shaw, and Simon’s work in 1957 and 1958 on complex information processing systems, altered Miller’s beliefs about what “guides behavior” (1960, p. 2). In his publication Plans and the Structure of Behavior, he likened man to a computer that contained plans, strategies, executions, and images. Miller described plans as hierarchies of instructions that identified the order of operations. On the other hand, he defined images as “organized knowledge the organism has about itself and its world,” and he believed that these included “values” as well as facts (p. 17). According to Miller, the feedback loop, or TOTE, represented the basic unit of analysis for behavior. He suggested that individuals’ actions resulted from a system of hierarchical TOTE units that were controlled by plans or processes. Although he acknowledged that plans were inherited, he suggested that variations in their source, span, detail, flexibility, speed, coordination, retrieval, and openness, as well as their stop-orders, fostered different behaviors among individuals.
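Miller's TOTE unit is, at bottom, a feedback loop: test whether the current state meets the criterion, operate on the state if it does not, test again, and exit once the incongruity is removed. Below is a minimal sketch of that control structure using the classic nail-hammering illustration; the function and variable names are ours, not Miller's notation.

```python
def tote(test, operate, state, max_steps=100):
    """Test-Operate-Test-Exit: keep operating on the state until the test
    is satisfied (or a step limit is hit), then exit with the final state."""
    for _ in range(max_steps):
        if test(state):          # Test: does the state match the criterion?
            return state         # Exit: the incongruity has been removed
        state = operate(state)   # Operate: act to reduce the incongruity
    raise RuntimeError("criterion not reached within the step limit")

# Hammering a nail: keep striking until the head is flush with the board.
nail = {"height_above_board_mm": 12}
flush = lambda s: s["height_above_board_mm"] <= 0
strike = lambda s: {"height_above_board_mm": s["height_above_board_mm"] - 3}

print(tote(flush, strike, nail))   # {'height_above_board_mm': 0}
```

Nested TOTE units, in which the Operate phase of one unit is itself a lower-level TOTE loop, correspond to the hierarchical plans Miller describes.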

According to Miller, individuals’ problem solving was a cyclical process that centered on information collection and included revision of images, predictions, and testing. He argued that individuals solve problems by utilizing images rather than systematic plans, because the latter are inefficient. Miller suggested that as individuals compared “what is” to “what ought to be” (1960, p. 174), they created images that served as potential solutions to problems. He believed individuals’ images were based on values and facts. Moreover, Miller attributed obstacles in the problem-solving process to the inability of the image to represent the “problem situation” (1960, p. 174). On the other hand, he maintained that the formation of heuristic plans fostered the development of solutions to well-defined problems.

Human information-processing theory points to a general plan for human behavior and acknowledges similarities among individuals’ information-processing skills. Foremost, the theory illustrates that the iterative nature of problem solving is reflected in the process of information collection, revision, and testing of alternative images. This vision of problem solving suggests that it is controlled by cognitive as well as metacognitive strategies, as individuals continually regulate the process to develop new solutions. The theory also recognizes differences among individuals’ metacognitive skills. Lastly, the theory highlights the role of the problem, or information need, in controlling the process. In this instance, a well-defined problem can be solved by a different approach than its ill-structured counterpart. According to Miller, well-defined problems enable the searcher to “recognize the solution,” while more complex problems do not have an easily identifiable way of revealing “what he is looking for” (p. 170).

In our study, participants verbalized their plans and strategies while problem solving in a research database. The use of the think-aloud approach facilitated the identification of cognitive as well as metacognitive behaviors, and illustrated the tutorial’s impact on participants’ search strategies.

URL: https://www.sciencedirect.com/science/article/pii/B9781843347811500116

Systems Theories and a Priori Aspects of Perception

J. Scott Jordan, in Advances in Psychology, 1998

Energy Transformation and Autocatalytic “Aboutness”

Information-processing theory (Chalmers, 1996), committed as it is to the notion that the phenomenal is not logically supervenient upon the physical (the condition which brings about the “Explanatory Gap”), would be forced to claim that the anticipatory, phenomenal shifts in “perceived” ego-centric space that take place in phantom limbs and the Phantom Array are actually quite unnecessary. Rather, they are simply phenomenal events that, for some yet-to-be-determined reason, find themselves yoked to the “physical” events of the brain. Elsewhere I have argued (Jordan, 1997) that one need accept such epiphenomenalism only if one “accepts” the “physical-phenomenological” distinction upon which it is based. A more coherent, parsimonious approach is to model organisms and brains as open, thermodynamic, energy-transformation systems (Lotka, 1945; Odum, 1988) that (1) are far from equilibrium and (2) have attained, phylogenetically, autocatalytic closure (Kauffman, 1995). Such organic systems are capable of maintaining the “wholeness” of their structure because the interactions among their chemical components produce products that serve to sustain (i.e., catalyze) either their own interactions or other chemical interactions that are vital to the maintenance of the “whole” structure. Such maintenance is thermodynamic in nature; it involves the intake, transformation, and dissipation of energy. Within such systems, then, the structural dynamics of any given component “contain,” as part of their own dynamic structure, the changes in their structure that have been brought about by their interaction with other components of the system. Thus, the informational/structural state of any given component is never simply “about” itself but rather is always “about” itself and all the other structures with which it has come into contact. Further, Odum (1988) claims that as such organic systems do their thermodynamic work, they not only transform energy from one state to another (e.g., from raw sunlight to chemical energy in plants) but also simultaneously change the energy’s quality. The point to be made is the following: just as the structural/informational dynamics of a component within an autocatalytic system are never simply “about” just that component, the structural/informational dynamics of one “quality” of energy are simultaneously “about” all other “qualities” of energy with which those structures come into contact. For example, the dynamics of the chemical reactions (i.e., energy transformations) which take place in the visual cortical neurons of V1 are simultaneously “about” (1) themselves, (2) the electromagnetic radiation striking the photoreceptors (a “lower-quality” form of energy), (3) the neurotransmitters released by projections from the lateral geniculate nucleus, and (4) the neurotransmitters that are “fed back” to V1 neurons from higher brain centers (a “higher-quality” form of energy).

If one takes Odum’s (1988) lead, then, and conceptualizes the world of nature as a self-organizing energy-transformation hierarchy (versus a “physical” world, as is done in the information-processing approach), one rather quickly comes to the conclusion that within such a vastly complex web of organization, the attempt to parse structure into information, matter, and energy is, at best, a “conceptual” task, not an ontological one. Thus, instead of asking, “How does the physical give rise to the phenomenal?” one asks, “What is the thermodynamic structural nature of the levels of ‘aboutness’ that are nested within phenomenology?” In other words, phenomenology is no longer seen as being constructed of a different “substance” than that of the “physical” world. Rather, both phenomenology and the physical world are “of” the same substance: transformed energy.

URL: https://www.sciencedirect.com/science/article/pii/S0166411598800228

Assessment

Robyn S. Hess, Rik Carl D'Amato, in Comprehensive Clinical Psychology, 1998

4.09.3.1 Models of Learning

Information-processing theories have proved extremely useful in conceptualizing learning because these models can be applied to any given cognitive task and allow the practitioner to specify where the learning process is breaking down. Silver (1993) proposed an information-processing model based on four steps: input (how information from the sense organs enters the brain), integration (interpreting and processing the information), storage (storing the information for later retrieval), and output (expressing information via language or muscle activity). Learning is reliant upon each of the first three steps and is observed or inferred from the fourth step. Other models of information processing highlight the importance of working memory in skill acquisition and learning (Baddeley, 1986; Just & Carpenter, 1992; Swanson, 1995). Working memory has traditionally been defined as a system of limited capacity for the temporary maintenance and manipulation of information (e.g., Baddeley, 1986; Just & Carpenter, 1992) and most closely corresponds to the integration step in Silver's model. Tasks that measure working memory are those that require the client to remember a small amount of material for a short time while simultaneously carrying out further operations. In daily life, these tasks might include remembering a person's address while listening to instructions about how to reach a specific destination (Swanson, 1995). When viewed from this perspective, working memory differs from the related concept of short-term memory, which is typically described as remembering small amounts of material and reproducing it without integrating or transforming the information in any way (e.g., repeating back a series of numbers) (Cantor, Engle, & Hamilton, 1991; Just & Carpenter, 1992). Working memory appears to be extremely important to an individual's ability to learn, and in adult samples it has correlations of 0.55–0.92 with reading and intelligence measures (e.g., Daneman & Carpenter, 1980; Kyllonen & Christal, 1990).
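Silver's four steps can be pictured as a simple processing pipeline. The sketch below is only a schematic rendering under our own naming (the stage functions are invented placeholders, not Silver's formalism); its point is that a learning failure can be localized to whichever stage stops passing information along.

```python
# Schematic pipeline for Silver's (1993) four-step model.
def input_stage(stimulus):         # information from the sense organs enters
    return {"raw": stimulus}

def integration_stage(percept):    # interpreting/processing (working memory)
    return {"interpreted": percept["raw"].lower().split()}

def storage_stage(memory, code):   # storing information for later retrieval
    memory.append(code)
    return memory

def output_stage(memory):          # expressing information via language/action
    return " ".join(word for code in memory for word in code["interpreted"])

memory = []
for sentence in ["The cat sat", "on the mat"]:
    percept = input_stage(sentence)
    code = integration_stage(percept)
    memory = storage_stage(memory, code)

print(output_stage(memory))        # "the cat sat on the mat"
```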

In an effort to promote the notion that input and integration of stimuli can impact subsequent learning, Cronbach and Snow (1977) advanced a theory suggesting that some types of individuals might benefit from one form of treatment, whereas others might benefit from another type of treatment: an aptitude by treatment interaction (ATI). Many researchers and educators alike believe that matching learner characteristics with treatment approaches can enhance learning (e.g., Cronbach & Snow, 1977; Resnick, 1976; Reynolds, 1981b). However, subsequent studies have demonstrated little support for this theory (e.g., Arter & Jenkins, 1977; Tarver & Dawson, 1978). Initially, theories of input examined learner modalities (e.g., visual, auditory, kinesthetic), which were later deemed to be too simplistic (Arter & Jenkins, 1977; Kaufman, 1994; Tarver & Dawson, 1978). More recently, neuropsychological models have been applied to ATIs and offer promise for identifying aptitudes and prescribing treatments (D'Amato, 1990; Hartlage & Telzrow, 1983). One of the major techniques that Cronbach and Snow (1977) suggested for matching treatment approaches with learner aptitudes was “capitalization of strengths.” Our increasing knowledge of how the brain functions allows clinicians to obtain a more detailed understanding of how a client learns new information. For example, although the cerebral hemispheres act in concert, the right hemisphere seems to be specialized for holistic, spatial, and/or nonverbal reasoning, whereas the left shows a preference for verbal, serial, and/or analytic tasks (Gaddes & Edgell, 1994; Lezak, 1995; Reynolds, 1981a; Walsh, 1978). Similarly, models of cognitive processing have been proposed that agree with this specialization of how scientists think the brain processes information; some have called these preferential processing styles (D'Amato, 1990). For example, simultaneous processing ability has been affiliated with the right hemisphere because of its holistic nature; it deals with the synthesis of parts into wholes and is often implicitly spatial (Das, Kirby, & Jarman, 1979). In contrast, the left hemisphere processes information using a more successive/sequential method, considering the serial or temporal order of input (Dean, 1984, 1986). Models of brain organization have also been proposed that attempt to explain the diversity and complexity of behavior.

An expansion of the hemispheric specialization approach is offered by the planning, attention, simultaneous, successive (PASS) cognitive processing model (Das et al., 1994), which proposes four processing components. This model is based on the neuropsychological model of Luria (1970, 1973, 1980; Reynolds, 1981a) and presents a comprehensive theoretical framework by which cognitive processes can be examined. On the basis of his clinical investigations with brain-injured patients, Luria (1973) suggested that there are three functional units that provide three classes of cognitive processes (i.e., memory, conceptual, and perceptual) responsible for all mental activity. Figure 2 provides a graphic presentation of the PASS model of cognitive processing. The functional units work in concert to produce behavior and provide arousal and attentional (first unit), simultaneous–successive (second unit), and planning (third unit) cognitive processes. The PASS model separates the second unit into two individual processes (i.e., simultaneous and successive). Instruments can be used to measure individual strengths in these different styles of processing.

Figure 2. PASS model of cognitive processing. (Assessment of Cognitive Processes: The PASS Theory of Intelligence (p. 21), by J. P. Das, J. A. Naglieri, and J. R. Kirby, 1994, New York: Allyn & Bacon. Copyright 1994 by Allyn & Bacon. Reprinted with permission.)

Knowledge of the brain and theories governing information processing can determine the types of data collected during the assessment phase. For example, instead of simply observing whether the individual was successful at a task or set of measures, the practitioner looks beyond the product to determine the influence of related factors. These factors can include the nature of the stimuli used (visual, verbal, tactile), the method of presentation (visual, verbal, concrete, social), the type of response desired (verbal, motor, constructional), and the response time allowed (timed, untimed; Cooley & Morris, 1990). Other researchers have advocated a move to an even more intense examination of processing through the use of dynamic assessment strategies (Campione & Brown, 1987; Feuerstein et al., 1979; Palincsar, Brown, & Campione, 1991). Theoretically, this strategy allows the examiner to obtain information about the client's responsiveness to hints or probes, and thus elicits processing potential (Swanson, 1995). When an examinee is having difficulty, the examiner attempts to move the individual from failure to success by modifying the format, providing more trials, providing information on successful strategies, or offering increasingly more direct cues, hints, or prompts (Swanson, 1995). This approach allows the examiner an opportunity to evaluate performance change in the examinee with and without assistance. However, there is little if any standardized information available on this technique and it has been criticized for its clinical nature and poor reliability (e.g., Palincsar et al., 1991).

URL: https://www.sciencedirect.com/science/article/pii/B0080427073000146

The Effect of Mood On Creativity in the Innovative Process

Geir Kaufmann, in The International Handbook on Innovation, 2003

Mood and Type of Process

In cognitive information processing theory, the kind of process is often distinguished in terms of the level of processing and breadth of processing (Anderson 1990).

The level of processing refers to the question of whether processing occurs at a surface level, such as the sensory level, or at a deeper level, such as the semantic level, where information is further processed in terms of meaning and organizational structure in memory. Breadth of processing refers to the distance between informational units that are related during processing.

As we have seen above, the breadth dimension has been particularly targeted by mood theories (e.g., Isen & Daubman, 1984; Isen, Johnson, Mertz & Robinson, 1985). One possible explanation is that positive material is more richly interconnected than negative material, and that positive mood may provide a retrieval cue for linking information at a broader level, as suggested by Isen & Daubman (1984). This hypothesis also entails, however, that positive material is better organized than negative material. We should then expect that positive mood promotes both a higher level and a greater breadth of information processing than negative mood. In our theory, however, positive mood is linked to broader information processing on the premise that positive mood promotes a less problematic perception of the task, and possibly also overconfidence in one's ability to handle the task in question. Thus, positive mood may lead to a less cautious approach to the task than negative mood, promoting broader but also more superficial processing. More concisely, positive mood promotes broad and shallow processing, whereas negative mood leads to more constricted but deeper processing. This may be an advantage in a task that requires the generation of new ideas, but may be detrimental when the task requires careful consideration and deeper processing. In a recent experimental study, Elsbach & Barr (1999) observed the effect of induced positive and negative mood on the cognitive processing of information in a structured decision protocol. Here the subjects were given the task of assessing and weighing information in a complex case involving the core issue of whether a racing team should participate in an upcoming race. Negative-mood subjects made much more careful and elaborate considerations than the positive-mood subjects, who were more prone to make superficial, ‘holistic’ judgments without going into the depth and details of the available information.

URL: https://www.sciencedirect.com/science/article/pii/B9780080441986500140

Unconscious Cognition

J.F. Kihlstrom, in Encyclopedia of Consciousness, 2009

Attention, Automaticity, and Unconscious Processing

The earliest information-processing theories of attention, such as those proposed in the 1950s by Broadbent and others, were based, to one degree or another, on the metaphor of the filter. Information which made it past the filter was available for ‘higher’ information-processing activities, such as semantic analysis, while information which did not make it past the filter was not. This same attentional filter was also the threshold which had to be crossed for information to be represented in phenomenal awareness. The filter theories of attention, in turn, raised questions about how permeable the attentional filter was, and how much information processing could occur preattentively. Was preattentive – preconscious – processing limited to elementary physical features of the stimulus, or could it extend to the meaning of the stimulus as well?

In part to solve these problems, the notion of an attentional filter was replaced by the notion of attentional capacity. Capacity theories, such as the one proposed by Kahneman in the early 1970s, began with the assumption that human information-processing capacity is limited, and proposed that the ability to perform one or more task(s) depended both on the resources available and the resources required by the task(s) themselves. Whereas the filter models conceived of information processing as serial in nature, the capacity models implied that several tasks could be carried out simultaneously, so long as their attentional requirements did not exceed available resources.
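The contrast is easy to caricature in code: under a capacity model, any combination of concurrent tasks is feasible as long as their summed attentional demands stay within the available resources. The demand values in this sketch are invented purely for illustration.

```python
# Illustrative capacity model of attention.
TOTAL_CAPACITY = 1.0

def can_perform(tasks, capacity=TOTAL_CAPACITY):
    """tasks: dict mapping task name -> attentional demand (0..1).
    Concurrent performance succeeds only if total demand fits capacity."""
    return sum(tasks.values()) <= capacity

print(can_perform({"drive on an empty road": 0.3, "hold a conversation": 0.4}))   # True
print(can_perform({"drive in heavy traffic": 0.8, "hold a conversation": 0.4}))   # False
```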

The capacity view, in turn, led in the mid-1970s to a distinction between two types of cognitive processes, ‘controlled’ and ‘automatic.’ Controlled processes are conscious, deliberate, and consume cognitive capacity – they are what most people mean by cognition. By contrast, automatic processes are involuntary. That is, they are ‘inevitably evoked’ by the presentation of specific stimulus inputs, regardless of any intention on the part of the subject. And once evoked by an effective environmental stimulus, they are ‘incorrigibly executed,’ in a ballistic fashion. Automatic processes are ‘effortless,’ in that they consume little or no attentional capacity. And they are ‘efficient,’ in that they do not interfere with other ongoing mental activities. Perhaps because they are fast, or perhaps because they do not consume cognitive capacity, automatic processes are unconscious in the strict sense that they are inaccessible to phenomenal awareness under any circumstances and can be known only by inference.

Automatic processes are exemplified by the Stroop color-word task, in which subjects must name the color of ink in which words are printed: Subjects show a great deal of interference when the word names a color that is different from that in which the word is printed. Apparently, subjects simply cannot help reading the words. Helmholtz’s unconscious inferences may be viewed, in retrospect, as an early foreshadowing of the concept of automaticity. More recently, linguists such as Chomsky have argued that the universal grammar that underlies our ability to use language operates unconsciously and automatically.
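A toy simulation of Stroop interference makes the logic of automaticity visible (all timing parameters are invented rather than fitted to data): colour naming is the controlled process, while word reading runs off automatically and, on incongruent trials, adds a conflict cost the subject cannot avoid.

```python
import random

random.seed(0)

def stroop_rt(word, ink):
    """Simulated reaction time (ms) for naming the ink colour: a baseline
    colour-naming time plus noise, with an extra conflict cost whenever the
    automatically read word disagrees with the ink colour."""
    rt = 600 + random.gauss(0, 40)   # controlled colour naming
    if word != ink:                  # automatic word reading conflicts
        rt += 120                    # interference cost (invented value)
    return rt

congruent = [stroop_rt("red", "red") for _ in range(200)]
incongruent = [stroop_rt("green", "red") for _ in range(200)]
mean = lambda xs: sum(xs) / len(xs)
print(f"interference = {mean(incongruent) - mean(congruent):.0f} ms")   # roughly 120 ms
```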

Over subsequent decades, the concept of automaticity has evolved further. For example, some theorists proposed that automatic processes have properties other than the four canonical ones outlined above. Others have suggested that the features represent continuous dimensions, so that processes can be more or less automatic in nature. There has been some question as to whether all the canonical features must be present to identify a process as automatic: the four canonical features might comprise a kind of prototype of automaticity, rather than being singly necessary and jointly sufficient to define a process as automatic. Even the automaticity of the Stroop effect has been cast into doubt. Challenges to capacity theory, from which the earliest ideas about automaticity emerged, have led to alternative theoretical conceptualizations in terms of memory rather than attention. Nevertheless, the concept of automaticity has gained a firm foothold in the literature of cognitive psychology, and investigators have sought to develop methods, such as Jacoby’s ‘process-dissociation procedure,’ to distinguish between the automatic and controlled contributions to task performance.
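The logic behind the process-dissociation procedure is usually summarized with a pair of estimating equations; the sketch below states the standard independence version, and the worked numbers are hypothetical. On an inclusion test, a studied item can be produced either through controlled recollection (C) or, failing that, through automatic influence (A); on an exclusion test, it slips out only when automatic influence operates without recollection. Hence Inclusion = C + A(1 - C) and Exclusion = A(1 - C), so C = Inclusion - Exclusion and A = Exclusion / (1 - C).

```python
def process_dissociation(inclusion, exclusion):
    """Estimate controlled (C) and automatic (A) contributions from inclusion
    and exclusion performance, using I = C + A(1 - C) and E = A(1 - C)."""
    C = inclusion - exclusion
    A = exclusion / (1 - C) if C < 1 else float("nan")
    return C, A

# Hypothetical proportions of target completions on the two test types:
C, A = process_dissociation(inclusion=0.61, exclusion=0.27)
print(f"controlled = {C:.2f}, automatic = {A:.2f}")   # controlled = 0.34, automatic = 0.41
```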

URL: https://www.sciencedirect.com/science/article/pii/B9780123738738000803

Learning Theory and Behaviour

K. Cheng, J.D. Crystal, in Learning and Memory: A Comprehensive Reference, 2008

1.19.5.3 Packet Theory

A recent information processing theory that makes detailed predictions is Packet Theory (Kirkpatrick, 2002; Kirkpatrick and Church, 2003; Church and Guilhardi, 2005; Guilhardi et al., 2005). It has four component processes (Figure 12). As an example, we consider a rat on a FI 30-s schedule of reinforcement for head entries into a food hopper. This means that 30 s after a pellet of food is delivered, the next pellet will be primed. The first head entry after the FI of 30 s results in food delivery.

Figure 12. A schematic illustration of Packet Theory in explaining performance on a FI 30 schedule. The theory has four components. A perception unit parallels the clock process in Scalar Expectancy Theory and times the expected time to reward on the current trial. The memory process is a representation of the expected time to reward, derived from an average of past durations to reward. A variable threshold is retrieved for the current trial. The decision process switches to a high rate of production of Packets when the threshold has been crossed. Packets are theoretical underlying units that generate bouts of behavior, according to a function described in the response production unit on the right. Reproduced with permission from Church RM and Guilhardi P (2005) A Turing test of a timing theory. Behav. Processes 69: 45–58.

The perception process is akin to the clock process in SET. It tracks the perceived time to the expected reward, about 30 s in this case. The memory in SET is a collection of reference durations from which the subject picks one for the current trial. In Packet Theory, the memory is the average expected time to reward, averaged over the reward times of the past. In the FI 30 example, this is also about 30 s. The memory process also has a threshold, in parallel with the threshold in SET. The threshold has a normal distribution about some mean proportion of time to expected reward. The decision process is based on the threshold. When the threshold has been passed, the animal switches from a low rate of response packets to a high rate of response packets.

Packets are theoretical entities that generate bouts of responding. This is signified in the response process, which turns packets into bouts of responses with particular characteristics, such as a distribution of interresponse times. The theory is thus explicit in producing an actual stream of response times in a simulated trial. Other theories are typically not quite as explicit about responses. For example, it is not clear how the activation level of the neural net in Hopson’s (2003) model translates to actual responses in time.
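The chain of processes just described can be caricatured in a few lines for the FI 30-s example. The parameter values below are invented and each packet is collapsed into at most one response, so this is a sketch of the logic rather than the published implementation: memory supplies the expected time to food, a noisy threshold is drawn for the trial, and crossing the threshold switches packet generation from a low to a high rate.

```python
import random

random.seed(1)

def simulate_trial(expected_s=30.0, mean_thresh=0.7, sd_thresh=0.1,
                   low_rate=0.05, high_rate=1.0, dt=0.1):
    """One simulated FI 30-s trial.  A threshold (a proportion of the
    remembered time to food) is drawn from a normal distribution; before it
    is crossed, packets are generated at a low rate, and afterwards at a
    high rate.  Each packet is reduced here to a single response time."""
    threshold_s = expected_s * random.gauss(mean_thresh, sd_thresh)
    responses, t = [], 0.0
    while t < expected_s:
        rate = high_rate if t >= threshold_s else low_rate   # decision process
        if random.random() < rate * dt:                      # packet generation
            responses.append(round(t, 1))                    # response process
        t += dt
    return responses

print(simulate_trial())   # few responses early in the interval, many late
```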

This explicit translation into responses in time means that the model can be compared against empirical data on numerous fronts, including response distributions, response rates, and interresponse times. On these multiple fronts, the model does a reasonable job of simulating the actual data obtained. Because the memory process is said to come up with an average expected time to reward, it does so even when durations to reward varied in the past, as in some random distribution about a mean. This means that the model can predict the behavior of rats in such seemingly ‘nontiming’ conditioning procedures. One of the strengths of the model is that it can encompass a range of conditioning procedures, and in fact, it is a new theory of conditioning (see other chapters on theories of conditioning in this volume).

A further contribution from the development of Packet Theory is an explicit recommendation for evaluating how well the model accounts for obtained data in the form of a Turing test (Church and Guilhardi, 2005). It is called a Turing test after the mathematician Alan Turing, who devised a similar test to examine whether a human can distinguish the responses of a computer-generated program from those of another human. In the case of Packet Theory, a set procedure such as FI 30 is given to rats and produces a sizeable set of trials at asymptote. The model is given the same procedure, parameters are chosen, and the model generates a set of trials. The data generated are responses in time. To test the ‘fit,’ one picks one trial from the rat and one trial from the model. A third trial is then picked from either the model or the rat. This trial is formally compared to the two reference trials (one from the model and one from the rat). The comparison process ‘decides’ whether the sample trial is more similar to the model’s trial or to the rat’s trial. This process can be repeated many times.

If the model generates data indistinguishable from a rat, the Turing comparison process should be at chance and be correct 50% of the time. In fact, it is correct about 60% of the time, which shows that the model, while having considerable success, may be improved by further development.
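The comparison procedure itself can be sketched as follows. The two-feature similarity measure used here (response count and mean response time) and the randomly generated trials are stand-ins invented so the example runs; the published test uses the modelers' own formal comparison on real rat data.

```python
import random

random.seed(2)

def summarize(trial):
    """Reduce a trial (a list of response times) to two simple features."""
    return (len(trial), sum(trial) / len(trial) if trial else 0.0)

def distance(a, b):
    (na, ma), (nb, mb) = summarize(a), summarize(b)
    return abs(na - nb) + abs(ma - mb)

def turing_step(rat_trials, model_trials):
    """One comparison: pick a rat trial and a model trial as references,
    draw a probe from either source, and classify it by similarity."""
    rat_ref, model_ref = random.choice(rat_trials), random.choice(model_trials)
    source = random.choice(["rat", "model"])
    probe = random.choice(rat_trials if source == "rat" else model_trials)
    guess = "rat" if distance(probe, rat_ref) <= distance(probe, model_ref) else "model"
    return guess == source

def turing_accuracy(rat_trials, model_trials, n=10_000):
    return sum(turing_step(rat_trials, model_trials) for _ in range(n)) / n

# With statistically indistinguishable sources, accuracy hovers near 0.5 (chance).
rat = [[random.uniform(15, 30) for _ in range(random.randint(5, 12))] for _ in range(50)]
model = [[random.uniform(15, 30) for _ in range(random.randint(5, 12))] for _ in range(50)]
print(turing_accuracy(rat, model))
```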

URL: https://www.sciencedirect.com/science/article/pii/B9780123705099001868

Dyslexia (Acquired) and Agraphia

M. Coltheart, in International Encyclopedia of the Social & Behavioral Sciences, 2001

3 Computational Cognitive Neuropsychology of Reading

Currently a number of information-processing theories of reading aloud are expressed as computational models—that is, as executable computer programs which turn print into phonology and do so by using the specific information-processing procedures posited by the particular theory. Some of these models are in the connectionist tradition and are built up via connectionist learning algorithms such as backpropagation (Plaut et al. 1996, Zorzi et al. 1998). Others are nonconnectionist, with their architectures specified by the modeler rather than by a learning algorithm (Coltheart et al. 2001). This is an area of much ongoing theoretical development.

One way to test such models is to investigate their ability to simulate different patterns of acquired dyslexia. The way this is done is to try to ‘lesion’ particular components of the program so as to cause the computational model to exhibit the same kinds of errors as do patients with particular forms of acquired dyslexia—to cause the program to make regularization errors with irregular words, especially low-frequency ones, whilst still being good at reading nonwords (thus simulating surface dyslexia), or to cause the program to be poor at reading nonwords while still being able to read words (thus simulating phonological dyslexia). The computational cognitive neuropsychology of reading is currently being pursued both in the connectionist tradition (Plaut 1997) and in the nonconnectionist tradition (Coltheart et al. 2001).
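A drastically simplified sketch shows the lesioning logic in miniature. This is a toy with a two-word lexicon and hand-written letter-to-sound rules, not the model of Coltheart et al. (2001) or any published connectionist network; 'lesioning' here just means disabling one of two routes.

```python
# Toy two-route reading-aloud model (lexicon and rules invented for illustration).
LEXICON = {"yacht": "/jɒt/", "mint": "/mɪnt/"}            # whole-word lookup route
RULES = {"y": "j", "a": "æ", "c": "k", "h": "h",
         "t": "t", "m": "m", "i": "ɪ", "n": "n"}          # grapheme -> phoneme rules

def read_aloud(letters, lexical_route=True, sublexical_route=True):
    if lexical_route and letters in LEXICON:               # lexical route
        return LEXICON[letters]
    if sublexical_route:                                   # rule-based route
        return "/" + "".join(RULES.get(ch, "?") for ch in letters) + "/"
    return None                                            # no route available

print(read_aloud("yacht"), read_aloud("nim"))              # intact: /jɒt/ /nɪm/

# 'Lesion' the lexical route: nonwords survive but irregular words are
# regularized, the surface-dyslexic pattern.
print(read_aloud("yacht", lexical_route=False))            # /jækht/ (regularization error)

# 'Lesion' the sublexical route: known words survive but nonwords fail,
# the phonological-dyslexic pattern.
print(read_aloud("nim", sublexical_route=False))           # None
```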

URL: https://www.sciencedirect.com/science/article/pii/B0080430767035816

Classical and Contemporary Assessment of Aphasia and Acquired Disorders of Language

YVES TURGEON, JOËL MACOIR, in Handbook of the Neuroscience of Language, 2008

1.3.2. Psycholinguistic Approach to Language Assessment

The psycholinguistic approach to language assessment derives from information processing theories. Building on linguistics and cognitive psychology, it focuses on language processing and explains a given language disturbance by analyzing the simple or complex processes that may be disrupted in the individual's language system, rather than by describing and classifying clinical symptoms. In these models, cognitive functions, including language, are sustained by specialized, interconnected processing components represented in functional architectural models. For example, as shown in Figure 1.1, the ability to orally produce a word in picture naming is conceived as a staged process in which the activation flow is initiated in a conceptual–semantic component and ends with the execution of articulation mechanisms.

FIGURE 1.1. Schematic depiction of the cognitive neuropsychological model of spoken picture naming.

An assessment process based on cognitive neuropsychological models consists of the identification of the impaired and preserved processing components for each language modality (see Table 1.2). This analysis is performed by administering specific tasks or test batteries (e.g., the Psycholinguistic Assessments of Language Processing in Aphasia (PALPA; Kay et al., 1992)) aimed at evaluating each component and path in the model. For example, the evaluation of naming abilities in an aphasic person could be performed by administering tasks exploring the conceptual–semantic component (e.g., a semantic questionnaire), the phonological output lexicon (e.g., a picture naming task controlled for frequency, familiarity, and so forth), and the phonological output buffer (e.g., repetition of words and non-words controlled for length). Important information regarding the level of impairment is also derived from error analysis. An anomic error could stem from distinct underlying deficits (e.g., in the activation of conceptual–semantic representations or in retrieving the phonological forms of words in the output lexicon), leading to distinct types of errors (e.g., semantic substitutions and phonemic errors). The complete cognitive assessment process should give the clinician an understanding of the client's deficits (i.e., surface manifestations, underlying origins, and affected components) as well as enable him or her to identify the strengths and weaknesses in the client's communication abilities.

The selection of assessment methods and tools stems directly not just from reference models but also from the purpose of the evaluation. In the following section, we briefly describe bedside and screening tests as well as comprehensive test batteries for aphasia and other language disturbances.

URL: https://www.sciencedirect.com/science/article/pii/B978008045352100001X

What processing refers to the notion that information is processed simultaneously on parallel tracks?

Dual processing: the principle that information is often simultaneously processed on separate conscious and unconscious tracks.

What are the two tracks of consciousness?

Perception, memory, thinking, language, and attitudes all operate on two levels—a conscious, deliberate “high road” and an unconscious, automatic “low road.” Researchers call this dual processing.

What is meant by dual processing and the two-track mind, and how does it relate to our consciousness?

The dual-track mind refers to the two “minds” that operate at the same time within one brain: the conscious mind and the unconscious mind. Both work together, and both are important for our survival.

What is the term for processing information simultaneously on conscious and unconscious tracks?

Dual processing: the principle that information is often simultaneously processed on separate conscious and unconscious tracks.