The integration of information from two or more sensory modalities is called

Intermodal Perception

L.E. Bahrick, G. Hollich, in Encyclopedia of Infant and Early Childhood Development, 2008

Introduction

Intermodal perception, the perception of unitary objects and events from concurrent stimulation to multiple senses, is fundamental to early development. Early sensitivity to temporal, spatial, and intensity patterns of events (‘amodal’ information) that are redundant across stimulation to different senses guides infants’ perceptual, cognitive, and social development. Intermodal perception develops rapidly across infancy. Even very young infants are sensitive to amodal information, allowing them to perceive unitary multimodal events by linking sights and sounds of speech, emotional expressions, and objects, as well as information across visual, tactile, olfactory, and proprioceptive stimulation. Perceptual development proceeds along a path of differentiation of increasingly more specific levels of stimulation and perceptual narrowing with experience.


URL: https://www.sciencedirect.com/science/article/pii/B9780123708779000864

Intermodal Perception

Lorraine E. Bahrick, George J. Hollich, in Encyclopedia of Infant and Early Childhood Development (Second Edition), 2020

Introduction

Intermodal perception, the perception of unitary objects and events from concurrent stimulation to multiple senses, is fundamental to early development. Early sensitivity to temporal, spatial, and intensity patterns of events (“amodal” information) that are redundant across stimulation to different senses guides infants' perceptual, cognitive, and social development. Intermodal perception develops rapidly across infancy. Even very young infants are sensitive to amodal information, allowing them to perceive unitary multimodal events by linking sights and sounds of speech, emotional expressions, and objects, as well as information across visual, tactile, olfactory, and proprioceptive stimulation. Perceptual development proceeds along a path of differentiation of increasingly more specific levels of stimulation and increasing economy of information pickup with experience.


URL: https://www.sciencedirect.com/science/article/pii/B9780128093245235943

The Self In Infancy

Mark A. Schmuckler, in Advances in Psychology, 1995

Self-knowledge of Limb Position and Movement

The first context in which to examine self-knowledge of body position involves recognition of limb position and movement, a question that can be fruitfully explored within the realm of intermodal perception in infancy. Generally, there is strong evidence that young infants coordinate information arising from different perceptual systems. Numerous studies suggest that by the age of 5 months, infants recognize object properties such as shape, substance, and texture on the basis of visual and haptic information (see Bushnell & Boudreau, 1991, 1993; Rose & Ruff, 1987; Spelke, 1987, for reviews). Is there any corresponding evidence that infants use intermodal information for recognizing their own limb position and movements? This form of intermodal perception requires integrating visual information with kinesthetic and proprioceptive inputs, which results in knowledge of limb movement and position.

Two experimental results have demonstrated that infants, in fact, do evidence such intermodal perception (Bahrick & Watson, 1985; Rochat & Morgan, 1995). In a series of studies, Bahrick and Watson (1985) examined visual-proprioceptive intermodal perception of the leg movements of 3- and 5-month-old infants. In this work, infants kicked their legs while sitting in an infant seat — such kicking provided proprioceptive information for leg movement. While kicking their legs, infants participated in a preferential looking task with two video monitors containing different visual images. On the first monitor, infants saw a live, on-line version of their own leg movements. Because the visual movement on this display was correlated with their own movement, it was referred to as the “contingent” display. On the second monitor, infants saw a videotape of a different child in the same situation (or, in one of the experiments, a previously recorded videotape of their own legs). Because the movement in this display was unrelated to their own movement, it was called the “noncontingent” display.

If infants discriminate between these displays, they should show preferential fixation to one of the monitors. According to Bahrick and Watson (1985), the most likely basis for this discrimination would be the detection of the contingency between the proprioceptive information for movement and the visual movement occurring on one of the monitors. The results of a series of studies convincingly demonstrated intermodal perception, with the 5-month-old (but not 3-month-old) infants preferentially fixating one of the displays. Somewhat counterintuitively, infants in these studies did not prefer an intermodal match, but instead preferred to fixate the noncontingent display. Although various explanations for such preferential fixation can be offered, this work does demonstrate that 5-month-old infants detect their own leg movements on the basis of visual and proprioceptive information.
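The contingency detection that Bahrick and Watson propose as the basis for discrimination is easy to illustrate computationally. The following toy sketch is not the authors' analysis; the signals and numbers are invented for illustration. The contingent display's motion is, up to noise, a copy of the infant's own movement signal, while the noncontingent display's motion is unrelated, so even a simple correlation separates the two.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy proprioceptive signal: the infant's own kicking, sampled at 30 Hz for 10 s.
t = np.arange(0, 10, 1 / 30)
own_movement = np.sin(2 * np.pi * 0.8 * t) + 0.3 * rng.standard_normal(t.size)

# Contingent display: a live image of the same movement (plus video noise).
contingent = own_movement + 0.3 * rng.standard_normal(t.size)

# Noncontingent display: another child's (or a pre-recorded) movement,
# unrelated to the infant's own kicking.
noncontingent = np.sin(2 * np.pi * 1.1 * t + 1.7) + 0.3 * rng.standard_normal(t.size)

for name, display in [("contingent", contingent), ("noncontingent", noncontingent)]:
    r = np.corrcoef(own_movement, display)[0, 1]
    print(f"{name}: correlation with felt movement = {r:+.2f}")
# The contingent display correlates strongly with the proprioceptively felt
# movement; the noncontingent display does not. Detecting this difference is
# the hypothesized basis for discriminating the two displays.
```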

Rochat and Morgan (1995) have extended these findings by examining the nature of the visual information necessary for performing such visual-proprioceptive intermodal recognition. As in Bahrick and Watson (1985), 3- and 5-month-old infants' preferential fixation to two on-line video images of their moving legs was observed. In one study, left-right spatial directionality was reversed on one of the video monitors, thereby producing a mismatch between the felt proprioceptive direction of movement and the seen visual direction of movement; on the other monitor, left-right spatial directionality was retained. Infants in this situation looked longer and kicked more while attending to the left-right reversed monitor, which suggests that this display was perceived as spatially noncongruent.

A recent series of experiments conducted in my laboratory (Schmuckler, 1994) provides strong convergent evidence for the findings of Bahrick and Watson (1985) and Rochat and Morgan (1995). This work examined visual-proprioceptive perception of arm and hand movements (rather than leg movements), with the focus again on exploring the visual information necessary for intermodal perception. Specifically, 5-month-old infants performed hidden arm and hand movements (as opposed to the leg movements of Rochat & Morgan, 1995) while simultaneously viewing a contingent display (an on-line image of their own hand) and a noncontingent display (a previously recorded videotape of a different child in the same situation). The logic of these experiments was identical to that of Bahrick and Watson (1985) and Rochat and Morgan (1995): if infants perceive their own limb movements, they should preferentially fixate one of the two displays.

Three experiments tested this hypothesis. An initial study was designed as a simple replication of the previous experiments, to confirm that infants show visual-proprioceptive intermodal detection of arm and hand movements. The second experiment extended these results by again exploring the importance of spatial directionality. Similar to Rochat and Morgan (1995), the left-right dimension of the video image was reversed for the contingent display, producing a situation in which a physical arm and hand movement in one direction resulted in an image in which the hand seemed to move in the opposite direction. A third experiment investigated the importance of the point of observation of the hidden limb by providing a relatively novel view of this limb. In this study, the camera focused on the child's hand was positioned on the floor, facing upwards, producing an image displaying the palm of the hand with the fingers pointed downwards and the wrist and arm at the top of the screen. Such an image is novel in that it corresponds neither to the image of the hand naturally seen from one's own viewpoint (an “egocentric” view) nor to that of another individual looking at someone (an “observer” view).

Figure 1 presents the results of these three experiments, shown in terms of infants’ percent looking times toward the contingent and noncontingent displays. Replicating and extending Bahrick and Watson (1985) and Rochat and Morgan (1995), Experiment 1 demonstrated 5-month-olds’ significant preferential fixation toward the noncontingent display, thereby suggesting visual-proprioceptive intermodal discrimination. In contrast, infants in Experiment 2 did not preferentially fixate either the contingent or the noncontingent display, which again demonstrates that left-right spatial reversal disrupts intermodal perception. Experiment 3 once again produced significant preferential fixation of the noncontingent display, implying that the point of observation of the hidden limb is relatively unimportant, with infants able to detect their own limb movements despite seeing this limb from a novel orientation. Interestingly, the interpretation of these results converges with that of Rochat and Morgan (1995), despite differences in experimental setup and in the specific pattern of results. As such, these studies provide a nice example of the principle of converging operations using discrimination measures (Garner, Hake, & Eriksen, 1956; Proffitt & Bertenthal, 1990).


Figure 1. The mean proportion of looking time, and standard error, toward the contingent and noncontingent visual displays. Equivalent looking (50%) toward the displays is indicated by a dotted line.
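For readers unfamiliar with the measure, preference in such designs is typically evaluated by testing infants' proportions of looking time against the 50% chance level. A minimal sketch of that convention, with invented data (the values below are not the study's):

```python
from scipy import stats

# Hypothetical proportions of looking time toward the noncontingent display,
# one value per infant (invented for illustration).
proportions = [0.61, 0.58, 0.55, 0.67, 0.52, 0.60, 0.63, 0.49]

# One-sample t-test against the 50% chance level (the dotted line in Figure 1).
t_stat, p_value = stats.ttest_1samp(proportions, popmean=0.5)
mean = sum(proportions) / len(proportions)
print(f"mean = {mean:.2f}, t = {t_stat:.2f}, p = {p_value:.3f}")
# A mean reliably above 0.5 indicates preferential fixation of the
# noncontingent display, i.e., visual-proprioceptive discrimination.
```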

Together, these results provide compelling evidence that 5-month-old infants coordinate visual and proprioceptive inputs for detecting their own limb movements. Such coordination strongly implies perception of one's own limb movements, in keeping with Neisser's (1988, 1991, 1993) characterization of the ecological self. The next series of experiments explores evidence for infants' self-knowledge of body posture and orientation.


URL: https://www.sciencedirect.com/science/article/pii/S0166411505800135

Consciousness of Time and the Time of Consciousness

Shaun Gallagher, in Reference Module in Neuroscience and Biobehavioral Psychology, 2017

The Specious Present

A number of philosophers accept the challenge of explaining how it is possible for us to be conscious of something like a melody, and one of the central concepts developed in this regard is the notion of the specious present. The term comes to us from Robert Kelly, the anonymous author of The Alternative: A Study in Psychology, a work credited to E.R. Clay, which is how William James cited it when he introduced the term into mainstream philosophical discussion. The specious present doctrine consists of the claim that the present or now that we experience at every moment is not a knife-edge or punctate phenomenon, but includes a brief extended interval of time: a bit of the past and a bit of the future. The strict or real present is just the momentary piece of the now that is present; but this is always supplemented in consciousness by margins or penumbral horizons of the past and future. When I listen to a melody, for example, I hear not just the note that is currently being played; I hear it in some way accompanied by some number of previous notes and, perhaps, some number of notes that are to be played in the next seconds.

This direct experience of succession is not a matter of perceiving one note and supplementing it with the memory of a previous note (as philosophers like Thomas Reid and Franz Brentano had proposed). This can be made clear if we consider visual perception and the difference between perceiving the hour hand of the clock and the second hand of the clock. In perceiving the hour hand I get a sense of its movement only by comparing its current position to a memory of where it was a minute or two ago. In contrast, I can actually see the movement of the second hand and this does not seem to involve a comparative judgment based on memory. Where the second hand was a second ago seems to be intuitively present in my perception of its movement.
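The contrast can be made roughly quantitative. The figures below are ordinary clock geometry; the detection threshold is an assumed, illustrative value (motion-detection thresholds in central vision are commonly reported on the order of 1–2 arcmin/s), and the hands' rotation is treated as a crude proxy for retinal motion:

```latex
\omega_{\mathrm{second}} = \frac{360^{\circ}}{60\ \mathrm{s}} = 6^{\circ}/\mathrm{s} = 360\ \mathrm{arcmin/s},
\qquad
\omega_{\mathrm{hour}} = \frac{360^{\circ}}{43{,}200\ \mathrm{s}} \approx 0.008^{\circ}/\mathrm{s} = 0.5\ \mathrm{arcmin/s}.
```

On these assumptions the second hand moves well above threshold within a single specious present, while the hour hand falls below it; the hour hand's displacement becomes apparent only by comparison with a remembered position.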

This concept of the specious present is meant to address an issue that is fundamental for understanding consciousness. A consciousness that did not have the kind of structure described by the specious present would seemingly be an experience of only one unconnected moment after another. In that case our experience of the world would be incoherent and inchoate. The idea that memory could bring coherency to this kind of experience, connecting together a set of discontinuous flashes of perceptual consciousness, is questionable simply because on such theories memory is itself a form of consciousness and would have no more intrinsic structure than perception. My memory of a melody can be coherent only if it is an awareness of more than one note at a time, that is, only if it is more than a series of knife-edge presents and it takes the form of the specious present.

The analysis presented by William James and his followers understands the specious present as a sensed or immediately experienced duration or succession. It explains that the content of consciousness has a temporal coherence, but it does not explain precisely how this is possible. Little attention is paid to the sensing or experiencing itself, the experienced content of which has the specious present structure. It does not explain whether or how the temporal structure of consciousness itself contributes to the temporal coherence of the experienced content. Indeed, a number of theorists understand the specious present to involve a momentary (nontemporal) act of awareness. But the idea of a momentary act of awareness brings along a number of perplexities that have motivated some theorists to reject the specious present doctrine.

For example, assume that my perception of a succession of events (VWXYZ) is laid out in the specious present form so that I am still aware of V and W as just past while I am currently aware of event X as present. If my perception is itself momentary, then for V, W, and X to be represented as occurring in succession they must be represented simultaneously and we must experience them all at once. On such an account of the specious present, to be aware of successive objects consciousness needs to compare the earlier and later objects in a cognitive operation that makes the earlier and later simultaneous. Experienced content would, in some fashion, need to be at once both successive and simultaneous, both past and present. Since this is paradoxical, the critics argue that the specious present doctrine must be rejected.

Another problem with the specious present doctrine can be made clear by looking at C.D. Broad's account (see Fig. 1). According to Broad, in any one moment of conscious experience (A) we are aware of a specific duration of contents, V–X. At the next moment of consciousness (B) we are aware of W–Y. One objection to this is to point out that we seemingly experience W–X twice in succession: once in momentary consciousness A and once in momentary consciousness B. If VWXY represents a melody, then we seemingly hear the notes of that melody twice.


Figure 1. Broad's diagram of the specious present.

To escape this problem, however, we simply and realistically have to think that perception (or any act of consciousness) itself has duration and is not momentary. If we do this, however, we run into a different problem, as pointed out by J.D. Mabbott in his critique of Broad. Mabbott assumes that the enduring conscious act A–B corresponds to the overlap of the two originally defined specious presents. In other words, the specious present of A–B is W–X. But this leads to the absurdity that the specious present varies inversely with the duration of the act of consciousness. Assume that the specious present of the originally defined momentary act of consciousness A is 6 s long. That would make the specious present of A–B, the overlap W–X, 3 s long. Even though A–B is itself longer than the momentary act A by 3 s, its specious present seems to have shrunk. If the act of consciousness lasts 6 s (e.g., A–C) the specious present shrinks to momentariness at X. Thus, conscious acts of 6 s or longer would have no specious present. The longer the duration of the perception, the shorter the specious present. And this is like saying, the longer I look at something, the less I see it. Does this mean that we should reject the specious present doctrine?
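Mabbott's absurdity can be stated compactly. The following is a reconstruction of the reasoning above (the symbols are mine): let p be the duration of the specious present of a momentary act (6 s in the example) and d the duration of an extended act of consciousness. On Mabbott's reading, the extended act's specious present is the overlap of the specious presents at its endpoints:

```latex
\mathrm{SP}(d) = \max(p - d,\ 0), \qquad p = 6\ \mathrm{s},
```

so SP(3 s) = 3 s (the act A–B) and SP(6 s) = 0 (the act A–C): the specious present varies inversely with the act's duration and vanishes for acts of 6 s or longer.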

A different interpretation of Broad's diagram would avoid the absurdity. If we assume that the duration of the specious present remains constant no matter how long the duration of the act is, then we could conceive of the specious present as having the structure of a searchlight moving along the upper line in Fig. 1, illuminating a constant duration along the lower line. This would avoid any logical absurdity. Still, there are empirical problems with this solution. Empirical studies indicate that the specious present does not remain constant, but varies even within a single individual. The searchlight widens or narrows depending on certain conditions. This seems consistent with the psychological issues described above—boredom, enjoyment, fatigue, attention, etc.

Empirical research also suggests that the specious present varies across different sense modalities (sight, touch, and hearing) even within the same individual. For example, intervals of auditory stimuli are experienced as lasting longer than objectively equal intervals of visual stimuli; visual experiences may vacate the specious present faster than auditory experiences. This might seem theoretically disconcerting. If, as in many instances, we experience through more than one sense modality simultaneously, why doesn't our experience seem seriously incongruent? For example, when I watch a ballet, if my auditory specious present is not identical with my visual specious present, then the music could appear to be out of sync with the dancer's movements. The dancer would always be a little behind or ahead of where I think she ought to be. Obviously, since this does not appear to be the case in our actual experience (assuming we are watching good ballet), either the intersensory differences are resolved in some fashion or the specious present doctrine is wrong.

To preserve the specious present doctrine one can appeal to subpersonal processes that effect a temporal binding across different sense modalities. I see lightning before I hear the thunder. The difference in the relative arrival time of stimuli at the eye and ear can be accounted for by the fact that light travels through air much faster than sound: roughly 300,000,000 versus 330 m/s. Although the transduction of sound waves at the ear takes less time than the chemical transduction of light at the retina, this compensation solves the problem only for intermodal perception of simultaneous visual-auditory events about 10 m away from the perceiver (a back-of-the-envelope version of this calculation is sketched after the list below). Something else must account for all events that are closer or farther than 10 m and that are correctly synthesized across modalities. Neuroscientists (e.g., Varela, Pöppel) distinguish between:

(S) neuronal system states of approximately 30–100 ms, corresponding to a quantum of experience where differentiation of succession is not possible (note, however, that this may go as high as 250 ms in some cases, e.g., the difference between auditorily perceived speech and visually perceived lip movements has to be greater than 250 ms for the asynchrony to be perceived) and

(W) a 0.5–3 s temporal window, which correlates with an experienced specious present, in which some neuronal mechanism effects a temporal integration of these S-states into a successive order.
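Here is that back-of-the-envelope calculation as a runnable sketch. The propagation speeds are physical constants; the transduction latencies are illustrative assumptions (not values given in the text), chosen so that the compensation point lands in the vicinity of 10 m:

```python
SPEED_OF_LIGHT = 3.0e8    # m/s (through air, approximately)
SPEED_OF_SOUND = 330.0    # m/s

# Assumed latencies: chemical phototransduction at the retina is taken to be
# about 40 ms slower than mechanical transduction at the ear (illustrative).
VISUAL_LATENCY = 0.050    # s
AUDITORY_LATENCY = 0.010  # s

def arrival_gap(distance_m: float) -> float:
    """Auditory minus visual arrival time at the perceiver, in seconds."""
    visual = distance_m / SPEED_OF_LIGHT + VISUAL_LATENCY
    auditory = distance_m / SPEED_OF_SOUND + AUDITORY_LATENCY
    return auditory - visual

for d in (1, 10, 100):
    print(f"{d:>4} m: sound lags sight by {arrival_gap(d) * 1000:+.1f} ms")
# Near 10-15 m the two signals arrive almost together; for events much
# closer or farther, something else must do the intersensory binding.
```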

The idea is that integration and ordering processes up to W are automatic and content-independent. In contrast, semantic binding comes into effect in the generation of W, the 3-s time scale. In other words, at or beyond the magnitude of W the coherency (or lack of coherency) of phenomenal experience may depend on content. For example, the often cited experiences of time seeming to slow down or speed up, when subjects are, respectively, bored or having fun, may be just such cases in which the specious present structure is affected by content.

The binding processes at the S scale (actually, the simple limitations of the system's ability to discriminate succession at that scale) can explain why intersensory differences (nonsimultaneous processing of information across the various senses) do not show up in phenomenal experience. Intersensory differences are so small that they are integrated (or simply fail to count) at the S scale. This model may even help to explain some intrasensory perplexities. In the color phi phenomenon, for example, a subject is presented with two spots of differently colored light (e.g., blue and red), each lasting 150 ms, with a 50 ms interval between them. The time frame of the presentation is such that the perceiving subject experiences, not two separate dots in sequence, but a moving dot that changes color in midstream. If a spot of blue is flashed at point A and a spot of red is flashed 50 ms later at point C, the effect is that the subject is conscious of the red at a point B, between A and C, and at a phenomenal time that seems prior to the time the second color was actually flashed at C. The end point of the event seems to gain some representation at the midpoint of the experience; the subject sees red before there is red to be seen.

One can make sense of this as follows. Assume that successive neural system states S1 and S2 have magnitudes of 60 ms each, and that S1 begins 35 ms prior to the beginning of the 50 ms interval between the flashing of the blue dot and the red dot. All phenomenal content that falls within the time frame of S1 is experienced as simultaneous. S2 begins 25 ms after the onset of the 50 ms interval and ends 35 ms after the flashing of the red dot begins. If the phenomenal content that falls within the time frame of S2 is experienced as simultaneous, and if, within W, the contents corresponding to S1 and S2 are experienced as successive, it seems clear that the perceiving subject will start to see the red dot when S2 begins, near the midpoint of the interval and 25 ms prior to the flashing of the red dot. As Eagleman and Sejnowski have shown with regard to the flash-lag illusion, the neuronal processing that takes place in the first 80 ms after stimulus onset determines how we visually experience movement and other temporal events.
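The S-state bookkeeping in this example is simple enough to check mechanically. The sketch below merely transcribes the arithmetic of the paragraph above; the window sizes and onsets are the values assumed in the text.

```python
# t = 0 ms marks the start of the 50 ms interval between the blue and red
# flashes; the red dot is flashed at t = 50 ms; S-states last 60 ms each;
# S1 begins at t = -35 ms (as assumed in the text).
S_DURATION = 60   # ms
RED_ONSET = 50    # ms
S1_START = -35    # ms

s1 = (S1_START, S1_START + S_DURATION)   # (-35, 25)
s2 = (s1[1], s1[1] + S_DURATION)         # (25, 85)

# Content within a single S-state is experienced as simultaneous, so the red
# dot is experienced from the start of the S-state that contains its onset.
experienced_from = s2[0] if s2[0] <= RED_ONSET < s2[1] else None
print(f"S1 spans {s1} ms, S2 spans {s2} ms")
print(f"red is flashed at {RED_ONSET} ms but experienced from "
      f"{experienced_from} ms ({RED_ONSET - experienced_from} ms early)")
# Prints: red experienced from t = 25 ms, i.e., 25 ms before it is flashed --
# the subject "sees red before there is red to be seen."
```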


URL: https://www.sciencedirect.com/science/article/pii/B9780128093245059319

Infants’ intermodal numerical knowledge

Mohammad Rashbari Dibavar, in Infant Behavior and Development, 2018

7 The intersensory redundancy hypothesis: the role of selective attention in numerosity perception

Nearly all objects and events that infants experience are multimodal, in that they simultaneously provide a complex mix of visual, tactile, auditory, and olfactory stimulation to different sensory modalities. How, then, do young infants achieve a unitary perception of the world?

Developmental accounts of intermodal perception fall into one of two models (Lewkowicz, 2002; Lewkowicz et al., 1994): the developmental integration view (e.g., Piaget, 1952) or the developmental differentiation view (e.g., Gibson, 1969). According to the developmental integration view, different sensory modalities operate as independent sensory systems at birth, and intersensory perception emerges slowly during development. In contrast, the developmental differentiation view holds that different sensory modalities operate as a unified system early in development, extracting invariant attributes of stimulation. Within this framework, newborn infants can perceive cross-modal correspondences based on the ability to detect amodal invariants, and, as infants develop, they discover increasingly more complex invariants. In support of this approach, there is evidence that infants, from birth, perceive many types of amodal relations (Butterworth, 1983; Coubart et al., 2014; Crassini & Broerse, 1980; Izard et al., 2009; Lawson & Turkewitz, 1980; Lewkowicz & Turkewitz, 1981; Meltzoff & Borton, 1979; Morrongiello, Fenwick, & Chance, 1998; Muir & Clifton, 1985; Sann & Streri, 2007; Slater, Brown, & Badenoch, 1997; Slater, Brown, Hayes, & Quinn, 1999; Streri & Gentaz, 2003, 2004; Von Hofsten, 1982; Wertheimer, 1961). For example, Slater et al. (1999) observed that 2-day-old infants are able to learn and remember arbitrary auditory-visual combinations in the presence of amodal information. These results challenge the developmental integration view (e.g., Piaget, 1952), as well as modern connectionist models (e.g., Elman et al., 1996), which argue that sensory channels cannot communicate in newborns.

Recently, Bahrick and Lickliter proposed the intersensory redundancy hypothesis (IRH; Bahrick & Lickliter, 2000, 2002, 2012, 2014), in order to explain how infants select and attend to relevant information and ignore the vast amount of stimulation that is irrelevant.

The IRH proposes that during early development, information that is simultaneously available across multiple senses is highly salient and is therefore attended and perceived better than the same information presented in a unimodal context. This is termed intersensory facilitation (or the multimodal prediction of the IRH). For example, young infants are able to detect a change in tempo or in rhythm (amodal properties) of a tapping toy hammer in synchronous audiovisual presentations but not in unimodal or asynchronous audiovisual presentations (Bahrick & Lickliter, 2000; Bahrick, Flom, & Lickliter, 2002). In contrast, when modality-specific aspects of stimulation are presented in a unimodal context, they are attended and perceived better than when the same aspects are presented in the context of redundant multimodal stimulation. This is termed unimodal facilitation (or the unimodal prediction of the IRH). For example, 3- and 5-month-old infants are able to detect a change in the orientation (a modality-specific visual property) of a hammer tapping a surface (downward vs. upward) in unimodal visual stimulation but not in synchronous audiovisual stimulation (Bahrick, Lickliter, & Flom, 2006). One developmental prediction of this hypothesis is that amodal properties are detected first in contexts that provide intersensory redundancy (when the same information is synchronously available to two or more sensory modalities) and, as infants become more experienced, are later generalized to non-redundant, unimodal contexts. For example, Flom and Bahrick (2007) observed that whereas infants detect a change in affect in multimodal stimulation at around 4 months of age, with unimodal visual stimulation they can only do so at 7 months of age. Jordan et al. (2008) extended the IRH to the preverbal numerical domain and investigated 6-month-olds’ ability to discriminate large numerosities (12 vs. 8), an ability which does not emerge until 9 months of age in the unimodal context (Lipton & Spelke, 2004; Xu & Arriaga, 2007). They found that infants discriminated numerosities that differed by a 2:3 ratio when numerical information was presented simultaneously in both modalities, but not when numerosity was presented only visually, suggesting that intersensory redundancy can facilitate numerical discrimination. Similar results were found in a numerical delayed match-to-sample task: preschool children performed above chance at matching numerosities when the sample numerosity was multisensory (audiovisual) and the choice numerosities were unimodal (auditory or visual), but performed at chance when both sample and choice numerosities were unimodal (Jordan & Baker, 2011).
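These findings can be condensed into a toy decision rule. The sketch below is only an illustration of the IRH's numerosity predictions, not a model from the literature; the ratio thresholds are taken from the findings just cited.

```python
def discriminable(n1: int, n2: int, age_months: int, redundant: bool) -> bool:
    """Toy IRH prediction: can an infant discriminate numerosities n1 vs. n2?

    Thresholds follow the findings cited above: 6-month-olds manage a 2:3
    ratio with intersensory redundancy (Jordan et al., 2008), while unimodal
    2:3 discrimination emerges by about 9 months (Lipton & Spelke, 2004).
    Illustrative only.
    """
    ratio = min(n1, n2) / max(n1, n2)
    threshold = 2 / 3 if (redundant or age_months >= 9) else 1 / 2
    return ratio <= threshold

print(discriminable(8, 12, age_months=6, redundant=True))   # True
print(discriminable(8, 12, age_months=6, redundant=False))  # False
print(discriminable(8, 12, age_months=9, redundant=False))  # True
```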

Allocating more attentional resources to redundant (synchronous audiovisual) information has also been demonstrated in neurophysiological studies (Hyde, Jones, Porter, & Flom, 2010; Reynolds, Bahrick, Lickliter, & Guy, 2014). For example, in an EEG study, Reynolds et al. (2014) found that 5-month-olds showed greater Nc responsiveness (a measure of attention) to synchronous multimodal information compared to asynchronous multimodal information.

However, intersensory redundancy is not the only factor that can enhance discrimination of quantity (see Baker et al., 2014; Cantrell et al., 2015). For example, in purely visual displays, 6-month-old infants have no difficulty detecting a 2:3 ratio change in the numerosity of large sets (8 vs. 12) when both surface area and number change simultaneously (Baker et al., 2014). Thus, like intersensory facilitation (Jordan et al., 2008), this intrasensory facilitation (Baker et al., 2014) can also boost infants' sensitivity to numerical information. The difference is that intrasensory facilitation allows infants to use both numerosity and non-numerical dimensions to make successful discriminations, whereas in intersensory facilitation they use the amodal, redundant property of number.

Moreover, the IRH can provide a new perspective for understanding the role of non-numerical variables in different types of stimulation (multimodal and unimodal). In numerical discrimination tasks, infants are also exposed to changes in many dimensions, such as contour length, surface area, configuration, brightness, density, rate, tempo, and number of stimuli. Thus, they must selectively attend to particular aspects of stimulation while ignoring others. Unlike previous studies that have focused solely on numerical (Xu & Spelke, 2000) or on non-numerical processing (Clearfield, 2005), the IRH may explain the conditions under which infants attend to and process both the amodal numerical aspect of the stimuli and the modality-specific, non-numerical properties of the events. For example, according to the multimodal prediction of the IRH, intersensory redundancy in multimodal presentations attracts attention to amodal aspects of stimulation at the expense of other aspects. Thus, when infants are familiarized in a bimodal paradigm with synchronous, redundant information about number (Farzin et al., 2009; Jordan & Brannon, 2006; Kobayashi et al., 2005), they should more easily detect and pick up numerical information rather than non-numerical variables (such as brightness, area, or contour length), because in these experimental conditions number is a property that is amodal and redundant across two senses, whereas non-numerical variables are qualities specific to an individual sensory system and cannot be conveyed redundantly through synchronous audiovisual stimulation (see also Baker & Jordan, 2015).

On the other hand, according to the unimodal prediction of the IRH, in unimodal visual presentations attention is free to focus on non-redundant, modality-specific information (e.g., contour length, surface area, configuration, brightness, and density), because there is no competition from highly salient amodal, redundantly specified properties. As a result, non-redundantly specified properties are detected more easily in unimodal stimulation. The unimodal prediction of the IRH may therefore explain why, in unimodal numerical tasks where no intersensory redundancy is available, infants are more sensitive to non-numerical than to numerical information (Clearfield & Mix, 1999; Clearfield, 2005).

In summary, the available evidence suggests that when stimuli are presented within a single modality, where intersensory redundancy is absent, perceptual non-numerical cues quickly engage infants' attention, whereas by systematically controlling and varying these cues (Feigenson, 2005; Xu & Spelke, 2000), unimodal inputs can also drive attention to variations in numerosity. In bimodal stimulation, by contrast, the lack of perceptual overlap of modality-specific cues across sensory modalities and the salience of the amodal property of number make it difficult for the perceptual system to quickly detect changes in perceptual non-numerical cues.


URL: https://www.sciencedirect.com/science/article/pii/S0163638317300632

What perception involves integrating information from two or more sensory modalities?

Intermodal perception (also called intersensory or multimodal perception) is the perception of unitary objects or events that make information simultaneously available to more than one sense.

What is the name given to decreased responsiveness to a stimulus like a triangle or other shape after repeated presentations of the stimulus?

Habituation occurs when infants' looking time diminishes as a result of repeated presentation of a stimulus.

What is the term for the gurgling sounds that infants make in the back of their throats?

A ruttle is a coarse, crackling sound that some babies make even when they are well. It is caused by secretions (snot, saliva, and the like) being allowed to pool in the back of the throat. Babies can let this fluid collect there, whereas adults would have to cough it out or swallow it down.

What is the name given to decreased responsiveness to a stimulus?

Habituation refers to the gradual decrease in responsiveness due to repeated presentations of the same stimulus. Habituation is commonly used as a tool to demonstrate the cognitive abilities of infants and young children.
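In practice, habituation is often operationalized with an infant-controlled criterion, such as looking time declining to half of its initial level. A minimal sketch of one common convention (the 50%-of-first-three-trials rule is an assumption for illustration, not something specified here):

```python
def has_habituated(looking_times: list[float], window: int = 3,
                   criterion: float = 0.5) -> bool:
    """True once mean looking on the last `window` trials falls below
    `criterion` times the mean of the first `window` trials."""
    if len(looking_times) < 2 * window:
        return False
    baseline = sum(looking_times[:window]) / window
    recent = sum(looking_times[-window:]) / window
    return recent < criterion * baseline

# Hypothetical looking times (in seconds) across repeated presentations:
trials = [12.0, 10.5, 11.0, 7.2, 5.1, 4.0]
print(has_habituated(trials))  # True: mean of last three (5.4 s) < 50% of 11.2 s
```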