How many letters does the eye normally take in at each fixation point before moving on to the next fixation point?

Social Media

Lorenzo Burridge, in Eye Tracking in User Experience Design, 2014

Research Findings

Eye-tracking data enables UX researchers to assess the main areas of interest within Facebook brand pages, highlighting fixation points and durations while illustrating individual gaze paths from the initial fixation to the last. By cross-comparing fixation counts with fixation durations, researchers can identify areas that generate interest and those that cause confusion.
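As a concrete illustration (not from the chapter), the cross-comparison described above can be sketched in a few lines of Python. The record layout, AOI labels, and durations below are hypothetical examples, not data from the study.

```python
from collections import defaultdict

# Each fixation: (area-of-interest label, fixation duration in ms).
# Labels and values are made up for illustration.
fixations = [
    ("cover_photo", 420), ("cover_photo", 380), ("logo", 150),
    ("wall_post", 310), ("wall_post", 290), ("ads", 90),
]

stats = defaultdict(lambda: {"count": 0, "total_ms": 0})
for aoi, duration in fixations:
    stats[aoi]["count"] += 1
    stats[aoi]["total_ms"] += duration

# Many long fixations can signal interest (or confusion); few short
# fixations suggest an element is skimmed or ignored.
for aoi, s in sorted(stats.items(), key=lambda kv: -kv[1]["total_ms"]):
    print(f"{aoi}: {s['count']} fixations, mean {s['total_ms'] / s['count']:.0f} ms")
```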

The basic structure of each Facebook page is relatively similar in terms of where key elements, such as the cover photo, links, wall posts, and ads, can be found. Therefore, individuals already have a clear and systematic approach to navigating Facebook pages before they have even seen a given page. The cover photo is not only one of the largest elements on the page but also sits at the very top of the opening screen, so unsurprisingly, it tends to be the initial fixation point for users, as shown by the low numbers (early fixations) in Figure 9.8. This is consistent with results from the EyeTrackShop study (2012), where the cover photo was also the main attraction (Figure 9.9).

Figure 9.8. Gaze plot demonstrating how effectively cover photos can attract attention and lead users from one element to the next.

Figure 9.9. Gaze plot showing how the cover photo guides the user's gaze on to other attractive elements across the rest of the page.

From here, users orient themselves to the surrounding elements (e.g., company name, logo, and links) to discover further what the page is about. This normally occurs in three stages: encoding the visual stimulus, peripheral sampling, and preparation of where to look next (Viviani, 1990)—see model in Figure 9.10.

Figure 9.10. Model for orienting and processing of a page.

As is common with banner ads on web pages (discussed in Chapter 2), the ads on the right side of a Facebook page are either noticed last or simply ignored, as users already expect them to be there and have since become “blind” to them (see Figure 9.11). This finding is also consistent with previous studies that have identified frequent web users’ awareness of ads and their familiar positions on a page. Having registered the ads in their peripheral field of vision, users instinctively know not to focus on them, as the ads are likely to be a distraction from the task at hand (Drèze & Hussherr, 1999). This is, however, more prevalent in profile pages than brand pages (EyeTrackShop, 2012), which suggests that when visiting a brand page, users are more likely to notice ads, as the ads are more relevant to the page itself (Figure 9.12).

Figure 9.11. Gaze opacity map demonstrating how little attention is given to the ads compared to the rest of the page.

Figure 9.12. Heat map demonstrating how imagery can help lead users to other information and read any neighboring text.

Each element within a stimulus (e.g., headline, body copy, image) receives differing levels of attention for encoding, and so each is attended to via an individual systematic approach (Royden et al., 1992). Put simply, whatever catches users’ attention first, along with their own typical web viewing behavior, allows users to extract information more effectively for both task demands and further orientation of the page.

Moving down the page, the two-column structure of the wall presents elements in “sound bytes” (small blocks of image, video, or text information), separating everything into its own individual box. While previous tests indicate that this is a preferred method of viewing information (Rowe & Burridge, 2012), the columns leave the viewing path more open to diversion. In other words, the user is not being directly influenced to follow a particular pathway and thus may miss important information on the page (Figure 9.13).

Figure 9.13. Using elements effectively to guide attention to key areas.

As stated earlier, web viewing behavior is largely determined by previously viewed pages, causing users to develop a schema for each page they view and its defining characteristics (Herder, 2005; Habuchi et al., 2006). This enables users to quickly locate and identify useful information that is relevant to their objective while disregarding information that is not. With this in mind, key information is recognized better when placed in the areas where it is expected (e.g., logo or site links; Buscher et al., 2009).

Effective design structure is more evident in pages that employ a diagonal framework. These pages have a good balance of images and text, presented diagonally across columns, breaking up cognitive deterrents such as large blocks of text that require more work to process. This type of design is generally preferred by users, as it is the most engaging and visually appealing. These pages are also easier to process due to their neater sequential structure.

Not only do the images provide easy-to-process, light information, but they also serve to direct users to a piece of text that might otherwise have been missed or ignored (Wedel & Pieters, 2000). This type of structure also performs best at engaging users down the page; more importantly, such pages are much preferred to those that are unbalanced or particularly text heavy (Rowe & Burridge, 2012).

URL: https://www.sciencedirect.com/science/article/pii/B9780124081383000091

The human visual system

David R. Bull, Fan Zhang, in Intelligent Image and Video Compression (Second Edition), 2021

2.9.2 Eye movements

Apart from when we compensate for head movements, our eyes move in two distinct ways:

1. Saccades: Rapid involuntary movements between fixation points. The visual system is blanked during saccades, so we do not experience rapid motion effects due to scanning. Saccades can take two forms:

1.1. Microsaccades: Small involuntary movements used to refresh cell responses that would otherwise fall off due to adaptation.

1.2. Foveation: The eyes also move rapidly in photopic vision to allow exploration of new areas of a scene, providing an impression of increased acuity across the visual field.

2. Smooth pursuit: Smoother, attention-guided voluntary movements that occur, for example, when tracking moving objects. This process keeps the object of interest centered on the fovea so as to maintain high spatial acuity and reduce motion blur. Under the conditions of smooth pursuit, the spatio-temporal limits discussed earlier change dramatically. Girod [30] recomputed these characteristics and demonstrated that this type of eye movement has the effect of extending the temporal limit of our visual system, showing that frequencies of several hundred Hz can be perceived.

Eye tracking is used extensively in vision research [33] to assess the gaze and fixations of an observer in response to a stimulus. Some of the earliest and most famous research on this was published by Yarbus [29]. The results of Yarbus's experiment, where observers were provided with a range of viewing tasks associated with the Visitor painting, are shown in Fig. 2.29. It is interesting to observe how the saccade patterns relate to the specified task.

Figure 2.29. Eye movements in response to a task (from Yarbus [29]; publicly available from http://commons.wikimedia.org/wiki/File:Yarbus_The_Visitor.jpg).

URL: https://www.sciencedirect.com/science/article/pii/B9780128203538000116

Log File Analysis

Casper D. Hulshof, in Encyclopedia of Social Measurement, 2005

Eye Movement Registration

Eye movement registration is discussed in detail in the book by Duchowski, Eye Tracking Methodology: Theory and Practice. The method entails recording eye fixations while a person is performing a task. Afterward, fixation points can be combined with the events that occurred on the video screen. It is assumed that the direction and duration of eye gazes indicate what part of the visual field people are paying attention to, and hence the type of information they are processing. Eye movement registration results in very detailed log files. When a task has a large visual component, eye movements may provide researchers with more information than other methods of obtaining log files would. However, interpreting fixations can be difficult, which may make the method less suitable for tasks in which high-level cognitive processes are studied. Also, only one subject at a time can be tested using eye movement registration. One implication is that in many practical applications of social measurement (e.g., collaborative problem solving or studying group processes in a classroom), it makes little sense to measure eye movements alone.
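A minimal sketch of the combination step described above, assuming a simple log format: fixations with start and end timestamps are matched to the screen events that occurred while the eyes were resting. All field names and values are illustrative.

```python
fixations = [  # (start_ms, end_ms, x, y) -- hypothetical log records
    (0, 240, 512, 300), (260, 520, 100, 80), (560, 900, 640, 420),
]
screen_events = [  # (timestamp_ms, description) -- hypothetical
    (300, "dialog opened"), (600, "button highlighted"),
]

def events_during(fixation, events):
    """Return the screen events that fall within a fixation's time span."""
    start, end, _, _ = fixation
    return [desc for t, desc in events if start <= t <= end]

for fix in fixations:
    hits = events_during(fix, screen_events)
    print(f"fixation {fix[0]}-{fix[1]} ms at ({fix[2]}, {fix[3]}):", hits or "no events")
```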

URL: https://www.sciencedirect.com/science/article/pii/B0123693985005090

Speed Reading

D.S. McNamara, in International Encyclopedia of the Social & Behavioral Sciences, 2001

2.1 Eye Movements

Many speed-reading courses attempt to modify readers' eye movements by increasing the perceptual span and reducing the number of fixations and regressions. However, researchers have argued that eye anatomy constrains the perceptual span to approximately three words from the fixation point, and more specifically, 3–4 letter spaces to the left and 14 letter spaces to the right of the center of an advanced reader's fixation point (Rayner 1986). The physiological boundaries of the perceptual span imply that reading rates exceeding 600–800 wpm are physiologically impossible if the reader is attempting to read all of the words (Taylor 1965). For example, a rate of 10,000 wpm would allow about 2.5 seconds per page of text, or 7–8 fixations; each fixation would therefore have to take in a span of about 55 words, which is impossible.
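The arithmetic behind this argument can be checked directly; the page length of roughly 415 words is our assumption, chosen only to reproduce the figures quoted above.

```python
words_per_page = 415      # assumed typical page length
rate_wpm = 10_000         # claimed speed-reading rate
fixations_per_page = 7.5  # midpoint of the 7-8 fixations quoted above

seconds_per_page = words_per_page / rate_wpm * 60
words_per_fixation = words_per_page / fixations_per_page
print(f"{seconds_per_page:.1f} s per page")            # ~2.5 s
print(f"{words_per_fixation:.0f} words per fixation")  # ~55, far beyond the span
```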

In response, speed-reading advocates have claimed that it is unnecessary to read every word of a passage, and that readers can learn to fixate only important words or sentences. In support of that claim, research has indicated that speed-readers make inferences between parts of the text by using prior knowledge (Carver 1983). That is, knowledge allows readers to develop an understanding of the text, despite not having read all of the words. This process is more successful when readers have more knowledge about the text, but virtually impossible without relevant knowledge. Indeed, most speed-reading courses acknowledge that speed-reading is only effective for familiar material.

Nevertheless, there is no guarantee that knowledge-based inferences will be correct, and evidence does not support the assumption that speed-readers learn to read only the most important words or sentences of a passage (Just and Carpenter 1987).

Many speed-reading courses also encourage readers to eliminate the tendency to make regressive eye movements. However, research has shown that regressions are virtually unavoidable with unfamiliar or complex material, and are adaptive when the segments that are reread are relevant to the reader's goals. Moreover, the relationship between eye movements and reading efficiency remains controversial and there is no available evidence that eye movement training improves reading ability. Although speed-reading training may alter eye movement patterns, these patterns revert to normal when comprehension is emphasized.

URL: https://www.sciencedirect.com/science/article/pii/B0080430767015606

Attentional Engagement in Vision Systems

In Artificial Vision: Image Description, Recognition, and Communication, 1997

1.1 INTRODUCTION

The evolution of technology in the last decades has been leading towards active visual sensors: the camera has evolved towards a visual front-end in which single or even multiple points of view are controlled on-line. This new equipment has fostered the development of new areas of research such as exploratory vision, a term introduced by Tsotsos (1994), in which the environment is analysed through a controllable camera exploiting investigations analogous to those supported by eye/head/body movements in human vision.

At a different level of control lie the sub-cases in which only the control system for camera orientation acts. Research in the field of biological and human vision, in particular, started several decades ago. The scan paths recorded by Yarbus (1967) are now well known. In that work, the sequences of fixation points of the observer's eye (when looking at paintings and other artifacts, or at particular scenes of ordinary life) were first analysed. The order of the fixation points is not at all random: the salient features of the visual field are inspected following a pathway with dense areas around the ‘critical points’. The mean time between two fixation points is around 250 ms.

At the second level of the hierarchy, characterized by more subtle and detailed scans, is the element-by-element scrutiny carried out by the fast-moving aperture of a searchlight of attention (with steps lasting 15–50 ms each). In this case no ‘mechanical’ movements are required; the eyes are directed to some ‘gross center of gravity’ (Findlay, 1982) of points of interest, after which a searchlight scrutinizes each single point of interest step by step.

In this chapter the attention processes of the human visual system will be considered, outlining some of the analogies and differences with respect to the frameworks introduced by computer scientists in artificial vision. In the next section a review of psychological research on attention is presented, distinguishing the pre-attentive from the attentive theories. The former family groups the early vision models and the interrupt theories together; the latter includes selective and spatial attention theories. Section 1.3 deals with the main theories and outcomes of the research in neurosciences. Finally, a conjecture is briefly introduced regarding the functional characteristics and organization of the attention supervisor of high-level behavior.

URL: https://www.sciencedirect.com/science/article/pii/B9780124448162500058

The Role of Contexts

Virginio Cantoni, ... Bertrand Zavidovique, in 3C Vision, 2011

The Biological Solution

The aforementioned new equipment supported the development of new areas of research such as exploratory vision, a term introduced in [19], in which the (external) context is actively identified through a controllable camera.

Explorations are analogous to the corresponding investigations of human vision obtained through eye/head/body movements. These movements support the context assessment, and, interestingly, they may be either slow (e.g., driven by head or body muscles) or extremely rapid (certain eye muscles), indicating different context modalities. In biological vision, and particularly in human vision, the eye movements include very fast inspection capabilities given by the saccades. Indeed, the scan-paths recorded by Yarbus [20] are well known. In his work, sequences of an observer’s eye fixation points (looking at paintings and other artifacts or at particular scenes of ordinary life) were analyzed. The order of the fixation points is not at all random: the salient features of the visual field are inspected following a pathway with dense areas around the “critical points.” The mean time between two fixation points is around 250 ms, which can be considered the average time to build the local retinal context.

At the higher level of the control hierarchy, with more subtle and detailed scans, an element-by-element scrutiny is achieved by a fast-moving aperture of a search-light of attention (with steps lasting 15–50 ms each). In this case, no “mechanical” eye movements are required [21]; the eyes are directed to some “gross center of gravity” [22] of points of interest, and then, step by step, a search-light scrutinizes each single point of interest (about 10 on average).

The level of data abstraction becomes higher going from the retinal neurons to the central cortical ones, whereas pattern location precision decreases: there is a semantic shift from “where” toward “what” in the scene [23–25]. In fact, as information passes through the visual pathway, the increasing receptive field size of neurons produces translation invariance. The large receptive fields of neurons at late stages lead to mislocalization and the potential resulting miscombination of features. Moreover, the simultaneous presence of multiple objects increases the difficulty of determining which features belong to which objects. This difficulty has been termed the binding problem [26], and it is an example of context loss. Its solution is provided by an attention mechanism, acting at the higher brain stages, that admits only the information from a selected spatial location, an example of context recovery [27–29].

URL: https://www.sciencedirect.com/science/article/pii/B9780123852205000036

Electrical Stimulation of the Brain

P.G. Shinkman, in International Encyclopedia of the Social & Behavioral Sciences, 2001

2 Brain Stimulation and Perceptual Prostheses

Analogous concepts have been developed for the cortical representation of somatosensory responses originating at the skin: topographical maps exist on each cortical hemisphere representing the contralateral side of the body's surface, with topological distortions similar to those of motor cortex, but corresponding instead to differences in sensory acuity, with the digits and some facial areas over-represented in terms of quantity of neural tissue involved. The somatosensory maps have been localized to the central and posterior parts of the cortex (in humans, the parietal lobe). Auditory and visual maps have also been studied in detail; the two halves of the visual field (left and right), for instance, are mapped onto the contralateral cortical hemispheres in the most posterior part of the cortex (in humans, the occipital lobe). Although multiple maps exist, the principle of proportional representation is clearly evident in the primary map, where the areas of greatest acuity (at and near the fixation point) are greatly over-represented compared to peripheral parts of the visual field.

Since it is widely accepted that all sensory responses are due to some unique pattern of electrical activity in the corresponding sensory cortex, it has been evident for some time that, in theory at least, an appropriate pattern of ESB within any sensory region of cortex might be arranged to mimic any specific normal, exteroceptive pattern of sensory stimuli in the corresponding modality. The possibility thus exists of developing prostheses for artificial vision or hearing, for example, using brain stimulation to evoke perceptual responses in persons lacking functional peripheral receptors or even sensory nerves.

Considerable exploratory work has been done on these problems, although practical prostheses based on ESB are not at the moment close to realization. A promising approach developed recently involves the implantation of relatively large (30–100) arrays of microelectrodes arranged to deliver ESB to the auditory or visual parts of cortex. For instance, cats trained to lever-press to an external auditory stimulus will respond similarly to very modest levels of ESB delivered over an array of microelectrodes implanted within auditory cortex (Rousche and Normann 1999); furthermore, patients with profound or total bilateral hearing loss may receive acoustic information through stimulation of electrodes implanted in the auditory regions of the brainstem (Laszig and Aschendorff 1999). By the same token, visual responses have been demonstrated in a blind patient receiving intracortical ESB over an array of electrodes implanted in the occipital lobe (Schmidt et al. 1996).

URL: https://www.sciencedirect.com/science/article/pii/B0080430767034185

Perceptual intelligence

Zhongzhi Shi, in Intelligence Science, 2021

5.8.2.1 Orientation control

Orientation control means that the brain directs the focus of attention to a place of interest and then exercises the ability to select in space. There are two kinds of methods for selecting spatial information. The first attention mechanism involves the eyes: prompted by a salient goal in the visual field or by personal will, the observer's eyes move to the place of interest and fixate the corresponding goal. Through the fixation mechanism, the eyes bring the goal's representation onto the central fovea of the retina, thus obtaining more detailed goal information. This kind of orientation control and attention-shift system, realized through eye movement, is called the explicit attention shift system. The second kind of attention transfer mechanism involves no eye or head movement at all. It occurs between two large saccadic eye movements and turns attention to a position outside the fixation point in a covert way. This kind of attention transfer is called implicit attention transfer. Posner holds that implicit attention may involve three operations: disengaging attention from the present focus of attention (involving the parietal lobe); moving the attention pointer to the area where the goal is (handled by the midbrain region); and reading the data at the attended location (a function of the pulvinar of the thalamus). Humans possess an implicit attention shift system: in one experiment, when attention was turned covertly by an attention cue to a certain place outside the fixated point, the tested person not only responded faster to stimuli at that place and showed a lower detection threshold but also showed strengthened scalp electrical activity corresponding to that location.

The directedness of attention explains why we cannot attend to many goals in the visual field at the same time. Rather, we move our attention point sequentially, one goal at a time; that is to say, we can only adopt a serial mode of movement. We can, however, choose the scale of input processing for the visual input: the attention point can be finely focused or scattered over a wider spatial range. In cognitive models of attention, regarding the attention point as a spotlight with variable focus vividly reflects this characteristic.

The directional selectivity of attention is related to the limited information-handling capacity of the attention system: enhanced processing efficiency at the attended location comes at the cost of inhibiting information at unattended locations.

Clinical observation has shown that in patients with right parietal lobe injury, directional control is seriously impaired when attention cues are presented in the right visual field and targets in the left visual field; in other situations the impairment is slight, indicating that what is damaged is the ability to disengage attention from the cued location. PET data obtained from normal subjects show that when attention moves from one place to another, whether the movement is driven by will or stimulated by the external world, the area affected is mainly the parietal lobe on the left and right sides, where blood flow increases markedly; this is the distinctive area activated by attention shifts. Recordings from parietal cells in awake monkeys prove that parietal neurons are involved in attention orientation control. PET studies also reveal that the anatomical network that selectively modulates other cortical pathways passes through the pulvinar nucleus of the thalamus; filtering, loss, interference with, or strengthening of the goal likewise causes obvious effects in the pulvinar nucleus.

PET measurements and clinical observation indicate that the attention functions of the two brain hemispheres are asymmetric. Attention shifts in either the left or the right visual field can enhance blood flow in the right parietal lobe, whereas enhancement of blood flow in the left parietal lobe relates only to attention shifts in the right visual field. This finding could explain why damage to the right hemisphere causes more attentional impairment than damage to the left. In a normal brain, when an equal number of distractor targets is distributed across the left and right visual fields, the search task cannot be accomplished any faster than when they are concentrated in a single visual field. For a patient with a resected corpus callosum, however, when distractor targets are distributed across both visual fields, the speed of searching for the targets is twice that observed when the distractors are concentrated in a single visual field. This means that after injury to the corpus callosum, the attention mechanisms of the left and right hemispheres are disconnected.

URL: https://www.sciencedirect.com/science/article/pii/B9780323853804000051

Eye Movements in Reading

K. Rayner, in International Encyclopedia of the Social & Behavioral Sciences, 2001

3 The Perceptual Span

How much information does a skilled reader acquire on each fixation, and what is the size of the perceptual span (or area of effective vision) on each fixation? To investigate this question, the eye-contingent display change paradigm (McConkie and Rayner 1975, Rayner 1975, Rayner and Bertera 1979) was developed. In this paradigm, a reader's eye movements are monitored (generally every millisecond) by a highly accurate eye-tracking system. The eye tracker is interfaced with a computer that controls the display monitor from which the reader reads, and changes in the text are made contingent on the location of the reader's eyes. Generally, the display changes are made during saccades, and the reader is not aware of them.

There are three primary types of eye-contingent paradigms: the moving window, foveal mask, and boundary techniques. With the moving window technique, on each fixation a portion of the text around the fixation point is available to the reader. However, outside of this window, the text is replaced by other letters or by Xs (see Fig. 1). When the reader moves his or her eyes, the window moves with the eyes. Thus, wherever the reader looks, there is readable text within the window and altered text outside the window. The rationale behind the technique is that when the window is as large as the region from which a reader can normally obtain information, reading will not differ from when there is no window present. The foveal mask technique is very similar to the moving window paradigm except that the text and replaced letters are reversed. Thus, wherever the reader looks, the letters around the fixation are replaced by Xs while outside of the mask area the text remains normal (see Fig. 1). Finally, in the boundary technique, an invisible boundary location is specified in the text, and when the reader's eye movement crosses the boundary, an originally displayed word or letter string is replaced by a target word (see Fig. 1). The amount of time that the reader looks at the target word is computed as a function of (a) the relationship between the initially displayed stimulus and the target word, and (b) the distance of the reader from the target word prior to launching the saccade that crossed the boundary location.

Figure 1. Examples of the moving window, foveal mask, and boundary paradigms. The first line shows a normal line of text with the fixation location marked by an asterisk. The next two lines show an example of two successive fixations with a window of 15 letter spaces (7 letter spaces to each side of fixation) and the other letters replaced with Xs (and spaces between words preserved). The next two lines show an example of two successive fixations with a 7-letter foveal mask. The bottom two lines show an example of the boundary paradigm. The first line shows a line of text prior to a display change with fixation locations marked by asterisks. When the reader's eye movement crosses an invisible boundary (the last letter is important), an initially displayed word (money) is replaced by the target word (skill). The change occurs during the saccade so that the reader does not see the change
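The moving window technique is straightforward to sketch in code. The function below is our own illustration (not from the article): it masks every letter outside a window around the current fixation with X while preserving the spaces between words, mirroring the 15-letter-space window (7 letter spaces to each side of fixation) shown in Figure 1.

```python
def moving_window(text: str, fixation: int, left: int = 7, right: int = 7) -> str:
    """Replace letters outside the window around `fixation` with Xs,
    keeping the spaces between words visible."""
    out = []
    for i, ch in enumerate(text):
        if ch == " " or fixation - left <= i <= fixation + right:
            out.append(ch)   # inside the window (or a word space): keep
        else:
            out.append("X")  # outside the window: mask
    return "".join(out)

line = "reading is accomplished through a series of fixations"
# Re-rendering the line at each new fixation simulates the moving window.
print(moving_window(line, fixation=11))
print(moving_window(line, fixation=26))
```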

For readers of English (and other alphabetic writing systems printed from left-to-right), the span extends from the beginning of the currently fixated word, or roughly 3–4 letter spaces to the left of fixation, to about 14–15 letter spaces to the right of fixation. The span is thus asymmetric—it extends further to the right of fixation than to the left of fixation. For readers of languages printed from right-to-left (such as Hebrew), the span is asymmetric but in the opposite direction from English so that it is larger left of fixation than right. While the span is asymmetric, it is the case that no useful information is acquired below the line of text that is currently being read as readers apparently focus their attention only on the line being fixated.

Although the perceptual span extends about 14–15 letter spaces to the right of fixation, the area from which words can be identified on a given fixation (the word identification span) generally does not exceed 7–8 letter spaces to the right of fixation. However, neither the perceptual span nor the word identification span is fixed as both can be modulated by word length. For example, if three short words occur in succession, readers are able to identify all of them. If the upcoming word is highly constrained by the context, readers acquire more information from that word than from unpredictable words, and if the fixated word is difficult to process, readers obtain less information from the upcoming word.

Orthography also influences the size of the perceptual span. Hebrew readers have a smaller span than English readers, and Japanese and Chinese readers have even smaller spans. Hebrew is a more densely packed language than English, and Japanese and Chinese are both more densely packed than Hebrew; densely packed refers to the fact that it takes more letters per sentence in English than Hebrew, for example. Finally, reading skill influences the size of the perceptual span. Beginning readers (at the end of second grade) have a smaller span than skilled readers and adult dyslexic readers have smaller spans than skilled readers.

URL: https://www.sciencedirect.com/science/article/pii/B0080430767015552

Emotional Modulation of Perspective Taking

Ulises Xolocotzin, ... Sibel Erduran, in Emotions, Technology, and Behaviors, 2016

Methods

Participants and Design

Participants were 30 native English speakers from a range of socioeconomic and academic backgrounds, between 25 and 35 years old, 60% women. All participants were recruited through public advertisement in a midsize city in southwest England and received money as an inconvenience allowance.

The experimental design was within participants with three conditions of affective stimulation: neutral, negative, and positive. In addition, to control for the effects of individual differences, all participants answered the Interpersonal Reactivity Index (IRI), which is a widely used measure of individual differences in empathic orientation (Davis, 1983). The 28-item IRI contains four subscales (seven Likert items each), including perspective taking, empathic concern, personal distress, and fantasy.

Apparatus and Stimuli

The stimuli were presented over a black background on a laptop computer with a 15-in. screen using PsychoPy (Peirce, 2007). Affective primes were pictures from the International Affective Picture System (IAPS). A set of pictures was selected, including images with neutral, positive, and negative content. None of the selected pictures included faces, to avoid social reaction effects (e.g., attractiveness). To ensure that the selected stimuli varied only in terms of their valence, all pictures were selected to have a moderate arousing effect, as indicated by the IAPS manual (Lang, Bradley, & Cuthbert, 2008).

After each priming stimulus, individuals responded to an argumentation vignette similar to the ones developed by Kuhn and Udell (2007). The vignettes included arguments defending one’s own position and arguments addressing the other’s position. For example:

You are told you should eat crisps instead of ice cream. You prefer ice cream. What is the best argument for you to make?

Ice cream is sweet.

Crisps make you thirsty.

Note that both argument choices aim to support one’s own position, but one of them is directed at strengthening an individual’s own argument (top option in the preceding example), whereas the other addresses the counterpart’s argument (bottom option). Thus, choosing the option that addresses a counterpart’s point of view might indicate engagement with effortful perspective taking. The content of the vignettes included a range of everyday situations likely to be common for most participants. Topics included choices on food, social activities, sports, and so on. The final vignettes used in the study were selected and refined after a pilot study indicating that neither choice was more likely to be chosen, regardless of whether it was directed at supporting one’s own perspective or at addressing a counterpart’s perspective.

The vignettes were presented at the top center of the screen simultaneously with the argument choices positioned at the bottom left and right.

Procedure

Participants were tested in a quiet room. Before a testing session, participants were told that they would be asked to give their opinion about a series of common situations. The experiment started with an introductory screen explaining to the participant that he or she was going to see a picture followed by a short story with two options, and that they should ignore the first picture and concentrate on the story. The screen also instructed participants to imagine themselves in the situation described in the story and select the best argument to defend their position. The initial screen was followed by three practice trials.

Figure 1.1 shows the sequence of events on each experimental trial. A trial started with a blank screen for 3000 ms, followed by a central fixation point displayed for 2000 ms. Next, the prime picture (negative, positive, or neutral) was displayed for 50 ms, followed by a structural mask displayed for 50 ms, giving a stimulus-onset asynchrony (SOA) of 100 ms. Finally, the vignette and the argument choices were presented until the participant pressed one of the designated buttons to make a choice. Whether the argument choices appeared on the left side or the right side was counterbalanced across trials. Participants’ task was to press the designated keyboard key on the right or left to indicate their choice of argument. Both latency and argument choice were collected. Each participant performed 30 trials, 10 for each condition of emotional stimulation, presented at random. The valence of the affective primes (positive, negative, or neutral) presented before each of the 30 vignettes was counterbalanced across participants using a Latin square design. After the experiment, participants answered the IRI; they were then debriefed and the session finished.

Figure 1.1. Sequence of events on a trial. Notes: SOA = stimulus onset asynchrony.
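For readers who want to see how such a trial unfolds in code, here is a hedged sketch of a single trial in PsychoPy, following the timings described above. The image file names are hypothetical, and we use core.wait() for brevity; at 50 ms durations, the original study may well have relied on frame-based timing instead.

```python
from psychopy import visual, core, event

win = visual.Window(size=(1024, 768), color="black", units="pix")
fixation = visual.TextStim(win, text="+", color="white")
prime = visual.ImageStim(win, image="iaps_prime.jpg")      # hypothetical file
mask = visual.ImageStim(win, image="structural_mask.jpg")  # hypothetical file
vignette = visual.TextStim(win, text="You are told you should eat crisps "
                                     "instead of ice cream...", pos=(0, 200))

win.flip(); core.wait(3.0)                   # blank screen, 3000 ms
fixation.draw(); win.flip(); core.wait(2.0)  # central fixation, 2000 ms
prime.draw(); win.flip(); core.wait(0.05)    # prime, 50 ms
mask.draw(); win.flip(); core.wait(0.05)     # mask, 50 ms (SOA = 100 ms)
vignette.draw(); win.flip()                  # vignette shown until response

clock = core.Clock()                         # latency measured from vignette onset
keys = event.waitKeys(keyList=["left", "right"], timeStamped=clock)
print(keys)                                  # [(key, latency_in_seconds)]
win.close()
```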

Results

Data screening: Following Wentura and Degner (2010), responses with a latency falling more than three interquartile ranges above the third quartile of the distribution might be considered too slow to be indicative of priming effects. Responses of this kind represented 2.6% of the collected data and were discarded from further analysis. In addition, 25% of the responses of one participant were found to meet this criterion, and therefore, data from this participant was also removed from further analysis. Another three participants failed to address a counterpart’s perspective at least once in one condition; therefore their data was not included in the multivariate analyses presented later.
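The screening rule reads directly as code. A minimal sketch with numpy, using made-up latencies:

```python
import numpy as np

latencies = np.array([850, 920, 1100, 1300, 1450, 1600, 2100, 9800])  # ms, made up

q1, q3 = np.percentile(latencies, [25, 75])
cutoff = q3 + 3 * (q3 - q1)          # three interquartile ranges above Q3
kept = latencies[latencies <= cutoff]
print(f"cutoff = {cutoff:.0f} ms; discarded {len(latencies) - len(kept)} response(s)")
```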

Argument choices: First we tested whether the preference for arguments strengthening one’s own position over arguments addressing a counterpart’s perspective was significantly above chance for each of the valence conditions. One-sample t-tests were conducted to compare each condition against a theoretical 50% chance baseline. All conditions were above chance (all ts > 2.2, all ps < 0.03), suggesting that participants did not answer the task mindlessly (i.e., randomly) and, instead, showed a systematic tendency to focus on their own perspective. Further analysis tested differences in the selection of arguments addressing a counterpart’s perspective. A repeated measures analysis of variance (ANOVA) with valence (positive/negative/neutral) as a within-subjects factor was applied to the proportion of responses addressing a counterpart’s perspective. The results indicated no significant effects of valence [F (2, 24) = 0.418, ns], as is illustrated in Figure 1.2a.

Figure 1.2. Proportion of responses addressing a counterpart's perspective (a) and response latencies (b). Error bars denote ± 1 SE.
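The chance-level comparison described above amounts to a one-sample t-test per condition against a 0.50 baseline. A sketch with scipy, using invented proportions of own-position choices:

```python
import numpy as np
from scipy import stats

# Proportion of own-position choices per participant, one array per
# valence condition; the numbers are illustrative, not the study's data.
conditions = {
    "neutral":  np.array([0.60, 0.70, 0.55, 0.80, 0.65]),
    "negative": np.array([0.70, 0.60, 0.75, 0.60, 0.70]),
    "positive": np.array([0.65, 0.60, 0.70, 0.55, 0.75]),
}

for name, props in conditions.items():
    t, p = stats.ttest_1samp(props, popmean=0.5)
    print(f"{name}: t = {t:.2f}, p = {p:.3f}")
```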

Latency: Reaction times were analyzed with a repeated-measures ANOVA with valence (positive/negative/neutral) and perspective (own/other) as within-participants factors. The results showed no main effects of perspective [F (2, 25) = 1.64, ns] or the valence × perspective interaction [F (2, 24) = 0.84, ns]. However, there was a main effect of valence [F (2, 24) = 5.23, p < 0.05]. Bonferroni post hoc tests revealed no significant differences between the neutral and the negative valence conditions, but confirmed significantly larger latencies in the positive condition in comparison to both the neutral and negative conditions (ps < 0.05). These results are illustrated in Figure 1.2b.

Interpersonal Reactivity Index (IRI): None of the IRI scales were significantly correlated either with the overall proportion of responses addressing a counterpart’s perspective or individuals’ reaction times (RTs) (all rs < 0.24 and > − 0.36, all ps > 0.05).

URL: https://www.sciencedirect.com/science/article/pii/B9780128018736000017

How many letters does the eye normally take in at each fixation point before moving on to the next fixation point?

We read using fixation points and saccades (rapid eye movements between fixation points), taking in on average about 4 letters to the left of fixation and 15 to the right.

How many letters do people see in a single fixation?

That is, the perceptual span for skilled readers of alphabetic writing systems consists of 3-4 letters to the left of fixation (or the beginning of the currently fixated word) and 14-15 letter spaces to the right of fixation (see Rayner, 1998, 2009 for reviews).

What are fixational eye movements?

A fixation is composed of slower, minute movements (microsaccades, tremor, and drift), known as fixational eye movements, that help the eye align with the target and avoid perceptual fading. Fixation duration varies between 50 and 600 ms, although longer fixations have been reported.

What is single fixation duration?

Fixation duration is the time during which the eyes rest on an object in the surroundings. Fixation lasts approximately 250 milliseconds (ms) but often is shorter or longer. It is usually defined as the time between the end of one saccade and the beginning of the next one.
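Under this definition, fixation durations can be recovered from a raw gaze trace by marking saccadic samples with a velocity threshold and timing the stretches in between. The sketch below is a bare-bones illustration; the sampling rate, threshold, and synthetic trace are all assumptions rather than standard values from this source.

```python
import numpy as np

SAMPLE_MS = 2        # assume a 500 Hz eye tracker
VEL_THRESHOLD = 1.0  # deg/sample; faster samples are treated as saccadic

# Synthetic 1-D gaze trace: a fixation, a saccade, then a second fixation.
gaze = np.array([10.0] * 50 + list(np.linspace(10, 18, 8)) + [18.0] * 120)

velocity = np.abs(np.diff(gaze))
is_fixation = velocity < VEL_THRESHOLD

durations, run = [], 0
for fix_sample in is_fixation:
    if fix_sample:
        run += 1
    elif run:
        durations.append(run * SAMPLE_MS)  # a saccade ends the fixation
        run = 0
if run:
    durations.append(run * SAMPLE_MS)

print(durations)  # [100, 240] ms: two fixations separated by one saccade
```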