To use the ANOVA test we made the following assumptions: the observations are independent, the populations are normally distributed, and the population variances are equal. The presence of outliers can also cause problems. In addition, we need to make sure that the F statistic is well behaved. In particular, the F statistic is relatively robust to violations of normality: in general, as long as the sample sizes are equal (called a balanced model) and sufficiently large, the normality assumption can be violated provided the samples are symmetrical or at least similar in shape (e.g., all are negatively skewed). The F statistic is not so robust to violations of homogeneity of variances. A rule of thumb for balanced models is that if the ratio of the largest variance to the smallest variance is less than 3 or 4, the F-test will be valid. If the sample sizes are unequal, then smaller differences in variances can invalidate the F-test.
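As a quick illustration of the rule of thumb, the following sketch (with made-up sample data, not data from the text) computes the ratio of the largest to the smallest sample variance:

```python
from statistics import variance

# Hypothetical samples for three groups (illustrative only, not from the text)
groups = [[4, 5, 6, 7], [3, 5, 6, 8], [10, 11, 12, 13]]

variances = [variance(g) for g in groups]  # sample variances
ratio = max(variances) / min(variances)
print(round(ratio, 2))  # → 2.6
print(ratio < 4)        # → True: rule of thumb satisfied, F-test should be valid
```

Since the ratio is well under 3 or 4, the heterogeneity here would not by itself invalidate a balanced F-test.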
Much more attention needs to be paid to unequal variances than to non-normality of data. We now look at how to test for violations of these assumptions and how to deal with any violations when they occur.

Hypotheses: In ANOVA we wish to determine whether the classification (independent) variable affects what we observe on the response (dependent) variable. In the example, we wish to determine whether Temperature affects Learning. In statistical terms, we want to decide between two hypotheses: the null hypothesis (H0), which says there is no effect, and the alternative hypothesis (H1), which says that there is an effect. In symbols: H0: μ1 = μ2 = ... = μk (all population means are equal); H1: not all population means are equal. Note that this is a non-directional test. There is no equivalent to the directional (one-tailed) t-test.

The t test statistic for two groups: Recall the generic formula for the t-test: t = (sample statistic - population parameter) / estimated standard error. For two groups the sample statistic is the difference between the two sample means, and in the two-tailed test the population parameter is zero. So the generic formula for the two-group, two-tailed t-test can be stated as t = ((M1 - M2) - 0) / standard error of (M1 - M2). (We usually refer to the estimated standard error as, simply, the standard error.)

The F test statistic for ANOVA: The F test statistic is used for ANOVA. It is very similar to the two-group, two-tailed t-test. The F-ratio has the following structure: F = (variance between sample means) / (variance expected by chance). Note that the F-ratio is based on variance rather than difference. But variance is difference: it is the average of the squared differences of a set of values from their mean. The F-ratio uses variance because ANOVA can have many samples of data, not just two as in t-tests. Using the variance lets us look at the differences that exist between all of the many samples.
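The kinship between the two-group t-test and the F-ratio can be made concrete: for two independent groups, the F-ratio is exactly the square of the two-tailed t statistic. Here is a sketch with hypothetical data (not the text's example):

```python
from math import sqrt

# Hypothetical scores for two groups (illustrative only, not from the text)
a, b = [1, 2, 3, 4], [3, 4, 5, 6]
n1, n2 = len(a), len(b)
m1, m2 = sum(a) / n1, sum(b) / n2

# Two-group t: (difference between means - 0) / estimated standard error
ss1 = sum((x - m1) ** 2 for x in a)
ss2 = sum((x - m2) ** 2 for x in b)
sp2 = (ss1 + ss2) / (n1 + n2 - 2)                 # pooled variance
t = (m1 - m2 - 0) / sqrt(sp2 * (1 / n1 + 1 / n2))

# F-ratio: variance between the sample means / variance within the samples
grand = (sum(a) + sum(b)) / (n1 + n2)
ms_between = (n1 * (m1 - grand) ** 2 + n2 * (m2 - grand) ** 2) / (2 - 1)
ms_within = (ss1 + ss2) / (n1 + n2 - 2)
F = ms_between / ms_within

print(round(t ** 2, 4), round(F, 4))  # → 4.8 4.8: the two values agree, F = t²
```

With more than two groups there is no single "difference between means" to put in a t numerator, which is why the F-ratio switches to variances.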
The most obvious thing about the data is that they are not all the same: the scores are different; they are variable. The heart of ANOVA is analyzing the total variability into two components, the mean square between and the mean square within. Once we have analyzed the total variability into its two basic components we simply compare them. The comparison is made by computing the F-ratio. For independent-measures ANOVA the F-ratio has the following structure: F = (variance between treatments) / (variance within treatments), or, using the vocabulary of ANOVA, F = (mean square between) / (mean square within). For the data above, F = 11.25. (Note: The book says 11.28, but this is a rounding error. The correct value is 11.25.)

Degrees of Freedom: Note that the exact shape of the F distribution depends on the degrees of freedom of the two variances. We have two separate degrees of freedom, one for the numerator (sum of squares between) and the other for the denominator (sum of squares within). They depend on the number of groups and the total number of observations. The exact degrees of freedom follow these two formulas (k is the number of groups, N is the total number of observations): df(between) = k - 1 and df(within) = N - k.

A Conceptual View of ANOVA: Conceptually, the goal of ANOVA is to determine the amount of variability in groups of data, where it comes from, and whether the variability is greater between groups than within groups. We can demonstrate how this works visually. Here are three possible sets of data. In each set of data there are 3 groups sampled from 3 populations. We happen to know that each set of data comes from populations whose means are 15, 30 and 45. We have colored the data to show the groups.
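The variance decomposition and the degrees-of-freedom formulas (df between = k - 1, df within = N - k) can be sketched in a few lines of Python. The data here are hypothetical, not the book's:

```python
# Hypothetical scores for k = 3 groups (illustrative only, not the book's data)
groups = [[1, 2, 3, 4], [3, 4, 5, 6], [6, 7, 8, 9]]

k = len(groups)                         # number of groups
N = sum(len(g) for g in groups)         # total number of observations
grand_mean = sum(sum(g) for g in groups) / N

# Partition total variability into between-groups and within-groups parts
ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
ss_within = sum((x - sum(g) / len(g)) ** 2 for g in groups for x in g)

df_between = k - 1
df_within = N - k

ms_between = ss_between / df_between    # mean square between
ms_within = ss_within / df_within       # mean square within
F = ms_between / ms_within
print(df_between, df_within, round(F, 2))  # → 2 9 15.2
```

The large F here reflects group means (2.5, 4.5, 7.5) that are far apart relative to the spread inside each group.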
Post Hoc Tests: You will recall that in ANOVA the null and alternative hypotheses are H0 (all of the group means are equal) and H1 (the means are not all equal). When the null hypothesis is rejected you conclude that the means are not all the same, but we are left with the question of which means are different. Post Hoc tests help answer that question. Post Hoc tests are done "after the fact": i.e., after the ANOVA is done and has shown us that there are indeed differences amongst the means. Specifically, Post Hoc tests are done when:
T-tests can't be used: We can't compare the groups in the obvious way (by running t-tests on the various pairs of groups) because we would get too "rosy" a picture of the significance (for reasons I don't go into). The Post Hoc tests guarantee we don't get too "rosy" a picture (actually, they provide a picture that is too "glum"!). Two Post Hoc tests are commonly used (although ViSta doesn't offer any Post Hoc tests):
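The "rosy picture" problem can be made concrete with a little arithmetic (a sketch that, for simplicity, treats the pairwise comparisons as independent): with several t-tests each run at alpha = .05, the chance of at least one false rejection climbs well above .05. A simple conservative correction, Bonferroni (testing each pair at alpha/m), pulls it back down, which is why corrected procedures err on the "glum" side:

```python
alpha = 0.05
m = 3  # pairwise comparisons among 3 groups: (1,2), (1,3), (2,3)

# Family-wise error rate of m uncorrected tests, treated as independent
fwer = 1 - (1 - alpha) ** m
print(round(fwer, 3))        # → 0.143, nearly three times the nominal .05

# Bonferroni correction: run each test at alpha / m
fwer_bonf = 1 - (1 - alpha / m) ** m
print(round(fwer_bonf, 3))   # → 0.049, back under .05 (slightly conservative)
```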
The hypotheses for ANOVA are H0 (all of the group means are equal) and H1 (the means are not all equal). We arbitrarily set the significance level at α = .05.
The data are obtained from 60 subjects, 20 in each of 3 different experimental conditions. The conditions are a Placebo condition, and two different drug conditions. The independent (classification) variable is the experimental condition (Placebo, DrugA, DrugB). The dependent variable is the time the stimulus is endured. Here are the data as shown in ViSta's data report:
The data may be obtained from the ViSta Data Applet; you can then do the analysis shown below yourself.
We visualize the data and the model in order to see if the assumptions underlying the independent-measures F-test are met. The assumptions are normality, homogeneity of variance, and independence of the observations. The data visualization is shown below. The boxplot shows that there is somewhat more variance in the "DrugA" group, and that there is an outlier in the "DrugB" group. The Q plots (only the "DrugB" Q plot is shown here) and the Q-Q plot show that the data are normal, except for the outlier in the "DrugB" group.
We use ViSta to calculate the observed F-ratio, and the observed probability level. The report produced by ViSta is shown below. The information we want is near the bottom:
We note that F=4.37 and p=.01721. Since the observed p < .05, we reject the null hypothesis and conclude that it is not the case that all group means are the same. That is, at least one group mean is different than the others. Here is the F distribution for df=2,57 (3 groups, 60 observations). I have added the observed F=4.37:
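The reported p-value can be checked without a statistics library: when the numerator degrees of freedom equal 2, the upper-tail probability of the F distribution has the closed form (1 + 2F/dfd)^(-dfd/2) (a standard fact about the F distribution, not something stated in the text). A quick sketch:

```python
def f_tail_dfn2(f_obs, dfd):
    """Upper-tail probability P(F > f_obs) for an F distribution whose
    numerator df is 2; in this special case a closed form exists:
    (1 + 2*F/dfd) ** (-dfd/2)."""
    return (1.0 + 2.0 * f_obs / dfd) ** (-dfd / 2.0)

p = f_tail_dfn2(4.37, 57)   # observed F = 4.37 with df = 2, 57
print(round(p, 3))          # → 0.017, consistent with the reported p = .01721
print(p < 0.05)             # → True: reject the null hypothesis
```

(The tiny discrepancy from .01721 comes from F itself being rounded to 4.37 in the report.)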
Finally, we also visualize the ANOVA model to see if the assumptions underlying the independent-measures F-test are met. The boxplots are the same as those for the data. The partial regression plot shows that the model is significant at the .05 level of significance, since the curved lines cross the horizontal line. The residual plot shows the outlier in the "DrugB" group, and shows that the "DrugA" group is not as well fit by the ANOVA model as the other groups. Here is the model visualization:

What are the assumptions for the independent measures ANOVA? There are three primary assumptions in ANOVA: the responses for each factor level have a normal population distribution; these distributions have the same variance; and the data are independent.
Which of the following is an assumption underlying a repeated measures analysis of variance (ANOVA)? A repeated measures ANOVA assumes sphericity: the variances of the differences between all combinations of related groups must be equal.