The table output of the residualPlots() function from the car package shows the results of a "lack-of-fit test" and/or "Tukey's test for nonadditivity" (Fox & Weisberg, 2018). It adds a quadratic term to the model for each numeric independent variable in the regression model.

- Null hypothesis: the coefficient for the quadratic term is 0.
- Alternate hypothesis: the coefficient for the quadratic term is not 0.

If the p-value is high, we fail to reject the null, meaning there is no evidence of lack of fit and the coefficient of the quadratic term may well be 0. If the p-value is low, we reject the null: the test finds evidence that the quadratic term is not 0, meaning there is strong evidence of non-linearity in the model, and it needs to be re-specified before we can trust its results.

For the Tukey test at the very bottom of the table:

- Null hypothesis: the relationship between residuals and fitted values is 0.
- Alternate hypothesis: the relationship between residuals and fitted values is not 0.

If the p-value is high, we fail to reject the null, meaning we didn't find evidence of a relationship between residuals and fitted values. If the p-value is low, we did find evidence of such a relationship, and we need to re-specify the model before we can trust its results.

Everything above I learned (and hopefully interpreted correctly) from Chapter 6, section 6.2.1, "Plotting Residuals"; the relevant chapter is currently available online. What I still don't understand is whether the tests in every row of the output are results of Tukey's test for nonadditivity, or just the one at the bottom.
I have multiple sets of comparisons I am running together. Normally I would do an FDR correction on the p-values to control the false-discovery rate. However, the p-values from a Tukey HSD are already adjusted for multiple comparisons (within a single test). Does this pose problems for adjusting the p-values again?

As an example, let's say I am testing the difference among 3 treatments, first in group A (one test) and separately in group B (the second test). This gives me a total of 6 pairwise comparisons, 3 in each group. Normally I would adjust the resulting 6 p-values. But with Tukey HSD, the p-values have already been adjusted within each group (accounting for 3 comparisons), so correcting all 6 p-values as if they had not already been adjusted seems not quite right.

*EDIT: I neglected to say that this is after an omnibus ANOVA as well as a simple-effects test (because of significant interactions), both with corrected p-values. Having partitioned within levels of one factor, I proceed to Tukey's to examine combinations of the remaining factor.

Before running any a posteriori test such as Tukey's HSD, compute a p-value using an omnibus test, which will tell you whether there are significant differences among any of the pairwise comparisons. The Tukey's test is performed as follows: first, set up the groups in pairs. You can get the averages (means) for each group in the SUMMARY section of the ANOVA test result. Next, obtain the absolute values (positive values) of the difference in the means of each pair using the ABS function. If the omnibus test indicates that significant differences exist (e.g., p < 0.05), then use Tukey's HSD test to determine which pairwise comparisons are significant, and report an unadjusted p-value.

With this said, I think it is far more reasonable (and maybe preferable considering the general opinion of p-values, though if your boss/advisor/colleague/etc. wants p-values you may have to simply give them to him/her) to simply report the omnibus p-value along with the group means and standard deviations. Any reader capable of interpreting the individual means should be able to tell which groups differ by a margin of any practical significance.
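The two-stage setup the question describes can be sketched in Python with statsmodels (hypothetical data and names; the second FDR pass is exactly the debatable step, since it treats the already Tukey-adjusted p-values as if they were raw):

```python
# Hypothetical sketch: Tukey HSD within each of two groups (3 treatments
# each), then a Benjamini-Hochberg FDR pass over all 6 adjusted p-values.
import numpy as np
from statsmodels.stats.multicomp import pairwise_tukeyhsd
from statsmodels.stats.multitest import multipletests

rng = np.random.default_rng(1)
pvals = []
for group in ("A", "B"):
    # 3 treatments x 30 observations; the third treatment's mean is shifted
    y = np.concatenate([rng.normal(loc, 1.0, 30) for loc in (0.0, 0.0, 1.5)])
    labels = np.repeat(["t1", "t2", "t3"], 30)
    res = pairwise_tukeyhsd(y, labels)  # 3 comparisons, already adjusted
    pvals.extend(res.pvalues)           # .pvalues exists in recent statsmodels

# Second-stage FDR correction across both families -- the questionable step,
# since these 6 p-values are already family-adjusted within each group.
reject, p_fdr, _, _ = multipletests(pvals, method="fdr_bh")
print(np.round(p_fdr, 4))
```

One thing worth noting: Benjamini-Hochberg can only leave a p-value unchanged or push it upward, so running it on already-adjusted Tukey p-values makes the overall procedure more conservative than either adjustment alone.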