After fitting models, use summaryh() in place of summary() to get APA (American Psychological Association) formatted output that also includes an effect size estimate (correlation r) for each effect. Currently supports lm, glm, aov, anova, lmer, lme, t.test, chisq.test, and cor.test objects. Unfortunately, this function won't write your entire results section for you (yet).

summaryh(model, decimal = 2, showTable = FALSE, showEffectSizesTable = FALSE, ...)

Arguments

model

a fitted model

decimal

number of decimal places to round output to

showTable

show results in table format (returns a list)

showEffectSizesTable

show additional effect sizes computed using the es function

...

further arguments passed to or from other methods
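For instance, the decimal argument controls rounding in the formatted output (a sketch, assuming the package providing summaryh() is loaded):

```r
# sketch: round the APA-formatted output to 3 decimal places
# (assumes the package providing summaryh() is attached)
summaryh(lm(mpg ~ qsec, mtcars), decimal = 3)
```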

Value

A data.table, or a list of data.tables if showTable = TRUE or showEffectSizesTable = TRUE
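When a list is returned, the individual tables can be extracted by name (a sketch; the element names $results and $results2 follow the printed examples below):

```r
# sketch: extract elements of the list returned when showTable = TRUE
# (element names follow the printed examples: $results, $results2)
out <- summaryh(t.test(mpg ~ vs, mtcars), showTable = TRUE)
out$results   # APA-formatted summary
out$results2  # underlying numeric results as a data.table
```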

Note

Cohen's d: 0.20 (small), 0.50 (medium), 0.80 (large) (Cohen, 1992)
correlation r: .10 (small), .30 (medium), .50 (large)
R-squared (R2): .02 (small), .13 (medium), .26 (large)

Examples

summaryh(lm(mpg ~ qsec, mtcars))
#>           term                                                 results
#> 1: (Intercept) b = −5.11, SE = 10.03, t(30) = −0.51, p = .614, r = 0.09
#> 2:        qsec    b = 1.41, SE = 0.56, t(30) = 2.53, p = .017, r = 0.42
summaryh(aov(mpg ~ gear, mtcars))
#>    term                           results
#> 1: gear F(1, 30) = 9.00, p = .005, r = 0.48
summaryh(cor.test(mtcars$mpg, mtcars$gear), showEffectSizesTable = TRUE)
#> $results
#>                   results
#> 1: r(30) = 0.48, p = .005
#> 
#> $effectSizes
#>                          term    d    r   R2    f oddsratio logoddsratio  auc fishersz
#> 1: mtcars$mpg and mtcars$gear 1.09 0.48 0.23 0.55      7.28         1.98 0.78     0.52
summaryh(t.test(mpg ~ vs, mtcars), showTable = TRUE)
#> $results
#>                             results
#> 1: t(23) = −4.67, p < .001, r = 0.70
#> 
#> $results2
#>         term     df statistic p.value es.r   es.d
#> 1: mpg by vs 22.716    -4.667       0  0.7 -1.958
summaryh(glm(vs ~ 1, mtcars, family = "binomial"), showTable = TRUE)
#> $results
#>           term                                                results
#> 1: (Intercept) b = −0.25, SE = 0.36, z(31) = −0.71, p = .481, r = −0.07
#> 
#> $results2
#>           term estimate std.error statistic p.value df es.oddsratio  es.r  es.d
#> 1: (Intercept)   -0.251     0.356    -0.705   0.481 31        0.778 -0.07 -0.14