How to use lmer within geom_smooth

Question

My mixed effects model is very simple: one outcome, one covariate, one random intercept. Something similar to this.

  library(data.table)  # for fread()
  library(lme4)        # for lmer()

  # download data directly from URL
  url <- "https://raw.githubusercontent.com/hauselin/rtutorialsite/master/data/simpsonsParadox.csv"
  df1 <- fread(url)

  # mixed effects model
  m_intercept <- lmer(grades ~ iq + (1 | class), data = df1)
  summary(m_intercept)

My question is: how do I plot this model using ggplot's geom_smooth function? Something along the lines of

  library(ggplot2)

  ggplot(df1, aes(x = iq, y = grades, color = class)) +
    geom_point() +
    geom_smooth(method = "lm", se = FALSE)

in the same way one would write

  ggplot(mpg, aes(displ, hwy)) +
    geom_point() +
    geom_smooth(method = lm, se = FALSE)

Thanks.

Answer 1

Score: 2


Simply create a data frame of predictions and plot:

  newdata <- expand.grid(iq = 90:130, class = letters[1:4])
  newdata$grades <- predict(m_intercept, newdata)

  ggplot(df1, aes(iq, grades, color = class)) +
    geom_point() +
    geom_line(data = newdata)

[resulting plot]
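
If you prefer not to build a prediction grid, a minimal alternative sketch is to draw the model's fitted values directly (this assumes df1 has no missing values in grades, iq, or class, so that fitted() returns exactly one value per row of df1):

  library(ggplot2)

  # fitted() on the merMod object combines the shared slope with each
  # class's random intercept, giving one prediction per observation
  ggplot(df1, aes(iq, grades, color = class)) +
    geom_point() +
    geom_line(aes(y = fitted(m_intercept)))

Within each class the fitted values fall on a straight line, so this reproduces the same per-class lines as the newdata approach, just restricted to the observed iq range.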

A nice demonstration of Simpson's paradox indeed. If your goal is to actually demonstrate Simpson's paradox, you might want to add some bells and whistles:

  newdata <- data.frame(iq = c(90, 105, 95, 115, 105, 125, 110, 130),
                        class = rep(letters[1:4], each = 2))
  newdata$grades <- predict(m_intercept, newdata)

  ggplot(df1, aes(iq, grades, color = class)) +
    geom_point(size = 3, alpha = 0.5) +
    geom_line(data = newdata) +
    geom_smooth(se = FALSE, aes(linetype = "Naive linear model"),
                color = "gray50", method = "lm") +
    scale_linetype_manual(NULL, values = 3) +
    theme_minimal(base_size = 16) +
    scale_color_brewer("Mixed effect model\nwith intercept\nper class",
                       palette = "Set1") +
    ggtitle("Illustration of Simpson's Paradox")

[plot: "Illustration of Simpson's Paradox"]
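
Because the model has one fixed slope for iq plus a random intercept per class, the same lines can also be drawn straight from the model's coefficients. A sketch of that idea, assuming coef() returns columns named "(Intercept)" and "iq" for this model (lme4's usual layout) and reusing m_intercept and df1 from above:

  library(ggplot2)

  # coef() on a merMod returns, per grouping factor, each class's intercept
  # combined with the shared fixed-effect slope for iq
  coefs <- coef(m_intercept)$class
  coefs$class <- rownames(coefs)
  names(coefs)[names(coefs) == "(Intercept)"] <- "intercept"

  ggplot(df1, aes(iq, grades, color = class)) +
    geom_point() +
    geom_abline(data = coefs,
                aes(intercept = intercept, slope = iq, color = class))

Unlike the newdata approach, geom_abline extends each line across the full x range of the plot rather than only the iq values observed in that class.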
