Revision as of 11:17, 2 February 2016
SMHS Linear Modeling - Linear mixed effects analyses
Scientific inference based on fixed and random effect models, assumptions, and mixed effects logistic regression.
Questions:
- What happens if data are not independent and identically distributed (IID)?
- How to model multiple observations for the case/subject (across time, conditions, etc.)?
Fixed and random effects
Linear models express relationships between data elements (variables) in terms of a (linear) function. For example, we can model weight as a function of height.
Weight ~ Height + ε
Here, “height” is a fixed effect, and ε is the “error term” representing the deviations between the model predictions and the observations (of weight) due to “random” factors that we cannot control experimentally. This term (ε) is the “probabilistic” or “stochastic” part of the model. Let’s unpack “ε” and add complexity to it. In mixed (fixed and random effect) models, everything in the “systematic” part of the model works just as in linear models; changing the random part of the model leaves the systematic part (height) unchanged.
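The fixed-effect model Weight ~ Height + ε can be fit by ordinary least squares. The sketch below uses hypothetical simulated heights and weights (the true slope and intercept are assumptions chosen for illustration, not SMHS data):

```python
import numpy as np

# Hypothetical data: heights (inches) and weights (pounds); the "true"
# relationship is weight = 4*height - 130, plus Gaussian noise (the ε term).
rng = np.random.default_rng(0)
height = rng.uniform(68, 80, size=100)
weight = 4.0 * height - 130.0 + rng.normal(0, 5, size=100)

# Fit Weight ~ Height + ε by ordinary least squares:
# design matrix with an intercept column and a height column.
X = np.column_stack([np.ones_like(height), height])
beta, *_ = np.linalg.lstsq(X, weight, rcond=None)
intercept, slope = beta
print(intercept, slope)  # estimates should be close to -130 and 4
```

With enough observations, the estimated slope and intercept recover the systematic part of the model, while ε absorbs the scatter around the fitted line.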
Suppose we’re looking at the Baseball data and trying to identify a relationship that looks like this:
Weight ~ position + ε
Position (the player’s field position) is treated as a categorical factor with several levels (e.g., Catcher, First Baseman, etc.). On top of that, we also have an additional fixed effect, Height, so our two-predictor linear model looks more like this:
Weight ~ Height + position + ε
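A model mixing a continuous predictor (Height) and a categorical factor (position) can be fit with a regression formula. This is a sketch on hypothetical simulated data (the position offsets and coefficients are assumptions, not real Baseball-data values):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical data: weight depends on height plus a per-position offset.
rng = np.random.default_rng(2)
n = 150
position = rng.choice(["Catcher", "First_Baseman", "Outfielder"], size=n)
offset = {"Catcher": 10.0, "First_Baseman": 5.0, "Outfielder": 0.0}
height = rng.uniform(68, 80, size=n)
weight = (4.0 * height - 130.0
          + np.array([offset[p] for p in position])
          + rng.normal(0, 5, size=n))

df = pd.DataFrame({"weight": weight, "height": height, "position": position})

# Weight ~ Height + position: C(position) treats position as a categorical
# factor, so OLS estimates one offset per level relative to a baseline level.
fit = smf.ols("weight ~ height + C(position)", df).fit()
print(fit.params)
```

The fitted coefficients report the common height slope plus one contrast per position level, i.e., how far each level's baseline weight sits from the reference category.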
This model expansion is nice, but it complicates the data analytics and the scientific inference somewhat. If the study design involves taking multiple measurements per player, say across time/age, each player yields multiple position, height, and weight responses. This violates the independence assumption of the linear model, since multiple responses from the same subject cannot be regarded as independent of one another. Every player has a slightly different weight, and this idiosyncratic factor affects all responses from the same player, rendering those responses inter-dependent (within player) rather than independent, as the model assumptions require.
A way to resolve this assumption violation is to add a random effect for players. This allows us to account for the within-player dependence by assuming a different “baseline” weight for each player. For example, player 1 may have a mean weight of 200 pounds across different times, while player 2 may have a mean weight of 350 pounds. Here’s a visual depiction of what this looks like:
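The random-intercept idea can be sketched with a linear mixed model: a shared fixed effect of height, plus a per-player “baseline” shift. The data below are hypothetical simulated repeated measures (the player counts, variances, and coefficients are illustrative assumptions):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical repeated measures: 30 players weighed 5 times each.
# Each player gets an idiosyncratic baseline shift (the random intercept)
# on top of a shared fixed effect of height.
rng = np.random.default_rng(1)
n_players, n_obs = 30, 5
player = np.repeat(np.arange(n_players), n_obs)
baseline = rng.normal(0, 15, size=n_players)          # per-player random intercept
height = np.repeat(rng.uniform(68, 80, size=n_players), n_obs)
weight = (4.0 * height - 130.0
          + baseline[player]                          # within-player dependence
          + rng.normal(0, 3, size=player.size))       # residual ε

df = pd.DataFrame({"player": player, "height": height, "weight": weight})

# Weight ~ Height with a random intercept per player:
# groups=df["player"] tells MixedLM which observations share a baseline.
model = smf.mixedlm("weight ~ height", df, groups=df["player"])
result = model.fit()
print(result.params)
```

The fitted output separates the fixed height slope from the between-player variance of the random intercepts, which is exactly how the model absorbs the within-player correlation that plain OLS ignores.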
....
Next See
Machine Learning Algorithms section for data modeling, training, testing, forecasting, prediction, and simulation.
- SOCR Home page: http://www.socr.umich.edu