SMHS BigDataBigSci SEM (revision of 2016-05-24 by Pineaumi)
<hr />
<div>==[[SMHS_BigDataBigSci| Model-based Analytics]] - Structural Equation Modeling (SEM) ==<br />
<br />
SEM allows re-parameterization of random effects to specify latent variables that may affect measures at different time points using structural equations. SEM diagrams show variables having predictive (possibly causal) effects on other variables (denoted by arrows), where coefficients index the strength and direction of the predictive relations. SEM does not offer much more than classical regression methods do, but it does allow simultaneous estimation of multiple equations modeling complementary relations. <br />
<br />
SEM is a general multivariate statistical analysis technique that can be used for causal modeling/inference, path analysis, confirmatory factor analysis (CFA), covariance structure modeling, and correlation structure modeling.<br />
<br />
===SEM Advantages===<br />
* Allows testing models with multiple dependent variables<br />
* Provides mechanisms for modeling mediating variables<br />
* Enables modeling of error terms<br />
* Facilitates modeling of challenging data (longitudinal data with auto-correlated errors, multi-level data, non-normal data, incomplete data)<br />
<br />
SEM allows separation of observed and latent variables. Many other standard statistical procedures may be viewed as special cases of SEM. In SEM, statistical significance is less central than in other techniques, and covariances are the core of structural equation models.<br />
<br />
===Definitions===<br />
*The <b>disturbance</b>, <i>D</i>, is the variance in Y unexplained by a variable X that is assumed to affect Y.<br />
X → Y ← D<br />
<br />
* <b>Measurement error</b>, <i>E</i>, is the variance in X unexplained by A, where X is an observed variable that is presumed to measure a latent variable, <i>A</i>.<br />
A → X ← E<br />
<br />
* Variables in a model are either <b>exogenous</b> (independent) or <b>endogenous</b> (dependent).<br />
<br />
===Notation===<br />
<br />
* In SEM, <b>observed (or manifest) indicators</b> are represented by <b>squares/rectangles</b>, whereas <b>latent variables (or factors)</b> are represented by circles/ovals.<br />
<br />
<center>[[Image:SMHS_BigDataBigSci1.png|500px]]</center><br />
<br />
*'''Relations: Direct effects''' (&rarr;), '''Reciprocal effects''' (&harr; or &#8646;), and '''Correlation or covariance''' (&#x293B; or &#x293A;) all have different appearance in SEM models.<br />
<br />
===Model Components===<br />
<br />
The <b>measurement part</b> of an SEM model deals with the latent variables and their indicators. A pure measurement model is a confirmatory factor analysis (CFA) model with unmeasured covariance (bidirectional arrows) between each possible pair of latent variables. There are <u>straight arrows from the latent variables to their respective indicators and straight arrows from the error and disturbance terms to their respective variables, but no direct effects (straight arrows) connecting the latent variables</u>. The <b>measurement model</b> is evaluated using goodness-of-fit measures (Chi-Square test, BIC, AIC, etc.). <b>Validation of the measurement model always comes first.</b> <br />
<br />
<b>Then we proceed to the structural model</b> (including a set of exogenous and endogenous variables together with the direct effects (straight arrows) connecting them along with the disturbance and error terms for these variables that reflect the effects of unmeasured variables not in the model).<br />
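The measurement/structural split above can be made concrete with a minimal, hedged sketch using the R package '''lavaan''' (not part of this page's original code; the <code>HolzingerSwineford1939</code> dataset ships with lavaan, and the factor structure shown is purely illustrative):

```r
# install.packages("lavaan")   # assumed available; not part of the original page
library(lavaan)

model <- '
  # measurement part: latent factors (ovals) defined by observed indicators (rectangles)
  visual  =~ x1 + x2 + x3
  textual =~ x4 + x5 + x6
  # structural part: a direct effect (straight arrow) between latent variables
  textual ~ visual
'
fit <- sem(model, data = HolzingerSwineford1939)
summary(fit, fit.measures = TRUE)   # measurement-model fit plus the structural coefficient
```

Fitting only the first two (measurement) lines with '''cfa'''() would give the pure measurement model; adding the regression line turns it into a structural model.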
<br />
===Notes===<br />
<br />
* Sample-size considerations: largely the same as for regression - more is always better.<br />
* Model assessment strategies: Chi-square test, Comparative Fit Index, Root Mean Square Error of Approximation, Tucker-Lewis Index, Goodness of Fit Index, AIC, and BIC.<br />
* Choice of the number of indicator variables: depends on pilot data analyses and a priori concerns; fewer is better.<br />
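As a hedged illustration (again using the '''lavaan''' package and its bundled demo data, not this page's original code), the fit indices named in the notes above can be extracted from a fitted model in a single call:

```r
library(lavaan)

# a standard three-factor CFA on lavaan's HolzingerSwineford1939 demo data (illustrative only)
fit <- cfa('visual  =~ x1 + x2 + x3
            textual =~ x4 + x5 + x6
            speed   =~ x7 + x8 + x9',
           data = HolzingerSwineford1939)

# Chi-square test, CFI, TLI, RMSEA, GFI, AIC and BIC in one call
fitMeasures(fit, c("chisq", "df", "pvalue", "cfi", "tli", "rmsea", "gfi", "aic", "bic"))
```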
<br />
===[[SMHS_BigDataBigSci_SEM_Ex1|Hands-on Example 1 (School Kids Mental Abilities)]]===<br />
<br />
<br />
===[[SMHS_BigDataBigSci_SEM_Ex2|Hands-on Example 2 (Parkinson’s Disease data)]]===<br />
<br />
==See also==<br />
* [[SMHS_BigDataBigSci| Back to Model-based Analytics]]<br />
* [[SMHS_BigDataBigSci_SEM_sem_vs_cfa| Differences and Similarities between '''sem'''() and '''cfa'''() ]] <br />
* [[SMHS_BigDataBigSci_GCM| Next Section: Growth Curve Modeling]]<br />
* [[SMHS_BigDataBigSci_GEE| Next Section: Generalized Estimating Equation (GEE) Modeling]]<br />
<br />
<hr><br />
* SOCR Home page: http://www.socr.umich.edu<br />
<br />
{{translate|pageName=http://wiki.socr.umich.edu/index.php?title=SMHS_BigDataBigSci_SEM}}</div>

SMHS TimeSeriesAnalysis LOS (revision of 2016-05-24 by Pineaumi)
<hr />
<div>==[[SMHS_TimeSeriesAnalysis| SMHS: Time-series Analysis]] - Applications ==<br />
<br />
===Time series regression studies in environmental epidemiology (London Ozone Study 2002-2006)===<br />
A time series regression analysis of a London ozone dataset including daily observations from 1 January 2002 to 31 December 2006. Each day has records of the mean '''ozone''' level for that day and the total number of '''deaths''' that occurred in the city. <br />
<br />
====Questions====<br />
*Is there an association between day-to-day variation in ozone levels and daily risk of death?<br />
*Is the association between ozone exposure and the death outcome confounded by other variables - temperature and relative humidity?<br />
<br />
'''Reference:''' Bhaskaran K, Gasparrini A, Hajat S, Smeeth L, Armstrong B. Time series regression studies in environmental epidemiology. ''International Journal of Epidemiology''. 2013;42(4):1187-1195. doi:10.1093/ije/dyt092.<br />
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3780998/<br />
<br />
<b>Load the Data</b><br />
library(foreign)<br />
&#35;07_LondonOzonPolutionData_2006_TS.csv<br />
&#35;data <- read.csv("https://umich.instructure.com/files/720873/download?download_frd=1")<br />
data <- read.dta("https://umich.instructure.com/files/721042/download?download_frd=1")<br />
<br />
&#35;Set the Default Action for Missing Data to <b>na.exclude</b><br />
options(na.action="na.exclude")<br />
<br />
<b>Exploratory Analyses</b><br />
<br />
&#35;set the plotting parameters for the plot <br />
<br />
oldpar <- par(no.readonly=TRUE)<br />
par(mex=0.8,mfrow=c(2,1))<br />
<br />
&#35;sub-plot for daily deaths, with vertical lines defining years<br />
<br />
plot(data$date,data$numdeaths,pch=".",main="Daily deaths over time", <br />
ylab="Daily number of deaths",xlab="Date")<br />
abline(v=data$date[grep("-01-01",data$date)],col=grey(0.6),lty=2)<br />
<br />
&#35;plot for ozone levels<br />
<br />
plot(data$date,data$ozone,pch=".",main="Ozone levels over time",<br />
ylab="Daily mean ozone level(ug/m3)",xlab="Date")<br />
abline(v=data$date[grep("-01-01",data$date)],col=grey(0.6),lty=2)<br />
par(oldpar)<br />
layout(1)<br />
<br />
&#35;descriptive statistics<br />
<br />
summary(data)<br />
<br />
&#35;correlations<br />
<br />
cor(data[,2:4])<br />
&#35;scale exposure<br />
data$ozone10 <- data$ozone/10<br />
<br />
<b>Modelling Seasonality and Long-Term Trend</b><br />
<br />
&#35;option 1: time-stratified model <BR><br />
&#35;generate month and year<br />
<br />
data$month <- as.factor(months(data$date,abbr=TRUE))<br />
data$year <- as.factor(substr(data$date,1,4))<br />
<br />
&#35;fit a Poisson model with a stratum for each month nested in year<BR><br />
&#35;(use of quasi-Poisson family for scaling the standard errors)<br />
<br />
model1 <- glm(numdeaths ~ month/year,data,family=quasipoisson) <br />
summary(model1)<br />
<br />
&#35;compute predicted number of deaths from this model<br />
pred1 <- predict(model1,type="response")<br />
<br />
&#35;Figure 2a: Three alternative ways of modelling long-term patterns in the data (seasonality and trends)<br />
<br />
plot(data$date,data$numdeaths,ylim=c(100,300),pch=19,cex=0.2,col=grey(0.6),<br />
main="Time-stratified model (month strata)",ylab="Daily number of deaths", xlab="Date")<br />
lines(data$date, pred1,lwd=2)<br />
<br />
&#35;Option 2: periodic functions model (fourier terms)<BR><br />
&#35;use function harmonic, in package '''tsModel''' <br />
<br />
install.packages("tsModel"); library(tsModel)<br />
<br />
&#35;4 sine-cosine pairs representing different harmonics with period 1 year<br />
<br />
data$time <- seq(nrow(data))<br />
fourier <- harmonic(data$time,nfreq=4,period=365.25)<br />
<br />
&#35;fit a Poisson model Fourier terms + linear term for trend <BR><br />
&#35;(use of quasi-Poisson family for scaling the standard errors)<br />
<br />
model2 <- glm(numdeaths ~ fourier +time,data,family=quasipoisson) <br />
summary(model2)<br />
<br />
&#35;compute predicted number of deaths from this model<br />
<br />
pred2 <- predict(model2,type="response")<br />
<br />
&#35;Figure 2b<br />
<br />
plot(data$date, data$numdeaths,ylim=c(100,300),pch=19,cex=0.2,col=grey(0.6),<br />
main="Sine-cosine functions (Fourier terms)",ylab="Daily number of deaths", xlab="Date")<br />
lines(data$date, pred2,lwd=2)<br />
<br />
<br />
&#35;Option 3: Spline Model: Flexible Spline Functions<BR><br />
&#35;generate spline terms, use function '''bs''' in package '''splines'''<br />
library(splines)<BR><br />
&#35;A CUBIC B-SPLINE WITH 32 EQUALLY-SPACED KNOTS + 2 BOUNDARY KNOTS<BR><br />
&#35;Note: the 35 basis variables are set as df, with default knots placement. see '''?bs'''<BR><br />
&#35;other types of splines can be produced with the function ns. see '''?ns'''<br />
spl <- bs(data$time,degree=3,df=35)<BR><br />
&#35;Fit a Poisson Model Fourier Terms + Linear Term for Trend<br />
<br />
model3 <- glm(numdeaths ~ spl,data,family=quasipoisson)<br />
summary(model3)<br />
<br />
&#35;compute predicted number of deaths from this model<br />
<br />
pred3 <- predict(model3,type="response")<br />
<br />
&#35;FIGURE 2C<br />
<br />
plot(data$date,data$numdeaths,ylim=c(100,300),pch=19,cex=0.2,col=grey(0.6), <br />
main="Flexible cubic spline model",ylab="Daily number of deaths", xlab="Date")<br />
lines(data$date,pred3,lwd=2)<br />
<br />
<b>Plot Response Residuals Over Time From Model 3</b><br />
<br />
&#35;GENERATE RESIDUALS<br />
res3 <- residuals(model3,type="response")<br />
&#35;Figure 3: Residual variation in daily deaths after ‘removing’ (i.e. modelling) season and long-term trend.<br />
plot(data$date,res3,ylim=c(-50,150),pch=19,cex=0.4,col=grey(0.6),<br />
main="Residuals over time",ylab="Residuals (observed-fitted)",xlab="Date")<br />
abline(h=0,lty=2,lwd=2)<br />
<br />
<b>Estimate ozone-mortality association - controlling for confounders</b><br />
<br />
&#35;compare the RR (and CI using '''ci.lin''' in package '''Epi''')<br />
<br />
install.packages("Epi"); library(Epi)<br />
<br />
&#35;unadjusted model<br />
<br />
model4 <- glm(numdeaths ~ ozone10,data,family=quasipoisson)<br />
summary(model4)<br />
(eff4 <- ci.lin(model4,subset="ozone10",Exp=T))<br />
<br />
&#35;control for seasonality (with spline as in model 3)<br />
<br />
model5 <- update(model4, .~. + spl)<br />
summary(model5)<br />
(eff5 <- ci.lin(model5,subset="ozone10",Exp=T))<br />
<br />
&#35;control for temperature - temperature modelled with categorical variables for deciles<br />
<br />
cutoffs <- quantile(data$temperature,probs=0:10/10)<br />
tempdecile <- cut(data$temperature,breaks=cutoffs,include.lowest=TRUE)<br />
model6 <- update(model5,.~.+tempdecile)<br />
summary(model6)<br />
(eff6 <- ci.lin(model6,subset="ozone10",Exp=T))<br />
<br />
<b>Build a summary table with effect as percent increase</b><br />
<br />
tabeff <- rbind(eff4,eff5,eff6)[,5:7]<br />
tabeff <- (tabeff-1)*100<br />
dimnames(tabeff) <- list(c("Unadjusted","Plus season/trend","Plus temperature"), c("RR","ci.low","ci.hi"))<br />
round(tabeff,2)<br />
<br />
&#35;explore the lagged (delayed) effects<br />
<br />
&#35;SINGLE-LAG MODELS<br />
<br />
&#35;prepare the table with estimates<br />
<br />
tablag <- matrix(NA,7+1,3,dimnames=list(paste("Lag",0:7), c("RR","ci.low","ci.hi")))<br />
<br />
&#35;iterate<br />
<br />
for(i in 0:7) {<br />
&#35;lag ozone and temperature variables<br />
ozone10lag <- Lag(data$ozone10,i)<br />
tempdecilelag <- cut(Lag(data$temperature,i),breaks=cutoffs, include.lowest=TRUE)<br />
<br />
&#35;define the transformation for temperature<br />
<br />
&#35;same lag as above, but with strata terms instead of a linear term<br />
<br />
mod <- glm(numdeaths ~ ozone10lag + tempdecilelag + spl,data, family=quasipoisson)<br />
tablag[i+1,] <- ci.lin(mod,subset="ozone10lag",Exp=T)[5:7]<br />
}<br />
tablag<br />
<br />
&#35;Figure 4A: Modelling lagged (delayed) associations between ozone exposure and survival/death outcome.<br />
<br />
plot(0:7,0:7,type="n",ylim=c(0.99,1.03),main="Lag terms modelled one at a time", xlab="Lag (days)",<br />
ylab="RR and 95%CI per 10ug/m3 ozone increase")<br />
abline(h=1)<br />
arrows(0:7,tablag[,2],0:7,tablag[,3],length=0.05,angle=90,code=3)<br />
points(0:7,tablag[,1],pch=19)<br />
<br />
<b>Model Checking</b><br />
<br />
&#35;generate deviance residuals from unconstrained distributed lag model<br />
<br />
res6 <- residuals(model6,type="deviance")<br />
<br />
&#35;Figure A1: Plot of deviance residuals over time (London data)<br />
<br />
plot(data$date,res6,ylim=c(-5,10),pch=19,cex=0.7,col=grey(0.6),<br />
main="Residuals over time",ylab="Deviance residuals",xlab="Date")<br />
abline(h=0,lty=2,lwd=2)<br />
<br />
&#35;Figure A2a: Residual plot for Model6: the residuals relate to the unconstrained distributed lag model with ozone<br />
<br />
&#35;(lag days 0 to 7 inclusive), adjusted for temperature at the same lags. The spike in the residual plot relates to<br />
<br />
&#35;the 2003 European heat wave and indicates that the current model does not explain the data over this period well.<br />
<br />
pacf(res6,na.action=na.omit,main="From original model")<br />
<br />
&#35;Include the 1-Day Lagged Residual in the Model<br />
<br />
model9 <- update(model6,.~.+Lag(res6,1))<br />
<br />
&#35;Figure A2b: residuals related to the unconstrained distributed lag model with ozone (lag days 0 to 7 inclusive),<br />
<br />
&#35;adjusted for temperature at the same lags<br />
<br />
pacf(residuals(model9,type="deviance"),na.action=na.omit, <br />
main="From model adjusted for residual autocorrelation")<br />
<br />
====Irish Longitudinal Study on Ageing Example====<br />
<br />
The Irish Longitudinal Study on Ageing (TILDA), 2009-2011 <BR><br />
http://www.icpsr.umich.edu/icpsrweb/ICPSR/studies/34315<BR><br />
Kenny, Rose Anne. The Irish Longitudinal Study on Ageing (TILDA),<BR><br />
2009-2011. ICPSR34315-v1. Ann Arbor, MI: Inter-university Consortium<BR><br />
Bibliographic Citation: for Political and Social Research [distributor], 2014-07-16.<BR><br />
http://doi.org/10.3886/ICPSR34315.v1<br />
<br />
The Irish Longitudinal Study on Ageing (TILDA) is a major inter-institutional initiative led by Trinity College, Dublin, to improve the quantity and quality of data, research and information related to aging in Ireland. Eligible respondents include individuals aged ≥ 50 and their spouses or partners of any age. Interviews are conducted every two years (N=8,504 people) in Ireland, collecting detailed information on all aspects of respondents' lives, including economic (pensions, employment, living standards), health (physical, mental, service needs and usage) and social aspects (contact with friends and kin, formal and informal care, social participation). Survey interviews and physical and biological data are collected along with demographic variables (e.g., age, sex, marital status, household composition, education, and employment) and measures of activities of daily living (ADL), aging, childhood, depression (psychology), education, employment, exercise, eyesight, families, family life, etc.<br />
<br />
# download the RDA data object (ICPSR_34315.zip)<br />
# load in the data into RStudio<br />
dataURL <- "https://umich.instructure.com/files/703606/download?download_frd=1"<br />
load(url(dataURL))<br />
head(da34315.0001); data_colnames <- colnames(da34315.0001)<br />
vars <- da34315.0001<br />
<br />
vars; head(vars); summary(vars); data_colnames<br />
<br />
<center><br />
{| class="wikitable" style="text-align:center; " border="1"<br />
|-<br />
|[1]||"ID"||"HOUSEHOLD"<br />
|-<br />
|[3]|| "CLUSTER"||"STRATUM"<br />
|-<br />
|[5]||"REGION"|| "CAPIWEIGHT"<br />
|-<br />
|[7]|| "IN_SCQ"||"SCQ_WEIGHT"<br />
|-<br />
|[9]|| "AGE"||"SEX"<br />
|-<br />
|[11]|| "NML"||"CM003"<br />
|-<br />
|...||||<br />
|-<br />
|[1673]||"HA_WEIGHT"||"IN_HA"<br />
|- <br />
|[1675]|| "SR_HEIGHT_CENTIMETRES"||"HEIGHT" <br />
|-<br />
|[1677]|| "SR_WEIGHT_KILOGRAMMES"||"WEIGHT" <br />
|-<br />
|[1679]||"COGMMSE"||"FRGRIPSTRENGTHD"<br />
|-<br />
|[1681]||"FRGRIPSTRENGTHND"||"VISUALACUITYLEFT" <br />
|-<br />
|[1683]||"VISUALACUITYRIGHT" ||"BPSEATEDSYSTOLIC1" <br />
|-<br />
|[1685]||"BPSEATEDSYSTOLIC2"||"BPSEATEDDIASTOLIC1"<br />
|-<br />
|[1687]||"BPSEATEDDIASTOLIC2"||"BPSEATEDSYSTOLICMEAN" <br />
|-<br />
|[1689]||"BPSEATEDDIASTOLICMEAN"||"BPHYPERTENSION" <br />
|-<br />
|[1691]||"FRBMI"||"FRWAIST" <br />
|-<br />
|[1693]||"FRHIP"||"FRWHR" <br />
|-<br />
|[1695]||"WEARGLASSES"||"WOREGLASSESDURINGTEST"<br />
|-<br />
|[1697]||"BLOODS_CHOL"||"BLOODS_HDL" <br />
|-<br />
|[1699]||"BLOODS_LDL"||"BLOODS_TRIG" <br />
|-<br />
|[1701]||"BLOODS_TIMEBETWEENLASTMEALANDASS"||"DELAY_HA"<br />
|-<br />
|[1703]||"PICMEMSCORE"||"PICRECALLSCORE" <br />
|-<br />
|[1705]||"PICRECOGSCORE"||"VISREASONING" <br />
|-<br />
|[1707]||"GRIPTEST1D"||"GRIPTEST2D"<br />
|-<br />
|[1709]||"GRIPTEST1ND"||"GRIPTEST2ND"<br />
|-<br />
|[1711]||"GRIPTESTDOMINANT"||"GRIPTESTSITTING" <br />
|-<br />
|[1713]||"TEMPERATURE"||"SCQSOCACT1"<br />
|-<br />
|...||||<br />
|-<br />
|[1981]||"SOCPROXCHLD4"||"SCRFLU"<br />
|-<br />
|[1983]||"SCRCHOL"||"SCRPROSTATE"<br />
|-<br />
|[1985]||"SCRBREASTLUMPS"||"SCRMAMMOGRAM"<br />
|-<br />
|[1987]||"BEHALC_FREQ_WEEK"||"BEHALC_DRINKSPERDAY"<br />
|-<br />
|[1989]||"BEHALC_DRINKSPERWEEK"||"BEHALC_DOH_LIMIT"<br />
|-<br />
|[1991]||"BEHSMOKER"||"BEHCAGE"<br />
|}<br />
</center><br />
<br />
# extract some data elements<br />
df1 <- data.frame(vars)<br />
<br />
df_Irish_small <- df1[, c("ID", "HOUSEHOLD", "AGE", "SEX" , "HA_WEIGHT", "HEIGHT" , <br />
"WEIGHT", "COGMMSE", "FRGRIPSTRENGTHD", "VISUALACUITYLEFT", <br />
"VISUALACUITYRIGHT", "BPSEATEDSYSTOLIC1", <br />
"BPSEATEDSYSTOLIC2", "BPSEATEDDIASTOLIC1", <br />
"BPSEATEDDIASTOLIC2", "BPSEATEDSYSTOLICMEAN", <br />
"BPSEATEDDIASTOLICMEAN", "BPHYPERTENSION",<br />
"WEARGLASSES", "WOREGLASSESDURINGTEST", <br />
"BLOODS_CHOL", "BLOODS_HDL", <br />
"BLOODS_LDL", "BLOODS_TRIG", <br />
"PICMEMSCORE", "PICRECALLSCORE",<br />
"PICRECOGSCORE", "VISREASONING", <br />
"TEMPERATURE", "SOCPROXCHLD4", "SCRFLU", "SCRCHOL", "SCRPROSTATE", <br />
"SCRBREASTLUMPS", "SCRMAMMOGRAM", <br />
"BEHALC_FREQ_WEEK", "BEHALC_DRINKSPERDAY", <br />
"BEHALC_DRINKSPERWEEK", "BEHALC_DOH_LIMIT", <br />
"BEHSMOKER", "BEHCAGE" )<br />
]<br />
<br />
summary(df_Irish_small); head(df_Irish_small)<br />
write.table(df_Irish_small , "data.csv", sep=",")<br />
<br />
===Applications===<br />
<br />
====Frailty associations with sustained attention measures<sup>5</sup>====<br />
<br />
Multinomial logistic regression analyses, with frailty as the outcome variable, were performed to determine associations between the sustained attention measures and prefrailty or frailty. Binary logistic regression analyses determined significant associations between the sustained attention measures and the individual frailty components. The regression models included age and gender, and were also extended to include additional measures of cognitive processing speed (cognitive RT from CRT), executive function (Delta CTT), number of chronic conditions, and number of medications. We also included the quadratic term age<sup>2</sup> to allow for any potential nonlinear effects of age on frailty in each regression model. For the independent variables in the multinomial logistic regression models, relative risk (RR) ratios with 95% confidence intervals (CIs) were provided. For the independent variables in the binary logistic regression models, ORs with 95% CIs were provided.<br />
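A hedged, self-contained sketch of this modelling strategy on simulated data (all variable names below are illustrative, not the study's; the multinomial fit uses '''multinom''' from the '''nnet''' package):

```r
# install.packages("nnet")   # assumed available
library(nnet)
set.seed(1)
n <- 500
d <- data.frame(age       = runif(n, 50, 90),
                sex       = rbinom(n, 1, 0.5),
                attention = rnorm(n))          # illustrative sustained-attention score
# three-level frailty outcome, "robust" as the reference category
d$frailty <- factor(sample(c("robust", "prefrail", "frail"), n, replace = TRUE,
                           prob = c(0.50, 0.35, 0.15)),
                    levels = c("robust", "prefrail", "frail"))

# multinomial model with age, age^2 (nonlinear age effect) and sex as covariates
m <- multinom(frailty ~ attention + age + I(age^2) + sex, data = d, trace = FALSE)
rrr <- exp(coef(m))      # relative-risk ratios vs the "robust" reference level
ci  <- exp(confint(m))   # 95% confidence intervals on the RR-ratio scale
rrr; ci
```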
<br />
====Multivariable logistic regression examining the association between social relationships and depression, anxiety, and suicidal ideation<sup>6</sup>====<br />
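A minimal, hedged sketch of such a multivariable binary logistic regression on simulated data (variable names are illustrative only): the exponentiated coefficients give ORs, and exponentiated Wald confidence limits give the 95% CIs.

```r
set.seed(2)
n <- 400
d <- data.frame(age    = runif(n, 50, 80),
                sex    = rbinom(n, 1, 0.5),
                social = rnorm(n))             # illustrative social-relationship score
# simulate a binary depression outcome that improves with social connectedness
d$depressed <- rbinom(n, 1, plogis(-1 - 0.5 * d$social))

m <- glm(depressed ~ social + age + sex, data = d, family = binomial)
cbind(OR = exp(coef(m)), exp(confint.default(m)))   # ORs with Wald 95% CIs
```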
<br />
===Footnotes===<br />
<br />
* <sup>5</sup> http://psychsocgerontology.oxfordjournals.org/content/early/2013/03/13/geronb.gbt009.full<br />
* <sup>6</sup> http://www.jad-journal.com/article/S0165-0327%2815%2900145-7/fulltext<br />
<br />
===Appendix===<br />
<br />
==See also==<br />
* [[SMHS_TimeSeriesAnalysis| Previous Section on Time-series analysis]]<br />
<br />
<hr><br />
* SOCR Home page: http://www.socr.ucla.edu<br />
<br />
{{translate|pageName=http://wiki.stat.ucla.edu/socr/index.php?title=SMHS_TimeSeriesAnalysis_LOS}}</div>
<hr />
<div>==[[SMHS_TimeSeriesAnalysis| SMHS: Time-series Analysis]] - Applications ==<br />
<br />
===Time series regression studies in environmental epidemiology (London Ozone Study 2002-2006)===<br />
A time series regression analysis of a London ozone dataset including daily observations from 1 January 2002 to 31 December 2006. Each day has records of (mean) '''ozone''' levels that day, and the total number of '''deaths''' that occurred in the city. <br />
<br />
====Questions====<br />
*Is there an association between day-to-day variation in ozone levels and daily risk of death?<br />
*Is ozone exposure associated with the outcome is death or other confounders - temperature and relative humidity?<br />
<br />
'''Reference:''' Bhaskaran K, Gasparrini A, Hajat S, Smeeth L, Armstrong B. Time series regression studies in environmental epidemiology. ''International Journal of Epidemiology''. 2013;42(4):1187-1195. doi:10.1093/ije/dyt092.<br />
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3780998/<br />
<br />
<b>Load the Data</b><br />
library(foreign)<br />
&#35;07_LondonOzonPolutionData_2006_TS.csv<br />
&#35;data <- read.csv("https://umich.instructure.com/files/720873/download?download_frd=1")<br />
data <- read.dta("https://umich.instructure.com/files/721042/download?download_frd=1")<br />
<br />
&#35;Set the Default Action for Missing Data to <b>na.exclude</b><br />
options(na.action="na.exclude")<br />
<br />
<b>Exploratory Analyses</b><br />
<br />
&#35;set the plotting parameters for the plot <br />
<br />
oldpar <- par(no.readonly=TRUE)<br />
par(mex=0.8,mfrow=c(2,1))<br />
<br />
&#35;sub-plot for daily deaths, with vertical lines defining years<br />
<br />
plot(data$\$$date,data$\$$numdeaths,pch=".",main="Daily deaths over time", <br />
ylab="Daily number of deaths",xlab="Date")<br />
abline(v=data$\$$date[grep("-01-01",data$\$$date)],col=grey(0.6),lty=2)<br />
<br />
&#35;plot for ozone levels<br />
<br />
plot(data$\$$date,data$\$$ozone,pch=".",main="Ozone levels over time",<br />
ylab="Daily mean ozone level(ug/m3)",xlab="Date")<br />
abline(v=data$\$$date[grep("-01-01",data$\$$date)],col=grey(0.6),lty=2)<br />
par(oldpar)<br />
layout(1)<br />
<br />
&#35;descriptive statistics<br />
<br />
summary(data)<br />
<br />
&#35;correlations<br />
<br />
cor(data[,2:4])<br />
&#35;scale exposure<br />
data$\$$ozone10 <- data$\$$ozone/10<br />
<br />
<b>Modelling Seasonality and Long-Term Trend</b><br />
<br />
&#35;option 1: time-stratified model <BR><br />
&#35;generate month and year<br />
<br />
data$\$$month <- as.factor(months(data$\$$date,abbr=TRUE))<br />
data$\$$year <- as.factor(substr(data$\$$date,1,4))<br />
<br />
&#35;fit a Poisson model with a stratum for each month nested in year<BR><br />
&#35;(use of quasi-Poisson family for scaling the standard errors)<br />
<br />
model1 <- glm(numdeaths ~ month/year,data,family=quasipoisson) <br />
summary(model1)<br />
<br />
&#35;compute predicted number of deaths from this model<br />
pred1 <- predict(model1,type="response")<br />
<br />
&#35;Figure 2a: Three alternative ways of modelling long-term patterns in the data (seasonality and trends)<br />
<br />
plot(data$\$$date,data$\$$numdeaths,ylim=c(100,300),pch=19,cex=0.2,col=grey(0.6),<br />
main="Time-stratified model (month strata)",ylab="Daily number of deaths", xlab="Date")<br />
lines(data$\$$date, pred1,lwd=2)<br />
<br />
&#35;Option 2: periodic functions model (fourier terms)<BR><br />
&#35;use function harmonic, in package '''tsModel''' <br />
<br />
install.packages("tsModel"); library(tsModel)<br />
<br />
&#35;4 sine-cosine pairs representing different harmonics with period 1 year<br />
<br />
data$\$$time <- seq(nrow(data))<br />
fourier <- harmonic(data$\$$time,nfreq=4,period=365.25)<br />
<br />
&#35;fit a Poisson model Fourier terms + linear term for trend <BR><br />
&#35;(use of quasi-Poisson family for scaling the standard errors)<br />
<br />
model2 <- glm(numdeaths ~ fourier +time,data,family=quasipoisson) <br />
summary(model2)<br />
<br />
&#35;compute predicted number of deaths from this model<br />
<br />
pred2 <- predict(model2,type="response")<br />
<br />
&#35;Figure 2b<br />
<br />
plot(data$\$$date, data$\$$numdeaths,ylim=c(100,300),pch=19,cex=0.2,col=grey(0.6),<br />
main="Sine-cosine functions (Fourier terms)",ylab="Daily number of deaths", xlab="Date")<br />
lines(data$\$$date, pred2,lwd=2)<br />
<br />
<br />
&#35;Option 3: Spline Model: Flexible Spline Functions<BR><br />
&#35;generate spline terms, use function '''bs''' in package '''splines'''<br />
library(splines)<BR><br />
&#35;A CUBIC B-SPLINE WITH 32 EQUALLY-SPACED KNOTS + 2 BOUNDARY KNOTS<BR><br />
&#35;Note: the 35 basis variables are set as df, with default knots placement. see '''?bs'''<BR><br />
&#35;other types of splines can be produced with the function ns. see '''?ns'''<br />
spl <- bs(data$\$$time,degree=3,df=35)<BR><br />
&#35;Fit a Poisson Model Fourier Terms + Linear Term for Trend<br />
<br />
model3 <- glm(numdeaths ~ spl,data,family=quasipoisson)<br />
summary(model3)<br />
<br />
&#35;compute predicted number of deaths from this model<br />
<br />
pred3 <- predict(model3,type="response")<br />
<br />
&#35;FIGURE 2C<br />
<br />
plot(data$\$$date,data$\$$numdeaths,ylim=c(100,300),pch=19,cex=0.2,col=grey(0.6), <br />
main="Flexible cubic spline model",ylab="Daily number of deaths", xlab="Date")<br />
lines(data$\$$date,pred3,lwd=2)<br />
<br />
<b>Plot Response Residuals Over Time From Model 3</b><br />
<br />
&#35;GENERATE RESIDUALS<br />
res3 <- residuals(model3,type="response")<br />
&#35;Figure 3: Residual variation in daily deaths after ‘removing’ (i.e. modelling) season and long-term trend.<br />
plot(data$\$$date,res3,ylim=c(-50,150),pch=19,cex=0.4,col=grey(0.6),<br />
main="Residuals over time",ylab="Residuals (observed-fitted)",xlab="Date")<br />
abline(h=1,lty=2,lwd=2)<br />
<br />
<b>Estimate ozone-mortality association - controlling for confounders</b><br />
<br />
&#35;compare the RR (and CI using '''ci.lin''' in package '''Epi''')<br />
<br />
install.packages("Epi"); library(Epi)<br />
<br />
&#35;unadjusted model<br />
<br />
model4 <- glm(numdeaths ~ ozone10,data,family=quasipoisson)<br />
summary(model4)<br />
(eff4 <- ci.lin(model4,subset="ozone10",Exp=T))<br />
<br />
&#35;control for seasonality (with spline as in model 3)<br />
<br />
model5 <- update(model4, .~. + spl)<br />
summary(model5)<br />
(eff5 <- ci.lin(model5,subset="ozone10",Exp=T))<br />
<br />
&#35;control for temperature - temperature modelled with categorical variables for deciles<br />
<br />
cutoffs <- quantile(data$\$$temperature,probs=0:10/10)<br />
tempdecile <- cut(data$\$$temperature,breaks=cutoffs,include.lowest=TRUE)<br />
model6 <- update(model5,.~.+tempdecile)<br />
summary(model6)<br />
(eff6 <- ci.lin(model6,subset="ozone10",Exp=T))<br />
<br />
<b>Build a summary table with effect as percent increase</b><br />
<br />
tabeff <- rbind(eff4,eff5,eff6)[,5:7]<br />
tabeff <- (tabeff-1)*100<br />
dimnames(tabeff) <- list(c("Unadjusted","Plus season/trend","Plus temperature"), c("RR","ci.low","ci.hi"))<br />
round(tabeff,2)<br />
<br />
&#35;explore the lagged (delayed) effects<br />
<br />
&#35;SINGLE-LAG MODELS<br />
<br />
&#35;prepare the table with estimates<br />
<br />
tablag <- matrix(NA,7+1,3,dimnames=list(paste("Lag",0:7), c("RR","ci.low","ci.hi")))<br />
<br />
&#35;iterate<br />
<br />
for(i in 0:7) {<br />
&#35;lag ozone and temperature variables<br />
ozone10lag <- Lag(data$\$$ozone10,i)<br />
tempdecilelag <- cut(Lag(data$\$$temperature,i),breaks=cutoffs, include.lowest=TRUE)<br />
<br />
&#35;define the transformation for temperature<br />
<br />
&#35;lag same as above, but with strata terms instead than linear<br />
<br />
mod <- glm(numdeaths ~ ozone10lag + tempdecilelag + spl,data, family=quasipoisson)<br />
tablag[i+1,] <- ci.lin(mod,subset="ozone10lag",Exp=T)[5:7]</blockquote><br />
}<br />
tablag<br />
<br />
&#35;Figure 4A: Modelling lagged (delayed) associations between ozone exposure and survival/death outcome.<br />
<br />
plot(0:7,0:7,type="n",ylim=c(0.99,1.03),main="Lag terms modelled one at a time", xlab="Lag (days)",<br />
ylab="RR and 95%CI per 10ug/m3 ozone increase")</blockquote><br />
abline(h=1)<br />
arrows(0:7,tablag[,2],0:7,tablag[,3],length=0.05,angle=90,code=3)<br />
points(0:7,tablag[,1],pch=19)<br />
<br />
<b>Model Checking</b><br />
<br />
&#35;generate deviance residuals from unconstrained distributed lag model<br />
<br />
res6 <- residuals(model6,type="deviance")<br />
<br />
&#35;Figure A1: Plot of deviance residuals over time (London data)<br />
<br />
plot(data$\$$date,res6,ylim=c(-5,10),pch=19,cex=0.7,col=grey(0.6),<br />
main="Residuals over time",ylab="Deviance residuals",xlab="Date")<br />
abline(h=0,lty=2,lwd=2)<br />
<br />
&#35;Figure A2a: Residual plot for Model6: the residuals relate to the unconstrained distributed lag model with ozone<br />
<br />
&#35;(lag days 0 to 7 inclusive), adjusted for temperature at the same lags. The spike in the plot of residuals relate to<br />
<br />
&#35;the 2003 European heat wave, and indicate that the current model does not explain the data over this period well.<br />
<br />
pacf(res6,na.action=na.omit,main="From original model")<br />
<br />
&#35;Include the 1-Day Lagged Residual in the Model<br />
<br />
model9 <- update(model6,.~.+Lag(res6,1))<br />
<br />
&#35;Figure A2b: residuals related to the unconstrained distributed lag model with ozone (lag days 0 to 7 inclusive),<br />
<br />
&#35;adjusted for temperature at the same lags<br />
<br />
pacf(residuals(model9,type="deviance"),na.action=na.omit, <br />
main="From model adjusted for residual autocorrelation")<br />
<br />
====Irish Longitudinal Study on Ageing Example====<br />
<br />
The Irish Longitudinal Study on Ageing (TILDA), 2009-2011 <BR><br />
http://www.icpsr.umich.edu/icpsrweb/ICPSR/studies/34315<BR><br />
Kenny, Rose Anne. The Irish Longitudinal Study on Ageing (TILDA),<BR><br />
2009-2011. ICPSR34315-v1. Ann Arbor, MI: Inter-university Consortium<BR><br />
Bibliographic Citation: for Political and Social Research [distributor], 2014-07-16.<BR><br />
http://doi.org/10.3886/ICPSR34315.v1<br />
<br />
The Irish Longitudinal Study on Ageing (TILDA) is a major inter-institutional initiative led by Trinity College, Dublin, to improve the quantity and quality of data, research and information related to aging in Ireland. Eligible respondents include individuals aged ≥ 50 and their spouses or partners of any age. Interviews are conducted every two years (N=8,504 people) in Ireland, collecting detailed information on all aspects of respondents' lives, including economic (pensions, employment, living standards), health (physical, mental, service needs and usage) and social aspects (contact with friends and kin, formal and informal care, social participation). Survey interviews and physical and biological data are collected, along with demographic variables (e.g., age, sex, marital status, household composition, education, and employment) and measures of activities of daily living (ADL), aging, childhood, depression (psychology), education, employment, exercise, eyesight, families, family life, etc.<br />
<br />
# download the RDA data object (ICPSR_34315.zip)<br />
# load in the data into RStudio<br />
dataURL <- "https://umich.instructure.com/files/703606/download?download_frd=1"<br />
load(url(dataURL))<br />
head(da34315.0001); data_colnames <- colnames(da34315.0001)<br />
vars <- da34315.0001<br />
<br />
vars; head(vars); summary(vars); data_colnames<br />
<br />
<center><br />
{| class="wikitable" style="text-align:center; " border="1"<br />
|-<br />
|[1]||"ID"||"HOUSEHOLD"<br />
|-<br />
|[3]|| "CLUSTER"||"STRATUM"<br />
|-<br />
|[5]||"REGION"|| "CAPIWEIGHT"<br />
|-<br />
|[7]|| "IN_SCQ"||"SCQ_WEIGHT"<br />
|-<br />
|[9]|| "AGE"||"SEX"<br />
|-<br />
|[11]|| "NML"||"CM003"<br />
|-<br />
|...||||<br />
|-<br />
|[1673]||"HA_WEIGHT"||"IN_HA"<br />
|- <br />
|[1675]|| "SR_HEIGHT_CENTIMETRES"||"HEIGHT" <br />
|-<br />
|[1677]|| "SR_WEIGHT_KILOGRAMMES"||"WEIGHT" <br />
|-<br />
|[1679]||"COGMMSE"||"FRGRIPSTRENGTHD"<br />
|-<br />
|[1681]||"FRGRIPSTRENGTHND"||"VISUALACUITYLEFT" <br />
|-<br />
|[1683]||"VISUALACUITYRIGHT" ||"BPSEATEDSYSTOLIC1" <br />
|-<br />
|[1685]||"BPSEATEDSYSTOLIC2"||"BPSEATEDDIASTOLIC1"<br />
|-<br />
|[1687]||"BPSEATEDDIASTOLIC2"||"BPSEATEDSYSTOLICMEAN" <br />
|-<br />
|[1689]||"BPSEATEDDIASTOLICMEAN"||"BPHYPERTENSION" <br />
|-<br />
|[1691]||"FRBMI"||"FRWAIST" <br />
|-<br />
|[1693]||"FRHIP"||"FRWHR" <br />
|-<br />
|[1695]||"WEARGLASSES"||"WOREGLASSESDURINGTEST"<br />
|-<br />
|[1697]||"BLOODS_CHOL"||"BLOODS_HDL" <br />
|-<br />
|[1699]||"BLOODS_LDL"||"BLOODS_TRIG" <br />
|-<br />
|[1701]||"BLOODS_TIMEBETWEENLASTMEALANDASS"||"DELAY_HA"<br />
|-<br />
|[1703]||"PICMEMSCORE"||"PICRECALLSCORE" <br />
|-<br />
|[1705]||"PICRECOGSCORE"||"VISREASONING" <br />
|-<br />
|[1707]||"GRIPTEST1D"||"GRIPTEST2D"<br />
|-<br />
|[1709]||"GRIPTEST1ND"||"GRIPTEST2ND"<br />
|-<br />
|[1711]||"GRIPTESTDOMINANT"||"GRIPTESTSITTING" <br />
|-<br />
|[1713]||"TEMPERATURE"||"SCQSOCACT1"<br />
|-<br />
|...||||<br />
|-<br />
|[1981]||"SOCPROXCHLD4"||"SCRFLU"<br />
|-<br />
|[1983]||"SCRCHOL"||"SCRPROSTATE"<br />
|-<br />
|[1985]||"SCRBREASTLUMPS"||"SCRMAMMOGRAM"<br />
|-<br />
|[1987]||"BEHALC_FREQ_WEEK"||"BEHALC_DRINKSPERDAY"<br />
|-<br />
|[1989]||"BEHALC_DRINKSPERWEEK"||"BEHALC_DOH_LIMIT"<br />
|-<br />
|[1991]||"BEHSMOKER"||"BEHCAGE"<br />
|}<br />
</center><br />
<br />
# extract some data elements<br />
df1 <- data.frame(vars)<br />
<br />
df_Irish_small <- df1[, c("ID", "HOUSEHOLD", "AGE", "SEX" , "HA_WEIGHT", "HEIGHT" , <br />
"WEIGHT", "COGMMSE", "FRGRIPSTRENGTHD", "VISUALACUITYLEFT", <br />
"VISUALACUITYRIGHT", "BPSEATEDSYSTOLIC1", <br />
"BPSEATEDSYSTOLIC2", "BPSEATEDDIASTOLIC1", <br />
"BPSEATEDDIASTOLIC2", "BPSEATEDSYSTOLICMEAN", <br />
"BPSEATEDDIASTOLICMEAN", "BPHYPERTENSION",<br />
"WEARGLASSES", "WOREGLASSESDURINGTEST", <br />
"BLOODS_CHOL", "BLOODS_HDL", <br />
"BLOODS_LDL", "BLOODS_TRIG", <br />
"PICMEMSCORE", "PICRECALLSCORE",<br />
"PICRECOGSCORE", "VISREASONING", <br />
"TEMPERATURE", "SOCPROXCHLD4", "SCRFLU", "SCRCHOL", "SCRPROSTATE", <br />
"SCRBREASTLUMPS", "SCRMAMMOGRAM", <br />
"BEHALC_FREQ_WEEK", "BEHALC_DRINKSPERDAY", <br />
"BEHALC_DRINKSPERWEEK", "BEHALC_DOH_LIMIT", <br />
"BEHSMOKER", "BEHCAGE" )<br />
]<br />
<br />
summary(df_Irish_small); head(df_Irish_small)<br />
write.table(df_Irish_small , "data.csv", sep=",")<br />
<br />
===Applications===<br />
<br />
====Frailty associations with sustained attention measures<sup>5</sup>====<br />
<br />
Multinomial logistic regression analyses, with frailty as the outcome variable, were performed to determine associations between the sustained attention measures and prefrailty or frailty. Binary logistic regression analyses determined significant associations between the sustained attention measures and the individual frailty components. The regression models included age and gender and were also extended to include additional measures of cognitive processing speed (cognitive RT from CRT), executive function (Delta CTT), number of chronic conditions, and number of medications. We also included the quadratic term age<sup>2</sup> to allow for any potential nonlinear effects of age on frailty in each regression model. For the independent variables in the multinomial logistic regression models, relative risk (RR) ratios with 95% confidence intervals (CIs) were provided; for the independent variables in the binary logistic regression models, odds ratios (ORs) with 95% CIs were provided.<br />
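For illustration, the multinomial model described above can be sketched in R with <b>multinom()</b> from the '''nnet''' package (an assumption here -- the cited study does not specify its software, and all variable names below are hypothetical placeholders, not actual TILDA/SART column names):<br />

```r
library(nnet)   # multinom() for multinomial logistic regression

set.seed(1234)
n <- 500
# simulated toy data -- these names are hypothetical placeholders,
# NOT the actual TILDA/SART column names
df <- data.frame(
  age = runif(n, 50, 85),
  sex = factor(sample(c("M", "F"), n, replace = TRUE)),
  sart_errors = rpois(n, 4)   # a stand-in sustained-attention measure
)
# three-level frailty outcome: robust (reference) / prefrail / frail
df$frailty <- factor(sample(c("robust", "prefrail", "frail"), n, replace = TRUE),
                     levels = c("robust", "prefrail", "frail"))

# multinomial logistic regression with age, age^2, sex and the attention measure
fit <- multinom(frailty ~ age + I(age^2) + sex + sart_errors,
                data = df, trace = FALSE)

# relative-risk ratios (RRR), one row per non-reference outcome level
rrr <- exp(coef(fit))
# Wald 95% CIs on the RRR scale, from the coefficient standard errors
se     <- summary(fit)$standard.errors
rrr_lo <- exp(coef(fit) - 1.96 * se)
rrr_hi <- exp(coef(fit) + 1.96 * se)
rrr
```

As in the cited analysis, exponentiating the coefficients yields RR ratios for the prefrail and frail levels relative to the robust reference level.<br />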
<br />
====Multivariable logistic regression examining the association between social relationships and depression, anxiety, and suicidal ideation<sup>6</sup>====<br />
<br />
===Footnotes===<br />
<br />
* <sup>5</sup>http://psychsocgerontology.oxfordjournals.org/content/early/2013/03/13/geronb.gbt009.full<br />
* <sup>6</sup>http://www.jad-journal.com/article/S0165-0327%2815%2900145-7/fulltext<br />
<br />
===Appendix===<br />
<br />
==See also==<br />
* [[SMHS_TimeSeriesAnalysis| Previous Section on Time-series analysis]]<br />
<br />
<hr><br />
* SOCR Home page: http://www.socr.ucla.edu<br />
<br />
{{translate|pageName=http://wiki.stat.ucla.edu/socr/index.php?title=SMHS_TimeSeriesAnalysis_LOS}}</div>
Pineaumi
https://wiki.socr.umich.edu/index.php?title=SMHS_TimeSeriesAnalysis&diff=16249
SMHS TimeSeriesAnalysis
2016-05-24T14:29:03Z
<p>Pineaumi: /* Footnotes */</p>
<hr />
<div>==[[SMHS| Scientific Methods for Health Sciences]] - Time Series Analysis ==<br />
<br />
===Questions===<br />
* Why are trends, patterns or predictions from models/data important?<br />
* How to detect, model and utilize trends in longitudinal data?<br />
<br />
Time series analysis represents a class of statistical methods for longitudinal series data, aiming to extract meaningful information, trends and a characterization of the underlying process from the observed data. These trends may be used for time series forecasting, i.e., prediction of future values based on retrospective observations. Note that classical linear modeling (e.g., regression analysis) may also be employed for prediction and for testing associations between one or more independent variables and a dependent variable. However, time series analysis additionally allows temporal dependencies (e.g., seasonal effects) to be accounted for.<br />
<br />
===Time-series representation===<br />
<br />
There are 3 (distinct and complementary) types of <b>time series patterns</b> that most time-series analyses are trying to identify, model and analyze. These include: <br />
<br />
* <b>Trend</b>: A trend is a long-term increase or decrease in the data that may be linear or non-linear, but is generally continuous (mostly monotonic). The trend may be referred to as direction.<br />
* <b>Seasonal</b>: A seasonal pattern is an influence in the data due to seasonal factors (e.g., the quarter of the year, the month, or the day of the week), and is always of a fixed, known period.<br />
* <b>Cyclic</b>: A cyclic pattern of fluctuations corresponds to rises and falls that are <i>not of fixed period.</i><br />
<br />
<center>[[Image:SMHS_TimeSeries1.png|300px]]</center><br />
<br />
For example, the following code shows several time series with different types of time series patterns.<br />
<br />
<b>par</b>(mfrow=c(3,2))<br />
<br />
<b>n <- 98</b><br />
X <- cbind(1:n) # time points (annually)<br />
<u>Trend1</u> <- LakeHuron+0.2*X # series 1<br />
Trend2 <- LakeHuron-0.5*X # series 2<br />
<br />
<u>Season1</u> <- X; Season2 <- X; # series 1 & 2<br />
for(i in 1:n) {<br />
<b>Season1</b>[i] <- LakeHuron[i] + 5*(i%%4)<br />
<b>Season2</b>[i] <- LakeHuron[i] -2*(i%%10)<br />
}<br />
<br />
<u>Cyclic1</u> <- X; Cyclic2 <- X; # series 1 & 2<br />
for(i in 1:n) {<br />
rand1 <- as.integer(runif(1, 1, 10))<br />
<b>Cyclic1</b>[i] <- LakeHuron[i] + 3*(i%%rand1)<br />
<b>Cyclic2</b>[i] <- LakeHuron[i] - 1*(i%%rand1)<br />
}<br />
<br />
<b>plot</b>(X, Trend1, xlab="Year",ylab=" Trend1", main="Trend1 (LakeHuron+0.2*X)")<br />
<b>plot</b>(X, Trend2, xlab="Year",ylab=" Trend2" , main="Trend2 (LakeHuron-0.5*X)")<br />
<b>plot</b>(X, <b>Season1</b>, xlab="Year",ylab=" <b>Season1</b>", main=" <b>Season1</b>=Trend1 (LakeHuron+5(i%%4))")<br />
<b>plot</b>(X, <b>Season2</b>, xlab="Year",ylab=" <b>Season2</b>", main=" <b>Season2</b>=Trend1 (LakeHuron-2(i%%10))")<br />
<b>plot</b>(X, <b>Cyclic1</b>, xlab="Year",ylab=" <b>Cyclic1</b>", main=" <b>Cyclic1</b>=Trend1 (LakeHuron+3*(i%%rand1))")<br />
<b>plot</b>(X, <b>Cyclic2</b>, xlab="Year",ylab=" <b>Cyclic2</b>", main=" <b>Cyclic2</b> = Trend1 (LakeHuron-(i%%rand1))")<br />
<br />
Note: If you get this run-time graphics error:<br />
“<font color="red">Error in plot.new() : figure margins too large</font>” <BR><br />
You need to make sure your graphics window is large enough or print to PDF:<br />
<br />
pdf("myplot.pdf"); plot(x); dev.off()<br />
<br />
<center>[[Image:SMHS_TimeSeries2.png|300px]]</center><br />
<br />
Let’s look at the delta (Δ) changes - Lagged Differences, using <b>diff</b>, which returns suitably lagged and iterated differences.<br />
<br />
## Default lag = 1<br />
<b>par</b>(mfrow=c(1,1))<br />
hist(diff(Trend1), prob=T, col="red") # Plot histogram<br />
lines(density(diff(Trend1)),lwd=2) # plot density estimate<br />
x<-seq(-4,4,length=100); y<-dnorm(x, mean(diff(Trend1)), sd(diff(Trend1)))<br />
lines(x,y,lwd=2,col="blue") # plot MLE Normal Fit<br />
<br />
===Time series decomposition===<br />
<br />
Denote the time series $y_t$ as comprising three components: a seasonal effect, a trend-cycle effect (containing both trend and cycle), and a remainder component (containing the residual variability in the time series).<br />
<br />
<b>Additive model</b>: <br />
$y_t=S_t+T_t+E_t,$ where $y_t$ is the data at period $t$, $S_t$ is the seasonal component at period $t$, $T_t$ is the trend-cycle component at period $t$, and $E_t$ is the remainder (error) component at period $t$. This <u>additive model</u> is appropriate if the magnitude of the seasonal fluctuations, or the variation around the trend-cycle, does not vary with the level of the time series.<br />
<br />
<b>Multiplicative model</b>: $y_t=S_t×T_t×E_t$. When the variation in the seasonal pattern, or the variation around the trend-cycle, is proportional to the level of the time series, a multiplicative model is more appropriate. Note that when using a multiplicative model, we can transform the data to stabilize the variation in the series over time, and then use an additive model. For instance, a log transformation converts the multiplicative model:<br />
<br />
$y_t=S_t×T_t×E_t$ <BR><br />
into the additive model: <BR><br />
$\log(y_t)=\log(S_t)+\log(T_t)+\log(E_t).$<br />
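This log identity can be sanity-checked with base R's <b>decompose()</b> on the classic AirPassengers series (chosen here for illustration; it is not one of this page's data sets), whose seasonal swings grow with the level of the series:<br />

```r
# raw AirPassengers: seasonal swings grow with the level -> multiplicative
mult <- decompose(AirPassengers, type = "multiplicative")
# log-series: the same structure becomes additive,
# since log(S_t * T_t * E_t) = log(S_t) + log(T_t) + log(E_t)
addl <- decompose(log(AirPassengers), type = "additive")

# multiplicative seasonal factors hover around 1 (mean exactly 1 by construction);
# the additive seasonal of the log-series hovers around 0 (mean exactly 0)
round(window(mult$seasonal, start = c(1949, 1), end = c(1949, 12)), 3)
round(window(addl$seasonal, start = c(1949, 1), end = c(1949, 12)), 3)
```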
<br />
We can examine the Seasonal trends by decomposing the Time Series by <b><i>loess</i></b> (Local Polynomial Regression) Fitting into <b>S</b>easonal, <b>T</b>rend and irregular components using <b>L</b>oess - Local Polynomial Regression Fitting (<b>stl</b> function, in the default “stats” package):<br />
<br />
# using Monthly Males Deaths from Lung Diseases in UK from bronchitis, emphysema and asthma, 1974–1979<br />
mdeaths # is.ts(mdeaths)<br />
fit <- stl(mdeaths, s.window=5)<br />
plot(mdeaths, col="gray", main=" Lung Diseases in UK ", ylab=" Lung Diseases Deaths", xlab="")<br />
lines(fit\$\$$time.series[,2],col="red",ylab="Trend")<br />
plot(fit) # data, seasonal, trend, residuals<br />
<br />
<center>“stl” function parameters</center><br />
<br />
<center><br />
{| class="wikitable" style="text-align:center; " border="1"<br />
|-<br />
|x||Univariate time series to be decomposed. This should be an object of class "ts" with a frequency greater than one.<br />
|-<br />
|s.window||either the character string "periodic" or the span (in lags) of the loess window for seasonal extraction, which should be odd and at least 7, according to Cleveland et al. This has no default.<br />
|-<br />
|s.degree||degree of locally-fitted polynomial in seasonal extraction. Should be zero or one.<br />
|-<br />
|t.window||the span (in lags) of the loess window for trend extraction, which should be odd. If NULL, the default, nextodd(ceiling((1.5*period) / (1-(1.5/s.window)))), is taken.<br />
|-<br />
|t.degree||degree of locally-fitted polynomial in trend extraction. Should be zero or one.<br />
|-<br />
|l.window||the span (in lags) of the loess window of the low-pass filter used for each subseries. Defaults to the smallest odd integer greater than or equal to frequency(x) which is recommended since it prevents competition between the trend and seasonal components. If not an odd integer its given value is increased to the next odd one.<br />
|-<br />
|l.degree||degree of locally-fitted polynomial for the subseries low-pass filter. Must be 0 or 1.<br />
|-<br />
|s.jump, t.jump, l.jump||integers at least one to increase speed of the respective smoother. Linear interpolation happens between every *.jump<sup>th</sup> value.<br />
|-<br />
|robust||logical indicating if robust fitting be used in the loess procedure.<br />
|-<br />
|inner||integer; the number of ‘inner’ (backfitting) iterations; usually very few (2) iterations suffice.<br />
|-<br />
|outer||integer; the number of ‘outer’ robustness iterations.<br />
|-<br />
|na.action||action on missing values.<br />
|}<br />
</center><br />
<br />
<center>[[Image:SMHS_TimeSeries3.png|400px]] [[Image:SMHS_TimeSeries4.png|400px]]</center><br />
<br />
<b>monthplot</b>(fit$\$$time.series[,"seasonal"], main="", ylab="Seasonal", lwd=5)<br />
&#35;As the “fit <- stl(mdeaths, s.window=5)” object has 3 time-series components (seasonal; trend; remainder)<br />
&#35;we can alternatively plot them separately:<br />
&#35;monthplot(fit, choice = <b><u>"seasonal"</u></b>, cex.axis = 0.8)<br />
&#35;monthplot(fit, choice = <b><u>"trend"</u></b>, cex.axis = 0.8)<br />
&#35;monthplot(fit, choice = <b><u>"remainder"</u></b>, type = "h", cex.axis = 1.2) # histogramatic<br />
<br />
<center>[[Image:SMHS_TimeSeries5.png|400px]]</center><br />
<br />
These are the seasonal plots and seasonal sub-series plots of the seasonal component illustrating the variation in the seasonal component over time (over the years).<br />
<br />
Using historical weather (average daily temperature at the University of Michigan, Ann Arbor):<br />
[http://weather-warehouse.com/WeatherHistory/PastWeatherData_AnnArborUnivOfMi_AnnArbor_MI_January.html]<br />
(See meta-data description and provenance online: [http://weather-warehouse.com/WxWfaqs.html]).<br />
<br />
<center>Mean Temperature, (F), UMich, Ann Arbor (1900-2015)</center><br />
<center><br />
{| class="wikitable" style="text-align:center; " border="1"<br />
|-<br />
!Year||Jan||Feb||Mar||Apr||May||Jun||Jul||Aug||Sep||Oct||Nov||Dec<br />
|-<br />
|2015||26.3||14.4||34.9||49||64.2||68||71.2||70.2||68.7||53.9||NR||NR<br />
|-<br />
|2014||24.4||19.4||29||48.9||60.7||69.7||68.8||70.8||63.2||52.1||35.4||33.3<br />
|-<br />
|2013||22.7||26.1||33.3||46||63.1||68.5||72.9||70.2||64.6||53.2||37.6||26.7<br />
|-<br />
|2012||22.4||32.8||50.7||49.2||65.2||71.4||78.9||72.2||63.9||51.7||39.6||34.8<br />
|-<br />
|...|| || || || || || || || || || || ||<br />
|-<br />
|...||17||15.3||31.4||47.3||57||69||76.6||72||63.4||52.2||35.2||23.7<br />
|-<br />
|1900||21.4||19.2||24.7||47.8||60.2||66.3||72||75.4||67.2||59||37.6||29.2<br />
|}<br />
</center><br />
<br />
# data: 07_UMich_AnnArbor_MI_TempPrecipitation_HistData_1900_2015.csv<br />
# more complete data is available here: 07_UMich_AnnArbor_MI_TempPrecipitation_HistData_1900_2015.xls<br />
umich_data <- read.csv("https://umich.instructure.com/files/702739/download?download_frd=1", header=TRUE)<br />
<br />
head(umich_data)<br />
<br />
# https://cran.r-project.org/web/packages/mgcv/mgcv.pdf <br />
# install.packages("mgcv"); require(mgcv) <br />
<br />
# install.packages("gamair"); require(gamair)<br />
par(mfrow=c(1,1))<br />
<br />
The data are in wide format – convert to long format for plotting<br />
<br />
# library("reshape2")<br />
long_data <- melt(umich_data, id.vars = c("Year"), value.name = "temperature")<br />
l.sort <- long_data[order(long_data$\$$Year),]<br />
head(l.sort); tail(l.sort)<br />
<br />
plot(l.sort$\$$temperature, data = l.sort, type = "l")<br />
<br />
<b>Fit the GAMM Model</b> (Generalized Additive Mixed Model)<br />
<br />
<center>[[Image:SMHS_TimeSeries6.png|400px]]</center><br />
<br />
<b>Fit a model with trend and seasonal components</b> --- computation may be slow:<br />
<br />
# define the parameters controlling the process of model-fitting/parameter-estimation<br />
ctrl <- list(niterEM = 0, msVerbose = TRUE, optimMethod="L-BFGS-B")<br />
<br />
# First try this model<br />
mod <- gamm(as.numeric(temperature) ~ s(as.numeric(Year)) + s(as.numeric(variable)), data = l.sort, method = "REML", correlation=corAR1(form = ~ 1|Year), knots=list(Variable = c(1, 12)), na.action=na.omit, control = ctrl)<br />
<br />
&#35;<u>Correlation</u>: <b>corStruct</b> object defining correlation structures in <b>lme</b>. Grouping factors in the formula for this<BR><br />
&#35;object are assumed to be nested within any random effect grouping factors, without the need to make this<BR><br />
&#35;explicit in the formula (somewhat different from the behavior of <b>lme</b>).<BR> <br />
&#35;This is similar to the GEE approach to correlation in the generalized case.<BR><br />
&#35;<u>Knots</u>: an optional list of user specified knot values to be used for basis construction --<BR><br />
&#35;different terms can use different numbers of knots, unless they share a covariate.<BR><br />
&#35;If you revise the model like this (below), it will compare nicely with 3 ARMA models (later)<BR><br />
mod <- gamm(as.numeric(temperature) ~ s(as.numeric(Year), k=116) + s(as.numeric(variable), k=12), <br />
data = l.sort, correlation = corAR1(form = ~ 1|Year), control = ctrl)<br />
<br />
<b>Summary of the fitted model:</b><br />
<br />
summary(mod$\$$gam)<br />
<br />
<b>Visualize the model trend (year) and seasonal terms (months)</b><br />
<br />
plot(mod$\$$gam, pages = 1)<br />
t <- cbind(1: 1392) # define the time<br />
<br />
<center>[[Image:SMHS_TimeSeries7.png|500px]]</center><br />
<br />
<b>Plot the trend on the observed data -- with prediction:</b><br />
<br />
pred2 <- predict(mod$\$$gam, newdata = l.sort, type = "terms")<br />
ptemp2 <- attr(pred2, "constant") + <u>pred2[,1]</u> <br />
<br />
<b># pred2[,1] = trend; pred2[,2] = seasonal effects</b><br />
<b># mod$\$$gam</b> is a GAM object containing information to use predict, summary and print methods, but not to use e.g. the anova method function to compare models<br />
plot(temperature ~ t, data = l.sort, type = "l", xlab = "year", ylab = expression(Temperature ~ (degree*F)))<br />
lines(ptemp2 ~ t, data = l.sort, col = "blue", lwd = 2)<br />
<br />
<center>[[Image:SMHS_TimeSeries8.png|500px]]</center><br />
<br />
<b>Plot the seasonal model</b><br />
<br />
pred <- predict(mod$\$$gam, newdata = l.sort, type = "terms")<br />
ptemp <- attr(pred, "constant") + <u>pred[,2]</u><br />
<br />
plot(l.sort$\$$temperature ~ t, data = l.sort, type = "l", xlab = "year", ylab = expression(Temperature ~ (degree*F)))<br />
lines(ptemp, data = l.sort, col = "red", lwd = 0.5)<br />
<br />
<center>[[Image:SMHS_TimeSeries9.png|500px]]</center><br />
<br />
<b>Zoom in first 100 temps (1:100)</b><br />
<br />
plot(l.sort$\$$temperature ~ t, data = l.sort, type = "l", <b>xlim=c(0, 120)</b>, xlab = "year", ylab = expression(Temperature ~ (degree*F))); lines(ptemp, data = l.sort, col = "red", lwd = 0.5)<br />
<br />
<center>[[Image:SMHS_TimeSeries10.png|500px]]</center><br />
<br />
To examine how much the estimated trend has changed over the 116 year period, we can use the data contained in <b>pred</b> to compute the difference between the start (Jan 1900) and the end (Dec 2015) of the series in the <i><u>trend</u></i> component only:<br />
<br />
<b>tail(pred[,1], 1) - head(pred[,1], 1)</b> # subtract the predicted temp [,1] in 1900 (head) from the temp in 2015 (tail)<br />
<br />
# names(attributes(pred)); str(pred) # to see the components of the GAM prediction model object (pred)<br />
<br />
<b>Assess autocorrelation in residuals</b><br />
<br />
# head(umich_data); tail(umich_data)<br />
acf(resid(mod$\$$lme), lag.max = 36, main = "ACF")<br />
# <b>acf</b> = Auto-correlation and Cross-Covariance Function computes and plots the estimates of the autocovariance or autocorrelation function.<br />
# <b>pacf</b> is the function used for the partial autocorrelations.<br />
# <b>ccf</b> computes the cross-correlation or cross-covariance of two univariate series.<br />
pacf(resid(mod$\$$lme), lag.max = 36, main = "pACF")<br />
<br />
Looking at the residuals of this model, using the (partial) autocorrelation function, we see that there may be some residual autocorrelation in the data that the trend term didn’t account for. The shapes of the ACF and the pACF suggest an <b>AR(p)</b> model might be appropriate.<br />
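This AR(p) signature -- ACF tailing off gradually, pACF cutting off after lag p -- can be verified on simulated data (a generic sketch, not the temperature series itself):<br />

```r
set.seed(42)
# simulate a stationary AR(2) process: X_t = 0.6 X_{t-1} + 0.3 X_{t-2} + e_t
x <- arima.sim(model = list(ar = c(0.6, 0.3)), n = 2000)

a <- acf(x, plot = FALSE)    # ACF tails off gradually for an AR process
p <- pacf(x, plot = FALSE)   # pACF cuts off sharply after lag p = 2

# partial autocorrelations beyond lag 2 should be near zero
round(p$acf[1:5], 2)
```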
<br />
<b>Fit and compare 4 alternative autoregressive models (original mod, AR1, AR2 and AR3)</b><br />
<br />
## AR(1)<br />
m1 <- gamm(as.numeric(temperature) ~ s(as.numeric(Year), k=116) + s(as.numeric(variable), k=12), <br />
data = l.sort, correlation = corARMA(form = ~ 1|Year, <b><u>p = 1</u></b>), control = ctrl)<br />
<br />
## AR(2)<br />
m2 <- gamm(as.numeric(temperature) ~ s(as.numeric(Year), k=116) + s(as.numeric(variable), k=12), <br />
data = l.sort, correlation = corARMA(form = ~ 1|Year, <b><u>p = 2</u></b>), control = ctrl)<br />
<br />
## AR(3)<br />
m3 <- gamm(as.numeric(temperature) ~ s(as.numeric(Year), k=116) + s(as.numeric(variable), k=12), <br />
data = l.sort, correlation = corARMA(form = ~ 1|Year, <b><u>p = 3</u></b>), control = ctrl)<br />
<br />
Note that the correlation argument is specified by <b>corARMA(form = ~ 1|Year, p = x)</b>, which fits an ARMA (auto-regressive moving average) process to the residuals, where <b>p</b> indicates the order for the <b>AR</b> part of the ARMA model, and <b>form = ~ 1|Year</b> specifies that the ARMA is nested within each year. This may expedite the model fitting but may also hide potential residual variation from one year to another.<br />
<br />
Let’s compare the candidate models by using the generalized likelihood ratio test via the <b>anova()</b> method for <b>lme</b> objects; see our previous mixed effects modeling notes <sup>1</sup> , <sup>2</sup>. This model selection is justified as we work with nested models -- going from the AR(3) to the AR(1) by setting some of the AR coefficients to 0. The models also vary in terms of the coefficient estimates for the splines terms which may require fixing some values while choosing the AR structure.<br />
<br />
<b><center>anova(mod$\$$lme, m1$\$$lme, m2$\$$lme, m3$\$$lme)</center></b><br />
<center><br />
{| class="wikitable" style="text-align:center; " border="1"<br />
|- <br />
|||Model||df||AIC||BIC||logLik||Test||L.Ratio||p-value<br />
|-<br />
|mod$\$$lme||1||7||7455.609||7492.228|| -3720.805|| || ||<br />
|-<br />
|m1$\$$lme||2|| 7||7455.609||7492.228|| -3720.805|| || ||<br />
|-<br />
|m2$\$$lme||3|| 8||7453.982||7495.832|| -3718.991||2 vs 3||3.627409||0.0568<br />
|-<br />
|m3$\$$lme||4|| 9||7455.966||7503.048|| -3718.983|| 3 vs 4||0.015687||0.9003<br />
|}<br />
</center><br />
<br />
<b>Interpretation </b><br />
<br />
The AR(1) model (m1) does not provide a substantial increase in fit over the naive model (mod), and the AR(2) model (m2) provides only a marginal improvement over the AR(1) fit (m1). There is no improvement in moving from m2 to the AR(3) model (m3).<br />
<br />
Let’s plot the AR(2) model (m2) to inspect how over-fitted the naive model with uncorrelated errors was in terms of the trend term, which shows similar smoothness compared to the initial (mod) model.<br />
<br />
plot(m2$\$$gam, scale = 0) # plot(mod2$\$$gam, scale = 0) # “scale=0” ensures optimal y-axis cropping of plot<br />
<br />
<center>[[Image:SMHS_TimeSeries11.png|500px]]</center><br />
<br />
<b>Investigation of residual patterns</b><br />
<br />
layout(matrix(1:2, ncol = 2))<br />
# original (mod) model<br />
acf(resid(mod$\$$lme), lag.max = 36, main = "ACF"); pacf(resid(mod$\$$lme), lag.max = 36, main = "pACF")<br />
# pACF controls for the values of the time series at all shorter lags, which contrasts the ACF which does not control for other lags.<br />
<br />
<center>[[Image:SMHS_TimeSeries12.png|500px]]</center><br />
<br />
This illustrates that there is some (month=1) Auto-correlation (ACF) and partial auto correlation in the residuals.<br />
<br />
# AR(2) model (m2)<br />
layout(matrix(1:2, ncol = 2))<br />
res <- resid(m2$\$$lme, type = "normalized"); <br />
acf(res, lag.max = 36, main = "ACF - AR(2) errors"); pacf(res, lag.max = 36, main = "pACF- AR(2) errors")<br />
layout(1)<br />
<br />
<center>[[Image:SMHS_TimeSeries13.png|500px]]</center><br />
<br />
No residual auto-correlation remains in <b>m2</b>. The resulting fitted Generalized Additive Mixed Model (GAMM) object contains information about the trend and the contributions to the fitted values. The package '''mgcv'''<sup>3</sup> can extract this information using <b>predict()</b> for each of the 4 models.<br />
<br />
# require(mgcv); require(gamair)<br />
# m2 <- gamm(as.numeric(temperature) ~ s(as.numeric(Year), k=116) + s(as.numeric(variable), k=12), data = l.sort, correlation = corARMA(form = ~ 1|Year, <b><u>p = 2</u></b>), control = ctrl)<br />
<br />
pred2 <- predict(m2$\$$gam, newdata = l.sort, type = "terms")<br />
pred_trend2 <- attr(pred2, "constant") + <u>pred2[,1]</u> <b># trend</b><br />
pred_season2 <- attr(pred2, "constant") + <u>pred2[,2]</u> <b># seasonal</b> effects<br />
# plot(m2$\$$gam, scale = 0) # plot pure effects<br />
<br />
# Convert the 2 columns (Year and Month/variable) to R Date object<br />
# df_time <- as.Date(paste(as.numeric(l.sort$\$$Year), as.numeric(l.sort$\$$variable), "1", sep="-")); df_time<br />
<br />
plot(x=df_time, y=l.sort$\$$temperature, data = l.sort, type = "l", xlim=c(as.Date("1950-02-01"),as.Date("1960-01-01")), xlab = "year", ylab = expression(Temperature ~ (degree*F)))<br />
lines(x=df_time, y=pred_trend2, data = l.sort, col = "red", lwd = 2);<br />
lines(x=df_time, y=pred_season2, data = l.sort, col = "blue", lwd = 2)<br />
<br />
<center>[[Image:SMHS_TimeSeries14.png|500px]]</center><br />
<br />
===Moving average smoothing===<br />
<br />
A moving average of order $m=2k+1$ can be expressed as:<br />
$T_{t}=\frac{1}{2k+1}\sum_{j=-k}^{k}Y_{t+j}$ .<br />
<br />
The ''m''-MA represents an order m moving average, $T_t$, or the estimate of the trend-cycle at time ''t'', obtained by averaging values of the time series within ''k'' periods (left and right) of ''t''. This averaging process denoises the data (eliminates randomness in the data) and produces a smoother trend-cycle component.<br />
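The same centered average can be computed directly with base R's <b>stats::filter()</b> (which the '''forecast''' package's ma() wraps); a minimal check with ''k''=2 (''m''=5):<br />

```r
x <- as.numeric(LakeHuron)   # any numeric series (98 annual values)
k <- 2
m <- 2 * k + 1               # m = 5 -> a 5-MA

# centered moving average: T_t = (1/m) * sum_{j=-k}^{k} x_{t+j}
T5 <- stats::filter(x, rep(1 / m, m), sides = 2)

# the first and last k values are NA: not enough neighbors to average
T5[1:3]
# hand-check one interior value against the definition
c(T5[3], mean(x[1:5]))
```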
<br />
The 5-MA contains the values of $T_t$ with ''k''=2. To see what the trend-cycle estimate looks like, we plot it along with the original data<br />
<br />
# plot the moving average results (order m=12 and m=36)<br />
# library("forecast")<br />
plot(l.sort$\$$temperature, data = l.sort, type = "l", main=" UMich/AA Temp (1900-2015) ", ylab=" Temperature (F)", xlab="Year")<br />
lines(ma(l.sort$\$$temperature, 12), col="red", lwd=5)<br />
lines(ma(l.sort$\$$temperature, 36), col="blue", lwd=3)<br />
<br />
legend(0, 80, # places a legend at the appropriate place <br />
c("Raw", "m=12 (smoother)", "m=36 (smoothest)"), # puts text in the legend<br />
lty=c(1,1,1), # gives the legend appropriate symbols (lines)<br />
cex=1.0, # label sizes<br />
lwd=c(2.5,2.5), col=c("black", "red", "blue")) # gives the legend lines the correct color and width<br />
<br />
<center>[[Image:SMHS_TimeSeries15.png|500px]]</center><br />
<br />
The blue trend (''m''=36, i.e., 3 yrs) is smoother than the original (raw) data (black) and the 1-yr average (''m''=12). It captures the main movement of the time series without all the minor fluctuations. We can’t estimate $T_t$ where ''t'' is close to the ends, as there is not enough data there to compute the averages. The red trend (''m''=12) is smoother than the original (raw) data (black) but more jagged than the 3-yr average. The order of the moving average (''m'') determines the smoothness of the trend-cycle estimate: a larger order implies a smoother curve.<br />
<br />
===Simulation of a time-series analysis and prediction===<br />
<br />
(1) Simulate a time series<br />
<br />
# the ts() function converts a numeric vector into an R time series object. <br />
# format is ts(vector, start=, end=, frequency=) where start and end are the times of the first and last observation<br />
# and frequency is the number of observations per unit time (1=annual, 4=quarterly, 12=monthly, etc.)<br />
Note that the ''Sampling Interval'' (time between consecutive observations) = $\frac{1}{Frequency}$ <br />
<br />
# save a numeric vector containing 16-years (192 monthly) observations <br />
# from Jan 2000 to Dec 2015 as a time series object<br />
sim_ts <- ts(as.integer(runif(192,0,10)), start=c(2000, 1), end=c(2015, 12), frequency=12)<br />
sim_ts<br />
<br />
# subset the time series (June 2014 to December 2015)<br />
sim_ts2 <- window(sim_ts, start=c(2014, 6), end=c(2015, 12))<br />
sim_ts2<br />
<br />
# plot series <br />
plot(sim_ts)<br />
lines(sim_ts2, col="blue", lwd=3)<br />
<br />
<center>[[Image:SMHS_TimeSeries16.png|500px]]</center><br />
<br />
====Seasonal Decomposition====<br />
<br />
*A time series may be decomposed into its seasonal, trend, and irregular components using the stl() function. Series with multiplicative effects can be transformed into series with additive effects through a log transformation (i.e., '''ln_sim_ts <- log(sim_ts)''').<br />
# Seasonal decomposition<br />
fit_stl <- stl(sim_ts, s.window="period") '''# Seasonal Decomposition of Time Series by Loess'''<br />
plot(fit_stl)<br />
<br />
# inspect the distribution of the residuals<br />
hist(fit_stl$\$$time.series[,3]); # this contains the residuals: fit_stl$\$$time.series [,"remainder"], or seasonal, trend<br />
<br />
<br />
<center>[[Image:SMHS_TimeSeries17.png|500px]]</center><br />
<br />
# additional plots <br />
monthplot(sim_ts) # plots the seasonal subseries of a time series. For each season, a time series is plotted.<br />
<br />
# library(forecast)<br />
seasonplot(sim_ts)<br />
<br />
====Exponential Models====<br />
<br />
*The '''HoltWinters()''' function ('''stats''' package), and the '''ets()''' function ('''forecast''' package) can fit exponential models.<br />
# simple exponential - models level<br />
fit_HW <- HoltWinters(sim_ts, beta=FALSE, gamma=FALSE)<br />
<br />
# double exponential - models level and trend<br />
fit_HW2<- HoltWinters(sim_ts, gamma=FALSE) <br />
<br />
# triple exponential - models level, trend, and seasonal components<br />
fit_HW3 <- HoltWinters(sim_ts)<br />
<br />
plot(fit_HW, col='black')<br />
par(new=TRUE)<br />
plot(fit_HW2, ann=FALSE, axes=FALSE, col='blue')<br />
par(new=TRUE)<br />
plot(fit_HW3, axes=FALSE, col='red')<br />
# clear plot: <br />
# dev.off()<br />
<center>[[Image:SMHS_TimeSeries18.png|500px]]</center><br />
<br />
===Auto-regressive Integrated Moving Average (ARIMA) Models<sup>4</sup> ===<br />
<br />
There are 2 types of ARIMA time-series models (non-seasonal and seasonal). The core auto-regressive moving-average structure is: <BR><br />
$ X_t= \mu+ \underbrace{\sum_{i=1}^{p}{φ_iX_{t-i}}}_\text{auto-regressive (p) part} +<br />
\underbrace{\sum_{j=1}^{q}{θ_jε_{t-j}}}_\text{moving-average (q) part} + <br />
\underbrace{ ε_t }_\text{error term}.$<br />
<br />
====Non-seasonal ARIMA models====<br />
The Non-seasonal ARIMA models are denoted by ARIMA(p, d, q), where the parameters p, d, and q are non-negative integers,<br />
* p = order of the auto-regressive model,<br />
* d = degree of differencing, when ''d''=2, the '''''d<sup>th</sup>'' difference''' is $(X_t-X_{t-1})-(X_{t-1}-X_{t-2})= X_t-2X_{t-1}+X_{t-2}$. That is, the second difference of ''X'' (d=2) is not the difference between the current period and the value 2 periods ago. It is the first-difference-of-the-first difference, the discrete analog of a second derivative, representing the local acceleration of the series rather than its local trend (first derivative).<br />
* q = order of the moving-average model.<br />
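The second-difference identity above can be checked directly in R; this is a minimal sketch on a made-up quadratic series, for which the second difference is the constant discrete "acceleration":

```r
# For X_t = t^2, the second difference (d=2) is constant: it is the
# first-difference-of-the-first-difference, not X_t - X_{t-2}.
x <- c(1, 4, 9, 16, 25)
d2 <- diff(x, differences = 2)   # second difference
manual <- diff(diff(x))          # first difference applied twice
all.equal(d2, manual)            # identical by definition
d2                               # 2 2 2 (discrete second derivative)
```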
<br />
====Seasonal ARIMA models====<br />
The Seasonal ARIMA models are denoted by ''ARIMA(p, d, q)(P, D, Q)<sub>m</sub>,'' <br />
* m = number of periods in each season, <br />
* uppercase P, D, Q represent the auto-regressive, differencing, and moving average terms for the seasonal part of the ARIMA model, and the lower case (p,d,q) are as with non-seasonal ARIMA.<br />
<br />
If 2 of the 3 terms are zero, the model may be abbreviated using only the non-zero component, dropping "AR", "I" or "MA" from the acronym. For example, <br />
<br />
*ARIMA(1,0,0) = AR(1), a stationary and auto-correlated series can be predicted as a multiple of its own previous value, plus a constant. $X_t=μ + φ_1 × X_{t-1}+ \epsilon_t.$ Note that $ε_t=X_t-\hat{X}_t.$<br />
<br />
*An ARIMA(0,1,0) = I(1) model describes a non-stationary series; it is a limiting case of an AR(1) model in which the auto-regressive coefficient equals 1, i.e., a series with infinitely slow mean reversion, $X_t=μ+X_{t-1}+ε_t,$ a 1-step random walk.<br />
<br />
<B>For more complex models:</B><br />
*An ARIMA(1,1,0), differenced first-order auto-regressive model. $X_t=μ+X_{t-1}+α×(X_{t-1}-X_{t-2})+ε_t.$ <br />
<br />
*An ARIMA(0,2,2) model is given by $X_t=2X_{t-1}-X_{t-2}+α×ε_{t-1}+β×ε_{t-2}+ ε_t,$ where $α$ and $β$ are the MA(1) and MA(2) coefficients (sometimes these are defined with negative signs). This is a general linear exponential smoothing model that uses exponentially weighted moving averages to estimate both a local level and a local trend in the series. The long-term forecasts from this model converge to a straight line whose slope depends on the average trend observed toward the end of the series.<br />
<br />
*ARIMA(1,1,2), a differenced first-order auto-regressive model with two moving-average terms, $X_t=μ+X_{t-1}+φ×(X_{t-1}-X_{t-2})+α×ε_{t-1}+β×ε_{t-2}+ε_t.$<br />
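As a hedged illustration of the AR(1) case above, one can simulate an ARIMA(1,0,0) series with a known coefficient and check that arima() approximately recovers it (the coefficient 0.6 and the seed below are arbitrary choices):

```r
set.seed(1234)
# simulate 500 observations from an AR(1) process with phi = 0.6
ar1.sim <- arima.sim(model = list(ar = 0.6), n = 500)
# fit an ARIMA(1,0,0) model and inspect the estimated coefficient
fit.ar1 <- arima(ar1.sim, order = c(1, 0, 0))
coef(fit.ar1)[["ar1"]]   # should be close to the true value 0.6
```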
<br />
The '''arima'''() function ('''stats''' package) can be used to fit an <b><u>auto-regressive integrated moving averages</u></b> model. Other useful functions include:<br />
* lag(sim_ts, k) &nbsp;&nbsp;&nbsp;&nbsp; lagged version of time series, shifted back k observations<br />
* diff(sim_ts, differences=d) &nbsp;&nbsp;&nbsp;&nbsp; difference the time series d times<br />
* ndiffs(sim_ts) &nbsp;&nbsp;&nbsp;&nbsp; Number of differences required to achieve stationarity (from the forecast package)<br />
* acf(sim_ts) &nbsp;&nbsp;&nbsp;&nbsp; auto-correlation function<br />
* pacf(sim_ts) &nbsp;&nbsp;&nbsp;&nbsp; partial auto-correlation function<br />
* adf.test(sim_ts) &nbsp;&nbsp;&nbsp;&nbsp; Augmented Dickey-Fuller test. Rejecting the null hypothesis suggests that a time series is stationary (from the tseries package)<br />
* Box.test(x, type="Ljung-Box") &nbsp;&nbsp;&nbsp;&nbsp; Portmanteau test that observations in vector or time series x are independent.<br />
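A small base-R sketch (on simulated data) showing how diff() and acf() from the list above behave for a non-stationary random walk versus its stationary first difference:

```r
set.seed(42)
rw <- cumsum(rnorm(300))   # ARIMA(0,1,0): a random walk, non-stationary
dw <- diff(rw)             # first difference: white noise, stationary
acf(rw, plot = FALSE)[["acf"]][2]   # lag-1 autocorrelation near 1
acf(dw, plot = FALSE)[["acf"]][2]   # lag-1 autocorrelation near 0
```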
<br />
The '''forecast''' package has alternative versions of '''acf()''' and '''pacf()''' called '''Acf()''' and '''Pacf()''' respectively. <br />
&#35; fit an '''ARIMA(P, D, Q) model''' of order:<br />
* P, represents the AR order<br />
* D, represents the degree of differencing<br />
* Q, represents the MA order.<br />
<br />
fit_arima1 <- arima(sim_ts, order=c(3, 1, 2)) <br />
# predictive accuracy <br />
library(forecast) <br />
accuracy(fit_arima1) <br />
<br />
# predict next 20 observations <br />
library(forecast) <br />
forecast(fit_arima1, 20) <br />
plot(forecast(fit_arima1, 20)) <br />
<br />
<center>[[Image:SMHS_TimeSeries19.png|600px]]</center><br />
<br />
===Automated Forecasting===<br />
<br />
The '''forecast''' package provides functions for the automatic selection of exponential and ARIMA models. The '''ets()''' (exponential TS) function supports both additive and multiplicative models. The '''auto.arima()''' function searches over seasonal and non-seasonal ARIMA models and selects the best one according to an information criterion (e.g., the AIC).<br />
<br />
&#35; library(forecast)<br />
<br />
&#35; Automated forecasting using an exponential model<br />
fit_ets <- ets(sim_ts)<br />
<br />
&#35; Automated forecasting using an ARIMA model<br />
fit_arima2 <- auto.arima(sim_ts)<br />
<br />
&#35; Compare the AIC (model quality) for both models<br />
fit_ets$\$$aic; fit_arima2$\$$aic<br />
accuracy(fit_ets); accuracy(fit_arima2);<br />
<br />
'''Akaike’s Information Criterion (AIC)''' = ''-2Log(Likelihood)+2p,'' where ''p'' is the number of estimated parameters.<br />
summary(fit_ets); summary(fit_arima2)<br />
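The AIC formula above can be verified directly in base R for any fitted model; here a simple linear model on the built-in cars data is used purely for illustration:

```r
fit <- lm(dist ~ speed, data = cars)   # any fitted model will do
ll <- logLik(fit)
p <- attr(ll, "df")                    # number of estimated parameters
manual.aic <- -2 * as.numeric(ll) + 2 * p
all.equal(manual.aic, AIC(fit))        # the two values agree
```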
<br />
The ACF plot of the residuals from the ARIMA(3,1,2) model (fit_arima1) shows all correlations within the threshold limits, indicating that the residuals behave like white noise. A portmanteau test returns a large p-value, also suggesting the residuals are white noise.<br />
&#35; acf computes (and by default plots) estimates of the autocovariance or autocorrelation function<br />
acf(residuals(fit_arima1)) <br />
<br />
&#35; Box–Pierce or Ljung–Box test statistic for examining the null hypothesis of independence in a given time series. <br />
&#35; These are sometimes known as ‘portmanteau’ tests.<br />
Box.test(residuals(fit_arima1), lag=24, fitdf=5, type="Ljung") # fitdf = p+q = 3+2<br />
&#35; plot forecast<br />
<br />
plot(forecast(fit_arima2))<br />
&#35; more on ARIMA https://www.otexts.org/fpp/8/7<br />
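As a sanity check of the portmanteau-test interpretation above, applying the Ljung–Box test to pure simulated white noise should produce a large p-value (no evidence against independence); the seed and sample size below are arbitrary:

```r
set.seed(7)
wn <- rnorm(200)                        # i.i.d. Gaussian white noise
bt <- Box.test(wn, lag = 24, type = "Ljung")
bt[["p.value"]]                         # typically well above 0.05
```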
<br />
===Footnotes===<br />
* <sup>1</sup> https://umich.instructure.com/files/689861/download?download_frd=1 <br />
* <sup>2</sup> https://umich.instructure.com/courses/38100/files <br />
* <sup>3</sup> https://cran.r-project.org/web/packages/mgcv/mgcv.pdf<br />
* <sup>4</sup> http://arxiv.org/ftp/arxiv/papers/1302/1302.6613.pdf<br />
<br />
==See also==<br />
* [[SMHS_TimeSeriesAnalysis_LOS| Applications of Time-series]]<br />
<br />
<hr><br />
* SOCR Home page: http://www.socr.ucla.edu<br />
{{translate|pageName=http://wiki.stat.ucla.edu/socr/index.php?title=SMHS_TimeSeriesAnalysis}}</div>Pineaumihttps://wiki.socr.umich.edu/index.php?title=SMHS_BigDataBigSci_CrossVal_LDA_QDA&diff=16248SMHS BigDataBigSci CrossVal LDA QDA2016-05-24T14:09:20Z<p>Pineaumi: /* See also */</p>
<hr />
<div>==[[SMHS_BigDataBigSci_CrossVal| Big Data Science and Cross Validation]] - Foundation of LDA and QDA for prediction, dimensionality reduction or forecasting==<br />
<br />
===Summary===<br />
Both LDA (Linear Discriminant Analysis) and QDA (Quadratic Discriminant Analysis) use probabilistic models of the class conditional distribution of the data $P(X|Y=k)$ for each class $k$. Their predictions are obtained by using Bayes' theorem (http://wiki.socr.umich.edu/index.php/SMHS_BayesianInference#Bayesian_Rule):<br />
<br />
\begin{equation}<br />
P(Y=k|X)=\frac{P(X|Y=k)P(Y=k)}{P(X)}=\frac{P(X|Y=k)P(Y=k)}{\sum_{l}P(X|Y=l)P(Y=l)}<br />
\end{equation}<br />
<br />
and we select the class $k$ which '''maximizes''' this conditional probability (maximum a posteriori, MAP, estimation).<br />
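A minimal base-R sketch of this classification rule for two univariate Gaussian classes, using hypothetical parameters (means 0 and 3, shared unit variance, equal priors): the class maximizing the posterior is selected.

```r
# posterior class probabilities P(Y=k|X=x) via Bayes' theorem
posterior <- function(x, mu, sigma, prior) {
  joint <- dnorm(x, mean = mu, sd = sigma) * prior  # P(X|Y=k) P(Y=k)
  joint / sum(joint)                                # divide by P(X)
}
p <- posterior(x = 2.1, mu = c(0, 3), sigma = 1, prior = c(0.5, 0.5))
p                 # posterior probabilities for the two classes
which.max(p)      # x = 2.1 is closer to mean 3, so class 2 is selected
```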
<br />
<br />
In linear and quadratic discriminant analysis, $P(X|Y)$ is modeled as a multivariate Gaussian distribution with density:<br />
<br />
\begin{equation}<br />
P(X|Y=k)=\frac{1}{(2\pi)^{n/2}|\Sigma_k|^{1/2}}×e^{\Big(-\frac{1}{2}(X-\mu_k)^T\Sigma_k^{-1}(X-\mu_k)\Big)}<br />
\end{equation}<br />
<br />
<br />
This model can be used to classify data by using the training data to '''estimate''':<br />
<br />
(1) the class prior probabilities $P(Y = k)$ by counting the proportion of observed instances of class $k$, <br />
<br />
(2) the class means $μ_k$ by computing the empirical sample class means, and <br />
<br />
(3) the covariance matrices, by computing either the empirical sample class covariance matrices or a regularized estimator (e.g., lasso). <br />
<br />
<br />
In the <u>linear case</u> (LDA), the Gaussians for each class are assumed to share the same covariance matrix:<br />
<br />
$Σ_k=Σ$ for each class $k$. This leads to linear decision surfaces between classes. This is clear from comparing the log-probability ratios of 2 classes ($k$ and $l$):<br />
<br />
$LOR=\log\Big(\frac{P(Y=k|X)}{P(Y=l|X)}\Big)$<br />
(LOR = 0 ↔ the two posterior probabilities are identical, i.e., same class) <br />
<br />
$LOR=\log\Big(\frac{P(Y=k|X)}{P(Y=l|X)}\Big)=0 ⇔ (\mu_k-\mu_l)^T\Sigma^{-1}X=\frac{1}{2}({\mu_k}^T\Sigma^{-1}\mu_k-{\mu_l}^T\Sigma^{-1}\mu_l) $<br />
<br />
<br />
But, in the more general, <u>quadratic case</u> of QDA, there are no assumptions on the covariance matrices $Σ_k$ of the Gaussians, leading to quadratic decision surfaces.<br />
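To illustrate the LDA/QDA distinction on real data, this hedged sketch fits both classifiers (MASS package) to the built-in iris dataset; LDA pools a single covariance matrix across classes, while QDA estimates one per class:

```r
library(MASS)   # provides lda() and qda()
lda.fit <- lda(Species ~ ., data = iris)
qda.fit <- qda(Species ~ ., data = iris)
# in-sample (training) accuracy of each classifier
lda.acc <- mean(predict(lda.fit, iris)[["class"]] == iris[["Species"]])
qda.acc <- mean(predict(qda.fit, iris)[["class"]] == iris[["Species"]])
c(LDA = lda.acc, QDA = qda.acc)
```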
<br />
==LDA (Linear Discriminant Analysis)==<br />
<br />
&#35;LDA is similar to GLM (e.g., ANOVA and regression analyses), as it also attempts to express one dependent variable as a linear combination of other features or data elements. However, ANOVA uses categorical independent variables and a continuous dependent variable, whereas LDA has continuous independent variables and a categorical dependent variable (i.e., Dx/class label). Logistic regression and probit regression are more similar to LDA than ANOVA, as they also explain a categorical variable by the values of continuous independent variables.<br />
<br />
predfun.lda = function(train.x, train.y, test.x, test.y, neg)<br />
{<br />
require("MASS")<br />
lda.fit = lda(train.x, grouping=train.y)<br />
ynew = predict(lda.fit, test.x)$\$$class<br />
out.lda = confusionMatrix(test.y, ynew, negative=neg)<br />
return( out.lda )<br />
}<br />
<br />
==QDA (Quadratic Discriminant Analysis)==<br />
<br />
predfun.qda = function(train.x, train.y, test.x, test.y, neg)<br />
{<br />
require("MASS") # for lda function<br />
qda.fit = qda(train.x, grouping=train.y)<br />
ynew = predict(qda.fit, test.x)$\$$class<br />
out.qda = confusionMatrix(test.y, ynew, negative=neg)<br />
return( out.qda )<br />
}<br />
<br />
==k-Nearest Neighbors algorithm==<br />
<br />
k-Nearest Neighbors algorithm (''k''-NN) is a non-parametric method for either classification or regression, where the <u>input</u> consists of the ''k'' closest '''training examples''' in the feature space, but the <u>output</u> depends on whether ''k''-NN is used for classification or regression:<br />
<br />
*In ''k''-NN '''classification''', the output is a class membership (labels). Objects in the testing data are classified by a majority vote of their neighbors. Each object is assigned to a class that is most common among its ''k'' nearest neighbors (''k'' is always a small positive integer). When ''k''=1, then an object is assigned to the class of its single nearest neighbor.<br />
<br />
*In ''k''-NN '''regression''', the output is the property value for the object representing the average of the values of its ''k'' nearest neighbors.<br />
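The classification mode above can be sketched with the class package on the built-in iris data, using an arbitrary 100/50 train/test split and k = 3:

```r
library(class)
set.seed(1)
idx <- sample(nrow(iris), 100)                 # 100 random training rows
train.x <- iris[idx, 1:4]; train.y <- iris[idx, 5]
test.x  <- iris[-idx, 1:4]; test.y  <- iris[-idx, 5]
# each test point is labeled by majority vote of its 3 nearest training neighbors
pred <- knn(train.x, test.x, cl = train.y, k = 3)
acc <- mean(pred == test.y)                    # test-set accuracy
acc
```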
<br />
&#35;X = as.matrix(input) &nbsp;&nbsp;&nbsp; # Predictor variables X = as.matrix(input.short2)<br />
<br />
&#35;Y = as.matrix(output) &nbsp;&nbsp;&nbsp; # Outcome<br />
<br />
<br />
<u>'''&#35;KNN (k-nearest neighbors)'''</u><br />
<br />
library("class")<br />
<span style="background-color: #32cd32">&#35;knn.fit.test <- knn(X, X, cl = Y, k=3, prob=F); predict(as.matrix(knn.fit.test), X) $\$$class </span><br />
<span style="background-color: #32cd32">&#35;table(knn.fit.test, Y); confusionMatrix(Y, knn.fit.test, negative="1")</span><br />
<span style="background-color: #32cd32">&#35;This can be used for polytomous variable (multiple classes)</span><br />
<br />
predfun.knn = function(train.x, train.y, test.x, test.y, neg)<br />
{<br />
require("class")<br />
knn.fit = knn(train.x, test.x, cl = train.y, prob=T) <span style="background-color: #32cd32"># knn is already a prediction function!!!</span><br />
&#35;ynew = predict(knn.fit, test.x)$\$$class # no need of another prediction, in this case<br />
out.knn = confusionMatrix(test.y, knn.fit, negative=neg)<br />
return( out.knn )<br />
}<br />
<span style="background-color: #32cd32">cv.out.knn = '''crossval::crossval'''(predfun.knn, X, Y, K=5, B=2, neg="1")</span><br />
<br />
Compare all 4 classifiers (lda, qda, knn, and logit)<br />
<br />
diagnosticErrors(cv.out.lda$\$$stat); diagnosticErrors(cv.out.qda$\$$stat); diagnosticErrors(cv.out.knn$\$$stat); <br />
diagnosticErrors(cv.out.logit$\$$stat);<br />
<br />
<br />
[[Image:SMHS BigDataBigSci CrossVal5.png|500px]]<br />
<br />
<u>'''Now let’s look at the actual prediction models!'''</u><br />
<br />
There are different approaches to split the data (partition the data) into Training and Testing sets.<br />
<br />
&#35;TRAINING: 75% of the sample size<br />
<br />
sample_size <- floor(0.75 * nrow(input))<br />
&#35;&#35;set the seed to make your partition reproducible<br />
set.seed(1234)<br />
input.train.ind <- sample(seq_len(nrow(input)), size = sample_size)<br />
input.train <- input[input.train.ind, ]<br />
output.train <- as.matrix(output)[input.train.ind, ]<br />
<br />
&#35;TESTING DATA<br />
<br />
input.test <- input[-input.train.ind, ]<br />
output.test <- as.matrix(output)[-input.train.ind, ]<br />
<br />
==k-Means Clustering (k-MC)==<br />
<br />
k-MC aims to partition ''n'' observations into ''k'' clusters, where each observation belongs to the cluster with the nearest mean, which acts as a prototype of the cluster. k-MC partitions the data space into Voronoi cells. In general, there is no computationally tractable exact solution (the problem is NP-hard), but there are efficient algorithms that converge quickly to local optima (e.g., the expectation-maximization algorithm for mixtures of Gaussian distributions, which uses a similar iterative refinement approach<sup>2</sup>).<br />
<br />
<br />
kmeans_model <- kmeans(input.train, 2)<br />
layout(matrix(1,1))<br />
plot(input.train, col = kmeans_model$\$$cluster) <br />
points(kmeans_model$\$$centers, col = 1:2, pch = 8, cex = 2)<br />
<br />
&#35;&#35;cluster centers "fitted" to each obs.:<br />
fitted.kmeans <- fitted(kmeans_model); head(fitted.kmeans)<br />
resid.kmeans <- (input.train - fitted(kmeans_model))<br />
&#35;define the sum of squares function <br />
ss <- function(data) sum(scale(data, scale = FALSE)^2)<br />
<br />
&#35;&#35;Equalities <br />
cbind(kmeans_model[c("betweenss", "tot.withinss", "totss")], # the same two columns<br />
c (ss(fitted.kmeans), ss(resid.kmeans), ss(input.train)))<br />
<br />
&#35;validation<br />
stopifnot(all.equal(kmeans_model$\$$totss, ss(input.train)),<br />
all.equal(kmeans_model$\$$tot.withinss, ss(resid.kmeans)),<br />
&#35;&#35;these three are the same:<br />
all.equal(kmeans_model$\$$betweenss, ss(fitted.kmeans)),<br />
all.equal(kmeans_model$\$$betweenss, kmeans_model$\$$totss - kmeans_model$\$$tot.withinss),<br />
&#35;&#35;and hence also<br />
all.equal(ss(input.train), ss(fitted.kmeans) + ss(resid.kmeans))<br />
)<br />
kmeans(input.train,1)$\$$withinss &nbsp;&nbsp;&nbsp; # trivial one-cluster, (its W.SS == ss(input.train))<br />
<br />
<sup>2</sup>http://escholarship.org/uc/item/1rb70972<br />
<br />
<br />
'''(1)&#35;&#35; k-Nearest Neighbor Classification'''<br />
<br />
library("class")<br />
knn_model <- knn(train= input.train, input.test, cl=as.factor(output.train), k=2)<br />
plot(knn_model)<br />
summary(knn_model)<br />
attributes(knn_model)<br />
<br />
&#35;cross-validation<br />
knn_model.cv <- knn.cv(train= input.train, cl=as.factor(output.train), k=2)<br />
summary(knn_model.cv)<br />
<br />
==Appendix: R Debugging==<br />
<br />
Most programs that give incorrect results are impacted by logical errors. When errors (bugs, exceptions) occur, we need to explore deeper -- this process of identifying and fixing bugs is “debugging”.<br />
<br />
R tools for debugging: traceback(), debug(), browser(), trace(), recover()<br />
<br />
'''traceback():''' When an R function fails, an error message is immediately printed to the screen. Calling traceback() shows the function in which the error occurred; it prints the list of functions that were called before the error occurred,<br />
in reverse order.<br />
<br />
f1<-function(x) { r<- x-g1(x); r }<br />
<br />
g1<-function(y) { r<-y*h1(y); r }<br />
<br />
h1<-function(z) { r<-log(z); if(r<10) r^2 else r^3}<br />
<br />
f1(-1)<br />
<br />
<font color="red">Error in if (r < 10) r^2 else r^3 : missing value where TRUE/FALSE needed In addition: Warning message:<br />
In log(z) : NaNs produced</font><br />
<br />
traceback() <br />
3: h1(y)<br />
2: g1(x)<br />
1: f1(-1)<br />
<br />
debug()<br />
<br />
traceback() does not tell you where the error is. To find out which line causes the error, we may step through the function using debug().<br />
<br />
debug(foo) flags the function foo() for debugging. undebug(foo) unflags the function.<br />
<br />
When a function is flagged for debugging, each statement in the function is executed one at a time. After a statement is executed, the function suspends and user can interact with the R shell.<br />
<br />
This allows us to inspect a function line-by-line.<br />
<br />
'''Example''': compute sum of squared error SS<br />
<br />
&#35;&#35; compute sum of squares <br />
SS<-function(mu,x) { d<-x-mu; d2<-d^2; ss<-sum(d2); ss }<br />
set.seed(100); x<-rnorm(100); SS(1,x)<br />
<br />
&#35;&#35; to debug<br />
debug(SS); SS(1,x)<br />
debugging in: SS(1, x) debug: {<br />
d <- x - mu d2 <- d^2<br />
ss <- sum(d2) ss<br />
}<br />
<br />
In the debugging shell (“Browse[1]>”), users can:<br />
<br />
• Enter <u>'''n'''</u> (next) executes the current line and prints the next one; <br />
<br />
• Typing <u>'''c'''</u> (continue) executes the rest of the function without stopping; <br />
<br />
• Enter <u>'''Q'''</u> quits the debugging;<br />
<br />
• Enter <u>'''ls()'''</u> list all objects in the local environment;<br />
<br />
• Enter an object name or print(<object name>) tells the current value of an object.<br />
<br />
<br />
Example:<br />
<br />
debug(SS)<br />
SS(1,x)<br />
debugging in: SS(1, x) debug: {<br />
d <- x - mu d2 <- d^2<br />
...<br />
Browse[1]> n<br />
debug: d <- x - mu ## the next command <br />
Browse[1]> ls() ## current environment [1] "mu" "x" ## there is no d<br />
Browse[1]> n ## go one step debug: d2 <- d^2 ## the next command<br />
Browse[1]> ls() ## current environment [1] "d" "mu" "x" ## d has been created<br />
Browse[1]> d[1:3] ## first three elements of d [1] -1.5021924 -0.8684688 -1.0789171<br />
Browse[1]> hist(d) ## histogram of d<br />
Browse[1]> where ## current position in call stack where 1: SS(1, x)<br />
Browse[1]> n<br />
debug: ss <- sum(d2) <br />
Browse[1]> Q ## quit<br />
<br />
'''undebug(SS)''' ## remove debug label, stop debugging process<br />
SS(1,x) ## now call SS again will without debugging<br />
<br />
You can label a function for debugging while debugging another function<br />
<br />
f<-function(x) { r<-x-g(x); r }<br />
g<-function(y) { r<-y*h(y); r }<br />
h<-function(z) { r<-log(z); if(r<10) r^2 else r^3 }<br />
<br />
debug(f) # ## If you only debug f, you will not go into g<br />
f(-1)<br />
Browse[1]> n<br />
Browse[1]> n<br />
<i><font color="red">Error in if (r < 10) r^2 else r^3 : missing value where TRUE/FALSE needed In addition: Warning message:</i> <br />
<i>In log(z) : NaNs produced</font></i><br />
<br />
But, we can also label ''g'' and ''h'' for debugging when we debug ''f''<br />
<br />
f(-1)<br />
Browse[1]> n<br />
Browse[1]> debug(g) <br />
Browse[1]> debug(h) <br />
Browse[1]> n<br />
<br />
Inserting a call to '''browser()''' in a function will pause the execution of a function at the point where browser() is called.<br />
Similar to using debug() except you can control where execution gets paused.<br />
<br />
'''Example:'''<br />
h<-function(z) {<br />
browser() ## a break point inserted here <br />
r<-log(z); if(r<10) r^2 else r^3<br />
}<br />
<br />
f(-1)<br />
Browse[1]> ls() <br />
Browse[1]> z<br />
Browse[1]> n<br />
Browse[1]> n<br />
Browse[1]> ls()<br />
Browse[1]> c<br />
<br />
Calling '''trace()''' on a function allows inserting new code into a function. The syntax for trace() may be challenging.<br />
<br />
as.list(body(h)) <br />
trace("h",quote(if(is.nan(r)) {browser()}), at=3, print=FALSE)<br />
f(1)<br />
f(-1)<br />
<br />
trace("h",quote(if(z<0) {z<-1}), at=2, print=FALSE)<br />
f(-1)<br />
untrace("h")<br />
<br />
During the debugging process, '''recover()''' allows checking the status of variables in upper level functions. recover() can be used as an error handler using '''options()''' (e.g. options(error=recover)). When functions throw exceptions, execution stops at point of failure. Browsing the function calls and examining the environment may indicate the source of the problem.<br />
<br />
==See also==<br />
* [[SMHS_BigDataBigSci_CrossVal| Back to Big Data Science and Cross-Validation]]<br />
* [[SMHS_BigDataBigSci_SEM| Structural Equation Modeling (SEM)]]<br />
* [[SMHS_BigDataBigSci_GCM| Growth Curve Modeling (GCM)]]<br />
* [[SMHS_BigDataBigSci_GCM| Generalized Estimating Equation (GEE) Modeling]]<br />
* [[SMHS_BigDataBigSci|Back to Big Data Science]]<br />
<br />
<hr><br />
* SOCR Home page: http://www.socr.umich.edu<br />
<br />
{{translate|pageName=http://wiki.socr.umich.edu/index.php?title=SMHS_BigDataBigSci_CrossVal_LDA_QDA}}</div>Pineaumihttps://wiki.socr.umich.edu/index.php?title=SMHS_BigDataBigSci_CrossVal&diff=16247SMHS BigDataBigSci CrossVal2016-05-24T14:08:48Z<p>Pineaumi: /* See also */</p>
<hr />
<div>==[[SMHS_BigDataBigSci| Big Data Science]] - (Internal) Statistical Cross-Validation ==<br />
<br />
== Questions ==<br />
* What does it mean to validate a result, a method, approach, protocol, or data?<br />
* Can we do “pretend” validations that closely mimic reality?<br />
<br />
<center>[[Image:SMHS_BigDataBigSci_CrossVal1.png|250px]]</center><br />
<br />
''Validation'' is the scientific process of determining the degree of accuracy of a mathematical, analytic or computational model as a representation of the real world based on the intended model use. There are various challenges with using observed experimental data for model validation:<br />
<br />
1. Incomplete details of the experimental conditions may be subject to boundary and initial conditions, sample or material properties, geometry or topology of the system/process.<br />
<br />
2. Limited information about measurement errors due to lack of experimental uncertainty estimates.<br />
<br />
Empirically observed data may be used to evaluate models, with conventional statistical tests applied subsequently to test null hypotheses (e.g., that the model output is correct). In this process, the discrepancy between some model-predicted values and their corresponding observed counterparts is examined. For example, regression-model-predicted values may be compared to empirical observations. Under parametric assumptions of normal residuals and linearity, we could test null hypotheses like $slope = 1$ or $intercept = 0$. When comparing the model obtained on one training dataset to an independent dataset, the slope may be different from 1 and/or the intercept may be different from 0. The purpose of the regression comparison is a formal test of the hypothesis (e.g., $slope = 1$, $mean_{observed} =mean_{predicted}$); the distributional properties of the adjusted estimates are then critical in making an accurate inference. The logistic regression test is another example for comparing predicted and observed values. Measurement errors may creep in due to sampling or analytical biases, instrument reading or recording errors, temporal or spatial sampling discrepancies, etc.<br />
<br />
==Overview==<br />
<br />
====Cross-validation====<br />
<br />
Cross-validation is a method for validating models by assessing the reliability and stability of the results of a statistical analysis (e.g., model predictions) based on independent datasets. For prediction of trend, association, clustering, etc., a model is usually trained on one dataset (the training data) and tested on new, unknown data (the testing dataset). The cross-validation method defines a test dataset to evaluate the model, avoiding overfitting (the process by which a computational model describes random error, or noise, instead of the underlying relationships in the data).<br />
<br />
====Overfitting====<br />
<br />
'''Example (US Presidential Elections):''' By 2014, there had been only '''56 presidential elections and 43 presidents'''. That is a small dataset, and learning from it may be challenging. <u>'''If the predictor space expands to include things like having false teeth, it's pretty easy for the model to go from fitting the generalizable features of the data (the signal) to matching the noise.'''</u> When this happens, the quality of the fit on the historical data may improve (e.g., better R<sup>2</sup>), but the model may fail miserably when used to make inferences about future presidential elections.<br />
<br />
(Figure from http://xkcd.com/1122/)<br />
<br />
<center>[[Image:SMHS BigDataBigSci_CrossVal2.png|400px]]</center><br />
<br />
'''Example (Google Flu Trends):''' A March 14, 2014 article in Science (''DOI: 10.1126/science.1248506''), identified problems in Google Flu Trends (http://www.google.org/flutrends/about/#US), DOI 10.1371/journal.pone.0023610, which may be attributed in part to overfitting. In February 2013, Nature reported that GFT was predicting more than double the proportion of doctor visits for influenza-like illness (ILI) than the Centers for Disease Control and Prevention (CDC), despite the fact that GFT was built to predict CDC reports.<br />
<br />
GFT model found the best matches among 50 million search terms to fit 1,152 data points. The odds of finding search terms that match the propensity of the flu but are structurally unrelated, and so do not predict the future, were quite high. GFT developers, in fact, report weeding out seasonal search terms unrelated to the flu but strongly correlated to the CDC data, e.g., high school basketball season. The big GFT data may have overfitted the small number of cases. The GFT approach missed the non-seasonal 2009 influenza A–H1N1 pandemic.<br />
<br />
'''Example (Autism).''' Autistic brains constantly overfit visual and cognitive stimuli. To an autistic person, a general conversation of several adults may seem like a cacophony due to super-sensitive detail-oriented hearing and perception tuned to literally pick up all elements of the conversation and the environment but downplay body language, sarcasm and non-literal cues. We can miss the forest for the trees when we start "overfitting," over-interpreting the noise on top of the actual signal. Ambient noise, trivial observations and unrelated perceptions may hide the true communication details.<br />
<br />
During each communication (conversation) there are exchanges of both information and random noise. Fitting a perfect model is only listening to the “relevant” information. Over-fitting is when your attention is (excessively) consumed with the noise, or worse, letting the noise drown out the information exchange.<br />
<br />
Any dataset is a mix of signal and noise. The main task of our brains are to sort these components and interpret the information (i.e., ignore the noise).<br />
<br />
Our predictions are most accurate if we can model as much of the signal as possible and as little of the noise as possible. Note that in these terms, R<sup>2</sup> is a poor metric to identify predictive power - it measures how much of the signal <u>'''and'''</u> the noise is explained by our model. In practice, it's hard to always identify what's signal and what's noise. This is why practical applications tends to favor simpler models, since the more complicated a model is the easier it is to overfit the noise component in the information.<br />
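The signal-versus-noise point above can be simulated: in this hedged sketch, a high-degree polynomial always achieves a higher in-sample $R^2$ than a low-degree one, but its out-of-sample RMSE can be worse (the degrees 3 and 15, the seed, and the sine-plus-noise signal are all arbitrary choices):

```r
set.seed(2016)
x <- runif(30); y <- sin(2 * pi * x) + rnorm(30, sd = 0.3)               # training data
x.new <- runif(200); y.new <- sin(2 * pi * x.new) + rnorm(200, sd = 0.3) # new (testing) data
r2 <- rmse <- numeric(2)
for (i in 1:2) {
  deg <- c(3, 15)[i]
  fit <- lm(y ~ poly(x, deg))
  r2[i] <- summary(fit)[["r.squared"]]                       # in-sample fit
  rmse[i] <- sqrt(mean((y.new - predict(fit, data.frame(x = x.new)))^2))
}
r2    # in-sample R^2 always improves with degree ...
rmse  # ... but the out-of-sample RMSE need not
```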
<br />
'''Cross-validation is an iterative process''', where each step involves:<br />
<br />
*Randomly partitioning a sample of data into 2 complementary subsets (training + testing), <br />
<br />
*Performing the analysis on the training subset <br />
<br />
*Validating the analysis on the testing subset <br />
<br />
*Increase the iteration index and repeat the process (termination criteria can involve a fixed number of iterations, or a desired mean variability or error-rate).<br />
<br />
<center>[[Image:SMHS BigDataBigSci_CrossVal3.png|400px]]</center><br />
<br />
The validation results at each iteration are averaged, to reduce noise/variability, and reported.<br />
<br />
Cross-validation guards against testing hypotheses suggested by the data themselves (aka: "Type III errors", False-Suggestion) in cases when new observations are hard to obtain (due to costs, reliability, time or other constraints). <br />
<br />
Cross-validation is different from ''conventional validation'' (e.g., 80%-20% partitioning of the data into training and testing subsets): in conventional validation, the error (e.g., Root Mean Square Error, RMSE) on the training data is not a useful estimator of model performance, as it does not generalize across multiple samples. Errors of conventional validation based on the test data alone do not assess model performance in general, either. A fairer way to estimate model prediction performance is cross-validation, which combines (averages) prediction errors or measures of fit to correct for the stochastic nature of the training and testing data partitions and generates a more accurate and robust estimate of real model performance.<br />
<br />
A more complex model ''overfits the data'', relative to a simpler model, when it generates accurate fitting results for known data but less accurate results when predicting from new data (foresight). Knowledge from past experience includes information that is either ''relevant or irrelevant'' (noise) for the future. In challenging data-driven prediction models, when uncertainty (entropy) is high, more noise is present in past information and needs to be ignored in future forecasting. However, it is generally hard to discriminate patterns from noise in complex systems (i.e., to decide which part to model and which to ignore). Models that reduce the chance of fitting noise are called '''robust'''.<br />
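The iterative procedure above can be sketched in a few lines of base R: a 5-fold cross-validation of a linear model on the built-in mtcars data, averaging the per-fold RMSE (the model formula, fold count, and seed are arbitrary choices):

```r
set.seed(11)
k <- 5
folds <- sample(rep(1:k, length.out = nrow(mtcars)))  # random partition into k folds
rmse <- numeric(k)
for (i in 1:k) {
  train <- mtcars[folds != i, ]          # training subset
  test  <- mtcars[folds == i, ]          # complementary testing subset
  fit <- lm(mpg ~ wt + hp, data = train) # analysis on the training subset
  pred <- predict(fit, newdata = test)   # validation on the testing subset
  rmse[i] <- sqrt(mean((test[["mpg"]] - pred)^2))
}
mean(rmse)   # averaged cross-validated error estimate
```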
<br />
====Example (Linear Regression)====<br />
<br />
We can demonstrate model assessment using linear regression. Suppose we observe response values $\{y_1,...,y_n\}$, and the corresponding $k$ predictors represented as a $kD$ vector of covariates $\{x_1,...,x_n\}$, where subjects/cases are indexed by $1 ≤ i ≤ n$, and the data-elements (variables) are indexed by $1 ≤ j ≤ k$.<br />
<br />
\begin{pmatrix}<br />
x_{1,1} & \cdots & x_{1,k} \\<br />
\vdots & \ddots & \vdots \\<br />
x_{n,1} & \cdots & x_{n,k} <br />
\end{pmatrix}<br />
<br />
Using least squares to estimate the linear function parameters (effect-sizes), $\{β_1,...,β_k\}$, allows us to compute a hyperplane $y = a + xβ$ that best fits the observed data $\{x_i,y_i\}_{1≤i≤n}$. <br />
<br />
\begin{equation}<br />
\begin{pmatrix}<br />
y_{1} \\<br />
\vdots \\<br />
y_{n} <br />
\end{pmatrix}<br />
<br />
= \begin{pmatrix}<br />
α_{1} \\<br />
\vdots \\<br />
α_{n} <br />
\end{pmatrix}<br />
<br />
+\begin{pmatrix}<br />
x_{1,1} & \cdots & x_{1,k} \\<br />
\vdots & \ddots & \vdots \\<br />
x_{n,1} & \cdots & x_{n,k} <br />
\end{pmatrix}<br />
<br />
\begin{pmatrix}<br />
β_{1} \\<br />
\vdots \\<br />
β_{k} <br />
\end{pmatrix}<br />
\end{equation}<br />
<br />
<br />
$$<br />
\begin{array}{lcl}<br />
y_1=\alpha_1+x_{1,1}\beta_1+x_{1,2}\beta_2+...+x_{1,k}\beta_k\\<br />
y_2=\alpha_2+x_{2,1}\beta_1+x_{2,2}\beta_2+...+x_{2,k}\beta_k \\<br />
...\\<br />
y_n=\alpha_n+x_{n,1}\beta_1+x_{n,2}\beta_2+...+x_{n,k}\beta_k<br />
\end{array}$$<br />
<br />
The model fit may be evaluated using the mean squared error (MSE). The MSE for given values of the parameters ''α'' and ''β'' is computed on the observed training data $\{x_i,y_i\}_{1 ≤ i ≤ n}$ as:<br />
<br />
$$<br />
\begin{equation} MSE=\frac{1}{n}\sum_{i=1}^{n} \Bigg(y_i-\underbrace{(\alpha_1+x_{i,1}\beta_1+x_{i,2}\beta_2+\cdots+x_{i,k}\beta_k)}_{(\text{predicted value, } \hat{y_i} \text{, at }\{x_{i,1,}\cdots,x_{i,k}\})} \Bigg)^2<br />
\end{equation} <br />
$$<br />
<center>vs.</center><br />
$$<br />
\begin{equation} RMSE=\sqrt{\frac{1}{n}\sum_{i=1}^{n} \Bigg(y_i-\underbrace{(\alpha_1+x_{i,1}\beta_1+x_{i,2}\beta_2+\cdots+x_{i,k}\beta_k)}_{(\text{predicted value, } \hat{y_i} \text{, at }\{x_{i,1,}\cdots,x_{i,k}\})} \Bigg)^2 }.<br />
\end{equation} <br />
$$<br />
<br />
The expected value of the MSE (over the distribution of training sets) for the <u>'''training set'''</u> is $\frac{(n-k-1)}{(n + k + 1)} × E,$ where $E$ is the expected value of the MSE for the <u>'''testing'''</u>/<u>'''validation'''</u> data. Therefore, fitting a model and computing the MSE on the training set yields an overly optimistic assessment (smaller RMSE) of how well the model may fit another dataset. This biased quantity is an ''in-sample'' estimate of the fit, whereas we are interested in the cross-validation estimate as an ''out-of-sample'' estimate.<br />
<br />
In the linear regression model, cross-validation is not strictly necessary, as we can compute the <u>'''exact'''</u> correction factor $\frac{(n - k - 1)}{(n + k + 1)}$ and correctly estimate the ''out-of-sample'' fit from the (underestimating) ''in-sample'' MSE estimate. However, even in this situation cross-validation remains useful, as it can be used to select an optimally regularized cost function.<br />
<br />
In most other modeling procedures (e.g., logistic regression), <u>'''in general'''</u>, there are no simple closed-form expressions (formulas) to adjust the in-sample fit estimate into an out-of-sample error estimate. Cross-validation is a generally applicable way to predict the performance of a model on a validation set using stochastic computation instead of obtaining experimental, theoretical, mathematical, or analytic error estimates.<br />
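As an illustrative sketch (not part of the original study data; all variable names here are hypothetical), we can simulate a simple linear model in R and contrast the optimistic in-sample (training) MSE with the out-of-sample (testing) MSE:<br />
<br />
set.seed(1234)<br />
n <- 200; k <- 10<br />
X <- matrix(rnorm(n*k), n, k) &#35; simulated predictors<br />
beta <- runif(k, -1, 1) &#35; simulated effect-sizes<br />
y <- 1 + X %*% beta + rnorm(n)<br />
D <- data.frame(y=y, X)<br />
train <- sample(1:n, n/2) &#35; random half for training<br />
fit <- lm(y ~ ., data=D[train, ])<br />
mse.in <- mean(residuals(fit)^2) &#35; in-sample MSE (optimistic)<br />
mse.out <- mean((D$\$$y[-train] - predict(fit, D[-train, ]))^2) &#35; out-of-sample MSE (larger, on average)<br />
c(mse.in, mse.out)<br />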
<br />
====Cross-validation methods====<br />
<br />
There are 2 classes of cross-validation approaches – exhaustive and non-exhaustive.<br />
<br />
'''Exhaustive cross-validation'''<br />
<br />
Exhaustive cross-validation methods are based on determining all possible ways to divide the original sample into training and testing data. For example, ''leave-m-out cross-validation'' uses $m$ observations for testing and the remaining ($n-m$) observations for training (when $m=1$, this is the leave-one-out method). The process is repeated over all partitions of the original sample, which requires fitting and validating the model $C_m^n$ times ($n$ is the total number of observations in the original sample and $m$ is the number left out for validation). This requires a very large number of steps<sup>1</sup>.<br />
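The number of required model fits grows combinatorially with $n$ and $m$; a quick (illustrative) check in R:<br />
<br />
choose(100, 5) &#35; 75,287,520 model fits for leave-5-out with n=100<br />
choose(20, 1) &#35; only 20 fits for leave-1-out with n=20<br />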
<br />
'''Non-exhaustive cross-validation'''<br />
<br />
Non-exhaustive cross-validation methods avoid computing estimates/errors for all possible partitions of the original sample, and instead approximate them. For example, in <u>'''''k''-fold cross-validation'''</u>, the original sample is randomly partitioned into $k$ equal-sized subsamples. Of the $k$ subsamples, a single subsample is retained as the testing data for validating the model, and the other $k-1$ subsamples are used as training data. The cross-validation process is then repeated $k$ times ($k$ folds), with each of the $k$ subsamples used exactly once as the validation data. The corresponding $k$ results are averaged (or otherwise aggregated) to generate a final model-quality estimate. In $k$-fold validation, all observations are used for both training and validation, and each observation is used for validation exactly once. In general, $k$ is a parameter selected by the investigator (common values are 5 and 10).<br />
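The k-fold protocol can be sketched directly in R. This is only an illustration (the response y and predictor x are hypothetical); the crossval package used in the case-studies below automates these steps:<br />
<br />
set.seed(1234)<br />
n <- 100; x <- rnorm(n); y <- 2 + 3*x + rnorm(n) &#35; hypothetical data<br />
k <- 5<br />
folds <- sample(rep(1:k, length.out=n)) &#35; random, (near) equal-sized fold labels<br />
cv.mse <- sapply(1:k, function(f) {<br />
 fit <- lm(y ~ x, data=data.frame(x=x, y=y)[folds != f, ]) &#35; train on the other k-1 folds<br />
 mean((y[folds == f] - predict(fit, data.frame(x=x[folds == f])))^2) &#35; test on fold f<br />
})<br />
mean(cv.mse) &#35; aggregated k-fold CV estimate of the MSE<br />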
<br />
A general case of the $k$-fold validation is $k=n$ (the total number of observations), when it coincides with the '''leave-one-out cross-validation'''.<br />
<br />
<br />
A variation of the $k$-fold validation is <u>'''stratified k-fold cross-validation'''</u>, where each fold has the same (approximately) mean response value. For instance, if the model represents a binary classification of cases (e.g., NC vs. PD), this implies that each fold contains roughly the same proportion of the 2 class labels.<br />
<br />
'''Repeated random sub-sampling validation''' randomly splits the entire dataset into training data (where the model is fit) and testing data (where the predictive accuracy is assessed). Again, the results are averaged over all iterative splits. This method has an advantage over $k$-fold cross-validation in that the proportion of the training/testing split does not depend on the number of iterations (folds). However, its drawback is that some observations may never be selected, whereas others may be selected multiple times in the testing/validation subsample, as validation subsets may overlap; also, the results will vary each time we repeat the validation protocol (unless we set a seed point in the algorithm).<br />
<br />
Asymptotically, as the number of random splits increases, ''repeated random sub-sampling validation'' approaches ''leave-m-out cross-validation'' (with $m$ the size of the testing subsample).<br />
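A minimal sketch of repeated random sub-sampling in R (again with hypothetical x and y, and a fixed seed for reproducibility):<br />
<br />
set.seed(1234)<br />
n <- 100; x <- rnorm(n); y <- 2 + 3*x + rnorm(n) &#35; hypothetical data<br />
B <- 50 &#35; number of random splits<br />
rss.mse <- replicate(B, {<br />
 train <- sample(1:n, 0.8*n) &#35; 80/20 training/testing split<br />
 fit <- lm(y ~ x, data=data.frame(x=x, y=y)[train, ])<br />
 mean((y[-train] - predict(fit, data.frame(x=x[-train])))^2)<br />
})<br />
mean(rss.mse) &#35; averaged over all random splits<br />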
<br />
====Case-Studies====<br />
<br />
'''Example 1: Parkinson’s Diseases Study''' involving neuroimaging, genetics, clinical, and phenotypic data for over 600 volunteers produced multivariate data for 3 cohorts (HC=Healthy Controls, PD=Parkinson’s, SWEDD= subjects without evidence for dopaminergic deficit).<br />
<br />
&#35; update packages<br />
&#35; update.packages()<br />
<br />
&#35;load the data: 06_PPMI_ClassificationValidationData_Short.csv<br />
ppmi_data <-read.csv("https://umich.instructure.com/files/330400/download?download_frd=1",header=TRUE)<br />
<br />
&#35;binarize the Dx classes<br />
ppmi_data$\$$ResearchGroup <- ifelse(ppmi_data$\$$ResearchGroup == "Control", "Control", "Patient")<br />
attach(ppmi_data)<br />
<br />
head(ppmi_data)<br />
<br />
&#35; Model-free analysis, classification<br />
&#35; install.packages("crossval")<br />
&#35;library("crossval")<br />
require(crossval)<br />
require(ada)<br />
&#35;set up adaboosting prediction function<br />
<br />
<br />
&#35;Define a new classification result-reporting function<br />
my.ada <- function (train.x, train.y, test.x, test.y, negative, formula){<br />
ada.fit <- ada(train.x, train.y)<br />
predict.y <- predict(ada.fit, test.x)<br />
&#35;count TP, FP, TN, FN, Accuracy, etc.<br />
out <- confusionMatrix(test.y, predict.y, negative = negative)<br />
&#35;negative is the label of a negative "null" sample (default: "control").<br />
return (out)<br />
}<br />
<br />
&#35;balance cases<br />
&#35;SMOTE: Synthetic Minority Oversampling Technique to handle class imbalance in binary classification.<br />
set.seed(1000)<br />
&#35;install.packages("unbalanced") to deal with unbalanced group data<br />
require(unbalanced)<br />
ppmi_data$\$$PD <- ifelse(ppmi_data$\$$ResearchGroup=="Control",1,0) <br />
uniqueID <- unique(ppmi_data$\$$FID_IID) <br />
ppmi_data <- ppmi_data[ppmi_data$\$$VisitID==1,]<br />
'''ppmi_data$\$$PD <- factor(ppmi_data$\$$PD)'''<br />
<br />
colnames(ppmi_data)<br />
&#35;ppmi_data.1<-ppmi_data[,c(3:281,284,287,336:340,341)]<br />
n <- ncol(ppmi_data)<br />
output.1 <- ppmi_data$\$$PD<br />
<br />
&#35; remove Default Real Clinical subject classifications! <br />
ppmi_data$\$$PD <- ifelse(ppmi_data$\$$ResearchGroup=="Control",1,0) <br />
input <- ppmi_data[ ,-which(names(ppmi_data) %in% c("ResearchGroup","PD", "X", "FID_IID"))]<br />
&#35; output <- as.matrix(ppmi_data[ ,which(names(ppmi_data) %in% {"PD"})])<br />
output <- as.factor(ppmi_data$\$$PD)<br />
dim(input); length(output) &#35; output is a factor, so it has a length rather than dimensions<br />
<br />
&#35; balance the dataset<br />
data.1<-ubBalance(X= input, Y=output, type="ubSMOTE", percOver=300, percUnder=150, verbose=TRUE)<br />
balancedData<-cbind(data.1$\$$X, data.1$\$$Y)<br />
nrow(data.1$\$$X); ncol(data.1$\$$X)<br />
nrow(balancedData); ncol(balancedData)<br />
nrow(input); ncol(input)<br />
<br />
colnames(balancedData) <- c(colnames(input), "PD")<br />
<br />
&#35;&#35;&#35;Check balance<br />
&#35;&#35;T test<br />
alpha.0.05 <- 0.05<br />
test.results.bin <- NULL # binarized/dichotomized p-values<br />
test.results.raw <- NULL # raw p-values<br />
<br />
&#35; get a better error-handling t.test function that gracefully handles NA’s and trivial variances<br />
my.t.test.p.value <- function(input1, input2) {<br />
obj <- try(t.test(input1, input2), silent=TRUE) <br />
if (is(obj, "try-error")) return(NA) else return('''obj$\$$p.value''')<br />
} <br />
<br />
<br />
for (i in 1:ncol(balancedData)) <br />
{ <br />
test.results.raw[i] <- my.t.test.p.value(input[,i], balancedData [,i])<br />
test.results.bin[i] <- ifelse(test.results.raw[i] > alpha.0.05, 1, 0) <br />
&#35; binarize the p-value (0=significant, 1=otherwise) <br />
print(c("i=", i, "var=", colnames(balancedData[i]), "t-test_raw_p_value=", test.results.raw[i]))<br />
}<br />
<br />
&#35;we can also employ (e.g., FDR, Bonferroni) <u>'''correction for multiple testing'''</u>!<br />
&#35;test.results.corr <- '''stats::p.adjust'''(test.results.raw, method = "fdr", n = length(test.results.raw)) <br />
&#35;where methods are "holm", "hochberg", "hommel", "bonferroni", "BH", "BY", "fdr", "none")<br />
&#35;plot(test.results.raw, test.results.corr)<br />
&#35;sum(test.results.raw < alpha.0.05, na.rm=T)/length(test.results.raw) #check proportion of inconsistencies<br />
&#35;sum(test.results.corr < alpha.0.05, na.rm =T)/length(test.results.corr)<br />
<br />
<br />
&#35;as the sample-size is changed:<br />
length(input[,5]); length(balancedData [,5])<br />
&#35;to plot raw vs. rebalanced pairs (e.g., var="L_insular_cortex_Volume"), we need to equalize the lengths<br />
plot (input[,5] +0*balancedData [,5], balancedData [,5]) # [,5] == "L_insular_cortex_Volume"<br />
<br />
print(c("T-test results: ", test.results.bin))<br />
&#35;zeros (0) are significant independent between-group T-test differences, ones (1) are insignificant<br />
<br />
for (i in 1:(ncol(balancedData)-1)) <br />
{<br />
test.results.raw [i] <- wilcox.test(input[,i], balancedData [,i])$\$$p.value<br />
test.results.bin [i] <- ifelse(test.results.raw [i] > alpha.0.05, 1, 0)<br />
print(c("i=", i, "Wilcoxon-test=", test.results.raw [i]))<br />
}<br />
print(c("Wilcoxon test results: ", test.results.bin))<br />
&#35;test.results.corr <- '''stats::p.adjust'''(test.results.raw, method = "fdr", n = length(test.results.raw)) <br />
&#35;where methods are "holm", "hochberg", "hommel", "bonferroni", "BH", "BY", "fdr", "none")<br />
&#35;plot(test.results.raw, test.results.corr)<br />
<br />
<u>'''#cross validation'''</u><br />
&#35;using '''raw data''':<br />
X <- as.data.frame(input); Y <- output<br />
neg <- "1" # "Control" == "1"<br />
<br />
&#35;using '''Rebalanced data''': <br />
X <- as.data.frame(data.1$\$$X); Y <- data.1$\$$Y<br />
&#35;balancedData<-cbind(data.1$\$$X, data.1$\$$Y); dim(balancedData)<br />
<br />
&#35;'''Side note''': There is a function name collision for “crossval”, the same method is present in <br />
<br />
&#35;the “'''mlr'''” (Machine Learning in R) package and in the “'''crossval'''” package. <br />
<br />
&#35;To specify a function call from a specific package do: packagename::functionname()<br />
<br />
set.seed(115) <br />
&#35;cv.out <- crossval::crossval(my.ada, X, Y, K = 5, B = 1, negative = neg)<br />
cv.out <- '''crossval::crossval'''(my.ada, X, Y, K = 5, B = 1, negative = neg)<br />
&#35;the label of a negative "null" sample (default: "control")<br />
out <- diagnosticErrors(cv.out$\$$stat) <br />
<br />
print(cv.out$\$$stat)<br />
print(out)<br />
{| class="wikitable" style="text-align:center; " border="1"<br />
|-<br />
|FP||TP||TN||FN<br />
|-<br />
|1.0||59.8||23.4||0.2<br />
|}<br />
{| class="wikitable" style="text-align:center; " border="1"<br />
|-<br />
|acc||sens||spec||ppv||npv||lor <br />
|-<br />
|0.9857820||0.9966667||0.9590164||0.9835526||0.9915254||8.8531796 <br />
|}<br />
<br />
&#35;Define a new LDA = Linear discriminant analysis predicting function<br />
<br />
require("MASS") # for lda function<br />
<br />
'''predfun.lda = function(train.x, train.y, test.x, test.y, negative)'''<br />
{ lda.fit = lda(train.x, grouping=train.y)<br />
ynew = predict(lda.fit, test.x)$\$$class<br />
&#35;count TP, FP etc.<br />
out = confusionMatrix(test.y, ynew, negative=negative)<br />
return( out )<br />
}<br />
<br />
'''(1) a simple example using the sleep dataset''' (containing the effect of two soporific drugs, measured as the increase in hours of sleep compared to control, on 10 patients)<br />
<br />
data(sleep)<br />
X = as.matrix(sleep[,1, drop=FALSE]) # increase in hours of sleep,<br />
&#35; drop is logical, if TRUE the result is coerced to the lowest possible dimension.<br />
&#35;The default is to drop if only one column is left, but not to drop if only one row is left.<br />
Y = sleep[,2] # drug given <br />
plot(X ~ Y)<br />
levels(Y) # "1" "2"<br />
dim(X) # 20 1<br />
<br />
set.seed(123456)<br />
cv.out <- '''crossval::crossval'''(predfun.lda, X, Y, K=5, B=20, negative="1")<br />
cv.out$\$$stat<br />
diagnosticErrors(cv.out$\$$stat)<br />
<br />
'''(2) A model-based example (linear regression) using the attitude dataset:''' <br />
<br />
'''&#35;?attitude, colnames(attitude)'''<br />
<br />
&#35;"rating" "complaints" "privileges" "learning" "raises" "critical" "advance"<br />
<br />
&#35;aggregated survey of clerical employees of an organization, representing 35 employees of 30<br />
<br />
&#35;(randomly selected) departments. Data=percent proportion of favorable responses to 7 questions in each department.<br />
<br />
<br />
&#35;Note: when using a data frame, a time-saver is to use “.” to indicate “include all covariates" in the DF. <br />
<br />
&#35;E.g., fit <- lm(Y ~ ., data = D)<br />
<br />
<br />
data("attitude")<br />
y = attitude[,1] # rating variable<br />
x = attitude[,-1] # data frame with the remaining variables<br />
is.factor(y)<br />
summary( lm(y ~ . , data=x) ) # R-squared: 0.7326<br />
&#35;set up lm prediction function<br />
<br />
<br />
'''predfun.lm = function(train.x, train.y, test.x, test.y)'''<br />
{ lm.fit = lm(train.y ~ . , data=train.x)<br />
ynew = predict(lm.fit, test.x )<br />
&#35;compute squared error risk (MSE)<br />
out = mean( (ynew - test.y)^2)<br />
<span style="background-color: #32cd32">&#35;note that, in general, when fitting linear model to continuous outcome variable (Y),</span><br />
<span style="background-color: #32cd32">&#35;we can’t use the '''out<-confusionMatrix(test.y, ynew, negative=negative)''', as it requires a binary outcome</span><br />
<span style="background-color: #32cd32">&#35;this is why we use the MSE as an estimate of the discrepancy between observed & predicted values</span> <br />
return(out)<br />
}<br />
<br />
&#35;require("MASS")<br />
<br />
'''predfun.lda = function(train.x, train.y, test.x, test.y, negative)'''<br />
{ lda.fit = lda(train.x, grouping=train.y)<br />
ynew = predict(lda.fit, test.x)$\$$class<br />
&#35;count TP, FP etc.<br />
out = confusionMatrix(test.y, ynew, negative=negative)<br />
return( out )<br />
}<br />
<br />
&#35;prediction MSE using all variables<br />
set.seed(123456)<br />
cv.out.lm = '''crossval::crossval'''(predfun.lm, x, y, K=5, B=20, negative="1")<br />
c(cv.out.lm$\$$stat, cv.out.lm$\$$stat.se) # 72.581198 3.736784<br />
&#35;reducing to using only two variables<br />
cv.out.lm = '''crossval::crossval'''(predfun.lm, x[,c(1,3)], y, K=5, B=20, negative="1")<br />
c(cv.out.lm$\$$stat, cv.out.lm$\$$stat.se) # 52.563957 2.015109<br />
<br />
<br />
'''(3) a real example using the ppmi_data'''<br />
<br />
&#35;ppmi_data <-read.csv("https://umich.instructure.com/files/330400/download?download_frd=1",header=TRUE)<br />
&#35;ppmi_data$\$$ResearchGroup <- ifelse(ppmi_data$\$$ResearchGroup == "Control", "Control", "Patient")<br />
&#35;attach(ppmi_data); head(ppmi_data)<br />
&#35;install.packages("crossval")<br />
&#35;library("crossval")<br />
&#35;ppmi_data$\$$PD <- ifelse(ppmi_data$\$$ResearchGroup=="Control",1,0)<br />
&#35;input <- ppmi_data[ ,-which(names(ppmi_data) %in% c("ResearchGroup","PD", "X", "FID_IID"))]<br />
&#35;output <- as.factor(ppmi_data$\$$PD)<br />
<br />
&#35;remove the irrelevant variables (e.g., visit ID)<br />
output <- as.factor(ppmi_data$\$$PD)<br />
input <- ppmi_data[, -which(names(ppmi_data) %in% c("ResearchGroup","PD", "X", "FID_IID", "VisitID"))]<br />
X = as.matrix(input) # Predictor variables<br />
Y = as.matrix(output) # Actual PD clinical assessment <br />
dim(X); dim(Y)<br />
<br />
layout(matrix(c(1,2,3,4),2,2)) # optional 4 graphs/page<br />
fit <- lm(Y~X); plot(fit) # plot the fit <br />
levels(as.factor(Y)) # "0" "1"<br />
c(dim(X), dim(Y)) # 1043 103<br />
<br />
set.seed(12345)<br />
&#35;cv.out.lm = '''crossval::crossval'''(predfun.lm, as.data.frame(X), as.numeric(Y), K=5, B=20)<br />
<br />
cv.out.lda = crossval::crossval(predfun.lda, X, Y, K=5, B=20, negative="1")<br />
&#35;K=Number of folds; <u>'''B=Number of repetitions.'''</u><br />
<br />
&#35;Results<br />
<br />
cv.out.lda$\$$stat; cv.out.lda; diagnosticErrors(cv.out.lda$\$$stat)<br />
cv.out.lm$\$$stat; cv.out.lm; diagnosticErrors(cv.out.lm$\$$stat)<br />
<br />
<br />
'''The cross-validation (CV) output object includes the following components:'''<br />
<br />
*stat.cv: ''Vector'' of statistics returned by predfun for each cross-validation run<br />
<br />
*stat: ''Mean'' of the statistic returned by predfun, averaged over all cross-validation runs<br />
<br />
*stat.se: ''Variability'': the corresponding standard error.<br />
<br />
<br />
{| class="wikitable" style="text-align:center; " border="1"<br />
|-<br />
|FP||TP||TN||FN <br />
|-<br />
|0.06||96.94||33.14||2.06 <br />
|}<br />
<br />
{| class="wikitable" style="text-align:center; " border="1"<br />
|-<br />
|acc||<b>sens</b>||<b>spec</b>||ppv||npv||lor <br />
|-<br />
|0.9839637||<b>0.9791919</b>||<b>0.9981928</b>||0.9993814||0.9414773||10.1655380<br />
|}<br />
<br />
[[Image:SMHS_BigDataBigSci_CrossVal4.png|500px]]<br />
<br />
====Alternative predictor functions====<br />
<br />
<b>Logistic Regression</b><br />
<br />
(See the earlier batch of class notes, https://umich.instructure.com/files/421847/download?download_frd=1)<br />
<br />
&#35;ppmi_data <-<br />
read.csv("https://umich.instructure.com/files/330400/download?download_frd=1",header=TRUE)<br />
&#35; ppmi_data$\$$ResearchGroup <- ifelse(ppmi_data$\$$ResearchGroup == "Control", "Control", "Patient")<br />
&#35;install.packages("crossval"); library("crossval")<br />
&#35;ppmi_data$\$$PD <- ifelse(ppmi_data$\$$ResearchGroup=="Control",1,0)<br />
<br />
<br />
&#35;remove the irrelevant variables (e.g., visit ID)<br />
<br />
output <- as.factor(ppmi_data$\$$PD)<br />
input <- ppmi_data[, -which(names(ppmi_data) %in% c("ResearchGroup","PD", "X", "FID_IID", "VisitID"))]<br />
X = as.matrix(input) # Predictor variables<br />
Y = as.matrix(output)<br />
<br />
<br />
<span style="background-color: #32cd32">'''Note that the predicted values are on the log-odds (logit) scale, so we need to transform them back to probabilities (inverse-logit) to interpret and threshold them correctly'''</span><br />
<br />
lm.logit <- glm(as.numeric(Y) ~ ., data = as.data.frame(X), family = "binomial")<br />
ynew <- predict(lm.logit, as.data.frame(X)); plot(ynew)<br />
ynew2 <- ifelse(plogis(ynew)<0.5, 0, 1); plot(ynew2) &#35; plogis() is the inverse-logit<br />
<br />
'''predfun.logit = function(train.x, train.y, test.x, test.y, neg)'''<br />
{ lm.logit <- glm(train.y ~ ., data = train.x, family = "binomial")<br />
ynew = predict(lm.logit, test.x )<br />
&#35;compute TP, FP, TN, FN<br />
<span style="background-color: #32cd32">ynew2</span> <- ifelse(plogis(ynew)<0.5, 0, 1) &#35; inverse-logit, then threshold at 0.5<br />
out = confusionMatrix(test.y, ynew2, negative=neg) # Binary outcome, we can use confusionMatrix<br />
return( out )<br />
}<br />
<br />
&#35;Reduce the bag of explanatory variables, purely to simplify the interpretation of the analytics in this example!<br />
<br />
input.short <- input[, which(names(input) %in% c("R_fusiform_gyrus_Volume",<br />
"R_fusiform_gyrus_ShapeIndex", "R_fusiform_gyrus_Curvedness",<br />
"Sex", "Weight", "Age" , "chr12_rs34637584_GT", "chr17_rs11868035_GT", <br />
"UPDRS_Part_I_Summary_Score_Baseline", "UPDRS_Part_I_Summary_Score_Month_03", <br />
"UPDRS_Part_II_Patient_Questionnaire_Summary_Score_Baseline", <br />
"UPDRS_Part_III_Summary_Score_Baseline",<br />
"X_Assessment_Non.Motor_Epworth_Sleepiness_Scale_Summary_Score_Baseline" <br />
))]<br />
X = as.matrix(input.short)<br />
<br />
cv.out.logit = '''crossval::crossval'''(predfun.logit, as.data.frame(X), as.numeric(Y), K=5, B=2, neg="1")<br />
cv.out.logit$\$$stat.cv<br />
diagnosticErrors(cv.out.logit$\$$stat)<br />
<br />
<br />
<span style="background-color: #32cd32">Caution:</span> Note that if you forget to transform the predicted log-odds back to probabilities (see ynew2 in predfun.logit), you will get nonsense results (e.g., all cases predicted to be in one class, trivial sensitivity or NPP).<br />
<br />
predfun.qda = function(train.x, train.y, test.x, test.y, negative)<br />
{<br />
require("MASS") # for lda function<br />
qda.fit = qda(train.x, grouping=train.y)<br />
ynew = predict(qda.fit,test.x)$\$$class<br />
out.qda = confusionMatrix(test.y, ynew, negative=negative)<br />
return( out.qda )<br />
}<br />
<br />
<mark>cv.out.qda = '''crossval::crossval'''(predfun.qda,</mark> as.data.frame(input.short), <mark>as.factor(Y), K=5, B=20, neg="1")</mark><br />
diagnosticErrors(cv.out.lda$\$$stat); diagnosticErrors(cv.out.qda$\$$stat);<br />
<br />
This error message: <font color="red">“Error in qda.default(x, grouping, ...) : rank deficiency in group 1”</font> indicates that there is a rank deficiency, i.e. some variables are collinear and one or more covariance matrices cannot be inverted to obtain the estimates in group 1 (Controls)!<br />
<br />
<span style="background-color: #32cd32">If you remove the strongly correlated data elements ("R_fusiform_gyrus_Volume","R_fusiform_gyrus_ShapeIndex", and "R_fusiform_gyrus_Curvedness"), the rank-deficiency problem goes away!</span><br />
<br />
input.short2 <- input[, which(names(input) %in% c("R_fusiform_gyrus_Volume", <br />
"Sex", "Weight", "Age" , "chr17_rs11868035_GT", <br />
"UPDRS_Part_I_Summary_Score_Baseline",<br />
"UPDRS_Part_II_Patient_Questionnaire_Summary_Score_Baseline", <br />
"UPDRS_Part_III_Summary_Score_Baseline", <br />
"X_Assessment_Non.Motor_Epworth_Sleepiness_Scale_Summary_Score_Baseline" <br />
))]<br />
X = as.matrix(input.short2)<br />
cv.out.qda = crossval::crossval(predfun.qda, as.data.frame(X), as.numeric(Y), K=5, B=2, neg="1")<br />
<br />
<span style="background-color: #32cd32">Compare the QDA and GLM/Logit predictions:</span><br />
<br />
diagnosticErrors(cv.out.qda$\$$stat); diagnosticErrors(cv.out.logit$\$$stat)<br />
<br />
<br />
<br />
<sup>1</sup>http://www.ohrt.com/odds/binomial.php<br />
<br />
==See also==<br />
* [[SMHS_BigDataBigSci_CrossVal_LDA_QDA| Next Section: Foundation of LDA and QDA for prediction, dimensionality reduction or forecasting]]<br />
* [[SMHS_BigDataBigSci_SEM| Structural Equation Modeling (SEM)]]<br />
* [[SMHS_BigDataBigSci_GCM| Growth Curve Modeling (GCM)]]<br />
* [[SMHS_BigDataBigSci_GCM| Generalized Estimating Equation (GEE) Modeling]]<br />
* [[SMHS_BigDataBigSci|Back to Big Data Science]]<br />
<br />
<hr><br />
* SOCR Home page: http://www.socr.umich.edu<br />
<br />
{{translate|pageName=http://wiki.socr.umich.edu/index.php?title=SMHS_BigDataBigSci_CrossVal}}</div>Pineaumihttps://wiki.socr.umich.edu/index.php?title=SMHS_BigDataBigSci_SEM&diff=16246SMHS BigDataBigSci SEM2016-05-24T14:07:27Z<p>Pineaumi: /* See also */</p>
<hr />
<div>==[[SMHS_BigDataBigSci| Model-based Analytics]] - Structural Equation Modeling (SEM) ==<br />
<br />
SEM allows re-parameterization of random effects to specify latent variables that may affect measures at different time points using structural equations. SEM diagrams show variables having predictive (possibly causal) effects on other variables (denoted by arrows), where coefficients index the strength and direction of the predictive relations. SEM does not offer much more than classical regression methods do, but it does allow simultaneous estimation of multiple equations modeling complementary relations. <br />
<br />
SEM is a general multivariate statistical analysis technique that can be used for causal modeling/inference, path analysis, confirmatory factor analysis (CFA), covariance structure modeling, and correlation structure modeling.<br />
<br />
===SEM Advantages===<br />
* It allows testing models with multiple dependent variables<br />
* Provides mechanisms for modeling mediating variables<br />
* Enables modeling of error terms<br />
* Facilitates modeling of challenging data (longitudinal with auto-correlated errors, multi-level data, non-normal data, incomplete data)<br />
<br />
SEM allows separation of observed and latent variables. Other standard statistical procedures may be viewed as special cases of SEM, where statistical significance is less important than in other techniques, and covariances are the core of structural equation models.<br />
<br />
===Definitions===<br />
*The <b>disturbance</b>, <i>D</i>, is the variance in Y unexplained by a variable X that is assumed to affect Y.<br />
X → Y ← D<br />
<br />
* <b>Measurement error</b>, <i>E</i>, is the variance in X unexplained by A, where X is an observed variable that is presumed to measure a latent variable, <i>A</i>.<br />
A → X ← E<br />
<br />
* Categorical variables in a model are <b>exogenous</b> (independent) or <b>endogenous</b> (dependent).<br />
<br />
===Notation===<br />
<br />
* In SEM <b>observed (or manifest) indicators</b> are represented by <b>squares/rectangles</b> whereas latent variables (or factors) represented by circles/ovals.<br />
<br />
<center>[[Image:SMHS_BigDataBigSci1.png|500px]]</center><br />
<br />
*'''Relations: Direct effects''' (&rarr;), '''Reciprocal effects''' (&harr; or &#8646;), and '''Correlation or covariance''' (&#x293B; or &#x293A;) all have different appearance in SEM models.<br />
<br />
===Model Components===<br />
<br />
The <b>measurement part</b> of SEM model deals with the latent variables and their indicators. A pure measurement model is a confirmatory factor analysis (CFA) model with unmeasured covariance (bidirectional arrows) between each possible pair of latent variables. There are <u>straight arrows from the latent variables to their respective indicators and straight arrows from the error and disturbance terms to their respective variables, but no direct effects (straight arrows) connecting the latent variables</u>. The <b>measurement model</b> is evaluated using goodness of fit measures (Chi-Square test, BIC, AIC, etc.) <b>Validation of the measurement model is always first.</b> <br />
<br />
<b>Then we proceed to the structural model</b> (including a set of exogenous and endogenous variables together with the direct effects (straight arrows) connecting them along with the disturbance and error terms for these variables that reflect the effects of unmeasured variables not in the model).<br />
<br />
===Notes===<br />
<br />
* Sample-size considerations: mostly same as for regression - more is always better.<br />
* Model assessment strategies: Chi-square test, Comparative Fit Index, Root Mean Square Error, Tucker-Lewis Index, Goodness of Fit Index, AIC, and BIC.<br />
* Choice for number of Indicator variables: depends on pilot data analyses, a priori concerns, fewer is better.<br />
<br />
===[[SMHS_BigDataBigSci_SEM_Ex1|Hands-on Example 1 (School Kids Mental Abilities)]]===<br />
<br />
<br />
===[[SMHS_BigDataBigSci_SEM_Ex2|Hands-on Example 2 (Parkinson’s Disease data)]]===<br />
<br />
==See also==<br />
* [[SMHS_BigDataBigSci| Back to Model-based Analytics]] <br />
* [[SMHS_BigDataBigSci_GCM| Next Section: Growth Curve Modeling]]<br />
* [[SMHS_BigDataBigSci_GCM| Next Section: Generalized Estimating Equation (GEE) Modeling]]<br />
<br />
<hr><br />
* SOCR Home page: http://www.socr.umich.edu<br />
<br />
{{translate|pageName=http://wiki.socr.umich.edu/index.php?title=SMHS_BigDataBigSci_SEM}}</div>Pineaumihttps://wiki.socr.umich.edu/index.php?title=SMHS_BigDataBigSci&diff=16245SMHS BigDataBigSci2016-05-24T13:57:39Z<p>Pineaumi: /* Generalized Estimating Equation (GEE) Modeling */</p>
<hr />
<div>==[[SMHS| Scientific Methods for Health Sciences]] - Model-based Analyses ==<br />
<br />
Structural Equation Modeling (SEM), Growth Curve Models (GCM), and Generalized Estimating Equation (GEE) Modeling<br />
<br />
==Questions ==<br />
<br />
*How to represent dependencies in linear models and examine causal effects?<br />
*Is there a way to study population average effects of a covariate against specific individual effects?<br />
<br />
==Overview==<br />
<br />
SEM allows re-parameterization of random effects to specify latent variables that may affect measures at different time points using structural equations. SEM diagrams show variables having predictive (possibly causal) effects on other variables (denoted by arrows), where coefficients index the strength and direction of the predictive relations. SEM does not offer much more than classical regression methods do, but it does allow simultaneous estimation of multiple equations modeling complementary relations. <br />
<br />
Growth curve (or latent growth) modeling is a statistical technique, employed within the SEM framework, for estimating growth trajectories from longitudinal data (over time). It represents repeated measures of dependent variables as functions of time and other covariates. When subjects or units are observed repeatedly over known time points, latent growth curve models reveal the trend of an individual as a function of an underlying growth process, where the growth curve parameters can be estimated for each subject/unit.<br />
<br />
GEE is a marginal longitudinal method that directly assesses the mean relations of interest (i.e., how the mean dependent variable changes over time), accounting for covariances among the observations within subjects, and getting a better estimate and valid significance tests of the relations. Thus, GEE estimates two different equations, (1) for the mean relations, and (2) for the covariance structure. An advantage of GEE over random-effect models is that it does not require the dependent variable to be normally distributed. However, a disadvantage of GEE is that it is less flexible and versatile – commonly employed algorithms for it require a small-to-moderate number of time points evenly (or approximately evenly) spaced, and similarly spaced across subjects. Nevertheless, it is a little more flexible than repeated-measure ANOVA because it permits some missing values and has an easy way to test for and model away the specific form of autocorrelation within subjects.<br />
<br />
GEE is mostly used when the study focuses on uncovering the population-average effect of a covariate rather than the individual-specific effect. The two coincide for linear models but differ for non-linear models.<br />
<br />
For instance, suppose $Y_{i,j}$ is the $j^{th}$ observation of the $i^{th}$ subject in a random-effects <b>logistic model</b>; then <br />
$<br />
log\Bigg(\frac{p_{i,j}}{1-p_{i,j}} \Bigg)=μ+ν_i,<br />
$<br />
where $ν_i \sim N(0,σ^2)$ is a random effect for <u>subject i</u> and $p_{i,j}=P(Y_{i,j}=1|ν_i).$<br />
<br />
(1) When using a random effects model on such data, the estimate of μ accounts for the fact that a mean zero normally distributed perturbation was applied to each individual, making it ''individual-specific''.<br />
<br />
(2) When using a GEE model on the same data, we estimate the <i>population average log odds</i>,<br />
<br />
\begin{equation}<br />
δ=log\Bigg(\frac{E_ν\Big(\frac{1}{1+e^{-(μ+ν)}}\Big)}{1-E_ν\Big(\frac{1}{1+e^{-(μ+ν)}}\Big)}\Bigg),<br />
\end{equation} <br />
<br />
in general $μ≠δ$.<br />
<br />
If $μ=1$ and $σ^2=1$, then $δ≈.83$. <br />
<br />
Empirically:<br />
<br />
 m <- 1; s <- 1; v <- rnorm(1000, 0, s); v2 <- 1/(1+exp(-(m+v))); v_mean <- mean(v2)   # subject-specific probabilities<br />
<br />
 d <- log(v_mean/(1-v_mean)); d   # population-average log odds, ≈ 0.83<br />
<br />
Note that the random effects have mean zero on the transformed (linked) scale, but their effect is not mean zero on the original scale of the data. We can also simulate data from a mixed-effects logistic regression model and compare the population-level average with the inverse-logit of the intercept to see that they are not equal. This leads to a difference in the interpretation of the coefficients between GEE and random-effects models, or SEM.<br />
<br />
<b>That is, there will be a difference between the GEE population average coefficients and the individual specific coefficients (random effects models).</b><br />
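This difference can be checked directly by simulation. The sketch below (with a hypothetical sample size, and μ = 1, σ = 1 as above) draws data from a random-intercept logistic model and compares the population-average probability with the inverse-logit of the fixed intercept:

```r
# Simulate a random-intercept logistic model: logit(p_i) = mu + v_i,
# v_i ~ N(0, sigma^2), one Bernoulli outcome per subject.
set.seed(1234)
n_subj <- 20000                          # hypothetical sample size
mu <- 1; sigma <- 1
v <- rnorm(n_subj, 0, sigma)             # subject-specific random intercepts
p <- 1/(1 + exp(-(mu + v)))              # subject-specific P(Y = 1)
y <- rbinom(n_subj, size = 1, prob = p)  # observed binary outcomes

pop_avg   <- mean(y)                     # population-average probability (~0.70)
subj_prob <- 1/(1 + exp(-mu))            # inverse-logit of the intercept (~0.73)
delta     <- log(pop_avg/(1 - pop_avg))  # population-average log odds (< mu)
c(pop_avg = pop_avg, subj_prob = subj_prob, delta = delta)
```

The population-average log odds δ is attenuated toward zero relative to the subject-specific intercept μ, which is exactly the GEE-versus-random-effects distinction discussed above.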
<br />
<b>Theoretically</b>, if it can be computed:<br />
<br />
the expectation of the subject-specific log odds, $E\Bigg(log\Bigg[\frac{P(Y_{i,j}=1|ν_i)}{1-P(Y_{i,j}=1|ν_i)}\Bigg]\Bigg)=μ=1$ (in this specific case), while the population-average log odds $δ$ would be $< 1$ <sup>1</sup>. <br />
Note that this is related to the fact that a grand-total average need not equal an average of partial averages. <br />
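The grand-average point is easy to see with a tiny numeric illustration (hypothetical group sizes and values):

```r
# The grand mean weights every observation equally; the average of group
# means weights every group equally -- with unequal group sizes they differ.
g1 <- c(1, 1, 1, 1)                           # group 1: four observations
g2 <- c(9, 9)                                 # group 2: two observations
grand_mean    <- mean(c(g1, g2))              # (4*1 + 2*9)/6 = 3.67
mean_of_means <- mean(c(mean(g1), mean(g2)))  # (1 + 9)/2 = 5
c(grand_mean, mean_of_means)
```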
<br />
The mean of the $i^{th}$ person in the $j^{th}$ observation (e.g., location, time, etc.) can be expressed by:<br />
<br />
$E(Y_{ij} | X_{ij},α_j)= g[μ(X_{ij}|β)+U_{ij}(α_j,X_{ij})]$,<br />
<br />
where $μ(X_{ij}|β)$ is the average “response” of a person with the same covariates $X_{ij}$, $β$ is a set of fixed-effect coefficients, and $U_{ij}(α_j,X_{ij})$ is an error term that is a function of the (time, space) random effects, $α_j$, and of the covariates $X_{ij}$. Here $g$ is the inverse of the '''link function''' $g^{-1}$, which specifies the regression type -- e.g., <br />
<br />
*<u>linear</u>: $g^{-1}(u)=u,$<br />
<br />
*<u>log</u>: $g^{-1}(u)=log(u),$ <br />
<br />
*<u>logistic</u>: $g^{-1}(u)=log(\frac{u}{1-u}),$<br />
<br />
*with $E(U_{ij}(α_j,X_{ij})|X_{ij})=0.$<br />
<br />
The link function provides the relationship between the linear predictor and the mean of the distribution function. For practical applications there are many commonly used link functions, and it makes sense to match the domain of the link function to the range of the distribution function's mean.<br />
<br />
<center>Common distributions with typical uses and canonical link functions</center><br />
<center><br />
{| class="wikitable" style="text-align:center; " border="1"<br />
|-<br />
|<b>Distribution</b> ||<b>Support of distribution</b>||<b>Typical uses</b>||<b>Link name</b>||<b>Link function</b>||<b>Mean function</b><br />
|-<br />
|Normal||real: $(-&#8734;, +&#8734;)$||Linear-response data||Identity||$X\beta=\mu$||$\mu=X\beta$<br />
|-<br />
|Exponential, Gamma||real:$(0, +&#8734;)$||Exponential-response data, scale parameters||Inverse||$X\beta=-\mu^{-1}$||$\mu=-(X\beta)^{-1}$<br />
|-<br />
|Inverse Gaussian||real:$(0, +&#8734;)$|| ||Inverse squared||$X\beta=\mu^{-2}$||$\mu=(X\beta)^{-1/2}$ <br />
|}<br />
</center><br />
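In R, the link and mean (inverse-link) functions in the table above are available directly from the family objects used by glm(); a quick check:

```r
# Each family object exposes linkfun (mean -> linear predictor) and
# linkinv (linear predictor -> mean).
fam_identity <- gaussian(link = "identity")
fam_log      <- poisson(link = "log")
fam_logit    <- binomial(link = "logit")

fam_identity$linkfun(2.5)      # identity link: 2.5
fam_log$linkfun(2.5)           # log(2.5)
fam_logit$linkfun(0.75)        # log(0.75/0.25) = log(3)
fam_logit$linkinv(log(3))      # back-transform: 0.75
```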
<br />
===Footnotes===<br />
<br />
*<sup>1</sup> http://www.researchgate.net/publication/41895248<br />
<br />
==Model-based Analytics==<br />
<br />
===[[SMHS_BigDataBigSci_SEM| Structural Equation Modeling (SEM)]]===<br />
<br />
===[[SMHS_BigDataBigSci_GCM| Growth Curve Modeling (GCM)]]===<br />
<br />
===[[SMHS_BigDataBigSci_GCM| Generalized Estimating Equation (GEE) Modeling]]===<br />
<br />
===[[SMHS_BigDataBigSci_CrossVal|Internal Validation - Statistical n-fold cross-validation]]===<br />
<br />
<hr><br />
* SOCR Home page: http://www.socr.umich.edu<br />
<br />
{{translate|pageName=http://wiki.socr.umich.edu/index.php?title=SMHS_BigDataBigSci}}</div>
<hr />
<div>==[[SMHS_BigDataBigSci| Model-based Analytics]] - Growth Curve Models==<br />
<br />
Latent growth curve models may be used to analyze longitudinal or temporal data where the outcome measure is assessed on multiple occasions, and we examine its change over time, e.g., the trajectory over time can be<br />
modeled as a linear or quadratic function. Random effects are used to capture individual differences by conveniently representing (continuous) latent variables, aka growth factors. To fit a linear growth model we may specify a model with two latent variables: a random intercept, and a random slope:<br />
<br />
 #load data <b>05_PPMI_top_UPDRS_Integrated_LongFormat.csv (dim(myData): 661 x 71), wide format</b> <br />
# setwd("/dir/")<br />
myData <- read.csv("https://umich.instructure.com/files/330395/download?download_frd=1&verifier=v6jBvV4x94ka3EYcGKuXXg5BZNaOLBVp0xkJih0H",header=TRUE)<br />
attach(myData)<br />
<br />
# dichotomize the "ResearchGroup" variable<br />
table(myData$\$$ResearchGroup)<br />
myData$\$$ResearchGroup <- ifelse(myData$\$$ResearchGroup == "Control", 1, 0)<br />
<br />
# linear growth model with 4 timepoints<br />
# intercept (i) and slope (s) with fixed coefficients<br />
# i =~ 1*t1 + 1*t2 + 1*t3 + 1*t4 (intercept/constant)<br />
# s =~ 0*t1 + 1*t2 + 2*t3 + 3*t4 (slope/linear term)<br />
 # q =~ 0*t1 + 1*t2 + 4*t3 + 9*t4 (quadratic term, if needed)<br />
<br />
In this model, we have fixed all the coefficients of the linear growth functions:<br />
<br />
model4 <-<br />
' <br />
i =~ 1*UPDRS_Part_I_Summary_Score_Baseline + 1*UPDRS_Part_I_Summary_Score_Month_03 + <br />
1*UPDRS_Part_I_Summary_Score_Month_06 + 1*UPDRS_Part_I_Summary_Score_Month_09 + <br />
1*UPDRS_Part_I_Summary_Score_Month_12 + 1*UPDRS_Part_I_Summary_Score_Month_18 + <br />
1*UPDRS_Part_I_Summary_Score_Month_24 + <br />
1*UPDRS_Part_II_Patient_Questionnaire_Summary_Score_Baseline + <br />
1*UPDRS_Part_II_Patient_Questionnaire_Summary_Score_Month_03 + <br />
1*UPDRS_Part_II_Patient_Questionnaire_Summary_Score_Month_06 + <br />
1*UPDRS_Part_II_Patient_Questionnaire_Summary_Score_Month_09 + <br />
1*UPDRS_Part_II_Patient_Questionnaire_Summary_Score_Month_12 + <br />
1*UPDRS_Part_II_Patient_Questionnaire_Summary_Score_Month_18 +<br />
1*UPDRS_Part_II_Patient_Questionnaire_Summary_Score_Month_24 + <br />
1*UPDRS_Part_III_Summary_Score_Baseline + 1*UPDRS_Part_III_Summary_Score_Month_03 + <br />
1*UPDRS_Part_III_Summary_Score_Month_06 + 1*UPDRS_Part_III_Summary_Score_Month_09 + <br />
1*UPDRS_Part_III_Summary_Score_Month_12 + 1*UPDRS_Part_III_Summary_Score_Month_18 + <br />
1*UPDRS_Part_III_Summary_Score_Month_24 + <br />
1*X_Assessment_Non.Motor_Epworth_Sleepiness_Scale_Summary_Score_Baseline + <br />
1*X_Assessment_Non.Motor_Epworth_Sleepiness_Scale_Summary_Score_Month_06 + <br />
1*X_Assessment_Non.Motor_Epworth_Sleepiness_Scale_Summary_Score_Month_12 + <br />
1*X_Assessment_Non.Motor_Epworth_Sleepiness_Scale_Summary_Score_Month_24 + <br />
1*X_Assessment_Non.Motor_Geriatric_Depression_Scale_GDS_Short_Summary_Score_Baseline + <br />
1*X_Assessment_Non.Motor_Geriatric_Depression_Scale_GDS_Short_Summary_Score_Month_06 +<br />
1*X_Assessment_Non.Motor_Geriatric_Depression_Scale_GDS_Short_Summary_Score_Month_12 + <br />
1*X_Assessment_Non.Motor_Geriatric_Depression_Scale_GDS_Short_Summary_Score_Month_24 <br />
s =~ 0*UPDRS_Part_I_Summary_Score_Baseline + 1*UPDRS_Part_I_Summary_Score_Month_03 + <br />
2*UPDRS_Part_I_Summary_Score_Month_06 + 3*UPDRS_Part_I_Summary_Score_Month_09 + <br />
4*UPDRS_Part_I_Summary_Score_Month_12 + 5*UPDRS_Part_I_Summary_Score_Month_18 + <br />
6*UPDRS_Part_I_Summary_Score_Month_24 +<br />
0*UPDRS_Part_II_Patient_Questionnaire_Summary_Score_Baseline + <br />
1*UPDRS_Part_II_Patient_Questionnaire_Summary_Score_Month_03 + <br />
2*UPDRS_Part_II_Patient_Questionnaire_Summary_Score_Month_06 + <br />
3*UPDRS_Part_II_Patient_Questionnaire_Summary_Score_Month_09 + <br />
4*UPDRS_Part_II_Patient_Questionnaire_Summary_Score_Month_12 + <br />
5*UPDRS_Part_II_Patient_Questionnaire_Summary_Score_Month_18 + <br />
6*UPDRS_Part_II_Patient_Questionnaire_Summary_Score_Month_24 + <br />
0*UPDRS_Part_III_Summary_Score_Baseline + 1*UPDRS_Part_III_Summary_Score_Month_03 + <br />
2*UPDRS_Part_III_Summary_Score_Month_06 + 3*UPDRS_Part_III_Summary_Score_Month_09 + <br />
4*UPDRS_Part_III_Summary_Score_Month_12 + 5*UPDRS_Part_III_Summary_Score_Month_18 + <br />
6*UPDRS_Part_III_Summary_Score_Month_24 + <br />
0*X_Assessment_Non.Motor_Epworth_Sleepiness_Scale_Summary_Score_Baseline + <br />
2*X_Assessment_Non.Motor_Epworth_Sleepiness_Scale_Summary_Score_Month_06 + <br />
4*X_Assessment_Non.Motor_Epworth_Sleepiness_Scale_Summary_Score_Month_12 +<br />
6*X_Assessment_Non.Motor_Epworth_Sleepiness_Scale_Summary_Score_Month_24 +<br />
0*X_Assessment_Non.Motor_Geriatric_Depression_Scale_GDS_Short_Summary_Score_Baseline + <br />
2*X_Assessment_Non.Motor_Geriatric_Depression_Scale_GDS_Short_Summary_Score_Month_06 + <br />
4*X_Assessment_Non.Motor_Geriatric_Depression_Scale_GDS_Short_Summary_Score_Month_12 + <br />
6*X_Assessment_Non.Motor_Geriatric_Depression_Scale_GDS_Short_Summary_Score_Month_24<br />
'<br />
<br />
fit4 <- growth(model4, data=myData)<br />
summary(fit4)<br />
parameterEstimates(fit4) # extracts the values of the estimated parameters, the standard errors, <br />
# the z-values, the standardized parameter values, and returns a data frame <br />
fitted(fit4) # return the model-implied (fitted) covariance matrix (and mean vector) of a fitted model<br />
<br />
# resid() function return (unstandardized) residuals of a fitted model including the difference between <br />
# the observed and implied covariance matrix and mean vector<br />
resid(fit4)<br />
<br />
==Measures of model quality (Comparative Fit Index (CFI), Root Mean Square Error of Approximation (RMSEA))==<br />
<br />
# report the fit measures as a signature vector: Comparative Fit Index (CFI), Root Mean Square Error of <br />
# Approximation (RMSEA)<br />
fitMeasures(fit4, c("cfi", "rmsea", "srmr"))<br />
<br />
====Comparative Fit Index====<br />
<br />
(CFI) is an incremental fit measure directly based on the non-centrality measure $d = χ^2 - df$, where $df$ denotes the degrees of freedom of the model. The Comparative Fit Index is:<br />
$<br />
CFI=\frac{d(\text{Null Model})-d(\text{Proposed Model})}{d(\text{Null Model})}.<br />
$<br />
<br />
$0≤CFI≤1$ (by definition). It is interpreted as:<br />
<br />
*$CFI<0.9$ - model fitting is poor.<br />
<br />
*$0.9≤CFI≤0.95$ is considered marginal, <br />
<br />
*$CFI>0.95$ is good. <br />
<br />
CFI is a relative index of model fit – it compares the fit of your model to the fit of the (worst-fitting) null model.<br />
<br />
====Root Mean Square Error of Approximation====<br />
(RMSEA) - “Ramsey”<br />
<br />
An absolute measure of fit based on the non-centrality parameter: <br />
<br />
$\sqrt{\frac{χ^2-df}{df×(N - 1)}}$,<br />
<br />
where $N$ is the sample size and $df$ the degrees of freedom of the model. If $χ^2 < df$, then RMSEA is set to 0. It includes a penalty for complexity via the chi-square to df ratio. The RMSEA is a popular measure of model fit. <br />
<br />
*RMSEA < 0.01, excellent, <br />
<br />
*RMSEA < 0.05, good <br />
<br />
*RMSEA > 0.10 cutoff for poor fitting models<br />
<br />
====Standardized Root Mean Square Residual==== <br />
(SRMR) is an absolute measure of fit defined as the standardized difference between the observed correlation and the predicted correlation. A value of zero indicates perfect fit. The SRMR has no penalty for model complexity. SRMR <0.08 is considered a good fit.<br />
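These indices can also be computed by hand from the chi-square statistics. The sketch below uses the model chi-square reported for fit5 further down (3.703, df = 1, N = 661); the baseline (null-model) chi-square is a hypothetical value for illustration only:

```r
# Hand computation of CFI and RMSEA from chi-square statistics.
chisq_m <- 3.703; df_m <- 1        # proposed model (cf. fit5 output)
chisq_0 <- 900;   df_0 <- 10       # baseline/null model -- hypothetical values
N <- 661                           # number of observations

d_m <- max(chisq_m - df_m, 0)      # non-centrality, proposed model
d_0 <- max(chisq_0 - df_0, 0)      # non-centrality, null model
CFI   <- (d_0 - d_m)/d_0           # = 1 - d_m/d_0
RMSEA <- sqrt(d_m/(df_m*(N - 1)))  # 0 whenever chisq_m < df_m
round(c(CFI = CFI, RMSEA = RMSEA), 3)
```

The computed RMSEA (0.064) matches the value reported by fitMeasures(fit5); CFI depends on the baseline chi-square, which is hypothetical here.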
<br />
# inspect the model results (report parameter table)<br />
inspect(fit4)<br />
<br />
#install.packages("semTools")<br />
# library("semTools")<br />
<br />
<b><u>A Simpler Model (fit5)</u></b><br />
<br />
model5 <- '<br />
# intercept and slope with fixed coefficients<br />
i =~ UPDRS_Part_I_Summary_Score_Baseline + UPDRS_Part_I_Summary_Score_Month_03 + UPDRS_Part_I_Summary_Score_Month_24<br />
s =~ 0*UPDRS_Part_I_Summary_Score_Baseline + 1*UPDRS_Part_I_Summary_Score_Month_03 + 6*UPDRS_Part_I_Summary_Score_Month_24<br />
# regressions<br />
i ~ R_fusiform_gyrus_Volume + Weight + ResearchGroup + Age + chr12_rs34637584_GT <br />
s ~ R_fusiform_gyrus_Volume + Weight + ResearchGroup + Age + chr12_rs34637584_GT<br />
# time-varying covariates<br />
UPDRS_Part_I_Summary_Score_Baseline ~ Weight<br />
UPDRS_Part_I_Summary_Score_Month_03 ~ ResearchGroup <br />
UPDRS_Part_I_Summary_Score_Month_24 ~ Age<br />
'<br />
<br />
fit5 <- growth(model5, data=myData)<br />
summary(fit5); fitMeasures(fit5, c("cfi", "rmsea", "srmr"))<br />
parameterEstimates(fit5) # extracts the values of the estimated parameters, the standard errors, <br />
# the z-values, the standardized parameter values, and returns a data frame<br />
<br />
lavaan (0.5-18) converged normally after 99 iterations<br />
Number of observations 661<br />
Estimator ML<br />
Minimum Function Test Statistic 3.703<br />
Degrees of freedom 1<br />
P-value (Chi-square) 0.054<br />
Parameter estimates:<br />
Information Expected<br />
Standard Errors Standard<br />
Estimate Std.err Z-value P(>|z|)<br />
Latent variables:<br />
i =~<br />
UPDRS_P_I_S_S 1.000<br />
UPDRS_P_I_S_S 1.074<br />
UPDRS_P_I_S_S 1.172<br />
s =~<br />
UPDRS_P_I_S_S 0.000<br />
UPDRS_P_I_S_S 1.000<br />
UPDRS_P_I_S_S 6.000<br />
<br />
Regressions:<br />
i ~<br />
R_fsfrm_gyr_V 0.000<br />
Weight 0.003<br />
ResearchGroup -0.880<br />
Age -0.009<br />
c12_34637584_ -0.907<br />
s ~<br />
R_fsfrm_gyr_V -0.000<br />
Weight -0.000<br />
ResearchGroup -0.084<br />
Age 0.002<br />
c12_34637584_ -0.047<br />
UPDRS_Part_I_Summary_Score_Baseline ~<br />
Weight -0.000<br />
UPDRS_Part_I_Summary_Score_Month_03 ~<br />
ResearchGroup 0.693<br />
UPDRS_Part_I_Summary_Score_Month_24 ~<br />
Age -0.002<br />
<br />
Covariances:<br />
i ~~<br />
s 0.074<br />
<br />
Intercepts:<br />
UPDRS_P_I_S_S 0.000<br />
UPDRS_P_I_S_S 0.000<br />
UPDRS_P_I_S_S 0.000<br />
i 1.633<br />
s -0.023<br />
<br />
Variances:<br />
UPDRS_P_I_S_S 1.017<br />
UPDRS_P_I_S_S 1.093<br />
UPDRS_P_I_S_S 2.993<br />
i 1.019<br />
s -0.025<br />
<br />
<b>cfi rmsea srmr</b><br />
<b>0.996 0.064 0.008</b><br />
<br />
fitted(fit5) # return the model-implied (fitted) covariance matrix (and mean vector) of a fitted model<br />
# write.table(fitted(fit5), file="C:\\Users\\Dinov\\Desktop\\test1.txt")<br />
<br />
# resid() function return (unstandardized) residuals of a fitted model including the difference between <br />
# the observed and implied covariance matrix and mean vector<br />
resid(fit5)<br />
<br />
# report the fit measures as a signature vector<br />
fitMeasures(fit5, c("cfi", "rmsea", "srmr")) # comparative fit index (CFI)<br />
<br />
# inspect the model results (report parameter table)<br />
inspect(fit5)<br />
<br />
<b>Note:</b> See discussion of SEM modeling pros/cons <sup>2</sup>.<br />
<br />
==Generalized Estimating Equation (GEE) Modeling==<br />
<br />
Generalized Estimating Equations (GEE) modeling<sup>3</sup> is used for analyzing data with the following characteristics:<br />
(1) the observations within a group may be correlated, (2) observations in separate clusters are independent, (3) a monotone transformation of the expectation is linearly related to the explanatory variables, and (4) the variance is a function of the expectation. The expectation (#3) and the variance (#4) are conditional given group-level or individual-level covariates.<br />
<br />
GEE is applied to handle correlated discrete and continuous outcome variables. For the outcome variables, it only requires specification of the first two moments and the correlation among them. The goal is to estimate fixed parameters without specifying their joint distribution. The correlation structure is specified by one of these four alternatives in the R call, e.g., geeglm(outcome ~ center + treat + sex + baseline + age, data = respiratory, family = "binomial", id = id, <b>corstr = "exchangeable"</b>, scale.fix = TRUE):<br />
<br />
<center>[[Image:SMHS_BigDataBigSci8.png|300px]]</center><br />
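For intuition, the working correlation matrices corresponding to these options can be written out in base R; here for n = 4 visits and a hypothetical working correlation parameter α = 0.5:

```r
# Working correlation structures for n = 4 repeated measures.
n <- 4; alpha <- 0.5
R_indep <- diag(n)                                 # "independence"
R_exch  <- matrix(alpha, n, n); diag(R_exch) <- 1  # "exchangeable"
R_ar1   <- alpha^abs(outer(1:n, 1:n, "-"))         # "ar1": alpha^|t - s|
# "unstructured" leaves all n*(n-1)/2 off-diagonal entries to be estimated
R_exch[1, 2]   # 0.5  (same for all pairs)
R_ar1[1, 3]    # 0.25 (decays with lag)
```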
<br />
===Respiratory Illness GEE R example===<br />
<br />
This example is based on a data set on respiratory illness <sup>4</sup> and the <b>geepack</b> package. The data come from a clinical study of treatment effects on patients with respiratory illness: N=111 patients from two clinical centers were randomized to receive either placebo or active treatment. Four temporal examinations assessed the <b>respiratory state</b> of each patient as good (=1) or poor (=0). Explanatory variables characterizing a patient were: <b>center</b> (1, 2), <b>treatment</b> (A=active, P=placebo), <b>sex</b> (M=male, F=female), and <b>age</b> (in years) at baseline. The values of the covariates were constant across the repeated observations on each patient.<br />
<br />
<b>Table 1</b> shows the number of patients for each response pattern across the 4 visits, split by baseline status and treatment. Patients with baseline respiratory status = 0 appear to have either low or high numbers of positive responses, whereas patients with baseline respiratory status = 1 tend to respond positively. <b>Table 2</b> describes the distribution of the number of positive responses per patient by sex and center.<br />
<br />
# library("geepack")<br />
<br />
<b>Table 1</b>: Distribution of patients for <b>different response patterns</b> classified by <b>baseline-respiratory</b> response and <b>treatment</b>. The patterns are ordered according to increasing numbers of positive responses.<br />
<br />
<center><br />
{| class="wikitable" style="text-align:center; width:75%" border="1"<br />
|-<br />
! ||Visit|| colspan="15"| All Possible Response Patterns (2*2*2*2=16 permutation patterns)||<br />
|-<br />
|||1||0||1||0||0||0||1||1||1||0||0||1||1||1||0||1||<br />
|-<br />
|||2||0||0||1||0||0||1||0||0||1||0||1||1||0||1||1||<br />
|-<br />
|||3||0||0||0||1||0||0||1||0||1||1||1||0||1||1||1||<br />
|-<br />
|||4||0||0||0||0||1||0||0||1||0||1||0||1||1||1||1||<br />
|-<br />
!Baseline||Treatment||||||||||||||||||||||||||||||||Sum<br />
|-<br />
| rowspan="2"|0||A||7||2||2||2||1||0||1||0||1||0||1||2||0||4||7||30<br />
|-<br />
|P||18||1||0||2||1||2||0||0||1||0||0||1||2||0||3||31<br />
|-<br />
|rowspan="2"|1||A||0||0||0||0||0||0||1||1||0||0||4||0||1||0||17||24<br />
|-<br />
|P||1||4||1||0||0||0||0||1||1||3||1||1||2||1||10||26<br />
|-<br />
|Sum||||26||7||3||4||2||2||2||2||3||3||6||4||5||5||37||111<br />
|}<br />
</center><br />
<br />
<br />
<b>Table 2</b>: Distribution of patients for the number of positive responses across the 4 visits for <b>Sex</b> and <b>Center</b>. <br />
<br />
<center><br />
{| class="wikitable" style="text-align:center; width:75%" border="1"<br />
|-<br />
! colspan="2" rowspan="2"| ||colspan="5"|Number of positive responses<br />
|-<br />
| 0||1||2||3||4<br />
|-<br />
|rowspan="2"|Sex || F||7||3||3||3||7<br />
|-<br />
|M||19||13||9||17||30<br />
|-<br />
|rowspan="2"|Center|| 1||18||9||6||11||12<br />
|-<br />
|2||8||7||6||9||25<br />
|}<br />
</center><br />
<br />
<b>Figure 1</b> shows a plot of age against the proportion of positive responses for each patient; it indicates a quadratic relationship between the proportions and age. Below we fit a logistic model to the data (which would be appropriate if there were <i>no time effects</i> and <i>no spread in the response probabilities</i> for patients with the same covariate values).<br />
<br />
# install.packages("geepack")<br />
library("geepack")<br />
<br />
# data include a clinical trial of 111 patients with respiratory illness from two different clinics were randomized to receive either <br />
# placebo (P) or an active (A) treatment. Patients were examined at baseline and at four visits during treatment. <br />
# At each examination, respiratory status (categorized as 1 = good, 0 = poor)<br />
data("respiratory")<br />
head(respiratory)<br />
myData <- respiratory<br />
<br />
<center>head(myData)<br />
{| class="wikitable" style="text-align:center; " border="1"<br />
|-<br />
|||Center||ID||Treat||Sex||Age||Baseline||Visit||Outcome<br />
|-<br />
|1 ||1||1||P||M||46||0||1||0<br />
|-<br />
|2 ||1||1||P||M||46||0||2||0<br />
|-<br />
|3 ||1||1||P||M||46||0||3||0<br />
|-<br />
|4 ||1||1||P||M||46||0||4||0<br />
|-<br />
|5||1||2||P||M||28||0||1||0<br />
|-<br />
|6||1||2||P||M||28||0||2||0<br />
|}<br />
</center><br />
<br />
# Get proportions of positive responses<br />
responses <- factor(myData$\$$outcome, labels = c("OutcomePositive", "OutcomeNegative"))<br />
data.frame <- data.frame(responses, myData$\$$age)<br />
head(data.frame)<br />
tab <- prop.table(table(data.frame), 1); tab # compute proportions<br />
sum(tab[1,]) # check proportions (sums to 1.0)?<br />
prop <- tab[1,] # save the proportions of positive responses for each patient<br />
plot(as.numeric(dimnames(tab)$\$$myData.age), tab[1,], xlab = "Age", ylab = "Proportion of Positive Outcomes")<br />
# dimnames(tab) # to see/inspect positive/negative outcomes<br />
<br />
[[Image:SMHS_BigDataBigSci9.png|500px]]<br />
<br />
x <- as.numeric(dimnames(tab)$\$$myData.age)<br />
poly <- loess( prop ~ x) # fit a Local Polynomial Regression Fitting<br />
plot(x, prop)<br />
lines(predict(poly), col='red', lwd=2)<br />
<br />
smoothingSpline <- smooth.spline(x, prop, spar=0.6)<br />
plot(x, prop)<br />
lines(smoothingSpline, col='red', lwd=1.5)<br />
smoothPolySpline <- smooth.spline(x, predict(poly), spar=0.6)<br />
lines(smoothPolySpline, col='blue', lwd=2)<br />
legend("topright", inset=.05, title="Polynomial regression models", c("Raw Poly","Smooth Poly"), fill=c('red', 'blue'), horiz=TRUE)<br />
<br />
[[Image:SMHS_BigDataBigSci10.png|500px]]<br />
<br />
model.glm <- <b>glm</b>(outcome ~ baseline + center + sex + treat + age + I(age^2), data = respiratory, family = binomial)<br />
<br />
summary(model.glm)<br />
<br />
<center>Deviance Residuals: <br />
{| class="wikitable" style="text-align:center; " border="1"<br />
|-<br />
|Min||1Q||Median||3Q||Max<br />
|-<br />
| -2.5951||-0.9108||0.4034||0.8336||2.0951<br />
|}<br />
</center><br />
<br />
<center>Coefficients:<br />
{| class="wikitable" style="text-align:center; " border="1"<br />
|-<br />
|||Estimate||Std. Error||z value||$Pr( \gt |z|)$ <br />
|-<br />
|(Intercept)||3.3579727||1.0285292||3.265||0.0011 **<br />
|-<br />
|baseline||1.8850421||0.2482959||7.592||3.15e-14 ***<br />
|-<br />
|center||0.5099244||0.2453982||2.078||0.0377 *<br />
|-<br />
|sexM||-0.4510595||0.3166570||-1.424||0.1543<br />
|-<br />
|Treatp||-1.3231587||0.2431603||-5.442||5.28e-08 ***<br />
|-<br />
|age||-0.2072815||0.0472538||-4.387||1.15e-05 ***<br />
|-<br />
|I(age^2)||0.0025650||0.0006324||4.056||4.99e-05 ***<br />
|}<br />
</center><br />
<br />
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1<br />
<br />
(Dispersion parameter for binomial family taken to be 1)<br />
<br />
Null deviance: 609.41 on 443 degrees of freedom<br />
<br />
Residual deviance: 468.62 on 437 degrees of freedom<br />
<br />
AIC: 482.62<br />
<br />
The correlation matrix of the outcome measures across visits is shown in <b>Table 3.</b><br />
<br />
attach(myData)<br />
mat1 <- matrix(c(outcome[visit==1], outcome [visit==2], outcome [visit==3], <br />
outcome[visit==4]), ncol = 4)<br />
cor(mat1)<br />
<br />
<b>Table 3</b>: Correlation matrix for the outcome measurements at different visits.<br />
<br />
<center><br />
{| class="wikitable" style="text-align:center; " border="1"<br />
|-<br />
|||[,1]||[,2]||[,3]||[,4]<br />
|-<br />
|[,1]||1.0000000||0.5087944||0.4431438||0.5139016<br />
|-<br />
|[,2]||0.5087944||1.0000000||0.5821877||0.5301611<br />
|-<br />
|[,3]||0.4431438||0.5821877||1.0000000||0.5871276<br />
|-<br />
|[,4]||0.5139016||0.5301611||0.5871276||1.0000000<br />
|}<br />
</center><br />
<br />
# We can also examine for multicollinearity problem, using the correlation matrix for X<br />
cor(model.matrix(model.glm)[,-1])<br />
<br />
# GEE modeling: R function arguments/options<br />
<br />
*<b>corstr</b>= for defining the correlation structure within groups in a GEE model<br />
<br />
*<b>id</b>= is used to identify the grouping variable in a GEE model<br />
<br />
*<b>scale.fix</b>= when TRUE causes the scale parameter to be fixed (by default at 1) rather than estimated<br />
<br />
*<b>waves</b>= names a positive integer-valued variable that is used to identify the order and spacing of observations within groups in a GEE model. This argument is crucial when there are missing values and gaps in the data<br />
<br />
gee.model1 <- <b>geeglm</b>(outcome ~ center + treat + sex + baseline + age, data = respiratory, family = "binomial", id = id, corstr = "exchangeable", scale.fix = TRUE)<br />
<br />
 # The column labeled <b>Wald</b> in the summary table is the square of the z-statistic. The reported p-values are the <br />
 # upper-tail probabilities from a chi-square distribution with 1 df, and they test whether the true parameter value ≠ 0.<br />
summary(gee.model1)<br />
<br />
# To test the effect of ''treatment'' using anova()<br />
gee.model1 <- <b>geeglm</b>(outcome ~ center + <b><u>treat</u></b> + sex + baseline + age, data = respiratory, family=binomial(link="logit"), id = id, corstr = "exchangeable", std.err="san.se")<br />
gee.model2 <- geeglm(outcome ~ center + sex + baseline + age, data = respiratory, family=binomial(link="logit"), id=id, corstr = "exchangeable", std.err="san.se")<br />
anova(gee.model1, gee.model2)<br />
<br />
# To test whether a categorical predictor with more than two levels should be retained in a GEE model we need <br />
# to test the entire set of dummy variables simultaneously as a single construct. <br />
# The geepack package provides a method for the anova function for a multivariate Wald test<br />
# When the anova function is applied to a single geeglm object it returns sequential Wald tests for <br />
# individual predictors with the tests carried out in the order the predictors are listed in the model formula.<br />
anova(gee.model1)<br />
<br />
===PD GEE example===<br />
<br />
This example used the PPMI/PD data to show GEE analysis.<br />
<br />
<b># 05_PPMI_top_UPDRS_Integrated_LongFormat1.csv</b><br />
longData <- read.csv("https://umich.instructure.com/files/330397/download?download_frd=1",header=TRUE)<br />
<br />
# library("geepack")<br />
<br />
# Data Elements: FID_IID L_insular_cortex_ComputeArea L_insular_cortex_Volume R_insular_cortex_ComputeArea R_insular_cortex_Volume L_cingulate_gyrus_ComputeArea L_cingulate_gyrus_Volume R_cingulate_gyrus_ComputeArea R_cingulate_gyrus_Volume L_caudate_ComputeArea L_caudate_Volume R_caudate_ComputeArea R_caudate_Volume L_putamen_ComputeArea L_putamen_Volume R_putamen_ComputeArea R_putamen_Volume Sex Weight ResearchGroup Age chr12_rs34637584_GT chr17_rs11868035_GT chr17_rs11012_GT chr17_rs393152_GT chr17_rs12185268_GT chr17_rs199533_GT UPDRS_part_I UPDRS_part_II UPDRS_part_III time_visit<br />
<br />
dim(longData) <br />
<br />
data1 = na.omit(longData)<br />
attach(data1)<br />
ControlGroup <- ifelse(ResearchGroup == "Control", 1, 0)<br />
<br />
# these calculations take a long time!!!<br />
# if you get <i>“Error in geese.fit(xx, yy, id, offset, soffset, w, waves = waves, zsca, : <br />
# nrow(zsca) and length(y)</i> not match” – this indicates some of the variables are of different lengths<br />
# if you get <i>“glm.fit: algorithm did not converge”</i> – see this discussion: http://goo.gl/lrjBjB <br />
<br />
gee.model0 <- geeglm(ControlGroup ~ L_insular_cortex_ComputeArea+L_insular_cortex_Volume+ Sex + Weight + Age + chr17_rs11012_GT + chr17_rs199533_GT + UPDRS_part_I + UPDRS_part_II + time_visit, data = data1, family=binomial(link="logit"), id = FID_IID, corstr = "unstructured", std.err="san.se")<br />
<br />
gee.model1 <- geeglm(ControlGroup ~ L_insular_cortex_ComputeArea+L_insular_cortex_Volume+ R_putamen_ComputeArea + R_putamen_Volume + Sex + Weight + Age + chr17_rs11012_GT + chr17_rs199533_GT + UPDRS_part_I + UPDRS_part_II + time_visit, data = data1, family=binomial(link="logit"), id = FID_IID, corstr = "unstructured", std.err="san.se")<br />
<br />
# compare 2 gee models<br />
# anova(gee.model0,gee.model1)<br />
<br />
# you can try the “family = poisson(link = "log")” model for the ResearchGroup response, as well<br />
<br />
gee.model2 <- <b>geeglm</b>(ControlGroup <br />
~ L_insular_cortex_ComputeArea+L_insular_cortex_Volume+R_insular_cortex_ComputeArea+ R_insular_cortex_Volume +L_cingulate_gyrus_ComputeArea + L_cingulate_gyrus_Volume + R_cingulate_gyrus_ComputeArea + R_cingulate_gyrus_Volume + L_caudate_ComputeArea + L_caudate_Volume + R_caudate_ComputeArea + R_caudate_Volume + L_putamen_ComputeArea + L_putamen_Volume + R_putamen_ComputeArea + R_putamen_Volume + Sex + Weight + Age + chr12_rs34637584_GT + chr17_rs11868035_GT + chr17_rs11012_GT + chr17_rs393152_GT + chr17_rs12185268_GT + chr17_rs199533_GT + UPDRS_part_I + UPDRS_part_II + time_visit, data = data1, family=binomial(link="logit"), id = FID_IID, corstr = "unstructured", std.err="san.se")<br />
<br />
Remember that we do not interpret GEE coefficients as relating to individuals – GEE models are marginal models and their conclusions are interpreted as population-based. Also, the time element in the model (time_visit) is just another controlling factor. <b>The effect sizes (betas) associated with each variable/predictor represent the slopes associated with the corresponding covariate, while holding time constant</b>. If we need to examine interactions (e.g., Weight change over Time), we need to include an interaction term in the model (i.e., + Weight*time_visit).<br />
<br />
summary (gee.model2)<br />
<br />
# Individual Wald test and <b>confidence intervals</b> for each covariate<br />
predictors2 <- coef(summary(gee.model2))<br />
CI2 <- with(as.data.frame(predictors2), cbind(lwr=Estimate-1.96*Std.err, est=Estimate, upr=Estimate+1.96*Std.err))<br />
rownames(CI2) <- rownames(predictors2)<br />
CI2<br />
<br />
==Appendix==<br />
<br />
SEM References<br />
<br />
*http://socserv.mcmaster.ca/jfox/Misc/sem/SEM-paper.pdf <br />
<br />
GEE References<br />
<br />
*https://cran.r-project.org/web/packages/geepack/geepack.pdf<br />
<br />
*http://www.jstatsoft.org/v15/i02/paper<br />
<br />
===Footnotes===<br />
<br />
*<sup>2</sup> http://www.imachordata.com/ecological-sems-and-composite-variables-what-why-and-how/<br />
*<sup>3</sup> http://www.jstatsoft.org/v15/i02/ <br />
*<sup>4</sup> https://books.google.com/books?id=mdEqBgAAQBAJ<br />
<br />
==See also==<br />
* [[SMHS_BigDataBigSci| Back to Model-based Analytics]] <br />
* [[SMHS_BigDataBigSci_SEM| Structural Equation Modeling (SEM)]] <br />
<br />
<br />
<hr><br />
* SOCR Home page: http://www.socr.umich.edu<br />
<br />
{{translate|pageName=http://wiki.socr.umich.edu/index.php?title=SMHS_BigDataBigSci_GCM}}</div>Pineaumihttps://wiki.socr.umich.edu/index.php?title=SMHS_BigDataBigSci_GCM&diff=16243SMHS BigDataBigSci GCM2016-05-23T19:58:06Z<p>Pineaumi: /* Model-based Analytics - Growth Curve Models */</p>
<hr />
<div>==[[SMHS_BigDataBigSci| Model-based Analytics]] - Growth Curve Models==<br />
<br />
Latent growth curve models may be used to analyze longitudinal or temporal data where the outcome measure is assessed on multiple occasions and we examine its change over time; e.g., the trajectory over time can be<br />
modeled as a linear or quadratic function. Random effects are used to capture individual differences by conveniently representing (continuous) latent variables, aka growth factors. To fit a linear growth model, we may specify a model with two latent variables: a random intercept and a random slope:<br />
<br />
# load data: <b>05_PPMI_top_UPDRS_Integrated_LongFormat.csv (wide format; dim(myData) is 661 71)</b> <br />
# setwd("/dir/")<br />
myData <- read.csv("https://umich.instructure.com/files/330395/download?download_frd=1&verifier=v6jBvV4x94ka3EYcGKuXXg5BZNaOLBVp0xkJih0H",header=TRUE)<br />
attach(myData)<br />
<br />
# dichotomize the "ResearchGroup" variable<br />
table(myData$\$$ResearchGroup)<br />
myData$\$$ResearchGroup <- ifelse(myData$\$$ResearchGroup == "Control", 1, 0)<br />
<br />
# linear growth model with 4 timepoints<br />
# intercept (i) and slope (s) with fixed coefficients<br />
# i =~ 1*t1 + 1*t2 + 1*t3 + 1*t4 (intercept/constant)<br />
# s =~ 0*t1 + 1*t2 + 2*t3 + 3*t4 (slope/linear term)<br />
# q =~ 0*t1 + 1*t2 + 4*t3 + 9*t4 (optional quadratic term; its loadings are the squared time scores)<br />
<br />
In this model, we have fixed all the coefficients of the linear growth functions:<br />
<br />
model4 <-<br />
' <br />
i =~ 1*UPDRS_Part_I_Summary_Score_Baseline + 1*UPDRS_Part_I_Summary_Score_Month_03 + <br />
1*UPDRS_Part_I_Summary_Score_Month_06 + 1*UPDRS_Part_I_Summary_Score_Month_09 + <br />
1*UPDRS_Part_I_Summary_Score_Month_12 + 1*UPDRS_Part_I_Summary_Score_Month_18 + <br />
1*UPDRS_Part_I_Summary_Score_Month_24 + <br />
1*UPDRS_Part_II_Patient_Questionnaire_Summary_Score_Baseline + <br />
1*UPDRS_Part_II_Patient_Questionnaire_Summary_Score_Month_03 + <br />
1*UPDRS_Part_II_Patient_Questionnaire_Summary_Score_Month_06 + <br />
1*UPDRS_Part_II_Patient_Questionnaire_Summary_Score_Month_09 + <br />
1*UPDRS_Part_II_Patient_Questionnaire_Summary_Score_Month_12 + <br />
1*UPDRS_Part_II_Patient_Questionnaire_Summary_Score_Month_18 +<br />
1*UPDRS_Part_II_Patient_Questionnaire_Summary_Score_Month_24 + <br />
1*UPDRS_Part_III_Summary_Score_Baseline + 1*UPDRS_Part_III_Summary_Score_Month_03 + <br />
1*UPDRS_Part_III_Summary_Score_Month_06 + 1*UPDRS_Part_III_Summary_Score_Month_09 + <br />
1*UPDRS_Part_III_Summary_Score_Month_12 + 1*UPDRS_Part_III_Summary_Score_Month_18 + <br />
1*UPDRS_Part_III_Summary_Score_Month_24 + <br />
1*X_Assessment_Non.Motor_Epworth_Sleepiness_Scale_Summary_Score_Baseline + <br />
1*X_Assessment_Non.Motor_Epworth_Sleepiness_Scale_Summary_Score_Month_06 + <br />
1*X_Assessment_Non.Motor_Epworth_Sleepiness_Scale_Summary_Score_Month_12 + <br />
1*X_Assessment_Non.Motor_Epworth_Sleepiness_Scale_Summary_Score_Month_24 + <br />
1*X_Assessment_Non.Motor_Geriatric_Depression_Scale_GDS_Short_Summary_Score_Baseline + <br />
1*X_Assessment_Non.Motor_Geriatric_Depression_Scale_GDS_Short_Summary_Score_Month_06 +<br />
1*X_Assessment_Non.Motor_Geriatric_Depression_Scale_GDS_Short_Summary_Score_Month_12 + <br />
1*X_Assessment_Non.Motor_Geriatric_Depression_Scale_GDS_Short_Summary_Score_Month_24 <br />
s =~ 0*UPDRS_Part_I_Summary_Score_Baseline + 1*UPDRS_Part_I_Summary_Score_Month_03 + <br />
2*UPDRS_Part_I_Summary_Score_Month_06 + 3*UPDRS_Part_I_Summary_Score_Month_09 + <br />
4*UPDRS_Part_I_Summary_Score_Month_12 + 5*UPDRS_Part_I_Summary_Score_Month_18 + <br />
6*UPDRS_Part_I_Summary_Score_Month_24 +<br />
0*UPDRS_Part_II_Patient_Questionnaire_Summary_Score_Baseline + <br />
1*UPDRS_Part_II_Patient_Questionnaire_Summary_Score_Month_03 + <br />
2*UPDRS_Part_II_Patient_Questionnaire_Summary_Score_Month_06 + <br />
3*UPDRS_Part_II_Patient_Questionnaire_Summary_Score_Month_09 + <br />
4*UPDRS_Part_II_Patient_Questionnaire_Summary_Score_Month_12 + <br />
5*UPDRS_Part_II_Patient_Questionnaire_Summary_Score_Month_18 + <br />
6*UPDRS_Part_II_Patient_Questionnaire_Summary_Score_Month_24 + <br />
0*UPDRS_Part_III_Summary_Score_Baseline + 1*UPDRS_Part_III_Summary_Score_Month_03 + <br />
2*UPDRS_Part_III_Summary_Score_Month_06 + 3*UPDRS_Part_III_Summary_Score_Month_09 + <br />
4*UPDRS_Part_III_Summary_Score_Month_12 + 5*UPDRS_Part_III_Summary_Score_Month_18 + <br />
6*UPDRS_Part_III_Summary_Score_Month_24 + <br />
0*X_Assessment_Non.Motor_Epworth_Sleepiness_Scale_Summary_Score_Baseline + <br />
2*X_Assessment_Non.Motor_Epworth_Sleepiness_Scale_Summary_Score_Month_06 + <br />
4*X_Assessment_Non.Motor_Epworth_Sleepiness_Scale_Summary_Score_Month_12 +<br />
6*X_Assessment_Non.Motor_Epworth_Sleepiness_Scale_Summary_Score_Month_24 +<br />
0*X_Assessment_Non.Motor_Geriatric_Depression_Scale_GDS_Short_Summary_Score_Baseline + <br />
2*X_Assessment_Non.Motor_Geriatric_Depression_Scale_GDS_Short_Summary_Score_Month_06 + <br />
4*X_Assessment_Non.Motor_Geriatric_Depression_Scale_GDS_Short_Summary_Score_Month_12 + <br />
6*X_Assessment_Non.Motor_Geriatric_Depression_Scale_GDS_Short_Summary_Score_Month_24<br />
'<br />
<br />
fit4 <- growth(model4, data=myData)<br />
summary(fit4)<br />
parameterEstimates(fit4) # extracts the values of the estimated parameters, the standard errors, <br />
# the z-values, the standardized parameter values, and returns a data frame <br />
fitted(fit4) # returns the model-implied (fitted) covariance matrix (and mean vector) of a fitted model<br />
<br />
# The resid() function returns the (unstandardized) residuals of a fitted model, i.e., the difference between <br />
# the observed and implied covariance matrix and mean vector<br />
resid(fit4)<br />
<br />
==Measures of model quality (Comparative Fit Index (CFI), Root Mean Square Error of Approximation (RMSEA))==<br />
<br />
# report the fit measures as a signature vector: Comparative Fit Index (CFI), Root Mean Square Error of <br />
# Approximation (RMSEA)<br />
fitMeasures(fit4, c("cfi", "rmsea", "srmr"))<br />
<br />
====Comparative Fit Index====<br />
<br />
(CFI) is an incremental measure directly based on the non-centrality measure, $d = \chi^2 - df$, where $df$ are the degrees of freedom of the model. The Comparative Fit Index is:<br />
$<br />
CFI=\frac{d(\text{Null Model})-d(\text{Proposed Model})}{d(\text{Null Model})}.<br />
$<br />
<br />
$0≤CFI≤1$ (by definition). It is interpreted as:<br />
<br />
*$CFI<0.9$ - model fitting is poor.<br />
<br />
*$0.9≤CFI≤0.95$ is considered marginal, <br />
<br />
*$CFI>0.95$ is good. <br />
<br />
CFI is a relative index of model fit – it compares the fit of your model to the fit of the (worst-fitting) null model.<br />
<br />
====Root Mean Square Error of Approximation====<br />
(RMSEA, pronounced “Ramsey”)<br />
<br />
An absolute measure of fit based on the non-centrality parameter: <br />
<br />
$RMSEA=\sqrt{\frac{\chi^2-df}{df\times (N - 1)}}$,<br />
<br />
where $N$ is the sample size and $df$ the degrees of freedom of the model. If $\chi^2 < df$, then RMSEA is set to 0. It penalizes model complexity via the chi-square to degrees-of-freedom ratio. The RMSEA is a popular measure of model fit. <br />
<br />
*RMSEA < 0.01: excellent fit<br />
<br />
*RMSEA < 0.05: good fit<br />
<br />
*RMSEA > 0.10: cutoff for poorly fitting models<br />
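<br />
To make these formulas concrete, here is a small worked example in base R; the fit statistics below are hypothetical values chosen for illustration, not output from the models in this section.<br />

```r
# Hypothetical fit statistics (illustration only, not from the models above)
chisq_null <- 250; df_null <- 10    # baseline (null) model
chisq_prop <- 18;  df_prop <- 12    # proposed model
N <- 200                            # sample size

# Non-centrality d = chi^2 - df (truncated at 0)
d_null <- max(chisq_null - df_null, 0)
d_prop <- max(chisq_prop - df_prop, 0)

cfi   <- (d_null - d_prop) / d_null
rmsea <- if (chisq_prop < df_prop) 0 else
           sqrt((chisq_prop - df_prop) / (df_prop * (N - 1)))
round(c(CFI = cfi, RMSEA = rmsea), 3)   # CFI = 0.975, RMSEA = 0.050
```

With these toy numbers the proposed model would be judged good by the CFI criterion (>0.95) and borderline good by the RMSEA criterion (≈0.05).<br />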
<br />
====Standardized Root Mean Square Residual==== <br />
(SRMR) is an absolute measure of fit defined as the standardized difference between the observed correlation and the predicted correlation. A value of zero indicates perfect fit. The SRMR has no penalty for model complexity. SRMR <0.08 is considered a good fit.<br />
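<br />
To illustrate the idea, here is a simplified sketch only – lavaan's exact SRMR computation differs in details such as the handling of means and variances – averaging the squared residual correlations over the unique matrix elements (the toy correlation values are made up):<br />

```r
# Simplified SRMR sketch: root of the average squared discrepancy between
# observed and model-implied correlations (toy 2x2 example)
S     <- matrix(c(1, 0.50, 0.50, 1), nrow = 2)   # observed correlations
Sigma <- matrix(c(1, 0.46, 0.46, 1), nrow = 2)   # model-implied correlations
resid_sq <- (S - Sigma)[lower.tri(S, diag = TRUE)]^2  # unique elements only
srmr <- sqrt(mean(resid_sq))
srmr   # about 0.023 here, well under the 0.08 cutoff
```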
<br />
# inspect the model results (report parameter table)<br />
inspect(fit4)<br />
<br />
#install.packages("semTools")<br />
# library("semTools")<br />
<br />
<b><u>A Simpler Model (fit5)</u></b><br />
<br />
model5 <- '<br />
# intercept and slope with fixed coefficients<br />
i =~ UPDRS_Part_I_Summary_Score_Baseline + UPDRS_Part_I_Summary_Score_Month_03 + UPDRS_Part_I_Summary_Score_Month_24<br />
s =~ 0*UPDRS_Part_I_Summary_Score_Baseline + 1*UPDRS_Part_I_Summary_Score_Month_03 + 6*UPDRS_Part_I_Summary_Score_Month_24<br />
# regressions<br />
i ~ R_fusiform_gyrus_Volume + Weight + ResearchGroup + Age + chr12_rs34637584_GT <br />
s ~ R_fusiform_gyrus_Volume + Weight + ResearchGroup + Age + chr12_rs34637584_GT<br />
# time-varying covariates<br />
UPDRS_Part_I_Summary_Score_Baseline ~ Weight<br />
UPDRS_Part_I_Summary_Score_Month_03 ~ ResearchGroup <br />
UPDRS_Part_I_Summary_Score_Month_24 ~ Age<br />
'<br />
<br />
fit5 <- growth(model5, data=myData)<br />
summary(fit5); fitMeasures(fit5, c("cfi", "rmsea", "srmr"))<br />
parameterEstimates(fit5) # extracts the values of the estimated parameters, the standard errors, <br />
# the z-values, the standardized parameter values, and returns a data frame<br />
<br />
lavaan (0.5-18) converged normally after 99 iterations<br />
Number of observations 661<br />
Estimator ML<br />
Minimum Function Test Statistic 3.703<br />
Degrees of freedom 1<br />
P-value (Chi-square) 0.054<br />
Parameter estimates:<br />
Information Expected<br />
Standard Errors Standard<br />
Estimate Std.err Z-value P(>|z|)<br />
Latent variables:<br />
i =~<br />
UPDRS_P_I_S_S 1.000<br />
UPDRS_P_I_S_S 1.074<br />
UPDRS_P_I_S_S 1.172<br />
s =~<br />
UPDRS_P_I_S_S 0.000<br />
UPDRS_P_I_S_S 1.000<br />
UPDRS_P_I_S_S 6.000<br />
<br />
Regressions:<br />
i ~<br />
R_fsfrm_gyr_V 0.000<br />
Weight 0.003<br />
ResearchGroup -0.880<br />
Age -0.009<br />
c12_34637584_ -0.907<br />
s ~<br />
R_fsfrm_gyr_V -0.000<br />
Weight -0.000<br />
ResearchGroup -0.084<br />
Age 0.002<br />
c12_34637584_ -0.047<br />
UPDRS_Part_I_Summary_Score_Baseline ~<br />
Weight -0.000<br />
UPDRS_Part_I_Summary_Score_Month_03 ~<br />
ResearchGroup 0.693<br />
UPDRS_Part_I_Summary_Score_Month_24 ~<br />
Age -0.002<br />
<br />
Covariances:<br />
i ~~<br />
s 0.074<br />
<br />
Intercepts:<br />
UPDRS_P_I_S_S 0.000<br />
UPDRS_P_I_S_S 0.000<br />
UPDRS_P_I_S_S 0.000<br />
i 1.633<br />
s -0.023<br />
<br />
Variances:<br />
UPDRS_P_I_S_S 1.017<br />
UPDRS_P_I_S_S 1.093<br />
UPDRS_P_I_S_S 2.993<br />
i 1.019<br />
s -0.025<br />
<br />
<b>cfi rmsea srmr</b><br />
<b>0.996 0.064 0.008</b><br />
<br />
fitted(fit5) # returns the model-implied (fitted) covariance matrix (and mean vector) of a fitted model<br />
# write.table(fitted(fit5), file="C:\\Users\\Dinov\\Desktop\\test1.txt")<br />
<br />
# The resid() function returns the (unstandardized) residuals of a fitted model, i.e., the difference between <br />
# the observed and implied covariance matrix and mean vector<br />
resid(fit5)<br />
<br />
# report the fit measures as a signature vector<br />
fitMeasures(fit5, c("cfi", "rmsea", "srmr")) # comparative fit index (CFI)<br />
<br />
# inspect the model results (report parameter table)<br />
inspect(fit5)<br />
<br />
<b>Note:</b> See discussion of SEM modeling pros/cons <sup>2</sup>.<br />
<br />
==Generalized Estimating Equation (GEE) Modeling==<br />
<br />
Generalized Estimating Equations (GEE) modeling<sup>3</sup> is used for analyzing data with the following characteristics:<br />
(1) the observations within a group may be correlated, (2) observations in separate clusters are independent, (3) a monotone transformation of the expectation is linearly related to the explanatory variables, and (4) the variance is a function of the expectation. The expectation (#3) and the variance (#4) are conditional given group-level or individual-level covariates.<br />
<br />
GEE is applied to handle correlated discrete and continuous outcome variables. For the outcome variables, it only requires specification of the first 2 moments and the correlation among them. The goal is to estimate the fixed parameters without specifying their joint distribution. The correlation structure is specified as one of these 4 alternatives, via the <b>corstr</b> argument in the R call (e.g., geeglm(outcome ~ center + treat + sex + baseline + age, data = respiratory, family = "binomial", id = id, <b>corstr = "exchangeable"</b>, scale.fix = TRUE)):<br />
<br />
<center>[[Image:SMHS_BigDataBigSci8.png|300px]]</center><br />
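<br />
The working-correlation structures above can be illustrated with small toy matrices in base R (alpha denotes the working correlation parameter; the value below is arbitrary):<br />

```r
# Toy working-correlation structures for 4 repeated measures per subject
alpha <- 0.5
independence <- diag(4)                                        # corstr = "independence"
exchangeable <- matrix(alpha, 4, 4); diag(exchangeable) <- 1   # corstr = "exchangeable"
ar1 <- alpha^abs(outer(1:4, 1:4, "-"))                         # corstr = "ar1": decays with lag
# corstr = "unstructured": all 6 off-diagonal correlations estimated freely
ar1[1, ]   # 1.000 0.500 0.250 0.125
```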
<br />
===Respiratory Illness GEE R example===<br />
<br />
This example is based on a data set on respiratory illness <sup>4</sup> and the <b>geepack</b> package. The data come from a clinical study of treatment effects on patients with respiratory illness. N=111 patients from 2 clinical centers were randomized to receive either placebo or active treatment. 4 temporal examinations assessed the <b>respiratory state</b> of patients as good (=1) or poor (=0). Explanatory variables characterizing a patient were: <b>center</b> (1,2), <b>treatment</b> (A=active, P=placebo), <b>sex</b> (M=male, F=female), and <b>age</b> (in years) at baseline. The values of the covariates were constant for the repeated elementary observations on each patient.<br />
<br />
<b>Table 1</b> shows the number of patients for the response patterns across the 4 visits, split by baseline status and treatment. Patients with baseline respiratory status = 0 appear to have either a low or a high number of positive responses, while patients with baseline respiratory status = 1 tend to respond positively. <b>Table 2</b> describes the distribution of the number of positive responses per patient by sex and center.<br />
<br />
# library("geepack")<br />
<br />
<b>Table 1</b>: Distribution of patients for <b>different response patterns</b> classified by <b>baseline respiratory</b> response and <b>treatment</b>. The patterns are ordered according to increasing numbers of positive responses.<br />
<br />
<center><br />
{| class="wikitable" style="text-align:center; width:75%" border="1"<br />
|-<br />
! ||Visit|| colspan="15"| All Possible Response Patterns (2*2*2*2=16 permutation patterns)||<br />
|-<br />
|||1||0||1||0||0||0||1||1||1||0||0||1||1||1||0||1||<br />
|-<br />
|||2||0||0||1||0||0||1||0||0||1||0||1||1||0||1||1||<br />
|-<br />
|||3||0||0||0||1||0||0||1||0||1||1||1||0||1||1||1||<br />
|-<br />
|||4||0||0||0||0||1||0||0||1||0||1||0||1||1||1||1||<br />
|-<br />
!Baseline||Treatment||||||||||||||||||||||||||||||||Sum<br />
|-<br />
| rowspan="2"|0||A||7||2||2||2||1||0||1||0||1||0||1||2||0||4||7||30<br />
|-<br />
|P||18||1||0||2||1||2||0||0||1||0||0||1||2||0||3||31<br />
|-<br />
|rowspan="2"|1||A||0||0||0||0||0||0||1||1||0||0||4||0||1||0||17||24<br />
|-<br />
|P||1||4||1||0||0||0||0||1||1||3||1||1||2||1||10||26<br />
|-<br />
|Sum||||26||7||3||4||2||2||2||2||3||3||6||4||5||5||37||111<br />
|}<br />
</center><br />
<br />
<br />
<b>Table 2</b>: Distribution of patients for the number of positive responses across the 4 visits for <b>Sex</b> and <b>Center</b>. <br />
<br />
<center><br />
{| class="wikitable" style="text-align:center; width:75%" border="1"<br />
|-<br />
! colspan="2" rowspan="2"| ||colspan="5"|Number of positive responses<br />
|-<br />
| 0||1||2||3||4<br />
|-<br />
|rowspan="2"|Sex || F||7||3||3||3||7<br />
|-<br />
|M||19||13||9||17||30<br />
|-<br />
|rowspan="2"|Center|| 1||18||9||6||11||12<br />
|-<br />
|2||8||7||6||9||25<br />
|}<br />
</center><br />
<br />
<b>Figure 1</b> shows a plot of age against the proportion of positive responses for each patient. It indicates a quadratic relationship between the proportions and age. Below we fit a logistic model to the data (which would be appropriate if there were <i>no time effects</i> and <i>no spread in the response probabilities</i> for patients with the same covariate values).<br />
<br />
# install.packages("geepack")<br />
library("geepack")<br />
<br />
# Data from a clinical trial in which 111 patients with respiratory illness from two different clinics were randomized to receive either <br />
# placebo (P) or an active (A) treatment. Patients were examined at baseline and at four visits during treatment. <br />
# At each examination, respiratory status was categorized as 1 = good, 0 = poor.<br />
data("respiratory")<br />
head(respiratory)<br />
myData <- respiratory<br />
<br />
<center>head(myData)<br />
{| class="wikitable" style="text-align:center; " border="1"<br />
|-<br />
|||Center||ID||Treat||Sex||Age||Baseline||Visit||Outcome<br />
|-<br />
|1 ||1||1||P||M||46||0||1||0<br />
|-<br />
|2 ||1||1||P||M||46||0||2||0<br />
|-<br />
|3 ||1||1||P||M||46||0||3||0<br />
|-<br />
|4 ||1||1||P||M||46||0||4||0<br />
|-<br />
|5||1||2||P||M||28||0||1||0<br />
|-<br />
|6||1||2||P||M||28||0||2||0<br />
|}<br />
</center><br />
<br />
# Get the proportion of positive responses at each age<br />
responses <- factor(myData$\$$outcome, labels = c("OutcomeNegative", "OutcomePositive")) # 0 = poor, 1 = good<br />
data.frame <- data.frame(responses, myData$\$$age)<br />
head(data.frame)<br />
tab <- prop.table(table(data.frame), 2); tab # compute column-wise proportions, P(response | age)<br />
sum(tab[, 1]) # check proportions (each age column sums to 1.0)<br />
prop <- tab["OutcomePositive", ] # save the proportion of positive responses at each age<br />
plot(as.numeric(dimnames(tab)$\$$myData.age), prop, xlab = "Age", ylab = "Proportion of Positive Outcomes")<br />
# dimnames(tab) # to see/inspect positive/negative outcomes<br />
<br />
[[Image:SMHS_BigDataBigSci9.png|500px]]<br />
<br />
x <- as.numeric(dimnames(tab)$\$$myData.age)<br />
poly <- loess( prop ~ x) # fit a Local Polynomial Regression Fitting<br />
plot(x, prop)<br />
lines(x, predict(poly), col='red', lwd=2)<br />
<br />
smoothingSpline <- smooth.spline(x, prop, spar=0.6)<br />
plot(x, prop)<br />
lines(smoothingSpline, col='red', lwd=1.5)<br />
smoothPolySpline <- smooth.spline(x, predict(poly), spar=0.6)<br />
lines(smoothPolySpline, col='blue', lwd=2)<br />
legend("topright", inset=.05, title="Polynomial regression models", c("Raw Poly","Smooth Poly"), fill=c('red', 'blue'), horiz=TRUE)<br />
<br />
[[Image:SMHS_BigDataBigSci10.png|500px]]<br />
<br />
model.glm <- <b>glm</b>(outcome ~ baseline + center + sex + treat + age + I(age^2), data = respiratory, family = binomial)<br />
<br />
summary(model.glm)<br />
<br />
<center>Deviance Residuals: <br />
{| class="wikitable" style="text-align:center; " border="1"<br />
|-<br />
|Min||1Q||Median||3Q||Max<br />
|-<br />
| -2.5951||-0.9108||0.4034||0.8336||2.0951<br />
|}<br />
</center><br />
<br />
<center>Coefficients:<br />
{| class="wikitable" style="text-align:center; " border="1"<br />
|-<br />
|||Estimate||Std. Error||z value||$Pr( \gt |z|)$ <br />
|-<br />
|(Intercept)||3.3579727||1.0285292||3.265||0.0011 **<br />
|-<br />
|baseline||1.8850421||0.2482959||7.592||3.15e-14 ***<br />
|-<br />
|center||0.5099244||0.2453982||2.078||0.0377 *<br />
|-<br />
|sexM||-0.4510595||0.3166570||-1.424||0.1543<br />
|-<br />
|Treatp||-1.3231587||0.2431603||-5.442||5.28e-08 ***<br />
|-<br />
|age||-0.2072815||0.0472538||-4.387||1.15e-05 ***<br />
|-<br />
|I(age^2)||0.0025650||0.0006324||4.056||4.99e-05 ***<br />
|}<br />
</center><br />
<br />
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1<br />
<br />
(Dispersion parameter for binomial family taken to be 1)<br />
<br />
Null deviance: 609.41 on 443 degrees of freedom<br />
<br />
Residual deviance: 468.62 on 437 degrees of freedom<br />
<br />
AIC: 482.62<br />
<br />
The correlation matrix of the outcome measures across visits is shown in <b>Table 3.</b><br />
<br />
attach(myData)<br />
mat1 <- matrix(c(outcome[visit==1], outcome[visit==2], outcome[visit==3], <br />
outcome[visit==4]), ncol = 4)<br />
cor(mat1)<br />
<br />
<b>Table 3</b>: Correlation matrix for the outcome measurements at different visits.<br />
<br />
<center>Correlations:<br />
{| class="wikitable" style="text-align:center; " border="1"<br />
|-<br />
|||[,1]||[,2]||[,3]||[,4]<br />
|-<br />
|[,1]||1.0000000||0.5087944||0.4431438||0.5139016<br />
|-<br />
|[,2]||0.5087944||1.0000000||0.5821877||0.5301611<br />
|-<br />
|[,3]||0.4431438||0.5821877||1.0000000||0.5871276<br />
|-<br />
|[,4]||0.5139016||0.5301611||0.5871276||1.0000000<br />
|}<br />
</center><br />
<br />
# We can also examine for multicollinearity problem, using the correlation matrix for X<br />
cor(model.matrix(model.glm)[,-1])<br />
<br />
# GEE modeling: R function arguments/options<br />
<br />
*<b>corstr</b>= for defining the correlation structure within groups in a GEE model<br />
<br />
*<b>id</b>= is used to identify the grouping variable in a GEE model<br />
<br />
*<b>scale.fix</b>= when TRUE causes the scale parameter to be fixed (by default at 1) rather than estimated<br />
<br />
*<b>waves</b>= names a positive integer-valued variable that is used to identify the order and spacing of observations within groups in a GEE model. This argument is crucial when there are missing values and gaps in the data<br />
<br />
gee.model1 <- <b>geeglm</b>(outcome ~ center + treat + sex + baseline + age, data = respiratory, family = "binomial", id = id, corstr = "exchangeable", scale.fix = TRUE)<br />
<br />
# The column labeled <b>Wald</b> in the summary table is the square of the z-statistic. The reported p-values are the <br />
# upper-tail probabilities from a chi-square distribution with 1 df, and test whether the true parameter value ≠ 0.<br />
summary(gee.model1)<br />
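<br />
As a quick sanity check of this relationship (base R only; the z-value here is arbitrary, not taken from the model output):<br />

```r
# The Wald statistic is the squared z-statistic; its upper-tail chi-square(1)
# probability equals the usual two-sided normal p-value
z <- 2.1                                   # arbitrary z-value for illustration
wald <- z^2
p_chisq  <- pchisq(wald, df = 1, lower.tail = FALSE)
p_normal <- 2 * pnorm(-abs(z))
all.equal(p_chisq, p_normal)               # TRUE
```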
<br />
# To test the effect of ''treatment'' using anova()<br />
gee.model1 <- <b>geeglm</b>(outcome ~ center + <b><u>treat</u></b> + sex + baseline + age, data = respiratory, family=binomial(link="logit"), id = id, corstr = "exchangeable", std.err="san.se")<br />
gee.model2 <- geeglm(outcome ~ center + sex + baseline + age, data = respiratory, family=binomial(link="logit"), id=id, corstr = "exchangeable", std.err="san.se")<br />
anova(gee.model1, gee.model2)<br />
<br />
# To test whether a categorical predictor with more than two levels should be retained in a GEE model we need <br />
# to test the entire set of dummy variables simultaneously as a single construct. <br />
# The geepack package provides a method for the anova function for a multivariate Wald test<br />
# When the anova function is applied to a single geeglm object it returns sequential Wald tests for <br />
# individual predictors with the tests carried out in the order the predictors are listed in the model formula.<br />
anova(gee.model1)<br />
<br />
===PD GEE example===<br />
<br />
This example uses the PPMI/PD data to demonstrate a GEE analysis.<br />
<br />
<b># 05_PPMI_top_UPDRS_Integrated_LongFormat1.csv</b><br />
longData <- read.csv("https://umich.instructure.com/files/330397/download?download_frd=1",header=TRUE)<br />
<br />
# library("geepack")<br />
<br />
# Data Elements: FID_IID L_insular_cortex_ComputeArea L_insular_cortex_Volume R_insular_cortex_ComputeArea R_insular_cortex_Volume L_cingulate_gyrus_ComputeArea L_cingulate_gyrus_Volume R_cingulate_gyrus_ComputeArea R_cingulate_gyrus_Volume L_caudate_ComputeArea L_caudate_Volume R_caudate_ComputeArea R_caudate_Volume L_putamen_ComputeArea L_putamen_Volume R_putamen_ComputeArea R_putamen_Volume Sex Weight ResearchGroup Age chr12_rs34637584_GT chr17_rs11868035_GT chr17_rs11012_GT chr17_rs393152_GT chr17_rs12185268_GT chr17_rs199533_GT UPDRS_part_I UPDRS_part_II UPDRS_part_III time_visit<br />
<br />
dim(longData) <br />
<br />
data1 = na.omit(longData)<br />
attach(data1)<br />
ControlGroup <- ifelse(ResearchGroup == "Control", 1, 0)<br />
<br />
# these calculations take a long time!!!<br />
# if you get <i>“Error in geese.fit(xx, yy, id, offset, soffset, w, waves = waves, zsca, : <br />
# nrow(zsca) and length(y)</i> not match” – this indicates some of the variables are of different lengths<br />
# if you get <i>“glm.fit: algorithm did not converge”</i> – see this discussion: http://goo.gl/lrjBjB <br />
<br />
gee.model0 <- geeglm(ControlGroup ~ L_insular_cortex_ComputeArea+L_insular_cortex_Volume+ Sex + Weight + Age + chr17_rs11012_GT + chr17_rs199533_GT + UPDRS_part_I + UPDRS_part_II + time_visit, data = data1, family=binomial(link="logit"), id = FID_IID, corstr = "unstructured", std.err="san.se")<br />
<br />
gee.model1 <- geeglm(ControlGroup ~ L_insular_cortex_ComputeArea+L_insular_cortex_Volume+ R_putamen_ComputeArea + R_putamen_Volume + Sex + Weight + Age + chr17_rs11012_GT + chr17_rs199533_GT + UPDRS_part_I + UPDRS_part_II + time_visit, data = data1, family=binomial(link="logit"), id = FID_IID, corstr = "unstructured", std.err="san.se")<br />
<br />
# compare 2 gee models<br />
# anova(gee.model0,gee.model1)<br />
<br />
# you can try the “family = poisson(link = "log")” model for the ResearchGroup response, as well<br />
<br />
gee.model2 <- <b>geeglm</b>(ControlGroup <br />
~ L_insular_cortex_ComputeArea+L_insular_cortex_Volume+R_insular_cortex_ComputeArea+ R_insular_cortex_Volume +L_cingulate_gyrus_ComputeArea + L_cingulate_gyrus_Volume + R_cingulate_gyrus_ComputeArea + R_cingulate_gyrus_Volume + L_caudate_ComputeArea + L_caudate_Volume + R_caudate_ComputeArea + R_caudate_Volume + L_putamen_ComputeArea + L_putamen_Volume + R_putamen_ComputeArea + R_putamen_Volume + Sex + Weight + Age + chr12_rs34637584_GT + chr17_rs11868035_GT + chr17_rs11012_GT + chr17_rs393152_GT + chr17_rs12185268_GT + chr17_rs199533_GT + UPDRS_part_I + UPDRS_part_II + time_visit, data = data1, family=binomial(link="logit"), id = FID_IID, corstr = "unstructured", std.err="san.se")<br />
<br />
Remember that we do not interpret GEE coefficients as relating to individuals – GEE models are marginal models, and the conclusions drawn are population-based. Also, the time element in the model (time_visit) is just another controlling factor. <b>The effect-sizes (betas) associated with each variable/predictor represent the slopes associated with the corresponding covariate, while holding time constant</b>. If we need to examine interactions (e.g., Weight change over Time), we need to include an interaction term in the model (i.e., + Weight*time_visit).<br />
<br />
summary (gee.model2)<br />
<br />
# Individual Wald test and <b>confidence intervals</b> for each covariate<br />
predictors2 <- coef(summary(gee.model2))<br />
CI2 <- with(as.data.frame(predictors2), cbind(lwr=Estimate-1.96*Std.err, est=Estimate, upr=Estimate+1.96*Std.err))<br />
rownames(CI2) <- rownames(predictors2)<br />
CI2<br />
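<br />
As a quick arithmetic check of the 95% Wald interval construction above (the estimate and standard error below are made-up values, not model output):<br />

```r
# Wald 95% CI: estimate +/- 1.96 * standard error (toy values)
est <- 0.8; se <- 0.25
ci <- c(lwr = est - 1.96 * se, est = est, upr = est + 1.96 * se)
round(ci, 3)   # lwr = 0.310, est = 0.800, upr = 1.290
```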
<br />
==Appendix==<br />
<br />
SEM References<br />
<br />
*http://socserv.mcmaster.ca/jfox/Misc/sem/SEM-paper.pdf <br />
<br />
GEE References<br />
<br />
*https://cran.r-project.org/web/packages/geepack/geepack.pdf<br />
<br />
*http://www.jstatsoft.org/v15/i02/paper<br />
<br />
===Footnotes===<br />
<br />
*<sup>2</sup> http://www.imachordata.com/ecological-sems-and-composite-variables-what-why-and-how/<br />
*<sup>3</sup> http://www.jstatsoft.org/v15/i02/ <br />
*<sup>4</sup> https://books.google.com/books?id=mdEqBgAAQBAJ<br />
<br />
==See also==<br />
* [[SMHS_BigDataBigSci| Back to Model-based Analytics]] <br />
* [[SMHS_BigDataBigSci_SEM| Structural Equation Modeling (SEM)]] <br />
* [[SMHS_BigDataBigSci_GEE| Next Section: Generalized Estimating Equation (GEE) Modeling]]<br />
<br />
<hr><br />
* SOCR Home page: http://www.socr.umich.edu<br />
<br />
{{translate|pageName=http://wiki.socr.umich.edu/index.php?title=SMHS_BigDataBigSci_GCM}}</div>
<hr />
<div>==[[SMHS_BigDataBigSci| Model-based Analytics]] - Generalized Estimating Equation (GEE) Modeling ==<br />
<br />
==Questions==<br />
<br />
*How to represent dependencies in linear models and examine causal effects?<br />
*Is there a way to study population average effects of a covariate against specific individual effects?<br />
<br />
==Overview==<br />
<br />
GEE is a marginal longitudinal method that directly assesses the mean relations of interest (i.e., how the mean dependent variable changes over time), accounting for covariances among the observations within subjects, and getting a better estimate and valid significance tests of the relations. Thus, GEE estimates two different equations, (1) for the mean relations, and (2) for the covariance structure. An advantage of GEE over random-effect models is that it does not require the dependent variable to be normally distributed. However, a disadvantage of GEE is that it is less flexible and versatile – commonly employed algorithms for it require a small-to-moderate number of time points evenly (or approximately evenly) spaced, and similarly spaced across subjects. Nevertheless, it is a little more flexible than repeated-measure ANOVA because it permits some missing values and has an easy way to test for and model away the specific form of autocorrelation within subjects.<br />
<br />
GEE is mostly used when the study is focused on uncovering the population average effect of a covariate vs. the individual specific effect. These two things are only equivalent for linear models, but not in non-linear models.<br />
<br />
==See also==<br />
*[[SMHS_BigDataBigSci| Back to Model-based Analytics]] <br />
*[[SMHS_BigDataBigSci_GCM| Back to Growth Curve Modeling ]]<br />
*[[SMHS_BigDataBigSci_SEM| Back to Structural Equation Modeling (SEM)]]<br />
<br />
<hr><br />
* SOCR Home page: http://www.socr.umich.edu<br />
<br />
{{translate|pageName=http://wiki.socr.umich.edu/index.php?title=SMHS_BigDataBigSci_GEE}}</div>Pineaumihttps://wiki.socr.umich.edu/index.php?title=SMHS_BigDataBigSci_GCM&diff=16237SMHS BigDataBigSci GCM2016-05-23T19:43:57Z<p>Pineaumi: /* Footnotes */</p>
<hr />
<div>==[[SMHS_BigDataBigSci| Model-based Analytics]] - Growth Curve Models==<br />
<br />
Latent growth curve models may be used to analyze longitudinal or temporal data where the outcome measure is assessed on multiple occasions, and we examine its change over time, e.g., the trajectory over time can be<br />
modeled as a linear or quadratic function. Random effects are used to capture individual differences by conveniently representing (continuous) latent variables, aka growth factors. To fit a linear growth model we may specify a model with two latent variables: a random intercept, and a random slope:<br />
<br />
#load data <b>05_PPMI_top_UPDRS_Integrated_LongFormat.csv ( dim(myData) 661 71), wide</b> <br />
# setwd("/dir/")<br />
myData <- read.csv("https://umich.instructure.com/files/330395/download?download_frd=1&verifier=v6jBvV4x94ka3EYcGKuXXg5BZNaOLBVp0xkJih0H",header=TRUE)<br />
attach(myData)<br />
<br />
# dichotomize the "ResearchGroup" variable<br />
table(myData$\$$ResearchGroup)<br />
myData$\$$ResearchGroup <- ifelse(myData$\$$ResearchGroup == "Control", 1, 0)<br />
<br />
# linear growth model with 4 timepoints<br />
# intercept (i) and slope (s) with fixed coefficients<br />
# i =~ 1*t1 + 1*t2 + 1*t3 + 1*t4 (intercept/constant)<br />
# s =~ 0*t1 + 1*t2 + 2*t3 + 3*t4 (slope/linear term)<br />
# q =~ 0*t1 + 1*t2 + 4*t3 + 9*t4 (quadratic term; loadings are the squared slope loadings)<br />
<br />
In this model, we have fixed all the coefficients of the linear growth functions:<br />
<br />
model4 <-<br />
' <br />
i =~ 1*UPDRS_Part_I_Summary_Score_Baseline + 1*UPDRS_Part_I_Summary_Score_Month_03 + <br />
1*UPDRS_Part_I_Summary_Score_Month_06 + 1*UPDRS_Part_I_Summary_Score_Month_09 + <br />
1*UPDRS_Part_I_Summary_Score_Month_12 + 1*UPDRS_Part_I_Summary_Score_Month_18 + <br />
1*UPDRS_Part_I_Summary_Score_Month_24 + <br />
1*UPDRS_Part_II_Patient_Questionnaire_Summary_Score_Baseline + <br />
1*UPDRS_Part_II_Patient_Questionnaire_Summary_Score_Month_03 + <br />
1*UPDRS_Part_II_Patient_Questionnaire_Summary_Score_Month_06 + <br />
1*UPDRS_Part_II_Patient_Questionnaire_Summary_Score_Month_09 + <br />
1*UPDRS_Part_II_Patient_Questionnaire_Summary_Score_Month_12 + <br />
1*UPDRS_Part_II_Patient_Questionnaire_Summary_Score_Month_18 +<br />
1*UPDRS_Part_II_Patient_Questionnaire_Summary_Score_Month_24 + <br />
1*UPDRS_Part_III_Summary_Score_Baseline + 1*UPDRS_Part_III_Summary_Score_Month_03 + <br />
1*UPDRS_Part_III_Summary_Score_Month_06 + 1*UPDRS_Part_III_Summary_Score_Month_09 + <br />
1*UPDRS_Part_III_Summary_Score_Month_12 + 1*UPDRS_Part_III_Summary_Score_Month_18 + <br />
1*UPDRS_Part_III_Summary_Score_Month_24 + <br />
1*X_Assessment_Non.Motor_Epworth_Sleepiness_Scale_Summary_Score_Baseline + <br />
1*X_Assessment_Non.Motor_Epworth_Sleepiness_Scale_Summary_Score_Month_06 + <br />
1*X_Assessment_Non.Motor_Epworth_Sleepiness_Scale_Summary_Score_Month_12 + <br />
1*X_Assessment_Non.Motor_Epworth_Sleepiness_Scale_Summary_Score_Month_24 + <br />
1*X_Assessment_Non.Motor_Geriatric_Depression_Scale_GDS_Short_Summary_Score_Baseline + <br />
1*X_Assessment_Non.Motor_Geriatric_Depression_Scale_GDS_Short_Summary_Score_Month_06 +<br />
1*X_Assessment_Non.Motor_Geriatric_Depression_Scale_GDS_Short_Summary_Score_Month_12 + <br />
1*X_Assessment_Non.Motor_Geriatric_Depression_Scale_GDS_Short_Summary_Score_Month_24 <br />
s =~ 0*UPDRS_Part_I_Summary_Score_Baseline + 1*UPDRS_Part_I_Summary_Score_Month_03 + <br />
2*UPDRS_Part_I_Summary_Score_Month_06 + 3*UPDRS_Part_I_Summary_Score_Month_09 + <br />
4*UPDRS_Part_I_Summary_Score_Month_12 + 5*UPDRS_Part_I_Summary_Score_Month_18 + <br />
6*UPDRS_Part_I_Summary_Score_Month_24 +<br />
0*UPDRS_Part_II_Patient_Questionnaire_Summary_Score_Baseline + <br />
1*UPDRS_Part_II_Patient_Questionnaire_Summary_Score_Month_03 + <br />
2*UPDRS_Part_II_Patient_Questionnaire_Summary_Score_Month_06 + <br />
3*UPDRS_Part_II_Patient_Questionnaire_Summary_Score_Month_09 + <br />
4*UPDRS_Part_II_Patient_Questionnaire_Summary_Score_Month_12 + <br />
5*UPDRS_Part_II_Patient_Questionnaire_Summary_Score_Month_18 + <br />
6*UPDRS_Part_II_Patient_Questionnaire_Summary_Score_Month_24 + <br />
0*UPDRS_Part_III_Summary_Score_Baseline + 1*UPDRS_Part_III_Summary_Score_Month_03 + <br />
2*UPDRS_Part_III_Summary_Score_Month_06 + 3*UPDRS_Part_III_Summary_Score_Month_09 + <br />
4*UPDRS_Part_III_Summary_Score_Month_12 + 5*UPDRS_Part_III_Summary_Score_Month_18 + <br />
6*UPDRS_Part_III_Summary_Score_Month_24 + <br />
0*X_Assessment_Non.Motor_Epworth_Sleepiness_Scale_Summary_Score_Baseline + <br />
2*X_Assessment_Non.Motor_Epworth_Sleepiness_Scale_Summary_Score_Month_06 + <br />
4*X_Assessment_Non.Motor_Epworth_Sleepiness_Scale_Summary_Score_Month_12 +<br />
6*X_Assessment_Non.Motor_Epworth_Sleepiness_Scale_Summary_Score_Month_24 +<br />
0*X_Assessment_Non.Motor_Geriatric_Depression_Scale_GDS_Short_Summary_Score_Baseline + <br />
2*X_Assessment_Non.Motor_Geriatric_Depression_Scale_GDS_Short_Summary_Score_Month_06 + <br />
4*X_Assessment_Non.Motor_Geriatric_Depression_Scale_GDS_Short_Summary_Score_Month_12 + <br />
6*X_Assessment_Non.Motor_Geriatric_Depression_Scale_GDS_Short_Summary_Score_Month_24<br />
'<br />
<br />
fit4 <- growth(model4, data=myData)<br />
summary(fit4)<br />
parameterEstimates(fit4) # extracts the values of the estimated parameters, the standard errors, <br />
# the z-values, the standardized parameter values, and returns a data frame <br />
fitted(fit4) # return the model-implied (fitted) covariance matrix (and mean vector) of a fitted model<br />
<br />
<br />
# resid() function return (unstandardized) residuals of a fitted model including the difference between <br />
# the observed and implied covariance matrix and mean vector<br />
resid(fit4)<br />
<br />
==Measures of model quality (Comparative Fit Index (CFI), Root Mean Square Error of Approximation (RMSEA))==<br />
<br />
# report the fit measures as a signature vector: Comparative Fit Index (CFI), Root Mean Square Error of <br />
# Approximation (RMSEA)<br />
fitMeasures(fit4, c("cfi", "rmsea", "srmr"))<br />
<br />
====Comparative Fit Index====<br />
<br />
(CFI) is an incremental measure directly based on the non-centrality measure. If $d = \chi^2 - df$, where $df$ are the degrees of freedom of the model, the Comparative Fit Index is:<br />
$<br />
CFI=\frac{d(\text{Null Model})-d(\text{Proposed Model})}{d(\text{Null Model})}.<br />
$<br />
<br />
$0≤CFI≤1$ (by definition). It is interpreted as:<br />
<br />
*$CFI<0.9$ - model fitting is poor.<br />
<br />
*$0.9≤CFI≤0.95$ is considered marginal, <br />
<br />
*$CFI>0.95$ is good. <br />
<br />
CFI is a relative index of model fit – it compares the fit of your model to the fit of the (worst-fitting) null model.<br />
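As a base-R sketch, the CFI can be computed directly from the χ² statistics and degrees of freedom of the null and proposed models (the inputs below are made-up illustrative values, not results from the fits on this page):<br />
<br />
```r
# CFI from the non-centrality d = max(chi^2 - df, 0) of the null and proposed models
cfi <- function(chisq0, df0, chisq1, df1) {
  d0 <- max(chisq0 - df0, 0)   # null (worst-fitting) model
  d1 <- max(chisq1 - df1, 0)   # proposed model
  (d0 - d1) / d0
}
cfi(250, 10, 15, 8)            # illustrative values: (240 - 7) / 240
```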
<br />
====Root Mean Square Error of Approximation====<br />
(RMSEA) - “Ramsey”<br />
<br />
An absolute measure of fit based on the non-centrality parameter: <br />
<br />
$\sqrt{\frac{\chi^2-df}{df\times (N - 1)}}$,<br />
<br />
where $N$ is the sample size and $df$ the degrees of freedom of the model. If χ<sup>2</sup> < df, then the RMSEA is defined to be 0. It penalizes model complexity via the χ<sup>2</sup>-to-df ratio. The RMSEA is a popular measure of model fit. <br />
<br />
*RMSEA < 0.01, excellent, <br />
<br />
*RMSEA < 0.05, good <br />
<br />
*RMSEA > 0.10 cutoff for poor fitting models<br />
<br />
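The formula above translates directly into a short base-R sketch; as a check, plugging in the fit5 test statistic reported later on this page (χ²=3.703, df=1, N=661) reproduces the RMSEA of 0.064 that lavaan reports:<br />
<br />
```r
# RMSEA from the model chi^2 statistic, its degrees of freedom, and the sample size N
rmsea <- function(chisq, df, N) {
  sqrt(max(chisq - df, 0) / (df * (N - 1)))   # defined to be 0 when chi^2 < df
}
rmsea(3.703, 1, 661)   # fit5 values from this page; approximately 0.064
```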
====Standardized Root Mean Square Residual==== <br />
(SRMR) is an absolute measure of fit defined as the standardized difference between the observed correlation and the predicted correlation. A value of zero indicates perfect fit. The SRMR has no penalty for model complexity. SRMR <0.08 is considered a good fit.<br />
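A minimal base-R sketch of the SRMR computation (the two 3×3 correlation matrices below are made-up placeholders, not output of the fits above):<br />
<br />
```r
# SRMR: root mean square of the lower-triangle differences between the
# observed and model-implied correlation matrices
srmr <- function(R.obs, R.imp) {
  r <- (R.obs - R.imp)[lower.tri(R.obs, diag = TRUE)]
  sqrt(mean(r^2))
}
R.obs <- matrix(c(1, .5, .4,   .5, 1, .6,   .4, .6, 1), 3)
R.imp <- matrix(c(1, .45, .45, .45, 1, .55, .45, .55, 1), 3)
srmr(R.obs, R.imp)   # small value, consistent with a close fit
```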
<br />
# inspect the model results (report parameter table)<br />
inspect(fit4)<br />
<br />
#install.packages("semTools")<br />
# library("semTools")<br />
<br />
<b><u>A Simpler Model (fit5)</u></b><br />
<br />
model5 <- '<br />
# intercept and slope with fixed coefficients<br />
i =~ UPDRS_Part_I_Summary_Score_Baseline + UPDRS_Part_I_Summary_Score_Month_03 + UPDRS_Part_I_Summary_Score_Month_24<br />
s =~ 0*UPDRS_Part_I_Summary_Score_Baseline + 1*UPDRS_Part_I_Summary_Score_Month_03 + 6*UPDRS_Part_I_Summary_Score_Month_24<br />
# regressions<br />
i ~ R_fusiform_gyrus_Volume + Weight + ResearchGroup + Age + chr12_rs34637584_GT <br />
s ~ R_fusiform_gyrus_Volume + Weight + ResearchGroup + Age + chr12_rs34637584_GT<br />
# time-varying covariates<br />
UPDRS_Part_I_Summary_Score_Baseline ~ Weight<br />
UPDRS_Part_I_Summary_Score_Month_03 ~ ResearchGroup <br />
UPDRS_Part_I_Summary_Score_Month_24 ~ Age<br />
'<br />
<br />
fit5 <- growth(model5, data=myData)<br />
summary(fit5); fitMeasures(fit5, c("cfi", "rmsea", "srmr"))<br />
parameterEstimates(fit5) # extracts the values of the estimated parameters, the standard errors, <br />
# the z-values, the standardized parameter values, and returns a data frame<br />
<br />
lavaan (0.5-18) converged normally after 99 iterations<br />
Number of observations 661<br />
Estimator ML<br />
Minimum Function Test Statistic 3.703<br />
Degrees of freedom 1<br />
P-value (Chi-square) 0.054<br />
Parameter estimates:<br />
Information Expected<br />
Standard Errors Standard<br />
Estimate Std.err Z-value P(>|z|)<br />
Latent variables:<br />
i =~<br />
UPDRS_P_I_S_S 1.000<br />
UPDRS_P_I_S_S 1.074<br />
UPDRS_P_I_S_S 1.172<br />
s =~<br />
UPDRS_P_I_S_S 0.000<br />
UPDRS_P_I_S_S 1.000<br />
UPDRS_P_I_S_S 6.000<br />
<br />
Regressions:<br />
i ~<br />
R_fsfrm_gyr_V 0.000<br />
Weight 0.003<br />
ResearchGroup -0.880<br />
Age -0.009<br />
c12_34637584_ -0.907<br />
s ~<br />
R_fsfrm_gyr_V -0.000<br />
Weight -0.000<br />
ResearchGroup -0.084<br />
Age 0.002<br />
c12_34637584_ -0.047<br />
UPDRS_Part_I_Summary_Score_Baseline ~<br />
Weight -0.000<br />
UPDRS_Part_I_Summary_Score_Month_03 ~<br />
ResearchGroup 0.693<br />
UPDRS_Part_I_Summary_Score_Month_24 ~<br />
Age -0.002<br />
<br />
Covariances:<br />
i ~~<br />
s 0.074<br />
<br />
Intercepts:<br />
UPDRS_P_I_S_S 0.000<br />
UPDRS_P_I_S_S 0.000<br />
UPDRS_P_I_S_S 0.000<br />
i 1.633<br />
s -0.023<br />
<br />
Variances:<br />
UPDRS_P_I_S_S 1.017<br />
UPDRS_P_I_S_S 1.093<br />
UPDRS_P_I_S_S 2.993<br />
i 1.019<br />
s -0.025<br />
<br />
<b>cfi rmsea srmr</b><br />
<b>0.996 0.064 0.008</b><br />
<br />
fitted(fit5) # return the model-implied (fitted) covariance matrix (and mean vector) of a fitted model<br />
# write.table(fitted(fit5), file="C:\\Users\\Dinov\\Desktop\\test1.txt")<br />
<br />
# resid() function return (unstandardized) residuals of a fitted model including the difference between <br />
# the observed and implied covariance matrix and mean vector<br />
resid(fit5)<br />
<br />
# report the fit measures as a signature vector<br />
fitMeasures(fit5, c("cfi", "rmsea", "srmr")) # comparative fit index (CFI)<br />
<br />
# inspect the model results (report parameter table)<br />
inspect(fit5)<br />
<br />
<b>Note:</b> See discussion of SEM modeling pros/cons <sup>2</sup>.<br />
<br />
==Generalized Estimating Equation (GEE) Modeling==<br />
<br />
Generalized Estimating Equations (GEE) modeling<sup>3</sup> is used for analyzing data with the following characteristics:<br />
(1) the observations within a group may be correlated, (2) observations in separate clusters are independent, (3) a monotone transformation of the expectation is linearly related to the explanatory variables, and (4) the variance is a function of the expectation. The expectation (3) and the variance (4) are conditional on group-level or individual-level covariates.<br />
<br />
GEE handles correlated discrete and continuous outcome variables. It requires specification of only the first two moments of the outcome variables and the correlation among them. The goal is to estimate fixed parameters without specifying their joint distribution. The working correlation structure is chosen from one of these 4 alternatives, specified via the <b>corstr</b> argument in the R call geeglm(outcome ~ center + treat + sex + baseline + age, data = respiratory, family = "binomial", id = id, <b>corstr = "exchangeable"</b>, scale.fix = TRUE):<br />
<br />
<center>[[Image:SMHS_BigDataBigSci8.png|300px]]</center><br />
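The four working-correlation choices can be sketched in base R for 4 within-subject visits (α is an assumed value; the unstructured entries reuse Table 3's estimates, rounded, purely for illustration):<br />
<br />
```r
# Working correlation structures for nv = 4 visits
nv <- 4; alpha <- 0.5                            # assumed illustrative values
indep <- diag(nv)                                # independence: identity matrix
exch  <- matrix(alpha, nv, nv); diag(exch) <- 1  # exchangeable: one common correlation
ar1   <- alpha^abs(outer(1:nv, 1:nv, "-"))       # AR(1): correlation decays with lag
unstr <- diag(nv)                                # unstructured: a parameter per visit pair
unstr[lower.tri(unstr)] <- c(.51, .44, .51, .58, .53, .59)  # rounded Table 3 values
unstr[upper.tri(unstr)] <- t(unstr)[upper.tri(unstr)]
```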
<br />
===Respiratory Illness GEE R example===<br />
<br />
This example is based on a data set on respiratory illness <sup>4</sup> and the <b>geepack</b> package. The data come from a clinical study of treatment effects on patients with respiratory illness. N=111 patients from 2 clinical centers were randomized to receive either a placebo or an active treatment. Four temporal examinations assessed the <b>respiratory state</b> of each patient as good (=1) or poor (=0). Explanatory variables characterizing a patient were: <b>center</b> (1,2), <b>treatment</b> (A=active, P=placebo), <b>sex</b> (M=male, F=female), and <b>age</b> (in years) at baseline. The values of these covariates were constant across the repeated observations on each patient.<br />
<br />
<b>Table 1</b> shows the number of patients for the response patterns across the 4 visits, split by baseline status and treatment. Patients with baseline respiratory status = 0 appear to have either a low or a high number of positive responses, while those with baseline respiratory status = 1 tend to respond positively. <b>Table 2</b> describes the distribution of the number of positive responses per patient by sex and center.<br />
<br />
# library("geepack")<br />
<br />
<b>Table 1</b>: Distribution of patients for <b>different response patterns</b> classified by <b>baseline-respiratory</b> response and <b>treatment</b>. The patterns are ordered according to increasing numbers of positive responses.<br />
<br />
<center><br />
{| class="wikitable" style="text-align:center; width:75%" border="1"<br />
|-<br />
! ||Visit|| colspan="15"| All Possible Response Patterns (2*2*2*2=16 permutation patterns)||<br />
|-<br />
|||1||0||1||0||0||0||1||1||1||0||0||1||1||1||0||1||<br />
|-<br />
|||2||0||0||1||0||0||1||0||0||1||0||1||1||0||1||1||<br />
|-<br />
|||3||0||0||0||1||0||0||1||0||1||1||1||0||1||1||1||<br />
|-<br />
|||4||0||0||0||0||1||0||0||1||0||1||0||1||1||1||1||<br />
|-<br />
!Baseline||Treatment||||||||||||||||||||||||||||||||Sum<br />
|-<br />
| rowspan="2"|0||A||7||2||2||2||1||0||1||0||1||0||1||2||0||4||7||30<br />
|-<br />
|P||18||1||0||2||1||2||0||0||1||0||0||1||2||0||3||31<br />
|-<br />
|rowspan="2"|1||A||0||0||0||0||0||0||1||1||0||0||4||0||1||0||17||24<br />
|-<br />
|P||1||4||1||0||0||0||0||1||1||3||1||1||2||1||10||26<br />
|-<br />
|Sum||||26||7||3||4||2||2||2||2||3||3||6||4||5||5||37||111<br />
|}<br />
</center><br />
<br />
<br />
<b>Table 2</b>: Distribution of patients for the number of positive responses across the 4 visits for <b>Sex</b> and <b>Center</b>. <br />
<br />
<center><br />
{| class="wikitable" style="text-align:center; width:75%" border="1"<br />
|-<br />
! colspan="2" rowspan="2"| ||colspan="5"|Number of positive responses<br />
|-<br />
| 0||1||2||3||4<br />
|-<br />
|rowspan="2"|Sex || F||7||3||3||3||7<br />
|-<br />
|M||19||13||9||17||30<br />
|-<br />
|rowspan="2"|Center|| 1||18||9||6||11||12<br />
|-<br />
|2||8||7||6||9||25<br />
|}<br />
</center><br />
<br />
<b>Figure 1</b> shows a plot of age against the proportion of positive responses for each patient. It indicates a quadratic relationship between the proportions and age. We first fit a logistic model to the data (which would be appropriate if there were <i>no time effects</i> and <i>no spread in the response probabilities</i> for patients with the same covariate values).<br />
<br />
# install.packages("geepack")<br />
library("geepack")<br />
<br />
# data include a clinical trial of 111 patients with respiratory illness from two different clinics were randomized to receive either <br />
# placebo (P) or an active (A) treatment. Patients were examined at baseline and at four visits during treatment. <br />
# At each examination, respiratory status (categorized as 1 = good, 0 = poor)<br />
data("respiratory")<br />
head(respiratory)<br />
myData <- respiratory<br />
<br />
<center>head(myData)<br />
{| class="wikitable" style="text-align:center; " border="1"<br />
|-<br />
|||Center||ID||Treat||Sex||Age||Baseline||Visit||Outcome<br />
|-<br />
|1 ||1||1||P||M||46||0||1||0<br />
|-<br />
|2 ||1||1||P||M||46||0||2||0<br />
|-<br />
|3 ||1||1||P||M||46||0||3||0<br />
|-<br />
|4 ||1||1||P||M||46||0||4||0<br />
|-<br />
|5||1||2||P||M||28||0||1||0<br />
|-<br />
|6||1||2||P||M||28||0||2||0<br />
|}<br />
</center><br />
<br />
# Get proportions of positive responses<br />
responses <- factor(myData$\$$outcome, labels = c("OutcomePositive", "OutcomeNegative"))<br />
data.frame <- data.frame(responses, myData$\$$age)<br />
head(data.frame)<br />
tab <- prop.table(table(data.frame), 1); tab # compute proportions<br />
sum(tab[1,]) # check proportions (sums to 1.0)?<br />
prop <- tab[1,] # save the proportions of positive responses for each patient<br />
plot(as.numeric(dimnames(tab)$\$$myData.age), tab[1,], xlab = "Age", ylab = "Proportion of Positive Outcomes")<br />
# dimnames(tab) # to see/inspect positive/negative outcomes<br />
<br />
[[Image:SMHS_BigDataBigSci9.png|500px]]<br />
<br />
x <- as.numeric(dimnames(tab)$\$$myData.age)<br />
poly <- loess( prop ~ x) # fit a Local Polynomial Regression Fitting<br />
plot(x, prop)<br />
lines(predict(poly), col='red', lwd=2)<br />
<br />
smoothingSpline <- smooth.spline(x, prop, spar=0.6)<br />
plot(x, prop)<br />
lines(smoothingSpline, col='red', lwd=1.5)<br />
smoothPolySpline <- smooth.spline(x, predict(poly), spar=0.6)<br />
lines(smoothPolySpline, col='blue', lwd=2)<br />
legend("topright", inset=.05, title="Polynomial regression models", c("Raw Poly","Smooth Poly"), fill=c('red', 'blue'), horiz=TRUE)<br />
<br />
[[Image:SMHS_BigDataBigSci10.png|500px]]<br />
<br />
model.glm <- <b>glm</b>(outcome ~ baseline + center + sex + treat + age + I(age^2), data = respiratory, family = binomial)<br />
<br />
summary(model.glm)<br />
<br />
<center>Deviance Residuals: <br />
{| class="wikitable" style="text-align:center; " border="1"<br />
|-<br />
|Min||1Q||Median||3Q||Max<br />
|-<br />
| -2.5951||-0.9108||0.4034||0.8336||2.0951<br />
|}<br />
</center><br />
<br />
<center>Coefficients:<br />
{| class="wikitable" style="text-align:center; " border="1"<br />
|-<br />
|||Estimate||Std. Error||z value||$Pr( \gt |z|)$ <br />
|-<br />
|(Intercept)||3.3579727||1.0285292||3.265||0.0011 **<br />
|-<br />
|baseline||1.8850421||0.2482959||7.592||3.15e-14 ***<br />
|-<br />
|center||0.5099244||0.2453982||2.078||0.0377 *<br />
|-<br />
|sexM||-0.4510595||0.3166570||-1.424||0.1543<br />
|-<br />
|Treatp||-1.3231587||0.2431603||-5.442||5.28e-08 ***<br />
|-<br />
|age||-0.2072815||0.0472538||-4.387||1.15e-05 ***<br />
|-<br />
|I(age^2)||0.0025650||0.0006324||4.056||4.99e-05 ***<br />
|}<br />
</center><br />
<br />
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1<br />
<br />
(Dispersion parameter for binomial family taken to be 1)<br />
<br />
Null deviance: 609.41 on 443 degrees of freedom<br />
<br />
Residual deviance: 468.62 on 437 degrees of freedom<br />
<br />
AIC: 482.62<br />
<br />
The correlation matrix of the outcome measures across visits is shown in <b>Table 3</b>.<br />
<br />
attach(myData)<br />
mat1 <- matrix(c(outcome[visit==1], outcome [visit==2], outcome [visit==3], <br />
outcome[visit==4]), ncol = 4)<br />
cor(mat1)<br />
<br />
<b>Table 3</b>: Correlation matrix for the outcome measurements at different visits.<br />
<br />
<center>Coefficients:<br />
{| class="wikitable" style="text-align:center; " border="1"<br />
|-<br />
|||[,1]||[,2]||[,3]||[,4]<br />
|-<br />
|[,1]||1.0000000||0.5087944||0.4431438||0.5139016<br />
|-<br />
|[,2]||0.5087944||1.0000000||0.5821877||0.5301611<br />
|-<br />
|[,3]||0.4431438||0.5821877||1.0000000||0.5871276<br />
|-<br />
|[,4]||0.5139016||0.5301611||0.5871276||1.0000000<br />
|}<br />
</center><br />
<br />
# We can also examine for multicollinearity problem, using the correlation matrix for X<br />
cor(model.matrix(model.glm)[,-1])<br />
<br />
# GEE modeling: R function arguments/options<br />
<br />
*<b>corstr</b>= for defining the correlation structure within groups in a GEE model<br />
<br />
*<b>id</b>= is used to identify the grouping variable in a GEE model<br />
<br />
*<b>scale.fix</b>= when TRUE causes the scale parameter to be fixed (by default at 1) rather than estimated<br />
<br />
*<b>waves</b>= names a positive integer-valued variable that is used to identify the order and spacing of observations within groups in a GEE model. This argument is crucial when there are missing values and gaps in the data<br />
<br />
gee.model1 <- <b>geeglm</b>(outcome ~ center + treat + sex + baseline + age, data = respiratory, family = "binomial", id = id, corstr = "exchangeable", scale.fix = TRUE)<br />
<br />
# The column labeled <b>Wald</b> in the summary table is the square of the z-statistic. The reported p-values are the <br />
# upper-tail probabilities from a chi-square distribution with 1 degree of freedom and test whether the true parameter value ≠ 0.<br />
summary(gee.model1)<br />
<br />
# To test the effect of ''treatment'' using anova()<br />
gee.model1 <- <b>geeglm</b>(outcome ~ center + <b><u>treat</u></b> + sex + baseline + age, data = respiratory, family=binomial(link="logit"), id = id, corstr = "exchangeable", std.err="san.se")<br />
gee.model2 <- geeglm(outcome ~ center + sex + baseline + age, data = respiratory, family=binomial(link="logit"), id=id, corstr = "exchangeable", std.err="san.se")<br />
anova(gee.model1, gee.model2)<br />
<br />
# To test whether a categorical predictor with more than two levels should be retained in a GEE model we need <br />
# to test the entire set of dummy variables simultaneously as a single construct. <br />
# The geepack package provides a method for the anova function for a multivariate Wald test<br />
# When the anova function is applied to a single geeglm object it returns sequential Wald tests for <br />
# individual predictors with the tests carried out in the order the predictors are listed in the model formula.<br />
anova(gee.model1)<br />
<br />
===PD GEE example===<br />
<br />
This example uses the PPMI Parkinson's disease (PD) data to illustrate GEE analysis.<br />
<br />
<b># 05_PPMI_top_UPDRS_Integrated_LongFormat1.csv</b><br />
longData <- read.csv("https://umich.instructure.com/files/330397/download?download_frd=1",header=TRUE)<br />
<br />
# library("geepack")<br />
<br />
# Data Elements: FID_IID L_insular_cortex_ComputeArea L_insular_cortex_Volume R_insular_cortex_ComputeArea R_insular_cortex_Volume L_cingulate_gyrus_ComputeArea L_cingulate_gyrus_Volume R_cingulate_gyrus_ComputeArea R_cingulate_gyrus_Volume L_caudate_ComputeArea L_caudate_Volume R_caudate_ComputeArea R_caudate_Volume L_putamen_ComputeArea L_putamen_Volume R_putamen_ComputeArea R_putamen_Volume Sex Weight ResearchGroup Age chr12_rs34637584_GT chr17_rs11868035_GT chr17_rs11012_GT chr17_rs393152_GT chr17_rs12185268_GT chr17_rs199533_GT UPDRS_part_I UPDRS_part_II UPDRS_part_III time_visit<br />
<br />
dim(longData) <br />
<br />
data1 = na.omit(longData)<br />
attach(data1)<br />
ControlGroup <- ifelse(ResearchGroup == "Control", 1, 0)<br />
<br />
# these calculations take a long time!!!<br />
# if you get <i>“Error in geese.fit(xx, yy, id, offset, soffset, w, waves = waves, zsca, : <br />
# nrow(zsca) and length(y)</i> not match” – this indicates some of the variables are of different lengths<br />
# if you get <i>“glm.fit: algorithm did not converge”</i> – see this discussion: http://goo.gl/lrjBjB <br />
<br />
gee.model0 <- geeglm(ControlGroup ~ L_insular_cortex_ComputeArea+L_insular_cortex_Volume+ Sex + Weight + Age + chr17_rs11012_GT + chr17_rs199533_GT + UPDRS_part_I + UPDRS_part_II + time_visit, data = data1, family=binomial(link="logit"), id = FID_IID, corstr = "unstructured", std.err="san.se")<br />
<br />
gee.model1 <- geeglm(ControlGroup ~ L_insular_cortex_ComputeArea+L_insular_cortex_Volume+ R_putamen_ComputeArea + R_putamen_Volume + Sex + Weight + Age + chr17_rs11012_GT + chr17_rs199533_GT + UPDRS_part_I + UPDRS_part_II + time_visit, data = data1, family=binomial(link="logit"), id = FID_IID, corstr = "unstructured", std.err="san.se")<br />
<br />
# compare 2 gee models<br />
# anova(gee.model0,gee.model1)<br />
<br />
# you can try the “family = poisson(link = "log")” model for the ResearchGroup response, as well<br />
<br />
gee.model2 <- <b>geeglm</b>(ControlGroup <br />
~ L_insular_cortex_ComputeArea+L_insular_cortex_Volume+R_insular_cortex_ComputeArea+ R_insular_cortex_Volume +L_cingulate_gyrus_ComputeArea + L_cingulate_gyrus_Volume + R_cingulate_gyrus_ComputeArea + R_cingulate_gyrus_Volume + L_caudate_ComputeArea + L_caudate_Volume + R_caudate_ComputeArea + R_caudate_Volume + L_putamen_ComputeArea + L_putamen_Volume + R_putamen_ComputeArea + R_putamen_Volume + Sex + Weight + Age + chr12_rs34637584_GT + chr17_rs11868035_GT + chr17_rs11012_GT + chr17_rs393152_GT + chr17_rs12185268_GT + chr17_rs199533_GT + UPDRS_part_I + UPDRS_part_II + time_visit, data = data1, family=binomial(link="logit"), id = FID_IID, corstr = "unstructured", std.err="san.se")<br />
<br />
Remember that we do not interpret GEE coefficients as relating to individuals – GEE models are marginal models, and the conclusions drawn are interpreted as population-based. Also, the time element in the model (time_visit) is just another controlling factor. <b>The effect-sizes (betas) associated with each variable/predictor represent the slopes associated with the corresponding covariate, while holding time constant</b>. If we need to examine interactions (e.g., Weight change over Time), we need to include an interaction term in the model (e.g., + Weight*time_visit).<br />
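For instance, a Weight-by-time interaction can be added to the model formula before refitting with geeglm. The sketch below uses only base-R formula tools on a shortened, hypothetical version of the formula above; no model is actually fit:<br />
<br />
```r
# Add a Weight x time interaction to a (shortened, hypothetical) GEE formula
f0 <- ControlGroup ~ L_insular_cortex_ComputeArea + Weight + Age + time_visit
f1 <- update(f0, . ~ . + Weight:time_visit)
attr(terms(f1), "term.labels")   # now includes "Weight:time_visit"
```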
<br />
summary (gee.model2)<br />
<br />
# Individual Wald test and <b>confidence intervals</b> for each covariate<br />
predictors2 <- coef(summary(gee.model2))<br />
CI2 <- with(as.data.frame(predictors2), cbind(lwr=Estimate-1.96*Std.err, est=Estimate, upr=Estimate+1.96*Std.err))<br />
rownames(CI2) <- rownames(predictors2)<br />
CI2<br />
<br />
==Appendix==<br />
<br />
SEM References<br />
<br />
*http://socserv.mcmaster.ca/jfox/Misc/sem/SEM-paper.pdf <br />
<br />
GEE References<br />
<br />
*https://cran.r-project.org/web/packages/geepack/geepack.pdf<br />
<br />
*http://www.jstatsoft.org/v15/i02/paper<br />
<br />
===Footnotes===<br />
<br />
*<sup>2</sup> http://www.imachordata.com/ecological-sems-and-composite-variables-what-why-and-how/<br />
*<sup>3</sup> http://www.jstatsoft.org/v15/i02/ <br />
*<sup>4</sup> https://books.google.com/books?id=mdEqBgAAQBAJ<br />
<br />
==See also==<br />
* [[SMHS_BigDataBigSci| Back to Model-based Analytics]] <br />
* [[SMHS_BigDataBigSci_SEM| Structural Equation Modeling (SEM)]] <br />
* [[SMHS_BigDataBigSci_GEE| Next Section: Generalized Estimating Equation (GEE) Modeling]]<br />
<br />
<hr><br />
* SOCR Home page: http://www.socr.umich.edu<br />
<br />
{{translate|pageName=http://wiki.socr.umich.edu/index.php?title=SMHS_BigDataBigSci_GCM}}</div>Pineaumihttps://wiki.socr.umich.edu/index.php?title=SMHS_BigDataBigSci_GCM&diff=16235SMHS BigDataBigSci GCM2016-05-23T19:14:35Z<p>Pineaumi: /* Footnotes */</p>
<hr />
<div>==[[SMHS_BigDataBigSci| Model-based Analytics]] - Growth Curve Models==<br />
<br />
Latent growth curve models may be used to analyze longitudinal or temporal data where the outcome measure is assessed on multiple occasions, and we examine its change over time, e.g., the trajectory over time can be<br />
modeled as a linear or quadratic function. Random effects are used to capture individual differences by conveniently representing (continuous) latent variables, aka growth factors. To fit a linear growth model we may specify a model with two latent variables: a random intercept, and a random slope:<br />
<br />
#load data <b>05_PPMI_top_UPDRS_Integrated_LongFormat.csv ( dim(myData) 661 71), wide</b> <br />
# setwd("/dir/")<br />
myData <- read.csv("https://umich.instructure.com/files/330395/download?download_frd=1&verifier=v6jBvV4x94ka3EYcGKuXXg5BZNaOLBVp0xkJih0H",header=TRUE)<br />
attach(myData)<br />
<br />
# dichotomize the "ResearchGroup" variable<br />
table(myData$\$$ResearchGroup)<br />
myData$\$$ResearchGroup <- ifelse(myData$\$$ResearchGroup == "Control", 1, 0)<br />
<br />
# linear growth model with 4 timepoints<br />
# intercept (i) and slope (s) with fixed coefficients<br />
# i =~ 1*t1 + 1*t2 + 1*t3 + 1*t4 (intercept/constant)<br />
# s =~ 0*t1 + 1*t2 + 2*t3 + 3*t4 (slope/linear term)<br />
# q =~ 0*t1 + 1*t2 + 4*t3 + 9*t4 (quadratic term, if needed: squared time scores)<br />
<br />
In this model, we have fixed all the coefficients of the linear growth functions:<br />
<br />
model4 <-<br />
' <br />
i =~ 1*UPDRS_Part_I_Summary_Score_Baseline + 1*UPDRS_Part_I_Summary_Score_Month_03 + <br />
1*UPDRS_Part_I_Summary_Score_Month_06 + 1*UPDRS_Part_I_Summary_Score_Month_09 + <br />
1*UPDRS_Part_I_Summary_Score_Month_12 + 1*UPDRS_Part_I_Summary_Score_Month_18 + <br />
1*UPDRS_Part_I_Summary_Score_Month_24 + <br />
1*UPDRS_Part_II_Patient_Questionnaire_Summary_Score_Baseline + <br />
1*UPDRS_Part_II_Patient_Questionnaire_Summary_Score_Month_03 + <br />
1*UPDRS_Part_II_Patient_Questionnaire_Summary_Score_Month_06 + <br />
1*UPDRS_Part_II_Patient_Questionnaire_Summary_Score_Month_09 + <br />
1*UPDRS_Part_II_Patient_Questionnaire_Summary_Score_Month_12 + <br />
1*UPDRS_Part_II_Patient_Questionnaire_Summary_Score_Month_18 +<br />
1*UPDRS_Part_II_Patient_Questionnaire_Summary_Score_Month_24 + <br />
1*UPDRS_Part_III_Summary_Score_Baseline + 1*UPDRS_Part_III_Summary_Score_Month_03 + <br />
1*UPDRS_Part_III_Summary_Score_Month_06 + 1*UPDRS_Part_III_Summary_Score_Month_09 + <br />
1*UPDRS_Part_III_Summary_Score_Month_12 + 1*UPDRS_Part_III_Summary_Score_Month_18 + <br />
1*UPDRS_Part_III_Summary_Score_Month_24 + <br />
1*X_Assessment_Non.Motor_Epworth_Sleepiness_Scale_Summary_Score_Baseline + <br />
1*X_Assessment_Non.Motor_Epworth_Sleepiness_Scale_Summary_Score_Month_06 + <br />
1*X_Assessment_Non.Motor_Epworth_Sleepiness_Scale_Summary_Score_Month_12 + <br />
1*X_Assessment_Non.Motor_Epworth_Sleepiness_Scale_Summary_Score_Month_24 + <br />
1*X_Assessment_Non.Motor_Geriatric_Depression_Scale_GDS_Short_Summary_Score_Baseline + <br />
1*X_Assessment_Non.Motor_Geriatric_Depression_Scale_GDS_Short_Summary_Score_Month_06 +<br />
1*X_Assessment_Non.Motor_Geriatric_Depression_Scale_GDS_Short_Summary_Score_Month_12 + <br />
1*X_Assessment_Non.Motor_Geriatric_Depression_Scale_GDS_Short_Summary_Score_Month_24 <br />
s =~ 0*UPDRS_Part_I_Summary_Score_Baseline + 1*UPDRS_Part_I_Summary_Score_Month_03 + <br />
2*UPDRS_Part_I_Summary_Score_Month_06 + 3*UPDRS_Part_I_Summary_Score_Month_09 + <br />
4*UPDRS_Part_I_Summary_Score_Month_12 + 5*UPDRS_Part_I_Summary_Score_Month_18 + <br />
6*UPDRS_Part_I_Summary_Score_Month_24 +<br />
0*UPDRS_Part_II_Patient_Questionnaire_Summary_Score_Baseline + <br />
1*UPDRS_Part_II_Patient_Questionnaire_Summary_Score_Month_03 + <br />
2*UPDRS_Part_II_Patient_Questionnaire_Summary_Score_Month_06 + <br />
3*UPDRS_Part_II_Patient_Questionnaire_Summary_Score_Month_09 + <br />
4*UPDRS_Part_II_Patient_Questionnaire_Summary_Score_Month_12 + <br />
5*UPDRS_Part_II_Patient_Questionnaire_Summary_Score_Month_18 + <br />
6*UPDRS_Part_II_Patient_Questionnaire_Summary_Score_Month_24 + <br />
0*UPDRS_Part_III_Summary_Score_Baseline + 1*UPDRS_Part_III_Summary_Score_Month_03 + <br />
2*UPDRS_Part_III_Summary_Score_Month_06 + 3*UPDRS_Part_III_Summary_Score_Month_09 + <br />
4*UPDRS_Part_III_Summary_Score_Month_12 + 5*UPDRS_Part_III_Summary_Score_Month_18 + <br />
6*UPDRS_Part_III_Summary_Score_Month_24 + <br />
0*X_Assessment_Non.Motor_Epworth_Sleepiness_Scale_Summary_Score_Baseline + <br />
2*X_Assessment_Non.Motor_Epworth_Sleepiness_Scale_Summary_Score_Month_06 + <br />
4*X_Assessment_Non.Motor_Epworth_Sleepiness_Scale_Summary_Score_Month_12 +<br />
6*X_Assessment_Non.Motor_Epworth_Sleepiness_Scale_Summary_Score_Month_24 +<br />
0*X_Assessment_Non.Motor_Geriatric_Depression_Scale_GDS_Short_Summary_Score_Baseline + <br />
2*X_Assessment_Non.Motor_Geriatric_Depression_Scale_GDS_Short_Summary_Score_Month_06 + <br />
4*X_Assessment_Non.Motor_Geriatric_Depression_Scale_GDS_Short_Summary_Score_Month_12 + <br />
6*X_Assessment_Non.Motor_Geriatric_Depression_Scale_GDS_Short_Summary_Score_Month_24<br />
'<br />
<br />
fit4 <- growth(model4, data=myData)<br />
summary(fit4)<br />
parameterEstimates(fit4) # extracts the values of the estimated parameters, the standard errors, <br />
# the z-values, the standardized parameter values, and returns a data frame <br />
fitted(fit4) # returns the model-implied (fitted) covariance matrix (and mean vector) of a fitted model<br />
<br />
<br />
# The resid() function returns (unstandardized) residuals of a fitted model, including the difference between <br />
# the observed and implied covariance matrix and mean vector<br />
resid(fit4)<br />
<br />
==Measures of model quality (Comparative Fit Index (CFI), Root Mean Square Error of Approximation (RMSEA))==<br />
<br />
# report the fit measures as a signature vector: Comparative Fit Index (CFI), Root Mean Square Error of <br />
# Approximation (RMSEA)<br />
fitMeasures(fit4, c("cfi", "rmsea", "srmr"))<br />
<br />
====Comparative Fit Index====<br />
<br />
(CFI) is an incremental measure based directly on the non-centrality measure. If $d = \chi^2 - df$, where $df$ denotes the degrees of freedom of the model, the Comparative Fit Index is:<br />
$<br />
CFI=\frac{d(\text{Null Model})-d(\text{Proposed Model})}{d(\text{Null Model})}.<br />
$<br />
<br />
$0≤CFI≤1$ (by definition). It is interpreted as:<br />
<br />
*$CFI<0.9$ - model fitting is poor.<br />
<br />
*$0.9≤CFI≤0.95$ is considered marginal, <br />
<br />
*$CFI>0.95$ is good. <br />
<br />
CFI is a relative index of model fit – it compares the fit of your model to the fit of the (worst-fitting) null model.<br />
<br />
====Root Mean Square Error of Approximation====<br />
(RMSEA, pronounced “Ramsey”)<br />
<br />
An absolute measure of fit based on the non-centrality parameter: <br />
<br />
$\sqrt{\frac{\chi^2-df}{df\times (N-1)}}$,<br />
<br />
where $N$ is the sample size and $df$ the degrees of freedom of the model. If $\chi^2 < df$, then RMSEA is set to 0. It penalizes model complexity via the chi-square to df ratio. The RMSEA is a popular measure of model fit. <br />
<br />
*RMSEA < 0.01: excellent fit, <br />
<br />
*RMSEA < 0.05: good fit, <br />
<br />
*RMSEA > 0.10: cutoff for poor-fitting models<br />
<br />
====Standardized Root Mean Square Residual==== <br />
(SRMR) is an absolute measure of fit defined as the standardized difference between the observed correlation and the predicted correlation. A value of zero indicates perfect fit. The SRMR has no penalty for model complexity. SRMR <0.08 is considered a good fit.<br />
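The fit indices above can be illustrated with a small R sketch; all chi-square values, degrees of freedom, and the sample size below are made up for illustration:<br />

```r
# Hypothetical fit statistics for a proposed and a null (baseline) model
chisq_prop <- 18;  df_prop <- 12    # proposed model
chisq_null <- 250; df_null <- 10    # null (baseline) model
N <- 300                            # sample size

# CFI: based on the non-centrality measures d = chi^2 - df (floored at 0)
d_prop <- max(chisq_prop - df_prop, 0)
d_null <- max(chisq_null - df_null, 0)
cfi <- (d_null - d_prop) / d_null   # here 0.975 -> good fit (CFI > 0.95)

# RMSEA: absolute fit with a complexity penalty; defined as 0 if chi^2 < df
rmsea <- sqrt(max(chisq_prop - df_prop, 0) / (df_prop * (N - 1)))
round(c(cfi = cfi, rmsea = rmsea), 3)
```

These are the same quantities that fitMeasures() extracts from a fitted lavaan model.<br />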
<br />
# inspect the model results (report parameter table)<br />
inspect(fit4)<br />
<br />
#install.packages("semTools")<br />
# library("semTools")<br />
<br />
<b><u>A Simpler Model (fit5)</u></b><br />
<br />
model5 <- '<br />
# intercept and slope with fixed coefficients<br />
i =~ UPDRS_Part_I_Summary_Score_Baseline + UPDRS_Part_I_Summary_Score_Month_03 + UPDRS_Part_I_Summary_Score_Month_24<br />
s =~ 0*UPDRS_Part_I_Summary_Score_Baseline + 1*UPDRS_Part_I_Summary_Score_Month_03 + 6*UPDRS_Part_I_Summary_Score_Month_24<br />
# regressions<br />
i ~ R_fusiform_gyrus_Volume + Weight + ResearchGroup + Age + chr12_rs34637584_GT <br />
s ~ R_fusiform_gyrus_Volume + Weight + ResearchGroup + Age + chr12_rs34637584_GT<br />
# time-varying covariates<br />
UPDRS_Part_I_Summary_Score_Baseline ~ Weight<br />
UPDRS_Part_I_Summary_Score_Month_03 ~ ResearchGroup <br />
UPDRS_Part_I_Summary_Score_Month_24 ~ Age<br />
'<br />
<br />
fit5 <- growth(model5, data=myData)<br />
summary(fit5); fitMeasures(fit5, c("cfi", "rmsea", "srmr"))<br />
parameterEstimates(fit5) # extracts the values of the estimated parameters, the standard errors, <br />
# the z-values, the standardized parameter values, and returns a data frame<br />
<br />
lavaan (0.5-18) converged normally after 99 iterations<br />
Number of observations 661<br />
Estimator ML<br />
Minimum Function Test Statistic 3.703<br />
Degrees of freedom 1<br />
P-value (Chi-square) 0.054<br />
Parameter estimates:<br />
Information Expected<br />
Standard Errors Standard<br />
Estimate Std.err Z-value P(>|z|)<br />
Latent variables:<br />
i =~<br />
UPDRS_P_I_S_S 1.000<br />
UPDRS_P_I_S_S 1.074<br />
UPDRS_P_I_S_S 1.172<br />
s =~<br />
UPDRS_P_I_S_S 0.000<br />
UPDRS_P_I_S_S 1.000<br />
UPDRS_P_I_S_S 6.000<br />
<br />
Regressions:<br />
i ~<br />
R_fsfrm_gyr_V 0.000<br />
Weight 0.003<br />
ResearchGroup -0.880<br />
Age -0.009<br />
c12_34637584_ -0.907<br />
s ~<br />
R_fsfrm_gyr_V -0.000<br />
Weight -0.000<br />
ResearchGroup -0.084<br />
Age 0.002<br />
c12_34637584_ -0.047<br />
UPDRS_Part_I_Summary_Score_Baseline ~<br />
Weight -0.000<br />
UPDRS_Part_I_Summary_Score_Month_03 ~<br />
ResearchGroup 0.693<br />
UPDRS_Part_I_Summary_Score_Month_24 ~<br />
Age -0.002<br />
<br />
Covariances:<br />
i ~~<br />
s 0.074<br />
<br />
Intercepts:<br />
UPDRS_P_I_S_S 0.000<br />
UPDRS_P_I_S_S 0.000<br />
UPDRS_P_I_S_S 0.000<br />
i 1.633<br />
s -0.023<br />
<br />
Variances:<br />
UPDRS_P_I_S_S 1.017<br />
UPDRS_P_I_S_S 1.093<br />
UPDRS_P_I_S_S 2.993<br />
i 1.019<br />
s -0.025<br />
<br />
<b>cfi rmsea srmr</b><br />
<b>0.996 0.064 0.008</b><br />
<br />
fitted(fit5) # returns the model-implied (fitted) covariance matrix (and mean vector) of a fitted model<br />
# write.table(fitted(fit5), file="C:\\Users\\Dinov\\Desktop\\test1.txt")<br />
<br />
# The resid() function returns (unstandardized) residuals of a fitted model, including the difference between <br />
# the observed and implied covariance matrix and mean vector<br />
resid(fit5)<br />
<br />
# report the fit measures as a signature vector<br />
fitMeasures(fit5, c("cfi", "rmsea", "srmr")) # comparative fit index (CFI)<br />
<br />
# inspect the model results (report parameter table)<br />
inspect(fit5)<br />
<br />
<b>Note:</b> See discussion of SEM modeling pros/cons <sup>2</sup>.<br />
<br />
==Generalized Estimating Equation (GEE) Modeling==<br />
<br />
Generalized Estimating Equations (GEE) modeling<sup>3</sup> is used for analyzing data with the following characteristics:<br />
(1) the observations within a group may be correlated, (2) observations in separate clusters are independent, (3) a monotone transformation of the expectation is linearly related to the explanatory variables, and (4) the variance is a function of the expectation. The expectation (#3) and the variance (#4) are conditional given group-level or individual-level covariates.<br />
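Formally, under assumptions (1)–(4), the GEE estimator of the regression coefficients $\beta$ solves the estimating equations (Liang &amp; Zeger, 1986):<br />

$\sum_{i=1}^{K} D_i^T V_i^{-1} \big(Y_i - \mu_i(\beta)\big) = 0,$<br />

where for cluster $i$, $\mu_i(\beta)$ is the vector of expectations, $D_i = \partial\mu_i/\partial\beta$, and $V_i = \phi A_i^{1/2} R(\alpha) A_i^{1/2}$ is the working covariance built from the variance function (diagonal matrix $A_i$), a working correlation matrix $R(\alpha)$, and a scale parameter $\phi$.<br />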
<br />
GEE is applied to handle correlated discrete and continuous outcome variables. For the outcome variables, it requires only specification of the first two moments and of the correlation among them. The goal is to estimate fixed parameters without specifying their joint distribution. The correlation structure is specified by one of the following four alternatives, via the <b>corstr</b> argument of the R call: geeglm(outcome ~ center + treat + sex + baseline + age, data = respiratory, family = "binomial", id = id, <b>corstr = "exchangeable"</b>, scale.fix = TRUE):<br />
<br />
<center>[[Image:SMHS_BigDataBigSci8.png|300px]]</center><br />
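A hedged R sketch of three of these working correlation structures for a cluster with 3 time points (alpha = 0.5 is an arbitrary illustrative value; the unstructured form has a free parameter for every pair of time points, so it has no single closed form here):<br />

```r
# 3x3 working correlation matrices R(alpha) for common GEE structures
alpha <- 0.5
R_independence <- diag(3)                   # corstr = "independence"
R_exchangeable <- matrix(alpha, 3, 3)       # corstr = "exchangeable"
diag(R_exchangeable) <- 1                   #   constant within-cluster correlation
R_ar1 <- alpha^abs(outer(1:3, 1:3, "-"))    # corstr = "ar1": decays with lag
R_ar1
```

The id argument of geeglm() defines the clusters over which these matrices apply.<br />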
<br />
===Respiratory Illness GEE R example===<br />
<br />
This example is based on a data set on respiratory illness <sup>4</sup> and the <b>geepack</b> package. The data come from a clinical study of treatment effects on patients with respiratory illness. N=111 patients from 2 clinical centers were randomized to receive either placebo or active treatment. Four temporal examinations assessed the <b>respiratory state</b> of each patient as good (=1) or poor (=0). Explanatory variables characterizing a patient were: <b>center</b> (1,2), <b>treatment</b> (A=active, P=placebo), <b>sex</b> (M=male, F=female), and <b>age</b> (in years) at baseline. The values of the covariates were constant for the repeated elementary observations on each patient.<br />
<br />
<b>Table 1</b> shows the number of patients for the response patterns across the 4 visits, split by baseline status and treatment. Patients with baseline respiratory status = 0 appear to have either a low or a high number of positive responses, while patients with baseline respiratory status = 1 tend to respond positively. <b>Table 2</b> describes the distribution of the number of positive responses per patient by sex and center.<br />
<br />
# library("geepack")<br />
<br />
<b>Table 1</b>: Distribution of patients for <b>different response patterns</b> classified by <b>baseline-respiratory</b> response and <b>treatment</b>. The patterns are ordered according to increasing numbers of positive responses.<br />
<br />
<center><br />
{| class="wikitable" style="text-align:center; width:75%" border="1"<br />
|-<br />
! ||Visit|| colspan="15"| All Possible Response Patterns (2*2*2*2=16 permutation patterns)||<br />
|-<br />
|||1||0||1||0||0||0||1||1||1||0||0||1||1||1||0||1||<br />
|-<br />
|||2||0||0||1||0||0||1||0||0||1||0||1||1||0||1||1||<br />
|-<br />
|||3||0||0||0||1||0||0||1||0||1||1||1||0||1||1||1||<br />
|-<br />
|||4||0||0||0||0||1||0||0||1||0||1||0||1||1||1||1||<br />
|-<br />
!Baseline||Treatment||||||||||||||||||||||||||||||||Sum<br />
|-<br />
| rowspan="2"|0||A||7||2||2||2||1||0||1||0||1||0||1||2||0||4||7||30<br />
|-<br />
|P||18||1||0||2||1||2||0||0||1||0||0||1||2||0||3||31<br />
|-<br />
|rowspan="2"|1||A||0||0||0||0||0||0||1||1||0||0||4||0||1||0||17||24<br />
|-<br />
|P||1||4||1||0||0||0||0||1||1||3||1||1||2||1||10||26<br />
|-<br />
|Sum||||26||7||3||4||2||2||2||2||3||3||6||4||5||5||37||111<br />
|}<br />
</center><br />
<br />
<br />
<b>Table 2</b>: Distribution of patients for the number of positive responses across the 4 visits for <b>Sex</b> and <b>Center</b>. <br />
<br />
<center><br />
{| class="wikitable" style="text-align:center; width:75%" border="1"<br />
|-<br />
! colspan="2" rowspan="2"| ||colspan="5"|Number of positive responses<br />
|-<br />
| 0||1||2||3||4<br />
|-<br />
|rowspan="2"|Sex || F||7||3||3||3||7<br />
|-<br />
|M||19||13||9||17||30<br />
|-<br />
|rowspan="2"|Center|| 1||18||9||6||11||12<br />
|-<br />
|2||8||7||6||9||25<br />
|}<br />
</center><br />
<br />
<b>Figure 1</b> shows a plot of age against the proportion of positive responses for each patient. It indicates a quadratic relationship between the proportions and age. We start by fitting a logistic model to the data (which would be appropriate if there were <i>no time effects</i> and <i>no spread in the response probabilities</i> for patients with the same covariate values).<br />
<br />
# install.packages("geepack")<br />
library("geepack")<br />
<br />
# data include a clinical trial of 111 patients with respiratory illness from two different clinics were randomized to receive either <br />
# placebo (P) or an active (A) treatment. Patients were examined at baseline and at four visits during treatment. <br />
# At each examination, respiratory status (categorized as 1 = good, 0 = poor)<br />
data("respiratory")<br />
head(respiratory)<br />
myData <- respiratory<br />
<br />
<center>head(myData)<br />
{| class="wikitable" style="text-align:center; " border="1"<br />
|-<br />
|||Center||ID||Treat||Sex||Age||Baseline||Visit||Outcome<br />
|-<br />
|1 ||1||1||P||M||46||0||1||0<br />
|-<br />
|2 ||1||1||P||M||46||0||2||0<br />
|-<br />
|3 ||1||1||P||M||46||0||3||0<br />
|-<br />
|4 ||1||1||P||M||46||0||4||0<br />
|-<br />
|5||1||2||P||M||28||0||1||0<br />
|-<br />
|6||1||2||P||M||28||0||2||0<br />
|}<br />
</center><br />
<br />
# Get proportions of positive responses<br />
responses <- factor(myData$\$$outcome, labels = c("OutcomePositive", "OutcomeNegative"))<br />
data.frame <- data.frame(responses, myData$\$$age)<br />
head(data.frame)<br />
tab <- prop.table(table(data.frame), 1); tab # compute proportions<br />
sum(tab[1,]) # check proportions (sums to 1.0)?<br />
prop <- tab[1,] # save the proportions of positive responses for each patient<br />
plot(as.numeric(dimnames(tab)$\$$myData.age), tab[1,], xlab = "Age", ylab = "Proportion of Positive Outcomes")<br />
# dimnames(tab) # to see/inspect positive/negative outcomes<br />
<br />
[[Image:SMHS_BigDataBigSci9.png|500px]]<br />
<br />
x <- as.numeric(dimnames(tab)$\$$myData.age)<br />
poly <- loess( prop ~ x) # fit a Local Polynomial Regression Fitting<br />
plot(x, prop)<br />
lines(predict(poly), col='red', lwd=2)<br />
<br />
smoothingSpline <- smooth.spline(x, prop, spar=0.6)<br />
plot(x, prop)<br />
lines(smoothingSpline, col='red', lwd=1.5)<br />
smoothPolySpline <- smooth.spline(x, predict(poly), spar=0.6)<br />
lines(smoothPolySpline, col='blue', lwd=2)<br />
legend("topright", inset=.05, title="Polynomial regression models", c("Raw Poly","Smooth Poly"), fill=c('red', 'blue'), horiz=TRUE)<br />
<br />
[[Image:SMHS_BigDataBigSci10.png|500px]]<br />
<br />
model.glm <- <b>glm</b>(outcome ~ baseline + center + sex + treat + age + I(age^2), data = respiratory, family = binomial)<br />
<br />
summary(model.glm)<br />
<br />
<center>Deviance Residuals: <br />
{| class="wikitable" style="text-align:center; " border="1"<br />
|-<br />
|Min||1Q||Median||3Q||Max<br />
|-<br />
| -2.5951||-0.9108||0.4034||0.8336||2.0951<br />
|}<br />
</center><br />
<br />
<center>Coefficients:<br />
{| class="wikitable" style="text-align:center; " border="1"<br />
|-<br />
|||Estimate||Std. Error||z value||$Pr( \gt |z|)$ <br />
|-<br />
|(Intercept)||3.3579727||1.0285292||3.265||0.0011 **<br />
|-<br />
|baseline||1.8850421||0.2482959||7.592||3.15e-14 ***<br />
|-<br />
|center||0.5099244||0.2453982||2.078||0.0377 *<br />
|-<br />
|sexM||-0.4510595||0.3166570||-1.424||0.1543<br />
|-<br />
|Treatp||-1.3231587||0.2431603||-5.442||5.28e-08 ***<br />
|-<br />
|age||-0.2072815||0.0472538||-4.387||1.15e-05 ***<br />
|-<br />
|I(age^2)||0.0025650||0.0006324||4.056||4.99e-05 ***<br />
|}<br />
</center><br />
<br />
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1<br />
<br />
(Dispersion parameter for binomial family taken to be 1)<br />
<br />
Null deviance: 609.41 on 443 degrees of freedom<br />
<br />
Residual deviance: 468.62 on 437 degrees of freedom<br />
<br />
AIC: 482.62<br />
<br />
The correlation matrix of the outcome measures across visits is shown in <b>Table 3</b>.<br />
<br />
attach(myData)<br />
mat1 <- matrix(c(outcome[visit==1], outcome [visit==2], outcome [visit==3], <br />
outcome[visit==4]), ncol = 4)<br />
cor(mat1)<br />
<br />
<b>Table 3</b>: Correlation matrix for the outcome measurements at different visits.<br />
<br />
<center>Correlations:<br />
{| class="wikitable" style="text-align:center; " border="1"<br />
|-<br />
|||[,1]||[,2]||[,3]||[,4]<br />
|-<br />
|[,1]||1.0000000||0.5087944||0.4431438||0.5139016<br />
|-<br />
|[,2]||0.5087944||1.0000000||0.5821877||0.5301611<br />
|-<br />
|[,3]||0.4431438||0.5821877||1.0000000||0.5871276<br />
|-<br />
|[,4]||0.5139016||0.5301611||0.5871276||1.0000000<br />
|}<br />
</center><br />
<br />
# We can also examine for multicollinearity problem, using the correlation matrix for X<br />
cor(model.matrix(model.glm)[,-1])<br />
<br />
# GEE modeling: R function arguments/options<br />
<br />
*<b>corstr</b>= for defining the correlation structure within groups in a GEE model<br />
<br />
*<b>id</b>= is used to identify the grouping variable in a GEE model<br />
<br />
*<b>scale.fix</b>= when TRUE causes the scale parameter to be fixed (by default at 1) rather than estimated<br />
<br />
*<b>waves</b>= names a positive integer-valued variable that is used to identify the order and spacing of observations within groups in a GEE model. This argument is crucial when there are missing values and gaps in the data<br />
<br />
gee.model1 <- <b>geeglm</b>(outcome ~ center + treat + sex + baseline + age, data = respiratory, family = "binomial", id = id, corstr = "exchangeable", scale.fix = TRUE)<br />
<br />
# The column labeled <b>Wald</b> in the summary table is the square of the z-statistic. The reported p-values are the <br />
# upper-tail probabilities from a chi-square distribution with 1 df and test whether the true parameter value ≠ 0.<br />
summary(gee.model1)<br />
<br />
# To test the effect of ''treatment'' using anova()<br />
gee.model1 <- <b>geeglm</b>(outcome ~ center + <b><u>treat</u></b> + sex + baseline + age, data = respiratory, family=binomial(link="logit"), id = id, corstr = "exchangeable", std.err="san.se")<br />
gee.model2 <- geeglm(outcome ~ center + sex + baseline + age, data = respiratory, family=binomial(link="logit"), id=id, corstr = "exchangeable", std.err="san.se")<br />
anova(gee.model1, gee.model2)<br />
<br />
# To test whether a categorical predictor with more than two levels should be retained in a GEE model we need <br />
# to test the entire set of dummy variables simultaneously as a single construct. <br />
# The geepack package provides a method for the anova function for a multivariate Wald test<br />
# When the anova function is applied to a single geeglm object it returns sequential Wald tests for <br />
# individual predictors with the tests carried out in the order the predictors are listed in the model formula.<br />
anova(gee.model1)<br />
<br />
===PD GEE example===<br />
<br />
This example uses the PPMI/PD data to illustrate GEE analysis.<br />
<br />
<b># 05_PPMI_top_UPDRS_Integrated_LongFormat1.csv</b><br />
longData <- read.csv("https://umich.instructure.com/files/330397/download?download_frd=1",header=TRUE)<br />
<br />
# library("geepack")<br />
<br />
# Data Elements: FID_IID L_insular_cortex_ComputeArea L_insular_cortex_Volume R_insular_cortex_ComputeArea R_insular_cortex_Volume L_cingulate_gyrus_ComputeArea L_cingulate_gyrus_Volume R_cingulate_gyrus_ComputeArea R_cingulate_gyrus_Volume L_caudate_ComputeArea L_caudate_Volume R_caudate_ComputeArea R_caudate_Volume L_putamen_ComputeArea L_putamen_Volume R_putamen_ComputeArea R_putamen_Volume Sex Weight ResearchGroup Age chr12_rs34637584_GT chr17_rs11868035_GT chr17_rs11012_GT chr17_rs393152_GT chr17_rs12185268_GT chr17_rs199533_GT UPDRS_part_I UPDRS_part_II UPDRS_part_III time_visit<br />
<br />
dim(longData) <br />
<br />
data1 = na.omit(longData)<br />
attach(data1)<br />
ControlGroup <- ifelse(ResearchGroup == "Control", 1, 0)<br />
<br />
# these calculations take a long time!!!<br />
# if you get <i>“Error in geese.fit(xx, yy, id, offset, soffset, w, waves = waves, zsca, : <br />
# nrow(zsca) and length(y)</i> not match” – this indicates some of the variables are of different lengths<br />
# if you get <i>“glm.fit: algorithm did not converge”</i> – see this discussion: http://goo.gl/lrjBjB <br />
<br />
gee.model0 <- geeglm(ControlGroup ~ L_insular_cortex_ComputeArea+L_insular_cortex_Volume+ Sex + Weight + Age + chr17_rs11012_GT + chr17_rs199533_GT + UPDRS_part_I + UPDRS_part_II + time_visit, data = data1, family=binomial(link="logit"), id = FID_IID, corstr = "unstructured", std.err="san.se")<br />
<br />
gee.model1 <- geeglm(ControlGroup ~ L_insular_cortex_ComputeArea+L_insular_cortex_Volume+ R_putamen_ComputeArea + R_putamen_Volume + Sex + Weight + Age + chr17_rs11012_GT + chr17_rs199533_GT + UPDRS_part_I + UPDRS_part_II + time_visit, data = data1, family=binomial(link="logit"), id = FID_IID, corstr = "unstructured", std.err="san.se")<br />
<br />
# compare 2 gee models<br />
# anova(gee.model0,gee.model1)<br />
<br />
# you can try the “family = poisson(link = "log")” model for the ResearchGroup response, as well<br />
<br />
gee.model2 <- <b>geeglm</b>(ControlGroup <br />
~ L_insular_cortex_ComputeArea+L_insular_cortex_Volume+R_insular_cortex_ComputeArea+ R_insular_cortex_Volume +L_cingulate_gyrus_ComputeArea + L_cingulate_gyrus_Volume + R_cingulate_gyrus_ComputeArea + R_cingulate_gyrus_Volume + L_caudate_ComputeArea + L_caudate_Volume + R_caudate_ComputeArea + R_caudate_Volume + L_putamen_ComputeArea + L_putamen_Volume + R_putamen_ComputeArea + R_putamen_Volume + Sex + Weight + Age + chr12_rs34637584_GT + chr17_rs11868035_GT + chr17_rs11012_GT + chr17_rs393152_GT + chr17_rs12185268_GT + chr17_rs199533_GT + UPDRS_part_I + UPDRS_part_II + time_visit, data = data1, family=binomial(link="logit"), id = FID_IID, corstr = "unstructured", std.err="san.se")<br />
<br />
Remember that we do not interpret GEE coefficients as relating to individuals – GEE models are marginal models, and the conclusions drawn are interpreted as population-based. Also, the time element in the model (time_visit) is just another controlling factor. <b>The effect-sizes (betas) associated with each variable/predictor represent the slopes associated with the corresponding covariate, while holding time constant</b>. If we need to examine interactions (e.g., Weight change over Time), we need to include an interaction term in the model (e.g., + Weight*time_visit).<br />
<br />
summary (gee.model2)<br />
<br />
# Individual Wald test and <b>confidence intervals</b> for each covariate<br />
predictors2 <- coef(summary(gee.model2))<br />
CI2 <- with(as.data.frame(predictors2), cbind(lwr=Estimate-1.96*Std.err, est=Estimate, upr=Estimate+1.96*Std.err))<br />
rownames(CI2) <- rownames(predictors2)<br />
CI2<br />
<br />
==Appendix==<br />
<br />
SEM References<br />
<br />
*http://socserv.mcmaster.ca/jfox/Misc/sem/SEM-paper.pdf <br />
<br />
GEE References<br />
<br />
*https://cran.r-project.org/web/packages/geepack/geepack.pdf<br />
<br />
*http://www.jstatsoft.org/v15/i02/paper<br />
<br />
===Footnotes===<br />
<br />
*<sup>2</sup> http://www.imachordata.com/ecological-sems-and-composite-variables-what-why-and-how/<br />
<br />
==See also==<br />
* [[SMHS_BigDataBigSci| Back to Model-based Analytics]] <br />
* [[SMHS_BigDataBigSci_SEM| Structural Equation Modeling (SEM)]] <br />
* [[SMHS_BigDataBigSci_GEE| Next Section: Generalized Estimating Equation (GEE) Modeling]]<br />
<br />
<hr><br />
* SOCR Home page: http://www.socr.umich.edu<br />
<br />
{{translate|pageName=http://wiki.socr.umich.edu/index.php?title=SMHS_BigDataBigSci_GCM}}</div>Pineaumihttps://wiki.socr.umich.edu/index.php?title=SMHS_BigDataBigSci_GCM&diff=16234SMHS BigDataBigSci GCM2016-05-23T19:14:10Z<p>Pineaumi: /* Footnotes */</p>
<hr />
<div>==[[SMHS_BigDataBigSci| Model-based Analytics]] - Growth Curve Models==<br />
<br />
Latent growth curve models may be used to analyze longitudinal or temporal data where the outcome measure is assessed on multiple occasions, and we examine its change over time, e.g., the trajectory over time can be<br />
modeled as a linear or quadratic function. Random effects are used to capture individual differences by conveniently representing (continuous) latent variables, aka growth factors. To fit a linear growth model we may specify a model with two latent variables: a random intercept, and a random slope:<br />
<br />
#load data <b>05_PPMI_top_UPDRS_Integrated_LongFormat.csv ( dim(myData) 661 71), wide</b> <br />
# setwd("/dir/")<br />
myData <- read.csv("https://umich.instructure.com/files/330395/download?download_frd=1&verifier=v6jBvV4x94ka3EYcGKuXXg5BZNaOLBVp0xkJih0H",header=TRUE)<br />
attach(myData)<br />
<br />
# dichotomize the "ResearchGroup" variable<br />
table(myData$\$$ResearchGroup)<br />
myData$\$$ResearchGroup <- ifelse(myData$\$$ResearchGroup == "Control", 1, 0)<br />
<br />
# linear growth model with 4 timepoints<br />
# intercept (i) and slope (s) with fixed coefficients<br />
# i =~ 1*t1 + 1*t2 + 1*t3 + 1*t4 (intercept/constant)<br />
# s =~ 0*t1 + 1*t2 + 2*t3 + 3*t4 (slope/linear term)<br />
# ??? =~ 0*t1 + 1*t2 + 2*t3 + 3*t4 (quadratic term)<br />
<br />
In this model, we have fixed all the coefficients of the linear growth functions:<br />
<br />
model4 <-<br />
' <br />
i =~ 1*UPDRS_Part_I_Summary_Score_Baseline + 1*UPDRS_Part_I_Summary_Score_Month_03 + <br />
1*UPDRS_Part_I_Summary_Score_Month_06 + 1*UPDRS_Part_I_Summary_Score_Month_09 + <br />
1*UPDRS_Part_I_Summary_Score_Month_12 + 1*UPDRS_Part_I_Summary_Score_Month_18 + <br />
1*UPDRS_Part_I_Summary_Score_Month_24 + <br />
1*UPDRS_Part_II_Patient_Questionnaire_Summary_Score_Baseline + <br />
1*UPDRS_Part_II_Patient_Questionnaire_Summary_Score_Month_03 + <br />
1*UPDRS_Part_II_Patient_Questionnaire_Summary_Score_Month_06 + <br />
1*UPDRS_Part_II_Patient_Questionnaire_Summary_Score_Month_09 + <br />
1*UPDRS_Part_II_Patient_Questionnaire_Summary_Score_Month_12 + <br />
1*UPDRS_Part_II_Patient_Questionnaire_Summary_Score_Month_18 +<br />
1*UPDRS_Part_II_Patient_Questionnaire_Summary_Score_Month_24 + <br />
1*UPDRS_Part_III_Summary_Score_Baseline + 1*UPDRS_Part_III_Summary_Score_Month_03 + <br />
1*UPDRS_Part_III_Summary_Score_Month_06 + 1*UPDRS_Part_III_Summary_Score_Month_09 + <br />
1*UPDRS_Part_III_Summary_Score_Month_12 + 1*UPDRS_Part_III_Summary_Score_Month_18 + <br />
1*UPDRS_Part_III_Summary_Score_Month_24 + <br />
1*X_Assessment_Non.Motor_Epworth_Sleepiness_Scale_Summary_Score_Baseline + <br />
1*X_Assessment_Non.Motor_Epworth_Sleepiness_Scale_Summary_Score_Month_06 + <br />
1*X_Assessment_Non.Motor_Epworth_Sleepiness_Scale_Summary_Score_Month_12 + <br />
1*X_Assessment_Non.Motor_Epworth_Sleepiness_Scale_Summary_Score_Month_24 + <br />
1*X_Assessment_Non.Motor_Geriatric_Depression_Scale_GDS_Short_Summary_Score_Baseline + <br />
1*X_Assessment_Non.Motor_Geriatric_Depression_Scale_GDS_Short_Summary_Score_Month_06 +<br />
1*X_Assessment_Non.Motor_Geriatric_Depression_Scale_GDS_Short_Summary_Score_Month_12 + <br />
1*X_Assessment_Non.Motor_Geriatric_Depression_Scale_GDS_Short_Summary_Score_Month_24 <br />
s =~ 0*UPDRS_Part_I_Summary_Score_Baseline + 1*UPDRS_Part_I_Summary_Score_Month_03 + <br />
2*UPDRS_Part_I_Summary_Score_Month_06 + 3*UPDRS_Part_I_Summary_Score_Month_09 + <br />
4*UPDRS_Part_I_Summary_Score_Month_12 + 5*UPDRS_Part_I_Summary_Score_Month_18 + <br />
6*UPDRS_Part_I_Summary_Score_Month_24 +<br />
0*UPDRS_Part_II_Patient_Questionnaire_Summary_Score_Baseline + <br />
1*UPDRS_Part_II_Patient_Questionnaire_Summary_Score_Month_03 + <br />
2*UPDRS_Part_II_Patient_Questionnaire_Summary_Score_Month_06 + <br />
3*UPDRS_Part_II_Patient_Questionnaire_Summary_Score_Month_09 + <br />
4*UPDRS_Part_II_Patient_Questionnaire_Summary_Score_Month_12 + <br />
5*UPDRS_Part_II_Patient_Questionnaire_Summary_Score_Month_18 + <br />
6*UPDRS_Part_II_Patient_Questionnaire_Summary_Score_Month_24 + <br />
0*UPDRS_Part_III_Summary_Score_Baseline + 1*UPDRS_Part_III_Summary_Score_Month_03 + <br />
2*UPDRS_Part_III_Summary_Score_Month_06 + 3*UPDRS_Part_III_Summary_Score_Month_09 + <br />
4*UPDRS_Part_III_Summary_Score_Month_12 + 5*UPDRS_Part_III_Summary_Score_Month_18 + <br />
6*UPDRS_Part_III_Summary_Score_Month_24 + <br />
0*X_Assessment_Non.Motor_Epworth_Sleepiness_Scale_Summary_Score_Baseline + <br />
2*X_Assessment_Non.Motor_Epworth_Sleepiness_Scale_Summary_Score_Month_06 + <br />
4*X_Assessment_Non.Motor_Epworth_Sleepiness_Scale_Summary_Score_Month_12 +<br />
6*X_Assessment_Non.Motor_Epworth_Sleepiness_Scale_Summary_Score_Month_24 +<br />
0*X_Assessment_Non.Motor_Geriatric_Depression_Scale_GDS_Short_Summary_Score_Baseline + <br />
2*X_Assessment_Non.Motor_Geriatric_Depression_Scale_GDS_Short_Summary_Score_Month_06 + <br />
4*X_Assessment_Non.Motor_Geriatric_Depression_Scale_GDS_Short_Summary_Score_Month_12 + <br />
6*X_Assessment_Non.Motor_Geriatric_Depression_Scale_GDS_Short_Summary_Score_Month_24<br />
'<br />
<br />
fit4 <- growth(model4, data=myData)<br />
summary(fit4)<br />
parameterEstimates(fit4) # extracts the values of the estimated parameters, the standard errors, <br />
# the z-values, the standardized parameter values, and returns a data frame <br />
fitted(fit4) # return the model-implied (fitted) covariance matrix (and mean vector) of a fitted model<br />
<br />
<br />
# resid() function return (unstandardized) residuals of a fitted model including the difference between <br />
# the observed and implied covariance matrix and mean vector<br />
resid(fit4)<br />
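To see what resid() computes, the observed and model-implied moments can be compared directly (a sketch, assuming fit4 from above and the lavaan package; lavInspect() is lavaan's model-inspection utility):<br />
<br />
 # hedged check: unstandardized residuals = observed moments minus implied moments<br />
 obs <- lavInspect(fit4, "sampstat")   # observed covariance matrix and mean vector<br />
 imp <- fitted(fit4)                   # model-implied moments<br />
 range(obs$\$$cov - imp$\$$cov - resid(fit4)$\$$cov)   # should be numerically zero<br />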
<br />
==Measures of model quality (Comparative Fit Index (CFI), Root Mean Square Error of Approximation (RMSEA))==<br />
<br />
# report the fit measures as a signature vector: Comparative Fit Index (CFI), Root Mean Square Error of <br />
# Approximation (RMSEA)<br />
fitMeasures(fit4, c("cfi", "rmsea", "srmr"))<br />
<br />
====Comparative Fit Index====<br />
<br />
(CFI) is an incremental fit measure based directly on the non-centrality of the model. If d = χ<sup>2</sup> − df, where df denotes the degrees of freedom of the model, the Comparative Fit Index is:<br />
$<br />
CFI=\frac{d(\text{Null Model})-d(\text{Proposed Model})}{d(\text{Null Model})}.<br />
$<br />
<br />
$0≤CFI≤1$ (by definition). It is interpreted as:<br />
<br />
*$CFI<0.9$ - poor fit, <br />
<br />
*$0.9≤CFI≤0.95$ - marginal fit, <br />
<br />
*$CFI>0.95$ - good fit. <br />
<br />
CFI is a relative index of model fit – it compares the fit of your model to the fit of the (worst-fitting) null model.<br />
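The CFI definition above can be verified numerically from lavaan's reported chi-square statistics for the fitted and baseline (null) models (a sketch, assuming fit4 from above):<br />
<br />
 # hedged check: recompute CFI from the user- and null-model chi-square statistics<br />
 fm <- fitMeasures(fit4, c("chisq", "df", "baseline.chisq", "baseline.df"))<br />
 d.proposed <- max(fm["chisq"] - fm["df"], 0)               # non-centrality of the proposed model<br />
 d.null <- max(fm["baseline.chisq"] - fm["baseline.df"], 0) # non-centrality of the null model<br />
 1 - d.proposed/d.null   # should agree with fitMeasures(fit4, "cfi")<br />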
<br />
====Root Mean Square Error of Approximation====<br />
(RMSEA), pronounced “Ramsey”, is an absolute measure of fit based on the non-centrality parameter: <br />
<br />
$\sqrt{\frac{\chi^2-df}{df\times(N - 1)}}$,<br />
<br />
where N is the sample size and df is the degrees of freedom of the model. If χ<sup>2</sup> < df, then RMSEA := 0. The chi-square-to-df ratio imposes a penalty for model complexity. The RMSEA is a popular measure of model fit. <br />
<br />
*RMSEA < 0.01 - excellent fit, <br />
<br />
*RMSEA < 0.05 - good fit, <br />
<br />
*RMSEA > 0.10 - cutoff for poorly fitting models.<br />
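Following the formula above, the RMSEA can also be recomputed by hand (a sketch assuming fit4 from above; lavaan's internal computation may use N rather than N − 1, so small discrepancies are possible):<br />
<br />
 # hedged check: recompute RMSEA from the chi-square statistic and degrees of freedom<br />
 fm <- fitMeasures(fit4, c("chisq", "df"))<br />
 N <- lavInspect(fit4, "nobs")   # sample size<br />
 sqrt(max(fm["chisq"] - fm["df"], 0)/(fm["df"]*(N - 1)))   # compare with fitMeasures(fit4, "rmsea")<br />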
<br />
====Standardized Root Mean Square Residual==== <br />
(SRMR) is an absolute measure of fit defined as the standardized difference between the observed and the model-predicted correlations. A value of zero indicates perfect fit. The SRMR has no penalty for model complexity; SRMR < 0.08 is considered a good fit.<br />
<br />
# inspect the model results (report parameter table)<br />
inspect(fit4)<br />
<br />
#install.packages("semTools")<br />
# library("semTools")<br />
<br />
<b><u>A Simpler Model (fit5)</u></b><br />
<br />
model5 <- '<br />
# intercept and slope with fixed coefficients<br />
i =~ UPDRS_Part_I_Summary_Score_Baseline + UPDRS_Part_I_Summary_Score_Month_03 + UPDRS_Part_I_Summary_Score_Month_24<br />
s =~ 0*UPDRS_Part_I_Summary_Score_Baseline + 1*UPDRS_Part_I_Summary_Score_Month_03 + 6*UPDRS_Part_I_Summary_Score_Month_24<br />
# regressions<br />
i ~ R_fusiform_gyrus_Volume + Weight + ResearchGroup + Age + chr12_rs34637584_GT <br />
s ~ R_fusiform_gyrus_Volume + Weight + ResearchGroup + Age + chr12_rs34637584_GT<br />
# time-varying covariates<br />
UPDRS_Part_I_Summary_Score_Baseline ~ Weight<br />
UPDRS_Part_I_Summary_Score_Month_03 ~ ResearchGroup <br />
UPDRS_Part_I_Summary_Score_Month_24 ~ Age<br />
'<br />
<br />
fit5 <- growth(model5, data=myData)<br />
summary(fit5); fitMeasures(fit5, c("cfi", "rmsea", "srmr"))<br />
parameterEstimates(fit5) # extracts the values of the estimated parameters, the standard errors, <br />
# the z-values, the standardized parameter values, and returns a data frame<br />
<br />
lavaan (0.5-18) converged normally after 99 iterations<br />
Number of observations 661<br />
Estimator ML<br />
Minimum Function Test Statistic 3.703<br />
Degrees of freedom 1<br />
P-value (Chi-square) 0.054<br />
Parameter estimates:<br />
Information Expected<br />
Standard Errors Standard<br />
Estimate Std.err Z-value P(>|z|)<br />
Latent variables:<br />
i =~<br />
UPDRS_P_I_S_S 1.000<br />
UPDRS_P_I_S_S 1.074<br />
UPDRS_P_I_S_S 1.172<br />
s =~<br />
UPDRS_P_I_S_S 0.000<br />
UPDRS_P_I_S_S 1.000<br />
UPDRS_P_I_S_S 6.000<br />
<br />
Regressions:<br />
i ~<br />
R_fsfrm_gyr_V 0.000<br />
Weight 0.003<br />
ResearchGroup -0.880<br />
Age -0.009<br />
c12_34637584_ -0.907<br />
s ~<br />
R_fsfrm_gyr_V -0.000<br />
Weight -0.000<br />
ResearchGroup -0.084<br />
Age 0.002<br />
c12_34637584_ -0.047<br />
UPDRS_Part_I_Summary_Score_Baseline ~<br />
Weight -0.000<br />
UPDRS_Part_I_Summary_Score_Month_03 ~<br />
ResearchGroup 0.693<br />
UPDRS_Part_I_Summary_Score_Month_24 ~<br />
Age -0.002<br />
<br />
Covariances:<br />
i ~~<br />
s 0.074<br />
<br />
Intercepts:<br />
UPDRS_P_I_S_S 0.000<br />
UPDRS_P_I_S_S 0.000<br />
UPDRS_P_I_S_S 0.000<br />
i 1.633<br />
s -0.023<br />
<br />
Variances:<br />
UPDRS_P_I_S_S 1.017<br />
UPDRS_P_I_S_S 1.093<br />
UPDRS_P_I_S_S 2.993<br />
i 1.019<br />
s -0.025<br />
<br />
<b>cfi rmsea srmr</b><br />
<b>0.996 0.064 0.008</b><br />
<br />
fitted(fit5) # return the model-implied (fitted) covariance matrix (and mean vector) of a fitted model<br />
# write.table(fitted(fit5), file="C:\\Users\\Dinov\\Desktop\\test1.txt")<br />
<br />
# resid() function return (unstandardized) residuals of a fitted model including the difference between <br />
# the observed and implied covariance matrix and mean vector<br />
resid(fit5)<br />
<br />
# report the fit measures as a signature vector<br />
fitMeasures(fit5, c("cfi", "rmsea", "srmr")) # comparative fit index (CFI)<br />
<br />
# inspect the model results (report parameter table)<br />
inspect(fit5)<br />
<br />
<b>Note:</b> See discussion of SEM modeling pros/cons <sup>2</sup>.<br />
<br />
==Generalized Estimating Equation (GEE) Modeling==<br />
<br />
Generalized Estimating Equations (GEE) modeling<sup>3</sup> is used for analyzing data with the following characteristics:<br />
(1) the observations within a group may be correlated, (2) observations in separate clusters are independent, (3) a monotone transformation of the expectation is linearly related to the explanatory variables, and (4) the variance is a function of the expectation. The expectation (#3) and the variance (#4) are conditional on group-level or individual-level covariates.<br />
<br />
GEE is applied to handle correlated discrete and continuous outcome variables. It requires specification of only the first two moments of the outcome variables and the correlation among them. The goal is to estimate fixed parameters without specifying their joint distribution. The working correlation is specified by one of these 4 alternatives, set via the corstr argument in the R call: geeglm(outcome ~ center + treat + sex + baseline + age, data = respiratory, family = "binomial", id = id, <b>corstr = "exchangeable"</b>, scale.fix = TRUE):<br />
<br />
<center>[[Image:SMHS_BigDataBigSci8.png|300px]]</center><br />
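As a sketch (assuming the geepack package and its respiratory data are loaded, as below), the four working-correlation structures can be compared by refitting the same model under each corstr value; if your version of geepack provides QIC(), a lower QIC suggests a better working structure:<br />
<br />
 # hedged sketch: refit the same GEE model under each working-correlation structure<br />
 for (cs in c("independence", "exchangeable", "ar1", "unstructured")) {<br />
   gee.fit <- geeglm(outcome ~ center + treat + sex + baseline + age, data = respiratory,<br />
                     family = "binomial", id = id, corstr = cs, scale.fix = TRUE)<br />
   print(c(cs, round(QIC(gee.fit)[1], 2)))   # QIC() availability depends on the geepack version<br />
 }<br />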
<br />
===Respiratory Illness GEE R example===<br />
<br />
This example is based on a data set on respiratory illness <sup>4</sup> and the <b>geepack</b> package. The data come from a clinical study of treatment effects on patients with respiratory illness: N=111 patients from 2 clinical centers were randomized to receive either placebo or active treatment. Four temporal examinations assessed the <b>respiratory state</b> of each patient as good (=1) or poor (=0). Explanatory variables characterizing a patient were: <b>center</b> (1, 2), <b>treatment</b> (A=active, P=placebo), <b>sex</b> (M=male, F=female), and <b>age</b> (in years) at baseline. The values of the covariates were constant across the repeated observations on each patient.<br />
<br />
<b>Table 1</b> shows the number of patients for the response patterns across the 4 visits, split by baseline status and treatment. Patients with baseline respiratory status = 0 tend to have either a low or a high number of positive responses, while those with baseline respiratory status = 1 tend to respond positively. <b>Table 2</b> describes the distribution of the number of positive responses per patient by sex and center.<br />
<br />
# library("geepack")<br />
<br />
<b>Table 1</b>: Distribution of patients for <b>different response patterns</b> classified by <b>baseline-respiratory</b> response and <b>treatment</b>. The patterns are ordered according to increasing numbers of positive responses.<br />
<br />
<center><br />
{| class="wikitable" style="text-align:center; width:75%" border="1"<br />
|-<br />
! ||Visit|| colspan="15"| All Possible Response Patterns (2*2*2*2=16 permutation patterns)||<br />
|-<br />
|||1||0||1||0||0||0||1||1||1||0||0||1||1||1||0||1||<br />
|-<br />
|||2||0||0||1||0||0||1||0||0||1||0||1||1||0||1||1||<br />
|-<br />
|||3||0||0||0||1||0||0||1||0||1||1||1||0||1||1||1||<br />
|-<br />
|||4||0||0||0||0||1||0||0||1||0||1||0||1||1||1||1||<br />
|-<br />
!Baseline||Treatment||||||||||||||||||||||||||||||||Sum<br />
|-<br />
| rowspan="2"|0||A||7||2||2||2||1||0||1||0||1||0||1||2||0||4||7||30<br />
|-<br />
|P||18||1||0||2||1||2||0||0||1||0||0||1||2||0||3||31<br />
|-<br />
|rowspan="2"|1||A||0||0||0||0||0||0||1||1||0||0||4||0||1||0||17||24<br />
|-<br />
|P||1||4||1||0||0||0||0||1||1||3||1||1||2||1||10||26<br />
|-<br />
|Sum||||26||7||3||4||2||2||2||2||3||3||6||4||5||5||37||111<br />
|}<br />
</center><br />
<br />
<br />
<b>Table 2</b>: Distribution of patients for the number of positive responses across the 4 visits for <b>Sex</b> and <b>Center</b>. <br />
<br />
<center><br />
{| class="wikitable" style="text-align:center; width:75%" border="1"<br />
|-<br />
! colspan="2" rowspan="2"| ||colspan="5"|Number of positive responses<br />
|-<br />
| 0||1||2||3||4<br />
|-<br />
|rowspan="2"|Sex || F||7||3||3||3||7<br />
|-<br />
|M||19||13||9||17||30<br />
|-<br />
|rowspan="2"|Center|| 1||18||9||6||11||12<br />
|-<br />
|2||8||7||6||9||25<br />
|}<br />
</center><br />
<br />
<b>Figure 1</b> plots age against the proportion of positive responses for each patient and suggests a quadratic relationship between the proportions and age. We first fit a logistic regression model to the data, which would be appropriate if there were <i>no time effects</i> and <i>no spread in the response probabilities</i> for patients with the same covariate values.<br />
<br />
# install.packages("geepack")<br />
library("geepack")<br />
<br />
# data include a clinical trial of 111 patients with respiratory illness from two different clinics were randomized to receive either <br />
# placebo (P) or an active (A) treatment. Patients were examined at baseline and at four visits during treatment. <br />
# At each examination, respiratory status (categorized as 1 = good, 0 = poor)<br />
data("respiratory")<br />
head(respiratory)<br />
myData <- respiratory<br />
<br />
<center>head(myData)<br />
{| class="wikitable" style="text-align:center; " border="1"<br />
|-<br />
|||Center||ID||Treat||Sex||Age||Baseline||Visit||Outcome<br />
|-<br />
|1 ||1||1||P||M||46||0||1||0<br />
|-<br />
|2 ||1||1||P||M||46||0||2||0<br />
|-<br />
|3 ||1||1||P||M||46||0||3||0<br />
|-<br />
|4 ||1||1||P||M||46||0||4||0<br />
|-<br />
|5||1||2||P||M||28||0||1||0<br />
|-<br />
|6||1||2||P||M||28||0||2||0<br />
|}<br />
</center><br />
<br />
# Get proportions of positive responses<br />
responses <- factor(myData$\$$outcome, labels = c("OutcomePositive", "OutcomeNegative"))<br />
data.frame <- data.frame(responses, myData$\$$age)<br />
head(data.frame)<br />
tab <- prop.table(table(data.frame), 1); tab # compute proportions<br />
sum(tab[1,]) # check proportions (sums to 1.0)?<br />
prop <- tab[1,] # save the proportions of positive responses for each patient<br />
plot(as.numeric(dimnames(tab)$\$$myData.age), tab[1,], xlab = "Age", ylab = "Proportion of Positive Outcomes")<br />
# dimnames(tab) # to see/inspect positive/negative outcomes<br />
<br />
[[Image:SMHS_BigDataBigSci9.png|500px]]<br />
<br />
x <- as.numeric(dimnames(tab)$\$$myData.age)<br />
poly <- loess( prop ~ x) # fit a Local Polynomial Regression Fitting<br />
plot(x, prop)<br />
lines(predict(poly), col='red', lwd=2)<br />
<br />
smoothingSpline <- smooth.spline(x, prop, spar=0.6)<br />
plot(x, prop)<br />
lines(smoothingSpline, col='red', lwd=1.5)<br />
smoothPolySpline <- smooth.spline(x, predict(poly), spar=0.6)<br />
lines(smoothPolySpline, col='blue', lwd=2)<br />
legend("topright", inset=.05, title="Polynomial regression models", c("Raw Poly","Smooth Poly"), fill=c('red', 'blue'), horiz=TRUE)<br />
<br />
[[Image:SMHS_BigDataBigSci10.png|500px]]<br />
<br />
model.glm <- <b>glm</b>(outcome ~ baseline + center + sex + treat + age + I(age^2), data = respiratory, family = binomial)<br />
<br />
summary(model.glm)<br />
<br />
<center>Deviance Residuals: <br />
{| class="wikitable" style="text-align:center; " border="1"<br />
|-<br />
|Min||1Q||Median||3Q||Max<br />
|-<br />
| -2.5951||-0.9108||0.4034||0.8336||2.0951<br />
|}<br />
</center><br />
<br />
<center>Coefficients:<br />
{| class="wikitable" style="text-align:center; " border="1"<br />
|-<br />
|||Estimate||Std. Error||z value||$Pr( \gt |z|)$ <br />
|-<br />
|(Intercept)||3.3579727||1.0285292||3.265||0.0011 **<br />
|-<br />
|baseline||1.8850421||0.2482959||7.592||3.15e-14 ***<br />
|-<br />
|center||0.5099244||0.2453982||2.078||0.0377 *<br />
|-<br />
|sexM||-0.4510595||0.3166570||-1.424||0.1543<br />
|-<br />
|Treatp||-1.3231587||0.2431603||-5.442||5.28e-08 ***<br />
|-<br />
|age||-0.2072815||0.0472538||-4.387||1.15e-05 ***<br />
|-<br />
|I(age^2)||0.0025650||0.0006324||4.056||4.99e-05 ***<br />
|}<br />
</center><br />
<br />
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1<br />
<br />
(Dispersion parameter for binomial family taken to be 1)<br />
<br />
Null deviance: 609.41 on 443 degrees of freedom<br />
<br />
Residual deviance: 468.62 on 437 degrees of freedom<br />
<br />
AIC: 482.62<br />
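Because the logistic model reports effects on the log-odds scale, it can help to convert the coefficients into odds ratios with approximate Wald confidence intervals (a sketch assuming model.glm from above):<br />
<br />
 # hedged: odds ratios and 95% Wald CIs for the logistic fit<br />
 exp(cbind(OR = coef(model.glm), confint.default(model.glm)))<br />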
<br />
The correlation matrix of the outcome measures across visits is shown in <b>Table 3.</b><br />
<br />
attach(myData)<br />
 mat1 <- matrix(c(outcome[visit==1], outcome[visit==2], outcome[visit==3], <br />
 outcome[visit==4]), ncol = 4)<br />
cor(mat1)<br />
<br />
<b>Table 3</b>: Correlation matrix for the outcome measurements at different visits.<br />
<br />
<center><br />
{| class="wikitable" style="text-align:center; " border="1"<br />
|-<br />
|||[,1]||[,2]||[,3]||[,4]<br />
|-<br />
|[,1]||1.0000000||0.5087944||0.4431438||0.5139016<br />
|-<br />
|[,2]||0.5087944||1.0000000||0.5821877||0.5301611<br />
|-<br />
|[,3]||0.4431438||0.5821877||1.0000000||0.5871276<br />
|-<br />
|[,4]||0.5139016||0.5301611||0.5871276||1.0000000<br />
|}<br />
</center><br />
<br />
# We can also examine for multicollinearity problem, using the correlation matrix for X<br />
cor(model.matrix(model.glm)[,-1])<br />
<br />
# GEE modeling: R function arguments/options<br />
<br />
*<b>corstr</b>= for defining the correlation structure within groups in a GEE model<br />
<br />
*<b>id</b>= is used to identify the grouping variable in a GEE model<br />
<br />
*<b>scale.fix</b>= when TRUE causes the scale parameter to be fixed (by default at 1) rather than estimated<br />
<br />
*<b>waves</b>= names a positive integer-valued variable that is used to identify the order and spacing of observations within groups in a GEE model. This argument is crucial when there are missing values and gaps in the data<br />
<br />
gee.model1 <- <b>geeglm</b>(outcome ~ center + treat + sex + baseline + age, data = respiratory, family = "binomial", id = id, corstr = "exchangeable", scale.fix = TRUE)<br />
<br />
# The column labeled <b>Wald</b> in the summary table is the square of the z-statistic. The reported p-values are the <br />
# upper-tail probabilities from a chi-square distribution with 1 df, testing whether the true parameter value ≠ 0.<br />
summary(gee.model1)<br />
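The relationship noted in the comment can be checked directly (a sketch assuming gee.model1 from above):<br />
<br />
 # hedged check: reproduce the Wald column and its p-values by hand<br />
 cf <- coef(summary(gee.model1))<br />
 wald <- (cf[, "Estimate"]/cf[, "Std.err"])^2   # square of the z-statistic<br />
 cbind(wald, p = pchisq(wald, df = 1, lower.tail = FALSE))<br />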
<br />
# To test the effect of ''treatment'' using anova()<br />
gee.model1 <- <b>geeglm</b>(outcome ~ center + <b><u>treat</u></b> + sex + baseline + age, data = respiratory, family=binomial(link="logit"), id = id, corstr = "exchangeable", std.err="san.se")<br />
gee.model2 <- geeglm(outcome ~ center + sex + baseline + age, data = respiratory, family=binomial(link="logit"), id=id, corstr = "exchangeable", std.err="san.se")<br />
anova(gee.model1, gee.model2)<br />
<br />
# To test whether a categorical predictor with more than two levels should be retained in a GEE model we need <br />
# to test the entire set of dummy variables simultaneously as a single construct. <br />
# The geepack package provides a method for the anova function for a multivariate Wald test<br />
# When the anova function is applied to a single geeglm object it returns sequential Wald tests for <br />
# individual predictors with the tests carried out in the order the predictors are listed in the model formula.<br />
anova(gee.model1)<br />
<br />
===PD GEE example===<br />
<br />
This example used the PPMI/PD data to show GEE analysis.<br />
<br />
<b># 05_PPMI_top_UPDRS_Integrated_LongFormat1.csv</b><br />
longData <- read.csv("https://umich.instructure.com/files/330397/download?download_frd=1",header=TRUE)<br />
<br />
# library("geepack")<br />
<br />
# Data Elements: FID_IID L_insular_cortex_ComputeArea L_insular_cortex_Volume R_insular_cortex_ComputeArea R_insular_cortex_Volume L_cingulate_gyrus_ComputeArea L_cingulate_gyrus_Volume R_cingulate_gyrus_ComputeArea R_cingulate_gyrus_Volume L_caudate_ComputeArea L_caudate_Volume R_caudate_ComputeArea R_caudate_Volume L_putamen_ComputeArea L_putamen_Volume R_putamen_ComputeArea R_putamen_Volume Sex Weight ResearchGroup Age chr12_rs34637584_GT chr17_rs11868035_GT chr17_rs11012_GT chr17_rs393152_GT chr17_rs12185268_GT chr17_rs199533_GT UPDRS_part_I UPDRS_part_II UPDRS_part_III time_visit<br />
<br />
dim(longData) <br />
<br />
data1 = na.omit(longData)<br />
attach(data1)<br />
ControlGroup <- ifelse(ResearchGroup == "Control", 1, 0)<br />
<br />
# these calculations take a long time!!!<br />
# if you get <i>“Error in geese.fit(xx, yy, id, offset, soffset, w, waves = waves, zsca, : <br />
# nrow(zsca) and length(y)</i> not match” – this indicates some of the variables are of different lengths<br />
# if you get <i>“glm.fit: algorithm did not converge”</i> – see this discussion: http://goo.gl/lrjBjB <br />
<br />
gee.model0 <- geeglm(ControlGroup ~ L_insular_cortex_ComputeArea+L_insular_cortex_Volume+ Sex + Weight + Age + chr17_rs11012_GT + chr17_rs199533_GT + UPDRS_part_I + UPDRS_part_II + time_visit, data = data1, family=binomial(link="logit"), id = FID_IID, corstr = "unstructured", std.err="san.se")<br />
<br />
gee.model1 <- geeglm(ControlGroup ~ L_insular_cortex_ComputeArea+L_insular_cortex_Volume+ R_putamen_ComputeArea + R_putamen_Volume + Sex + Weight + Age + chr17_rs11012_GT + chr17_rs199533_GT + UPDRS_part_I + UPDRS_part_II + time_visit, data = data1, family=binomial(link="logit"), id = FID_IID, corstr = "unstructured", std.err="san.se")<br />
<br />
# compare 2 gee models<br />
# anova(gee.model0,gee.model1)<br />
<br />
# you can try the “family = poisson(link = "log")” model for the ResearchGroup response, as well<br />
<br />
gee.model2 <- <b>geeglm</b>(ControlGroup <br />
~ L_insular_cortex_ComputeArea+L_insular_cortex_Volume+R_insular_cortex_ComputeArea+ R_insular_cortex_Volume +L_cingulate_gyrus_ComputeArea + L_cingulate_gyrus_Volume + R_cingulate_gyrus_ComputeArea + R_cingulate_gyrus_Volume + L_caudate_ComputeArea + L_caudate_Volume + R_caudate_ComputeArea + R_caudate_Volume + L_putamen_ComputeArea + L_putamen_Volume + R_putamen_ComputeArea + R_putamen_Volume + Sex + Weight + Age + chr12_rs34637584_GT + chr17_rs11868035_GT + chr17_rs11012_GT + chr17_rs393152_GT + chr17_rs12185268_GT + chr17_rs199533_GT + UPDRS_part_I + UPDRS_part_II + time_visit, data = data1, family=binomial(link="logit"), id = FID_IID, corstr = "unstructured", std.err="san.se")<br />
<br />
Remember that we do not interpret GEE coefficients as relating to individuals – GEE models are marginal models, and the conclusions drawn are population-averaged. Also, the time element in the model (time_visit) is just another controlling factor. <b>The effect-sizes (betas) associated with each variable/predictor represent the slopes associated with the corresponding covariate while holding time constant</b>. If we need to examine interactions (e.g., Weight change over time), we need to include an interaction term in the model (e.g., + Weight*time_visit).<br />
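A sketch of such an interaction model (with a hypothetical, reduced covariate set; assumes data1 and ControlGroup from above):<br />
<br />
 # hedged sketch: Weight-by-time interaction<br />
 gee.model3 <- geeglm(ControlGroup ~ Weight*time_visit + Age + Sex, data = data1,<br />
                      family = binomial(link="logit"), id = FID_IID,<br />
                      corstr = "unstructured", std.err = "san.se")<br />
 anova(gee.model3)   # the Weight:time_visit row tests whether the Weight effect varies over time<br />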
<br />
summary (gee.model2)<br />
<br />
# Individual Wald test and <b>confidence intervals</b> for each covariate<br />
predictors2 <- coef(summary(gee.model2))<br />
CI2 <- with(as.data.frame(predictors2), cbind(lwr=Estimate-1.96*Std.err, est=Estimate, upr=Estimate+1.96*Std.err))<br />
rownames(CI2) <- rownames(predictors2)<br />
CI2<br />
<br />
==Appendix==<br />
<br />
SEM References<br />
<br />
*http://socserv.mcmaster.ca/jfox/Misc/sem/SEM-paper.pdf <br />
<br />
GEE References<br />
<br />
*https://cran.r-project.org/web/packages/geepack/geepack.pdf<br />
<br />
*http://www.jstatsoft.org/v15/i02/paper<br />
<br />
===Footnotes===<br />
<br />
*<sup>2</sup> http://www.imachordata.com/ecological-sems-and-composite-variables-what-why-and-how/<br />
<br />
*<sup>3</sup> http://www.jstatsoft.org/v15/i02/ <br />
<br />
*<sup>4</sup> https://books.google.com/books?id=mdEqBgAAQBAJ<br />
<br />
==See also==<br />
* [[SMHS_BigDataBigSci| Back to Model-based Analytics]] <br />
* [[SMHS_BigDataBigSci_SEM| Structural Equation Modeling (SEM)]] <br />
* [[SMHS_BigDataBigSci_GEE| Next Section: Generalized Estimating Equation (GEE) Modeling]]<br />
<br />
<hr><br />
* SOCR Home page: http://www.socr.umich.edu<br />
<br />
{{translate|pageName=http://wiki.socr.umich.edu/index.php?title=SMHS_BigDataBigSci_GCM}}</div>
<hr />
<div>==[[SMHS_BigDataBigSci| Model-based Analytics]] - Growth Curve Models==<br />
<br />
Latent growth curve models may be used to analyze longitudinal or temporal data in which an outcome measure is assessed on multiple occasions and we examine its change over time; e.g., the trajectory over time can be<br />
modeled as a linear or quadratic function. Random effects capture individual differences and are conveniently represented as (continuous) latent variables, a.k.a. growth factors. To fit a linear growth model we may specify a model with two latent variables: a random intercept and a random slope:<br />
<br />
#load data <b>05_PPMI_top_UPDRS_Integrated_LongFormat.csv ( dim(myData) 661 71), wide</b> <br />
# setwd("/dir/")<br />
myData <- read.csv("https://umich.instructure.com/files/330395/download?download_frd=1&verifier=v6jBvV4x94ka3EYcGKuXXg5BZNaOLBVp0xkJih0H",header=TRUE)<br />
attach(myData)<br />
<br />
# dichotomize the "ResearchGroup" variable<br />
table(myData$\$$ResearchGroup)<br />
myData$\$$ResearchGroup <- ifelse(myData$\$$ResearchGroup == "Control", 1, 0)<br />
<br />
# linear growth model with 4 timepoints<br />
# intercept (i) and slope (s) with fixed coefficients<br />
# i =~ 1*t1 + 1*t2 + 1*t3 + 1*t4 (intercept/constant)<br />
# s =~ 0*t1 + 1*t2 + 2*t3 + 3*t4 (slope/linear term)<br />
# q =~ 0*t1 + 1*t2 + 4*t3 + 9*t4 (quadratic term, using squared time scores)<br />
<br />
In this model, we have fixed all the coefficients of the linear growth functions:<br />
<br />
model4 <-<br />
' <br />
i =~ 1*UPDRS_Part_I_Summary_Score_Baseline + 1*UPDRS_Part_I_Summary_Score_Month_03 + <br />
1*UPDRS_Part_I_Summary_Score_Month_06 + 1*UPDRS_Part_I_Summary_Score_Month_09 + <br />
1*UPDRS_Part_I_Summary_Score_Month_12 + 1*UPDRS_Part_I_Summary_Score_Month_18 + <br />
1*UPDRS_Part_I_Summary_Score_Month_24 + <br />
1*UPDRS_Part_II_Patient_Questionnaire_Summary_Score_Baseline + <br />
1*UPDRS_Part_II_Patient_Questionnaire_Summary_Score_Month_03 + <br />
1*UPDRS_Part_II_Patient_Questionnaire_Summary_Score_Month_06 + <br />
1*UPDRS_Part_II_Patient_Questionnaire_Summary_Score_Month_09 + <br />
1*UPDRS_Part_II_Patient_Questionnaire_Summary_Score_Month_12 + <br />
1*UPDRS_Part_II_Patient_Questionnaire_Summary_Score_Month_18 +<br />
1*UPDRS_Part_II_Patient_Questionnaire_Summary_Score_Month_24 + <br />
1*UPDRS_Part_III_Summary_Score_Baseline + 1*UPDRS_Part_III_Summary_Score_Month_03 + <br />
1*UPDRS_Part_III_Summary_Score_Month_06 + 1*UPDRS_Part_III_Summary_Score_Month_09 + <br />
1*UPDRS_Part_III_Summary_Score_Month_12 + 1*UPDRS_Part_III_Summary_Score_Month_18 + <br />
1*UPDRS_Part_III_Summary_Score_Month_24 + <br />
1*X_Assessment_Non.Motor_Epworth_Sleepiness_Scale_Summary_Score_Baseline + <br />
1*X_Assessment_Non.Motor_Epworth_Sleepiness_Scale_Summary_Score_Month_06 + <br />
1*X_Assessment_Non.Motor_Epworth_Sleepiness_Scale_Summary_Score_Month_12 + <br />
1*X_Assessment_Non.Motor_Epworth_Sleepiness_Scale_Summary_Score_Month_24 + <br />
1*X_Assessment_Non.Motor_Geriatric_Depression_Scale_GDS_Short_Summary_Score_Baseline + <br />
1*X_Assessment_Non.Motor_Geriatric_Depression_Scale_GDS_Short_Summary_Score_Month_06 +<br />
1*X_Assessment_Non.Motor_Geriatric_Depression_Scale_GDS_Short_Summary_Score_Month_12 + <br />
1*X_Assessment_Non.Motor_Geriatric_Depression_Scale_GDS_Short_Summary_Score_Month_24 <br />
s =~ 0*UPDRS_Part_I_Summary_Score_Baseline + 1*UPDRS_Part_I_Summary_Score_Month_03 + <br />
2*UPDRS_Part_I_Summary_Score_Month_06 + 3*UPDRS_Part_I_Summary_Score_Month_09 + <br />
4*UPDRS_Part_I_Summary_Score_Month_12 + 5*UPDRS_Part_I_Summary_Score_Month_18 + <br />
6*UPDRS_Part_I_Summary_Score_Month_24 +<br />
0*UPDRS_Part_II_Patient_Questionnaire_Summary_Score_Baseline + <br />
1*UPDRS_Part_II_Patient_Questionnaire_Summary_Score_Month_03 + <br />
2*UPDRS_Part_II_Patient_Questionnaire_Summary_Score_Month_06 + <br />
3*UPDRS_Part_II_Patient_Questionnaire_Summary_Score_Month_09 + <br />
4*UPDRS_Part_II_Patient_Questionnaire_Summary_Score_Month_12 + <br />
5*UPDRS_Part_II_Patient_Questionnaire_Summary_Score_Month_18 + <br />
6*UPDRS_Part_II_Patient_Questionnaire_Summary_Score_Month_24 + <br />
0*UPDRS_Part_III_Summary_Score_Baseline + 1*UPDRS_Part_III_Summary_Score_Month_03 + <br />
2*UPDRS_Part_III_Summary_Score_Month_06 + 3*UPDRS_Part_III_Summary_Score_Month_09 + <br />
4*UPDRS_Part_III_Summary_Score_Month_12 + 5*UPDRS_Part_III_Summary_Score_Month_18 + <br />
6*UPDRS_Part_III_Summary_Score_Month_24 + <br />
0*X_Assessment_Non.Motor_Epworth_Sleepiness_Scale_Summary_Score_Baseline + <br />
2*X_Assessment_Non.Motor_Epworth_Sleepiness_Scale_Summary_Score_Month_06 + <br />
4*X_Assessment_Non.Motor_Epworth_Sleepiness_Scale_Summary_Score_Month_12 +<br />
6*X_Assessment_Non.Motor_Epworth_Sleepiness_Scale_Summary_Score_Month_24 +<br />
0*X_Assessment_Non.Motor_Geriatric_Depression_Scale_GDS_Short_Summary_Score_Baseline + <br />
2*X_Assessment_Non.Motor_Geriatric_Depression_Scale_GDS_Short_Summary_Score_Month_06 + <br />
4*X_Assessment_Non.Motor_Geriatric_Depression_Scale_GDS_Short_Summary_Score_Month_12 + <br />
6*X_Assessment_Non.Motor_Geriatric_Depression_Scale_GDS_Short_Summary_Score_Month_24<br />
'<br />
<br />
fit4 <- growth(model4, data=myData)<br />
summary(fit4)<br />
parameterEstimates(fit4) # extracts the values of the estimated parameters, the standard errors, <br />
# the z-values, the standardized parameter values, and returns a data frame <br />
fitted(fit4) # return the model-implied (fitted) covariance matrix (and mean vector) of a fitted model<br />
<br />
<br />
# resid() function return (unstandardized) residuals of a fitted model including the difference between <br />
# the observed and implied covariance matrix and mean vector<br />
resid(fit4)<br />
<br />
==Measures of model quality (Comparative Fit Index (CFI), Root Mean Square Error of Approximation (RMSEA))==<br />
<br />
# report the fit measures as a signature vector: Comparative Fit Index (CFI), Root Mean Square Error of <br />
# Approximation (RMSEA)<br />
fitMeasures(fit4, c("cfi", "rmsea", "srmr"))<br />
<br />
====Comparative Fit Index====<br />
<br />
(CFI) is an incremental measure directly based on the non-centrality measure. If d = χ2(df) where df are the degrees of freedom of the model, the Comparative Fit Index is:<br />
$<br />
\frac{(Null Model)-d(Proposed Model)}{d(Null Model)}.<br />
$<br />
<br />
$0≤CFI≤1$ (by definition). It is interpreted as:<br />
<br />
*$CFI<0.9$ - model fitting is poor.<br />
<br />
*$0.9≤CFI≤0.95$ is considered marginal, <br />
<br />
*$CFI>0.95$ is good. <br />
<br />
CFI is a relative index of model fit – it compare the fit of your model to the fit of (the worst) fitting null model.<br />
<br />
====Root Mean Square Error of Approximation====<br />
(RMSEA) - “Ramsey”<br />
<br />
An absolute measure of fit based on the non-centrality parameter: <br />
<br />
$\sqrt{\frac{X^2-df}{df×(N - 1)}}$,<br />
<br />
where N the sample size and df the degrees of freedom of the model. If χ<sup>2</sup> < df, then the RMSEA∶=0. It has a penalty for complexity via the chi square to df ratio. The RMSEA is a popular measure of model fit. <br />
<br />
*RMSEA < 0.01, excellent, <br />
<br />
*RMSEA < 0.05, good <br />
<br />
*RMSEA > 0.10 cutoff for poor fitting models<br />
<br />
====Standardized Root Mean Square Residual==== <br />
(SRMR) is an absolute measure of fit defined as the standardized difference between the observed correlation and the predicted correlation. A value of zero indicates perfect fit. The SRMR has no penalty for model complexity. SRMR <0.08 is considered a good fit.<br />
<br />
# inspect the model results (report parameter table)<br />
inspect(fit4)<br />
<br />
#install.packages("semTools")<br />
# library("semTools")<br />
<br />
<b><u>A Simpler Model (fit5)</u></b><br />
<br />
model5 <- '<br />
# intercept and slope with fixed coefficients<br />
i =~ UPDRS_Part_I_Summary_Score_Baseline + UPDRS_Part_I_Summary_Score_Month_03 + UPDRS_Part_I_Summary_Score_Month_24<br />
s =~ 0*UPDRS_Part_I_Summary_Score_Baseline + 1*UPDRS_Part_I_Summary_Score_Month_03 + 6*UPDRS_Part_I_Summary_Score_Month_24<br />
# regressions<br />
i ~ R_fusiform_gyrus_Volume + Weight + ResearchGroup + Age + chr12_rs34637584_GT <br />
s ~ R_fusiform_gyrus_Volume + Weight + ResearchGroup + Age + chr12_rs34637584_GT<br />
# time-varying covariates<br />
UPDRS_Part_I_Summary_Score_Baseline ~ Weight<br />
UPDRS_Part_I_Summary_Score_Month_03 ~ ResearchGroup <br />
UPDRS_Part_I_Summary_Score_Month_24 ~ Age<br />
'<br />
<br />
fit5 <- growth(model5, data=myData)<br />
summary(fit5); fitMeasures(fit5, c("cfi", "rmsea", "srmr"))<br />
parameterEstimates(fit5) # extracts the values of the estimated parameters, the standard errors, <br />
# the z-values, the standardized parameter values, and returns a data frame<br />
<br />
lavaan (0.5-18) converged normally after 99 iterations<br />
Number of observations 661<br />
Estimator ML<br />
Minimum Function Test Statistic 3.703<br />
Degrees of freedom 1<br />
P-value (Chi-square) 0.054<br />
Parameter estimates:<br />
Information Expected<br />
Standard Errors Standard<br />
Estimate Std.err Z-value P(>|z|)<br />
Latent variables:<br />
i =~<br />
UPDRS_P_I_S_S 1.000<br />
UPDRS_P_I_S_S 1.074<br />
UPDRS_P_I_S_S 1.172<br />
s =~<br />
UPDRS_P_I_S_S 0.000<br />
UPDRS_P_I_S_S 1.000<br />
UPDRS_P_I_S_S 6.000<br />
<br />
Regressions:<br />
i ~<br />
R_fsfrm_gyr_V 0.000<br />
Weight 0.003<br />
ResearchGroup -0.880<br />
Age -0.009<br />
c12_34637584_ -0.907<br />
s ~<br />
R_fsfrm_gyr_V -0.000<br />
Weight -0.000<br />
ResearchGroup -0.084<br />
Age 0.002<br />
c12_34637584_ -0.047<br />
UPDRS_Part_I_Summary_Score_Baseline ~<br />
Weight -0.000<br />
UPDRS_Part_I_Summary_Score_Month_03 ~<br />
ResearchGroup 0.693<br />
UPDRS_Part_I_Summary_Score_Month_24 ~<br />
Age -0.002<br />
<br />
Covariances:<br />
i ~~<br />
s 0.074<br />
<br />
Intercepts:<br />
UPDRS_P_I_S_S 0.000<br />
UPDRS_P_I_S_S 0.000<br />
UPDRS_P_I_S_S 0.000<br />
i 1.633<br />
s -0.023<br />
<br />
Variances:<br />
UPDRS_P_I_S_S 1.017<br />
UPDRS_P_I_S_S 1.093<br />
UPDRS_P_I_S_S 2.993<br />
i 1.019<br />
s -0.025<br />
<br />
<b>cfi rmsea srmr</b><br />
<b>0.996 0.064 0.008</b><br />
<br />
fitted(fit5) # return the model-implied (fitted) covariance matrix (and mean vector) of a fitted model<br />
# write.table(fitted(fit5), file="C:\\Users\\Dinov\\Desktop\\test1.txt")<br />
<br />
# the resid() function returns the (unstandardized) residuals of a fitted model, i.e., the difference between <br />
# the observed and the model-implied covariance matrix and mean vector<br />
resid(fit5)<br />
<br />
# report the fit measures as a signature vector<br />
fitMeasures(fit5, c("cfi", "rmsea", "srmr")) # comparative fit index (CFI), RMSEA, and SRMR<br />
<br />
# inspect the model results (report parameter table)<br />
inspect(fit5)<br />
<br />
<b>Note:</b> See discussion of SEM modeling pros/cons <sup>2</sup>.<br />
<br />
==Generalized Estimating Equation (GEE) Modeling==<br />
<br />
Generalized Estimating Equations (GEE) modeling<sup>3</sup> is used for analyzing data with the following characteristics:<br />
(1) observations within a cluster may be correlated, (2) observations in separate clusters are independent, (3) a monotone transformation of the expectation is linearly related to the explanatory variables, and (4) the variance is a function of the expectation. The expectation (#3) and the variance (#4) are conditional on group-level or individual-level covariates.<br />
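These conditions can be written compactly. For measurement $j$ on cluster $i$, with covariates $x_{ij}$ and mean $\mu_{ij}=E(Y_{ij})$, GEE specifies only the first two moments through a monotone link function $g$ and a variance function $v$:<br />
<br />
<center>$g(\mu_{ij}) = x_{ij}^{T}\beta$ and $Var(Y_{ij}) = \phi \, v(\mu_{ij}),$</center><br />
<br />
where $\phi$ is a scale (dispersion) parameter; no joint distribution of the outcomes is assumed.<br />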
<br />
GEE handles correlated discrete and continuous outcome variables. It requires specification of only the first 2 moments of the outcomes and the correlation among them; the goal is to estimate the fixed parameters without specifying the outcomes' joint distribution. The working correlation is chosen from one of these 4 alternatives via the <b>corstr</b> argument of the R call geeglm(outcome ~ center + treat + sex + baseline + age, data = respiratory, family = "binomial", id = id, <b>corstr = "exchangeable"</b>, scale.fix = TRUE):<br />
<br />
<center>[[Image:SMHS_BigDataBigSci8.png|300px]]</center><br />
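To make these working-correlation alternatives concrete, the following base-R sketch (illustrative, not part of the original example) constructs the matrices implied by three common <b>corstr</b> choices, for a cluster of 4 repeated measures and a working correlation parameter alpha = 0.5:<br />

```r
# Working correlation matrices for one cluster of n = 4 repeated measures
n <- 4
alpha <- 0.5                                       # working correlation parameter
R_indep <- diag(n)                                 # corstr = "independence"
R_exch  <- matrix(alpha, n, n); diag(R_exch) <- 1  # corstr = "exchangeable": all pairs share alpha
R_ar1   <- alpha^abs(outer(1:n, 1:n, "-"))         # corstr = "ar1": correlation decays with lag
round(R_ar1, 3)
```

The "unstructured" option instead estimates all $n(n-1)/2$ pairwise correlations freely, at the cost of many more parameters.<br />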
<br />
===Respiratory Illness GEE R example===<br />
<br />
This example is based on a data set on respiratory illness <sup>4</sup> and the <b>geepack</b> package. The data come from a clinical study of treatment effects on patients with respiratory illness: N=111 patients from 2 clinical centers were randomized to receive either placebo or active treatment. 4 temporal examinations assessed the <b>respiratory state</b> of each patient as good (=1) or poor (=0). Explanatory variables characterizing a patient were: <b>center</b> (1, 2), <b>treatment</b> (A=active, P=placebo), <b>sex</b> (M=male, F=female), and <b>age</b> (in years) at baseline. The values of the covariates were constant across the repeated observations on each patient.<br />
<br />
<b>Table 1</b> shows the number of patients for each response pattern across the 4 visits, split by baseline status and treatment. Patients with baseline respiratory status = 0 appear to have either a low or a high number of positive responses, whereas patients with baseline respiratory status = 1 tend to respond positively. <b>Table 2</b> describes the distribution of the number of positive responses per patient by sex and center.<br />
<br />
# library("geepack")<br />
<br />
<b>Table 1</b>: Distribution of patients for <b>different response patterns</b> classified by <b>baseline-respiratory</b> response and <b>treatment</b>. The patterns are ordered according to increasing numbers of positive responses.<br />
<br />
<center><br />
{| class="wikitable" style="text-align:center; width:75%" border="1"<br />
|-<br />
! ||Visit|| colspan="15"| All Possible Response Patterns (2*2*2*2=16 permutation patterns)||<br />
|-<br />
|||1||0||1||0||0||0||1||1||1||0||0||1||1||1||0||1||<br />
|-<br />
|||2||0||0||1||0||0||1||0||0||1||0||1||1||0||1||1||<br />
|-<br />
|||3||0||0||0||1||0||0||1||0||1||1||1||0||1||1||1||<br />
|-<br />
|||4||0||0||0||0||1||0||0||1||0||1||0||1||1||1||1||<br />
|-<br />
!Baseline||Treatment||||||||||||||||||||||||||||||||Sum<br />
|-<br />
| rowspan="2"|0||A||7||2||2||2||1||0||1||0||1||0||1||2||0||4||7||30<br />
|-<br />
|P||18||1||0||2||1||2||0||0||1||0||0||1||2||0||3||31<br />
|-<br />
|rowspan="2"|1||A||0||0||0||0||0||0||1||1||0||0||4||0||1||0||17||24<br />
|-<br />
|P||1||4||1||0||0||0||0||1||1||3||1||1||2||1||10||26<br />
|-<br />
|Sum||||26||7||3||4||2||2||2||2||3||3||6||4||5||5||37||111<br />
|}<br />
</center><br />
<br />
<br />
<b>Table 2</b>: Distribution of patients for the number of positive responses across the 4 visits for <b>Sex</b> and <b>Center</b>. <br />
<br />
<center><br />
{| class="wikitable" style="text-align:center; width:75%" border="1"<br />
|-<br />
! colspan="2" rowspan="2"| ||colspan="5"|Number of positive responses<br />
|-<br />
| 0||1||2||3||4<br />
|-<br />
|rowspan="2"|Sex || F||7||3||3||3||7<br />
|-<br />
|M||19||13||9||17||30<br />
|-<br />
|rowspan="2"|Center|| 1||18||9||6||11||12<br />
|-<br />
|2||8||7||6||9||25<br />
|}<br />
</center><br />
<br />
<b>Figure 1</b> shows a plot of age against the proportion of positive responses for each patient. It suggests a quadratic relationship between the proportions and age. We first fit a logistic model to the data (which would be appropriate if there were <i>no time effects</i> and <i>no spread in the response probabilities</i> for patients with the same covariate values).<br />
<br />
# install.packages("geepack")<br />
library("geepack")<br />
<br />
# data include a clinical trial of 111 patients with respiratory illness from two different clinics were randomized to receive either <br />
# placebo (P) or an active (A) treatment. Patients were examined at baseline and at four visits during treatment. <br />
# At each examination, respiratory status (categorized as 1 = good, 0 = poor)<br />
data("respiratory")<br />
head(respiratory)<br />
myData <- respiratory<br />
<br />
<center>head(myData)<br />
{| class="wikitable" style="text-align:center; " border="1"<br />
|-<br />
|||Center||ID||Treat||Sex||Age||Baseline||Visit||Outcome<br />
|-<br />
|1 ||1||1||P||M||46||0||1||0<br />
|-<br />
|2 ||1||1||P||M||46||0||2||0<br />
|-<br />
|3 ||1||1||P||M||46||0||3||0<br />
|-<br />
|4 ||1||1||P||M||46||0||4||0<br />
|-<br />
|5||1||2||P||M||28||0||1||0<br />
|-<br />
|6||1||2||P||M||28||0||2||0<br />
|}<br />
</center><br />
<br />
# Get the proportion of positive responses at each age<br />
# note: outcome is coded 1 = good (positive), 0 = poor (negative)<br />
tab <- prop.table(table(myData$\$$age, myData$\$$outcome), margin = 1) # proportions within each age<br />
head(tab)<br />
sum(tab[1,]) # check proportions (sums to 1.0)?<br />
prop <- tab[, "1"] # save the proportion of positive responses at each age<br />
plot(as.numeric(rownames(tab)), prop, xlab = "Age", ylab = "Proportion of Positive Outcomes")<br />
# dimnames(tab) # to see/inspect the age and outcome levels<br />
<br />
[[Image:SMHS_BigDataBigSci9.png|500px]]<br />
<br />
x <- as.numeric(rownames(tab))<br />
poly <- loess( prop ~ x) # fit a Local Polynomial Regression Fitting<br />
plot(x, prop)<br />
lines(predict(poly), col='red', lwd=2)<br />
<br />
smoothingSpline <- smooth.spline(x, prop, spar=0.6)<br />
plot(x, prop)<br />
lines(smoothingSpline, col='red', lwd=1.5)<br />
smoothPolySpline <- smooth.spline(x, predict(poly), spar=0.6)<br />
lines(smoothPolySpline, col='blue', lwd=2)<br />
legend("topright", inset=.05, title="Polynomial regression models", c("Raw Poly","Smooth Poly"), fill=c('red', 'blue'), horiz=TRUE)<br />
<br />
[[Image:SMHS_BigDataBigSci10.png|500px]]<br />
<br />
model.glm <- <b>glm</b>(outcome ~ baseline + center + sex + treat + age + I(age^2), data = respiratory, family = binomial)<br />
<br />
summary(model.glm)<br />
<br />
<center>Deviance Residuals: <br />
{| class="wikitable" style="text-align:center; " border="1"<br />
|-<br />
|Min||1Q||Median||3Q||Max<br />
|-<br />
| -2.5951||-0.9108||0.4034||0.8336||2.0951<br />
|}<br />
</center><br />
<br />
<center>Coefficients:<br />
{| class="wikitable" style="text-align:center; " border="1"<br />
|-<br />
|||Estimate||Std. Error||z value||$Pr( \gt |z|)$ <br />
|-<br />
|(Intercept)||3.3579727||1.0285292||3.265||0.0011 **<br />
|-<br />
|baseline||1.8850421||0.2482959||7.592||3.15e-14 ***<br />
|-<br />
|center||0.5099244||0.2453982||2.078||0.0377 *<br />
|-<br />
|sexM||-0.4510595||0.3166570||-1.424||0.1543<br />
|-<br />
|Treatp||-1.3231587||0.2431603||-5.442||5.28e-08 ***<br />
|-<br />
|age||-0.2072815||0.0472538||-4.387||1.15e-05 ***<br />
|-<br />
|I(age^2)||0.0025650||0.0006324||4.056||4.99e-05 ***<br />
|}<br />
</center><br />
<br />
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1<br />
<br />
(Dispersion parameter for binomial family taken to be 1)<br />
<br />
Null deviance: 609.41 on 443 degrees of freedom<br />
<br />
Residual deviance: 468.62 on 437 degrees of freedom<br />
<br />
AIC: 482.62<br />
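The negative linear and positive quadratic age coefficients imply a U-shaped age effect. As a quick arithmetic check (using the estimates printed above), the fitted log-odds of a positive response are minimized near age 40:<br />

```r
# Vertex of the fitted quadratic in age: log-odds = ... + b1*age + b2*age^2
b1 <- -0.2072815          # age coefficient from the summary above
b2 <-  0.0025650          # I(age^2) coefficient from the summary above
age_min <- -b1 / (2 * b2) # age minimizing the fitted log-odds
age_min                   # ~ 40.4 years
```

This is consistent with the quadratic dip suggested by Figure 1.<br />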
<br />
The correlation matrix of the outcome measures across visits is shown in <b>Table 3</b>.<br />
<br />
attach(myData)<br />
mat1 <- matrix(c(outcome[visit==1], outcome [visit==2], outcome [visit==3], <br />
outcome[visit==4]), ncol = 4)<br />
cor(mat1)<br />
<br />
<b>Table 3</b>: Correlation matrix for the outcome measurements at different visits.<br />
<br />
<center><br />
{| class="wikitable" style="text-align:center; " border="1"<br />
|-<br />
|||[,1]||[,2]||[,3]||[,4]<br />
|-<br />
|[,1]||1.0000000||0.5087944||0.4431438||0.5139016<br />
|-<br />
|[,2]||0.5087944||1.0000000||0.5821877||0.5301611<br />
|-<br />
|[,3]||0.4431438||0.5821877||1.0000000||0.5871276<br />
|-<br />
|[,4]||0.5139016||0.5301611||0.5871276||1.0000000<br />
|}<br />
</center><br />
<br />
# We can also check for multicollinearity, using the correlation matrix of the design matrix X<br />
cor(model.matrix(model.glm)[,-1])<br />
<br />
# GEE modeling: R function arguments/options<br />
<br />
*<b>corstr</b>= for defining the correlation structure within groups in a GEE model<br />
<br />
*<b>id</b>= is used to identify the grouping variable in a GEE model<br />
<br />
*<b>scale.fix</b>= when TRUE causes the scale parameter to be fixed (by default at 1) rather than estimated<br />
<br />
*<b>waves</b>= names a positive integer-valued variable that is used to identify the order and spacing of observations within groups in a GEE model. This argument is crucial when there are missing values and gaps in the data<br />
<br />
gee.model1 <- <b>geeglm</b>(outcome ~ center + treat + sex + baseline + age, data = respiratory, family = "binomial", id = id, corstr = "exchangeable", scale.fix = TRUE)<br />
<br />
# The column labeled <b>Wald</b> in the summary table is the square of the z-statistic. The reported p-values are the <br />
# upper-tail probabilities from a chi-square(1) distribution and test whether the true parameter value ≠ 0.<br />
summary(gee.model1)<br />
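For example, the equivalence between the upper-tail χ²(1) probability of the Wald statistic and the usual two-sided z-test p-value can be verified numerically:<br />

```r
# The Wald statistic is z^2; its chi-square(1) upper-tail probability
# equals the two-sided normal p-value for z
z <- 2.078                         # e.g., the center z-value from the earlier glm fit
p_wald <- 1 - pchisq(z^2, df = 1)  # upper-tail chi-square(1) probability
p_z    <- 2 * pnorm(-abs(z))       # two-sided z-test p-value
c(p_wald = p_wald, p_z = p_z)      # the two p-values coincide
```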
<br />
# To test the effect of ''treatment'' using anova()<br />
gee.model1 <- <b>geeglm</b>(outcome ~ center + <b><u>treat</u></b> + sex + baseline + age, data = respiratory, family=binomial(link="logit"), id = id, corstr = "exchangeable", std.err="san.se")<br />
gee.model2 <- geeglm(outcome ~ center + sex + baseline + age, data = respiratory, family=binomial(link="logit"), id=id, corstr = "exchangeable", std.err="san.se")<br />
anova(gee.model1, gee.model2)<br />
<br />
# To test whether a categorical predictor with more than two levels should be retained in a GEE model we need <br />
# to test the entire set of dummy variables simultaneously as a single construct. <br />
# The geepack package provides a method for the anova function for a multivariate Wald test<br />
# When the anova function is applied to a single geeglm object it returns sequential Wald tests for <br />
# individual predictors with the tests carried out in the order the predictors are listed in the model formula.<br />
anova(gee.model1)<br />
<br />
===PD GEE example===<br />
<br />
This example used the PPMI/PD data to show GEE analysis.<br />
<br />
<b># 05_PPMI_top_UPDRS_Integrated_LongFormat1.csv</b><br />
longData <- read.csv("https://umich.instructure.com/files/330397/download?download_frd=1",header=TRUE)<br />
<br />
# library("geepack")<br />
<br />
# Data Elements: FID_IID L_insular_cortex_ComputeArea L_insular_cortex_Volume R_insular_cortex_ComputeArea R_insular_cortex_Volume L_cingulate_gyrus_ComputeArea L_cingulate_gyrus_Volume R_cingulate_gyrus_ComputeArea R_cingulate_gyrus_Volume L_caudate_ComputeArea L_caudate_Volume R_caudate_ComputeArea R_caudate_Volume L_putamen_ComputeArea L_putamen_Volume R_putamen_ComputeArea R_putamen_Volume Sex Weight ResearchGroup Age chr12_rs34637584_GT chr17_rs11868035_GT chr17_rs11012_GT chr17_rs393152_GT chr17_rs12185268_GT chr17_rs199533_GT UPDRS_part_I UPDRS_part_II UPDRS_part_III time_visit<br />
<br />
dim(longData) <br />
<br />
data1 = na.omit(longData)<br />
attach(data1)<br />
ControlGroup <- ifelse(ResearchGroup == "Control", 1, 0)<br />
<br />
# these calculations take a long time!!!<br />
# if you get <i>“Error in geese.fit(xx, yy, id, offset, soffset, w, waves = waves, zsca, : <br />
# nrow(zsca) and length(y) not match”</i> – this indicates some of the variables are of different lengths<br />
# if you get <i>“glm.fit: algorithm did not converge”</i> – see this discussion: http://goo.gl/lrjBjB <br />
<br />
gee.model0 <- geeglm(ControlGroup ~ L_insular_cortex_ComputeArea+L_insular_cortex_Volume+ Sex + Weight + Age + chr17_rs11012_GT + chr17_rs199533_GT + UPDRS_part_I + UPDRS_part_II + time_visit, data = data1, family=binomial(link="logit"), id = FID_IID, corstr = "unstructured", std.err="san.se")<br />
<br />
gee.model1 <- geeglm(ControlGroup ~ L_insular_cortex_ComputeArea+L_insular_cortex_Volume+ R_putamen_ComputeArea + R_putamen_Volume + Sex + Weight + Age + chr17_rs11012_GT + chr17_rs199533_GT + UPDRS_part_I + UPDRS_part_II + time_visit, data = data1, family=binomial(link="logit"), id = FID_IID, corstr = "unstructured", std.err="san.se")<br />
<br />
# compare 2 gee models<br />
# anova(gee.model0,gee.model1)<br />
<br />
# you can try the “family = poisson(link = "log")” model for the ResearchGroup response, as well<br />
<br />
gee.model2 <- <b>geeglm</b>(ControlGroup <br />
~ L_insular_cortex_ComputeArea+L_insular_cortex_Volume+R_insular_cortex_ComputeArea+ R_insular_cortex_Volume +L_cingulate_gyrus_ComputeArea + L_cingulate_gyrus_Volume + R_cingulate_gyrus_ComputeArea + R_cingulate_gyrus_Volume + L_caudate_ComputeArea + L_caudate_Volume + R_caudate_ComputeArea + R_caudate_Volume + L_putamen_ComputeArea + L_putamen_Volume + R_putamen_ComputeArea + R_putamen_Volume + Sex + Weight + Age + chr12_rs34637584_GT + chr17_rs11868035_GT + chr17_rs11012_GT + chr17_rs393152_GT + chr17_rs12185268_GT + chr17_rs199533_GT + UPDRS_part_I + UPDRS_part_II + time_visit, data = data1, family=binomial(link="logit"), id = FID_IID, corstr = "unstructured", std.err="san.se")<br />
<br />
Remember that we do not interpret GEE coefficients as relating to individuals – GEE models are marginal models, and the conclusions are interpreted as population-based. Also, the time element in the model (time_visit) is just another controlling factor. <b>The effect-sizes (betas) associated with each variable/predictor represent the slopes associated with the corresponding covariate, while holding time constant</b>. If we need to examine interactions (e.g., Weight change over Time), we need to include an interaction term in the model (i.e., + Weight*time_visit).<br />
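In R's formula language, a term like Weight*time_visit expands to both main effects plus their interaction, which can be verified without fitting anything:<br />

```r
# a*b in a model formula expands to a + b + a:b
f <- ControlGroup ~ Weight * time_visit
attr(terms(f), "term.labels")
# "Weight"  "time_visit"  "Weight:time_visit"
```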
<br />
summary (gee.model2)<br />
<br />
# Individual Wald test and <b>confidence intervals</b> for each covariate<br />
predictors2 <- coef(summary(gee.model2))<br />
CI2 <- with(as.data.frame(predictors2), cbind(lwr=Estimate-1.96*Std.err, est=Estimate, upr=Estimate+1.96*Std.err))<br />
rownames(CI2) <- rownames(predictors2)<br />
CI2<br />
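Because the model uses a logit link, the estimates and Wald limits above are on the log-odds scale; exponentiating maps them to odds ratios. A self-contained sketch with a hypothetical (made-up) coefficient and sandwich standard error:<br />

```r
# Convert a log-odds estimate and its 95% Wald limits to the odds-ratio scale
est <- 0.5                # hypothetical coefficient (log-odds)
se  <- 0.2                # hypothetical robust (sandwich) standard error
logit_ci <- c(lwr = est - 1.96 * se, est = est, upr = est + 1.96 * se)
or_ci <- exp(logit_ci)    # odds ratio and 95% CI
round(or_ci, 3)
```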
<br />
==Appendix==<br />
<br />
SEM References<br />
<br />
*http://socserv.mcmaster.ca/jfox/Misc/sem/SEM-paper.pdf <br />
<br />
GEE References<br />
<br />
*https://cran.r-project.org/web/packages/geepack/geepack.pdf<br />
<br />
*http://www.jstatsoft.org/v15/i02/paper<br />
<br />
===Footnotes===<br />
<br />
* <sup>3</sup> http://www.jstatsoft.org/v15/i02/ <br />
<br />
* <sup>4</sup> https://books.google.com/books?id=mdEqBgAAQBAJ<br />
<br />
==See also==<br />
* [[SMHS_BigDataBigSci| Back to Model-based Analytics]] <br />
* [[SMHS_BigDataBigSci_SEM| Structural Equation Modeling (SEM)]] <br />
* [[SMHS_BigDataBigSci_GEE| Next Section: Generalized Estimating Equation (GEE) Modeling]]<br />
<br />
<hr><br />
* SOCR Home page: http://www.socr.umich.edu<br />
<br />
{{translate|pageName=http://wiki.socr.umich.edu/index.php?title=SMHS_BigDataBigSci_GCM}}</div>
<hr />
<div>==[[SMHS_BigDataBigSci_SEM| Structural Equation Modeling (SEM)]] - Hands-on Example 2 (Parkinson’s Disease data) ==<br />
<br />
# Data: PPMI Integrated imaging, demographics, genetics, clinical and cognitive (UPDRS) data. <br />
# Dinov et al., 2016<br />
<br />
<center><br />
{| class="wikitable" style="text-align:center; width:75%" border="1"<br />
|-<br />
!Index||FID_IID||L_cingulate_gyrus_ComputeArea||L_cingulate_gyrus_Volume||R_cingulate_gyrus_ComputeArea||R_cingulate_gyrus_Volume||L_caudate_ComputeArea||L_caudate_Volume||R_caudate_ComputeArea||R_caudate_Volume||L_putamen_ComputeArea||L_putamen_Volume||R_putamen_ComputeArea||R_putamen_Volume||L_hippocampus_ComputeArea||L_hippocampus_Volume||R_hippocampus_ComputeArea||R_hippocampus_Volume||cerebellum_ComputeArea||cerebellum_Volume||L_fusiform_gyrus_ComputeArea||L_fusiform_gyrus_Volume||R_fusiform_gyrus_ComputeArea||R_fusiform_gyrus_Volume||Sex||Weight||ResearchGroup||Age||chr12_rs34637584_GT||chr17_rs11868035_GT||chr17_rs11012_GT||chr17_rs393152_GT||chr17_rs12185268_GT||UPDRS_part_I||UPDRS_part_II||UPDRS_part_III||UPDRS_part_IV||time_visit<br />
|-<br />
|2||3001||4381.93||11205.13||4610.447||12246.55||621.5344||821.8991||1302.146||2526.248||1029.175||1543.017||1680.197||3792.201||1769.672||4737.038||1578.946||3817.621||20909.58||185742.6||4534.707||15830.32||3945.037||14471.84||1||74.2||PD||65.1808||0||1||1||1||1||0||2||12||NA||0<br />
|-<br />
|2||3001||4381.93||11205.13||4610.447||12246.55||621.5344||821.8991||1302.146||2526.248||1029.175||1543.017||1680.197||3792.201||1769.672||4737.038||1578.946||3817.621||20909.58||185742.6||4534.707||15830.32||3945.037||14471.84||1||74.2||PD||65.1808||0||1||1||1||1||0||2||18||NA||42<br />
|-<br />
|2||3001||4381.93||11205.13||4610.447||12246.55||621.5344||821.8991||1302.146||2526.248||1029.175||1543.017||1680.197||3792.201||1769.672||4737.038||1578.946||3817.621||20909.58||185742.6||4534.707||15830.32||3945.037||14471.84||1||74.2||PD||65.1808||0||1||1||1||1||0||3||23||NA||24<br />
|-<br />
|2||3001||4381.93||11205.13||4610.447||12246.55||621.5344||821.8991||1302.146||2526.248||1029.175||1543.017||1680.197||3792.201||1769.672||4737.038||1578.946||3817.621||20909.58||185742.6||4534.707||15830.32||3945.037||14471.84||1||74.2||PD||65.1808||0||1||1||1||1||1||3||19||NA||9<br />
|-<br />
|2||3001||4381.93||11205.13||4610.447||12246.55||621.5344||821.8991||1302.146||2526.248||1029.175||1543.017||1680.197||3792.201||1769.672||4737.038||1578.946||3817.621||20909.58||185742.6||4534.707||15830.32||3945.037||14471.84||1||74.2||PD||65.1808||0||1||1||1||1||4||3||20||NA||0<br />
|-<br />
|2||3001||4381.93||11205.13||4610.447||12246.55||621.5344||821.8991||1302.146||2526.248||1029.175||1543.017||1680.197||3792.201||1769.672||4737.038||1578.946||3817.621||20909.58||185742.6||4534.707||15830.32||3945.037||14471.84||1||74.2||PD||65.1808||0||1||1||1||1||1||4||29||NA||42<br />
|-<br />
|2||3001||4381.93||11205.13||4610.447||12246.55||621.5344||821.8991||1302.146||2526.248||1029.175||1543.017||1680.197||3792.201||1769.672||4737.038||1578.946||3817.621||20909.58||185742.6||4534.707||15830.32||3945.037||14471.84||1||74.2||PD||65.1808||0||1||1||1||1||0||2||39||NA||24<br />
|-<br />
|2||3001||4381.93||11205.13||4610.447||12246.55||621.5344||821.8991||1302.146||2526.248||1029.175||1543.017||1680.197||3792.201||1769.672||4737.038||1578.946||3817.621||20909.58||185742.6||4534.707||15830.32||3945.037||14471.84||1||74.2||PD||65.1808||0||1||1||1||1||0||5||25||NA||9<br />
|-<br />
|2||3001||4381.93||11205.13||4610.447||12246.55||621.5344||821.8991||1302.146||2526.248||1029.175||1543.017||1680.197||3792.201||1769.672||4737.038||1578.946||3817.621||20909.58||185742.6||4534.707||15830.32||3945.037||14471.84||1||74.2||PD||65.1808||0||1||1||1||1||1||6||34||NA||0<br />
|-<br />
|2||3001||4381.93||11205.13||4610.447||12246.55||621.5344||821.8991||1302.146||2526.248||1029.175||1543.017||1680.197||3792.201||1769.672||4737.038||1578.946||3817.621||20909.58||185742.6||4534.707||15830.32||3945.037||14471.84||1||74.2||PD||65.1808||0||1||1||1||1||1||11||42||0||42<br />
|-<br />
|2||3001||4381.93||11205.13||4610.447||12246.55||621.5344||821.8991||1302.146||2526.248||1029.175||1543.017||1680.197||3792.201||1769.672||4737.038||1578.946||3817.621||20909.58||185742.6||4534.707||15830.32||3945.037||14471.84||1||74.2||PD||65.1808||0||1||1||1||1||1||5||39||0||24<br />
|-<br />
|2||3001||4381.93||11205.13||4610.447||12246.55||621.5344||821.8991||1302.146||2526.248||1029.175||1543.017||1680.197||3792.201||1769.672||4737.038||1578.946||3817.621||20909.58||185742.6||4534.707||15830.32||3945.037||14471.84||1||74.2||PD||65.1808||0||1||1||1||1||NA||NA||NA||NA||9<br />
|-<br />
|3||3002||3221.54||7439.645||3194.348||7264.683||876.9414||1364.86||1056.22||1965.206||1275.905||2696.695||1375.725||2966.682||1529.759||3736.04||1799.439||4665.168||17627.01||155632.3||4013.385||12677.99||3551.876||11263.23||2||70.6||PD||67.6247||0||1||0||0||0||3||15||17||NA||3<br />
|-<br />
|3||3002||3221.54||7439.645||3194.348||7264.683||876.9414||1364.86||1056.22||1965.206||1275.905||2696.695||1375.725||2966.682||1529.759||3736.04||1799.439||4665.168||17627.01||155632.3||4013.385||12677.99||3551.876||11263.23||2||70.6||PD||67.6247||0||1||0||0||0||2||10||22||NA||48<br />
|-<br />
|3||3002||3221.54||7439.645||3194.348||7264.683||876.9414||1364.86||1056.22||1965.206||1275.905||2696.695||1375.725||2966.682||1529.759||3736.04||1799.439||4665.168||17627.01||155632.3||4013.385||12677.99||3551.876||11263.23||2||70.6||PD||67.6247||0||1||0||0||0||NA||NA||NA||NA||30<br />
|-<br />
|3||3002||3221.54||7439.645||3194.348||7264.683||876.9414||1364.86||1056.22||1965.206||1275.905||2696.695||1375.725||2966.682||1529.759||3736.04||1799.439||4665.168||17627.01||155632.3||4013.385||12677.99||3551.876||11263.23||2||70.6||PD||67.6247||0||1||0||0||0||1||16||20||NA||12<br />
|-<br />
|3||3002||3221.54||7439.645||3194.348||7264.683||876.9414||1364.86||1056.22||1965.206||1275.905||2696.695||1375.725||2966.682||1529.759||3736.04||1799.439||4665.168||17627.01||155632.3||4013.385||12677.99||3551.876||11263.23||2||70.6||PD||67.6247||0||1||0||0||0||3||15||27||0||3<br />
|-<br />
|3||3002||3221.54||7439.645||3194.348||7264.683||876.9414||1364.86||1056.22||1965.206||1275.905||2696.695||1375.725||2966.682||1529.759||3736.04||1799.439||4665.168||17627.01||155632.3||4013.385||12677.99||3551.876||11263.23||2||70.6||PD||67.6247||0||1||0||0||0||4||16||22||0||48<br />
|-<br />
|3||3002||3221.54||7439.645||3194.348||7264.683||876.9414||1364.86||1056.22||1965.206||1275.905||2696.695||1375.725||2966.682||1529.759||3736.04||1799.439||4665.168||17627.01||155632.3||4013.385||12677.99||3551.876||11263.23||2||70.6||PD||67.6247||0||1||0||0||0||8||14||22||0||30<br />
|-<br />
|3||3002||3221.54||7439.645||3194.348||7264.683||876.9414||1364.86||1056.22||1965.206||1275.905||2696.695||1375.725||2966.682||1529.759||3736.04||1799.439||4665.168||17627.01||155632.3||4013.385||12677.99||3551.876||11263.23||2||70.6||PD||67.6247||0||1||0||0||0||4||13||24||1||12<br />
|-<br />
|3||3002||3221.54||7439.645||3194.348||7264.683||876.9414||1364.86||1056.22||1965.206||1275.905||2696.695||1375.725||2966.682||1529.759||3736.04||1799.439||4665.168||17627.01||155632.3||4013.385||12677.99||3551.876||11263.23||2||70.6||PD||67.6247||0||1||0||0||0||4||16||31||4||3<br />
|-<br />
|3||3002||3221.54||7439.645||3194.348||7264.683||876.9414||1364.86||1056.22||1965.206||1275.905||2696.695||1375.725||2966.682||1529.759||3736.04||1799.439||4665.168||17627.01||155632.3||4013.385||12677.99||3551.876||11263.23||2||70.6||PD||67.6247||0||1||0||0||0||6||14||19||4||48<br />
|-<br />
|3||3002||3221.54||7439.645||3194.348||7264.683||876.9414||1364.86||1056.22||1965.206||1275.905||2696.695||1375.725||2966.682||1529.759||3736.04||1799.439||4665.168||17627.01||155632.3||4013.385||12677.99||3551.876||11263.23||2||70.6||PD||67.6247||0||1||0||0||0||5||18||29||3||30<br />
|}<br />
</center><br />
<br />
# install.packages("lavaan") <br />
library(lavaan)<br />
#load data 05_PPMI_top_UPDRS_Integrated_LongFormat1.csv ( dim(myData) 1764 31 )<br />
# setwd("/dir/")<br />
myData <- read.csv("https://umich.instructure.com/files/330397/download?download_frd=1&verifier=3bYRT9FXgBGMCQv8MNxsclWnMgodiJRYo3ODFtDq",header=TRUE)<br />
<br />
# dichotomize the "ResearchGroup" variable<br />
myData$\$$ResearchGroup <- ifelse(myData$\$$ResearchGroup == "Control", 1, 0)<br />
<br />
# Data elements: Index FID_IID L_cingulate_gyrus_ComputeArea L_cingulate_gyrus_Volume <br />
R_cingulate_gyrus_ComputeArea R_cingulate_gyrus_Volume L_caudate_ComputeArea <br />
L_caudate_Volume R_caudate_ComputeArea R_caudate_Volume <br />
L_putamen_ComputeArea L_putamen_Volume R_putamen_ComputeArea <br />
R_putamen_Volume L_hippocampus_ComputeArea L_hippocampus_Volume R_hippocampus_ComputeArea <br />
R_hippocampus_Volume cerebellum_ComputeArea <br />
cerebellum_Volume L_fusiform_gyrus_ComputeArea L_fusiform_gyrus_Volume R_fusiform_gyrus_ComputeArea <br />
R_fusiform_gyrus_Volume Sex Weight ResearchGroup Age chr12_rs34637584_GT chr17_rs11868035_GT chr17_rs11012_GT chr17_rs393152_GT <br />
chr17_rs12185268_GT UPDRS_Part_I_Summary_Score_Baseline<br />
UPDRS_Part_I_Summary_Score_Month_03 UPDRS_Part_I_Summary_Score_Month_06 UPDRS_Part_I_Summary_Score_Month_09 UPDRS_Part_I_Summary_Score_Month_12 UPDRS_Part_I_Summary_Score_Month_18 UPDRS_Part_I_Summary_Score_Month_24 UPDRS_Part_II_Patient_Questionnaire_Summary_Score_Baseline UPDRS_Part_II_Patient_Questionnaire_Summary_Score_Month_03 UPDRS_Part_II_Patient_Questionnaire_Summary_Score_Month_06 UPDRS_Part_II_Patient_Questionnaire_Summary_Score_Month_09 UPDRS_Part_II_Patient_Questionnaire_Summary_Score_Month_12 UPDRS_Part_II_Patient_Questionnaire_Summary_Score_Month_18 UPDRS_Part_II_Patient_Questionnaire_Summary_Score_Month_24 UPDRS_Part_III_Summary_Score_Baseline UPDRS_Part_III_Summary_Score_Month_03 UPDRS_Part_III_Summary_Score_Month_06 UPDRS_Part_III_Summary_Score_Month_09 UPDRS_Part_III_Summary_Score_Month_12 UPDRS_Part_III_Summary_Score_Month_18 UPDRS_Part_III_Summary_Score_Month_24 X_Assessment_Non.Motor_Epworth_Sleepiness_Scale_Summary_Score_Baseline X_Assessment_Non.Motor_Epworth_Sleepiness_Scale_Summary_Score_Month_06 X_Assessment_Non.Motor_Epworth_Sleepiness_Scale_Summary_Score_Month_12 X_Assessment_Non.Motor_Epworth_Sleepiness_Scale_Summary_Score_Month_24 X_Assessment_Non.Motor_Geriatric_Depression_Scale_GDS_Short_Summary_Score_Baseline X_Assessment_Non.Motor_Geriatric_Depression_Scale_GDS_Short_Summary_Score_Month_06 X_Assessment_Non.Motor_Geriatric_Depression_Scale_GDS_Short_Summary_Score_Month_12 X_Assessment_Non.Motor_Geriatric_Depression_Scale_GDS_Short_Summary_Score_Month_24<br />
<br />
====Validation of the measurement model====<br />
<br />
myData<-within(myData, {<br />
L_cingulate_gyrus_ComputeArea <- lm(L_cingulate_gyrus_ComputeArea ~ L_cingulate_gyrus_Volume+R_cingulate_gyrus_ComputeArea+R_cingulate_gyrus_Volume+L_caudate_ComputeArea+L_caudate_Volume+R_caudate_ComputeArea+R_caudate_Volume+L_putamen_ComputeArea+L_putamen_Volume+R_putamen_ComputeArea+R_putamen_Volume+L_hippocampus_ComputeArea+L_hippocampus_Volume+R_hippocampus_ComputeArea+R_hippocampus_Volume+cerebellum_ComputeArea+cerebellum_Volume+L_fusiform_gyrus_ComputeArea+L_fusiform_gyrus_Volume+R_fusiform_gyrus_ComputeArea+R_fusiform_gyrus_Volume, data=myData)$\$$residuals<br />
Weight <- lm(Weight ~ Sex+ResearchGroup+Age+chr12_rs34637584_GT+chr17_rs11868035_GT+chr17_rs11012_GT+chr17_rs393152_GT+chr17_rs12185268_GT, data=myData)$\$$residuals<br />
UPDRS_Part_I_Summary_Score_Baseline <- lm(UPDRS_Part_I_Summary_Score_Baseline ~ UPDRS_Part_I_Summary_Score_Month_03+UPDRS_Part_I_Summary_Score_Month_06+UPDRS_Part_I_Summary_Score_Month_09+UPDRS_Part_I_Summary_Score_Month_12+UPDRS_Part_I_Summary_Score_Month_18+UPDRS_Part_I_Summary_Score_Month_24+UPDRS_Part_II_Patient_Questionnaire_Summary_Score_Baseline+UPDRS_Part_II_Patient_Questionnaire_Summary_Score_Month_03+UPDRS_Part_II_Patient_Questionnaire_Summary_Score_Month_06+UPDRS_Part_II_Patient_Questionnaire_Summary_Score_Month_09+UPDRS_Part_II_Patient_Questionnaire_Summary_Score_Month_12+UPDRS_Part_II_Patient_Questionnaire_Summary_Score_Month_18+UPDRS_Part_II_Patient_Questionnaire_Summary_Score_Month_24+UPDRS_Part_III_Summary_Score_Baseline+UPDRS_Part_III_Summary_Score_Month_03+UPDRS_Part_III_Summary_Score_Month_06+UPDRS_Part_III_Summary_Score_Month_09+UPDRS_Part_III_Summary_Score_Month_12+UPDRS_Part_III_Summary_Score_Month_18+UPDRS_Part_III_Summary_Score_Month_24+X_Assessment_Non.Motor_Epworth_Sleepiness_Scale_Summary_Score_Baseline+X_Assessment_Non.Motor_Epworth_Sleepiness_Scale_Summary_Score_Month_06+X_Assessment_Non.Motor_Epworth_Sleepiness_Scale_Summary_Score_Month_12+X_Assessment_Non.Motor_Epworth_Sleepiness_Scale_Summary_Score_Month_24+X_Assessment_Non.Motor_Geriatric_Depression_Scale_GDS_Short_Summary_Score_Baseline+X_Assessment_Non.Motor_Geriatric_Depression_Scale_GDS_Short_Summary_Score_Month_06+X_Assessment_Non.Motor_Geriatric_Depression_Scale_GDS_Short_Summary_Score_Month_12+X_Assessment_Non.Motor_Geriatric_Depression_Scale_GDS_Short_Summary_Score_Month_24, data=myData)$\$$residuals })<br />
<br />
====Structural Model====<br />
<br />
# Next, proceed with the structural model including the residuals from data to account for effects of site.<br />
<br />
Lavaan model specification:<br />
<br />
<center><br />
{| class="wikitable" style="text-align:center; " border="1"<br />
|-<br />
!formula type||operator||mnemonic<br />
|-<br />
|latent variable definition||=~||is measured by<br />
|-<br />
|regression||~||is regressed on<br />
|-<br />
|(residual) (co)variance||~~||is correlated with<br />
|-<br />
|intercept||~ 1||intercept<br />
|}<br />
</center><br />
<br />
For example,<br />
myModel <-<br />
<b># regressions</b><br />
y1 + y2 <mark>~</mark> f1 + f2 + x1 + x2<br />
f1 ~ f2 + f3<br />
f2 ~ f3 + x1 + x2<br />
<br />
<b># latent variable definitions</b><br />
f1 <mark>=~</mark> y1 + y2 + y3<br />
f2 =~ y4 + y5 + y6<br />
f3 =~ y7 + y8 + y9 + y10<br />
<br />
<b># variances and covariances</b><br />
y1 <mark>~~</mark> y1<br />
y1 ~~ y2<br />
f1 ~~ f2<br />
<br />
<b># intercepts</b><br />
y1 <mark>~</mark> 1<br />
f1 ~ 1<br />
model1 <-<br />
'<br />
# latent variable definitions - defining how the latent variables are “manifested by” a set of observed <br />
# (or manifest) variables, aka “indicators”<br />
# (1) Measurement Model <br />
Imaging =~ L_cingulate_gyrus_ComputeArea+L_cingulate_gyrus_Volume<br />
DemoGeno =~ Weight+Sex+Age<br />
UPDRS =~ UPDRS_Part_I_Summary_Score_Baseline+UPDRS_Part_I_Summary_Score_Month_03<br />
<br />
# (2) Regressions <br />
ResearchGroup ~ Imaging + DemoGeno + UPDRS <br />
'<br />
model2 <-<br />
'<br />
# latent variable definitions - defining how the latent variables are “manifested by” a set of observed <br />
# (or manifest) variables, aka “indicators”<br />
# (1) Measurement Model <br />
Imaging =~ L_cingulate_gyrus_ComputeArea+L_cingulate_gyrus_Volume+R_cingulate_gyrus_ComputeArea+R_cingulate_gyrus_Volume+L_caudate_ComputeArea+L_caudate_Volume+R_caudate_ComputeArea+R_caudate_Volume+L_putamen_ComputeArea+L_putamen_Volume+R_putamen_ComputeArea+R_putamen_Volume+L_hippocampus_ComputeArea+L_hippocampus_Volume+R_hippocampus_ComputeArea+R_hippocampus_Volume+cerebellum_ComputeArea+cerebellum_Volume+L_fusiform_gyrus_ComputeArea+L_fusiform_gyrus_Volume+R_fusiform_gyrus_ComputeArea+R_fusiform_gyrus_Volume<br />
DemoGeno =~ Weight+Sex+Age+chr12_rs34637584_GT+chr17_rs11868035_GT+chr17_rs11012_GT+chr17_rs393152_GT+chr17_rs12185268_GT<br />
UPDRS =~ UPDRS_Part_I_Summary_Score_Baseline+UPDRS_Part_I_Summary_Score_Month_03+UPDRS_Part_I_Summary_Score_Month_06+UPDRS_Part_I_Summary_Score_Month_09+UPDRS_Part_I_Summary_Score_Month_12+UPDRS_Part_I_Summary_Score_Month_18+UPDRS_Part_I_Summary_Score_Month_24+UPDRS_Part_II_Patient_Questionnaire_Summary_Score_Baseline+UPDRS_Part_II_Patient_Questionnaire_Summary_Score_Month_03+UPDRS_Part_II_Patient_Questionnaire_Summary_Score_Month_06+UPDRS_Part_II_Patient_Questionnaire_Summary_Score_Month_09+UPDRS_Part_II_Patient_Questionnaire_Summary_Score_Month_12+UPDRS_Part_II_Patient_Questionnaire_Summary_Score_Month_18+UPDRS_Part_II_Patient_Questionnaire_Summary_Score_Month_24+UPDRS_Part_III_Summary_Score_Baseline+UPDRS_Part_III_Summary_Score_Month_03+UPDRS_Part_III_Summary_Score_Month_06+UPDRS_Part_III_Summary_Score_Month_09+UPDRS_Part_III_Summary_Score_Month_12+UPDRS_Part_III_Summary_Score_Month_18+UPDRS_Part_III_Summary_Score_Month_24+X_Assessment_Non.Motor_Epworth_Sleepiness_Scale_Summary_Score_Baseline+X_Assessment_Non.Motor_Epworth_Sleepiness_Scale_Summary_Score_Month_06+X_Assessment_Non.Motor_Epworth_Sleepiness_Scale_Summary_Score_Month_12+X_Assessment_Non.Motor_Epworth_Sleepiness_Scale_Summary_Score_Month_24+X_Assessment_Non.Motor_Geriatric_Depression_Scale_GDS_Short_Summary_Score_Baseline+X_Assessment_Non.Motor_Geriatric_Depression_Scale_GDS_Short_Summary_Score_Month_06+X_Assessment_Non.Motor_Geriatric_Depression_Scale_GDS_Short_Summary_Score_Month_12+X_Assessment_Non.Motor_Geriatric_Depression_Scale_GDS_Short_Summary_Score_Month_24<br />
<br />
# (2) Regressions <br />
# ResearchGroup ~ Imaging + DemoGeno + UPDRS <br />
# transform cat variable to numeric:<br />
# myData$\$$ResearchGroup <- ifelse(myData$\$$ResearchGroup == "Control", 0, <br />
# ifelse(myData$\$$ResearchGroup == "PD", 2, 1))<br />
RG_ranked ~ Imaging + DemoGeno + UPDRS<br />
<br />
# (3) Residual Variances<br />
L_insular_cortex_ComputeArea ~~ L_insular_cortex_ComputeArea<br />
L_insular_cortex_Volume ~~ L_insular_cortex_Volume<br />
R_insular_cortex_ComputeArea ~~ R_insular_cortex_ComputeArea<br />
R_insular_cortex_Volume ~~ R_insular_cortex_Volume<br />
L_cingulate_gyrus_ComputeArea ~~ L_cingulate_gyrus_ComputeArea<br />
L_cingulate_gyrus_Volume ~~ L_cingulate_gyrus_Volume<br />
R_cingulate_gyrus_ComputeArea ~~ R_cingulate_gyrus_ComputeArea<br />
R_cingulate_gyrus_Volume ~~ R_cingulate_gyrus_Volume<br />
L_caudate_ComputeArea ~~ L_caudate_ComputeArea<br />
L_caudate_Volume ~~ L_caudate_Volume<br />
R_caudate_ComputeArea ~~ R_caudate_ComputeArea<br />
R_caudate_Volume ~~ R_caudate_Volume<br />
L_putamen_ComputeArea ~~ L_putamen_ComputeArea<br />
L_putamen_Volume ~~ L_putamen_Volume<br />
R_putamen_ComputeArea ~~ R_putamen_ComputeArea<br />
R_putamen_Volume ~~ R_putamen_Volume<br />
L_hippocampus_ComputeArea ~~ L_hippocampus_ComputeArea<br />
L_hippocampus_Volume ~~ L_hippocampus_Volume<br />
R_hippocampus_ComputeArea ~~ R_hippocampus_ComputeArea<br />
R_hippocampus_Volume ~~ R_hippocampus_Volume<br />
cerebellum_ComputeArea ~~ cerebellum_ComputeArea<br />
cerebellum_Volume ~~ cerebellum_Volume<br />
L_fusiform_gyrus_ComputeArea ~~ L_fusiform_gyrus_ComputeArea<br />
L_fusiform_gyrus_Volume ~~ L_fusiform_gyrus_Volume<br />
R_fusiform_gyrus_ComputeArea ~~ R_fusiform_gyrus_ComputeArea<br />
R_fusiform_gyrus_Volume ~~ R_fusiform_gyrus_Volume<br />
R_fusiform_gyrus_ShapeIndex ~~ R_fusiform_gyrus_ShapeIndex<br />
R_fusiform_gyrus_Curvedness ~~ R_fusiform_gyrus_Curvedness<br />
Sex ~~ Sex<br />
Weight ~~ Weight<br />
ResearchGroup ~~ ResearchGroup<br />
VisitID ~~ VisitID<br />
Age ~~ Age<br />
chr12_rs34637584_GT ~~ chr12_rs34637584_GT<br />
chr17_rs11868035_GT ~~ chr17_rs11868035_GT<br />
chr17_rs11012_GT ~~ chr17_rs11012_GT<br />
chr17_rs393152_GT ~~ chr17_rs393152_GT<br />
chr17_rs12185268_GT ~~ chr17_rs12185268_GT<br />
chr17_rs199533_GT ~~ chr17_rs199533_GT<br />
UPDRS_Part_I_Summary_Score_Baseline ~~ UPDRS_Part_I_Summary_Score_Baseline<br />
UPDRS_Part_I_Summary_Score_Month_03 ~~ UPDRS_Part_I_Summary_Score_Month_03<br />
UPDRS_Part_I_Summary_Score_Month_06 ~~ UPDRS_Part_I_Summary_Score_Month_06<br />
UPDRS_Part_I_Summary_Score_Month_09 ~~ UPDRS_Part_I_Summary_Score_Month_09<br />
UPDRS_Part_I_Summary_Score_Month_12 ~~ UPDRS_Part_I_Summary_Score_Month_12<br />
UPDRS_Part_I_Summary_Score_Month_18 ~~ UPDRS_Part_I_Summary_Score_Month_18<br />
UPDRS_Part_I_Summary_Score_Month_24 ~~ UPDRS_Part_I_Summary_Score_Month_24<br />
UPDRS_Part_II_Patient_Questionnaire_Summary_Score_Baseline ~~ UPDRS_Part_II_Patient_Questionnaire_Summary_Score_Baseline<br />
UPDRS_Part_II_Patient_Questionnaire_Summary_Score_Month_03 ~~ UPDRS_Part_II_Patient_Questionnaire_Summary_Score_Month_03<br />
UPDRS_Part_II_Patient_Questionnaire_Summary_Score_Month_06 ~~ UPDRS_Part_II_Patient_Questionnaire_Summary_Score_Month_06<br />
UPDRS_Part_II_Patient_Questionnaire_Summary_Score_Month_09 ~~ UPDRS_Part_II_Patient_Questionnaire_Summary_Score_Month_09<br />
UPDRS_Part_II_Patient_Questionnaire_Summary_Score_Month_12 ~~ UPDRS_Part_II_Patient_Questionnaire_Summary_Score_Month_12<br />
UPDRS_Part_II_Patient_Questionnaire_Summary_Score_Month_18 ~~ UPDRS_Part_II_Patient_Questionnaire_Summary_Score_Month_18<br />
UPDRS_Part_II_Patient_Questionnaire_Summary_Score_Month_24 ~~ UPDRS_Part_II_Patient_Questionnaire_Summary_Score_Month_24<br />
UPDRS_Part_III_Summary_Score_Baseline ~~ UPDRS_Part_III_Summary_Score_Baseline<br />
UPDRS_Part_III_Summary_Score_Month_03 ~~ UPDRS_Part_III_Summary_Score_Month_03<br />
UPDRS_Part_III_Summary_Score_Month_06 ~~ UPDRS_Part_III_Summary_Score_Month_06<br />
UPDRS_Part_III_Summary_Score_Month_09 ~~ UPDRS_Part_III_Summary_Score_Month_09<br />
UPDRS_Part_III_Summary_Score_Month_12 ~~ UPDRS_Part_III_Summary_Score_Month_12<br />
UPDRS_Part_III_Summary_Score_Month_18 ~~ UPDRS_Part_III_Summary_Score_Month_18<br />
UPDRS_Part_III_Summary_Score_Month_24 ~~ UPDRS_Part_III_Summary_Score_Month_24<br />
X_Assessment_Non.Motor_Epworth_Sleepiness_Scale_Summary_Score_Baseline ~~ X_Assessment_Non.Motor_Epworth_Sleepiness_Scale_Summary_Score_Baseline<br />
X_Assessment_Non.Motor_Epworth_Sleepiness_Scale_Summary_Score_Month_06 ~~ X_Assessment_Non.Motor_Epworth_Sleepiness_Scale_Summary_Score_Month_06<br />
X_Assessment_Non.Motor_Epworth_Sleepiness_Scale_Summary_Score_Month_12 ~~ X_Assessment_Non.Motor_Epworth_Sleepiness_Scale_Summary_Score_Month_12<br />
X_Assessment_Non.Motor_Epworth_Sleepiness_Scale_Summary_Score_Month_24 ~~ X_Assessment_Non.Motor_Epworth_Sleepiness_Scale_Summary_Score_Month_24<br />
X_Assessment_Non.Motor_Geriatric_Depression_Scale_GDS_Short_Summary_Score_Baseline ~~ X_Assessment_Non.Motor_Geriatric_Depression_Scale_GDS_Short_Summary_Score_Baseline<br />
X_Assessment_Non.Motor_Geriatric_Depression_Scale_GDS_Short_Summary_Score_Month_06 ~~ X_Assessment_Non.Motor_Geriatric_Depression_Scale_GDS_Short_Summary_Score_Month_06<br />
X_Assessment_Non.Motor_Geriatric_Depression_Scale_GDS_Short_Summary_Score_Month_12 ~~ X_Assessment_Non.Motor_Geriatric_Depression_Scale_GDS_Short_Summary_Score_Month_12<br />
X_Assessment_Non.Motor_Geriatric_Depression_Scale_GDS_Short_Summary_Score_Month_24 ~~ X_Assessment_Non.Motor_Geriatric_Depression_Scale_GDS_Short_Summary_Score_Month_24<br />
<br />
# (4) Residual Covariances <br />
Sex ~~ Weight<br />
'<br />
# confirmatory factor analysis (CFA)<br />
# The baseline is a null model constraining the observed variables to covary with no other variables.<br />
# That is, the covariances are fixed to 0 and only individual variances are estimated. This represents<br />
# a “reasonable worst-possible fitting model”, against which the newly fitted model is compared <br />
# to calculate appropriate model-quality indices (e.g., CFI).<br />
<br />
# standardize all variables to avoid huge variations between variable distributions<br />
library("MASS")<br />
# myData <- read.csv("https://umich.instructure.com/files/330397/download?download_frd=1&verifier=3bYRT9FXgBGMCQv8MNxsclWnMgodiJRYo3ODFtDq",header=TRUE)<br />
<br />
summary(myData)<br />
myData2<-scale(myData); summary(myData2)<br />
<br />
myDF <- data.frame(myData2)<br />
# myDF3 <- subset(myDF, select=c("L_cingulate_gyrus_ComputeArea", "cerebellum_Volume", "Weight", "Sex", "Age", " UPDRS_part_I", "UPDRS_part_II", "UPDRS_part_III", "ResearchGroup"))<br />
<br />
myDF3 <- subset(myDF, select=c("R_insular_cortex_ComputeArea", "R_insular_cortex_Volume", "Sex", "Weight", "ResearchGroup", "Age", "chr12_rs34637584_GT", "chr17_rs11868035_GT", "chr17_rs11012_GT"))<br />
<br />
model3 <-<br />
'<br />
# latent variable definitions - defining how the latent variables are “manifested by” a set of observed <br />
# (or manifest) variables, aka “indicators”<br />
# (1) Measurement Model <br />
# Imaging =~ L_cingulate_gyrus_ComputeArea + cerebellum_Volume<br />
Imaging =~ R_insular_cortex_ComputeArea + R_insular_cortex_Volume<br />
DemoGeno =~ Weight+Sex+Age<br />
# UPDRS =~ UPDRS_Part_I_Summary_Score_Baseline+X_Assessment_Non.Motor_Geriatric_Depression_Scale_GDS_Short_Summary_Score_Baseline<br />
UPDRS =~ UPDRS_part_I +UPDRS_part_II + UPDRS_part_III<br />
# (2) Regressions <br />
ResearchGroup ~ Imaging + DemoGeno + UPDRS<br />
'<br />
<br />
fit3 <- cfa(model3, data=myData2, missing='FIML') # deal with missing values (missing='FIML')<br />
summary(fit3, fit.measures=TRUE)<br />
lavaan (0.5-18) converged normally after 2044 iterations<br />
Number of observations &nbsp;&nbsp; 1764<br />
Number of missing patterns &nbsp;&nbsp; 3<br />
Estimator &nbsp;&nbsp; ML<br />
Minimum Function Test Statistic &nbsp;&nbsp; 455.923<br />
Degrees of freedom &nbsp;&nbsp; 15<br />
P-value (Chi-square) &nbsp;&nbsp; 0.000<br />
Model test baseline model:<br />
Minimum Function Test Statistic &nbsp;&nbsp; 2625.020<br />
Degrees of freedom &nbsp;&nbsp; 28<br />
P-value &nbsp;&nbsp; 0.000<br />
User model versus baseline model:<br />
Comparative Fit Index (CFI) &nbsp;&nbsp; 0.830<br />
Tucker-Lewis Index (TLI) &nbsp;&nbsp; 0.683<br />
Loglikelihood and Information Criteria:<br />
Loglikelihood user model (H0) &nbsp;&nbsp; -51499.484<br />
Loglikelihood unrestricted model (H1) &nbsp;&nbsp; -51271.522<br />
Number of free parameters &nbsp;&nbsp; 29<br />
Akaike (AIC) &nbsp;&nbsp; 103056.967<br />
Bayesian (BIC) &nbsp;&nbsp; 103215.752<br />
Sample-size adjusted Bayesian (BIC) &nbsp;&nbsp; 103123.621<br />
Root Mean Square Error of Approximation:<br />
RMSEA &nbsp;&nbsp; 0.129<br />
90 Percent Confidence Interval &nbsp;&nbsp; 0.119 0.139<br />
P-value RMSEA <= 0.05 &nbsp;&nbsp; 0.000<br />
Standardized Root Mean Square Residual:<br />
SRMR &nbsp;&nbsp; 0.062<br />
Parameter estimates:<br />
Information &nbsp;&nbsp; Observed<br />
Standard Errors &nbsp;&nbsp; Standard<br />
<br />
Estimate Std.err Z-value P(>|z|)<br />
<br />
Latent variables:<br />
Imaging =~<br />
R_cnglt_gyr_V &nbsp;&nbsp; 1.000<br />
L_cadt_CmptAr &nbsp;&nbsp; 493.058<br />
DemoGeno =~<br />
Weight &nbsp;&nbsp; 1.000<br />
Sex &nbsp;&nbsp; 24.158<br />
Age &nbsp;&nbsp; 0.094<br />
UPDRS =~<br />
UPDRS_part_I &nbsp;&nbsp; 1.000<br />
UPDRS_part_II &nbsp;&nbsp; 7.389<br />
Regressions:<br />
ResearchGroup ~<br />
Imaging &nbsp;&nbsp; -0.000<br />
DemoGeno &nbsp;&nbsp; 0.002<br />
UPDRS &nbsp;&nbsp; -0.323<br />
Covariances:<br />
Imaging ~~<br />
DemoGeno &nbsp;&nbsp; 0.001<br />
UPDRS &nbsp;&nbsp; 0.002<br />
DemoGeno ~~<br />
UPDRS &nbsp;&nbsp; 0.000<br />
Intercepts:<br />
R_cnglt_gyr_V &nbsp;&nbsp; 7895.658<br />
L_cadt_CmptAr &nbsp;&nbsp; 635.570<br />
Weight &nbsp;&nbsp; 82.048<br />
Sex &nbsp;&nbsp; 1.340<br />
Age &nbsp;&nbsp; 61.073<br />
UPDRS_part_I &nbsp;&nbsp; 1.126<br />
UPDRS_part_II &nbsp;&nbsp; 4.905<br />
ResearchGroup &nbsp;&nbsp; 0.290<br />
Imaging &nbsp;&nbsp; 0.000<br />
DemoGeno &nbsp;&nbsp; 0.000<br />
UPDRS &nbsp;&nbsp; 0.000<br />
Variances:<br />
R_cnglt_gyr_V &nbsp;&nbsp; 17070159.189<br />
L_cadt_CmptAr &nbsp;&nbsp; -536243845.090<br />
Weight &nbsp;&nbsp; 274.912<br />
Sex &nbsp;&nbsp; 96.664<br />
Age &nbsp;&nbsp; 105.347<br />
UPDRS_part_I &nbsp;&nbsp; 2.442<br />
UPDRS_part_II &nbsp;&nbsp; -0.256<br />
ResearchGroup &nbsp;&nbsp; 0.149<br />
Imaging &nbsp;&nbsp; 2206.397<br />
DemoGeno &nbsp;&nbsp; -0.165<br />
UPDRS &nbsp;&nbsp; 0.550<br />
<br />
====Output====<br />
The Lavaan SEM output has three parts:<br />
*The first six lines, called the header, contain the following information:<br />
**the lavaan version number<br />
**whether lavaan converged normally, and the number of iterations needed<br />
**the number of observations that were effectively used in the analysis<br />
**the estimator that was used to obtain the parameter values (here: ML)<br />
**the model test statistic, the degrees of freedom, and a corresponding p-value<br />
<br />
*Next come the additional fit measures, including the test of the baseline model and the value of the SRMR.<br />
*The last section contains the parameter estimates. It starts by noting the standard-error method used (whether the information matrix is expected or observed, and whether the standard errors are standard, robust, or bootstrap-based). It then tabulates all free (and fixed) parameters that were included in the model. Typically, the latent variables are shown first, followed by covariances and (residual) variances. The first column (Estimate) contains the (estimated or fixed) parameter value for each model parameter; the second column (Std.err) contains the standard error for each estimated parameter; the third column (Z-value) contains the Wald statistic (obtained by dividing the parameter value by its standard error); and the last column contains the p-value for testing the null hypothesis that the parameter equals zero in the population.<br />
<br />
<b>Note:</b> You may encounter the error ''<b>“…system is computationally singular: reciprocal condition…”,</b>'' which indicates that the design matrix is not invertible and therefore cannot be used to fit a regression model. This is caused by linearly dependent columns, i.e., strongly correlated variables. Examine the pairwise covariances (or correlations) of your variables to identify any variables that can potentially be removed; look for covariances (or correlations) >> 0. This variable selection can also be automated by using forward stepwise regression.<br />
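A minimal R sketch of this pairwise-correlation screening is shown below. It uses a small simulated data frame with hypothetical variable names (not the study data above) in which one column is nearly a copy of another:

```r
# Toy data: x2 is nearly collinear with x1, x3 is independent
set.seed(1)
x1 <- rnorm(100)
x2 <- x1 + rnorm(100, sd = 0.01)   # almost a duplicate of x1
x3 <- rnorm(100)
df <- data.frame(x1, x2, x3)

# Pairwise correlations; keep each pair only once (lower triangle)
cc <- cor(df)
cc[upper.tri(cc, diag = TRUE)] <- NA

# Flag pairs with |r| > 0.9 as candidates for removal
high <- which(abs(cc) > 0.9, arr.ind = TRUE)
data.frame(var1 = rownames(cc)[high[, 1]],
           var2 = colnames(cc)[high[, 2]],
           r    = cc[high])
```

Here only the (x2, x1) pair is flagged; in practice one of the two variables in each flagged pair would be dropped before refitting the model.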
<br />
# Graphical fit model visualization<br />
library(semPlot)<br />
semPaths(fit3)<br />
<br />
<center>[[Image:SMHS_BigDataBigSci4.png|500px]]</center><br />
<br />
semPaths(fit3, "std", ask = FALSE, as.expression = "edges", mar = c(3, 1, 5, 1))<br />
<br />
<center>[[Image:SMHS_BigDataBigSci5.png|500px]]</center><br />
<br />
==See also==<br />
* [[SMHS_BigDataBigSci_SEM_sem_vs_cfa| Next See: Differences and Similarities between '''sem'''() and '''cfa'''() ]]<br />
* [[SMHS_BigDataBigSci_SEM| Back to Structural Equation Modeling (SEM)]]<br />
* [[SMHS_BigDataBigSci_SEM_Ex1| Back to SEM Example 1: School Kids Mental Abilities]]<br />
<br />
<br />
<hr><br />
* SOCR Home page: http://www.socr.umich.edu<br />
<br />
{{translate|pageName=http://wiki.socr.umich.edu/index.php?title=SMHS_BigDataBigSci_SEM_Ex2}}</div>Pineaumihttps://wiki.socr.umich.edu/index.php?title=SMHS_BigDataBigSci&diff=16231SMHS BigDataBigSci2016-05-23T19:09:25Z<p>Pineaumi: /* Overview */</p>
<hr />
<div>==[[SMHS| Scientific Methods for Health Sciences]] - Model-based Analyses ==<br />
<br />
Structural Equation Modeling (SEM), Growth Curve Models (GCM), and Generalized Estimating Equation (GEE) Modeling<br />
<br />
==Questions ==<br />
<br />
*How to represent dependencies in linear models and examine causal effects?<br />
*Is there a way to study population average effects of a covariate against specific individual effects?<br />
<br />
==Overview==<br />
<br />
SEM allow re-parameterization of random-effects to specify latent variables that may affect measures at different time points using structural equations. SEM show variables having predictive (possibly causal) effects on other variables (denoted by arrows) where coefficients index the strength and direction of predictive relations. SEM does not offer much more than what classical regression methods do, but it does allow simultaneous estimation of multiple equations modeling complementary relations. <br />
<br />
Growth Curve (or latent growth) modeling is a statistical technique employed in SEM for estimating growth trajectories for longitudinal data (over time). It represents repeated measures of dependent variables as functions of time and other covariates. When subjects or units are observed repeatedly at known time points, latent growth curve models reveal the trend of an individual as a function of an underlying growth process, whose growth curve parameters can be estimated for each subject/unit.<br />
<br />
GEE is a marginal longitudinal method that directly assesses the mean relations of interest (i.e., how the mean dependent variable changes over time), accounting for covariances among the observations within subjects, and thereby obtaining better estimates and valid significance tests of the relations. Thus, GEE estimates two different equations, (1) for the mean relations, and (2) for the covariance structure. An advantage of GEE over random-effect models is that it does not require the dependent variable to be normally distributed. However, a disadvantage of GEE is that it is less flexible and versatile; commonly employed algorithms for it require a small-to-moderate number of time points evenly (or approximately evenly) spaced, and similarly spaced across subjects. Nevertheless, it is a little more flexible than repeated-measures ANOVA because it permits some missing values and has an easy way to test for and model away the specific form of autocorrelation within subjects.<br />
<br />
GEE is mostly used when the study is focused on uncovering the population average effect of a covariate vs. the individual specific effect. These two things are only equivalent for linear models, but not in non-linear models.<br />
<br />
For instance, suppose $Y_{i,j}$ is the $j^{th}$ observation of the $i^{th}$ subject in a random effects <b>logistic model</b>; then <br />
$<br />
log\Bigg(\frac{p_{i,j}}{1-p_{i,j}} \Bigg)=μ+ν_i,<br />
$<br />
where $ν_i \sim N(0,σ^2)$ is a random effect for <u>subject i</u> and $p_{i,j}=P(Y_{i,j}=1|ν_i).$<br />
<br />
(1) When using a random effects model on such data, the estimate of μ accounts for the fact that a mean zero normally distributed perturbation was applied to each individual, making it ''individual-specific''.<br />
<br />
(2) When using a GEE model on the same data, we estimate the <i>population average log odds</i>,<br />
<br />
\begin{equation}<br />
δ=log\Bigg(\frac{E_ν\Big(\frac{1}{1+e^{-(μ+ν_i)}}\Big)}{1-E_ν\Big(\frac{1}{1+e^{-(μ+ν_i)}}\Big)}\Bigg),<br />
\end{equation} <br />
<br />
in general $μ≠δ$.<br />
<br />
If $μ=1$ and $σ^2=1$, then $δ≈.83$. <br />
<br />
empirically:<br />
<br />
set.seed(1234); m <- 1; s <- 1; v <- rnorm(1000, 0, s)<br />
v2 <- 1/(1 + exp(-(m + v))); v_mean <- mean(v2)    # P(Y=1|v); note the -(m + v) exponent<br />
<br />
d <- log(v_mean/(1 - v_mean)); d    # approximately 0.83<br />
<br />
Note that the random effects have mean zero on the transformed, linked, scale, but their effect is not mean zero on the original scale of the data. We can also simulate data from a mixed effects logistic regression model and compare the population level average with the inverse-logit of the intercept to see that they are not equal. This leads to a difference of the interpretation of the coefficients between GEE and random effects models, or SEM.<br />
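The simulation described above can be sketched as follows. This is a hedged illustration assuming the same setup as before ($μ=1$, $σ=1$); the sample sizes and variable names are arbitrary:

```r
set.seed(42)
n_subj <- 2000; n_obs <- 10
mu <- 1; sigma <- 1

# One random intercept per subject, repeated across that subject's observations
v <- rep(rnorm(n_subj, 0, sigma), each = n_obs)

# Simulate binary outcomes from the mixed-effects logistic model
p <- 1/(1 + exp(-(mu + v)))
y <- rbinom(n_subj * n_obs, 1, p)

mean(y)            # population-average P(Y = 1), roughly 0.70
1/(1 + exp(-mu))   # inverse-logit of the intercept, roughly 0.73
```

The two quantities differ because averaging over the random effects on the probability scale is not the same as plugging in the mean random effect (zero) on the logit scale.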
<br />
<b>That is, there will be a difference between the GEE population average coefficients and the individual specific coefficients (random effects models).</b><br />
<br />
<b># theoretically</b>, if it can be computed:<br />
<br />
the subject-specific linear predictor has mean $μ=1$ (in this specific case), but the population average log odds <br />
$δ=log\Bigg[\frac{E_ν\big(P(Y_{i,j}=1|ν_i)\big)}{1-E_ν\big(P(Y_{i,j}=1|ν_i)\big)}\Bigg]$ would be $< 1$ <SUP>1</SUP>. <br />
Note that this is kind of related to the fact that a grand-total average need not be equal to an average of partial averages. <br />
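This point can be seen with a tiny numerical example (the group values are arbitrary): with unequal group sizes, the grand-total average and the average of the group averages disagree.

```r
g1 <- c(1, 2, 3)   # group of size 3
g2 <- c(10)        # group of size 1

mean(c(g1, g2))              # grand-total average: (1+2+3+10)/4 = 4
mean(c(mean(g1), mean(g2)))  # average of the group averages: (2+10)/2 = 6
```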
<br />
The mean of the $j^{th}$ observation (e.g., location, time, etc.) of the $i^{th}$ person can be expressed as:<br />
<br />
$E(Y_{ij} | X_{ij},α_j)= g[μ(X_{ij}|β)+U_{ij}(α_j,X_{ij})]$,<br />
<br />
where $μ(X_{ij}|β)$ is the average “response” of a person with the same covariates $X_{ij}$, $β$ is a set of fixed effect coefficients, and $U_{ij}(α_j,X_{ij})$ is an error term that is a function of the (time, space) random effects, $α_j$, and also a function of the covariates $X_{ij}$, and $g$ is the '''link function''' which specifies the regression type -- e.g., <br />
<br />
*<u>linear</u>: $g^{-1} (u)=u,$<br />
<br />
*<u>log</u>: $g^{-1} (u)= log(u),$ <br />
<br />
*<u>logistic</u>: $g^{-1} (u)=log(\frac{u}{1-u})$<br />
<br />
*$E(U_{ij}(α_j,X_{ij})|X_{ij})=0.$<br />
<br />
The link function, $g(u)$, provides the relationship between the linear predictor and the mean of the distribution function. For practical applications there are many commonly used link functions. It makes sense to try to match the domain of the link function to the range of the distribution function's mean.<br />
<br />
<center>Common distributions with typical uses and canonical link functions</center><br />
<center><br />
{| class="wikitable" style="text-align:center; " border="1"<br />
|-<br />
|<b>Distribution</b> ||<b>Support of distribution</b>||<b>Typical uses</b>||<b>Link name</b>||<b>Link function</b>||<b>Mean function</b><br />
|-<br />
|Normal||real: $(-&#8734;, +&#8734;)$||Linear-response data||Identity||$X\beta=\mu$||$\mu=X\beta$<br />
|-<br />
|Exponential, Gamma||real:$(0, +&#8734;)$||Exponential-response data, scale parameters||Inverse||$X\beta=-\mu^{-1}$||$\mu=-(X\beta)^{-1}$<br />
|-<br />
|Inverse Gaussian||real:$(0, +&#8734;)$|| ||Inverse squared||$X\beta=-\mu^{-2}$||$\mu=(-X\beta)^{-1/2}$ <br />
|}<br />
</center><br />
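In R, these distribution/link pairings correspond to the family objects passed to glm(). A brief sketch using the built-in cars data set (purely illustrative, unrelated to the study data above):

```r
# Identity link: linear-response model for a real-valued outcome
fit_norm <- glm(dist ~ speed, data = cars,
                family = gaussian(link = "identity"))

# Inverse link: canonical link for Gamma-distributed (positive) outcomes
fit_gamma <- glm(dist ~ speed, data = cars,
                 family = Gamma(link = "inverse"))

coef(fit_norm)    # intercept and slope on the identity (mean) scale
coef(fit_gamma)   # coefficients on the inverse (1/mu) scale
```

The same regression formula yields coefficients on different scales depending on the chosen link, which is why the link must be matched to the range of the distribution's mean.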
<br />
===Footnotes===<br />
<br />
*<sup>1</sup> http://www.researchgate.net/publication/41895248<br />
<br />
==Model-based Analytics==<br />
<br />
===[[SMHS_BigDataBigSci_SEM| Structural Equation Modeling (SEM)]]===<br />
<br />
===[[SMHS_BigDataBigSci_GCM| Growth Curve Modeling (GCM)]]===<br />
<br />
===[[SMHS_BigDataBigSci_GEE| Generalized Estimating Equation (GEE) Modeling]]===<br />
<br />
===[[SMHS_BigDataBigSci_CrossVal|Internal Validation - Statistical n-fold cross-validaiton]]===<br />
<br />
<hr><br />
* SOCR Home page: http://www.socr.umich.edu<br />
<br />
{{translate|pageName=http://wiki.socr.umich.edu/index.php?title=SMHS_BigDataBigSci}}</div>Pineaumihttps://wiki.socr.umich.edu/index.php?title=SMHS_MethodsHeterogeneity_CER&diff=16230SMHS MethodsHeterogeneity CER2016-05-23T19:03:37Z<p>Pineaumi: /* Footnotes */</p>
<hr />
<div>==[[SMHS_MethodsHeterogeneity| Methods for Studying Heterogeneity of Treatment Effects, Case-Studies of Comparative Effectiveness Research]] - Comparative Effectiveness Research: Case Studies <sup>13</sup> (CER) ==<br />
<br />
===Observational Studies: Tips for the CER Practitioners===<br />
<br />
*Different study types can offer different understandings; neither should be discounted without closer examination.<br />
<br />
*RCTs provide an accurate understanding of the effect of a particular intervention in a well-defined patient group under “controlled” circumstances.<br />
<br />
*Observational studies provide an understanding of real-world care and its impact, but can be biased due to uncontrolled factors.<br />
<br />
*Observational studies differ in the types of databases used. These databases may lack clinical detail and contain incomplete or inaccurate data.<br />
<br />
*Before accepting the findings from an observational study, consider whether confounding factors may have influenced the results.<br />
<br />
*In this scenario, subgroup analysis was vital in clarifying both study designs; what is true for the many (e.g., overall, estrogen appeared to be detrimental) may not be true for the few (e.g., that for the younger post-menopausal woman, the benefits were greater and the harms less frequent).<br />
<br />
*Carefully examine the generalizability of the study. Do the study’s patients and intervention match those under consideration?<br />
<br />
*Observational studies can identify associations but cannot prove cause-and-effect relationships.<br />
<br />
===Case-Study 1: The Cetuximab Study<sup>14</sup>===<br />
<br />
<b>What was done and what was found?</b><br />
<br />
Cetuximab, an anti-epidermal growth factor receptor (EGFR) agent, has recently been added to the therapeutic armamentarium. Two important RCTs examined its impact in patients with mCRC (metastatic-stage colorectal cancer). In the first one, 56 centers in 11 European countries investigated the outcomes associated with cetuximab therapy in 329 mCRC patients who experienced disease progression either on irinotecan therapy or within 3 months thereafter. The study reported that the group on a combination of irinotecan and cetuximab had a significantly higher rate of overall response to treatment (primary endpoint) than the group on cetuximab alone: 22.9% (95% CI, 17.5-29.1%) vs. 10.8% (95% CI, 5.7-18.1%) (P=0.007), respectively. Similarly, the median time to progression was significantly longer in the combination therapy group (4.1 vs. 1.5 months, P<0.001). As these patients had already progressed on irinotecan prior to the study, any response was viewed as positive. Safety between the two treatment arms was similar: approximately 80% of patients in each arm experienced a rash. Grade 3 or 4 (the more severe) toxic effects on the skin were slightly more frequent in the combination-therapy group compared to cetuximab monotherapy, observed in 9.4% and 5.2% of participants, respectively. Other side effects, such as diarrhea and neutropenia observed in the combination-therapy arm, were considered to be in the range expected for irinotecan alone. Data from this study demonstrated the efficacy and safety of cetuximab and were instrumental in the FDA’s 2004 approval.<br />
<br />
A second RCT (2007) examined 572 patients and suggested efficacy of cetuximab in the treatment of mCRC. This study was a randomized, non-blinded, controlled trial that examined cetuximab monotherapy plus best supportive care compared to best supportive care alone in patients who had received and failed prior chemotherapy regimens. It reported that median overall survival (the primary endpoint) was significantly higher in patients receiving cetuximab plus best supportive care compared to best supportive care alone (6.1 vs. 4.6 months, respectively) (hazard ratio for death=0.77; 95% CI: 0.64-0.92, P=0.005). This RCT described a greater incidence of adverse events in the cetuximab plus best supportive care group compared to best supportive care alone including (most significantly) rash, as well as edema, fatigue, nausea and vomiting.<br />
<br />
<b>Was this the right answer?</b><br />
<br />
These RCTs had fairly broad enrollment criteria and the cetuximab benefits were modest. Emerging scientific theories raised the possibility that genetically defined population subsets might experience a greater-than-average treatment benefit. One such area of inquiry entailed examining “biomarkers,” or genetic indicators of a patient’s greater response to therapy. Even as the above RCTs were being conducted, data emerged showing the importance of the KRAS gene.<br />
<br />
<b>Emerging Data</b><br />
<br />
Based on the emerging biochemical evidence that the epidermal growth factor receptor (EGFR) treatment mechanism (Cetuximab) was even more finely detailed than previously understood, the study authors of the 2007 RCT undertook a retrospective subgroup analysis using tumor tissue samples preserved from their initial study. Following laboratory analysis, all viable tissue samples were classified as having a wild-type (non-mutated) or a mutated KRAS gene. Instead of the previous two study arms (cetuximab plus best supportive care vs. best supportive care alone), there were 4 for this new analysis: each of the two original study arms was further divided by wild-type vs. mutated KRAS status. Laboratory evaluation determined that 40.9% and 42.3% of all patients in the RCT had a KRAS mutation in the cetuximab plus best supportive care group compared to the best supportive care group alone, respectively. The efficacy of cetuximab was found to be significantly correlated with KRAS status: in patients with wild-type (non-mutated) KRAS genes, cetuximab plus best supportive care compared to best supportive care alone improved overall survival (median 9.5 vs. 4.8 months, respectively; hazard ratio for death=0.55; 95% CI, 0.41-0.74, P<0.001), and progression-free survival (median 3.7 vs. 1.9 months, respectively; hazard ratio for progression or death=0.40; 95% CI, 0.30-0.54, P<0.001). Meanwhile, in patients with mutated KRAS tumors, the authors found no significant difference in outcome between cetuximab plus best supportive care vs. best supportive care alone.<br />
<br />
<b>What next?</b><br />
<br />
Based on these and similar results from other studies, the FDA narrowed its product labeling in July 2009 to indicate that cetuximab is not recommended for mCRC patients with mutated KRAS tumors. This distinction reduces the relevant population by approximately 40%. Similarly, the American Society of Clinical Oncology released a provisional clinical recommendation that all mCRC patients have their tumors tested for KRAS status before receiving anti-EGFR therapy. The benefits of targeted treatment are many. Patients who previously underwent cetuximab therapy without knowing their genetic predisposition would no longer have to be exposed to the drug’s toxic effects if unnecessary, as the efficacy of cetuximab is markedly higher in the genetically defined appropriate patients. In a less-uncertain environment, clinicians can be more confident in advocating a course of action in their care of patients. And finally, knowledge that targeted therapy is possible suggests the potential for further innovation in treatment options. In fact, research continues to demonstrate options for targeted cetuximab treatment of mCRC at an even finer scale than seen with KRAS; and similar genetic targeting is being investigated, and advocated, in other cancer types.<br />
<br />
<b>Lessons Learned From this case Study</b><br />
<br />
Although RCTs are generally viewed as the gold standard, results of one or even a series of trials may not accurately reflect the benefits experienced by an individual patient. This case study suggests that cetuximab initially appeared to have rather modest clinical benefits. However, new information became available, and subsequent genetic subgroup assessments led to very different conclusions. Clinicians should be aware that current knowledge is likely to evolve, and any decisions about patient care should be carefully considered with that sense of uncertainty in mind. As in this case study, subgroup analyses (e.g., genetic subtypes) need a theoretical rationale. Ideally, the analyses should be determined at the time of original RCT design and should not just occur as explorations of the subsequent data. When improperly employed, post hoc analyses may lead to incorrect patient care conclusions.<br />
<br />
<b>RCTs Tips for the CER Practitioners</b><br />
<br />
*RCTs can determine whether an intervention can provide benefit in a very controlled environment.<br />
<br />
*The controlled nature of an RCT may limit its generalizability to a broader population.<br />
<br />
*No results are permanent; advances in scientific knowledge and understanding can influence how we view the effectiveness (or safety) of a therapeutic intervention.<br />
<br />
*Targeted therapy illuminated by carefully thought out subgroup analyses can improve the efficacious and safe use of an intervention.<br />
<br />
===Case-Study 2: The Rosiglitazone Study<sup>15</sup>===<br />
<br />
<b>Meta-analysis</b><br />
<br />
Often the results for the same intervention differ across clinical trials and it may not be clear whether one therapy provides more benefit than another. As CER increases and more studies are conducted, clinicians and policymakers are more likely to encounter this scenario. In a systematic review, a researcher identifies similar studies and displays their results in a table, enabling qualitative comparisons across the studies. With a meta-analysis, the data from included studies are statistically combined into a single “result.” Merging the data from a number of studies increases the effective sample size of the investigation, providing a statistically stronger conclusion about the body of research. By so doing, investigators may detect low frequency events and demonstrate more subtle distinctions between therapeutic alternatives.<br />
<br />
When studies have been properly identified and combined, the meta-analysis produces a summary estimate of the findings and a confidence interval that can serve as a benchmark in medical opinion and practice. However, when done incorrectly, the quantitative and statistical analysis can create impressive “numbers” but biased results. The following are important criteria for properly conducted meta-analyses:<br />
<br />
1. Carefully defining unbiased inclusion or exclusion criteria for study selection<br />
<br />
2. Including only those studies that have similar design elements, such as patient population, drug regimen, outcomes being assessed, and time-frame<br />
<br />
3. Applying correct statistical methods to combine and analyze the data<br />
<br />
Reporting this information is essential for the reader to determine whether the data were suitable to combine, and if the meta-analysis draws unbiased conclusions. Meta-analyses of randomized clinical trials are considered to be the highest level of medical evidence as they are based upon a synthesis of rigorously controlled trials that systematically reduce bias and confounding. This technique is useful in summarizing available evidence and will likely become more common in the era of publicly funded comparative effectiveness research. The following case study will examine several key principles that will be useful as the reader encounters these publications.<br />
<br />
<b>Clinical Application</b><br />
<br />
Heart disease is the leading cause of mortality in the United States, resulting in approximately 20% of all deaths. Diabetics are particularly susceptible to heart disease, with more than 65% of deaths attributable to it. The nonfatal complications of diabetes are wide-ranging and include kidney failure, nerve damage, amputation, stroke and blindness, among other outcomes. In 2007, the total estimated cost of diabetes in the United States was $174B; $116B was derived from direct medical expenditures and the rest from the indirect cost of lost productivity due to the disease. With such serious health effects and heavy direct and indirect costs tied to diabetes, proper disease management is critical. Historically, diabetes treatment has focused on strict blood sugar control, assuming that this goal not only targets diabetes but also reduces other serious comorbidities of the disease.<br />
<br />
Anti-diabetic agents have long been associated with key questions as to their benefits/risks in the treatment of diabetes. The sulfonylurea tolbutamide, a first generation anti-diabetic drug, was found in a landmark study in the 1970s to significantly increase the CV mortality rate compared to patients not on this agent. Further analysis by external parties concluded that the methods employed in this trial were significantly flawed (e.g., use of an “arbitrary” definition of diabetes status, heterogeneous baseline characteristics of the populations studied, and incorrect statistical methods). Since these early studies, CV concerns continue to be an issue with selected oral hypoglycemic agents that have subsequently entered the marketplace.<br />
<br />
A class of drugs, thiazolidinedione (TZD), was approved in the late 1990s, as a solution to the problems associated with the older generation of sulfonylureas. Rosiglitazone, a member of the TZD class, was approved by the FDA in 1999 and was widely prescribed for the treatment of type-2 diabetes. A number of RCTs supported the benefit of rosiglitazone as an important new oral antidiabetic agent. However, safety concerns developed as the FDA received reports of adverse cardiac events potentially associated with rosiglitazone. It was in this setting that a meta-analysis by Nissen and Wolski was published in the New England Journal of Medicine in June 2007.<br />
<br />
<b>What was done?</b><br />
<br />
Nissen and Wolski conducted a meta-analysis examining the impact of rosiglitazone on cardiac events and mortality compared to alternative therapeutic approaches. The study began with a broad search to locate potential studies for review. The authors screened published phase II, III, and IV trials; the FDA website; and the drug manufacturer’s clinical-trial registry for applicable data relating to rosiglitazone use. When the initial search was complete, the studies were further categorized by pre-stated inclusion criteria. Meta-analysis inclusion criteria were simple: studies had to include rosiglitazone and a randomized comparator group treated with either another drug or placebo, study arms had to show similar length of treatment, and all groups had to have received more than 24 weeks of exposure to the study drugs. The studies had to contain outcome data of interest including the rate of myocardial infarction (MI) or death from all CV causes. Out of 116 studies surveyed by the authors, 42 met their inclusion criteria and were included in the meta-analysis. Of the studies they included, 23 had durations of 26 weeks or less, and only five studies followed patients for more than a year. Until this point, the study’s authors were following a path similar to that of any reviewer interested in CV outcomes, examining the results of these 42 studies and comparing them qualitatively. Quantitatively combining the data, however, required the authors to make choices about the studies they could merge and the statistical methods they should apply for analysis. Those decisions greatly influenced the results that were reported.<br />
<br />
<b>What was found?</b><br />
<br />
When the studies were combined, the meta-analysis contained data from 15,565 patients in the rosiglitazone group and 12,282 patients in the comparator groups. Analyzing their data, the authors chose one particular statistical method (the Peto odds ratio method, a fixed-effect statistical approach), which calculates the odds of events occurring where the outcomes of interest are rare and small in number. In comparing rosiglitazone with a “control” group that included other drugs or placebo, the authors reported odds ratios of 1.43 (95% CI, 1.03-1.98; P=0.03) and 1.64 (95% CI, 0.98-2.74; P=0.06) for MI and death from CV causes, respectively. In other words, the odds of an MI or death from a CV cause were higher for rosiglitazone patients than for patients on other therapies or placebo. The authors reported that rosiglitazone was significantly associated with an increase in the risk of MI and had borderline significance in increasing the risk of death from all CV causes. These findings appeared online on the same day that the FDA issued a safety alert regarding rosiglitazone. Discussion of the meta-analysis was immediately featured prominently in the news media. By December 2007, prescription claims for the drug at retail pharmacies had fallen by more than 50%.<br />
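The Peto method pools studies on the log odds-ratio scale using observed-minus-expected event counts and hypergeometric variances. A minimal R sketch of the calculation, using made-up 2x2 study counts (not the actual rosiglitazone trial data), might look like:<br />

```r
# Fixed-effect Peto odds ratio across several 2x2 tables.
# NOTE: these study counts are hypothetical, for illustration only.
a  <- c(5, 2, 8);  n1 <- c(100, 50, 200)   # treatment events / totals
c0 <- c(2, 1, 4);  n2 <- c(100, 50, 200)   # control events / totals

N <- n1 + n2                  # total patients per study
m <- a + c0                   # total events per study
E <- n1 * m / N               # expected treatment events under the null
V <- n1 * n2 * m * (N - m) / (N^2 * (N - 1))  # hypergeometric variance

log_or <- sum(a - E) / sum(V)             # pooled log odds ratio (Peto)
se     <- 1 / sqrt(sum(V))
or     <- exp(log_or)
ci     <- exp(log_or + c(-1.96, 1.96) * se)
cat(sprintf("Peto OR = %.2f (95%% CI %.2f-%.2f)\n", or, ci[1], ci[2]))
```

Because the variance weights come from the assumption of rare, roughly balanced events, the method can overstate effects when studies are large or unbalanced, which is exactly the criticism discussed below.<br />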
<br />
As diabetic patients and their clinicians reacted to the news, a methodologic debate also ensued. This discussion included statistical issues pertaining to the conduct of the analysis, its implications for clinical care, and finally the FDA and drug manufacturer’s roles in overseeing and regulating rosiglitazone. The concern among patients with diabetes regarding their treatment continues in the medical community today.<br />
<br />
<b>Was this the right answer?</b><br />
<br />
Should the studies have been combined? Commentators faulted the authors for including several studies that were not originally intended to investigate diabetes, and for combining both placebo and drug therapy data into one comparator arm. Some critics noted that despite the stated inclusion criteria, some data were derived from studies where the rosiglitazone arm was allowed a longer follow-up than the comparator arm. By failing to account for this longer follow-up period, commentators felt that the authors may have overestimated the effect of rosiglitazone on CV outcomes. Many reviewers were concerned that this meta-analysis excluded trials in which no patients suffered an MI or died from CV causes – the outcomes of greatest interest. Some reviewers also noted that the exclusion of zero-event trials from the pooled dataset not only gave an incomplete picture of the impact of rosiglitazone but could have increased the odds ratio estimate. In general, the pooled dataset was criticized by many for being a faulty microcosm of the information available regarding rosiglitazone.<br />
<br />
It is essential that a meta-analysis be based on similarity in the data sources. If studies differ in important areas such as the patient populations, interventions, or outcomes, combining their data may not be suitable. The researchers accepted studies and populations that were clinically heterogeneous, yet pooled them as if they were not. The study reported that the results were combined from a number of trials that were not initially intended to investigate CV outcomes. Furthermore, the available data did not allow for time-to-event analysis, an essential tool in comparing the impact of alternative treatment options. Reviewers considered the data to be insufficiently homogeneous, and the line of cause and effect to be murkier than the authors described.<br />
<br />
<b>Were the statistical methods optimal?</b><br />
<br />
The statistical methods for this meta-analysis also came under significant criticism. The critiques focused on the authors’ use of the Peto method as an incorrect choice because data were pooled from both small and very large studies, resulting in a potential overestimation of treatment effect. Other reviewers pointed out that the Peto method should not have been used, as a number of the underlying studies did not have patients assigned equally to rosiglitazone and comparator groups. Finally, critics suggested that the heterogeneity of the included studies required an altogether different set of analytic techniques.<br />
<br />
Demonstrating the sensitivity of the authors’ initial analysis to the inclusion criteria and statistical tests used, a number of researchers reworked the data from this study. One researcher used the same studies but analyzed the data with a more commonly used statistical method (Mantel-Haenszel), and found no significant increase in the relative risk or common odds ratio with MI or CV death. When the pool of studies was expanded to include those originally eliminated because they had zero CV events, the odds ratios for MI and death from CV causes dropped from 1.43 to 1.26 (95% CI, 0.93-1.72) and from 1.64 to 1.14 (95% CI, 0.74-1.74), respectively. Neither of the recalculated odds ratios was significant for MI or CV death. Finally, several newer long-term studies have been published since the Nissen meta-analysis. Incorporating their results with the meta-analysis data showed that rosiglitazone is associated with an increased risk of MI but not of CV death. Thus, the findings from these meta-analyses varied with the methods employed, the studies included, and the addition of later trials.<br />
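For contrast with the Peto approach, the Mantel-Haenszel pooled odds ratio weights each study's contribution by within-study cross-products over the study size. A hedged R sketch, again with hypothetical 2x2 counts rather than the actual trial data:<br />

```r
# Mantel-Haenszel common odds ratio across several 2x2 tables.
# NOTE: the counts are hypothetical, for illustration only.
a  <- c(5, 2, 8);   b <- c(95, 48, 192)   # treatment: events, non-events
c0 <- c(2, 1, 4);   d <- c(98, 49, 196)   # control:   events, non-events

N <- a + b + c0 + d                       # study sizes
or_mh <- sum(a * d / N) / sum(b * c0 / N) # pooled MH odds ratio
cat(sprintf("Mantel-Haenszel OR = %.2f\n", or_mh))
```

With real data, the two estimators can diverge noticeably when study sizes or allocation ratios vary, which is one reason the reworked analyses reached different conclusions.<br />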
<br />
<b>Emerging Data</b><br />
<br />
The controversy surrounding the rosiglitazone meta-analysis authored by Nissen and Wolski forced an unplanned interim analysis of a long-term, randomized trial investigating the CV effects of rosiglitazone among patients with type 2 diabetes. The authors of the RECORD trial noted that even though the follow-up at 3.75 years was shorter than expected, rosiglitazone, when added to standard glucose-lowering therapy, was found to be associated with an increase in the risk of heart failure but was not associated with any increase in death from CV or other causes. Data at the time were found to be insufficient to determine the effect of rosiglitazone on an increase in the risk of MI. The final report of that trial, published in June 2009, confirmed the elevated risk of heart failure in people with type 2 diabetes treated with rosiglitazone in addition to glucose-lowering drugs, but continued to show inconclusive results about the effect of the drug therapy on the risk of MI. Further, the RECORD trial clarified that rosiglitazone does not result in an increased risk of CV morbidity or mortality compared to standard glucose-lowering drugs. Other trials conducted since the publishing of the meta-analysis have corroborated these results, casting further doubt on the findings of the meta-analysis published by Nissen and Wolski.<br />
<br />
<b>Now what?</b><br />
<br />
Some sources suggest that the original Nissen meta-analysis delivered more harm than benefit, and that a well-recognized medical journal may have erred in its process of peer review. Despite this criticism, it is important to note that subsequent publications support the risk of adverse CV events associated with rosiglitazone, although rosiglitazone use does not appear to increase deaths. These results and emerging data point to the need for further rigorous research to clarify the benefits and risks of rosiglitazone on a variety of outcomes, and the importance of directing the drug to the population that will maximally benefit from its use.<br />
<br />
<b>Lessons Learned From this Case Study</b><br />
<br />
Results from initial randomized trials that seem definitive at one time may not be conclusive, as further trials may emerge to clarify, redirect, or negate previously accepted results. A meta-analysis of those trials can lead to varying results based upon the timing of the analysis and the choices made in its performance.<br />
<br />
<b>Meta-Analysis: Tips for CER Practitioners</b><br />
<br />
*The results of a meta-analysis are highly dependent on the studies included (and excluded). Are these criteria properly defined and relevant to the purposes of the meta-analysis? Were the combined studies sufficiently similar? Can results from this cohort be generalized to other populations of interest?<br />
<br />
*The statistical methodology can impact study results. Have there been reviews critiquing the methods used in the meta-analysis?<br />
<br />
*A variety of statistical tests should be considered, and perhaps reported, in the analysis of results. Do the authors mention their rationale in choosing a statistical method? Do they show the stability of their results across a spectrum of analytical methods?<br />
<br />
*Nothing is permanent. Emerging data may change the playing field, and meta-analysis results are only as good as the data and statistics from which they are derived.<br />
<br />
===Case-Study 3: The Nurses’ Health Study<sup>16</sup>===<br />
<br />
<b>An observational study</b><br />
<br />
An observational study is a very common type of research design in which the effects of a treatment or condition are studied without formally randomizing patients in an experimental design. Such studies can be done prospectively, wherein data are collected about a group of patients going forward in time, or retrospectively, in which the researcher looks into the past, mining existing databases for data that have already been collected. The latter studies are frequently performed using an electronic database that contains, for example, administrative, “billing,” or claims data. Less commonly, observational research uses electronic health records, which contain richer clinical information that more closely resembles the data collected in an RCT. Observational studies often take place in “real-world” environments, which allow researchers to collect data for a wide array of outcomes. Patients are not randomized in these studies, but the findings can be used to generate hypotheses for investigation in a more constrained experimental setting. Perhaps the best known observational study is the “Framingham study,” which collected demographic and health data for a group of individuals over many years (and continues to do so) and has provided an understanding of the key risk factors for heart disease and stroke.<br />
<br />
Observational studies present many advantages to the comparative effectiveness researcher. The study design can provide a unique glimpse of the use of a health care intervention in the “real world,” an essential step in gauging the gap between efficacy (can a treatment work in a controlled setting?) and effectiveness (does the treatment work in a real-life situation?). Furthermore, observational studies can be conducted at low cost, particularly if they involve the secondary analysis of existing data sources. CER often uses administrative databases, which are based upon the billing data submitted by providers during routine care. These databases typically have limited clinical information, may contain errors, and generally do not undergo auditing.<br />
<br />
The uncontrolled nature of observational studies makes them subject to bias and confounding. For example, doctors may prescribe a new medication only for the sickest patients. Comparing these outcomes (without careful statistical adjustment) with those from less ill patients receiving alternative treatment may lead to misleading results. Observational studies can identify important associations but cannot prove cause and effect. These studies can generate hypotheses that may require RCTs for fuller demonstration of those relationships. Secondary analysis can also be problematic if researchers overwork datasets through multiple exploratory analyses (i.e., data-dredging): the more we look, the more we find, even if those findings are merely statistical aberrations. Unfortunately, the growing need for CER and the wide availability of administrative databases may lead to poor-quality research with inaccurate findings.<br />
<br />
In comparative effectiveness research, observational studies are typically considered to be less conclusive than RCTs and meta-analyses. Nonetheless, they can be useful, especially because they examine typical care. Due to lower cost and improvements in health information, observational studies will become increasingly common. Critical assessment of whether the described results are helpful or biased (based upon how the study was performed) is necessary. This case will illustrate several characteristics of these types of studies that will assist in evaluating newly published work.<br />
<br />
<b>Clinical Applications</b><br />
<br />
Cardiovascular diseases (CVD) are the leading cause of death in women older than 50. Epidemiologic evidence suggests that estrogen is a key mediator in the development of CVD. Estrogen is an ovarian hormone whose production decreases as women approach menopause. The steep increase in CVD among women at or past menopause, and among women who have had hysterectomies, further supports a relationship between estrogen and CVD. Building on this evidence of biologic plausibility, epidemiological and observational studies suggested that estrogen replacement therapy (a form of <b>hormone replacement therapy</b>, or HRT) had positive effects on the risk of CVD in postmenopausal women (albeit with some negative effects in its potential to increase the risk for breast cancer and stroke). Based on these findings, in the 1980s and 1990s HRT was routinely employed to treat menopausal symptoms and serve as prophylaxis against CVD.<br />
<br />
<b>What was done?</b><br />
<br />
The Nurses’ Health Study (NHS) began collecting data in 1976. In the study, researchers intended to examine a broad range of health effects in women over a long period of time, and a key goal was to clarify the role of HRT in heart disease. The cohort (i.e., the group being followed) included married registered nurses aged 30-55 in 1976 who lived in the 11 most populous states. To collect data, the researchers mailed the study participants a survey every 2 years that asked questions about topics such as smoking, hormone use, menopausal status, and less frequently, diet. Data were collected for key end points that included MI, coronary-artery bypass grafting or angioplasty, stroke, total CVD mortality, and deaths from all causes.<br />
<br />
<b>What was found?</b><br />
<br />
At a 10-year follow-up point, the NHS had a study pool of 48,470 women. The researchers found that estrogen use (alone, without progestin) in postmenopausal women was associated with a reduction in the incidence of CVD as well as in CVD mortality compared to non-users. Later, estrogen-progestin combination therapy was shown to be even more cardioprotective than estrogen monotherapy, and lower doses of estrogen replacement therapy were found to deliver equal cardioprotection and lower the risk for adverse events. NHS researchers were alert to the potential for bias in observational studies. Adjustment for risk factors such as age (a typical practice to eliminate confounding) did not change the reported findings.<br />
<br />
<b>Was this the right answer?</b><br />
<br />
The NHS was not unique in reporting the benefits associated with HRT; other observational studies corroborated the NHS findings. A secondary retrospective data analysis of the UK primary care electronic medical record database, for example, also showed the protective effect associated with HRT use. Researchers were aware of the fundamental limitations of observational studies, particularly with regard to selection bias. They and practicing clinicians were also aware of the potential negative health effects of HRT, which had to be constantly weighed against the potential cardioprotective benefits in deciding a patient’s course of treatment. As a large section of the population could experience the health effects of HRT, researchers began planning RCTs to verify the promising observational study results. It was highly anticipated that those RCTs would corroborate the belief that estrogen replacement can reduce CVD risk.<br />
<br />
<b>Randomized Controlled Trial: The Women’s Health Initiative</b><br />
<br />
The Women’s Health Initiative (WHI) was a major study established by the National Institutes of Health in 1992 to assess a broad range of health effects in postmenopausal women. The trial was intended to follow these women for 8 years, at a cost of millions of dollars in federal funding. Among its many facets, it included an RCT to confirm the results from the observational studies discussed above. To fully investigate earlier findings, the WHI had two subgroups. One subgroup consisted of women with prior hysterectomies; they received estrogen monotherapy. The second group consisted of women who had not undergone hysterectomy; they received estrogen in combination with progestin. The WHI enrolled 27,347 women in their HRT investigation: 10,739 in the estrogen-alone arm and 16,608 in the estrogen plus progestin arm. Within each arm, women were randomly assigned to receive either HRT or placebo. All women in the trial were postmenopausal and aged 50-79 years; the mean age was 63.6 years (a fact that would be important in later analysis). Some participants had experienced previous CV events. The primary outcome of both subgroups was coronary heart disease (CHD), as described by nonfatal MI or death due to CHD.<br />
<br />
The estrogen-progestin arm of the WHI was halted after a mean follow-up of 5.2 years, 3 years earlier than expected, as the HRT users in this arm were found to be at increased risk for CHD compared to those who received placebo. The study also noted elevated rates of breast cancer and stroke, among other poor outcomes. The estrogen-alone arm continued for an average follow-up of 6.8 years before being similarly discontinued ahead of schedule. Although this part of the study did not find an increased risk of CHD, it also did not find any cardioprotective effect. Beyond failing to locate any clear CV benefits, the WHI also found real evidence of harm, including increased risk of blood clots, breast cancer and stroke. Initial WHI publications therefore recommended against HRT being prescribed for the secondary prevention of CVD.<br />
<br />
<b>What Next?</b><br />
<br />
Scientists, and the clinicians who relied on their data for guidance in treating patients, were faced with conflicting data: epidemiological and observational studies suggested that HRT was cardioprotective, while the higher-quality evidence from RCTs strongly suggested the opposite. Clinicians primarily followed the WHI results, so prescriptions for HRT in postmenopausal women quickly declined. Meanwhile, researchers began to analyze the studies for potential discrepancies and found that the women being followed in the NHS and the WHI differed in several important characteristics.<br />
<br />
First, the WHI population was older than the NHS cohort, and many had entered menopause at least 10 years before they enrolled in the RCT. Thus, the WHI enrollees experienced a long duration from the onset of menopause to the commencement of HRT. At the same time, many in the NHS population were closer to the onset of menopause and were still displaying hormonal symptoms when they began HRT. Second, although the NHS researchers adjusted the data for various confounding effects, their results could still have been subject to bias. In general, the NHS cohort was more highly educated and of a higher socioeconomic status than the WHI participants, and therefore more likely to see a physician regularly. The NHS women were also leaner and generally healthier than their RCT counterparts, and had been selected for their evident lack of pre-existing CV conditions. This selection bias in the NHS enrollment may have led to a “healthy woman” effect that in turn led to an overestimation of the benefits of therapy in the observational study. Third, researchers noted that dosing differences between the two study types may have contributed to the divergent results. The NHS reported beneficial results following low-dose estrogen therapy. The WHI, meanwhile, used a higher estrogen dose, exposing women to a larger dosage of hormones and increasing their risk for adverse events. The increased risk profile of the WHI women (e.g., older, more comorbidities, higher estrogen dose) could have contributed to the evidence of harm seen in the WHI results.<br />
<br />
<b>Emerging Data</b><br />
<br />
In addition to identifying the inherent differences between the two study populations, researchers began a secondary analysis of the NHS and WHI trials. NHS researchers reported that women who began HRT close to the onset of menopause had a significantly reduced risk of CHD. In the subgroups of women that were older and had a similar duration after menopause compared with the WHI women, they found no significant relationship between HRT and CHD. Also, the WHI study further stratified these results by age, and found that women who began HRT close to their onset of menopause experienced some cardioprotection, while women who were further from the onset of menopause had a slightly elevated risk for CHD.<br />
<br />
Secondary analysis of both studies was therefore necessary to show that age and a short duration from the onset of menopause are crucial to HRT success as a cardioprotective agent. Neither study type alone provided the “truth”; rather, both provided “truth” when viewed carefully (i.e., both produced valid and important results). The differences seen in the studies were rooted in the timing of HRT and the populations being studied.<br />
<br />
<b>Lessons Learned From This Case Study</b><br />
<br />
Although RCTs are given a higher evidence grade, observational studies provide important clinical insights. In this example, the study populations differed. For policymakers and clinicians, it is crucial to examine whether the CER was based upon patients similar to those being considered. Any study with a dissimilar population may provide non-relevant results. Thus, readers of CER need to carefully examine the generalizability of the findings being reported.<br />
<br />
==Appendix==<br />
<br />
General Classification and Regression Tree (CART) data analysis steps using the R package <b>rpart</b>.<br />
<br />
===Growing the Tree===<br />
<br />
# To grow a tree, use<br />
<b>rpart(formula, data=, method=, control=)</b>, where:<br />
*<b>formula</b> is in the format outcome ~ predictor1 + predictor2 + ...<br />
*<b>data=</b> specifies the data frame<br />
*<b>method=</b> "class" for a classification tree, "anova" for a regression tree<br />
*<b>control=</b> optional parameters for controlling tree growth. For example, control=rpart.control(minsplit=30, cp=0.001) requires that the minimum number of observations in a node be 30 before attempting a split and that a split must decrease the overall lack of fit by a factor of 0.001 (cost-complexity factor) before being attempted.<br />
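As a worked sketch, a classification tree can be grown on the kyphosis data set that ships with rpart (the minsplit and cp values below are illustrative choices, not recommendations):<br />

```r
library(rpart)

# Grow a classification tree predicting Kyphosis (present/absent)
# from patient age, number of vertebrae involved, and start vertebra.
fit <- rpart(Kyphosis ~ Age + Number + Start,
             data = kyphosis, method = "class",
             control = rpart.control(minsplit = 20, cp = 0.01))

print(fit)   # text listing of the splits
```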
<br />
===Examining Results===<br />
<br />
# These functions help with examining the results:<br />
*<b>printcp(fit)</b>: display the complexity parameter (cp) table<br />
*<b>plotcp(fit)</b>: plot cross-validation results<br />
*<b>rsq.rpart(fit)</b>: plot approximate R-squared and relative error for different splits (2 plots); labels are only appropriate for the "anova" method<br />
*<b>print(fit)</b>: print results<br />
*<b>summary(fit)</b>: detailed results, including surrogate splits<br />
*<b>plot(fit)</b>: plot the decision tree<br />
*<b>text(fit)</b>: label the decision tree plot<br />
*<b>post(fit, file=)</b>: create a PostScript plot of the decision tree<br />
# In trees created by rpart(), move to the LEFT branch when the stated condition is true.<br />
<br />
===Pruning Trees===<br />
<br />
# In general, trees should be pruned back to avoid overfitting the data. The tree size should minimize the cross-validated error (the xerror column printed by printcp()). Pruning the tree is accomplished by:<br />
prune(fit, cp= )<br />
# Use printcp() to examine the cross-validation error results, select the complexity parameter (CP) associated with the minimum error, and insert that CP into the prune() function. This (automatically selecting the complexity parameter associated with the smallest cross-validated error) can be done succinctly by:<br />
fit$\$$cptable[which.min(fit$\$$cptable[,"xerror"]),"CP"]<br />
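Putting the growing and pruning steps together, again using the kyphosis data shipped with rpart as an illustrative example:<br />

```r
library(rpart)

# Grow a (possibly overfit) classification tree
fit <- rpart(Kyphosis ~ Age + Number + Start,
             data = kyphosis, method = "class")

printcp(fit)   # inspect the cross-validated error (xerror) by CP

# Select the CP with the smallest cross-validated error and prune
best_cp <- fit$cptable[which.min(fit$cptable[, "xerror"]), "CP"]
pruned  <- prune(fit, cp = best_cp)
```

Because the xerror column comes from random cross-validation folds, the selected CP (and hence the pruned tree) can vary slightly from run to run.<br />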
<br />
===Complete Dataset for N-of-1 Example===
[[SMHS_MethodsHeterogeneity_CER_Nof1|This N-of-1 Dataset]] includes an example.<br />
<br />
===Footnotes===<br />
<br />
*<sup>13</sup> Based on 2009 NPC report, http://www.npcnow.org/publication/demystifying-comparative-effectiveness-research-case-study-learning-guide <br />
*<sup>14</sup> http://www.cancer.gov/cancertopics/druginfo/fda-cetuximab<br />
*<sup>15</sup> http://www.nejm.org/doi/full/10.1056/NEJMoa072761<br />
*<sup>16</sup> http://jech.bmj.com/content/59/9/740.short<br />
<br />
===[[SMHS_MethodsHeterogeneity|Back to the Heterogeneity of Treatment Effects, Case-Studies of Comparative Effectiveness Research section]]===<br />
<br />
<hr><br />
* SOCR Home page: http://www.socr.umich.edu<br />
<br />
{{translate|pageName=http://wiki.socr.umich.edu/index.php/SMHS_MethodsHeterogeneity_CER}}</div>
<hr />
<div>==[[SMHS_MethodsHeterogeneity| Methods for Studying Heterogeneity of Treatment Effects, Case-Studies of Comparative Effectiveness Research]] - Comparative Effectiveness Research (CER): Case Studies<sup>13</sup> ==<br />
<br />
===Observational Studies: Tips for the CER Practitioners===<br />
<br />
*Different study types can offer different understandings; neither should be discounted without closer examination.<br />
<br />
*RCTs provide an accurate understanding of the effect of a particular intervention in a well-defined patient group under “controlled” circumstances.<br />
<br />
*Observational studies provide an understanding of real-world care and its impact, but can be biased due to uncontrolled factors.<br />
<br />
*Observational studies differ in the types of databases used. These databases may lack clinical detail and contain incomplete or inaccurate data.<br />
<br />
*Before accepting the findings from an observational study, consider whether confounding factors may have influenced the results.<br />
<br />
*In this scenario, subgroup analysis was vital in clarifying both study designs; what is true for the many (e.g., overall, estrogen appeared to be detrimental) may not be true for the few (e.g., that for the younger post-menopausal woman, the benefits were greater and the harms less frequent).<br />
<br />
*Carefully examine the generalizability of the study. Do the study’s patients and intervention match those under consideration?<br />
<br />
*Observational studies can identify associations but cannot prove cause-and-effect relationships.<br />
<br />
===Case-Study 1: The Cetuximab Study<sup>14</sup>===<br />
<br />
<b>What was done and what was found?</b><br />
<br />
Cetuximab, an anti-epidermal growth factor receptor (EGFR) agent, has recently been added to the therapeutic armamentarium. Two important RCTs examined its impact in patients with mCRC (metastatic colorectal cancer). In the first, investigators at 56 centers in 11 European countries examined the outcomes associated with cetuximab therapy in 329 mCRC patients who experienced disease progression either on irinotecan therapy or within 3 months thereafter. The study reported that the group on a combination of irinotecan and cetuximab had a significantly higher rate of overall response to treatment (primary endpoint) than the group on cetuximab alone: 22.9% (95% CI, 17.5-29.1%) vs. 10.8% (95% CI, 5.7-18.1%) (P=0.007), respectively. Similarly, the median time to progression was significantly longer in the combination therapy group (4.1 vs. 1.5 months, P<0.001). As these patients had already progressed on irinotecan prior to the study, any response was viewed as positive. Safety between the two treatment arms was similar: approximately 80% of patients in each arm experienced a rash. Grade 3 or 4 (the more severe) toxic effects on the skin were slightly more frequent in the combination-therapy group compared to cetuximab monotherapy, observed in 9.4% and 5.2% of participants, respectively. Other side effects, such as diarrhea and neutropenia observed in the combination-therapy arm, were considered to be in the range expected for irinotecan alone. Data from this study demonstrated the efficacy and safety of cetuximab and were instrumental in the FDA’s 2004 approval.<br />
<br />
A second RCT (2007) examined 572 patients and suggested efficacy of cetuximab in the treatment of mCRC. This study was a randomized, non-blinded, controlled trial that examined cetuximab monotherapy plus best supportive care compared to best supportive care alone in patients who had received and failed prior chemotherapy regimens. It reported that median overall survival (the primary endpoint) was significantly higher in patients receiving cetuximab plus best supportive care compared to best supportive care alone (6.1 vs. 4.6 months, respectively) (hazard ratio for death=0.77; 95% CI: 0.64- 0.92, P=0.005). This RCT described a greater incidence of adverse events in the cetuximab plus best supportive care group compared to best supportive care alone including (most significantly) rash, as well as edema, fatigue, nausea and vomiting.<br />
<br />
<b>Was this the right answer?</b><br />
<br />
These RCTs had fairly broad enrollment criteria and the cetuximab benefits were modest. Emerging scientific theories raised the possibility that genetically defined population subsets might experience a greater-than-average treatment benefit. One such area of inquiry entailed examining “biomarkers,” or genetic indicators of a patient’s greater response to therapy. Even as the above RCTs were being conducted, data emerged showing the importance of the KRAS gene.<br />
<br />
<b>Emerging Data</b><br />
<br />
Based on the emerging biochemical evidence that the epidermal growth factor receptor (EGFR) treatment mechanism (Cetuximab) was even more finely detailed than previously understood, the study authors of the 2007 RCT undertook a retrospective subgroup analysis using tumor tissue samples preserved from their initial study. Following laboratory analysis, all viable tissue samples were classified as having a wild-type (non-mutated) or a mutated KRAS gene. Instead of the previous two study arms (cetuximab plus best supportive care vs. best supportive care alone), there were 4 for this new analysis: each of the two original study arms was further divided by wild-type vs. mutated KRAS status. Laboratory evaluation determined that 40.9% and 42.3% of all patients in the RCT had a KRAS mutation in the cetuximab plus best supportive care group compared to the best supportive care group alone, respectively. The efficacy of cetuximab was found to be significantly correlated with KRAS status: in patients with wild-type (non-mutated) KRAS genes, cetuximab plus best supportive care compared to best supportive care alone improved overall survival (median 9.5 vs. 4.8 months, respectively; hazard ratio for death=0.55; 95% CI, 0.41-0.74, P<0.001), and progression-free survival (median 3.7 vs. 1.9 months, respectively; hazard ratio for progression or death=0.40; 95% CI, 0.30-0.54, P<0.001). Meanwhile, in patients with mutated KRAS tumors, the authors found no significant difference in outcome between cetuximab plus best supportive care vs. best supportive care alone.<br />
<br />
<b>What next?</b><br />
<br />
Based on these and similar results from other studies, the FDA narrowed its product labeling in July 2009 to indicate that cetuximab is not recommended for mCRC patients with mutated KRAS tumors. This distinction reduces the relevant population by approximately 40%. Similarly, the American Society of Clinical Oncology released a provisional clinical recommendation that all mCRC patients have their tumors tested for KRAS status before receiving anti-EGFR therapy. The benefits of targeted treatment are many. Patients who previously underwent cetuximab therapy without knowing their genetic predisposition would no longer have to be exposed to the drug’s toxic effects if unnecessary, as the efficacy of cetuximab is markedly higher in the genetically defined appropriate patients. In a less-uncertain environment, clinicians can be more confident in advocating a course of action in their care of patients. And finally, knowledge that targeted therapy is possible suggests the potential for further innovation in treatment options. In fact, research continues to demonstrate options for targeted cetuximab treatment of mCRC at an even finer scale than seen with KRAS; and similar genetic targeting is being investigated, and advocated, in other cancer types.<br />
<br />
<b>Lessons Learned From this case Study</b><br />
<br />
Although RCTs are generally viewed as the gold standard, results of one or even a series of trials may not accurately reflect the benefits experienced by an individual patient. This case-study suggests that cetuximab initially appeared to have rather modest clinical benefits. However, new information that became available and subsequent genetic subgroup assessments led to very different conclusions. Clinicians should be aware that the current knowledge is likely to evolve and any decisions about patient care should be carefully considered with that sense of uncertainty in mind. As in this case study, subgroup analyses (e.g., genetic subtypes) need a theoretical rationale. Ideally, the analyses should be determined at the time of original RCT design and should not just occur as explorations of the subsequent data. When improperly employed, post hoc analyses may lead to incorrect patient care conclusions.<br />
<br />
<b>RCTs Tips for the CER Practitioners</b><br />
<br />
*RCTs can determine whether an intervention can provide benefit in a very controlled environment.<br />
<br />
*The controlled nature of an RCT may limit its generalizability to a broader population.<br />
<br />
*No results are permanent; advances in scientific knowledge and understanding can influence how we view the effectiveness (or safety) of a therapeutic intervention.<br />
<br />
*Targeted therapy illuminated by carefully thought out subgroup analyses can improve the efficacious and safe use of an intervention.<br />
<br />
===Case-Study 2: The Rosiglitazone Study<sup>15</sup>===<br />
<br />
<b>Meta-analysis</b><br />
<br />
Often the results for the same intervention differ across clinical trials and it may not be clear whether one therapy provides more benefit than another. As CER increases and more studies are conducted, clinicians and policymakers are more likely to encounter this scenario. In a systematic review, a researcher identifies similar studies and displays their results in a table, enabling qualitative comparisons across the studies. With a meta-analysis, the data from included studies are statistically combined into a single “result.” Merging the data from a number of studies increases the effective sample size of the investigation, providing a statistically stronger conclusion about the body of research. By so doing, investigators may detect low frequency events and demonstrate more subtle distinctions between therapeutic alternatives.<br />
<br />
When studies have been properly identified and combined, the meta-analysis produces a summary estimate of the findings and a confidence interval that can serve as a benchmark in medical opinion and practice. However, when done incorrectly, the quantitative and statistical analysis can create impressive “numbers” but biased results. The following are important criteria for properly conducted meta-analyses:<br />
<br />
1. Carefully defining unbiased inclusion or exclusion criteria for study selection<br />
<br />
2. Including only those studies that have similar design elements, such as patient population, drug regimen, outcomes being assessed, and time-frame<br />
<br />
3. Applying correct statistical methods to combine and analyze the data<br />
<br />
Reporting this information is essential for the reader to determine whether the data were suitable to combine, and if the meta-analysis draws unbiased conclusions. Meta-analyses of randomized clinical trials are considered to be the highest level of medical evidence as they are based upon a synthesis of rigorously controlled trials that systematically reduce bias and confounding. This technique is useful in summarizing available evidence and will likely become more common in the era of publicly funded comparative effectiveness research. The following case study will examine several key principles that will be useful as the reader encounters these publications.<br />
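The pooling described above can be sketched numerically. The following is a minimal illustration of fixed-effect (inverse-variance) meta-analysis using made-up study results — the log odds ratios and standard errors below are hypothetical, not data from any study discussed here:

```python
import math

# Illustrative (made-up) per-study results: log odds ratios and standard errors.
studies = [
    {"log_or": 0.35, "se": 0.20},
    {"log_or": 0.10, "se": 0.15},
    {"log_or": 0.25, "se": 0.30},
]

# Fixed-effect (inverse-variance) pooling: weight each study by 1/se^2,
# so larger (more precise) studies dominate the summary estimate.
weights = [1.0 / s["se"] ** 2 for s in studies]
pooled_log_or = sum(w * s["log_or"] for w, s in zip(weights, studies)) / sum(weights)
pooled_se = math.sqrt(1.0 / sum(weights))

# Summary estimate with a 95% confidence interval, back on the odds-ratio scale.
or_hat = math.exp(pooled_log_or)
ci_low = math.exp(pooled_log_or - 1.96 * pooled_se)
ci_high = math.exp(pooled_log_or + 1.96 * pooled_se)
print(f"Pooled OR = {or_hat:.2f} (95% CI {ci_low:.2f}-{ci_high:.2f})")
```

Note that the pooled standard error is smaller than any single study’s standard error — this is the “increased effective sample size” benefit, and also why a meta-analysis built from dissimilar studies can produce a precise-looking but biased summary.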
<br />
<b>Clinical Application</b><br />
<br />
Heart disease is the leading cause of mortality in the United States, resulting in approximately 20% of all deaths. Diabetics are particularly susceptible to heart disease, with more than 65% of deaths attributable to it. The nonfatal complications of diabetes are wide-ranging and include kidney failure, nerve damage, amputation, stroke and blindness, among other outcomes. In 2007, the total estimated cost of diabetes in the United States was $\$$174 billion; $\$$116 billion was derived from direct medical expenditures and the rest from the indirect cost of lost productivity due to the disease. With such serious health effects and heavy direct and indirect costs tied to diabetes, proper disease management is critical. Historically, diabetes treatment has focused on strict blood sugar control, assuming that this goal not only targets diabetes but also reduces other serious comorbidities of the disease.<br />
<br />
Anti-diabetic agents have long been associated with key questions as to their benefits/risks in the treatment of diabetes. The sulfonylurea tolbutamide, a first generation anti-diabetic drug, was found in a landmark study in the 1970s to significantly increase the cardiovascular (CV) mortality rate compared to patients not on this agent. Further analysis by external parties concluded that the methods employed in this trial were significantly flawed (e.g., use of an “arbitrary” definition of diabetes status, heterogeneous baseline characteristics of the populations studied, and incorrect statistical methods). Since these early studies, CV concerns continue to be an issue with selected oral hypoglycemic agents that have subsequently entered the marketplace.<br />
<br />
A class of drugs, thiazolidinedione (TZD), was approved in the late 1990s, as a solution to the problems associated with the older generation of sulfonylureas. Rosiglitazone, a member of the TZD class, was approved by the FDA in 1999 and was widely prescribed for the treatment of type-2 diabetes. A number of RCTs supported the benefit of rosiglitazone as an important new oral antidiabetic agent. However, safety concerns developed as the FDA received reports of adverse cardiac events potentially associated with rosiglitazone. It was in this setting that a meta-analysis by Nissen and Wolski was published in the New England Journal of Medicine in June 2007.<br />
<br />
<b>What was done?</b><br />
<br />
Nissen and Wolski conducted a meta-analysis examining the impact of rosiglitazone on cardiac events and mortality compared to alternative therapeutic approaches. The study began with a broad search to locate potential studies for review. The authors screened published phase II, III, and IV trials; the FDA website; and the drug manufacturer’s clinical-trial registry for applicable data relating to rosiglitazone use. When the initial search was complete, the studies were further categorized by pre-stated inclusion criteria. Meta-analysis inclusion criteria were simple: studies had to include rosiglitazone and a randomized comparator group treated with either another drug or placebo, study arms had to show similar length of treatment, and all groups had to have received more than 24 weeks of exposure to the study drugs. The studies had to contain outcome data of interest including the rate of myocardial infarction (MI) or death from all CV causes. Out of 116 studies surveyed by the authors, 42 met their inclusion criteria and were included in the meta-analysis. Of the studies they included, 23 had durations of 26 weeks or less, and only five studies followed patients for more than a year. Until this point, the study’s authors were following a path similar to that of any reviewer interested in CV outcomes, examining the results of these 42 studies and comparing them qualitatively. Quantitatively combining the data, however, required the authors to make choices about the studies they could merge and the statistical methods they should apply for analysis. Those decisions greatly influenced the results that were reported.<br />
<br />
<b>What was found?</b><br />
<br />
When the studies were combined, the meta-analysis contained data from 15,565 patients in the rosiglitazone group and 12,282 patients as comparators. Analyzing their data, the authors chose one particular statistical method (the Peto odds ratio method, a fixed-effect statistical approach), which calculates the odds of events occurring where the outcomes of interest are rare and small in number. In comparing rosiglitazone with a “control” group that included other drugs or placebo, the authors reported odds ratios of 1.43 (95% CI, 1.03-1.98; P=0.03) and 1.64 (95% CI,<br />
0.98-2.74; P=0.06) for MI and death from CV causes, respectively. In other words, the odds of an MI or death from a CV cause are higher for rosiglitazone patients than for patients on other therapies or placebo. The authors reported that rosiglitazone was significantly associated with an increase in the risk of MI and had borderline significance in increasing the risk of death from all CV causes. These findings appeared online on the same day that the FDA issued a safety alert regarding rosiglitazone. Discussion of the meta-analysis was immediately featured prominently in the news media. By December 2007, prescription claims for the drug at retail pharmacies had fallen by more than 50%.<br />
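The Peto method used by the authors is a one-step fixed-effect estimator designed for rare events. A minimal sketch, using hypothetical 2x2 tables (these counts are invented for illustration and are not the rosiglitazone trial data):

```python
import math

# Illustrative (made-up) 2x2 tables: (events_trt, n_trt, events_ctl, n_ctl).
tables = [
    (4, 500, 2, 480),
    (6, 1200, 3, 1150),
    (1, 300, 0, 310),   # an arm with zero events still contributes under Peto
]

sum_o_minus_e = 0.0
sum_v = 0.0
for a, n1, c, n2 in tables:
    N = n1 + n2          # total patients in the study
    m = a + c            # total events in the study
    E = n1 * m / N       # expected treatment-arm events under the null
    V = n1 * n2 * m * (N - m) / (N ** 2 * (N - 1))  # hypergeometric variance
    sum_o_minus_e += a - E
    sum_v += V

# Peto one-step log odds ratio: sum(O - E) / sum(V), with SE = 1/sqrt(sum V).
log_or = sum_o_minus_e / sum_v
se = 1.0 / math.sqrt(sum_v)
print(f"Peto OR = {math.exp(log_or):.2f} "
      f"(95% CI {math.exp(log_or - 1.96*se):.2f}-{math.exp(log_or + 1.96*se):.2f})")
```

The method’s known weak points — highly unbalanced arm sizes and mixing very small with very large studies — are exactly the criticisms raised against this meta-analysis below.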
<br />
As diabetic patients and their clinicians reacted to the news, a methodologic debate also ensued. This discussion included statistical issues pertaining to the conduct of the analysis, its implications for clinical care, and finally the FDA and drug manufacturer’s roles in overseeing and regulating rosiglitazone. The concern among patients with diabetes regarding treatment, continues in the medical community today.<br />
<br />
<b>Was this the right answer?</b><br />
<br />
Should the studies have been combined? Commentators faulted the authors for including several studies that were not originally intended to investigate diabetes, and for combining both placebo and drug therapy data into one comparator arm. Some critics noted that despite the stated inclusion criteria, some data were derived from studies where the rosiglitazone arm was allowed a longer follow-up than the comparator arm. By failing to account for this longer follow-up period, commentators felt that the authors may have overestimated the effect of rosiglitazone on CV outcomes. Many reviewers were concerned that this meta-analysis excluded trials in which no patients suffered an MI or died from CV causes – the outcomes of greatest interest. Some reviewers also noted that the exclusion of zero-event trials from the pooled dataset not only gave an incomplete picture of the impact of rosiglitazone but could have increased the odds ratio estimate. In general, the pooled dataset was criticized by many for being a faulty microcosm of the information available regarding rosiglitazone.<br />
<br />
It is essential that a meta-analysis be based on similarity in the data sources. If studies differ in important areas such as the patient populations, interventions, or outcomes, combining their data may not be suitable. The researchers accepted studies and populations that were clinically heterogeneous, yet pooled them as if they were not. The study reported that the results were combined from a number of trials that were not initially intended to investigate CV outcomes. Furthermore, the available data did not allow for time-to-event analysis, an essential tool in comparing the impact of alternative treatment options. Reviewers considered the data to be insufficiently homogeneous, and the line of cause and effect to be murkier than the authors described.<br />
<br />
<b>Were the statistical methods optimal?</b><br />
<br />
The statistical methods for this meta-analysis also came under significant criticism. The critiques focused on the authors’ choice of the Peto method, arguing that it was inappropriate because data were pooled from both small and very large studies, resulting in a potential overestimation of the treatment effect. Other reviewers pointed out that the Peto method should not have been used, as a number of the underlying studies did not have patients assigned equally to rosiglitazone and comparator groups. Finally, critics suggested that the heterogeneity of the included studies required an altogether different set of analytic techniques.<br />
<br />
Demonstrating the sensitivity of the authors’ initial analysis to the inclusion criteria and statistical tests used, a number of researchers reworked the data from this study. One researcher used the same studies but analyzed the data with a more commonly used statistical method (Mantel-Haenszel), and found no significant increase in the relative risk or common odds ratio with MI or CV death. When the pool of studies was expanded to include those originally eliminated because they had zero CV events, the odds ratios for MI and death from CV causes dropped from 1.43 to 1.26 (95% CI, 0.93-1.72) and from 1.64 to 1.14 (95% CI, 0.74-1.74), respectively. Neither of the recalculated odds ratios was significant for MI or CV death. Finally, several newer long-term studies have been published since the Nissen meta-analysis. Incorporating their results with the meta-analysis data showed that rosiglitazone is associated with an increased risk of MI but not of CV death. Thus, the findings from these meta-analyses varied with the methods employed, the studies included, and the addition of later trials.<br />
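For contrast with the Peto approach, the Mantel-Haenszel pooled odds ratio can be sketched in a few lines. The 2x2 tables below are hypothetical counts chosen for illustration, not the actual trial data:

```python
# Illustrative (made-up) 2x2 tables: (events_trt, n_trt, events_ctl, n_ctl).
tables = [
    (4, 500, 2, 480),
    (6, 1200, 3, 1150),
    (1, 300, 0, 310),
]

num = 0.0
den = 0.0
for a, n1, c, n2 in tables:
    b = n1 - a           # treatment-arm non-events
    d = n2 - c           # control-arm non-events
    N = n1 + n2
    num += a * d / N     # Mantel-Haenszel numerator term
    den += b * c / N     # Mantel-Haenszel denominator term

# Pooled odds ratio: sum(a_i*d_i/N_i) / sum(b_i*c_i/N_i)
or_mh = num / den
print(f"Mantel-Haenszel pooled OR = {or_mh:.2f}")
```

Because each study contributes a ratio of within-study cross-products, Mantel-Haenszel weights unbalanced and differently sized studies differently than the Peto method does — which is one reason reanalyses with it can shift the estimate, as described above.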
<br />
<b>Emerging Data</b><br />
<br />
The controversy surrounding the rosiglitazone meta-analysis authored by Nissen and Wolski forced an unplanned interim analysis of a long-term, randomized trial investigating the CV effects of rosiglitazone among patients with type 2 diabetes. The authors of the RECORD trial noted that even though the follow-up at 3.75 years was shorter than expected, rosiglitazone, when added to standard glucose-lowering therapy, was found to be associated with an increase in the risk of heart failure but was not associated with any increase in death from CV or other causes. Data at the time were found to be insufficient to determine the effect of rosiglitazone on an increase in the risk of MI. The final report of that trial, published in June 2009, confirmed the elevated risk of heart failure in people with type 2 diabetes treated with rosiglitazone in addition to glucose-lowering drugs, but continued to show inconclusive results about the effect of the drug therapy on the risk of MI. Further, the RECORD trial clarified that rosiglitazone does not result in an increased risk of CV morbidity or mortality compared to standard glucose-lowering drugs. Other trials conducted since the publishing of the meta-analysis have corroborated these results, casting further doubt on the findings of the meta-analysis published by Nissen and Wolski.<br />
<br />
<b>Now what?</b><br />
<br />
Some sources suggest that the original Nissen meta-analysis delivered more harm than benefit, and that a well-recognized medical journal may have erred in its process of peer review. Despite this criticism, it is important to note that subsequent publications support the risk of adverse CV events associated with rosiglitazone, although rosiglitazone use does not appear to increase deaths. These results and emerging data point to the need for further rigorous research to clarify the benefits and risks of rosiglitazone on a variety of outcomes, and the importance of directing the drug to the population that will maximally benefit from its use.<br />
<br />
<b>Lessons Learned From this Case Study</b><br />
<br />
Results from initial randomized trials that seem definitive at one time may not be conclusive, as further trials may emerge to clarify, redirect, or negate previously accepted results. A meta-analysis of those trials can lead to varying results based upon the timing of the analysis and the choices made in its performance.<br />
<br />
<b>Meta-Analysis: Tips for CER Practitioners</b><br />
<br />
*The results of a meta-analysis are highly dependent on the studies included (and excluded). Are these criteria properly defined and relevant to the purposes of the meta-analysis? Were the combined studies sufficiently similar? Can results from this cohort be generalized to other populations of interest?<br />
<br />
*The statistical methodology can impact study results. Have there been reviews critiquing the methods used in the meta-analysis?<br />
<br />
*A variety of statistical tests should be considered, and perhaps reported, in the analysis of results. Do the authors mention their rationale in choosing a statistical method? Do they show the stability of their results across a spectrum of analytical methods?<br />
<br />
*Nothing is permanent. Emerging data may change the playing field, and meta- analysis results are only as good as the data and statistics from which they are derived.<br />
<br />
===Case-Study 3: The Nurses’ Health Study<sup>16</sup>===<br />
<br />
<b>An observational study</b><br />
<br />
An observational study is a very common type of research design in which the effects of a treatment or condition are studied without formally randomizing patients in an experimental design. Such studies can be done prospectively, wherein data are collected about a group of patients going forward in time; or retrospectively, in which the researcher looks into the past, mining existing databases for data that have already been collected. The latter studies are frequently performed by using an electronic database that contains, for example, administrative, “billing,” or claims data. Less commonly, observational research uses electronic health records, which have greater clinical information that more closely resembles the data collected in an RCT. Observational studies often take place in “real-world” environments, which allow researchers to collect data for a wide array of outcomes. Patients are not randomized in these studies, but the findings can be used to generate hypotheses for investigation in a more constrained experimental setting. Perhaps the best known observational study is the “Framingham study,” which collected demographic and health data for a group of individuals over many years (and continues to do so) and has provided an understanding of the key risk factors for heart disease and stroke.<br />
<br />
Observational studies present many advantages to the comparative effectiveness researcher. The study design can provide a unique glimpse of the use of a health care intervention in the “real world,” an essential step in gauging the gap between efficacy (can a treatment work in a controlled setting?) and effectiveness (does the treatment work in a real-life situation?). Furthermore, observational studies can be conducted at low cost, particularly if they involve the secondary analysis of existing data sources. CER often uses administrative databases, which are based upon the billing data submitted by providers during routine care. These databases typically have limited clinical information, may have errors in them, and generally do not undergo auditing.<br />
<br />
The uncontrolled nature of observational studies allows them to be subject to bias and confounding. For example, doctors may prescribe a new medication only for the sickest patients. Comparing these outcomes (without careful statistical adjustment) with those from less ill patients receiving alternative treatment may lead to misleading results. Observational studies can identify important associations but cannot prove cause and effect. These studies can generate hypotheses that may require RCTs for fuller demonstration of those relationships. Secondary analysis can also be problematic if researchers overwork datasets by doing multiple exploratory analyses (e.g., data-dredging): the more we look, the more we find, even if those findings are merely statistical aberrations. Unfortunately, the growing need for CER and the wide availability of administrative databases may lead to selection of research of poor quality with inaccurate findings.<br />
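The sicker-patients example above is worth making concrete. The following sketch uses invented counts (a deliberate Simpson’s-paradox construction, not real clinical data) in which a hypothetical new drug has a lower death rate within every severity stratum, yet looks worse in the crude, unadjusted comparison because it is given mostly to the sickest patients:

```python
# Hypothetical counts illustrating confounding by disease severity.
# Format: stratum -> ((deaths, total) on new drug, (deaths, total) on old drug)
strata = {
    "severe": ((30, 100), (18, 40)),   # new drug given mostly to severe cases
    "mild":   ((2, 40), (10, 100)),
}

def rate(deaths, total):
    return deaths / total

# Stratum-specific death rates: the new drug is better within each stratum.
for name, ((dn, tn), (do, to)) in strata.items():
    print(f"{name}: new {rate(dn, tn):.2f} vs old {rate(do, to):.2f}")

# Crude (unadjusted) rates, pooling across severity: the new drug looks worse.
new_deaths = sum(s[0][0] for s in strata.values())
new_total = sum(s[0][1] for s in strata.values())
old_deaths = sum(s[1][0] for s in strata.values())
old_total = sum(s[1][1] for s in strata.values())
crude_new, crude_old = rate(new_deaths, new_total), rate(old_deaths, old_total)
print(f"crude: new {crude_new:.2f} vs old {crude_old:.2f}")
```

Stratifying by (or otherwise adjusting for) severity reverses the crude conclusion, which is exactly why unadjusted comparisons from observational data can mislead.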
<br />
In comparative effectiveness research, observational studies are typically considered to be less conclusive than RCTs and meta-analyses. Nonetheless, they can be useful, especially because they examine typical care. Due to lower cost and improvements in health information, observational studies will become increasingly common. Critical assessment of whether the described results are helpful or biased (based upon how the study was performed) are necessary. This case will illustrate several characteristics of the types of studies that will assist in evaluating newly published work. <br />
<br />
<b>Clinical Applications</b><br />
<br />
Cardiovascular diseases (CVD) are the leading cause of death in women older than the age of 50. Epidemiologic evidence suggests that estrogen is a key mediator in the development of CVD. Estrogen is an ovarian hormone whose production decreases as women approach menopause. The steep increase in CVD among women at and beyond menopause, and among women who have had hysterectomies, further supports a relationship between estrogen and CVD. Building on this evidence of biologic plausibility, epidemiological and observational studies suggested that estrogen replacement therapy (a form of <b>hormone replacement therapy</b>, or HRT) had positive effects on the risk of CVD in postmenopausal women (albeit with some negative effects in its potential to increase the risk for breast cancer and stroke). Based on these findings, in the 1980s and 1990s HRT was routinely employed to treat menopausal symptoms and serve as prophylaxis against CVD.<br />
<br />
<b>What was done?</b><br />
<br />
The Nurses’ Health Study (NHS) began collecting data in 1976. In the study, researchers intended to examine a broad range of health effects in women over a long period of time, and a key goal was to clarify the role of HRT in heart disease. The cohort (i.e., the group being followed) included married registered nurses aged 30-55 in 1976 who lived in the 11 most populous states. To collect data, the researchers mailed the study participants a survey every 2 years that asked questions about topics such as smoking, hormone use, menopausal status, and less frequently, diet. Data were collected for key end points that included MI, coronary-artery bypass grafting or angioplasty, stroke, total CVD mortality, and deaths from all causes.<br />
<br />
<b>What was found?</b><br />
<br />
At a 10-year follow-up point, the NHS had a study pool of 48,470 women. The researchers found that estrogen use (alone, without progestin) in postmenopausal women was associated with a reduction in the incidence of CVD as well as in CVD mortality compared to non-users. Later, estrogen-progestin combination therapy was shown to be even more cardioprotective than estrogen monotherapy, and lower doses of estrogen replacement therapy were found to deliver equal cardioprotection and lower the risk for adverse events. NHS researchers were alert to the potential for bias in observational studies. Adjustment for risk factors such as age (a typical practice to eliminate confounding) did not change the reported findings.<br />
<br />
<b>Was this the right answer?</b><br />
<br />
The NHS was not unique in reporting the benefits associated with HRT; other observational studies corroborated the NHS findings. A secondary retrospective data analysis of the UK primary care electronic medical record database, for example, also showed the protective effect associated with HRT use. Researchers were aware of the fundamental limitations of observational studies, particularly with regard to selection bias. They and practicing clinicians were also aware of the potential negative health effects of HRT, which had to be constantly weighed against the potential cardioprotective benefits in deciding a patient’s course of treatment. As a large section of the population could experience the health effects of HRT, researchers began planning RCTs to verify the promising observational study results. It was highly anticipated that those RCTs would corroborate the belief that estrogen replacement can reduce CVD risk.<br />
<br />
<b>Randomized Controlled Trial: The Women’s Health Initiative</b><br />
<br />
The Women’s Health Initiative (WHI) was a major study established by the National Institutes of Health in 1992 to assess a broad range of health effects in postmenopausal women. The trial was intended to follow these women for 8 years, at a cost of millions of dollars in federal funding. Among its many facets, it included an RCT to confirm the results from the observational studies discussed above. To fully investigate earlier findings, the WHI had two subgroups. One subgroup consisted of women with prior hysterectomies; they received estrogen monotherapy. The second group consisted of women who had not undergone hysterectomy; they received estrogen in combination with progestin. The WHI enrolled 27,347 women in their HRT investigation: 10,739 in the estrogen-alone arm and 16,608 in the estrogen plus progestin arm. Within each arm, women were randomly assigned to receive either HRT or placebo. All women in the trial were postmenopausal and aged 50-79 years; the mean age was 63.6 years (a fact that would be important in later analysis). Some participants had experienced previous CV events. The primary outcome of both subgroups was coronary heart disease (CHD), as described by nonfatal MI or death due to CHD.<br />
<br />
The estrogen-progestin arm of the WHI was halted after a mean follow-up of 5.2 years, 3 years earlier than expected, as the HRT users in this arm were found to be at increased risk for CHD compared to those who received placebo. The study also noted elevated rates of breast cancer and stroke, among other poor outcomes. The estrogen-alone arm continued for an average follow-up of 6.8 years before being similarly discontinued ahead of schedule. Although this part of the study did not find an increased risk of CHD, it also did not find any cardioprotective effect. Beyond failing to locate any clear CV benefits, the WHI also found real evidence of harm, including increased risk of blood clots, breast cancer and stroke. Initial WHI publications therefore recommended against HRT being prescribed for the secondary prevention of CVD.<br />
<br />
<b>What Next?</b><br />
<br />
Scientists and the clinicians who relied on their data for guidance in treating patients were faced with conflicting data: epidemiological and observational studies suggested that HRT was cardioprotective while the higher-quality evidence from RCTs strongly suggested the opposite. Clinicians primarily followed the WHI results, so prescriptions for HRT in postmenopausal women quickly declined. Meanwhile, researchers began to analyze the studies for potential discrepancies, and found that the women being followed in the NHS and the WHI differed in several important characteristics.<br />
<br />
First, the WHI population was older than the NHS cohort, and many had entered menopause at least 10 years before they enrolled in the RCT. Thus, the WHI enrollees experienced a long duration from the onset of menopause to the commencement of HRT. At the same time, many in the NHS population were closer to the onset of menopause and were still displaying hormonal symptoms when they began HRT. Second, although the NHS researchers adjusted the data for various confounding effects, their results could still have been subject to bias. In general, the NHS cohort was more highly educated and of a higher socioeconomic status than the WHI participants, and therefore more likely to see a physician regularly. The NHS women were also leaner and generally healthier than their RCT counterparts, and had been selected for their evident lack of pre-existing CV conditions. This selection bias in the NHS enrollment may have led to a “healthy woman” effect that in turn led to an overestimation of the benefits of therapy in the observational study. Third, researchers noted that dosing differences between the two study types may have contributed to the divergent results. The NHS reported beneficial results following low-dose estrogen therapy. The WHI, meanwhile, used a higher estrogen dose, exposing women to a larger dosage of hormones and increasing their risk for adverse events. The increased risk profile of the WHI women (e.g., older, more comorbidities, higher estrogen dose) could have contributed to the evidence of harm seen in the WHI results.<br />
<br />
<b>Emerging Data</b><br />
<br />
In addition to identifying the inherent differences between the two study populations, researchers began a secondary analysis of the NHS and WHI trials. NHS researchers reported that women who began HRT close to the onset of menopause had a significantly reduced risk of CHD. In the subgroups of women who were older and had a similar duration after menopause compared with the WHI women, they found no significant relationship between HRT and CHD. Also, the WHI study further stratified these results by age, and found that women who began HRT close to their onset of menopause experienced some cardioprotection, while women who were further from the onset of menopause had a slightly elevated risk for CHD.<br />
<br />
Secondary analysis of both studies was therefore necessary to show that age and a short duration from the onset of menopause are crucial to HRT success as a cardioprotective agent. Neither study type provided “truth”; or rather, both studies provided “truth” when viewed carefully (e.g., both produced valid and important results). The differences seen in the studies were rooted in the timing of HRT and the populations being studied.<br />
<br />
<b>Lessons Learned From This Case Study</b><br />
<br />
Although RCTs are given a higher evidence grade, observational studies provide important clinical insights. In this example, the study populations differed. For policymakers and clinicians, it is crucial to examine whether the CER was based upon patients similar to those being considered. Any study with a dissimilar population may provide non-relevant results. Thus, readers of CER need to carefully examine the generalizability of the findings being reported.<br />
<br />
==Appendix==<br />
<br />
General Classification and Regression Tree (CART) data analysis steps using the R package <b>rpart</b>.<br />
<br />
===Growing the Tree===<br />
<br />
# To grow a tree, use<br />
rpart(formula, data=, method=, control=), where<br />
formula is in the format outcome ~ predictor1+predictor2+...<br />
data= specifies the data frame<br />
method= "class" for a classification tree, "anova" for a regression tree<br />
control= optional parameters for controlling tree growth. For example, control=rpart.control(minsplit=30, cp=0.001) requires that a node contain at least 30 observations before a split is attempted, and that a split decrease the overall lack of fit by a factor of 0.001 (the cost-complexity factor) to be retained.<br />
<br />
===Examining Results===<br />
<br />
# These functions help with examining the results.<br />
printcp(fit) display complexity parameter (cp) table<br />
plotcp(fit) plot cross-validation results<br />
rsq.rpart(fit) plot approximate R-squared and relative error for different splits (2 plots). Labels are only appropriate for the "anova" method.<br />
print(fit) print results<br />
summary(fit) detailed results including surrogate splits<br />
plot(fit) plot decision tree<br />
text(fit) label the decision tree plot<br />
post(fit, file=) create postscript plot of decision tree<br />
# In trees created by rpart(), move to the LEFT branch when the stated condition is true.<br />
<br />
===Pruning Trees===<br />
<br />
# In general, trees should be pruned back to avoid overfitting the data. Select a tree size that minimizes the cross-validated error – the xerror column printed by printcp(). Pruning the tree is accomplished by:<br />
prune(fit, cp= )<br />
# use printcp( ) to examine the cross-validation error results, select the complexity parameter (CP) associated with the minimum error, and insert that CP into the prune() function. This (automatically selecting the complexity parameter associated with the smallest cross-validated error) can be done succinctly by:<br />
fit$\$$cptable[which.min(fit$\$$cptable[,"xerror"]),"CP"]<br />
<br />
===Complete Dataset for N-of-1 Example===<br />
[[SMHS_MethodsHeterogeneity_CER_Nof1|This N-of-1 Dataset]] includes an example.<br />
<br />
===Footnotes===<br />
<br />
*<sup>13</sup> Based on 2009 NPC report, http://www.npcnow.org/publication/demystifying-comparative-effectiveness-research-case-study-learning-guide <br />
*<sup>14</sup> http://www.cancer.gov/cancertopics/druginfo/fda-cetuximab<br />
*<sup>15</sup> http://www.nejm.org/doi/full/10.1056/NEJMoa072761<br />
<br />
===[[SMHS_MethodsHeterogeneity|Back to the Heterogeneity of Treatment Effects, Case-Studies of Comparative Effectiveness Research section]]===<br />
<br />
<hr><br />
* SOCR Home page: http://www.socr.umich.edu<br />
<br />
{{translate|pageName=http://wiki.socr.umich.edu/index.php/SMHS_MethodsHeterogeneity_CER}}</div>
Pineaumi
https://wiki.socr.umich.edu/index.php?title=SMHS_MethodsHeterogeneity_CER&diff=16228
SMHS MethodsHeterogeneity CER
2016-05-23T19:02:19Z
<p>Pineaumi: /* Footnotes */</p>
<hr />
<div>==[[SMHS_MethodsHeterogeneity| Methods for Studying Heterogeneity of Treatment Effects, Case-Studies of Comparative Effectiveness Research]] - Comparative Effectiveness Research (CER): Case Studies<sup>13</sup> ==<br />
<br />
===Observational Studies: Tips for the CER Practitioners===<br />
<br />
*Different study types can offer different understandings; neither should be discounted without closer examination.<br />
<br />
*RCTs provide an accurate understanding of the effect of a particular intervention in a well-defined patient group under “controlled” circumstances.<br />
<br />
*Observational studies provide an understanding of real-world care and its impact, but can be biased due to uncontrolled factors.<br />
<br />
*Observational studies differ in the types of databases used. These databases may lack clinical detail and contain incomplete or inaccurate data.<br />
<br />
*Before accepting the findings from an observational study, consider whether confounding factors may have influenced the results.<br />
<br />
*In this scenario, subgroup analysis was vital in clarifying both study designs; what is true for the many (e.g., overall, estrogen appeared to be detrimental) may not be true for the few (e.g., that for the younger post-menopausal woman, the benefits were greater and the harms less frequent).<br />
<br />
*Carefully examine the generalizability of the study. Do the study’s patients and intervention match those under consideration?<br />
<br />
*Observational studies can identify associations but cannot prove cause-and-effect relationships.<br />
<br />
===Case-Study 1: The Cetuximab Study<sup>14</sup>===<br />
<br />
<b>What was done and what was found?</b><br />
<br />
Cetuximab, an anti-epidermal growth factor receptor (EGFR) agent, has recently been added to the therapeutic armamentarium. Two important RCTs examined its impact in patients with metastatic colorectal cancer (mCRC). In the first, 56 centers in 11 European countries investigated the outcomes associated with cetuximab therapy in 329 mCRC patients who experienced disease progression either on irinotecan therapy or within 3 months thereafter. The study reported that the group on a combination of irinotecan and cetuximab had a significantly higher rate of overall response to treatment (primary endpoint) than the group on cetuximab alone: 22.9% (95% CI, 17.5-29.1%) vs. 10.8% (95% CI, 5.7-18.1%) (P=0.007), respectively. Similarly, the median time to progression was significantly longer in the combination therapy group (4.1 vs. 1.5 months, P<0.001). As these patients had already progressed on irinotecan prior to the study, any response was viewed as positive. Safety between the two treatment arms was similar: approximately 80% of patients in each arm experienced a rash. Grade 3 or 4 (the more severe) toxic effects on the skin were slightly more frequent in the combination-therapy group compared to cetuximab monotherapy, observed in 9.4% and 5.2% of participants, respectively. Other side effects, such as diarrhea and neutropenia observed in the combination-therapy arm, were considered to be in the range expected for irinotecan alone. Data from this study demonstrated the efficacy and safety of cetuximab and were instrumental in the FDA’s 2004 approval.<br />
<br />
A second RCT (2007) examined 572 patients and suggested efficacy of cetuximab in the treatment of mCRC. This study was a randomized, non-blinded, controlled trial that examined cetuximab monotherapy plus best supportive care compared to best supportive care alone in patients who had received and failed prior chemotherapy regimens. It reported that median overall survival (the primary endpoint) was significantly higher in patients receiving cetuximab plus best supportive care compared to best supportive care alone (6.1 vs. 4.6 months, respectively) (hazard ratio for death=0.77; 95% CI: 0.64-0.92, P=0.005). This RCT described a greater incidence of adverse events in the cetuximab plus best supportive care group compared to best supportive care alone including (most significantly) rash, as well as edema, fatigue, nausea and vomiting.<br />
<br />
<b>Was this the right answer?</b><br />
<br />
These RCTs had fairly broad enrollment criteria and the cetuximab benefits were modest. Emerging scientific theories raised the possibility that genetically defined population subsets might experience a greater-than-average treatment benefit. One such area of inquiry entailed examining “biomarkers,” or genetic indicators of a patient’s greater response to therapy. Even as the above RCTs were being conducted, data emerged showing the importance of the KRAS gene.<br />
<br />
<b>Emerging Data</b><br />
<br />
Based on the emerging biochemical evidence that the epidermal growth factor receptor (EGFR) treatment mechanism (Cetuximab) was even more finely detailed than previously understood, the study authors of the 2007 RCT undertook a retrospective subgroup analysis using tumor tissue samples preserved from their initial study. Following laboratory analysis, all viable tissue samples were classified as having a wild-type (non-mutated) or a mutated KRAS gene. Instead of the previous two study arms (cetuximab plus best supportive care vs. best supportive care alone), there were 4 for this new analysis: each of the two original study arms was further divided by wild-type vs. mutated KRAS status. Laboratory evaluation determined that a KRAS mutation was present in 40.9% of patients in the cetuximab plus best supportive care group and 42.3% of those in the best supportive care alone group. The efficacy of cetuximab was found to be significantly correlated with KRAS status: in patients with wild-type (non-mutated) KRAS genes, cetuximab plus best supportive care compared to best supportive care alone improved overall survival (median 9.5 vs. 4.8 months, respectively; hazard ratio for death=0.55; 95% CI, 0.41-0.74, P<0.001), and progression-free survival (median 3.7 vs. 1.9 months, respectively; hazard ratio for progression or death=0.40; 95% CI, 0.30-0.54, P<0.001). Meanwhile, in patients with mutated KRAS tumors, the authors found no significant difference in outcome between cetuximab plus best supportive care vs. best supportive care alone.<br />
<br />
<b>What next?</b><br />
<br />
Based on these and similar results from other studies, the FDA narrowed its product labeling in July 2009 to indicate that cetuximab is not recommended for mCRC patients with mutated KRAS tumors. This distinction reduces the relevant population by approximately 40%. Similarly, the American Society of Clinical Oncology released a provisional clinical recommendation that all mCRC patients have their tumors tested for KRAS status before receiving anti-EGFR therapy. The benefits of targeted treatment are many. Patients who previously underwent cetuximab therapy without knowing their genetic predisposition would no longer have to be exposed to the drug’s toxic effects if unnecessary, as the efficacy of cetuximab is markedly higher in the genetically defined appropriate patients. In a less-uncertain environment, clinicians can be more confident in advocating a course of action in their care of patients. And finally, knowledge that targeted therapy is possible suggests the potential for further innovation in treatment options. In fact, research continues to demonstrate options for targeted cetuximab treatment of mCRC at an even finer scale than seen with KRAS; and similar genetic targeting is being investigated, and advocated, in other cancer types.<br />
<br />
<b>Lessons Learned From This Case Study</b><br />
<br />
Although RCTs are generally viewed as the gold standard, results of one or even a series of trials may not accurately reflect the benefits experienced by an individual patient. This case-study suggests that cetuximab initially appeared to have rather modest clinical benefits. However, new information that became available and subsequent genetic subgroup assessments led to very different conclusions. Clinicians should be aware that the current knowledge is likely to evolve and any decisions about patient care should be carefully considered with that sense of uncertainty in mind. As in this case study, subgroup analyses (e.g., genetic subtypes) need a theoretical rationale. Ideally, the analyses should be determined at the time of original RCT design and should not just occur as explorations of the subsequent data. When improperly employed, post hoc analyses may lead to incorrect patient care conclusions.<br />
<br />
<b>RCTs Tips for the CER Practitioners</b><br />
<br />
*RCTs can determine whether an intervention can provide benefit in a very controlled environment.<br />
<br />
*The controlled nature of an RCT may limit its generalizability to a broader population.<br />
<br />
*No results are permanent; advances in scientific knowledge and understanding can influence how we view the effectiveness (or safety) of a therapeutic intervention.<br />
<br />
*Targeted therapy illuminated by carefully thought out subgroup analyses can improve the efficacious and safe use of an intervention.<br />
<br />
===Case-Study 2: The Rosiglitazone Study<sup>15</sup>===<br />
<br />
<b>Meta-analysis</b><br />
<br />
Often the results for the same intervention differ across clinical trials and it may not be clear whether one therapy provides more benefit than another. As CER increases and more studies are conducted, clinicians and policymakers are more likely to encounter this scenario. In a systematic review, a researcher identifies similar studies and displays their results in a table, enabling qualitative comparisons across the studies. With a meta-analysis, the data from included studies are statistically combined into a single “result.” Merging the data from a number of studies increases the effective sample size of the investigation, providing a statistically stronger conclusion about the body of research. By so doing, investigators may detect low frequency events and demonstrate more subtle distinctions between therapeutic alternatives.<br />
<br />
When studies have been properly identified and combined, the meta-analysis produces a summary estimate of the findings and a confidence interval that can serve as a benchmark in medical opinion and practice. However, when done incorrectly, the quantitative and statistical analysis can create impressive “numbers” but biased results. The following are important criteria for properly conducted meta-analyses:<br />
<br />
1. Carefully defining unbiased inclusion or exclusion criteria for study selection<br />
<br />
2. Including only those studies that have similar design elements, such as patient population, drug regimen, outcomes being assessed, and time-frame<br />
<br />
3. Applying correct statistical methods to combine and analyze the data<br />
<br />
Reporting this information is essential for the reader to determine whether the data were suitable to combine, and if the meta-analysis draws unbiased conclusions. Meta-analyses of randomized clinical trials are considered to be the highest level of medical evidence as they are based upon a synthesis of rigorously controlled trials that systematically reduce bias and confounding. This technique is useful in summarizing available evidence and will likely become more common in the era of publicly funded comparative effectiveness research. The following case study will examine several key principles that will be useful as the reader encounters these publications.<br />
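As a concrete illustration of how a fixed-effect meta-analysis statistically combines study results into a single summary estimate and confidence interval, the sketch below pools log odds ratios by inverse-variance weighting. This is a generic textbook method, not the procedure of any study discussed here, and all 2x2 counts are invented for demonstration.<br />

```python
import math

# Illustrative 2x2 tables: (events_trt, n_trt, events_ctl, n_ctl) per study.
# These counts are made up for demonstration only.
studies = [(12, 200, 8, 200), (30, 500, 22, 510), (5, 150, 9, 145)]

weights, weighted_logs = [], []
for a, n1, c, n2 in studies:
    b, d = n1 - a, n2 - c                 # non-events in each arm
    log_or = math.log((a * d) / (b * c))  # log odds ratio for this study
    var = 1/a + 1/b + 1/c + 1/d           # Woolf variance of the log OR
    w = 1 / var                           # inverse-variance weight
    weights.append(w)
    weighted_logs.append(w * log_or)

pooled_log = sum(weighted_logs) / sum(weights)
se = math.sqrt(1 / sum(weights))
pooled_or = math.exp(pooled_log)
ci = (math.exp(pooled_log - 1.96 * se), math.exp(pooled_log + 1.96 * se))
print(f"Pooled OR = {pooled_or:.2f}, 95% CI = ({ci[0]:.2f}, {ci[1]:.2f})")
```

Larger studies receive larger weights (smaller variance), which is one reason the choice of included studies and of weighting scheme can materially shift the pooled estimate.<br />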
<br />
<b>Clinical Application</b><br />
<br />
Heart disease is the leading cause of mortality in the United States, resulting in approximately 20% of all deaths. Diabetics are particularly susceptible to heart disease, with more than 65% of deaths attributable to it. The nonfatal complications of diabetes are wide-ranging and include kidney failure, nerve damage, amputation, stroke and blindness, among other outcomes. In 2007, the total estimated cost of diabetes in the United States was $174B; $116B was derived from direct medical expenditures and the rest from the indirect cost of lost productivity due to the disease. With such serious health effects and heavy direct and indirect costs tied to diabetes, proper disease management is critical. Historically, diabetes treatment has focused on strict blood sugar control, assuming that this goal not only targets diabetes but also reduces other serious comorbidities of the disease.<br />
<br />
Anti-diabetic agents have long been associated with key questions as to their benefits/risks in the treatment of diabetes. The sulfonylurea tolbutamide, a first generation anti-diabetic drug, was found in a landmark study in the 1970s to significantly increase the CV mortality rate compared to patients not on this agent. Further analysis by external parties concluded that the methods employed in this trial were significantly flawed (e.g., use of an “arbitrary” definition of diabetes status, heterogeneous baseline characteristics of the populations studied, and incorrect statistical methods). Since these early studies, CV concerns continue to be an issue with selected oral hypoglycemic agents that have subsequently entered the marketplace.<br />
<br />
A class of drugs, thiazolidinedione (TZD), was approved in the late 1990s, as a solution to the problems associated with the older generation of sulfonylureas. Rosiglitazone, a member of the TZD class, was approved by the FDA in 1999 and was widely prescribed for the treatment of type-2 diabetes. A number of RCTs supported the benefit of rosiglitazone as an important new oral antidiabetic agent. However, safety concerns developed as the FDA received reports of adverse cardiac events potentially associated with rosiglitazone. It was in this setting that a meta-analysis by Nissen and Wolski was published in the New England Journal of Medicine in June 2007.<br />
<br />
<b>What was done?</b><br />
<br />
Nissen and Wolski conducted a meta-analysis examining the impact of rosiglitazone on cardiac events and mortality compared to alternative therapeutic approaches. The study began with a broad search to locate potential studies for review. The authors screened published phase II, III, and IV trials; the FDA website; and the drug manufacturer’s clinical-trial registry for applicable data relating to rosiglitazone use. When the initial search was complete, the studies were further categorized by pre-stated inclusion criteria. Meta-analysis inclusion criteria were simple: studies had to include rosiglitazone and a randomized comparator group treated with either another drug or placebo, study arms had to show similar length of treatment, and all groups had to have received more than 24 weeks of exposure to the study drugs. The studies had to contain outcome data of interest including the rate of myocardial infarction (MI) or death from all CV causes. Out of 116 studies surveyed by the authors, 42 met their inclusion criteria and were included in the meta-analysis. Of the studies they included, 23 had durations of 26 weeks or less, and only five studies followed patients for more than a year. Until this point, the study’s authors were following a path similar to that of any reviewer interested in CV outcomes, examining the results of these 42 studies and comparing them qualitatively. Quantitatively combining the data, however, required the authors to make choices about the studies they could merge and the statistical methods they should apply for analysis. Those decisions greatly influenced the results that were reported.<br />
<br />
<b>What was found?</b><br />
<br />
When the studies were combined, the meta-analysis contained data from 15,565 patients in the rosiglitazone group and 12,282 patients as comparators. Analyzing their data, the authors chose one particular statistical method (the Peto odds ratio method, a fixed-effect statistical approach), which calculates the odds of events occurring where the outcomes of interest are rare and small in number. In comparing rosiglitazone with a “control” group that included other drugs or placebo, the authors reported odds ratios of 1.43 (95% CI, 1.03-1.98; P=0.03) and 1.64 (95% CI,<br />
0.98-2.74; P=0.06) for MI and death from CV causes, respectively. In other words, the odds of an MI or death from a CV cause are higher for rosiglitazone patients than for patients on other therapies or placebo. The authors reported that rosiglitazone was significantly associated with an increase in the risk of MI and had borderline significance in increasing the risk of death from all CV causes. These findings appeared online on the same day that the FDA issued a safety alert regarding rosiglitazone. Discussion of the meta-analysis was immediately featured prominently in the news media. By December 2007, prescription claims for the drug at retail pharmacies had fallen by more than 50%.<br />
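The Peto method the authors chose can be sketched as a one-step fixed-effect estimator intended for rare events: each study contributes its observed-minus-expected event count and a hypergeometric variance. The 2x2 counts below are invented for illustration only, not data from the rosiglitazone trials.<br />

```python
import math

# Peto (one-step) fixed-effect odds ratio for sparse events.
# Each tuple: (events_trt, n_trt, events_ctl, n_ctl) -- illustrative counts only.
studies = [(4, 300, 1, 300), (2, 250, 2, 260), (6, 400, 3, 390)]

sum_o_minus_e, sum_v = 0.0, 0.0
for a, n1, c, n2 in studies:
    n = n1 + n2
    m = a + c                                      # total events in the study
    e = n1 * m / n                                 # expected treatment-arm events under H0
    v = n1 * n2 * m * (n - m) / (n**2 * (n - 1))   # hypergeometric variance
    sum_o_minus_e += a - e
    sum_v += v

peto_log_or = sum_o_minus_e / sum_v
peto_or = math.exp(peto_log_or)
se = 1 / math.sqrt(sum_v)
ci = (math.exp(peto_log_or - 1.96 * se), math.exp(peto_log_or + 1.96 * se))
print(f"Peto OR = {peto_or:.2f}, 95% CI = ({ci[0]:.2f}, {ci[1]:.2f})")
```

Note that a trial with zero events in both arms contributes nothing to either sum under this estimator (m = 0 makes both a - e and v zero), and the method is generally considered reliable only when events are rare and arms are of similar size.<br />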
<br />
As diabetic patients and their clinicians reacted to the news, a methodologic debate also ensued. This discussion included statistical issues pertaining to the conduct of the analysis, its implications for clinical care, and finally the FDA and drug manufacturer’s roles in overseeing and regulating rosiglitazone. The concern among patients with diabetes regarding treatment continues in the medical community today.<br />
<br />
<b>Was this the right answer?</b><br />
<br />
Should the studies have been combined? Commentators faulted the authors for including several studies that were not originally intended to investigate diabetes, and for combining both placebo and drug therapy data into one comparator arm. Some critics noted that despite the stated inclusion criteria, some data were derived from studies where the rosiglitazone arm was allowed a longer follow-up than the comparator arm. By failing to account for this longer follow-up period, commentators felt that the authors may have overestimated the effect of rosiglitazone on CV outcomes. Many reviewers were concerned that this meta-analysis excluded trials in which no patients suffered an MI or died from CV causes – the outcomes of greatest interest. Some reviewers also noted that the exclusion of zero-event trials from the pooled dataset not only gave an incomplete picture of the impact of rosiglitazone but could have increased the odds ratio estimate. In general, the pooled dataset was criticized by many for being a faulty microcosm of the information available regarding rosiglitazone.<br />
<br />
It is essential that a meta-analysis be based on similarity in the data sources. If studies differ in important areas such as the patient populations, interventions, or outcomes, combining their data may not be suitable. The researchers accepted studies and populations that were clinically heterogeneous, yet pooled them as if they were not. The study reported that the results were combined from a number of trials that were not initially intended to investigate CV outcomes. Furthermore, the available data did not allow for time-to-event analysis, an essential tool in comparing the impact of alternative treatment options. Reviewers considered the data to be insufficiently homogeneous, and the line of cause and effect to be murkier than the authors described.<br />
<br />
<b>Were the statistical methods optimal?</b><br />
<br />
The statistical methods for this meta-analysis also came under significant criticism. The critiques focused on the authors’ use of the Peto method as an incorrect choice because data were pooled from both small and very large studies, resulting in a potential overestimation of treatment effect. Other reviewers pointed out that the Peto method should not have been used, as a number of the underlying studies did not have patients assigned equally to rosiglitazone and comparator groups. Finally, critics suggested that the heterogeneity of the included studies required an altogether different set of analytic techniques.<br />
<br />
Demonstrating the sensitivity of the authors’ initial analysis to the inclusion criteria and statistical tests used, a number of researchers reworked the data from this study. One researcher used the same studies but analyzed the data with a more commonly used statistical method (Mantel-Haenszel), and found no significant increase in the relative risk or common odds ratio with MI or CV death. When the pool of studies was expanded to include those originally eliminated because they had zero CV events, the odds ratios for MI and death from CV causes dropped from 1.43 to 1.26 (95% CI, 0.93-1.72) and from 1.64 to 1.14 (95% CI, 0.74-1.74), respectively. Neither of the recalculated odds ratios was significant for MI or CV death. Finally, several newer long-term studies have been published since the Nissen meta-analysis. Incorporating their results with the meta-analysis data showed that rosiglitazone is associated with an increased risk of MI but not of CV death. Thus, the findings from these meta-analyses varied with the methods employed, the studies included, and the addition of later trials.<br />
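The Mantel-Haenszel alternative used in the reanalyses can be sketched as a stratified pooled odds ratio; again, the 2x2 counts below are invented for illustration and are not taken from the rosiglitazone trials.<br />

```python
# Mantel-Haenszel fixed-effect pooled odds ratio across studies (strata).
# Each tuple: (events_trt, n_trt, events_ctl, n_ctl) -- illustrative counts only.
studies = [(4, 300, 1, 300), (2, 250, 2, 260), (6, 400, 3, 390)]

num, den = 0.0, 0.0
for a, n1, c, n2 in studies:
    b, d = n1 - a, n2 - c   # non-events in treatment and control arms
    n = n1 + n2
    num += a * d / n        # each study weighted by its own size
    den += b * c / n

or_mh = num / den
print(f"Mantel-Haenszel OR = {or_mh:.2f}")
```

Because each stratum is weighted by its own sample size, this estimator behaves differently from the Peto approach when study sizes or arm allocations are unbalanced, which is the crux of the methodologic critique described above.<br />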
<br />
<b>Emerging Data</b><br />
<br />
The controversy surrounding the rosiglitazone meta-analysis authored by Nissen and Wolski forced an unplanned interim analysis of a long-term, randomized trial investigating the CV effects of rosiglitazone among patients with type 2 diabetes. The authors of the RECORD trial noted that even though the follow-up at 3.75 years was shorter than expected, rosiglitazone, when added to standard glucose-lowering therapy, was found to be associated with an increase in the risk of heart failure but was not associated with any increase in death from CV or other causes. Data at the time were found to be insufficient to determine the effect of rosiglitazone on an increase in the risk of MI. The final report of that trial, published in June 2009, confirmed the elevated risk of heart failure in people with type 2 diabetes treated with rosiglitazone in addition to glucose-lowering drugs, but continued to show inconclusive results about the effect of the drug therapy on the risk of MI. Further, the RECORD trial clarified that rosiglitazone does not result in an increased risk of CV morbidity or mortality compared to standard glucose-lowering drugs. Other trials conducted since the publishing of the meta-analysis have corroborated these results, casting further doubt on the findings of the meta-analysis published by Nissen and Wolski.<br />
<br />
<b>Now what?</b><br />
<br />
Some sources suggest that the original Nissen meta-analysis delivered more harm than benefit, and that a well-recognized medical journal may have erred in its process of peer review. Despite this criticism, it is important to note that subsequent publications support the risk of adverse CV events associated with rosiglitazone, although rosiglitazone use does not appear to increase deaths. These results and emerging data point to the need for further rigorous research to clarify the benefits and risks of rosiglitazone on a variety of outcomes, and the importance of directing the drug to the population that will maximally benefit from its use.<br />
<br />
<b>Lessons Learned From this Case Study</b><br />
<br />
Results from initial randomized trials that seem definitive at one time may not be conclusive, as further trials may emerge to clarify, redirect, or negate previously accepted results. A meta-analysis of those trials can lead to varying results based upon the timing of the analysis and the choices made in its performance.<br />
<br />
<b>Meta-Analysis: Tips for CER Practitioners</b><br />
<br />
*The results of a meta-analysis are highly dependent on the studies included (and excluded). Are these criteria properly defined and relevant to the purposes of the meta-analysis? Were the combined studies sufficiently similar? Can results from this cohort be generalized to other populations of interest?<br />
<br />
*The statistical methodology can impact study results. Have there been reviews critiquing the methods used in the meta-analysis?<br />
<br />
*A variety of statistical tests should be considered, and perhaps reported, in the analysis of results. Do the authors mention their rationale in choosing a statistical method? Do they show the stability of their results across a spectrum of analytical methods?<br />
<br />
*Nothing is permanent. Emerging data may change the playing field, and meta-analysis results are only as good as the data and statistics from which they are derived.<br />
<br />
===Case-Study 3: The Nurses’ Health Study===<br />
<br />
<b>An observational study</b><br />
<br />
An observational study is a very common type of research design in which the effects of a treatment or condition are studied without formally randomizing patients in an experimental design. Such studies can be done prospectively, wherein data are collected about a group of patients going forward in time; or retrospectively, in which the researcher looks into the past, mining existing databases for data that have already been collected. The latter studies are frequently performed by using an electronic database that contains, for example, administrative, “billing,” or claims data. Less commonly, observational research uses electronic health records, which have greater clinical information that more closely resembles the data collected in an RCT. Observational studies often take place in “real-world” environments, which allow researchers to collect data for a wide array of outcomes. Patients are not randomized in these studies, but the findings can be used to generate hypotheses for investigation in a more constrained experimental setting. Perhaps the best known observational study is the “Framingham study,” which collected demographic and health data for a group of individuals over many years (and continues to do so) and has provided an understanding of the key risk factors for heart disease and stroke.<br />
<br />
Observational studies present many advantages to the comparative effectiveness researcher. The study design can provide a unique glimpse of the use of a health care intervention in the “real world,” an essential step in gauging the gap between efficacy (can a treatment work in a controlled setting?) and effectiveness (does the treatment work in a real-life situation?). Furthermore, observational studies can be conducted at low cost, particularly if they involve the secondary analysis of existing data sources. CER often uses administrative databases, which are based upon the billing data submitted by providers during routine care. These databases typically have limited clinical information, may contain errors, and generally do not undergo auditing.<br />
<br />
The uncontrolled nature of observational studies leaves them subject to bias and confounding. For example, doctors may prescribe a new medication only for the sickest patients. Comparing these outcomes (without careful statistical adjustment) with those from less ill patients receiving alternative treatment may lead to misleading results. Observational studies can identify important associations but cannot prove cause and effect. These studies can generate hypotheses that may require RCTs for fuller demonstration of those relationships. Secondary analysis can also be problematic if researchers overwork datasets by doing multiple exploratory analyses (e.g., data-dredging): the more we look, the more we find, even if those findings are merely statistical aberrations. Unfortunately, the growing need for CER and the wide availability of administrative databases may lead to a proliferation of poor-quality research with inaccurate findings.<br />
<br />
In comparative effectiveness research, observational studies are typically considered to be less conclusive than RCTs and meta-analyses. Nonetheless, they can be useful, especially because they examine typical care. Due to lower cost and improvements in health information, observational studies will become increasingly common. Critical assessment of whether the described results are helpful or biased (based upon how the study was performed) is necessary. This case will illustrate several characteristics of the types of studies that will assist in evaluating newly published work.<br />
<br />
<b>Clinical Applications</b><br />
<br />
Cardiovascular diseases (CVD) are the leading cause of death in women older than the age of 50. Epidemiologic evidence suggests that estrogen is a key mediator in the development of CVD. Estrogen is an ovarian hormone whose production decreases as women approach menopause. The steep increase in CVD in women at menopause and older and in women who have had hysterectomies further supports a relationship between estrogen and CVD. Building on this evidence of biologic plausibility, epidemiological and observational studies suggested that estrogen replacement therapy (a form of <b>hormone replacement therapy</b>, or HRT) had positive effects on the risk of CVD in postmenopausal women (albeit with some negative effects in its potential to increase the risk for breast cancer and stroke). Based on these findings, in the 1980s and 1990s HRT was routinely employed to treat menopausal symptoms and serve as prophylaxis against CVD.<br />
<br />
<b>What was done?</b><br />
<br />
The Nurses’ Health Study (NHS) began collecting data in 1976. In the study, researchers intended to examine a broad range of health effects in women over a long period of time, and a key goal was to clarify the role of HRT in heart disease. The cohort (i.e., the group being followed) included married registered nurses aged 30-55 in 1976 who lived in the 11 most populous states. To collect data, the researchers mailed the study participants a survey every 2 years that asked questions about topics such as smoking, hormone use, menopausal status, and less frequently, diet. Data were collected for key end points that included MI, coronary-artery bypass grafting or angioplasty, stroke, total CVD mortality, and deaths from all causes.<br />
<br />
<b>What was found?</b><br />
<br />
At a 10-year follow-up point, the NHS had a study pool of 48,470 women. The researchers found that estrogen use (alone, without progestin) in postmenopausal women was associated with a reduction in the incidence of CVD as well as in CVD mortality compared to non-users. Later, estrogen-progestin combination therapy was shown to be even more cardioprotective than estrogen monotherapy, and lower doses of estrogen replacement therapy were found to deliver equal cardioprotection and lower the risk for adverse events. NHS researchers were alert to the potential for bias in observational studies. Adjustment for risk factors such as age (a typical practice to eliminate confounding) did not change the reported findings.<br />
<br />
<b>Was this the right answer?</b><br />
<br />
The NHS was not unique in reporting the benefits associated with HRT; other observational studies corroborated the NHS findings. A secondary retrospective data analysis of the UK primary care electronic medical record database, for example, also showed the protective effect associated with HRT use. Researchers were aware of the fundamental limitations of observational studies, particularly with regard to selection bias. They and practicing clinicians were also aware of the potential negative health effects of HRT, which had to be constantly weighed against the potential cardioprotective benefits in deciding a patient’s course of treatment. As a large section of the population could experience the health effects of HRT, researchers began planning RCTs to verify the promising observational study results. It was highly anticipated that those RCTs would corroborate the belief that estrogen replacement can reduce CVD risk.<br />
<br />
<b>Randomized Controlled Trial: The Women’s Health Initiative</b><br />
<br />
The Women’s Health Initiative (WHI) was a major study established by the National Institutes of Health in 1992 to assess a broad range of health effects in postmenopausal women. The trial was intended to follow these women for 8 years, at a cost of millions of dollars in federal funding. Among its many facets, it included an RCT to confirm the results from the observational studies discussed above. To fully investigate earlier findings, the WHI had two subgroups. One subgroup consisted of women with prior hysterectomies; they received estrogen monotherapy. The second group consisted of women who had not undergone hysterectomy; they received estrogen in combination with progestin. The WHI enrolled 27,347 women in their HRT investigation: 10,739 in the estrogen-alone arm and 16,608 in the estrogen plus progestin arm. Within each arm, women were randomly assigned to receive either HRT or placebo. All women in the trial were postmenopausal and aged 50-79 years; the mean age was 63.6 years (a fact that would be important in later analysis). Some participants had experienced previous CV events. The primary outcome of both subgroups was coronary heart disease (CHD), as described by nonfatal MI or death due to CHD.<br />
<br />
The estrogen-progestin arm of the WHI was halted after a mean follow-up of 5.2 years, 3 years earlier than expected, as the HRT users in this arm were found to be at increased risk for CHD compared to those who received placebo. The study also noted elevated rates of breast cancer and stroke, among other poor outcomes. The estrogen-alone arm continued for an average follow-up of 6.8 years before being similarly discontinued ahead of schedule. Although this part of the study did not find an increased risk of CHD, it also did not find any cardioprotective effect. Beyond failing to locate any clear CV benefits, the WHI also found real evidence of harm, including increased risk of blood clots, breast cancer and stroke. Initial WHI publications therefore recommended against HRT being prescribed for the secondary prevention of CVD.<br />
<br />
<b>What Next?</b><br />
<br />
Scientists, and the clinicians who relied on their data for guidance in treating patients, were faced with conflicting data: epidemiological and observational studies suggested that HRT was cardioprotective while the higher-quality evidence from RCTs strongly suggested the opposite. Clinicians primarily followed the WHI results, so prescriptions for HRT in postmenopausal women quickly declined. Meanwhile, researchers began to analyze the studies for potential discrepancies, and found that the women being followed in the NHS and the WHI differed in several important characteristics.<br />
<br />
First, the WHI population was older than the NHS cohort, and many had entered menopause at least 10 years before they enrolled in the RCT. Thus, the WHI enrollees experienced a long duration from the onset of menopause to the commencement of HRT. At the same time, many in the NHS population were closer to the onset of menopause and were still displaying hormonal symptoms when they began HRT. Second, although the NHS researchers adjusted the data for various confounding effects, their results could still have been subject to bias. In general, the NHS cohort was more highly educated and of a higher socioeconomic status than the WHI participants, and therefore more likely to see a physician regularly. The NHS women were also leaner and generally healthier than their RCT counterparts, and had been selected for their evident lack of pre-existing CV conditions. This selection bias in the NHS enrollment may have led to a “healthy woman” effect that in turn led to an overestimation of the benefits of therapy in the observational study. Third, researchers noted that dosing differences between the two study types may have contributed to the divergent results. The NHS reported beneficial results following low-dose estrogen therapy. The WHI, meanwhile, used a higher estrogen dose, exposing women to a larger dosage of hormones and increasing their risk for adverse events. The increased risk profile of the WHI women (e.g., older, more comorbidities, higher estrogen dose) could have contributed to the evidence of harm seen in the WHI results.<br />
<br />
<b>Emerging Data</b><br />
<br />
In addition to identifying the inherent differences between the two study populations, researchers began a secondary analysis of the NHS and WHI trials. NHS researchers reported that women who began HRT close to the onset of menopause had a significantly reduced risk of CHD. In the subgroups of women that were older and had a similar duration after menopause compared with the WHI women, they found no significant relationship between HRT and CHD. Also, the WHI study further stratified these results by age, and found that women who began HRT close to their onset of menopause experienced some cardioprotection, while women who were further from the onset of menopause had a slightly elevated risk for CHD.<br />
<br />
Secondary analysis of both studies was therefore necessary to show that age and a short duration from the onset of menopause are crucial to HRT success as a cardioprotective agent. Neither study type provided “truth”; or rather, both studies provided “truth” when viewed carefully (e.g., both produced valid and important results). The differences seen in the studies were rooted in the timing of HRT and the populations being studied.<br />
<br />
<b>Lessons Learned From this Case Study</b><br />
<br />
Although RCTs are given a higher evidence grade, observational studies provide important clinical insights. In this example, the study populations differed. For policymakers and clinicians, it is crucial to examine whether the CER was based upon patients similar to those being considered. Any study with a dissimilar population may provide non-relevant results. Thus, readers of CER need to carefully examine the generalizability of the findings being reported.<br />
<br />
==Appendix==<br />
<br />
General Classification and Regression Tree (CART) data analysis steps using the R package <b>rpart</b>.<br />
<br />
===Growing the Tree===<br />
<br />
# To grow a tree, use<br />
rpart(formula, data=, method=, control=), where<br />
formula is in the format outcome ~ predictor1+predictor2+...<br />
data= specifies the data frame<br />
method= "class" for a classification tree, "anova" for a regression tree<br />
control= optional parameters for controlling tree growth. For example, control=rpart.control(minsplit=30, cp=0.001) requires that the minimum number of observations in a node be 30 before attempting a split and that a split must decrease the overall lack of fit by a factor of 0.001 (cost complexity factor) before being attempted.<br />
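Putting these pieces together, a minimal sketch of growing a classification tree (using the <b>kyphosis</b> data frame that ships with <b>rpart</b>; the variable name <i>fit</i> is our choice) might look like:<br />

```r
# Sketch: grow a classification tree on the kyphosis data shipped with rpart
library(rpart)
fit <- rpart(Kyphosis ~ Age + Number + Start,   # outcome ~ predictors
             data = kyphosis,
             method = "class",                   # classification tree
             control = rpart.control(minsplit = 30, cp = 0.001))
```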
<br />
===Examining Results===<br />
<br />
# These functions help with examining the results.<br />
printcp(fit) display complexity parameter (cp) table<br />
plotcp(fit) plot cross-validation results<br />
rsq.rpart(fit) plot approximate R-squared and relative error for different splits (2 plots). Labels are only appropriate for the "anova" method.<br />
print(fit) print results<br />
summary(fit) detailed results including surrogate splits<br />
plot(fit) plot decision tree<br />
text(fit) label the decision tree plot<br />
post(fit, file=) create postscript plot of decision tree<br />
# In trees created by rpart(), move to the LEFT branch when the stated condition is true.<br />
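For a fitted rpart object named <i>fit</i> (as above), a typical inspection sequence might be sketched as:<br />

```r
# Sketch: inspect a fitted rpart object (here assumed to be named fit)
printcp(fit)          # complexity parameter (cp) table
plotcp(fit)           # cross-validation results
summary(fit)          # detailed results, including surrogate splits
plot(fit); text(fit)  # draw the tree, then label its nodes
```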
<br />
===Pruning Trees===<br />
<br />
#In general, trees should be pruned back to avoid overfitting the data. The tree size should minimize the cross-validated error – the xerror column printed by printcp(). Pruning the tree is accomplished by:<br />
prune(fit, cp= )<br />
# use printcp( ) to examine the cross-validation error results, select the complexity parameter (CP) associated with minimum error, and insert it into the prune() function. Automatically selecting the complexity parameter associated with the smallest cross-validated error can be done succinctly by:<br />
fit$\$$cptable[which.min(fit$\$$cptable[,"xerror"]),"CP"]<br />
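Combining the two steps, a hedged sketch of automated pruning (assuming a fitted rpart object named <i>fit</i>; the names <i>best.cp</i> and <i>pfit</i> are our choices):<br />

```r
# Sketch: prune back to the cp with the smallest cross-validated error
best.cp <- fit$cptable[which.min(fit$cptable[, "xerror"]), "CP"]
pfit <- prune(fit, cp = best.cp)
plot(pfit); text(pfit)  # inspect the pruned tree
```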
<br />
===Complete Dataset for N-of-1 Example===<br />
[[SMHS_MethodsHeterogeneity_CER_Nof1|This N-of-1 Dataset]] includes an example.<br />
<br />
===Footnotes===<br />
<br />
*<sup>13</sup> Based on 2009 NPC report, http://www.npcnow.org/publication/demystifying-comparative-effectiveness-research-case-study-learning-guide <br />
*<sup>14</sup> http://www.cancer.gov/cancertopics/druginfo/fda-cetuximab<br />
*<sup>15</sup> http://www.nejm.org/doi/full/10.1056/NEJMoa072761<br />
<br />
===[[SMHS_MethodsHeterogeneity|Back to the Heterogeneity of Treatment Effects, Case-Studies of Comparative Effectiveness Research section]]===<br />
<br />
<hr><br />
* SOCR Home page: http://www.socr.umich.edu<br />
<br />
{{translate|pageName=http://wiki.socr.umich.edu/index.php/SMHS_MethodsHeterogeneity_CER}}</div>
Pineaumi https://wiki.socr.umich.edu/index.php?title=SMHS_MethodsHeterogeneity_CER&diff=16227 SMHS MethodsHeterogeneity CER 2016-05-23T19:01:38Z<p>Pineaumi: /* Case-Study 2: The Rosiglitazone Study */</p>
<hr />
<div>==[[SMHS_MethodsHeterogeneity| Methods for Studying Heterogeneity of Treatment Effects, Case-Studies of Comparative Effectiveness Research]] - Comparative Effectiveness Research: Case Studies <sup>13</sup> (CER) ==<br />
<br />
===Observational Studies: Tips for the CER Practitioners===<br />
<br />
*Different study types can offer different understandings; neither should be discounted without closer examination.<br />
<br />
*RCTs provide an accurate understanding of the effect of a particular intervention in a well-defined patient group under “controlled” circumstances.<br />
<br />
*Observational studies provide an understanding of real-world care and its impact, but can be biased due to uncontrolled factors.<br />
<br />
*Observational studies differ in the types of databases used. These databases may lack clinical detail and contain incomplete or inaccurate data.<br />
<br />
*Before accepting the findings from an observational study, consider whether confounding factors may have influenced the results.<br />
<br />
*In this scenario, subgroup analysis was vital in clarifying both study designs; what is true for the many (e.g., overall, estrogen appeared to be detrimental) may not be true for the few (e.g., that for the younger post-menopausal woman, the benefits were greater and the harms less frequent).<br />
<br />
*Carefully examine the generalizability of the study. Do the study’s patients and intervention match those under consideration?<br />
<br />
*Observational studies can identify associations but cannot prove cause-and-effect relationships.<br />
<br />
===Case-Study 1: The Cetuximab Study<sup>14</sup>===<br />
<br />
<b>What was done and what was found?</b><br />
<br />
Cetuximab, an anti-epidermal growth factor receptor (EGFR) agent, has recently been added to the therapeutic armamentarium. Two important RCTs examined its impact in patients with mCRC (metastatic colorectal cancer). In the first one, 56 centers in 11 European countries investigated the outcomes associated with cetuximab therapy in 329 mCRC patients who experienced disease progression either on irinotecan therapy or within 3 months thereafter. The study reported that the group on a combination of irinotecan and cetuximab had a significantly higher rate of overall response to treatment (primary endpoint) than the group on cetuximab alone: 22.9% (95% CI, 17.5-29.1%) vs. 10.8% (95% CI, 5.7-18.1%) (P=0.007), respectively. Similarly, the median time to progression was significantly longer in the combination therapy group (4.1 vs. 1.5 months, P<0.001). As these patients had already progressed on irinotecan prior to the study, any response was viewed as positive. Safety between the two treatment arms was similar: approximately 80% of patients in each arm experienced a rash. Grade 3 or 4 (the more severe) toxic effects on the skin were slightly more frequent in the combination-therapy group compared to cetuximab monotherapy, observed in 9.4% and 5.2% of participants, respectively. Other side effects, such as diarrhea and neutropenia observed in the combination-therapy arm, were considered to be in the range expected for irinotecan alone. Data from this study demonstrated the efficacy and safety of cetuximab and were instrumental in the FDA’s 2004 approval.<br />
<br />
A second RCT (2007) examined 572 patients and suggested efficacy of cetuximab in the treatment of mCRC. This study was a randomized, non-blinded, controlled trial that examined cetuximab monotherapy plus best supportive care compared to best supportive care alone in patients who had received and failed prior chemotherapy regimens. It reported that median overall survival (the primary endpoint) was significantly higher in patients receiving cetuximab plus best supportive care compared to best supportive care alone (6.1 vs. 4.6 months, respectively) (hazard ratio for death=0.77; 95% CI: 0.64-0.92, P=0.005). This RCT described a greater incidence of adverse events in the cetuximab plus best supportive care group compared to best supportive care alone including (most significantly) rash, as well as edema, fatigue, nausea and vomiting.<br />
<br />
<b>Was this the right answer?</b><br />
<br />
These RCTs had fairly broad enrollment criteria and the cetuximab benefits were modest. Emerging scientific theories raised the possibility that genetically defined population subsets might experience a greater-than-average treatment benefit. One such area of inquiry entailed examining “biomarkers,” or genetic indicators of a patient’s greater response to therapy. Even as the above RCTs were being conducted, data emerged showing the importance of the KRAS gene.<br />
<br />
<b>Emerging Data</b><br />
<br />
Based on the emerging biochemical evidence that the epidermal growth factor receptor (EGFR) treatment mechanism (Cetuximab) was even more finely detailed than previously understood, the study authors of the 2007 RCT undertook a retrospective subgroup analysis using tumor tissue samples preserved from their initial study. Following laboratory analysis, all viable tissue samples were classified as having a wild-type (non-mutated) or a mutated KRAS gene. Instead of the previous two study arms (cetuximab plus best supportive care vs. best supportive care alone), there were 4 for this new analysis: each of the two original study arms was further divided by wild-type vs. mutated KRAS status. Laboratory evaluation determined that 40.9% and 42.3% of all patients in the RCT had a KRAS mutation in the cetuximab plus best supportive care group compared to the best supportive care group alone, respectively. The efficacy of cetuximab was found to be significantly correlated with KRAS status: in patients with wild-type (non-mutated) KRAS genes, cetuximab plus best supportive care compared to best supportive care alone improved overall survival (median 9.5 vs. 4.8 months, respectively; hazard ratio for death=0.55; 95% CI, 0.41-0.74, P<0.001), and progression-free survival (median 3.7 vs. 1.9 months, respectively; hazard ratio for progression or death=0.40; 95% CI, 0.30-0.54, P<0.001). Meanwhile, in patients with mutated KRAS tumors, the authors found no significant difference in outcome between cetuximab plus best supportive care vs. best supportive care alone.<br />
<br />
<b>What next?</b><br />
<br />
Based on these and similar results from other studies, the FDA narrowed its product labeling in July 2009 to indicate that cetuximab is not recommended for mCRC patients with mutated KRAS tumors. This distinction reduces the relevant population by approximately 40%. Similarly, the American Society of Clinical Oncology released a provisional clinical recommendation that all mCRC patients have their tumors tested for KRAS status before receiving anti-EGFR therapy. The benefits of targeted treatment are many. Patients who previously underwent cetuximab therapy without knowing their genetic predisposition would no longer have to be exposed to the drug’s toxic effects if unnecessary, as the efficacy of cetuximab is markedly higher in the genetically defined appropriate patients. In a less-uncertain environment, clinicians can be more confident in advocating a course of action in their care of patients. And finally, knowledge that targeted therapy is possible suggests the potential for further innovation in treatment options. In fact, research continues to demonstrate options for targeted cetuximab treatment of mCRC at an even finer scale than seen with KRAS; and similar genetic targeting is being investigated, and advocated, in other cancer types.<br />
<br />
<b>Lessons Learned From this Case Study</b><br />
<br />
Although RCTs are generally viewed as the gold standard, results of one or even a series of trials may not accurately reflect the benefits experienced by an individual patient. This case-study suggests that cetuximab initially appeared to have rather modest clinical benefits; however, new information that became available and subsequent genetic subgroup assessments led to very different conclusions. Clinicians should be aware that the current knowledge is likely to evolve and any decisions about patient care should be carefully considered with that sense of uncertainty in mind. As in this case study, subgroup analyses (e.g., genetic subtypes) need a theoretical rationale. Ideally, the analyses should be determined at the time of original RCT design and should not just occur as explorations of the subsequent data. When improperly employed, post hoc analyses may lead to incorrect patient care conclusions.<br />
<br />
<b>RCT Tips for the CER Practitioners</b><br />
<br />
*RCTs can determine whether an intervention can provide benefit in a very controlled environment.<br />
<br />
*The controlled nature of an RCT may limit its generalizability to a broader population.<br />
<br />
*No results are permanent; advances in scientific knowledge and understanding can influence how we view the effectiveness (or safety) of a therapeutic intervention.<br />
<br />
*Targeted therapy illuminated by carefully thought out subgroup analyses can improve the efficacious and safe use of an intervention.<br />
<br />
===Case-Study 2: The Rosiglitazone Study<sup>15</sup>===<br />
<br />
<b>Meta-analysis</b><br />
<br />
Often the results for the same intervention differ across clinical trials and it may not be clear whether one therapy provides more benefit than another. As CER increases and more studies are conducted, clinicians and policymakers are more likely to encounter this scenario. In a systematic review, a researcher identifies similar studies and displays their results in a table, enabling qualitative comparisons across the studies. With a meta-analysis, the data from included studies are statistically combined into a single “result.” Merging the data from a number of studies increases the effective sample size of the investigation, providing a statistically stronger conclusion about the body of research. By so doing, investigators may detect low frequency events and demonstrate more subtle distinctions between therapeutic alternatives.<br />
<br />
When studies have been properly identified and combined, the meta-analysis produces a summary estimate of the findings and a confidence interval that can serve as a benchmark in medical opinion and practice. However, when done incorrectly, the quantitative and statistical analysis can create impressive “numbers” but biased results. The following are important criteria for properly conducted meta-analyses:<br />
<br />
1. Carefully defining unbiased inclusion or exclusion criteria for study selection<br />
<br />
2. Including only those studies that have similar design elements, such as patient population, drug regimen, outcomes being assessed, and time-frame<br />
<br />
3. Applying correct statistical methods to combine and analyze the data<br />
<br />
Reporting this information is essential for the reader to determine whether the data were suitable to combine, and if the meta-analysis draws unbiased conclusions. Meta-analyses of randomized clinical trials are considered to be the highest level of medical evidence as they are based upon a synthesis of rigorously controlled trials that systematically reduce bias and confounding. This technique is useful in summarizing available evidence and will likely become more common in the era of publicly funded comparative effectiveness research. The following case study will examine several key principles that will be useful as the reader encounters these publications.<br />
<br />
<b>Clinical Application</b><br />
<br />
Heart disease is the leading cause of mortality in the United States, resulting in approximately 20% of all deaths. Diabetics are particularly susceptible to heart disease, with more than 65% of deaths attributable to it. The nonfatal complications of diabetes are wide-ranging and include kidney failure, nerve damage, amputation, stroke and blindness, among other outcomes. In 2007, the total estimated cost of diabetes in the United States was $174B; $116B was derived from direct medical expenditures and the rest from the indirect cost of lost productivity due to the disease. With such serious health effects and heavy direct and indirect costs tied to diabetes, proper disease management is critical. Historically, diabetes treatment has focused on strict blood sugar control, assuming that this goal not only targets diabetes but also reduces other serious comorbidities of the disease.<br />
<br />
Anti-diabetic agents have long been associated with key questions as to their benefits/risks in the treatment of diabetes. The sulfonylurea tolbutamide, a first generation anti-diabetic drug, was found in a landmark study in the 1970s to significantly increase the CV mortality rate compared to patients not on this agent. Further analysis by external parties concluded that the methods employed in this trial were significantly flawed (e.g., use of an “arbitrary” definition of diabetes status, heterogeneous baseline characteristics of the populations studied, and incorrect statistical methods). Since these early studies, CV concerns continue to be an issue with selected oral hypoglycemic agents that have subsequently entered the marketplace.<br />
<br />
A class of drugs, thiazolidinedione (TZD), was approved in the late 1990s, as a solution to the problems associated with the older generation of sulfonylureas. Rosiglitazone, a member of the TZD class, was approved by the FDA in 1999 and was widely prescribed for the treatment of type-2 diabetes. A number of RCTs supported the benefit of rosiglitazone as an important new oral antidiabetic agent. However, safety concerns developed as the FDA received reports of adverse cardiac events potentially associated with rosiglitazone. It was in this setting that a meta-analysis by Nissen and Wolski was published in the New England Journal of Medicine in June 2007.<br />
<br />
<b>What was done?</b><br />
<br />
Nissen and Wolski conducted a meta-analysis examining the impact of rosiglitazone on cardiac events and mortality compared to alternative therapeutic approaches. The study began with a broad search to locate potential studies for review. The authors screened published phase II, III, and IV trials; the FDA website; and the drug manufacturer’s clinical-trial registry for applicable data relating to rosiglitazone use. When the initial search was complete, the studies were further categorized by pre-stated inclusion criteria. Meta-analysis inclusion criteria were simple: studies had to include rosiglitazone and a randomized comparator group treated with either another drug or placebo, study arms had to show similar length of treatment, and all groups had to have received more than 24 weeks of exposure to the study drugs. The studies had to contain outcome data of interest including the rate of myocardial infarction (MI) or death from all CV causes. Out of 116 studies surveyed by the authors, 42 met their inclusion criteria and were included in the meta-analysis. Of the studies they included, 23 had durations of 26 weeks or less, and only five studies followed patients for more than a year. Until this point, the study’s authors were following a path similar to that of any reviewer interested in CV outcomes, examining the results of these 42 studies and comparing them qualitatively. Quantitatively combining the data, however, required the authors to make choices about the studies they could merge and the statistical methods they should apply for analysis. Those decisions greatly influenced the results that were reported.<br />
<br />
<b>What was found?</b><br />
<br />
When the studies were combined, the meta-analysis contained data from 15,565 patients in the rosiglitazone group and 12,282 patients in the comparator groups. To analyze their data, the authors chose one particular statistical method (the Peto odds ratio method, a fixed-effect approach), which is designed to estimate the odds of events that are rare and small in number. Comparing rosiglitazone with a “control” group that included other drugs or placebo, the authors reported odds ratios of 1.43 (95% CI, 1.03-1.98; P=0.03) and 1.64 (95% CI, 0.98-2.74; P=0.06) for MI and death from CV causes, respectively. In other words, the odds of an MI or of death from a CV cause were higher for rosiglitazone patients than for patients on other therapies or placebo. The authors reported that rosiglitazone was significantly associated with an increase in the risk of MI and had borderline significance in increasing the risk of death from all CV causes. These findings appeared online on the same day that the FDA issued a safety alert regarding rosiglitazone. Discussion of the meta-analysis was immediately featured prominently in the news media. By December 2007, prescription claims for the drug at retail pharmacies had fallen by more than 50%.<br />
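The Peto calculation can be sketched numerically. The following is a minimal illustration using made-up 2×2 trial counts, not the actual trial data: for each study, the observed number of treatment-arm events is compared with the number expected under the null hypothesis, and the differences are pooled with hypergeometric-variance weights.<br />

```python
import math

def peto_odds_ratio(studies):
    """Pool 2x2 tables with the one-step Peto method.

    Each study is (events_trt, n_trt, events_ctl, n_ctl).
    Returns exp(sum(O - E) / sum(V)).
    """
    sum_o_minus_e = 0.0
    sum_v = 0.0
    for a, n1, c, n2 in studies:
        m = a + c      # total events across both arms
        n = n1 + n2    # total patients
        if m == 0:
            continue   # a study with no events contributes nothing
        e = n1 * m / n                                 # expected treatment-arm events
        v = n1 * n2 * m * (n - m) / (n**2 * (n - 1))   # hypergeometric variance
        sum_o_minus_e += a - e
        sum_v += v
    return math.exp(sum_o_minus_e / sum_v)

# Hypothetical trials: (events on drug, drug arm size, events on comparator, comparator arm size)
trials = [(4, 200, 2, 200), (1, 100, 1, 100), (0, 150, 0, 150)]
print(round(peto_odds_ratio(trials), 3))  # → 1.658
```

Because the method pools event counts rather than per-study odds ratios, it remains defined even when one arm of a study has zero events – part of why it is favored for rare outcomes, and part of why critics questioned its use when study arms are unequally sized.<br />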
<br />
As diabetic patients and their clinicians reacted to the news, a methodologic debate also ensued. This discussion included statistical issues pertaining to the conduct of the analysis, its implications for clinical care, and finally the FDA’s and drug manufacturer’s roles in overseeing and regulating rosiglitazone. The concern among patients with diabetes regarding treatment continues in the medical community today.<br />
<br />
<b>Was this the right answer?</b><br />
<br />
Should the studies have been combined? Commentators faulted the authors for including several studies that were not originally intended to investigate diabetes, and for combining both placebo and drug therapy data into one comparator arm. Some critics noted that despite the stated inclusion criteria, some data were derived from studies where the rosiglitazone arm was allowed a longer follow-up than the comparator arm. By failing to account for this longer follow-up period, commentators felt that the authors may have overestimated the effect of rosiglitazone on CV outcomes. Many reviewers were concerned that this meta-analysis excluded trials in which no patients suffered an MI or died from CV causes – the outcomes of greatest interest. Some reviewers also noted that the exclusion of zero-event trials from the pooled dataset not only gave an incomplete picture of the impact of rosiglitazone but could have increased the odds ratio estimate. In general, the pooled dataset was criticized by many for being a faulty microcosm of the information available regarding rosiglitazone.<br />
<br />
It is essential that a meta-analysis be based on similarity in the data sources. If studies differ in important areas such as the patient populations, interventions, or outcomes, combining their data may not be suitable. The researchers accepted studies and populations that were clinically heterogeneous, yet pooled them as if they were not. The study reported that the results were combined from a number of trials that were not initially intended to investigate CV outcomes. Furthermore, the available data did not allow for time-to-event analysis, an essential tool in comparing the impact of alternative treatment options. Reviewers considered the data to be insufficiently homogeneous, and the line of cause and effect to be murkier than the authors described.<br />
<br />
<b>Were the statistical methods optimal?</b><br />
<br />
The statistical methods for this meta-analysis also came under significant criticism. The critiques focused on the authors’ use of the Peto method as an incorrect choice because data were pooled from both small and very large studies, resulting in a potential overestimation of the treatment effect. Other reviewers pointed out that the Peto method should not have been used, as a number of the underlying studies did not have patients assigned equally to rosiglitazone and comparator groups. Finally, critics suggested that the heterogeneity of the included studies required an altogether different set of analytic techniques.<br />
<br />
Demonstrating the sensitivity of the authors’ initial analysis to the inclusion criteria and statistical tests used, a number of researchers reworked the data from this study. One researcher used the same studies but analyzed the data with a more commonly used statistical method (Mantel-Haenszel) and found no significant increase in the relative risk or common odds ratio for MI or CV death. When the pool of studies was expanded to include those originally eliminated because they had zero CV events, the odds ratios for MI and death from CV causes dropped from 1.43 to 1.26 (95% CI, 0.93-1.72) and from 1.64 to 1.14 (95% CI, 0.74-1.74), respectively. Neither of the recalculated odds ratios was significant for MI or CV death. Finally, several newer long-term studies have been published since the Nissen meta-analysis. Incorporating their results with the meta-analysis data showed that rosiglitazone is associated with an increased risk of MI but not of CV death. Thus, the findings from these meta-analyses varied with the methods employed, the studies included, and the addition of later trials.<br />
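The Mantel-Haenszel recalculation can be sketched the same way (again with hypothetical counts, not the published trial data). Note that a trial with zero events in both arms adds nothing to either the numerator or the denominator of the pooled odds ratio, so incorporating such trials meaningfully requires other devices (e.g., risk differences or continuity corrections).<br />

```python
def mantel_haenszel_or(studies):
    """Pooled Mantel-Haenszel odds ratio for a list of 2x2 tables.

    Each study is (events_trt, n_trt, events_ctl, n_ctl).
    """
    num = den = 0.0
    for a, n1, c, n2 in studies:
        b, d = n1 - a, n2 - c   # non-events in each arm
        n = n1 + n2
        num += a * d / n        # weighted odds numerator
        den += b * c / n        # weighted odds denominator
    return num / den

# Hypothetical trials: (events on drug, drug arm size, events on comparator, comparator arm size)
trials = [(4, 200, 2, 200), (1, 100, 1, 100)]
print(round(mantel_haenszel_or(trials), 3))                       # → 1.678
# A trial with zero events in both arms leaves the estimate unchanged:
print(round(mantel_haenszel_or(trials + [(0, 150, 0, 150)]), 3))  # → 1.678
```

Running both this and a Peto-style calculation on the same hypothetical tables typically yields similar but not identical pooled estimates, which is the sensitivity to method choice that the reworked analyses exposed.<br />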
<br />
<b>Emerging Data</b><br />
<br />
The controversy surrounding the rosiglitazone meta-analysis authored by Nissen and Wolski forced an unplanned interim analysis of a long-term, randomized trial investigating the CV effects of rosiglitazone among patients with type 2 diabetes. The authors of the RECORD trial noted that even though the follow-up at 3.75 years was shorter than expected, rosiglitazone, when added to standard glucose-lowering therapy, was found to be associated with an increase in the risk of heart failure but was not associated with any increase in death from CV or other causes. Data at the time were found to be insufficient to determine the effect of rosiglitazone on the risk of MI. The final report of that trial, published in June 2009, confirmed the elevated risk of heart failure in people with type 2 diabetes treated with rosiglitazone in addition to glucose-lowering drugs, but continued to show inconclusive results about the effect of the drug therapy on the risk of MI. Further, the RECORD trial clarified that rosiglitazone does not result in an increased risk of CV morbidity or mortality compared to standard glucose-lowering drugs. Other trials conducted since the publication of the meta-analysis have corroborated these results, casting further doubt on the findings of the meta-analysis published by Nissen and Wolski.<br />
<br />
<b>Now what?</b><br />
<br />
Some sources suggest that the original Nissen meta-analysis delivered more harm than benefit, and that a well-recognized medical journal may have erred in its process of peer review. Despite this criticism, it is important to note that subsequent publications support the risk of adverse CV events associated with rosiglitazone, although rosiglitazone use does not appear to increase deaths. These results and emerging data point to the need for further rigorous research to clarify the benefits and risks of rosiglitazone on a variety of outcomes, and the importance of directing the drug to the population that will maximally benefit from its use.<br />
<br />
<b>Lessons Learned From this Case Study</b><br />
<br />
Results from initial randomized trials that seem definitive at one time may not be conclusive, as further trials may emerge to clarify, redirect, or negate previously accepted results. A meta-analysis of those trials can lead to varying results based upon the timing of the analysis and the choices made in its performance.<br />
<br />
<b>Meta-Analysis: Tips for CER Practitioners</b><br />
<br />
*The results of a meta-analysis are highly dependent on the studies included (and excluded). Are the inclusion criteria properly defined and relevant to the purposes of the meta-analysis? Were the combined studies sufficiently similar? Can results from this cohort be generalized to other populations of interest?<br />
<br />
*The statistical methodology can impact study results. Have there been reviews critiquing the methods used in the meta-analysis?<br />
<br />
*A variety of statistical tests should be considered, and perhaps reported, in the analysis of results. Do the authors mention their rationale in choosing a statistical method? Do they show the stability of their results across a spectrum of analytical methods?<br />
<br />
*Nothing is permanent. Emerging data may change the playing field, and meta-analysis results are only as good as the data and statistics from which they are derived.<br />
<br />
===Case-Study 3: The Nurses’ Health Study===<br />
<br />
<b>An observational study</b><br />
<br />
An observational study is a very common type of research design in which the effects of a treatment or condition are studied without formally randomizing patients in an experimental design. Such studies can be done prospectively, wherein data are collected about a group of patients going forward in time, or retrospectively, in which the researcher looks into the past, mining existing databases for data that have already been collected. The latter studies are frequently performed using an electronic database that contains, for example, administrative, “billing,” or claims data. Less commonly, observational research uses electronic health records, which have richer clinical information that more closely resembles the data collected in an RCT. Observational studies often take place in “real-world” environments, which allow researchers to collect data for a wide array of outcomes. Patients are not randomized in these studies, but the findings can be used to generate hypotheses for investigation in a more constrained experimental setting. Perhaps the best known observational study is the “Framingham study,” which collected demographic and health data for a group of individuals over many years (and continues to do so) and has provided an understanding of the key risk factors for heart disease and stroke.<br />
<br />
Observational studies present many advantages to the comparative effectiveness researcher. The study design can provide a unique glimpse of the use of a health care intervention in the “real world,” an essential step in gauging the gap between efficacy (can a treatment work in a controlled setting?) and effectiveness (does the treatment work in a real-life situation?). Furthermore, observational studies can be conducted at low cost, particularly if they involve the secondary analysis of existing data sources. CER often uses administrative databases, which are based upon the billing data submitted by providers during routine care. These databases typically have limited clinical information, may contain errors, and generally do not undergo auditing.<br />
<br />
The uncontrolled nature of observational studies leaves them subject to bias and confounding. For example, doctors may prescribe a new medication only for the sickest patients. Comparing these outcomes (without careful statistical adjustment) with those from less ill patients receiving alternative treatment may lead to misleading results. Observational studies can identify important associations but cannot prove cause and effect. These studies can generate hypotheses that may require RCTs for fuller demonstration of those relationships. Secondary analysis can also be problematic if researchers overwork datasets by doing multiple exploratory analyses (e.g., data-dredging): the more we look, the more we find, even if those findings are merely statistical aberrations. Unfortunately, the growing need for CER and the wide availability of administrative databases may lead to a proliferation of poor-quality research with inaccurate findings.<br />
<br />
In comparative effectiveness research, observational studies are typically considered to be less conclusive than RCTs and meta-analyses. Nonetheless, they can be useful, especially because they examine typical care. Due to lower cost and improvements in health information, observational studies will become increasingly common. Critical assessment of whether the described results are helpful or biased (based upon how the study was performed) is necessary. This case will illustrate several characteristics of these types of studies that will assist in evaluating newly published work.<br />
<br />
<b>Clinical Applications</b><br />
<br />
Cardiovascular diseases (CVD) are the leading cause of death in women older than the age of 50. Epidemiologic evidence suggests that estrogen is a key mediator in the development of CVD. Estrogen is an ovarian hormone whose production decreases as women approach menopause. The steep increase in CVD in women at and after menopause, and in women who have had hysterectomies, further supports a relationship between estrogen and CVD. Building on this evidence of biologic plausibility, epidemiological and observational studies suggested that estrogen replacement therapy (a form of <b>hormone replacement therapy</b>, or HRT) had positive effects on the risk of CVD in postmenopausal women (albeit with some negative effects in its potential to increase the risk for breast cancer and stroke). Based on these findings, in the 1980s and 1990s HRT was routinely employed to treat menopausal symptoms and serve as prophylaxis against CVD.<br />
<br />
<b>What was done?</b><br />
<br />
The Nurses’ Health Study (NHS) began collecting data in 1976. In the study, researchers intended to examine a broad range of health effects in women over a long period of time, and a key goal was to clarify the role of HRT in heart disease. The cohort (i.e., the group being followed) included married registered nurses aged 30-55 in 1976 who lived in the 11 most populous states. To collect data, the researchers mailed the study participants a survey every 2 years that asked questions about topics such as smoking, hormone use, menopausal status, and less frequently, diet. Data were collected for key end points that included MI, coronary-artery bypass grafting or angioplasty, stroke, total CVD mortality, and deaths from all causes.<br />
<br />
<b>What was found?</b><br />
<br />
At a 10-year follow-up point, the NHS had a study pool of 48,470 women. The researchers found that estrogen use (alone, without progestin) in postmenopausal women was associated with a reduction in the incidence of CVD as well as in CVD mortality compared to non-users. Later, estrogen-progestin combination therapy was shown to be even more cardioprotective than estrogen monotherapy, and lower doses of estrogen replacement therapy were found to deliver equal cardioprotection and lower the risk for adverse events. NHS researchers were alert to the potential for bias in observational studies. Adjustment for risk factors such as age (a typical practice to eliminate confounding) did not change the reported findings.<br />
<br />
<b>Was this the right answer?</b><br />
<br />
The NHS was not unique in reporting the benefits associated with HRT; other observational studies corroborated the NHS findings. A secondary retrospective data analysis of the UK primary care electronic medical record database, for example, also showed the protective effect associated with HRT use. Researchers were aware of the fundamental limitations of observational studies, particularly with regard to selection bias. They and practicing clinicians were also aware of the potential negative health effects of HRT, which had to be constantly weighed against the potential cardioprotective benefits in deciding a patient’s course of treatment. As a large section of the population could experience the health effects of HRT, researchers began planning RCTs to verify the promising observational study results. It was highly anticipated that those RCTs would corroborate the belief that estrogen replacement can reduce CVD risk.<br />
<br />
<b>Randomized Controlled Trial: The Women’s Health Initiative</b><br />
<br />
The Women’s Health Initiative (WHI) was a major study established by the National Institutes of Health in 1992 to assess a broad range of health effects in postmenopausal women. The trial was intended to follow these women for 8 years, at a cost of millions of dollars in federal funding. Among its many facets, it included an RCT to confirm the results from the observational studies discussed above. To fully investigate earlier findings, the WHI had two subgroups. One subgroup consisted of women with prior hysterectomies; they received estrogen monotherapy. The second group consisted of women who had not undergone hysterectomy; they received estrogen in combination with progestin. The WHI enrolled 27,347 women in their HRT investigation: 10,739 in the estrogen-alone arm and 16,608 in the estrogen plus progestin arm. Within each arm, women were randomly assigned to receive either HRT or placebo. All women in the trial were postmenopausal and aged 50-79 years; the mean age was 63.6 years (a fact that would be important in later analysis). Some participants had experienced previous CV events. The primary outcome of both subgroups was coronary heart disease (CHD), as defined by nonfatal MI or death due to CHD.<br />
<br />
The estrogen-progestin arm of the WHI was halted after a mean follow-up of 5.2 years, 3 years earlier than expected, as the HRT users in this arm were found to be at increased risk for CHD compared to those who received placebo. The study also noted elevated rates of breast cancer and stroke, among other poor outcomes. The estrogen-alone arm continued for an average follow-up of 6.8 years before being similarly discontinued ahead of schedule. Although this part of the study did not find an increased risk of CHD, it also did not find any cardioprotective effect. Beyond failing to locate any clear CV benefits, the WHI also found real evidence of harm, including increased risk of blood clots, breast cancer and stroke. Initial WHI publications therefore recommended against HRT being prescribed for the secondary prevention of CVD.<br />
<br />
<b>What Next?</b><br />
<br />
Scientists, and the clinicians who relied on their data for guidance in treating patients, were faced with conflicting data: epidemiological and observational studies suggested that HRT was cardioprotective, while the higher-quality evidence from RCTs strongly suggested the opposite. Clinicians primarily followed the WHI results, so prescriptions for HRT in postmenopausal women quickly declined. Meanwhile, researchers began to analyze the studies for potential discrepancies, and found that the women being followed in the NHS and the WHI differed in several important characteristics.<br />
<br />
First, the WHI population was older than the NHS cohort, and many had entered menopause at least 10 years before they enrolled in the RCT. Thus, the WHI enrollees experienced a long duration from the onset of menopause to the commencement of HRT. At the same time, many in the NHS population were closer to the onset of menopause and were still displaying hormonal symptoms when they began HRT. Second, although the NHS researchers adjusted the data for various confounding effects, their results could still have been subject to bias. In general, the NHS cohort was more highly educated and of a higher socioeconomic status than the WHI participants, and therefore more likely to see a physician regularly. The NHS women were also leaner and generally healthier than their RCT counterparts, and had been selected for their evident lack of pre-existing CV conditions. This selection bias in the NHS enrollment may have led to a “healthy woman” effect that in turn led to an overestimation of the benefits of therapy in the observational study. Third, researchers noted that dosing differences between the two study types may have contributed to the divergent results. The NHS reported beneficial results following low-dose estrogen therapy. The WHI, meanwhile, used a higher estrogen dose, exposing women to a larger dosage of hormones and increasing their risk for adverse events. The increased risk profile of the WHI women (e.g., older, more comorbidities, higher estrogen dose) could have contributed to the evidence of harm seen in the WHI results.<br />
<br />
<b>Emerging Data</b><br />
<br />
In addition to identifying the inherent differences between the two study populations, researchers began a secondary analysis of the NHS and WHI trials. NHS researchers reported that women who began HRT close to the onset of menopause had a significantly reduced risk of CHD. In the subgroups of women that were older and had a similar duration after menopause compared with the WHI women, they found no significant relationship between HRT and CHD. Also, the WHI study further stratified these results by age, and found that women who began HRT close to their onset of menopause experienced some cardioprotection, while women who were further from the onset of menopause had a slightly elevated risk for CHD.<br />
<br />
Secondary analysis of both studies was therefore necessary to show that age and a short duration from the onset of menopause are crucial to HRT’s success as a cardioprotective agent. Neither study type provided “truth”; or rather, both studies provided “truth” if viewed carefully (e.g., both produced valid and important results). The differences seen in the studies were rooted in the timing of HRT and the populations being studied.<br />
<br />
<b>Lessons Learned From this Case Study</b><br />
<br />
Although RCTs are given a higher evidence grade, observational studies provide important clinical insights. In this example, the study populations differed. For policymakers and clinicians, it is crucial to examine whether the CER was based upon patients similar to those being considered. Any study with a dissimilar population may provide non-relevant results. Thus, readers of CER need to carefully examine the generalizability of the findings being reported.<br />
<br />
==Appendix==<br />
<br />
General Classification and Regression Tree (CART) data analysis steps using the R package <b>rpart</b>.<br />
<br />
===Growing the Tree===<br />
<br />
# To grow a tree, use<br />
rpart(formula, data=, method=, control=), where<br />
formula is in the format outcome ~ predictor1+predictor2+...<br />
data= specifies the data frame<br />
method= "class" for a classification tree, use "anova" for a regression tree<br />
control= optional parameters for controlling tree growth. For example, control=rpart.control(minsplit=30, cp=0.001) requires that the minimum number of observations in a node be 30 before attempting a split and that a split must decrease the overall lack of fit by a factor of 0.001 (cost complexity factor) before being attempted.<br />
<br />
===Examining Results===<br />
<br />
# These functions help with examining the results.<br />
printcp(fit) display complexity parameter (cp) table<br />
plotcp(fit) plot cross-validation results<br />
rsq.rpart(fit) plot approximate R-squared and relative error for different splits (2 plots). Labels are only appropriate for the "anova" method.<br />
print(fit) print results<br />
summary(fit) detailed results including surrogate splits<br />
plot(fit) plot decision tree<br />
text(fit) label the decision tree plot<br />
post(fit, file=) create postscript plot of decision tree<br />
# In trees created by rpart(), move to the LEFT branch when the stated condition is true.<br />
<br />
===Pruning Trees===<br />
<br />
#In general, trees should be pruned back to avoid overfitting the data. The tree size should minimize the cross-validated error – the xerror column printed by printcp(). Pruning the tree is accomplished by:<br />
prune(fit, cp= )<br />
# Use printcp() to examine the cross-validation error results, select the complexity parameter (CP) associated with the minimum error, and insert that CP into the prune() function. This (automatically selecting the complexity parameter associated with the smallest cross-validated error) can be done succinctly by:<br />
fit$\$$cptable[which.min(fit$\$$cptable[,"xerror"]),"CP"]<br />
<br />
===Complete Dataset for N-of-1 Example===<br />
[[SMHS_MethodsHeterogeneity_CER_Nof1|This N-of-1 Dataset]] includes an example.<br />
<br />
===Footnotes===<br />
<br />
*<sup>13</sup> Based on 2009 NPC report, http://www.npcnow.org/publication/demystifying-comparative-effectiveness-research-case-study-learning-guide <br />
*<sup>14</sup> http://www.cancer.gov/cancertopics/druginfo/fda-cetuximab<br />
<br />
===[[SMHS_MethodsHeterogeneity|Back to the Heterogeneity of Treatment Effects, Case-Studies of Comparative Effectiveness Research section]]===<br />
<br />
<hr><br />
* SOCR Home page: http://www.socr.umich.edu<br />
<br />
{{translate|pageName=http://wiki.socr.umich.edu/index.php/SMHS_MethodsHeterogeneity_CER}}</div>Pineaumihttps://wiki.socr.umich.edu/index.php?title=SMHS_MethodsHeterogeneity_CER&diff=16226SMHS MethodsHeterogeneity CER2016-05-23T19:00:26Z<p>Pineaumi: /* Footnotes */</p>
<hr />
<div>==[[SMHS_MethodsHeterogeneity| Methods for Studying Heterogeneity of Treatment Effects, Case-Studies of Comparative Effectiveness Research]] - Comparative Effectiveness Research: Case Studies <sup>13</sup> (CER) ==<br />
<br />
===Observational Studies: Tips for the CER Practitioners===<br />
<br />
*Different study types can offer different understandings; neither should be discounted without closer examination.<br />
<br />
*RCTs provide an accurate understanding of the effect of a particular intervention in a well-defined patient group under “controlled” circumstances.<br />
<br />
*Observational studies provide an understanding of real-world care and its impact, but can be biased due to uncontrolled factors.<br />
<br />
*Observational studies differ in the types of databases used. These databases may lack clinical detail and contain incomplete or inaccurate data.<br />
<br />
*Before accepting the findings from an observational study, consider whether confounding factors may have influenced the results.<br />
<br />
*In this scenario, subgroup analysis was vital in clarifying both study designs; what is true for the many (e.g., overall, estrogen appeared to be detrimental) may not be true for the few (e.g., that for the younger post-menopausal woman, the benefits were greater and the harms less frequent).<br />
<br />
*Carefully examine the generalizability of the study. Do the study’s patients and intervention match those under consideration?<br />
<br />
*Observational studies can identify associations but cannot prove cause-and-effect relationships.<br />
<br />
===Case-Study 1: The Cetuximab Study<sup>14</sup>===<br />
<br />
<b>What was done and what was found?</b><br />
<br />
Cetuximab, an anti-epidermal growth factor receptor (EGFR) agent, has recently been added to the therapeutic armamentarium. Two important RCTs examined its impact in patients with mCRC (metastatic colorectal cancer). In the first, investigators at 56 centers in 11 European countries examined the outcomes associated with cetuximab therapy in 329 mCRC patients who experienced disease progression either on irinotecan therapy or within 3 months thereafter. The study reported that the group on a combination of irinotecan and cetuximab had a significantly higher rate of overall response to treatment (primary endpoint) than the group on cetuximab alone: 22.9% (95% CI, 17.5-29.1%) vs. 10.8% (95% CI, 5.7-18.1%) (P=0.007), respectively. Similarly, the median time to progression was significantly longer in the combination therapy group (4.1 vs. 1.5 months, P<0.001). As these patients had already progressed on irinotecan prior to the study, any response was viewed as positive. Safety between the two treatment arms was similar: approximately 80% of patients in each arm experienced a rash. Grade 3 or 4 (the more severe) toxic effects on the skin were slightly more frequent in the combination-therapy group compared to cetuximab monotherapy, observed in 9.4% and 5.2% of participants, respectively. Other side effects, such as diarrhea and neutropenia observed in the combination-therapy arm, were considered to be in the range expected for irinotecan alone. Data from this study demonstrated the efficacy and safety of cetuximab and were instrumental in the FDA’s 2004 approval.<br />
<br />
A second RCT (2007) examined 572 patients and suggested the efficacy of cetuximab in the treatment of mCRC. This study was a randomized, non-blinded, controlled trial that examined cetuximab monotherapy plus best supportive care compared to best supportive care alone in patients who had received and failed prior chemotherapy regimens. It reported that median overall survival (the primary endpoint) was significantly higher in patients receiving cetuximab plus best supportive care compared to best supportive care alone (6.1 vs. 4.6 months, respectively) (hazard ratio for death=0.77; 95% CI: 0.64-0.92, P=0.005). This RCT described a greater incidence of adverse events in the cetuximab plus best supportive care group compared to best supportive care alone, including (most significantly) rash, as well as edema, fatigue, nausea and vomiting.<br />
<br />
<b>Was this the right answer?</b><br />
<br />
These RCTs had fairly broad enrollment criteria and the cetuximab benefits were modest. Emerging scientific theories raised the possibility that genetically defined population subsets might experience a greater-than-average treatment benefit. One such area of inquiry entailed examining “biomarkers,” or genetic indicators of a patient’s greater response to therapy. Even as the above RCTs were being conducted, data emerged showing the importance of the KRAS gene.<br />
<br />
<b>Emerging Data</b><br />
<br />
Based on the emerging biochemical evidence that the epidermal growth factor receptor (EGFR) treatment mechanism (cetuximab) was even more finely detailed than previously understood, the study authors of the 2007 RCT undertook a retrospective subgroup analysis using tumor tissue samples preserved from their initial study. Following laboratory analysis, all viable tissue samples were classified as having a wild-type (non-mutated) or a mutated KRAS gene. Instead of the previous two study arms (cetuximab plus best supportive care vs. best supportive care alone), there were 4 for this new analysis: each of the two original study arms was further divided by wild-type vs. mutated KRAS status. Laboratory evaluation determined that 40.9% of patients in the cetuximab plus best supportive care group and 42.3% in the best supportive care alone group had a KRAS mutation. The efficacy of cetuximab was found to be significantly correlated with KRAS status: in patients with wild-type (non-mutated) KRAS genes, cetuximab plus best supportive care compared to best supportive care alone improved overall survival (median 9.5 vs. 4.8 months, respectively; hazard ratio for death=0.55; 95% CI, 0.41-0.74, P<0.001), and progression-free survival (median 3.7 vs. 1.9 months, respectively; hazard ratio for progression or death=0.40; 95% CI, 0.30-0.54, P<0.001). Meanwhile, in patients with mutated KRAS tumors, the authors found no significant difference in outcome between cetuximab plus best supportive care vs. best supportive care alone.<br />
<br />
<b>What next?</b><br />
<br />
Based on these and similar results from other studies, the FDA narrowed its product labeling in July 2009 to indicate that cetuximab is not recommended for mCRC patients with mutated KRAS tumors. This distinction reduces the relevant population by approximately 40%. Similarly, the American Society of Clinical Oncology released a provisional clinical recommendation that all mCRC patients have their tumors tested for KRAS status before receiving anti-EGFR therapy. The benefits of targeted treatment are many. Patients who previously underwent cetuximab therapy without knowing their genetic predisposition would no longer be exposed unnecessarily to the drug’s toxic effects, as the efficacy of cetuximab is markedly higher in the genetically appropriate patients. In a less-uncertain environment, clinicians can be more confident in advocating a course of action in their care of patients. And finally, knowledge that targeted therapy is possible suggests the potential for further innovation in treatment options. In fact, research continues to demonstrate options for targeted cetuximab treatment of mCRC at an even finer scale than seen with KRAS; and similar genetic targeting is being investigated, and advocated, in other cancer types.<br />
<br />
<b>Lessons Learned From This Case Study</b><br />
<br />
Although RCTs are generally viewed as the gold standard, results of one or even a series of trials may not accurately reflect the benefits experienced by an individual patient. This case study suggests that cetuximab initially appeared to have rather modest clinical benefits. However, new information that became available and subsequent genetic subgroup assessments led to very different conclusions. Clinicians should be aware that current knowledge is likely to evolve, and any decisions about patient care should be carefully considered with that sense of uncertainty in mind. As in this case study, subgroup analyses (e.g., genetic subtypes) need a theoretical rationale. Ideally, the analyses should be determined at the time of original RCT design and should not just occur as explorations of the subsequent data. When improperly employed, post hoc analyses may lead to incorrect patient care conclusions.<br />
<br />
<b>RCTs: Tips for CER Practitioners</b><br />
<br />
*RCTs can determine whether an intervention can provide benefit in a very controlled environment.<br />
<br />
*The controlled nature of an RCT may limit its generalizability to a broader population.<br />
<br />
*No results are permanent; advances in scientific knowledge and understanding can influence how we view the effectiveness (or safety) of a therapeutic intervention.<br />
<br />
*Targeted therapy illuminated by carefully thought-out subgroup analyses can improve the effective and safe use of an intervention.<br />
<br />
===Case-Study 2: The Rosiglitazone Study===<br />
<br />
<b>Meta-analysis</b><br />
<br />
Often the results for the same intervention differ across clinical trials and it may not be clear whether one therapy provides more benefit than another. As CER increases and more studies are conducted, clinicians and policymakers are more likely to encounter this scenario. In a systematic review, a researcher identifies similar studies and displays their results in a table, enabling qualitative comparisons across the studies. With a meta-analysis, the data from included studies are statistically combined into a single “result.” Merging the data from a number of studies increases the effective sample size of the investigation, providing a statistically stronger conclusion about the body of research. By so doing, investigators may detect low frequency events and demonstrate more subtle distinctions between therapeutic alternatives.<br />
<br />
When studies have been properly identified and combined, the meta-analysis produces a summary estimate of the findings and a confidence interval that can serve as a benchmark in medical opinion and practice. However, when done incorrectly, the quantitative and statistical analysis can create impressive “numbers” but biased results. The following are important criteria for properly conducted meta-analyses:<br />
<br />
1. Carefully defining unbiased inclusion or exclusion criteria for study selection<br />
<br />
2. Including only those studies that have similar design elements, such as patient population, drug regimen, outcomes being assessed, and time-frame<br />
<br />
3. Applying correct statistical methods to combine and analyze the data<br />
<br />
Reporting this information is essential for the reader to determine whether the data were suitable to combine, and if the meta-analysis draws unbiased conclusions. Meta-analyses of randomized clinical trials are considered to be the highest level of medical evidence as they are based upon a synthesis of rigorously controlled trials that systematically reduce bias and confounding. This technique is useful in summarizing available evidence and will likely become more common in the era of publicly funded comparative effectiveness research. The following case study will examine several key principles that will be useful as the reader encounters these publications.<br />
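To illustrate the mechanics of statistically combining study results, a fixed-effect (inverse-variance) pooling of log odds ratios can be sketched in a few lines of base R. The effect sizes and standard errors below are hypothetical, chosen only for illustration and not taken from any study discussed here:<br />

```r
# Fixed-effect (inverse-variance) pooling of log odds ratios in base R.
# NOTE: these per-study effect sizes and standard errors are hypothetical.
log_or <- c(0.25, 0.40, 0.10)   # per-study log odds ratios
se     <- c(0.12, 0.20, 0.15)   # per-study standard errors

w         <- 1 / se^2                    # inverse-variance weights
pooled    <- sum(w * log_or) / sum(w)    # pooled log odds ratio
pooled_se <- sqrt(1 / sum(w))            # standard error of the pooled estimate

or <- exp(pooled)                               # back on the odds-ratio scale
ci <- exp(pooled + c(-1.96, 1.96) * pooled_se)  # 95% confidence interval
```

A random-effects model would additionally account for between-study heterogeneity, widening the confidence interval when the studies disagree.<br />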
<br />
<b>Clinical Application</b><br />
<br />
Heart disease is the leading cause of mortality in the United States, resulting in approximately 20% of all deaths. Diabetics are particularly susceptible to heart disease, with more than 65% of deaths attributable to it. The nonfatal complications of diabetes are wide-ranging and include kidney failure, nerve damage, amputation, stroke and blindness, among other outcomes. In 2007, the total estimated cost of diabetes in the United States was $174B; $116B was derived from direct medical expenditures and the rest from the indirect cost of lost productivity due to the disease. With such serious health effects and heavy direct and indirect costs tied to diabetes, proper disease management is critical. Historically, diabetes treatment has focused on strict blood sugar control, assuming that this goal not only targets diabetes but also reduces other serious comorbidities of the disease.<br />
<br />
Anti-diabetic agents have long been associated with key questions as to their benefits/risks in the treatment of diabetes. The sulfonylurea tolbutamide, a first generation anti-diabetic drug, was found in a landmark study in the 1970s to significantly increase the CV mortality rate compared to patients not on this agent. Further analysis by external parties concluded that the methods employed in this trial were significantly flawed (e.g., use of an “arbitrary” definition of diabetes status, heterogeneous baseline characteristics of the populations studied, and incorrect statistical methods). Since these early studies, CV concerns continue to be an issue with selected oral hypoglycemic agents that have subsequently entered the marketplace.<br />
<br />
A class of drugs, thiazolidinedione (TZD), was approved in the late 1990s, as a solution to the problems associated with the older generation of sulfonylureas. Rosiglitazone, a member of the TZD class, was approved by the FDA in 1999 and was widely prescribed for the treatment of type-2 diabetes. A number of RCTs supported the benefit of rosiglitazone as an important new oral antidiabetic agent. However, safety concerns developed as the FDA received reports of adverse cardiac events potentially associated with rosiglitazone. It was in this setting that a meta-analysis by Nissen and Wolski was published in the New England Journal of Medicine in June 2007.<br />
<br />
<b>What was done?</b><br />
<br />
Nissen and Wolski conducted a meta-analysis examining the impact of rosiglitazone on cardiac events and mortality compared to alternative therapeutic approaches. The study began with a broad search to locate potential studies for review. The authors screened published phase II, III, and IV trials; the FDA website; and the drug manufacturer’s clinical-trial registry for applicable data relating to rosiglitazone use. When the initial search was complete, the studies were further categorized by pre-stated inclusion criteria. Meta-analysis inclusion criteria were simple: studies had to include rosiglitazone and a randomized comparator group treated with either another drug or placebo, study arms had to show similar length of treatment, and all groups had to have received more than 24 weeks of exposure to the study drugs. The studies had to contain outcome data of interest including the rate of myocardial infarction (MI) or death from all CV causes. Out of 116 studies surveyed by the authors, 42 met their inclusion criteria and were included in the meta-analysis. Of the studies they included, 23 had durations of 26 weeks or less, and only five studies followed patients for more than a year. Until this point, the study’s authors were following a path similar to that of any reviewer interested in CV outcomes, examining the results of these 42 studies and comparing them qualitatively. Quantitatively combining the data, however, required the authors to make choices about the studies they could merge and the statistical methods they should apply for analysis. Those decisions greatly influenced the results that were reported.<br />
<br />
<b>What was found?</b><br />
<br />
When the studies were combined, the meta-analysis contained data from 15,565 patients in the rosiglitazone group and 12,282 patients in the comparator groups. Analyzing their data, the authors chose one particular statistical method (the Peto odds ratio method, a fixed-effect statistical approach), which calculates the odds of events occurring where the outcomes of interest are rare and small in number. In comparing rosiglitazone with a “control” group that included other drugs or placebo, the authors reported odds ratios of 1.43 (95% CI, 1.03-1.98; P=0.03) and 1.64 (95% CI, 0.98-2.74; P=0.06) for MI and death from CV causes, respectively. In other words, the odds of an MI or death from a CV cause are higher for rosiglitazone patients than for patients on other therapies or placebo. The authors reported that rosiglitazone was significantly associated with an increase in the risk of MI and had borderline significance in increasing the risk of death from all CV causes. These findings appeared online on the same day that the FDA issued a safety alert regarding rosiglitazone. Discussion of the meta-analysis was immediately featured prominently in the news media. By December 2007, prescription claims for the drug at retail pharmacies had fallen by more than 50%.<br />
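As a rough sketch of how the Peto one-step odds ratio works for sparse events, the method can be written directly in base R. The trial counts below are hypothetical, not the actual rosiglitazone data:<br />

```r
# Peto one-step odds ratio for rare events (sketch; hypothetical counts).
# a = events on treatment, b = events on control, n1/n2 = arm sizes.
peto_or <- function(a, b, n1, n2) {
  N <- n1 + n2
  E <- n1 * (a + b) / N                                   # expected treatment events
  V <- n1 * n2 * (a + b) * (N - a - b) / (N^2 * (N - 1))  # hypergeometric variance
  log_or <- sum(a - E) / sum(V)
  se     <- sqrt(1 / sum(V))
  c(OR    = exp(log_or),
    lower = exp(log_or - 1.96 * se),
    upper = exp(log_or + 1.96 * se))
}

# Three small hypothetical trials:
res <- peto_or(a = c(2, 1, 3), b = c(1, 0, 1),
               n1 = c(100, 150, 120), n2 = c(100, 150, 120))
res
```

Because the method sums observed-minus-expected event counts across trials, a trial with zero events in both arms contributes nothing to the estimate, which is one reason the exclusion of zero-event trials became a point of debate.<br />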
<br />
As diabetic patients and their clinicians reacted to the news, a methodologic debate also ensued. This discussion included statistical issues pertaining to the conduct of the analysis, its implications for clinical care, and finally the FDA’s and drug manufacturer’s roles in overseeing and regulating rosiglitazone. Concern regarding treatment continues among patients with diabetes and in the medical community today.<br />
<br />
<b>Was this the right answer?</b><br />
<br />
Should the studies have been combined? Commentators faulted the authors for including several studies that were not originally intended to investigate diabetes, and for combining both placebo and drug therapy data into one comparator arm. Some critics noted that despite the stated inclusion criteria, some data were derived from studies where the rosiglitazone arm was allowed a longer follow-up than the comparator arm. By failing to account for this longer follow-up period, commentators felt that the authors may have overestimated the effect of rosiglitazone on CV outcomes. Many reviewers were concerned that this meta-analysis excluded trials in which no patients suffered an MI or died from CV causes – the outcomes of greatest interest. Some reviewers also noted that the exclusion of zero-event trials from the pooled dataset not only gave an incomplete picture of the impact of rosiglitazone but could have increased the odds ratio estimate. In general, the pooled dataset was criticized by many for being a faulty microcosm of the information available regarding rosiglitazone.<br />
<br />
It is essential that a meta-analysis be based on similarity in the data sources. If studies differ in important areas such as the patient populations, interventions, or outcomes, combining their data may not be suitable. The researchers accepted studies and populations that were clinically heterogeneous, yet pooled them as if they were not. The study reported that the results were combined from a number of trials that were not initially intended to investigate CV outcomes. Furthermore, the available data did not allow for time-to-event analysis, an essential tool in comparing the impact of alternative treatment options. Reviewers considered the data to be insufficiently homogeneous, and the line of cause and effect to be murkier than the authors described.<br />
<br />
<b>Were the statistical methods optimal?</b><br />
<br />
The statistical methods for this meta-analysis also came under significant criticism. The critiques focused on the authors’ use of the Peto method as an incorrect choice because data were pooled from both small and very large studies, resulting in a potential overestimation of the treatment effect. Other reviewers pointed out that the Peto method should not have been used, as a number of the underlying studies did not have patients assigned equally to rosiglitazone and comparator groups. Finally, critics suggested that the heterogeneity of the included studies required an altogether different set of analytic techniques.<br />
<br />
Demonstrating the sensitivity of the authors’ initial analysis to the inclusion criteria and statistical tests used, a number of researchers reworked the data from this study. One researcher used the same studies but analyzed the data with a more commonly used statistical method (Mantel-Haenszel), and found no significant increase in the relative risk or common odds ratio for MI or CV death. When the pool of studies was expanded to include those originally eliminated because they had zero CV events, the odds ratios for MI and death from CV causes dropped from 1.43 to 1.26 (95% CI, 0.93-1.72) and from 1.64 to 1.14 (95% CI, 0.74-1.74), respectively. Neither of the recalculated odds ratios was significant for MI or CV death. Finally, several newer long-term studies have been published since the Nissen meta-analysis. Incorporating their results with the meta-analysis data showed that rosiglitazone is associated with an increased risk of MI but not of CV death. Thus, the findings from these meta-analyses varied with the methods employed, the studies included, and the addition of later trials.<br />
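The Mantel-Haenszel reanalysis mentioned above can be illustrated with base R’s mantelhaen.test(), which pools stratified 2×2 tables into a common odds ratio. The counts below are hypothetical, not the actual trial data:<br />

```r
# Mantel-Haenszel common odds ratio across strata (studies), base R.
# Hypothetical counts; array dimensions are outcome x arm x study,
# filled column-major: (events, non-events) for drug, then for control.
tab <- array(c(2,  98, 1,  99,    # study 1
               1, 149, 0, 150,    # study 2
               3, 117, 1, 119),   # study 3
             dim = c(2, 2, 3),
             dimnames = list(Outcome = c("MI", "no MI"),
                             Arm     = c("drug", "control"),
                             Study   = 1:3))

res <- mantelhaen.test(tab)  # common OR estimate, 95% CI, and a chi-square test
res
```

Unlike the Peto one-step approach, the Mantel-Haenszel estimator weights each study by its internal 2×2 structure, which is one reason the two methods can disagree when events are rare or arms are unbalanced.<br />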
<br />
<b>Emerging Data</b><br />
<br />
The controversy surrounding the rosiglitazone meta-analysis authored by Nissen and Wolski forced an unplanned interim analysis of a long-term, randomized trial investigating the CV effects of rosiglitazone among patients with type 2 diabetes. The authors of the RECORD trial noted that even though the follow-up at 3.75 years was shorter than expected, rosiglitazone, when added to standard glucose-lowering therapy, was found to be associated with an increase in the risk of heart failure but was not associated with any increase in death from CV or other causes. Data at the time were found to be insufficient to determine the effect of rosiglitazone on an increase in the risk of MI. The final report of that trial, published in June 2009, confirmed the elevated risk of heart failure in people with type 2 diabetes treated with rosiglitazone in addition to glucose-lowering drugs, but continued to show inconclusive results about the effect of the drug therapy on the risk of MI. Further, the RECORD trial clarified that rosiglitazone does not result in an increased risk of CV morbidity or mortality compared to standard glucose-lowering drugs. Other trials conducted since the publishing of the meta-analysis have corroborated these results, casting further doubt on the findings of the meta-analysis published by Nissen and Wolski.<br />
<br />
<b>Now what?</b><br />
<br />
Some sources suggest that the original Nissen meta-analysis delivered more harm than benefit, and that a well-recognized medical journal may have erred in its process of peer review. Despite this criticism, it is important to note that subsequent publications support the risk of adverse CV events associated with rosiglitazone, although rosiglitazone use does not appear to increase deaths. These results and emerging data point to the need for further rigorous research to clarify the benefits and risks of rosiglitazone on a variety of outcomes, and the importance of directing the drug to the population that will maximally benefit from its use.<br />
<br />
<b>Lessons Learned From This Case Study</b><br />
<br />
Results from initial randomized trials that seem definitive at one time may not be conclusive, as further trials may emerge to clarify, redirect, or negate previously accepted results. A meta-analysis of those trials can lead to varying results based upon the timing of the analysis and the choices made in its performance.<br />
<br />
<b>Meta-Analysis: Tips for CER Practitioners</b><br />
<br />
*The results of a meta-analysis are highly dependent on the studies included (and excluded). Are these criteria properly defined and relevant to the purposes of the meta-analysis? Were the combined studies sufficiently similar? Can results from this cohort be generalized to other populations of interest?<br />
<br />
*The statistical methodology can impact study results. Have there been reviews critiquing the methods used in the meta-analysis?<br />
<br />
*A variety of statistical tests should be considered, and perhaps reported, in the analysis of results. Do the authors mention their rationale in choosing a statistical method? Do they show the stability of their results across a spectrum of analytical methods?<br />
<br />
*Nothing is permanent. Emerging data may change the playing field, and meta-analysis results are only as good as the data and statistics from which they are derived.<br />
<br />
===Case-Study 3: The Nurses’ Health Study===<br />
<br />
<b>An observational study</b><br />
<br />
An observational study is a very common type of research design in which the effects of a treatment or condition are studied without formally randomizing patients in an experimental design. Such studies can be done prospectively, wherein data are collected about a group of patients going forward in time; or retrospectively, in which the researcher looks into the past, mining existing databases for data that have already been collected. The latter studies are frequently performed by using an electronic database that contains, for example, administrative, “billing,” or claims data. Less commonly, observational research uses electronic health records, which have greater clinical information that more closely resembles the data collected in an RCT. Observational studies often take place in “real-world” environments, which allow researchers to collect data for a wide array of outcomes. Patients are not randomized in these studies, but the findings can be used to generate hypotheses for investigation in a more constrained experimental setting. Perhaps the best known observational study is the “Framingham study,” which collected demographic and health data for a group of individuals over many years (and continues to do so) and has provided an understanding of the key risk factors for heart disease and stroke.<br />
<br />
Observational studies present many advantages to the comparative effectiveness researcher. The study design can provide a unique glimpse of the use of a health care intervention in the “real world,” an essential step in gauging the gap between efficacy (can a treatment work in a controlled setting?) and effectiveness (does the treatment work in a real-life situation?). Furthermore, observational studies can be conducted at low cost, particularly if they involve the secondary analysis of existing data sources. CER often uses administrative databases, which are based upon the billing data submitted by providers during routine care. These databases typically have limited clinical information, may have errors in them, and generally do not undergo auditing.<br />
<br />
The uncontrolled nature of observational studies leaves them subject to bias and confounding. For example, doctors may prescribe a new medication only for the sickest patients. Comparing these outcomes (without careful statistical adjustment) with those from less ill patients receiving alternative treatment may lead to misleading results. Observational studies can identify important associations but cannot prove cause and effect. These studies can generate hypotheses that may require RCTs for fuller demonstration of those relationships. Secondary analysis can also be problematic if researchers overwork datasets by doing multiple exploratory analyses (e.g., data-dredging): the more we look, the more we find, even if those findings are merely statistical aberrations. Unfortunately, the growing need for CER and the wide availability of administrative databases may lead to poor-quality research with inaccurate findings.<br />
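The confounding-by-indication problem described above (sicker patients being the ones who receive the new drug) can be made concrete with a small simulation in R. Every variable here is synthetic, invented purely for illustration:<br />

```r
# Simulated confounding by indication: the treatment is inert, but because
# sicker patients receive it more often, the crude comparison makes the
# treatment look harmful. All data below are synthetic.
set.seed(1)
n        <- 5000
severity <- rnorm(n)                                  # illness severity
treated  <- rbinom(n, 1, plogis(severity))            # sicker -> more often treated
outcome  <- rbinom(n, 1, plogis(-1 + 0.8 * severity)) # outcome driven by severity only

crude    <- glm(outcome ~ treated,            family = binomial)
adjusted <- glm(outcome ~ treated + severity, family = binomial)

exp(coef(crude)["treated"])     # crude odds ratio: biased upward
exp(coef(adjusted)["treated"])  # adjusted odds ratio: close to 1
```

The adjustment works here only because severity was measured; unmeasured confounders cannot be corrected this way, which is why observational findings may still require an RCT.<br />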
<br />
In comparative effectiveness research, observational studies are typically considered to be less conclusive than RCTs and meta-analyses. Nonetheless, they can be useful, especially because they examine typical care. Due to lower cost and improvements in health information, observational studies will become increasingly common. Critical assessment of whether the described results are helpful or biased (based upon how the study was performed) is necessary. This case will illustrate several characteristics of these types of studies that will assist in evaluating newly published work.<br />
<br />
<b>Clinical Applications</b><br />
<br />
Cardiovascular diseases (CVD) are the leading cause of death in women older than the age of 50. Epidemiologic evidence suggests that estrogen is a key mediator in the development of CVD. Estrogen is an ovarian hormone whose production decreases as women approach menopause. The steep increase in CVD in women at menopause and older and in women who have had hysterectomies further supports a relationship between estrogen and CVD. Building on this evidence of biologic plausibility, epidemiological and observational studies suggested that estrogen replacement therapy (a form of <b>hormone replacement therapy</b>, or HRT) had positive effects on the risk of CVD in postmenopausal women (albeit with some negative effects in its potential to increase the risk for breast cancer and stroke). Based on these findings, in the 1980s and 1990s HRT was routinely employed to treat menopausal symptoms and serve as prophylaxis against CVD.<br />
<br />
<b>What was done?</b><br />
<br />
The Nurses’ Health Study (NHS) began collecting data in 1976. In the study, researchers intended to examine a broad range of health effects in women over a long period of time, and a key goal was to clarify the role of HRT in heart disease. The cohort (i.e., the group being followed) included married registered nurses aged 30-55 in 1976 who lived in the 11 most populous states. To collect data, the researchers mailed the study participants a survey every 2 years that asked questions about topics such as smoking, hormone use, menopausal status, and less frequently, diet. Data were collected for key end points that included MI, coronary-artery bypass grafting or angioplasty, stroke, total CVD mortality, and deaths from all causes.<br />
<br />
<b>What was found?</b><br />
<br />
At a 10-year follow-up point, the NHS had a study pool of 48,470 women. The researchers found that estrogen use (alone, without progestin) in postmenopausal women was associated with a reduction in the incidence of CVD as well as in CVD mortality compared to non-users. Later, estrogen-progestin combination therapy was shown to be even more cardioprotective than estrogen monotherapy, and lower doses of estrogen replacement therapy were found to deliver equal cardioprotection and lower the risk for adverse events. NHS researchers were alert to the potential for bias in observational studies. Adjustment for risk factors such as age (a typical practice to eliminate confounding) did not change the reported findings.<br />
<br />
<b>Was this the right answer?</b><br />
<br />
The NHS was not unique in reporting the benefits associated with HRT; other observational studies corroborated the NHS findings. A secondary retrospective data analysis of the UK primary care electronic medical record database, for example, also showed the protective effect associated with HRT use. Researchers were aware of the fundamental limitations of observational studies, particularly with regard to selection bias. They and practicing clinicians were also aware of the potential negative health effects of HRT, which had to be constantly weighed against the potential cardioprotective benefits in deciding a patient’s course of treatment. As a large section of the population could experience the health effects of HRT, researchers began planning RCTs to verify the promising observational study results. It was highly anticipated that those RCTs would corroborate the belief that estrogen replacement can reduce CVD risk.<br />
<br />
<b>Randomized Controlled Trial: The Women’s Health Initiative</b><br />
<br />
The Women’s Health Initiative (WHI) was a major study established by the National Institutes of Health in 1992 to assess a broad range of health effects in postmenopausal women. The trial was intended to follow these women for 8 years, at a cost of millions of dollars in federal funding. Among its many facets, it included an RCT to confirm the results from the observational studies discussed above. To fully investigate earlier findings, the WHI had two subgroups. One subgroup consisted of women with prior hysterectomies; they received estrogen monotherapy. The second group consisted of women who had not undergone hysterectomy; they received estrogen in combination with progestin. The WHI enrolled 27,347 women in their HRT investigation: 10,739 in the estrogen-alone arm and 16,608 in the estrogen plus progestin arm. Within each arm, women were randomly assigned to receive either HRT or placebo. All women in the trial were postmenopausal and aged 50-79 years; the mean age was 63.6 years (a fact that would be important in later analysis). Some participants had experienced previous CV events. The primary outcome of both subgroups was coronary heart disease (CHD), as described by nonfatal MI or death due to CHD.<br />
<br />
The estrogen-progestin arm of the WHI was halted after a mean follow-up of 5.2 years, 3 years earlier than expected, as the HRT users in this arm were found to be at increased risk for CHD compared to those who received placebo. The study also noted elevated rates of breast cancer and stroke, among other poor outcomes. The estrogen-alone arm continued for an average follow-up of 6.8 years before being similarly discontinued ahead of schedule. Although this part of the study did not find an increased risk of CHD, it also did not find any cardioprotective effect. Beyond failing to locate any clear CV benefits, the WHI also found real evidence of harm, including increased risk of blood clots, breast cancer and stroke. Initial WHI publications therefore recommended against HRT being prescribed for the secondary prevention of CVD.<br />
<br />
<b>What Next?</b><br />
<br />
Scientists, and the clinicians who relied on their data for guidance in treating patients, were faced with conflicting data: epidemiological and observational studies suggested that HRT was cardioprotective, while the higher-quality evidence from RCTs strongly suggested the opposite. Clinicians primarily followed the WHI results, so prescriptions for HRT in postmenopausal women quickly declined. Meanwhile, researchers began to analyze the studies for potential discrepancies, and found that the women being followed in the NHS and the WHI differed in several important characteristics.<br />
<br />
First, the WHI population was older than the NHS cohort, and many had entered menopause at least 10 years before they enrolled in the RCT. Thus, the WHI enrollees experienced a long duration from the onset of menopause to the commencement of HRT. At the same time, many in the NHS population were closer to the onset of menopause and were still displaying hormonal symptoms when they began HRT. Second, although the NHS researchers adjusted the data for various confounding effects, their results could still have been subject to bias. In general, the NHS cohort was more highly educated and of a higher socioeconomic status than the WHI participants, and therefore more likely to see a physician regularly. The NHS women were also leaner and generally healthier than their RCT counterparts, and had been selected for their evident lack of pre-existing CV conditions. This selection bias in the NHS enrollment may have led to a “healthy woman” effect that in turn led to an overestimation of the benefits of therapy in the observational study. Third, researchers noted that dosing differences between the two study types may have contributed to the divergent results. The NHS reported beneficial results following low-dose estrogen therapy. The WHI, meanwhile, used a higher estrogen dose, exposing women to a larger dosage of hormones and increasing their risk for adverse events. The increased risk profile of the WHI women (e.g., older, more comorbidities, higher estrogen dose) could have contributed to the evidence of harm seen in the WHI results.<br />
<br />
<b>Emerging Data</b><br />
In addition to identifying the inherent differences between the two study populations, researchers began a secondary analysis of the NHS and WHI trials. NHS researchers reported that women who began HRT close to the onset of menopause had a significantly reduced risk of CHD. In the subgroups of women that were older and had a similar duration after menopause compared with the WHI women, they found no significant relationship between HRT and CHD. Also, the WHI study further stratified these results by age, and found that women who began HRT close to their onset of menopause experienced some cardioprotection, while women who were further from the onset of menopause had a slightly elevated risk for CHD.<br />
<br />
Secondary analysis of both studies was therefore necessary to show that age and a short duration from the onset of menopause are crucial to HRT success as a cardioprotective agent. Neither study type provided “truth”; or rather, both studies provided “truth” if viewed carefully (e.g., both produced valid and important results). The differences seen in the studies were rooted in the timing of HRT and the populations being studied.<br />
<br />
<b>Lessons Learned From This Case Study</b><br />
<br />
Although RCTs are given a higher evidence grade, observational studies provide important clinical insights. In this example, the study populations differed. For policymakers and clinicians, it is crucial to examine whether the CER was based upon patients similar to those being considered. Any study with a dissimilar population may provide non-relevant results. Thus, readers of CER need to carefully examine the generalizability of the findings being reported.<br />
<br />
==Appendix==<br />
<br />
General Classification and Regression Tree (CART) data analysis steps using the R package <b>rpart</b>.<br />
<br />
===Growing the Tree===<br />
<br />
# To grow a tree, use<br />
rpart(formula, data=, method=,control=), where<br />
formula is in the format outcome ~ predictor1+predictor2+...<br />
data= specifies the data frame<br />
method= "class" for a classification tree, use "anova" for a regression tree<br />
control= optional parameters for controlling tree growth. For example, control=rpart.control(minsplit=30, cp=0.001) requires that the minimum number of observations in a node be 30 before attempting a split and that a split must decrease the overall lack of fit by a factor of 0.001 (cost complexity factor) before being attempted.<br />
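Assuming the rpart package is installed, the steps above can be put together in a minimal worked example using the kyphosis dataset that ships with rpart:<br />

```r
# Worked example: grow a classification tree on the kyphosis data
# bundled with the rpart package (assumes rpart is installed).
library(rpart)

fit <- rpart(Kyphosis ~ Age + Number + Start,
             data    = kyphosis,
             method  = "class",
             control = rpart.control(minsplit = 30, cp = 0.001))

printcp(fit)          # complexity parameter (cp) table
plot(fit); text(fit)  # draw and label the tree
```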
<br />
===Examining Results===<br />
<br />
# These functions help with examining the results.<br />
printcp(fit) display complexity parameter (cp) table<br />
plotcp(fit) plot cross-validation results<br />
rsq.rpart(fit) plot approximate R-squared and relative error for different splits (2 plots). labels are only appropriate for the "anova" method.<br />
print(fit) print results<br />
summary(fit) detailed results including surrogate splits<br />
plot(fit) plot decision tree<br />
text(fit) label the decision tree plot<br />
post(fit, file=) create postscript plot of decision tree<br />
# In trees created by rpart(), move to the LEFT branch when the stated condition is true.<br />
<br />
===Pruning Trees===<br />
<br />
# In general, trees should be pruned back to avoid overfitting the data. The tree size should minimize the cross-validated error – the xerror column printed by printcp(). Pruning the tree is accomplished by:<br />
prune(fit, cp= )<br />
# Use printcp( ) to examine the cross-validation error results, select the complexity parameter (CP) associated with the minimum error, and insert that CP into the prune() function. This (automatically selecting the complexity parameter associated with the smallest cross-validated error) can be done succinctly by:<br />
fit$\$$cptable[which.min(fit$\$$cptable[,"xerror"]),"CP"]<br />
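Assuming a tree object grown as in the previous section, the examining and pruning steps can be chained as follows (a sketch; the object name fit is arbitrary):<br />

```r
printcp(fit)    # complexity parameter (cp) table, including xerror
plotcp(fit)     # plot of cross-validated error against cp

# Pick the cp value that minimizes the cross-validated error,
# then prune the tree back to that complexity.
best.cp <- fit$cptable[which.min(fit$cptable[, "xerror"]), "CP"]
pruned.fit <- prune(fit, cp = best.cp)

plot(pruned.fit, uniform = TRUE)
text(pruned.fit, use.n = TRUE)  # use.n adds observation counts to the labels
```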
<br />
===Complete Dataset for N-of-1 Example===<br />
[[SMHS_MethodsHeterogeneity_CER_Nof1|This N-of-1 Dataset]] includes an example.<br />
<br />
===Footnotes===<br />
<br />
*<sup>13</sup> Based on 2009 NPC report, http://www.npcnow.org/publication/demystifying-comparative-effectiveness-research-case-study-learning-guide <br />
*<sup>14</sup> http://www.cancer.gov/cancertopics/druginfo/fda-cetuximab<br />
<br />
===[[SMHS_MethodsHeterogeneity|Back to the Heterogeneity of Treatment Effects, Case-Studies of Comparative Effectiveness Research section]]===<br />
<br />
<hr><br />
* SOCR Home page: http://www.socr.umich.edu<br />
<br />
{{translate|pageName=http://wiki.socr.umich.edu/index.php/SMHS_MethodsHeterogeneity_CER}}</div>Pineaumihttps://wiki.socr.umich.edu/index.php?title=SMHS_MethodsHeterogeneity_CER&diff=16225SMHS MethodsHeterogeneity CER2016-05-23T18:58:39Z<p>Pineaumi: /* Compete Dataset for N-of-1 Example */</p>
<hr />
<div>==[[SMHS_MethodsHeterogeneity| Methods for Studying Heterogeneity of Treatment Effects, Case-Studies of Comparative Effectiveness Research]] - Comparative Effectiveness Research (CER): Case Studies<sup>13</sup> ==<br />
<br />
===Observational Studies: Tips for the CER Practitioners===<br />
<br />
*Different study types can offer different understandings; neither should be discounted without closer examination.<br />
<br />
*RCTs provide an accurate understanding of the effect of a particular intervention in a well-defined patient group under “controlled” circumstances.<br />
<br />
*Observational studies provide an understanding of real-world care and its impact, but can be biased due to uncontrolled factors.<br />
<br />
*Observational studies differ in the types of databases used. These databases may lack clinical detail and contain incomplete or inaccurate data.<br />
<br />
*Before accepting the findings from an observational study, consider whether confounding factors may have influenced the results.<br />
<br />
*In this scenario, subgroup analysis was vital in clarifying both study designs; what is true for the many (e.g., overall, estrogen appeared to be detrimental) may not be true for the few (e.g., that for the younger post-menopausal woman, the benefits were greater and the harms less frequent).<br />
<br />
*Carefully examine the generalizability of the study. Do the study’s patients and intervention match those under consideration?<br />
<br />
*Observational studies can identify associations but cannot prove cause-and-effect relationships.<br />
<br />
===Case-Study 1: The Cetuximab Study<sup>14</sup>===<br />
<br />
<b>What was done and what was found?</b><br />
<br />
Cetuximab, an anti-epidermal growth factor receptor (EGFR) agent, has recently been added to the therapeutic armamentarium. Two important RCTs examined its impact in patients with mCRC (metastatic colorectal cancer). In the first, investigators at 56 centers in 11 European countries examined the outcomes associated with cetuximab therapy in 329 mCRC patients who experienced disease progression either on irinotecan therapy or within 3 months thereafter. The study reported that the group on a combination of irinotecan and cetuximab had a significantly higher rate of overall response to treatment (primary endpoint) than the group on cetuximab alone: 22.9% (95% CI, 17.5-29.1%) vs. 10.8% (95% CI, 5.7-18.1%) (P=0.007), respectively. Similarly, the median time to progression was significantly longer in the combination therapy group (4.1 vs. 1.5 months, P<0.001). As these patients had already progressed on irinotecan prior to the study, any response was viewed as positive. Safety between the two treatment arms was similar: approximately 80% of patients in each arm experienced a rash. Grade 3 or 4 (the more severe) toxic effects on the skin were slightly more frequent in the combination-therapy group compared to cetuximab monotherapy, observed in 9.4% and 5.2% of participants, respectively. Other side effects, such as diarrhea and neutropenia observed in the combination-therapy arm, were considered to be in the range expected for irinotecan alone. Data from this study demonstrated the efficacy and safety of cetuximab and were instrumental in the FDA’s 2004 approval.<br />
<br />
A second RCT (2007) examined 572 patients and suggested efficacy of cetuximab in the treatment of mCRC. This study was a randomized, non-blinded, controlled trial that examined cetuximab monotherapy plus best supportive care compared to best supportive care alone in patients who had received and failed prior chemotherapy regimens. It reported that median overall survival (the primary endpoint) was significantly higher in patients receiving cetuximab plus best supportive care compared to best supportive care alone (6.1 vs. 4.6 months, respectively; hazard ratio for death=0.77; 95% CI: 0.64-0.92, P=0.005). This RCT described a greater incidence of adverse events in the cetuximab plus best supportive care group compared to best supportive care alone, including (most significantly) rash, as well as edema, fatigue, nausea and vomiting.<br />
<br />
<b>Was this the right answer?</b><br />
<br />
These RCTs had fairly broad enrollment criteria and the cetuximab benefits were modest. Emerging scientific theories raised the possibility that genetically defined population subsets might experience a greater-than-average treatment benefit. One such area of inquiry entailed examining “biomarkers,” or genetic indicators of a patient’s greater response to therapy. Even as the above RCTs were being conducted, data emerged showing the importance of the KRAS gene.<br />
<br />
<b>Emerging Data</b><br />
<br />
Based on the emerging biochemical evidence that the epidermal growth factor receptor (EGFR) treatment mechanism (cetuximab) was even more finely detailed than previously understood, the study authors of the 2007 RCT undertook a retrospective subgroup analysis using tumor tissue samples preserved from their initial study. Following laboratory analysis, all viable tissue samples were classified as having a wild-type (non-mutated) or a mutated KRAS gene. Instead of the previous two study arms (cetuximab plus best supportive care vs. best supportive care alone), there were four for this new analysis: each of the two original study arms was further divided by wild-type vs. mutated KRAS status. Laboratory evaluation determined that a KRAS mutation was present in 40.9% of patients in the cetuximab plus best supportive care group and in 42.3% of patients in the best supportive care alone group. The efficacy of cetuximab was found to be significantly correlated with KRAS status: in patients with wild-type (non-mutated) KRAS genes, cetuximab plus best supportive care compared to best supportive care alone improved overall survival (median 9.5 vs. 4.8 months, respectively; hazard ratio for death=0.55; 95% CI, 0.41-0.74, P<0.001), and progression-free survival (median 3.7 vs. 1.9 months, respectively; hazard ratio for progression or death=0.40; 95% CI, 0.30-0.54, P<0.001). Meanwhile, in patients with mutated KRAS tumors, the authors found no significant difference in outcome between cetuximab plus best supportive care vs. best supportive care alone.<br />
<br />
<b>What next?</b><br />
<br />
Based on these and similar results from other studies, the FDA narrowed its product labeling in July 2009 to indicate that cetuximab is not recommended for mCRC patients with mutated KRAS tumors. This distinction reduces the relevant population by approximately 40%. Similarly, the American Society of Clinical Oncology released a provisional clinical recommendation that all mCRC patients have their tumors tested for KRAS status before receiving anti-EGFR therapy. The benefits of targeted treatment are many. Patients who previously underwent cetuximab therapy without knowing their genetic predisposition would no longer have to be exposed to the drug’s toxic effects if unnecessary, as the efficacy of cetuximab is markedly higher in the genetically defined appropriate patients. In a less uncertain environment, clinicians can be more confident in advocating a course of action in their care of patients. And finally, knowledge that targeted therapy is possible suggests the potential for further innovation in treatment options. In fact, research continues to demonstrate options for targeted cetuximab treatment of mCRC at an even finer scale than seen with KRAS; and similar genetic targeting is being investigated, and advocated, in other cancer types.<br />
<br />
<b>Lessons Learned from This Case Study</b><br />
<br />
Although RCTs are generally viewed as the gold standard, results of one or even a series of trials may not accurately reflect the benefits experienced by an individual patient. This case-study suggests that cetuximab initially appeared to have rather modest clinical benefits. However, new information became available, and subsequent genetic subgroup assessments led to very different conclusions. Clinicians should be aware that the current knowledge is likely to evolve and any decisions about patient care should be carefully considered with that sense of uncertainty in mind. As in this case study, subgroup analyses (e.g., genetic subtypes) need a theoretical rationale. Ideally, the analyses should be determined at the time of original RCT design and should not just occur as explorations of the subsequent data. When improperly employed, post hoc analyses may lead to incorrect patient care conclusions.<br />
<br />
<b>RCTs Tips for the CER Practitioners</b><br />
<br />
*RCTs can determine whether an intervention can provide benefit in a very controlled environment.<br />
<br />
*The controlled nature of an RCT may limit its generalizability to a broader population.<br />
<br />
*No results are permanent; advances in scientific knowledge and understanding can influence how we view the effectiveness (or safety) of a therapeutic intervention.<br />
<br />
*Targeted therapy illuminated by carefully thought out subgroup analyses can improve the efficacious and safe use of an intervention.<br />
<br />
===Case-Study 2: The Rosiglitazone Study===<br />
<br />
<b>Meta-analysis</b><br />
<br />
Often the results for the same intervention differ across clinical trials and it may not be clear whether one therapy provides more benefit than another. As CER increases and more studies are conducted, clinicians and policymakers are more likely to encounter this scenario. In a systematic review, a researcher identifies similar studies and displays their results in a table, enabling qualitative comparisons across the studies. With a meta-analysis, the data from included studies are statistically combined into a single “result.” Merging the data from a number of studies increases the effective sample size of the investigation, providing a statistically stronger conclusion about the body of research. By so doing, investigators may detect low frequency events and demonstrate more subtle distinctions between therapeutic alternatives.<br />
<br />
When studies have been properly identified and combined, the meta-analysis produces a summary estimate of the findings and a confidence interval that can serve as a benchmark in medical opinion and practice. However, when done incorrectly, the quantitative and statistical analysis can create impressive “numbers” but biased results. The following are important criteria for properly conducted meta-analyses:<br />
<br />
1. Carefully defining unbiased inclusion or exclusion criteria for study selection<br />
<br />
2. Including only those studies that have similar design elements, such as patient population, drug regimen, outcomes being assessed, and time-frame<br />
<br />
3. Applying correct statistical methods to combine and analyze the data<br />
<br />
Reporting this information is essential for the reader to determine whether the data were suitable to combine, and if the meta-analysis draws unbiased conclusions. Meta-analyses of randomized clinical trials are considered to be the highest level of medical evidence as they are based upon a synthesis of rigorously controlled trials that systematically reduce bias and confounding. This technique is useful in summarizing available evidence and will likely become more common in the era of publicly funded comparative effectiveness research. The following case study will examine several key principles that will be useful as the reader encounters these publications.<br />
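The “statistical combination” step described above can be sketched with the standard fixed-effect inverse-variance pooling of log odds ratios; the 2x2 counts below are purely hypothetical and serve only to show the mechanics:<br />

```r
# Hypothetical event counts from three similar trials (illustrative only):
# a = events on treatment, n1 = treatment arm size
# c0 = events on control,  n2 = control arm size
a  <- c(12, 30, 8);  n1 <- c(200, 500, 150)
c0 <- c(8, 22, 5);   n2 <- c(190, 480, 160)
b  <- n1 - a
d  <- n2 - c0

# Per-study log odds ratios and their (approximate) variances
log.or <- log((a * d) / (b * c0))
v      <- 1/a + 1/b + 1/c0 + 1/d

# Fixed-effect pooled estimate: weight each study by 1/variance
w      <- 1 / v
pooled <- sum(w * log.or) / sum(w)
se     <- sqrt(1 / sum(w))
exp(c(OR = pooled, lower = pooled - 1.96 * se, upper = pooled + 1.96 * se))
```

Merging the per-study weights this way is what gives the pooled summary estimate and confidence interval its larger effective sample size.<br />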
<br />
<b>Clinical Application</b><br />
<br />
Heart disease is the leading cause of mortality in the United States, resulting in approximately 20% of all deaths. Diabetics are particularly susceptible to heart disease, with more than 65% of deaths attributable to it. The nonfatal complications of diabetes are wide-ranging and include kidney failure, nerve damage, amputation, stroke and blindness, among other outcomes. In 2007, the total estimated cost of diabetes in the United States was $174B; $116B was derived from direct medical expenditures and the rest from the indirect cost of lost productivity due to the disease. With such serious health effects and heavy direct and indirect costs tied to diabetes, proper disease management is critical. Historically, diabetes treatment has focused on strict blood sugar control, assuming that this goal not only targets diabetes but also reduces other serious comorbidities of the disease.<br />
<br />
Anti-diabetic agents have long been associated with key questions as to their benefits/risks in the treatment of diabetes. The sulfonylurea tolbutamide, a first generation anti-diabetic drug, was found in a landmark study in the 1970s to significantly increase the cardiovascular (CV) mortality rate compared to patients not on this agent. Further analysis by external parties concluded that the methods employed in this trial were significantly flawed (e.g., use of an “arbitrary” definition of diabetes status, heterogeneous baseline characteristics of the populations studied, and incorrect statistical methods). Since these early studies, CV concerns continue to be an issue with selected oral hypoglycemic agents that have subsequently entered the marketplace.<br />
<br />
A class of drugs, thiazolidinedione (TZD), was approved in the late 1990s, as a solution to the problems associated with the older generation of sulfonylureas. Rosiglitazone, a member of the TZD class, was approved by the FDA in 1999 and was widely prescribed for the treatment of type-2 diabetes. A number of RCTs supported the benefit of rosiglitazone as an important new oral antidiabetic agent. However, safety concerns developed as the FDA received reports of adverse cardiac events potentially associated with rosiglitazone. It was in this setting that a meta-analysis by Nissen and Wolski was published in the New England Journal of Medicine in June 2007.<br />
<br />
<b>What was done?</b><br />
<br />
Nissen and Wolski conducted a meta-analysis examining the impact of rosiglitazone on cardiac events and mortality compared to alternative therapeutic approaches. The study began with a broad search to locate potential studies for review. The authors screened published phase II, III, and IV trials; the FDA website; and the drug manufacturer’s clinical-trial registry for applicable data relating to rosiglitazone use. When the initial search was complete, the studies were further categorized by pre-stated inclusion criteria. Meta-analysis inclusion criteria were simple: studies had to include rosiglitazone and a randomized comparator group treated with either another drug or placebo, study arms had to show similar length of treatment, and all groups had to have received more than 24 weeks of exposure to the study drugs. The studies had to contain outcome data of interest including the rate of myocardial infarction (MI) or death from all CV causes. Out of 116 studies surveyed by the authors, 42 met their inclusion criteria and were included in the meta-analysis. Of the studies they included, 23 had durations of 26 weeks or less, and only five studies followed patients for more than a year. Until this point, the study’s authors were following a path similar to that of any reviewer interested in CV outcomes, examining the results of these 42 studies and comparing them qualitatively. Quantitatively combining the data, however, required the authors to make choices about the studies they could merge and the statistical methods they should apply for analysis. Those decisions greatly influenced the results that were reported.<br />
<br />
<b>What was found?</b><br />
<br />
When the studies were combined, the meta-analysis contained data from 15,565 patients in the rosiglitazone group and 12,282 patients as comparators. Analyzing their data, the authors chose one particular statistical method (the Peto odds ratio method, a fixed-effect statistical approach), which calculates the odds of events occurring where the outcomes of interest are rare and small in number. In comparing rosiglitazone with a “control” group that included other drugs or placebo, the authors reported odds ratios of 1.43 (95% CI, 1.03-1.98; P=0.03) and 1.64 (95% CI, 0.98-2.74; P=0.06) for MI and death from CV causes, respectively. In other words, the odds of an MI or death from a CV cause are higher for rosiglitazone patients than for patients on other therapies or placebo. The authors reported that rosiglitazone was significantly associated with an increase in the risk of MI and had borderline significance in increasing the risk of death from all CV causes. These findings appeared online on the same day that the FDA issued a safety alert regarding rosiglitazone. Discussion of the meta-analysis was immediately featured prominently in the news media. By December 2007, prescription claims for the drug at retail pharmacies had fallen by more than 50%.<br />
<br />
As diabetic patients and their clinicians reacted to the news, a methodologic debate also ensued. This discussion included statistical issues pertaining to the conduct of the analysis, its implications for clinical care, and finally the FDA and drug manufacturer’s roles in overseeing and regulating rosiglitazone. Concern regarding treatment continues today among patients with diabetes and in the medical community.<br />
<br />
<b>Was this the right answer?</b><br />
<br />
Should the studies have been combined? Commentators faulted the authors for including several studies that were not originally intended to investigate diabetes, and for combining both placebo and drug therapy data into one comparator arm. Some critics noted that despite the stated inclusion criteria, some data were derived from studies where the rosiglitazone arm was allowed a longer follow-up than the comparator arm. By failing to account for this longer follow-up period, commentators felt that the authors may have overestimated the effect of rosiglitazone on CV outcomes. Many reviewers were concerned that this meta-analysis excluded trials in which no patients suffered an MI or died from CV causes – the outcomes of greatest interest. Some reviewers also noted that the exclusion of zero-event trials from the pooled dataset not only gave an incomplete picture of the impact of rosiglitazone but could have increased the odds ratio estimate. In general, the pooled dataset was criticized by many for being a faulty microcosm of the information available regarding rosiglitazone.<br />
<br />
It is essential that a meta-analysis be based on similarity in the data sources. If studies differ in important areas such as the patient populations, interventions, or outcomes, combining their data may not be suitable. The researchers accepted studies and populations that were clinically heterogeneous, yet pooled them as if they were not. The study reported that the results were combined from a number of trials that were not initially intended to investigate CV outcomes. Furthermore, the available data did not allow for time-to-event analysis, an essential tool in comparing the impact of alternative treatment options. Reviewers considered the data to be insufficiently homogeneous, and the line of cause and effect to be murkier than the authors described.<br />
<br />
<b>Were the statistical methods optimal?</b><br />
<br />
The statistical methods for this meta-analysis also came under significant criticism. The critiques focused on the authors’ use of the Peto method, arguing it was an incorrect choice because data were pooled from both small and very large studies, resulting in a potential overestimation of treatment effect. Other reviewers pointed out that the Peto method should not have been used, as a number of the underlying studies did not have patients assigned equally to rosiglitazone and comparator groups. Finally, critics suggested that the heterogeneity of the included studies required an altogether different set of analytic techniques.<br />
<br />
Demonstrating the sensitivity of the authors’ initial analysis to the inclusion criteria and statistical tests used, a number of researchers reworked the data from this study. One researcher used the same studies but analyzed the data with a more commonly used statistical method (Mantel-Haenszel), and found no significant increase in the relative risk or common odds ratio with MI or CV death. When the pool of studies was expanded to include those originally eliminated because they had zero CV events, the odds ratios for MI and death from CV causes dropped from 1.43 to 1.26 (95% CI, 0.93-1.72) and from 1.64 to 1.14 (95% CI, 0.74-1.74), respectively. Neither of the recalculated odds ratios was significant for MI or CV death. Finally, several newer long-term studies have been published since the Nissen meta-analysis. Incorporating their results with the meta-analysis data showed that rosiglitazone is associated with an increased risk of MI but not of CV death. Thus, the findings from these meta-analyses varied with the methods employed, the studies included, and the addition of later trials.<br />
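The sensitivity to the pooling method can be seen by computing both estimators by hand on the same counts; the sketch below applies the Peto one-step estimator and the Mantel-Haenszel estimator to hypothetical sparse-event 2x2 counts (illustrative numbers only, not the actual rosiglitazone trial data):<br />

```r
# Hypothetical sparse 2x2 counts for three trials (illustrative only):
# a  = events on drug,        b = non-events on drug
# c0 = events on comparator,  d = non-events on comparator
a  <- c(2, 1, 5); b <- c(348, 199, 995)
c0 <- c(1, 0, 3); d <- c(351, 200, 1001)
n1 <- a + b; n2 <- c0 + d; N <- n1 + n2

# Peto one-step odds ratio: exp(sum(O - E) / sum(V)), where E and V
# come from the hypergeometric distribution under the null hypothesis
O <- a
E <- n1 * (a + c0) / N
V <- n1 * n2 * (a + c0) * (b + d) / (N^2 * (N - 1))
peto.or <- exp(sum(O - E) / sum(V))

# Mantel-Haenszel pooled odds ratio
mh.or <- sum(a * d / N) / sum(b * c0 / N)

c(Peto = peto.or, MH = mh.or)  # the two estimates generally differ
```

Note that the zero-event arm in the second hypothetical trial still contributes to both pooled estimates here, whereas it would be dropped entirely by per-study inverse-variance weighting, which is one reason the method choice matters for rare outcomes.<br />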
<br />
<b>Emerging Data</b><br />
<br />
The controversy surrounding the rosiglitazone meta-analysis authored by Nissen and Wolski forced an unplanned interim analysis of a long-term, randomized trial investigating the CV effects of rosiglitazone among patients with type 2 diabetes. The authors of the RECORD trial noted that even though the follow-up at 3.75 years was shorter than expected, rosiglitazone, when added to standard glucose-lowering therapy, was found to be associated with an increase in the risk of heart failure but was not associated with any increase in death from CV or other causes. Data at the time were found to be insufficient to determine the effect of rosiglitazone on an increase in the risk of MI. The final report of that trial, published in June 2009, confirmed the elevated risk of heart failure in people with type 2 diabetes treated with rosiglitazone in addition to glucose-lowering drugs, but continued to show inconclusive results about the effect of the drug therapy on the risk of MI. Further, the RECORD trial clarified that rosiglitazone does not result in an increased risk of CV morbidity or mortality compared to standard glucose-lowering drugs. Other trials conducted since the publishing of the meta-analysis have corroborated these results, casting further doubt on the findings of the meta-analysis published by Nissen and Wolski.<br />
<br />
<b>Now what?</b><br />
<br />
Some sources suggest that the original Nissen meta-analysis delivered more harm than benefit, and that a well-recognized medical journal may have erred in its process of peer review. Despite this criticism, it is important to note that subsequent publications support the risk of adverse CV events associated with rosiglitazone, although rosiglitazone use does not appear to increase deaths. These results and emerging data point to the need for further rigorous research to clarify the benefits and risks of rosiglitazone on a variety of outcomes, and the importance of directing the drug to the population that will maximally benefit from its use.<br />
<br />
<b>Lessons Learned from This Case Study</b><br />
<br />
Results from initial randomized trials that seem definitive at one time may not be conclusive, as further trials may emerge to clarify, redirect, or negate previously accepted results. A meta-analysis of those trials can lead to varying results based upon the timing of the analysis and the choices made in its performance.<br />
<br />
<b>Meta-Analysis: Tips for CER Practitioners</b><br />
<br />
*The results of a meta-analysis are highly dependent on the studies included (and excluded). Are these criteria properly defined and relevant to the purposes of the meta-analysis? Were the combined studies sufficiently similar? Can results from this cohort be generalized to other populations of interest?<br />
<br />
*The statistical methodology can impact study results. Have there been reviews critiquing the methods used in the meta-analysis?<br />
<br />
*A variety of statistical tests should be considered, and perhaps reported, in the analysis of results. Do the authors mention their rationale in choosing a statistical method? Do they show the stability of their results across a spectrum of analytical methods?<br />
<br />
*Nothing is permanent. Emerging data may change the playing field, and meta- analysis results are only as good as the data and statistics from which they are derived.<br />
<br />
===Case-Study 3: The Nurses’ Health Study===<br />
<br />
<b>An observational study</b><br />
<br />
An observational study is a very common type of research design in which the effects of a treatment or condition are studied without formally randomizing patients in an experimental design. Such studies can be done prospectively, wherein data are collected about a group of patients going forward in time; or retrospectively, in which the researcher looks into the past, mining existing databases for data that have already been collected. The latter are frequently performed by using an electronic database that contains, for example, administrative, “billing,” or claims data. Less commonly, observational research uses electronic health records, which have greater clinical information that more closely resembles the data collected in an RCT. Observational studies often take place in “real-world” environments, which allow researchers to collect data for a wide array of outcomes. Patients are not randomized in these studies, but the findings can be used to generate hypotheses for investigation in a more constrained experimental setting. Perhaps the best known observational study is the “Framingham study,” which collected demographic and health data for a group of individuals over many years (and continues to do so) and has provided an understanding of the key risk factors for heart disease and stroke.<br />
<br />
Observational studies present many advantages to the comparative effectiveness researcher. The study design can provide a unique glimpse of the use of a health care intervention in the “real world,” an essential step in gauging the gap between efficacy (can a treatment work in a controlled setting?) and effectiveness (does the treatment work in a real-life situation?). Furthermore, observational studies can be conducted at low cost, particularly if they involve the secondary analysis of existing data sources. CER often uses administrative databases, which are based upon the billing data submitted by providers during routine care. These databases typically have limited clinical information, may have errors in them, and generally do not undergo auditing.<br />
<br />
The uncontrolled nature of observational studies leaves them subject to bias and confounding. For example, doctors may prescribe a new medication only for the sickest patients. Comparing these outcomes (without careful statistical adjustment) with those from less ill patients receiving alternative treatment may lead to misleading results. Observational studies can identify important associations but cannot prove cause and effect. These studies can generate hypotheses that may require RCTs for fuller demonstration of those relationships. Secondary analysis can also be problematic if researchers overwork datasets by doing multiple exploratory analyses (e.g., data-dredging): the more we look, the more we find, even if those findings are merely statistical aberrations. Unfortunately, the growing need for CER and the wide availability of administrative databases may lead to a proliferation of poor-quality research with inaccurate findings.<br />
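Confounding by indication of the kind described above can be demonstrated on simulated data (entirely artificial, constructed only for illustration): when sicker patients preferentially receive the new drug, the crude comparison is misleading and adjustment is needed to recover the drug’s effect:<br />

```r
set.seed(1)
n <- 5000

# Artificial scenario: severity drives both treatment choice and outcome
severity <- rbinom(n, 1, 0.5)                            # 1 = sicker patient
new.drug <- rbinom(n, 1, ifelse(severity == 1, 0.8, 0.2))
outcome  <- rbinom(n, 1, plogis(-2 + 1.5 * severity - 0.3 * new.drug))

# Crude odds ratio: confounded by severity, so the drug tends to look harmful
exp(coef(glm(outcome ~ new.drug, family = binomial))["new.drug"])

# Severity-adjusted odds ratio: closer to the true (protective) effect
exp(coef(glm(outcome ~ new.drug + severity, family = binomial))["new.drug"])
```

In this construction the drug truly lowers the odds of a bad outcome, yet the unadjusted comparison points the other way because drug recipients are disproportionately the sicker patients.<br />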
<br />
In comparative effectiveness research, observational studies are typically considered to be less conclusive than RCTs and meta-analyses. Nonetheless, they can be useful, especially because they examine typical care. Due to lower cost and improvements in health information, observational studies will become increasingly common. Critical assessment of whether the described results are helpful or biased (based upon how the study was performed) is necessary. This case will illustrate several characteristics of the types of studies that will assist in evaluating newly published work.<br />
<br />
<b>Clinical Applications</b><br />
<br />
Cardiovascular diseases (CVD) are the leading cause of death in women older than the age of 50. Epidemiologic evidence suggests that estrogen is a key mediator in the development of CVD. Estrogen is an ovarian hormone whose production decreases as women approach menopause. The steep increase in CVD in women at menopause and older and in women who have had hysterectomies further supports a relationship between estrogen and CVD. Building on this evidence of biologic plausibility, epidemiological and observational studies suggested that estrogen replacement therapy (a form of <b>hormone replacement therapy</b>, or HRT) had positive effects on the risk of CVD in postmenopausal women (albeit with some negative effects in its potential to increase the risk for breast cancer and stroke). Based on these findings, in the 1980s and 1990s HRT was routinely employed to treat menopausal symptoms and serve as prophylaxis against CVD.<br />
<br />
<b>What was done?</b><br />
<br />
The Nurses’ Health Study (NHS) began collecting data in 1976. In the study, researchers intended to examine a broad range of health effects in women over a long period of time, and a key goal was to clarify the role of HRT in heart disease. The cohort (i.e., the group being followed) included married registered nurses aged 30-55 in 1976 who lived in the 11 most populous states. To collect data, the researchers mailed the study participants a survey every 2 years that asked questions about topics such as smoking, hormone use, menopausal status, and less frequently, diet. Data were collected for key end points that included MI, coronary-artery bypass grafting or angioplasty, stroke, total CVD mortality, and deaths from all causes.<br />
<br />
<b>What was found?</b><br />
<br />
At a 10-year follow-up point, the NHS had a study pool of 48,470 women. The researchers found that estrogen use (alone, without progestin) in postmenopausal women was associated with a reduction in the incidence of CVD as well as in CVD mortality compared to non-users. Later, estrogen-progestin combination therapy was shown to be even more cardioprotective than estrogen monotherapy, and lower doses of estrogen replacement therapy were found to deliver equal cardioprotection and lower the risk for adverse events. NHS researchers were alert to the potential for bias in observational studies. Adjustment for risk factors such as age (a typical practice to eliminate confounding) did not change the reported findings.<br />
<br />
<b>Was this the right answer?</b><br />
<br />
The NHS was not unique in reporting the benefits associated with HRT; other observational studies corroborated the NHS findings. A secondary retrospective data analysis of the UK primary care electronic medical record database, for example, also showed the protective effect associated with HRT use. Researchers were aware of the fundamental limitations of observational studies, particularly with regard to selection bias. They and practicing clinicians were also aware of the potential negative health effects of HRT, which had to be constantly weighed against the potential cardioprotective benefits in deciding a patient’s course of treatment. As a large section of the population could experience the health effects of HRT, researchers began planning RCTs to verify the promising observational study results. It was highly anticipated that those RCTs would corroborate the belief that estrogen replacement can reduce CVD risk.<br />
<br />
<b>Randomized Controlled Trial: The Women’s Health Initiative</b><br />
<br />
The Women’s Health Initiative (WHI) was a major study established by the National Institutes of Health in 1992 to assess a broad range of health effects in postmenopausal women. The trial was intended to follow these women for 8 years, at a cost of millions of dollars in federal funding. Among its many facets, it included an RCT to confirm the results from the observational studies discussed above. To fully investigate earlier findings, the WHI had two subgroups. One subgroup consisted of women with prior hysterectomies; they received estrogen monotherapy. The second group consisted of women who had not undergone hysterectomy; they received estrogen in combination with progestin. The WHI enrolled 27,347 women in their HRT investigation: 10,739 in the estrogen-alone arm and 16,608 in the estrogen plus progestin arm. Within each arm, women were randomly assigned to receive either HRT or placebo. All women in the trial were postmenopausal and aged 50-79 years; the mean age was 63.6 years (a fact that would be important in later analysis). Some participants had experienced previous CV events. The primary outcome of both subgroups was coronary heart disease (CHD), as described by nonfatal MI or death due to CHD.<br />
<br />
The estrogen-progestin arm of the WHI was halted after a mean follow-up of 5.2 years, 3 years earlier than expected, as the HRT users in this arm were found to be at increased risk for CHD compared to those who received placebo. The study also noted elevated rates of breast cancer and stroke, among other poor outcomes. The estrogen-alone arm continued for an average follow-up of 6.8 years before being similarly discontinued ahead of schedule. Although this part of the study did not find an increased risk of CHD, it also did not find any cardioprotective effect. Beyond failing to locate any clear CV benefits, the WHI also found real evidence of harm, including increased risk of blood clots, breast cancer and stroke. Initial WHI publications therefore recommended against HRT being prescribed for the secondary prevention of CVD.<br />
<br />
<b>What Next?</b><br />
<br />
Scientists, and the clinicians who relied on their data for guidance in treating patients, were faced with conflicting data: epidemiological and observational studies suggested that HRT was cardioprotective, while the higher-quality evidence from RCTs strongly suggested the opposite. Clinicians primarily followed the WHI results, so prescriptions for HRT in postmenopausal women quickly declined. Meanwhile, researchers began to analyze the studies for potential discrepancies, and found that the women being followed in the NHS and the WHI differed in several important characteristics.<br />
<br />
First, the WHI population was older than the NHS cohort, and many had entered menopause at least 10 years before they enrolled in the RCT. Thus, the WHI enrollees experienced a long duration from the onset of menopause to the commencement of HRT. At the same time, many in the NHS population were closer to the onset of menopause and were still displaying hormonal symptoms when they began HRT. Second, although the NHS researchers adjusted the data for various confounding effects, their results could still have been subject to bias. In general, the NHS cohort was more highly educated and of a higher socioeconomic status than the WHI participants, and therefore more likely to see a physician regularly. The NHS women were also leaner and generally healthier than their RCT counterparts, and had been selected for their evident lack of pre-existing CV conditions. This selection bias in the NHS enrollment may have led to a “healthy woman” effect that in turn led to an overestimation of the benefits of therapy in the observational study. Third, researchers noted that dosing differences between the two study types may have contributed to the divergent results. The NHS reported beneficial results following low-dose estrogen therapy. The WHI, meanwhile, used a higher estrogen dose, exposing women to a larger dosage of hormones and increasing their risk for adverse events. The increased risk profile of the WHI women (e.g., older, more comorbidities, higher estrogen dose) could have contributed to the evidence of harm seen in the WHI results.<br />
<br />
<b>Emerging Data</b><br />
<br />
In addition to identifying the inherent differences between the two study populations, researchers began a secondary analysis of the NHS and WHI trials. NHS researchers reported that women who began HRT close to the onset of menopause had a significantly reduced risk of CHD. In the subgroups of women who were older and had a duration after menopause similar to that of the WHI women, they found no significant relationship between HRT and CHD. Also, the WHI study further stratified these results by age, and found that women who began HRT close to their onset of menopause experienced some cardioprotection, while women who were further from the onset of menopause had a slightly elevated risk for CHD.<br />
<br />
Secondary analysis of both studies was therefore necessary to show that age and a short duration from the onset of menopause are crucial to HRT success as a cardioprotective agent. Neither study type provided “truth”; or rather, both studies provided “truth” if viewed carefully (i.e., both produced valid and important results). The differences seen in the studies were rooted in the timing of HRT and the populations being studied.<br />
<br />
<b>Lessons Learned From This Case Study</b><br />
<br />
Although RCTs are given a higher evidence grade, observational studies provide important clinical insights. In this example, the study populations differed. For policymakers and clinicians, it is crucial to examine whether the CER was based upon patients similar to those being considered. Any study with a dissimilar population may provide non-relevant results. Thus, readers of CER need to carefully examine the generalizability of the findings being reported.<br />
<br />
==Appendix==<br />
<br />
General Classification and Regression Tree (CART) data-analysis steps using the R package <b>rpart</b>.<br />
<br />
===Growing the Tree===<br />
<br />
# To grow a tree, use<br />
rpart(formula, data=, method=, control=), where<br />
formula is in the format outcome ~ predictor1+predictor2+...<br />
data= specifies the data frame<br />
method= "class" for a classification tree, use "anova" for a regression tree<br />
control= optional parameters for controlling tree growth. For example, control=rpart.control(minsplit=30, cp=0.001) requires that the minimum number of observations in a node be 30 before attempting a split and that a split must decrease the overall lack of fit by a factor of 0.001 (cost complexity factor) before being attempted.<br />
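For readers working in Python, the same tree-growing step can be sketched with scikit-learn, whose <code>min_samples_split</code> and <code>ccp_alpha</code> parameters play roles loosely analogous to rpart's <code>minsplit</code> and <code>cp</code>. This is an illustrative analogue (the iris data stand in for an arbitrary data frame), not part of the rpart package:<br />

```python
# Illustrative Python analogue of rpart(formula, data=, method="class", control=...)
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)

# min_samples_split ~ rpart's minsplit; ccp_alpha ~ rpart's cp (cost-complexity penalty)
fit = DecisionTreeClassifier(min_samples_split=30, ccp_alpha=0.001, random_state=0)
fit.fit(X, y)
print(fit.get_depth(), fit.get_n_leaves())
```

As with rpart, raising the minimum split size or the complexity penalty yields a smaller, more conservative tree.<br />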
<br />
===Examining Results===<br />
<br />
# These functions help with examining the results.<br />
printcp(fit) display complexity parameter (cp) table<br />
plotcp(fit) plot cross-validation results<br />
rsq.rpart(fit) plot approximate R-squared and relative error for different splits (2 plots). labels are only appropriate for the "anova" method.<br />
print(fit) print results<br />
summary(fit) detailed results including surrogate splits<br />
plot(fit) plot decision tree<br />
text(fit) label the decision tree plot<br />
post(fit, file=) create postscript plot of decision tree<br />
# In trees created by rpart(), move to the LEFT branch when the stated condition is true.<br />
<br />
===Pruning Trees===<br />
<br />
# In general, trees should be pruned back to avoid overfitting the data. The tree size should minimize the cross-validated error (the xerror column printed by printcp()). Pruning the tree is accomplished by:<br />
prune(fit, cp= )<br />
# Use printcp() to examine the cross-validation error results, select the complexity parameter (CP) associated with the minimum error, and insert that CP into the prune() function. This (automatically selecting the complexity parameter associated with the smallest cross-validated error) can be done succinctly by:<br />
pruned_fit <- prune(fit, cp= fit$\$$cptable[which.min(fit$\$$cptable[,"xerror"]),"CP"])<br />
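The same prune-by-minimum-cross-validated-error pattern can be sketched in Python with scikit-learn's cost-complexity pruning path. This is an illustrative analogue of the rpart workflow, not part of rpart; the breast-cancer data stand in for an arbitrary data set:<br />

```python
# Illustrative Python analogue of choosing rpart's cp by minimum cross-validated error
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)

# Candidate alphas play the role of the rows of rpart's cptable
path = DecisionTreeClassifier(random_state=0).cost_complexity_pruning_path(X, y)
alphas = [max(a, 0.0) for a in path.ccp_alphas[:-1]]  # drop the alpha pruning to a stump

# Cross-validated error for each alpha, like the xerror column of printcp(fit)
cv_err = [1 - cross_val_score(DecisionTreeClassifier(ccp_alpha=a, random_state=0),
                              X, y, cv=5).mean() for a in alphas]

best_alpha = alphas[int(np.argmin(cv_err))]  # analogue of which.min(xerror)
pruned = DecisionTreeClassifier(ccp_alpha=best_alpha, random_state=0).fit(X, y)
```

The pruned tree never has more leaves than the fully grown tree, and the chosen penalty is the one with the smallest estimated out-of-sample error.<br />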
<br />
===Complete Dataset for N-of-1 Example===<br />
[[SMHS_MethodsHeterogeneity_CER_Nof1|This N-of-1 Dataset]] includes an example.<br />
<br />
===Footnotes===<br />
<br />
*<sup>13</sup> Based on 2009 NPC report, www.npcnow.org/publication/demystifying-comparative-effectiveness-research-case-study-learning-guide <br />
*<sup>14</sup> http://www.cancer.gov/cancertopics/druginfo/fda-cetuximab<br />
<br />
===[[SMHS_MethodsHeterogeneity|Back to the Heterogeneity of Treatment Effects, Case-Studies of Comparative Effectiveness Research section]]===<br />
<br />
<hr><br />
* SOCR Home page: http://www.socr.umich.edu<br />
<br />
{{translate|pageName=http://wiki.socr.umich.edu/index.php/SMHS_MethodsHeterogeneity_CER}}</div>
<hr />
<div>==[[SMHS_MethodsHeterogeneity| Methods for Studying Heterogeneity of Treatment Effects, Case-Studies of Comparative Effectiveness Research]] - Comparative Effectiveness Research: Case Studies <sup>13</sup> (CER) ==<br />
<br />
===Observational Studies: Tips for the CER Practitioners===<br />
<br />
*Different study types can offer different understandings; neither should be discounted without closer examination.<br />
<br />
*RCTs provide an accurate understanding of the effect of a particular intervention in a well-defined patient group under “controlled” circumstances.<br />
<br />
*Observational studies provide an understanding of real-world care and its impact, but can be biased due to uncontrolled factors.<br />
<br />
*Observational studies differ in the types of databases used. These databases may lack clinical detail and contain incomplete or inaccurate data.<br />
<br />
*Before accepting the findings from an observational study, consider whether confounding factors may have influenced the results.<br />
<br />
*In this scenario, subgroup analysis was vital in clarifying both study designs; what is true for the many (e.g., overall, estrogen appeared to be detrimental) may not be true for the few (e.g., that for the younger post-menopausal woman, the benefits were greater and the harms less frequent).<br />
<br />
*Carefully examine the generalizability of the study. Do the study’s patients and intervention match those under consideration?<br />
<br />
*Observational studies can identify associations but cannot prove cause-and-effect relationships.<br />
<br />
===Case-Study 1: The Cetuximab Study<sup>14</sup>===<br />
<br />
<b>What was done and what was found?</b><br />
<br />
Cetuximab, an anti-epidermal growth factor receptor (EGFR) agent, has recently been added to the therapeutic armamentarium. Two important RCTs examined its impact in patients with metastatic colorectal cancer (mCRC). In the first, researchers at 56 centers in 11 European countries investigated the outcomes associated with cetuximab therapy in 329 mCRC patients who experienced disease progression either on irinotecan therapy or within 3 months thereafter. The study reported that the group on a combination of irinotecan and cetuximab had a significantly higher rate of overall response to treatment (the primary endpoint) than the group on cetuximab alone: 22.9% (95% CI, 17.5-29.1%) vs. 10.8% (95% CI, 5.7-18.1%) (P=0.007), respectively. Similarly, the median time to progression was significantly longer in the combination therapy group (4.1 vs. 1.5 months, P<0.001). As these patients had already progressed on irinotecan prior to the study, any response was viewed as positive. Safety between the two treatment arms was similar: approximately 80% of patients in each arm experienced a rash. Grade 3 or 4 (the more severe) toxic effects on the skin were slightly more frequent in the combination-therapy group than with cetuximab monotherapy, observed in 9.4% and 5.2% of participants, respectively. Other side effects, such as diarrhea and neutropenia observed in the combination-therapy arm, were considered to be in the range expected for irinotecan alone. Data from this study demonstrated the efficacy and safety of cetuximab and were instrumental in the FDA’s 2004 approval.<br />
<br />
A second RCT (2007) examined 572 patients and suggested efficacy of cetuximab in the treatment of mCRC. This study was a randomized, non-blinded, controlled trial that examined cetuximab monotherapy plus best supportive care compared to best supportive care alone in patients who had received and failed prior chemotherapy regimens. It reported that median overall survival (the primary endpoint) was significantly higher in patients receiving cetuximab plus best supportive care compared to best supportive care alone (6.1 vs. 4.6 months, respectively) (hazard ratio for death=0.77; 95% CI: 0.64-0.92, P=0.005). This RCT described a greater incidence of adverse events in the cetuximab plus best supportive care group compared to best supportive care alone, including (most significantly) rash, as well as edema, fatigue, nausea and vomiting.<br />
<br />
<b>Was this the right answer?</b><br />
<br />
These RCTs had fairly broad enrollment criteria and the cetuximab benefits were modest. Emerging scientific theories raised the possibility that genetically defined population subsets might experience a greater-than-average treatment benefit. One such area of inquiry entailed examining “biomarkers,” or genetic indicators of a patient’s greater response to therapy. Even as the above RCTs were being conducted, data emerged showing the importance of the KRAS gene.<br />
<br />
<b>Emerging Data</b><br />
<br />
Based on the emerging biochemical evidence that the EGFR treatment mechanism of cetuximab was even more finely detailed than previously understood, the study authors of the 2007 RCT undertook a retrospective subgroup analysis using tumor tissue samples preserved from their initial study. Following laboratory analysis, all viable tissue samples were classified as having a wild-type (non-mutated) or a mutated KRAS gene. Instead of the previous two study arms (cetuximab plus best supportive care vs. best supportive care alone), there were four arms for this new analysis: each of the two original study arms was further divided by wild-type vs. mutated KRAS status. Laboratory evaluation determined that a KRAS mutation was present in 40.9% of patients in the cetuximab plus best supportive care group and in 42.3% of the best supportive care alone group. The efficacy of cetuximab was found to be significantly correlated with KRAS status: in patients with wild-type (non-mutated) KRAS genes, cetuximab plus best supportive care compared to best supportive care alone improved overall survival (median 9.5 vs. 4.8 months, respectively; hazard ratio for death=0.55; 95% CI, 0.41-0.74, P<0.001) and progression-free survival (median 3.7 vs. 1.9 months, respectively; hazard ratio for progression or death=0.40; 95% CI, 0.30-0.54, P<0.001). Meanwhile, in patients with mutated KRAS tumors, the authors found no significant difference in outcome between cetuximab plus best supportive care vs. best supportive care alone.<br />
<br />
<b>What next?</b><br />
<br />
Based on these and similar results from other studies, the FDA narrowed its product labeling in July 2009 to indicate that cetuximab is not recommended for mCRC patients with mutated KRAS tumors. This distinction reduces the relevant population by approximately 40%. Similarly, the American Society of Clinical Oncology released a provisional clinical recommendation that all mCRC patients have their tumors tested for KRAS status before receiving anti-EGFR therapy. The benefits of targeted treatment are many. Patients who previously underwent cetuximab therapy without knowing their genetic predisposition would no longer have to be exposed to the drug’s toxic effects if unnecessary, as the efficacy of cetuximab is markedly higher in the genetically defined appropriate patients. In a less-uncertain environment, clinicians can be more confident in advocating a course of action in their care of patients. And finally, knowledge that targeted therapy is possible suggests the potential for further innovation in treatment options. In fact, research continues to demonstrate options for targeted cetuximab treatment of mCRC at an even finer scale than seen with KRAS; and similar genetic targeting is being investigated, and advocated, in other cancer types.<br />
<br />
<b>Lessons Learned From This Case Study</b><br />
<br />
Although RCTs are generally viewed as the gold standard, results of one or even a series of trials may not accurately reflect the benefits experienced by an individual patient. This case study suggests that cetuximab initially appeared to have rather modest clinical benefits. However, new information that became available, and the subsequent genetic subgroup assessments, led to very different conclusions. Clinicians should be aware that current knowledge is likely to evolve, and any decisions about patient care should be carefully considered with that sense of uncertainty in mind. As in this case study, subgroup analyses (e.g., genetic subtypes) need a theoretical rationale. Ideally, the analyses should be determined at the time of original RCT design and should not just occur as explorations of the subsequent data. When improperly employed, post hoc analyses may lead to incorrect patient care conclusions.<br />
<br />
<b>RCTs Tips for the CER Practitioners</b><br />
<br />
*RCTs can determine whether an intervention can provide benefit in a very controlled environment.<br />
<br />
*The controlled nature of an RCT may limit its generalizability to a broader population.<br />
<br />
*No results are permanent; advances in scientific knowledge and understanding can influence how we view the effectiveness (or safety) of a therapeutic intervention.<br />
<br />
*Targeted therapy illuminated by carefully thought out subgroup analyses can improve the efficacious and safe use of an intervention.<br />
<br />
===Case-Study 2: The Rosiglitazone Study===<br />
<br />
<b>Meta-analysis</b><br />
<br />
Often the results for the same intervention differ across clinical trials and it may not be clear whether one therapy provides more benefit than another. As CER increases and more studies are conducted, clinicians and policymakers are more likely to encounter this scenario. In a systematic review, a researcher identifies similar studies and displays their results in a table, enabling qualitative comparisons across the studies. With a meta-analysis, the data from included studies are statistically combined into a single “result.” Merging the data from a number of studies increases the effective sample size of the investigation, providing a statistically stronger conclusion about the body of research. By so doing, investigators may detect low frequency events and demonstrate more subtle distinctions between therapeutic alternatives.<br />
<br />
When studies have been properly identified and combined, the meta-analysis produces a summary estimate of the findings and a confidence interval that can serve as a benchmark in medical opinion and practice. However, when done incorrectly, the quantitative and statistical analysis can create impressive “numbers” but biased results. The following are important criteria for properly conducted meta-analyses:<br />
<br />
1. Carefully defining unbiased inclusion or exclusion criteria for study selection<br />
<br />
2. Including only those studies that have similar design elements, such as patient population, drug regimen, outcomes being assessed, and time-frame<br />
<br />
3. Applying correct statistical methods to combine and analyze the data<br />
<br />
Reporting this information is essential for the reader to determine whether the data were suitable to combine, and if the meta-analysis draws unbiased conclusions. Meta-analyses of randomized clinical trials are considered to be the highest level of medical evidence as they are based upon a synthesis of rigorously controlled trials that systematically reduce bias and confounding. This technique is useful in summarizing available evidence and will likely become more common in the era of publicly funded comparative effectiveness research. The following case study will examine several key principles that will be useful as the reader encounters these publications.<br />
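As a concrete sketch of how a fixed-effect meta-analysis statistically combines the results of several studies into a single summary estimate with a confidence interval, the following Python fragment pools log odds ratios by inverse-variance weighting. The 2x2 counts are hypothetical, chosen purely for illustration:<br />

```python
# Minimal fixed-effect (inverse-variance) pooling of study odds ratios
import math

# Each study: (events_treatment, n_treatment, events_control, n_control) -- hypothetical
studies = [(12, 200, 8, 200), (30, 500, 22, 500), (5, 120, 4, 118)]

num = den = 0.0
for a, n1, c, n0 in studies:
    b, d = n1 - a, n0 - c
    log_or = math.log((a * d) / (b * c))   # per-study log odds ratio
    var = 1/a + 1/b + 1/c + 1/d            # variance of the log odds ratio
    w = 1 / var                            # inverse-variance weight
    num += w * log_or
    den += w

pooled_or = math.exp(num / den)            # summary estimate
se = math.sqrt(1 / den)
ci = (math.exp(num/den - 1.96*se), math.exp(num/den + 1.96*se))
```

Larger studies carry larger weights, which is why merging several trials tightens the confidence interval around the summary estimate, and why the choice of which studies to include drives the result.<br />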
<br />
<b>Clinical Application</b><br />
<br />
Heart disease is the leading cause of mortality in the United States, resulting in approximately 20% of all deaths. Diabetics are particularly susceptible to heart disease, with more than 65% of deaths attributable to it. The nonfatal complications of diabetes are wide-ranging and include kidney failure, nerve damage, amputation, stroke and blindness, among other outcomes. In 2007, the total estimated cost of diabetes in the United States was $174B; $116B was derived from direct medical expenditures and the rest from the indirect cost of lost productivity due to the disease. With such serious health effects and heavy direct and indirect costs tied to diabetes, proper disease management is critical. Historically, diabetes treatment has focused on strict blood sugar control, assuming that this goal not only targets diabetes but also reduces other serious comorbidities of the disease.<br />
<br />
Anti-diabetic agents have long been associated with key questions as to their benefits/risks in the treatment of diabetes. The sulfonylurea tolbutamide, a first generation anti-diabetic drug, was found in a landmark study in the 1970s to significantly increase the CV mortality rate compared to patients not on this agent. Further analysis by external parties concluded that the methods employed in this trial were significantly flawed (e.g., use of an “arbitrary” definition of diabetes status, heterogeneous baseline characteristics of the populations studied, and incorrect statistical methods). Since these early studies, CV concerns continue to be an issue with selected oral hypoglycemic agents that have subsequently entered the marketplace.<br />
<br />
A class of drugs, thiazolidinedione (TZD), was approved in the late 1990s, as a solution to the problems associated with the older generation of sulfonylureas. Rosiglitazone, a member of the TZD class, was approved by the FDA in 1999 and was widely prescribed for the treatment of type-2 diabetes. A number of RCTs supported the benefit of rosiglitazone as an important new oral antidiabetic agent. However, safety concerns developed as the FDA received reports of adverse cardiac events potentially associated with rosiglitazone. It was in this setting that a meta-analysis by Nissen and Wolski was published in the New England Journal of Medicine in June 2007.<br />
<br />
<b>What was done?</b><br />
<br />
Nissen and Wolski conducted a meta-analysis examining the impact of rosiglitazone on cardiac events and mortality compared to alternative therapeutic approaches. The study began with a broad search to locate potential studies for review. The authors screened published phase II, III, and IV trials; the FDA website; and the drug manufacturer’s clinical-trial registry for applicable data relating to rosiglitazone use. When the initial search was complete, the studies were further categorized by pre-stated inclusion criteria. Meta-analysis inclusion criteria were simple: studies had to include rosiglitazone and a randomized comparator group treated with either another drug or placebo, study arms had to show similar length of treatment, and all groups had to have received more than 24 weeks of exposure to the study drugs. The studies had to contain outcome data of interest including the rate of myocardial infarction (MI) or death from all CV causes. Out of 116 studies surveyed by the authors, 42 met their inclusion criteria and were included in the meta-analysis. Of the studies they included, 23 had durations of 26 weeks or less, and only five studies followed patients for more than a year. Until this point, the study’s authors were following a path similar to that of any reviewer interested in CV outcomes, examining the results of these 42 studies and comparing them qualitatively. Quantitatively combining the data, however, required the authors to make choices about the studies they could merge and the statistical methods they should apply for analysis. Those decisions greatly influenced the results that were reported.<br />
<br />
<b>What was found?</b><br />
<br />
When the studies were combined, the meta-analysis contained data from 15,565 patients in the rosiglitazone group and 12,282 patients as comparators. Analyzing their data, the authors chose one particular statistical method (the Peto odds ratio method, a fixed-effect statistical approach), which calculates the odds of events occurring where the outcomes of interest are rare and small in number. In comparing rosiglitazone with a “control” group that included other drugs or placebo, the authors reported odds ratios of 1.43 (95% CI, 1.03-1.98; P=0.03) and 1.64 (95% CI,<br />
0.98-2.74; P=0.06) for MI and death from CV causes, respectively. In other words, the odds of an MI or death from a CV cause are higher for rosiglitazone patients than for patients on other therapies or placebo. The authors reported that rosiglitazone was significantly associated with an increase in the risk of MI and had borderline significance in increasing the risk of death from all CV causes. These findings appeared online on the same day that the FDA issued a safety alert regarding rosiglitazone. Discussion of the meta-analysis was immediately featured prominently in the news media. By December 2007, prescription claims for the drug at retail pharmacies had fallen by more than 50%.<br />
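The Peto method used above is a one-step fixed-effect estimator suited to rare events; for a single 2x2 table it compares observed events in the treated arm against their hypergeometric expectation. A minimal sketch, with hypothetical counts (not the actual rosiglitazone data):<br />

```python
# Peto one-step odds ratio for a single 2x2 table (hypothetical counts)
import math

a, n1 = 9, 1000    # events / total in the treated arm
c, n0 = 5, 1000    # events / total in the comparator arm
N = n1 + n0
m = a + c          # total events across both arms

E = m * n1 / N                                  # expected treated-arm events
V = m * (N - m) * n1 * n0 / (N**2 * (N - 1))    # hypergeometric variance
peto_or = math.exp((a - E) / V)                 # Peto odds ratio
```

With rare, roughly balanced events the Peto estimate tracks the crude odds ratio closely; it is when arms are unbalanced or events are not rare that the method can overestimate the effect, which is one of the criticisms discussed below.<br />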
<br />
As diabetic patients and their clinicians reacted to the news, a methodologic debate also ensued. This discussion included statistical issues pertaining to the conduct of the analysis, its implications for clinical care, and finally the FDA’s and drug manufacturer’s roles in overseeing and regulating rosiglitazone. The concern among patients with diabetes regarding treatment continues in the medical community today.<br />
<br />
<b>Was this the right answer?</b><br />
<br />
Should the studies have been combined? Commentators faulted the authors for including several studies that were not originally intended to investigate diabetes, and for combining both placebo and drug therapy data into one comparator arm. Some critics noted that despite the stated inclusion criteria, some data were derived from studies where the rosiglitazone arm was allowed a longer follow-up than the comparator arm. By failing to account for this longer follow-up period, commentators felt that the authors may have overestimated the effect of rosiglitazone on CV outcomes. Many reviewers were concerned that this meta-analysis excluded trials in which no patients suffered an MI or died from CV causes – the outcomes of greatest interest. Some reviewers also noted that the exclusion of zero-event trials from the pooled dataset not only gave an incomplete picture of the impact of rosiglitazone but could have increased the odds ratio estimate. In general, the pooled dataset was criticized by many for being a faulty microcosm of the information available regarding rosiglitazone.<br />
<br />
It is essential that a meta-analysis be based on similarity in the data sources. If studies differ in important areas such as the patient populations, interventions, or outcomes, combining their data may not be suitable. The researchers accepted studies and populations that were clinically heterogeneous, yet pooled them as if they were not. The study reported that the results were combined from a number of trials that were not initially intended to investigate CV outcomes. Furthermore, the available data did not allow for time-to-event analysis, an essential tool in comparing the impact of alternative treatment options. Reviewers considered the data to be insufficiently homogeneous, and the line of cause and effect to be murkier than the authors described.<br />
<br />
<b>Were the statistical methods optimal?</b><br />
<br />
The statistical methods for this meta-analysis also came under significant criticism. The critiques focused on the authors’ use of the Peto method as an incorrect choice because data were pooled from both small and very large studies, resulting in a potential overestimation of treatment effect. Other reviewers pointed out that the Peto method should not have been used, as a number of the underlying studies did not have patients assigned equally to rosiglitazone and comparator groups. Finally, critics suggested that the heterogeneity of the included studies required an altogether different set of analytic techniques.<br />
<br />
Demonstrating the sensitivity of the authors’ initial analysis to the inclusion criteria and statistical tests used, a number of researchers reworked the data from this study. One researcher used the same studies but analyzed the data with a more commonly used statistical method (Mantel-Haenszel), and found no significant increase in the relative risk or common odds ratio for MI or CV death. When the pool of studies was expanded to include those originally eliminated because they had zero CV events, the odds ratios for MI and death from CV causes dropped from 1.43 to 1.26 (95% CI, 0.93-1.72) and from 1.64 to 1.14 (95% CI, 0.74-1.74), respectively. Neither of the recalculated odds ratios was significant for MI or CV death. Finally, several newer long-term studies have been published since the Nissen meta-analysis. Incorporating their results with the meta-analysis data showed that rosiglitazone is associated with an increased risk of MI but not of CV death. Thus, the findings from these meta-analyses varied with the methods employed, the studies included, and the addition of later trials.<br />
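The Mantel-Haenszel method used in the reanalysis pools stratum-specific 2x2 tables differently from the Peto approach, weighting each study's cross-products by its size. A minimal sketch, with hypothetical counts (not the rosiglitazone trial data):<br />

```python
# Mantel-Haenszel common odds ratio across several 2x2 tables (hypothetical counts)
# Each study: (events_treatment, n_treatment, events_control, n_control)
studies = [(4, 300, 3, 300), (10, 800, 8, 790), (2, 150, 2, 155)]

num = den = 0.0
for a, n1, c, n0 in studies:
    b, d = n1 - a, n0 - c
    N = n1 + n0
    num += a * d / N   # weighted "exposed cases x unexposed non-cases"
    den += b * c / N   # weighted "exposed non-cases x unexposed cases"

mh_or = num / den      # Mantel-Haenszel pooled odds ratio
```

Unlike the Peto method, Mantel-Haenszel weighting does not assume balanced arms, which is one reason reviewers preferred it when the underlying trials allocated patients unequally.<br />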
<br />
<b>Emerging Data</b><br />
<br />
The controversy surrounding the rosiglitazone meta-analysis authored by Nissen and Wolski forced an unplanned interim analysis of a long-term, randomized trial investigating the CV effects of rosiglitazone among patients with type 2 diabetes. The authors of the RECORD trial noted that even though the follow-up at 3.75 years was shorter than expected, rosiglitazone, when added to standard glucose-lowering therapy, was found to be associated with an increase in the risk of heart failure but was not associated with any increase in death from CV or other causes. Data at the time were found to be insufficient to determine the effect of rosiglitazone on an increase in the risk of MI. The final report of that trial, published in June 2009, confirmed the elevated risk of heart failure in people with type 2 diabetes treated with rosiglitazone in addition to glucose-lowering drugs, but continued to show inconclusive results about the effect of the drug therapy on the risk of MI. Further, the RECORD trial clarified that rosiglitazone does not result in an increased risk of CV morbidity or mortality compared to standard glucose-lowering drugs. Other trials conducted since the publishing of the meta-analysis have corroborated these results, casting further doubt on the findings of the meta-analysis published by Nissen and Wolski.<br />
<br />
<b>Now what?</b><br />
<br />
Some sources suggest that the original Nissen meta-analysis delivered more harm than benefit, and that a well-recognized medical journal may have erred in its process of peer review. Despite this criticism, it is important to note that subsequent publications support the risk of adverse CV events associated with rosiglitazone, although rosiglitazone use does not appear to increase deaths. These results and emerging data point to the need for further rigorous research to clarify the benefits and risks of rosiglitazone on a variety of outcomes, and the importance of directing the drug to the population that will maximally benefit from its use.<br />
<br />
<b>Lessons Learned From this Case Study</b><br />
<br />
Results from initial randomized trials that seem definitive at one time may not be conclusive, as further trials may emerge to clarify, redirect, or negate previously accepted results. A meta-analysis of those trials can lead to varying results based upon the timing of the analysis and the choices made in its performance.<br />
<br />
<b>Meta-Analysis: Tips for CER Practitioners</b><br />
<br />
*The results of a meta-analysis are highly dependent on the studies included (and excluded). Are these criteria properly defined and relevant to the purposes of the meta-analysis? Were the combined studies sufficiently similar? Can results from this cohort be generalized to other populations of interest?<br />
<br />
*The statistical methodology can impact study results. Have there been reviews critiquing the methods used in the meta-analysis?<br />
<br />
*A variety of statistical tests should be considered, and perhaps reported, in the analysis of results. Do the authors mention their rationale in choosing a statistical method? Do they show the stability of their results across a spectrum of analytical methods?<br />
<br />
*Nothing is permanent. Emerging data may change the playing field, and meta-analysis results are only as good as the data and statistics from which they are derived.<br />
<br />
===Case-Study 3: The Nurses’ Health Study===<br />
<br />
<b>An observational study</b><br />
<br />
An observational study is a very common type of research design in which the effects of a treatment or condition are studied without formally randomizing patients in an experimental design. Such studies can be done prospectively, wherein data are collected about a group of patients going forward in time; or retrospectively, in which the researcher looks into the past, mining existing databases for data that have already been collected. The latter studies are frequently performed using an electronic database that contains, for example, administrative, “billing,” or claims data. Less commonly, observational research uses electronic health records, which have greater clinical information that more closely resembles the data collected in an RCT. Observational studies often take place in “real-world” environments, which allow researchers to collect data for a wide array of outcomes. Patients are not randomized in these studies, but the findings can be used to generate hypotheses for investigation in a more constrained experimental setting. Perhaps the best known observational study is the “Framingham study,” which collected demographic and health data for a group of individuals over many years (and continues to do so) and has provided an understanding of the key risk factors for heart disease and stroke.<br />
<br />
Observational studies present many advantages to the comparative effectiveness researcher. The study design can provide a unique glimpse of the use of a health care intervention in the “real world,” an essential step in gauging the gap between efficacy (can a treatment work in a controlled setting?) and effectiveness (does the treatment work in a real-life situation?). Furthermore, observational studies can be conducted at low cost, particularly if they involve the secondary analysis of existing data sources. CER often uses administrative databases, which are based upon the billing data submitted by providers during routine care. These databases typically have limited clinical information, may have errors in them, and generally do not undergo auditing.<br />
<br />
The uncontrolled nature of observational studies makes them subject to bias and confounding. For example, doctors may prescribe a new medication only for the sickest patients. Comparing these outcomes (without careful statistical adjustment) with those from less ill patients receiving alternative treatment may lead to misleading results. Observational studies can identify important associations but cannot prove cause and effect. These studies can generate hypotheses that may require RCTs for fuller demonstration of those relationships. Secondary analysis can also be problematic if researchers overwork datasets by doing multiple exploratory analyses (e.g., data-dredging): the more we look, the more we find, even if those findings are merely statistical aberrations. Unfortunately, the growing need for CER and the wide availability of administrative databases may lead to poor-quality research with inaccurate findings.<br />
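<br />
The data-dredging hazard has a simple arithmetic core: with many uncorrected looks at the same data, the chance of at least one spurious “finding” grows quickly. A minimal sketch, assuming independent tests (which real exploratory analyses rarely are):<br />
<br />
```python
def familywise_error(alpha, k):
    """Probability of at least one false positive across k independent
    hypothesis tests, each run at significance level alpha, with no
    multiple-comparison correction applied."""
    return 1 - (1 - alpha) ** k

# Twenty uncorrected exploratory analyses at the usual 0.05 level:
rate = familywise_error(0.05, 20)   # about 0.64 -- a spurious hit is more likely than not
```
This is why “the more we look, the more we find”: without corrections such as Bonferroni, repeated secondary analyses of the same database will eventually produce statistically significant aberrations.<br />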
<br />
In comparative effectiveness research, observational studies are typically considered to be less conclusive than RCTs and meta-analyses. Nonetheless, they can be useful, especially because they examine typical care. Due to lower cost and improvements in health information, observational studies will become increasingly common. Critical assessment of whether the described results are helpful or biased (based upon how the study was performed) is necessary. This case will illustrate several characteristics of the types of studies that will assist in evaluating newly published work. <br />
<br />
<b>Clinical Applications</b><br />
<br />
Cardiovascular diseases (CVD) are the leading cause of death in women older than the age of 50. Epidemiologic evidence suggests that estrogen is a key mediator in the development of CVD. Estrogen is an ovarian hormone whose production decreases as women approach menopause. The steep increase in CVD in women at menopause and older and in women who have had hysterectomies further supports a relationship between estrogen and CVD. Building on this evidence of biologic plausibility, epidemiological and observational studies suggested that estrogen replacement therapy (a form of <b>hormone replacement therapy</b>, or HRT) had positive effects on the risk of CVD in postmenopausal women (albeit with some negative effects in its potential to increase the risk for breast cancer and stroke). Based on these findings, in the 1980s and 1990s HRT was routinely employed to treat menopausal symptoms and serve as prophylaxis against CVD.<br />
<br />
<b>What was done?</b><br />
<br />
The Nurses’ Health Study (NHS) began collecting data in 1976. In the study, researchers intended to examine a broad range of health effects in women over a long period of time, and a key goal was to clarify the role of HRT in heart disease. The cohort (i.e., the group being followed) included married registered nurses aged 30-55 in 1976 who lived in the 11 most populous states. To collect data, the researchers mailed the study participants a survey every 2 years that asked questions about topics such as smoking, hormone use, menopausal status, and less frequently, diet. Data were collected for key end points that included MI, coronary-artery bypass grafting or angioplasty, stroke, total CVD mortality, and deaths from all causes.<br />
<br />
<b>What was found?</b><br />
<br />
At a 10-year follow-up point, the NHS had a study pool of 48,470 women. The researchers found that estrogen use (alone, without progestin) in postmenopausal women was associated with a reduction in the incidence of CVD as well as in CVD mortality compared to non-users. Later, estrogen-progestin combination therapy was shown to be even more cardioprotective than estrogen monotherapy, and lower doses of estrogen replacement therapy were found to deliver equal cardioprotection and lower the risk for adverse events. NHS researchers were alert to the potential for bias in observational studies. Adjustment for risk factors such as age (a typical practice to eliminate confounding) did not change the reported findings.<br />
<br />
<b>Was this the right answer?</b><br />
<br />
The NHS was not unique in reporting the benefits associated with HRT; other observational studies corroborated the NHS findings. A secondary retrospective data analysis of the UK primary care electronic medical record database, for example, also showed the protective effect associated with HRT use. Researchers were aware of the fundamental limitations of observational studies, particularly with regard to selection bias. They and practicing clinicians were also aware of the potential negative health effects of HRT, which had to be constantly weighed against the potential cardioprotective benefits in deciding a patient’s course of treatment. As a large section of the population could experience the health effects of HRT, researchers began planning RCTs to verify the promising observational study results. It was highly anticipated that those RCTs would corroborate the belief that estrogen replacement can reduce CVD risk.<br />
<br />
<b>Randomized Controlled Trial: The Women’s Health Initiative</b><br />
<br />
The Women’s Health Initiative (WHI) was a major study established by the National Institutes of Health in 1992 to assess a broad range of health effects in postmenopausal women. The trial was intended to follow these women for 8 years, at a cost of millions of dollars in federal funding. Among its many facets, it included an RCT to confirm the results from the observational studies discussed above. To fully investigate earlier findings, the WHI had two subgroups. One subgroup consisted of women with prior hysterectomies; they received estrogen monotherapy. The second group consisted of women who had not undergone hysterectomy; they received estrogen in combination with progestin. The WHI enrolled 27,347 women in their HRT investigation: 10,739 in the estrogen-alone arm and 16,608 in the estrogen plus progestin arm. Within each arm, women were randomly assigned to receive either HRT or placebo. All women in the trial were postmenopausal and aged 50-79 years; the mean age was 63.6 years (a fact that would be important in later analysis). Some participants had experienced previous CV events. The primary outcome of both subgroups was coronary heart disease (CHD), as described by nonfatal MI or death due to CHD.<br />
<br />
The estrogen-progestin arm of the WHI was halted after a mean follow-up of 5.2 years, 3 years earlier than expected, as the HRT users in this arm were found to be at increased risk for CHD compared to those who received placebo. The study also noted elevated rates of breast cancer and stroke, among other poor outcomes. The estrogen-alone arm continued for an average follow-up of 6.8 years before being similarly discontinued ahead of schedule. Although this part of the study did not find an increased risk of CHD, it also did not find any cardioprotective effect. Beyond failing to locate any clear CV benefits, the WHI also found real evidence of harm, including increased risk of blood clots, breast cancer and stroke. Initial WHI publications therefore recommended against HRT being prescribed for the secondary prevention of CVD.<br />
<br />
<b>What Next?</b><br />
<br />
Scientists, and the clinicians who relied on their data for guidance in treating patients, were faced with conflicting data: epidemiological and observational studies suggested that HRT was cardioprotective, while the higher-quality evidence from RCTs strongly suggested the opposite. Clinicians primarily followed the WHI results, so prescriptions for HRT in postmenopausal women quickly declined. Meanwhile, researchers began to analyze the studies for potential discrepancies, and found that the women being followed in the NHS and the WHI differed in several important characteristics.<br />
<br />
First, the WHI population was older than the NHS cohort, and many had entered menopause at least 10 years before they enrolled in the RCT. Thus, the WHI enrollees experienced a long duration from the onset of menopause to the commencement of HRT. At the same time, many in the NHS population were closer to the onset of menopause and were still displaying hormonal symptoms when they began HRT. Second, although the NHS researchers adjusted the data for various confounding effects, their results could still have been subject to bias. In general, the NHS cohort was more highly educated and of a higher socioeconomic status than the WHI participants, and therefore more likely to see a physician regularly. The NHS women were also leaner and generally healthier than their RCT counterparts, and had been selected for their evident lack of pre-existing CV conditions. This selection bias in the NHS enrollment may have led to a “healthy woman” effect that in turn led to an overestimation of the benefits of therapy in the observational study. Third, researchers noted that dosing differences between the two study types may have contributed to the divergent results. The NHS reported beneficial results following low-dose estrogen therapy. The WHI, meanwhile, used a higher estrogen dose, exposing women to a larger dosage of hormones and increasing their risk for adverse events. The increased risk profile of the WHI women (e.g., older, more comorbidities, higher estrogen dose) could have contributed to the evidence of harm seen in the WHI results.<br />
<br />
<b>Emerging Data</b><br />
<br />
In addition to identifying the inherent differences between the two study populations, researchers began a secondary analysis of the NHS and WHI trials. NHS researchers reported that women who began HRT close to the onset of menopause had a significantly reduced risk of CHD. In the subgroups of women who were older and had a similar duration after menopause compared with the WHI women, they found no significant relationship between HRT and CHD. Also, the WHI study further stratified these results by age, and found that women who began HRT close to their onset of menopause experienced some cardioprotection, while women who were further from the onset of menopause had a slightly elevated risk for CHD.<br />
<br />
Secondary analysis of both studies was therefore necessary to show that age and a short duration from the onset of menopause are crucial to HRT success as a cardioprotective agent. Neither study type provided “truth”; or rather, both studies provided “truth” when viewed carefully (i.e., both produced valid and important results). The differences seen in the studies were rooted in the timing of HRT and the populations being studied.<br />
<br />
<b>Lessons Learned From this Case Study</b><br />
<br />
Although RCTs are given a higher evidence grade, observational studies provide important clinical insights. In this example, the study populations differed. For policymakers and clinicians, it is crucial to examine whether the CER was based upon patients similar to those being considered. Any study with a dissimilar population may provide non-relevant results. Thus, readers of CER need to carefully examine the generalizability of the findings being reported.<br />
<br />
==Appendix==<br />
<br />
General Classification and Regression Tree (CART) data analysis steps using the R package <b>rpart</b>.<br />
<br />
===Growing the Tree===<br />
<br />
# To grow a tree, use<br />
rpart(formula, data=, method=, control=)<br />
# where:<br />
#   formula  is in the format outcome ~ predictor1 + predictor2 + ...<br />
#   data=    specifies the data frame<br />
#   method=  "class" for a classification tree, "anova" for a regression tree<br />
#   control= optional parameters for controlling tree growth. For example,<br />
#            control=rpart.control(minsplit=30, cp=0.001) requires that the minimum<br />
#            number of observations in a node be 30 before attempting a split, and that<br />
#            a split must decrease the overall lack of fit by a factor of 0.001<br />
#            (the cost-complexity factor) before being attempted.<br />
<br />
===Examining Results===<br />
<br />
# These functions help with examining the results.<br />
printcp(fit)      # display the complexity parameter (cp) table<br />
plotcp(fit)       # plot cross-validation results<br />
rsq.rpart(fit)    # plot approximate R-squared and relative error for different splits<br />
                  # (2 plots; labels are appropriate only for the "anova" method)<br />
print(fit)        # print results<br />
summary(fit)      # detailed results, including surrogate splits<br />
plot(fit)         # plot the decision tree<br />
text(fit)         # label the decision tree plot<br />
post(fit, file=)  # create a PostScript plot of the decision tree<br />
# In trees created by rpart(), move to the LEFT branch when the stated condition is true.<br />
<br />
===Pruning Trees===<br />
<br />
# In general, trees should be pruned back to avoid overfitting the data. The tree size<br />
# should minimize the cross-validated error (the xerror column printed by printcp()).<br />
# Pruning the tree is accomplished by:<br />
prune(fit, cp= )<br />
# Use printcp() to examine the cross-validation error results, select the complexity<br />
# parameter (CP) associated with the minimum error, and insert that CP into the prune()<br />
# function. Automatically selecting the CP associated with the smallest cross-validated<br />
# error can be done succinctly by:<br />
prune(fit, cp=fit$\$$cptable[which.min(fit$\$$cptable[,"xerror"]),"CP"])<br />
<br />
===Complete Dataset for N-of-1 Example===<br />
[[SMHS_MethodsHeterogeneity_CER_Nof1|This N-of-1 Dataset]] includes an example.<br />
<br />
===[[SMHS_MethodsHeterogeneity|Back to the Heterogeneity of Treatment Effects, Case-Studies of Comparative Effectiveness Research section]]===<br />
<br />
<hr><br />
* SOCR Home page: http://www.socr.umich.edu<br />
<br />
{{translate|pageName=http://wiki.socr.umich.edu/index.php/SMHS_MethodsHeterogeneity_CER}}</div>
<hr />
<div>==[[SMHS_MethodsHeterogeneity| Methods for Studying Heterogeneity of Treatment Effects, Case-Studies of Comparative Effectiveness Research]] - Comparative Effectiveness Research: Case Studies <sup>13</sup> (CER) ==<br />
<br />
===Observational Studies: Tips for the CER Practitioners===<br />
<br />
*Different study types can offer different understandings; neither should be discounted without closer examination.<br />
<br />
*RCTs provide an accurate understanding of the effect of a particular intervention in a well-defined patient group under “controlled” circumstances.<br />
<br />
*Observational studies provide an understanding of real-world care and its impact, but can be biased due to uncontrolled factors.<br />
<br />
*Observational studies differ in the types of databases used. These databases may lack clinical detail and contain incomplete or inaccurate data.<br />
<br />
*Before accepting the findings from an observational study, consider whether confounding factors may have influenced the results.<br />
<br />
*In this scenario, subgroup analysis was vital in clarifying both study designs; what is true for the many (e.g., overall, estrogen appeared to be detrimental) may not be true for the few (e.g., that for the younger post-menopausal woman, the benefits were greater and the harms less frequent).<br />
<br />
*Carefully examine the generalizability of the study. Do the study’s patients and intervention match those under consideration?<br />
<br />
*Observational studies can identify associations but cannot prove cause-and-effect relationships.<br />
<br />
===Case-Study 1: The Cetuximab Study===<br />
<br />
<b>What was done and what was found?</b><br />
<br />
Cetuximab, an anti-epidermal growth factor receptor (EGFR) agent, has recently been added to the therapeutic armamentarium. Two important RCTs examined its impact in patients with mCRC (metastatic colorectal cancer). In the first, 56 centers in 11 European countries investigated the outcomes associated with cetuximab therapy in 329 mCRC patients who experienced disease progression either on irinotecan therapy or within 3 months thereafter. The study reported that the group on a combination of irinotecan and cetuximab had a significantly higher rate of overall response to treatment (the primary endpoint) than the group on cetuximab alone: 22.9% (95% CI, 17.5-29.1%) vs. 10.8% (95% CI, 5.7-18.1%), respectively (P=0.007). Similarly, the median time to progression was significantly longer in the combination-therapy group (4.1 vs. 1.5 months, P<0.001). As these patients had already progressed on irinotecan prior to the study, any response was viewed as positive. Safety between the two treatment arms was similar: approximately 80% of patients in each arm experienced a rash. Grade 3 or 4 (the more severe) toxic effects on the skin were slightly more frequent in the combination-therapy group than with cetuximab monotherapy, observed in 9.4% and 5.2% of participants, respectively. Other side effects observed in the combination-therapy arm, such as diarrhea and neutropenia, were considered to be in the range expected for irinotecan alone. Data from this study demonstrated the efficacy and safety of cetuximab and were instrumental in the FDA’s 2004 approval.<br />
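<br />
As a rough illustration of how a response-rate comparison like 22.9% vs. 10.8% reaches significance, a standard two-proportion z-test can be sketched. The arm sizes below are hypothetical round numbers, not the trial's actual counts, and the investigators may well have used a different test.<br />
<br />
```python
import math

def two_proportion_z(x1, n1, x2, n2):
    """Two-sided z statistic comparing response proportions x1/n1 and x2/n2,
    using the pooled proportion under the null hypothesis of no difference."""
    p1, p2 = x1 / n1, x2 / n2
    p = (x1 + x2) / (n1 + n2)                     # pooled proportion under H0
    se = math.sqrt(p * (1 - p) * (1/n1 + 1/n2))   # standard error of p1 - p2
    return (p1 - p2) / se

# Hypothetical arms: 50/220 responders vs. 12/110 responders.
z = two_proportion_z(50, 220, 12, 110)   # z > 1.96, so significant at the 0.05 level
```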
<br />
A second RCT (2007) examined 572 patients and suggested efficacy of cetuximab in the treatment of mCRC. This study was a randomized, non-blinded, controlled trial that examined cetuximab monotherapy plus best supportive care compared to best supportive care alone in patients who had received and failed prior chemotherapy regimens. It reported that median overall survival (the primary endpoint) was significantly higher in patients receiving cetuximab plus best supportive care compared to best supportive care alone (6.1 vs. 4.6 months, respectively) (hazard ratio for death=0.77; 95% CI: 0.64- 0.92, P=0.005). This RCT described a greater incidence of adverse events in the cetuximab plus best supportive care group compared to best supportive care alone including (most significantly) rash, as well as edema, fatigue, nausea and vomiting.<br />
<br />
<b>Was this the right answer?</b><br />
<br />
These RCTs had fairly broad enrollment criteria and the cetuximab benefits were modest. Emerging scientific theories raised the possibility that genetically defined population subsets might experience a greater-than-average treatment benefit. One such area of inquiry entailed examining “biomarkers,” or genetic indicators of a patient’s greater response to therapy. Even as the above RCTs were being conducted, data emerged showing the importance of the KRAS gene.<br />
<br />
<b>Emerging Data</b><br />
<br />
Based on the emerging biochemical evidence that the epidermal growth factor receptor (EGFR) treatment mechanism (cetuximab) was even more finely detailed than previously understood, the authors of the 2007 RCT undertook a retrospective subgroup analysis using tumor tissue samples preserved from their initial study. Following laboratory analysis, all viable tissue samples were classified as having a wild-type (non-mutated) or a mutated KRAS gene. Instead of the previous two study arms (cetuximab plus best supportive care vs. best supportive care alone), there were 4 for this new analysis: each of the two original study arms was further divided by wild-type vs. mutated KRAS status. Laboratory evaluation determined that 40.9% and 42.3% of patients had a KRAS mutation in the cetuximab plus best supportive care group and the best supportive care alone group, respectively. The efficacy of cetuximab was found to be significantly correlated with KRAS status: in patients with wild-type (non-mutated) KRAS genes, cetuximab plus best supportive care compared to best supportive care alone improved overall survival (median 9.5 vs. 4.8 months, respectively; hazard ratio for death=0.55; 95% CI, 0.41-0.74, P<0.001) and progression-free survival (median 3.7 vs. 1.9 months, respectively; hazard ratio for progression or death=0.40; 95% CI, 0.30-0.54, P<0.001). Meanwhile, in patients with mutated KRAS tumors, the authors found no significant difference in outcome between cetuximab plus best supportive care vs. best supportive care alone.<br />
<br />
<b>What next?</b><br />
<br />
Based on these and similar results from other studies, the FDA narrowed its product labeling in July 2009 to indicate that cetuximab is not recommended for mCRC patients with mutated KRAS tumors. This distinction reduces the relevant population by approximately 40%. Similarly, the American Society of Clinical Oncology released a provisional clinical recommendation that all mCRC patients have their tumors tested for KRAS status before receiving anti-EGFR therapy. The benefits of targeted treatment are many. Patients who previously underwent cetuximab therapy without knowing their genetic predisposition would no longer have to be exposed to the drug’s toxic effects if unnecessary, as the efficacy of cetuximab is markedly higher in the genetically defined appropriate patients. In a less-uncertain environment, clinicians can be more confident in advocating a course of action in their care of patients. And finally, knowledge that targeted therapy is possible suggests the potential for further innovation in treatment options. In fact, research continues to demonstrate options for targeted cetuximab treatment of mCRC at an even finer scale than seen with KRAS; and similar genetic targeting is being investigated, and advocated, in other cancer types.<br />
<br />
<b>Lessons Learned From this Case Study</b><br />
<br />
Although RCTs are generally viewed as the gold standard, results of one or even a series of trials may not accurately reflect the benefits experienced by an individual patient. This case study suggests that cetuximab initially appeared to have rather modest clinical benefits. However, new information that became available and subsequent genetic subgroup assessments led to very different conclusions. Clinicians should be aware that current knowledge is likely to evolve, and any decisions about patient care should be carefully considered with that sense of uncertainty in mind. As in this case study, subgroup analyses (e.g., genetic subtypes) need a theoretical rationale. Ideally, the analyses should be determined at the time of original RCT design and should not just occur as explorations of the subsequent data. When improperly employed, post hoc analyses may lead to incorrect patient care conclusions.<br />
<br />
<b>RCTs Tips for the CER Practitioners</b><br />
<br />
*RCTs can determine whether an intervention can provide benefit in a very controlled environment.<br />
<br />
*The controlled nature of an RCT may limit its generalizability to a broader population.<br />
<br />
*No results are permanent; advances in scientific knowledge and understanding can influence how we view the effectiveness (or safety) of a therapeutic intervention.<br />
<br />
*Targeted therapy illuminated by carefully thought out subgroup analyses can improve the efficacious and safe use of an intervention.<br />
<br />
===Case-Study 2: The Rosiglitazone Study===<br />
<br />
<b>Meta-analysis</b><br />
<br />
Often the results for the same intervention differ across clinical trials and it may not be clear whether one therapy provides more benefit than another. As CER increases and more studies are conducted, clinicians and policymakers are more likely to encounter this scenario. In a systematic review, a researcher identifies similar studies and displays their results in a table, enabling qualitative comparisons across the studies. With a meta-analysis, the data from included studies are statistically combined into a single “result.” Merging the data from a number of studies increases the effective sample size of the investigation, providing a statistically stronger conclusion about the body of research. By so doing, investigators may detect low frequency events and demonstrate more subtle distinctions between therapeutic alternatives.<br />
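<br />
The merging step can be sketched with a generic fixed-effect (inverse-variance) pooling of per-study log odds ratios. This is for illustration only: the study inputs below are hypothetical, and this is not the Peto method applied in the rosiglitazone analysis. Note how combining two studies with the same effect yields the same point estimate with a narrower confidence interval.<br />
<br />
```python
import math

def fixed_effect_pool(studies, z=1.96):
    """Fixed-effect (inverse-variance) pooling of per-study effects.
    studies: list of (log_or, se) pairs. Returns (pooled OR, CI low, CI high)."""
    weights = [1.0 / se**2 for _, se in studies]  # precision weights
    pooled = sum(w * lor for (lor, _), w in zip(studies, weights)) / sum(weights)
    se_pooled = math.sqrt(1.0 / sum(weights))
    return (math.exp(pooled),
            math.exp(pooled - z * se_pooled),
            math.exp(pooled + z * se_pooled))

# Two hypothetical studies reporting the same OR of 1.4:
or_pooled, lo, hi = fixed_effect_pool([(math.log(1.4), 0.30), (math.log(1.4), 0.30)])
```
Because the pooled standard error shrinks as studies are added, a meta-analysis can detect low-frequency events that no single trial could.<br />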
<br />
When studies have been properly identified and combined, the meta-analysis produces a summary estimate of the findings and a confidence interval that can serve as a benchmark in medical opinion and practice. However, when done incorrectly, the quantitative and statistical analysis can create impressive “numbers” but biased results. The following are important criteria for properly conducted meta-analyses:<br />
<br />
1. Carefully defining unbiased inclusion or exclusion criteria for study selection<br />
<br />
2. Including only those studies that have similar design elements, such as patient population, drug regimen, outcomes being assessed, and time-frame<br />
<br />
3. Applying correct statistical methods to combine and analyze the data<br />
<br />
Reporting this information is essential for the reader to determine whether the data were suitable to combine, and if the meta-analysis draws unbiased conclusions. Meta-analyses of randomized clinical trials are considered to be the highest level of medical evidence as they are based upon a synthesis of rigorously controlled trials that systematically reduce bias and confounding. This technique is useful in summarizing available evidence and will likely become more common in the era of publicly funded comparative effectiveness research. The following case study will examine several key principles that will be useful as the reader encounters these publications.<br />
<br />
<b>Clinical Application</b><br />
<br />
Heart disease is the leading cause of mortality in the United States, resulting in approximately 20% of all deaths. Diabetics are particularly susceptible to heart disease, with more than 65% of deaths attributable to it. The nonfatal complications of diabetes are wide-ranging and include kidney failure, nerve damage, amputation, stroke and blindness, among other outcomes. In 2007, the total estimated cost of diabetes in the United States was $174B; $116B was derived from direct medical expenditures and the rest from the indirect cost of lost productivity due to the disease. With such serious health effects and heavy direct and indirect costs tied to diabetes, proper disease management is critical. Historically, diabetes treatment has focused on strict blood sugar control, assuming that this goal not only targets diabetes but also reduces other serious comorbidities of the disease.<br />
<br />
Anti-diabetic agents have long been associated with key questions as to their benefits/risks in the treatment of diabetes. The sulfonylurea tolbutamide, a first generation anti-diabetic drug, was found in a landmark study in the 1970s to significantly increase the CV mortality rate compared to patients not on this agent. Further analysis by external parties concluded that the methods employed in this trial were significantly flawed (e.g., use of an “arbitrary” definition of diabetes status, heterogeneous baseline characteristics of the populations studied, and incorrect statistical methods). Since these early studies, CV concerns continue to be an issue with selected oral hypoglycemic agents that have subsequently entered the marketplace.<br />
<br />
A class of drugs, thiazolidinedione (TZD), was approved in the late 1990s as a solution to the problems associated with the older generation of sulfonylureas. Rosiglitazone, a member of the TZD class, was approved by the FDA in 1999 and was widely prescribed for the treatment of type-2 diabetes. A number of RCTs supported the benefit of rosiglitazone as an important new oral antidiabetic agent. However, safety concerns developed as the FDA received reports of adverse cardiac events potentially associated with rosiglitazone. It was in this setting that a meta-analysis by Nissen and Wolski was published in the New England Journal of Medicine in June 2007.<br />
<br />
<b>What was done?</b><br />
<br />
Nissen and Wolski conducted a meta-analysis examining the impact of rosiglitazone on cardiac events and mortality compared to alternative therapeutic approaches. The study began with a broad search to locate potential studies for review. The authors screened published phase II, III, and IV trials; the FDA website; and the drug manufacturer’s clinical-trial registry for applicable data relating to rosiglitazone use. When the initial search was complete, the studies were further categorized by pre-stated inclusion criteria. Meta-analysis inclusion criteria were simple: studies had to include rosiglitazone and a randomized comparator group treated with either another drug or placebo, study arms had to show similar length of treatment, and all groups had to have received more than 24 weeks of exposure to the study drugs. The studies had to contain outcome data of interest including the rate of myocardial infarction (MI) or death from all CV causes. Out of 116 studies surveyed by the authors, 42 met their inclusion criteria and were included in the meta-analysis. Of the studies they included, 23 had durations of 26 weeks or less, and only five studies followed patients for more than a year. Until this point, the study’s authors were following a path similar to that of any reviewer interested in CV outcomes, examining the results of these 42 studies and comparing them qualitatively. Quantitatively combining the data, however, required the authors to make choices about the studies they could merge and the statistical methods they should apply for analysis. Those decisions greatly influenced the results that were reported.<br />
<br />
<b>What was found?</b><br />
<br />
When the studies were combined, the meta-analysis contained data from 15,565 patients in the rosiglitazone group and 12,282 patients in the comparator groups. To analyze the data, the authors chose one particular statistical method (the Peto odds ratio method, a fixed-effect statistical approach), which is suited to settings in which the outcome events of interest are rare. In comparing rosiglitazone with a “control” group that included other drugs or placebo, the authors reported odds ratios of 1.43 (95% CI, 1.03-1.98; P=0.03) and 1.64 (95% CI, 0.98-2.74; P=0.06) for MI and death from CV causes, respectively. In other words, the odds of an MI or death from a CV cause were higher for rosiglitazone patients than for patients on other therapies or placebo. The authors reported that rosiglitazone was significantly associated with an increased risk of MI and had borderline significance in increasing the risk of death from all CV causes. These findings appeared online on the same day that the FDA issued a safety alert regarding rosiglitazone. Discussion of the meta-analysis was immediately featured prominently in the news media. By December 2007, prescription claims for the drug at retail pharmacies had fallen by more than 50%.<br />
<br />
As diabetic patients and their clinicians reacted to the news, a methodologic debate also ensued. This discussion included statistical issues pertaining to the conduct of the analysis, its implications for clinical care, and finally the FDA’s and drug manufacturer’s roles in overseeing and regulating rosiglitazone. Concern regarding the treatment continues among patients with diabetes and in the medical community today.<br />
<br />
<b>Was this the right answer?</b><br />
<br />
Should the studies have been combined? Commentators faulted the authors for including several studies that were not originally intended to investigate diabetes, and for combining both placebo and drug therapy data into one comparator arm. Some critics noted that despite the stated inclusion criteria, some data were derived from studies where the rosiglitazone arm was allowed a longer follow-up than the comparator arm. By failing to account for this longer follow-up period, commentators felt that the authors may have overestimated the effect of rosiglitazone on CV outcomes. Many reviewers were concerned that this meta-analysis excluded trials in which no patients suffered an MI or died from CV causes – the outcomes of greatest interest. Some reviewers also noted that the exclusion of zero-event trials from the pooled dataset not only gave an incomplete picture of the impact of rosiglitazone but could have increased the odds ratio estimate. In general, the pooled dataset was criticized by many for being a faulty microcosm of the information available regarding rosiglitazone.<br />
<br />
It is essential that a meta-analysis be based on similarity in the data sources. If studies differ in important areas such as the patient populations, interventions, or outcomes, combining their data may not be suitable. The researchers accepted studies and populations that were clinically heterogeneous, yet pooled them as if they were not. The study reported that the results were combined from a number of trials that were not initially intended to investigate CV outcomes. Furthermore, the available data did not allow for time-to-event analysis, an essential tool in comparing the impact of alternative treatment options. Reviewers considered the data to be insufficiently homogeneous, and the line of cause and effect to be murkier than the authors described.<br />
<br />
<b>Were the statistical methods optimal?</b><br />
<br />
The statistical methods for this meta-analysis also came under significant criticism. Critiques focused on the authors’ choice of the Peto method, arguing it was inappropriate because data were pooled from both small and very large studies, potentially overestimating the treatment effect. Other reviewers pointed out that the Peto method should not have been used, as a number of the underlying studies did not have patients assigned equally to rosiglitazone and comparator groups. Finally, critics suggested that the heterogeneity of the included studies required an altogether different set of analytic techniques.<br />
<br />
Demonstrating the sensitivity of the authors’ initial analysis to the inclusion criteria and statistical tests used, a number of researchers reworked the data from this study. One researcher used the same studies but analyzed the data with a more commonly used statistical method (Mantel-Haenszel) and found no significant increase in the relative risk or common odds ratio for MI or CV death. When the pool of studies was expanded to include those originally eliminated because they had zero CV events, the odds ratios for MI and death from CV causes dropped from 1.43 to 1.26 (95% CI, 0.93-1.72) and from 1.64 to 1.14 (95% CI, 0.74-1.74), respectively. Neither of the recalculated odds ratios was significant for MI or CV death. Finally, several newer long-term studies have been published since the Nissen meta-analysis. Incorporating their results with the meta-analysis data showed that rosiglitazone is associated with an increased risk of MI but not of CV death. Thus, the findings from these meta-analyses varied with the methods employed, the studies included, and the addition of later trials.<br />
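The dependence of a pooled estimate on the chosen method can be made concrete with the standard fixed-effect formulas. The R sketch below uses entirely hypothetical 2x2 counts (not the rosiglitazone trial data) to compute both the Mantel-Haenszel and the Peto pooled odds ratios:

```r
# Hypothetical 2x2 counts for three small trials (illustrative only):
#   a = events on treatment,  b = non-events on treatment
#   cc = events on control,   d = non-events on control
a  <- c(3, 1, 5);  b <- c(97, 199, 295)
cc <- c(1, 0, 3);  d <- c(99, 100, 297)

n1 <- a + b          # treatment-arm sizes
n0 <- cc + d         # control-arm sizes
n  <- n1 + n0        # trial totals
m1 <- a + cc         # total events per trial
m0 <- b + d          # total non-events per trial

# Mantel-Haenszel pooled odds ratio
or_mh <- sum(a * d / n) / sum(b * cc / n)

# Peto pooled odds ratio: exp( sum(O - E) / sum(V) ),
# with hypergeometric expectation E and variance V per trial
O <- a
E <- n1 * m1 / n
V <- n1 * n0 * m1 * m0 / (n^2 * (n - 1))
or_peto <- exp(sum(O - E) / sum(V))

cat("Mantel-Haenszel OR:", round(or_mh, 2),
    " Peto OR:", round(or_peto, 2), "\n")
```

Because the Peto estimator works from observed-minus-expected counts, it tolerates zero cells without continuity corrections; however, it is known to be biased when arms are of unequal size or effects are large, which is one of the criticisms raised above.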
<br />
<b>Emerging Data</b><br />
<br />
The controversy surrounding the rosiglitazone meta-analysis authored by Nissen and Wolski forced an unplanned interim analysis of a long-term, randomized trial investigating the CV effects of rosiglitazone among patients with type 2 diabetes. The authors of the RECORD trial noted that even though the follow-up at 3.75 years was shorter than expected, rosiglitazone, when added to standard glucose-lowering therapy, was found to be associated with an increase in the risk of heart failure but was not associated with any increase in death from CV or other causes. Data at the time were insufficient to determine the effect of rosiglitazone on the risk of MI. The final report of that trial, published in June 2009, confirmed the elevated risk of heart failure in people with type 2 diabetes treated with rosiglitazone in addition to glucose-lowering drugs, but continued to show inconclusive results about the effect of the drug therapy on the risk of MI. Further, the RECORD trial clarified that rosiglitazone does not result in an increased risk of CV morbidity or mortality compared to standard glucose-lowering drugs. Other trials conducted since the publication of the meta-analysis have corroborated these results, casting further doubt on the findings of the meta-analysis published by Nissen and Wolski.<br />
<br />
<b>Now what?</b><br />
<br />
Some sources suggest that the original Nissen meta-analysis delivered more harm than benefit, and that a well-recognized medical journal may have erred in its process of peer review. Despite this criticism, it is important to note that subsequent publications support the risk of adverse CV events associated with rosiglitazone, although rosiglitazone use does not appear to increase deaths. These results and emerging data point to the need for further rigorous research to clarify the benefits and risks of rosiglitazone on a variety of outcomes, and the importance of directing the drug to the population that will maximally benefit from its use.<br />
<br />
<b>Lessons Learned From this Case Study</b><br />
<br />
Results from initial randomized trials that seem definitive at one time may not be conclusive, as further trials may emerge to clarify, redirect, or negate previously accepted results. A meta-analysis of those trials can lead to varying results based upon the timing of the analysis and the choices made in its performance.<br />
<br />
<b>Meta-Analysis: Tips for CER Practitioners</b><br />
<br />
*The results of a meta-analysis are highly dependent on the studies included (and excluded). Are these criteria properly defined and relevant to the purposes of the meta-analysis? Were the combined studies sufficiently similar? Can results from this cohort be generalized to other populations of interest?<br />
<br />
*The statistical methodology can impact study results. Have there been reviews critiquing the methods used in the meta-analysis?<br />
<br />
*A variety of statistical tests should be considered, and perhaps reported, in the analysis of results. Do the authors mention their rationale in choosing a statistical method? Do they show the stability of their results across a spectrum of analytical methods?<br />
<br />
*Nothing is permanent. Emerging data may change the playing field, and meta-analysis results are only as good as the data and statistics from which they are derived.<br />
<br />
===Case-Study 3: The Nurses’ Health Study===<br />
<br />
<b>An observational study</b><br />
<br />
An observational study is a very common type of research design in which the effects of a treatment or condition are studied without formally randomizing patients in an experimental design. Such studies can be done prospectively, wherein data are collected about a group of patients going forward in time, or retrospectively, in which the researcher looks into the past, mining existing databases for data that have already been collected. The latter are frequently performed using an electronic database that contains, for example, administrative, “billing,” or claims data. Less commonly, observational research uses electronic health records, which contain richer clinical information that more closely resembles the data collected in an RCT. Observational studies often take place in “real-world” environments, which allow researchers to collect data for a wide array of outcomes. Patients are not randomized in these studies, but the findings can be used to generate hypotheses for investigation in a more constrained experimental setting. Perhaps the best known observational study is the “Framingham study,” which collected demographic and health data for a group of individuals over many years (and continues to do so) and has provided an understanding of the key risk factors for heart disease and stroke.<br />
<br />
Observational studies present many advantages to the comparative effectiveness researcher. The study design can provide a unique glimpse of the use of a health care intervention in the “real world,” an essential step in gauging the gap between efficacy (can a treatment work in a controlled setting?) and effectiveness (does the treatment work in a real-life situation?). Furthermore, observational studies can be conducted at low cost, particularly if they involve the secondary analysis of existing data sources. CER often uses administrative databases, which are based upon the billing data submitted by providers during routine care. These databases typically have limited clinical information, may have errors in them, and generally do not undergo auditing.<br />
<br />
The uncontrolled nature of observational studies makes them subject to bias and confounding. For example, doctors may prescribe a new medication only for the sickest patients. Comparing these patients’ outcomes (without careful statistical adjustment) with those of less ill patients receiving alternative treatment may lead to misleading results. Observational studies can identify important associations but cannot prove cause and effect. These studies can generate hypotheses that may require RCTs for fuller demonstration of those relationships. Secondary analysis can also be problematic if researchers overwork datasets by doing multiple exploratory analyses (e.g., data-dredging): the more we look, the more we find, even if those findings are merely statistical aberrations. Unfortunately, the growing need for CER and the wide availability of administrative databases may lead to a proliferation of poor-quality research with inaccurate findings.<br />
<br />
In comparative effectiveness research, observational studies are typically considered to be less conclusive than RCTs and meta-analyses. Nonetheless, they can be useful, especially because they examine typical care. Due to lower cost and improvements in health information, observational studies will become increasingly common. Critical assessment of whether the described results are sound or biased (based upon how the study was performed) is necessary. This case will illustrate several characteristics of the types of studies that will assist in evaluating newly published work. <br />
<br />
<b>Clinical Applications</b><br />
<br />
Cardiovascular diseases (CVD) are the leading cause of death in women older than 50. Epidemiologic evidence suggests that estrogen is a key mediator in the development of CVD. Estrogen is an ovarian hormone whose production decreases as women approach menopause. The steep increase in CVD in women at menopause and beyond, and in women who have had hysterectomies, further supports a relationship between estrogen and CVD. Building on this evidence of biologic plausibility, epidemiological and observational studies suggested that estrogen replacement therapy (a form of <b>hormone replacement therapy</b>, or HRT) reduced the risk of CVD in postmenopausal women (albeit with some negative effects, notably its potential to increase the risk of breast cancer and stroke). Based on these findings, in the 1980s and 1990s HRT was routinely employed to treat menopausal symptoms and serve as prophylaxis against CVD.<br />
<br />
<b>What was done?</b><br />
<br />
The Nurses’ Health Study (NHS) began collecting data in 1976. In the study, researchers intended to examine a broad range of health effects in women over a long period of time, and a key goal was to clarify the role of HRT in heart disease. The cohort (i.e., the group being followed) included married registered nurses aged 30-55 in 1976 who lived in the 11 most populous states. To collect data, the researchers mailed the study participants a survey every 2 years that asked questions about topics such as smoking, hormone use, menopausal status, and less frequently, diet. Data were collected for key end points that included MI, coronary-artery bypass grafting or angioplasty, stroke, total CVD mortality, and deaths from all causes.<br />
<br />
<b>What was found?</b><br />
<br />
At a 10-year follow-up point, the NHS had a study pool of 48,470 women. The researchers found that estrogen use (alone, without progestin) in postmenopausal women was associated with a reduction in the incidence of CVD as well as in CVD mortality compared to non-users. Later, estrogen-progestin combination therapy was shown to be even more cardioprotective than estrogen monotherapy, and lower doses of estrogen replacement therapy were found to deliver equal cardioprotection and lower the risk for adverse events. NHS researchers were alert to the potential for bias in observational studies. Adjustment for risk factors such as age (a typical practice to eliminate confounding) did not change the reported findings.<br />
<br />
<b>Was this the right answer?</b><br />
<br />
The NHS was not unique in reporting the benefits associated with HRT; other observational studies corroborated the NHS findings. A secondary retrospective data analysis of the UK primary care electronic medical record database, for example, also showed the protective effect associated with HRT use. Researchers were aware of the fundamental limitations of observational studies, particularly with regard to selection bias. They and practicing clinicians were also aware of the potential negative health effects of HRT, which had to be constantly weighed against the potential cardioprotective benefits in deciding a patient’s course of treatment. As a large section of the population could experience the health effects of HRT, researchers began planning RCTs to verify the promising observational study results. It was highly anticipated that those RCTs would corroborate the belief that estrogen replacement can reduce CVD risk.<br />
<br />
<b>Randomized Controlled Trial: The Women’s Health Initiative</b><br />
<br />
The Women’s Health Initiative (WHI) was a major study established by the National Institutes of Health in 1992 to assess a broad range of health effects in postmenopausal women. The trial was intended to follow these women for 8 years, at a cost of millions of dollars in federal funding. Among its many facets, it included an RCT to confirm the results from the observational studies discussed above. To fully investigate earlier findings, the WHI had two subgroups. One subgroup consisted of women with prior hysterectomies; they received estrogen monotherapy. The second group consisted of women who had not undergone hysterectomy; they received estrogen in combination with progestin. The WHI enrolled 27,347 women in their HRT investigation: 10,739 in the estrogen-alone arm and 16,608 in the estrogen plus progestin arm. Within each arm, women were randomly assigned to receive either HRT or placebo. All women in the trial were postmenopausal and aged 50-79 years; the mean age was 63.6 years (a fact that would be important in later analysis). Some participants had experienced previous CV events. The primary outcome of both subgroups was coronary heart disease (CHD), as described by nonfatal MI or death due to CHD.<br />
<br />
The estrogen-progestin arm of the WHI was halted after a mean follow-up of 5.2 years, 3 years earlier than expected, as the HRT users in this arm were found to be at increased risk for CHD compared to those who received placebo. The study also noted elevated rates of breast cancer and stroke, among other poor outcomes. The estrogen-alone arm continued for an average follow-up of 6.8 years before being similarly discontinued ahead of schedule. Although this part of the study did not find an increased risk of CHD, it also did not find any cardioprotective effect. Beyond failing to locate any clear CV benefits, the WHI also found real evidence of harm, including increased risk of blood clots, breast cancer and stroke. Initial WHI publications therefore recommended against HRT being prescribed for the secondary prevention of CVD.<br />
<br />
<b>What Next?</b><br />
<br />
Scientists, and the clinicians who relied on their data for guidance in treating patients, were faced with conflicting evidence: epidemiological and observational studies suggested that HRT was cardioprotective, while the higher-quality evidence from RCTs strongly suggested the opposite. Clinicians primarily followed the WHI results, and prescriptions for HRT in postmenopausal women quickly declined. Meanwhile, researchers began to analyze the studies for potential discrepancies and found that the women being followed in the NHS and the WHI differed in several important characteristics.<br />
<br />
First, the WHI population was older than the NHS cohort, and many had entered menopause at least 10 years before they enrolled in the RCT. Thus, the WHI enrollees experienced a long duration from the onset of menopause to the commencement of HRT. At the same time, many in the NHS population were closer to the onset of menopause and were still displaying hormonal symptoms when they began HRT. Second, although the NHS researchers adjusted the data for various confounding effects, their results could still have been subject to bias. In general, the NHS cohort was more highly educated and of a higher socioeconomic status than the WHI participants, and therefore more likely to see a physician regularly. The NHS women were also leaner and generally healthier than their RCT counterparts, and had been selected for their evident lack of pre-existing CV conditions. This selection bias in the NHS enrollment may have led to a “healthy woman” effect that in turn led to an overestimation of the benefits of therapy in the observational study. Third, researchers noted that dosing differences between the two study types may have contributed to the divergent results. The NHS reported beneficial results following low-dose estrogen therapy. The WHI, meanwhile, used a higher estrogen dose, exposing women to a larger dosage of hormones and increasing their risk for adverse events. The increased risk profile of the WHI women (e.g., older, more comorbidities, higher estrogen dose) could have contributed to the evidence of harm seen in the WHI results.<br />
<br />
<b>Emerging Data</b><br />
<br />
In addition to identifying the inherent differences between the two study populations, researchers began a secondary analysis of the NHS and WHI trials. NHS researchers reported that women who began HRT close to the onset of menopause had a significantly reduced risk of CHD. In the subgroups of women that were older and had a similar duration after menopause compared with the WHI women, they found no significant relationship between HRT and CHD. Also, the WHI study further stratified these results by age, and found that women who began HRT close to their onset of menopause experienced some cardioprotection, while women who were further from the onset of menopause had a slightly elevated risk for CHD.<br />
<br />
Secondary analysis of both studies was therefore necessary to show that age and a short duration from the onset of menopause are crucial to HRT success as a cardioprotective agent. Neither study type provided “truth”; rather, both studies provided “truth” when viewed carefully (e.g., both produced valid and important results). The differences seen in the studies were rooted in the timing of HRT and the populations being studied.<br />
<br />
<b>Lessons Learned From this Case Study</b><br />
<br />
Although RCTs are given a higher evidence grade, observational studies provide important clinical insights. In this example, the study populations differed. For policymakers and clinicians, it is crucial to examine whether the CER was based upon patients similar to those being considered. Any study with a dissimilar population may provide non-relevant results. Thus, readers of CER need to carefully examine the generalizability of the findings being reported.<br />
<br />
==Appendix==<br />
<br />
General Classification and Regression Tree (CART) data-analysis steps using the R package <b>rpart</b>.<br />
<br />
===Growing the Tree===<br />
<br />
# To grow a tree, use<br />
rpart(formula, data=, method=,control=), where<br />
formula is in the format outcome ~ predictor1+predictor2+...<br />
data= specifies the data frame<br />
method= "class" for a classification tree, "anova" for a regression tree<br />
control= optional parameters for controlling tree growth. For example, control=rpart.control(minsplit=30, cp=0.001) requires that the minimum number of observations in a node be 30 before attempting a split and that a split must decrease the overall lack of fit by a factor of 0.001 (cost complexity factor) before being attempted.<br />
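A minimal worked example of these arguments in action, using the kyphosis demonstration data frame that ships with the rpart package (assumed to be installed):

```r
library(rpart)

# Classification tree: predict post-operative Kyphosis (present/absent)
# from Age, Number of vertebrae involved, and Start vertebra.
# kyphosis is a demonstration data set bundled with rpart.
fit <- rpart(Kyphosis ~ Age + Number + Start,
             data    = kyphosis,
             method  = "class",
             control = rpart.control(minsplit = 30, cp = 0.001))
print(fit)
```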
<br />
===Examining Results===<br />
<br />
# These functions help with examining the results.<br />
printcp(fit) display complexity parameter (cp) table<br />
plotcp(fit) plot cross-validation results<br />
rsq.rpart(fit) plot approximate R-squared and relative error for different splits (2 plots); labels are only appropriate for the "anova" method.<br />
print(fit) print results<br />
summary(fit) detailed results including surrogate splits<br />
plot(fit) plot decision tree<br />
text(fit) label the decision tree plot<br />
post(fit, file=) create postscript plot of decision tree<br />
# In trees created by rpart(), move to the LEFT branch when the stated condition is true.<br />
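A short sketch tying these inspection functions together, again on the bundled kyphosis data (the tree is refit here so the snippet stands alone):

```r
library(rpart)

# Refit the demonstration tree on the kyphosis data bundled with rpart
fit <- rpart(Kyphosis ~ Age + Number + Start, data = kyphosis, method = "class")

printcp(fit)    # complexity-parameter table, including cross-validated error (xerror)
plotcp(fit)     # plot the cross-validation results
summary(fit)    # detailed node-by-node results, including surrogate splits

plot(fit, uniform = TRUE, margin = 0.1)  # draw the tree skeleton
text(fit, use.n = TRUE)                  # label nodes; use.n adds class counts
```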
<br />
===Pruning Trees===<br />
<br />
# In general, trees should be pruned back to avoid overfitting the data. The tree size should minimize the cross-validated error – the xerror column printed by printcp(). Pruning the tree is accomplished by:<br />
prune(fit, cp= )<br />
# Use printcp( ) to examine the cross-validation error results, select the complexity parameter (CP) associated with the minimum error, and insert that CP into the prune() function. This (automatically selecting the complexity parameter associated with the smallest cross-validated error) can be done succinctly by:<br />
fit$\$$cptable[which.min(fit$\$$cptable[,"xerror"]),"CP"]<br />
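Putting the pruning steps together in a self-contained sketch (again on the bundled kyphosis demonstration data):

```r
library(rpart)

# Grow a deliberately large tree on the kyphosis demonstration data
fit <- rpart(Kyphosis ~ Age + Number + Start, data = kyphosis, method = "class")

# Select the CP value whose row minimizes the cross-validated error (xerror),
# then prune the tree back to that complexity.
best_cp <- fit$cptable[which.min(fit$cptable[, "xerror"]), "CP"]
pruned  <- prune(fit, cp = best_cp)
printcp(pruned)
```

Because xerror comes from cross-validation, the selected CP (and hence the pruned tree) can vary from run to run; set.seed() can be used for reproducibility.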
<br />
===Complete Dataset for N-of-1 Example===<br />
[[SMHS_MethodsHeterogeneity_CER_Nof1|This N-of-1 Dataset]] includes an example.<br />
<br />
===[[SMHS_MethodsHeterogeneity|Back to the Heterogeneity of Treatment Effects, Case-Studies of Comparative Effectiveness Research section]]===<br />
<br />
<hr><br />
* SOCR Home page: http://www.socr.umich.edu<br />
<br />
{{translate|pageName=http://wiki.socr.umich.edu/index.php/SMHS_MethodsHeterogeneity_CER}}</div>
<hr />
<div>==[[SMHS_MethodsHeterogeneity| Methods for Studying Heterogeneity of Treatment Effects, Case-Studies of Comparative Effectiveness Research]] - Comparative Effectiveness Research (CER) ==<br />
<br />
===Overview===<br />
<br />
===Observational Studies: Tips for the CER Practitioners===<br />
<br />
*Different study types can offer different understandings; neither should be discounted without closer examination.<br />
<br />
*RCTs provide an accurate understanding of the effect of a particular intervention in a well-defined patient group under “controlled” circumstances.<br />
<br />
*Observational studies provide an understanding of real-world care and its impact, but can be biased due to uncontrolled factors.<br />
<br />
*Observational studies differ in the types of databases used. These databases may lack clinical detail and contain incomplete or inaccurate data.<br />
<br />
*Before accepting the findings from an observational study, consider whether confounding factors may have influenced the results.<br />
<br />
*In this scenario, subgroup analysis was vital in clarifying both study designs; what is true for the many (e.g., overall, estrogen appeared to be detrimental) may not be true for the few (e.g., that for the younger post-menopausal woman, the benefits were greater and the harms less frequent).<br />
<br />
*Carefully examine the generalizability of the study. Do the study’s patients and intervention match those under consideration?<br />
<br />
*Observational studies can identify associations but cannot prove cause-and-effect relationships.<br />
<br />
===Case-Study 1: The Cetuximab Study===<br />
<br />
<b>What was done and what was found?</b><br />
<br />
Cetuximab, an anti-epidermal growth factor receptor (EGFR) agent, has recently been added to the therapeutic armamentarium. Two important RCTs examined its impact in patients with metastatic colorectal cancer (mCRC). In the first, conducted at 56 centers in 11 European countries, investigators examined the outcomes associated with cetuximab therapy in 329 mCRC patients whose disease had progressed either on irinotecan therapy or within 3 months thereafter. The study reported that the group on a combination of irinotecan and cetuximab had a significantly higher rate of overall response to treatment (the primary endpoint) than the group on cetuximab alone: 22.9% (95% CI, 17.5-29.1%) vs. 10.8% (95% CI, 5.7-18.1%) (P=0.007), respectively. Similarly, the median time to progression was significantly longer in the combination-therapy group (4.1 vs. 1.5 months, P<0.001). As these patients had already progressed on irinotecan prior to the study, any response was viewed as positive. Safety between the two treatment arms was similar: approximately 80% of patients in each arm experienced a rash. Grade 3 or 4 (the more severe) toxic effects on the skin were slightly more frequent in the combination-therapy group than with cetuximab monotherapy, observed in 9.4% and 5.2% of participants, respectively. Other side effects observed in the combination-therapy arm, such as diarrhea and neutropenia, were considered to be in the range expected for irinotecan alone. Data from this study demonstrated the efficacy and safety of cetuximab and were instrumental in the FDA’s 2004 approval.<br />
<br />
A second RCT (2007) examined 572 patients and suggested efficacy of cetuximab in the treatment of mCRC. This study was a randomized, non-blinded, controlled trial that examined cetuximab monotherapy plus best supportive care compared to best supportive care alone in patients who had received and failed prior chemotherapy regimens. It reported that median overall survival (the primary endpoint) was significantly higher in patients receiving cetuximab plus best supportive care compared to best supportive care alone (6.1 vs. 4.6 months, respectively) (hazard ratio for death=0.77; 95% CI, 0.64-0.92; P=0.005). This RCT described a greater incidence of adverse events in the cetuximab plus best supportive care group compared to best supportive care alone, including (most significantly) rash, as well as edema, fatigue, nausea, and vomiting.<br />
<br />
<b>Was this the right answer?</b><br />
<br />
These RCTs had fairly broad enrollment criteria and the cetuximab benefits were modest. Emerging scientific theories raised the possibility that genetically defined population subsets might experience a greater-than-average treatment benefit. One such area of inquiry entailed examining “biomarkers,” or genetic indicators of a patient’s greater response to therapy. Even as the above RCTs were being conducted, data emerged showing the importance of the KRAS gene.<br />
<br />
<b>Emerging Data</b><br />
<br />
Based on the emerging biochemical evidence that the epidermal growth factor receptor (EGFR) treatment mechanism (cetuximab) was even more finely detailed than previously understood, the study authors of the 2007 RCT undertook a retrospective subgroup analysis using tumor tissue samples preserved from their initial study. Following laboratory analysis, all viable tissue samples were classified as having a wild-type (non-mutated) or a mutated KRAS gene. Instead of the previous two study arms (cetuximab plus best supportive care vs. best supportive care alone), there were 4 for this new analysis: each of the two original study arms was further divided by wild-type vs. mutated KRAS status. Laboratory evaluation determined that a KRAS mutation was present in 40.9% of patients in the cetuximab plus best supportive care group and in 42.3% of the best supportive care alone group. The efficacy of cetuximab was found to be significantly correlated with KRAS status: in patients with wild-type (non-mutated) KRAS genes, cetuximab plus best supportive care, compared to best supportive care alone, improved overall survival (median 9.5 vs. 4.8 months, respectively; hazard ratio for death=0.55; 95% CI, 0.41-0.74, P<0.001) and progression-free survival (median 3.7 vs. 1.9 months, respectively; hazard ratio for progression or death=0.40; 95% CI, 0.30-0.54, P<0.001). Meanwhile, in patients with mutated KRAS tumors, the authors found no significant difference in outcome between cetuximab plus best supportive care vs. best supportive care alone.<br />
<br />
<b>What next?</b><br />
<br />
Based on these and similar results from other studies, the FDA narrowed its product labeling in July 2009 to indicate that cetuximab is not recommended for mCRC patients with mutated KRAS tumors. This distinction reduces the relevant population by approximately 40%. Similarly, the American Society of Clinical Oncology released a provisional clinical recommendation that all mCRC patients have their tumors tested for KRAS status before receiving anti-EGFR therapy. The benefits of targeted treatment are many. Patients who previously underwent cetuximab therapy without knowing their genetic predisposition no longer have to be exposed to the drug’s toxic effects unnecessarily, as the efficacy of cetuximab is markedly higher in the genetically appropriate patients. In a less uncertain environment, clinicians can be more confident in advocating a course of action in their care of patients. And finally, knowledge that targeted therapy is possible suggests the potential for further innovation in treatment options. In fact, research continues to demonstrate options for targeted cetuximab treatment of mCRC at an even finer scale than seen with KRAS, and similar genetic targeting is being investigated, and advocated, in other cancer types.<br />
<br />
<b>Lessons Learned From This Case Study</b><br />
<br />
Although RCTs are generally viewed as the gold standard, the results of one or even a series of trials may not accurately reflect the benefits experienced by an individual patient. This case study suggests that cetuximab initially appeared to have rather modest clinical benefits; however, new information that became available and subsequent genetic subgroup assessments led to very different conclusions. Clinicians should be aware that current knowledge is likely to evolve, and any decisions about patient care should be carefully considered with that sense of uncertainty in mind. As in this case study, subgroup analyses (e.g., by genetic subtype) need a theoretical rationale. Ideally, the analyses should be specified at the time of the original RCT design and should not occur merely as explorations of the subsequent data. When improperly employed, post hoc analyses may lead to incorrect patient care conclusions.<br />
<br />
<b>RCTs: Tips for CER Practitioners</b><br />
<br />
*RCTs can determine whether an intervention can provide benefit in a very controlled environment.<br />
<br />
*The controlled nature of an RCT may limit its generalizability to a broader population.<br />
<br />
*No results are permanent; advances in scientific knowledge and understanding can influence how we view the effectiveness (or safety) of a therapeutic intervention.<br />
<br />
*Targeted therapy illuminated by carefully thought out subgroup analyses can improve the efficacious and safe use of an intervention.<br />
<br />
===Case-Study 2: The Rosiglitazone Study===<br />
<br />
<b>Meta-analysis</b><br />
<br />
Often the results for the same intervention differ across clinical trials and it may not be clear whether one therapy provides more benefit than another. As CER increases and more studies are conducted, clinicians and policymakers are more likely to encounter this scenario. In a systematic review, a researcher identifies similar studies and displays their results in a table, enabling qualitative comparisons across the studies. With a meta-analysis, the data from included studies are statistically combined into a single “result.” Merging the data from a number of studies increases the effective sample size of the investigation, providing a statistically stronger conclusion about the body of research. By so doing, investigators may detect low frequency events and demonstrate more subtle distinctions between therapeutic alternatives.<br />
<br />
When studies have been properly identified and combined, the meta-analysis produces a summary estimate of the findings and a confidence interval that can serve as a benchmark in medical opinion and practice. However, when done incorrectly, the quantitative and statistical analysis can create impressive “numbers” but biased results. The following are important criteria for properly conducted meta-analyses:<br />
<br />
1. Carefully defining unbiased inclusion or exclusion criteria for study selection<br />
<br />
2. Including only those studies that have similar design elements, such as patient population, drug regimen, outcomes being assessed, and time-frame<br />
<br />
3. Applying correct statistical methods to combine and analyze the data<br />
<br />
Reporting this information is essential for the reader to determine whether the data were suitable to combine, and if the meta-analysis draws unbiased conclusions. Meta-analyses of randomized clinical trials are considered to be the highest level of medical evidence as they are based upon a synthesis of rigorously controlled trials that systematically reduce bias and confounding. This technique is useful in summarizing available evidence and will likely become more common in the era of publicly funded comparative effectiveness research. The following case study will examine several key principles that will be useful as the reader encounters these publications.<br />
<br />
<b>Clinical Application</b><br />
<br />
Heart disease is the leading cause of mortality in the United States, resulting in approximately 20% of all deaths. Diabetics are particularly susceptible to heart disease, with more than 65% of deaths attributable to it. The nonfatal complications of diabetes are wide-ranging and include kidney failure, nerve damage, amputation, stroke and blindness, among other outcomes. In 2007, the total estimated cost of diabetes in the United States was $174B; $116B was derived from direct medical expenditures and the rest from the indirect cost of lost productivity due to the disease. With such serious health effects and heavy direct and indirect costs tied to diabetes, proper disease management is critical. Historically, diabetes treatment has focused on strict blood sugar control, assuming that this goal not only targets diabetes but also reduces other serious comorbidities of the disease.<br />
<br />
Anti-diabetic agents have long been associated with key questions as to their benefits and risks in the treatment of diabetes. The sulfonylurea tolbutamide, a first-generation anti-diabetic drug, was found in a landmark study in the 1970s to significantly increase the cardiovascular (CV) mortality rate compared to patients not on this agent. Further analysis by external parties concluded that the methods employed in this trial were significantly flawed (e.g., use of an “arbitrary” definition of diabetes status, heterogeneous baseline characteristics of the populations studied, and incorrect statistical methods). Since these early studies, CV concerns have continued to be an issue with selected oral hypoglycemic agents that subsequently entered the marketplace.<br />
<br />
A class of drugs, the thiazolidinediones (TZDs), was approved in the late 1990s as a solution to the problems associated with the older generation of sulfonylureas. Rosiglitazone, a member of the TZD class, was approved by the FDA in 1999 and was widely prescribed for the treatment of type 2 diabetes. A number of RCTs supported the benefit of rosiglitazone as an important new oral antidiabetic agent. However, safety concerns developed as the FDA received reports of adverse cardiac events potentially associated with rosiglitazone. It was in this setting that a meta-analysis by Nissen and Wolski was published in the New England Journal of Medicine in June 2007.<br />
<br />
<b>What was done?</b><br />
<br />
Nissen and Wolski conducted a meta-analysis examining the impact of rosiglitazone on cardiac events and mortality compared to alternative therapeutic approaches. The study began with a broad search to locate potential studies for review. The authors screened published phase II, III, and IV trials; the FDA website; and the drug manufacturer’s clinical-trial registry for applicable data relating to rosiglitazone use. When the initial search was complete, the studies were further categorized by pre-stated inclusion criteria. Meta-analysis inclusion criteria were simple: studies had to include rosiglitazone and a randomized comparator group treated with either another drug or placebo, study arms had to show similar length of treatment, and all groups had to have received more than 24 weeks of exposure to the study drugs. The studies had to contain outcome data of interest including the rate of myocardial infarction (MI) or death from all CV causes. Out of 116 studies surveyed by the authors, 42 met their inclusion criteria and were included in the meta-analysis. Of the studies they included, 23 had durations of 26 weeks or less, and only five studies followed patients for more than a year. Until this point, the study’s authors were following a path similar to that of any reviewer interested in CV outcomes, examining the results of these 42 studies and comparing them qualitatively. Quantitatively combining the data, however, required the authors to make choices about the studies they could merge and the statistical methods they should apply for analysis. Those decisions greatly influenced the results that were reported.<br />
<br />
<b>What was found?</b><br />
<br />
When the studies were combined, the meta-analysis contained data from 15,565 patients in the rosiglitazone group and 12,282 patients in the comparator groups. Analyzing their data, the authors chose one particular statistical method (the Peto odds ratio method, a fixed-effect statistical approach), which calculates the odds of events occurring where the outcomes of interest are rare and small in number. In comparing rosiglitazone with a “control” group that included other drugs or placebo, the authors reported odds ratios of 1.43 (95% CI, 1.03-1.98; P=0.03) and 1.64 (95% CI, 0.98-2.74; P=0.06) for MI and death from CV causes, respectively. In other words, the odds of an MI or death from a CV cause were higher for rosiglitazone patients than for patients on other therapies or placebo. The authors reported that rosiglitazone was significantly associated with an increase in the risk of MI and had borderline significance in increasing the risk of death from all CV causes. These findings appeared online on the same day that the FDA issued a safety alert regarding rosiglitazone. Discussion of the meta-analysis was immediately featured prominently in the news media. By December 2007, prescription claims for the drug at retail pharmacies had fallen by more than 50%.<br />
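The Peto (one-step) odds ratio the authors used can be computed in a few lines of base R: for each trial it pools observed-minus-expected event counts under a hypergeometric model, which is why it is suited to rare events. The four trials below are hypothetical counts invented for illustration, not data from the rosiglitazone meta-analysis.<br />

```r
# Peto (one-step) pooled odds ratio for sparse 2x2 tables (base R only).
# Each row is one hypothetical trial: events/total in treatment and control.
trials <- data.frame(
  a  = c(3, 1, 5, 2),              # events, treatment arm
  n1 = c(200, 150, 500, 250),      # total, treatment arm
  c  = c(1, 0, 3, 1),              # events, control arm
  n2 = c(200, 150, 250, 250)       # total, control arm
)

peto_or <- function(d) {
  N  <- d$n1 + d$n2
  m1 <- d$a + d$c                                       # total events per trial
  E  <- m1 * d$n1 / N                                   # expected treatment events under H0
  V  <- d$n1 * d$n2 * m1 * (N - m1) / (N^2 * (N - 1))   # hypergeometric variance
  log_or <- sum(d$a - E) / sum(V)                       # pooled log odds ratio
  list(or = exp(log_or),
       ci = exp(log_or + c(-1, 1) * 1.96 / sqrt(sum(V))))
}

res <- peto_or(trials)
res$or   # pooled Peto odds ratio
res$ci   # 95% confidence interval
```

Note that trials with zero events in both arms contribute nothing to either sum, which is one reason their exclusion or inclusion matters less for this method than for others.<br />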
<br />
As diabetic patients and their clinicians reacted to the news, a methodologic debate also ensued. This discussion included statistical issues pertaining to the conduct of the analysis, its implications for clinical care, and, finally, the FDA’s and drug manufacturer’s roles in overseeing and regulating rosiglitazone. The concern among patients with diabetes regarding treatment continues in the medical community today.<br />
<br />
<b>Was this the right answer?</b><br />
<br />
Should the studies have been combined? Commentators faulted the authors for including several studies that were not originally intended to investigate diabetes, and for combining both placebo and drug therapy data into one comparator arm. Some critics noted that despite the stated inclusion criteria, some data were derived from studies where the rosiglitazone arm was allowed a longer follow-up than the comparator arm. By failing to account for this longer follow-up period, commentators felt that the authors may have overestimated the effect of rosiglitazone on CV outcomes. Many reviewers were concerned that this meta-analysis excluded trials in which no patients suffered an MI or died from CV causes – the outcomes of greatest interest. Some reviewers also noted that the exclusion of zero-event trials from the pooled dataset not only gave an incomplete picture of the impact of rosiglitazone but could have increased the odds ratio estimate. In general, the pooled dataset was criticized by many for being a faulty microcosm of the information available regarding rosiglitazone.<br />
<br />
It is essential that a meta-analysis be based on similarity in the data sources. If studies differ in important areas such as the patient populations, interventions, or outcomes, combining their data may not be suitable. The researchers accepted studies and populations that were clinically heterogeneous, yet pooled them as if they were not. The study reported that the results were combined from a number of trials that were not initially intended to investigate CV outcomes. Furthermore, the available data did not allow for time-to-event analysis, an essential tool in comparing the impact of alternative treatment options. Reviewers considered the data to be insufficiently homogeneous, and the line of cause and effect to be murkier than the authors described.<br />
<br />
<b>Were the statistical methods optimal?</b><br />
<br />
The statistical methods for this meta-analysis also came under significant criticism. The critiques focused on the authors’ use of the Peto method as an incorrect choice because data were pooled from both small and very large studies, resulting in a potential overestimation of the treatment effect. Other reviewers pointed out that the Peto method should not have been used, as a number of the underlying studies did not have patients assigned equally to rosiglitazone and comparator groups. Finally, critics suggested that the heterogeneity of the included studies required an altogether different set of analytic techniques.<br />
<br />
Demonstrating the sensitivity of the authors’ initial analysis to the inclusion criteria and statistical tests used, a number of researchers reworked the data from this study. One researcher used the same studies but analyzed the data with a more commonly used statistical method (Mantel-Haenszel) and found no significant increase in the relative risk or common odds ratio for MI or CV death. When the pool of studies was expanded to include those originally eliminated because they had zero CV events, the odds ratios for MI and death from CV causes dropped from 1.43 to 1.26 (95% CI, 0.93-1.72) and from 1.64 to 1.14 (95% CI, 0.74-1.74), respectively. Neither of the recalculated odds ratios was significant for MI or CV death. Finally, several newer long-term studies have been published since the Nissen meta-analysis. Incorporating their results with the meta-analysis data showed that rosiglitazone is associated with an increased risk of MI but not of CV death. Thus, the findings from these meta-analyses varied with the methods employed, the studies included, and the addition of later trials.<br />
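The Mantel-Haenszel approach used in the re-analysis is available in base R as mantelhaen.test(), which operates on a 2×2×K array with one stratum per trial. The event counts below are again invented for illustration only.<br />

```r
# Mantel-Haenszel pooled odds ratio across K trials (base R, stats package).
# Build a 2 x 2 x K array: rows = event yes/no, columns = treatment/control.
make_stratum <- function(a, n1, c, n2) {
  matrix(c(a, n1 - a, c, n2 - c), nrow = 2,
         dimnames = list(event = c("yes", "no"),
                         arm   = c("treatment", "control")))
}

# Three hypothetical trials, each a 2x2 table of counts
strata <- array(
  c(make_stratum(3, 200, 1, 200),
    make_stratum(5, 500, 3, 250),
    make_stratum(2, 250, 1, 250)),
  dim = c(2, 2, 3)
)

mh <- mantelhaen.test(strata)
mh$estimate     # common (pooled) odds ratio
mh$conf.int     # 95% confidence interval
mh$p.value      # test of a common odds ratio equal to 1
```

Because the weighting scheme differs from the Peto method, the two approaches can give noticeably different pooled estimates when study sizes or arm allocations are unbalanced, which is exactly the sensitivity the re-analyses demonstrated.<br />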
<br />
<b>Emerging Data</b><br />
<br />
The controversy surrounding the rosiglitazone meta-analysis authored by Nissen and Wolski forced an unplanned interim analysis of a long-term, randomized trial investigating the CV effects of rosiglitazone among patients with type 2 diabetes. The authors of the RECORD trial noted that even though the follow-up at 3.75 years was shorter than expected, rosiglitazone, when added to standard glucose-lowering therapy, was found to be associated with an increase in the risk of heart failure but was not associated with any increase in death from CV or other causes. Data at the time were found to be insufficient to determine the effect of rosiglitazone on the risk of MI. The final report of that trial, published in June 2009, confirmed the elevated risk of heart failure in people with type 2 diabetes treated with rosiglitazone in addition to glucose-lowering drugs, but continued to show inconclusive results about the effect of the drug therapy on the risk of MI. Further, the RECORD trial clarified that rosiglitazone does not result in an increased risk of CV morbidity or mortality compared to standard glucose-lowering drugs. Other trials conducted since the publication of the meta-analysis have corroborated these results, casting further doubt on the findings of the meta-analysis published by Nissen and Wolski.<br />
<br />
<b>Now what?</b><br />
<br />
Some sources suggest that the original Nissen meta-analysis delivered more harm than benefit, and that a well-recognized medical journal may have erred in its process of peer review. Despite this criticism, it is important to note that subsequent publications support the risk of adverse CV events associated with rosiglitazone, although rosiglitazone use does not appear to increase deaths. These results and emerging data point to the need for further rigorous research to clarify the benefits and risks of rosiglitazone on a variety of outcomes, and the importance of directing the drug to the population that will maximally benefit from its use.<br />
<br />
<b>Lessons Learned From This Case Study</b><br />
<br />
Results from initial randomized trials that seem definitive at one time may not be conclusive, as further trials may emerge to clarify, redirect, or negate previously accepted results. A meta-analysis of those trials can lead to varying results based upon the timing of the analysis and the choices made in its performance.<br />
<br />
<b>Meta-Analysis: Tips for CER Practitioners</b><br />
<br />
*The results of a meta-analysis are highly dependent on the studies included (and excluded). Are these criteria properly defined and relevant to the purposes of the meta-analysis? Were the combined studies sufficiently similar? Can results from this cohort be generalized to other populations of interest?<br />
<br />
*The statistical methodology can impact study results. Have there been reviews critiquing the methods used in the meta-analysis?<br />
<br />
*A variety of statistical tests should be considered, and perhaps reported, in the analysis of results. Do the authors mention their rationale in choosing a statistical method? Do they show the stability of their results across a spectrum of analytical methods?<br />
<br />
*Nothing is permanent. Emerging data may change the playing field, and meta-analysis results are only as good as the data and statistics from which they are derived.<br />
<br />
===Case-Study 3: The Nurses’ Health Study===<br />
<br />
<b>An observational study</b><br />
<br />
An observational study is a very common type of research design in which the effects of a treatment or condition are studied without formally randomizing patients in an experimental design. Such studies can be done prospectively, wherein data are collected about a group of patients going forward in time, or retrospectively, in which the researcher looks into the past, mining existing databases for data that have already been collected. The latter studies are frequently performed using an electronic database that contains, for example, administrative, “billing,” or claims data. Less commonly, observational research uses electronic health records, which have richer clinical information that more closely resembles the data collected in an RCT. Observational studies often take place in “real-world” environments, which allow researchers to collect data for a wide array of outcomes. Patients are not randomized in these studies, but the findings can be used to generate hypotheses for investigation in a more constrained experimental setting. Perhaps the best-known observational study is the Framingham Study, which has collected demographic and health data for a group of individuals over many years (and continues to do so) and has provided an understanding of the key risk factors for heart disease and stroke.<br />
<br />
Observational studies present many advantages to the comparative effectiveness researcher. The study design can provide a unique glimpse of the use of a health care intervention in the “real world,” an essential step in gauging the gap between efficacy (can a treatment work in a controlled setting?) and effectiveness (does the treatment work in a real-life situation?). Furthermore, observational studies can be conducted at low cost, particularly if they involve the secondary analysis of existing data sources. CER often uses administrative databases, which are based upon the billing data submitted by providers during routine care. These databases typically have limited clinical information, may contain errors, and generally do not undergo auditing.<br />
<br />
The uncontrolled nature of observational studies makes them subject to bias and confounding. For example, doctors may prescribe a new medication only for the sickest patients; comparing their outcomes (without careful statistical adjustment) with those of less ill patients receiving an alternative treatment may lead to misleading results. Observational studies can identify important associations but cannot prove cause and effect. These studies can generate hypotheses that may require RCTs for fuller demonstration of those relationships. Secondary analysis can also be problematic if researchers overwork datasets by performing multiple exploratory analyses (i.e., data-dredging): the more we look, the more we find, even if those findings are merely statistical aberrations. Unfortunately, the growing need for CER and the wide availability of administrative databases may lead to poor-quality research with inaccurate findings.<br />
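The confounding-by-indication problem described above can be demonstrated with a short R simulation: when sicker patients preferentially receive a new drug, a naive comparison makes the drug look harmful even though it has no effect, while adjusting for severity recovers the true null. All numbers here are synthetic.<br />

```r
# Simulated confounding by indication (base R).
set.seed(42)
n <- 10000
severity <- rnorm(n)                       # underlying illness severity
# Sicker patients are more likely to receive the new drug
drug  <- rbinom(n, 1, plogis(1.5 * severity))
# The outcome depends on severity only -- the drug has NO true effect
death <- rbinom(n, 1, plogis(-2 + 1.2 * severity))

# Naive (unadjusted) model: the drug appears strongly "harmful"
naive <- glm(death ~ drug, family = binomial)
exp(coef(naive)["drug"])     # biased odds ratio, well above 1

# Adjusted model: conditioning on severity removes the spurious effect
adj <- glm(death ~ drug + severity, family = binomial)
exp(coef(adj)["drug"])       # close to 1, the true null effect
```

In real data the confounder is rarely measured this cleanly, which is why statistical adjustment in observational studies reduces, but does not eliminate, the risk of bias.<br />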
<br />
In comparative effectiveness research, observational studies are typically considered to be less conclusive than RCTs and meta-analyses. Nonetheless, they can be useful, especially because they examine typical care. Due to lower cost and improvements in health information, observational studies will become increasingly common. Critical assessment of whether the described results are helpful or biased (based upon how the study was performed) are necessary. This case will illustrate several characteristics of the types of studies that will assist in evaluating newly published work. <br />
<br />
<b>Clinical Applications</b><br />
<br />
Cardiovascular diseases (CVD) are the leading cause of death in women older than 50. Epidemiologic evidence suggests that estrogen is a key mediator in the development of CVD. Estrogen is an ovarian hormone whose production decreases as women approach menopause. The steep increase in CVD in women at and after menopause, and in women who have had hysterectomies, further supports a relationship between estrogen and CVD. Building on this evidence of biologic plausibility, epidemiological and observational studies suggested that estrogen replacement therapy (a form of <b>hormone replacement therapy</b>, or HRT) had positive effects on the risk of CVD in postmenopausal women (albeit with some negative effects in its potential to increase the risk of breast cancer and stroke). Based on these findings, in the 1980s and 1990s HRT was routinely employed to treat menopausal symptoms and serve as prophylaxis against CVD.<br />
<br />
<b>What was done?</b><br />
<br />
The Nurses’ Health Study (NHS) began collecting data in 1976. In the study, researchers intended to examine a broad range of health effects in women over a long period of time, and a key goal was to clarify the role of HRT in heart disease. The cohort (i.e., the group being followed) included married registered nurses aged 30-55 in 1976 who lived in the 11 most populous states. To collect data, the researchers mailed the study participants a survey every 2 years that asked questions about topics such as smoking, hormone use, menopausal status, and less frequently, diet. Data were collected for key end points that included MI, coronary-artery bypass grafting or angioplasty, stroke, total CVD mortality, and deaths from all causes.<br />
<br />
<b>What was found?</b><br />
<br />
At a 10-year follow-up point, the NHS had a study pool of 48,470 women. The researchers found that estrogen use (alone, without progestin) in postmenopausal women was associated with a reduction in the incidence of CVD as well as in CVD mortality compared to non-users. Later, estrogen-progestin combination therapy was shown to be even more cardioprotective than estrogen monotherapy, and lower doses of estrogen replacement therapy were found to deliver equal cardioprotection and lower the risk for adverse events. NHS researchers were alert to the potential for bias in observational studies. Adjustment for risk factors such as age (a typical practice to eliminate confounding) did not change the reported findings.<br />
<br />
<b>Was this the right answer?</b><br />
<br />
The NHS was not unique in reporting the benefits associated with HRT; other observational studies corroborated the NHS findings. A secondary retrospective data analysis of the UK primary care electronic medical record database, for example, also showed the protective effect associated with HRT use. Researchers were aware of the fundamental limitations of observational studies, particularly with regard to selection bias. They and practicing clinicians were also aware of the potential negative health effects of HRT, which had to be constantly weighed against the potential cardioprotective benefits in deciding a patient’s course of treatment. As a large section of the population could experience the health effects of HRT, researchers began planning RCTs to verify the promising observational study results. It was highly anticipated that those RCTs would corroborate the belief that estrogen replacement can reduce CVD risk.<br />
<br />
<b>Randomized Controlled Trial: The Women’s Health Initiative</b><br />
<br />
The Women’s Health Initiative (WHI) was a major study established by the National Institutes of Health in 1992 to assess a broad range of health effects in postmenopausal women. The trial was intended to follow these women for 8 years, at a cost of millions of dollars in federal funding. Among its many facets, it included an RCT to confirm the results from the observational studies discussed above. To fully investigate earlier findings, the WHI had two subgroups. One subgroup consisted of women with prior hysterectomies; they received estrogen monotherapy. The second group consisted of women who had not undergone hysterectomy; they received estrogen in combination with progestin. The WHI enrolled 27,347 women in its HRT investigation: 10,739 in the estrogen-alone arm and 16,608 in the estrogen plus progestin arm. Within each arm, women were randomly assigned to receive either HRT or placebo. All women in the trial were postmenopausal and aged 50-79 years; the mean age was 63.6 years (a fact that would be important in later analysis). Some participants had experienced previous CV events. The primary outcome in both subgroups was coronary heart disease (CHD), defined as nonfatal MI or death due to CHD.<br />
<br />
The estrogen-progestin arm of the WHI was halted after a mean follow-up of 5.2 years, 3 years earlier than expected, as the HRT users in this arm were found to be at increased risk for CHD compared to those who received placebo. The study also noted elevated rates of breast cancer and stroke, among other poor outcomes. The estrogen-alone arm continued for an average follow-up of 6.8 years before being similarly discontinued ahead of schedule. Although this part of the study did not find an increased risk of CHD, it also did not find any cardioprotective effect. Beyond failing to locate any clear CV benefits, the WHI also found real evidence of harm, including increased risk of blood clots, breast cancer and stroke. Initial WHI publications therefore recommended against HRT being prescribed for the secondary prevention of CVD.<br />
<br />
<b>What Next?</b><br />
<br />
Scientists, and the clinicians who relied on their data for guidance in treating patients, were faced with conflicting data: epidemiological and observational studies suggested that HRT was cardioprotective, while the higher-quality evidence from RCTs strongly suggested the opposite. Clinicians primarily followed the WHI results, so prescriptions for HRT in postmenopausal women quickly declined. Meanwhile, researchers began to analyze the studies for potential discrepancies and found that the women followed in the NHS and the WHI differed in several important characteristics.<br />
<br />
First, the WHI population was older than the NHS cohort, and many had entered menopause at least 10 years before they enrolled in the RCT. Thus, the WHI enrollees experienced a long duration from the onset of menopause to the commencement of HRT. At the same time, many in the NHS population were closer to the onset of menopause and were still displaying hormonal symptoms when they began HRT. Second, although the NHS researchers adjusted the data for various confounding effects, their results could still have been subject to bias. In general, the NHS cohort was more highly educated and of a higher socioeconomic status than the WHI participants, and therefore more likely to see a physician regularly. The NHS women were also leaner and generally healthier than their RCT counterparts, and had been selected for their evident lack of pre-existing CV conditions. This selection bias in the NHS enrollment may have led to a “healthy woman” effect that in turn led to an overestimation of the benefits of therapy in the observational study. Third, researchers noted that dosing differences between the two study types may have contributed to the divergent results. The NHS reported beneficial results following low-dose estrogen therapy. The WHI, meanwhile, used a higher estrogen dose, exposing women to a larger dosage of hormones and increasing their risk for adverse events. The increased risk profile of the WHI women (e.g., older, more comorbidities, higher estrogen dose) could have contributed to the evidence of harm seen in the WHI results.<br />
<br />
<b>Emerging Data</b><br />
<br />
In addition to identifying the inherent differences between the two study populations, researchers began a secondary analysis of the NHS and WHI trials. NHS researchers reported that women who began HRT close to the onset of menopause had a significantly reduced risk of CHD. In the subgroups of women that were older and had a similar duration after menopause compared with the WHI women, they found no significant relationship between HRT and CHD. Also, the WHI study further stratified these results by age, and found that women who began HRT close to their onset of menopause experienced some cardioprotection, while women who were further from the onset of menopause had a slightly elevated risk for CHD.<br />
<br />
Secondary analysis of both studies was therefore necessary to show that age and a short duration from the onset of menopause are crucial to HRT’s success as a cardioprotective agent. Neither study type provided the “truth”; or rather, both provided “truth” if viewed carefully (i.e., both produced valid and important results). The differences seen in the studies were rooted in the timing of HRT and the populations being studied.<br />
<br />
<b>Lessons Learned From This Case Study</b><br />
<br />
Although RCTs are given a higher evidence grade, observational studies provide important clinical insights. In this example, the study populations differed. For policymakers and clinicians, it is crucial to examine whether the CER was based upon patients similar to those being considered. Any study with a dissimilar population may provide non-relevant results. Thus, readers of CER need to carefully examine the generalizability of the findings being reported.<br />
<br />
==Appendix==<br />
<br />
General Classification and Regression Tree (CART) data analysis steps using the R package <b>rpart</b>.<br />
<br />
===Growing the Tree===<br />
<br />
# To grow a tree, use<br />
rpart(formula, data=, method=, control=), where:<br />
* <b>formula</b> is in the format outcome ~ predictor1 + predictor2 + ...<br />
* <b>data=</b> specifies the data frame<br />
* <b>method=</b> use "class" for a classification tree and "anova" for a regression tree<br />
* <b>control=</b> optional parameters for controlling tree growth. For example, control=rpart.control(minsplit=30, cp=0.001) requires that a node contain at least 30 observations before a split is attempted, and that a split decrease the overall lack of fit by a factor of 0.001 (the cost-complexity factor) before being attempted.<br />
<br />
===Examining Results===<br />
<br />
# These functions help with examining the results.<br />
printcp(fit) display complexity parameter (cp) table<br />
plotcp(fit) plot cross-validation results<br />
rsq.rpart(fit) plot approximate R-squared and relative error for different splits (2 plots). labels are only appropriate for the "anova" method.<br />
print(fit) print results<br />
summary(fit) detailed results including surrogate splits<br />
plot(fit) plot decision tree<br />
text(fit) label the decision tree plot<br />
post(fit, file=) create postscript plot of decision tree<br />
# In trees created by rpart(), move to the LEFT branch when the stated condition is true.<br />
<br />
===Pruning Trees===<br />
<br />
# In general, trees should be pruned back to avoid overfitting the data. The tree size should minimize the cross-validated error – the xerror column printed by printcp(). Pruning the tree is accomplished by:<br />
prune(fit, cp= )<br />
# Use printcp() to examine the cross-validation error results, select the complexity parameter (CP) associated with the minimum error, and insert that CP into the prune() function. This (automatically selecting the complexity parameter associated with the smallest cross-validated error) can be done succinctly by:<br />
fit$\$$cptable[which.min(fit$\$$cptable[,"xerror"]),"CP"]<br />
<br />
===Compete Dataset for N-of-1 Example===<br />
[[SMHS_MethodsHeterogeneity_CER_Nof1|This N-of-1 Dataset]] includes an example.<br />
<br />
===[[SMHS_MethodsHeterogeneity|Back to the Heterogeneity of Treatment Effects, Case-Studies of Comparative Effectiveness Research section]]===<br />
<br />
<hr><br />
* SOCR Home page: http://www.socr.umich.edu<br />
<br />
{{translate|pageName=http://wiki.socr.umich.edu/index.php/SMHS_MethodsHeterogeneity_CER}}</div>Pineaumihttps://wiki.socr.umich.edu/index.php?title=SMHS_MethodsHeterogeneity_MetaAnalysis&diff=16221SMHS MethodsHeterogeneity MetaAnalysis2016-05-23T18:55:03Z<p>Pineaumi: /* Nonparametric Regression Methods */</p>
<hr />
<div>==[[SMHS_MethodsHeterogeneity| Methods for Studying Heterogeneity of Treatment Effects, Case-Studies of Comparative Effectiveness Research]] - Meta-Analyses ==<br />
<br />
==Meta-analysis==<br />
<br />
===Overview===<br />
<br />
Meta-analysis is an approach to combining treatment effects across trials or studies into an aggregated treatment effect with higher statistical power than observed in each individual trial. It may detect HTE by testing for differences in treatment effects across similar RCTs. It requires that the individual treatment effects be similar to ensure that pooling is meaningful. In the presence of large clinical or methodological differences between the trials, it may be best to avoid meta-analysis. The presence of HTE across studies in a meta-analysis may be due to differences in the design or execution of the individual trials (e.g., randomization methods, patient selection criteria). <b>Cochran's Q is a method for detecting heterogeneity, computed as the weighted sum of squared differences between each study's treatment effect and the pooled effect across the studies.</b> It is a barometer of inter-trial differences impacting the observed study result. A possible source of error in a meta-analysis is publication bias. Trial size may introduce publication bias since larger trials are more likely to be published. Language and accessibility represent other potential confounding factors. When the heterogeneity is not due to poor study design, it may be useful to optimize the treatment benefits for different cohorts of participants. <br />
<br />
Cochran's Q statistic is the weighted sum of squares on a standardized scale<sup>8</sup>. <b>The corresponding P value indicates the strength of the evidence for the presence of heterogeneity.</b> This test may sometimes have low power to detect heterogeneity, so it is suggested to use a value of 0.10 as the cut-off for significance (Higgins et al., 2003). Conversely, the Q statistic may have too much power as a test of heterogeneity when the number of studies is large.<br />
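The computation behind Cochran's Q can be sketched directly in R; the effect estimates and standard errors below are hypothetical, chosen purely for illustration:<br />

```r
# Sketch: Cochran's Q and the I^2 statistic computed by hand for hypothetical
# study-level effect estimates (yi) with standard errors (sei),
# using inverse-variance (fixed-effects) weights.
yi  <- c(0.42, 0.57, 0.35, 0.61, 0.48)    # hypothetical effect estimates
sei <- c(0.21, 0.18, 0.25, 0.20, 0.22)    # hypothetical standard errors
wi  <- 1 / sei^2                          # inverse-variance weights
theta.pooled <- sum(wi * yi) / sum(wi)    # fixed-effects pooled estimate
Q  <- sum(wi * (yi - theta.pooled)^2)     # Cochran's Q: weighted sum of squares
df <- length(yi) - 1
p.value <- pchisq(Q, df, lower.tail = FALSE)  # P value vs. chi-square(df)
I2 <- max(0, (Q - df) / Q)                # I^2: fraction of variation due to heterogeneity
```

A small P value (here, below the suggested 0.10 cut-off) would indicate evidence of heterogeneity across the studies.<br />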
<br />
===Simulation Example 1===<br />
<br />
# Install and Load library<br />
install.packages("meta")<br />
library(meta)<br />
<br />
# Set number of studies<br />
n.studies = 15<br />
<br />
# number of treatments: case1, case2, control<br />
n.trt = 3<br />
<br />
# number of outcomes<br />
n.event = 2<br />
<br />
# simulate the (balanced) number of cases (case1 and case2) and controls in each study<br />
ctl.group = rbinom(n = n.studies, size = 200, prob = 0.3)<br />
case1.group = rbinom(n = n.studies, size = 200, prob = 0.3)<br />
case2.group = rbinom(n = n.studies, size = 200, prob = 0.3)<br />
<br />
# Simulate the number of outcome events (e.g., deaths) and no events in the control group<br />
event.ctl.group = rbinom(n = n.studies, size = ctl.group, prob = rep(<mark>0.1</mark>, length(ctl.group)))<br />
noevent.ctl.group = ctl.group - event.ctl.group<br />
<br />
# Simulate the number of events and no events in the case1 group<br />
event.case1.group = rbinom(n = n.studies, size = case1.group, prob = rep(<mark>0.5</mark>, length(case1.group)))<br />
noevent.case1.group = case1.group - event.case1.group<br />
<br />
# Simulate the number of events and no events in the case2 group<br />
event.case2.group = rbinom(n = n.studies, size = case2.group, prob = rep(<mark>0.6</mark>, length(case2.group)))<br />
noevent.case2.group = case2.group - event.case2.group<br />
<br />
# Run the univariate meta-analysis using <b>metabin()</b>, Meta-analysis of binary outcome data – <br />
# Calculation of fixed and random effects estimates (risk ratio, odds ratio, risk difference or arcsine<br />
# difference) for meta-analyses with binary outcome data. Mantel-Haenszel (MH), <br />
# inverse variance and Peto method are available for pooling.<br />
<br />
# <b>method</b> = A character string indicating which method is to be used for pooling of studies. <br />
# one of "MH" , "Inverse" , or "Cochran"<br />
# sm = A character string indicating which summary measure ("OR"=odds ratio, "RR"=risk ratio, <br />
# "RD"=risk difference) is to be used for pooling of studies<br />
<br />
# Control vs. Case1, n.e and n.c are numbers in experimental and control groups<br />
meta.ctr_case1 <- metabin(event.e = <b>event.case1.group</b>, n.e = case1.group, event.c = <b>event.ctl.group</b>, <br />
n.c = ctl.group, method = "MH", sm = "OR")<br />
# In this case we use the Odds Ratio (OR), comparing the odds of the outcome (e.g., death) in the experimental and control groups<br />
forest(meta.ctr_case1)<br />
<br />
<center>[[Image:SMHS_Methods8.png|500px]] </center><br />
<br />
# Control vs. Case2<br />
meta.ctr_case2 <- metabin(event.e = event.case2.group, n.e = case2.group, event.c = event.ctl.group, <br />
n.c = ctl.group, method = "MH", sm = "OR")<br />
forest(meta.ctr_case2)<br />
<br />
<center>[[Image:SMHS_Methods9.png|500px]] </center><br />
<br />
# Case1 vs. Case2<br />
meta.case1_case2 <- metabin(event.e = event.case1.group, n.e = case1.group, event.c = event.case2.group, <br />
n.c = case2.group, method = "MH", sm = "OR")<br />
forest(meta.case1_case2)<br />
summary(meta.case1_case2)<br />
<br />
Test of heterogeneity:<br />
Q d.f. p-value<br />
11.99 14 0.6071<br />
<br />
<center>[[Image:SMHS_Methods10.png|500px]] </center><br />
<br />
The <b>forest plot</b> and the ''I''<sup>2</sup> statistic indicate insufficient evidence to reject the null hypothesis of no study heterogeneity (Q=11.99, p=0.6071), so the fixed-effects model may be used.<br />
<br />
==Series of “N of 1” trials==<br />
<br />
This technique combines (a “series of”) n-of-1 trial data to identify HTE. An n-of-1 trial is a repeated crossover trial for a single patient, which randomly assigns the patient to one treatment vs. another for a given time period, after which the patient is re-randomized to treatment for the next time period, usually repeated for 4-6 time periods. Such trials are most feasibly done in chronic conditions, where little or no washout period is needed between treatments and treatment effects are identifiable in the short-term, such as pain or reliable surrogate markers. Combining data from identical n-of-1 trials across a set of patients enables the statistical analysis controlling for patient fixed or random effects, covariates, centers, or sequence effects, see <b>Figure</b> below. These combined trials are often analyzed within a Bayesian context using shrinkage estimators that combine individual and group mean treatment effects to create a “posterior” individual mean treatment effect estimate which is a form of inverse variance-weighted average of the individual and group effects. Such trials are typically more expensive than standard RCTs on a per-patient basis, however, they require much smaller sample sizes, often less than 100 patients (due to the efficient individual-as-own-control design), and create individual treatment effect estimates that are not possible in a non-crossover design<sup>9</sup>. For the individual patient, the treatment effect can be re-estimated after each time period, and the trial stopped at any point when the more effective treatment is identified with reasonable statistical certainty.<br />
<br />
====Example====<br />
<br />
A study involving 8 participants collected data across 30 days, in which 15 treatment days and 15 control days were randomly assigned within each participant<sup>10</sup>. The treatment effect is represented as a binary variable (control day=0; treatment day=1). The outcome variable represents the response to the intervention within each of the 8 participants. The study employed fixed-effects modeling, creating N − 1 dummy-coded variables to represent the N=8 participants, where the last (i=8) participant serves as the reference (i.e., as the model intercept). Each dummy-coded variable thus represents the difference between a given participant (i) and the 8th participant, so all other patients' values are relative to the values of the 8th (reference) subject. The overall differences across participants in fixed effects can be evaluated with multiple <b>degree-of-freedom F-tests.</b><br />
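As a minimal sketch (with simulated, hypothetical data, not the study's actual dataset), the dummy-coding scheme can be expressed in R; `factor()` generates the N − 1 indicator variables automatically, with the releveled reference participant absorbed into the intercept:<br />

```r
# Fixed-effects N-of-1 model: participant indicators plus a treatment effect.
set.seed(3)
dat <- data.frame(ID = factor(rep(1:8, each = 30)),   # 8 participants, 30 days each
                  Tx = rbinom(8 * 30, 1, 0.5))        # hypothetical treatment-day indicator
dat$PhyAct <- 40 + 5 * dat$Tx + rnorm(nrow(dat), sd = 10)  # hypothetical outcome
# Releveling makes participant 8 the reference (i.e., the model intercept):
dat$ID <- relevel(dat$ID, ref = "8")
fit <- lm(PhyAct ~ ID + Tx, data = dat)   # 7 dummy-coded ID terms + treatment effect
anova(fit)                                 # multiple degree-of-freedom F-test across participants
```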
<br />
<center>[[Image:SMHS_Methods11.png|500px]] </center><br />
<br />
<center><br />
<br />
{| class="wikitable" style="text-align:center; " border="1"<br />
|-<br />
|ID||Day||Tx||SelfEff||SelfEff25||WPSS||SocSuppt||PMss||PMss3||PhyAct<br />
|-<br />
|1||1||1||33||8||0.97||5.00||4.03||1.03||53<br />
|-<br />
|1||2||1||33||8||-0.17||3.87||4.03||1.03||73<br />
|-<br />
|1||3||0||33||8||0.81||4.84||4.03||1.03||23<br />
|-<br />
|1||4||0||33||8||-0.41||3.62||4.03||1.03||36<br />
|-<br />
|...||...||...||...||...||...||...||...||...||...<br />
<br />
|}<br />
</center> Complete data is available in the <b>Appendix.</b><br />
<br />
<br />
<br />
<center>Data Summary<br />
<br />
{| class="wikitable" style="text-align:center; " border="1"<br />
|-<br />
|Intercept||Constant<br />
|-<br />
|Physical Activity||PhyAct<br />
|-<br />
|Intervention||Tx<br />
|-<br />
|WP Social Support||WPSS<br />
|-<br />
|PM Social Support (1-3)||PMss3<br />
|-<br />
|Self Efficacy||SelfEff25<br />
<br />
|}<br />
</center><br />
<br />
rm(list=ls())<br />
Nof1 <-read.table("https://umich.instructure.com/files/330385/download?download_frd=1&verifier=DwJUGSd6t24dvK7uYmzA2aDyzlmsohyaK6P7jK0Q", sep=",", header = TRUE) # 02_Nof1_Data.csv<br />
attach(Nof1)<br />
head(Nof1)<br />
<br />
<center><br />
<br />
{| class="wikitable" style="text-align:center; " border="1"<br />
|-<br />
| ||ID||Day||Tx||SelfEff||SelfEff25||WPSS||SocSuppt||PMss||PMss3||PhyAct<br />
|-<br />
|1||1||1||1||33||8||0.97||5.00||4.03||1.03||53<br />
|-<br />
|2||1||2||1||33||8||-0.17||3.87||4.03||1.03||73<br />
|-<br />
|3||1||3||0||33||8||0.81||4.84||4.03||1.03||23<br />
|-<br />
|4||1||4||0||33||8||-0.41||3.62||4.03||1.03||36<br />
|-<br />
|5||1||5||1||33||8||0.59||4.62||4.03||1.03||21<br />
|-<br />
|6||1||6||1||33||8||-1.16||2.87||4.03||1.03||0<br />
<br />
|}<br />
</center><br />
<br />
df.1 = data.frame(PhyAct, Tx, WPSS, PMss3, SelfEff, SelfEff25, Day, ID) # include all model variables<br />
<br />
library("lme4") # provides lmer() for linear mixed-effects models<br />
<br />
lm.1 <- lmer(PhyAct ~ Tx + SelfEff + Tx*SelfEff + (1|Day) + (1|ID), data = df.1)<br />
summary(lm.1)<br />
<br />
Linear mixed model fit by REML ['lmerMod']<br />
Formula: PhyAct ~ Tx + SelfEff + Tx * SelfEff + (1 | Day) + (1 | ID)<br />
Data: df.1<br />
<br />
REML criterion at convergence: 8820<br />
<br />
<center> Scaled Residuals<br />
{| class="wikitable" style="text-align:center; " border="1"<br />
|-<br />
| Min||1Q||Median||3Q||Max<br />
|-<br />
|-2.7012||-0.6833||-0.0333||0.6542||3.9612<br />
|}<br />
</center><br />
<br />
<br />
<center> Random Effects<br />
{| class="wikitable" style="text-align:center; " border="1"<br />
|-<br />
| Groups ||Name||Variance ||Std.Dev.<br />
|-<br />
| Day||(Intercept) ||0.0 || 0.00 <br />
|-<br />
<br />
|ID|| (Intercept)||601.5||24.53 <br />
|-<br />
<br />
|Residual|| ||969.0 ||31.13 <br />
|}<br />
Number of obs: 900, groups: Day, 30; ID, 30<br />
</center> <br />
<br />
<br />
<center> Fixed Effects<br />
{| class="wikitable" style="text-align:center; " border="1"<br />
|-<br />
|Estimate||Std.||Error||t value<br />
|-<br />
|(Intercept)||38.3772||14.4738||2.651<br />
|-<br />
|Tx||4.0283||6.3745||0.632<br />
|-<br />
|SelfEff||0.5818||0.5942||0.979<br />
|-<br />
|Tx:SelfEff||0.9702||0.2617||3.708<br />
|}<br />
</center><br />
<br />
<br />
<center> Correlation of Fixed Effects<br />
{| class="wikitable" style="text-align:center; " border="1"<br />
|-<br />
| ||(Intr)||Tx ||SlfEff<br />
|-<br />
| Tx|| -0.220|| || <br />
|-<br />
| SelfEff||-0.946 ||0.208 || <br />
|-<br />
| Tx:SelfEff ||0.208 ||-0.946 ||-0.220<br />
|}<br />
</center><br />
<br />
<br />
# Model: PhyAct = Tx + WPSS + PMss3 + Tx*WPSS + Tx*PMss3 + SelfEff25 + Tx*SelfEff25 + ε<br />
lm.2 = lm(PhyAct ~ Tx + WPSS + PMss3 + Tx*WPSS + Tx*PMss3 + SelfEff25 + Tx*SelfEff25, df.1) <br />
summary(lm.2)<br />
<br />
Call:<br />
lm(formula = PhyAct ~ Tx + WPSS + PMss3 + Tx * WPSS + Tx * PMss3 + <br />
SelfEff25 + Tx * SelfEff25, data = df.1)<br />
<br />
<center> Residuals<br />
{| class="wikitable" style="text-align:center; " border="1"<br />
|-<br />
|Min||1Q||Median||3Q||Max <br />
|-<br />
| -102.39||-28.24||-1.47||25.16||122.41 <br />
<br />
|}<br />
</center><br />
<br />
<center>Coefficients<br />
{| class="wikitable" style="text-align:center; " border="1"<br />
|-<br />
| ||Estimate||Std. Error||t value||$Pr(>|t|)$<br />
|- <br />
|(Intercept)||52.0067||1.8080||28.764||< 2e-16 ***<br />
|-<br />
|Tx||27.7366||2.5569||10.848||< 2e-16 ***<br />
|-<br />
|WPSS||1.9631||2.4272||0.809||0.418853 <br />
|- <br />
|PMss3||13.5110||2.7853||4.851||1.45e-06 ***<br />
|-<br />
|SelfEff25||0.6289||0.2205||2.852||0.004439 ** <br />
|-<br />
|Tx:WPSS||9.9114||3.4320||2.888||0.003971 ** <br />
|-<br />
|Tx:PMss3||8.8422||3.9390||2.245||0.025025 * <br />
|-<br />
|Tx:SelfEff25||1.0460||0.3118||3.354||0.000829 ***<br />
<br />
<br />
|}<br />
</center><br />
<br />
[Using SAS (StudyI_Analyses.sas, StudyIIab_Analyses.sas)]<br />
<br />
<center> Type 3 Tests of Fixed Effects<br />
{| class="wikitable" style="text-align:center; " border="1"<br />
|-<br />
|<b>Effect</b>||<b>Num DF</b>||<b>Den DF</b>||<b>F Value</b>||<b>$Pr>F$</b><br />
|-<br />
|<b>Tx</b>||1||224||67.46||<.0001 <br />
|-<br />
|<b>ID</b>||7||224||25.95||<.0001<br />
|-<br />
|<b>Tx*ID</b>||7||224||2.92||0.0060<br />
|}<br />
</center><br />
<br />
==Quantile Treatment Effect (QTE)==<br />
<br />
QTE employs quantile regression estimation (QRE) to examine the central tendency and statistical dispersion of the treatment effect in a population. These may not be revealed by the conventional mean estimation in RCTs. For instance, patients with different comorbidity scores may respond differently to a treatment. Quantile regression has the ability to reveal HTE according to the ranking of patients’ comorbidity scores or some other relevant covariate by which patients may be ranked. Therefore, in an attempt to inform patient-centered care, quantile regression provides more information on the distribution of the treatment effect than typical conditional mean treatment effect estimation. QTE characterizes the heterogeneous treatment effect on individuals and groups across various positions in the distributions of different outcomes of interest. This unique feature has given quantile regression analysis substantial attention and has been employed across a wide range of applications, particularly when evaluating the economic effects of welfare reform.<br />
<br />
One caveat of applying QRE in clinical trials for examining HTE is that the QTE doesn’t demonstrate the treatment effect for a given patient. Instead, it focuses on the treatment effect among subjects within the qth quantile, such as those who are exactly at the top 10th percent in terms of blood pressure or a depression score for some covariate of interest, for example, comorbidity score. It is not uncommon for the qth quantiles to be two different sets of patients before and after the treatment. For this reason, we have to assume that these two groups of patients are homogeneous if they were in the same quantiles.<br />
<br />
<b>Income-Food Expenditure Example:</b> Let’s examine the Engel data (N=235) on the relationship between food expenditure (foodexp) and household income (income)<sup>11</sup>. We can plot the data and then explore the superposition of the six fitted quantile regression lines. <br />
<br />
install.packages("quantreg")<br />
library(quantreg)<br />
data(engel)<br />
attach(engel)<br />
<br />
<center>head(engel)<br />
{| class="wikitable" style="text-align:center; " border="1"<br />
|-<br />
|||Income||Foodexp<br />
|- <br />
|1||420.1577||255.8394<br />
|-<br />
|2||541.4117||310.9587<br />
|-<br />
|3||901.1575||485.6800<br />
|- <br />
|4||639.0802||402.9974<br />
|-<br />
|5||750.8756||495.5608<br />
|-<br />
|6||945.7989||633.7978<br />
<br />
|}<br />
</center><br />
<br />
<br />
<center>summary(engel)<br />
{| class="wikitable" style="text-align:center; " border="1"<br />
|-<br />
|||Income||Foodexp<br />
|- <br />
|Min||377.1||242.3<br />
|-<br />
|1st Qu.||638.9||429.7<br />
|-<br />
|Median||884.0||582.5<br />
|- <br />
|Mean||982.5||624.2<br />
|-<br />
|3rd Qu.||1164.0||743.9<br />
|-<br />
|Max||4957.8||2032.7<br />
<br />
|}<br />
</center><br />
<br />
Note: If <i>Y</i> is a real-valued random variable with cumulative distribution function F<sub>Y</sub>(y)=P(Y≤ y), then the τ-quantile of <i>Y</i> is given by<br />
<br />
<center> Q<sub>Y</sub>(τ)=F<sub>Y</sub><sup>-1</sup>(τ)=inf{ y:F<sub>Y</sub>(y)≥τ} </center><br />
<br />
where 0≤τ≤1.<br />
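This inverse-CDF definition can be illustrated with a short R sketch on simulated (hypothetical) data:<br />

```r
# Sketch: the tau-quantile as the inverse CDF, Q(tau) = inf{ y : F(y) >= tau }.
set.seed(1)
y <- rnorm(1000, mean = 600, sd = 150)    # hypothetical "income-like" sample
tau <- 0.25
q1 <- quantile(y, probs = tau, type = 1)  # type = 1 uses the inverse-CDF definition
Fn <- ecdf(y)                             # empirical CDF of the sample
Fn(q1) >= tau                             # TRUE: F(Q(tau)) >= tau by construction
```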
<br />
<center>[[Image:SMHS_Methods12.png|500px]] </center><br />
<br />
# (1) Graphics<br />
plot(income, foodexp, cex=.25, type="n", xlab="Household Income", ylab="Food Expenditure")<br />
points(income, foodexp, cex=.5, col="blue")<br />
<br />
# tau - the quantile(s) to be estimated, in the range from 0 to 1. An object "rq.process" and an object "rqs" <br />
# are returned containing the matrix of coefficient estimates at the specified quantiles.<br />
abline( rq(foodexp ~ income, tau=.5), col="blue") # Quantile Regression Model<br />
<br />
abline( lm(foodexp ~ income), lty=2, lwd=3, col="red") # linear model<br />
taus <- c(0.05, 0.1, 0.25, 0.75, 0.90, 0.95)<br />
colors <- rainbow(length(taus))<br />
<br />
models <- vector(mode = "list", length = length(taus)) # define a vector of models to store QR for diff taus<br />
model.names <- vector(mode = "list", length = length(taus)) # define a vector model names<br />
<br />
for( i in 1:length(taus)){<br />
models[[i]] <- rq(foodexp ~ income, tau=taus[i]) <br />
var <- taus[i]<br />
model.names[[i]] <- paste("Model [", i , "]: tau=", var)<br />
abline( models[[i]], lwd=2, col= colors[[i]])<br />
}<br />
legend(3000, 1100, model.names, col= colors, pch= taus, bty='n', cex=.75)<br />
<br />
<center>[[Image:SMHS_Methods13.png|500px]] </center><br />
<br />
# (2) Inference about quantile regression coefficients. As an alternative to the rank-inversion confidence intervals, we can obtain a table of coefficients, standard errors, t-statistics, and p-values using the summary function:<br />
<br />
<b>summary(models[[3]], se = "nid")</b><br />
<br />
Call: rq(formula = foodexp ~ income, tau = taus[i])<br />
<br />
tau: [1] 0.25<br />
<br />
<center>Coefficients<br />
{| class="wikitable" style="text-align:center; " border="1"<br />
|-<br />
|||Value||Std. Error||t Value||$Pr(>|t|)$<br />
|- <br />
|(Intercept)||95.48354||21.39237||4.46344||0.00001<br />
|-<br />
|Income||0.47410||0.02906||16.31729||0.00000<br />
<br />
|}<br />
</center><br />
<br />
# summary.rq() can also be called directly; bootstrapped standard errors are available via se = "boot".<br />
# Here we again use se = "nid", so the results match the previous call.<br />
summary.rq(models[[3]], se = "nid")<br />
<br />
Call: rq(formula = foodexp ~ income, tau = taus[i])<br />
tau: [1] 0.25<br />
<br />
<center>Coefficients<br />
{| class="wikitable" style="text-align:center; " border="1"<br />
|-<br />
|||Value||Std. Error||t Value||$Pr(>|t|)$<br />
|- <br />
|(Intercept)||95.48354||21.39237||4.46344||0.00001<br />
|-<br />
|Income||0.47410||0.02906||16.31729||0.00000<br />
<br />
|}<br />
</center><br />
<br />
==Nonparametric Regression Methods ==<br />
<br />
Nonparametric regression enables dealing with HTE in RCTs. Different nonparametric methods, such as kernel smoothing methods and series methods, can be used to generate test statistics for examining the presence of HTE. A kernel method is a weighting scheme based on a kernel function (e.g., uniform, Gaussian). When evaluating the treatment effect of a patient in RCTs, the kernel method assigns larger weights to those observations with similar covariates. This is done because it is assumed that patients with similar covariates provide more relevant data on predicted treatment response. When participants have different backgrounds (e.g., demographic, clinical), kernel smoothing methods still utilize information from highly divergent participants when estimating a particular subject’s treatment effect, but assign lower weights to very different subjects. Kernel methods require choosing a set of smoothing parameters (bandwidths) to group patients according to their relative degree of similarity. A drawback is that the corresponding proposed test statistics may be sensitive to the chosen bandwidths, which inhibits the interpretation of the results. Series methods use approximating functions (splines or power series of the explanatory variables) to construct test statistics. Compared to kernel smoothing methods, series methods normally have the advantage of computational convenience; however, the precision of test statistics depends on the number of terms selected in the series. <br />
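The kernel weighting idea can be sketched in a few lines of R (hypothetical data; the bandwidth h is fixed arbitrarily here rather than chosen by cross-validation, as a proper analysis would do):<br />

```r
# Nadaraya-Watson style local-constant estimate at a point x0:
# observations with covariates near x0 receive larger Gaussian-kernel weights.
set.seed(2)
x <- runif(100, 20, 65)                                  # hypothetical covariate (e.g., age)
y <- 10 + 0.2 * x - 0.002 * x^2 + rnorm(100, sd = 0.3)   # hypothetical response
x0 <- 40                                                 # evaluation point
h  <- 5                                                  # arbitrary fixed bandwidth
w  <- dnorm((x - x0) / h)                                # Gaussian kernel weights
yhat.x0 <- sum(w * y) / sum(w)                           # weighted average = local-constant fit
```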
<br />
<b>Canadian Wage Data Example:</b> Nonparametric regression extends the classical parametric regression (e.g., lm, lmer) involving one continuous dependent variable, y, and (1 or more) continuous explanatory variable(s), x. Let’s start with a popular parametric model of a wage equation that we can extend to a fully nonparametric regression model. First, we will compare and contrast the parametric and nonparametric approach towards univariate regression and then proceed to multivariate regression.<br />
<br />
Let’s use the Canadian cross-section wage data (<b>cps71</b>) consisting of a random sample taken from the 1971 Canadian Census for male individuals having common education (High-School). N=205 observations, 2 variables, the logarithm of the individual’s wage (logwage) and their age (age). The classical wage equation model includes a quadratic term of age.<br />
<br />
# install.packages("np")<br />
library("np")<br />
data("cps71")<br />
<br />
# (1) Linear Model -> R<sup>2</sup> = 0.2308<br />
model.lin <- lm( logwage ~ age + I(age^2), data = cps71)<br />
summary(model.lin)<br />
<br />
Call:<br />
lm(formula = logwage ~ age + I(age^2), data = cps71)<br />
<br />
Residuals:<br />
Min 1Q Median 3Q Max <br />
-2.4041 -0.1711 0.0884 0.3182 1.3940 <br />
<br />
<br />
<center>Coefficients<br />
{| class="wikitable" style="text-align:center; " border="1"<br />
|-<br />
| ||Estimate||Std. Error||t Value||$Pr(>|t|)$<br />
|- <br />
|(Intercept)||10.0419773||0.4559986||22.022||< 2e-16 ***<br />
|-<br />
|Age||0.1731310||0.0238317|| 7.265||7.96e-12 ***<br />
|-<br />
|I(age^2)||-0.0019771||0.0002898||-6.822||1.02e-10 ***<br />
<br />
|}<br />
</center><br />
<br />
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1<br />
<br />
Residual standard error: 0.5608 on 202 degrees of freedom<br />
Multiple R-squared: <mark>0.2308</mark>, Adjusted R-squared: 0.2232 <br />
F-statistic: 30.3 on 2 and 202 DF, p-value: 3.103e-12<br />
<br />
# (2) Next, we consider the local linear nonparametric method employing cross-validated <br />
# bandwidth selection and estimation in one step. Start with computing the least-squares<br />
# cross-validated bandwidths for the local constant estimator (default).<br />
# Note that <b>R<sup>2</sup> = 0.3108675</b><br />
bandwidth <- npregbw(formula= logwage ~ age, data = cps71)<br />
model.np <- npreg(bandwidth, regtype = "ll", bwmethod = "cv.aic", gradients = TRUE, data = cps71)<br />
summary(model.np)<br />
<br />
Regression Data: 205 training points, in 1 variable(s) age<br />
Bandwidth(s): 1.892157<br />
Kernel Regression Estimator: Local-Constant<br />
Bandwidth Type: Fixed<br />
Residual standard error: 0.5307943<br />
R-squared: <b><mark>0.3108675</mark></b><br />
Continuous Kernel Type: Second-Order Gaussian<br />
No. Continuous Explanatory Vars.: 1<br />
<br />
# NP model significance may be tested by<br />
npsigtest(model.np)<br />
<br />
Kernel Regression Significance Test<br />
Type I Test with IID Bootstrap (399 replications, Pivot=TRUE, joint=FALSE)<br />
Explanatory variables tested for significance: age (1)<br />
<br />
age<br />
Bandwidth(s): 1.892157<br />
<br />
Individual Significance Tests<br />
P Value: <br />
age < 2.22e-16 ***<br />
<br />
# So, as was the case for the linear parametric model, Age is significant in the local linear NP-model<br />
<br />
# (3) Graphical comparison of parametric and nonparametric models. <br />
plot(cps71$\$$age, cps71$\$$logwage, xlab = "age", ylab = "log(wage)", cex=.1)<br />
lines(cps71$\$$age, fitted(model.lin), lty = 2, col = " red")<br />
lines(cps71$\$$age, fitted(model.np), lty = 1, col = "blue")<br />
legend("topright", c("Data", "Linear", "Non-linear"), col=c("Black", "Red", "Blue"), pch = c(1, 1, 1), bty='n', cex=.75)<br />
<br />
<center>[[Image:SMHS_Methods14.png|500px]] </center><br />
<br />
# Additional plots presenting the parametric (quadratic, dashed line) and the nonparametric <br />
# (solid line) estimates of the regression function for the cps71 data. <br />
plot(model.np, plot.errors.method = "asymptotic")<br />
plot(model.np, gradients = TRUE)<br />
lines(cps71$\$$age, coef(model.lin)[2]+2*cps71$\$$age*coef(model.lin)[3], lty = 2, col = "red")<br />
plot(model.np, gradients = TRUE, plot.errors.method = "asymptotic")<br />
<br />
# (4) using the Lin and NL models to generate predictions based on the obtained appropriate <br />
# bandwidths and estimated a nonparametric model. We need to create a set of explanatory<br />
# variables for which to generate predictions. These can be part of the original dataset or be<br />
# outside its scope. Typically, we don’t have the outcome for the evaluation data and need only <br />
# provide the explanatory variables for which predicted values are generated by the models.<br />
# Occasionally, splitting the dataset into two independent samples (training/testing), allows estimation<br />
# of a model on one sample, and evaluation of its performance on another.<br />
<br />
cps.eval.data <- data.frame(age = seq(10,70, by=10)) # simulate some explanatory X values (ages)<br />
pred.lin <- predict(model.lin, newdata = cps.eval.data) # Linear Prediction of log(Wage)<br />
pred.np <- predict(model.np, newdata = cps.eval.data) # non-Linear Prediction of log(Wage)<br />
plot(pred.lin, pred.np)<br />
abline(lm(pred.np ~ pred.lin))<br />
<br />
<center>[[Image:SMHS_Methods15.png|500px]] </center><br />
<br />
<br />
==Predictive risk models ==<br />
<br />
Predictive risk models represent a class of methods for identifying potential for HTE when the individual patient risk for disease-related events at baseline depends on observed factors. For instance, common measures are disease staging criteria, such as those used in COPD or heart failure, Framingham risk scores for cardiovascular event risk, or genetic variations, e.g., HER2 for breast cancer. Initial predictive risk modeling, also known as risk function estimation, is often performed without accounting for treatment effects. Least squares or Cox proportional hazards regression methods are appropriate in many cases and provide relatively more interpretable risk functions, but rely on linearity assumptions and may not provide optimal predictive metrics. Partial least squares is an extension of least squares methods that can reduce the dimensionality of the predictor space by interposing latent variables, predicted by linear combinations of observable characteristics, as the intermediate predictors of one or more outcomes. Recursive partitioning methods (e.g., random forests), support vector machines, and neural networks are more recent methods that often offer better predictive power than linear methods. Risk function estimation can range from highly exploratory analyses to near meta-analytic model validation, and may be useful at any stage of product development.<br />
<br />
<b>HIV Example:</b> The <b>“hmohiv”</b> dataset represents a study of HIV positive patients examining whether there was a difference in survival times of HIV positive patients between a cohort using intravenous drugs (drug=1) and a cohort not using the IV drug (drug=0). The <b>hmohiv</b> data includes the following variables:<br />
<br />
<center><br />
{| class="wikitable" style="text-align:center; " border="1"<br />
|-<br />
|ID||Time||Age||Drug||Censor||Entdate||Enddate<br />
|- <br />
|1||5||46||0||1||5/15/1990||10/14/1990<br />
|-<br />
|2||6||35||1||0||9/19/1989||3/20/1990<br />
|-<br />
|3||8||30||1||1||4/21/1991||12/20/1991<br />
|-<br />
|4||3||30||1||1||1/3/1991||4/4/1991<br />
|-<br />
|5||22||36||0||1||9/18/1989||7/19/1991<br />
|-<br />
|6||1||32||1||0||3/18/1991||4/17/1991<br />
|-<br />
|...||...||...||...||...||...||...<br />
<br />
<br />
|}<br />
</center><br />
<br />
#cleaning up environment<br />
rm(list=ls())<br />
<br />
# load survival library<br />
library(survival)<br />
<br />
# load hmohiv data<br />
hmohiv<-read.table("http://www.ats.ucla.edu/stat/r/examples/asa/hmohiv.csv", sep=",", header = TRUE)<br />
attach(hmohiv)<br />
<br />
# Fit Cox proportional hazards regression model<br />
cox.model <- coxph( Surv(time, censor) ~ drug, method="breslow")<br />
<br />
# construct a frame of the 2 cohorts, no-IV-drug (0) and IV-drug (1); must be defined before survfit()<br />
drug.new <- data.frame(drug=c(0,1))<br />
fit.1 <- survfit(cox.model, newdata=drug.new)<br />
<br />
# plot results<br />
plot(fit.1, xlab="Survival Time (Months)", ylab="Survival Probability")<br />
points(fit.1$\$$time, fit.1$\$$surv[,1], pch=1)<br />
points(fit.1$\$$time, fit.1$\$$surv[,2], pch=2)<br />
legend(40, .8, c("Drug Absent", "Drug Present"), pch=c(1,2))<br />
<br />
<center>[[Image:SMHS_Methods16.png|500px]] </center><br />
<br />
# inspect the resulting Cox proportional hazards model<br />
cox.model <br />
Call:<br />
coxph(formula = Surv(time, censor) ~ drug, method = "breslow")<br />
<br />
coef exp(coef) se(coef) z p<br />
<b>drug</b> 0.779 2.18 0.242 3.22 <b>0.0013</b><br />
<br />
Likelihood ratio test=10.2 on 1 df, p=0.00141 n= 100, number of events= 80 <br />
<br />
===Footnotes===<br />
<br />
*<sup>8</sup> http://onlinelibrary.wiley.com/enhanced/doi/10.1002/jrsm.54<br />
*<sup>9</sup> http://effectivehealthcare.ahrq.gov/search-for-guides-reviews-and-reports/?pageaction=displayproduct&productID=1857 <br />
*<sup>10</sup> http://jpepsy.oxfordjournals.org/content/39/2/138.full#sec-14<br />
*<sup>11</sup> http://www.ers.usda.gov/media/200576/err32c_1_.pdf<br />
<br />
==[[SMHS_MethodsHeterogeneity_CER|Next see: Comparative Effectiveness Research (CER)]]==<br />
<br />
*[[SMHS_MethodsHeterogeneity|Back to the Heterogeneity of Treatment Effects, Case-Studies of Comparative Effectiveness Research section]]<br />
<br />
<hr><br />
* SOCR Home page: http://www.socr.umich.edu<br />
<br />
{{translate|pageName=http://wiki.socr.umich.edu/index.php/SMHS_MethodsHeterogeneity_MetaAnalysis}}</div>Pineaumihttps://wiki.socr.umich.edu/index.php?title=SMHS_MethodsHeterogeneity_MetaAnalysis&diff=16220SMHS MethodsHeterogeneity MetaAnalysis2016-05-23T18:53:40Z<p>Pineaumi: /* Nonparametric Regression Methods */</p>
<hr />
<div>==[[SMHS_MethodsHeterogeneity| Methods for Studying Heterogeneity of Treatment Effects, Case-Studies of Comparative Effectiveness Research]] - Meta-Analyses ==<br />
<br />
==Meta-analysis==<br />
<br />
===Overview===<br />
<br />
Meta-analysis is an approach for combining treatment effects across trials or studies into an aggregated treatment effect with higher statistical power than observed in any individual trial. It may detect HTE by testing for differences in treatment effects across similar RCTs. It requires that the individual treatment effects be similar enough for pooling to be meaningful. In the presence of large clinical or methodological differences between the trials, it may be best to avoid meta-analysis. The presence of HTE across studies in a meta-analysis may be due to differences in the design or execution of the individual trials (e.g., randomization methods, patient selection criteria). <b>Cochran's Q is a method for detecting heterogeneity; it is computed as the weighted sum of squared differences between each study's treatment effect and the pooled effect across the studies.</b> It is a barometer of inter-trial differences impacting the observed study result. A possible source of error in a meta-analysis is publication bias. Trial size may introduce publication bias, since larger trials are more likely to be published. Language and accessibility represent other potential confounding factors. When the heterogeneity is not due to poor study design, it may be useful to optimize the treatment benefits for different cohorts of participants. <br />
<br />
The Cochran's Q statistic is the weighted sum of squares on a standardized scale<sup>8</sup>. <b>The corresponding p-value indicates the strength of the evidence for the presence of heterogeneity.</b> This test may have low power to detect heterogeneity, and it has been suggested to use a value of 0.10 as a cut-off for significance (Higgins et al., 2003). Conversely, the Q statistic may have excessive power as a test of heterogeneity when the number of studies is large.<br />
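The computation behind Cochran's Q is simple enough to sketch directly. The minimal Python snippet below (Python is used only for illustration; the worked examples in this section use R) computes Q from study-level effect estimates and their variances, together with Higgins' ''I''<sup>2</sup>, which re-expresses Q as the percentage of total variation across studies attributable to heterogeneity:<br />

```python
def cochran_q(effects, variances):
    """Cochran's Q: inverse-variance weighted sum of squared deviations
    of each study's effect from the pooled (fixed-effect) estimate."""
    w = [1.0 / v for v in variances]
    pooled = sum(wi * ei for wi, ei in zip(w, effects)) / sum(w)
    q = sum(wi * (ei - pooled) ** 2 for wi, ei in zip(w, effects))
    df = len(effects) - 1
    # Higgins' I^2 (as a percentage), truncated at 0
    i2 = 100.0 * max(0.0, (q - df) / q) if q > 0 else 0.0
    return q, df, i2

# Identical study effects: Q = 0, no heterogeneity
q0, df0, i2_0 = cochran_q([0.5, 0.5, 0.5], [0.1, 0.2, 0.1])

# Divergent study effects: Q grows with the disagreement between studies
q1, df1, i2_1 = cochran_q([0.2, 0.8], [0.1, 0.1])
```

In the second call the two studies disagree strongly relative to their precision, so Q is large compared to its degrees of freedom and ''I''<sup>2</sup> is well above zero.<br />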
<br />
===Simulation Example 1===<br />
<br />
# Install and Load library<br />
install.packages("meta")<br />
library(meta)<br />
<br />
# Set number of studies<br />
n.studies = 15<br />
<br />
# number of treatments: case1, case2, control<br />
n.trt = 3<br />
<br />
# number of outcomes<br />
n.event = 2<br />
<br />
# simulate the (balanced) number of cases (case1 and case2) and controls in each study<br />
ctl.group = rbinom(n = n.studies, size = 200, prob = 0.3)<br />
case1.group = rbinom(n = n.studies, size = 200, prob = 0.3)<br />
case2.group = rbinom(n = n.studies, size = 200, prob = 0.3)<br />
<br />
# Simulate the number of outcome events (e.g., deaths) and no events in the control group<br />
event.ctl.group = rbinom(n = n.studies, size = ctl.group, prob = rep(<mark>0.1</mark>, length(ctl.group)))<br />
noevent.ctl.group = ctl.group - event.ctl.group<br />
<br />
# Simulate the number of events and no events in the case1 group<br />
event.case1.group = rbinom(n = n.studies, size = case1.group, prob = rep(<mark>0.5</mark>, length(case1.group)))<br />
noevent.case1.group = case1.group - event.case1.group<br />
<br />
# Simulate the number of events and no events in the case2 group<br />
event.case2.group = rbinom(n = n.studies, size = case2.group, prob = rep(<mark>0.6</mark>, length(case2.group)))<br />
noevent.case2.group = case2.group - event.case2.group<br />
<br />
# Run the univariate meta-analysis using <b>metabin()</b>, Meta-analysis of binary outcome data – <br />
# Calculation of fixed and random effects estimates (risk ratio, odds ratio, risk difference or arcsine<br />
# difference) for meta-analyses with binary outcome data. Mantel-Haenszel (MH), <br />
# inverse variance and Peto method are available for pooling.<br />
<br />
# <b>method</b> = A character string indicating which method is to be used for pooling of studies. <br />
# one of "MH" , "Inverse" , or "Cochran"<br />
# sm = A character string indicating which summary measure (“OR”, "RR" "RD"=risk difference) is to be <br />
# used for pooling of studies<br />
<br />
# Control vs. Case1, n.e and n.c are numbers in experimental and control groups<br />
meta.ctr_case1 <- metabin(event.e = <b>event.case1.group</b>, n.e = case1.group, event.c = <b>event.ctl.group</b>, <br />
n.c = ctl.group, method = "MH", sm = "OR")<br />
# in this case we use Odds Ratio, of the odds of death in the experimental and control studies<br />
forest(meta.ctr_case1)<br />
<br />
<center>[[Image:SMHS_Methods8.png|500px]] </center><br />
<br />
# Control vs. Case2<br />
meta.ctr_case2 <- metabin(event.e = event.case2.group, n.e = case2.group, event.c = event.ctl.group, <br />
n.c = ctl.group, method = "MH", sm = "OR")<br />
forest(meta.ctr_case2)<br />
<br />
<center>[[Image:SMHS_Methods9.png|500px]] </center><br />
<br />
# Case1 vs. Case2<br />
meta.case1_case2 <- metabin(event.e = event.case1.group, n.e = case1.group, event.c = event.case2.group, <br />
n.c = case2.group, method = "MH", sm = "OR")<br />
forest(meta.case1_case2)<br />
summary(meta.case1_case2)<br />
<br />
Test of heterogeneity:<br />
Q d.f. p-value<br />
11.99 14 0.6071<br />
<br />
<center>[[Image:SMHS_Methods10.png|500px]] </center><br />
<br />
The <b>forest plot</b> and the heterogeneity test above (Q=11.99, df=14, p=0.6071) indicate that there is insufficient evidence to reject the null hypothesis of no study heterogeneity, so the fixed-effects model is appropriate.<br />
<br />
==Series of “N of 1” trials==<br />
<br />
This technique combines (a “series of”) n-of-1 trial data to identify HTE. An n-of-1 trial is a repeated crossover trial for a single patient, which randomly assigns the patient to one treatment vs. another for a given time period, after which the patient is re-randomized to treatment for the next time period, usually repeated for 4-6 time periods. Such trials are most feasibly done in chronic conditions, where little or no washout period is needed between treatments and treatment effects are identifiable in the short-term, such as pain or reliable surrogate markers. Combining data from identical n-of-1 trials across a set of patients enables the statistical analysis controlling for patient fixed or random effects, covariates, centers, or sequence effects, see <b>Figure</b> below. These combined trials are often analyzed within a Bayesian context using shrinkage estimators that combine individual and group mean treatment effects to create a “posterior” individual mean treatment effect estimate which is a form of inverse variance-weighted average of the individual and group effects. Such trials are typically more expensive than standard RCTs on a per-patient basis, however, they require much smaller sample sizes, often less than 100 patients (due to the efficient individual-as-own-control design), and create individual treatment effect estimates that are not possible in a non-crossover design<sup>9</sup>. For the individual patient, the treatment effect can be re-estimated after each time period, and the trial stopped at any point when the more effective treatment is identified with reasonable statistical certainty.<br />
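The shrinkage step described above can be sketched directly. Assuming a simple normal-normal setup (an illustrative assumption, not the exact estimator of any particular trial), the "posterior" individual mean is the inverse variance-weighted average of the individual and group mean treatment effects:<br />

```python
def shrink(ind_mean, ind_var, grp_mean, grp_var):
    """Inverse-variance weighted average of the individual and group
    mean treatment effects (the "posterior" individual effect)."""
    w_ind, w_grp = 1.0 / ind_var, 1.0 / grp_var
    return (w_ind * ind_mean + w_grp * grp_mean) / (w_ind + w_grp)

# A noisy individual estimate (mean 10, variance 4) is pulled toward
# a precisely estimated group mean (mean 6, variance 1):
post = shrink(10.0, 4.0, 6.0, 1.0)
```

With these illustrative numbers the posterior estimate is 6.8, much closer to the group mean; reversing the variances would instead let the individual's own data dominate.<br />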
<br />
====Example====<br />
<br />
A study involving 8 participants collected data across 30 days, in which 15 treatment days and 15 control days were randomly assigned within each participant<sup>10</sup>. The treatment effect is represented as a binary variable (control day=0; treatment day=1). The outcome variable represents the response to the intervention within each of the 8 participants. The study employed fixed-effects modeling, creating N − 1 dummy-coded variables to represent the N=8 participants, with the last (i=8) participant serving as the reference (i.e., as the model intercept). Each dummy-coded variable thus represents the difference between participant i and the 8th participant, so all other participants' values are relative to those of the 8th (reference) subject. The overall differences across participants in fixed effects can be evaluated with multiple <b>degree-of-freedom F-tests.</b><br />
<br />
<center>[[Image:SMHS_Methods11.png|500px]] </center><br />
<br />
<center><br />
<br />
{| class="wikitable" style="text-align:center; " border="1"<br />
|-<br />
|ID||Day||Tx||SelfEff||SelfEff25||WPSS||SocSuppt||PMss||PMss3||PhyAct<br />
|-<br />
|1||1||1||33||8||0.97||5.00||4.03||1.03||53<br />
|-<br />
|1||2||1||33||8||-0.17||3.87||4.03||1.03||73<br />
|-<br />
|1||3||0||33||8||0.81||4.84||4.03||1.03||23<br />
|-<br />
|1||4||0||33||8||-0.41||3.62||4.03||1.03||36<br />
|-<br />
|...||...||...||...||...||...||...||...||...||...<br />
<br />
|}<br />
</center> Complete data is available in the <b>Appendix.</b><br />
<br />
<br />
<br />
<center>Data Summary<br />
<br />
{| class="wikitable" style="text-align:center; " border="1"<br />
|-<br />
|Intercept||Constant<br />
|-<br />
|Physical Activity||PhyAct<br />
|-<br />
|Intervention||Tx<br />
|-<br />
|WP Social Support||WPSS<br />
|-<br />
|PM Social Support (1-3)||PMss3<br />
|-<br />
|Self Efficacy||SelfEff25<br />
<br />
|}<br />
</center><br />
<br />
rm(list=ls())<br />
Nof1 <-read.table("https://umich.instructure.com/files/330385/download?download_frd=1&verifier=DwJUGSd6t24dvK7uYmzA2aDyzlmsohyaK6P7jK0Q", sep=",", header = TRUE) # 02_Nof1_Data.csv<br />
attach(Nof1)<br />
head(Nof1)<br />
<br />
<center><br />
<br />
{| class="wikitable" style="text-align:center; " border="1"<br />
|-<br />
| ||ID||Day||Tx||SelfEff||SelfEff25||WPSS||SocSuppt||PMss||PMss3||PhyAct<br />
|-<br />
|1||1||1||1||33||8||0.97||5.00||4.03||1.03||53<br />
|-<br />
|2||1||2||1||33||8||-0.17||3.87||4.03||1.03||73<br />
|-<br />
|3||1||3||0||33||8||0.81||4.84||4.03||1.03||23<br />
|-<br />
|4||1||4||0||33||8||-0.41||3.62||4.03||1.03||36<br />
|-<br />
|5||1||5||1||33||8||0.59||4.62||4.03||1.03||21<br />
|-<br />
|6||1||6||1||33||8||-1.16||2.87||4.03||1.03||0<br />
<br />
|}<br />
</center><br />
<br />
df.1 = data.frame(PhyAct, Tx, WPSS, PMss3, SelfEff25) <br />
<br />
library("lme4") # lmer() below requires the lme4 package<br />
<br />
# mixed model with random intercepts for Day and ID (SelfEff is resolved via the attached Nof1 data)<br />
lm.1 <- lmer(PhyAct ~ Tx + SelfEff + Tx*SelfEff + (1|Day) + (1|ID), data = df.1)<br />
summary(lm.1)<br />
<br />
Linear mixed model fit by REML ['lmerMod']<br />
Formula: PhyAct ~ Tx + SelfEff + Tx * SelfEff + (1 | Day) + (1 | ID)<br />
Data: df.1<br />
<br />
REML criterion at convergence: 8820<br />
<br />
<center> Scaled Residuals<br />
{| class="wikitable" style="text-align:center; " border="1"<br />
|-<br />
| Min||1Q||Median||3Q||Max<br />
|-<br />
|-2.7012||-0.6833||-0.0333||0.6542||3.9612<br />
|}<br />
</center><br />
<br />
<br />
<center> Random Effects<br />
{| class="wikitable" style="text-align:center; " border="1"<br />
|-<br />
| Groups ||Name||Variance ||Std.Dev.<br />
|-<br />
| Day||(Intercept) ||0.0 || 0.00 <br />
|-<br />
<br />
|ID|| (Intercept)||601.5||24.53 <br />
|-<br />
<br />
|Residual|| ||969.0 ||31.13 <br />
|}<br />
Number of obs: 900, groups: Day, 30; ID, 30<br />
</center> <br />
<br />
<br />
<center> Fixed Effects<br />
{| class="wikitable" style="text-align:center; " border="1"<br />
|-<br />
| ||Estimate||Std. Error||t value<br />
|-<br />
|(Intercept)||38.3772||14.4738||2.651<br />
|-<br />
|Tx||4.0283||6.3745||0.632<br />
|-<br />
|SelfEff||0.5818||0.5942||0.979<br />
|-<br />
|Tx:SelfEff||0.9702||0.2617||3.708<br />
|}<br />
</center><br />
<br />
<br />
<center> Correlation of Fixed Effects<br />
{| class="wikitable" style="text-align:center; " border="1"<br />
|-<br />
| ||(Intr)||Tx ||SlfEff<br />
|-<br />
| Tx|| -0.220|| || <br />
|-<br />
| SelfEff||-0.946 ||0.208 || <br />
|-<br />
| Tx:SelfEff ||0.208 ||-0.946 ||-0.220<br />
|}<br />
</center><br />
<br />
<br />
# Model: PhyAct = Tx + WPSS + PMss3 + Tx*WPSS + Tx*PMss3 + SelfEff25 + Tx*SelfEff25 + ε<br />
lm.2 = lm(PhyAct ~ Tx + WPSS + PMss3 + Tx*WPSS + Tx*PMss3 + SelfEff25 + Tx*SelfEff25, df.1) <br />
summary(lm.2)<br />
<br />
Call:<br />
lm(formula = PhyAct ~ Tx + WPSS + PMss3 + Tx * WPSS + Tx * PMss3 + <br />
SelfEff25 + Tx * SelfEff25, data = df.1)<br />
<br />
<center> Residuals<br />
{| class="wikitable" style="text-align:center; " border="1"<br />
|-<br />
|Min||1Q||Median||3Q||Max <br />
|-<br />
| -102.39||-28.24||-1.47||25.16||122.41 <br />
<br />
|}<br />
</center><br />
<br />
<center>Coefficients<br />
{| class="wikitable" style="text-align:center; " border="1"<br />
|-<br />
| ||Estimate||Std. Error||t value||$Pr(>|t|)$<br />
|- <br />
|(Intercept)||52.0067||1.8080||28.764||< 2e-16 ***<br />
|-<br />
|Tx||27.7366||2.5569||10.848||< 2e-16 ***<br />
|-<br />
|WPSS||1.9631||2.4272||0.809||0.418853 <br />
|- <br />
|PMss3||13.5110||2.7853||4.851||1.45e-06 ***<br />
|-<br />
|SelfEff25||0.6289||0.2205||2.852||0.004439 ** <br />
|-<br />
|Tx:WPSS||9.9114||3.4320||2.888||0.003971 ** <br />
|-<br />
|Tx:PMss3||8.8422||3.9390||2.245||0.025025 * <br />
|-<br />
|Tx:SelfEff25||1.0460||0.3118||3.354||0.000829 ***<br />
<br />
<br />
|}<br />
</center><br />
<br />
[Using SAS (StudyI_Analyses.sas, StudyIIab_Analyses.sas)]<br />
<br />
<center> Type 3 Tests of Fixed Effects<br />
{| class="wikitable" style="text-align:center; " border="1"<br />
|-<br />
|<b>Effect</b>||<b>Num DF</b>||<b>Den DF</b>||<b>F Value</b>||<b>$Pr>F$</b><br />
|-<br />
|<b>Tx</b>||1||224||67.46||<.0001 <br />
|-<br />
|<b>ID</b>||7||224||25.95||<.0001<br />
|-<br />
|<b>Tx*ID</b>||7||224||2.92||0.0060<br />
|}<br />
</center><br />
<br />
==Quantile Treatment Effect (QTE)==<br />
<br />
QTE employs quantile regression estimation (QRE) to examine the central tendency and statistical dispersion of the treatment effect in a population. These may not be revealed by the conventional mean estimation in RCTs. For instance, patients with different comorbidity scores may respond differently to a treatment. Quantile regression has the ability to reveal HTE according to the ranking of patients’ comorbidity scores or some other relevant covariate by which patients may be ranked. Therefore, in an attempt to inform patient-centered care, quantile regression provides more information on the distribution of the treatment effect than typical conditional mean treatment effect estimation. QTE characterizes the heterogeneous treatment effect on individuals and groups across various positions in the distributions of different outcomes of interest. This unique feature has given quantile regression analysis substantial attention and has been employed across a wide range of applications, particularly when evaluating the economic effects of welfare reform.<br />
<br />
One caveat of applying QRE in clinical trials for examining HTE is that the QTE doesn’t demonstrate the treatment effect for a given patient. Instead, it focuses on the treatment effect among subjects within the qth quantile, such as those who are exactly at the top 10th percent in terms of blood pressure or a depression score for some covariate of interest, for example, comorbidity score. It is not uncommon for the qth quantiles to be two different sets of patients before and after the treatment. For this reason, we have to assume that these two groups of patients are homogeneous if they were in the same quantiles.<br />
<br />
<b>Income-Food Expenditure Example:</b> Let’s examine the Engel data (N=235) on the relationship between food expenditure (foodexp) and household income (income)<sup>11</sup>. We can plot the data and then explore the superposition of the six fitted quantile regression lines. <br />
<br />
install.packages("quantreg")<br />
library(quantreg)<br />
data(engel)<br />
attach(engel)<br />
<br />
<center>head(engel)<br />
{| class="wikitable" style="text-align:center; " border="1"<br />
|-<br />
|||Income||Foodexp<br />
|- <br />
|1||420.1577||255.8394<br />
|-<br />
|2||541.4117||310.9587<br />
|-<br />
|3||901.1575||485.6800<br />
|- <br />
|4||639.0802||402.9974<br />
|-<br />
|5||750.8756||495.5608<br />
|-<br />
|6||945.7989||633.7978<br />
<br />
|}<br />
</center><br />
<br />
<br />
<center>summary(engel)<br />
{| class="wikitable" style="text-align:center; " border="1"<br />
|-<br />
|||Income||Foodexp<br />
|- <br />
|Min||377.1||242.3<br />
|-<br />
|1st Qu.||638.9||429.7<br />
|-<br />
|Median||884.0||582.5<br />
|- <br />
|Mean||982.5||624.2<br />
|-<br />
|3rd Qu.||1164.0||743.9<br />
|-<br />
|Max||4957.8||2032.7<br />
<br />
|}<br />
</center><br />
<br />
Note: If <i>Y</i> is a real-valued random variable with cumulative distribution function F<sub>Y</sub>(y)=P(Y≤ y), then the τ-quantile of <i>Y</i> is given by<br />
<br />
<center> Q<sub>Y</sub>(τ)=F<sub>Y</sub><sup>-1</sup>(τ)=inf{ y:F<sub>Y</sub>(y)≥τ} </center><br />
<br />
where 0≤τ≤1.<br />
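This definition can be made concrete against the empirical CDF of a sample. The short Python sketch below (illustrative only; the quantile regressions in this section are fit with the R quantreg package) evaluates Q<sub>Y</sub>(τ)=inf{ y:F<sub>n</sub>(y)≥τ}:<br />

```python
def tau_quantile(sample, tau):
    """Empirical tau-quantile: the smallest observed y with F_n(y) >= tau,
    where F_n is the empirical CDF (proportion of observations <= y)."""
    ys = sorted(sample)
    n = len(ys)
    for i, y in enumerate(ys, start=1):  # F_n(ys[i-1]) = i/n
        if i / n >= tau:
            return y
    return ys[-1]

# Median (tau = 0.5) of {1, 2, 3, 4}: F_n(2) = 0.5 >= 0.5, so Q(0.5) = 2
m = tau_quantile([3, 1, 4, 2], 0.5)
```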
<br />
<center>[[Image:SMHS_Methods12.png|500px]] </center><br />
<br />
# (1) Graphics<br />
plot(income, foodexp, cex=.25, type="n", xlab="Household Income", ylab="Food Expenditure")<br />
points(income, foodexp, cex=.5, col="blue")<br />
<br />
# tau - the quantile(s) to be estimated, in the range from 0 to 1. An object "rq.process" and an object "rqs" <br />
# are returned containing the matrix of coefficient estimates at the specified quantiles.<br />
abline( rq(foodexp ~ income, tau=.5), col="blue") # Quantile Regression Model<br />
<br />
abline( lm(foodexp ~ income), lty=2, lwd=3, col="red") # linear model<br />
taus <- c(0.05, 0.1, 0.25, 0.75, 0.90, 0.95)<br />
colors <- rainbow(length(taus))<br />
<br />
models <- vector(mode = "list", length = length(taus)) # define a vector of models to store QR for diff taus<br />
model.names <- vector(mode = "list", length = length(taus)) # define a vector model names<br />
<br />
for( i in 1:length(taus)){<br />
models[[i]] <- rq(foodexp ~ income, tau=taus[i]) <br />
var <- taus[i]<br />
model.names[[i]] <- paste("Model [", i , "]: tau=", var)<br />
abline( models[[i]], lwd=2, col= colors[[i]])<br />
}<br />
legend(3000, 1100, model.names, col= colors, pch= taus, bty='n', cex=.75)<br />
<br />
<center>[[Image:SMHS_Methods13.png|500px]] </center><br />
<br />
# (2) Inference about quantile regression coefficients. As an alternative to the rank-inversion confidence intervals, we can obtain a table of coefficients, standard errors, t-statistics, and p-values using the summary function:<br />
<br />
<b>summary(models[[3]], se = "nid")</b><br />
<br />
Call: rq(formula = foodexp ~ income, tau = taus[i])<br />
<br />
tau: [1] 0.25<br />
<br />
<center>Coefficients<br />
{| class="wikitable" style="text-align:center; " border="1"<br />
|-<br />
|||Value||Std. Error||t Value||$Pr(>|t|)$<br />
|- <br />
|(Intercept)||95.48354||21.39237||4.46344||0.00001<br />
|-<br />
|Income||0.47410||0.02906||16.31729||0.00000<br />
<br />
|}<br />
</center><br />
<br />
# summary.rq() can also be called directly; setting se = "boot" would instead produce bootstrapped standard errors<br />
summary.rq(models[[3]], se = "nid")<br />
<br />
Call: rq(formula = foodexp ~ income, tau = taus[i])<br />
tau: [1] 0.25<br />
<br />
<center>Coefficients<br />
{| class="wikitable" style="text-align:center; " border="1"<br />
|-<br />
|||Value||Std. Error||t Value||$Pr(>|t|)$<br />
|- <br />
|(Intercept)||95.48354||21.39237||4.46344||0.00001<br />
|-<br />
|Income||0.47410||0.02906||16.31729||0.00000<br />
<br />
|}<br />
</center><br />
<br />
==Nonparametric Regression Methods ==<br />
<br />
Nonparametric regression enables dealing with HTE in RCTs. Different nonparametric methods, such as kernel smoothing methods and series methods, can be used to generate test statistics for examining the presence of HTE. A kernel method is a weighting scheme based on a kernel function (e.g., uniform, Gaussian). When evaluating the treatment effect of a patient in RCTs, the kernel method assigns larger weights to observations with similar covariates, because patients with similar covariates are assumed to provide more relevant data on the predicted treatment response. Even when participants have very different backgrounds (e.g., demographic, clinical), kernel smoothing methods still utilize information from the divergent participants when estimating a particular subject's treatment effect; very dissimilar subjects simply receive lower weights. The kernel methods require choosing a set of smoothing parameters (bandwidths) to group patients according to their relative degrees of similarity. A drawback is that the corresponding test statistics may be sensitive to the chosen bandwidths, which inhibits the interpretation of the results. Series methods use approximating functions (splines or power series of the explanatory variables) to construct test statistics. Compared to kernel smoothing methods, series methods normally have the advantage of computational convenience; however, the precision of the test statistics depends on the number of terms selected in the series. <br />
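The kernel weighting scheme can be illustrated with a minimal local-constant (Nadaraya-Watson) estimator. The Python sketch below uses a Gaussian kernel and a fixed bandwidth h for illustration only; it is not the cross-validated implementation in the R np package used in the example that follows:<br />

```python
import math

def nw_estimate(x0, xs, ys, h):
    """Local-constant kernel regression estimate at x0: a weighted average
    of the observed ys, where observations whose covariates are close to x0
    (relative to bandwidth h) receive the largest Gaussian-kernel weights."""
    weights = [math.exp(-0.5 * ((x0 - x) / h) ** 2) for x in xs]
    return sum(w * y for w, y in zip(weights, ys)) / sum(weights)

# Two symmetric neighbors get equal weight, so the estimate at 0 is their mean:
est = nw_estimate(0.0, [-1.0, 1.0], [0.0, 2.0], 1.0)
```

Shrinking h makes the estimate increasingly local (more variance, less bias), which is why the bandwidth choice drives the behavior of the resulting test statistics.<br />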
<br />
<b>Canadian Wage Data Example:</b> Nonparametric regression extends the classical parametric regression (e.g., lm, lmer) involving one continuous dependent variable, y, and (1 or more) continuous explanatory variable(s), x. Let’s start with a popular parametric model of a wage equation that we can extend to a fully nonparametric regression model. First, we will compare and contrast the parametric and nonparametric approach towards univariate regression and then proceed to multivariate regression.<br />
<br />
Let’s use the Canadian cross-section wage data (<b>cps71</b>) consisting of a random sample taken from the 1971 Canadian Census for male individuals having common education (High-School). N=205 observations, 2 variables, the logarithm of the individual’s wage (logwage) and their age (age). The classical wage equation model includes a quadratic term of age.<br />
<br />
# install.packages("np")<br />
library("np")<br />
data("cps71")<br />
<br />
# (1) Linear Model -> R<sup>2</sup> = 0.2308<br />
model.lin <- lm( logwage ~ age + I(age^2), data = cps71)<br />
summary(model.lin)<br />
<br />
Call:<br />
lm(formula = logwage ~ age + I(age^2), data = cps71)<br />
<br />
Residuals:<br />
Min 1Q Median 3Q Max <br />
-2.4041 -0.1711 0.0884 0.3182 1.3940 <br />
<br />
<br />
<center>Coefficients<br />
{| class="wikitable" style="text-align:center; " border="1"<br />
|-<br />
| ||Estimate||Std. Error||t Value||$Pr(>|t|)$<br />
|- <br />
|(Intercept)||10.0419773||0.4559986||22.022||< 2e-16 ***<br />
|-<br />
|Age||0.1731310||0.0238317|| 7.265||7.96e-12 ***<br />
|-<br />
|I(age^2)||-0.0019771||0.0002898||-6.822||1.02e-10 ***<br />
<br />
|}<br />
</center><br />
<br />
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1<br />
<br />
Residual standard error: 0.5608 on 202 degrees of freedom<br />
Multiple R-squared: 0.2308, Adjusted R-squared: 0.2232 <br />
F-statistic: 30.3 on 2 and 202 DF, p-value: 3.103e-12<br />
<br />
# (2) Next, we consider the local linear nonparametric method employing cross-validated <br />
# bandwidth selection and estimation in one step. Start with computing the least-squares<br />
# cross-validated bandwidths for the local constant estimator (default).<br />
# Note that <b>R<sup>2</sup> = 0.3108675</b><br />
bandwidth <- npregbw(formula= logwage ~ age, data = cps71)<br />
model.np <- npreg(bandwidth, regtype = "ll", bwmethod = "cv.aic", gradients = TRUE, data = cps71)<br />
summary(model.np)<br />
<br />
Regression Data: 205 training points, in 1 variable(s) age<br />
Bandwidth(s): 1.892157<br />
Kernel Regression Estimator: Local-Constant<br />
Bandwidth Type: Fixed<br />
Residual standard error: 0.5307943<br />
R-squared: <b><mark>0.3108675</mark></b><br />
Continuous Kernel Type: Second-Order Gaussian<br />
No. Continuous Explanatory Vars.: 1<br />
<br />
# NP model significance may be tested by<br />
npsigtest(model.np)<br />
<br />
Kernel Regression Significance Test<br />
Type I Test with IID Bootstrap (399 replications, Pivot=TRUE, joint=FALSE)<br />
Explanatory variables tested for significance: age (1)<br />
<br />
age<br />
Bandwidth(s): 1.892157<br />
<br />
Individual Significance Tests<br />
P Value: <br />
age < 2.22e-16 ***<br />
<br />
# So, as was the case for the linear parametric model, Age is significant in the local linear NP-model<br />
<br />
# (3) Graphical comparison of parametric and nonparametric models. <br />
plot(cps71$\$$age, cps71$\$$logwage, xlab = "age", ylab = "log(wage)", cex=.1)<br />
lines(cps71$\$$age, fitted(model.lin), lty = 2, col = " red")<br />
lines(cps71$\$$age, fitted(model.np), lty = 1, col = "blue")<br />
legend("topright", c("Data", "Linear", "Non-linear"), col=c("Black", "Red", "Blue"), pch = c(1, 1, 1), bty='n', cex=.75)<br />
<br />
<center>[[Image:SMHS_Methods14.png|500px]] </center><br />
<br />
# some additional plots presenting the parametric (quadratic, dashed line) and the nonparametric estimates <br />
# (solid line) of the regression function for the cps71 data. <br />
plot(model.np, plot.errors.method = "asymptotic")<br />
plot(model.np, gradients = TRUE)<br />
lines(cps71$\$$age, coef(model.lin)[2]+2*cps71$\$$age*coef(model.lin)[3], lty = 2, col = "red")<br />
plot(model.np, gradients = TRUE, plot.errors.method = "asymptotic")<br />
<br />
# (4) Use the linear and nonparametric models to generate predictions, based on the obtained<br />
# bandwidths and the estimated nonparametric model. We need to create a set of explanatory<br />
# variables for which to generate predictions. These can be part of the original dataset or be<br />
# outside its scope. Typically, we don’t have the outcome for the evaluation data and need only <br />
# provide the explanatory variables for which predicted values are generated by the models.<br />
# Occasionally, splitting the dataset into two independent samples (training/testing), allows estimation<br />
# of a model on one sample, and evaluation of its performance on another.<br />
<br />
cps.eval.data <- data.frame(age = seq(10,70, by=10)) # simulate some explanatory X values (ages)<br />
pred.lin <- predict(model.lin, newdata = cps.eval.data) # Linear Prediction of log(Wage)<br />
pred.np <- predict(model.np, newdata = cps.eval.data) # non-Linear Prediction of log(Wage)<br />
plot(pred.lin, pred.np)<br />
abline(lm(pred.np ~ pred.lin))<br />
<br />
<center>[[Image:SMHS_Methods15.png|500px]] </center><br />
<br />
<br />
==Predictive risk models ==<br />
<br />
Predictive risk models represent a class of methods for identifying potential for HTE when the individual patient risk for disease-related events at baseline depends on observed factors. For instance, common measures are disease staging criteria, such as those used in COPD or heart failure, Framingham risk scores for cardiovascular event risk, or genetic variations, e.g., HER2 for breast cancer. Initial predictive risk modeling, aka risk function estimation, is often performed without accounting for treatment effects. Least squares or Cox proportional hazards regression methods are appropriate in many cases and provide relatively interpretable risk functions, but rely on linearity assumptions and may not provide optimal predictive metrics. Partial least squares is an extension of least squares methods that can reduce the dimensionality of the predictor space by interposing latent variables, predicted by linear combinations of observable characteristics, as the intermediate predictors of one or more outcomes. Recursive partitioning methods, such as random forests, as well as support vector machines and neural networks, represent more recent methods with better predictive power than linear methods. Risk function estimation can range from highly exploratory analyses to near meta-analytic model validation, and may be useful at any stage of product development.<br />
<br />
HIV Example: The <b>“hmohiv”</b> dataset represents a study of HIV-positive patients examining whether survival times differed between a cohort using intravenous drugs (drug=1) and a cohort not using IV drugs (drug=0). The <b>hmohiv</b> data includes the following variables:<br />
<br />
<center><br />
{| class="wikitable" style="text-align:center; " border="1"<br />
|-<br />
|ID||Time||Age||Drug||Censor||Entdate||Enddate<br />
|- <br />
|1||5||46||0||1||5/15/1990||10/14/1990<br />
|-<br />
|2||6||35||1||0||9/19/1989||3/20/1990<br />
|-<br />
|3||8||30||1||1||4/21/1991||12/20/1991<br />
|-<br />
|4||3||30||1||1||1/3/1991||4/4/1991<br />
|-<br />
|5||22||36||0||1||9/18/1989||7/19/1991<br />
|-<br />
|6||1||32||1||0||3/18/1991||4/17/1991<br />
|-<br />
|...||...||...||...||...||...||...<br />
<br />
<br />
|}<br />
</center><br />
<br />
# clean up the environment<br />
rm(list=ls())<br />
<br />
# load survival library<br />
library(survival)<br />
<br />
# load hmohiv data<br />
hmohiv<-read.table("http://www.ats.ucla.edu/stat/r/examples/asa/hmohiv.csv", sep=",", header = TRUE)<br />
attach(hmohiv)<br />
<br />
# construct a data frame for the 2 cohorts: IV-drug (drug=1) and no-IV-drug (drug=0)<br />
drug.new<-data.frame(drug=c(0,1))<br />
<br />
# Fit Cox proportional hazards regression model<br />
cox.model <- coxph( Surv(time, censor) ~ drug, method="breslow")<br />
fit.1 <- survfit(cox.model, newdata=drug.new)<br />
<br />
# plot results<br />
plot(fit.1, xlab="Survival Time (Months)", ylab="Survival Probability")<br />
points(fit.1$\$$time, fit.1$\$$surv[,1], pch=1)<br />
points(fit.1$\$$time, fit.1$\$$surv[,2], pch=2)<br />
legend(40, .8, c("Drug Absent", "Drug Present"), pch=c(1,2))<br />
<br />
<center>[[Image:SMHS_Methods16.png|500px]] </center><br />
<br />
# inspect the resulting Cox proportional hazards model<br />
cox.model <br />
Call:<br />
coxph(formula = Surv(time, censor) ~ drug, method = "breslow")<br />
<br />
coef exp(coef) se(coef) z p<br />
<b>drug</b> 0.779 2.18 0.242 3.22 <b>0.0013</b><br />
<br />
Likelihood ratio test=10.2 on 1 df, p=0.00141 n= 100, number of events= 80 <br />
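The printed coefficient table can be unpacked by hand: the hazard ratio is exp(coef), and an approximate 95% confidence interval follows from coef ± 1.96·se(coef). The short Python check below (illustration only) uses the values printed above:<br />

```python
import math

coef, se = 0.779, 0.242            # from the coxph output above
hr = math.exp(coef)                # hazard ratio for drug=1 vs drug=0 (~2.18)
ci_low = math.exp(coef - 1.96 * se)
ci_high = math.exp(coef + 1.96 * se)
# IV-drug users have roughly twice the hazard of the non-IV-drug cohort;
# the CI excludes 1, consistent with the reported p-value of 0.0013
```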
<br />
===Footnotes===<br />
<br />
*<sup>8</sup> http://onlinelibrary.wiley.com/enhanced/doi/10.1002/jrsm.54<br />
*<sup>9</sup> http://effectivehealthcare.ahrq.gov/search-for-guides-reviews-and-reports/?pageaction=displayproduct&productID=1857 <br />
*<sup>10</sup> http://jpepsy.oxfordjournals.org/content/39/2/138.full#sec-14<br />
*<sup>11</sup> http://www.ers.usda.gov/media/200576/err32c_1_.pdf<br />
<br />
==[[SMHS_MethodsHeterogeneity_CER|Next see: Comparative Effectiveness Research (CER)]]==<br />
<br />
*[[SMHS_MethodsHeterogeneity|Back to the Heterogeneity of Treatment Effects, Case-Studies of Comparative Effectiveness Research section]]<br />
<br />
<hr><br />
* SOCR Home page: http://www.socr.umich.edu<br />
<br />
{{translate|pageName=http://wiki.socr.umich.edu/index.php/SMHS_MethodsHeterogeneity_MetaAnalysis}}</div>Pineaumihttps://wiki.socr.umich.edu/index.php?title=SMHS_MethodsHeterogeneity_MetaAnalysis&diff=16219SMHS MethodsHeterogeneity MetaAnalysis2016-05-23T18:52:37Z<p>Pineaumi: /* Quantile Treatment Effect (QTE) */</p>
<hr />
<div>==[[SMHS_MethodsHeterogeneity| Methods for Studying Heterogeneity of Treatment Effects, Case-Studies of Comparative Effectiveness Research]] - Meta-Analyses ==<br />
<br />
==Meta-analysis==<br />
<br />
===Overview===<br />
<br />
Meta-analysis is an approach to combining treatment effects across trials or studies into an aggregated treatment effect with higher statistical power than any individual trial. It may detect HTE by testing for differences in treatment effects across similar RCTs. It requires that the individual treatment effects be similar enough for pooling to be meaningful; in the presence of large clinical or methodological differences between the trials, it may be best to avoid meta-analysis. The presence of HTE across studies in a meta-analysis may be due to differences in the design or execution of the individual trials (e.g., randomization methods, patient selection criteria). <b>Cochran's Q is a method for detecting heterogeneity, computed as the weighted sum of squared differences between each study's treatment effect and the pooled effect across the studies.</b> It is a barometer of inter-trial differences impacting the observed study result. A possible source of error in a meta-analysis is publication bias: trial size may introduce publication bias, since larger trials are more likely to be published, and language and accessibility represent other potential confounding factors. When the heterogeneity is not due to poor study design, it may be useful to optimize the treatment benefits for different cohorts of participants. <br />
<br />
Cochran's Q statistic is the weighted sum of squares on a standardized scale<sup>8</sup>. <b>The corresponding P value indicates the strength of the evidence for the presence of heterogeneity.</b> This test may have low power to detect heterogeneity, so it is suggested to use a cut-off of 0.10, rather than 0.05, for significance (Higgins et al., 2003). Conversely, the Q statistic may have too much power as a test of heterogeneity when the number of studies is large.<br />
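The Q computation itself is small enough to sketch directly. Although the worked examples in this section use R, the following short, self-contained Python illustration (function name and toy inputs are ours, not part of the R examples) shows Cochran's Q with inverse-variance weights, together with the derived ''I''<sup>2</sup> statistic:<br />

```python
def cochran_q(effects, variances):
    """Cochran's Q: inverse-variance weighted sum of squared deviations
    of each study's effect from the pooled (fixed-effects) estimate."""
    weights = [1.0 / v for v in variances]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    q = sum(w * (e - pooled) ** 2 for w, e in zip(weights, effects))
    df = len(effects) - 1
    # I^2: percentage of total variation across studies due to heterogeneity
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
    return q, df, i2
```

Under the null hypothesis of homogeneity, Q approximately follows a chi-square distribution with k&minus;1 degrees of freedom, which is where the reported P values (such as p=0.6071 in the simulation output) come from.<br />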
<br />
===Simulation Example 1===<br />
<br />
# Install and Load library<br />
install.packages("meta")<br />
library(meta)<br />
<br />
# Set number of studies<br />
n.studies = 15<br />
<br />
# number of treatments: case1, case2, control<br />
n.trt = 3<br />
<br />
# number of outcomes<br />
n.event = 2<br />
<br />
# simulate the (balanced) number of cases (case1 and case2) and controls in each study<br />
ctl.group = rbinom(n = n.studies, size = 200, prob = 0.3)<br />
case1.group = rbinom(n = n.studies, size = 200, prob = 0.3)<br />
case2.group = rbinom(n = n.studies, size = 200, prob = 0.3)<br />
<br />
# Simulate the number of outcome events (e.g., deaths) and no events in the control group<br />
event.ctl.group = rbinom(n = n.studies, size = ctl.group, prob = rep(<mark>0.1</mark>, length(ctl.group)))<br />
noevent.ctl.group = ctl.group - event.ctl.group<br />
<br />
# Simulate the number of events and no events in the case1 group<br />
event.case1.group = rbinom(n = n.studies, size = case1.group, prob = rep(<mark>0.5</mark>, length(case1.group)))<br />
noevent.case1.group = case1.group - event.case1.group<br />
<br />
# Simulate the number of events and no events in the case2 group<br />
event.case2.group = rbinom(n = n.studies, size = case2.group, prob = rep(<mark>0.6</mark>, length(case2.group)))<br />
noevent.case2.group = case2.group - event.case2.group<br />
<br />
# Run the univariate meta-analysis using <b>metabin()</b>, Meta-analysis of binary outcome data – <br />
# Calculation of fixed and random effects estimates (risk ratio, odds ratio, risk difference or arcsine<br />
# difference) for meta-analyses with binary outcome data. Mantel-Haenszel (MH), <br />
# inverse variance and Peto method are available for pooling.<br />
<br />
# <b>method</b> = A character string indicating which method is to be used for pooling of studies: <br />
# one of "MH", "Inverse", or "Peto"<br />
# sm = A character string indicating which summary measure ("OR"=odds ratio, "RR"=risk ratio, <br />
# "RD"=risk difference) is to be used for pooling of studies<br />
<br />
# Control vs. Case1, n.e and n.c are numbers in experimental and control groups<br />
meta.ctr_case1 <- metabin(event.e = <b>event.case1.group</b>, n.e = case1.group, event.c = <b>event.ctl.group</b>, <br />
n.c = ctl.group, method = "MH", sm = "OR")<br />
# in this case we use the odds ratio (OR), comparing the odds of death in the case and control groups<br />
forest(meta.ctr_case1)<br />
<br />
<center>[[Image:SMHS_Methods8.png|500px]] </center><br />
<br />
# Control vs. Case2<br />
meta.ctr_case2 <- metabin(event.e = event.case2.group, n.e = case2.group, event.c = event.ctl.group, <br />
n.c = ctl.group, method = "MH", sm = "OR")<br />
forest(meta.ctr_case2)<br />
<br />
<center>[[Image:SMHS_Methods9.png|500px]] </center><br />
<br />
# Case1 vs. Case2<br />
meta.case1_case2 <- metabin(event.e = event.case1.group, n.e = case1.group, event.c = event.case2.group, <br />
n.c = case2.group, method = "MH", sm = "OR")<br />
forest(meta.case1_case2)<br />
summary(meta.case1_case2)<br />
<br />
Test of heterogeneity:<br />
Q d.f. p-value<br />
11.99 14 0.6071<br />
<br />
<center>[[Image:SMHS_Methods10.png|500px]] </center><br />
<br />
The <b>forest plot</b> and the ''I''<sup>2</sup> statistic indicate there is insufficient evidence to reject the null hypothesis of no study heterogeneity (Q=11.99, df=14, p=0.6071), so the fixed-effects model is appropriate.<br />
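The <b>metabin()</b> calls above, with method = "MH", pool the per-study 2&times;2 tables via the Mantel-Haenszel estimator. A minimal Python sketch of that pooled odds ratio (function name and toy tables are ours; the R package handles many additional corrections):<br />

```python
def mh_pooled_or(tables):
    """Mantel-Haenszel pooled odds ratio.
    Each table is (a, b, c, d) = (case events, case non-events,
    control events, control non-events)."""
    num = sum(a * d / (a + b + c + d) for (a, b, c, d) in tables)
    den = sum(b * c / (a + b + c + d) for (a, b, c, d) in tables)
    return num / den
```

With a single study, this reduces to the ordinary odds ratio ad/(bc).<br />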
<br />
==Series of “N of 1” trials==<br />
<br />
This technique combines (a “series of”) n-of-1 trial data to identify HTE. An n-of-1 trial is a repeated crossover trial for a single patient, which randomly assigns the patient to one treatment vs. another for a given time period, after which the patient is re-randomized to treatment for the next time period, usually repeated for 4-6 time periods. Such trials are most feasible in chronic conditions, where little or no washout period is needed between treatments and treatment effects are identifiable in the short term, such as pain or reliable surrogate markers. Combining data from identical n-of-1 trials across a set of patients enables statistical analysis controlling for patient fixed or random effects, covariates, centers, or sequence effects, see <b>Figure</b> below. These combined trials are often analyzed within a Bayesian context using shrinkage estimators that combine individual and group mean treatment effects to create a “posterior” individual mean treatment effect estimate, which is a form of inverse variance-weighted average of the individual and group effects. Such trials are typically more expensive than standard RCTs on a per-patient basis; however, they require much smaller sample sizes, often fewer than 100 patients (due to the efficient individual-as-own-control design), and create individual treatment effect estimates that are not possible in a non-crossover design<sup>9</sup>. For the individual patient, the treatment effect can be re-estimated after each time period, and the trial stopped at any point when the more effective treatment is identified with reasonable statistical certainty.<br />
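The “posterior” individual effect described above, an inverse variance-weighted average of the individual and group mean effects, can be sketched in a few lines of Python (a simplified illustration of the idea with hypothetical names, not the full Bayesian machinery):<br />

```python
def shrunken_effect(ind_mean, ind_var, grp_mean, grp_var):
    """Precision-weighted average of an individual's estimated treatment
    effect and the group mean effect; noisier individual estimates are
    shrunk more strongly toward the group mean."""
    w_ind, w_grp = 1.0 / ind_var, 1.0 / grp_var
    return (w_ind * ind_mean + w_grp * grp_mean) / (w_ind + w_grp)
```

When the individual estimate is very noisy (large variance), the returned estimate sits close to the group mean; when it is precise, little shrinkage occurs.<br />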
<br />
====Example====<br />
<br />
A study involving 8 participants collected data across 30 days, in which 15 treatment days and 15 control days were randomly assigned within each participant<sup>10</sup>. The treatment effect is represented as a binary variable (control day=0; treatment day=1). The outcome variable represents the response to the intervention within each of the 8 participants. The study employed fixed-effects modeling, creating N&minus;1 dummy-coded variables representing the N=8 participants, with the last (i=8) participant serving as the reference (i.e., as the model intercept). Each dummy-coded variable thus represents the difference between participant i and the 8th participant, so all other patients' values are relative to those of the 8th (reference) subject. The overall differences across participants in fixed effects can be evaluated with multiple <b>degree-of-freedom F-tests.</b><br />
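The dummy-coding scheme can be made concrete with a short Python helper (names ours; the study's own analyses were run in R and SAS):<br />

```python
def dummy_code(ids, reference):
    """Create N-1 dummy indicator columns, one per non-reference
    participant; the reference participant is absorbed into the intercept."""
    levels = [lvl for lvl in sorted(set(ids)) if lvl != reference]
    return {lvl: [1 if i == lvl else 0 for i in ids] for lvl in levels}
```

Each returned column contrasts one participant against the reference subject, matching the interpretation given above.<br />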
<br />
<center>[[Image:SMHS_Methods11.png|500px]] </center><br />
<br />
<center><br />
<br />
{| class="wikitable" style="text-align:center; " border="1"<br />
|-<br />
|ID||Day||Tx||SelfEff||SelfEff25||WPSS||SocSuppt||PMss||PMss3||PhyAct<br />
|-<br />
|1||1||1||33||8||0.97||5.00||4.03||1.03||53<br />
|-<br />
|1||2||1||33||8||-0.17||3.87||4.03||1.03||73<br />
|-<br />
|1||3||0||33||8||0.81||4.84||4.03||1.03||23<br />
|-<br />
|1||4||0||33||8||-0.41||3.62||4.03||1.03||36<br />
|-<br />
|...||...||...||...||...||...||...||...||...||...<br />
<br />
|}<br />
</center> Complete data is available in the <b>Appendix.</b><br />
<br />
<br />
<br />
<center>Data Summary<br />
<br />
{| class="wikitable" style="text-align:center; " border="1"<br />
|-<br />
|Intercept||Constant<br />
|-<br />
|Physical Activity||PhyAct<br />
|-<br />
|Intervention||Tx<br />
|-<br />
|WP Social Support||WPSS<br />
|-<br />
|PM Social Support (1-3)||PMss3<br />
|-<br />
|Self Efficacy||SelfEff25<br />
<br />
|}<br />
</center><br />
<br />
rm(list=ls())<br />
Nof1 <-read.table("https://umich.instructure.com/files/330385/download?download_frd=1&verifier=DwJUGSd6t24dvK7uYmzA2aDyzlmsohyaK6P7jK0Q", sep=",", header = TRUE) # 02_Nof1_Data.csv<br />
attach(Nof1)<br />
head(Nof1)<br />
<br />
<center><br />
<br />
{| class="wikitable" style="text-align:center; " border="1"<br />
|-<br />
| ||ID||Day||Tx||SelfEff||SelfEff25||WPSS||SocSuppt||PMss||PMss3||PhyAct<br />
|-<br />
|1||1||1||1||33||8||0.97||5.00||4.03||1.03||53<br />
|-<br />
|2||1||2||1||33||8||-0.17||3.87||4.03||1.03||73<br />
|-<br />
|3||1||3||0||33||8||0.81||4.84||4.03||1.03||23<br />
|-<br />
|4||1||4||0||33||8||-0.41||3.62||4.03||1.03||36<br />
|-<br />
|5||1||5||1||33||8||0.59||4.62||4.03||1.03||21<br />
|-<br />
|6||1||6||1||33||8||-1.16||2.87||4.03||1.03||0<br />
<br />
|}<br />
</center><br />
<br />
df.1 = data.frame(PhyAct, Tx, WPSS, PMss3, SelfEff25) <br />
<br />
library("lme4")<br />
<br />
# note: Day, ID and SelfEff are not columns of df.1; they are resolved from the attached Nof1 data<br />
lm.1 <- lmer(PhyAct ~ Tx + SelfEff + Tx*SelfEff + (1|Day) + (1|ID), data = df.1)<br />
summary(lm.1)<br />
<br />
Linear mixed model fit by REML ['lmerMod']<br />
Formula: PhyAct ~ Tx + SelfEff + Tx * SelfEff + (1 | Day) + (1 | ID)<br />
Data: df.1<br />
<br />
REML criterion at convergence: 8820<br />
<br />
<center> Scaled Residuals<br />
{| class="wikitable" style="text-align:center; " border="1"<br />
|-<br />
| Min||1Q||Median||3Q||Max<br />
|-<br />
|-2.7012||-0.6833||-0.0333||0.6542||3.9612<br />
|}<br />
</center><br />
<br />
<br />
<center> Random Effects<br />
{| class="wikitable" style="text-align:center; " border="1"<br />
|-<br />
| Groups ||Name||Variance ||Std.Dev.<br />
|-<br />
| Day||(Intercept) ||0.0 || 0.00 <br />
|-<br />
<br />
|ID|| (Intercept)||601.5||24.53 <br />
|-<br />
<br />
|Residual|| ||969.0 ||31.13 <br />
|}<br />
Number of obs: 900, groups: Day, 30; ID, 30<br />
</center> <br />
<br />
<br />
<center> Fixed Effects<br />
{| class="wikitable" style="text-align:center; " border="1"<br />
|-<br />
| ||Estimate||Std. Error||t value<br />
|-<br />
|(Intercept)||38.3772||14.4738||2.651<br />
|-<br />
|Tx||4.0283||6.3745||0.632<br />
|-<br />
|SelfEff||0.5818||0.5942||0.979<br />
|-<br />
|Tx:SelfEff||0.9702||0.2617||3.708<br />
|}<br />
</center><br />
<br />
<br />
<center> Correlation of Fixed Effects<br />
{| class="wikitable" style="text-align:center; " border="1"<br />
|-<br />
| ||(Intr)||Tx ||SlfEff<br />
|-<br />
| Tx|| -0.220|| || <br />
|-<br />
| SelfEff||-0.946 ||0.208 || <br />
|-<br />
| Tx:SelfEff ||0.208 ||-0.946 ||-0.220<br />
|}<br />
</center><br />
<br />
<br />
# Model: PhyAct = Tx + WPSS + PMss3 + Tx*WPSS + Tx*PMss3 + SelfEff25 + Tx*SelfEff25 + ε<br />
lm.2 = lm(PhyAct ~ Tx + WPSS + PMss3 + Tx*WPSS + Tx*PMss3 + SelfEff25 + Tx*SelfEff25, df.1) <br />
summary(lm.2)<br />
<br />
Call:<br />
lm(formula = PhyAct ~ Tx + WPSS + PMss3 + Tx * WPSS + Tx * PMss3 + <br />
SelfEff25 + Tx * SelfEff25, data = df.1)<br />
<br />
<center> Residuals<br />
{| class="wikitable" style="text-align:center; " border="1"<br />
|-<br />
|Min||1Q||Median||3Q||Max <br />
|-<br />
| -102.39||-28.24||-1.47||25.16||122.41 <br />
<br />
|}<br />
</center><br />
<br />
<center>Coefficients<br />
{| class="wikitable" style="text-align:center; " border="1"<br />
|-<br />
| ||Estimate||Std. Error||t value||$Pr(>|t|)$<br />
|- <br />
|(Intercept)||52.0067||1.8080||28.764||< 2e-16 ***<br />
|-<br />
|Tx||27.7366||2.5569||10.848||< 2e-16 ***<br />
|-<br />
|WPSS||1.9631||2.4272||0.809||0.418853 <br />
|- <br />
|PMss3||13.5110||2.7853||4.851||1.45e-06 ***<br />
|-<br />
|SelfEff25||0.6289||0.2205||2.852||0.004439 ** <br />
|-<br />
|Tx:WPSS||9.9114||3.4320||2.888||0.003971 ** <br />
|-<br />
|Tx:PMss3||8.8422||3.9390||2.245||0.025025 * <br />
|-<br />
|Tx:SelfEff25||1.0460||0.3118||3.354||0.000829 ***<br />
<br />
<br />
|}<br />
</center><br />
<br />
[Using SAS (StudyI_Analyses.sas, StudyIIab_Analyses.sas)]<br />
<br />
<center> Type 3 Tests of Fixed Effects<br />
{| class="wikitable" style="text-align:center; " border="1"<br />
|-<br />
|<b>Effect</b>||<b>Num DF</b>||<b>Den DF</b>||<b>F Value</b>||<b>$Pr>F$</b><br />
|-<br />
|<b>Tx</b>||1||224||67.46||<.0001 <br />
|-<br />
|<b>ID</b>||7||224||25.95||<.0001<br />
|-<br />
|<b>Tx*ID</b>||7||224||2.92||0.0060<br />
|}<br />
</center><br />
<br />
==Quantile Treatment Effect (QTE)==<br />
<br />
QTE employs quantile regression estimation (QRE) to examine the central tendency and statistical dispersion of the treatment effect in a population, which may not be revealed by the conventional mean estimation in RCTs. For instance, patients with different comorbidity scores may respond differently to a treatment. Quantile regression can reveal HTE according to the ranking of patients’ comorbidity scores, or some other relevant covariate by which patients may be ranked. Therefore, in an attempt to inform patient-centered care, quantile regression provides more information on the distribution of the treatment effect than typical conditional mean treatment effect estimation. QTE characterizes the heterogeneous treatment effect on individuals and groups across various positions in the distributions of different outcomes of interest. This unique feature has given quantile regression analysis substantial attention, and it has been employed across a wide range of applications, particularly when evaluating the economic effects of welfare reform.<br />
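Quantile regression estimates are defined as minimizers of the asymmetric “check” (pinball) loss ρ<sub>τ</sub>(u) = u·(τ &minus; 1{u&lt;0}). A minimal Python sketch (names ours; the examples below use the R <b>quantreg</b> package) shows that minimizing this loss over a constant recovers an empirical τ-quantile:<br />

```python
def pinball_loss(u, tau):
    """Check loss rho_tau(u) = u * (tau - 1{u < 0})."""
    return u * (tau - (1 if u < 0 else 0))

def best_constant(sample, tau):
    """Constant minimizing total check loss over the sample;
    this equals an empirical tau-quantile of the sample."""
    return min(sample, key=lambda c: sum(pinball_loss(y - c, tau) for y in sample))
```

Replacing the constant with a linear function of covariates gives the quantile regression lines fitted by rq() below.<br />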
<br />
One caveat of applying QRE in clinical trials for examining HTE is that the QTE does not demonstrate the treatment effect for a given patient. Instead, it focuses on the treatment effect among subjects within the qth quantile of some covariate of interest (e.g., a comorbidity score), such as those who are exactly at the 90th percentile of blood pressure or of a depression score. It is not uncommon for the qth quantile to comprise two different sets of patients before and after the treatment. For this reason, we have to assume that these two groups of patients are homogeneous if they occupy the same quantile.<br />
<br />
<b>Income-Food Expenditure Example:</b> Let’s examine the Engel data (N=235) on the relationship between food expenditure (foodexp) and household income (income)<sup>11</sup>. We can plot the data and then explore the superposition of the six fitted quantile regression lines. <br />
<br />
install.packages("quantreg")<br />
library(quantreg)<br />
data(engel)<br />
attach(engel)<br />
<br />
<center>head(engel)<br />
{| class="wikitable" style="text-align:center; " border="1"<br />
|-<br />
|||Income||Foodexp<br />
|- <br />
|1||420.1577||255.8394<br />
|-<br />
|2||541.4117||310.9587<br />
|-<br />
|3||901.1575||485.6800<br />
|- <br />
|4||639.0802||402.9974<br />
|-<br />
|5||750.8756||495.5608<br />
|-<br />
|6||945.7989||633.7978<br />
<br />
|}<br />
</center><br />
<br />
<br />
<center>summary(engel)<br />
{| class="wikitable" style="text-align:center; " border="1"<br />
|-<br />
|||Income||Foodexp<br />
|- <br />
|Min||377.1||242.3<br />
|-<br />
|1st Qu.||638.9||429.7<br />
|-<br />
|Median||884.0||582.5<br />
|- <br />
|Mean||982.5||624.2<br />
|-<br />
|3rd Qu.||1164.0||743.9<br />
|-<br />
|Max||4957.8||2032.7<br />
<br />
|}<br />
</center><br />
<br />
Note: If <i>Y</i> is a real-valued random variable with cumulative distribution function F<sub>Y</sub>(y)=P(Y≤y), then the τ-quantile of <i>Y</i> is given by<br />
<br />
<center> Q<sub>Y</sub>(τ)=F<sub>Y</sub><sup>-1</sup>(τ)=inf{ y:F<sub>Y</sub>(y)≥τ} </center><br />
<br />
where 0≤τ≤1.<br />
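For a finite sample, this inf-based definition reduces to taking the smallest observation at which the empirical CDF reaches τ; a small Python sketch (function name ours):<br />

```python
def tau_quantile(sample, tau):
    """Empirical tau-quantile: the smallest y with F_n(y) >= tau (0 < tau <= 1)."""
    xs = sorted(sample)
    n = len(xs)
    for rank, y in enumerate(xs, start=1):
        if rank / n >= tau:   # F_n(y) = rank/n at the rank-th order statistic
            return y
    return xs[-1]
```

The taus vector used below (0.05, 0.1, 0.25, 0.75, 0.90, 0.95) simply evaluates this at several values of τ, conditional on income.<br />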
<br />
<center>[[Image:SMHS_Methods12.png|500px]] </center><br />
<br />
# (1) Graphics<br />
plot(income, foodexp, cex=.25, type="n", xlab="Household Income", ylab="Food Expenditure")<br />
points(income, foodexp, cex=.5, col="blue")<br />
<br />
# tau - the quantile(s) to be estimated, in the range from 0 to 1. An object "rq.process" and an object "rqs" <br />
# are returned containing the matrix of coefficient estimates at the specified quantiles.<br />
abline( rq(foodexp ~ income, tau=.5), col="blue") # Quantile Regression Model<br />
<br />
abline( lm(foodexp ~ income), lty=2, lwd=3, col="red") # linear model<br />
taus <- c(0.05, 0.1, 0.25, 0.75, 0.90, 0.95)<br />
colors <- rainbow(length(taus))<br />
<br />
models <- vector(mode = "list", length = length(taus)) # define a vector of models to store QR for diff taus<br />
model.names <- vector(mode = "list", length = length(taus)) # define a vector model names<br />
<br />
for( i in 1:length(taus)){<br />
models[[i]] <- rq(foodexp ~ income, tau=taus[i]) <br />
var <- taus[i]<br />
model.names[[i]] <- paste("Model [", i , "]: tau=", var)<br />
abline( models[[i]], lwd=2, col= colors[[i]])<br />
}<br />
legend(3000, 1100, model.names, col= colors, pch= taus, bty='n', cex=.75)<br />
<br />
<center>[[Image:SMHS_Methods13.png|500px]] </center><br />
<br />
# (2) Inference about quantile regression coefficients. As an alternative to the rank-inversion confidence intervals, we can obtain a table of coefficients, standard errors, t-statistics, and p-values using the summary function:<br />
<br />
<b>summary(models[[3]], se = "nid")</b><br />
<br />
Call: rq(formula = foodexp ~ income, tau = taus[i])<br />
<br />
tau: [1] 0.25<br />
<br />
<center>Coefficients<br />
{| class="wikitable" style="text-align:center; " border="1"<br />
|-<br />
|||Value||Std. Error||t Value||$Pr(>|t|)$<br />
|- <br />
|(Intercept)||95.48354||21.39237||4.46344||0.00001<br />
|-<br />
|Income||0.47410||0.02906||16.31729||0.00000<br />
<br />
|}<br />
</center><br />
<br />
# summary.rq() is the method underlying summary(); bootstrapped standard <br />
# errors could be requested instead with se = "boot"<br />
summary.rq(models[[3]], se = "nid")<br />
<br />
Call: rq(formula = foodexp ~ income, tau = taus[i])<br />
tau: [1] 0.25<br />
<br />
<center>Coefficients<br />
{| class="wikitable" style="text-align:center; " border="1"<br />
|-<br />
|||Value||Std. Error||t Value||$Pr(>|t|)$<br />
|- <br />
|(Intercept)||95.48354||21.39237||4.46344||0.00001<br />
|-<br />
|Income||0.47410||0.02906||16.31729||0.00000<br />
<br />
|}<br />
</center><br />
<br />
==Nonparametric Regression Methods ==<br />
<br />
Nonparametric regression enables dealing with HTE in RCTs. Different nonparametric methods, such as kernel smoothing methods and series methods, can be used to generate test statistics for examining the presence of HTE. A kernel method is a weighting scheme based on a kernel function (e.g., uniform, Gaussian). When evaluating the treatment effect of a patient in RCTs, the kernel method assigns larger weights to observations with similar covariates, because patients with similar covariates are assumed to provide more relevant data on the predicted treatment response. Even when participants have divergent backgrounds (e.g., demographic, clinical), kernel smoothing methods still utilize information from highly divergent participants when estimating a particular subject’s treatment effect, but assign lower weights to very different subjects; kernel methods require choosing a set of smoothing parameters (bandwidths) to group patients according to their relative degrees of similarity. A drawback is that the corresponding proposed test statistics may be sensitive to the chosen bandwidths, which inhibits the interpretation of the results. Series methods use approximating functions (splines or power series of the explanatory variables) to construct test statistics. Compared to kernel smoothing methods, series methods normally have the advantage of computational convenience; however, the precision of the test statistics depends on the number of terms selected in the series. <br />
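The kernel weighting scheme described above can be made concrete with a minimal Nadaraya-Watson estimator in Python (Gaussian kernel; the function name and bandwidth are illustrative, and the R <b>np</b> package used below implements far more refined versions):<br />

```python
import math

def nw_estimate(x0, xs, ys, bandwidth):
    """Nadaraya-Watson kernel regression at x0: a weighted average of the
    responses, with Gaussian weights that decay as covariates move away
    from x0 (similar patients contribute more, divergent ones less)."""
    weights = [math.exp(-0.5 * ((x - x0) / bandwidth) ** 2) for x in xs]
    return sum(w * y for w, y in zip(weights, ys)) / sum(weights)
```

Shrinking the bandwidth makes the estimate more local (and more variable); widening it smooths toward the overall mean, which is exactly the sensitivity trade-off noted above.<br />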
<br />
Canadian Wage Data Example: Nonparametric regression extends the classical parametric regression (e.g., lm, lmer) involving one continuous dependent variable, y, and (1 or more) continuous explanatory variable(s), x. Let’s start with a popular parametric model of a wage equation that we can extend to a fully nonparametric regression model. First, we will compare and contrast the parametric and nonparametric approach towards univariate regression and then proceed to multivariate regression.<br />
<br />
Let’s use the Canadian cross-section wage data (<b>cps71</b>) consisting of a random sample taken from the 1971 Canadian Census for male individuals having common education (High-School). N=205 observations, 2 variables, the logarithm of the individual’s wage (logwage) and their age (age). The classical wage equation model includes a quadratic term of age.<br />
<br />
# install.packages("np")<br />
library("np")<br />
data("cps71")<br />
<br />
# (1) Linear Model -> R<sup>2</sup> = 0.2308<br />
model.lin <- lm( logwage ~ age + I(age^2), data = cps71)<br />
summary(model.lin)<br />
<br />
Call:<br />
lm(formula = logwage ~ age + I(age^2), data = cps71)<br />
<br />
Residuals:<br />
Min 1Q Median 3Q Max <br />
-2.4041 -0.1711 0.0884 0.3182 1.3940 <br />
<br />
<br />
<center>Coefficients<br />
{| class="wikitable" style="text-align:center; " border="1"<br />
|-<br />
| ||Estimate||Std. Error||t Value||$Pr(>|t|)$<br />
|- <br />
|(Intercept)||10.0419773||0.4559986||22.022||< 2e-16 ***<br />
|-<br />
|Age||0.1731310||0.0238317|| 7.265||7.96e-12 ***<br />
|-<br />
|I(age^2)||-0.0019771||0.0002898||-6.822||1.02e-10 ***<br />
<br />
|}<br />
</center><br />
<br />
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1<br />
<br />
Residual standard error: 0.5608 on 202 degrees of freedom<br />
Multiple R-squared: 0.2308, Adjusted R-squared: 0.2232 <br />
F-statistic: 30.3 on 2 and 202 DF, p-value: 3.103e-12<br />
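As a quick sanity check on the quadratic fit, the age at which predicted log-wage peaks follows directly from the reported coefficients (a small Python computation; the coefficient values are taken from the lm() output above):<br />

```python
# logwage = b0 + b1*age + b2*age^2 peaks where b1 + 2*b2*age = 0
b1 = 0.1731310     # age coefficient from the lm() output
b2 = -0.0019771    # I(age^2) coefficient from the lm() output
peak_age = -b1 / (2 * b2)   # roughly age 44
```

This is the kind of shape constraint the quadratic specification imposes, which the nonparametric fit below relaxes.<br />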
<br />
# (2) Next, we consider nonparametric kernel regression employing cross-validated <br />
# bandwidth selection and estimation. Start with computing the least-squares<br />
# cross-validated bandwidths (by default, for the local-constant estimator).<br />
# Note that <b>R<sup>2</sup> = 0.3108675</b><br />
bandwidth <- npregbw(formula= logwage ~ age, data = cps71)<br />
# regtype/bwmethod are properties of the bandwidth object; since a precomputed bandwidth<br />
# is supplied here, the (default) local-constant fit is used, as the summary below shows<br />
model.np <- npreg(bandwidth, regtype = "ll", bwmethod = "cv.aic", gradients = TRUE, data = cps71)<br />
summary(model.np)<br />
<br />
Regression Data: 205 training points, in 1 variable(s) age<br />
Bandwidth(s): 1.892157<br />
Kernel Regression Estimator: Local-Constant<br />
Bandwidth Type: Fixed<br />
Residual standard error: 0.5307943<br />
R-squared: <b><mark>0.3108675</mark></b><br />
Continuous Kernel Type: Second-Order Gaussian<br />
No. Continuous Explanatory Vars.: 1<br />
<br />
# NP model significance may be tested by<br />
npsigtest(model.np)<br />
<br />
Kernel Regression Significance Test<br />
Type I Test with IID Bootstrap (399 replications, Pivot=TRUE, joint=FALSE)<br />
Explanatory variables tested for significance: age (1)<br />
<br />
age<br />
Bandwidth(s): 1.892157<br />
<br />
Individual Significance Tests<br />
P Value: <br />
age < 2.22e-16 ***<br />
<br />
# So, as was the case for the linear parametric model, Age is significant in the local linear NP-model<br />
<br />
# (3) Graphical comparison of parametric and nonparametric models. <br />
plot(cps71$age, cps71$logwage, xlab = "age", ylab = "log(wage)", cex=.1)<br />
lines(cps71$age, fitted(model.lin), lty = 2, col = "red")<br />
lines(cps71$age, fitted(model.np), lty = 1, col = "blue")<br />
legend("topright", c("Data", "Linear", "Non-linear"), col=c("Black", "Red", "Blue"), pch = c(1, 1, 1), bty='n', cex=.75)<br />
<br />
<center>[[Image:SMHS_Methods14.png|500px]] </center><br />
<br />
# some additional plots presenting the parametric (quadratic, dashed line) and the nonparametric estimates <br />
# (solid line) of the regression function for the cps71 data. <br />
plot(model.np, plot.errors.method = "asymptotic")<br />
plot(model.np, gradients = TRUE)<br />
lines(cps71$age, coef(model.lin)[2]+2*cps71$age*coef(model.lin)[3], lty = 2, col = "red")<br />
plot(model.np, gradients = TRUE, plot.errors.method = "asymptotic")<br />
<br />
# (4) using the Lin and NL models to generate predictions based on the obtained appropriate <br />
# bandwidths and estimated a nonparametric model. We need to create a set of explanatory<br />
# variables for which to generate predictions. These can be part of the original dataset or be<br />
# outside its scope. Typically, we don’t have the outcome for the evaluation data and need only <br />
# provide the explanatory variables for which predicted values are generated by the models.<br />
# Occasionally, splitting the dataset into two independent samples (training/testing), allows estimation<br />
# of a model on one sample, and evaluation of its performance on another.<br />
<br />
cps.eval.data <- data.frame(age = seq(10,70, by=10)) # simulate some explanatory X values (ages)<br />
pred.lin <- predict(model.lin, newdata = cps.eval.data) # Linear Prediction of log(Wage)<br />
pred.np <- predict(model.np, newdata = cps.eval.data) # non-Linear Prediction of log(Wage)<br />
plot(pred.lin, pred.np)<br />
abline(lm(pred.np ~ pred.lin))<br />
<br />
<center>[[Image:SMHS_Methods15.png|500px]] </center><br />
<br />
.<br />
.<br />
.<br />
<br />
==Predictive risk models ==<br />
<br />
Predictive risk models represent a class of methods for identifying potential for HTE when the individual patient risk for disease-related events at baseline depends on observed factors. For instance, common measures are disease staging criteria, such as those used in COPD or heart failure, Framingham risk scores for cardiovascular event risk, or genetic variations, e.g., HER2 for breast cancer. Initial predictive risk modeling, also known as risk function estimation, is often performed without accounting for treatment effects. Least squares or Cox proportional hazards regression methods are appropriate in many cases and provide relatively more interpretable risk functions, but rely on linearity assumptions and may not provide optimal predictive metrics. Partial least squares is an extension of least squares methods that can reduce the dimensionality of the predictor space by interposing latent variables, predicted by linear combinations of observable characteristics, as the intermediate predictors of one or more outcomes. Recursive partitioning methods (e.g., random forests), support vector machines, and neural networks are more recent methods with better predictive power than linear methods. Risk function estimation can range from highly exploratory analyses to near meta-analytic model validation, and may be useful at any stage of product development.<br />
<br />
HIV Example: The <b>“hmohiv”</b> dataset represents a study of HIV positive patients examining whether there was a difference in survival times of HIV positive patients between a cohort using intravenous drugs (drug=1) and a cohort not using the IV drug (drug=0). The <b>hmohiv</b> data includes the following variables:<br />
<br />
<center><br />
{| class="wikitable" style="text-align:center; " border="1"<br />
|-<br />
|ID||Time||Age||Drug||Censor||Entdate||Enddate<br />
|- <br />
|1||5||46||0||1||5/15/1990||10/14/1990<br />
|-<br />
|2||6||35||1||0||9/19/1989||3/20/1990<br />
|-<br />
|3||8||30||1||1||4/21/1991||12/20/1991<br />
|-<br />
|4||3||30||1||1||1/3/1991||4/4/1991<br />
|-<br />
|5||22||36||0||1||9/18/1989||7/19/1991<br />
|-<br />
|6||1||32||1||0||3/18/1991||4/17/1991<br />
|-<br />
|...||...||...||...||...||...||...<br />
<br />
<br />
|}<br />
</center><br />
<br />
#cleaning up environment<br />
rm(list=ls())<br />
<br />
# load survival library<br />
library(survival)<br />
<br />
# load hmohiv data<br />
hmohiv<-read.table("http://www.ats.ucla.edu/stat/r/examples/asa/hmohiv.csv", sep=",", header = TRUE)<br />
attach(hmohiv)<br />
<br />
# Fit Cox proportional hazards regression model<br />
cox.model <- coxph( Surv(time, censor) ~ drug, method="breslow")<br />
<br />
# construct a data frame of the 2 cohorts, IV-drug and no-IV-drug, then predict their survival curves<br />
drug.new <- data.frame(drug=c(0,1))<br />
fit.1 <- survfit(cox.model, newdata=drug.new)<br />
<br />
# plot results<br />
plot(fit.1, xlab="Survival Time (Months)", ylab="Survival Probability")<br />
points(fit.1$time, fit.1$surv[,1], pch=1)<br />
points(fit.1$time, fit.1$surv[,2], pch=2)<br />
legend(40, .8, c("Drug Absent", "Drug Present"), pch=c(1,2))<br />
<br />
<center>[[Image:SMHS_Methods16.png|500px]] </center><br />
<br />
# to inspect the resulting Cox Proportional Hazards Model<br />
cox.model <br />
Call:<br />
coxph(formula = Surv(time, censor) ~ drug, method = "breslow")<br />
<br />
coef exp(coef) se(coef) z p<br />
<b>drug</b> 0.779 2.18 0.242 3.22 <b>0.0013</b><br />
<br />
Likelihood ratio test=10.2 on 1 df, p=0.00141 n= 100, number of events= 80 <br />
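The reported exp(coef)=2.18 is the hazard ratio for drug use; under the proportional-hazards assumption it links the two cohorts' survival curves as S<sub>1</sub>(t) = S<sub>0</sub>(t)<sup>HR</sup>. A small Python check (the coefficient value is taken from the coxph output above; the helper name is ours):<br />

```python
import math

coef = 0.779                    # drug coefficient from the coxph output
hazard_ratio = math.exp(coef)   # matches the reported exp(coef) of 2.18

def survival_with_hr(s_baseline, hr):
    """Proportional hazards relation: S_1(t) = S_0(t) ** hr."""
    return s_baseline ** hr
```

Since the hazard ratio exceeds 1, the IV-drug cohort's survival curve lies uniformly below the no-IV-drug curve, as seen in the plot above.<br />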
<br />
===Footnotes===<br />
<br />
*<sup>8</sup> http://onlinelibrary.wiley.com/enhanced/doi/10.1002/jrsm.54<br />
*<sup>9</sup> http://effectivehealthcare.ahrq.gov/search-for-guides-reviews-and-reports/?pageaction=displayproduct&productID=1857 <br />
*<sup>10</sup> http://jpepsy.oxfordjournals.org/content/39/2/138.full#sec-14<br />
*<sup>11</sup> http://www.ers.usda.gov/media/200576/err32c_1_.pdf<br />
<br />
==[[SMHS_MethodsHeterogeneity_CER|Next see: Comparative Effectiveness Research (CER)]]==<br />
<br />
*[[SMHS_MethodsHeterogeneity|Back to the Heterogeneity of Treatment Effects, Case-Studies of Comparative Effectiveness Research section]]<br />
<br />
<hr><br />
* SOCR Home page: http://www.socr.umich.edu<br />
<br />
{{translate|pageName=http://wiki.socr.umich.edu/index.php/SMHS_MethodsHeterogeneity_MetaAnalysis}}</div>Pineaumihttps://wiki.socr.umich.edu/index.php?title=SMHS_MethodsHeterogeneity_MetaAnalysis&diff=16218SMHS MethodsHeterogeneity MetaAnalysis2016-05-23T18:52:00Z<p>Pineaumi: /* Footnotes */</p>
<hr />
<div>==[[SMHS_MethodsHeterogeneity| Methods for Studying Heterogeneity of Treatment Effects, Case-Studies of Comparative Effectiveness Research]] - Meta-Analyses ==<br />
<br />
==Meta-analysis==<br />
<br />
===Overview===<br />
<br />
Meta-analysis is an approach to combine treatment effects across trials or studies into an aggregated treatment effect with higher statistical power than observed in each individual trials. It may detect HTE by testing for differences in treatment effects across similar RCTs. It requires that the individual treatment effects are similar to ensure pooling is meaningful. In the presence of large clinical or methodological differences between the trials, it may be to avoid meta-analyses. The presence of HTE across studies in a meta-analysis may be due to differences in the design or execution of the individual trials (e.g., randomization methods, patient selection criteria). <b>Cochran's Q is a methods for detection of heterogeneity, which is computed as the weighted sum of squared differences between each study's treatment effect and the pooled effects across the studies.</b> It is a barometer of inter-trial differences impacting the observed study result. A possible source of error in a meta-analysis is publication bias. Trial size may introduce publication bias since larger trials are more likely to be published. Language and accessibility represent other potential confounding factors. When the heterogeneity is not due to poor study design, it may be useful to optimize the treatment benefits for different cohorts of participants. <br />
<br />
Cochran's Q statistic is the weighted sum of squares on a standardized scale<sup>8</sup>. <b>The corresponding P value indicates the strength of the evidence for the presence of heterogeneity.</b> This test may sometimes have low power to detect heterogeneity, and a P value of 0.10 has been suggested as the cut-off for significance (Higgins et al., 2003). The Q statistic may also have too much power as a test of heterogeneity when the number of studies is large.<br />
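To make the computation concrete, Cochran's Q and the derived ''I''<sup>2</sup> statistic can be computed by hand from per-study effect estimates and their standard errors (a sketch with hypothetical numbers, not data from the text):<br />
<br />
 # hypothetical per-study log odds ratios and their standard errors<br />
 theta <- c(0.42, 0.61, 0.35, 0.50, 0.47)<br />
 se <- c(0.20, 0.25, 0.18, 0.22, 0.30)<br />
 w <- 1/se^2                                  # inverse-variance weights<br />
 theta.pooled <- sum(w*theta)/sum(w)          # fixed-effects pooled estimate<br />
 Q <- sum(w*(theta - theta.pooled)^2)         # Cochran's Q<br />
 df <- length(theta) - 1<br />
 p.value <- pchisq(Q, df, lower.tail = FALSE) # compare to the 0.10 cut-off<br />
 I2 <- max(0, (Q - df)/Q)                     # I-squared heterogeneity proportion<br />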
<br />
===Simulation Example 1===<br />
<br />
# Install and Load library<br />
install.packages("meta")<br />
library(meta)<br />
<br />
# Set number of studies<br />
n.studies = 15<br />
<br />
# number of treatments: case1, case2, control<br />
n.trt = 3<br />
<br />
# number of outcomes<br />
n.event = 2<br />
<br />
# simulate the (balanced) number of cases (case1 and case2) and controls in each study<br />
ctl.group = rbinom(n = n.studies, size = 200, prob = 0.3)<br />
case1.group = rbinom(n = n.studies, size = 200, prob = 0.3)<br />
case2.group = rbinom(n = n.studies, size = 200, prob = 0.3)<br />
<br />
# Simulate the number of outcome events (e.g., deaths) and no events in the control group<br />
event.ctl.group = rbinom(n = n.studies, size = ctl.group, prob = rep(<mark>0.1</mark>, length(ctl.group)))<br />
noevent.ctl.group = ctl.group - event.ctl.group<br />
<br />
# Simulate the number of events and no events in the case1 group<br />
event.case1.group = rbinom(n = n.studies, size = case1.group, prob = rep(<mark>0.5</mark>, length(case1.group)))<br />
noevent.case1.group = case1.group - event.case1.group<br />
<br />
# Simulate the number of events and no events in the case2 group<br />
event.case2.group = rbinom(n = n.studies, size = case2.group, prob = rep(<mark>0.6</mark>, length(case2.group)))<br />
noevent.case2.group = case2.group - event.case2.group<br />
<br />
# Run the univariate meta-analysis using <b>metabin()</b>, Meta-analysis of binary outcome data – <br />
# Calculation of fixed and random effects estimates (risk ratio, odds ratio, risk difference or arcsine<br />
# difference) for meta-analyses with binary outcome data. Mantel-Haenszel (MH), <br />
# inverse variance and Peto method are available for pooling.<br />
<br />
# <b>method</b> = A character string indicating which method is to be used for pooling of studies: <br />
# one of "MH", "Inverse", or "Peto"<br />
# sm = A character string indicating which summary measure ("OR"=odds ratio, "RR"=risk ratio, <br />
# "RD"=risk difference) is to be used for pooling of studies<br />
<br />
# Control vs. Case1, n.e and n.c are numbers in experimental and control groups<br />
meta.ctr_case1 <- metabin(event.e = <b>event.case1.group</b>, n.e = case1.group, event.c = <b>event.ctl.group</b>, <br />
n.c = ctl.group, method = "MH", sm = "OR")<br />
# in this case we use Odds Ratio, of the odds of death in the experimental and control studies<br />
forest(meta.ctr_case1)<br />
<br />
<center>[[Image:SMHS_Methods8.png|500px]] </center><br />
<br />
# Control vs. Case2<br />
meta.ctr_case2 <- metabin(event.e = event.case2.group, n.e = case2.group, event.c = event.ctl.group, <br />
n.c = ctl.group, method = "MH", sm = "OR")<br />
forest(meta.ctr_case2)<br />
<br />
<center>[[Image:SMHS_Methods9.png|500px]] </center><br />
<br />
# Case1 vs. Case2<br />
meta.case1_case2 <- metabin(event.e = event.case1.group, n.e = case1.group, event.c = event.case2.group, <br />
n.c = case2.group, method = "MH", sm = "OR")<br />
forest(meta.case1_case2)<br />
summary(meta.case1_case2)<br />
<br />
Test of heterogeneity:<br />
Q d.f. p-value<br />
11.99 14 0.6071<br />
<br />
<center>[[Image:SMHS_Methods10.png|500px]] </center><br />
<br />
The <b>forest plot</b> and the ''I''<sup>2</sup> statistic indicate insufficient evidence to reject the null hypothesis of no study heterogeneity (Q=11.99, p=0.6071), so the fixed-effects model should be used.<br />
<br />
==Series of “N of 1” trials==<br />
<br />
This technique combines (a “series of”) n-of-1 trial data to identify HTE. An n-of-1 trial is a repeated crossover trial for a single patient, which randomly assigns the patient to one treatment vs. another for a given time period, after which the patient is re-randomized to treatment for the next time period, usually repeated for 4-6 time periods. Such trials are most feasibly done in chronic conditions, where little or no washout period is needed between treatments and treatment effects are identifiable in the short term, such as pain or reliable surrogate markers. Combining data from identical n-of-1 trials across a set of patients enables statistical analysis controlling for patient fixed or random effects, covariates, centers, or sequence effects; see the <b>Figure</b> below. These combined trials are often analyzed within a Bayesian context using shrinkage estimators that combine individual and group mean treatment effects to create a “posterior” individual mean treatment effect estimate, which is a form of inverse variance-weighted average of the individual and group effects. Such trials are typically more expensive than standard RCTs on a per-patient basis; however, they require much smaller sample sizes, often fewer than 100 patients (due to the efficient individual-as-own-control design), and create individual treatment effect estimates that are not possible in a non-crossover design<sup>9</sup>. For the individual patient, the treatment effect can be re-estimated after each time period, and the trial stopped at any point when the more effective treatment is identified with reasonable statistical certainty.<br />
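The shrinkage idea can be sketched as a precision-weighted average of the individual and group mean treatment effects (hypothetical numbers; a simplified stand-in for a full Bayesian posterior calculation):<br />
<br />
 # precision-weighted shrinkage of an individual effect toward the group mean<br />
 shrink <- function(theta.i, var.i, theta.g, var.g) {<br />
   w <- (1/var.i) / ((1/var.i) + (1/var.g))   # weight on the individual estimate<br />
   w*theta.i + (1 - w)*theta.g                # shrunken individual effect<br />
 }<br />
 shrink(theta.i = 12, var.i = 9, theta.g = 8, var.g = 4)   # pulled toward the group mean<br />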
<br />
====Example====<br />
<br />
A study involving 8 participants collected data across 30 days, with 15 treatment days and 15 control days randomly assigned within each participant<sup>10</sup>. The treatment effect is represented as a binary variable (control day=0; treatment day=1). The outcome variable represents the response to the intervention within each of the 8 participants. The study employed fixed-effects modeling, creating N − 1 dummy-coded variables to represent the N=8 participants, where the last (i=8) participant serves as the reference (i.e., as the model intercept). Each dummy-coded variable thus represents the difference between participant i and the 8th participant, so all other patients' values are relative to those of the 8th (reference) subject. The overall differences across participants in fixed effects can be evaluated with multiple <b>degree-of-freedom F-tests.</b><br />
<br />
<center>[[Image:SMHS_Methods11.png|500px]] </center><br />
<br />
<center><br />
<br />
{| class="wikitable" style="text-align:center; " border="1"<br />
|-<br />
|ID||Day||Tx||SelfEff||SelfEff25||WPSS||SocSuppt||PMss||PMss3||PhyAct<br />
|-<br />
|1||1||1||33||8||0.97||5.00||4.03||1.03||53<br />
|-<br />
|1||2||1||33||8||-0.17||3.87||4.03||1.03||73<br />
|-<br />
|1||3||0||33||8||0.81||4.84||4.03||1.03||23<br />
|-<br />
|1||4||0||33||8||-0.41||3.62||4.03||1.03||36<br />
|-<br />
|...||...||...||...||...||...||...||...||...||...<br />
<br />
|}<br />
</center> Complete data is available in the <b>Appendix.</b><br />
<br />
<br />
<br />
<center>Data Summary<br />
<br />
{| class="wikitable" style="text-align:center; " border="1"<br />
|-<br />
|Intercept||Constant<br />
|-<br />
|Physical Activity||PhyAct<br />
|-<br />
|Intervention||Tx<br />
|-<br />
|WP Social Support||WPSS<br />
|-<br />
|PM Social Support (1-3)||PMss3<br />
|-<br />
|Self Efficacy||SelfEff25<br />
<br />
|}<br />
</center><br />
<br />
rm(list=ls())<br />
Nof1 <-read.table("https://umich.instructure.com/files/330385/download?download_frd=1&verifier=DwJUGSd6t24dvK7uYmzA2aDyzlmsohyaK6P7jK0Q", sep=",", header = TRUE) # 02_Nof1_Data.csv<br />
attach(Nof1)<br />
head(Nof1)<br />
<br />
<center><br />
<br />
{| class="wikitable" style="text-align:center; " border="1"<br />
|-<br />
| ||ID||Day||Tx||SelfEff||SelfEff25||WPSS||SocSuppt||PMss||PMss3||PhyAct<br />
|-<br />
|1||1||1||1||33||8||0.97||5.00||4.03||1.03||53<br />
|-<br />
|2||1||2||1||33||8||-0.17||3.87||4.03||1.03||73<br />
|-<br />
|3||1||3||0||33||8||0.81||4.84||4.03||1.03||23<br />
|-<br />
|4||1||4||0||33||8||-0.41||3.62||4.03||1.03||36<br />
|-<br />
|5||1||5||1||33||8||0.59||4.62||4.03||1.03||21<br />
|-<br />
|6||1||6||1||33||8||-1.16||2.87||4.03||1.03||0<br />
<br />
|}<br />
</center><br />
<br />
 df.1 = data.frame(ID, Day, PhyAct, Tx, SelfEff, WPSS, PMss3, SelfEff25) <br />
 <br />
 library("lme4")   # provides lmer() for fitting linear mixed-effects models<br />
 <br />
 lm.1 <- lmer(PhyAct ~ Tx + SelfEff + Tx*SelfEff + (1|Day) + (1|ID), data = df.1)<br />
 summary(lm.1)<br />
<br />
Linear mixed model fit by REML ['lmerMod']<br />
Formula: PhyAct ~ Tx + SelfEff + Tx * SelfEff + (1 | Day) + (1 | ID)<br />
Data: df.1<br />
<br />
REML criterion at convergence: 8820<br />
<br />
<center> Scaled Residuals<br />
{| class="wikitable" style="text-align:center; " border="1"<br />
|-<br />
| Min||1Q||Median||3Q||Max<br />
|-<br />
|-2.7012||-0.6833||-0.0333||0.6542||3.9612<br />
|}<br />
</center><br />
<br />
<br />
<center> Random Effects<br />
{| class="wikitable" style="text-align:center; " border="1"<br />
|-<br />
| Groups ||Name||Variance ||Std.Dev.<br />
|-<br />
| Day||(Intercept) ||0.0 || 0.00 <br />
|-<br />
<br />
|ID|| (Intercept)||601.5||24.53 <br />
|-<br />
<br />
|Residual|| ||969.0 ||31.13 <br />
|}<br />
Number of obs: 900, groups: Day, 30; ID, 30<br />
</center> <br />
<br />
<br />
<center> Fixed Effects<br />
{| class="wikitable" style="text-align:center; " border="1"<br />
|-<br />
|Estimate||Std.||Error||t value<br />
|-<br />
|(Intercept)||38.3772||14.4738||2.651<br />
|-<br />
|Tx||4.0283||6.3745||0.632<br />
|-<br />
|SelfEff||0.5818||0.5942||0.979<br />
|-<br />
|Tx:SelfEff||0.9702||0.2617||3.708<br />
|}<br />
</center><br />
<br />
<br />
<center> Correlation of Fixed Effects<br />
{| class="wikitable" style="text-align:center; " border="1"<br />
|-<br />
| ||(Intr)||Tx ||SlfEff<br />
|-<br />
| Tx|| -0.220|| || <br />
|-<br />
| SelfEff||-0.946 ||0.208 || <br />
|-<br />
| Tx:SelfEff ||0.208 ||-0.946 ||-0.220<br />
|}<br />
</center><br />
<br />
<br />
# Model: PhyAct = Tx + WPSS + PMss3 + Tx*WPSS + Tx*PMss3 + SelfEff25 + Tx*SelfEff25 + ε<br />
lm.2 = lm(PhyAct ~ Tx + WPSS + PMss3 + Tx*WPSS + Tx*PMss3 + SelfEff25 + Tx*SelfEff25, df.1) <br />
summary(lm.2)<br />
<br />
Call:<br />
lm(formula = PhyAct ~ Tx + WPSS + PMss3 + Tx * WPSS + Tx * PMss3 + <br />
SelfEff25 + Tx * SelfEff25, data = df.1)<br />
<br />
<center> Residuals<br />
{| class="wikitable" style="text-align:center; " border="1"<br />
|-<br />
|Min||1Q||Median||3Q||Max <br />
|-<br />
| -102.39||-28.24||-1.47||25.16||122.41 <br />
<br />
|}<br />
</center><br />
<br />
<center>Coefficients<br />
{| class="wikitable" style="text-align:center; " border="1"<br />
|-<br />
| ||Estimate||Std. Error||t value||$Pr(>|t|)$<br />
|- <br />
|(Intercept)||52.0067||1.8080||28.764||< 2e-16 ***<br />
|-<br />
|Tx||27.7366||2.5569||10.848||< 2e-16 ***<br />
|-<br />
|WPSS||1.9631||2.4272||0.809||0.418853 <br />
|- <br />
|PMss3||13.5110||2.7853||4.851||1.45e-06 ***<br />
|-<br />
|SelfEff25||0.6289||0.2205||2.852||0.004439 ** <br />
|-<br />
|Tx:WPSS||9.9114||3.4320||2.888||0.003971 ** <br />
|-<br />
|Tx:PMss3||8.8422||3.9390||2.245||0.025025 * <br />
|-<br />
|Tx:SelfEff25||1.0460||0.3118||3.354||0.000829 ***<br />
<br />
<br />
|}<br />
</center><br />
<br />
[Using SAS (StudyI_Analyses.sas, StudyIIab_Analyses.sas)]<br />
<br />
<center> Type 3 Tests of Fixed Effects<br />
{| class="wikitable" style="text-align:center; " border="1"<br />
|-<br />
|<b>Effect</b>||<b>Num DF</b>||<b>Den DF</b>||<b>F Value</b>||<b>$Pr>F$</b><br />
|-<br />
|<b>Tx</b>||1||224||67.46||<.0001 <br />
|-<br />
|<b>ID</b>||7||224||25.95||<.0001<br />
|-<br />
|<b>Tx*ID</b>||7||224||2.92||0.0060<br />
|}<br />
</center><br />
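These Type 3 F-tests can be approximated in R with the dummy-coding scheme described above (a sketch using the Nof1 data loaded earlier; note that this dataset contains more than the 8 participants of the SAS example, so the degrees of freedom will differ):<br />
<br />
 # releveling makes participant 8 the reference (intercept); factor(ID) then<br />
 # generates the N-1 dummy variables automatically<br />
 Nof1.f <- transform(Nof1, ID = relevel(factor(ID), ref = "8"))<br />
 fit.fixed <- lm(PhyAct ~ Tx*ID, data = Nof1.f)  # fixed effects + Tx-by-ID interaction<br />
 anova(fit.fixed)   # multiple degree-of-freedom F-tests for ID and Tx:ID<br />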
<br />
==Quantile Treatment Effect (QTE)==<br />
<br />
QTE employs quantile regression estimation (QRE) to examine the central tendency and statistical dispersion of the treatment effect in a population, features that may not be revealed by the conventional mean estimation in RCTs. For instance, patients with different comorbidity scores may respond differently to a treatment. Quantile regression has the ability to reveal HTE according to the ranking of patients’ comorbidity scores or some other relevant covariate by which patients may be ranked. Therefore, in an attempt to inform patient-centered care, quantile regression provides more information on the distribution of the treatment effect than typical conditional mean treatment effect estimation. QTE characterizes the heterogeneous treatment effect on individuals and groups across various positions in the distributions of different outcomes of interest. This unique feature has drawn substantial attention to quantile regression analysis, which has been employed across a wide range of applications, particularly in evaluating the economic effects of welfare reform.<br />
<br />
One caveat of applying QRE in clinical trials for examining HTE is that the QTE doesn’t demonstrate the treatment effect for a given patient. Instead, it focuses on the treatment effect among subjects within the qth quantile, such as those who are exactly at the top 10th percent in terms of blood pressure or a depression score for some covariate of interest, for example, comorbidity score. It is not uncommon for the qth quantiles to be two different sets of patients before and after the treatment. For this reason, we have to assume that these two groups of patients are homogeneous if they were in the same quantiles.<br />
<br />
Income-Food Expenditure Example: Let’s examine the Engel data (N=235) on the relationship between food expenditure (foodexp) and household income (income)<sup>11</sup>. We can plot the data and then explore the superposition of the six fitted quantile regression lines. <br />
<br />
install.packages("quantreg")<br />
library(quantreg)<br />
data(engel)<br />
attach(engel)<br />
<br />
<center>head(engel)<br />
{| class="wikitable" style="text-align:center; " border="1"<br />
|-<br />
|||Income||Foodexp<br />
|- <br />
|1||420.1577||255.8394<br />
|-<br />
|2||541.4117||310.9587<br />
|-<br />
|3||901.1575||485.6800<br />
|- <br />
|4||639.0802||402.9974<br />
|-<br />
|5||750.8756||495.5608<br />
|-<br />
|6||945.7989||633.7978<br />
<br />
|}<br />
</center><br />
<br />
<br />
<center>summary(engel)<br />
{| class="wikitable" style="text-align:center; " border="1"<br />
|-<br />
|||Income||Foodexp<br />
|- <br />
|Min||377.1||242.3<br />
|-<br />
|1st Qu.||638.9||429.7<br />
|-<br />
|Median||884.0||582.5<br />
|- <br />
|Mean||982.5||624.2<br />
|-<br />
|3rd Qu.||1164.0||743.9<br />
|-<br />
|Max||4957.8||2032.7<br />
<br />
|}<br />
</center><br />
<br />
Note: if <i>Y</i> is a real-valued random variable with cumulative distribution function F<sub>Y</sub>(y)=P(Y≤ y), then the τ-quantile of <i>Y</i> is given by<br />
<br />
<center> Q<sub>Y</sub>(τ)=F<sub>Y</sub><sup>-1</sup>(τ)=inf{ y:F<sub>Y</sub>(y)≥τ} </center><br />
<br />
where 0≤τ≤1.<br />
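This inverse-CDF definition can be checked numerically; in R, quantile() with type=1 implements exactly this infimum form (a small sketch on simulated data):<br />
<br />
 set.seed(1)<br />
 y <- rnorm(1000)<br />
 tau <- 0.25<br />
 q.tau <- quantile(y, probs = tau, type = 1)  # inf{ y : F_Y(y) >= tau }<br />
 Fy <- ecdf(y)                                # empirical CDF F_Y<br />
 Fy(q.tau) >= tau                             # TRUE by construction<br />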
<br />
<center>[[Image:SMHS_Methods12.png|500px]] </center><br />
<br />
# (1) Graphics<br />
plot(income, foodexp, cex=.25, type="n", xlab="Household Income", ylab="Food Expenditure")<br />
points(income, foodexp, cex=.5, col="blue")<br />
<br />
# tau - the quantile(s) to be estimated, in the range from 0 to 1. An object "rq.process" and an object "rqs" <br />
# are returned containing the matrix of coefficient estimates at the specified quantiles.<br />
abline( rq(foodexp ~ income, tau=.5), col="blue") # Quantile Regression Model<br />
<br />
abline( lm(foodexp ~ income), lty=2, lwd=3, col="red") # linear model<br />
taus <- c(0.05, 0.1, 0.25, 0.75, 0.90, 0.95)<br />
colors <- rainbow(length(taus))<br />
<br />
models <- vector(mode = "list", length = length(taus)) # define a vector of models to store QR for diff taus<br />
model.names <- vector(mode = "list", length = length(taus)) # define a vector model names<br />
<br />
for( i in 1:length(taus)){<br />
models[[i]] <- rq(foodexp ~ income, tau=taus[i]) <br />
var <- taus[i]<br />
model.names[[i]] <- paste("Model [", i , "]: tau=", var)<br />
abline( models[[i]], lwd=2, col= colors[[i]])<br />
}<br />
legend(3000, 1100, model.names, col= colors, pch= taus, bty='n', cex=.75)<br />
<br />
<center>[[Image:SMHS_Methods13.png|500px]] </center><br />
<br />
# (2) Inference about quantile regression coefficients. As an alternative to the rank-inversion confidence intervals, we can obtain a table of coefficients, standard errors, t-statistics, and p-values using the summary function:<br />
<br />
<b>summary(models[[3]], se = "nid")</b><br />
<br />
Call: rq(formula = foodexp ~ income, tau = taus[i])<br />
<br />
tau: [1] 0.25<br />
<br />
<center>Coefficients<br />
{| class="wikitable" style="text-align:center; " border="1"<br />
|-<br />
|||Value||Std. Error||t Value||$Pr(>|t|)$<br />
|- <br />
|(Intercept)||95.48354||21.39237||4.46344||0.00001<br />
|-<br />
|Income||0.47410||0.02906||16.31729||0.00000<br />
<br />
|}<br />
</center><br />
<br />
# Alternatively, summary.rq can be called directly; setting se = "boot" would compute bootstrapped standard errors.<br />
 summary.rq(models[[3]], se = "nid")<br />
<br />
Call: rq(formula = foodexp ~ income, tau = taus[i])<br />
tau: [1] 0.25<br />
<br />
<center>Coefficients<br />
{| class="wikitable" style="text-align:center; " border="1"<br />
|-<br />
|||Value||Std. Error||t Value||$Pr(>|t|)$<br />
|- <br />
|(Intercept)||95.48354||21.39237||4.46344||0.00001<br />
|-<br />
|Income||0.47410||0.02906||16.31729||0.00000<br />
<br />
|}<br />
</center><br />
<br />
==Nonparametric Regression Methods ==<br />
<br />
Nonparametric regression provides another way of dealing with HTE in RCTs. Different nonparametric methods, such as kernel smoothing methods and series methods, can be used to generate test statistics for examining the presence of HTE. A kernel method is a weighting scheme based on a kernel function (e.g., uniform, Gaussian). When evaluating the treatment effect of a patient in RCTs, the kernel method assigns larger weights to those observations with similar covariates, because patients with similar covariates are assumed to provide more relevant data on the predicted treatment response. When participants have divergent backgrounds (e.g., demographic, clinical), kernel smoothing methods still utilize information from the highly divergent participants when estimating a particular subject’s treatment effect, but assign them lower weights. Kernel methods require choosing a set of smoothing parameters to group patients according to their relative degree of similarity. A drawback is that the corresponding test statistics may be sensitive to the chosen bandwidths, which inhibits the interpretation of the results. Series methods use approximating functions (splines or power series of the explanatory variables) to construct test statistics. Compared to kernel smoothing methods, series methods normally have the advantage of computational convenience; however, the precision of the test statistics depends on the number of terms selected in the series. <br />
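The kernel-weighting idea can be illustrated with a minimal Nadaraya-Watson estimator in base R (simulated data; the bandwidth h is chosen arbitrarily here rather than by cross-validation):<br />
<br />
 nw.estimate <- function(x0, x, y, h) {<br />
   w <- dnorm((x - x0)/h)   # Gaussian kernel: similar subjects get larger weights<br />
   sum(w*y)/sum(w)          # locally weighted average response at x0<br />
 }<br />
 set.seed(7)<br />
 x <- runif(200, 20, 65)    # hypothetical ages<br />
 y <- 10 + 0.2*x - 0.002*x^2 + rnorm(200, sd = 0.3)<br />
 nw.estimate(40, x, y, h = 2)   # estimated mean response near age 40<br />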
<br />
Canadian Wage Data Example: Nonparametric regression extends the classical parametric regression (e.g., lm, lmer) involving one continuous dependent variable, y, and (1 or more) continuous explanatory variable(s), x. Let’s start with a popular parametric model of a wage equation that we can extend to a fully nonparametric regression model. First, we will compare and contrast the parametric and nonparametric approach towards univariate regression and then proceed to multivariate regression.<br />
<br />
Let’s use the Canadian cross-section wage data (<b>cps71</b>) consisting of a random sample taken from the 1971 Canadian Census for male individuals having common education (High-School). N=205 observations, 2 variables, the logarithm of the individual’s wage (logwage) and their age (age). The classical wage equation model includes a quadratic term of age.<br />
<br />
# install.packages("np")<br />
library("np")<br />
data("cps71")<br />
<br />
# (1) Linear Model -> R<sup>2</sup> = 0.2308<br />
model.lin <- lm( logwage ~ age + I(age^2), data = cps71)<br />
summary(model.lin)<br />
<br />
Call:<br />
lm(formula = logwage ~ age + I(age^2), data = cps71)<br />
<br />
Residuals:<br />
Min 1Q Median 3Q Max <br />
-2.4041 -0.1711 0.0884 0.3182 1.3940 <br />
<br />
<br />
<center>Coefficients<br />
{| class="wikitable" style="text-align:center; " border="1"<br />
|-<br />
| ||Estimate||Std. Error||t Value||$Pr(>|t|)$<br />
|- <br />
|(Intercept)||10.0419773||0.4559986||22.022||< 2e-16 ***<br />
|-<br />
|Age||0.1731310||0.0238317|| 7.265||7.96e-12 ***<br />
|-<br />
|I(age^2)||-0.0019771||0.0002898||-6.822||1.02e-10 ***<br />
<br />
|}<br />
</center><br />
<br />
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1<br />
<br />
Residual standard error: 0.5608 on 202 degrees of freedom<br />
Multiple R-squared: 0.2308, Adjusted R-squared: 0.2232 <br />
F-statistic: 30.3 on 2 and 202 DF, p-value: 3.103e-12<br />
<br />
# (2) Next, we consider the local linear nonparametric method employing cross-validated <br />
# bandwidth selection and estimation in one step. Start with computing the least-squares<br />
# cross-validated bandwidths for the local constant estimator (default).<br />
# Note that <b>R<sup>2</sup> = 0.3108675</b><br />
bandwidth <- npregbw(formula= logwage ~ age, data = cps71)<br />
model.np <- npreg(bandwidth, regtype = "ll", bwmethod = "cv.aic", gradients = TRUE, data = cps71)<br />
summary(model.np)<br />
<br />
Regression Data: 205 training points, in 1 variable(s) age<br />
Bandwidth(s): 1.892157<br />
Kernel Regression Estimator: Local-Constant<br />
Bandwidth Type: Fixed<br />
Residual standard error: 0.5307943<br />
R-squared: <b><mark>0.3108675</mark></b><br />
Continuous Kernel Type: Second-Order Gaussian<br />
No. Continuous Explanatory Vars.: 1<br />
<br />
# NP model significance may be tested by<br />
npsigtest(model.np)<br />
<br />
Kernel Regression Significance Test<br />
Type I Test with IID Bootstrap (399 replications, Pivot=TRUE, joint=FALSE)<br />
Explanatory variables tested for significance: age (1)<br />
<br />
age<br />
Bandwidth(s): 1.892157<br />
<br />
Individual Significance Tests<br />
P Value: <br />
age < 2.22e-16 ***<br />
<br />
# So, as was the case for the linear parametric model, Age is significant in the local linear NP-model<br />
<br />
# (3) Graphical comparison of the parametric and nonparametric models <br />
 plot(cps71$age, cps71$logwage, xlab = "age", ylab = "log(wage)", cex=.1)<br />
 lines(cps71$age, fitted(model.lin), lty = 2, col = "red")<br />
 lines(cps71$age, fitted(model.np), lty = 1, col = "blue")<br />
 legend("topright", c("Data", "Linear", "Non-linear"), col=c("black", "red", "blue"), pch = c(1, 1, 1), bty='n', cex=.75)<br />
<br />
<center>[[Image:SMHS_Methods14.png|500px]] </center><br />
<br />
# some additional plots presenting the parametric (quadratic, dashed line) and the nonparametric estimates <br />
# (solid line) of the regression function for the cps71 data. <br />
plot(model.np, plot.errors.method = "asymptotic")<br />
plot(model.np, gradients = TRUE)<br />
lines(cps71$age, coef(model.lin)[2]+2*cps71$age*coef(model.lin)[3], lty = 2, col = "red")<br />
plot(model.np, gradients = TRUE, plot.errors.method = "asymptotic")<br />
<br />
# (4) using the Lin and NL models to generate predictions based on the obtained appropriate <br />
# bandwidths and estimated a nonparametric model. We need to create a set of explanatory<br />
# variables for which to generate predictions. These can be part of the original dataset or be<br />
# outside its scope. Typically, we don’t have the outcome for the evaluation data and need only <br />
# provide the explanatory variables for which predicted values are generated by the models.<br />
# Occasionally, splitting the dataset into two independent samples (training/testing), allows estimation<br />
# of a model on one sample, and evaluation of its performance on another.<br />
<br />
cps.eval.data <- data.frame(age = seq(10,70, by=10)) # simulate some explanatory X values (ages)<br />
pred.lin <- predict(model.lin, newdata = cps.eval.data) # Linear Prediction of log(Wage)<br />
pred.np <- predict(model.np, newdata = cps.eval.data) # non-Linear Prediction of log(Wage)<br />
plot(pred.lin, pred.np)<br />
abline(lm(pred.np ~ pred.lin))<br />
<br />
<center>[[Image:SMHS_Methods15.png|500px]] </center><br />
<br />
.<br />
.<br />
.<br />
<br />
==Predictive risk models ==<br />
<br />
Predictive risk models represent a class of methods for identifying potential for HTE when the individual patient risk for disease-related events at baseline depends on observed factors. Common measures include disease staging criteria, such as those used in COPD or heart failure, Framingham risk scores for cardiovascular event risk, and genetic variations, e.g., HER2 for breast cancer. Initial predictive risk modeling, a.k.a. risk function estimation, is often performed without accounting for treatment effects. Least squares or Cox proportional hazards regression methods are appropriate in many cases and provide relatively interpretable risk functions, but they rely on linearity assumptions and may not provide optimal predictive metrics. Partial least squares is an extension of least squares methods that can reduce the dimensionality of the predictor space by interposing latent variables, predicted by linear combinations of observable characteristics, as the intermediate predictors of one or more outcomes. Recursive partitioning methods (e.g., random forests), support vector machines, and neural networks are more recent methods with better predictive power than linear methods. Risk function estimation can range from highly exploratory analyses to near meta-analytic model validation, and may be useful at any stage of product development.<br />
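As a hedged illustration of baseline risk-function estimation (simulated, hypothetical data; the covariate names are made up for this sketch), a logistic regression risk model might look like:<br />
<br />
 set.seed(11)<br />
 n <- 500<br />
 age <- rnorm(n, 60, 10)    # hypothetical baseline covariates<br />
 sbp <- rnorm(n, 130, 15)<br />
 true.risk <- plogis(-8 + 0.06*age + 0.02*sbp)<br />
 event <- rbinom(n, 1, true.risk)             # simulated disease-related events<br />
 risk.model <- glm(event ~ age + sbp, family = binomial)<br />
 risk.score <- predict(risk.model, type = "response")  # individual baseline risk scores<br />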
<br />
HIV Example: The <b>“hmohiv”</b> dataset represents a study of HIV positive patients examining whether there was a difference in survival times of HIV positive patients between a cohort using intravenous drugs (drug=1) and a cohort not using the IV drug (drug=0). The <b>hmohiv</b> data includes the following variables:<br />
<br />
<center><br />
{| class="wikitable" style="text-align:center; " border="1"<br />
|-<br />
|ID||Time||Age||Drug||Censor||Entdate||Enddate<br />
|- <br />
|1||5||46||0||1||5/15/1990||10/14/1990<br />
|-<br />
|2||6||35||1||0||9/19/1989||3/20/1990<br />
|-<br />
|3||8||30||1||1||4/21/1991||12/20/1991<br />
|-<br />
|4||3||30||1||1||1/3/1991||4/4/1991<br />
|-<br />
|5||22||36||0||1||9/18/1989||7/19/1991<br />
|-<br />
|6||1||32||1||0||3/18/1991||4/17/1991<br />
|-<br />
|...||...||...||...||...||...||...<br />
<br />
<br />
|}<br />
</center><br />
<br />
#cleaning up environment<br />
rm(list=ls())<br />
<br />
# load survival library<br />
library(survival)<br />
<br />
# load hmohiv data<br />
hmohiv<-read.table("http://www.ats.ucla.edu/stat/r/examples/asa/hmohiv.csv", sep=",", header = TRUE)<br />
attach(hmohiv)<br />
<br />
# construct a data frame for the 2 cohorts: no-IV-drug (drug=0) and IV-drug (drug=1)<br />
 drug.new <- data.frame(drug=c(0,1))<br />
 <br />
 # Fit the Cox proportional hazards regression model<br />
 cox.model <- coxph( Surv(time, censor) ~ drug, method="breslow")<br />
 fit.1 <- survfit(cox.model, newdata=drug.new)<br />
<br />
# plot results<br />
plot(fit.1, xlab="Survival Time (Months)", ylab="Survival Probability")<br />
points(fit.1$time, fit.1$surv[,1], pch=1)<br />
points(fit.1$time, fit.1$surv[,2], pch=2)<br />
legend(40, .8, c("Drug Absent", "Drug Present"), pch=c(1,2))<br />
<br />
<center>[[Image:SMHS_Methods16.png|500px]] </center><br />
<br />
# inspect the resulting Cox proportional hazards model<br />
cox.model <br />
Call:<br />
coxph(formula = Surv(time, censor) ~ drug, method = "breslow")<br />
<br />
coef exp(coef) se(coef) z p<br />
<b>drug</b> 0.779 2.18 0.242 3.22 <b>0.0013</b><br />
<br />
Likelihood ratio test=10.2 on 1 df, p=0.00141 n= 100, number of events= 80 <br />
<br />
===Footnotes===<br />
<br />
*<sup>8</sup> http://onlinelibrary.wiley.com/enhanced/doi/10.1002/jrsm.54<br />
*<sup>9</sup> http://effectivehealthcare.ahrq.gov/search-for-guides-reviews-and-reports/?pageaction=displayproduct&productID=1857 <br />
*<sup>10</sup> http://jpepsy.oxfordjournals.org/content/39/2/138.full#sec-14<br />
*<sup>11</sup> http://www.ers.usda.gov/media/200576/err32c_1_.pdf<br />
<br />
==[[SMHS_MethodsHeterogeneity_CER|Next see: Comparative Effectiveness Research (CER)]]==<br />
<br />
*[[SMHS_MethodsHeterogeneity|Back to the Heterogeneity of Treatment Effects, Case-Studies of Comparative Effectiveness Research section]]<br />
<br />
<hr><br />
* SOCR Home page: http://www.socr.umich.edu<br />
<br />
{{translate|pageName=http://wiki.socr.umich.edu/index.php/SMHS_MethodsHeterogeneity_MetaAnalysis}}</div>
<hr />
<div>==[[SMHS_MethodsHeterogeneity| Methods for Studying Heterogeneity of Treatment Effects, Case-Studies of Comparative Effectiveness Research]] - Meta-Analyses ==<br />
<br />
==Meta-analysis==<br />
<br />
===Overview===<br />
<br />
Meta-analysis is an approach to combine treatment effects across trials or studies into an aggregated treatment effect with higher statistical power than observed in each individual trials. It may detect HTE by testing for differences in treatment effects across similar RCTs. It requires that the individual treatment effects are similar to ensure pooling is meaningful. In the presence of large clinical or methodological differences between the trials, it may be to avoid meta-analyses. The presence of HTE across studies in a meta-analysis may be due to differences in the design or execution of the individual trials (e.g., randomization methods, patient selection criteria). <b>Cochran's Q is a methods for detection of heterogeneity, which is computed as the weighted sum of squared differences between each study's treatment effect and the pooled effects across the studies.</b> It is a barometer of inter-trial differences impacting the observed study result. A possible source of error in a meta-analysis is publication bias. Trial size may introduce publication bias since larger trials are more likely to be published. Language and accessibility represent other potential confounding factors. When the heterogeneity is not due to poor study design, it may be useful to optimize the treatment benefits for different cohorts of participants. <br />
<br />
Cochran's Q statistics is the weighted sum of squares on a standardized scale<sup>8</sup>. <b>The corresponding P value indicates the strength of the evidence of presence of heterogeneity.</b> This test may have low power to detect heterogeneity sometimes and it is suggested to use a value of 0.10 as a cut-off for significance (Higgins et al., 2003). The Q statistics also may have too much power as a test of heterogeneity when the number of studies is large.<br />
<br />
===Simulation Example 1===<br />
<br />
# Install and Load library<br />
install.packages("meta")<br />
library(meta)<br />
<br />
# Set number of studies<br />
n.studies = 15<br />
<br />
# number of treatments: case1, case2, control<br />
n.trt = 3<br />
<br />
# number of outcomes<br />
n.event = 2<br />
<br />
# simulate the (balanced) number of cases (case1 and case2) and controls in each study<br />
ctl.group = rbinom(n = n.studies, size = 200, prob = 0.3)<br />
case1.group = rbinom(n = n.studies, size = 200, prob = 0.3)<br />
case2.group = rbinom(n = n.studies, size = 200, prob = 0.3)<br />
<br />
# Simulate the number of outcome events (e.g., deaths) and no events in the control group<br />
event.ctl.group = rbinom(n = n.studies, size = ctl.group, prob = rep(<mark>0.1</mark>, length(ctl.group)))<br />
noevent.ctl.group = ctl.group - event.ctl.group<br />
<br />
# Simulate the number of events and no events in the case1 group<br />
event.case1.group = rbinom(n = n.studies, size = case1.group, prob = rep(<mark>0.5</mark>, length(case1.group)))<br />
noevent.case1.group = case1.group - event.case1.group<br />
<br />
# Simulate the number of events and no events in the case2 group<br />
event.case2.group = rbinom(n = n.studies, size = case2.group, prob = rep(<mark>0.6</mark>, length(case2.group)))<br />
noevent.case2.group = case2.group - event.case2.group<br />
<br />
# Run the univariate meta-analysis using <b>metabin()</b>, Meta-analysis of binary outcome data – <br />
# Calculation of fixed and random effects estimates (risk ratio, odds ratio, risk difference or arcsine<br />
# difference) for meta-analyses with binary outcome data. Mantel-Haenszel (MH), <br />
# inverse variance and Peto method are available for pooling.<br />
<br />
# <b>method</b> = A character string indicating which method is to be used for pooling of studies. <br />
# one of "MH" , "Inverse" , or "Cochran"<br />
# sm = A character string indicating which summary measure ("OR"=odds ratio, "RR"=risk ratio, <br />
# "RD"=risk difference) is to be used for pooling of studies<br />
<br />
# Control vs. Case1, n.e and n.c are numbers in experimental and control groups<br />
meta.ctr_case1 <- metabin(event.e = <b>event.case1.group</b>, n.e = case1.group, event.c = <b>event.ctl.group</b>, <br />
n.c = ctl.group, method = "MH", sm = "OR")<br />
# in this case we use the Odds Ratio, comparing the odds of death in the experimental and control groups<br />
forest(meta.ctr_case1)<br />
<br />
<center>[[Image:SMHS_Methods8.png|500px]] </center><br />
<br />
# Control vs. Case2<br />
meta.ctr_case2 <- metabin(event.e = event.case2.group, n.e = case2.group, event.c = event.ctl.group, <br />
n.c = ctl.group, method = "MH", sm = "OR")<br />
forest(meta.ctr_case2)<br />
<br />
<center>[[Image:SMHS_Methods9.png|500px]] </center><br />
<br />
# Case1 vs. Case2<br />
meta.case1_case2 <- metabin(event.e = event.case1.group, n.e = case1.group, event.c = event.case2.group, <br />
n.c = case2.group, method = "MH", sm = "OR")<br />
forest(meta.case1_case2)<br />
summary(meta.case1_case2)<br />
<br />
Test of heterogeneity:<br />
Q d.f. p-value<br />
11.99 14 0.6071<br />
<br />
<center>[[Image:SMHS_Methods10.png|500px]] </center><br />
<br />
The <b>forest plot</b> and ''I''<sup>2</sup> statistic indicate insufficient evidence to reject the null hypothesis of no study heterogeneity (Q=11.99, df=14, p=0.6071); hence the fixed-effects model is appropriate.<br />
<br />
==Series of “N of 1” trials==<br />
<br />
This technique combines (a “series of”) n-of-1 trial data to identify HTE. An n-of-1 trial is a repeated crossover trial for a single patient, which randomly assigns the patient to one treatment vs. another for a given time period, after which the patient is re-randomized to treatment for the next time period, usually repeated for 4-6 time periods. Such trials are most feasibly done in chronic conditions, where little or no washout period is needed between treatments and treatment effects are identifiable in the short-term, such as pain or reliable surrogate markers. Combining data from identical n-of-1 trials across a set of patients enables the statistical analysis controlling for patient fixed or random effects, covariates, centers, or sequence effects, see <b>Figure</b> below. These combined trials are often analyzed within a Bayesian context using shrinkage estimators that combine individual and group mean treatment effects to create a “posterior” individual mean treatment effect estimate which is a form of inverse variance-weighted average of the individual and group effects. Such trials are typically more expensive than standard RCTs on a per-patient basis, however, they require much smaller sample sizes, often less than 100 patients (due to the efficient individual-as-own-control design), and create individual treatment effect estimates that are not possible in a non-crossover design<sup>9</sup>. For the individual patient, the treatment effect can be re-estimated after each time period, and the trial stopped at any point when the more effective treatment is identified with reasonable statistical certainty.<br />
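The shrinkage step can be illustrated numerically (a simplified Python sketch of the inverse variance-weighted averaging idea, not the full Bayesian computation; all values below are invented):<br />

```python
# Shrink an individual mean treatment effect toward the group mean;
# the shrinkage factor B grows with the within-patient noise
def shrink(individual_mean, within_var, group_mean, between_var):
    B = within_var / (within_var + between_var)
    return B * group_mean + (1 - B) * individual_mean

# Hypothetical patient: individual estimate 10, group mean 6
print(shrink(10.0, 4.0, 6.0, 4.0))  # equal precisions -> halfway, 8.0
# A precisely measured patient keeps most of their own estimate
print(shrink(10.0, 0.5, 6.0, 4.0))
```

Patients with noisier individual estimates are pulled more strongly toward the group mean, which is part of what makes the n-of-1 design efficient at small sample sizes.<br />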
<br />
====Example====<br />
<br />
A study involving 8 participants collected data across 30 days, in which 15 treatment days and 15 control days were randomly assigned within each participant<sup>10</sup>. The treatment is represented as a binary variable (control day=0; treatment day=1). The outcome variable represents the response to the intervention within each of the 8 participants. The study employed fixed-effects modeling: N − 1 dummy-coded variables were created to represent the N=8 participants, with the last (i=8) participant serving as the reference (i.e., the model intercept). Each dummy-coded variable thus represents the difference between participant i and the 8th participant, so all other patients' values are relative to the values of the 8th (reference) subject. The overall differences across participants in fixed effects can be evaluated with multiple <b>degree-of-freedom F-tests.</b><br />
<br />
<center>[[Image:SMHS_Methods11.png|500px]] </center><br />
<br />
<center><br />
<br />
{| class="wikitable" style="text-align:center; " border="1"<br />
|-<br />
|ID||Day||Tx||SelfEff||SelfEff25||WPSS||SocSuppt||PMss||PMss3||PhyAct<br />
|-<br />
|1||1||1||33||8||0.97||5.00||4.03||1.03||53<br />
|-<br />
|1||2||1||33||8||-0.17||3.87||4.03||1.03||73<br />
|-<br />
|1||3||0||33||8||0.81||4.84||4.03||1.03||23<br />
|-<br />
|1||4||0||33||8||-0.41||3.62||4.03||1.03||36<br />
|-<br />
|...||...||...||...||...||...||...||...||...||...<br />
<br />
|}<br />
</center> Complete data is available in the <b>Appendix.</b><br />
<br />
<br />
<br />
<center>Data Summary<br />
<br />
{| class="wikitable" style="text-align:center; " border="1"<br />
|-<br />
|Intercept||Constant<br />
|-<br />
|Physical Activity||PhyAct<br />
|-<br />
|Intervention||Tx<br />
|-<br />
|WP Social Support||WPSS<br />
|-<br />
|PM Social Support (1-3)||PMss3<br />
|-<br />
|Self Efficacy||SelfEff25<br />
<br />
|}<br />
</center><br />
<br />
rm(list=ls())<br />
Nof1 <-read.table("https://umich.instructure.com/files/330385/download?download_frd=1&verifier=DwJUGSd6t24dvK7uYmzA2aDyzlmsohyaK6P7jK0Q", sep=",", header = TRUE) # 02_Nof1_Data.csv<br />
attach(Nof1)<br />
head(Nof1)<br />
<br />
<center><br />
<br />
{| class="wikitable" style="text-align:center; " border="1"<br />
|-<br />
| ||ID||Day||Tx||SelfEff||SelfEff25||WPSS||SocSuppt||PMss||PMss3||PhyAct<br />
|-<br />
|1||1||1||1||33||8||0.97||5.00||4.03||1.03||53<br />
|-<br />
|2||1||2||1||33||8||-0.17||3.87||4.03||1.03||73<br />
|-<br />
|3||1||3||0||33||8||0.81||4.84||4.03||1.03||23<br />
|-<br />
|4||1||4||0||33||8||-0.41||3.62||4.03||1.03||36<br />
|-<br />
|5||1||5||1||33||8||0.59||4.62||4.03||1.03||21<br />
|-<br />
|6||1||6||1||33||8||-1.16||2.87||4.03||1.03||0<br />
<br />
|}<br />
</center><br />
<br />
df.1 = data.frame(PhyAct, Tx, WPSS, PMss3, SelfEff25) <br />
<br />
# load lme4, which provides lmer() for mixed-effects models<br />
library("lme4")<br />
<br />
# fit on the full Nof1 data, since Day, ID and SelfEff are not columns of df.1<br />
lm.1 <- lmer(PhyAct ~ Tx + SelfEff + Tx*SelfEff + (1|Day) + (1|ID), data = Nof1)<br />
summary(lm.1)<br />
<br />
Linear mixed model fit by REML ['lmerMod']<br />
Formula: PhyAct ~ Tx + SelfEff + Tx * SelfEff + (1 | Day) + (1 | ID)<br />
Data: df.1<br />
<br />
REML criterion at convergence: 8820<br />
<br />
<center> Scaled Residuals<br />
{| class="wikitable" style="text-align:center; " border="1"<br />
|-<br />
| Min||1Q||Median||3Q||Max<br />
|-<br />
|-2.7012||-0.6833||-0.0333||0.6542||3.9612<br />
|}<br />
</center><br />
<br />
<br />
<center> Random Effects<br />
{| class="wikitable" style="text-align:center; " border="1"<br />
|-<br />
| Groups ||Name||Variance ||Std.Dev.<br />
|-<br />
| Day||(Intercept) ||0.0 || 0.00 <br />
|-<br />
<br />
|ID|| (Intercept)||601.5||24.53 <br />
|-<br />
<br />
|Residual|| ||969.0 ||31.13 <br />
|}<br />
Number of obs: 900, groups: Day, 30; ID, 30<br />
</center> <br />
<br />
<br />
<center> Fixed Effects<br />
{| class="wikitable" style="text-align:center; " border="1"<br />
|-<br />
| ||Estimate||Std. Error||t value<br />
|-<br />
|(Intercept)||38.3772||14.4738||2.651<br />
|-<br />
|Tx||4.0283||6.3745||0.632<br />
|-<br />
|SelfEff||0.5818||0.5942||0.979<br />
|-<br />
|Tx:SelfEff||0.9702||0.2617||3.708<br />
|}<br />
</center><br />
<br />
<br />
<center> Correlation of Fixed Effects<br />
{| class="wikitable" style="text-align:center; " border="1"<br />
|-<br />
| ||(Intr)||Tx ||SlfEff<br />
|-<br />
| Tx|| -0.220|| || <br />
|-<br />
| SelfEff||-0.946 ||0.208 || <br />
|-<br />
| Tx:SelfEff ||0.208 ||-0.946 ||-0.220<br />
|}<br />
</center><br />
<br />
<br />
# Model: PhyAct = Tx + WPSS + PMss3 + Tx*WPSS + Tx*PMss3 + SelfEff25 + Tx*SelfEff25 + ε<br />
lm.2 = lm(PhyAct ~ Tx + WPSS + PMss3 + Tx*WPSS + Tx*PMss3 + SelfEff25 + Tx*SelfEff25, df.1) <br />
summary(lm.2)<br />
<br />
Call:<br />
lm(formula = PhyAct ~ Tx + WPSS + PMss3 + Tx * WPSS + Tx * PMss3 + <br />
SelfEff25 + Tx * SelfEff25, data = df.1)<br />
<br />
<center> Residuals<br />
{| class="wikitable" style="text-align:center; " border="1"<br />
|-<br />
|Min||1Q||Median||3Q||Max <br />
|-<br />
| -102.39||-28.24||-1.47||25.16||122.41 <br />
<br />
|}<br />
</center><br />
<br />
<center>Coefficients<br />
{| class="wikitable" style="text-align:center; " border="1"<br />
|-<br />
| ||Estimate||Std. Error||t value||$Pr(>|t|)$<br />
|- <br />
|(Intercept)||52.0067||1.8080||28.764||< 2e-16 ***<br />
|-<br />
|Tx||27.7366||2.5569||10.848||< 2e-16 ***<br />
|-<br />
|WPSS||1.9631||2.4272||0.809||0.418853 <br />
|- <br />
|PMss3||13.5110||2.7853||4.851||1.45e-06 ***<br />
|-<br />
|SelfEff25||0.6289||0.2205||2.852||0.004439 ** <br />
|-<br />
|Tx:WPSS||9.9114||3.4320||2.888||0.003971 ** <br />
|-<br />
|Tx:PMss3||8.8422||3.9390||2.245||0.025025 * <br />
|-<br />
|Tx:SelfEff25||1.0460||0.3118||3.354||0.000829 ***<br />
<br />
<br />
|}<br />
</center><br />
<br />
[Using SAS (StudyI_Analyses.sas, StudyIIab_Analyses.sas)]<br />
<br />
<center> Type 3 Tests of Fixed Effects<br />
{| class="wikitable" style="text-align:center; " border="1"<br />
|-<br />
|<b>Effect</b>||<b>Num DF</b>||<b>Den DF</b>||<b>F Value</b>||<b>$Pr>F$</b><br />
|-<br />
|<b>Tx</b>||1||224||67.46||<.0001 <br />
|-<br />
|<b>ID</b>||7||224||25.95||<.0001<br />
|-<br />
|<b>Tx*ID</b>||7||224||2.92||0.0060<br />
|}<br />
</center><br />
<br />
==Quantile Treatment Effect (QTE)==<br />
<br />
QTE employs quantile regression estimation (QRE) to examine the central tendency and statistical dispersion of the treatment effect in a population. These may not be revealed by the conventional mean estimation in RCTs. For instance, patients with different comorbidity scores may respond differently to a treatment. Quantile regression has the ability to reveal HTE according to the ranking of patients’ comorbidity scores or some other relevant covariate by which patients may be ranked. Therefore, in an attempt to inform patient-centered care, quantile regression provides more information on the distribution of the treatment effect than typical conditional mean treatment effect estimation. QTE characterizes the heterogeneous treatment effect on individuals and groups across various positions in the distributions of different outcomes of interest. This unique feature has given quantile regression analysis substantial attention and has been employed across a wide range of applications, particularly when evaluating the economic effects of welfare reform.<br />
<br />
One caveat of applying QRE in clinical trials for examining HTE is that the QTE doesn’t demonstrate the treatment effect for a given patient. Instead, it focuses on the treatment effect among subjects within the qth quantile, such as those who are exactly at the top 10th percent in terms of blood pressure or a depression score for some covariate of interest, for example, comorbidity score. It is not uncommon for the qth quantiles to be two different sets of patients before and after the treatment. For this reason, we have to assume that these two groups of patients are homogeneous if they were in the same quantiles.<br />
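Mechanically, the qth sample quantile minimizes an asymmetric ''check'' loss that weights positive residuals by τ and negative residuals by 1−τ. This can be verified with a small Python sketch (toy numbers only; the worked example below uses R's quantreg package):<br />

```python
# check loss: rho_tau(u) = u * (tau - 1) if u < 0 else u * tau
def check_loss(u, tau):
    return u * (tau - 1.0) if u < 0 else u * tau

# the tau-th sample quantile minimizes the total check loss;
# here we search only among the observed values
def quantile_by_loss(data, tau):
    return min(data, key=lambda c: sum(check_loss(y - c, tau) for y in data))

data = [1, 2, 3, 4, 100]            # a skewed toy sample
print(quantile_by_loss(data, 0.5))  # the median, 3, ignores the outlier
print(quantile_by_loss(data, 0.9))  # an upper quantile tracks the tail
```

This asymmetric loss is why quantile regression characterizes different positions of the outcome distribution rather than the conditional mean.<br />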
<br />
Income-Food Expenditure Example: Let’s examine the Engel data (N=235) on the relationship between food expenditure (foodexp) and household income (income)<sup>11</sup>. We can plot the data and then explore the superposition of the six fitted quantile regression lines. <br />
<br />
install.packages("quantreg")<br />
library(quantreg)<br />
data(engel)<br />
attach(engel)<br />
<br />
<center>head(engel)<br />
{| class="wikitable" style="text-align:center; " border="1"<br />
|-<br />
|||Income||Foodexp<br />
|- <br />
|1||420.1577||255.8394<br />
|-<br />
|2||541.4117||310.9587<br />
|-<br />
|3||901.1575||485.6800<br />
|- <br />
|4||639.0802||402.9974<br />
|-<br />
|5||750.8756||495.5608<br />
|-<br />
|6||945.7989||633.7978<br />
<br />
|}<br />
</center><br />
<br />
<br />
<center>summary(engel)<br />
{| class="wikitable" style="text-align:center; " border="1"<br />
|-<br />
|||Income||Foodexp<br />
|- <br />
|Min||377.1||242.3<br />
|-<br />
|1st Qu.||638.9||429.7<br />
|-<br />
|Median||884.0||582.5<br />
|- <br />
|Mean||982.5||624.2<br />
|-<br />
|3rd Qu.||1164.0||743.9<br />
|-<br />
|Max||4957.8||2032.7<br />
<br />
|}<br />
</center><br />
<br />
Note: If <i>Y</i> is a real-valued random variable with cumulative distribution function F<sub>Y</sub>(y)=P(Y≤ y), then the τ-quantile of <i>Y</i> is given by<br />
<br />
<center> Q<sub>Y</sub>(τ)=F<sub>Y</sub><sup>-1</sup>(τ)=inf{ y:F<sub>Y</sub>(y)≥τ} </center><br />
<br />
where 0≤τ≤1.<br />
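For a finite sample, this definition reduces to taking the smallest observation at which the empirical CDF reaches τ; a quick Python check of the inf definition (toy income-like numbers, chosen arbitrarily):<br />

```python
# empirical version of Q_Y(tau) = inf{ y : F_Y(y) >= tau }
def empirical_quantile(sample, tau):
    ys = sorted(sample)
    n = len(ys)
    for i, y in enumerate(ys, start=1):
        if i / n >= tau:  # F(y) = fraction of observations <= y
            return y
    return ys[-1]

sample = [420, 541, 901, 639, 750, 945, 380, 1164, 884, 4958]
print(empirical_quantile(sample, 0.25))  # 541
print(empirical_quantile(sample, 0.50))  # 750
```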
<br />
<center>[[Image:SMHS_Methods12.png|500px]] </center><br />
<br />
# (1) Graphics<br />
plot(income, foodexp, cex=.25, type="n", xlab="Household Income", ylab="Food Expenditure")<br />
points(income, foodexp, cex=.5, col="blue")<br />
<br />
# tau - the quantile(s) to be estimated, in the range from 0 to 1. An object "rq.process" and an object "rqs" <br />
# are returned containing the matrix of coefficient estimates at the specified quantiles.<br />
abline( rq(foodexp ~ income, tau=.5), col="blue") # Quantile Regression Model<br />
<br />
abline( lm(foodexp ~ income), lty=2, lwd=3, col="red") # linear model<br />
taus <- c(0.05, 0.1, 0.25, 0.75, 0.90, 0.95)<br />
colors <- rainbow(length(taus))<br />
<br />
models <- vector(mode = "list", length = length(taus)) # define a vector of models to store QR for diff taus<br />
model.names <- vector(mode = "list", length = length(taus)) # define a vector model names<br />
<br />
for( i in 1:length(taus)){<br />
models[[i]] <- rq(foodexp ~ income, tau=taus[i]) <br />
var <- taus[i]<br />
model.names[[i]] <- paste("Model [", i , "]: tau=", var)<br />
abline( models[[i]], lwd=2, col= colors[[i]])<br />
}<br />
legend(3000, 1100, model.names, col= colors, pch= taus, bty='n', cex=.75)<br />
<br />
<center>[[Image:SMHS_Methods13.png|500px]] </center><br />
<br />
# (2) Inference about quantile regression coefficients. As an alternative to the rank-inversion confidence intervals, we can obtain a table of coefficients, standard errors, t-statistics, and p-values using the summary function:<br />
<br />
<b>summary(models[[3]], se = "nid")</b><br />
<br />
Call: rq(formula = foodexp ~ income, tau = taus[i])<br />
<br />
tau: [1] 0.25<br />
<br />
<center>Coefficients<br />
{| class="wikitable" style="text-align:center; " border="1"<br />
|-<br />
|||Value||Std. Error||t Value||$Pr(>|t|)$<br />
|- <br />
|(Intercept)||95.48354||21.39237||4.46344||0.00001<br />
|-<br />
|Income||0.47410||0.02906||16.31729||0.00000<br />
<br />
|}<br />
</center><br />
<br />
# Equivalently, summary.rq can be called directly (use se = "boot" instead of "nid" for bootstrapped standard errors).<br />
summary.rq(models[[3]], se = "nid")<br />
<br />
Call: rq(formula = foodexp ~ income, tau = taus[i])<br />
tau: [1] 0.25<br />
<br />
<center>Coefficients<br />
{| class="wikitable" style="text-align:center; " border="1"<br />
|-<br />
|||Value||Std. Error||t Value||$Pr(>|t|)$<br />
|- <br />
|(Intercept)||95.48354||21.39237||4.46344||0.00001<br />
|-<br />
|Income||0.47410||0.02906||16.31729||0.00000<br />
<br />
|}<br />
</center><br />
<br />
==Nonparametric Regression Methods ==<br />
<br />
Nonparametric regression enables dealing with HTE in RCTs. Different nonparametric methods, such as kernel smoothing methods and series methods, can be used to generate test statistics for examining the presence of HTE. A kernel method is a weighting scheme based on a kernel function (e.g. uniform, Gaussian). When evaluating the treatment effect of a patient in RCTs, the kernel method assigns larger weights to those observations with similar covariates. This is done because it is assumed that patients with similar covariates provide more relevant data on predicted treatment response. Examining participants that have different backgrounds (e.g., demographic, clinical), kernel smoothing methods utilize information from highly divergent participants when estimating a particular subject’s treatment effect. Lower weights are assigned to very different subjects and the kernel methods require choosing a set of smoothing parameters to group patients according to their relative degree of similarities. A drawback is that the corresponding proposed test statistics may be sensitive to the chosen bandwidths, which inhibits the interpretation of the results. Series methods use approximating functions (splines or power series of the explanatory variables) to construct test statistics. Compared to kernel smoothing methods, series methods normally have the advantage of computational convenience; however, the precision of test statistics depends on the number of terms selected in the series. <br />
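The kernel weighting scheme described above can be sketched in a few lines. This is a hypothetical Python illustration of a local-constant (Nadaraya-Watson) estimator with a Gaussian kernel; the worked example below uses R's np package:<br />

```python
import math

# local-constant kernel regression estimate at x0: a weighted average of
# outcomes, with weights decaying smoothly in covariate distance
def kernel_estimate(x0, xs, ys, bandwidth):
    weights = [math.exp(-0.5 * ((x - x0) / bandwidth) ** 2) for x in xs]
    return sum(w * y for w, y in zip(weights, ys)) / sum(weights)

xs, ys = [0.0, 1.0], [0.0, 1.0]  # two toy observations
# halfway between equally weighted points -> the average, 0.5
print(kernel_estimate(0.5, xs, ys, bandwidth=1.0))
# a narrow bandwidth makes the nearer observation dominate
print(kernel_estimate(0.1, xs, ys, bandwidth=0.2))
```

The bandwidth plays exactly the role discussed above: it controls how quickly the influence of dissimilar subjects decays, which is why the resulting test statistics can be sensitive to its choice.<br />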
<br />
Canadian Wage Data Example: Nonparametric regression extends the classical parametric regression (e.g., lm, lmer) involving one continuous dependent variable, y, and (1 or more) continuous explanatory variable(s), x. Let’s start with a popular parametric model of a wage equation that we can extend to a fully nonparametric regression model. First, we will compare and contrast the parametric and nonparametric approach towards univariate regression and then proceed to multivariate regression.<br />
<br />
Let’s use the Canadian cross-section wage data (<b>cps71</b>) consisting of a random sample taken from the 1971 Canadian Census for male individuals having common education (High School). There are N=205 observations on 2 variables: the logarithm of the individual’s wage (logwage) and the individual’s age (age). The classical wage equation model includes a quadratic term in age.<br />
<br />
# install.packages("np")<br />
library("np")<br />
data("cps71")<br />
<br />
# (1) Linear Model -> R<sup>2</sup> = 0.2308<br />
model.lin <- lm( logwage ~ age + I(age^2), data = cps71)<br />
summary(model.lin)<br />
<br />
Call:<br />
lm(formula = logwage ~ age + I(age^2), data = cps71)<br />
<br />
Residuals:<br />
Min 1Q Median 3Q Max <br />
-2.4041 -0.1711 0.0884 0.3182 1.3940 <br />
<br />
<br />
<center>Coefficients<br />
{| class="wikitable" style="text-align:center; " border="1"<br />
|-<br />
| ||Estimate||Std. Error||t Value||$Pr(>|t|)$<br />
|- <br />
|(Intercept)||10.0419773||0.4559986||22.022||< 2e-16 ***<br />
|-<br />
|Age||0.1731310||0.0238317|| 7.265||7.96e-12 ***<br />
|-<br />
|I(age^2)||-0.0019771||0.0002898||-6.822||1.02e-10 ***<br />
<br />
|}<br />
</center><br />
<br />
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1<br />
<br />
Residual standard error: 0.5608 on 202 degrees of freedom<br />
Multiple R-squared: 0.2308, Adjusted R-squared: 0.2232 <br />
F-statistic: 30.3 on 2 and 202 DF, p-value: 3.103e-12<br />
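As a sanity check on the fitted quadratic, the age at which predicted log-wage peaks is −b/(2c), computed here from the coefficients above (a quick Python calculation):<br />

```python
# vertex of the fitted quadratic: logwage = a + b*age + c*age^2
b = 0.1731310   # coefficient on age (from the lm summary above)
c = -0.0019771  # coefficient on age^2

peak_age = -b / (2 * c)
print(round(peak_age, 1))  # predicted earnings peak in the mid-40s
```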
<br />
# (2) Next, we consider the local linear nonparametric method employing cross-validated <br />
# bandwidth selection and estimation in one step. Start with computing the least-squares<br />
# cross-validated bandwidths for the local constant estimator (default).<br />
# Note that <b>R<sup>2</sup> = 0.3108675</b><br />
bandwidth <- npregbw(formula= logwage ~ age, data = cps71)<br />
model.np <- npreg(bandwidth, regtype = "ll", bwmethod = "cv.aic", gradients = TRUE, data = cps71)<br />
summary(model.np)<br />
<br />
Regression Data: 205 training points, in 1 variable(s) age<br />
Bandwidth(s): 1.892157<br />
Kernel Regression Estimator: Local-Constant<br />
Bandwidth Type: Fixed<br />
Residual standard error: 0.5307943<br />
R-squared: <b><mark>0.3108675</mark></b><br />
Continuous Kernel Type: Second-Order Gaussian<br />
No. Continuous Explanatory Vars.: 1<br />
<br />
# NP model significance may be tested by<br />
npsigtest(model.np)<br />
<br />
Kernel Regression Significance Test<br />
Type I Test with IID Bootstrap (399 replications, Pivot=TRUE, joint=FALSE)<br />
Explanatory variables tested for significance: age (1)<br />
<br />
age<br />
Bandwidth(s): 1.892157<br />
<br />
Individual Significance Tests<br />
P Value: <br />
age < 2.22e-16 ***<br />
<br />
# So, as was the case for the linear parametric model, Age is significant in the local linear NP-model<br />
<br />
# (3) Graphical comparison of parametric and nonparametric models. <br />
plot(cps71$age, cps71$logwage, xlab = "age", ylab = "log(wage)", cex=.1)<br />
lines(cps71$age, fitted(model.lin), lty = 2, col = "red")<br />
lines(cps71$age, fitted(model.np), lty = 1, col = "blue")<br />
legend("topright", c("Data", "Linear", "Non-linear"), col=c("Black", "Red", "Blue"), pch = c(1, 1, 1), bty='n', cex=.75)<br />
<br />
<center>[[Image:SMHS_Methods14.png|500px]] </center><br />
<br />
# some additional plots presenting the parametric (quadratic, dashed line) and the nonparametric estimates <br />
# (solid line) of the regression function for the cps71 data. <br />
plot(model.np, plot.errors.method = "asymptotic")<br />
plot(model.np, gradients = TRUE)<br />
lines(cps71$age, coef(model.lin)[2]+2*cps71$age*coef(model.lin)[3], lty = 2, col = "red")<br />
plot(model.np, gradients = TRUE, plot.errors.method = "asymptotic")<br />
<br />
# (4) using the Lin and NL models to generate predictions based on the obtained appropriate <br />
# bandwidths and estimated a nonparametric model. We need to create a set of explanatory<br />
# variables for which to generate predictions. These can be part of the original dataset or be<br />
# outside its scope. Typically, we don’t have the outcome for the evaluation data and need only <br />
# provide the explanatory variables for which predicted values are generated by the models.<br />
# Occasionally, splitting the dataset into two independent samples (training/testing), allows estimation<br />
# of a model on one sample, and evaluation of its performance on another.<br />
<br />
cps.eval.data <- data.frame(age = seq(10,70, by=10)) # simulate some explanatory X values (ages)<br />
pred.lin <- predict(model.lin, newdata = cps.eval.data) # Linear Prediction of log(Wage)<br />
pred.np <- predict(model.np, newdata = cps.eval.data) # non-Linear Prediction of log(Wage)<br />
plot(pred.lin, pred.np)<br />
abline(lm(pred.np ~ pred.lin))<br />
<br />
<center>[[Image:SMHS_Methods15.png|500px]] </center><br />
<br />
.<br />
.<br />
.<br />
<br />
==Predictive risk models ==<br />
<br />
Predictive risk models represent a class of methods for identifying potential for HTE when the individual patient risk for disease-related events at baseline depends on observed factors. For instance, common measures are disease staging criteria, such as those used in COPD or heart failure, Framingham risk scores for cardiovascular event risk, or genetic variations, e.g., HER2 for breast cancer. Initial predictive risk modeling, aka risk function estimation, is often performed without accounting for treatment effects. Least squares or Cox proportional hazards regression methods are appropriate in many cases and provide relatively more interpretable risk functions, but rely on linearity assumptions and may not provide optimal predictive metrics. Partial least squares is an extension of least squares methods that can reduce the dimensionality of the predictor space by interposing latent variables, predicted by linear combinations of observable characteristics, as the intermediate predictors of one or more outcomes. Recursive partitioning methods (e.g., random forests), support vector machines, and neural networks represent more flexible methods with better predictive power than linear methods. Risk function estimation can range from highly exploratory analyses to near meta-analytic model validation, and may be useful at any stage of product development.<br />
<br />
HIV Example: The <b>“hmohiv”</b> dataset represents a study of HIV positive patients examining whether there was a difference in survival times of HIV positive patients between a cohort using intravenous drugs (drug=1) and a cohort not using the IV drug (drug=0). The <b>hmohiv</b> data includes the following variables:<br />
<br />
<center><br />
{| class="wikitable" style="text-align:center; " border="1"<br />
|-<br />
|ID||Time||Age||Drug||Censor||Entdate||Enddate<br />
|- <br />
|1||5||46||0||1||5/15/1990||10/14/1990<br />
|-<br />
|2||6||35||1||0||9/19/1989||3/20/1990<br />
|-<br />
|3||8||30||1||1||4/21/1991||12/20/1991<br />
|-<br />
|4||3||30||1||1||1/3/1991||4/4/1991<br />
|-<br />
|5||22||36||0||1||9/18/1989||7/19/1991<br />
|-<br />
|6||1||32||1||0||3/18/1991||4/17/1991<br />
|-<br />
|...||...||...||...||...||...||...<br />
<br />
<br />
|}<br />
</center><br />
<br />
#cleaning up environment<br />
rm(list=ls())<br />
<br />
# load survival library<br />
library(survival)<br />
<br />
# load hmohiv data<br />
hmohiv<-read.table("http://www.ats.ucla.edu/stat/r/examples/asa/hmohiv.csv", sep=",", header = TRUE)<br />
attach(hmohiv)<br />
<br />
# construct a data frame for the 2 cohorts: no-IV-drug (0) and IV-drug (1)<br />
drug.new <- data.frame(drug=c(0,1))<br />
<br />
# Fit Cox proportional hazards regression model and obtain per-cohort survival curves<br />
cox.model <- coxph(Surv(time, censor) ~ drug, method="breslow")<br />
fit.1 <- survfit(cox.model, newdata=drug.new)<br />
<br />
# plot results<br />
plot(fit.1, xlab="Survival Time (Months)", ylab="Survival Probability")<br />
points(fit.1$time, fit.1$surv[,1], pch=1)<br />
points(fit.1$time, fit.1$surv[,2], pch=2)<br />
legend(40, .8, c("Drug Absent", "Drug Present"), pch=c(1,2))<br />
<br />
<center>[[Image:SMHS_Methods16.png|500px]] </center><br />
<br />
# to inspect the resulting Cox Proportional Hazards Model<br />
cox.model <br />
Call:<br />
coxph(formula = Surv(time, censor) ~ drug, method = "breslow")<br />
<br />
coef exp(coef) se(coef) z p<br />
<b>drug</b> 0.779 2.18 0.242 3.22 <b>0.0013</b><br />
<br />
Likelihood ratio test=10.2 on 1 df, p=0.00141 n= 100, number of events= 80 <br />
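To interpret this output, the hazard ratio for IV drug use is exp(coef), with a 95% confidence interval derived from the coefficient's standard error; a quick Python check of the numbers above:<br />

```python
import math

coef, se = 0.779, 0.242        # from the coxph output above
hazard_ratio = math.exp(coef)  # matches exp(coef) = 2.18 in the output
ci_low = math.exp(coef - 1.96 * se)
ci_high = math.exp(coef + 1.96 * se)
print(round(hazard_ratio, 2), round(ci_low, 2), round(ci_high, 2))
```

So IV drug users face roughly twice the hazard of non-users, and the interval excludes 1, consistent with the significant p-value reported above.<br />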
<br />
===Footnotes===<br />
<br />
*<sup>8</sup> http://onlinelibrary.wiley.com/enhanced/doi/10.1002/jrsm.54<br />
*<sup>9</sup> http://effectivehealthcare.ahrq.gov/search-for-guides-reviews-and-reports/?pageaction=displayproduct&productID=1857 <br />
*<sup>10</sup> http://jpepsy.oxfordjournals.org/content/39/2/138.full#sec-14<br />
<br />
==[[SMHS_MethodsHeterogeneity_CER|Next see: Comparative Effectiveness Research (CER)]]==<br />
<br />
*[[SMHS_MethodsHeterogeneity|Back to the Heterogeneity of Treatment Effects, Case-Studies of Comparative Effectiveness Research section]]<br />
<br />
<hr><br />
* SOCR Home page: http://www.socr.umich.edu<br />
<br />
{{translate|pageName=http://wiki.socr.umich.edu/index.php/SMHS_MethodsHeterogeneity_MetaAnalysis}}</div>
<hr />
<div>==[[SMHS_MethodsHeterogeneity| Methods for Studying Heterogeneity of Treatment Effects, Case-Studies of Comparative Effectiveness Research]] - Meta-Analyses ==<br />
<br />
==Meta-analysis==<br />
<br />
===Overview===<br />
<br />
Meta-analysis is an approach to combine treatment effects across trials or studies into an aggregated treatment effect with higher statistical power than observed in each individual trials. It may detect HTE by testing for differences in treatment effects across similar RCTs. It requires that the individual treatment effects are similar to ensure pooling is meaningful. In the presence of large clinical or methodological differences between the trials, it may be to avoid meta-analyses. The presence of HTE across studies in a meta-analysis may be due to differences in the design or execution of the individual trials (e.g., randomization methods, patient selection criteria). <b>Cochran's Q is a methods for detection of heterogeneity, which is computed as the weighted sum of squared differences between each study's treatment effect and the pooled effects across the studies.</b> It is a barometer of inter-trial differences impacting the observed study result. A possible source of error in a meta-analysis is publication bias. Trial size may introduce publication bias since larger trials are more likely to be published. Language and accessibility represent other potential confounding factors. When the heterogeneity is not due to poor study design, it may be useful to optimize the treatment benefits for different cohorts of participants. <br />
<br />
Cochran's Q statistics is the weighted sum of squares on a standardized scale<sup>8</sup>. <b>The corresponding P value indicates the strength of the evidence of presence of heterogeneity.</b> This test may have low power to detect heterogeneity sometimes and it is suggested to use a value of 0.10 as a cut-off for significance (Higgins et al., 2003). The Q statistics also may have too much power as a test of heterogeneity when the number of studies is large.<br />
<br />
===Simulation Example 1===<br />
<br />
# Install and Load library<br />
install.packages("meta")<br />
library(meta)<br />
<br />
# Set number of studies<br />
n.studies = 15<br />
<br />
# number of treatments: case1, case2, control<br />
n.trt = 3<br />
<br />
# number of outcomes<br />
n.event = 2<br />
<br />
# simulate the (balanced) number of cases (case1 and case2) and controls in each study<br />
ctl.group = rbinom(n = n.studies, size = 200, prob = 0.3)<br />
case1.group = rbinom(n = n.studies, size = 200, prob = 0.3)<br />
case2.group = rbinom(n = n.studies, size = 200, prob = 0.3)<br />
<br />
# Simulate the number of outcome events (e.g., deaths) and no events in the control group<br />
event.ctl.group = rbinom(n = n.studies, size = ctl.group, prob = rep(<mark>0.1</mark>, length(ctl.group)))<br />
noevent.ctl.group = ctl.group - event.ctl.group<br />
<br />
# Simulate the number of events and no events in the case1 group<br />
event.case1.group = rbinom(n = n.studies, size = case1.group, prob = rep(<mark>0.5</mark>, length(case1.group)))<br />
noevent.case1.group = case1.group - event.case1.group<br />
<br />
# Simulate the number of events and no events in the case2 group<br />
event.case2.group = rbinom(n = n.studies, size = case2.group, prob = rep(<mark>0.6</mark>, length(case2.group)))<br />
noevent.case2.group = case2.group - event.case2.group<br />
<br />
# Run the univariate meta-analysis using <b>metabin()</b>, Meta-analysis of binary outcome data – <br />
# Calculation of fixed and random effects estimates (risk ratio, odds ratio, risk difference or arcsine<br />
# difference) for meta-analyses with binary outcome data. Mantel-Haenszel (MH), <br />
# inverse variance and Peto method are available for pooling.<br />
<br />
# <b>method</b> = A character string indicating which method is to be used for pooling of studies: <br />
# one of "MH", "Inverse", or "Cochran"<br />
# sm = A character string indicating which summary measure ("OR" = odds ratio, "RR" = risk ratio, <br />
# "RD" = risk difference) is to be used for pooling of studies<br />
<br />
# Control vs. Case1, n.e and n.c are numbers in experimental and control groups<br />
meta.ctr_case1 <- metabin(event.e = <b>event.case1.group</b>, n.e = case1.group, event.c = <b>event.ctl.group</b>, <br />
n.c = ctl.group, method = "MH", sm = "OR")<br />
# in this case we use the odds ratio, comparing the odds of death in the experimental and control groups<br />
forest(meta.ctr_case1)<br />
<br />
<center>[[Image:SMHS_Methods8.png|500px]] </center><br />
<br />
# Control vs. Case2<br />
meta.ctr_case2 <- metabin(event.e = event.case2.group, n.e = case2.group, event.c = event.ctl.group, <br />
n.c = ctl.group, method = "MH", sm = "OR")<br />
forest(meta.ctr_case2)<br />
<br />
<center>[[Image:SMHS_Methods9.png|500px]] </center><br />
<br />
# Case1 vs. Case2<br />
meta.case1_case2 <- metabin(event.e = event.case1.group, n.e = case1.group, event.c = event.case2.group, <br />
n.c = case2.group, method = "MH", sm = "OR")<br />
forest(meta.case1_case2)<br />
summary(meta.case1_case2)<br />
<br />
Test of heterogeneity:<br />
Q d.f. p-value<br />
11.99 14 0.6071<br />
<br />
<center>[[Image:SMHS_Methods10.png|500px]] </center><br />
<br />
The <b>forest plot</b> and the ''I''<sup>2</sup> statistic indicate insufficient evidence to reject the null hypothesis of no study heterogeneity (Q = 11.99, df = 14, p = 0.61), so the fixed-effects model is appropriate.<br />
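The ''I''<sup>2</sup> statistic and the p-value can be recovered directly from the heterogeneity test reported above (Q = 11.99 on 14 degrees of freedom):<br />
<br />
```r
# I^2 and p-value from the reported Cochran's Q test
Q <- 11.99; df <- 14
I2 <- max(0, (Q - df) / Q) * 100          # percent of total variation due to heterogeneity
p.value <- pchisq(Q, df, lower.tail = FALSE)
# Q < df here, so I2 = 0%, consistent with no evidence of heterogeneity;
# p.value reproduces the 0.6071 reported by summary()
```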
<br />
==Series of “N of 1” trials==<br />
<br />
This technique combines (a “series of”) n-of-1 trial data to identify HTE. An n-of-1 trial is a repeated crossover trial for a single patient: the patient is randomly assigned to one treatment vs. another for a given time period, then re-randomized to treatment for the next time period, usually over 4-6 time periods. Such trials are most feasible in chronic conditions, where little or no washout period is needed between treatments and treatment effects are identifiable in the short term, such as pain or reliable surrogate markers. Combining data from identical n-of-1 trials across a set of patients enables statistical analyses that control for patient fixed or random effects, covariates, centers, or sequence effects, see <b>Figure</b> below. These combined trials are often analyzed within a Bayesian context using shrinkage estimators that combine the individual and group mean treatment effects into a “posterior” individual mean treatment effect estimate, a form of inverse variance-weighted average of the individual and group effects. Such trials are typically more expensive than standard RCTs on a per-patient basis; however, they require much smaller sample sizes, often fewer than 100 patients (due to the efficient individual-as-own-control design), and produce individual treatment effect estimates that are not possible in a non-crossover design<sup>9</sup>. For each patient, the treatment effect can be re-estimated after each time period, and the trial stopped as soon as the more effective treatment is identified with reasonable statistical certainty.<br />
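The shrinkage idea can be illustrated with a minimal sketch; the effect sizes and variances below are assumed purely for illustration:<br />
<br />
```r
# Posterior (shrunken) individual treatment effect as an inverse variance-weighted
# average of the individual and group mean effects (illustrative values)
ind.effect <- 12; ind.var <- 16    # individual's mean effect and its sampling variance
grp.effect <- 8;  grp.var  <- 4    # group mean effect and between-patient variance
w.ind <- 1 / ind.var
w.grp <- 1 / grp.var
posterior <- (w.ind * ind.effect + w.grp * grp.effect) / (w.ind + w.grp)
posterior   # 8.8: between 8 and 12, pulled toward the more precisely estimated group mean
```
<br />
The less precisely the individual effect is estimated (larger ind.var), the more the posterior estimate is shrunk toward the group mean.<br />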
<br />
====Example====<br />
<br />
A study involving 8 participants collected data across 30 days, with 15 treatment days and 15 control days randomly assigned within each participant<sup>10</sup>. The treatment effect is represented as a binary variable (control day = 0; treatment day = 1), and the outcome variable represents the response to the intervention within each of the 8 participants. The study employed fixed-effects modeling: N &minus; 1 dummy-coded variables represent the N = 8 participants, with the last (i = 8) participant serving as the reference (i.e., absorbed into the model intercept). Each dummy-coded variable thus represents the difference between participant i and the 8th participant, so all other participants' values are relative to those of the reference subject. The overall differences across participants in fixed effects can be evaluated with multiple <b>degree-of-freedom F-tests.</b><br />
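The dummy-coding approach can be sketched with simulated data (variable names and effect sizes below are assumed for illustration); in R, lm() expands a factor into N &minus; 1 dummy variables automatically:<br />
<br />
```r
# Fixed-effects dummy-coding sketch: 8 participants, 30 days each (simulated data)
set.seed(1)
n.subj <- 8; n.days <- 30
sim <- data.frame(ID = factor(rep(1:n.subj, each = n.days)),
                  Tx = rbinom(n.subj * n.days, 1, 0.5))
# outcome = baseline + treatment effect + participant effect + noise (assumed model)
sim$PhyAct <- 40 + 10 * sim$Tx + rep(rnorm(n.subj, 0, 5), each = n.days) +
              rnorm(nrow(sim), 0, 10)
sim$ID <- relevel(sim$ID, ref = "8")      # participant 8 as the reference (intercept)
fit <- lm(PhyAct ~ Tx + ID, data = sim)   # ID expands into 7 dummy-coded variables
anova(fit)                                # multiple-df F-test for participant differences
```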
<br />
<center>[[Image:SMHS_Methods11.png|500px]] </center><br />
<br />
<center><br />
<br />
{| class="wikitable" style="text-align:center; " border="1"<br />
|-<br />
|ID||Day||Tx||SelfEff||SelfEff25||WPSS||SocSuppt||PMss||PMss3||PhyAct<br />
|-<br />
|1||1||1||33||8||0.97||5.00||4.03||1.03||53<br />
|-<br />
|1||2||1||33||8||-0.17||3.87||4.03||1.03||73<br />
|-<br />
|1||3||0||33||8||0.81||4.84||4.03||1.03||23<br />
|-<br />
|1||4||0||33||8||-0.41||3.62||4.03||1.03||36<br />
|-<br />
|...||...||...||...||...||...||...||...||...||...<br />
<br />
|}<br />
</center> Complete data is available in the <b>Appendix.</b><br />
<br />
<br />
<br />
<center>Data Summary<br />
<br />
{| class="wikitable" style="text-align:center; " border="1"<br />
|-<br />
|Intercept||Constant<br />
|-<br />
|Physical Activity||PhyAct<br />
|-<br />
|Intervention||Tx<br />
|-<br />
|WP Social Support||WPSS<br />
|-<br />
|PM Social Support (1-3)||PMss3<br />
|-<br />
|Self Efficacy||SelfEff25<br />
<br />
|}<br />
</center><br />
<br />
rm(list=ls())<br />
Nof1 <-read.table("https://umich.instructure.com/files/330385/download?download_frd=1&verifier=DwJUGSd6t24dvK7uYmzA2aDyzlmsohyaK6P7jK0Q", sep=",", header = TRUE) # 02_Nof1_Data.csv<br />
attach(Nof1)<br />
head(Nof1)<br />
<br />
<center><br />
<br />
{| class="wikitable" style="text-align:center; " border="1"<br />
|-<br />
| ||ID||Day||Tx||SelfEff||SelfEff25||WPSS||SocSuppt||PMss||PMss3||PhyAct<br />
|-<br />
|1||1||1||1||33||8||0.97||5.00||4.03||1.03||53<br />
|-<br />
|2||1||2||1||33||8||-0.17||3.87||4.03||1.03||73<br />
|-<br />
|3||1||3||0||33||8||0.81||4.84||4.03||1.03||23<br />
|-<br />
|4||1||4||0||33||8||-0.41||3.62||4.03||1.03||36<br />
|-<br />
|5||1||5||1||33||8||0.59||4.62||4.03||1.03||21<br />
|-<br />
|6||1||6||1||33||8||-1.16||2.87||4.03||1.03||0<br />
<br />
|}<br />
</center><br />
<br />
df.1 = data.frame(PhyAct, Tx, SelfEff, WPSS, PMss3, SelfEff25, Day, ID)   # include all model variables<br />
<br />
library("lme4")   # required for lmer()<br />
<br />
lm.1 <- lmer(PhyAct ~ Tx + SelfEff + Tx*SelfEff + (1|Day) + (1|ID), data = df.1)<br />
summary(lm.1)<br />
<br />
Linear mixed model fit by REML ['lmerMod']<br />
Formula: PhyAct ~ Tx + SelfEff + Tx * SelfEff + (1 | Day) + (1 | ID)<br />
Data: df.1<br />
<br />
REML criterion at convergence: 8820<br />
<br />
<center> Scaled Residuals<br />
{| class="wikitable" style="text-align:center; " border="1"<br />
|-<br />
| Min||1Q||Median||3Q||Max<br />
|-<br />
|-2.7012||-0.6833||-0.0333||0.6542||3.9612<br />
|}<br />
</center><br />
<br />