{| cellspacing="5" cellpadding="0" style="margin:0em 0em 1em 0em; border:1px solid #1DA0E7; background:#B3DDF4;width:100%"
| '''The Scientific Methods for Health Sciences EBook is still under active development. When the EBook is complete this banner will be removed.'''
|}
  
 
== [[Main_Page | SOCR Wiki]]: Scientific Methods for Health Sciences ==
 
[[Image:SMHS_EBook.png|250px|thumbnail|right| Scientific Methods for Health Sciences EBook]]
 
 
Electronic book (EBook) on Scientific Methods for Health Sciences (coming up ...)
 
  
 
==[[SMHS_Preface| Preface]]==
 
The ''Scientific Methods for Health Sciences (SMHS) EBook'' (ISBN: 978-0-9829949-1-7) is designed to support a [http://www.socr.umich.edu/people/dinov/SMHS_Courses.html 4-course training curriculum] emphasizing the fundamentals, applications and practice of scientific methods specifically for graduate students in the health sciences.
  
 
===[[SMHS_Format| Format]]===
 
  
 
===[[SMHS_copyright | Copyrights]]===
 
The SMHS EBook is a freely and openly accessible electronic book developed by [[SOCR]] and the general health sciences community.
  
 
==Chapter I: Fundamentals==
 
 
===[[SMHS_OR_RR | Odds Ratio/Relative Risk]]===
 
The relative risk (RR, a measure of dependence comparing two probabilities in terms of their ratio) and the odds ratio (OR, the ratio of the odds in two groups, where the odds is the fraction of a probability over its complement) are widely applicable in many healthcare studies.
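The RR and OR can be computed directly from a 2×2 exposure-by-outcome table; a minimal Python sketch (the counts below are made up for illustration):

```python
# Hypothetical 2x2 table: rows = exposure (yes/no), columns = outcome.
a, b = 30, 70   # exposed:   diseased, healthy
c, d = 10, 90   # unexposed: diseased, healthy

risk_exposed = a / (a + b)      # P(disease | exposed)   = 0.30
risk_unexposed = c / (c + d)    # P(disease | unexposed) = 0.10
relative_risk = risk_exposed / risk_unexposed   # ratio of the two risks, ~3.0

# Odds = probability / its complement; OR is the ratio of the two odds, ~3.86.
odds_ratio = (a / b) / (c / d)
```

Note that for rare outcomes (small risks) the OR approximates the RR, but here, with a common outcome, the two measures diverge noticeably.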
===[[SMHS_CenterSpreadShape | Centrality, Variability and Shape]]===

Three main features of sample data are commonly reported as critical in understanding and interpreting the population, or process, that the data represent: center, spread and shape. The main measures of centrality are the mean, median and mode(s). Common measures of variability include the range, the variance, the standard deviation, and the mean absolute deviation. The shape of a (sample or population) distribution is an important characterization of the process and its intrinsic properties.
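These measures of center and spread can be computed with the Python standard library; a short sketch over a small illustrative sample:

```python
# Measures of center and spread for a small made-up sample.
import statistics

sample = [2, 3, 3, 5, 7, 8, 8, 8, 10]

center = {
    "mean": statistics.mean(sample),      # arithmetic average
    "median": statistics.median(sample),  # middle value of the sorted data
    "mode": statistics.mode(sample),      # most frequent value
}
spread = {
    "range": max(sample) - min(sample),
    "variance": statistics.variance(sample),  # sample variance (n-1 denominator)
    "stdev": statistics.stdev(sample),
    # mean absolute deviation around the mean
    "mad": sum(abs(x - statistics.mean(sample)) for x in sample) / len(sample),
}
```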
  
 
===[[SMHS_ProbabilityDistributions | Probability Distributions]]===
 
Probability distributions are mathematical models for processes that we observe in nature. Although there are different types of distributions, they have common features and properties that make them useful in various scientific applications. This section presents the Bernoulli, Binomial, Multinomial, Geometric, Hypergeometric, Negative binomial, Negative multinomial, Poisson, and Normal distributions, as well as the concept of a moment generating function.
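As a concrete example of one such model, the Binomial(n, p) probability mass function can be built from first principles with the standard library; the Bernoulli distribution is its n = 1 special case:

```python
# The Binomial(n, p) probability mass function from first principles.
from math import comb

def binomial_pmf(k: int, n: int, p: float) -> float:
    """P(X = k) for X ~ Binomial(n, p): n trials, success probability p."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

# Bernoulli(p) is Binomial(1, p):
assert binomial_pmf(1, 1, 0.3) == 0.3

# Like any pmf, it sums to 1 over its support:
total = sum(binomial_pmf(k, 10, 0.4) for k in range(11))
```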
  
 
===[[SMHS_ResamplingSimulation | Resampling and Simulation]]===
 
  
 
===[[SMHS_IntroEpi | Intro to Epidemiology]]===
 
Epidemiology is the study of the distribution and determinants of disease frequency in human populations. This section presents the basic epidemiology concepts. More advanced epidemiological methodologies are discussed in [[SMHS_Epidemiology|the next chapter]]. This section also presents the Positive and Negative Predictive Values (PPV/NPV).
  
 
===[[SMHS_ExpObsStudies | Experiments vs. Observational Studies]]===
 
  
 
===[[SMHS_HypothesisTesting | Hypothesis Testing]]===
 
Hypothesis testing is a quantitative decision-making technique for examining the characteristics (e.g., centrality, span) of populations or processes based on observed experimental data. In this section we discuss inference about a mean, mean differences (both small and large samples), a proportion, differences of proportions, and differences of variances.
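One of the simplest such tests, a large-sample two-sided z-test for a single proportion, can be sketched with only the standard library (using the normal CDF via math.erf); the data below are hypothetical:

```python
# Large-sample two-sided z-test for one proportion, H0: p = p0.
from math import erf, sqrt

def norm_cdf(z: float) -> float:
    """Standard normal cumulative distribution function."""
    return 0.5 * (1 + erf(z / sqrt(2)))

def z_test_proportion(successes: int, n: int, p0: float):
    """Return (z statistic, two-sided p-value) for testing H0: p = p0."""
    p_hat = successes / n
    se = sqrt(p0 * (1 - p0) / n)        # standard error under H0
    z = (p_hat - p0) / se
    p_value = 2 * (1 - norm_cdf(abs(z)))
    return z, p_value

# e.g., 58 successes in 100 trials against H0: p = 0.5 gives z = 1.6
z, p = z_test_proportion(58, 100, 0.5)
```

With z = 1.6 the two-sided p-value is about 0.11, so at the conventional 0.05 level this hypothetical sample would not lead to rejecting H0.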
  
 
===[[SMHS_PowerSensitivitySpecificity | Statistical Power, Sensitivity and Specificity]]===
 
 
===[[SMHS_ClinicalStatSignificance | Clinical vs. Statistical Significance]]===
 
Statistical significance addresses the question of whether or not the results of a statistical test meet an accepted quantitative criterion, whereas clinical significance answers the question of whether the observed difference between two treatments (e.g., a new and an old therapy) found in the study is large enough to alter clinical practice.
 
 
 
1) Overview: Statistical significance is related to the question of whether or not the results of a statistical test meet an accepted criterion. The criterion can be arbitrary, and the same statistical test may give different results under different criteria of significance. Usually, statistical significance is expressed in terms of a probability (the p-value, the probability of obtaining a test statistic result at least as extreme as the one that was actually observed, assuming the null hypothesis is true). Clinical significance concerns whether the difference between a new and an old therapy found in a study is large enough to alter clinical practice. This section presents a general introduction to statistical significance, including important concepts of tests for statistical significance and measurements of the significance of tests, as well as the application of statistical tests in clinical settings and the comparison between clinical and statistical significance.
 
 
2) Motivation: Significance is one of the most commonly used measurements in statistical tests across various fields. However, many researchers and students misinterpret statistical significance and non-significance. Few can state the exact definition of the p-value, which in some sense defines statistical significance. So the questions would be: how can we define statistical significance? Are there other ways to define statistical significance besides the p-value? What distinguishes inference about clinical vs. statistical significance? This lecture aims to help students develop a thorough understanding of clinical and statistical significance.
 
 
3) Theory
 
 
3.1) Statistical significance: the (low) probability that an observed effect would have occurred due to chance. It is an integral part of statistical hypothesis testing, where it plays a vital role in deciding whether a null hypothesis can be rejected. The criterion level is typically p<0.05, which is chosen to minimize the possibility of a Type I error, finding a significant difference when one does not exist. It does not protect us from Type II error, which is defined as the failure to find a difference when the difference does exist.
 
*Statistical significance involves important factors like (1) the magnitude of the effect; (2) the sample size; (3) the reliability of the effect (i.e., whether the treatment is equally effective for all participants); (4) the reliability of the measurement instrument.
 
*Problems with the p-value and statistical significance: (1) failure to reject the null hypothesis doesn’t mean we accept the null; (2) in practice, true effects in real life are rarely exactly zero, and things can be disproved only in pure math, not in real life; (3) it’s not logical to assume that effects are zero until disproved; (4) the significance level is arbitrary.
 
*p-value: the probability of obtaining a test statistic result at least as extreme as the one that was actually observed when the null hypothesis is actually true. It is used in the context of null hypothesis testing in order to quantify the idea of statistical significance of evidence. A researcher will often reject the null when the p-value turns out to be less than a predetermined significance level, say 0.05. If the p-value is very small, usually less than or equal to a previously chosen threshold (the significance level), it suggests that the observed data are inconsistent with the assumption that the null hypothesis is true, and thus the hypothesis must be rejected. The smaller the p-value, the larger the significance, because it indicates that the hypothesis under consideration may not adequately explain the observation.
 
*Definition of the p-value: $Pr(X \ge x | H_0)$ for a right-tail event; $Pr(X \le x | H_0)$ for a left-tail event; $2\min(Pr(X \ge x | H_0), Pr(X \le x | H_0))$ for a double-tail event.
 
*The hypothesis $H_0$ is rejected if any of these probabilities is less than or equal to a small, fixed but arbitrarily predefined threshold $\alpha$ (the level of significance), which depends only on the consensus of the research community that the investigator is working in: $\alpha = Pr(\text{reject } H_0 | H_0 \text{ is true}) = Pr(p \le \alpha | H_0)$.
 
*Interpretation of the p-value: $p \le 0.01$, very strong presumption against the null hypothesis; $0.01 < p \le 0.05$, strong presumption against the null hypothesis; $0.05 < p \le 0.1$, low presumption against the null hypothesis; $p > 0.1$, no presumption against the null hypothesis.
 
*Criticism about p value: (1) p value does not in itself allow reasoning about the probabilities of hypotheses, which requires multiple hypotheses or a range of hypotheses with a prior distribution of likelihoods between them; (2) it refers only to a single hypothesis (null hypothesis) and does not make reference to or allow conclusions about any other hypotheses such as alternative hypothesis; (3) the criterion is based on arbitrary choice of level; (4) p value is incompatible with the likelihood principle and the p value depends on the experiment design or equivalently on the test statistic in question; (5) it is an informal measure of evidence against the null hypothesis.
 
*Several common misunderstandings about p-values: (1) it is not the probability that the null hypothesis is true, nor is it the probability that the alternative hypothesis is false; it is not concerned with either of them; (2) it is not the probability that a finding is merely due to chance; (3) it is not the probability of falsely rejecting the null hypothesis; (4) it is not the probability that replicating the experiment would yield the same conclusion; (5) the significance level is not determined by the p-value; (6) the p-value does not indicate the size or importance of the observed effect.
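The tail-event p-value definitions above can be computed exactly for a simple discrete test statistic; a minimal Python sketch for a Binomial(n, 1/2) statistic (e.g., the number of heads in n fair coin flips; all function names here are illustrative):

```python
# Exact right-tail, left-tail and double-tail p-values for a Binomial(n, 1/2)
# test statistic, e.g. the number of heads in n fair coin flips.
from math import comb

def binom_pmf(k, n, p=0.5):
    return comb(n, k) * p**k * (1 - p)**(n - k)

def p_right(x, n):   # Pr(X >= x | H0)
    return sum(binom_pmf(k, n) for k in range(x, n + 1))

def p_left(x, n):    # Pr(X <= x | H0)
    return sum(binom_pmf(k, n) for k in range(0, x + 1))

def p_double(x, n):  # 2 * min of the one-sided p-values, capped at 1
    return min(1.0, 2 * min(p_right(x, n), p_left(x, n)))

# 5 heads in 5 fair-coin flips: one-sided p = 1/32, two-sided p = 2/32.
```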
 
 
3.2) Clinical significance: in medicine and psychology, clinical significance is the practical importance of a treatment effect, i.e., whether it has a genuine, noticeable effect on daily life. It yields information on whether a treatment is effective enough to change a patient’s diagnostic label and, in clinical treatment studies, answers the question of whether the treatment is effective enough to make the patient "normal". It is also a consideration when interpreting the results of a psychological assessment of an individual. Frequently, there will be a difference of scores that is statistically significant, i.e., unlikely to have occurred purely by chance.
 
*A clear demonstration of clinical significance would be to take a group of clients who score, say, beyond +2 SDs from the mean of the normative group prior to treatment and move them to within 1 SD of the mean of that group. The research implication of this definition is that you want to select people who are clearly disturbed to be in the clinical outcome study. If the mean of your untreated group is at, say, +1.2 SDs above the mean, the change due to treatment probably is not going to be viewed as clinically significant.
 
*Clinical significance is defined by the smallest clinically beneficial and harmful values of the effect. These values are usually equal and opposite in sign. Because there is always a leap of faith in applying the results of a study to your patients (who, after all, were not in the study), perhaps a small improvement in the new therapy is not sufficient to cause you to alter your clinical approach. Note that you would almost certainly not alter your approach if the study results were not statistically significant (i.e. could well have been due to chance). But when is the difference between two therapies large enough for you to alter your practice?
 
 
[[File:Value of Effect Statistic.jpg]]
 
 
 
*Statistics cannot fully answer this question. It is one of clinical judgment, considering the magnitude of benefit of each treatment, the respective side-effect profiles of the two treatments, their relative costs, your comfort with prescribing a new therapy, the patient's preferences, and so on. But we can provide different ways of illustrating the benefit of treatments, in terms of the number needed to treat. If a study is very large, its result may be statistically significant (unlikely to be due to chance), and yet the deviation from the null hypothesis may be too small to be of any clinical interest. Conversely, the result may not be statistically significant because the study was so small (or "underpowered"), but the difference is large and would seem potentially important from a clinical point of view. You would then be wise to do another, perhaps larger, study.
 
*The smallest clinically beneficial and harmful values help define the probabilities that the true effect could be clinically beneficial, trivial, or harmful ($p_{beneficial}$, $p_{trivial}$, $p_{harmful}$), and these probabilities make an effect easier to assess and to publish.
 
Ways to calculate clinical significance:
 
*Jacobson-Truax: common method of calculating clinical significance. It involves calculating a Reliability Change Index (RCI). RCI equals the difference between a participant’s pre-test and post-test scores, divided by the standard error of the difference.
 
*Gulliksen-Lord-Novick: similar to Jacobson-Truax, except that it takes into account regression to the mean. It is done by subtracting the pre-test and post-test scores from a population mean and dividing by the standard deviation of the population.
 
*Edwards-Nunnally: more stringent alternative to calculate clinical significance compared to Jacobson-Truax method. Reliability scores are used to bring the pre-test scores closer to the mean, and then a confidence interval is developed for this adjusted pre-test score.
 
*Hageman-Arrindel: involves indices of group change and individual change. The reliability of change indicates whether a patient has improved, stayed the same, or deteriorated. A second index, the clinical significance of change, indicates four categories similar to those used by Jacobson-Truax: deteriorated, not reliably changed, improved but not recovered, and recovered.
 
*Hierarchical Linear Modeling (HLM): involves growth curve analysis instead of pre-test post-test comparisons, so three data points are needed from each patient, instead of only two data points (pre-test and post-test).
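The Jacobson-Truax approach above can be sketched in a few lines; this is a minimal illustration, with hypothetical pre/post scores, pre-test SD, and test-retest reliability (formulations of the RCI denominator vary across the literature):

```python
# Sketch of the Jacobson-Truax Reliable Change Index (RCI).
from math import sqrt

def reliable_change_index(pre: float, post: float, sd_pre: float, r_xx: float) -> float:
    """RCI = (post - pre) / standard error of the difference."""
    se_measurement = sd_pre * sqrt(1 - r_xx)   # standard error of measurement
    se_difference = sqrt(2) * se_measurement   # standard error of the difference
    return (post - pre) / se_difference

# |RCI| > 1.96 is conventionally taken as reliable (not chance) change.
# Hypothetical symptom scores: lower = better, so a negative RCI is improvement.
rci = reliable_change_index(pre=40, post=25, sd_pre=10, r_xx=0.8)
```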
 
 
 
3.3) An example illustrating the use of a spreadsheet and the clinical importance of p=0.2.
 
 
{| class="wikitable" style="text-align:center; width:75%" border="1"
|-
! rowspan="2"| p value !! rowspan="2"| Value of statistic !! rowspan="2"| Confidence level (%) !! rowspan="2"| Degrees of freedom !! colspan="2"| Confidence limits !! colspan="2"| Threshold for clinical chances
|-
! lower !! upper !! positive !! negative
|-
| 0.03 || 1.5 || 90 || 18 || 0.4 || 2.6 || 1 || -1
|-
| 0.2 || 2.4 || 90 || 18 || -0.7 || 5.5 || 1 || -1
|}
 
 
 
{| class="wikitable" style="text-align:center"
|-
! colspan="2"| Clinically positive !! colspan="2"| Clinically trivial !! colspan="2"| Clinically negative
|-
! prob (%) !! odds !! prob (%) !! odds !! prob (%) !! odds
|-
| 78 || 3:1 || 22 || 1:3 || 0 || 1:2.2071
|-
| colspan="2"| Likely, probable || colspan="2"| Unlikely, probably not || colspan="2"| (almost certainly) not
|-
| 78 || 3:1 || 19 || 1:4 || 4 || 1:25
|-
| colspan="2"| Likely, probable || colspan="2"| Unlikely, probably not || colspan="2"| Very unlikely
|}
 
 
When reporting the research, one needs to: show the observed magnitude of the effect; attend to precision of estimation by showing 90% confidence limits of the true value; show the p-value when necessary; attend to clinical, practical or mechanistic significance by stating the smallest worthwhile value and showing the probabilities that the true effect is beneficial, trivial, and/or harmful; and make a qualitative statement about the clinical or practical significance of the effect with terms like ''likely'' and ''unlikely''.
 
 
One example would be a clinically trivial, statistically significant, and publishable rare outcome that can arise from a large sample size and is usually misinterpreted as a worthwhile effect: (1) the observed effect of the treatment is 1.1 units (90% likely limits 0.4 to 1.8 units, p=0.007); (2) the chances that the true effect is practically beneficial/trivial/harmful are 1/99/0%.
 
 
4) Applications
 
 
4.1) This article (http://archpsyc.jamanetwork.com/article.aspx?articleid=206036), titled Revised Prevalence Estimates of Mental Disorders in the United States, used responses to questions on life interference, telling a professional about symptoms, or using medication for symptoms to ascertain the prevalence of clinically significant mental disorders in each survey. It made a revised national prevalence estimate by selecting the lower estimate of the 2 surveys for each diagnostic category, accounting for comorbidity, and combining categories. It concluded that establishing the clinical significance of disorders in the community is crucial for estimating treatment need, and that more work should be done in defining and operationalizing clinical significance, and in characterizing the utility of clinically significant symptoms in determining treatment need even when some criteria of the disorder are not met.
 
 
4.2) This article (http://jama.jamanetwork.com/article.aspx?articleid=187180) aims to evaluate whether the time to completion and the time to publication of randomized phase 2 and phase 3 trials are affected by the statistical significance of the results, and to describe the natural history of such trials, using a prospective cohort of randomized efficacy trials conducted by 2 trialist groups from 1986 to 1996. It concluded that among randomized efficacy trials, there is a time lag in the publication of negative findings that occurs mostly after the completion of the trial follow-up.
 
 
5) Software
 
 
http://graphpad.com/quickcalcs/PValue1.cfm
 
 
http://www.surveysystem.com/sscalc.htm
 
 
http://vassarstats.net/vsclin.html
 
 
6) Problems
 
 
6.1) Suppose we roll a pair of dice once and assume a null hypothesis that the dice are fair. The test statistic is the sum of the rolled numbers, and the test is one-tailed. Suppose we observe that both dice show 6, which yields a test statistic of 12. The p-value of this outcome is about 0.028 (1/36, as this is the highest test statistic out of 6×6 = 36 possible outcomes). If the researcher assumed a significance level of 0.05, what would be the conclusion from this experiment? What would be a potential problem with this experiment and the conclusion you proposed?
 
 
6.2) Suppose a researcher flips a coin some arbitrary number of times ($n$) and assumes a null hypothesis that the coin is fair. The test statistic is the total number of heads. Suppose the researcher observes heads on each flip, yielding a test statistic of $n$ and a p-value of $2/2^n$. If the coin was flipped only 5 times, the p-value would be 2/32 = 0.0625, which is not significant at the 0.05 level. But if the coin was flipped 10 times, the p-value would be 2/1024 ≈ 0.002, which is significant at the 0.05 level. What would be the problem here?
 
 
6.3) Suppose a researcher flips a coin two times and assumes a null hypothesis that the coin is unfair: it has two heads and no tails. The test statistic is the total number of heads (one-tailed). The researcher observes one head and one tail (HT), yielding a test statistic of 1 and a p-value of 0. In this case the data are inconsistent with the hypothesis: for a two-headed coin, a tail can never come up. The outcome is not simply unlikely under the null hypothesis, but in fact impossible, and the null hypothesis can be definitively rejected as false. In practice such experiments almost never occur, as all data that could be observed would be possible under the null hypothesis (albeit unlikely). What if the null hypothesis were instead that the coin came up heads 99% of the time (otherwise the same setup)?
 
 
7) References
 
 
http://mirlyn.lib.umich.edu/Record/004199238
 
 
http://mirlyn.lib.umich.edu/Record/004232056
 
 
http://mirlyn.lib.umich.edu/Record/004133572
 
 
 
Answers:
 
 
6.1) We would deem this result significant and would reject the hypothesis that the dice are fair. In this case, a single roll provides a very weak basis (that is, insufficient data) to draw a meaningful conclusion about the dice. This illustrates the danger of blindly applying p-values without considering the experiment design.
 
 
6.2) In both cases the data suggest that the null hypothesis is false (that is, the coin is not fair somehow), but changing the sample size changes the p-value and significance level. In the first case the sample size is not large enough to allow the null hypothesis to be rejected at 0.05 level of significance (in fact, the p-value will never be below 0.05). This demonstrates that in interpreting p-values, one must also know the sample size, which complicates the analysis.
 
 
6.3) The p-value would instead be approximately 0.02 (0.0199). In this case the null hypothesis could not definitely be ruled out – this outcome is unlikely in the null hypothesis, but not impossible – but the null hypothesis would be rejected at the 0.05 level of significance, and in fact at the 0.02 level, since the outcome is less than 2% likely in the null hypothesis.
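The p-values quoted in problems 6.1-6.3 and their answers can be verified numerically; a short Python sketch computing each exact probability under the corresponding null hypothesis:

```python
# Verifying the p-values from problems 6.1-6.3.
from math import comb

# 6.1: a double six is the single most extreme of 36 equally likely outcomes.
p_dice = 1 / 36                                 # ~0.028

# 6.2: n heads in n fair flips, two-sided p-value = 2 / 2^n.
def p_coin(n):
    return 2 / 2**n
# p_coin(5) = 0.0625 (not significant); p_coin(10) ~ 0.002 (significant).

# 6.3: null says heads 99% of the time; observe at most 1 head in 2 flips.
p_heads = 0.99
p_biased = sum(comb(2, k) * p_heads**k * (1 - p_heads)**(2 - k)
               for k in range(2))               # P(X <= 1) = 0.0199
```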
 
  
 
===[[SMHS_CorrectionMultipleTesting | Correction for Multiple Testing]]===
 
Multiple testing refers to analytical protocols involving the testing of several (typically more than two) hypotheses. Multiple testing studies require correction for the type I error (false-positive) rate, which can be done using Bonferroni's method, Tukey’s procedure, family-wise error rate (FWER) control, or the false discovery rate (FDR).
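Two of these corrections are easy to sketch: the Bonferroni adjustment (controlling the FWER) and the Benjamini-Hochberg step-up procedure (controlling the FDR). The p-values below are made up for illustration:

```python
# Bonferroni (FWER) and Benjamini-Hochberg (FDR) corrections for m tests.
def bonferroni(p_values, alpha=0.05):
    """Reject H0_i when p_i <= alpha / m."""
    m = len(p_values)
    return [p <= alpha / m for p in p_values]

def benjamini_hochberg(p_values, alpha=0.05):
    """Reject the k smallest p-values, where k is the largest rank
    with p_(k) <= (k / m) * alpha."""
    m = len(p_values)
    order = sorted(range(m), key=lambda i: p_values[i])
    k_max = 0
    for rank, i in enumerate(order, start=1):
        if p_values[i] <= rank / m * alpha:
            k_max = rank
    rejected = [False] * m
    for i in order[:k_max]:
        rejected[i] = True
    return rejected

pvals = [0.001, 0.008, 0.019, 0.041, 0.60]
```

On these hypothetical p-values, Bonferroni rejects only the first two, while the less conservative Benjamini-Hochberg procedure also rejects the third.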
 
  
 
==Chapter II: Applied Inference==
 
 
===[[SMHS_Epidemiology| Epidemiology]]===
 
This section expands the [[SMHS_IntroEpi|Epidemiology Introduction]] from the previous chapter. Here we will discuss numbers needed to treat and various likelihoods related to genetic association studies, including linkage and association, LOD scores and Hardy-Weinberg equilibrium.
  
 
===[[SMHS_SLR| Correlation and Regression (ρ and slope inference, 1-2 samples)]]===
 
Studies of correlations between two, or more, variables and regression modeling are important in many scientific inquiries. The simplest such situation is exploring the association and correlation of bivariate data ($X$ and $Y$).
  
 
===[[SMHS_ROC| ROC Curve]]===
 
The receiver operating characteristic (ROC) curve is a graphical tool for investigating the performance of a binary classifier system as its discrimination threshold varies. We also discuss the concepts of positive and negative predictive values.
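The points of an ROC curve can be traced by sweeping the discrimination threshold and recording the resulting true- and false-positive rates; a minimal sketch with hypothetical classifier scores and labels:

```python
# ROC points (FPR, TPR) of a score-based binary classifier at several
# thresholds; scores and labels below are hypothetical.
def roc_points(scores, labels, thresholds):
    """Return (FPR, TPR) pairs, one per threshold (predict 1 when score >= t)."""
    pos = sum(labels)
    neg = len(labels) - pos
    points = []
    for t in thresholds:
        tp = sum(1 for s, y in zip(scores, labels) if s >= t and y == 1)
        fp = sum(1 for s, y in zip(scores, labels) if s >= t and y == 0)
        points.append((fp / neg, tp / pos))   # (1 - specificity, sensitivity)
    return points

scores = [0.9, 0.8, 0.7, 0.4, 0.3, 0.1]
labels = [1,   1,   0,   1,   0,   0]
pts = roc_points(scores, labels, thresholds=[0.05, 0.5, 0.95])
```

The extreme thresholds give the corner points (1, 1) (everything flagged positive) and (0, 0) (nothing flagged positive), with intermediate thresholds tracing the curve between them.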
  
 
===[[SMHS_ANOVA| ANOVA]]===
 
Analysis of Variance (ANOVA) is a statistical method for examining the differences between group means. ANOVA is a generalization of the [[SMHS_HypothesisTesting|t-test]] for more than 2 groups. It splits the observed variance into components attributed to different sources of variation.
  
 
===[[SMHS_NonParamInference| Non-parametric inference]]===
 
Nonparametric inference involves a class of methods for descriptive and inferential statistics that are not based on parametrized families of probability distributions, which is the basis of the [[SMHS_ParamInference|parametric inference we discussed earlier]]. This section presents the Sign test, Wilcoxon Signed Rank test, Wilcoxon-Mann-Whitney test, the McNemar test, the Kruskal-Wallis test, and the Fligner-Killeen test.
  
 
===[[SMHS_Cronbachs| Instrument Performance Evaluation: Cronbach's α]]===
 
Cronbach’s alpha (α) is a measure of internal consistency used to estimate the reliability of a cumulative psychometric test.
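One common formulation, $\alpha = \frac{k}{k-1}\left(1 - \frac{\sum_i \sigma^2_i}{\sigma^2_{total}}\right)$, can be sketched directly (the item scores below are hypothetical, and population variances are used consistently for items and totals):

```python
# Cronbach's alpha for a k-item test:
# alpha = k/(k-1) * (1 - sum of item variances / variance of total scores).
import statistics

def cronbach_alpha(items):
    """items: list of k lists, each holding one item's scores across subjects."""
    k = len(items)
    item_vars = sum(statistics.pvariance(item) for item in items)
    totals = [sum(scores) for scores in zip(*items)]   # per-subject total score
    return k / (k - 1) * (1 - item_vars / statistics.pvariance(totals))

# Three items answered by four subjects (made-up Likert-style scores):
items = [[3, 4, 3, 5], [2, 4, 4, 5], [3, 5, 4, 5]]
alpha = cronbach_alpha(items)   # high alpha: items move together
```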
  
 
===[[SMHS_ReliabilityValidity| Measurement Reliability and Validity]]===
 
Measures of validity include: construct validity (the extent to which the operation actually measures what the theory intends), content validity (the extent to which the content of the test matches the content associated with the construct), criterion validity (the correlation between the test and a variable representative of the construct), and experimental validity (the validity of the design of experimental research studies). Similarly, there are many alternative strategies to assess instrument reliability (or repeatability): test-retest reliability, administering different versions of an assessment tool to the same group of individuals, inter-rater reliability, and internal consistency reliability.
  
 
===[[SMHS_SurvivalAnalysis| Survival Analysis]]===
 
Survival analysis is used for analyzing longitudinal data on the occurrence of events (e.g., death, injury, onset of illness, recovery from illness). In this section we discuss data structure, survival/hazard functions, parametric versus semi-parametric regression techniques and introduction to Kaplan-Meier methods (non-parametric).
  
 
===[[SMHS_DecisionTheory| Decision Theory]]===
 
Decision theory helps determine the optimal course of action among a number of alternatives when consequences cannot be forecasted with certainty. There are different types of loss functions and decision principles (e.g., frequentist vs. Bayesian).
  
 
===[[SMHS_CLT_LLN| CLT/LLNs – limiting results and misconceptions]]===
 
The Law of Large Numbers (LLN) and the Central Limit Theorem (CLT) are the first and second fundamental laws of probability. The CLT states that, under certain conditions, the arithmetic mean of a sufficiently large number of iterates of independent random variables will be approximately normally distributed. The LLN states that in performing the same experiment a large number of times, the average of the results obtained should be close to the expected value, and tends to get closer to the expected value as the number of trials increases.
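The LLN is easy to see in simulation: the running mean of fair-die rolls drifts toward the expected value 3.5 as the number of trials grows (a minimal sketch, seeded for repeatability):

```python
# LLN demonstration: running means of simulated fair-die rolls.
import random

random.seed(42)                                   # reproducible simulation
rolls = [random.randint(1, 6) for _ in range(100_000)]

def running_mean(xs, n):
    """Mean of the first n observations."""
    return sum(xs[:n]) / n

# Running means after 10, 1,000 and 100,000 rolls; expected value is 3.5.
for n in (10, 1_000, 100_000):
    print(n, running_mean(rolls, n))
```

The first few means can wander far from 3.5, but the 100,000-roll mean is very close to it, which is exactly the LLN at work.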
  
 
===[[SMHS_AssociationTests| Association Tests]]===
 
There are alternative methods to measure the association between two quantities (e.g., relative risk, risk ratio, efficacy, prevalence ratio). This section also includes details on Chi-square tests for association and goodness-of-fit, Fisher’s exact test, randomized controlled trials (RCT), and external and internal validity.
  
 
===[[SMHS_BayesianInference| Bayesian Inference]]===
 
Bayes’ rule connects the theories of conditional and compound probability and provides a way to update probability estimates for a hypothesis as additional evidence is observed.
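A classic health-sciences illustration of such updating is computing the probability of disease given a positive diagnostic test; a minimal sketch of Bayes' rule with hypothetical prevalence, sensitivity and specificity:

```python
# Bayes' rule: P(disease | positive test) from prevalence, sensitivity
# and specificity (all values below are hypothetical).
def posterior_positive(prevalence, sensitivity, specificity):
    """P(disease | positive test), i.e. the positive predictive value."""
    p_pos_given_d = sensitivity
    p_pos_given_not_d = 1 - specificity
    # Total probability of a positive test (law of total probability):
    p_pos = p_pos_given_d * prevalence + p_pos_given_not_d * (1 - prevalence)
    return p_pos_given_d * prevalence / p_pos

# A rare condition with a fairly accurate test:
ppv = posterior_positive(prevalence=0.01, sensitivity=0.95, specificity=0.90)
# ppv is only about 0.088: when prevalence is low, most positives are false.
```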
  
 
===[[SMHS_PCA_ICA_FA| PCA/ICA/Factor Analysis]]===
 
Principal component analysis is a mathematical procedure that transforms a number of possibly correlated variables into a smaller number of uncorrelated variables through a process known as orthogonal transformation. Independent component analysis is a computational tool for separating a multivariate signal into additive subcomponents, by assuming that the subcomponents are non-Gaussian signals that are statistically independent of each other. Factor analysis is a statistical method that describes variability among observed, correlated variables in terms of a potentially lower number of unobserved variables.
  
 
===[[SMHS_CIs| Point/Interval Estimation (CI) – MoM, MLE]]===
 
Estimation of population parameters is critical in many applications. In statistics, estimation is commonly accomplished in terms of point-estimates or interval-estimates for specific (unknown) population parameters of interest. The method of moments (MOM) and maximum likelihood estimation (MLE) techniques are used frequently in practice. In this section, we also lay the foundations for expectation maximization and Gaussian mixture modeling.
  
 
===[[SMHS_ResearchCritiques| Study/Research Critiques]]===
 
The scientific rigor in published literature, grant proposals and general reports needs to be assessed and scrutinized to minimize errors in data extraction and meta-analysis. Reporting biases present significant obstacles to collecting relevant information on the effectiveness of an intervention, the strength of relations between variables, or causal associations.
  
 
===[[SMHS_CommonMistakesMisconceptions| Common mistakes and misconceptions in using probability and statistics, identifying potential assumption violations, and avoiding them]]===
 
 
==Chapter III: Linear Modeling==
 
 
===[[SMHS_MLR | Multiple Linear Regression (MLR)]]===
 
Multiple Linear Regression (MLR) encapsulates a family of statistical analyses for modeling the relationship between one dependent variable and one or more independent variables. MLR estimates the effect of each independent variable (its coefficient) from the data using least-squares fitting.
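
The least-squares fit can be sketched in a few lines (illustrative Python/NumPy, with simulated predictors and made-up true coefficients):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 300
X = rng.normal(size=(n, 2))  # two simulated predictors
y = 1.5 + 2.0 * X[:, 0] - 0.5 * X[:, 1] + 0.1 * rng.normal(size=n)

A = np.column_stack([np.ones(n), X])          # design matrix with an intercept
beta, *_ = np.linalg.lstsq(A, y, rcond=None)  # least-squares coefficients
# beta recovers approximately [1.5, 2.0, -0.5]
```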
  
 
===[[SMHS_GLM| Generalized Linear Modeling (GLM)]]===
 
Generalized Linear Modeling (GLM) is a flexible generalization of ordinary linear multivariate regression, which allows for response variables that have error distribution models other than a normal distribution. GLM unifies statistical models like linear regression, logistic regression and Poisson regression.
  
 
===[[SMHS_ANCOVA| Analysis of Covariance (ANCOVA)]]===
 
Analysis of Variance ([[SMHS_ANOVA|ANOVA]]) is a common method for analyzing the differences between group means. Analysis of Covariance (ANCOVA) blends ANOVA and regression to evaluate whether the population means of a dependent variable are equal across levels of a categorical independent variable, while statistically controlling for the effects of other continuous variables.
  
 
===[[SMHS_MANOVA| Multivariate Analysis of Variance (MANOVA)]]===
 
A generalized form of [[SMHS_ANOVA|ANOVA]] is the multivariate analysis of variance (MANOVA), which is a statistical procedure for comparing multivariate means of several groups.
  
 
===[[SMHS_MANCOVA| Multivariate Analysis of Covariance (MANCOVA)]]===
 
Similar to [[SMHS_MANOVA|MANOVA]], the multivariate analysis of covariance (MANCOVA) is an extension of [[SMHS_ANCOVA|ANCOVA]] designed for cases where there is more than one dependent variable and where concomitant continuous independent variables must be controlled for.
  
 
===[[SMHS_rANOVA| Repeated measures Analysis of Variance (rANOVA)]]===
 
Repeated measures are used in situations where the same objects/units/entities take part in all conditions of an experiment. Since there are multiple measures on the same subject, we must account for the correlation between them. Repeated measures ANOVA (rANOVA) is the equivalent of the one-way [[SMHS_ANOVA|ANOVA]] for related, rather than independent, groups. It is also referred to as within-subjects ANOVA or ANOVA for correlated samples.
  
 
===[[SMHS_PartialCorrelation| Partial Correlation]]===
 
Partial correlation measures the degree of association between two random variables after controlling for the effects of one or more additional variables.
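
The first-order partial correlation can be computed directly from the three pairwise Pearson correlations. The sketch below (illustrative Python; the data are invented so that x and y are both driven by a third variable z) shows a strong marginal correlation that disappears, and even reverses, once z is controlled for.

```python
import math

def pearson(u, v):
    n = len(u)
    mu, mv = sum(u) / n, sum(v) / n
    num = sum((a - mu) * (b - mv) for a, b in zip(u, v))
    den = math.sqrt(sum((a - mu) ** 2 for a in u) * sum((b - mv) ** 2 for b in v))
    return num / den

def partial_corr(x, y, z):
    """First-order partial correlation of x and y, controlling for z."""
    rxy, rxz, ryz = pearson(x, y), pearson(x, z), pearson(y, z)
    return (rxy - rxz * ryz) / math.sqrt((1 - rxz ** 2) * (1 - ryz ** 2))

# Invented data: x and y are both driven by z, plus small perturbations
z = [1, 2, 3, 4, 5, 6, 7, 8]
x = [2 * v + e for v, e in zip(z, [0.1, -0.2, 0.0, 0.3, -0.1, 0.2, -0.3, 0.1])]
y = [3 * v + e for v, e in zip(z, [-0.1, 0.2, 0.1, -0.3, 0.2, -0.2, 0.3, -0.1])]

r_raw = pearson(x, y)              # strong marginal correlation, via z
r_partial = partial_corr(x, y, z)  # reverses sign once z is controlled for
```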
  
===[[SMHS_TimeSeries| Time Series Analysis]]===
Time series data are sequences of data points measured at successive points in time. Time series analysis is used in a variety of studies involving temporal measurements and tracked metrics.
  
 
===[[SMHS_FixedRandomMixedModels|Fixed, Randomized and Mixed Effect Models]]===
 
Fixed effect models are statistical models that represent the observed quantities in terms of explanatory variables (covariates) treated as non-random, while random effect models assume that the dataset being analyzed consists of a hierarchy of different populations whose differences relate to that hierarchy. Mixed effect models contain both fixed and random effects. In random effect and mixed models, all or part of the explanatory variables are treated as if they arise from random causes.
  
 
===[[SMHS_HLM| Hierarchical Linear Models (HLM)]]===
 
Hierarchical linear models (also called multilevel models) are statistical models with parameters that vary at more than one level. They generalize linear models and are widely applied, especially in research designs where participant data are organized at more than one level.
  
 
===[[SMHS_MultimodelInference|Multi-Model Inference]]===
 
Multi-model inference involves selecting a model of the relationship between $Y$ (response) and predictors $X_1, X_2, ..., X_n$ that is simple and effective and retains good predictive power, as measured by the SSE, AIC or BIC.
  
 
===[[SMHS_MixtureModeling|Mixture Modeling]]===
 
Mixture modeling is a probabilistic technique for representing sub-populations within an overall population, without requiring that the observed data identify the sub-population to which an individual observation belongs.
  
 
===[[SMHS_Surveys|Surveys]]===
 
Survey methodology involves data collection using questionnaires designed to improve the response rate and the reliability of responses. The ultimate goal is to make statistical inferences about the population, which depend strongly on the survey questions used. Commonly used survey methods include polls, public health surveys, market research surveys and censuses.
  
 
===[[SMHS_LongitudinalData|Longitudinal Data]]===
 
Longitudinal data represent data collected from a population over a given time period where the same subjects are measured at multiple points in time. Longitudinal data analyses are widely used statistical techniques in many health science fields.
  
 
===[[SMHS_GEE| Generalized Estimating Equations (GEE) Models]]===
 
Generalized estimating equations (GEE) are a method for parameter estimation when fitting [[SMHS_GLM|generalized linear models]] with a possibly unknown correlation between outcomes. They provide a general approach for analyzing discrete and continuous responses with marginal models and are a popular alternative to maximum likelihood estimation (MLE).
  
 
===[[SMHS_ModelFitting| Model Fitting and Model Quality (KS-test)]]===
 
The Kolmogorov-Smirnov Test (K-S test) is a nonparametric test commonly applied to test for the equality of continuous, one-dimensional probability distributions. This test can be used to compare one sample against a reference probability distribution (one-sample K-S test) or to compare two samples (two-sample K-S test).
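
The two-sample K-S statistic is simply the largest vertical distance between the two empirical CDFs; a minimal Python sketch (not the EBook's implementation):

```python
import bisect

def ks_statistic(x, y):
    """Two-sample Kolmogorov-Smirnov statistic: the maximum vertical
    distance between the two empirical CDFs."""
    xs, ys = sorted(x), sorted(y)

    def ecdf(sorted_sample, t):
        # fraction of observations <= t
        return bisect.bisect_right(sorted_sample, t) / len(sorted_sample)

    return max(abs(ecdf(xs, t) - ecdf(ys, t)) for t in sorted(set(xs + ys)))
```

Identical samples give D = 0, and completely separated samples give D = 1.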
  
 
==Chapter IV: Special Topics==
 
===[[SMHS_DataSimulation| Data Simulation ]]===
This section demonstrates the core principles of simulating multivariate datasets.

===[[SMHS_LinearModeling| Linear Modeling ]]===
This section is a review of linear modeling.

===[[SMHS_SciVisualization| Scientific Visualization ]]===
This section discusses how and why we should "look" at data.

===[[SMHS_MethodsHeterogeneity| Methods for Studying Heterogeneity of Treatment Effects, Case-Studies of Comparative Effectiveness Research ]]===
This section discusses methods for studying heterogeneity of treatment effects and case-studies of comparative effectiveness research.

===[[SMHS_BigDataBigSci| Big-Data/Big-Science ]]===
This section discusses structural equation modeling and generalized estimating equation modeling, as well as statistical validation, cross-validation, classification and prediction.

===[[SMHS_MissingData|Missing data]]===
Many research studies encounter incomplete (missing) data that require special handling (e.g., preprocessing, statistical analysis, visualization). There are a variety of methods (e.g., multiple imputation) to deal with missing data: detect missingness, impute the data, analyze the completed dataset and compare the characteristics of the raw and imputed data.
 
 
===Genotype-Environment-Phenotype associations===
 
 
===Medical imaging===
 
===Data Networks===

===Adaptive Clinical Trials===

===Databases/registries===

===Meta-analyses===
 
===Causality/Causal Inference, SEM===
 
 
===Classification methods===
 
===[[SMHS_TimeSeriesAnalysis|Time-Series Analysis]]===
In this section we discuss time series analysis, a class of statistical methods for extracting meaningful information, trends and characterizations of a process from observed longitudinal data.
 
 
===Scientific Validation===
 
 
===Geographic Information Systems (GIS)===
 
===Rasch measurement model/analysis===

===MCMC sampling for Bayesian inference===
 
===Network Analysis===
 
  
<hr>

==References==
 
 
 
 
 
 
 
 
 
 
* SOCR Home page: http://www.socr.umich.edu
* [http://www.socr.umich.edu/people/dinov/SMHS_Courses.html Scientific Methods for Health Sciences (SMHS) Course Series]
* [http://predictive.space/ Data Science and Predictive Analytics (DSPA)]
* Dinov, ID. (2018) [http://www.springer.com/us/book/9783319723464 Data Science and Predictive Analytics: Biomedical and Health Applications using R, Springer (ISBN 978-3-319-72346-4)]
  
 
{{translate|pageName=http://wiki.socr.umich.edu/index.php?title=Scientific_Methods_for_Health_Sciences}}
 

Latest revision as of 12:34, 2 January 2020


==Chapter I: Fundamentals==

===Exploratory Data Analysis, Plots and Charts===
Review of data types, exploratory data analyses and graphical representation of information.

===Ubiquitous Variation===
There are many ways to quantify variability, which is present in all natural processes.

===Parametric Inference===
Foundations of parametric (model-based) statistical inference.

===Probability Theory===
Random variables, stochastic processes, and events are the core concepts necessary to define likelihoods of certain outcomes or results to be observed. We define event manipulations and present the fundamental principles of probability theory including conditional probability, total and Bayesian probability laws, and various combinatorial ideas.

===Odds Ratio/Relative Risk===
The relative risk, RR (a measure of dependence comparing two probabilities in terms of their ratio), and the odds ratio, OR (the ratio of the odds of an event in two groups, where the odds are the probability of the event divided by the probability of its complement), are widely applicable in many healthcare studies.
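
For a hypothetical 2×2 exposure-by-outcome table, both measures are one-line computations (illustrative Python; the counts are invented):

```python
# Invented 2x2 table: rows are exposed/unexposed, columns are event/no event
a, b = 10, 90   # exposed:   10 events, 90 non-events
c, d = 5, 95    # unexposed:  5 events, 95 non-events

risk_exposed = a / (a + b)
risk_unexposed = c / (c + d)
relative_risk = risk_exposed / risk_unexposed  # ratio of the two risks
odds_ratio = (a * d) / (b * c)                 # cross-product ratio of the odds
```

With these counts RR = 2.0 and OR ≈ 2.11; the two measures agree closely only when the event is rare.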

===Centrality, Variability and Shape===
Three main features of sample data are commonly reported as critical in understanding and interpreting the population, or process, the data represent: center, spread and shape. The main measures of centrality are the mean, the median and the mode(s). Common measures of variability include the range, the variance, the standard deviation and the mean absolute deviation. The shape of a (sample or population) distribution is an important characterization of the process and its intrinsic properties.

===Probability Distributions===
Probability distributions are mathematical models for processes that we observe in nature. Although there are different types of distributions, they have common features and properties that make them useful in various scientific applications. This section presents the Bernoulli, Binomial, Multinomial, Geometric, Hypergeometric, Negative Binomial, Negative Multinomial, Poisson and Normal distributions, as well as the concept of a moment generating function.
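
As a small check of how such models behave, the Binomial probability mass function can be written directly from its formula (illustrative Python, standard library only):

```python
from math import comb

def binom_pmf(k, n, p):
    """P(X = k) for X ~ Binomial(n, p), directly from the formula."""
    return comb(n, k) * p ** k * (1 - p) ** (n - k)

# Sanity checks: the pmf sums to 1 and the mean equals n * p
total = sum(binom_pmf(k, 10, 0.3) for k in range(11))
mean = sum(k * binom_pmf(k, 10, 0.3) for k in range(11))
```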

===Resampling and Simulation===
Resampling is a technique for estimating sample statistics (e.g., medians, percentiles) by using subsets of the available data or by randomly drawing observations with replacement. Simulation is a computational technique that imitates the behavior of a real-world process or system over time without waiting for it to occur by chance.
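
A percentile-bootstrap confidence interval for the median illustrates the resampling idea (illustrative Python; the data values, the number of resamples and the even sample size are all assumptions of the sketch):

```python
import random

random.seed(7)
data = [4.1, 5.3, 2.8, 6.0, 4.7, 5.5, 3.9, 4.4, 5.1, 4.8]  # hypothetical sample

def bootstrap_median_ci(sample, n_boot=2000, alpha=0.05):
    """Percentile-bootstrap confidence interval for the median (even-sized samples)."""
    n = len(sample)
    medians = []
    for _ in range(n_boot):
        resample = sorted(random.choice(sample) for _ in range(n))
        medians.append((resample[n // 2 - 1] + resample[n // 2]) / 2)
    medians.sort()
    return medians[int(n_boot * alpha / 2)], medians[int(n_boot * (1 - alpha / 2))]

lo, hi = bootstrap_median_ci(data)
```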

===Design of Experiments===
Design of experiments (DOE) is a technique for systematic and rigorous problem solving that applies data collection principles to ensure the generation of valid, supportable and reproducible conclusions.

===Intro to Epidemiology===
Epidemiology is the study of the distribution and determinants of disease frequency in human populations. This section presents the basic epidemiology concepts; more advanced epidemiological methodologies are discussed in the next chapter. This section also presents the positive and negative predictive values (PPV/NPV).

===Experiments vs. Observational Studies===
Experimental and observational studies have different characteristics and are useful in complementary investigations of association and causality.

===Estimation===
Estimation is a method of using sample data to approximate the values of specific population parameters of interest, like the population mean, variability or 97th percentile. Estimated parameters are expected to be interpretable, accurate and optimal, in some form.

===Hypothesis Testing===
Hypothesis testing is a quantitative decision-making technique for examining the characteristics (e.g., centrality, span) of populations or processes based on observed experimental data. In this section we discuss inference about a mean, mean differences (both small and large samples), a proportion or differences of proportions, and differences of variances.
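
As a minimal example, a two-sided one-sample z-test (population standard deviation assumed known, which is rarely true in practice) can be computed with the standard library alone; the numbers below are hypothetical:

```python
import math

def one_sample_z_test(sample_mean, mu0, sigma, n):
    """Two-sided z-test for a mean, population sd assumed known (illustrative)."""
    z = (sample_mean - mu0) / (sigma / math.sqrt(n))
    phi = 0.5 * (1 + math.erf(abs(z) / math.sqrt(2)))  # standard normal CDF
    return z, 2 * (1 - phi)

# Hypothetical numbers: sample of n=100, observed mean 103 vs. null mean 100
z, p = one_sample_z_test(sample_mean=103.0, mu0=100.0, sigma=15.0, n=100)
```

Here z = 2.0 and p ≈ 0.046, so the null hypothesis would be rejected at the conventional α = 0.05 level.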

===Statistical Power, Sensitivity and Specificity===
The fundamental concepts of type I (false-positive) and type II (false-negative) errors lead to the important study-specific notions of statistical power, sample size, effect size, sensitivity and specificity.

===Data Management===
All modern data-driven scientific inquiries demand deep understanding of tabular, ASCII, binary, streaming, and cloud data management, processing and interpretation.

===Bias and Precision===
Bias and precision are two important and complementary characteristics of estimated parameters that quantify the accuracy and variability of approximated quantities.

===Association and Causality===
An association is a relationship between two, or more, measured quantities that renders them statistically dependent, so that the occurrence of one does affect the probability of the other. A causal relation is a specific type of association between an event (the cause) and a second event (the effect) that is considered to be a consequence of the first event.

===Rate-of-change===
Rate of change is a technical indicator describing the rate at which one quantity changes in relation to another quantity.

===Clinical vs. Statistical Significance===
Statistical significance addresses the question of whether or not the result of a statistical test meets an accepted quantitative criterion, whereas clinical significance answers the question of whether the observed difference between two treatments (e.g., a new and an old therapy) found in the study is large enough to alter clinical practice.

===Correction for Multiple Testing===
Multiple testing refers to analytical protocols involving testing of several (typically more than two) hypotheses. Multiple testing studies require correction of the type I error (false-positive) rate, which can be done using Bonferroni's method, Tukey's procedure, family-wise error rate (FWER) control, or the false discovery rate (FDR).
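
Two of the corrections mentioned above are easy to sketch (illustrative Python; the raw p-values are invented):

```python
def bonferroni(pvals):
    """Bonferroni-adjusted p-values (family-wise error rate control)."""
    m = len(pvals)
    return [min(1.0, p * m) for p in pvals]

def benjamini_hochberg(pvals):
    """Benjamini-Hochberg (false discovery rate) adjusted p-values."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    adjusted = [0.0] * m
    running_min = 1.0
    for rank in range(m, 0, -1):          # step down from the largest p-value
        i = order[rank - 1]
        running_min = min(running_min, pvals[i] * m / rank)
        adjusted[i] = running_min
    return adjusted

raw = [0.001, 0.008, 0.039, 0.041, 0.20]  # invented raw p-values
bonf = bonferroni(raw)
bh = benjamini_hochberg(raw)
```

Bonferroni is the more conservative of the two; the Benjamini-Hochberg adjusted values are never larger than the Bonferroni ones.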

==Chapter II: Applied Inference==

===Epidemiology===
This section expands the epidemiology introduction from the previous chapter. Here we discuss numbers needed to treat and various likelihoods related to genetic association studies, including linkage and association, LOD scores and Hardy-Weinberg equilibrium.

===Correlation and Regression (ρ and slope inference, 1-2 samples)===
Studies of correlations between two, or more, variables and regression modeling are important in many scientific inquiries. The simplest such situation is exploring the association and correlation of bivariate data ($X$ and $Y$).

===ROC Curve===
The receiver operating characteristic (ROC) curve is a graphical tool for investigating the performance of a binary classifier system as its discrimination threshold varies. We also discuss the concepts of positive and negative predictive values.
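
The area under the ROC curve equals the probability that a randomly chosen positive case receives a higher score than a randomly chosen negative case (the Mann-Whitney formulation), which gives a very compact implementation (illustrative Python; the scores and labels are invented):

```python
def auc(scores, labels):
    """Area under the ROC curve, computed as the probability that a random
    positive case outscores a random negative case (ties count 1/2)."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Invented classifier outputs for six cases
labels = [0, 0, 1, 0, 1, 1]
scores = [0.1, 0.4, 0.35, 0.8, 0.65, 0.9]
roc_auc = auc(scores, labels)
```

A perfect classifier gives AUC = 1 and a coin flip gives AUC = 0.5.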

===ANOVA===
Analysis of Variance (ANOVA) is a statistical method for examining the differences between group means. ANOVA is a generalization of the t-test for more than two groups. It splits the observed variance into components attributed to different sources of variation.

===Non-parametric inference===
Nonparametric inference involves a class of methods for descriptive and inferential statistics that are not based on parametrized families of probability distributions, which is the basis of the parametric inference we discussed earlier. This section presents the Sign test, the Wilcoxon Signed-Rank test, the Wilcoxon-Mann-Whitney test, the McNemar test, the Kruskal-Wallis test, and the Fligner-Killeen test.

===Instrument Performance Evaluation: Cronbach's α===
Cronbach's alpha (α) is a measure of internal consistency used to estimate the reliability of a cumulative psychometric test.
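
Cronbach's α is computed from the item variances and the variance of the total score, $\alpha = \frac{k}{k-1}\left(1 - \frac{\sum_i \sigma_i^2}{\sigma_{total}^2}\right)$; a small sketch with invented questionnaire data (illustrative Python):

```python
from statistics import pvariance

def cronbach_alpha(items):
    """Cronbach's alpha from item-score columns (one list per item,
    same respondents in the same order)."""
    k = len(items)
    totals = [sum(scores) for scores in zip(*items)]  # per-respondent total score
    return k / (k - 1) * (1 - sum(pvariance(i) for i in items) / pvariance(totals))

# Hypothetical 3-item questionnaire answered by 6 respondents
item1 = [3, 4, 5, 2, 4, 5]
item2 = [3, 5, 5, 2, 3, 5]
item3 = [2, 4, 4, 1, 3, 4]
alpha = cronbach_alpha([item1, item2, item3])
```

Highly consistent items push α toward 1; perfectly duplicated items give exactly 1.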

===Measurement Reliability and Validity===
Measures of validity include: construct validity (the extent to which the operation actually measures what the theory intends it to), content validity (the extent to which the content of the test matches the content associated with the construct), criterion validity (the correlation between the test and a variable representative of the construct), and experimental validity (the validity of the design of experimental research studies). Similarly, there are many alternative strategies to assess instrument reliability (or repeatability) -- test-retest reliability, administering different versions of an assessment tool to the same group of individuals, inter-rater reliability, and internal consistency reliability.

===Survival Analysis===
Survival analysis is used for analyzing longitudinal data on the occurrence of events (e.g., death, injury, onset of illness, recovery from illness). In this section we discuss data structure, survival/hazard functions, parametric versus semi-parametric regression techniques, and an introduction to the (non-parametric) Kaplan-Meier method.

===Decision Theory===
Decision theory helps determine the optimal course of action among a number of alternatives when consequences cannot be forecast with certainty. There are different types of loss functions and decision principles (e.g., frequentist vs. Bayesian).

===CLT/LLNs – limiting results and misconceptions===
The Law of Large Numbers (LLN) and the Central Limit Theorem (CLT) are the first and second fundamental laws of probability. The CLT states that the arithmetic mean of a sufficiently large number of independent random variables will, under certain conditions, be approximately normally distributed. The LLN states that in performing the same experiment a large number of times, the average of the results obtained should be close to the expected value, and tends to get closer to the expected value as the number of trials increases.
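
Both laws are easy to see by simulation (illustrative Python; the die rolls, sample sizes and seed are arbitrary choices):

```python
import random
import statistics

random.seed(0)

# LLN: the average of many fair-die rolls approaches the expected value 3.5
rolls = [random.randint(1, 6) for _ in range(20000)]
running_mean = sum(rolls) / len(rolls)

# CLT: means of many small samples cluster around 3.5, roughly normally,
# with spread close to sigma / sqrt(30)
sample_means = [statistics.mean(random.randint(1, 6) for _ in range(30))
                for _ in range(2000)]
spread = statistics.stdev(sample_means)
```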

===Association Tests===
There are alternative methods to measure the association between two quantities (e.g., relative risk, risk ratio, efficacy, prevalence ratio). This section also includes details on Chi-square tests for association and goodness-of-fit, Fisher's exact test, randomized controlled trials (RCT), and external and internal validity.
