Scientific Methods for Health Sciences

The Scientific Methods for Health Sciences EBook is still under active development. When the EBook is complete this banner will be removed.


The Scientific Methods for Health Sciences (SMHS) EBook is designed to support a 4-course training curriculum emphasizing the fundamentals, applications and practice of scientific methods specifically for graduate students in the health sciences.


Follow the instructions on this page to expand, revise or improve the materials in this EBook.

Learning and Instructional Usage

This section describes the means of traversing, searching, discovering and utilizing the SMHS EBook resources in both formal and informal learning settings.


The SMHS EBook is a freely and openly accessible electronic book developed by SOCR and the general health sciences community.

Chapter I: Fundamentals

Exploratory Data Analysis, Plots and Charts

Review of data types, exploratory data analyses and graphical representation of information.

Ubiquitous Variation

There are many ways to quantify variability, which is present in all natural processes.

Parametric Inference

Foundations of parametric (model-based) statistical inference.

Probability Theory

Random variables, stochastic processes, and events are the core concepts necessary to define likelihoods of certain outcomes or results to be observed. We define event manipulations and present the fundamental principles of probability theory including conditional probability, total and Bayesian probability laws, and various combinatorial ideas.

Odds Ratio/Relative Risk

The relative risk, RR (a measure of dependence comparing two probabilities in terms of their ratio), and the odds ratio, OR (the ratio of the odds of an event in two groups, where the odds are the ratio of a probability and its complement), are widely applicable in many healthcare studies.
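
As a minimal sketch, both measures can be computed directly from a 2×2 exposure-by-disease table; the counts below are purely hypothetical.

```python
# Hypothetical 2x2 table: exposure vs. disease (illustrative counts only)
#              disease   no disease
# exposed        a=20        b=80
# unexposed      c=10        d=90
a, b, c, d = 20, 80, 10, 90

p_exposed = a / (a + b)      # risk of disease among the exposed
p_unexposed = c / (c + d)    # risk of disease among the unexposed

# RR: ratio of the two risks
relative_risk = p_exposed / p_unexposed
# OR: ratio of the two odds, where odds = p / (1 - p)
odds_ratio = (p_exposed / (1 - p_exposed)) / (p_unexposed / (1 - p_unexposed))

print(relative_risk)  # 2.0
print(odds_ratio)     # 2.25
```

For rare diseases (small risks) the OR closely approximates the RR, which is why the OR is often reported in case-control studies where the RR is not directly estimable.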

Centrality, Variability and Shape

Three main features of sample data are commonly reported as critical in understanding and interpreting the population, or process, the data represents. These include Center, Spread and Shape. The main measures of centrality are Mean, Median and Mode(s). Common measures of variability include the range, the variance, the standard deviation, and mean absolute deviation. The shape of a (sample or population) distribution is an important characterization of the process and its intrinsic properties.
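
These summary measures can be sketched with Python's standard statistics module on a small illustrative sample:

```python
import statistics

sample = [2, 3, 3, 5, 7, 8, 9]  # illustrative sample data

# Centrality
center_mean = statistics.mean(sample)
center_median = statistics.median(sample)
center_mode = statistics.mode(sample)

# Variability
spread_range = max(sample) - min(sample)
spread_var = statistics.variance(sample)   # sample variance (n - 1 denominator)
spread_sd = statistics.stdev(sample)
# Mean absolute deviation: average distance from the mean
mad = statistics.mean(abs(x - center_mean) for x in sample)
```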

Probability Distributions

Probability distributions are mathematical models for processes that we observe in nature. Although there are different types of distributions, they have common features and properties that make them useful in various scientific applications. This section presents the Bernoulli, Binomial, Multinomial, Geometric, Hypergeometric, Negative binomial, Negative multinomial distribution, Poisson distribution, and Normal distributions, as well as the concept of moment generating function.
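
As a small sketch of two of these models, the probability mass functions of the Binomial and Poisson distributions can be written from their standard formulas, illustrating the classical fact that Poisson(np) approximates Binomial(n, p) when n is large and p is small:

```python
import math

def binomial_pmf(k, n, p):
    """P(X = k) for X ~ Binomial(n, p)."""
    return math.comb(n, k) * p**k * (1 - p)**(n - k)

def poisson_pmf(k, lam):
    """P(X = k) for X ~ Poisson(lam)."""
    return lam**k * math.exp(-lam) / math.factorial(k)

# Poisson approximation to the Binomial: large n, small p, lambda = n*p
n, p = 1000, 0.002
for k in range(5):
    print(k, binomial_pmf(k, n, p), poisson_pmf(k, n * p))
```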

Resampling and Simulation

Resampling is a technique for estimating sample statistics (e.g., medians, percentiles) by using subsets of the available data or by drawing randomly, with replacement, from the data. Simulation is a computational technique for imitating the behavior of a real-world process or system over time, without waiting for the events to occur by chance.
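
One common resampling technique is the bootstrap. The sketch below, using hypothetical data, estimates the sampling variability of the median by repeatedly resampling with replacement:

```python
import random
import statistics

random.seed(42)  # fixed seed for a reproducible illustration

data = [12, 15, 9, 22, 18, 14, 11, 25, 16, 13]  # illustrative sample

def bootstrap_medians(sample, n_boot=2000):
    """Approximate the sampling distribution of the median by resampling."""
    medians = []
    for _ in range(n_boot):
        resample = random.choices(sample, k=len(sample))  # draw with replacement
        medians.append(statistics.median(resample))
    return medians

medians = bootstrap_medians(data)
point_estimate = statistics.median(data)
se_estimate = statistics.stdev(medians)  # bootstrap standard error of the median
```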

Design of Experiments

Design of experiments (DOE) is a technique for systematic and rigorous problem solving that applies data collection principles to ensure the generation of valid, supportable and reproducible conclusions.

Intro to Epidemiology

Epidemiology is the study of the distribution and determinants of disease frequency in human populations. This section presents the basic epidemiology concepts. More advanced epidemiological methodologies are discussed in the next chapter. This section also presents the Positive and Negative Predictive Values (PPV/NPV).

Experiments vs. Observational Studies

Experimental and observational studies have different characteristics and are useful in complementary investigations of association and causality.


Estimation

Estimation is a method of using sample data to approximate the values of specific population parameters of interest, such as the population mean, variability or 97th percentile. Estimated parameters are expected to be interpretable, accurate and optimal, in some form.

Hypothesis Testing

Hypothesis testing is a quantitative decision-making technique for examining the characteristics (e.g., centrality, span) of populations or processes based on observed experimental data. In this section we discuss inference about a mean, mean differences (both small and large samples), a proportion or differences of proportions and differences of variances.
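
As a minimal sketch of inference about a mean, the one-sample t statistic can be computed by hand; the data and the null value below are purely illustrative.

```python
import math
import statistics

# Illustrative data; test H0: mu = 120 against a two-sided alternative
sample = [118, 127, 124, 116, 131, 119, 125, 122]
mu0 = 120

n = len(sample)
xbar = statistics.mean(sample)
s = statistics.stdev(sample)                 # sample standard deviation
t_stat = (xbar - mu0) / (s / math.sqrt(n))   # one-sample t statistic, df = n - 1
# The two-sided 5% critical value for df = 7 is about 2.365,
# so |t_stat| < 2.365 means H0 is not rejected at that level.
```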

Statistical Power, Sensitivity and Specificity

The fundamental concepts of type I (false-positive) and type II (false-negative) errors lead to the important study-specific notions of statistical power, sample size, effect size, sensitivity and specificity.
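
With hypothetical confusion-matrix counts, these quantities reduce to simple ratios; the correspondence between sensitivity and power (1 - β) is the testing-theoretic reading of the same counts.

```python
# Hypothetical diagnostic-test counts (illustrative only)
TP, FN = 90, 10    # diseased subjects: correctly / incorrectly classified
TN, FP = 180, 20   # healthy subjects: correctly / incorrectly classified

sensitivity = TP / (TP + FN)   # true-positive rate = 1 - type II error rate (beta)
specificity = TN / (TN + FP)   # true-negative rate = 1 - type I error rate (alpha)
power = sensitivity            # in hypothesis-testing terms, power = 1 - beta
```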

Data Management

All modern data-driven scientific inquiries demand deep understanding of tabular, ASCII, binary, streaming, and cloud data management, processing and interpretation.

Bias and Precision

Bias and precision are two important and complementary characteristics of estimated parameters that quantify the accuracy and variability of approximated quantities.

Association and Causality

An association is a relationship between two, or more, measured quantities that renders them statistically dependent so that the occurrence of one does affect the probability of the other. A causal relation is a specific type of association between an event (the cause) and a second event (the effect) that is considered to be a consequence of the first event.


Rate of change is a technical indicator describing the rate in which one quantity changes in relation to another quantity.

Clinical vs. Statistical Significance

Statistical significance addresses whether the results of a statistical test meet an accepted quantitative criterion, whereas clinical significance addresses whether the observed difference between two treatments (e.g., a new and an old therapy) found in a study is large enough to alter clinical practice.

Correction for Multiple Testing

Multiple testing refers to analytical protocols involving the testing of several (typically more than two) hypotheses. Multiple-testing studies require correction for the type I error (false-positive) rate, which can be done using Bonferroni's method, Tukey's procedure, family-wise error rate (FWER) control, or false discovery rate (FDR) control.
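
As a minimal sketch with hypothetical p-values, the Bonferroni (FWER) and Benjamini-Hochberg (FDR) corrections can be compared directly; note that BH typically rejects at least as many hypotheses as Bonferroni.

```python
# Illustrative raw p-values from m = 5 hypothesis tests, sorted ascending
p_values = [0.001, 0.008, 0.029, 0.041, 0.27]
alpha = 0.05
m = len(p_values)

# Bonferroni: reject when p <= alpha / m (controls the FWER)
bonferroni_rejected = [p <= alpha / m for p in p_values]

# Benjamini-Hochberg: find the largest k with p_(k) <= (k/m) * alpha
# (controls the FDR); assumes p_values are sorted ascending, as above
k_max = max((k for k in range(1, m + 1) if p_values[k - 1] <= k / m * alpha),
            default=0)
bh_rejected = [i < k_max for i in range(m)]
```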

Chapter II: Applied Inference


Epidemiology

This section expands the Epidemiology introduction from the previous chapter. Here we discuss the number needed to treat (NNT) and various likelihoods related to genetic association studies, including linkage and association, LOD scores and Hardy-Weinberg equilibrium.

Correlation and Regression (ρ and slope inference, 1-2 samples)

Studies of correlations between two, or more, variables and regression modeling are important in many scientific inquiries. The simplest such situation is exploring the association and correlation of bivariate data ($X$ and $Y$).

ROC Curve

The receiver operating characteristic (ROC) curve is a graphical tool for investigating the performance of a binary classifier system as its discrimination threshold varies. We also discuss the concepts of positive and negative predictive values.


Analysis of Variance (ANOVA)

Analysis of Variance (ANOVA) is a statistical method for examining the differences between group means. ANOVA is a generalization of the t-test for more than two groups. It splits the observed variance into components attributed to different sources of variation.

Non-parametric inference

Nonparametric inference involves a class of methods for descriptive and inferential statistics that are not based on parametrized families of probability distributions, which is the basis of the parametric inference we discussed earlier. This section presents the Sign test, Wilcoxon Signed Rank test, Wilcoxon-Mann-Whitney test, the McNemar test, the Kruskal-Wallis test, and the Fligner-Killeen test.
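
The sign test is the simplest of these; a sketch using hypothetical paired differences computes its exact two-sided p-value from the Binomial(n, 1/2) null distribution of the number of positive signs:

```python
import math

# Illustrative paired differences (after - before); H0: median difference = 0
diffs = [2.1, -0.4, 1.3, 0.8, 3.0, -1.2, 0.6, 1.9, 0.2, 1.1]

n_pos = sum(d > 0 for d in diffs)
n = sum(d != 0 for d in diffs)   # any zero differences would be discarded

def binom_tail(k, n):
    """P(X >= k) for X ~ Binomial(n, 0.5), the null distribution of n_pos."""
    return sum(math.comb(n, i) for i in range(k, n + 1)) / 2**n

# Two-sided p-value: double the smaller of the two tail probabilities
p_value = 2 * min(binom_tail(n_pos, n), 1 - binom_tail(n_pos + 1, n))
```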

Instrument Performance Evaluation: Cronbach's α

Cronbach’s alpha (α) is a measure of internal consistency used to estimate the reliability of a cumulative psychometric test.
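
Using the standard formula α = k/(k-1) · (1 - Σ item variances / variance of totals), Cronbach's α can be sketched on a small hypothetical set of questionnaire scores:

```python
import statistics

# Hypothetical scores: 3 items of a questionnaire x 5 respondents
items = [
    [4, 3, 5, 2, 4],   # item 1 scores across respondents
    [3, 3, 4, 2, 5],   # item 2
    [5, 4, 5, 1, 4],   # item 3
]
k = len(items)

item_vars = [statistics.variance(item) for item in items]
totals = [sum(scores) for scores in zip(*items)]   # each respondent's total score
alpha = k / (k - 1) * (1 - sum(item_vars) / statistics.variance(totals))
```

Values of α near 1 indicate that the items covary strongly, i.e., high internal consistency.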

Measurement Reliability and Validity

Measures of validity include: construct validity (the extent to which the operation actually measures what the theory intends it to), content validity (the extent to which the content of the test matches the content associated with the construct), criterion validity (the correlation between the test and a variable representative of the construct), and experimental validity (the validity of the design of experimental research studies). Similarly, there are many alternative strategies to assess instrument reliability (or repeatability) -- test-retest reliability, administering different versions of an assessment tool to the same group of individuals, inter-rater reliability, and internal consistency reliability.

Survival Analysis

Survival analysis is used for analyzing longitudinal data on the occurrence of events (e.g., death, injury, onset of illness, recovery from illness). In this section we discuss data structures, survival/hazard functions, parametric versus semi-parametric regression techniques, and give an introduction to (non-parametric) Kaplan-Meier methods.

Decision Theory

Decision theory helps determine the optimal course of action among a number of alternatives when consequences cannot be forecasted with certainty. There are different types of loss functions and decision principles (e.g., frequentist vs. Bayesian).

CLT/LLNs – limiting results and misconceptions

The Law of Large Numbers (LLN) and the Central Limit Theorem (CLT) are the first and second fundamental laws of probability. The CLT states that, under certain conditions, the arithmetic mean of a sufficiently large number of iterates of independent random variables will be approximately normally distributed. The LLN states that in performing the same experiment a large number of times, the average of the results obtained should be close to the expected value and tends to get closer to the expected value as the number of trials increases.
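
Both laws can be sketched by simulation: the LLN with repeated die rolls (expected value 3.5), and the CLT with means of samples from a skewed exponential distribution, whose sampling distribution concentrates near the population mean with spread roughly σ/√n.

```python
import random
import statistics

random.seed(0)  # fixed seed for a reproducible illustration

# LLN: the mean of many die rolls approaches the expected value 3.5
rolls = [random.randint(1, 6) for _ in range(20000)]
lln_mean = statistics.mean(rolls)

# CLT: means of samples of size 50 from Exponential(1) (mean 1, sd 1)
sample_means = [statistics.mean(random.expovariate(1.0) for _ in range(50))
                for _ in range(2000)]
clt_center = statistics.mean(sample_means)   # near the population mean 1.0
clt_spread = statistics.stdev(sample_means)  # near sigma/sqrt(n) = 1/sqrt(50)
```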

Association Tests

There are alternative methods to measure the association between two quantities (e.g., relative risk, risk ratio, efficacy, prevalence ratio). This section also includes details on Chi-square tests for association and goodness-of-fit, Fisher’s exact test, randomized controlled trials (RCT), and external and internal validity.

Bayesian Inference

Bayes’ rule connects the theories of conditional and compound probability and provides a way to update probability estimates for a hypothesis as additional evidence is observed.
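
A classic sketch of such an update is a diagnostic test: given hypothetical prevalence, sensitivity and specificity, Bayes' rule converts the prior probability of disease into the posterior probability after a positive result.

```python
# Hypothetical diagnostic-test parameters (illustrative only)
prior = 0.01            # prevalence: P(D)
sensitivity = 0.95      # P(+ | D)
specificity = 0.90      # P(- | not D)

# Total probability of a positive result: P(+) = P(+|D)P(D) + P(+|not D)P(not D)
p_positive = sensitivity * prior + (1 - specificity) * (1 - prior)
# Bayes' rule: P(D | +) = P(+ | D) P(D) / P(+)
posterior = sensitivity * prior / p_positive
```

Even with a fairly accurate test, the posterior here stays below 10% because the disease is rare, a point often missed when interpreting screening results.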

PCA/ICA/Factor Analysis

Principal component analysis is a mathematical procedure that transforms a number of possibly correlated variables into a smaller number of uncorrelated variables through a process known as orthogonal transformation. Independent component analysis is a computational tool that separates a multivariate signal into additive subcomponents by assuming that the subcomponents are non-Gaussian signals and are statistically independent from each other. Factor analysis is a statistical method that describes variability among observed correlated variables in terms of a potentially lower number of unobserved variables.

Point/Interval Estimation (CI) – MoM, MLE

Estimation of population parameters is critical in many applications. In statistics, estimation is commonly accomplished in terms of point-estimates or interval-estimates for specific (unknown) population parameters of interest. The method of moments (MOM) and maximum likelihood estimation (MLE) techniques are used frequently in practice. In this section, we also lay the foundations for expectation maximization and Gaussian mixture modeling.
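
As a sketch with hypothetical count data assumed to be Poisson, the method of moments matches the first sample moment to E[X] = λ (which for the Poisson coincides with the MLE), and a normal-approximation interval gives an interval estimate:

```python
import math
import statistics

# Illustrative counts assumed to follow a Poisson(lambda) distribution
counts = [3, 1, 4, 2, 5, 2, 3, 0, 2, 4, 3, 2]

# Method of moments: set sample mean = E[X] = lambda;
# for the Poisson model this is also the maximum likelihood estimate
lam_hat = statistics.mean(counts)

# Approximate 95% confidence interval (normal approximation, Var(X) = lambda)
n = len(counts)
se = math.sqrt(lam_hat / n)
ci = (lam_hat - 1.96 * se, lam_hat + 1.96 * se)
```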

Study/Research Critiques

The scientific rigor of published literature, grant proposals and general reports needs to be assessed and scrutinized to minimize errors in data extraction and meta-analysis. Reporting biases present significant obstacles to the collection of relevant information on the effectiveness of an intervention, the strength of relations between variables, or causal associations.

Common mistakes and misconceptions in using probability and statistics, identifying potential assumption violations, and avoiding them

Chapter III: Linear Modeling

Multiple Linear Regression (MLR)

Multiple Linear Regression (MLR) encapsulates a family of statistical analyses for modeling the relationship between one dependent variable and one or more independent variables. MLR computationally estimates the effect of each independent variable (its coefficient) from the data using least squares fitting.
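
In the single-predictor case the least squares fit has a closed form, slope = Cov(x, y)/Var(x), sketched here on hypothetical data:

```python
import statistics

# Illustrative (x, y) pairs for a simple linear regression y = b0 + b1*x
xs = [1, 2, 3, 4, 5, 6]
ys = [2.1, 3.9, 6.2, 7.8, 10.1, 12.2]

xbar, ybar = statistics.mean(xs), statistics.mean(ys)
# Least squares slope: sum of cross-deviations over sum of squared x-deviations
b1 = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
      / sum((x - xbar) ** 2 for x in xs))
# Least squares intercept: the fitted line passes through (xbar, ybar)
b0 = ybar - b1 * xbar
```

With multiple predictors the same criterion is solved in matrix form, but the single-predictor formulas above convey the fitting principle.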

Generalized Linear Modeling (GLM)

Generalized Linear Modeling (GLM) is a flexible generalization of ordinary linear multivariate regression, which allows for response variables that have error distribution models other than a normal distribution. GLM unifies statistical models like linear regression, logistic regression and Poisson regression.

Analysis of Covariance (ANCOVA)

Analysis of Variance (ANOVA) is a common method applied to analyze the differences between group means. Analysis of Covariance (ANCOVA) blends ANOVA and regression to evaluate whether the population means of a dependent variable are equal across levels of a categorical independent variable while statistically controlling for the effects of other continuous variables.

Multivariate Analysis of Variance (MANOVA)

A generalized form of ANOVA is the multivariate analysis of variance (MANOVA), which is a statistical procedure for comparing multivariate means of several groups.

Multivariate Analysis of Covariance (MANCOVA)

Similar to MANOVA, the multivariate analysis of covariance (MANCOVA) is an extension of ANCOVA designed for cases where there is more than one dependent variable and where control of concomitant continuous independent variables is needed.

Repeated measures Analysis of Variance (rANOVA)

Repeated measures are used in situations when the same objects/units/entities take part in all conditions of an experiment. Because there are multiple measures on the same subject, we have to control for the correlation between them. Repeated measures ANOVA (rANOVA) is the equivalent of the one-way ANOVA, but for related, not independent, groups. It is also referred to as within-subject ANOVA or ANOVA for correlated samples.

Partial Correlation

Partial correlation measures the degree of association between two random variables by measuring variances controlling for certain other factors or variables.

Time Series Analysis

Time series data are sequences of data points measured at successive points in time. Time series analysis is a technique used in a variety of studies involving temporal measurements and tracking metrics.

Fixed, Randomized and Mixed Effect Models

Fixed effect models are statistical models that represent the observed quantities in terms of explanatory variables (covariates) treated as non-random, while random effect models assume that the dataset being analyzed consists of a hierarchy of different populations whose differences relate to that hierarchy. Mixed effect models contain both fixed effects and random effects. In random effect and mixed models, either all or part of the explanatory variables are treated as if they arise from random causes.

Hierarchical Linear Models (HLM)

Hierarchical linear models (also called multilevel models) are statistical models with parameters that vary at more than one level. They are generalizations of linear models and are widely applied in various studies, especially in research designs where data on participants are organized at more than one level.

Multi-Model Inference

Multi-model inference involves selecting a model of the relationship between $Y$ (response) and predictors $X_1, X_2, ..., X_n$ that is simple and effective and retains good predictive power, as measured by the SSE, AIC or BIC.

Mixture Modeling

Mixture modeling is a probabilistic modeling technique for representing the presence of sub-populations within an overall population, without requiring that the observed data identify the sub-population to which each individual observation belongs.


Surveys

Survey methodologies involve data collection using questionnaires designed to improve the number and reliability of the responses. The ultimate goal is to make statistical inferences about the population, which depend strongly on the survey questions asked. Commonly used survey methods include polls, public health surveys, market research surveys, and censuses.

Longitudinal Data

Longitudinal data represent data collected from a population over a given time period where the same subjects are measured at multiple points in time. Longitudinal data analyses are widely used statistical techniques in many health science fields.

Generalized Estimating Equations (GEE) Models

Generalized estimating equations (GEE) provide a method for parameter estimation when fitting generalized linear models with a possibly unknown correlation between outcomes. GEE offers a general approach for analyzing discrete and continuous responses with marginal models and is a popular alternative to maximum likelihood estimation (MLE).

Model Fitting and Model Quality (KS-test)

The Kolmogorov-Smirnov Test (K-S test) is a nonparametric test commonly applied to test for the equality of continuous, one-dimensional probability distributions. This test can be used to compare one sample against a reference probability distribution (one-sample K-S test) or to compare two samples (two-sample K-S test).
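
As a minimal sketch of the one-sample version, the K-S statistic is the largest vertical distance between the empirical CDF of the (hypothetical) sample and the reference CDF, here taken to be the standard normal; both ECDF jump points are checked at each observation.

```python
import math

def normal_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

sample = sorted([-1.2, -0.4, 0.1, 0.3, 0.8, 1.5])  # illustrative data
n = len(sample)

# D = sup_x |ECDF(x) - F(x)|, evaluated just before and after each jump
d_stat = max(max(abs((i + 1) / n - normal_cdf(x)),
                 abs(i / n - normal_cdf(x)))
             for i, x in enumerate(sample))
```

The observed D is then compared against the K-S critical value for the sample size to decide whether the reference distribution is plausible.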

Chapter IV: Special Topics

Data Simulation

This section demonstrates the core principles of simulating multivariate datasets.

Linear Modeling

This section is a review of linear modeling.

Scientific Visualization

This section discusses how and why we should "look" at data.

Methods for Studying Heterogeneity of Treatment Effects, Case-Studies of Comparative Effectiveness Research

This section discusses methods for studying heterogeneity of treatment effects and case-studies of comparative effectiveness research.


This section discusses structural equation modeling and generalized estimated equation modeling. Furthermore, it discusses statistical validation, cross validation, classification, and prediction.

Missing data

Many research studies encounter incomplete (missing) data that require special handling (e.g., preprocessing, statistical analysis, visualization). There are a variety of methods (e.g., multiple imputation) to deal with missing data: detect missingness, impute the data, analyze the completed dataset, and compare the characteristics of the raw and imputed data.

Genotype-Environment-Phenotype associations

Medical imaging

Data Networks

Adaptive Clinical Trials



Causality/Causal Inference, SEM

Classification methods

Time-Series Analysis

In this section we discuss time series analysis, a class of statistical methods for series data that aims to extract meaningful information, trends and a characterization of the process from observed longitudinal data.

Scientific Validation

Geographic Information Systems (GIS)

Rasch measurement model/analysis

MCMC sampling for Bayesian inference

Network Analysis
