Scientific Methods for Health Sciences - Clinical vs. Statistical Significance
Overview
Statistical significance addresses whether the results of a statistical test meet an accepted criterion. That criterion can be arbitrary, and the same statistical test may give different results under different criteria of significance. Statistical significance is usually expressed in terms of a probability, the p value: the probability of obtaining a test statistic result at least as extreme as the one actually observed, assuming the null hypothesis is true. Clinical significance asks whether the difference between the new and old therapy found in a study is large enough to alter practice. This section presents a general introduction to statistical significance, including tests for statistical significance, measures of the significance of tests, the application of statistical tests in clinical settings, and the comparison between clinical and statistical significance.
Motivation
Significance is one of the most commonly used measurements in statistical tests across many fields. However, statistical significance and non-significance are widely misinterpreted by researchers and students alike, and few people know exactly what the p value indicates, even though it in some sense defines statistical significance. So the questions are: how do we define statistical significance? Are there ways to define statistical significance other than the p value? What is missing from the ways we make inferences about clinical vs. statistical significance? This lecture aims to help students develop a thorough understanding of clinical and statistical significance.
Theory
3.1) Statistical significance: an observed effect is statistically significant when the probability that it would have occurred by chance alone is low. Statistical significance is an integral part of statistical hypothesis testing, where it plays a vital role in deciding whether a null hypothesis can be rejected. The criterion is typically p<0.05, chosen to limit the possibility of a Type I error: finding a significant difference when one does not exist. It does not protect us from a Type II error, which is the failure to find a difference when the difference does exist.
- Statistical significance involves important factors like (1) the magnitude of the effect; (2) the sample size; (3) the reliability of the effect (i.e., whether the treatment is equally effective for all participants); (4) the reliability of the measurement instrument.
- Problems with the p value and statistical significance: (1) failure to reject the null hypothesis does not mean we accept the null; (2) in practice, true effects in real life are never exactly zero, and things can be disproved only in pure mathematics, not in real life; (3) it is not logical to assume that effects are zero until disproved; (4) the significance level is arbitrary.
- p value: the probability of obtaining a test statistic result at least as extreme as the one actually observed, assuming the null hypothesis is true. It is used in the context of null hypothesis testing to quantify the statistical significance of evidence. A researcher will often reject the null hypothesis when the p value turns out to be less than a predetermined significance level, say 0.05. If the p value is very small, usually less than or equal to a previously chosen threshold (the significance level), it suggests that the observed data are inconsistent with the assumption that the null hypothesis is true, and thus the hypothesis should be rejected. The smaller the p value, the stronger the evidence against the null hypothesis, because a small p value indicates that the hypothesis under consideration may not adequately explain the observation.
- Definition of p value: $Pr(X \ge x|H_0)$ for a right-tail event; $Pr(X \le x|H_0)$ for a left-tail event; $2\min(Pr(X \ge x|H_0), Pr(X \le x|H_0))$ for a double-tail event.
- The hypothesis $H_0$ is rejected if any of these probabilities is less than or equal to a small, fixed, but arbitrarily predefined threshold α (the level of significance), which depends only on the consensus of the research community in which the investigator works. $\alpha = Pr(reject\ H_0 | H_0\ is\ true) = Pr(p \le \alpha)$.
- Interpretation of p value: p≤0.01 very strong presumption against null hypothesis; 0.01<p≤0.05 strong presumption against null hypothesis; 0.05<p≤0.1 low presumption against null hypothesis; p>0.1 no presumption against the null hypothesis.
- Criticisms of the p value: (1) the p value does not in itself allow reasoning about the probabilities of hypotheses, which requires multiple hypotheses or a range of hypotheses with a prior distribution of likelihoods between them; (2) it refers only to a single hypothesis (the null hypothesis) and does not make reference to or allow conclusions about any other hypothesis, such as the alternative hypothesis; (3) the criterion is based on an arbitrary choice of significance level; (4) the p value is incompatible with the likelihood principle, and it depends on the experimental design, or equivalently on the test statistic in question; (5) it provides only an informal measure of evidence against the null hypothesis.
- Several common misunderstandings about p values: (1) it is not the probability that the null hypothesis is true, nor the probability that the alternative hypothesis is false; it is concerned with neither; (2) it is not the probability that the finding arose merely by chance; (3) it is not the probability of falsely rejecting the null hypothesis; (4) it is not the probability that replicating the experiment would yield the same conclusion; (5) the significance level is not determined by the p value; (6) the p value does not indicate the size or importance of the observed effect.
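The doubled one-tail definition of the p value given above can be illustrated with a small exact-binomial sketch (a minimal illustration; the function name and the fair-coin null hypothesis are choices made for this example, not part of any standard library):

```python
from math import comb

def binom_p_value(n, k, p0=0.5):
    """Two-sided p-value for observing k heads in n flips under H0: P(heads)=p0.
    Uses the doubled one-tail definition:
    p = 2 * min(Pr(X >= k | H0), Pr(X <= k | H0)), capped at 1."""
    pmf = [comb(n, i) * p0**i * (1 - p0)**(n - i) for i in range(n + 1)]
    right = sum(pmf[k:])      # Pr(X >= k | H0), the right-tail probability
    left = sum(pmf[:k + 1])   # Pr(X <= k | H0), the left-tail probability
    return min(1.0, 2 * min(right, left))

# 10 flips, all heads: p = 2 * (1/1024)
print(binom_p_value(10, 10))  # -> 0.001953125
```

Note that the doubling can push the value above 1 for unremarkable outcomes (e.g., exactly 5 heads in 10 flips), which is why the result is capped at 1.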
3.2) Clinical significance: in medicine and psychology, clinical significance is the practical importance of a treatment effect: whether it has a real, genuine, noticeable effect on daily life. It yields information on whether a treatment is effective enough to change a patient's diagnostic label, and in clinical treatment studies it answers the question of whether the treatment is effective enough to return the patient to normal functioning. It is also a consideration when interpreting the results of a psychological assessment of an individual. Frequently there will be a difference of scores that is statistically significant, i.e., unlikely to have occurred purely by chance.
- A clear demonstration of clinical significance would be to take a group of clients who score, say, beyond +2 SDs of the normative group prior to treatment and move them to within 1 SD of the mean of that group. The research implication of this definition is that you want to select people who are clearly disturbed for the clinical outcome study. If the mean of your untreated group is at, say, +1.2 SDs above the mean, the change due to treatment probably is not going to be viewed as clinically significant.
- Clinical significance is defined by the smallest clinically beneficial and harmful values of the effect. These values are usually equal and opposite in sign. Because there is always a leap of faith in applying the results of a study to your patients (who, after all, were not in the study), perhaps a small improvement in the new therapy is not sufficient to cause you to alter your clinical approach. Note that you would almost certainly not alter your approach if the study results were not statistically significant (i.e. could well have been due to chance). But when is the difference between two therapies large enough for you to alter your practice?
- Statistics cannot fully answer this question. It is one of clinical judgment, considering the magnitude of benefit of each treatment, the respective side-effect profiles of the two treatments, their relative costs, your comfort with prescribing a new therapy, the patient's preferences, and so on. But we can provide different ways of illustrating the benefit of treatments, such as the number needed to treat. If a study is very large, its result may be statistically significant (unlikely to be due to chance), and yet the deviation from the null hypothesis may be too small to be of any clinical interest. Conversely, the result may not be statistically significant because the study was so small (or "underpowered"), yet the difference is large and would seem potentially important from a clinical point of view. You would then be wise to do another, perhaps larger, study.
- The smallest clinically beneficial and harmful values help define the probabilities that the true effect is clinically beneficial, trivial, or harmful ($p_{beneficial}$, $p_{trivial}$, $p_{harmful}$), and these probabilities make an effect easier to assess and to publish.
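The number needed to treat mentioned above can be sketched in a couple of lines: it is the reciprocal of the absolute risk reduction. The event rates below are hypothetical values chosen purely for illustration:

```python
def number_needed_to_treat(cer, eer):
    """NNT = 1 / ARR, where ARR = CER - EER is the absolute risk reduction
    (control event rate minus experimental event rate)."""
    arr = cer - eer
    return 1 / arr

# hypothetical rates: 20% of events on the old therapy, 15% on the new one
print(round(number_needed_to_treat(0.20, 0.15)))  # -> 20
```

An NNT of 20 means roughly 20 patients must receive the new therapy to prevent one additional adverse event, a framing that often communicates clinical importance better than a p value.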
Ways to calculate clinical significance:
- Jacobson-Truax: a common method of calculating clinical significance. It involves calculating a Reliable Change Index (RCI), which equals the difference between a participant's post-test and pre-test scores, divided by the standard error of the difference.
- Gulliksen-Lord-Novick: similar to Jacobson-Truax, except that it takes regression to the mean into account. This is done by subtracting the pre-test and post-test scores from a population mean and dividing by the standard deviation of the population.
- Edwards-Nunnally: more stringent alternative to calculate clinical significance compared to Jacobson-Truax method. Reliability scores are used to bring the pre-test scores closer to the mean, and then a confidence interval is developed for this adjusted pre-test score.
- Hageman-Arrindel: involves indices of group change and individual change. The reliability of change indicates whether a patient has improved, stayed the same, or deteriorated. A second index, the clinical significance of change, indicates four categories similar to those used by Jacobson-Truax: deteriorated, not reliably changed, improved but not recovered, and recovered.
- Hierarchical Linear Modeling (HLM): involves growth curve analysis instead of pre-test post-test comparisons, so three data points are needed from each patient, instead of only two data points (pre-test and post-test).
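The Jacobson-Truax RCI described above can be sketched as follows. This is a minimal illustration assuming the common SEM-based form of the standard error of the difference ($SE_{diff} = \sqrt{2}\,SEM$, with $SEM = SD \cdot \sqrt{1 - reliability}$); the scores and reliability are hypothetical:

```python
from math import sqrt

def reliable_change_index(pre, post, sd_pre, reliability):
    """Jacobson-Truax Reliable Change Index: (post - pre) / SEdiff, where
    SEdiff = sqrt(2) * SEM and SEM = sd_pre * sqrt(1 - reliability).
    |RCI| > 1.96 suggests change beyond measurement error (alpha = 0.05)."""
    sem = sd_pre * sqrt(1 - reliability)
    se_diff = sqrt(2) * sem
    return (post - pre) / se_diff

# hypothetical symptom-scale scores: pre=30, post=18, SD=7, reliability=0.88
rci = reliable_change_index(30, 18, 7, 0.88)
print(round(rci, 2))  # -> -3.5 (a negative RCI here means improvement,
                      #    since lower scores indicate fewer symptoms)
```

Since |−3.5| exceeds 1.96, this hypothetical patient's change would be classified as reliable rather than attributable to measurement error.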
3.3) An example illustrating the use of a spreadsheet and the clinical importance of p=0.2.
p value | Value of statistic | Confidence level (%) | Degrees of freedom | Confidence limit (lower) | Confidence limit (upper) | Clinical threshold (positive) | Clinical threshold (negative)
0.03 | 1.5 | 90 | 18 | 0.4 | 2.6 | 1 | -1
0.2 | 2.4 | 90 | 18 | -0.7 | 5.5 | 1 | -1

Chances that the true effect is:
p value | Clinically positive: prob (%), odds | Clinically trivial: prob (%), odds | Clinically negative: prob (%)
0.03 | 78, 3:1 (likely, probable) | 22, 1:3 (unlikely, probably not) | 0
0.2 | 78, 3:1 (likely, probable) | 19, 1:4 (unlikely, probably not) | 4
And when reporting the research, one needs to: show the observed magnitude of the effect; attend to the precision of estimation by showing 90% confidence limits for the true value; show the p value when necessary; attend to clinical, practical, or mechanistic significance by stating the smallest worthwhile value and showing the probabilities that the true effect is beneficial, trivial, and/or harmful; and make a qualitative statement about the clinical or practical significance of the effect with terms like likely and unlikely.
One example would be a clinically trivial, statistically significant, and publishable rare outcome that can arise from a large sample size and is usually misinterpreted as a worthwhile effect: (1) the observed effect of the treatment is 1.1 units (90% likely limits 0.4 to 1.8 units, p=0.007); (2) the chances that the true effect is practically beneficial/trivial/harmful are 1%/99%/0%.
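The clinical-chances probabilities in the table above can be approximated in code. This sketch uses a normal approximation rather than the t-distribution the spreadsheet uses, and the standard error of 1.8 units is back-calculated from the p=0.2 row (statistic 2.4, df 18), so the numbers are illustrative only:

```python
from statistics import NormalDist

def clinical_chances(effect, se, threshold=1.0):
    """Normal-approximation sketch of the 'clinical chances' idea:
    the probability that the true effect exceeds the smallest beneficial
    value (+threshold), falls below the smallest harmful value (-threshold),
    or lies in between (trivial)."""
    dist = NormalDist(mu=effect, sigma=se)
    p_beneficial = 1 - dist.cdf(threshold)
    p_harmful = dist.cdf(-threshold)
    p_trivial = 1 - p_beneficial - p_harmful
    return p_beneficial, p_trivial, p_harmful

# observed effect of 2.4 units, SE ~1.8, smallest worthwhile value of 1 unit
b, t, h = clinical_chances(2.4, 1.8, 1.0)
print(f"beneficial {b:.0%}, trivial {t:.0%}, harmful {h:.0%}")
```

With these inputs the normal approximation reproduces the table's ~78% chance of a clinically positive effect despite the "non-significant" p=0.2, which is the point of the example.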
Applications
4.1) This article (http://archpsyc.jamanetwork.com/article.aspx?articleid=206036), titled Revised Prevalence Estimates of Mental Disorders in the United States, uses responses to questions about life interference, telling a professional about symptoms, or using medication for symptoms to ascertain the prevalence of clinically significant mental disorders in each survey. It produced a revised national prevalence estimate by selecting the lower estimate of the two surveys for each diagnostic category, accounting for comorbidity, and combining categories. It concluded that establishing the clinical significance of disorders in the community is crucial for estimating treatment need, and that more work should be done on defining and operationalizing clinical significance and on characterizing the utility of clinically significant symptoms in determining treatment need, even when some criteria of the disorder are not met.
4.2) This article (http://jama.jamanetwork.com/article.aspx?articleid=187180) evaluates whether the time to completion and the time to publication of randomized phase 2 and phase 3 trials are affected by the statistical significance of their results, and describes the natural history of such trials using a prospective cohort of randomized efficacy trials conducted by two trialist groups from 1986 to 1996. It concluded that, among randomized efficacy trials, there is a time lag in the publication of negative findings that occurs mostly after the completion of trial follow-up.
Software
Problems
- Suppose we roll a pair of dice once and assume a null hypothesis that the dice are fair. The test statistic is the sum of the rolled numbers, and the test is one-tailed. Suppose we observe that both dice show 6, which yields a test statistic of 12. The p-value of this outcome is 1/36, or about 0.028 (12 is the highest test statistic out of the 6×6 = 36 possible outcomes). If the researcher assumed a significance level of 0.05, what would be the conclusion from this experiment? What would be a potential problem with this experiment that could undermine that conclusion?
- Suppose a researcher flips a coin some arbitrary number of times (n) and assumes a null hypothesis that the coin is fair. The test statistic is the total number of heads. Suppose the researcher observes heads on every flip, yielding a test statistic of n and a two-sided p-value of 2/2^n. If the coin was flipped only 5 times, the p-value would be 2/32 = 0.0625, which is not significant at the 0.05 level. But if the coin was flipped 10 times, the p-value would be 2/1024 ≈ 0.002, which is significant at the 0.05 level. What would be the problem here?
- Suppose a researcher flips a coin two times and assumes a null hypothesis that the coin is unfair: it has two heads and no tails. The test statistic is the total number of heads (one-tailed). The researcher observes one head and one tail (HT), yielding a test statistic of 1 and a p-value of 0. In this case the data are inconsistent with the hypothesis: for a two-headed coin, a tail can never come up. The outcome is not merely unlikely under the null hypothesis but impossible, so the null hypothesis can be definitively rejected as false. In practice such experiments almost never occur, since all data that could be observed would be possible under the null hypothesis (albeit unlikely). What if the null hypothesis were instead that the coin came up heads 99% of the time (with the same setup otherwise)?
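The dice p-value in the first problem can be checked by enumerating the null distribution directly, since two fair dice have only 36 equally likely outcomes:

```python
from itertools import product

# Enumerate all 36 equally likely outcomes of two fair dice (the null model)
# and compute the one-tailed p-value Pr(sum >= observed).
outcomes = [a + b for a, b in product(range(1, 7), repeat=2)]
observed = 12  # both dice show 6
p_value = sum(s >= observed for s in outcomes) / len(outcomes)
print(p_value)  # 1/36, about 0.028
```

Changing `observed` to smaller sums shows how quickly the p-value grows: for a sum of 11 it is already 3/36 ≈ 0.083, no longer significant at the 0.05 level.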
References
- SOCR Home page: http://www.socr.umich.edu