==[[SMHS| Scientific Methods for Health Sciences]] - Instrument Performance Evaluation: Cronbach's α ==
 
===Overview:===
Cronbach’s alpha ($\alpha$) is a coefficient of internal consistency that is commonly used as an estimate of the reliability of a psychometric test. Internal consistency is typically a measure based on the correlations between different items on the same test, and it reflects whether several items that propose to measure the same general construct produce similar scores. Cronbach’s alpha is widely used in the social sciences, nursing, business and other disciplines. Here we present a general introduction to Cronbach’s alpha: how it is calculated, how to apply it in research, and some common problems encountered when using it.
  
===Motivation:===
We have discussed internal and external consistency and their importance in research studies. How do we measure internal consistency? For example, suppose we are interested in measuring the extent of handicap in patients suffering from a certain disease. The dataset contains 10 records measuring the degree of difficulty experienced in carrying out daily activities. Each item is scored from 1 (no difficulty) to 4 (can’t do). When these data are used to form a scale, they need to have internal consistency: all items should measure the same thing, so they should be correlated with one another. Cronbach’s alpha generally increases when the correlations between items increase.
===Theory===

====Cronbach’s Alpha====
Cronbach’s Alpha is a measure of internal consistency, or reliability, of a psychometric instrument; it measures how well a set of items captures a single, one-dimensional latent aspect of individuals.
*Suppose we measure a quantity $X$, which is a sum of $K$ components: $X=Y_{1}+ Y_{2}+\cdots+Y_{K}$. Then Cronbach’s alpha is defined as $\alpha =\frac{K}{K-1}$  $\left( 1-\frac{\sum_{i=1}^{K}\sigma^{2}_{{Y}_{i}}} {\sigma_{X}^{2}}\right)$, where $\sigma_{X}^{2}$ is the variance of the observed total test scores, and $\sigma^{2}_{{Y}_{i}}$ is the variance of component $i$ for the current sample.
: If items are scored from 0 to 1, then $\alpha =\frac{K}{K-1}$ $\left( 1-\frac{\sum_{i=1}^{K}P_{i}Q_{i}} {\sigma_{X}^{2}} \right)$, where $P_{i}$ is the proportion scoring 1 on item $i$ and $Q_{i}=1-P_{i}$. Alternatively, Cronbach’s alpha can be defined as $\alpha =\frac{K\bar c}{\bar v +(K-1) \bar c}$, where $K$ is as above, $\bar v$ is the average variance of the components, and $\bar c$ is the average of all covariances between the components across the current sample of persons.
*The standardized Cronbach’s alpha is defined as $\alpha_{standardized}=\frac{K\bar r}  {1+(K-1)\bar r }$, where $\bar r$ is the mean of the $\frac {K(K-1)}{2}$ non-redundant correlation coefficients (i.e., the mean of an upper-triangular, or lower-triangular, correlation matrix).
*The theoretical value of alpha varies from 0 to 1, since it can be expressed as a ratio of two variances: $\rho_{XX}=\frac{\sigma_{T}^{2}} {\sigma_{X}^{2}}$, i.e., the reliability of test scores is the ratio of the true-score variance to the total-score variance.
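The first and the covariance-based formulations above are algebraically equivalent, because $\sigma_X^2 = K\bar v + K(K-1)\bar c$. The following Python sketch (with made-up item scores, not data from this article) verifies this numerically:

```python
import numpy as np

# Hypothetical item-response matrix: 5 subjects (rows) x K=3 items (columns).
Y = np.array([[3, 4, 3],
              [2, 2, 3],
              [4, 4, 4],
              [1, 2, 1],
              [3, 3, 4]], dtype=float)
n, K = Y.shape
X = Y.sum(axis=1)                         # total score per subject

# Form 1: alpha = K/(K-1) * (1 - sum_i var(Y_i) / var(X))
alpha1 = K/(K - 1) * (1 - Y.var(axis=0, ddof=1).sum() / X.var(ddof=1))

# Form 2: alpha = K*c_bar / (v_bar + (K-1)*c_bar), from the covariance matrix
C = np.cov(Y, rowvar=False)               # K x K sample covariance matrix
v_bar = np.diag(C).mean()                 # average item variance
c_bar = C[~np.eye(K, dtype=bool)].mean()  # average inter-item covariance
alpha2 = K*c_bar / (v_bar + (K - 1)*c_bar)

print(alpha1, alpha2)  # the two forms agree exactly
```

Both forms print the same value because the sample variance of the total, $\sigma_X^2$, decomposes into the sum of all entries of the item covariance matrix.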
====Internal consistency====
Internal consistency is a measure of whether several items that propose to measure the same general construct produce similar scores. It is usually measured with Cronbach’s alpha, which is calculated from the pairwise correlations between items. Internal consistency can take values from negative infinity to 1. It is negative when there is greater within-subject variability than between-subject variability; only positive values of Cronbach’s alpha make sense. Cronbach’s alpha will generally increase as the inter-correlations among the tested items increase.
<center>
{| class="wikitable" style="text-align:center; width:35%" border="1"
|-
!Cronbach's alpha!! Internal consistency
|-
| $\alpha$ ≥ 0.9|| Excellent (High-Stakes testing)
|-
|0.7 ≤ $\alpha$ < 0.9|| Good (Low-Stakes testing)
|-
|0.6 ≤ $\alpha$ < 0.7|| Acceptable
|-
|0.5 ≤ $\alpha$ < 0.6|| Poor
|-
|$\alpha$ < 0.5 ||Unacceptable
|}
</center>
====Other Measures====
* '''Intra-class correlation:''' The intra-class correlation coefficient (ICC) assesses the consistency, or reproducibility, of quantitative measurements made by different observers measuring the same quantity. Broadly speaking, the ICC is defined as the ratio of between-cluster variance to total variance:
$$ICC = \frac{\text{Variance due to rated subjects (patients)}}{\text{(Variance due to subjects) + (Variance due to judges) + (Residual variance)}}.$$
* Example: Suppose 4 nurses rate 6 patients on a 10-point depression scale:

<center>
{| class="wikitable" style="text-align:center; width:75%" border="1"
|-
!PatientID!!NurseRater1!!NurseRater2!!NurseRater3!!NurseRater4
|-
|1||9||2||5||8
|-
|2||6||1||3||2
|-
|3||8||4||6||8
|-
|4||7||1||2||6
|-
|5||10||5||6||9
|-
|6||6||2||4||7
|}</center>
These data can also be presented as a data frame in long format:
<center>
{| class="wikitable" style="text-align:center; width:75%" border="1"
|-
!PatientID!!Rating!!Nurse
|-
|1||9||1
|-
|2||6||1
|-
|3||8||1
|-
|4||7||1
|-
|5||10||1
|-
|6||6||1
|-
|7||2||2
|-
|8||1||2
|-
|9||4||2
|-
|10||1||2
|-
|11||5||2
|-
|12||2||2
|-
|13||5||3
|-
|14||3||3
|-
|15||6||3
|-
|16||2||3
|-
|17||6||3
|-
|18||4||3
|-
|19||8||4
|-
|20||2||4
|-
|21||8||4
|-
|22||6||4
|-
|23||9||4
|-
|24||7||4
|}
</center>
install.packages("ICC")
library("ICC")
# save the data in the table above in a local file
dataset <- read.csv('C:\\Users\\Desktop\\Nurse_data.csv', header = TRUE)
# remove the first column (Patient ID number)
dataset <- dataset[,-1]
attach(dataset)
dataset

Nest("p", w=0.14, x=Rating, y=Nurse, data=dataset)
icc <- ICCest(Rating, Nurse, dataset)
icc$UpperCI - icc$LowerCI  # confidence interval width
icc
ICC: -0.4804401
95% CI(ICC): (-0.6560437 : -0.03456346)

Cronbach’s alpha equals the stepped-up intra-class correlation coefficient (commonly used in observational studies) if and only if the value of the item variance component equals zero. If this variance component is negative, alpha will underestimate the stepped-up intra-class correlation coefficient; if it is positive, alpha will overestimate it.
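To make the ICC definition concrete, here is a Python sketch that computes a one-way ICC for the nurse-rating table, treating patients as the clusters (the ratio of between-patient variance to total variance). Note this is an illustration of the definition, and groups the data differently than the R call above, which returned a negative estimate:

```python
import numpy as np

# Ratings from the table: 6 patients (rows) rated by 4 nurses (columns).
ratings = np.array([[9, 2, 5, 8],
                    [6, 1, 3, 2],
                    [8, 4, 6, 8],
                    [7, 1, 2, 6],
                    [10, 5, 6, 9],
                    [6, 2, 4, 7]], dtype=float)
n, k = ratings.shape              # n = 6 patients, k = 4 raters

grand = ratings.mean()
patient_means = ratings.mean(axis=1)

# One-way ANOVA mean squares: between patients and within patients
ms_between = k * ((patient_means - grand)**2).sum() / (n - 1)
ms_within = ((ratings - patient_means[:, None])**2).sum() / (n*(k - 1))

# One-way ICC: share of total variance attributable to patient differences
icc = (ms_between - ms_within) / (ms_between + (k - 1)*ms_within)
print(round(icc, 4))              # ~0.1657 for this table
```

The low value reflects the large disagreement between raters visible in the table (e.g., nurse 1 consistently rates higher than nurse 2).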
====Generalizability theory====
Cronbach’s alpha is an unbiased estimate of the generalizability. It can be viewed as a measure of how well the sum score on the selected items captures the expected score in the entire domain, even if that domain is heterogeneous.
====Problems with Cronbach’s alpha====
# It depends not only on the magnitude of the correlations among items, but also on the number of items in the scale. Hence, a scale can be made to look more homogeneous simply by increasing the number of items, even though the average correlation remains the same.
# If two scales, each measuring a distinct aspect, are combined to form a long scale, alpha will probably be high, even though the merged scale is obviously tapping two different attributes.
# If alpha is too high, it may suggest a high level of item redundancy.
====Split-Half Reliability====
In split-half reliability assessment, the test is split in half (e.g., odd/even items), creating two “equivalent forms”. The two “forms” are correlated with each other, and the correlation coefficient is adjusted to reflect the entire test length using the Spearman-Brown prophecy formula. Suppose $Corr(Even,Odd)=r$ is the raw correlation between the even and odd items. Then the adjusted correlation is $r’ = \frac{n r}{(n-1)\, (r+1)},$ where $n$ is the number of parts into which the test is split (in this case $n=2$, for which the expression coincides with the standard Spearman-Brown formula $\frac{2r}{1+r}$).
Example:

<center>
{| class="wikitable" style="text-align:center; width:35%" border="1"
|-
!Index!! Q1!! Q2!! Q3!! Q4!! Q5!! Q6!! Odd!! Even
|-
|1 ||1|| 0|| 0|| 1|| 1|| 0|| 2|| 1
|-
|2|| 1|| 1 ||0 ||1|| 0 ||1|| 1|| 3
|-
|3|| 1|| 1|| 1|| 1|| 1|| 0|| 3|| 2
|-
|4 ||1 ||0 ||0 ||0 ||1 ||0|| 2|| 0
|-
|5|| 1|| 1|| 1|| 1|| 0|| 0|| 2|| 2
|-
|6 ||0|| 0 ||0 ||0 ||1 ||0 ||1|| 0
|-
| colspan=6 rowspan=4| ||mean|| 1.833333333|| 1.33333333
|-
| SD|| 0.752772653|| 1.21106014
|-
| corr(Even,Odd)|| 0.073127242 || rowspan=2|
|-
| AdjCorr(Even,Odd)=$\frac{nr}{(n-1)(r+1)}$|| 0.136288111
|}
</center>
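The calculations in this table can be reproduced with a short Python sketch (using the Pearson correlation of the two half-test scores):

```python
import numpy as np

# Item responses from the table: 6 subjects x 6 dichotomous items Q1..Q6
Q = np.array([[1, 0, 0, 1, 1, 0],
              [1, 1, 0, 1, 0, 1],
              [1, 1, 1, 1, 1, 0],
              [1, 0, 0, 0, 1, 0],
              [1, 1, 1, 1, 0, 0],
              [0, 0, 0, 0, 1, 0]], dtype=float)

odd = Q[:, 0::2].sum(axis=1)        # Q1 + Q3 + Q5 half-test scores
even = Q[:, 1::2].sum(axis=1)       # Q2 + Q4 + Q6 half-test scores

r = np.corrcoef(odd, even)[0, 1]    # raw half-test correlation
n = 2                               # two halves
r_adj = n*r / ((n - 1)*(r + 1))     # length-adjusted, as in the table
print(round(r, 6), round(r_adj, 6)) # ~0.073127 and ~0.136288
```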
====KR-20====
The [http://en.wikipedia.org/wiki/Kuder%E2%80%93Richardson_Formula_20 Kuder–Richardson Formula 20 (KR-20)] is an internal reliability estimate which simulates calculating the split-half reliability for every possible combination of items. For a test with ''K'' test items indexed ''i''=1 to ''K'':
$$KR_{20} = \frac{K}{K-1} \left( 1 - \frac{\sum_{i=1}^K p_i q_i}{\sigma^2_X} \right),$$
where $p_i$ is the proportion of ''correct'' responses to test item ''i'', $q_i$ is the proportion of ''incorrect'' responses to test item ''i'' (thus $p_i + q_i= 1$), and the variance in the denominator is
$\sigma^2_X = \frac{\sum_{i=1}^n (X_i-\bar{X})^2}{n-1},$ where $n$ is the total sample size.

Cronbach's α and KR-20 are closely related: Cronbach's α is a generalization of KR-20 that can handle both dichotomous and continuous variables, whereas KR-20 applies only to dichotomously scored items. In particular, KR-20 can't be used when multiple-choice questions involve partial credit; such cases require item-based analysis with Cronbach's α.
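Applying the KR-20 formula to the six-item dataset from the split-half example above (a sketch; note it follows the article's formula, with the $n-1$ sample variance in the denominator):

```python
import numpy as np

# Same 6 subjects x 6 dichotomous items as the split-half example
Q = np.array([[1, 0, 0, 1, 1, 0],
              [1, 1, 0, 1, 0, 1],
              [1, 1, 1, 1, 1, 0],
              [1, 0, 0, 0, 1, 0],
              [1, 1, 1, 1, 0, 0],
              [0, 0, 0, 0, 1, 0]], dtype=float)
n, K = Q.shape

p = Q.mean(axis=0)        # proportion of correct responses per item
q = 1 - p                 # proportion of incorrect responses per item
X = Q.sum(axis=1)         # total score per subject
var_X = X.var(ddof=1)     # sample variance (n-1 denominator)

kr20 = K/(K - 1) * (1 - (p*q).sum() / var_X)
print(round(kr20, 4))     # 0.5385 for this data
```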
====Standard Error of Measurement (SEM)====
The greater the reliability of the test, the smaller the SEM:

$$SEM=S\sqrt{1-r_{xx’}},$$
where $r_{xx’}$ is the correlation between two instances of the measurement under identical conditions, and $S$ is the total standard deviation.
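For example, with hypothetical values (total standard deviation $S=10$ and reliability $r_{xx'}=0.91$, both made up for illustration):

```python
import math

S = 10.0      # total standard deviation (hypothetical)
r_xx = 0.91   # reliability estimate, e.g., Cronbach's alpha (hypothetical)

sem = S * math.sqrt(1 - r_xx)  # standard error of measurement
print(round(sem, 6))           # 3.0 -- higher reliability => smaller SEM
```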
===Applications===

* [http://link.springer.com/article/10.1007/s10869-005-8262-4 This article] explores the internal validity and reliability of Kolb’s revised Learning Style Inventory (LSI) in a sample of 221 graduate and undergraduate business students. It also reviews research on the LSI and studies the implications of conducting factor analysis using ipsative data (a type of data where respondents compare two or more desirable options and pick the one that is most preferred, sometimes called a "forced choice" scale). Experiential learning theory is presented and the concept of learning styles is explained. The paper largely supports prior research on the internal reliability of the scales.

* [https://scholarworks.iupui.edu/bitstream/handle/1805/344/Gliem%20&%20Gliem.pdf?s This article] shows why single-item questions pertaining to a construct are not reliable and should not be used in drawing conclusions. It compares the reliability of a summated, multi-item scale versus a single-item question and shows how unreliable a single item is, making it inappropriate to draw inferences from the analysis of a single-item question used to measure a construct.
===Software===

* SOCR Cronbach's alpha calculator webapp (coming up) ...

* '''In R:''' using [http://cran.r-project.org/web/packages/psy/psy.pdf the ''psy'' package] and its psychometric dataset (expsy), which is a [http://www.r-tutor.com/r-introduction/data-frame data frame] with 30 rows and 16 columns with missing data, where it1-it10 correspond to the ratings of 30 patients on a 10-item scale, r1, r2, r3 to the ratings of item 1 by 3 different clinicians of the same 30 patients, and rb1, rb2, rb3 to the binary transformations of r1, r2, r3 (1 or 2 -> 0; 3 or 4 -> 1).
cronbach(v1)  ## v1 is an n*p matrix or data frame with n subjects and p items
## This function computes Cronbach’s reliability coefficient alpha.
## The coefficient may be applied to a series of items aggregated in a single score.
## It estimates reliability in the framework of the domain sampling model.
An example to calculate Cronbach’s alpha:

library(psy)
data(expsy)
cronbach(expsy[,1:10])
## this selects columns 1 to 10 and calculates the Cronbach’s alpha value

$sample.size
[1] 27
$number.of.items
[1] 10
$alpha
[1] 0.1762655
## not good, because item 2 is reversed (1 is high and 4 is low)
cronbach(cbind(expsy[,c(1,3:10)],-1*expsy[,2]))
## this selects column 1 and columns 3 to 10, adds in the reversed column 2,
## and then calculates the Cronbach’s alpha value for the revised data
$sample.size
[1] 27
$number.of.items
[1] 10
$alpha
[1] 0.3752657
## better: obtain a 95% confidence interval
datafile <- cbind(expsy[,c(1,3:10)],-1*expsy[,2])
## extract the revised data into a new dataset named ‘datafile’
library(boot)
cronbach.boot <- function(data,x) {cronbach(data[x,])[[3]]}
res <- boot(datafile,cronbach.boot,1000)
res
Call:
boot(data = datafile, statistic = cronbach.boot, R = 1000)
Bootstrap Statistics :
    original      bias    std. error
t1* 0.3752657 -0.06104997  0.2372292
quantile(res$t,c(0.025,0.975))
## this calculates the 2.5% and 97.5% quantiles to form the 95% confidence interval of Cronbach’s alpha
      2.5%      97.5%
-0.2987214  0.6330491

boot.ci(res,type="bca")
## two-sided bootstrapped confidence interval of Cronbach’s alpha
## adjusted bootstrap percentile (BCa) confidence interval (better)
BOOTSTRAP CONFIDENCE INTERVAL CALCULATIONS
Based on 1000 bootstrap replicates

CALL :
boot.ci(boot.out = res, type = "bca")

Intervals :
Level      BCa
95%  (-0.1514,  0.6668 )
Calculations and Intervals on Original Scale

The [http://cran.r-project.org/web/packages/coefficientalpha/coefficientalpha.pdf coefficientalpha R package] provides alternative methods for computing Cronbach's alpha coefficient in the presence of missing data and for non-normal data. It also reports robust standard error and confidence interval estimates for alpha.
===Cronbach's $\alpha$ calculations===
The table below illustrates the setting and core calculations involved in computing Cronbach's $\alpha$.
  
<center>
{| class="wikitable" style="text-align:center" border="1"
|-
! rowspan="2" |Subjects!! colspan="4" |Items/Questions Part of the Assessment Instrument!! rowspan="2" |Total Score per Subject
|-
!$Q_1$!!$Q_2$!!...!!$Q_k$
|-
|$S_1$||$Y_{1,1}$||$Y_{1,2}$||...||$Y_{1,k}$||$X_1=\sum_{j=1}^k{Y_{1,j}}$
|-
|$S_2$||$Y_{2,1}$||$Y_{2,2}$||...||$Y_{2,k}$||$X_2=\sum_{j=1}^k{Y_{2,j}}$
|-
|...||...||...||...||...||...
|-
|$S_n$||$Y_{n,1}$||$Y_{n,2}$||...||$Y_{n,k}$||$X_n=\sum_{j=1}^k{Y_{n,j}}$
|-
|Variance per Item||$\sigma_{Y_{.,1}}^2=\frac{1}{n-1}\sum_{i=1}^n{(Y_{i,1}-\bar{Y}_{.,1})^2}$||$\sigma_{Y_{.,2}}^2=\frac{1}{n-1}\sum_{i=1}^n{(Y_{i,2}-\bar{Y}_{.,2})^2}$||...||$\sigma_{Y_{.,k}}^2=\frac{1}{n-1}\sum_{i=1}^n{(Y_{i,k}-\bar{Y}_{.,k})^2}$||$\sigma_X^2=\frac{1}{n-1}\sum_{i=1}^n{(X_i-\bar{X})^2}$
|}
</center>
===Cronbach's $\alpha$ inference===
Cronbach's $\alpha$ coefficient is a point estimate of the reliability. Its standard error is important for constructing an interval estimate of its true value and for obtaining statistical inference about its significance. There are parametric and non-parametric methods to estimate the variance of Cronbach's $\alpha$, $V(\alpha)$; see [http://www.researchgate.net/profile/Michail_Tsagris/publication/267097800_Confidence_intervals_for_Cronbachs_reliability_coefficient/links/544568eb0cf2d62c304d7f70.pdf this paper (Confidence intervals for Cronbach’s reliability coefficient)].

[http://link.springer.com/article/10.1007/BF02296146 Cronbach’s alpha has a known distribution], which allows us to compute its variance. Thus, we can compute a confidence interval for $\alpha$ and make inference (e.g., $H_o: \alpha=\alpha_o$ vs. $H_a: \alpha \not= \alpha_o$).

* '''Inference''': We can obtain parametric (using the Pearson correlation matrix) or non-parametric (using the Spearman correlation matrix) confidence intervals (CIs) for $\alpha$. Note that Cronbach's alpha may be appropriate for continuous numeric data types; if we have ordinal data, $\alpha$ may underestimate the true instrument reliability. For ordinal data, [http://www.pareonline.net/getvn.asp?v=17&n=3 Zumbo's ordinal alpha or ordinal omega coefficients] may be more appropriate. These estimators employ a polychoric correlation matrix under the assumption of latent multivariate normality. For [http://www.researchgate.net/profile/Michail_Tsagris/publication/267097800_Confidence_intervals_for_Cronbachs_reliability_coefficient/links/544568eb0cf2d62c304d7f70.pdf non-parametric CIs we can use the Spearman correlation matrix (based on data ranking), and for parametric CIs, the raw data and the Pearson correlation matrix].
* '''Confidence Intervals''': the Cronbach’s $\alpha$ reliability coefficient is defined as:
$$\alpha=\frac{N}{N-1}
\left ( 1-\frac{\sum_{j=1}^N{V(Y_j)}}
              {V\left ( \sum_{j=1}^N{Y_j} \right )}
\right ),$$
: where $Y_j$ represents the $j^{th}$ variable $Y$ (the $j^{th}$ item in the $Y$ questionnaire), and $V$ is the variance.
: The estimated variance of $\hat{\alpha}$ is:
$$ \hat{\sigma}^2_{\hat{\alpha}} = V(\hat{\alpha})=d\frac{N^2}{k(N-1)^2},$$

: where $N$ is the number of items, $k$ is the sample size (number of completed questionnaires), $d=\frac{2}{(j^tSj)^3} \left ( (j^tSj) \left ( tr(S^2) +tr^2(S) \right ) - 2\,tr(S)(j^tS^2j)\right )$, $S$ is the unbiased sample estimate of the true covariance matrix of the question items ($\Sigma$), $tr$ is the trace of a matrix, and $j$ is an $N$-dimensional vector of ones.
: Thus [[SMHS_HypothesisTesting#Testing_a_claim_about_a_mean_with_large_sample_size |the $(1-\gamma)100\%$ confidence interval for $\alpha$]] is:
$$\left ( \hat{\alpha} - z_{\frac{\gamma}{2}}\hat{\sigma}_{\hat{\alpha}},\ \hat{\alpha} + z_{\frac{\gamma}{2}}\hat{\sigma}_{\hat{\alpha}} \right ), $$
: where $z_{\frac{\gamma}{2}}$ is the [[EBook#The_Standard_Normal_Distribution |normal distribution critical value]] corresponding to false-positive error rate $\gamma$.
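The variance formula and the confidence interval above can be sketched in Python (illustrative, made-up item data; $z \approx 1.96$ for a 95% interval, i.e., $\gamma=0.05$):

```python
import numpy as np

# Hypothetical item responses: k=5 subjects (rows) x N=3 items (columns)
Y = np.array([[3, 4, 3],
              [2, 2, 3],
              [4, 4, 4],
              [1, 2, 1],
              [3, 3, 4]], dtype=float)
k, N = Y.shape

S = np.cov(Y, rowvar=False)   # unbiased sample covariance matrix of the items
j = np.ones(N)
jSj = j @ S @ j               # j^t S j (= variance of the total score)

# alpha-hat: sum of item variances is tr(S), total-score variance is j^t S j
alpha_hat = N/(N - 1) * (1 - np.trace(S)/jSj)

# d = 2/(j^t S j)^3 * ( (j^t S j)(tr(S^2) + tr^2(S)) - 2 tr(S) (j^t S^2 j) )
S2 = S @ S
d = 2/jSj**3 * (jSj*(np.trace(S2) + np.trace(S)**2)
                - 2*np.trace(S)*(j @ S2 @ j))
var_alpha = d * N**2 / (k*(N - 1)**2)

z = 1.959964                  # N(0,1) critical value for gamma = 0.05
half = z*np.sqrt(var_alpha)
print(alpha_hat, (alpha_hat - half, alpha_hat + half))
```

With real questionnaire data, $k$ would be much larger; with only 5 subjects the normal approximation is rough, so treat the interval as illustrative only.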
===Problems===
* Use the [[SOCR_TurkiyeStudentEvalData| Turkiye Student Course Evaluation survey (N=5,000)]] to compute the ICC and Cronbach's alpha.

===References===
* [http://en.wikipedia.org/wiki/Cronbach's_alpha Cronbach's alpha (Wikipedia)]
* [http://en.wikipedia.org/wiki/Kuder–Richardson_Formula_20 Kuder-Richardson Formula 20 (Wikipedia)]
  
  

Latest revision as of 13:52, 23 November 2015
