#rstats #stats #IRT #scaledevelopment
Thinking about scale reliability lately.
An obvious thing to remember is that scale reliability (omega) caps the correlations a scale can meaningfully (!) have with anything else: an observed correlation between two measures can't exceed sqrt(rel_a * rel_b), the square root of the product of their reliabilities.
This is highly practical, because, in theory (I haven't fleshed this out yet), this enables us to test construct validity pretty well by attenuating correlations with reliability. This has been done before, albeit (imho) slightly clumsily, by Kristof (1983), who used a Spearman-Brown-style attenuation approach based on an ideal correlation to establish thresholds that an observed correlation must be tested against to conclude discriminant or convergent validity. The formula was threshold = rho_ideal * sqrt(rel_a * rel_b). This is less than ideal, because what counts as an "ideal" correlation is up to the researcher.
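In R, that thresholding is a one-liner (a quick sketch; the function name is mine, not Kristof's):

```r
# Attenuate an "ideal" correlation by the two scales' reliabilities to get
# the value an observed correlation would be tested against.
kristof_threshold <- function(rho_ideal, rel_a, rel_b) {
  rho_ideal * sqrt(rel_a * rel_b)
}

# E.g., omegas of .85 and .80 and an "ideal" convergent correlation of .70:
kristof_threshold(rho_ideal = .70, rel_a = .85, rel_b = .80)
#> [1] 0.5772348
```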
However, keeping the above fact in mind, we can do better: kill the thresholds and turn the whole thing into a continuous measure of relatedness, one whose ceiling is set by the two reliabilities.
Let's call this Kappa = rho_real/sqrt(rel_a*rel_b), where rho_real is the actually observed correlation (as opposed to Kristof's rho_ideal).
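As a minimal R sketch (the function name is mine); note that algebraically this is just the classic disattenuated correlation, i.e. Spearman's correction for attenuation:

```r
# Kappa as defined above: the observed correlation divided by the geometric
# mean of the two reliabilities (= the classic disattenuation formula).
kappa_relatedness <- function(rho_real, rel_a, rel_b) {
  rho_real / sqrt(rel_a * rel_b)
}

kappa_relatedness(rho_real = .45, rel_a = .85, rel_b = .80)
#> [1] 0.5457058
```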
This should (again, I haven't tested this yet; there's a small simulation sketch after the list) give you two types of information right away:
1) If it's > 1, then your scale is off: the observed correlation is higher than the reliabilities allow, so either a reliability is underestimated or something is wrong with the measurement.
2) If it's less than 1, it informs about construct validity: below ~0.5 it points towards discriminant validity, above that towards convergent validity. The more extreme the score, the higher the confidence.
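If anyone wants to poke at this, here's a quick simulation sketch under classical-test-theory assumptions (continuous normal scores, independent errors, reliabilities treated as known rather than estimated):

```r
# Two latent variables correlating at .60; measurement error is added so the
# observed scores have reliabilities of .85 and .80. Kappa should recover
# ~.60, i.e. land below 1 and on the "convergent" side of the .5 heuristic.
set.seed(1)
n        <- 1e5
rho_true <- .60
rel_a    <- .85
rel_b    <- .80

latent_a <- rnorm(n)
latent_b <- rho_true * latent_a + sqrt(1 - rho_true^2) * rnorm(n)

# With unit true-score variance, an error variance of 1/rel - 1 yields a
# total variance of 1/rel, hence a reliability of exactly rel.
obs_a <- latent_a + rnorm(n, sd = sqrt(1 / rel_a - 1))
obs_b <- latent_b + rnorm(n, sd = sqrt(1 / rel_b - 1))

rho_real <- cor(obs_a, obs_b)
rho_real / sqrt(rel_a * rel_b)  # Kappa; should come out close to .60
```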
Please correct me if I'm goofing somewhere here.